
How to Guides

Welcome to our comprehensive How-To Guide for using KubeOps. Whether you're a beginner aiming to understand the basics or an experienced user looking to fine-tune your skills, this guide provides detailed step-by-step instructions on how to navigate and use all the features of KubeOps effectively.

In the following sections, you will find everything from initial setup and configuration to advanced tips and tricks that will help you get the most out of the software. Our aim is to help you become proficient with KubeOps, enhancing both your productivity and your user experience.

Let's get started on your journey to mastering KubeOps!

1 - Ingress Configuration

Here is a brief overview of how you can configure your ingress manually.

Manual configuration of the Nginx-Ingress-Controller

Out of the box, the Ingress Controller package is not fully configured. To make full use of the Ingress capabilities of the cluster, you need to manually update some settings of the corresponding service.

Locating the service

The service in question is called “ingress-nginx-controller” and can be found in the same namespace as the ingress package itself. To locate the service across all namespaces, you could use the following command.

kubectl get service -A | grep ingress-nginx-controller

This command should return two services, “ingress-nginx-controller” and “ingress-nginx-controller-admission”, though only the first one needs to be adjusted further.
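
For orientation, the output looks roughly like this (namespace, type, IPs, ports and ages are illustrative and will differ in your cluster):

NAMESPACE   NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubeops     ingress-nginx-controller             ClusterIP   10.96.145.123   <none>        80/TCP,443/TCP   5d
kubeops     ingress-nginx-controller-admission   ClusterIP   10.96.201.45    <none>        443/TCP          5d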

Setting the Ingress-Controller service to type NodePort

To edit the service, you can use the following command, although the actual namespace may be different. This will change the service type to NodePort.

kubectl patch service ingress-nginx-controller -n kubeops -p '{"spec":{"type":"NodePort"}}'

Kubernetes will now automatically assign unused port numbers as nodePorts to allow HTTP and HTTPS connections to the service. These can be retrieved by running the same command used to locate the service. Alternatively, you can use the following command, which sets the port numbers 30080 and 30443 for the respective protocols. If you do so, make sure that these port numbers are not already in use by another NodePort service.

kubectl patch service ingress-nginx-controller -n kubeops --type=json -p '[{"op":"replace","path":"/spec/type","value":"NodePort"}, {"op":"add","path":"/spec/ports/0/nodePort","value":30080}, {"op":"add","path":"/spec/ports/1/nodePort","value":30443}]'
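
To verify which nodePorts were actually assigned, a jsonpath query such as the following can be used (the namespace kubeops is taken from the examples above; adjust it to your setup):

kubectl get service ingress-nginx-controller -n kubeops -o jsonpath='{range .spec.ports[*]}{.name}{": "}{.nodePort}{"\n"}{end}'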

Configuring external IPs

If you have access to external IPs that route to one or more cluster nodes, you can expose your Kubernetes services of any type through these addresses. The command below shows how to add an external IP address to the service, using the example value “192.168.0.1”. Keep in mind that this value has to be changed to fit your network settings.

kubectl patch service ingress-nginx-controller -n kubeops -p '{"spec":{"externalIPs":["192.168.0.1"]}}'
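
As a quick check, the controller should now answer HTTP requests on the external IP; as long as no Ingress resources are defined, a 404 response from the default backend is expected. Replace 192.168.0.1 with your own address:

curl -i http://192.168.0.1/
curl -ik https://192.168.0.1/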

2 - Create Cluster

Here is a brief overview of how you can create a simple functional cluster, including prerequisites and step-by-step instructions.

How to create a working cluster?

Pre-requisites

  • maintenance packages installed
  • working network connection
  • LIMAROOT environment variable set

Steps

  • create yaml file
  • create cluster with multiple nodes
  • add nodes to created cluster
  • delete nodes when needed

Once you have completed the KubeOps installation, you are ready to dive into the KubeOps-Platform.

How to use LIMA

Downloaded all maintenance packages? If yes, then you are ready to use LIMA for managing your Kubernetes clusters!

In the following sections we will walk you through a quick cluster setup and adding nodes.

So the first thing to do is to create a YAML file that contains the specifications of your cluster. Customize the file below according to your downloaded maintenance packages, e.g. the parameters kubernetesVersion, firewall, containerRuntime. Also adjust the other parameters like masterPassword, masterHost, apiEndpoint to your environment.

createCluster.yaml

apiVersion: lima/clusterconfig/v1alpha2
spec:
  clusterName: ExampleClusterName
  masterUser: root
  masterPassword: "myPassword"
  masterHost: 10.2.1.11
  kubernetesVersion: 1.22.2
  registry: registry.preprod.kubernative.net/kubeops
  useInsecureRegistry: false
  ignoreFirewallError: false
  firewall: firewalld
  apiEndpoint: 10.2.1.11:6443
  serviceSubnet: 192.168.128.0/20
  podSubnet: 192.168.144.0/20
  debug: true
  logLevel: v
  systemCpu: 100m
  systemMemory: 100Mi
  sudo: false
  containerRuntime: crio
  pluginNetwork:
    type: weave
    parameters:
      weavePassword: re4llyS7ron6P4ssw0rd
  auditLog: false
  serial: 1
  seLinuxSupport: true

Most of these parameters are optional and can be left out. If you want to know more about each parameter, please refer to our Full Documentation.


Set up a single node cluster

To set up a single node cluster we need our createCluster.yaml file from above.
Run the create cluster command on the admin node to create a cluster with one node.

lima create cluster -f createCluster.yaml

Done! LIMA is setting up your Kubernetes cluster. In a few minutes you will have a regular single master cluster.

Once LIMA has finished successfully, you can check your Kubernetes single node cluster with kubectl get nodes.
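
The output should look roughly like this (node name, age and version are illustrative):

NAME              STATUS   ROLES                  AGE   VERSION
cluster1master1   Ready    control-plane,master   5m    v1.22.2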

It looks very alone and sad, right? Jump to the next section to add some friends to your cluster!


Optional step

The master node which you used to set up your cluster is only suitable for an example installation or for testing. To use this node for production workloads, remove the taint from the master node.

kubectl taint nodes --all node-role.kubernetes.io/master-
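
To confirm that the taint has been removed, list the taints of your nodes; the master should now report <none>:

kubectl describe nodes | grep Taints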

Add nodes to your cluster

Let’s give your single node cluster some friends. What we need for this is another YAML file. We can call the YAML file whatever we want - we call it addNode.yaml.

addNode.yaml

apiVersion: lima/nodeconfig/v1alpha1
clusterName: ExampleClusterName
spec: 
  masters:
  - host: 10.2.1.12
    user: root
    password: "myPassword"
  workers:
  - host: 10.2.1.13 #IP-address of the node you want to add
    user: root
    password: "myPassword"

We do not need to pull any other maintenance packages; we already did that and are using the same specifications as for our single node cluster. The only thing left to do is to use the create nodes command:

lima create nodes -f addNode.yaml

Done! LIMA adds the nodes to your single node cluster. After LIMA has finished, check the state of your Kubernetes cluster again with kubectl get nodes. Your master node should not be alone anymore!

3 - Importing the ELRepo Secure Boot key

This guide explains how to prepare a system with Secure Boot for using third-party kernel modules by importing the ELRepo Secure Boot key, ensuring compatibility and secure module integration.

KubeOps supports inter-node traffic encryption through the use of the calico-wireguard extension. For this to work correctly, the wireguard kernel module needs to be installed on every node in the cluster.

KubeOps distributes and installs the required software automatically. However, since these are third-party modules signed by the ELRepo community project, system administrators must import the ELRepo Secure Boot public key into their MOK (Machine Owner Key) list in order to use them on a system with Secure Boot enabled.

This only applies to RHEL 8 machines.

Download the key

The Secure Boot key must be located on every node of the cluster. It can be downloaded directly with the following command:

curl -O https://elrepo.org/SECURE-BOOT-KEY-elrepo.org.der

If you are working in an air-gapped environment, you might need to distribute the file to all your nodes manually.
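
A minimal sketch for such a distribution, assuming SSH access as root and placeholder node IPs, could look like this:

for node in 10.2.1.11 10.2.1.12 10.2.1.13; do
  scp SECURE-BOOT-KEY-elrepo.org.der root@$node:/root/
done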

Import the key in the MOK list

With the key in place, install it by using this command:

mokutil --import SECURE-BOOT-KEY-elrepo.org.der

When prompted, enter a password of your choice. This password will be used when enrolling the key into the MOK list.

Reboot the system and enroll the key

Upon rebooting, the “Shim UEFI key management” screen appears. You will need to press any key within 10 seconds to proceed.

Enroll the key by following these steps:
- Select Enroll MOK.
- Select View key 0 to inspect the public key and other important information. Press Esc when you are done.
- Select Continue and enter the previously created password.
- When asked to enroll the keys, select OK.
- Select Reboot and restart the system.

The key has now been added to the MOK list and enrolled.
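
To double-check the enrollment, you can test the key again with mokutil; it should now be reported as already enrolled (the exact wording may vary by shim version):

mokutil --test-key SECURE-BOOT-KEY-elrepo.org.der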

4 - Install Maintenance Packages

This guide provides an overview of installing essential maintenance packages for KubeOps clusters. It covers how to pull and manage various Kubernetes tools, dependencies, and Container Runtime Interface (CRI) packages to set up and maintain your cluster. Ensure compatibility between versions to successfully deploy your first Kubernetes environment.

Installing the essential Maintenance Packages

KubeOps provides packages for the supported Kubernetes tools. These maintenance packages help you update the Kubernetes tools on your clusters, along with their dependencies, to the desired versions.

It is necessary to install the required maintenance packages to create your first Kubernetes cluster. The packages are available on the KubeOps hub.

So let’s get started!

Note: Be sure you have the KOSI version supported by the installed KubeOps version, or you cannot pull any maintenance packages!

Commands to install a package

The following are the most common commands used on the admin node to get and install any maintenance package.

  1. Use the command get maintenance to list all available maintenance packages.

     lima get maintenance
    

    This will display a list of all the available maintenance packages.

Example :
| SOFTWARE     | VERSION | STATUS     | SOFTWAREPACKAGE         | TYPE    |
| ------------ | ------- | ---------- | ----------------------- | ------- |
| Kubernetes   | 1.24.8  | available  | lima/kubernetes:1.24.8  | upgrade |
| iptablesEL8  | 1.8.4   | available  | lima/iptablesel8:1.8.4  | update  |
| firewalldEL8 | 0.8.2   | downloaded | lima/firewalldel8:0.8.2 | update  |

Please observe the following important columns in this table and download the correct packages based on them.

| Name | Description |
| ---- | ----------- |
| SOFTWARE | The name of the software which is required for your cluster. |
| VERSION | The software version. Select the correct version based on your Kubernetes and KubeOps version. |
| SOFTWAREPACKAGE | The unique name of the maintenance package. Use this to pull the package on your machine. |
| STATUS | Can be any of the following: |
| | - available: package is remotely available |
| | - not found: package not found |
| | - downloaded: the package is locally and remotely available |
| | - only local: package is locally available |
| | - unknown: unknown package |
  2. Use the command pull maintenance to pull/download the package on your machine.

    lima pull maintenance <SOFTWAREPACKAGE>
    

    It is possible to pull more than one package with a single pull invocation.
    For example:

    lima pull maintenance lima/kubernetes:1.23.5 lima/dockerEL7:18.09.1
    

List of Maintenance Packages

The following are the essential maintenance packages to be pulled. Use the commands mentioned above to install the desired packages.

1. Kubernetes

The first step is to choose a Kubernetes version and to pull its available package. LIMA currently supports the following Kubernetes versions:

| 1.26.x | 1.27.x  | 1.28.x  | 1.29.x  | 1.30.x | 1.31.x | 1.32.x |
| ------ | ------- | ------- | ------- | ------ | ------ | ------ |
| 1.26.3 | 1.27.1  | 1.28.0  | 1.29.0  | 1.30.0 | 1.31.2 | 1.32.0 |
| 1.26.4 | 1.27.2  | 1.28.1  | 1.29.1  | 1.30.1 | 1.31.4 |        |
| 1.26.5 | 1.27.3  | 1.28.2  | 1.29.2  | 1.30.6 |        |        |
| 1.26.6 | 1.27.4  | 1.28.3  | 1.29.3  | 1.30.8 |        |        |
| 1.26.7 | 1.27.5  | 1.28.4  | 1.29.4  |        |        |        |
| 1.26.8 | 1.27.6  | 1.28.5  | 1.29.5  |        |        |        |
| 1.26.9 | 1.27.7  | 1.28.6  | 1.29.10 |        |        |        |
|        | 1.27.8  | 1.28.7  | 1.29.12 |        |        |        |
|        | 1.27.9  | 1.28.8  |         |        |        |        |
|        | 1.27.10 | 1.28.9  |         |        |        |        |
|        |         | 1.28.10 |         |        |        |        |

The following packages are available for the supported Kubernetes versions.

| Kubernetes version | Available packages |
| ------------------ | ------------------ |
| 1.26.x | kubernetes-1.26.x |
| 1.27.x | kubernetes-1.27.x |
| 1.28.x | kubernetes-1.28.x |
| 1.29.x | kubernetes-1.29.x |
| 1.30.x | kubernetes-1.30.x |
| 1.31.x | kubernetes-1.31.x |
| 1.32.x | kubernetes-1.32.x |

2. Install Kubectl

To install kubectl you won't need to pull any other package. The Kubernetes package pulled in the step above already contains the kubectl installation file.

In the following example the downloaded package is kubernetes-1.30.1.

dnf install $LIMAROOT/packages/kubernetes-1.30.1/kubectl-1.30.1-150500.1.1.x86_64.rpm
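
Afterwards you can verify the installation; the reported client version should match the installed package:

kubectl version --client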

3. Kubernetes Dependencies

The next step is to pull the Kubernetes dependencies:

| OS | Available packages |
| ------ | -------------------------- |
| RHEL 8 | kubeDependencies-EL8-1.0.4 |
| RHEL 8 | kubeDependencies-EL8-1.0.6 |

4. CRIs

Choose your CRI and pull the available packages:

| OS | CRI | Available packages |
| ------ | ---------- | ------------------------------------------------------------- |
| RHEL 8 | docker | dockerEL8-20.10.2 |
| | containerd | containerdEL8-1.4.3 |
| | CRI-O | crioEL8-x.xx.x, crioEL8-dependencies-1.0.1, podmanEL8-18.09.1 |

Note: CRI-O packages depend on the chosen Kubernetes version. Choose the CRI-O package which matches the chosen Kubernetes version.

  • E.g. kubernetes-1.23.5 requires crioEL7-1.23.5
  • E.g. kubernetes-1.24.8 requires crioEL7-1.24.8

5. Firewall

Choose your firewall and pull the available packages:

| OS | Firewall | Available packages |
| ------ | --------- | ------------------ |
| RHEL 8 | iptables | iptablesEL8-1.8.4 |
| | firewalld | firewalldEL8-0.9.3 |

Example

Assuming a setup with RHEL 8, CRI-O and Kubernetes 1.22.2 in the requested version, the following maintenance packages need to be installed (an example pull command follows the list):

  • kubernetes-1.22.2
  • kubeDependencies-EL8-1.0.2
  • crioEL8-1.22.2
  • crioEL8-dependencies-1.0.1
  • podmanEL8-18.09.1
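
These packages could be pulled in a single invocation. The exact SOFTWAREPACKAGE names below are illustrative; take the real names from the output of lima get maintenance:

lima pull maintenance lima/kubernetes:1.22.2 lima/kubedependencies-el8:1.0.2 lima/crioel8:1.22.2 lima/crioel8-dependencies:1.0.1 lima/podmanel8:18.09.1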


5 - Upgrade KubeOps Software

This guide outlines the steps for upgrading the KubeOps software. It covers updating essential packages, configuring kubeopsctl.yaml, removing old versions, and installing new ones. It also provides instructions for upgrading other components like rook-ceph, harbor, opensearch, and monitoring tools by modifying the configuration file and applying the updates systematically.

Upgrading KubeOps Software

1. Update essential KubeOps Packages

Update kubeops setup

Before installing the KubeOps software, create a kubeopsctl.yaml file with the following parameters:

### General values for registry access ###
apiVersion: kubeops/kubeopsctl/alpha/v5  # mandatory
kubeOpsUser: "demo" # mandatory
kubeOpsUserPassword: "Password" # mandatory
kubeOpsUserMail: "demo@demo.net" # mandatory
imagePullRegistry: "registry.preprod.kubernative.net/lima" # mandatory, the registry from which the images for the cluster are pulled
localRegistry: false # mandatory, set to true if you use a local registry

After creating the kubeopsctl.yaml, place another file with the following content on your machine to update the software:

### Values for setup configuration ###
clusterName: "example" # mandatory
clusterUser: "root" # mandatory
kubernetesVersion: "1.28.2" # mandatory, check lima documentation
masterIP: 10.2.10.12 # mandatory
containerRuntime: "containerd" # mandatory

2. Remove old KubeOps software

If you want to remove the KubeOps software, it is recommended to use your package manager; for RHEL environments this is yum. To remove the KubeOps software with yum, use the following commands:

yum autoremove kosi
yum autoremove lima

3. Install new KubeOps software

Now, you can install the new software with yum.

sudo yum install <kosi-rpm>

4. Upgrade KubeOps software

To upgrade your KubeOps software, use the following command:

  kubeopsctl apply -f kubeopsctl.yaml

5. Maintain the old Deployment Information (optional)

After upgrading KOSI from 2.5 to 2.6, the deployment.yaml file has to be moved to the $KUBEOPSROOT directory if you want to keep old deployments.
Make sure the $KUBEOPSROOT variable is set.

  1. Set the $KUBEOPSROOT variable
echo 'export KUBEOPSROOT="$HOME/kubeops"' >> $HOME/.bashrc
source ~/.bashrc
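
You can check that the variable is available in your current shell:

echo $KUBEOPSROOT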

6. Update other software

1. Upgrade rook-ceph

In order to upgrade rook-ceph, go to your kubeopsctl.yaml file and change rook-ceph: false to rook-ceph: true.

After that, use the command below:

kubeopsctl apply -f kubeopsctl.yaml

2. Update harbor

To update Harbor, change your kubeopsctl.yaml file and set harbor: false to harbor: true.

Please set other applications to false before applying the kubeopsctl.yaml file.

3. Update opensearch

In order to update opensearch, change your kubeopsctl.yaml file and set opensearch: false to opensearch: true.

Please set other applications to false before applying the kubeopsctl.yaml file.

4. Update logstash

In order to update logstash, change your kubeopsctl.yaml file and set logstash: false to logstash: true.

Please set other applications to false before applying the kubeopsctl.yaml file.

5. Update filebeat

In order to update filebeat, change your kubeopsctl.yaml file and set filebeat: false to filebeat: true.

Please set other applications to false before applying the kubeopsctl.yaml file.

6. Update prometheus

In order to update prometheus, change your kubeopsctl.yaml file and set prometheus: false to prometheus: true.

Please set other applications to false before applying the kubeopsctl.yaml file.

7. Update opa

In order to update opa, change your kubeopsctl.yaml file and set opa: false to opa: true. Please set other applications to false before applying the kubeopsctl.yaml file.

6 - Update postgres resources of harbor

Update postgres resources of harbor.

How to Update Harbor Advanced Parameters Using kubeopsctl

Prerequisites

Before proceeding, ensure you have:

  • kubeopsctl installed and configured.
  • Access to your Kubernetes cluster.
  • The necessary permissions to apply changes to the Harbor deployment.

Understanding advancedParameters

Harbor allows advanced configuration via the harborValues.advancedParameters section. This section provides fine-grained control over various components, such as PostgreSQL, Redis, and logLevel, by defining resource allocations and other configurations.

Example Structure of advancedParameters

The advancedParameters section in kubeopsctl.yaml follows this structure:

harborValues:
  advancedParameters:
    postgres:
      resources:
        requests:
          memory: "512Mi"  # Minimum memory requested by PostgreSQL
          cpu: "200m"       # Minimum CPU requested by PostgreSQL
        limits:
          memory: "1Gi"     # Maximum memory PostgreSQL can use
          cpu: "500m"       # Maximum CPU PostgreSQL can use
    
    internal:
      redis:
        resources:
          requests:
            memory: "256Mi"  # Minimum memory requested by Redis
            cpu: "100m"      # Minimum CPU requested by Redis
          limits:
            memory: "512Mi"  # Maximum CPU Redis can use
            cpu: "300m"      # Maximum CPU Redis can use

    logLevel: "debug"  # Adjust logging level for debugging purposes
  • postgres: Defines resource limits for the PostgreSQL database.
  • redis: Configures Redis instance resources.
  • logLevel: Allows setting the logging level.

Modify these values based on your cluster’s available resources and workload requirements.

Step 1: Update Your kubeopsctl.yaml Configuration

Ensure that your kubeopsctl.yaml file includes the harborValues.advancedParameters section. If necessary, update or add parameters to customize your Harbor deployment.

Step 2: Apply the Configuration with kubeopsctl

Once your kubeopsctl.yaml file is ready, apply the changes using the following command:

kubeopsctl apply -f kubeopsctl.yaml

This command updates the advanced parameters for the Harbor deployment.

Step 3: Verify the Changes

To confirm that the new configuration has been applied, run:

kubectl get pod -n <your-harbor-namespace> -o yaml | grep -A6 -i 'resources:'

Replace <your-harbor-namespace> with the namespace where Harbor is deployed.

Alternatively, describe any component to check the applied settings:

kubectl describe pod <component-pod-name> -n <your-harbor-namespace>

Conclusion

Using kubeopsctl, you can efficiently update various advanced parameters in your Harbor deployment. The advancedParameters section allows fine-tuned configuration for multiple components, ensuring optimal resource usage and performance.

If you encounter any issues, check the logs with:

kubectl logs -n <your-harbor-namespace> <component-pod-name>

7 - Use Kubeopsctl

kubeopsctl is a KubeOps tool that simplifies cluster management by allowing users to define the desired cluster state in a YAML file. After configuring the cluster’s setup, the changes can be easily applied using the apply command, making it straightforward to manage updates and configurations.

KubeOpsctl

kubeopsctl is a new KubeOps tool which can be used to manage a cluster and its state easily. You just describe the desired cluster state and kubeopsctl creates a cluster with that state.

Using KubeOpsCtl

Using this feature is as easy as configuring the cluster YAML file with the desired cluster state and details and then using the apply command. Below are the detailed steps.

1. Configure Cluster/Nodes/Software using a yaml file

You need to have a cluster definition file which describes the different aspects of your cluster. This file describes only one cluster.

Full yaml syntax

apiVersion: kubeops/kubeopsctl/alpha/v5  # mandatory
kubeOpsUser: "demo" # mandatory,  change to your username
kubeOpsUserPassword: "Password" # mandatory,  change to your password
kubeOpsUserMail: "demo@demo.net" # change to your email
imagePullRegistry: "registry.preprod.kubernative.net/kubeops" # mandatory
localRegistry: false # mandatory
### Values for setup configuration ###
clusterName: "testkubeopsctl"  # mandatory
clusterUser: "myuser"  # mandatory
kubernetesVersion: "1.28.2" # mandatory, check lima documentation
masterIP: 10.2.10.31 # mandatory
# at least 3 masters and 3 workers are needed
zones:
  - name: zone1
    nodes:
      master:
        - name: cluster1master1
          ipAdress: 10.2.10.11
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
        - name: cluster1master2
          ipAdress: 10.2.10.12
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
      worker:
        - name: cluster1worker1
          ipAdress: 10.2.10.14
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
        - name: cluster1worker2
          ipAdress: 10.2.10.15
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
  - name: zone2
    nodes:
      master:
        - name: cluster1master3
          ipAdress: 10.2.10.13
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: drained
          kubeversion: 1.28.2  
      worker:
        - name: cluster1worker1
          ipAdress: 10.2.10.16
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2

# set to true if you want to install it into your cluster
rook-ceph: false # mandatory
harbor: false # mandatory
opensearch: false # mandatory
opensearch-dashboards: false # mandatory
logstash: false # mandatory
filebeat: false # mandatory
prometheus: false # mandatory
opa: false # mandatory
kubeops-dashboard: false # mandatory
certman: false # mandatory
ingress: false # mandatory
keycloak: false # mandatory

###Values for Rook-Ceph###
rookValues:
  namespace: kubeops
  nodePort: 31931 # optional, default: 31931
  cluster:
    storage:
      # Will only be used if useAllDevices is set to false and will be ignored if individual devices have been specified on a node.
      deviceFilter: "^sd[a-b]"
      # This setting can be used to store metadata on a different device. Only recommended if an additional metadata device is available.
      # Optional, will be overwritten by the corresponding node-level setting.
      config:
        metadataDevice: "sda"
      # Names of individual nodes in the cluster that should have their storage included.
      # Will only be used if useAllNodes is set to false.
      nodes:
        - name: "<ip-adress of node_1>"
          devices:
            - name: "sdb" 
        - name: "<ip-adress of node_2>"
          deviceFilter: "^sd[a-b]"
          config:
            metadataDevice: "sda" # optional
#-------------------------------------------------------------------------------------------------------------------------------
### Values for Postgres ###
postgrespass: "password" # mandatory, set password for harbor postgres access 
postgres:
  resources:
    requests:
      storage: 2Gi # mandatory, depending on storage capacity
#-------------------------------------------------------------------------------------------------------------------------------
### Values for Redis ###
redispass: "password" # mandatory set password for harbor redis access 
redis:
  resources:
    requests:
      storage: 2Gi # mandatory depending on storage capacity
#-------------------------------------------------------------------------------------------------------------------------------
### Values for Harbor deployment ###
## For detailed explanation of each key see: https://artifacthub.io/packages/helm/harbor/harbor/1.8.1#configuration ##
harborValues: 
  harborpass: "password" # mandatory: set password for harbor access 
  externalURL: https://10.2.10.13 # mandatory, the IP address from which harbor is accessible outside of the cluster
  nodePort: 30003
  harborPersistence:
    persistentVolumeClaim:
      registry:
        size: 40Gi # optional, default is 40Gi
        storageClass: "rook-cephfs" #optional, default is rook-cephfs
      jobservice:
        jobLog:
          size: 1Gi # optional, default is 1Gi
          storageClass: "rook-cephfs" #optional, default is rook-cephfs
      database:
        size: 1Gi # optional, default is 1Gi
        storageClass: "rook-cephfs" #optional, default is rook-cephfs
      redis:
        size: 1Gi # optional, default is 1Gi
        storageClass: "rook-cephfs" #optional, default is rook-cephfs
      trivy: 
        size: 5Gi # optional, default is 5Gi
        storageClass: "rook-cephfs" #optional, default is rook-cephfs
#--------------------------------------------------------------------------------------------------------------------------------------
###Values for filebeat deployment###
filebeatValues:
  namespace: kubeops # optional, default is kubeops   
#--------------------------------------------------------------------------------------------------------------------------------
###Values for Logstash deployment###
##For detailed explanation of each key see: https://github.com/elastic/helm-charts/releases/tag/v7.16.3###
logstashValues:
  namespace: kubeops
  volumeClaimTemplate:
    resources:
      requests:
        storage: 1Gi # mandatory, depending on storage capacity
#--------------------------------------------------------------------------------------------------------------------------------------
###Values for OpenSearch-Dashboards deployment###
##For detailed explanation of each key see: https://github.com/opensearch-project/helm-charts/tree/main/charts/opensearch-dashboards###
openSearchDashboardValues:
  namespace: kubeops
  nodePort: 30050
#--------------------------------------------------------------------------------------------------------------------------------
###Values for OpenSearch deployment###
##For detailed explanation of each key see: https://github.com/opensearch-project/helm-charts/tree/main/charts/opensearch###
openSearchValues:
  namespace: kubeops
  resources:
  persistence:
    size: 4Gi # mandatory
#--------------------------------------------------------------------------------------------------------------------------------
###Values for Prometheus deployment###
prometheusValues:
  prometheusResources:
    nodePort: 32090
#--------------------------------------------------------------------------------------------------------------------------------
###Values for OPA deployment###
opaValues:
  namespace: kubeops
#--------------------------------------------------------------------------------------------------------------------------------
###Values for KubeOps-Dashboard (Headlamp) deployment###
kubeOpsDashboardValues:
  service:
    nodePort: 30007
#--------------------------------------------------------------------------------------------------------------------------------
###Values for cert-manager deployment###
certmanValues:
  namespace: kubeops
  replicaCount: 3
  logLevel: 2
#--------------------------------------------------------------------------------------------------------------------------------
###Values for ingress-nginx deployment###
ingressValues:
  namespace: kubeops
apiVersion: kubeops/kubeopsctl/alpha/v5  # mandatory
kubeOpsUser: "demo" # mandatory,  change to your username
kubeOpsUserPassword: "Password" # mandatory,  change to your password
kubeOpsUserMail: "demo@demo.net" # change to your email
imagePullRegistry: "registry.preprod.kubernative.net/kubeops" # mandatory
localRegistry: false # mandatory
### Values for setup configuration ###
clusterName: "testkubeopsctl"  # mandatory
clusterUser: "myuser"  # mandatory
kubernetesVersion: "1.28.2" # mandatory, check lima documentation
#masterHost: optional if you have an hostname, default value in "masterIP"
masterIP: 10.2.10.31 # mandatory
firewall: "nftables" # mandatory, default "nftables"
pluginNetwork: "calico" # mandatory, default "nftables"
containerRuntime: "containerd" # mandatory, default "containerd"

These are parameters for the cluster creation and for the software used for the cluster creation, e.g. the container runtime for running the containers of the cluster. There are also parameters for the LIMA software (see the LIMA documentation for further explanation).

### Additional values for cluster configuration
useInsecureRegistry: false # optional, default is false
ignoreFirewallError: false # optional, default is false
serviceSubnet: 192.168.128.0/17 # optional, default "192.168.128.0/17"
podSubnet: 192.168.0.0/17 # optional, default "192.168.0.0/17"
debug: true # optional, default is true
logLevel: vvvvv # optional, default "vvvvv"
systemCpu: "1" # optional, default "1"
systemMemory: "2G" # optional, default "2G"
sudo: true # optional, default is true

Also important are the networking parameters, such as the subnets for the pods and services inside the Kubernetes cluster.

# at least 3 masters and 3 workers are needed
zones:
  - name: zone1
    nodes:
      master:
        - name: cluster1master1
          ipAdress: 10.2.10.11
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
        - name: cluster1master2
          ipAdress: 10.2.10.12
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
      worker:
        - name: cluster1worker1
          ipAdress: 10.2.10.14
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
        - name: cluster1worker2
          ipAdress: 10.2.10.15
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
  - name: zone2
    nodes:
      master:
        - name: cluster1master3
          ipAdress: 10.2.10.13
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2  
      worker:
        - name: cluster1worker1
          ipAdress: 10.2.10.16
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: drained
          kubeversion: 1.28.2

So here are the two zones, which contain master and worker nodes.
There are two different states: active and drained.
There can also be two different Kubernetes versions, so if you want to do updates in tranches, this is possible with kubeopsctl.
You can also set the system memory and system CPU of the nodes for Kubernetes itself. It is not possible to delete nodes; for deleting nodes you have to use LIMA. If you want to do an update in tranches, you need at least one master with the greater version.

All other parameters are explained here

2. Apply changes to the cluster

Once you have configured the cluster changes in the YAML file, use the following command to apply them.

kubeopsctl apply -f kubeopsctl.yaml

8 - Backup and restore

In this article, we look at the backup procedure with Velero.

Backup and restoring artifacts

What is Velero?

Velero uses object storage to store backups and associated artifacts. It also optionally integrates supported block storage systems to snapshot your persistent volumes. Before beginning the installation process, you should identify the object storage provider and optional block storage provider(s) you’ll be using from the list of compatible providers.

Velero supports storage providers for both cloud-provider environments and on-premises environments.

Velero prerequisites:

  • Access to a Kubernetes cluster, v1.16 or later, with DNS and container networking enabled.
  • kubectl installed locally
  • Object Storage (S3, Cloud Provider Environment, On-Premises Environment)

Documentation on compatible providers and on-premises setups can be found at https://velero.io/docs

Install Velero

This command is an example of how you can install Velero into your cluster:

velero install --provider aws --plugins velero/velero-plugin-for-aws:v1.2.1 --bucket velero --secret-file ./credentials-velero --use-volume-snapshots=false --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://minio.velero.svc:9000

NOTE:

  • s3Url has to be the URL of your S3 storage endpoint.
  • example for credentials-velero file:
    [default]
    aws_access_key_id = your_s3_storage_username
    aws_secret_access_key = your_s3_storage_password
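
Once the installation has finished, you can check that the Velero deployment is running and that the backup location is reachable (the velero namespace is the default used by velero install):

kubectl get deployment -n velero
velero backup-location get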
    

Backup the cluster

Scheduled Backups

This command creates a backup for the cluster every 6 hours:

velero schedule create cluster --schedule "0 */6 * * *"

Get Schedules

This command lists all schedules for backups:

velero schedule get

Delete Schedules

This command deletes the specified schedule:

velero schedule delete cluster

Restore Scheduled Backup

This command restores the backup according to a schedule:

velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>
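
The concrete backup name (schedule name plus timestamp) can be looked up beforehand, for example:

velero backup get | grep cluster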

Backup

This command creates a backup for the cluster

velero backup create cluster

Get Backups

This command lists all created backups:

velero backup get

Delete Backups

This command deletes the specified backup:

velero backup delete <BACKUP NAME>

Restore Backup

This command restores the specified backup:

velero restore create <RESOURCE NAME> --from-backup <BACKUP NAME>

Backup a specific deployment

Scheduled Backups

This command creates a backup for the namespace “logging” every 6 hours:

velero schedule create filebeat --schedule "0 */6 * * *" --include-namespaces logging --include-cluster-resources=true

This command creates a backup for the deployment “filebeat” every 6 hours:

velero schedule create filebeat --schedule "0 */6 * * *" --include-namespaces logging --selector app=filebeat-filebeat,release=filebeat --include-resources serviceaccount,deployment,daemonset,configmap,clusterrolebinding,clusterrole --include-cluster-resources=true

Get Schedules

This command lists all schedules for backups:

velero schedule get

Delete Schedules

This command deletes the specified schedule:

velero schedule delete filebeat

Restore Scheduled Backup

This command restores the backup from a schedule:

velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>

Backup

This command creates a backup for the namespace “logging”:

velero backup create filebeat --include-namespaces logging --include-cluster-resources=true

This command creates a backup for the deployment “filebeat”:

velero backup create filebeat --include-namespaces logging --selector app=filebeat-filebeat,release=filebeat --include-resources serviceaccount,deployment,daemonset,configmap,clusterrolebinding,clusterrole --include-cluster-resources=true

Get Backups

This command lists all created backups:

velero backup get

Delete Backups

This command deletes the specified backup:

velero backup delete <BACKUP NAME>

Restore Backup

This command restores the specified backup:

velero restore create <RESOURCE NAME> --from-backup <BACKUP NAME>

Scheduled Backups

This command creates a backup for the namespace “logging” every 6 hours:

velero schedule create logstash --schedule "0 */6 * * *" --include-namespaces logging --include-cluster-resources=true

This command creates a backup for the deployment “logstash” every 6 hours:

velero schedule create logstash --schedule "0 */6 * * *" --include-namespaces logging --selector app=logstash-logstash,chart=logstash,release=logstash --include-resources StatefulSet,ServiceAccount,Service,Secret,RoleBinding,Role,PodSecurityPolicy,PodDisruptionBudget,Ingress,ConfigMap --include-cluster-resources=true

Get Schedules

This command lists all schedules for backups:

velero schedule get

Delete Schedules

This command deletes the specified schedule:

velero schedule delete logstash

Restore Scheduled Backup

This command restores the backup from a schedule:

velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>

Backup

This command creates a backup for the namespace “logging”:

velero backup create logstash --include-namespaces logging --include-cluster-resources=true

This command creates a backup for the deployment “logstash”:

velero backup create logstash --include-namespaces logging --selector app=logstash-logstash,chart=logstash,release=logstash --include-resources StatefulSet,ServiceAccount,Service,Secret,RoleBinding,Role,PodSecurityPolicy,PodDisruptionBudget,Ingress,ConfigMap --include-cluster-resources=true

Get Backups

This command lists all created backups:

velero backup get

Delete Backups

This command deletes the specified backup:

velero backup delete <BACKUP NAME>

Restore Backup

This command restores the specified backup:

velero restore create <RESOURCE NAME> --from-backup <BACKUP NAME>

Scheduled Backups

This command creates a backup for the namespace “logging” every 6 hours:

velero schedule create opensearch --schedule "0 */6 * * *" --include-namespaces logging --include-cluster-resources=true

This command creates a backup for the deployment “opensearch” every 6 hours:

velero schedule create opensearch --schedule "0 */6 * * *" --include-namespaces logging --selector app.kubernetes.io/instance=opensearch,app.kubernetes.io/name=opensearch --include-resources ConfigMap,Ingress,NetworkPolicy,PodDisruptionBudget,PodSecurityPolicy,Role,RoleBinding,Secret,Service,ServiceAccount,StatefulSet --include-cluster-resources=true

Get Schedules

This command lists all schedules for backups:

velero schedule get

Delete Schedules

This command deletes the specified schedule:

velero schedule delete opensearch

Restore Scheduled Backup

This command restores the backup from a schedule:

velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>

Backup

This command creates a backup for the namespace “logging”:

velero backup create opensearch --include-namespaces logging --include-cluster-resources=true

This command creates a backup for the deployment “opensearch”:

velero backup create opensearch --include-namespaces logging --selector app.kubernetes.io/instance=opensearch,app.kubernetes.io/name=opensearch --include-resources ConfigMap,Ingress,NetworkPolicy,PodDisruptionBudget,PodSecurityPolicy,Role,RoleBinding,Secret,Service,ServiceAccount,StatefulSet --include-cluster-resources=true

Get Backups

This command lists all created backups:

velero backup get

Delete Backups

This command deletes the specified backup:

velero backup delete <BACKUP NAME>

Restore Backup

This command restores the specified backup:

velero restore create <RESOURCE NAME> --from-backup <BACKUP NAME>

Scheduled Backups

This command creates a backup for the namespace “monitoring” every 6 hours:

velero schedule create prometheus --schedule "0 */6 * * *" --include-namespaces monitoring --include-cluster-resources=true

This command creates a backup for the deployment “prometheus” every 6 hours:

velero schedule create prometheus --schedule "0 */6 * * *" --include-namespaces monitoring --include-resources Alertmanager,Secret,Ingress,List,PodDisruptionBudget,Role,RoleBinding,PodSecurityPolicy,Service,ServiceAccount,ServiceMonitor,Endpoints,ConfigMap,ConfigMapList,ClusterRole,ClusterRoleBinding,SecretProviderClass,PodMonitor,Prometheus,Job,NetworkPolicy,MutatingWebhookConfiguration,ValidatingWebhookConfiguration,Issuer,Deployment,VerticalPodAutoscaler,ThanosRuler --include-cluster-resources=true

Get Schedules

This command lists all schedules for backups:

velero schedule get

Delete Schedules

This command deletes the specified schedule:

velero schedule delete prometheus

Restore Scheduled Backup

This command restores the backup from a schedule:

velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>

Backup

This command creates a backup for the namespace “monitoring”:

velero backup create prometheus --include-namespaces monitoring --include-cluster-resources=true

This command creates a backup for the deployment “prometheus”:

velero backup create prometheus --include-namespaces monitoring --include-resources Alertmanager,Secret,Ingress,List,PodDisruptionBudget,Role,RoleBinding,PodSecurityPolicy,Service,ServiceAccount,ServiceMonitor,Endpoints,ConfigMap,ConfigMapList,ClusterRole,ClusterRoleBinding,SecretProviderClass,PodMonitor,Prometheus,Job,NetworkPolicy,MutatingWebhookConfiguration,ValidatingWebhookConfiguration,Issuer,Deployment,VerticalPodAutoscaler,ThanosRuler --include-cluster-resources=true

Get Backups

This command lists all created backups:

velero backup get

Delete Backups

This command deletes the specified backup:

velero backup delete <BACKUP NAME>

Restore Backup

This command restores the specified backup:

velero restore create <RESOURCE NAME> --from-backup <BACKUP NAME>

Scheduled Backups

This command creates a backup for the namespace “harbor” every 6 hours:

velero schedule create harbor --schedule "0 */6 * * *" --include-namespaces harbor --include-cluster-resources=true

This command creates a backup for the deployment “harbor” every 6 hours:

velero schedule create harbor --schedule "0 */6 * * *" --include-namespaces harbor --include-resources ConfigMap,Deployment,PersistentVolumeClaim,Secret,Service,StatefulSet,Ingress,ServiceMonitor --include-cluster-resources=true

Get Schedules

This command lists all schedules for backups:

velero schedule get

Delete Schedules

This command deletes the specified schedule:

velero schedule delete harbor

Restore Scheduled Backup

This command restores the backup from a schedule:

velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>

Backup

This command creates a backup for the namespace “harbor”:

velero backup create harbor --include-namespaces harbor --include-cluster-resources=true

This command creates a backup for the deployment “harbor”:

velero backup create harbor --include-namespaces harbor --include-resources ConfigMap,Deployment,PersistentVolumeClaim,Secret,Service,StatefulSet,Ingress,ServiceMonitor --include-cluster-resources=true

Get Backups

This command lists all created backups:

velero backup get

Delete Backups

This command deletes the specified backup:

velero backup delete <BACKUP NAME>

Restore Backup

This command restores the specified backup:

velero restore create <RESOURCE NAME> --from-backup <BACKUP NAME>

Scheduled Backups

This command creates a backup for the namespace “gatekeeper-system” every 6 hours:

velero schedule create gatekeeper --schedule "0 */6 * * *" --include-namespaces gatekeeper-system --include-cluster-resources=true

This command creates a backup for the deployment “gatekeeper” every 6 hours:

velero schedule create gatekeeper --schedule "0 */6 * * *" --include-namespaces gatekeeper-system --include-resources PodSecurityPolicy,ServiceAccount,Deployment,PodDisruptionBudget,ResourceQuota,ClusterRole,Role,ClusterRoleBinding,RoleBinding,MutatingWebhookConfiguration,ValidatingWebhookConfiguration,Secret,Service,Job --include-cluster-resources=true

Get Schedules

This command lists all schedules for backups:

velero schedule get

Delete Schedules

This command deletes the specified schedule:

velero schedule delete gatekeeper

Restore Scheduled Backup

This command restores the backup from a schedule:

velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>

Backup

This command creates a backup for the namespace “gatekeeper-system”:

velero backup create gatekeeper --include-namespaces gatekeeper-system --include-cluster-resources=true

This command creates a backup for the deployment “gatekeeper-system”:

velero backup create gatekeeper --include-namespaces gatekeeper-system --include-resources PodSecurityPolicy,ServiceAccount,Deployment,PodDisruptionBudget,ResourceQuota,ClusterRole,Role,ClusterRoleBinding,RoleBinding,MutatingWebhookConfiguration,ValidatingWebhookConfiguration,Secret,Service,Job --include-cluster-resources=true

Get Backups

This command lists all created backups:

velero backup get

Delete Backups

This command deletes the specified backup:

velero backup delete <BACKUP NAME>

Restore Backup

This command restores the specified backup:

velero restore create <RESOURCE NAME> --from-backup <BACKUP NAME>

Scheduled Backups

This command creates a backup for the namespace “rook-ceph” every 6 hours:

velero schedule create rook-ceph --schedule "0 */6 * * *" --include-namespaces rook-ceph --include-cluster-resources=true

Get Schedules

This command lists all schedules for backups:

velero schedule get

Delete Schedules

This command deletes the specified schedule:

velero schedule delete rook-ceph

Restore Scheduled Backup

This command restores the backup from a schedule:

velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>

Backup

This command creates a backup for the namespace “rook-ceph”:

velero backup create rook-ceph --include-namespaces rook-ceph --include-cluster-resources=true

Get Backups

This command lists all created backups:

velero backup get

Delete Backups

This command deletes the specified backup:

velero backup delete <BACKUP NAME>

Restore Backup

This command restores the specified backup:

velero restore create <RESOURCE NAME> --from-backup <BACKUP NAME>

9 - Renew Certificates

Renewal of certificates made easy with LIMA.

Renewing all certificates at once


LIMA enables you to renew all certificates for a specific cluster on all control-plane nodes with one command.

lima renew cert <clusterName>
Note: Renewing certificates can take several minutes because all certificate services are restarted.

Here is an example to renew the certificates on a cluster with the name “Democluster”:

lima renew cert Democluster

Note: This command renews all certificates on the existing control plane; there is no option to renew individual certificates.
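
A minimal way to double-check the result, assuming a kubeadm-based control plane and a recent kubeadm version, is to inspect the certificate expiration dates on a control-plane node:

kubeadm certs check-expiration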

10 - Deploy Package On Cluster

This guide provides a simplified process for deploying packages in a Kubernetes cluster using Kosi with either the Helm or Kubectl plugin.

Deploying package on Cluster

You can install artifacts in your cluster in several ways. For this purpose, you can use these four plugins when creating a package:

  • helm
  • kubectl
  • cmd
  • Kosi

As an example, this guide installs the nginx-ingress Ingress Controller.

Using the Helm-Plugin

Prerequisite

In order to install an artifact with the Helm plugin, the Helm chart must first be downloaded. This step is not covered in this guide.

Create KOSI package

First you need to create a KOSI package. The following command creates the necessary files in the current directory:

kosi create

The downloaded Helm chart must also be located in the current directory. To customize the deployment of the Helm chart, the values.yaml file must be edited. This file can be downloaded from ArtifactHub and must be placed in the same directory as the Helm chart.

All files required by a task in the package must be named in the package.yaml file under includes.files. The container images required by the Helm chart must also be listed in the package.yaml under includes.containers. For installation, the required files and images must be listed under the installation.includes key.
In the example below, only two files are required for the installation: the Helm Chart for the nginx-ingress and the values.yaml to configure the deployment. To install nginx-ingress you will also need the nginx/nginx-ingress image with the tag 3.0.1.

To install nginx-ingress with the Helm plugin, call the plugin as shown in the example under installation.tasks. The deployment configuration file is listed under values and the packed Helm chart is specified with the key tgz. Furthermore, it is also possible to specify the namespace in which the artifact should be deployed and the name of the deployment. The full documentation for the Helm plugin can be found here.

apiversion: kubernative/kubeops/sina/user/v4
name: deployExample
description: "This Package is an example. 
              It shows how to deploy an artifact to your cluster using the helm plugin."
version: 0.1.0  
includes: 
  files:  
    config: "values.yaml"
    nginx: "nginx-ingress-0.16.1.tgz"
  containers: 
    nginx-ingress:
      registry: docker.io 
      image: nginx/nginx-ingress
      tag: 3.0.1
docs: docs.tgz
logo: logo.png
installation:  
  includes: 
    files: 
      - config 
      - nginx
    containers: 
      - nginx-ingress
  tasks: 
    - helm:
        command: "install"
        values:
          - values.yaml
        tgz: "nginx-ingress-0.16.1.tgz"
        namespace: dev
        deploymentName: nginx-ingress
...

update:  
  tasks:
  
delete:  
  tasks:

Once the package.yaml file has been fully configured, all files must be combined into a KOSI package. To do this, execute the following command in the directory where the package.yaml file is located.

kosi build

To make the generated kosi package available on other machines, it is pushed to the user’s private KubeOps Hub. To do this, the user must first log in to the hub.

$ kosi login -u <username>
2023-02-04 11:19:43 Info:      KOSI version: 2.6.0_Beta0
2023-02-04 11:19:43 Info:      Please enter password
****************
2023-02-04 11:19:26 Info:      Login Succeeded to Hub.
$ kosi push --hub kosi
2023-02-04 11:23:18 Info:      KOSI version: 2.6.0_Beta0
2023-02-04 11:23:19 Info:      Push to Private Registry registry.preprod.kubernative.net/<username>/

Deployment

Once the KOSI package has been created and published, it needs to be installed on the Admin node. The following command will download and execute the package. The package name and version refer to the values defined in package.yaml with the keys name and version.

kosi install --hub <username> <username>/<packagename>:<version>

For the example package, the command would be: kosi install --hub <username> <username>/deployExample:0.1.0.

Using the Kubectl-Plugin

Prerequisite

In order to install an artifact with the Kubectl plugin, the kubeops-kubernetes-plugins package must be installed on the admin node. This step is not covered in this guide.

Create KOSI package

First you need to create a KOSI package. The following command creates the necessary files in the current directory:

kosi create

The NGINX ingress controller YAML manifest can either be downloaded and applied automatically with kubectl apply, or it can be downloaded manually if you want to customize the deployment. The YAML manifest can be downloaded from the NGINX GitHub Repo and must be placed in the same directory as the files for the KOSI package.

All files required by a task in the package must be named in the package.yaml file under includes.files. The container images required by the YAML manifest must also be listed in the package.yaml under includes.containers. For installation, the required files and images must be listed under the installation.includes key.
In the example below, only one file is required for the installation: the YAML manifest for the nginx-ingress controller. To install nginx-ingress you will also need the registry.k8s.io/ingress-nginx/controller image with the tag v1.5.1 and the image registry.k8s.io/ingress-nginx/kube-webhook-certgen with tag v20220916-gd32f8c343.

To install nginx-ingress with the Kubectl plugin, call the plugin as shown in the example under installation.tasks. The full documentation for the Kubectl plugin can be found here.

apiversion: kubernative/kubeops/sina/user/v4
name: deployExample
description: "This Package is an example. 
              It shows how to deploy an artifact to your cluster using the helm plugin."
version: 0.1.0  
includes: 
  files:  
    manifest: "deploy.yaml"
  containers: 
    nginx-ingress:
      registry: registry.k8s.io
      image: ingress-nginx/controller
      tag: v1.5.1
    webhook-certgen:
      registry: registry.k8s.io
      image: ingress-nginx/kube-webhook-certgen
      tag: v20220916-gd32f8c343
docs: docs.tgz
logo: logo.png
installation:  
  includes: 
    files: 
      - manifest
    containers: 
      - nginx-ingress
      - webhook-certgen
  tasks: 
    - kubectl:
        operation: "apply"
        flags: " -f <absolute path>/deploy.yaml"
        sudo: true
        sudoPassword: "toor"

...

update:  
  tasks:
  
delete:  
  tasks:

Once the package.yaml file has been fully configured, all files must be combined into a KOSI package. To do this, execute the following command in the directory where the package.yaml file is located.

kosi build

To make the generated KOSI package available on other machines, it is pushed to the user’s private KubeOps Hub. To do this, the user must first log in to the hub.

$ kosi login -u <username>
2023-02-04 11:19:43 Info:      kosi version: 2.6.0_Beta0
2023-02-04 11:19:43 Info:      Please enter password
****************
2023-02-04 11:19:26 Info:      Login Succeeded to Hub.
$ kosi push --hub kosi
2023-02-04 11:23:18 Info:      kosi version: 2.6.0_Beta0
2023-02-04 11:23:19 Info:      Push to Private Registry registry.preprod.kubernative.net/<username>/

Deployment

Once the KOSI package has been created and published, it needs to be installed on the Admin node. The following command will download and execute the package. The package name and version refer to the values defined in package.yaml with the keys name and version.

kosi install --hub <username> <username>/<packagename>:<version>

For the example package, the command would be: kosi install --hub <username> <username>/deployExample:0.1.0.

11 - Replace Cluster Nodes

This guide explains how to replace nodes in a Kubernetes cluster using KubeOps, which involves deleting an existing node and adding a new one through a YAML configuration file.

Replace cluster nodes

This section describes how to replace cluster nodes in your cluster.

Direct replacement of nodes is not possible in KubeOps; however, you can delete the node and add a new node to the cluster, as shown in the following example.

Steps to replace a Kubernetes Node

  1. Use the command delete on the admin node to delete the unwanted node from the cluster.

    The command is:

    lima delete -n <IP of your node> <name of your Cluster>
    
    If you delete a node, its data becomes inaccessible or is erased.
  2. Now create a new .yaml file with a configuration for the node, as shown below.

    Example:

    apiVersion: lima/nodeconfig/v1alpha1
    clusterName: roottest
    spec:
      masters: []
      workers:
      - host: 10.2.10.17  ## ip of the new node to be joined
        systemCpu: "200m"
        systemMemory: "200Mi"        
        user: root
        password: toor
    
  3. Lastly use the command create nodes to create and join the new node.

    The command is:

    lima create nodes -f <node yaml file name>
    

Example 1

In the following example, we will replace the node with IP 10.2.10.15 in demoCluster with a new worker node with IP 10.2.10.17:

  1. Delete node.

    lima delete -n 10.2.10.15 demoCluster
    
  2. Create addNode.yaml for the new worker node.

    apiVersion: lima/nodeconfig/v1alpha1
    clusterName: demoCluster
    spec:
      masters: []
      workers:
      - host: 10.2.10.17
        systemCpu: "200m"
        systemMemory: "200Mi"
        user: root
        password: toor
    
  3. Join the new node.

    lima create nodes -f addNode.yaml
    

Example 2

If you are rejoining a master node, all other steps are the same, except that you need to add the node configuration under masters in the YAML file, as shown in the example below:

apiVersion: lima/nodeconfig/v1alpha1
clusterName: roottest
spec:
  masters:
  - host: 10.2.10.17
    systemCpu: "200m"
    systemMemory: "200Mi"
    user: root
    password: toor
  workers: []
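
As in Example 1, the new node is then joined with the lima create nodes command:

lima create nodes -f addNode.yaml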

12 - Update Kubernetes Version

This guide outlines the steps to upgrade the Kubernetes version of a cluster, specifically demonstrating how to change the version using a configuration file.

Upgrading Kubernetes version

You can use the following steps to upgrade the Kubernetes version of a cluster.

In the following example, we will upgrade the Kubernetes version of the cluster named Democluster from version 1.27.2 to version 1.28.2.

  1. You have to create a kubeopsctl.yaml file with the following YAML syntax.
   apiVersion: kubeops/kubeopsctl/alpha/v5  # mandatory
   kubeOpsUser: "demo" # mandatory,  change to your username
   kubeOpsUserPassword: "Password" # mandatory,  change to your password
   kubeOpsUserMail: "demo@demo.net" # change to your email
   imagePullRegistry: "registry.preprod.kubernative.net/kubeops" # mandatory
   localRegistry: false # mandatory
   ### Values for setup configuration ###
   clusterName: "Democluster"  # mandatory
   clusterUser: "myuser"  # mandatory
   kubernetesVersion: "1.28.2" # mandatory, check lima documentation
   masterIP: 10.2.10.11 # mandatory
   ### Additional values for cluster configuration
   # at least 3 masters and 3 workers are needed
   zones:
   - name: zone1
     nodes:
       master:
       - name: cluster1master1
         ipAdress: 10.2.10.11
         user: myuser
         systemCpu: 100m
         systemMemory: 100Mi
         status: active
         kubeversion: 1.28.2
       - name: cluster1master2
         ipAdress: 10.2.10.12
         user: myuser
         systemCpu: 100m
         systemMemory: 100Mi
         status: active
         kubeversion: 1.28.2
       worker:
       - name: cluster1worker1
         ipAdress: 10.2.10.14
         user: myuser
         systemCpu: 100m
         systemMemory: 100Mi
         status: active
         kubeversion: 1.28.2
       - name: cluster1worker2
         ipAdress: 10.2.10.15
         systemCpu: 100m
         systemMemory: 100Mi
         status: active
         kubeversion: 1.28.2
   - name: zone2
     nodes:
       master:
       - name: cluster1master3
         ipAdress: 10.2.10.13
         user: myuser
         systemCpu: 100m
         systemMemory: 100Mi
         status: drained
         kubeversion: 1.28.2
       worker:
       - name: cluster1worker1
         ipAdress: 10.2.10.16
         user: myuser
         systemCpu: 100m
         systemMemory: 100Mi
         status: active
         kubeversion: 1.28.2

   # set to true if you want to install it into your cluster
   rook-ceph: false # mandatory
   harbor: false # mandatory
   opensearch: false # mandatory
   opensearch-dashboards: false # mandatory
   logstash: false # mandatory
   filebeat: false # mandatory
   prometheus: false # mandatory
   opa: false # mandatory
   kubeops-dashboard: false # mandatory
   certman: false # mandatory
   ingress: false # mandatory
   keycloak: false # mandatory
  2. Upgrade the version

    Once the kubeopsctl.yaml file is created, use the following command to upgrade the Kubernetes version of your cluster:

    kubeopsctl upgrade -f kubeopsctl.yaml
    

rook-ceph has no PodDisruptionBudgets (PDBs), so if you drain nodes for the Kubernetes upgrade, rook-ceph is temporarily unavailable. You should drain only one node at a time during the Kubernetes upgrade.
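
If you drain a node manually before it is upgraded, a typical sequence could look like the following sketch; the node name is a placeholder and the exact flags may differ in your environment:

kubectl drain cluster1worker1 --ignore-daemonsets --delete-emptydir-data
kubectl uncordon cluster1worker1

Wait until the drained node is upgraded and rook-ceph is healthy again before draining the next node.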

13 - Change CRI

A brief overview of how you can change the Container Runtime Interface (CRI) of your cluster to one of the supported CRIs, containerd or crio.

Changing Container Runtime Interface

KubeOps enables you to change the Container Runtime Interface (CRI) of your clusters to any of the following supported CRIs:

  • containerd
  • crio

You can use the following steps to change the CRI.

In the example below, we will change the CRI of the cluster with the name Democluster to containerd.

  1. Download the desired CRI maintenance package from the hub
In this case you will need the package `lima/containerdlp151:1.6.6`.  
To download the package, use the command:  
lima pull maintenance lima/containerdlp151:1.6.6
Note: Packages may vary based on the OS and Kubernetes version on your machine.
To select the correct maintenance package based on your machine configuration, refer to Installing maintenance packages.
  2. Change the CRI of your cluster.

Once the desired CRI maintenance package is downloaded, to change the CRI of your cluster use command:

lima change runtime -r containerd Democluster

In this case, the runtime is changed to containerd. The desired container runtime is specified after the required -r parameter. In this example the cluster is named Democluster, which must also be specified.
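
To verify the change afterwards, you can check the container runtime reported for each node; kubectl get nodes -o wide shows it in the CONTAINER-RUNTIME column (assuming kubectl access on the admin node):

kubectl get nodes -o wide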

14 - How to delete nodes from the cluster with lima

A compact overview of how you can delete nodes from your cluster with Lima.

Note: If you want to delete a node from your Kubernetes cluster, you have to use lima.

If you are using our platform, lima is already installed by it. If this is not the case, please install lima manually.

These are the prerequisites that have to be fulfilled before we can delete a node from our cluster.

  • lima has to be installed
  • a functioning cluster must exist

If you want to remove a node from your cluster you can run the delete command on the admin node.

lima delete -n <node which should be deleted> <name of your cluster>

Note: The cluster name has to be the same as the one set under clusterName: in the Kubectl.yaml file.

For example, to delete worker node 2 with the IP address 10.2.1.9 from our existing Kubernetes cluster named example, we use the following command:

lima delete -n 10.2.1.9 example
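
To confirm that the node was removed, you can list the remaining nodes of the cluster (assuming kubectl access on the admin node); the deleted worker should no longer appear:

kubectl get nodes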

15 - Accessing Dashboards

A brief overview of how you can access dashboards.

Accessing Dashboards installed with KubeOps

To access an application dashboard, an SSH tunnel to one of the control planes is needed. The following dashboards are available and configured with the following NodePorts by default:

Grafana

NodePort

30211

Initial login credentials

  • username: the username set in the kubeopsvalues.yaml for the cluster creation
  • password: the password set in the kubeopsvalues.yaml for the cluster creation

OpenSearch Dashboards

NodePort

30050

Initial login credentials

  • username: admin
  • password: Password@@123456

Harbor

NodePort

  • https: 30003

Initial login credentials

  • username: admin
  • password: the password set in the kubeopsvalues.yaml for the cluster creation

Rook/Ceph

NodePort

The Rook/Ceph Dashboard has no fixed NodePort yet. To find out the NodePort used by Rook/Ceph follow these steps:

  1. List the Services in the KubeOps namespace
kubectl get svc -n kubeops
  2. Find the line with the service rook-ceph-mgr-dashboard-external-http
NAME                                      TYPE        CLUSTER-IP        EXTERNAL-IP   PORT(S)                                     AGE
rook-ceph-mgr-dashboard-external-http     NodePort    192.168.197.13    <none>        7000:31268/TCP                              21h

Or use,

echo $(kubectl get --namespace rook-ceph -o jsonpath="{.spec.ports[0].nodePort}" services rook-ceph-mgr-dashboard-external-http)

In the example above the NodePort to connect to Rook/Ceph would be 31268.

Initial login credentials

echo Username: admin
echo Password: $(kubectl get secret rook-ceph-dashboard-password -n rook-ceph --template={{.data.password}} | base64 -d)

The dashboard can be accessed with localhost:NodePort/ceph-dashboard/

KubeOps Dashboard (Headlamp)

NodePort

30007

Initial login credentials

kubectl -n monitoring create token headlamp-admin

Keycloak

NodePort

30180

Initial login credentials

echo Username: admin
echo Password: $(kubectl get secret --namespace keycloak keycloak -o jsonpath="{.data.ADMIN_PASSWORD}" | base64 -d)

Access the dashboard with localhost:NodePort/

Connecting to the Dashboard

In order to connect to one of the dashboards, an SSH tunnel has to be established. There are various tools for doing this, such as the command line, PuTTY or MobaXterm.
To establish a tunnel, the NodePort of the dashboard has to be forwarded from one of the control planes to the local machine. After that, the dashboard can be accessed at localhost:<Port>.
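
For example, a tunnel for the Grafana NodePort opened from the command line could look like the following sketch; the user name and control-plane IP are placeholders for your environment:

ssh -L 30211:localhost:30211 myuser@10.2.10.11

While the tunnel is open, the Grafana dashboard is reachable at localhost:30211.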

Connecting to the Dashboard via DNS

In order to connect to the dashboard via DNS, the hosts file in /etc/hosts needs the following additional entries:

10.2.10.11 kubeops-dashboard.local
10.2.10.11 harbor.local
10.2.10.11 keycloak.local
10.2.10.11 opensearch.local
10.2.10.11 grafana.local
10.2.10.11 rook-ceph.local

The IP address must be the same as the address of your Master1.
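
To check that the entries work, you can resolve and reach one of the names from a Linux machine, for example:

ping -c 1 grafana.local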

16 - Replace the kubeops-cert with your own cert

This section outlines how to replace the default kubeops certificate with a custom one by creating a new certificate in a Kubernetes secret and updating the configuration accordingly.

Replace the kubeops-cert with your own cert

1. Create your own cert in a secret

In this example, a new secret with the name example-ca is created.

This command creates two files, tls.key and tls.crt:

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=foo.bar.com"

Create a new tls secret in the namespace cert-manager:

kubectl create secret tls example-ca --key="tls.key" --cert="tls.crt" -n cert-manager
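
Before continuing, you can verify that the secret exists in the cert-manager namespace and contains the tls.crt and tls.key entries:

kubectl get secret example-ca -n cert-manager
kubectl describe secret example-ca -n cert-manager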

2. Create the new configuration

Make sure that certman is set to true.

certman: true

Add the following section to kubeopsctl.yaml.

certmanValues:
  secretName: example-ca

3. Apply the new configuration with kubeopsctl

kubeopsctl apply -f kubeopsctl.yaml

17 - Create a new Repository

This section provides a comprehensive guide on setting up a new RPM repository in KubeOps for the centralized distribution of software packages, covering prerequisites, repository setup steps, and commands for managing the repository and installing packages.

Kubeops RPM Repository Setup Guide

Setting up a new RPM repository allows for centralized, secure, and efficient distribution of software packages, simplifying installation, updates, and dependency management.

Prerequisites

To set up a new repository on your KubeOps platform, the following prerequisites must be fulfilled.

  • httpd (apache) server to access the repository over HTTP.
  • Root or administrative access to the server.
  • Software packages (RPM files) to include in the repository.
  • createrepo (an RPM package management tool) to create a new repository.

Repository Setup Steps

1. Install Required Tools

sudo yum install -y httpd createrepo

2. Create Repository Directory

When Apache is installed, the default Apache VirtualHost DocumentRoot is created at /var/www/html. Create a new repository KubeOpsRepo under the DocumentRoot.

sudo mkdir -p /var/www/html/KubeOpsRepo

3. Copy RPM Packages

Copy RPM packages into KubeOpsRepo repository.

Use the command below to copy packages that are already present on the host machine, or directly populate the packages into KubeOpsRepo.

sudo cp -r <sourcePathForRPMs> /var/www/html/KubeOpsRepo/

4. Generate the GPG Signature (optional)

If you want to use your packages in a secure way, we recommend using a GPG signature.

How does the GPG tool work?

The GNU Privacy Guard (GPG) is used for secure communication and data integrity verification.
When gpgcheck is set to 1 (enabled), the package manager verifies the GPG signature of each package against the corresponding key in the keyring. If the package’s signature matches the expected signature, the package is considered valid and can be installed. If the signature does not match or the package is not signed, the package manager will refuse to install the package or display a warning.
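
On client machines that should trust the repository, the public key can also be imported into the RPM keyring manually. This is a sketch and assumes the key is published under the URL used later in this guide:

sudo rpm --import http://<ip-address-of-server>/KubeOpsRepo/RPM-GPG-KEY-KubeOpsRepo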

GPG Signature for new registry

  1. Create a GPG key and add it to /var/www/html/KubeOpsRepo/. Check here to learn how to create GPG key pairs.

  2. Save the GPG key as RPM-GPG-KEY-KubeOpsRepo using the following command.

cd /var/www/html/KubeOpsRepo/
gpg --armor --export > RPM-GPG-KEY-KubeOpsRepo

You can use the following command to verify the GPG key.

curl -s http://<ip-address-of-server>/KubeOpsRepo/RPM-GPG-KEY-KubeOpsRepo

5. Initialize the KubeOpsRepo

By running the createrepo command, the KubeOpsRepo will be initialized.

cd /var/www/html/KubeOpsRepo/
sudo createrepo .

The newly created directory repodata contains metadata files that describe the RPM packages in the repository, including package information, dependencies, and checksums, enabling efficient package management and dependency resolution.
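
If you add or remove RPMs later, the metadata has to be regenerated. Running createrepo with the --update flag only processes new or changed packages, which is faster for large repositories:

cd /var/www/html/KubeOpsRepo/
sudo createrepo --update .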

6. Start and Enable Apache Service

sudo systemctl start httpd
sudo systemctl enable httpd

Configure Firewall (Optional)

If the firewall is enabled, we need to allow incoming HTTP traffic.

sudo firewall-cmd --add-service=http --permanent
sudo firewall-cmd --reload

7. Configure the local repository

To install packages from KubeOpsRepo without specifying the URL every time, we can configure a local repository. Also, if you are using a GPG signature, gpgcheck needs to be enabled.

  1. Create a Repository Configuration File
    Create a new .repo configuration file (e.g. KubeOpsRepo.repo) in the /etc/yum.repos.d/ directory with the following command.
sudo vi /etc/yum.repos.d/KubeOpsRepo.repo
  2. Add the following configuration content to the file
[KubeOpsRepo]  
name=KubeOps Repository
baseurl=http://<ip-address-of-server>/KubeOpsRepo/
enabled=1
gpgcheck=1
gpgkey=http://<ip-address-of-server>/KubeOpsRepo/RPM-GPG-KEY-KubeOpsRepo

Below are the configuration details:

  1. KubeOpsRepo: The repository ID.
  2. baseurl: The base URL of the new repository. Add your repository URL here.
  3. name: A descriptive name for the repository; it can be customized.
  4. enabled=1: Enables the repository.
  5. gpgcheck=1: Enables GPG signature verification for the repository.
  6. gpgkey: The address where your GPG key is placed.

In case you are not using GPG signature verification, you can skip step 4 and set gpgcheck=0 in the above configuration file.

8. Test the Local Repository

To ensure that the latest metadata for the repositories is available, you can run the command below (optional):

sudo yum makecache

To verify the repository in the repo list

You can check the repository in the repolist with the following command:

sudo yum repolist

This will list all the repositories along with information about them.

[root@cluster3admin1 ~]# yum repolist
Updating Subscription Management repositories.
repo id                                                        repo name
KubeOpsRepo                                                    KubeOps Repository
rhel-8-for-x86_64-appstream-rpms                               Red Hat Enterprise Linux 8 for x86_64 - AppStream (RPMs)
rhel-8-for-x86_64-baseos-rpms                                  Red Hat Enterprise Linux 8 for x86_64 - BaseOS (RPMs)

To list all the packages in the repository

You can list all the packages available in KubeOpsRepo with the following command:

# To check all the packages including duplicate installed packages
sudo yum list available --disablerepo="*" --enablerepo="KubeOpsRepo" --showduplicates
# sudo yum list --showduplicates | grep KubeOpsRepo 

To install packages from the repository directly

Now you can install packages directly from the KubeOpsRepo repository with the following command:

sudo yum install package_name

For example:

sudo yum install lima

18 - Add certificate as trusted

This section outlines the process for adding a certificate as trusted by downloading it from the browser and installing it in the Trusted Root Certification Authorities on Windows or Linux systems.

1. Download the certificate

  1. As soon as Chrome issues a certificate warning, click on Not secure to the left of the address bar.
  2. Show the certificate (Click on Certificate is not valid).
  3. Go to Details tab.
  4. Click Export... at the bottom and save the certificate.
  1. As soon as Firefox issues a certificate warning, click on Advanced....
  2. View the certificate (Click on View Certificate).
  3. Scroll down to Miscellaneous and save the certificate.

2. Install the certificate

  1. Press Windows + R.
  2. Enter mmc and click OK.
  3. Click on File > Add/Remove snap-in....
  4. Select Certificates in the Available snap-ins list and click on Add >, then on OK. Add the snap-in.
  5. In the tree pane, open Certificates - Current user > Trusted Root Certification Authorities, then right-click Certificates and select All tasks > Import....
  6. The Certificate Import Wizard opens here. Click on Next.
  7. Select the previously saved certificate and click Next.
  8. Click Next again in the next window.
  9. Click on Finish. If a warning pops up, click on Yes.
  10. The program can now be closed. Console settings do not need to be saved.
  11. Clear browser cache and restart browser.

The procedures for using a browser to import a certificate as trusted (on Linux systems) vary depending on the browser and Linux distribution used. To manually cause a self-signed certificate to be trusted by a browser on a Linux system:

Distribution: RedHat
Copy certificate here: /etc/pki/ca-trust/source/anchors/
Run the following command to trust the certificate: update-ca-trust extract

Note: If the directory does not exist, create it.
Note: If you do not have the ca-certificates package, install it with your package manager.
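
For example, on a RedHat-based system the complete sequence could look like this; the certificate file name is a placeholder:

sudo cp my-cert.crt /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust extract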

19 - Change registry

In KubeOps you have the possibility to change the registry from A to B for the respective tools.

Changing Registry from A to B

KubeOps enables you to change the registry from A to B with the following commands:

kosi 2.6.0 - kosi 2.7.0

Kubeops 1.0.6

fileBeat
kosi pull kubeops/kosi-filebeat-os:1.2.0 -o filebeat.kosi -r 10.9.10.222:30003
kosi pull kubeops/kosi-filebeat-os:1.2.0 -o filebeat.kosi -r localhost:30003
kosi install -p filebeat.kosi
harbor
kosi pull kubeops/harbor:1.0.1 -o harbor.kosi  -r 10.9.10.222:30003
kosi pull kubeops/harbor:1.0.1 -o harbor.kosi -r localhost:30003 
kosi install -p harbor.kosi
logstash
kosi pull kubeops/kosi-logstash-os:1.0.1 -o logstash.kosi -r 10.9.10.222:30003
kosi pull kubeops/kosi-logstash-os:1.0.1 -o logstash.kosi -r localhost:30003
kosi install -p logstash.kosi
opa-gatekeeper
kosi pull kubeops/opa-gatekeeper:1.0.1 -o opa-gatekeeper.kosi -r 10.9.10.222:30003
kosi pull kubeops/opa-gatekeeper:1.0.1 -o opa-gatekeeper.kosi -r localhost:30003
kosi install -p opa-gatekeeper.kosi
opensearch
kosi pull kubeops/kosi-opensearch-os:1.0.3 -o opensearch.kosi -r 10.9.10.222:30003
kosi pull kubeops/kosi-opensearch-os:1.0.3 -o opensearch.kosi -r localhost:30003
kosi install -p opensearch.kosi
opensearch-dashboards
kosi pull kubeops/kosi-opensearch-dashboards:1.0.1 -o opensearch-dashboards.kosi -r 10.9.10.222:30003
kosi pull kubeops/kosi-opensearch-dashboards:1.0.1 -o opensearch-dashboards.kosi -r localhost:30003
kosi install -p opensearch-dashboards.kosi
prometheus
kosi pull kubeops/kosi-kube-prometheus-stack:1.0.3 -o prometheus.kosi -r 10.9.10.222:30003
kosi pull kubeops/kosi-kube-prometheus-stack:1.0.3 -o prometheus.kosi -r localhost:30003
kosi install -p prometheus.kosi

rook

kosi pull kubeops/rook-ceph:1.0.3 -o rook-ceph.kosi -r 10.9.10.222:30003
kosi pull kubeops/rook-ceph:1.0.3 -o rook-ceph.kosi -r localhost:30003
kosi install -p rook-ceph.kosi

Kubeops 1.1.2

fileBeat

kosi pull kubeops/kosi-filebeat-os:1.1.1 -o kubeops/kosi-filebeat-os:1.1.1 -r 10.9.10.222:30003
kosi pull kubeops/kosi-filebeat-os:1.1.1 -o kubeops/kosi-filebeat-os:1.1.1 -t localhost:30003
kosi install -p package.yaml

harbor

kosi pull kubeops/harbor:1.1.1 -o kubeops/harbor:1.1.1 -r 10.9.10.222:30003
kosi pull kubeops/harbor:1.1.1 -o kubeops/harbor:1.1.1 -t localhost:30003
kosi install -p package.yaml

logstash

kosi pull kubeops/kosi-logstash-os:1.1.1 -o kubeops/kosi-logstash-os:1.1.1 -r 10.9.10.222:30003
kosi pull kubeops/kosi-logstash-os:1.1.1 -o kubeops/kosi-logstash-os:1.1.1 -t localhost:30003
kosi install -p package.yaml

opa-gatekeeper

kosi pull kubeops/opa-gatekeeper:1.1.1 -o kubeops/opa-gatekeeper:1.1.1 -r 10.9.10.222:30003
kosi pull kubeops/opa-gatekeeper:1.1.1 -o kubeops/opa-gatekeeper:1.1.1 -t localhost:30003
kosi install -p package.yaml

opensearch

kosi pull kubeops/kosi-opensearch-os:1.1.1 -o kubeops/kosi-opensearch-os:1.1.1 -r 10.9.10.222:30003
kosi pull kubeops/kosi-opensearch-os:1.1.1 -o kubeops/kosi-opensearch-os:1.1.1 -t localhost:30003
kosi install -p package.yaml

opensearch-dashboards

kosi pull kubeops/kosi-opensearch-dashboards:1.1.1 -o kubeops/kosi-opensearch-dashboards:1.1.1 -r 10.9.10.222:30003
kosi pull kubeops/kosi-opensearch-dashboards:1.1.1 -o kubeops/kosi-opensearch-dashboards:1.1.1 -t localhost:30003
kosi install -p package.yaml

prometheus

kosi pull kubeops/kosi-kube-prometheus-stack:1.1.1 -o kubeops/kosi-kube-prometheus-stack:1.1.1 -r 10.9.10.222:30003
kosi pull kubeops/kosi-kube-prometheus-stack:1.1.1 -o kubeops/kosi-kube-prometheus-stack:1.1.1 -t localhost:30003
kosi install -p package.yaml

rook

kosi pull kubeops/rook-ceph:1.1.1 -o kubeops/rook-ceph:1.1.1 -r 10.9.10.222:30003
kosi pull kubeops/rook-ceph:1.1.1 -o kubeops/rook-ceph:1.1.1 -t localhost:30003
kosi install -p package.yaml

cert-manager

kosi pull kubeops/cert-manager:1.0.2 -o kubeops/cert-manager:1.0.2 -r 10.9.10.222:30003
kosi pull kubeops/cert-manager:1.0.2 -o kubeops/cert-manager:1.0.2 -t localhost:30003
kosi install -p package.yaml

ingress-nginx

kosi pull kubeops/ingress-nginx:1.0.1 -o kubeops/ingress-nginx:1.0.1 -r 10.9.10.222:30003
kosi pull kubeops/ingress-nginx:1.0.1 -o kubeops/ingress-nginx:1.0.1 -t localhost:30003
kosi install -p package.yaml

kubeops-dashboard

kosi pull kubeops/kubeops-dashboard:1.0.1 -o kubeops/kubeops-dashboard:1.0.1 -r 10.9.10.222:30003
kosi pull kubeops/kubeops-dashboard:1.0.1 -o kubeops/kubeops-dashboard:1.0.1 -t localhost:30003
kosi install -p package.yaml

kubeopsctl 1.4.0

Kubeops 1.4.0

You have to create the file kubeopsctl.yaml:

apiVersion: kubeops/kubeopsctl/alpha/v5  # mandatory
kubeOpsUser: "demo" # change to your username
kubeOpsUserPassword: "Password" # change to your password
kubeOpsUserMail: "demo@demo.net" # change to your email
imagePullRegistry: "registry.preprod.kubeops.net/kubeops"

clusterName: "example" 
clusterUser: "root" 
kubernetesVersion: "1.28.2" 
masterIP: 10.2.10.11 
firewall: "nftables" 
pluginNetwork: "calico" 
containerRuntime: "containerd" 

localRegistry: false

# at least 3 masters and 3 workers are needed
zones:
  - name: zone1
    nodes:
      master:
        - name: cluster1master1
          ipAdress: 10.2.10.11
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
        - name: cluster1master2
          ipAdress: 10.2.10.12
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
      worker:
        - name: cluster1worker1
          ipAdress: 10.2.10.14
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
        - name: cluster1worker2
          ipAdress: 10.2.10.15
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
  - name: zone2
    nodes:
      master:
        - name: cluster1master3
          ipAdress: 10.2.10.13
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: drained
          kubeversion: 1.28.2  
      worker:
        - name: cluster1worker1
          ipAdress: 10.2.10.16
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2

controlPlaneList: 
  - 10.2.10.12 # use ip adress here for master2
  - 10.2.10.13 # use ip adress here for master3

workerList: 
  - 10.2.10.14 # use ip adress here for worker1
  - 10.2.10.15 # use ip adress here for worker2
  - 10.2.10.16 # use ip adress here for worker3

rook-ceph: false
harbor: false
opensearch: false
opensearch-dashboards: false
logstash: false
filebeat: false
prometheus: false
opa: false
kubeops-dashboard: false
certman: false
ingress: false 
keycloak: false # mandatory, set to true if you want to install it into your cluster
velero: false

storageClass: "rook-cephfs"

rookValues:
  namespace: kubeops
  nodePort: 31931
  hostname: rook-ceph.local
  cluster:
    spec:
      dataDirHostPath: "/var/lib/rook"
    removeOSDsIfOutAndSafeToRemove: true
    storage:
      # Global filter to only select certain device names. This example matches names starting with sda or sdb.
      # Will only be used if useAllDevices is set to false and will be ignored if individual devices have been specified on a node.
      deviceFilter: "^sd[a-b]"
      # Names of individual nodes in the cluster that should have their storage included.
      # Will only be used if useAllNodes is set to false.
      nodes:
        - name: "<ip-adress of node_1>"
          devices:
            - name: "sdb" 
        - name: "<ip-adress of node_2>"
          deviceFilter: "^sd[a-b]"
          # config:
          #   metadataDevice: "sda"
    resources:
      mgr:
        requests:
          cpu: "500m"
          memory: "1Gi"
      mon:
        requests:
          cpu: "2"
          memory: "1Gi"
      osd:
        requests:
          cpu: "2"
          memory: "1Gi"
  operator:
    data:
      rookLogLevel: "DEBUG"
  blockStorageClass:
    parameters:
      fstype: "ext4"

postgrespass: "password"  # change to your desired password
postgres:
  storageClass: "rook-cephfs"
  volumeMode: "Filesystem"
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 2Gi

redispass: "password" # change to your desired password
redis:
  storageClass: "rook-cephfs"
  volumeMode: "Filesystem"
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 2Gi

harborValues: 
  namespace: kubeops
  harborpass: "password" # change to your desired password
  externalURL: https://10.2.10.13 # change to ip adress of master1
  nodePort: 30003
  hostname: harbor.local
  harborPersistence:
    persistentVolumeClaim:
      registry:
        size: 40Gi
        storageClass: "rook-cephfs"
      chartmuseum:
        size: 5Gi
        storageClass: "rook-cephfs"
      jobservice:
        jobLog:
          size: 1Gi
          storageClass: "rook-cephfs"
        scanDataExports:
          size: 1Gi
          storageClass: "rook-cephfs"
      database:
        size: 1Gi
        storageClass: "rook-cephfs"
      redis:
        size: 1Gi
        storageClass: "rook-cephfs"
      trivy: 
        size: 5Gi
        storageClass: "rook-cephfs"

filebeatValues:
  namespace: kubeops 

logstashValues:
  namespace: kubeops
  volumeClaimTemplate:
    resources:
      requests:
        storage: 1Gi
    accessModes: 
      - ReadWriteMany
    storageClass: "rook-cephfs"

openSearchDashboardValues:
  namespace: kubeops
  nodePort: 30050
  hostname: opensearch.local

openSearchValues:
  namespace: kubeops
  opensearchJavaOpts: "-Xmx512M -Xms512M"
  replicas: "3"
  resources:
    requests:
      cpu: "250m"
      memory: "1024Mi"
    limits:
      cpu: "300m"
      memory: "3072Mi"
  persistence:
    size: 4Gi
    enabled: "true"
    enableInitChown: "false"
    enabled: "false"
    labels:
      enabled: "false"
    storageClass: "rook-cephfs"
    accessModes:
      - "ReadWriteMany"
  securityConfig:
    enabled: false
    ### Additional values can be set, if securityConfig is enabled:
    # path: "/usr/share/opensearch/plugins/opensearch-security/securityconfig"
    # actionGroupsSecret:
    # configSecret:
    # internalUsersSecret: internal-users-config-secret
    # rolesSecret:
    # rolesMappingSecret:
    # tenantsSecret:
    # config:
    #   securityConfigSecret: ""
    #   dataComplete: true
    #   data: {}

prometheusValues:
  namespace: kubeops
  privateRegistry: false

  grafanaUsername: "user"
  grafanaPassword: "password"
  grafanaResources:
    storageClass: "rook-cephfs"
    storage: 5Gi
    nodePort: 30211
    hostname: grafana.local

  prometheusResources:
    storageClass: "rook-cephfs"
    storage: 25Gi
    retention: 10d
    retentionSize: "24GB"
    nodePort: 32090
    hostname: prometheus.local

opaValues:
  namespace: kubeops

kubeOpsDashboardValues:
  namespace: kubeops
  hostname: kubeops-dashboard.local
  service:
    nodePort: 30007

certmanValues:
  namespace: kubeops
  replicaCount: 3
  logLevel: 2

ingressValues:
  namespace: kubeops
  externalIPs: []

keycloakValues:
  namespace: "kubeops"
  storageClass: "rook-cephfs"
  nodePort: "30180"
  hostname: keycloak.local
  keycloak:
    auth:
      adminUser: admin
      adminPassword: admin
      existingSecret: ""
  postgresql:
    auth:
      postgresPassword: ""
      username: bn_keycloak
      password: ""
      database: bitnami_keycloak
      existingSecret: ""

fileBeat

In order to change the registry of filebeat, open your kubeopsctl.yaml file and set filebeat: false to filebeat: true.

kubeopsctl change registry -f kubeopsctl.yaml -r 10.9.10.222:30003 -t localhost:30003

harbor

In order to change the registry of harbor, open your kubeopsctl.yaml file and set harbor: false to harbor: true.

kubeopsctl change registry -f kubeopsctl.yaml -r 10.9.10.222:30003 -t localhost:30003

logstash

In order to change the registry of logstash, open your kubeopsctl.yaml file and set logstash: false to logstash: true.

kubeopsctl change registry -f kubeopsctl.yaml -r 10.9.10.222:30003 -t localhost:30003

opa-gatekeeper

In order to change the registry of opa-gatekeeper, open your kubeopsctl.yaml file and set opa: false to opa: true.

kubeopsctl change registry -f kubeopsctl.yaml -r 10.9.10.222:30003 -t localhost:30003

opensearch

In order to change the registry of opensearch, open your kubeopsctl.yaml file and set opensearch: false to opensearch: true.

kubeopsctl change registry -f kubeopsctl.yaml -r 10.9.10.222:30003 -t localhost:30003

opensearch-dashboards

In order to change the registry of opensearch-dashboards, open your kubeopsctl.yaml file and set opensearch-dashboards: false to opensearch-dashboards: true.

kubeopsctl change registry -f kubeopsctl.yaml -r 10.9.10.222:30003 -t localhost:30003

prometheus

In order to change the registry of prometheus, open your kubeopsctl.yaml file and set prometheus: false to prometheus: true.

kubeopsctl change registry -f kubeopsctl.yaml -r 10.9.10.222:30003 -t localhost:30003

rook-ceph

In order to change the registry of rook-ceph, open your kubeopsctl.yaml file and set rook-ceph: false to rook-ceph: true.

kubeopsctl change registry -f kubeopsctl.yaml -r 10.9.10.222:30003 -t localhost:30003

cert-manager

In order to change the registry of cert-manager, open your kubeopsctl.yaml file and set certman: false to certman: true.

kubeopsctl change registry -f kubeopsctl.yaml -r 10.9.10.222:30003 -t localhost:30003

ingress-nginx

In order to change the registry of ingress-nginx, open your kubeopsctl.yaml file and set ingress: false to ingress: true.

kubeopsctl change registry -f kubeopsctl.yaml -r 10.9.10.222:30003 -t localhost:30003

keycloak

In order to change the registry of keycloak, open your kubeopsctl.yaml file and set keycloak: false to keycloak: true.

kubeopsctl change registry -f kubeopsctl.yaml -r 10.9.10.222:30003 -t localhost:30003

kubeops-dashboard

In order to change the registry of kubeops-dashboard, open your kubeopsctl.yaml file and set kubeops-dashboard: false to kubeops-dashboard: true.

kubeopsctl change registry -f kubeopsctl.yaml -r 10.9.10.222:30003 -t localhost:30003

20 - Change the OpenSearch Password

Detailed instructions on how to change the OpenSearch password.

Changing a User Password in OpenSearch

This guide explains how to change a user password in OpenSearch with SecurityConfig enabled and an external Kubernetes Secret for user credentials.

Steps to Change the Password Using an External Secret

Prerequisites

  • Access to the Kubernetes cluster where OpenSearch is deployed.
  • Permissions to view and modify secrets in the relevant namespace.

Step 1: Generate a New Password Hash

Execute the command below (replacing the placeholders) to generate a hashed version of your new password:

kubectl exec -it <opensearch_pod_name> -n <opensearch_pod_namespace> -- bash -c "sh /usr/share/opensearch/plugins/opensearch-security/tools/hash.sh -p <new_password>"

Step 2: Extract the Existing Secret and Update internal_users.yaml

Retrieve the existing secret containing internal_users.yml. The secret stores the configuration in base64 encoding, so extract and decode it:

kubectl get secrets -n <opensearch_pod_namespace> internal-users-config-secret -o jsonpath='{.data.internal_users\.yml}' | base64 -d > internal_users.yaml

Now, update the hashed password generated in Step 1 in the internal_users.yaml file for the intended user.
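
For orientation, an entry in internal_users.yaml typically looks like the following sketch; the user name is only an example and the hash value is the output of hash.sh from Step 1:

admin:
  hash: "<hash generated in Step 1>"
  reserved: true
  backend_roles:
  - "admin"
  description: "OpenSearch admin user"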

Step 3: Patch the Secret with Updated internal_users.yml Data and Restart the OpenSearch Pods

Encode the updated internal_users.yaml and apply it back to the secret.

cat internal_users.yaml | base64 -w 0 | xargs -I {} kubectl patch secret -n <opensearch_pod_namespace> internal-users-config-secret --patch '{"data": {"internal_users.yml": "{}"}}'

Restart the OpenSearch pods to use the updated secret.

kubectl rollout restart statefulset opensearch-cluster-master -n <opensearch_pod_namespace>

NOTE: Please wait for the rollout to complete.

Step 4: Run securityadmin.sh to Apply the Changes

This completes the password update process, ensuring that changes persist across OpenSearch pods.

kubectl exec -it <opensearch_pod_name> -n <opensearch_pod_namespace> -- bash -c "\
    cp /usr/share/opensearch/plugins/opensearch-security/securityconfig/internal_users.yml /usr/share/opensearch/config/opensearch-security/ && \
    sh /usr/share/opensearch/plugins/opensearch-security/tools/securityadmin.sh \
    -cd /usr/share/opensearch/config/opensearch-security/ \
    -icl -nhnv \
    -cacert /usr/share/opensearch/config/root-ca.pem \
    -cert /usr/share/opensearch/config/kirk.pem \
    -key /usr/share/opensearch/config/kirk-key.pem"

21 - Enabling AuditLog

A brief overview of how you can enable AuditLog.

Enabling AuditLog

This guide describes the steps to enable the AuditLog in a Kubernetes cluster.

Steps to Enable the AuditLog

  1. Create the Directory: Navigate to the $KUBEOPSROOT/lima directory and create the auditLog folder:

    mkdir -p $KUBEOPSROOT/lima/auditLog
    
  2. Create the Audit Policy File: In the $KUBEOPSROOT/lima/auditLog directory, create the policy.yaml file:

    touch $KUBEOPSROOT/lima/auditLog/policy.yaml
    
  3. Configure the Audit Policy: Add the content to policy.yaml according to the official Kubernetes Audit Policy documentation. Rules can be added or removed as needed to customize the auditlogs.

    Example content for policy.yaml:

    apiVersion: audit.k8s.io/v1
    kind: Policy
    rules:
      - level: Metadata
        resources:
          - group: ""
            resources: ["pods"]
    
  4. Enable the AuditLog: To enable the auditlog for a cluster, execute the following command:

    lima change auditlog <clustername> -a true
    

    Example:

    lima change auditlog democluster -a true
    

Note

  • The auditlog can also be disabled if needed by setting the -a parameter to false:

    lima change auditlog <clustername> -a false
    

Additional Information

  • More details on configuring the audit policy can be found in the official Kubernetes documentation: Audit Policy.

22 - How to set up SSH keys

Setting up SSH (Secure Shell) keys is an essential step for securely accessing remote servers without the need to enter a password each time. Here’s a short introduction on how to set up SSH keys.

To securely access the KubeOps master and worker machines, you need to create an SSH key pair (private and public key) on the admin machine. Afterwards, copy the public key onto each machine.

Install SSH Client

Most Linux distributions come with an SSH client pre-installed. If it's not installed, you can install it using your distribution's package manager.

For RHEL 8, use the following command.

sudo dnf install -y openssh-clients

Generate SSH Keys

If you do not already have an SSH key or if you want to generate a new key pair specifically for this connection, follow these steps.

Run the command

ssh-keygen

Follow the prompts to choose a file location and passphrase (optional but recommended for added security).
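
For example, to generate an ed25519 key pair with a comment and accept the default file location when prompted (the comment is only a label and can be chosen freely):

ssh-keygen -t ed25519 -C "admin@kubeops"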

Copy the Public Key to the Remote Machine

To avoid password prompts every time you connect, you can authorize your public key on the remote machine.

You can manually copy the public key to the server's authorized keys using the command ssh-copy-id.

ssh-copy-id <username>@<remote_host>

Replace <username>@<remote_host> with your actual username and the remote machine's IP address or hostname.

If ssh-copy-id is not available, you can use the following command:

cat ~/.ssh/id_rsa.pub | ssh username@remote_host "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"

Additional Information

For more information about commands see the documentation of your respective operating system.

For ssh or ssh-keygen you can use the manual pages:

man ssh
man ssh-keygen

23 - Accessing KubeOps RPM Server

Detailed instructions on how to access the KubeOps RPM Server.

Accessing the KubeOps RPM Server

In order to access the KubeOps RPM server, the following /etc/yum.repos.d/kubeops.repo file must be created with this content:

[kubeops-repo]
name = RHEL 8 BaseOS
baseurl = https://rpm.kubeops.net/kubeopsRepo/
gpgcheck = 0
enabled = 1
module_hotfixes=1

The key rpmServer must be added to the kubeopsctl.yaml file:

apiVersion: kubeops/kubeopsctl/alpha/v5 # mandatory
imagePullRegistry: "registry.preprod.kubernative.net/kubeops"
localRegistry: true
clusterName: "example"
kubernetesVersion: "1.28.2"
masterIP: 10.2.10.11
systemCpu: "200m"
systemMemory: "200Mi"
rpmServer: https://rpm.kubeops.net/kubeopsRepo/repodata/

zones:
  - name: zone1
    nodes:
      master:
        - name: master1
          ipAdress: 10.2.10.11
          status: active
          kubeversion: 1.28.2
        - name: master2
          ipAdress: 10.2.10.12
          status: active
          kubeversion: 1.28.2
      worker:
        - name: worker1
          ipAdress: 10.2.10.14
          status: active
          kubeversion: 1.28.2
        - name: worker2
          ipAdress: 10.2.10.15
          status: active
          kubeversion: 1.28.2
  - name: zone2
    nodes:
      master:
        - name: master3
          ipAdress: 10.2.10.13
          status: active
          kubeversion: 1.28.2  
      worker:
        - name: worker3
          ipAdress: 10.2.10.16
          status: active
          kubeversion: 1.28.2


# mandatory, set to true if you want to install it into your cluster
rook-ceph: true
harbor: true
opensearch: true
opensearch-dashboards: true
logstash: true
filebeat: true
prometheus: true
opa: true
kubeops-dashboard: true
certman: true
ingress: true 
keycloak: true
velero: true

harborValues: 
  harborpass: "password" # change to your desired password
  databasePassword: "Postgres_Password" # change to your desired password
  redisPassword: "Redis_Password" 
  externalURL: http://10.2.10.11:30002 # change to ip adress of master1

prometheusValues:
  grafanaUsername: "user"
  grafanaPassword: "password"

ingressValues:
  externalIPs: []

keycloakValues:
  keycloak:
    auth:
      adminUser: admin
      adminPassword: admin
  postgresql:
    auth:
      postgresPassword: ""
      username: bn_keycloak
      password: ""
      database: bitnami_keycloak
      existingSecret: ""

veleroValues:
  accessKeyId: "your_s3_storage_username"
  secretAccessKey: "your_s3_storage_password"

Accessing the RPM Server in an Air Gap Environment

If you have an air-gapped environment, the following files must be placed in the specified directories.

kubeops.repo in /etc/yum.repos.d/

[kubeops-repo]
name = RHEL 8 BaseOS
baseurl = https://rpm.kubeops.net/kubeopsRepo/
gpgcheck = 0
enabled = 1
module_hotfixes=1

route-ens192 in /etc/sysconfig/network-scripts/

The following entry must be added to the route-ens192 file:

193.7.169.20 via <Your Gateway Address> <network interface>

For example:

193.7.169.20 via 10.2.10.1 dev ens192

hosts in /etc/

The following entry must be added to the hosts file:

193.7.169.20 rpm.kubeops.net

Test your connection

Please ensure that each node has a connection to the RPM server. The following command can be used for this:

dnf list kubeopsctl --showduplicates

24 - Fix rook-ceph

Repair rook-ceph when worker nodes are down.

Fix rook-ceph

If some worker nodes are down, you need to change the rook-ceph configuration if you use the parameters useAllNodes and useAllDevices. This guide is for temporarily fixing rook-ceph and is not a permanent solution.

  1. Get the tools pod
kubectl -n <rook-ceph namespace> get pod | grep tools
  2. Get the status
kubectl -n <rook-ceph namespace> exec -it <name of the tools pod> -- bash
ceph status
ceph osd status

If there are OSDs that do not have the status exists,up, they need to be removed:

ceph osd out <id of osd>
ceph osd crush remove osd.<id of osd>
ceph auth del osd.<id of osd>

You can now check the rest of the OSDs with ceph osd status.

It could be that you also need to decrease the replication size:

ceph osd pool ls
ceph osd pool set <pool-name> size 2

The default pool name should be replicapool.

Then you can delete the deployments of the pods that are causing problems:

kubectl -n <rook-ceph namespace> delete deploy <deployment-name>