1 - Ingress Configuration

Manual configuration of the Nginx-Ingress-Controller

The Ingress Controller package is not fully configured out of the box. To make full use of the Ingress capabilities of the cluster, you need to manually update some settings of the corresponding service.

Locating the service

The service in question is called “ingress-nginx-controller” and can be found in the same namespace as the ingress package itself. To locate the service across all namespaces, you could use the following command.

kubectl get service -A | grep ingress-nginx-controller

This command should return two services, “ingress-nginx-controller” and “ingress-nginx-controller-admission”; only the first one needs to be adjusted further.

Setting the Ingress-Controller service to type NodePort

To edit the service, you can use the following command, although the actual namespace may be different. This will change the service type to NodePort.

kubectl patch service ingress-nginx-controller -n kubeops -p '{"spec":{"type":"NodePort"}}'

Kubernetes will now automatically assign unused port numbers to the nodePort fields to allow HTTP and HTTPS connections to the service. These can be retrieved by running the same command used to locate the service. Alternatively, you can use the following command, which sets the port numbers 30080 and 30443 for the respective protocols. If you do so, make sure that these port numbers are not already used by another NodePort service.

kubectl patch service ingress-nginx-controller -n kubeops --type=json -p '[{"op":"replace","path":"/spec/type","value":"NodePort"}, {"op":"add","path":"/spec/ports/0/nodePort","value":30080}, {"op":"add","path":"/spec/ports/1/nodePort","value":30443}]'

Configuring external IPs

If you have access to external IPs that route to one or more cluster nodes, you can expose your Kubernetes services of any type through these addresses. The command below shows how to add an external IP address to the service, using the example value “192.168.0.1”. Keep in mind that this value has to be changed to fit your network settings.

kubectl patch service ingress-nginx-controller -n kubeops -p '{"spec":{"externalIPs":["192.168.0.1"]}}'
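To verify the configuration, you can inspect the patched service; the assigned node ports appear in the PORT(S) column and any configured addresses in the EXTERNAL-IP column (adjust the namespace if yours differs):

kubectl get service ingress-nginx-controller -n kubeops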

2 - Use Keycloak

keycloak

The KubeOps platform now includes Keycloak, a single sign-on and login system that lets you use all the dashboards without having to enter your credentials each time.

Install keycloak

Keycloak is installed with kubeopsctl: set the keycloak parameter to true in your kubeopsctl.yaml:

...
keycloak: true # mandatory
...

Further down in the file, you can set the configuration parameters for Keycloak:

...

keycloakValues:
  namespace: "kubeops" # Optional, default is "keycloak"
  storageClass: "rook-cephfs" # Optional, default is "rook-cephfs"
  keycloak:
    auth:
      adminUser: admin # Optional, default is admin
      adminPassword: admin # Optional, default is admin
      existingSecret: "" # Optional, default is ""
  postgresql:
    auth:
      postgresPassword: "" # Optional, default is ""
      username: bn_keycloak # Optional, default is "bn_keycloak"
      password: "" # Optional, default is ""
      database: bitnami_keycloak # Optional, default is "bitnami_keycloak"
      existingSecret: "" # Optional, default is ""
...

Configure Dashboards for Keycloak

For Harbor

  1. Log in to the Harbor web console.
  2. Navigate to Administration > Configuration > Auth.
  3. Select OIDC as the Auth mode.
  4. Enter the required information from Keycloak:
  • OIDC Provider Name: Keycloak

  • OIDC Endpoint: (your Keycloak server URL)

  • OIDC Client ID: (The client ID you created in Keycloak for Harbor)

  • OIDC Client Secret: (The client secret you created in Keycloak for Harbor)

    Harbor-configuration-Example

  5. Confirm the settings and test the login via Keycloak.

For Prometheus

  1. Keycloak supports OAuth2, which can be used for authentication with Prometheus. To do this, you must change the configuration of Prometheus to use the OAuth2 flow.
  2. In the prometheus.yml configuration file, you can add the authentication parameters under the oauth2 key:
scrape_configs:
  - job_name: 'example-job'
    oauth2:
      client_id: 'your-client-id'
      client_secret: 'your-client-secret'
      token_url: 'http://keycloak.example.com/auth/realms/your-realm/protocol/openid-connect/token'

For OpenSearch

  1. Install the OpenSearch Security plugin if it is not already installed.
  2. Modify the OpenSearch security configuration file (config.yml) to use OIDC (OpenID Connect) for authentication:
authc:
  openid_auth_domain:
    http_enabled: true
    transport_enabled: true
    order: 0
    http_authenticator:
      type: openid
      challenge: false
      config:
        subject_key: preferred_username
        roles_key: roles
        openid_connect_url: http://keycloak.example.com/auth/realms/your-realm/.well-known/openid-configuration
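If OpenSearch Dashboards should authenticate against the same realm, its security settings can point to Keycloak as well. A minimal sketch of the corresponding opensearch_dashboards.yml entries, assuming a client created for OpenSearch in Keycloak (client ID and secret are placeholders):

opensearch_security.auth.type: "openid"
opensearch_security.openid.connect_url: "http://keycloak.example.com/auth/realms/your-realm/.well-known/openid-configuration"
opensearch_security.openid.client_id: "your-client-id"
opensearch_security.openid.client_secret: "your-client-secret"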

3 - Create Cluster

How to create a working cluster?

Pre-requisites

  • maintenance packages installed?
  • network connection?
  • LIMAROOT set

Steps

  • create yaml file
  • create cluster with multiple nodes
  • add nodes to created cluster
  • delete nodes when needed

Once you have completed the KubeOps installation, you are ready to dive into the KubeOps-Platform.

How to use LIMA

Downloaded all maintenance packages? If yes, then you are ready to use LIMA for managing your Kubernetes clusters!

In the following sections we will walk you through a quick cluster setup and adding nodes.

So the first thing to do is to create a YAML file that contains the specifications of your cluster. Customize the file below according to your downloaded maintenance packages, e.g. the parameters kubernetesVersion, firewall, containerRuntime. Also adjust the other parameters like masterPassword, masterHost, apiEndpoint to your environment.

createCluster.yaml

apiVersion: lima/clusterconfig/v1alpha2
spec:
  clusterName: ExampleClusterName
  masterUser: root
  masterPassword: "myPassword"
  masterHost: 10.2.1.11
  kubernetesVersion: 1.22.2
  registry: registry1.kubernative.net/lima
  useInsecureRegistry: false
  ignoreFirewallError: false
  firewall: firewalld
  apiEndpoint: 10.2.1.11:6443
  serviceSubnet: 192.168.128.0/20
  podSubnet: 192.168.144.0/20
  debug: true
  logLevel: v
  systemCpu: 100m
  systemMemory: 100Mi
  sudo: false
  containerRuntime: crio
  pluginNetwork:
    type: weave
    parameters:
      weavePassword: re4llyS7ron6P4ssw0rd
  auditLog: false
  serial: 1
  seLinuxSupport: true

Most of these parameters are optional and can be left out. If you want to know more about each parameter, please refer to our Full Documentation.


Set up a single node cluster

To set up a single node cluster we need our createCluster.yaml file from above.
Run the create cluster command on the admin node to create a cluster with one node.

lima create cluster -f createCluster.yaml

Done! LIMA is setting up your Kubernetes cluster. In a few minutes you have set up a regular single master cluster.

Once LIMA has finished successfully, you can check your Kubernetes single node cluster with kubectl get nodes.
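For example (the node name below is a placeholder; yours will be the hostname of your master node):

kubectl get nodes
# NAME                STATUS   ROLES                  AGE   VERSION
# <master-hostname>   Ready    control-plane,master   5m    v1.22.2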

It looks very alone and sad, right? Jump to the next section to add some friends to your cluster!


Optional step

The master node which you used to set up your cluster is only suitable for an example installation or for testing. To use this node for production workloads, remove the taint from the master node.

kubectl taint nodes --all node-role.kubernetes.io/master-
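On newer Kubernetes releases (1.24 and later) the master taint has been replaced by the control-plane taint, so depending on your cluster version the command to untaint the node may instead be:

kubectl taint nodes --all node-role.kubernetes.io/control-plane-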

Add nodes to your cluster

Let’s give your single node cluster some friends. What we need for this is another YAML file. We can call the YAML file whatever we want - we call it addNode.yaml.

addNode.yaml

apiVersion: lima/nodeconfig/v1alpha1
clusterName: ExampleClusterName
spec: 
  masters:
  - host: 10.2.1.12
    user: root
    password: "myPassword"
  workers:
  - host: 10.2.1.13 #IP-address of the node you want to add
    user: root
    password: "myPassword"

We do not need to pull any other maintenance packages. We already did that and are using the same specifications from our single node cluster. The only thing left to do is to use the create nodes command:

lima create nodes -f addNode.yaml

Done! LIMA adds the nodes to your single node cluster. After LIMA has finished, check the state of your Kubernetes cluster again with kubectl get nodes. Your master node should not be alone anymore!

4 - Install Maintenance Packages

Installing the essential Maintenance Packages

KubeOps provides packages for the supported Kubernetes tools. These maintenance packages help you update the Kubernetes tools on your clusters to the desired versions, along with their dependencies.

It is necessary to install the required maintenance packages to create your first Kubernetes cluster. The packages are available on kubeops hub.

So let’s get started!

Note: Be sure you have the supported KOSI version for your KubeOps version installed, or you cannot pull any maintenance packages!

Commands to install a package

The following are the most common commands used on the admin node to get and install any maintenance package.

  1. Use the command get maintenance to list all available maintenance packages.

     lima get maintenance
    

    This will display a list of all the available maintenance packages.

Example :
| SOFTWARE     | VERSION | STATUS     | SOFTWAREPACKAGE         | TYPE    |
| ------------ | ------- | ---------- | ----------------------- | ------- |
| Kubernetes   | 1.24.8  | available  | lima/kubernetes:1.24.8  | upgrade |
| iptablesEL8  | 1.8.4   | available  | lima/iptablesel8:1.8.4  | update  |
| firewalldEL8 | 0.8.2   | downloaded | lima/firewalldel8:0.8.2 | update  |

Please observe and download the correct packages based on the following important columns in this table.

|Name | Description |
|-------------------------------------------|-------------------------------------------|
| SOFTWARE | It is the name of software which is required for your cluster. |
| VERSION | It is the software version. Select correct version based on your Kubernetes and KubeOps version. |
| SOFTWAREPACKAGE | It is the unique name of the maintenance package. Use this to pull the package on your machine.|
| STATUS | One of the following statuses is indicated: |
| | - available: package is remotely available |
| | - not found: package not found |
| | - downloaded: the package is locally and remotely available |
| | - only local: package is locally available |
| | - unknown: unknown package |
  2. Use the command pull maintenance to pull/download the package onto your machine.

    lima pull maintenance <SOFTWAREPACKAGE>
    

    It is possible to pull more than 1 package with one pull invocation.
    For example:

    lima pull maintenance lima/kubernetes:1.23.5 lima/dockerEL7:18.09.1
    

List of Maintenance Packages

The following are the essential maintenance packages to be pulled. Use the commands mentioned above to install the desired packages.

1. Kubernetes

The first step is to choose a Kubernetes version and to pull its available package. LIMA currently supports the following Kubernetes versions:

| 1.26.x | 1.27.x  | 1.28.x | 1.29.x |
| ------ | ------- | ------ | ------ |
| 1.26.3 | 1.27.1  | 1.28.0 | 1.29.0 |
| 1.26.4 | 1.27.2  | 1.28.1 | 1.29.1 |
| 1.26.5 | 1.27.3  | 1.28.2 |        |
| 1.26.6 | 1.27.4  |        |        |
| 1.26.7 | 1.27.5  |        |        |
| 1.26.8 | 1.27.6  |        |        |
| 1.26.9 | 1.27.7  |        |        |
|        | 1.27.8  |        |        |
|        | 1.27.9  |        |        |
|        | 1.27.10 |        |        |

Following are the packages available for the supported Kubernetes versions.

| Kubernetes version | Available packages |
| ------------------ | ------------------ |
| 1.26.x             | kubernetes-1.26.x  |
| 1.27.x             | kubernetes-1.27.x  |
| 1.28.x             | kubernetes-1.28.x  |
| 1.29.x             | kubernetes-1.29.x  |

2. Install Kubectl

To install kubectl you won’t need to pull any other package. The Kubernetes package pulled in the step above already contains the kubectl installation file.

In the following example the downloaded package is kubernetes-1.23.5. Use dnf on RHEL 8 or zypper on openSUSE 15:

dnf install $LIMAROOT/packages/kubernetes-1.23.5/kubectl-1.23.5-0.x86_64.rpm
zypper install $LIMAROOT/packages/kubernetes-1.23.5/kubectl-1.23.5-0.x86_64.rpm

3. Kubernetes Dependencies

The next step is to pull the Kubernetes dependencies:

| OS                   | Available packages         |
| -------------------- | -------------------------- |
| RHEL 8 / openSUSE 15 | kubeDependencies-EL8-1.0.4 |

4. CRIs

Choose your CRI and pull the available packages:

| OS          | CRI        | Available packages         |
| ----------- | ---------- | -------------------------- |
| openSUSE 15 | docker     | dockerLP151-19.03.5        |
|             | containerd | containerdLP151-1.6.6      |
|             | CRI-O      | crioLP151-1.22.0           |
|             |            | podmanLP151-3.4.7          |
| RHEL 8      | docker     | dockerEL8-20.10.2          |
|             | containerd | containerdEL8-1.4.3        |
|             | CRI-O      | crioEL8-x.xx.x             |
|             |            | crioEL8-dependencies-1.0.1 |
|             |            | podmanEL8-18.09.1          |

Note: CRI-O packages depend on the chosen Kubernetes version. Choose the CRI-O package which matches the chosen Kubernetes version.

  • E.g. kubernetes-1.23.5 requires crioEL7-1.23.5
  • E.g. kubernetes-1.24.8 requires crioEL7-1.24.8

5. Firewall

Choose your firewall and pull the available packages:

| OS          | Firewall  | Available packages |
| ----------- | --------- | ------------------ |
| openSUSE 15 | iptables  | iptablesEL7-1.4.21 |
|             | firewalld | firewalldEL7-0.6.3 |
| RHEL 8      | iptables  | iptablesEL8-1.8.4  |
|             | firewalld | firewalldEL8-0.9.3 |

Example

Assuming a setup with RHEL 8, CRI-O and Kubernetes 1.22.2, the following maintenance packages need to be installed (see the pull commands after the list):

  • kubernetes-1.22.2
  • kubeDependencies-EL8-1.0.2
  • crioEL8-1.22.2
  • crioEL8-dependencies-1.0.1
  • podmanEL8-18.09.1
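The exact SOFTWAREPACKAGE names should always be taken from the output of lima get maintenance; assuming they follow the lima/<software>:<version> pattern shown earlier, the pull commands for this example would look roughly like this:

lima pull maintenance lima/kubernetes:1.22.2
lima pull maintenance lima/kubedependencies-el8:1.0.2
lima pull maintenance lima/crioel8:1.22.2 lima/crioel8-dependencies:1.0.1 lima/podmanel8:18.09.1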


5 - Upgrade KubeOps Software

Upgrading KubeOps Software

1. Update essential KubeOps Packages

Update kubeops setup

Before installing the KubeOps software, create a kubeopsctl.yaml file with the following parameters:

### General values for registry access ###
apiVersion: kubeops/kubeopsctl/alpha/v3  # mandatory
kubeOpsUser: "demo" # mandatory
kubeOpsUserPassword: "Password" # mandatory
kubeOpsUserMail: "demo@demo.net" # mandatory
imagePullRegistry: "registry1.kubernative.net/lima" # mandatory, the registry from which the images for the cluster are pulled
localRegistry: false # mandatory, set to true if you use a local registry

After creating the kubeopsctl.yaml, also provide the following setup configuration values on your machine to update the software:

### Values for setup configuration ###
clusterName: "example" # mandatory
clusterUser: "root" # mandatory
kubernetesVersion: "1.28.2" # mandatory, check lima documentation
masterIP: 10.2.10.12 # mandatory
containerRuntime: "containerd" # mandatory

1. Remove old KubeOps software

If you want to remove the KubeOps software, it is recommended that you use your package manager. For RHEL environments this is yum, while for openSUSE environments it is zypper. To remove the KubeOps software with yum, use the following commands:

yum autoremove kosi
yum autoremove lima

If you want to remove the KubeOps software with zypper, use the following commands:

zypper remove kosi
zypper remove lima

2. Install new KubeOps software

Now, you can install the new software with yum or zypper.

yum install <kosi-rpm>
zypper install <kosi-rpm>

3. Upgrade kubeops software

To upgrade your KubeOps software, use the following command:

  kubeopsctl apply -f kubeopsctl.yaml

4. Maintain the old Deployment Information (optional)

After upgrading KOSI from 2.5 to 2.6, the deployment.yaml file has to be moved to the $KUBEOPSROOT directory if you want to keep old deployments.
Make sure the $KUBEOPSROOT variable is set.

  1. Set the $KUBEOPSROOT variable
echo 'export KUBEOPSROOT="$HOME/kubeops"' >> $HOME/.bashrc
source ~/.bashrc
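A minimal sketch of moving the file, assuming the old deployment.yaml still lives in the directory your previous KOSI setup used (the source path below is a placeholder):

mkdir -p "$KUBEOPSROOT"
mv /path/to/old/deployment.yaml "$KUBEOPSROOT/"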

5. Update other software

1. Upgrade rook-ceph

In order to upgrade rook-ceph, go into your kubeopsctl.yaml file and change rook-ceph: false to rook-ceph: true.

After that, use the command below:

kubeopsctl apply -f kubeopsctl.yaml

2. Update harbor

To update Harbor, change your kubeopsctl.yaml file and set harbor: false to harbor: true.

Please set other applications to false before applying the kubeopsctl.yaml file.

3. Update opensearch

In order to update opensearch, change your kubeopsctl.yaml file and set opensearch: false to opensearch: true.

Please set other applications to false before applying the kubeopsctl.yaml file.

4. Update logstash

In order to update logstash, change your kubeopsctl.yaml file and set logstash: false to logstash: true.

Please set other applications to false before applying the kubeopsctl.yaml file.

5. Update filebeat

In order to update filebeat, change your kubeopsctl.yaml file and set filebeat: false to filebeat: true.

Please set other applications to false before applying the kubeopsctl.yaml file.

6. Update prometheus

In order to update prometheus, change your kubeopsctl.yaml file and set prometheus: false to prometheus: true.

Please set other applications to false before applying the kubeopsctl.yaml file.

7. Update opa

In order to update opa, change your kubeopsctl.yaml file and set opa: false to opa: true. Please set other applications to false before applying the kubeopsctl.yaml file.

6 - Use Kubeopsctl

KubeOpsctl

kubeopsctl is a new KubeOps tool which can be used to manage a cluster and its state easily. You simply describe the desired cluster state and kubeopsctl creates a cluster with that state.

Using KubeOpsCtl

Using this feature is as easy as describing the desired cluster state and details in a cluster yaml file and using the apply command. Below are the detailed steps.

1. Configure Cluster/Nodes/Software using yaml file

You need a cluster definition file which describes the different aspects of your cluster. This file describes only one cluster.

Full yaml syntax

apiVersion: kubeops/kubeopsctl/alpha/v3  # mandatory
kubeOpsUser: "demo" # mandatory,  change to your username
kubeOpsUserPassword: "Password" # mandatory,  change to your password
kubeOpsUserMail: "demo@demo.net" # change to your email
imagePullRegistry: "registry1.kubernative.net/lima" # mandatory
localRegistry: false # mandatory
### Values for setup configuration ###
clusterName: "testkubeopsctl"  # mandatory
clusterUser: "mnyuser"  # mandatory
kubernetesVersion: "1.28.2" # mandatory, check lima documentation
masterIP: 10.2.10.31 # mandatory
# at least 3 masters and 3 workers are needed
zones:
  - name: zone1
    nodes:
      master:
        - name: cluster1master1
          ipAdress: 10.2.10.11
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
        - name: cluster1master2
          ipAdress: 10.2.10.12
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
      worker:
        - name: cluster1worker1
          ipAdress: 10.2.10.14
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
        - name: cluster1worker2
          ipAdress: 10.2.10.15
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
  - name: zone2
    nodes:
      master:
        - name: cluster1master3
          ipAdress: 10.2.10.13
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: drained
          kubeversion: 1.28.2  
      worker:
        - name: cluster1worker1
          ipAdress: 10.2.10.16
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2

# set to true if you want to install it into your cluster
rook-ceph: false # mandatory
harbor: false # mandatory
opensearch: false # mandatory
opensearch-dashboards: false # mandatory
logstash: false # mandatory
filebeat: false # mandatory
prometheus: false # mandatory
opa: false # mandatory
headlamp: false # mandatory
certman: false # mandatory
ingress: false # mandatory
keycloak: false # mandatory

###Values for Rook-Ceph###
rookValues:
  namespace: kubeops
  nodePort: 31931 # optional, default: 31931
  cluster:
    storage:
      # Will only be used if useAllDevices is set to false and will be ignored if individual devices have been specified on a node.
      deviceFilter: "^sd[a-b]"
      # This setting can be used to store metadata on a different device. Only recommended if an additional metadata device is available.
      # Optional, will be overwritten by the corresponding node-level setting.
      config:
        metadataDevice: "sda"
      # Names of individual nodes in the cluster that should have their storage included.
      # Will only be used if useAllNodes is set to false.
      nodes:
        - name: "<ip-adress of node_1>"
          devices:
            - name: "sdb" 
        - name: "<ip-adress of node_2>"
          deviceFilter: "^sd[a-b]"
          config:
            metadataDevice: "sda" # optional
#-------------------------------------------------------------------------------------------------------------------------------
### Values for Postgres ###
postgrespass: "password" # mandatory, set password for harbor postgres access 
postgres:
  resources:
    requests:
      storage: 2Gi # mandatory, depending on storage capacity
#-------------------------------------------------------------------------------------------------------------------------------
### Values for Redis ###
redispass: "password" # mandatory set password for harbor redis access 
redis:
  resources:
    requests:
      storage: 2Gi # mandatory depending on storage capacity
#-------------------------------------------------------------------------------------------------------------------------------
### Values for Harbor deployment ###
## For detailed explaination for each key see: https://artifacthub.io/packages/helm/harbor/harbor/1.8.1#configuration ##
harborValues: 
  harborpass: "password" # mandatory: set password for harbor access 
  externalURL: https://10.2.10.13 # mandatory, the ip address, from which harbor is accessable outside of the cluster
  nodePort: 30003
  harborPersistence:
    persistentVolumeClaim:
      registry:
        size: 5Gi # mandatory, depending on storage capacity
      chartmuseum:
        size: 5Gi # mandatory, depending on storage capacity
      jobservice:
        jobLog:
          size: 1Gi # mandatory: Depending on storage capacity
        scanDataExports:
          size: 1Gi # mandatory: Depending on storage capacity
      database:
        size: 1Gi # mandatory, depending on storage capacity
      redis:
        size: 1Gi # mandatory, depending on storage capacity
      trivy: 
        size: 5Gi # mandatory, depending on storage capacity
#--------------------------------------------------------------------------------------------------------------------------------------
###Values for filebeat deployment###
filebeatValues:
  namespace: kubeops # optional, default is kubeops   
#--------------------------------------------------------------------------------------------------------------------------------
###Values for Logstash deployment###
##For detailed explaination for each key see: https://github.com/elastic/helm-charts/releases/tag/v7.16.3###
logstashValues:
  namespace: kubeops
  volumeClaimTemplate:
    resources:
      requests:
        storage: 1Gi # mandatory, depending on storage capacity
#--------------------------------------------------------------------------------------------------------------------------------------
###Values for OpenSearch-Dashboards deployment###
##For detailed explaination for each key see: https://github.com/opensearch-project/helm-charts/tree/main/charts/opensearch-dashboards###
openSearchDashboardValues:
  namespace: kubeops
  nodePort: 30050
#--------------------------------------------------------------------------------------------------------------------------------
###Values for OpenSearch deployment###
##For detailed explaination for each key see: https://github.com/opensearch-project/helm-charts/tree/main/charts/opensearch###
openSearchValues:
  namespace: kubeops
  resources:
  persistence:
    size: 4Gi # mandatory
#--------------------------------------------------------------------------------------------------------------------------------
###Values for Prometheus deployment###
prometheusValues:
  prometheusResources:
    nodePort: 32090
#--------------------------------------------------------------------------------------------------------------------------------
###Values for OPA deployment###
opaValues:
  namespace: kubeops
#--------------------------------------------------------------------------------------------------------------------------------
###Values for Headlamp deployment###
headlampValues:
  service:
    nodePort: 30007
#--------------------------------------------------------------------------------------------------------------------------------
###Values for cert-manager deployment###
certmanValues:
  namespace: kubeops
  replicaCount: 3
  logLevel: 2
#--------------------------------------------------------------------------------------------------------------------------------
###Values for ingress-nginx deployment###
ingressValues:
  namespace: kubeops
The sections below walk through this file piece by piece. The first part covers registry access and the basic cluster setup:

apiVersion: kubeops/kubeopsctl/alpha/v3  # mandatory
kubeOpsUser: "demo" # mandatory,  change to your username
kubeOpsUserPassword: "Password" # mandatory,  change to your password
kubeOpsUserMail: "demo@demo.net" # change to your email
imagePullRegistry: "registry1.kubernative.net/lima" # mandatory
localRegistry: false # mandatory
### Values for setup configuration ###
clusterName: "testkubeopsctl"  # mandatory
clusterUser: "mnyuser"  # mandatory
kubernetesVersion: "1.28.2" # mandatory, check lima documentation
#masterHost: optional if you have an hostname, default value in "masterIP"
masterIP: 10.2.10.31 # mandatory
firewall: "nftables" # mandatory, default "nftables"
pluginNetwork: "calico" # mandatory, default "nftables"
containerRuntime: "containerd" # mandatory, default "containerd"

These are parameters for the cluster creation and for the software used during cluster creation, e.g. the container runtime for running the containers of the cluster. There are also parameters for the LIMA software (see the LIMA documentation for further explanation).

### Additional values for cluster configuration
useInsecureRegistry: false # optional, default is false
ignoreFirewallError: false # optional, default is false
serviceSubnet: 192.168.128.0/17 # optional, default "192.168.128.0/17"
podSubnet: 192.168.0.0/17 # optional, default "192.168.0.0/17"
debug: true # optional, default is true
logLevel: vvvvv # optional, default "vvvvv"
systemCpu: "1" # optional, default "1"
systemMemory: "2G" # optional, default "2G"
sudo: true # optional, default is true

Also important are the networking parameters, such as the subnets for the pods and services inside the Kubernetes cluster.

# at least 3 masters and 3 workers are needed
zones:
  - name: zone1
    nodes:
      master:
        - name: cluster1master1
          ipAdress: 10.2.10.11
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
        - name: cluster1master2
          ipAdress: 10.2.10.12
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
      worker:
        - name: cluster1worker1
          ipAdress: 10.2.10.14
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
        - name: cluster1worker2
          ipAdress: 10.2.10.15
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
  - name: zone2
    nodes:
      master:
        - name: cluster1master3
          ipAdress: 10.2.10.13
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2  
      worker:
        - name: cluster1worker1
          ipAdress: 10.2.10.16
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: drained
          kubeversion: 1.28.2

Here are the two zones, which contain master and worker nodes.
There are two different node states: active and drained.
There can also be two different Kubernetes versions, which makes it possible to do updates in tranches with kubeopsctl; a sketch follows below. You can also set the system memory and system CPU of the nodes for Kubernetes itself. It is not possible to delete nodes with kubeopsctl; for deleting nodes you have to use LIMA. If you want to do an update in tranches, you need at least one master on the newer version.
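For example, a first tranche could move only one master to the newer release while the other nodes stay on the previous version. A minimal sketch of such a zone, reusing the node layout from above (the version numbers here are purely illustrative):

zones:
  - name: zone1
    nodes:
      master:
        - name: cluster1master1
          ipAdress: 10.2.10.11
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi
          status: active
          kubeversion: 1.28.2   # upgraded in the first tranche
        - name: cluster1master2
          ipAdress: 10.2.10.12
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi
          status: active
          kubeversion: 1.27.10  # still on the previous version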

All other parameters are explained here.

2. Apply changes to cluster

Once you have configured the cluster changes in the yaml file, use the following command to apply them.

kubeopsctl apply -f kubeopsctl.yaml

7 - Backup and restore

Backup and restoring artifacts

What is Velero?

Velero uses object storage to store backups and associated artifacts. It also optionally integrates with supported block storage systems to snapshot your persistent volumes. Before beginning the installation process, you should identify the object storage provider and optional block storage provider(s) you’ll be using from the list of compatible providers.

Velero supports storage providers for both cloud-provider environments and on-premises environments.

Velero prerequisites:

  • Access to a Kubernetes cluster, v1.16 or later, with DNS and container networking enabled.
  • kubectl installed locally
  • Object Storage (S3, Cloud Provider Environment, On-Premises Environment)

Compatible providers and on-premises documentation can be found at https://velero.io/docs

Install Velero

This command is an example of how you can install Velero into your cluster:

velero install --provider aws --plugins velero/velero-plugin-for-aws:v1.2.1 --bucket velero --secret-file ./credentials-velero --use-volume-snapshots=false --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://minio.velero.svc:9000

NOTE:

  • s3Url has to be the URL of your S3 storage endpoint.
  • example for credentials-velero file:
    [default]
    aws_access_key_id = your_s3_storage_username
    aws_secret_access_key = your_s3_storage_password
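
Before creating backups it is worth verifying that the Velero deployment is running and that the backup storage location is reachable:

kubectl get pods -n velero
velero backup-location get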
    

Backup the cluster

Scheduled Backups

This command creates a backup for the cluster every 6 hours:

velero schedule create cluster --schedule "0 */6 * * *"

Get Schedules

This command lists all schedules for backups:

velero schedule get

Delete Schedules

This command deletes the specified schedule:

velero schedule delete cluster

Restore Scheduled Backup

This command restores the backup according to a schedule:

velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>

Backup

This command creates a backup for the cluster:

velero backup create cluster

Get Backups

This command lists all created backups:

velero backup get

Delete Backups

This command deletes the specified backup:

velero backup delete <BACKUP NAME>

Restore Backup

This command restores the specified backup:

velero restore create <RESOURCE NAME> --from-backup <BACKUP NAME>

Backup a specific deployment

Filebeat

Scheduled Backups

This command creates a backup for the namespace “logging” every 6 hours:

velero schedule create filebeat --schedule "0 */6 * * *" --include-namespaces logging --include-cluster-resources=true

This command creates a backup for the deployment “filebeat” every 6 hours:

velero schedule create filebeat --schedule "0 */6 * * *" --include-namespaces logging --selector app=filebeat-filebeat,release=filebeat --include-resources serviceaccount,deployment,daemonset,configmap,clusterrolebinding,clusterrole --include-cluster-resources=true

Get Schedules

This command lists all schedules for backups:

velero schedule get

Delete Schedules

This command deletes the specified schedule:

velero schedule delete filebeat

Restore Scheduled Backup

This command restores the backup from a schedule:

velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>

Backup

This command creates a backup for the namespace “logging”:

velero backup create filebeat --include-namespaces logging --include-cluster-resources=true

This command creates a backup for the deployment “filebeat”:

velero backup create filebeat --include-namespaces logging --selector app=filebeat-filebeat,release=filebeat --include-resources serviceaccount,deployment,daemonset,configmap,clusterrolebinding,clusterrole --include-cluster-resources=true

Get Backups

This command lists all created backups:

velero backup get

Delete Backups

This command deletes the specified backup:

velero backup delete <BACKUP NAME>

Restore Backup

This command restores the specified backup:

velero restore create <RESOURCE NAME> --from-backup <BACKUP NAME>

Logstash

Scheduled Backups

This command creates a backup for the namespace “logging” every 6 hours:

velero schedule create logstash --schedule "0 */6 * * *" --include-namespaces logging --include-cluster-resources=true

This command creates a backup for the deployment “logstash” every 6 hours:

velero schedule create logstash --schedule "0 */6 * * *" --include-namespaces logging --selector app=logstash-logstash,chart=logstash,release=logstash --include-resources StatefulSet,ServiceAccount,Service,Secret,RoleBinding,Role,PodSecurityPolicy,PodDisruptionBudget,Ingress,ConfigMap --include-cluster-resources=true

Get Schedules

This command lists all schedules for backups:

velero schedule get

Delete Schedules

This command deletes the specified schedule:

velero schedule delete logstash

Restore Scheduled Backup

This command restores the backup from a schedule:

velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>

Backup

This command creates a backup for the namespace “logging”:

velero backup create logstash --include-namespaces logging --include-cluster-resources=true

This command creates a backup for the deployment “logstash”:

velero backup create logstash --include-namespaces logging --selector app=logstash-logstash,chart=logstash,release=logstash --include-resources StatefulSet,ServiceAccount,Service,Secret,RoleBinding,Role,PodSecurityPolicy,PodDisruptionBudget,Ingress,ConfigMap --include-cluster-resources=true

Get Backups

This command lists all created backups:

velero backup get

Delete Backups

This command deletes the specified backup:

velero backup delete <BACKUP NAME>

Restore Backup

This command restores the specified backup:

velero restore create <RESOURCE NAME> --from-backup <BACKUP NAME>

OpenSearch

Scheduled Backups

This command creates a backup for the namespace “logging” every 6 hours:

velero schedule create opensearch --schedule "0 */6 * * *" --include-namespaces logging --include-cluster-resources=true

This command creates a backup for the deployment “opensearch” every 6 hours:

velero schedule create opensearch --schedule "0 */6 * * *" --include-namespaces logging --selector app.kubernetes.io/instance=opensearch,app.kubernetes.io/name=opensearch --include-resources ConfigMap,Ingress,NetworkPolicy,PodDisruptionBudget,PodSecurityPolicy,Role,RoleBinding,Secret,Service,ServiceAccount,StatefulSet --include-cluster-resources=true

Get Schedules

This command lists all schedules for backups:

velero schedule get

Delete Schedules

This command deletes the specified schedule:

velero schedule delete opensearch

Restore Scheduled Backup

This command restores the backup from a schedule:

velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>

Backup

This command creates a backup for the namespace “logging”:

velero backup create opensearch --include-namespaces logging --include-cluster-resources=true

This command creates a backup for the deployment “opensearch”:

velero backup create opensearch --include-namespaces logging --selector app.kubernetes.io/instance=opensearch,app.kubernetes.io/name=opensearch --include-resources ConfigMap,Ingress,NetworkPolicy,PodDisruptionBudget,PodSecurityPolicy,Role,RoleBinding,Secret,Service,ServiceAccount,StatefulSet --include-cluster-resources=true

Get Backups

This command lists all created backups:

velero backup get

Delete Backups

This command deletes the specified backup:

velero backup delete <BACKUP NAME>

Restore Backup

This command restores the specified backup:

velero restore create <RESOURCE NAME> --from-backup <BACKUP NAME>

Prometheus

Scheduled Backups

This command creates a backup for the namespace “monitoring” every 6 hours:

velero schedule create prometheus --schedule "0 */6 * * *" --include-namespaces monitoring --include-cluster-resources=true

This command creates a backup for the deployment “prometheus” every 6 hours:

velero schedule create prometheus --schedule "0 */6 * * *" --include-namespaces monitoring --include-resources Alertmanager,Secret,Ingress,List,PodDisruptionBudget,Role,RoleBinding,PodSecurityPolicy,Service,ServiceAccount,ServiceMonitor,Endpoints,ConfigMap,ConfigMapList,ClusterRole,ClusterRoleBinding,SecretProviderClass,PodMonitor,Prometheus,Job,NetworkPolicy,MutatingWebhookConfiguration,ValidatingWebhookConfiguration,Issuer,Deployment,VerticalPodAutoscaler,ThanosRuler --include-cluster-resources=true

Get Schedules

This command lists all schedules for backups:

velero schedule get

Delete Schedules

This command deletes the specified schedule:

velero schedule delete prometheus

Restore Scheduled Backup

This command restores the backup from a schedule:

velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>

Backup

This command creates a backup for the namespace “monitoring”:

velero backup create prometheus --include-namespaces monitoring --include-cluster-resources=true

This command creates a backup for the deployment “prometheus”:

velero backup create prometheus --include-namespaces monitoring --include-resources Alertmanager,Secret,Ingress,List,PodDisruptionBudget,Role,RoleBinding,PodSecurityPolicy,Service,ServiceAccount,ServiceMonitor,Endpoints,ConfigMap,ConfigMapList,ClusterRole,ClusterRoleBinding,SecretProviderClass,PodMonitor,Prometheus,Job,NetworkPolicy,MutatingWebhookConfiguration,ValidatingWebhookConfiguration,Issuer,Deployment,VerticalPodAutoscaler,ThanosRuler --include-cluster-resources=true

Get Backups

This command lists all created backups:

velero backup get

Delete Backups

This command deletes the specified backup:

velero backup delete <BACKUP NAME>

Restore Backup

This command restores the specified backup:

velero restore create <RESOURCE NAME> --from-backup <BACKUP NAME>

Harbor

Scheduled Backups

This command creates a backup for the namespace “harbor” every 6 hours:

velero schedule create harbor --schedule "0 */6 * * *" --include-namespaces harbor --include-cluster-resources=true

This command creates a backup for the deployment “harbor” every 6 hours:

velero schedule create harbor --schedule "0 */6 * * *" --include-namespaces harbor --include-resources ConfigMap,Deployment,PersistentVolumeClaim,Secret,Service,StatefulSet,Ingress,ServiceMonitor --include-cluster-resources=true

Get Schedules

This command lists all schedules for backups:

velero schedule get

Delete Schedules

This command deletes the specified schedule:

velero schedule delete harbor

Restore Scheduled Backup

This command restores the backup from a schedule:

velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>

Backup

This command creates a backup for the namespace “harbor”:

velero backup create harbor --include-namespaces harbor --include-cluster-resources=true

This command creates a backup for the deployment “harbor”:

velero backup create harbor --include-namespaces harbor --include-resources ConfigMap,Deployment,PersistentVolumeClaim,Secret,Service,StatefulSet,Ingress,ServiceMonitor --include-cluster-resources=true

Get Backups

This command lists all created backups:

velero backup get

Delete Backups

This command deletes the specified backup:

velero backup delete <BACKUP NAME>

Restore Backup

This command restores the specified backup:

velero restore create <RESOURCE NAME> --from-backup <BACKUP NAME>

Gatekeeper

Scheduled Backups

This command creates a backup for the namespace “gatekeeper-system” every 6 hours:

velero schedule create gatekeeper --schedule "0 */6 * * *" --include-namespaces gatekeeper-system --include-cluster-resources=true

This command creates a backup for the deployment “gatekeeper” every 6 hours:

velero schedule create gatekeeper --schedule "0 */6 * * *" --include-namespaces gatekeeper-system --include-resources PodSecurityPolicy,ServiceAccount,Deployment,PodDisruptionBudget,ResourceQuota,ClusterRole,Role,ClusterRoleBinding,RoleBinding,MutatingWebhookConfiguration,ValidatingWebhookConfiguration,Secret,Service,Job --include-cluster-resources=true

Get Schedules

This command lists all schedules for backups:

velero schedule get

Delete Schedules

This command deletes the specified schedule:

velero schedule delete gatekeeper

Restore Scheduled Backup

This command restores the backup from a schedule:

velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>

Backup

This command creates a backup for the namespace “gatekeeper-system”:

velero backup create gatekeeper --include-namespaces gatekeeper-system --include-cluster-resources=true

This command creates a backup for the deployment “gatekeeper”:

velero backup create gatekeeper --include-namespaces gatekeeper-system --include-resources PodSecurityPolicy,ServiceAccount,Deployment,PodDisruptionBudget,ResourceQuota,ClusterRole,Role,ClusterRoleBinding,RoleBinding,MutatingWebhookConfiguration,ValidatingWebhookConfiguration,Secret,Service,Job --include-cluster-resources=true

Get Backups

This command lists all created backups:

velero backup get

Delete Backups

This command deletes the specified backup:

velero backup delete <BACKUP NAME>

Restore Backup

This command restores the specified backup:

velero restore create <RESOURCE NAME> --from-backup <BACKUP NAME>

Rook-Ceph

Scheduled Backups

This command creates a backup for the namespace “rook-ceph” every 6 hours:

velero schedule create rook-ceph --schedule "0 */6 * * *" --include-namespaces rook-ceph --include-cluster-resources=true

Get Schedules

This command lists all schedules for backups:

velero schedule get

Delete Schedules

This command deletes the specified schedule:

velero schedule delete rook-ceph

Restore Scheduled Backup

This command restores the backup from a schedule:

velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>

Backup

This command creates a backup for the namespace “rook-ceph”:

velero backup create rook-ceph --include-namespaces rook-ceph --include-cluster-resources=true

Get Backups

This command lists all created backups:

velero backup get

Delete Backups

This command deletes the specified backup:

velero backup delete <BACKUP NAME>

Restore Backup

This command restores the specified backup:

velero restore create <RESOURCE NAME> --from-backup <BACKUP NAME>

8 - Renew Certificates

Renewing all certificates at once


LIMA enables you to renew all certificates for a specific cluster on all control-plane nodes with one command.

lima renew cert <clusterName>
Note: Renewing certificates can take several minutes because all certificate services are restarted.

Here is an example to renew certificates on cluster with name “Democluster”:

lima renew cert Democluster

Note: This command renews all certificates on the existing control plane; there is no option to renew individual certificates.
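To check the new expiry dates afterwards, you can inspect the certificates on a control-plane node; this assumes the control plane was provisioned with kubeadm:

kubeadm certs check-expiration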

9 - Deploy Package On Cluster

Deploying package on Cluster

You can install artifacts in your cluster in several ways. For this purpose, you can use these four plugins when creating a package:

  • helm
  • kubectl
  • cmd
  • Kosi

As an example, this guide installs the nginx-ingress Ingress Controller.

Using the Helm-Plugin

Prerequisite

In order to install an artifact with the Helm plugin, the Helm chart must first be downloaded. This step is not covered in this guide.
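As a sketch of that prerequisite for the nginx-ingress example used below, the chart could be fetched with Helm; the repository URL and chart version are assumptions and must match the chart you actually intend to deploy:

helm repo add nginx-stable https://helm.nginx.com/stable
helm repo update
helm pull nginx-stable/nginx-ingress --version 0.16.1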

Create KOSI package

First you need to create a KOSI package. The following command creates the necessary files in the current directory:

kosi create

The downloaded Helm chart must also be located in the current directory. To customize the deployment of the Helm chart, the values.yaml file must be edited. This file can be downloaded from ArtifactHub and must be placed in the same directory as the Helm chart.

All files required by a task in the package must be named in the package.yaml file under includes.files. The container images required by the Helm chart must also be listed in the package.yaml under includes.containers. For installation, the required files and images must be listed under the installation.includes key.
In the example below, only two files are required for the installation: the Helm Chart for the nginx-ingress and the values.yaml to configure the deployment. To install nginx-ingress you will also need the nginx/nginx-ingress image with the tag 3.0.1.

To install nginx-ingress with the Helm plugin, call the plugin as shown in the example under installation.tasks. The deployment configuration file is listed under values and the packed Helm chart is specified with the key tgz. Furthermore, it is also possible to specify the namespace in which the artifact should be deployed and the name of the deployment. The full documentation for the Helm plugin can be found here.

apiversion: kubernative/kubeops/sina/user/v3
name: deployExample
description: "This Package is an example. 
              It shows how to deploy an artifact to your cluster using the helm plugin."
version: 0.1.0  
includes: 
  files:  
    config: "values.yaml"
    nginx: "nginx-ingress-0.16.1.tgz"
  containers: 
    nginx-ingress:
      registry: docker.io 
      image: nginx/nginx-ingress
      tag: 3.0.1
docs: docs.tgz
logo: logo.png
installation:  
  includes: 
    files: 
      - config 
      - nginx
    containers: 
      - nginx-ingress
  tasks: 
    - helm:
        command: "install"
        values:
          - values.yaml
        tgz: "nginx-ingress-0.16.1.tgz"
        namespace: dev
        deploymentName: nginx-ingress
...

update:  
  tasks:
  
delete:  
  tasks:

Once the package.yaml file has been fully configured, all files must be combined into a KOSI package. To do this, execute the following command in the directory where the package.yaml file is located.

kosi build

To make the generated kosi package available on other machines, it is pushed to the user’s private KubeOps Hub. To do this, the user must first log in to the hub.

$ kosi login -u <username>
2023-02-04 11:19:43 Info:      KOSI version: 2.6.0_Beta0
2023-02-04 11:19:43 Info:      Please enter password
****************
2023-02-04 11:19:26 Info:      Login Succeeded to Hub.
$ kosi push --hub kosi
2023-02-04 11:23:18 Info:      KOSI version: 2.6.0_Beta0
2023-02-04 11:23:19 Info:      Push to Private Registry registry.preprod.kubernative.net/<username>/

Deployment

Once the KOSI package has been created and published, it needs to be installed on the Admin node. The following command will download and execute the package. The package name and version refer to the values defined in package.yaml with the keys name and version.

kosi install --hub <username> <username>/<packagename>:<version>

For the example package, the command would be: kosi install --hub <username> <username>/deployExample:0.1.0.

Using the Kubectl-Plugin

Prerequisite

In order to install an artifact with the Kubectl plugin, the kubeops-kubernetes-plugins package must be installed on the admin node. This step is not covered in this guide.

Create KOSI package

First you need to create a KOSI package. The following command creates the necessary files in the current directory:

kosi create

The NGINX ingress controller YAML manifest can either be automatically downloaded and applied directly with kubectl apply, or it can be downloaded manually if you want to customize the deployment. The YAML manifest can be downloaded from the NGINX GitHub Repo and must be placed in the same directory as the files for the kosi package.

All files required by a task in the package must be named in the package.yaml file under includes.files. The container images required by the YAML manifest must also be listed in the package.yaml under includes.containers. For installation, the required files and images must be listed under the installation.includes key.
In the example below, only one file is required for the installation: the YAML manifest for the nginx-ingress controller. To install nginx-ingress you will also need the registry.k8s.io/ingress-nginx/controller image with the tag v1.5.1 and the image registry.k8s.io/ingress-nginx/kube-webhook-certgen with tag v20220916-gd32f8c343.

To install nginx-ingress with the Kubectl plugin, call the plugin as shown in the example under installation.tasks. The full documentation for the Kubectl plugin can be found here.

apiversion: kubernative/kubeops/sina/user/v3
name: deployExample
description: "This Package is an example. 
              It shows how to deploy an artifact to your cluster using the helm plugin."
version: 0.1.0  
includes: 
  files:  
    manifest: "deploy.yaml"
  containers: 
    nginx-ingress:
      registry: registry.k8s.io
      image: ingress-nginx/controller
      tag: v1.5.1
    webhook-certgen:
      registry: registry.k8s.io
      image: ingress-nginx/kube-webhook-certgen
      tag: v20220916-gd32f8c343
docs: docs.tgz
logo: logo.png
installation:  
  includes: 
    files: 
      - manifest
    containers: 
      - nginx-ingress
      - webhook-certgen
  tasks: 
    - kubectl:
        operation: "apply"
        flags: " -f <absolute path>/deploy.yaml"
        sudo: true
        sudoPassword: "toor"

...

update:  
  tasks:
  
delete:  
  tasks:

Once the package.yaml file has been fully configured, all files must be combined into a KOSI package. To do this, execute the following command in the directory where the package.yaml file is located.

kosi build

To make the generated KOSI package available on other machines, it is pushed to the user’s private KubeOps Hub. To do this, the user must first log in to the hub.

$ kosi login -u <username>
2023-02-04 11:19:43 Info:      kosi version: 2.6.0_Beta0
2023-02-04 11:19:43 Info:      Please enter password
****************
2023-02-04 11:19:26 Info:      Login Succeeded to Hub.
$ kosi push --hub kosi
2023-02-04 11:23:18 Info:      kosi version: 2.6.0_Beta0
2023-02-04 11:23:19 Info:      Push to Private Registry registry.preprod.kubernative.net/<username>/

Deployment

Once the KOSI package has been created and published, it needs to be installed on the Admin node. The following command will download and execute the package. The package name and version refer to the values defined in package.yaml with the keys name and version.

kosi install --hub <username> <username>/<packagename>:<version>

For the example package, the command would be: kosi install --hub <username> <username>/deployExample:0.1.0.

10 - Replace Cluster Nodes

Replace cluster nodes

This section describes how to replace cluster nodes in your cluster.

Direct replacement of nodes is not possible in KubeOps; however, you can delete the node and add a new node to the cluster, as shown in the following example.

Steps to replace a Kubernetes Node

  1. Use the command delete on the admin node to delete the unwanted node from the cluster.

    The command is:

    lima delete -n <IP of your node> <name of your Cluster>
    
    When you delete a node, its data becomes inaccessible or is erased.
  2. Now create a new .yaml file with a configuration for the node as shown below.

    Example:

    apiVersion: lima/nodeconfig/v1alpha1
    clusterName: roottest
    spec:
      masters: []
      workers:
      - host: 10.2.10.17  ## ip of the new node to be joined
        user: root
        password: toor
    
  3. Lastly use the command create nodes to create and join the new node.

    The command is:

    lima create nodes -f <node yaml file name>
    

Example 1

In the following example, we will replace a node with ip 10.2.10.15 from demoCluster to a new worker node with ip 10.2.10.17:

  1. Delete node.

    lima delete -n 10.2.10.15 demoCluster
    
  2. Create addNode.yaml for the new worker node.

    apiVersion: lima/nodeconfig/v1alpha1
    clusterName: roottest
    spec:
      masters: []
      workers:
      - host: 10.2.10.17
        user: root
        password: toor
    
  3. Join the new node.

    lima create nodes -f addNode.yaml
    

Example 2

If you are rejoining a master node, all other steps are the same, except that you need to add the node configuration to the yaml file as shown in the example below:

apiVersion: lima/nodeconfig/v1alpha1
clusterName: roottest
spec:
  masters:
  - host: 10.2.10.17
    user: root
    password: toor
  workers: []

11 - Update Kubernetes Version

Upgrading Kubernetes version

You can use the following steps to upgrade the Kubernetes version of a cluster.

In the following example, we will upgrade the Kubernetes version of a cluster named Democluster from Kubernetes version 1.27.2 to Kubernetes version 1.28.2.

  1. You have to create a kubeopsctl.yaml file with the following yaml syntax.
   apiVersion: kubeops/kubeopsctl/alpha/v3  # mandatory
   kubeOpsUser: "demo" # mandatory,  change to your username
   kubeOpsUserPassword: "Password" # mandatory,  change to your password
   kubeOpsUserMail: "demo@demo.net" # change to your email
   imagePullRegistry: "registry1.kubernative.net/lima" # mandatory
   localRegistry: false # mandatory
   ### Values for setup configuration ###
   clusterName: "Democluster"  # mandatory
   clusterUser: "mnyuser"  # mandatory
   kubernetesVersion: "1.28.2" # mandatory, check lima documentation
   masterIP: 10.2.10.11 # mandatory
   ### Additional values for cluster configuration
   # at least 3 masters and 3 workers are needed
   zones:
   - name: zone1
      nodes:
         master:
         - name: cluster1master1
            ipAdress: 10.2.10.11
            user: myuser
            systemCpu: 100m
            systemMemory: 100Mi 
            status: active
            kubeversion: 1.28.2
         - name: cluster1master2
            ipAdress: 10.2.10.12
            user: myuser
            systemCpu: 100m
            systemMemory: 100Mi 
            status: active
            kubeversion: 1.28.2
         worker:
         - name: cluster1worker1
            ipAdress: 10.2.10.14
            user: myuser
            systemCpu: 100m
            systemMemory: 100Mi 
            status: active
            kubeversion: 1.28.2
         - name: cluster1worker2
            ipAdress: 10.2.10.15
            systemCpu: 100m
            systemMemory: 100Mi 
            status: active
            kubeversion: 1.28.2
   - name: zone2
      nodes:
         master:
         - name: cluster1master3
            ipAdress: 10.2.10.13
            user: myuser
            systemCpu: 100m
            systemMemory: 100Mi 
            status: drained
            kubeversion: 1.28.2  
         worker:
         - name: cluster1worker1
            ipAdress: 10.2.10.16
            user: myuser
            systemCpu: 100m
            systemMemory: 100Mi 
            status: active
            kubeversion: 1.28.2

   # set to true if you want to install it into your cluster
   rook-ceph: false # mandatory
   harbor: false # mandatory
   opensearch: false # mandatory
   opensearch-dashboards: false # mandatory
   logstash: false # mandatory
   filebeat: false # mandatory
   prometheus: false # mandatory
   opa: false # mandatory
   headlamp: false # mandatory
   certman: false # mandatory
   ingress: false # mandatory
   keycloak: false # mandatory
  2. Upgrade the version

    Once the kubeopsctl.yaml file is created, use the following command to change the version of your cluster:

    kubeopsctl apply -f kubeopsctl.yaml
    

12 - Change CRI

Changing Container Runtime Interface

KubeOps enables you to change the Container Runtime Interface (CRI) of the clusters to any of the following supported CRIs:

  • containerd
  • crio

You can use the following steps to change the CRI.

In the example below, we will change the CRI of the cluster named Democluster to containerd on a machine running openSUSE.

  1. Download the desired CRI maintenance package from the hub.
In this case you will need the package `lima/containerdlp151:1.6.6`.  
To download the package, use the command:  

 ````console
 lima pull maintenance lima/containerdlp151:1.6.6
 ````
 
Note: Packages may vary based on the OS and Kubernetes version on your machine.
To select the correct maintenance package based on your machine configuration, refer to Installing maintenance packages.
  2. Change the CRI of your cluster.

Once the desired CRI maintenance package is downloaded, change the CRI of your cluster with the following command:

lima change runtime -r containerd Democluster

So in this case you want to change your runtime to containerd. The desired container runtime is specified after the -r parameter, which is necessary. In this example the cluster has the name Democluster, which is also necessary.
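To verify the change, you can check the container runtime reported by each node. This is a generic kubectl check and assumes kubectl access to the cluster.

kubectl get nodes -o wide

The CONTAINER-RUNTIME column should now show containerd://<version> on every node.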

13 - How to delete nodes from the cluster with lima


Note: If we want to delete a node from our Kubernetes cluster, we have to use lima.

If you are using our platform, lima is already installed. If this is not the case, please install lima manually.

These are the prerequisites that have to be fulfilled before we can delete a node from our cluster:

  • lima has to be installed
  • a functioning cluster must exist

If you want to remove a node from your cluster you can run the delete command on the admin node.

lima delete -n <node which should be deleted> <name of your cluster>

Note: The cluster name has to be the same as the one set under clusterName: in the kubeopsctl.yaml file.

For example, we want to delete worker node 2 with the IP address 10.2.1.9 from our existing Kubernetes cluster named example, using the following command:

lima delete -n 10.2.1.9 example
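Afterwards you can verify that the node was removed from the cluster; this is a generic kubectl check run from a machine with access to the cluster.

kubectl get nodes

The deleted worker should no longer appear in the node list.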

14 - Accessing Dashboards

Accessing Dashboards installed with KubeOps

To access an application dashboard, an SSH tunnel to one of the control planes is needed. The following dashboards are available and configured with the following NodePorts by default:

Grafana

NodePort: 30211

Initial login credentials

  • username: the username set in the kubeopsctl.yaml for the cluster creation (grafanaUsername)
  • password: the password set in the kubeopsctl.yaml for the cluster creation (grafanaPassword)

OpenSearch Dashboards

NodePort: 30050

Initial login credentials

  • username: admin
  • password: admin

Harbor

NodePort

  • https: 30003

Initial login credentials

  • username: admin
  • password: the password set in the kubeopsctl.yaml for the cluster creation (harborpass)

Rook/Ceph

NodePort

The Rook/Ceph Dashboard has no fixed NodePort yet. To find out the NodePort used by Rook/Ceph, follow these steps:

  1. List the services in the kubeops namespace:
kubectl get svc -n kubeops
  2. Find the line with the service rook-ceph-mgr-dashboard-external-http:
NAME                                      TYPE        CLUSTER-IP        EXTERNAL-IP   PORT(S)                                     AGE
rook-ceph-mgr-dashboard-external-http     NodePort    192.168.197.13    <none>        7000:31268/TCP                              21h

In the example above the NodePort to connect to Rook/Ceph would be 31268.
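Alternatively, the NodePort can be read directly with a jsonpath query; this is a generic kubectl sketch assuming the service name and namespace shown above.

kubectl get svc rook-ceph-mgr-dashboard-external-http -n kubeops -o jsonpath='{.spec.ports[0].nodePort}'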

Initial login credentials

  • username: admin
  • password:
kubectl get secret rook-ceph-dashboard-password -n kubeops --template={{.data.password}} | base64 -d

The dashboard can be accessed at localhost:<Port>/ceph-dashboard/

Headlamp (KubeOps Dashboard)

NodePort: 30007

Initial login credentials

An access token is required to log in to the Headlamp dashboard.
The access token is linked to the service account headlamp-admin and stored in the secret headlamp-admin.
The access token can be read from the secret:

echo $(kubectl get secret headlamp-admin --namespace headlamp --template=\{\{.data.token\}\} | base64 --decode)

Connecting to the Dashboard

In order to connect to one of the dashboards, an SSH tunnel has to be established. There are various tools for doing this, like the command line, PuTTY or MobaXterm.
To establish a tunnel, the NodePort of the dashboard has to be forwarded from one of the control planes to the local machine. After that, the dashboard can be accessed at localhost:<Port>.
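For example, a tunnel for the Grafana dashboard could be opened from the command line as follows; the user and control-plane IP are taken from the examples in this guide (myuser, 10.2.10.11) and have to be adapted to your environment.

ssh -N -L 30211:localhost:30211 myuser@10.2.10.11

While the tunnel is open, the dashboard is reachable at localhost:30211 in your browser.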


15 - Create a new Repository

Kubeops RPM Repository Setup Guide

Setting up a new RPM repository allows for centralized, secure, and efficient distribution of software packages, simplifying installation, updates, and dependency management.

Prerequisites

To set up a new repository on your KubeOps platform, the following prerequisites must be fulfilled:

  • httpd (apache) server to access the repository over HTTP.
  • Root or administrative access to the server.
  • Software packages (RPM files) to include in the repository.
  • createrepo (an RPM package management tool) to create a new repository.

Repository Setup Steps

1. Install Required Tools

sudo yum install -y httpd createrepo

2. Create Repository Directory

When Apache is installed, the default Apache VirtualHost DocumentRoot is created at /var/www/html. Create a new repository KubeOpsRepo under this DocumentRoot.

sudo mkdir -p /var/www/html/KubeOpsRepo

3. Copy RPM Packages

Copy the RPM packages into the KubeOpsRepo repository.

Use the command below to copy packages that are already present on the host machine; otherwise, populate the packages into KubeOpsRepo directly.

sudo cp -r <sourcePathForRPMs> /var/www/html/KubeOpsRepo/

4. Generate the GPG Signature (optional)

If you want to use your packages in a secure way, we recommend using GPG Signature.

How does the GPG tool work?

The GNU Privacy Guard (GPG) is used for secure communication and data integrity verification.
When gpgcheck is set to 1 (enabled), the package manager verifies the GPG signature of each package against the corresponding key in the keyring. If the package's signature matches the expected signature, the package is considered valid and can be installed. If the signature does not match or the package is not signed, the package manager will refuse to install the package or display a warning.

GPG Signature for new registry

  1. Create a GPG key and add it to the /var/www/html/KubeOpsRepo/. Check here to know how to create GPG keypairs.

  2. Save the GPG key as RPM-GPG-KEY-KubeOpsRepo using the following commands.

cd /var/www/html/KubeOpsRepo/
gpg --armor --export > RPM-GPG-KEY-KubeOpsRepo

You can use the following command to verify the GPG key.

curl -s http://<ip-address-of-server>/KubeOpsRepo/RPM-GPG-KEY-KubeOpsRepo
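On the client machines, the exported public key can then be imported into the RPM keyring so that signed packages are trusted; this is a standard rpm command and assumes the key is reachable under the URL shown above.

sudo rpm --import http://<ip-address-of-server>/KubeOpsRepo/RPM-GPG-KEY-KubeOpsRepo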

5. Initialize the KubeOpsRepo

By running the createrepo command, the KubeOpsRepo repository is initialized.

cd /var/www/html/KubeOpsRepo/
sudo createrepo .

The newly created directory repodata contains metadata files that describe the RPM packages in the repository, including package information, dependencies, and checksums, enabling efficient package management and dependency resolution.
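If you later add or remove RPM packages in the repository, the metadata has to be regenerated. The --update flag of createrepo (a standard createrepo option) only rescans changed packages and is usually faster:

cd /var/www/html/KubeOpsRepo/
sudo createrepo --update .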

6. Start and Enable Apache Service

sudo systemctl start httpd
sudo systemctl enable httpd
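As a quick check that Apache is serving the repository, you can request the generated metadata over HTTP from the server itself; this assumes the default DocumentRoot /var/www/html described above.

curl -s http://localhost/KubeOpsRepo/repodata/repomd.xml | head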

Configure Firewall (Optional)

If the firewall is enabled, we need to allow incoming HTTP traffic.

sudo firewall-cmd --add-service=http --permanent
sudo firewall-cmd --reload

7. Configure the local repository

To install packages from KubeOpsRepo without specifying the URL every time, we can configure a local repository. If you are using a GPG signature, gpgcheck needs to be enabled as well.

  1. Create a Repository Configuration File
    Create a new .repo configuration file (e.g. KubeOpsRepo.repo) in the /etc/yum.repos.d/ directory with the following command.
sudo vi /etc/yum.repos.d/KubeOpsRepo.repo
  2. Add the following configuration content to the file
[KubeOpsRepo]  
name=KubeOps Repository
baseurl=http://<ip-address-of-server>/KubeOpsRepo/
enabled=1
gpgcheck=1
gpgkey=http://<ip-address-of-server>/KubeOpsRepo/RPM-GPG-KEY-KubeOpsRepo

Below are the configuration details:

  1. KubeOpsRepo: The repository ID.
  2. baseurl: The base URL of the new repository. Add your repository URL here.
  3. name: A descriptive name for the repository; it can be customized.
  4. enabled=1: Enables the repository.
  5. gpgcheck=1: Enables GPG signature verification for the repository.
  6. gpgkey: The address where your GPG key is placed.

In case you are not using GPG signature verification:
1. you can skip step 4 (Generate the GPG Signature), and
2. set gpgcheck=0 in the above configuration file.

8. Test the Local Repository

To ensure that the latest metadata for the repositories is available, you can run the command below (optional):

sudo yum makecache

To verify the repository in the repolist

You can check the repository in the repolist with the following command:

sudo yum repolist

This will list all the repositories along with information about them.

[root@cluster3admin1 ~]# yum repolist
Updating Subscription Management repositories.
repo id                                                        repo name
KubeOpsRepo                                                    KubeOps Repository
rhel-8-for-x86_64-appstream-rpms                               Red Hat Enterprise Linux 8 for x86_64 - AppStream (RPMs)
rhel-8-for-x86_64-baseos-rpms                                  Red Hat Enterprise Linux 8 for x86_64 - BaseOS (RPMs)

To list all the packages in the repository

You can list all the packages available in KubeOpsRepo with the following command:

# To check all the packages including duplicate installed packages
sudo yum list available --disablerepo="*" --enablerepo="KubeOpsRepo" --showduplicates
# sudo yum list --showduplicates | grep KubeOpsRepo 

To install packages from the repository directly

Now you can install packages directly from the KubeOpsRepo repository with the following command:

sudo yum install package_name

For example:

sudo yum install lima

16 - Change registry

Changing Registry from A to B

KubeOps enables you to change the registry from A to B with the following commands.

kosi 2.6.0 - kosi 2.7.0

Kubeops 1.0.6

fileBeat

kosi pull kubeops/kosi-filebeat-os:1.2.0 -o filebeat.kosi -r 10.9.10.222:30003
kosi pull kubeops/kosi-filebeat-os:1.2.0 -o filebeat.kosi -r localhost:30003
kosi install -p filebeat.kosi

harbor

kosi pull kubeops/harbor:1.0.1 -o harbor.kosi -r 10.9.10.222:30003
kosi pull kubeops/harbor:1.0.1 -o harbor.kosi -r localhost:30003
kosi install -p harbor.kosi

logstash

kosi pull kubeops/kosi-logstash-os:1.0.1 -o logstash.kosi -r 10.9.10.222:30003
kosi pull kubeops/kosi-logstash-os:1.0.1 -o logstash.kosi -r localhost:30003
kosi install -p logstash.kosi

opa-gatekeeper

kosi pull kubeops/opa-gatekeeper:1.0.1 -o opa-gatekeeper.kosi -r 10.9.10.222:30003
kosi pull kubeops/opa-gatekeeper:1.0.1 -o opa-gatekeeper.kosi -r localhost:30003
kosi install -p opa-gatekeeper.kosi

opensearch

kosi pull kubeops/kosi-opensearch-os:1.0.3 -o opensearch.kosi -r 10.9.10.222:30003
kosi pull kubeops/kosi-opensearch-os:1.0.3 -o opensearch.kosi -r localhost:30003
kosi install -p opensearch.kosi

opensearch-dashboards

kosi pull kubeops/kosi-opensearch-dashboards:1.0.1 -o opensearch-dashboards.kosi -r 10.9.10.222:30003
kosi pull kubeops/kosi-opensearch-dashboards:1.0.1 -o opensearch-dashboards.kosi -r localhost:30003
kosi install -p opensearch-dashboards.kosi

prometheus

kosi pull kubeops/kosi-kube-prometheus-stack:1.0.3 -o prometheus.kosi -r 10.9.10.222:30003
kosi pull kubeops/kosi-kube-prometheus-stack:1.0.3 -o prometheus.kosi -r localhost:30003
kosi install -p prometheus.kosi

rook

kosi pull kubeops/rook-ceph:1.0.3 -o rook-ceph.kosi -r 10.9.10.222:30003
kosi pull kubeops/rook-ceph:1.0.3 -o rook-ceph.kosi -r localhost:30003
kosi install -p rook-ceph.kosi

Kubeops 1.1.2

fileBeat

kosi pull kubeops/kosi-filebeat-os:1.1.1 -o kubeops/kosi-filebeat-os:1.1.1 -r 10.9.10.222:30003
kosi pull kubeops/kosi-filebeat-os:1.1.1 -o kubeops/kosi-filebeat-os:1.1.1 -t localhost:30003
kosi install -p package.yaml

harbor

kosi pull kubeops/harbor:1.1.1 -o kubeops/harbor:1.1.1 -r 10.9.10.222:30003
kosi pull kubeops/harbor:1.1.1 -o kubeops/harbor:1.1.1 -t localhost:30003
kosi install -p package.yaml

logstash

kosi pull kubeops/kosi-logstash-os:1.1.1 -o kubeops/kosi-logstash-os:1.1.1 -r 10.9.10.222:30003
kosi pull kubeops/kosi-logstash-os:1.1.1 -o kubeops/kosi-logstash-os:1.1.1 -t localhost:30003
kosi install -p package.yaml

opa-gatekeeper

kosi pull kubeops/opa-gatekeeper:1.1.1 -o kubeops/opa-gatekeeper:1.1.1 -r 10.9.10.222:30003
kosi pull kubeops/opa-gatekeeper:1.1.1 -o kubeops/opa-gatekeeper:1.1.1 -t localhost:30003
kosi install -p package.yaml

opensearch

kosi pull kubeops/kosi-opensearch-os:1.1.1 -o kubeops/kosi-opensearch-os:1.1.1 -r 10.9.10.222:30003
kosi pull kubeops/kosi-opensearch-os:1.1.1 -o kubeops/kosi-opensearch-os:1.1.1 -t localhost:30003
kosi install -p package.yaml

opensearch-dashboards

kosi pull kubeops/kosi-opensearch-dashboards:1.1.1 -o kubeops/kosi-opensearch-dashboards:1.1.1 -r 10.9.10.222:30003
kosi pull kubeops/kosi-opensearch-dashboards:1.1.1 -o kubeops/kosi-opensearch-dashboards:1.1.1 -t localhost:30003
kosi install -p package.yaml

prometheus

kosi pull kubeops/kosi-kube-prometheus-stack:1.1.1 -o kubeops/kosi-kube-prometheus-stack:1.1.1 -r 10.9.10.222:30003
kosi pull kubeops/kosi-kube-prometheus-stack:1.1.1 -o kubeops/kosi-kube-prometheus-stack:1.1.1 -t localhost:30003
kosi install -p package.yaml

rook

kosi pull kubeops/rook-ceph:1.1.1 -o kubeops/rook-ceph:1.1.1 -r 10.9.10.222:30003
kosi pull kubeops/rook-ceph:1.1.1 -o kubeops/rook-ceph:1.1.1 -t localhost:30003
kosi install -p package.yaml

cert-manager

kosi pull kubeops/cert-manager:1.0.2 -o kubeops/cert-manager:1.0.2 -r 10.9.10.222:30003
kosi pull kubeops/cert-manager:1.0.2 -o kubeops/cert-manager:1.0.2 -t localhost:30003
kosi install -p package.yaml

ingress-nginx

kosi pull kubeops/ingress-nginx:1.0.1 -o kubeops/ingress-nginx:1.0.1 -r 10.9.10.222:30003
kosi pull kubeops/ingress-nginx:1.0.1 -o kubeops/ingress-nginx:1.0.1 -t localhost:30003
kosi install -p package.yaml

kubeops-dashboard

kosi pull kubeops/kubeops-dashboard:1.0.1 -o kubeops/kubeops-dashboard:1.0.1 -r 10.9.10.222:30003
kosi pull kubeops/kubeops-dashboard:1.0.1 -o kubeops/kubeops-dashboard:1.0.1 -t localhost:30003
kosi install -p package.yaml

kubeopsctl 1.4.0

Kubeops 1.4.0

You have to create the file kubeopsctl.yaml:

apiVersion: kubeops/kubeopsctl/alpha/v3  # mandatory
kubeOpsUser: "demo" # change to your username
kubeOpsUserPassword: "Password" # change to your password
kubeOpsUserMail: "demo@demo.net" # change to your email
imagePullRegistry: "registry1.kubernative.net/lima"

clusterName: "example" 
clusterUser: "root" 
kubernetesVersion: "1.28.2" 
masterIP: 10.2.10.11 
firewall: "nftables" 
pluginNetwork: "calico" 
containerRuntime: "containerd" 

localRegistry: false

# at least 3 masters and 3 workers are needed
zones:
  - name: zone1
    nodes:
      master:
        - name: cluster1master1
          ipAdress: 10.2.10.11
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
        - name: cluster1master2
          ipAdress: 10.2.10.12
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
      worker:
        - name: cluster1worker1
          ipAdress: 10.2.10.14
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
        - name: cluster1worker2
          ipAdress: 10.2.10.15
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
  - name: zone2
    nodes:
      master:
        - name: cluster1master3
          ipAdress: 10.2.10.13
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: drained
          kubeversion: 1.28.2  
      worker:
        - name: cluster1worker1
          ipAdress: 10.2.10.16
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2

controlPlaneList: 
  - 10.2.10.12 # use the IP address of master2 here
  - 10.2.10.13 # use the IP address of master3 here

workerList: 
  - 10.2.10.14 # use the IP address of worker1 here
  - 10.2.10.15 # use the IP address of worker2 here
  - 10.2.10.16 # use the IP address of worker3 here

rook-ceph: false
harbor: false
opensearch: false
opensearch-dashboards: false
logstash: false
filebeat: false
prometheus: false
opa: false
headlamp: false
certman: false
ingress: false 
keycloak: false # mandatory, set to true if you want to install it into your cluster
velero: false

storageClass: "rook-cephfs"

rookValues:
  namespace: kubeops
  nodePort: 31931
  hostname: rook-ceph.local
  cluster:
    spec:
      dataDirHostPath: "/var/lib/rook"
    removeOSDsIfOutAndSafeToRemove: true
    storage:
      # Global filter to only select certain device names. This example matches names starting with sda or sdb.
      # Will only be used if useAllDevices is set to false and will be ignored if individual devices have been specified on a node.
      deviceFilter: "^sd[a-b]"
      # Names of individual nodes in the cluster that should have their storage included.
      # Will only be used if useAllNodes is set to false.
      nodes:
        - name: "<ip-address of node_1>"
          devices:
            - name: "sdb" 
        - name: "<ip-address of node_2>"
          deviceFilter: "^sd[a-b]"
          # config:
          #   metadataDevice: "sda"
    resources:
      mgr:
        requests:
          cpu: "500m"
          memory: "1Gi"
      mon:
        requests:
          cpu: "2"
          memory: "1Gi"
      osd:
        requests:
          cpu: "2"
          memory: "1Gi"
  operator:
    data:
      rookLogLevel: "DEBUG"
  blockStorageClass:
    parameters:
      fstype: "ext4"

postgrespass: "password"  # change to your desired password
postgres:
  storageClassName: "rook-cephfs"
  volumeMode: "Filesystem"
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 2Gi

redispass: "password" # change to your desired password
redis:
  storageClassName: "rook-cephfs"
  volumeMode: "Filesystem"
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 2Gi

harborValues: 
  namespace: kubeops
  harborpass: "password" # change to your desired password
  externalURL: https://10.2.10.13 # change to the IP address of master1
  nodePort: 30003
  hostname: harbor.local
  harborPersistence:
    persistentVolumeClaim:
      registry:
        size: 5Gi
        storageClass: "rook-cephfs"
      chartmuseum:
        size: 5Gi
        storageClass: "rook-cephfs"
      jobservice:
        jobLog:
          size: 1Gi
          storageClass: "rook-cephfs"
        scanDataExports:
          size: 1Gi
          storageClass: "rook-cephfs"
      database:
        size: 1Gi
        storageClass: "rook-cephfs"
      redis:
        size: 1Gi
        storageClass: "rook-cephfs"
      trivy: 
        size: 5Gi
        storageClass: "rook-cephfs"

filebeatValues:
  namespace: kubeops 

logstashValues:
  namespace: kubeops
  volumeClaimTemplate:
    resources:
      requests:
        storage: 1Gi
    accessModes: 
      - ReadWriteMany
    storageClassName: "rook-cephfs"

openSearchDashboardValues:
  namespace: kubeops
  nodePort: 30050
  hostname: opensearch.local

openSearchValues:
  namespace: kubeops
  opensearchJavaOpts: "-Xmx512M -Xms512M"
  replicas: "3"
  resources:
    requests:
      cpu: "250m"
      memory: "1024Mi"
    limits:
      cpu: "300m"
      memory: "3072Mi"
  persistence:
    size: 4Gi
    enabled: "true"
    enableInitChown: "false"
    labels:
      enabled: "false"
    storageClass: "rook-cephfs"
    accessModes:
      - "ReadWriteMany"
  securityConfig:
    enabled: false
    ### Additional values can be set, if securityConfig is enabled:
    # path: "/usr/share/opensearch/plugins/opensearch-security/securityconfig"
    # actionGroupsSecret:
    # configSecret:
    # internalUsersSecret: internal-users-config-secret
    # rolesSecret:
    # rolesMappingSecret:
    # tenantsSecret:
    # config:
    #   securityConfigSecret: ""
    #   dataComplete: true
    #   data: {}

prometheusValues:
  namespace: kubeops
  privateRegistry: false

  grafanaUsername: "user"
  grafanaPassword: "password"
  grafanaResources:
    storageClass: "rook-cephfs"
    storage: 5Gi
    nodePort: 30211
    hostname: grafana.local

  prometheusResources:
    storageClass: "rook-cephfs"
    storage: 25Gi
    retention: 10d
    retentionSize: "24GB"
    nodePort: 32090
    hostname: prometheus.local

opaValues:
  namespace: kubeops

headlampValues:
  namespace: kubeops
  hostname: kubeops-dashboard.local
  service:
    nodePort: 30007

certmanValues:
  namespace: kubeops
  replicaCount: 3
  logLevel: 2

ingressValues:
  namespace: kubeops
  externalIPs: []

keycloakValues:
  namespace: "kubeops"
  storageClass: "rook-cephfs"
  nodePort: "30180"
  hostname: keycloak.local
  keycloak:
    auth:
      adminUser: admin
      adminPassword: admin
      existingSecret: ""
  postgresql:
    auth:
      postgresPassword: ""
      username: bn_keycloak
      password: ""
      database: bitnami_keycloak
      existingSecret: ""

fileBeat

In order to change the registry of filebeat, go into your kubeopsctl.yaml file and set filebeat: false to filebeat: true.

kubeopsctl change registry -f kubeopsctl.yaml -r 10.9.10.222:30003 -t localhost:30003

harbor

In order to change the registry of harbor, go into your kubeopsctl.yaml file and set harbor: false to harbor: true.

kubeopsctl change registry -f kubeopsctl.yaml -r 10.9.10.222:30003 -t localhost:30003

logstash

In order to change the registry of logstash, go into your kubeopsctl.yaml file and set logstash: false to logstash: true.

kubeopsctl change registry -f kubeopsctl.yaml -r 10.9.10.222:30003 -t localhost:30003

opa-gatekeeper

In order to change the registry of opa-gatekeeper, go into your kubeopsctl.yaml file and set opa: false to opa: true.

kubeopsctl change registry -f kubeopsctl.yaml -r 10.9.10.222:30003 -t localhost:30003

opensearch

In order to change the registry of opensearch, go into your kubeopsctl.yaml file and set opensearch: false to opensearch: true.

kubeopsctl change registry -f kubeopsctl.yaml -r 10.9.10.222:30003 -t localhost:30003

opensearch-dashboards

In order to change the registry of opensearch-dashboards, go into your kubeopsctl.yaml file and set opensearch-dashboards: false to opensearch-dashboards: true.

kubeopsctl change registry -f kubeopsctl.yaml -r 10.9.10.222:30003 -t localhost:30003

prometheus

In order to change the registry of prometheus, go into your kubeopsctl.yaml file and set prometheus: false to prometheus: true.

kubeopsctl change registry -f kubeopsctl.yaml -r 10.9.10.222:30003 -t localhost:30003

rook-ceph

In order to change the registry of rook-ceph, go into your kubeopsctl.yaml file and set rook-ceph: false to rook-ceph: true.

kubeopsctl change registry -f kubeopsctl.yaml -r 10.9.10.222:30003 -t localhost:30003

cert-manager

In order to change the registry of cert-manager, go into your kubeopsctl.yaml file and set certman: false to certman: true.

kubeopsctl change registry -f kubeopsctl.yaml -r 10.9.10.222:30003 -t localhost:30003

ingress-nginx

In order to change the registry of ingress-nginx, go into your kubeopsctl.yaml file and set ingress: false to ingress: true.

kubeopsctl change registry -f kubeopsctl.yaml -r 10.9.10.222:30003 -t localhost:30003

keycloak

In order to change the registry of keycloak, go into your kubeopsctl.yaml file and set keycloak: false to keycloak: true.

kubeopsctl change registry -f kubeopsctl.yaml -r 10.9.10.222:30003 -t localhost:30003

headlamp

In order to change the registry of headlamp, go into your kubeopsctl.yaml file and set headlamp: false to headlamp: true.

kubeopsctl change registry -f kubeopsctl.yaml -r 10.9.10.222:30003 -t localhost:30003

17 - Change the OpenSearch password

Changing the password of OpenSearch

Changing the password with default settings

If OpenSearch is installed without any securityConfig settings, i.e. the securityConfig value is disabled inside the installation values for OpenSearch, the following steps have to be taken in order to change the password for a user.

Step 1: Generate a new password hash
OpenSearch stores hashed passwords for authentication. In order to change the password of a user, we first have to generate the corresponding hash value using the interactive hash.sh script, which can be found inside the OpenSearch container:

kubectl exec -it opensearch-cluster-master-0 -n kubeops -- bash

sh /usr/share/opensearch/plugins/opensearch-security/tools/hash.sh

Step 2: Save the new password hash in the internal_users.yaml file
By default, OpenSearch uses the internal_users.yaml file to store user settings. To change the user password, replace the hash value for the specific user inside this file. Again, the needed file is located inside the OpenSearch container. Use the following command to edit the internal_users.yaml file and replace the hash entry with the newly generated one.

vi /usr/share/opensearch/config/opensearch-security/internal_users.yaml

Step 3: Update the OpenSearch cluster
Use the provided script securityadmin.sh inside the OpenSearch container to update the OpenSearch cluster and persist the changes to the user database:

sh /usr/share/opensearch/plugins/opensearch-security/tools/securityadmin.sh -cd /usr/share/opensearch/config/opensearch-security/ -icl -nhnv -cacert /usr/share/opensearch/config/root-ca.pem -cert /usr/share/opensearch/config/kirk.pem -key /usr/share/opensearch/config/kirk-key.pem
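To confirm that the new password is active, you can query the OpenSearch REST API with the changed credentials. This is an optional check, assuming curl is available inside the container and OpenSearch listens on the default HTTPS port 9200; replace admin and <new-password> with the user and password you just set.

curl -k -u admin:<new-password> https://localhost:9200/_cluster/health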

OpenSearch with external secret

If OpenSearch is instead deployed with the securityConfig enabled and an external Secret has been created, some additional steps and changes are required to change a user password.

Step 1: Generate a new password hash
OpenSearch stores hashed passwords for authentication. In order to change the password of a user, we first have to generate the corresponding hash value using the interactive hash.sh script, which can be found inside the OpenSearch container:

kubectl exec -it opensearch-cluster-master-0 -n kubeops -- bash

sh /usr/share/opensearch/plugins/opensearch-security/tools/hash.sh

Step 2: Locate the secret and extract the user data
In this case, users and additional user data are stored inside the internal-users-config-secret, a secret created within the Kubernetes cluster. It is stored in the same namespace as the OpenSearch deployment itself. Inside the secret there is a data entry which essentially contains the internal_users.yml (a list of users and their user data in YAML format) encoded as a base64 string. The following commands extract and decode the data so that you can edit the local copy of the YAML file and replace the hash entry with the newly generated one.

kubectl get secrets -n kubeops internal-users-config-secret -o jsonpath='{.data.internal_users\.yml}' | base64 -d > internal_users.yaml

vi internal_users.yaml

Step 3: Patch the secret and restart the OpenSearch pods
After editing the extracted data, it must be re-encoded into base64 to replace the old data inside the secret. After that, the OpenSearch pods need to be restarted so that they reload the secret.

cat internal_users.yaml | base64 -w 0 | xargs -I {} kubectl patch secret -n kubeops internal-users-config-secret --patch '{"data": {"internal_users.yml": "{}"}}'

kubectl rollout restart statefulset opensearch-cluster-master -n kubeops
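To confirm that the patch was applied and the pods have picked it up, you can decode the secret again and wait for the restart to complete; both commands mirror the ones used above.

kubectl get secrets -n kubeops internal-users-config-secret -o jsonpath='{.data.internal_users\.yml}' | base64 -d | head

kubectl rollout status statefulset opensearch-cluster-master -n kubeops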

Step 4: Update the OpenSearch cluster
Use the provided script securityadmin.sh inside the OpenSearch container to update the OpenSearch cluster and persist the changes to the user database. For the script to work properly, the internal_users.yml has to be copied into the directory containing all the needed security configuration files.

cp /usr/share/opensearch/plugins/opensearch-security/securityconfig/internal_users.yml /usr/share/opensearch/config/opensearch-security/

sh /usr/share/opensearch/plugins/opensearch-security/tools/securityadmin.sh -cd /usr/share/opensearch/config/opensearch-security/ -icl -nhnv -cacert /usr/share/opensearch/config/root-ca.pem -cert /usr/share/opensearch/config/kirk.pem -key /usr/share/opensearch/config/kirk-key.pem

18 - Create Kosi package

Creating Kosi package

kosi create

To create a Kosi package, you must first run the kosi create command in your directory.

The kosi create command creates four files (package.yaml, template.yaml, logo.png and docs.tgz) in the current directory. These files can be edited.

[root@localhost ~]# kosi create

Created files:

  • package.yaml - Defines the properties of the Kosi package (see below).
  • template.yaml - Required if the template engine Scriban is to be used.
  • logo.png - A package thumbnail with a size of 50x50 px, used to show the logo on KubeOpsHub.
  • docs.tgz - A zipped directory with the documentation of the package, used to show the documentation on KubeOpsHub.

The documentation of the package is written in Markdown. The file for the documentation is called readme.md.
To edit the Markdown, you can unzip docs.tgz in your directory with the command tar -xzf docs.tgz and zip it again with the command tar -czf docs.tgz docs/ after you have finished.

Note: Please name your markdown files inside docs.tgz without a version tag (e.g. not docs/documentation-1.0.0.md).
Do not change the file names of any of the files generated with the kosi create command.

package.yaml

The package.yaml defines a package in a specific version as well as the tasks needed to install it. The tasks which are used in the package.yaml are plugins, which can be created by the user.

Elements:

  • includes.files: Describes the files which are included in the Kosi package.
  • includes.containers: Used for docker images. A container for the docker images will be created when the kosi install, kosi update or kosi delete command is used.
  • installation.tasks: The tree describes the tasks (Kosi plugins), which are executed with the kosi install command.
  • update.tasks: The tree describes the tasks (Kosi plugins), which are executed with the kosi update command.
  • delete.tasks: The tree describes the tasks (Kosi plugins), which are executed with the kosi delete command.

IMPORTANT: It is required to enter the package name in lowercase.
Do not use any docker tags (:v1.0.0) in your package name.

Example package.yaml

apiversion: kubernative/kubeops/sina/user/v3 # Required field
name: kosi-example-packagev3 # Required field
description: kosi-example-package # Required field
version: 0.1.0 # Required field
includes:  # Required field: When "files" or "containers" are needed.
  files:  # Optional field: IF file is attached, e.g. "rpm, .extension"
    input: "template.yaml"
  containers: # Optional field: When "containers" are needed.
    example:
      registry: docker.io
      image: nginx
      tag: latest
docs: docs.tgz
logo: logo.png
installation: # Required field
  includes: # Optional field: When "files" or "containers" are needed.
    files: # Optional field:
      - input # Reference to includes
    containers: # Optional field:
      - example # Reference to includes
  tasks: 
    - cmd:
        command: "touch ~/kosiExample1"
update: # Required field
  includes: # Optional field: When "files" or "containers" are needed.
    files: # Optional field:
      - input # Reference to includes
    containers: # Optional field:
      - example # Reference to includes
  tasks:
    - cmd:
        command: "touch ~/kosiExample2"
delete: # Required field
  includes: # Optional field: When "files" or "containers" are needed.
    files: # Optional field:
      - input # Reference to includes
    containers: # Optional field:
      - example # Reference to includes
  tasks:
    - cmd:
        command: "rm ~/kosiExample1"
    - cmd:
        command: "rm ~/kosiExample2"

kosi build

Now, after you have created and edited the files from kosi create, you can build a Kosi package by simply running the kosi build command in your directory.

[root@localhost ~]# kosi build

All files specified in the package.yaml are combined, together with the package.yaml itself, into a Kosi package.


In these few steps, you can successfully create and use a Kosi package. This is the basic functionality offered by Kosi.

You can always explore Full Documentation to go through all the functionality and features provided by Kosi.

19 - Install package from Hub

Installing KOSI packages from KubeOps Hub

To install KOSI packages from the KubeOps Hub on your machines, follow these steps.

  1. First you need to search for the package on the KubeOps Hub using the kosi search command (refer to the kosi search command for more info).

  2. Now copy the installation address of the desired package and use it in the kosi install command:

    [root@localhost ~]# kosi install --hub <hubname> <installation address>
    

The --hub parameter is used to install packages from the software Hub.

To be able to install a package from the software Hub, you have to be logged in as a user.


Install from Private Hub

Example: The package livedemo of user kosi with version 2.7.1 is to be installed from the private software Hub:

[root@localhost ~]# kosi install kosi/livedemo:2.7.1

Install from Public Hub

Example: The package livedemo of the user kosi with the version 2.7.1 is to be installed from the public software Hub:

[root@localhost ~]# kosi install --hub public kosi/livedemo:2.7.1

Install along with yaml files

The -f parameter must be used to include your own yaml files in the installation.

[root@localhost ~]# kosi install <package> -f <user.yaml>

Example: The package livedemo of user kosi with version 2.7.1 is to be installed from the public software hub and user specific files are to be used for the installation:

[root@localhost ~]# kosi install --hub public kosi/livedemo:2.7.1 -f userfile1.yaml

Install in specific namespace

--namespace flag

The --namespace parameter can be used to specify a Kubernetes namespace in which to perform the installation.

[root@localhost ~]# kosi install --hub <hubname> <package> --namespace <namespace>

Example: The package livedemo of user kosi with version 2.7.1 is to be installed from the public software hub and a custom kubernetes namespace is used:

[root@localhost ~]# kosi install --hub public kosi/livedemo:2.7.1 --namespace MyNamespace

Note: If no --namespace parameter is specified, the namespace default will be used.


Install with specific deployment name

--dname flag

The --dname parameter can be used to save the package under a specific name.

[root@localhost ~]# kosi install --hub <hubname> <package> --dname <deploymentname>

Example: The package livedemo of user kosi with version 2.7.1 is to be installed from the public software hub and a deployment name is set:

[root@localhost ~]# kosi install --hub public kosi/livedemo:2.7.1 --dname MyDeployment

If no --dname parameter is specified, a random deployment name will be generated.

Note: The deployment name is stored in the file /var/kubeops/kosi/deployment.yaml.


In these few steps, you can successfully install and use the KOSI package.

You can always explore Full Documentation to go through all the functionality and features provided by KOSI.


Install on a machine with no internet connection

  1. Download the package using kosi pull on a machine with an internet connection.
[root@localhost ~]# kosi pull [package name from hub] -o [your preferred name] --hub public
  2. Transfer the package to the machine without an internet connection that has KubeOps installed on it.

  3. Install it with the following command:

[root@localhost ~]# kosi install -p [package name]