KUBEOPSCTL

Try kubeopsctl

Follow the quickstart guides to learn how to set up secure clusters.

1 - Getting-Started

Begin your exploration of kubeopsctl, diving into its robust capabilities and streamlined workflow for Kubernetes infrastructure management.

1.1 - About Kubeopsctl

This article will give you a little insight into Kubeopsctl and its advantages.

What is kubeopsctl?

kubeopsctl serves as a versatile utility designed specifically to efficiently manage both the configuration and status of a cluster.

With its capabilities, users can articulate their desired cluster state in detail, outlining configurations and specifications.

Subsequently, kubeopsctl orchestrates the creation of a cluster that precisely matches the specified state, ensuring alignment between intentions and operational reality.

Why use kubeopsctl?

In kubeopsctl, configuration management involves defining, maintaining, and updating the desired state of a cluster, including configurations for nodes, pods, services, and other resources in the application environment.

The main goal of kubeopsctl is to match the cluster's actual state with the desired state specified in the configuration files. With a declarative model, kubeopsctl enables administrators to express their desired system setup, focusing on "what" they want rather than "how" to achieve it.

This approach improves flexibility and automation in managing complex systems, making the management process smoother and allowing easy adjustment to changing needs.

kubeopsctl uses YAML files to store configuration data in a human-readable format. These files document important metadata about the objects managed by kubeopsctl, such as pods, services, and deployments.

Highlights

  • creating a cluster
  • adding nodes to your cluster
  • draining nodes
  • updating single nodes
  • labeling nodes for zones
  • adding platform software to your cluster

1.2 - Installation

This section provides an introduction to KubeOps, covering essential topics such as hardware, software, and network requirements. It also outlines the steps for installing the necessary software and highlights key configurations needed for KubeOps setup.

KubeOps Installation and Setup

Welcome to the very first step to getting started with KubeOps. In this section, you will get to know about

  • hardware, software and network requirements
  • steps to install the required software
  • key configurations for KubeOps

Prerequisites

A total of 7 machines are required:

  • one admin
  • three masters
  • three workers

All of your machines need Red Hat Enterprise Linux 8 as their operating system.
Below you can see the minimal requirements for CPU, memory and disk storage:

OS: Red Hat Enterprise Linux 8
Minimum requirements: 8 CPU cores, 16 GB memory, 50 GB disk storage

For each working node, an additional unformatted hard disk with 50 GB each is required. For more information about the hard drives for rook-ceph, visit the rook-ceph prerequisites page
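
For example, you can list the block devices on a worker node to confirm that an unformatted disk is present; a device without a filesystem entry or mount point is a candidate for Ceph:

lsblk -f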

Requirements on admin

The following requirements must be fulfilled on the admin machine.

  1. All the utilized users require sudo privileges. If you are using KubeOps as a non-root user, the user needs sudo rights; on RHEL 8 environments this means adding the user to the wheel group. Make sure that you switch to your user with:
su -l <user>
  2. The admin machine must be synchronized with the current time.
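
For example, assuming chronyd is used as the time service (the default on RHEL 8), you can check and enable time synchronization with:

timedatectl status
sudo systemctl enable --now chronyd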

  3. You need an internet connection to use the default KubeOps Registry.

    Important: Choose the correct KubeOps Registry based on your version:

    "registry1.kubernative.net/lima"
    
    "registry.kubeops.net/kubeops"
    

    A local registry can be used in an air-gapped environment. KubeOps only supports secure registries. If you use an insecure registry, it is important to list it as an insecure registry in the registry configuration (/etc/containers/registries.conf for Podman, /etc/docker/daemon.json for Docker).

You can also create your own registry instead of using the default. See the how-to guide Create a new Repository for more information.

  4. Podman must be installed on your machine.
sudo dnf install -y podman
  5. $KUBEOPSROOT and $LIMAROOT must be set.
echo "export KUBEOPSROOT=\"${HOME}/kubeops\"" >> $HOME/.bashrc
echo "export LIMAROOT=\"${HOME}/kubeops/lima\"" >> $HOME/.bashrc
source $HOME/.bashrc
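
You can verify that the variables are set in your current shell:

echo $KUBEOPSROOT $LIMAROOT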
  6. Install KOSI and log in with valid credentials
  • Install KOSI from an RPM file

    sudo dnf install -y kosi*.rpm
    

    INFO: Additional details on downloading the KOSI package can be found here; for installing the KOSI package, refer to this link.

    NOTE: Helm package installation is not required, as it will be installed automatically with the kubeopsctl platform.

  • Login to the kubeops hub using kosi

    Use the following command to initiate the login process. After entering the password, you will be logged in to the kubeops hub:

      kosi login -u <user>
    

Requirements for each node

The following requirements must be fulfilled on each node.

  1. All the utilized users require sudo privileges. If you are using KubeOps as a non-root user, the user needs sudo rights; on RHEL 8 environments this means adding the user to the wheel group.

  2. Every machine must be synchronized with the current time.

  3. You have to assign lowercase unique hostnames for every machine you are using.

    We recommend using self-explanatory hostnames.

    To set the hostname on your machine use the following command:

    sudo hostnamectl set-hostname <name of node>
    
    • Example
      Use the commands below to set the hostnames on each machine as admin, master, node1 and node2.
      sudo hostnamectl set-hostname admin
      sudo hostnamectl set-hostname master 
      sudo hostnamectl set-hostname node1
      sudo hostnamectl set-hostname node2
      

    Requires sudo privileges

    It is recommended that a DNS service is running. If you don't have a DNS service, you can change the /etc/hosts file instead. An example of an entry in the /etc/hosts file could be:

    10.2.10.12 admin
    10.2.10.13 master1
    10.2.10.14 master2
    10.2.10.15 master3
    10.2.10.16 node1
    10.2.10.17 node2
    10.2.10.18 node3
    

  4. To establish an SSH connection between your machines, you either need an SSH key or you need to install sshpass.

    1. Generate an SSH key on the admin machine using the following command

      ssh-keygen
      

      Two keys will be generated in the ~/.ssh directory.
      The first key is id_rsa (private) and the second key is id_rsa.pub (public).

    2. Copy the SSH key from the admin machine to your node machine(s) with the following command

      ssh-copy-id <ip address or hostname of your node machine>
      
    3. Now try establishing a connection to your node machine(s)

      ssh <ip address or hostname of your node machine>
      
  5. It is recommended that runc is uninstalled:

    sudo dnf remove -y runc
    
  6. tc should be installed.

    sudo dnf install -y tc
    sudo dnf install -y libnftnl
    
  7. For OpenSearch, /etc/sysctl.conf should be configured: the line

      vm.max_map_count=262144
    

    should be added. After that, the following command should be executed:

      sudo sysctl -p
    
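
    To confirm that the setting is active, you can query it directly:

      sysctl vm.max_map_count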
  8. The user needs to install versionlock with this command:

    sudo dnf install python3-dnf-plugin-versionlock.noarch
    
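
    After installation you can, for example, lock a package at its current version and list the existing locks:

    sudo dnf versionlock add <package>
    sudo dnf versionlock list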
  9. Optional: In order to use encrypted traffic inside the cluster, follow these steps:

For RHEL machines, you will need to import the ELRepo Secure Boot key into your system. You can find a detailed explanation and comprehensive instructions in our how-to guide Importing the Secure-Boot key.

This is only necessary if your system has Secure Boot enabled. If this isn't the case, or you don't want to use any encryption at all, you can skip this step.

Installing KubeOpsCtl

  1. Create a kubeopsctl.yaml file with the respective information, as shown in kubeopsctl.yaml parameters, in order to use the KubeOps package.
  2. Install the kubeops*.rpm on your admin machine.
    sudo dnf install -y kubeopsctl*.rpm
    
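
    You can verify the installation afterwards:

    kubeopsctl --version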

Working with KubeOpsCtl

Before starting with the KubeOps cluster, it is important to check whether Podman is running.

  • To verify that podman is running, use the following command:

    sudo systemctl status podman
    
  • To start and enable podman use the following commands:

    sudo systemctl enable podman
    
    sudo systemctl start podman
    
Note: This must be done with the root user or with a user with sudo privileges.

Run kubeopsctl on your commandline like:

kubeopsctl apply -f kubeopsctl.yaml

kubeopsctl.yaml parameters

The names of the nodes should be the same as the hostnames of the machines.

Choose the appropriate imagePullRegistry based on your kubeops version.

### General values for registry access ###
imagePullRegistry: "registry1.kubernative.net/lima" # mandatory, the registry from which the images for the cluster are pulled
localRegistry: false # mandatory, set to true if you use a local registry, if you want to set the value to true then you also have to set harbor to true
### General values for registry access ###
imagePullRegistry: "registry.kubeops.net/kubeops" # mandatory, the registry from which the images for the cluster are pulled
localRegistry: false # mandatory, set to true if you use a local registry, if you want to set the value to true then you also have to set harbor to true

The imagePullRegistry parameter specifies the registry from which the images for the platform software are pulled. The localRegistry parameter enables the use of an insecure, local registry for pulling images.

### Values for setup configuration ###
apiVersion: kubeops/kubeopsctl/alpha/v5 # mandatory
clusterName: "example" # mandatory
clusterUser: "myuser" # mandatory
kubernetesVersion: "1.30.0" # mandatory, check lima documentation
#masterHost: optional if you have an hostname, default value in "masterIP"
masterIP: 10.2.10.11 # mandatory
firewall: "nftables" # mandatory, default "nftables"
pluginNetwork: "calico" # mandatory, default "calico"
containerRuntime: "containerd" # mandatory
clusterOS: "Red Hat Enterprise Linux" # mandatory, must be "Red Hat Enterprise Linux"
### Additional values for cluster configuration
useInsecureRegistry: false # optional, default is false
ignoreFirewallError: false # optional, default is false
serviceSubnet: 192.168.128.0/17 # optional, default "192.168.128.0/17"
podSubnet: 192.168.0.0/17 # optional, default "192.168.0.0/17"
debug: false # optional, default is false
logLevel: vvvvv # optional, default "vvvvv"
systemCpu: "1" # optional, default "1"
systemMemory: "2G" # optional, default "2G"
sudo: true # optional, default is true
tmpCopyDir: "/tmp" # optional, default is /tmp
createCluster: true # optional, default is true
updateRegistry: true # optional, default is true
  • The parameter clusterName is used to interact with and manage the cluster later on, e.g. if you want to change the runtime, you need the clusterName parameter.

  • The clusterUser is the Linux user for using the cluster. The clusterOS is the Linux distribution of the cluster.

  • masterIP is the IP address of the cluster master or the first master, which is later used for interacting with the cluster.

  • useInsecureRegistry is for using a local and insecure registry for pulling images for the lima software.

  • ignoreFirewallError is a parameter for ignoring firewall errors while the cluster is created (not while operating on the cluster).

  • serviceSubnet is the subnet for all Kubernetes service IP addresses.

  • podSubnet is the subnet for all Kubernetes pod IP addresses.

  • systemCpu is the maximum amount of CPU that the kube-apiserver is allowed to use.

  • sudo is a parameter for using sudo for commands that need sudo rights if you use a non-root Linux user.

  • tmpCopyDir is a parameter that sets the folder on the cluster nodes to which lima images will be copied.

  • createCluster is a parameter with which you can specify whether you want to create a cluster or not.

  • updateRegistry is a parameter for updating the docker registry.

zones:
  - name: zone1 
    nodes:
      master:
        - name: cluster1master1
          ipAdress: 10.2.10.11
          user: myuser
          systemCpu: 200m
          systemMemory: 200Mi 
          status: active
          kubeversion: 1.30.0
        - name: cluster1master2
          ipAdress: 10.2.10.12
          user: myuser
          systemCpu: 200m
          systemMemory: 200Mi 
          status: active
          kubeversion: 1.30.0
      worker:
        - name: cluster1worker1
          ipAdress: 10.2.10.14
          user: myuser
          systemCpu: 200m
          systemMemory: 200Mi 
          status: active
          kubeversion: 1.30.0
        - name: cluster1worker2
          ipAdress: 10.2.10.15
          user: myuser
          systemCpu: 200m
          systemMemory: 200Mi 
          status: active
          kubeversion: 1.30.0
  - name: zone2
    nodes:
      master:
        - name: cluster1master3
          ipAdress: 10.2.10.13
          user: myuser
          systemCpu: 200m
          systemMemory: 200Mi 
          status: active
          kubeversion: 1.30.0  
      worker:
        - name: cluster1worker3
          ipAdress: 10.2.10.16
          user: myuser
          systemCpu: 200m
          systemMemory: 200Mi 
          status: active
          kubeversion: 1.30.0

This YAML content is mandatory and describes a configuration for managing multiple zones in a Kubernetes cluster. Let’s break it down step by step:

  • zones: This is the top-level key in the YAML file, representing a list of zones within the Kubernetes cluster.

    • zone1 and zone2: These are two zones within the cluster, each with its own configuration.

      • nodes: This is a sub-key under each zone, indicating the different types of nodes within that zone.

        • master: This is a sub-key under nodes, representing the master nodes in the zone.

          • cluster1master1, cluster1master2, and cluster1master3: These are individual master nodes in the cluster, each with its own configuration settings. They have attributes like name (the node name has to be equal to the host name), ipAdress (IP address), user (the user associated with the node), systemCpu (CPU resources allocated to the system), systemMemory (system memory allocated), status (the status of the node, either "active" or "drained"), and kubeversion (the Kubernetes version running on the node). The kubeversion specified for each node applies to that node. NOTE: If you drain too many nodes, you may have too few OSDs for Rook.
        • worker: This is another sub-key under nodes, representing the worker nodes in the zone.

          • cluster1worker1, cluster1worker2, and cluster1worker3: Similar to the master nodes, these are individual worker nodes in the cluster, each with its own configuration settings, including name, IP address, user, system resources, status, and Kubernetes version.
# mandatory, set to true if you want to install it into your cluster
rook-ceph: true
harbor: true # if localRegistry is set to true, harbor also needs to be set to true
opensearch: true
opensearch-dashboards: true
logstash: true
filebeat: true
prometheus: true
opa: true
kubeops-dashboard: true
certman: true
ingress: true 
keycloak: true
velero: true

These values are booleans that decide which applications are installed into the cluster later on.

# Global values, will be overwritten by the corresponding values of the individual packages
namespace: "kubeops"
storageClass: "rook-cephfs"

These global values will be used for installing the packages, but will be overwritten by the corresponding package-level settings.

  • namespace defines the Kubernetes namespace in which the applications are deployed
  • storageClass defines the name of the StorageClass resource that will be used by the applications.
rookValues:
  namespace: kubeops
  cluster:
    spec:
      dataDirHostPath: "/var/lib/rook" # optional, default is /var/lib/rook
    storage:
      useAllNodes: true # optional, default value: true
      useAllDevices: true # optional, default value: true
      deviceFilter: "^sd[a-b]" # optional, will only be used if useAllDevices is set to false
      config:
        metadataDevice: "sda" # optional, only set this value, if there is a device available
      nodes: # optional if useAllNodes is set to true, otherwise mandatory
        - name: "<ip-adress of node_1>"
          devices:
            - name: "sdb" 
        - name: "<ip-adress of node_2>"
          deviceFilter: "^sd[a-b]"
          config:
            metadataDevice: "sda" # optional
    resources:
      mgr:
        requests:
          cpu: "500m" # optional, default is 500m, limit: 1000m
          memory: "512Mi" # optional, default is 1Gi, limit: 1Gi
      mon:
        requests:
          cpu: "1" # optional, default is 1, limit: 2000m
          memory: "1Gi" # optional, default is 1Gi, limit: 2Gi
      osd:
        requests:
          cpu: "1" # optional, default is 1, limit: 2
          memory: "1Gi" # optional, default is 4Gi, limit: 4Gi
    dashboard:
      enabled: true
  operator:
    data:
      rookLogLevel: "DEBUG" # optional, default is DEBUG
  • The namespace parameter is important for the applications, because it decides in which namespace the individual applications are deployed.
  • dataDirHostPath sets the path for the configuration files of rook.
  • useAllNodes is a parameter of rook-ceph; if it is set to true, all worker nodes will be used for rook-ceph.
  • useAllDevices is a parameter of rook-ceph; if it is set to true, all possible devices will be used for rook-ceph.
  • deviceFilter is a global filter to only select certain device names. This example matches names starting with sda or sdb. It will only be used if useAllDevices is set to false and will be ignored if individual devices have been specified on a node.
  • metadataDevice: Name of a device or LVM to use for the metadata of OSDs (daemons for storing data on the local file system) on each node. Performance can be improved by using a low-latency device (SSD or NVMe) as the metadata device, while other spinning-platter (HDD) devices on a node are used to store data. This global setting will be overwritten by the corresponding node-level setting.
  • nodes: Names of individual nodes in the cluster that should have their storage included. Will only be used if useAllNodes is set to false. Specific configurations of the individual nodes will overwrite global settings.
  • resources refers to the CPU and memory that the parts of rook-ceph will be requesting. In this case these are the manager, the monitoring pods and the OSDs (they have the job of managing the local storage of the nodes and together they form the distributed storage) as well as the filesystem and object-store pods (they manage the respective storage solution).
  • rookLogLevel: the log level of rook-ceph. DEBUG provides the most informative logs.
harborValues: 
  namespace: kubeops # optional, default is kubeops
  harborpass: "password" # mandatory: set password for harbor access
  databasePassword: "Postgres_Password" # mandatory: set password for database access
  redisPassword: "Redis_Password" # mandatory: set password for redis access
  externalURL: http://10.2.10.11:30002 # mandatory, the ip address and port from which harbor is accessible outside of the cluster
  nodePort: 30002 # mandatory
  hostname: harbor.local # mandatory
  harborPersistence:
    persistentVolumeClaim:
      registry:
        size: 40Gi # optional, default is 40Gi
        storageClass: "rook-cephfs" #optional, default is rook-cephfs
      jobservice:
        jobLog:
          size: 1Gi # optional, default is 1Gi
          storageClass: "rook-cephfs" #optional, default is rook-cephfs
      database:
        size: 1Gi # optional, default is 1Gi
        storageClass: "rook-cephfs" #optional, default is rook-cephfs
      redis:
        size: 1Gi # optional, default is 1Gi
        storageClass: "rook-cephfs" #optional, default is rook-cephfs
      trivy: 
        size: 5Gi # optional, default is 5Gi
        storageClass: "rook-cephfs" #optional, default is rook-cephfs

You can set the root password for the Postgres database, Redis and Harbor. For the persistent volumes of Harbor, the sizes and the storage class are also templatable; this applies to all applications of Harbor, e.g. Trivy for image scanning or the ChartMuseum for Helm charts.

###Values for filebeat deployment###
filebeatValues:
  namespace: kubeops #optional, default is kubeops
  • namespace value specifies the Kubernetes namespace where filebeat will be deployed.
###Values for Logstash deployment###
##For detailed explanation of each key see: https://github.com/elastic/helm-charts/releases/tag/v7.16.3###
logstashValues:
  namespace: kubeops
  volumeClaimTemplate:
    accessModes: 
      - ReadWriteMany #optional, default is [ReadWriteMany]
    resources:
      requests:
        storage: 1Gi # mandatory, depending on storage capacity
    storageClass: "rook-cephfs" #optional, default is rook-cephfs

For Logstash, the PVC size is also templatable.

###Values for OpenSearch-Dashboards deployment###
##For detailed explanation of each key see: https://github.com/opensearch-project/helm-charts/tree/main/charts/opensearch-dashboards###
openSearchDashboardValues:
  namespace: kubeops
  nodePort: 30050
###Values for OpenSearch deployment###
##For detailed explanation of each key see: https://github.com/opensearch-project/helm-charts/tree/main/charts/opensearch###
openSearchValues:
  namespace: kubeops
  opensearchJavaOpts: "-Xmx512M -Xms512M" # optional, default is -Xmx512M -Xms512M
  resources:
    requests:
      cpu: "250m" # optional, default is 250m
      memory: "1024Mi" # optional, default is 1024Mi
    limits:
      cpu: "300m" # optional, default is 300m
      memory: "3072Mi" # optional, default is 3072Mi
  persistence:
    size: 4Gi # mandatory
    enabled: "true" # optional, default is true
    enableInitChown: "false" # optional, default is false
    labels:
      enabled: "false" # optional, default is false
    storageClass: "rook-cephfs" # optional, default is rook-cephfs
    accessModes:
      - "ReadWriteMany" # optional, default is {ReadWriteMany}
  securityConfig:
    enabled: false # optional, default value: false
    ### Additional values can be set, if securityConfig is enabled:
    # path: "/usr/share/opensearch/plugins/opensearch-security/securityconfig"
    # actionGroupsSecret:
    # configSecret:
    # internalUsersSecret: internal-users-config-secret
    # rolesSecret:
    # rolesMappingSecret:
    # tenantsSecret:
    # config:
    #   securityConfigSecret: ""
    #   dataComplete: true
    #   data: {}
  replicas: "3" # optional, default is 3
#--------------------------------------------------------------------------------------------------------------------------------
  • opensearchJavaOpts sets the size of the Java heap.
  • enableInitChown changes the owner of the OpenSearch configuration files so that non-root users can modify them.
  • If you want labels for the OpenSearch pods in the cluster, you can enable them with the enabled parameter under the labels subtree.
  • If you want to use a custom security config, you can enable it and then use parameters like the path to the config file. If you want more info, you can find it here.
  • The replicas are 3 by default, but you can template the value for better scaling.

###Values for Prometheus deployment###
prometheusValues:
  namespace: kubeops # optional, default is kubeops
  grafanaUsername: "user" # optional, default is user
  grafanaPassword: "password" # optional, default is password
  retentionSize: "24GB" # optional, default is 24GB
  grafanaResources:
    nodePort: 30211 # optional, default is 30211
    storageClass: "rook-cephfs" # optional, default is rook-cephfs
    storage: 5Gi # optional, default is 5Gi
    grafanaUsername: "admin" # optional, default is admin
    grafanaPassword: "admin" # optional, default is admin
    retention: 10d # mandatory
    retentionSize: "24GB" # mandatory
    storageClass: "rook-cephfs" # optional, default is rook-cephfs
    storage: 25Gi # optional, default is 25Gi

  prometheusResources:
    retention: 10d # optional, default is 10d
    retentionSize: "24GB" # optional, default is "24GB"
    storageClass: "rook-cephfs" # optional, default is rook-cephfs
    storage: 25Gi # optional, default is 25Gi

The nodePort is 30211, so you can visit the Grafana application on every master at <ip-address of master>:30211, but you can template and thus change it.
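
For example, from the admin machine you can check that Grafana answers on the NodePort; replace the IP with the address of one of your masters:

curl -I http://10.2.10.11:30211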

###Values for OPA deployment###
opaValues:
  namespace: kubeops
  • namespace value specifies the Kubernetes namespace where OPA will be deployed.
###Values for KubeOps-Dashboard (Headlamp) deployment###
kubeOpsDashboardValues:
  namespace: kubeops
  hostname: kubeops-dashboard.local
  service:
    nodePort: 30007
  • namespace value specifies the Kubernetes namespace where KubeOps-Dashboard will be deployed.
  • hostname is for accessing the KubeOps-Dashboard service.
  • the nodePort value specifies the node port for accessing KubeOps-Dashboard.
###Values for cert-manager deployment###
certmanValues:
  namespace: kubeops
  replicaCount: 3
  logLevel: 2
  • namespace value specifies the Kubernetes namespace where cert-manager will be deployed.
  • replicaCount specifies the number of replicas for the cert-manager deployment.
  • logLevel specifies the logging level for cert-manager.
###Values for ingress-nginx deployment###
ingressValues:
  namespace: kubeops
  externalIPs: []
  • namespace value specifies the Kubernetes namespace where ingress-nginx will be deployed.
  • externalIPs value specifies a list of external IP addresses that will be used to expose the ingress-nginx service. This allows external traffic to reach the ingress controller. The value for this key is expected to be provided as a list of IP addresses.
###Values for keycloak deployment###
keycloakValues:
  namespace: "kubeops" # Optional, default is "keycloak"
  storageClass: "rook-cephfs" # Optional, default is "rook-cephfs"
  nodePort: "30180" # Optional, default is "30180"
  hostname: keycloak.local
  keycloak:
    auth:
      adminUser: admin # Optional, default is admin
      adminPassword: admin # Optional, default is admin
  postgresql:
    auth:
      postgresUserPassword: "" # Optional, default is ""
      username: bn_keycloak # Optional, default is "bn_keycloak"
      password: "" # Optional, default is ""
      database: bitnami_keycloak # Optional, default is "bitnami_keycloak"
    volumeSize: "8Gi"
  • namespace value specifies the Kubernetes namespace where keycloak will be deployed.
  • storageClass value specifies the storage class to be used for persistent storage in Kubernetes. If not provided, it defaults to “rook-cephfs”.
  • nodePort value specifies the node port for accessing Keycloak. If not provided, it defaults to “30180”.
  • hostname value specifies the hostname for accessing the Keycloak service.
  • adminUser value specifies the username for the Keycloak admin user. Defaults to “admin”.
  • adminPassword value specifies the password for the Keycloak admin user. Defaults to “admin”.
  • postgresUserPassword value specifies the password for the PostgreSQL database. Defaults to an empty string.
  • username value specifies the username for the PostgreSQL database. Defaults to “bn_keycloak”.
  • password value specifies the password for the PostgreSQL database. Defaults to an empty string.
  • database value specifies the name of the PostgreSQL database. Defaults to “bitnami_keycloak”.
veleroValues:
  namespace: "velero"
  accessKeyId: "your_s3_storage_username"
  secretAccessKey: "your_s3_storage_password"
  useNodeAgent: false
  defaultVolumesToFsBackup: false
  provider: "aws"
  bucket: "velero"
  useVolumeSnapshots: false
  backupLocationConfig:
    region: "minio"
    s3ForcePathStyle: true
    s3Url: "http://minio.velero.svc:9000"
  • namespace: Specifies the Kubernetes namespace where Velero will be deployed.
  • accessKeyId: Your access key ID for accessing the S3 storage service.
  • secretAccessKey: Your secret access key for accessing the S3 storage service.
  • useNodeAgent: Indicates whether to use a node agent for backup operations. If set to true, Velero will use a node agent.
  • defaultVolumesToFsBackup: Specifies whether to default volumes to file system backup. If set to true, Velero will use file system backup by default.
  • provider: Specifies the cloud provider where the storage service resides.
  • bucket: The name of the S3 bucket where Velero will store backups.
  • useVolumeSnapshots: Indicates whether to use volume snapshots for backups. If set to true, Velero will use volume snapshots.
  • backupLocationConfig: Configuration for the backup location.
  • region: Specifies the region where the S3 storage service is located.
  • s3ForcePathStyle: Specifies whether to force the use of path-style URLs for S3 requests.
  • s3Url: The URL for accessing the S3-compatible storage service.

1.3 - Set up a Basic Cluster

This guide shows you how to set up a cluster with 7 nodes using kubeopsctl.

In this quickstart you will learn about:

  • kubeopsctl platform requirements
  • best practices for machine setup
  • set up a secure headless environment for communication between the admin machine and all masters / workers
  • how to install required software
  • how to use the official KubeOps Website to download kubeopsctl
  • how to create a basic cluster

After the installation, kubeopsctl is available as a command line interface.

Prerequisites

To get the most out of this guide, the following requirements should be met:

  • basic understanding of Linux environments, bash / shell
  • basic understanding of text editors, vi / nano
  • administrator privileges (root) are granted

A total of 7 machines (virtual or physical) are required and need to be set up:

  • one admin - control plane, this machine will manage all tasks on the cluster and integrated machines
  • three masters
  • three workers

The final cluster will have the following structure. Masters and workers are added to two cluster zones.

Step 1 - Minimal Platform Requirements

kubeopsctl is designed to work with the latest versions of the following operating systems.

Supported Operating Systems

Operating system: Red Hat Enterprise Linux (version 8.2 or newer)

System requirements Admin Nodes

CPU: 2x
Memory: 2 GB
Disk space: 50 GB
Internet access: Yes (to use the default KubeOps Registry)

Important: Choose the correct KubeOps Registry based on your version:

"registry1.kubernative.net/lima"
"registry.kubeops.net/kubeops"

System requirements Master Nodes

CPU: 4x
Memory: 8 GB
Disk space: 50 GB

System requirements Worker Nodes

CPU: 8x
Memory: 16 GB
Disk space: 50 GB
Additional: 50 GB unformatted, non-partitioned disk storage for Ceph

For more information about rook-ceph, see the prerequisites in its official documentation.

Step 2 - Set up your Machines

You can setup the admin, master and worker nodes as virtual or as physical machines.

During the setup of the machines, make sure that you meet the following requirements:

  • heed the platform requirements as mentioned above
  • all machines need to be synchronized with the current time
  • all machines need to be within the same network environment

To get the most out of this guide, use the following hostnames for your basic cluster:

Hostnames

Machine / Purpose: Hostnames
Admin: admin
Masters: master1, master2, master3
Workers: worker1, worker2, worker3
Assigning Hostnames manually

If you need to assign hostnames manually, login to the machine and use the hostnamectl set-hostname command.

hostnamectl set-hostname master1

Repeat this process for all machines where necessary.

Remove firewalld on Red Hat Enterprise Linux 8

If you are using Red Hat Enterprise Linux 8, you must remove firewalld. Kubeopsctl installs nftables by default.
You can use the following commands to remove firewalld:

systemctl disable --now firewalld
systemctl mask firewalld
dnf remove -y firewalld
reboot

Step 3 - Set up Access for the Admin Machine

The admin machine needs secure and headless access to all other machines.

Set up IP Addresses for DNS

It is recommended that a DNS service is running. If you do not have a DNS service, you need to edit the /etc/hosts file on the admin machine.

Add the following lines at the end of the /etc/hosts file. Replace the IP addresses with the actual addresses you noted during the setup of all machines. Replace the hostnames with the actual hostnames you assigned during the setup of all machines:

10.2.10.10 admin
10.2.10.11 master1
10.2.10.12 master2
10.2.10.13 master3
10.2.10.14 worker1
10.2.10.15 worker2
10.2.10.16 worker3
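
You can check that the names resolve correctly from the admin machine, for example:

getent hosts master1 worker1
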
Set up Secure Access

To securely access the master and worker machines, you need to create an SSH key pair (private and public key) on the admin machine. Afterwards, copy the public key onto each machine.

To learn more about ssh and key-pairs see our guide on How to set up SSH keys.
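
A minimal sequence, assuming OpenSSH is installed and the hostnames above resolve, looks like this (repeat the copy and test steps for every master and worker):

ssh-keygen
ssh-copy-id master1
ssh master1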

Step 4 - Install Podman on the Admin Machine

To ensure compatibility across different containerized environments, kubeopsctl requires the installation of Podman (latest version).

Install Podman on the admin machine using the inbuilt package manager.

sudo dnf install -y podman
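
You can verify the Podman installation afterwards:

podman --version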

Step 5 - Install kubeopsctl on the Admin Machine

With everything prepared, the next step is to download and install kubeopsctl on the admin machine.

Downloading KOSI

Log in to your KubeOps account. If you do not already have an account, you can create one on the KubeOps website.

Download your desired versions of the KOSI and kubeopsctl package files (.rpm) from the official download page onto the admin machine.

  • Installing KOSI

    sudo dnf install -y <path>/<kosi_rpm>
    
  • Login to the kubeops hub using kosi

    Use the following command to begin the login process. After you input the password, you will gain access to the kubeops hub:

      kosi login -u <user>
    
Installing kubeopsctl

Install kubeopsctl using the inbuilt package manager. Replace <path> and <kubeopsctl_rpm> with the respective path and file name of the kubeopsctl package file.

To install kubeopsctl use the following command.

sudo dnf install -y <path>/<kubeopsctl_rpm>
Create Work Folders and Set Up Environment Variables

After the setup, you need to create work folders where kubeopsctl can save and manage configurations and other settings.

mkdir -p ~/kubeops
mkdir -p ~/kubeops/lima

To work comfortably, you need to assign these folders to the predefined environment variables KUBEOPSROOT and LIMAROOT.

echo 'export KUBEOPSROOT="${HOME}/kubeops"' >> $HOME/.bashrc
echo 'export LIMAROOT="${HOME}/kubeops/lima"' >> $HOME/.bashrc
source $HOME/.bashrc
Verify your Installation

To verify the installation of kubeopsctl on your system, use the command kubeopsctl version.

kubeopsctl version

Step 6 - Configure the Basic Cluster

With everything ready to start, the next step is to configure the cluster.

For configurations, kubeopsctl uses the YAML format.

Use an editor to create and edit the configuration file:

nano ~/basicCluster.yml

Copy and paste all lines into the file. You need to edit specific parameters according to the assigned IP addresses, hostnames etc.:

  • master/name - set all master hostnames
  • master/ipAdress - set all master IP addresses
  • worker/name - set all worker hostnames
  • worker/ipAdress - set all worker IP addresses
apiVersion: kubeops/kubeopsctl/alpha/v5 # mandatory
imagePullRegistry: "registry1.kubernative.net/lima"
localRegistry: true
clusterName: "example"
kubernetesVersion: "1.30.0"
masterIP: 10.2.10.11
systemCpu: "200m"
systemMemory: "200Mi"

zones:
  - name: zone1
    nodes:
      master:
        - name: master1
          ipAdress: 10.2.10.11
          status: active
          kubeversion: 1.30.0
        - name: master2
          ipAdress: 10.2.10.12
          status: active
          kubeversion: 1.30.0
      worker:
        - name: worker1
          ipAdress: 10.2.10.14
          status: active
          kubeversion: 1.30.0
        - name: worker2
          ipAdress: 10.2.10.15
          status: active
          kubeversion: 1.30.0
  - name: zone2
    nodes:
      master:
        - name: master3
          ipAdress: 10.2.10.13
          status: active
          kubeversion: 1.30.0  
      worker:
        - name: worker3
          ipAdress: 10.2.10.16
          status: active
          kubeversion: 1.30.0


# mandatory, set to true if you want to install it into your cluster
rook-ceph: true
harbor: true
opensearch: true
opensearch-dashboards: true
logstash: true
filebeat: true
prometheus: true
opa: true
kubeops-dashboard: true
certman: true
ingress: true 
keycloak: true
velero: true

harborValues: 
  harborpass: "password" # change to your desired password
  databasePassword: "Postgres_Password" # change to your desired password
  redisPassword: "Redis_Password" 
  externalURL: http://10.2.10.11:30002 # change to ip adress of master1

prometheusValues:
  grafanaUsername: "user"
  grafanaPassword: "password"

ingressValues:
  externalIPs: []

keycloakValues:
  keycloak:
    auth:
      adminUser: admin
      adminPassword: admin
  postgresql:
    auth:
      postgresPassword: ""
      username: bn_keycloak
      password: ""
      database: bitnami_keycloak
      existingSecret: ""

veleroValues:
  accessKeyId: "your_s3_storage_username"
  secretAccessKey: "your_s3_storage_password"
apiVersion: kubeops/kubeopsctl/alpha/v5 # mandatory
imagePullRegistry: "registry.kubeops.net/kubeops"
localRegistry: true
clusterName: "example"
kubernetesVersion: "1.30.0"
masterIP: 10.2.10.11
systemCpu: "200m"
systemMemory: "200Mi"

zones:
  - name: zone1
    nodes:
      master:
        - name: master1
          ipAdress: 10.2.10.11
          status: active
          kubeversion: 1.30.0
        - name: master2
          ipAdress: 10.2.10.12
          status: active
          kubeversion: 1.30.0
      worker:
        - name: worker1
          ipAdress: 10.2.10.14
          status: active
          kubeversion: 1.30.0
        - name: worker2
          ipAdress: 10.2.10.15
          status: active
          kubeversion: 1.30.0
  - name: zone2
    nodes:
      master:
        - name: master3
          ipAdress: 10.2.10.13
          status: active
          kubeversion: 1.30.0  
      worker:
        - name: worker3
          ipAdress: 10.2.10.16
          status: active
          kubeversion: 1.30.0


# mandatory, set to true if you want to install it into your cluster
rook-ceph: true
harbor: true
opensearch: true
opensearch-dashboards: true
logstash: true
filebeat: true
prometheus: true
opa: true
kubeops-dashboard: true
certman: true
ingress: true 
keycloak: true
velero: true

harborValues: 
  harborpass: "password" # change to your desired password
  databasePassword: "Postgres_Password" # change to your desired password
  redisPassword: "Redis_Password" 
  externalURL: http://10.2.10.11:30002 # change to ip adress of master1

prometheusValues:
  grafanaUsername: "user"
  grafanaPassword: "password"

ingressValues:
  externalIPs: []

keycloakValues:
  keycloak:
    auth:
      adminUser: admin
      adminPassword: admin
  postgresql:
    auth:
      postgresUserPassword: ""
      username: bn_keycloak
      password: ""
      database: bitnami_keycloak
    volumeSize: 8Gi

veleroValues:
  accessKeyId: "your_s3_storage_username"
  secretAccessKey: "your_s3_storage_password"

Step 7 - Start the Basic Cluster

After the configuration is set up correctly, you can start your basic cluster for the first time:

kubeopsctl apply -f ~/basicCluster.yml
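
After the run has finished, you can check the state of the cluster, for example:

kubeopsctl status cluster/example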

1.4 - Commands, Cluster Health and Modifications

An overview on commands, cluster health, and basic modifications. This guide provides a straightforward approach to mastering kubeopsctl command line operations and ensuring the stability and performance of your cluster infrastructure.

In this quickstart you will learn about:

  • basic kubeopsctl command line operations
  • apply changes to cluster, e.g. add new worker

Prerequisites

To get the most out of this guide, the following requirements should be met:

  • kubeopsctl is installed, see kubeopsctl Installation Guide
  • a cluster is installed and running
  • basic understanding of Linux environments, bash / shell
  • basic understanding of text editors, vi / nano
  • administrator privileges (root) are granted

Overview of the kubeopsctl CLI

kubeopsctl provides a set of command line operations. For more information, see here.

The main kubeopsctl commands are:

Command: Description
--version: Shows the current version of kubeopsctl.
--help: Shows an overview of all available commands.
apply: Sets up the kubeops platform with a configuration file.
change registry: Changes the currently used registry to a different one with a given configuration file.
drain: Drains a cluster, zone or node.
uncordon: Uncordons a cluster, zone or node, so that new pods can be scheduled on it again.
upgrade: Upgrades the Kubernetes version of a cluster, zone or node.
status: Prints the state of a cluster.

Show kubeopsctl Version

kubeopsctl --version

Show kubeopsctl Help

kubeopsctl --help
Setup Kubernetes Cluster or Apply Changes

Sets up the kubeops platform with a configuration file. Use the flag -f and pass the configuration file, e.g. kubeopsctl.yaml.

kubeopsctl apply -f kubeopsctl.yaml

You can also set the log level to a specific value. Available log levels are:

  • Error
  • Warning
  • Info (default log level)
  • Debug1
  • Debug2
  • Debug3

For example:

kubeopsctl apply -f kubeopsctl.yaml -l Debug3

The default log level is Info.

The command kubeopsctl apply can also be used to modify a cluster. For more information see Apply Changes to Cluster within this document.

Change the Registry

Changes the currently used registry to a different one with a given configuration file. For example:

kubeopsctl change registry -f kubeopsctl.yaml -r 10.2.10.11/library -t localhost/library
  • The -f parameter specifies the YAML parameter file.
  • The -r parameter is used to pull the docker images which are included in the package to a given local docker registry.
  • The -t parameter is used to tag the images with localhost. In the scenario that the registry of the cluster is exposed to the admin via a network-internal domain name, but this name cannot be resolved by the nodes, the -t flag can be used to use the cluster-internal hostname of the registry.
Draining

For draining clusters, zones or nodes use kubeopsctl drain <type>/<name>.

To drain a cluster use and replace <clustername> with the desired cluster:

kubeopsctl drain cluster/<clustername>

To drain a zone use and replace <zonename> with the desired zone:

kubeopsctl drain zone/<zonename>

To drain a node use and replace <nodename> with the desired node:

kubeopsctl drain node/<nodename>
Uncordoning

Uncordoning is the counterpart of draining: draining removes all running tasks/pods so the node can be fixed or taken offline safely, while uncordoning opens the node up again for new tasks/pods.

For uncordoning clusters, zones or nodes use kubeopsctl uncordon TYPE/NAME.

To uncordon a cluster use and replace <clustername> with the desired cluster:

kubeopsctl uncordon cluster/<clustername>

To uncordon a zone use and replace <zonename> with the desired zone:

kubeopsctl uncordon zone/<zonename>

To uncordon a node use and replace <nodename> with the desired node:

kubeopsctl uncordon node/<nodename>
Upgrading

Upgrade clusters, zones or nodes by using kubeopsctl upgrade -v <version>.

To upgrade a cluster use and replace <clustername> with the desired cluster:

kubeopsctl upgrade cluster/<clustername> -v 1.26.6

To upgrade a zone use and replace <zonename> with the desired zone:

kubeopsctl upgrade zone/<zonename> -v 1.26.6

To upgrade a node use and replace <nodename> with the desired node:

kubeopsctl upgrade node/<nodename> -v 1.26.6

Check on Health and State of Clusters

To check on health and state of a cluster use the command kubeopsctl status cluster/<clustername>.

For example:

kubeopsctl status cluster/basiccluster

Apply Changes to Cluster

If the cluster is already running, it may be necessary to make additional settings or carry out updates.

For example:

  • when IP addresses or host names of nodes change
  • if you want to add or remove nodes

In any case, you need to modify the base configuration and apply the necessary changes to the configuration file. Changes will be applied statewise; in short, only the difference will be applied.

For more information, see How to Guides.

Example: Add new Worker

If you want to add a new worker to your cluster, edit the configuration file.

In this example we reuse the basic configuration basiccluster.yml from the previous chapter (see Set up a Basic Cluster). Use an editor to edit the configuration file:

nano ~/basiccluster.yml

Add the lines for an additional worker4 at zone2 to the configuration file and set the desired ipAdress for the new worker (see the worker4 entry in the example below).

apiVersion: kubeops/kubeopsctl/alpha/v5 # mandatory
imagePullRegistry: "registry1.kubernative.net/lima"
localRegistry: true
clusterName: "example"
kubernetesVersion: "1.30.0"
masterIP: 10.2.10.11

zones:
  - name: zone1
    nodes:
      master:
        - name: master1
          ipAdress: 10.2.10.11
          status: active
          kubeversion: 1.30.0
        - name: master2
          ipAdress: 10.2.10.12
          status: active
          kubeversion: 1.30.0
      worker:
        - name: worker1
          ipAdress: 10.2.10.14
          status: active
          kubeversion: 1.30.0
        - name: worker2
          ipAdress: 10.2.10.15
          status: active
          kubeversion: 1.30.0
  - name: zone2
    nodes:
      master:
        - name: master3
          ipAdress: 10.2.10.13
          status: active
          kubeversion: 1.30.0
      worker:
        - name: worker3
          ipAdress: 10.2.10.16
          status: active
          kubeversion: 1.30.0
        - name: worker4
          ipAdress: 10.2.10.17 # example IP for the new worker, adjust to your environment
          status: active
          kubeversion: 1.30.0

# mandatory, set to true if you want to install it into your cluster
rook-ceph: true
harbor: true
opensearch: true
opensearch-dashboards: true
logstash: true
filebeat: true
prometheus: true
opa: true
kubeops-dashboard: true
certman: true
ingress: true
keycloak: true
velero: true

rookValues:
  cluster:
    resources:
      mgr:
        requests:
          cpu: 500m
          memory: 1Gi
      mon:
        requests:
          cpu: 500m
          memory: 1Gi
      osd:
        requests:
          cpu: 500m
          memory: 1Gi

harborValues:
  harborpass: "password" # change to your desired password
  databasePassword: "Postgres_Password" # change to your desired password
  redisPassword: "Redis_Password"
  externalURL: http://10.2.10.11:30002 # change to ip address of master1
  nodePort: 30002
  harborPersistence:
    persistentVolumeClaim:
      registry:
        size: 40Gi
      jobservice:
        jobLog:
          size: 1Gi
      database:
        size: 1Gi
      redis:
        size: 1Gi
      trivy:
        size: 5Gi

prometheusValues:
  grafanaUsername: "user"
  grafanaPassword: "password"

logstashValues:
  volumeClaimTemplate:
    resources:
      requests:
        storage: 1Gi

openSearchValues:
  persistence:
    size: 4Gi

keycloakValues:
  keycloak:
    auth:
      adminUser: admin
      adminPassword: admin
  postgresql:
    auth:
      postgresPassword: ""
      username: bn_keycloak
      password: ""
      database: bitnami_keycloak
      existingSecret: ""

veleroValues:
  namespace: "velero"
  accessKeyId: "your_s3_storage_username"
  secretAccessKey: "your_s3_storage_password"
  useNodeAgent: false
  defaultVolumesToFsBackup: false
  provider: "aws"
  bucket: "velero"
  useVolumeSnapshots: false
  backupLocationConfig:
    region: "minio"
    s3ForcePathStyle: true
    s3Url: "http://minio.velero.svc:9000"

apiVersion: kubeops/kubeopsctl/alpha/v5 # mandatory
imagePullRegistry: "registry.kubeops.net/kubeops"
localRegistry: true
clusterName: "example"
kubernetesVersion: "1.30.0"
masterIP: 10.2.10.11

zones:
  - name: zone1
    nodes:
      master:
        - name: master1
          ipAdress: 10.2.10.11
          status: active
          kubeversion: 1.30.0
        - name: master2
          ipAdress: 10.2.10.12
          status: active
          kubeversion: 1.30.0
      worker:
        - name: worker1
          ipAdress: 10.2.10.14
          status: active
          kubeversion: 1.30.0
        - name: worker2
          ipAdress: 10.2.10.15
          status: active
          kubeversion: 1.30.0
  - name: zone2
    nodes:
      master:
        - name: master3
          ipAdress: 10.2.10.13
          status: active
          kubeversion: 1.30.0
      worker:
        - name: worker3
          ipAdress: 10.2.10.16
          status: active
          kubeversion: 1.30.0
        - name: worker4
          ipAdress: 10.2.10.17 # example IP for the new worker, adjust to your environment
          status: active
          kubeversion: 1.30.0

# mandatory, set to true if you want to install it into your cluster
rook-ceph: true
harbor: true
opensearch: true
opensearch-dashboards: true
logstash: true
filebeat: true
prometheus: true
opa: true
kubeops-dashboard: true
certman: true
ingress: true
keycloak: true
velero: true

rookValues:
  cluster:
    resources:
      mgr:
        requests:
          cpu: 500m
          memory: 1Gi
      mon:
        requests:
          cpu: 500m
          memory: 1Gi
      osd:
        requests:
          cpu: 500m
          memory: 1Gi

harborValues:
  harborpass: "password" # change to your desired password
  databasePassword: "Postgres_Password" # change to your desired password
  redisPassword: "Redis_Password"
  externalURL: http://10.2.10.11:30002 # change to ip address of master1
  nodePort: 30002
  harborPersistence:
    persistentVolumeClaim:
      registry:
        size: 40Gi
      jobservice:
        jobLog:
          size: 1Gi
      database:
        size: 1Gi
      redis:
        size: 1Gi
      trivy:
        size: 5Gi

prometheusValues:
  grafanaUsername: "user"
  grafanaPassword: "password"

logstashValues:
  volumeClaimTemplate:
    resources:
      requests:
        storage: 1Gi

openSearchValues:
  persistence:
    size: 4Gi

keycloakValues:
  keycloak:
    auth:
      adminUser: admin
      adminPassword: admin
  postgresql:
    auth:
      postgresUserPassword: ""
      username: bn_keycloak
      password: ""
      database: bitnami_keycloak
    volumeSize: "8Gi"

veleroValues:
  namespace: "velero"
  accessKeyId: "your_s3_storage_username"
  secretAccessKey: "your_s3_storage_password"
  useNodeAgent: false
  defaultVolumesToFsBackup: false
  provider: "aws"
  bucket: "velero"
  useVolumeSnapshots: false
  backupLocationConfig:
    region: "minio"
    s3ForcePathStyle: true
    s3Url: "http://minio.velero.svc:9000"

After the modification is set up correctly, you can apply it to your cluster:

kubeopsctl apply -f ~/basiccluster.yml
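
After the run has finished, you can verify that worker4 has joined the cluster, for example:

kubeopsctl status cluster/example
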
Best Practices for Changes and Modifications

When configuring the first setup for a cluster, keep your base configuration file, e.g. basiccluster.yml.

Since modifications are carried out per state / change, we recommend giving the new configuration files descriptive names and adding a timestamp if necessary.

For Example:

  • basiccluster.yml
  • 20240101-add-worker4.yml
  • 20240101-drain-worker2.yml
  • 20240101-update-kubeversion.yml

2 - How to Guides

Welcome to our comprehensive How-To Guide for using kubeops. Whether you're a beginner aiming to understand the basics or an experienced user looking to fine-tune your skills, this guide is designed to provide you with detailed step-by-step instructions on how to navigate and utilize all the features of kubeops effectively.

In the following sections, you will find everything from initial setup and configuration, to advanced tips and tricks that will help you get the most out of the software. Our aim is to assist you in becoming proficient with kubeops, enhancing both your productivity and your user experience.

Let's get started on your journey to mastering kubeops!

2.1 - Ingress Configuration

Here is a brief overview of how you can configure your ingress manually.

Manual configuration of the Nginx-Ingress-Controller

Right now the Ingress Controller Package is not fully configured. To make complete use of the Ingress capabilities of the cluster, the user needs to manually update some of the settings of the corresponding service.

Locating the service

The service in question is called “ingress-nginx-controller” and can be found in the same namespace as the ingress package itself. To locate the service across all namespaces, you could use the following command.

kubectl get service -A | grep ingress-nginx-controller

This command should return two entries of services, “ingress-nginx-controller” and “ingress-nginx-controller-admission”, though only the first one needs to be further adjusted.

Setting the Ingress-Controller service to type NodePort

To edit the service, you can use the following command, although the actual namespace may be different. This will change the service type to NodePort.

kubectl patch service ingress-nginx-controller -n kubeops -p '{"spec":{"type":"NodePort"}}'

Kubernetes will now automatically assign unused port numbers for the nodePort to allow HTTP and HTTPS connections to the service. These can be retrieved by running the same command used to locate the service. Alternatively, you can use the following command, which adds the port numbers 30080 and 30443 for the respective protocols. By doing so, you have to make sure that these port numbers are not being used by any other NodePort service.

kubectl patch service ingress-nginx-controller -n kubeops --type=json -p '[{"op":"replace","path":"/spec/type","value":"NodePort"}, {"op":"add","path":"/spec/ports/0/nodePort","value":30080}, {"op":"add","path":"/spec/ports/1/nodePort","value":30443}]'

Configuring external IPs

If you have access to external IPs that route to one or more cluster nodes, you can expose your Kubernetes services of any type through these addresses. The command below shows how to add an external IP address to the service with the example value of "192.168.0.1". Keep in mind that this value has to be changed in order to fit your networking settings.

kubectl patch service ingress-nginx-controller -n kubeops -p '{"spec":{"externalIPs":["192.168.0.1"]}}'
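
You can verify the resulting service configuration, including the type, node ports and external IPs, with:

kubectl get service ingress-nginx-controller -n kubeops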

2.2 - Create Cluster

Here is a brief overview of how you can create a simple functional cluster, including prerequisites and step-by-step instructions.

How to create a working cluster?

Pre-requisites

  • maintenance packages installed?
  • network connection?
  • LIMAROOT set

Steps

  • create yaml file
  • create cluster with multiple nodes
  • add nodes to created cluster
  • delete nodes when needed

Once you have completed the KubeOps installation, you are ready to dive into the KubeOps-Platform.

How to use LIMA

Downloaded all maintenance packages? If yes, then you are ready to use LIMA for managing your Kubernetes clusters!

In the following sections we will walk you through a quick cluster setup and adding nodes.

So the first thing to do is to create a YAML file that contains the specifications of your cluster. Customize the file below according to your downloaded maintenance packages, e.g. the parameters kubernetesVersion, firewall, containerRuntime. Also adjust the other parameters like masterPassword, masterHost, apiEndpoint to your environment.

createCluster.yaml

apiVersion: lima/clusterconfig/v1alpha2
spec:
  clusterName: ExampleClusterName
  masterUser: root
  masterPassword: "myPassword"
  masterHost: 10.2.1.11
  kubernetesVersion: 1.22.2
  registry: registry.preprod.kubernative.net/kubeops
  useInsecureRegistry: false
  ignoreFirewallError: false
  firewall: firewalld
  apiEndpoint: 10.2.1.11:6443
  serviceSubnet: 192.168.128.0/20
  podSubnet: 192.168.144.0/20
  debug: true
  logLevel: v
  systemCpu: 100m
  systemMemory: 100Mi
  sudo: false
  containerRuntime: crio
  pluginNetwork:
    type: weave
    parameters:
      weavePassword: re4llyS7ron6P4ssw0rd
  auditLog: false
  serial: 1
  seLinuxSupport: true

Most of these parameters are optional and can be left out. If you want to know more about each parameter, please refer to our Full Documentation.


Set up a single node cluster

To set up a single node cluster we need our createCluster.yaml file from above.
Run the create cluster command on the admin node to create a cluster with one node.

lima create cluster -f createCluster.yaml

Done! LIMA is now setting up your Kubernetes cluster. Within a few minutes you will have a regular single master cluster.

Once LIMA has finished successfully, you can check your Kubernetes single node cluster with kubectl get nodes.
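
For example:

kubectl get nodes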

It looks very alone and sad right? Jump to the next section to add some friends to your cluster!


Optional step

The master node which you used to set up your cluster is only suitable as an example installation or for testing. To use this node for production workloads remove the taint from the master node.

kubectl taint nodes --all node-role.kubernetes.io/master-

Add nodes to your cluster

Let’s give your single node cluster some friends. What we need for this is another YAML file. We can call the YAML file whatever we want - we call it addNode.yaml.

addNode.yaml

apiVersion: lima/nodeconfig/v1alpha1
clusterName: ExampleClusterName
spec: 
  masters:
  - host: 10.2.1.12
    user: root
    password: "myPassword"
  workers:
  - host: 10.2.1.13 #IP-address of the node you want to add
    user: root
    password: "myPassword"

We do not need to pull any other maintenance packages. We already did that and are using the same specifications from our single node cluster. The only thing to do is to use the create nodes command

lima create nodes -f addNode.yaml

Done! LIMA adds the nodes to your single node cluster. After LIMA has finished, check the state of your Kubernetes cluster again with kubectl get nodes. Your master node should not be alone anymore!

2.3 - Importing the ELRepo Secure Boot key

This guide explains how to prepare a system with Secure Boot for using third-party kernel modules by importing the ELRepo Secure Boot key, ensuring compatibility and secure module integration.

KubeOps supports inter-node traffic encryption through the use of the calico-wireguard extension. For this to work correctly, the wireguard kernel module needs to be installed on every node in the cluster.

KubeOps distributes and installs the required software automatically. However, since these are third-party modules signed by the ELRepo community project, system administrators must import the ELRepo Secure Boot public key into their MOK (Machine Owner Key) list in order to use them on a system with Secure Boot enabled.

This only applies to RHEL 8 machines.

Download the key

The secureboot key must be located on every node of the cluster. It can be directly downloaded with the following command:

curl -O https://elrepo.org/SECURE-BOOT-KEY-elrepo.org.der

If you are working with an airgap environment, you might need to manually distribute the file to all your nodes.

Import the key in the MOK list

With the key in place, install it by using this command:

mokutil --import SECURE-BOOT-KEY-elrepo.org.der

When prompted, enter a password of your choice. This password will be used when enrolling the key into the MOK list.

Reboot the system and enroll the key

Upon rebooting, the “Shim UEFI key management” screen appears. You will need to press any key within 10 seconds to proceed.

Enroll the key by following these steps:
- Select Enroll MOK.
- Select View key 0 to inspect the public key and other important information. Press Esc when you are done.
- Select Continue and enter the previously created password.
- When asked to enroll the keys, select OK.
- Select Reboot and restart the system.

The key has now been added to the MOK list and enrolled.
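
To verify the enrollment afterwards, you can use mokutil again (this assumes the key file is still present in the current directory):

mokutil --sb-state
mokutil --test-key SECURE-BOOT-KEY-elrepo.org.der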

2.4 - Install Maintenance Packages

This guide provides an overview of installing essential maintenance packages for KubeOps clusters. It covers how to pull and manage various Kubernetes tools, dependencies, and Container Runtime Interface (CRI) packages to set up and maintain your cluster. Ensure compatibility between versions to successfully deploy your first Kubernetes environment.

Installing the essential Maintenance Packages

KubeOps provides packages for the supported Kubernetes tools. These maintenance packages help you update the Kubernetes tools on your clusters to the desired versions along with their dependencies.

It is necessary to install the required maintenance packages to create your first Kubernetes cluster. The packages are available on kubeops hub.

So let’s get started!

Note : Be sure you have the supported KOSI version for the KubeOps version installed, or you cannot pull any maintenance packages!

Commands to install a package

Following are the most common commands to be used on Admin Node to get and install any maintenance package.

  1. Use the command get maintenance to list all available maintenance packages.

     lima get maintenance
    

    This will display a list of all the available maintenance packages.

Example :
| SOFTWARE          | VERSION | STATUS     | SOFTWAREPACKAGE            |TYPE     |
|      --           |   --    |      --    |     --                     |   --    |
| Kubernetes        | 1.24.8  | available  | lima/kubernetes:1.24.8     | upgrade |
| iptablesEL8       | 1.8.4   | available  | lima/iptablesel8:1.8.4     | update  |
| firewalldEL8      | 0.8.2   | downloaded | lima/firewalldel8:0.8.2    | update  |

Please review the following important columns in this table and download the correct packages.

| Name | Description |
|------|-------------|
| SOFTWARE | The name of the software required for your cluster. |
| VERSION | The software version. Select the correct version based on your Kubernetes and KubeOps version. |
| SOFTWAREPACKAGE | The unique name of the maintenance package. Use this name to pull the package onto your machine. |
| STATUS | One of the following: available (package is remotely available), not found (package not found), downloaded (package is locally and remotely available), only local (package is only locally available), unknown (unknown package). |

  2. Use the command pull maintenance to pull/download the package onto your machine.

    lima pull maintenance <SOFTWAREPACKAGE>
    

    It is possible to pull more than 1 package with one pull invocation.
    For example:

    lima pull maintenance lima/kubernetes:1.23.5 lima/dockerEL7:18.09.1
    

List of Maintenance Packages

Following are the essential maintenance packages to be pulled. Use the above mentioned Common Commands to install desired packages.

1. Kubernetes

The first step is to choose a Kubernetes version and to pull its available package. LIMA currently supports the following Kubernetes versions:

| 1.26.x | 1.27.x  | 1.28.x  | 1.29.x  | 1.30.x | 1.31.x | 1.32.x |
|--------|---------|---------|---------|--------|--------|--------|
| 1.26.3 | 1.27.1  | 1.28.0  | 1.29.0  | 1.30.0 | 1.31.2 | 1.32.0 |
| 1.26.4 | 1.27.2  | 1.28.1  | 1.29.1  | 1.30.1 | 1.31.4 |        |
| 1.26.5 | 1.27.3  | 1.28.2  | 1.29.2  | 1.30.6 |        |        |
| 1.26.6 | 1.27.4  | 1.28.3  | 1.29.3  | 1.30.8 |        |        |
| 1.26.7 | 1.27.5  | 1.28.4  | 1.29.4  |        |        |        |
| 1.26.8 | 1.27.6  | 1.28.5  | 1.29.5  |        |        |        |
| 1.26.9 | 1.27.7  | 1.28.6  | 1.29.10 |        |        |        |
|        | 1.27.8  | 1.28.7  | 1.29.12 |        |        |        |
|        | 1.27.9  | 1.28.8  |         |        |        |        |
|        | 1.27.10 | 1.28.9  |         |        |        |        |
|        |         | 1.28.10 |         |        |        |        |

Following are the packages available for the supported Kubernetes versions.

| Kubernetes version | Available packages |
|--------------------|--------------------|
| 1.26.x | kubernetes-1.26.x |
| 1.27.x | kubernetes-1.27.x |
| 1.28.x | kubernetes-1.28.x |
| 1.29.x | kubernetes-1.29.x |
| 1.30.x | kubernetes-1.30.x |
| 1.31.x | kubernetes-1.31.x |
| 1.32.x | kubernetes-1.32.x |

2. Install Kubectl

To install Kubectl you won’t need to pull any other package. The Kubernetes package pulled in above step already contains Kubectl installation file.

In the following example the downloaded package is kubernetes-1.30.1.

dnf install $LIMAROOT/packages/kubernetes-1.30.1/kubectl-1.30.1-150500.1.1.x86_64.rpm
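
You can then verify the installed client version with:

kubectl version --client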

3. Kubernetes Dependencies

The next step is to pull the Kubernetes dependencies:

| OS | Available packages |
|----|--------------------|
| RHEL 8 | kubeDependencies-EL8-1.0.4 |
| RHEL 8 | kubeDependencies-EL8-1.0.6 |

4. CRIs

Choose your CRI and pull the available packages:

| OS | CRI | Available packages |
|----|-----|--------------------|
| RHEL 8 | docker | dockerEL8-20.10.2 |
| RHEL 8 | containerd | containerdEL8-1.4.3 |
| RHEL 8 | CRI-O | crioEL8-x.xx.x, crioEL8-dependencies-1.0.1, podmanEL8-18.09.1 |

Note : CRI-O packages depend on the chosen Kubernetes version. Choose the CRI-O package that matches your chosen Kubernetes version.

  • E.g. kubernetes-1.23.5 requires crioEL7-1.23.5
  • E.g. kubernetes-1.24.8 requires crioEL7-1.24.8

5. Firewall

Choose your firewall and pull the available packages:

| OS | Firewall | Available packages |
|----|----------|--------------------|
| RHEL 8 | iptables | iptablesEL8-1.8.4 |
| RHEL 8 | firewalld | firewalldEL8-0.9.3 |

Example

Assuming a setup with RHEL 8, CRI-O and Kubernetes 1.22.2, the following maintenance packages need to be pulled (an example pull command follows the list):

  • kubernetes-1.22.2
  • kubeDependencies-EL8-1.0.2
  • crioEL8-1.22.2
  • crioEL8-dependencies-1.0.1
  • podmanEL8-18.09.1
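
A possible pull invocation for this example could look like the one below. The exact SOFTWAREPACKAGE identifiers (the lima/... names and tags) are assumptions here and may differ, so verify them against the output of lima get maintenance first:

lima get maintenance
lima pull maintenance lima/kubernetes:1.22.2 lima/crioEL8:1.22.2

The remaining dependency packages from the list are pulled the same way, using the identifiers shown by lima get maintenance.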


2.5 - Upgrade KubeOps Software

This guide outlines the steps for upgrading KubeOps software. It covers updating essential packages, configuring kubeopsctl.yaml, removing old versions, and installing new ones. It also provides instructions for upgrading other components like rook-ceph, harbor, opensearch, and monitoring tools by modifying the configuration file and applying the updates systematically.

Upgrading KubeOps Software

1. Update essential KubeOps Packages

Update kubeops setup

Before installing the kubeops software, create a kubeopsctl.yaml with following parameters:

### General values for registry access ###
apiVersion: kubeops/kubeopsctl/alpha/v5  # mandatory
kubeOpsUser: "demo" # mandatory
kubeOpsUserPassword: "Password" # mandatory
kubeOpsUserMail: "demo@demo.net" # mandatory
imagePullRegistry: "registry.preprod.kubernative.net/lima" # mandatory, the registry from which the images for the cluster are pulled
localRegistry: false # mandatory, set to true if you use a local registry

After creating the kubeopsctl.yaml, add the following setup configuration values to the file in order to update the software:

### Values for setup configuration ###
clusterName: "example" # mandatory
clusterUser: "root" # mandatory
kubernetesVersion: "1.28.2" # mandatory, check lima documentation
masterIP: 10.2.10.12 # mandatory
containerRuntime: "containerd" # mandatory

1. Remove old KubeOps software

If you want to remove the KubeOps software, it is recommended that you use your package manager. For RHEL environments it is yum. If you want to remove the KubeOps software with yum, use the following commands:

yum autoremove kosi
yum autoremove lima

2. Install new KubeOps software

Now, you can install the new software with yum.

sudo yum install <kosi-rpm>

3. Upgrade kubeops software

To upgrade your kubeops software, use the following command:

  kubeopsctl apply -f kubeopsctl.yaml

4. Maintain the old Deployment Information (optional)

After upgrading KOSI from 2.5 to 2.6, the deployment.yaml file has to be moved to the $KUBEOPSROOT directory if you want to keep your old deployments.
Make sure the $KUBEOPSROOT variable is set.

  1. Set the $KUBEOPSROOT variable
echo 'export KUBEOPSROOT="$HOME/kubeops"' >> $HOME/.bashrc
source ~/.bashrc
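
  2. Move the old deployment.yaml file

A minimal sketch, assuming the old deployment.yaml currently lives in your home directory (adjust the source path to wherever your previous KOSI version stored it):

mkdir -p $KUBEOPSROOT
mv $HOME/deployment.yaml $KUBEOPSROOT/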

5. Update other software

1. Upgrade rook-ceph

In order to upgrade rook-ceph, go into your kubeopsctl.yaml file and change rook-ceph: false to rook-ceph: true.

After that, use the command below:

kubeopsctl apply -f kubeopsctl.yaml

2. Update harbor

To update harbor, change your kubeopsctl.yaml file and set harbor: false to harbor: true.

Please set other applications to false before applying the kubeopsctl.yaml file.

3. Update opensearch

In order to update opensearch, change your kubeopsctl.yaml file and set opensearch: false to opensearch: true.

Please set other applications to false before applying the kubeopsctl.yaml file.

4. Update logstash

In order to update logstash, change your kubeopsctl.yaml file and set logstash: false to logstash: true

Please set other applications to false before applying the kubeopsctl.yaml file.

5. Update filebeat

In order to update filebeat, change your kubeopsctl.yaml file and set filebeat: false to filebeat: true.

Please set other applications to false before applying the kubeopsctl.yaml file.

6. Update prometheus

In order to update prometheus, change your kubeopsctl.yaml file and set prometheus: false to prometheus: true.

Please set other applications to false before applying the kubeopsctl.yaml file.

7. Update opa

In order to update opa, change your kubeopsctl.yaml file and set opa: false to opa: true. Please set other applications to false before applying the kubeopsctl.yaml file.

2.6 - Update postgres resources of harbor

Update postgres resources of harbor.

How to Update Harbor Advanced Parameters Using kubeopsctl

Prerequisites

Before proceeding, ensure you have:

  • kubeopsctl installed and configured.
  • Access to your Kubernetes cluster.
  • The necessary permissions to apply changes to the Harbor deployment.

Understanding advancedParameters

Harbor allows advanced configuration via the harborValues.advancedParameters section. This section provides fine-grained control over various components, such as PostgreSQL, Redis, and logLevel, by defining resource allocations and other configurations.

Example Structure of advancedParameters

The advancedParameters section in kubeopsctl.yaml follows this structure:

harborValues:
  advancedParameters:
    postgres:
      resources:
        requests:
          memory: "512Mi"  # Minimum memory requested by PostgreSQL
          cpu: "200m"       # Minimum CPU requested by PostgreSQL
        limits:
          memory: "1Gi"     # Maximum memory PostgreSQL can use
          cpu: "500m"       # Maximum CPU PostgreSQL can use
    
    internal:
      redis:
        resources:
          requests:
            memory: "256Mi"  # Minimum memory requested by Redis
            cpu: "100m"      # Minimum CPU requested by Redis
          limits:
            memory: "512Mi"  # Maximum CPU Redis can use
            cpu: "300m"      # Maximum CPU Redis can use

    logLevel: "debug"  # Adjust logging level for debugging purposes
  • postgres: Defines resource limits for the PostgreSQL database.
  • redis: Configures Redis instance resources.
  • logLevel: Allows setting the logging level.

Modify these values based on your cluster’s available resources and workload requirements.

Step 1: Update Your kubeopsctl.yaml Configuration

Ensure that your kubeopsctl.yaml file includes the harborValues.advancedParameters section. If necessary, update or add parameters to customize your Harbor deployment.

Step 2: Apply the Configuration with kubeopsctl

Once your kubeopsctl.yaml file is ready, apply the changes using the following command:

kubeopsctl apply -f kubeopsctl.yaml

This command updates the advanced parameters for the Harbor deployment.

Step 3: Verify the Changes

To confirm that the new configuration has been applied, run:

kubectl get pod -n <your-harbor-namespace> -o yaml | grep -A6 -i 'resources:'

Replace <your-harbor-namespace> with the namespace where Harbor is deployed.

Alternatively, describe any component to check the applied settings:

kubectl describe pod <component-pod-name> -n <your-harbor-namespace>

Conclusion

Using kubeopsctl, you can efficiently update various advanced parameters in your Harbor deployment. The advancedParameters section allows fine-tuned configuration for multiple components, ensuring optimal resource usage and performance.

If you encounter any issues, check the logs with:

kubectl logs -n <your-harbor-namespace> <component-pod-name>

2.7 - Use Kubeopsctl

kubeopsctl is a KubeOps tool that simplifies cluster management by allowing users to define the desired cluster state in a YAML file. After configuring the cluster’s setup, the changes can be easily applied using the apply command, making it straightforward to manage updates and configurations.

KubeOpsctl

kubeopsctl is a new KubeOps tool which can be used for managing a cluster and its state easily. You simply describe a desired cluster state and kubeopsctl then creates a cluster with that state.

Using KubeOpsCtl

Using this feature is as easy as configuring the cluster yaml file with desired cluster state and details and using the apply command. Below are the detailed steps.

1. Configure Cluster/Nodes/Software using yaml file

You need to have a cluster definition file which describes the different aspects of your cluster. This file describes only one cluster.

Full yaml syntax

apiVersion: kubeops/kubeopsctl/alpha/v5  # mandatory
kubeOpsUser: "demo" # mandatory,  change to your username
kubeOpsUserPassword: "Password" # mandatory,  change to your password
kubeOpsUserMail: "demo@demo.net" # change to your email
imagePullRegistry: "registry.preprod.kubernative.net/kubeops" # mandatory
localRegistry: false # mandatory
### Values for setup configuration ###
clusterName: "testkubeopsctl"  # mandatory
clusterUser: "myuser"  # mandatory
kubernetesVersion: "1.28.2" # mandatory, check lima documentation
masterIP: 10.2.10.31 # mandatory
# at least 3 masters and 3 workers are needed
zones:
  - name: zone1
    nodes:
      master:
        - name: cluster1master1
          ipAdress: 10.2.10.11
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
        - name: cluster1master2
          ipAdress: 10.2.10.12
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
      worker:
        - name: cluster1worker1
          ipAdress: 10.2.10.14
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
        - name: cluster1worker2
          ipAdress: 10.2.10.15
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
  - name: zone2
    nodes:
      master:
        - name: cluster1master3
          ipAdress: 10.2.10.13
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: drained
          kubeversion: 1.28.2  
      worker:
        - name: cluster1worker1
          ipAdress: 10.2.10.16
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2

# set to true if you want to install it into your cluster
rook-ceph: false # mandatory
harbor: false # mandatory
opensearch: false # mandatory
opensearch-dashboards: false # mandatory
logstash: false # mandatory
filebeat: false # mandatory
prometheus: false # mandatory
opa: false # mandatory
kubeops-dashboard: false # mandatory
certman: false # mandatory
ingress: false # mandatory
keycloak: false # mandatory

###Values for Rook-Ceph###
rookValues:
  namespace: kubeops
  nodePort: 31931 # optional, default: 31931
  cluster:
    storage:
      # Will only be used if useAllDevices is set to false and will be ignored if individual devices have been specified on a node.
      deviceFilter: "^sd[a-b]"
      # This setting can be used to store metadata on a different device. Only recommended if an additional metadata device is available.
      # Optional, will be overwritten by the corresponding node-level setting.
      config:
        metadataDevice: "sda"
      # Names of individual nodes in the cluster that should have their storage included.
      # Will only be used if useAllNodes is set to false.
      nodes:
        - name: "<ip-adress of node_1>"
          devices:
            - name: "sdb" 
        - name: "<ip-adress of node_2>"
          deviceFilter: "^sd[a-b]"
          config:
            metadataDevice: "sda" # optional
#-------------------------------------------------------------------------------------------------------------------------------
### Values for Postgres ###
postgrespass: "password" # mandatory, set password for harbor postgres access 
postgres:
  resources:
    requests:
      storage: 2Gi # mandatory, depending on storage capacity
#-------------------------------------------------------------------------------------------------------------------------------
### Values for Redis ###
redispass: "password" # mandatory set password for harbor redis access 
redis:
  resources:
    requests:
      storage: 2Gi # mandatory depending on storage capacity
#-------------------------------------------------------------------------------------------------------------------------------
### Values for Harbor deployment ###
## For detailed explanation for each key see: https://artifacthub.io/packages/helm/harbor/harbor/1.8.1#configuration ##
harborValues: 
  harborpass: "password" # mandatory: set password for harbor access 
  externalURL: https://10.2.10.13 # mandatory, the IP address from which harbor is accessible outside of the cluster
  nodePort: 30003
  harborPersistence:
    persistentVolumeClaim:
      registry:
        size: 40Gi # optional, default is 40Gi
        storageClass: "rook-cephfs" #optional, default is rook-cephfs
      jobservice:
        jobLog:
          size: 1Gi # optional, default is 1Gi
          storageClass: "rook-cephfs" #optional, default is rook-cephfs
      database:
        size: 1Gi # optional, default is 1Gi
        storageClass: "rook-cephfs" #optional, default is rook-cephfs
      redis:
        size: 1Gi # optional, default is 1Gi
        storageClass: "rook-cephfs" #optional, default is rook-cephfs
      trivy: 
        size: 5Gi # optional, default is 5Gi
        storageClass: "rook-cephfs" #optional, default is rook-cephfs
#--------------------------------------------------------------------------------------------------------------------------------------
###Values for filebeat deployment###
filebeatValues:
  namespace: kubeops # optional, default is kubeops   
#--------------------------------------------------------------------------------------------------------------------------------
###Values for Logstash deployment###
##For detailed explanation for each key see: https://github.com/elastic/helm-charts/releases/tag/v7.16.3###
logstashValues:
  namespace: kubeops
  volumeClaimTemplate:
    resources:
      requests:
        storage: 1Gi # mandatory, depending on storage capacity
#--------------------------------------------------------------------------------------------------------------------------------------
###Values for OpenSearch-Dashboards deployment###
##For detailed explanation for each key see: https://github.com/opensearch-project/helm-charts/tree/main/charts/opensearch-dashboards###
openSearchDashboardValues:
  namespace: kubeops
  nodePort: 30050
#--------------------------------------------------------------------------------------------------------------------------------
###Values for OpenSearch deployment###
##For detailed explanation for each key see: https://github.com/opensearch-project/helm-charts/tree/main/charts/opensearch###
openSearchValues:
  namespace: kubeops
  resources:
  persistence:
    size: 4Gi # mandatory
#--------------------------------------------------------------------------------------------------------------------------------
###Values for Prometheus deployment###
prometheusValues:
  prometheusResources:
    nodePort: 32090
#--------------------------------------------------------------------------------------------------------------------------------
###Values for OPA deployment###
opaValues:
  namespace: kubeops
#--------------------------------------------------------------------------------------------------------------------------------
###Values for KubeOps-Dashboard (Headlamp) deployment###
kubeOpsDashboardValues:
  service:
    nodePort: 30007
#--------------------------------------------------------------------------------------------------------------------------------
###Values for cert-manager deployment###
certmanValues:
  namespace: kubeops
  replicaCount: 3
  logLevel: 2
#--------------------------------------------------------------------------------------------------------------------------------
###Values for ingress-nginx deployment###
ingressValues:
  namespace: kubeops
The following sections walk through this configuration file piece by piece.

apiVersion: kubeops/kubeopsctl/alpha/v5  # mandatory
kubeOpsUser: "demo" # mandatory,  change to your username
kubeOpsUserPassword: "Password" # mandatory,  change to your password
kubeOpsUserMail: "demo@demo.net" # change to your email
imagePullRegistry: "registry.preprod.kubernative.net/kubeops" # mandatory
localRegistry: false # mandatory
### Values for setup configuration ###
clusterName: "testkubeopsctl"  # mandatory
clusterUser: "myuser"  # mandatory
kubernetesVersion: "1.28.2" # mandatory, check lima documentation
#masterHost: optional if you have an hostname, default value in "masterIP"
masterIP: 10.2.10.31 # mandatory
firewall: "nftables" # mandatory, default "nftables"
pluginNetwork: "calico" # mandatory, default "calico"
containerRuntime: "containerd" # mandatory, default "containerd"

These are parameters for the cluster creation and for the software used during cluster creation, e.g. the container runtime for running the containers of the cluster. There are also parameters for the lima software (see the lima documentation for further explanation).

### Additional values for cluster configuration
useInsecureRegistry: false # optional, default is false
ignoreFirewallError: false # optional, default is false
serviceSubnet: 192.168.128.0/17 # optional, default "192.168.128.0/17"
podSubnet: 192.168.0.0/17 # optional, default "192.168.0.0/17"
debug: true # optional, default is true
logLevel: vvvvv # optional, default "vvvvv"
systemCpu: "1" # optional, default "1"
systemMemory: "2G" # optional, default "2G"
sudo: true # optional, default is true

Also important are the networking parameters, such as the subnets for the pods and services inside the Kubernetes cluster.

# at least 3 masters and 3 workers are needed
zones:
  - name: zone1
    nodes:
      master:
        - name: cluster1master1
          ipAdress: 10.2.10.11
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
        - name: cluster1master2
          ipAdress: 10.2.10.12
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
      worker:
        - name: cluster1worker1
          ipAdress: 10.2.10.14
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
        - name: cluster1worker2
          ipAdress: 10.2.10.15
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
  - name: zone2
    nodes:
      master:
        - name: cluster1master3
          ipAdress: 10.2.10.13
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2  
      worker:
        - name: cluster1worker1
          ipAdress: 10.2.10.16
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: drained
          kubeversion: 1.28.2

Here are the two zones, which contain master and worker nodes.
There are two different node states: active and drained.
There can also be two different Kubernetes versions, so if you want to do updates in tranches, this is possible with kubeopsctl. You can also set the system memory and system CPU that Kubernetes itself reserves on the nodes. It is not possible to delete nodes; for deleting nodes you have to use lima. If you want to do an update in tranches, you need at least one master with the greater version.

All other parameters are explained here

2. Apply changes to cluster

Once you have configured the cluster changes in the yaml file, use the following command to apply them.

kubeopsctl apply -f kubeopsctl.yaml

2.8 - Backup and restore

In this article, we look at the backup procedure with Velero.

Backup and restoring artifacts

What is Velero?

Velero uses object storage to store backups and associated artifacts. It also optionally integrates supported block storage systems to snapshot your persistent volumes. Before beginning the installation process, you should identify the object storage provider and optional block storage provider(s) you’ll be using from the list of compatible providers.

Velero supports storage providers for both cloud-provider environments and on-premises environments.

Velero prerequisites:

  • Access to a Kubernetes cluster, v1.16 or later, with DNS and container networking enabled.
  • kubectl installed locally
  • Object Storage (S3, Cloud Provider Environment, On-Premises Environment)

Compatible providers and on-premises documentation can be read on https://velero.io/docs

Install Velero

This command is an example of how you can install velero into your cluster:

velero install --provider aws --plugins velero/velero-plugin-for-aws:v1.2.1 --bucket velero --secret-file ./credentials-velero --use-volume-snapshots=false --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://minio.velero.svc:9000

NOTE:

  • s3Url has to be the url of your s3 storage login.
  • example for credentials-velero file:
    [default]
    aws_access_key_id = your_s3_storage_username
    aws_secret_access_key = your_s3_storage_password
    

Backup the cluster

Scheduled Backups

This command creates a backup for the cluster every 6 hours:

velero schedule create cluster --schedule "0 */6 * * *"

Get Schedules

This command lists all schedules for backups:

velero schedule get

Delete Schedules

This command deletes the specified schedule:

velero schedule delete cluster

Restore Scheduled Backup

This command restores the backup according to a schedule:

velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>
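
Backups created by a schedule are named after the schedule plus a timestamp, so for the cluster schedule above the command might look like this (the timestamp is illustrative):

velero restore create --from-backup cluster-20240101020000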

Backup

This command creates a backup for the cluster

velero backup create cluster

Get Backups

This command lists all created backups:

velero backup get

Delete Backups

This command deletes the specified backup:

velero backup delete <BACKUP NAME>

Restore Backup

This command restores the specified backup:

velero restore create <RESOURCE NAME> --from-backup <BACKUP NAME>

Backup a specific deployment

Scheduled Backups

This command creates a backup for the namespace “logging” every 6 hours:

velero schedule create filebeat --schedule "0 */6 * * *" --include-namespaces logging --include-cluster-resources=true

This command creates a backup for the deployment “filebeat” every 6 hours:

velero schedule create filebeat --schedule "0 */6 * * *" --include-namespaces logging --selector app=filebeat-filebeat,release=filebeat --include-resources serviceaccount,deployment,daemonset,configmap,clusterrolebinding,clusterrole --include-cluster-resources=true

Get Schedules

This command lists all schedules for backups:

velero schedule get

Delete Schedules

This command deletes the specified schedule:

velero schedule delete filebeat

Restore Scheduled Backup

This command restores the backup from a schedule:

velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>

Backup

This command creates a backup for the namespace “logging”:

velero backup create filebeat --include-namespaces logging --include-cluster-resources=true

This command creates a backup for the deployment “filebeat”:

velero backup create filebeat --include-namespaces logging --selector app=filebeat-filebeat,release=filebeat --include-resources serviceaccount,deployment,daemonset,configmap,clusterrolebinding,clusterrole --include-cluster-resources=true

Get Backups

This command lists all created backups:

velero backup get

Delete Backups

This command deletes the specified backup:

velero backup delete <BACKUP NAME>

Restore Backup

This command restores the specified backup:

velero restore create <RESOURCE NAME> --from-backup <BACKUP NAME>

Scheduled Backups

This command creates a backup for the namespace “logging” every 6 hours:

velero schedule create logstash --schedule "0 */6 * * *" --include-namespaces logging --include-cluster-resources=true

This command creates a backup for the deployment “logstash” every 6 hours:

velero schedule create logstash --schedule "0 */6 * * *" --include-namespaces logging --selector app=logstash-logstash,chart=logstash,release=logstash --include-resources StatefulSet,ServiceAccount,Service,Secret,RoleBinding,Role,PodSecurityPolicy,PodDisruptionBudget,Ingress,ConfigMap --include-cluster-resources=true

Get Schedules

This command lists all schedules for backups:

velero schedule get

Delete Schedules

This command deletes the specified schedule:

velero schedule delete logstash

Restore Scheduled Backup

This command restores the backup from a schedule:

velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>

Backup

This command creates a backup for the namespace “logging”:

velero backup create logstash --include-namespaces logging --include-cluster-resources=true

This command creates a backup for the deployment “logstash”:

velero backup create logstash --include-namespaces logging --selector app=logstash-logstash,chart=logstash,release=logstash --include-resources StatefulSet,ServiceAccount,Service,Secret,RoleBinding,Role,PodSecurityPolicy,PodDisruptionBudget,Ingress,ConfigMap --include-cluster-resources=true

Get Backups

This command lists all created backups:

velero backup get

Delete Backups

This command deletes the specified backup:

velero backup delete <BACKUP NAME>

Restore Backup

This command restores the specified backup:

velero restore create <RESOURCE NAME> --from-backup <BACKUP NAME>

Scheduled Backups

This command creates a backup for the namespace “logging” every 6 hours:

velero schedule create opensearch --schedule "0 */6 * * *" --include-namespaces logging --include-cluster-resources=true

This command creates a backup for the deployment “opensearch” every 6 hours:

velero schedule create opensearch --schedule "0 */6 * * *" --include-namespaces logging --selector app.kubernetes.io/instance=opensearch,app.kubernetes.io/name=opensearch --include-resources ConfigMap,Ingress,NetworkPolicy,PodDisruptionBudget,PodSecurityPolicy,Role,RoleBinding,Secret,Service,ServiceAccount,StatefulSet --include-cluster-resources=true

Get Schedules

This command lists all schedules for backups:

velero schedule get

Delete Schedules

This command deletes the specified schedule:

velero schedule delete opensearch

Restore Scheduled Backup

This command restores the backup from a schedule:

velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>

Backup

This command creates a backup for the namespace “logging”:

velero backup create opensearch --include-namespaces logging --include-cluster-resources=true

This command creates a backup for the deployment “opensearch”:

velero backup create opensearch --include-namespaces logging --selector app.kubernetes.io/instance=opensearch,app.kubernetes.io/name=opensearch --include-resources ConfigMap,Ingress,NetworkPolicy,PodDisruptionBudget,PodSecurityPolicy,Role,RoleBinding,Secret,Service,ServiceAccount,StatefulSet --include-cluster-resources=true

Get Backups

This command lists all created backups:

velero backup get

Delete Backups

This command deletes the specified backup:

velero backup delete <BACKUP NAME>

Restore Backup

This command restores the specified backup:

velero restore create <RESOURCE NAME> --from-backup <BACKUP NAME>

Scheduled Backups

This command creates a backup for the namespace “monitoring” every 6 hours:

velero schedule create prometheus --schedule "0 */6 * * *" --include-namespaces monitoring --include-cluster-resources=true

This command creates a backup for the deployment “prometheus” every 6 hours:

velero schedule create prometheus --schedule "0 */6 * * *" --include-namespaces monitoring --include-resources Alertmanager,Secret,Ingress,List,PodDisruptionBudget,Role,RoleBinding,PodSecurityPolicy,Service,ServiceAccount,ServiceMonitor,Endpoints,ConfigMap,ConfigMapList,ClusterRole,ClusterRoleBinding,SecretProviderClass,PodMonitor,Prometheus,Job,NetworkPolicy,MutatingWebhookConfiguration,ValidatingWebhookConfiguration,Issuer,Deployment,VerticalPodAutoscaler,ThanosRuler --include-cluster-resources=true

Get Schedules

This command lists all schedules for backups:

velero schedule get

Delete Schedules

This command deletes the specified schedule:

velero schedule delete prometheus

Restore Scheduled Backup

This command restores the backup from a schedule:

velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>

Backup

This command creates a backup for the namespace “monitoring”:

velero backup create prometheus --include-namespaces monitoring --include-cluster-resources=true

This command creates a backup for the deployment “prometheus”:

velero backup create prometheus --include-namespaces monitoring --include-resources Alertmanager,Secret,Ingress,List,PodDisruptionBudget,Role,RoleBinding,PodSecurityPolicy,Service,ServiceAccount,ServiceMonitor,Endpoints,ConfigMap,ConfigMapList,ClusterRole,ClusterRoleBinding,SecretProviderClass,PodMonitor,Prometheus,Job,NetworkPolicy,MutatingWebhookConfiguration,ValidatingWebhookConfiguration,Issuer,Deployment,VerticalPodAutoscaler,ThanosRuler --include-cluster-resources=true

Get Backups

This command lists all created backups:

velero backup get

Delete Backups

This command deletes the specified backup:

velero backup delete <BACKUP NAME>

Restore Backup

This command restores the specified backup:

velero restore create <RESOURCE NAME> --from-backup <BACKUP NAME>

Scheduled Backups

This command creates a backup for the namespace “harbor” every 6 hours:

velero schedule create harbor --schedule "0 */6 * * *" --include-namespaces harbor --include-cluster-resources=true

This command creates a backup for the deployment “harbor” every 6 hours:

velero schedule create harbor --schedule "0 */6 * * *" --include-namespaces harbor --include-resources ConfigMap,Deployment,PersistentVolumeClaim,Secret,Service,StatefulSet,Ingress,ServiceMonitor --include-cluster-resources=true

Get Schedules

This command lists all schedules for backups:

velero schedule get

Delete Schedules

This command deletes the specified schedule:

velero schedule delete harbor

Restore Scheduled Backup

This command restores the backup from a schedule:

velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>

Backup

This command creates a backup for the namespace “harbor”:

velero backup create harbor --include-namespaces harbor --include-cluster-resources=true

This command creates a backup for the deployment “harbor”:

velero backup create harbor --include-namespaces harbor --include-resources ConfigMap,Deployment,PersistentVolumeClaim,Secret,Service,StatefulSet,Ingress,ServiceMonitor --include-cluster-resources=true

Get Backups

This command lists all created backups:

velero backup get

Delete Backups

This command deletes the specified backup:

velero backup delete <BACKUP NAME>

Restore Backup

This command restores the specified backup:

velero restore create <RESOURCE NAME> --from-backup <BACKUP NAME>

Scheduled Backups

This command creates a backup for the namespace “gatekeeper-system” every 6 hours:

velero schedule create gatekeeper --schedule "0 */6 * * *" --include-namespaces gatekeeper-system --include-cluster-resources=true

This command creates a backup for the deployment “gatekeeper” every 6 hours:

velero schedule create gatekeeper --schedule "0 */6 * * *" --include-namespaces gatekeeper-system --include-resources PodSecurityPolicy,ServiceAccount,Deployment,PodDisruptionBudget,ResourceQuota,ClusterRole,Role,ClusterRoleBinding,RoleBinding,MutatingWebhookConfiguration,ValidatingWebhookConfiguration,Secret,Service,Job --include-cluster-resources=true

Get Schedules

This command lists all schedules for backups:

velero schedule get

Delete Schedules

This command deletes the specified schedule:

velero schedule delete gatekeeper

Restore Scheduled Backup

This command restores the backup from a schedule:

velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>

Backup

This command creates a backup for the namespace “gatekeeper-system”:

velero backup create gatekeeper --include-namespaces gatekeeper-system --include-cluster-resources=true

This command creates a backup for the deployment “gatekeeper-system”:

velero backup create gatekeeper --include-namespaces gatekeeper-system --include-resources PodSecurityPolicy,ServiceAccount,Deployment,PodDisruptionBudget,ResourceQuota,ClusterRole,Role,ClusterRoleBinding,RoleBinding,MutatingWebhookConfiguration,ValidatingWebhookConfiguration,Secret,Service,Job --include-cluster-resources=true

Get Backups

This command lists all created backups:

velero backup get

Delete Backups

This command deletes the specified backup:

velero backup delete <BACKUP NAME>

Restore Backup

This command restores the specified backup:

velero restore create <RESOURCE NAME> --from-backup <BACKUP NAME>

Scheduled Backups

This command creates a backup for the namespace “rook-ceph” every 6 hours:

velero schedule create rook-ceph --schedule "0 */6 * * *" --include-namespaces rook-ceph --include-cluster-resources=true

Get Schedules

This command lists all schedules for backups:

velero schedule get

Delete Schedules

This command deletes the specified schedule:

velero schedule delete rook-ceph

Restore Scheduled Backup

This command restores the backup from a schedule:

velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>

Backup

This command creates a backup for the namespace “rook-ceph”:

velero backup create rook-ceph --include-namespaces rook-ceph --include-cluster-resources=true

Get Backups

This command lists all created backups:

velero backup get

Delete Backups

This command deletes the specified backup:

velero backup delete <BACKUP NAME>

Restore Backup

This command restores the specified backup:

velero restore create <RESOURCE NAME> --from-backup <BACKUP NAME>

2.9 - Renew Certificates

Renewal of certificates made easy with LIMA.

Renewing all certificates at once


LIMA enables you to renew all certificates for a specific cluster on all control plane nodes with one command.

lima renew cert <clusterName>
Note: Renewing certificates can take several minutes because all certificate services are restarted.

Here is an example to renew certificates on cluster with name “Democluster”:

lima renew cert Democluster

Note: This command renews all certificates on the existing control plane; there is no option to renew individual certificates.

2.10 - Deploy Package On Cluster

This guide provides a simplified process for deploying packages in a Kubernetes cluster using Kosi with either the Helm or Kubectl plugin.

Deploying package on Cluster

You can install artifacts in your cluster in several ways. For this purpose, you can use these four plugins when creating a package:

  • helm
  • kubectl
  • cmd
  • Kosi

As an example, this guide installs the nginx-ingress Ingress Controller.

Using the Helm-Plugin

Prerequisite

In order to install an artifact with the Helm plugin, the Helm chart must first be downloaded. This step is not covered in this guide.

Create KOSI package

First you need to create a KOSI package. The following command creates the necessary files in the current directory:

kosi create

The downloaded Helm chart must also be located in the current directory. To customize the deployment of the Helm chart, the values.yaml file must be edited. This file can be downloaded from ArtifactHub and must be placed in the same directory as the Helm chart.

All files required by a task in the package must be named in the package.yaml file under includes.files. The container images required by the Helm chart must also be listed in the package.yaml under includes.containers. For installation, the required files and images must be listed under the installation.includes key.
In the example below, only two files are required for the installation: the Helm Chart for the nginx-ingress and the values.yaml to configure the deployment. To install nginx-ingress you will also need the nginx/nginx-ingress image with the tag 3.0.1.

To install nginx-ingress with the Helm plugin, call the plugin as shown in the example under installation.tasks. The deployment configuration file is listed under values and the packed Helm chart is specified with the key tgz. Furthermore, it is also possible to specify the namespace in which the artifact should be deployed and the name of the deployment. The full documentation for the Helm plugin can be found here.

apiversion: kubernative/kubeops/sina/user/v4
name: deployExample
description: "This Package is an example. 
              It shows how to deploy an artifact to your cluster using the helm plugin."
version: 0.1.0  
includes: 
  files:  
    config: "values.yaml"
    nginx: "nginx-ingress-0.16.1.tgz"
  containers: 
    nginx-ingress:
      registry: docker.io 
      image: nginx/nginx-ingress
      tag: 3.0.1
docs: docs.tgz
logo: logo.png
installation:  
  includes: 
    files: 
      - config 
      - nginx
    containers: 
      - nginx-ingress
  tasks: 
    - helm:
        command: "install"
        values:
          - values.yaml
        tgz: "nginx-ingress-0.16.1.tgz"
        namespace: dev
        deploymentName: nginx-ingress
...

update:  
  tasks:
  
delete:  
  tasks:

Once the package.yaml file has been fully configured, all files must be combined into a KOSI package. To do this, execute the following command in the directory where the package.yaml file is located.

kosi build

To make the generated kosi package available on other machines, it is pushed to the user’s private KubeOps Hub. To do this, the user must first log in to the hub.

$ kosi login -u <username>
2023-02-04 11:19:43 Info:      KOSI version: 2.6.0_Beta0
2023-02-04 11:19:43 Info:      Please enter password
****************
2023-02-04 11:19:26 Info:      Login Succeeded to Hub.
$ kosi push --hub kosi
2023-02-04 11:23:18 Info:      KOSI version: 2.6.0_Beta0
2023-02-04 11:23:19 Info:      Push to Private Registry registry.preprod.kubernative.net/<username>/

Deployment

Once the KOSI package has been created and published, it needs to be installed on the Admin node. The following command will download and execute the package. The package name and version refer to the values defined in package.yaml with the keys name and version.

kosi install --hub <username> <username>/<packagename>:<version>

For the example package, the command would be: kosi install --hub <username> <username>/deployExample:0.1.0.

Using the Kubectl-Plugin

Prerequisite

In order to install an artifact with the Kubectl plugin, the kubeops-kubernetes-plugins package must be installed on the admin node. This step is not covered in this guide.

Create KOSI package

First you need to create a KOSI package. The following command creates the necessary files in the current directory:

kosi create

The NGINX ingress controller YAML manifest can either be automatically downloaded and applied directly with kubectl apply, or it can be downloaded manually if you want to customize the deployment. The YAML manifest can be downloaded from the NGINX GitHub Repo and must be placed in the same directory as the files for the kosi package.

All files required by a task in the package must be named in the package.yaml file under includes.files. The container images required by the YAML manifest must also be listed in the package.yaml under includes.containers. For installation, the required files and images must be listed under the installation.includes key.
In the example below, only one file is required for the installation: the YAML manifest for the nginx-ingress controller. To install nginx-ingress you will also need the registry.k8s.io/ingress-nginx/controller image with the tag v1.5.1 and the image registry.k8s.io/ingress-nginx/kube-webhook-certgen with tag v20220916-gd32f8c343.

To install nginx-ingress with the Kubectl plugin, call the plugin as shown in the example under installation.tasks. The full documentation for the Kubectl plugin can be found here.

apiversion: kubernative/kubeops/sina/user/v4
name: deployExample
description: "This Package is an example. 
              It shows how to deploy an artifact to your cluster using the kubectl plugin."
version: 0.1.0  
includes: 
  files:  
    manifest: "deploy.yaml"
  containers: 
    nginx-ingress:
      registry: registry.k8s.io
      image: ingress-nginx/controller
      tag: v1.5.1
    webhook-certgen:
      registry: registry.k8s.io
      image: ingress-nginx/kube-webhook-certgen
      tag: v20220916-gd32f8c343
docs: docs.tgz
logo: logo.png
installation:  
  includes: 
    files: 
      - manifest
    containers: 
      - nginx-ingress
      - webhook-certgen
  tasks: 
    - kubectl:
        operation: "apply"
        flags: " -f <absolute path>/deploy.yaml"
        sudo: true
        sudoPassword: "toor"

...

update:  
  tasks:
  
delete:  
  tasks:

Once the package.yaml file has been fully configured, all files must be combined into a KOSI package. To do this, execute the following command in the directory where the package.yaml file is located.

kosi build

To make the generated KOSI package available on other machines, it is pushed to the user’s private KubeOps Hub. To do this, the user must first log in to the hub.

$ kosi login -u <username>
2023-02-04 11:19:43 Info:      kosi version: 2.6.0_Beta0
2023-02-04 11:19:43 Info:      Please enter password
****************
2023-02-04 11:19:26 Info:      Login Succeeded to Hub.
$ kosi push --hub kosi
2023-02-04 11:23:18 Info:      kosi version: 2.6.0_Beta0
2023-02-04 11:23:19 Info:      Push to Private Registry registry.preprod.kubernative.net/<username>/

Deployment

Once the KOSI package has been created and published, it needs to be installed on the Admin node. The following command will download and execute the package. The package name and version refer to the values defined in package.yaml with the keys name and version.

kosi install --hub <username> <username>/<packagename>:<version>

For the example package, the command would be: kosi install --hub <username> <username>/deployExample:0.1.0.

2.11 - Replace Cluster Nodes

This guide explains how to replace nodes in a Kubernetes cluster using KubeOps, which involves deleting an existing node and adding a new one through a YAML configuration file.

Replace cluster nodes

This section describes how to replace cluster nodes in your cluster.

Direct replacement of nodes is not possible in KubeOps; however, you can delete the node and add a new node to the cluster as shown in the following example.

Steps to replace a Kubernetes Node

  1. Use the command delete on the admin node to delete the unwanted node from the cluster.

    The command is:

    lima delete -n <IP of your node> <name of your Cluster>
    
    When you delete a node, its data becomes inaccessible or is erased.
  2. Now create a new .yaml file with a configuration for the node as shown below

    Example:

    apiVersion: lima/nodeconfig/v1alpha1
    clusterName: roottest
    spec:
      masters: []
      workers:
      - host: 10.2.10.17  ## ip of the new node to be joined
        systemCpu: "200m"
        systemMemory: "200Mi"        
        user: root
        password: toor
    
  3. Lastly use the command create nodes to create and join the new node.

    The command is:

    lima create nodes -f <node yaml file name>
    

Example 1

In the following example, we will replace a node with ip 10.2.10.15 from demoCluster to a new worker node with ip 10.2.10.17:

  1. Delete node.

    lima delete -n 10.2.10.15 demoCluster
    
  2. Create addNode.yaml for the new worker node.

    apiVersion: lima/nodeconfig/v1alpha1
    clusterName: roottest
    spec:
      masters: []
      workers:
      - host: 10.2.10.17
        systemCpu: "200m"
        systemMemory: "200Mi"
        user: root
        password: toor
    
  3. Join the new node.

    lima create nodes -f addNode.yaml
    

Example 2

If you are rejoining a master node, all other steps are the same, except that you need to add the node configuration to the yaml file as shown in the example below:

apiVersion: lima/nodeconfig/v1alpha1
clusterName: roottest
spec:
  masters:
  - host: 10.2.10.17
    systemCpu: "200m"
    systemMemory: "200Mi"
    user: root
    password: toor
  workers: []

2.12 - Update Kubernetes Version

This guide outlines the steps to upgrade the Kubernetes version of a cluster, specifically demonstrating how to change the version using a configuration file.

Upgrading Kubernetes version

You can use the following steps to upgrade the Kubernetes version of a cluster.

In the following example, we will upgrade the Kubernetes version of a cluster named Democluster from Kubernetes version 1.27.2 to Kubernetes version 1.28.2.

  1. You have to create a kubeopsctl.yaml file with the following yaml syntax.
   apiVersion: kubeops/kubeopsctl/alpha/v5  # mandatory
   kubeOpsUser: "demo" # mandatory,  change to your username
   kubeOpsUserPassword: "Password" # mandatory,  change to your password
   kubeOpsUserMail: "demo@demo.net" # change to your email
   imagePullRegistry: "registry.preprod.kubernative.net/kubeops" # mandatory
   localRegistry: false # mandatory
   ### Values for setup configuration ###
   clusterName: "Democluster"  # mandatory
   clusterUser: "myuser"  # mandatory
   kubernetesVersion: "1.28.2" # mandatory, check lima documentation
   masterIP: 10.2.10.11 # mandatory
   ### Additional values for cluster configuration
   # at least 3 masters and 3 workers are needed
   zones:
   - name: zone1
      nodes:
         master:
         - name: cluster1master1
            ipAdress: 10.2.10.11
            user: myuser
            systemCpu: 100m
            systemMemory: 100Mi 
            status: active
            kubeversion: 1.28.2
         - name: cluster1master2
            ipAdress: 10.2.10.12
            user: myuser
            systemCpu: 100m
            systemMemory: 100Mi 
            status: active
            kubeversion: 1.28.2
         worker:
         - name: cluster1worker1
            ipAdress: 10.2.10.14
            user: myuser
            systemCpu: 100m
            systemMemory: 100Mi 
            status: active
            kubeversion: 1.28.2
         - name: cluster1worker2
            ipAdress: 10.2.10.15
            systemCpu: 100m
            systemMemory: 100Mi 
            status: active
            kubeversion: 1.28.2
   - name: zone2
      nodes:
         master:
         - name: cluster1master3
            ipAdress: 10.2.10.13
            user: myuser
            systemCpu: 100m
            systemMemory: 100Mi 
            status: drained
            kubeversion: 1.28.2  
         worker:
         - name: cluster1worker1
            ipAdress: 10.2.10.16
            user: myuser
            systemCpu: 100m
            systemMemory: 100Mi 
            status: active
            kubeversion: 1.28.2

   # set to true if you want to install it into your cluster
   rook-ceph: false # mandatory
   harbor: false # mandatory
   opensearch: false # mandatory
   opensearch-dashboards: false # mandatory
   logstash: false # mandatory
   filebeat: false # mandatory
   prometheus: false # mandatory
   opa: false # mandatory
   kubeops-dashboard: false # mandatory
   certman: false # mandatory
   ingress: false # mandatory
   keycloak: false # mandatory
  2. Upgrade the version

    Once the kubeopsctl.yaml file is created in order to change the Version of your cluster use the following command:

    kubeopsctl upgrade -f kubeopsctl.yaml
    

rook-ceph has no PDBs (PodDisruptionBudgets), so if you drain nodes for the Kubernetes upgrade, rook-ceph is temporarily unavailable. You should drain only one node at a time for the Kubernetes upgrade.
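
For example, after each node has been upgraded and uncordoned, you can check that the Ceph pods are running again before continuing with the next node (adjust the namespace if rook-ceph was installed into a different one, e.g. kubeops):

kubectl get pods -n rook-ceph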

2.13 - Change CRI

A brief overview of how you can change the Container Runtime Interface (CRI) of your cluster to one of the supported CRIs, containerd and crio.

Changing Container Runtime Interface

KubeOps enables you to change the Container Runtime Interface (CRI) of the clusters to any of the following supported CRIs

  • containerd
  • crio

You can use the following steps to change the CRI

In the example below, we will change the CRI of the cluster with the name Democluster to containerd.

  1. Download the desired CRI maintenance package from the hub
In this case you will need the package `lima/containerdlp151:1.6.6`.
To download the package, use the command:
lima pull maintenance lima/containerdlp151:1.6.6
Note : Packages may vary based on OS and Kubernetes version on your machine.
To select the correct maintenance package based on your machine configuration, refer to Installing maintenance packages
  2. Change the CRI of your cluster.

Once the desired CRI maintenance package is downloaded, use the following command to change the CRI of your cluster:

lima change runtime -r containerd Democluster

So in this case you want to change your runtime to containerd. The desired container runtime is specified after the -r parameter, which is necessary. In this example the cluster has the name Democluster, which is also necessary.
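
You can verify the runtime each node reports afterwards; the CONTAINER-RUNTIME column should now show containerd:

kubectl get nodes -o wide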

2.14 - How to delete nodes from the cluster with lima

A compact overview of how you can delete nodes from your cluster with Lima.

Note: If you want to delete a node from your Kubernetes cluster, you have to use lima.

If you are using our platform, lima is already installed by it. If this is not the case, please install lima manually.

These are the prerequisites that have to be fulfilled before we can delete a node from our cluster.

  • lima has to be installed
  • a functioning cluster must exist

If you want to remove a node from your cluster you can run the delete command on the admin node.

lima delete -n <node which should be deleted> <name of your cluster>

Note: The cluster name has to be the same as the one set under clusterName: in your kubeopsctl.yaml file.

For example, to delete worker node 2 with the IP address 10.2.1.9 from the existing Kubernetes cluster named example, use the following command:

lima delete -n 10.2.1.9 example
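
Afterwards you can check on the admin machine that the node is no longer part of the cluster, for example with:

kubectl get nodes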

2.15 - Accessing Dashboards

A brief overview of how you can access dashboards.

Accessing Dashboards installed with KubeOps

To access an application dashboard, an SSH tunnel to one of the control planes is needed. The following dashboards are available and configured with the following NodePorts by default:

Grafana

NodePort

30211

Initial login credentials

  • username: the username set in the kubeopsvalues.yaml for the cluster creation
  • password: the password set in the kubeopsvalues.yaml for the cluster creation

OpenSearch Dashboards

NodePort

30050

Initial login credentials

  • username: admin
  • password: Password@@123456

Harbor

NodePort

  • https: 30003

Initial login credentials

  • username: admin
  • password: the password set in the kubeopsvalues.yaml for the cluster creation

Rook/Ceph

NodePort

The Rook/Ceph Dashboard has no fixed NodePort yet. To find out the NodePort used by Rook/Ceph follow these steps:

  1. List the Services in the KubeOps namespace
kubectl get svc -n kubeops
  2. Find the line with the service rook-ceph-mgr-dashboard-external-http
NAME                                      TYPE        CLUSTER-IP        EXTERNAL-IP   PORT(S)                                     AGE
rook-ceph-mgr-dashboard-external-http     NodePort    192.168.197.13    <none>        7000:31268/TCP                              21h

Or use,

echo $(kubectl get --namespace rook-ceph -o jsonpath="{.spec.ports[0].nodePort}" services rook-ceph-mgr-dashboard-external-http)

In the example above the NodePort to connect to Rook/Ceph would be 31268.

Initial login credentials

echo Username: admin
echo Password: $(kubectl get secret rook-ceph-dashboard-password -n rook-ceph --template={{.data.password}} | base64 -d)

The dashboard can be accessed with localhost:NodePort/ceph-dashboard/

KubeOps Dashboard (Headlamp)

NodePort

30007

Initial login credentials

kubectl -n monitoring create token headlamp-admin

Keycloak

NodePort

30180

Initial login credentials

echo Username: admin
echo Password: $(kubectl get secret --namespace keycloak keycloak -o jsonpath="{.data.ADMIN_PASSWORD}" | base64 -d)

Access the dashboard with localhost:NodePort/

Connecting to the Dashboard

In order to connect to one of the dashboards, an SSH tunnel has to be established. There are various tools for doing this, like the command line, PuTTY or MobaXterm.
To establish a tunnel, the NodePort of the dashboard has to be forwarded on one of the control planes to the local machine. After that the dashboard can be accessed with localhost:<Port>.
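
For example, on the command line a tunnel for the Grafana NodePort could look like this (assuming 10.2.10.11 is the IP address of a control plane and 30211 is the NodePort):

ssh -L 30211:localhost:30211 myuser@10.2.10.11

While the tunnel is open, the dashboard is reachable at localhost:30211.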

Connecting to the Dashboard via DNS

In order to connect to the dashboards via DNS, the /etc/hosts file needs the following additional entries:

10.2.10.11 kubeops-dashboard.local
10.2.10.11 harbor.local
10.2.10.11 keycloak.local
10.2.10.11 opensearch.local
10.2.10.11 grafana.local
10.2.10.11 rook-ceph.local

The IP address must be the same as the address of your Master1.

2.16 - Replace the kubeops-cert with your own cert

This section outlines how to replace the default kubeops certificate with a custom one by creating a new certificate in a Kubernetes secret and updating the configuration accordingly.

Replace the kubeops-cert with your own cert

1. Create your own cert in a secret

In this example, a new secret with the name example-ca is created.

This command creates two files, tls.key and tls.crt:

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=foo.bar.com"

Create a new tls secret in the namespace cert-manager:

kubectl create secret tls example-ca --key="tls.key" --cert="tls.crt" -n cert-manager
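
You can check that the secret was created, for example with:

kubectl get secret example-ca -n cert-manager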

2. Create the new configuration

Make sure that certman is set to true.

certman: true

Add the following section to kubeopsctl.yaml.

certmanValues:
  secretName: example-ca

3. Apply the new configuration with kubeopsctl

kubeopsctl apply -f kubeopsctl.yaml

2.17 - Create a new Repository

This section provides a comprehensive guide on setting up a new RPM repository in KubeOps for the centralized distribution of software packages, covering prerequisites, repository setup steps, and commands for managing the repository and installing packages.

Kubeops RPM Repository Setup Guide

Setting up a new RPM repository allows for centralized, secure, and efficient distribution of software packages, simplifying installation, updates, and dependency management.

Prerequisites

To set up a new repository on your KubeOps platform, the following prerequisites must be fulfilled.

  • httpd (apache) server to access the repository over HTTP.
  • Root or administrative access to the server.
  • Software packages (RPM files) to include in the repository.
  • createrepo (an RPM package management tool) to create a new repository.

Repository Setup Steps

1. Install Required Tools

sudo yum install -y httpd createrepo

2. Create Repository Directory

When Apache is installed, the default Apache VirtualHost DocumentRoot is created at /var/www/html. Create a new repository KubeOpsRepo under the DocumentRoot.

sudo mkdir -p /var/www/html/KubeOpsRepo

3. Copy RPM Packages

Copy RPM packages into KubeOpsRepo repository.

Use the command below to copy packages that are already present on the host machine; otherwise, place the packages directly into KubeOpsRepo.

sudo cp -r <sourcePathForRPMs> /var/www/html/KubeOpsRepo/

4. Generate the GPG Signature (optional)

If you want to use your packages in a secure way, we recommend using GPG Signature.

How does the GPG tool work?

The GNU Privacy Guard (GPG) is used for secure communication and data integrity verification.
When gpgcheck is set to 1 (enabled), the package manager verifies the GPG signature of each package against the corresponding key in the keyring. If the package's signature matches the expected signature, the package is considered valid and can be installed. If the signature does not match or the package is not signed, the package manager will refuse to install the package or display a warning.

GPG Signature for new registry

  1. Create a GPG key and add it to the /var/www/html/KubeOpsRepo/. Check here to know how to create GPG keypairs.

  2. Save the GPG key as RPM-GPG-KEY-KubeOpsRepo using following command.

cd /var/www/html/KubeOpsRepo/
gpg --armor --export > RPM-GPG-KEY-KubeOpsRepo

You can use following command to verify the gpg key.

curl -s http://<ip-address-of-server>/KubeOpsRepo/RPM-GPG-KEY-KubeOpsRepo

5. Initialize the KubeOpsRepo

By running createrepo command the KubeOpsRepo will be initialized.

cd /var/www/html/KubeOpsRepo/
sudo createrepo .

The newly created directory repodata contains metadata files that describe the RPM packages in the repository, including package information, dependencies, and checksums, enabling efficient package management and dependency resolution.
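
If you add or remove RPMs later, the repository metadata can be regenerated, for example with:

sudo createrepo --update /var/www/html/KubeOpsRepo/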

6. Start and Enable Apache Service

sudo systemctl start httpd
sudo systemctl enable httpd

Configure Firewall (Optional)

If the firewall is enabled, we need to allow incoming HTTP traffic.

sudo firewall-cmd --add-service=http --permanent
sudo firewall-cmd --reload

7. Configure the local repository

To install packages from KubeOpsRepo without specifying the URL every time, we can configure a local repository. Also, if you are using a GPG signature, gpgcheck needs to be enabled.

  1. Create a Repository Configuration File
    Create a new .repo configuration file (e.g. KubeOpsRepo.repo) in the /etc/yum.repos.d/ directory with the following command.
sudo vi /etc/yum.repos.d/KubeOpsRepo.repo
  2. Add the following configuration content to the file
[KubeOpsRepo]  
name=KubeOps Repository
baseurl=http://<ip-address-of-server>/KubeOpsRepo/
enabled=1
gpgcheck=1
gpgkey=http://<ip-address-of-server>/KubeOpsRepo/RPM-GPG-KEY-KubeOpsRepo

Below are the configuration details :

  1. KubeOpsRepo: It is the repository ID.
  2. baseurl: It is the base URL of the new repository. Add your repository URL here.
  3. name : It can be customized to a descriptive name.
  4. enabled=1: This enables the repository.
  5. gpgcheck=1 : It is used to enable GPG signature verification for the repository.
  6. gpgkey : Add the address where your GPG key is placed.
In case you are not using GPG signature verification, you can skip step 4 and set gpgcheck=0 in the above configuration file.

8. Test the Local Repository

To ensure that the latest metadata for the repositories is available, you can run the command below (optional):

sudo yum makecache

To verify the repository is listed

You can check the repository in the repolist with the following command:

sudo yum repolist

This will list out all the repositories with the information about the repositories.

[root@cluster3admin1 ~]# yum repolist
Updating Subscription Management repositories.
repo id                                                        repo name
KubeOpsRepo                                                    KubeOps Repository
rhel-8-for-x86_64-appstream-rpms                               Red Hat Enterprise Linux 8 for x86_64 - AppStream (RPMs)
rhel-8-for-x86_64-baseos-rpms                                  Red Hat Enterprise Linux 8 for x86_64 - BaseOS (RPMs)

To list all the packages in the repository

You can list all the packages available in KubeOpsRepo with the following command:

# To check all the packages including duplicate installed packages
sudo yum list available --disablerepo="*" --enablerepo="KubeOpsRepo" --showduplicates
# sudo yum list --showduplicates | grep KubeOpsRepo 

To install packages from the repository directly

Now you can directly install packages from the KubeOpsRepo repository with the following command:

sudo yum install package_name

For Example :

sudo yum install lima

2.18 - Add certificate as trusted

This section outlines the process for adding a certificate as trusted by downloading it from the browser and installing it in the Trusted Root Certification Authorities on Windows or Linux systems.

1. Download the certificate

  1. As soon as Chrome issues a certificate warning, click on Not secure to the left of the address bar.
  2. Show the certificate (Click on Certificate is not valid).
  3. Go to Details tab.
  4. Click Export... at the bottom and save the certificate.
  1. As soon as Firefox issues a certificate warning, click on Advanced....
  2. View the certificate (Click on View Certificate).
  3. Scroll down to Miscellaneous and save the certificate.

2. Install the certificate

  1. Press Windows + R.
  2. Enter mmc and click OK.
  3. Click on File > Add/Remove snap-in....
  4. Select Certificates in the Available snap-ins list and click on Add >, then on OK. Add the snap-in.
  5. In the tree pane, open Certificates - Current user > Trusted Root Certification Authorities, then right-click Certificates and select All tasks > Import....
  6. The Certificate Import Wizard opens here. Click on Next.
  7. Select the previously saved certificate and click Next.
  8. Click Next again in the next window.
  9. Click on Finish. If a warning pops up, click on Yes.
  10. The program can now be closed. Console settings do not need to be saved.
  11. Clear browser cache and restart browser.

The procedures for using a browser to import a certificate as trusted (on Linux systems) vary depending on the browser and Linux distribution used. To manually cause a self-signed certificate to be trusted by a browser on a Linux system:

Distribution    Copy certificate here                  Run following command to trust certificate
RedHat          /etc/pki/ca-trust/source/anchors/      update-ca-trust extract

Note: If the directory does not exist, create it.
Note: If you do not have the ca-certificates package, install it with your package manager.
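
For example, on RedHat-based systems the steps could look like this (assuming the downloaded certificate was saved as mycert.crt):

sudo cp mycert.crt /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust extract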

2.19 - Change registry

In KubeOps you have the possibility to change the registry from A to B for the respective tools.

Changing Registry from A to B

KubeOps enables you to change the registry from A to B with following commands

kosi 2.6.0 - kosi 2.7.0

Kubeops 1.0.6

fileBeat
kosi pull kubeops/kosi-filebeat-os:1.2.0 -o filebeat.kosi -r 10.9.10.222:30003
kosi pull kubeops/kosi-filebeat-os:1.2.0 -o filebeat.kosi -r localhost:30003
kosi install -p filebeat.kosi
harbor
kosi pull kubeops/harbor:1.0.1 -o harbor.kosi  -r 10.9.10.222:30003
kosi pull kubeops/harbor:1.0.1 -o harbor.kosi -r localhost:30003 
kosi install -p harbor.kosi
logstash
kosi pull kubeops/kosi-logstash-os:1.0.1 -o logstash.kosi -r 10.9.10.222:30003
kosi pull kubeops/kosi-logstash-os:1.0.1 -o logstash.kosi -r localhost:30003
kosi install -p logstash.kosi
opa-gatekeeper
kosi pull kubeops/opa-gatekeeper:1.0.1 -o opa-gatekeeper.kosi -r 10.9.10.222:30003
kosi pull kubeops/opa-gatekeeper:1.0.1 -o opa-gatekeeper.kosi -r localhost:30003
kosi install -p opa-gatekeeper.kosi
opensearch
kosi pull kubeops/kosi-opensearch-os:1.0.3 -o opensearch.kosi -r 10.9.10.222:30003
kosi pull kubeops/kosi-opensearch-os:1.0.3 -o opensearch.kosi -r localhost:30003
kosi install -p opensearch.kosi
opensearch-dashboards
kosi pull kubeops/kosi-opensearch-dashboards:1.0.1 -o opensearch-dashboards.kosi -r 10.9.10.222:30003
  kosi pull kubeops/kosi-opensearch-dashboards:1.0.1 -o opensearch-dashboards.kosi -r localhost:30003
kosi install -p opensearch-dashboards.kosi
prometheus
kosi pull kubeops/kosi-kube-prometheus-stack:1.0.3 -o prometheus.kosi -r 10.9.10.222:30003
kosi pull kubeops/kosi-kube-prometheus-stack:1.0.3 -o prometheus.kosi -r localhost:30003
kosi install -p prometheus.kosi

rook

kosi pull kubeops/rook-ceph:1.0.3 -o rook-ceph.kosi -r 10.9.10.222:30003
kosi pull kubeops/rook-ceph:1.0.3 -o rook-ceph.kosi -r localhost:30003
kosi install -p rook-ceph.kosi

Kubeops 1.1.2

fileBeat

kosi pull kubeops/kosi-filebeat-os:1.1.1 -o kubeops/kosi-filebeat-os:1.1.1 -r 10.9.10.222:30003
kosi pull kubeops/kosi-filebeat-os:1.1.1 -o kubeops/kosi-filebeat-os:1.1.1 -t localhost:30003
kosi install -p package.yaml

harbor

kosi pull kubeops/harbor:1.1.1 -o kubeops/harbor:1.1.1 -r 10.9.10.222:30003
kosi pull kubeops/harbor:1.1.1 -o kubeops/harbor:1.1.1 -t localhost:30003
kosi install -p package.yaml

logstash

kosi pull kubeops/kosi-logstash-os:1.1.1 -o kubeops/kosi-logstash-os:1.1.1 -r 10.9.10.222:30003
kosi pull kubeops/kosi-logstash-os:1.1.1 -o kubeops/kosi-logstash-os:1.1.1 -t localhost:30003
kosi install -p package.yaml

opa-gatekeeper

kosi pull kubeops/opa-gatekeeper:1.1.1 -o kubeops/opa-gatekeeper:1.1.1 -r 10.9.10.222:30003
kosi pull kubeops/opa-gatekeeper:1.1.1 -o kubeops/opa-gatekeeper:1.1.1 -t localhost:30003
kosi install -p package.yaml

opensearch

kosi pull kubeops/kosi-opensearch-os:1.1.1 -o kubeops/kosi-opensearch-os:1.1.1 -r 10.9.10.222:30003
kosi pull kubeops/kosi-opensearch-os:1.1.1 -o kubeops/kosi-opensearch-os:1.1.1 -t localhost:30003
kosi install -p package.yaml

opensearch-dashboards

kosi pull kubeops/kosi-opensearch-dashboards:1.1.1 -o kubeops/kosi-opensearch-dashboards:1.1.1 -r 10.9.10.222:30003
kosi pull kubeops/kosi-opensearch-dashboards:1.1.1 -o kubeops/kosi-opensearch-dashboards:1.1.1 -t localhost:30003
kosi install -p package.yaml

prometheus

kosi pull kubeops/kosi-kube-prometheus-stack:1.1.1 -o kubeops/kosi-kube-prometheus-stack:1.1.1 -r 10.9.10.222:30003
kosi pull kubeops/kosi-kube-prometheus-stack:1.1.1 -o kubeops/kosi-kube-prometheus-stack:1.1.1 -t localhost:30003
kosi install -p package.yaml

rook

kosi pull kubeops/rook-ceph:1.1.1 -o kubeops/rook-ceph:1.1.1 -r 10.9.10.222:30003
kosi pull kubeops/rook-ceph:1.1.1 -o kubeops/rook-ceph:1.1.1 -t localhost:30003
kosi install -p package.yaml

cert-manager

kosi pull kubeops/cert-manager:1.0.2 -o kubeops/cert-manager:1.0.2 -r 10.9.10.222:30003
kosi pull kubeops/cert-manager:1.0.2 -o kubeops/cert-manager:1.0.2 -t localhost:30003
kosi install -p package.yaml

ingress-nginx

kosi pull kubeops/ingress-nginx:1.0.1 -o kubeops/ingress-nginx:1.0.1 -r 10.9.10.222:30003
kosi pull kubeops/ingress-nginx:1.0.1 -o kubeops/ingress-nginx:1.0.1 -t localhost:30003
kosi install -p package.yaml

kubeops-dashboard

kosi pull kubeops/kubeops-dashboard:1.0.1 -o kubeops/kubeops-dashboard:1.0.1 -r 10.9.10.222:30003
kosi pull kubeops/kubeops-dashboard:1.0.1 -o kubeops/kubeops-dashboard:1.0.1 -t localhost:30003
kosi install -p package.yaml

kubeopsctl 1.4.0

Kubeops 1.4.0

You have to create the file kubeopsctl.yaml:

apiVersion: kubeops/kubeopsctl/alpha/v5  # mandatory
kubeOpsUser: "demo" # change to your username
kubeOpsUserPassword: "Password" # change to your password
kubeOpsUserMail: "demo@demo.net" # change to your email
imagePullRegistry: "registry.preprod.kubeops.net/kubeops"

clusterName: "example" 
clusterUser: "root" 
kubernetesVersion: "1.28.2" 
masterIP: 10.2.10.11 
firewall: "nftables" 
pluginNetwork: "calico" 
containerRuntime: "containerd" 

localRegistry: false

# at least 3 masters and 3 workers are needed
zones:
  - name: zone1
    nodes:
      master:
        - name: cluster1master1
          ipAdress: 10.2.10.11
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
        - name: cluster1master2
          ipAdress: 10.2.10.12
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
      worker:
        - name: cluster1worker1
          ipAdress: 10.2.10.14
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
        - name: cluster1worker2
          ipAdress: 10.2.10.15
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
  - name: zone2
    nodes:
      master:
        - name: cluster1master3
          ipAdress: 10.2.10.13
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: drained
          kubeversion: 1.28.2  
      worker:
        - name: cluster1worker1
          ipAdress: 10.2.10.16
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2

controlPlaneList: 
  - 10.2.10.12 # use ip adress here for master2
  - 10.2.10.13 # use ip adress here for master3

workerList: 
  - 10.2.10.14 # use ip adress here for worker1
  - 10.2.10.15 # use ip adress here for worker2
  - 10.2.10.16 # use ip adress here for worker3

rook-ceph: false
harbor: false
opensearch: false
opensearch-dashboards: false
logstash: false
filebeat: false
prometheus: false
opa: false
kubeops-dashboard: false
certman: false
ingress: false 
keycloak: false # mandatory, set to true if you want to install it into your cluster
velero: false

storageClass: "rook-cephfs"

rookValues:
  namespace: kubeops
  nodePort: 31931
  hostname: rook-ceph.local
  cluster:
    spec:
      dataDirHostPath: "/var/lib/rook"
    removeOSDsIfOutAndSafeToRemove: true
    storage:
      # Global filter to only select certain device names. This example matches names starting with sda or sdb.
      # Will only be used if useAllDevices is set to false and will be ignored if individual devices have been specified on a node.
      deviceFilter: "^sd[a-b]"
      # Names of individual nodes in the cluster that should have their storage included.
      # Will only be used if useAllNodes is set to false.
      nodes:
        - name: "<ip-adress of node_1>"
          devices:
            - name: "sdb" 
        - name: "<ip-adress of node_2>"
          deviceFilter: "^sd[a-b]"
          # config:
          #   metadataDevice: "sda"
    resources:
      mgr:
        requests:
          cpu: "500m"
          memory: "1Gi"
      mon:
        requests:
          cpu: "2"
          memory: "1Gi"
      osd:
        requests:
          cpu: "2"
          memory: "1Gi"
  operator:
    data:
      rookLogLevel: "DEBUG"
  blockStorageClass:
    parameters:
      fstype: "ext4"

postgrespass: "password"  # change to your desired password
postgres:
  storageClass: "rook-cephfs"
  volumeMode: "Filesystem"
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 2Gi

redispass: "password" # change to your desired password
redis:
  storageClass: "rook-cephfs"
  volumeMode: "Filesystem"
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 2Gi

harborValues: 
  namespace: kubeops
  harborpass: "password" # change to your desired password
  externalURL: https://10.2.10.11 # change to ip address of master1
  nodePort: 30003
  hostname: harbor.local
  harborPersistence:
    persistentVolumeClaim:
      registry:
        size: 40Gi
        storageClass: "rook-cephfs"
      chartmuseum:
        size: 5Gi
        storageClass: "rook-cephfs"
      jobservice:
        jobLog:
          size: 1Gi
          storageClass: "rook-cephfs"
        scanDataExports:
          size: 1Gi
          storageClass: "rook-cephfs"
      database:
        size: 1Gi
        storageClass: "rook-cephfs"
      redis:
        size: 1Gi
        storageClass: "rook-cephfs"
      trivy: 
        size: 5Gi
        storageClass: "rook-cephfs"

filebeatValues:
  namespace: kubeops 

logstashValues:
  namespace: kubeops
  volumeClaimTemplate:
    resources:
      requests:
        storage: 1Gi
    accessModes: 
      - ReadWriteMany
    storageClass: "rook-cephfs"

openSearchDashboardValues:
  namespace: kubeops
  nodePort: 30050
  hostname: opensearch.local

openSearchValues:
  namespace: kubeops
  opensearchJavaOpts: "-Xmx512M -Xms512M"
  replicas: "3"
  resources:
    requests:
      cpu: "250m"
      memory: "1024Mi"
    limits:
      cpu: "300m"
      memory: "3072Mi"
  persistence:
    size: 4Gi
    enabled: "true"
    enableInitChown: "false"
    enabled: "false"
    labels:
      enabled: "false"
    storageClass: "rook-cephfs"
    accessModes:
      - "ReadWriteMany"
  securityConfig:
    enabled: false
    ### Additional values can be set, if securityConfig is enabled:
    # path: "/usr/share/opensearch/plugins/opensearch-security/securityconfig"
    # actionGroupsSecret:
    # configSecret:
    # internalUsersSecret: internal-users-config-secret
    # rolesSecret:
    # rolesMappingSecret:
    # tenantsSecret:
    # config:
    #   securityConfigSecret: ""
    #   dataComplete: true
    #   data: {}

prometheusValues:
  namespace: kubeops
  privateRegistry: false

  grafanaUsername: "user"
  grafanaPassword: "password"
  grafanaResources:
    storageClass: "rook-cephfs"
    storage: 5Gi
    nodePort: 30211
    hostname: grafana.local

  prometheusResources:
    storageClass: "rook-cephfs"
    storage: 25Gi
    retention: 10d
    retentionSize: "24GB"
    nodePort: 32090
    hostname: prometheus.local

opaValues:
  namespace: kubeops

kubeOpsDashboardValues:
  namespace: kubeops
  hostname: kubeops-dashboard.local
  service:
    nodePort: 30007

certmanValues:
  namespace: kubeops
  replicaCount: 3
  logLevel: 2

ingressValues:
  namespace: kubeops
  externalIPs: []

keycloakValues:
  namespace: "kubeops"
  storageClass: "rook-cephfs"
  nodePort: "30180"
  hostname: keycloak.local
  keycloak:
    auth:
      adminUser: admin
      adminPassword: admin
      existingSecret: ""
  postgresql:
    auth:
      postgresPassword: ""
      username: bn_keycloak
      password: ""
      database: bitnami_keycloak
      existingSecret: ""

fileBeat

In order to change the registry of filebeat, you have to go into your kubeopsctl.yaml file and set filebeat: false to filebeat: true.

kubeopsctl change registry -f kubeopsctl.yaml -r 10.9.10.222:30003 -t localhost:30003

harbor

In order to change the registry of harbor, you have to go into your kubeopsctl.yaml file and set harbor: false to harbor: true.

kubeopsctl change registry -f kubeopsctl.yaml -r 10.9.10.222:30003 -t localhost:30003
logstash

In order to change the registry of logstash, you have to go into your kubeopsctl.yaml file and set logstash: false to logstash: true.

kubeopsctl change registry -f kubeopsctl.yaml -r 10.9.10.222:30003 -t localhost:30003

opa-gatekeeper

In order to change the registry of opa-gatekeeper, you have to go into your kubeopsctl.yaml file and set opa: false to opa: true.

kubeopsctl change registry -f kubeopsctl.yaml -r 10.9.10.222:30003 -t localhost:30003

opensearch

In order to change the registry of opensearch, you have to go into your kubeopsctl.yaml file and set opensearch: false to opensearch: true.

kubeopsctl change registry -f kubeopsctl.yaml -r 10.9.10.222:30003 -t localhost:30003

opensearch-dashboards

In order to change the registry of opensearch-dashboards, you have to go into your kubeopsctl.yaml file and set opensearch-dashboards: false to opensearch-dashboards: true.

kubeopsctl change registry -f kubeopsctl.yaml -r 10.9.10.222:30003 -t localhost:30003

prometheus

In order to change the registry of prometheus, you have to go into your kubeopsctl.yaml file and set prometheus: false to prometheus: true.

kubeopsctl change registry -f kubeopsctl.yaml -r 10.9.10.222:30003 -t localhost:30003

rook-ceph

In order to change the registry of rook-ceph, you have to go into your kubeopsctl.yaml file and set rook-ceph: false to rook-ceph: true.

kubeopsctl change registry -f kubeopsctl.yaml -r 10.9.10.222:30003 -t localhost:30003

cert-manager

In order to change the registry of cert-manager, you have to go into your kubeopsctl.yaml file and set certman: false to certman: true.

kubeopsctl change registry -f kubeopsctl.yaml -r 10.9.10.222:30003 -t localhost:30003

ingress-nginx

In order to change the registry of ingress-nginx, you have to go into your kubeopsctl.yaml file and set ingress: false to ingress: true.

kubeopsctl change registry -f kubeopsctl.yaml -r 10.9.10.222:30003 -t localhost:30003

keycloak

In order to change the registry of keycloak, you have to go into your kubeopsctl.yaml file and set keycloak: false to keycloak: true.

kubeopsctl change registry -f kubeopsctl.yaml -r 10.9.10.222:30003 -t localhost:30003

kubeops-dashboard

In order to change the registry of kubeops-dashboard, you have to go into your kubeopsctl.yaml file and set kubeops-dashboard: false to kubeops-dashboard: true.

kubeopsctl change registry -f kubeopsctl.yaml -r 10.9.10.222:30003 -t localhost:30003

2.20 - Change the OpenSearch Password

Detailed instructions on how to change the OpenSearch password.

Changing a User Password in OpenSearch

This guide explains how to change a user password in OpenSearch with SecurityConfig enabled and an external Kubernetes Secret for user credentials.

Steps to Change the Password Using an External Secret

Prerequisites

  • Access to the Kubernetes cluster where OpenSearch is deployed.
  • Permissions to view and modify secrets in the relevant namespace.

Step 1: Generate a New Password Hash

Execute the command below (replacing the placeholders) to generate a hashed version of your new password:

kubectl exec -it <opensearch_pod_name> -n <opensearch_pod_namespace> -- bash -c "sh /usr/share/opensearch/plugins/opensearch-security/tools/hash.sh -p <new_password>"

Step 2: Extract the Existing Secret and Update internal_users.yaml

Retrieve the existing secret containing internal_users.yml. The secret stores the configuration in base64 encoding, so extract and decode it:

kubectl get secrets -n <opensearch_pod_namespace> internal-users-config-secret -o jsonpath='{.data.internal_users\.yml}' | base64 -d > internal_users.yaml

Now, update the hashed password generated in Step 1 in the internal_users.yaml file for the intended user.
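
After the update, an entry in internal_users.yaml could look roughly like the following sketch (the user name, roles and the actual hash value depend on your setup):

admin:
  hash: "<hash generated in Step 1>"
  reserved: true
  backend_roles:
    - "admin"
  description: "admin user"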

Step 3: Patch the Secret with Updated internal_users.yml Data and Restart the Opensearch Pods

Encode the updated internal_users.yaml and apply it back to the secret.

cat internal_users.yaml | base64 -w 0 | xargs -I {} kubectl patch secret -n <opensearch_pod_namespace> internal-users-config-secret --patch '{"data": {"internal_users.yml": "{}"}}'

Restart the Opensearch pods to use the updated secret.

kubectl rollout restart statefulset opensearch-cluster-master -n <opensearch_pod_namespace>

NOTE: Please wait for the rollout to complete.

Step 4: Run securityadmin.sh to Apply the Changes

This completes the password update process, ensuring that changes persist across OpenSearch pods.

kubectl exec -it <opensearch_pod_name> -n <opensearch_pod_namespace> -- bash -c "\
    cp /usr/share/opensearch/plugins/opensearch-security/securityconfig/internal_users.yml /usr/share/opensearch/config/opensearch-security/ && \
    sh /usr/share/opensearch/plugins/opensearch-security/tools/securityadmin.sh \
    -cd /usr/share/opensearch/config/opensearch-security/ \
    -icl -nhnv \
    -cacert /usr/share/opensearch/config/root-ca.pem \
    -cert /usr/share/opensearch/config/kirk.pem \
    -key /usr/share/opensearch/config/kirk-key.pem"

2.21 - Enabling AuditLog

A brief overview of how you can enable AuditLog.

Enabling AuditLog

This guide describes the steps to enable the AuditLog in a Kubernetes cluster.

Steps to Enable the AuditLog

  1. Create the Directory: Navigate to the $KUBEOPSROOT/lima directory and create the auditLog folder:

    mkdir -p $KUBEOPSROOT/lima/auditLog
    
  2. Create the Audit Policy File: In the $KUBEOPSROOT/lima/auditLog directory, create the policy.yaml file:

    touch $KUBEOPSROOT/lima/auditLog/policy.yaml
    
  3. Configure the Audit Policy: Add the content to policy.yaml according to the official Kubernetes Audit Policy documentation. Rules can be added or removed as needed to customize the auditlogs.

    Example content for policy.yaml:

    apiVersion: audit.k8s.io/v1
    kind: Policy
    rules:
      - level: Metadata
        resources:
          - group: ""
            resources: ["pods"]
    
  4. Enable the AuditLog: To enable the auditlog for a cluster, execute the following command:

    lima change auditlog <clustername> -a true
    

    Example:

    lima change auditlog democluster -a true
    

Note

  • The auditlog can also be disabled if needed by setting the -a parameter to false:

    lima change auditlog <clustername> -a false
    

Additional Information

  • More details on configuring the audit policy can be found in the official Kubernetes documentation: Audit Policy.

2.22 - How to set up SSH keys

Setting up SSH (Secure Shell) keys is an essential step for securely accessing remote servers without the need to enter a password each time. Here’s a short introduction on how to set up SSH keys.

To securely access the kubeops master and worker machines, you need to create a ssh-key-pair (private and public key) on the admin machine. Afterwards copy the public key onto each machine.

Install SSH Client

Most Linux distributions come with an SSH client pre-installed. If it's not installed, you can install it using your distribution's package manager.

For RHEL 8, use the following command.

sudo dnf install -y openssh-clients

Generate SSH Keys

If you do not already have an SSH key or if you want to generate a new key pair specifically for this connection, follow these steps.

Run the command

ssh-keygen

Follow the prompts to choose a file location and passphrase (optional but recommended for added security).
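
For example, a key pair can also be generated non-interactively (here without a passphrase, which is only advisable for test setups):

ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -N ""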

Copy the Public Key to the Remote Machine

To avoid password prompts every time you connect, you can authorize your public key on the remote machine.

You can copy the public key to the server's authorized_keys file using the command ssh-copy-id.

ssh-copy-id <username>@<remote_host>

Replace <username>@<remote_host> with your actual username and the remote machine's IP address or hostname.

If ssh-copy-id is not available, you can use the following command:

cat ~/.ssh/id_rsa.pub | ssh username@remote_host "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"

Additional Information

For more information about commands see the documentation of your respective operating system.

For ssh or ssh-keygen you can use the manual pages:

man ssh
man ssh-keygen

2.23 - Accessing KubeOps RPM Server

Detailed instructions on how to access the KubeOps RPM Server.

Accessing the KubeOps RPM Server

In order to access the KubeOps RPM server, the following /etc/yum.repos.d/kubeops.repo file must be created with this content:

[kubeops-repo]
name = RHEL 8 BaseOS
baseurl = https://rpm.kubeops.net/kubeopsRepo/
gpgcheck = 0
enabled = 1
module_hotfixes=1

The key rpmServer must be added to the kubeopsctl.yaml file:

apiVersion: kubeops/kubeopsctl/alpha/v5 # mandatory
imagePullRegistry: "registry.preprod.kubernative.net/kubeops"
localRegistry: true
clusterName: "example"
kubernetesVersion: "1.28.2"
masterIP: 10.2.10.11
systemCpu: "200m"
systemMemory: "200Mi"
rpmServer: https://rpm.kubeops.net/kubeopsRepo/repodata/

zones:
  - name: zone1
    nodes:
      master:
        - name: master1
          ipAdress: 10.2.10.11
          status: active
          kubeversion: 1.28.2
        - name: master2
          ipAdress: 10.2.10.12
          status: active
          kubeversion: 1.28.2
      worker:
        - name: worker1
          ipAdress: 10.2.10.14
          status: active
          kubeversion: 1.28.2
        - name: worker2
          ipAdress: 10.2.10.15
          status: active
          kubeversion: 1.28.2
  - name: zone2
    nodes:
      master:
        - name: master3
          ipAdress: 10.2.10.13
          status: active
          kubeversion: 1.28.2  
      worker:
        - name: worker3
          ipAdress: 10.2.10.16
          status: active
          kubeversion: 1.28.2


# mandatory, set to true if you want to install it into your cluster
rook-ceph: true
harbor: true
opensearch: true
opensearch-dashboards: true
logstash: true
filebeat: true
prometheus: true
opa: true
kubeops-dashboard: true
certman: true
ingress: true 
keycloak: true
velero: true

harborValues: 
  harborpass: "password" # change to your desired password
  databasePassword: "Postgres_Password" # change to your desired password
  redisPassword: "Redis_Password" 
  externalURL: http://10.2.10.11:30002 # change to ip adress of master1

prometheusValues:
  grafanaUsername: "user"
  grafanaPassword: "password"

ingressValues:
  externalIPs: []

keycloakValues:
  keycloak:
    auth:
      adminUser: admin
      adminPassword: admin
  postgresql:
    auth:
      postgresPassword: ""
      username: bn_keycloak
      password: ""
      database: bitnami_keycloak
      existingSecret: ""

veleroValues:
  accessKeyId: "your_s3_storage_username"
  secretAccessKey: "your_s3_storage_password"

Accessing the RPM Server in an Air Gap Environment

If you have an AirGap environment, the following files must be placed in the specified directory.

kubeops.repo in /etc/yum.repos.d/

[kubeops-repo]
name = RHEL 8 BaseOS
baseurl = https://rpm.kubeops.net/kubeopsRepo/
gpgcheck = 0
enabled = 1
module_hotfixes=1

route-ens192 in /etc/sysconfig/network-scripts/

The following entry must be added to the route-ens192 file:

193.7.169.20 via <Your Gateway Address> dev <network interface>

For example:

193.7.169.20 via 10.2.10.1 dev ens192

hosts in /etc/

The following entry must be added to the hosts file:

193.7.169.20 rpm.kubeops.net

Test your connection

Please ensure that each node has a connection to the RPM server. The following command can be used for this:

dnf list kubeopsctl --showduplicates

2.24 - fix rook-ceph

Repair rook-ceph when worker nodes are down.

fix rook-ceph

If some worker nodes are down, you need to change the rook-ceph configuration if you use the parameters useAllNodes and useAllDevices. This guide is for temporarily fixing rook-ceph and is not a permanent solution.

  1. Get the tools pod
kubectl -n <rook-ceph namespace> get pod | grep tools
  2. Get the status
kubectl -n <rook-ceph namespace> exec -it <tools-pod-name> -- bash
ceph status
ceph osd status

If there are OSDs that do not have the status exists,up, they need to be removed:

ceph osd out <id of osd>
ceph osd crush remove osd.<id of osd>
ceph auth del osd.<id of osd>

You can now check the rest of the OSDs with ceph osd status.

It could be that you also need to decrease the replication size:

ceph osd pool ls
ceph osd pool set <pool-name> size 2

The default pool-name should be replicapool.

Then you can delete the deployments of the pods that are causing problems:

kubectl -n <rook-ceph namespace> delete deploy <deployment-name>

3 - Reference

In the reference you will find articles on the Kubeopsctl Commands, Fileformats, KubeOps Version and the Glossary

3.1 - Commands

KubeOps kubeopsctl

This documentation shows all features of kubeopsctl and how to use these features.

The kosi software must be installed from the start.

General commands

Overview of all KUBEOPSCTL commands

Usage:
  kubeopsctl [command] [options]

Options:
  --version       Show version information
  -?, -h, --help  Show help and usage information

Commands:
  apply             Use the apply command to apply a specific config to create or modify the cluster.
  change            change
  drain <argument>  Drain Command.
  uncordon <name>   Uncordon Command.
  upgrade <name>    upgrade Command.
  status <name>     Status Command.

Command ‘kubeopsctl –version’

The kubeopsctl --version command shows you the current version of kubeopsctl.

kubeopsctl --version

The output should be:

0.2.0-Alpha0

Command ‘kubeopsctl –help’

The command kubeopsctl --help gives you an overview of all available commands:

kubeopsctl --help

Alternatively, you can also enter kubeopsctl or kubeopsctl -? in the command line.

Command ‘kubeopsctl apply’

The command kubeopsctl apply is used to set up the kubeops platform with a configuration file.

Example:

kubeopsctl apply -f kubeopsctl.yaml

-f flag

The -f parameter is used to specify the yaml parameter file.

-l flag

The -l parameter is used to set the log level to a specific value. The default log level is Info. Available log levels are Error, Warning, Info, Debug1, Debug2, Debug3.

Example:

kubeopsctl apply -f kubeopsctl.yaml -l Debug3

Command ‘kubeopsctl change registry’

The command kubeopsctl change registry is used to change the currently used registry to a different one.

Example:

kubeopsctl change registry -f kubeopsctl.yaml -r 10.2.10.11/library -t localhost/library 

-f flag

The -f parameter is used to specify the yaml parameter file.

-r flag

The -r parameter is used to pull the docker images which are included in the package to a given local docker registry.

-t flag

The -t parameter is used to tag the images with localhost. For the scenario where the registry of the cluster is exposed to the admin via a network-internal domain name, but this name can't be resolved by the nodes, the -t flag can be used to use the cluster-internal hostname of the registry.

Command ‘kubeopsctl drain’

The command kubeopsctl drain is used to drain a cluster, zone or node.

In this example we are draining a cluster:

kubeopsctl drain cluster/example 

In this example we are draining a zone:

kubeopsctl drain zone/zone1 

In this example we are draining a node:

kubeopsctl drain node/master1 

There can be issues when draining nodes if rook is installed, because rook has pod disruption budgets, so there must be enough nodes ready for rook.
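
Before draining, you can check which pod disruption budgets exist and how many disruptions are currently allowed, for example with:

kubectl get poddisruptionbudgets -A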

Command ‘kubeopsctl uncordon’

The command kubeopsctl uncordon is used to uncordon a cluster, zone or node.

In this example we are uncordoning a cluster:

kubeopsctl uncordon cluster/example 

In this example we are uncordoning a zone:

kubeopsctl uncordon zone/zone1 

In this example we are uncordoning a node:

kubeopsctl uncordon node/master1 

Command ‘kubeopsctl upgrade’

The command kubeopsctl upgrade is used to upgrade the kubernetes version of a cluster, zone or node.

In this example we are upgrading a cluster:

kubeopsctl upgrade cluster/example -v 1.26.6

In this example we are upgrading a zone:

kubeopsctl upgrade zone/zone1 -v 1.26.6 

In this example we are upgrading a node:

kubeopsctl upgrade node/master1 -v 1.26.6 

-v flag

The -v parameter is used to set a higher Kubernetes version.

Command ‘kubeopsctl status’

The command kubeopsctl status is used to get the status of a cluster.

Example:

kubeopsctl status cluster/cluster1 -v 1.26.6 

3.2 - Documentation-kubeopsctl

KubeOps kubeopsctl

This documentation shows all features of kubeopsctl and how to use these features.


Prerequisites

Minimum hardware and OS requirements for a Linux machine are:

OS                             Minimum Requirements
Red Hat Enterprise Linux 8     8 CPU cores, 16 GB memory
OpenSUSE 15                    8 CPU cores, 16 GB memory

At least one machine should be used as an admin machine for cluster lifecycle management.

Requirements on admin

The following requirements must be fulfilled on the admin machine.

  1. All the utilized users require sudo privileges. If you are using KubeOps as a user, you need a user with sudo rights; for openSUSE and RHEL 8 environments, the user should be added to the wheel group. Make sure that you change your user with:
su -l <user>
  2. Admin machine must be synchronized with the current time.

  3. You need an internet connection to use the default KubeOps registry registry.preprod.kubernative.net/kubeops.

A local registry can be used in the Airgap environment. KubeOps only supports secure registries.
In case of insecure registry usage, it is important to list your registry as an insecure registry in the registry configuration (/etc/containers/registries.conf for podman, /etc/docker/daemon.json for docker).
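
For podman, an insecure registry entry in /etc/containers/registries.conf could look like the following sketch (the registry address is a placeholder, e.g. a local Harbor NodePort):

[[registry]]
location = "10.2.10.11:30002"
insecure = true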

Now you can create your own registry instead of using the default. Check out the how-to guide Create a new Repository for more info.

  1. kosi 2.8.0 must be installed on your machine. Click here to view how it is done in the Quick Start Guide.

  2. It is recommended that runc is uninstalled. To uninstall runc on your OS use the following command:

    dnf remove -y runc
    zypper remove -y runc

  3. tc should be installed. To install tc on your OS use the following command:

    dnf install -y tc
    zypper install -y iproute2

  4. For OpenSearch, the /etc/sysctl.conf file should be configured; the line

vm.max_map_count=262144

should be added. Also, the command

 sysctl -p

should be executed after that.

  5. Podman must be installed on your machine. To install podman, use the command for your OS:
    dnf install podman
    zypper install podman
  6. You must install the kubeops-basic-plugins:0.4.0.

    Simply type in the following command to install the Basic-Plugins.

    kosi install --hub=public pia/kubeops-basic-plugins:0.4.0
    

    Note that you have to install it as the root user.

  7. You must install the kubeops-kubernetes-plugins:0.5.0.

    Simply type in the following command to install the Kubernetes-Plugins.

    kosi install --hub public pia/kubeops-kubernetes-plugins:0.5.0
    

Requirements for each node

The following requirements must be fulfilled on each node.

  1. All the utilized users require sudo privileges. If you are using KubeOps as a user, you need a user with sudo rights; for openSUSE and RHEL 8 environments, the user should be added to the wheel group.

  2. Every machine must be synchronized with the current time.

  3. You have to assign lowercase unique hostnames for every machine you are using.

    We recommend using self-explanatory hostnames.

    To set the hostname on your machine use the following command:

    hostnamectl set-hostname <name of node>
    
    • Example
      Use the commands below to set the hostnames on each machine as admin, master, node1 node2.
      hostnamectl set-hostname admin
      hostnamectl set-hostname master 
      hostnamectl set-hostname node1
      hostnamectl set-hostname node2
      

    Requires sudo privileges

    It is recommended that a DNS service is running; if you don't have a DNS service, you can change the /etc/hosts file. An example of an entry in the /etc/hosts file could be:

    10.2.10.12 admin
    10.2.10.13 master1
    10.2.10.14 master2
    10.2.10.15 master3
    10.2.10.16 node1
    10.2.10.17 node2
    10.2.10.18 node3
    

  4. To establish an SSH connection between your machines, you either need an SSH key or you need to install sshpass.

    1. Generate an SSH key on admin machine using following command

      ssh-keygen
      

      There will be two keys generated in ~/.ssh directory.
      The first key is the id_rsa(private) and the second key is the id_rsa.pub(public).

    2. Copy the ssh key from admin machine to your node machine/s with following command

      ssh-copy-id <ip address or hostname of your node machine>
      
    3. Now try establishing a connection to your node machine/s

      ssh <ip address or hostname of your node machine>
      

How to Configure Cluster/Nodes/Software using yaml file

You need to have a cluster definition file which describes the different aspects of your cluster. This file describes only one cluster.

Full yaml syntax

Choose the appropriate imagePullRegistry based on your kubeops version.

### General values for registry access ###
imagePullRegistry: "registry.preprod.kubernative.net/lima" # mandatory, the registry from which the images for the cluster are pulled
localRegistry: false # mandatory, set to true if you use a local registry, if you want to set the value to true then you also have to set harbor to true
### General values for registry access ###
imagePullRegistry: "registry.preprod.kubernative.net/kubeops" # mandatory, the registry from which the images for the cluster are pulled
localRegistry: false # mandatory, set to true if you use a local registry, if you want to set the value to true then you also have to set harbor to true
apiVersion: kubeops/kubeopsctl/alpha/v5 # mandatory
clusterName: "example" # mandatory
clusterUser: "myuser" # mandatory
kubernetesVersion: "1.28.2" # mandatory
#masterHost: optional if you have an hostname, default value in "masterIP"
masterIP: 10.2.10.12 # mandatory
firewall: "nftables" # mandatory, default "nftables"
pluginNetwork: "calico" # mandatory, default "calico"
containerRuntime: "containerd" # mandatory
clusterOS: "Red Hat Enterprise Linux" # mandatory, can be "Red Hat Enterprise Linux" or "openSUSE Leap"
### Additional values for cluster configuration
useInsecureRegistry: false # optional, default is false
ignoreFirewallError: false # optional, default is false
serviceSubnet: 192.168.128.0/17 # optional, default "192.168.128.0/17"
podSubnet: 192.168.0.0/17 # optional, default "192.168.0.0/17"
debug: false # optional, default is false
logLevel: vvvvv # optional, default "vvvvv"
systemCpu: "1" # optional, default "1"
systemMemory: "2G" # optional, default "2G"
sudo: true # optional, default is true
tmpCopyDir: "/tmp" # optional, default is /tmp
createCluster: true # optional, default is true
updateRegistry: true # optional, default is true

zones:
  - name: zone1
    nodes:
      master:
        - name: cluster1master1
          ipAdress: 10.2.10.11
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
        - name: cluster1master2
          ipAdress: 10.2.10.12
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
      worker:
        - name: cluster1worker1
          ipAdress: 10.2.10.14
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
        - name: cluster1worker2
          ipAdress: 10.2.10.15
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
  - name: zone2
    nodes:
      master:
        - name: cluster1master3
          ipAdress: 10.2.10.13
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: drained
          kubeversion: 1.28.2  
      worker:
        - name: cluster1worker1
          ipAdress: 10.2.10.16
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2

# set to true if you want to install it into your cluster
rook-ceph: true # mandatory
harbor: true # mandatory
opensearch: true # mandatory
opensearch-dashboards: true # mandatory
logstash: true # mandatory
filebeat: true # mandatory
prometheus: true # mandatory
opa: true # mandatory
kubeops-dashboard: true # mandatory
certman: true # mandatory
ingress: true # mandatory
keycloak: true # mandatory
velero: true # mandatory

nameSpace: "kubeops" #optional, the default value is different for each application
storageClass: "rook-cephfs" # optional, default value is "rook-cephfs"

###Values for Rook-Ceph###
rookValues:
  namespace: kubeops
  cluster:
    spec:
      dataDirHostPath: "/var/lib/rook" # optional, default is /var/lib/rook
    storage:
      useAllNodes: true # optional, default value: true
      useAllDevices: true # optional, default value: true
      deviceFilter: "^sd[a-b]" # optional, will only be used if useAllDevices is set to false
      config:
        metadataDevice: "sda" # optional, only set this value, if there is a device available
      nodes: # optional if useAllNodes is set to true, otherwise mandatory
        - name: "<ip-adress of node_1>"
          devices:
            - name: "sdb" 
        - name: "<ip-adress of node_2>"
          deviceFilter: "^sd[a-b]"
          config:
            metadataDevice: "sda" # optional
    resources:
      mgr:
        requests:
          cpu: "500m" # optional, default is 500m, limit: 1000m
          memory: "512Mi" # optional, default is 1Gi, limit: 1Gi
      mon:
        requests:
          cpu: "1" # optional, default is 1, limit: 2000m
          memory: "1Gi" # optional, default is 1Gi, limit: 2Gi
      osd:
        requests:
          cpu: "1" # optional, default is 1, limit: 2
          memory: "1Gi" # optional, default is 4Gi, limit: 4Gi
  operator:
    data:
      rookLogLevel: "DEBUG" # optional, default is DEBUG
#-------------------------------------------------------------------------------------------------------------------------------
### Values for Harbor deployment ###
## For detailed explaination for each key see: https://artifacthub.io/packages/helm/harbor/harbor/1.8.1#configuration ##
harborValues: 
  namespace: kubeops # optional, default is kubeops
  harborpass: "password" # mandatory: set password for harbor access
  databasePassword: "Postgres_Password" # mandatory: set password for database access
  redisPassword: "Redis_Password" # mandatory: set password for redis access
  externalURL: http://10.2.10.11:30002 # mandatory, the ip address and port, from which harbor is accessable outside of the cluster
  nodePort: 30002 # mandatory
  hostname: harbor.local # mandatory
  harborPersistence:
    persistentVolumeClaim:
      registry:
        size: 40Gi # optional, default is 40Gi
        storageClass: "rook-cephfs" #optional, default is rook-cephfs
      jobservice:
        jobLog:
          size: 1Gi # optional, default is 1Gi
          storageClass: "rook-cephfs" #optional, default is rook-cephfs
      database:
        size: 1Gi # optional, default is 1Gi
        storageClass: "rook-cephfs" #optional, default is rook-cephfs
      redis:
        size: 1Gi # optional, default is 1Gi
        storageClass: "rook-cephfs" #optional, default is rook-cephfs
      trivy: 
        size: 5Gi # optional, default is 5Gi
        storageClass: "rook-cephfs" #optional, default is rook-cephfs
#--------------------------------------------------------------------------------------------------------------------------------------
###Values for filebeat deployment###
filebeatValues:
  namespace: kubeops # optional, default is kubeops   
#--------------------------------------------------------------------------------------------------------------------------------
###Values for Logstash deployment###
##For detailed explaination for each key see: https://github.com/elastic/helm-charts/releases/tag/v7.16.3###
logstashValues:
  namespace: kubeops
  volumeClaimTemplate:
    accessModes: 
      - ReadWriteMany #optional, default is [ReadWriteMany]
    resources:
      requests:
        storage: 1Gi # mandatory, depending on storage capacity
    storageClass: "rook-cephfs" #optional, default is rook-cephfs
#--------------------------------------------------------------------------------------------------------------------------------------
###Values for OpenSearch-Dashboards deployment###
##For detailed explaination for each key see: https://github.com/opensearch-project/helm-charts/tree/main/charts/opensearch-dashboards###
openSearchDashboardValues:
  namespace: kubeops
  nodePort: 30050
#--------------------------------------------------------------------------------------------------------------------------------
###Values for OpenSearch deployment###
##For detailed explaination for each key see: https://github.com/opensearch-project/helm-charts/tree/main/charts/opensearch###
openSearchValues:
  namespace: kubeops
  opensearchJavaOpts: "-Xmx512M -Xms512M" # optional, default is -Xmx512M -Xms512M
  resources:
    requests:
      cpu: "250m" # optional, default is 250m
      memory: "1024Mi" # optional, default is 1024Mi
    limits:
      cpu: "300m" # optional, default is 300m
      memory: "3072Mi" # optional, default is 3072Mi
  persistence:
    size: 4Gi # mandatory
    enabled: "true" # optional, default is true
    enableInitChown: "false" # optional, default is false
    labels:
      enabled: "false" # optional, default is false
    storageClass: "rook-cephfs" # optional, default is rook-cephfs
    accessModes:
      - "ReadWriteMany" # optional, default is {ReadWriteMany}
  securityConfig:
    enabled: false # optional, default value: false
    ### Additional values can be set, if securityConfig is enabled:
    # path: "/usr/share/opensearch/plugins/opensearch-security/securityconfig"
    # actionGroupsSecret:
    # configSecret:
    # internalUsersSecret: internal-users-config-secret
    # rolesSecret:
    # rolesMappingSecret:
    # tenantsSecret:
    # config:
    #   securityConfigSecret: ""
    #   dataComplete: true
    #   data: {}
  replicas: "3" # optional, default is 3
#--------------------------------------------------------------------------------------------------------------------------------
###Values for Prometheus deployment###
prometheusValues:
  namespace: kubeops # optional, default is kubeops
  privateRegistry: false # optional, default is false
  grafanaUsername: "user" # optional, default is user
  grafanaPassword: "password" # optional, default is password
  grafanaResources:
    storageClass: "rook-cephfs" # optional, default is rook-cephfs
    storage: 5Gi # optional, default is 5Gi
    nodePort: 30211 # optional, default is 30211

  prometheusResources:
    storageClass: "rook-cephfs" # optional, default is rook-cephfs
    storage: 25Gi # optional, default is 25Gi
    retention: 10d # optional, default is 10d
    retentionSize: "24GB" # optional, default is 24GB
    nodePort: 32090
#--------------------------------------------------------------------------------------------------------------------------------
###Values for OPA deployment###
opaValues:
  namespace: kubeops

#--------------------------------------------------------------------------------------------------------------------------------
###Values for KubeOps-Dashboard (Headlamp) deployment###
kubeOpsDashboardValues:
  service:
    nodePort: 30007
#--------------------------------------------------------------------------------------------------------------------------------
###Values for cert-manager deployment###
certmanValues:
  namespace: kubeops
  replicaCount: 3
  logLevel: 2
  secretName: root-secret 
#--------------------------------------------------------------------------------------------------------------------------------
###Values for ingress-nginx deployment###
ingressValues:
  namespace: kubeops
keycloakValues:
  namespace: "kubeops" # Optional, default is "keycloak"
  storageClass: "rook-cephfs" # Optional, default is "rook-cephfs"
  keycloak:
    auth:
      adminUser: admin # Optional, default is admin
      adminPassword: admin # Optional, default is admin
  postgresql:
    auth:
      postgresUserPassword: "" # Optional, default is ""
      username: bn_keycloak # Optional, default is "bn_keycloak"
      password: "" # Optional, default is ""
      database: bitnami_keycloak # Optional, default is "bitnami_keycloak"
    volumeSize: 8Gi

veleroValues:
  namespace: "velero"
  accessKeyId: "your_s3_storage_username"
  secretAccessKey: "your_s3_storage_password"
  useNodeAgent: false
  defaultVolumesToFsBackup: false
  provider: "aws"
  bucket: "velero"
  useVolumeSnapshots: false
  backupLocationConfig:
    region: "minio"
    s3ForcePathStyle: true
    s3Url: "http://minio.velero.svc:9000"

How to use kubeopsctl

Apply changes to the cluster

kubeopsctl apply -f kubeopsctl.yaml

3.3 - Fileformats

Fileformats in kubeopsctl

This documentation shows you all the different kinds of file formats kubeopsctl uses and how to use them.

How to configure cluster, nodes and software using the kubeopsctl.yaml file

You need a cluster definition file which describes the different aspects of your cluster. Each file describes exactly one cluster.

Choose the appropriate imagePullRegistry based on your KubeOps version; both variants are shown below.

### General values for registry access ###
imagePullRegistry: "registry.preprod.kubernative.net/lima" # mandatory, the registry from which the images for the cluster are pulled
localRegistry: false # mandatory, set to true if you use a local registry, if you want to set the value to true then you also have to set harbor to true
### General values for registry access ###
imagePullRegistry: "registry.preprod.kubernative.net/kubeops" # mandatory, the registry from which the images for the cluster are pulled
localRegistry: false # mandatory, set to true if you use a local registry, if you want to set the value to true then you also have to set harbor to true

Full yaml syntax

apiVersion: kubeops/kubeopsctl/alpha/v5 # mandatory
clusterName: "example" # mandatory
clusterUser: "myuser" # mandatory
kubernetesVersion: "1.28.2" # mandatory
#masterHost: optional if you have an hostname, default value in "masterIP"
masterIP: 10.2.10.12 # mandatory
firewall: "nftables" # mandatory, default "nftables"
pluginNetwork: "calico" # mandatory, default "calico"
containerRuntime: "containerd" # mandatory
clusterOS: "Red Hat Enterprise Linux" # mandatory, must be "Red Hat Enterprise Linux"
### Additional values for cluster configuration
useInsecureRegistry: false # optional, default is false
ignoreFirewallError: false # optional, default is false
serviceSubnet: 192.168.128.0/17 # optional, default "192.168.128.0/17"
podSubnet: 192.168.0.0/17 # optional, default "192.168.0.0/17"
debug: false # optional, default is false
logLevel: vvvvv # optional, default "vvvvv"
systemCpu: "1" # optional, default "1"
systemMemory: "2G" # optional, default "2G"
sudo: true # optional, default is true
tmpCopyDir: "/tmp" # optional, default is /tmp
createCluster: true # optional, default is true
updateRegistry: true # optional, default is true

zones:
  - name: zone1
    nodes:
      master:
        - name: cluster1master1
          ipAdress: 10.2.10.11
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
        - name: cluster1master2
          ipAdress: 10.2.10.12
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
      worker:
        - name: cluster1worker1
          ipAdress: 10.2.10.14
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
        - name: cluster1worker2
          ipAdress: 10.2.10.15
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
  - name: zone2
    nodes:
      master:
        - name: cluster1master3
          ipAdress: 10.2.10.13
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: drained
          kubeversion: 1.28.2  
      worker:
        - name: cluster1worker1
          ipAdress: 10.2.10.16
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2

# set to true if you want to install it into your cluster
rook-ceph: true # mandatory
harbor: true # mandatory
opensearch: true # mandatory
opensearch-dashboards: true # mandatory
logstash: true # mandatory
filebeat: true # mandatory
prometheus: true # mandatory
opa: true # mandatory
kubeops-dashboard: true # mandatory
certman: true # mandatory
ingress: true # mandatory
keycloak: true # mandatory
velero: true # mandatory

nameSpace: "kubeops" #optional, the default value is different for each application
storageClass: "rook-cephfs" # optional, default value is "rook-cephfs"

###Values for Rook-Ceph###
rookValues:
  namespace: kubeops
  cluster:
    spec:
      dataDirHostPath: "/var/lib/rook" # optional, default is /var/lib/rook
    storage:
      useAllNodes: true # optional, default value: true
      useAllDevices: true # optional, default value: true
      deviceFilter: "^sd[a-b]" # optional, will only be used if useAllDevices is set to false
      config:
        metadataDevice: "sda" # optional, only set this value, if there is a device available
      nodes: # optional if useAllNodes is set to true, otherwise mandatory
        - name: "<ip-adress of node_1>"
          devices:
            - name: "sdb" 
        - name: "<ip-adress of node_2>"
          deviceFilter: "^sd[a-b]"
          config:
            metadataDevice: "sda" # optional
    resources:
      mgr:
        requests:
          cpu: "500m" # optional, default is 500m, limit: 1000m
          memory: "512Mi" # optional, default is 1Gi, limit: 1Gi
      mon:
        requests:
          cpu: "1" # optional, default is 1, limit: 2000m
          memory: "1Gi" # optional, default is 1Gi, limit: 2Gi
      osd:
        requests:
          cpu: "1" # optional, default is 1, limit: 2
          memory: "1Gi" # optional, default is 4Gi, limit: 4Gi
  operator:
    data:
      rookLogLevel: "DEBUG" # optional, default is DEBUG
#-------------------------------------------------------------------------------------------------------------------------------
### Values for Harbor deployment ###
## For detailed explanation for each key see: https://artifacthub.io/packages/helm/harbor/harbor/1.8.1#configuration ##
harborValues: 
  namespace: kubeops # optional, default is kubeops
  harborpass: "password" # mandatory: set password for harbor access
  databasePassword: "Postgres_Password" # mandatory: set password for database access
  redisPassword: "Redis_Password" # mandatory: set password for redis access
  externalURL: http://10.2.10.11:30002 # mandatory, the ip address and port at which harbor is accessible from outside the cluster
  nodePort: 30002 # mandatory
  hostname: harbor.local # mandatory
  harborPersistence:
    persistentVolumeClaim:
      registry:
        size: 40Gi # optional, default is 40Gi
        storageClass: "rook-cephfs" #optional, default is rook-cephfs
      jobservice:
        jobLog:
          size: 1Gi # optional, default is 1Gi
          storageClass: "rook-cephfs" #optional, default is rook-cephfs
      database:
        size: 1Gi # optional, default is 1Gi
        storageClass: "rook-cephfs" #optional, default is rook-cephfs
      redis:
        size: 1Gi # optional, default is 1Gi
        storageClass: "rook-cephfs" #optional, default is rook-cephfs
      trivy: 
        size: 5Gi # optional, default is 5Gi
        storageClass: "rook-cephfs" #optional, default is rook-cephfs
#--------------------------------------------------------------------------------------------------------------------------------------
###Values for filebeat deployment###
filebeatValues:
  namespace: kubeops # optional, default is kubeops   
#--------------------------------------------------------------------------------------------------------------------------------
###Values for Logstash deployment###
##For detailed explanation for each key see: https://github.com/elastic/helm-charts/releases/tag/v7.16.3###
logstashValues:
  namespace: kubeops
  volumeClaimTemplate:
    accessModes: 
      - ReadWriteMany #optional, default is [ReadWriteMany]
    resources:
      requests:
        storage: 1Gi # mandatory, depending on storage capacity
    storageClass: "rook-cephfs" #optional, default is rook-cephfs
#--------------------------------------------------------------------------------------------------------------------------------------
###Values for OpenSearch-Dashboards deployment###
##For detailed explanation for each key see: https://github.com/opensearch-project/helm-charts/tree/main/charts/opensearch-dashboards###
openSearchDashboardValues:
  namespace: kubeops
  nodePort: 30050
#--------------------------------------------------------------------------------------------------------------------------------
###Values for OpenSearch deployment###
##For detailed explanation for each key see: https://github.com/opensearch-project/helm-charts/tree/main/charts/opensearch###
openSearchValues:
  namespace: kubeops
  opensearchJavaOpts: "-Xmx512M -Xms512M" # optional, default is -Xmx512M -Xms512M
  resources:
    requests:
      cpu: "250m" # optional, default is 250m
      memory: "1024Mi" # optional, default is 1024Mi
    limits:
      cpu: "300m" # optional, default is 300m
      memory: "3072Mi" # optional, default is 3072Mi
  persistence:
    size: 4Gi # mandatory
    enabled: "true" # optional, default is true
    enableInitChown: "false" # optional, default is false
    labels:
      enabled: "false" # optional, default is false
    storageClass: "rook-cephfs" # optional, default is rook-cephfs
    accessModes:
      - "ReadWriteMany" # optional, default is {ReadWriteMany}
  securityConfig:
    enabled: false # optional, default value: false
    ### Additional values can be set, if securityConfig is enabled:
    # path: "/usr/share/opensearch/plugins/opensearch-security/securityconfig"
    # actionGroupsSecret:
    # configSecret:
    # internalUsersSecret: internal-users-config-secret
    # rolesSecret:
    # rolesMappingSecret:
    # tenantsSecret:
    # config:
    #   securityConfigSecret: ""
    #   dataComplete: true
    #   data: {}
  replicas: "3" # optional, default is 3
#--------------------------------------------------------------------------------------------------------------------------------
###Values for Prometheus deployment###
prometheusValues:
  namespace: kubeops # optional, default is kubeops
  privateRegistry: false # optional, default is false
  grafanaUsername: "user" # optional, default is user
  grafanaPassword: "password" # optional, default is password
  grafanaResources:
    storageClass: "rook-cephfs" # optional, default is rook-cephfs
    storage: 5Gi # optional, default is 5Gi
    nodePort: 30211 # optional, default is 30211

  prometheusResources:
    storageClass: "rook-cephfs" # optional, default is rook-cephfs
    storage: 25Gi # optional, default is 25Gi
    retention: 10d # optional, default is 10d
    retentionSize: "24GB" # optional, default is 24GB
    nodePort: 32090
#--------------------------------------------------------------------------------------------------------------------------------
###Values for OPA deployment###
opaValues:
  namespace: kubeops

#--------------------------------------------------------------------------------------------------------------------------------
###Values for KubeOps-Dashboard (Headlamp) deployment###
kubeOpsDashboardValues:
  service:
    nodePort: 30007
#--------------------------------------------------------------------------------------------------------------------------------
###Values for cert-manager deployment###
certmanValues:
  namespace: kubeops
  replicaCount: 3
  logLevel: 2
  secretName: root-secret 
#--------------------------------------------------------------------------------------------------------------------------------
###Values for ingress-nginx deployment###
ingressValues:
  namespace: kubeops
keycloakValues:
  namespace: "kubeops" # Optional, default is "keycloak"
  storageClass: "rook-cephfs" # Optional, default is "rook-cephfs"
  keycloak:
    auth:
      adminUser: admin # Optional, default is admin
      adminPassword: admin # Optional, default is admin
      existingSecret: "" # Optional, default is ""
  postgresql:
    auth:
      postgresPassword: "" # Optional, default is ""
      username: bn_keycloak # Optional, default is "bn_keycloak"
      password: "" # Optional, default is ""
      database: bitnami_keycloak # Optional, default is "bitnami_keycloak"
      existingSecret: "" # Optional, default is ""
veleroValues:
  namespace: "velero"
  accessKeyId: "your_s3_storage_username"
  secretAccessKey: "your_s3_storage_password"
  useNodeAgent: false
  defaultVolumesToFsBackup: false
  provider: "aws"
  bucket: "velero"
  useVolumeSnapshots: false
  backupLocationConfig:
    region: "minio"
    s3ForcePathStyle: true
    s3Url: "http://minio.velero.svc:9000"

Full yaml syntax for apiVersion kubeops/kubeopsctl/alpha/v4:

apiVersion: kubeops/kubeopsctl/alpha/v4 # mandatory
clusterName: "example" # mandatory
clusterUser: "myuser" # mandatory
kubernetesVersion: "1.28.2" # mandatory
#masterHost: optional if you have an hostname, default value in "masterIP"
masterIP: 10.2.10.11 # mandatory
firewall: "nftables" # mandatory, default "nftables"
pluginNetwork: "calico" # mandatory, default "calico"
containerRuntime: "containerd" # mandatory
clusterOS: "Red Hat Enterprise Linux" # mandatory, must be "Red Hat Enterprise Linux"
### Additional values for cluster configuration
useInsecureRegistry: false # optional, default is false
ignoreFirewallError: false # optional, default is false
serviceSubnet: 192.168.128.0/17 # optional, default "192.168.128.0/17"
podSubnet: 192.168.0.0/17 # optional, default "192.168.0.0/17"
debug: false # optional, default is false
logLevel: vvvvv # optional, default "vvvvv"
systemCpu: "1" # optional, default "1"
systemMemory: "2G" # optional, default "2G"
sudo: true # optional, default is true
tmpCopyDir: "/tmp" # optional, default is /tmp
createCluster: true # optional, default is true
updateRegistry: true # optional, default is true

zones:
  - name: zone1
    nodes:
      master:
        - name: cluster1master1
          ipAdress: 10.2.10.11
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
        - name: cluster1master2
          ipAdress: 10.2.10.12
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
      worker:
        - name: cluster1worker1
          ipAdress: 10.2.10.14
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
        - name: cluster1worker2
          ipAdress: 10.2.10.15
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
  - name: zone2
    nodes:
      master:
        - name: cluster1master3
          ipAdress: 10.2.10.13
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: drained
          kubeversion: 1.28.2  
      worker:
        - name: cluster1worker1
          ipAdress: 10.2.10.16
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2

# set to true if you want to install it into your cluster
rook-ceph: true # mandatory
harbor: true # mandatory
opensearch: true # mandatory
opensearch-dashboards: true # mandatory
logstash: true # mandatory
filebeat: true # mandatory
prometheus: true # mandatory
opa: true # mandatory
kubeops-dashboard: true # mandatory
certman: true # mandatory
ingress: true # mandatory
keycloak: true # mandatory
velero: true # mandatory

nameSpace: "kubeops" #optional, the default value is different for each application
storageClass: "rook-cephfs" # optional, default value is "rook-cephfs"

###Values for Rook-Ceph###
rookValues:
  namespace: kubeops
  cluster:
    spec:
      dataDirHostPath: "/var/lib/rook" # optional, default is /var/lib/rook
    storage:
      useAllNodes: true # optional, default value: true
      useAllDevices: true # optional, default value: true
      deviceFilter: "^sd[a-b]" # optional, will only be used if useAllDevices is set to false
      config:
        metadataDevice: "sda" # optional, only set this value, if there is a device available
      nodes: # optional if useAllNodes is set to true, otherwise mandatory
        - name: "<ip-adress of node_1>"
          devices:
            - name: "sdb" 
        - name: "<ip-adress of node_2>"
          deviceFilter: "^sd[a-b]"
          config:
            metadataDevice: "sda" # optional
    resources:
      mgr:
        requests:
          cpu: "500m" # optional, default is 500m, limit: 1000m
          memory: "512Mi" # optional, default is 1Gi, limit: 1Gi
      mon:
        requests:
          cpu: "1" # optional, default is 1, limit: 2000m
          memory: "1Gi" # optional, default is 1Gi, limit: 2Gi
      osd:
        requests:
          cpu: "1" # optional, default is 1, limit: 2
          memory: "1Gi" # optional, default is 4Gi, limit: 4Gi
      cephFileSystems:
        requests:
          cpu: "1" # optional, default is 1, limit: 2
          memory: "1Gi" # optional, default is 1, limit: 4Gi
      cephObjectStores:
        requests:
          cpu: "1" # optional, default is 1, limit: 2
          memory: "1Gi" # optional, default is 1Gi, limit: 2Gi
  operator:
    data:
      rookLogLevel: "DEBUG" # optional, default is DEBUG
#-------------------------------------------------------------------------------------------------------------------------------
### Values for Harbor deployment ###
## For detailed explanation for each key see: https://artifacthub.io/packages/helm/harbor/harbor/1.8.1#configuration ##
harborValues: 
  namespace: kubeops # optional, default is kubeops
  harborpass: "password" # mandatory: set password for harbor access
  databasePassword: "Postgres_Password" # mandatory: set password for database access
  redisPassword: "Redis_Password" # mandatory: set password for redis access
  externalURL: http://10.2.10.11:30002 # mandatory, the ip address and port at which harbor is accessible from outside the cluster
  nodePort: 30002 # mandatory
  hostname: harbor.local # mandatory
  harborPersistence:
    persistentVolumeClaim:
      registry:
        size: 40Gi # optional, default is 40Gi
        storageClass: "rook-cephfs" #optional, default is rook-cephfs
      jobservice:
        jobLog:
          size: 1Gi # optional, default is 1Gi
          storageClass: "rook-cephfs" #optional, default is rook-cephfs
      database:
        size: 1Gi # optional, default is 1Gi
        storageClass: "rook-cephfs" #optional, default is rook-cephfs
      redis:
        size: 1Gi # optional, default is 1Gi
        storageClass: "rook-cephfs" #optional, default is rook-cephfs
      trivy: 
        size: 5Gi # optional, default is 5Gi
        storageClass: "rook-cephfs" #optional, default is rook-cephfs
#--------------------------------------------------------------------------------------------------------------------------------------
###Values for filebeat deployment###
filebeatValues:
  namespace: kubeops # optional, default is kubeops   
#--------------------------------------------------------------------------------------------------------------------------------
###Values for Logstash deployment###
##For detailed explanation for each key see: https://github.com/elastic/helm-charts/releases/tag/v7.16.3###
logstashValues:
  namespace: kubeops
  volumeClaimTemplate:
    accessModes: 
      - ReadWriteMany #optional, default is [ReadWriteMany]
    resources:
      requests:
        storage: 1Gi # mandatory, depending on storage capacity
    storageClass: "rook-cephfs" #optional, default is rook-cephfs
#--------------------------------------------------------------------------------------------------------------------------------------
###Values for OpenSearch-Dashboards deployment###
##For detailed explanation for each key see: https://github.com/opensearch-project/helm-charts/tree/main/charts/opensearch-dashboards###
openSearchDashboardValues:
  namespace: kubeops
  nodePort: 30050
#--------------------------------------------------------------------------------------------------------------------------------
###Values for OpenSearch deployment###
##For detailed explanation for each key see: https://github.com/opensearch-project/helm-charts/tree/main/charts/opensearch###
openSearchValues:
  namespace: kubeops
  opensearchJavaOpts: "-Xmx512M -Xms512M" # optional, default is -Xmx512M -Xms512M
  resources:
    requests:
      cpu: "250m" # optional, default is 250m
      memory: "1024Mi" # optional, default is 1024Mi
    limits:
      cpu: "300m" # optional, default is 300m
      memory: "3072Mi" # optional, default is 3072Mi
  persistence:
    size: 4Gi # mandatory
    enabled: "true" # optional, default is true
    enableInitChown: "false" # optional, default is false
    labels:
      enabled: "false" # optional, default is false
    storageClass: "rook-cephfs" # optional, default is rook-cephfs
    accessModes:
      - "ReadWriteMany" # optional, default is {ReadWriteMany}
  securityConfig:
    enabled: false # optional, default value: false
    ### Additional values can be set, if securityConfig is enabled:
    # path: "/usr/share/opensearch/plugins/opensearch-security/securityconfig"
    # actionGroupsSecret:
    # configSecret:
    # internalUsersSecret: internal-users-config-secret
    # rolesSecret:
    # rolesMappingSecret:
    # tenantsSecret:
    # config:
    #   securityConfigSecret: ""
    #   dataComplete: true
    #   data: {}
  replicas: "3" # optional, default is 3
#--------------------------------------------------------------------------------------------------------------------------------
###Values for Prometheus deployment###
prometheusValues:
  namespace: kubeops # optional, default is kubeops
  privateRegistry: false # optional, default is false
  grafanaUsername: "user" # optional, default is user
  grafanaPassword: "password" # optional, default is password
  grafanaResources:
    storageClass: "rook-cephfs" # optional, default is rook-cephfs
    storage: 5Gi # optional, default is 5Gi
    nodePort: 30211 # optional, default is 30211

  prometheusResources:
    storageClass: "rook-cephfs" # optional, default is rook-cephfs
    storage: 25Gi # optional, default is 25Gi
    retention: 10d # optional, default is 10d
    retentionSize: "24GB" # optional, default is 24GB
    nodePort: 32090
#--------------------------------------------------------------------------------------------------------------------------------
###Values for OPA deployment###
opaValues:
  namespace: kubeops

#--------------------------------------------------------------------------------------------------------------------------------
###Values for KubeOps-Dashboard (Headlamp) deployment###
kubeOpsDashboardValues:
  service:
    nodePort: 30007
#--------------------------------------------------------------------------------------------------------------------------------
###Values for cert-manager deployment###
certmanValues:
  namespace: kubeops
  replicaCount: 3
  logLevel: 2
  secretName: root-secret 
#--------------------------------------------------------------------------------------------------------------------------------
###Values for ingress-nginx deployment###
ingressValues:
  namespace: kubeops
keycloakValues:
  namespace: "kubeops" # Optional, default is "keycloak"
  storageClass: "rook-cephfs" # Optional, default is "rook-cephfs"
  keycloak:
    auth:
      adminUser: admin # Optional, default is admin
      adminPassword: admin # Optional, default is admin
      existingSecret: "" # Optional, default is ""
  postgresql:
    auth:
      postgresPassword: "" # Optional, default is ""
      username: bn_keycloak # Optional, default is "bn_keycloak"
      password: "" # Optional, default is ""
      database: bitnami_keycloak # Optional, default is "bitnami_keycloak"
      existingSecret: "" # Optional, default is ""
veleroValues:
  namespace: "velero"
  accessKeyId: "your_s3_storage_username"
  secretAccessKey: "your_s3_storage_password"
  useNodeAgent: false
  defaultVolumesToFsBackup: false
  provider: "aws"
  bucket: "velero"
  useVolumeSnapshots: false
  backupLocationConfig:
    region: "minio"
    s3ForcePathStyle: true
    s3Url: "http://minio.velero.svc:9000"

Full yaml syntax for apiVersion kubeops/kubeopsctl/alpha/v3:

apiVersion: kubeops/kubeopsctl/alpha/v3 # mandatory
imagePullRegistry: "registry.preprod.kubernative.net/kubeops"
clusterName: "example" # mandatory
clusterUser: "myuser" # mandatory
kubernetesVersion: "1.28.2" # mandatory
#masterHost: optional if you have an hostname, default value in "masterIP"
masterIP: 10.2.10.11 # mandatory
firewall: "nftables" # mandatory, default "nftables"
pluginNetwork: "calico" # mandatory, default "calico"
containerRuntime: "containerd" # mandatory
clusterOS: "Red Hat Enterprise Linux" # mandatory, must be "Red Hat Enterprise Linux"
### Additional values for cluster configuration
useInsecureRegistry: false # optional, default is false
ignoreFirewallError: false # optional, default is false
serviceSubnet: 192.168.128.0/17 # optional, default "192.168.128.0/17"
podSubnet: 192.168.0.0/17 # optional, default "192.168.0.0/17"
debug: false # optional, default is false
logLevel: vvvvv # optional, default "vvvvv"
systemCpu: "1" # optional, default "1"
systemMemory: "2G" # optional, default "2G"
sudo: true # optional, default is true
tmpCopyDir: "/tmp" # optional, default is /tmp
createCluster: true # optional, default is true
updateRegistry: true # optional, default is true

zones:
  - name: zone1
    nodes:
      master:
        - name: cluster1master1
          ipAdress: 10.2.10.11
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
        - name: cluster1master2
          ipAdress: 10.2.10.12
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
      worker:
        - name: cluster1worker1
          ipAdress: 10.2.10.14
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
        - name: cluster1worker2
          ipAdress: 10.2.10.15
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
  - name: zone2
    nodes:
      master:
        - name: cluster1master3
          ipAdress: 10.2.10.13
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: drained
          kubeversion: 1.28.2  
      worker:
        - name: cluster1worker1
          ipAdress: 10.2.10.16
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2

# set to true if you want to install it into your cluster
rook-ceph: true # mandatory
harbor: true # mandatory
opensearch: true # mandatory
opensearch-dashboards: true # mandatory
logstash: true # mandatory
filebeat: true # mandatory
prometheus: true # mandatory
opa: true # mandatory
headlamp: true # mandatory
certman: true # mandatory
ingress: true # mandatory
keycloak: true # mandatory
velero: true # mandatory

nameSpace: "kubeops" #optional, the default value is different for each application
storageClass: "rook-cephfs" # optional, default value is "rook-cephfs"

###Values for Rook-Ceph###
rookValues:
  namespace: kubeops
  cluster:
    spec:
      dataDirHostPath: "/var/lib/rook" # optional, default is /var/lib/rook
    storage:
      useAllNodes: true # optional, default value: true
      useAllDevices: true # optional, default value: true
      deviceFilter: "^sd[a-b]" # optional, will only be used if useAllDevices is set to false
      config:
        metadataDevice: "sda" # optional, only set this value, if there is a device available
      nodes: # optional if useAllNodes is set to true, otherwise mandatory
        - name: "<ip-adress of node_1>"
          devices:
            - name: "sdb" 
        - name: "<ip-adress of node_2>"
          deviceFilter: "^sd[a-b]"
          config:
            metadataDevice: "sda" # optional
    resources:
      mgr:
        requests:
          cpu: "500m" # optional, default is 500m, limit: 1000m
          memory: "512Mi" # optional, default is 1Gi, limit: 1Gi
      mon:
        requests:
          cpu: "1" # optional, default is 1, limit: 2000m
          memory: "1Gi" # optional, default is 1Gi, limit: 2Gi
      osd:
        requests:
          cpu: "1" # optional, default is 1, limit: 2
          memory: "1Gi" # optional, default is 4Gi, limit: 4Gi
      cephFileSystems:
        requests:
          cpu: "1" # optional, default is 1, limit: 2
          memory: "1Gi" # optional, default is 1, limit: 4Gi
      cephObjectStores:
        requests:
          cpu: "1" # optional, default is 1, limit: 2
          memory: "1Gi" # optional, default is 1Gi, limit: 2Gi
  operator:
    data:
      rookLogLevel: "DEBUG" # optional, default is DEBUG
#-------------------------------------------------------------------------------------------------------------------------------
### Values for Harbor deployment ###
## For detailed explanation for each key see: https://artifacthub.io/packages/helm/harbor/harbor/1.8.1#configuration ##
harborValues: 
  namespace: kubeops # optional, default is kubeops
  harborpass: "password" # mandatory: set password for harbor access
  databasePassword: "Postgres_Password" # mandatory: set password for database access
  redisPassword: "Redis_Password" # mandatory: set password for redis access
  externalURL: http://10.2.10.11:30002 # mandatory, the ip address and port at which harbor is accessible from outside the cluster
  nodePort: 30002 # mandatory
  hostname: harbor.local # mandatory
  harborPersistence:
    persistentVolumeClaim:
      registry:
        size: 40Gi # optional, default is 40Gi
        storageClass: "rook-cephfs" #optional, default is rook-cephfs
      jobservice:
        jobLog:
          size: 1Gi # optional, default is 1Gi
          storageClass: "rook-cephfs" #optional, default is rook-cephfs
      database:
        size: 1Gi # optional, default is 1Gi
        storageClass: "rook-cephfs" #optional, default is rook-cephfs
      redis:
        size: 1Gi # optional, default is 1Gi
        storageClass: "rook-cephfs" #optional, default is rook-cephfs
      trivy: 
        size: 5Gi # optional, default is 5Gi
        storageClass: "rook-cephfs" #optional, default is rook-cephfs
#--------------------------------------------------------------------------------------------------------------------------------------
###Values for filebeat deployment###
filebeatValues:
  namespace: kubeops # optional, default is kubeops   
#--------------------------------------------------------------------------------------------------------------------------------
###Values for Logstash deployment###
##For detailed explanation for each key see: https://github.com/elastic/helm-charts/releases/tag/v7.16.3###
logstashValues:
  namespace: kubeops
  volumeClaimTemplate:
    accessModes: 
      - ReadWriteMany #optional, default is [ReadWriteMany]
    resources:
      requests:
        storage: 1Gi # mandatory, depending on storage capacity
    storageClass: "rook-cephfs" #optional, default is rook-cephfs
#--------------------------------------------------------------------------------------------------------------------------------------
###Values for OpenSearch-Dashboards deployment###
##For detailed explanation for each key see: https://github.com/opensearch-project/helm-charts/tree/main/charts/opensearch-dashboards###
openSearchDashboardValues:
  namespace: kubeops
  nodePort: 30050
#--------------------------------------------------------------------------------------------------------------------------------
###Values for OpenSearch deployment###
##For detailed explanation for each key see: https://github.com/opensearch-project/helm-charts/tree/main/charts/opensearch###
openSearchValues:
  namespace: kubeops
  opensearchJavaOpts: "-Xmx512M -Xms512M" # optional, default is -Xmx512M -Xms512M
  resources:
    requests:
      cpu: "250m" # optional, default is 250m
      memory: "1024Mi" # optional, default is 1024Mi
    limits:
      cpu: "300m" # optional, default is 300m
      memory: "3072Mi" # optional, default is 3072Mi
  persistence:
    size: 4Gi # mandatory
    enabled: "true" # optional, default is true
    enableInitChown: "false" # optional, default is false
    labels:
      enabled: "false" # optional, default is false
    storageClass: "rook-cephfs" # optional, default is rook-cephfs
    accessModes:
      - "ReadWriteMany" # optional, default is {ReadWriteMany}
  securityConfig:
    enabled: false # optional, default value: false
    ### Additional values can be set, if securityConfig is enabled:
    # path: "/usr/share/opensearch/plugins/opensearch-security/securityconfig"
    # actionGroupsSecret:
    # configSecret:
    # internalUsersSecret: internal-users-config-secret
    # rolesSecret:
    # rolesMappingSecret:
    # tenantsSecret:
    # config:
    #   securityConfigSecret: ""
    #   dataComplete: true
    #   data: {}
  replicas: "3" # optional, default is 3
#--------------------------------------------------------------------------------------------------------------------------------
###Values for Prometheus deployment###
prometheusValues:
  namespace: kubeops # optional, default is kubeops
  privateRegistry: false # optional, default is false
  grafanaUsername: "user" # optional, default is user
  grafanaPassword: "password" # optional, default is password
  grafanaResources:
    storageClass: "rook-cephfs" # optional, default is rook-cephfs
    storage: 5Gi # optional, default is 5Gi
    nodePort: 30211 # optional, default is 30211

  prometheusResources:
    storageClass: "rook-cephfs" # optional, default is rook-cephfs
    storage: 25Gi # optional, default is 25Gi
    retention: 10d # optional, default is 10d
    retentionSize: "24GB" # optional, default is 24GB
    nodePort: 32090
#--------------------------------------------------------------------------------------------------------------------------------
###Values for OPA deployment###
opaValues:
  namespace: kubeops

#--------------------------------------------------------------------------------------------------------------------------------
###Values for Headlamp deployment###
headlampValues:
  service:
    nodePort: 30007
#--------------------------------------------------------------------------------------------------------------------------------
###Values for cert-manager deployment###
certmanValues:
  namespace: kubeops
  replicaCount: 3
  logLevel: 2
#--------------------------------------------------------------------------------------------------------------------------------
###Values for ingress-nginx deployment###
ingressValues:
  namespace: kubeops
keycloakValues:
  namespace: "kubeops" # Optional, default is "keycloak"
  storageClass: "rook-cephfs" # Optional, default is "rook-cephfs"
  keycloak:
    auth:
      adminUser: admin # Optional, default is admin
      adminPassword: admin # Optional, default is admin
      existingSecret: "" # Optional, default is ""
  postgresql:
    auth:
      postgresPassword: "" # Optional, default is ""
      username: bn_keycloak # Optional, default is "bn_keycloak"
      password: "" # Optional, default is ""
      database: bitnami_keycloak # Optional, default is "bitnami_keycloak"
      existingSecret: "" # Optional, default is ""
veleroValues:
  namespace: "velero"
  accessKeyId: "your_s3_storage_username"
  secretAccessKey: "your_s3_storage_password"
  useNodeAgent: false
  defaultVolumesToFsBackup: false
  provider: "aws"
  bucket: "velero"
  useVolumeSnapshots: false
  backupLocationConfig:
    region: "minio"
    s3ForcePathStyle: true
    s3Url: "http://minio.velero.svc:9000"

kubeopsctl.yaml in detail

You can find a more detailed description of the individual parameters here

Cluster creation

apiVersion: kubeops/kubeopsctl/alpha/v5  # mandatory
imagePullRegistry: "registry.preprod.kubernative.net/kubeops" # mandatory
localRegistry: false # mandatory
### Values for setup configuration ###
clusterName: "testkubeopsctl"  # mandatory
clusterUser: "myuser"  # mandatory
kubernetesVersion: "1.28.2" # mandatory, check lima documentation
#masterHost: optional if you have an hostname, default value in "masterIP"
masterIP: 10.2.10.11 # mandatory
firewall: "nftables" # mandatory, default "nftables"
pluginNetwork: "calico" # mandatory, default "calico"
containerRuntime: "containerd" # mandatory, default "containerd"
clusterOS: "Red Hat Enterprise Linux" # optional, must be "Red Hat Enterprise Linux"; remove this line to use the OS already installed on the admin machine, which must also be Red Hat Enterprise Linux

These are the parameters for cluster creation and for the software used during cluster creation, e.g. the container runtime that runs the containers of the cluster. There are also parameters for the lima software (see the lima documentation for further explanation).

Network and cluster configuration

### Additional values for cluster configuration
useInsecureRegistry: false # optional, default is false
ignoreFirewallError: false # optional, default is false
serviceSubnet: 192.168.128.0/17 # optional, default "192.168.128.0/17"
podSubnet: 192.168.0.0/17 # optional, default "192.168.0.0/17"
debug: true # optional, default is false
logLevel: vvvvv # optional, default "vvvvv"
systemCpu: "1" # optional, default "1"
systemMemory: "2G" # optional, default "2G"
sudo: true # optional, default is true

Also important are the networking parameters, such as the subnets for the pods and services inside the Kubernetes cluster; a short sketch with explicit subnets is shown below.
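
As an illustration, the two subnet keys can be set explicitly. The CIDRs below are example assumptions, not defaults; whatever values you choose should not overlap with each other or with the node network:

serviceSubnet: 172.16.0.0/16 # example CIDR for cluster service IPs
podSubnet: 172.17.0.0/16 # example CIDR for pod IPs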

Zones

# at least 3 masters and 3 workers are needed
zones:
  - name: zone1
    nodes:
      master:
        - name: cluster1master1
          ipAdress: 10.2.10.11
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
        - name: cluster1master2
          ipAdress: 10.2.10.12
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
      worker:
        - name: cluster1worker1
          ipAdress: 10.2.10.14
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
        - name: cluster1worker2
          ipAdress: 10.2.10.15
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
  - name: zone2
    nodes:
      master:
        - name: cluster1master3
          ipAdress: 10.2.10.13
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2  
      worker:
        - name: cluster1worker1
          ipAdress: 10.2.10.16
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: drained
          kubeversion: 1.28.2

New are the zones, which contain master and worker nodes. A node can be in one of two states: active or drained. Nodes may also run two different Kubernetes versions, so updates can be rolled out in tranches with kubeopsctl; if you want to update in tranches, at least one master must already be on the newer version. You can also set the system memory and system CPU that each node reserves for Kubernetes itself. Deleting nodes is not possible with kubeopsctl; to delete nodes you have to use lima. A sketch for draining a single worker is shown below.
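
For example, to temporarily take one worker out of scheduling you would only change its status and re-apply the file. This is a truncated sketch; in practice the kubeopsctl.yaml still describes the whole cluster, and only the status line of the affected node changes:

zones:
  - name: zone1
    nodes:
      worker:
        - name: cluster1worker2
          ipAdress: 10.2.10.15
          systemCpu: 100m
          systemMemory: 100Mi
          status: drained # was: active
          kubeversion: 1.28.2

The change is rolled out with kubeopsctl apply -f kubeopsctl.yaml.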

Tools

###Values for Rook-Ceph###
rookValues:
  namespace: kubeops
  cluster:
    spec:
      dataDirHostPath: "/var/lib/rook" # optional, default is /var/lib/rook
    storage:
      useAllNodes: true # optional, default value: true
      useAllDevices: true # optional, default value: true
      deviceFilter: "^sd[a-b]" # optional, will only be used if useAllDevices is set to false
      config:
        metadataDevice: "sda" # optional, only set this value, if there is a device available
      nodes: # optional if useAllNodes is set to true, otherwise mandatory
        - name: "<ip-adress of node_1>"
          devices:
            - name: "sdb" 
        - name: "<ip-adress of node_2>"
          deviceFilter: "^sd[a-b]"
          config:
            metadataDevice: "sda" # optional
    resources:
      mgr:
        requests:
          cpu: "500m" # optional, default is 500m, limit: 1000m
          memory: "512Mi" # optional, default is 1Gi, limit: 1Gi
      mon:
        requests:
          cpu: "1" # optional, default is 1, limit: 2000m
          memory: "1Gi" # optional, default is 1Gi, limit: 2Gi
      osd:
        requests:
          cpu: "1" # optional, default is 1, limit: 2
          memory: "1Gi" # optional, default is 4Gi, limit: 4Gi
  operator:
    data:
      rookLogLevel: "DEBUG" # optional, default is DEBUG

| Parameter Name | Description | Possible Values | Default Value | Optional |
|---|---|---|---|---|
| namespace | Namespace for Rook-Ceph | Text | kubeops | No |
| cluster.spec.dataDirHostPath | Data directory on the host | Text | /var/lib/rook | Yes |
| cluster.storage.useAllNodes | Use all nodes | Boolean | true | Yes |
| cluster.storage.useAllDevices | Use all devices | Boolean | true | Yes |
| cluster.storage.deviceFilter | Device filter | Regex | ^sd[a-b] | Yes |
| cluster.storage.config.metadataDevice | Metadata device | Text | sda | Yes |
| cluster.storage.nodes.name | Node name | IP | | No |
| cluster.storage.nodes.devices.name | Device name | Text | sdb | No |
| cluster.storage.nodes.deviceFilter | Device filter | Regex | ^sd[a-b] | Yes |
| cluster.storage.nodes.config.metadataDevice | Metadata device | Text | sda | Yes |
| cluster.resources.mgr.requests.cpu | CPU requests for mgr | Text | 500m | Yes |
| cluster.resources.mgr.requests.memory | Memory requests for mgr | Text | 512Mi | Yes |
| cluster.resources.mon.requests.cpu | CPU requests for mon | Text | 1 | Yes |
| cluster.resources.mon.requests.memory | Memory requests for mon | Text | 1Gi | Yes |
| cluster.resources.osd.requests.cpu | CPU requests for osd | Text | 1 | Yes |
| cluster.resources.osd.requests.memory | Memory requests for osd | Text | 1Gi | Yes |
| operator.data.rookLogLevel | Log level | Text | DEBUG | Yes |
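
If Ceph should not consume every node and disk, a sketch based on the keys above could disable useAllNodes and useAllDevices and list the storage nodes explicitly; the device names and the filter are assumptions about your hardware:

rookValues:
  namespace: kubeops
  cluster:
    storage:
      useAllNodes: false
      useAllDevices: false
      nodes: # mandatory once useAllNodes is false
        - name: "10.2.10.14"
          devices:
            - name: "sdb"
        - name: "10.2.10.15"
          deviceFilter: "^sd[a-b]"
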
### Values for Harbor deployment ###
## For detailed explanation for each key see: https://artifacthub.io/packages/helm/harbor/harbor/1.8.1#configuration ##
harborValues: 
  namespace: kubeops # optional, default is kubeops
  harborpass: "password" # mandatory: set password for harbor access
  databasePassword: "Postgres_Password" # mandatory: set password for database access
  redisPassword: "Redis_Password" # mandatory: set password for redis access
  externalURL: http://10.2.10.11:30002 # mandatory, the ip address and port at which harbor is accessible from outside the cluster
  nodePort: 30002 # mandatory
  hostname: harbor.local # mandatory
  harborPersistence:
    persistentVolumeClaim:
      registry:
        size: 40Gi # optional, default is 40Gi
        storageClass: "rook-cephfs" #optional, default is rook-cephfs
      jobservice:
        jobLog:
          size: 1Gi # optional, default is 1Gi
          storageClass: "rook-cephfs" #optional, default is rook-cephfs
      database:
        size: 1Gi # optional, default is 1Gi
        storageClass: "rook-cephfs" #optional, default is rook-cephfs
      redis:
        size: 1Gi # optional, default is 1Gi
        storageClass: "rook-cephfs" #optional, default is rook-cephfs
      trivy: 
        size: 5Gi # optional, default is 5Gi
        storageClass: "rook-cephfs" #optional, default is rook-cephfs

| Parameter Name | Description | Possible Values | Default Value | Optional |
|---|---|---|---|---|
| namespace | Namespace for Harbor | Text | kubeops | Yes |
| harborpass | Password for Harbor access | Text | | No |
| databasePassword | Password for database access | Text | | No |
| redisPassword | Password for Redis access | Text | | No |
| externalURL | External URL for Harbor access | URL | http://10.2.10.11:30002 | No |
| nodePort | NodePort for Harbor | Number | 30002 | No |
| hostname | Hostname for Harbor | Text | harbor.local | No |
| harborPersistence.persistentVolumeClaim.registry.size | Storage size for registry | Text | 40Gi | No |
| harborPersistence.persistentVolumeClaim.registry.storageClass | Storage class for registry | Text | rook-cephfs | Yes |
| harborPersistence.persistentVolumeClaim.jobservice.jobLog.size | Storage size for job logs | Text | 1Gi | No |
| harborPersistence.persistentVolumeClaim.jobservice.jobLog.storageClass | Storage class for job logs | Text | rook-cephfs | Yes |
| harborPersistence.persistentVolumeClaim.database.size | Storage size for database | Text | 1Gi | No |
| harborPersistence.persistentVolumeClaim.database.storageClass | Storage class for database | Text | rook-cephfs | Yes |
| harborPersistence.persistentVolumeClaim.redis.size | Storage size for Redis | Text | 1Gi | No |
| harborPersistence.persistentVolumeClaim.redis.storageClass | Storage class for Redis | Text | rook-cephfs | Yes |
| harborPersistence.persistentVolumeClaim.trivy.size | Storage size for Trivy | Text | 5Gi | No |
| harborPersistence.persistentVolumeClaim.trivy.storageClass | Storage class for Trivy | Text | rook-cephfs | Yes |
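
Once Harbor is up, a quick way to verify access is to log in against the configured externalURL. This sketch assumes Harbor's built-in admin account and that your container runtime is configured to trust the insecure HTTP endpoint:

docker login 10.2.10.11:30002 -u admin -p "password"
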
###Values for filebeat deployment###
filebeatValues:
  namespace: kubeops # optional, default is kubeops   

| Parameter Name | Description | Possible Values | Default Value | Optional |
|---|---|---|---|---|
| namespace | Namespace for Filebeat | Text | kubeops | Yes |

###Values for Logstash deployment###
##For detailed explanation for each key see: https://github.com/elastic/helm-charts/releases/tag/v7.16.3###
logstashValues:
  namespace: kubeops
  volumeClaimTemplate:
    accessModes: 
      - ReadWriteMany #optional, default is [ReadWriteMany]
    resources:
      requests:
        storage: 1Gi # mandatory, depending on storage capacity
    storageClass: "rook-cephfs" #optional, default is rook-cephfs

| Parameter Name | Description | Possible Values | Default Value | Optional |
|---|---|---|---|---|
| namespace | Namespace for Logstash | Text | kubeops | No |
| volumeClaimTemplate.accessModes | Access modes for volume claim | List of Texts | [ReadWriteMany] | Yes |
| volumeClaimTemplate.resources.requests.storage | Storage requests for volume claim | Text | 1Gi | No |
| volumeClaimTemplate.storageClass | Storage class for volume claim | Text | rook-cephfs | Yes |

###Values for OpenSearch-Dashboards deployment###
##For detailed explanation for each key see: https://github.com/opensearch-project/helm-charts/tree/main/charts/opensearch-dashboards###
openSearchDashboardValues:
  namespace: kubeops
  nodePort: 30050

| Parameter Name | Description | Possible Values | Default Value | Optional |
|---|---|---|---|---|
| namespace | Namespace for OpenSearch-Dashboards | Text | kubeops | No |
| nodePort | NodePort for OpenSearch-Dashboards | Number | 30050 | No |

###Values for OpenSearch deployment###
##For detailed explanation for each key see: https://github.com/opensearch-project/helm-charts/tree/main/charts/opensearch###
openSearchValues:
  namespace: kubeops
  opensearchJavaOpts: "-Xmx512M -Xms512M" # optional, default is -Xmx512M -Xms512M
  resources:
    requests:
      cpu: "250m" # optional, default is 250m
      memory: "1024Mi" # optional, default is 1024Mi
    limits:
      cpu: "300m" # optional, default is 300m
      memory: "3072Mi" # optional, default is 3072Mi
  persistence:
    size: 4Gi # mandatory
    enabled: "true" # optional, default is true
    enableInitChown: "false" # optional, default is false
    labels:
      enabled: "false" # optional, default is false
    storageClass: "rook-cephfs" # optional, default is rook-cephfs
    accessModes:
      - "ReadWriteMany" # optional, default is {ReadWriteMany}
  securityConfig:
    enabled: false # optional, default value: false
    ### Additional values can be set, if securityConfig is enabled:
    # path: "/usr/share/opensearch/plugins/opensearch-security/securityconfig"
    # actionGroupsSecret:
    # configSecret:
    # internalUsersSecret: internal-users-config-secret
    # rolesSecret:
    # rolesMappingSecret:
    # tenantsSecret:
    # config:
    #   securityConfigSecret: ""
    #   dataComplete: true
    #   data: {}
  replicas: "3" # optional, default is 3

| Parameter Name | Description | Possible Values | Default Value | Optional |
|---|---|---|---|---|
| namespace | Namespace for OpenSearch | Text | kubeops | No |
| opensearchJavaOpts | Java options for OpenSearch | Text | -Xmx512M -Xms512M | Yes |
| resources.requests.cpu | CPU requests | Text | 250m | Yes |
| resources.requests.memory | Memory requests | Text | 1024Mi | Yes |
| resources.limits.cpu | CPU limits | Text | 300m | Yes |
| resources.limits.memory | Memory limits | Text | 3072Mi | Yes |
| persistence.size | Storage size for persistent volume | Text | 4Gi | No |
| persistence.enabled | Enable persistent volume | Boolean | true | Yes |
| persistence.enableInitChown | Enable initial chown for persistent volume | Boolean | false | Yes |
| persistence.labels.enabled | Enable labels for persistent volume | Boolean | false | Yes |
| persistence.storageClass | Storage class for persistent volume | Text | rook-cephfs | Yes |
| persistence.accessModes | Access modes for persistent volume | List of Texts | [ReadWriteMany] | Yes |
| securityConfig.enabled | Enable security configuration | Boolean | false | Yes |
| replicas | Number of replicas | Number | 3 | Yes |

###Values for Prometheus deployment###
prometheusValues:
  namespace: kubeops # optional, default is kubeops
  privateRegistry: false # optional, default is false
  grafanaUsername: "user" # optional, default is user
  grafanaPassword: "password" # optional, default is password
  grafanaResources:
    storageClass: "rook-cephfs" # optional, default is rook-cephfs
    storage: 5Gi # optional, default is 5Gi
    nodePort: 30211 # optional, default is 30211

  prometheusResources:
    storageClass: "rook-cephfs" # optional, default is rook-cephfs
    storage: 25Gi # optional, default is 25Gi
    retention: 10d # optional, default is 10d
    retentionSize: "24GB" # optional, default is 24GB
    nodePort: 32090

| Parameter Name | Description | Possible Values | Default Value | Optional |
|---|---|---|---|---|
| namespace | Namespace for Prometheus | Text | kubeops | Yes |
| privateRegistry | Use private registry | Boolean | false | Yes |
| grafanaUsername | Username for Grafana | Text | user | Yes |
| grafanaPassword | Password for Grafana | Text | password | Yes |
| grafanaResources.storageClass | Storage class for Grafana | Text | rook-cephfs | Yes |
| grafanaResources.storage | Storage size for Grafana | Text | 5Gi | Yes |
| grafanaResources.nodePort | NodePort for Grafana | Number | 30211 | Yes |
| prometheusResources.storageClass | Storage class for Prometheus | Text | rook-cephfs | Yes |
| prometheusResources.storage | Storage size for Prometheus | Text | 25Gi | Yes |
| prometheusResources.retention | Retention period for Prometheus | Text | 10d | Yes |
| prometheusResources.retentionSize | Retention size for Prometheus | Text | 24GB | Yes |
| prometheusResources.nodePort | NodePort for Prometheus | Number | 32090 | No |
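
After the deployment, Grafana should respond on the configured NodePort. A minimal smoke test, assuming the master IP from the examples above and plain HTTP on the NodePort:

curl -u user:password http://10.2.10.11:30211/api/health
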
###Values for OPA deployment###
opaValues:
  namespace: kubeops

| Parameter Name | Description | Possible Values | Default Value | Optional |
|---|---|---|---|---|
| namespace | Namespace for OPA | Text | kubeops | No |

###Values for KubeOps-Dashboard (Headlamp) deployment###
kubeOpsDashboardValues:
  service:
    nodePort: 30007

| Parameter Name | Description | Possible Values | Default Value | Optional |
|---|---|---|---|---|
| service.nodePort | NodePort for KubeOps-Dashboard | Number | 30007 | No |

###Values for cert-manager deployment###
certmanValues:
  namespace: kubeops
  replicaCount: 3
  logLevel: 2
  secretName: root-secret 

| Parameter Name | Description | Possible Values | Default Value | Optional |
|---|---|---|---|---|
| namespace | Namespace for Cert-Manager | Text | kubeops | No |
| replicaCount | Number of replicas | Number | 3 | No |
| logLevel | Log level | Number | 2 | No |
| secretName | Name of the secret | Text | root-secret | No |

###Values for ingress-nginx deployment###
ingressValues:
  namespace: kubeops

| Parameter Name | Description | Possible Values | Default Value | Optional |
|---|---|---|---|---|
| namespace | Namespace for Ingress-Nginx | Text | kubeops | No |

keycloakValues:
  namespace: "kubeops" # Optional, default is "keycloak"
  storageClass: "rook-cephfs" # Optional, default is "rook-cephfs"
  keycloak:
    auth:
      adminUser: admin # Optional, default is admin
      adminPassword: admin # Optional, default is admin
      existingSecret: "" # Optional, default is ""
  postgresql:
    auth:
      postgresPassword: "" # Optional, default is ""
      username: bn_keycloak # Optional, default is "bn_keycloak"
      password: "" # Optional, default is ""
      database: bitnami_keycloak # Optional, default is "bitnami_keycloak"
      existingSecret: "" # Optional, default is ""

| Parameter Name | Description | Possible Values | Default Value | Optional |
|---|---|---|---|---|
| namespace | Namespace for Keycloak | Text | kubeops | Yes |
| storageClass | Storage class for Keycloak | Text | rook-cephfs | Yes |
| keycloak.auth.adminUser | Admin username | Text | admin | Yes |
| keycloak.auth.adminPassword | Admin password | Text | admin | Yes |
| keycloak.auth.existingSecret | Existing secret | Text | | Yes |
| postgresql.auth.postgresPassword | Password for Postgres DB | Text | | Yes |
| postgresql.auth.username | Username for Postgres | Text | bn_keycloak | Yes |
| postgresql.auth.password | Password for Postgres | Text | | Yes |
| postgresql.auth.database | Database name | Text | bitnami_keycloak | Yes |
| postgresql.auth.existingSecret | Existing secret for Postgres | Text | | Yes |

veleroValues:
  namespace: "velero"
  accessKeyId: "your_s3_storage_username"
  secretAccessKey: "your_s3_storage_password"
  useNodeAgent: false
  defaultVolumesToFsBackup: false
  provider: "aws"
  bucket: "velero"
  useVolumeSnapshots: false
  backupLocationConfig:
    region: "minio"
    s3ForcePathStyle: true
    s3Url: "http://minio.velero.svc:9000"

| Parameter Name | Description | Possible Values | Default Value | Optional |
|---|---|---|---|---|
| namespace | Namespace for Velero | Text | velero | No |
| accessKeyId | Access key ID | Text | | No |
| secretAccessKey | Secret access key | Text | | No |
| useNodeAgent | Use node agent | Boolean | false | No |
| defaultVolumesToFsBackup | Use default volumes to FS backup | Boolean | false | No |
| provider | Provider | Text | aws | No |
| bucket | Bucket name | Text | velero | No |
| useVolumeSnapshots | Use volume snapshots | Boolean | false | No |
| backupLocationConfig.region | Region for backup location | Text | minio | No |
| backupLocationConfig.s3ForcePathStyle | Enforce S3 path style | Boolean | true | No |
| backupLocationConfig.s3Url | S3 URL for backup location | URL | http://minio.velero.svc:9000 | No |

Overwrite platform parameters

New with KubeOps 1.6.0 (apiVersion kubeops/kubeopsctl/alpha/v5)
As a user, you can overwrite platform parameters by changing the advanced parameters (e.g. keycloak, velero, harbor, prometheus, etc) to include your desired values. An exception to this is rook-ceph, which has two advancedParameters: one for the rook-operator Helm chart and another for the rook-cluster Helm chart.

...
veleroValues:
  s3ForcePathStyle: false #default would be true
  advancedParameters: 

keycloakValues:
  namespace: "kubeops" #default is "keycloak"
  advancedParameters:

harborValues: 
  namespace: harbor 
  advancedParameters:

prometheusValues: 
  privateRegistry: true
  advancedParameters:

rookValues:
  cluster:
    advancedParameters:
      dataDirHostPath: "/var/lib/rook" # Default is "/var/lib/rook"
  operator:
    advancedParameters:
      resources:
        limits:
          cpu: "500m" # Default is "100m"
  

...

Overwriting of list elements

If you want to overwrite parameters inside a list, you need to overwrite the whole list element. Take the rook Helm package as an example:

...
cephBlockPools:
  - name: replicapool
    # see https://github.com/rook/rook/blob/master/Documentation/CRDs/Block-Storage/ceph-block-pool-crd.md#spec for available configuration
    spec:
      failureDomain: host
      replicated:
        size: 3
      # Enables collecting RBD per-image IO statistics by enabling dynamic OSD performance counters. Defaults to false.
      # For reference: https://docs.ceph.com/docs/master/mgr/prometheus/#rbd-io-statistics
      # enableRBDStats: true
    storageClass:
      enabled: true
      name: rook-ceph-block
      isDefault: true
      reclaimPolicy: Retain
      allowVolumeExpansion: true
      volumeBindingMode: "Immediate"
...

If we wanted to change the replica count of the replicapool in the Ceph block pools (under spec.replicated.size), we would also need to repeat the storageClass and the other specs of the replicapool, because list elements can only be overwritten as a whole and not merged. If we only set the replica size in the advanced parameters, the failureDomain and every other omitted field would be empty.
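
As a minimal sketch (assuming the cephBlockPools list of the rook-cluster Helm chart is overwritten under rookValues.cluster.advancedParameters, as in the example above), changing the replica size to 2 would therefore look like this, with the rest of the list element repeated unchanged:

...
rookValues:
  cluster:
    advancedParameters:
      cephBlockPools:
        - name: replicapool
          spec:
            failureDomain: host
            replicated:
              size: 2 # changed value; all other fields are repeated from the defaults shown above
          storageClass:
            enabled: true
            name: rook-ceph-block
            isDefault: true
            reclaimPolicy: Retain
            allowVolumeExpansion: true
            volumeBindingMode: "Immediate"
...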

3.4 - KubeOps Version

KubeOps Version

Here is the list of KubeOps versions and the tool versions they support. Make sure to install or upgrade according to supported versions only.

KubeOps Supported KOSI Version Supported LIMA Version Supported kubeopsctl Version Deprecation Date
KubeOps 1.7.0 KOSI 2.12.X LIMA 1.7.X kubeopsctl 1.7.X 01.10.2026
KubeOps 1.6.0 KOSI 2.11.X LIMA 1.6.X kubeopsctl 1.6.X 01.11.2025

KubeOps 1.7.0_Beta0 supports

Tools Supported App Version Chart Version Supported Package Version SHA256 Checksum
calicomultus V 0.0.5 kubeops/calicomultus:0.0.5 6e2dfd8135160fce5643f4b2b71de6e6af47925fcbaf38340704d25a79c39c09
cert-manager V 1.15.3 V 1.15.3 kubeops/cert-manager:1.7.0_Beta0 4185b79d5d09a7cbbd6779c9e55d71ab99d6f2e46f3aed8abb2b97ba8aa586e4
clustercreate V 1.7.0_Beta0 kubeops/clustercreate:1.7.0_Beta0 299f7ffe125e2ca8db1c8f1aacfc3b5783271906f56c78c5fc5a6e5730ca83e5
fileBeat V 8.5.1 8.5.1 kubeops/filebeat-os:1.7.0_Beta0 d31d63fc1a5e086199c0b465dca0a8756c22c88800c4d1a82af7b8b9f108ce63
harbor V 2.12.0 1.16.0 kubeops/harbor:1.7.0_Beta0 38ea742f5c40bd59c217777f0707469c78353acb859296ae5c5f0fbac129fc32
helm V 3.14.4 kubeops/helm:1.7.0_Beta0 a9e4704fdb1b60791c0ff91851a2c69ed31782c865650f66c6a3f6ab96852568
ingress-nginx V 1.11.5 4.11.5 kubeops/ingress-nginx:1.7.0_Beta0 0b967f3a34fea7a12b86bc226599e32adb305371f1ab5368570ebb5fbc0021c6
keycloak V 1.16.0 1.0.0 kubeops/keycloak:1.7.0_Beta0 3aec97cbbb559954a038a2212e1a52a9720e47c4ba0d8088fe47b000f42c469a
kubeops-dashboard V 0.26.0 0.26.0 kubeops/kubeops-dashboard:1.7.0_Beta0 a777e3b9568cfc60d7a9adef8f81f2345b979c545b566317ed0bd8ed0cf9faf3
prometheus V 0.76.1 62.7.0 kubeops/kube-prometheus-stack:1.7.0_Beta0 9210e5ef28babfed186b47043e95e6014dd3eadcdb1dbd521a5903190ecd7062
logstash V 8.4.0 8.5.1 kubeops/logstash-os:1.7.0_Beta0 88d144bf4a8bdd0a78c30694aa723fd059cabfed246420251f0c35a22ba6b35f
opa-gatekeeper V 3.17.1 v3.17.1 kubeops/opa-gatekeeper:1.7.0_Beta0 fb72ae157ece5f8b41716d6d1fe95e1a574ca7d5a9196c071f2c83bbd3faebe7
opensearch-dashboards V 2.19.1 2.28.0 kubeops/opensearch-dashboards:1.7.0_Beta0 d4698f3252e14b12c68befb4cd3e0d6ac1f87b121f00a467722e878172effbad
opensearch V 2.19.1 2.32.0 kubeops/opensearch-os:1.7.0_Beta0 80583692c4010b0df3ff4f02c933ce1ebd792b54e3c4e4c9d3713c2730a9e02c
rook v1.15.6 cluster/v1.15.6 operator v1.15.6 cluster/v1.15.6 operator kubeops/rook-ceph:1.7.0_Beta0 5720f56cde2eb37ef2b74ee0e5dc764555157503554936cc78e03514779ad2fd
setup V 1.7.0_Beta0 kubeops/setup:1.7.0_Beta0 a6061c2795a52a772895c53ec334b595475da025b41d4cc14c6930d7d7cff018
velero V 1.13.2 6.4.0 kubeops/velero:1.7.0_Beta0 a826c2b2189f9e0f60fcf571d7230cd079ebc2f1e7a9594a9f310ec530ea64a8

Kubeops 1.6.8 supports

Tool App Version Chart Version Supported Package Version SHA256 Checksum
harbor v2.12.1 1.16.0 kubeops/harbor 38a1471919eb95e69196b3c34aa682cdc471177e06680fc4ccda0c6394e83561
cert-manager v1.15.3 v1.15.3 kubeops/cert-manager 9b17986c8cb2cb18e276167dc63d3fe4a2a83d7e02c0fb7463676954d626dc88
filebeat v8.5.1 8.5.1 kubeops/filebeat c888885c3001d5ecac8c6fe25f2c09a3352427153dc38994d3447d4a2b7fee2b
ingress-nginx v1.11.5 4.11.5 kubeops/ingress-nginx 664eb9b7dfba4a7516fc9fb68382f4ceaa591950fde7f9d8db6d82f2be802f3f
keycloak v22.0.1 16.0.5 kubeops/keycloak 469edff4c01f2dcd8339fe3debc23d8425cf8f86bedb91401dc6c18d9436396c
kubeops-dashboard v0.26.0 0.26.0 kubeops/kubeops-dashboard 0429b5dfe0dbf1242c6b6e9da08565578c7008a339cb6aec950f7519b66bcd1d
logstash v8.4.0 8.5.1 kubeops/logstash 6586d68ed7f858722796d7c623f1701339efc11eddb71d8b02985bb643fdec2f
gatekeeper v3.17.1 3.17.1 kubeops/gatekeeper 42bee78b7bb056e354c265384a0fdc36dc7999278ce70531219efe7b8b0759e6
prometheus v0.76.1 62.7.0 kubeops/prometheus 987227f99dc8f57fa9ac1d5407af2d82d58ec57510ca91d540ebc0d5e0f011bc
rook-ceph v1.15.6 v1.15.6 kubeops/rook-ceph 9dd9a5e96ccf2a7ebd1cb737ee4669fbdadee240f5071a3c2a993be1929b0905
velero v1.13.2 6.4.0 kubeops/velero b53948b2565c60f434dffa1dba3fc21b679c5b08308a2dde421125a4b81616cc
opensearch v2.19.1 2.32.0 kubeops/opensearch 0e52bd9818be03c457d09132bd3c1a6790482bb7141f08987dc3bbf678d193bb
opensearch-dashboards v2.19.1 2.28.0 kubeops/opensearch-dashboards 137dd6c80ed753a4e4637c51039788f184bdee6afb2626fddb2937aea919cbd8
clustercreate v1.6.8 kubeops/clustercreate:1.6.8 032e67d4799ea8d56424a0173012d301118291ab3cfdd561657f2225d6446e8e
setup v1.6.8 kubeops/setup:1.6.8 bd9c5e71dc4564bede85d27da39e5d31e286624be9efbd1e662ecc52bb8b136b
helm v3.14.4 kubeops/helm:1.6.8 f3909a4ac8c7051cc4893a587806681bc55abdbf9a3241dc3c29c165081bc7b0
calicomultus V 0.0.5 kubeops/calicomultus:1.6.8 b3d249483c10cbd226093978e4d760553f2b2bf7e4a5d8176b56af2808e70aa1

Kubeops 1.6.7 supports

Tools Supported App Version Chart Version Supported Package Version SHA256 Checksum
calicomultus V 0.0.5 lima/calicomultus:0.0.5 18d458d6bda62efb37f6e07378bb90a8cee824286749c42154815fae01a10b62
cert-manager V 1.15.3 V 1.15.3 kubeops/cert-manager:1.6.7 39c2b7fb490dd5e3ad8b8a03ec6287a6d02dd518b86efd83b3a1e815fd641c98
clustercreate V 1.6.7 kubeops/clustercreate:1.6.7 e83bd6e24dd5762e698d83d6e9e29480deda8bff693af7d835c83ba2e88ae3c2
fileBeat V 8.5.1 8.5.1 kubeops/filebeat-os:1.6.7 64c4922429702e39fa8eed54856525161150c7c4c6b5328a2ac90ce56588fd71
harbor V 2.12.0 1.16.0 kubeops/harbor:1.6.7 769e914a9f02a6ca78ec03418895e67c058f851ce24560474640c60dab4c730a
helm V 3.14.4 kubeops/helm:1.6.7 c970844547cde59195bc1c5b4f17521597b9af1012f112c24185169492d59213
ingress-nginx V 1.11.5 4.11.5 kubeops/ingress-nginx:1.6.7 deaf25204437c2812b459c9e2b68ae83bc5343a57ac2ab87d9d8dd4b3d06039d
keycloak V 22.0.1 16.0.5 kubeops/keycloak:1.6.7 3829d879e3098b14f16709f97b579bb9446ff2984553b72bba39b238aaaf332a
kubeops-dashboard V 0.26.0 0.26.0 kubeops/kubeops-dashboard:1.6.7 f237297adb8b01b7ad7344321d69928273c7e1a7a342634401d71205297a90dd
prometheus V 0.76.1 62.7.0 kubeops/kube-prometheus-stack:1.6.7 832b5019fe6f8d3949d768e98b182bcb84d05019ca854c08518313875ab4eedb
logstash V 8.4.0 8.5.1 kubeops/logstash-os:1.6.7 955f38c63dc5d4a3a67450c85e262a2884711910cfd1ee56860279a07f5ef833
opa-gatekeeper V 3.17.1 v3.17.1 kubeops/opa-gatekeeper:1.6.7 3257e829cc4829c190a069b2a6409ea32ed1a38031f45b8c880eb69b85173c64
opensearch-dashboards V 2.16.0 2.22.0 kubeops/opensearch-dashboards:1.6.7 8c8e4dca83591ef1ff8b23d94646d0098c2c575e193f6baf746e64a03aface05
opensearch V 2.16.0 2.23.1 kubeops/opensearch-os:1.6.7 fb80f291318a6c00696a0a8775c571dea3ed7a2bec1b8d3394c07081b2409605
rook v1.15.6 cluster/v1.15.6 operator v1.15.1 cluster/v1.15.1 operator kubeops/rook-ceph:1.6.7 7198a6b33e677399ad90a2c780a7bf8af96e00de5ed46eef8215f6626645f06f
setup V 1.6.7 kubeops/setup:1.6.7 2de24b9e24913e5f3966069de200644ae44b7777c7f94793a6f059f112649ea5
velero V 1.13.2 6.4.0 kubeops/velero:1.6.7 0ab6465bd5e8e422d06ce1fc2bd1d620e36bdedbc2105bc45e9a80d9f9e71e0d

Kubeops 1.6.6 supports

Tools Supported App Version Chart Version Supported Package Version SHA256 Checksum
calicomultus V 0.0.4 lima/calicomultus:0.0.4 0e32dcea34fe707248cf2348f0ed422c278574594e440db8ceac8190e3984dac
cert-manager V 1.15.3 1.15.3 kubeops/cert-manager:1.6.6 63e2ef627351619ab9813a684106dc19b187c63d643b68096889d8e0abf0640b
clustercreate V 1.6.6 kubeops/clustercreate:1.6.6 dc334cf0cede9352069e775c0ed4df606f468340f257c4fa5687db7a828906c9
fileBeat V 8.5.1 8.5.1 kubeops/filebeat-os:1.6.6 09ad9bf914b7d694619be3195b43c51157234f8bb5b5b24adfe399298f47e495
harbor V 2.12.0 1.16.0 kubeops/harbor:1.6.6 4f2a1112234edb3cf69ec4447b96bbc81593f61327ac93f6576ebe0ab1ee4d9b
helm V 3.14.4 kubeops/helm:1.6.6 9ea60096ce6faa4654b8eced71c27e733fa791bacfc40095dfc907fd9a7d5b46
ingress-nginx V 1.10.0 4.10.0 kubeops/ingress-nginx:1.6.6 f518a5d909697b0275b4515dc1bc49a411b54992db469319e070809e8bbffd9e
keycloak V 22.0.1 16.0.5 kubeops/keycloak:1.6.6 3d2781a454f0cbe98c611e42910fb0e199db1dec79ac970c08ed4e9735581c4c
kubeops-dashboard V 0.26.0 0.26.0 kubeops/kubeops-dashboard:1.6.6 d21106e44b52f30cb23cb01bf2217662d7b393fd11046cbcc4e9ff165a725c1b
prometheus V 0.76.1 62.7.0 kubeops/kube-prometheus-stack:1.6.6 ada9d5a69b8c277c2c9037097e6a994d6c20ff794b51a65c30bdf480cfb23e52
logstash V 8.4.0 8.5.1 kubeops/logstash-os:1.6.6 89b8ae46c9bbc90c9d96e45a801c580608d24e4eaef2adadadb646a3012ece72
opa-gatekeeper V 3.17.1 3.17.1 kubeops/opa-gatekeeper:1.6.6 a9c5423fdfabf456fa18b9808b9e9c9ee9428d5f5c4035810b9dbc3bfb838e4c
opensearch-dashboards V 2.16.0 2.22.0 kubeops/opensearch-dashboards:1.6.6 2f8f66e6e321b773fcd5fb66014600b4f9cffda4bcea9f9451802274561f3ff4
opensearch V 2.16.0 2.23.1 kubeops/opensearch-os:1.6.6 8ab9d1d09398083679a3233aaf73f1a664bd7162e99a1aef51b716cd8daa3e55
rook v1.15.6 cluster/v1.15.6 operator v1.15.1 cluster/v1.15.1 operator kubeops/rook-ceph:1.6.6 14b8cb259d6a0bb73ac576de7a07ed76499b43551a3d8a44b76eea181013080e
setup V 1.6.6 kubeops/setup:1.6.6 92e392f170edb2edc5c92e265e9d92a4d6db5c6226f4603b33cece7361928089
velero V 1.13.2 6.4.0 kubeops/velero:1.6.6 98bde279f5a8b589a5234d63fba900235b07060c6554e9f822d41b072ddbd2f9

KubeOps 1.6.5 supports

Tools Supported App Version Chart Version Supported Package Version SHA256 Checksum
calicomultus V 0.0.4 lima/calicomultus:0.0.4 0e32dcea34fe707248cf2348f0ed422c278574594e440db8ceac8190e3984dac
cert-manager V 1.15.3 V 1.15.3 kubeops/cert-manager:1.6.5 a66cfcf7849f745033fc8d6140d7b1ebbccb013739b37e26d9eb6dd22e0fb973
clustercreate V 1.6.5 kubeops/clustercreate:1.6.5 a577edf4ea90710d041d31f434c4114d8efb4d6d9140ce39ca3b651f637b7147
fileBeat V 8.5.1 8.5.1 kubeops/filebeat-os:1.6.5 70c2fceba0bc5e3d4dc0b56ad5cae769f79dc439915b0757b9b041244582b923
harbor V 2.12.0 1.16.0 kubeops/harbor:1.6.5 9d3283235cf41073d1ade638218d8036cb35764473edc2a6d3046ca7b5435228
helm V 3.14.4 kubeops/helm:1.6.5 d1c67bc9084d647217ee57f2e9fd4df3cbeb50d771961423c9e8246651910daa
ingress-nginx V 1.10.0 4.10.0 kubeops/ingress-nginx:1.6.5 9453b739e927f36cebe17b4da8f08f843693a52b049a358612aab82f8d1cc659
keycloak V 22.0.1 16.0.5 kubeops/keycloak:1.6.5 df652caa301d5171a7a3ae1ae8191790ef9f0af6de2750edbf2629b9022ccb3b
kubeops-dashboard V 0.26.0 0.26.0 kubeops/kubeops-dashboard:1.6.5 ccbb8721aa9a5c60661726feac3b3fd63d6711875b3a8e816b1cbdc68c51f530
prometheus V 0.76.1 62.7.0 kubeops/kube-prometheus-stack:1.6.5 c3544bd6ddbac3c9ac58b3445c8a868a979ce669d1096dcdafa9842b35edd2d7
logstash V 8.4.0 8.5.1 kubeops/logstash-os:1.6.5 2ec4835b326afce0cdb01d15bbe84dabfe988dab295b6d12114383dd528b7807
opa-gatekeeper V 3.17.1 v3.17.1 kubeops/opa-gatekeeper:1.6.5 f5e6d871c12d463430aacd5adfd9fbc728a3dbf684424c002de1ae8d0b4df014
opensearch-dashboards V 2.22.0 2.16.0 kubeops/opensearch-dashboards:1.6.5 a28a3b2161b276385062072fa05ac9cd34447e207c701b0700c78f5e828ec133
opensearch V 2.23.1 2.16.0 kubeops/opensearch-os:1.6.5 e8bf63cbbb452e3e5cf7e62e3c6324e7dad31d1713e306c3847770a7ef67ca3a
rook v1.15.6 cluster/v1.15.6 operator v1.15.6 cluster/v1.15.6 operator kubeops/rook-ceph:1.6.5 a8ee95eaca0705f95884d54a188fa97e5c9080601dc3722a16a80a1599783caa
setup V 1.6.5 kubeops/setup:1.6.5 1ac0ab68976e4c6e1abd867a8d3a58391b2b0fddd24ba1aefbed6b0f5d60b9ab
velero V 1.13.2 6.4.0 kubeops/velero:1.6.5 7c224434008d856b9fe0275ac0c528f865fb9a549a46bbeb90a34221a4d8c187

KubeOps 1.6.4 supports

Tools Supported App Version Chart Version Supported Package Version SHA256 Checksum
calicomultus V 0.0.4 lima/calicomultus:0.0.4 0e32dcea34fe707248cf2348f0ed422c278574594e440db8ceac8190e3984dac
cert-manager V 1.15.3 V 1.15.3 kubeops/cert-manager:1.6.4 61fea41a31cdb7fefb2f4046c9c94ef08dc57523c0b8516ebc11f278b3d79b37
clustercreate V 1.6.4 kubeops/clustercreate:1.6.4 b9a0c9eefeebc6057abcecc7cd6e53956baf28614d48141e1530ae6a4f433f2b
fileBeat V 8.5.1 8.5.1 kubeops/filebeat-os:1.6.4 23dd2674b3447b3459c8a2b65f550726b6c97ca588d5c7259eb788bec6e4d317
harbor V 2.11.1 1.15.1 kubeops/harbor:1.6.4 b794a6504769abff5b4ebba7c6384f83409c8d7d8d7687e3e49eec8a31e1a192
helm V 3.14.4 kubeops/helm:1.6.4 1309b1cefb132152cd6900954b6b68cce6ce3b1c9e878fc925d8ef0439eee5f1
ingress-nginx V 1.10.0 4.10.0 kubeops/ingress-nginx:1.6.4 24214c2e96cf949073ba2e132a57c03096f36f5920a6938656bd159242ce8ec2
keycloak V 22.0.1 16.0.5 kubeops/keycloak:1.6.4 835d63c0d905dca14ee1aa5bc830e4cb3506c948d1c076317993d2e1a8b083ba
kubeops-dashboard V 0.26.0 0.26.0 kubeops/kubeops-dashboard:1.6.4 0ebf8ef4d2bf01bc5257c0bf5048db7e785743461ce1969847de0c9605562ef4
prometheus V 0.76.1 62.7.0 kubeops/kube-prometheus-stack:1.6.4 59eb96a2f09fa8b632d4958215fd45a82df3c0f7697281ea63e54f49d4a81601
logstash V 8.4.0 8.5.1 kubeops/logstash-os:1.6.4 3571e2ed554c8bd57272fa8e2de85e26e67a7dbf62d8622c5d796d5e3f8b6cf5
opa-gatekeeper V 3.17.1 v3.17.1 kubeops/opa-gatekeeper:1.6.4 f5ce384bd332f3b6ffccd09b5824e92976019132b324c8fecbc261d14f2df095
opensearch-dashboards V 2.22.0 2.16.0 kubeops/opensearch-dashboards:1.6.4 6dd16d2e411bdde910fc3370c1aca73c3c934832e45174ec71887d74d70dfcec
opensearch V 2.23.1 2.16.0 kubeops/opensearch-os:1.6.4 cab021ed5f832057f2d4a7deaaccb1e2d2ab5d29bac502fb0daeebd8692a8178
rook v1.15.6 cluster/v1.15.6 operator v1.15.1 cluster/v1.15.1 operator kubeops/rook-ceph:1.6.4 3f7c8c22406b5dc50add81f0df45a65d6d81ec47bbf3fb9935959ff870481601
setup V 1.6.4 kubeops/setup:1.6.4 4760479e480453029f59152839d6624f7c5a7374fbc37ec2d7d14f8253ab9204
velero V 1.13.2 6.4.0 kubeops/velero:1.6.4 55136b3b4ea5aa8582b1300c37f084a48870f531496ed37a0849c33a63460b15

KubeOps 1.6.3 supports

Tools Supported App Version Chart Version Supported Package Version SHA256 Checksum
calicomultus V 0.0.4 lima/calicomultus:0.0.4 0e32dcea34fe707248cf2348f0ed422c278574594e440db8ceac8190e3984dac
cert-manager V 1.15.3 V 1.15.3 kubeops/cert-manager:1.6.3 11105f523a2d8faf3bbfdca9e4d06145b4d52bad0ee0f16586266c26b59d5fe5
clustercreate V 1.6.3 kubeops/clustercreate:1.6.3 9bce651b5d3caa5e83bfad25ef5d2908e16b2cf854168baf59b9ff586841e856
fileBeat V 8.5.1 8.5.1 kubeops/filebeat-os:1.6.3 36d0359590b0c5dd3e8f8cd4c5d769a1eea3c3009593cc465ae31f4d9fbeaa02
harbor V 2.11.1 1.15.1 kubeops/harbor:1.6.3 9a9d46f2c81a7596c8d00e920b3a733331d2def0676cc077b00749293e24255a
helm V 3.14.4 kubeops/helm:1.6.3 f3e90f91c99314ad8357a11129602ddb693aa7792038306f903cff3791a22a3e
ingress-nginx V 1.10.0 4.10.0 kubeops/ingress-nginx:1.6.3 97d27c7cfe437275994757e0d3395c1864fd1cd57f0441754c7ec2cf128893ab
keycloak V 22.0.1 16.0.5 kubeops/keycloak:1.6.3 3d300a861d8024595fbc65be6010a3738384754c574bff9aca07d3dfc988671d
kubeops-dashboard V 0.26.0 0.26.0 kubeops/kubeops-dashboard:1.6.3 ab7a339a132138f732aa1a9b70e3308c449566920155f67e4a72a1f2591b09db
prometheus V 0.76.1 62.7.0 kubeops/kube-prometheus-stack:1.6.3 e24aa21f9bcdf900f8d15edeab380ac68921b937af2baa638971364264a9d6cd
logstash V 8.4.0 8.5.1 kubeops/logstash-os:1.6.3 521d5238e1e6ca5adb12f088e279e05e47432b99008f99d5aed0bee75b740582
opa-gatekeeper V 3.17.1 v3.17.1 kubeops/opa-gatekeeper:1.6.3 73d1e72c88da83889e48a908f6bac522d416e219b4d342dbcfff7ca987f32f49
opensearch-dashboards V 2.22.0 2.16.0 kubeops/opensearch-dashboards:1.6.3 0ef3767f2c1b134d539f5f69a5e74509c2d232ccd337f33eea1d792e0f538f43
opensearch V 2.23.1 2.16.0 kubeops/opensearch-os:1.6.3 f9165115615e6f58ad320085bf73a37d559aa24e93225edd60cea203f8bdfe70
rook v1.15.1 cluster/v1.15.1 operator v1.15.1 cluster/v1.15.1 operator kubeops/rook-ceph:1.6.3 13b274e95da154699f72ae8442d1dca654311805d33b33f3d1eb6ea93bc8d5fe
setup V 1.6.3 kubeops/setup:1.6.3 cbe81f4169ead9c61bf65cf7b8cc47674a61ce9a6df37e6d8f7074254ea01d7f
velero V 1.13.2 6.4.0 kubeops/velero:1.6.3 b27addb2fc9d7498d82a649cdda61aec32b6f257377472fed243621dbc55b68b

KubeOps 1.6.2 supports

Tools Supported App Version Chart Version Supported Package Version SHA256 Checksum
calicomultus V 0.0.4 lima/calicomultus:0.0.4 0e32dcea34fe707248cf2348f0ed422c278574594e440db8ceac8190e3984dac
cert-manager V 1.15.3 V 1.15.3 kubeops/cert-manager:1.6.1 8ca88b91370d395ea9bcf6f1967a38a2345ea7024936a3be86c51a8079f719a7
clustercreate V 1.6.1 kubeops/clustercreate:1.6.1 5aeec18ea4c960ee4301f9a7808f4eda7d76ec1811be15a6c8092155997a41ce
fileBeat V 8.5.1 8.5.1 kubeops/filebeat-os:1.6.1 0908fa15d1d85a59a9887ac8080a5e46f3ee13167f1fcaadefbf4b6229f0cf94
harbor V 2.11.1 1.15.1 kubeops/harbor:1.6.1 715b9ce2d0925d8207311fc078c10aa5dfe01685b47743203e17241e0c4ac3c7
helm V 3.14.4 kubeops/helm:1.6.1 f149f8687285479e935207fc1c52e0c19e0bf21bc5b00bf11433f2fef7eb2800
ingress-nginx V 1.10.0 4.10.0 kubeops/ingress-nginx:1.6.1 6a7d6c60c26d52a6e322422655e75d8a84040e3022c74a1341b3cc7dae3f1d14
keycloak V 22.0.1 16.0.5 kubeops/keycloak:1.6.1 135546a99aa8f25496262ed36a910f80f35c76f0f122652bd196a68b519a41e4
kubeops-dashboard V 1.0.0 0.11.0 kubeops/kubeops-dashboard:1.6.1 37c04d6cd7654847add82572c8b2d38520ea63aff47af3b222283b1d570f44a8
prometheus V 0.76.1 62.7.0 kubeops/kube-prometheus-stack:1.6.1 c45db783a0e5c0475d9cd8e9c1309fa9af45410a8cca055f1c4028b8488cb4c9
logstash V 8.4.0 8.5.1 kubeops/logstash-os:1.6.1 19171f2aab866c53733147904aed8d39981f892aac0861dfd54354cdd98d0510
opa-gatekeeper V 3.16.0 v3.16.0 kubeops/opa-gatekeeper:1.6.1 811f14f669324a7c9bfbac04aac074945c4aecffc926fc75126b44ff0bd41eb2
opensearch-dashboards V 2.14.0 2.18.0 kubeops/opensearch-dashboards:1.6.1 7985e684a549f2eada4f3bf9a6490dc38be9b525e8f43ad9ff0c9377bccb0b7b
opensearch V 2.16.0 2.23.1 kubeops/opensearch-os:1.6.1 cb804a50ab971ec55c893bd949127de2011503af37e221c0eb3ad83f5c78a502
rook v1.12.5 cluster/v1.12.5 operator v1.12.5 cluster/v1.12.5 operator kubeops/rook-ceph:1.6.1 25e684fdc279b4f97cf1a5039f54fffbc1cf294f45935c20167dadd81a35ad52
setup V 1.6.1 kubeops/setup:1.6.1 5b40a96733c2e526e642f17d2941d7a9422ae0a858f14af343277051df96dc09
velero V 1.13.2 6.4.0 kubeops/velero:1.6.1 cb228d2c6fd69749e91444def89fd79be51bcb816cabc61c7032404f5257a767

KubeOps 1.6.1 supports

Tools Supported App Version Chart Version Supported Package Version SHA256 Checksum
calicomultus V 0.0.4 lima/calicomultus:0.0.4 0e32dcea34fe707248cf2348f0ed422c278574594e440db8ceac8190e3984dac
cert-manager V 1.15.3 V 1.15.3 kubeops/cert-manager:1.6.1 8ca88b91370d395ea9bcf6f1967a38a2345ea7024936a3be86c51a8079f719a7
clustercreate V 1.6.1 kubeops/clustercreate:1.6.1 5aeec18ea4c960ee4301f9a7808f4eda7d76ec1811be15a6c8092155997a41ce
fileBeat V 8.5.1 8.5.1 kubeops/filebeat-os:1.6.1 0908fa15d1d85a59a9887ac8080a5e46f3ee13167f1fcaadefbf4b6229f0cf94
harbor V 2.11.1 1.15.1 kubeops/harbor:1.6.1 715b9ce2d0925d8207311fc078c10aa5dfe01685b47743203e17241e0c4ac3c7
helm V 3.14.4 kubeops/helm:1.6.1 kubeops/helm:1.6.1.output
ingress-nginx V 1.10.0 4.10.0 kubeops/ingress-nginx:1.6.1 6a7d6c60c26d52a6e322422655e75d8a84040e3022c74a1341b3cc7dae3f1d14
keycloak V 22.0.1 16.0.5 kubeops/keycloak:1.6.1 135546a99aa8f25496262ed36a910f80f35c76f0f122652bd196a68b519a41e4
kubeops-dashboard V 0.15.1 0.11.0 kubeops/kubeops-dashboard:1.6.1 37c04d6cd7654847add82572c8b2d38520ea63aff47af3b222283b1d570f44a8
prometheus V 0.76.1 62.7.0 kubeops/kube-prometheus-stack:1.6.1 c45db783a0e5c0475d9cd8e9c1309fa9af45410a8cca055f1c4028b8488cb4c9
logstash V 8.4.0 8.5.1 kubeops/logstash-os:1.6.1 19171f2aab866c53733147904aed8d39981f892aac0861dfd54354cdd98d0510
opa-gatekeeper V 3.16.0 v3.16.0 kubeops/opa-gatekeeper:1.6.1 811f14f669324a7c9bfbac04aac074945c4aecffc926fc75126b44ff0bd41eb2
opensearch-dashboards V 2.14.0 2.18.0 kubeops/opensearch-dashboards:1.6.1 7985e684a549f2eada4f3bf9a6490dc38be9b525e8f43ad9ff0c9377bccb0b7b
opensearch V 2.16.0 2.23.1 kubeops/opensearch-os:1.6.1 cb804a50ab971ec55c893bd949127de2011503af37e221c0eb3ad83f5c78a502
rook v1.12.5 cluster/v1.12.5 operator v1.12.5 cluster/v1.12.5 operator kubeops/rook-ceph:1.6.1 25e684fdc279b4f97cf1a5039f54fffbc1cf294f45935c20167dadd81a35ad52
setup V 1.6.1 kubeops/setup:1.6.1 5b40a96733c2e526e642f17d2941d7a9422ae0a858f14af343277051df96dc09
velero V 1.13.2 6.4.0 kubeops/velero:1.6.1 kubeops/velero:1.6.1.output

KubeOps 1.6.0 supports

Tools Supported App Version Chart Version Supported Package Version SHA256 Checksum
calicomultus V 0.0.4 lima/calicomultus:0.0.4 0e32dcea34fe707248cf2348f0ed422c278574594e440db8ceac8190e3984dac
cert-manager V 1.14.5 v1.14.5 kubeops/cert-manager:1.6.0 1a9ed861709cbfb05158f7610026acf5199749f989e1527ad48b80a277323765
clustercreate V 1.6.0 kubeops/clustercreate:1.6.0 730925a6231a4fc8c7abf162d5d47a0f60107cb4dfa825db6e52a15d769a812d
fileBeat V 8.5.1 8.5.1 kubeops/filebeat-os:1.6.0 061dc4feed5d970db3d353d29b8ef8256a826b146e0d95befbea4d5350874b8f
harbor V 2.9.3 1.13.3 kubeops/harbor:1.6.0 3dd7dceb969dad59140e58631fd3a0c9f60ed22e2f1c2e1d087118e9c7592f26
helm V 3.14.4 kubeops/helm:1.6.0 cb53f7b751473dd96f435d9f614e51edeaea99f2ca57a3710b59c788540d48d5
ingress-nginx V 1.10.0 4.10.0 kubeops/ingress-nginx:1.6.0 068618eb258c2558c3097ed19344da9caad0d7b44a8252b160cd36ef4425b790
keycloak V 22.0.1 16.0.5 kubeops/keycloak:1.6.0 236b6955dc08707d6e645625681da0e356c865da9810695d4af7fc2509c36f25
kubeops-dashboard 0.15.1 0.11.0 kubeops/kubeops-dashboard:1.6.0 5564ec8dfa33bb332e2863b171580bffebad3dc379a1fd365bddf5fc1343caac
prometheus V 0.73.2 58.6.0 kubeops/kube-prometheus-stack:1.6.0 0286d6a05e61081e3abe783a36512bf372a3184e6f91457819a2b2c4046ce35a
logstash V 8.4.0 8.5.1 kubeops/logstash-os:1.6.0 88851fe2d5ec269a21811b15bb9cd11778d6564560eefe3938e874202298f3f1
opa-gatekeeper V 3.16.0 v3.16.0 kubeops/opa-gatekeeper:1.6.0 ad841610be5ce624abeb6e439e3e353bd2f1240ca613e24ebdc13f36e8891a1a
opensearch-dashboards V 2.14.0 2.18.0 kubeops/opensearch-dashboards:1.6.0 6364ffb7dbe05ea16a685ddf6e3d3a2b59ef6e8b28e5a1194710a5c37ae72c40
opensearch V 2.14.0 2.20.0 kubeops/opensearch-os:1.6.0 16e1699fe187962fc58190a79d137db4c07723f2a84a889f393830b0093dba82
rook v1.12.5 cluster/v1.12.5 operator v1.12.5 cluster/v1.12.5 operator kubeops/rook-ceph:1.6.0 48f79af13a0da86ea5019c78c24aa52c719d03a6ea2ab4e332b2078df0c02a16
setup V 1.6.0 kubeops/setup:1.6.0 e2dd0419e17bbd2deaaea1f2888d391749afc0f550145c1e6a3ef5d5fba3a6a2
velero V 1.13.2 6.4.0 kubeops/velero:1.6.0 2b9e27dcf3a927ebe044f332b597d399d99a1a95660f7a186cf7fb3658b3676d

KubeOps 1.6.0_Beta1 supports

Tools supported App Version Chart Version supported Package Version SHA256 Checksum
fileBeat V 8.5.1 8.5.1 kubeops/filebeat-os:1.6.0_Beta1 72d21348d6153beb9b36c287900579f1100ccd7333f63ff30edc576cfcb47250
harbor V 2.9.3 1.13.3 kubeops/harbor:1.6.0_Beta1 317c7a931bb7f1f5d1d10dd373355f048e350878c0eee086c67714b104fad7cb
helm V 3.14.4 kubeops/helm:1.6.0_Beta1 7890eb0c45ae420b664c655175844d84194520ae20429ad3b9d894eb865a8e66
logstash V 8.4.0 8.5.1 kubeops/logstash-os:1.6.0_Beta1 3c90093bc613adc8919cd3cc9f7f87ecad72a89b85ea30a9c161a5c59e9e925b
opa-gatekeeper V 3.16.0 v3.16.0 kubeops/opa-gatekeeper:1.6.0_Beta1 3f655af62141c33437fe1183700c9f5ae5bd537b84df0d649023ae2cdc83cd11
opensearch V 2.14.0 2.20.0 kubeops/opensearch-os:1.6.0_Beta1 0953ab749ccdf8b03648f850298f259504d40338bffe03dde2d6ab27ff0cb787
opensearch-dashboards V 2.14.0 2.18.0 kubeops/opensearch-dashboards:1.6.0_Beta1 9b15ab03a8be7c0e7515056e7b46d2ca9a425690701a7a77afb2b4455790041e
prometheus V 0.73.2 58.6.0 kubeops/kube-prometheus-stack:1.6.0_Beta1 277131992a7b70669e8aa2a299417da15a4631c89c9cca0f89128a1f2d81e532
rook v1.12.5 cluster /v1.12.5 operator v1.12.5 cluster/v1.12.5 operator kubeops/rook-ceph:1.6.0_Beta1 495a9afeb61ff50800c6bc9931b934ee75bd78c988f8fa47a8ee79299f1a3b51
cert-manager V 1.14.5 v1.14.5 kubeops/cert-manager:1.6.0_Beta1 220c892ed25f126e63da55f942a690a4d0443f5ed27e66b6532cdf573bb597af
ingress-nginx V 1.10.0 4.10.0 kubeops/ingress-nginx:1.6.0_Beta1 c21c12901d4f0542928234a4695c579fc24588a5d83ad27a61321c6b697f5588
kubeops-dashboard V 1.0.0 0.11.0 kubeops/kubeops-dashboard:1.6.0_Beta1 2fb230a9a9f2a3bfa5e4d588c5f63f12d1a8bc462f664ddd9b1d088f9ea141ac
keycloak V 22.0.1 16.0.5 kubeops/keycloak:1.6.0_Beta1 a9a3dedc583ec3d1b481e4799a50fe440af542f5b71176e0c80ba2e66d08bcdb
velero V 1.13.2 6.4.0 kubeops/velero:1.6.0_Beta1 9bf96e91902aa8caff4e2245ae7d08058bfa099079fbf4ba77df08b4c697d654
clustercreate V 1.6.0 V kubeops/clustercreate:1.6.0_Beta1 36dd0d62ff6a405d8dab1c330972cd033d73413cab578d51a9249738a0f14385
setup V 1.6.0 kubeops/setup:1.6.0_Beta1 6c9a96e026dbe7820f56c22767765adadd12466056d02de19d9bcae2c9fbbcde
calicomultus V 0.0.4 lima/calicomultus:0.0.4 a58aa03128ee88f3803803186c357f7daab252d8f3ae51f4aea124e8f4939f7f

KubeOps 1.6.0_Beta0 supports

Tools supported App Version Chart Version supported Package Version SHA256 Checksum
fileBeat V 8.5.1 8.5.1 kubeops/filebeat-os:1.6.0_Beta0 a4d399bcd9efb238b07aee4b54ad11de132599801608dffc69ca4eee04f71c07
harbor V 2.9.3 1.13.3 kubeops/harbor:1.6.0_Beta0 c345a0bb5fd80414405588583843978f8ca7dc31e07cbbf1c0db956866bc9e4d
helm V 3.14.4 kubeops/helm:1.6.0_Beta0 0ea84bc7b77dff23a13a1a2a9426930f68845e0b5a1481c2362f3d895215274f
logstash V 8.4.0 8.5.1 kubeops/logstash-os:1.6.0_Beta0 75d42128535c5d30bc175287a3b9c04a193698bff64a830873c02ae697573127
opa-gatekeeper V 3.16.0 v3.16.0 kubeops/opa-gatekeeper:1.6.0_Beta0 e19e933869c2feb73bea8838a7f4bfcf0bf19090bae97cbf84407241ea3ca973
opensearch V 2.14.0 2.20.0 kubeops/opensearch-os:1.6.0_Beta0 d2f58718d691946ea60bebe8eec6629f78f290405fe3fa572cec41b81106526e
opensearch-dashboards V 2.14.0 2.18.0 kubeops/opensearch-dashboards:1.6.0_Beta0 af6b12543a1e4cc863b06709ccbf67dec02db0f68d359916950a839abc581e5e
prometheus V 0.73.2 58.6.0 kubeops/kube-prometheus-stack:1.6.0_Beta0 2d773591c3dda297c00cc37abed74d1cf1d1575feb2a69610f0bdc6fc9a69040
rook v1.12.5 cluster /v1.12.5 operator v1.12.5 cluster/v1.12.5 operator kubeops/rook-ceph:1.6.0_Beta0 c3e95ec2fb9b96346cba802dd010a00fd1ddd791a2ce2cbefa464cfbb4a922cc
cert-manager V 1.14.5 v1.14.5 kubeops/cert-manager:1.6.0_Beta0 792759e538124e8307daf9abb81aef203655a102a713a05a0b3b547b8c19dd99
ingress-nginx V 1.10.0 4.10.0 kubeops/ingress-nginx:1.6.0_Beta0 41f64cea80d92a6356a713fb612a5bafbe6a527b2bd9e21e974347dd3f3ad0d2
kubeops-dashboard V 1.0.0 0.11.0 kubeops/kubeops-dashboard:1.6.0_Beta0 a9522a68a6be45358b96a78527ca3653439b2c24c5ab349ac6767e003dee80a4
keycloak V 22.0.1 16.0.5 kubeops/keycloak:1.6.0_Beta0 f3dcc5dd3b21d5da83c72f757146df3ddc32e5b793f7c6039df751ab88ccc2b4
velero V 1.13.2 6.4.0 kubeops/velero:1.6.0_Beta0 4653976cf762030e859fe83af4ac0f0830d61dec0a9f40d33ab590743a6baebe
clustercreate V 1.6.0 V kubeops/clustercreate:1.6.0_Beta0 29dc5a9d903eb2d9ac836f580e1ca4ff2691f24989bdb1c31313526de29e0208
setup V 1.6.0 kubeops/setup:1.6.0_Beta0 7c41ace358a4e62fb0c31a920456308086a1bda4294e1ff0ab26763ae41da9bd
calicomultus V 0.0.4 lima/calicomultus:0.0.4 a58aa03128ee88f3803803186c357f7daab252d8f3ae51f4aea124e8f4939f7f

3.5 - Glossary

Glossary

This section defines a glossary of common KubeOps terms.

SINA package

A SINA package is the .sina file created by bundling package.yaml and other essential YAML files and artifacts. This package is ready to install on your Kubernetes clusters.

KubeOps Hub

KubeOps Hub is a secure repository where published SINA packages can be stored and shared. You are welcome to contribute to and use the public hub; at the same time, KubeOps provides a way to access your own private hub.

Installation Address

It is the distinctive address automatically generated for each published package on KubeOps Hub. It is constructed from the name of the package creator, the package name and the package version.
You can use this address when installing the package on your Kubernetes cluster.

It is indicated by the install column in KubeOps Hub.
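
For example, an address of the form lima/calicomultus:0.0.2 refers to version 0.0.2 of the calicomultus package published by the creator lima.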

Deployment name

When a package is installed, SINA creates a deployment name to track that installation. Alternatively, SINA also lets you specify the deployment name of your choice during the installation.
A single package may be installed many times into the same cluster and create multiple deployments.
It is indicated by the Deployment column in the list of package deployments.

Tasks

As the name suggests, “Tasks” in package.yaml are one or more sets of instructions to be executed. These are defined by utilizing Plugins.

Plugins

SINA provides many functions which enable you to define tasks to be executed using your package. These are called Plugins. They are the crucial part of your package development.

LIMAROOT Variable

LIMAROOT is an environment variable for LIMA. It is the place where LIMA stores information about your clusters. The environment variable LIMAROOT is set by default to /var/lima. However, LIMA also lets you set your own LIMAROOT.

KUBEOPSROOT Variable

The environment variable KUBEOPSROOT stores the location of the SINA plugins and the config.yaml. To use the variable, the config.yaml and the plugins have to be copied manually.

apiVersion

It shows the supported KubeOps tool API version. You do not need to change it unless otherwise specified.

Registry

As the name suggests, it is the location where docker images can be stored. You can either use the default KubeOps registry or specify your own local registry for AirGap environments. You need an internet connection to use the default registry provided by KubeOps.

Maintenance Package

KubeOps provides packages for the supported Kubernetes tools. These packages help you update the Kubernetes tools to the desired versions on your clusters along with their dependencies.

Cluster

In computing, a cluster refers to a group of interconnected computers or servers that work together as a single system.

These machines, or nodes, are typically networked and collaborate to execute tasks or provide services. Clusters are commonly used in various fields such as distributed computing, high-performance computing, and cloud computing to improve reliability, scalability, and performance. In the context of technologies like Kubernetes, a cluster consists of multiple nodes managed collectively to deploy, manage, and scale containerized applications.

Container

A container is a lightweight, standalone package that includes everything needed to run a piece of software, including the code, runtime, libraries, and dependencies.

Containers are isolated from each other and from the underlying infrastructure, providing consistency and portability across different environments. Kubernetes manages containers, orchestrating their deployment, scaling, and management across a cluster of nodes. Containers are often used to encapsulate microservices or individual components of an application, allowing for efficient resource utilization and simplified deployment processes.

Drain-node

A Drain Node is a feature in distributed systems, especially prevalent in Kubernetes, used for gracefully removing a node from a cluster.

It allows the system to evict all existing workload from the node and prevent new workload assignments before shutting it down, ensuring minimal disruption to operations.

Kube-proxy

Kube-Proxy, short for Kubernetes Proxy, is a network proxy that runs on each node in a Kubernetes cluster. Its primary responsibility is to manage network connectivity for Kubernetes services. Its main tasks include service proxying and load balancing.

Kubelet

Kubelet is a crucial component of Kubernetes responsible for managing individual nodes in a cluster. It ensures that containers are running in pods as expected, maintaining their health and performance.

Kubelet communicates with the Kubernetes API server to receive instructions about which pods should be scheduled and executed on its node. It also monitors the state of these pods, reporting any issues back to the API server. Kubelet plays a vital role in the orchestration and management of containerized workloads within a Kubernetes cluster.

Node

A Kubernetes node oversees and executes pods.

It serves as the operational unit (virtual or physical machine) for executing assigned tasks. Similar to how pods bring together multiple containers to collaborate, a node gathers complete pods to work in unison. In large-scale operations, the goal is to delegate tasks to nodes with available pods ready to handle them.

Pod

In Kubernetes, a pod groups containers and is the smallest unit managed by the system.

Each pod shares an IP address among its containers and resources like memory and storage. This allows treating the containers as a single application, similar to traditional setups where processes run together on one host. Often, a pod contains just one container for simple tasks, but for more complex operations requiring collaboration among multiple processes with shared data, multi-container pods simplify deployment.

For example, in an image-processing service creating JPEGs, one pod might have containers for resizing images and managing background tasks or data cleanup, all working together.

Registry

Helm registry serves as a centralized repository for Helm charts, facilitating the discovery, distribution, and installation of Kubernetes applications and services.

It allows users to easily find, share, and consume pre-packaged Kubernetes resources, streamlining the deployment process in Kubernetes environments.

Zone

A “zone” typically refers to a subset of the overall cluster that shares certain characteristics, such as geographic location or hardware specifications. Zoning helps distribute resources strategically and can enhance fault tolerance by ensuring redundancy within distinct zones.

3.6 - FAQs

FAQ - Kubeopsctl

Known Issues

ImagepullBackoffs in Cluster

If you have ImagePullBackOff errors in your cluster, e.g. for Prometheus, you can simply run the kubeopsctl change registry command again, for example:

kubeopsctl change registry -r <your master ip>:30002/library -t localhost:30002/library -f kubeopsctl.yaml

FAQ - KubeOps KOSI

Error Messages

There is an error message regarding Remote-Certificate

  • Error: http://hub.kubernative.net/dispatcher?apiversion=3&vlientversion=2.X.0 : 0
  • X varies depending on the version.
  • CentOS 7 cannot update the version by itself (ca-certificates-2021.2.50-72.el7_9.noarch).
    • Fix: yum update ca-certificates -y or yum update
  • Manual download and install of ca-certificates RPM:
    • Download: curl http://mirror.centos.org/centos/7/updates/x86_64/Packages/ca-certificates-2021.2.50-72.el7_9.noarch.rpm -o ca-certificates-2021.2.50-72.el7_9.noarch.rpm
    • Install: yum install ca-certificates-2021.2.50-72.el7_9.noarch.rpm -y

KOSI Usage

Can I use KOSI with sudo?

  • At the moment, KOSI has no sudo support.
  • Docker and Helm, which are required, need sudo permissions.

I get an error message when I try to search an empty Hub?

  • Known bug, will be fixed in a later release.
  • Need at least one package in the Hub before you can search.

Package Configuration

In my package.yaml, can I use uppercase characters as a name?

  • Currently, only lowercase characters are allowed.
  • This will be fixed in a later release.

I have an error message that says “Username or password contain non-Latin characters”?

  • Known bug, may occur with incorrect username or password.
  • Please ensure both are correct.

In my template.yaml, can I just write a value without an associated key?

  • No, a YAML file requires a key-value structure.

Do I have to use the template plugin in my KOSI package?

  • No, you don’t have to use the template plugin if you don’t want to.

I have an error message that says “reference not set to an instance of an object”?

  • Error from our tool for reading YAML files.
  • Indicates an attempt to read a value from a non-existent key in a YAML file.

I try to template but the value of a key stays empty.

  • Check the correct path of your values.
  • If your key contains “-”, the template plugin may not recognize it.
  • Removing “-” will solve the issue.

FAQ - KubeOps LIMA

Error Messages

LIMA Cluster not ready

  • You have to apply the calico.yaml in the $LIMAROOT folder:
kubectl apply -f $LIMAROOT/calico.yaml

read header failed: Broken pipe

for lima version >= 0.9.0

  • Lima stops in line

ansible Playbook : COMPLETE : Ansible playbooks complete.

  • Search in the file

$LIMAROOT/dockerLogs/dockerLogs_latest.txt 

for Broken pipe. Starting from the line with Broken pipe, check if the following lines exist:

debug3: mux_client_read_packet: read header failed: Broken pipe

debug2: Received exit status from master 1

Shared connection to vli50707 closed.

<vli50707> ESTABLISH SSH CONNECTION FOR USER: demouser

<vli50707> SSH: ansible.cfg set ssh_args: (-C)(-o)(ControlMaster=auto)(-o)

(ControlPersist=60s)

If this is the case, the line ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s in the file /etc/ansible/ansible.cfg inside the currently running lima container must be commented out or removed.

Example:

docker container ls

CONTAINER ID   IMAGE                                                   COMMAND       CREATED      STATUS      PORTS   NAMES

99cabe7133e5   registry.preprod.kubernative.net/kubeops/lima:v0.8.0   "/bin/bash"   6 days ago   Up 6 days           lima-v0.8.0

docker exec -it 99cabe7133e5 bash

vi /etc/ansible/ansible.cfg 

Change the line ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s to #ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s or delete the line.

I want to delete the cluster master node and rejoin the cluster. When trying to rejoin the node a problem occurs and rejoining fails. What can be done?

To delete the cluster master, we need to set the cluster master to a different master machine first.

  1. On the admin machine: change the IP address from the current to the new cluster master in:

    1. $LIMAROOT/<name_of_cluster>/clusterStorage.yaml
    2. ~/.kube/config

  2. Delete the node

  3. Delete the images to prevent interference: ctr -n k8s.io i rm $(ctr -n k8s.io i ls -q)

  4. Change the IP on the new cluster master in /etc/kubernetes/admin.conf

  5. Change the IPs in the config maps:

    1. kubectl edit cm kubeadm-config -n kube-system
    2. kubectl edit cm kube-proxy -n kube-system
    3. kubectl edit cm kubeadm-config -n kube-system
    4. kubectl edit cm cluster-info -n kube-public

  6. Restart kubelet

  7. Rejoin the node

Using LIMA on RHEL8 fails to download metadata for repo “rhel-8-for-x86_64-baseos-rpms”. What should I do?

This is a common problem which happens now and then, but the real source of error is difficult to identify. Nevertheless, the workaround is quick and easy: clean up the current repo data, refresh the subscription-manager and update the whole operating system. This can be done with the following commands:

dnf clean all

rm -frv /var/cache/dnf

subscription-manager refresh

dnf update -y

How does LIMA handle SELinux?

SELinux will be temporarily deactivated during the execution of LIMA tasks. After the execution is finished, SELinux is automatically reactivated. This indicates you are not required to manually enable SELinux every time while working with LIMA.

My pods are stuck: CONFIG-UPDATE 0/1 CONTAINERCREATING

  1. These pods are responsible for updating the loadbalancer; you can update it manually and delete the pod.

  2. You can try redeploying the daemonset to the kube-system namespace.

My master can not join, it fails when creating /ROOT/.KUBE

Try the following commands on the master:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config 

Some nodes are missing the loadbalancer

  1. Check if the Loadbalancer staticPod file can be found in the manifest folder of the node.

  2. If it isn’t there please copy it from another node.

Some nodes didn’t upgrade. What to do now?

  1. Retry to upgrade your cluster.

  2. If LIMA thinks you are already on the target version, edit the stored data of your cluster at $LIMAROOT/myClusterName/clusterStorage.yaml.

    Set the key kubernetesVersion to the lowest Kubernetes version present on a node in your cluster, as shown in the sketch below.
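
    A minimal sketch of the relevant entry (assuming kubernetesVersion is a top-level key of clusterStorage.yaml; the version number is only an example):

    # $LIMAROOT/myClusterName/clusterStorage.yaml (excerpt)
    kubernetesVersion: 1.30.0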

Could not detect a supported package manager from the followings list: [‘PORTAGE’, ‘RPM’, ‘PKG’, ‘APT’], or the required PYTHON library is not installed. Check warnings for details.

  1. Check if you got a package manager.

  2. You have to install python3 with yum install python and then create a symlink from python to python3 with update-alternatives --config python.

Aborting, target uses SELINUX but PYTHON bindings (LIBSELINUX-PYTHON) aren’t installed!

You have to install libselinux-python on your cluster machine so you can install a firewall via LIMA.

FAQ - KubeOps PIA

The httpd service is taking too long to terminate. How can I force the shutdown?

  1. Use the following command to force shut down the httpd service:
kubectl delete deployment pia-httpd --grace-period=0 --force
  2. Most deployments have a networking service, like our httpd does.

Delete the networking service with the command:

kubectl delete svc pia-httpd-service --grace-period=0 --force

I get the error that some nodes are not ‘Ready’. How do I fix the problem?

  1. Use the kubectl get nodes command to find out which node is not ready.

  2. To identify the problem, get access to the shell of the non-ready node. Use systemctl status kubelet to get status information about the state of the kubelet.

  3. The most common cause of this error is that the kubelet fails to automatically identify the node. In this case, the kubelet must be restarted manually on the non-ready machine. This is done with systemctl enable kubelet and systemctl start kubelet. You may also need to restart containerd: systemctl stop containerd and systemctl restart containerd.

  4. If the issue persists, the reason behind the error can be evaluated by your cluster administrators.

I checked my clusterStorage.yaml after the cluster creation and there is only an entry for master1

This error occurs sporadically and will be fixed in a later version. The error has no effect.

FAQ KubeOps PLATFORM

Support of S3 storage configuration doesn’t work

At the moment, the kosi-package rook-ceph:1.1.2 (utilized in kubeOps 1.1.3) is employing a Ceph version with a known bug that prevents the proper setup and utilization of object storage via the S3 API. If you require the functionality provided by this storage class, we suggest considering the use of kubeOps 1.0.7. This particular version does not encounter the aforementioned issue and provides comprehensive support for S3 storage solutions.

Change encoding to UTF-8

Please make sure that your uservalues.yaml is using UTF-8 encoding.

If you get issues with encoding, you can convert your file to UTF-8 with the following command (write the output to a new file, because redirecting to the input file would truncate it):

iconv -f ISO-8859-1 -t UTF-8 uservalues.yaml > uservalues_utf8.yaml

How to update Calico Multus?

  1. Get podSubnet located in clusterStorage.yaml ($LIMAROOT/<clustername>/clusterStorage.yaml)

  2. Create a values.yaml with the key podSubnet and the podSubnet from step 1 as the value.

    Example:

    podSubnet: 192.168.0.0/17

  3. Get the deployment name of the current calicomultus installation with the kosi list command

Example:

| Deployment | Package | PublicHub | Hub |

|-------------|--------------------------------------|--------------|----------|

| 39e6da | local/calicomultus:0.0.1 |        | local |
  4. Update the deployment with
kosi update lima/calicomultus:0.0.2 --dname <yourdeploymentname> --hub=public -f values.yaml

--dname: important parameter, mandatory for the update command.

-f values.yaml: important so that the right podSubnet is used.

Known issue:

error: resource mapping not found for name: calico-kube-controllers namespace:co.yaml: no matches for kind PodDisruptionBudget in version policy/v1beta1

ensure CRDs are installed first

Create Cluster-Package with firewalld:

If you want to create a cluster with firewalld using the kubeops/clustercreate:1.0. package, you have to manually pull the firewalld maintenance package for your OS first, after executing the kubeops/setup:1.0.1 package.

Opensearch pods do not start:

If the following message appears in the OpenSearch pod logs, the vm.max_map_count value must be increased:

ERROR: [1] bootstrap checks failed

[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

On all control-plane and worker nodes the line vm.max_map_count=262144 must be added to the file /etc/sysctl.conf.

After that the following command must be executed in the console on all control-plane and worker nodes: sysctl -p

Finally, the Opensearch pods must be restarted.

Does the KubeOps Platform have a vendor lock-in?

No, the KubeOps Platform does not have a vendor lock-in. It is built entirely on open standards and Kubernetes technologies, ensuring you retain full control over your infrastructure at all times.

If you decide to discontinue using the KubeOps Platform, you can:

  • Export data and configurations: All data and settings are available in standardized formats.
  • Migrate workloads: Your applications can run on any Kubernetes environment without requiring modifications.
  • Replace modules: Features like monitoring, security, or lifecycle management can be gradually replaced with alternative tools.

Your infrastructure will remain fully operational throughout the transition. Additionally, we provide comprehensive documentation and optional support to ensure a smooth migration process.

FAQ - KubeOps KUBEOPSCTL

Known issue:

The namespace for packages must remain consistent. For example, if you deploy a package in the “monitoring” namespace, all Kosi updates should also be applied within the same namespace.

HA capability is only reached after 12 hours; for earlier HA capability, manually move the file /etc/kubernetes/manifest/haproxy.yaml out of the folder and back in again.

After upgrading a node or zone it is possible that the lima container is still running. Please confirm with podman ps -a if a lima container is running. Remove the lima container with podman rm -f <container id>. After that you can start another upgrade of node or zone.

Sometimes the rook-ceph PDBs block the Kubernetes upgrade if you have 3 workers, so you have to delete the rook-ceph PDBs so that the nodes can be drained during the Kubernetes upgrade. The PDBs are created dynamically, so they could be created again after some time.

If the calico or the multus images have an ImagePullBackOff, you need to execute kosi pull --hub public lima/calicomultus:0.0.3 -o calico.tgz -r masternode:5000 -t localhost:5001 for all master nodes.

Even if you have the updateRegistry parameter in your yaml file set to true, the images will not be rewritten. You can use lima update -r (clustername from the yaml file) instead.

The rook-ceph dashboard is inaccessible with kubeopsctl v1.6.2.

An additional worker or master is not added to the existing cluster. We faced this issue with kubeopsctl 1.5.0: after the cluster creation, an additional master or worker node should have been joined, but the kubeopsctl logs showed that the additional node could not be found. In $KUBEOPSROOT/lima/dockerLogs/dockerLogs_latest.txt, at the bottom of the file, we found the error Variable useInsecureRegistry is not defined. After checking $KUBEOPSROOT/lima/test/clusterStorage.yaml (test is the name of our cluster; in your case it is the cluster name you gave in the kubeopsctl.yaml file), we found the entry useInsecureRegistry: without a value. After we changed it to useInsecureRegistry: false and tried to add the additional node again, it worked.
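
The corrected entry, as a sketch of the relevant excerpt only (replace test with your cluster name):

# $KUBEOPSROOT/lima/test/clusterStorage.yaml (excerpt)
useInsecureRegistry: false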


When updating the kubeopsctl.yaml from 1.6.X to 1.7.X, you need to change the ipAdress parameter to ipAddress in your kubeopsctl.yaml file and in all generated yaml files in $KUBEOPSROOT.
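
A minimal before/after sketch (assuming ipAdress appears as a field of a node entry in your kubeopsctl.yaml; the IP value is only an example):

# 1.6.X kubeopsctl.yaml (old spelling)
ipAdress: 10.2.10.12
# 1.7.X kubeopsctl.yaml (new spelling)
ipAddress: 10.2.10.12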

4 - Release Notes

Check out the latest release notes, providing a thorough insight into the exhilarating updates and enhancements within our newest kubeopsctl version.

4.1 - KubeOps 1.7.0_Beta0

KubeOps 1.7.0_Beta0 - Release Date 30.04.2025

Changelog kubeopsctl 1.7.0_Beta0

New

  • Added Licence verification

Bugfix

  • Resolved issue with kubeopsctl status for keycloak

Changelog lima 1.7.0_Beta0

Bugfix

  • Fixed a bug where ssh key support for ed25519 was not possible.

Changelog opensearch 1.7.0_Beta0

Bugfix

  • Fixed a bug where the secret internal-users-config-secret was not created.
  • resolved imagePullSecrets warning in Kubernetes events
  • added resource limits

Changelog nginx-ingress 1.7.0_Beta0

Updates

  • Upgraded Helm chart to version 4.11.5 to address CVE-2025-1974.

Bugfix

  • resolved imagePullSecrets warning in Kubernetes events
  • added resource limits

changelog prometheus 1.7.0_Beta0

Bugfix

  • Updated execution permissions for the executable in the Grafana OpenSearch plugin.
  • resolved imagePullSecrets warning in Kubernetes events
  • added resource limits

changelog kubeops-dashboard 1.7.0_Beta0

Bugfix

  • Fixed CVE-2024-45337 issue
  • resolved imagePullSecrets warning in Kubernetes events
  • added resource limits

changelog rook-ceph 1.7.0_Beta0

Bugfix

  • Added missing ImagePullSecret in Rook Operator and Ceph Cluster.
  • resolved imagePullSecrets warning in Kubernetes events
  • added resource limits
  • removed non-templatable values

changelog harbor 1.7.0_Beta0

Bugfix

  • Added missing ImagePullSecret in Rook Operator and Ceph Cluster.
  • Resolved the PostgreSQL issue encountered during the Harbor upgrade.

changelog setup 1.7.0_Beta0

Updates

  • Added rhel9 support for kubedependencies, containerd, containerd-dependencies, nftables

changelog lima 1.7.0_Beta0

New

  • Added Licence verification
  • added new version of calicomultus

Updates

  • Added rhel9 support for kubedependencies, containerd, containerd-dependencies, nftables
  • Lima will now create the file /etc/systemd/system/containerd.service.d/override.conf in order to overwrite LimitNOFILE=Infinity with LimitNOFILE=1048576 for ceph support in Red Hat Enterprise Linux 9
  • Lima will now add SystemdCgroup = true below [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options] in file /etc/containerd/config.toml
  • Resolved the login issue for non-root users when creating a Lima container
  • fixed issue with reading clusterstorage.yaml
  • fixed issue with kubernetes upgrade to 1.32 versions
  • fixed issue with syntax error in lima update -r

Changelog cert-manager 1.7.0_Beta0

Bugfix

  • resolved imagePullSecrets warning in Kubernetes events
  • added resource limits

Changelog filebeat 1.7.0_Beta0

Bugfix

  • resolved imagePullSecrets warning in Kubernetes events
  • added resource limits

Changelog keycloak 1.7.0_Beta0

Bugfix

  • resolved imagePullSecrets warning in Kubernetes events
  • added resource limits
  • fixed CVEs, updated version

Changelog logstash 1.7.0_Beta0

Bugfix

  • resolved imagePullSecrets warning in Kubernetes events
  • added resource limits

Changelog opa-gatekeeper 1.7.0_Beta0

Bugfix

  • resolved imagePullSecrets warning in Kubernetes events
  • added resource limits

Changelog opensearch-dashboards 1.7.0_Beta0

Bugfix

  • resolved imagePullSecrets warning in Kubernetes events
  • added resource limits

Changelog velero 1.7.0_Beta0

Bugfix

  • resolved imagePullSecrets warning in Kubernetes events
  • added resource limits

Changelog kubevirt 1.0.0_Beta0

new

  • added package

4.2 - KubeOps 1.7.0_Alpha2

KubeOps 1.7.0_Alpha2 - Release Date 11.04.2025

Changelog kubeopsctl 1.7.0_Alpha2

Bugfix

  • Resolved issue with kubeopsctl status for keycloak

Changelog lima 1.7.0_Alpha2

Bugfix

  • Fixed a bug where ssh key support for ed25519 was not possible.

Changelog opensearch 1.7.0_Alpha2

Bugfix

  • Fixed a bug where the secret internal-users-config-secret was not created.

4.3 - KubeOps 1.7.0_Alpha1

KubeOps 1.7.0_Alpha1 - Release Date 08.04.2025

Changelog nginx-ingress 1.7.0_Alpha1

Updates

  • Upgraded Helm chart to version 4.11.5 to address CVE-2025-1974.

changelog prometheus 1.7.0_Alpha1

Bugfix

  • Updated execution permissions for the executable in the Grafana OpenSearch plugin.

changelog kubeops-dashboard 1.7.0_Alpha1

Bugfix

  • Fixed CVE-2024-45337 issue

changelog rook-ceph 1.7.0_Alpha1

Bugfix

  • Added missing ImagePullSecret in Rook Operator and Ceph Cluster.

changelog harbor 1.7.0_Alpha1

Bugfix

  • Added missing ImagePullSecret in Rook Operator and Ceph Cluster.

changelog setup 1.7.0_Alpha1

Updates

  • Added rhel9 support for kubedependencies, containerd, containerd-dependencies, nftables

changelog lima 1.7.0_Alpha1

Updates

  • Added rhel9 support for kubedependencies, containerd, containerd-dependencies, nftables
  • Lima will now create the file /etc/systemd/system/containerd.service.d/override.conf in order to overwrite LimitNOFILE=Infinity with LimitNOFILE=1048576 for ceph support in Red Hat Enterprise Linux 9
  • Lima will now add SystemdCgroup = true below [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options] in file /etc/containerd/config.toml

4.4 - KubeOps 1.7.0_Alpha0

KubeOps 1.7.0_Alpha0 - Release Date 01.04.2025

Changelog kubeopsctl 1.7.0_Alpha0

New

  • Added Licence verification

Changelog lima 1.7.0_Alpha0

New

  • Added Licence verification
  • added new version of calicomultus

Bugfix

  • Resolved the login issue for non-root users when creating a Lima container
  • fixed issue with reading clusterstorage.yaml
  • fixed issue with kubernetes upgrade to 1.32 versions
  • fixed issue with syntax error in lima update -r

Changelog cert-manager 1.7.0_Alpha0

Bugfix

  • resolved imagePullSecrets warning in Kubernetes events
  • added resource limits

Changelog filebeat 1.7.0_Alpha0

Bugfix

  • resolved imagePullSecrets warning in Kubernetes events
  • added resource limits

Changelog nginx-ingress 1.7.0_Alpha0

Bugfix

  • resolved imagePullSecrets warning in Kubernetes events
  • added resource limits

Changelog keycloak 1.7.0_Alpha0

Bugfix

  • resolved imagePullSecrets warning in Kubernetes events
  • added resource limits
  • fixed CVEs, updated version

changelog kubeops-dashboard 1.7.0_Alpha0

Bugfix

  • resolved imagePullSecrets warning in Kubernetes events
  • added resource limits

Changelog logstash 1.7.0_Alpha0

Bugfix

  • resolved imagePullSecrets warning in Kubernetes events
  • added resource limits

Changelog opa-gatekeeper 1.7.0_Alpha0

Bugfix

  • resolved imagePullSecrets warning in Kubernetes events
  • added resource limits

Changelog opensearch 1.7.0_Alpha0

Bugfix

  • resolved imagePullSecrets warning in Kubernetes events
  • added resource limits

Changelog opensearch-dashboards 1.7.0_Alpha0

Bugfix

  • resolved imagePullSecrets warning in Kubernetes events
  • added resource limits

Changelog prometheus 1.7.0_Alpha0

Bugfix

  • resolved imagePullSecrets warning in Kubernetes events
  • added resource limits

Changelog velero 1.7.0_Alpha0

Bugfix

  • resolved imagePullSecrets warning in Kubernetes events
  • added resource limits

Changelog rook-ceph 1.7.0_Alpha0

Bugfix

  • resolved imagePullSecrets warning in Kubernetes events
  • added resource limits
  • removed non-templatable values

Changelog kubevirt 1.7.0_Alpha0

New

  • Added package

4.5 - KubeOps 1.6.8

KubeOps 1.6.8 - Release Date 30.04.2025

changelog lima 1.6.8

new

  • added logic for creating rook-ceph on RHEL 9
  • added new containerd and containerd-dependencies packages for RHEL 9

fixed

  • fixed issue where setting the auditlog parameter to false did nothing
  • fixed issue where setting the auditlog parameter to true only made changes on the clustermaster

changelog opensearch 1.6.8

changed

  • updated opensearch to version: 2.32.0 and appVersion: 2.19.1
  • Issue with creating and accessing secrets in OpenSearch has been resolved.
  • Issue with creating namespace for opensearch has been resolved

changelog opensearch-dashboards 1.6.8

changed

  • updated opensearch-dashboards to version: 2.28.0 and appVersion: 2.19.1

changelog prometheus 1.6.8

fixed

  • fixed issue where the grafana OpenSearch datasource had wrong permissions

changelog ingress-nginx 1.6.8

changed

  • updated helm chart to appVersion: 1.11.5 and version: 4.11.5

fixed

  • fixed critical CVEs:
    • CVE-2025-1097
    • CVE-2025-1098
    • CVE-2025-1974
    • CVE-2025-24513
    • CVE-2025-24514
  • fixed issue where advanced parameters did not work

changelog kubeopsctl 1.6.8

fixed

  • fixed issue where clusterstorage was not updated after kubeopsctl commands
  • fixed issue where the kubeopsctl version command did not produce any output

4.6 - KubeOps 1.6.7

KubeOps 1.6.7 - Release Date 27.03.2025

changelog lima 1.6.7

fixed

  • fixed issue in kubernetes 1.31.2 where the pause image had the tag 3.10.0 instead of 3.10
  • fixed issue in kubernetes 1.31.4 where the pause image had the tag 3.10.0 instead of 3.10
  • fixed issue where the kubernetes upgrade from 1.30 to 1.31 failed with the CreateContainerConfigError
  • fixed issue where the kubernetes upgrade from 1.30 to 1.31 was only applied on the worker nodes
  • fixed issue where the kubernetes upgrade from 1.30 to 1.31 created a version conflict between the kube-apiserver of an upgraded master and a not yet upgraded master

changelog kubeopsctl 1.6.7

fixed

  • fixed issue where the kubernetes upgrade from 1.30 to 1.31 was only applied on the worker nodes
  • fixed issue where the kubernetes upgrade from 1.30 to 1.31 created a version conflict between the kube-apiserver of an upgraded master and a not yet upgraded master

changelog rook-ceph 1.6.7

fixed

  • fixed templating of mgr so that it can be modified via the advanced parameters

added

  • added resource limits for rook

changelog harbor 1.6.7

fixed

  • fixed templating of postgres so that it can be modified via the advanced parameters

added

  • added resource limits for harbor

changelog helm 1.6.7

added

  • added resource limits for helm

changelog cert-manager 1.6.7

added

  • added resource limits for cert-manager

changelog filebeat 1.6.7

added

  • added resource limits for filebeat

changelog ingress-nginx 1.6.7

added

  • added resource limits for ingress-nginx

changelog keycloak 1.6.7

added

  • added resource limits for keycloak

changelog kubeops-package 1.6.7

added

  • added resource limits for kubeops-package

changelog kubeops-setup 1.6.7

added

  • added resource limits for kubeops-setup

changelog logstash 1.6.7

added

  • added resource limits for logstash

changelog opa-gatekeeper 1.6.7

added

  • added resource limits for opa-gatekeeper

changelog opensearch 1.6.7

added

  • added resource limits for opensearch

known issue

  • the installation can create the following error: Failed loading builtin log types from disk! java.nio.file.FileSystemNotFoundException

changelog prometheus 1.6.7

added

  • added resource limits for prometheus

4.7 - KubeOps 1.6.6

KubeOps 1.6.6

changelog lima 1.6.6

fixed

  • fixed issue where a non-existing cluster storage led to exceptions
  • fixed issue where the restart of the proxy in lima update registry led to errors

changelog opensearch 1.6.6

fixed

  • fixed issue where the package automatically received an imagePullSecret from the local harbor

changelog opensearch-dashboards 1.6.6

fixed

  • fixed issue where the package automatically received an imagePullSecret from the local harbor

known issue

  • the installation can create the following error: Failed loading builtin log types from disk! java.nio.file.FileSystemNotFoundException

changelog logstash 1.6.6

fixed

  • fixed issue where the package automatically received an imagePullSecret from the local harbor

changelog filebeat 1.6.6

fixed

  • fixed issue where the package automatically received an imagePullSecret from the local harbor

changelog prometheus 1.6.6

fixed

  • fixed issue where the package automatically received an imagePullSecret from the local harbor

changelog opa-gatekeeper 1.6.6

fixed

  • fixed issue where the package automatically received an imagePullSecret from the local harbor

changelog cert-manager 1.6.6

fixed

  • fixed issue where the package automatically received an imagePullSecret from the local harbor

changelog ingress-nginx 1.6.6

fixed

  • fixed issue where the package automatically received an imagePullSecret from the local harbor

changelog keycloak 1.6.6

fixed

  • fixed issue where the package automatically received an imagePullSecret from the local harbor

changelog velero 1.6.6

fixed

  • fixed issue where the package automatically received an imagePullSecret from the local harbor

4.8 - KubeOps 1.6.5

KubeOps 1.6.5

changelog lima 1.6.5

added

  • added kubernetes version 1.29.12
  • added kubernetes version 1.30.8

changelog kubeops-dashboard 1.6.5

fixed

  • fixed issue where nodeport was randomly assigned

Changelog opensearch-dashboards 1.6.5

known issue

  • the installation can create the following error: Failed loading builtin log types from disk! java.nio.file.FileSystemNotFoundException

4.9 - KubeOps 1.6.4

KubeOps 1.6.4

changelog lima 1.6.4

added

  • added kubernetes version 1.29.10
  • added kubernetes version 1.30.6

changed

  • updated yq version in lima code for vulnerability fix

fixed

  • fixed issue where changing the auditlog setting only worked on the clustermaster
  • fixed issue where clusterstorage was not complete after cluster creation

changelog rook-ceph 1.6.4

changed

  • updated appVersion of the rook-ceph operator to 1.15.6 and the helm chart version to 1.15.6
  • updated the helm chart version of the rook-ceph cluster to 1.15.6

changelog kube prometheus stack 1.6.4

changed

  • updated the ingress configuration to enable HTTPS access for Prometheus and Grafana services
  • updated the grafana plugin to 11.3.1
  • updated the prometheus image because of critical CVEs

changelog kubeopsctl 1.6.4

fixed

  • fixed issue where change auditlog only worked on clustermaster
  • fixed issue where a warning message was thrown that the node could be upgraded, even though no upgrade was applied

changelog opensearch 1.6.4

added

  • added resource limits for opensearch

known issue

  • the installation can create the following error: Failed loading builtin log types from disk! java.nio.file.FileSystemNotFoundException

4.10 - KubeOps 1.6.3

KubeOps 1.6.3

changelog kubeopsctl 1.6.3

bugfixes

  • resolved issue where having only masters in zones was not possible
  • resolved issue where installation of harbor without password and external url was possible
  • resolved issue where the imagepullsecret was not created correctly for harbor with a domain name
  • resolved issue where the harbor password appeared in the logs

changelog lima 1.6.3

bugfixes

  • Resolved the issue where the kubectl command was not found

added

  • added kubernetes version 1.31.2

changelog rook-ceph 1.6.3

fixes

  • resolved issue with accessing ceph dashboard
  • fixed issue with updating rook-cephfs storageClass

Changelog Harbor 1.6.3

fixes

  • hardened images because of critical CVEs: CVE-2024-41110, CVE-2024-45491, CVE-2024-45492, CVE-2024-37371

Changelog prometheus-stack 1.6.3

fixes

  • changed scrape port of etcd for the prometheus stack to 2381 (see the sketch below)
  • resolved issues with prometheus CRDs in the prometheus upgrade
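For context, port 2381 is etcd's metrics listen port. In the upstream kube-prometheus-stack chart, the scrape target is typically set via the kubeEtcd values, roughly as sketched below; these are upstream value names shown for illustration, not the KubeOps defaults:

```yaml
# kube-prometheus-stack values excerpt -- illustrative sketch
kubeEtcd:
  enabled: true
  service:
    enabled: true
    port: 2381        # scrape etcd metrics on 2381 instead of the client port
    targetPort: 2381  # assumes etcd listens with --listen-metrics-urls=http://0.0.0.0:2381
```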

Changelog nginx-ingress 1.6.3

fixes

  • fixed the critical vulnerability CVE-2024-24790

Changelog kubeops-dashboard 1.6.3

fixes

  • updated helm chart to 0.26.0 and app version 0.26.0

Changelog opensearch-dashboards 1.6.3

updates

  • upgraded Helm chart version to 2.22.0

known issue

  • the installation can create the following error: Failed loading builtin log types from disk! java.nio.file.FileSystemNotFoundException

4.11 - KubeOps 1.6.2

KubeOps 1.6.2

changelog kubeopsctl 1.6.2

bugfixes

  • resolved issue where images were not pinned, so images were deleted by kubernetes
  • resolved issue where updateRegistry was not executed
  • resolved issue where lima container did not properly stop after errors
  • fixed issue with storageClassName in logstashValues

changelog rook-ceph 1.6.2

fixes

  • resolved issue where the parameters useallnodes, usealldevices, and deviceslist were not templated

Changelog opa gatekeeper 1.6.2

fixes

  • fixed issue where opa gatekeeper namespace was controlled by helm
  • updated helm chart of opa gatekeeper, changed curl image

Changelog keycloak 1.6.2

fixes

  • fixed issue with critical CVEs

Changelog opensearch-dashboards 1.6.2

known issue

  • the installation can create the following error: Failed loading builtin log types from disk! java.nio.file.FileSystemNotFoundException

4.12 - KubeOps 1.6.1

KubeOps 1.6.1

Changelog kubeopsctl 1.6.1

Bugfix

  • Resolved the issue of missing master node details in /usr/local/etc/haproxy/haproxy.cfg (see the illustrative excerpt below)
  • Resolved the issue where no error message was displayed when running kubeopsctl apply -f with a non-existent YAML file
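For context, the haproxy.cfg fix concerns the backend that load-balances the Kubernetes API across the master nodes. A purely illustrative excerpt (hostnames and addresses are made up) looks roughly like this:

```
# /usr/local/etc/haproxy/haproxy.cfg -- illustrative excerpt
frontend kube-apiserver
    bind *:6443
    mode tcp
    default_backend kube-masters

backend kube-masters
    mode tcp
    balance roundrobin
    # one server line per master node; these details were missing before the fix
    server master1 10.0.0.11:6443 check
    server master2 10.0.0.12:6443 check
    server master3 10.0.0.13:6443 check
```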

Changelog lima 1.6.1

Bugfix

  • Resolved issue with upgrading kubernetes versions

Changelog velero 1.6.1

Bugfix

  • Fixed issue with CVE-2024-24790 vulnerability

Changelog kubeops-dashboard 1.6.1

Bugfix

  • Fixed issue with CVE-2024-24790 and CVE-2024-41110 vulnerabilities

Changelog cert-manager 1.6.1

Bugfix

  • Upgraded the cert-manager Helm chart to version 1.15.3

Changelog prometheus 1.6.1

Bugfix

  • Upgraded the prometheus Helm chart to version 62.7.0

Changelog nginx-ingress 1.6.1

Bugfix

  • Fixed issue with CVE-2024-24790 vulnerability

Changelog harbor 1.6.1

Bugfix

  • Upgraded the harbor Helm chart to version 1.15.1

Changelog opensearch-dashboards 1.6.1

known issue

  • the installation can create the following error: Failed loading builtin log types from disk! java.nio.file.FileSystemNotFoundException

4.13 - KubeOps 1.6.0

KubeOps 1.6.0

Changelog kubeopsctl 1.6.0

New

  • Added apiVersion kubeops/kubeopsctl/alpha/v5
  • Added overwrite parameters for platform packages such as rook, harbor, logstash, filebeat etc. (see the hypothetical sketch below)
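As a purely hypothetical illustration of what such overwrites look like in kubeopsctl.yaml; the advancedParameters key placement and the override values below are assumptions for illustration, not the documented schema:

```yaml
# kubeopsctl.yaml fragment -- hypothetical sketch, keys and values are assumptions
apiVersion: kubeops/kubeopsctl/alpha/v5
rookValues:
  cluster:
    advancedParameters:   # assumed key; values merged over the rook-ceph cluster chart defaults
      cephClusterSpec:
        mgr:
          count: 2
harborValues:
  advancedParameters:     # assumed key; values merged over the harbor chart defaults
    portal:
      replicas: 3
```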

Bugfix

  • Fixed the issue where the platform was creating the Harbor project despite it being set to false.
  • Resolved the issue where status gave output for invalid arguments
  • Resolved the issue with multiline literals where “|” was converted to “>” (see the YAML example below)
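The multiline fix above relates to standard YAML block scalars: "|" preserves line breaks, while ">" folds them into spaces, so silently converting one to the other changes the resulting string. A generic YAML example:

```yaml
# literal block scalar: newlines are kept ("line one\nline two\n")
literal: |
  line one
  line two

# folded block scalar: newlines become spaces ("line one line two\n")
folded: >
  line one
  line two
```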

Changelog lima 1.6.0

Bugfix

  • Resolved an issue with upgrading the clustermaster twice

Changelog keycloak 1.6.0

Bugfix

  • Issue with CVE-2024-37371 vulnerability resolved.
  • Resolved issue with nodeport configuration

Changelog prometheus 1.6.0

Bugfix

  • Fixed the issue with the “ZBP” tag in the Grafana-Alertmanager dashboard.

Changelog kubeops-dashboard 1.6.0

Bugfix

  • The correct version has been applied

4.14 - KubeOps 1.6.0-Beta1

KubeOps 1.6.0-Beta1

Changelog kubeopsctl 1.6.0_Beta1

Bugfix

  • Resolved issue where log messages were written along with the output on the screen
  • Resolved issue with support for API versions v1, v2, v3, and v4

Changelog lima 1.6.0_Beta1

Bugfix

  • Resolved issue with ‘NullReferenceException’ for unsupported packages
  • Resolved issue with logging channels

4.15 - KubeOps 1.6.0-Beta0

KubeOps 1.6.0-Beta0

Changelog kubeopsctl 1.6.0_Beta0

New

  • Added apiVersion kubeops/kubeopsctl/alpha/v5
  • Added a tutorial for using advanced parameters to overwrite values

Bugfix

  • Moved advancedParameter under cluster and operator in rookValues
  • fixed issue where packages could not be upgraded
  • fixed issue with v4 to v5 model conversion

Known Issues

  • log messages are written along with output on the screen

Changelog lima 1.6.0_Beta0

Bugfix

  • added more parallelization in the cluster creation and update registry for more speed
  • fixed problem where kubernetes upgrade was not possible

Changelog opensearch 1.6.0_Beta0

Bugfix

  • The initial dashboard password has been set to a neutral value
  • fixed issue with ‘nil pointer evaluating interface’ error
  • Added a tutorial for using advanced parameters to overwrite values in kubeopsctl.yaml

Changelog rook-ceph 1.6.0_Beta0

Bugfix

  • Removed the image.tag specification from the values.yaml file in the Helm chart to allow for more dynamic configuration of the image version during deployments

Changelog nginx-ingress 1.6.0_Beta0

Bugfix

  • number of replicas set to 3

Changelog setup 1.6.0_Beta0

Bugfix

  • fixed issue with installing ‘kubectl’ on admin node
  • updated calicomultus version to 0.0.4

Changelog harbor 1.6.0_Beta0

Bugfix

  • set replicas to 3

Changelog filebeat 1.6.0_Beta0

Bugfix

  • changed toleration from master to control-plane