
Reference

1 - Documentation-kubeopsctl

KubeOps kubeopsctl

This documentation shows all features of kubeopsctl and how to use them.

The KOSI software must be installed before you start.

General commands

Overview of all KUBEOPSCTL commands

Usage:
  kubeopsctl [command] [options]

Options:
  --version       Show version information
  -?, -h, --help  Show help and usage information

Commands:
  apply             Use the apply command to apply a specific config to create or modify the cluster.
  change            Use the change command to modify an existing cluster, e.g. to change the registry.
  drain <argument>  Drain Command.
  uncordon <name>   Uncordon Command.
  upgrade <name>    upgrade Command.
  status <name>     Status Command.

Command ‘kubeopsctl --version’

The kubeopsctl --version command shows you the current version of kubeopsctl.

kubeopsctl --version

The output should be:

0.2.0-Alpha0

Command ‘kubeopsctl --help’

The command kubeopsctl --help gives you an overview of all available commands:

kubeopsctl --help

Alternatively, you can also enter kubeopsctl or kubeopsctl -? in the command line.

Command ‘kubeopsctl apply’

The command kubeopsctl apply is used to set up the KubeOps platform with a configuration file.

Example:

kubeopsctl apply -f kubeopsctl.yaml

-f flag

The -f parameter specifies the YAML parameter file to use.

-l flag

The -l parameter is used to set the log level to a specific value. The default log level is Info. Available log levels are Error, Warning, Info, Debug1, Debug2, Debug3.

Example:

kubeopsctl apply -f kubeopsctl.yaml -l Debug3

Command ‘kubeopsctl change registry’

The command kubeopsctl change registry is used to change the currently used registry to a different one.

Example:

kubeopsctl change registry -f kubeopsctl.yaml -r 10.2.10.11/library -t localhost/library 

-f flag

The -f parameter specifies the YAML parameter file to use.

-r flag

The -r parameter is used to push the Docker images which are included in the package to the given local Docker registry.

-t flag

The -t parameter is used to tag the images with localhost. For the scenario in which the cluster registry is exposed to the admin via a network-internal domain name that the nodes cannot resolve, the -t flag can be used to apply the cluster-internal hostname of the registry instead.

Command ‘kubeopsctl drain’

The command kubeopsctl drain is used to drain a cluster, zone or node.

In this example we are draining a cluster:

kubeopsctl drain cluster/example 

In this example we are draining a zone:

kubeopsctl drain zone/zone1 

In this example we are draining a node:

kubeopsctl drain node/master1 

Command ‘kubeopsctl uncordon’

The command kubeopsctl uncordon is used to uncordon a cluster, zone or node, i.e. to mark the drained nodes as schedulable again.

In this example we are uncordoning a cluster:

kubeopsctl uncordon cluster/example 

In this example we are uncordoning a zone:

kubeopsctl uncordon zone/zone1 

In this example we are uncordoning a node:

kubeopsctl uncordon node/master1 

Command ‘kubeopsctl upgrade’

The command kubeopsctl upgrade is used to upgrade the kubernetes version of a cluster, zone or node.

In this example we are upgrading a cluster:

kubeopsctl upgrade cluster/example -v 1.26.6

In this example we are upgrading a zone:

kubeopsctl upgrade zone/zone1 -v 1.26.6 

In this example we are upgrading a node:

kubeopsctl upgrade node/master1 -v 1.26.6 

-v flag

The -v parameter is used to set the target Kubernetes version, which must be higher than the currently installed version.
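
Putting the commands together, a rolling node upgrade could look like the following sketch, combining the drain, upgrade and uncordon commands documented above (node name and version are illustrative):

kubeopsctl drain node/master1
kubeopsctl upgrade node/master1 -v 1.26.6
kubeopsctl uncordon node/master1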

Command ‘kubeopsctl status’

The command kubeopsctl status is used to get the status of a cluster.

Example:

kubeopsctl status cluster/cluster1 -v 1.26.6 

Prerequisites

Minimum hardware and OS requirements for a Linux machine are:

| OS | Minimum Requirements |
|----|----------------------|
| Red Hat Enterprise Linux 8 | 8 CPU cores, 16 GB memory |
| OpenSUSE 15 | 8 CPU cores, 16 GB memory |

At least one machine should be used as an admin machine for cluster lifecycle management.

Requirements on admin

The following requirements must be fulfilled on the admin machine.

  1. All the utilized users require sudo privileges. If you are using KubeOps as a non-root user, that user needs sudo rights; on openSUSE and RHEL 8 environments, the user should be added to the wheel group. Make sure that you switch to your user with:
su -l <user>
  2. The admin machine must be synchronized with the current time.

  3. You need an internet connection to use the default KubeOps registry registry1.kubernative.net/lima.

In an air-gapped environment, a local registry can be used instead. KubeOps only supports secure registries by default.
If you use an insecure registry, it is important to list it as an insecure registry in the registries configuration (/etc/containers/registries.conf for podman, /etc/docker/daemon.json for docker).
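
As a sketch, an insecure-registry entry could look like this (the registry address 10.2.10.11:30002 is illustrative). For podman, in /etc/containers/registries.conf:

[[registry]]
location = "10.2.10.11:30002"
insecure = true

For docker, in /etc/docker/daemon.json:

{
  "insecure-registries": ["10.2.10.11:30002"]
}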

You can also create your own registry instead of using the default. Check out the how-to guide Create a new Repository for more info.

  4. kosi 2.8.0 must be installed on your machine. Click here to view how it is done in the Quick Start Guide.

  5. It is recommended that runc is uninstalled. To uninstall runc, use the command matching your OS (dnf on RHEL 8, zypper on openSUSE):

    dnf remove -y runc
    zypper remove -y runc

  6. tc should be installed. To install tc, use the command matching your OS:

    dnf install -y tc
    zypper install -y iproute2

  7. For OpenSearch, /etc/sysctl.conf should be configured: the line

vm.max_map_count=262144

should be added. Also, the command

 sysctl -p

should be executed after that.
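
As a sketch, both steps can be performed with root privileges as follows:

echo "vm.max_map_count=262144" >> /etc/sysctl.conf
sysctl -p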

  8. Podman must be installed on your machine. To install Podman, use the command matching your OS:
    dnf install podman
    zypper install podman
  9. You must install the kubeops-basic-plugins:0.4.0.

    Simply type in the following command to install the Basic-Plugins.

    kosi install --hub=public pia/kubeops-basic-plugins:0.4.0
    

    Note that you must install it as the root user.

  10. You must install the kubeops-kubernetes-plugins:0.5.0.

    Simply type in the following command to install the Kubernetes-Plugins.

    kosi install --hub public pia/kubeops-kubernetes-plugins:0.5.0
    

Requirements for each node

The following requirements must be fulfilled on each node.

  1. All the utilized users require sudo privileges. If you are using KubeOps as a non-root user, that user needs sudo rights; on openSUSE and RHEL 8 environments, the user should be added to the wheel group.

  2. Every machine must be synchronized with the current time.

  3. You have to assign lowercase unique hostnames for every machine you are using.

    We recommend using self-explanatory hostnames.

    To set the hostname on your machine use the following command:

    hostnamectl set-hostname <name of node>
    
    • Example
      Use the commands below to set the hostnames on each machine as admin, master, node1 and node2.
      hostnamectl set-hostname admin
      hostnamectl set-hostname master 
      hostnamectl set-hostname node1
      hostnamectl set-hostname node2
      

    Requires sudo privileges

    It is recommended that a DNS service is running; if you don’t have a DNS service, you can edit the /etc/hosts file instead. An example entry in the /etc/hosts file could be:

    10.2.10.12 admin
    10.2.10.13 master1
    10.2.10.14 master2
    10.2.10.15 master3
    10.2.10.16 node1
    10.2.10.17 node2
    10.2.10.18 node3
    

  4. To establish an SSH connection between your machines, you either need an SSH key or you need to install sshpass.

    1. Generate an SSH key on the admin machine using the following command:

      ssh-keygen
      

      There will be two keys generated in the ~/.ssh directory.
      The first key is id_rsa (private) and the second key is id_rsa.pub (public).

    2. Copy the SSH key from the admin machine to your node machine(s) with the following command:

      ssh-copy-id <ip address or hostname of your node machine>
      
    3. Now try establishing a connection to your node machine(s):

      ssh <ip address or hostname of your node machine>
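
If you prefer password-based SSH instead of keys, sshpass can be installed via the package manager, assuming your configured repositories provide it (dnf on RHEL 8, zypper on openSUSE):

      dnf install -y sshpass
      zypper install -y sshpass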
      

How to Configure Cluster/Nodes/Software using yaml file

You need a cluster definition file which describes the different aspects of your cluster. This file describes only one cluster.

Full yaml syntax

apiVersion: kubeops/kubeopsctl/alpha/v3 # mandatory
clusterName: "example" # mandatory
clusterUser: "root" # mandatory
kubernetesVersion: "1.28.2" # mandatory
#masterHost: optional if you have an hostname, default value in "masterIP"
masterIP: 10.2.10.12 # mandatory
firewall: "nftables" # mandatory, default "nftables"
pluginNetwork: "calico" # mandatory, default "calico"
containerRuntime: "containerd" # mandatory
clusterOS: "Red Hat Enterprise Linux" # mandatory, can be "Red Hat Enterprise Linux" or "openSUSE Leap"
### Additional values for cluster configuration
useInsecureRegistry: false # optional, default is false
ignoreFirewallError: false # optional, default is false
serviceSubnet: 192.168.128.0/17 # optional, default "192.168.128.0/17"
podSubnet: 192.168.0.0/17 # optional, default "192.168.0.0/17"
debug: false # optional, default is false
logLevel: vvvvv # optional, default "vvvvv"
systemCpu: "1" # optional, default "1"
systemMemory: "2G" # optional, default "2G"
sudo: true # optional, default is true
tmpCopyDir: "/tmp" # optional, default is /tmp
createCluster: true # optional, default is true
updateRegistry: true # optional, default is true

zones:
  - name: zone1
    nodes:
      master:
        - name: cluster1master1
          ipAdress: 10.2.10.11
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
        - name: cluster1master2
          ipAdress: 10.2.10.12
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
      worker:
        - name: cluster1worker1
          ipAdress: 10.2.10.14
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
        - name: cluster1worker2
          ipAdress: 10.2.10.15
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
  - name: zone2
    nodes:
      master:
        - name: cluster1master3
          ipAdress: 10.2.10.13
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: drained
          kubeversion: 1.28.2  
      worker:
        - name: cluster1worker3
          ipAdress: 10.2.10.16
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2

# set to true if you want to install it into your cluster
rook-ceph: true # mandatory
harbor: true # mandatory
opensearch: true # mandatory
opensearch-dashboards: true # mandatory
logstash: true # mandatory
filebeat: true # mandatory
prometheus: true # mandatory
opa: true # mandatory
headlamp: true # mandatory
certman: true # mandatory
ingress: true # mandatory
keycloak: true # mandatory
velero: true # mandatory

nameSpace: "kubeops" #optional, the default value is different for each application
storageClass: "rook-cephfs" # optional, default value is "rook-cephfs"

###Values for Rook-Ceph###
rookValues:
  namespace: kubeops
  cluster:
    spec:
      dataDirHostPath: "/var/lib/rook" # optional, default is /var/lib/rook
    storage:
      useAllNodes: true # optional, default value: true
      useAllDevices: true # optional, default value: true
      deviceFilter: "^sd[a-b]" # optional, will only be used if useAllDevices is set to false
      config:
        metadataDevice: "sda" # optional, only set this value, if there is a device available
      nodes: # optional if useAllNodes is set to true, otherwise mandatory
        - name: "<ip-adress of node_1>"
          devices:
            - name: "sdb" 
        - name: "<ip-adress of node_2>"
          deviceFilter: "^sd[a-b]"
          config:
            metadataDevice: "sda" # optional
    resources:
      mgr:
        requests:
          cpu: "500m" # optional, default is 500m, limit: 1000m
          memory: "512Mi" # optional, default is 1Gi, limit: 1Gi
      mon:
        requests:
          cpu: "1" # optional, default is 1, limit: 2000m
          memory: "1Gi" # optional, default is 1Gi, limit: 2Gi
      osd:
        requests:
          cpu: "1" # optional, default is 1, limit: 2
          memory: "1Gi" # optional, default is 4Gi, limit: 4Gi
      cephFileSystems:
        requests:
          cpu: "1" # optional, default is 1, limit: 2
          memory: "1Gi" # optional, default is 1, limit: 4Gi
      cephObjectStores:
        requests:
          cpu: "1" # optional, default is 1, limit: 2
          memory: "1Gi" # optional, default is 1Gi, limit: 2Gi
  operator:
    data:
      rookLogLevel: "DEBUG" # optional, default is DEBUG
#-------------------------------------------------------------------------------------------------------------------------------
### Values for Harbor deployment ###
## For detailed explanation for each key see: https://artifacthub.io/packages/helm/harbor/harbor/1.8.1#configuration ##
harborValues: 
  namespace: kubeops # optional, default is kubeops
  harborpass: "password" # mandatory: set password for harbor access
  databasePassword: "Postgres_Password" # mandatory: set password for database access
  redisPassword: "Redis_Password" # mandatory: set password for redis access
  externalURL: http://10.2.10.13:30002 # mandatory, the IP address and port at which Harbor is accessible from outside the cluster
  nodePort: 30002 # mandatory
  hostname: harbor.local # mandatory
  harborPersistence:
    persistentVolumeClaim:
      registry:
        size: 5Gi # mandatory, depending on storage capacity
        storageClass: "rook-cephfs" #optional, default is rook-cephfs
      jobservice:
        jobLog:
          size: 1Gi # mandatory: Depending on storage capacity
          storageClass: "rook-cephfs" #optional, default is rook-cephfs
      database:
        size: 1Gi # mandatory, depending on storage capacity
        storageClass: "rook-cephfs" #optional, default is rook-cephfs
      redis:
        size: 1Gi # mandatory, depending on storage capacity
        storageClass: "rook-cephfs" #optional, default is rook-cephfs
      trivy: 
        size: 5Gi # mandatory, depending on storage capacity
        storageClass: "rook-cephfs" #optional, default is rook-cephfs
#--------------------------------------------------------------------------------------------------------------------------------------
###Values for filebeat deployment###
filebeatValues:
  namespace: kubeops # optional, default is kubeops   
#--------------------------------------------------------------------------------------------------------------------------------
###Values for Logstash deployment###
##For detailed explanation for each key see: https://github.com/elastic/helm-charts/releases/tag/v7.16.3###
logstashValues:
  namespace: kubeops
  volumeClaimTemplate:
    accessModes: 
      - ReadWriteMany #optional, default is [ReadWriteMany]
    resources:
      requests:
        storage: 1Gi # mandatory, depending on storage capacity
    storageClass: "rook-cephfs" #optional, default is rook-cephfs
#--------------------------------------------------------------------------------------------------------------------------------------
###Values for OpenSearch-Dashboards deployment###
##For detailed explanation for each key see: https://github.com/opensearch-project/helm-charts/tree/main/charts/opensearch-dashboards###
openSearchDashboardValues:
  namespace: kubeops
  nodePort: 30050
#--------------------------------------------------------------------------------------------------------------------------------
###Values for OpenSearch deployment###
##For detailed explanation for each key see: https://github.com/opensearch-project/helm-charts/tree/main/charts/opensearch###
openSearchValues:
  namespace: kubeops
  opensearchJavaOpts: "-Xmx512M -Xms512M" # optional, default is -Xmx512M -Xms512M
  resources:
    requests:
      cpu: "250m" # optional, default is 250m
      memory: "1024Mi" # optional, default is 1024Mi
    limits:
      cpu: "300m" # optional, default is 300m
      memory: "3072Mi" # optional, default is 3072Mi
  persistence:
    size: 4Gi # mandatory
    enabled: "true" # optional, default is true
    enableInitChown: "false" # optional, default is false
    labels:
      enabled: "false" # optional, default is false
    storageClass: "rook-cephfs" # optional, default is rook-cephfs
    accessModes:
      - "ReadWriteMany" # optional, default is {ReadWriteMany}
  securityConfig:
    enabled: false # optional, default value: false
    ### Additional values can be set, if securityConfig is enabled:
    # path: "/usr/share/opensearch/plugins/opensearch-security/securityconfig"
    # actionGroupsSecret:
    # configSecret:
    # internalUsersSecret: internal-users-config-secret
    # rolesSecret:
    # rolesMappingSecret:
    # tenantsSecret:
    # config:
    #   securityConfigSecret: ""
    #   dataComplete: true
    #   data: {}
  replicas: "3" # optional, default is 3
#--------------------------------------------------------------------------------------------------------------------------------
###Values for Prometheus deployment###
prometheusValues:
  namespace: kubeops # optional, default is kubeops
  privateRegistry: false # optional, default is false
  grafanaUsername: "user" # optional, default is user
  grafanaPassword: "password" # optional, default is password
  grafanaResources:
    storageClass: "rook-cephfs" # optional, default is rook-cephfs
    storage: 5Gi # optional, default is 5Gi
    nodePort: 30211 # optional, default is 30211

  prometheusResources:
    storageClass: "rook-cephfs" # optional, default is rook-cephfs
    storage: 25Gi # optional, default is 25Gi
    retention: 10d # optional, default is 10d
    retentionSize: "24GB" # optional, default is 24GB
    nodePort: 32090
#--------------------------------------------------------------------------------------------------------------------------------
###Values for OPA deployment###
opaValues:
  namespace: kubeops

#--------------------------------------------------------------------------------------------------------------------------------
###Values for Headlamp deployment###
headlampValues:
  service:
    nodePort: 30007
#--------------------------------------------------------------------------------------------------------------------------------
###Values for cert-manager deployment###
certmanValues:
  namespace: kubeops
  replicaCount: 3
  logLevel: 2
#--------------------------------------------------------------------------------------------------------------------------------
###Values for ingress-nginx deployment###
ingressValues:
  namespace: kubeops
keycloakValues:
  namespace: "kubeops" # Optional, default is "keycloak"
  storageClass: "rook-cephfs" # Optional, default is "rook-cephfs"
  keycloak:
    auth:
      adminUser: admin # Optional, default is admin
      adminPassword: admin # Optional, default is admin
      existingSecret: "" # Optional, default is ""
  postgresql:
    auth:
      postgresPassword: "" # Optional, default is ""
      username: bn_keycloak # Optional, default is "bn_keycloak"
      password: "" # Optional, default is ""
      database: bitnami_keycloak # Optional, default is "bitnami_keycloak"
      existingSecret: "" # Optional, default is ""
veleroValues:
  namespace: "velero"
  accessKeyId: "your_s3_storage_username"
  secretAccessKey: "your_s3_storage_password"
  useNodeAgent: false
  defaultVolumesToFsBackup: false
  provider: "aws"
  bucket: "velero"
  useVolumeSnapshots: false
  backupLocationConfig:
    region: "minio"
    s3ForcePathStyle: true
    s3Url: "http://minio.velero.svc:9000"

kubeopsctl.yaml in detail

apiVersion: kubeops/kubeopsctl/alpha/v3  # mandatory
imagePullRegistry: "registry1.kubernative.net/lima" # mandatory
localRegistry: false # mandatory
### Values for setup configuration ###
clusterName: "testkubeopsctl"  # mandatory
clusterUser: "mnyuser"  # mandatory
kubernetesVersion: "1.28.2" # mandatory, check lima documentation
#masterHost: optional if you have an hostname, default value in "masterIP"
masterIP: 10.2.10.11 # mandatory
firewall: "nftables" # mandatory, default "nftables"
pluginNetwork: "calico" # mandatory, default "nftables"
containerRuntime: "containerd" # mandatory, default "containerd"
clusterOS: "Red Hat Enterprise Linux" # optional, can be "Red Hat Enterprise Linux" or "openSUSE Leap", remove this line if you want to use default installed OS on admin machine but it has to be "Red Hat Enterprise Linux" or "openSUSE Leap"

These are parameters for the cluster creation and for the software used for cluster creation, e.g. the container runtime for running the containers of the cluster. There are also parameters for the LIMA software (see the LIMA documentation for further explanation).

### Additional values for cluster configuration
useInsecureRegistry: false # optional, default is false
ignoreFirewallError: false # optional, default is false
serviceSubnet: 192.168.128.0/17 # optional, default "192.168.128.0/17"
podSubnet: 192.168.0.0/17 # optional, default "192.168.0.0/17"
debug: true # optional, default is true
logLevel: vvvvv # optional, default "vvvvv"
systemCpu: "1" # optional, default "1"
systemMemory: "2G" # optional, default "2G"
sudo: true # optional, default is true

Also important are the networking parameters, such as the subnets for the pods and services inside the Kubernetes cluster.

# at least 3 masters and 3 workers are needed
zones:
  - name: zone1
    nodes:
      master:
        - name: cluster1master1
          ipAdress: 10.2.10.11
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
        - name: cluster1master2
          ipAdress: 10.2.10.12
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
      worker:
        - name: cluster1worker1
          ipAdress: 10.2.10.14
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
        - name: cluster1worker2
          ipAdress: 10.2.10.15
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
  - name: zone2
    nodes:
      master:
        - name: cluster1master3
          ipAdress: 10.2.10.13
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2  
      worker:
        - name: cluster1worker3
          ipAdress: 10.2.10.16
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: drained
          kubeversion: 1.28.2

New here are the zones, which contain master and worker nodes. A node can have one of two states: active or drained. There can also be two different Kubernetes versions in the cluster, so updating in tranches is possible with kubeopsctl. You can also set the system memory and system CPU that each node reserves for Kubernetes itself. It is not possible to delete nodes with kubeopsctl; to delete nodes you have to use LIMA. If you want to update in tranches, you need at least one master with the higher version.

The remaining parameters configure the installation of the KubeOps software and are explained in the full YAML syntax above.

How to use kubeopsctl

Apply changes to the cluster

kubeopsctl apply -f kubeopsctl.yaml

2 - KubeOps Version

KubeOps Version

Here is the list of KubeOps versions and their supported tool versions. Make sure to install or upgrade according to the supported versions only.

| KubeOps | Supported KOSI/SINA Version | Supported LIMA Version | Supported kubeopsctl Version |
|---------|-----------------------------|------------------------|------------------------------|
| KubeOps 1.4.0 | KOSI 2.9.X | LIMA 1.4.X | kubeopsctl 1.4.X |
| KubeOps 1.3.0 | KOSI 2.9.X | LIMA 1.1.X | kubeopsctl 0.2.X |
| KubeOps 1.2.0 | SINA 2.8.X | LIMA 1.0.X | kubeopsctl 0.1.X |
| KubeOps 1.1.7 | SINA 2.7.X | LIMA 0.10.X | |
| KubeOps 1.0.10 | SINA 2.6.X | LIMA 0.10.X | |

KubeOps 1.4.0 supports

| Tools | Supported Tool Version | Supported Package Version | SHA256 Checksum |
|-------|------------------------|---------------------------|-----------------|
| fileBeat | V 8.5.1 | kubeops/filebeat-os:1.4.0 | 03a3338bfdc30ee5899a1cba5994bcc77278082adcd7b3d66ae0f55357f2ebc5 |
| harbor | V 2.9.1 | kubeops/harbor:1.4.0 | c407b7e2fd8f1a22bad4374061fceb04f4a2b5befccbb891a76b24e81333ae1e |
| helm | V 3.8.0 | kubeops/helm:1.4.0 | 433d84f30aa4ba6ae8dc0d5cba4953e3f2a933909374af0eb54450ad234f870d |
| logstash | V 8.4.0 | kubeops/logstash-os:1.4.0 | 5ff7b19fa2e72f1c4ac4b1c37f478c223c265c1277200c62f393c833cbb9db1b |
| opa-gatekeeper | V 3.11.0 | kubeops/opa-gatekeeper:1.4.0 | 882af738ac3c10528d5b694f6181e1f1e5f749df947a8afdb0c6ac97809ce5ef |
| opensearch | V 2.11.1 | kubeops/opensearch-os:1.4.0 | e72094321b2e76d4de754e56e8b9c40eb79c57059078cf58fd01bc43ab515d4a |
| opensearch-dashboards | V 2.11.1 | kubeops/opensearch-dashboards:1.4.0 | 0ff2889aeff8e73c567c812ea709d633ff7a912a13bc8374ebb7c09aed52bac6 |
| prometheus | V 43.2.1 | kubeops/kube-prometheus-stack:1.4.0 | 08880a2218ab776e3fd61f95133e8d02e1d2e37b84bcc2756b76eda713eac4ae |
| rook | V 17.2.5 | kubeops/rook-ceph:1.4.0 | 5e306c26c6a8fed92b13d47bb127f9d3a6f0b9fcc341ff0efc3c1eaf8d311567 |
| cert-manager | V 1.11.0 | kubeops/cert-manager:1.4.0 | ac0a5ff859c1e6846ecbf9fa77c5c448d060da4889ab3bc568317db866f97094 |
| ingress-nginx | V 1.8.5 | kubeops/ingress-nginx:1.4.0 | 2128fe81553d80fa491c5978a7827402be79b5f196863a836667b59f3a35c0f8 |
| kubeops-dashboard | V 1.0.0 | kubeops/kubeops-dashboard:1.4.0 | b0623b9126a19e5264bbd887b051fd62651cd9683fefdae62fce998af4558e1e |
| keycloak | V 16.0.5 | kubeops/keycloak:1.4.0 | b309624754edf53ffea2ce7d772b70d665b0f5ae176e8596fcb403e96e80adec |
| velero | V 1.12.3 | kubeops/velero:1.4.0 | b762becf38dbcac716f1d2b446fb35ad40c72aa4d928ccbc9dd362a7ad227fc2 |
| clustercreate | V 1.4.0 | kubeops/clustercreate:1.4.0 | ebd2bccfedd99b051c930d23c3b1c123c40e70c098d2b025d29dee301f1b92d8 |
| setup | V 1.4.0 | kubeops/setup:1.4.0 | d74e2be55e676946f6a996575f75ac9161db547ad402da8b66a333dfd7936449 |
| calicomultus | V 0.0.3 | lima/calicomultus:0.0.3 | c4a40fd0ab66eb0669da7da82e2f209d67ea4d4c696c058670306d485e483f62 |

KubeOps 1.3.1 supports

| Tools | Supported Tool Version | Supported Package Version | SHA256 Checksum |
|-------|------------------------|---------------------------|-----------------|
| fileBeat | V 8.5.1 | kubeops/sina-filebeat-os:1.3.1 | 476b23d4c484104167c58caade4f59143412cbbb863e54bb109c3e4c3a592940 |
| harbor | V 2.9.1 | kubeops/harbor:1.3.1 | 4862e55ecbfee007f7e9336db7536c064d18020e6b144766ff1338a5d162fc56 |
| helm | V 3.8.0 | kubeops/helm:1.3.1 | 99f4eac645d6a3ccb937252fde4880f7da8eab5f84c6143c287bd6c7f2dcce65 |
| logstash | V 8.4.0 | kubeops/sina-logstash-os:1.3.1 | 48bee033e522bf3c4863e98623e2be58fbd145d4a52fd4f56b5e1e7ef984bd6d |
| opa-gatekeeper | V 3.11.0 | kubeops/opa-gatekeeper:1.3.1 | f8d5633912f1df1e303889e2e3a32003764f0b65c8a77ece88d7c3435080a86b |
| opensearch | V 2.9.0 | kubeops/sina-opensearch-os:1.3.1 | a09cf6f29aac5b929425cf3813570fe105ed617ccfdafd0e4593dbbe719a6207 |
| opensearch-dashboards | V 2.9.0 | kubeops/sina-opensearch-dashboards:1.3.1 | 86858a23b15c4c67e5eee7a286d8c9a82568d331d39f814746247e742cc56a11 |
| prometheus | V 43.2.1 | kubeops/sina-kube-prometheus-stack:1.3.1 | aacced30732c08e8edf439e3dd0d40bd09575f7728f7fca54294c704bce2b76c |
| rook | V 17.2.5 | kubeops/rook-ceph:1.3.1 | b3d5b9eace80025d070212fd227d9589024e1eb70e571e3e690d5709202fd26f |
| cert-manager | V 1.11.0 | kubeops/cert-manager:1.3.1 | 52ba2c9b809a3728d73cf55d99a769c9f083c7674600654c09c749d6e5f3bdf3 |
| ingress-nginx | V 1.7.0 | kubeops/ingress-nginx:1.3.1 | 91007878ef416724c09f9a1c8d498f3a3314cd011ab0c2c2ca81163db971773d |
| kubeops-dashboard | V 1.0.0 | kubeops/kubeops-dashboard:1.3.1 | 70fb266137ac94896f841d27b341f610190afe7bed5d5baad53f450d8f925c78 |
| keycloak | V 16.0.5 | kubeops/keycloak:1.3.1 | 853912a83fd3eff9bb92f8a6285f132d10ee7775b3ff52561c8a7d281e956090 |
| clustercreate | V 1.3.1 | kubeops/clustercreate:1.3.1 | 0526a610502922092cd8ea52f98bec9a64e3f1d1f6ac7a29353f365ac8d43050 |
| setup | V 1.3.2 | kubeops/setup:1.3.1 | 7c610df29cdfe633454f78a6750c9419bf26041cba69ca5862a98b69a3a17cca |
| calicomultus | V 0.0.3 | lima/calicomultus:0.0.3 | c4a40fd0ab66eb0669da7da82e2f209d67ea4d4c696c058670306d485e483f62 |

KubeOps 1.2.1 supports

| Tools | Supported Tool Version | Supported Package Version | SHA256 Checksum |
|-------|------------------------|---------------------------|-----------------|
| fileBeat | V 8.5.1 | kubeops/sina-filebeat-os:1.2.0 | 473546e78993ed4decc097851c84ade25aaaa068779fc9e96d17a0cb68564ed8 |
| harbor | V 2.6.4 | kubeops/harbor:1.2.0 | 156f4713f916771f60f89cd8fb1ea58ea5fcb2718f80f3e7fabd47aebb416ecd |
| helm | V 3.8.0 | kubeops/helm:1.2.0 | 8d793269e0ccfde37312801e68369ca30db3f6cbe768cc5b5ece5e3ceb8500f3 |
| logstash | V 8.4.0 | kubeops/sina-logstash-os:1.2.0 | e2888e76ee2dbe64a137ab8b552fdc7a485c4d9b1db8d1f9fe7a507913f0452b |
| opa-gatekeeper | V 3.11.0 | kubeops/opa-gatekeeper:1.2.0 | a45598107e5888b322131194f7a4cb70bb36bff02985326050f0181ac18b00e4 |
| opensearch | V 2.9.0 | kubeops/sina-opensearch-os:1.2.0 | c3b3e52902d25c6aa35f6c9780c038b25520977b9492d97e247bb345cc783240 |
| opensearch-dashboards | V 2.9.0 | kubeops/sina-opensearch-dashboards:1.2.0 | ced7643389b65b451c1d3ac0c3d778aa9a99e1ab83c35bfc5f2e750174d9ff83 |
| prometheus | V 43.2.1 | kubeops/sina-kube-prometheus-stack:1.2.0 | 20d91eb1d565aa55f9d33a1dc7f4ff38256819270b06f60ad3c3a1464eae1f52 |
| rook | V 17.2.5 | kubeops/rook-ceph:1.2.0 | 6a8b99938924b89d50537e26f7778bc898668ed5b8f01bbc07475ad6b77293e7 |
| cert-manager | V 1.11.0 | kubeops/cert-manager:1.2.0 | f1bb269dac94ebedc46ea4d3c01c9684e4035eace27d9fcb6662321e08cf6208 |
| ingress-nginx | V 1.7.0 | kubeops/ingress-nginx:1.2.0 | 1d87f9d606659eebdc286457c7fc35fd4815bf1349d66d9d9ca97cf932d1230c |
| kubeops-dashboard | V 1.0.0 | kubeops/kubeops-dashboard:1.2.0 | e084df99ecb8f5ef9e4fcdd022adfc9e0e566b760d4203ed5372a73d75276427 |
| keycloak | V 16.0.5 | kubeops/keycloak:1.2.0 | 0a06b689357bb0f4bc905175aaad5dad75b302b27a21cff074abcb00c12bee06 |
| clustercreate | V 1.2.2 | kubeops/clustercreate:1.2.2 | 771d031d69ac91c92ee9efcb3d7cefc506415a6d43f1c2475962c3f7431ff79e |
| setup | V 1.2.6 | kubeops/setup:1.2.6 | 6492b33cd96ccc76fdc4d430f60c47120d90336d1d887dc279e272f9efe6978e |

3 - Glossary

Glossary

This section defines a glossary of common KubeOps terms.

KOSI package

A KOSI package is a .kosi file created by bundling package.yaml with other essential YAML files and artifacts. This package is ready to install on your Kubernetes clusters.

KubeOps Hub

KubeOps Hub is a secure repository where published KOSI packages can be stored and shared. You are welcome to contribute to and use the public hub; at the same time, KubeOps provides a way to access your own private hub.

Installation Address

It is the distinctive address automatically generated for each published package on KubeOps Hub. It is constructed from the name of the package creator, the package name, and the package version.
You can use this address when installing the package on your Kubernetes cluster.

It is indicated by the install column in KubeOps Hub.

Deployment name

When a package is installed, KOSI creates a deployment name to track that installation. Alternatively, KOSI also lets you specify the deployment name of your choice during the installation.
A single package may be installed many times into the same cluster and create multiple deployments.
It is indicated by Deployment column in the list of package deployments.

Tasks

As the name suggests, “Tasks” in package.yaml are one or more sets of instructions to be executed. These are defined by utilizing Plugins.

Plugins

KOSI provides many functions which enable you to define tasks to be executed using your package. These are called Plugins. They are the crucial part of your package development.

LIMAROOT Variable

LIMAROOT is an environment variable for LIMA. It is the place where LIMA stores information about your clusters. The environment variable LIMAROOT is set by default to /var/lima. However, LIMA also allows you to set your own LIMAROOT.

KUBEOPSROOT Variable

The environment variable KUBEOPSROOT stores the location of the KOSI plugins and the config.yaml. To use the variable, the config.yaml and the plugins have to be copied there manually.
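
For example, both variables can be set for the current shell session as follows (the KUBEOPSROOT path is illustrative; /var/lima is the documented default):

export LIMAROOT=/var/lima
export KUBEOPSROOT=/var/kubeops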

apiVersion

It shows the supported KubeOps tool API version. You do not need to change it unless otherwise specified.

Registry

As the name suggests, it is the location where docker images can be stored. You can either use the default KubeOps registry or specify your own local registry for AirGap environments. You need an internet connection to use the default registry provided by KubeOps.

Maintenance Package

KubeOps provides a package for the supported Kubernetes tools. These packages help you update the Kubernetes tools to the desired versions on your clusters along with the dependencies.

4 - FAQs

FAQ - Kubeopsctl

Known Issues

ImagePullBackOffs in Cluster

If you have ImagePullBackOffs in your cluster, e.g. for Prometheus, you can simply use the kubeopsctl change registry command again, e.g. kubeopsctl change registry -r :30002/library -t localhost:30002/library -f kubeopsctl.yaml

FAQ - KubeOps SINA

Error Messages

There is an error message regarding Remote-Certificate

  • Error: http://hub.kubernative.net/dispatcher?apiversion=3&vlientversion=2.X.0 : 0
  • X stands for the respective version
  • CentOS 7 cannot update the version by itself (ca-certificates-2021.2.50-72.el7_9.noarch).
    • Fix: yum update ca-certificates -y or yum update
  • Manual download and install of ca-certificates RPM:
    • Download: curl http://mirror.centos.org/centos/7/updates/x86_64/Packages/ca-certificates-2021.2.50-72.el7_9.noarch.rpm -o ca-certificates-2021.2.50-72.el7_9.noarch.rpm
    • Install: yum install ca-certificates-2021.2.50-72.el7_9.noarch.rpm -y

SINA Usage

Can I use SINA with sudo?

  • At the moment, SINA has no sudo support.
  • Docker and Helm, which are required, need sudo permissions.

I get an error message when I try to search an empty Hub?

  • Known bug, will be fixed in a later release.
  • Need at least one package in the Hub before you can search.

Package Configuration

In my package.yaml, can I use uppercase characters as a name?

  • Currently, only lowercase characters are allowed.
  • This will be fixed in a later release.

I have an error message that says “Username or password contain non-Latin characters”?

  • Known bug, may occur with incorrect username or password.
  • Please ensure both are correct.

In my template.yaml, can I just write a value without an associated key?

  • No, a YAML file requires a key-value structure.

Do I have to use the template plugin in my SINA package?

  • No, you don’t have to use the template plugin if you don’t want to.

I have an error message that says “reference not set to an instance of an object”?

  • Error from our tool for reading YAML files.
  • Indicates an attempt to read a value from a non-existent key in a YAML file.

I try to template but the value of a key stays empty.

  • Check the correct path of your values.
  • If your key contains “-”, the template plugin may not recognize it.
  • Removing “-” will solve the issue.

FAQ - KubeOps LIMA

Error Messages

LIMA 0.10.6 Cluster not ready

  • You have to apply the calico.yaml in the $LIMAROOT folder:
kubectl apply -f $LIMAROOT/calico.yaml

read header failed: Broken pipe

For LIMA versions >= 0.9.0:

  • LIMA stops at the line

ansible Playbook : COMPLETE : Ansible playbooks complete.

  • Search the file

$LIMAROOT/dockerLogs/dockerLogs_latest.txt 

for Broken pipe. From the line containing Broken pipe, check if the following lines exist:

debug3: mux_client_read_packet: read header failed: Broken pipe

debug2: Received exit status from master 1

Shared connection to vli50707 closed.

<vli50707> ESTABLISH SSH CONNECTION FOR USER: demouser

<vli50707> SSH: ansible.cfg set ssh_args: (-C)(-o)(ControlMaster=auto)(-o)

(ControlPersist=60s)

If this is the case, the line ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s in the file /etc/ansible/ansible.cfg inside the currently running LIMA container must be commented out or removed.

Example:

docker container ls

CONTAINER ID   IMAGE                                        COMMAND       CREATED      STATUS      PORTS   NAMES
99cabe7133e5   registry1.kubernative.net/lima/lima:v0.8.0   "/bin/bash"   6 days ago   Up 6 days           lima-v0.8.0

docker exec -it 99cabe7133e5 bash

vi /etc/ansible/ansible.cfg 

Change the line ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s to #ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s or delete the line.
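
As a non-interactive alternative, the line can be commented out from inside the container with a sketch like:

sed -i 's/^ssh_args/#ssh_args/' /etc/ansible/ansible.cfg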

I want to delete the cluster master node and rejoin the cluster. When trying to rejoin the node a problem occurs and rejoining fails. What can be done?

To delete the cluster master, we need to set the cluster master to a different master machine first.

  1. On the admin machine: change the IP address from the current to the new cluster master in:

    1. /var/lima/<name_of_cluster>/clusterStorage.yaml
    2. ~/.kube/config
  2. Delete the node

  3. Delete the images to prevent interference: ctr -n k8s.io i rm $(ctr -n k8s.io i ls -q)

  4. Change the IP on the new cluster master in

/etc/kubernetes/admin.conf
  5. Change the IPs in the config maps:

    1. kubectl edit cm kubeadm-config -n kube-system
    2. kubectl edit cm kube-proxy -n kube-system
    3. kubectl edit cm cluster-info -n kube-public
  6. Restart the kubelet

  7. Rejoin the node
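
For step 6, restarting the kubelet on the new cluster master can be done, as a sketch, with:

systemctl restart kubelet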

Using LIMA on RHEL8 fails to download metadata for repo “rhel-8-for-x86_64-baseos-rpms”. What should I do?

This is a common problem which happens now and then, but the real source of error is difficult to identify. Nevertheless, the workaround is quick and easy: clean up the current repo data, refresh the subscription-manager and update the whole operating system. This can be done with the following commands:

dnf clean all

rm -frv /var/cache/dnf

subscription-manager refresh

dnf update -y

How does LIMA handle SELinux?

SELinux will be temporarily deactivated during the execution of LIMA tasks. After the execution is finished, SELinux is automatically reactivated. This means you are not required to manually re-enable SELinux every time you work with LIMA.

My pods are stuck: config-update 0/1 ContainerCreating

  1. These pods are responsible for updating the load balancer; you can update them manually and then delete the stuck pod.

  2. You can try redeploying the daemonset to the kube-system namespace.

I can not upgrade past Kubernetes 1.21.x

  1. Please make sure you only have the latest dependency packages for your environment in your /packages folder.

  2. It could be related to this Kubernetes bug: https://v1-22.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/

  3. Try upgrading past 1.21.x manually.

My master can not join, it fails when creating /root/.kube

Try the following commands on the master:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config 

Some nodes are missing the loadbalancer

  1. Check if the Loadbalancer staticPod file can be found in the manifest folder of the node.

  2. If it isn’t there, please copy it from another node.

Some nodes didn’t upgrade. What to do now?

  1. Retry to upgrade your cluster.

  2. If LIMA thinks you are already on the target version, edit the stored data of your cluster at $LIMAROOT/myClusterName/clusterStorage.yaml.

    Set the key kubernetesVersion to the lowest Kubernetes version present on a node in your cluster.

Could not detect a supported package manager from the followings list: [‘PORTAGE’, ‘RPM’, ‘PKG’, ‘APT’], or the required PYTHON library is not installed. Check warnings for details.

  1. Check if you got a package manager.

  2. You have to install Python 3 with yum install python3 and then make the python command point to python3 with update-alternatives --config python.
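
On RHEL 8, a non-interactive sketch of these two steps could be:

yum install -y python3
alternatives --set python /usr/bin/python3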

Aborting, target uses SELINUX but PYTHON bindings (LIBSELINUX-PYTHON) aren’t installed!

You have to install libselinux-python on your cluster machine so you can install a firewall via LIMA.

FAQ - KubeOps PIA

The httpd service is terminating too long. How can I force the shut down?

  1. Use the following command to force shut down the httpd service:
kubectl delete deployment pia-httpd --grace-period=0 --force
  2. Most deployments have a networking service, like our httpd does.

Delete the networking service with the command:

kubectl delete svc pia-httpd-service --grace-period=0 --force

I get the error that some nodes are not ‘Ready’. How do I fix the problem?

  1. Use the kubectl get nodes command to find out which node is not ready.

  2. To identify the problem, get access to the shell of the non-ready node. Use systemctl status kubelet to get status information about the state of the kubelet.

  3. The most common cause of this error is that the kubelet fails to automatically identify the node. In this case, the kubelet must be restarted manually on the non-ready machine. This is done with systemctl enable kubelet and systemctl start kubelet.

  4. If the issue persists, the reason behind the error can be evaluated by your cluster administrators.

FAQ KubeOps PLATFORM

Support of S3 storage configuration doesn’t work

At the moment, the sina-package rook-ceph:1.1.2 (utilized in kubeOps 1.1.3) is employing a Ceph version with a known bug that prevents the proper setup and utilization of object storage via the S3 API. If you require the functionality provided by this storage class, we suggest considering the use of kubeOps 1.0.7. This particular version does not encounter the aforementioned issue and provides comprehensive support for S3 storage solutions.

Change encoding to UTF-8

Please make sure that your uservalues.yaml is using UTF-8 encoding.

If you get issues with encoding, you can convert your file to UTF-8 (for example from ISO-8859-1) with the following command. Write the output to a new file, since redirecting to the input file would truncate it:

iconv -f ISO-8859-1 -t UTF-8 uservalues.yaml > uservalues-utf8.yaml
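
You can check the detected encoding before and after the conversion with:

file -i uservalues.yaml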

How to update Calico Multus?

  1. Get the podSubnet located in clusterStorage.yaml ($LIMAROOT/<clustername>/clusterStorage.yaml)

  2. Create a values.yaml with the key podSubnet and your pod subnet as the value

    Example:

    podSubnet: 192.168.0.0/17

  3. Get the deployment name of the current calicomultus installation with the sina list command

Example:

| Deployment | Package | PublicHub | Hub |
|-------------|--------------------------------------|--------------|----------|
| 39e6da | local/calicomultus:0.0.1 |        | local |

  4. Update the deployment with
sina update lima/calicomultus:0.0.2 --dname <yourdeploymentname> --hub=public -f values.yaml

--dname: important parameter, mandatory for the update command.

-f values.yaml: important so that the right podSubnet is used.

Known issue:

error: resource mapping not found for name: calico-kube-controllers namespace:co.yaml: no matches for kind PodDisruptionBudget in version policy/v1beta1

ensure CRDs are installed first

Create Cluster-Package with firewalld:

If you want to create a cluster with firewalld and the kubeops/clustercreate:1.0. package, you have to manually pull the firewalld maintenance package for your OS first, after executing the kubeops/setup:1.0.1 package.

Opensearch pods do not start:

If the following message appears in the OpenSearch pod logs, the vm.max_map_count is too low:

ERROR: [1] bootstrap checks failed

[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

On all control-plane and worker nodes the line vm.max_map_count=262144 must be added to the file /etc/sysctl.conf.

After that the following command must be executed in the console on all control-plane and worker nodes: sysctl -p

Finally, the Opensearch pods must be restarted.
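
As a sketch, the pods can be restarted by deleting them so their controller recreates them; the namespace and label below are assumptions that depend on your deployment:

kubectl delete pod -n kubeops -l app.kubernetes.io/name=opensearch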

FAQ - KubeOps KUBEOPSCTL

Known issue:

Upgrading the Kubernetes version within the cluster is not possible with the current Beta3 release; it will be fixed in the next release. HA capability is only given after 12 hours. For earlier HA capability, manually move the file /etc/kubernetes/manifests/haproxy.yaml out of the folder and back in again.