Use Kubeopsctl

KubeOpsctl

kubeopsctl is a new KubeOps tool for managing a cluster and its state easily. You simply describe the desired cluster state, and kubeopsctl creates a cluster that matches it.

Using KubeOpsCtl

Using this feature is as easy as configuring a cluster yaml file with the desired cluster state and details, then running the apply command. The detailed steps are below.
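At a glance, the workflow looks like this (the file name kubeopsctl.yaml is only an example):

# 1. describe the desired cluster state in a yaml file
vi kubeopsctl.yaml
# 2. apply the desired state with kubeopsctl
kubeopsctl apply -f kubeopsctl.yaml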

1. Configure Cluster/Nodes/Software using a yaml file

You need a cluster definition file which describes the different aspects of your cluster. This file describes only one cluster.

Full yaml syntax

apiVersion: kubeops/kubeopsctl/alpha/v3  # mandatory
kubeOpsUser: "demo" # mandatory,  change to your username
kubeOpsUserPassword: "Password" # mandatory,  change to your password
kubeOpsUserMail: "demo@demo.net" # change to your email
imagePullRegistry: "registry1.kubernative.net/lima" # mandatory
localRegistry: false # mandatory
### Values for setup configuration ###
clusterName: "testkubeopsctl"  # mandatory
clusterUser: "mnyuser"  # mandatory
kubernetesVersion: "1.28.2" # mandatory, check lima documentation
masterIP: 10.2.10.31 # mandatory
# at least 3 masters and 3 workers are needed
zones:
  - name: zone1
    nodes:
      master:
        - name: cluster1master1
          ipAdress: 10.2.10.11
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
        - name: cluster1master2
          ipAdress: 10.2.10.12
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
      worker:
        - name: cluster1worker1
          ipAdress: 10.2.10.14
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
        - name: cluster1worker2
          ipAdress: 10.2.10.15
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
  - name: zone2
    nodes:
      master:
        - name: cluster1master3
          ipAdress: 10.2.10.13
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: drained
          kubeversion: 1.28.2  
      worker:
- name: cluster1worker3
          ipAdress: 10.2.10.16
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2

# for each of the following tools, set the value to true if you want to install it into your cluster
rook-ceph: false # mandatory
harbor: false # mandatory
opensearch: false # mandatory
opensearch-dashboards: false # mandatory
logstash: false # mandatory
filebeat: false # mandatory
prometheus: false # mandatory
opa: false # mandatory
headlamp: false # mandatory
certman: false # mandatory
ingress: false # mandatory
keycloak: false # mandatory

###Values for Rook-Ceph###
rookValues:
  namespace: kubeops
  nodePort: 31931 # optional, default: 31931
  cluster:
    storage:
      # Will only be used if useAllDevices is set to false and will be ignored if individual devices have been specified on a node.
      deviceFilter: "^sd[a-b]"
      # This setting can be used to store metadata on a different device. Only recommended if an additional metadata device is available.
      # Optional, will be overwritten by the corresponding node-level setting.
      config:
        metadataDevice: "sda"
      # Names of individual nodes in the cluster that should have their storage included.
      # Will only be used if useAllNodes is set to false.
      nodes:
        - name: "<ip-adress of node_1>"
          devices:
            - name: "sdb" 
        - name: "<ip-adress of node_2>"
          deviceFilter: "^sd[a-b]"
          config:
            metadataDevice: "sda" # optional
#-------------------------------------------------------------------------------------------------------------------------------
### Values for Postgres ###
postgrespass: "password" # mandatory, set password for harbor postgres access 
postgres:
  resources:
    requests:
      storage: 2Gi # mandatory, depending on storage capacity
#-------------------------------------------------------------------------------------------------------------------------------
### Values for Redis ###
redispass: "password" # mandatory, set password for harbor redis access 
redis:
  resources:
    requests:
      storage: 2Gi # mandatory, depending on storage capacity
#-------------------------------------------------------------------------------------------------------------------------------
### Values for Harbor deployment ###
## For a detailed explanation of each key see: https://artifacthub.io/packages/helm/harbor/harbor/1.8.1#configuration ##
harborValues: 
  harborpass: "password" # mandatory, set password for harbor access 
  externalURL: https://10.2.10.13 # mandatory, the ip address from which harbor is accessible outside of the cluster
  nodePort: 30003
  harborPersistence:
    persistentVolumeClaim:
      registry:
        size: 5Gi # mandatory, depending on storage capacity
      chartmuseum:
        size: 5Gi # mandatory, depending on storage capacity
      jobservice:
        jobLog:
          size: 1Gi # mandatory, depending on storage capacity
        scanDataExports:
          size: 1Gi # mandatory, depending on storage capacity
      database:
        size: 1Gi # mandatory, depending on storage capacity
      redis:
        size: 1Gi # mandatory, depending on storage capacity
      trivy: 
        size: 5Gi # mandatory, depending on storage capacity
#--------------------------------------------------------------------------------------------------------------------------------------
###Values for filebeat deployment###
filebeatValues:
  namespace: kubeops # optional, default is kubeops   
#--------------------------------------------------------------------------------------------------------------------------------
###Values for Logstash deployment###
##For a detailed explanation of each key see: https://github.com/elastic/helm-charts/releases/tag/v7.16.3###
logstashValues:
  namespace: kubeops
  volumeClaimTemplate:
    resources:
      requests:
        storage: 1Gi # mandatory, depending on storage capacity
#--------------------------------------------------------------------------------------------------------------------------------------
###Values for OpenSearch-Dashboards deployment###
##For a detailed explanation of each key see: https://github.com/opensearch-project/helm-charts/tree/main/charts/opensearch-dashboards###
openSearchDashboardValues:
  namespace: kubeops
  nodePort: 30050
#--------------------------------------------------------------------------------------------------------------------------------
###Values for OpenSearch deployment###
##For a detailed explanation of each key see: https://github.com/opensearch-project/helm-charts/tree/main/charts/opensearch###
openSearchValues:
  namespace: kubeops
  resources:
  persistence:
    size: 4Gi # mandatory
#--------------------------------------------------------------------------------------------------------------------------------
###Values for Prometheus deployment###
prometheusValues:
  prometheusResources:
    nodePort: 32090
#--------------------------------------------------------------------------------------------------------------------------------
###Values for OPA deployment###
opaValues:
  namespace: kubeops
#--------------------------------------------------------------------------------------------------------------------------------
###Values for Headlamp deployment###
headlampValues:
  service:
    nodePort: 30007
#--------------------------------------------------------------------------------------------------------------------------------
###Values for cert-manager deployment###
certmanValues:
  namespace: kubeops
  replicaCount: 3
  logLevel: 2
#--------------------------------------------------------------------------------------------------------------------------------
###Values for ingress-nginx deployment###
ingressValues:
  namespace: kubeops
apiVersion: kubeops/kubeopsctl/alpha/v3  # mandatory
kubeOpsUser: "demo" # mandatory,  change to your username
kubeOpsUserPassword: "Password" # mandatory,  change to your password
kubeOpsUserMail: "demo@demo.net" # change to your email
imagePullRegistry: "registry1.kubernative.net/lima" # mandatory
localRegistry: false # mandatory
### Values for setup configuration ###
clusterName: "testkubeopsctl"  # mandatory
clusterUser: "mnyuser"  # mandatory
kubernetesVersion: "1.28.2" # mandatory, check lima documentation
#masterHost: optional, if you have a hostname; default is the value of "masterIP"
masterIP: 10.2.10.31 # mandatory
firewall: "nftables" # mandatory, default "nftables"
pluginNetwork: "calico" # mandatory, default "calico"
containerRuntime: "containerd" # mandatory, default "containerd"

These are parameters for the cluster creation and for the software used during cluster creation, e.g. the container runtime for running the containers of the cluster. There are also parameters for the lima software (see the lima documentation for further explanation).

### Additional values for cluster configuration
useInsecureRegistry: false # optional, default is false
ignoreFirewallError: false # optional, default is false
serviceSubnet: 192.168.128.0/17 # optional, default "192.168.128.0/17"
podSubnet: 192.168.0.0/17 # optional, default "192.168.0.0/17"
debug: true # optional, default is true
logLevel: vvvvv # optional, default "vvvvv"
systemCpu: "1" # optional, default "1"
systemMemory: "2G" # optional, default "2G"
sudo: true # optional, default is true

Also important are the networking parameters, such as the subnets for the pods and services inside the kubernetes cluster.
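For example, if the defaults collide with an existing network, the subnets could be overridden like this (the values below are only illustrative; pod and service subnets must not overlap with each other or with the node network):

serviceSubnet: 10.96.0.0/16 # illustrative value, must not overlap with podSubnet or the node network
podSubnet: 10.128.0.0/16 # illustrative value, must not overlap with serviceSubnet or the node network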

# at least 3 masters and 3 workers are needed
zones:
  - name: zone1
    nodes:
      master:
        - name: cluster1master1
          ipAdress: 10.2.10.11
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
        - name: cluster1master2
          ipAdress: 10.2.10.12
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
      worker:
        - name: cluster1worker1
          ipAdress: 10.2.10.14
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
        - name: cluster1worker2
          ipAdress: 10.2.10.15
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2
  - name: zone2
    nodes:
      master:
        - name: cluster1master3
          ipAdress: 10.2.10.13
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.28.2  
      worker:
        - name: cluster1worker3
          ipAdress: 10.2.10.16
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: drained
          kubeversion: 1.28.2

So here are the two zones, which contain master and worker nodes.
There are two different node states: active and drained.
There can also be two different kubernetes versions, so updates in tranches are possible with kubeopsctl; see the sketch below. You can also set the system memory and system cpu that kubernetes itself reserves on each node. It is not possible to delete nodes with kubeopsctl; for deleting nodes you have to use lima. If you want to update in tranches, you need at least one master running the greater version.
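For example, a tranche update could first raise the kubeversion of a single master while the remaining nodes stay on the old version. A minimal sketch of the zones section (the newer version number 1.29.1 is only illustrative):

zones:
  - name: zone1
    nodes:
      master:
        - name: cluster1master1
          ipAdress: 10.2.10.11
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi
          status: active
          kubeversion: 1.29.1 # first tranche: this master is updated ahead of the rest
      worker:
        - name: cluster1worker1
          ipAdress: 10.2.10.14
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi
          status: active
          kubeversion: 1.28.2 # later tranche: still on the old version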

All other parameters are explained here.

2. Apply changes to the cluster

Once you have configured the cluster changes in the yaml file, use the following command to apply them.

kubeopsctl apply -f kubeopsctl.yaml
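After the apply run has finished, you can inspect the resulting cluster with standard Kubernetes tooling, for example (assuming kubectl is configured for the new cluster):

kubectl get nodes -o wide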