Getting Started

1 - About Kubeopsctl

What is Kubeopsctl?

kubeopsctl is a tool for managing a cluster and its state. You describe a desired cluster state, and kubeopsctl creates a cluster that matches it.

Why use kubeopsctl?

You set a desired cluster state, and you do not have to do anything else to achieve it; a minimal sketch follows the highlights below.

Highlights
  • creating a cluster
  • adding nodes to your cluster
  • draining nodes
  • updating single nodes
  • labeling nodes for zones
  • adding platform software into your cluster
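
As a minimal sketch of such a desired state (keys taken from the full example in the QuickStart section; a complete file also needs the mandatory registry and setup values shown there), a kubeopsctl.yaml could look like this:

apiVersion: kubeops/kubeopsctl/alpha/v3
clusterName: "example"
kubernetesVersion: "1.28.2"
masterIP: 10.2.10.11
zones:
  - name: zone1
    nodes:
      master:
        - name: master1
          ipAdress: 10.2.10.11
          status: active
          kubeversion: 1.28.2

Applying this file with kubeopsctl apply -f kubeopsctl.yaml brings the cluster to the described state.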

2 - Release Notes

KubeOps 1.4.0

Changelog kubeopsctl 1.4.0

Bugfix

  • if the variable rook-ceph is not mentioned in the file kubeopsctl.yaml, the default value false is used
  • if the variable harbor is not mentioned in the file kubeopsctl.yaml, the default value false is used
  • if the variable opensearch is not mentioned in the file kubeopsctl.yaml, the default value false is used
  • if the variable opensearch-dashboards is not mentioned in the file kubeopsctl.yaml, the default value false is used
  • if the variable logstash is not mentioned in the file kubeopsctl.yaml, the default value false is used
  • if the variable filebeat is not mentioned in the file kubeopsctl.yaml, the default value false is used
  • if the variable prometheus is not mentioned in the file kubeopsctl.yaml, the default value false is used
  • if the variable opa is not mentioned in the file kubeopsctl.yaml, the default value false is used
  • if the variable headlamp is not mentioned in the file kubeopsctl.yaml, the default value false is used
  • if the variable certman is not mentioned in the file kubeopsctl.yaml, the default value false is used
  • if the variable ingress is not mentioned in the file kubeopsctl.yaml, the default value false is used
  • if the variable keycloak is not mentioned in the file kubeopsctl.yaml, the default value false is used
  • if the variable velero is not mentioned in the file kubeopsctl.yaml, the default value false is used
  • resolved an issue where kubeopsctl.yaml was not found, which led to a System.IO.FileNotFoundException
  • if the variable systemCpu is not mentioned in the file kubeopsctl.yaml, the default value 200m is used
  • if the variable systemMemory is not mentioned in the file kubeopsctl.yaml, the default value 200Mi is used

Changelog rook-ceph 1.4.0

Updated

  • updated package version

Bugfix

  • fixed issue where default requested cpu and memory were set too high

Changelog kube-prometheus-stack 1.4.0

Updated

  • added default dashboards to grafana

Changelog harbor 1.4.0

Updated

  • updated harbor package version to app version 2.9.3

Changelog opensearch and opensearch-dashboards 1.4.0

Updated

  • updated both packages to app version 2.11.1

Changelog ingress-nginx 1.4.0

Updated

  • updated ingress-nginx to application version 1.8.5

Changelog lima 1.4.0

Bugfix

  • If the default clusterUser root is used but kubeopsctl is executed as a non-root user, an error message is now displayed.

KubeOps 1.4.0-Beta0

Changelog kubeopsctl 1.4.0-Beta0

Bugfix

  • added null handling to the storageClass name
  • fixed issue where the existence of cluster resources was not checked in the kubeopsctl.yaml
  • fixed issue where the clustermaster was not automatically upgraded in the upgrade command

Changelog lima 1.4.0-Beta0

Bugfix

  • fixed an issue where firewalld could not be removed before cluster creation

Changelog rook-ceph 1.4.0

Updated

  • updated package version

Bugfix

  • fixed issue where default requested cpu and memory were set too high

Changelog kube-prometheus-stack 1.4.0

Updated

  • added default dashboards to grafana

Changelog harbor 1.4.0

Updated

  • updated harbor package version to app version 2.9.3

Changelog opensearch and opensearch-dashboards 1.4.0

Updated

  • updated both packages to app version 2.11.1

KubeOps 1.4.0-Alpha5

Bugfix

  • fixed issue with pushing images to harbor
  • fixed issue where podman installation was not checked
  • fixed issue with clusterip parameter in lima change config command
  • fixed issue related to zone labels for nodes
  • fixed issue with converting V2 model to V3 model

KubeOps 1.4.0-Alpha4

Changelog kubeopsctl 1.4.0-Alpha4

Bugfix

  • Fixed an issue that prevented kubeopsctl from stopping if an image could not be pulled
  • fixed an issue where systemCpu and systemMemory were not templated correctly
  • fixed an issue where package installations were not idempotent
  • fixed issue where the cluster could not be upgraded
  • fixed issue with an infinite loop in the kubeopsctl output
  • fixed issue with wrong permissions of the .kube/config file

Update

  • added automatic removal of firewalld and runc, and installation of tc
  • updated ingress-nginx to application version 1.8.5
  • updated opensearch to application version 2.11.1

Lima 1.4.0-Beta0

  • added error logging in case kubeopsroot is empty or null

Lima 1.4.0-Alpha4

Bugfix

  • added initial check if kubeopsroot is not set

KubeOps 1.4.0-Alpha3

Changelog kubeopsctl 1.4.0-Alpha3

Bugfix

  • Fixed an issue that results in an ImagePullBackOff error while installing Velero in an air-gapped environment
  • Fixed an issue that prevents using kubeopsctl status
  • Fixed an issue with drain that prevents draining nodes because of violated PodDisruptionBudgets
  • Fixed an issue with uncordon that prevents uncordoning nodes
  • Fixed an issue that created a delay in displaying the status messages on the console

Update

  • removed limaRoot in apiVersion kubeops/kubeopsctl/alpha/v3
  • removed kubeOpsRoot in apiVersion kubeops/kubeopsctl/alpha/v3

Lima 1.4.0-Alpha3

Bugfix

  • Fixed an issue that led to a Broken pipe error in ansible

Harbor 1.4.0

Update

  • Raised maxJobWorkers from 10 to 30 in order to prevent 504 Gateway Time-out errors

KubeOps 1.4.0-Alpha2

Changelog kubeopsctl 1.4.0-Alpha2

Bugfix

  • Fixed an issue that was writing “Command contains quotes and double quotes! can not mix within shell!” into the logs without there being a real error
  • Fixed an issue that was still writing log output on the console
  • Fixed an issue that prevents showing Debug log level in the logs
  • Fixed an issue that prevents using kubeopsctl status
  • Fixed an issue that prevents installing filebeat
  • Fixed an issue that prevents installing logstash
  • Fixed an issue that prevents installing opensearch
  • Fixed an issue that prevents installing prometheus
  • Fixed an issue that results in an ImagePullBackOff error while installing Velero in an air-gapped environment

KubeOps 1.4.0-Alpha1

Changelog kubeopsctl 1.4.0-Alpha1

Bugfix

  • Fixed an issue that leads to an Unhandled exception: System.FormatException
  • Fixed small output issues related to plugin bash

KubeOps 1.4.0-Alpha0

Changelog kubeopsctl 1.4.0-Alpha0

New

  • Reworked Console output
  • Old Console Output is now stored in log files in $KUBEOPSROOT
  • Logfiles with timestamps are now created in $KUBEOPSROOT
  • Skip cluster creation is now possible
  • Skip update registry is now possible
  • It is now possible to install software into a cluster that was not created by lima
  • Added new command drain
  • Added new command uncordon
  • Added new command upgrade
  • Added new command status

Updated

  • Updated Lima Version to lima 1.4.0Alpha0
  • Updated filebeat package Version to filebeat 1.4.0
  • Updated harbor package Version to harbor 1.4.0
  • Updated rook-ceph package Version to rook-ceph 1.4.0
  • Updated helm package Version to helm 1.4.0
  • Updated logstash package Version to logstash 1.4.0
  • Updated opa-gatekeeper package Version to opa-gatekeeper 1.4.0
  • Updated opensearch package Version to opensearch 1.4.0
  • Updated opensearch-dashboards package Version to opensearch-dashboards 1.4.0
  • Updated prometheus package Version to prometheus 1.4.0
  • Updated rook package Version to rook 1.4.0
  • Updated cert-manager package Version to cert-manager 1.4.0
  • Updated ingress-nginx package Version to ingress-nginx 1.4.0
  • Updated kubeops-dashboard package Version to kubeops-dashboard 1.4.0
  • Updated keycloak package Version to keycloak 1.4.0

LIMA 1.4.0_Alpha0

Updated

  • Old Console Output is now stored in log files in $KUBEOPSROOT

Changelog setup 1.4.0

Updated

  • Updated Lima Version to lima 1.2.0_Alpha0

Changelog clustercreate 1.4.0

Updated

  • Updated filebeat package Version to filebeat 1.4.0
  • Updated harbor package Version to harbor 1.4.0
  • Updated rook-ceph package Version to rook-ceph 1.4.0
  • Updated helm package Version to helm 1.4.0
  • Updated logstash package Version to logstash 1.4.0
  • Updated opa-gatekeeper package Version to opa-gatekeeper 1.4.0
  • Updated opensearch package Version to opensearch 1.4.0
  • Updated opensearch-dashboards package Version to opensearch-dashboards 1.4.0
  • Updated prometheus package Version to prometheus 1.4.0
  • Updated rook package Version to rook 1.4.0
  • Updated cert-manager package Version to cert-manager 1.4.0
  • Updated ingress-nginx package Version to ingress-nginx 1.4.0
  • Updated kubeops-dashboard package Version to kubeops-dashboard 1.4.0
  • Updated keycloak package Version to keycloak 1.4.0

Changelog filebeat 1.4.0

Updated

  • updated package version

Changelog harbor 1.4.0

Bugfixes

  • fixed an issue in update that prevents the data migration of harbor
  • fixed an issue that prevents the scaling of PVCs

Changelog rook-ceph 1.4.0

Updated

  • updated package version

Changelog helm 1.4.0

Updated

  • updated package version

Changelog logstash 1.4.0

Updated

  • updated package version

Changelog opa-gatekeeper 1.4.0

Updated

  • updated package version

Changelog opensearch 1.4.0

Updated

  • updated package version

Changelog opensearch-dashboards 1.4.0

Updated

  • updated package version

Changelog prometheus 1.4.0-Beta0

Updated

  • added first dashboard as json-data in file

Changelog prometheus 1.4.0

Updated

  • updated package version

Changelog rook 1.4.0

Updated

  • updated package version

Changelog cert-manager 1.4.0

Updated

  • updated package version

Changelog ingress-nginx 1.4.0

Updated

  • updated package version

Changelog kubeops-dashboard 1.4.0

Updated

  • updated package version

Changelog keycloak 1.4.0

Updated

  • updated package version

KubeOps 1.3.2

changelog rook 1.2.1

  • fixed issue with templating of block storage class

changelog logstash 1.2.1

  • fixed issue with authentication

Changelog kubeopsctl 0.2.2

updated

  • added new containerd package

Bugfixes

  • fixed issue with missing haproxy image
  • fixed issue with installing kubectl in container

KubeOps 1.3.1

Changelog kubeopsctl 0.2.1

  • fixed issue where the container runtime could not be updated

KubeOps 1.3.0

Changelog kubeopsctl 0.2.0

Bugfixes

  • Fixed a bug that wrote tmpCopyDir content incorrectly
  • Fixed a bug that prevented the Harbor update from KubeOps 1.2.x to KubeOps 1.3.x; the upgrade still has to be performed two times. Check the FAQs for more information

Changelog LIMA 1.1.0

Bugfixes

  • Fixed a bug that caused ImagePullBackOff for loadbalancer and registry, because the manifest was changed before copying the image

Changelog setup 1.3.2

Updated

  • Updated Lima Version to lima 1.1.0

KubeOps 1.3.0-Beta1

Changelog kubeopsctl 0.2.0-Beta1

Updated

  • Updated Lima Version to lima 1.1.0Beta1
  • Updated filebeat package Version to filebeat 1.3.1
  • Updated harbor package Version to harbor 1.3.1
  • Updated rook-ceph package Version to rook-ceph 1.3.1
  • Updated helm package Version to helm 1.3.1
  • Updated logstash package Version to logstash 1.3.1
  • Updated opa-gatekeeper package Version to opa-gatekeeper 1.3.1
  • Updated opensearch package Version to opensearch 1.3.1
  • Updated opensearch-dashboards package Version to opensearch-dashboards 1.3.1
  • Updated prometheus package Version to prometheus 1.3.1
  • Updated rook package Version to rook 1.3.1
  • Updated cert-manager package Version to cert-manager 1.3.1
  • Updated ingress-nginx package Version to ingress-nginx 1.3.1
  • Updated kubeops-dashboard package Version to kubeops-dashboard 1.3.1
  • Updated keycloak package Version to keycloak 1.3.1

LIMA 1.1.0-Beta1

Updated

  • Updated the loadbalancer image due to critical CVEs
  • added kubernetes package for v1.29.1
  • added kubernetes package for v1.29.0
  • added kubernetes package for v1.28.5
  • added kubernetes package for v1.28.4
  • added kubernetes package for v1.28.3
  • added kubernetes package for v1.27.9
  • added kubernetes package for v1.27.8
  • added kubernetes package for v1.27.7

Changelog setup 1.3.1

Updated

  • Updated Lima Version to lima 1.1.0Beta1

Changelog clustercreate 1.3.1

Updated

  • Updated filebeat package Version to filebeat 1.3.1
  • Updated harbor package Version to harbor 1.3.1
  • Updated rook-ceph package Version to rook-ceph 1.3.1
  • Updated helm package Version to helm 1.3.1
  • Updated logstash package Version to logstash 1.3.1
  • Updated opa-gatekeeper package Version to opa-gatekeeper 1.3.1
  • Updated opensearch package Version to opensearch 1.3.1
  • Updated opensearch-dashboards package Version to opensearch-dashboards 1.3.1
  • Updated prometheus package Version to prometheus 1.3.1
  • Updated rook package Version to rook 1.3.1
  • Updated cert-manager package Version to cert-manager 1.3.1
  • Updated ingress-nginx package Version to ingress-nginx 1.3.1
  • Updated kubeops-dashboard package Version to kubeops-dashboard 1.3.1
  • Updated keycloak package Version to keycloak 1.3.1

Changelog filebeat 1.3.1

Updated

  • updated package version

Changelog harbor 1.3.1

Bugfixes

  • fixed an issue in update that prevents the data migration of harbor
  • fixed an issue that prevents the scaling of PVCs

Changelog rook-ceph 1.3.1

Updated

  • updated package version

Changelog helm 1.3.1

Updated

  • updated package version

Changelog logstash 1.3.1

Updated

  • updated package version

Changelog opa-gatekeeper 1.3.1

Updated

  • updated package version

Changelog opensearch 1.3.1

Updated

  • updated package version

Changelog opensearch-dashboards 1.3.1

Updated

  • updated package version

Changelog prometheus 1.3.1

Updated

  • updated package version

Changelog rook 1.3.1

Updated

  • updated package version

Changelog cert-manager 1.3.1

Updated

  • updated package version

Changelog ingress-nginx 1.3.1

Updated

  • updated package version

Changelog kubeops-dashboard 1.3.1

Updated

  • updated package version

Changelog keycloak 1.3.1

Updated

  • updated package version

KubeOps 1.3.0-Beta0

Changelog kubeopsctl 0.2.0-Beta0

Bugfixes

  • fixed issue with Kubernetes upgrade
  • fixed issue with wrong pause image in containerd config.toml
  • fixed issue with unstable rewriting of kubernetes manifests
  • fixed issue where ImagePullBackOff errors occurred because harbor was not ready yet

Updated

  • Updated Lima Version to lima 1.1.0Beta0
  • added backwards compatibility to kubeopsctl 0.1.0

Changelog setup 1.3.0

Updated

  • added lima:1.1.0Beta0

Changelog rook-ceph 1.3.0

Updated

  • Changed the installation/update/delete procedure from yaml files to a helm chart
  • Updated rook from 1.10.9 to 1.12.7
  • removed some adjustable values for compatibility and easier configuration:
    • removed the “placement” value
    • removed the “removeOSDsIfOutAndSafeToRemove” value, which is now hardcoded to false
    • removed FS-type templating for the “blockStorageClass”, as it is not recommended to use anything other than ext4

Changelog harbor 1.3.0

Updated

  • removed the external redis and database and changed to internal services
  • Updated Harbor from version 2.6.4 to 2.9.1
  • removed some adjustable values for compatibility and easier configuration:
    • the redis and postgres password configuration moved into the “harborValues” section (as “redisPassword” and “databasePassword”)
    • the storage values for the external instances have been removed; instead, the storage configuration for redis and postgres inside “harborValues” is used (the “persistentVolumeClaim” section contains the keys “redis” and “database”, the latter referencing the internal postgres instance)

KubeOps 1.3.0-Alpha6

Changelog kubeopsctl 0.2.0-Alpha6

Bugfixes

  • fixed a bug that prevented the use of special characters in passwords
  • fixed a bug that prevented the upgrade to a specific kubernetes version

KubeOps 1.3.0-Alpha5

Changelog setup 1.3.0

Updated

  • updated lima to 1.1.0-Alpha3

Changelog clustercreate 1.3.0

Updated

  • added parameter for templating the folder of lima images
  • added installation of packages in update-section, so that previously missing packages can be added after cluster creation

Bugfixes

  • fixed templating for the “harborPersistence” key

Changelog harbor 1.3.0

Bugfixes

  • fixed templating for the “harborPersistence” key, to allow templating of the PVC size and storageClass
  • removed irrelevant entry in chart-files

KubeOps 1.3.0-Alpha4

Changelog kubeopsctl 0.2.0-Alpha4

Updated

  • added more parameters for rook resources in kubeopsctl.yaml (see Changelog rook-ceph)

Changelog clustercreate 1.3.0

Updated

  • added global namespace variable to the values, which will be used if no other namespace has been set for the individual packages
  • added new values for the rook-ceph installation (see Changelog rook-ceph)
  • added more defaults for the installation values

Bugfixes

  • fixed updating the loadbalancer after cluster creation

Changelog rook-ceph 1.3.0

Bugfixes

  • fixed issue with too high resource limits

Updated

  • added templating and adjustable values for resource-requests of objectstore and filesystem-pods

KubeOps 1.3.0-Alpha3

Changelog kubeopsctl 0.2.0-Alpha3

Bugfixes

  • fixed bug where packages from kubeops 1.2.0 were pulled in airgap environment

KubeOps 1.3.0-Alpha2

Changelog kubeopsctl 0.2.0-Alpha2

Bugfixes

  • fixed bug in Model that prevents cluster creation
  • fixed bug in Model that prevents installing rook
  • fixed bug in Model that prevents installing harbor

Changelog LIMA 1.1.0Alpha2

Bugfixes

  • fixed a bug that prevents a healthy cluster creation
  • fixed a bug that prevents adding masters to the loadbalancer

Changelog setup 1.3.0

Updated

  • added lima:1.1.0Alpha2
  • added templating for updating the loadbalancer after cluster creation

Changelog clustercreate 1.3.0

Bugfixes

  • added updating the loadbalancer after cluster creation

KubeOps 1.3.0-Alpha1

Changelog kubeopsctl 0.2.0-Alpha1

Bugfixes

  • fixed issue with not recognized values in kubeopsctl.yaml

KubeOps 1.3.0-Alpha0

Changelog kubeopsctl 0.2.0-Alpha0

Bugfixes

  • fixed issue with Kubernetes Upgrade
  • fixed issue with false pause image in containerd config.toml

Updated

  • Updated Lima Version to lima 1.1.0Alpha0"

Changelog setup 1.3.0

Updated

  • added lima:1.1.0Alpha0

Changelog rook-ceph 1.3.0

Updated

  • Changed the installation/update/delete procedure from yaml files to a helm chart
  • Updated rook from 1.10.9 to 1.12.7
  • removed some adjustable values for compatibility and easier configuration:
    • removed the “placement” value
    • removed the “removeOSDsIfOutAndSafeToRemove” value, which is now hardcoded to false
    • removed FS-type templating for the “blockStorageClass”, as it is not recommended to use anything other than ext4

Changelog harbor 1.3.0

Updated

  • removed the external redis and database and changed to internal services
  • Updated Harbor from version 2.6.4 to 2.9.1
  • removed some adjustable values for compatibility and easier configuration:
    • the redis and postgres password configuration moved into the “harborValues” section (as “redisPassword” and “databasePassword”)
    • the storage values for the external instances have been removed; instead, the storage configuration for redis and postgres inside “harborValues” is used (the “persistentVolumeClaim” section contains the keys “redis” and “database”, the latter referencing the internal postgres instance)

Changelog KOSI 2.9.0_Alpha0

New

  • SINA got renamed to KOSI
  • the new code format for creating packages is named package.kosi
  • added the possibility to use SHA tag values
  • added an image clean-up if you use retagging in KOSI
  • added the KOSI remove command, for removing your own packages from the hub

KubeOps 1.2.0-Beta3

Changelog setup 1.2.3

Updated

  • Updated Lima Version to lima 1.0.0Beta3

Changelog clustercreate 1.2.0

Updated

  • added lima update loadbalancer to the logic for cluster creation

Changelog kubeopsctl 0.1.0-Beta3

fixed

  • fixed issue with not working loadbalancer and network pods on cluster creation

KubeOps 1.2.0-Beta2

Changelog setup 1.2.2

Updated

  • Updated Lima Version to lima 1.0.0Beta2

Changelog kubeopsctl 0.1.0-Beta2

fixed

  • fixed issue with Kubernetes upgrade
  • fixed issue with wrong pause image in containerd config.toml

KubeOps 1.2.0-Beta1

Changelog LIMA 1.0.0-Beta1

new maintenance packages for kubernetes

  • added package for v1.28.2
  • added package for v1.28.1
  • added package for v1.28.0
  • added package for v1.27.6
  • added package for v1.27.5
  • added package for v1.27.4
  • added package for v1.27.3
  • added package for v1.26.9
  • added package for v1.26.8
  • added package for v1.26.7
  • added package for v1.26.6
  • added package for v1.25.14
  • added package for v1.25.13
  • added package for v1.25.12
  • added package for v1.25.11

Updated

  • updated the haproxy version to 2.8.1
  • updated crontab time to 12pm

Changelog setup 1.2.1

Updated

  • Updated Lima Version to lima 1.0.0Beta1

Changelog kubeopsctl 0.1.0-Beta1

Updated

  • updated the haproxy version to 2.8.1
  • updated crontab time to 12pm

KubeOps 1.2.0-Beta0

Changelog kubeopsctl 0.1.0-Beta0

New

  • apply -f
  • change registry -f

Changelog LIMA 1.0.0-Beta0

New

  • Added support for podman
  • Removed Docker dependency

Changelog Sina 2.8.0_Beta0

New

  • added sina encrypt -f <user.yaml>
  • added flag (--cf <user.yaml>) for sina install to input encrypted yaml files
  • added flag (--cf <user.yaml>) for sina update to input encrypted yaml files
  • added flag (--cf <user.yaml>) for sina delete to input encrypted yaml files
  • added sina check command
  • added flag (-t) for sina pull, to use the cluster’s internal address for the registry, which is resolvable by nodes (can only be used with the -r flag)

KubeOps 1.2.0-Alpha15

Changelog kubeopsctl 0.1.0-Alpha15

Bugfixes

  • fixed an issue that prevented the upgrade from platform 1.1 to platform 1.2

Changelog LIMA 1.0.0-Alpha14

Bugfixes

  • fixed an issue where nodes were not ready after cluster creation

KubeOps 1.2.0-Alpha14

Changelog kubeopsctl 0.1.0-Alpha14

Bugfixes

  • fixed an issue where nodes were not ready after cluster creation

Changelog LIMA 1.0.0-Alpha13

Bugfixes

  • fixed an issue where nodes were not ready after cluster creation

KubeOps 1.2.0-Alpha13

Changelog kubeopsctl 0.1.0-Alpha13

Bugfixes

  • fixed an issue where the local-registry had no storage
  • fixed an issue where cni images were not on all master registries

Changelog LIMA 1.0.0Alpha12

Bugfixes

  • fixed an issue where the local-registry had no storage
  • fixed an issue where cni images were not on all master registries

Changelog keycloak 1.2.0

Bugfixes

  • added missing / in URL

Updates

  • updated readinessProbe from 30 to 300 seconds

KubeOps 1.2.0-Alpha12

Changelog kubeopsctl 0.1.0-Alpha12

Bugfixes

  • fixed an issue where kubeopsctl was not airgap-ready
  • fixed an issue where images for the loadbalancer were not copied
  • fixed an issue where nodes could not be joined after the cluster creation

KubeOps 1.2.0-Alpha11

Changelog kubeopsctl 0.1.0-Alpha11

Bugfixes

  • fixed a bug where the status of the node was not changed
  • fixed a bug where an exception was thrown while changing the status of a node
  • fixed a bug where an exception was thrown while applying a label to a node

Changelog LIMA 1.0.0Alpha10

  • fixed a bug where kubernetes images were not pulled from the local registry

KubeOps 1.2.0-Alpha10

Changelog kubeopsctl 0.1.0-Alpha10

Bugfixes

  • fixed a bug where creating or updating the cluster threw an exception because of a non-existing file

KubeOps 1.2.0-Alpha9

Changelog kubeopsctl 0.1.0-Alpha9

Bugfixes

  • fixed an issue where the change to a different registry did not push the calico image
  • fixed an issue where the images were not pushed to harbor
  • fixed an issue where changing the registry for headlamp threw an unhandled exception
  • fixed an issue where a line break was missing for the calico image in $LIMAROOT/images

Known Issues

  • all nodes need an internet connection
  • Logs are not exported to OpenSearch
  • Keycloak Pods are not running

Changelog LIMA 1.0.0Alpha7

  • fixed an issue where the calico images were tagged wrong during cluster creation
  • fixed an issue where the change to a different registry tagged the calico images wrong

Changelog setup 1.2.0

Updated

  • updated lima to alpha7

Changelog rook-ceph 1.2.0

Bugfixes

  • Fixed a bug related to indicating the correct nodePort on the service

KubeOps 1.2.0-Alpha8

Changelog kubeopsctl 0.1.0-Alpha8

Bugfixes

  • fixed an issue where apply was not working as a non-root user
  • fixed an issue with upgrading kubernetes versions

Changelog LIMA 1.0.0Alpha6

  • fixed an issue where apply was not working as a non-root user
  • fixed an issue with upgrading kubernetes versions

KubeOps 1.2.0-Alpha7

Changelog kubeopsctl 0.1.0-Alpha7

Bugfixes

  • fixed an issue where change registry did not work and was not airgap ready

Changelog LIMA 1.0.0Alpha5

  • fixed issue with using airgap images

KubeOps 1.2.0-Alpha6

Changelog Sina 2.8.0_Alpha4

New

  • added sina encrypt -f <user.yaml>
  • added flag (--cf <user.yaml>) for sina install to input encrypted yaml files
  • added flag (--cf <user.yaml>) for sina update to input encrypted yaml files
  • added flag (--cf <user.yaml>) for sina delete to input encrypted yaml files
  • added sina check command
  • added flag (-t) for sina pull, to use the cluster’s internal address for the registry, which is resolvable by nodes (can only be used with the -r flag)

Bugfixes

  • fixed retagging in pull for user/v3 packages
  • fixed a checksum problem caused by a deprecated backend API
  • fixed an issue where build user/v2 was not possible
  • updated sina plugins

Changelog kubeopsctl 0.1.0-Alpha6

New

  • apply -f
  • change registry -f

Bugfixes

  • fixed an issue where prometheus could not be installed
  • fixed issue where opensearch could not be installed
  • fixed an issue where no tool could be installed after the first apply
  • fixed an issue where the images were not pushed to the local harbor

Changelog LIMA 1.0.0Alpha4

  • fixed bug where calico was not installed
  • fixed bug where the calico pod disruption budget had a wrong version
  • renamed lima image to 1.0.0

Changelog setup 1.2.0

Updated

  • Updated Lima Version to 1.0.0Alpha4

KubeOps 1.2.0-Alpha5

Changelog kubeopsctl 0.1.0-Alpha5

New

  • apply -f
  • change registry -f

Bugfixes

  • fixed issue with ImagePullBackOff

ChangeLog LIMA 1.0.0Alpha3

New

  • Added support for podman
  • Removed Docker dependency

Updated

  • Updated sina

ChangeLog SINA 2.8.0Alpha3

New

  • added sina encrypt -f <user.yaml>
  • added flag (--cf <user.yaml>) for sina install to input encrypted yaml files
  • added flag (--cf <user.yaml>) for sina update to input encrypted yaml files
  • added flag (--cf <user.yaml>) for sina delete to input encrypted yaml files
  • added sina check command
  • added flag (-t) for sina pull, to use the cluster’s internal address for the registry, which is resolvable by nodes (can only be used with the -r flag)

Bugfixes

  • fixed retagging in pull for user/v3 packages
  • fixed a checksum problem caused by a deprecated backend API
  • fixed an issue where build user/v2 was not possible
  • updated sina plugins

KubeOps 1.2.0-Alpha4

Changelog kubeopsctl 0.1.0-Alpha4

New

  • apply -f
  • change registry -f

Bugfixes

  • fixed issue with label

ChangeLog LIMA 1.0.0Alpha3

New

  • Added support for podman
  • Removed Docker dependency

Bugfixes

  • fixed issue with pushing calico

KubeOps 1.2.0-Alpha3

Changelog kubeopsctl 0.1.0-Alpha3

New

  • apply -f
  • change registry -f

Updated

  • added more default values for kubeopsctl.yaml

Bugfixes

  • fixed issue with the $KUBEOPSROOT and $LIMAROOT variables
  • fixed issue with pulling packages
  • fixed issue with writing kubeopsctl.yaml
  • fixed issue with writing KUBEOPSROOT and LIMAROOT into the .bashrc file

ChangeLog LIMA 1.0.0Alpha2

New

  • Added support for podman
  • Removed Docker dependency

Bugfixes

  • fixed issue with container already in use

KubeOps 1.2.0-Alpha2

Changelog kubeopsctl 0.1.0-Alpha2

New

  • apply -f
  • change registry -f

Bugfixes

  • Fixed an issue where the sina directory was not found
  • Fixed an issue where the plugin directory was not found

Changelog Sina 2.8.0_Alpha2

New

  • added sina encrypt -f <user.yaml>
  • added flag (--cf <user.yaml>) for sina install to input encrypted yaml files
  • added flag (--cf <user.yaml>) for sina update to input encrypted yaml files
  • added flag (--cf <user.yaml>) for sina delete to input encrypted yaml files
  • added sina check command
  • added flag (-t) for sina pull, to use the cluster’s internal address for the registry, which is resolvable by nodes (can only be used with the -r flag)

Bugfixes

  • fixed a checksum problem caused by a deprecated backend API
  • fixed an issue where build user/v2 was not possible
  • updated sina plugin

Changelog SINA Plugin 0.3.0_Alpha2

New

  • Added pull command to plugin

Bugfixes

  • fixed sina install

Update

  • Update dependencies from sina

KubeOps 1.2.0-Alpha1

Changelog LIMA 1.0.0-Alpha1

Bugfixes

  • Corrected a bug related to updating the pauseVersion in containerd config when upgrading the Kubernetes version

Changelog setup 1.2.1

Updated

  • Updated Lima Version to 1.0.0-Alpha1

Changelog Sina 2.8.0_Alpha1

New

  • added sina encrypt -f <user.yaml>
  • added flag (--cf <user.yaml>) for sina install to input encrypted yaml files
  • added flag (--cf <user.yaml>) for sina update to input encrypted yaml files
  • added flag (--cf <user.yaml>) for sina delete to input encrypted yaml files
  • added sina check command
  • added flag (-t) for sina pull, to use the cluster’s internal address for the registry, which is resolvable by nodes (can only be used with the -r flag)

Bugfixes

  • fixed a checksum problem caused by a deprecated backend API
  • fixed an issue where build user/v2 was not possible

Changelog SINA Plugin 0.3.0_Alpha1

New

  • Added pull command to plugin

Update

  • Update dependencies from sina

ChangeLog kubeopsctl 0.1.0Alpha2

updated

  • added more default values for kubeopsctl.yaml

fixed

  • fixed issue with the $KUBEOPSROOT and $LIMAROOT variables
  • fixed issue with pulling packages
  • fixed issue with writing kubeopsctl.yaml
  • fixed issue with writing KUBEOPSROOT and LIMAROOT into the .bashrc file

ChangeLog LIMA 1.0.0Alpha2

fixed

  • fixed issue with a missing airgap calico multus image
  • fixed issue with container already in use

Changelog cert-manager 1.2.0

Updated

  • updated package version to 1.2.0

Changelog sina-filebeat-os 1.2.0

Updated

  • updated package version to 1.2.0

Changelog Harbor 1.2.0

New

  • Added ingress for dashboard access

Updated

  • updated harbor version to v2.6.4
  • updated package version to 1.2.0

Changelog kubeops-dashboard 1.2.0

New

  • Added ingress for dashboard access

Updated

  • updated package version to 1.2.0

Changelog ingress-nginx 1.2.0

New

  • Added templating for ingress service

Updated

  • updated package version to 1.2.0

Changelog keycloak 1.2.0

New

  • Added package keycloak 1.2.0

Updated

  • updated package version to 1.2.0

Changelog clustercreate 1.2.0

New

  • Added keycloak

Updated

  • updated package version to 1.2.0
  • updated all packages to 1.2.0

Changelog setup 1.2.0

Updated

  • updated package version to 1.2.0
  • updated all packages to 1.2.0
  • updated lima to 1.0.0
  • removed docker dependencies

Changelog sina-logstash-os 1.2.0

Updated

  • updated package version to 1.2.0

Changelog opa-gatekeeper 1.2.0

Updated

  • updated package version to 1.2.0

Changelog sina-opensearch-os 1.1.2

Updated

  • updated opensearch version to v2.9.0
  • updated package version to 1.2.0

Changelog sina-opensearch-dashboards 1.2.0

New

  • Added ingress for dashboard access

Updated

  • updated opensearch version to v2.9.0
  • updated package version to 1.2.0

Changelog sina-kube-prometheus-stack 1.2.0

New

  • Added ingress for dashboard access

Updated

  • updated package version to 1.2.0

Changelog helm 1.2.0

Updated

  • updated package version to 1.2.0

Changelog rook-ceph 1.2.0

New

  • Added ingress for dashboard access

Bugfixes

  • Updated ceph to 17.2.6 and rook to 1.12.1

Changelog Lima 1.0.0-Alpha1

New

  • Added support for podman
  • Removed Docker dependency

Bugfixes

  • Fixed an issue where maintenance packages could not be pulled

Changelog kubeopsctl 0.1.0-Alpha1

New

  • apply -f
  • change registry -f

Bugfixes

  • Fixed an issue where change registry could be run while apply is running
  • Fixed an issue where new values could not be processed properly
  • Fixed an issue where packages could not be pulled

Changelog Grafana/Prometheus

New

  • Added SMTP-Alerting

3 - QuickStart

This is a comprehensive instruction guide for getting started with a simple cluster.

Requirements

You can choose between Red Hat Enterprise Linux 8 and OpenSUSE 15. All of your machines need the same OS.

A total of 7 machines are required:

  • one admin node
  • three master nodes
  • three worker nodes

1x Admin-Node

  • 2 CPUs
  • 2 GB RAM
  • 50 GB Boot disk storage

3x Master-Node

  • 4 CPUs
  • 8 GB RAM
  • 50 GB Boot disk storage

3x Worker-Node

  • 8 CPUs
  • 16 GB RAM
  • 50 GB Boot disk storage
  • 50 GB unformatted, unpartitioned disk storage for Ceph

For more information about the hard drives for rook-ceph, visit the rook-ceph prerequisites page

Requirements on Admin

The following requirements must be fulfilled on the admin machine:

  1. All the users require sudo privileges. We recommend using the root user.

  2. Admin machine must be synchronized with the current time.

  3. You need an internet connection to use the default KubeOps registry registry1.kubernative.net/lima.

  4. Install kubeopsctl and podman
    Create an account and log in on the official KubeOps website to download the RPM from the Download section. Copy the kubeopsctl RPM to the home directory of your admin machine.
    You must run the following commands to install kubeopsctl and podman:

# Red Hat Enterprise Linux 8
dnf install -y kubeopsctl*.rpm
dnf install -y podman

# OpenSUSE 15
zypper install -y kubeopsctl*.rpm
zypper install -y podman
  5. $KUBEOPSROOT and $LIMAROOT must be set.
echo "export KUBEOPSROOT=\"${HOME}/kubeops\"" >> $HOME/.bashrc
echo "export LIMAROOT=\"${HOME}/kubeops/lima\"" >> $HOME/.bashrc
source $HOME/.bashrc

Prerequisites on Master and Worker Nodes

The following requirements must be fulfilled on master and worker nodes:

  1. All the users require sudo privileges. We recommend using the root user.

  2. Every machine must be synchronized with the current time.

  3. You have to assign lowercase unique hostnames for every master and worker machine you are using.

    We recommend using self-explanatory hostnames.

    To set the hostname on your machine use the following command:

    hostnamectl set-hostname <name of node>
    
    • Example
      Use the commands below to set the hostname on the particular machine as master1, master2, master3, node1, node2, or node3.
      hostnamectl set-hostname master1
      hostnamectl set-hostname master2
      hostnamectl set-hostname master3
      hostnamectl set-hostname node1
      hostnamectl set-hostname node2
      hostnamectl set-hostname node3
      
  4. If you are using Red Hat Enterprise Linux 8, you must remove firewalld. Kubeopsctl installs nftables by default.
    You can use the following commands to remove firewalld:

    systemctl disable --now firewalld
    systemctl mask firewalld
    dnf remove -y firewalld
    reboot
    

It is recommended that a DNS service is running; if you don’t have a DNS service, you can change the /etc/hosts file. An example entry in the /etc/hosts file could be:

  10.2.10.11 master1
  10.2.10.12 master2
  10.2.10.13 master3
  10.2.10.14 node1
  10.2.10.15 node2
  10.2.10.16 node3

Prerequisites on Admin Node

  1. To establish an SSH connection between your machines, you need to distribute the SSH key from your admin to each of your master and worker nodes.

    1. Generate an SSH key on the admin machine using the following command

      ssh-keygen
      

      Two keys will be generated in the ~/.ssh directory.
      The first key is id_rsa (private) and the second key is id_rsa.pub (public).

    2. Copy the ssh public key from your admin machine to all node machines with ssh-copy-id, e.g.:

      ssh-copy-id master1
      
    3. Now try to establish a connection to the node machines from your admin machine, e.g.:

      ssh master1
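
    For convenience, a small shell loop (using the example hostnames above) can copy the key to all nodes at once:

      for h in master1 master2 master3 node1 node2 node3; do
        ssh-copy-id "$h"
      done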
      

Platform Setup

In order to install your cluster, you need the following steps:

  1. kubeopsctl.yaml creation
vi kubeopsctl.yaml

Example kubeopsctl.yaml

The names of the nodes should be the same as the hostnames of the machines.
apiVersion: kubeops/kubeopsctl/alpha/v3 # mandatory
imagePullRegistry: "registry1.kubernative.net/lima"
localRegistry: true
clusterName: "example"
kubernetesVersion: "1.28.2"
masterIP: 10.2.10.11
systemCpu: "200m"
systemMemory: "200Mi"

zones:
  - name: zone1
    nodes:
      master:
        - name: master1
          ipAdress: 10.2.10.11
          status: active
          kubeversion: 1.28.2
        - name: master2
          ipAdress: 10.2.10.12
          status: active
          kubeversion: 1.28.2
      worker:
        - name: worker1
          ipAdress: 10.2.10.14
          status: active
          kubeversion: 1.28.2
        - name: worker2
          ipAdress: 10.2.10.15
          status: active
          kubeversion: 1.28.2
  - name: zone2
    nodes:
      master:
        - name: master3
          ipAdress: 10.2.10.13
          status: active
          kubeversion: 1.28.2  
      worker:
        - name: worker3
          ipAdress: 10.2.10.16
          status: active
          kubeversion: 1.28.2


# mandatory, set to true if you want to install it into your cluster
rook-ceph: true
harbor: true
opensearch: true
opensearch-dashboards: true
logstash: true
filebeat: true
prometheus: true
opa: true
headlamp: true
certman: true
ingress: true 
keycloak: true
velero: true

harborValues: 
  harborpass: "password" # change to your desired password
  databasePassword: "Postgres_Password" # change to your desired password
  redisPassword: "Redis_Password" 
  externalURL: http://10.2.10.11:30002 # change to the IP address of master1

prometheusValues:
  grafanaUsername: "user"
  grafanaPassword: "password"

ingressValues:
  externalIPs: []

keycloakValues:
  keycloak:
    auth:
      adminUser: admin
      adminPassword: admin
  postgresql:
    auth:
      postgresPassword: ""
      username: bn_keycloak
      password: ""
      database: bitnami_keycloak
      existingSecret: ""

veleroValues:
  accessKeyId: "your_s3_storage_username"
  secretAccessKey: "your_s3_storage_password"
When you are using the ingress option, a few updates are needed in the settings of the services; see the ingress config guide to know more about it.
  2. Platform installation
kubeopsctl apply -f kubeopsctl.yaml
The installation will take about 3 hours.
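
Afterwards you can verify the cluster from the admin machine, assuming kubectl access is configured (e.g. via the .kube/config file created during setup):

kubectl get nodes -o wide
kubectl get pods -A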

4 - Installation

KubeOps Installation and Setup

Welcome to the very first step to getting started with KubeOps. In this section, you will get to know about

  • hardware, software and network requirements
  • steps to install the required software
  • key configurations for KubeOps

Prerequisites

A total of 7 machines are required:

  • one admin node
  • three master nodes
  • three worker nodes

You can choose between Red Hat Enterprise Linux 8 and OpenSUSE 15. All of your machines need the same OS.
Below you can see the minimal requirements for CPU, memory and disk storage:

OS                          Minimum Requirements
Red Hat Enterprise Linux 8  8 CPU cores, 16 GB memory, 50 GB disk storage
OpenSUSE 15                 8 CPU cores, 16 GB memory, 50 GB disk storage

For each worker node, an additional unformatted hard disk with 50 GB is required. For more information about the hard drives for rook-ceph, visit the rook-ceph prerequisites page

Requirements on admin

The following requirements must be fulfilled on the admin machine.

  1. All the utilized users require sudo privileges. If you are using KubeOps as a non-root user, you need a user with sudo rights; on OpenSUSE and RHEL 8 environments, the user should be added to the wheel group. Make sure that you switch to your user with:
su -l <user>
  2. Admin machine must be synchronized with the current time.

  3. You need an internet connection to use the default KubeOps registry registry1.kubernative.net/lima.

    A local registry can be used in an air-gapped environment. KubeOps only supports secure registries by default.
    If you use an insecure registry, you must list it as an insecure registry in the registry configuration (/etc/containers/registries.conf for podman, /etc/docker/daemon.json for docker).

Now you can create your own registry instead of using the default. Check out the how-to guide Create a new Repository for more info.
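
If you do use an insecure registry with podman, a sketch of the corresponding entry in /etc/containers/registries.conf could look like this (the registry address is a placeholder):

[[registry]]
location = "myregistry.local:5000"
insecure = true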

  4. It is recommended that runc is uninstalled:

    # Red Hat Enterprise Linux 8
    dnf remove -y runc
    # OpenSUSE 15
    zypper remove -y runc

  5. tc should be installed:

    # Red Hat Enterprise Linux 8
    dnf install -y tc
    dnf install -y libnftnl

    # OpenSUSE 15
    zypper install -y iproute2
    zypper install -y libnftnl
    

  6. For OpenSearch, /etc/sysctl.conf must be configured: the line

vm.max_map_count=262144

must be added. Afterwards, the command

 sysctl -p

must be executed.
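
As a sketch, both steps can be done in one go (run as root; this assumes the line is not already present in /etc/sysctl.conf):

echo 'vm.max_map_count=262144' >> /etc/sysctl.conf
sysctl -p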

  7. Podman must be installed on your machine.
    # Red Hat Enterprise Linux 8
    sudo dnf install -y podman
    # OpenSUSE 15
    sudo zypper install -y podman
  8. $KUBEOPSROOT and $LIMAROOT must be set.
echo 'export KUBEOPSROOT=<home folder of user>/kubeops' >> $HOME/.bashrc
echo 'export LIMAROOT=<home folder of user>/kubeops/lima' >> $HOME/.bashrc
source $HOME/.bashrc

Requirements for each node

The following requirements must be fulfilled on each node.

  1. All the utilized users require sudo privileges. If you are using KubeOps as a non-root user, you need a user with sudo rights; on OpenSUSE and RHEL 8 environments, the user should be added to the wheel group.

  2. Every machine must be synchronized with the current time.

  3. You have to assign lowercase unique hostnames for every machine you are using.

    We recommend using self-explanatory hostnames.

    To set the hostname on your machine use the following command:

    hostnamectl set-hostname <name of node>
    
    • Example
      Use the commands below to set the hostnames on each machine as admin, master, node1, or node2.
      hostnamectl set-hostname admin
      hostnamectl set-hostname master 
      hostnamectl set-hostname node1
      hostnamectl set-hostname node2
      

    Requires sudo privileges

    It is recommended that a DNS service is running; if you don’t have a DNS service, you can change the /etc/hosts file. An example entry in the /etc/hosts file could be:

    10.2.10.12 admin
    10.2.10.13 master1
    10.2.10.14 master2
    10.2.10.15 master3
    10.2.10.16 node1
    10.2.10.17 node2
    10.2.10.18 node3
    

  4. To establish an SSH connection between your machines, you either need an SSH key or you need to install sshpass.

    1. Generate an SSH key on the admin machine using the following command

      ssh-keygen
      

      Two keys will be generated in the ~/.ssh directory.
      The first key is id_rsa (private) and the second key is id_rsa.pub (public).

    2. Copy the SSH key from the admin machine to your node machine(s) with the following command

      ssh-copy-id <ip address or hostname of your node machine>
      
    3. Now try establishing a connection to your node machine(s)

      ssh <ip address or hostname of your node machine>
      

Installing KubeOpsCtl

  1. Create a kubeopsctl.yaml file with the respective information, as shown in kubeopsctl.yaml parameters, in order to use the KubeOps package.
  2. Install the kubeops*.rpm on your admin machine.

Working with KubeOpsCtl

Before starting with KubeOps cluster, it is important to check if podman is running.

  • To verify that podman is running, use the following command:

    systemctl status podman
    
  • To enable and start podman use the following command:

    systemctl enable --now podman
    
Note: This must be done with the root user or with a user with sudo privileges.

Run kubeopsctl on your commandline like:

kubeopsctl apply -f kubeopsctl.yaml

kubeopsctl.yaml parameters

The names of the nodes should be the same as the hostnames of the machines.
### General values for registry access ###
imagePullRegistry: "registry1.kubernative.net/lima" # mandatory, the registry from which the images for the cluster are pulled
localRegistry: false # mandatory, set to true if you use a local registry; if set to true, you also have to set harbor to true

The imagePullRegistry parameter specifies the registry from which the images for the platform software are pulled. The localRegistry parameter is for using an insecure, local registry for pulling images.

### Values for setup configuration ###
apiVersion: kubeops/kubeopsctl/alpha/v3 # mandatory
clusterName: "example" # mandatory
clusterUser: "root" # mandatory
kubernetesVersion: "1.28.2" # mandatory, check lima documentation
#masterHost: optional if you have an hostname, default value in "masterIP"
masterIP: 10.2.10.12 # mandatory
firewall: "nftables" # mandatory, default "nftables"
pluginNetwork: "calico" # mandatory, default "calico"
containerRuntime: "containerd" # mandatory
clusterOS: "Red Hat Enterprise Linux" # mandatory, can be "Red Hat Enterprise Linux" or "openSUSE Leap"
### Additional values for cluster configuration
useInsecureRegistry: false # optional, default is false
ignoreFirewallError: false # optional, default is false
serviceSubnet: 192.168.128.0/17 # optional, default "192.168.128.0/17"
podSubnet: 192.168.0.0/17 # optional, default "192.168.0.0/17"
debug: false # optional, default is false
logLevel: vvvvv # optional, default "vvvvv"
systemCpu: "1" # optional, default "1"
systemMemory: "2G" # optional, default "2G"
sudo: true # optional, default is true
tmpCopyDir: "/tmp" # optional, default is /tmp
createCluster: true # optional, default is true
updateRegistry: true # optional, default is true
  • The parameter clusterName is used to interact with and manage the cluster later on, e.g. if you want to change the runtime, you need the clusterName parameter.

  • The clusterUser is the linux user for using the cluster. The clusterOS is the linux distribution of the cluster.

  • masterIP is the IP address of the clustermaster or the first master, which is later used for interacting with the cluster.

  • useInsecureRegistry is for using a local and insecure registry for pulling images for the lima software.

  • ignoreFirewallError is a parameter for ignoring firewall errors while the cluster is created (not while operating on the cluster).

  • serviceSubnet is the subnet for all kubernetes service IP addresses.

  • podSubnet is the subnet for all kubernetes pod IP addresses.

  • systemCpu is the maximum amount of cpu that the kube-apiserver is allowed to use.

  • sudo is a parameter for using sudo for commands that need sudo rights, if you use a non-root linux user.

  • tmpCopyDir is a parameter for templating the folder on the cluster nodes where images of lima will be copied to.

  • createCluster is a parameter with which you can specify whether you want to create a cluster or not.

  • updateRegistry is a parameter for updating the docker registry.

zones:
  - name: zone1 
    nodes:
      master:
        - name: cluster1master1
          ipAdress: 10.2.10.11
          user: myuser
          systemCpu: 200m
          systemMemory: 200Mi 
          status: active
          kubeversion: 1.28.2
        - name: cluster1master2
          ipAdress: 10.2.10.12
          user: myuser
          systemCpu: 200m
          systemMemory: 200Mi 
          status: active
          kubeversion: 1.28.2
      worker:
        - name: cluster1worker1
          ipAdress: 10.2.10.14
          user: myuser
          systemCpu: 200m
          systemMemory: 200Mi 
          status: active
          kubeversion: 1.28.2
        - name: cluster1worker2
          ipAdress: 10.2.10.15
          user: myuser
          systemCpu: 200m
          systemMemory: 200Mi 
          status: active
          kubeversion: 1.28.2
  - name: zone2
    nodes:
      master:
        - name: cluster1master3
          ipAdress: 10.2.10.13
          user: myuser
          systemCpu: 200m
          systemMemory: 200Mi 
          status: active
          kubeversion: 1.28.2  
      worker:
        - name: cluster1worker3
          ipAdress: 10.2.10.16
          user: myuser
          systemCpu: 200m
          systemMemory: 200Mi 
          status: active
          kubeversion: 1.28.2

This YAML content is mandatory and describes a configuration for managing multiple zones in a Kubernetes cluster. Let’s break it down step by step:

  • zones: This is the top-level key in the YAML file, representing a list of zones within the Kubernetes cluster.

    • zone1 and zone2: These are two zones within the cluster, each with its own configuration.

      • nodes: This is a sub-key under each zone, indicating the different types of nodes within that zone.

        • master: This is a sub-key under nodes, representing the master nodes in the zone.

          • cluster1master1, cluster1master2, and cluster1master3: These are individual master nodes in the cluster, each with its own configuration settings. They have attributes like name (the node name has to be equal to the host name), ipAdress (IP address), user (the user associated with the node), systemCpu (CPU resources allocated to the system), systemMemory (system memory allocated), status (the status of the node, either “active” or “drained”; see the drain sketch after this list), and kubeversion (the Kubernetes version running on the node). The Kubernetes versions set in the zone apply to its nodes. NOTE: If you drain too many nodes, you may have too few OSDs for Rook.
        • worker: This is another sub-key under nodes, representing the worker nodes in the zone.

          • cluster1worker1, cluster1worker2, and cluster1worker3: Similar to the master nodes, these are individual worker nodes in the cluster, each with its own configuration settings, including name, IP address, user, system resources, status, and Kubernetes version.
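
For example, to drain a worker node declaratively, you could set its status to drained in kubeopsctl.yaml and re-apply the file. This is a sketch based on the status field described above:

      worker:
        - name: cluster1worker3
          ipAdress: 10.2.10.16
          user: myuser
          systemCpu: 200m
          systemMemory: 200Mi
          status: drained # changed from active
          kubeversion: 1.28.2

Running kubeopsctl apply -f kubeopsctl.yaml again then brings the node into the drained state.
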
# mandatory, set to true if you want to install it into your cluster
rook-ceph: true
harbor: true # if localRegistry is set to true, harbor also needs to be set to true
opensearch: true
opensearch-dashboards: true
logstash: true
filebeat: true
prometheus: true
opa: true
headlamp: true
certman: true
ingress: true 
keycloak: true
velero: true

These values are booleans that decide which applications are later installed into the cluster.

# Global values, will be overwritten by the corresponding values of the individual packages
namespace: "kubeops"
storageClass: "rook-cephfs"

These global values will be used for installing the packages, but will be overwritten by the corresponding package-level settings.

  • namespace defines the Kubernetes namespace in which the applications are deployed
  • storageClass defines the name of the StorageClass resource that will be used by the applications.
rookValues:
  namespace: kubeops
  cluster:
    spec:
      dataDirHostPath: "/var/lib/rook" # optional, default is /var/lib/rook
    storage:
      useAllNodes: true # optional, default value: true
      useAllDevices: true # optional, default value: true
      deviceFilter: "^sd[a-b]" # optional, will only be used if useAllDevices is set to false
      config:
        metadataDevice: "sda" # optional, only set this value, if there is a device available
      nodes: # optional if useAllNodes is set to true, otherwise mandatory
        - name: "<ip-adress of node_1>"
          devices:
            - name: "sdb" 
        - name: "<ip-adress of node_2>"
          deviceFilter: "^sd[a-b]"
          config:
            metadataDevice: "sda" # optional
    resources:
      mgr:
        requests:
          cpu: "500m" # optional, default is 500m, limit: 1000m
          memory: "512Mi" # optional, default is 1Gi, limit: 1Gi
      mon:
        requests:
          cpu: "1" # optional, default is 1, limit: 2000m
          memory: "1Gi" # optional, default is 1Gi, limit: 2Gi
      osd:
        requests:
          cpu: "1" # optional, default is 1, limit: 2
          memory: "1Gi" # optional, default is 4Gi, limit: 4Gi
      cephFileSystems:
        requests:
          cpu: "1" # optional, default is 1, limit: 2
          memory: "1Gi" # optional, default is 1, limit: 4Gi
      cephObjectStores:
        requests:
          cpu: "1" # optional, default is 1, limit: 2
          memory: "1Gi" # optional, default is 1Gi, limit: 2Gi
  operator:
    data:
      rookLogLevel: "DEBUG" # optional, default is DEBUG
  • the namespace parameter is important for the applications, because it decides in which namespace the individual applications are deployed.
  • dataDirHostPath sets the path of the configuration files of rook.
  • useAllNodes is a parameter of rook-ceph; if it is set to true, all worker nodes will be used for rook-ceph.
  • useAllDevices is a parameter of rook-ceph; if it is set to true, all possible devices will be used for rook-ceph.
  • deviceFilter is a global filter to only select certain device names. This example matches names starting with sda or sdb. It will only be used if useAllDevices is set to false and will be ignored if individual devices have been specified on a node.
  • metadataDevice: name of a device or LVM to use for the metadata of OSDs (daemons for storing data on the local file system) on each node. Performance can be improved by using a low-latency device (SSD or NVMe) as the metadata device, while other spinning-platter (HDD) devices on a node are used to store data. This global setting will be overwritten by the corresponding node-level setting.
  • nodes: names of individual nodes in the cluster that should have their storage included. Will only be used if useAllNodes is set to false. Specific configurations of the individual nodes will overwrite global settings.
  • resources refers to the cpu and memory that the parts of rook-ceph will be requesting. In this case these are the manager, the monitoring pods and the OSDs (they have the job of managing the local storage of the nodes, and together they form the distributed storage), as well as the filesystem and object-store pods (they manage the respective storage solution).
  • rookLogLevel: the log level of rook-ceph. DEBUG provides the most informative logs.
harborValues: 
  namespace: kubeops # optional, default is kubeops
  harborpass: "password" # mandatory: set password for harbor access
  databasePassword: "Postgres_Password" # mandatory: set password for database access
  redisPassword: "Redis_Password" # mandatory: set password for redis access
  externalURL: http://10.2.10.13:30002 # mandatory, the ip address and port, from which harbor is accessable outside of the cluster
  nodePort: 30002 # mandatory
  hostname: harbor.local # mandatory
  harborPersistence:
    persistentVolumeClaim:
      registry:
        size: 5Gi # mandatory, depending on storage capacity
        storageClass: "rook-cephfs" #optional, default is rook-cephfs
      jobservice:
        jobLog:
          size: 1Gi # mandatory: Depending on storage capacity
          storageClass: "rook-cephfs" #optional, default is rook-cephfs
      database:
        size: 1Gi # mandatory, depending on storage capacity
        storageClass: "rook-cephfs" #optional, default is rook-cephfs
      redis:
        size: 1Gi # mandatory, depending on storage capacity
        storageClass: "rook-cephfs" #optional, default is rook-cephfs
      trivy: 
        size: 5Gi # mandatory, depending on storage capacity
        storageClass: "rook-cephfs" #optional, default is rook-cephfs

You can set the root password for the postgres database, redis, and harbor itself. For the persistent volumes of harbor, the sizes and the storageClass are also templatable. The same goes for all applications of harbor, e.g. trivy for image scanning or the chart museum for helm charts.
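
As a quick check of these credentials after installation, you could log in to Harbor with podman. This sketch uses the example values from above and assumes Harbor’s default admin user:

podman login 10.2.10.13:30002 --username admin --password "password" --tls-verify=false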

###Values for Logstash deployment###
##For a detailed explanation of each key see: https://github.com/elastic/helm-charts/releases/tag/v7.16.3###
logstashValues:
  namespace: kubeops
  volumeClaimTemplate:
    accessModes: 
      - ReadWriteMany #optional, default is [ReadWriteMany]
    resources:
      requests:
        storage: 1Gi # mandatory, depending on storage capacity
    storageClass: "rook-cephfs" #optional, default is rook-cephfs

For Logstash, the PVC size is also templatable.
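For example, a sketch that requests a larger log volume (the 10Gi value is purely illustrative):

logstashValues:
  namespace: kubeops
  volumeClaimTemplate:
    resources:
      requests:
        storage: 10Gi # illustrative size; choose it according to your log volume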

###Values for OpenSearch-Dashboards deployment###
##For a detailed explanation of each key see: https://github.com/opensearch-project/helm-charts/tree/main/charts/opensearch-dashboards###
openSearchDashboardValues:
  namespace: kubeops
  nodePort: 30050
###Values for OpenSearch deployment###
##For a detailed explanation of each key see: https://github.com/opensearch-project/helm-charts/tree/main/charts/opensearch###
openSearchValues:
  namespace: kubeops
  opensearchJavaOpts: "-Xmx512M -Xms512M" # optional, default is -Xmx512M -Xms512M
  resources:
    requests:
      cpu: "250m" # optional, default is 250m
      memory: "1024Mi" # optional, default is 1024Mi
    limits:
      cpu: "300m" # optional, default is 300m
      memory: "3072Mi" # optional, default is 3072Mi
  persistence:
    size: 4Gi # mandatory
    enabled: "true" # optional, default is true
    enableInitChown: "false" # optional, default is false
    labels:
      enabled: "false" # optional, default is false
    storageClass: "rook-cephfs" # optional, default is rook-cephfs
    accessModes:
      - "ReadWriteMany" # optional, default is {ReadWriteMany}
  securityConfig:
    enabled: false # optional, default value: false
    ### Additional values can be set, if securityConfig is enabled:
    # path: "/usr/share/opensearch/plugins/opensearch-security/securityconfig"
    # actionGroupsSecret:
    # configSecret:
    # internalUsersSecret: internal-users-config-secret
    # rolesSecret:
    # rolesMappingSecret:
    # tenantsSecret:
    # config:
    #   securityConfigSecret: ""
    #   dataComplete: true
    #   data: {}
  replicas: "3" # optional, default is 3
#--------------------------------------------------------------------------------------------------------------------------------
  • opensearchJavaOpts sets the size of the Java heap.
  • enableInitChown changes the owner of the OpenSearch configuration files so that non-root users can modify them.
  • if you want labels for the OpenSearch pods in the cluster, you can enable them with the enabled parameter under the labels subtree.
  • if you want to use a custom security config, you can enable it and then set parameters like the path to the configuration files (see the sketch after this list); if you want more information, you can find it here.
  • the number of replicas is 3 by default, but it is templatable for better scaling.
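As a sketch of an enabled security configuration, reusing the path and the secret name from the commented example above (internal-users-config-secret is an assumed name for a secret that must be created beforehand):

openSearchValues:
  securityConfig:
    enabled: true
    path: "/usr/share/opensearch/plugins/opensearch-security/securityconfig"
    internalUsersSecret: internal-users-config-secret # assumed pre-created secret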

###Values for Prometheus deployment###
prometheusValues:
  namespace: kubeops # optional, default is kubeops
  privateRegistry: false # optional, default is false
  grafanaUsername: "user" # optional, default is user
  grafanaPassword: "password" # optional, default is password
  retentionSize: "24GB" # optional, default is 24GB
  grafanaResources:
    nodePort: 30211 # optional, default is 30211
    storageClass: "rook-cephfs" # optional, default is rook-cephfs
    storage: 5Gi # optional, default is 5Gi
    grafanaUsername: "admin" # optional, default is admin
    grafanaPassword: "admin" # optional, default is admin
    retention: 10d # mandatory
    retentionSize: "24GB" # mandatory
    storageClass: "rook-cephfs" # optional, default is rook-cephfs
    storage: 25Gi # optional, default is 25Gi

  prometheusResources:
    retention: 10d # optional, default is 10d
    retentionSize: "24GB" # optional, default is "24GB"
    storageClass: "rook-cephfs" # optional, default is rook-cephfs
    storage: 25Gi # optional, default is 25Gi

The nodePort defaults to 30211, so you can reach the Grafana application on every master node at <node IP>:30211; the port is templatable and can thus be changed.
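As an illustration, a sketch that keeps metrics longer than the defaults (30d and 50GB are placeholder values, not recommendations):

prometheusValues:
  prometheusResources:
    retention: 30d # placeholder: keep metrics for 30 days
    retentionSize: "50GB" # placeholder: cap on the size of the metrics database
    storage: 50Gi # the volume should be at least as large as retentionSize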

###Values for OPA deployment###
opaValues:
  namespace: kubeops
  • namespace value specifies the Kubernetes namespace where OPA will be deployed.
###Values for Headlamp deployment###
headlampValues:
  namespace: kubeops
  hostname: kubeops-dashboard.local
  service:
    nodePort: 30007
  • namespace value specifies the Kubernetes namespace where Headlamp will be deployed.
  • hostname is for accessing the Headlamp service.
  • the nodePort value specifies the node port for accessing Headlamp.
###Values for cert-manager deployment###
certmanValues:
  namespace: kubeops
  replicaCount: 3
  logLevel: 2
  • namespace value specifies the Kubernetes namespace where cert-manager will be deployed.
  • replicaCount specifies the number of replicas for the cert-manager deployment.
  • logLevel specifies the logging level for cert-manager.
###Values for ingress-nginx deployment###
ingressValues:
  namespace: kubeops
  externalIPs: []
  • namespace value specifies the Kubernetes namespace where ingress-nginx will be deployed.
  • the externalIPs value specifies a list of external IP addresses used to expose the ingress-nginx service, allowing external traffic to reach the ingress controller. The value is expected to be a list of IP addresses (see the sketch below).
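For illustration, a sketch with two placeholder addresses:

ingressValues:
  namespace: kubeops
  externalIPs:
    - 10.2.10.11 # placeholder IP
    - 10.2.10.12 # placeholder IP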
###Values for keycloak deployment###
keycloakValues:
  namespace: "kubeops" # Optional, default is "keycloak"
  storageClass: "rook-cephfs" # Optional, default is "rook-cephfs"
  nodePort: "30180" # Optional, default is "30180"
  hostname: keycloak.local
  keycloak:
    auth:
      adminUser: admin # Optional, default is admin
      adminPassword: admin # Optional, default is admin
      existingSecret: "" # Optional, default is ""
  postgresql:
    auth:
      postgresPassword: "" # Optional, default is ""
      username: bn_keycloak # Optional, default is "bn_keycloak"
      password: "" # Optional, default is ""
      database: bitnami_keycloak # Optional, default is "bitnami_keycloak"
      existingSecret: "" # Optional, default is ""
  • namespace value specifies the Kubernetes namespace where keycloak will be deployed.
  • storageClass value specifies the storage class to be used for persistent storage in Kubernetes. If not provided, it defaults to “rook-cephfs”.
  • nodePort value specifies the node port for accessing Keycloak. If not provided, it defaults to “30180”.
  • hostname value specifies the hostname for accessing the Keycloak service.
  • adminUser value specifies the username for the Keycloak admin user. Defaults to “admin”.
  • adminPassword value specifies the password for the Keycloak admin user. Defaults to “admin”.
  • existingSecret value specifies an existing Kubernetes secret to use for Keycloak admin authentication. Defaults to an empty string.
  • postgresPassword value specifies the password for the PostgreSQL database. Defaults to an empty string.
  • username value specifies the username for the PostgreSQL database. Defaults to “bn_keycloak”.
  • password value specifies the password for the PostgreSQL database. Defaults to an empty string.
  • database value specifies the name of the PostgreSQL database. Defaults to “bitnami_keycloak”.
  • existingSecret value specifies an existing Kubernetes secret to use for PostgreSQL authentication. Defaults to an empty string (see the sketch below).
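A sketch that avoids inline passwords by referencing pre-created Kubernetes secrets; the secret names are placeholders, and the keys expected inside the secrets are defined by the underlying charts:

keycloakValues:
  namespace: "kubeops"
  hostname: keycloak.local
  keycloak:
    auth:
      adminUser: admin
      existingSecret: "keycloak-admin-secret" # placeholder secret name
  postgresql:
    auth:
      username: bn_keycloak
      existingSecret: "keycloak-postgres-secret" # placeholder secret name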
###Values for Velero deployment###
veleroValues:
  namespace: "velero"
  accessKeyId: "your_s3_storage_username"
  secretAccessKey: "your_s3_storage_password"
  useNodeAgent: false
  defaultVolumesToFsBackup: false
  provider: "aws"
  bucket: "velero"
  useVolumeSnapshots: false
  backupLocationConfig:
    region: "minio"
    s3ForcePathStyle: true
    s3Url: "http://minio.velero.svc:9000"
  • namespace: Specifies the Kubernetes namespace where Velero will be deployed.
  • accessKeyId: Your access key ID for accessing the S3 storage service.
  • secretAccessKey: Your secret access key for accessing the S3 storage service.
  • useNodeAgent: Indicates whether to deploy Velero's node agent, which performs file-system backups of pod volumes (see the sketch after this list).
  • defaultVolumesToFsBackup: If set to true, Velero uses file-system backup for pod volumes by default.
  • provider: Specifies the cloud provider where the storage service resides.
  • bucket: The name of the S3 bucket where Velero will store backups.
  • useVolumeSnapshots: Indicates whether to use volume snapshots for backups. If set to true, Velero will use volume snapshots.
  • backupLocationConfig: Configuration for the backup location.
  • region: Specifies the region where the S3 storage service is located.
  • s3ForcePathStyle: Specifies whether to force the use of path-style URLs for S3 requests.
  • s3Url: The URL for accessing the S3-compatible storage service.
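Building on the useNodeAgent and defaultVolumesToFsBackup flags described above, a sketch that switches Velero to file-system backups could look like this (credentials omitted, all other values as in the example):

veleroValues:
  namespace: "velero"
  useNodeAgent: true # deploy the node agent for file-system backups
  defaultVolumesToFsBackup: true # back up pod volumes via the file system by default
  provider: "aws"
  bucket: "velero"
  backupLocationConfig:
    region: "minio"
    s3ForcePathStyle: true
    s3Url: "http://minio.velero.svc:9000"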