1 - Release Notes

Check out the latest release notes for a detailed overview of the updates and enhancements in our newest kubeopsctl version.

1.1 - KubeOps Versions

KubeOps Versions

Here is the list of KubeOps versions and their supported tool versions. Make sure to install or upgrade according to the supported versions only.

KubeOps Supported KOSI Version Supported kubeopsctl Version Supported KOSI plugins Deprecation Date
KubeOps 2.1.0 KOSI 2.13.X kubeopsctl 2.1.X enterprise-plugins:2.0.X TBD
KubeOps 2.0.4 KOSI 2.13.X kubeopsctl 2.0.X enterprise-plugins:2.0.X TBD
KubeOps 1.7.0 KOSI 2.12.X kubeopsctl 1.7.X enterprise-plugins:1.7.X 01.10.2026

KubeOps 2.1.0_Beta0

Tool App Version Chart Version Package Version SHA256 Checksum
calico 3.29.1 3.29.1 kubeops/calico:v3.29.1 063ca2d09ae26610bb386f3ba12a8bb3ecec2a0c52eb7d014a05bd6bed9461b7
cilium 1.18.2 1.18.2 kubeops/cilium:v1.18.2 240e7fd568976f26b885a26daa23c1a3f4c54552554abbf110d8bcef6005a0d0
multus snapshot-thick kubeops/multus:snapshot-thick f3112eb8e49b512033158c358190c6a833dfe4d47b0d2e9bf814946e6e140d5b
ingress-nginx 1.11.5 4.11.5 kubeops/ingress-nginx:2.1.0_Beta0 301c3c4e9d6a1075b467bd1c01c755479b208b126e9b15c48eb0f330e4a4956d
cert-manager 1.18.2 1.18.2 kubeops/cert-manager:2.1.0_Beta0 40bb60bd217624bbdce7499bdd083389f3ce86d858f5eba2ef887d9e2430917b
opa-gatekeeper 3.17.1 3.17.1 kubeops/opa-gatekeeper:2.1.0_Beta0 643b161981306bd847576e22a09be93fdd51eb99cee59d185067519910cd9001
velero 1.16.2 10.1.0 kubeops/velero:2.1.0_Beta0 3aa92d0b07c0110cd7a5ed67c3f4af27f024f802f1ae4140521543831980485e
rook-ceph 1.15.6 1.15.6 kubeops/rook-ceph:2.1.0_Beta0 49f64bdfb4a7e190aa3d9e242ce3408ae64a8c21736eb71822cf61bceb651cf2
harbor 2.14.0 1.18.0 kubeops/harbor:2.1.0_Beta0 7ecbebb32bc8695054d25f39763970155f678e848578a05dcc02110a2497b4f4
kube-prometheus-stack 0.85.0 77.0.2 kubeops/kube-prometheus-stack:2.1.0_Beta0 7e5e6af94b47af9039cce293404762e6de2644de8b0e4fc8e2ebb5a2864ba9b8
keycloak 26.1.0 1.1.0 kubeops/keycloak:2.1.0_Beta0 8553fd9ed05918048406eb8171b30c4963a14bdbd032c7fdd2b29dbf25bdd190
kubeops-dashboard 0.26.0 0.26.0 kubeops/kubeops-dashboard:2.1.0_Beta0 7648148ce645549a1e1de2b32635c0be7e1197f1c472309f2ecd4f17a6e83a65
filebeat-os 8.5.1 8.5.1 kubeops/filebeat-os:2.1.0_Beta0 e5a6256481fa09f3a1a8e69633ab62c0fa97aff5eb67c780e6f857daa9d06636
logstash-os 8.4.0 8.5.1 kubeops/logstash-os:2.1.0_Beta0 0b31b98cdcded952d0f7d52b8c0a4cc51877b4869853c7944021b067d9bd6586
opensearch-os 3.4.0 3.4.0 kubeops/opensearch-os:2.1.0_Beta0 cdb0822b62241306a94e951d2b3e4b7f1a959626eaa53dc61a2e5e5d07b788df
opensearch-dashboards 3.4.0 3.4.0 kubeops/opensearch-dashboards:2.1.0_Beta0 838d51f87aee3d005fa5ae8c2a832b27a3c4cf8fcce66048adc989bad0595ca8
traefik 3.6.7 39.0.0 kubeops/traefik:2.1.0 fe0af58d4d1de42cc1f05d11ed87495ea837ff1bf7dfc28a29aed721b72bb559
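
The SHA256 checksums listed above can be used to verify a downloaded package archive before installing it. A minimal sketch, assuming the archive has already been downloaded locally; the checksum and file name are placeholders, take the value from the matching table row:

# verify a downloaded KubeOps package against the published checksum
echo "<sha256 checksum>  <downloaded package file>" | sha256sum --check -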

KubeOps 2.1.0_Alpha0

Tool App Version Chart Version Package Version SHA256 Checksum
calico 3.29.1 3.29.1 kubeops/calico:v3.29.1 0b2ebe995bf32affd10a31be758d3198f0f4ba1bf57a213829bf85a3941f8143
cilium 1.18.2 1.18.2 kubeops/cilium:v1.18.2 eeaf1821ad86d5b59b238c452f31203a2c6d13a027ec270b0984b83a687bbdd6
multus snapshot-thick kubeops/multus:snapshot-thick d87ee3b3cf63a9b578b45c3517a6906c368ac1182810fac4c908cb12f8b073ba
ingress-nginx 1.11.5 4.11.5 kubeops/ingress-nginx:2.1.0_Alpha0 3fdc94e7cfc0388aafbde4582c43243f5f5f2c0ee07f6bc59cdf5ff1834c965a
cert-manager 1.18.2 1.18.2 kubeops/cert-manager:2.1.0_Alpha0 fb2ac8b1cabdf450da2d17b6aacf9539834d1a2f43ca2bfa9c166dd6302b7036
opa-gatekeeper 3.17.1 3.17.1 kubeops/opa-gatekeeper:2.1.0_Alpha0 7f7305ade7bef7f510283e65dddddd61ff817e6c7e1d26b70172a42d3300ccdf
velero 1.16.2 10.1.0 kubeops/velero:2.1.0_Alpha0 9327ba775e9783651bddd5758caa249a3cc3bd9b0fbf3061a2ca2a10b4c0623a
rook-ceph 1.15.6 1.15.6 kubeops/rook-ceph:2.1.0_Alpha0 6408422e4420ee024257852d3913e98c992946685412cf3e2cfe20ce24767082
harbor 2.14.0 1.18.0 kubeops/harbor:2.1.0_Alpha0 3a76820dadf06228072e1a4f36992c9007bc0dfa437c4a9da5ae73e36fda0890
kube-prometheus-stack 0.85.0 77.0.2 kubeops/kube-prometheus-stack:2.1.0_Alpha0 643052179a7cde3bf2e997c42002ee7fb27081456a4ba8eb246b234a5d6d0748
keycloak 26.1.0 1.1.0 kubeops/keycloak:2.1.0_Alpha0 38faa9f693111582db0dce2add81a1a800402fb07edbdb1ba50bc3d0b92ccc33
kubeops-dashboard 0.26.0 0.26.0 kubeops/kubeops-dashboard:2.1.0_Alpha0 5914b069528a0fae25c73300cc8aa816d2a4ddbc6ab2fca133743ddd2095c9fb
filebeat-os 8.5.1 8.5.1 kubeops/filebeat-os:2.1.0_Alpha0 12e77896da436a164ea884168e233592b94090a5479ea33f7d1f853386cdf6b2
logstash-os 8.4.0 8.5.1 kubeops/logstash-os:2.1.0_Alpha0 486bb879504a4bc649690883d3fc858daf1a4ade40a2ff7bb6c345220a077a21
opensearch-os 3.4.0 3.4.0 kubeops/opensearch-os:2.1.0_Alpha0 91f8e98cd0ac0fce125aeec3bb650f045cee8138b7920fac6c5cc5b5bd43f67f
opensearch-dashboards 3.4.0 3.4.0 kubeops/opensearch-dashboards:2.1.0_Alpha0 1780e256ad5ad805601d293757087012c05f04f50c783245292381dec6a4f7cf
traefik 3.6.7 39.0.0 kubeops/traefik:2.1.0 80ed7bc983d82818516428555858482ee058b86f839c377e57f659836cfe87f9

KubeOps 2.0.4

Tool App Version Chart Version Package Version SHA256 Checksum
calico 3.29.1 3.29.1 kubeops/calico:v3.29.1 731447b3b231258dd636d4a9bd1e96da406c148231c434e21acaab492b00000f
cilium 1.18.2 1.18.2 kubeops/cilium:v1.18.2 240e7fd568976f26b885a26daa23c1a3f4c54552554abbf110d8bcef6005a0d0
multus snapshot-thick kubeops/multus:snapshot-thick cc929a5013323247027bb83f3f9b266fc0e4240c6e00cd2c8898a3796b4783bc
ingress-nginx 1.11.5 4.11.5 kubeops/ingress-nginx:2.0.4 01a5426de06f5b3610c36238fc3d6d60c05d167ad367ed94e11d1851ca7d13bb
cert-manager 1.18.2 1.18.2 kubeops/cert-manager:2.0.4 72674b1a96372d5d69ed11da55d6060ba39af3dc496d4844ae98c2ad841f3f2a
opa-gatekeeper 3.17.1 3.17.1 kubeops/opa-gatekeeper:2.0.4 d7a4616f25de303af5d1aabba57998e1bce6d735375819cc1a002d70453c03e6
velero 1.16.2 10.1.0 kubeops/velero:2.0.4 ccea73765c3aff7d84412fed5fa648c5ccfd26f269192c62f970e862b64682d7
rook-ceph 1.15.6 1.15.6 kubeops/rook-ceph:2.0.4 527582b5b7e24fd3451ae89564decf103559ee90f4048ab89f7f9aedfe98fe52
harbor 2.14.0 1.18.0 kubeops/harbor:2.0.4 56b5895e92f156966b7da487e1b7c5af44b25ce05ae98ac9cca9fe9197f1642c
kube-prometheus-stack 0.85.0 77.0.2 kubeops/kube-prometheus-stack:2.0.4 007256619efe07a33b9311b3ee16431ed4d729f9a4d69e1a9cb8f9947a8e829c
keycloak 26.1.0 1.1.0 kubeops/keycloak:2.0.4 833c4c807f2ffb006fb4731032c868428414214733984b826d65e5f0a504e47c
kubeops-dashboard 0.26.0 0.26.0 kubeops/kubeops-dashboard:2.0.4 0a78fa678e75897283f7a31972da7bca20d8d575d2ef459e0ccdc5c7bbe2847a
filebeat-os 8.5.1 8.5.1 kubeops/filebeat-os:2.0.4 95566a08d33c2a2be0ce89610d397ba85eb4957cc07373e4c258f19aa1f2c2a9
logstash-os 8.4.0 8.5.1 kubeops/logstash-os:2.0.4 0612a97608fe909db99ec5ec8a7ac745cdb3e8a43b792db1234b241d827da978
opensearch-os 3.4.0 3.4.0 kubeops/opensearch-os:2.0.4 312252b352681f086382e6de32439aa19872a03db040b3719ee9f4302d899154
opensearch-dashboards 3.4.0 3.4.0 kubeops/opensearch-dashboards:2.0.4 4096038e5c53d7b7cbf7def0c8cae20cf397b7c7e07c0daa8446fc5bcc1f2a7a

KubeOps 2.0.3

Tool App Version Chart Version Package Version SHA256 Checksum
calico 3.29.1 3.29.1 kubeops/calico:v3.29.1 224fd0a2cdaf2ffc3089d425dc2ecac65fdd184e413327ebaf35c3929613a058
cilium 1.18.2 1.18.2 kubeops/cilium:v1.18.2 240e7fd568976f26b885a26daa23c1a3f4c54552554abbf110d8bcef6005a0d0
multus snapshot-thick kubeops/multus:snapshot-thick 2540b63b999e19a64ba46f344ac8071eef47b2822c58f448f7bf61608b175d20
ingress-nginx 1.11.5 4.11.5 kubeops/ingress-nginx:2.0.3 0015a7834927f44e6c8189c812cae8d80256a063906ab75575e44f4e8702fe55
cert-manager 1.18.2 1.18.2 kubeops/cert-manager:2.0.3 0a7080c78e7ab2e7f5dcd58f527584acde06a1b9133313a983eef5a6b242e214
opa-gatekeeper 3.17.1 3.17.1 kubeops/opa-gatekeeper:2.0.3 9a4bee58e844e81eeb9626c46cc98a937944ddca71cfd2920540151050f370a3
velero 1.16.2 10.1.0 kubeops/velero:2.0.3 4d9dddf792da3d6b006c055fc39ded8b9deef8c7ab1a2ee42cf82417eac42396
rook-ceph 1.15.6 1.15.6 kubeops/rook-ceph:2.0.3 c295552e8cc95aa2ae7096df432dc4994be6fb25b2c232faeb25a3c4e111024b
harbor 2.14.0 1.18.0 kubeops/harbor:2.0.3 61ed12ef089dd1a41fc92dfa3b7c3bff4456714a887f19b8c51b23f5f70b088b
kube-prometheus-stack 0.85.0 77.0.2 kubeops/kube-prometheus-stack:2.0.3 b913de20a11ca60fbadd77afc96acd86d6fe0761542135f27cc6eed5e2e68586
keycloak 26.1.0 1.1.0 kubeops/keycloak:2.0.3 f4c8c128e7b56f9b2f2c5e9f9832f3f1c0851dbc0f25edda294d878b70a5a4ca
kubeops-dashboard 0.26.0 0.26.0 kubeops/kubeops-dashboard:2.0.3 d63622486637461363b493dd8932ddc2837d90559e2309f7e28df3fa9a7d69b1
filebeat-os 8.5.1 8.5.1 kubeops/filebeat-os:2.0.3 a07b624662376686a01706de217a5a363ade166dfd5d7637b6c0a8b658aeb5b7
logstash-os 8.4.0 8.5.1 kubeops/logstash-os:2.0.3 bf005cca685c5a07a937ec05bc3a8232bf487b60d9353738da9c836309f081ea
opensearch-os 3.2.0 3.2.1 kubeops/opensearch-os:2.0.3 ed21dec3970d2be84e6f726463e80fef8f45f978353bcd125458c34253b13b85
opensearch-dashboards 3.2.0 3.2.2 kubeops/opensearch-dashboards:2.0.3 3611891a888411215421d1518e27a488cbded4155d49d3bf2ae139a4aa88a754

KubeOps 2.0.2

Tool App Version Chart Version Package Version SHA256 Checksum
calico 3.29.1 3.29.1 kubeops/calico:v3.29.1 3f13159164357f15b2efee927606287dd121f7ec314445c9f6c18e9ae36bc76d
cilium 1.18.2 1.18.2 kubeops/cilium:v1.18.2 eeaf1821ad86d5b59b238c452f31203a2c6d13a027ec270b0984b83a687bbdd6
multus snapshot-thick kubeops/multus:snapshot-thick a07721b3290b4086e76baad09ef981a0eae0b1cc85973a63dfa3df3c01f49027
ingress-nginx 1.11.5 4.11.5 kubeops/ingress-nginx:2.0.2 8e69acc30431cd6f4c92a99dd3e045e32e1214c1c78e8529805960c4b5aca423
cert-manager 1.18.2 1.18.2 kubeops/cert-manager:2.0.2 68976f9bc78d0f590d35d89a33ce65a3fe72a3e83c87f6deecd84bbbbd887e38
opa-gatekeeper 3.17.1 3.17.1 kubeops/opa-gatekeeper:2.0.2 dbc6cad41ca2d9e7c735b2ab2f4383848c62281fe965eb9868e702c3f14e2273
velero 1.16.2 10.1.0 kubeops/velero:2.0.2 7fa039bcf55e2c74b7fda3dbe74282bd2cc262fd443e3ce29ffec6b776864d05
rook-ceph 1.15.6 1.15.6 kubeops/rook-ceph:2.0.2 27b32b7fefb4f0ef8d2c03c53ff0c1cc0c7dd97896e1514b360b826a3a8e0337
harbor 2.14.0 1.18.0 kubeops/harbor:2.0.2 08dcc31bdba365729bf28ba783fed905abd05b822ed0e212b148063356729240
kube-prometheus-stack 0.85.0 77.0.2 kubeops/kube-prometheus-stack:2.0.2 9b9a5be156845c36f56f228cf66c7bd0be8139e72a55bccb83165aba3a5200cd
keycloak 26.1.0 1.1.0 kubeops/keycloak:2.0.2 c8d302be6f6b9c51c54cbb93f55c51d3f5498a14fd73f5f243810e557be7dd03
kubeops-dashboard 0.26.0 0.26.0 kubeops/kubeops-dashboard:2.0.2 eebff5aa0618f0d4b7b709c31cf9fe2c3999a14641758baf2abdf91daabd259e
filebeat-os 8.5.1 8.5.1 kubeops/filebeat-os:2.0.2 0b62b3a7a3fcda8d856e10fcb382803dd962062a22f68877b6f6ce7a92d10aca
logstash-os 8.4.0 8.5.1 kubeops/logstash-os:2.0.2 a214f06de5b0d1b23ef11dd8d7c5ad4c7452e85c8038bfc62e11cd62b515cd78
opensearch-os 3.2.0 3.2.1 kubeops/opensearch-os:2.0.2 00d4fa640e3fe895e628d749c0b2c4b4bb90f2e38b0768c70351d73029dc491c
opensearch-dashboards 3.2.0 3.2.2 kubeops/opensearch-dashboards:2.0.2 753d224de4d28ca6dae774156043da087a05645f05770edf1b6a8c5ac6903197

KubeOps 2.0.1

Tool App Version Chart Version Package Version SHA256 Checksum
calico 3.29.1 3.29.1 kubeops/calico:v3.29.1 2f0d00e78a88c52aa057e9257fcbb16015466262e6f464b046af97bc46c039d5
cilium 1.18.2 1.18.2 kubeops/cilium:v1.18.2 240e7fd568976f26b885a26daa23c1a3f4c54552554abbf110d8bcef6005a0d0
multus snapshot-thick kubeops/multus:snapshot-thick 6904c4d157ac5d6ebc956a65affd27a9a11bd64da42cedd600f720789e6cc59c
ingress-nginx 1.11.5 4.11.5 kubeops/ingress-nginx:2.0.1 8b6422ae30448795138b3b939bbb9d86340d11517a9c48ce0d547d5c042aacc2
cert-manager 1.18.2 1.18.2 kubeops/cert-manager:2.0.1 599954bb8bbc400f4e5b4b5772db5269bf511aa6f3f52458fe9dfa724ff02663
opa-gatekeeper 3.17.1 3.17.1 kubeops/opa-gatekeeper:2.0.1 ffdd6ccb5c2237f612e59501f3c5dfc89918de9a77d0fe6f6b138d0981bed5f3
velero 1.16.2 10.1.0 kubeops/velero:2.0.1 c867e0fcd13b82012317b09db5fd8754578e1c6ae3a9b4511c290de3f9aadafd
rook-ceph 1.15.6 1.15.6 kubeops/rook-ceph:2.0.1 f931a3556173739d88199b841e3ff3ac34df1e63303e4ae482eff1bd7465061d
harbor 2.14.0 1.18.0 kubeops/harbor:2.0.1 fe08c1caa8ef1e21ef92c6927bbd8f160567e62d249a5a7d34cf24c0d6bc12cf
kube-prometheus-stack 0.85.0 77.0.2 kubeops/kube-prometheus-stack:2.0.1 4ef5d140ce2fdffad7ba3d51cf408746d64938b23d6b9b46c2182792fdb06850
keycloak 26.1.0 1.1.0 kubeops/keycloak:2.0.1 bc65c71135066eb22298109aa4f573b1b4115c11df1ed0bc995b9f4668d64a47
kubeops-dashboard 0.26.0 0.26.0 kubeops/kubeops-dashboard:2.0.1 7f795a2bc689b8ccbb2bc6a6dfbc40af28bbeed40a9cb5fec26469708687ce43
filebeat-os 8.5.1 8.5.1 kubeops/filebeat-os:2.0.1 6eb2098040870ca6539411b77d1ecb5c4606198ea6cad9e7c6f1173da69ba1f0
logstash-os 8.4.0 8.5.1 kubeops/logstash-os:2.0.1 20095b5d51eee351f26888dfe33ed90ae88571f5b9c6c3827ae0e2bb86aa68f7
opensearch-os 3.2.0 3.2.1 kubeops/opensearch-os:2.0.1 38273e01aff56dbd730509dbab86ac5206d920c4a10f64d99c8935acf95e5490
opensearch-dashboards 3.2.0 3.2.2 kubeops/opensearch-dashboards:2.0.1 7861cd97d6e7adc85f78e4907b9b2caf587667894e3f53ad1e0ca5695280d144

1.2 - KubeOps 2.1.0_Beta1

KubeOps 2.1.0_Beta1 - Release Date xx.xx.2026

Changelogs KubeOpsCtl 2.1.0_Beta1

What's new?

+ Added Single Sign-On (SSO) with Keycloak for Rook-Ceph
+ Added Single Sign-On (SSO) with Keycloak for Harbor
+ Added Single Sign-On (SSO) with Keycloak for Grafana
+ Added Single Sign-On (SSO) with Keycloak for KubeOps Dashboard
+ Added Single Sign-On (SSO) with Keycloak for OpenSearch Dashboard
+ Added Traefik for manual installation
+ Added CloudNativePG operator for manual installation
+ Added command kubeopsctl check-connections -f cluster-values.yaml
+ Added check-connections before kubeopsctl apply
+ Improved cert-manager package to install lets-encrypt-staging as initial CA for SSO support
+ Improved harbor backup by adding pre.hook.backup.velero.io in the harbor package
+ Improved keycloak backup by adding pre.hook.backup.velero.io in the keycloak package
+ Improved kubeopsctl apply so that it no longer throws an exception when cluster-values.yaml is missing

1.3 - KubeOps 2.1.0_Beta0

KubeOps 2.1.0_Beta0 - Release Date 13.04.2026

Changelogs KubeOpsCtl 2.1.0_Beta0

What's new?

+ Added Single Sign-On (SSO) with Keycloak for Harbor
+ Added Single Sign-On (SSO) with Keycloak for Grafana
+ Added Single Sign-On (SSO) with Keycloak for KubeOps Dashboard
+ Added Traefik for manual installation
+ Added CloudNativePG operator for manual installation
+ Added command kubeopsctl check-connections -f cluster-values.yaml
+ Added check-connections before kubeopsctl apply
+ Improved cert-manager package to install lets-encrypt-staging as initial CA for SSO support
+ Improved harbor backup by adding pre.hook.backup.velero.io in the harbor package
+ Improved keycloak backup by adding pre.hook.backup.velero.io in the keycloak package
+ Improved kubeopsctl apply so that it no longer throws an exception when cluster-values.yaml is missing

1.4 - KubeOps 2.1.0_Alpha0

KubeOps 2.1.0_Alpha0 - Release Date 09.03.2026

Changelogs KubeOpsCtl 2.1.0_Alpha0

What's new?

+ Added Single Sign-On (SSO) with Keycloak for Rook-Ceph
+ Added Single Sign-On (SSO) with Keycloak for Harbor
+ Added Single Sign-On (SSO) with Keycloak for Grafana
+ Added Single Sign-On (SSO) with Keycloak for KubeOps Dashboard
+ Added Single Sign-On (SSO) with Keycloak for OpenSearch Dashboard
+ Added Traefik for manual installation
+ Added CloudNativePG operator for manual installation
+ Added command kubeopsctl check-connections -f cluster-values.yaml
+ Added check-connections before kubeopsctl apply
+ Improved cert-manager package to install lets-encrypt-staging as initial CA for SSO support
+ Improved harbor backup by adding pre.hook.backup.velero.io in the harbor package
+ Improved keycloak backup by adding pre.hook.backup.velero.io in the keycloak package
+ Improved kubeopsctl apply so that it no longer throws an exception when cluster-values.yaml is missing

1.5 - KubeOps 2.0.5

KubeOps 2.0.5 - Release Date 26.03.2026

Changelogs KubeOpsCtl 2.0.5

Bugfixes

+ Implemented fixes for the following CVEs affecting the `rook-ceph` package:
    - CVE-2025-68121
    - CVE-2026-33186

1.6 - KubeOps 2.0.4

KubeOps 2.0.4 - Release Date 02.03.2026

Changelogs KubeOpsCtl 2.0.4

Bugfixes

+ Implemented fixes for the following CVEs affecting the `harbor` package:
    - CVE-2025-27151
    - CVE-2025-49844
+ Implemented fixes for the following CVEs affecting the `opensearch` package:
    - CVE-2025-68428
    - CVE-2025-9288
+ Implemented fixes for the following CVEs affecting the `opensearch-dashboards` package (updated to App Version 3.4.0, Chart Version 3.4.0):
    - CVE-2025-68428
    - CVE-2025-9288

1.7 - KubeOps 2.0.3

KubeOps 2.0.3 - Release Date 10.02.2026

Changelogs KubeOpsCtl 2.0.3

Bugfixes

+ Implemented fixes for the following CVEs affecting the `ingress-nginx` package:
    - CVE-2025-15467
    - CVE-2025-49794
    - CVE-2025-49796

1.8 - KubeOps 2.0.2

KubeOps 2.0.2 - Release Date 28.01.2026

Changelogs KubeOpsCtl 2.0.2

Improvements

+ Added --delete-emptydir-data while draining nodes
+ Added Repo support for the KOSI installation packages

Bugfixes

+ Fixed a bug where imagePullSecrets were not created

1.9 - KubeOps 2.0.1

KubeOps 2.0.1 - Release Date 22.12.2025

Changelogs KubeOpsCtl 2.0.1

What's new?

+ Supports KubeVIP
+ Supports Ubuntu 24.04 LTS
+ Supports RHEL 9.6
+ Supports Cilium CNI
+ Reworked Cluster creation process
+ Reworked Cluster upgrade process
+ Reworked Compliance application installation process
+ Reworked Compliance application update process
+ Introduced new value format for cluster-values.yaml
+ Introduced new value format for enterprise-values.yaml
+ Introduced self-contained air-gap packages

Improvements

+ Improved the Cluster creation process
+ Improved the Cluster upgrade process
+ Improved the Compliance application installation process
+ Improved the Compliance application update process
+ Improved the Cluster creation/upgrade speed
+ Improved the Compliance application installation/update speed
+ Updated opensearch package from 1.7.6 to 2.0.0
+ Updated opensearch-dashboard package from 1.7.6 to 2.0.0
+ Updated velero package from 1.7.6 to 2.0.0
+ Enterprise packages are only pulled if they are marked as `enabled: true` in the enterprise-values.yaml

Bugfixes

+ "Delete Node" now works over multiple zones
+ An invalid cluster-values.yaml is now caught before cluster creation
+ If no helm chart is installed the cluster creation process will still run through
+ Harbor: fixed missing template for hostname
+ nginx-ingress: updated netpols
+ opensearch-dashboard: changed issuer to cluster-issuer
+ airgap-packages are pulled from locally installed harbor registry instead of public registries

Known Issues

2 - Getting-Started

Begin your exploration of KubeOps Compliance, diving into its robust capabilities and streamlined workflow for Kubernetes infrastructure management.

2.1 - About KubeOps Compliance

This article will give you a little insight into KubeOps Compliance and its advantages.

What is KubeOps Compliance?

kubeopsctl serves as a versatile utility designed specifically to efficiently manage both the configuration and status of a cluster.

With its capabilities, users can articulate their desired cluster state in detail, outlining configurations and specifications.

Subsequently, kubeopsctl orchestrates the creation of a cluster that precisely matches the specified state, ensuring alignment between intentions and operational reality.

Why use KubeOps Compliance?

In kubeopsctl, configuration management involves defining, maintaining, and updating the desired state of a cluster, including configurations for nodes, pods, services, and other resources in the application environment.

The main goal of kubeopsctl is to match the cluster's actual state with the desired state specified in the configuration files. With a declarative model, kubeopsctl enables administrators to express their desired system setup, focusing on "what" they want rather than "how" to achieve it.

This approach improves flexibility and automation in managing complex systems, making the management process smoother and allowing easy adjustment to changing needs.

kubeopsctl uses YAML files to store configuration data in a human-readable format. These files document important metadata about the objects managed by kubeopsctl, such as pods, services, and deployments.
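
In practice, you describe the target state once in a cluster-values.yaml file and let kubeopsctl reconcile the cluster against it. A minimal sketch; the apply invocation is assumed to take the same -f flag as check-connections, and the values file is shown in the Setup Cluster section:

# optional pre-flight connectivity check (listed in the 2.1.0 changelog), then apply the desired state
kubeopsctl check-connections -f cluster-values.yaml
kubeopsctl apply -f cluster-values.yaml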

Highlights

  • creating a cluster
  • adding nodes to your cluster
  • draining nodes
  • updating single nodes
  • labeling nodes for zones
  • adding platform software into your cluster

2.2 - Prerequisites

Prerequisites

Prerequisites

A total of at least 7 machines are required:

  • one admin
  • at least three masters
  • at least three workers

Below you can see the supported operating systems with the associated minimal requirements for CPU, memory and disk storage:

OS Minimum Requirements
Red Hat Enterprise Linux 9.6 and newer 8 CPU cores, 16 GB memory, 50 GB disk storage
Ubuntu 24.04 and newer 8 CPU cores, 16 GB memory, 50 GB disk storage
  • For each worker node, an additional unformatted hard disk with at least 50 GB is required. For more information about the hard drives for rook-ceph, visit the rook-ceph prerequisites page.

  • Each machine in the cluster needs to have the same username with the same password available. Otherwise, the cluster creation and management process will fail!
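
To confirm that a worker node has such an additional unformatted disk, you can inspect its block devices beforehand; this is only a quick check, and the device names will differ per machine:

# the extra disk intended for rook-ceph should appear without any FSTYPE or mountpoint
lsblk -f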


Prerequisites for admin

The following requirements must be fulfilled on the admin machine.

  1. You need an internet connection to use the default KubeOps Registry.
registry.kubeops.net
A local registry can be used in an air-gapped environment. KubeOps only supports secure registries; if you use an insecure registry, it must be listed as an insecure registry in your registry configuration (/etc/containers/registries.conf for podman, /etc/docker/daemon.json for docker).
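
If you do use an insecure local registry with podman, a minimal sketch of the corresponding entry; the registry host and port are placeholders, adjust them to your environment:

# mark a hypothetical local registry as insecure for podman
sudo tee -a /etc/containers/registries.conf <<'EOF'

[[registry]]
location = "myregistry.local:5000"
insecure = true
EOF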

Prerequisites for each node

The following requirements must be fulfilled on each node.

  1. You have to assign lowercase, unique hostnames to every machine you are using.
We recommend using self-explanatory hostnames.

To set the hostname on your machine use the following command:

sudo hostnamectl set-hostname <name of node>
Example

Use the commands below to set the hostnames on each machine, e.g. admin, master1, worker1 (lowercase letters and numbers only).

sudo hostnamectl set-hostname admin
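# run the corresponding command on each of the other machines, for example:
sudo hostnamectl set-hostname master1
sudo hostnamectl set-hostname worker1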

  2. Optional: In order to use encrypted traffic inside the cluster, follow these steps:

For RHEL machines, you will need to import the ELRepo Secure Boot key into your system. You can find a detailed explanation and comprehensive instructions in our how-to guide Importing the Secure-Boot key.

This is only necessary if your system has Secure Boot enabled. If this isn't the case, or you don't want to use any encryption at all, you can skip this step.
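
To check whether Secure Boot is actually enabled on a machine, you can query the firmware state (a quick check; requires the mokutil tool to be installed):

mokutil --sb-state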

2.3 - Prepare Cluster

Prepare Cluster

Prepare

1. Include package repo on all nodes

To easily install the kosi and kubeopsctl packages, you should add the KubeOps package repo to your operating system's package manager.

# Ubuntu
wget https://packagerepo.kubeops.net/pgp-key.public
cat pgp-key.public | sudo gpg --dearmor -o /usr/share/keyrings/kubeops.gpg
echo 'deb [arch=amd64 signed-by=/usr/share/keyrings/kubeops.gpg] https://packagerepo.kubeops.net/deb stable main' | sudo tee /etc/apt/sources.list.d/kubeops.list
sudo apt update

# RHEL
sudo dnf config-manager --add-repo https://packagerepo.kubeops.net/rpm/kubeops.repo

2. Special operating system adaptations on all nodes

You must prepare all nodes with special operating system adaptations.

# Ubuntu: remove unattended upgrades on all nodes!
sudo apt remove unattended-upgrades

# RHEL: no special adaptations required

3. Distribute ssh-keys on all nodes

The hostnames of all nodes must be resolvable via DNS.

If you do not run a DNS server, the easiest solution is to enter the IP addresses and hostnames in /etc/hosts.

# IMPORTANT: The following command has to be adapted so that every admin, masternode and workernode is included
sudo tee /etc/hosts <<EOL_ETC_HOSTS
127.0.0.1 localhost
<admin ip> <admin hostname>
<master1 ip> <master1 hostname>
<master2 ip> <master2 hostname>
<master3 ip> <master3 hostname>
...
<worker1 ip> <worker1 hostname>
<worker2 ip> <worker2 hostname>
<worker3 ip> <worker3 hostname>
...
EOL_ETC_HOSTS
Full Example
sudo tee /etc/hosts <<EOL_ETC_HOSTS
127.0.0.1 localhost
10.2.10.10 admin
10.2.10.110 master1
10.2.10.120 master2
10.2.10.130 master3
10.2.10.210 worker1
10.2.10.220 worker2
10.2.10.230 worker3
EOL_ETC_HOSTS
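
Afterwards you can verify that every hostname resolves as expected (the hostnames below follow the full example above):

getent hosts admin master1 master2 master3 worker1 worker2 worker3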

Create ssh-keys on all nodes

ssh-keygen -q -t ed25519 -f ~/.ssh/id_ed25519 -N ""

Copy public ssh-key on all nodes

# IMPORTANT: The following command has to be adapted so that every admin, masternode and workernode is included
ssh-copy-id -i ~/.ssh/id_ed25519 <admin hostname>
ssh-copy-id -i ~/.ssh/id_ed25519 <master1 hostname>
ssh-copy-id -i ~/.ssh/id_ed25519 <master2 hostname>
ssh-copy-id -i ~/.ssh/id_ed25519 <master3 hostname>
...
ssh-copy-id -i ~/.ssh/id_ed25519 <worker1 hostname>
ssh-copy-id -i ~/.ssh/id_ed25519 <worker2 hostname>
ssh-copy-id -i ~/.ssh/id_ed25519 <worker3 hostname>
...
Full Example
ssh-copy-id -i ~/.ssh/id_ed25519 admin
ssh-copy-id -i ~/.ssh/id_ed25519 master1
ssh-copy-id -i ~/.ssh/id_ed25519 master2
ssh-copy-id -i ~/.ssh/id_ed25519 master3
ssh-copy-id -i ~/.ssh/id_ed25519 worker1
ssh-copy-id -i ~/.ssh/id_ed25519 worker2
ssh-copy-id -i ~/.ssh/id_ed25519 worker3

Scan host-keys with name and IP-address on all nodes

# IMPORTANT: The following command has to be adapted so that every admin, masternode and workernode is included
ssh-keyscan <admin hostname> >> ~/.ssh/known_hosts
ssh-keyscan <admin ip> >> ~/.ssh/known_hosts
ssh-keyscan <master1 hostname> >> ~/.ssh/known_hosts
ssh-keyscan <master1 ip> >> ~/.ssh/known_hosts
ssh-keyscan <master2 hostname> >> ~/.ssh/known_hosts
ssh-keyscan <master2 ip> >> ~/.ssh/known_hosts
ssh-keyscan <master3 hostname> >> ~/.ssh/known_hosts
ssh-keyscan <master3 ip> >> ~/.ssh/known_hosts
...
ssh-keyscan <worker1 hostname> >> ~/.ssh/known_hosts
ssh-keyscan <worker1 ip> >> ~/.ssh/known_hosts
ssh-keyscan <worker2 hostname> >> ~/.ssh/known_hosts
ssh-keyscan <worker2 ip> >> ~/.ssh/known_hosts
ssh-keyscan <worker3 hostname> >> ~/.ssh/known_hosts
ssh-keyscan <worker3 ip> >> ~/.ssh/known_hosts
...
Full Example
ssh-keyscan admin >> ~/.ssh/known_hosts
ssh-keyscan 10.2.10.10 >> ~/.ssh/known_hosts
ssh-keyscan master1 >> ~/.ssh/known_hosts
ssh-keyscan 10.2.10.110 >> ~/.ssh/known_hosts
ssh-keyscan master2 >> ~/.ssh/known_hosts
ssh-keyscan 10.2.10.120 >> ~/.ssh/known_hosts
ssh-keyscan master3 >> ~/.ssh/known_hosts
ssh-keyscan 10.2.10.130 >> ~/.ssh/known_hosts
ssh-keyscan worker1 >> ~/.ssh/known_hosts
ssh-keyscan 10.2.10.210 >> ~/.ssh/known_hosts
ssh-keyscan worker2 >> ~/.ssh/known_hosts
ssh-keyscan 10.2.10.220 >> ~/.ssh/known_hosts
ssh-keyscan worker3 >> ~/.ssh/known_hosts
ssh-keyscan 10.2.10.230 >> ~/.ssh/known_hosts

Test the SSH login without a password.

Logging in from each node should be possible without a password.

# IMPORTANT: The following command has to be adapted so that every admin, masternode and workernode is included
ssh <admin hostname> exit
ssh <admin ip> exit
ssh <master1 hostname> exit
ssh <master1 ip> exit
ssh <master2 hostname> exit
ssh <master2 ip> exit
ssh <master3 hostname> exit
ssh <master3 ip> exit
...
ssh <worker1 hostname> exit
ssh <worker1 ip> exit
ssh <worker2 hostname> exit
ssh <worker2 ip> exit
ssh <worker3 hostname> exit
ssh <worker3 ip> exit
...
Full Example
ssh admin exit
ssh 10.2.10.10 exit
ssh master1 exit
ssh 10.2.10.110 exit
ssh master2 exit
ssh 10.2.10.120 exit
ssh master3 exit
ssh 10.2.10.130 exit
ssh worker1 exit
ssh 10.2.10.210 exit
ssh worker2 exit
ssh 10.2.10.220 exit
ssh worker3 exit
ssh 10.2.10.230 exit

4. Distribute SUDOERS on all nodes

Replace the username “myuser” with your username and copy the sudoers file to /etc/sudoers.d/<username> on all nodes.
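
Once you have saved the file shown below on a node, e.g. as kubeops-sudoers (the file name is a placeholder), a minimal sketch of installing and validating it:

# copy the prepared sudoers file into place and check its syntax
sudo cp kubeops-sudoers /etc/sudoers.d/myuser
sudo chmod 440 /etc/sudoers.d/myuser
sudo visudo -c -f /etc/sudoers.d/myuser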

# Preparation
myuser ALL=(root) NOPASSWD: /usr/bin/gpg --dearmor -o /usr/share/keyrings/kubeops.gpg
myuser ALL=(root) NOPASSWD: /usr/bin/tee /etc/apt/sources.list.d/kubeops.list
myuser ALL=(root) NOPASSWD: /usr/bin/apt update
myuser ALL=(root) NOPASSWD: /usr/bin/apt-get update
myuser ALL=(root) NOPASSWD: /usr/bin/apt remove unattended-upgrades
myuser ALL=(root) NOPASSWD: /usr/bin/tee /etc/hosts
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --remove *
myuser ALL=(root) NOPASSWD: /usr/bin/lsof /var/lib/dpkg/lock*

# Setup
myuser ALL=(root) NOPASSWD: /usr/bin/apt install -y kosi*
myuser ALL=(root) NOPASSWD: /usr/bin/apt-get install -y kosi*
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --install kosi*.deb
myuser ALL=(root) NOPASSWD: /usr/bin/apt install -y kubeopsctl*
myuser ALL=(root) NOPASSWD: /usr/bin/apt-get install -y kubeopsctl*
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --install kubeopsctl*.deb

# Calico image import
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images import calico-images.tar

# Calico image deletion
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images delete localhost\:5001/calico/*
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images delete localhost\:5001/tigera/*

# Cilium airgap
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images import cilium-images.tar
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images delete localhost\:5001/cilium/*
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images delete localhost\:5001/spiffe/*

# kube-vip image import
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images import --platform amd64 kube-vip-image.tar
myuser ALL=(root) NOPASSWD: /bin/cp kube-vip.yaml /etc/kubernetes/manifests/kube-vip.yaml

# systemctl commands
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl stop kubelet
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl disable kubelet
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl stop containerd
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl disable containerd

# Kubeadm init
myuser ALL=(root) NOPASSWD: /usr/bin/kubeadm init --upload-certs --config cluster-config.yaml

# kubeadm reset
myuser ALL=(root) NOPASSWD: /usr/bin/kubeadm reset --force

# remove folders
myuser ALL=(root) NOPASSWD: /bin/rm -fr /etc/containerd
myuser ALL=(root) NOPASSWD: /bin/rm -fr /etc/kubernetes
myuser ALL=(root) NOPASSWD: /bin/rm -fr /usr/local/etc/haproxy
myuser ALL=(root) NOPASSWD: /bin/rm -fr /var/lib/etcd
myuser ALL=(root) NOPASSWD: /bin/rm -fr /var/lib/kubelet
myuser ALL=(root) NOPASSWD: /bin/rm -fr /var/kubeops

# reboot
myuser ALL=(root) NOPASSWD: /sbin/reboot now

# disable swap
myuser ALL=(root) NOPASSWD: /usr/sbin/swapoff --all
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl mask swap.target
myuser ALL=(root) NOPASSWD: /bin/sed -e * -i /etc/fstab

# enable UFW
myuser ALL=(root) NOPASSWD: /usr/sbin/ufw reset
myuser ALL=(root) NOPASSWD: /usr/sbin/ufw allow 6443/tcp
myuser ALL=(root) NOPASSWD: /usr/sbin/ufw allow 2379\:2380/tcp
myuser ALL=(root) NOPASSWD: /usr/sbin/ufw allow 10250/tcp
myuser ALL=(root) NOPASSWD: /usr/sbin/ufw allow 10259/tcp
myuser ALL=(root) NOPASSWD: /usr/sbin/ufw allow 10257/tcp
myuser ALL=(root) NOPASSWD: /usr/sbin/ufw allow 10256/tcp
myuser ALL=(root) NOPASSWD: /usr/sbin/ufw allow 30000\:32767/tcp
myuser ALL=(root) NOPASSWD: /usr/sbin/ufw allow 179/tcp
myuser ALL=(root) NOPASSWD: /usr/sbin/ufw allow 4789/udp
myuser ALL=(root) NOPASSWD: /usr/sbin/ufw allow 5473/tcp
myuser ALL=(root) NOPASSWD: /usr/sbin/ufw allow 51820/udp
myuser ALL=(root) NOPASSWD: /usr/sbin/ufw allow 22/tcp
myuser ALL=(root) NOPASSWD: /usr/sbin/ufw allow 5000/tcp
myuser ALL=(root) NOPASSWD: /usr/sbin/ufw allow 5001/tcp
myuser ALL=(root) NOPASSWD: /usr/sbin/ufw allow 7443/tcp
myuser ALL=(root) NOPASSWD: /usr/sbin/ufw logging low
myuser ALL=(root) NOPASSWD: /usr/sbin/ufw enable
myuser ALL=(root) NOPASSWD: /usr/sbin/ufw reload
myuser ALL=(root) NOPASSWD: /usr/sbin/ufw status
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl restart systemd-networkd
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl disable --now ufw

# nftables enable/restart
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl enable --now nftables
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl restart nftables

# copy nftables configs
myuser ALL=(root) NOPASSWD: /bin/cp nftables.conf /etc/nftables.conf

# firewalld control
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl stop firewalld
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl disable firewalld
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl mask firewalld

# Install/update Helm
myuser ALL=(root) NOPASSWD: /bin/mkdir -p /usr/bin
myuser ALL=(root) NOPASSWD: /bin/cp helm /usr/bin/
myuser ALL=(root) NOPASSWD: /bin/chmod +x /usr/bin/helm

# Delete Helm
myuser ALL=(root) NOPASSWD: /bin/rm -f /usr/bin/helm

# k9s
myuser ALL=(root) NOPASSWD: /bin/cp k9s /usr/bin/
myuser ALL=(root) NOPASSWD: /bin/chmod +x /usr/bin/k9s
myuser ALL=(root) NOPASSWD: /bin/rm -f /usr/bin/k9s

# crictl pull images
myuser ALL=(root) NOPASSWD:SETENV: /usr/bin/crictl pull *

# kubeadm init phase and kubeadm token create commands
myuser ALL=(root) NOPASSWD: /usr/bin/kubeadm init phase upload-certs --upload-certs
myuser ALL=(root) NOPASSWD: /usr/bin/kubeadm token create --print-join-command --certificate-key *

# kubeadm join
myuser ALL=(root) NOPASSWD: /usr/bin/kubeadm join *

# kubernetes admin.conf handling
myuser ALL=(root) NOPASSWD: /bin/cp /etc/kubernetes/admin.conf /home/*/.kube/config
myuser ALL=(root) NOPASSWD: /bin/chown [0-9]*\:[0-9]* /home/*/.kube/config

# scheduler config copy
myuser ALL=(root) NOPASSWD: /bin/cp scheduler-config.yaml /etc/kubernetes/scheduler-config.yaml

# scheduler manifest patching
myuser ALL=(root) NOPASSWD: /usr/bin/grep -q * /etc/kubernetes/manifests/kube-scheduler.yaml
myuser ALL=(root) NOPASSWD: /usr/bin/sed -i * /etc/kubernetes/manifests/kube-scheduler.yaml

# restart services
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl restart containerd
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl restart kubelet
myuser ALL=(root) NOPASSWD: /usr/bin/kubeadm token create --print-join-command

# create kubernetes manifests folder
myuser ALL=(root) NOPASSWD: /bin/mkdir -p /etc/kubernetes/manifests

# modprobe br_netfilter
myuser ALL=(root) NOPASSWD: /bin/cp br_netfilter.conf /etc/modules-load.d/br_netfilter.conf
myuser ALL=(root) NOPASSWD: /bin/chmod 644 /etc/modules-load.d/br_netfilter.conf
myuser ALL=(root) NOPASSWD: /sbin/modprobe br_netfilter
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl daemon-reload

# kubevip
myuser ALL=(root) NOPASSWD: /usr/bin/ctr -n k8s.io images pull *
myuser ALL=(root) NOPASSWD: /usr/bin/kubeadm upgrade plan --ignore-preflight-errors=all
myuser ALL=(root) NOPASSWD: /usr/bin/kubeadm upgrade apply *

# upgrade nodes
myuser ALL=(root) NOPASSWD: /bin/cp /home/*/.kube/config /etc/kubernetes/admin.conf
myuser ALL=(root) NOPASSWD: /usr/bin/kubeadm upgrade node

# kubernetes-tools-packages
myuser ALL=(root) NOPASSWD: /bin/mkdir -p /opt/cni/bin
myuser ALL=(root) NOPASSWD: /bin/tar xzf cni-* -C /opt/cni/bin
myuser ALL=(root) NOPASSWD: /bin/tar xzf crictl-* -C /usr/bin
myuser ALL=(root) NOPASSWD: /bin/rm -f /etc/cni/net.d/87-podman-bridge.conflist

myuser ALL=(root) NOPASSWD: /usr/bin/dpkg install -y kubeadm*
myuser ALL=(root) NOPASSWD: /bin/cp kubeadm /usr/bin/kubeadm
myuser ALL=(root) NOPASSWD: /bin/chmod +x /usr/bin/kubeadm
myuser ALL=(root) NOPASSWD: /bin/test -f /usr/bin/kubelet

myuser ALL=(root) NOPASSWD: /bin/mv /usr/bin/kubelet /usr/bin/kubelet_*
myuser ALL=(root) NOPASSWD: /bin/cp kubelet /usr/bin/kubelet
myuser ALL=(root) NOPASSWD: /bin/chmod +x /usr/bin/kubelet

myuser ALL=(root) NOPASSWD: /usr/bin/dpkg install -y kubectl*
myuser ALL=(root) NOPASSWD: /bin/cp kubectl /usr/bin/kubectl
myuser ALL=(root) NOPASSWD: /bin/chmod +x /usr/bin/kubectl

myuser ALL=(root) NOPASSWD: /usr/bin/dpkg install -y kubelet*
myuser ALL=(root) NOPASSWD: /bin/cp kubelet.service /usr/lib/systemd/system/kubelet.service
myuser ALL=(root) NOPASSWD: /bin/mkdir -p /usr/lib/systemd/system/kubelet.service.d
myuser ALL=(root) NOPASSWD: /bin/cp 10-kubeadm.conf /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl enable --now kubelet

myuser ALL=(root) NOPASSWD: /usr/bin/apt-mark hold kubelet kubeadm kubectl
myuser ALL=(root) NOPASSWD: /usr/bin/apt-mark unhold kubelet kubeadm kubectl
myuser ALL=(root) NOPASSWD: /usr/bin/apt install -y kubelet* kubeadm* kubectl*
myuser ALL=(root) NOPASSWD: /usr/bin/apt install -y kubelet* kubeadm* kubectl*

# kubernetes-tools-packages-airgap
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images import kubernetes-images.tar
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images delete localhost\:5001/kube-apiserver\:v*
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images delete localhost\:5001/kube-controller-manager\:v*
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images delete localhost\:5001/kube-scheduler\:v*
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images delete localhost\:5001/kube-proxy\:v*
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images delete localhost\:5001/coredns/coredns\:v*
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images delete localhost\:5001/etcd\:*

# Allow HAProxy and load-balancer
myuser ALL=(root) NOPASSWD: /bin/mkdir -p /usr/local/etc/haproxy
myuser ALL=(root) NOPASSWD: /bin/cp haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
myuser ALL=(root) NOPASSWD: /bin/cp load-balancer.yaml /etc/kubernetes/manifests/load-balancer.yaml
myuser ALL=(root) NOPASSWD: /bin/mkdir -p /mnt/registry
myuser ALL=(root) NOPASSWD: /bin/cp docker-registry.yaml /etc/kubernetes/manifests/docker-registry.yaml
myuser ALL=(root) NOPASSWD: /bin/rm /etc/kubernetes/manifests/docker-registry.yaml
myuser ALL=(root) NOPASSWD: /bin/rm -rf /mnt/registry
myuser ALL=(root) NOPASSWD: /usr/bin/crictl --namespace k8s.io images import local-registry-image.tar
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images import local-registry-image.tar
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images import --platform amd64 load-balancer-image.tar

# Allow multus-airgap
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images import multus-images.tar
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images delete localhost\:5001/k8snetworkplumbingwg/multus-cni\:snapshot-thick

# Podman Installation of local .deb-Package
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --install passt_*.deb
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --install conmon_*.deb
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --install catatonit_*.deb
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --install netavark_*.deb
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --install aardvark-dns_*.deb
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --install golang-github-containers-image_*.deb
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --install golang-github-containers-common_*.deb
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --install containernetworking-plugins_*.deb
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --install libsubid4_*.deb
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --install uidmap_*.deb
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --install libslirp0_*.deb
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --install slirp4netns_*.deb
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --install libyajl2_*.deb
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --install crun_*.deb
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --install fuse-overlayfs_*.deb
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --install buildah_*.deb
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --install podman_*.deb

# Podman Remove of local .deb-Package
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --remove podman buildah fuse-overlayfs crun libyajl2 slirp4netns libslirp0 uidmap libsubid4 containernetworking-plugins golang-github-containers-common golang-github-containers-image aardvark-dns netavark catatonit conmon passt

# Podman Installation & Update with apt
myuser ALL=(root) NOPASSWD: /usr/bin/apt install -y podman*
myuser ALL=(root) NOPASSWD: /usr/bin/apt install -y podman
myuser ALL=(root) NOPASSWD: /usr/bin/apt update
myuser ALL=(root) NOPASSWD: /usr/bin/apt remove -y podman

# Allow prepare-node
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --install */pia/kosi_*

# runtime setup
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --install conntrack_*.deb
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --install runc_*.deb
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --install containerd_*.deb
myuser ALL=(root) NOPASSWD: /usr/bin/apt install -y containerd

# Enable repo and install containerd.io
myuser ALL=(root) NOPASSWD: /usr/bin/apt install -y containerd.io
myuser ALL=(root) NOPASSWD: /usr/bin/apt install -y conntrack-tools
myuser ALL=(root) NOPASSWD: /usr/bin/apt install -y iproute-tc

# Remove packages
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --remove containerd

# Create directories
myuser ALL=(root) NOPASSWD: /bin/mkdir -p /etc/containerd
myuser ALL=(root) NOPASSWD: /bin/mkdir -p /etc/containerd/certs.d

# Configure containerd
myuser ALL=(root) NOPASSWD: /usr/bin/tee /etc/containerd/config.toml
myuser ALL=(root) NOPASSWD: /bin/sed -i * /etc/containerd/config.toml

# Allow containerd insecure registry
myuser ALL=(root) NOPASSWD: /usr/bin/sed -i -e * /etc/containerd/config.toml

# Enable and start containerd service
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl enable --now containerd

# Allow managing k8s sysctl configuration
myuser ALL=(root) NOPASSWD: /bin/cp k8s.conf /etc/sysctl.d/k8s.conf
myuser ALL=(root) NOPASSWD: /usr/sbin/sysctl --system
myuser ALL=(root) NOPASSWD: /bin/rm /etc/sysctl.d/k8s.conf

# Allow WireGuard package management
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --install wireguard_*.deb wireguard-tools_*.deb
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --install wireguard-tools-*.deb systemd-resolved-*.deb
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --remove wireguard wireguard-tools
myuser ALL=(root) NOPASSWD: /usr/bin/apt install -y wireguard-tools*
myuser ALL=(root) NOPASSWD: /usr/bin/apt install -y systemd-resolved*

# velero install on admin
myuser ALL=(root) NOPASSWD: /usr/bin/cp binary/velero /usr/bin
# Preparation
myuser ALL=(root) NOPASSWD: /usr/bin/dnf config-manager --add-repo https\://packagerepo.kubeops.net/rpm/kubeops.repo
myuser ALL=(root) NOPASSWD: /usr/bin/tee /etc/hosts

# Setup
myuser ALL=(root) NOPASSWD: /usr/bin/dnf install -y --disableexcludes=kubeops-repo kosi*, !/usr/bin/dnf install -y --disableexcludes=kubeops-repo kosi*[[\:space\:]]*
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --install -v kosi*.rpm, !/usr/bin/rpm --install -v kosi*[[\:space\:]]*.rpm
myuser ALL=(root) NOPASSWD: /usr/bin/dnf install -y --disableexcludes=kubeops-repo kubeopsctl*, !/usr/bin/dnf install -y --disableexcludes=kubeops-repo kubeopsctl*[[\:space\:]]*
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --install -v kubeopsctl*.rpm, !/usr/bin/rpm --install -v kubeopsctl*[[\:space\:]]*.rpm

# Calico image import
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images import calico-images.tar

# Calico image deletion
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images delete localhost\:5001/calico/*, !/usr/bin/ctr --namespace k8s.io images delete localhost\:5001/calico/*[[\:space\:]]*
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images delete localhost\:5001/tigera/*, !/usr/bin/ctr --namespace k8s.io images delete localhost\:5001/tigera/*[[\:space\:]]*

# Cilium image import
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images import cilium-images.tar

# Cilium image delete
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images delete localhost\:5001/cilium/*, !/usr/bin/ctr --namespace k8s.io images delete localhost\:5001/cilium/*[[\:space\:]]*
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images delete localhost\:5001/spiffe/*, !/usr/bin/ctr --namespace k8s.io images delete localhost\:5001/spiffe/*[[\:space\:]]*

# kube-vip image import
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images import --platform amd64 kube-vip-image.tar
myuser ALL=(root) NOPASSWD: /bin/cp kube-vip.yaml /etc/kubernetes/manifests/kube-vip.yaml

# systemctl commands
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl stop kubelet
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl disable kubelet
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl stop containerd
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl disable containerd

# kubeadm reset
myuser ALL=(root) NOPASSWD: /usr/bin/kubeadm reset --force
myuser ALL=(root) NOPASSWD: /usr/bin/kubeadm init --upload-certs --config cluster-config.yaml

# remove folders
myuser ALL=(root) NOPASSWD: /bin/rm -fr /etc/containerd
myuser ALL=(root) NOPASSWD: /bin/rm -fr /etc/kubernetes
myuser ALL=(root) NOPASSWD: /bin/rm -fr /usr/local/etc/haproxy
myuser ALL=(root) NOPASSWD: /bin/rm -fr /var/lib/etcd
myuser ALL=(root) NOPASSWD: /bin/rm -fr /var/lib/kubelet
myuser ALL=(root) NOPASSWD: /bin/rm -fr /var/kubeops

# reboot
myuser ALL=(root) NOPASSWD: /sbin/reboot now

# disable swap
myuser ALL=(root) NOPASSWD: /usr/sbin/swapoff --all
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl mask swap.target
myuser ALL=(root) NOPASSWD: /bin/sed -e * -i /etc/fstab

# nftables enable/restart
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl enable --now nftables
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl restart nftables

# copy nftables configs
myuser ALL=(root) NOPASSWD: /bin/cp nftables.conf /etc/sysconfig/nftables.conf

# firewalld control
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl stop firewalld
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl disable firewalld
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl mask firewalld

# Install/update Helm
myuser ALL=(root) NOPASSWD: /bin/mkdir -p /usr/bin
myuser ALL=(root) NOPASSWD: /bin/cp helm /usr/bin/
myuser ALL=(root) NOPASSWD: /bin/chmod +x /usr/bin/helm
myuser ALL=(root) NOPASSWD: /usr/bin/dnf install -y helm* 

# Delete Helm
myuser ALL=(root) NOPASSWD: /bin/rm -f /usr/bin/helm

# k9s/package.kosi
myuser ALL=(root) NOPASSWD: /bin/cp k9s /usr/bin/
myuser ALL=(root) NOPASSWD: /bin/chmod +x /usr/bin/k9s
myuser ALL=(root) NOPASSWD: /bin/rm -f /usr/bin/k9s

# crictl pull images
myuser ALL=(root) NOPASSWD:SETENV: /usr/bin/crictl pull *

# ssh remote kubeadm commands
myuser ALL=(root) NOPASSWD: /usr/bin/kubeadm init phase upload-certs --upload-certs
myuser ALL=(root) NOPASSWD: /usr/bin/kubeadm token create --print-join-command --certificate-key *

# local execution of kubeadm join
myuser ALL=(root) NOPASSWD: /usr/bin/kubeadm join *

# kubernetes admin.conf handling
myuser ALL=(root) NOPASSWD: /bin/cp /etc/kubernetes/admin.conf /home/*/.kube/config
myuser ALL=(root) NOPASSWD: /bin/chown [0-9]*\:[0-9]* /home/*/.kube/config

# scheduler config copy
myuser ALL=(root) NOPASSWD: /bin/cp scheduler-config.yaml /etc/kubernetes/scheduler-config.yaml

# scheduler manifest patching
myuser ALL=(root) NOPASSWD: /bin/grep -q * /etc/kubernetes/manifests/kube-scheduler.yaml
myuser ALL=(root) NOPASSWD: /usr/bin/sed -i * /etc/kubernetes/manifests/kube-scheduler.yaml

# restart services
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl restart containerd
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl restart kubelet
myuser ALL=(root) NOPASSWD: /usr/bin/kubeadm token create --print-join-command

# create kubernetes manifests folder
myuser ALL=(root) NOPASSWD: /bin/mkdir -p /etc/kubernetes/manifests

# modprobe br_netfilter
myuser ALL=(root) NOPASSWD: /bin/cp br_netfilter.conf /etc/modules-load.d/br_netfilter.conf
myuser ALL=(root) NOPASSWD: /bin/chmod 644 /etc/modules-load.d/br_netfilter.conf
myuser ALL=(root) NOPASSWD: /sbin/modprobe br_netfilter
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl daemon-reload

# kubevip
myuser ALL=(root) NOPASSWD: /usr/bin/ctr -n k8s.io images pull *, !/usr/bin/ctr -n k8s.io images pull *[[\:space\:]]*
myuser ALL=(root) NOPASSWD: /usr/bin/kubeadm upgrade plan --ignore-preflight-errors=all
myuser ALL=(root) NOPASSWD: /usr/bin/kubeadm upgrade apply *, !/usr/bin/kubeadm upgrade apply *[[\:space\:]]*

# kubeadm upgrade node
myuser ALL=(root) NOPASSWD: /bin/cp /home/*/.kube/config /etc/kubernetes/admin.conf
myuser ALL=(root) NOPASSWD: /usr/bin/kubeadm upgrade node

# kubernetes-tools-packages
myuser ALL=(root) NOPASSWD: /bin/mkdir -p /opt/cni/bin
myuser ALL=(root) NOPASSWD: /bin/tar xzf cni-* -C /opt/cni/bin, !/bin/tar xzf cni-*[[\:space\:]]* -C /opt/cni/bin
myuser ALL=(root) NOPASSWD: /bin/tar xzf crictl-* -C /usr/bin, !/bin/tar xzf crictl-*[[\:space\:]]* -C /usr/bin
myuser ALL=(root) NOPASSWD: /usr/bin/dnf install -y kubeadm*, !/usr/bin/dnf install -y kubeadm*[[\:space\:]]*, !/usr/bin/dnf install -y kubeadm*.rpm
myuser ALL=(root) NOPASSWD: /bin/cp kubeadm /usr/bin/kubeadm
myuser ALL=(root) NOPASSWD: /bin/chmod +x /usr/bin/kubeadm
myuser ALL=(root) NOPASSWD: /bin/test -f /usr/bin/kubelet
myuser ALL=(root) NOPASSWD: /bin/mv /usr/bin/kubelet /usr/bin/kubelet_*, !/bin/mv /usr/bin/kubelet /usr/bin/kubelet_*[[\:space\:]]*
myuser ALL=(root) NOPASSWD: /bin/cp kubelet /usr/bin/kubelet
myuser ALL=(root) NOPASSWD: /bin/chmod +x /usr/bin/kubelet
myuser ALL=(root) NOPASSWD: /usr/bin/dnf install -y kubectl*, !/usr/bin/dnf install -y kubectl*[[\:space\:]]*, !/usr/bin/dnf install -y kubectl*.rpm
myuser ALL=(root) NOPASSWD: /bin/cp kubectl /usr/bin/kubectl
myuser ALL=(root) NOPASSWD: /bin/chmod +x /usr/bin/kubectl
myuser ALL=(root) NOPASSWD: /usr/bin/dnf install -y kubelet*, !/usr/bin/dnf install -y kubelet*[[\:space\:]]*, !/usr/bin/dnf install -y kubelet*.rpm
myuser ALL=(root) NOPASSWD: /bin/cp kubelet.service /usr/lib/systemd/system/kubelet.service
myuser ALL=(root) NOPASSWD: /bin/mkdir -p /usr/lib/systemd/system/kubelet.service.d
myuser ALL=(root) NOPASSWD: /bin/cp 10-kubeadm.conf /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl enable --now kubelet
myuser ALL=(root) NOPASSWD: /usr/bin/dnf install -y --disableexcludes=kubeops-repo kubelet-* kubeadm-* kubectl-*

# kubernetes-tools-packages-airgap
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images import kubernetes-images.tar
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images delete localhost\:5001/kube-apiserver\:v*
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images delete localhost\:5001/kube-controller-manager\:v*
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images delete localhost\:5001/kube-scheduler\:v*
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images delete localhost\:5001/kube-proxy\:v*
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images delete localhost\:5001/coredns/coredns\:v*
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images delete localhost\:5001/etcd\:*

# Allow HAProxy and load-balancer
myuser ALL=(root) NOPASSWD: /bin/mkdir -p /usr/local/etc/haproxy
myuser ALL=(root) NOPASSWD: /bin/cp haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
myuser ALL=(root) NOPASSWD: /bin/cp load-balancer.yaml /etc/kubernetes/manifests/load-balancer.yaml
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images import --platform amd64 load-balancer-image.tar
myuser ALL=(root) NOPASSWD: /bin/mkdir -p /mnt/registry
myuser ALL=(root) NOPASSWD: /bin/cp docker-registry.yaml /etc/kubernetes/manifests/docker-registry.yaml
myuser ALL=(root) NOPASSWD: /bin/rm /etc/kubernetes/manifests/docker-registry.yaml
myuser ALL=(root) NOPASSWD: /bin/rm -rf /mnt/registry
myuser ALL=(root) NOPASSWD: /usr/bin/crictl --namespace k8s.io images import local-registry-image.tar
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images import local-registry-image.tar

# Allow multus-airgap
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images import multus-images.tar
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images delete localhost\:5001/k8snetworkplumbingwg/multus-cni\:snapshot-thick

# Podman Installation of local .rpm-Package
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --install passt-*.rpm
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --install passt-selinux-*.rpm
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --install aardvark-dns-*.rpm
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --install netavark-*.rpm
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --install container-selinux-*.rpm
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --install libnet-*.rpm
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --install criu-*.rpm
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --install criu-libs-*.rpm
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --install libslirp-*.rpm
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --install slirp4netns-*.rpm
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --install yajl-*.rpm
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --install crun-*.rpm
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --install containers-common-*.rpm
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --install fuse-overlayfs-*.rpm
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --install shadow-utils-subid-*.rpm
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --install conmon-*.rpm
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --install podman-*.rpm
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --install iproute-*.rpm iproute-tc-*.rpm
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --install containerd.io-*.rpm
 
# Podman Remove of local .rpm-Package
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --erase podman shadow-utils-subid fuse-overlayfs crun containers-common yajl slirp4netns libslirp criu-libs criu libnet container-selinux netavark aardvark-dns passt passt-selinux

# Podman Installation & Update with dnf
myuser ALL=(root) NOPASSWD: /usr/bin/dnf install -y podman*, !/usr/bin/dnf install -y podman*[[\:space\:]]*
myuser ALL=(root) NOPASSWD: /usr/bin/dnf update -y podman*,!/usr/bin/dnf update -y podman*[[\:space\:]]*
myuser ALL=(root) NOPASSWD: /usr/bin/dnf remove -y podman

# Allow prepare-node
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --install */pia/kosi-*

# Allow installation of container-runtime
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --install conntrack-tools-*.rpm, !/usr/bin/rpm --install conntrack-tools-*[\:space\:]]*.rpm
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --install conntrack-tools-*.rpm libnetfilter_cthelper-*.rpm libnetfilter_cttimeout-*.rpm libnetfilter_queue-*.rpm
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --install iproute-tc-*.rpm, !/usr/bin/rpm --install iproute-tc-*[\:space\:]]*.rpm

# Enable repo and install containerd.io
myuser ALL=(root) NOPASSWD: /usr/bin/dnf install -y containerd.io
myuser ALL=(root) NOPASSWD: /usr/bin/dnf install -y conntrack-tools
myuser ALL=(root) NOPASSWD: /usr/bin/dnf install -y iproute-tc

# Remove RPM packages
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --erase containerd.io

# Create directories
myuser ALL=(root) NOPASSWD: /bin/mkdir -p /etc/containerd
myuser ALL=(root) NOPASSWD: /bin/mkdir -p /etc/containerd/certs.d
myuser ALL=(root) NOPASSWD: /bin/mkdir -p /etc/systemd/system/containerd.service.d/
myuser ALL=(root) NOPASSWD: /usr/bin/tee /etc/systemd/system/containerd.service.d/override.conf*

# Configure containerd
myuser ALL=(root) NOPASSWD: /usr/bin/tee /etc/containerd/config.toml
myuser ALL=(root) NOPASSWD: /bin/sed -i * /etc/containerd/config.toml

# Allow containerd insecure registry
myuser ALL=(root) NOPASSWD: /usr/bin/sed -i -e * /etc/containerd/config.toml

# Enable and start containerd service
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl enable --now containerd

# Allow managing k8s sysctl configuration
myuser ALL=(root) NOPASSWD: /bin/cp k8s.conf /etc/sysctl.d/k8s.conf
myuser ALL=(root) NOPASSWD: /usr/sbin/sysctl --system
myuser ALL=(root) NOPASSWD: /bin/rm /etc/sysctl.d/k8s.conf

# Allow WireGuard package management
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --install wireguard-tools-*.rpm systemd-resolved-*.rpm
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --erase wireguard-tools
myuser ALL=(root) NOPASSWD: /usr/bin/dnf install -y wireguard-tools*, !/usr/bin/dnf install -y wireguard-tools*[\:space\:]]*, !/usr/bin/dnf install -y wireguard-tools*.rpm
myuser ALL=(root) NOPASSWD: /usr/bin/dnf install -y systemd-resolved*, !/usr/bin/dnf install -y systemd-resolved*[\:space\:]]*, !/usr/bin/dnf install -y systemd-resolved*.rpm

# velero install on admin
myuser ALL=(root) NOPASSWD: /usr/bin/cp binary/velero /usr/bin

5. Configure time synchronization on all nodes

  1. Install chrony

Run on every cluster node:

# Ubuntu
sudo apt install -y chrony
sudo systemctl enable --now chrony

# RHEL
sudo dnf install -y chrony
sudo systemctl enable --now chronyd

  2. Configure NTP servers

Edit /etc/chrony.conf:

server pool.ntp.org iburst
# or your internal NTP servers:
# server 10.2.10.10 iburst

Apply changes:

sudo systemctl restart chronyd

  3. Verify synchronization
chronyc tracking
chronyc sources -v

Expected:

Stratum ≤ 3
Leap Status = Normal
System time offset < 10 ms
One source marked with *

Example:

Reference ID    : 83BC03DC (ntp0.rrze.uni-erlangen.de)
Stratum         : 2
System time     : 0.000718 seconds slow
Last offset     : -0.000214 seconds
RMS offset      : 0.000093 seconds
Leap status     : Normal

Meaning:

  • Reference ID → you are synchronizing with ntp0.rrze.uni-erlangen.de → good
  • Stratum 2 → completely normal for public NTP servers
  • System time slow: 0.0007s → deviation < 1 ms → excellent
  • Last/RMS offset → very stable synchronization
  • Leap status Normal → no time jumps, everything OK
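
If you want to script this check, the snippet below extracts the offset and leap status from chronyc tracking on the current node. It is only a convenience sketch; the criteria above remain authoritative.

# print system time offset (seconds) and leap status
offset=$(chronyc tracking | awk '/System time/ {print $4}')
leap=$(chronyc tracking | awk '/Leap status/ {print $4}')
echo "offset=${offset}s leap=${leap}"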

6. Install curl on Admin Node

How to install and configure curl

Install curl

Run on the admin node:

# Debian/Ubuntu
sudo apt install -y curl
# RHEL-based distributions
sudo dnf install -y curl
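
Optionally, confirm that curl works and that the admin node can reach the KubeOps package repository over HTTPS; depending on the repository configuration the exact status line may differ.

curl --version | head -n1
curl -fsSIL https://packagerepo.kubeops.net/ | head -n1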


Once all nodes are prepared, you can start setting up the cluster.

2.4 - Setup Cluster

Setup Cluster

Important: the following commands have to be executed on your cluster admin node

1. Install KOSI

# Debian/Ubuntu (via repository)
sudo apt update
sudo apt install -y kosi=2.13*
# RHEL-based distributions (via repository)
sudo dnf install -y --disableexcludes=kubeops-repo kosi-2.13.0.2-0
# download kosi deb manually and install with
sudo dpkg --install kosi_2.13.0.2-1_amd64.deb
# download kosi rpm manually and install with
sudo rpm --install -v kosi-2.13.0.2-0.x86_64.rpm

2. Set the KUBEOPSROOT env var

Set KUBEOPSROOT and XDG_RUNTIME_DIR in ~/.bashrc

# file ~/.bashrc
# Append these values to the end of your ~/.bashrc file
export KUBEOPSROOT=/home/<yourUser>/kubeops
export XDG_RUNTIME_DIR=$KUBEOPSROOT

Source .bashrc to apply the values

source ~/.bashrc
echo $KUBEOPSROOT
echo $XDG_RUNTIME_DIR

As a result, you should see your KUBEOPSROOT path printed twice.

3. Adjust KOSI Configuration

This creates a kubeops directory in your home directory and transfers all necessary files, e.g., the kosi-config and the plugins, to it.

mkdir ~/kubeops
cd ~/kubeops
cp -R /var/kubeops/kosi/ .
cp -R /var/kubeops/plugins/ .

The config.yaml is in your KUBEOPSROOT-path (typically in ~/kubeops/kosi)

  • Set hub in your kosi config to hub: https://dispatcher.kubeops.net/v4/dispatcher/
  • Set the “plugins”-entry in your kosi config to plugins: /home/<yourUser>/kubeops/plugins, where <yourUser> is replaced with your username
# file $KUBEOPSROOT/kosi/config.yaml
apiversion: kubernative/sina/config/v2

spec:
  hub: https://dispatcher.kubeops.net/v4/dispatcher/ # <-- set hub url
  plugins: <your kubeopsroot>/kubeops/plugins/ # <-- set the path to your plugin folder (~ for home or $KUBEOPSROOT don't work, it has to be the full path)
  workspace: /tmp/kosi/process/
  logging: info
  housekeeping: false
  proxy: false

4. Install KOSI enterprise plugins

kosi install --hub kosi-enterprise kosi/enterprise-plugins:2.0.0

5. Login with your user

kosi login -u <yourUser>

At this point it is normal if you get the following error message:
Error: The login to registry is temporary not available. Please try again later.
The reason for this is that podman is not yet installed.

6. Install kubeopsctl

# Debian/Ubuntu (via repository)
sudo apt update
sudo apt install -y kubeopsctl=2.0*
# RHEL-based distributions (via repository)
sudo dnf install -y --disableexcludes=kubeops-repo kubeopsctl-2.0.1.0
# download kubeopsctl deb manually from https://kubeops.net and install with
sudo dpkg --install kubeopsctl_2.0.1.0-1_amd64.deb
# download kubeopsctl rpm manually from https://kubeops.net and install with
sudo rpm --install -v kubeopsctl-2.0.1.0-0.x86_64.rpm

7. Create a cluster-values.yaml configuration file

# file cluster-values.yaml
apiVersion: kubeops/kubeopsctl/cluster/beta/v1
imagePullRegistry: registry.kubeops.net/kubeops/kubeops
airgap: false
clusterName: <your cluster name>
clusterUser: <your user name>
kubernetesVersion: <your kubernetesversion>
kubeVipEnabled: false
virtualIP: <your master1 ip>
firewall: nftables
pluginNetwork: calico
containerRuntime: containerd
kubeOpsRoot: <your kubeopsroot path>
serviceSubnet: 192.168.128.0/17
podSubnet: 192.168.0.0/17
debug: true
packageRepository: https://packagerepo.kubeops.net/
changeCluster: true
zones:
# IMPORTANT: The following part has to be adapted so that every one of your masternodes and workernodes is included
# This file only includes the minimum requirements for the amount of masters and workers and an example usage of zones
# You should adapt this part to your amount of masters and workers and cluster them into as many zones as you like
- name: zone1
  nodes:
  - name: <your master1 hostname>
    iPAddress: <your master1 ip>
    type: controlplane
    kubeVersion: <kubernetesversion from above>
  - name: <your worker1 hostname>
    iPAddress: <your worker1 ip>
    type: worker
    kubeVersion: <kubernetesversion from above>
- name: zone2
  nodes:
  - name: <your master2 hostname>
    iPAddress: <your master2 ip>
    type: controlplane
    kubeVersion: <kubernetesversion from above>
  - name: <your worker2 hostname>
    iPAddress: <your worker2 ip>
    type: worker
    kubeVersion: <kubernetesversion from above>
- name: zone3
  nodes:
  - name: <your master3 hostname>
    iPAddress: <your master3 ip>
    type: controlplane
    kubeVersion: <kubernetesversion from above>
  - name: <your worker3 hostname>
    iPAddress: <your worker3 ip>
    type: worker
    kubeVersion: <kubernetesversion from above>
Full Example
# file cluster-values.yaml
apiVersion: kubeops/kubeopsctl/cluster/beta/v1
imagePullRegistry: registry.kubeops.net/kubeops/kubeops
airgap: false
clusterName: myCluster
clusterUser: myuser
kubernetesVersion: 1.32.2
kubeVipEnabled: false
virtualIP: 10.2.10.110
firewall: nftables
pluginNetwork: calico
containerRuntime: containerd
kubeOpsRoot: /home/myuser/kubeops
serviceSubnet: 192.168.128.0/17
podSubnet: 192.168.0.0/17
debug: true
systemCpu: 250m
systemMemory: 256Mi
packageRepository: https://packagerepo.kubeops.net/
changeCluster: true
zones:
- name: zone1
  nodes:
  - name: dev07-master1-ubuntu2404
    iPAddress: 10.2.10.110
    type: controlplane
    kubeVersion: 1.32.2
  - name: dev07-worker1-ubuntu2404
    iPAddress: 10.2.10.210
    type: worker
    kubeVersion: 1.32.2
- name: zone2
  nodes:
  - name: dev07-master2-ubuntu2404
    iPAddress: 10.2.10.120
    type: controlplane
    kubeVersion: 1.32.2
  - name: dev07-worker2-ubuntu2404
    iPAddress: 10.2.10.220
    type: worker
    kubeVersion: 1.32.2
- name: zone3
  nodes:
  - name: dev07-master3-ubuntu2404
    iPAddress: 10.2.10.130
    type: controlplane
    kubeVersion: 1.32.2
  - name: dev07-worker3-ubuntu2404
    iPAddress: 10.2.10.230
    type: worker
    kubeVersion: 1.32.2
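
Before you continue, a quick check for forgotten placeholders can save a failed run later. The grep below simply lists any remaining <...> markers in the file; it is only a convenience, not part of the official workflow.

grep -n '<' cluster-values.yaml || echo "no placeholders left"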

7.1 Using KubeVip in your Cluster (optional)

If you want to use KubeVip to set up your cluster, you need a virtual IP for that. You also have to set kubeVipEnabled to true and set your virtualIP. If you don't want to use KubeVip, you have to set kubeVipEnabled to false and use the IP of your first control plane as the virtualIP in your cluster-values.yaml. Refer to the official KubeVip documentation for details here.

Examples:

kubeVipEnabled: true
virtualIP: <IP in your cluster ip range which is not given yet>

or

kubeVipEnabled: false
virtualIP: <master1 ip>

8. Pull required KOSI packages

If you do not specify a parameter, the current Kubernetes version 1.32.2 will be pulled.
With the parameter --kubernetesVersion <your wanted Kubernetes version> you can pull a different Kubernetes version.
Available Kubernetes versions are 1.32.2, 1.32.3, 1.32.9, 1.32.10, 1.33.3, 1.33.5 and 1.34.1.

kubeopsctl pull

or

kubeopsctl pull --kubernetesVersion <x.xx.x>

9. Install podman

kosi install -p $KUBEOPSROOT/lima/podman_5.2.2.tgz -f cluster-values.yaml

10. Install helm

kosi install -p $KUBEOPSROOT/lima/helm_v3.16.4.tgz

11. Install kubernetes tools (kubectl)

Make sure the kubernetes version matches the one you pulled before.

kosi install -p $KUBEOPSROOT/lima/kubernetes-tools_<your kubernetes version>.tgz -f cluster-values.yaml

This command also installs kubelet and kubeadm. You can either mask or delete them on your admin as they are not necessary for the cluster creation process.

Full Example
kosi install -p $KUBEOPSROOT/lima/kubernetes-tools_1.32.2.tgz -f cluster-values.yaml

12. Cluster Setup

Make sure that you are logged in on hub and registry.

kosi login -u <your username>

Now the login for hub and registry should be successful!


Make sure that you changed the kosi config.yaml.

cat $KUBEOPSROOT/kosi/config.yaml

Make sure that you pulled all required packages.

ls -1 $KUBEOPSROOT/lima

Install Kubernetes Cluster with kubeopsctl. Cluster setup takes about 10 to 15 minutes.

kubeopsctl apply -f cluster-values.yaml
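
Once kubeopsctl has finished, you can verify that all nodes registered and are Ready. This is an optional check and assumes a working kubeconfig on the admin node.

kubectl get nodes -o wide
kubectl get pods -A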

3 - How to Guides

Welcome to our comprehensive How-To Guide for using kubeops. Whether you're a beginner aiming to understand the basics or an experienced user looking to fine-tune your skills, this guide is designed to provide you with detailed step-by-step instructions on how to navigate and utilize all the features of kubeops effectively.

In the following sections, you will find everything from initial setup and configuration, to advanced tips and tricks that will help you get the most out of the software. Our aim is to assist you in becoming proficient with kubeops, enhancing both your productivity and your user experience.

Let's get started on your journey to mastering kubeops!

3.1 - Join Node to a Kubernetes Cluster

This guide outlines the steps to join a node to a cluster.

Joining a Node in a Kubernetes cluster

Adding a node is the correct way to increase performance or add resource capacity to your cluster, and with kubeopsctl this process is straightforward.
You can use the following steps to join control-plane nodes or worker nodes to a Kubernetes cluster.

Join Node Process:

Prerequisites

KOSI Login Recommendation

Before performing any action with kubeopsctl, it is recommended to do a login with kosi. Refer to the official KOSI documentation for details here.

ETCD Backup Recommendation

Before performing changes on the control planes, it is recommended to create an ETCD backup. Refer to the official Kubernetes documentation for details here.

Example 1: Joining a Control-Plane Node to a Kubernetes Cluster

1. Pull required KOSI packages on your ADMIN

If you do not specify a parameter, the current Kubernetes version 1.32.2 will be pulled.
With parameter --kubernetesVersion 1.34.1 you can pull a specific Kubernetes version.
Available Kubernetes versions are 1.32.2, 1.32.3, 1.32.9, 1.32.10, 1.33.3, 1.33.5 and 1.34.1.

kubeopsctl pull

2. Add your node definition/specifications in the cluster-values

  - name: demo-controlplaneXX
    iPAddress: 10.2.10.XXX
    type: controlplane
    kubeVersion: 1.31.6 

3. Adjust your cluster-values in zone1
Adjust your cluster-values as shown in the example below. Be sure to set the cluster's current version in your values as well as on the nodes. In the snippet below, the new control-plane node is added to zone1.

# file cluster-values.yaml
apiVersion: kubeops/kubeopsctl/cluster/beta/v1
imagePullRegistry: registry.kubeops.net/kubeops/kubeops
airgap: false
clusterName: myCluster
clusterUser: root
kubernetesVersion: 1.31.6       # -> actual version
kubeVipEnabled: false           
virtualIP: 10.2.10.110
firewall: nftables
pluginNetwork: calico
containerRuntime: containerd
kubeOpsRoot: /home/myuser/kubeops
serviceSubnet: 192.168.128.0/17
podSubnet: 192.168.0.0/17
debug: true
systemCpu: 250m
systemMemory: 256Mi
packageRepository: https://packagerepo.kubeops.net/
changeCluster: true             # -> has to be set
zones:
- name: zone1
  nodes:
  - name: demo-controlplane01
    iPAddress: 10.2.10.110
    type: controlplane
    kubeVersion: 1.31.6       
  - name: demo-controlplaneXX   # -> has to be changed
    iPAddress: 10.2.10.XXX      # -> has to be changed
    type: controlplane
    kubeVersion: 1.31.6         # -> check with actual version
  - name: demo-worker01
    iPAddress: 10.2.10.210
    type: worker
    kubeVersion: 1.31.6       
- name: zone2
  nodes:
  - name: demo-controlplane02
    iPAddress: 10.2.10.120
    type: controlplane
    kubeVersion: 1.31.6       
  - name: demo-worker02
    iPAddress: 10.2.10.220
    type: worker
    kubeVersion: 1.31.6       
- name: zone3
  nodes:
  - name: demo-controlplane03
    iPAddress: 10.2.10.130
    type: controlplane
    kubeVersion: 1.31.6      
  - name: demo-worker03
    iPAddress: 10.2.10.230
    type: worker
    kubeVersion: 1.31.6      

4. Validate your values and join the node to the cluster

Once the cluster-values.yaml is created, check the values once again. If you are ready just start the join node process with the command:

kubeopsctl apply -f cluster-values.yaml
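
After the apply run has finished, you can confirm that the new control plane registered and became Ready. Optional check, assuming kubectl access from the admin node.

kubectl get nodes -l node-role.kubernetes.io/control-plane -o wide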

Example 2: Joining a Worker Node to a Kubernetes Cluster

1. Pull required KOSI packages on your ADMIN

If you do not specify a parameter, the current Kubernetes version 1.32.2 will be pulled.
With parameter --kubernetesVersion x.xx.x you can pull other Kubernetes versions.
Available Kubernetes versions are 1.32.2, 1.32.3, 1.32.9, 1.32.10, 1.33.3, 1.33.5 and 1.34.1.

kubeopsctl pull

2. Add your node definition/specifications in the cluster-values

  - name: demo-workerXX
    iPAddress: 10.2.10.XX
    type: worker
    kubeVersion: 1.31.6      

3. Adjust your cluster-values in zone2
Adjust your cluster-values as shown in the example below. Be sure to set the cluster's current version in your values as well as on the nodes. In the snippet below, the new worker node is added to zone2.

# file cluster-values.yaml
apiVersion: kubeops/kubeopsctl/cluster/beta/v1
imagePullRegistry: registry.kubeops.net/kubeops/kubeops
airgap: false
clusterName: myCluster
clusterUser: root
kubernetesVersion: 1.31.6        # -> actual version
kubeVipEnabled: false           
virtualIP: 10.2.10.110
firewall: nftables
pluginNetwork: calico
containerRuntime: containerd
kubeOpsRoot: /home/myuser/kubeops
serviceSubnet: 192.168.128.0/17
podSubnet: 192.168.0.0/17
debug: true
systemCpu: 250m
systemMemory: 256Mi
packageRepository: https://packagerepo.kubeops.net/
changeCluster: true              # -> has to be set
zones:
- name: zone1
  nodes:
  - name: demo-controlplane01
    iPAddress: 10.2.10.110
    type: controlplane
    kubeVersion: 1.31.6       
  - name: demo-worker01
    iPAddress: 10.2.10.210
    type: worker
    kubeVersion: 1.31.6       
- name: zone2
  nodes:
  - name: demo-controlplane02
    iPAddress: 10.2.10.120
    type: controlplane
    kubeVersion: 1.31.6       
  - name: demo-worker02
    iPAddress: 10.2.10.220
    type: worker
    kubeVersion: 1.31.6       
  - name: demo-workerXX          # -> has to be changed
    iPAddress: 10.2.10.XX        # -> has to be changed
    type: worker
    kubeVersion: 1.31.6          # -> check with actual version     
- name: zone3
  nodes:
  - name: demo-controlplane03
    iPAddress: 10.2.10.130
    type: controlplane
    kubeVersion: 1.31.6      
  - name: demo-worker03
    iPAddress: 10.2.10.230
    type: worker
    kubeVersion: 1.31.6      

4. Validate your values and join the node to the cluster

Once the cluster-values.yaml is created, check the values once again. If you are ready just start the join node process with the command:

kubeopsctl apply -f cluster-values.yaml
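
As in the control-plane example, you can watch the new worker become Ready after the apply run. Optional check, assuming kubectl access from the admin node.

kubectl get nodes -w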

3.2 - Delete Worker-Node from a Kubernetes Cluster

This guide outlines the steps to delete worker nodes from a cluster, specifically how to proceed with rook-ceph and other KubeOps Compliance applications.

Deleting a Node from a Kubernetes cluster

In rare cases, it may be necessary to remove nodes from a Kubernetes cluster. This how-to guide explains the prerequisites and the key considerations to keep in mind before starting the node removal process.

You can use the following steps to delete nodes from a Kubernetes cluster.

Prerequisites

  • In order to run rook-ceph stably for a longer period, your cluster needs at least 3 zones, each containing at least 1 worker node

  • To check which mon and osd are running on the node you want to delete, you can use the command kubectl get po -nrook-ceph -owide | grep worker02 | grep "mon\|osd" | grep -v "osd-prepare" | awk '{print $1}' (see the formatted example after this list). As an output you get the mon and the osd running on that node. If you don't get an output, you don't have to delete the resource and can skip to the "delete the node" section
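
For the worker01 example used below, the same check looks like this; only the node name needs to be adapted.

kubectl get po -nrook-ceph -owide | grep worker01 | grep "mon\|osd" | grep -v "osd-prepare" | awk '{print $1}'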


Worker

Important: Due to rook-ceph, a worker node must not be removed without following the steps below. In this example, worker01 (zone1) is removed from the cluster. Worker01 contains osd.0 and mon-c.

Scale down the rook-ceph-operator deployment to 0

This prevents new MONs or OSDs from being created.

kubectl scale deploy rook-ceph-operator -n rook-ceph --replicas=0

Check which hosts and OSDs belong to each zone

kubectl exec -it deploy/rook-ceph-tools -n rook-ceph -- ceph osd tree
ID   CLASS  WEIGHT   TYPE NAME              STATUS  REWEIGHT  PRI-AFF
 -1         0.21478  root default
 -9         0.04880      zone zone1
 -7         0.04880          host worker01                              # worker01 is being removed
  0    ssd  0.04880              osd.0          up   1.00000  1.00000   # osd.0 is being removed
-15         0.04880          host worker04
  3    ssd  0.04880              osd.3          up   1.00000  1.00000
-11         0.10739      zone zone2
 -3         0.05859          host worker02
  1    ssd  0.05859              osd.1          up   0.95001  1.00000
-13         0.05859      zone zone3
 -5         0.05859          host worker03
  2    ssd  0.05859              osd.2          up   0.95001  1.00000

From this output you can see that osd.0 is part of worker01.

Scale down the OSD deployment

kubectl scale deploy -n rook-ceph rook-ceph-osd-<x> --replicas=0
# Example: kubectl scale deploy -n rook-ceph rook-ceph-osd-0 --replicas=0

Remove the OSD via ceph-tools

kubectl exec -it deploy/rook-ceph-tools -n rook-ceph -- bash
# show OSD tree
ceph osd tree
# mark OSD out
ceph osd out <x>
# Example: ceph osd out 0
ceph osd purge <x> --yes-i-really-mean-it
# Example: ceph osd purge 0 --yes-i-really-mean-it
ceph auth del osd.<x>
# adjust CRUSH map
ceph osd crush remove <nodename>
# exit from ceph-tools
exit
# show OSD tree (now without the deleted node)
kubectl exec -it deploy/rook-ceph-tools -n rook-ceph -- ceph osd tree

Delete OSD and MON deployments

kubectl delete deploy -n rook-ceph rook-ceph-osd-<x> rook-ceph-mon-<y>
Example
kubectl delete deploy -n rook-ceph rook-ceph-osd-0 rook-ceph-mon-c

Remove the deleted mon from the ceph tools

kubectl exec -it deploy/rook-ceph-tools -n rook-ceph -- ceph mon dump
kubectl exec -it deploy/rook-ceph-tools -n rook-ceph -- ceph mon rm <y>
# verify
kubectl exec -it deploy/rook-ceph-tools -n rook-ceph -- ceph mon dump
Example
kubectl exec -it deploy/rook-ceph-tools -n rook-ceph -- ceph mon dump
kubectl exec -it deploy/rook-ceph-tools -n rook-ceph -- ceph mon rm c
# verify
kubectl exec -it deploy/rook-ceph-tools -n rook-ceph -- ceph mon dump

This is the dump before executing the remove:

0: [v2:192.168.231.184:3300/0,v1:192.168.231.184:6789/0] mon.a
1: [v2:192.168.185.9:3300/0,v1:192.168.185.9:6789/0] mon.b
2: [v2:192.168.196.110:3300/0,v1:192.168.196.110:6789/0] mon.c

This is the dump after executing the remove:

0: [v2:192.168.231.184:3300/0,v1:192.168.231.184:6789/0] mon.a
1: [v2:192.168.185.9:3300/0,v1:192.168.185.9:6789/0] mon.b

Delete the node from the kubernetes cluster

  • Prepare your cluster-values.yaml so that the node you want to delete is removed from it
  • Execute the command kubeopsctl apply --delete -f cluster-values.yaml
Example

The cluster-values.yaml without node1 but with node4

# file cluster-values.yaml
apiVersion: kubeops/kubeopsctl/cluster/beta/v1
imagePullRegistry: registry.kubeops.net/kubeops/kubeops
airgap: true
clusterName: myCluster
clusterUser: root
kubernetesVersion: 1.31.6      
kubeVipEnabled: false
virtualIP: 10.2.10.110
firewall: nftables
pluginNetwork: calico
containerRuntime: containerd
kubeOpsRoot: /home/myuser/kubeops
serviceSubnet: 192.168.128.0/17
podSubnet: 192.168.0.0/17
debug: true
systemCpu: 250m
systemMemory: 256Mi
packageRepository: local
changeCluster: true
zones:
- name: zone1
  nodes:
  - name: controlplane01
    iPAddress: 10.2.10.110
    type: controlplane
    kubeVersion: 1.31.6       
  - name: worker04
    iPAddress: 10.2.10.214
    type: worker
    kubeVersion: 1.31.6       
- name: zone2
  nodes:
  - name: controlplane02
    iPAddress: 10.2.10.120
    type: controlplane
    kubeVersion: 1.31.6       
  - name: worker02
    iPAddress: 10.2.10.220
    type: worker
    kubeVersion: 1.31.6       
- name: zone3
  nodes:
  - name: controlplane03
    iPAddress: 10.2.10.130
    type: controlplane
    kubeVersion: 1.31.6  
  - name: worker03
    iPAddress: 10.2.10.230
    type: worker
    kubeVersion: 1.31.6     

Afterwards, execute the command kubeopsctl apply --delete -f cluster-values.yaml

Scale the rook-ceph-operator deployment back to 1

This allows a new MON to be created automatically in zone2.

kubectl scale deploy rook-ceph-operator -n rook-ceph --replicas=1

Timing and health checks

The total duration depends on cluster size and node performance. Before proceeding, verify Ceph health and placement groups are clean.

kubectl exec -it deploy/rook-ceph-tools -n rook-ceph -- ceph status
kubectl exec -it deploy/rook-ceph-tools -n rook-ceph -- ceph pg stat

Typical duration ranges from 15 to 120 minutes.

If you want to rejoin the same node later, reset it to a state prior to joining the cluster. Only this way can you be sure that no leftovers from the deletion process remain!

3.3 - Single Sign-On with Keycloak

Learn how to configure Keycloak for Single Sign-On, securely expose it using Kubernetes Ingress and TLS, and integrate it with kubeops and other Kubernetes applications.

In this guide, you will learn how to implement Single Sign-On (SSO) using Keycloak. We will walk through the complete flow—from understanding SSO for platforms and services such as Rook Ceph, Harbor, and other Kubernetes applications, to configuring Keycloak, exposing it securely, and integrating it with kubeops.

By the end of this guide, you will be able to:

  • Understand how Keycloak enables centralized authentication
  • Configure Keycloak for SSO
  • Securely expose Keycloak using Kubernetes Ingress and TLS
  • Integrate Keycloak with kubeops for authentication and authorization
  • Validate and troubleshoot the SSO login flow

Let’s get started on enabling secure and seamless authentication with Keycloak.

3.3.1 - SSO for dashboard

Learn how to configure Single Sign-On (SSO) for KubeOps Dashboard using Keycloak with OIDC.

Single Sign-On (SSO) with Keycloak for KubeOps Dashboard

This guide describes how to configure KubeOps Dashboard using Keycloak (OIDC) in a kubeops-managed Kubernetes environment.


Prerequisites

Before proceeding, ensure the following requirements are met:

  • Keycloak is already installed and running
  • kubeops is installed and operational

Step 1: Extract Keycloak CA certificate

  • On your admin host, extract the Keycloak CA certificate with OpenSSL (replace dev04.kubeops.net with your Keycloak hostname):
  openssl s_client -showcerts -connect dev04.kubeops.net:443 </dev/null | openssl x509 -outform PEM > keycloak-ca.crt
  • Copy the CA certificate to each master

    scp keycloak-ca.crt <master-host>:/etc/kubernetes/pki/
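
If you have several control planes, a small loop saves a few keystrokes. The hostnames master1 to master3 are placeholders for your own control-plane hosts.

    for host in master1 master2 master3; do
      scp keycloak-ca.crt "$host":/etc/kubernetes/pki/
    done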

Step 2: Update kube-apiserver yaml

  1. On every master, edit the manifest /etc/kubernetes/manifests/kube-apiserver.yaml and add the OIDC flags to the kube-apiserver command:

    spec:
      containers:
      - command:
        # ... existing kube-apiserver flags stay as they are; append the following:
        - --oidc-issuer-url=https://dev04.kubeops.net/keycloak/realms/master
        - --oidc-client-id=headlamp
        - --oidc-username-claim=preferred_username
        - --oidc-groups-claim=groups
        - "--oidc-username-prefix=oidc:"
        - "--oidc-groups-prefix=oidc:"
        - --oidc-ca-file=/etc/kubernetes/pki/keycloak-ca.crt
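
The kubelet restarts the kube-apiserver static pod automatically once the manifest is saved. As an optional check, confirm the pods are back up and the OIDC flags were picked up:

    kubectl -n kube-system get pods -l component=kube-apiserver
    kubectl -n kube-system get pod -l component=kube-apiserver -o yaml | grep oidc-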

Step 3: Create a Keycloak client for Headlamp

  • Create a client for headlamp

    • Client ID: headlamp
    • Client type: OpenID Connect
    • Access type: Confidential
    • Client authentication: Enabled
    • Standard flow: Enabled
    • Direct access grants: Disabled
  • Valid Redirect URIs

    Add the following redirect URI:

    https://headlamp.<your_DNS_name>/*
    
  • Web Origins

    <your_DNS_name>
    

Step 4: Create a client scope for Headlamp

  • Create a client scope

    • Assigned Client Scope : headlamp-dedicated
  • For groups, use the Group Mapper in Keycloak:

    • Mapper Type: groups
    • Name: groups
    • Token Claim Name: groups
    • Add to ID token: ON
    • Add to access token: ON
    • Add to user info: ON
    • Add to token introspection: ON

Step 5: Create a user Group and user in Keycloak

Create a group named headlamp (if it doesn't exist already) and a user under that group.

Step 6: Create ClusterRoleBinding for Headlamp group

1. Use the following YAML to create the ClusterRoleBinding

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: headlamp-admin-user
subjects:
- kind: Group
  name: "oidc:headlamp" # the group from the Keycloak token ('groups' claim), prefixed with oidc:
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io

The name “oidc:headlamp” needs to be the same as the group name.

  2. Apply the ClusterRoleBinding file
    kubectl apply -f headlamp-clusterrolebinding.yaml
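
To verify the binding without going through the Keycloak login, you can impersonate the OIDC group. Optional check; it requires cluster-admin rights for the impersonation, and the user name is arbitrary.

    # expected output: yes
    kubectl auth can-i '*' '*' --as="oidc:someuser" --as-group="oidc:headlamp"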

Step 7: Get client secret

After creating the client, copy the client secret.
This value will be used in the next step.

Step 8: Prepare Headlamp values (enterprise.yaml)

Configure the enterprise.yaml:

packages:
- name: kubeops-dashboard
  enabled: true
  values:
    standard:
      namespace: monitoring
      service:
        nodePort: 30007
      hostname: "headlamp.dev04.kubeops.net"
      path: "/"
    advanced:
      config:
        extraArgs:
          - "--in-cluster"
          - "--plugins-dir=/headlamp/plugins"
          - "--oidc-client-id=headlamp"
          - "--oidc-idp-issuer-url=https://dev04.kubeops.net/keycloak/realms/master"
          - "--oidc-scopes=openid,profile,email"
          - "--insecure-ssl"
          - "--oidc-client-secret=<client-secret>"

Replace <client-secret> with the secret retrieved in Step 7.
--oidc-client-id must match the Keycloak client ID (headlamp).

Step 9: Install Headlamp

Deploy Headlamp with the updated enterprise.yaml.

3.3.2 - SSO for Harbor

Learn how to configure Single Sign-On (SSO) for Harbor using Keycloak with OIDC in a Kubernetes environment.

Single Sign-On (SSO) with Keycloak for Harbor

This guide describes how to configure Harbor authentication using Keycloak (OIDC) in a kubeops-managed Kubernetes environment.


Prerequisites

Before proceeding, ensure the following requirements are met:

  • Keycloak is already installed and running
  • Keycloak is exposed using Kubernetes Ingress
  • A valid DNS record is configured for Keycloak and Harbor
  • TLS is enabled with a trusted Certificate Authority (CA)
  • kubeops is installed and operational

Step 1: Prepare Keycloak (Realm, User, and Client)

In this step, we configure Keycloak for Harbor SSO. Keycloak is assumed to be already installed, exposed via Ingress, and reachable over HTTPS.

Create Realm

Ensure a realm named kubeops-dashboards exists.
If it does not exist, create it in the Keycloak admin console.

  • Realm name: kubeops-dashboards
  • Enabled: true

Create User

Ensure a user named kubeops exists in the kubeops-dashboards realm.
If the user does not exist, create it and set credentials.

  • Username: kubeops
  • Enabled: true
  • Set a permanent password

Create Client (Harbor)

Create a client for Harbor in the kubeops-dashboards realm.

  • Client ID: harbor
  • Client type: OpenID Connect
  • Access type: Confidential
  • Client authentication: Enabled
  • Standard flow: Enabled
  • Direct access grants: Disabled

Valid Redirect URIs

Add the following redirect URI:

https://<your_DNS_name>/c/oidc/callback

Web Origins

<your_DNS_name>

Client Secret

After creating the client, copy the client secret.
This value will be used in the Harbor configuration:

oidc_client_id: harbor
oidc_client_secret: <CLIENT_SECRET>

Create Secret

kubectl create secret generic <your_secret_name> -n <your_harbor_namespace> \
    --from-literal client_id=<your_oidc_client_id> \
    --from-literal client_secret=<your_oidc_client_secret>
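
For example, assuming Harbor runs in a namespace called harbor and using the secret name oidc-harbor that the values file below references:

kubectl create secret generic oidc-harbor -n harbor \
    --from-literal client_id=harbor \
    --from-literal client_secret=<CLIENT_SECRET>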

Step 2: Prepare Harbor Values

The following kubeops package configuration enables Harbor and integrates it with Keycloak using OIDC authentication.

apiVersion: kubeops/kubeopsctl/enterprise/beta/v1

deleteNs: false
localRegistry: false

packages:
  - name: harbor
    enabled: true
    values:
      standard:
        namespace: <your_harbor_namespace>
        harborpass: "password"
        databasePassword: "password"
        redisPassword: "password"
        externalURL: <your_DNS_name>
        nodePort: 30002
        hostname: harbor.dev04.kubeops.net
        harborPersistence:
          persistentVolumeClaim:
            registry:
              size: 40Gi
              storageClass: "rook-cephfs"
            jobservice:
              jobLog:
                size: 1Gi
                storageClass: "rook-cephfs"
            database:
              size: 1Gi
              storageClass: "rook-cephfs"
            redis:
              size: 1Gi
              storageClass: "rook-cephfs"
            trivy:
              size: 5Gi
              storageClass: "rook-cephfs"

      advanced:
        core:
          extraEnvVars:
            - name: OIDC_CLIENT_ID
              valueFrom:
                secretKeyRef:
                  name: oidc-harbor 
                  key: client_id
            - name: OIDC_CLIENT_SECRET
              valueFrom:
                secretKeyRef:
                  name: oidc-harbor 
                  key: client_secret
            - name: CONFIG_OVERWRITE_JSON
              value: |
                {
                  "auth_mode": "oidc_auth",
                  "oidc_name": "keycloak",
                  "oidc_endpoint": "https://<your_DNS_name>/keycloak/realms/kubeops-dashboards",
                  "oidc_client_id": "$(OIDC_CLIENT_ID)",
                  "oidc_client_secret": "$(OIDC_CLIENT_SECRET)",
                  "oidc_scope": "openid,profile,email",
                  "oidc_verify_cert": true,
                  "oidc_auto_onboard": true
                }                

Notes

  • Ensure the OIDC client in Keycloak matches the oidc_client_id and oidc_client_secret values.
  • The externalURL and hostname must match the Harbor DNS name exactly.
  • oidc_auto_onboard: true allows users to be created automatically in Harbor upon first login.
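
As an optional connectivity check before testing the Harbor login, you can confirm that the Keycloak OIDC discovery endpoint for the kubeops-dashboards realm is reachable:

curl -fsS https://<your_DNS_name>/keycloak/realms/kubeops-dashboards/.well-known/openid-configuration | head -c 300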

3.3.3 - SSO for rook-ceph

Learn how to configure Single Sign-On (SSO) for rook-ceph using Keycloak with OIDC in a Kubernetes environment.

Single Sign-On (SSO) with Keycloak for rook-ceph

This guide describes how to configure rook-ceph authentication using Keycloak (OIDC) in a kubeops-managed Kubernetes environment.


Prerequisites

Before proceeding, ensure the following requirements are met:

  • Keycloak is already installed and running
  • rook-ceph is already installed and running
  • kubeops is installed and operational

Step 1: Prepare Keycloak (Realm, User)

To configure Keycloak for rook-ceph SSO, proceed as follows.

Create Realm

Ensure a realm named kubeops-dashboards exists.
If it does not exist, create it in the Keycloak admin console.

  • Realm name: kubeops-dashboards
  • Enabled: true

Create User

Ensure a user named kubeops exists in the kubeops-dashboards realm.
If the user does not exist, create it and set credentials.

  • Username: kubeops
  • Enabled: true
  • Set a permanent password

Step 2: Create Client (rook-ceph)

Create a client for rook-ceph in the kubeops-dashboards realm with following settings.

  • Client ID: rook-ceph
  • Client type: OpenID Connect
  • Access type: Confidential
  • Client authentication: Enabled
  • Standard flow: Enabled
  • Direct access grants: Disabled

Valid Redirect URIs

Add the following redirect URI:

https://<your_DNS_name>/oauth2/callback

Web Origins

Also update the web-origins

<your_DNS_name>

Step 3: Get Client Secret

In the Keycloak admin console, open the rook-ceph client and copy the client secret. This value will be used by oauth2-proxy and referenced in next steps:

oidc_client_id: rook-ceph
oidc_client_secret: <CLIENT_SECRET>

Step 4: Create the OAuth2 credentials secret

Generate a secure random cookie secret.

python3 -c 'import os,base64; print(base64.urlsafe_b64encode(os.urandom(32)).decode())'

Create a Kubernetes Secret containing the OAuth2 credentials. Note: the example command below uses client-id="ceph-dashboard"; verify that this value matches your Keycloak client ID (Step 2 above creates a client named rook-ceph).

kubectl create secret generic oauth2-proxy-credentials \
  --from-literal=client-id="ceph-dashboard" \
  --from-literal=client-secret="<client-secret>" \
  --from-literal=cookie-secret="<cookie-secret>" \
  -n rook-ceph

Step 5: Prepare values for oauth2-proxy

The following kubeops values configuration enables rook-ceph and integrates it with Keycloak using OIDC authentication.

Use the client secret and cookie secret derived in the steps above here.

global:

  # Global registry to pull the images from

  imageRegistry: ""

  # To help compatibility with other charts which use global.imagePullSecrets.

  imagePullSecrets: []

  #   - name: pullSecret1

  #   - name: pullSecret2

## Override the deployment namespace

##

namespaceOverride: ""

# Force the target Kubernetes version (it uses Helm `.Capabilities` if not set).

# This is especially useful for `helm template` as capabilities are always empty

# due to the fact that it doesn't query an actual cluster

kubeVersion:

# Oauth client configuration specifics

config:

  # Add config annotations

  annotations: {}

  # OAuth client ID

  clientID: "ceph-dashboard"

  # OAuth client secret

  clientSecret: "<client-secret>"

  # List of secret keys to include in the secret and expose as environment variables.

  # By default, all three secrets are required. To exclude certain secrets

  # (e.g., when using federated token authentication), remove them from this list.

  # Example to exclude client-secret:

  # requiredSecretKeys:

  #   - client-id

  #   - cookie-secret

  requiredSecretKeys:

    - client-id

    - client-secret

    - cookie-secret

  # Create a new secret with the following command

  # openssl rand -base64 32 | head -c 32 | base64

  # Use an existing secret for OAuth2 credentials (see secret.yaml for required fields)

  # Example:

  # existingSecret: secret

  cookieSecret: "<cookie-secret>"

  # The name of the cookie that oauth2-proxy will create

  # If left empty, it will default to the release name

  cookieName: ""

  google: {}

    # adminEmail: xxxx

    # useApplicationDefaultCredentials: true

    # targetPrincipal: xxxx

    # serviceAccountJson: xxxx

    # Alternatively, use an existing secret (see google-secret.yaml for required fields)

    # Example:

    # existingSecret: google-secret

    # groups: []

    # Example:

    #  - group1@example.com

    #  - group2@example.com

  # Default configuration, to be overridden

  configFile: |-

    provider = "keycloak-oidc"

    oidc_issuer_url = "https://dev04.kubeops.net/keycloak/realms/master"

    email_domains = [ "*" ]

    upstreams = [ "file:///dev/null" ]

   

    pass_user_headers = true

    set_xauthrequest = true

    pass_access_token = true

  # Custom configuration file: oauth2_proxy.cfg

  # configFile: |-

  #   pass_basic_auth = false

  #   pass_access_token = true

  # Use an existing config map (see configmap.yaml for required fields)

  # Example:

  # existingConfig: config

alphaConfig:

  enabled: false

  # Add config annotations

  annotations: {}

  # Arbitrary configuration data to append to the server section

  serverConfigData: {}

  # Arbitrary configuration data to append to the metrics section

  metricsConfigData: {}

  # Arbitrary configuration data to append

  configData: {}

  # Arbitrary configuration to append

  # This is treated as a Go template and rendered with the root context

  configFile: ""

  # Use an existing config map (see secret-alpha.yaml for required fields)

  existingConfig: ~

  # Use an existing secret

  existingSecret: "oauth2-proxy-credentials"

image:

  registry: ""

  repository: "oauth2-proxy/oauth2-proxy"

  # appVersion is used by default

  tag: ""

  pullPolicy: "IfNotPresent"

  command: []

# Optionally specify an array of imagePullSecrets.

# Secrets must be manually created in the namespace.

# ref: https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod

imagePullSecrets: []

  # - name: myRegistryKeySecretName

# Set a custom containerPort if required.

# This will default to 4180 if this value is not set and the httpScheme set to http

# This will default to 4443 if this value is not set and the httpScheme set to https

# containerPort: 4180

extraArgs:

  - --provider=keycloak-oidc

  - --set-xauthrequest=true

  - --pass-user-headers=true

  - --pass-access-token=true

  - --skip-oidc-discovery=true

  - --oidc-issuer-url=https://dev04.kubeops.net/keycloak/realms/master

  - --login-url=https://dev04.kubeops.net/keycloak/realms/master/protocol/openid-connect/auth

  - --redeem-url=https://dev04.kubeops.net/keycloak/realms/master/protocol/openid-connect/token

  - --validate-url=https://dev04.kubeops.net/keycloak/realms/master/protocol/openid-connect/userinfo

  - --oidc-jwks-url=https://dev04.kubeops.net/keycloak/realms/master/protocol/openid-connect/certs

  - --ssl-insecure-skip-verify=true

  - --cookie-secure=true

extraEnv: []

envFrom: []

# Load environment variables from a ConfigMap(s) and/or Secret(s)

# that already exists (created and managed by you).

# ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#configure-all-key-value-pairs-in-a-configmap-as-container-environment-variables

#

# PS: Changes in these ConfigMaps or Secrets will not be automatically

#     detected and you must manually restart the relevant Pods after changes.

#

#  - configMapRef:

#      name: special-config

#  - secretRef:

#      name: special-config-secret

# -- Custom labels to add into metadata

customLabels: {}

# To authorize individual email addresses

# That is part of extraArgs but since this needs special treatment we need to do a separate section

authenticatedEmailsFile:

  enabled: false

  # Defines how the email addresses file will be projected, via a configmap or secret

  persistence: configmap

  # template is the name of the configmap what contains the email user list but has been configured without this chart.

  # It's a simpler way to maintain only one configmap (user list) instead changing it for each oauth2-proxy service.

  # Be aware the value name in the extern config map in data needs to be named to "restricted_user_access" or to the

  # provided value in restrictedUserAccessKey field.

  template: ""

  # The configmap/secret key under which the list of email access is stored

  # Defaults to "restricted_user_access" if not filled-in, but can be overridden to allow flexibility

  restrictedUserAccessKey: ""

  # One email per line

  # example:

  # restricted_access: |-

  #   name1@domain

  #   name2@domain

  # If you override the config with restricted_access it will configure a user list within this chart what takes care of the

  # config map resource.

  restricted_access: ""

  annotations: {}

  # helm.sh/resource-policy: keep

service:

  type: ClusterIP

  # when service.type is ClusterIP ...

  # clusterIP: 192.0.2.20

  # when service.type is LoadBalancer ...

  # loadBalancerIP: 198.51.100.40

  # loadBalancerSourceRanges: 203.0.113.0/24

  # when service.type is NodePort ...

  # nodePort: 80

  portNumber: 80

  # Protocol set on the service

  appProtocol: http

  annotations: {}

  # foo.io/bar: "true"

  # configure externalTrafficPolicy

  externalTrafficPolicy: ""

  # configure internalTrafficPolicy

  internalTrafficPolicy: ""

  # configure service target port

  targetPort: ""

  # Configures the service to use IPv4/IPv6 dual-stack.

  # Ref: https://kubernetes.io/docs/concepts/services-networking/dual-stack/

  ipDualStack:

    enabled: false

    ipFamilies: ["IPv6", "IPv4"]

    ipFamilyPolicy: "PreferDualStack"

  # Configure traffic distribution for the service

  # Ref: https://kubernetes.io/docs/concepts/services-networking/service/#traffic-distribution

  trafficDistribution: ""

## Create or use ServiceAccount

serviceAccount:

  ## Specifies whether a ServiceAccount should be created

  enabled: true

  ## The name of the ServiceAccount to use.

  ## If not set and create is true, a name is generated using the fullname template

  name:

  automountServiceAccountToken: true

  annotations: {}

  ## imagePullSecrets for the service account

  imagePullSecrets: []

    # - name: myRegistryKeySecretName

# Network policy settings.

networkPolicy:

  create: false

  ingress: []

  egress: []

ingress:

  enabled: false

  # className: nginx

  path: /

  # Only used if API capabilities (networking.k8s.io/v1) allow it

  pathType: ImplementationSpecific

  # Used to create an Ingress record.

  # hosts:

  # - chart-example.local

  # Extra paths to prepend to every host configuration. This is useful when working with annotation based services.

  # Warning! The configuration is dependant on your current k8s API version capabilities (networking.k8s.io/v1)

  # extraPaths:

  # - path: /*

  #   pathType: ImplementationSpecific

  #   backend:

  #     service:

  #       name: ssl-redirect

  #       port:

  #         name: use-annotation

  labels: {}

  # annotations:

  #   kubernetes.io/ingress.class: nginx

  #   kubernetes.io/tls-acme: "true"

  # tls:

  # Secrets must be manually created in the namespace.

  # - secretName: chart-example-tls

  #   hosts:

  #     - chart-example.local

# Gateway API HTTPRoute configuration

# Ref: https://gateway-api.sigs.k8s.io/api-types/httproute/

gatewayApi:

  enabled: false

  # The name of the Gateway resource to attach the HTTPRoute to

  # Example:

  # gatewayRef:

  #   name: gateway

  #   namespace: gateway-system

  gatewayRef:

    name: ""

    namespace: ""

  # HTTPRoute rule configuration

  # rules:

  # - matches:

  #   - path:

  #       type: PathPrefix

  #       value: /

  rules: []

  # Hostnames to match in the HTTPRoute

  # hostnames:

  # - chart-example.local

  hostnames: []

  # Additional labels to add to the HTTPRoute

  labels: {}

  # Additional annotations to add to the HTTPRoute

  annotations: {}

resources: {}

  # limits:

  #   cpu: 100m

  #   memory: 300Mi

  # requests:

  #   cpu: 100m

  #   memory: 300Mi

# Container resize policy for runtime resource updates

# Ref: https://kubernetes.io/docs/tasks/configure-pod-container/resize-container-resources/

resizePolicy: []

  # - resourceName: cpu

  #   restartPolicy: NotRequired

  # - resourceName: memory

  #   restartPolicy: RestartContainer

extraVolumes: []

  # - name: ca-bundle-cert

  #   secret:

  #     secretName: <secret-name>

extraVolumeMounts: []

  # - mountPath: /etc/ssl/certs/

  #   name: ca-bundle-cert

# Additional containers to be added to the pod.

extraContainers: []

  #  - name: my-sidecar

  #    image: nginx:latest

# Additional Init containers to be added to the pod.

extraInitContainers: []

  #  - name: wait-for-idp

  #    image: my-idp-wait:latest

  #    command:

  #    - sh

  #    - -c

  #    - wait-for-idp.sh

priorityClassName: ""

# hostAliases is a list of aliases to be added to /etc/hosts for network name resolution

hostAliases: []

# - ip: "10.xxx.xxx.xxx"

#   hostnames:

#     - "auth.example.com"

# - ip: 127.0.0.1

#   hostnames:

#     - chart-example.local

#     - example.local

# [TopologySpreadConstraints](https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/) configuration.

# Ref: https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling

# topologySpreadConstraints: []

# Affinity for pod assignment

# Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity

# affinity: {}

# Tolerations for pod assignment

# Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/

tolerations: []

# Node labels for pod assignment

# Ref: https://kubernetes.io/docs/user-guide/node-selection/

nodeSelector: {}

# Whether to use secrets instead of environment values for setting up OAUTH2_PROXY variables

proxyVarsAsSecrets: true

# Configure Kubernetes liveness and readiness probes.

# Ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/

# Disable both when deploying with Istio 1.0 mTLS. https://istio.io/help/faq/security/#k8s-health-checks

livenessProbe:

  enabled: true

  initialDelaySeconds: 0

  timeoutSeconds: 1

readinessProbe:

  enabled: true

  initialDelaySeconds: 0

  timeoutSeconds: 5

  periodSeconds: 10

  successThreshold: 1

# Configure Kubernetes security context for container

# Ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/

securityContext:

  enabled: true

  allowPrivilegeEscalation: false

  capabilities:

    drop:

      - ALL

  readOnlyRootFilesystem: true

  runAsNonRoot: true

  runAsUser: 2000

  runAsGroup: 2000

  seccompProfile:

    type: RuntimeDefault

deploymentAnnotations: {}

podAnnotations: {}

podLabels: {}

replicaCount: 1

revisionHistoryLimit: 10

strategy: {}

enableServiceLinks: true

## PodDisruptionBudget settings

## ref: https://kubernetes.io/docs/concepts/workloads/pods/disruptions/

## One of maxUnavailable and minAvailable must be set to null.

podDisruptionBudget:

  enabled: true

  maxUnavailable: null

  minAvailable: 1

  # Policy for when unhealthy pods should be considered for eviction.

  # Valid values are "IfHealthyBudget" and "AlwaysAllow".

  # Ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/#unhealthy-pod-eviction-policy

  unhealthyPodEvictionPolicy: ""

## Horizontal Pod Autoscaling

## ref: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/

autoscaling:

  enabled: false

  minReplicas: 1

  maxReplicas: 10

  targetCPUUtilizationPercentage: 80

  # targetMemoryUtilizationPercentage: 80

  annotations: {}

  # Configure HPA behavior policies for scaling if needed

  # Ref: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#configuring-scaling-behavior

  behavior: {}

    # scaleDown:

    #   stabilizationWindowSeconds: 300

    #   policies:

    #   - type: Percent

    #     value: 100

    #     periodSeconds: 15

    #   selectPolicy: Min

    # scaleUp:

    #   stabilizationWindowSeconds: 0

    #   policies:

    #   - type: Percent

    #     value: 100

    #     periodSeconds: 15

    #   - type: Pods

    #     value: 4

    #     periodSeconds: 15

    #   selectPolicy: Max

# Configure Kubernetes security context for pod

# Ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/

podSecurityContext: {}

# whether to use http or https

httpScheme: http

initContainers:

  # if the redis sub-chart is enabled, wait for it to be ready

  # before starting the proxy

  # creates a role binding to get, list, watch, the redis master pod

  # if service account is enabled

  waitForRedis:

    enabled: true

    image:

      repository: "alpine"

      tag: "latest"

      pullPolicy: "IfNotPresent"

    # uses the kubernetes version of the cluster

    # the chart is deployed on, if not set

    kubectlVersion: ""

    securityContext:

      enabled: true

      allowPrivilegeEscalation: false

      capabilities:

        drop:

          - ALL

      readOnlyRootFilesystem: true

      runAsNonRoot: true

      runAsUser: 65534

      runAsGroup: 65534

      seccompProfile:

        type: RuntimeDefault

    timeout: 180

    resources: {}

      # limits:

      #   cpu: 100m

      #   memory: 300Mi

      # requests:

      #   cpu: 100m

      #   memory: 300Mi

# Additionally authenticate against a htpasswd file. Entries must be created with "htpasswd -B" for bcrypt encryption.

# Alternatively supply an existing secret which contains the required information.

htpasswdFile:

  enabled: false

  existingSecret: ""

  entries: []

  # One row for each user

  # example:

  # entries:

  #  - testuser:$2y$05$gY6dgXqjuzFhwdhsiFe7seM9q9Tile4Y3E.CBpAZJffkeiLaC21Gy

# Configure the session storage type, between cookie and redis

sessionStorage:

  # Can be one of the supported session storage cookie|redis

  type: cookie

  redis:

    # Name of the Kubernetes secret containing the redis & redis sentinel password values (see also `sessionStorage.redis.passwordKey`)

    existingSecret: ""

    # Redis password value. Applicable for all Redis configurations. Taken from redis subchart secret if not set. `sessionStorage.redis.existingSecret` takes precedence

    password: ""

    # Key of the Kubernetes secret data containing the redis password value. If you use the redis sub chart, make sure

    # this password matches the one used in redis-ha.redisPassword (see below).

    passwordKey: "redis-password"

    # Can be one of standalone|cluster|sentinel

    clientType: "standalone"

    standalone:

      # URL of redis standalone server for redis session storage (e.g. `redis://HOST[:PORT]`). Automatically generated if not set

      connectionUrl: ""

    cluster:

      # List of Redis cluster connection URLs. Array or single string allowed.

      connectionUrls: []

      # - "redis://127.0.0.1:8000"

      # - "redis://127.0.0.1:8001"

    sentinel:

      # Name of the Kubernetes secret containing the redis sentinel password value (see also `sessionStorage.redis.sentinel.passwordKey`). Default: `sessionStorage.redis.existingSecret`

      existingSecret: ""

      # Redis sentinel password. Used only for sentinel connection; any redis node passwords need to use `sessionStorage.redis.password`

      password: ""

      # Key of the Kubernetes secret data containing the redis sentinel password value

      passwordKey: "redis-sentinel-password"

      # Redis sentinel master name

      masterName: ""

      # List of Redis cluster connection URLs. Array or single string allowed.

      connectionUrls: []

      # - "redis://127.0.0.1:8000"

      # - "redis://127.0.0.1:8001"

# Enables and configure the automatic deployment of the redis-ha subchart

redis-ha:

  # provision an instance of the redis-ha sub-chart

  enabled: false

  # Redis specific helm chart settings, please see:

  # https://artifacthub.io/packages/helm/dandydev-charts/redis-ha#general-parameters

  #

  # Recommended:

  #

  # redisPassword: xxxxx

  # replicas: 1

  # persistentVolume:

  #   enabled: false

  #

  # If you install Redis using this sub chart, make sure that the password of the sub chart matches the password

  # you set in sessionStorage.redis.password (see above).

  #

  # If you want to use redis in sentinel mode see:

  # https://artifacthub.io/packages/helm/dandydev-charts/redis-ha#redis-sentinel-parameters

# Enables apiVersion deprecation checks

checkDeprecation: true

# Allows graceful shutdown

# terminationGracePeriodSeconds: 65

# lifecycle:

#   preStop:

#     exec:

#       command: [ "sh", "-c", "sleep 60" ]

metrics:

  # Enable Prometheus metrics endpoint

  enabled: true

  # Serve Prometheus metrics on this port

  port: 44180

  # when service.type is NodePort ...

  # nodePort: 44180

  # Protocol set on the service for the metrics port

  service:

    appProtocol: http

  serviceMonitor:

    # Enable Prometheus Operator ServiceMonitor

    enabled: false

    # Define the namespace where to deploy the ServiceMonitor resource

    namespace: ""

    # Prometheus Instance definition

    prometheusInstance: default

    # Prometheus scrape interval

    interval: 60s

    # Prometheus scrape timeout

    scrapeTimeout: 30s

    # Add custom labels to the ServiceMonitor resource

    labels: {}

    ## scheme: HTTP scheme to use for scraping. Can be used with `tlsConfig` for example if using istio mTLS.

    scheme: ""

    ## tlsConfig: TLS configuration to use when scraping the endpoint. For example if using istio mTLS.

    ## Of type: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#tlsconfig

    tlsConfig: {}

    ## bearerTokenFile: Path to bearer token file.

    bearerTokenFile: ""

    ## Used to pass annotations that are used by the Prometheus installed in your cluster to select Service Monitors to work with

    ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#prometheusspec

    annotations: {}

    ## Metric relabel configs to apply to samples before ingestion.

    ## [Metric Relabeling](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#metric_relabel_configs)

    metricRelabelings: []

    # - action: keep

    #   regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'

    #   sourceLabels: [__name__]

    ## Relabel configs to apply to samples before ingestion.

    ## [Relabeling](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config)

    relabelings: []

    # - sourceLabels: [__meta_kubernetes_pod_node_name]

    #   separator: ;

    #   regex: ^(.*)$

    #   targetLabel: nodename

    #   replacement: $1

    #   action: replace

# Extra K8s manifests to deploy

extraObjects: []

Step 6: Install the oauth2-proxy Helm chart

Use the following steps to install oauth2-proxy with the Helm chart.

    helm repo add oauth2-proxy https://oauth2-proxy.github.io/manifests
    helm pull oauth2-proxy/oauth2-proxy
    tar -xzvf oauth2-proxy-10.1.0.tgz
    mv values.yaml oauth2-proxy/values.yaml
    helm install oauth2-proxy oauth2-proxy/ -n rook-ceph
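
After the release is installed, you can check that the proxy pod starts. This is an optional check; the label below is the chart's standard app.kubernetes.io/name label.

    kubectl -n rook-ceph get pods -l app.kubernetes.io/name=oauth2-proxy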

Step 7: Update the rook-ceph configuration

Configure the Ceph manager:

    ceph config-key set mgr/dashboard/external_auth true

    ceph config-key set mgr/dashboard/external_auth_header_name "X-Remote-User"

    ceph config-key set mgr/dashboard/external_auth_logout_url "https://dev04.kubeops.net/oauth2/sign_out?rd=https://dev04.kubeops.net/keycloak/realms/master/protocol/openid-connect/logout?client_id=ceph-dashboard"
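
The ceph commands above need a Ceph client. As in the earlier rook-ceph sections of this guide, you can run them through the rook-ceph-tools deployment, for example:

    kubectl exec -it deploy/rook-ceph-tools -n rook-ceph -- ceph config-key set mgr/dashboard/external_auth true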

Step 8: Update the ceph-dashboard Ingress

Configure the ceph-dashboard Ingress:

    metadata:
      annotations:
        cert-manager.io/cluster-issuer: kubeops-ca-issuer
        kubernetes.io/ingress.class: nginx
        meta.helm.sh/release-name: rook-ceph-cluster
        meta.helm.sh/release-namespace: rook-ceph
        nginx.ingress.kubernetes.io/auth-response-headers: X-Auth-Request-User
        nginx.ingress.kubernetes.io/auth-signin: https://dev04.kubeops.net/oauth2/start?rd=$escaped_request_uri
        nginx.ingress.kubernetes.io/auth-url: https://dev04.kubeops.net/oauth2/auth
        nginx.ingress.kubernetes.io/configuration-snippet: |
          proxy_set_header X-Remote-User $upstream_http_x_auth_request_user;
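
The annotations above can be added by editing the existing Ingress object of the Ceph dashboard in place; the exact Ingress name depends on your rook-ceph-cluster release, so look it up first:

    # find the ceph-dashboard Ingress created by the rook-ceph-cluster release
    kubectl get ingress -n rook-ceph
    # then add the annotations shown above
    kubectl edit ingress <ceph-dashboard-ingress-name> -n rook-ceph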

Step 9: Create the oauth2-proxy Ingress

Create an Ingress for oauth2-proxy:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: oauth2-proxy-ingress
      namespace: rook-ceph # namespace in which the oauth2-proxy is running
      annotations:
        kubernetes.io/ingress.class: nginx
    spec:
      rules:
      - host: dev04.kubeops.net
        http:
          paths:
          - path: /oauth2
            pathType: Prefix
            backend:
              service:
                name: oauth2-proxy
                port:
                  number: 80
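
Assuming the manifest above is saved as oauth2-proxy-ingress.yaml, apply it and check that both Ingress objects are served by the nginx controller:

    kubectl apply -f oauth2-proxy-ingress.yaml
    kubectl get ingress -n rook-ceph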

3.4 - Upgrade Kubernetes Version

This guide outlines the steps to upgrade the Kubernetes version of a cluster, specifically demonstrating how to change the version using a configuration file.

Upgrading a Kubernetes cluster

Upgrading a Kubernetes cluster is essential to maintain security, stability, and compatibility. Like Kubernetes itself, we adhere to the version skew policy and only allow upgrades between releases that differ by a single minor version. This ensures compatibility between components, reduces the risk of instability, and keeps the cluster in a supported and secure state. For more information about the version skew policy, click here.

You can use the following steps to upgrade the Kubernetes version of a cluster.

Kubernetes Version Upgrade Process:

Prerequisites

KOSI Login Recommendation

Before performing any action with kubeopsctl, it is recommended to log in with kosi first. Refer to the official KOSI documentation for details here.

1. Pull required KOSI packages on your ADMIN

If you do not specify a parameter, the Kubernetes version 1.32.2 will be pulled.
With the parameter --kubernetesVersion you can pull a specific Kubernetes version, for example 1.34.1.
Available Kubernetes versions are 1.32.2, 1.32.3, 1.32.9, 1.32.10, 1.33.3, 1.33.5 and 1.34.1.

kubeopsctl pull --kubernetesVersion <x.xx.x>

2. Change your target version inside the cluster-values

3. Start the upgrade with the command

kubeopsctl apply -f cluster-values.yaml

Example 1 - Upgrade all nodes in the cluster to a specific version

We want to upgrade a cluster from Kubernetes version v1.33.5 to v1.34.1. Follow these steps.

1. Pull required KOSI packages on your ADMIN

Pull the Kubernetes v1.34.1 packages on your ADMIN machine.

kubeopsctl pull --kubernetesVersion 1.34.1

2. Change your target version inside the cluster-values

Adjust your cluster-values as shown in the example below. Be sure to set the current version at the top level (kubernetesVersion) as well as the target version for each node (kubeVersion).

# file cluster-values.yaml
apiVersion: kubeops/kubeopsctl/cluster/beta/v1
imagePullRegistry: registry.kubeops.net/kubeops/kubeops
airgap: true
clusterName: myCluster
clusterUser: root
kubernetesVersion: 1.33.5     # -> actual version
kubeVipEnabled: false
virtualIP: 10.2.10.110
firewall: nftables
pluginNetwork: calico
containerRuntime: containerd
kubeOpsRoot: /home/myuser/kubeops
serviceSubnet: 192.168.128.0/17
podSubnet: 192.168.0.0/17
debug: true
systemCpu: 250m
systemMemory: 256Mi
packageRepository: local
changeCluster: true           # -> important! Needs to be set for an upgrade
zones:
- name: zone1
  nodes:
  - name: demo-controlplane01
    iPAddress: 10.2.10.110
    type: controlplane
    kubeVersion: 1.34.1       # -> target version
  - name: demo-worker01
    iPAddress: 10.2.10.210
    type: worker
    kubeVersion: 1.34.1       # -> target version
- name: zone2
  nodes:
  - name: demo-controlplane02
    iPAddress: 10.2.10.120
    type: controlplane
    kubeVersion: 1.34.1       # -> target version
  - name: demo-worker02
    iPAddress: 10.2.10.220
    type: worker
    kubeVersion: 1.34.1       # -> target version
- name: zone3
  nodes:
  - name: demo-controlplane03
    iPAddress: 10.2.10.130
    type: controlplane
    kubeVersion: 1.34.1       # -> target version
  - name: demo-worker03
    iPAddress: 10.2.10.230
    type: worker
    kubeVersion: 1.34.1       # -> target version

3. Validate your values and upgrade the cluster

Once the cluster-values.yaml is created, check the values once again. If you are ready, start the upgrade process with the command:

kubeopsctl apply -f cluster-values.yaml
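
After the run has finished, you can verify that every node reports the target kubelet version in the VERSION column:

kubectl get nodes -o wide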

Example 2 - Upgrade zones in tranches to a specific version

We want to upgrade a cluster in tranches: first zone1, because it contains the initial control-plane node, then zone2, and finally zone3.

1. Pull required KOSI packages on your ADMIN

Pull the Kubernetes v1.33.5 packages on your ADMIN machine.

kubeopsctl pull --kubernetesVersion 1.33.5

2. Adjust your cluster-values in zone1

Adjust your cluster-values as shown in the example below. Be sure to set the current version at the top level, as well as the target version for the nodes. In the snippet below, only zone1 is set to the target version.

# file cluster-values.yaml
apiVersion: kubeops/kubeopsctl/cluster/beta/v1
imagePullRegistry: registry.kubeops.net/kubeops/kubeops
airgap: true
clusterName: myCluster
clusterUser: root
kubernetesVersion: 1.32.2     # -> actual version
kubeVipEnabled: false           
virtualIP: 10.2.10.110
firewall: nftables
pluginNetwork: calico
containerRuntime: containerd
kubeOpsRoot: /home/myuser/kubeops
serviceSubnet: 192.168.128.0/17
podSubnet: 192.168.0.0/17
debug: true
systemCpu: 250m
systemMemory: 256Mi
packageRepository: local
changeCluster: true
zones:
- name: zone1
  nodes:
  - name: demo-controlplane01
    iPAddress: 10.2.10.110
    type: controlplane
    kubeVersion: 1.33.5       # -> target version
  - name: demo-worker01
    iPAddress: 10.2.10.210
    type: worker
    kubeVersion: 1.33.5       # -> target version
- name: zone2
  nodes:
  - name: demo-controlplane02
    iPAddress: 10.2.10.120
    type: controlplane
    kubeVersion: 1.32.2       
  - name: demo-worker02
    iPAddress: 10.2.10.220
    type: worker
    kubeVersion: 1.32.2       
- name: zone3
  nodes:
  - name: demo-controlplane03
    iPAddress: 10.2.10.130
    type: controlplane
    kubeVersion: 1.32.2       
  - name: demo-worker03
    iPAddress: 10.2.10.230
    type: worker
    kubeVersion: 1.32.2       

3. Validate your values and upgrade the cluster

Once the cluster-values.yaml is created, check the values once again. If you are ready, start the upgrade process with the command:

kubeopsctl apply -f cluster-values.yaml
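
After this run only the zone1 nodes should report the new version, while zone2 and zone3 stay on the old one. You can check this with the node names from the example:

kubectl get nodes demo-controlplane01 demo-worker01 demo-controlplane02 demo-worker02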

4. Adjust your cluster-values in zone2

Now change the target version of zone 2.

# file cluster-values.yaml
apiVersion: kubeops/kubeopsctl/cluster/beta/v1
imagePullRegistry: registry.kubeops.net/kubeops/kubeops
airgap: true
clusterName: myCluster
clusterUser: root
kubernetesVersion: 1.32.2     # -> actual version
kubeVipEnabled: false
virtualIP: 10.2.10.110
firewall: nftables
pluginNetwork: calico
containerRuntime: containerd
kubeOpsRoot: /home/myuser/kubeops
serviceSubnet: 192.168.128.0/17
podSubnet: 192.168.0.0/17
debug: true
systemCpu: 250m
systemMemory: 256Mi
packageRepository: local
changeCluster: true
zones:
- name: zone1
  nodes:
  - name: demo-controlplane01
    iPAddress: 10.2.10.110
    type: controlplane
    kubeVersion: 1.33.5       # -> target version
  - name: demo-worker01
    iPAddress: 10.2.10.210
    type: worker
    kubeVersion: 1.33.5       # -> target version
- name: zone2
  nodes:
  - name: demo-controlplane02
    iPAddress: 10.2.10.120
    type: controlplane
    kubeVersion: 1.33.5       # -> target version
  - name: demo-worker02
    iPAddress: 10.2.10.220
    type: worker
    kubeVersion: 1.33.5       # -> target version
- name: zone3
  nodes:
  - name: demo-controlplane03
    iPAddress: 10.2.10.130
    type: controlplane
    kubeVersion: 1.32.2       
  - name: demo-worker03
    iPAddress: 10.2.10.230
    type: worker
    kubeVersion: 1.32.2       

5. Validate your values and upgrade the cluster

Once the cluster-values.yaml is created, check the values once again. If you are ready, start the upgrade process with the command:

kubeopsctl apply -f cluster-values.yaml

6. Adjust your cluster-values in zone3

Now change the target version of zone 3.

# file cluster-values.yaml
apiVersion: kubeops/kubeopsctl/cluster/beta/v1
imagePullRegistry: registry.kubeops.net/kubeops/kubeops
airgap: true
clusterName: myCluster
clusterUser: root
kubernetesVersion: 1.32.2     # -> actual version
kubeVipEnabled: false
virtualIP: 10.2.10.110
firewall: nftables
pluginNetwork: calico
containerRuntime: containerd
kubeOpsRoot: /home/myuser/kubeops
serviceSubnet: 192.168.128.0/17
podSubnet: 192.168.0.0/17
debug: true
systemCpu: 250m
systemMemory: 256Mi
packageRepository: local
changeCluster: true
zones:
- name: zone1
  nodes:
  - name: demo-controlplane01
    iPAddress: 10.2.10.110
    type: controlplane
    kubeVersion: 1.33.5       # -> target version
  - name: demo-worker01
    iPAddress: 10.2.10.210
    type: worker
    kubeVersion: 1.33.5       # -> target version
- name: zone2
  nodes:
  - name: demo-controlplane02
    iPAddress: 10.2.10.120
    type: controlplane
    kubeVersion: 1.33.5       # -> target version
  - name: demo-worker02
    iPAddress: 10.2.10.220
    type: worker
    kubeVersion: 1.33.5       # -> target version
- name: zone3
  nodes:
  - name: demo-controlplane03
    iPAddress: 10.2.10.130
    type: controlplane
    kubeVersion: 1.33.5       # -> target version
  - name: demo-worker03
    iPAddress: 10.2.10.230
    type: worker
    kubeVersion: 1.33.5       # -> target version

7. Validate your values and upgrade the cluster

Once the cluster-values.yaml is created, check the values once again. If you are ready, start the upgrade process with the command:

kubeopsctl apply -f cluster-values.yaml

3.5 - Installing KubeOps Compliance applications

This guide outlines the steps to install KubeOps Compliance applications of a cluster.

Installing KubeOps Compliance applications

There is a predefined selection of applications included with KubeOps Compliance. These applications ensure a production-ready cluster deployment and can be individually configured as needed.

By separating the cluster values from the application values, the application values can be modified independently and installed at a later stage, providing greater flexibility and maintainability.

Prerequisites

KOSI Login Recommendation

Before performing any action with kubeopsctl, it is recommended to log in with kosi first. Refer to the official KOSI documentation for details here.

Example 1: Installing Applications in a non-airgap-environment

To install the KubeOps Compliance Applications in an existing cluster, follow these steps:

1. Define the Enterprise-Value-file

In the example value, the following applications are enabled:

  • opa-gatekeeper
  • rook-ceph
  • harbor
  • kubeops-dashboard

All other applications are disabled and will not be installed. For more information about the available packages, as well as the parameters for each package, check here.

The following file is only an example. Make sure to change the necessary values (IPs, passwords, …) before use.

apiVersion: kubeops/kubeopsctl/enterprise/beta/v1
deleteNs: false
localRegistry: false
packages:
- name: opa-gatekeeper
  enabled: true
  values:
    standard:
      namespace: opa-gatekeeper
    advanced:
- name: rook-ceph
  enabled: true
  values:
    standard:
      namespace: rook-ceph
      cluster:
        resources:
          mgr:
            requests:
              cpu: "500m"
              memory: "512Mi"
          mon:
            requests:
              cpu: "1"
              memory: "1Gi"
          osd:
            requests:
              cpu: "1"
              memory: "1Gi"
        dashboard:
          enabled: true
      operator:
        data:
          rookLogLevel: "DEBUG"
- name: harbor
  enabled: true
  values:
    standard:
      namespace: harbor
      harborpass: "topsecret"
      databasePassword: "topsecret"
      redisPassword: "topsecret"
      externalURL: http://10.2.10.110:30002
      nodePort: 30002
      hostname: harbor.local
      harborPersistence:
        persistentVolumeClaim:
          registry:
            size: 40Gi
            storageClass: "rook-cephfs"
          jobservice:
            jobLog:
              size: 1Gi
              storageClass: "rook-cephfs"
          database:
            size: 1Gi
            storageClass: "rook-cephfs"
          redis:
            size: 1Gi
            storageClass: "rook-cephfs"
          trivy: 
            size: 5Gi
            storageClass: "rook-cephfs"
    advanced:
- name: kubeops-dashboard
  enabled: true
  values:
    standard:
      namespace: monitoring
      hostname: kubeops-dashboard.local
      service:
        nodePort: 30007
    advanced:
- name: filebeat-os
  enabled: false
  values:
    standard:
      namespace: logging
    advanced:

2. Pull the KubeOps Compliance Applications packages
To pull the required application packages in the correct version for the release, use the following commands:

kubeopsctl pull -f enterprise-values.yaml --kubernetesVersion <x.xx.x>

or

kubeopsctl pull --tools enterprise-values.yaml --kubernetesVersion <x.xx.x>

3. The KubeOps Compliance Application installation process
When installing only the applications, it is important that the flag changeCluster is set to false in your cluster-values.yaml.

The following file is only an example. Make sure to change the necessary values (IPs, passwords, …) before use.

# file cluster-values.yaml
apiVersion: kubeops/kubeopsctl/cluster/beta/v1
imagePullRegistry: registry.kubeops.net/kubeops/kubeops
airgap: false                       # -> important
clusterName: myCluster
clusterUser: root
kubernetesVersion: 1.31.6         
kubeVipEnabled: false
virtualIP: 10.2.10.110
firewall: nftables
pluginNetwork: calico
containerRuntime: containerd
kubeOpsRoot: /home/myuser/kubeops
serviceSubnet: 192.168.128.0/17
podSubnet: 192.168.0.0/17
debug: true
systemCpu: 250m
systemMemory: 256Mi
packageRepository: https://packagerepo.kubeops.net/
changeCluster: false                # -> important
zones:
- name: zone1
  nodes:
  - name: demo-controlplane01
    iPAddress: 10.2.10.110
    type: controlplane
    kubeVersion: 1.32.2       
  - name: demo-worker01
    iPAddress: 10.2.10.210
    type: worker
    kubeVersion: 1.32.2      
- name: zone2
  nodes:
  - name: demo-controlplane02
    iPAddress: 10.2.10.120
    type: controlplane
    kubeVersion: 1.32.2       
  - name: demo-worker02
    iPAddress: 10.2.10.220
    type: worker
    kubeVersion: 1.32.2       
- name: zone3
  nodes:
  - name: demo-controlplane03
    iPAddress: 10.2.10.130
    type: controlplane
    kubeVersion: 1.32.2       
  - name: demo-worker03
    iPAddress: 10.2.10.230
    type: worker
    kubeVersion: 1.32.2        

4. Validate your values and install the KubeOps Compliance Applications
Once you have finished defining your values, check them once again. If you are ready, start the installation process with the command:

kubeopsctl apply -f cluster-values.yaml -f enterprise-values.yaml
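
Once the run has completed, you can check that the enabled applications came up in the namespaces configured in the example values above:

kubectl get pods -n opa-gatekeeper
kubectl get pods -n rook-ceph
kubectl get pods -n harbor
kubectl get pods -n monitoring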

Example 2: Installing Applications in an airgap-environment

To install the KubeOps Compliance Applications in an existing cluster, follow these steps:

1. Define the Enterprise-Value-file

In the example value, the following applications are enabled:

  • opa-gatekeeper
  • rook-ceph
  • harbor
  • kubeops-dashboard

All other applications are disabled and will not be installed. The value parameters are explained in the references and can be found here.

apiVersion: kubeops/kubeopsctl/enterprise/beta/v1
deleteNs: false
localRegistry: true             # important for airgap, otherwise images are pulled from public registry
packages:
- name: opa-gatekeeper
  enabled: true
  values:
    standard:
      namespace: opa-gatekeeper
    advanced:
- name: rook-ceph
  enabled: true
  values:
    standard:
      namespace: rook-ceph
      cluster:
        resources:
          mgr:
            requests:
              cpu: "500m"
              memory: "512Mi"
          mon:
            requests:
              cpu: "1"
              memory: "1Gi"
          osd:
            requests:
              cpu: "1"
              memory: "1Gi"
        dashboard:
          enabled: true
      operator:
        data:
          rookLogLevel: "DEBUG"
- name: harbor
  enabled: true
  values:
    standard:
      namespace: harbor
      harborpass: "topsecret"
      databasePassword: "topsecret"
      redisPassword: "topsecret"
      externalURL: http://10.2.10.110:30002
      nodePort: 30002
      hostname: harbor.local
      harborPersistence:
        persistentVolumeClaim:
          registry:
            size: 40Gi
            storageClass: "rook-cephfs"
          jobservice:
            jobLog:
              size: 1Gi
              storageClass: "rook-cephfs"
          database:
            size: 1Gi
            storageClass: "rook-cephfs"
          redis:
            size: 1Gi
            storageClass: "rook-cephfs"
          trivy: 
            size: 5Gi
            storageClass: "rook-cephfs"
    advanced:
- name: kubeops-dashboard
  enabled: true
  values:
    standard:
      namespace: monitoring
      hostname: kubeops-dashboard.local
      service:
        nodePort: 30007
    advanced:
- name: filebeat-os
  enabled: false
  values:
    standard:
      namespace: logging
    advanced:

2. Pull the KubeOps Compliance Applications packages
To pull the required application packages in the correct version for the release, use the following commands:

kubeopsctl pull -f enterprise-values.yaml --kubernetesVersion <x.xx.x>

or

kubeopsctl pull --tools enterprise-values.yaml --kubernetesVersion <x.xx.x>

3. The KubeOps Compliance Application installation process
When installing only the applications, it is important that the flag changeCluster is set to false in your cluster-values.yaml.

The following file is only an example. Make sure to change the necessary values (IPs, passwords, …) before use.

# file cluster-values.yaml
apiVersion: kubeops/kubeopsctl/cluster/beta/v1
imagePullRegistry: registry.kubeops.net/kubeops/kubeops
airgap: true                        # -> important
clusterName: myCluster
clusterUser: root
kubernetesVersion: 1.32.2         
kubeVipEnabled: false
virtualIP: 10.2.10.110
firewall: nftables
pluginNetwork: calico
containerRuntime: containerd
kubeOpsRoot: /home/myuser/kubeops
serviceSubnet: 192.168.128.0/17
podSubnet: 192.168.0.0/17
debug: true
systemCpu: 250m
systemMemory: 256Mi
packageRepository: local
changeCluster: false                # -> important
zones:
- name: zone1
  nodes:
  - name: demo-controlplane01
    iPAddress: 10.2.10.110
    type: controlplane
    kubeVersion: 1.32.2      
  - name: demo-worker01
    iPAddress: 10.2.10.210
    type: worker
    kubeVersion: 1.32.2       
- name: zone2
  nodes:
  - name: demo-controlplane02
    iPAddress: 10.2.10.120
    type: controlplane
    kubeVersion: 1.32.2      
  - name: demo-worker02
    iPAddress: 10.2.10.220
    type: worker
    kubeVersion: 1.32.2      
- name: zone3
  nodes:
  - name: demo-controlplane03
    iPAddress: 10.2.10.130
    type: controlplane
    kubeVersion: 1.32.2      
  - name: demo-worker03
    iPAddress: 10.2.10.230
    type: worker
    kubeVersion: 1.32.2        

4. Validate your values and install the KubeOps Compliance Applications
Once you have finished defining your values, check them once again. If you are ready, start the installation process with the command:

kubeopsctl apply -f cluster-values.yaml -f enterprise-values.yaml
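
In an airgapped setup it is also worth confirming that the pods reference images from your local registry rather than a public one. One way to list the images in use per namespace (harbor shown here as an example) is:

kubectl get pods -n harbor -o jsonpath='{range .items[*]}{range .spec.containers[*]}{.image}{"\n"}{end}{end}' | sort -u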

3.6 - Updating KubeOps Compliance applications

This guide outlines the steps to update KubeOps Compliance applications of a cluster.

Updating KubeOps Compliance applications

There is a predefined selection of applications included with KubeOps Compliance. These applications ensure a production-ready cluster deployment and can be configured individually as needed.

By separating cluster values from application values, application values can be modified independently and installed later, providing greater flexibility and maintainability.

kubeopsctl automatically detects whether an application is already deployed and updates it accordingly.

Prerequisites

KOSI Login Recommendation

Before performing any action with kubeopsctl, it is recommended to log in with kosi first. Refer to the official KOSI documentation for details here.

Updated KubeOpsctl

If you have an older kubeopsctl version installed, update it before starting to update the Compliance applications.

# kubeopsctl-version can be found under : https://packagerepo.kubeops.net/deb/pool/main/
sudo apt update
sudo apt install -y kubeopsctl=<kubeopsctl-version>
# kubeopsctl-version can be found under : https://packagerepo.kubeops.net/rpm/
sudo dnf install -y --disableexcludes=kubeops-repo <kubeopsctl-version>
# kubeopsctl-version can be found under : https://packagerepo.kubeops.net/deb/pool/main/
wget https://packagerepo.kubeops.net/deb/pool/main/<kubeopsctl-version>.deb
sudo dpkg --install <kubeopsctl-version>.deb
# kubeopsctl-versions can be found under: https://packagerepo.kubeops.net/rpm
sudo rpm -e kubeopsctl
wget https://packagerepo.kubeops.net/rpm/<kubeopsctl-version>.rpm
sudo rpm --install -v <kubeopsctl-version>.rpm
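
After the update you can confirm the installed package version with the package manager of your distribution:

# Debian/Ubuntu
dpkg -s kubeopsctl | grep Version
# RHEL-based
rpm -q kubeopsctl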

Example 1: Updating Applications in a non-airgap-environment

To update the KubeOps Compliance Applications in an existing cluster, follow these steps:

1. Define the Enterprise-Value-file

In the example value, the following applications are enabled:

  • opa-gatekeeper
  • rook-ceph
  • harbor
  • kubeops-dashboard

All other applications are disabled and will not be updated. The value parameters are explained in the references and can be found here.

apiVersion: kubeops/kubeopsctl/enterprise/beta/v1
deleteNs: false
localRegistry: false
packages:
- name: opa-gatekeeper
  enabled: true
  values:
    standard:
      namespace: opa-gatekeeper
    advanced:
- name: rook-ceph
  enabled: true
  values:
    standard:
      namespace: rook-ceph
      cluster:
        resources:
          mgr:
            requests:
              cpu: "500m"
              memory: "512Mi"
          mon:
            requests:
              cpu: "1"
              memory: "1Gi"
          osd:
            requests:
              cpu: "1"
              memory: "1Gi"
        dashboard:
          enabled: true
      operator:
        data:
          rookLogLevel: "DEBUG"
- name: harbor
  enabled: true
  values:
    standard:
      namespace: harbor
      harborpass: "topsecret"
      databasePassword: "topsecret"
      redisPassword: "topsecret"
      externalURL: http://10.2.10.110:30002
      nodePort: 30002
      hostname: harbor.local
      harborPersistence:
        persistentVolumeClaim:
          registry:
            size: 40Gi
            storageClass: "rook-cephfs"
          jobservice:
            jobLog:
              size: 1Gi
              storageClass: "rook-cephfs"
          database:
            size: 1Gi
            storageClass: "rook-cephfs"
          redis:
            size: 1Gi
            storageClass: "rook-cephfs"
          trivy: 
            size: 5Gi
            storageClass: "rook-cephfs"
    advanced:
- name: kubeops-dashboard
  enabled: true
  values:
    standard:
      namespace: monitoring
      hostname: kubeops-dashboard.local
      service:
        nodePort: 30007
    advanced:
- name: filebeat-os
  enabled: false
  values:
    standard:
      namespace: logging
    advanced:

2. Pull the KubeOps Compliance Applications packages
To pull the required application packages in the correct version for the release, use the following commands:

kubeopsctl pull -f enterprise-values.yaml

or

kubeopsctl pull --tools enterprise-values.yaml

3. The KubeOps Compliance Application update process
When updating only the applications, it is important that the flag changeCluster is set to false in your cluster-values.yaml.

# file cluster-values.yaml
apiVersion: kubeops/kubeopsctl/cluster/beta/v1
imagePullRegistry: registry.kubeops.net/kubeops/kubeops
airgap: false                       # -> important
clusterName: myCluster
clusterUser: root
kubernetesVersion: 1.31.6         
kubeVipEnabled: false
virtualIP: 10.2.10.110
firewall: nftables
pluginNetwork: calico
containerRuntime: containerd
kubeOpsRoot: /home/myuser/kubeops
serviceSubnet: 192.168.128.0/17
podSubnet: 192.168.0.0/17
debug: true
systemCpu: 250m
systemMemory: 256Mi
packageRepository: https://packagerepo.kubeops.net/
changeCluster: false                # -> important
zones:
- name: zone1
  nodes:
  - name: demo-controlplane01
    iPAddress: 10.2.10.110
    type: controlplane
    kubeVersion: 1.31.6       
  - name: demo-worker01
    iPAddress: 10.2.10.210
    type: worker
    kubeVersion: 1.31.6       
- name: zone2
  nodes:
  - name: demo-controlplane02
    iPAddress: 10.2.10.120
    type: controlplane
    kubeVersion: 1.31.6       
  - name: demo-worker02
    iPAddress: 10.2.10.220
    type: worker
    kubeVersion: 1.30.8       
- name: zone3
  nodes:
  - name: demo-controlplane03
    iPAddress: 10.2.10.130
    type: controlplane
    kubeVersion: 1.31.6       
  - name: demo-worker03
    iPAddress: 10.2.10.230
    type: worker
    kubeVersion: 1.31.6        

4. Validate your values and update the KubeOps Compliance Applications
Once you have finished defining your values, check them once again. If you are ready, start the update process with the command:

kubeopsctl apply -f cluster-values.yaml -f enterprise-values.yaml
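
To confirm that the applications were actually updated, you can compare the running versions afterwards. Assuming the packages are deployed as Helm releases and helm plus cluster access are available on your ADMIN machine, a quick check is:

# shows release revision, chart version and app version per namespace
helm list -A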

Example 2: Updating Applications in an airgap-environment

To update the KubeOps Compliance Applications in an existing cluster, follow these steps:

1. Define the Enterprise-Value-file

In the example value, the following applications are enabled:

  • opa-gatekeeper
  • rook-ceph
  • harbor
  • kubeops-dashboard

All other applications are disabled and will not be updated. The value parameters are explained in the references and can be found here.

apiVersion: kubeops/kubeopsctl/enterprise/beta/v1
deleteNs: false
localRegistry: true             # important for airgap, otherwise images are pulled from public registry
packages:
- name: opa-gatekeeper
  enabled: true
  values:
    standard:
      namespace: opa-gatekeeper
    advanced:
- name: rook-ceph
  enabled: true
  values:
    standard:
      namespace: rook-ceph
      cluster:
        resources:
          mgr:
            requests:
              cpu: "500m"
              memory: "512Mi"
          mon:
            requests:
              cpu: "1"
              memory: "1Gi"
          osd:
            requests:
              cpu: "1"
              memory: "1Gi"
        dashboard:
          enabled: true
      operator:
        data:
          rookLogLevel: "DEBUG"
- name: harbor
  enabled: true
  values:
    standard:
      namespace: harbor
      harborpass: "topsecret"
      databasePassword: "topsecret"
      redisPassword: "topsecret"
      externalURL: http://10.2.10.110:30002
      nodePort: 30002
      hostname: harbor.local
      harborPersistence:
        persistentVolumeClaim:
          registry:
            size: 40Gi
            storageClass: "rook-cephfs"
          jobservice:
            jobLog:
              size: 1Gi
              storageClass: "rook-cephfs"
          database:
            size: 1Gi
            storageClass: "rook-cephfs"
          redis:
            size: 1Gi
            storageClass: "rook-cephfs"
          trivy: 
            size: 5Gi
            storageClass: "rook-cephfs"
    advanced:
- name: kubeops-dashboard
  enabled: true
  values:
    standard:
      namespace: monitoring
      hostname: kubeops-dashboard.local
      service:
        nodePort: 30007
    advanced:
- name: filebeat-os
  enabled: false
  values:
    standard:
      namespace: logging
    advanced:

2. Update kubeopsctl

If you have an older kubeopsctl version installed, update it using the following commands.

# kubeopsctl-version can be found under : https://packagerepo.kubeops.net/deb/pool/main/
sudo apt update
sudo apt install -y kubeopsctl=<kubeopsctl-version>
# kubeopsctl-version can be found under : https://packagerepo.kubeops.net/rpm/
sudo dnf install -y --disableexcludes=kubeops-repo <kubeopsctl-version>
# kubeopsctl-version can be found under : https://packagerepo.kubeops.net/deb/pool/main/
wget https://packagerepo.kubeops.net/deb/pool/main/<kubeopsctl-version>.deb
sudo dpkg --install <kubeopsctl-version>.deb
# kubeopsctl-versions can be found under: https://packagerepo.kubeops.net/rpm
sudo rpm -e kubeopsctl
wget https://packagerepo.kubeops.net/rpm/<kubeopsctl-version>.rpm
sudo rpm --install -v <kubeopsctl-version>.rpm

3. Pull the KubeOps Compliance Applications packages
To pull the required application packages in the correct version for the release, use the following commands:

kubeopsctl pull -f enterprise-values.yaml

or

kubeopsctl pull --tools enterprise-values.yaml

4. The KubeOps Compliance Application update process
When updating only the applications, it is important that the flag changeCluster is set to false in your cluster-values.yaml.

# file cluster-values.yaml
apiVersion: kubeops/kubeopsctl/cluster/beta/v1
imagePullRegistry: registry.kubeops.net/kubeops/kubeops
airgap: true                          # -> important
clusterName: myCluster
clusterUser: root
kubernetesVersion: 1.31.6         
kubeVipEnabled: false
virtualIP: 10.2.10.110
firewall: nftables
pluginNetwork: calico
containerRuntime: containerd
kubeOpsRoot: /home/myuser/kubeops
serviceSubnet: 192.168.128.0/17
podSubnet: 192.168.0.0/17
debug: true
systemCpu: 250m
systemMemory: 256Mi
packageRepository: local
changeCluster: false                # -> important
zones:
- name: zone1
  nodes:
  - name: demo-controlplane01
    iPAddress: 10.2.10.110
    type: controlplane
    kubeVersion: 1.31.6       
  - name: demo-worker01
    iPAddress: 10.2.10.210
    type: worker
    kubeVersion: 1.31.6       
- name: zone2
  nodes:
  - name: demo-controlplane02
    iPAddress: 10.2.10.120
    type: controlplane
    kubeVersion: 1.31.6       
  - name: demo-worker02
    iPAddress: 10.2.10.220
    type: worker
    kubeVersion: 1.30.8       
- name: zone3
  nodes:
  - name: demo-controlplane03
    iPAddress: 10.2.10.130
    type: controlplane
    kubeVersion: 1.31.6       
  - name: demo-worker03
    iPAddress: 10.2.10.230
    type: worker
    kubeVersion: 1.31.6        

5. Validate your values and update the KubeOps Compliance Applications
Once you have finished defining your values, check them once again. If you are ready, start the update process with the command:

kubeopsctl apply -f cluster-values.yaml -f enterprise-values.yaml

3.7 - Harbor Deployment with CloudNativePG

Here is a brief overview of Harbor Deployment with CloudNativePG on Kubernetes using Kosi

Harbor Deployment with CloudNativePG on Kubernetes using Kosi

This guide provides detailed steps to deploy Harbor on Kubernetes using a CloudNativePG (CNPG) PostgreSQL cluster managed by the CloudNativePG operator, installed via Kosi.

Precondition: Log in to the preprod environment with Kosi before beginning.

Step 1 — Install CloudNativePG operator

Deploy the operator with Kosi:

kosi install --hub kubeops kubeops/cloudnative-pg-operator:1.28.1 --dname cnpg-operator

This step installs the CloudNativePG operator into the cluster; the operator manages PostgreSQL clusters and their lifecycle.

Step 2 — Create PostgreSQL cluster for Harbor

1. Apply the following Cluster manifest to create a Postgres cluster with 2 instances and 1Gi storage:
cat <<EOF | kubectl apply -f -
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cloudnative-pg
  namespace: harbor
spec:
  instances: 2
  imagePullSecrets:
  - name: registry-pullsecret
  storage:
    size: 1Gi
EOF
2. Services and pods created for the cluster cloudnative-pg:

  • cloudnative-pg-rw → primary (read/write)
  • cloudnative-pg-ro → replicas (read-only)
  • cloudnative-pg-r → all pods

3. Verify pods are Running:
kubectl get pods -n harbor

Step 3 — Retrieve application user credentials

CNPG automatically creates a Secret named cloudnative-pg-app in the harbor namespace.

1. Inspect it:
kubectl get secret cloudnative-pg-app -n harbor
2. Decode the base64-encoded fields:
kubectl get secret cloudnative-pg-app -n harbor -o jsonpath="{.data.username}" | base64 -d 
kubectl get secret cloudnative-pg-app -n harbor -o jsonpath="{.data.password}" | base64 -d 
kubectl get secret cloudnative-pg-app -n harbor -o jsonpath="{.data.dbname}" | base64 -d

Example values (for illustration only):

username: app
password: Hw2t7hXuKPfZrVjVDwCc4PeKTevlB7ORmzQeW50JtEqiwHl40xkxuhVHeRIU3fX2
database: app

Important:
Use the non-superuser application credentials from this Secret in Harbor’s configuration.
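
Before wiring these credentials into Harbor, you can optionally verify that the application user can reach the primary service. This is only a sketch using a throwaway postgres client pod; the image name and tag are assumptions and must be reachable from your cluster:

kubectl run pg-check -n harbor -it --rm --restart=Never --image=postgres:16 -- \
  psql "postgresql://app:<DB_PASSWORD>@cloudnative-pg-rw.harbor.svc.cluster.local:5432/app" -c "\conninfo"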

Step 4 — Update Harbor tools.yaml for an external database

Edit your tools.yaml and set Harbor values under the helm chart configuration.
Example snippet:

- name: harbor
  enabled: true
  values:
    standard:
      namespace: harbor
      harborpass: "password"
      databasePassword: "<DB_PASSWORD>"
      redisPassword: "Redis_Password"
      externalURL: <your_domain_name>
      nodePort: 30002
      hostname: <your_domain_name>
      harborPersistence:
        persistentVolumeClaim:
          registry:
            size: 40Gi
            storageClass: "rook-cephfs"
          jobservice:
            jobLog:
              size: 1Gi
              storageClass: "rook-cephfs"
          database:
            size: 1Gi
            storageClass: "rook-cephfs"
          redis:
            size: 1Gi
            storageClass: "rook-cephfs"
          trivy: 
            size: 5Gi
            storageClass: "rook-cephfs"
    advanced:
      database:
        type: external
        external:
          host: "cloudnative-pg-rw.harbor.svc.cluster.local"
          port: "5432"
          username: "app"
          password: "Hw2t7hXuKPfZrVjVDwCc4PeKTevlB7ORmzQeW50JtEqiwHl40xkxuhVHeRIU3fX2"
          coreDatabase: "app"

Important: Use the -rw service host (cloudnative-pg-rw…) for write operations.
Do not use a superuser account.
Ensure the password matches the CNPG Secret.

Step 5 — Install Harbor with Kosi

  1. Deploy Harbor using the updated tools.yaml:
kosi install --hub kubeops kubeops/harbor:2.0.3 -f tools.yaml --dname harbor
  2. Verify Harbor pods:
kubectl get pods -n harbor
  3. Access Harbor at: <your_domain_name>:30002 (or as configured)
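
To check that Harbor is responding after the deployment, you can query its health endpoint (Harbor 2.x exposes it under /api/v2.0/health; replace the host with your configured externalURL):

curl -s http://<your_domain_name>:30002/api/v2.0/health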

3.8 - Ingress Configuration

Here is a brief overview of how you can configure your ingress manually.

Manual configuration of the Nginx-Ingress-Controller

Right now the Ingress Controller Package is not fully configured. To make complete use of the Ingress capabilities of the cluster, the user needs to manually update some of the settings of the corresponding service.

Locating the service

The service in question is called “ingress-nginx-controller” and can be found in the same namespace as the ingress package itself. To locate the service across all namespaces, you could use the following command.

kubectl get service -A | grep ingress-nginx-controller

This command should return two services, “ingress-nginx-controller” and “ingress-nginx-controller-admission”, though only the first one needs to be adjusted further.

Setting the Ingress-Controller service to type NodePort

To edit the service, you can use the following command, although the actual namespace may be different. This will change the service type to NodePort.

kubectl patch service ingress-nginx-controller -n kubeops -p '{"spec":{"type":"NodePort"}}'

Kubernetes will now automatically assign unused port numbers for the nodePort to allow HTTP and HTTPS connections to the service. These can be retrieved by running the same command used to locate the service. Alternatively, you can use the following command, which sets the port numbers 30080 and 30443 for the respective protocols. In that case, make sure that these port numbers are not already used by any other NodePort service.

kubectl patch service ingress-nginx-controller -n kubeops --type=json -p '[{"op":"replace","path":"/spec/type","value":"NodePort"}, {"op":"add","path":"/spec/ports/0/nodePort","value":30080}, {"op":"add","path":"/spec/ports/1/nodePort","value":30443}]'

Configuring external IPs

If you have access to external IPs that route to one or more cluster nodes, you can expose Kubernetes services of any type through these addresses. The command below shows how to add an external IP address to the service, using the example value “192.168.0.1”. Keep in mind that this value has to be changed to fit your network settings.

kubectl patch service ingress-nginx-controller -n kubeops -p '{"spec":{"externalIPs":["192.168.0.1"]}}'
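
After patching, you can quickly check that the controller answers on the external address with a plain HTTP request; the hostname below is a placeholder for one of your Ingress rules:

curl -H "Host: <your-ingress-hostname>" http://192.168.0.1/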

3.9 - Accessing Dashboards

A brief overview of how you can access dashboards.

Accessing Dashboards installed with KubeOps

To access an application dashboard, an SSH tunnel to one of the control planes is needed. The following dashboards are available and configured with the following NodePorts by default:

NodePort

32090 (if not set otherwise in the enterprise-values.yaml)

Connecting to the Dashboard

In order to connect to the dashboard, an SSH tunnel has to be established. There are various tools for doing this, such as the command line, PuTTY or MobaXterm.
To establish a tunnel, the NodePort of the dashboard has to be forwarded from one of the control planes to the local machine. After that, the dashboard can be accessed at localhost:<NodePort>.
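
For example, to reach the dashboard on NodePort 32090 through the first control plane from the cluster examples above, a plain SSH port forward looks like this (adjust user, IP and port to your environment):

ssh -N -L 32090:localhost:32090 root@10.2.10.110
# then open localhost:32090 in your local browser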

Initial login credentials

No credentials are necessary for login.

NodePort

30211 (if not set otherwise in the enterprise-values.yaml)

Connecting to the Dashboard

In order to connect to the dashboard, an SSH tunnel has to be established. There are various tools for doing this, such as the command line, PuTTY or MobaXterm.
To establish a tunnel, the NodePort of the dashboard has to be forwarded from one of the control planes to the local machine. After that, the dashboard can be accessed at localhost:<NodePort>.

Initial login credentials

  • username: the username set in the enterprise-values.yaml of Prometheus (default: user)
  • password: the password set in the enterprise-values.yaml of Prometheus (default: password)

NodePort

30050 (if not set otherwise in the enterprise-values.yaml)

Connecting to the Dashboard

In order to connect to the dashboard, an SSH tunnel has to be established. There are various tools for doing this, such as the command line, PuTTY or MobaXterm.
To establish a tunnel, the NodePort of the dashboard has to be forwarded from one of the control planes to the local machine. After that, the dashboard can be accessed at localhost:<NodePort>.

Initial login credentials

  • username: admin
  • password: Password@@123456

NodePort

  • https: 30003

Connecting to the Dashboard

In order to connect to the dashboard, an SSH tunnel has to be established. There are various tools for doing this, such as the command line, PuTTY or MobaXterm.
To establish a tunnel, the NodePort of the dashboard has to be forwarded from one of the control planes to the local machine. After that, the dashboard can be accessed at localhost:<NodePort>.

Initial login credentials

  • username: admin
  • password: the password set in the kubeopsvalues.yaml for the cluster creation (default: password)

NodePort

The Rook/Ceph Dashboard has no fixed NodePort yet. To find out the NodePort used by Rook/Ceph follow these steps:

  1. List the Services in the KubeOps namespace
kubectl get svc -n kubeops
  2. Find the line with the service rook-ceph-mgr-dashboard-external-http
NAME                                      TYPE        CLUSTER-IP        EXTERNAL-IP   PORT(S)                                     AGE
rook-ceph-mgr-dashboard-external-http     NodePort    192.168.197.13    <none>        7000:31268/TCP                              21h

Or use:

echo $(kubectl get --namespace rook-ceph -o jsonpath="{.spec.ports[0].nodePort}" services rook-ceph-mgr-dashboard-external-http)

In the example above the NodePort to connect to Rook/Ceph would be 31268.

Connecting to the Dashboard

In order to connect to the dashboard, an SSH tunnel has to be established. There are various tools for doing this, such as the command line, PuTTY or MobaXterm.
To establish a tunnel, the NodePort of the dashboard has to be forwarded from one of the control planes to the local machine. After that, the dashboard can be accessed at localhost:<NodePort>/ceph-dashboard/.

Initial login credentials

echo Username: admin
echo Password: $(kubectl get secret rook-ceph-dashboard-password -n rook-ceph --template={{.data.password}} | base64 -d)

NodePort

30007 (if not set otherwise in the enterprise-values.yaml)

Connecting to the Dashboard

In order to connect to the dashboard, an SSH tunnel has to be established. There are various tools for doing this, such as the command line, PuTTY or MobaXterm.
To establish a tunnel, the NodePort of the dashboard has to be forwarded from one of the control planes to the local machine. After that, the dashboard can be accessed at localhost:<NodePort>.

Initial login credentials

kubectl -n monitoring create token headlamp-admin

NodePort

30180

Connecting to the Dashboard

In order to connect to the dashboard, an SSH tunnel has to be established. There are various tools for doing this, such as the command line, PuTTY or MobaXterm.
To establish a tunnel, the NodePort of the dashboard has to be forwarded from one of the control planes to the local machine. After that, the dashboard can be accessed at localhost:<NodePort>/.

Initial login credentials

echo Username: $(kubectl get secret --namespace keycloak keycloak-kubeops -o jsonpath="{.data.ADMIN_USER}" | base64 -d)
echo Password: $(kubectl get secret --namespace keycloak keycloak-kubeops -o jsonpath="{.data.ADMIN_PASSWORD}" | base64 -d)

Connecting to the Dashboard

In order to connect to one of the dashboards, an SSH tunnel has to be established. There are various tools for doing this, such as the command line, PuTTY or MobaXterm.
To establish a tunnel, the NodePort of the dashboard has to be forwarded from one of the control planes to the local machine. After that, you can access the dashboard as described in the information panel of each dashboard above.

Connecting to the Dashboard via DNS

In order to connect to the dashboards via DNS, the /etc/hosts file needs the following additional entries:

10.2.10.11 kubeops-dashboard.local
10.2.10.11 harbor.local
10.2.10.11 keycloak.local
10.2.10.11 opensearch.local
10.2.10.11 grafana.local
10.2.10.11 rook-ceph.local

3.10 - Change the OpenSearch Password

Detailed instructions on how to change the OpenSearch password.

Changing a User Password in OpenSearch

This guide explains how to change a user password in OpenSearch with SecurityConfig enabled and an external Kubernetes Secret for user credentials.

Steps to Change the Password Using an External Secret

Prerequisites

  • Access to the Kubernetes cluster where OpenSearch is deployed.
  • Permissions to view and modify secrets in the relevant namespace.

Step 1: Generate a New Password Hash

Execute the command below (replacing the placeholders) to generate a hashed version of your new password:

kubectl exec -it <opensearch_pod_name> -n <opensearch_pod_namespace> -- bash -c "sh /usr/share/opensearch/plugins/opensearch-security/tools/hash.sh -p <new_password>"

Step 2: Extract the Existing Secret and Update internal_users.yaml

Retrieve the existing secret containing internal_users.yml. The secret stores the configuration in base64 encoding, so extract and decode it:

kubectl get secrets -n <opensearch_pod_namespace> internal-users-config-secret -o jsonpath='{.data.internal_users\.yml}' | base64 -d > internal_users.yaml

Open the exported file internal_users.yaml. Find the entry for the user whose password you want to change and replace the previous password hash with the new hash you generated in step 1. Then save the file.
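
For orientation, an entry in internal_users.yaml typically looks like the following; only the hash line needs to be replaced with the value generated in step 1 (the user name and roles shown here are illustrative):

admin:
  hash: "<new_password_hash_from_step_1>"
  reserved: true
  backend_roles:
  - "admin"
  description: "OpenSearch admin user"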

Step 3: Patch the Secret with Updated internal_users.yml Data and Restart the OpenSearch Pods

Encode the updated internal_users.yaml and apply it back to the secret.

cat internal_users.yaml | base64 -w 0 | xargs -I {} kubectl patch secret -n <opensearch_pod_namespace> internal-users-config-secret --patch '{"data": {"internal_users.yml": "{}"}}'

Restart the OpenSearch pods to use the updated secret.

kubectl rollout restart statefulset opensearch-cluster-master -n <opensearch_pod_namespace>

Step 4: Copy the internal_users.yaml into the Container

You can now copy the modified internal_users.yaml into the container with this command:

kubectl cp internal_users.yaml -n <opensearch_pod_namespace> <opensearch_pod_name>:/usr/share/opensearch/config/opensearch-security/internal_users.yml

Step 5: Run securityadmin.sh to Apply the Changes

Running securityadmin.sh applies the updated security configuration and ensures that the password change persists across OpenSearch pods.

kubectl exec -it <opensearch_pod_name> -n <opensearch_pod_namespace> -- bash -c "\
    sh /usr/share/opensearch/plugins/opensearch-security/tools/securityadmin.sh \
    -cd /usr/share/opensearch/config/opensearch-security/ \
    -icl -nhnv \
    -cacert /usr/share/opensearch/config/root-ca.pem \
    -cert /usr/share/opensearch/config/kirk.pem \
    -key /usr/share/opensearch/config/kirk-key.pem"

3.11 - Backup and restore

In this article, we look at the backup procedure with Velero.

Backup and restoring artifacts

What is Velero?

Velero is an open-source tool for backing up and restoring Kubernetes cluster resources and persistent volumes. It uses object storage to store backups and associated artifacts, and it can optionally integrate with supported block storage systems to snapshot your persistent volumes. Before beginning the installation process, you should identify the object storage provider and optional block storage provider(s) you’ll be using from the list of compatible providers.

Velero supports storage providers for both cloud-provider environments and on-premises environments.

Velero prerequisites:

  • Access to a Kubernetes cluster, v1.16 or later, with DNS and container networking enabled.
  • kubectl installed locally
  • Object Storage (S3, Cloud Provider Environment, On-Premises Environment)

Install Velero

This command is an example of how you can install Velero into your cluster:

velero install --provider aws --plugins velero/velero-plugin-for-aws:v1.2.1 --bucket velero --secret-file ./credentials-velero --use-volume-snapshots=false --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://minio.velero.svc:9000

NOTE:

  • s3Url has to be the URL of your S3 storage endpoint.
  • example for credentials-velero file:
    [default]
    aws_access_key_id = your_s3_storage_username
    aws_secret_access_key = your_s3_storage_password
    

Backup the cluster

Scheduled Backups

This command creates a backup for the cluster every 6 hours:

velero schedule create cluster --schedule "0 */6 * * *"

Get Schedules

This command lists all schedules for backups:

velero schedule get

Delete Schedules

This command deletes the specified schedule:

velero schedule delete cluster

Restore Scheduled Backup

This command restores the backup according to a schedule:

velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>

Backup

This command creates a backup for the cluster

velero backup create cluster

Get Backups

This command lists all created backups:

velero backup get

Delete Backups

This command deletes the specified backup:

velero backup delete <BACKUP NAME>

Restore Backup

This command restores the specified backup:

velero restore create <RESOURCE NAME> --from-backup <BACKUP NAME>

Backup a specific deployment

Scheduled Backups

This command creates a backup for the namespace “logging” every 6 hours:

velero schedule create filebeat --schedule "0 */6 * * *" --include-namespaces logging --include-cluster-resources=true

This command creates a backup for the deployment “filebeat” every 6 hours:

velero schedule create filebeat --schedule "0 */6 * * *" --include-namespaces logging --selector app=filebeat-filebeat,release=filebeat --include-resources serviceaccount,deployment,daemonset,configmap,clusterrolebinding,clusterrole --include-cluster-resources=true

Get Schedules

This command lists all schedules for backups:

velero schedule get

Delete Schedules

This command deletes the specified schedule:

velero schedule delete filebeat

Restore Scheduled Backup

This command restores the backup from a schedule:

velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>

Backup

This command creates a backup for the namespace “logging”:

velero backup create filebeat --include-namespaces logging --include-cluster-resources=true

This command creates a backup for the deployment “filebeat”:

velero backup create filebeat --include-namespaces logging --selector app=filebeat-filebeat,release=filebeat --include-resources serviceaccount,deployment,daemonset,configmap,clusterrolebinding,clusterrole --include-cluster-resources=true

Get Backups

This command lists all created backups:

velero backup get

Delete Backups

This command deletes the specified backup:

velero backup delete <BACKUP NAME>

Restore Backup

This command restores the specified backup:

velero restore create <RESOURCE NAME> --from-backup <BACKUP NAME>

Scheduled Backups

This command creates a backup for the namespace “logging” every 6 hours:

velero schedule create logstash --schedule "0 */6 * * *" --include-namespaces logging --include-cluster-resources=true

This command creates a backup for the deployment “logstash” every 6 hours:

velero schedule create logstash --schedule "0 */6 * * *" --include-namespaces logging --selector app=logstash-logstash,chart=logstash,release=logstash --include-resources StatefulSet,ServiceAccount,Service,Secret,RoleBinding,Role,PodSecurityPolicy,PodDisruptionBudget,Ingress,ConfigMap --include-cluster-resources=true

Get Schedules

This command lists all schedules for backups:

velero schedule get

Delete Schedules

This command deletes the specified schedule:

velero schedule delete logstash

Restore Scheduled Backup

This command restores the backup from a schedule:

velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>

Backup

This command creates a backup for the namespace “logging”:

velero backup create logstash --include-namespaces logging --include-cluster-resources=true

This command creates a backup for the deployment “logstash”:

velero backup create logstash --include-namespaces logging --selector app=logstash-logstash,chart=logstash,release=logstash --include-resources StatefulSet,ServiceAccount,Service,Secret,RoleBinding,Role,PodSecurityPolicy,PodDisruptionBudget,Ingress,ConfigMap --include-cluster-resources=true

Get Backups

This command lists all created backups:

velero backup get

Delete Backups

This command deletes the specified backup:

velero backup delete <BACKUP NAME>

Restore Backup

This command restores the specified backup:

velero restore create <RESOURCE NAME> --from-backup <BACKUP NAME>

Scheduled Backups

This command creates a backup for the namespace “logging” every 6 hours:

velero schedule create opensearch --schedule "0 */6 * * *" --include-namespaces logging --include-cluster-resources=true

This command creates a backup for the deployment “opensearch” every 6 hours:

velero schedule create opensearch --schedule "0 */6 * * *" --include-namespaces logging --selector app.kubernetes.io/instance=opensearch,app.kubernetes.io/name=opensearch --include-resources ConfigMap,Ingress,NetworkPolicy,PodDisruptionBudget,PodSecurityPolicy,Role,RoleBinding,Secret,Service,ServiceAccount,StatefulSet --include-cluster-resources=true

Get Schedules

This command lists all schedules for backups:

velero schedule get

Delete Schedules

This command deletes the specified schedule:

velero schedule delete opensearch

Restore Scheduled Backup

This command restores the backup from a schedule:

velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>

Backup

This command creates a backup for the namespace “logging”:

velero backup create opensearch --include-namespaces logging --include-cluster-resources=true

This command creates a backup for the deployment “opensearch”:

velero backup create opensearch --include-namespaces logging --selector app.kubernetes.io/instance=opensearch,app.kubernetes.io/name=opensearch --include-resources ConfigMap,Ingress,NetworkPolicy,PodDisruptionBudget,PodSecurityPolicy,Role,RoleBinding,Secret,Service,ServiceAccount,StatefulSet --include-cluster-resources=true

Get Backups

This command lists all created backups:

velero backup get

Delete Backups

This command deletes the specified backup:

velero backup delete <BACKUP NAME>

Restore Backup

This command restores the specified backup:

velero restore create <RESOURCE NAME> --from-backup <BACKUP NAME>

Scheduled Backups

This command creates a backup for the namespace “monitoring” every 6 hours:

velero schedule create prometheus --schedule "0 */6 * * *" --include-namespaces monitoring --include-cluster-resources=true

This command creates a backup for the deployment “prometheus” every 6 hours:

velero schedule create prometheus --schedule "0 */6 * * *" --include-namespaces monitoring --include-resources Alertmanager,Secret,Ingress,List,PodDisruptionBudget,Role,RoleBinding,PodSecurityPolicy,Service,ServiceAccount,ServiceMonitor,Endpoints,ConfigMap,ConfigMapList,ClusterRole,ClusterRoleBinding,SecretProviderClass,PodMonitor,Prometheus,Job,NetworkPolicy,MutatingWebhookConfiguration,ValidatingWebhookConfiguration,Issuer,Deployment,VerticalPodAutoscaler,ThanosRuler --include-cluster-resources=true

Get Schedules

This command lists all schedules for backups:

velero schedule get

Delete Schedules

This command deletes the specified schedule:

velero schedule delete prometheus

Restore Scheduled Backup

This command restores the backup from a schedule:

velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>

Backup

This command creates a backup for the namespace “monitoring”:

velero backup create prometheus --include-namespaces monitoring --include-cluster-resources=true

This command creates a backup for the deployment “prometheus”:

velero backup create prometheus --include-namespaces monitoring --include-resources Alertmanager,Secret,Ingress,List,PodDisruptionBudget,Role,RoleBinding,PodSecurityPolicy,Service,ServiceAccount,ServiceMonitor,Endpoints,ConfigMap,ConfigMapList,ClusterRole,ClusterRoleBinding,SecretProviderClass,PodMonitor,Prometheus,Job,NetworkPolicy,MutatingWebhookConfiguration,ValidatingWebhookConfiguration,Issuer,Deployment,VerticalPodAutoscaler,ThanosRuler --include-cluster-resources=true

Get Backups

This command lists all created backups:

velero backup get

Delete Backups

This command deletes the specified backup:

velero backup delete <BACKUP NAME>

Restore Backup

This command restores the specified backup:

velero restore create <RESOURCE NAME> --from-backup <BACKUP NAME>

Scheduled Backups

This command creates a backup for the namespace “harbor” every 6 hours:

velero schedule create harbor --schedule "0 */6 * * *" --include-namespaces harbor --include-cluster-resources=true

This command creates a backup for the deployment “harbor” every 6 hours:

velero schedule create harbor --schedule "0 */6 * * *" --include-namespaces harbor --include-resources ConfigMap,Deployment,PersistentVolumeClaim,Secret,Service,StatefulSet,Ingress,ServiceMonitor --include-cluster-resources=true

Get Schedules

This command lists all schedules for backups:

velero schedule get

Delete Schedules

This command deletes the specified schedule:

velero schedule delete harbor

Restore Scheduled Backup

This command restores the backup from a schedule:

velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>

Backup

This command creates a backup for the namespace “harbor”:

velero backup create harbor --include-namespaces harbor --include-cluster-resources=true

This command creates a backup for the deployment “harbor”:

velero backup create harbor --include-namespaces harbor --include-resources ConfigMap,Deployment,PersistentVolumeClaim,Secret,Service,StatefulSet,Ingress,ServiceMonitor --include-cluster-resources=true

Get Backups

This command lists all created backups:

velero backup get

Delete Backups

This command deletes the specified backup:

velero backup delete <BACKUP NAME>

Restore Backup

This command restores the specified backup:

velero restore create <RESOURCE NAME> --from-backup <BACKUP NAME>

Scheduled Backups

This command creates a backup for the namespace “gatekeeper-system” every 6 hours:

velero schedule create gatekeeper --schedule "0 */6 * * *" --include-namespaces gatekeeper-system --include-cluster-resources=true

This command creates a backup for the deployment “gatekeeper” every 6 hours:

velero schedule create gatekeeper --schedule "0 */6 * * *" --include-namespaces gatekeeper-system --include-resources PodSecurityPolicy,ServiceAccount,Deployment,PodDisruptionBudget,ResourceQuota,ClusterRole,Role,ClusterRoleBinding,RoleBinding,MutatingWebhookConfiguration,ValidatingWebhookConfiguration,Secret,Service,Job --include-cluster-resources=true

Get Schedules

This command lists all schedules for backups:

velero schedule get

Delete Schedules

This command deletes the specified schedule:

velero schedule delete gatekeeper

Restore Scheduled Backup

This command restores the backup from a schedule:

velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>

Backup

This command creates a backup for the namespace “gatekeeper-system”:

velero backup create gatekeeper --include-namespaces gatekeeper-system --include-cluster-resources=true

This command creates a backup for the deployment “gatekeeper”:

velero backup create gatekeeper --include-namespaces gatekeeper-system --include-resources PodSecurityPolicy,ServiceAccount,Deployment,PodDisruptionBudget,ResourceQuota,ClusterRole,Role,ClusterRoleBinding,RoleBinding,MutatingWebhookConfiguration,ValidatingWebhookConfiguration,Secret,Service,Job --include-cluster-resources=true

Get Backups

This command lists all created backups:

velero backup get

Delete Backups

This command deletes the specified backup:

velero backup delete <BACKUP NAME>

Restore Backup

This command restores the specified backup; the first argument is the name of the restore to create:

velero restore create <RESTORE NAME> --from-backup <BACKUP NAME>

Scheduled Backups

This command creates a backup for the namespace “rook-ceph” every 6 hours:

velero schedule create rook-ceph --schedule "0 */6 * * *" --include-namespaces rook-ceph --include-cluster-resources=true

Get Schedules

This command lists all schedules for backups:

velero schedule get

Delete Schedules

This command deletes the specified schedule:

velero schedule delete rook-ceph

Restore Scheduled Backup

This command restores the backup from a schedule:

velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>

Backup

This command creates a backup for the namespace “rook-ceph”:

velero backup create rook-ceph --include-namespaces rook-ceph --include-cluster-resources=true

Get Backups

This command lists all created backups:

velero backup get

Delete Backups

This command deletes the specified backup:

velero backup delete <BACKUP NAME>

Restore Backup

This command restores the specified backup; the first argument is the name of the restore to create:

velero restore create <RESTORE NAME> --from-backup <BACKUP NAME>

Restore databases

Keycloak

  1. Create a backup of the keycloak namespace; in this example the backup is called keycloak1.

This command creates a backup for the namespace “keycloak”:

velero backup create keycloak1 --include-namespaces keycloak --include-cluster-resources=true

  2. Restore the backup, in this example keycloak1:

velero restore create keycloak1 --from-backup keycloak1

  3. Restore the database dump:

kubectl -n <keycloak-namespace> exec keycloak-postgres-0 -- pg_restore -v --jobs=4 --clean --if-exists -d bitnami_keycloak /backup/keycloak-db.dump

3.12 - Add certificate as trusted

This section outlines the process for adding a certificate as trusted by downloading it from the browser and installing it in the Trusted Root Certification Authorities on Windows or Linux systems.

1. Download the certificate

Chrome:

  1. As soon as Chrome issues a certificate warning, click on Not secure to the left of the address bar.
  2. Show the certificate (Click on Certificate is not valid).
  3. Go to Details tab.
  4. Click Export... at the bottom and save the certificate.

Firefox:

  1. As soon as Firefox issues a certificate warning, click on Advanced....
  2. View the certificate (Click on View Certificate).
  3. Scroll down to Miscellaneous and save the certificate.

2. Install the certificate

Windows:

  1. Press Windows + R.
  2. Enter mmc and click OK.
  3. Click on File > Add/Remove snap-in....
  4. Select Certificates in the Available snap-ins list, click Add >, then click OK to add the snap-in.
  5. In the tree pane, open Certificates - Current user > Trusted Root Certification Authorities, then right-click Certificates and select All tasks > Import....
  6. The Certificate Import Wizard opens. Click on Next.
  7. Select the previously saved certificate and click Next.
  8. Click Next again in the next window.
  9. Click on Finish. If a warning pops up, click on Yes.
  10. The program can now be closed. Console settings do not need to be saved.
  11. Clear browser cache and restart browser.

Linux:

The procedure for trusting a certificate on Linux systems varies depending on the browser and Linux distribution used. To manually make a self-signed certificate trusted on a Linux system:

Distribution Copy certificate here Run following command to trust certificate
RedHat /etc/pki/ca-trust/source/anchors/ update-ca-trust extract
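For example, on a RHEL-based system the certificate exported from the browser (the file name used here is only a placeholder) could be trusted like this:

sudo cp my-certificate.crt /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust extract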

3.13 - Deploy Package On Cluster

This guide provides a simplified process for deploying packages in a Kubernetes cluster using Kosi with either the Helm or Kubectl plugin.

Deploying package on Cluster

You can install artifacts in your cluster in several ways. For this purpose, you can use these four plugins when creating a package:

  • helm
  • kubectl
  • cmd
  • Kosi

As an example, this guide installs the nginx-ingress Ingress Controller.

Using the Helm-Plugin

Prerequisite

In order to install an artifact with the Helm plugin, the Helm chart must first be downloaded. This step is not covered in this guide.
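For reference only, since this step is not covered here: the chart used in the example below could, for instance, be pulled from the NGINX Helm repository (repository URL and chart version are assumptions chosen to match the example package):

helm repo add nginx-stable https://helm.nginx.com/stable
helm pull nginx-stable/nginx-ingress --version 0.16.1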

Create KOSI package

First you need to create a KOSI package. The following command creates the necessary files in the current directory:

kosi create

The downloaded Helm chart must also be located in the current directory. To customize the deployment of the Helm chart, the values.yaml file must be edited. This file can be downloaded from ArtifactHub and must be placed in the same directory as the Helm chart.

All files required by a task in the package must be named in the package.kosi file under files. The container images required by the Helm chart must also be listed in the package.kosi under containers. In the example below, only two files are required for the installation: the Helm Chart for the nginx-ingress and the values.yaml to configure the deployment. To install nginx-ingress you will also need the nginx/nginx-ingress image with the tag 3.0.1.

To install nginx-ingress with the Helm plugin, call the plugin as shown in the example under install. The deployment configuration file is listed under values and the packed Helm chart is specified with the key tgz. Furthermore, it is also possible to specify the namespace in which the artifact should be deployed and the name of the deployment. The full documentation for the Helm plugin can be found here.

languageversion = "0.1.0";
apiversion = "kubernative/kubeops/sina/user/v4";
name = "deployExample1";
description = "It shows how to deploy an artifact to your cluster using the helm plugin.";
version = "0.1.0";
docs = "docs.tgz";
logo = "logo.png";

files =
{
        valuesFile = "values.yaml";
        nginxHelmChart="nginx-ingress-0.16.1.tgz";
}

containers =
{
        nginx = ["docker.io", "nginx/nginx-ingress", "3.0.1"];
}

install
{
        helm
        (
            command = "install";
            tgz = "nginx-ingress-0.16.1.tgz";
            values = "['values.yaml']";
            namespace = "dev";
            deploymentName = "nginx-ingress"
        );
}

Once the package.kosi file has been fully configured, all files must be combined into a KOSI package. To do this, execute the following command in the directory where the package.kosi file is located.

kosi build

To make the generated kosi package available on other machines, it is pushed to the user’s private KubeOps Hub. To do this, the user must first log in to the hub.

$ kosi login -u <username>
2023-02-04 11:19:43 Info:      KOSI version: 2.13.0_Alpha0
2023-02-04 11:19:43 Info:      Please enter password
****************
2023-02-04 11:19:26 Info:      Login Succeeded to Hub.
$ kosi push --hub kosi
2023-02-04 11:23:18 Info:      KOSI version: 2.13.0_Alpha0
2023-02-04 11:23:19 Info:      Push to Private Registry registry.preprod.kubeops.net/<username>/

Deployment

Once the KOSI package has been created and published, it needs to be installed on the Admin node. The following command will download and execute the package. The package name and version refer to the values defined in package.kosi with the keys name and version.

kosi install --hub <username> <username>/<packagename>:<version>

For the example package, the command would be: kosi install --hub <username> <username>/deployExample1:0.1.0.

Using the Kubectl-Plugin

Prerequisite

In order to install an artifact with the Kubectl plugin, the kubeops-kubernetes-plugins package must be installed on the admin node. This step is not covered in this guide.

Create KOSI package

First you need to create a KOSI package. The following command creates the necessary files in the current directory:

kosi create

The NGINX ingress controller YAML manifest can either be downloaded automatically and applied directly with kubectl apply, or it can be downloaded manually if you want to customize the deployment. The YAML manifest can be downloaded from the NGINX GitHub Repo and must be placed in the same directory as the files for the KOSI package.
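If you download the manifest manually, it could for example be fetched with curl (URL and provider variant are assumptions; pick the manifest that matches your environment):

curl -LO https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.5.1/deploy/static/provider/baremetal/deploy.yaml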

All files required by a task in the package must be named in the package.kosi file under files. The container images required by the YAML manifest must also be listed in the package.kosi under containers. In the example below, only one file is required for the installation: the YAML manifest for the nginx-ingress controller. To install nginx-ingress you will also need the registry.k8s.io/ingress-nginx/controller image with the tag v1.5.1 and the image registry.k8s.io/ingress-nginx/kube-webhook-certgen with tag v20220916-gd32f8c343.

To install nginx-ingress with the Kubectl plugin, call the plugin as shown in the example under installs. The full documentation for the Kubectl plugin can be found here.

languageversion = "0.1.0";
apiversion = "kubernative/kubeops/sina/user/v4";
name = "deployExample2";
description = "It shows how to deploy an artifact to your cluster using the helm plugin.";
version = "0.1.0";
docs = "docs.tgz";
logo = "logo.png";

files =
{
     manifest: "deploy.yaml"
}

containers =
{
    nginx = ["registry.k8s.io", "ingress-nginx/controller", "v1.5.1"];
    certgen= ["registry.k8s.io","ingress-nginx/kube-webhook-certgen","v20220916-gd32f8c343"];
}

install
{
    kubectl
    (
      operation="apply",
      flags="-f deploy.yaml";
      sudo = true;
      sudoPassword="toor"
    );
}

Once the package.kosi file has been fully configured, all files must be combined into a KOSI package. To do this, execute the following command in the directory where the package.kosi file is located.

kosi build

To make the generated KOSI package available on other machines, it is pushed to the user’s private KubeOps Hub. To do this, the user must first log in to the hub.

$ kosi login -u <username>
2023-02-04 11:19:43 Info:      kosi version: 2.13.0_Alpha0
2023-02-04 11:19:43 Info:      Please enter password
****************
2023-02-04 11:19:26 Info:      Login Succeeded to Hub.
$ kosi push --hub kosi
2023-02-04 11:23:18 Info:      kosi version: 2.13.0_Alpha0
2023-02-04 11:23:19 Info:      Push to Private Registry registry.preprod.kubeops.net/<username>/

Deployment

Once the KOSI package has been created and published, it needs to be installed on the Admin node. The following command will download and execute the package. The package name and version refer to the values defined in package.kosi with the keys name and version.

kosi install --hub <username> <username>/<packagename>:<version>

For the example package, the command would be: kosi install --hub <username> <username>/deployExample2:0.1.0.

3.14 - How to migrate from nginx to traefik ingress

Installation

KubeOps supports deploying Traefik as a dynamic ingress controller and reverse proxy. This guide describes a concise, safe migration from an existing nginx-ingress controller to Traefik: it explains how to install Traefik and then replace the deprecated nginx-ingress deployment, with the steps shown in order below.

Prerequisites

A running Kubernetes cluster with an existing nginx-ingress controller.

1. Create values file

A values.yaml file is required for the Traefik installation:

# values.yaml
packages:
- name: traefik
  enabled: true
  values:
    standard:
      namespace: traefik
      externalIPs: []
    advanced: {}

Note: Update external IPs and other values as per user requirement.

2. Install Traefik

Once the values.yaml file has been created, install Traefik:

# get your desired version/s
kosi search --hub kubeops --ps traefik
# install traefik
kosi install --hub kubeops kubeops/traefik:<desired_version> -f values.yaml --dname traefik
# example
kosi install --hub kubeops kubeops/traefik:2.1.0_Beta0 -f values.yaml --dname traefik

3. Verify Deployment

Check pods and services in the traefik namespace:

kubectl get pods -n traefik
kubectl get svc -n traefik

Note: Default NodePorts (e.g. 31080 / 31443) might not be reachable. If the default ports are not accessible, determine the ports used by ingress-nginx (e.g. 30080 / 30443) and update Traefik to use the same ports.
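The NodePorts currently used by the nginx-ingress controller can be looked up in its namespace (the namespace ingress below is the default used elsewhere in this documentation; adjust it to your installation):

kubectl get svc -n ingress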

4. Remove old nginx-ingress deployment and service

# get version of installed nginx-ingress and its deployment name (--dname)
kosi list
# delete old nginx-ingress
kosi delete --hub kubeops kubeops/ingress-nginx:<installed_version> -f enterprise-values.yaml --dname <kosi_deployment_name>

5. Edit Traefik service

If nginx used specific NodePorts and you require those same ports, edit the Traefik Service:

kubectl edit svc traefik -n traefik

Update the ports to match the previous nginx NodePorts if required:

ports:
- name: web
  nodePort: 30080
  port: 80
  targetPort: web
- name: websecure
  nodePort: 30443
  port: 443
  targetPort: websecure

6. Verify Port Change

kubectl get svc -n traefik

Note: Ensure the nginx Service is removed or its NodePorts are freed before reusing those NodePorts on the Traefik Service.

4 - Reference

In the reference you will find articles on the Kubeopsctl Commands, Fileformats, KubeOps Version and the Glossary

4.1 - KubeOps CLI Commands

KubeOps KubeOpsCtl CLI Commands

This documentation shows all commands of the kubeopsctl and how to use them.

General commands

Overview of all KUBEOPSCTL commands

Description:
  kubeopsctl is a kubernetes cluster manager

Usage:
  kubeopsctl [command] [options]

Options:
  --version       Show version information
  -?, -h, --help  Show help and usage information

Commands:
  version  kubeopsctl version information
  login    Login to kubeops hub and registry
  logout   Logout from kubeops hub
  pull     Pull kosi packages for kubernetes cluster setup and platform tools
  apply    Apply values on kubernetes cluster

Command ‘kubeopsctl --version’

The kubeopsctl --version command shows you the current version of kubeopsctl.

kubeopsctl --version

The output should be:

2.0.3

Command ‘kubeopsctl --help’

The command kubeopsctl --help gives you an overview of all available commands:

kubeopsctl --help

Alternatively, you can also enter kubeopsctl or kubeopsctl -? in the command line.

Command ‘kubeopsctl login’

The command kubeopsctl login performs a login against the KOSI HUB. A valid login session is necessary to pull the packages.

Description:
  Login to kubeops hub and registry

Usage:
  kubeopsctl login [options]

Options:
  -u, --username <username> (REQUIRED)  Username
  -p, --password <password>             Password
  -?, -h, --help                        Show help and usage information

Example:

kubeopsctl login -u <username> -p <password>

Command ‘kubeopsctl logout’

The command kubeopsctl logout performs a logout from the KOSI HUB.

Description:
  Logout from kubeops hub

Usage:
  kubeopsctl logout [options]

Options:
  -?, -h, --help  Show help and usage information

Example:

kubeopsctl logout

Command ‘kubeopsctl pull’

The command kubeopsctl pull downloads all necessary KOSI packages to the admin node:

Description:
  Pull kosi packages for kubernetes cluster setup and platform tools

Usage:
  kubeopsctl pull [options]

Options:
  -k, --kubernetesVersion <kubernetesVersion>  Kubernetes version
  -f, --tools <tools>                          Tools values file
  -?, -h, --help                               Show help and usage information

Example:

kubeopsctl pull

If you do not specify a parameter, the latest Kubernetes version supported by kubeopsctl will be pulled.
With the parameter --kubernetesVersion 1.30.8 you can pull an older Kubernetes version.

Example:

kubeopsctl pull --kubernetesVersion 1.30.8
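If a tools values file is used, it can be passed in addition with the --tools/-f option (the file name below is only an illustration):

kubeopsctl pull --kubernetesVersion 1.30.8 -f tools-values.yaml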

Command ‘kubeopsctl apply’

The command kubeopsctl apply is used to set up the kubeops platform with a configuration file.

Description:
  Apply values on kubernetes cluster

Usage:
  kubeopsctl apply [options]

Options:
  -f, --file <file> (REQUIRED)  Values files for cluster, tools and user
  -?, -h, --help                Show help and usage information

-f flag

The -f parameter is used to pass values YAML files.

Example:

kubeopsctl apply -f cluster-values.yaml

To install your KubeOps Compliance applications, you have to use a second values file, called enterprise-values.yaml.

Example:

kubeopsctl apply -f cluster-values.yaml -f enterprise-values.yaml

--delete flag

The --delete parameter is used to perform a delete action.

This flag deletes all nodes that are not present in the cluster-values.yaml file.

Example:

kubeopsctl apply --delete -f cluster-values.yaml

4.2 - Fileformats

Fileformats in kubeopsctl

This documentation shows you all the different kind of fileformats kubeopsctl uses and how to use them.

There are currently two different files that can be handled by kubeopsctl:

cluster-values.yaml

The cluster-values.yaml defines every aspect of the cluster itself. It has no influence over which applications get installed.

apiVersion: kubeops/kubeopsctl/beta/v1 # required
imagePullRegistry: registry.kubeops.net/kubeops/kubeops # required
airgap: true # optional, default: true
clusterName: myCluster # required 
clusterUser: root # optional, default: root
kubernetesVersion: 1.32.2 # required
kubeVipEnabled: false # optional, default: true
virtualIP: 10.2.10.110 # required
firewall: nftables # optional, default: nftables
pluginNetwork: calico # optional, default: calico | possible alternative: cilium
containerRuntime: containerd # optional, default: containerd
kubeOpsRoot: /var/kubeops # optional, default: /var/kubeops
serviceSubnet: 192.168.128.0/17 # optional, default: 192.168.128.0/17
podSubnet: 192.168.0.0/17 # optional, default: 192.168.0.0/17
debug: false # optional, default: false
systemCpu: 250m # optional, default: 250m
systemMemory: 256Mi # optional, default: 256Mi
packageRepository: local # optional, default: local
changeCluster: true # optional, default: true
zones: # required
- name: zone1 # required
  nodes: # required
  - name: master1 # required
    iPAddress: 10.2.10.110 # required
    type: controlplane # required
    kubeVersion: 1.32.2 # required

Detailed Parameter Information

Key Possible Values Additional Info
pluginNetwork calico, cilium

enterprise-values.yaml

The enterprise-values.yaml defines all enterprise applications currently available for you to install in your cluster via kubeopsctl.
You can append multiple of them into a single enterprise-values.yaml as shown in the first example.

For each application you have 2 ways to change its values:

  • the standard values
  • the advanced values

While the standard values only cover predefined keys, the advanced values let you change every key available in the Helm chart. Keep in mind that the standard values overwrite the advanced values if both are set (see the sketch below).
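As an illustrative sketch of this precedence rule (not a complete configuration): if the same key is set in both blocks, for example replicaCount for cert-manager, the standard value is the one that takes effect.

packages:
- name: cert-manager
  enabled: true
  values:
    standard:
      replicaCount: 3 # this value is applied
    advanced:
      replicaCount: 5 # valid chart key, but overwritten by the standard value above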

Each line marked as optional can be skipped unless otherwise stated. If an optional line is skipped, its default value will be used instead. If there is no default value, the line can simply be omitted and will not affect the cluster and/or the application.

apiVersion: kubeops/kubeopsctl/enterprise/beta/v1
deleteNs: false
localRegistry: false
packages:
- name: opa-gatekeeper
  enabled: true
  values:
    standard:
      namespace: opa-gatekeeper # optional, default is opa-gatekeeper
    advanced:
- name: filebeat-os
  enabled: false
  values:
    standard:
      namespace: logging # optional, default is logging
    advanced:
### Values for Rook-Ceph ###
### For detailed explanation for each key see: https://artifacthub.io/packages/helm/rook/rook-ceph?modal=values ###
apiVersion: kubeops/kubeopsctl/enterprise/beta/v1
deleteNs: false
localRegistry: false
packages:
- name: rook-ceph
  enabled: true
  values:
    standard:
      namespace: rook-ceph # optional, default is rook-ceph
      cluster:
        spec:
          dataDirHostPath: "/var/lib/rook" # optional, default is /var/lib/rook
        resources:
          mgr:
            requests:
              cpu: "500m" # optional, default is 500m, limit: 1000m
              memory: "512Mi" # optional, default is 1Gi, limit: 1Gi
          mon:
            requests:
              cpu: "1" # optional, default is 1, limit: 2000m
              memory: "1Gi" # optional, default is 1Gi, limit: 2Gi
          osd:
            requests:
              cpu: "1" # optional, default is 1, limit: 2
              memory: "1Gi" # optional, default is 4Gi, limit: 4Gi
      operator:
        data:
          rookLogLevel: "DEBUG" # optional, default is DEBUG
    advanced: 
      cluster: # All values from https://artifacthub.io/packages/helm/rook/rook-ceph-cluster?modal=values are overwritable
      operator: # All values from https://artifacthub.io/packages/helm/rook/rook-ceph?modal=values are overwritable
### Values for Harbor deployment ###
### For detailed explanation for each key see: https://artifacthub.io/packages/helm/harbor/harbor/1.8.1#configuration ###
apiVersion: kubeops/kubeopsctl/enterprise/beta/v1
deleteNs: false
localRegistry: false
packages:
- name: harbor
  enabled: true
  values:
    standard:
      namespace: harbor # optional, default is harbor
      harborpass: "password" # required: set password for harbor access
      databasePassword: "Postgres_Password" # required: set password for database access
      redisPassword: "Redis_Password" # required: set password for redis access
      externalURL: http://10.2.10.11:30002 # required, the ip address and port, from which harbor is accessable outside of the cluster
      nodePort: 30002 # required
      hostname: harbor.local # required
      harborPersistence:
        persistentVolumeClaim:
          registry:
            size: 40Gi # optional, default is 40Gi
            storageClass: "rook-cephfs" #optional, default is rook-cephfs
          jobservice:
            jobLog:
              size: 1Gi # optional, default is 1Gi
              storageClass: "rook-cephfs" #optional, default is rook-cephfs
          database:
            size: 1Gi # optional, default is 1Gi
            storageClass: "rook-cephfs" #optional, default is rook-cephfs
          redis:
            size: 1Gi # optional, default is 1Gi
            storageClass: "rook-cephfs" #optional, default is rook-cephfs
          trivy: 
            size: 5Gi # optional, default is 5Gi
            storageClass: "rook-cephfs" #optional, default is rook-cephfs
    advanced: #  All values from https://artifacthub.io/packages/helm/harbor/harbor/1.8.1#configuration are overwritable
### Values for filebeat deployment ###
apiVersion: kubeops/kubeopsctl/enterprise/beta/v1
deleteNs: false
localRegistry: false
packages:
- name: filebeat-os
  enabled: true
  values:
    standard:
      namespace: logging # optional, default is logging   
    advanced: # All values from https://artifacthub.io/packages/helm/elastic/filebeat?modal=values are overwritable
### Values for Logstash deployment ###
### For detailed explanation for each key see: https://github.com/elastic/helm-charts/releases/tag/v7.16.3 ###
apiVersion: kubeops/kubeopsctl/enterprise/beta/v1
deleteNs: false
localRegistry: false
packages:
- name: logstash-os
  enabled: true
  values:
    standard:
      namespace: logging # optional, default is logging
      volumeClaimTemplate:
        accessModes: 
          - ReadWriteMany #optional, default is [ReadWriteMany]
        resources:
          requests:
            storage: 1Gi # required, depending on storage capacity
        storageClass: "rook-cephfs" #optional, default is rook-cephfs
    advanced: # All values from https://artifacthub.io/packages/helm/elastic/logstash?modal=values are overwritable
    
### Values for OpenSearch-Dashboards deployment ###
### For detailed explanation for each key see: https://github.com/opensearch-project/helm-charts/tree/main/charts/opensearch-dashboards ###
apiVersion: kubeops/kubeopsctl/enterprise/beta/v1
deleteNs: false
localRegistry: false
packages:
- name: opensearch-dashboards
  enabled: true
  values:
    standard:
      namespace: logging # optional, default is logging
      nodePort: 30050
    advanced: # All values from https://artifacthub.io/packages/helm/opensearch-project-helm-charts/opensearch-dashboards?modal=values are overwritable
### Values for OpenSearch deployment ###
### For detailed explanation for each key see: https://github.com/opensearch-project/helm-charts/tree/main/charts/opensearch ###
apiVersion: kubeops/kubeopsctl/enterprise/beta/v1
deleteNs: false
localRegistry: false
packages:
- name: opensearch-os
  enabled: true
  values:
    standard:
      namespace: logging # optional, default is logging
      opensearchJavaOpts: "-Xmx512M -Xms512M" # optional, default is -Xmx512M -Xms512M
      resources:
        requests:
          cpu: "250m" # optional, default is 250m
          memory: "1024Mi" # optional, default is 1024Mi
        limits:
          cpu: "300m" # optional, default is 300m
          memory: "3072Mi" # optional, default is 3072Mi
      persistence:
        size: 4Gi # required
        enabled: "true" # optional, default is true
        enableInitChown: "false" # optional, default is false
        labels:
          enabled: "false" # optional, default is false
        storageClass: "rook-cephfs" # optional, default is rook-cephfs
        accessModes:
          - "ReadWriteMany" # optional, default is {ReadWriteMany}
      securityConfig:
        enabled: false # optional, default value: false
        ### Additional values can be set, if securityConfig is enabled:
        # path: "/usr/share/opensearch/plugins/opensearch-security/securityconfig"
        # actionGroupsSecret:
        # configSecret:
        # internalUsersSecret: internal-users-config-secret
        # rolesSecret:
        # rolesMappingSecret:
        # tenantsSecret:
        # config:
        #   securityConfigSecret: ""
        #   dataComplete: true
        #   data: {}
      replicas: "3" # optional, default is 3
    advanced: # All values from https://artifacthub.io/packages/helm/opensearch-project-helm-charts/opensearch?modal=values are overwritable
### Values for Prometheus deployment ###
apiVersion: kubeops/kubeopsctl/enterprise/beta/v1
deleteNs: false
localRegistry: false
packages:
- name: kube-prometheus-stack
  enabled: true
  values:
    standard:
      namespace: monitoring # optional, default is monitoring
      privateRegistry: false # optional, default is false
      grafanaUsername: "user" # optional, default is user
      grafanaPassword: "password" # optional, default is password
      grafanaResources:
        storageClass: "rook-cephfs" # optional, default is rook-cephfs
        storage: 5Gi # optional, default is 5Gi
        nodePort: 30211 # optional, default is 30211

      prometheusResources:
        storageClass: "rook-cephfs" # optional, default is rook-cephfs
        storage: 25Gi # optional, default is 25Gi
        retention: 10d # optional, default is 10d
        retentionSize: "24GB" # optional, default is 24GB
        nodePort: 32090
    advanced: # All values from https://artifacthub.io/packages/helm/prometheus-community/prometheus?modal=values-schema are overwritable
### Values for OPA deployment ###
apiVersion: kubeops/kubeopsctl/enterprise/beta/v1
deleteNs: false
localRegistry: false
packages:
- name: opa-gatekeeper
  enabled: true
  values:
    standard:
      namespace: gatekeeper-system # optional, default is gatekeeper-system
    advanced: # All values from https://artifacthub.io/packages/helm/gatekeeper/gatekeeper/3.1.1?modal=values are overwritable
### Values for KubeOps-Dashboard (Headlamp) deployment ###
apiVersion: kubeops/kubeopsctl/enterprise/beta/v1
deleteNs: false
localRegistry: false
packages:
- name: kubeops-dashboard
  enabled: true
  values:
    standard:
      namespace: monitoring # optional, default is monitoring
      service:
        nodePort: 30007
    advanced: # All values from https://artifacthub.io/packages/helm/headlamp/headlamp?modal=values are overwritable
### Values for cert-manager deployment ###
apiVersion: kubeops/kubeopsctl/enterprise/beta/v1
deleteNs: false
localRegistry: false
packages:
- name: cert-manager
  enabled: true
  values:
    standard:
      namespace: cert-manager # optional, default is cert-manager
      replicaCount: 3
      logLevel: 2
      secretName: root-secret
    advanced: # All values from https://artifacthub.io/packages/helm/cert-manager/cert-manager?modal=values are overwritable
    ## add helm values here
    # override email in the LetsEncrypt ClusterIssuer
    # emailLetsEncrypt: <your_email@domain.com> # default: example@example.com --> must configure
    # ingressName: <ingress_name> # default: nginx --> must update
### Values for ingress-nginx deployment ###
apiVersion: kubeops/kubeopsctl/enterprise/beta/v1
deleteNs: false
localRegistry: false
packages:
- name: ingress-nginx
  enabled: true
  values:
    standard:
      namespace: ingress # optional, default is ingress
    advanced: # All values from https://artifacthub.io/packages/helm/ingress-nginx/ingress-nginx?modal=values are overwritable
### Values for keycloak deployment ###
apiVersion: kubeops/kubeopsctl/enterprise/beta/v1
deleteNs: false
localRegistry: false
packages:
- name: keycloak
  enabled: true
  values:
    standard:
      namespace: keycloak # Optional, default is "keycloak"
      storageClass: "rook-cephfs" # Optional, default is "rook-cephfs"
      keycloak:
        auth:
          adminUser: admin # Optional, default is admin
          adminPassword: admin # Optional, default is admin
          existingSecret: "" # Optional, default is ""
      postgresql:
        auth:
          postgresPassword: "" # Optional, default is ""
          username: bn_keycloak # Optional, default is "bn_keycloak"
          password: "" # Optional, default is ""
          database: bitnami_keycloak # Optional, default is "bitnami_keycloak"
          existingSecret: "" # Optional, default is ""
    advanced: # All values from https://artifacthub.io/packages/helm/bitnami/keycloak?modal=values are overwritable
### Values for velero deployment ###
apiVersion: kubeops/kubeopsctl/enterprise/beta/v1
deleteNs: false
localRegistry: false
packages:
- name: velero
  enabled: true
  values:
    standard:
      namespace: velero # Optional, default is "velero"
      accessKeyId: "your_s3_storage_username"
      secretAccessKey: "your_s3_storage_password"
      useNodeAgent: false
      defaultVolumesToFsBackup: false
      provider: "aws"
      bucket: "velero"
      useVolumeSnapshots: false
      backupLocationConfig:
        region: "minio"
        s3ForcePathStyle: true
        s3Url: "http://minio.velero.svc:9000"
    advanced: # All values from https://artifacthub.io/packages/helm/vmware-tanzu/velero?modal=values are overwritable
apiVersion: kubeops/kubeopsctl/enterprise/beta/v1
deleteNs: false
localRegistry: false
packages:
- name: rook-ceph
  enabled: true
  values:
  	standard:
  		namespace: rook-ceph
  		cluster:
  			resources:
  				mgr:
  					requests:
  						cpu: "500m"
  						memory: "512Mi"
  				mon:
  					requests:
  						cpu: "1"
  						memory: "1Gi"
  				osd:
  					requests:
  						cpu: "1"
  						memory: "1Gi"
  			dashboard:
  				enabled: true
  		operator:
  			data:
  				rookLogLevel: "DEBUG"
- name: harbor
  enabled: true
  values:
  	standard:
  		namespace: harbor
  		harborpass: "topsecret"
  		databasePassword: "topsecret"
  		redisPassword: "topsecret"
  		externalURL: http://10.2.10.110:30002
  		nodePort: 30002
  		hostname: harbor.local
  		harborPersistence:
  			persistentVolumeClaim:
  				registry:
  					size: 40Gi
  					storageClass: "rook-cephfs"
  				jobservice:
  					jobLog:
  						size: 1Gi
  						storageClass: "rook-cephfs"
  				database:
  					size: 1Gi
  					storageClass: "rook-cephfs"
  				redis:
  					size: 1Gi
  					storageClass: "rook-cephfs"
  				trivy: 
  					size: 5Gi
  					storageClass: "rook-cephfs"
  	advanced:
- name: filebeat-os
  enabled: true
  values:
  	standard:
  		namespace: logging
  	advanced:
- name: logstash-os
  enabled: true
  values:
  	standard:
  		namespace: logging
  		volumeClaimTemplate:
  			accessModes: 
  				- ReadWriteMany #optional, default is [ReadWriteMany]
  			resources:
  				requests:
  					storage: 1Gi # required, depending on storage capacity
  			storageClass: "rook-cephfs" #optional, default is rook-cephfs
  	advanced:
- name: opensearch-dashboards
  enabled: true
  values:
  	standard:
  		namespace: logging
  		nodePort: 30050
  	advanced:
- name: opensearch-os
  enabled: true
  values:
  	standard:
  		namespace: logging
  		opensearchJavaOpts: "-Xmx512M -Xms512M" # optional, default is -Xmx512M -Xms512M
  		resources:
  			requests:
  				cpu: "250m" # optional, default is 250m
  				memory: "1024Mi" # optional, default is 1024Mi
  			limits:
  				cpu: "300m" # optional, default is 300m
  				memory: "3072Mi" # optional, default is 3072Mi
  		persistence:
  			size: 4Gi # required
  			enabled: "true" # optional, default is true
  			enableInitChown: "false" # optional, default is false
  			labels:
  				enabled: "false" # optional, default is false
  			storageClass: "rook-cephfs" # optional, default is rook-cephfs
  			accessModes:
  				- "ReadWriteMany" # optional, default is {ReadWriteMany}
  		securityConfig:
  			enabled: false # optional, default value: false
  			### Additional values can be set, if securityConfig is enabled:
  			# path: "/usr/share/opensearch/plugins/opensearch-security/securityconfig"
  			# actionGroupsSecret:
  			# configSecret:
  			# internalUsersSecret: internal-users-config-secret
  			# rolesSecret:
  			# rolesMappingSecret:
  			# tenantsSecret:
  			# config:
  			#   securityConfigSecret: ""
  			#   dataComplete: true
  			#   data: {}
  		replicas: "3" # optional, default is 3
  	advanced:
- name: kube-prometheus-stack
  enabled: true
  values:
  	standard:
  		namespace: kubeops # optional, default is kubeops
  		privateRegistry: false # optional, default is false
  		grafanaUsername: "user" # optional, default is user
  		grafanaPassword: "password" # optional, default is password
  		grafanaResources:
  			storageClass: "rook-cephfs" # optional, default is rook-cephfs
  			storage: 5Gi # optional, default is 5Gi
  			nodePort: 30211 # optional, default is 30211

  		prometheusResources:
  			storageClass: "rook-cephfs" # optional, default is rook-cephfs
  			storage: 25Gi # optional, default is 25Gi
  			retention: 10d # optional, default is 10d
  			retentionSize: "24GB" # optional, default is 24GB
  			nodePort: 32090
  	advanced:
- name: opa-gatekeeper
  enabled: true
  values:
  	standard:
  		namespace: kubeops
  	advanced:
- name: kubeops-dashboard
  enabled: true
  values:
  	standard:
  		service:
  			nodePort: 30007
  	advanced:
- name: cert-manager
  enabled: true
  values:
  	standard:
  		namespace: kubeops
  		replicaCount: 3
  		logLevel: 2
  		secretName: root-secret
  	advanced: # override email in the LetsEncrypt ClusterIssuer
    # emailLetsEncrypt: <your_email@domain.com> # default: example@example.com --> must configure
    # ingressName: <ingress_name> # default: nginx --> must update
- name: ingress-nginx
  enabled: true
  values:
  	standard:
  		namespace: kubeops
  	advanced:
- name: keycloak
  enabled: true
  values:
  	standard:
  		namespace: "kubeops" # Optional, default is "keycloak"
  		storageClass: "rook-cephfs" # Optional, default is "rook-cephfs"
  		keycloak:
  			auth:
  				adminUser: admin # Optional, default is admin
  				adminPassword: admin # Optional, default is admin
  				existingSecret: "" # Optional, default is ""
  		postgresql:
  			auth:
  				postgresPassword: "" # Optional, default is ""
  				username: bn_keycloak # Optional, default is "bn_keycloak"
  				password: "" # Optional, default is ""
  				database: bitnami_keycloak # Optional, default is "bitnami_keycloak"
  				existingSecret: "" # Optional, default is ""
  	advanced:
- name: velero
  enabled: true
  values:
  	standard:
  		namespace: "velero"
  		accessKeyId: "your_s3_storage_username"
  		secretAccessKey: "your_s3_storage_password"
  		useNodeAgent: false
  		defaultVolumesToFsBackup: false
  		provider: "aws"
  		bucket: "velero"
  		useVolumeSnapshots: false
  		backupLocationConfig:
  			region: "minio"
  			s3ForcePathStyle: true
  			s3Url: "http://minio.velero.svc:9000"
  	advanced:

4.3 - Supported Maintenance Packages

This guide provides an overview of maintenance packages for KubeOps clusters. It covers various Kubernetes tools, dependencies, and Container Runtime Interface (CRI) packages to set up and maintain your cluster. Ensure compatibility between versions to successfully deploy your first Kubernetes environment.

Supported Maintenance Packages

KubeOps provides packages for the supported Kubernetes tools. These maintenance packages help you update the Kubernetes tools on your clusters to the desired versions, along with their dependencies.

It is necessary to install the required maintenance packages to create your first Kubernetes cluster. The packages are available on the KubeOps Hub.

So let’s get started!

List of Maintenance Packages

1. Kubernetes

The first step is to choose a Kubernetes version and to pull its available package. KubeOps Compliance 2.0 currently supports the following Kubernetes versions:

Version Deprecation date Supported OS Available versions
1.32.x TBD Red Hat 9 1.32.0, 1.32.2, 1.32.3, 1.32.7, 1.32.9, 1.32.10
1.33.x 2026-06-28 Red Hat 9 1.33.3, 1.33.5
1.34.x 2026-10-27 Red Hat 9 1.34.1

Following are the packages available for the supported Kubernetes versions.

Kubernetes version Available packages
1.32.x kubernetes-1.32.x
1.33.x kubernetes-1.33.x
1.34.x kubernetes-1.34.x
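The available maintenance packages and their versions can be looked up on the hub, for example with the kosi search command shown earlier in this documentation (the search pattern below is only an illustration):

kosi search --hub kubeops --ps kubernetes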

4.4 - Glossary

Glossary


KOSI package

KOSI package is the .tgz file packaged by bundling package.kosi and other essential yaml files and artifacts. This package is ready to install on your Kubernetes Clusters.

KubeOps Hub

KubeOps Hub is a secure repository where published KOSI packages can be stored and shared. You are welcome to contribute to and use the public hub; at the same time, KubeOps provides a way to access your own private hub.

Installation Address

It is the distinctive address automatically generated for each published package on KubeOps Hub. It is constructed from the name of the package creator, the package name, and the package version.
You can use this address at the time of package installation on your Kubernetes Cluster.

It is indicated by the install column in KubeOps Hub.

Deployment name

When a package is installed, KOSI creates a deployment name to track that installation. Alternatively, KOSI also lets you specify the deployment name of your choice during the installation.
A single package may be installed many times into the same cluster and create multiple deployments.
It is indicated by the Deployment column in the list of package deployments.

Tasks

As the name suggests, “Tasks” in package.yaml are one or more sets of instructions to be executed. These are defined by utilizing Plugins.

Plugins

KOSI provides many functions which enable you to define tasks to be executed using your package. These are called Plugins. They are the crucial part of your package development.

KUBEOPSROOT Variable

The environment variable KUBEOPSROOT stores the location of the KOSI plugins and the config.yaml. To use the variable, the config.yaml and the plugins have to be copied manually.
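A minimal sketch of how this could look, assuming the plugins and the config.yaml are available in the current directory and a custom target location is chosen (all paths are placeholders):

export KUBEOPSROOT="$HOME/kubeops"
mkdir -p "$KUBEOPSROOT"
cp config.yaml "$KUBEOPSROOT/"
cp -r plugins "$KUBEOPSROOT/"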

apiVersion

It shows the supported KubeOps tool API version. You do not need to change it unless otherwise specified.

Registry

As the name suggests, it is the location where docker images can be stored. You can either use the default KubeOps registry or specify your own local registry for AirGap environments. You need an internet connection to use the default registry provided by KubeOps.

Maintenance Package

KubeOps provides a package for the supported Kubernetes tools. These packages help you update the Kubernetes tools to the desired versions on your clusters along with the dependencies.

Cluster

In computing, a cluster refers to a group of interconnected computers or servers that work together as a single system.

These machines, or nodes, are typically networked and collaborate to execute tasks or provide services. Clusters are commonly used in various fields such as distributed computing, high-performance computing, and cloud computing to improve reliability, scalability, and performance. In the context of technologies like Kubernetes, a cluster consists of multiple nodes managed collectively to deploy, manage, and scale containerized applications.

Container

A container is a lightweight, standalone package that includes everything needed to run a piece of software, including the code, runtime, libraries, and dependencies.

Containers are isolated from each other and from the underlying infrastructure, providing consistency and portability across different environments. Kubernetes manages containers, orchestrating their deployment, scaling, and management across a cluster of nodes. Containers are often used to encapsulate microservices or individual components of an application, allowing for efficient resource utilization and simplified deployment processes.

Drain-node

A Drain Node is a feature in distributed systems, especially prevalent in Kubernetes, used for gracefully removing a node from a cluster.

It allows the system to evict all existing workload from the node and prevent new workload assignments before shutting it down, ensuring minimal disruption to operations.

Kube-proxy

Kube-Proxy, short for Kubernetes Proxy, is a network proxy that runs on each node in a Kubernetes cluster. Its primary responsibility is to manage network connectivity for Kubernetes services. Its main tasks include service proxying and load balancing.

Kubelet

Kubelet is a crucial component of Kubernetes responsible for managing individual nodes in a cluster. It ensures that containers are running in pods as expected, maintaining their health and performance.

Kubelet communicates with the Kubernetes API server to receive instructions about which pods should be scheduled and executed on its node. It also monitors the state of these pods, reporting any issues back to the API server. Kubelet plays a vital role in the orchestration and management of containerized workloads within a Kubernetes cluster.

Node

A Kubernetes node oversees and executes pods.

It serves as the operational unit (virtual or physical machine) for executing assigned tasks. Similar to how pods bring together multiple containers to collaborate, a node gathers complete pods to work in unison. In large-scale operations, the goal is to delegate tasks to nodes with available pods ready to handle them.

Pod

In Kubernetes, a pod groups containers and is the smallest unit managed by the system.

Each pod shares an IP address among its containers and resources like memory and storage. This allows treating the containers as a single application, similar to traditional setups where processes run together on one host. Often, a pod contains just one container for simple tasks, but for more complex operations requiring collaboration among multiple processes with shared data, multi-container pods simplify deployment.

For example, in an image-processing service creating JPEGs, one pod might have containers for resizing images and managing background tasks or data cleanup, all working together.

Registry

Helm registry serves as a centralized repository for Helm charts, facilitating the discovery, distribution, and installation of Kubernetes applications and services.

It allows users to easily find, share, and consume pre-packaged Kubernetes resources, streamlining the deployment process in Kubernetes environments.

Zone

A “zone” typically refers to a subset of the overall cluster that shares certain characteristics, such as geographic location or hardware specifications. Zoning helps distribute resources strategically and can enhance fault tolerance by ensuring redundancy within distinct zones.