How to Guides

Welcome to our comprehensive How-To Guide for using kubeops. Whether you're a beginner aiming to understand the basics or an experienced user looking to fine-tune your skills, this guide is designed to provide you with detailed step-by-step instructions on how to navigate and utilize all the features of kubeops effectively.

In the following sections, you will find everything from initial setup and configuration, to advanced tips and tricks that will help you get the most out of the software. Our aim is to assist you in becoming proficient with kubeops, enhancing both your productivity and your user experience.

Let's get started on your journey to mastering kubeops!

1 - Join Node to a Kubernetes Cluster

This guide outlines the steps to join nodes to a cluster.

Joining a Node in a Kubernetes cluster

To increase performance or add resource capacity, adding a node to the cluster is the correct process. With kubeopsctl, this process is straightforward.
You can use the following steps to join control-plane nodes or worker nodes to a Kubernetes cluster.

Join Node Process:

Prerequisites

KOSI Login Recommendation

Before performing any action with kubeopsctl, it is recommended to perform a login with KOSI. Refer to the official KOSI documentation for details.

ETCD Backup Recommendation

Before performing changes on the control planes, it is recommended to create an ETCD backup. Refer to the official Kubernetes documentation for details.

Example 1: Joining a Control-Plane Node to a Kubernetes Cluster

1. Pull required KOSI packages on your ADMIN

If you do not specify a parameter, the current Kubernetes version 1.32.2 is pulled.
With the parameter --kubernetesVersion 1.34.1 you can pull a specific Kubernetes version.
Available Kubernetes versions are 1.32.2, 1.32.3, 1.32.9, 1.32.10, 1.33.3, 1.33.5, and 1.34.1.

kubeopsctl pull

2. Add your node definition/specifications in the cluster-values

  - name: demo-controlplaneXX
    iPAddress: 10.2.10.XXX
    type: controlplane
    kubeVersion: 1.31.6 

3. Adjust your cluster-values in zone1
Adjust your cluster-values as in the example below. Be sure to set the current version at the top level of your values, as well as the target version on the nodes. In the snippet below, the new node is added in zone1.

# file cluster-values.yaml
apiVersion: kubeops/kubeopsctl/cluster/beta/v1
imagePullRegistry: registry.kubeops.net/kubeops/kubeops
airgap: false
clusterName: myCluster
clusterUser: root
kubernetesVersion: 1.31.6       # -> actual version
kubeVipEnabled: false           
virtualIP: 10.2.10.110
firewall: nftables
pluginNetwork: calico
containerRuntime: containerd
kubeOpsRoot: /home/myuser/kubeops
serviceSubnet: 192.168.128.0/17
podSubnet: 192.168.0.0/17
debug: true
systemCpu: 250m
systemMemory: 256Mi
packageRepository: https://packagerepo.kubeops.net/
changeCluster: true             # -> has to be set
zones:
- name: zone1
  nodes:
  - name: demo-controlplane01
    iPAddress: 10.2.10.110
    type: controlplane
    kubeVersion: 1.31.6       
  - name: demo-controlplaneXX   # -> has to be changed
    iPAddress: 10.2.10.XXX      # -> has to be changed
    type: controlplane
    kubeVersion: 1.31.6         # -> check with actual version
  - name: demo-worker01
    iPAddress: 10.2.10.210
    type: worker
    kubeVersion: 1.31.6       
- name: zone2
  nodes:
  - name: demo-controlplane02
    iPAddress: 10.2.10.120
    type: controlplane
    kubeVersion: 1.31.6       
  - name: demo-worker02
    iPAddress: 10.2.10.220
    type: worker
    kubeVersion: 1.31.6       
- name: zone3
  nodes:
  - name: demo-controlplane03
    iPAddress: 10.2.10.130
    type: controlplane
    kubeVersion: 1.31.6      
  - name: demo-worker03
    iPAddress: 10.2.10.230
    type: worker
    kubeVersion: 1.31.6      

4. Validate your values and join the node to the cluster

Once the cluster-values.yaml is prepared, check the values once again. When you are ready, start the join process with the command:

kubeopsctl apply -f cluster-values.yaml
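Before applying, a quick structural sanity check can catch common mistakes such as a missing changeCluster flag or duplicated node names. This is a hypothetical helper, not part of kubeopsctl; the file content below is a stand-in for your real cluster-values.yaml:

```shell
# Stand-in cluster-values.yaml for illustration
cat > /tmp/cluster-values.yaml <<'EOF'
changeCluster: true
zones:
- name: zone1
  nodes:
  - name: demo-controlplane01
  - name: demo-controlplane04
EOF

# changeCluster must be set for join operations
grep -q '^changeCluster: true' /tmp/cluster-values.yaml && echo "changeCluster is set"

# duplicate node names would break the join
dupes=$(awk '/- name: demo-/{print $3}' /tmp/cluster-values.yaml | sort | uniq -d)
[ -z "$dupes" ] && echo "node names are unique"
```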

Example 2: Joining a Worker Node to a Kubernetes Cluster

1. Pull required KOSI packages on your ADMIN

If you do not specify a parameter, the current Kubernetes version 1.32.2 is pulled.
With the parameter --kubernetesVersion x.xx.x you can pull other Kubernetes versions.
Available Kubernetes versions are 1.32.2, 1.32.3, 1.32.9, 1.32.10, 1.33.3, 1.33.5, and 1.34.1.

kubeopsctl pull

2. Add your node definition/specifications in the cluster-values

  - name: demo-workerXX
    iPAddress: 10.2.10.XX
    type: worker
    kubeVersion: 1.31.6      

3. Adjust your cluster-values in zone2
Adjust your cluster-values as in the example below. Be sure to set the current version at the top level of your values, as well as the target version on the nodes. In the snippet below, the new node is added in zone2.

# file cluster-values.yaml
apiVersion: kubeops/kubeopsctl/cluster/beta/v1
imagePullRegistry: registry.kubeops.net/kubeops/kubeops
airgap: false
clusterName: myCluster
clusterUser: root
kubernetesVersion: 1.31.6        # -> actual version
kubeVipEnabled: false           
virtualIP: 10.2.10.110
firewall: nftables
pluginNetwork: calico
containerRuntime: containerd
kubeOpsRoot: /home/myuser/kubeops
serviceSubnet: 192.168.128.0/17
podSubnet: 192.168.0.0/17
debug: true
systemCpu: 250m
systemMemory: 256Mi
packageRepository: https://packagerepo.kubeops.net/
changeCluster: true              # -> has to be set
zones:
- name: zone1
  nodes:
  - name: demo-controlplane01
    iPAddress: 10.2.10.110
    type: controlplane
    kubeVersion: 1.31.6       
  - name: demo-worker01
    iPAddress: 10.2.10.210
    type: worker
    kubeVersion: 1.31.6       
- name: zone2
  nodes:
  - name: demo-controlplane02
    iPAddress: 10.2.10.120
    type: controlplane
    kubeVersion: 1.31.6       
  - name: demo-worker02
    iPAddress: 10.2.10.220
    type: worker
    kubeVersion: 1.31.6       
  - name: demo-workerXX          # -> has to be changed
    iPAddress: 10.2.10.XX        # -> has to be changed
    type: worker
    kubeVersion: 1.31.6          # -> check with actual version     
- name: zone3
  nodes:
  - name: demo-controlplane03
    iPAddress: 10.2.10.130
    type: controlplane
    kubeVersion: 1.31.6      
  - name: demo-worker03
    iPAddress: 10.2.10.230
    type: worker
    kubeVersion: 1.31.6      

4. Validate your values and join the node to the cluster

Once the cluster-values.yaml is prepared, check the values once again. When you are ready, start the join process with the command:

kubeopsctl apply -f cluster-values.yaml

2 - Delete Worker-Node from a Kubernetes Cluster

This guide outlines the steps to delete worker nodes from a cluster, specifically how to proceed with rook-ceph and other KubeOps Compliance applications.

Deleting a Node from a Kubernetes cluster

In rare cases, it may be necessary to remove nodes from a Kubernetes cluster. This how-to guide explains the prerequisites and the key considerations to keep in mind before starting the node removal process.

You can use the following steps to delete nodes from a Kubernetes cluster.

Prerequisites

  • To run rook-ceph stably over a longer period, your cluster needs at least 3 zones, with each zone containing at least 1 worker node

  • To check which MON and OSD are running on the node you want to delete, use the command kubectl get po -nrook-ceph -owide | grep worker02 | grep "mon\|osd" | grep -v "osd-prepare" | awk '{print $1}'. The output lists the MON and the OSD running on that node. If there is no output, you do not have to delete any resources and can skip to the "Delete the node" section
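The filter chain above can be tried against canned pod listings; the sample lines below are made up for illustration and abridged, but the pipeline is the one from the prerequisite:

```shell
# Simulated `kubectl get po -nrook-ceph -owide` output (illustrative only)
sample='rook-ceph-mon-c-6f8b96f7b5-xk2lp        1/1  Running    0  2d  192.168.1.10  worker02
rook-ceph-osd-1-7c9dd85fb8-q4zrt        1/1  Running    0  2d  192.168.1.11  worker02
rook-ceph-osd-prepare-worker02-abcde    0/1  Completed  0  2d  192.168.1.12  worker02
rook-ceph-osd-0-5b8c7d9f4c-m2nwp        1/1  Running    0  2d  192.168.1.13  worker01'

# keep worker02 pods, keep mon/osd, drop the osd-prepare job, print pod names
echo "$sample" | grep worker02 | grep "mon\|osd" | grep -v "osd-prepare" | awk '{print $1}'
```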


Worker

Important: Due to rook-ceph, a worker node must not be removed without following the steps below. In this example, worker01 (zone1) is removed from the cluster. Worker01 contains osd.0 and mon-c.

Scale down the rook-ceph-operator deployment to 0

This prevents new MONs or OSDs from being created.

kubectl scale deploy rook-ceph-operator -n rook-ceph --replicas=0

Check which hosts and OSDs belong to each zone

kubectl exec -it deploy/rook-ceph-tools -n rook-ceph -- ceph osd tree
ID   CLASS  WEIGHT   TYPE NAME              STATUS  REWEIGHT  PRI-AFF
 -1         0.21478  root default
 -9         0.04880      zone zone1
 -7         0.04880          host worker01                              # worker01 is being removed
  0    ssd  0.04880              osd.0          up   1.00000  1.00000   # osd.0 is being removed
-15         0.04880          host worker04
  3    ssd  0.04880              osd.3          up   1.00000  1.00000
-11         0.10739      zone zone2
 -3         0.05859          host worker02
  1    ssd  0.05859              osd.1          up   0.95001  1.00000
-13         0.05859      zone zone3
 -5         0.05859          host worker03
  2    ssd  0.05859              osd.2          up   0.95001  1.00000

From this output you can see that osd.0 is part of worker01.
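The host-to-OSD mapping can also be extracted mechanically from the tree output. A small awk sketch (the sample lines are abridged from the output above):

```shell
# Abridged `ceph osd tree` lines: host rows followed by their OSD rows
tree=' -7  0.04880  host worker01
  0  ssd  0.04880  osd.0  up  1.00000  1.00000
 -3  0.05859  host worker02
  1  ssd  0.05859  osd.1  up  0.95001  1.00000'

# remember the last seen host; print it when the wanted OSD appears
echo "$tree" | awk '/host/{h=$NF} $4 == "osd.0" {print h}'
```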

Scale down the OSD deployment

kubectl scale deploy -n rook-ceph rook-ceph-osd-<x> --replicas=0
# Example: kubectl scale deploy -n rook-ceph rook-ceph-osd-0 --replicas=0

Remove the OSD via ceph-tools

kubectl exec -it deploy/rook-ceph-tools -n rook-ceph -- bash
# show OSD tree
ceph osd tree
# mark OSD out
ceph osd out <x>
# Example: ceph osd out 0
ceph osd purge <x> --yes-i-really-mean-it
# Example: ceph osd purge 0 --yes-i-really-mean-it
ceph auth del osd.<x>
# adjust CRUSH map
ceph osd crush remove <nodename>
# exit from ceph-tools
exit
# show OSD tree (now without the deleted node)
kubectl exec -it deploy/rook-ceph-tools -n rook-ceph -- ceph osd tree
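The sequence above can be captured in a dry-run helper that prints the ceph commands for a given OSD id and node name instead of executing them, so you can review the plan before running it inside ceph-tools. The function name is hypothetical:

```shell
# Dry-run sketch: prints the removal commands rather than running them
osd_removal_plan() {
  osd_id="$1"; node="$2"
  echo "ceph osd out ${osd_id}"
  echo "ceph osd purge ${osd_id} --yes-i-really-mean-it"
  echo "ceph auth del osd.${osd_id}"
  echo "ceph osd crush remove ${node}"
}

# Review the plan for osd.0 on worker01
osd_removal_plan 0 worker01
```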

Delete OSD and MON deployments

kubectl delete deploy -n rook-ceph rook-ceph-osd-<x> rook-ceph-mon-<y>
# Example:
kubectl delete deploy -n rook-ceph rook-ceph-osd-0 rook-ceph-mon-c

Remove the deleted MON via ceph-tools

kubectl exec -it deploy/rook-ceph-tools -n rook-ceph -- ceph mon dump
kubectl exec -it deploy/rook-ceph-tools -n rook-ceph -- ceph mon rm <y>
# Example: kubectl exec -it deploy/rook-ceph-tools -n rook-ceph -- ceph mon rm c
# verify
kubectl exec -it deploy/rook-ceph-tools -n rook-ceph -- ceph mon dump

This is the dump before executing the remove:

0: [v2:192.168.231.184:3300/0,v1:192.168.231.184:6789/0] mon.a
1: [v2:192.168.185.9:3300/0,v1:192.168.185.9:6789/0] mon.b
2: [v2:192.168.196.110:3300/0,v1:192.168.196.110:6789/0] mon.c

This is the dump after executing the remove:

0: [v2:192.168.231.184:3300/0,v1:192.168.231.184:6789/0] mon.a
1: [v2:192.168.185.9:3300/0,v1:192.168.185.9:6789/0] mon.b

Delete the node from the Kubernetes cluster

  • Prepare your cluster-values.yaml so that the node you want to delete is removed from it
  • Execute the command kubeopsctl apply --delete -f cluster-values.yaml
Example

The cluster-values.yaml without worker01 but with worker04:

# file cluster-values.yaml
apiVersion: kubeops/kubeopsctl/cluster/beta/v1
imagePullRegistry: registry.kubeops.net/kubeops/kubeops
airgap: true
clusterName: myCluster
clusterUser: root
kubernetesVersion: 1.31.6      
kubeVipEnabled: false
virtualIP: 10.2.10.110
firewall: nftables
pluginNetwork: calico
containerRuntime: containerd
kubeOpsRoot: /home/myuser/kubeops
serviceSubnet: 192.168.128.0/17
podSubnet: 192.168.0.0/17
debug: true
systemCpu: 250m
systemMemory: 256Mi
packageRepository: local
changeCluster: true
zones:
- name: zone1
  nodes:
  - name: controlplane01
    iPAddress: 10.2.10.110
    type: controlplane
    kubeVersion: 1.31.6       
  - name: worker04
    iPAddress: 10.2.10.214
    type: worker
    kubeVersion: 1.31.6       
- name: zone2
  nodes:
  - name: controlplane02
    iPAddress: 10.2.10.120
    type: controlplane
    kubeVersion: 1.31.6       
  - name: worker02
    iPAddress: 10.2.10.220
    type: worker
    kubeVersion: 1.31.6       
- name: zone3
  nodes:
  - name: controlplane03
    iPAddress: 10.2.10.130
    type: controlplane
    kubeVersion: 1.31.6  
  - name: worker03
    iPAddress: 10.2.10.230
    type: worker
    kubeVersion: 1.31.6     

Afterwards, execute the command kubeopsctl apply --delete -f cluster-values.yaml

Scale the rook-ceph-operator deployment back to 1

This allows a new MON to be created automatically in zone2.

kubectl scale deploy rook-ceph-operator -n rook-ceph --replicas=1

Timing and health checks

The total duration depends on cluster size and node performance. Before proceeding, verify Ceph health and placement groups are clean.

kubectl exec -it deploy/rook-ceph-tools -n rook-ceph -- ceph status
kubectl exec -it deploy/rook-ceph-tools -n rook-ceph -- ceph pg stat

Typical duration ranges from 15 to 120 minutes.
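The health checks above can be wrapped in a generic polling helper that repeats a command until its output contains an expected string. This is a sketch; the stub function below stands in for the real ceph status command, which you would substitute in a live cluster:

```shell
# Generic wait helper: polls a command until output matches, with a retry cap
wait_for() {
  expect="$1"; shift
  tries=0
  until "$@" 2>/dev/null | grep -q "$expect"; do
    tries=$((tries + 1))
    [ "$tries" -ge 60 ] && return 1
    sleep 1
  done
}

# Stub for illustration; in a real cluster use:
#   kubectl exec -it deploy/rook-ceph-tools -n rook-ceph -- ceph status
fake_ceph_status() { echo "health: HEALTH_OK"; }

wait_for "HEALTH_OK" fake_ceph_status && echo "cluster healthy"
```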

If you want to rejoin the same node, reset it to a state prior to joining the cluster. Only this way can you be sure that no leftovers from the deletion process remain!

3 - Single Sign-On with Keycloak

Learn how to configure Keycloak for Single Sign-On, securely expose it using Kubernetes Ingress and TLS, and integrate it with kubeops and other Kubernetes applications.

In this guide, you will learn how to implement Single Sign-On (SSO) using Keycloak. We will walk through the complete flow: from understanding SSO for platforms and services such as Rook Ceph, Harbor, and other Kubernetes applications, to configuring Keycloak, exposing it securely, and integrating it with kubeops.

By the end of this guide, you will be able to:

  • Understand how Keycloak enables centralized authentication
  • Configure Keycloak for SSO
  • Securely expose Keycloak using Kubernetes Ingress and TLS
  • Integrate Keycloak with kubeops for authentication and authorization
  • Validate and troubleshoot the SSO login flow

Let’s get started on enabling secure and seamless authentication with Keycloak.

3.1 - SSO for dashboard

Learn how to configure Single Sign-On (SSO) for KubeOps Dashboard using Keycloak with OIDC.

Single Sign-On (SSO) with Keycloak for KubeOps Dashboard

This guide describes how to configure KubeOps Dashboard authentication using Keycloak (OIDC) in a kubeops-managed Kubernetes environment.


Prerequisites

Before proceeding, ensure the following requirements are met:

  • Keycloak is already installed and running
  • kubeops is installed and operational

Step 1: Extract Keycloak CA certificate

  • On your admin host, extract the Keycloak CA certificate with OpenSSL:
  openssl s_client -showcerts -connect dev04.kubeops.net:443 </dev/null | openssl x509 -outform PEM > keycloak-ca.crt
  • Copy the CA certificate to each master:

    scp keycloak-ca.crt <master>:/etc/kubernetes/pki/
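Before distributing the file, it is worth verifying that the extracted PEM is actually a parseable certificate. In the sketch below, a throwaway self-signed certificate stands in for the real Keycloak CA so the check is reproducible anywhere:

```shell
# Generate a stand-in certificate (in practice, use the extracted keycloak-ca.crt)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/test-key.pem -out /tmp/keycloak-ca.crt \
  -subj "/CN=dev04.kubeops.net" 2>/dev/null

# A valid PEM certificate prints its subject; a garbled file errors out
openssl x509 -in /tmp/keycloak-ca.crt -noout -subject
```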

Step 2: Update the kube-apiserver manifest

  1. On every master, edit the manifest /etc/kubernetes/manifests/kube-apiserver.yaml and add the OIDC flags:

    spec:
      containers:
      - command:
        - --oidc-issuer-url=https://dev04.kubeops.net/keycloak/realms/master
        - --oidc-client-id=headlamp
        - --oidc-username-claim=preferred_username
        - --oidc-groups-claim=groups
        - "--oidc-username-prefix=oidc:"
        - "--oidc-groups-prefix=oidc:"
        - --oidc-ca-file=/etc/kubernetes/pki/keycloak-ca.crt

Step 3: Create a Keycloak client for Headlamp

  • Create a client for headlamp

    • Client ID: headlamp
    • Client type: OpenID Connect
    • Access type: Confidential
    • Client authentication: Enabled
    • Standard flow: Enabled
    • Direct access grants: Disabled
  • Valid Redirect URIs

    Add the following redirect URI:

    https://headlamp.<your_DNS_name>/*
    
  • Web Origins

    <your_DNS_name>
    

Step 4: Create a client scope for Headlamp

  • Create a client scope

    • Assigned Client Scope : headlamp-dedicated
  • For groups, use the Group Mapper in Keycloak:

    • Mapper Type: groups
    • Name: groups
    • Token Claim Name: groups
    • Add to ID token: ON
    • Add to access token: ON
    • Add to user info: ON
    • Add to token introspection: ON

Step 5: Create a user Group and user in Keycloak

Create a group named headlamp (if it does not already exist) and a user under the group.

Step 6: Create ClusterRoleBinding for Headlamp group

1. Use the following YAML to create the ClusterRoleBinding:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: headlamp-admin-user
subjects:
- kind: Group
  name: "oidc:headlamp" # the group claim from the Keycloak token
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io

The name "oidc:headlamp" needs to match the group name in Keycloak, prefixed with the configured oidc: prefix.

  2. Apply the ClusterRoleBinding file
    kubectl apply -f headlamp-clusterrolebinding.yaml

Step 7: Get client secret

After creating the client, copy the client secret.
This value will be used in the next step.

Step 8: Prepare Headlamp values (enterprise.yaml)

Configure enterprise.yaml:

packages:
- name: kubeops-dashboard
  enabled: true
  values:
    standard:
      namespace: monitoring
      service:
        nodePort: 30007
      hostname: "headlamp.dev04.kubeops.net"
      path: "/"
    advanced:
      config:
        extraArgs:
          - "--in-cluster"
          - "--plugins-dir=/headlamp/plugins"
          - "--oidc-client-id=headlamp"
          - "--oidc-idp-issuer-url=https://dev04.kubeops.net/keycloak/realms/master"
          - "--oidc-scopes=openid,profile,email"
          - "--insecure-ssl"
          - "--oidc-client-secret=<client-secret>"

Replace <client-secret> with the secret retrieved in Step 7.
--oidc-client-id must match the Keycloak client name (headlamp).

Step 9: Install Headlamp

Deploy Headlamp with the updated enterprise.yaml.

3.2 - SSO for Harbor

Learn how to configure Single Sign-On (SSO) for Harbor using Keycloak with OIDC in a Kubernetes environment.

Single Sign-On (SSO) with Keycloak for Harbor

This guide describes how to configure Harbor authentication using Keycloak (OIDC) in a kubeops-managed Kubernetes environment.


Prerequisites

Before proceeding, ensure the following requirements are met:

  • Keycloak is already installed and running
  • Keycloak is exposed using Kubernetes Ingress
  • A valid DNS record is configured for Keycloak and Harbor
  • TLS is enabled with a trusted Certificate Authority (CA)
  • kubeops is installed and operational

Step 1: Prepare Keycloak (Realm, User, and Client)

In this step, we configure Keycloak for Harbor SSO. Keycloak is assumed to be already installed, exposed via Ingress, and reachable over HTTPS.

Create Realm

Ensure a realm named kubeops-dashboards exists.
If it does not exist, create it in the Keycloak admin console.

  • Realm name: kubeops-dashboards
  • Enabled: true

Create User

Ensure a user named kubeops exists in the kubeops-dashboards realm.
If the user does not exist, create it and set credentials.

  • Username: kubeops
  • Enabled: true
  • Set a permanent password

Create Client (Harbor)

Create a client for Harbor in the kubeops-dashboards realm.

  • Client ID: harbor
  • Client type: OpenID Connect
  • Access type: Confidential
  • Client authentication: Enabled
  • Standard flow: Enabled
  • Direct access grants: Disabled

Valid Redirect URIs

Add the following redirect URI:

https://<your_DNS_name>/c/oidc/callback

Web Origins

<your_DNS_name>

Client Secret

After creating the client, copy the client secret.
This value will be used in the Harbor configuration:

oidc_client_id: harbor
oidc_client_secret: <CLIENT_SECRET>

Create Secret

kubectl create secret generic <your_secret_name> -n <your_harbor_namespace> \
    --from-literal client_id=<your_oidc_client_id> \
    --from-literal client_secret=<your_oidc_client_secret>
Step 2: Prepare Harbor Values

The following kubeops package configuration enables Harbor and integrates it with Keycloak using OIDC authentication.

apiVersion: kubeops/kubeopsctl/enterprise/beta/v1

deleteNs: false
localRegistry: false

packages:
  - name: harbor
    enabled: true
    values:
      standard:
        namespace: <your_harbor_namespace>
        harborpass: "password"
        databasePassword: "password"
        redisPassword: "password"
        externalURL: <your_DNS_name>
        nodePort: 30002
        hostname: harbor.dev04.kubeops.net
        harborPersistence:
          persistentVolumeClaim:
            registry:
              size: 40Gi
              storageClass: "rook-cephfs"
            jobservice:
              jobLog:
                size: 1Gi
                storageClass: "rook-cephfs"
            database:
              size: 1Gi
              storageClass: "rook-cephfs"
            redis:
              size: 1Gi
              storageClass: "rook-cephfs"
            trivy:
              size: 5Gi
              storageClass: "rook-cephfs"

      advanced:
        core:
          extraEnvVars:
            - name: OIDC_CLIENT_ID
              valueFrom:
                secretKeyRef:
                  name: oidc-harbor 
                  key: client_id
            - name: OIDC_CLIENT_SECRET
              valueFrom:
                secretKeyRef:
                  name: oidc-harbor 
                  key: client_secret
            - name: CONFIG_OVERWRITE_JSON
              value: |
                {
                  "auth_mode": "oidc_auth",
                  "oidc_name": "keycloak",
                  "oidc_endpoint": "https://<your_DNS_name>/keycloak/realms/kubeops-dashboards",
                  "oidc_client_id": "$(OIDC_CLIENT_ID)",
                  "oidc_client_secret": "$(OIDC_CLIENT_SECRET)",
                  "oidc_scope": "openid,profile,email",
                  "oidc_verify_cert": true,
                  "oidc_auto_onboard": true
                }                
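The CONFIG_OVERWRITE_JSON value must be valid JSON; the $(...) placeholders are plain strings at this stage (Harbor expands them at runtime), so the block can be checked locally before deploying. A quick parse check with Python's stdlib JSON tool:

```shell
# Validate a CONFIG_OVERWRITE_JSON fragment locally (placeholders are
# ordinary JSON strings here; Harbor substitutes them at runtime)
python3 -m json.tool >/dev/null <<'EOF' && echo "valid JSON"
{
  "auth_mode": "oidc_auth",
  "oidc_name": "keycloak",
  "oidc_client_id": "$(OIDC_CLIENT_ID)",
  "oidc_verify_cert": true,
  "oidc_auto_onboard": true
}
EOF
```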

Notes

  • Ensure the OIDC client in Keycloak matches the oidc_client_id and oidc_client_secret values.
  • The externalURL and hostname must match the Harbor DNS name exactly.
  • oidc_auto_onboard: true allows users to be created automatically in Harbor upon first login.

3.3 - SSO for rook-ceph

Learn how to configure Single Sign-On (SSO) for rook-ceph using Keycloak with OIDC in a Kubernetes environment.

Single Sign-On (SSO) with Keycloak for rook-ceph

This guide describes how to configure rook-ceph authentication using Keycloak (OIDC) in a kubeops-managed Kubernetes environment.


Prerequisites

Before proceeding, ensure the following requirements are met:

  • Keycloak is already installed and running
  • rook-ceph is already installed and running
  • kubeops is installed and operational

Step 1: Prepare Keycloak (Realm, User)

To configure Keycloak for rook-ceph SSO, prepare the realm and user as follows.

Create Realm

Ensure a realm named kubeops-dashboards exists.
If it does not exist, create it in the Keycloak admin console.

  • Realm name: kubeops-dashboards
  • Enabled: true

Create User

Ensure a user named kubeops exists in the kubeops-dashboards realm.
If the user does not exist, create it and set credentials.

  • Username: kubeops
  • Enabled: true
  • Set a permanent password

Step 2: Create Client (rook-ceph)

Create a client for rook-ceph in the kubeops-dashboards realm with the following settings.

  • Client ID: rook-ceph
  • Client type: OpenID Connect
  • Access type: Confidential
  • Client authentication: Enabled
  • Standard flow: Enabled
  • Direct access grants: Disabled

Valid Redirect URIs

Add the following redirect URI:

https://<your_DNS_name>/oauth2/callback

Web Origins

Also update the Web Origins:

<your_DNS_name>

Step 3: Get Client Secret

In the Keycloak admin console, open the rook-ceph client and copy the client secret. This value will be used by oauth2-proxy and referenced in the next steps:

oidc_client_id: rook-ceph
oidc_client_secret: <CLIENT_SECRET>

Step 4: Create the cookie secret and Kubernetes Secret

Generate a secure random cookie secret.

python3 -c 'import os,base64; print(base64.urlsafe_b64encode(os.urandom(32)).decode())'
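The one-liner above encodes 32 random bytes, which always yields 44 URL-safe base64 characters (including padding). A quick property check:

```shell
# Same generator as above; the length is deterministic even though the value is random
cookie=$(python3 -c 'import os,base64; print(base64.urlsafe_b64encode(os.urandom(32)).decode())')

# 32 raw bytes -> 4 * ceil(32/3) = 44 encoded characters
echo "${#cookie}"   # 44
```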

Create a Kubernetes Secret containing the OAuth2 credentials. Note: the example command below uses client-id="ceph-dashboard"; verify this value matches your Keycloak client ID.

kubectl create secret generic oauth2-proxy-credentials \
  --from-literal=client-id="ceph-dashboard" \
  --from-literal=client-secret="<client-secret>" \
  --from-literal=cookie-secret="<cookie-secret>" \
  -n rook-ceph

Step 5: Prepare values for oauth2-proxy

The following kubeops values configuration enables rook-ceph and integrates it with Keycloak using OIDC authentication.

Use the client secret and cookie secret derived in the steps above.

global:

  # Global registry to pull the images from

  imageRegistry: ""

  # To help compatibility with other charts which use global.imagePullSecrets.

  imagePullSecrets: []

  #   - name: pullSecret1

  #   - name: pullSecret2

## Override the deployment namespace

##

namespaceOverride: ""

# Force the target Kubernetes version (it uses Helm `.Capabilities` if not set).

# This is especially useful for `helm template` as capabilities are always empty

# due to the fact that it doesn't query an actual cluster

kubeVersion:

# Oauth client configuration specifics

config:

  # Add config annotations

  annotations: {}

  # OAuth client ID

  clientID: "ceph-dashboard"

  # OAuth client secret

  clientSecret: "<client-secret>"

  # List of secret keys to include in the secret and expose as environment variables.

  # By default, all three secrets are required. To exclude certain secrets

  # (e.g., when using federated token authentication), remove them from this list.

  # Example to exclude client-secret:

  # requiredSecretKeys:

  #   - client-id

  #   - cookie-secret

  requiredSecretKeys:

    - client-id

    - client-secret

    - cookie-secret

  # Create a new secret with the following command

  # openssl rand -base64 32 | head -c 32 | base64

  # Use an existing secret for OAuth2 credentials (see secret.yaml for required fields)

  # Example:

  # existingSecret: secret

  cookieSecret: "<cookie-secret>"

  # The name of the cookie that oauth2-proxy will create

  # If left empty, it will default to the release name

  cookieName: ""

  google: {}

    # adminEmail: xxxx

    # useApplicationDefaultCredentials: true

    # targetPrincipal: xxxx

    # serviceAccountJson: xxxx

    # Alternatively, use an existing secret (see google-secret.yaml for required fields)

    # Example:

    # existingSecret: google-secret

    # groups: []

    # Example:

    #  - group1@example.com

    #  - group2@example.com

  # Default configuration, to be overridden

  configFile: |-

    provider = "keycloak-oidc"

    oidc_issuer_url = "https://dev04.kubeops.net/keycloak/realms/master"

    email_domains = [ "*" ]

    upstreams = [ "file:///dev/null" ]

   

    pass_user_headers = true

    set_xauthrequest = true

    pass_access_token = true

  # Custom configuration file: oauth2_proxy.cfg

  # configFile: |-

  #   pass_basic_auth = false

  #   pass_access_token = true

  # Use an existing config map (see configmap.yaml for required fields)

  # Example:

  # existingConfig: config

alphaConfig:

  enabled: false

  # Add config annotations

  annotations: {}

  # Arbitrary configuration data to append to the server section

  serverConfigData: {}

  # Arbitrary configuration data to append to the metrics section

  metricsConfigData: {}

  # Arbitrary configuration data to append

  configData: {}

  # Arbitrary configuration to append

  # This is treated as a Go template and rendered with the root context

  configFile: ""

  # Use an existing config map (see secret-alpha.yaml for required fields)

  existingConfig: ~

  # Use an existing secret

  existingSecret: "oauth2-proxy-credentials"

image:

  registry: ""

  repository: "oauth2-proxy/oauth2-proxy"

  # appVersion is used by default

  tag: ""

  pullPolicy: "IfNotPresent"

  command: []

# Optionally specify an array of imagePullSecrets.

# Secrets must be manually created in the namespace.

# ref: https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod

imagePullSecrets: []

  # - name: myRegistryKeySecretName

# Set a custom containerPort if required.

# This will default to 4180 if this value is not set and the httpScheme set to http

# This will default to 4443 if this value is not set and the httpScheme set to https

# containerPort: 4180

extraArgs:

  - --provider=keycloak-oidc

  - --set-xauthrequest=true

  - --pass-user-headers=true

  - --pass-access-token=true

  - --skip-oidc-discovery=true

  - --oidc-issuer-url=https://dev04.kubeops.net/keycloak/realms/master

  - --login-url=https://dev04.kubeops.net/keycloak/realms/master/protocol/openid-connect/auth

  - --redeem-url=https://dev04.kubeops.net/keycloak/realms/master/protocol/openid-connect/token

  - --validate-url=https://dev04.kubeops.net/keycloak/realms/master/protocol/openid-connect/userinfo

  - --oidc-jwks-url=https://dev04.kubeops.net/keycloak/realms/master/protocol/openid-connect/certs

  - --ssl-insecure-skip-verify=true

  - --cookie-secure=true

extraEnv: []

envFrom: []

# Load environment variables from a ConfigMap(s) and/or Secret(s)

# that already exists (created and managed by you).

# ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#configure-all-key-value-pairs-in-a-configmap-as-container-environment-variables

#

# PS: Changes in these ConfigMaps or Secrets will not be automatically

#     detected and you must manually restart the relevant Pods after changes.

#

#  - configMapRef:

#      name: special-config

#  - secretRef:

#      name: special-config-secret

# -- Custom labels to add into metadata

customLabels: {}

# To authorize individual email addresses

# That is part of extraArgs but since this needs special treatment we need to do a separate section

authenticatedEmailsFile:

  enabled: false

  # Defines how the email addresses file will be projected, via a configmap or secret

  persistence: configmap

  # template is the name of the configmap what contains the email user list but has been configured without this chart.

  # It's a simpler way to maintain only one configmap (user list) instead changing it for each oauth2-proxy service.

  # Be aware the value name in the extern config map in data needs to be named to "restricted_user_access" or to the

  # provided value in restrictedUserAccessKey field.

  template: ""

  # The configmap/secret key under which the list of email access is stored

  # Defaults to "restricted_user_access" if not filled-in, but can be overridden to allow flexibility

  restrictedUserAccessKey: ""

  # One email per line

  # example:

  # restricted_access: |-

  #   name1@domain

  #   name2@domain

  # If you override the config with restricted_access it will configure a user list within this chart what takes care of the

  # config map resource.

  restricted_access: ""

  annotations: {}

  # helm.sh/resource-policy: keep

service:

  type: ClusterIP

  # when service.type is ClusterIP ...

  # clusterIP: 192.0.2.20

  # when service.type is LoadBalancer ...

  # loadBalancerIP: 198.51.100.40

  # loadBalancerSourceRanges: 203.0.113.0/24

  # when service.type is NodePort ...

  # nodePort: 80

  portNumber: 80

  # Protocol set on the service

  appProtocol: http

  annotations: {}

  # foo.io/bar: "true"

  # configure externalTrafficPolicy

  externalTrafficPolicy: ""

  # configure internalTrafficPolicy

  internalTrafficPolicy: ""

  # configure service target port

  targetPort: ""

  # Configures the service to use IPv4/IPv6 dual-stack.

  # Ref: https://kubernetes.io/docs/concepts/services-networking/dual-stack/

  ipDualStack:

    enabled: false

    ipFamilies: ["IPv6", "IPv4"]

    ipFamilyPolicy: "PreferDualStack"

  # Configure traffic distribution for the service

  # Ref: https://kubernetes.io/docs/concepts/services-networking/service/#traffic-distribution

  trafficDistribution: ""

## Create or use ServiceAccount

serviceAccount:

  ## Specifies whether a ServiceAccount should be created

  enabled: true

  ## The name of the ServiceAccount to use.

  ## If not set and create is true, a name is generated using the fullname template

  name:

  automountServiceAccountToken: true

  annotations: {}

  ## imagePullSecrets for the service account

  imagePullSecrets: []

    # - name: myRegistryKeySecretName

# Network policy settings.

networkPolicy:

  create: false

  ingress: []

  egress: []

ingress:

  enabled: false

  # className: nginx

  path: /

  # Only used if API capabilities (networking.k8s.io/v1) allow it

  pathType: ImplementationSpecific

  # Used to create an Ingress record.

  # hosts:

  # - chart-example.local

  # Extra paths to prepend to every host configuration. This is useful when working with annotation based services.

  # Warning! The configuration is dependent on your current k8s API version capabilities (networking.k8s.io/v1)

  # extraPaths:

  # - path: /*

  #   pathType: ImplementationSpecific

  #   backend:

  #     service:

  #       name: ssl-redirect

  #       port:

  #         name: use-annotation

  labels: {}

  # annotations:

  #   kubernetes.io/ingress.class: nginx

  #   kubernetes.io/tls-acme: "true"

  # tls:

  # Secrets must be manually created in the namespace.

  # - secretName: chart-example-tls

  #   hosts:

  #     - chart-example.local

# Gateway API HTTPRoute configuration

# Ref: https://gateway-api.sigs.k8s.io/api-types/httproute/

gatewayApi:

  enabled: false

  # The name of the Gateway resource to attach the HTTPRoute to

  # Example:

  # gatewayRef:

  #   name: gateway

  #   namespace: gateway-system

  gatewayRef:

    name: ""

    namespace: ""

  # HTTPRoute rule configuration

  # rules:

  # - matches:

  #   - path:

  #       type: PathPrefix

  #       value: /

  rules: []

  # Hostnames to match in the HTTPRoute

  # hostnames:

  # - chart-example.local

  hostnames: []

  # Additional labels to add to the HTTPRoute

  labels: {}

  # Additional annotations to add to the HTTPRoute

  annotations: {}

resources: {}

  # limits:

  #   cpu: 100m

  #   memory: 300Mi

  # requests:

  #   cpu: 100m

  #   memory: 300Mi

# Container resize policy for runtime resource updates

# Ref: https://kubernetes.io/docs/tasks/configure-pod-container/resize-container-resources/

resizePolicy: []

  # - resourceName: cpu

  #   restartPolicy: NotRequired

  # - resourceName: memory

  #   restartPolicy: RestartContainer

extraVolumes: []

  # - name: ca-bundle-cert

  #   secret:

  #     secretName: <secret-name>

extraVolumeMounts: []

  # - mountPath: /etc/ssl/certs/

  #   name: ca-bundle-cert

# Additional containers to be added to the pod.

extraContainers: []

  #  - name: my-sidecar

  #    image: nginx:latest

# Additional Init containers to be added to the pod.

extraInitContainers: []

  #  - name: wait-for-idp

  #    image: my-idp-wait:latest

  #    command:

  #    - sh

  #    - -c

  #    - wait-for-idp.sh

priorityClassName: ""

# hostAliases is a list of aliases to be added to /etc/hosts for network name resolution

hostAliases: []

# - ip: "10.xxx.xxx.xxx"

#   hostnames:

#     - "auth.example.com"

# - ip: 127.0.0.1

#   hostnames:

#     - chart-example.local

#     - example.local

# [TopologySpreadConstraints](https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/) configuration.

# Ref: https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling

# topologySpreadConstraints: []

# Affinity for pod assignment

# Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity

# affinity: {}

# Tolerations for pod assignment

# Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/

tolerations: []

# Node labels for pod assignment

# Ref: https://kubernetes.io/docs/user-guide/node-selection/

nodeSelector: {}

# Whether to use secrets instead of environment values for setting up OAUTH2_PROXY variables

proxyVarsAsSecrets: true

# Configure Kubernetes liveness and readiness probes.

# Ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/

# Disable both when deploying with Istio 1.0 mTLS. https://istio.io/help/faq/security/#k8s-health-checks

livenessProbe:

  enabled: true

  initialDelaySeconds: 0

  timeoutSeconds: 1

readinessProbe:

  enabled: true

  initialDelaySeconds: 0

  timeoutSeconds: 5

  periodSeconds: 10

  successThreshold: 1

# Configure Kubernetes security context for container

# Ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/

securityContext:

  enabled: true

  allowPrivilegeEscalation: false

  capabilities:

    drop:

      - ALL

  readOnlyRootFilesystem: true

  runAsNonRoot: true

  runAsUser: 2000

  runAsGroup: 2000

  seccompProfile:

    type: RuntimeDefault

deploymentAnnotations: {}

podAnnotations: {}

podLabels: {}

replicaCount: 1

revisionHistoryLimit: 10

strategy: {}

enableServiceLinks: true

## PodDisruptionBudget settings

## ref: https://kubernetes.io/docs/concepts/workloads/pods/disruptions/

## One of maxUnavailable and minAvailable must be set to null.

podDisruptionBudget:

  enabled: true

  maxUnavailable: null

  minAvailable: 1

  # Policy for when unhealthy pods should be considered for eviction.

  # Valid values are "IfHealthyBudget" and "AlwaysAllow".

  # Ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/#unhealthy-pod-eviction-policy

  unhealthyPodEvictionPolicy: ""

## Horizontal Pod Autoscaling

## ref: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/

autoscaling:

  enabled: false

  minReplicas: 1

  maxReplicas: 10

  targetCPUUtilizationPercentage: 80

  # targetMemoryUtilizationPercentage: 80

  annotations: {}

  # Configure HPA behavior policies for scaling if needed

  # Ref: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#configuring-scaling-behavior

  behavior: {}

    # scaleDown:

    #   stabilizationWindowSeconds: 300

    #   policies:

    #   - type: Percent

    #     value: 100

    #     periodSeconds: 15

    #   selectPolicy: Min

    # scaleUp:

    #   stabilizationWindowSeconds: 0

    #   policies:

    #   - type: Percent

    #     value: 100

    #     periodSeconds: 15

    #   - type: Pods

    #     value: 4

    #     periodSeconds: 15

    #   selectPolicy: Max

# Configure Kubernetes security context for pod

# Ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/

podSecurityContext: {}

# whether to use http or https

httpScheme: http

initContainers:

  # if the redis sub-chart is enabled, wait for it to be ready

  # before starting the proxy

  # creates a role binding to get, list, watch, the redis master pod

  # if service account is enabled

  waitForRedis:

    enabled: true

    image:

      repository: "alpine"

      tag: "latest"

      pullPolicy: "IfNotPresent"

    # uses the kubernetes version of the cluster

    # the chart is deployed on, if not set

    kubectlVersion: ""

    securityContext:

      enabled: true

      allowPrivilegeEscalation: false

      capabilities:

        drop:

          - ALL

      readOnlyRootFilesystem: true

      runAsNonRoot: true

      runAsUser: 65534

      runAsGroup: 65534

      seccompProfile:

        type: RuntimeDefault

    timeout: 180

    resources: {}

      # limits:

      #   cpu: 100m

      #   memory: 300Mi

      # requests:

      #   cpu: 100m

      #   memory: 300Mi

# Additionally authenticate against a htpasswd file. Entries must be created with "htpasswd -B" for bcrypt encryption.

# Alternatively supply an existing secret which contains the required information.

htpasswdFile:

  enabled: false

  existingSecret: ""

  entries: []

  # One row for each user

  # example:

  # entries:

  #  - testuser:$2y$05$gY6dgXqjuzFhwdhsiFe7seM9q9Tile4Y3E.CBpAZJffkeiLaC21Gy

# Configure the session storage type, between cookie and redis

sessionStorage:

  # Can be one of the supported session storage cookie|redis

  type: cookie

  redis:

    # Name of the Kubernetes secret containing the redis & redis sentinel password values (see also `sessionStorage.redis.passwordKey`)

    existingSecret: ""

    # Redis password value. Applicable for all Redis configurations. Taken from redis subchart secret if not set. `sessionStorage.redis.existingSecret` takes precedence

    password: ""

    # Key of the Kubernetes secret data containing the redis password value. If you use the redis sub chart, make sure

    # this password matches the one used in redis-ha.redisPassword (see below).

    passwordKey: "redis-password"

    # Can be one of standalone|cluster|sentinel

    clientType: "standalone"

    standalone:

      # URL of redis standalone server for redis session storage (e.g. `redis://HOST[:PORT]`). Automatically generated if not set

      connectionUrl: ""

    cluster:

      # List of Redis cluster connection URLs. Array or single string allowed.

      connectionUrls: []

      # - "redis://127.0.0.1:8000"

      # - "redis://127.0.0.1:8001"

    sentinel:

      # Name of the Kubernetes secret containing the redis sentinel password value (see also `sessionStorage.redis.sentinel.passwordKey`). Default: `sessionStorage.redis.existingSecret`

      existingSecret: ""

      # Redis sentinel password. Used only for sentinel connection; any redis node passwords need to use `sessionStorage.redis.password`

      password: ""

      # Key of the Kubernetes secret data containing the redis sentinel password value

      passwordKey: "redis-sentinel-password"

      # Redis sentinel master name

      masterName: ""

      # List of Redis cluster connection URLs. Array or single string allowed.

      connectionUrls: []

      # - "redis://127.0.0.1:8000"

      # - "redis://127.0.0.1:8001"

# Enables and configure the automatic deployment of the redis-ha subchart

redis-ha:

  # provision an instance of the redis-ha sub-chart

  enabled: false

  # Redis specific helm chart settings, please see:

  # https://artifacthub.io/packages/helm/dandydev-charts/redis-ha#general-parameters

  #

  # Recommended:

  #

  # redisPassword: xxxxx

  # replicas: 1

  # persistentVolume:

  #   enabled: false

  #

  # If you install Redis using this sub chart, make sure that the password of the sub chart matches the password

  # you set in sessionStorage.redis.password (see above).

  #

  # If you want to use redis in sentinel mode see:

  # https://artifacthub.io/packages/helm/dandydev-charts/redis-ha#redis-sentinel-parameters

# Enables apiVersion deprecation checks

checkDeprecation: true

# Allows graceful shutdown

# terminationGracePeriodSeconds: 65

# lifecycle:

#   preStop:

#     exec:

#       command: [ "sh", "-c", "sleep 60" ]

metrics:

  # Enable Prometheus metrics endpoint

  enabled: true

  # Serve Prometheus metrics on this port

  port: 44180

  # when service.type is NodePort ...

  # nodePort: 44180

  # Protocol set on the service for the metrics port

  service:

    appProtocol: http

  serviceMonitor:

    # Enable Prometheus Operator ServiceMonitor

    enabled: false

    # Define the namespace where to deploy the ServiceMonitor resource

    namespace: ""

    # Prometheus Instance definition

    prometheusInstance: default

    # Prometheus scrape interval

    interval: 60s

    # Prometheus scrape timeout

    scrapeTimeout: 30s

    # Add custom labels to the ServiceMonitor resource

    labels: {}

    ## scheme: HTTP scheme to use for scraping. Can be used with `tlsConfig` for example if using istio mTLS.

    scheme: ""

    ## tlsConfig: TLS configuration to use when scraping the endpoint. For example if using istio mTLS.

    ## Of type: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#tlsconfig

    tlsConfig: {}

    ## bearerTokenFile: Path to bearer token file.

    bearerTokenFile: ""

    ## Used to pass annotations that are used by the Prometheus installed in your cluster to select Service Monitors to work with

    ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#prometheusspec

    annotations: {}

    ## Metric relabel configs to apply to samples before ingestion.

    ## [Metric Relabeling](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#metric_relabel_configs)

    metricRelabelings: []

    # - action: keep

    #   regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'

    #   sourceLabels: [__name__]

    ## Relabel configs to apply to samples before ingestion.

    ## [Relabeling](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config)

    relabelings: []

    # - sourceLabels: [__meta_kubernetes_pod_node_name]

    #   separator: ;

    #   regex: ^(.*)$

    #   targetLabel: nodename

    #   replacement: $1

    #   action: replace

# Extra K8s manifests to deploy

extraObjects: []

Step 6: Install the oauth2-proxy Helm chart

Use the following steps to install oauth2-proxy using the Helm chart.

    helm repo add oauth2-proxy https://oauth2-proxy.github.io/manifests
    helm pull oauth2-proxy/oauth2-proxy
    tar -xzvf oauth2-proxy-10.1.0.tgz
    mv values.yaml oauth2-proxy/values.yaml
    helm install oauth2-proxy oauth2-proxy/ -n rook-ceph

Step 7: Update the rook-ceph configuration

Configure the Ceph manager:

    ceph config-key set mgr/dashboard/external_auth true

    ceph config-key set mgr/dashboard/external_auth_header_name "X-Remote-User"

    ceph config-key set mgr/dashboard/external_auth_logout_url "https://dev04.kubeops.net/oauth2/sign_out?rd=https://dev04.kubeops.net/keycloak/realms/master/protocol/openid-connect/logout?client_id=ceph-dashboard"

Step 8: Update the ceph-dashboard Ingress

Configure the ceph-dashboard Ingress:

    metadata:
      annotations:
        cert-manager.io/cluster-issuer: kubeops-ca-issuer
        kubernetes.io/ingress.class: nginx
        meta.helm.sh/release-name: rook-ceph-cluster
        meta.helm.sh/release-namespace: rook-ceph
        nginx.ingress.kubernetes.io/auth-response-headers: X-Auth-Request-User
        nginx.ingress.kubernetes.io/auth-signin: https://dev04.kubeops.net/oauth2/start?rd=$escaped_request_uri
        nginx.ingress.kubernetes.io/auth-url: https://dev04.kubeops.net/oauth2/auth
        nginx.ingress.kubernetes.io/configuration-snippet: |
          proxy_set_header X-Remote-User $upstream_http_x_auth_request_user;

Step 9: Create the oauth2-proxy Ingress

Create an Ingress for oauth2-proxy:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: oauth2-proxy-ingress
      namespace: rook-ceph # namespace in which the proxy runs
      annotations:
        kubernetes.io/ingress.class: nginx
    spec:
      rules:
      - host: dev04.kubeops.net
        http:
          paths:
          - path: /oauth2
            pathType: Prefix
            backend:
              service:
                name: oauth2-proxy
                port:
                  number: 80

4 - Upgrade Kubernetes Version

This guide outlines the steps to upgrade the Kubernetes version of a cluster, specifically demonstrating how to change the version using a configuration file.

Upgrading a Kubernetes cluster

Upgrading a Kubernetes cluster is essential to maintain security, stability, and compatibility. Like Kubernetes itself, we adhere to the version skew policy and only allow upgrades between releases that differ by a single minor version. This ensures compatibility between components, reduces the risk of instability, and keeps the cluster in a supported and secure state. For more information about the version skew policy, click here.
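The single-minor-version rule can be sketched as a small shell check. This is a hypothetical helper for illustration only, not part of kubeopsctl; it ignores the major version, which is 1 for all releases listed in this guide:

```shell
# Hypothetical helper: succeeds only when the target release is the same
# minor version or exactly one minor version newer than the current one.
skew_ok() {
  cur_minor=$(echo "$1" | cut -d. -f2)
  tgt_minor=$(echo "$2" | cut -d. -f2)
  diff=$((tgt_minor - cur_minor))
  [ "$diff" -ge 0 ] && [ "$diff" -le 1 ]
}

skew_ok 1.33.5 1.34.1 && echo "upgrade allowed"   # one minor apart
skew_ok 1.32.2 1.34.1 || echo "upgrade blocked"   # two minors apart: not allowed
```

An upgrade from 1.32.x to 1.34.x therefore has to go through 1.33.x first.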

You can use the following steps to upgrade the Kubernetes version of a cluster.

Kubernetes Version Upgrade Process:

Prerequisites

KOSI Login Recommendation

Before performing any action with kubeopsctl, it is recommended to log in with KOSI. Refer to the official KOSI documentation for details here.

1. Pull required KOSI packages on your ADMIN

If you do not specify a parameter, the current Kubernetes version 1.32.2 will be pulled.
With the parameter --kubernetesVersion 1.34.1 you can pull a specific Kubernetes version.
Available Kubernetes versions are 1.32.2, 1.32.3, 1.32.9, 1.32.10, 1.33.3, 1.33.5, and 1.34.1.

kubeopsctl pull --kubernetesVersion <x.xx.x>

2. Change your target version inside the cluster-values

3. Start the upgrade with the command

kubeopsctl apply -f cluster-values.yaml

Example 1 - Upgrade all nodes in the cluster to a specific version

We want to upgrade a cluster from Kubernetes v1.33.5 to v1.34.1. Follow these steps:

1. Pull required KOSI packages on your ADMIN

Pull the Kubernetes v1.34.1 packages on your ADMIN machine.

kubeopsctl pull --kubernetesVersion 1.34.1

2. Change your target version inside the cluster-values

Adjust your cluster-values as shown in the example below. Be sure to set the current version in your values, as well as the target version on the nodes.

# file cluster-values.yaml
apiVersion: kubeops/kubeopsctl/cluster/beta/v1
imagePullRegistry: registry.kubeops.net/kubeops/kubeops
airgap: true
clusterName: myCluster
clusterUser: root
kubernetesVersion: 1.33.5     # -> current version
kubeVipEnabled: false
virtualIP: 10.2.10.110
firewall: nftables
pluginNetwork: calico
containerRuntime: containerd
kubeOpsRoot: /home/myuser/kubeops
serviceSubnet: 192.168.128.0/17
podSubnet: 192.168.0.0/17
debug: true
systemCpu: 250m
systemMemory: 256Mi
packageRepository: local
changeCluster: true           # -> important! Needs to be set for an upgrade
zones:
- name: zone1
  nodes:
  - name: demo-controlplane01
    iPAddress: 10.2.10.110
    type: controlplane
    kubeVersion: 1.34.1       # -> target version
  - name: demo-worker01
    iPAddress: 10.2.10.210
    type: worker
    kubeVersion: 1.34.1       # -> target version
- name: zone2
  nodes:
  - name: demo-controlplane02
    iPAddress: 10.2.10.120
    type: controlplane
    kubeVersion: 1.34.1       # -> target version
  - name: demo-worker02
    iPAddress: 10.2.10.220
    type: worker
    kubeVersion: 1.34.1       # -> target version
- name: zone3
  nodes:
  - name: demo-controlplane03
    iPAddress: 10.2.10.130
    type: controlplane
    kubeVersion: 1.34.1       # -> target version
  - name: demo-worker03
    iPAddress: 10.2.10.230
    type: worker
    kubeVersion: 1.34.1       # -> target version

2. Validate your values and upgrade the cluster

Once the cluster-values.yaml is created, check the values once again. If you are ready, just start the upgrade process with the command:

kubeopsctl apply -f cluster-values.yaml

Example 2 - Upgrade zones in tranches to a specific version

We want to upgrade a cluster in tranches: first zone1, because it contains the initial control-plane node, then zone2, and finally zone3.

1. Pull required KOSI packages on your ADMIN

Pull the Kubernetes v1.33.5 packages on your ADMIN machine.

kubeopsctl pull --kubernetesVersion 1.33.5

2. Adjust your cluster-values in zone1

Adjust your cluster-values as shown in the example below. Be sure to set the current version in your values, as well as the target version on the nodes. In the snippet below, only zone1 is updated.

# file cluster-values.yaml
apiVersion: kubeops/kubeopsctl/cluster/beta/v1
imagePullRegistry: registry.kubeops.net/kubeops/kubeops
airgap: true
clusterName: myCluster
clusterUser: root
kubernetesVersion: 1.32.2     # -> current version
kubeVipEnabled: false           
virtualIP: 10.2.10.110
firewall: nftables
pluginNetwork: calico
containerRuntime: containerd
kubeOpsRoot: /home/myuser/kubeops
serviceSubnet: 192.168.128.0/17
podSubnet: 192.168.0.0/17
debug: true
systemCpu: 250m
systemMemory: 256Mi
packageRepository: local
changeCluster: true
zones:
- name: zone1
  nodes:
  - name: demo-controlplane01
    iPAddress: 10.2.10.110
    type: controlplane
    kubeVersion: 1.33.5       # -> target version
  - name: demo-worker01
    iPAddress: 10.2.10.210
    type: worker
    kubeVersion: 1.33.5       # -> target version
- name: zone2
  nodes:
  - name: demo-controlplane02
    iPAddress: 10.2.10.120
    type: controlplane
    kubeVersion: 1.32.2       
  - name: demo-worker02
    iPAddress: 10.2.10.220
    type: worker
    kubeVersion: 1.32.2       
- name: zone3
  nodes:
  - name: demo-controlplane03
    iPAddress: 10.2.10.130
    type: controlplane
    kubeVersion: 1.32.2       
  - name: demo-worker03
    iPAddress: 10.2.10.230
    type: worker
    kubeVersion: 1.32.2       

3. Validate your values and upgrade the cluster

Once the cluster-values.yaml is created, check the values once again. If you are ready, just start the upgrade process with the command:

kubeopsctl apply -f cluster-values.yaml

4. Adjust your cluster-values in zone2

Now change the target version of zone2.

# file cluster-values.yaml
apiVersion: kubeops/kubeopsctl/cluster/beta/v1
imagePullRegistry: registry.kubeops.net/kubeops/kubeops
airgap: true
clusterName: myCluster
clusterUser: root
kubernetesVersion: 1.32.2     # -> current version
kubeVipEnabled: false
virtualIP: 10.2.10.110
firewall: nftables
pluginNetwork: calico
containerRuntime: containerd
kubeOpsRoot: /home/myuser/kubeops
serviceSubnet: 192.168.128.0/17
podSubnet: 192.168.0.0/17
debug: true
systemCpu: 250m
systemMemory: 256Mi
packageRepository: local
changeCluster: true
zones:
- name: zone1
  nodes:
  - name: demo-controlplane01
    iPAddress: 10.2.10.110
    type: controlplane
    kubeVersion: 1.33.5       # -> target version
  - name: demo-worker01
    iPAddress: 10.2.10.210
    type: worker
    kubeVersion: 1.33.5       # -> target version
- name: zone2
  nodes:
  - name: demo-controlplane02
    iPAddress: 10.2.10.120
    type: controlplane
    kubeVersion: 1.33.5       # -> target version
  - name: demo-worker02
    iPAddress: 10.2.10.220
    type: worker
    kubeVersion: 1.33.5       # -> target version
- name: zone3
  nodes:
  - name: demo-controlplane03
    iPAddress: 10.2.10.130
    type: controlplane
    kubeVersion: 1.32.2       
  - name: demo-worker03
    iPAddress: 10.2.10.230
    type: worker
    kubeVersion: 1.32.2       

5. Validate your values and upgrade the cluster

Once the cluster-values.yaml is created, check the values once again. If you are ready, just start the upgrade process with the command:

kubeopsctl apply -f cluster-values.yaml

6. Adjust your cluster-values in zone3

Now change the target version of zone3.

# file cluster-values.yaml
apiVersion: kubeops/kubeopsctl/cluster/beta/v1
imagePullRegistry: registry.kubeops.net/kubeops/kubeops
airgap: true
clusterName: myCluster
clusterUser: root
kubernetesVersion: 1.32.2     # -> current version
kubeVipEnabled: false
virtualIP: 10.2.10.110
firewall: nftables
pluginNetwork: calico
containerRuntime: containerd
kubeOpsRoot: /home/myuser/kubeops
serviceSubnet: 192.168.128.0/17
podSubnet: 192.168.0.0/17
debug: true
systemCpu: 250m
systemMemory: 256Mi
packageRepository: local
changeCluster: true
zones:
- name: zone1
  nodes:
  - name: demo-controlplane01
    iPAddress: 10.2.10.110
    type: controlplane
    kubeVersion: 1.33.5       # -> target version
  - name: demo-worker01
    iPAddress: 10.2.10.210
    type: worker
    kubeVersion: 1.33.5       # -> target version
- name: zone2
  nodes:
  - name: demo-controlplane02
    iPAddress: 10.2.10.120
    type: controlplane
    kubeVersion: 1.33.5       # -> target version
  - name: demo-worker02
    iPAddress: 10.2.10.220
    type: worker
    kubeVersion: 1.33.5       # -> target version
- name: zone3
  nodes:
  - name: demo-controlplane03
    iPAddress: 10.2.10.130
    type: controlplane
    kubeVersion: 1.33.5       # -> target version
  - name: demo-worker03
    iPAddress: 10.2.10.230
    type: worker
    kubeVersion: 1.33.5       # -> target version

7. Validate your values and upgrade the cluster

Once the cluster-values.yaml is created, check the values once again. If you are ready, just start the upgrade process with the command:

kubeopsctl apply -f cluster-values.yaml
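The tranche workflow above — bump the kubeVersion of one zone, apply, repeat — can be sketched with a small helper script. This is a hypothetical convenience sketch, not part of kubeopsctl; the awk program assumes the flat zone/node layout used in the examples above:

```shell
# Demo file with the same layout as the cluster-values examples above.
cat > demo-values.yaml <<'EOF'
zones:
- name: zone1
  nodes:
  - name: demo-controlplane01
    kubeVersion: 1.32.2
- name: zone2
  nodes:
  - name: demo-worker02
    kubeVersion: 1.32.2
EOF

# bump_zone <file> <zone> <target-version>: set kubeVersion for every node
# in the given zone, leaving all other zones untouched.
bump_zone() {
  awk -v zone="$2" -v ver="$3" '
    /^- name: /               { in_zone = ($3 == zone) }
    in_zone && /kubeVersion:/ { sub(/kubeVersion: .*/, "kubeVersion: " ver) }
    { print }
  ' "$1" > "$1.tmp" && mv "$1.tmp" "$1"
}

bump_zone demo-values.yaml zone1 1.33.5
grep kubeVersion demo-values.yaml
# zone1 nodes now show kubeVersion: 1.33.5, zone2 nodes keep 1.32.2.
# After each tranche, run: kubeopsctl apply -f cluster-values.yaml
```

In practice a YAML-aware tool such as yq would be more robust than awk for this kind of in-place edit.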

5 - Installing KubeOps Compliance applications

This guide outlines the steps to install KubeOps Compliance applications of a cluster.

Installing KubeOps Compliance applications

There is a predefined selection of applications included with KubeOps Compliance. These applications ensure a production-ready cluster deployment and can be individually configured as needed.

By separating the cluster values from the application values, the application values can be modified independently and installed at a later stage, providing greater flexibility and maintainability.

Prerequisites

KOSI Login Recommendation

Before performing any action with kubeopsctl, it is recommended to log in with KOSI. Refer to the official KOSI documentation for details here.

Example 1: Installing Applications in a non-airgap environment

To install the KubeOps Compliance Applications in an existing cluster, follow these steps:

1. Define the Enterprise-Value-file

In the example value, the following applications are enabled:

  • opa-gatekeeper
  • rook-ceph
  • harbor
  • kubeops-dashboard

All other applications are disabled and will not be installed. For more information about available packages, as well as the parameters for each package, check here.

The following file is only an example. Make sure to change the necessary values (IPs, passwords, …) before use.

apiVersion: kubeops/kubeopsctl/enterprise/beta/v1
deleteNs: false
localRegistry: false
packages:
- name: opa-gatekeeper
  enabled: true
  values:
    standard:
      namespace: opa-gatekeeper
    advanced:
- name: rook-ceph
  enabled: true
  values:
    standard:
      namespace: rook-ceph
      cluster:
        resources:
          mgr:
            requests:
              cpu: "500m"
              memory: "512Mi"
          mon:
            requests:
              cpu: "1"
              memory: "1Gi"
          osd:
            requests:
              cpu: "1"
              memory: "1Gi"
        dashboard:
          enabled: true
      operator:
        data:
          rookLogLevel: "DEBUG"
- name: harbor
  enabled: true
  values:
    standard:
      namespace: harbor
      harborpass: "topsecret"
      databasePassword: "topsecret"
      redisPassword: "topsecret"
      externalURL: http://10.2.10.110:30002
      nodePort: 30002
      hostname: harbor.local
      harborPersistence:
        persistentVolumeClaim:
          registry:
            size: 40Gi
            storageClass: "rook-cephfs"
          jobservice:
            jobLog:
              size: 1Gi
              storageClass: "rook-cephfs"
          database:
            size: 1Gi
            storageClass: "rook-cephfs"
          redis:
            size: 1Gi
            storageClass: "rook-cephfs"
          trivy: 
            size: 5Gi
            storageClass: "rook-cephfs"
    advanced:
- name: kubeops-dashboard
  enabled: true
  values:
    standard:
      namespace: monitoring
      hostname: kubeops-dashboard.local
      service:
        nodePort: 30007
    advanced:
- name: filebeat-os
  enabled: false
  values:
    standard:
      namespace: logging
    advanced:

2. Pull the KubeOps Compliance Applications packages

To pull the required application packages in the correct version for the release, use one of the following commands:

kubeopsctl pull -f enterprise-values.yaml --kubernetesVersion <x.xx.x>

or

kubeopsctl pull --tools enterprise-values.yaml --kubernetesVersion <x.xx.x>

3. The KubeOps Compliance Applications installation process

Important: to install only the tools, make sure the flag changeCluster is set to false in your cluster-values.yaml.
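A minimal pre-flight check for this flag could look like the following. This is a hypothetical sketch against a demo file; kubeopsctl does not ship such a helper, and in practice you would check your real cluster-values.yaml:

```shell
# Demo file; in practice, point the grep below at your real cluster-values.yaml.
cat > demo-cluster-values.yaml <<'EOF'
changeCluster: false
EOF

if grep -q '^changeCluster: false' demo-cluster-values.yaml; then
  echo "changeCluster is false - safe for a tools-only installation"
else
  echo "set changeCluster: false before installing only the tools" >&2
  exit 1
fi
```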

The following file is only an example. Make sure to change the necessary values (IPs, passwords, …) before use.

# file cluster-values.yaml
apiVersion: kubeops/kubeopsctl/cluster/beta/v1
imagePullRegistry: registry.kubeops.net/kubeops/kubeops
airgap: false                       # -> important
clusterName: myCluster
clusterUser: root
kubernetesVersion: 1.31.6         
kubeVipEnabled: false
virtualIP: 10.2.10.110
firewall: nftables
pluginNetwork: calico
containerRuntime: containerd
kubeOpsRoot: /home/myuser/kubeops
serviceSubnet: 192.168.128.0/17
podSubnet: 192.168.0.0/17
debug: true
systemCpu: 250m
systemMemory: 256Mi
packageRepository: https://packagerepo.kubeops.net/
changeCluster: false                # -> important
zones:
- name: zone1
  nodes:
  - name: demo-controlplane01
    iPAddress: 10.2.10.110
    type: controlplane
    kubeVersion: 1.32.2       
  - name: demo-worker01
    iPAddress: 10.2.10.210
    type: worker
    kubeVersion: 1.32.2      
- name: zone2
  nodes:
  - name: demo-controlplane02
    iPAddress: 10.2.10.120
    type: controlplane
    kubeVersion: 1.32.2       
  - name: demo-worker02
    iPAddress: 10.2.10.220
    type: worker
    kubeVersion: 1.32.2       
- name: zone3
  nodes:
  - name: demo-controlplane03
    iPAddress: 10.2.10.130
    type: controlplane
    kubeVersion: 1.32.2       
  - name: demo-worker03
    iPAddress: 10.2.10.230
    type: worker
    kubeVersion: 1.32.2        

4. Validate your values and install the KubeOps Compliance Applications

Once you have finished defining your values, check them once again. If you are ready, just start the installation process with the command:

kubeopsctl apply -f cluster-values.yaml -f enterprise-values.yaml

Example 2: Installing Applications in an airgap-environment

To install the KubeOps Compliance Applications in an existing cluster, follow these steps:

1. Define the Enterprise-Value-file

In the example value, the following applications are enabled:

  • opa-gatekeeper
  • rook-ceph
  • harbor
  • kubeops-dashboard

All other applications are disabled and will not be installed. The value parameters are explained in the references, which can be found here.

apiVersion: kubeops/kubeopsctl/enterprise/beta/v1
deleteNs: false
localRegistry: true             # important for airgap, otherwise images are pulled from public registry
packages:
- name: opa-gatekeeper
  enabled: true
  values:
    standard:
      namespace: opa-gatekeeper
    advanced:
- name: rook-ceph
  enabled: true
  values:
    standard:
      namespace: rook-ceph
      cluster:
        resources:
          mgr:
            requests:
              cpu: "500m"
              memory: "512Mi"
          mon:
            requests:
              cpu: "1"
              memory: "1Gi"
          osd:
            requests:
              cpu: "1"
              memory: "1Gi"
        dashboard:
          enabled: true
      operator:
        data:
          rookLogLevel: "DEBUG"
- name: harbor
  enabled: true
  values:
    standard:
      namespace: harbor
      harborpass: "topsecret"
      databasePassword: "topsecret"
      redisPassword: "topsecret"
      externalURL: http://10.2.10.110:30002
      nodePort: 30002
      hostname: harbor.local
      harborPersistence:
        persistentVolumeClaim:
          registry:
            size: 40Gi
            storageClass: "rook-cephfs"
          jobservice:
            jobLog:
              size: 1Gi
              storageClass: "rook-cephfs"
          database:
            size: 1Gi
            storageClass: "rook-cephfs"
          redis:
            size: 1Gi
            storageClass: "rook-cephfs"
          trivy: 
            size: 5Gi
            storageClass: "rook-cephfs"
    advanced:
- name: kubeops-dashboard
  enabled: true
  values:
    standard:
      namespace: monitoring
      hostname: kubeops-dashboard.local
      service:
        nodePort: 30007
    advanced:
- name: filebeat-os
  enabled: false
  values:
    standard:
      namespace: logging
    advanced:

2. Pull the KubeOps Compliance Applications packages
To pull the required application packages in the correct version for the release, use one of the following commands:

kubeopsctl pull -f enterprise-values.yaml --kubernetesVersion <x.xx.x>

or

kubeopsctl pull --tools enterprise-values.yaml --kubernetesVersion <x.xx.x>

3. The KubeOps Compliance Application installation process
If you only want to install the tools, make sure the changeCluster flag in your cluster-values.yaml is set to false.

The following file is only an example. Make sure to change the necessary values (IPs, passwords, …) before use.

# file cluster-values.yaml
apiVersion: kubeops/kubeopsctl/cluster/beta/v1
imagePullRegistry: registry.kubeops.net/kubeops/kubeops
airgap: true                        # -> important
clusterName: myCluster
clusterUser: root
kubernetesVersion: 1.32.2         
kubeVipEnabled: false
virtualIP: 10.2.10.110
firewall: nftables
pluginNetwork: calico
containerRuntime: containerd
kubeOpsRoot: /home/myuser/kubeops
serviceSubnet: 192.168.128.0/17
podSubnet: 192.168.0.0/17
debug: true
systemCpu: 250m
systemMemory: 256Mi
packageRepository: local
changeCluster: false                # -> important
zones:
- name: zone1
  nodes:
  - name: demo-controlplane01
    iPAddress: 10.2.10.110
    type: controlplane
    kubeVersion: 1.32.2      
  - name: demo-worker01
    iPAddress: 10.2.10.210
    type: worker
    kubeVersion: 1.32.2       
- name: zone2
  nodes:
  - name: demo-controlplane02
    iPAddress: 10.2.10.120
    type: controlplane
    kubeVersion: 1.32.2      
  - name: demo-worker02
    iPAddress: 10.2.10.220
    type: worker
    kubeVersion: 1.32.2      
- name: zone3
  nodes:
  - name: demo-controlplane03
    iPAddress: 10.2.10.130
    type: controlplane
    kubeVersion: 1.32.2      
  - name: demo-worker03
    iPAddress: 10.2.10.230
    type: worker
    kubeVersion: 1.32.2        

4. Validate your values and install the KubeOps Compliance Applications

Once you have finished defining your values, review them once more. When you are ready, start the installation with the command:

kubeopsctl apply -f cluster-values.yaml -f enterprise-values.yaml

6 - Updating KubeOps Compliance applications

This guide outlines the steps to update KubeOps Compliance applications of a cluster.

Updating KubeOps Compliance applications

There is a predefined selection of applications included with KubeOps Compliance. These applications ensure a production-ready cluster deployment and can be configured individually as needed.

By separating cluster values from application values, application values can be modified independently and installed later, providing greater flexibility and maintainability.

kubeopsctl automatically detects whether an application is already deployed and updates it accordingly.

Prerequisites

KOSI Login Recommendation

Before performing any action with kubeopsctl, it is recommended to log in with kosi. Refer to the official KOSI documentation for details here.

Updated KubeOpsctl

If you have an older kubeopsctl version installed, update it before starting to update the Compliance applications.

# apt (Debian-based systems); versions: https://packagerepo.kubeops.net/deb/pool/main/
sudo apt update
sudo apt install -y kubeopsctl=<kubeopsctl-version>

# dnf (RPM-based systems); versions: https://packagerepo.kubeops.net/rpm/
sudo dnf install -y --disableexcludes=kubeops-repo <kubeopsctl-version>

# manual .deb install; versions: https://packagerepo.kubeops.net/deb/pool/main/
wget https://packagerepo.kubeops.net/deb/pool/main/<kubeopsctl-version>.deb
sudo dpkg --install <kubeopsctl-version>.deb

# manual .rpm install; versions: https://packagerepo.kubeops.net/rpm
sudo rpm -e kubeopsctl
wget https://packagerepo.kubeops.net/rpm/<kubeopsctl-version>.rpm
sudo rpm --install -v <kubeopsctl-version>.rpm

Example 1: Updating Applications in a non-airgap-environment

To update the KubeOps Compliance Applications in an existing cluster, follow these steps:

1. Define the Enterprise-Value-file

In the example value, the following applications are enabled:

  • opa-gatekeeper
  • rook-ceph
  • harbor
  • kubeops-dashboard

All other applications are disabled and will not be updated. The value parameters are explained in the references, which can be found here.

apiVersion: kubeops/kubeopsctl/enterprise/beta/v1
deleteNs: false
localRegistry: false
packages:
- name: opa-gatekeeper
  enabled: true
  values:
    standard:
      namespace: opa-gatekeeper
    advanced:
- name: rook-ceph
  enabled: true
  values:
    standard:
      namespace: rook-ceph
      cluster:
        resources:
          mgr:
            requests:
              cpu: "500m"
              memory: "512Mi"
          mon:
            requests:
              cpu: "1"
              memory: "1Gi"
          osd:
            requests:
              cpu: "1"
              memory: "1Gi"
        dashboard:
          enabled: true
      operator:
        data:
          rookLogLevel: "DEBUG"
- name: harbor
  enabled: true
  values:
    standard:
      namespace: harbor
      harborpass: "topsecret"
      databasePassword: "topsecret"
      redisPassword: "topsecret"
      externalURL: http://10.2.10.110:30002
      nodePort: 30002
      hostname: harbor.local
      harborPersistence:
        persistentVolumeClaim:
          registry:
            size: 40Gi
            storageClass: "rook-cephfs"
          jobservice:
            jobLog:
              size: 1Gi
              storageClass: "rook-cephfs"
          database:
            size: 1Gi
            storageClass: "rook-cephfs"
          redis:
            size: 1Gi
            storageClass: "rook-cephfs"
          trivy: 
            size: 5Gi
            storageClass: "rook-cephfs"
    advanced:
- name: kubeops-dashboard
  enabled: true
  values:
    standard:
      namespace: monitoring
      hostname: kubeops-dashboard.local
      service:
        nodePort: 30007
    advanced:
- name: filebeat-os
  enabled: false
  values:
    standard:
      namespace: logging
    advanced:

2. Pull the KubeOps Compliance Applications packages
To pull the required application packages in the correct version for the release, use one of the following commands:

kubeopsctl pull -f enterprise-values.yaml

or

kubeopsctl pull --tools enterprise-values.yaml

3. The KubeOps Compliance Application update process
If you only want to update the tools, make sure the changeCluster flag in your cluster-values.yaml is set to false.

# file cluster-values.yaml
apiVersion: kubeops/kubeopsctl/cluster/beta/v1
imagePullRegistry: registry.kubeops.net/kubeops/kubeops
airgap: false                       # -> important
clusterName: myCluster
clusterUser: root
kubernetesVersion: 1.31.6         
kubeVipEnabled: false
virtualIP: 10.2.10.110
firewall: nftables
pluginNetwork: calico
containerRuntime: containerd
kubeOpsRoot: /home/myuser/kubeops
serviceSubnet: 192.168.128.0/17
podSubnet: 192.168.0.0/17
debug: true
systemCpu: 250m
systemMemory: 256Mi
packageRepository: https://packagerepo.kubeops.net/
changeCluster: false                # -> important
zones:
- name: zone1
  nodes:
  - name: demo-controlplane01
    iPAddress: 10.2.10.110
    type: controlplane
    kubeVersion: 1.31.6       
  - name: demo-worker01
    iPAddress: 10.2.10.210
    type: worker
    kubeVersion: 1.31.6       
- name: zone2
  nodes:
  - name: demo-controlplane02
    iPAddress: 10.2.10.120
    type: controlplane
    kubeVersion: 1.31.6       
  - name: demo-worker02
    iPAddress: 10.2.10.220
    type: worker
    kubeVersion: 1.30.8       
- name: zone3
  nodes:
  - name: demo-controlplane03
    iPAddress: 10.2.10.130
    type: controlplane
    kubeVersion: 1.31.6       
  - name: demo-worker03
    iPAddress: 10.2.10.230
    type: worker
    kubeVersion: 1.31.6        

4. Validate your values and update the KubeOps Compliance Applications

Once you have finished defining your values, review them once more. When you are ready, start the update with the command:

kubeopsctl apply -f cluster-values.yaml -f enterprise-values.yaml

Example 2: Updating Applications in an airgap-environment

To update the KubeOps Compliance Applications in an existing cluster, follow these steps:

1. Define the Enterprise-Value-file

In the example value, the following applications are enabled:

  • opa-gatekeeper
  • rook-ceph
  • harbor
  • kubeops-dashboard

All other applications are disabled and will not be updated. The value parameters are explained in the references, which can be found here.

apiVersion: kubeops/kubeopsctl/enterprise/beta/v1
deleteNs: false
localRegistry: true             # important for airgap, otherwise images are pulled from public registry
packages:
- name: opa-gatekeeper
  enabled: true
  values:
    standard:
      namespace: opa-gatekeeper
    advanced:
- name: rook-ceph
  enabled: true
  values:
    standard:
      namespace: rook-ceph
      cluster:
        resources:
          mgr:
            requests:
              cpu: "500m"
              memory: "512Mi"
          mon:
            requests:
              cpu: "1"
              memory: "1Gi"
          osd:
            requests:
              cpu: "1"
              memory: "1Gi"
        dashboard:
          enabled: true
      operator:
        data:
          rookLogLevel: "DEBUG"
- name: harbor
  enabled: true
  values:
    standard:
      namespace: harbor
      harborpass: "topsecret"
      databasePassword: "topsecret"
      redisPassword: "topsecret"
      externalURL: http://10.2.10.110:30002
      nodePort: 30002
      hostname: harbor.local
      harborPersistence:
        persistentVolumeClaim:
          registry:
            size: 40Gi
            storageClass: "rook-cephfs"
          jobservice:
            jobLog:
              size: 1Gi
              storageClass: "rook-cephfs"
          database:
            size: 1Gi
            storageClass: "rook-cephfs"
          redis:
            size: 1Gi
            storageClass: "rook-cephfs"
          trivy: 
            size: 5Gi
            storageClass: "rook-cephfs"
    advanced:
- name: kubeops-dashboard
  enabled: true
  values:
    standard:
      namespace: monitoring
      hostname: kubeops-dashboard.local
      service:
        nodePort: 30007
    advanced:
- name: filebeat-os
  enabled: false
  values:
    standard:
      namespace: logging
    advanced:

2. Update kubeopsctl

If you have an older kubeopsctl version installed, update it using the following commands.

# apt (Debian-based systems); versions: https://packagerepo.kubeops.net/deb/pool/main/
sudo apt update
sudo apt install -y kubeopsctl=<kubeopsctl-version>

# dnf (RPM-based systems); versions: https://packagerepo.kubeops.net/rpm/
sudo dnf install -y --disableexcludes=kubeops-repo <kubeopsctl-version>

# manual .deb install; versions: https://packagerepo.kubeops.net/deb/pool/main/
wget https://packagerepo.kubeops.net/deb/pool/main/<kubeopsctl-version>.deb
sudo dpkg --install <kubeopsctl-version>.deb

# manual .rpm install; versions: https://packagerepo.kubeops.net/rpm
sudo rpm -e kubeopsctl
wget https://packagerepo.kubeops.net/rpm/<kubeopsctl-version>.rpm
sudo rpm --install -v <kubeopsctl-version>.rpm

3. Pull the KubeOps Compliance Applications packages
To pull the required application packages in the correct version for the release, use one of the following commands:

kubeopsctl pull -f enterprise-values.yaml

or

kubeopsctl pull --tools enterprise-values.yaml

4. The KubeOps Compliance Application update process
If you only want to update the tools, make sure the changeCluster flag in your cluster-values.yaml is set to false.

# file cluster-values.yaml
apiVersion: kubeops/kubeopsctl/cluster/beta/v1
imagePullRegistry: registry.kubeops.net/kubeops/kubeops
airgap: true                          # -> important
clusterName: myCluster
clusterUser: root
kubernetesVersion: 1.31.6         
kubeVipEnabled: false
virtualIP: 10.2.10.110
firewall: nftables
pluginNetwork: calico
containerRuntime: containerd
kubeOpsRoot: /home/myuser/kubeops
serviceSubnet: 192.168.128.0/17
podSubnet: 192.168.0.0/17
debug: true
systemCpu: 250m
systemMemory: 256Mi
packageRepository: local
changeCluster: false                # -> important
zones:
- name: zone1
  nodes:
  - name: demo-controlplane01
    iPAddress: 10.2.10.110
    type: controlplane
    kubeVersion: 1.31.6       
  - name: demo-worker01
    iPAddress: 10.2.10.210
    type: worker
    kubeVersion: 1.31.6       
- name: zone2
  nodes:
  - name: demo-controlplane02
    iPAddress: 10.2.10.120
    type: controlplane
    kubeVersion: 1.31.6       
  - name: demo-worker02
    iPAddress: 10.2.10.220
    type: worker
    kubeVersion: 1.30.8       
- name: zone3
  nodes:
  - name: demo-controlplane03
    iPAddress: 10.2.10.130
    type: controlplane
    kubeVersion: 1.31.6       
  - name: demo-worker03
    iPAddress: 10.2.10.230
    type: worker
    kubeVersion: 1.31.6        

5. Validate your values and update the KubeOps Compliance Applications

Once you have finished defining your values, review them once more. When you are ready, start the update with the command:

kubeopsctl apply -f cluster-values.yaml -f enterprise-values.yaml

7 - Harbor Deployment with CloudNativePG

Here is a brief overview of Harbor Deployment with CloudNativePG on Kubernetes using Kosi

Harbor Deployment with CloudNativePG on Kubernetes using Kosi

This guide walks through deploying Harbor on Kubernetes using a CloudNativePG (CNPG) PostgreSQL cluster managed by the CloudNativePG operator, installed via Kosi.

Precondition: Log in to the preprod environment with Kosi before beginning.

Step 1 — Install CloudNativePG operator

Deploy the operator with Kosi:

kosi install --hub kubeops kubeops/cloudnative-pg-operator:1.28.1 --dname cnpg-operator

This step installs the CloudNativePG operator into the cluster; the operator then manages PostgreSQL clusters and their lifecycle.

Step 2 — Create PostgreSQL cluster for Harbor

1. Apply the following Cluster manifest to create a Postgres cluster with 2 instances and 1Gi storage:
cat <<EOF | kubectl apply -f -
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cloudnative-pg
  namespace: harbor
spec:
  instances: 2
  imagePullSecrets:
  - name: registry-pullsecret
  storage:
    size: 1Gi
EOF
2. The following Services are created for the cluster cloudnative-pg:

  • cloudnative-pg-rw → primary (read/write)
  • cloudnative-pg-ro → replicas (read-only)
  • cloudnative-pg-r → all pods

3. Verify pods are Running:
kubectl get pods -n harbor

Step 3 — Retrieve application user credentials

CNPG automatically creates a Secret named cloudnative-pg-app in the harbor namespace.

1. Inspect it:
kubectl get secret cloudnative-pg-app -n harbor
2. Decode the base64-encoded fields:
kubectl get secret cloudnative-pg-app -n harbor -o jsonpath="{.data.username}" | base64 -d 
kubectl get secret cloudnative-pg-app -n harbor -o jsonpath="{.data.password}" | base64 -d 
kubectl get secret cloudnative-pg-app -n harbor -o jsonpath="{.data.dbname}" | base64 -d

Example values (for illustration only):

username: app
password: Hw2t7hXuKPfZrVjVDwCc4PeKTevlB7ORmzQeW50JtEqiwHl40xkxuhVHeRIU3fX2
database: app
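As a quick plausibility check, the decoded fields can be assembled into a libpq-style connection URI. This is a sketch using the illustrative values above; the -rw service host is the one referenced in Step 4.

```shell
# Illustrative values only -- substitute the decoded fields from your own Secret.
DB_USER=app
DB_NAME=app
DB_HOST=cloudnative-pg-rw.harbor.svc.cluster.local
# From inside the cluster, running psql against this URI would verify connectivity.
URI="postgresql://${DB_USER}@${DB_HOST}:5432/${DB_NAME}"
echo "$URI"
```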

Important:
Use the non-superuser application credentials from this Secret in Harbor’s configuration.

Step 4 — Update Harbor tools.yaml for an external database

Edit your tools.yaml and set Harbor values under the helm chart configuration.
Example snippet:

- name: harbor
  enabled: true
  values:
    standard:
      namespace: harbor
      harborpass: "password"
      databasePassword: "<DB_PASSWORD>"
      redisPassword: "Redis_Password"
      externalURL: <your_domain_name>
      nodePort: 30002
      hostname: <your_domain_name>
      harborPersistence:
        persistentVolumeClaim:
          registry:
            size: 40Gi
            storageClass: "rook-cephfs"
          jobservice:
            jobLog:
              size: 1Gi
              storageClass: "rook-cephfs"
          database:
            size: 1Gi
            storageClass: "rook-cephfs"
          redis:
            size: 1Gi
            storageClass: "rook-cephfs"
          trivy: 
            size: 5Gi
            storageClass: "rook-cephfs"
    advanced:
      database:
        type: external
        external:
          host: "cloudnative-pg-rw.harbor.svc.cluster.local"
          port: "5432"
          username: "app"
          password: "Hw2t7hXuKPfZrVjVDwCc4PeKTevlB7ORmzQeW50JtEqiwHl40xkxuhVHeRIU3fX2"
          coreDatabase: "app"

Important: Use the -rw service host (cloudnative-pg-rw…) for write operations.
Do not use a superuser account.
Ensure the password matches the CNPG Secret.

Step 5 — Install Harbor with Kosi

  1. Deploy Harbor using the updated tools.yaml:
kosi install --hub kubeops kubeops/harbor:2.0.3 -f tools.yaml --dname harbor
  2. Verify Harbor pods:
kubectl get pods -n harbor
  3. Access Harbor at: <your_domain_name>:30002 (or as configured)

8 - Ingress Configuration

Here is a brief overview of how you can configure your ingress manually.

Manual configuration of the Nginx-Ingress-Controller

Right now the Ingress Controller Package is not fully configured. To make complete use of the Ingress capabilities of the cluster, the user needs to manually update some of the settings of the corresponding service.

Locating the service

The service in question is called “ingress-nginx-controller” and can be found in the same namespace as the ingress package itself. To locate the service across all namespaces, you can use the following command:

kubectl get service -A | grep ingress-nginx-controller

This command should return two services, “ingress-nginx-controller” and “ingress-nginx-controller-admission”, though only the first one needs further adjustment.

Setting the Ingress-Controller service to type NodePort

To edit the service, you can use the following command, although the actual namespace may be different. This will change the service type to NodePort.

kubectl patch service ingress-nginx-controller -n kubeops -p '{"spec":{"type":"NodePort"}}'

Kubernetes will now automatically assign unused port numbers for the nodePort to allow HTTP and HTTPS connections to the service. These can be retrieved by running the same command used to locate the service. Alternatively, you can use the following command, which assigns the port numbers 30080 and 30443 to the respective protocols. If you do so, make sure that these port numbers are not in use by any other NodePort service.

kubectl patch service ingress-nginx-controller -n kubeops --type=json -p '[{"op":"replace","path":"/spec/type","value":"NodePort"}, {"op":"add","path":"/spec/ports/0/nodePort","value":30080}, {"op":"add","path":"/spec/ports/1/nodePort","value":30443}]'

Configuring external IPs

If you have access to external IPs that route to one or more cluster nodes, you can expose your Kubernetes Services of any type through these addresses. The command below shows how to add an external IP address to the service, using the example value “192.168.0.1”. Keep in mind that this value has to be changed to fit your network settings.

kubectl patch service ingress-nginx-controller -n kubeops -p '{"spec":{"externalIPs":["192.168.0.1"]}}'

9 - Accessing Dashboards

A brief overview of how you can access dashboards.

Accessing Dashboards installed with KubeOps

To access an application dashboard, an SSH tunnel to one of the control planes is needed. The following dashboards are available and are configured with the following NodePorts by default:

NodePort

32090 (if not set otherwise in the enterprise-values.yaml)

Connecting to the Dashboard

To connect to the dashboard, an SSH tunnel has to be established. There are various tools for this, such as the command line, PuTTY, or MobaXterm.
To establish a tunnel, forward the NodePort of the dashboard on one of the control planes to the local machine. The dashboard can then be accessed at localhost:<NodePort>.

Initial login credentials

No credentials are necessary for login

NodePort

30211 (if not set otherwise in the enterprise-values.yaml)

Connecting to the Dashboard

To connect to the dashboard, an SSH tunnel has to be established. There are various tools for this, such as the command line, PuTTY, or MobaXterm.
To establish a tunnel, forward the NodePort of the dashboard on one of the control planes to the local machine. The dashboard can then be accessed at localhost:<NodePort>.

Initial login credentials

  • username: the username set in the enterprise-values.yaml of Prometheus (default: user)
  • password: the password set in the enterprise-values.yaml of Prometheus (default: password)

NodePort

30050 (if not set otherwise in the enterprise-values.yaml)

Connecting to the Dashboard

To connect to the dashboard, an SSH tunnel has to be established. There are various tools for this, such as the command line, PuTTY, or MobaXterm.
To establish a tunnel, forward the NodePort of the dashboard on one of the control planes to the local machine. The dashboard can then be accessed at localhost:<NodePort>.

Initial login credentials

  • username: admin
  • password: Password@@123456

NodePort

  • https: 30003

Connecting to the Dashboard

To connect to the dashboard, an SSH tunnel has to be established. There are various tools for this, such as the command line, PuTTY, or MobaXterm.
To establish a tunnel, forward the NodePort of the dashboard on one of the control planes to the local machine. The dashboard can then be accessed at localhost:<NodePort>.

Initial login credentials

  • username: admin
  • password: the password set in the kubeopsvalues.yaml for the cluster creation (default: password)

NodePort

The Rook/Ceph Dashboard has no fixed NodePort yet. To find out the NodePort used by Rook/Ceph follow these steps:

  1. List the Services in the KubeOps namespace:
kubectl get svc -n kubeops
  2. Find the line with the service rook-ceph-mgr-dashboard-external-http:
NAME                                      TYPE        CLUSTER-IP        EXTERNAL-IP   PORT(S)                                     AGE
rook-ceph-mgr-dashboard-external-http     NodePort    192.168.197.13    <none>        7000:31268/TCP                              21h

Or use:

echo $(kubectl get --namespace rook-ceph -o jsonpath="{.spec.ports[0].nodePort}" services rook-ceph-mgr-dashboard-external-http)

In the example above the NodePort to connect to Rook/Ceph would be 31268.

Connecting to the Dashboard

To connect to the dashboard, an SSH tunnel has to be established. There are various tools for this, such as the command line, PuTTY, or MobaXterm.
To establish a tunnel, forward the NodePort of the dashboard on one of the control planes to the local machine. The dashboard can then be accessed at localhost:<NodePort>/ceph-dashboard/.

Initial login credentials

echo Username: admin
echo Password: $(kubectl get secret rook-ceph-dashboard-password -n rook-ceph --template={{.data.password}} | base64 -d)

NodePort

30007 (if not set otherwise in the enterprise-values.yaml)

Connecting to the Dashboard

To connect to the dashboard, an SSH tunnel has to be established. There are various tools for this, such as the command line, PuTTY, or MobaXterm.
To establish a tunnel, forward the NodePort of the dashboard on one of the control planes to the local machine. The dashboard can then be accessed at localhost:<NodePort>.

Initial login credentials

kubectl -n monitoring create token headlamp-admin

NodePort

30180

Connecting to the Dashboard

To connect to the dashboard, an SSH tunnel has to be established. There are various tools for this, such as the command line, PuTTY, or MobaXterm.
To establish a tunnel, forward the NodePort of the dashboard on one of the control planes to the local machine. The dashboard can then be accessed at localhost:<NodePort>/.

Initial login credentials

echo Username: $(kubectl get secret --namespace keycloak keycloak-kubeops -o jsonpath="{.data.ADMIN_USER}" | base64 -d)
echo Password: $(kubectl get secret --namespace keycloak keycloak-kubeops -o jsonpath="{.data.ADMIN_PASSWORD}" | base64 -d)

Connecting to the Dashboard

To connect to one of the dashboards, an SSH tunnel has to be established. There are various tools for this, such as the command line, PuTTY, or MobaXterm.
To establish a tunnel, forward the NodePort of the dashboard on one of the control planes to the local machine. After that, you can access the dashboard as described in the information panel of each dashboard above.
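As a concrete sketch of the tunnel step (the IP address, user, and NodePort below are example values, not fixed defaults), the snippet only prints the SSH invocation you would run from your local machine:

```shell
# Example values only -- substitute one of your control-plane IPs and the
# NodePort of the dashboard you want to reach.
CONTROL_PLANE=10.2.10.110
NODE_PORT=30007
# -N: run no remote command; -L: forward the local port to the NodePort on the node.
CMD="ssh -N -L ${NODE_PORT}:localhost:${NODE_PORT} root@${CONTROL_PLANE}"
echo "$CMD"
```

While the tunnel is running, the dashboard is reachable in a local browser at localhost:30007.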

Connecting to the Dashboard via DNS

To connect to the dashboards via DNS, the /etc/hosts file needs the following additional entries:

10.2.10.11 kubeops-dashboard.local
10.2.10.11 harbor.local
10.2.10.11 keycloak.local
10.2.10.11 opensearch.local
10.2.10.11 grafana.local
10.2.10.11 rook-ceph.local

10 - Change the OpenSearch Password

Detailed instructions on how to change the OpenSearch password.

Changing a User Password in OpenSearch

This guide explains how to change a user password in OpenSearch with SecurityConfig enabled and an external Kubernetes Secret for user credentials.

Steps to Change the Password Using an External Secret

Prerequisites

  • Access to the Kubernetes cluster where OpenSearch is deployed.
  • Permissions to view and modify secrets in the relevant namespace.

Step 1: Generate a New Password Hash

Execute the command below (replacing the placeholders) to generate a hashed version of your new password:

kubectl exec -it <opensearch_pod_name> -n <opensearch_pod_namespace> -- bash -c "sh /usr/share/opensearch/plugins/opensearch-security/tools/hash.sh -p <new_password>"

Step 2: Extract the Existing Secret and Update internal_users.yaml

Retrieve the existing secret containing internal_users.yml. The secret stores the configuration in base64 encoding, so extract and decode it:

kubectl get secrets -n <opensearch_pod_namespace> internal-users-config-secret -o jsonpath='{.data.internal_users\.yml}' | base64 -d > internal_users.yaml

Open the exported file internal_users.yaml. Find the entry for the user whose password you want to change and replace the previous password hash with the new hash you generated in step 1. Then save the file.

Step 3: Patch the Secret with Updated internal_users.yml Data and Restart the OpenSearch Pods

Encode the updated internal_users.yaml and apply it back to the secret.

cat internal_users.yaml | base64 -w 0 | xargs -I {} kubectl patch secret -n <opensearch_pod_namespace> internal-users-config-secret --patch '{"data": {"internal_users.yml": "{}"}}'
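If the xargs pipeline above is hard to follow, the same patch document can be built in two explicit steps. The sketch below uses a throwaway stand-in file so the resulting JSON is visible; with your real internal_users.yaml you would then pass the document to kubectl patch secret -n <opensearch_pod_namespace> internal-users-config-secret --patch "$PATCH".

```shell
# Throwaway stand-in for the real internal_users.yaml (illustration only).
printf 'test' > /tmp/internal_users_demo.yaml
# Encode without line wrapping so the JSON document stays on one line.
B64=$(base64 -w 0 < /tmp/internal_users_demo.yaml)
# Build the JSON merge patch that kubectl expects.
PATCH="{\"data\": {\"internal_users.yml\": \"${B64}\"}}"
echo "$PATCH"
```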

Restart the OpenSearch pods to use the updated secret.

kubectl rollout restart statefulset opensearch-cluster-master -n <opensearch_pod_namespace>

Step 4: Copy internal_users.yaml into the Container

You can now copy the modified internal_users.yaml into the container with this command:

kubectl cp internal_users.yaml -n <opensearch_pod_namespace> <opensearch_pod_name>:/usr/share/opensearch/config/opensearch-security/internal_users.yml

Step 5: Run securityadmin.sh to Apply the Changes

Run securityadmin.sh to apply the updated security configuration; this completes the password update and ensures the change persists across OpenSearch pods.

kubectl exec -it <opensearch_pod_name> -n <opensearch_pod_namespace> -- bash -c "\
    sh /usr/share/opensearch/plugins/opensearch-security/tools/securityadmin.sh \
    -cd /usr/share/opensearch/config/opensearch-security/ \
    -icl -nhnv \
    -cacert /usr/share/opensearch/config/root-ca.pem \
    -cert /usr/share/opensearch/config/kirk.pem \
    -key /usr/share/opensearch/config/kirk-key.pem"

11 - Backup and restore

In this article, we look at the backup procedure with Velero.

Backup and restoring artifacts

What is Velero?

Velero uses object storage to store backups and associated artifacts. It can also optionally integrate with supported block storage systems to snapshot your persistent volumes. Before beginning the installation process, identify the object storage provider and optional block storage provider(s) you will be using from the list of compatible providers.

Velero supports storage providers for both cloud-provider environments and on-premises environments.

Velero prerequisites:

  • Access to a Kubernetes cluster, v1.16 or later, with DNS and container networking enabled.
  • kubectl installed locally
  • Object Storage (S3, Cloud Provider Environment, On-Premises Environment)

Install Velero

This command is an example of how you can install Velero into your cluster:

velero install --provider aws --plugins velero/velero-plugin-for-aws:v1.2.1 --bucket velero --secret-file ./credentials-velero --use-volume-snapshots=false --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://minio.velero.svc:9000

NOTE:

  • s3Url has to be the URL of your S3 object storage endpoint.
  • example for credentials-velero file:
    [default]
    aws_access_key_id = your_s3_storage_username
    aws_secret_access_key = your_s3_storage_password
    

Backup the cluster

Scheduled Backups

This command creates a backup of the cluster every 6 hours (the cron expression "0 */6 * * *" means: at minute 0 of every 6th hour):

velero schedule create cluster --schedule "0 */6 * * *"

Get Schedules

This command lists all schedules for backups:

velero schedule get

Delete Schedules

This command deletes the specified schedule:

velero schedule delete cluster

Restore Scheduled Backup

This command restores a backup created by a schedule:

velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>
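Velero names backups created by a schedule <schedule name>-<timestamp>. The sketch below only prints the resulting restore command for the “cluster” schedule; the timestamp shown is hypothetical, so copy the real backup name from velero backup get.

```shell
SCHEDULE=cluster
TIMESTAMP=20240101120000   # hypothetical; take the real value from "velero backup get"
CMD="velero restore create --from-backup ${SCHEDULE}-${TIMESTAMP}"
echo "$CMD"
```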

Backup

This command creates a backup of the cluster:

velero backup create cluster

Get Backups

This command lists all created backups:

velero backup get

Delete Backups

This command deletes the specified backup:

velero backup delete <BACKUP NAME>

Restore Backup

This command restores the specified backup:

velero restore create <RESTORE NAME> --from-backup <BACKUP NAME>

Backup a specific deployment

Filebeat

Scheduled Backups

This command creates a backup for the namespace “logging” every 6 hours:

velero schedule create filebeat --schedule "0 */6 * * *" --include-namespaces logging --include-cluster-resources=true

This command creates a backup for the deployment “filebeat” every 6 hours:

velero schedule create filebeat --schedule "0 */6 * * *" --include-namespaces logging --selector app=filebeat-filebeat,release=filebeat --include-resources serviceaccount,deployment,daemonset,configmap,clusterrolebinding,clusterrole --include-cluster-resources=true

Get Schedules

This command lists all schedules for backups:

velero schedule get

Delete Schedules

This command deletes the specified schedule:

velero schedule delete filebeat

Restore Scheduled Backup

This command restores the backup from a schedule:

velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>

Backup

This command creates a backup for the namespace “logging”:

velero backup create filebeat --include-namespaces logging --include-cluster-resources=true

This command creates a backup for the deployment “filebeat”:

velero backup create filebeat --include-namespaces logging --selector app=filebeat-filebeat,release=filebeat --include-resources serviceaccount,deployment,daemonset,configmap,clusterrolebinding,clusterrole --include-cluster-resources=true

Get Backups

This command lists all created backups:

velero backup get

Delete Backups

This command deletes the specified backup:

velero backup delete <BACKUP NAME>

Restore Backup

This command restores the specified backup:

velero restore create <RESTORE NAME> --from-backup <BACKUP NAME>

Logstash

Scheduled Backups

This command creates a backup for the namespace “logging” every 6 hours:

velero schedule create logstash --schedule "0 */6 * * *" --include-namespaces logging --include-cluster-resources=true

This command creates a backup for the deployment “logstash” every 6 hours:

velero schedule create logstash --schedule "0 */6 * * *" --include-namespaces logging --selector app=logstash-logstash,chart=logstash,release=logstash --include-resources StatefulSet,ServiceAccount,Service,Secret,RoleBinding,Role,PodSecurityPolicy,PodDisruptionBudget,Ingress,ConfigMap --include-cluster-resources=true

Get Schedules

This command lists all schedules for backups:

velero schedule get

Delete Schedules

This command deletes the specified schedule:

velero schedule delete logstash

Restore Scheduled Backup

This command restores the backup from a schedule:

velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>

Backup

This command creates a backup for the namespace “logging”:

velero backup create logstash --include-namespaces logging --include-cluster-resources=true

This command creates a backup for the deployment “logstash”:

velero backup create logstash --include-namespaces logging --selector app=logstash-logstash,chart=logstash,release=logstash --include-resources StatefulSet,ServiceAccount,Service,Secret,RoleBinding,Role,PodSecurityPolicy,PodDisruptionBudget,Ingress,ConfigMap --include-cluster-resources=true

Get Backups

This command lists all created backups:

velero backup get

Delete Backups

This command deletes the specified backup:

velero backup delete <BACKUP NAME>

Restore Backup

This command restores the specified backup:

velero restore create <RESTORE NAME> --from-backup <BACKUP NAME>

OpenSearch

Scheduled Backups

This command creates a backup for the namespace “logging” every 6 hours:

velero schedule create opensearch --schedule "0 */6 * * *" --include-namespaces logging --include-cluster-resources=true

This command creates a backup for the deployment “opensearch” every 6 hours:

velero schedule create opensearch --schedule "0 */6 * * *" --include-namespaces logging --selector app.kubernetes.io/instance=opensearch,app.kubernetes.io/name=opensearch --include-resources ConfigMap,Ingress,NetworkPolicy,PodDisruptionBudget,PodSecurityPolicy,Role,RoleBinding,Secret,Service,ServiceAccount,StatefulSet --include-cluster-resources=true

Get Schedules

This command lists all schedules for backups:

velero schedule get

Delete Schedules

This command deletes the specified schedule:

velero schedule delete opensearch

Restore Scheduled Backup

This command restores the backup from a schedule:

velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>

Backup

This command creates a backup for the namespace “logging”:

velero backup create opensearch --include-namespaces logging --include-cluster-resources=true

This command creates a backup for the deployment “opensearch”:

velero backup create opensearch --include-namespaces logging --selector app.kubernetes.io/instance=opensearch,app.kubernetes.io/name=opensearch --include-resources ConfigMap,Ingress,NetworkPolicy,PodDisruptionBudget,PodSecurityPolicy,Role,RoleBinding,Secret,Service,ServiceAccount,StatefulSet --include-cluster-resources=true

Get Backups

This command lists all created backups:

velero backup get

Delete Backups

This command deletes the specified backup:

velero backup delete <BACKUP NAME>

Restore Backup

This command restores the specified backup:

velero restore create <RESTORE NAME> --from-backup <BACKUP NAME>

Prometheus

Scheduled Backups

This command creates a backup for the namespace “monitoring” every 6 hours:

velero schedule create prometheus --schedule "0 */6 * * *" --include-namespaces monitoring --include-cluster-resources=true

This command creates a backup for the deployment “prometheus” every 6 hours:

velero schedule create prometheus --schedule "0 */6 * * *" --include-namespaces monitoring --include-resources Alertmanager,Secret,Ingress,List,PodDisruptionBudget,Role,RoleBinding,PodSecurityPolicy,Service,ServiceAccount,ServiceMonitor,Endpoints,ConfigMap,ConfigMapList,ClusterRole,ClusterRoleBinding,SecretProviderClass,PodMonitor,Prometheus,Job,NetworkPolicy,MutatingWebhookConfiguration,ValidatingWebhookConfiguration,Issuer,Deployment,VerticalPodAutoscaler,ThanosRuler --include-cluster-resources=true

Get Schedules

This command lists all schedules for backups:

velero schedule get

Delete Schedules

This command deletes the specified schedule:

velero schedule delete prometheus

Restore Scheduled Backup

This command restores the backup from a schedule:

velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>

Backup

This command creates a backup for the namespace “monitoring”:

velero backup create prometheus --include-namespaces monitoring --include-cluster-resources=true

This command creates a backup for the deployment “prometheus”:

velero backup create prometheus --include-namespaces monitoring --include-resources Alertmanager,Secret,Ingress,List,PodDisruptionBudget,Role,RoleBinding,PodSecurityPolicy,Service,ServiceAccount,ServiceMonitor,Endpoints,ConfigMap,ConfigMapList,ClusterRole,ClusterRoleBinding,SecretProviderClass,PodMonitor,Prometheus,Job,NetworkPolicy,MutatingWebhookConfiguration,ValidatingWebhookConfiguration,Issuer,Deployment,VerticalPodAutoscaler,ThanosRuler --include-cluster-resources=true

Get Backups

This command lists all created backups:

velero backup get

Delete Backups

This command deletes the specified backup:

velero backup delete <BACKUP NAME>

Restore Backup

This command restores the specified backup:

velero restore create <RESTORE NAME> --from-backup <BACKUP NAME>

Harbor

Scheduled Backups

This command creates a backup for the namespace “harbor” every 6 hours:

velero schedule create harbor --schedule "0 */6 * * *" --include-namespaces harbor --include-cluster-resources=true

This command creates a backup for the deployment “harbor” every 6 hours:

velero schedule create harbor --schedule "0 */6 * * *" --include-namespaces harbor --include-resources ConfigMap,Deployment,PersistentVolumeClaim,Secret,Service,StatefulSet,Ingress,ServiceMonitor --include-cluster-resources=true

Get Schedules

This command lists all schedules for backups:

velero schedule get

Delete Schedules

This command deletes the specified schedule:

velero schedule delete harbor

Restore Scheduled Backup

This command restores the backup from a schedule:

velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>

Backup

This command creates a backup for the namespace “harbor”:

velero backup create harbor --include-namespaces harbor --include-cluster-resources=true

This command creates a backup for the deployment “harbor”:

velero backup create harbor --include-namespaces harbor --include-resources ConfigMap,Deployment,PersistentVolumeClaim,Secret,Service,StatefulSet,Ingress,ServiceMonitor --include-cluster-resources=true

Get Backups

This command lists all created backups:

velero backup get

Delete Backups

This command deletes the specified backup:

velero backup delete <BACKUP NAME>

Restore Backup

This command restores the specified backup:

velero restore create <RESTORE NAME> --from-backup <BACKUP NAME>

Gatekeeper

Scheduled Backups

This command creates a backup for the namespace “gatekeeper-system” every 6 hours:

velero schedule create gatekeeper --schedule "0 */6 * * *" --include-namespaces gatekeeper-system --include-cluster-resources=true

This command creates a backup for the deployment “gatekeeper” every 6 hours:

velero schedule create gatekeeper --schedule "0 */6 * * *" --include-namespaces gatekeeper-system --include-resources PodSecurityPolicy,ServiceAccount,Deployment,PodDisruptionBudget,ResourceQuota,ClusterRole,Role,ClusterRoleBinding,RoleBinding,MutatingWebhookConfiguration,ValidatingWebhookConfiguration,Secret,Service,Job --include-cluster-resources=true

Get Schedules

This command lists all schedules for backups:

velero schedule get

Delete Schedules

This command deletes the specified schedule:

velero schedule delete gatekeeper

Restore Scheduled Backup

This command restores the backup from a schedule:

velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>

Backup

This command creates a backup for the namespace “gatekeeper-system”:

velero backup create gatekeeper --include-namespaces gatekeeper-system --include-cluster-resources=true

This command creates a backup for the deployment “gatekeeper”:

velero backup create gatekeeper --include-namespaces gatekeeper-system --include-resources PodSecurityPolicy,ServiceAccount,Deployment,PodDisruptionBudget,ResourceQuota,ClusterRole,Role,ClusterRoleBinding,RoleBinding,MutatingWebhookConfiguration,ValidatingWebhookConfiguration,Secret,Service,Job --include-cluster-resources=true

Get Backups

This command lists all created backups:

velero backup get

Delete Backups

This command deletes the specified backup:

velero backup delete <BACKUP NAME>

Restore Backup

This command restores the specified backup:

velero restore create <RESTORE NAME> --from-backup <BACKUP NAME>

Rook-Ceph

Scheduled Backups

This command creates a backup for the namespace “rook-ceph” every 6 hours:

velero schedule create rook-ceph --schedule "0 */6 * * *" --include-namespaces rook-ceph --include-cluster-resources=true

Get Schedules

This command lists all schedules for backups:

velero schedule get

Delete Schedules

This command deletes the specified schedule:

velero schedule delete rook-ceph

Restore Scheduled Backup

This command restores the backup from a schedule:

velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>

Backup

This command creates a backup for the namespace “rook-ceph”:

velero backup create rook-ceph --include-namespaces rook-ceph --include-cluster-resources=true

Get Backups

This command lists all created backups:

velero backup get

Delete Backups

This command deletes the specified backup:

velero backup delete <BACKUP NAME>

Restore Backup

This command restores the specified backup:

velero restore create <RESTORE NAME> --from-backup <BACKUP NAME>

Restore databases

Keycloak

  1. Create a backup of the keycloak namespace. In this example the backup is called keycloak1:

velero backup create keycloak1 --include-namespaces keycloak --include-cluster-resources=true

  2. Restore the backup, in this example keycloak1:

velero restore create keycloak1 --from-backup keycloak1

  3. Restore the database dump:

kubectl -n <keycloak-namespace> exec keycloak-postgres-0 -- pg_restore -v --jobs=4 --clean --if-exists -d bitnami_keycloak /backup/keycloak-db.dump
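After the restore, you can check that the restore completed and that the database pod is back; a sketch, assuming the namespace is called keycloak:

```shell
# Check the status of the restore
velero restore get
# Verify the Keycloak and Postgres pods are running again
kubectl -n keycloak get pods
```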

12 - Add certificate as trusted

This section outlines the process for adding a certificate as trusted by downloading it from the browser and installing it in the Trusted Root Certification Authorities on Windows or Linux systems.

1. Download the certificate

Chrome

  1. As soon as Chrome issues a certificate warning, click on Not secure to the left of the address bar.
  2. Show the certificate (Click on Certificate is not valid).
  3. Go to Details tab.
  4. Click Export... at the bottom and save the certificate.

Firefox

  1. As soon as Firefox issues a certificate warning, click on Advanced....
  2. View the certificate (Click on View Certificate).
  3. Scroll down to Miscellaneous and save the certificate.

2. Install the certificate

  1. Press Windows + R.
  2. Enter mmc and click OK.
  3. Click on File > Add/Remove snap-in....
  4. Select Certificates in the Available snap-ins list, click Add >, then OK to add the snap-in.
  5. In the tree pane, open Certificates - Current user > Trusted Root Certification Authorities, then right-click Certificates and select All tasks > Import....
  6. The Certificate Import Wizard opens here. Click on Next.
  7. Select the previously saved certificate and click Next.
  8. Click Next again in the next window.
  9. Click on Finish. If a warning pops up, click on Yes.
  10. The program can now be closed. Console settings do not need to be saved.
  11. Clear browser cache and restart browser.

The procedure for importing a certificate as trusted on a Linux system varies depending on the browser and distribution used. To manually make a self-signed certificate trusted on a Linux system:

  • RedHat: copy the certificate to /etc/pki/ca-trust/source/anchors/ and run update-ca-trust extract
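The RedHat procedure above can be sketched as the following two commands (the certificate filename my-cert.crt is an assumption):

```shell
# Copy the downloaded certificate into the system trust anchors
sudo cp my-cert.crt /etc/pki/ca-trust/source/anchors/
# Rebuild the consolidated system trust store
sudo update-ca-trust extract
```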

13 - Deploy Package On Cluster

This guide provides a simplified process for deploying packages in a Kubernetes cluster using Kosi with either the Helm or Kubectl plugin.

Deploying package on Cluster

You can install artifacts in your cluster in several ways. For this purpose, you can use these four plugins when creating a package:

  • helm
  • kubectl
  • cmd
  • kosi

As an example, this guide installs the nginx-ingress Ingress Controller.

Using the Helm-Plugin

Prerequisite

In order to install an artifact with the Helm plugin, the Helm chart must first be downloaded. This step is not covered in this guide.

Create KOSI package

First you need to create a KOSI package. The following command creates the necessary files in the current directory:

kosi create

The downloaded Helm chart must also be located in the current directory. To customize the deployment of the Helm chart, the values.yaml file must be edited. This file can be downloaded from ArtifactHub and must be placed in the same directory as the Helm chart.

All files required by a task in the package must be named in the package.kosi file under files. The container images required by the Helm chart must also be listed in the package.kosi under containers. In the example below, only two files are required for the installation: the Helm Chart for the nginx-ingress and the values.yaml to configure the deployment. To install nginx-ingress you will also need the nginx/nginx-ingress image with the tag 3.0.1.

To install nginx-ingress with the Helm plugin, call the plugin as shown in the example under install. The deployment configuration file is listed under values and the packed Helm chart is specified with the key tgz. Furthermore, it is also possible to specify the namespace in which the artifact should be deployed and the name of the deployment. The full documentation for the Helm plugin can be found here.

languageversion = "0.1.0";
apiversion = "kubernative/kubeops/sina/user/v4";
name = "deployExample1";
description = "It shows how to deploy an artifact to your cluster using the helm plugin.";
version = "0.1.0";
docs = "docs.tgz";
logo = "logo.png";

files =
{
        valuesFile = "values.yaml";
        nginxHelmChart="nginx-ingress-0.16.1.tgz";
}

containers =
{
        nginx = ["docker.io", "nginx/nginx-ingress", "3.0.1"];
}

install
{
        helm
        (
            command = "install";
            tgz = "nginx-ingress-0.16.1.tgz";
            values = "['values.yaml']";
            namespace = "dev";
            deploymentName = "nginx-ingress"
        );
}

Once the package.kosi file has been fully configured, all files must be combined into a KOSI package. To do this, execute the following command in the directory where the package.kosi file is located.

kosi build

To make the generated kosi package available on other machines, it is pushed to the user’s private KubeOps Hub. To do this, the user must first log in to the hub.

$ kosi login -u <username>
2023-02-04 11:19:43 Info:      KOSI version: 2.13.0_Alpha0
2023-02-04 11:19:43 Info:      Please enter password
****************
2023-02-04 11:19:26 Info:      Login Succeeded to Hub.
$ kosi push --hub kosi
2023-02-04 11:23:18 Info:      KOSI version: 2.13.0_Alpha0
2023-02-04 11:23:19 Info:      Push to Private Registry registry.preprod.kubeops.net/<username>/

Deployment

Once the KOSI package has been created and published, it needs to be installed on the Admin node. The following command will download and execute the package. The package name and version refer to the values defined in package.kosi with the keys name and version.

kosi install --hub <username> <username>/<packagename>:<version>

For the example package, the command would be: kosi install --hub <username> <username>/deployExample1:0.1.0.

Using the Kubectl-Plugin

Prerequisite

In order to install an artifact with the Kubectl plugin, the kubeops-kubernetes-plugins package must be installed on the admin node. This step is not covered in this guide.

Create KOSI package

First you need to create a KOSI package. The following command creates the necessary files in the current directory:

kosi create

The NGINX ingress controller YAML manifest can either be automatically downloaded and applied directly with kubectl apply, or it can be downloaded manually if you want to customize the deployment. The YAML manifest can be downloaded from the NGINX GitHub repo and must be placed in the same directory as the files for the KOSI package.

All files required by a task in the package must be named in the package.kosi file under files. The container images required by the YAML manifest must also be listed in the package.kosi under containers. In the example below, only one file is required for the installation: the YAML manifest for the nginx-ingress controller. To install nginx-ingress you will also need the registry.k8s.io/ingress-nginx/controller image with the tag v1.5.1 and the image registry.k8s.io/ingress-nginx/kube-webhook-certgen with tag v20220916-gd32f8c343.

To install nginx-ingress with the Kubectl plugin, call the plugin as shown in the example under installs. The full documentation for the Kubectl plugin can be found here.

languageversion = "0.1.0";
apiversion = "kubernative/kubeops/sina/user/v4";
name = "deployExample2";
description = "It shows how to deploy an artifact to your cluster using the kubectl plugin.";
version = "0.1.0";
docs = "docs.tgz";
logo = "logo.png";

files =
{
     manifest = "deploy.yaml";
}

containers =
{
    nginx = ["registry.k8s.io", "ingress-nginx/controller", "v1.5.1"];
    certgen= ["registry.k8s.io","ingress-nginx/kube-webhook-certgen","v20220916-gd32f8c343"];
}

install
{
    kubectl
    (
      operation = "apply";
      flags="-f deploy.yaml";
      sudo = true;
      sudoPassword="toor"
    );
}

Once the package.kosi file has been fully configured, all files must be combined into a KOSI package. To do this, execute the following command in the directory where the package.kosi file is located.

kosi build

To make the generated KOSI package available on other machines, it is pushed to the user’s private KubeOps Hub. To do this, the user must first log in to the hub.

$ kosi login -u <username>
2023-02-04 11:19:43 Info:      kosi version: 2.13.0_Alpha0
2023-02-04 11:19:43 Info:      Please enter password
****************
2023-02-04 11:19:26 Info:      Login Succeeded to Hub.
$ kosi push --hub kosi
2023-02-04 11:23:18 Info:      kosi version: 2.13.0_Alpha0
2023-02-04 11:23:19 Info:      Push to Private Registry registry.preprod.kubeops.net/<username>/

Deployment

Once the KOSI package has been created and published, it needs to be installed on the Admin node. The following command will download and execute the package. The package name and version refer to the values defined in package.kosi with the keys name and version.

kosi install --hub <username> <username>/<packagename>:<version>

For the example package, the command would be: kosi install --hub <username> <username>/deployExample2:0.1.0.

14 - How to migrate from nginx to traefik ingress

Installation

KubeOps supports deploying Traefik as a dynamic ingress controller and reverse proxy. This guide describes a concise, safe migration from an existing nginx-ingress controller to Traefik: it explains how to install Traefik and replace a deprecated nginx-ingress deployment. The steps below show the process in order.

Prerequisites

A running Kubernetes cluster with an existing nginx-ingress controller.

1. Create values file

A values.yaml file is required for the Traefik installation:

# values.yaml
packages:
- name: traefik
  enabled: true
  values:
    standard:
      namespace: traefik
      externalIPs: []
    advanced: {}

Note: Update externalIPs and other values as required.

2. Install Traefik

Once the values.yaml file has been created, install Traefik:

# get your desired version/s
kosi search --hub kubeops --ps traefik
# install traefik
kosi install --hub kubeops kubeops/traefik:<desired_version> -f values.yaml --dname traefik
# example
kosi install --hub kubeops kubeops/traefik:2.1.0_Beta0 -f values.yaml --dname traefik

3. Verify deployment

Check pods and services in the traefik namespace:

kubectl get pods -n traefik
kubectl get svc -n traefik

Note: Default NodePorts (e.g. 31080 / 31443) might not be reachable. If the default ports are not accessible, determine the ports used by ingress-nginx (e.g. 30080 / 30443) and update Traefik to use the same ports.
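One way to look up the NodePorts currently used by nginx (the namespace and service name below are typical defaults and may differ in your installation):

```shell
# List services and their NodePort mappings
kubectl get svc -n ingress-nginx
# Print only the NodePorts, e.g. "30080 30443"
kubectl get svc -n ingress-nginx ingress-nginx-controller \
  -o jsonpath='{.spec.ports[*].nodePort}'
```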

4. Remove old nginx-ingress deployment and service

# get version of installed nginx-ingress and its deployment name (--dname)
kosi list
# delete old nginx-ingress
kosi delete --hub kubeops kubeops/ingress-nginx:<installed_version> -f enterprise-values.yaml --dname <kosi_deployment_name>

Edit Traefik service

If nginx used specific NodePorts and you require those same ports, edit the Traefik Service:

kubectl edit svc traefik -n traefik

Update the ports to match the previous nginx NodePorts if required:

ports:
- name: web
  nodePort: 30080
  port: 80
  targetPort: web
- name: websecure
  nodePort: 30443
  port: 443
  targetPort: websecure
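The same port change can be applied non-interactively with a JSON patch (a sketch; the index positions assume web and websecure are the first two entries in the Service's port list):

```shell
# Replace the NodePorts of the first two Service ports in place
kubectl patch svc traefik -n traefik --type=json -p='[
  {"op":"replace","path":"/spec/ports/0/nodePort","value":30080},
  {"op":"replace","path":"/spec/ports/1/nodePort","value":30443}
]'
```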

Verify Port Change

kubectl get svc -n traefik

Note: Ensure the nginx Service is removed or its NodePorts are freed before reusing those NodePorts on the Traefik Service.