Welcome to our comprehensive How-To Guide for using kubeops. Whether you're a beginner aiming to understand the basics or an experienced user looking to fine-tune your skills, this guide is designed to provide you with detailed, step-by-step instructions on how to navigate and utilize all the features of kubeops effectively.
In the following sections, you will find everything from initial setup and configuration to advanced tips and tricks that will help you get the most out of the software. Our aim is to assist you in becoming proficient with kubeops, enhancing both your productivity and your user experience.
Let's get started on your journey to mastering kubeops!
1 - Join Node to a Kubernetes Cluster
This guide outlines the steps to join nodes to a cluster.
Joining a Node in a Kubernetes cluster
To increase performance or add resource capacity to your cluster, adding a node is the correct process. With kubeopsctl, this process is very easy.
You can use the following steps to join control-plane nodes or worker-nodes to a Kubernetes cluster.
Join Node Process:
Prerequisites
KOSI Login Recommendation
Before performing any action with kubeopsctl, it is recommended to do a login with kosi.
Refer to the official KOSI documentation for details here.
ETCD Backup Recommendation
Before performing changes on the control planes, it is recommended to create an ETCD backup.
Refer to the official Kubernetes documentation for details here.
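The linked documentation describes the snapshot procedure in detail. As a rough sketch, a snapshot can be taken on a control-plane node with etcdctl; the certificate paths below are typical kubeadm defaults and may differ in your environment:

```shell
# Sketch: save an etcd snapshot on a control-plane node.
# The certificate paths are the usual kubeadm defaults; adjust them to your setup.
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot save /var/backups/etcd-snapshot.db
```

Keep the snapshot file outside the cluster so it survives a failed change to the control planes.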
Example 1: Joining a Control-Plane Node to a Kubernetes Cluster
1. Pull required KOSI packages on your ADMIN
If you do not specify a parameter, the current default Kubernetes version 1.32.2 will be pulled.
With the parameter --kubernetesVersion 1.34.1 you can pull a specific Kubernetes version.
Available Kubernetes versions are 1.32.2, 1.32.3, 1.32.9, 1.32.10, 1.33.3, 1.33.5, and 1.34.1.
kubeopsctl pull
2. Add your node definition/specifications in the cluster-values
Note
If you want to join a node to your cluster, the parameter changeCluster has to be set to true.
Note
The parameter kubeVersion defines the Kubernetes version of the node in your cluster. It must be set to the same version that already exists in your cluster.
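As a convenience (not part of kubeopsctl), the two notes above can be checked with a small shell snippet before applying. The sample file created here mirrors the example values below; point the checks at your real cluster-values.yaml instead:

```shell
# Pre-flight checks on a cluster-values.yaml before joining a node.
# A sample file is created for illustration; use your real file instead.
cat > /tmp/cluster-values-check.yaml <<'EOF'
kubernetesVersion: 1.31.6
changeCluster: true
zones:
  - name: zone1
    nodes:
      - name: demo-controlplane01
        kubeVersion: 1.31.6
      - name: demo-worker01
        kubeVersion: 1.31.6
EOF

# 1. changeCluster must be set to true when joining a node
grep -q '^changeCluster: *true' /tmp/cluster-values-check.yaml && echo "changeCluster: ok"

# 2. every node kubeVersion must match the cluster kubernetesVersion
cluster_version=$(awk '/^kubernetesVersion:/ {print $2}' /tmp/cluster-values-check.yaml)
if grep 'kubeVersion:' /tmp/cluster-values-check.yaml | grep -qv "kubeVersion: *$cluster_version"; then
  echo "WARNING: at least one node kubeVersion differs from $cluster_version"
else
  echo "node versions: ok"
fi
```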
3. Adjust your cluster-values in zone1
Adjust your cluster-values as in the example below. Be sure to set the actual version in your values, as well as the target version in the nodes. In the snippet below, the new node is added in zone1.
Note
Take care with the format and syntax in the cluster-values.yaml
# file cluster-values.yaml
apiVersion: kubeops/kubeopsctl/cluster/beta/v1
imagePullRegistry: registry.kubeops.net/kubeops/kubeops
airgap: false
clusterName: myCluster
clusterUser: root
kubernetesVersion: 1.31.6 # -> actual version
kubeVipEnabled: false
virtualIP: 10.2.10.110
firewall: nftables
pluginNetwork: calico
containerRuntime: containerd
kubeOpsRoot: /home/myuser/kubeops
serviceSubnet: 192.168.128.0/17
podSubnet: 192.168.0.0/17
debug: true
systemCpu: 250m
systemMemory: 256Mi
packageRepository: https://packagerepo.kubeops.net/
changeCluster: true # -> has to be set
zones:
  - name: zone1
    nodes:
      - name: demo-controlplane01
        iPAddress: 10.2.10.110
        type: controlplane
        kubeVersion: 1.31.6
      - name: demo-controlplaneXX # -> has to be changed
        iPAddress: 10.2.10.XXX # -> has to be changed
        type: controlplane
        kubeVersion: 1.31.6 # -> check with actual version
      - name: demo-worker01
        iPAddress: 10.2.10.210
        type: worker
        kubeVersion: 1.31.6
  - name: zone2
    nodes:
      - name: demo-controlplane02
        iPAddress: 10.2.10.120
        type: controlplane
        kubeVersion: 1.31.6
      - name: demo-worker02
        iPAddress: 10.2.10.220
        type: worker
        kubeVersion: 1.31.6
  - name: zone3
    nodes:
      - name: demo-controlplane03
        iPAddress: 10.2.10.130
        type: controlplane
        kubeVersion: 1.31.6
      - name: demo-worker03
        iPAddress: 10.2.10.230
        type: worker
        kubeVersion: 1.31.6
4. Validate your values and join the node to the cluster
Once the cluster-values.yaml is created, check the values once again. If you are ready, start the join node process with the command:
kubeopsctl apply -f cluster-values.yaml
Example 2: Joining a Worker Node to a Kubernetes Cluster
1. Pull required KOSI packages on your ADMIN
If you do not specify a parameter, the current default Kubernetes version 1.32.2 will be pulled.
With the parameter --kubernetesVersion x.xx.x you can pull other Kubernetes versions.
Available Kubernetes versions are 1.32.2, 1.32.3, 1.32.9, 1.32.10, 1.33.3, 1.33.5, and 1.34.1.
kubeopsctl pull
2. Add your node definition/specifications in the cluster-values
Note
If you want to join a node to your cluster, the parameter changeCluster has to be set to true.
Note
The parameter kubeVersion defines the Kubernetes version of the node in your cluster. It must be set to the same version that already exists in your cluster.
3. Adjust your cluster-values in zone2
Adjust your cluster-values as in the example below. Be sure to set the actual version in your values, as well as the target version in the nodes. In the snippet below, the new node is added in zone2.
Note
Take care with the format and syntax in the cluster-values.yaml
# file cluster-values.yaml
apiVersion: kubeops/kubeopsctl/cluster/beta/v1
imagePullRegistry: registry.kubeops.net/kubeops/kubeops
airgap: false
clusterName: myCluster
clusterUser: root
kubernetesVersion: 1.31.6 # -> actual version
kubeVipEnabled: false
virtualIP: 10.2.10.110
firewall: nftables
pluginNetwork: calico
containerRuntime: containerd
kubeOpsRoot: /home/myuser/kubeops
serviceSubnet: 192.168.128.0/17
podSubnet: 192.168.0.0/17
debug: true
systemCpu: 250m
systemMemory: 256Mi
packageRepository: https://packagerepo.kubeops.net/
changeCluster: true # -> has to be set
zones:
  - name: zone1
    nodes:
      - name: demo-controlplane01
        iPAddress: 10.2.10.110
        type: controlplane
        kubeVersion: 1.31.6
      - name: demo-worker01
        iPAddress: 10.2.10.210
        type: worker
        kubeVersion: 1.31.6
  - name: zone2
    nodes:
      - name: demo-controlplane02
        iPAddress: 10.2.10.120
        type: controlplane
        kubeVersion: 1.31.6
      - name: demo-worker02
        iPAddress: 10.2.10.220
        type: worker
        kubeVersion: 1.31.6
      - name: demo-workerXX # -> has to be changed
        iPAddress: 10.2.10.XX # -> has to be changed
        type: worker
        kubeVersion: 1.31.6 # -> check with actual version
  - name: zone3
    nodes:
      - name: demo-controlplane03
        iPAddress: 10.2.10.130
        type: controlplane
        kubeVersion: 1.31.6
      - name: demo-worker03
        iPAddress: 10.2.10.230
        type: worker
        kubeVersion: 1.31.6
4. Validate your values and join the node to the cluster
Once the cluster-values.yaml is created, check the values once again. If you are ready, start the join node process with the command:
kubeopsctl apply -f cluster-values.yaml
Note
If you join worker nodes to a cluster and rook-ceph is installed, it is recommended to adjust your rook-ceph CRUSH map.
Refer to the official rook-ceph documentation for details
here.
For updating the CRUSH map, you should set the following in your enterprise-values.yaml:
2 - Delete Node from a Kubernetes Cluster
This guide outlines the steps to delete worker nodes from a cluster, specifically how to proceed with rook-ceph and other KubeOps compliance applications.
Deleting a Node from a Kubernetes cluster
In rare cases, it may be necessary to remove nodes from a Kubernetes cluster.
This how-to guide explains the prerequisites and the key considerations to keep in mind before starting the node removal process.
You can use the following steps to delete nodes from a Kubernetes cluster.
Prerequisites
In order to run rook-ceph stably for a longer period, your cluster needs at least 3 zones, each containing at least 1 worker node.
To check which MON and OSD are running on the node you want to delete, you can use the command kubectl get po -nrook-ceph -owide | grep worker02 | grep "mon\|osd" | grep -v "osd-prepare" | awk '{print $1}'. As output you get the MON and the OSD running on that node. If you get no output, you do not have to delete these resources and can skip to the "delete the node" section.
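To illustrate what the pipeline does, here it is run against a small sample of `kubectl get po -n rook-ceph -o wide` output (pod names are illustrative; in a real cluster the pipeline is fed by kubectl instead of this sample):

```shell
# Sample pod listing: NAME READY STATUS RESTARTS AGE IP NODE
sample_pods='rook-ceph-mon-c-6f8b9d-abc12           2/2   Running     0   3d   10.42.1.5   worker02
rook-ceph-osd-1-7c9df-xyz34            2/2   Running     0   3d   10.42.1.6   worker02
rook-ceph-osd-prepare-worker02-k8s7j   0/1   Completed   0   3d   10.42.1.7   worker02
rook-ceph-mgr-a-5d6fb-qrs56            2/2   Running     0   3d   10.42.1.8   worker03'

# Same filter as in the command above: keep mon/osd pods on worker02,
# drop the one-shot osd-prepare jobs, print only the pod name column.
echo "$sample_pods" | grep worker02 | grep "mon\|osd" | grep -v "osd-prepare" | awk '{print $1}'
```

Only the MON and OSD pods scheduled on worker02 remain in the output; the osd-prepare job and pods on other nodes are filtered out.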
Worker
Important:
Due to rook-ceph, a worker node must not be removed without following the steps below.
In this example, worker01 (zone1) is removed from the cluster.
Worker01 contains osd.0 and mon-c.
Scale down the rook-ceph-operator deployment to 0:
kubectl -n rook-ceph scale deployment rook-ceph-operator --replicas=0
This prevents new MONs or OSDs from being created.
kubectl exec -it deploy/rook-ceph-tools -n rook-ceph -- ceph osd tree
ID   CLASS  WEIGHT   TYPE NAME           STATUS  REWEIGHT  PRI-AFF
-1          0.21478  root default
-9          0.04880      zone zone1
-7          0.04880          host worker01                            # worker01 is being removed
 0     ssd  0.04880              osd.0        up   1.00000   1.00000  # osd.0 is being removed
-15         0.04880          host worker04
 3     ssd  0.04880              osd.3        up   1.00000   1.00000
-11         0.10739      zone zone2
-3          0.05859          host worker02
 1     ssd  0.05859              osd.1        up   0.95001   1.00000
-13         0.05859      zone zone3
-5          0.05859          host worker03
 2     ssd  0.05859              osd.2        up   0.95001   1.00000
From this output you can see that osd.0 is part of worker01.
The total duration depends on cluster size and node performance. Before proceeding, verify Ceph health and placement groups are clean.
kubectl exec -it deploy/rook-ceph-tools -n rook-ceph -- ceph status
kubectl exec -it deploy/rook-ceph-tools -n rook-ceph -- ceph pg stat
Typical duration ranges from 15 to 120 minutes.
If you want to rejoin the same node, reset it to a state prior to joining the cluster. Only this way can you be sure that no leftovers from the deletion process remain!
3 - Single Sign-On with Keycloak
Learn how to configure Keycloak for Single Sign-On, securely expose it using Kubernetes Ingress and TLS, and integrate it with kubeops and other Kubernetes applications.
In this guide, you will learn how to implement Single Sign-On (SSO) using Keycloak. We will walk through the complete flow—from understanding SSO for platforms and services such as Rook Ceph, Harbor, and other Kubernetes applications, to configuring Keycloak, exposing it securely, and integrating it with kubeops.
By the end of this guide, you will be able to:
Understand how Keycloak enables centralized authentication
Configure Keycloak for SSO
Securely expose Keycloak using Kubernetes Ingress and TLS
Integrate Keycloak with kubeops for authentication and authorization
Validate and troubleshoot the SSO login flow
Let’s get started on enabling secure and seamless authentication with Keycloak.
3.1 - SSO for dashboard
Learn how to configure Single Sign-On (SSO) for KubeOps Dashboard using Keycloak with OIDC.
Single Sign-On (SSO) with Keycloak for KubeOps Dashboard
This guide describes how to configure KubeOps Dashboard using Keycloak (OIDC) in a kubeops-managed Kubernetes environment.
Prerequisites
Before proceeding, ensure the following requirements are met:
Keycloak is already installed and running
kubeops is installed and operational
Step 1: Extract Keycloak CA certificate
On your admin host, run the OpenSSL command to extract the Keycloak CA certificate.
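The exact command is not shown above; one common way to capture the certificate presented by a Keycloak endpoint looks like the following, where `<keycloak-host>` is a placeholder for your Keycloak DNS name:

```shell
# Fetch the certificate presented by the Keycloak endpoint and store it as PEM.
# <keycloak-host> is a placeholder; verify the result matches your CA expectations.
openssl s_client -showcerts -connect <keycloak-host>:443 </dev/null \
  | openssl x509 -outform PEM > keycloak-ca.crt
```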
Create a group named headlamp (if it doesn't exist already) and a user under the group.
Step 6: Create ClusterRoleBinding for Headlamp group
1. Use the following YAML to create the ClusterRoleBinding:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: headlamp-admin-user
subjects:
  - kind: Group
    name: "oidc:headlamp" # the 'sub' or 'preferred_username' from the Keycloak token
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
The name “oidc:headlamp” needs to be the same as the group name.
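For the group subject to match, the API server must map OIDC group claims with the `oidc:` prefix. This typically comes from the kube-apiserver OIDC flags; the values below are illustrative assumptions, not taken from this guide:

```yaml
# Illustrative kube-apiserver OIDC flags (adjust issuer, client ID, and claims to your setup)
- --oidc-issuer-url=https://<your_DNS_name>/keycloak/realms/kubeops-dashboards
- --oidc-client-id=headlamp
- --oidc-username-claim=preferred_username
- --oidc-groups-claim=groups
- --oidc-groups-prefix=oidc:
```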
Apply the ClusterRoleBinding file
kubectl apply -f headlamp-clusterrolebinding.yaml
Step 7: Get client secret
After creating the client, copy the client secret.
This value will be used in the next step.
3.2 - SSO for Harbor
Ensure the OIDC client in Keycloak matches the oidc_client_id and oidc_client_secret values.
The externalURL and hostname must match the Harbor DNS name exactly.
oidc_auto_onboard: true allows users to be created automatically in Harbor upon first login.
3.3 - SSO for rook-ceph
Learn how to configure Single Sign-On (SSO) for rook-ceph using Keycloak with OIDC in a Kubernetes environment.
Single Sign-On (SSO) with Keycloak for rook-ceph
This guide describes how to configure rook-ceph authentication using Keycloak (OIDC) in a kubeops-managed Kubernetes environment.
Prerequisites
Before proceeding, ensure the following requirements are met:
Keycloak is already installed and running
rook-ceph is already installed and running
kubeops is installed and operational
Step 1: Prepare Keycloak (Realm, User)
To configure Keycloak for rook-ceph SSO:
Create Realm
Ensure a realm named kubeops-dashboards exists.
If it does not exist, create it in the Keycloak admin console.
Realm name: kubeops-dashboards
Enabled: true
Create User
Ensure a user named kubeops exists in the kubeops-dashboards realm.
If the user does not exist, create it and set credentials.
Username: kubeops
Enabled: true
Set a permanent password
Step 2: Create Client (rook-ceph)
Create a client for rook-ceph in the kubeops-dashboards realm with the following settings:
Client ID: rook-ceph
Client type: OpenID Connect
Access type: Confidential
Client authentication: Enabled
Standard flow: Enabled
Direct access grants: Disabled
Valid Redirect URIs
Add the following redirect URI:
https://<your_DNS_name>/oauth2/callback
Web Origins
Also update the Web Origins:
<your_DNS_name>
Step 3: Get Client Secret
In the Keycloak admin console, open the rook-ceph client and copy the client secret.
This value will be used by oauth2-proxy and referenced in the next steps.
Create a Kubernetes Secret containing the OAuth2 credentials. Note: the example command below uses client-id="ceph-dashboard"; verify this value matches your Keycloak client ID.
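A typical invocation might look like the following; the secret name `oauth2-proxy-credentials` matches the `existingSecret` referenced later in the values file, and the `rook-ceph` namespace is an assumption based on this guide:

```shell
# Generate a 32-byte cookie secret, as suggested in the values file comments
COOKIE_SECRET=$(openssl rand -base64 32 | head -c 32 | base64)

# Create the secret consumed by oauth2-proxy (names and namespace are assumptions;
# adjust client-id and client-secret to your Keycloak client)
kubectl -n rook-ceph create secret generic oauth2-proxy-credentials \
  --from-literal=client-id="ceph-dashboard" \
  --from-literal=client-secret="<client-secret>" \
  --from-literal=cookie-secret="$COOKIE_SECRET"
```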
The following kubeops values configuration enables rook-ceph and integrates it with Keycloak using OIDC authentication.
Use the client secret and the cookie secret derived in the steps above.
global:
  # Global registry to pull the images from
  imageRegistry: ""
  # To help compatibility with other charts which use global.imagePullSecrets.
  imagePullSecrets: []
  # - name: pullSecret1
  # - name: pullSecret2

## Override the deployment namespace
##
namespaceOverride: ""

# Force the target Kubernetes version (it uses Helm `.Capabilities` if not set).
# This is especially useful for `helm template` as capabilities are always empty
# due to the fact that it doesn't query an actual cluster
kubeVersion:

# Oauth client configuration specifics
config:
  # Add config annotations
  annotations: {}
  # OAuth client ID
  clientID: "ceph-dashboard"
  # OAuth client secret
  clientSecret: "<client-secret>"
  # List of secret keys to include in the secret and expose as environment variables.
  # By default, all three secrets are required. To exclude certain secrets
  # (e.g., when using federated token authentication), remove them from this list.
  # Example to exclude client-secret:
  # requiredSecretKeys:
  # - client-id
  # - cookie-secret
  requiredSecretKeys:
    - client-id
    - client-secret
    - cookie-secret
  # Create a new secret with the following command
  # openssl rand -base64 32 | head -c 32 | base64
  # Use an existing secret for OAuth2 credentials (see secret.yaml for required fields)
  # Example:
  # existingSecret: secret
  cookieSecret: "<cookie-secret>"
  # The name of the cookie that oauth2-proxy will create
  # If left empty, it will default to the release name
  cookieName: ""
  google: {}
  # adminEmail: xxxx
  # useApplicationDefaultCredentials: true
  # targetPrincipal: xxxx
  # serviceAccountJson: xxxx
  # Alternatively, use an existing secret (see google-secret.yaml for required fields)
  # Example:
  # existingSecret: google-secret
  # groups: []
  # Example:
  # - group1@example.com
  # - group2@example.com
  # Default configuration, to be overridden
  configFile: |-
    provider = "keycloak-oidc"
    oidc_issuer_url = "https://dev04.kubeops.net/keycloak/realms/master"
    email_domains = [ "*" ]
    upstreams = [ "file:///dev/null" ]
    pass_user_headers = true
    set_xauthrequest = true
    pass_access_token = true
  # Custom configuration file: oauth2_proxy.cfg
  # configFile: |-
  #   pass_basic_auth = false
  #   pass_access_token = true
  # Use an existing config map (see configmap.yaml for required fields)
  # Example:
  # existingConfig: config

alphaConfig:
  enabled: false
  # Add config annotations
  annotations: {}
  # Arbitrary configuration data to append to the server section
  serverConfigData: {}
  # Arbitrary configuration data to append to the metrics section
  metricsConfigData: {}
  # Arbitrary configuration data to append
  configData: {}
  # Arbitrary configuration to append
  # This is treated as a Go template and rendered with the root context
  configFile: ""
  # Use an existing config map (see secret-alpha.yaml for required fields)
  existingConfig: ~
  # Use an existing secret
  existingSecret: "oauth2-proxy-credentials"

image:
  registry: ""
  repository: "oauth2-proxy/oauth2-proxy"
  # appVersion is used by default
  tag: ""
  pullPolicy: "IfNotPresent"
  command: []

# Optionally specify an array of imagePullSecrets.
# Secrets must be manually created in the namespace.
# ref: https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod
imagePullSecrets: []
# - name: myRegistryKeySecretName

# Set a custom containerPort if required.
# This will default to 4180 if this value is not set and the httpScheme set to http
# This will default to 4443 if this value is not set and the httpScheme set to https
# containerPort: 4180

extraArgs:
  - --provider=keycloak-oidc
  - --set-xauthrequest=true
  - --pass-user-headers=true
  - --pass-access-token=true
  - --skip-oidc-discovery=true
  - --oidc-issuer-url=https://dev04.kubeops.net/keycloak/realms/master
  - --login-url=https://dev04.kubeops.net/keycloak/realms/master/protocol/openid-connect/auth
  - --redeem-url=https://dev04.kubeops.net/keycloak/realms/master/protocol/openid-connect/token
  - --validate-url=https://dev04.kubeops.net/keycloak/realms/master/protocol/openid-connect/userinfo
  - --oidc-jwks-url=https://dev04.kubeops.net/keycloak/realms/master/protocol/openid-connect/certs
  - --ssl-insecure-skip-verify=true
  - --cookie-secure=true

extraEnv: []

envFrom: []
# Load environment variables from a ConfigMap(s) and/or Secret(s)
# that already exists (created and managed by you).
# ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#configure-all-key-value-pairs-in-a-configmap-as-container-environment-variables
##
# PS: Changes in these ConfigMaps or Secrets will not be automatically
# detected and you must manually restart the relevant Pods after changes.
##
# - configMapRef:
#     name: special-config
# - secretRef:
#     name: special-config-secret

# -- Custom labels to add into metadata
customLabels: {}

# To authorize individual email addresses
# That is part of extraArgs but since this needs special treatment we need to do a separate section
authenticatedEmailsFile:
  enabled: false
  # Defines how the email addresses file will be projected, via a configmap or secret
  persistence: configmap
  # template is the name of the configmap what contains the email user list but has been configured without this chart.
  # It's a simpler way to maintain only one configmap (user list) instead changing it for each oauth2-proxy service.
  # Be aware the value name in the extern config map in data needs to be named to "restricted_user_access" or to the
  # provided value in restrictedUserAccessKey field.
  template: ""
  # The configmap/secret key under which the list of email access is stored
  # Defaults to "restricted_user_access" if not filled-in, but can be overridden to allow flexibility
  restrictedUserAccessKey: ""
  # One email per line
  # example:
  # restricted_access: |-
  #   name1@domain
  #   name2@domain
  # If you override the config with restricted_access it will configure a user list within this chart what takes care of the
  # config map resource.
  restricted_access: ""
  annotations: {}
  # helm.sh/resource-policy: keep

service:
  type: ClusterIP
  # when service.type is ClusterIP ...
  # clusterIP: 192.0.2.20
  # when service.type is LoadBalancer ...
  # loadBalancerIP: 198.51.100.40
  # loadBalancerSourceRanges: 203.0.113.0/24
  # when service.type is NodePort ...
  # nodePort: 80
  portNumber: 80
  # Protocol set on the service
  appProtocol: http
  annotations: {}
  # foo.io/bar: "true"
  # configure externalTrafficPolicy
  externalTrafficPolicy: ""
  # configure internalTrafficPolicy
  internalTrafficPolicy: ""
  # configure service target port
  targetPort: ""
  # Configures the service to use IPv4/IPv6 dual-stack.
  # Ref: https://kubernetes.io/docs/concepts/services-networking/dual-stack/
  ipDualStack:
    enabled: false
    ipFamilies: ["IPv6", "IPv4"]
    ipFamilyPolicy: "PreferDualStack"
  # Configure traffic distribution for the service
  # Ref: https://kubernetes.io/docs/concepts/services-networking/service/#traffic-distribution
  trafficDistribution: ""

## Create or use ServiceAccount
serviceAccount:
  ## Specifies whether a ServiceAccount should be created
  enabled: true
  ## The name of the ServiceAccount to use.
  ## If not set and create is true, a name is generated using the fullname template
  name:
  automountServiceAccountToken: true
  annotations: {}
  ## imagePullSecrets for the service account
  imagePullSecrets: []
  # - name: myRegistryKeySecretName

# Network policy settings.
networkPolicy:
  create: false
  ingress: []
  egress: []

ingress:
  enabled: false
  # className: nginx
  path: /
  # Only used if API capabilities (networking.k8s.io/v1) allow it
  pathType: ImplementationSpecific
  # Used to create an Ingress record.
  # hosts:
  # - chart-example.local
  # Extra paths to prepend to every host configuration. This is useful when working with annotation based services.
  # Warning! The configuration is dependant on your current k8s API version capabilities (networking.k8s.io/v1)
  # extraPaths:
  # - path: /*
  #   pathType: ImplementationSpecific
  #   backend:
  #     service:
  #       name: ssl-redirect
  #       port:
  #         name: use-annotation
  labels: {}
  # annotations:
  #   kubernetes.io/ingress.class: nginx
  #   kubernetes.io/tls-acme: "true"
  # tls:
  # Secrets must be manually created in the namespace.
  # - secretName: chart-example-tls
  #   hosts:
  #   - chart-example.local

# Gateway API HTTPRoute configuration
# Ref: https://gateway-api.sigs.k8s.io/api-types/httproute/
gatewayApi:
  enabled: false
  # The name of the Gateway resource to attach the HTTPRoute to
  # Example:
  # gatewayRef:
  #   name: gateway
  #   namespace: gateway-system
  gatewayRef:
    name: ""
    namespace: ""
  # HTTPRoute rule configuration
  # rules:
  # - matches:
  #   - path:
  #       type: PathPrefix
  #       value: /
  rules: []
  # Hostnames to match in the HTTPRoute
  # hostnames:
  # - chart-example.local
  hostnames: []
  # Additional labels to add to the HTTPRoute
  labels: {}
  # Additional annotations to add to the HTTPRoute
  annotations: {}

resources: {}
# limits:
#   cpu: 100m
#   memory: 300Mi
# requests:
#   cpu: 100m
#   memory: 300Mi

# Container resize policy for runtime resource updates
# Ref: https://kubernetes.io/docs/tasks/configure-pod-container/resize-container-resources/
resizePolicy: []
# - resourceName: cpu
#   restartPolicy: NotRequired
# - resourceName: memory
#   restartPolicy: RestartContainer

extraVolumes: []
# - name: ca-bundle-cert
#   secret:
#     secretName: <secret-name>

extraVolumeMounts: []
# - mountPath: /etc/ssl/certs/
#   name: ca-bundle-cert

# Additional containers to be added to the pod.
extraContainers: []
# - name: my-sidecar
#   image: nginx:latest

# Additional Init containers to be added to the pod.
extraInitContainers: []
# - name: wait-for-idp
#   image: my-idp-wait:latest
#   command:
#   - sh
#   - -c
#   - wait-for-idp.sh

priorityClassName: ""

# hostAliases is a list of aliases to be added to /etc/hosts for network name resolution
hostAliases: []
# - ip: "10.xxx.xxx.xxx"
#   hostnames:
#   - "auth.example.com"
# - ip: 127.0.0.1
#   hostnames:
#   - chart-example.local
#   - example.local

# [TopologySpreadConstraints](https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/) configuration.
# Ref: https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling
# topologySpreadConstraints: []

# Affinity for pod assignment
# Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
# affinity: {}

# Tolerations for pod assignment
# Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
tolerations: []

# Node labels for pod assignment
# Ref: https://kubernetes.io/docs/user-guide/node-selection/
nodeSelector: {}

# Whether to use secrets instead of environment values for setting up OAUTH2_PROXY variables
proxyVarsAsSecrets: true

# Configure Kubernetes liveness and readiness probes.
# Ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/
# Disable both when deploying with Istio 1.0 mTLS. https://istio.io/help/faq/security/#k8s-health-checks
livenessProbe:
  enabled: true
  initialDelaySeconds: 0
  timeoutSeconds: 1
readinessProbe:
  enabled: true
  initialDelaySeconds: 0
  timeoutSeconds: 5
  periodSeconds: 10
  successThreshold: 1

# Configure Kubernetes security context for container
# Ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
securityContext:
  enabled: true
  allowPrivilegeEscalation: false
  capabilities:
    drop:
      - ALL
  readOnlyRootFilesystem: true
  runAsNonRoot: true
  runAsUser: 2000
  runAsGroup: 2000
  seccompProfile:
    type: RuntimeDefault

deploymentAnnotations: {}
podAnnotations: {}
podLabels: {}
replicaCount: 1
revisionHistoryLimit: 10
strategy: {}

enableServiceLinks: true

## PodDisruptionBudget settings
## ref: https://kubernetes.io/docs/concepts/workloads/pods/disruptions/
## One of maxUnavailable and minAvailable must be set to null.
podDisruptionBudget:
  enabled: true
  maxUnavailable: null
  minAvailable: 1
  # Policy for when unhealthy pods should be considered for eviction.
  # Valid values are "IfHealthyBudget" and "AlwaysAllow".
  # Ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/#unhealthy-pod-eviction-policy
  unhealthyPodEvictionPolicy: ""

## Horizontal Pod Autoscaling
## ref: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80
  # targetMemoryUtilizationPercentage: 80
  annotations: {}
  # Configure HPA behavior policies for scaling if needed
  # Ref: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#configuring-scaling-behavior
  behavior: {}
  # scaleDown:
  #   stabilizationWindowSeconds: 300
  #   policies:
  #   - type: Percent
  #     value: 100
  #     periodSeconds: 15
  #   selectPolicy: Min
  # scaleUp:
  #   stabilizationWindowSeconds: 0
  #   policies:
  #   - type: Percent
  #     value: 100
  #     periodSeconds: 15
  #   - type: Pods
  #     value: 4
  #     periodSeconds: 15
  #   selectPolicy: Max

# Configure Kubernetes security context for pod
# Ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
podSecurityContext: {}

# whether to use http or https
httpScheme: http

initContainers:
  # if the redis sub-chart is enabled, wait for it to be ready
  # before starting the proxy
  # creates a role binding to get, list, watch, the redis master pod
  # if service account is enabled
  waitForRedis:
    enabled: true
    image:
      repository: "alpine"
      tag: "latest"
      pullPolicy: "IfNotPresent"
    # uses the kubernetes version of the cluster
    # the chart is deployed on, if not set
    kubectlVersion: ""
    securityContext:
      enabled: true
      allowPrivilegeEscalation: false
      capabilities:
        drop:
          - ALL
      readOnlyRootFilesystem: true
      runAsNonRoot: true
      runAsUser: 65534
      runAsGroup: 65534
      seccompProfile:
        type: RuntimeDefault
    timeout: 180
    resources: {}
    # limits:
    #   cpu: 100m
    #   memory: 300Mi
    # requests:
    #   cpu: 100m
    #   memory: 300Mi

# Additionally authenticate against a htpasswd file. Entries must be created with "htpasswd -B" for bcrypt encryption.
# Alternatively supply an existing secret which contains the required information.
htpasswdFile:
  enabled: false
  existingSecret: ""
  entries: []
  # One row for each user
  # example:
  # entries:
  # - testuser:$2y$05$gY6dgXqjuzFhwdhsiFe7seM9q9Tile4Y3E.CBpAZJffkeiLaC21Gy

# Configure the session storage type, between cookie and redis
sessionStorage:
  # Can be one of the supported session storage cookie|redis
  type: cookie
  redis:
    # Name of the Kubernetes secret containing the redis & redis sentinel password values (see also `sessionStorage.redis.passwordKey`)
    existingSecret: ""
    # Redis password value. Applicable for all Redis configurations. Taken from redis subchart secret if not set. `sessionStorage.redis.existingSecret` takes precedence
    password: ""
    # Key of the Kubernetes secret data containing the redis password value. If you use the redis sub chart, make sure
    # this password matches the one used in redis-ha.redisPassword (see below).
    passwordKey: "redis-password"
    # Can be one of standalone|cluster|sentinel
    clientType: "standalone"
    standalone:
      # URL of redis standalone server for redis session storage (e.g. `redis://HOST[:PORT]`). Automatically generated if not set
      connectionUrl: ""
    cluster:
      # List of Redis cluster connection URLs. Array or single string allowed.
      connectionUrls: []
      # - "redis://127.0.0.1:8000"
      # - "redis://127.0.0.1:8001"
    sentinel:
      # Name of the Kubernetes secret containing the redis sentinel password value (see also `sessionStorage.redis.sentinel.passwordKey`). Default: `sessionStorage.redis.existingSecret`
      existingSecret: ""
      # Redis sentinel password. Used only for sentinel connection; any redis node passwords need to use `sessionStorage.redis.password`
      password: ""
      # Key of the Kubernetes secret data containing the redis sentinel password value
      passwordKey: "redis-sentinel-password"
      # Redis sentinel master name
      masterName: ""
      # List of Redis cluster connection URLs. Array or single string allowed.
      connectionUrls: []
      # - "redis://127.0.0.1:8000"
      # - "redis://127.0.0.1:8001"

# Enables and configure the automatic deployment of the redis-ha subchart
redis-ha:
  # provision an instance of the redis-ha sub-chart
  enabled: false
  # Redis specific helm chart settings, please see:
  # https://artifacthub.io/packages/helm/dandydev-charts/redis-ha#general-parameters
  ##
  # Recommended:
  ##
  # redisPassword: xxxxx
  # replicas: 1
  # persistentVolume:
  #   enabled: false
  ##
  # If you install Redis using this sub chart, make sure that the password of the sub chart matches the password
  # you set in sessionStorage.redis.password (see above).
  ##
  # If you want to use redis in sentinel mode see:
  # https://artifacthub.io/packages/helm/dandydev-charts/redis-ha#redis-sentinel-parameters

# Enables apiVersion deprecation checks
checkDeprecation: true

# Allows graceful shutdown
# terminationGracePeriodSeconds: 65
# lifecycle:
#   preStop:
#     exec:
#       command: [ "sh", "-c", "sleep 60" ]

metrics:
  # Enable Prometheus metrics endpoint
  enabled: true
  # Serve Prometheus metrics on this port
  port: 44180
  # when service.type is NodePort ...
  # nodePort: 44180
  # Protocol set on the service for the metrics port
  service:
    appProtocol: http
  serviceMonitor:
    # Enable Prometheus Operator ServiceMonitor
    enabled: false
    # Define the namespace where to deploy the ServiceMonitor resource
    namespace: ""
    # Prometheus Instance definition
    prometheusInstance: default
    # Prometheus scrape interval
    interval: 60s
    # Prometheus scrape timeout
    scrapeTimeout: 30s
    # Add custom labels to the ServiceMonitor resource
    labels: {}
    ## scheme: HTTP scheme to use for scraping. Can be used with `tlsConfig` for example if using istio mTLS.
    scheme: ""
    ## tlsConfig: TLS configuration to use when scraping the endpoint. For example if using istio mTLS.
    ## Of type: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#tlsconfig
    tlsConfig: {}
    ## bearerTokenFile: Path to bearer token file.
    bearerTokenFile: ""
    ## Used to pass annotations that are used by the Prometheus installed in your cluster to select Service Monitors to work with
    ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#prometheusspec
    annotations: {}
    ## Metric relabel configs to apply to samples before ingestion.
    ## [Metric Relabeling](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#metric_relabel_configs)
    metricRelabelings: []
    # - action: keep
    #   regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'
    #   sourceLabels: [__name__]
    ## Relabel configs to apply to samples before ingestion.
    ## [Relabeling](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config)
    relabelings: []
    # - sourceLabels: [__meta_kubernetes_pod_node_name]
    #   separator: ;
    #   regex: ^(.*)$
    #   targetLabel: nodename
    #   replacement: $1
    #   action: replace

# Extra K8s manifests to deploy
extraObjects: []
Step 6: Install the oauth2-proxy Helm chart
Use the following steps to install oauth2-proxy using the Helm chart.
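The concrete commands are not listed above; assuming the upstream oauth2-proxy chart repository and a values file named `oauth2-values.yaml` containing the configuration from the previous step, the installation might look like:

```shell
# Assumptions: upstream oauth2-proxy Helm repository, values file oauth2-values.yaml,
# and the rook-ceph namespace used elsewhere in this guide.
helm repo add oauth2-proxy https://oauth2-proxy.github.io/manifests
helm repo update
helm upgrade --install oauth2-proxy oauth2-proxy/oauth2-proxy \
  -n rook-ceph -f oauth2-values.yaml
```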
ceph config-key set mgr/dashboard/external_auth true
ceph config-key set mgr/dashboard/external_auth_header_name "X-Remote-User"
ceph config-key set mgr/dashboard/external_auth_logout_url "https://dev04.kubeops.net/oauth2/sign_out?rd=https://dev04.kubeops.net/keycloak/realms/master/protocol/openid-connect/logout?client_id=ceph-dashboard"
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: oauth2-proxy-ingress
  namespace: rook-ceph # namespace in which the proxy runs
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: dev04.kubeops.net
      http:
        paths:
          - path: /oauth2
            pathType: Prefix
            backend:
              service:
                name: oauth2-proxy
                port:
                  number: 80
4 - Upgrade Kubernetes Version
This guide outlines the steps to upgrade the Kubernetes version of a cluster, specifically demonstrating how to change the version using a configuration file.
Upgrading a Kubernetes cluster
Upgrading a Kubernetes cluster is essential to maintain security, stability, and compatibility. Like Kubernetes itself, we adhere to the version skew policy and only allow upgrades between releases that differ by a single minor version. This ensures compatibility between components, reduces the risk of instability, and keeps the cluster in a supported and secure state.
For more information about the version skew policy, click here.
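The skew rule can be sketched as a quick check before editing any values: compare the minor versions of the current and the target release. The versions below are examples, not mandated values.

```shell
# Quick sketch of the skew rule: the target minor version may be at most
# one ahead of the actual minor version (major version assumed unchanged).
actual=1.33.5
target=1.34.1
a_minor=$(echo "$actual" | cut -d. -f2)
t_minor=$(echo "$target" | cut -d. -f2)
diff=$((t_minor - a_minor))
if [ "$diff" -ge 0 ] && [ "$diff" -le 1 ]; then
  echo "upgrade $actual -> $target is allowed"
else
  echo "upgrade $actual -> $target violates the skew policy"
fi
```

For a jump of more than one minor version, repeat the upgrade once per intermediate release.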
You can use the following steps to upgrade the Kubernetes version of a cluster.
Kubernetes Version Upgrade Process:
Prerequisites
KOSI Login Recommendation
Before performing any action with kubeopsctl, it is recommended to do a login with kosi.
Refer to the official KOSI documentation for details
here.
1. Pull required KOSI packages on your ADMIN
If you do not specify a parameter, Kubernetes version 1.32.2 will be pulled.
With the parameter --kubernetesVersion 1.34.1 you can pull a specific Kubernetes version.
Available Kubernetes versions are 1.32.2, 1.32.3, 1.32.9, 1.32.10, 1.33.3, 1.33.5, and 1.34.1.
kubeopsctl pull --kubernetesVersion <x.xx.x>
2. Change your target version inside the cluster-values
Note
The actual version is the current Kubernetes version in your cluster. The target version on the nodes is the version we want to upgrade to (e.g., see this yaml file)
Note
If you want to upgrade your cluster, the parameter changeCluster has to be set to true (e.g., see this yaml file)
3. Start the upgrade with the command
kubeopsctl apply -f cluster-values.yaml
Important
It is important to start your upgrade with the initial control-plane node. It is the first node with which the cluster was created.
Example 1 - Upgrade all nodes in the cluster to a specific version
We want to upgrade a cluster from Kubernetes version v1.33.5 to v1.34.1 with the following steps.
1. Pull required KOSI packages on your ADMIN
Pull the kubernetes v1.34.1 packages on your ADMIN machine.
kubeopsctl pull --kubernetesVersion 1.34.1
2. Change your target version inside the cluster-values
Adjust your cluster-values in comparison to the example below. Be sure to set the actual version in your values, as well as the target version in the nodes.
# file cluster-values.yaml
apiVersion: kubeops/kubeopsctl/cluster/beta/v1
imagePullRegistry: registry.kubeops.net/kubeops/kubeops
airgap: true
clusterName: myCluster
clusterUser: root
kubernetesVersion: 1.33.5 # -> actual version
kubeVipEnabled: false
virtualIP: 10.2.10.110
firewall: nftables
pluginNetwork: calico
containerRuntime: containerd
kubeOpsRoot: /home/myuser/kubeops
serviceSubnet: 192.168.128.0/17
podSubnet: 192.168.0.0/17
debug: true
systemCpu: 250m
systemMemory: 256Mi
packageRepository: local
changeCluster: true # -> important! Needs to be set for an upgrade
zones:
  - name: zone1
    nodes:
      - name: demo-controlplane01
        iPAddress: 10.2.10.110
        type: controlplane
        kubeVersion: 1.34.1 # -> target version
      - name: demo-worker01
        iPAddress: 10.2.10.210
        type: worker
        kubeVersion: 1.34.1 # -> target version
  - name: zone2
    nodes:
      - name: demo-controlplane02
        iPAddress: 10.2.10.120
        type: controlplane
        kubeVersion: 1.34.1 # -> target version
      - name: demo-worker02
        iPAddress: 10.2.10.220
        type: worker
        kubeVersion: 1.34.1 # -> target version
  - name: zone3
    nodes:
      - name: demo-controlplane03
        iPAddress: 10.2.10.130
        type: controlplane
        kubeVersion: 1.34.1 # -> target version
      - name: demo-worker03
        iPAddress: 10.2.10.230
        type: worker
        kubeVersion: 1.34.1 # -> target version
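A grep-based sanity check can catch the most common mistake before applying: forgetting to set changeCluster. The file written below is only a stand-in fragment for demonstration.

```shell
# Stand-in fragment of a cluster-values.yaml, used only for this check.
cat > cluster-values.yaml <<'EOF'
kubernetesVersion: 1.33.5
changeCluster: true
EOF
# An upgrade run requires changeCluster: true.
grep -q '^changeCluster: true' cluster-values.yaml && echo "changeCluster ok"
```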
3. Validate your values and upgrade the cluster
Once the cluster-values.yaml is created, check the values once again. If you are ready just start the upgrade process with the command:
kubeopsctl apply -f cluster-values.yaml
Note
Tools like rook-ceph have no PodDisruptionBudgets (PDBs), so if you drain nodes for the Kubernetes upgrade, rook-ceph is temporarily unavailable. You should drain only one node at a time during the Kubernetes upgrade.
Example 2 - Tranche upgrade zones to a specific version
We want to upgrade a cluster in tranches: first zone1, because it contains the initial control-plane node, then zone3, and finally zone2.
Important
It is important to start your tranche upgrade with zone1, because zone1 contains the initial control-plane node.
1. Pull required KOSI packages on your ADMIN
Pull the kubernetes v1.33.5 packages on your ADMIN machine.
kubeopsctl pull --kubernetesVersion 1.33.5
2. Adjust your cluster-values in zone1
Adjust your cluster-values in comparison to the example below. Be sure to set the actual version in your values, as well as the target version in the nodes. The snippet below shows only zone1.
Note
Take care of the format and syntax in the cluster-values.yaml
Once the cluster-values.yaml is created, check the values once again. If you are ready just start the upgrade process with the command:
kubeopsctl apply -f cluster-values.yaml
Note
Tools like rook-ceph have no PodDisruptionBudgets (PDBs), so if you drain nodes for the Kubernetes upgrade, rook-ceph is temporarily unavailable. You should drain only one node at a time during the Kubernetes upgrade.
5 - Installing KubeOps Compliance applications
This guide outlines the steps to install KubeOps Compliance applications of a cluster.
Installing KubeOps Compliance applications
There is a predefined selection of applications included with KubeOps Compliance.
These applications ensure a production-ready cluster deployment and can be individually configured as needed.
By separating the cluster values from the application values, the application values can be modified independently and installed at a later stage, providing greater flexibility and maintainability.
Prerequisites
KOSI Login Recommendation
Before performing any action with kubeopsctl, it is recommended to do a login with kosi.
Refer to the official KOSI documentation for details
here.
⚠ Warning
To ensure proper DNS resolution, all components of the logging stack (Filebeat, Logstash, OpenSearch, and OpenSearch Dashboards) must be deployed within the same Kubernetes namespace (or network domain).
Example 1: Installing Applications in a non-airgap-environment
To install the KubeOps Compliance Applications in an existing cluster follow the next steps:
1. Define the Enterprise-Value-file
In the example value, the following applications are enabled:
opa-gatekeeper
rook-ceph
harbor
kubeops-dashboard
All other applications are disabled and will not be installed. For more information about the available packages, as well as the parameters for each package, check here.
The following file is only an example. Make sure to change the necessary values (IPs, passwords, …) before use.
2. Pull the KubeOps Compliance Applications packages
To pull the required application packages in the correct version for the release, use the following commands:
3. The KubeOps Compliance Application installation process
To install only the tools, it is important that the flag changeCluster is set to false in your cluster-values.yaml.
The following file is only an example. Make sure to change the necessary values (IPs, passwords, …) before use.
⚠ Warning
Both cluster-values.yaml and enterprise-values.yaml are required!
4. Validate your values and install the KubeOps Compliance Applications
Once you have finished defining your values, check them once again. When you are ready, start the installation process with the command:
Example 2: Installing Applications in an airgap-environment
To install the KubeOps Compliance Applications in an existing cluster follow the next steps:
1. Define the Enterprise-Value-file
In the example value, the following applications are enabled:
opa-gatekeeper
rook-ceph
harbor
kubeops-dashboard
All other applications are disabled and will not be installed. The value parameters are explained in the references and can be found here.
apiVersion: kubeops/kubeopsctl/enterprise/beta/v1
deleteNs: false
localRegistry: true # important for airgap, otherwise images are pulled from public registry
packages:
  - name: opa-gatekeeper
    enabled: true
    values:
      standard:
        namespace: opa-gatekeeper
      advanced:
  - name: rook-ceph
    enabled: true
    values:
      standard:
        namespace: rook-ceph
        cluster:
          resources:
            mgr:
              requests:
                cpu: "500m"
                memory: "512Mi"
            mon:
              requests:
                cpu: "1"
                memory: "1Gi"
            osd:
              requests:
                cpu: "1"
                memory: "1Gi"
          dashboard:
            enabled: true
        operator:
          data:
            rookLogLevel: "DEBUG"
  - name: harbor
    enabled: true
    values:
      standard:
        namespace: harbor
        harborpass: "topsecret"
        databasePassword: "topsecret"
        redisPassword: "topsecret"
        externalURL: http://10.2.10.110:30002
        nodePort: 30002
        hostname: harbor.local
        harborPersistence:
          persistentVolumeClaim:
            registry:
              size: 40Gi
              storageClass: "rook-cephfs"
            jobservice:
              jobLog:
                size: 1Gi
                storageClass: "rook-cephfs"
            database:
              size: 1Gi
              storageClass: "rook-cephfs"
            redis:
              size: 1Gi
              storageClass: "rook-cephfs"
            trivy:
              size: 5Gi
              storageClass: "rook-cephfs"
      advanced:
  - name: kubeops-dashboard
    enabled: true
    values:
      standard:
        namespace: monitoring
        hostname: kubeops-dashboard.local
        service:
          nodePort: 30007
      advanced:
  - name: filebeat-os
    enabled: false
    values:
      standard:
        namespace: logging
      advanced:
2. Pull the KubeOps Compliance Applications packages
To pull the required application packages in the correct version for the release, use the following commands:
Note
The airgap-packages will be pulled automatically
3. The KubeOps Compliance Application installation process
To install only the tools, it is important that the flag changeCluster is set to false in your cluster-values.yaml.
The following file is only an example. Make sure to change the necessary values (IPs, passwords, …) before use.
⚠ Warning
Both cluster-values.yaml and enterprise-values.yaml are required!
Note
It is possible to upgrade the Kubernetes version and install the KubeOps Compliance tools in one step.
4. Validate your values and install the KubeOps Compliance Applications
Once you have finished defining your values, check them once again. When you are ready, start the installation process with the command:
6 - Updating KubeOps Compliance applications
This guide outlines the steps to update the KubeOps Compliance applications of a cluster.
Updating KubeOps Compliance applications
There is a predefined selection of applications included with KubeOps Compliance. These applications ensure a production-ready cluster deployment and can be configured individually as needed.
By separating cluster values from application values, application values can be modified independently and installed later, providing greater flexibility and maintainability.
kubeopsctl automatically detects whether an application is already deployed and updates it accordingly.
Important
If you have made changes to your deployments, reapply them afterward or secure them using the advancedValues configuration.
⚠ Warning
To ensure proper DNS resolution, all components of the logging stack (Filebeat, Logstash, OpenSearch, and OpenSearch Dashboards) must be deployed within the same Kubernetes namespace (or network domain).
Prerequisites
KOSI Login Recommendation
Before performing any action with kubeopsctl, it is recommended to do a login with kosi.
Refer to the official KOSI documentation for details
here.
Update kubeopsctl
If you have an older kubeopsctl version installed, update it before starting to update the Compliance applications.
# kubeopsctl-version can be found under: https://packagerepo.kubeops.net/deb/pool/main/
sudo apt update
sudo apt install -y kubeopsctl=<kubeopsctl-version>
# kubeopsctl-version can be found under: https://packagerepo.kubeops.net/rpm/
sudo dnf install -y --disableexcludes=kubeops-repo <kubeopsctl-version>
# kubeopsctl-version can be found under: https://packagerepo.kubeops.net/deb/pool/main/
wget https://packagerepo.kubeops.net/deb/pool/main/<kubeopsctl-version>.deb
sudo dpkg --install <kubeopsctl-version>.deb
# kubeopsctl-versions can be found under: https://packagerepo.kubeops.net/rpm
sudo rpm -e kubeopsctl
wget https://packagerepo.kubeops.net/rpm/<kubeopsctl-version>.rpm
sudo rpm --install -v <kubeopsctl-version>.rpm
Example 1: Updating Applications in a non-airgap-environment
To update the KubeOps Compliance Applications in an existing cluster follow the next steps:
Note
All KubeOps Compliance applications versions are linked to the specific KubeOps Compliance version. A detailed version reference can be found here.
1. Define the Enterprise-Value-file
In the example value, the following applications are enabled:
opa-gatekeeper
rook-ceph
harbor
kubeops-dashboard
All other applications are disabled and will not be updated. The value parameters are explained in the references and can be found here.
2. Pull the KubeOps Compliance Applications packages
To pull the required application packages in the correct version for the release, use the following commands:
kubeopsctl pull -f enterprise-values.yaml
or
kubeopsctl pull --tools enterprise-values.yaml
3. The KubeOps Compliance Application update process
To update only the tools, it is important that the flag changeCluster is set to false in your cluster-values.yaml.
⚠ Warning
Both cluster-values.yaml and enterprise-values.yaml are required!
4. Validate your values and update the KubeOps Compliance Applications
Once you have finished defining your values, check them once again. When you are ready, start the update process with the command:
Example 2: Updating Applications in an airgap-environment
To update the KubeOps Compliance Applications in an existing cluster follow the next steps:
Note
All KubeOps Compliance applications versions are linked to the specific KubeOps Compliance version. A detailed version reference can be found here.
1. Define the Enterprise-Value-file
In the example value, the following applications are enabled:
opa-gatekeeper
rook-ceph
harbor
kubeops-dashboard
All other applications are disabled and will not be updated. The value parameters are explained in the references and can be found here.
apiVersion: kubeops/kubeopsctl/enterprise/beta/v1
deleteNs: false
localRegistry: true # important for airgap, otherwise images are pulled from public registry
packages:
  - name: opa-gatekeeper
    enabled: true
    values:
      standard:
        namespace: opa-gatekeeper
      advanced:
  - name: rook-ceph
    enabled: true
    values:
      standard:
        namespace: rook-ceph
        cluster:
          resources:
            mgr:
              requests:
                cpu: "500m"
                memory: "512Mi"
            mon:
              requests:
                cpu: "1"
                memory: "1Gi"
            osd:
              requests:
                cpu: "1"
                memory: "1Gi"
          dashboard:
            enabled: true
        operator:
          data:
            rookLogLevel: "DEBUG"
  - name: harbor
    enabled: true
    values:
      standard:
        namespace: harbor
        harborpass: "topsecret"
        databasePassword: "topsecret"
        redisPassword: "topsecret"
        externalURL: http://10.2.10.110:30002
        nodePort: 30002
        hostname: harbor.local
        harborPersistence:
          persistentVolumeClaim:
            registry:
              size: 40Gi
              storageClass: "rook-cephfs"
            jobservice:
              jobLog:
                size: 1Gi
                storageClass: "rook-cephfs"
            database:
              size: 1Gi
              storageClass: "rook-cephfs"
            redis:
              size: 1Gi
              storageClass: "rook-cephfs"
            trivy:
              size: 5Gi
              storageClass: "rook-cephfs"
      advanced:
  - name: kubeops-dashboard
    enabled: true
    values:
      standard:
        namespace: monitoring
        hostname: kubeops-dashboard.local
        service:
          nodePort: 30007
      advanced:
  - name: filebeat-os
    enabled: false
    values:
      standard:
        namespace: logging
      advanced:
2. Update kubeopsctl
If you have an older kubeopsctl version installed, update it using the following commands.
# kubeopsctl-version can be found under: https://packagerepo.kubeops.net/deb/pool/main/
sudo apt update
sudo apt install -y kubeopsctl=<kubeopsctl-version>
# kubeopsctl-version can be found under: https://packagerepo.kubeops.net/rpm/
sudo dnf install -y --disableexcludes=kubeops-repo <kubeopsctl-version>
# kubeopsctl-version can be found under: https://packagerepo.kubeops.net/deb/pool/main/
wget https://packagerepo.kubeops.net/deb/pool/main/<kubeopsctl-version>.deb
sudo dpkg --install <kubeopsctl-version>.deb
# kubeopsctl-versions can be found under: https://packagerepo.kubeops.net/rpm
sudo rpm -e kubeopsctl
wget https://packagerepo.kubeops.net/rpm/<kubeopsctl-version>.rpm
sudo rpm --install -v <kubeopsctl-version>.rpm
3. Pull the KubeOps Compliance Applications packages
To pull the required application packages in the correct version for the release, use the following commands:
kubeopsctl pull -f enterprise-values.yaml
or
kubeopsctl pull --tools enterprise-values.yaml
Note
The airgap-packages will be pulled automatically
4. The KubeOps Compliance Application update process
To update only the tools, it is important that the flag changeCluster is set to false in your cluster-values.yaml.
⚠ Warning
Both, Cluster-values.yaml and enterprise-values.yaml are required!
Note
It is possible to upgrade the Kubernetes version and install the KubeOps Compliance tools in one step.
5. Validate your values and update the KubeOps Compliance Applications
Once you have finished defining your values, check them once again. When you are ready, start the update process with the command:
7 - Harbor Deployment with CloudNativePG on Kubernetes using Kosi
Here is a brief overview of Harbor deployment with CloudNativePG on Kubernetes using Kosi.
Harbor Deployment with CloudNativePG on Kubernetes using Kosi
This guide provides detailed steps to deploy Harbor on Kubernetes using a CloudNativePG (CNPG) PostgreSQL cluster managed by the CloudNativePG operator, installed via Kosi.
Precondition: Log in to the preprod environment with Kosi before beginning.
Important:
Use the -rw service host (cloudnative-pg-rw…) for write operations.
Do not use a superuser account.
Ensure the password matches the CNPG Secret.
Access Harbor at: <your_domain_name>:30002 (or as configured)
Notes and best practices
Always decode the CNPG app Secret to obtain the correct username, password, and database name used by Harbor.
Use the cloudnative-pg-rw service for write traffic; use -ro services for reads if required.
Production recommendations (follow your organizational security policies):
Create a dedicated DB user (for example, harbor) rather than reusing the default app user.
Enable TLS for Postgres connections.
Implement backups (for example, CNPG Barman or object storage like S3).
To scale the CNPG cluster, update the instances field in the Cluster spec and reapply.
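The decode step from the notes above can be sketched as follows. The secret name cloudnative-pg-app and the jsonpath key are assumptions for illustration; use the CNPG app Secret of your own cluster.

```shell
# Sketch of decoding a Secret value; the "encoded" variable stands in for
# what a real jsonpath query would return.
encoded="aGFyYm9y"
decoded=$(printf %s "$encoded" | base64 -d)
echo "$decoded"
# On a real cluster the value would come from something like:
# kubectl get secret cloudnative-pg-app -o jsonpath='{.data.username}' | base64 -d
```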
8 - Ingress Configuration
Here is a brief overview of how you can configure your ingress manually.
Manual configuration of the Nginx-Ingress-Controller
Right now the Ingress Controller Package is not fully configured. To make complete use of the Ingress capabilities of the cluster, the user needs to manually update some of the settings of the corresponding service.
Locating the service
The service in question is called “ingress-nginx-controller” and can be found in the same namespace as the ingress package itself. To locate the service across all namespaces, you could use the following command.
kubectl get service -A | grep ingress-nginx-controller
This command should return two services, “ingress-nginx-controller” and “ingress-nginx-controller-admission”, though only the first one needs to be further adjusted.
Setting the Ingress-Controller service to type NodePort
To edit the service, you can use the following command, although the actual namespace may be different. This will change the service type to NodePort.
kubectl patch service ingress-nginx-controller -n kubeops -p '{"spec":{"type":"NodePort"}}'
Kubernetes will now automatically assign unused port numbers for the nodePort to allow HTTP and HTTPS connections to the service. These can be retrieved by running the same command used to locate the service. Alternatively, you can use the following command, which sets the port numbers 30080 and 30443 for the respective protocols. If you do so, make sure that these port numbers are not used by any other NodePort service.
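A patch along the lines described above could look like this sketch. The port names and the "kubeops" namespace are assumptions; verify them against your service before applying.

```shell
# Write the patch to a file; port names may differ in your chart version.
cat > nodeport-patch.json <<'EOF'
{"spec":{"type":"NodePort","ports":[{"name":"http","port":80,"protocol":"TCP","nodePort":30080},{"name":"https","port":443,"protocol":"TCP","nodePort":30443}]}}
EOF
# Apply it on the cluster (requires cluster access):
# kubectl patch service ingress-nginx-controller -n kubeops --patch-file nodeport-patch.json
```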
If you have access to external IPs that route to one or more cluster nodes, you can expose your Kubernetes services of any type through these addresses. The command below shows how to add an external IP address to the service, with the example value “192.168.0.1”. Keep in mind that this value has to be changed to fit your network settings.
kubectl patch service ingress-nginx-controller -n kubeops -p '{"spec":{"externalIPs":["192.168.0.1"]}}'
9 - Accessing Dashboards
A brief overview of how you can access dashboards.
Accessing Dashboards installed with KubeOps
To access an application dashboard, an SSH tunnel to one of the control planes is needed.
The following Dashboards are available and configured with the following NodePorts by default:
NodePort
32090 (if not set otherwise in the enterprise-values.yaml)
Connecting to the Dashboard
In order to connect to the dashboard an ssh tunnel has to be established. There are various tools for doing this, like the command line, putty or MobaXterm.
To establish a tunnel, the NodePort of the dashboard has to be forwarded on one of the control planes to the local machine. After that the dashboard can be accessed with localhost:<NodePort>.
Initial login credentials
No credentials are necessary for login
NodePort
30211 (if not set otherwise in the enterprise-values.yaml)
Connecting to the Dashboard
In order to connect to the dashboard an ssh tunnel has to be established. There are various tools for doing this, like the command line, putty or MobaXterm.
To establish a tunnel, the NodePort of the dashboard has to be forwarded on one of the control planes to the local machine. After that the dashboard can be accessed with localhost:<NodePort>.
Initial login credentials
username: the username set in the enterprise-values.yaml of Prometheus (default: user)
password: the password set in the enterprise-values.yaml of Prometheus (default: password)
NodePort
30050 (if not set otherwise in the enterprise-values.yaml)
Connecting to the Dashboard
In order to connect to the dashboard an ssh tunnel has to be established. There are various tools for doing this, like the command line, putty or MobaXterm.
To establish a tunnel, the NodePort of the dashboard has to be forwarded on one of the control planes to the local machine. After that the dashboard can be accessed with localhost:<NodePort>.
Initial login credentials
username: admin
password: Password@@123456
NodePort
https: 30003
Connecting to the Dashboard
In order to connect to the dashboard an ssh tunnel has to be established. There are various tools for doing this, like the command line, putty or MobaXterm.
To establish a tunnel, the NodePort of the dashboard has to be forwarded on one of the control planes to the local machine. After that the dashboard can be accessed with localhost:<NodePort>.
Initial login credentials
username: admin
password: the password set in the kubeopsvalues.yaml for the cluster creation (default: password)
NodePort
The Rook/Ceph Dashboard has no fixed NodePort yet.
To find out the NodePort used by Rook/Ceph follow these steps:
List the Services in the KubeOps namespace
kubectl get svc -n kubeops
Find the line with the service rook-ceph-mgr-dashboard-external-http
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
rook-ceph-mgr-dashboard-external-http NodePort 192.168.197.13 <none> 7000:31268/TCP 21h
Or use,
echo $(kubectl get --namespace rook-ceph -o jsonpath="{.spec.ports[0].nodePort}" services rook-ceph-mgr-dashboard-external-http)
In the example above the NodePort to connect to Rook/Ceph would be 31268.
Connecting to the Dashboard
In order to connect to the dashboard an ssh tunnel has to be established. There are various tools for doing this, like the command line, putty or MobaXterm.
To establish a tunnel, the NodePort of the dashboard has to be forwarded on one of the control planes to the local machine. After that the dashboard can be accessed with localhost:<NodePort>/ceph-dashboard/.
30007 (if not set otherwise in the enterprise-values.yaml)
Connecting to the Dashboard
In order to connect to the dashboard an ssh tunnel has to be established. There are various tools for doing this, like the command line, putty or MobaXterm.
To establish a tunnel, the NodePort of the dashboard has to be forwarded on one of the control planes to the local machine. After that the dashboard can be accessed with localhost:<NodePort>.
Initial login credentials
kubectl -n monitoring create token headlamp-admin
NodePort
30180
Connecting to the Dashboard
In order to connect to the dashboard an ssh tunnel has to be established. There are various tools for doing this, like the command line, putty or MobaXterm.
To establish a tunnel, the NodePort of the dashboard has to be forwarded on one of the control planes to the local machine. After that the dashboard can be accessed with localhost:<NodePort>/.
In order to connect to one of the dashboards, an ssh tunnel has to be established. There are various tools for doing this, like the command line, putty or MobaXterm.
To establish a tunnel, the NodePort of the dashboard has to be forwarded on one of the control planes to the local machine. After that, you can access the dashboard as described in the information panel of each dashboard above.
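The tunnel command itself can be sketched as follows; the user, the control-plane address (taken from the earlier example values), and the NodePort are placeholders to adapt.

```shell
# Placeholders: adjust the user, the control-plane address, and the NodePort.
NODEPORT=32090
TUNNEL_CMD="ssh -N -L ${NODEPORT}:localhost:${NODEPORT} root@10.2.10.110"
echo "$TUNNEL_CMD"
# While the tunnel is open, the dashboard is reachable at
# http://localhost:32090 (the path depends on the dashboard, see above).
```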
Connecting to the Dashboard via DNS
In order to connect to the dashboard via DNS, the hosts file in /etc/hosts needs the following additional entries:
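As a sketch, using the example hostnames from the enterprise values (harbor.local, kubeops-dashboard.local) and the example IP 10.2.10.110, the entries could look like this; replace both with your own values:

```
10.2.10.110   harbor.local
10.2.10.110   kubeops-dashboard.local
```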
Open the exported file internal_users.yml. Find the entry for the user whose password you want to change and replace the previous password hash with the new hash you generated in step 1. Then save the file.
Step 3: Patch the Secret with the Updated internal_users.yml Data and Restart the OpenSearch Pods
Encode the updated internal_users.yml and apply it back to the secret.
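The encode step can be sketched as follows. The secret name opensearch-securityconfig and the logging namespace are assumptions, so substitute the names from your deployment; the file written here is only stand-in content.

```shell
# Stand-in content; in practice this is your edited internal_users.yml.
printf 'admin:\n  hash: "<new-hash>"\n' > internal_users.yml
# Kubernetes Secret data must be base64-encoded without line wrapping.
ENCODED=$(base64 -w0 < internal_users.yml)
# On a real cluster, patch the encoded value back and restart the pods:
# kubectl patch secret opensearch-securityconfig -n logging \
#   --type merge -p "{\"data\":{\"internal_users.yml\":\"${ENCODED}\"}}"
# kubectl rollout restart statefulset -n logging opensearch
```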
You can also change the password by directly accessing the OpenSearch container and modifying the internal_users.yml file. This can be done by generating a new password hash using the hash.sh script inside the container, then updating the internal_users.yml file with the new hash. Finally, the securityadmin.sh script must be executed to apply the changes and update the OpenSearch cluster. However, this method is not persistent across container or pod restarts, especially in Kubernetes, unless the changes are stored in a persistent volume or backed by external storage. In contrast, changing the password using a Kubernetes secret is persistent across pod restarts, as the password information is stored in a Kubernetes secret, which is managed by the cluster and survives pod/container restarts.
11 - Backup and restore
In this article, we look at the backup procedure with Velero.
Backup and restoring artifacts
What is Velero?
Velero uses object storage to store backups and associated artifacts. It also optionally integrates supported block storage systems to snapshot your persistent volumes. Before beginning the installation process, you should identify the object storage provider and optional block storage provider(s) you’ll be using from the list of compatible providers.
Velero supports storage providers for both cloud-provider environments and on-premises environments.
Velero prerequisites:
Access to a Kubernetes cluster, v1.16 or later, with DNS and container networking enabled.
This section outlines the process for adding a certificate as trusted by downloading it from the browser and installing it in the Trusted Root Certification Authorities on Windows or Linux systems.
1. Download the certificate
As soon as Chrome issues a certificate warning, click on Not secure to the left of the address bar.
Show the certificate (Click on Certificate is not valid).
Go to Details tab.
Click Export... at the bottom and save the certificate.
As soon as Firefox issues a certificate warning, click on Advanced....
View the certificate (Click on View Certificate).
Scroll down to Miscellaneous and save the certificate.
2. Install the certificate
Press Windows + R.
Enter mmc and click OK.
Click on File > Add/Remove snap-in....
Select Certificates in the Available snap-ins list and click on Add >, then on OK. Add the snap-in.
In the tree pane, open Certificates - Current user > Trusted Root Certification Authorities, then right-click Certificates and select All tasks > Import....
The Certificate Import Wizard opens here. Click on Next.
Select the previously saved certificate and click Next.
Click Next again in the next window.
Click on Finish. If a warning pops up, click on Yes.
The program can now be closed. Console settings do not need to be saved.
Clear browser cache and restart browser.
The procedures for using a browser to import a certificate as trusted (on Linux systems) vary depending on the browser and Linux distribution used.
To manually cause a self-signed certificate to be trusted by a browser on a Linux system:
Distribution | Copy certificate here | Run following command to trust certificate
RedHat | /etc/pki/ca-trust/source/anchors/ | update-ca-trust extract
Note
If the directory does not exist, create it.
Note
If you do not have the ca-certificates package, install it with your package manager.
13 - Deploy Package On Cluster
This guide provides a simplified process for deploying packages in a Kubernetes cluster using Kosi with either the Helm or Kubectl plugin.
Deploying package on Cluster
You can install artifacts in your cluster in several ways. For this purpose, you can use these four plugins when creating a package:
helm
kubectl
cmd
Kosi
As an example, this guide installs the nginx-ingress Ingress Controller.
Using the Helm-Plugin
Prerequisite
In order to install an artifact with the Helm plugin, the Helm chart must first be downloaded. This step is not covered in this guide.
Create KOSI package
First you need to create a KOSI package. The following command creates the necessary files in the current directory:
kosi create
The downloaded Helm chart must also be located in the current directory. To customize the deployment of the Helm chart, the values.yaml file must be edited. This file can be downloaded from ArtifactHub and must be placed in the same directory as the Helm chart.
All files required by a task in the package must be named in the package.kosi file under files. The container images required by the Helm chart must also be listed in the package.kosi under containers.
In the example below, only two files are required for the installation: the Helm Chart for the nginx-ingress and the values.yaml to configure the deployment. To install nginx-ingress you will also need the nginx/nginx-ingress image with the tag 3.0.1.
To install nginx-ingress with the Helm plugin, call the plugin as shown in the example under install. The deployment configuration file is listed under values and the packed Helm chart is specified with the key tgz. Furthermore, it is also possible to specify the namespace in which the artifact should be deployed and the name of the deployment. The full documentation for the Helm plugin can be found here.
languageversion = "0.1.0";
apiversion = "kubernative/kubeops/sina/user/v4";
name = "deployExample1";
description = "It shows how to deploy an artifact to your cluster using the helm plugin.";
version = "0.1.0";
docs = "docs.tgz";
logo = "logo.png";
files = {
  valuesFile = "values.yaml";
  nginxHelmChart = "nginx-ingress-0.16.1.tgz";
}
containers = {
  nginx = ["docker.io", "nginx/nginx-ingress", "3.0.1"];
}
install {
  helm(command = "install";
    tgz = "nginx-ingress-0.16.1.tgz";
    values = "['values.yaml']";
    namespace = "dev";
    deploymentName = "nginx-ingress");
}
Once the package.kosi file has been fully configured, all files must be combined into a KOSI package. To do this, execute the following command in the directory where the package.kosi file is located.
kosi build
To make the generated kosi package available on other machines, it is pushed to the user’s private KubeOps Hub. To do this, the user must first log in to the hub.
Once the KOSI package has been created and published, it needs to be installed on the Admin node. The following command will download and execute the package. The package name and version refer to the values defined in package.kosi with the keys name and version.
For the example package, the command would be: kosi install --hub <username> <username>/deployExample1:0.1.0.
Using the Kubectl-Plugin
Prerequisite
In order to install an artifact with the Kubectl plugin, the kubeops-kubernetes-plugins package must be installed on the admin node. This step is not covered in this guide.
Create KOSI package
First you need to create a KOSI package. The following command creates the necessary files in the current directory:
kosi create
The NGINX ingress controller YAML manifest can either be automatically downloaded and applied directly with kubectl apply, or it can be downloaded manually if you want to customize the deployment. The YAML manifest can be downloaded from the NGINX GitHub repository and must be placed in the same directory as the files for the KOSI package.
All files required by a task in the package must be named in the package.kosi file under files. The container images required by the YAML manifest must also be listed in the package.kosi under containers.
In the example below, only one file is required for the installation: the YAML manifest for the nginx-ingress controller. To install nginx-ingress you will also need the registry.k8s.io/ingress-nginx/controller image with the tag v1.5.1 and the image registry.k8s.io/ingress-nginx/kube-webhook-certgen with tag v20220916-gd32f8c343.
To install nginx-ingress with the Kubectl plugin, call the plugin as shown in the example under install. The full documentation for the Kubectl plugin can be found here.
languageversion="0.1.0";
apiversion="kubernative/kubeops/sina/user/v4";
name="deployExample2";
description="It shows how to deploy an artifact to your cluster using the kubectl plugin.";
version="0.1.0";
docs="docs.tgz";
logo="logo.png";
files={
    manifest="deploy.yaml";
}
containers={
    nginx=["registry.k8s.io","ingress-nginx/controller","v1.5.1"];
    certgen=["registry.k8s.io","ingress-nginx/kube-webhook-certgen","v20220916-gd32f8c343"];
}
install{
    kubectl(operation="apply";
        flags="-f deploy.yaml";
        sudo=true;
        sudoPassword="toor");
}
Once the package.kosi file has been fully configured, all files must be combined into a KOSI package. To do this, execute the following command in the directory where the package.kosi file is located.
kosi build
To make the generated KOSI package available on other machines, it is pushed to the user’s private KubeOps Hub. To do this, the user must first log in to the hub.
Once the KOSI package has been created and published, it needs to be installed on the Admin node. The following command will download and execute the package. The package name and version refer to the values defined in package.kosi with the keys name and version.
For the example package, the command would be: kosi install --hub <username> <username>/deployExample2:0.1.0.
14 - How to migrate from nginx to traefik ingress
Installation
KubeOps supports deploying Traefik as a dynamic ingress controller and reverse proxy. This guide describes a concise, safe migration from an existing nginx-ingress controller to Traefik: how to install Traefik and how to replace the deprecated nginx-ingress deployment. The steps below show the process in order.
Prerequisites
A running Kubernetes cluster with an existing nginx-ingress controller.
1. Create values file
A values.yaml file is required for the Traefik installation:
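As a starting point, the values.yaml could look like the sketch below. It assumes the standard schema of the official Traefik Helm chart (service.type and the ports.web / ports.websecure entrypoints) and uses the default NodePorts mentioned later in this guide; verify the keys against the chart version you deploy.

```yaml
# Minimal sketch of a values.yaml for the Traefik Helm chart (keys assumed).
service:
  type: NodePort
ports:
  web:
    nodePort: 31080
  websecure:
    nodePort: 31443
```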
kubectl get pods -n traefik
kubectl get svc -n traefik
Note:
Default NodePorts (e.g. 31080 / 31443) might not be reachable.
If the default ports are not accessible, determine the ports used by ingress-nginx (e.g. 30080 / 30443) and update Traefik to use the same ports.
4. Remove old nginx-ingress deployment and service
# get version of installed nginx-ingress and its deployment name (--dname)
kosi list
# delete old nginx-ingress
kosi delete --hub kubeops kubeops/ingress-nginx:<installed_version> -f enterprise-values.yaml --dname <kosi_deployment_name>
Edit Traefik service
If nginx used specific NodePorts and you require those same ports, edit the Traefik Service:
kubectl edit svc traefik -n traefik
Update the ports to match the previous nginx NodePorts if required.
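For illustration, the relevant part of the edited Traefik Service could look like the fragment below. The port names follow Traefik's usual web/websecure entrypoints, and the NodePort values 30080 and 30443 are the example nginx ports mentioned above; substitute the ports your cluster actually used.

```yaml
# Fragment of the Traefik Service after editing (port names and values assumed).
spec:
  type: NodePort
  ports:
    - name: web
      port: 80
      targetPort: web
      nodePort: 30080
    - name: websecure
      port: 443
      targetPort: websecure
      nodePort: 30443
```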