Commands, Cluster Health and Modifications

An overview of commands, cluster health, and basic modifications. This guide provides a straightforward approach to mastering kubeopsctl command line operations and ensuring the stability and performance of your cluster infrastructure.

In this quickstart you will learn about:

  • basic kubeopsctl command line operations
  • applying changes to a cluster, e.g. adding a new worker

Prerequisites

To get the most out of this guide, the following requirements should be met:

  • kubeopsctl is installed, see kubeopsctl Installation Guide
  • a cluster is installed and running
  • basic understanding of Linux environments, bash / shell
  • basic understanding of text editors, vi / nano
  • administrator privileges (root) are granted

Overview of the kubeopsctl CLI

kubeopsctl provides a set of command line operations. For more information, see here.

The main kubeopsctl commands are:

Command          Description
--version        Shows the current version of kubeopsctl.
--help           Shows an overview of all available commands.
apply            Sets up the kubeops platform with a configuration file.
change registry  Changes the currently used registry to a different one with a given configuration file.
drain            Drains a cluster, zone or node.
uncordon         Uncordons a cluster, zone or node. Existing pods remain running and new pods can be scheduled on it again.
upgrade          Upgrades the Kubernetes version of a cluster, zone or node.
status           Prints the state of a cluster.

Show kubeopsctl Version

kubeopsctl --version

Show kubeopsctl Help

kubeopsctl --help

Set up a Kubernetes Cluster or Apply Changes

Sets up the kubeops platform with a configuration file. Use the -f flag to pass the configuration file, e.g. kubeopsctl.yaml.

kubeopsctl apply -f kubeopsctl.yaml

You can also set the log level to a specific value. The default log level is Info. Available log levels are:

  • Error
  • Warning
  • Info (default)
  • Debug1
  • Debug2
  • Debug3

For example:

kubeopsctl apply -f kubeopsctl.yaml -l Debug3

The command kubeopsctl apply can also be used to modify a cluster. For more information see Apply Changes to Cluster within this document.

Change the Registry

Changes the currently used registry to a different one with a given configuration file. For example:

kubeopsctl change registry -f kubeopsctl.yaml -r 10.2.10.11/library -t localhost/library

  • The -f parameter passes the yaml configuration file.
  • The -r parameter specifies the local docker registry to which the docker images included in the package are pulled.
  • The -t parameter is used to tag the images with localhost. For the scenario that the registry of the cluster is exposed to the admin via a network-internal domain name, but this name can't be resolved by the nodes, the -t flag can be used to address the registry by its cluster-internal hostname.

Draining

For draining clusters, zones or nodes use kubeopsctl drain <type>/<name>.

To drain a cluster, replace <clustername> with the desired cluster:

kubeopsctl drain cluster/<clustername>

To drain a zone, replace <zonename> with the desired zone:

kubeopsctl drain zone/<zonename>

To drain a node, replace <nodename> with the desired node:

kubeopsctl drain node/<nodename>
Uncordoning

Uncordoning is the counterpart to draining: draining evicts all running tasks/pods so the node can be fixed or taken offline safely, while uncordoning simply reopens the node for new tasks/pods.
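
A typical node maintenance workflow combines both commands. A minimal sketch, assuming a node named worker2 (the node name is an example, not from this guide):

```shell
# Drain the node: pods are evicted and no new pods are scheduled on it.
kubeopsctl drain node/worker2

# ... perform OS patching, a reboot, or hardware maintenance here ...

# Uncordon the node: it accepts new pods again.
kubeopsctl uncordon node/worker2
```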

For uncordoning clusters, zones or nodes use kubeopsctl uncordon <type>/<name>.

To uncordon a cluster, replace <clustername> with the desired cluster:

kubeopsctl uncordon cluster/<clustername>

To uncordon a zone, replace <zonename> with the desired zone:

kubeopsctl uncordon zone/<zonename>

To uncordon a node, replace <nodename> with the desired node:

kubeopsctl uncordon node/<nodename>
Upgrading

Upgrade clusters, zones or nodes by using kubeopsctl upgrade <type>/<name> -v <version>.

To upgrade a cluster, replace <clustername> with the desired cluster:

kubeopsctl upgrade cluster/<clustername> -v 1.26.6

To upgrade a zone, replace <zonename> with the desired zone:

kubeopsctl upgrade zone/<zonename> -v 1.26.6

To upgrade a node, replace <nodename> with the desired node:

kubeopsctl upgrade node/<nodename> -v 1.26.6

Check on Health and State of Clusters

To check the health and state of a cluster, use the command kubeopsctl status cluster/<clustername>.

For example:

kubeopsctl status cluster/basiccluster

Apply Changes to Cluster

If the cluster is already running, it may be necessary to adjust settings or carry out updates.

For example:

  • when IP addresses or host names of nodes change
  • if you want to add or remove nodes

In any case, you need to modify the base configuration and apply the changed configuration file to the cluster. Changes are applied statewise; in short, only the difference is applied.

For more information, see How to Guides.
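
Because only the difference is applied, it can help to preview that difference before running kubeopsctl apply. A minimal sketch using the standard diff utility; the file names and contents below are placeholders created only for this demonstration:

```shell
# Create two placeholder configuration files in a temporary directory.
workdir=$(mktemp -d)
printf 'clusterName: "example"\n' > "$workdir/basiccluster.yml"
printf 'clusterName: "example"\nmasterIP: 10.2.10.11\n' > "$workdir/20240101-add-worker4.yml"

# diff exits non-zero when the files differ, so don't let that abort a script.
changes=$(diff "$workdir/basiccluster.yml" "$workdir/20240101-add-worker4.yml" || true)

# Lines prefixed with ">" were added in the new file.
echo "$changes"
```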

Example: Add new Worker

If you want to add a new worker to your cluster, edit the configuration file.

In this example we reuse the basic configuration basiccluster.yml from the previous chapter (see Set up a Basic Cluster). Open the configuration file in an editor:

nano ~/basiccluster.yml

Add the lines for an additional worker4 in zone2 to the configuration file, and set the desired ipAdress for the new worker.

apiVersion: kubeops/kubeopsctl/alpha/v3 # mandatory
imagePullRegistry: "registry1.kubernative.net/lima"
localRegistry: true
clusterName: "example"
kubernetesVersion: "1.30.0"
masterIP: 10.2.10.11

zones:
  - name: zone1
    nodes:
      master:
        - name: master1
          ipAdress: 10.2.10.11
          status: active
          kubeversion: 1.30.0
        - name: master2
          ipAdress: 10.2.10.12
          status: active
          kubeversion: 1.30.0
      worker:
        - name: worker1
          ipAdress: 10.2.10.14
          status: active
          kubeversion: 1.30.0
        - name: worker2
          ipAdress: 10.2.10.15
          status: active
          kubeversion: 1.26.2
  - name: zone2
    nodes:
      master:
        - name: master3
          ipAdress: 10.2.10.13
          status: active
          kubeversion: 1.30.0
      worker:
        - name: worker3
          ipAdress: 10.2.10.16
          status: active
          kubeversion: 1.30.0
        - name: worker4
          ipAdress: 10.2.10.17 # change to the desired IP address of the new worker
          status: active
          kubeversion: 1.30.0

# mandatory, set to true if you want to install it into your cluster
rook-ceph: true
harbor: true
opensearch: true
opensearch-dashboards: true
logstash: true
filebeat: true
prometheus: true
opa: true
headlamp: true
certman: true
ingress: true
keycloak: true
velero: true

rookValues:
  cluster:
    resources:
      mgr:
        requests:
          cpu: 500m
          memory: 1Gi
      mon:
        requests:
          cpu: 500m
          memory: 1Gi
      osd:
        requests:
          cpu: 500m
          memory: 1Gi

harborValues:
  harborpass: "password" # change to your desired password
  databasePassword: "Postgres_Password" # change to your desired password
  redisPassword: "Redis_Password"
  externalURL: http://10.2.10.11:30002 # change to the IP address of master1
  nodePort: 30002
  harborPersistence:
    persistentVolumeClaim:
      registry:
        size: 40Gi
      jobservice:
        jobLog:
          size: 1Gi
      database:
        size: 1Gi
      redis:
        size: 1Gi
      trivy:
        size: 5Gi

prometheusValues:
  grafanaUsername: "user"
  grafanaPassword: "password"

logstashValues:
  volumeClaimTemplate:
    resources:
      requests:
        storage: 1Gi

openSearchValues:
  persistence:
    size: 4Gi

keycloakValues:
  keycloak:
    auth:
      adminUser: admin
      adminPassword: admin
  postgresql:
    auth:
      postgresPassword: ""
      username: bn_keycloak
      password: ""
      database: bitnami_keycloak
      existingSecret: ""

veleroValues:
  namespace: "velero"
  accessKeyId: "your_s3_storage_username"
  secretAccessKey: "your_s3_storage_password"
  useNodeAgent: false
  defaultVolumesToFsBackup: false
  provider: "aws"
  bucket: "velero"
  useVolumeSnapshots: false
  backupLocationConfig:
    region: "minio"
    s3ForcePathStyle: true
    s3Url: "http://minio.velero.svc:9000"

After the modification is set up correctly, you can apply it to your cluster:

kubeopsctl apply -f ~/basiccluster.yml
Best Practices for Changes and Modifications

When you first set up a cluster, keep your base configuration file, e.g. basiccluster.yml.

Since modifications are carried out change by change, we recommend giving each new configuration file a descriptive name and adding a timestamp, if necessary.

For example:

  • basiccluster.yml
  • 20240101-add-worker4.yml
  • 20240101-drain-worker2.yml
  • 20240101-update-kubeversion.yml
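
The naming convention above can be generated with the standard date utility. A minimal sketch; the change label "add-worker4" is just an example:

```shell
# Derive a timestamped configuration-file name, following the naming
# convention above (YYYYMMDD-<change>.yml).
ts=$(date +%Y%m%d)                  # e.g. 20240101
change_file="${ts}-add-worker4.yml"
echo "$change_file"
```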