Welcome to our comprehensive How-To Guide for using KubeOps. Whether you're a beginner aiming to understand the basics or an experienced user looking to fine-tune your skills, this guide provides detailed step-by-step instructions on how to navigate and use all the features of KubeOps effectively.
In the following sections, you will find everything from initial setup and configuration, to advanced tips and tricks that will help you get the most out of the software. Our aim is to assist you in becoming proficient with kubeops, enhancing both your productivity and your user experience.
Let's get started on your journey to mastering KubeOps!
1 - Ingress Configuration
Here is a brief overview of how you can configure your ingress manually.
Manual configuration of the Nginx-Ingress-Controller
Right now the Ingress Controller Package is not fully configured. To make complete use of the Ingress capabilities of the cluster, the user needs to manually update some of the settings of the corresponding service.
Locating the service
The service in question is called “ingress-nginx-controller” and can be found in the same namespace as the ingress package itself. To locate the service across all namespaces, you could use the following command.
kubectl get service -A | grep ingress-nginx-controller
This command should return two entries of services, “ingress-nginx-controller” and “ingress-nginx-controller-admission”, though only the first one needs to be further adjusted.
Setting the Ingress-Controller service to type NodePort
To edit the service, you can use the following command, although the actual namespace may be different. This will change the service type to NodePort.
kubectl patch service ingress-nginx-controller -n kubeops -p '{"spec":{"type":"NodePort"}}'
Kubernetes will now automatically assign unused port numbers for the nodePort to allow HTTP and HTTPS connections to the service. These can be retrieved by running the same command used to locate the service. Alternatively, you can use the following command, which sets the port numbers 30080 and 30443 for the respective protocols. If you do so, make sure that these port numbers are not in use by any other NodePort service.
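A sketch of such a patch, assuming the service lives in the kubeops namespace and keeps the default service ports 80 and 443; adjust both to your setup:
kubectl patch service ingress-nginx-controller -n kubeops -p '{"spec":{"ports":[{"port":80,"nodePort":30080},{"port":443,"nodePort":30443}]}}'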
If you have access to external IPs that route to one or more cluster nodes, you can expose Kubernetes services of any type through these addresses. The command below shows how to add an external IP address to the service, with the example value “192.168.0.1”. Keep in mind that this value has to be changed to fit your network settings.
kubectl patch service ingress-nginx-controller -n kubeops -p '{"spec":{"externalIPs":["192.168.0.1"]}}'
2 - Create Cluster
Here is a brief overview of how you can create a simple functional cluster, including prerequisites and step-by-step instructions.
How to create a working cluster?
Pre-requisites
maintenance packages installed?
network connection?
LIMAROOT set
Steps
create yaml file
create cluster with multiple nodes
add nodes to created cluster
delete nodes when needed
Once you have completed the KubeOps installation, you are ready to dive into the KubeOps-Platform.
How to use LIMA
Downloaded all maintenance packages? If yes, then you are ready to use LIMA for managing your Kubernetes clusters!
In the following sections we will walk you through a quick cluster setup and adding nodes.
So the first thing to do is to create a YAML file that contains the specifications of your cluster. Customize the file below according to your downloaded maintenance packages, e.g. the parameters kubernetesVersion, firewall, containerRuntime. Also adjust the other parameters like masterPassword, masterHost, apiEndpoint to your environment.
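A minimal createCluster.yaml sketch, assuming the lima clusterconfig format mirrors the nodeconfig format shown later in this guide; the apiVersion, spec layout, and field names are assumptions, so consult the full documentation before use:
apiVersion: lima/clusterconfig/v1alpha1   # assumption, check your LIMA documentation
clusterName: ExampleClusterName
spec:
  kubernetesVersion: 1.28.2      # must match a pulled maintenance package
  containerRuntime: containerd   # must match a pulled maintenance package
  firewall: nftables             # must match a pulled maintenance package
  masterUser: root
  masterPassword: "myPassword"
  masterHost: 10.2.1.11          # IP of the first master node
  apiEndpoint: 10.2.1.11:6443    # IP:port under which the Kubernetes API will be reachable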
Most of these parameters are optional and can be left out. If you want to know more about each parameter please refer to our Full Documentation
Set up a single node cluster
To set up a single node cluster we need our createCluster.yaml file from above.
Run the create cluster command on the admin node to create a cluster with one node.
lima create cluster -f createCluster.yaml
Done! LIMA is setting up your Kubernetes cluster. In a few minutes you will have a regular single-master cluster.
If LIMA finished successfully, you can check your Kubernetes single node cluster with kubectl get nodes.
It looks very alone and sad right? Jump to the next section to add some friends to your cluster!
Optional step
The master node which you used to set up your cluster is only suitable for an example installation or for testing. To use this node for production workloads, remove the taint from the master node.
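A sketch of removing the taint, assuming a recent Kubernetes version (older versions name the taint node-role.kubernetes.io/master instead):
# list the node name first
kubectl get nodes
# remove the control-plane NoSchedule taint (the trailing "-" removes it)
kubectl taint nodes <node-name> node-role.kubernetes.io/control-plane:NoSchedule-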
Let’s give your single node cluster some friends. What we need for this is another YAML file. We can call the YAML file whatever we want - we call it addNode.yaml.
addNode.yaml
apiVersion: lima/nodeconfig/v1alpha1
clusterName: ExampleClusterName
spec:
  masters:
    - host: 10.2.1.12
      user: root
      password: "myPassword"
  workers:
    - host: 10.2.1.13  # IP-address of the node you want to add
      user: root
      password: "myPassword"
We do not need to pull any other maintenance packages. We already did that and are using the same specifications from our single node cluster. The only thing to do is to use the create nodes command:
lima create nodes -f addNode.yaml
Done! LIMA adds the nodes to your single node cluster. After LIMA is finished, check the state of your Kubernetes cluster again with kubectl get nodes. Your master node should not be alone anymore!
3 - Importing the ELRepo Secure Boot key
This guide explains how to prepare a system with Secure Boot for using third-party kernel modules by importing the ELRepo Secure Boot key, ensuring compatibility and secure module integration.
KubeOps supports inter-node traffic encryption through the use of the calico-wireguard extension. For this to work correctly, the wireguard kernel module needs to be installed on every node in the cluster.
KubeOps distributes and installs the required software automatically. However, since these are third-party modules signed by the ELRepo community project, system administrators must import the ELRepo Secure Boot public key into their MOK (Machine Owner Key) list in order to use them on a system with Secure Boot enabled.
This only applies to RHEL 8 machines.
Download the key
The Secure Boot key must be located on every node of the cluster. It can be downloaded directly with the following command:
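Assuming the key is fetched from the official ELRepo site, the download could look like this (verify the URL against the ELRepo documentation):
curl -LO https://elrepo.org/SECURE-BOOT-KEY-elrepo.org.der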
If you are working with an airgap environment, you might need to manually distribute the file to all your nodes.
Import the key in the MOK list
With the key in place, install it by using this command:
mokutil --import SECURE-BOOT-KEY-elrepo.org.der
When prompted, enter a password of your choice. This password will be used when enrolling the key into the MOK list.
Reboot the system and enroll the key
Upon rebooting, the “Shim UEFI key management” screen appears. You will need to press any key within 10 seconds to proceed.
Enroll the key by following these steps:
- Select Enroll MOK.
- Select View key 0 to inspect the public key and other important information. Press Esc when you are done.
- Select Continue and enter the previously created password.
- When asked to enroll the keys, select OK.
- Select Reboot and restart the system.
The key has now been added to the MOK list and enrolled.
4 - Install Maintenance Packages
This guide provides an overview of installing essential maintenance packages for KubeOps clusters. It covers how to pull and manage various Kubernetes tools, dependencies, and Container Runtime Interface (CRI) packages to set up and maintain your cluster. Ensure compatibility between versions to successfully deploy your first Kubernetes environment.
Installing the essential Maintenance Packages
KubeOps provides packages for the supported Kubernetes tools. These maintenance packages help you update the Kubernetes tools on your clusters to the desired versions, along with their dependencies.
It is necessary to install the required maintenance packages to create your first Kubernetes cluster. The packages are available on kubeops hub.
So let’s get started!
Note: Be sure you have the KOSI version supported by your installed KubeOps version, otherwise you cannot pull any maintenance packages!
Commands to install a package
Following are the most common commands used on the Admin Node to get and install any maintenance package.
Use the command get maintenance to list all available maintenance packages.
lima get maintenance
This will display a list of all the available maintenance packages.
Example :
| SOFTWARE | VERSION | STATUS | SOFTWAREPACKAGE | TYPE |
| -- | -- | -- | -- | -- |
| Kubernetes | 1.24.8 | available | lima/kubernetes:1.24.8 | upgrade |
| iptablesEL8 | 1.8.4 | available | lima/iptablesel8:1.8.4 | update |
| firewalldEL8 | 0.8.2 | downloaded | lima/firewalldel8:0.8.2 | update |
Please observe and download the correct packages based on the following important columns in this table.
| Name | Description |
|-------------------------------------------|-------------------------------------------|
| SOFTWARE | The name of the software which is required for your cluster. |
| VERSION | The software version. Select the correct version based on your Kubernetes and KubeOps version. |
| SOFTWAREPACKAGE | The unique name of the maintenance package. Use this to pull the package onto your machine. |
| STATUS | Can be any of the following: |
| | - available: package is remotely available |
| | - not found: package not found |
| | - downloaded: the package is locally and remotely available |
| | - only local: package is locally available |
| | - unknown: unknown package |
Use the command pull maintenance to pull/download a package onto your machine.
lima pull maintenance <SOFTWAREPACKAGE>
It is possible to pull more than one package with a single pull invocation.
For example:
lima pull maintenance lima/kubernetes:1.23.5 lima/dockerEL7:18.09.1
List of Maintenance Packages
Following are the essential maintenance packages to be pulled. Use the above-mentioned commands to install the desired packages.
1. Kubernetes
The first step is to choose a Kubernetes version and to pull its available package.
LIMA currently supports the following Kubernetes versions:
| Minor version | Available patch versions |
| -- | -- |
| 1.26.x | 1.26.3 – 1.26.9 |
| 1.27.x | 1.27.1 – 1.27.10 |
| 1.28.x | 1.28.0 – 1.28.10 |
| 1.29.x | 1.29.0 – 1.29.5, 1.29.10, 1.29.12 |
| 1.30.x | 1.30.0, 1.30.1, 1.30.6, 1.30.8 |
| 1.31.x | 1.31.2, 1.31.4 |
| 1.32.x | 1.32.0 |
Following are the packages available for the supported Kubernetes versions.
| Kubernetes version | Available packages |
| -- | -- |
| 1.26.x | kubernetes-1.26.x |
| 1.27.x | kubernetes-1.27.x |
| 1.28.x | kubernetes-1.28.x |
| 1.29.x | kubernetes-1.29.x |
| 1.30.x | kubernetes-1.30.x |
| 1.31.x | kubernetes-1.31.x |
| 1.32.x | kubernetes-1.32.x |
2. Install Kubectl
To install Kubectl you won’t need to pull any other package.
The Kubernetes package pulled in the step above already contains the Kubectl installation file.
In the following example the downloaded package is kubernetes-1.30.1.
3. Kubernetes Dependencies
The next step is to pull the Kubernetes dependencies:
| OS | Available packages |
| -- | -- |
| RHEL 8 | kubeDependencies-EL8-1.0.4 |
| RHEL 8 | kubeDependencies-EL8-1.0.6 |
4. CRIs
Choose your CRI and pull the available packages:
| OS | CRI | Available packages |
| -- | -- | -- |
| RHEL 8 | docker | dockerEL8-20.10.2 |
| RHEL 8 | containerd | containerdEL8-1.4.3 |
| RHEL 8 | CRI-O | crioEL8-x.xx.x, crioEL8-dependencies-1.0.1, podmanEL8-18.09.1 |
Note: CRI-O packages depend on the chosen Kubernetes version. Choose the CRI-O package which matches the chosen Kubernetes version.
E.g. kubernetes-1.23.5 requires crioEL7-1.23.5
E.g. kubernetes-1.24.8 requires crioEL7-1.24.8
5. Firewall
Choose your firewall and pull the available packages:
| OS | Firewall | Available packages |
| -- | -- | -- |
| RHEL 8 | iptables | iptablesEL8-1.8.4 |
| RHEL 8 | firewalld | firewalldEL8-0.9.3 |
Example
Assuming a setup with RHEL 8, CRI-O, and Kubernetes 1.22.2, the following maintenance packages need to be installed:
kubernetes-1.22.2
kubeDependencies-EL8-1.0.2
crioEL8-1.22.2
crioEL8-dependencies-1.0.1
podmanEL8-18.09.1
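As a sketch, the corresponding pull invocation could look like this; the exact SOFTWAREPACKAGE names (lowercase lima/<name>:<version> scheme) are assumptions, so verify them with lima get maintenance first:
lima pull maintenance lima/kubernetes:1.22.2 lima/kubedependencies-el8:1.0.2 lima/crioel8:1.22.2 lima/crioel8-dependencies:1.0.1 lima/podmanel8:18.09.1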
5 - Upgrade KubeOps Software
This guide outlines the steps for upgrading KubeOps software. It covers updating essential packages, configuring kubeopsctl.yaml, removing old versions, and installing new ones. It also provides instructions for upgrading other components like rook-ceph, harbor, opensearch, and monitoring tools by modifying the configuration file and applying the updates systematically.
Upgrading KubeOps Software
1. Update essential KubeOps Packages
Update kubeops setup
Before installing the KubeOps software, create a kubeopsctl.yaml with the following parameters:
### General values for registry access ###
apiVersion: kubeops/kubeopsctl/alpha/v5  # mandatory
kubeOpsUser: "demo"  # mandatory
kubeOpsUserPassword: "Password"  # mandatory
kubeOpsUserMail: "demo@demo.net"  # mandatory
imagePullRegistry: "registry.preprod.kubernative.net/lima"  # mandatory, the registry from which the images for the cluster are pulled
localRegistry: false  # mandatory, set to true if you use a local registry
After creating the kubeopsctl.yaml, place another file on your machine to update the software:
### Values for setup configuration ###
clusterName: "example"  # mandatory
clusterUser: "root"  # mandatory
kubernetesVersion: "1.28.2"  # mandatory, check lima documentation
masterIP: 10.2.10.12  # mandatory
containerRuntime: "containerd"  # mandatory
1. Remove old KubeOps software
If you want to remove the KubeOps software, it is recommended that you use your package manager.
For RHEL environments it is yum.
If you want to remove the KubeOps software with yum, use the following commands:
yum autoremove kosi
yum autoremove lima
2. Install new KubeOps software
Now, you can install the new software with yum.
sudo yum install <kosi-rpm>
3. Upgrade kubeops software
To upgrade your kubeops software, you have to use following command:
kubeopsctl apply -f kubeopsctl.yaml
4. Maintain the old Deployment Information (optional)
After upgrading KOSI from 2.5 to 2.6, the deployment.yaml file has to be moved to the $KUBEOPSROOT directory if you want to keep old deployments.
Be sure the $KUBEOPSROOT variable is set.
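A short sketch, assuming deployment.yaml sits in the current directory; the example path is hypothetical and should be adjusted to your setup:
export KUBEOPSROOT="$HOME/kubeops"   # example value, adjust to your setup
mv deployment.yaml "$KUBEOPSROOT/"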
1. Update rook-ceph
In order to upgrade rook-ceph, you have to go into your kubeopsctl.yaml file and set rook-ceph: false to rook-ceph: true.
After that, use the command below:
kubeopsctl apply -f kubeopsctl.yaml
2. Update harbor
To update harbor, change your kubeopsctl.yaml file and set harbor: false to harbor: true.
Please set other applications to false before applying the kubeopsctl.yaml file.
3. Update opensearch
In order to update opensearch, change your kubeopsctl.yaml file and set opensearch: false to opensearch: true.
Please set other applications to false before applying the kubeopsctl.yaml file.
4. Update logstash
In order to update logstash, change your kubeopsctl.yaml file and set logstash: false to logstash: true.
Please set other applications to false before applying the kubeopsctl.yaml file.
5. Update filebeat
In order to update filebeat, change your kubeopsctl.yaml file and set filebeat: false to filebeat: true.
Please set other applications to false before applying the kubeopsctl.yaml file.
6. Update prometheus
In order to update prometheus, change your kubeopsctl.yaml file and set prometheus: false to prometheus: true.
Please set other applications to false before applying the kubeopsctl.yaml file.
7. Update opa
In order to update opa, change your kubeopsctl.yaml file and set opa: false to opa: true.
Please set other applications to false before applying the kubeopsctl.yaml file.
6 - Update postgres resources of harbor
Update postgres resources of harbor.
How to Update Harbor Advanced Parameters Using kubeopsctl
Prerequisites
Before proceeding, ensure you have:
kubeopsctl installed and configured.
Access to your Kubernetes cluster.
The necessary permissions to apply changes to the Harbor deployment.
Understanding advancedParameters
Harbor allows advanced configuration via the harborValues.advancedParameters section. This section provides fine-grained control over components such as PostgreSQL and Redis, as well as settings like logLevel, by defining resource allocations and other configurations.
Example Structure of advancedParameters
The advancedParameters section in kubeopsctl.yaml follows this structure:
harborValues:
  advancedParameters:
    postgres:
      resources:
        requests:
          memory: "512Mi"  # Minimum memory requested by PostgreSQL
          cpu: "200m"      # Minimum CPU requested by PostgreSQL
        limits:
          memory: "1Gi"    # Maximum memory PostgreSQL can use
          cpu: "500m"      # Maximum CPU PostgreSQL can use
    internal:
      redis:
        resources:
          requests:
            memory: "256Mi"  # Minimum memory requested by Redis
            cpu: "100m"      # Minimum CPU requested by Redis
          limits:
            memory: "512Mi"  # Maximum memory Redis can use
            cpu: "300m"      # Maximum CPU Redis can use
    logLevel: "debug"  # Adjust logging level for debugging purposes
postgres: Defines resource limits for the PostgreSQL database.
redis: Configures Redis instance resources.
logLevel: Allows setting the logging level.
Modify these values based on your cluster’s available resources and workload requirements.
Step 1: Update Your kubeopsctl.yaml Configuration
Ensure that your kubeopsctl.yaml file includes the harborValues.advancedParameters section. If necessary, update or add parameters to customize your Harbor deployment.
Step 2: Apply the Configuration with kubeopsctl
Once your kubeopsctl.yaml file is ready, apply the changes using the following command:
kubeopsctl apply -f kubeopsctl.yaml
This command updates the advanced parameters for the Harbor deployment.
Step 3: Verify the Changes
To confirm that the new configuration has been applied, run:
kubectl get pod -n <your-harbor-namespace> -o yaml | grep -A6 -i 'resources:'
Replace <your-harbor-namespace> with the namespace where Harbor is deployed.
Alternatively, describe any component to check the applied settings:
kubectl describe pod <component-pod-name> -n <your-harbor-namespace>
Conclusion
Using kubeopsctl, you can efficiently update various advanced parameters in your Harbor deployment. The advancedParameters section allows fine-tuned configuration for multiple components, ensuring optimal resource usage and performance.
7 - KubeOpsctl
kubeopsctl is a KubeOps tool that simplifies cluster management by allowing users to define the desired cluster state in a YAML file. After configuring the cluster's setup, the changes can be easily applied using the apply command, making it straightforward to manage updates and configurations.
KubeOpsctl
kubeopsctl is a new KubeOps tool which can be used to manage a cluster and its state easily. You simply describe a desired cluster state, and kubeopsctl creates a cluster with that state.
Using KubeOpsCtl
Using this feature is as easy as configuring the cluster yaml file with desired cluster state and details and using the apply command. Below are the detailed steps.
1. Configure Cluster/Nodes/Software using a yaml file
You need to have a cluster definition file which describes the different aspects of your cluster. This file describes only one cluster.
Full yaml syntax
apiVersion: kubeops/kubeopsctl/alpha/v5  # mandatory
kubeOpsUser: "demo"  # mandatory, change to your username
kubeOpsUserPassword: "Password"  # mandatory, change to your password
kubeOpsUserMail: "demo@demo.net"  # change to your email
imagePullRegistry: "registry.preprod.kubernative.net/kubeops"  # mandatory
localRegistry: false  # mandatory

### Values for setup configuration ###
clusterName: "testkubeopsctl"  # mandatory
clusterUser: "myuser"  # mandatory
kubernetesVersion: "1.28.2"  # mandatory, check lima documentation
masterIP: 10.2.10.31  # mandatory

# at least 3 masters and 3 workers are needed
zones:
  - name: zone1
    nodes:
      master:
        - name: cluster1master1
          ipAdress: 10.2.10.11
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi
          status: active
          kubeversion: 1.28.2
        - name: cluster1master2
          ipAdress: 10.2.10.12
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi
          status: active
          kubeversion: 1.28.2
      worker:
        - name: cluster1worker1
          ipAdress: 10.2.10.14
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi
          status: active
          kubeversion: 1.28.2
        - name: cluster1worker2
          ipAdress: 10.2.10.15
          systemCpu: 100m
          systemMemory: 100Mi
          status: active
          kubeversion: 1.28.2
  - name: zone2
    nodes:
      master:
        - name: cluster1master3
          ipAdress: 10.2.10.13
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi
          status: drained
          kubeversion: 1.28.2
      worker:
        - name: cluster1worker1
          ipAdress: 10.2.10.16
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi
          status: active
          kubeversion: 1.28.2

# set to true if you want to install it into your cluster
rook-ceph: false  # mandatory
harbor: false  # mandatory
opensearch: false  # mandatory
opensearch-dashboards: false  # mandatory
logstash: false  # mandatory
filebeat: false  # mandatory
prometheus: false  # mandatory
opa: false  # mandatory
kubeops-dashboard: false  # mandatory
certman: false  # mandatory
ingress: false  # mandatory
keycloak: false  # mandatory

### Values for Rook-Ceph ###
rookValues:
  namespace: kubeops
  nodePort: 31931  # optional, default: 31931
  cluster:
    storage:
      # Will only be used if useAllDevices is set to false and will be ignored
      # if individual devices have been specified on a node.
      deviceFilter: "^sd[a-b]"
      # This setting can be used to store metadata on a different device.
      # Only recommended if an additional metadata device is available.
      # Optional, will be overwritten by the corresponding node-level setting.
      config:
        metadataDevice: "sda"
      # Names of individual nodes in the cluster that should have their storage included.
      # Will only be used if useAllNodes is set to false.
      nodes:
        - name: "<ip-adress of node_1>"
          devices:
            - name: "sdb"
        - name: "<ip-adress of node_2>"
          deviceFilter: "^sd[a-b]"
          config:
            metadataDevice: "sda"  # optional

### Values for Postgres ###
postgrespass: "password"  # mandatory, set password for harbor postgres access
postgres:
  resources:
    requests:
      storage: 2Gi  # mandatory, depending on storage capacity

### Values for Redis ###
redispass: "password"  # mandatory, set password for harbor redis access
redis:
  resources:
    requests:
      storage: 2Gi  # mandatory, depending on storage capacity

### Values for Harbor deployment ###
## For a detailed explanation of each key see: https://artifacthub.io/packages/helm/harbor/harbor/1.8.1#configuration ##
harborValues:
  harborpass: "password"  # mandatory: set password for harbor access
  externalURL: https://10.2.10.13  # mandatory, the ip address from which harbor is accessible outside of the cluster
  nodePort: 30003
  harborPersistence:
    persistentVolumeClaim:
      registry:
        size: 40Gi  # optional, default is 40Gi
        storageClass: "rook-cephfs"  # optional, default is rook-cephfs
      jobservice:
        jobLog:
          size: 1Gi  # optional, default is 1Gi
          storageClass: "rook-cephfs"  # optional, default is rook-cephfs
      database:
        size: 1Gi  # optional, default is 1Gi
        storageClass: "rook-cephfs"  # optional, default is rook-cephfs
      redis:
        size: 1Gi  # optional, default is 1Gi
        storageClass: "rook-cephfs"  # optional, default is rook-cephfs
      trivy:
        size: 5Gi  # optional, default is 5Gi
        storageClass: "rook-cephfs"  # optional, default is rook-cephfs

### Values for filebeat deployment ###
filebeatValues:
  namespace: kubeops  # optional, default is kubeops

### Values for Logstash deployment ###
## For a detailed explanation of each key see: https://github.com/elastic/helm-charts/releases/tag/v7.16.3 ##
logstashValues:
  namespace: kubeops
  volumeClaimTemplate:
    resources:
      requests:
        storage: 1Gi  # mandatory, depending on storage capacity

### Values for OpenSearch-Dashboards deployment ###
## For a detailed explanation of each key see: https://github.com/opensearch-project/helm-charts/tree/main/charts/opensearch-dashboards ##
openSearchDashboardValues:
  namespace: kubeops
  nodePort: 30050

### Values for OpenSearch deployment ###
## For a detailed explanation of each key see: https://github.com/opensearch-project/helm-charts/tree/main/charts/opensearch ##
openSearchValues:
  namespace: kubeops
  resources:
    persistence:
      size: 4Gi  # mandatory

### Values for Prometheus deployment ###
prometheusValues:
  prometheusResources:
    nodePort: 32090

### Values for OPA deployment ###
opaValues:
  namespace: kubeops

### Values for KubeOps-Dashboard (Headlamp) deployment ###
kubeOpsDashboardValues:
  service:
    nodePort: 30007

### Values for cert-manager deployment ###
certmanValues:
  namespace: kubeops
  replicaCount: 3
  logLevel: 2

### Values for ingress-nginx deployment ###
ingressValues:
  namespace: kubeops
apiVersion: kubeops/kubeopsctl/alpha/v5  # mandatory
kubeOpsUser: "demo"  # mandatory, change to your username
kubeOpsUserPassword: "Password"  # mandatory, change to your password
kubeOpsUserMail: "demo@demo.net"  # change to your email
imagePullRegistry: "registry.preprod.kubernative.net/kubeops"  # mandatory
localRegistry: false  # mandatory

### Values for setup configuration ###
clusterName: "testkubeopsctl"  # mandatory
clusterUser: "myuser"  # mandatory
kubernetesVersion: "1.28.2"  # mandatory, check lima documentation
# masterHost: optional if you have a hostname, default value in "masterIP"
masterIP: 10.2.10.31  # mandatory
firewall: "nftables"  # mandatory, default "nftables"
pluginNetwork: "calico"  # mandatory, default "calico"
containerRuntime: "containerd"  # mandatory, default "containerd"
These are parameters for the cluster creation and for the software used by it, e.g. the container runtime for running the containers of the cluster. There are also parameters for the LIMA software (see the lima documentation for further explanation).
### Additional values for cluster configuration ###
useInsecureRegistry: false  # optional, default is false
ignoreFirewallError: false  # optional, default is false
serviceSubnet: 192.168.128.0/17  # optional, default "192.168.128.0/17"
podSubnet: 192.168.0.0/17  # optional, default "192.168.0.0/17"
debug: true  # optional, default is true
logLevel: vvvvv  # optional, default "vvvvv"
systemCpu: "1"  # optional, default "1"
systemMemory: "2G"  # optional, default "2G"
sudo: true  # optional, default is true
Also important are the networking parameters, such as the subnets for the pods and services inside the Kubernetes cluster.
# at least 3 masters and 3 workers are needed
zones:
  - name: zone1
    nodes:
      master:
        - name: cluster1master1
          ipAdress: 10.2.10.11
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi
          status: active
          kubeversion: 1.28.2
        - name: cluster1master2
          ipAdress: 10.2.10.12
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi
          status: active
          kubeversion: 1.28.2
      worker:
        - name: cluster1worker1
          ipAdress: 10.2.10.14
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi
          status: active
          kubeversion: 1.28.2
        - name: cluster1worker2
          ipAdress: 10.2.10.15
          systemCpu: 100m
          systemMemory: 100Mi
          status: active
          kubeversion: 1.28.2
  - name: zone2
    nodes:
      master:
        - name: cluster1master3
          ipAdress: 10.2.10.13
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi
          status: active
          kubeversion: 1.28.2
      worker:
        - name: cluster1worker1
          ipAdress: 10.2.10.16
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi
          status: drained
          kubeversion: 1.28.2
So here are the two zones, which contain master and worker nodes.
There are two different node states: active and drained.
There can also be two different Kubernetes versions, so updates in tranches are possible with kubeopsctl. You can also set the system memory and system CPU that the nodes reserve for Kubernetes itself. It is not possible to delete nodes with kubeopsctl; for deleting nodes you have to use lima. If you want to update in tranches, you need at least one master with the greater version.
Once you have configured the cluster changes in the yaml file, use the following command to apply the changes.
kubeopsctl apply -f kubeopsctl.yaml
8 - Backup and restore
In this article, we look at the backup procedure with Velero.
Backup and restoring artifacts
What is Velero?
Velero uses object storage to store backups and associated artifacts. It also optionally integrates supported block storage systems to snapshot your persistent volumes. Before beginning the installation process, you should identify the object storage provider and optional block storage provider(s) you’ll be using from the list of compatible providers.
Velero supports storage providers for both cloud-provider environments and on-premises environments.
Velero prerequisites:
Access to a Kubernetes cluster, v1.16 or later, with DNS and container networking enabled.
9 - Renew Certificates
LIMA enables you to renew all certificates for a specific cluster on all control-plane nodes with one command.
lima renew cert <clusterName>
Note: Renewing certificates can take several minutes, as all services using the certificates are restarted.
Here is an example to renew certificates on cluster with name “Democluster”:
lima renew cert Democluster
Note: This command renews all certificates on the existing control plane; there is no option to renew single certificates.
10 - Deploy Package On Cluster
This guide provides a simplified process for deploying packages in a Kubernetes cluster using Kosi with either the Helm or Kubectl plugin.
Deploying package on Cluster
You can install artifacts in your cluster in several ways. For this purpose, you can use these four plugins when creating a package:
helm
kubectl
cmd
Kosi
As an example, this guide installs the nginx-ingress Ingress Controller.
Using the Helm-Plugin
Prerequisite
In order to install an artifact with the Helm plugin, the Helm chart must first be downloaded. This step is not covered in this guide.
Create KOSI package
First you need to create a KOSI package. The following command creates the necessary files in the current directory:
kosi create
The downloaded Helm chart must also be located in the current directory. To customize the deployment of the Helm chart, the values.yaml file must be edited. This file can be downloaded from ArtifactHub and must be placed in the same directory as the Helm chart.
All files required by a task in the package must be named in the package.yaml file under includes.files. The container images required by the Helm chart must also be listed in the package.yaml under includes.containers. For installation, the required files and images must be listed under the installation.includes key.
In the example below, only two files are required for the installation: the Helm Chart for the nginx-ingress and the values.yaml to configure the deployment. To install nginx-ingress you will also need the nginx/nginx-ingress image with the tag 3.0.1.
To install nginx-ingress with the Helm plugin, call the plugin as shown in the example under installation.tasks. The deployment configuration file is listed under values and the packed Helm chart is specified with the key tgz. Furthermore, it is also possible to specify the namespace in which the artifact should be deployed and the name of the deployment. The full documentation for the Helm plugin can be found here.
apiversion: kubernative/kubeops/sina/user/v4
name: deployExample
description: "This Package is an example.
  It shows how to deploy an artifact to your cluster using the helm plugin."
version: 0.1.0
includes:
  files:
    config: "values.yaml"
    nginx: "nginx-ingress-0.16.1.tgz"
  containers:
    nginx-ingress:
      registry: docker.io
      image: nginx/nginx-ingress
      tag: 3.0.1
docs: docs.tgz
logo: logo.png
installation:
  includes:
    files:
      - config
      - nginx
    containers:
      - nginx-ingress
  tasks:
    - helm:
        command: "install"
        values:
          - values.yaml
        tgz: "nginx-ingress-0.16.1.tgz"
        namespace: dev
        deploymentName: nginx-ingress
...
update:
  tasks:
delete:
  tasks:
Once the package.yaml file has been fully configured, all files must be combined into a KOSI package. To do this, execute the following command in the directory where the package.yaml file is located.
kosi build
To make the generated kosi package available on other machines, it is pushed to the user’s private KubeOps Hub. To do this, the user must first log in to the hub.
Once the KOSI package has been created and published, it needs to be installed on the Admin node. The following command will download and execute the package. The package name and version refer to the values defined in package.yaml with the keys name and version.
For the example package, the command would be: kosi install --hub <username> <username>/deployExample:0.1.0.
Using the Kubectl-Plugin
Prerequisite
In order to install an artifact with the Kubectl plugin, the kubeops-kubernetes-plugins package must be installed on the admin node. This step is not covered in this guide.
Create KOSI package
First you need to create a KOSI package. The following command creates the necessary files in the current directory:
kosi create
The NGINX ingress controller YAML manifest can either be automatically downloaded and applied directly with kubectl apply, or it can be downloaded manually if you want to customize the deployment. The YAML manifest can be downloaded from the NGINX GitHub repo and must be placed in the same directory as the files for the kosi package.
All files required by a task in the package must be named in the package.yaml file under includes.files. The container images required by the YAML manifest must also be listed in the package.yaml under includes.containers. For installation, the required files and images must be listed under the installation.includes key.
In the example below, only one file is required for the installation: the YAML manifest for the nginx-ingress controller. To install nginx-ingress you will also need the registry.k8s.io/ingress-nginx/controller image with the tag v1.5.1 and the image registry.k8s.io/ingress-nginx/kube-webhook-certgen with tag v20220916-gd32f8c343.
To install nginx-ingress with the Kubectl plugin, call the plugin as shown in the example under installation.tasks. The full documentation for the Kubectl plugin can be found here.
apiversion: kubernative/kubeops/sina/user/v4
name: deployExample
description: "This Package is an example.
  It shows how to deploy an artifact to your cluster using the kubectl plugin."
version: 0.1.0
includes:
  files:
    manifest: "deploy.yaml"
  containers:
    nginx-ingress:
      registry: registry.k8s.io
      image: ingress-nginx/controller
      tag: v1.5.1
    webhook-certgen:
      registry: registry.k8s.io
      image: ingress-nginx/kube-webhook-certgen
      tag: v20220916-gd32f8c343
docs: docs.tgz
logo: logo.png
installation:
  includes:
    files:
      - manifest
    containers:
      - nginx-ingress
      - webhook-certgen
  tasks:
    - kubectl:
        operation: "apply"
        flags: " -f <absolute path>/deploy.yaml"
        sudo: true
        sudoPassword: "toor"
...
update:
  tasks:
delete:
  tasks:
Once the package.yaml file has been fully configured, all files must be combined into a KOSI package. To do this, execute the following command in the directory where the package.yaml file is located.
kosi build
To make the generated KOSI package available on other machines, it is pushed to the user’s private KubeOps Hub. To do this, the user must first log in to the hub.
Once the KOSI package has been created and published, it needs to be installed on the Admin node. The following command will download and execute the package. The package name and version refer to the values defined in package.yaml with the keys name and version.
For the example package, the command would be: kosi install --hub <username> <username>/deployExample:0.1.0.
11 - Replace Cluster Nodes
This guide explains how to replace nodes in a Kubernetes cluster using KubeOps, which involves deleting an existing node and adding a new one through a YAML configuration file.
Replace cluster nodes
This section describes how to replace cluster nodes in your cluster.
Direct replacement of nodes is not possible in KubeOps; however, you can delete the node and add a new node to the cluster, as shown in the following example.
Steps to replace a Kubernetes Node
Use the command delete on the admin node to delete the unwanted node from the cluster.
The command is:
lima delete -n <IP of your node> <name of your Cluster>
If you delete a node, its data becomes inaccessible or is erased.
Now create a new .yaml file with a configuration for the node, as shown below.
Example:
apiVersion: lima/nodeconfig/v1alpha1
clusterName: roottest
spec:
  masters: []
  workers:
    - host: 10.2.10.17  ## ip of the new node to be joined
      systemCpu: "200m"
      systemMemory: "200Mi"
      user: root
      password: toor
Lastly use the command create nodes to create and join the new node.
The command is:
lima create nodes -f <node yaml file name>
Example 1
In the following example, we will replace the node with IP 10.2.10.15 in demoCluster with a new worker node with IP 10.2.10.17:
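A sketch of the two steps, assuming the new node is described in a file called addNode.yaml like the one above:
lima delete -n 10.2.10.15 demoCluster
lima create nodes -f addNode.yaml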
If you are rejoining a master node, all other steps are the same, except that you need to add the node configuration under masters in the yaml file, as shown in the example below:
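A hedged sketch of such a file, mirroring the worker example above (cluster name, IP, and credentials are placeholders):
apiVersion: lima/nodeconfig/v1alpha1
clusterName: demoCluster
spec:
  masters:
    - host: 10.2.10.17  ## ip of the new master node to be joined
      user: root
      password: toor
  workers: []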
12 - Upgrade Kubernetes Version
This guide outlines the steps to upgrade the Kubernetes version of a cluster, specifically demonstrating how to change the version using a configuration file.
Upgrading Kubernetes version
You can use the following steps to upgrade the Kubernetes version of a cluster.
In the following example, we will upgrade the Kubernetes version of your cluster named Democluster from version 1.27.2 to version 1.28.2.
You have to create a kubeopsctl.yaml with the following YAML syntax.
apiVersion: kubeops/kubeopsctl/alpha/v5  # mandatory
kubeOpsUser: "demo"  # mandatory, change to your username
kubeOpsUserPassword: "Password"  # mandatory, change to your password
kubeOpsUserMail: "demo@demo.net"  # change to your email
imagePullRegistry: "registry.preprod.kubernative.net/kubeops"  # mandatory
localRegistry: false  # mandatory

### Values for setup configuration ###
clusterName: "Democluster"  # mandatory
clusterUser: "myuser"  # mandatory
kubernetesVersion: "1.28.2"  # mandatory, check lima documentation
masterIP: 10.2.10.11  # mandatory

### Additional values for cluster configuration
# at least 3 masters and 3 workers are needed
zones:
  - name: zone1
    nodes:
      master:
        - name: cluster1master1
          ipAdress: 10.2.10.11
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi
          status: active
          kubeversion: 1.28.2
        - name: cluster1master2
          ipAdress: 10.2.10.12
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi
          status: active
          kubeversion: 1.28.2
      worker:
        - name: cluster1worker1
          ipAdress: 10.2.10.14
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi
          status: active
          kubeversion: 1.28.2
        - name: cluster1worker2
          ipAdress: 10.2.10.15
          systemCpu: 100m
          systemMemory: 100Mi
          status: active
          kubeversion: 1.28.2
  - name: zone2
    nodes:
      master:
        - name: cluster1master3
          ipAdress: 10.2.10.13
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi
          status: drained
          kubeversion: 1.28.2
      worker:
        - name: cluster1worker1
          ipAdress: 10.2.10.16
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi
          status: active
          kubeversion: 1.28.2

# set to true if you want to install it into your cluster
rook-ceph: false  # mandatory
harbor: false  # mandatory
opensearch: false  # mandatory
opensearch-dashboards: false  # mandatory
logstash: false  # mandatory
filebeat: false  # mandatory
prometheus: false  # mandatory
opa: false  # mandatory
kubeops-dashboard: false  # mandatory
certman: false  # mandatory
ingress: false  # mandatory
keycloak: false  # mandatory
Upgrade the version
Once the kubeopsctl.yaml file is created, use the following command to change the version of your cluster:
kubeopsctl upgrade -f kubeopsctl.yaml
Rook-Ceph has no PodDisruptionBudgets, so if you drain nodes for the Kubernetes upgrade, Rook-Ceph is temporarily unavailable. You should drain only one node at a time for the Kubernetes upgrade.
13 - Change CRI
A brief overview of how you can change the Container Runtime Interface (CRI) of your cluster to one of the supported CRIs, containerd and crio.
Changing Container Runtime Interface
KubeOps enables you to change the Container Runtime Interface (CRI) of the clusters to any of the following supported CRIs
containerd
crio
You can use the following steps to change the CRI.
In the example below, we will change the CRI of the cluster with the name Democluster to containerd.
Download the desired CRI maintenance package from hub
In this case you will need package `lima/containerdlp151:1.6.6`.
To download the package use command:
lima pull maintenance lima/containerdlp151:1.6.6
Note: Packages may vary based on the OS and Kubernetes version of your machine.
To select the correct maintenance package based on your machine configuration,
refer to Installing maintenance packages
Change the CRI of your cluster.
Once the desired CRI maintenance package is downloaded, to change the CRI of your cluster use command:
lima change runtime -r containerd Democluster
In this case you want to change your runtime to containerd. The desired container runtime is specified after the -r parameter, which is required. In this example the cluster has the name Democluster, which is also required.
14 - How to delete nodes from the cluster with lima
A compact overview of how you can delete nodes from your cluster with Lima.
Note: If we want to delete a node from our Kubernetes cluster, we have to use lima.
If you are using our platform, lima is already installed by it. If this is not the case, please install lima manually.
These are the prerequisites that have to be fulfilled before we can delete a node from our cluster.
lima has to be installed
a functioning cluster must exist
If you want to remove a node from your cluster you can run the delete command on the admin node.
lima delete -n <node which should be deleted> <name of your cluster>
Note: The cluster name has to be the same as the one set under clusterName: in your cluster configuration YAML file.
For example, to delete worker node 2 with the IP address 10.2.1.9 from our existing Kubernetes cluster named example, we use the following command:
lima delete -n 10.2.1.9 example
15 - Accessing Dashboards
A brief overview of how you can access dashboards.
Accessing Dashboards installed with KubeOps
To access an application dashboard, an SSH tunnel to one of the control planes is needed.
The following Dashboards are available and configured with the following NodePorts by default:
Grafana
NodePort
30211
Initial login credentials
username: the username set in the kubeopsvalues.yaml for the cluster creation
password: the password set in the kubeopsvalues.yaml for the cluster creation
OpenSearch Dashboards
NodePort
30050
Initial login credentials
username: admin
password: Password@@123456
Harbor
NodePort
https: 30003
Initial login credentials
username: admin
password: the password set in the kubeopsvalues.yaml for the cluster creation
Rook/Ceph
The Rook/Ceph Dashboard has no fixed NodePort yet.
To find out the NodePort used by Rook/Ceph follow these steps:
List the Services in the KubeOps namespace
kubectl get svc -n kubeops
Find the line with the service rook-ceph-mgr-dashboard-external-http
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
rook-ceph-mgr-dashboard-external-http NodePort 192.168.197.13 <none> 7000:31268/TCP 21h
Or use,
echo $(kubectl get --namespace rook-ceph -o jsonpath="{.spec.ports[0].nodePort}" services rook-ceph-mgr-dashboard-external-http)
In the example above the NodePort to connect to Rook/Ceph would be 31268.
In order to connect to one of the dashboards, an SSH tunnel has to be established. There are various tools for doing this, like the command line, PuTTY, or MobaXterm.
To establish a tunnel, the NodePort of the dashboard has to be forwarded from one of the control planes to the local machine. After that the dashboard can be accessed at localhost:<Port>.
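For example, from the command line (user and IP are placeholders; 30211 is the Grafana NodePort listed above):
ssh -N -L 30211:localhost:30211 <user>@<control-plane-ip>
# then open http://localhost:30211 in your browser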
Connecting to the Dashboard via DNS
In order to connect to the dashboards via DNS, the hosts file in /etc/hosts needs the following additional entries:
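Based on the hostnames configured in kubeopsctl.yaml, the entries plausibly look like this; adjust them to the hostnames and dashboards you actually use:
<Master1-IP>  grafana.local prometheus.local opensearch.local harbor.local kubeops-dashboard.local keycloak.local rook-ceph.local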
The IP address must be the same as the address of your Master1.
16 - Replace the kubeops-cert with your own cert
This section outlines how to replace the default kubeops certificate with a custom one by creating a new certificate in a Kubernetes secret and updating the configuration accordingly.
Replace the kubeops-cert with your own cert
1. Create your own cert in a secret
In this example, a new secret with the name example-ca is created.
This command creates two files: tls.key and tls.cert:
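A hedged sketch using openssl and kubectl; the subject name, validity period, and target namespace are assumptions:
# generate a self-signed certificate and key
openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes -keyout tls.key -out tls.cert -subj "/CN=kubeops-ca"
# store both files in a Kubernetes TLS secret named example-ca
kubectl create secret tls example-ca --key tls.key --cert tls.cert -n kubeops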
17 - RPM Repository Setup Guide
This section provides a comprehensive guide on setting up a new RPM repository in KubeOps for the centralized distribution of software packages, covering prerequisites, repository setup steps, and commands for managing the repository and installing packages.
Kubeops RPM Repository Setup Guide
Setting up a new RPM repository allows for centralized, secure, and efficient distribution of software packages, simplifying installation, updates, and dependency management.
Prerequisites
To set up a new repository on your KubeOps platform, the following prerequisites must be fulfilled.
httpd (apache) server to access the repository over HTTP.
Root or administrative access to the server.
Software packages (RPM files) to include in the repository.
createrepo (an RPM package management tool) to create a new repository.
Repository Setup Steps
1. Install Required Tools
sudo yum install -y httpd createrepo
2. Create Repository Directory
When Apache is installed, the default Apache VirtualHost DocumentRoot is created at /var/www/html. Create a new repository KubeOpsRepo under the DocumentRoot.
sudo mkdir -p /var/www/html/KubeOpsRepo
3. Copy RPM Packages
Copy the RPM packages into the KubeOpsRepo repository.
Use the command below to copy packages that are already present on the host machine; otherwise, place the packages directly into KubeOpsRepo.
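For example, assuming your RPMs sit in a local directory (the path is a placeholder):
sudo cp /path/to/your/rpms/*.rpm /var/www/html/KubeOpsRepo/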
If you want to use your packages in a secure way, we recommend using GPG Signature.
How does the GPG tool work?
The GNU Privacy Guard (GPG) is used for secure communication and data integrity verification.
When gpgcheck is set to 1 (enabled), the package manager will verify the GPG signature of each package against the corresponding key in the keyring. If the package's signature matches the expected signature, the package is considered valid and can be installed. If the signature does not match or the package is not signed, the package manager will refuse to install the package or display a warning.
4. GPG Signature for the new registry
Create a GPG key and add it to the /var/www/html/KubeOpsRepo/. Check here to know how to create GPG keypairs.
Save the GPG key as RPM-GPG-KEY-KubeOpsRepo using the following commands:
cd /var/www/html/KubeOpsRepo/
gpg --armor --export > RPM-GPG-KEY-KubeOpsRepo
You can use the following command to verify the GPG key.
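One plausible check (requires gnupg 2.1.23 or newer) prints the key details from the exported file:
gpg --show-keys RPM-GPG-KEY-KubeOpsRepo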
5. Initialize the Repository
By running the createrepo command, KubeOpsRepo will be initialized.
cd /var/www/html/KubeOpsRepo/
sudo createrepo .
The newly created directory repodata contains metadata files that describe the RPM packages in the repository, including package information, dependencies, and checksums, enabling efficient package management and dependency resolution.
6. Configure the Local Repository
To install packages from KubeOpsRepo without specifying the URL every time, we can configure a local repository. If you are using a GPG signature, gpgcheck also needs to be enabled.
7. Create a Repository Configuration File
Create a new .repo configuration file (e.g. KubeOpsRepo.repo) in the /etc/yum.repos.d/ directory with the following content.
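Based on the keys described below, the file plausibly looks like this; the server URLs are placeholders:
[KubeOpsRepo]
name=KubeOps Repository
baseurl=http://<your-server>/KubeOpsRepo/
enabled=1
gpgcheck=1
gpgkey=http://<your-server>/KubeOpsRepo/RPM-GPG-KEY-KubeOpsRepo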
baseurl: The base URL of the new repository. Add your repository URL here.
name: Can be customized to a descriptive name.
enabled=1: Enables the repository.
gpgcheck=1: Enables GPG signature verification for the repository.
gpgkey: The address where your GPG key is placed.
In case you are not using GPG signature verification:
1. you can skip step 4, and
2. set gpgcheck=0 in the above configuration file.
8. Test the Local Repository
To ensure that the latest metadata for the repositories is available, you can optionally run the command below:
sudo yum makecache
To verify the repository in the repo list
You can check the repository in the repolist with the following command:
sudo yum repolist
This will list all the repositories along with information about them.
[root@cluster3admin1 ~]# yum repolist
Updating Subscription Management repositories.
repo id                              repo name
KubeOpsRepo                          KubeOps Repository
rhel-8-for-x86_64-appstream-rpms     Red Hat Enterprise Linux 8 for x86_64 - AppStream (RPMs)
rhel-8-for-x86_64-baseos-rpms        Red Hat Enterprise Linux 8 for x86_64 - BaseOS (RPMs)
To list all the packages in the repository
You can list all the packages available in KubeOpsRepo with the following command:
# To check all the packages including duplicate installed packages
sudo yum list available --disablerepo="*" --enablerepo="KubeOpsRepo" --showduplicates
# sudo yum list --showduplicates | grep KubeOpsRepo
To install packages from the repository directly
Now you can directly install packages from the KubeOpsRepo repository with the following command:
sudo yum install package_name
For example:
sudo yum install lima
18 - Add certificate as trusted
This section outlines the process for adding a certificate as trusted by downloading it from the browser and installing it in the Trusted Root Certification Authorities on Windows or Linux systems.
1. Download the certificate
As soon as Chrome issues a certificate warning, click on Not secure to the left of the address bar.
Show the certificate (Click on Certificate is not valid).
Go to Details tab.
Click Export... at the bottom and save the certificate.
As soon as Firefox issues a certificate warning, click on Advanced....
View the certificate (Click on View Certificate).
Scroll down to Miscellaneous and save the certificate.
2. Install the certificate
Press Windows + R.
Enter mmc and click OK.
Click on File > Add/Remove snap-in....
Select Certificates in the Available snap-ins list and click on Add >, then on OK. Add the snap-in.
In the tree pane, open Certificates - Current user > Trusted Root Certification Authorities, then right-click Certificates and select All tasks > Import....
The Certificate Import Wizard opens here. Click on Next.
Select the previously saved certificate and click Next.
Click Next again in the next window.
Click on Finish. If a warning pops up, click on Yes.
The program can now be closed. Console settings do not need to be saved.
Clear browser cache and restart browser.
The procedures for using a browser to import a certificate as trusted (on Linux systems) vary depending on the browser and Linux distribution used.
To manually cause a self-signed certificate to be trusted by a browser on a Linux system:
| Distribution | Copy certificate here | Run following command to trust certificate |
| -- | -- | -- |
| RedHat | /etc/pki/ca-trust/source/anchors/ | update-ca-trust extract |
Note: If the directory does not exist, create it.
Note: If you do not have the ca-certificates package, install it with your package manager.
19 - Change registry
In KubeOps you have the possibility to change the registry from A to B for the respective tools.
Changing Registry from A to B
KubeOps enables you to change the registry from A to B with the following kubeopsctl.yaml configuration:
apiVersion: kubeops/kubeopsctl/alpha/v5  # mandatory
kubeOpsUser: "demo"  # change to your username
kubeOpsUserPassword: "Password"  # change to your password
kubeOpsUserMail: "demo@demo.net"  # change to your email
imagePullRegistry: "registry.preprod.kubeops.net/kubeops"
clusterName: "example"
clusterUser: "root"
kubernetesVersion: "1.28.2"
masterIP: 10.2.10.11
firewall: "nftables"
pluginNetwork: "calico"
containerRuntime: "containerd"
localRegistry: false

# at least 3 masters and 3 workers are needed
zones:
  - name: zone1
    nodes:
      master:
        - name: cluster1master1
          ipAdress: 10.2.10.11
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi
          status: active
          kubeversion: 1.28.2
        - name: cluster1master2
          ipAdress: 10.2.10.12
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi
          status: active
          kubeversion: 1.28.2
      worker:
        - name: cluster1worker1
          ipAdress: 10.2.10.14
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi
          status: active
          kubeversion: 1.28.2
        - name: cluster1worker2
          ipAdress: 10.2.10.15
          systemCpu: 100m
          systemMemory: 100Mi
          status: active
          kubeversion: 1.28.2
  - name: zone2
    nodes:
      master:
        - name: cluster1master3
          ipAdress: 10.2.10.13
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi
          status: drained
          kubeversion: 1.28.2
      worker:
        - name: cluster1worker1
          ipAdress: 10.2.10.16
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi
          status: active
          kubeversion: 1.28.2

controlPlaneList:
  - 10.2.10.12  # use ip adress here for master2
  - 10.2.10.13  # use ip adress here for master3
workerList:
  - 10.2.10.14  # use ip adress here for worker1
  - 10.2.10.15  # use ip adress here for worker2
  - 10.2.10.16  # use ip adress here for worker3

rook-ceph: false
harbor: false
opensearch: false
opensearch-dashboards: false
logstash: false
filebeat: false
prometheus: false
opa: false
kubeops-dashboard: false
certman: false
ingress: false
keycloak: false  # mandatory, set to true if you want to install it into your cluster
velero: false
storageClass: "rook-cephfs"

rookValues:
  namespace: kubeops
  nodePort: 31931
  hostname: rook-ceph.local
  cluster:
    spec:
      dataDirHostPath: "/var/lib/rook"
      removeOSDsIfOutAndSafeToRemove: true
      storage:
        # Global filter to only select certain device names.
        # This example matches names starting with sda or sdb.
        # Will only be used if useAllDevices is set to false and will be ignored
        # if individual devices have been specified on a node.
        deviceFilter: "^sd[a-b]"
        # Names of individual nodes in the cluster that should have their storage included.
        # Will only be used if useAllNodes is set to false.
        nodes:
          - name: "<ip-adress of node_1>"
            devices:
              - name: "sdb"
          - name: "<ip-adress of node_2>"
            deviceFilter: "^sd[a-b]"
            # config:
            #   metadataDevice: "sda"
      resources:
        mgr:
          requests:
            cpu: "500m"
            memory: "1Gi"
        mon:
          requests:
            cpu: "2"
            memory: "1Gi"
        osd:
          requests:
            cpu: "2"
            memory: "1Gi"
  operator:
    data:
      rookLogLevel: "DEBUG"
  blockStorageClass:
    parameters:
      fstype: "ext4"

postgrespass: "password"  # change to your desired password
postgres:
  storageClass: "rook-cephfs"
  volumeMode: "Filesystem"
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 2Gi

redispass: "password"  # change to your desired password
redis:
  storageClass: "rook-cephfs"
  volumeMode: "Filesystem"
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 2Gi

harborValues:
  namespace: kubeops
  harborpass: "password"  # change to your desired password
  externalURL: https://10.2.10.13  # change to ip adress of master1
  nodePort: 30003
  hostname: harbor.local
  harborPersistence:
    persistentVolumeClaim:
      registry:
        size: 40Gi
        storageClass: "rook-cephfs"
      chartmuseum:
        size: 5Gi
        storageClass: "rook-cephfs"
      jobservice:
        jobLog:
          size: 1Gi
          storageClass: "rook-cephfs"
        scanDataExports:
          size: 1Gi
          storageClass: "rook-cephfs"
      database:
        size: 1Gi
        storageClass: "rook-cephfs"
      redis:
        size: 1Gi
        storageClass: "rook-cephfs"
      trivy:
        size: 5Gi
        storageClass: "rook-cephfs"

filebeatValues:
  namespace: kubeops

logstashValues:
  namespace: kubeops
  volumeClaimTemplate:
    resources:
      requests:
        storage: 1Gi
    accessModes:
      - ReadWriteMany
    storageClass: "rook-cephfs"

openSearchDashboardValues:
  namespace: kubeops
  nodePort: 30050
  hostname: opensearch.local

openSearchValues:
  namespace: kubeops
  opensearchJavaOpts: "-Xmx512M -Xms512M"
  replicas: "3"
  resources:
    requests:
      cpu: "250m"
      memory: "1024Mi"
    limits:
      cpu: "300m"
      memory: "3072Mi"
  persistence:
    size: 4Gi
    enabled: "true"
    enableInitChown: "false"
    labels:
      enabled: "false"
    storageClass: "rook-cephfs"
    accessModes:
      - "ReadWriteMany"
  securityConfig:
    enabled: false
    ### Additional values can be set, if securityConfig is enabled:
    # path: "/usr/share/opensearch/plugins/opensearch-security/securityconfig"
    # actionGroupsSecret:
    # configSecret:
    # internalUsersSecret: internal-users-config-secret
    # rolesSecret:
    # rolesMappingSecret:
    # tenantsSecret:
    # config:
    #   securityConfigSecret: ""
    #   dataComplete: true
    #   data: {}

prometheusValues:
  namespace: kubeops
  privateRegistry: false
  grafanaUsername: "user"
  grafanaPassword: "password"
  grafanaResources:
    storageClass: "rook-cephfs"
    storage: 5Gi
    nodePort: 30211
    hostname: grafana.local
  prometheusResources:
    storageClass: "rook-cephfs"
    storage: 25Gi
    retention: 10d
    retentionSize: "24GB"
    nodePort: 32090
    hostname: prometheus.local

opaValues:
  namespace: kubeops

kubeOpsDashboardValues:
  namespace: kubeops
  hostname: kubeops-dashboard.local
  service:
    nodePort: 30007

certmanValues:
  namespace: kubeops
  replicaCount: 3
  logLevel: 2

ingressValues:
  namespace: kubeops
  externalIPs: []

keycloakValues:
  namespace: "kubeops"
  storageClass: "rook-cephfs"
  nodePort: "30180"
  hostname: keycloak.local
  keycloak:
    auth:
      adminUser: admin
      adminPassword: admin
      existingSecret: ""
  postgresql:
    auth:
      postgresPassword: ""
      username: bn_keycloak
      password: ""
      database: bitnami_keycloak
      existingSecret: ""
fileBeat
In order to change the registry of filebeat, you have to go into your kubeopsctl.yaml file and set filebeat: false to filebeat: true.
openSearch-Dashboards
In order to change the registry of opensearch-dashboards, you have to go into your kubeopsctl.yaml file and set opensearch-dashboards: false to opensearch-dashboards: true.
kubeops-dashboard
In order to change the registry of kubeops-dashboard, you have to go into your kubeopsctl.yaml file and set kubeops-dashboard: false to kubeops-dashboard: true.
20 - Change OpenSearch password
You can also change the password by directly accessing the OpenSearch container and modifying the internal_users.yml file. This can be done by generating a new password hash using the hash.sh script inside the container, then updating the internal_users.yml file with the new hash. Finally, the securityadmin.sh script must be executed to apply the changes and update the OpenSearch cluster. However, this method is not persistent across container or pod restarts, especially in Kubernetes, unless the changes are stored in a persistent volume or backed by external storage. In contrast, changing the password using a Kubernetes secret is persistent across pod restarts, as the password information is stored in a Kubernetes secret, which is managed by the cluster and survives pod/container restarts.
21 - Enabling AuditLog
A brief overview of how you can enable AuditLog.
Enabling AuditLog
This guide describes the steps to enable the AuditLog in a Kubernetes cluster.
Steps to Enable the AuditLog
Create the Directory:
Navigate to the $KUBEOPSROOT/lima directory and create the auditLog folder:
mkdir -p $KUBEOPSROOT/lima/auditLog
Create the Audit Policy File:
In the $KUBEOPSROOT/lima/auditLog directory, create the policy.yaml file:
touch $KUBEOPSROOT/lima/auditLog/policy.yaml
Configure the Audit Policy:
Add the content to policy.yaml according to the official Kubernetes Audit Policy documentation. Rules can be added or removed as needed to customize the audit logs.
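As a minimal illustration — standard upstream Kubernetes audit policy syntax, not KubeOps-specific — a policy that records request metadata for all requests could look like this:
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # record user, verb, resource and timestamp for every request
  - level: Metadata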
Enable the AuditLog:
To enable the auditlog for a cluster, execute the following command:
lima change auditlog <clustername> -a true
Example:
lima change auditlog democluster -a true
Note
The auditlog can also be disabled if needed by setting the -a parameter to false:
lima change auditlog <clustername> -a false
Additional Information
More details on configuring the audit policy can be found in the official Kubernetes documentation: Audit Policy.
22 - How to set up SSH keys
Setting up SSH (Secure Shell) keys is an essential step for securely accessing remote servers without the need to enter a password each time. Here’s a short introduction on how to set up SSH keys.
To securely access the KubeOps master and worker machines, you need to create an SSH key pair (private and public key) on the admin machine. Afterwards, copy the public key onto each machine.
Install SSH Client
Most Linux distributions come with an SSH client pre-installed. If it's not installed, you can install it using your distribution's package manager.
Important
For installing new additional software you may need permissions (e.g. root or sudo).
For RHEL 8, use the following command.
sudo dnf install -y openssh-clients
Generate SSH Keys
If you do not already have an SSH key or if you want to generate a new key pair specifically for this connection, follow these steps.
Run the command
ssh-keygen
Follow the prompts to choose a file location and passphrase (optional but recommended for added security).
Copy the Public Key to the Remote Machine
To avoid password prompts every time you connect, you can authorize your public key on the remote machine.
You can copy the public key to the server's authorized keys using the command ssh-copy-id.
ssh-copy-id <username>@<remote_host>
Replace <username>@<remote_host> with your actual username and the remote machine's IP address or hostname.
If ssh-copy-id is not available, you can use the following command:
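A common fallback using standard OpenSSH tooling, assuming an RSA key at the default path:
cat ~/.ssh/id_rsa.pub | ssh <username>@<remote_host> "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"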
23 - Using an RPM Server
The key rpmServer must be added to the kubeopsctl.yaml file:
apiVersion: kubeops/kubeopsctl/alpha/v5  # mandatory
imagePullRegistry: "registry.preprod.kubernative.net/kubeops"
localRegistry: true
clusterName: "example"
kubernetesVersion: "1.28.2"
masterIP: 10.2.10.11
systemCpu: "200m"
systemMemory: "200Mi"
rpmServer: https://rpm.kubeops.net/kubeopsRepo/repodata/

zones:
  - name: zone1
    nodes:
      master:
        - name: master1
          ipAdress: 10.2.10.11
          status: active
          kubeversion: 1.28.2
        - name: master2
          ipAdress: 10.2.10.12
          status: active
          kubeversion: 1.28.2
      worker:
        - name: worker1
          ipAdress: 10.2.10.14
          status: active
          kubeversion: 1.28.2
        - name: worker2
          ipAdress: 10.2.10.15
          status: active
          kubeversion: 1.28.2
  - name: zone2
    nodes:
      master:
        - name: master3
          ipAdress: 10.2.10.13
          status: active
          kubeversion: 1.28.2
      worker:
        - name: worker3
          ipAdress: 10.2.10.16
          status: active
          kubeversion: 1.28.2

# mandatory, set to true if you want to install it into your cluster
rook-ceph: true
harbor: true
opensearch: true
opensearch-dashboards: true
logstash: true
filebeat: true
prometheus: true
opa: true
kubeops-dashboard: true
certman: true
ingress: true
keycloak: true
velero: true

harborValues:
  harborpass: "password"  # change to your desired password
  databasePassword: "Postgres_Password"  # change to your desired password
  redisPassword: "Redis_Password"
  externalURL: http://10.2.10.11:30002  # change to ip adress of master1

prometheusValues:
  grafanaUsername: "user"
  grafanaPassword: "password"

ingressValues:
  externalIPs: []

keycloakValues:
  keycloak:
    auth:
      adminUser: admin
      adminPassword: admin
  postgresql:
    auth:
      postgresPassword: ""
      username: bn_keycloak
      password: ""
      database: bitnami_keycloak
      existingSecret: ""

veleroValues:
  accessKeyId: "your_s3_storage_username"
  secretAccessKey: "your_s3_storage_password"
Accessing the RPM Server in an Air Gap Environment
If you have an air-gap environment, the following entries must be added to the specified files.
route-ens192 in /etc/sysconfig/network-scripts/
The following entry must be added to the route-ens192 file:
193.7.169.20 via <Your Gateway Address> <network interface>
For example:
193.7.169.20 via 10.2.10.1 dev ens192
hosts in /etc/
The following entry must be added to the hosts file:
193.7.169.20 rpm.kubeops.net
Test your connection
Please ensure that each node has a connection to the RPM server. The following command can be used for this:
dnf list kubeopsctl --showduplicates
24 - fix rook-ceph
repair rook-ceph when worker nodes are down
fix rook-ceph
If some worker nodes are down, you need to change the Rook-Ceph configuration if you use the parameters useAllNodes and useAllDevices.
This guide is for temporarily fixing Rook-Ceph and is not a permanent solution.
Get the tools pod:
kubectl -n <rook-ceph namespace> get pod | grep tools
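Once you have the pod name, a typical next step (a hedged sketch; namespace and pod name are placeholders) is to check the cluster health from inside the tools pod:
kubectl -n <rook-ceph namespace> exec -it <tools-pod-name> -- ceph status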