Getting Started

Begin your exploration of KubeOps Compliance, diving into its robust capabilities and streamlined workflow for Kubernetes infrastructure management.

1 - About KubeOps Compliance

This article gives you a brief overview of KubeOps Compliance and its advantages.

What is KubeOps Compliance?

kubeopsctl is a versatile utility for efficiently managing both the configuration and the state of a cluster.

With its capabilities, users can articulate their desired cluster state in detail, outlining configurations and specifications.

Subsequently, kubeopsctl orchestrates the creation of a cluster that precisely matches the specified state, ensuring alignment between intentions and operational reality.

Why use KubeOps Compliance?

In kubeopsctl, configuration management involves defining, maintaining, and updating the desired state of a cluster, including configurations for nodes, pods, services, and other resources in the application environment.

The main goal of kubeopsctl is to match the cluster's actual state with the desired state specified in the configuration files. With a declarative model, kubeopsctl enables administrators to express their desired system setup, focusing on "what" they want rather than "how" to achieve it.

This approach improves flexibility and automation in managing complex systems, making the management process smoother and allowing easy adjustment to changing needs.

kubeopsctl uses YAML files to store configuration data in a human-readable format. These files document important metadata about the objects managed by kubeopsctl, such as pods, services, and deployments.
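
As an illustration only, such a declarative configuration file could look like the sketch below. The field names are hypothetical stand-ins, not the actual kubeopsctl schema:

```yaml
# Hypothetical sketch of a declarative cluster description.
# Field names are illustrative assumptions, not the real kubeopsctl schema.
clusterName: example-cluster
masters:
  - master1
  - master2
  - master3
workers:
  - worker1
  - worker2
  - worker3
```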

Highlights

  • creating a cluster
  • adding nodes to your cluster
  • draining nodes
  • updating single nodes
  • labeling nodes for zones
  • adding platform software to your cluster

2 - Prerequisites

Prerequisites

A total of at least 7 machines are required:

  • one admin
  • at least three masters
  • at least three workers

Below you can see the supported operating systems with the associated minimal requirements for CPU, memory and disk storage:

OS                                       Minimum Requirements
Red Hat Enterprise Linux 9.6 and newer   8 CPU cores, 16 GB memory, 50 GB disk storage
Ubuntu 24.04 and newer                   8 CPU cores, 16 GB memory, 50 GB disk storage
  • For each worker node, an additional unformatted hard disk of at least 50 GB is required. For more information about the hard drives for rook-ceph, visit the rook-ceph prerequisites page

  • The same username and password must exist on every machine in the cluster. Otherwise the cluster creation and management process will fail!
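
A quick way to compare a machine against these minimums is the sketch below; it reads the values the table above requires (8 CPU cores, 16 GB memory, 50 GB disk):

```shell
# Print this machine's resources next to the KubeOps minimum requirements.
cores=$(nproc)
mem_gb=$(awk '/MemTotal/ {printf "%d", $2/1024/1024}' /proc/meminfo)
root_gb=$(df -BG --output=size / | tail -1 | tr -dc '0-9')
echo "CPU cores : $cores (need >= 8)"
echo "Memory    : ${mem_gb} GB (need >= 16)"
echo "Root disk : ${root_gb} GB (need >= 50)"
```

Note that a machine with 16 GB of RAM typically reports slightly less in /proc/meminfo, so a reading of 15 GB can still satisfy the requirement.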


Prerequisites for admin

The following requirements must be fulfilled on the admin machine.

  1. You need an internet connection to use the default KubeOps Registry:
registry.kubeops.net
In an air-gapped environment, a local registry can be used instead. KubeOps only supports secure registries. If you use an insecure registry anyway, you must list it as such in your container runtime's registry configuration (/etc/containers/registries.conf for podman, /etc/docker/daemon.json for docker).
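
For podman, for example, an insecure registry is declared in /etc/containers/registries.conf like this (the registry address below is a placeholder):

```
# /etc/containers/registries.conf
[[registry]]
location = "registry.example.local:5000"
insecure = true
```

For docker, the equivalent is an "insecure-registries" entry in /etc/docker/daemon.json, e.g. { "insecure-registries": ["registry.example.local:5000"] }.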

Prerequisites for each node

The following requirements must be fulfilled on each node.

  1. You have to assign a unique lowercase hostname to every machine you are using.
We recommend using self-explanatory hostnames.

To set the hostname on your machine use the following command:

sudo hostnamectl set-hostname <name of node>
Example

Use the command below on each machine to set its hostname, for example admin, master1, worker1 (lowercase letters and numbers only).

sudo hostnamectl set-hostname admin
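
After setting a hostname you can check that it follows the lowercase-letters-and-numbers rule:

```shell
# Warn if the current hostname contains anything other than
# lowercase letters and digits.
h=$(hostname)
case "$h" in
  *[!a-z0-9]*) echo "invalid hostname: $h" ;;
  *)           echo "hostname ok: $h" ;;
esac
```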

  2. Optional: In order to use encrypted traffic inside the cluster, follow these steps:

For RHEL machines, you will need to import the ELRepo Secure Boot key into your system. You can find a detailed explanation and comprehensive instructions in our how-to guide Importing the Secure-Boot key.

This is only necessary if your system has Secure Boot enabled. If this isn't the case, or you don't want to use any encryption at all, you can skip this step.

3 - Prepare Cluster

Prepare

1. Include package repo on all nodes

To easily install the kosi and kubeopsctl packages you should add the kubeops package repo to your operating system’s package manager.

# Ubuntu / Debian:
wget https://packagerepo.kubeops.net/pgp-key.public
cat pgp-key.public | sudo gpg --dearmor -o /usr/share/keyrings/kubeops.gpg
echo 'deb [arch=amd64 signed-by=/usr/share/keyrings/kubeops.gpg] https://packagerepo.kubeops.net/deb stable main' | sudo tee /etc/apt/sources.list.d/kubeops.list
sudo apt update

# Red Hat Enterprise Linux:
sudo dnf config-manager --add-repo https://packagerepo.kubeops.net/rpm/kubeops.repo

2. Special operating system adaptations on all nodes

You must prepare all nodes with special operating system adaptations.

# Ubuntu / Debian: remove unattended upgrades on all nodes!
sudo apt remove unattended-upgrades

# Red Hat Enterprise Linux: no special adaptations required

3. Distribute ssh-keys on all nodes

The hostnames of all nodes must be resolvable via DNS.

If you do not run a DNS server, the easiest solution is to enter the IP addresses and hostnames in /etc/hosts.

# IMPORTANT: The following command has to be adapted so that every admin, masternode and workernode is included
sudo tee /etc/hosts <<EOL_ETC_HOSTS
127.0.0.1 localhost
<admin ip> <admin hostname>
<master1 ip> <master1 hostname>
<master2 ip> <master2 hostname>
<master3 ip> <master3 hostname>
...
<worker1 ip> <worker1 hostname>
<worker2 ip> <worker2 hostname>
<worker3 ip> <worker3 hostname>
...
EOL_ETC_HOSTS
Full Example
sudo tee /etc/hosts <<EOL_ETC_HOSTS
127.0.0.1 localhost
10.2.10.10 admin
10.2.10.110 master1
10.2.10.120 master2
10.2.10.130 master3
10.2.10.210 worker1
10.2.10.220 worker2
10.2.10.230 worker3
EOL_ETC_HOSTS
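
To confirm the entries work, check that every node name resolves; adjust the list to your hostnames:

```shell
# Resolve each node name via the system resolver (DNS or /etc/hosts).
for h in admin master1 master2 master3 worker1 worker2 worker3; do
  if getent hosts "$h" >/dev/null; then
    echo "$h resolves"
  else
    echo "$h DOES NOT resolve"
  fi
done
```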

Create ssh-keys on all nodes

ssh-keygen -q -t ed25519 -f ~/.ssh/id_ed25519 -N ""

Copy public ssh-key on all nodes

# IMPORTANT: The following command has to be adapted so that every admin, masternode and workernode is included
ssh-copy-id -i ~/.ssh/id_ed25519 <admin hostname>
ssh-copy-id -i ~/.ssh/id_ed25519 <master1 hostname>
ssh-copy-id -i ~/.ssh/id_ed25519 <master2 hostname>
ssh-copy-id -i ~/.ssh/id_ed25519 <master3 hostname>
...
ssh-copy-id -i ~/.ssh/id_ed25519 <worker1 hostname>
ssh-copy-id -i ~/.ssh/id_ed25519 <worker2 hostname>
ssh-copy-id -i ~/.ssh/id_ed25519 <worker3 hostname>
...
Full Example
ssh-copy-id -i ~/.ssh/id_ed25519 admin
ssh-copy-id -i ~/.ssh/id_ed25519 master1
ssh-copy-id -i ~/.ssh/id_ed25519 master2
ssh-copy-id -i ~/.ssh/id_ed25519 master3
ssh-copy-id -i ~/.ssh/id_ed25519 worker1
ssh-copy-id -i ~/.ssh/id_ed25519 worker2
ssh-copy-id -i ~/.ssh/id_ed25519 worker3

Scan host-keys with name and IP-address on all nodes

# IMPORTANT: The following command has to be adapted so that every admin, masternode and workernode is included
ssh-keyscan <admin hostname> >> ~/.ssh/known_hosts
ssh-keyscan <admin ip> >> ~/.ssh/known_hosts
ssh-keyscan <master1 hostname> >> ~/.ssh/known_hosts
ssh-keyscan <master1 ip> >> ~/.ssh/known_hosts
ssh-keyscan <master2 hostname> >> ~/.ssh/known_hosts
ssh-keyscan <master2 ip> >> ~/.ssh/known_hosts
ssh-keyscan <master3 hostname> >> ~/.ssh/known_hosts
ssh-keyscan <master3 ip> >> ~/.ssh/known_hosts
...
ssh-keyscan <worker1 hostname> >> ~/.ssh/known_hosts
ssh-keyscan <worker1 ip> >> ~/.ssh/known_hosts
ssh-keyscan <worker2 hostname> >> ~/.ssh/known_hosts
ssh-keyscan <worker2 ip> >> ~/.ssh/known_hosts
ssh-keyscan <worker3 hostname> >> ~/.ssh/known_hosts
ssh-keyscan <worker3 ip> >> ~/.ssh/known_hosts
...
Full Example
ssh-keyscan admin >> ~/.ssh/known_hosts
ssh-keyscan 10.2.10.10 >> ~/.ssh/known_hosts
ssh-keyscan master1 >> ~/.ssh/known_hosts
ssh-keyscan 10.2.10.110 >> ~/.ssh/known_hosts
ssh-keyscan master2 >> ~/.ssh/known_hosts
ssh-keyscan 10.2.10.120 >> ~/.ssh/known_hosts
ssh-keyscan master3 >> ~/.ssh/known_hosts
ssh-keyscan 10.2.10.130 >> ~/.ssh/known_hosts
ssh-keyscan worker1 >> ~/.ssh/known_hosts
ssh-keyscan 10.2.10.210 >> ~/.ssh/known_hosts
ssh-keyscan worker2 >> ~/.ssh/known_hosts
ssh-keyscan 10.2.10.220 >> ~/.ssh/known_hosts
ssh-keyscan worker3 >> ~/.ssh/known_hosts
ssh-keyscan 10.2.10.230 >> ~/.ssh/known_hosts

Test ssh login without password

Logging in from each node should be possible without a password.

# IMPORTANT: The following command has to be adapted so that every admin, masternode and workernode is included
ssh <admin hostname> exit
ssh <admin ip> exit
ssh <master1 hostname> exit
ssh <master1 ip> exit
ssh <master2 hostname> exit
ssh <master2 ip> exit
ssh <master3 hostname> exit
ssh <master3 ip> exit
...
ssh <worker1 hostname> exit
ssh <worker1 ip> exit
ssh <worker2 hostname> exit
ssh <worker2 ip> exit
ssh <worker3 hostname> exit
ssh <worker3 ip> exit
...
Full Example
ssh admin exit
ssh 10.2.10.10 exit
ssh master1 exit
ssh 10.2.10.110 exit
ssh master2 exit
ssh 10.2.10.120 exit
ssh master3 exit
ssh 10.2.10.130 exit
ssh worker1 exit
ssh 10.2.10.210 exit
ssh worker2 exit
ssh 10.2.10.220 exit
ssh worker3 exit
ssh 10.2.10.230 exit
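
The key distribution, host-key scan, and login test above can also be run as a single loop per node. This is a sketch using the example hostnames; adjust the NODES list to your machines:

```shell
# One pass per node: copy the SSH key, record host keys for name and IP,
# then verify passwordless login.
NODES="admin master1 master2 master3 worker1 worker2 worker3"
for h in $NODES; do
  ssh-copy-id -i ~/.ssh/id_ed25519 "$h"
  ip=$(getent hosts "$h" | awk '{print $1}')
  ssh-keyscan "$h" "$ip" >> ~/.ssh/known_hosts 2>/dev/null
  ssh -o BatchMode=yes "$h" exit && echo "$h: passwordless login OK"
done
```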

4. Distribute SUDOERS on all nodes

Replace the username “myuser” with your username and copy the sudoers file to /etc/sudoers.d/<username> on all nodes.
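
One safe way to do this is to save the listing below to a local file, substitute your username, validate the result with visudo, and only then install it. The file name kubeops-sudoers is an arbitrary choice:

```shell
# Substitute the real username, validate the syntax, then install the file
# with the restrictive permissions sudoers snippets require.
sed "s/myuser/$(whoami)/g" kubeops-sudoers > /tmp/kubeops-sudoers
sudo visudo -c -f /tmp/kubeops-sudoers \
  && sudo install -m 0440 /tmp/kubeops-sudoers "/etc/sudoers.d/$(whoami)"
```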

# Preparation
myuser ALL=(root) NOPASSWD: /usr/bin/gpg --dearmor -o /usr/share/keyrings/kubeops.gpg
myuser ALL=(root) NOPASSWD: /usr/bin/tee /etc/apt/sources.list.d/kubeops.list
myuser ALL=(root) NOPASSWD: /usr/bin/apt update
myuser ALL=(root) NOPASSWD: /usr/bin/apt-get update
myuser ALL=(root) NOPASSWD: /usr/bin/apt remove unattended-upgrades
myuser ALL=(root) NOPASSWD: /usr/bin/tee /etc/hosts
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --remove *
myuser ALL=(root) NOPASSWD: /usr/bin/lsof /var/lib/dpkg/lock*

# Setup
myuser ALL=(root) NOPASSWD: /usr/bin/apt install -y kosi*
myuser ALL=(root) NOPASSWD: /usr/bin/apt-get install -y kosi*
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --install kosi*.deb
myuser ALL=(root) NOPASSWD: /usr/bin/apt install -y kubeopsctl*
myuser ALL=(root) NOPASSWD: /usr/bin/apt-get install -y kubeopsctl*
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --install kubeopsctl*.deb

# Calico image import
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images import calico-images.tar

# Calico image deletion
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images delete localhost\:5001/calico/*
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images delete localhost\:5001/tigera/*

# Cilium airgap
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images import cilium-images.tar
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images delete localhost\:5001/cilium/*
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images delete localhost\:5001/spiffe/*

# kube-vip image import
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images import --platform amd64 kube-vip-image.tar
myuser ALL=(root) NOPASSWD: /bin/cp kube-vip.yaml /etc/kubernetes/manifests/kube-vip.yaml

# systemctl commands
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl stop kubelet
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl disable kubelet
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl stop containerd
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl disable containerd

# Kubeadm init
myuser ALL=(root) NOPASSWD: /usr/bin/kubeadm init --upload-certs --config cluster-config.yaml

# kubeadm reset
myuser ALL=(root) NOPASSWD: /usr/bin/kubeadm reset --force

# remove folders
myuser ALL=(root) NOPASSWD: /bin/rm -fr /etc/containerd
myuser ALL=(root) NOPASSWD: /bin/rm -fr /etc/kubernetes
myuser ALL=(root) NOPASSWD: /bin/rm -fr /usr/local/etc/haproxy
myuser ALL=(root) NOPASSWD: /bin/rm -fr /var/lib/etcd
myuser ALL=(root) NOPASSWD: /bin/rm -fr /var/lib/kubelet
myuser ALL=(root) NOPASSWD: /bin/rm -fr /var/kubeops

# reboot
myuser ALL=(root) NOPASSWD: /sbin/reboot now

# disable swap
myuser ALL=(root) NOPASSWD: /usr/sbin/swapoff --all
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl mask swap.target
myuser ALL=(root) NOPASSWD: /bin/sed -e * -i /etc/fstab

# enable UFW
myuser ALL=(root) NOPASSWD: /usr/sbin/ufw reset
myuser ALL=(root) NOPASSWD: /usr/sbin/ufw allow 6443/tcp
myuser ALL=(root) NOPASSWD: /usr/sbin/ufw allow 2379\:2380/tcp
myuser ALL=(root) NOPASSWD: /usr/sbin/ufw allow 10250/tcp
myuser ALL=(root) NOPASSWD: /usr/sbin/ufw allow 10259/tcp
myuser ALL=(root) NOPASSWD: /usr/sbin/ufw allow 10257/tcp
myuser ALL=(root) NOPASSWD: /usr/sbin/ufw allow 10256/tcp
myuser ALL=(root) NOPASSWD: /usr/sbin/ufw allow 30000\:32767/tcp
myuser ALL=(root) NOPASSWD: /usr/sbin/ufw allow 179/tcp
myuser ALL=(root) NOPASSWD: /usr/sbin/ufw allow 4789/udp
myuser ALL=(root) NOPASSWD: /usr/sbin/ufw allow 5473/tcp
myuser ALL=(root) NOPASSWD: /usr/sbin/ufw allow 51820/udp
myuser ALL=(root) NOPASSWD: /usr/sbin/ufw allow 22/tcp
myuser ALL=(root) NOPASSWD: /usr/sbin/ufw allow 5000/tcp
myuser ALL=(root) NOPASSWD: /usr/sbin/ufw allow 5001/tcp
myuser ALL=(root) NOPASSWD: /usr/sbin/ufw allow 7443/tcp
myuser ALL=(root) NOPASSWD: /usr/sbin/ufw logging low
myuser ALL=(root) NOPASSWD: /usr/sbin/ufw enable
myuser ALL=(root) NOPASSWD: /usr/sbin/ufw reload
myuser ALL=(root) NOPASSWD: /usr/sbin/ufw status
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl restart systemd-networkd
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl disable --now ufw

# nftables enable/restart
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl enable --now nftables
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl restart nftables

# copy nftables configs
myuser ALL=(root) NOPASSWD: /bin/cp nftables.conf /etc/nftables.conf

# firewalld control
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl stop firewalld
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl disable firewalld
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl mask firewalld

# Install/update Helm
myuser ALL=(root) NOPASSWD: /bin/mkdir -p /usr/bin
myuser ALL=(root) NOPASSWD: /bin/cp helm /usr/bin/
myuser ALL=(root) NOPASSWD: /bin/chmod +x /usr/bin/helm

# Delete Helm
myuser ALL=(root) NOPASSWD: /bin/rm -f /usr/bin/helm

# k9s
myuser ALL=(root) NOPASSWD: /bin/cp k9s /usr/bin/
myuser ALL=(root) NOPASSWD: /bin/chmod +x /usr/bin/k9s
myuser ALL=(root) NOPASSWD: /bin/rm -f /usr/bin/k9s

# crictl pull images
myuser ALL=(root) NOPASSWD:SETENV: /usr/bin/crictl pull *

# kubeadm init phase and kubeadm token create commands
myuser ALL=(root) NOPASSWD: /usr/bin/kubeadm init phase upload-certs --upload-certs
myuser ALL=(root) NOPASSWD: /usr/bin/kubeadm token create --print-join-command --certificate-key *

# kubeadm join
myuser ALL=(root) NOPASSWD: /usr/bin/kubeadm join *

# kubernetes admin.conf handling
myuser ALL=(root) NOPASSWD: /bin/cp /etc/kubernetes/admin.conf /home/*/.kube/config
myuser ALL=(root) NOPASSWD: /bin/chown [0-9]*\:[0-9]* /home/*/.kube/config

# scheduler config copy
myuser ALL=(root) NOPASSWD: /bin/cp scheduler-config.yaml /etc/kubernetes/scheduler-config.yaml

# scheduler manifest patching
myuser ALL=(root) NOPASSWD: /usr/bin/grep -q * /etc/kubernetes/manifests/kube-scheduler.yaml
myuser ALL=(root) NOPASSWD: /usr/bin/sed -i * /etc/kubernetes/manifests/kube-scheduler.yaml

# restart services
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl restart containerd
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl restart kubelet
myuser ALL=(root) NOPASSWD: /usr/bin/kubeadm token create --print-join-command

# create kubernetes manifests folder
myuser ALL=(root) NOPASSWD: /bin/mkdir -p /etc/kubernetes/manifests

# modprobe br_netfilter
myuser ALL=(root) NOPASSWD: /bin/cp br_netfilter.conf /etc/modules-load.d/br_netfilter.conf
myuser ALL=(root) NOPASSWD: /bin/chmod 644 /etc/modules-load.d/br_netfilter.conf
myuser ALL=(root) NOPASSWD: /sbin/modprobe br_netfilter
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl daemon-reload

# kubevip
myuser ALL=(root) NOPASSWD: /usr/bin/ctr -n k8s.io images pull *
myuser ALL=(root) NOPASSWD: /usr/bin/kubeadm upgrade plan --ignore-preflight-errors=all
myuser ALL=(root) NOPASSWD: /usr/bin/kubeadm upgrade apply *

# upgrade nodes
myuser ALL=(root) NOPASSWD: /bin/cp /home/*/.kube/config /etc/kubernetes/admin.conf
myuser ALL=(root) NOPASSWD: /usr/bin/kubeadm upgrade node

# kubernetes-tools-packages
myuser ALL=(root) NOPASSWD: /bin/mkdir -p /opt/cni/bin
myuser ALL=(root) NOPASSWD: /bin/tar xzf cni-* -C /opt/cni/bin
myuser ALL=(root) NOPASSWD: /bin/tar xzf crictl-* -C /usr/bin
myuser ALL=(root) NOPASSWD: /bin/rm -f /etc/cni/net.d/87-podman-bridge.conflist

myuser ALL=(root) NOPASSWD: /usr/bin/dpkg install -y kubeadm*
myuser ALL=(root) NOPASSWD: /bin/cp kubeadm /usr/bin/kubeadm
myuser ALL=(root) NOPASSWD: /bin/chmod +x /usr/bin/kubeadm
myuser ALL=(root) NOPASSWD: /bin/test -f /usr/bin/kubelet

myuser ALL=(root) NOPASSWD: /bin/mv /usr/bin/kubelet /usr/bin/kubelet_*
myuser ALL=(root) NOPASSWD: /bin/cp kubelet /usr/bin/kubelet
myuser ALL=(root) NOPASSWD: /bin/chmod +x /usr/bin/kubelet

myuser ALL=(root) NOPASSWD: /usr/bin/dpkg install -y kubectl*
myuser ALL=(root) NOPASSWD: /bin/cp kubectl /usr/bin/kubectl
myuser ALL=(root) NOPASSWD: /bin/chmod +x /usr/bin/kubectl

myuser ALL=(root) NOPASSWD: /usr/bin/dpkg install -y kubelet*
myuser ALL=(root) NOPASSWD: /bin/cp kubelet.service /usr/lib/systemd/system/kubelet.service
myuser ALL=(root) NOPASSWD: /bin/mkdir -p /usr/lib/systemd/system/kubelet.service.d
myuser ALL=(root) NOPASSWD: /bin/cp 10-kubeadm.conf /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl enable --now kubelet

myuser ALL=(root) NOPASSWD: /usr/bin/apt-mark hold kubelet kubeadm kubectl
myuser ALL=(root) NOPASSWD: /usr/bin/apt-mark unhold kubelet kubeadm kubectl
myuser ALL=(root) NOPASSWD: /usr/bin/apt install -y kubelet* kubeadm* kubectl*

# kubernetes-tools-packages-airgap
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images import kubernetes-images.tar
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images delete localhost\:5001/kube-apiserver\:v*
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images delete localhost\:5001/kube-controller-manager\:v*
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images delete localhost\:5001/kube-scheduler\:v*
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images delete localhost\:5001/kube-proxy\:v*
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images delete localhost\:5001/coredns/coredns\:v*
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images delete localhost\:5001/etcd\:*

# Allow HAProxy and load-balancer
myuser ALL=(root) NOPASSWD: /bin/mkdir -p /usr/local/etc/haproxy
myuser ALL=(root) NOPASSWD: /bin/cp haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
myuser ALL=(root) NOPASSWD: /bin/cp load-balancer.yaml /etc/kubernetes/manifests/load-balancer.yaml
myuser ALL=(root) NOPASSWD: /bin/mkdir -p /mnt/registry
myuser ALL=(root) NOPASSWD: /bin/cp docker-registry.yaml /etc/kubernetes/manifests/docker-registry.yaml
myuser ALL=(root) NOPASSWD: /bin/rm /etc/kubernetes/manifests/docker-registry.yaml
myuser ALL=(root) NOPASSWD: /bin/rm -rf /mnt/registry
myuser ALL=(root) NOPASSWD: /usr/bin/crictl --namespace k8s.io images import local-registry-image.tar
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images import local-registry-image.tar
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images import --platform amd64 load-balancer-image.tar

# Allow multus-airgap
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images import multus-images.tar
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images delete localhost\:5001/k8snetworkplumbingwg/multus-cni\:snapshot-thick

# Podman installation from local .deb packages
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --install passt_*.deb
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --install conmon_*.deb
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --install catatonit_*.deb
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --install netavark_*.deb
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --install aardvark-dns_*.deb
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --install golang-github-containers-image_*.deb
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --install golang-github-containers-common_*.deb
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --install containernetworking-plugins_*.deb
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --install libsubid4_*.deb
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --install uidmap_*.deb
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --install libslirp0_*.deb
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --install slirp4netns_*.deb
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --install libyajl2_*.deb
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --install crun_*.deb
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --install fuse-overlayfs_*.deb
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --install buildah_*.deb
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --install podman_*.deb

# Podman removal of local .deb packages
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --remove podman buildah fuse-overlayfs crun libyajl2 slirp4netns libslirp0 uidmap libsubid4 containernetworking-plugins golang-github-containers-common golang-github-containers-image aardvark-dns netavark catatonit conmon passt

# Podman Installation & Update with apt
myuser ALL=(root) NOPASSWD: /usr/bin/apt install -y podman*
myuser ALL=(root) NOPASSWD: /usr/bin/apt install -y podman
myuser ALL=(root) NOPASSWD: /usr/bin/apt update
myuser ALL=(root) NOPASSWD: /usr/bin/apt remove -y podman

# Allow prepare-node
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --install */pia/kosi_*

# runtime setup
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --install conntrack_*.deb
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --install runc_*.deb
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --install containerd_*.deb
myuser ALL=(root) NOPASSWD: /usr/bin/apt install -y containerd

# Enable repo and install containerd.io
myuser ALL=(root) NOPASSWD: /usr/bin/apt install -y containerd.io
myuser ALL=(root) NOPASSWD: /usr/bin/apt install -y conntrack-tools
myuser ALL=(root) NOPASSWD: /usr/bin/apt install -y iproute-tc

# Remove packages
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --remove containerd

# Create directories
myuser ALL=(root) NOPASSWD: /bin/mkdir -p /etc/containerd
myuser ALL=(root) NOPASSWD: /bin/mkdir -p /etc/containerd/certs.d

# Configure containerd
myuser ALL=(root) NOPASSWD: /usr/bin/tee /etc/containerd/config.toml
myuser ALL=(root) NOPASSWD: /bin/sed -i * /etc/containerd/config.toml

# Allow containerd insecure registry
myuser ALL=(root) NOPASSWD: /usr/bin/sed -i -e * /etc/containerd/config.toml

# Enable and start containerd service
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl enable --now containerd

# Allow managing k8s sysctl configuration
myuser ALL=(root) NOPASSWD: /bin/cp k8s.conf /etc/sysctl.d/k8s.conf
myuser ALL=(root) NOPASSWD: /usr/sbin/sysctl --system
myuser ALL=(root) NOPASSWD: /bin/rm /etc/sysctl.d/k8s.conf

# Allow WireGuard package management
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --install wireguard_*.deb wireguard-tools_*.deb
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --install wireguard-tools-*.deb systemd-resolved-*.deb
myuser ALL=(root) NOPASSWD: /usr/bin/dpkg --remove wireguard wireguard-tools
myuser ALL=(root) NOPASSWD: /usr/bin/apt install -y wireguard-tools*
myuser ALL=(root) NOPASSWD: /usr/bin/apt install -y systemd-resolved*

# velero install on admin
myuser ALL=(root) NOPASSWD: /usr/bin/cp binary/velero /usr/bin

The entries above apply to Ubuntu/Debian nodes. On RHEL nodes, the equivalent sudoers file uses dnf and rpm:

# Preparation
myuser ALL=(root) NOPASSWD: /usr/bin/dnf config-manager --add-repo https\://packagerepo.kubeops.net/rpm/kubeops.repo
myuser ALL=(root) NOPASSWD: /usr/bin/tee /etc/hosts

# Setup
myuser ALL=(root) NOPASSWD: /usr/bin/dnf install -y --disableexcludes=kubeops-repo kosi*, !/usr/bin/dnf install -y --disableexcludes=kubeops-repo kosi*[[\:space\:]]*
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --install -v kosi*.rpm, !/usr/bin/rpm --install -v kosi*[[\:space\:]]*.rpm
myuser ALL=(root) NOPASSWD: /usr/bin/dnf install -y --disableexcludes=kubeops-repo kubeopsctl*, !/usr/bin/dnf install -y --disableexcludes=kubeops-repo kubeopsctl*[[\:space\:]]*
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --install -v kubeopsctl*.rpm, !/usr/bin/rpm --install -v kubeopsctl*[[\:space\:]]*.rpm

# Calico image import
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images import calico-images.tar

# Calico image deletion
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images delete localhost\:5001/calico/*, !/usr/bin/ctr --namespace k8s.io images delete localhost\:5001/calico/*[[\:space\:]]*
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images delete localhost\:5001/tigera/*, !/usr/bin/ctr --namespace k8s.io images delete localhost\:5001/tigera/*[[\:space\:]]*

# Cilium image import
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images import cilium-images.tar

# Cilium image delete
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images delete localhost\:5001/cilium/*, !/usr/bin/ctr --namespace k8s.io images delete localhost\:5001/cilium/*[[\:space\:]]*
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images delete localhost\:5001/spiffe/*, !/usr/bin/ctr --namespace k8s.io images delete localhost\:5001/spiffe/*[[\:space\:]]*

# kube-vip image import
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images import --platform amd64 kube-vip-image.tar
myuser ALL=(root) NOPASSWD: /bin/cp kube-vip.yaml /etc/kubernetes/manifests/kube-vip.yaml

# systemctl commands
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl stop kubelet
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl disable kubelet
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl stop containerd
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl disable containerd

# kubeadm reset
myuser ALL=(root) NOPASSWD: /usr/bin/kubeadm reset --force
myuser ALL=(root) NOPASSWD: /usr/bin/kubeadm init --upload-certs --config cluster-config.yaml

# remove folders
myuser ALL=(root) NOPASSWD: /bin/rm -fr /etc/containerd
myuser ALL=(root) NOPASSWD: /bin/rm -fr /etc/kubernetes
myuser ALL=(root) NOPASSWD: /bin/rm -fr /usr/local/etc/haproxy
myuser ALL=(root) NOPASSWD: /bin/rm -fr /var/lib/etcd
myuser ALL=(root) NOPASSWD: /bin/rm -fr /var/lib/kubelet
myuser ALL=(root) NOPASSWD: /bin/rm -fr /var/kubeops

# reboot
myuser ALL=(root) NOPASSWD: /sbin/reboot now

# disable swap
myuser ALL=(root) NOPASSWD: /usr/sbin/swapoff --all
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl mask swap.target
myuser ALL=(root) NOPASSWD: /bin/sed -e * -i /etc/fstab

# nftables enable/restart
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl enable --now nftables
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl restart nftables

# copy nftables configs
myuser ALL=(root) NOPASSWD: /bin/cp nftables.conf /etc/sysconfig/nftables.conf

# firewalld control
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl stop firewalld
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl disable firewalld
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl mask firewalld

# Install/update Helm
myuser ALL=(root) NOPASSWD: /bin/mkdir -p /usr/bin
myuser ALL=(root) NOPASSWD: /bin/cp helm /usr/bin/
myuser ALL=(root) NOPASSWD: /bin/chmod +x /usr/bin/helm
myuser ALL=(root) NOPASSWD: /usr/bin/dnf install -y helm* 

# Delete Helm
myuser ALL=(root) NOPASSWD: /bin/rm -f /usr/bin/helm

# k9s/package.kosi
myuser ALL=(root) NOPASSWD: /bin/cp k9s /usr/bin/
myuser ALL=(root) NOPASSWD: /bin/chmod +x /usr/bin/k9s
myuser ALL=(root) NOPASSWD: /bin/rm -f /usr/bin/k9s

# crictl pull images
myuser ALL=(root) NOPASSWD:SETENV: /usr/bin/crictl pull *

# ssh remote kubeadm commands
myuser ALL=(root) NOPASSWD: /usr/bin/kubeadm init phase upload-certs --upload-certs
myuser ALL=(root) NOPASSWD: /usr/bin/kubeadm token create --print-join-command --certificate-key *

# local execution of kubeadm join
myuser ALL=(root) NOPASSWD: /usr/bin/kubeadm join *

# kubernetes admin.conf handling
myuser ALL=(root) NOPASSWD: /bin/cp /etc/kubernetes/admin.conf /home/*/.kube/config
myuser ALL=(root) NOPASSWD: /bin/chown [0-9]*\:[0-9]* /home/*/.kube/config

# scheduler config copy
myuser ALL=(root) NOPASSWD: /bin/cp scheduler-config.yaml /etc/kubernetes/scheduler-config.yaml

# scheduler manifest patching
myuser ALL=(root) NOPASSWD: /bin/grep -q * /etc/kubernetes/manifests/kube-scheduler.yaml
myuser ALL=(root) NOPASSWD: /usr/bin/sed -i * /etc/kubernetes/manifests/kube-scheduler.yaml

# restart services
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl restart containerd
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl restart kubelet
myuser ALL=(root) NOPASSWD: /usr/bin/kubeadm token create --print-join-command

# create kubernetes manifests folder
myuser ALL=(root) NOPASSWD: /bin/mkdir -p /etc/kubernetes/manifests

# modprobe br_netfilter
myuser ALL=(root) NOPASSWD: /bin/cp br_netfilter.conf /etc/modules-load.d/br_netfilter.conf
myuser ALL=(root) NOPASSWD: /bin/chmod 644 /etc/modules-load.d/br_netfilter.conf
myuser ALL=(root) NOPASSWD: /sbin/modprobe br_netfilter
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl daemon-reload

# kubevip
myuser ALL=(root) NOPASSWD: /usr/bin/ctr -n k8s.io images pull *, !/usr/bin/ctr -n k8s.io images pull *[[\:space\:]]*
myuser ALL=(root) NOPASSWD: /usr/bin/kubeadm upgrade plan --ignore-preflight-errors=all
myuser ALL=(root) NOPASSWD: /usr/bin/kubeadm upgrade apply *, !/usr/bin/kubeadm upgrade apply *[[\:space\:]]*

# kubeadm upgrade node
myuser ALL=(root) NOPASSWD: /bin/cp /home/*/.kube/config /etc/kubernetes/admin.conf
myuser ALL=(root) NOPASSWD: /usr/bin/kubeadm upgrade node

# kubernetes-tools-packages
myuser ALL=(root) NOPASSWD: /bin/mkdir -p /opt/cni/bin
myuser ALL=(root) NOPASSWD: /bin/tar xzf cni-* -C /opt/cni/bin, !/bin/tar xzf cni-*[[\:space\:]]* -C /opt/cni/bin
myuser ALL=(root) NOPASSWD: /bin/tar xzf crictl-* -C /usr/bin, !/bin/tar xzf crictl-*[[\:space\:]]* -C /usr/bin
myuser ALL=(root) NOPASSWD: /usr/bin/dnf install -y kubeadm*, !/usr/bin/dnf install -y kubeadm*[[\:space\:]]*, !/usr/bin/dnf install -y kubeadm*.rpm
myuser ALL=(root) NOPASSWD: /bin/cp kubeadm /usr/bin/kubeadm
myuser ALL=(root) NOPASSWD: /bin/chmod +x /usr/bin/kubeadm
myuser ALL=(root) NOPASSWD: /bin/test -f /usr/bin/kubelet
myuser ALL=(root) NOPASSWD: /bin/mv /usr/bin/kubelet /usr/bin/kubelet_*, !/bin/mv /usr/bin/kubelet /usr/bin/kubelet_*[[\:space\:]]*
myuser ALL=(root) NOPASSWD: /bin/cp kubelet /usr/bin/kubelet
myuser ALL=(root) NOPASSWD: /bin/chmod +x /usr/bin/kubelet
myuser ALL=(root) NOPASSWD: /usr/bin/dnf install -y kubectl*, !/usr/bin/dnf install -y kubectl*[[\:space\:]]*, !/usr/bin/dnf install -y kubectl*.rpm
myuser ALL=(root) NOPASSWD: /bin/cp kubectl /usr/bin/kubectl
myuser ALL=(root) NOPASSWD: /bin/chmod +x /usr/bin/kubectl
myuser ALL=(root) NOPASSWD: /usr/bin/dnf install -y kubelet*, !/usr/bin/dnf install -y kubelet*[[\:space\:]]*, !/usr/bin/dnf install -y kubelet*.rpm
myuser ALL=(root) NOPASSWD: /bin/cp kubelet.service /usr/lib/systemd/system/kubelet.service
myuser ALL=(root) NOPASSWD: /bin/mkdir -p /usr/lib/systemd/system/kubelet.service.d
myuser ALL=(root) NOPASSWD: /bin/cp 10-kubeadm.conf /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl enable --now kubelet
myuser ALL=(root) NOPASSWD: /usr/bin/dnf install -y --disableexcludes=kubeops-repo kubelet-* kubeadm-* kubectl-*

# kubernetes-tools-packages-airgap
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images import kubernetes-images.tar
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images delete localhost\:5001/kube-apiserver\:v*
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images delete localhost\:5001/kube-controller-manager\:v*
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images delete localhost\:5001/kube-scheduler\:v*
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images delete localhost\:5001/kube-proxy\:v*
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images delete localhost\:5001/coredns/coredns\:v*
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images delete localhost\:5001/etcd\:*

# Allow HAProxy and load-balancer
myuser ALL=(root) NOPASSWD: /bin/mkdir -p /usr/local/etc/haproxy
myuser ALL=(root) NOPASSWD: /bin/cp haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
myuser ALL=(root) NOPASSWD: /bin/cp load-balancer.yaml /etc/kubernetes/manifests/load-balancer.yaml
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images import --platform amd64 load-balancer-image.tar
myuser ALL=(root) NOPASSWD: /bin/mkdir -p /mnt/registry
myuser ALL=(root) NOPASSWD: /bin/cp docker-registry.yaml /etc/kubernetes/manifests/docker-registry.yaml
myuser ALL=(root) NOPASSWD: /bin/rm /etc/kubernetes/manifests/docker-registry.yaml
myuser ALL=(root) NOPASSWD: /bin/rm -rf /mnt/registry
myuser ALL=(root) NOPASSWD: /usr/bin/crictl --namespace k8s.io images import local-registry-image.tar
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images import local-registry-image.tar

# Allow multus-airgap
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images import multus-images.tar
myuser ALL=(root) NOPASSWD: /usr/bin/ctr --namespace k8s.io images delete localhost\:5001/k8snetworkplumbingwg/multus-cni\:snapshot-thick

# Podman Installation of local .rpm-Package
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --install passt-*.rpm
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --install passt-selinux-*.rpm
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --install aardvark-dns-*.rpm
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --install netavark-*.rpm
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --install container-selinux-*.rpm
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --install libnet-*.rpm
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --install criu-*.rpm
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --install criu-libs-*.rpm
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --install libslirp-*.rpm
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --install slirp4netns-*.rpm
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --install yajl-*.rpm
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --install crun-*.rpm
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --install containers-common-*.rpm
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --install fuse-overlayfs-*.rpm
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --install shadow-utils-subid-*.rpm
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --install conmon-*.rpm
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --install podman-*.rpm
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --install iproute-*.rpm iproute-tc-*.rpm
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --install containerd.io-*.rpm
 
# Podman Remove of local .rpm-Package
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --erase podman shadow-utils-subid fuse-overlayfs crun containers-common yajl slirp4netns libslirp criu-libs criu libnet container-selinux netavark aardvark-dns passt passt-selinux

# Podman Installation & Update with dnf
myuser ALL=(root) NOPASSWD: /usr/bin/dnf install -y podman*, !/usr/bin/dnf install -y podman*[[\:space\:]]*
myuser ALL=(root) NOPASSWD: /usr/bin/dnf update -y podman*, !/usr/bin/dnf update -y podman*[[\:space\:]]*
myuser ALL=(root) NOPASSWD: /usr/bin/dnf remove -y podman

# Allow prepare-node
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --install */pia/kosi-*

# Allow installation of container-runtime
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --install conntrack-tools-*.rpm, !/usr/bin/rpm --install conntrack-tools-*[[\:space\:]]*.rpm
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --install conntrack-tools-*.rpm libnetfilter_cthelper-*.rpm libnetfilter_cttimeout-*.rpm libnetfilter_queue-*.rpm
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --install iproute-tc-*.rpm, !/usr/bin/rpm --install iproute-tc-*[[\:space\:]]*.rpm

# Enable repo and install containerd.io
myuser ALL=(root) NOPASSWD: /usr/bin/dnf install -y containerd.io
myuser ALL=(root) NOPASSWD: /usr/bin/dnf install -y conntrack-tools
myuser ALL=(root) NOPASSWD: /usr/bin/dnf install -y iproute-tc

# Remove RPM packages
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --erase containerd.io

# Create directories
myuser ALL=(root) NOPASSWD: /bin/mkdir -p /etc/containerd
myuser ALL=(root) NOPASSWD: /bin/mkdir -p /etc/containerd/certs.d
myuser ALL=(root) NOPASSWD: /bin/mkdir -p /etc/systemd/system/containerd.service.d/
myuser ALL=(root) NOPASSWD: /usr/bin/tee /etc/systemd/system/containerd.service.d/override.conf*

# Configure containerd
myuser ALL=(root) NOPASSWD: /usr/bin/tee /etc/containerd/config.toml
myuser ALL=(root) NOPASSWD: /bin/sed -i * /etc/containerd/config.toml

# Allow containerd insecure registry
myuser ALL=(root) NOPASSWD: /usr/bin/sed -i -e * /etc/containerd/config.toml

# Enable and start containerd service
myuser ALL=(root) NOPASSWD: /usr/bin/systemctl enable --now containerd

# Allow managing k8s sysctl configuration
myuser ALL=(root) NOPASSWD: /bin/cp k8s.conf /etc/sysctl.d/k8s.conf
myuser ALL=(root) NOPASSWD: /usr/sbin/sysctl --system
myuser ALL=(root) NOPASSWD: /bin/rm /etc/sysctl.d/k8s.conf

# Allow WireGuard package management
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --install wireguard-tools-*.rpm systemd-resolved-*.rpm
myuser ALL=(root) NOPASSWD: /usr/bin/rpm --erase wireguard-tools
myuser ALL=(root) NOPASSWD: /usr/bin/dnf install -y wireguard-tools*, !/usr/bin/dnf install -y wireguard-tools*[[\:space\:]]*, !/usr/bin/dnf install -y wireguard-tools*.rpm
myuser ALL=(root) NOPASSWD: /usr/bin/dnf install -y systemd-resolved*, !/usr/bin/dnf install -y systemd-resolved*[[\:space\:]]*, !/usr/bin/dnf install -y systemd-resolved*.rpm

# velero install on admin
myuser ALL=(root) NOPASSWD: /usr/bin/cp binary/velero /usr/bin
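Before continuing, it can help to verify that the command paths referenced in these sudoers rules actually exist on each node, since binary locations differ between distributions. A minimal sketch; the helper name check_bins is illustrative and not part of KubeOps:

```shell
# check_bins reports, for each path given, whether an executable exists there.
check_bins() {
  for bin in "$@"; do
    if [ -x "$bin" ]; then
      echo "ok: $bin"
    else
      echo "missing: $bin"
    fi
  done
}

# Example paths taken from the rules above; adjust them to your distribution:
#   check_bins /usr/bin/dnf /usr/bin/rpm /usr/bin/ctr /usr/bin/systemctl /usr/sbin/sysctl
```

Any line starting with "missing:" points at a rule that would never match on that node.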

5. Configure time synchronization on all nodes

  1. Install chrony

Run on every cluster node:

# Debian/Ubuntu:
sudo apt install -y chrony
sudo systemctl enable --now chrony
# RHEL-based distributions:
sudo dnf install -y chrony
sudo systemctl enable --now chronyd

  2. Configure NTP servers

Edit /etc/chrony.conf:

server pool.ntp.org iburst
# or your internal NTP servers:
# server 10.2.10.10 iburst

Apply changes:

sudo systemctl restart chronyd

  3. Verify synchronization

chronyc tracking
chronyc sources -v

Expected:

Stratum ≤ 3
Leap Status = Normal
System time offset < 10 ms
One source marked with *

Example:

Reference ID    : 83BC03DC (ntp0.rrze.uni-erlangen.de)
Stratum         : 2
System time     : 0.000718 seconds slow
Last offset     : -0.000214 seconds
RMS offset      : 0.000093 seconds
Leap status     : Normal

Meaning:

  • Reference ID → you are synchronizing with ntp0.rrze.uni-erlangen.de → good
  • Stratum 2 → completely normal for public NTP servers
  • System time slow: 0.0007s → deviation < 1 ms → excellent
  • Last/RMS offset → very stable synchronization
  • Leap status Normal → no time jumps, everything OK
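The expected criteria above can also be checked in a small script. The following sketch parses the output of chronyc tracking on stdin and assumes the English output format; the helper name check_tracking is illustrative:

```shell
# check_tracking reads `chronyc tracking` output on stdin and verifies:
# stratum <= 3, |last offset| < 10 ms, leap status Normal.
check_tracking() {
  awk -F': *' '
    /^Stratum/     { if ($2+0 > 3)       { print "stratum too high: " $2; bad=1 } }
    /^Last offset/ { off = $2 + 0; if (off < 0) off = -off
                     if (off > 0.010)    { print "offset too large: " $2; bad=1 } }
    /^Leap status/ { if ($2 != "Normal") { print "leap status: " $2; bad=1 } }
    END { if (!bad) print "time sync OK"; exit bad }
  '
}

# Usage on a node:
#   chronyc tracking | check_tracking
```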

6. Install curl on Admin Node

How to install and configure curl

Install curl

Run on the admin node:

# Debian/Ubuntu:
sudo apt install -y curl
# RHEL-based distributions:
sudo dnf install -y curl


Once all nodes are prepared, you can start setting up the cluster.

4 - Setup Cluster

Setup Cluster

Important: the following commands have to be executed on your admin node

1. Install KOSI

# Debian/Ubuntu:
sudo apt update
sudo apt install -y kosi=2.13*
# RHEL-based distributions:
sudo dnf install -y --disableexcludes=kubeops-repo kosi-2.13.0.2-0
# Alternatively, download the kosi .deb manually and install it with
sudo dpkg --install kosi_2.13.0.2-1_amd64.deb
# or download the kosi .rpm manually and install it with
sudo rpm --install -v kosi-2.13.0.2-0.x86_64.rpm

2. Set the KUBEOPSROOT env var

Set KUBEOPSROOT and XDG_RUNTIME_DIR in ~/.bashrc

# file ~/.bashrc
# Append these values to the end of your ~/.bashrc file
export KUBEOPSROOT=/home/<yourUser>/kubeops
export XDG_RUNTIME_DIR=$KUBEOPSROOT

Source .bashrc to apply the values

source ~/.bashrc
echo $KUBEOPSROOT
echo $XDG_RUNTIME_DIR

As a result, you should see your KUBEOPSROOT path printed twice.
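To avoid hard-to-diagnose failures in later steps, you can guard against unset variables with a small check. This is a sketch; require_env is an illustrative helper, not a KubeOps command:

```shell
# require_env fails if any of the named environment variables is empty or unset.
require_env() {
  missing=0
  for var in "$@"; do
    eval "val=\${$var:-}"
    if [ -z "$val" ]; then
      echo "not set: $var" >&2
      missing=1
    fi
  done
  return $missing
}

# Usage before running kosi or kubeopsctl:
#   require_env KUBEOPSROOT XDG_RUNTIME_DIR && echo "environment looks good"
```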

3. Adjust KOSI Configuration

This creates a kubeops directory in your home directory and transfers all necessary files, e.g., the kosi-config and the plugins, to it.

mkdir ~/kubeops
cd ~/kubeops
cp -R /var/kubeops/kosi/ .
cp -R /var/kubeops/plugins/ .

The config.yaml is in your KUBEOPSROOT path (typically ~/kubeops/kosi).

  • Set hub in your kosi config to hub: https://dispatcher.kubeops.net/v4/dispatcher/
  • Set the “plugins” entry in your kosi config to plugins: /home/<yourUser>/kubeops/plugins, where <yourUser> is replaced with your username
# file $KUBEOPSROOT/kosi/config.yaml
apiversion: kubernative/sina/config/v2

spec:
  hub: https://dispatcher.kubeops.net/v4/dispatcher/ # <-- set hub url
  plugins: <your kubeopsroot>/kubeops/plugins/ # <-- set the path to your plugin folder (~ and $KUBEOPSROOT do not work; it has to be the full path)
  workspace: /tmp/kosi/process/
  logging: info
  housekeeping: false
  proxy: false
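Because the plugins entry must be a full path, a quick check after editing can catch mistakes early. A sketch, assuming the config format shown above; check_plugins_path is an illustrative name:

```shell
# check_plugins_path extracts the plugins entry from a kosi config file
# and fails unless it is an absolute path (~ and $KUBEOPSROOT are rejected).
check_plugins_path() {
  path=$(sed -n 's/^ *plugins: *//p' "$1" | sed 's/ *#.*//' | head -n 1)
  case "$path" in
    /*) echo "plugins path OK: $path" ;;
    *)  echo "plugins path must be absolute, got: $path" >&2; return 1 ;;
  esac
}

# Usage:
#   check_plugins_path "$KUBEOPSROOT/kosi/config.yaml"
```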

4. Install KOSI enterprise plugins

kosi install --hub kosi-enterprise kosi/enterprise-plugins:2.0.0

5. Login with your user

kosi login -u <yourUser>

At this point it is normal if you get the following error message:
Error: The login to registry is temporary not available. Please try again later.
The reason for this is that podman is not yet installed.

6. Install kubeopsctl

# Debian/Ubuntu:
sudo apt update
sudo apt install -y kubeopsctl=2.0*
# RHEL-based distributions:
sudo dnf install -y --disableexcludes=kubeops-repo kubeopsctl-2.0.1.0
# Alternatively, download the kubeopsctl .deb manually from https://kubeops.net and install it with
sudo dpkg --install kubeopsctl_2.0.1.0-1_amd64.deb
# or download the kubeopsctl .rpm manually from https://kubeops.net and install it with
sudo rpm --install -v kubeopsctl-2.0.1.0-0.x86_64.rpm

7. Create a cluster-values.yaml configuration file

# file cluster-values.yaml
apiVersion: kubeops/kubeopsctl/cluster/beta/v1
imagePullRegistry: registry.kubeops.net/kubeops/kubeops
airgap: false
clusterName: <your cluster name>
clusterUser: <your user name>
kubernetesVersion: <your kubernetesversion>
kubeVipEnabled: false
virtualIP: <your master1 ip>
firewall: nftables
pluginNetwork: calico
containerRuntime: containerd
kubeOpsRoot: <your kubeopsroot path>
serviceSubnet: 192.168.128.0/17
podSubnet: 192.168.0.0/17
debug: true
packageRepository: https://packagerepo.kubeops.net/
changeCluster: true
zones:
# IMPORTANT: The following part has to be adapted so that every one of your master and worker nodes is included
# This file only includes the minimum required number of masters and workers and an example usage of zones
# Adapt this part to your number of masters and workers and cluster them into as many zones as you like
- name: zone1
  nodes:
  - name: <your master1 hostname>
    iPAddress: <your master1 ip>
    type: controlplane
    kubeVersion: <kubernetesversion from above>
  - name: <your worker1 hostname>
    iPAddress: <your worker1 ip>
    type: worker
    kubeVersion: <kubernetesversion from above>
- name: zone2
  nodes:
  - name: <your master2 hostname>
    iPAddress: <your master2 ip>
    type: controlplane
    kubeVersion: <kubernetesversion from above>
  - name: <your worker2 hostname>
    iPAddress: <your worker2 ip>
    type: worker
    kubeVersion: <kubernetesversion from above>
- name: zone3
  nodes:
  - name: <your master3 hostname>
    iPAddress: <your master3 ip>
    type: controlplane
    kubeVersion: <kubernetesversion from above>
  - name: <your worker3 hostname>
    iPAddress: <your worker3 ip>
    type: worker
    kubeVersion: <kubernetesversion from above>

Full Example
# file cluster-values.yaml
apiVersion: kubeops/kubeopsctl/cluster/beta/v1
imagePullRegistry: registry.kubeops.net/kubeops/kubeops
airgap: false
clusterName: myCluster
clusterUser: myuser
kubernetesVersion: 1.32.2
kubeVipEnabled: false
virtualIP: 10.2.10.110
firewall: nftables
pluginNetwork: calico
containerRuntime: containerd
kubeOpsRoot: /home/myuser/kubeops
serviceSubnet: 192.168.128.0/17
podSubnet: 192.168.0.0/17
debug: true
systemCpu: 250m
systemMemory: 256Mi
packageRepository: https://packagerepo.kubeops.net/
changeCluster: true
zones:
- name: zone1
  nodes:
  - name: dev07-master1-ubuntu2404
    iPAddress: 10.2.10.110
    type: controlplane
    kubeVersion: 1.32.2
  - name: dev07-worker1-ubuntu2404
    iPAddress: 10.2.10.210
    type: worker
    kubeVersion: 1.32.2
- name: zone2
  nodes:
  - name: dev07-master2-ubuntu2404
    iPAddress: 10.2.10.120
    type: controlplane
    kubeVersion: 1.32.2
  - name: dev07-worker2-ubuntu2404
    iPAddress: 10.2.10.220
    type: worker
    kubeVersion: 1.32.2
- name: zone3
  nodes:
  - name: dev07-master3-ubuntu2404
    iPAddress: 10.2.10.130
    type: controlplane
    kubeVersion: 1.32.2
  - name: dev07-worker3-ubuntu2404
    iPAddress: 10.2.10.230
    type: worker
    kubeVersion: 1.32.2
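Before applying the file, a quick count of the node entries helps confirm nothing was dropped while editing. A sketch, assuming the layout above; count_nodes is an illustrative helper (for an HA control plane you typically want an odd number of masters, at least three):

```shell
# count_nodes prints how many control-plane and worker entries
# the given cluster-values.yaml contains.
count_nodes() {
  cp=$(grep -c 'type: controlplane' "$1")
  wk=$(grep -c 'type: worker' "$1")
  echo "controlplanes=$cp workers=$wk"
}

# Usage:
#   count_nodes cluster-values.yaml
```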

7.1 Using KubeVip in your Cluster (optional)

If you want to use KubeVip to set up your cluster, you need a virtual IP for it: set kubeVipEnabled to true and set virtualIP accordingly. If you do not want to use KubeVip, set kubeVipEnabled to false and use the IP of your first control plane as virtualIP in your cluster-values.yaml. Refer to the official KubeVip documentation for details.

Examples:

kubeVipEnabled: true
virtualIP: <IP in your cluster IP range that is not yet in use>

or

kubeVipEnabled: false
virtualIP: <master1 ip>

8. Pull required KOSI packages

If you do not specify a parameter, the current Kubernetes version 1.32.2 will be pulled.
With the parameter --kubernetesVersion <your wanted Kubernetes version> you can pull a different Kubernetes version.
Available Kubernetes versions are 1.32.2, 1.32.3, 1.32.9, 1.32.10, 1.33.3, 1.33.5 and 1.34.1.

kubeopsctl pull

or

kubeopsctl pull --kubernetesVersion <x.xx.x>

9. Install podman

kosi install -p $KUBEOPSROOT/lima/podman_5.2.2.tgz -f cluster-values.yaml

10. Install helm

kosi install -p $KUBEOPSROOT/lima/helm_v3.16.4.tgz

11. Install kubernetes tools (kubectl)

Make sure the Kubernetes version matches the one you pulled before.

kosi install -p $KUBEOPSROOT/lima/kubernetes-tools_<your kubernetes version>.tgz -f cluster-values.yaml

This command also installs kubelet and kubeadm. You can either mask or delete them on your admin node, as they are not needed for the cluster creation process.

Full Example
kosi install -p $KUBEOPSROOT/lima/kubernetes-tools_1.32.2.tgz -f cluster-values.yaml

12. Cluster Setup

Make sure that you are logged in to the hub and registry.

kosi login -u <your username>

Now the login for hub and registry should be successful!


Make sure that you changed the kosi config.yaml.

cat $KUBEOPSROOT/kosi/config.yaml

Make sure that you pulled all required packages.

ls -1 $KUBEOPSROOT/lima

Install the Kubernetes cluster with kubeopsctl. The cluster setup takes about 10 to 15 minutes.

kubeopsctl apply -f cluster-values.yaml
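The package check above can also be scripted as a small pre-flight step before the apply. A sketch under the assumption that the pulled packages live in $KUBEOPSROOT/lima as shown in the previous steps; preflight is an illustrative name:

```shell
# preflight verifies that the pulled KOSI packages for podman, helm and
# kubernetes-tools exist in the given directory before the cluster apply.
preflight() {
  dir="$1"; ver="$2"; status=0
  for pattern in "podman_*" "helm_*" "kubernetes-tools_${ver}*"; do
    if ! ls "$dir"/$pattern >/dev/null 2>&1; then
      echo "missing package: $pattern" >&2
      status=1
    fi
  done
  if [ "$status" -eq 0 ]; then
    echo "all packages present"
  fi
  return $status
}

# Usage:
#   preflight "$KUBEOPSROOT/lima" 1.32.2 && kubeopsctl apply -f cluster-values.yaml
```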