At the moment, the sina-package rook-ceph:1.1.2 (used in kubeOps 1.1.3) ships a Ceph version with a known bug that prevents the proper setup and use of object storage via the S3 API.
If you require the functionality provided by this storage class, we recommend using kubeOps 1.0.7 instead.
That version is not affected by this issue and provides full support for S3 storage solutions.
Please make sure that your uservalues.yaml is using UTF-8 encoding.
If you run into encoding issues, you can convert your file to UTF-8 with the following command (write to a new file first, since redirecting the output into the input file would truncate it; replace ISO-8859-1 with your file's actual source encoding if it differs):
iconv -f ISO-8859-1 -t UTF-8 uservalues.yaml > uservalues_utf8.yaml && mv uservalues_utf8.yaml uservalues.yaml
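As a sketch (file names and contents here are illustrative, not from your setup), you can reproduce the conversion and verify the result is valid UTF-8:

```shell
# Illustrative sample: a Latin-1 encoded YAML file (the \xfc\xdf bytes are "üß" in ISO-8859-1)
printf 'comment: Gr\xfc\xdfe\npodSubnet: 192.168.0.0/17\n' > sample.yaml
# Convert to UTF-8, writing to a NEW file (redirecting into the input file would truncate it)
iconv -f ISO-8859-1 -t UTF-8 sample.yaml > sample.utf8.yaml
# Verify: re-reading the result as UTF-8 only succeeds if it is valid UTF-8
iconv -f UTF-8 -t UTF-8 sample.utf8.yaml > /dev/null && echo "valid UTF-8"
```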
Example: podSubnet: 192.168.0.0/17
| Deployment | Package | PublicHub | Hub |
| --- | --- | --- | --- |
| 39e6da | local/calicomultus:0.0.1 | | local |
--dname: mandatory parameter for the update command.
-f values.yaml: make sure the correct podSubnet is used.
error: resource mapping not found for name: "calico-kube-controllers" namespace:co.yaml": no matches for kind "PodDisruptionBudget" in version "policy/v1beta1"
ensure CRDs are installed first
The error is caused by a missing CRD; it has no impact on the update procedure of calico.
To scan images containing Java code with the Trivy scanner, the Trivy StatefulSet must be edited:
there, the environment variable "SCANNER_TRIVY_OFFLINE_SCAN" must be set to "true".
After the Java images have been scanned, the environment variable should be set back to "false"; otherwise the CVE database will not be updated for further scans.
The problem is fixed as of Trivy version 0.37.2+.
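The relevant change inside the Trivy StatefulSet looks roughly like this (a sketch; the exact container name, namespace, and field placement depend on your installation):

```yaml
# Excerpt from the Trivy scanner container spec (illustrative)
env:
  - name: SCANNER_TRIVY_OFFLINE_SCAN
    value: "true"   # set back to "false" after the Java images are scanned,
                    # so the CVE database is updated again for later scans
```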
If you want to create a cluster with firewalld and the kubeops/clustercreate:1.0.2 package, you first have to pull the firewalld maintenance package for your OS manually, after executing the kubeops/setup:1.0.1 package.
If the following message appears in the Opensearch pod logs, the vm.max_map_count setting is too low:
ERROR: bootstrap checks failed
max virtual memory areas vm.max_map_count is too low, increase to at least
On all control-plane and worker nodes the line "vm.max_map_count=262144" must be added to the file "/etc/sysctl.conf".
After that the following command must be executed in the console on all control-plane and worker nodes: sysctl -p
Finally, the Opensearch pods must be restarted.
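The steps above can be sketched as shell commands, to be run as root on every control-plane and worker node (the Opensearch namespace and StatefulSet name in the last command are assumptions; adjust them to your deployment):

```shell
# Persist the setting across reboots
echo "vm.max_map_count=262144" >> /etc/sysctl.conf
# Apply it immediately without a reboot
sysctl -p
# Restart the Opensearch pods, e.g. (namespace and name are examples):
kubectl -n opensearch rollout restart statefulset opensearch
```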
X means per version
CentOS 7 cannot update the ca-certificates package (ca-certificates-2021.2.50-72.el7_9.noarch) by itself.
You can fix this with a targeted update:
yum update ca-certificates -y
or with a full yum update.
Alternatively, this is how the ca-certificates RPM can be downloaded and installed manually:
To install: yum install ca-certificates-2021.2.50-72.el7_9.noarch.rpm -y
At the moment, SINA has no sudo support yet. You also need to have Docker and Helm installed on your machine; both require sudo permissions.
This error message is a known bug and will be fixed in a later release. You need to have at least one package in the Hub before you can search it.
Currently, only lowercase characters are allowed as the name of the package. This will be fixed in a later release.
This error message is a known bug and can occur if your username or password is wrong. Please check if both are correct.
No, because a yaml file requires a key-value structure.
No, you don't have to use the template plugin if you don't want to.
It is an error message from our tool for reading yaml files. It means you are trying to read a value from a key in a yaml file, but the key doesn't exist.
Please check that the path to your values is correct. If your key contains "-", the template plugin does not recognize that key; removing the "-" will solve the issue.
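For illustration (the key names below are examples, not from your values file), this is the difference between a key the template plugin can resolve and one it cannot:

```yaml
# Resolvable: plain key with a value
podSubnet: 192.168.0.0/17
# Not resolvable by the template plugin: key containing "-"
# pod-subnet: 192.168.0.0/17   <- rename, e.g. to podSubnet
```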
For lima version >= 0.9.0
Lima stops at the line
ansible Playbook : COMPLETE : Ansible playbooks complete.
Starting from the line containing "Broken pipe", check whether the following lines exist:
debug3: mux_client_read_packet: read header failed: Broken pipe
debug2: Received exit status from master 1
Shared connection to vli50707 closed.
<vli50707> ESTABLISH SSH CONNECTION FOR USER: demouser
<vli50707> SSH: ansible.cfg set ssh_args: (-C)(-o)(ControlMaster=auto)(-o)(ControlPersist=60s)
If this is the case, the line
ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s
in the file /etc/ansible/ansible.cfg inside the currently running lima container must be commented out or removed.
docker container ls
CONTAINER ID   IMAGE                                        COMMAND       CREATED      STATUS      PORTS   NAMES
99cabe7133e5   registry1.kubernative.net/lima/lima:v0.8.0   "/bin/bash"   6 days ago   Up 6 days           lima-v0.8.0
docker exec -it 99cabe7133e5 bash
Change the line
ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s
to
#ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s
or delete the line.
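Inside the running lima container, commenting out the line can also be done in one step with sed (a sketch; the config path is the one named in the steps above):

```shell
# Comment out the ssh_args line in the container's ansible.cfg
sed -i 's/^ssh_args = /#ssh_args = /' /etc/ansible/ansible.cfg
```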
To delete the cluster master, we need to set the cluster master to a different master machine first.
This is a common problem which happens now and then, but the real source of error is difficult to identify. Nevertheless, the workaround is quick and easy: clean up the current repo data, refresh the subscription-manager and update the whole operating system. This can be done with the following commands:
dnf clean all
rm -frv /var/cache/dnf
dnf update -y
SELinux is temporarily deactivated while LIMA tasks are executed and automatically reactivated once execution has finished. This means you do not have to re-enable SELinux manually every time you work with LIMA.
1. They are responsible for updating the loadbalancer; you can update them manually and delete the pod.
2. You can try redeploying the DaemonSet to the kube-system namespace.
1. Please make sure you only have the latest dependency packages for your environment in your /packages folder.
2. It could be related to this Kubernetes bug: https://v1-22.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
3. Try upgrading past 1.21.x manually.
Try the following commands on the master:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
1. Check whether the Loadbalancer staticPod file can be found in the manifest folder of the node.
2. If it isn't there, please copy it from another node.
1. Retry upgrading your cluster.
2. If LIMA thinks you are already on the target version, edit the stored data of your cluster at
Set the key 'kubernetesVersion' to the lowest Kubernetes version present on a node in your cluster.
1. Check whether a package manager is available.
2. You have to install python3 with 'yum install python3' and then create a symlink from python to python3 with 'update-alternatives --config python'.
You have to install libselinux-python on your cluster machine so you can install a firewall via LIMA.
1. Use the following command to force shut down the httpd service:
'kubectl delete deployment pia-httpd --grace-period=0 --force'.
2. Most deployments have a networking service, as our httpd does.
Delete the networking service with the command:
'kubectl delete svc pia-httpd-service --grace-period=0 --force'.
1. Use the 'kubectl get nodes' command to find out which node is not ready.
2. To identify the problem, get shell access to the non-ready node. Use 'systemctl status kubelet' to get status information about the state of the kubelet.
3. The most common cause of this error is that the kubelet fails to automatically identify the node. In this case, the kubelet must be restarted manually on the non-ready machine with 'systemctl enable kubelet' and 'systemctl start kubelet'.
4. If the issue persists, the reason behind the error can be evaluated by your cluster administrators.
Please feel free to contact us with any question that is not answered yet.
We look forward to getting in contact with you!