In order to scan images containing Java code with the Trivy scanner, the Trivy StatefulSet must be edited.
There the environment variable "SCANNER_TRIVY_OFFLINE_SCAN" must be set to "true".
After the images with Java code have been scanned, the environment variable should be set back to "false", otherwise the CVE database will not be updated for further scans.
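A minimal sketch of toggling the variable with kubectl (the namespace and StatefulSet name are assumptions and depend on your installation):
kubectl -n <namespace> set env statefulset/<trivy-statefulset> SCANNER_TRIVY_OFFLINE_SCAN=true
# after the Java images have been scanned, re-enable database updates:
kubectl -n <namespace> set env statefulset/<trivy-statefulset> SCANNER_TRIVY_OFFLINE_SCAN=false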
The problem is fixed as of Trivy version 0.37.2+.
If you want to create a cluster with firewalld and the "kubeops/clustercreate:1.0.2" package, you have to manually pull the firewalld maintenance package for your OS first, after executing the "kubeops/setup:1.0.1" package.
If the following message appears in the OpenSearch pod logs, the kernel parameter vm.max_map_count must be increased:
ERROR: [1] bootstrap checks failed
[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
On all control-plane and worker nodes the line "vm.max_map_count=262144" must be added to the file "/etc/sysctl.conf".
After that the following command must be executed in the console on all control-plane and worker nodes: sysctl -p
Finally, the Opensearch pods must be restarted.
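The steps described above, collected as commands (run on every control-plane and worker node):
echo "vm.max_map_count=262144" >> /etc/sysctl.conf
sysctl -p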
Error: http://hub.kubernative.net/dispatcher?apiversion=3&vlientversion=2.X.0 : 0
Here, X stands for the respective version.
CentOS 7 cannot update the ca-certificates package (ca-certificates-2021.2.50-72.el7_9.noarch) on its own.
You can fix this with
yum update ca-certificates -y
or with a full
yum update
This is how the ca-certificates rpm can be downloaded and installed manually:
To install: yum install ca-certificates-2021.2.50-72.el7_9.noarch.rpm -y
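A sketch of the manual download, assuming the RPM is fetched from the CentOS vault mirror (the exact URL is an assumption and may differ for your mirror):
curl -O http://vault.centos.org/7.9.2009/updates/x86_64/Packages/ca-certificates-2021.2.50-72.el7_9.noarch.rpm
yum install ca-certificates-2021.2.50-72.el7_9.noarch.rpm -y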
At the moment, SINA has no sudo support. You also need to have Docker and Helm installed on your machine.
Docker and Helm require sudo permissions.
This error message is a known bug and will be fixed in a later release. You need to have at least one package in the Hub before you can search it.
Currently, only lowercase characters are allowed in the package name. This will be fixed in a later release.
This error message is a known bug and can occur if your username or password is wrong. Please check if both are correct.
No, because a YAML file requires a key-value structure.
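For illustration, a minimal key-value structure in YAML (the key and value names are made up):
exampleKey: exampleValue
nestedSection:
  anotherKey: 42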
No, you don't have to use the template plugin if you don't want to.
It is an error message from our tool for reading YAML files. It means you are trying to read a value from a key in a YAML file, but the key does not exist.
Please check that the path to your values is correct. If your key contains "-", the template plugin does not recognize that key. Removing the "-" will solve that issue.
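For illustration, a hypothetical key that the template plugin cannot resolve and a renamed variant that works:
# not recognized, because the key contains "-":
my-key: someValue
# recognized after removing the "-":
myKey: someValue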
For LIMA version >= 0.9.0
LIMA stops at the line
ansible Playbook : COMPLETE : Ansible playbooks complete.
Search the file
$LIMAROOT/dockerLogs/dockerLogs_latest.txt
for "Broken pipe". From the line containing "Broken pipe", check whether the following lines exist:
debug3: mux_client_read_packet: read header failed: Broken pipe
debug2: Received exit status from master 1
Shared connection to vli50707 closed.
<vli50707> ESTABLISH SSH CONNECTION FOR USER: demouser
<vli50707> SSH: ansible.cfg set ssh_args: (-C)(-o)(ControlMaster=auto)(-o)
(ControlPersist=60s)
If this is the case, the line
ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s
in the file /etc/ansible/ansible.cfg inside the currently running lima container must be commented out or removed.
Example:
docker container ls
CONTAINER ID   IMAGE                                        COMMAND       CREATED      STATUS      PORTS   NAMES
99cabe7133e5   registry1.kubernative.net/lima/lima:v0.8.0   "/bin/bash"   6 days ago   Up 6 days           lima-v0.8.0
docker exec -it 99cabe7133e5 bash
vi /etc/ansible/ansible.cfg
Change the line
ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s
to
#ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s
or delete the line.
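Alternatively, the line can be commented out non-interactively from outside the container (a sketch; use the container ID shown by docker container ls):
docker exec 99cabe7133e5 sed -i 's/^ssh_args/#ssh_args/' /etc/ansible/ansible.cfg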
To delete the cluster master, you first need to set a different master machine as the cluster master.
This is a common problem which happens now and then, but the real source of the error is difficult to identify. Nevertheless, the workaround is quick and easy: clean up the current repo data, refresh the subscription-manager, and update the whole operating system. This can be done with the following commands:
dnf clean all
rm -frv /var/cache/dnf
subscription-manager refresh
dnf update -y
SELinux will be temporarily deactivated during the execution of LIMA tasks. After the execution is finished, SELinux is automatically reactivated. This means you do not have to manually re-enable SELinux every time you work with LIMA.
1. They are responsible for updating the loadbalancer; you can update them manually and delete the pod, as shown in the sketch below.
2. You can try redeploying the daemonset to the kube-system namespace.
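A sketch of both steps (the pod name and daemonset name are assumptions and depend on your setup):
kubectl -n kube-system delete pod <loadbalancer-pod>
kubectl -n kube-system rollout restart daemonset <loadbalancer-daemonset>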
1. Please make sure you only have the latest dependency packages for your environment in your /packages folder.
2. It could be related to this Kubernetes bug: https://v1-22.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
3. Try upgrading past 1.21.x manually.
Try the following commands on the master node:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
1. Check whether the loadbalancer static pod file can be found in the manifest folder of the node.
2. If it isn't there, please copy it from another node, as shown in the sketch below.
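A sketch of the check and the copy, assuming the default static pod path /etc/kubernetes/manifests (the file name and source node are placeholders):
ls /etc/kubernetes/manifests/
scp <other-node>:/etc/kubernetes/manifests/<loadbalancer>.yaml /etc/kubernetes/manifests/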
1. Retry upgrading your cluster.
2. If LIMA thinks you are already on the target version, edit the stored data of your cluster at
'$LIMAROOT/myClusterName/clusterStorage.yaml'.
Set the key 'kubernetesVersion' to the lowest Kubernetes version present on a node in your cluster (see the example below).
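For illustration, the relevant key in clusterStorage.yaml (the version number is only an example; other keys are omitted):
kubernetesVersion: 1.21.5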
1. Check if you have a package manager.
2. You have to install python3 with 'yum install python3' and then create a symlink from python to python3 with 'update-alternatives --config python'.
You have to install libselinux-python on your cluster machine so you can install a firewall via LIMA.
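For example, on a yum-based system (assumed here, since the package name matches the RHEL/CentOS packaging):
yum install libselinux-python -y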
1. Use the following command to force shut down the httpd service:
'kubectl delete deployment pia-httpd --grace-period=0 --force'.
2. Most deployments have a networking service, like our httpd does.
Delete the networking service with the command:
'kubectl delete svc pia-httpd-service --grace-period=0 --force'.
1. Use the 'kubectl get nodes' command to find out which node is not ready.
2. To identify the problem, get access to the shell of the non-ready node. Use 'systemctl status kubelet' to get status information about the state of the kubelet.
3. The most common cause of this error is that the kubelet is not running on the node. In this case, the kubelet must be restarted manually on the non-ready machine. This is done with 'systemctl enable kubelet' and 'systemctl start kubelet', as shown below.
4. If the issue persists, the reason behind the error can be evaluated by your cluster administrators.
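The commands from steps 2 and 3, collected for execution on the non-ready node:
systemctl status kubelet
systemctl enable kubelet
systemctl start kubelet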
Please feel free to contact us with any question that is not answered yet.
We look forward to getting in touch with you!
KubeOps GmbH
Hinter Stöck 17
72406 Bisingen
Germany
+49 7433 93724 00