KUBEOPSCTL
- 1: Getting-Started
- 1.1: About-Kubeopsctl
- 1.2: Release Notes
- 1.3: QuickStart
- 1.4: Installation
- 2: How to Guides
- 2.1: Ingress Configuration
- 2.2: Use Keycloak
- 2.3: Create Cluster
- 2.4: Install Maintenance Packages
- 2.5: Upgrade KubeOps Software
- 2.6: Use Kubeopsctl
- 2.7: Backup and restore
- 2.8: Renew Certificates
- 2.9: Deploy Package On Cluster
- 2.10: Replace Cluster Nodes
- 2.11: Update Kubernetes Version
- 2.12: Change CRI
- 2.13: How to delete nodes from the cluster with lima
- 2.14: Accessing Dashboards
- 2.15: Create a new Repository
- 2.16: Change registry
- 2.17: Change the OpenSearch password
- 2.18: Create Kosi package
- 2.19: Install package from Hub
- 3: Reference
- 3.1: Documentation-kubeopsctl
- 3.2: KubeOps Version
- 3.3: Glossary
- 3.4: FAQs
- 4: Lima
- 4.1: About-Lima
- 4.2: Documentation-Lima
- 4.3: Installation-Guide-lima
- 4.4: Lima-Ports
- 5: Pia
- 5.1: About-Pia
- 5.2: Documentation-Pia
- 5.3: Pia-FAQ
- 5.4: Installation-guide-pia
- 5.5: Known-Issues-Pia
- 5.6: Pia-0.1.x
- 6: Internal References
1 - Getting-Started
1.1 - About-Kubeopsctl
What is Kubeopsctl?
kubeopsctl is a tool for managing a cluster and its state. You describe a desired cluster state, and kubeopsctl then creates a cluster with that state.
Why use kubeopsctl?
You can set a desired cluster state, and you do not have to do anything else to achieve it.
Highlights
- Creating a cluster
- Adding nodes to your cluster
- Draining nodes
- Updating single nodes
- Labeling nodes for zones
- Adding platform software into your cluster
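For example, after describing the desired cluster state in a kubeopsctl.yaml file (see the QuickStart section), a single command is enough to move the cluster to that state:
kubeopsctl apply -f kubeopsctl.yaml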
1.2 - Release Notes
KubeOps 1.4.0
Changelog kubeopsctl 1.4.0
Bugfix
- if any of the following variables is not mentioned in the file kubeopsctl.yaml, the default value false is used: rook-ceph, harbor, opensearch, opensearch-dashboards, logstash, filebeat, prometheus, opa, headlamp, certman, ingress, keycloak, velero
- resolved an issue where kubeopsctl.yaml not being found led to a System.IO.FileNotFoundException
- if the variable systemCpu is not mentioned in the file kubeopsctl.yaml, the default value 200m is used
- if the variable systemMemory is not mentioned in the file kubeopsctl.yaml, the default value 200Mi is used
Changelog rook-ceph 1.4.0
Updated
- updated package version
Bugfix
- fixed issue where the default requested cpu and memory were set too high
Changelog kube-prometheus-stack 1.4.0
Updated
- added default dashboards to grafana
Changelog harbor 1.4.0
Updated
- updated harbor package version to app version 2.9.3
Changelog opensearch and opensearch-dashboards 1.4.0
Updated
- updated both packages to app version 2.11.1
Changelog ingress-nginx 1.4.0
Updated
- updated ingress-nginx to application version 1.8.5
Changelog lima 1.4.0
Bugfix
- If the default clusterUser root is used but kubeopsctl is executed as user, an error message is displayed.
KubeOps 1.4.0-Beta0
Changelog kubeopsctl 1.4.0-Beta0
Bugfix
- added null handling to storageClass name
- fixed issue where the existence of cluster resources was not checked in the kubeopsctl.yaml
- fixed issue where clustermaster was not automatically upgraded in the upgrade command
Changelog lima 1.4.0-Beta0
Bugfix
- fixed an issue where firewalld could not be removed before cluster creation
Changelog rook-ceph 1.4.0
Updated
- updated package version
Bugfix
- fixed issue where the default requested cpu and memory were set too high
Changelog kube-prometheus-stack 1.4.0
Updated
- added default dashboards to grafana
Changelog harbor 1.4.0
Updated
- updated harbor package version to app version 2.9.3
Changelog opensearch and opensearch-dashboards 1.4.0
Updated
- updated both packages to app version 2.11.1
KubeOps 1.4.0-Alpha5
Bugfix
- fixed issue with pushing images to harbor
- fixed issue where podman installation was not checked
- fixed issue with clusterip parameter in lima change config command
- fixed issue related to zone labels for nodes
- fixed issue with converting V2 model to V3 model
KubeOps 1.4.0-Alpha4
Changelog kubeopsctl 1.4.0-Alpha4
Bugfix
- Fixed an issue that prevented kubeopsctl from stopping if an image could not be pulled
- fixed an issue where systemcpu and systemmemory were not templated correctly
- fixed an issue where packages were not idempotent
- fixed issue where cluster could not be upgraded
- fixed issue with infinite loop of kubeopsctl output
- fixed issue with false permissions of .kube/config file
Update
- added automatic removal of firewalld and runc, and installation of tc
- updated ingress-nginx to application version 1.8.5
- updated opensearch to application version 2.11.1
Lima 1.4.0-Beta0
- added error logging in case kubeopsroot is empty or null
Lima 1.4.0-Alpha4
Bugfix
- added initial check if kubeopsroot is not set
KubeOps 1.4.0-Alpha3
Changelog kubeopsctl 1.4.0-Alpha3
Bugfix
- Fixed an issue that results in an ImagePullBackOff error while installing Velero in an air-gapped environment
- Fixed an issue that prevents using kubeopsctl status
- Fixed an issue with drain that prevents draining nodes because of violation of PodDisruptionBudgets
- Fixed an issue with uncordon that prevents uncordoning nodes
- Fixed an issue that created a delay in displaying the status messages on the console
Update
- removed limaRoot in apiVersion kubeops/kubeopsctl/alpha/v3
- removed kubeOpsRoot in apiVersion kubeops/kubeopsctl/alpha/v3
Lima 1.4.0-Alpha3
Bugfix
- Fixed an issue that led to a Broken pipe error in ansible
Harbor 1.4.0
Update
- Raised the maxJobWorkers from 10 to 30 in order to prevent the error 504 Gateway Time-out
KubeOps 1.4.0-Alpha2
Changelog kubeopsctl 1.4.0-Alpha2
Bugfix
- Fixed an issue that wrote Command contains quotes and double quotes! can not mix within shell! into the logs without a real error
- Fixed an issue that was still writing log output on the console
- Fixed an issue that prevents showing Debug log level in the logs
- Fixed an issue that prevents using kubeopsctl status
- Fixed an issue that prevents installing filebeat
- Fixed an issue that prevents installing logstash
- Fixed an issue that prevents installing opensearch
- Fixed an issue that prevents installing prometheus
- Fixed an issue that results in an ImagePullBackOff error while installing Velero in an air-gapped environment
KubeOps 1.4.0-Alpha1
Changelog kubeopsctl 1.4.0-Alpha1
Bugfix
- Fixed an issue that leads to an Unhandled exception: System.FormatException
- Fixed small output issues related to plugin bash
KubeOps 1.4.0-Alpha0
Changelog kubeopsctl 1.4.0-Alpha0
New
- Reworked Console output
- Old Console Output is now stored in log files in $KUBEOPSROOT
- Logfiles with timestamps are now created in $KUBEOPSROOT
- Skip cluster creation is now possible
- Skip update registry is now possible
- It is now possible to install software into a cluster that was not created by lima
- Added new command drain
- Added new command uncordon
- Added new command upgrade
- Added new command status
Updated
- Updated Lima Version to lima 1.4.0Alpha0
- Updated filebeat package Version to filebeat 1.4.0
- Updated harbor package Version to harbor 1.4.0
- Updated rook-ceph package Version to rook-ceph 1.4.0
- Updated helm package Version to helm 1.4.0
- Updated logstash package Version to logstash 1.4.0
- Updated opa-gatekeeper package Version to opa-gatekeeper 1.4.0
- Updated opensearch package Version to opensearch 1.4.0
- Updated opensearch-dashboards package Version to opensearch-dashboards 1.4.0
- Updated prometheus package Version to prometheus 1.4.0
- Updated rook package Version to rook 1.4.0
- Updated cert-manager package Version to cert-manager 1.4.0
- Updated ingress-nginx package Version to ingress-nginx 1.4.0
- Updated kubeops-dashboard package Version to kubeops-dashboard 1.4.0
- Updated keycloak package Version to keycloak 1.4.0
LIMA 1.4.0_Alpha0
Updated
- Old Console Output is now stored in log files in $KUBEOPSROOT
Changelog setup 1.4.0
Updated
- Updated Lima Version to lima 1.2.0_Alpha0
Changelog clustercreate 1.4.0
Updated
- Updated filebeat package Version to filebeat 1.4.0
- Updated harbor package Version to harbor 1.4.0
- Updated rook-ceph package Version to rook-ceph 1.4.0
- Updated helm package Version to helm 1.4.0
- Updated logstash package Version to logstash 1.4.0
- Updated opa-gatekeeper package Version to opa-gatekeeper 1.4.0
- Updated opensearch package Version to opensearch 1.4.0
- Updated opensearch-dashboards package Version to opensearch-dashboards 1.4.0
- Updated prometheus package Version to prometheus 1.4.0
- Updated rook package Version to rook 1.4.0
- Updated cert-manager package Version to cert-manager 1.4.0
- Updated ingress-nginx package Version to ingress-nginx 1.4.0
- Updated kubeops-dashboard package Version to kubeops-dashboard 1.4.0
- Updated keycloak package Version to keycloak 1.4.0
Changelog filebeat 1.4.0
Updated
- updated package version
Changelog harbor 1.4.0
Bugfixes
- fixed an issue in update that prevented the data migration of harbor
- fixed an issue that prevents the scaling of PVCs
Changelog rook-ceph 1.4.0
Updated
- updated package version
Changelog helm 1.4.0
Updated
- updated package version
Changelog logstash 1.4.0
Updated
- updated package version
Changelog opa-gatekeeper 1.4.0
Updated
- updated package version
Changelog opensearch 1.4.0
Updated
- updated package version
Changelog opensearch-dashboards 1.4.0
Updated
- updated package version
Changelog prometheus 1.4.0-Beta0
Updated
- added first dashboard as json-data in file
Changelog prometheus 1.4.0
Updated
- updated package version
Changelog rook 1.4.0
Updated
- updated package version
Changelog cert-manager 1.4.0
Updated
- updated package version
Changelog ingress-nginx 1.4.0
Updated
- updated package version
Changelog kubeops-dashboard 1.4.0
Updated
- updated package version
Changelog keycloak 1.4.0
Updated
- updated package version
KubeOps 1.3.2
changelog rook 1.2.1
- fixed issue with templating of block storage class
changelog logstash 1.2.1
- fixed issue with authentication
Changelog kubeopsctl 0.2.2
updated
- added new containerd package
Bugfixes
- fixed issue with missing haproxy image
- fixed issue with installing kubectl in container
KubeOps 1.3.1
Changelog kubeopsctl 0.2.1
- fixed issue where container runtime could not be updated
KubeOps 1.3.0
Changelog kubeopsctl 0.2.0
Bugfixes
- Fixed a bug that wrote tmpCopyDir content incorrectly
- Fixed a bug that prevents Harbor update from Kubeops 1.2.x to Kubeops 1.3.x, but the upgrade still has to be performed two times. Check the FAQs for more information
Changelog LIMA 1.1.0
Bugfixes
- Fixed a bug that caused ImagePullBackOff for loadbalancer and registry, because the manifest was changed before copying the image
Changelog setup 1.3.2
Updated
- Updated Lima Version to lima 1.1.0
KubeOps 1.3.0-Beta1
Changelog kubeopsctl 0.2.0-Beta1
Updated
- Updated Lima Version to lima 1.1.0Beta1
- Updated filebeat package Version to filebeat 1.3.1
- Updated harbor package Version to harbor 1.3.1
- Updated rook-ceph package Version to rook-ceph 1.3.1
- Updated helm package Version to helm 1.3.1
- Updated logstash package Version to logstash 1.3.1
- Updated opa-gatekeeper package Version to opa-gatekeeper 1.3.1
- Updated opensearch package Version to opensearch 1.3.1
- Updated opensearch-dashboards package Version to opensearch-dashboards 1.3.1
- Updated prometheus package Version to prometheus 1.3.1
- Updated rook package Version to rook 1.3.1
- Updated cert-manager package Version to cert-manager 1.3.1
- Updated ingress-nginx package Version to ingress-nginx 1.3.1
- Updated kubeops-dashboard package Version to kubeops-dashboard 1.3.1
- Updated keycloak package Version to keycloak 1.3.1
LIMA 1.1.0-Beta1
Updated
- Updated the loadbalancer image due to critical CVEs
- added kubernetes package for v1.29.1
- added kubernetes package for v1.29.0
- added kubernetes package for v1.28.5
- added kubernetes package for v1.28.4
- added kubernetes package for v1.28.3
- added kubernetes package for v1.27.9
- added kubernetes package for v1.27.8
- added kubernetes package for v1.27.7
Changelog setup 1.3.1
Updated
- Updated Lima Version to lima 1.1.0Beta1
Changelog clustercreate 1.3.1
Updated
- Updated filebeat package Version to filebeat 1.3.1
- Updated harbor package Version to harbor 1.3.1
- Updated rook-ceph package Version to rook-ceph 1.3.1
- Updated helm package Version to helm 1.3.1
- Updated logstash package Version to logstash 1.3.1
- Updated opa-gatekeeper package Version to opa-gatekeeper 1.3.1
- Updated opensearch package Version to opensearch 1.3.1
- Updated opensearch-dashboards package Version to opensearch-dashboards 1.3.1
- Updated prometheus package Version to prometheus 1.3.1
- Updated rook package Version to rook 1.3.1
- Updated cert-manager package Version to cert-manager 1.3.1
- Updated ingress-nginx package Version to ingress-nginx 1.3.1
- Updated kubeops-dashboard package Version to kubeops-dashboard 1.3.1
- Updated keycloak package Version to keycloak 1.3.1
Changelog filebeat 1.3.1
Updated
- updated package version
Changelog harbor 1.3.1
Bugfixes
- fixed an issue in update that prevented the data migration of harbor
- fixed an issue that prevents the scaling of PVCs
Changelog rook-ceph 1.3.1
Updated
- updated package version
Changelog helm 1.3.1
Updated
- updated package version
Changelog logstash 1.3.1
Updated
- updated package version
Changelog opa-gatekeeper 1.3.1
Updated
- updated package version
Changelog opensearch 1.3.1
Updated
- updated package version
Changelog opensearch-dashboards 1.3.1
Updated
- updated package version
Changelog prometheus 1.3.1
Updated
- updated package version
Changelog rook 1.3.1
Updated
- updated package version
Changelog cert-manager 1.3.1
Updated
- updated package version
Changelog ingress-nginx 1.3.1
Updated
- updated package version
Changelog kubeops-dashboard 1.3.1
Updated
- updated package version
Changelog keycloak 1.3.1
Updated
- updated package version
KubeOps 1.3.0-Beta0
Changelog kubeopsctl 0.2.0-Beta0
Bugfixes
- fixed issue with Kubernetes Upgrade
- fixed issue with wrong pause image in containerd config.toml
- fixed issue with unstable rewriting of kubernetes manifests
- fixed issue where ImagePullBackOff errors occurs because harbor was not ready yet
Updated
- Updated Lima Version to lima 1.1.0Beta0
- added backwards compatibility to kubeopsctl 0.1.0
Changelog setup 1.3.0
Updated
- added lima:1.1.0Beta0
Changelog rook-ceph 1.3.0
Updated
- Changed installation/update/delete procedure from yaml-files to helm-chart
- Updated rook from 1.10.9 to 1.12.7
- removed some adjustable values for compatibility and easier configuration:
- removed "placement" value
- removed "removeOSDsIfOutAndSafeToRemove" value, which is now hardcoded to false
- removed FS-type templating for the "blockStorageClass", as it is not recommended to use anything other than ext4
Changelog harbor 1.3.0
Updated
- removed external redis and database and changed to internal services
- Updated Harbor from Version 2.6.4 to 2.9.1
- removed some adjustable values for compatibility and easier configuration:
- redis- and postgres-password configuration moved into the "harborValues" section (as "redisPassword" and "databasePassword")
- storage values for the external instances have been removed; instead, the storage configuration for redis and postgres inside "harborValues" is used (the "persistentVolumeClaim" section contains the keys "redis" and "database", the latter referencing the internal postgres instance)
KubeOps 1.3.0-Alpha6
Changelog kubeopsctl 0.2.0-Alpha6
Bugfixes
- fixed a bug that prevented the use of special characters in passwords
- fixed a bug that prevented the upgrade to a specific kubernetes version
KubeOps 1.3.0-Alpha5
Changelog setup 1.3.0
Updated
- updated lima to 1.1.0-Alpha3
Changelog clustercreate 1.3.0
Updated
- added parameter for templating the folder of lima images
- added installation of packages in update-section, so that previously missing packages can be added after cluster creation
Bugfixes
- fixed templating for the “harborPersistence” key
Changelog harbor 1.3.0
Bugfixes
- fixed templating for the “harborPersistence” key, to allow templating of the PVC size and storageClass
- removed irrelevant entry in chart-files
KubeOps 1.3.0-Alpha4
Changelog kubeopsctl 0.2.0-Alpha4
Updated
- added more Parameters from rook Resources from kubeopsctl.yaml (see Changelog rook-ceph)
Changelog clustercreate 1.3.0
Updated
- added global namespace variable to the values, which will be used if no other namespace has been set for the individual packages
- added new values for the rook-ceph installation (see Changelog rook-ceph)
- added more defaults for the installation values
Bugfixes
- fixed updating loadbalancer after clustercreation
Changelog rook-ceph 1.3.0
Bugfixes
- fixed issue with too high resource limits
Updated
- added templating and adjustable values for resource-requests of objectstore and filesystem-pods
KubeOps 1.3.0-Alpha3
Changelog kubeopsctl 0.2.0-Alpha3
Bugfixes
- fixed bug where packages from kubeops 1.2.0 were pulled in airgap environment
KubeOps 1.3.0-Alpha2
Changelog kubeopsctl 0.2.0-Alpha2
Bugfixes
- fixed bug in Model that prevents cluster creation
- fixed bug in Model that prevents installing rook
- fixed bug in Model that prevents installing harbor
Changelog LIMA 1.1.0Alpha2
Bugfixes
- fixed bug that prevents a healthy cluster creation
- fixed bug that prevented adding masters to loadbalancer
Changelog setup 1.3.0
Updated
- added lima:1.1.0Alpha2
- added templating for updating loadbalancer after clustercreation
Changelog clustercreate 1.3.0
Bugfixes
- added updating loadbalancer after clustercreation
KubeOps 1.3.0-Alpha1
Changelog kubeopsctl 0.2.0-Alpha1
Bugfixes
- fixed issue with not recognized values in kubeopsctl.yaml
KubeOps 1.3.0-Alpha0
Changelog kubeopsctl 0.2.0-Alpha0
Bugfixes
- fixed issue with Kubernetes Upgrade
- fixed issue with wrong pause image in containerd config.toml
Updated
- Updated Lima Version to lima 1.1.0Alpha0
Changelog setup 1.3.0
Updated
- added lima:1.1.0Alpha0
Changelog rook-ceph 1.3.0
Updated
- Changed installation/update/delete procedure from yaml-files to helm-chart
- Updated rook from 1.10.9 to 1.12.7
- removed some adjustable values for compatibility and easier configuration:
- removed "placement" value
- removed "removeOSDsIfOutAndSafeToRemove" value, which is now hardcoded to false
- removed FS-type templating for the "blockStorageClass", as it is not recommended to use anything other than ext4
Changelog harbor 1.3.0
Updated
- removed external redis and database and changed to internal services
- Updated Harbor from Version 2.6.4 to 2.9.1
- removed some adjustable values for compatibility and easier configuration:
- redis- and postgres-password configuration moved into the "harborValues" section (as "redisPassword" and "databasePassword")
- storage values for the external instances have been removed; instead, the storage configuration for redis and postgres inside "harborValues" is used (the "persistentVolumeClaim" section contains the keys "redis" and "database", the latter referencing the internal postgres instance)
Changelog KOSI 2.9.0_Alpha0
New
- SINA got renamed to KOSI
- new code format for creating packages is named package.kosi
- added the possibility to use sha tag values.
- added an image-clean-up if you use retagging in KOSI
- added KOSI remove command, for removing your own packages from the hub
KubeOps 1.2.0-Beta3
Changelog setup 1.2.3
Updated
- Updated Lima Version to lima 1.0.0Beta3
Changelog clustercreate 1.2.0
Updated
- added lima update loadbalancer to logic for clustercreation
Changelog kubeopsctl 0.1.0-Beta3
fixed
- fixed issue with not working loadbalancer and network pods on cluster creation
KubeOps 1.2.0-Beta2
Changelog setup 1.2.2
Updated
- Updated Lima Version to lima 1.0.0Beta2
Changelog kubeopsctl 0.1.0-Beta2
fixed
- fixed issue with Kubernetes Upgrade
- fixed issue with wrong pause image in containerd config.toml
KubeOps 1.2.0-Beta1
Changelog LIMA 1.0.0-Beta1
new maintenance packages for kubernetes
- added package for v1.28.2
- added package for v1.28.1
- added package for v1.28.0
- added package for v1.27.6
- added package for v1.27.5
- added package for v1.27.4
- added package for v1.27.3
- added package for v1.26.9
- added package for v1.26.8
- added package for v1.26.7
- added package for v1.26.6
- added package for v1.25.14
- added package for v1.25.13
- added package for v1.25.12
- added package for v1.25.11
Updated
- updated the haproxy version to 2.8.1
- updated crontab time to 12pm
Changelog setup 1.2.1
Updated
- Updated Lima Version to lima 1.0.0Beta1
Changelog kubeopsctl 0.1.0-Beta1
Updated
- updated the haproxy version to 2.8.1
- updated crontab time to 12pm
KubeOps 1.2.0-Beta0
Changelog kubeopsctl 0.1.0-Beta0
New
- apply -f
- change registry -f
Changelog LIMA 1.0.0-Beta0
New
- Added support for podman
- Removed Docker dependency
Changelog Sina 2.8.0_Beta0
New
- added sina encrypt -f <user.yaml>
- added flag (--cf <user.yaml>) for sina install to input encrypted yaml files
- added flag (--cf <user.yaml>) for sina update to input encrypted yaml files
- added flag (--cf <user.yaml>) for sina delete to input encrypted yaml files
- added sina check Command
- added flag (-t) for sina pull. Use the cluster's internal address for the registry, which is resolvable by nodes. (Can only be used with the -r Flag)
KubeOps 1.2.0-Alpha15
Changelog kubeopsctl 0.1.0-Alpha15
Bugfixes
- fixed an issue that prevented the upgrade from platform 1.1 to platform 1.2
Changelog LIMA 1.0.0-Alpha14
Bugfixes
- fixed an issue where nodes were not ready after cluster creation
KubeOps 1.2.0-Alpha14
Changelog kubeopsctl 0.1.0-Alpha14
Bugfixes
- fixed an issue where nodes were not ready after cluster creation
Changelog LIMA 1.0.0-Alpha13
Bugfixes
- fixed an issue where nodes were not ready after cluster creation
KubeOps 1.2.0-Alpha13
Changelog kubeopsctl 0.1.0-Alpha13
Bugfixes
- fixed an issue where local-registry has no storage
- fixed an issue where cni images were not on all master registries
Changelog LIMA 1.0.0Alpha12
Bugfixes
- fixed an issue where local-registry has no storage
- fixed an issue where cni images were not on all master registries
Changelog keycloak 1.2.0
Bugfixes
- added missing / in URL
Updates
- updated readinessProbe from 30 to 300 seconds
KubeOps 1.2.0-Alpha12
Changelog kubeopsctl 0.1.0-Alpha12
Bugfixes
- fixed an issue where kubeopsctl was not airgap-ready
- fixed an issue where images for loadbalancer were not copied
- fixed an issue where nodes could not be joined after the cluster creation
KubeOps 1.2.0-Alpha11
Changelog kubeopsctl 0.1.0-Alpha11
Bugfixes
- fixed a bug where the status of the node was not changed
- fixed a bug where an exception was thrown while changing the status of a node
- fixed a bug where an exception was thrown while applying a label to a node
Changelog LIMA 1.0.0Alpha10
- fixed a bug where kubernetes images were not pulled from the local registry
KubeOps 1.2.0-Alpha10
Changelog kubeopsctl 0.1.0-Alpha10
Bugfixes
- fixed a bug where creating or updating the cluster threw an exception because of a nonexistent file
KubeOps 1.2.0-Alpha9
Changelog kubeopsctl 0.1.0-Alpha9
Bugfixes
- fixed an issue where the change to a different registry is not pushing the calico image
- fixed an issue where the images are not pushed to harbor
- fixed an issue where the change of the registry for headlamp is throwing an unhandled exception
- fixed an issue where the linebreak for the calico image was missing in $LIMAROOT/images
Known Issues
- all nodes need internet connection
- Logs are not exported to OpenSearch
- Keycloak Pods are not running
Changelog LIMA 1.0.0Alpha7
- fixed an issue where the calico images were tagged incorrectly during cluster creation
- fixed an issue where the change to a different registry tagged the calico images incorrectly
Changelog setup 1.2.0
Updated
- updated lima to alpha7
Changelog rook-ceph 1.2.0
Bugfixes
- Fixed a bug related to indicating the correct nodePort on the service
KubeOps 1.2.0-Alpha8
Changelog kubeopsctl 0.1.0-Alpha8
Bugfixes
- fixed an issue where apply was not working as non-root user
- fixed an issue with upgrading kubernetes versions
Changelog LIMA 1.0.0Alpha6
- fixed an issue where apply was not working as non-root user
- fixed an issue with upgrading kubernetes versions
KubeOps 1.2.0-Alpha7
Changelog kubeopsctl 0.1.0-Alpha7
Bugfixes
- fixed an issue where change registry did not work and was not airgap ready
Changelog LIMA 1.0.0Alpha5
- fixed issue with using airgap images
KubeOps 1.2.0-Alpha6
Changelog Sina 2.8.0_Alpha4
New
- added sina encrypt -f <user.yaml>
- added flag (--cf <user.yaml>) for sina install to input encrypted yaml files
- added flag (--cf <user.yaml>) for sina update to input encrypted yaml files
- added flag (--cf <user.yaml>) for sina delete to input encrypted yaml files
- added sina check Command
- added flag (-t) for sina pull. Use the cluster's internal address for the registry, which is resolvable by nodes. (Can only be used with the -r Flag)
Bugfixes
- fixed retagging in pull for userv3 packages
- fixed checksum problem, caused by deprecated backend api
- fixed an issue where build user/v2 was not possible
- updated sina plugins
Changelog kubeopsctl 0.1.0-Alpha6
New
- apply -f
- change registry -f
Bugfixes
- fixed an issue where prometheus could not be installed
- fixed issue where opensearch could not be installed
- fixed issue where any tool could not be installed after first apply
- fixed an issue where the images were not pushed to the local harbor
Changelog LIMA 1.0.0Alpha4
- fixed bug where calico was not installed
- fixed bug where calico pod disruption budget had a wrong version
- renamed lima image to 1.0.0
Changelog setup 1.2.0
Updated
- Updated Lima Version to 1.0.0Alpha4
KubeOps 1.2.0-Alpha5
Changelog kubeopsctl 0.1.0-Alpha5
New
- apply -f
- change registry -f
Bugfixes
- fixed issue with ImagePullBackOff
ChangeLog LIMA 1.0.0Alpha3
New
- Added support for podman
- Removed Docker dependency
Updated
- Updated sina
ChangeLog SINA 2.8.0Alpha3
New
- added sina encrypt -f <user.yaml>
- added flag (--cf <user.yaml>) for sina install to input encrypted yaml files
- added flag (--cf <user.yaml>) for sina update to input encrypted yaml files
- added flag (--cf <user.yaml>) for sina delete to input encrypted yaml files
- added sina check Command
- added flag (-t) for sina pull. Use the cluster's internal address for the registry, which is resolvable by nodes. (Can only be used with the -r Flag)
Bugfixes
- fixed retagging in pull for userv3 packages
- fixed checksum problem, caused by deprecated backend api
- fixed an issue where build user/v2 was not possible
- updated sina plugins
KubeOps 1.2.0-Alpha4
Changelog kubeopsctl 0.1.0-Alpha4
New
- apply -f
- change registry -f
Bugfixes
- fixed issue with label
ChangeLog LIMA 1.0.0Alpha3
New
- Added support for podman
- Removed Docker dependency
Bugfixes
- fixed issue with pushing calico
KubeOps 1.2.0-Alpha3
Changelog kubeopsctl 0.1.0-Alpha3
New
- apply -f
- change registry -f
Updated
- added more default values for kubeopsctl.yaml
Bugfixes
- fixed issue with $KUBEOPSROOT and $LIMAROOT variable
- fixed issue with pulling packages
- fixed issue with writing kubeopsctl.yaml
- fixed issue with writing KUBEOPSROOT and LIMAROOT in .bashrc file
ChangeLog LIMA 1.0.0Alpha2
New
- Added support for podman
- Removed Docker dependency
Bugfixes
- fixed issue with container already in use
KubeOps 1.2.0-Alpha2
Changelog kubeopsctl 0.1.0-Alpha2
New
- apply -f
- change registry -f
Bugfixes
- Fixed an issue where sina directory was not found
- Fixed an issue where plugin directory was not found
Changelog Sina 2.8.0_Alpha2
New
- added sina encrypt -f <user.yaml>
- added flag (--cf <user.yaml>) for sina install to input encrypted yaml files
- added flag (--cf <user.yaml>) for sina update to input encrypted yaml files
- added flag (--cf <user.yaml>) for sina delete to input encrypted yaml files
- added sina check Command
- added flag (-t) for sina pull. Use the cluster's internal address for the registry, which is resolvable by nodes. (Can only be used with the -r Flag)
Bugfixes
- fixed checksum problem, caused by deprecated backend api
- fixed an issue where build user/v2 was not possible
- update sina plugin
Changelog SINA Plugin 0.3.0_Alpha2
New
- Added pull command to plugin
Bugfixes
- fixed sina install
Update
- Update dependencies from sina
KubeOps 1.2.0-Alpha1
Changelog LIMA 1.0.0-Alpha1
Bugfixes
- Corrected a bug related to updating the pauseVersion in containerd config when upgrading the Kubernetes version
Changelog setup 1.2.1
Updated
- Updated Lima Version to 1.0.0-Alpha1
Changelog Sina 2.8.0_Alpha1
New
- added sina encrypt -f <user.yaml>
- added flag (--cf <user.yaml>) for sina install to input encrypted yaml files
- added flag (--cf <user.yaml>) for sina update to input encrypted yaml files
- added flag (--cf <user.yaml>) for sina delete to input encrypted yaml files
- added sina check Command
- added flag (-t) for sina pull. Use the cluster's internal address for the registry, which is resolvable by nodes. (Can only be used with the -r Flag)
Bugfixes
- fixed checksum problem, caused by deprecated backend api
- fixed an issue where build user/v2 was not possible
Changelog SINA Plugin 0.3.0_Alpha1
New
- Added pull command to plugin
Update
- Update dependencies from sina
ChangeLog kubeopsctl 0.1.0Alpha2
updated
- added more default values for kubeopsctl.yaml
fixed
- fixed issue with $KUBEOPSROOT and $LIMAROOT variable
- fixed issue with pulling packages
- fixed issue with writing kubeopsctl.yaml
- fixed issue with writing KUBEOPSROOT and LIMAROOT in .bashrc file
ChangeLog LIMA 1.0.0Alpha2
fixed
- fixed issue with no airgap calicomultus image
- fixed issue with container already in use
Changelog cert-manager 1.2.0
Updated
- updated package version to 1.2.0
Changelog sina-filebeat-os 1.2.0
Updated
- updated package version to 1.2.0
Changelog Harbor 1.2.0
New
- Added ingress for dashboard access
Updated
- updated harbor version to v2.6.4
- updated package version to 1.2.0
Changelog kubeops-dashboard 1.2.0
New
- Added ingress for dashboard access
Updated
- updated package version to 1.2.0
Changelog ingress-nginx 1.2.0
New
- Added templating for ingress service
Updated
- updated package version to 1.2.0
Changelog keycloak 1.2.0
New
- Added package keycloak 1.2.0
Updated
- updated package version to 1.2.0
Changelog clustercreate 1.2.0
New
- Added keycloak
Updated
- updated package version to 1.2.0
- updated all packages to 1.2.0
Changelog setup 1.2.0
Updated
- updated package version to 1.2.0
- updated all packages to 1.2.0
- updated lima to 1.0.0
- removed docker dependencies
Changelog sina-logstash-os 1.2.0
Updated
- updated package version to 1.2.0
Changelog opa-gatekeeper 1.2.0
Updated
- updated package version to 1.2.0
Changelog sina-opensearch-os 1.1.2
Updated
- updated opensearch version to v2.9.0
- updated package version to 1.2.0
Changelog sina-opensearch-dashboards 1.2.0
New
- Added ingress for dashboard access
Updated
- updated opensearch version to v2.9.0
- updated package version to 1.2.0
Changelog sina-kube-prometheus-stack 1.2.0
New
- Added ingress for dashboard access
Updated
- updated package version to 1.2.0
Changelog helm 1.2.0
Updated
- updated package version to 1.2.0
Changelog rook-ceph 1.2.0
New
- Added ingress for dashboard access
Bugfixes
- Updated ceph to 17.2.6 and rook to 1.12.1
Changelog Lima 1.0.0-Alpha1
New
- Added support for podman
- Removed Docker dependency
Bugfixes
- Fixed an issue where maintenance packages could not be pulled
Changelog kubeopsctl 0.1.0-Alpha1
New
- apply -f
- change registry -f
Bugfixes
- Fixed an issue where change registry could not be used while apply is running
- Fixed an issue where new values could not be processed properly
- Fixed an issue where packages could not be pulled
Changelog Grafana/Prometheus
New
- Added SMTP-Alerting
1.3 - QuickStart
This is a comprehensive guide to getting started with a simple cluster.
Warning
This is not a guide that should be used in a production environment.
Requirements
You can choose between Red Hat Enterprise Linux 8 or OpenSUSE 15. All of your machines need the same OS.
A total of 7 machines are required:
- one admin
- three master
- three worker
1x Admin-Node
- 2 CPUs
- 2 GB RAM
- 50 GB Boot disk storage
3x Master-Node
- 4 CPUs
- 8 GB RAM
- 50 GB Boot disk storage
3x Worker-Node
- 8 CPUs
- 16 GB RAM
- 50 GB Boot disk storage
- 50 GB unformatted, unpartitioned disk storage for Ceph
For more information about the hard drives for rook-ceph, visit the rook-ceph prerequisites page.
Requirements on Admin
The following requirements must be fulfilled on the admin machine:
- All the users require sudo privileges. We recommend using the root user.
- The admin machine must be synchronized with the current time.
- You need an internet connection to use the default KubeOps registry registry1.kubernative.net/lima.
- Install kubeopsctl and podman.
Create an account and log in on the KubeOps official website to download the RPM from the Download section. Copy the kubeopsctl RPM to the home directory on your admin machine.
Run the following commands to install kubeopsctl and podman.
On Red Hat Enterprise Linux 8:
dnf install -y kubeopsctl*.rpm
dnf install -y podman
On OpenSUSE 15:
zypper install -y kubeopsctl*.rpm
zypper install -y podman
- $KUBEOPSROOT and $LIMAROOT must be set.
echo "export KUBEOPSROOT=\"${HOME}/kubeops\"" >> $HOME/.bashrc
echo "export LIMAROOT=\"${HOME}/kubeops/lima\"" >> $HOME/.bashrc
source $HOME/.bashrc
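To verify that both variables are set in the current shell, you can print them:
echo $KUBEOPSROOT
echo $LIMAROOT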
Prerequisites on Master and Worker Nodes
The following requirements must be fulfilled on master and worker nodes:
- All the users require sudo privileges. We recommend using the root user.
- Every machine must be synchronized with the current time.
- You have to assign lowercase unique hostnames for every master and worker machine you are using.
We recommend using self-explanatory hostnames.
To set the hostname on your machine use the following command:
hostnamectl set-hostname <name of node>
- Example
Use the commands below to set the hostname on the particular machine as master1, master2, master3, node1, node2 or node3.
hostnamectl set-hostname master1
hostnamectl set-hostname master2
hostnamectl set-hostname master3
hostnamectl set-hostname node1
hostnamectl set-hostname node2
hostnamectl set-hostname node3
- If you are using Red Hat Enterprise Linux 8, you must remove firewalld. Kubeopsctl installs nftables by default.
You can use the following commands to remove firewalld:
systemctl disable --now firewalld
systemctl mask firewalld
dnf remove -y firewalld
reboot
It is recommended that a DNS service is running; if you don't have a DNS service, you can change the /etc/hosts file. An example entry in the /etc/hosts file could be:
10.2.10.11 master1
10.2.10.12 master2
10.2.10.13 master3
10.2.10.14 node1
10.2.10.15 node2
10.2.10.16 node3
Prerequisites on Admin Node
- To establish an SSH connection between your machines, you need to distribute the SSH key from your admin node to each of your master and worker nodes.
- Generate an SSH key on the admin machine using the following command:
ssh-keygen
Two keys will be generated in the ~/.ssh directory: the private key id_rsa and the public key id_rsa.pub.
- Copy the SSH public key from your admin machine to all node machines with ssh-copy-id, e.g.:
ssh-copy-id master1
- Now try to establish a connection to the node machines from your admin machine, e.g.:
ssh master1
Platform Setup
In order to install your cluster, you need the following steps:
- Create the kubeopsctl.yaml file:
vi kubeopsctl.yaml
Example kubeopsctl.yaml
The names of the nodes should be the same as the hostnames of the machines.
apiVersion: kubeops/kubeopsctl/alpha/v3 # mandatory
imagePullRegistry: "registry1.kubernative.net/lima"
localRegistry: true
clusterName: "example"
kubernetesVersion: "1.28.2"
masterIP: 10.2.10.11
systemCpu: "200m"
systemMemory: "200Mi"
zones:
- name: zone1
nodes:
master:
- name: master1
ipAdress: 10.2.10.11
status: active
kubeversion: 1.28.2
- name: master2
ipAdress: 10.2.10.12
status: active
kubeversion: 1.28.2
worker:
- name: worker1
ipAdress: 10.2.10.14
status: active
kubeversion: 1.28.2
- name: worker2
ipAdress: 10.2.10.15
status: active
kubeversion: 1.28.2
- name: zone2
nodes:
master:
- name: master3
ipAdress: 10.2.10.13
status: active
kubeversion: 1.28.2
worker:
- name: worker3
ipAdress: 10.2.10.16
status: active
kubeversion: 1.28.2
# mandatory, set to true if you want to install it into your cluster
rook-ceph: true
harbor: true
opensearch: true
opensearch-dashboards: true
logstash: true
filebeat: true
prometheus: true
opa: true
headlamp: true
certman: true
ingress: true
keycloak: true
velero: true
harborValues:
harborpass: "password" # change to your desired password
databasePassword: "Postgres_Password" # change to your desired password
redisPassword: "Redis_Password"
externalURL: http://10.2.10.11:30002 # change to the IP address of master1
prometheusValues:
grafanaUsername: "user"
grafanaPassword: "password"
ingressValues:
externalIPs: []
keycloakValues:
keycloak:
auth:
adminUser: admin
adminPassword: admin
postgresql:
auth:
postgresPassword: ""
username: bn_keycloak
password: ""
database: bitnami_keycloak
existingSecret: ""
veleroValues:
accessKeyId: "your_s3_storage_username"
secretAccessKey: "your_s3_storage_password"
When you are using the ingress option, a few updates are needed in the service settings. See the Ingress Configuration guide to learn more about it.
- Platform installation
kubeopsctl apply -f kubeopsctl.yaml
The installation will take about 3 hours.
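While the installation runs, you can follow the progress from a second shell on the admin machine. A minimal sketch, assuming kubectl is already configured there; kubeopsctl status is the status command introduced in the 1.4.0 release notes:
kubeopsctl status
kubectl get nodes
kubectl get pods -A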
1.4 - Installation
KubeOps Installation and Setup
Welcome to the very first step to getting started with KubeOps. In this section, you will get to know about
- hardware, software and network requirements
- steps to install the required software
- key configurations for KubeOps
Prerequisites
A total of 7 machines are required:
- one admin
- three master
- three worker
You can choose between Red Hat Enterprise Linux 8 or OpenSUSE 15. All of your machines need the same OS.
Below you can see the minimal requirements for CPU, memory and disk storage:
| OS | Minimum Requirements |
|---|---|
| Red Hat Enterprise Linux 8 | 8 CPU cores, 16 GB memory, 50 GB disk storage |
| OpenSUSE 15 | 8 CPU cores, 16 GB memory, 50 GB disk storage |
For each worker node, an additional unformatted 50 GB hard disk is required. For more information about the hard drives for rook-ceph, visit the rook-ceph prerequisites page.
Requirements on admin
The following requirements must be fulfilled on the admin machine.
- All the utilized users require sudo privileges. If you are using KubeOps as a user, you need a user with sudo rights; on OpenSUSE and RHEL 8 environments the user should be added to the wheel group. Make sure that you change to your user with:
su -l <user>
- The admin machine must be synchronized with the current time.
- You need an internet connection to use the default KubeOps registry registry1.kubernative.net/lima.
A local registry can be used in an airgap environment. KubeOps only supports secure registries.
If you use an insecure registry, it is important to list it as an insecure registry in the registry configuration (/etc/containers/registries.conf for podman, /etc/docker/daemon.json for docker), as sketched below.
You can also create your own registry instead of using the default; check out the how-to guide Create a new Repository for more info.
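A minimal sketch of such an insecure-registry entry in /etc/containers/registries.conf for podman, assuming a hypothetical local registry myregistry.local:5000:
[[registry]]
location = "myregistry.local:5000"
insecure = true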
- It is recommended that runc is uninstalled:
dnf remove -y runc
zypper remove -y runc
- tc should be installed.
On Red Hat Enterprise Linux 8:
dnf install -y tc
dnf install -y libnftnl
On OpenSUSE 15:
zypper install -y iproute2
zypper install -y libnftnl
- For OpenSearch, /etc/sysctl.conf should be configured: the line vm.max_map_count=262144 should be added, and the command sysctl -p should be executed after that.
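For example, the following commands append the setting and apply it immediately:
echo 'vm.max_map_count=262144' >> /etc/sysctl.conf
sysctl -p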
- Podman must be installed on your machine.
sudo dnf install -y podman
sudo zypper install -y podman
Warning
There can be conflicts with containerd, so it is recommended that containerd.io is removed before installing the podman package.
- $KUBEOPSROOT and $LIMAROOT must be set.
echo 'export KUBEOPSROOT=<home folder of user>/kubeops' >> $HOME/.bashrc
echo 'export LIMAROOT=<home folder of user>/kubeops/lima' >> $HOME/.bashrc
source $HOME/.bashrc
Requirements for each node
The following requirements must be fulfilled on each node.
- All the utilized users require sudo privileges. If you are using KubeOps as a user, you need a user with sudo rights; on OpenSUSE and RHEL 8 environments the user should be added to the wheel group.
- Every machine must be synchronized with the current time.
- You have to assign lowercase unique hostnames for every machine you are using.
We recommend using self-explanatory hostnames.
To set the hostname on your machine use the following command:
hostnamectl set-hostname <name of node>
- Example
Use the commands below to set the hostnames on each machine as admin, master, node1 and node2 (requires sudo privileges).
hostnamectl set-hostname admin
hostnamectl set-hostname master
hostnamectl set-hostname node1
hostnamectl set-hostname node2
It is recommended that a DNS service is running; if you don't have a DNS service, you can change the /etc/hosts file. An example entry in the /etc/hosts file could be:
10.2.10.12 admin
10.2.10.13 master1
10.2.10.14 master2
10.2.10.15 master3
10.2.10.16 node1
10.2.10.17 node2
10.2.10.18 node3
To establish an SSH connection between your machines, you either need an SSH key or you need to install sshpass.
-
Generate an SSH key on admin machine using following command
ssh-keygen
There will be two keys generated in ~/.ssh directory.
The first key is theid_rsa(private)
and the second key is theid_rsa.pub(public)
. -
Copy the ssh key from admin machine to your node machine/s with following command
ssh-copy-id <ip address or hostname of your node machine>
-
Now try establishing a connection to your node machine/s
ssh <ip address or hostname of your node machine>
-
Installing KubeOpsCtl
- Create a kubeopsctl.yaml file with respective information as shown in kubeopsctl.yaml parameters, in order to use the KubeOps package.
- Install the kubeops*.rpm on your admin machine.
Working with KubeOpsCtl
Before starting with a KubeOps cluster, it is important to check that podman is running.
- To verify that podman is running, use the following command:
systemctl status podman
- To start and enable podman, use the following commands:
systemctl enable podman
systemctl start --now podman
Note: This must be done with the root user or with a user with sudo privileges.
Run kubeopsctl on your command line like:
kubeopsctl apply -f kubeopsctl.yaml
kubeopsctl.yaml parameters
The names of the nodes should be the same as the hostnames of the machines.
### General values for registry access ###
imagePullRegistry: "registry1.kubernative.net/lima" # mandatory, the registry from which the images for the cluster are pulled
localRegistry: false # mandatory, set to true if you use a local registry, if you want to set the value to true then you also have to set harbor to true
The imagePullRegistry parameter sets the registry from which the images for the platform software are pulled. localRegistry is a parameter for using an insecure, local registry for pulling images.
### Values for setup configuration ###
apiVersion: kubeops/kubeopsctl/alpha/v3 # mandatory
clusterName: "example" # mandatory
clusterUser: "root" # mandatory
kubernetesVersion: "1.28.2" # mandatory, check lima documentation
#masterHost: optional if you have an hostname, default value in "masterIP"
masterIP: 10.2.10.12 # mandatory
firewall: "nftables" # mandatory, default "nftables"
pluginNetwork: "calico" # mandatory, default "calico"
containerRuntime: "containerd" # mandatory
clusterOS: "Red Hat Enterprise Linux" # mandatory, can be "Red Hat Enterprise Linux" or "openSUSE Leap"
### Additional values for cluster configuration
useInsecureRegistry: false # optional, default is false
ignoreFirewallError: false # optional, default is false
serviceSubnet: 192.168.128.0/17 # optional, default "192.168.128.0/17"
podSubnet: 192.168.0.0/17 # optional, default "192.168.0.0/17"
debug: false # optional, default is false
logLevel: vvvvv # optional, default "vvvvv"
systemCpu: "1" # optional, default "1"
systemMemory: "2G" # optional, default "2G"
sudo: true # optional, default is true
tmpCopyDir: "/tmp" # optional, default is /tmp
createCluster: true # optional, default is true
updateRegistry: true # optional, default is true
- The parameter clusterName is used to interact with and manage the cluster later on, e.g. if you want to change the runtime, you need the clusterName parameter.
- clusterUser is the linux user for using the cluster. clusterOS is the linux distribution of the cluster.
- masterIP is the IP address of the cluster master or the first master, which is later used for interacting with the cluster.
- useInsecureRegistry is for using a local and insecure registry for pulling images for the lima software.
- ignoreFirewallError is a parameter for ignoring firewall errors while the cluster is created (not while operating on the cluster).
- serviceSubnet is the subnet for all kubernetes service IP addresses.
- podSubnet is the subnet for all kubernetes pod IP addresses.
- systemCpu is the maximum amount of CPU that the kube-apiserver is allowed to use.
- sudo is a parameter for using sudo for commands that need sudo rights, if you use a non-root linux user.
- tmpCopyDir is a parameter for templating the folder on the cluster nodes to which lima images are copied.
- createCluster is a parameter with which you can specify whether you want to create a cluster or not (see the sketch after this list).
- updateRegistry is a parameter for updating the docker registry.
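For example, to run kubeopsctl against an already existing cluster without recreating it or re-pushing images, both optional flags can be turned off; a sketch, assuming the mandatory keys from above are still present:
createCluster: false # skip cluster creation, only manage software
updateRegistry: false # skip updating the registry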
zones:
- name: zone1
nodes:
master:
- name: cluster1master1
ipAdress: 10.2.10.11
user: myuser
systemCpu: 200m
systemMemory: 200Mi
status: active
kubeversion: 1.28.2
- name: cluster1master2
ipAdress: 10.2.10.12
user: myuser
systemCpu: 200m
systemMemory: 200Mi
status: active
kubeversion: 1.28.2
worker:
- name: cluster1worker1
ipAdress: 10.2.10.14
user: myuser
systemCpu: 200m
systemMemory: 200Mi
status: active
kubeversion: 1.28.2
- name: cluster1worker2
ipAdress: 10.2.10.15
user: myuser
systemCpu: 200m
systemMemory: 200Mi
status: active
kubeversion: 1.28.2
- name: zone2
nodes:
master:
- name: cluster1master3
ipAdress: 10.2.10.13
user: myuser
systemCpu: 200m
systemMemory: 200Mi
status: active
kubeversion: 1.28.2
worker:
- name: cluster1worker3
ipAdress: 10.2.10.16
user: myuser
systemCpu: 200m
systemMemory: 200Mi
status: active
kubeversion: 1.28.2
This YAML content is mandatory and describes a configuration for managing multiple zones in a Kubernetes cluster. Let's break it down step by step:
- zones: This is the top-level key in the YAML file, representing a list of zones within the Kubernetes cluster.
- zone1 and zone2: These are two zones within the cluster, each with its own configuration.
- nodes: This is a sub-key under each zone, indicating the different types of nodes within that zone.
- master: This is a sub-key under nodes, representing the master nodes in the zone. cluster1master1, cluster1master2, and cluster1master3 are individual master nodes in the cluster, each with its own configuration settings. They have attributes like name (the node name has to be equal to the host name), ipAdress (IP address), user (the user associated with the node), systemCpu (CPU resources allocated to the system), systemMemory (system memory allocated), status (the status of the node, which can be either "active" or "drained"; see the sketch after this list), and kubeversion (the Kubernetes version running on the node). The kubernetes versions in the zone apply to its nodes. NOTE: If you drain too many nodes, you may have too few OSDs for Rook.
- worker: This is another sub-key under nodes, representing the worker nodes in the zone. cluster1worker1, cluster1worker2, and cluster1worker3 are, similar to the master nodes, individual worker nodes in the cluster, each with its own configuration settings, including name, IP address, user, system resources, status, and Kubernetes version.
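As a sketch of how the status attribute is used: to drain cluster1worker2, set its status to drained in kubeopsctl.yaml and re-apply the file (keeping the OSD note above in mind):
worker:
  - name: cluster1worker2
    ipAdress: 10.2.10.15
    user: myuser
    systemCpu: 200m
    systemMemory: 200Mi
    status: drained # was: active
    kubeversion: 1.28.2
kubeopsctl apply -f kubeopsctl.yaml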
# mandatory, set to true if you want to install it into your cluster
rook-ceph: true
harbor: true # if localRegistry is set to true, harbor also needs to be set to true
opensearch: true
opensearch-dashboards: true
logstash: true
filebeat: true
prometheus: true
opa: true
headlamp: true
certman: true
ingress: true
keycloak: true
velero: true
These values are booleans for deciding which applications are later installed into the cluster.
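Per the 1.4.0 release notes, any of these flags that is omitted defaults to false, so a minimal configuration could list only the packages you actually want, for example:
rook-ceph: true
harbor: true
prometheus: true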
# Global values, will be overwritten by the corresponding values of the individual packages
namespace: "kubeops"
storageClass: "rook-cephfs"
These global values will be used for installing the packages, but will be overwritten by the corresponding package-level settings.
- namespace defines the kubernetes-namespace in which the applications are deployed
- storageClass defines the name of the StorageClass resource that will be used by the applications.
rookValues:
namespace: kubeops
cluster:
spec:
dataDirHostPath: "/var/lib/rook" # optional, default is /var/lib/rook
storage:
useAllNodes: true # optional, default value: true
useAllDevices: true # optional, default value: true
deviceFilter: "^sd[a-b]" # optional, will only be used if useAllDevices is set to false
config:
metadataDevice: "sda" # optional, only set this value, if there is a device available
nodes: # optional if useAllNodes is set to true, otherwise mandatory
- name: "<ip-adress of node_1>"
devices:
- name: "sdb"
- name: "<ip-adress of node_2>"
deviceFilter: "^sd[a-b]"
config:
metadataDevice: "sda" # optional
resources:
mgr:
requests:
cpu: "500m" # optional, default is 500m, limit: 1000m
memory: "512Mi" # optional, default is 1Gi, limit: 1Gi
mon:
requests:
cpu: "1" # optional, default is 1, limit: 2000m
memory: "1Gi" # optional, default is 1Gi, limit: 2Gi
osd:
requests:
cpu: "1" # optional, default is 1, limit: 2
memory: "1Gi" # optional, default is 4Gi, limit: 4Gi
cephFileSystems:
requests:
cpu: "1" # optional, default is 1, limit: 2
memory: "1Gi" # optional, default is 1, limit: 4Gi
cephObjectStores:
requests:
cpu: "1" # optional, default is 1, limit: 2
memory: "1Gi" # optional, default is 1Gi, limit: 2Gi
operator:
data:
rookLogLevel: "DEBUG" # optional, default is DEBUG
- The namespace parameter is important for the applications, because it decides in which namespace the individual applications are deployed.
- dataDirHostPath sets the path of the configuration files of rook.
- useAllNodes is a parameter of rook-ceph; if it is set to true, all worker nodes will be used for rook-ceph.
- useAllDevices is a parameter of rook-ceph; if it is set to true, all possible devices will be used for rook-ceph.
- deviceFilter is a global filter to select only certain device names. This example matches names starting with sda or sdb. It will only be used if useAllDevices is set to false and will be ignored if individual devices have been specified on a node.
- metadataDevice: name of a device or LVM to use for the metadata of OSDs (daemons for storing data on the local file system) on each node. Performance can be improved by using a low-latency device (SSD or NVMe) as the metadata device, while other spinning-platter (HDD) devices on a node are used to store data. This global setting will be overwritten by the corresponding node-level setting.
- nodes: names of individual nodes in the cluster that should have their storage included. Will only be used if useAllNodes is set to false. Specific configurations of the individual nodes will overwrite global settings.
- resources refers to the CPU and memory that the components of rook-ceph request. In this case it is the manager, the monitoring pods and the OSDs (they have the job of managing the local storage of the nodes and together form the distributed storage) as well as the filesystem and object-store pods (they manage the respective storage solution).
- rookLogLevel: the log level of rook-ceph. DEBUG provides the most informative logs.
harborValues:
namespace: kubeops # optional, default is kubeops
harborpass: "password" # mandatory: set password for harbor access
databasePassword: "Postgres_Password" # mandatory: set password for database access
redisPassword: "Redis_Password" # mandatory: set password for redis access
externalURL: http://10.2.10.13:30002 # mandatory, the IP address and port from which harbor is accessible outside of the cluster
nodePort: 30002 # mandatory
hostname: harbor.local # mandatory
harborPersistence:
persistentVolumeClaim:
registry:
size: 5Gi # mandatory, depending on storage capacity
storageClass: "rook-cephfs" #optional, default is rook-cephfs
jobservice:
jobLog:
size: 1Gi # mandatory: Depending on storage capacity
storageClass: "rook-cephfs" #optional, default is rook-cephfs
database:
size: 1Gi # mandatory, depending on storage capacity
storageClass: "rook-cephfs" #optional, default is rook-cephfs
redis:
size: 1Gi # mandatory, depending on storage capacity
storageClass: "rook-cephfs" #optional, default is rook-cephfs
trivy:
size: 5Gi # mandatory, depending on storage capacity
storageClass: "rook-cephfs" #optional, default is rook-cephfs
You can set the root password for the postgres database, redis and harbor itself. For the persistent volumes of harbor, the sizes and the storage class are also templatable. This applies to all applications of harbor, e.g. trivy for the image scanning or the chart museum for helm charts.
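As a sketch of how these values come together: once Harbor is running, you can log in to the registry at the configured externalURL with podman, assuming Harbor's default admin account and the harborpass value from above; --tls-verify=false is only needed because the example externalURL uses plain http:
podman login 10.2.10.13:30002 --username admin --password password --tls-verify=false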
###Values for Logstash deployment###
##For detailed explanation of each key see: https://github.com/elastic/helm-charts/releases/tag/v7.16.3###
logstashValues:
namespace: kubeops
volumeClaimTemplate:
accessModes:
- ReadWriteMany #optional, default is [ReadWriteMany]
resources:
requests:
storage: 1Gi # mandatory, depending on storage capacity
storageClass: "rook-cephfs" #optional, default is rook-cephfs
For logstash, the PVC size is also templatable.
###Values for OpenSearch-Dashboards deployment###
##For detailed explanation of each key see: https://github.com/opensearch-project/helm-charts/tree/main/charts/opensearch-dashboards###
openSearchDashboardValues:
namespace: kubeops
nodePort: 30050
###Values for OpenSearch deployment###
##For detailed explanation of each key see: https://github.com/opensearch-project/helm-charts/tree/main/charts/opensearch###
openSearchValues:
namespace: kubeops
opensearchJavaOpts: "-Xmx512M -Xms512M" # optional, default is -Xmx512M -Xms512M
resources:
requests:
cpu: "250m" # optional, default is 250m
memory: "1024Mi" # optional, default is 1024Mi
limits:
cpu: "300m" # optional, default is 300m
memory: "3072Mi" # optional, default is 3072Mi
persistence:
size: 4Gi # mandatory
enabled: "true" # optional, default is true
enableInitChown: "false" # optional, default is false
labels:
enabled: "false" # optional, default is false
storageClass: "rook-cephfs" # optional, default is rook-cephfs
accessModes:
- "ReadWriteMany" # optional, default is {ReadWriteMany}
securityConfig:
enabled: false # optional, default value: false
### Additional values can be set, if securityConfig is enabled:
# path: "/usr/share/opensearch/plugins/opensearch-security/securityconfig"
# actionGroupsSecret:
# configSecret:
# internalUsersSecret: internal-users-config-secret
# rolesSecret:
# rolesMappingSecret:
# tenantsSecret:
# config:
# securityConfigSecret: ""
# dataComplete: true
# data: {}
replicas: "3" # optional, default is 3
#--------------------------------------------------------------------------------------------------------------------------------
- opensearchJavaOpts sets the size of the Java heap.
- enableInitChown changes the owner of the OpenSearch configuration files so that non-root users can modify them.
- If you want labels for the OpenSearch pods in the cluster, you can enable them with the enabled parameter under the labels subtree.
- If you want to use a custom security config, you can enable it and then use parameters like the path to the config file. More information can be found here.
- The replica count is 3 by default, but you can template it for better scaling.
###Values for Prometheus deployment###
prometheusValues:
namespace: kubeops # optional, default is kubeops
privateRegistry: false # optional, default is false
grafanaUsername: "user" # optional, default is user
grafanaPassword: "password" # optional, default is password
retentionSize: "24GB" # optional, default is 24GB
grafanaResources:
nodePort: 30211 # optional, default is 30211
storageClass: "rook-cephfs" # optional, default is rook-cephfs
storage: 5Gi # optional, default is 5Gi
grafanaUsername: "admin" # optional, default is admin
grafanaPassword: "admin" # optional, default is admin
retention: 10d # mandatory
retentionSize: "24GB" # mandatory
storageClass: "rook-cephfs" # optional, default is rook-cephfs
storage: 25Gi # optional, default is 25Gi
prometheusResources:
retention: 10d # optional, default is 10d
retentionSize: "24GB" # optional, default is "24GB"
storageClass: "rook-cephfs" # optional, default is rook-cephfs
storage: 25Gi # optional, default is 25Gi
The nodePort defaults to 30211, so you can reach the Grafana application on every master node at http://<master-IP>:30211.
###Values for OPA deployment###
opaValues:
namespace: kubeops
- namespace value specifies the Kubernetes namespace where OPA will be deployed.
###Values for Headlamp deployment###
headlampValues:
namespace: kubeops
hostname: kubeops-dashboard.local
service:
nodePort: 30007
- namespace value specifies the Kubernetes namespace where Headlamp will be deployed.
- hostname is for accessing the Headlamp service.
- the nodePort value specifies the node port for accessing Headlamp.
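Since kubeops-dashboard.local is usually not a registered DNS name, one way to resolve it from a workstation (an assumption about your setup; any DNS mechanism works) is an /etc/hosts entry pointing at one of the cluster nodes:
10.2.10.11 kubeops-dashboard.local # example entry, placeholder node IP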
###Values for cert-manager deployment###
certmanValues:
namespace: kubeops
replicaCount: 3
logLevel: 2
- namespace value specifies the Kubernetes namespace where cert-manager will be deployed.
- replicaCount specifies the number of replicas for the cert-manager deployment.
- logLevel specifies the logging level for cert-manager.
###Values for ingress-nginx deployment###
ingressValues:
namespace: kubeops
externalIPs: []
- namespace value specifies the Kubernetes namespace where ingress-nginx will be deployed.
- externalIPs value specifies a list of external IP addresses that will be used to expose the ingress-nginx service. This allows external traffic to reach the ingress controller. The value for this key is expected to be provided as a list of IP addresses.
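For illustration, a filled-in externalIPs list could look like the following sketch; the addresses are placeholders and must be replaced with IPs that actually route to your cluster nodes:
ingressValues:
  namespace: kubeops
  externalIPs:
    - 10.2.10.11
    - 10.2.10.12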
###Values for keycloak deployment###
keycloakValues:
namespace: "kubeops" # Optional, default is "keycloak"
storageClass: "rook-cephfs" # Optional, default is "rook-cephfs"
nodePort: "30180" # Optional, default is "30180"
hostname: keycloak.local
keycloak:
auth:
adminUser: admin # Optional, default is admin
adminPassword: admin # Optional, default is admin
existingSecret: "" # Optional, default is ""
postgresql:
auth:
postgresPassword: "" # Optional, default is ""
username: bn_keycloak # Optional, default is "bn_keycloak"
password: "" # Optional, default is ""
database: bitnami_keycloak # Optional, default is "bitnami_keycloak"
existingSecret: "" # Optional, default is ""
- namespace value specifies the Kubernetes namespace where keycloak will be deployed.
- storageClass value specifies the storage class to be used for persistent storage in Kubernetes. If not provided, it defaults to “rook-cephfs”.
- nodePort value specifies the node port for accessing Keycloak. If not provided, it defaults to “30180”.
- hostname value specifies the hostname for accessing the Keycloak service.
- adminUser value specifies the username for the Keycloak admin user. Defaults to “admin”.
- adminPassword value specifies the password for the Keycloak admin user. Defaults to “admin”.
- existingSecret value specifies an existing Kubernetes secret to use for Keycloak admin authentication. Defaults to an empty string.
- postgresPassword value specifies the password for the PostgreSQL database. Defaults to an empty string.
- username value specifies the username for the PostgreSQL database. Defaults to “bn_keycloak”.
- password value specifies the password for the PostgreSQL database. Defaults to an empty string.
- database value specifies the name of the PostgreSQL database. Defaults to “bitnami_keycloak”.
- existingSecret value specifies an existing Kubernetes secret to use for PostgreSQL authentication. Defaults to an empty string.
veleroValues:
namespace: "velero"
accessKeyId: "your_s3_storage_username"
secretAccessKey: "your_s3_storage_password"
useNodeAgent: false
defaultVolumesToFsBackup: false
provider: "aws"
bucket: "velero"
useVolumeSnapshots: false
backupLocationConfig:
region: "minio"
s3ForcePathStyle: true
s3Url: "http://minio.velero.svc:9000"
- namespace: Specifies the Kubernetes namespace where Velero will be deployed.
- accessKeyId: Your access key ID for accessing the S3 storage service.
- secretAccessKey: Your secret access key for accessing the S3 storage service.
- useNodeAgent: Indicates whether to use a node agent for backup operations. If set to true, Velero will use a node agent.
- defaultVolumesToFsBackup: Specifies whether to default volumes to file system backup. If set to true, Velero will use file system backup by default.
- provider: Specifies the cloud provider where the storage service resides.
- bucket: The name of the S3 bucket where Velero will store backups.
- useVolumeSnapshots: Indicates whether to use volume snapshots for backups. If set to true, Velero will use volume snapshots.
- backupLocationConfig: Configuration for the backup location.
- region: Specifies the region where the S3 storage service is located.
- s3ForcePathStyle: Specifies whether to force the use of path-style URLs for S3 requests.
- s3Url: The URL for accessing the S3-compatible storage service.
2 - How to Guides
2.1 - Ingress Configuration
Manual configuration of the Nginx-Ingress-Controller
Right now the Ingress Controller Package is not fully configured. To make complete use of the Ingress capabilities of the cluster, the user needs to manually update some of the settings of the corresponding service.
Locating the service
The service in question is called “ingress-nginx-controller” and can be found in the same namespace as the ingress package itself. To locate the service across all namespaces, you could use the following command.
kubectl get service -A | grep ingress-nginx-controller
This command should return two entries of services, “ingress-nginx-controller” and “ingress-nginx-controller-admission”, though only the first one needs to be further adjusted.
Setting the Ingress-Controller service to type NodePort
To edit the service, you can use the following command, although the actual namespace may be different. This will change the service type to NodePort.
kubectl patch service ingress-nginx-controller -n kubeops -p '{"spec":{"type":"NodePort"}}'
Kubernetes will now automatically assign unused port numbers for the nodePort to allow HTTP and HTTPS connections to the service. These can be retrieved by running the same command used to locate the service. Alternatively, you can use the following command, which sets the port numbers 30080 and 30443 for the respective protocols. If you do so, you have to make sure that these port numbers are not being used by any other NodePort service.
kubectl patch service ingress-nginx-controller -n kubeops --type=json -p '[{"op":"replace","path":"/spec/type","value":"NodePort"}, {"op":"add","path":"/spec/ports/0/nodePort","value":30080}, {"op":"add","path":"/spec/ports/1/nodePort","value":30443}]'
Configuring external IPs
If you have access to external IPs that route to one or more cluster nodes, you can expose Kubernetes services of any type through these addresses. The command below shows how to add an external IP address to the service with the example value of “192.168.0.1”. Keep in mind that this value has to be changed to fit your networking settings.
kubectl patch service ingress-nginx-controller -n kubeops -p '{"spec":{"externalIPs":["192.168.0.1"]}}'
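To verify the patch, list the service again; the configured address should now appear in the EXTERNAL-IP column:
kubectl get service ingress-nginx-controller -n kubeops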
2.2 - Use Keycloak
keycloak
The KubeOps platform now includes Keycloak, a single authentication and login system that lets you use all dashboards without entering your credentials each time.
Install keycloak
You need kubeopsctl for installing Keycloak; the parameter keycloak must be set to true:
...
keycloak: true # mandatory
...
Further down in the file, you can set the configuration parameters for Keycloak:
...
keycloakValues:
namespace: "kubeops" # Optional, default is "keycloak"
storageClass: "rook-cephfs" # Optional, default is "rook-cephfs"
keycloak:
auth:
adminUser: admin # Optional, default is admin
adminPassword: admin # Optional, default is admin
existingSecret: "" # Optional, default is ""
postgresql:
auth:
postgresPassword: "" # Optional, default is ""
username: bn_keycloak # Optional, default is "bn_keycloak"
password: "" # Optional, default is ""
database: bitnami_keycloak # Optional, default is "bitnami_keycloak"
existingSecret: "" # Optional, default is ""
...
Configure Dashboards for Keycloak
For Harbor
- Log in to the Harbor web console.
- Navigate to Administration > Configuration > Auth.
- Select OIDC as the Auth mode.
- Enter the required information from Keycloak:
- OIDC Provider Name: Keycloak
- OIDC Endpoint: (your Keycloak server URL)
- OIDC Client ID: (the client ID you created in Keycloak for Harbor)
- OIDC Client Secret: (the client secret you created in Keycloak for Harbor)
- Confirm the settings and test the login via Keycloak.
For Prometheus
- Keycloak supports OAuth2, which can be used for authentication with Prometheus. To do this, you must change the configuration of Prometheus to use the OAuth2 flow.
- In the prometheus.yml configuration file, you can add the authentication parameters under the oauth2 key:
scrape_configs:
- job_name: 'example-job'
oauth2:
client_id: 'your-client-id'
client_secret: 'your-client-secret'
token_url: 'http://keycloak.example.com/auth/realms/your-realm/protocol/openid-connect/token'
For OpenSearch
- Install the OpenSearch Security Plugin if it is not already installed.
- Modify the OpenSearch security configuration file (config.yml) to use OIDC (OpenID Connect) for authentication:
authc:
openid_auth_domain:
http_enabled: true
transport_enabled: true
order: 0
http_authenticator:
type: openid
challenge: false
config:
subject_key: preferred_username
roles_key: roles
openid_connect_url: http://keycloak.example.com/auth/realms/your-realm/.well-known/openid-configuration
2.3 - Create Cluster
How to create a working cluster?
Pre-requisites
- maintenance packages installed?
- network connection?
- LIMAROOT set
Steps
- create yaml file
- create cluster with multiple nodes
- add nodes to created cluster
- delete nodes when needed
Once you have completed the KubeOps installation, you are ready to dive into the KubeOps-Platform.
How to use LIMA
Downloaded all maintenance packages? If yes, then you are ready to use LIMA for managing your Kubernetes clusters!
In the following sections we will walk you through a quick cluster setup and adding nodes.
So the first thing to do is to create a YAML file that contains the specifications of your cluster. Customize the file below according to your downloaded maintenance packages, e.g. the parameters kubernetesVersion, firewall and containerRuntime. Also adjust the other parameters like masterPassword, masterHost and apiEndpoint to your environment.
createCluster.yaml
apiVersion: lima/clusterconfig/v1alpha2
spec:
clusterName: ExampleClusterName
masterUser: root
masterPassword: "myPassword"
masterHost: 10.2.1.11
kubernetesVersion: 1.22.2
registry: registry1.kubernative.net/lima
useInsecureRegistry: false
ignoreFirewallError: false
firewall: firewalld
apiEndpoint: 10.2.1.11:6443
serviceSubnet: 192.168.128.0/20
podSubnet: 192.168.144.0/20
debug: true
logLevel: v
systemCpu: 100m
systemMemory: 100Mi
sudo: false
containerRuntime: crio
pluginNetwork:
type: weave
parameters:
weavePassword: re4llyS7ron6P4ssw0rd
auditLog: false
serial: 1
seLinuxSupport: true
Most of these parameters are optional and can be left out. If you want to know more about each parameter, please refer to our Full Documentation.
Set up a single node cluster
To set up a single node cluster we need our createCluster.yaml file from above.
Run the create cluster command on the admin node to create a cluster with one node.
lima create cluster -f createCluster.yaml
Done! LIMA is setting up your Kubernetes cluster. Within a few minutes you will have a regular single-master cluster.
Once LIMA has finished successfully, you can inspect your Kubernetes single node cluster with kubectl get nodes.
It looks very alone and sad right? Jump to the next section to add some friends to your cluster!
Optional step
The master node which you used to set up your cluster is only suitable as an example installation or for testing. To use this node for production workloads, remove the taint from the master node.
kubectl taint nodes --all node-role.kubernetes.io/master-
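On newer Kubernetes versions (1.24 and later), control-plane nodes are tainted with node-role.kubernetes.io/control-plane instead of the master taint, so the command becomes:
kubectl taint nodes --all node-role.kubernetes.io/control-plane-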
Add nodes to your cluster
Let’s give your single node cluster some friends. What we need for this is another YAML file. We can call the YAML file whatever we want - here we call it addNode.yaml.
addNode.yaml
apiVersion: lima/nodeconfig/v1alpha1
clusterName: ExampleClusterName
spec:
masters:
- host: 10.2.1.12
user: root
password: "myPassword"
workers:
- host: 10.2.1.13 #IP-address of the node you want to add
user: root
password: "myPassword"
We do not need to pull any other maintenance packages. We already did that and are using the same specifications from our single node cluster. The only thing to do is to use the create nodes command:
lima create nodes -f addNode.yaml
Done! LIMA adds the nodes to your single node cluster. After LIMA is finished, check the state of your Kubernetes cluster again with kubectl get nodes. Your master node should not be alone anymore!
2.4 - Install Maintenance Packages
Installing the essential Maintenance Packages
KubeOps provides packages for the supported Kubernetes tools. These maintenance packages help you update the Kubernetes tools on your clusters to the desired versions along with their dependencies.
It is necessary to install the required maintenance packages to create your first Kubernetes cluster. The packages are available on the KubeOps hub.
So let’s get started!
Note : Be sure you have the supported KOSI version for the KubeOps version installed, or you cannot pull any maintenance packages!
Commands to install a package
Following are the most common commands to be used on Admin Node to get and install any maintenance package.
- Use the command get maintenance to list all available maintenance packages.
lima get maintenance
This will display a list of all the available maintenance packages.
Example :
| SOFTWARE | VERSION | STATUS | SOFTWAREPACKAGE |TYPE |
| -- | -- | -- | -- | -- |
| Kubernetes | 1.24.8 | available | lima/kubernetes:1.24.8 | upgrade |
| iptablesEL8 | 1.8.4 | available | lima/iptablesel8:1.8.4 | update |
| firewalldEL8 | 0.8.2 | downloaded | lima/firewalldel8:0.8.2 | update |
Please observe the following important columns in this table and download the correct packages based on them.
|Name | Description |
|-------------------------------------------|-------------------------------------------|
| SOFTWARE | It is the name of software which is required for your cluster. |
| VERSION | It is the software version. Select correct version based on your Kubernetes and KubeOps version. |
| SOFTWAREPACKAGE | It is the unique name of the maintenance package. Use this to pull the package on your machine.|
| STATUS | There can be any of the following status indicated. |
| | - available: package is remotely available |
| | - not found : package not found |
| | - downloaded : the package is locally and remotely available |
| | - only local : package is locally available |
| | - unknown: unknown package |
- Use the command pull maintenance to pull/download the package onto your machine.
lima pull maintenance <SOFTWAREPACKAGE>
It is possible to pull more than one package with a single pull invocation. For example:
lima pull maintenance lima/kubernetes:1.23.5 lima/dockerEL7:18.09.1
List of Maintenance Packages
Following are the essential maintenance packages to be pulled. Use the above mentioned Common Commands to install desired packages.
1. Kubernetes
The first step is to choose a Kubernetes version and to pull its available package. LIMA currently supports the following Kubernetes versions:
1.26.x | 1.27.x | 1.28.x | 1.29.x |
---|---|---|---|
1.26.3 | 1.27.1 | 1.28.0 | 1.29.0 |
1.26.4 | 1.27.2 | 1.28.1 | 1.29.1 |
1.26.5 | 1.27.3 | 1.28.2 | |
1.26.6 | 1.27.4 | ||
1.26.7 | 1.27.5 | ||
1.26.8 | 1.27.6 | ||
1.26.9 | 1.27.7 | ||
1.27.8 | |||
1.27.9 | |||
1.27.10 | |||
Following are the packages available for the supported Kubernetes versions.
Kubernetes version | Available packages |
---|---|
1.26.x | kubernetes-1.26.x |
1.27.x | kubernetes-1.27.x |
1.28.x | kubernetes-1.28.x |
1.29.x | kubernetes-1.29.x |
2. Install Kubectl
To install Kubectl you won’t need to pull any other package. The Kubernetes package pulled in the above step already contains the Kubectl installation file.
In the following example the downloaded package is kubernetes-1.23.5.
dnf install $LIMAROOT/packages/kubernetes-1.23.5/kubectl-1.23.5-0.x86_64.rpm
zypper install $LIMAROOT/packages/kubernetes-1.23.5/kubectl-1.23.5-0.x86_64.rpm
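As a quick sanity check (not part of the original steps), you can confirm that the kubectl client is installed:
kubectl version --client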
3. Kubernetes Dependencies
The next step is to pull the Kubernetes dependencies:
OS | Available packages |
---|---|
RHEL 8 /openSUSE 15 | kubeDependencies-EL8-1.0.4 |
4. CRIs
Choose your CRI and pull the available packages:
OS | CRI | Available packages |
---|---|---|
openSUSE 15 | docker | dockerLP151-19.03.5 |
containerd | containerdLP151-1.6.6 | |
CRI-O | crioLP151-1.22.0 | |
podmanLP151-3.4.7 | ||
RHEL 8 | docker | dockerEL8-20.10.2 |
containerd | containerdEL8-1.4.3 | |
CRI-O | crioEL8-x.xx.x | |
crioEL8-dependencies-1.0.1 | ||
podmanEL8-18.09.1 |
Note : CRI-O packages are depending on the chosen Kubernetes version. Choose the CRI-O package which matches with the chosen Kubernetes version.
- E.g. kubernetes-1.23.5 requires crioEL7-1.23.5
- E.g. kubernetes-1.24.8 requires crioEL7-1.24.8
5. Firewall
Choose your firewall and pull the available packages:
OS | Firewall | Available packages |
---|---|---|
openSUSE 15 | iptables | iptablesEL7-1.4.21 |
firewalld | firewalldEL7-0.6.3 | |
RHEL 8 | iptables | iptablesEL8-1.8.4 |
firewalld | firewalldEL8-0.9.3 |
Example
Assuming a setup with OS RHEL 8, CRI-O and Kubernetes 1.22.2 in the requested version, the following maintenance packages need to be installed (a pull sketch follows the list):
- kubernetes-1.22.2
- kubeDependencies-EL8-1.0.2
- crioEL8-1.22.2
- crioEL8-dependencies-1.0.1
- podmanEL8-18.09.1
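As a sketch, the corresponding pull invocation could look like this; the exact lowercase package identifiers and tags are assumptions here and should be taken from the SOFTWAREPACKAGE column of lima get maintenance:
lima pull maintenance lima/kubernetes:1.22.2 lima/crioel8:1.22.2
The remaining dependency and podman packages are pulled the same way.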
2.5 - Upgrade KubeOps Software
Upgrading KubeOps Software
1. Update essential KubeOps Packages
Update kubeops setup
Before installing the KubeOps software, create a kubeopsctl.yaml with the following parameters:
### General values for registry access ###
apiVersion: kubeops/kubeopsctl/alpha/v3 # mandatory
kubeOpsUser: "demo" # mandatory
kubeOpsUserPassword: "Password" # mandatory
kubeOpsUserMail: "demo@demo.net" # mandatory
imagePullRegistry: "registry1.kubernative.net/lima" # mandatory, the registry from which the images for the cluster are pulled
localRegistry: false # mandatory, set to true if you use a local registry
After creating the kubeopsctl.yaml, add the following setup configuration to the file to update the software:
### Values for setup configuration ###
clusterName: "example" # mandatory
clusterUser: "root" # mandatory
kubernetesVersion: "1.28.2" # mandatory, check lima documentation
masterIP: 10.2.10.12 # mandatory
containerRuntime: "containerd" # mandatory
1. Remove old KubeOps software
If you want to remove the KubeOps software, it is recommended that you use your package manager. For RHEL environments this is yum, while for openSUSE environments it is zypper. To remove the KubeOps software with yum, use the following commands:
yum autoremove kosi
yum autoremove lima
If you want to remove the KubeOps software with zypper, use the following commands:
zypper remove kosi
zypper remove lima
2. Install new KubeOps software
Now, you can install the new software with yum or zypper.
yum install <kosi-rpm>
zypper install <kosi-rpm>
3. Upgrade kubeops software
To upgrade your KubeOps software, use the following command:
kubeopsctl apply -f kubeopsctl.yaml
4. Maintain the old Deployment Information (optional)
After upgrading KOSI from 2.5 to 2.6, the deployment.yaml file has to be moved to the $KUBEOPSROOT directory if you want to keep old deployments.
Make sure the $KUBEOPSROOT variable is set.
- Set the $KUBEOPSROOT variable
echo 'export KUBEOPSROOT="$HOME/kubeops"' >> $HOME/.bashrc
source ~/.bashrc
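A minimal sketch of the move itself, assuming the old deployment.yaml still lies in your previous KOSI working directory (adjust the source path to your setup):
mkdir -p $KUBEOPSROOT
mv <old working directory>/deployment.yaml $KUBEOPSROOT/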
5. Update other software
1. Upgrade rook-ceph
In order to upgrade rook-ceph, go into your kubeopsctl.yaml file and set rook-ceph: false to rook-ceph: true.
After that, use the command below:
kubeopsctl apply -f kubeopsctl.yaml
2. Update harbor
To update Harbor, change your kubeopsctl.yaml file and set harbor: false to harbor: true.
Please set the other applications to false before applying the kubeopsctl.yaml file.
3. Update opensearch
In order to update OpenSearch, change your kubeopsctl.yaml file and set opensearch: false to opensearch: true.
Please set the other applications to false before applying the kubeopsctl.yaml file.
4. Update logstash
In order to update Logstash, change your kubeopsctl.yaml file and set logstash: false to logstash: true.
Please set the other applications to false before applying the kubeopsctl.yaml file.
5. Update filebeat
In order to update Filebeat, change your kubeopsctl.yaml file and set filebeat: false to filebeat: true.
Please set the other applications to false before applying the kubeopsctl.yaml file.
6. Update prometheus
In order to update Prometheus, change your kubeopsctl.yaml file and set prometheus: false to prometheus: true.
Please set the other applications to false before applying the kubeopsctl.yaml file.
7. Update opa
In order to update OPA, change your kubeopsctl.yaml file and set opa: false to opa: true.
Please set the other applications to false before applying the kubeopsctl.yaml file.
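For any of the steps above, the relevant toggle block in kubeopsctl.yaml is the same; here is a minimal sketch for an update pass in which only Prometheus is enabled (all other values of the file stay unchanged):
rook-ceph: false
harbor: false
opensearch: false
opensearch-dashboards: false
logstash: false
filebeat: false
prometheus: true # the one application being updated in this pass
opa: false
headlamp: false
certman: false
ingress: false
keycloak: false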
2.6 - Use Kubeopsctl
KubeOpsctl
kubeopsctl is a new KubeOps tool which can be used for managing a cluster and its state easily. You can simply describe a desired cluster state and kubeopsctl creates a cluster with that state.
Using KubeOpsCtl
Using this feature is as easy as configuring the cluster YAML file with the desired cluster state and details and using the apply command. Below are the detailed steps.
1. Configure Cluster/Nodes/Software using yaml file
You need to have a cluster definition file which describes the different aspects of your cluster. This file describes only one cluster.
Full yaml syntax
apiVersion: kubeops/kubeopsctl/alpha/v3 # mandatory
kubeOpsUser: "demo" # mandatory, change to your username
kubeOpsUserPassword: "Password" # mandatory, change to your password
kubeOpsUserMail: "demo@demo.net" # change to your email
imagePullRegistry: "registry1.kubernative.net/lima" # mandatory
localRegistry: false # mandatory
### Values for setup configuration ###
clusterName: "testkubeopsctl" # mandatory
clusterUser: "mnyuser" # mandatory
kubernetesVersion: "1.28.2" # mandatory, check lima documentation
masterIP: 10.2.10.31 # mandatory
# at least 3 masters and 3 workers are needed
zones:
- name: zone1
nodes:
master:
- name: cluster1master1
ipAdress: 10.2.10.11
user: myuser
systemCpu: 100m
systemMemory: 100Mi
status: active
kubeversion: 1.28.2
- name: cluster1master2
ipAdress: 10.2.10.12
user: myuser
systemCpu: 100m
systemMemory: 100Mi
status: active
kubeversion: 1.28.2
worker:
- name: cluster1worker1
ipAdress: 10.2.10.14
user: myuser
systemCpu: 100m
systemMemory: 100Mi
status: active
kubeversion: 1.28.2
- name: cluster1worker2
ipAdress: 10.2.10.15
systemCpu: 100m
systemMemory: 100Mi
status: active
kubeversion: 1.28.2
- name: zone2
nodes:
master:
- name: cluster1master3
ipAdress: 10.2.10.13
user: myuser
systemCpu: 100m
systemMemory: 100Mi
status: drained
kubeversion: 1.28.2
worker:
- name: cluster1worker1
ipAdress: 10.2.10.16
user: myuser
systemCpu: 100m
systemMemory: 100Mi
status: active
kubeversion: 1.28.2
# set to true if you want to install it into your cluster
rook-ceph: false # mandatory
harbor: false # mandatory
opensearch: false # mandatory
opensearch-dashboards: false # mandatory
logstash: false # mandatory
filebeat: false # mandatory
prometheus: false # mandatory
opa: false # mandatory
headlamp: false # mandatory
certman: false # mandatory
ingress: false # mandatory
keycloak: false # mandatory
###Values for Rook-Ceph###
rookValues:
namespace: kubeops
nodePort: 31931 # optional, default: 31931
cluster:
storage:
# Will only be used if useAllDevices is set to false and will be ignored if individual devices have been specified on a node.
deviceFilter: "^sd[a-b]"
# This setting can be used to store metadata on a different device. Only recommended if an additional metadata device is available.
# Optional, will be overwritten by the corresponding node-level setting.
config:
metadataDevice: "sda"
# Names of individual nodes in the cluster that should have their storage included.
# Will only be used if useAllNodes is set to false.
nodes:
- name: "<ip-adress of node_1>"
devices:
- name: "sdb"
- name: "<ip-adress of node_2>"
deviceFilter: "^sd[a-b]"
config:
metadataDevice: "sda" # optional
#-------------------------------------------------------------------------------------------------------------------------------
### Values for Postgres ###
postgrespass: "password" # mandatory, set password for harbor postgres access
postgres:
resources:
requests:
storage: 2Gi # mandatory, depending on storage capacity
#-------------------------------------------------------------------------------------------------------------------------------
### Values for Redis ###
redispass: "password" # mandatory set password for harbor redis access
redis:
resources:
requests:
storage: 2Gi # mandatory depending on storage capacity
#-------------------------------------------------------------------------------------------------------------------------------
### Values for Harbor deployment ###
## For detailed explanation for each key see: https://artifacthub.io/packages/helm/harbor/harbor/1.8.1#configuration ##
harborValues:
harborpass: "password" # mandatory: set password for harbor access
externalURL: https://10.2.10.13 # mandatory, the IP address from which Harbor is accessible outside of the cluster
nodePort: 30003
harborPersistence:
persistentVolumeClaim:
registry:
size: 5Gi # mandatory, depending on storage capacity
chartmuseum:
size: 5Gi # mandatory, depending on storage capacity
jobservice:
jobLog:
size: 1Gi # mandatory: Depending on storage capacity
scanDataExports:
size: 1Gi # mandatory: Depending on storage capacity
database:
size: 1Gi # mandatory, depending on storage capacity
redis:
size: 1Gi # mandatory, depending on storage capacity
trivy:
size: 5Gi # mandatory, depending on storage capacity
#--------------------------------------------------------------------------------------------------------------------------------------
###Values for filebeat deployment###
filebeatValues:
namespace: kubeops # optional, default is kubeops
#--------------------------------------------------------------------------------------------------------------------------------
###Values for Logstash deployment###
##For detailed explanation for each key see: https://github.com/elastic/helm-charts/releases/tag/v7.16.3###
logstashValues:
namespace: kubeops
volumeClaimTemplate:
resources:
requests:
storage: 1Gi # mandatory, depending on storage capacity
#--------------------------------------------------------------------------------------------------------------------------------------
###Values for OpenSearch-Dashboards deployment###
##For detailed explanation for each key see: https://github.com/opensearch-project/helm-charts/tree/main/charts/opensearch-dashboards###
openSearchDashboardValues:
namespace: kubeops
nodePort: 30050
#--------------------------------------------------------------------------------------------------------------------------------
###Values for OpenSearch deployment###
##For detailed explanation for each key see: https://github.com/opensearch-project/helm-charts/tree/main/charts/opensearch###
openSearchValues:
namespace: kubeops
resources:
persistence:
size: 4Gi # mandatory
#--------------------------------------------------------------------------------------------------------------------------------
###Values for Prometheus deployment###
prometheusValues:
prometheusResources:
nodePort: 32090
#--------------------------------------------------------------------------------------------------------------------------------
###Values for OPA deployment###
opaValues:
namespace: kubeops
#--------------------------------------------------------------------------------------------------------------------------------
###Values for Headlamp deployment###
headlampValues:
service:
nodePort: 30007
#--------------------------------------------------------------------------------------------------------------------------------
###Values for cert-manager deployment###
certmanValues:
namespace: kubeops
replicaCount: 3
logLevel: 2
#--------------------------------------------------------------------------------------------------------------------------------
###Values for ingress-nginx deployment###
ingressValues:
namespace: kubeops
apiVersion: kubeops/kubeopsctl/alpha/v3 # mandatory
kubeOpsUser: "demo" # mandatory, change to your username
kubeOpsUserPassword: "Password" # mandatory, change to your password
kubeOpsUserMail: "demo@demo.net" # change to your email
imagePullRegistry: "registry1.kubernative.net/lima" # mandatory
localRegistry: false # mandatory
### Values for setup configuration ###
clusterName: "testkubeopsctl" # mandatory
clusterUser: "mnyuser" # mandatory
kubernetesVersion: "1.28.2" # mandatory, check lima documentation
#masterHost: optional if you have a hostname, default value in "masterIP"
masterIP: 10.2.10.31 # mandatory
firewall: "nftables" # mandatory, default "nftables"
pluginNetwork: "calico" # mandatory, default "nftables"
containerRuntime: "containerd" # mandatory, default "containerd"
These are parameters for the cluster creation and the software used for it, e.g. the container runtime for running the containers of the cluster. There are also parameters for the LIMA software (see the LIMA documentation for further explanation).
### Additional values for cluster configuration
useInsecureRegistry: false # optional, default is false
ignoreFirewallError: false # optional, default is false
serviceSubnet: 192.168.128.0/17 # optional, default "192.168.128.0/17"
podSubnet: 192.168.0.0/17 # optional, default "192.168.0.0/17"
debug: true # optional, default is true
logLevel: vvvvv # optional, default "vvvvv"
systemCpu: "1" # optional, default "1"
systemMemory: "2G" # optional, default "2G"
sudo: true # optional, default is true
Also important are the networking parameters, such as the subnets for the pods and services inside the Kubernetes cluster.
# at least 3 masters and 3 workers are needed
zones:
- name: zone1
nodes:
master:
- name: cluster1master1
ipAdress: 10.2.10.11
user: myuser
systemCpu: 100m
systemMemory: 100Mi
status: active
kubeversion: 1.28.2
- name: cluster1master2
ipAdress: 10.2.10.12
user: myuser
systemCpu: 100m
systemMemory: 100Mi
status: active
kubeversion: 1.28.2
worker:
- name: cluster1worker1
ipAdress: 10.2.10.14
user: myuser
systemCpu: 100m
systemMemory: 100Mi
status: active
kubeversion: 1.28.2
- name: cluster1worker2
ipAdress: 10.2.10.15
systemCpu: 100m
systemMemory: 100Mi
status: active
kubeversion: 1.28.2
- name: zone2
nodes:
master:
- name: cluster1master3
ipAdress: 10.2.10.13
user: myuser
systemCpu: 100m
systemMemory: 100Mi
status: active
kubeversion: 1.28.2
worker:
- name: cluster1worker1
ipAdress: 10.2.10.16
user: myuser
systemCpu: 100m
systemMemory: 100Mi
status: drained
kubeversion: 1.28.2
Here are the two zones, which contain master and worker nodes.
There are two different node states: active and drained.
There can also be two different Kubernetes versions, so if you want to do updates in tranches, this is possible with kubeopsctl (see the sketch below). You can also set the system memory and system CPU that the nodes reserve for Kubernetes itself. It is not possible to delete nodes with kubeopsctl; for deleting nodes you have to use LIMA. If you want to make an update in tranches, you need at least one master with the greater version.
All other parameters are explained here
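A minimal sketch of such an update in tranches, assuming zone1's first master is moved to the newer version first while the remaining nodes stay on the older one (versions are illustrative):
zones:
  - name: zone1
    nodes:
      master:
        - name: cluster1master1
          ipAdress: 10.2.10.11
          user: myuser
          status: active
          kubeversion: 1.28.2 # first tranche, already upgraded
        - name: cluster1master2
          ipAdress: 10.2.10.12
          user: myuser
          status: active
          kubeversion: 1.27.2 # later tranche, still on the old version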
2. Apply changes to cluster
Once you have configured the cluster changes in the YAML file, use the following command to apply them.
kubeopsctl apply -f kubeopsctl.yaml
2.7 - Backup and restore
Backup and restoring artifacts
What is Velero?
Velero uses object storage to store backups and associated artifacts. It also optionally integrates supported block storage systems to snapshot your persistent volumes. Before beginning the installation process, you should identify the object storage provider and optional block storage provider(s) you’ll be using from the list of compatible providers.
Velero supports storage providers for both cloud-provider environments and on-premises environments.
Velero prerequisites:
- Access to a Kubernetes cluster, v1.16 or later, with DNS and container networking enabled.
- kubectl installed locally
- Object Storage (S3, Cloud Provider Environment, On-Premises Environment)
Documentation on compatible providers and on-premises setups can be found at https://velero.io/docs
Install Velero
This command is an example of how you can install Velero into your cluster:
velero install --provider aws --plugins velero/velero-plugin-for-aws:v1.2.1 --bucket velero --secret-file ./credentials-velero --use-volume-snapshots=false --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://minio.velero.svc:9000
NOTE:
- s3Url has to be the URL of your S3 storage login.
- Example of a credentials-velero file:
[default]
aws_access_key_id = your_s3_storage_username
aws_secret_access_key = your_s3_storage_password
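After the installation, you can optionally verify that Velero reaches the configured backup location (its phase should report Available):
velero backup-location get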
Backup the cluster
Scheduled Backups
This command creates a backup for the cluster every 6 hours:
velero schedule create cluster --schedule "0 */6 * * *"
Get Schedules
This command lists all schedules for backups:
velero schedule get
Delete Schedules
This command deletes the specified schedule:
velero schedule delete cluster
Restore Scheduled Backup
This command restores the backup according to a schedule:
velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>
Backup
This command creates a backup for the cluster
velero backup create cluster
Get Backups
This command lists all created backups:
velero backup get
Delete Backups
This command deletes the specified backup:
velero backup delete <BACKUP NAME>
Restore Backup
This command restores the specified backup:
velero restore create <RESOURCE NAME> --from-backup <BACKUP NAME>
Backup a specific deployment
Scheduled Backups
This command creates a backup for the namespace “logging” every 6 hours:
velero schedule create filebeat --schedule "0 */6 * * *" --include-namespaces logging --include-cluster-resources=true
This command creates a backup for the deployment “filebeat” every 6 hours:
velero schedule create filebeat --schedule "0 */6 * * *" --include-namespaces logging --selector app=filebeat-filebeat,release=filebeat --include-resources serviceaccount,deployment,daemonset,configmap,clusterrolebinding,clusterrole --include-cluster-resources=true
Get Schedules
This command lists all schedules for backups:
velero schedule get
Delete Schedules
This command deletes the specified schedule:
velero schedule delete filebeat
Restore Scheduled Backup
This command restores the backup from a schedule:
velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>
Backup
This command creates a backup for the namespace “logging”:
velero backup create filebeat --include-namespaces logging --include-cluster-resources=true
This command creates a backup for the deployment “filebeat”:
velero backup create filebeat --include-namespaces logging --selector app=filebeat-filebeat,release=filebeat --include-resources serviceaccount,deployment,daemonset,configmap,clusterrolebinding,clusterrole --include-cluster-resources=true
Get Backups
This command lists all created backups:
velero backup get
Delete Backups
This command deletes the specified backup:
velero backup delete <BACKUP NAME>
Restore Backup
This command restores the specified backup:
velero restore create <RESOURCE NAME> --from-backup <BACKUP NAME>
Scheduled Backups
This command creates a backup for the namespace “logging” every 6 hours:
velero schedule create logstash --schedule "0 */6 * * *" --include-namespaces logging --include-cluster-resources=true
This command creates a backup for the deployment “logstash” every 6 hours:
velero schedule create logstash --schedule "0 */6 * * *" --include-namespaces logging --selector app=logstash-logstash,chart=logstash,release=logstash --include-resources StatefulSet,ServiceAccount,Service,Secret,RoleBinding,Role,PodSecurityPolicy,PodDisruptionBudget,Ingress,ConfigMap --include-cluster-resources=true
Get Schedules
This command lists all schedules for backups:
velero schedule get
Delete Schedules
This command deletes the specified schedule:
velero schedule delete logstash
Restore Scheduled Backup
This command restores the backup from a schedule:
velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>
Backup
This command creates a backup for the namespace “logging”:
velero backup create logstash --include-namespaces logging --include-cluster-resources=true
This command creates a backup for the deployment “logstash”:
velero backup create logstash --include-namespaces logging --selector app=logstash-logstash,chart=logstash,release=logstash --include-resources StatefulSet,ServiceAccount,Service,Secret,RoleBinding,Role,PodSecurityPolicy,PodDisruptionBudget,Ingress,ConfigMap --include-cluster-resources=true
Get Backups
This command lists all created backups:
velero backup get
Delete Backups
This command deletes the specified backup:
velero backup delete <BACKUP NAME>
Restore Backup
This command restores the specified backup:
velero restore create <RESOURCE NAME> --from-backup <BACKUP NAME>
Scheduled Backups
This command creates a backup for the namespace “logging” every 6 hours:
velero schedule create opensearch --schedule "0 */6 * * *" --include-namespaces logging --include-cluster-resources=true
This command creates a backup for the deployment “opensearch” every 6 hours:
velero schedule create opensearch --schedule "0 */6 * * *" --include-namespaces logging --selector app.kubernetes.io/instance=opensearch,app.kubernetes.io/name=opensearch --include-resources ConfigMap,Ingress,NetworkPolicy,PodDisruptionBudget,PodSecurityPolicy,Role,RoleBinding,Secret,Service,ServiceAccount,StatefulSet --include-cluster-resources=true
Get Schedules
This command lists all schedules for backups:
velero schedule get
Delete Schedules
This command deletes the specified schedule:
velero schedule delete opensearch
Restore Scheduled Backup
This command restores the backup from a schedule:
velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>
Backup
This command creates a backup for the namespace “logging”:
velero backup create opensearch --include-namespaces logging --include-cluster-resources=true
This command creates a backup for the deployment “opensearch”:
velero backup create opensearch --include-namespaces logging --selector app.kubernetes.io/instance=opensearch,app.kubernetes.io/name=opensearch --include-resources ConfigMap,Ingress,NetworkPolicy,PodDisruptionBudget,PodSecurityPolicy,Role,RoleBinding,Secret,Service,ServiceAccount,StatefulSet --include-cluster-resources=true
Get Backups
This command lists all created backups:
velero backup get
Delete Backups
This command deletes the specified backup:
velero backup delete <BACKUP NAME>
Restore Backup
This command restores the specified backup:
velero restore create <RESOURCE NAME> --from-backup <BACKUP NAME>
Scheduled Backups
This command creates a backup for the namespace “monitoring” every 6 hours:
velero schedule create prometheus --schedule "0 */6 * * *" --include-namespaces monitoring --include-cluster-resources=true
This command creates a backup for the deployment “prometheus” every 6 hours:
velero schedule create prometheus --schedule "0 */6 * * *" --include-namespaces monitoring --include-resources Alertmanager,Secret,Ingress,List,PodDisruptionBudget,Role,RoleBinding,PodSecurityPolicy,Service,ServiceAccount,ServiceMonitor,Endpoints,ConfigMap,ConfigMapList,ClusterRole,ClusterRoleBinding,SecretProviderClass,PodMonitor,Prometheus,Job,NetworkPolicy,MutatingWebhookConfiguration,ValidatingWebhookConfiguration,Issuer,Deployment,VerticalPodAutoscaler,ThanosRuler --include-cluster-resources=true
Get Schedules
This command lists all schedules for backups:
velero schedule get
Delete Schedules
This command deletes the specified schedule:
velero schedule delete prometheus
Restore Scheduled Backup
This command restores the backup from a schedule:
velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>
Backup
This command creates a backup for the namespace “monitoring”:
velero backup create prometheus --include-namespaces monitoring --include-cluster-resources=true
This command creates a backup for the deployment “prometheus”:
velero backup create prometheus --include-namespaces monitoring --include-resources Alertmanager,Secret,Ingress,List,PodDisruptionBudget,Role,RoleBinding,PodSecurityPolicy,Service,ServiceAccount,ServiceMonitor,Endpoints,ConfigMap,ConfigMapList,ClusterRole,ClusterRoleBinding,SecretProviderClass,PodMonitor,Prometheus,Job,NetworkPolicy,MutatingWebhookConfiguration,ValidatingWebhookConfiguration,Issuer,Deployment,VerticalPodAutoscaler,ThanosRuler --include-cluster-resources=true
Get Backups
This command lists all created backups:
velero backup get
Delete Backups
This command deletes the specified backup:
velero backup delete <BACKUP NAME>
Restore Backup
This command restores the specified backup:
velero restore create <RESOURCE NAME> --from-backup <BACKUP NAME>
Scheduled Backups
This command creates a backup for the namespace “harbor” every 6 hours:
velero schedule create harbor --schedule "0 */6 * * *" --include-namespaces harbor --include-cluster-resources=true
This command creates a backup for the deployment “harbor” every 6 hours:
velero schedule create harbor --schedule "0 */6 * * *" --include-namespaces harbor --include-resources ConfigMap,Deployment,PersistentVolumeClaim,Secret,Service,StatefulSet,Ingress,ServiceMonitor --include-cluster-resources=true
Get Schedules
This command lists all schedules for backups:
velero schedule get
Delete Schedules
This command deletes the specified schedule:
velero schedule delete harbor
Restore Scheduled Backup
This command restores the backup from a schedule:
velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>
Backup
This command creates a backup for the namespace “harbor”:
velero backup create harbor --include-namespaces harbor --include-cluster-resources=true
This command creates a backup for the deployment “harbor”:
velero backup create harbor --include-namespaces harbor --include-resources ConfigMap,Deployment,PersistentVolumeClaim,Secret,Service,StatefulSet,Ingress,ServiceMonitor --include-cluster-resources=true
Get Backups
This command lists all created backups:
velero backup get
Delete Backups
This command deletes the specified backup:
velero backup delete <BACKUP NAME>
Restore Backup
This command restores the specified backup:
velero restore create <RESOURCE NAME> --from-backup <BACKUP NAME>
Scheduled Backups
This command creates a backup for the namespace “gatekeeper-system” every 6 hours:
velero schedule create gatekeeper --schedule "0 */6 * * *" --include-namespaces gatekeeper-system --include-cluster-resources=true
This command creates a backup for the deployment “gatekeeper” every 6 hours:
velero schedule create gatekeeper --schedule "0 */6 * * *" --include-namespaces gatekeeper-system --include-resources PodSecurityPolicy,ServiceAccount,Deployment,PodDisruptionBudget,ResourceQuota,ClusterRole,Role,ClusterRoleBinding,RoleBinding,MutatingWebhookConfiguration,ValidatingWebhookConfiguration,Secret,Service,Job --include-cluster-resources=true
Get Schedules
This command lists all schedules for backups:
velero schedule get
Delete Schedules
This command deletes the specified schedule:
velero schedule delete gatekeeper
Restore Scheduled Backup
This command restores the backup from a schedule:
velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>
Backup
This command creates a backup for the namespace “gatekeeper-system”:
velero backup create gatekeeper --include-namespaces gatekeeper-system --include-cluster-resources=true
This command creates a backup for the deployment “gatekeeper-system”:
velero backup create gatekeeper --include-namespaces gatekeeper-system --include-resources PodSecurityPolicy,ServiceAccount,Deployment,PodDisruptionBudget,ResourceQuota,ClusterRole,Role,ClusterRoleBinding,RoleBinding,MutatingWebhookConfiguration,ValidatingWebhookConfiguration,Secret,Service,Job --include-cluster-resources=true
Get Backups
This command lists all created backups:
velero backup get
Delete Backups
This command deletes the specified backup:
velero backup delete <BACKUP NAME>
Restore Backup
This command restores the specified backup:
velero restore create <RESOURCE NAME> --from-backup <BACKUP NAME>
Scheduled Backups
This command creates a backup for the namespace “rook-ceph” every 6 hours:
velero schedule create rook-ceph --schedule "0 */6 * * *" --include-namespaces rook-ceph --include-cluster-resources=true
Get Schedules
This command lists all schedules for backups:
velero schedule get
Delete Schedules
This command deletes the specified schedule:
velero schedule delete rook-ceph
Restore Scheduled Backup
This command restores the backup from a schedule:
velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>
Backup
This command creates a backup for the namespace “rook-ceph”:
velero backup create rook-ceph --include-namespaces rook-ceph --include-cluster-resources=true
Get Backups
This command lists all created backups:
velero backup get
Delete Backups
This command deletes the specified backup:
velero backup delete <BACKUP NAME>
Restore Backup
This command restores the specified backup:
velero restore create <RESOURCE NAME> --from-backup <BACKUP NAME>
2.8 - Renew Certificates
Renewing all certificates at once
LIMA enables you to renew all certificates for a specific cluster on all control-plane nodes with one command.
lima renew cert <clusterName>
Note: Renewing certificates can take several minutes because all certificate services are restarted.
Here is an example to renew certificates on cluster with name “Democluster”:
lima renew cert Democluster
Note: This command renews all certificates on the existing control plane; there is no option to renew individual certificates.
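To verify the renewal, you can inspect the new expiry dates on a control-plane node, for example with kubeadm (assuming the cluster certificates are kubeadm-managed):
kubeadm certs check-expiration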
2.9 - Deploy Package On Cluster
Deploying package on Cluster
You can install artifacts in your cluster in several ways. For this purpose, you can use these four plugins when creating a package:
- helm
- kubectl
- cmd
- Kosi
As an example, this guide installs the nginx-ingress Ingress Controller.
Using the Helm-Plugin
Prerequisite
In order to install an artifact with the Helm plugin, the Helm chart must first be downloaded. This step is not covered in this guide.
Create KOSI package
First you need to create a KOSI package. The following command creates the necessary files in the current directory:
kosi create
The downloaded Helm chart must also be located in the current directory. To customize the deployment of the Helm chart, the values.yaml file must be edited. This file can be downloaded from ArtifactHub and must be placed in the same directory as the Helm chart.
All files required by a task in the package must be named in the package.yaml file under includes.files. The container images required by the Helm chart must also be listed in the package.yaml under includes.containers. For installation, the required files and images must be listed under the installation.includes key.
In the example below, only two files are required for the installation: the Helm chart for the nginx-ingress and the values.yaml to configure the deployment. To install nginx-ingress you will also need the nginx/nginx-ingress image with the tag 3.0.1.
To install nginx-ingress with the Helm plugin, call the plugin as shown in the example under installation.tasks. The deployment configuration file is listed under values and the packed Helm chart is specified with the key tgz. Furthermore, it is also possible to specify the namespace in which the artifact should be deployed and the name of the deployment. The full documentation for the Helm plugin can be found here.
apiversion: kubernative/kubeops/sina/user/v3
name: deployExample
description: "This Package is an example.
It shows how to deploy an artifact to your cluster using the helm plugin."
version: 0.1.0
includes:
files:
config: "values.yaml"
nginx: "nginx-ingress-0.16.1.tgz"
containers:
nginx-ingress:
registry: docker.io
image: nginx/nginx-ingress
tag: 3.0.1
docs: docs.tgz
logo: logo.png
installation:
includes:
files:
- config
- nginx
containers:
- nginx-ingress
tasks:
- helm:
command: "install"
values:
- values.yaml
tgz: "nginx-ingress-0.16.1.tgz"
namespace: dev
deploymentName: nginx-ingress
...
update:
tasks:
delete:
tasks:
Once the package.yaml file has been fully configured, all files must be combined into a KOSI package. To do this, execute the following command in the directory where the package.yaml file is located.
kosi build
To make the generated kosi package available on other machines, it is pushed to the user’s private KubeOps Hub. To do this, the user must first log in to the hub.
$ kosi login -u <username>
2023-02-04 11:19:43 Info: KOSI version: 2.6.0_Beta0
2023-02-04 11:19:43 Info: Please enter password
****************
2023-02-04 11:19:26 Info: Login Succeeded to Hub.
$ kosi push --hub kosi
2023-02-04 11:23:18 Info: KOSI version: 2.6.0_Beta0
2023-02-04 11:23:19 Info: Push to Private Registry registry.preprod.kubernative.net/<username>/
Deployment
Once the KOSI package has been created and published, it needs to be installed on the admin node. The following command will download and execute the package. The package name and version refer to the values defined in package.yaml with the keys name and version.
kosi install --hub <username> <username>/<packagename>:<version>
For the example package, the command would be: kosi install --hub <username> <username>/deployExample:0.1.0.
Using the Kubectl-Plugin
Prerequisite
In order to install an artifact with the Kubectl plugin, the kubeops-kubernetes-plugins package must be installed on the admin node. This step is not covered in this guide.
Create KOSI package
First you need to create a KOSI package. The following command creates the necessary files in the current directory:
kosi create
The NGINX ingress controller YAML manifest can either be automatically downloaded and applied directly with kubectl apply, or it can be downloaded manually if you want to customize the deployment. The YAML manifest can be downloaded from the NGINX GitHub repo and must be placed in the same directory as the files for the KOSI package.
All files required by a task in the package must be named in the package.yaml file under includes.files. The container images required by the YAML manifest must also be listed in the package.yaml under includes.containers. For installation, the required files and images must be listed under the installation.includes key.
In the example below, only one file is required for the installation: the YAML manifest for the nginx-ingress controller. To install nginx-ingress you will also need the registry.k8s.io/ingress-nginx/controller image with the tag v1.5.1 and the image registry.k8s.io/ingress-nginx/kube-webhook-certgen with the tag v20220916-gd32f8c343.
To install nginx-ingress with the Kubectl plugin, call the plugin as shown in the example under installation.tasks. The full documentation for the Kubectl plugin can be found here.
apiversion: kubernative/kubeops/sina/user/v3
name: deployExample
description: "This Package is an example.
It shows how to deploy an artifact to your cluster using the helm plugin."
version: 0.1.0
includes:
files:
manifest: "deploy.yaml"
containers:
nginx-ingress:
registry: registry.k8s.io
image: ingress-nginx/controller
tag: v1.5.1
webhook-certgen:
registry: registry.k8s.io
image: ingress-nginx/kube-webhook-certgen
tag: v20220916-gd32f8c343
docs: docs.tgz
logo: logo.png
installation:
includes:
files:
- manifest
containers:
- nginx-ingress
- webhook-certgen
tasks:
- kubectl:
operation: "apply"
flags: " -f <absolute path>/deploy.yaml"
sudo: true
sudoPassword: "toor"
...
update:
tasks:
delete:
tasks:
Once the package.yaml file has been fully configured, all files must be combined into a KOSI package. To do this, execute the following command in the directory where the package.yaml file is located.
kosi build
To make the generated KOSI package available on other machines, it is pushed to the user’s private KubeOps Hub. To do this, the user must first log in to the hub.
$ kosi login -u <username>
2023-02-04 11:19:43 Info: kosi version: 2.6.0_Beta0
2023-02-04 11:19:43 Info: Please enter password
****************
2023-02-04 11:19:26 Info: Login Succeeded to Hub.
$ kosi push --hub kosi
2023-02-04 11:23:18 Info: kosi version: 2.6.0_Beta0
2023-02-04 11:23:19 Info: Push to Private Registry registry.preprod.kubernative.net/<username>/
Deployment
Once the KOSI package has been created and published, it needs to be installed on the admin node. The following command will download and execute the package. The package name and version refer to the values defined in package.yaml with the keys name and version.
kosi install --hub <username> <username>/<packagename>:<version>
For the example package, the command would be: kosi install --hub <username> <username>/deployExample:0.1.0.
2.10 - Replace Cluster Nodes
Replace cluster nodes
This section describes how to replace cluster nodes in your cluster.
Direct replacement of nodes is not possible in KubeOps; however, you can delete the node and add a new node to the cluster as shown in the following example.
Steps to replace a Kubernetes Node
- Use the command delete on the admin node to delete the unwanted node from the cluster. The command is:
lima delete -n <IP of your node> <name of your Cluster>
If you are deleting a node, then its data becomes inaccessible or erased.
- Now create a new .yaml file with a configuration for the node as shown below.
Example:
apiVersion: lima/nodeconfig/v1alpha1
clusterName: roottest
spec:
  masters: []
  workers:
    - host: 10.2.10.17 ## ip of the new node to be joined
      user: root
      password: toor
- Lastly, use the command create nodes to create and join the new node. The command is:
lima create nodes -f <node yaml file name>
Example 1
In the following example, we will replace the node with IP 10.2.10.15 in demoCluster with a new worker node with IP 10.2.10.17:
- Delete the node.
lima delete -n 10.2.10.15 demoCluster
- Create addNode.yaml for the new worker node.
apiVersion: lima/nodeconfig/v1alpha1
clusterName: roottest
spec:
  masters: []
  workers:
    - host: 10.2.10.17
      user: root
      password: toor
- Join the new node.
lima create nodes -f addNode.yaml
Example 2
If you are rejoining a master node, all other steps are the same, except that you need to add the node configuration in the YAML file as shown in the example below:
apiVersion: lima/nodeconfig/v1alpha1
clusterName: roottest
spec:
masters:
- host: 10.2.10.17
user: root
password: toor
workers: []
2.11 - Update Kubernetes Version
Upgrading Kubernetes version
You can use the following steps to upgrade the Kubernetes version of a cluster.
In the following example, we will upgrade the Kubernetes version of a cluster named Democluster from Kubernetes version 1.27.2 to Kubernetes version 1.28.2.
- You have to create a kubeopsctl.yaml with the following YAML syntax.
apiVersion: kubeops/kubeopsctl/alpha/v3 # mandatory
kubeOpsUser: "demo" # mandatory, change to your username
kubeOpsUserPassword: "Password" # mandatory, change to your password
kubeOpsUserMail: "demo@demo.net" # change to your email
imagePullRegistry: "registry1.kubernative.net/lima" # mandatory
localRegistry: false # mandatory
### Values for setup configuration ###
clusterName: "Democluster" # mandatory
clusterUser: "mnyuser" # mandatory
kubernetesVersion: "1.28.2" # mandatory, check lima documentation
masterIP: 10.2.10.11 # mandatory
### Additional values for cluster configuration
# at least 3 masters and 3 workers are needed
zones:
- name: zone1
nodes:
master:
- name: cluster1master1
ipAdress: 10.2.10.11
user: myuser
systemCpu: 100m
systemMemory: 100Mi
status: active
kubeversion: 1.28.2
- name: cluster1master2
ipAdress: 10.2.10.12
user: myuser
systemCpu: 100m
systemMemory: 100Mi
status: active
kubeversion: 1.28.2
worker:
- name: cluster1worker1
ipAdress: 10.2.10.14
user: myuser
systemCpu: 100m
systemMemory: 100Mi
status: active
kubeversion: 1.28.2
- name: cluster1worker2
ipAdress: 10.2.10.15
systemCpu: 100m
systemMemory: 100Mi
status: active
kubeversion: 1.28.2
- name: zone2
nodes:
master:
- name: cluster1master3
ipAdress: 10.2.10.13
user: myuser
systemCpu: 100m
systemMemory: 100Mi
status: drained
kubeversion: 1.28.2
worker:
- name: cluster1worker1
ipAdress: 10.2.10.16
user: myuser
systemCpu: 100m
systemMemory: 100Mi
status: active
kubeversion: 1.28.2
# set to true if you want to install it into your cluster
rook-ceph: false # mandatory
harbor: false # mandatory
opensearch: false # mandatory
opensearch-dashboards: false # mandatory
logstash: false # mandatory
filebeat: false # mandatory
prometheus: false # mandatory
opa: false # mandatory
headlamp: false # mandatory
certman: false # mandatory
ingress: false # mandatory
keycloak: false # mandatory
- Upgrade the version
Once the kubeopsctl.yaml file is created, use the following command to change the version of your cluster:
kubeopsctl apply -f kubeopsctl.yaml
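After the apply run has finished, you can check with standard kubectl (assuming a working kubeconfig on the admin node) that every node reports the new kubelet version in the VERSION column:
kubectl get nodes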
2.12 - Change CRI
Changing Container Runtime Interface
KubeOps enables you to change the Container Runtime Interface (CRI) of the clusters to any of the following supported CRIs
- containerd
- crio
You can use the following steps to change the CRI.
In the example below, we will change the CRI of the cluster named Democluster to containerd on a machine running openSUSE.
- Download the desired CRI maintenance package from the hub.
In this case you will need the package `lima/containerdlp151:1.6.6`.
To download the package use the command:
lima pull maintenance lima/containerdlp151:1.6.6
Note: Packages may vary based on the OS and Kubernetes version on your machine.
To select the correct maintenance package based on your machine configuration, refer to Installing maintenance packages.
- Change the CRI of your cluster.
Once the desired CRI maintenance package is downloaded, change the CRI of your cluster with the command:
lima change runtime -r containerd Democluster
In this case you want to change your runtime to containerd. The desired container runtime is specified after the -r parameter, which is mandatory. The cluster name, here Democluster, is also required.
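As a quick sanity check with standard kubectl (assuming a working kubeconfig), the wide node listing includes a CONTAINER-RUNTIME column that should now show containerd on every node:
kubectl get nodes -o wide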
2.13 - How to delete nodes from the cluster with lima
Note: If you want to delete a node from your Kubernetes cluster, you have to use lima.
If you are using our platform, lima is already installed by it. If this is not the case, please install lima manually.
The following prerequisites have to be fulfilled before we can delete a node from our cluster.
- lima has to be installed
- a functioning cluster must exist
If you want to remove a node from your cluster, you can run the delete command on the admin node.
lima delete -n <node which should be deleted> <name of your cluster>
Note: The cluster name has to be the same as the one set under clusterName: in your configuration yaml file.
For example, we want to delete worker node 2 with the IP address 10.2.1.9 from our existing Kubernetes cluster named example, using the following command:
lima delete -n 10.2.1.9 example
2.14 - Accessing Dashboards
Accessing Dashboards installed with KubeOps
To access an application dashboard, an SSH tunnel to one of the control planes is needed. The following dashboards are available and configured with the following NodePorts by default:
Grafana
NodePort
30211
Initial login credentials
- username: the username set in the kubeopsvalues.yaml for the cluster creation
- password: the password set in the kubeopsvalues.yaml for the cluster creation
OpenSearch Dashboards
NodePort
30050
Initial login credentials
- username: admin
- password: admin
Harbor
NodePort
- https: 30003
Initial login credentials
- username: admin
- password: the password set in the kubeopsvalues.yaml for the cluster creation
Rook/Ceph
NodePort
The Rook/Ceph Dashboard has no fixed NodePort yet. To find out the NodePort used by Rook/Ceph follow these steps:
- List the Services in the KubeOps namespace
kubectl get svc -n kubeops
- Find the line with the service rook-ceph-mgr-dashboard-external-http
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
rook-ceph-mgr-dashboard-external-http NodePort 192.168.197.13 <none> 7000:31268/TCP 21h
In the example above the NodePort to connect to Rook/Ceph would be 31268.
Initial login credentials
- username: admin
- password:
kubectl get secret rook-ceph-dashboard-password -n kubeops --template={{.data.password}} | base64 -d
The dashboard can be accessed at localhost:<Port>/ceph-dashboard/
Headlamp
NodePort
30007
Initial login credentials
An access token is required to log in to the Headlamp dashboard.
The access token is linked to the service account headlamp-admin and stored in the secret headlamp-admin.
The access token can be read from the secret:
echo $(kubectl get secret headlamp-admin --namespace headlamp --template=\{\{.data.token\}\} | base64 --decode)
Connecting to the Dashboard
In order to connect to one of the dashboards, an SSH tunnel has to be established. There are various tools for doing this, like the command line, PuTTY or MobaXterm.
To establish a tunnel, the NodePort of the dashboard has to be forwarded from one of the control planes to the local machine. After that the dashboard can be accessed at localhost:<Port>.
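As a minimal sketch with plain OpenSSH, assuming a control plane reachable at 10.2.10.11 and the Grafana NodePort from above (-N opens the tunnel without a remote shell, -L forwards the local port):
ssh -N -L 30211:localhost:30211 <user>@10.2.10.11
Grafana is then reachable at localhost:30211 in a local browser; the same pattern applies to the other NodePorts listed above.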
2.15 - Create a new Repository
Kubeops RPM Repository Setup Guide
Setting up a new RPM repository allows for centralized, secure, and efficient distribution of software packages, simplifying installation, updates, and dependency management.
Prerequisites
To set up a new repository on your KubeOps platform, the following prerequisites must be fulfilled.
- httpd (apache) server to access the repository over HTTP.
- Root or administrative access to the server.
- Software packages (RPM files) to include in the repository.
- createrepo (an RPM package management tool) to create a new repository.
Repository Setup Steps
1. Install Required Tools
sudo yum install -y httpd createrepo
2. Create the Repository Directory
When Apache is installed, the default Apache VirtualHost DocumentRoot is created at /var/www/html. Create a new repository KubeOpsRepo under the DocumentRoot.
sudo mkdir -p /var/www/html/KubeOpsRepo
3. Copy RPM Packages
Copy RPM packages into the KubeOpsRepo repository.
Use the command below to copy packages that are already present on the host machine; otherwise, place the packages directly into KubeOpsRepo.
sudo cp -r <sourcePathForRPMs> /var/www/html/KubeOpsRepo/
4. Generate the GPG Signature (optional)
If you want to use your packages in a secure way, we recommend using a GPG signature.
How does the GPG tool work?
The GNU Privacy Guard (GPG) is used for secure communication and data integrity verification.
When gpgcheck is set to 1 (enabled), the package manager verifies the GPG signature of each package against the corresponding key in the keyring. If a package's signature matches the expected signature, the package is considered valid and can be installed. If the signature does not match or the package is not signed, the package manager will refuse to install the package or display a warning.
GPG Signature for new registry
- Create a GPG key and add it to /var/www/html/KubeOpsRepo/. Check here to know how to create GPG keypairs.
- Save the GPG key as RPM-GPG-KEY-KubeOpsRepo using the following commands.
cd /var/www/html/KubeOpsRepo/
gpg --armor --export > RPM-GPG-KEY-KubeOpsRepo
You can use the following command to verify the GPG key.
curl -s http://<ip-address-of-server>/KubeOpsRepo/RPM-GPG-KEY-KubeOpsRepo
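Clients can optionally import the key into their local RPM keyring up front; otherwise yum fetches it via the gpgkey URL configured in step 7:
sudo rpm --import http://<ip-address-of-server>/KubeOpsRepo/RPM-GPG-KEY-KubeOpsRepo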
5. Initialize the KubeOpsRepo
By running the createrepo command, KubeOpsRepo will be initialized.
cd /var/www/html/KubeOpsRepo/
sudo createrepo .
The newly created directory repodata contains metadata files that describe the RPM packages in the repository, including package information, dependencies, and checksums, enabling efficient package management and dependency resolution.
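Whenever RPMs are later added to or removed from the directory, the repository metadata must be regenerated; createrepo can do this incrementally with its --update flag:
sudo createrepo --update /var/www/html/KubeOpsRepo/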
6. Start and Enable Apache Service
sudo systemctl start httpd
sudo systemctl enable httpd
Configure Firewall (Optional)
If the firewall is enabled, we need to allow incoming HTTP traffic.
sudo firewall-cmd --add-service=http --permanent
sudo firewall-cmd --reload
7. Configure the local repository
To install packages from KubeOpsRepo without specifying the URL every time, we can configure a local repository. If you are using a GPG signature, gpgcheck also needs to be enabled.
- Create a Repository Configuration File
Create a new .repo configuration file (e.g. KubeOpsRepo.repo) in the /etc/yum.repos.d/ directory with the following command.
sudo vi /etc/yum.repos.d/KubeOpsRepo.repo
- Add the following configuration content to the file
[KubeOpsRepo]
name=KubeOps Repository
baseurl=http://<ip-address-of-server>/KubeOpsRepo/
enabled=1
gpgcheck=1
gpgkey=http://<ip-address-of-server>/KubeOpsRepo/RPM-GPG-KEY-KubeOpsRepo
Below are the configuration details:
- KubeOpsRepo: the repository ID.
- name: a descriptive name for the repository.
- baseurl: the base URL of the new repository. Add your repository URL here.
- enabled=1: enables the repository.
- gpgcheck=1: enables GPG signature verification for the repository.
- gpgkey: the address where your GPG key is located.
In case you are not using GPG signature verification:
1. you can skip step 4, and
2. set gpgcheck=0 in the above configuration file.
8. Test the Local Repository
To ensure that the latest metadata for the repositories is available, you can run the command below (optional):
sudo yum makecache
To verify the repository in the repolist
You can check the repository in the repolist with the following command:
sudo yum repolist
This will list all the repositories along with information about them.
[root@cluster3admin1 ~]# yum repolist
Updating Subscription Management repositories.
repo id                              repo name
KubeOpsRepo                          KubeOps Repository
rhel-8-for-x86_64-appstream-rpms     Red Hat Enterprise Linux 8 for x86_64 - AppStream (RPMs)
rhel-8-for-x86_64-baseos-rpms        Red Hat Enterprise Linux 8 for x86_64 - BaseOS (RPMs)
To list all the packages in the repository
You can list all the packages available in KubeOpsRepo with the following command:
# To check all the packages including duplicate installed packages
sudo yum list available --disablerepo="*" --enablerepo="KubeOpsRepo" --showduplicates
# sudo yum list --showduplicates | grep KubeOpsRepo
To install packages from the repository directly
Now you can directly install packages from the KubeOpsRepo repository with the following command:
sudo yum install package_name
For example:
sudo yum install lima
2.16 - Change registry
Changing Registry from A to B
KubeOps enables you to change the registry from A to B with the following commands.
kosi 2.6.0 - kosi 2.7.0
Kubeops 1.0.6
fileBeat
kosi pull kubeops/kosi-filebeat-os:1.2.0 -o filebeat.kosi -r 10.9.10.222:30003
kosi pull kubeops/kosi-filebeat-os:1.2.0 -o filebeat.kosi -r localhost:30003
kosi install -p filebeat.kosi
harbor
kosi pull kubeops/harbor:1.0.1 -o harbor.kosi -r 10.9.10.222:30003
kosi pull kubeops/harbor:1.0.1 -o harbor.kosi -r localhost:30003
kosi install -p harbor.kosi
logstash
kosi pull kubeops/kosi-logstash-os:1.0.1 -o logstash.kosi -r 10.9.10.222:30003
kosi pull kubeops/kosi-logstash-os:1.0.1 -o logstash.kosi -r localhost:30003
kosi install -p logstash.kosi
opa-gatekeeper
kosi pull kubeops/opa-gatekeeper:1.0.1 -o opa-gatekeeper.kosi -r 10.9.10.222:30003
kosi pull kubeops/opa-gatekeeper:1.0.1 -o opa-gatekeeper.kosi -r localhost:30003
kosi install -p opa-gatekeeper.kosi
opensearch
kosi pull kubeops/kosi-opensearch-os:1.0.3 -o opensearch.kosi -r 10.9.10.222:30003
kosi pull kubeops/kosi-opensearch-os:1.0.3 -o opensearch.kosi -r localhost:30003
kosi install -p opensearch.kosi
opensearch-dashboards
kosi pull kubeops/kosi-opensearch-dashboards:1.0.1 -o opensearch-dashboards.kosi -r 10.9.10.222:30003
kosi pull kubeops/kosi-opensearch-dashboards:1.0.1 -o opensearch-dashboards.kosi -r localhost:30003
kosi install -p opensearch-dashboards.kosi
prometheus
kosi pull kubeops/kosi-kube-prometheus-stack:1.0.3 -o prometheus.kosi -r 10.9.10.222:30003
kosi pull kubeops/kosi-kube-prometheus-stack:1.0.3 -o prometheus.kosi -r localhost:30003
kosi install -p prometheus.kosi
rook
kosi pull kubeops/rook-ceph:1.0.3 -o rook-ceph.kosi -r 10.9.10.222:30003
kosi pull kubeops/rook-ceph:1.0.3 -o rook-ceph.kosi -r localhost:30003
kosi install -p rook-ceph.kosi
Kubeops 1.1.2
fileBeat
kosi pull kubeops/kosi-filebeat-os:1.1.1 -o kubeops/kosi-filebeat-os:1.1.1 -r 10.9.10.222:30003
kosi pull kubeops/kosi-filebeat-os:1.1.1 -o kubeops/kosi-filebeat-os:1.1.1 -t localhost:30003
kosi install -p package.yaml
harbor
kosi pull kubeops/harbor:1.1.1 -o kubeops/harbor:1.1.1 -r 10.9.10.222:30003
kosi pull kubeops/harbor:1.1.1 -o kubeops/harbor:1.1.1 -t localhost:30003
kosi install -p package.yaml
logstash
kosi pull kubeops/kosi-logstash-os:1.1.1 -o kubeops/kosi-logstash-os:1.1.1 -r 10.9.10.222:30003
kosi pull kubeops/kosi-logstash-os:1.1.1 -o kubeops/kosi-logstash-os:1.1.1 -t localhost:30003
kosi install -p package.yaml
opa-gatekeeper
kosi pull kubeops/opa-gatekeeper:1.1.1 -o kubeops/opa-gatekeeper:1.1.1 -r 10.9.10.222:30003
kosi pull kubeops/opa-gatekeeper:1.1.1 -o kubeops/opa-gatekeeper:1.1.1 -t localhost:30003
kosi install -p package.yaml
opensearch
kosi pull kubeops/kosi-opensearch-os:1.1.1 -o kubeops/kosi-opensearch-os:1.1.1 -r 10.9.10.222:30003
kosi pull kubeops/kosi-opensearch-os:1.1.1 -o kubeops/kosi-opensearch-os:1.1.1 -t localhost:30003
kosi install -p package.yaml
opensearch-dashboards
kosi pull kubeops/kosi-opensearch-dashboards:1.1.1 -o kubeops/kosi-opensearch-dashboards:1.1.1 -r 10.9.10.222:30003
kosi pull kubeops/kosi-opensearch-dashboards:1.1.1 -o kubeops/kosi-opensearch-dashboards:1.1.1 -t localhost:30003
kosi install -p package.yaml
prometheus
kosi pull kubeops/kosi-kube-prometheus-stack:1.1.1 -o kubeops/kosi-kube-prometheus-stack:1.1.1 -r 10.9.10.222:30003
kosi pull kubeops/kosi-kube-prometheus-stack:1.1.1 -o kubeops/kosi-kube-prometheus-stack:1.1.1 -t localhost:30003
kosi install -p package.yaml
rook
kosi pull kubeops/rook-ceph:1.1.1 -o kubeops/rook-ceph:1.1.1 -r 10.9.10.222:30003
kosi pull kubeops/rook-ceph:1.1.1 -o kubeops/rook-ceph:1.1.1 -t localhost:30003
kosi install -p package.yaml
cert-manager
kosi pull kubeops/cert-manager:1.0.2 -o kubeops/cert-manager:1.0.2 -r 10.9.10.222:30003
kosi pull kubeops/cert-manager:1.0.2 -o kubeops/cert-manager:1.0.2 -t localhost:30003
kosi install -p package.yaml
ingress-nginx
kosi pull kubeops/ingress-nginx:1.0.1 -o kubeops/ingress-nginx:1.0.1 -r 10.9.10.222:30003
kosi pull kubeops/ingress-nginx:1.0.1 -o kubeops/ingress-nginx:1.0.1 -t localhost:30003
kosi install -p package.yaml
kubeops-dashboard
kosi pull kubeops/kubeops-dashboard:1.0.1 -o kubeops/kubeops-dashboard:1.0.1 -r 10.9.10.222:30003
kosi pull kubeops/kubeops-dashboard:1.0.1 -o kubeops/kubeops-dashboard:1.0.1 -t localhost:30003
kosi install -p package.yaml
kubeopsctl 1.4.0
Kubeops 1.4.0
You have to create the file kubeopsctl.yaml:
apiVersion: kubeops/kubeopsctl/alpha/v3 # mandatory
kubeOpsUser: "demo" # change to your username
kubeOpsUserPassword: "Password" # change to your password
kubeOpsUserMail: "demo@demo.net" # change to your email
imagePullRegistry: "registry1.kubernative.net/lima"
clusterName: "example"
clusterUser: "root"
kubernetesVersion: "1.28.2"
masterIP: 10.2.10.11
firewall: "nftables"
pluginNetwork: "calico"
containerRuntime: "containerd"
localRegistry: false
# at least 3 masters and 3 workers are needed
zones:
- name: zone1
nodes:
master:
- name: cluster1master1
ipAdress: 10.2.10.11
user: myuser
systemCpu: 100m
systemMemory: 100Mi
status: active
kubeversion: 1.28.2
- name: cluster1master2
ipAdress: 10.2.10.12
user: myuser
systemCpu: 100m
systemMemory: 100Mi
status: active
kubeversion: 1.28.2
worker:
- name: cluster1worker1
ipAdress: 10.2.10.14
user: myuser
systemCpu: 100m
systemMemory: 100Mi
status: active
kubeversion: 1.28.2
- name: cluster1worker2
ipAdress: 10.2.10.15
systemCpu: 100m
systemMemory: 100Mi
status: active
kubeversion: 1.28.2
- name: zone2
nodes:
master:
- name: cluster1master3
ipAdress: 10.2.10.13
user: myuser
systemCpu: 100m
systemMemory: 100Mi
status: drained
kubeversion: 1.28.2
worker:
- name: cluster1worker1
ipAdress: 10.2.10.16
user: myuser
systemCpu: 100m
systemMemory: 100Mi
status: active
kubeversion: 1.28.2
controlPlaneList:
- 10.2.10.12 # use ip adress here for master2
- 10.2.10.13 # use ip adress here for master3
workerList:
- 10.2.10.14 # use ip adress here for worker1
- 10.2.10.15 # use ip adress here for worker2
- 10.2.10.16 # use ip adress here for worker3
rook-ceph: false
harbor: false
opensearch: false
opensearch-dashboards: false
logstash: false
filebeat: false
prometheus: false
opa: false
headlamp: false
certman: false
ingress: false
keycloak: false # mandatory, set to true if you want to install it into your cluster
velero: false
storageClass: "rook-cephfs"
rookValues:
namespace: kubeops
nodePort: 31931
hostname: rook-ceph.local
cluster:
spec:
dataDirHostPath: "/var/lib/rook"
removeOSDsIfOutAndSafeToRemove: true
storage:
# Global filter to only select certain devicesnames. This example matches names starting with sda or sdb.
# Will only be used if useAllDevices is set to false and will be ignored if individual devices have been specified on a node.
deviceFilter: "^sd[a-b]"
# Names of individual nodes in the cluster that should have their storage included.
# Will only be used if useAllNodes is set to false.
nodes:
- name: "<ip-adress of node_1>"
devices:
- name: "sdb"
- name: "<ip-adress of node_2>"
deviceFilter: "^sd[a-b]"
# config:
# metadataDevice: "sda"
resources:
mgr:
requests:
cpu: "500m"
memory: "1Gi"
mon:
requests:
cpu: "2"
memory: "1Gi"
osd:
requests:
cpu: "2"
memory: "1Gi"
operator:
data:
rookLogLevel: "DEBUG"
blockStorageClass:
parameters:
fstype: "ext4"
postgrespass: "password" # change to your desired password
postgres:
storageClassName: "rook-cephfs"
volumeMode: "Filesystem"
accessModes: ["ReadWriteMany"]
resources:
requests:
storage: 2Gi
redispass: "password" # change to your desired password
redis:
storageClassName: "rook-cephfs"
volumeMode: "Filesystem"
accessModes: ["ReadWriteMany"]
resources:
requests:
storage: 2Gi
harborValues:
namespace: kubeops
harborpass: "password" # change to your desired password
externalURL: https://10.2.10.13 # change to ip adress of master1
nodePort: 30003
hostname: harbor.local
harborPersistence:
persistentVolumeClaim:
registry:
size: 5Gi
storageClass: "rook-cephfs"
chartmuseum:
size: 5Gi
storageClass: "rook-cephfs"
jobservice:
jobLog:
size: 1Gi
storageClass: "rook-cephfs"
scanDataExports:
size: 1Gi
storageClass: "rook-cephfs"
database:
size: 1Gi
storageClass: "rook-cephfs"
redis:
size: 1Gi
storageClass: "rook-cephfs"
trivy:
size: 5Gi
storageClass: "rook-cephfs"
filebeatValues:
namespace: kubeops
logstashValues:
namespace: kubeops
volumeClaimTemplate:
resources:
requests:
storage: 1Gi
accessModes:
- ReadWriteMany
storageClassName: "rook-cephfs"
openSearchDashboardValues:
namespace: kubeops
nodePort: 30050
hostname: opensearch.local
openSearchValues:
namespace: kubeops
opensearchJavaOpts: "-Xmx512M -Xms512M"
replicas: "3"
resources:
requests:
cpu: "250m"
memory: "1024Mi"
limits:
cpu: "300m"
memory: "3072Mi"
persistence:
size: 4Gi
enabled: "true"
enableInitChown: "false"
enabled: "false"
labels:
enabled: "false"
storageClass: "rook-cephfs"
accessModes:
- "ReadWriteMany"
securityConfig:
enabled: false
### Additional values can be set, if securityConfig is enabled:
# path: "/usr/share/opensearch/plugins/opensearch-security/securityconfig"
# actionGroupsSecret:
# configSecret:
# internalUsersSecret: internal-users-config-secret
# rolesSecret:
# rolesMappingSecret:
# tenantsSecret:
# config:
# securityConfigSecret: ""
# dataComplete: true
# data: {}
prometheusValues:
namespace: kubeops
privateRegistry: false
grafanaUsername: "user"
grafanaPassword: "password"
grafanaResources:
storageClass: "rook-cephfs"
storage: 5Gi
nodePort: 30211
hostname: grafana.local
prometheusResources:
storageClass: "rook-cephfs"
storage: 25Gi
retention: 10d
retentionSize: "24GB"
nodePort: 32090
hostname: prometheus.local
opaValues:
namespace: kubeops
headlampValues:
namespace: kubeops
hostname: kubeops-dashboard.local
service:
nodePort: 30007
certmanValues:
namespace: kubeops
replicaCount: 3
logLevel: 2
ingressValues:
namespace: kubeops
externalIPs: []
keycloakValues:
namespace: "kubeops"
storageClass: "rook-cephfs"
nodePort: "30180"
hostname: keycloak.local
keycloak:
auth:
adminUser: admin
adminPassword: admin
existingSecret: ""
postgresql:
auth:
postgresPassword: ""
username: bn_keycloak
password: ""
database: bitnami_keycloak
existingSecret: ""
fileBeat
In order to change the registry of filebeat, set filebeat: false to filebeat: true in your kubeopsctl.yaml file.
kubeopsctl change registry -f kubeopsctl.yaml -r 10.9.10.222:30003 -t localhost:30003
harbor
In order to change the registry of harbor, set harbor: false to harbor: true in your kubeopsctl.yaml file.
kubeopsctl change registry -f kubeopsctl.yaml -r 10.9.10.222:30003 -t localhost:30003
logstash
In order to change the registry of logstash, set logstash: false to logstash: true in your kubeopsctl.yaml file.
kubeopsctl change registry -f kubeopsctl.yaml -r 10.9.10.222:30003 -t localhost:30003
opa-gatekeeper
In order to change the registry of opa-gatekeeper, set opa: false to opa: true in your kubeopsctl.yaml file.
kubeopsctl change registry -f kubeopsctl.yaml -r 10.9.10.222:30003 -t localhost:30003
opensearch
In order to change the registry of opensearch, set opensearch: false to opensearch: true in your kubeopsctl.yaml file.
kubeopsctl change registry -f kubeopsctl.yaml -r 10.9.10.222:30003 -t localhost:30003
opensearch-dashboards
In order to change the registry of opensearch-dashboards, set opensearch-dashboards: false to opensearch-dashboards: true in your kubeopsctl.yaml file.
kubeopsctl change registry -f kubeopsctl.yaml -r 10.9.10.222:30003 -t localhost:30003
prometheus
In order to change the registry of prometheus, set prometheus: false to prometheus: true in your kubeopsctl.yaml file.
kubeopsctl change registry -f kubeopsctl.yaml -r 10.9.10.222:30003 -t localhost:30003
rook-ceph
In order to change the registry of rook-ceph, set rook-ceph: false to rook-ceph: true in your kubeopsctl.yaml file.
kubeopsctl change registry -f kubeopsctl.yaml -r 10.9.10.222:30003 -t localhost:30003
cert-manager
In order to change the registry of cert-manager, set certman: false to certman: true in your kubeopsctl.yaml file.
kubeopsctl change registry -f kubeopsctl.yaml -r 10.9.10.222:30003 -t localhost:30003
ingress-nginx
In order to change the registry of ingress-nginx, set ingress: false to ingress: true in your kubeopsctl.yaml file.
kubeopsctl change registry -f kubeopsctl.yaml -r 10.9.10.222:30003 -t localhost:30003
keycloak
In order to change the registry of keycloak, set keycloak: false to keycloak: true in your kubeopsctl.yaml file.
kubeopsctl change registry -f kubeopsctl.yaml -r 10.9.10.222:30003 -t localhost:30003
headlamp
In order to change the registry of headlamp, set headlamp: false to headlamp: true in your kubeopsctl.yaml file.
kubeopsctl change registry -f kubeopsctl.yaml -r 10.9.10.222:30003 -t localhost:30003
2.17 - Change the OpenSearch password
Changing the password of OpenSearch
Changing the password with default settings
If OpenSearch is installed without any SecurityConfig settings, i.e. the securityConfig value is disabled inside the installation values for OpenSearch, the following steps have to be taken in order to change the password for a user.
Step 1: Generate a new password hash
OpenSearch stores hashed passwords for authentication. In order to change the password of a user, we first have to generate the corresponding hash value using the interactive hash.sh script, which can be found within the OpenSearch container:
kubectl exec -it opensearch-cluster-master-0 -n kubeops -- bash
sh /usr/share/opensearch/plugins/opensearch-security/tools/hash.sh
Step 2: Save the new password hash in the internal_users.yaml file
By default, OpenSearch uses the internal_users.yaml file to store user settings. To change a user's password, replace the hash value for the specific user inside this file. Again, the needed file is located inside the OpenSearch container. Use the following command to edit the internal_users.yaml file and replace the hash entry with the newly generated one.
vi /usr/share/opensearch/config/opensearch-security/internal_users.yaml
Step 3: Update the OpenSearch cluster
Use the provided script securityadmin.sh inside the OpenSearch container to update the OpenSearch cluster and persist the changes in the user database:
sh /usr/share/opensearch/plugins/opensearch-security/tools/securityadmin.sh -cd /usr/share/opensearch/config/opensearch-security/ -icl -nhnv -cacert /usr/share/opensearch/config/root-ca.pem -cert /usr/share/opensearch/config/kirk.pem -key /usr/share/opensearch/config/kirk-key.pem
OpenSearch with external secret
If OpenSearch is instead deployed with the securityConfig enabled and an external secret has been created, some additional steps/changes are required to change a user password.
Step 1: Generate a new password hash
OpenSearch stores hashed passwords for authentication. In order to change the password of a user, we first have to generate the corresponding hash value using the interactive hash.sh script, which can be found within the OpenSearch container:
kubectl exec -it opensearch-cluster-master-0 -n kubeops -- bash
sh /usr/share/opensearch/plugins/opensearch-security/tools/hash.sh
Step 2: Locate the secret and extract the user data
In this case, users and additional user data are stored inside internal-users-config-secret, a secret created within the Kubernetes cluster. It is stored in the same namespace as the OpenSearch deployment itself. Inside the secret there is a data entry which essentially contains the internal_users.yaml (a list of users and their user data in yaml format) encoded as a base64 string. The following commands extract and decode the data so that you can edit the local copy of the yaml file and replace the hash entry with the newly generated one.
kubectl get secrets -n kubeops internal-users-config-secret -o jsonpath='{.data.internal_users\.yml}' | base64 -d > internal_users.yaml
vi internal_users.yaml
Step 3: Patch the secret and restart the OpenSearch pods
After editing the extracted data, it must be re-encoded into base64 to replace the old data inside the secret. After that, the OpenSearch pods need to be restarted for them to reload the secret.
cat internal_users.yaml | base64 -w 0 | xargs -I {} kubectl patch secret -n kubeops internal-users-config-secret --patch '{"data": {"internal_users.yml": "{}"}}'
kubectl rollout restart statefulset opensearch-cluster-master -n kubeops
Step 4: Update the OpenSearch cluster
Use the provided script securityadmin.sh inside the OpenSearch container to update the OpenSearch cluster and persist the changes in the user database. For the script to work properly, you must first copy the internal_users.yml into the directory containing all the needed files.
cp /usr/share/opensearch/plugins/opensearch-security/securityconfig/internal_users.yml /usr/share/opensearch/config/opensearch-security/
sh /usr/share/opensearch/plugins/opensearch-security/tools/securityadmin.sh -cd /usr/share/opensearch/config/opensearch-security/ -icl -nhnv -cacert /usr/share/opensearch/config/root-ca.pem -cert /usr/share/opensearch/config/kirk.pem -key /usr/share/opensearch/config/kirk-key.pem
2.18 - Create Kosi package
Creating Kosi package
kosi create
To create a Kosi package, you must first run the kosi create command in your directory.
The kosi create command creates four files (package.yaml, template.yaml, logo.png and docs.tgz) in the current directory. These files can be edited.
[root@localhost ~]# kosi create
Created files:
- package.yaml - Defines properties of the Kosi package. (see below)
- template.yaml - Required if the template engine Scriban is to be used.
- logo.png - A package-thumbnail with the size of 50x50px, for showing logo on the KubeOpsHub.
- docs.tgz - A zipped directory with the documentation of the package, for showing documentation on the KubeOpsHub.
The documentation of the package is written in markdown. The file for the documentation is called readme.md.
To edit the markdown, you can unzip the docs.tgz in your directory with the command tar -xzf docs.tgz and zip it again with the command tar -czf docs.tgz docs/ after you finish.
Note: Please name your markdown files inside docs.tgz without a version tag (e.g. docs/documentation.md instead of docs/documentation-1.0.0.md).
Do not change the file names of any of the files generated with the kosi create command.
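Put together, the documentation edit cycle uses only the commands from above (readme.md is the documentation file named earlier):
tar -xzf docs.tgz        # unpack the documentation
vi docs/readme.md        # edit the markdown
tar -czf docs.tgz docs/  # repack before running kosi build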
package.yaml
The package.yaml defines a package in a specific version as well as the tasks needed to install it. The tasks which are used in the package.yaml are plugins, which can be created by the user.
Elements:
- includes.files: Describes the files which are included in the Kosi package.
- includes.containers: Used for docker images. A container for the docker images will be created when the kosi install, kosi update or kosi delete command is used.
- installation.tasks: Describes the tasks (Kosi plugins) which are executed with the kosi install command.
- update.tasks: Describes the tasks (Kosi plugins) which are executed with the kosi update command.
- delete.tasks: Describes the tasks (Kosi plugins) which are executed with the kosi delete command.
IMPORTANT: It is required to enter the package name in lowercase.
Do not use any docker tags (:v1.0.0) in your package name.
Example package.yaml
apiversion: kubernative/kubeops/sina/user/v3 # Required field
name: kosi-example-packagev3 # Required field
description: kosi-example-package # Required field
version: 0.1.0 # Required field
includes: # Required field: When "files" or "containers" are needed.
files: # Optional field: IF file is attached, e.g. "rpm, .extension"
input: "template.yaml"
containers: # Optional field: When "containers" are needed.
example:
registry: docker.io
image: nginx
tag: latest
docs: docs.tgz
logo: logo.png
installation: # Required field
includes: # Optional field: When "files" or "containers" are needed.
files: # Optional field:
- input # Reference to includes
containers: # Optional field:
- example # Reference to includes
tasks:
- cmd:
command: "touch ~/kosiExample1"
update: # Required field
includes: # Optional field: When "files" or "containers" are needed.
files: # Optional field:
- input # Reference to includes
containers: # Optional field:
- example # Reference to includes
tasks:
- cmd:
command: "touch ~/kosiExample2"
delete: # Required field
includes: # Optional field: When "files" or "containers" are needed.
files: # Optional field:
- input # Reference to includes
containers: # Optional field:
- example # Reference to includes
tasks:
- cmd:
command: "rm ~/kosiExample1"
- cmd:
command: "rm ~/kosiExample2"
kosi build
Now, after you have created and edited the files from kosi create, you can simply build a Kosi package by running the kosi build command in your directory.
[root@localhost ~]# kosi build
All files specified in the package.yaml are combined together with the package.yaml to form a kosi package.
In these few steps, you can successfully create and use the kosi package. This is the basic functionality offered by Kosi.
You can always explore Full Documentation to go through all the functionality and features provided by Kosi.
2.19 - Install package from Hub
Installing KOSI packages from KubeOps Hub
To install KOSI packages from the KubeOps Hub on your machines:
- First you need to search for the package on the KubeOps Hub using the kosi search command (refer to the kosi search command for more info).
- Now copy the installation address of the desired package and use it in the kosi install command:
[root@localhost ~]# kosi install --hub <hubname> <installation address>
The --hub parameter is used to install packages from the software Hub.
To be able to install a package from the software Hub, you have to be logged in as a user.
Install from Private Hub
Example: The package livedemo of user kosi with version 2.7.1 is to be installed from the private software Hub:
[root@localhost ~]# kosi install kosi/livedemo:2.7.1
Install from Public Hub
Example: The package livedemo of the user kosi with the version 2.7.1 is to be installed from the public software Hub:
[root@localhost ~]# kosi install --hub public kosi/livedemo:2.7.1
Install along with yaml files
The -f parameter must be used to use yaml files from the user.
[root@localhost ~]# kosi install <package> -f <user.yaml>
Example: The package livedemo of user kosi with version 2.7.1 is to be installed from the public software hub and user specific files are to be used for the installation:
[root@localhost ~]# kosi install --hub public kosi/livedemo:2.7.1 -f userfile1.yaml
Install in specific namespace
--namespace flag
The --namespace parameter can be used to specify a Kubernetes namespace in which to perform the installation.
[root@localhost ~]# kosi install --hub <hubname> <package> --namespace <namespace>
Example: The package livedemo of user kosi with version 2.7.1 is to be installed from the public software hub and a custom kubernetes namespace is used:
[root@localhost ~]# kosi install --hub public kosi/livedemo:2.7.1 --namespace MyNamespace
Note: If no --namespace parameter is specified, the namespace default will be used.
Install with specific deployment name
--dname flag
The --dname parameter can be used to save the package under a specific name.
[root@localhost ~]# kosi install --hub <hubname> <package> --dname <deploymentname>
Example: The package livedemo of user kosi with version 2.7.1 is to be installed from the public software hub and a deployment name is set:
[root@localhost ~]# kosi install --hub public kosi/livedemo:2.7.1 --dname MyDeployment
If no --dname parameter is specified, a random deployment name will be generated.
Note: The deployment name is stored in the file /var/kubeops/kosi/deployment.yaml.
In these few steps, you can successfully install and use the KOSI package.
You can always explore Full Documentation to go through all the functionality and features provided by KOSI.
Install on a machine with no internet connection
- Download the package using kosi pull on a machine with an internet connection.
[root@localhost ~]# kosi pull [package name from hub] -o [your preferred name] --hub public
- Transfer the package to the machine that has no internet connection but has KubeOps installed on it.
- Install it with the following command:
[root@localhost ~]# kosi install -p [package name]
3 - Reference
3.1 - Documentation-kubeopsctl
KubeOps kubeopsctl
This documentation shows all features of kubeopsctl and how to use them.
The kosi software must be installed beforehand.
General commands
Overview of all KUBEOPSCTL commands
Usage:
kubeopsctl [command] [options]
Options:
--version Show version information
-?, -h, --help Show help and usage information
Commands:
apply Use the apply command to apply a specific config to create or modify the cluster.
change change
drain <argument> Drain Command.
uncordon <name> Uncordon Command.
upgrade <name> upgrade Command.
status <name> Status Command.
Command ‘kubeopsctl --version’
The kubeopsctl --version
command shows you the current version of kubeopsctl.
kubeopsctl --version
The output should be:
0.2.0-Alpha0
Command ‘kubeopsctl --help’
The command kubeopsctl --help
gives you an overview of all available commands:
kubeopsctl --help
Alternatively, you can also enter kubeopsctl
or kubeopsctl -?
in the command line.
Command ‘kubeopsctl apply’
The command kubeopsctl apply
is used to set up the kubeops platform with a configuration file.
Example:
kubeopsctl apply -f kubeopsctl.yaml
-f flag
The -f parameter is used to pass a yaml parameter file.
-l flag
The -l parameter is used to set the log level to a specific value. The default log level is Info. Available log levels are Error, Warning, Info, Debug1, Debug2 and Debug3.
Example:
kubeopsctl apply -f kubeopsctl.yaml -l Debug3
Command ‘kubeopsctl change registry’
The command kubeopsctl change registry
is used to change the currently used registry to a different one.
Example:
kubeopsctl change registry -f kubeopsctl.yaml -r 10.2.10.11/library -t localhost/library
-f flag
The -f parameter is used to pass a yaml parameter file.
-r flag
The -r parameter is used to pull the docker images which are included in the package to a given local docker registry.
-t flag
The -t parameter is used to tag the images with localhost. For the scenario where the cluster's registry is exposed to the admin via an internal domain name that cannot be resolved by the nodes, the -t flag can be used to use the cluster-internal hostname of the registry.
Command ‘kubeopsctl drain’
The command kubeopsctl drain
is used to drain a cluster, zone or node.
In this example we are draining a cluster:
kubeopsctl drain cluster/example
In this example we are draining a zone:
kubeopsctl drain zone/zone1
In this example we are draining a node:
kubeopsctl drain node/master1
Command ‘kubeopsctl uncordon’
The command kubeopsctl uncordon is used to uncordon a cluster, zone or node.
In this example we are uncordoning a cluster:
kubeopsctl uncordon cluster/example
In this example we are uncordoning a zone:
kubeopsctl uncordon zone/zone1
In this example we are uncordoning a node:
kubeopsctl uncordon node/master1
Command ‘kubeopsctl upgrade’
The command kubeopsctl upgrade
is used to upgrade the kubernetes version of a cluster, zone or node.
In this example we are upgrading a cluster:
kubeopsctl upgrade cluster/example -v 1.26.6
In this example we are upgrading a zone:
kubeopsctl upgrade zone/zone1 -v 1.26.6
In this example we are upgrading a node:
kubeopsctl upgrade node/master1 -v 1.26.6
-v flag
The -v parameter is used to set a higher Kubernetes version.
Command ‘kubeopsctl status’
The command kubeopsctl status
is used to get the status of a cluster.
Example:
kubeopsctl status cluster/cluster1 -v 1.26.6
Prerequisites
Minimum hardware and OS requirements for a Linux machine are:
OS | Minimum Requirements |
---|---|
Red Hat Enterprise Linux 8 | 8 CPU cores, 16 GB memory |
OpenSUSE 15 | 8 CPU cores, 16 GB memory |
At least one machine should be used as an admin machine for cluster lifecycle management.
Requirements on admin
The following requirements must be fulfilled on the admin machine.
- All the utilized users require sudo privileges. If you are using KubeOps as a non-root user, you need a user with sudo rights; for openSUSE and RHEL 8 environments, the user should be added to the wheel group. Make sure that you switch to your user with:
su -l <user>
- The admin machine must be synchronized with the current time.
- You need an internet connection to use the default KubeOps registry registry1.kubernative.net/lima.
A local registry can be used in an airgap environment. KubeOps only supports secure registries.
In case of insecure registry usage, it is important to list your registry as an insecure registry in the registry configuration (/etc/containers/registries.conf for podman, /etc/docker/daemon.json for docker).
You can also create your own registry instead of using the default. Check out the how-to guide Create a new Repository for more info.
- kosi 2.8.0 must be installed on your machine. Click here to view how it is done in the Quick Start Guide.
- It is recommended that runc is uninstalled. To uninstall runc on your OS use the following command:
dnf remove -y runc
zypper remove -y runc
- tc should be installed. To install tc on your OS use the following command:
dnf install -y tc
zypper install -y iproute2
- For OpenSearch, /etc/sysctl.conf should be configured: the line
vm.max_map_count=262144
should be added, and the command
sysctl -p
should be executed after that (see the sketch after this list).
- Podman must be installed on your machine. To install podman use the command:
dnf install podman
zypper install podman
Warning: There can be an issue with conflicts with containerd, so it is recommended that containerd.io is removed before installing the podman package.
- You must install kubeops-basic-plugins:0.4.0. Simply type in the following command to install the Basic-Plugins.
kosi install --hub=public pia/kubeops-basic-plugins:0.4.0
Note that you must install it as the root user.
- You must install kubeops-kubernetes-plugins:0.5.0. Simply type in the following command to install the Kubernetes-Plugins.
kosi install --hub public pia/kubeops-kubernetes-plugins:0.5.0
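A minimal sketch of the OpenSearch sysctl step from this list, assuming sudo access:
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf   # persist the setting
sudo sysctl -p                                                  # apply it now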
Requirements for each node
The following requirements must be fulfilled on each node.
- All the utilized users require sudo privileges. If you are using KubeOps as a non-root user, you need a user with sudo rights; for openSUSE and RHEL 8 environments, the user should be added to the wheel group.
- Every machine must be synchronized with the current time.
- You have to assign lowercase unique hostnames for every machine you are using. We recommend using self-explanatory hostnames.
To set the hostname on your machine use the following command:
hostnamectl set-hostname <name of node>
- Example
Use the commands below to set the hostnames on each machine as admin, master, node1 and node2.
hostnamectl set-hostname admin
hostnamectl set-hostname master
hostnamectl set-hostname node1
hostnamectl set-hostname node2
This requires sudo privileges.
It is recommended that a DNS service is running; if you don't have a DNS service, you can change the /etc/hosts file. An example of entries in the /etc/hosts file could be:
10.2.10.12 admin
10.2.10.13 master1
10.2.10.14 master2
10.2.10.15 master3
10.2.10.16 node1
10.2.10.17 node2
10.2.10.18 node3
- To establish an SSH connection between your machines, you either need an SSH key or you need to install sshpass.
- Generate an SSH key on the admin machine using the following command:
ssh-keygen
There will be two keys generated in the ~/.ssh directory. The first key is the id_rsa (private) key and the second is the id_rsa.pub (public) key.
- Copy the SSH key from the admin machine to your node machine(s) with the following command:
ssh-copy-id <ip address or hostname of your node machine>
- Now try establishing a connection to your node machine(s):
ssh <ip address or hostname of your node machine>
How to Configure Cluster/Nodes/Software using yaml file
You need a cluster definition file which describes the different aspects of your cluster. This file describes only one cluster.
Full yaml syntax
apiVersion: kubeops/kubeopsctl/alpha/v3 # mandatory
clusterName: "example" # mandatory
clusterUser: "root" # mandatory
kubernetesVersion: "1.28.2" # mandatory
#masterHost: optional if you have an hostname, default value in "masterIP"
masterIP: 10.2.10.12 # mandatory
firewall: "nftables" # mandatory, default "nftables"
pluginNetwork: "calico" # mandatory, default "calico"
containerRuntime: "containerd" # mandatory
clusterOS: "Red Hat Enterprise Linux" # mandatory, can be "Red Hat Enterprise Linux" or "openSUSE Leap"
### Additional values for cluster configuration
useInsecureRegistry: false # optional, default is false
ignoreFirewallError: false # optional, default is false
serviceSubnet: 192.168.128.0/17 # optional, default "192.168.128.0/17"
podSubnet: 192.168.0.0/17 # optional, default "192.168.0.0/17"
debug: false # optional, default is false
logLevel: vvvvv # optional, default "vvvvv"
systemCpu: "1" # optional, default "1"
systemMemory: "2G" # optional, default "2G"
sudo: true # optional, default is true
tmpCopyDir: "/tmp" # optional, default is /tmp
createCluster: true # optional, default is true
updateRegistry: true # optional, default is true
zones:
- name: zone1
nodes:
master:
- name: cluster1master1
ipAdress: 10.2.10.11
user: myuser
systemCpu: 100m
systemMemory: 100Mi
status: active
kubeversion: 1.28.2
- name: cluster1master2
ipAdress: 10.2.10.12
user: myuser
systemCpu: 100m
systemMemory: 100Mi
status: active
kubeversion: 1.28.2
worker:
- name: cluster1worker1
ipAdress: 10.2.10.14
user: myuser
systemCpu: 100m
systemMemory: 100Mi
status: active
kubeversion: 1.28.2
- name: cluster1worker2
ipAdress: 10.2.10.15
systemCpu: 100m
systemMemory: 100Mi
status: active
kubeversion: 1.28.2
- name: zone2
nodes:
master:
- name: cluster1master3
ipAdress: 10.2.10.13
user: myuser
systemCpu: 100m
systemMemory: 100Mi
status: drained
kubeversion: 1.28.2
worker:
- name: cluster1worker1
ipAdress: 10.2.10.16
user: myuser
systemCpu: 100m
systemMemory: 100Mi
status: active
kubeversion: 1.28.2
# set to true if you want to install it into your cluster
rook-ceph: true # mandatory
harbor: true # mandatory
opensearch: true # mandatory
opensearch-dashboards: true # mandatory
logstash: true # mandatory
filebeat: true # mandatory
prometheus: true # mandatory
opa: true # mandatory
headlamp: true # mandatory
certman: true # mandatory
ingress: true # mandatory
keycloak: true # mandatory
velero: true # mandatory
nameSpace: "kubeops" #optional, the default value is different for each application
storageClass: "rook-cephfs" # optional, default value is "rook-cephfs"
###Values for Rook-Ceph###
rookValues:
namespace: kubeops
cluster:
spec:
dataDirHostPath: "/var/lib/rook" # optional, default is /var/lib/rook
storage:
useAllNodes: true # optional, default value: true
useAllDevices: true # optional, default value: true
deviceFilter: "^sd[a-b]" # optional, will only be used if useAllDevices is set to false
config:
metadataDevice: "sda" # optional, only set this value, if there is a device available
nodes: # optional if useAllNodes is set to true, otherwise mandatory
- name: "<ip-adress of node_1>"
devices:
- name: "sdb"
- name: "<ip-adress of node_2>"
deviceFilter: "^sd[a-b]"
config:
metadataDevice: "sda" # optional
resources:
mgr:
requests:
cpu: "500m" # optional, default is 500m, limit: 1000m
memory: "512Mi" # optional, default is 1Gi, limit: 1Gi
mon:
requests:
cpu: "1" # optional, default is 1, limit: 2000m
memory: "1Gi" # optional, default is 1Gi, limit: 2Gi
osd:
requests:
cpu: "1" # optional, default is 1, limit: 2
memory: "1Gi" # optional, default is 4Gi, limit: 4Gi
cephFileSystems:
requests:
cpu: "1" # optional, default is 1, limit: 2
memory: "1Gi" # optional, default is 1, limit: 4Gi
cephObjectStores:
requests:
cpu: "1" # optional, default is 1, limit: 2
memory: "1Gi" # optional, default is 1Gi, limit: 2Gi
operator:
data:
rookLogLevel: "DEBUG" # optional, default is DEBUG
#-------------------------------------------------------------------------------------------------------------------------------
### Values for Harbor deployment ###
## For detailed explanation of each key see: https://artifacthub.io/packages/helm/harbor/harbor/1.8.1#configuration ##
harborValues:
namespace: kubeops # optional, default is kubeops
harborpass: "password" # mandatory: set password for harbor access
databasePassword: "Postgres_Password" # mandatory: set password for database access
redisPassword: "Redis_Password" # mandatory: set password for redis access
externalURL: http://10.2.10.13:30002 # mandatory, the ip address and port, from which harbor is accessable outside of the cluster
nodePort: 30002 # mandatory
hostname: harbor.local # mandatory
harborPersistence:
persistentVolumeClaim:
registry:
size: 5Gi # mandatory, depending on storage capacity
storageClass: "rook-cephfs" #optional, default is rook-cephfs
jobservice:
jobLog:
size: 1Gi # mandatory: Depending on storage capacity
storageClass: "rook-cephfs" #optional, default is rook-cephfs
database:
size: 1Gi # mandatory, depending on storage capacity
storageClass: "rook-cephfs" #optional, default is rook-cephfs
redis:
size: 1Gi # mandatory, depending on storage capacity
storageClass: "rook-cephfs" #optional, default is rook-cephfs
trivy:
size: 5Gi # mandatory, depending on storage capacity
storageClass: "rook-cephfs" #optional, default is rook-cephfs
#--------------------------------------------------------------------------------------------------------------------------------------
###Values for filebeat deployment###
filebeatValues:
namespace: kubeops # optional, default is kubeops
#--------------------------------------------------------------------------------------------------------------------------------
###Values for Logstash deployment###
##For detailed explanation of each key see: https://github.com/elastic/helm-charts/releases/tag/v7.16.3###
logstashValues:
namespace: kubeops
volumeClaimTemplate:
accessModes:
- ReadWriteMany #optional, default is [ReadWriteMany]
resources:
requests:
storage: 1Gi # mandatory, depending on storage capacity
storageClass: "rook-cephfs" #optional, default is rook-cephfs
#--------------------------------------------------------------------------------------------------------------------------------------
###Values for OpenSearch-Dashboards deployment###
##For detailed explanation of each key see: https://github.com/opensearch-project/helm-charts/tree/main/charts/opensearch-dashboards###
openSearchDashboardValues:
namespace: kubeops
nodePort: 30050
#--------------------------------------------------------------------------------------------------------------------------------
###Values for OpenSearch deployment###
##For detailed explanation of each key see: https://github.com/opensearch-project/helm-charts/tree/main/charts/opensearch###
openSearchValues:
namespace: kubeops
opensearchJavaOpts: "-Xmx512M -Xms512M" # optional, default is -Xmx512M -Xms512M
resources:
requests:
cpu: "250m" # optional, default is 250m
memory: "1024Mi" # optional, default is 1024Mi
limits:
cpu: "300m" # optional, default is 300m
memory: "3072Mi" # optional, default is 3072Mi
persistence:
size: 4Gi # mandatory
enabled: "true" # optional, default is true
enableInitChown: "false" # optional, default is false
labels:
enabled: "false" # optional, default is false
storageClass: "rook-cephfs" # optional, default is rook-cephfs
accessModes:
- "ReadWriteMany" # optional, default is {ReadWriteMany}
securityConfig:
enabled: false # optional, default value: false
### Additional values can be set, if securityConfig is enabled:
# path: "/usr/share/opensearch/plugins/opensearch-security/securityconfig"
# actionGroupsSecret:
# configSecret:
# internalUsersSecret: internal-users-config-secret
# rolesSecret:
# rolesMappingSecret:
# tenantsSecret:
# config:
# securityConfigSecret: ""
# dataComplete: true
# data: {}
replicas: "3" # optional, default is 3
#--------------------------------------------------------------------------------------------------------------------------------
###Values for Prometheus deployment###
prometheusValues:
namespace: kubeops # optional, default is kubeops
privateRegistry: false # optional, default is false
grafanaUsername: "user" # optional, default is user
grafanaPassword: "password" # optional, default is password
grafanaResources:
storageClass: "rook-cephfs" # optional, default is rook-cephfs
storage: 5Gi # optional, default is 5Gi
nodePort: 30211 # optional, default is 30211
prometheusResources:
storageClass: "rook-cephfs" # optional, default is rook-cephfs
storage: 25Gi # optional, default is 25Gi
retention: 10d # optional, default is 10d
retentionSize: "24GB" # optional, default is 24GB
nodePort: 32090
#--------------------------------------------------------------------------------------------------------------------------------
###Values for OPA deployment###
opaValues:
namespace: kubeops
#--------------------------------------------------------------------------------------------------------------------------------
###Values for Headlamp deployment###
headlampValues:
service:
nodePort: 30007
#--------------------------------------------------------------------------------------------------------------------------------
###Values for cert-manager deployment###
certmanValues:
namespace: kubeops
replicaCount: 3
logLevel: 2
#--------------------------------------------------------------------------------------------------------------------------------
###Values for ingress-nginx deployment###
ingressValues:
namespace: kubeops
keycloakValues:
namespace: "kubeops" # Optional, default is "keycloak"
storageClass: "rook-cephfs" # Optional, default is "rook-cephfs"
keycloak:
auth:
adminUser: admin # Optional, default is admin
adminPassword: admin # Optional, default is admin
existingSecret: "" # Optional, default is ""
postgresql:
auth:
postgresPassword: "" # Optional, default is ""
username: bn_keycloak # Optional, default is "bn_keycloak"
password: "" # Optional, default is ""
database: bitnami_keycloak # Optional, default is "bitnami_keycloak"
existingSecret: "" # Optional, default is ""
veleroValues:
namespace: "velero"
accessKeyId: "your_s3_storage_username"
secretAccessKey: "your_s3_storage_password"
useNodeAgent: false
defaultVolumesToFsBackup: false
provider: "aws"
bucket: "velero"
useVolumeSnapshots: false
backupLocationConfig:
region: "minio"
s3ForcePathStyle: true
s3Url: "http://minio.velero.svc:9000"
kubeopsctl.yaml in detail
apiVersion: kubeops/kubeopsctl/alpha/v3 # mandatory
imagePullRegistry: "registry1.kubernative.net/lima" # mandatory
localRegistry: false # mandatory
### Values for setup configuration ###
clusterName: "testkubeopsctl" # mandatory
clusterUser: "mnyuser" # mandatory
kubernetesVersion: "1.28.2" # mandatory, check lima documentation
#masterHost: optional if you have an hostname, default value in "masterIP"
masterIP: 10.2.10.11 # mandatory
firewall: "nftables" # mandatory, default "nftables"
pluginNetwork: "calico" # mandatory, default "calico"
containerRuntime: "containerd" # mandatory, default "containerd"
clusterOS: "Red Hat Enterprise Linux" # optional, can be "Red Hat Enterprise Linux" or "openSUSE Leap", remove this line if you want to use default installed OS on admin machine but it has to be "Red Hat Enterprise Linux" or "openSUSE Leap"
These are parameters for the cluster creation and for the software used in creating the cluster, e.g. the container runtime for running the containers of the cluster. There are also parameters for the lima software (see the lima documentation for further explanation).
### Additional values for cluster configuration
useInsecureRegistry: false # optional, default is false
ignoreFirewallError: false # optional, default is false
serviceSubnet: 192.168.128.0/17 # optional, default "192.168.128.0/17"
podSubnet: 192.168.0.0/17 # optional, default "192.168.0.0/17"
debug: true # optional, default is true
logLevel: vvvvv # optional, default "vvvvv"
systemCpu: "1" # optional, default "1"
systemMemory: "2G" # optional, default "2G"
sudo: true # optional, default is true
Also important are the networking parameters, such as the subnets for the pods and services inside the Kubernetes cluster.
# at least 3 masters and 3 workers are needed
zones:
  - name: zone1
    nodes:
      master:
        - name: cluster1master1
          ipAdress: 10.2.10.11
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi
          status: active
          kubeversion: 1.28.2
        - name: cluster1master2
          ipAdress: 10.2.10.12
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi
          status: active
          kubeversion: 1.28.2
      worker:
        - name: cluster1worker1
          ipAdress: 10.2.10.14
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi
          status: active
          kubeversion: 1.28.2
        - name: cluster1worker2
          ipAdress: 10.2.10.15
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi
          status: active
          kubeversion: 1.28.2
  - name: zone2
    nodes:
      master:
        - name: cluster1master3
          ipAdress: 10.2.10.13
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi
          status: active
          kubeversion: 1.28.2
      worker:
        - name: cluster1worker3
          ipAdress: 10.2.10.16
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi
          status: drained
          kubeversion: 1.28.2
New here are the zones, which contain master and worker nodes. A node can be in one of two states: active and drained. Nodes can also run two different Kubernetes versions, so updates in tranches are possible with kubeopsctl; if you want to update in tranches, you need at least one master on the greater version. You can also set the system memory and system CPU that each node reserves for Kubernetes itself. It is not possible to delete nodes with kubeopsctl; for deleting nodes you have to use LIMA.
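For example, a minimal sketch of an update in tranches (node names taken from the example above; 1.27.2 as the older version is an assumption): the first master is raised to the greater version, while the second master stays on the old version until a later tranche.
zones:
  - name: zone1
    nodes:
      master:
        - name: cluster1master1
          ipAdress: 10.2.10.11
          user: myuser
          status: active
          kubeversion: 1.28.2 # already updated to the greater version
        - name: cluster1master2
          ipAdress: 10.2.10.12
          user: myuser
          status: active
          kubeversion: 1.27.2 # assumed older version, updated in a later tranche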
The rest are parameters for installing the KubeOps software, which are explained here.
How to use kubeopsctl
Apply changes to the cluster
kubeopsctl apply -f kubeopsctl.yaml
3.2 - KubeOps Version
KubeOps Version
Here is the list of KubeOps versions and the tool versions they support. Make sure to install or upgrade according to supported versions only.
KubeOps | Supported KOSI/SINA Version | Supported LIMA Version | Supported kubeopsctl Version |
---|---|---|---|
KubeOps 1.4.0 | KOSI 2.9.X | LIMA 1.4.X | kubeopsctl 1.4.X |
KubeOps 1.3.0 | KOSI 2.9.X | LIMA 1.1.X | kubeopsctl 0.2.X |
KubeOps 1.2.0 | SINA 2.8.X | LIMA 1.0.X | kubeopsctl 0.1.X |
KubeOps 1.1.7 | SINA 2.7.X | LIMA 0.10.X | |
KubeOps 1.0.10 | SINA 2.6.X | LIMA 0.10.X |
KubeOps 1.4.0 supports
Tools | supported Tool Version | supported Package Version | SHA256 Checksum |
---|---|---|---|
fileBeat | V 8.5.1 | kubeops/filebeat-os:1.4.0 | 03a3338bfdc30ee5899a1cba5994bcc77278082adcd7b3d66ae0f55357f2ebc5 |
harbor | V 2.9.1 | kubeops/harbor:1.4.0 | c407b7e2fd8f1a22bad4374061fceb04f4a2b5befccbb891a76b24e81333ae1e |
helm | V 3.8.0 | kubeops/helm:1.4.0 | 433d84f30aa4ba6ae8dc0d5cba4953e3f2a933909374af0eb54450ad234f870d |
logstash | V 8.4.0 | kubeops/logstash-os:1.4.0 | 5ff7b19fa2e72f1c4ac4b1c37f478c223c265c1277200c62f393c833cbb9db1b |
opa-gatekeeper | V 3.11.0 | kubeops/opa-gatekeeper:1.4.0 | 882af738ac3c10528d5b694f6181e1f1e5f749df947a8afdb0c6ac97809ce5ef |
opensearch | V 2.11.1 | kubeops/opensearch-os:1.4.0 | e72094321b2e76d4de754e56e8b9c40eb79c57059078cf58fd01bc43ab515d4a |
opensearch-dashboards | V 2.11.1 | kubeops/opensearch-dashboards:1.4.0 | 0ff2889aeff8e73c567c812ea709d633ff7a912a13bc8374ebb7c09aed52bac6 |
prometheus | V 43.2.1 | kubeops/kube-prometheus-stack:1.4.0 | 08880a2218ab776e3fd61f95133e8d02e1d2e37b84bcc2756b76eda713eac4ae |
rook | V 17.2.5 | kubeops/rook-ceph:1.4.0 | 5e306c26c6a8fed92b13d47bb127f9d3a6f0b9fcc341ff0efc3c1eaf8d311567 |
cert-manager | V 1.11.0 | kubeops/cert-manager:1.4.0 | ac0a5ff859c1e6846ecbf9fa77c5c448d060da4889ab3bc568317db866f97094 |
ingress-nginx | V 1.8.5 | kubeops/ingress-nginx:1.4.0 | 2128fe81553d80fa491c5978a7827402be79b5f196863a836667b59f3a35c0f8 |
kubeops-dashboard | V 1.0.0 | kubeops/kubeops-dashboard:1.4.0 | b0623b9126a19e5264bbd887b051fd62651cd9683fefdae62fce998af4558e1e |
keycloak | V 16.0.5 | kubeops/keycloak:1.4.0 | b309624754edf53ffea2ce7d772b70d665b0f5ae176e8596fcb403e96e80adec |
velero | V 1.12.3 | kubeops/velero:1.4.0 | b762becf38dbcac716f1d2b446fb35ad40c72aa4d928ccbc9dd362a7ad227fc2 |
clustercreate | V 1.4.0 | kubeops/clustercreate:1.4.0 | ebd2bccfedd99b051c930d23c3b1c123c40e70c098d2b025d29dee301f1b92d8 |
setup | V 1.4.0 | kubeops/setup:1.4.0 | d74e2be55e676946f6a996575f75ac9161db547ad402da8b66a333dfd7936449 |
calicomultus | V 0.0.3 | lima/calicomultus:0.0.3 | c4a40fd0ab66eb0669da7da82e2f209d67ea4d4c696c058670306d485e483f62 |
KubeOps 1.3.1 supports
Tools | supported Tool Version | supported Package Version | SHA256 Checksum |
---|---|---|---|
fileBeat | V 8.5.1 | kubeops/sina-filebeat-os:1.3.1 | 476b23d4c484104167c58caade4f59143412cbbb863e54bb109c3e4c3a592940 |
harbor | V 2.9.1 | kubeops/harbor:1.3.1 | 4862e55ecbfee007f7e9336db7536c064d18020e6b144766ff1338a5d162fc56 |
helm | V 3.8.0 | kubeops/helm:1.3.1 | 99f4eac645d6a3ccb937252fde4880f7da8eab5f84c6143c287bd6c7f2dcce65 |
logstash | V 8.4.0 | kubeops/sina-logstash-os:1.3.1 | 48bee033e522bf3c4863e98623e2be58fbd145d4a52fd4f56b5e1e7ef984bd6d |
opa-gatekeeper | V 3.11.0 | kubeops/opa-gatekeeper:1.3.1 | f8d5633912f1df1e303889e2e3a32003764f0b65c8a77ece88d7c3435080a86b |
opensearch | V 2.9.0 | kubeops/sina-opensearch-os:1.3.1 | a09cf6f29aac5b929425cf3813570fe105ed617ccfdafd0e4593dbbe719a6207 |
opensearch-dashboards | V 2.9.0 | kubeops/sina-opensearch-dashboards:1.3.1 | 86858a23b15c4c67e5eee7a286d8c9a82568d331d39f814746247e742cc56a11 |
prometheus | V 43.2.1 | kubeops/sina-kube-prometheus-stack:1.3.1 | aacced30732c08e8edf439e3dd0d40bd09575f7728f7fca54294c704bce2b76c |
rook | V 17.2.5 | kubeops/rook-ceph:1.3.1 | b3d5b9eace80025d070212fd227d9589024e1eb70e571e3e690d5709202fd26f |
cert-manager | V 1.11.0 | kubeops/cert-manager:1.3.1 | 52ba2c9b809a3728d73cf55d99a769c9f083c7674600654c09c749d6e5f3bdf3 |
ingress-nginx | V 1.7.0 | kubeops/ingress-nginx:1.3.1 | 91007878ef416724c09f9a1c8d498f3a3314cd011ab0c2c2ca81163db971773d |
kubeops-dashboard | V 1.0.0 | kubeops/kubeops-dashboard:1.3.1 | 70fb266137ac94896f841d27b341f610190afe7bed5d5baad53f450d8f925c78 |
keycloak | V 16.0.5 | kubeops/keycloak:1.3.1 | 853912a83fd3eff9bb92f8a6285f132d10ee7775b3ff52561c8a7d281e956090 |
clustercreate | V 1.3.1 | kubeops/clustercreate:1.3.1 | 0526a610502922092cd8ea52f98bec9a64e3f1d1f6ac7a29353f365ac8d43050 |
setup | V 1.3.2 | kubeops/setup:1.3.1 | 7c610df29cdfe633454f78a6750c9419bf26041cba69ca5862a98b69a3a17cca |
calicomultus | V 0.0.3 | lima/calicomultus:0.0.3 | c4a40fd0ab66eb0669da7da82e2f209d67ea4d4c696c058670306d485e483f62 |
KubeOps 1.2.1 supports
Tools | supported Tool Version | supported Package Version | SHA256 Checksum |
---|---|---|---|
fileBeat | V 8.5.1 | kubeops/sina-filebeat-os:1.2.0 | 473546e78993ed4decc097851c84ade25aaaa068779fc9e96d17a0cb68564ed8 |
harbor | V 2.6.4 | kubeops/harbor:1.2.0 | 156f4713f916771f60f89cd8fb1ea58ea5fcb2718f80f3e7fabd47aebb416ecd |
helm | V 3.8.0 | kubeops/helm:1.2.0 | 8d793269e0ccfde37312801e68369ca30db3f6cbe768cc5b5ece5e3ceb8500f3 |
logstash | V 8.4.0 | kubeops/sina-logstash-os:1.2.0 | e2888e76ee2dbe64a137ab8b552fdc7a485c4d9b1db8d1f9fe7a507913f0452b |
opa-gatekeeper | V 3.11.0 | kubeops/opa-gatekeeper:1.2.0 | a45598107e5888b322131194f7a4cb70bb36bff02985326050f0181ac18b00e4 |
opensearch | V 2.9.0 | kubeops/sina-opensearch-os:1.2.0 | c3b3e52902d25c6aa35f6c9780c038b25520977b9492d97e247bb345cc783240 |
opensearch-dashboards | V 2.9.0 | kubeops/sina-opensearch-dashboards:1.2.0 | ced7643389b65b451c1d3ac0c3d778aa9a99e1ab83c35bfc5f2e750174d9ff83 |
prometheus | V 43.2.1 | kubeops/sina-kube-prometheus-stack:1.2.0 | 20d91eb1d565aa55f9d33a1dc7f4ff38256819270b06f60ad3c3a1464eae1f52 |
rook | V 17.2.5 | kubeops/rook-ceph:1.2.0 | 6a8b99938924b89d50537e26f7778bc898668ed5b8f01bbc07475ad6b77293e7 |
cert-manager | V 1.11.0 | kubeops/cert-manager:1.2.0 | f1bb269dac94ebedc46ea4d3c01c9684e4035eace27d9fcb6662321e08cf6208 |
ingress-nginx | V 1.7.0 | kubeops/ingress-nginx:1.2.0 | 1d87f9d606659eebdc286457c7fc35fd4815bf1349d66d9d9ca97cf932d1230c |
kubeops-dashboard | V 1.0.0 | kubeops/kubeops-dashboard:1.2.0 | e084df99ecb8f5ef9e4fcdd022adfc9e0e566b760d4203ed5372a73d75276427 |
keycloak | V 16.0.5 | kubeops/keycloak:1.2.0 | 0a06b689357bb0f4bc905175aaad5dad75b302b27a21cff074abcb00c12bee06 |
clustercreate | V 1.2.2 | kubeops/clustercreate:1.2.2 | 771d031d69ac91c92ee9efcb3d7cefc506415a6d43f1c2475962c3f7431ff79e |
setup | V 1.2.6 | kubeops/setup:1.2.6 | 6492b33cd96ccc76fdc4d430f60c47120d90336d1d887dc279e272f9efe6978e |
3.3 - Glossary
Glossary
This section defines a glossary of common KubeOps terms.
KOSI package
KOSI package is the .kosi file packaged by bundling package.yaml and other essential yaml files and artifacts. This package is ready to install on your Kubernetes Clusters.
KubeOps Hub
KubeOps Hub is a secure repository where published KOSI packages can be stored and shared. You are welcome to contribute to and use the public hub; at the same time, KubeOps provides you a way to access your own private hub.
Installation Address
It is the distinctive address automatically generated for each published package on KubeOps Hub. It is constructed using name of package creator, package name and package version.
You can use this address at the time of package installation on your Kubernetes Cluster.
It is indicated by the install column in KubeOps Hub.
Deployment name
When a package is installed, KOSI creates a deployment name to track that installation.
Alternatively, KOSI also lets you specify the deployment name of your choice during the installation.
A single package may be installed many times into the same cluster and create multiple deployments.
It is indicated by the Deployment column in the list of package deployments.
Tasks
As the name suggests, “Tasks” in package.yaml are one or more sets of instructions to be executed. These are defined by utilizing Plugins.
Plugins
KOSI provides many functions which enable you to define tasks to be executed using your package. These are called Plugins. They are the crucial part of your package development.
LIMAROOT Variable
LIMAROOT is an environment variable for LIMA. It points to the place where LIMA stores information about your clusters. The environment variable LIMAROOT is set by default to /var/lima. However, LIMA also lets you set your own LIMAROOT.
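A minimal sketch of overriding the default, assuming a bash shell and /data/lima as a custom path:
export LIMAROOT=/data/lima # point LIMA at a custom state directory
echo 'export LIMAROOT=/data/lima' >> ~/.bashrc # persist the variable for future sessions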
KUBEOPSROOT Variable
The environment variable KUBEOPSROOT stores the location of the KOSI plugins and the config.yaml. To use the variable, the config.yaml and the plugins have to be copied there manually.
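A sketch of what that could look like, assuming a bash shell, /data/kubeops as a custom path, and that config.yaml and a plugins directory are in the current working directory (all three are assumptions):
export KUBEOPSROOT=/data/kubeops
mkdir -p "$KUBEOPSROOT"
cp config.yaml "$KUBEOPSROOT/" # copy the config.yaml manually
cp -r plugins "$KUBEOPSROOT/" # copy the KOSI plugins manually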
apiVersion
It shows the supported KubeOps tool API version. You do not need to change it unless otherwise specified.
Registry
As the name suggests, it is the location where docker images can be stored. You can either use the default KubeOps registry or specify your own local registry for AirGap environments. You need an internet connection to use the default registry provided by KubeOps.
Maintenance Package
KubeOps provides a package for the supported Kubernetes tools. These packages help you update the Kubernetes tools to the desired versions on your clusters along with the dependencies.
3.4 - FAQs
FAQ - Kubeopsctl
Known Issues
ImagepullBackoffs in Cluster
If you have ImagePullBackOffs in your cluster, e.g. for Prometheus, you can just run the kubeopsctl change registry command again:
e.g. kubeopsctl change registry -r <your registry>
FAQ - KubeOps SINA
Error Messages
There is an error message regarding Remote-Certificate
- Error:
http://hub.kubernative.net/dispatcher?apiversion=3&vlientversion=2.X.0 : 0
- X means per version
- CentOS 7 cannot update the version by itself (
ca-certificates-2021.2.50-72.el7_9.noarch
).- Fix:
yum update ca-certificates -y
oryum update
- Fix:
- Manual download and install of
ca-certificates
RPM:- Download:
curl http://mirror.centos.org/centos/7/updates/x86_64/Packages/ca-certificates-2021.2.50-72.el7_9.noarch.rpm -o ca-certificates-2021.2.50-72.el7_9.noarch.rpm
- Install:
yum install ca-certificates-2021.2.50-72.el7_9.noarch.rpm -y
- Download:
SINA Usage
Can I use SINA with sudo?
- At the moment, SINA has no sudo support.
- Docker and Helm, which are required, need sudo permissions.
I get an error message when I try to search an empty Hub?
- Known bug, will be fixed in a later release.
- Need at least one package in the Hub before you can search.
Package Configuration
In my package.yaml, can I use uppercase characters as a name?
- Currently, only lowercase characters are allowed.
- This will be fixed in a later release.
I have an error message that says “Username or password contain non-Latin characters”?
- Known bug, may occur with incorrect username or password.
- Please ensure both are correct.
In my template.yaml, can I just write a value without an associated key?
- No, a YAML file requires a key-value structure.
Do I have to use the template plugin in my SINA package?
- No, you don’t have to use the template plugin if you don’t want to.
I have an error message that says “reference not set to an instance of an object”?
- Error from our tool for reading YAML files.
- Indicates an attempt to read a value from a non-existent key in a YAML file.
I try to template but the value of a key stays empty.
- Check the correct path of your values.
- If your key contains “-”, the template plugin may not recognize it.
- Removing “-” will solve the issue.
FAQ - KubeOps LIMA
Error Messages
LIMA 0.10.6 Cluster not ready
- You have to apply the calico.yaml in the $LIMAROOT folder:
kubectl apply -f $LIMAROOT/calico.yaml
read header failed: Broken pipe (for LIMA version >= 0.9.0)
- LIMA stops at the line ansible Playbook : COMPLETE : Ansible playbooks complete.
- Search $LIMAROOT/dockerLogs/dockerLogs_latest.txt for Broken pipe. From the line with Broken pipe, check if the following lines exist:
debug3: mux_client_read_packet: read header failed: Broken pipe
debug2: Received exit status from master 1
Shared connection to vli50707 closed.
<vli50707> ESTABLISH SSH CONNECTION FOR USER: demouser
<vli50707> SSH: ansible.cfg set ssh_args: (-C)(-o)(ControlMaster=auto)(-o)
(ControlPersist=60s)
If this is the case, the line ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s in the file /etc/ansible/ansible.cfg inside the currently running LIMA container must be commented out or removed.
Example:
docker container ls
CONTAINER ID   IMAGE                                        COMMAND       CREATED      STATUS      PORTS   NAMES
99cabe7133e5   registry1.kubernative.net/lima/lima:v0.8.0   "/bin/bash"   6 days ago   Up 6 days           lima-v0.8.0
docker exec -it 99cabe7133e5 bash
vi /etc/ansible/ansible.cfg
Change the line ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s to #ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s or delete the line.
I want to delete the cluster master node and rejoin the cluster. When trying to rejoin the node a problem occurs and rejoining fails. What can be done?
To delete the cluster master, we need to set the cluster master to a different master machine first.
- On the admin machine: change the IP address from the current to the new cluster master in:
/var/lima/<name_of_cluster>/clusterStorage.yaml
~/.kube/config
- Delete the node.
- Delete the images to prevent interference: ctr -n k8s.io i rm $(ctr -n k8s.io i ls -q)
- Change the IP on the new cluster master in /etc/kubernetes/admin.conf
- Change the IPs in the config maps:
kubectl edit cm kubeadm-config -n kube-system
kubectl edit cm kube-proxy -n kube-system
kubectl edit cm cluster-info -n kube-public
- Restart the kubelet.
- Rejoin the node.
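A rough sketch of the first and third steps on the command line, assuming example as the cluster name, 10.2.1.11 as the old and 10.2.1.12 as the new cluster master IP (all assumptions; the config map edits remain interactive):
# swap the cluster master IP in LIMA's cluster storage and your kubeconfig
sed -i 's/10.2.1.11/10.2.1.12/g' /var/lima/example/clusterStorage.yaml
sed -i 's/10.2.1.11/10.2.1.12/g' ~/.kube/config
# remove cached images so they cannot interfere with the rejoin
ctr -n k8s.io i rm $(ctr -n k8s.io i ls -q)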
Using LIMA on RHEL8 fails to download metadata for repo “rhel-8-for-x86_64-baseos-rpms”. What should I do?
This is a common problem which happens now and then, and the real source of the error is difficult to identify. Nevertheless, the workaround is quick and easy: clean up the current repo data, refresh the subscription-manager, and update the whole operating system. This can be done with the following commands:
dnf clean all
rm -frv /var/cache/dnf
subscription-manager refresh
dnf update -y
How does LIMA handle SELinux?
SELinux is temporarily deactivated during the execution of LIMA tasks. After the execution is finished, SELinux is automatically reactivated, so you are not required to manually re-enable SELinux every time you work with LIMA.
My pods are stuck: CONFIG-UPDATE 0/1 CONTAINERCREATING
- These pods are responsible for updating the loadbalancer; you can update it manually and delete the pod.
- You can try redeploying the daemonset to the kube-system namespace.
I can not upgrade past KUBERNETES 1.21.X
- Please make sure you only have the latest dependency packages for your environment in your /packages folder.
- It could be related to this Kubernetes bug:
https://v1-22.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
- Try upgrading past 1.21.x manually.
My master cannot join, it fails when creating /ROOT/.KUBE
Try the following commands on the master:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Some nodes are missing the loadbalancer
- Check if the loadbalancer staticPod file can be found in the manifest folder of the node.
- If it isn't there, please copy it from another node.
Some nodes didn’t upgrade. What to do now?
- Retry upgrading your cluster.
- If LIMA thinks you are already on the target version, edit the stored data of your cluster at $LIMAROOT/myClusterName/clusterStorage.yaml. Set the key kubernetesVersion to the lowest Kubernetes version present on a node in your cluster.
Could not detect a supported package manager from the followings list: [‘PORTAGE’, ‘RPM’, ‘PKG’, ‘APT’], or the required PYTHON library is not installed. Check warnings for details.
- Check if you have a package manager installed.
- You have to install python3 with yum install python3 and then create a symlink from python to python3 with update-alternatives --config python.
Aborting, target uses SELINUX but PYTHON bindings (LIBSELINUX-PYTHON) aren’t installed!
You have to install libselinux-python on your cluster machine so you can install a firewall via LIMA.
FAQ - KubeOps PIA
The httpd service is taking too long to terminate. How can I force the shutdown?
- Use the following command to force the shutdown of the httpd service:
kubectl delete deployment pia-httpd --grace-period=0 --force
- Most deployments have a networking service like our httpd does.
Delete the networking service with the command:
kubectl delete svc pia-httpd-service --grace-period=0 --force
I get the error that some nodes are not ‘Ready’. How do I fix the problem?
- Use the kubectl get nodes command to find out which node is not ready.
- To identify the problem, get access to the shell of the non-ready node. Use systemctl status kubelet to get status information about the state of the kubelet.
- The most common cause of this error is that the kubelet has a problem automatically identifying the node. In this case, the kubelet must be restarted manually on the non-ready machine. This is done with systemctl enable kubelet and systemctl start kubelet.
- If the issue persists, the reason behind the error can be evaluated by your cluster administrators.
FAQ KubeOps PLATFORM
Support of S3 storage configuration doesn’t work
At the moment, the SINA package rook-ceph:1.1.2 (used in KubeOps 1.1.3) employs a Ceph version with a known bug that prevents the proper setup and use of object storage via the S3 API. If you require the functionality provided by this storage class, we suggest considering the use of KubeOps 1.0.7. This particular version does not encounter the aforementioned issue and provides comprehensive support for S3 storage solutions.
Change encoding to UTF-8
Please make sure that your uservalues.yaml is using UTF-8 encoding.
If you get issues with encoding, you can convert your file to UTF-8 with the following commands. Note that iconv cannot safely write to the same file it reads from, so convert into a temporary file first:
iconv -f ISO-8859-1 -t UTF-8 uservalues.yaml > uservalues_utf8.yaml
mv uservalues_utf8.yaml uservalues.yaml
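If you are unsure about the current encoding, you can check it first; file is a standard Linux tool, and the exact output depends on your system:
file -i uservalues.yaml # prints e.g. "text/plain; charset=iso-8859-1"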
How to update Calico Multus?
- Get the podSubnet located in clusterStorage.yaml ($LIMAROOT/<clustername>/clusterStorage.yaml).
- Create a values.yaml with the key podSubnet and your podSubnet as the value.
Example:
podSubnet: 192.168.0.0/17
- Get the deployment name of the current calicomultus installation with the sina list command.
Example:
| Deployment | Package | PublicHub | Hub |
|-------------|--------------------------------------|--------------|----------|
| 39e6da | local/calicomultus:0.0.1 | | local |
- Update the deployment with sina update lima/calicomultus:0.0.2 --dname <yourdeploymentname> --hub=public -f values.yaml
--dname: important parameter, mandatory for the update command.
-f values.yaml: important so that the right podSubnet is used.
Known issue:
error: resource mapping not found for name: "calico-kube-controllers" namespace: "" from "calico.yaml": no matches for kind "PodDisruptionBudget" in version "policy/v1beta1"
ensure CRDs are installed first
Create Cluster-Package with firewalld:
If you want to create a cluster with firewalld and the kubeops/clustercreate:1.0. package, you have to manually pull the firewalld maintenance package for your OS first, after executing the kubeops/setup:1.0.1 package.
Opensearch pods do not start:
If the following message appears in the OpenSearch pod logs, the vm.max_map_count kernel setting is too low:
ERROR: [1] bootstrap checks failed
[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
On all control-plane and worker nodes, the line vm.max_map_count=262144 must be added to the file /etc/sysctl.conf.
After that, the following command must be executed in the console on all control-plane and worker nodes: sysctl -p
Finally, the OpenSearch pods must be restarted.
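A minimal sketch of that fix, run as root on each control-plane and worker node:
echo 'vm.max_map_count=262144' >> /etc/sysctl.conf # persist the setting across reboots
sysctl -p # apply the setting immediately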
FAQ - KubeOps KUBEOPSCTL
Known issue:
Upgrading the Kubernetes version within the cluster is not possible with the current Beta3 release; it will be fixed in the next release. HA capability is only reached after 12 hours. For earlier HA capability, manually move the file /etc/kubernetes/manifest/haproxy.yaml out of the folder and back in again.
4 -
4.1 - About-Lima
What is LIMA?
LIMA is a Cluster Lifecycle Manager application which helps you create and manage a high-availability Kubernetes cluster.
Why use LIMA?
Existing Kubernetes environments require management at the individual cluster level. Many tasks, like installing software and updating it on every single cluster, need to be repeated from time to time. Hence, manual management of Kubernetes clusters becomes complex, time-consuming, and error-prone.
The main goal behind LIMA is to make Kubernetes secure and easy to use for everyone. LIMA gives you the possibility to automate thousands of Kubernetes clusters.
Highlights
- Cluster can be managed from a centralized management node.
- LIMA can be used in the Air-Gap environment.
- LIMA reduces the time and effort of your IT team by automating multiple clusters.
- LIMA supports various Container runtimes.
- LIMA also supports various Plugin networks.
What does LIMA offer?
LIMA enables you to set up and manage your cluster from a centralized management node without the need to access a single cluster node.
Cluster Lifecycle Management of a Kubernetes cluster with LIMA includes:
- Creating Cluster.
- Adding Nodes.
- Deleting Nodes.
- Upgrading the Kubernetes API version.
- Updating non-Kubernetes software in the cluster.
- Renew Certificates.
Click here to download and get started with LIMA now.
4.2 - Documentation-Lima
KubeOps-LIMA
This guide states all the LIMA features and explains how to use LIMA step by step with examples.
Before you begin, please make sure that
- you have installed the required Maintenance Packages
- required software is up and running
LIMA features
Before we take a look at LIMA features, please note
In order to use these features
- You need a healthy cluster created with LIMA.
- To get this cluster running, it's necessary to apply a plugin network to your cluster.
Plugin-network support
You can apply a plugin network to your cluster using LIMA in two ways:
- You can install a plugin network during the cluster creation process by editing the clusterconfig (see clusterconfig keys for more information), OR
- You can install it after you have created a cluster using LIMA (see lima get/install overlay).
Container runtime support
LIMA enables you to select container runtime for your cluster. You can choose between containerd and crio as container runtime for your cluster.
By default crio is the runtime used in the cluster. You can select your desired container runtime in the clusterconfig before creating a cluster. See clusterconfig keys to know how to select the container runtime.
Additionally you can even change the container runtime after cluster creation (see clusterconfig keys for more information)
Support of updating non-kubernetes software
Note: If not already applied, updating your cluster with a loadbalancer is required to use LIMA properly.
To get an overview on how and which components you can update see lima update for more information.
Cluster upgrade support
LIMA makes it possible to upgrade your cluster from an outdated kubernetes version to any recent version. See lima upgrade for more information.
Certificate renewal support
If you want to renew the certificates in your cluster with one command, you should use lima renew cert
How to Configure Cluster/Node using yaml file
In order to create a cluster or add a node to your cluster you need to provide detailed configuration in the form of a YAML file.
- Create a yaml file with your desired name. We recommend using self-explanatory names for the ease of use.
- Use syntax provided in the examples below and update the specifications according to your needs.
Note: You can reuse the same file multiple times just by changing its content, meaning the file name can remain the same but you must change the specifications in the file.
This YAML file must always have apiVersion: lima/nodeconfig/<version>.
Configure cluster using YAML
You have to create a yaml file which should contain all the essential configuration specifications of the cluster in order to create a cluster. This specific yaml file can only create a single cluster.
Below are the examples to show the structure of the file createCluster.yaml.
Please refer to Cluster Config API Objects under the Attachments section for detailed explanation of syntax.
Mandatory YAML syntax
apiVersion: lima/clusterconfig/v1alpha2
spec:
  clusterName: example
  kubernetesVersion: 1.22.4
  apiEndpoint: 10.2.1.11:6443
  masterHost: mydomain.net OR 10.2.1.11
Complete YAML Syntax
apiVersion: lima/clusterconfig/v1alpha2
spec:
  clusterName: example
  masterUser: root
  masterPassword: toor
  masterHost: mydomain.net OR 10.2.1.11
  kubernetesVersion: 1.21.5
  registry: registry1.kubernative.net/lima
  useInsecureRegistry: false
  ignoreFirewallError: false
  firewall: firewalld
  apiEndpoint: 10.2.1.11:6443
  serviceSubnet: 192.168.128.0/20
  podSubnet: 192.168.144.0/20
  debug: true
  logLevel: v
  systemCpu: 100m
  systemMemory: 100Mi
  sudo: false
  containerRuntime: crio
  pluginNetwork:
    type: weave
    parameters:
      weavePassword: re4llyS7ron6P4ssw0rd
  auditLog: false
  serial: 1
  seLinuxSupport: true
Note: The type of yaml file above is only used for the initial cluster set up. To add master nodes or worker nodes use the addNode.yaml file shown below.
Please use alphanumeric characters only for the weave password.
Note: You can name the YAML files as you want, but it is recommended to use self-explanatory names. In this documentation, the file createCluster.yaml is only used for an initial cluster set up. It is important to know that you can not set up another cluster with the exact same YAML file.
To learn more about the YAML syntax and the specification of the API Objects please see the dedicated section under Attachments.
Configure node using YAML
In order to add a master node, you have to create a YAML file with all the essential configuration specifications for your node.
Below is an example to show the structure of the file addNode.yaml. The YAML file contains the specification for one master node.
Please refer to the Node Config API Objects under the Attachments section for detailed explanation of syntax.
apiVersion: lima/nodeconfig/v1alpha1
clusterName: example
spec:
  masters:
    - host: 10.2.1.11
      user: root
      password: password
      systemCpu: 200m
      systemMemory: 200Mi
  workers: {}
Note: It is also possible to add multiple nodes at once to your cluster. Refer to the Add multiple nodes to a single master cluster at once section for more information.
To learn more about the YAML syntax and the specification of the API Objects please see the dedicated section under Attachments.
How to use certificates
Instead of using a password you can use certificates.
https://www.digitalocean.com/community/tutorials/how-to-set-up-ssh-keys-on-centos7
The method is the same for SLES, openSUSE and RHEL.
How to use LIMA
Following detailed examples indicate how you can use LIMA.
Note: The
create
command is both used for creating a cluster and adding nodes to the cluster. To create a cluster, usecreate cluster
. Similarly, to add nodes, usecreate nodes
.
Set up a single node cluster for testing
Important: This node is only suitable as an example installation or for testing.
- Create a YAML file which contains the cluster configuration. We are now using the createCluster.yaml file from above.
- Run the create cluster command on the admin node to create a cluster with one node.
lima create cluster -f createCluster.yaml
Note: Now you have set up a regular single master cluster.
To use this master node also as a worker node for testing production workloads you have to remove the taint of the master node.
Now remove the taint:
kubectl taint nodes --all node-role.kubernetes.io/master-
To learn more about taints please view the ‘Attachments’ section under taints and tolerations.
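To verify that the taint is gone, a quick check (a sketch; <master-node> is a placeholder for your node name):
kubectl describe node <master-node> | grep Taints # should report <none>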
Set up a single master cluster
Note: This node is not suitable for production workloads.
Please add another worker node as shown below for production workloads.
First create a cluster config YAML file. We are now using the createCluster.yaml file from above.
Run the create cluster command on the admin node to create a cluster with one cluster master.
lima create cluster -f createCluster.yaml
Add node to a single master cluster
Note: Only worker nodes that are added with
addNode.yaml
are suitable for production workloads.
Now create a config YAML file. We are now using the addNode.yaml file with the specification for a master node.
Run the create nodes command on the admin node to add the new master node to your cluster.
lima create nodes -f addNode.yaml
Add multiple nodes to a single master cluster at once
It is possible to add multiple nodes at once to your cluster. You do that by listing your desired nodes in the spec section of the addNode.yaml file.
We are now creating a new file addNode1.yaml
with the desired nodes.
Keep in mind that there are two types of nodes: master nodes and worker nodes. Put each desired node in its specific spec category.
Note: You can reuse the previous YAML file addNode.yaml by changing the content of the file. For this example we are using a new file. The file name can be the same every time, but the content must contain different nodes in order to work.
apiVersion: lima/nodeconfig/v1alpha1
clusterName: example
spec:
  masters:
    - host: 10.2.1.7
      user: root
      password: password
      systemCpu: 300m
      systemMemory: 300Mi
    - host: 10.2.1.13
      user: root
      password: password
    - host: master1.kubernative.net
  workers:
    - host: 10.2.1.12
      user: root
      password: password
      systemCpu: 200m
      systemMemory: 200Mi
    - host: 10.2.1.9
      user: root
      password: password
The YAML file now contains the configuration for 3 new master and 2 new worker nodes.
Note: For more information about the YAML syntax and the specification of the API Objects please see the dedicated section under Attachments.
Now run the create nodes command on the admin node to add the nodes to your cluster.
lima create nodes -f addNode1.yaml
Your cluster now has a total of:
- 5 master nodes (1 cluster master from the initial set up, 1 previously added master, 3 newly added masters)
- 2 worker nodes (the 2 newly added ones)
Delete nodes from the kubernetes cluster
If you want to remove a node from your cluster you can run the delete command on the admin node.
lima delete -n <node which should be deleted> <name of your cluster>
So now we delete the worker node 10.2.1.9 from our existing Kubernetes cluster named example with the following command:
lima delete -n 10.2.1.9 example
Our cluster now has 5 master nodes and 1 worker node.
Upgrade your kubernetes cluster
Serial
LIMA offers two options with the upgrade command. The first option is the upgrade plan command, which shows the supported Kubernetes versions.
lima upgrade plan <name of your cluster>
Note: Currently LIMA supports the kubernetes versions: supported kubernetes versions
The second option is to upgrade your cluster. Run the upgrade apply command on the admin node to upgrade your cluster.
lima upgrade apply -v <version> <name of your cluster>
Note: It is not possible to skip a Kubernetes version. If you want to upgrade your cluster from version 1.20.x to version 1.22.x, you first need to upgrade to version 1.21.x.
Note: Downgrading is not supported.
Back to our already existing cluster: we are upgrading from version 1.21.5 to version 1.22.5 with the following command:
lima upgrade apply -v 1.22.5 example
Show the version of LIMA
Run the version command on the admin node to check the current version of LIMA.
lima version
Show available maintenance packages for LIMA
Run the get maintenance command on the admin node to check the currently available maintenance packages.
lima get maintenance
An example output of the command could be as follows
Software | Version | Status | Softwarepackage |
---|---|---|---|
kubernetes | 1.22.4 | downloaded | lima/kubernetes:1.22.4 |
Software:
Software which the maintenance package belongs to
Version:
Affected software version
Status:
Status | Description |
---|---|
not found | Package not found |
downloaded | Package locally and remotely available |
only local | Package locally available |
available | Package remotely available |
unknown | Unknown package |
Softwarepackage:
Name of the package on our Software Hub
Pull available maintenance packages for LIMA
Note: Make sure LIMAROOT is set correctly! LIMAROOT must be set to /var/lima or your own specified path!
Note: Before pulling a package, use lima get maintenance to get an overview of all available maintenance packages.
Run the lima pull maintenance command on the admin node to pull remotely available maintenance packages.
lima pull maintenance <package>
It is possible to pull more than 1 package with one pull invocation. For example:
lima pull maintenance lima/kubernetes:1.23.5 lima/dockerEL7:18.09.1
After pulling, the package should be available in $LIMAROOT/packages.
Get available plugin networks
Run the lima get overlay command on the admin node to get an overview of all available plugin networks.
lima get overlay
Additionally you can get all configurable parameters for a certain plugin network with the following command:
lima get overlay <plugin-network package>
For example:
lima get overlay lima/weavenetinstaller:0.0.4
…shows an output like:
apiVersion:
  description: Show version of validation File
  pattern: ^lima/weave/v1alpha1$
  type: string
parameters:
  properties:
    weavePassword:
      description: "Weave needs the password to encrypt its communication if you dont offer a weavePassword a password will be generated randomly for you. Ensure you use a secure Password if you set a password by yourself".
      pattern: ^([a-zA-Z0-9_]){9,128}$|^$
      type: string
Install available plugin networks
Note: To get an overview of available overlay networks use lima get overlay
Run the lima install overlay command on the admin node to deploy a plugin network in your running cluster. With -c or --cni you can specify which overlay network you want to install.
lima install overlay <clusterName> -c <plugin-network package>
If the chosen plugin network allows configurable parameters you can pass them over with -f or --file:
lima install overlay <clusterName> -c <plugin-network package> -f <parameters.yaml>
For example:
lima install overlay testcluster -c lima/weavenetinstaller:0.0.2 -f parameters.yaml
…with parameters.yaml content:
apiVersion: lima/weave/v1alpha1
parameters:
  weavePassword: "superSecurePassword"
Setting the verbosity of LIMA
Note: This section does not influence the logging produced by LIMA's Ansible component.
The logging of LIMA is sorted into different groups that can log at different log-levels.
(descending = increased verbosity)
LIMA log level | Scope |
---|---|
ERROR | program failure (can not be muted) |
WARNING | potential program failure |
INFO | program status |
DEBUG1 | generalized debug messages |
DEBUG2 | detailed debug messages |
DEBUG3 | variable dump |
Note: Group names are case sensitive, log-levels are not.
LIMA logging groups | default log-level | scope |
---|---|---|
default | INFO | parent of all other groups |
cmd | INFO | command logic |
container | INFO | container management |
messaging | INFO | logging |
storage | INFO | file management |
util | INFO | helper functions |
verify | INFO | validation |
All LIMA commands accept the lima -l <group>:<log-level>,<group>:<log-level>,... <command> flag for setting the log-level of a group. These settings are not permanent and revert to INFO if not set explicitly. The default group overwrites all other groups.
Example: setting all groups
lima -l default:Warning version
Example: specific groups
lima -l cmd:Error,container:DEBUG3 version
Renew all certificates from nodes
LIMA can renew all certificates for a specific cluster on all control-plane nodes.
lima renew cert <clusterName>
Note: Renewing certificates can take several minutes, because all affected certificate services are restarted.
Note: This command renews all certificates on the existing control-plane; there is no option to renew single certificates.
An example to use renew cert on your cluster “example”:
lima renew cert example
Update non-kubernetes software of your kubernetes cluster
LIMA can perform a cluster-wide yum update with the update command for all its dependencies.
lima update <clustername>
Note: Updating may take several minutes depending on how long ago the last update has been performed.
Note: This command updates the entire cluster; there is no option to single out a node.
Back to our already existing cluster: we update our cluster with the following command:
lima update example
Flags
-s, --serial int
As seen in other commands, the serial flag allows you to set the batch size of a given process.
-b, --loadbalancer
The loadbalancer flag alters the update command to additionally update the internal loadbalancer on all your nodes.
Using this flag is only necessary if changes to the loadbalancer occur. This will happen when the workload in your cluster shifts. The loadbalancer will change, for example, if you add or remove nodes from your cluster.
-r, --registry
With the -r parameter you can push your images, which are in $LIMAROOT/images, into the local registry on the master nodes. The images are pulled from the registry which is stated in the createCluster.yaml.
-t, --template string
The template flag only works in conjunction with the loadbalancer flag to provide a path to a custom loadbalancer configuration.
The yaml file below represents the standard HAProxy configuration LIMA uses. The configuration will be templated using the Jinja2 syntax.
masterIPs is a variable that provides a list of all master node addresses in your cluster.
Note: The Template Flag can also be used at cluster creation
lima create cluster -f <create.yaml> -t <template.yaml>
global
    daemon
    maxconn 1024
    stats socket /var/run/haproxy.sock mode 755 expose-fd listeners level user
defaults
    timeout connect 10s
    timeout client 30s
    timeout server 30s
frontend localhost
    bind *:7443
    option tcplog
    mode tcp
    default_backend nodes
backend nodes
    mode tcp
    balance roundrobin
    option ssl-hello-chk
{% for ip in masterIPs %}
    server webserver{{ ip }} {{ ip }}:{{ apiEndpointPort }} check inter 1s downinter 30s fall 1 rise 4
{% endfor %}
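For example, updating the loadbalancer of the cluster example with a custom template could look like this (a sketch; template.yaml is assumed to be your adapted copy of the configuration above):
lima update example -b -t template.yaml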
Change the settings of your cluster storage
LIMA allows you to directly edit one or more values of your clusterStorage.yaml, which can be found in $LIMAROOT/<clustername>/.
The following flags are config values that can be changed by LIMA
--ansible_become_pass example: 'password123'
--apiEndpointIp example: '10.2.1.11'
--apiEndpointPort example: '6443'
-d, --debug example: 'true'
-f, --file example: 'clusterStorage.yaml'
--firewall example: 'iptables'
-i, --ignoreFirewallError example: 'false'
-l, --logLevel example: 'vvvvv'
--masterHost example: '10.2.1.11'
--masterPassword example: 'password123'
--masterUser example: 'root'
--registry example: 'registry1.kubernative.net/lima'
--sudo example: 'false'
--systemCpu example: '200m'
--systemMemory example: '200Mi'
-u, --useInsecureRegistry example: 'true'
--weavePassword example: 'password1_'
WARNING: Be sure that you know what you are doing when you change these settings, as there is always the danger of breaking your cluster when you alter configuration files.
Note: It is possible to add multiple flags to change several values at once.
Note: This command only changes the cluster storage YAML. It does not apply any changes in your cluster. For example, overwriting your kubernetes version in the config will not install a higher kubernetes version. To upgrade kubernetes, see ‘Upgrade your kubernetes cluster’.
For our example we can set the system resource parameters. These values will be used while joining new masters and workers if no specific value is set in the used nodeConfig file.
To apply the new config YAML, we run the command:
lima change config --systemCpu '200m' --systemMemory '200Mi' example
Change the container runtime of your cluster
Serial
LIMA allows you to switch between containerd and crio as your container runtime.
lima change runtime -r <runtime> <cluster name>
Note: After changing your runtime, it can take several minutes before all pods are running again. In general, we recommend restarting your cluster after changing runtimes.
An example of how to change from CRI-O to containerd looks like this:
lima change runtime -r containerd example
Change the usage of audit logging in your cluster
Configure audit logs for your cluster
lima change auditlog -a <true/false> <cluster name>
Note: This command is experimental and can fail. Please check the results after execution.
An example of how to turn audit logging off:
lima change auditlog -a false example
Serial Flag
With the serial flag, you can run a command simultaneously on a given number of nodes.
The commands supporting the serial flag are marked with the tag serial.
Example:
lima change runtime -r crio example -s 2
Now the change runtime command will run in parallel on two nodes at a time.
Attachments
Changes by Lima
Below you can see the Lima process and its changes to the system.
The processes and changes are listed for each node separately.
Installing the Lima RPM
During the installation of the Lima RPM the following changes are made:
- set setenforce to 0 (temporarily)
- set the default for the environment variable $LIMAROOT to /var/lima
- activate ipv4 ip forwarding in the /etc/sysctl.conf file
On master node when setting up a cluster
Note: Shown are only the changes from using the YAML file createCluster.yaml.
1. sshkeyscan
On all hosts listed in the inventory.yaml file.
2. firewallCheck
Collect installed services and packages. No changed files.
3. firewallInstall
Installs iptables rpm if necessary and starts firewalld/iptables or none.
4. openPortsCluster (Skipped if ignoreFirewallError==true and no firewall installed/recognised/mentioned)
4.1 Opens master_ports: 2379-2380/tcp, 6443/tcp, 6784/tcp, 9153/tcp, 10250/tcp, 10251/tcp, 10252/tcp
and depending on the chosen pluginNetwork:
weave_net_ports: 6783/tcp, 6783-6784/udp
or
calico_ports: 179/tcp.
4.2 Reloads the firewall if firewalld is used / persists iptables firewall changes through reboot
5. systemSelinux
5.1 Set setenforce to 0.
Setenforce persists through reboot
6. systemSwap
6.1 Disable swap.
System reboots after disabling swap.
7. podman
7.1 Install podman and dependencies.
7.2 Enable and start podman service.
8. containerdInsecureRegistry
8.1 Check if /etc/containerd/config.toml exists.
8.2 If not: Create it.
8.3 Append the insecure registries.
8.4 Restart containerd.
9. systemNetBridge
9.1 modprobe br_netfilter && systemctl daemon-reload.
9.2 Enable ipv4 forwarding.
9.3 Enable netfilter on bridge.
10. kubeadmKubeletKubectl
10.1 Install all required rpms.
10.2 Enable and start service kubelet.
11. kubernetesCluster
11.1 Create kubeadmconfig from template.
11.2 Create k8s user config directory.
11.3 Copy admin config to /root/.kube/config.
On clustermaster node when node is added to the cluster
Note: Shown are only the changes when the YAML files addNode.yaml and addNode1.yaml are used when adding nodes.
1. kubernetesCreateToken
Clustermaster creates the join-token with the following command:
kubeadm token create --print-join-command
2. getCert
2.1 Upload certs and get certificate key.
2.2 Writes the output from kubeadm token create --print-join-command --control-plane --certificate-key <cert_key> into a variable.
On master nodes which will be joined to the cluster
1. sshkeyscan
On all hosts listed in the inventory.yaml file.
2. firewallCheck
Collect installed services and packages. No changed files.
3. firewallInstall
Installs iptables rpm if necessary and starts firewalld/iptables or none.
4. openPortsMaster (Skipped if ignoreFirewallError==true and no firewall installed/recognised/mentioned)
4.1 Opens master_ports: 2379-2380/tcp, 6443/tcp, 6784/tcp, 9153/tcp, 10250/tcp, 10251/tcp, 10252/tcp
and depending on the chosen pluginNetwork:
weave_net_ports: 6783/tcp, 6783-6784/udp
or
calico_ports: 179/tcp.
4.2 Reloads the firewall if firewalld is used / persists iptables firewall changes through reboot
5. systemSelinux
5.1 Set setenforce to 0.
Setenforce persists through reboot.
6. systemSwap
6.1 Disable swap.
System reboots after disabling swap.
7. containerd
7.1 Install containerd and dependencies.
7.2 Enable and start containerd service.
8. containerdInsecureRegistry
8.1 Check if /etc/containerd/config.toml exists.
8.2 If not: Create it.
8.3 Append the insecure registries.
8.4 Restart containerd.
9. systemNetBridge
9.1 modprobe br_netfilter && systemctl daemon-reload.
9.2 Enable ipv4 forwarding.
9.3 Enable netfilter on bridge.
10. kubeadmKubeletKubectl
10.1 Install all required rpms.
10.2 Enable and start service kubelet.
11. kubernetesJoinNode
Master node joins with token generated by cluster.
On Kubernetes workers
1. sshkeyscan
On all hosts listed in the inventory.yaml file.
2. firewallCheck
Collect installed services and packages. No changed files.
3. firewallInstall
Installs iptables rpm if necessary and starts firewalld/iptables or none.
4. openPortsWorker (Skipped if ignoreFirewallError==true and no firewall installed/recognised/mentioned)
4.1 Opens worker_ports: 10250/tcp, 30000-32767/tcp
and depending on the chosen pluginNetwork:
weave_net_ports: 6783/tcp, 6783-6784/udp
or
calico_ports: 179/tcp.
4.2 Reloads the firewall if firewalld is used / persists iptables firewall changes through reboot
5. systemSelinux
5.1 Set setenforce to 0.
Setenforce persists through reboot.
6. systemSwap
6.1 Disable swap.
System reboots after disabling swap.
7. containerd
7.1 Install containerd and dependencies.
7.2 Enable and start containerd service.
8. containerdInsecureRegistry
8.1 Check if /etc/containerd/config.toml exists.
8.2 If not: Create it.
8.3 Append the insecure registries.
8.4 Restart containerd.
9. systemNetBridge
9.1 modprobe br_netfilter && systemctl daemon-reload.
9.2 Enable ipv4 forwarding.
9.3 Enable netfilter on bridge.
10. kubeadmKubeletKubectl
10.1 Install all required rpms.
10.2 Enable and start service kubelet.
11. kubernetesJoinNode
Worker node joins with token generated by cluster.
Product environment considerations
Recommended architecture
You need:
- DNS server (optional)
- Persistent storage (NFS) (optional)
- Internet access for Kubernative registry
- Your own local registry on an AirGap environment
Failure tolerance
Having multiple master nodes ensures that services remain available should master node(s) fail. In order to guarantee availability of master nodes, they should be deployed with odd numbers (e.g. 3,5,7,9 etc.)
An odd-size cluster tolerates the same number of failures as an even-size cluster but with fewer nodes. The difference can be seen by comparing even and odd sized clusters:
Cluster Size | Majority | Failure Tolerance |
---|---|---|
1 | 1 | 0 |
2 | 2 | 0 |
3 | 2 | 1 |
4 | 3 | 1 |
5 | 3 | 2 |
6 | 4 | 2 |
7 | 4 | 3 |
8 | 5 | 3 |
9 | 5 | 4 |
Adding a member to bring the size of cluster up to an even number doesn’t buy additional fault tolerance. Likewise, during a network partition, an odd number of members guarantees that there will always be a majority partition that can continue to operate and be the source of truth when the partition ends.
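In other words, for a cluster of size n the majority is floor(n/2) + 1 and the failure tolerance is n minus the majority, i.e. ceil(n/2) - 1: going from 3 to 4 nodes raises the majority from 2 to 3 but leaves the failure tolerance at 1.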
Installation scenarios
Note: Please consider
How to install Lima
for the instructions.
1. Install from registry
2. Install on AirGap environment
Kubernetes Networking
Pod- and service subnet
In Kubernetes, every pod has its own routable IP address. Kubernetes networking, through the network plug-in that is required to be installed (e.g. Weave), takes care of routing all requests internally between hosts to the appropriate pod. External access is provided through a service or load balancer which Kubernetes routes to the appropriate pod.
The pod subnet is set by the user in the createCluster.yaml file. Every pod has its own IP address, and the pod subnet range has to be big enough to fit all of the pods. Due to the design of Kubernetes, where pods will change or nodes reboot, services were built into Kubernetes to address the problem of changing IP addresses.
In Kubernetes, a Service is an abstraction which defines a logical set of Pods and a policy by which to access them (sometimes this pattern is called a micro-service).
A Kubernetes service manages the state of a set of pods. The service is an abstraction over pods which assigns a virtual IP address to a set of pod IP addresses.
Note: The podsubnet and the servicesubnet must have different IP ranges.
Persistent Storage
Please see the link below to learn more about persistent storage.
Cluster Storage
The state of each cluster is saved below the path in the environment variable LIMAROOT and represents which nodes are joined with the cluster master.
In LIMAROOT the clusterName is used as a folder name, which includes a clusterStorage.yaml. The clusterStorage.yaml holds all the information about the cluster.
The structure of the clusterStorage.yaml file looks like this:
apiVersion: lima/storage/v1alpha2
config:
  clusterName: example
  kubernetesVersion: 1.23.5
  registry: registry1.kubernative.net/lima
  useInsecureRegistry: true
  apiEndpoint: 10.2.1.11:6443
  serviceSubnet: 192.168.128.0/20
  podSubnet: 192.168.144.0/20
  clusterMaster: master1.kubernative.net OR 10.2.1.11
nodes:
  masters:
    master1.kubernative.net: {}
    10.2.1.50: {}
    10.2.1.51: {}
  workers:
    worker1.kubernative.net: {}
    worker2.kubernative.net: {}
    10.2.1.54: {}
Taints and tolerations
Taints allow a node to reject a set of pods.
Tolerations are applied to pods, and allow (but do not require) the pods to schedule onto nodes with matching taints.
Taints and tolerations work together to ensure that pods are not scheduled onto inappropriate nodes. One or more taints are applied to a node; this marks that the node should not accept any pods that do not tolerate the taints.
Example:
The master node has a taint that prevents it from being used for production workloads.
So first remove the taint in order to use the master for production workloads.
Important: Removing the taint from a master node is only recommended for testing!
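As an illustration, instead of removing the taint you could let a specific pod tolerate it; a minimal sketch of such a toleration in a pod spec (assuming the taint key node-role.kubernetes.io/master from the example above):
tolerations:
  - key: "node-role.kubernetes.io/master"
    operator: "Exists"
    effect: "NoSchedule"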
Explanation of YAML file syntax
clusterconfig API Objects
Structure of the clusterconfig API Object with version lima/clusterconfig/v1alpha2
.
An example to show the structure of the file createCluster.yaml:
apiVersion: lima/clusterconfig/v1alpha2
spec:
  clusterName: example
  kubernetesVersion: 1.23.5
  registry: registry1.kubernative.net/lima
  useInsecureRegistry: true
  apiEndpoint: 10.2.1.11:6443
  serviceSubnet: 192.168.128.0/20
  podSubnet: 192.168.144.0/20
  masterHost: worker1.kubernative.net OR 10.2.1.11
apiVersion (Mandatory):
Version string which defines the format of the API Object
The only currently supported version is:
apiVersion: lima/clusterconfig/v1alpha2
clusterName (Mandatory):
Name for the cluster used to address the cluster.
Should consist of only uppercase/lowercase letters, numbers and underscores.
Example:
clusterName: example
kubernetesVersion (Mandatory):
Defines the Kubernetes version to be installed. This value must follow the Kubernetes version convention: the valid format is '#.#.#', where each '#' is a number of an available version.
Example:
kubernetesVersion: 1.23.5
registry (Optional):
Address of the registry where the kubernetes images are stored.
This value has to be a valid IP address or valid DNS name.
Example:
registry: 10.2.1.12
Default:
registry: registry1.kubernative.net/lima
useInsecureRegistry (Mandatory):
Value defines if the used registry is a secure or insecure registry. This value can be either true or false; only lowercase letters are allowed for 'true' and 'false'.
Example:
useInsecureRegistry: true
debug (Optional):
Value defines if the user wants to see output from the pipe or not. This value can be either true or false; only lowercase letters are allowed for 'true' and 'false'.
Example:
debug: true
Default:
debug: false
ignoreFirewallError (Optional):
Value defines if the firewall error is ignored or not. This value can be either true or false; only lowercase letters are allowed for 'true' and 'false'.
Example:
ignoreFirewallError: true
Default:
ignoreFirewallError: false
firewall (Optional):
Value defines which firewall is used. This value can be either iptables or firewalld. Only lowercase letters allowed for ‘iptables’ and ‘firewalld’.
Example:
firewall: iptables
apiEndpoint (Mandatory):
The IP address and port where the apiserver can be reached. This value consists of an IP address followed by a colon and port. Usually the IP address of a clustermaster or Load Balancer is used.
Example:
apiEndpoint: 10.2.1.11:6443
serviceSubnet (Optional):
Defines the subnet to be used for the services within kubernetes. This subnet has to be given in CIDR format, but it is not checked for validity.
Example:
serviceSubnet: 1.1.1.0/24
Default:
serviceSubnet: 192.168.128.0/20
containerRuntime (Optional):
Sets the container runtime environment of the cluster. The valid options are crio and containerd.
Example:
containerRuntime: containerd
Default:
containerRuntime: crio
podSubnet (Optional):
Defines the subnet used by the pods within kubernetes. This subnet has to be given in CIDR format.
Example:
podSubnet: 1.1.2.0/24
Default:
podSubnet: 192.168.144.0/20
Note: The podsubnet and the servicesubnet must have different IP ranges.
systemCpu (Optional):
The CPU value is used to keep resources free for the operating system. If this value is left empty, 100m will be used. The range of values is from 0.001 to 0.9 or 1 m to 500,000 m.
Example:
systemCpu: 500m
Default:
systemCpu: 100m
systemMemory (Optional):
The memory value is used to keep resources free for the operating system. If this value is left empty, 100Mi will be used. You can express memory as a plain integer or as a fixed-point number using one of these suffixes: E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki.
Example:
systemMemory: 1Gi
Default:
systemMemory: 100Mi
logLevel (Optional):
Describes how detailed the output is logged for the user. The following values can be used: v, vv, vvv, vvvv, vvvvv.
Example:
logLevel: vv
Default:
logLevel: vvvvv
masterHost (Mandatory):
Name of the node to be installed as the first master. This value can be either a correct FQDN or a specific IP Address.
Example:
masterHost: 10.2.1.11
masterUser (Optional):
Specification of the user to be used to connect to the node. If this field is left empty, “root” will be used to run the ansible scripts on the cluster master. Should consist of only uppercase/lowercase letters, numbers and underscores.
Example:
masterUser: root
Default:
masterUser: root
masterPassword (Optional):
Specification of the user password to be used to connect to the node. Requirements for the password:
- Password length must be between 1 and 128 characters long
- Allowed: alphanumeric characters and the following symbols: ‘_!?-^@#$%*&():.,;<>’
Example:
masterPassword: password
Note: If you are not using ‘masterPassword’, you need to use certificates.
sudo (Optional):
A boolean flag that decides whether podman commands run with sudo or not.
Example:
sudo: false
Default:
sudo: true
pluginNetwork (Optional): Currently supported are calico, calico with multus CNI and weave as a plugin network.
To get an overview which pluginNetwork supports which parameters use lima get overlay
Example:
pluginNetwork:
type: weave
parameters:
weavePassword: "superSecurePassword"
Note: If you don’t select a pluginNetwork, your cluster will not install any pluginNetwork during cluster creation! You should either select a pluginNetwork before cluster creation or use lima install overlay.
Please use alphanumeric characters only for the weave password.
auditLog (Optional):
Enables the Kubernetes audit log functionality. In order to do so, be sure to create a policy.yaml and save the file under .../limaRoot/auditLog/policy.yaml.
Example:
auditLog: true
Note: To learn more about creating a policy.yaml for the AuditLog functionality, visit the official Kubernetes documentation for Auditlogging.
seLinuxSupport (Optional):
Turns SELinux on or off:
true: Enforcing
false: Permissive
seLinuxSupport: true
Note: If SELinux is enforcing and you want to install a firewall on your target machine, you need to pre-install libselinux-python!
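To check the current SELinux mode on a target machine before cluster creation, you can use the standard SELinux tooling (not LIMA-specific):
```
getenforce    # prints Enforcing, Permissive, or Disabled
```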
serial (Optional):
Specifies on how many nodes a LIMA command should run simultaneously.
Example:
serial: 2
nodeconfig API Objects
An Example to show the structure of the file `addNode.yaml`:
```yaml
apiVersion: lima/nodeconfig/v1alpha1
clusterName: example
spec:
masters:
- host: 10.2.1.11
user: root
password: password
systemCpu: 200m
systemMemory: 200Mi
- host: master1.kubernative.net
workers:
- host: 10.2.1.12
user: root
password: password
- host: worker1.kubernative.net
```
masters (Optional):
A list of all master nodes in the nodelist. Each node must have a hostname; the user and password are optional.
An example to show the structure of the masters spec object in the file `addNode.yaml`:
apiVersion: lima/nodeconfig/v1alpha1
clusterName: example
spec:
masters:
- host: 10.2.1.11
user: root
password: admin123456
- host: master.kubernative.net
host:
Mandatory
Each host has a unique identifier.
The hostname can be either a specific IP Address OR a correct FQDN.
Example:
host: 10.2.1.11
or
host: master.kubernative.net
user:
Optional
Specification of the user to be used to connect to the node.
Example:
user: root
Default:
user: root
password:
Optional
Specification of the password to be used to connect to the node.
Example:
password: admin123456
systemCpu:
Optional
Specification of the system resources (CPU) to be reserved for the operating system on the node.
Example:
systemCpu: 200m
Default:
systemCpu: 100m
systemMemory:
Optional
Specification of the system resources (memory) to be reserved for the operating system on the node.
Example:
systemMemory: 200Mi
Default:
systemMemory: 100Mi
workers (Optional):
A list of all worker nodes in the nodelist. Each node must have a hostname; the user and password are optional.
An example to show the structure of the workers spec object in the file `addNode.yaml`:
apiVersion: lima/nodeconfig/v1alpha1
clusterName: example
spec:
workers:
- host: 10.2.1.12
user: root
password: password
- host: worker.kubernative.net
host:
Mandatory
Each host has a unique identifier.
The hostname can be either a specific IP Address OR a correct FQDN.
Example:
host: 10.2.1.12
or
host: worker.kubernative.net
user:
Optional
Specification of the user to be used to connect to the node.
Example:
user: root
Default:
user: root
password:
Optional
Specification of the password to be used to connect to the node.
Example:
password: admin123456
systemCpu:
Optional
Specification of the system resources (CPU) to be reserved for the operating system on the node.
Example:
systemCpu: 200m
Default:
systemCpu: 100m
systemMemory:
Optional
Specification of the system resources (memory) to be reserved for the operating system on the node.
Example:
systemMemory: 200Mi
Default:
systemMemory: 100Mi
Use LIMA as a User
When you use LIMA as a user, you have to pay attention to some details. First of all, the user has to be present on all nodes. This user needs a home directory and sudo privileges.
- With the following command you can create a new user with a home directory and sudo privileges:
useradd -m -G <sudo group> testuser
For example, on RHEL or openSUSE/SLES environments, the `wheel` group is the sudo group.
- You also need to set a password for the user. The following command allows you to set the password:
passwd testuser
- For all new nodes you need a new public SSH key. It is recommended to use a comment, where you add the username and host:
ssh-keygen -C "testuser@master1"
This command creates a new SSH key pair. The new public key is located in the `.ssh` folder in the home directory and has the name `id_rsa.pub`. The content of this file needs to be in the authorized_keys file on the admin node; that authorized_keys file is located in the `.ssh` folder in the home directory.
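A minimal sketch of that last step, assuming the public key file has already been copied to the admin node:
```
# On the admin node: append the node's public key and restrict permissions
cat id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```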
You also need to pay attention to the createCluster.yaml, because it contains the masterUser and masterPassword parameters; you have to update these parameters accordingly.
...
masterUser: root
masterPassword: toor
...
Then you can create a cluster as a user:
lima create cluster -f createCluster.yaml
Linkpage
Kubernative Homepage https://kubeops.net/
Kubernetes API reference https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/
CNCF Landscape https://landscape.cncf.io
4.3 - Installation-Guide-lima
Install Lima on AirGap environment
- Set up a local docker registry in your AirGap environment.
Manual: https://docs.docker.com/registry/deploying/
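For example, a minimal local registry can be started with the standard registry image (a sketch following the Docker manual above; port and storage settings are assumptions to adjust to your environment):
```
docker run -d -p 5000:5000 --restart=always --name registry registry:2
```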
- Pull the following images with a machine that has docker installed and internet access:
The lima images:
docker pull registry.kubernative.net/lima:<lima-version> //example v0.8.0-beta-4
docker pull registry.kubernative.net/autoreloadhaproxy:2.4.0-alpine
docker pull registry.kubernative.net/registry:2.7.1
docker pull registry.kubernative.net/docker.io/busybox:1.33.0
All images found in the imagelist.yaml in your kubernetes-<version> package:
docker pull gcr.io/google-containers/kube-apiserver:<kubernetes-version> //example v1.20.5
docker pull gcr.io/google-containers/kube-controller-manager:<kubernetes-version>
docker pull gcr.io/google-containers/kube-proxy:<kubernetes-version>
docker pull gcr.io/google-containers/kube-scheduler:<kubernetes-version>
docker pull gcr.io/google-containers/pause:<version> //example 3.1
docker pull gcr.io/google-containers/coredns:<version> //example 1.6.5
docker pull gcr.io/google-containers/etcd:<version> //example 3.4.3-0
Using weavenet:
docker pull docker.io/weaveworks/weave-kube:2.6.2
docker pull docker.io/weaveworks/weave-npc:2.6.2
Using calico:
docker pull docker.io/calico/cni:v3.18.1
docker pull docker.io/calico/pod2daemon-flexvol:v3.18.1
docker pull docker.io/calico/node:v3.18.1
docker pull docker.io/calico/kube-controllers:v3.18.1
- Tag them:
The lima images:
docker tag registry.kubernative.net/lima:<lima-version> <your registry>/lima:<lima-version>
docker tag registry.kubernative.net/autoreloadhaproxy:2.4.0-alpine <your registry>/autoreloadhaproxy:2.4.0-alpine
docker tag registry.kubernative.net/registry:2.7.1 <your registry>/registry:2.7.1
docker tag registry.kubernative.net/docker.io/busybox:1.33.0 <your registry>/docker.io/busybox:1.33.0
All images found in the imagelist.yaml in your kubernetes-<version> package:
docker tag gcr.io/google-containers/kube-apiserver:<kubernetes-version> <your registry>/gcr.io/google-containers/kube-apiserver:<kubernetes-version>
docker tag gcr.io/google-containers/kube-controller-manager:<kubernetes-version> <your registry>/gcr.io/google-containers/kube-controller-manager:<kubernetes-version>
docker tag gcr.io/google-containers/kube-proxy:<kubernetes-version> <your registry>/gcr.io/google-containers/kube-proxy:<kubernetes-version>
docker tag gcr.io/google-containers/kube-scheduler:<kubernetes-version> <your registry>/gcr.io/google-containers/kube-scheduler:<kubernetes-version>
docker tag gcr.io/google-containers/pause:<version> <your registry>/gcr.io/google-containers/pause:<version>
docker tag gcr.io/google-containers/coredns:<version> <your registry>/gcr.io/google-containers/coredns:<version>
docker tag gcr.io/google-containers/etcd:<version> <your registry>/gcr.io/google-containers/etcd:<version>
Using weavenet:
docker tag docker.io/weaveworks/weave-kube:2.6.2 <your registry>/docker.io/weaveworks/weave-kube:2.6.2
docker tag docker.io/weaveworks/weave-npc:2.6.2 <your registry>/docker.io/weaveworks/weave-npc:2.6.2
Using calico:
docker tag docker.io/calico/cni:v3.18.1 <your registry>/docker.io/calico/cni:v3.18.1
docker tag docker.io/calico/pod2daemon-flexvol:v3.18.1 <your registry>/docker.io/calico/pod2daemon-flexvol:v3.18.1
docker tag docker.io/calico/node:v3.18.1 <your registry>/docker.io/calico/node:v3.18.1
docker tag docker.io/calico/kube-controllers:v3.18.1 <your registry>/docker.io/calico/kube-controllers:v3.18.1
- Export your images as tar files:
The lima images:
docker save -o ./ansible.tar <your registry>/lima:<lima-version>
docker save -o ./autoreloadhaproxy.tar <your registry>/autoreloadhaproxy:2.4.0-alpine
docker save -o ./registry.tar <your registry>/registry:2.7.1
docker save -o ./busybox.tar <your registry>/docker.io/busybox:1.33.0
All images found in the imagelist.yaml in your kubernetes-<version> package:
docker save -o ./kube-apiserver.tar <your registry>/gcr.io/google-containers/kube-apiserver:<kubernetes-version>
docker save -o ./kube-controller-manager.tar <your registry>/gcr.io/google-containers/kube-controller-manager:<kubernetes-version>
docker save -o ./kube-proxy.tar <your registry>/gcr.io/google-containers/kube-proxy:<kubernetes-version>
docker save -o ./kube-scheduler.tar <your registry>/gcr.io/google-containers/kube-scheduler:<kubernetes-version>
docker save -o ./pause.tar <your registry>/gcr.io/google-containers/pause:<version>
docker save -o ./coredns.tar <your registry>/gcr.io/google-containers/coredns:<version>
docker save -o ./etcd.tar <your registry>/gcr.io/google-containers/etcd:<version>
Using weavenet:
docker save -o ./weave-kube.tar <your registry>/docker.io/weaveworks/weave-kube:2.6.2
docker save -o ./weave-npc.tar <your registry>/docker.io/weaveworks/weave-npc:2.6.2
Using calico:
docker save -o ./cni.tar <your registry>/docker.io/calico/cni:v3.18.1
docker save -o ./pod2daemon-flexvol.tar <your registry>/docker.io/calico/pod2daemon-flexvol:v3.18.1
docker save -o ./node.tar <your registry>/docker.io/calico/node:v3.18.1
docker save -o ./kube-controllers.tar <your registry>/docker.io/calico/kube-controllers:v3.18.1
- Move all tar files to a place that has access to your registry and docker installed.
- Extract all image tar files:
The lima images:
docker load -i ./ansible.tar
docker load -i ./autoreloadhaproxy.tar
docker load -i ./registry.tar
docker load -i ./busybox.tar
All images found in the imagelist.yaml in your kubernetes-<version> package:
docker load -i ./kube-apiserver.tar
docker load -i ./kube-controller-manager.tar
docker load -i ./kube-proxy.tar
docker load -i ./kube-scheduler.tar
docker load -i ./pause.tar
docker load -i ./coredns.tar
docker load -i ./etcd.tar
Using weavenet:
docker load -i ./weave-kube.tar
docker load -i ./weave-npc.tar
Using calico:
docker load -i ./cni.tar
docker load -i ./pod2daemon-flexvol.tar
docker load -i ./node.tar
docker load -i ./kube-controllers.tar
- Push all images into your local registry:
The lima images:
docker push <your registry>/lima:<lima-version>
docker push <your registry>/autoreloadhaproxy:2.4.0-alpine
docker push <your registry>/registry:2.7.1
docker push <your registry>/docker.io/busybox:1.33.0
All images found in the imagelist.yaml in your kubernetes-<version> package:
docker push <your registry>/gcr.io/google-containers/kube-apiserver:<kubernetes-version>
docker push <your registry>/gcr.io/google-containers/kube-controller-manager:<kubernetes-version>
docker push <your registry>/gcr.io/google-containers/kube-proxy:<kubernetes-version>
docker push <your registry>/gcr.io/google-containers/kube-scheduler:<kubernetes-version>
docker push <your registry>/gcr.io/google-containers/pause:<version>
docker push <your registry>/gcr.io/google-containers/coredns:<version>
docker push <your registry>/gcr.io/google-containers/etcd:<version>
Using weavenet:
docker push <your registry>/docker.io/weaveworks/weave-kube:2.6.2
docker push <your registry>/docker.io/weaveworks/weave-npc:2.6.2
Using calico:
docker push <your registry>/docker.io/calico/cni:v3.18.1
docker push <your registry>/docker.io/calico/pod2daemon-flexvol:v3.18.1
docker push <your registry>/docker.io/calico/node:v3.18.1
docker push <your registry>/docker.io/calico/kube-controllers:v3.18.1
- Install LIMA with SINA
sina install <lima package>
OR
On CentOS 7
yum install <lima.rpm>
On OpenSUSE 15.1
zypper install <lima.rpm>
- Install kubectl on your admin node as shown in Chapter Install Kubectl
- Install Docker on your admin node as shown in Chapter Install Docker
- Check if LIMA is running:
lima version
- Set ‘registry’ to your local registry name when setting up a new cluster:
apiVersion: lima/clusterconfig/v1alpha2
spec:
...
registry: <your registry address>
...
Plugin-Networks on AirGap environment
Note: Your cluster won’t be ready until you follow the instructions mentioned below.
If you want to run a cluster in an AirGap environment, you have to use SINA to deploy a Plugin-Network. First of all you need a `.sina` package available locally which contains the Plugin-Network.
For example, pulling calico as a Plugin-Network:
sina pull -o calico lima/calicoinstaller:0.0.1
If your cluster is up and running, you can easily deploy the Plugin-Network. Run the following command on your admin node:
sina install -p calico.sina -f $LIMAROOT/values.yaml
4.4 - Lima-Ports
Ports for lima clusters
kubernetes ports (RHEL8):
- 2379-2380/tcp
- 6784/tcp
- 7443/tcp
- 9153/tcp
- 10250/tcp
- 10251/tcp
- 10252/tcp
- 5000-5001/tcp
- 3300/tcp
- 6789/tcp
- 6800-7300/tcp
- apiEndpointPort (from createCluster.yaml)
kubernetes ports (OpenSuse):
- 2379-2380/tcp
- 6784/tcp
- 7443/tcp
- 9153/tcp
- 10250/tcp
- 10251/tcp
- 10252/tcp
- 10255/tcp
- 5000-5001/tcp
- 3300/tcp
- 6789/tcp
- 6800-7300/tcp
- apiEndpointPort (from createCluster.yaml)
weave net ports
- 6783/tcp
- 6783-6784/udp
calicoports:
- 179/tcp
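If firewalld is used, the ports above can be opened with firewall-cmd. A minimal sketch for the RHEL8 list (the port set and the 6443 apiEndpointPort are assumptions; adjust them to your OS and your createCluster.yaml):
```
for p in 2379-2380 6784 7443 9153 10250 10251 10252 5000-5001 3300 6789 6800-7300 6443; do
  firewall-cmd --permanent --add-port=${p}/tcp
done
firewall-cmd --permanent --add-port=6783/tcp        # weave net
firewall-cmd --permanent --add-port=6783-6784/udp   # weave net
firewall-cmd --reload
```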
5 - Pia
5.1 - About-Pia
What is PIA?
PIA is an Infrastructure Administrative Manager application for a Kubernetes cluster. It helps you manage administrative tasks for your complete cluster without the need to execute commands on individual nodes, providing efficient centralized management.
Why use PIA?
PIA is a simple application which primarily focuses on the management of operating-system-level tasks, for example managing firewall rules on any node, managing users, or installing operating system packages.
Highlights
- PIA provides a convenient way to execute your tasks with a single command without making the process complex.
- PIA offers centralized management of the complete cluster at one place.
- PIA gives you flexibility to select the nodes on which the tasks need to be executed.
How does PIA work?
Here is the workflow which helps you understand how PIA works.
PIA helps you execute commands on all or specified nodes with a single `pia run` command. Additionally, PIA also supports transferring artifacts from the admin node to the master and worker nodes.
Click here to download and get started with PIA now.
5.2 - Documentation-Pia
KubeOps-PIA v0.1.1
This Guide explains how to use PIA with detailed instructions and examples.
If you haven’t installed PIA yet, refer to the Installation Guide for PIA.
PIA assumes that:
- the node where PIA is running can execute any kubectl commands
- the node where PIA is running has the `tar` binary installed
- the default Kubernetes labels `kubernetes.io/` are set
- the label `kubernative-net/pia` is not used
The `wget` binary has to be installed on every Kubernetes node.*
# How to install wget on CentOS
yum install -y wget
* `wget` does not need to be installed if the default image `busybox` is used. For more information see the chapter “PIA syntax explanation”.
How to use PIA?
In order to use PIA:
- Create a .yaml file in the standard PIA syntax and provide the necessary information for the tasks to be executed.
- Then execute the file with the command pia run.
PIA Commands
General Commands
Overview of all PIA commands
pia
PIA - Plugin-based Infrastructure Administrator
Usage:
pia [options] [command]
Options:
--version Show version information
-?, -h, --help Show help and usage information
Commands:
run <file path> Run PIA based on properties of a yaml file
version Basic info about PIA
delete <resource type> Delete resources in the cluster
Command ‘pia run’
The pia run command executes the provided yaml file. You can create this yaml file by following a standard syntax; refer to YAML file syntax for more details.
The flag ‘-f’ is required to execute this command.
pia run -f conf.yaml
Command ‘pia version’
The pia version command shows you the current version of pia.
[root@admin ~]# pia version
Command ‘pia delete’
The pia delete command deletes resources from your Kubernetes cluster.
The flag ‘-r’ is required to execute this command.
# delete httpd pod
pia delete -r httpd
This command is currently limited to the resource ‘httpd’ and hence only accepts it as an argument.
YAML file syntax for PIA
You need to create a .yaml file as shown in the following example. It contains all the necessary information which enables PIA to execute the tasks successfully.
Below is an example to show the structure of the file ‘conf.yaml’:
apiVersion: kubernative/pia/v1
spec:
affinity:
schedule: role/worker, node/master1
limit: 5
runtime:
# image: busybox
runOnHost: true
hostNetwork: true
# hostDirs:
# - /etc/docker
run:
artefacts:
- src: /etc/docker/daemon.json
dest: /etc/docker/
tasks:
- cmd:
- "systemctl restart docker"
apiVersion (Mandatory)
Version string which defines the format of the API Object.
The only currently supported version is:
kubernative/pia/v1
The rest of the syntax is mainly divided into 3 sections:
- affinity
- runtime
- run
affinity area
affinity:
schedule: role/worker, node/master1 # PIA deploys the DaemonSet on all worker nodes and on the node "master1"
limit: 5 # PIA takes the first 5 nodes and executes the given tasks on them. After that PIA repeats with the next 5 nodes.
schedule (Optional)
defines the nodes to be used.
You can select the value based on your requirements.
| Value | Description |
|---|---|
| role/worker | for all worker nodes |
| role/master | for all master nodes |
| node/<nodeName> | for a specific node |
| label/foo | for all nodes with the label foo |
| allNodes | for all nodes (default) |
limit (Optional)
describes how many nodes should be used at once.
This value can be any integer number. The default value is set to 2.
For example, if you have 20 nodes in your Kubernetes cluster and set the limit to 3, PIA will take the first 3 nodes, execute the given tasks on them, and then take the next 3, repeating the process until all 20 nodes are finished.
runtime area
runtime:
# image: myOwnImage # Default value `busybox:latest` will be used because the key `image` is commented out
runOnHost: true # PIA DaemonSet runs all tasks directly on the file system of the underlying node
hostNetwork: true # PIA DaemonSet will use the same network namespace of the underlying node
hostDirs: # Mounts the directory /etc/docker? No! 'runOnHost' is 'true' so the value of 'hostDirs' will be ignored because you already gain access to the whole file system of the underlying node! So it is the same as you would comment it out.
- /etc/docker
image (Optional)
Specification of the image used by the DaemonSet.
This value has to be any valid docker image.*
Default: busybox:latest
Note: When using self-created images, they must have the `wget` and `tar` binaries installed to avoid errors when using artefacts.
runOnHost (Optional)
Describes whether PIA should run in the container or directly on the host system.
This value can be set to
- `false`: the file system of the container will be used
- `true`: the file system of the underlying Kubernetes node will be used
Default: false
`runOnHost` is equivalent to `hostPID` in the pod security policies from Kubernetes.
Important: `runOnHost: true` will grant access to the whole file system of the underlying node.
hostNetwork (Optional)
Describes whether PIA should use the same network namespace on the underlying host or not.
This value can be set to
- `false`: the same network namespace will not be used
- `true`: the same network namespace will be used
Default: false
Equivalent to `hostNetwork` in the pod security policies from Kubernetes.
hostDirs (Optional)
Defines listing of directories to be mounted.
The values should be one or more valid folder paths to directories to be mounted.
Important: `hostDirs` is dependent on `runOnHost`. If `runOnHost` is set to
- `false`: the mounted folders must be matched to the container
- `true`: there is no need to mount directories, as access to the whole file system of the underlying node is provided
run area
run:
artefacts: # Copies the daemon.json from the admin node and drops the file in the /etc/docker from nodes that are specified in the affinity area
- src: /etc/docker/daemon.json
dest: /etc/docker/
tasks:
- cmd: # Executes the following commands on the nodes that are specified in the affinity area
- "systemctl restart docker"
artefacts (Optional)
Defines files which need to be transferred to the other nodes.
This value consists of
- `src`: the source path of the file to be transferred
- `dest`: the destination path; it defines the directory to which the file should be transferred
tasks (Optional)
Describes the list of tasks that should be executed on the scheduled nodes.
You can add the commands to be executed on the nodes. This input can have multiple keywords.
Currently only `cmd` (command) is supported.
Examples
Install Kosi on all nodes (CentOS)
In the following example, we are going to see how to use PIA to install the kosi.rpm (a local file) on all nodes of your Kubernetes cluster, including the admin node.
Below is the .yaml file created for the task.
apiVersion: kubernative/pia/v1
spec:
affinity:
schedule: allNodes
limit: 5
runtime:
runOnHost: true
hostNetwork: true
run:
artefacts:
- src: /myRpmsFolder/kosi-2.3.0_beta6-0.el7.x86_64.rpm
dest: /root/kosi-2.3.0_beta6-0.el7.x86_64.rpm
tasks:
- cmd:
- "yum install -y /root/kosi-2.3.0_beta6-0.el7.x86_64.rpm"
Add a custom config file to specific nodes
The following is an example YAML that can be used to save a custom config file to one specific node (e.g. master3) and to all nodes with the label ‘foo’:
apiVersion: kubernative/pia/v1
spec:
affinity:
schedule: node/master3, label/foo
limit: 2
runtime:
hostDirs:
- /path/to/my/mounted/folder/ # can also be a parent directory e.g. '/etc' instead of '/etc/crio'
run:
artefacts:
- src: /path/to/myConfig.conf
dest: /path/to/my/mounted/folder/ # absolute path
Further notes
- if you want to mount the entire file system you have to set
runOnHost: true
- when using artefacts be sure that file permissions (chmod 777) are given
- logs are saved in the directory “/tmp/piaData/logs” and will be deleted at each restart of PIA
5.3 - Pia-FAQ
FAQ PIA
I get the error that some nodes are not ‘Ready’. How do I fix the problem?
First you need to find out which node is not ready. Use ‘kubectl get nodes’ to find the non-ready node.
Next, identify why the node is not ready. Get access to the shell of the non-ready node and use ‘systemctl status kubelet’ to get information about the state of the kubelet.
A non-ready node can have many causes. In most cases, the kubelet fails to identify the node automatically. Therefore, the kubelet must be restarted manually on the non-ready machine. This is done with ‘systemctl enable kubelet’ and ‘systemctl start kubelet’.
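For example, on the non-ready node:
```
systemctl enable kubelet
systemctl start kubelet
```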
If the issue has not been resolved, there is a deeper problem in your kubernetes cluster. Please contact your cluster administrators.
The httpd service is taking too long to terminate. How can I force the shutdown?
You can force the shutdown of your httpd service with the following command: ‘kubectl delete deployment pia-httpd --grace-period=0 --force’.
Most deployments have a networking service, as our httpd does. Delete the networking service with ‘kubectl delete svc pia-httpd-service --grace-period=0 --force’.
My question was not answered here
Visit our Slack
We are happy to help you with any problem!
5.4 - Installation-guide-pia
Installation Guide for PIA
This Guide shows you how to install PIA.
Prerequisites
Before you begin, check the following prerequisites:
- Your machine should have a running Kubernetes cluster with:
  - Kubectl commands allowed to execute on the node where PIA is running.
  - the default Kubernetes labels kubernetes.io/ set.
  - the label kubernative-net/pia not used.
- The wget binary should be installed on every Kubernetes node.
  To install wget on CentOS:
  yum install -y wget
  If the default image busybox is used, ‘wget’ does not need to be installed. For more information about the image refer to the section “PIA syntax explanation” in the full documentation.
- PIA should be installed on the admin node of the Kubernetes cluster.
- The admin node should have the tar binary installed.
Installation Steps
Every release of PIA provides an rpm file for manual installation. You need to log in to your KubeOps account to download the file.
- Create a KubeOps account on the KubeOps website if you don’t already have one, and log in to your account.
- Download your desired version of the PIA .rpm file from our official download page https://kubeops.net/kubeops-tools/download/pia
- Install the PIA rpm on the admin node.
yum install <path to rpm>/<pia file name>
path to rpm: the directory path in which the file is available.
pia file name: the exact file name of the file with .rpm extension.
Check Installation
You can check if PIA is installed correctly on your machine simply with the command `pia version`.
[root@localhost ~]# pia version
If the output indicates the PIA version and related information, PIA is installed successfully.
For example, the output should look like
KubeOps PIA v0.1.1
This work is licensed under Creative Commons Attribution - NoDerivatives 4.0 International License(see https://creativecommons.org/licenses/by-nd/4.0/legalcode for more details).
©KubeOps GmbH, Hinter Stöck 17, 72406 Bisingen - Germany, 2022
5.5 - Known-Issues-Pia
Known Issues
PIA 0.1.1
- the ’tasks’ area is required. If no tasks should be executed and only data should be transferred, execute a command with an empty string:
tasks:
- cmd:
- ""
- only docker and containerd can be used as container runtime. CRI-O leads to an error and will be supported in the next releases
- uploading files without a stable and fast internet connection can cause timeouts which are not caught at the moment
- under certain circumstances the httpd resource hangs at “Cleaning Up”. The resource must be deleted manually (see FAQ for more information)
5.6 - Pia-0.1.x
PIA Release
Changelog PIA 0.1.1
Bug Fixes
- Fixed an issue where PIA settings intended only for workers were also applied to masters
- Fixed an issue where PIA settings intended only for masters were also applied to workers
- Fixed an issue where PIA couldn’t execute `run -f`
6 - Internal References
6.1 -
KubeOps Requirements
SINA
Setup Admin Node:
- min. RAM: 2 GB
- min. CPU: 2 Cores
- User must be in the sudoers file
- Podman Version 2.2.1 must be installed
- Helm must be installed
- install SINA
Example: rpm -i sina-2.6.0_Beta0-0.el7.x86_64.rpm
LIMA
Setup Admin Node:
- min. RAM: 2 GB
- min. CPU: 2 Cores
- User must be in the sudoers file
- /home/myuser/.ssh/known_hosts must contain all required nodes, e.g. control-plane1, control-plane2, control-plane3, node1, node2, node3
Example: ssh demo@control-plane1
- Docker must be installed
Important: systemctl enable docker --now
- SINA must be installed
LIMA Version 0.8.x
Required SINA 2.5.x
LIMA Version 0.9.x
Required SINA 2.6.x
- install LIMA
Example: rpm -i lima-0.9.0.Beta-0.el7.x86_64.rpm
After installation:
export LIMAROOT="/var/lima"
echo 'export LIMAROOT="/var/lima"' >> $HOME/.bashrc
- Kubectl must be installed
Example:
lima get maintenance
lima pull maintenance lima/kubernetes:1.24.8
rpm -i $LIMAROOT/packages/kubernetes-1.24.8/kubectl-1.24.8-0.x86_64.rpm
Setup Nodes
- User and Group ID must be same as on Admin Node
- User must be in the sudoers file
First steps with Lima
Create cluster
First you need a yaml file with our basic cluster configuration.
createCluster.yaml
apiVersion: lima/clusterconfig/v1alpha2
spec:
clusterName: Democluster
masterUser: root
masterPassword: toor
# IP address of your first master
masterHost: 10.2.10.13
kubernetesVersion: 1.23.14
registry: registry1.kubernative.net/lima
useInsecureRegistry: false
ignoreFirewallError: false
firewall: firewalld
# IP address of your first master
apiEndpoint: 10.2.10.13:6443
serviceSubnet: 192.168.128.0/20
podSubnet: 192.168.144.0/20
debug: true
logLevel: vvvvv
sudo: false
containerRuntime: docker
pluginNetwork:
type: weave
Now you can start downloading the required packages with LIMA.
With this command you can view all available packages:
lima get maintenance
Kubernetes Version
You need a package for Kubernetes version 1.23.14.
With this command you can download the package:
lima pull maintenance lima/kubernetes:1.23.14
Container Runtime Interface
You need a package for the CRI Docker.
With this command you can download the package:
lima pull maintenance lima/dockerlp151:19.03.5
Firewall
You need a package for the firewall firewalld.
With this command you can download the packages:
lima pull maintenance lima/kubedependencies-el7:1.0.3
lima pull maintenance lima/firewalldel7:0.6.3
CNI
You need a package for the CNI Weave Net.
With this command you can download the package:
lima pull maintenance lima/weavenetinstaller:0.0.4
Ready to start
Now you are ready to create the cluster.
With this command you can create the cluster:
lima create cluster -f createCluster.yaml
Join Nodes to cluster
First you need a yaml file with a basic cluster configuration.
nodeJoin.yaml
apiVersion: lima/nodeconfig/v1alpha1
clusterName: Democluster
spec:
masters:
- host: 10.2.10.14
user: root
password: YourSecretPassword
- host: 10.2.10.15
user: root
password: YourSecretPassword
workers:
- host: 10.2.10.16
user: root
password: YourSecretPassword
- host: 10.2.10.17
user: root
password: YourSecretPassword
Now you can join Nodes with the command:
lima create nodes -f nodeJoin.yaml
Change CRI
In order to change the Container Runtime Interface (CRI) of the cluster to containerd, the following steps are necessary.
First you need to download the package for containerd.
With this command you can download the package:
lima pull maintenance lima/containerdlp151:1.6.6
Now you can change the CRI of your cluster with the command:
lima change runtime -r containerd Democluster
Upgrade your kubernetes cluster
In order to increase the Kubernetes version of the cluster to Kubernetes version 1.24.8, the following steps are necessary.
First you need a package for Kubernetes version 1.24.8.
With this command you can download the package:
lima pull maintenance lima/kubernetes:1.24.8
Now you can upgrade your cluster with the command:
lima upgrade apply -v 1.24.8 Democluster
Update SINA
In order to update the SINA version, the following steps are necessary.
First you need to remove the current SINA installation:
rpm -e sina
Now you can install the new version:
rpm -i sina-2.6.0_Beta1-0.el7.x86_64.rpm
Update LIMA
In order to update the LIMA version, the following steps are necessary.
First you need to remove the current LIMA installation:
rpm -e lima
Now you can install the new version:
rpm -i lima-0.9.0.Beta-1.el7.x86_64.rpm
Please check the changelog of the new version for further information on whether the loadbalancer needs to be updated.
In case the loadbalancer needs an update, the following command is used:
lima update -b Democluster
6.2 -
Kubeops RPM Repository Setup Guide
Setting up a new RPM repository allows for centralized, secure, and efficient distribution of software packages, simplifying installation, updates, and dependency management.
Table of Contents
- Prerequisites
- Repository Setup Instructions
- Inspect KubeopsRepo
- Local Repository Configuration
- Enable the GPG Signature
- Test the Local Repository
- Implementing in the setup package
Prerequisites
- A Linux distribution that supports RPM package management (e.g., RHEL, CentOS, Fedora, or SUSE Linux Enterprise).
- httpd (apache) server to access the repository over HTTP.
- Software packages (RPM files) to include in the repository.
- createrepo (an RPM package management tool) to create a new repository.
- Root or administrative access to the server.
Repository Setup Instructions
1. Install Required Tools
sudo yum install -y httpd createrepo
2. Create Repository Directory
When Apache is installed, the default Apache VirtualHost DocumentRoot is created at /var/www/html. Create a new repository KubeOpsRepo under the DocumentRoot.
sudo mkdir -p /var/www/html/KubeOpsRepo
3. Copy RPM Packages
Copy the RPM packages into the KubeOpsRepo repository.
Use the command below to copy packages that are already present on the host machine; otherwise, place the packages directly into KubeOpsRepo.
sudo cp -r <sourcePathForRPMs> /var/www/html/KubeOpsRepo/
4. Initialize the KubeOpsRepo
By running the createrepo command, the KubeOpsRepo will be initialized.
cd /var/www/html/KubeOpsRepo/
sudo createrepo .
This will create a repodata directory. The repodata directory contains metadata files that describe the RPM packages in the repository, including package information, dependencies, and checksums, enabling efficient package management and dependency resolution.
5. Start and Enable Apache Service
sudo systemctl start httpd
sudo systemctl enable httpd
(Optional) Configure Firewall
If the firewall is enabled, we need to allow incoming HTTP traffic.
sudo firewall-cmd --add-service=http --permanent
sudo firewall-cmd --reload
Inspect KubeopsRepo
Access the repository KubeOpsRepo using a web browser or command-line tools like curl or wget.
curl http://<ip-address-of-server>/KubeOpsRepo/
Note: <ip-address-of-server> is the IP address of the host machine.
Download an RPM package from the repository
curl -O http://<ip-address-of-server>/KubeOpsRepo/<package-name.rpm>
Install an RPM package from the repository
sudo yum install http://<ip-address-of-server>/KubeOpsRepo/<package-name.rpm>
List out all RPM package in the Repository
curl -s http://<ip-address-of-server>/KubeOpsRepo/ | grep -Po 'href="\K[^"]+\.rpm'
Local Repository Configuration
A local repository configuration is needed to install packages from KubeOpsRepo without specifying the URL each time. Here are the steps to configure a local repository.
Create a Repository Configuration File
Create a new .repo configuration file (e.g. KubeOpsRepo.repo) in the /etc/yum.repos.d/ directory:
sudo vi /etc/yum.repos.d/KubeOpsRepo.repo
Add the Following Content to the Configuration File
[KubeOpsRepo]
name=KubeOps Repository
baseurl=http://<ip-address-of-server>/KubeOpsRepo/
enabled=1
gpgcheck=0
Save the changes and exit the editor.
Note:
- [KubeOpsRepo] is the repo ID.
- Replace http://<ip-address-of-server>/KubeOpsRepo/ with the actual URL of the repository.
- The name field can be customized to a descriptive name.
- enabled=1 enables the repository.
- gpgcheck=0 disables GPG signature verification.
Update the Package Cache
sudo yum makecache
Install the Packages from KubeOpsRepo using yum
sudo yum install package
e.g. sudo yum install lima
IMPORTANT:
The configuration of the repo can also be done using yum-config-manager:
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo http://<ip-address-of-server>/KubeOpsRepo/
Enable the GPG Signature
The GNU Privacy Guard (GPG) is a tool used for secure communication and data integrity verification.
When gpgcheck is set to 1 (enabled), the package manager will verify the GPG signature of each package against the corresponding key in the keyring. If the package’s signature matches the expected signature, the package is considered valid and can be installed. If the signature does not match or the package is not signed, the package manager will refuse to install the package or display a warning.
Before initializing the KubeOpsRepo with the createrepo command, create a GPG key and add it to /var/www/html/KubeOpsRepo/. Check here to create GPG keypairs. Save the GPG key as RPM-GPG-KEY-KubeOpsRepo using the following commands.
cd /var/www/html/KubeOpsRepo/
gpg --armor --export > RPM-GPG-KEY-KubeOpsRepo
Now, initialize the repository using the createrepo command.
Use curl against the repository to check the GPG key:
curl -s http://<ip-address-of-server>/KubeOpsRepo/RPM-GPG-KEY-KubeOpsRepo
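Optionally, the key can also be imported directly into the local RPM keyring on a client machine (yum will otherwise fetch it from the gpgkey URL configured below):
```
sudo rpm --import http://<ip-address-of-server>/KubeOpsRepo/RPM-GPG-KEY-KubeOpsRepo
```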
Updating the Local Configuration File
The configuration file mentioned in Add the Following Content to the Configuration File will be changed as shown below.
[KubeOpsRepo]
name=KubeOps Repository
baseurl=http://<ip-address-of-server>/KubeOpsRepo/
enabled=1
gpgcheck=1
gpgkey=http://<ip-address-of-server>/KubeOpsRepo/RPM-GPG-KEY-KubeOpsRepo
Test the Local Repository
Generate metadata cache by running:
sudo yum makecache
This will ensure that yum fetches the latest metadata for the repositories.
Check the repository in the repolist:
sudo yum repolist
Note:
The output of the command yum repolist shows information about the repositories, including the enabled repositories and their priorities. The priorities are represented by numerical values, where a lower value indicates higher priority; the repository with the lowest priority value is considered the highest-priority repository. In the example below, the repository with the lowest priority value, i.e. KubeOpsRepo, is listed first in the output. You can use this information to determine the prioritization order of the repositories.
[root@cluster3admin1 ~]# yum repolist
Updating Subscription Management repositories.
repo id                            repo name
KubeOpsRepo                        KubeOps Repository
rhel-8-for-x86_64-appstream-rpms   Red Hat Enterprise Linux 8 for x86_64 - AppStream (RPMs)
rhel-8-for-x86_64-baseos-rpms      Red Hat Enterprise Linux 8 for x86_64 - BaseOS (RPMs)
If you want to explicitly set the priority values, modify the dnf.conf configuration file located at /etc/dnf/dnf.conf. Check for the [main] section and add the module_hotfixes and module_platform_id options to control the repository priorities. For example:
[main]
module_hotfixes=1
module_platform_id=platform:f33
Then, set the priority in the repository configuration file as shown below.
[KubeOpsRepo]
name=KubeOps Repository
baseurl=http://<ip-address-of-server>/KubeOpsRepo/
enabled=1
gpgcheck=1
gpgkey=http://<ip-address-of-server>/KubeOpsRepo/RPM-GPG-KEY-KubeOpsRepo
priority=10
List all the packages available in KubeOpsRepo:
# Shows the list of packages available for installation; installed packages are not shown here
sudo yum list available --disablerepo="*" --enablerepo="KubeOpsRepo"
# To check all the packages, including installed packages
# sudo yum list available --disablerepo="*" --enablerepo="KubeOpsRepo" --showduplicates
sudo yum list --showduplicates | grep KubeOpsRepo
Implementing in the setup package
Check here for more details.
6.3 -
Backup Strategy for Velero:
Velero is a CLI tool and a Kubernetes deployment. The Velero deployment in the cluster stores the metadata for backups, and the CLI tool allows you to save resources of the cluster.
With the `velero install` command you can install the Velero deployment in the cluster. There are parameters like the URL of the S3 storage in which the resources will be saved. Only S3 storage is possible as the location of the backups.
You can download the Velero binary from the following links:
https://velero.io/docs/v1.8/basic-install/
https://github.com/vmware-tanzu/velero/releases/tag/v1.9.5
The binary is inside a tar.gz file, so the tar.gz file has to be extracted.
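A minimal sketch of the extraction, assuming the v1.9.5 Linux archive named as on the release page above:
```
tar -xzf velero-v1.9.5-linux-amd64.tar.gz
sudo mv velero-v1.9.5-linux-amd64/velero /usr/local/bin/
velero version --client-only
```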
For Velero you need a credentials file:
[default]
aws_access_key_id = minio
aws_secret_access_key = minio123
This file contains the username and the password for logging into the S3 storage.
After that, you can execute the `velero install` command; this command deploys Velero. An example:
velero install \
--provider aws \
--plugins velero/velero-plugin-for-aws:v1.2.1 \
--bucket velero \
--secret-file ./minio-credentials \
--use-volume-snapshots=false \
--backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://velero.minio.svc.cluster.local:9000
Velero needs different plugins for different S3 storage providers. The region parameter in the config is for local S3 storage solutions like MinIO.
With the `velero backup` command you can manage the backups of Velero.
The `velero backup create <NAME>` command creates backups if Velero is installed in the cluster:
velero backup create nginx-backup --selector app=nginx
So-called selectors are used in Velero. With the selectors you can limit the set of resources you want to back up; you can determine, for example, whether you only want to back up the resources of a certain namespace or a special kind of resource.
velero backup create <backup-name> --selector <key>=<value>
velero backup create <backup-name> --include-namespaces <namespace>
velero backup create <backup-name> --include-resources <resource>
velero backup create <backup-name> --include-cluster-resources=true
Cluster resources are resources that aren’t related to namespaces in the cluster.
If you use include, ONLY the resources that meet the conditions are backed up, but you can add more resources, separated by a comma:
velero backup create <backup-name> --include-resources deployments,services
In this example, persistent volume claims would not be backed up, because they aren’t listed in the includes.
If you want to back up all resources except some, you can use --exclude-namespaces or --exclude-resources. With --exclude-namespaces, Velero backs up all resources except those in the given namespace.
You have to write the full resource name, for example `services` instead of `svc`, because otherwise Velero can’t process the parameters.
With the `velero backup get` command you can list the backups you made. With `velero backup describe <NAME>` you can inspect a backup and see, for example, when it was created, whether it was successful, and how many resources were backed up.
The backups have a TTL (time to live), so they are deleted after the set time; the backup files in the storage container are deleted as well. Be careful with the TTL: a TTL that is too short has the consequence that the backups of the cluster resources cannot be restored. You can set the --ttl parameter to extend the TTL, using hours (h), minutes (m) and seconds (s) for that.
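For example, a backup that is kept for three days (the backup name and selector are taken from the earlier example):
```
velero backup create nginx-backup --selector app=nginx --ttl 72h0m0s
```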
With `velero backup logs <NAME>` you can see the logs of a backup creation. The error message that the MinIO service is not reachable often appears in the logs, but despite it the backup creation can be successful. If you want to delete a backup, you can use `velero backup delete <NAME>`.
Velero also has the possibility to schedule backups; for that you need the `velero schedule create <NAME>` command.
velero schedule create nginx-daily --schedule="0 1 * * *" --selector app=nginx
The --schedule parameter uses the standard cron format with five values; the example above ("0 1 * * *") runs every day at 01:00.
1. value: minute (0-59)
2. value: hour (0-23)
3. value: day of the month (1-31)
4. value: month (1-12)
5. value: day of the week (0-7)
special characters:
- ‘*’ : value for “every” (e.g. every hour)
- ‘,’ : addition, add a value
- ‘-’ : range, from-to
- ‘/’ : steps (“every x-th …”, e.g. every second hour = ‘0 */2 * * *’); additionally: @yearly, @monthly, @weekly, @daily, @hourly
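A sketch of a schedule using one of these shorthands (equivalent to ‘0 0 * * *’):
```
velero schedule create nginx-daily --schedule="@daily" --selector app=nginx
```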
Website for testing cron expressions: https://crontab.guru/
The same parameters, i.e. --selector <key>=<value>, --include-namespaces <namespace>, --include-cluster-resources and --include-resources, are available for the schedules.
Restore backups
With the command
velero restore create --from-backup <BACKUP-NAME>
you can restore the backups; for that, you use the backup name in the command.
The `velero restore get` command shows all restores of backups. A restore has the date and time of day in its name; the date and time are in the format YYYYMMDDhhmmss. The backups created by schedules also have the date-and-time format plus the name of the schedule in their name.
velero restore describe <RESTORE_NAME>
With this command you can show the details of a restore; it provides similar info to the `velero backup describe <NAME>` command.
Velero backs up and restores the Kubernetes resources in the following order:
- Custom Resource Definitions
- Namespaces
- StorageClasses
- VolumeSnapshotClass
- VolumeSnapshotContents
- VolumeSnapshots
- PersistentVolumes
- PersistentVolumeClaims
- Secrets
- ConfigMaps
- ServiceAccounts
- LimitRanges
- Pods
- ReplicaSets
- Clusters
- ClusterResourceSets
velero restore create <RESTORE_NAME> \
--from-backup <BACKUP_NAME> \
--namespace-mappings old-ns-1:new-ns-1,old-ns-2:new-ns-2
It is also possible to restore backups into another namespace, as in this example. It is important to know that if you delete a backup in Velero, the restores will also be deleted.
Velero as Helm Chart
https://artifacthub.io/packages/helm/vmware-tanzu/velero
The values.yaml of the Helm chart has to be modified for the installation so that, for example, the backup storage location can be set. It is also possible to set the parameters in the helm install command:
helm install velero vmware-tanzu/velero \
--namespace <namespace> \
--create-namespace \
--set-file credentials.secretContents.cloud=<path to credentials file> \
--set configuration.provider=<provider> \
--set configuration.backupStorageLocation.name=<location name> \
--set configuration.backupStorageLocation.bucket=<bucket> \
--set configuration.backupStorageLocation.config.region=<region> \
--set configuration.volumeSnapshotLocation.name=<location name> \
--set configuration.volumeSnapshotLocation.config.region=<region> \
--set initContainers[0].name=velero-plugin-for-<provider> \
--set initContainers[0].image=velero/velero-plugin-for-<provider>:<version> \
--set initContainers[0].volumeMounts[0].mountPath=/target \
--set initContainers[0].volumeMounts[0].name=plugins
Here is the Helm chart: helm-chart. The MinIO location can be changed, so the following lines should be checked: