Reference
- 1: Commands
- 2: Documentation-kubeopsctl
- 3: Fileformats
- 4: KubeOps Version
- 5: Glossary
- 6: FAQs
1 - Commands
KubeOps kubeopsctl
This documentation describes all features of kubeopsctl and how to use them.
The kosi software must be installed beforehand.
General commands
Overview of all KUBEOPSCTL commands
Usage:
kubeopsctl [command] [options]
Options:
--version Show version information
-?, -h, --help Show help and usage information
Commands:
apply Use the apply command to apply a specific config to create or modify the cluster.
change <argument> Change Command.
drain <argument> Drain Command.
uncordon <name> Uncordon Command.
upgrade <name> upgrade Command.
status <name> Status Command.
Command ‘kubeopsctl --version’
The kubeopsctl --version command shows you the current version of kubeopsctl.
kubeopsctl --version
The output should be:
0.2.0-Alpha0
Command ‘kubeopsctl --help’
The command kubeopsctl --help gives you an overview of all available commands:
kubeopsctl --help
Alternatively, you can also enter kubeopsctl or kubeopsctl -? in the command line.
Command ‘kubeopsctl apply’
The command kubeopsctl apply is used to set up the KubeOps platform with a configuration file.
Example:
kubeopsctl apply -f kubeopsctl.yaml
-f flag
The -f parameter specifies the YAML parameter file to use.
-l flag
The -l parameter sets the log level. The default log level is Info. Available log levels are Error, Warning, Info, Debug1, Debug2 and Debug3.
Example:
kubeopsctl apply -f kubeopsctl.yaml -l Debug3
Command ‘kubeopsctl change registry’
The command kubeopsctl change registry is used to change the currently used registry to a different one.
Example:
kubeopsctl change registry -f kubeopsctl.yaml -r 10.2.10.11/library -t localhost/library
-f flag
The -f parameter specifies the YAML parameter file to use.
-r flag
The -r parameter is used to pull the Docker images included in the package into the given local Docker registry.
-t flag
The -t parameter is used to tag the images with localhost. In the scenario where the cluster registry is exposed to the admin via a network-internal domain name that the nodes cannot resolve, the -t flag can be used to specify the cluster-internal hostname of the registry.
Command ‘kubeopsctl drain’
The command kubeopsctl drain is used to drain a cluster, zone or node.
In this example we are draining a cluster:
kubeopsctl drain cluster/example
In this example we are draining a zone:
kubeopsctl drain zone/zone1
In this example we are draining a node:
kubeopsctl drain node/master1
Note: draining nodes can cause issues if Rook is installed, because Rook defines pod disruption budgets; enough nodes must remain ready to satisfy them.
Command ‘kubeopsctl uncordon’
The command kubeopsctl uncordon is used to uncordon a cluster, zone or node.
In this example we are uncordoning a cluster:
kubeopsctl uncordon cluster/example
In this example we are uncordoning a zone:
kubeopsctl uncordon zone/zone1
In this example we are uncordoning a node:
kubeopsctl uncordon node/master1
Command ‘kubeopsctl upgrade’
The command kubeopsctl upgrade is used to upgrade the Kubernetes version of a cluster, zone or node.
In this example we are upgrading a cluster:
kubeopsctl upgrade cluster/example -v 1.26.6
In this example we are upgrading a zone:
kubeopsctl upgrade zone/zone1 -v 1.26.6
In this example we are upgrading a node:
kubeopsctl upgrade node/master1 -v 1.26.6
-v flag
The -v parameter specifies the target Kubernetes version, which must be higher than the currently installed version.
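Since the target version must be higher than the current one, a quick pre-check before calling kubeopsctl upgrade can prevent a failed run. A minimal POSIX-shell sketch (the function name and version values are illustrative; GNU `sort -V` is assumed to be available):

```shell
# Sketch: returns success (0) only if the first version is strictly
# higher than the second; relies on GNU `sort -V` for version ordering.
version_gt() {
  [ "$1" != "$2" ] && \
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | tail -n1)" = "$1" ]
}

# Example: only call `kubeopsctl upgrade` when the target is newer.
current="1.25.0"; target="1.26.6"
if version_gt "$target" "$current"; then
  echo "upgrade allowed: $current -> $target"
fi
```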
Command ‘kubeopsctl status’
The command kubeopsctl status is used to get the status of a cluster.
Example:
kubeopsctl status cluster/cluster1 -v 1.26.6
2 - Documentation-kubeopsctl
Prerequisites
Minimum hardware and OS requirements for a Linux machine are:
| OS | Minimum Requirements |
|---|---|
| Red Hat Enterprise Linux 8 | 8 CPU cores, 16 GB memory |
| OpenSUSE 15 | 8 CPU cores, 16 GB memory |
At least one machine should be used as an admin
machine for cluster lifecycle management.
Requirements on admin
The following requirements must be fulfilled on the admin machine.
- All utilized users require sudo privileges. If you are using KubeOps as a non-root user, the user needs sudo rights; on openSUSE and RHEL 8 environments, add the user to the wheel group. Make sure you switch to that user with:
su -l <user>
- The admin machine must be synchronized with the current time.
- You need an internet connection to use the default KubeOps registry registry.preprod.kubernative.net/kubeops.
A local registry can be used in an air-gapped environment. KubeOps only supports secure registries.
If you use an insecure registry, it is important to list it as an insecure registry in the container runtime configuration (/etc/containers/registries.conf for Podman, /etc/docker/daemon.json for Docker).
You can also create your own registry instead of using the default one. See the guide Create a new Repository for more information.
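For Podman, an insecure-registry entry in /etc/containers/registries.conf uses the TOML table format; a minimal sketch (the address below is this guide's sample Harbor endpoint, adjust it to your own registry):

```toml
[[registry]]
# Registry reachable over plain HTTP or with an untrusted certificate.
location = "10.2.10.11:30002"
insecure = true
```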
- kosi 2.8.0 must be installed on your machine. See the Quick Start Guide for how this is done.
- It is recommended that runc is uninstalled. To uninstall runc, use the following command:
On RHEL 8:
dnf remove -y runc
On openSUSE:
zypper remove -y runc
- tc should be installed. To install tc, use the following command:
On RHEL 8:
dnf install -y tc
On openSUSE:
zypper install -y iproute2
- For OpenSearch, /etc/sysctl.conf must be configured: add the line
vm.max_map_count=262144
and afterwards run
sysctl -p
to apply the change.
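The sysctl step above can be sketched as a small idempotent helper (a sketch only; the function name and the parameterized file path are illustrative, the real target is /etc/sysctl.conf followed by `sysctl -p` as root):

```shell
# Sketch: append vm.max_map_count=262144 to a sysctl config file
# only if it is not already present (idempotent).
ensure_max_map_count() {
  conf="$1"
  grep -q '^vm.max_map_count=262144$' "$conf" 2>/dev/null || \
    echo 'vm.max_map_count=262144' >> "$conf"
}

# Real usage (as root): ensure_max_map_count /etc/sysctl.conf && sysctl -p
```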
- Podman must be installed on your machine. To install Podman, use the following command:
On RHEL 8:
dnf install podman
On openSUSE:
zypper install podman
Warning: there can be conflicts with containerd, so it is recommended to remove containerd.io before installing the podman package.
- You must install kubeops-basic-plugins:0.4.0. Simply type in the following command to install the basic plugins:
kosi install --hub=public pia/kubeops-basic-plugins:0.4.0
Note that the plugins must be installed as the root user.
- You must install kubeops-kubernetes-plugins:0.5.0. Simply type in the following command to install the Kubernetes plugins:
kosi install --hub public pia/kubeops-kubernetes-plugins:0.5.0
Requirements for each node
The following requirements must be fulfilled on each node.
- All utilized users require sudo privileges. If you are using KubeOps as a non-root user, the user needs sudo rights; on openSUSE and RHEL 8 environments, add the user to the wheel group.
- Every machine must be synchronized with the current time.
- You have to assign lowercase, unique hostnames for every machine you are using. We recommend using self-explanatory hostnames.
To set the hostname on your machine, use the following command:
hostnamectl set-hostname <name of node>
Example: Use the commands below to set the hostnames admin, master, node1 and node2 on the respective machines (requires sudo privileges):
hostnamectl set-hostname admin
hostnamectl set-hostname master
hostnamectl set-hostname node1
hostnamectl set-hostname node2
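Because hostnames must be lowercase and unique, a small pre-check before calling hostnamectl can catch mistakes early; a POSIX-shell sketch (the function name is illustrative):

```shell
# Sketch: reject hostnames containing uppercase letters, since KubeOps
# requires lowercase hostnames.
is_lowercase_hostname() {
  case "$1" in
    *[A-Z]*) return 1 ;;  # contains an uppercase letter
    *)       return 0 ;;
  esac
}

# Example: is_lowercase_hostname "node1" succeeds, "Node1" fails.
```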
It is recommended that a DNS service is running. If you do not have a DNS service, you can edit the /etc/hosts file instead. An example entry in the /etc/hosts file could be:
10.2.10.12 admin
10.2.10.13 master1
10.2.10.14 master2
10.2.10.15 master3
10.2.10.16 node1
10.2.10.17 node2
10.2.10.18 node3
- To establish an SSH connection between your machines, you either need an SSH key or you need to install sshpass.
1. Generate an SSH key on the admin machine using the following command:
ssh-keygen
Two keys are generated in the ~/.ssh directory: id_rsa (private) and id_rsa.pub (public).
2. Copy the SSH key from the admin machine to your node machine(s) with the following command:
ssh-copy-id <ip address or hostname of your node machine>
3. Now try establishing a connection to your node machine(s):
ssh <ip address or hostname of your node machine>
How to Configure Cluster/Nodes/Software using a yaml file
You need a cluster definition file that describes the different aspects of your cluster. Each file describes exactly one cluster.
Full yaml syntax
Choose the appropriate imagePullRegistry based on your KubeOps version.
### General values for registry access ###
imagePullRegistry: "registry.preprod.kubernative.net/lima" # mandatory, the registry from which the images for the cluster are pulled
localRegistry: false # mandatory, set to true if you use a local registry, if you want to set the value to true then you also have to set harbor to true
### General values for registry access ###
imagePullRegistry: "registry.preprod.kubernative.net/kubeops" # mandatory, the registry from which the images for the cluster are pulled
localRegistry: false # mandatory, set to true if you use a local registry, if you want to set the value to true then you also have to set harbor to true
apiVersion: kubeops/kubeopsctl/alpha/v5 # mandatory
clusterName: "example" # mandatory
clusterUser: "myuser" # mandatory
kubernetesVersion: "1.28.2" # mandatory
#masterHost: optional if you have an hostname, default value in "masterIP"
masterIP: 10.2.10.12 # mandatory
firewall: "nftables" # mandatory, default "nftables"
pluginNetwork: "calico" # mandatory, default "calico"
containerRuntime: "containerd" # mandatory
clusterOS: "Red Hat Enterprise Linux" # mandatory, can be "Red Hat Enterprise Linux" or "openSUSE Leap"
### Additional values for cluster configuration
useInsecureRegistry: false # optional, default is false
ignoreFirewallError: false # optional, default is false
serviceSubnet: 192.168.128.0/17 # optional, default "192.168.128.0/17"
podSubnet: 192.168.0.0/17 # optional, default "192.168.0.0/17"
debug: false # optional, default is false
logLevel: vvvvv # optional, default "vvvvv"
systemCpu: "1" # optional, default "1"
systemMemory: "2G" # optional, default "2G"
sudo: true # optional, default is true
tmpCopyDir: "/tmp" # optional, default is /tmp
createCluster: true # optional, default is true
updateRegistry: true # optional, default is true
zones:
- name: zone1
nodes:
master:
- name: cluster1master1
ipAdress: 10.2.10.11
user: myuser
systemCpu: 100m
systemMemory: 100Mi
status: active
kubeversion: 1.28.2
- name: cluster1master2
ipAdress: 10.2.10.12
user: myuser
systemCpu: 100m
systemMemory: 100Mi
status: active
kubeversion: 1.28.2
worker:
- name: cluster1worker1
ipAdress: 10.2.10.14
user: myuser
systemCpu: 100m
systemMemory: 100Mi
status: active
kubeversion: 1.28.2
- name: cluster1worker2
ipAdress: 10.2.10.15
systemCpu: 100m
systemMemory: 100Mi
status: active
kubeversion: 1.28.2
- name: zone2
nodes:
master:
- name: cluster1master3
ipAdress: 10.2.10.13
user: myuser
systemCpu: 100m
systemMemory: 100Mi
status: drained
kubeversion: 1.28.2
worker:
- name: cluster1worker1
ipAdress: 10.2.10.16
user: myuser
systemCpu: 100m
systemMemory: 100Mi
status: active
kubeversion: 1.28.2
# set to true if you want to install it into your cluster
rook-ceph: true # mandatory
harbor: true # mandatory
opensearch: true # mandatory
opensearch-dashboards: true # mandatory
logstash: true # mandatory
filebeat: true # mandatory
prometheus: true # mandatory
opa: true # mandatory
kubeops-dashboard: true # mandatory
certman: true # mandatory
ingress: true # mandatory
keycloak: true # mandatory
velero: true # mandatory
nameSpace: "kubeops" #optional, the default value is different for each application
storageClass: "rook-cephfs" # optional, default value is "rook-cephfs"
###Values for Rook-Ceph###
rookValues:
namespace: kubeops
cluster:
spec:
dataDirHostPath: "/var/lib/rook" # optional, default is /var/lib/rook
storage:
useAllNodes: true # optional, default value: true
useAllDevices: true # optional, default value: true
deviceFilter: "^sd[a-b]" # optional, will only be used if useAllDevices is set to false
config:
metadataDevice: "sda" # optional, only set this value, if there is a device available
nodes: # optional if useAllNodes is set to true, otherwise mandatory
    - name: "<ip-address of node_1>"
devices:
- name: "sdb"
    - name: "<ip-address of node_2>"
deviceFilter: "^sd[a-b]"
config:
metadataDevice: "sda" # optional
resources:
mgr:
requests:
cpu: "500m" # optional, default is 500m, limit: 1000m
memory: "512Mi" # optional, default is 1Gi, limit: 1Gi
mon:
requests:
cpu: "1" # optional, default is 1, limit: 2000m
memory: "1Gi" # optional, default is 1Gi, limit: 2Gi
osd:
requests:
cpu: "1" # optional, default is 1, limit: 2
memory: "1Gi" # optional, default is 4Gi, limit: 4Gi
operator:
data:
rookLogLevel: "DEBUG" # optional, default is DEBUG
#-------------------------------------------------------------------------------------------------------------------------------
### Values for Harbor deployment ###
## For a detailed explanation of each key see: https://artifacthub.io/packages/helm/harbor/harbor/1.8.1#configuration ##
harborValues:
namespace: kubeops # optional, default is kubeops
harborpass: "password" # mandatory: set password for harbor access
databasePassword: "Postgres_Password" # mandatory: set password for database access
redisPassword: "Redis_Password" # mandatory: set password for redis access
externalURL: http://10.2.10.11:30002 # mandatory, the ip address and port from which harbor is accessible outside of the cluster
nodePort: 30002 # mandatory
hostname: harbor.local # mandatory
harborPersistence:
persistentVolumeClaim:
registry:
size: 40Gi # optional, default is 40Gi
storageClass: "rook-cephfs" #optional, default is rook-cephfs
jobservice:
jobLog:
size: 1Gi # optional, default is 1Gi
storageClass: "rook-cephfs" #optional, default is rook-cephfs
database:
size: 1Gi # optional, default is 1Gi
storageClass: "rook-cephfs" #optional, default is rook-cephfs
redis:
size: 1Gi # optional, default is 1Gi
storageClass: "rook-cephfs" #optional, default is rook-cephfs
trivy:
size: 5Gi # optional, default is 5Gi
storageClass: "rook-cephfs" #optional, default is rook-cephfs
#--------------------------------------------------------------------------------------------------------------------------------------
###Values for filebeat deployment###
filebeatValues:
namespace: kubeops # optional, default is kubeops
#--------------------------------------------------------------------------------------------------------------------------------
###Values for Logstash deployment###
##For a detailed explanation of each key see: https://github.com/elastic/helm-charts/releases/tag/v7.16.3###
logstashValues:
namespace: kubeops
volumeClaimTemplate:
accessModes:
- ReadWriteMany #optional, default is [ReadWriteMany]
resources:
requests:
storage: 1Gi # mandatory, depending on storage capacity
storageClass: "rook-cephfs" #optional, default is rook-cephfs
#--------------------------------------------------------------------------------------------------------------------------------------
###Values for OpenSearch-Dashboards deployment###
##For a detailed explanation of each key see: https://github.com/opensearch-project/helm-charts/tree/main/charts/opensearch-dashboards###
openSearchDashboardValues:
namespace: kubeops
nodePort: 30050
#--------------------------------------------------------------------------------------------------------------------------------
###Values for OpenSearch deployment###
##For a detailed explanation of each key see: https://github.com/opensearch-project/helm-charts/tree/main/charts/opensearch###
openSearchValues:
namespace: kubeops
opensearchJavaOpts: "-Xmx512M -Xms512M" # optional, default is -Xmx512M -Xms512M
resources:
requests:
cpu: "250m" # optional, default is 250m
memory: "1024Mi" # optional, default is 1024Mi
limits:
cpu: "300m" # optional, default is 300m
memory: "3072Mi" # optional, default is 3072Mi
persistence:
size: 4Gi # mandatory
enabled: "true" # optional, default is true
enableInitChown: "false" # optional, default is false
labels:
enabled: "false" # optional, default is false
storageClass: "rook-cephfs" # optional, default is rook-cephfs
accessModes:
- "ReadWriteMany" # optional, default is {ReadWriteMany}
securityConfig:
enabled: false # optional, default value: false
### Additional values can be set, if securityConfig is enabled:
# path: "/usr/share/opensearch/plugins/opensearch-security/securityconfig"
# actionGroupsSecret:
# configSecret:
# internalUsersSecret: internal-users-config-secret
# rolesSecret:
# rolesMappingSecret:
# tenantsSecret:
# config:
# securityConfigSecret: ""
# dataComplete: true
# data: {}
replicas: "3" # optional, default is 3
#--------------------------------------------------------------------------------------------------------------------------------
###Values for Prometheus deployment###
prometheusValues:
namespace: kubeops # optional, default is kubeops
privateRegistry: false # optional, default is false
grafanaUsername: "user" # optional, default is user
grafanaPassword: "password" # optional, default is password
grafanaResources:
storageClass: "rook-cephfs" # optional, default is rook-cephfs
storage: 5Gi # optional, default is 5Gi
nodePort: 30211 # optional, default is 30211
prometheusResources:
storageClass: "rook-cephfs" # optional, default is rook-cephfs
storage: 25Gi # optional, default is 25Gi
retention: 10d # optional, default is 10d
retentionSize: "24GB" # optional, default is 24GB
nodePort: 32090
#--------------------------------------------------------------------------------------------------------------------------------
###Values for OPA deployment###
opaValues:
namespace: kubeops
#--------------------------------------------------------------------------------------------------------------------------------
###Values for KubeOps-Dashboard (Headlamp) deployment###
kubeOpsDashboardValues:
service:
nodePort: 30007
#--------------------------------------------------------------------------------------------------------------------------------
###Values for cert-manager deployment###
certmanValues:
namespace: kubeops
replicaCount: 3
logLevel: 2
secretName: root-secret
#--------------------------------------------------------------------------------------------------------------------------------
###Values for ingress-nginx deployment###
ingressValues:
namespace: kubeops
keycloakValues:
namespace: "kubeops" # Optional, default is "keycloak"
storageClass: "rook-cephfs" # Optional, default is "rook-cephfs"
keycloak:
auth:
adminUser: admin # Optional, default is admin
adminPassword: admin # Optional, default is admin
postgresql:
auth:
postgresUserPassword: "" # Optional, default is ""
username: bn_keycloak # Optional, default is "bn_keycloak"
password: "" # Optional, default is ""
database: bitnami_keycloak # Optional, default is "bitnami_keycloak"
volumeSize: 8Gi
veleroValues:
namespace: "velero"
accessKeyId: "your_s3_storage_username"
secretAccessKey: "your_s3_storage_password"
useNodeAgent: false
defaultVolumesToFsBackup: false
provider: "aws"
bucket: "velero"
useVolumeSnapshots: false
backupLocationConfig:
region: "minio"
s3ForcePathStyle: true
s3Url: "http://minio.velero.svc:9000"
How to use kubeopsctl
Apply changes to the cluster
kubeopsctl apply -f kubeopsctl.yaml
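Before running kubeopsctl apply, a quick grep-based sanity check for the mandatory top-level keys (taken from the comments in the syntax above) can catch an incomplete parameter file. A sketch, assuming the file is named kubeopsctl.yaml:

```shell
# Sketch: report any missing mandatory top-level keys in a kubeopsctl
# parameter file; returns non-zero if at least one key is absent.
check_mandatory_keys() {
  file="$1"; missing=0
  for key in apiVersion clusterName clusterUser kubernetesVersion \
             masterIP firewall pluginNetwork containerRuntime clusterOS; do
    grep -q "^${key}:" "$file" || { echo "missing key: $key" >&2; missing=1; }
  done
  return "$missing"
}

# Usage: check_mandatory_keys kubeopsctl.yaml && kubeopsctl apply -f kubeopsctl.yaml
```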
3 - Fileformats
Fileformats in kubeopsctl
This documentation shows you all the different kinds of file formats kubeopsctl uses and how to use them.
How to configure Cluster/Nodes/Software using the kubeopsctl.yaml file
You need a cluster definition file that describes the different aspects of your cluster. Each file describes exactly one cluster.
Choose the appropriate imagePullRegistry based on your KubeOps version.
### General values for registry access ###
imagePullRegistry: "registry.preprod.kubernative.net/lima" # mandatory, the registry from which the images for the cluster are pulled
localRegistry: false # mandatory, set to true if you use a local registry, if you want to set the value to true then you also have to set harbor to true
### General values for registry access ###
imagePullRegistry: "registry.preprod.kubeops.net" # mandatory, the registry from which the images for the cluster are pulled
localRegistry: false # mandatory, set to true if you use a local registry, if you want to set the value to true then you also have to set harbor to true
Full yaml syntax
apiVersion: kubeops/kubeopsctl/alpha/v5 # mandatory
clusterName: "example" # mandatory
clusterUser: "myuser" # mandatory
kubernetesVersion: "1.28.2" # mandatory
#masterHost: optional if you have an hostname, default value in "masterIP"
masterIP: 10.2.10.12 # mandatory
firewall: "nftables" # mandatory, default "nftables"
pluginNetwork: "calico" # mandatory, default "calico"
containerRuntime: "containerd" # mandatory
clusterOS: "Red Hat Enterprise Linux" # mandatory, must be "Red Hat Enterprise Linux"
### Additional values for cluster configuration
useInsecureRegistry: false # optional, default is false
ignoreFirewallError: false # optional, default is false
serviceSubnet: 192.168.128.0/17 # optional, default "192.168.128.0/17"
podSubnet: 192.168.0.0/17 # optional, default "192.168.0.0/17"
debug: false # optional, default is false
logLevel: vvvvv # optional, default "vvvvv"
systemCpu: "1" # optional, default "1"
systemMemory: "2G" # optional, default "2G"
sudo: true # optional, default is true
tmpCopyDir: "/tmp" # optional, default is /tmp
createCluster: true # optional, default is true
updateRegistry: true # optional, default is true
zones:
- name: zone1
nodes:
master:
- name: cluster1master1
ipAdress: 10.2.10.11
user: myuser
systemCpu: 100m
systemMemory: 100Mi
status: active
kubeversion: 1.28.2
- name: cluster1master2
ipAdress: 10.2.10.12
user: myuser
systemCpu: 100m
systemMemory: 100Mi
status: active
kubeversion: 1.28.2
worker:
- name: cluster1worker1
ipAdress: 10.2.10.14
user: myuser
systemCpu: 100m
systemMemory: 100Mi
status: active
kubeversion: 1.28.2
- name: cluster1worker2
ipAdress: 10.2.10.15
systemCpu: 100m
systemMemory: 100Mi
status: active
kubeversion: 1.28.2
- name: zone2
nodes:
master:
- name: cluster1master3
ipAdress: 10.2.10.13
user: myuser
systemCpu: 100m
systemMemory: 100Mi
status: drained
kubeversion: 1.28.2
worker:
- name: cluster1worker1
ipAdress: 10.2.10.16
user: myuser
systemCpu: 100m
systemMemory: 100Mi
status: active
kubeversion: 1.28.2
# set to true if you want to install it into your cluster
rook-ceph: true # mandatory
harbor: true # mandatory
opensearch: true # mandatory
opensearch-dashboards: true # mandatory
logstash: true # mandatory
filebeat: true # mandatory
prometheus: true # mandatory
opa: true # mandatory
kubeops-dashboard: true # mandatory
certman: true # mandatory
ingress: true # mandatory
keycloak: true # mandatory
velero: true # mandatory
nameSpace: "kubeops" #optional, the default value is different for each application
storageClass: "rook-cephfs" # optional, default value is "rook-cephfs"
###Values for Rook-Ceph###
rookValues:
namespace: kubeops
cluster:
spec:
dataDirHostPath: "/var/lib/rook" # optional, default is /var/lib/rook
storage:
useAllNodes: true # optional, default value: true
useAllDevices: true # optional, default value: true
deviceFilter: "^sd[a-b]" # optional, will only be used if useAllDevices is set to false
config:
metadataDevice: "sda" # optional, only set this value, if there is a device available
nodes: # optional if useAllNodes is set to true, otherwise mandatory
    - name: "<ip-address of node_1>"
devices:
- name: "sdb"
    - name: "<ip-address of node_2>"
deviceFilter: "^sd[a-b]"
config:
metadataDevice: "sda" # optional
resources:
mgr:
requests:
cpu: "500m" # optional, default is 500m, limit: 1000m
memory: "512Mi" # optional, default is 1Gi, limit: 1Gi
mon:
requests:
cpu: "1" # optional, default is 1, limit: 2000m
memory: "1Gi" # optional, default is 1Gi, limit: 2Gi
osd:
requests:
cpu: "1" # optional, default is 1, limit: 2
memory: "1Gi" # optional, default is 4Gi, limit: 4Gi
operator:
data:
rookLogLevel: "DEBUG" # optional, default is DEBUG
#-------------------------------------------------------------------------------------------------------------------------------
### Values for Harbor deployment ###
## For a detailed explanation of each key see: https://artifacthub.io/packages/helm/harbor/harbor/1.8.1#configuration ##
harborValues:
namespace: kubeops # optional, default is kubeops
harborpass: "password" # mandatory: set password for harbor access
databasePassword: "Postgres_Password" # mandatory: set password for database access
redisPassword: "Redis_Password" # mandatory: set password for redis access
externalURL: http://10.2.10.11:30002 # mandatory, the ip address and port from which harbor is accessible outside of the cluster
nodePort: 30002 # mandatory
hostname: harbor.local # mandatory
harborPersistence:
persistentVolumeClaim:
registry:
size: 40Gi # optional, default is 40Gi
storageClass: "rook-cephfs" #optional, default is rook-cephfs
jobservice:
jobLog:
size: 1Gi # optional, default is 1Gi
storageClass: "rook-cephfs" #optional, default is rook-cephfs
database:
size: 1Gi # optional, default is 1Gi
storageClass: "rook-cephfs" #optional, default is rook-cephfs
redis:
size: 1Gi # optional, default is 1Gi
storageClass: "rook-cephfs" #optional, default is rook-cephfs
trivy:
size: 5Gi # optional, default is 5Gi
storageClass: "rook-cephfs" #optional, default is rook-cephfs
#--------------------------------------------------------------------------------------------------------------------------------------
###Values for filebeat deployment###
filebeatValues:
namespace: kubeops # optional, default is kubeops
#--------------------------------------------------------------------------------------------------------------------------------
###Values for Logstash deployment###
##For a detailed explanation of each key see: https://github.com/elastic/helm-charts/releases/tag/v7.16.3###
logstashValues:
namespace: kubeops
volumeClaimTemplate:
accessModes:
- ReadWriteMany #optional, default is [ReadWriteMany]
resources:
requests:
storage: 1Gi # mandatory, depending on storage capacity
storageClass: "rook-cephfs" #optional, default is rook-cephfs
#--------------------------------------------------------------------------------------------------------------------------------------
###Values for OpenSearch-Dashboards deployment###
##For a detailed explanation of each key see: https://github.com/opensearch-project/helm-charts/tree/main/charts/opensearch-dashboards###
openSearchDashboardValues:
namespace: kubeops
nodePort: 30050
#--------------------------------------------------------------------------------------------------------------------------------
###Values for OpenSearch deployment###
##For a detailed explanation of each key see: https://github.com/opensearch-project/helm-charts/tree/main/charts/opensearch###
openSearchValues:
namespace: kubeops
opensearchJavaOpts: "-Xmx512M -Xms512M" # optional, default is -Xmx512M -Xms512M
resources:
requests:
cpu: "250m" # optional, default is 250m
memory: "1024Mi" # optional, default is 1024Mi
limits:
cpu: "300m" # optional, default is 300m
memory: "3072Mi" # optional, default is 3072Mi
persistence:
size: 4Gi # mandatory
enabled: "true" # optional, default is true
enableInitChown: "false" # optional, default is false
labels:
enabled: "false" # optional, default is false
storageClass: "rook-cephfs" # optional, default is rook-cephfs
accessModes:
- "ReadWriteMany" # optional, default is {ReadWriteMany}
securityConfig:
enabled: false # optional, default value: false
### Additional values can be set, if securityConfig is enabled:
# path: "/usr/share/opensearch/plugins/opensearch-security/securityconfig"
# actionGroupsSecret:
# configSecret:
# internalUsersSecret: internal-users-config-secret
# rolesSecret:
# rolesMappingSecret:
# tenantsSecret:
# config:
# securityConfigSecret: ""
# dataComplete: true
# data: {}
replicas: "3" # optional, default is 3
#--------------------------------------------------------------------------------------------------------------------------------
###Values for Prometheus deployment###
prometheusValues:
namespace: kubeops # optional, default is kubeops
privateRegistry: false # optional, default is false
grafanaUsername: "user" # optional, default is user
grafanaPassword: "password" # optional, default is password
grafanaResources:
storageClass: "rook-cephfs" # optional, default is rook-cephfs
storage: 5Gi # optional, default is 5Gi
nodePort: 30211 # optional, default is 30211
prometheusResources:
storageClass: "rook-cephfs" # optional, default is rook-cephfs
storage: 25Gi # optional, default is 25Gi
retention: 10d # optional, default is 10d
retentionSize: "24GB" # optional, default is 24GB
nodePort: 32090
#--------------------------------------------------------------------------------------------------------------------------------
###Values for OPA deployment###
opaValues:
namespace: kubeops
#--------------------------------------------------------------------------------------------------------------------------------
###Values for KubeOps-Dashboard (Headlamp) deployment###
kubeOpsDashboardValues:
service:
nodePort: 30007
#--------------------------------------------------------------------------------------------------------------------------------
###Values for cert-manager deployment###
certmanValues:
namespace: kubeops
replicaCount: 3
logLevel: 2
secretName: root-secret
#--------------------------------------------------------------------------------------------------------------------------------
###Values for ingress-nginx deployment###
ingressValues:
namespace: kubeops
keycloakValues:
namespace: "kubeops" # Optional, default is "keycloak"
storageClass: "rook-cephfs" # Optional, default is "rook-cephfs"
keycloak:
auth:
adminUser: admin # Optional, default is admin
adminPassword: admin # Optional, default is admin
existingSecret: "" # Optional, default is ""
postgresql:
auth:
postgresPassword: "" # Optional, default is ""
username: bn_keycloak # Optional, default is "bn_keycloak"
password: "" # Optional, default is ""
database: bitnami_keycloak # Optional, default is "bitnami_keycloak"
existingSecret: "" # Optional, default is ""
veleroValues:
namespace: "velero"
accessKeyId: "your_s3_storage_username"
secretAccessKey: "your_s3_storage_password"
useNodeAgent: false
defaultVolumesToFsBackup: false
provider: "aws"
bucket: "velero"
useVolumeSnapshots: false
backupLocationConfig:
region: "minio"
s3ForcePathStyle: true
s3Url: "http://minio.velero.svc:9000"
apiVersion: kubeops/kubeopsctl/alpha/v4 # mandatory
clusterName: "example" # mandatory
clusterUser: "myuser" # mandatory
kubernetesVersion: "1.28.2" # mandatory
#masterHost: optional if you have a hostname; defaults to the value of "masterIP"
masterIP: 10.2.10.11 # mandatory
firewall: "nftables" # mandatory, default "nftables"
pluginNetwork: "calico" # mandatory, default "calico"
containerRuntime: "containerd" # mandatory
clusterOS: "Red Hat Enterprise Linux" # mandatory, must be "Red Hat Enterprise Linux"
### Additional values for cluster configuration
useInsecureRegistry: false # optional, default is false
ignoreFirewallError: false # optional, default is false
serviceSubnet: 192.168.128.0/17 # optional, default "192.168.128.0/17"
podSubnet: 192.168.0.0/17 # optional, default "192.168.0.0/17"
debug: false # optional, default is false
logLevel: vvvvv # optional, default "vvvvv"
systemCpu: "1" # optional, default "1"
systemMemory: "2G" # optional, default "2G"
sudo: true # optional, default is true
tmpCopyDir: "/tmp" # optional, default is /tmp
createCluster: true # optional, default is true
updateRegistry: true # optional, default is true
zones:
- name: zone1
nodes:
master:
- name: cluster1master1
ipAdress: 10.2.10.11
user: myuser
systemCpu: 100m
systemMemory: 100Mi
status: active
kubeversion: 1.28.2
- name: cluster1master2
ipAdress: 10.2.10.12
user: myuser
systemCpu: 100m
systemMemory: 100Mi
status: active
kubeversion: 1.28.2
worker:
- name: cluster1worker1
ipAdress: 10.2.10.14
user: myuser
systemCpu: 100m
systemMemory: 100Mi
status: active
kubeversion: 1.28.2
- name: cluster1worker2
ipAdress: 10.2.10.15
systemCpu: 100m
systemMemory: 100Mi
status: active
kubeversion: 1.28.2
- name: zone2
nodes:
master:
- name: cluster1master3
ipAdress: 10.2.10.13
user: myuser
systemCpu: 100m
systemMemory: 100Mi
status: drained
kubeversion: 1.28.2
worker:
- name: cluster1worker3
ipAdress: 10.2.10.16
user: myuser
systemCpu: 100m
systemMemory: 100Mi
status: active
kubeversion: 1.28.2
# set to true if you want to install it into your cluster
rook-ceph: true # mandatory
harbor: true # mandatory
opensearch: true # mandatory
opensearch-dashboards: true # mandatory
logstash: true # mandatory
filebeat: true # mandatory
prometheus: true # mandatory
opa: true # mandatory
kubeops-dashboard: true # mandatory
certman: true # mandatory
ingress: true # mandatory
keycloak: true # mandatory
velero: true # mandatory
nameSpace: "kubeops" #optional, the default value is different for each application
storageClass: "rook-cephfs" # optional, default value is "rook-cephfs"
###Values for Rook-Ceph###
rookValues:
namespace: kubeops
cluster:
spec:
dataDirHostPath: "/var/lib/rook" # optional, default is /var/lib/rook
storage:
useAllNodes: true # optional, default value: true
useAllDevices: true # optional, default value: true
deviceFilter: "^sd[a-b]" # optional, will only be used if useAllDevices is set to false
config:
metadataDevice: "sda" # optional, only set this value, if there is a device available
nodes: # optional if useAllNodes is set to true, otherwise mandatory
- name: "<ip-address of node_1>"
devices:
- name: "sdb"
- name: "<ip-address of node_2>"
deviceFilter: "^sd[a-b]"
config:
metadataDevice: "sda" # optional
resources:
mgr:
requests:
cpu: "500m" # optional, default is 500m, limit: 1000m
memory: "512Mi" # optional, default is 1Gi, limit: 1Gi
mon:
requests:
cpu: "1" # optional, default is 1, limit: 2000m
memory: "1Gi" # optional, default is 1Gi, limit: 2Gi
osd:
requests:
cpu: "1" # optional, default is 1, limit: 2
memory: "1Gi" # optional, default is 4Gi, limit: 4Gi
cephFileSystems:
requests:
cpu: "1" # optional, default is 1, limit: 2
memory: "1Gi" # optional, default is 1, limit: 4Gi
cephObjectStores:
requests:
cpu: "1" # optional, default is 1, limit: 2
memory: "1Gi" # optional, default is 1Gi, limit: 2Gi
operator:
data:
rookLogLevel: "DEBUG" # optional, default is DEBUG
#-------------------------------------------------------------------------------------------------------------------------------
### Values for Harbor deployment ###
## For detailed explanation for each key see: https://artifacthub.io/packages/helm/harbor/harbor/1.8.1#configuration ##
harborValues:
namespace: kubeops # optional, default is kubeops
harborpass: "password" # mandatory: set password for harbor access
databasePassword: "Postgres_Password" # mandatory: set password for database access
redisPassword: "Redis_Password" # mandatory: set password for redis access
externalURL: http://10.2.10.11:30002 # mandatory, the IP address and port from which Harbor is accessible outside of the cluster
nodePort: 30002 # mandatory
hostname: harbor.local # mandatory
harborPersistence:
persistentVolumeClaim:
registry:
size: 40Gi # optional, default is 40Gi
storageClass: "rook-cephfs" #optional, default is rook-cephfs
jobservice:
jobLog:
size: 1Gi # optional, default is 1Gi
storageClass: "rook-cephfs" #optional, default is rook-cephfs
database:
size: 1Gi # optional, default is 1Gi
storageClass: "rook-cephfs" #optional, default is rook-cephfs
redis:
size: 1Gi # optional, default is 1Gi
storageClass: "rook-cephfs" #optional, default is rook-cephfs
trivy:
size: 5Gi # optional, default is 5Gi
storageClass: "rook-cephfs" #optional, default is rook-cephfs
#--------------------------------------------------------------------------------------------------------------------------------------
###Values for filebeat deployment###
filebeatValues:
namespace: kubeops # optional, default is kubeops
#--------------------------------------------------------------------------------------------------------------------------------
###Values for Logstash deployment###
##For detailed explanation for each key see: https://github.com/elastic/helm-charts/releases/tag/v7.16.3###
logstashValues:
namespace: kubeops
volumeClaimTemplate:
accessModes:
- ReadWriteMany #optional, default is [ReadWriteMany]
resources:
requests:
storage: 1Gi # mandatory, depending on storage capacity
storageClass: "rook-cephfs" #optional, default is rook-cephfs
#--------------------------------------------------------------------------------------------------------------------------------------
###Values for OpenSearch-Dashboards deployment###
##For detailed explanation for each key see: https://github.com/opensearch-project/helm-charts/tree/main/charts/opensearch-dashboards###
openSearchDashboardValues:
namespace: kubeops
nodePort: 30050
#--------------------------------------------------------------------------------------------------------------------------------
###Values for OpenSearch deployment###
##For detailed explanation for each key see: https://github.com/opensearch-project/helm-charts/tree/main/charts/opensearch###
openSearchValues:
namespace: kubeops
opensearchJavaOpts: "-Xmx512M -Xms512M" # optional, default is -Xmx512M -Xms512M
resources:
requests:
cpu: "250m" # optional, default is 250m
memory: "1024Mi" # optional, default is 1024Mi
limits:
cpu: "300m" # optional, default is 300m
memory: "3072Mi" # optional, default is 3072Mi
persistence:
size: 4Gi # mandatory
enabled: "true" # optional, default is true
enableInitChown: "false" # optional, default is false
labels:
enabled: "false" # optional, default is false
storageClass: "rook-cephfs" # optional, default is rook-cephfs
accessModes:
- "ReadWriteMany" # optional, default is [ReadWriteMany]
securityConfig:
enabled: false # optional, default value: false
### Additional values can be set, if securityConfig is enabled:
# path: "/usr/share/opensearch/plugins/opensearch-security/securityconfig"
# actionGroupsSecret:
# configSecret:
# internalUsersSecret: internal-users-config-secret
# rolesSecret:
# rolesMappingSecret:
# tenantsSecret:
# config:
# securityConfigSecret: ""
# dataComplete: true
# data: {}
replicas: "3" # optional, default is 3
#--------------------------------------------------------------------------------------------------------------------------------
###Values for Prometheus deployment###
prometheusValues:
namespace: kubeops # optional, default is kubeops
privateRegistry: false # optional, default is false
grafanaUsername: "user" # optional, default is user
grafanaPassword: "password" # optional, default is password
grafanaResources:
storageClass: "rook-cephfs" # optional, default is rook-cephfs
storage: 5Gi # optional, default is 5Gi
nodePort: 30211 # optional, default is 30211
prometheusResources:
storageClass: "rook-cephfs" # optional, default is rook-cephfs
storage: 25Gi # optional, default is 25Gi
retention: 10d # optional, default is 10d
retentionSize: "24GB" # optional, default is 24GB
nodePort: 32090
#--------------------------------------------------------------------------------------------------------------------------------
###Values for OPA deployment###
opaValues:
namespace: kubeops
#--------------------------------------------------------------------------------------------------------------------------------
###Values for KubeOps-Dashboard (Headlamp) deployment###
kubeOpsDashboardValues:
service:
nodePort: 30007
#--------------------------------------------------------------------------------------------------------------------------------
###Values for cert-manager deployment###
certmanValues:
namespace: kubeops
replicaCount: 3
logLevel: 2
secretName: root-secret
#--------------------------------------------------------------------------------------------------------------------------------
###Values for ingress-nginx deployment###
ingressValues:
namespace: kubeops
keycloakValues:
namespace: "kubeops" # Optional, default is "keycloak"
storageClass: "rook-cephfs" # Optional, default is "rook-cephfs"
keycloak:
auth:
adminUser: admin # Optional, default is admin
adminPassword: admin # Optional, default is admin
existingSecret: "" # Optional, default is ""
postgresql:
auth:
postgresPassword: "" # Optional, default is ""
username: bn_keycloak # Optional, default is "bn_keycloak"
password: "" # Optional, default is ""
database: bitnami_keycloak # Optional, default is "bitnami_keycloak"
existingSecret: "" # Optional, default is ""
veleroValues:
namespace: "velero"
accessKeyId: "your_s3_storage_username"
secretAccessKey: "your_s3_storage_password"
useNodeAgent: false
defaultVolumesToFsBackup: false
provider: "aws"
bucket: "velero"
useVolumeSnapshots: false
backupLocationConfig:
region: "minio"
s3ForcePathStyle: true
s3Url: "http://minio.velero.svc:9000"
apiVersion: kubeops/kubeopsctl/alpha/v3 # mandatory
imagePullRegistry: "registry.preprod.kubeops.net"
clusterName: "example" # mandatory
clusterUser: "myuser" # mandatory
kubernetesVersion: "1.28.2" # mandatory
#masterHost: optional if you have a hostname; defaults to the value of "masterIP"
masterIP: 10.2.10.11 # mandatory
firewall: "nftables" # mandatory, default "nftables"
pluginNetwork: "calico" # mandatory, default "calico"
containerRuntime: "containerd" # mandatory
clusterOS: "Red Hat Enterprise Linux" # mandatory, must be "Red Hat Enterprise Linux"
### Additional values for cluster configuration
useInsecureRegistry: false # optional, default is false
ignoreFirewallError: false # optional, default is false
serviceSubnet: 192.168.128.0/17 # optional, default "192.168.128.0/17"
podSubnet: 192.168.0.0/17 # optional, default "192.168.0.0/17"
debug: false # optional, default is false
logLevel: vvvvv # optional, default "vvvvv"
systemCpu: "1" # optional, default "1"
systemMemory: "2G" # optional, default "2G"
sudo: true # optional, default is true
tmpCopyDir: "/tmp" # optional, default is /tmp
createCluster: true # optional, default is true
updateRegistry: true # optional, default is true
zones:
- name: zone1
nodes:
master:
- name: cluster1master1
ipAdress: 10.2.10.11
user: myuser
systemCpu: 100m
systemMemory: 100Mi
status: active
kubeversion: 1.28.2
- name: cluster1master2
ipAdress: 10.2.10.12
user: myuser
systemCpu: 100m
systemMemory: 100Mi
status: active
kubeversion: 1.28.2
worker:
- name: cluster1worker1
ipAdress: 10.2.10.14
user: myuser
systemCpu: 100m
systemMemory: 100Mi
status: active
kubeversion: 1.28.2
- name: cluster1worker2
ipAdress: 10.2.10.15
systemCpu: 100m
systemMemory: 100Mi
status: active
kubeversion: 1.28.2
- name: zone2
nodes:
master:
- name: cluster1master3
ipAdress: 10.2.10.13
user: myuser
systemCpu: 100m
systemMemory: 100Mi
status: drained
kubeversion: 1.28.2
worker:
- name: cluster1worker3
ipAdress: 10.2.10.16
user: myuser
systemCpu: 100m
systemMemory: 100Mi
status: active
kubeversion: 1.28.2
# set to true if you want to install it into your cluster
rook-ceph: true # mandatory
harbor: true # mandatory
opensearch: true # mandatory
opensearch-dashboards: true # mandatory
logstash: true # mandatory
filebeat: true # mandatory
prometheus: true # mandatory
opa: true # mandatory
headlamp: true # mandatory
certman: true # mandatory
ingress: true # mandatory
keycloak: true # mandatory
velero: true # mandatory
nameSpace: "kubeops" #optional, the default value is different for each application
storageClass: "rook-cephfs" # optional, default value is "rook-cephfs"
###Values for Rook-Ceph###
rookValues:
namespace: kubeops
cluster:
spec:
dataDirHostPath: "/var/lib/rook" # optional, default is /var/lib/rook
storage:
useAllNodes: true # optional, default value: true
useAllDevices: true # optional, default value: true
deviceFilter: "^sd[a-b]" # optional, will only be used if useAllDevices is set to false
config:
metadataDevice: "sda" # optional, only set this value, if there is a device available
nodes: # optional if useAllNodes is set to true, otherwise mandatory
- name: "<ip-address of node_1>"
devices:
- name: "sdb"
- name: "<ip-address of node_2>"
deviceFilter: "^sd[a-b]"
config:
metadataDevice: "sda" # optional
resources:
mgr:
requests:
cpu: "500m" # optional, default is 500m, limit: 1000m
memory: "512Mi" # optional, default is 1Gi, limit: 1Gi
mon:
requests:
cpu: "1" # optional, default is 1, limit: 2000m
memory: "1Gi" # optional, default is 1Gi, limit: 2Gi
osd:
requests:
cpu: "1" # optional, default is 1, limit: 2
memory: "1Gi" # optional, default is 4Gi, limit: 4Gi
cephFileSystems:
requests:
cpu: "1" # optional, default is 1, limit: 2
memory: "1Gi" # optional, default is 1, limit: 4Gi
cephObjectStores:
requests:
cpu: "1" # optional, default is 1, limit: 2
memory: "1Gi" # optional, default is 1Gi, limit: 2Gi
operator:
data:
rookLogLevel: "DEBUG" # optional, default is DEBUG
#-------------------------------------------------------------------------------------------------------------------------------
### Values for Harbor deployment ###
## For detailed explanation for each key see: https://artifacthub.io/packages/helm/harbor/harbor/1.8.1#configuration ##
harborValues:
namespace: kubeops # optional, default is kubeops
harborpass: "password" # mandatory: set password for harbor access
databasePassword: "Postgres_Password" # mandatory: set password for database access
redisPassword: "Redis_Password" # mandatory: set password for redis access
externalURL: http://10.2.10.11:30002 # mandatory, the IP address and port from which Harbor is accessible outside of the cluster
nodePort: 30002 # mandatory
hostname: harbor.local # mandatory
harborPersistence:
persistentVolumeClaim:
registry:
size: 40Gi # optional, default is 40Gi
storageClass: "rook-cephfs" #optional, default is rook-cephfs
jobservice:
jobLog:
size: 1Gi # optional, default is 1Gi
storageClass: "rook-cephfs" #optional, default is rook-cephfs
database:
size: 1Gi # optional, default is 1Gi
storageClass: "rook-cephfs" #optional, default is rook-cephfs
redis:
size: 1Gi # optional, default is 1Gi
storageClass: "rook-cephfs" #optional, default is rook-cephfs
trivy:
size: 5Gi # optional, default is 5Gi
storageClass: "rook-cephfs" #optional, default is rook-cephfs
#--------------------------------------------------------------------------------------------------------------------------------------
###Values for filebeat deployment###
filebeatValues:
namespace: kubeops # optional, default is kubeops
#--------------------------------------------------------------------------------------------------------------------------------
###Values for Logstash deployment###
##For detailed explanation for each key see: https://github.com/elastic/helm-charts/releases/tag/v7.16.3###
logstashValues:
namespace: kubeops
volumeClaimTemplate:
accessModes:
- ReadWriteMany #optional, default is [ReadWriteMany]
resources:
requests:
storage: 1Gi # mandatory, depending on storage capacity
storageClass: "rook-cephfs" #optional, default is rook-cephfs
#--------------------------------------------------------------------------------------------------------------------------------------
###Values for OpenSearch-Dashboards deployment###
##For detailed explanation for each key see: https://github.com/opensearch-project/helm-charts/tree/main/charts/opensearch-dashboards###
openSearchDashboardValues:
namespace: kubeops
nodePort: 30050
#--------------------------------------------------------------------------------------------------------------------------------
###Values for OpenSearch deployment###
##For detailed explanation for each key see: https://github.com/opensearch-project/helm-charts/tree/main/charts/opensearch###
openSearchValues:
namespace: kubeops
opensearchJavaOpts: "-Xmx512M -Xms512M" # optional, default is -Xmx512M -Xms512M
resources:
requests:
cpu: "250m" # optional, default is 250m
memory: "1024Mi" # optional, default is 1024Mi
limits:
cpu: "300m" # optional, default is 300m
memory: "3072Mi" # optional, default is 3072Mi
persistence:
size: 4Gi # mandatory
enabled: "true" # optional, default is true
enableInitChown: "false" # optional, default is false
labels:
enabled: "false" # optional, default is false
storageClass: "rook-cephfs" # optional, default is rook-cephfs
accessModes:
- "ReadWriteMany" # optional, default is [ReadWriteMany]
securityConfig:
enabled: false # optional, default value: false
### Additional values can be set, if securityConfig is enabled:
# path: "/usr/share/opensearch/plugins/opensearch-security/securityconfig"
# actionGroupsSecret:
# configSecret:
# internalUsersSecret: internal-users-config-secret
# rolesSecret:
# rolesMappingSecret:
# tenantsSecret:
# config:
# securityConfigSecret: ""
# dataComplete: true
# data: {}
replicas: "3" # optional, default is 3
#--------------------------------------------------------------------------------------------------------------------------------
###Values for Prometheus deployment###
prometheusValues:
namespace: kubeops # optional, default is kubeops
privateRegistry: false # optional, default is false
grafanaUsername: "user" # optional, default is user
grafanaPassword: "password" # optional, default is password
grafanaResources:
storageClass: "rook-cephfs" # optional, default is rook-cephfs
storage: 5Gi # optional, default is 5Gi
nodePort: 30211 # optional, default is 30211
prometheusResources:
storageClass: "rook-cephfs" # optional, default is rook-cephfs
storage: 25Gi # optional, default is 25Gi
retention: 10d # optional, default is 10d
retentionSize: "24GB" # optional, default is 24GB
nodePort: 32090
#--------------------------------------------------------------------------------------------------------------------------------
###Values for OPA deployment###
opaValues:
namespace: kubeops
#--------------------------------------------------------------------------------------------------------------------------------
###Values for Headlamp deployment###
headlampValues:
service:
nodePort: 30007
#--------------------------------------------------------------------------------------------------------------------------------
###Values for cert-manager deployment###
certmanValues:
namespace: kubeops
replicaCount: 3
logLevel: 2
#--------------------------------------------------------------------------------------------------------------------------------
###Values for ingress-nginx deployment###
ingressValues:
namespace: kubeops
keycloakValues:
namespace: "kubeops" # Optional, default is "keycloak"
storageClass: "rook-cephfs" # Optional, default is "rook-cephfs"
keycloak:
auth:
adminUser: admin # Optional, default is admin
adminPassword: admin # Optional, default is admin
existingSecret: "" # Optional, default is ""
postgresql:
auth:
postgresPassword: "" # Optional, default is ""
username: bn_keycloak # Optional, default is "bn_keycloak"
password: "" # Optional, default is ""
database: bitnami_keycloak # Optional, default is "bitnami_keycloak"
existingSecret: "" # Optional, default is ""
veleroValues:
namespace: "velero"
accessKeyId: "your_s3_storage_username"
secretAccessKey: "your_s3_storage_password"
useNodeAgent: false
defaultVolumesToFsBackup: false
provider: "aws"
bucket: "velero"
useVolumeSnapshots: false
backupLocationConfig:
region: "minio"
s3ForcePathStyle: true
s3Url: "http://minio.velero.svc:9000"
kubeopsctl.yaml in detail
You can find a more detailed description of the individual parameters here
Cluster creation
apiVersion: kubeops/kubeopsctl/alpha/v5 # mandatory
imagePullRegistry: "registry.preprod.kubeops.net" # mandatory
localRegistry: false # mandatory
### Values for setup configuration ###
clusterName: "testkubeopsctl" # mandatory
clusterUser: "myuser" # mandatory
kubernetesVersion: "1.28.2" # mandatory, check lima documentation
#masterHost: optional if you have a hostname; defaults to the value of "masterIP"
masterIP: 10.2.10.11 # mandatory
firewall: "nftables" # mandatory, default "nftables"
pluginNetwork: "calico" # mandatory, default "calico"
containerRuntime: "containerd" # mandatory, default "containerd"
clusterOS: "Red Hat Enterprise Linux" # optional; must be "Red Hat Enterprise Linux". Remove this line to use the OS already installed on the admin machine, which must also be "Red Hat Enterprise Linux"
These parameters configure cluster creation and the software used for it, e.g. the container runtime that runs the cluster's containers. There are also parameters for the lima software (see the lima documentation for further explanation).
Network and cluster configuration
### Additional values for cluster configuration
useInsecureRegistry: false # optional, default is false
ignoreFirewallError: false # optional, default is false
serviceSubnet: 192.168.128.0/17 # optional, default "192.168.128.0/17"
podSubnet: 192.168.0.0/17 # optional, default "192.168.0.0/17"
debug: true # optional, default is false
logLevel: vvvvv # optional, default "vvvvv"
systemCpu: "1" # optional, default "1"
systemMemory: "2G" # optional, default "2G"
sudo: true # optional, default is true
Also important are the networking parameters, such as the subnets used for pods and services inside the Kubernetes cluster.
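As an illustrative aid (not part of kubeopsctl), the default service and pod subnets can be checked for overlap with Python's standard `ipaddress` module; overlapping subnets would break in-cluster routing:

```python
import ipaddress

# Default subnets from the configuration above
service_subnet = ipaddress.ip_network("192.168.128.0/17")
pod_subnet = ipaddress.ip_network("192.168.0.0/17")

# The service and pod subnets must be disjoint ranges.
print(service_subnet.overlaps(pod_subnet))  # False: the defaults do not overlap
```

The same check is useful whenever you replace the defaults with your own CIDR ranges.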
Zones
# at least 3 masters and 3 workers are needed
zones:
- name: zone1
nodes:
master:
- name: cluster1master1
ipAdress: 10.2.10.11
user: myuser
systemCpu: 100m
systemMemory: 100Mi
status: active
kubeversion: 1.28.2
- name: cluster1master2
ipAdress: 10.2.10.12
user: myuser
systemCpu: 100m
systemMemory: 100Mi
status: active
kubeversion: 1.28.2
worker:
- name: cluster1worker1
ipAdress: 10.2.10.14
user: myuser
systemCpu: 100m
systemMemory: 100Mi
status: active
kubeversion: 1.28.2
- name: cluster1worker2
ipAdress: 10.2.10.15
systemCpu: 100m
systemMemory: 100Mi
status: active
kubeversion: 1.28.2
- name: zone2
nodes:
master:
- name: cluster1master3
ipAdress: 10.2.10.13
user: myuser
systemCpu: 100m
systemMemory: 100Mi
status: active
kubeversion: 1.28.2
worker:
- name: cluster1worker3
ipAdress: 10.2.10.16
user: myuser
systemCpu: 100m
systemMemory: 100Mi
status: drained
kubeversion: 1.28.2
New are the zones, which contain the master and worker nodes. A node can be in one of two states: active or drained. Nodes may also run two different Kubernetes versions, which makes updates in tranches possible with kubeopsctl. You can also set the system memory and system CPU that Kubernetes itself reserves on each node. Deleting nodes is not possible with kubeopsctl; use lima for that. Note that for an update in tranches, at least one master must already run the newer version.
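As a minimal sketch (a hypothetical helper, not part of kubeopsctl), the structural constraints described above can be checked before applying the file: at least three masters and three workers in total, and no duplicate node names across zones:

```python
def check_zones(zones):
    """Validate a zone layout: >= 3 masters, >= 3 workers, unique node names."""
    masters, workers, names = 0, 0, []
    for zone in zones:
        nodes = zone.get("nodes", {})
        for node in nodes.get("master", []):
            masters += 1
            names.append(node["name"])
        for node in nodes.get("worker", []):
            workers += 1
            names.append(node["name"])
    errors = []
    if masters < 3:
        errors.append(f"need at least 3 masters, found {masters}")
    if workers < 3:
        errors.append(f"need at least 3 workers, found {workers}")
    dupes = {n for n in names if names.count(n) > 1}
    if dupes:
        errors.append(f"duplicate node names: {sorted(dupes)}")
    return errors

# Example mirroring the zones above (abbreviated to names only)
zones = [
    {"nodes": {"master": [{"name": "cluster1master1"}, {"name": "cluster1master2"}],
               "worker": [{"name": "cluster1worker1"}, {"name": "cluster1worker2"}]}},
    {"nodes": {"master": [{"name": "cluster1master3"}],
               "worker": [{"name": "cluster1worker3"}]}},
]
print(check_zones(zones))  # [] -> layout satisfies the constraints
```

This does not check the version constraint for updates in tranches; it only covers node counts and name uniqueness.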
Tools
###Values for Rook-Ceph###
rookValues:
namespace: kubeops
cluster:
spec:
dataDirHostPath: "/var/lib/rook" # optional, default is /var/lib/rook
storage:
useAllNodes: true # optional, default value: true
useAllDevices: true # optional, default value: true
deviceFilter: "^sd[a-b]" # optional, will only be used if useAllDevices is set to false
config:
metadataDevice: "sda" # optional, only set this value, if there is a device available
nodes: # optional if useAllNodes is set to true, otherwise mandatory
- name: "<ip-address of node_1>"
devices:
- name: "sdb"
- name: "<ip-address of node_2>"
deviceFilter: "^sd[a-b]"
config:
metadataDevice: "sda" # optional
resources:
mgr:
requests:
cpu: "500m" # optional, default is 500m, limit: 1000m
memory: "512Mi" # optional, default is 1Gi, limit: 1Gi
mon:
requests:
cpu: "1" # optional, default is 1, limit: 2000m
memory: "1Gi" # optional, default is 1Gi, limit: 2Gi
osd:
requests:
cpu: "1" # optional, default is 1, limit: 2
memory: "1Gi" # optional, default is 4Gi, limit: 4Gi
operator:
data:
rookLogLevel: "DEBUG" # optional, default is DEBUG
Parameter Name | Description | Possible Values | Default Value | Optional |
---|---|---|---|---|
namespace | Namespace for Rook-Ceph | Text | kubeops | No |
cluster.spec.dataDirHostPath | Data directory on the host | Text | /var/lib/rook | Yes |
cluster.storage.useAllNodes | Use all nodes | Boolean | true | Yes |
cluster.storage.useAllDevices | Use all devices | Boolean | true | Yes |
cluster.storage.deviceFilter | Device filter | Regex | ^sd[a-b] | Yes |
cluster.storage.config.metadataDevice | Metadata device | Text | sda | Yes |
cluster.storage.nodes.name | Node name | IP | | No |
cluster.storage.nodes.devices.name | Device name | Text | sdb | No |
cluster.storage.nodes.deviceFilter | Device filter | Regex | ^sd[a-b] | Yes |
cluster.storage.nodes.config.metadataDevice | Metadata device | Text | sda | Yes |
cluster.resources.mgr.requests.cpu | CPU requests for mgr | Text | 500m | Yes |
cluster.resources.mgr.requests.memory | Memory requests for mgr | Text | 1Gi | Yes |
cluster.resources.mon.requests.cpu | CPU requests for mon | Text | 1 | Yes |
cluster.resources.mon.requests.memory | Memory requests for mon | Text | 1Gi | Yes |
cluster.resources.osd.requests.cpu | CPU requests for osd | Text | 1 | Yes |
cluster.resources.osd.requests.memory | Memory requests for osd | Text | 4Gi | Yes |
operator.data.rookLogLevel | Log level | Text | DEBUG | Yes |
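The `deviceFilter` values above are regular expressions matched against device names. A short illustrative sketch (device names are examples only) shows which devices the default filter would select:

```python
import re

device_filter = re.compile(r"^sd[a-b]")  # same filter as in the example above
devices = ["sda", "sdb", "sdc", "nvme0n1"]

selected = [d for d in devices if device_filter.match(d)]
print(selected)  # ['sda', 'sdb'] -- sdc and nvme0n1 do not match
```

Remember that `deviceFilter` is only consulted when `useAllDevices` is set to false.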
### Values for Harbor deployment ###
## For detailed explanation for each key see: https://artifacthub.io/packages/helm/harbor/harbor/1.8.1#configuration ##
harborValues:
namespace: kubeops # optional, default is kubeops
harborpass: "password" # mandatory: set password for harbor access
databasePassword: "Postgres_Password" # mandatory: set password for database access
redisPassword: "Redis_Password" # mandatory: set password for redis access
externalURL: http://10.2.10.11:30002 # mandatory, the IP address and port from which Harbor is accessible outside of the cluster
nodePort: 30002 # mandatory
hostname: harbor.local # mandatory
harborPersistence:
persistentVolumeClaim:
registry:
size: 40Gi # optional, default is 40Gi
storageClass: "rook-cephfs" #optional, default is rook-cephfs
jobservice:
jobLog:
size: 1Gi # optional, default is 1Gi
storageClass: "rook-cephfs" #optional, default is rook-cephfs
database:
size: 1Gi # optional, default is 1Gi
storageClass: "rook-cephfs" #optional, default is rook-cephfs
redis:
size: 1Gi # optional, default is 1Gi
storageClass: "rook-cephfs" #optional, default is rook-cephfs
trivy:
size: 5Gi # optional, default is 5Gi
storageClass: "rook-cephfs" #optional, default is rook-cephfs
Parameter Name | Description | Possible Values | Default Value | Optional |
---|---|---|---|---|
namespace | Namespace for Harbor | Text | kubeops | Yes |
harborpass | Password for Harbor access | Text | | No |
databasePassword | Password for database access | Text | | No |
redisPassword | Password for Redis access | Text | | No |
externalURL | External URL for Harbor access | URL | http://10.2.10.11:30002 | No |
nodePort | NodePort for Harbor | Number | 30002 | No |
hostname | Hostname for Harbor | Text | harbor.local | No |
harborPersistence.persistentVolumeClaim.registry.size | Storage size for registry | Text | 40Gi | Yes |
harborPersistence.persistentVolumeClaim.registry.storageClass | Storage class for registry | Text | rook-cephfs | Yes |
harborPersistence.persistentVolumeClaim.jobservice.jobLog.size | Storage size for job logs | Text | 1Gi | Yes |
harborPersistence.persistentVolumeClaim.jobservice.jobLog.storageClass | Storage class for job logs | Text | rook-cephfs | Yes |
harborPersistence.persistentVolumeClaim.database.size | Storage size for database | Text | 1Gi | Yes |
harborPersistence.persistentVolumeClaim.database.storageClass | Storage class for database | Text | rook-cephfs | Yes |
harborPersistence.persistentVolumeClaim.redis.size | Storage size for Redis | Text | 1Gi | Yes |
harborPersistence.persistentVolumeClaim.redis.storageClass | Storage class for Redis | Text | rook-cephfs | Yes |
harborPersistence.persistentVolumeClaim.trivy.size | Storage size for Trivy | Text | 5Gi | Yes |
harborPersistence.persistentVolumeClaim.trivy.storageClass | Storage class for Trivy | Text | rook-cephfs | Yes |
###Values for filebeat deployment###
filebeatValues:
namespace: kubeops # optional, default is kubeops
Parameter Name | Description | Possible Values | Default Value | Optional |
---|---|---|---|---|
namespace | Namespace for Filebeat | Text | kubeops | Yes |
###Values for Logstash deployment###
##For detailed explanation for each key see: https://github.com/elastic/helm-charts/releases/tag/v7.16.3###
logstashValues:
namespace: kubeops
volumeClaimTemplate:
accessModes:
- ReadWriteMany #optional, default is [ReadWriteMany]
resources:
requests:
storage: 1Gi # mandatory, depending on storage capacity
storageClass: "rook-cephfs" #optional, default is rook-cephfs
Parameter Name | Description | Possible Values | Default Value | Optional |
---|---|---|---|---|
namespace | Namespace for Logstash | Text | kubeops | No |
volumeClaimTemplate.accessModes | Access modes for volume claim | List of Texts | [ReadWriteMany] | Yes |
volumeClaimTemplate.resources.requests.storage | Storage requests for volume claim | Text | 1Gi | No |
volumeClaimTemplate.storageClass | Storage class for volume claim | Text | rook-cephfs | Yes |
###Values for OpenSearch-Dashboards deployment###
##For a detailed explanation of each key see: https://github.com/opensearch-project/helm-charts/tree/main/charts/opensearch-dashboards###
openSearchDashboardValues:
namespace: kubeops
nodePort: 30050
Parameter Name | Description | Possible Values | Default Value | Optional |
---|---|---|---|---|
namespace | Namespace for OpenSearch-Dashboards | Text | kubeops | No |
nodePort | NodePort for OpenSearch-Dashboards | Number | 30050 | No |
###Values for OpenSearch deployment###
##For a detailed explanation of each key see: https://github.com/opensearch-project/helm-charts/tree/main/charts/opensearch###
openSearchValues:
namespace: kubeops
opensearchJavaOpts: "-Xmx512M -Xms512M" # optional, default is -Xmx512M -Xms512M
resources:
requests:
cpu: "250m" # optional, default is 250m
memory: "1024Mi" # optional, default is 1024Mi
limits:
cpu: "300m" # optional, default is 300m
memory: "3072Mi" # optional, default is 3072Mi
persistence:
size: 4Gi # mandatory
enabled: "true" # optional, default is true
enableInitChown: "false" # optional, default is false
labels:
enabled: "false" # optional, default is false
storageClass: "rook-cephfs" # optional, default is rook-cephfs
accessModes:
    - "ReadWriteMany" # optional, default is [ReadWriteMany]
securityConfig:
enabled: false # optional, default value: false
### Additional values can be set, if securityConfig is enabled:
# path: "/usr/share/opensearch/plugins/opensearch-security/securityconfig"
# actionGroupsSecret:
# configSecret:
# internalUsersSecret: internal-users-config-secret
# rolesSecret:
# rolesMappingSecret:
# tenantsSecret:
# config:
# securityConfigSecret: ""
# dataComplete: true
# data: {}
replicas: "3" # optional, default is 3
Parameter Name | Description | Possible Values | Default Value | Optional |
---|---|---|---|---|
namespace | Namespace for OpenSearch | Text | kubeops | No |
opensearchJavaOpts | Java options for OpenSearch | Text | -Xmx512M -Xms512M | Yes |
resources.requests.cpu | CPU requests | Text | 250m | Yes |
resources.requests.memory | Memory requests | Text | 1024Mi | Yes |
resources.limits.cpu | CPU limits | Text | 300m | Yes |
resources.limits.memory | Memory limits | Text | 3072Mi | Yes |
persistence.size | Storage size for persistent volume | Text | 4Gi | No |
persistence.enabled | Enable persistent volume | Boolean | true | Yes |
persistence.enableInitChown | Enable initial chown for persistent volume | Boolean | false | Yes |
persistence.labels.enabled | Enable labels for persistent volume | Boolean | false | Yes |
persistence.storageClass | Storage class for persistent volume | Text | rook-cephfs | Yes |
persistence.accessModes | Access modes for persistent volume | List of Texts | [ReadWriteMany] | Yes |
securityConfig.enabled | Enable security configuration | Boolean | false | Yes |
replicas | Number of replicas | Number | 3 | Yes |
###Values for Prometheus deployment###
prometheusValues:
namespace: kubeops # optional, default is kubeops
privateRegistry: false # optional, default is false
grafanaUsername: "user" # optional, default is user
grafanaPassword: "password" # optional, default is password
grafanaResources:
storageClass: "rook-cephfs" # optional, default is rook-cephfs
storage: 5Gi # optional, default is 5Gi
nodePort: 30211 # optional, default is 30211
prometheusResources:
storageClass: "rook-cephfs" # optional, default is rook-cephfs
storage: 25Gi # optional, default is 25Gi
retention: 10d # optional, default is 10d
retentionSize: "24GB" # optional, default is 24GB
nodePort: 32090
Parameter Name | Description | Possible Values | Default Value | Optional |
---|---|---|---|---|
namespace | Namespace for Prometheus | Text | kubeops | Yes |
privateRegistry | Use private registry | Boolean | false | Yes |
grafanaUsername | Username for Grafana | Text | user | Yes |
grafanaPassword | Password for Grafana | Text | password | Yes |
grafanaResources.storageClass | Storage class for Grafana | Text | rook-cephfs | Yes |
grafanaResources.storage | Storage size for Grafana | Text | 5Gi | Yes |
grafanaResources.nodePort | NodePort for Grafana | Number | 30211 | Yes |
prometheusResources.storageClass | Storage class for Prometheus | Text | rook-cephfs | Yes |
prometheusResources.storage | Storage size for Prometheus | Text | 25Gi | Yes |
prometheusResources.retention | Retention period for Prometheus | Text | 10d | Yes |
prometheusResources.retentionSize | Retention size for Prometheus | Text | 24GB | Yes |
prometheusResources.nodePort | NodePort for Prometheus | Number | 32090 | No |
###Values for OPA deployment###
opaValues:
namespace: kubeops
Parameter Name | Description | Possible Values | Default Value | Optional |
---|---|---|---|---|
namespace | Namespace for OPA | Text | kubeops | No |
###Values for KubeOps-Dashboard (Headlamp) deployment###
kubeOpsDashboardValues:
service:
nodePort: 30007
Parameter Name | Description | Possible Values | Default Value | Optional |
---|---|---|---|---|
service.nodePort | NodePort for KubeOps-Dashboard | Number | 30007 | No |
###Values for cert-manager deployment###
certmanValues:
namespace: kubeops
replicaCount: 3
logLevel: 2
secretName: root-secret
Parameter Name | Description | Possible Values | Default Value | Optional |
---|---|---|---|---|
namespace | Namespace for Cert-Manager | Text | kubeops | No |
replicaCount | Number of replicas | Number | 3 | No |
logLevel | Log level | Number | 2 | No |
secretName | Name of the secret | Text | root-secret | No |
###Values for ingress-nginx deployment###
ingressValues:
namespace: kubeops
Parameter Name | Description | Possible Values | Default Value | Optional |
---|---|---|---|---|
namespace | Namespace for Ingress-Nginx | Text | kubeops | No |
keycloakValues:
namespace: "kubeops" # Optional, default is "keycloak"
storageClass: "rook-cephfs" # Optional, default is "rook-cephfs"
keycloak:
auth:
adminUser: admin # Optional, default is admin
adminPassword: admin # Optional, default is admin
existingSecret: "" # Optional, default is ""
postgresql:
auth:
postgresPassword: "" # Optional, default is ""
username: bn_keycloak # Optional, default is "bn_keycloak"
password: "" # Optional, default is ""
database: bitnami_keycloak # Optional, default is "bitnami_keycloak"
existingSecret: "" # Optional, default is ""
Parameter Name | Description | Possible Values | Default Value | Optional |
---|---|---|---|---|
namespace | Namespace for Keycloak | Text | kubeops | Yes |
storageClass | Storage class for Keycloak | Text | rook-cephfs | Yes |
keycloak.auth.adminUser | Admin username | Text | admin | Yes |
keycloak.auth.adminPassword | Admin password | Text | admin | Yes |
keycloak.auth.existingSecret | Existing secret | Text | | Yes |
postgresql.auth.postgresPassword | Password for Postgres DB | Text | | Yes |
postgresql.auth.username | Username for Postgres | Text | bn_keycloak | Yes |
postgresql.auth.password | Password for Postgres | Text | | Yes |
postgresql.auth.database | Database name | Text | bitnami_keycloak | Yes |
postgresql.auth.existingSecret | Existing secret for Postgres | Text | | Yes |
veleroValues:
namespace: "velero"
accessKeyId: "your_s3_storage_username"
secretAccessKey: "your_s3_storage_password"
useNodeAgent: false
defaultVolumesToFsBackup: false
provider: "aws"
bucket: "velero"
useVolumeSnapshots: false
backupLocationConfig:
region: "minio"
s3ForcePathStyle: true
s3Url: "http://minio.velero.svc:9000"
Parameter Name | Description | Possible Values | Default Value | Optional |
---|---|---|---|---|
namespace | Namespace for Velero | Text | velero | No |
accessKeyId | Access key ID | Text | | No |
secretAccessKey | Secret access key | Text | | No |
useNodeAgent | Use node agent | Boolean | false | No |
defaultVolumesToFsBackup | Use default volumes to FS backup | Boolean | false | No |
provider | Provider | Text | aws | No |
bucket | Bucket name | Text | velero | No |
useVolumeSnapshots | Use volume snapshots | Boolean | false | No |
backupLocationConfig.region | Region for backup location | Text | minio | No |
backupLocationConfig.s3ForcePathStyle | Enforce S3 path style | Boolean | true | No |
backupLocationConfig.s3Url | S3 URL for backup location | URL | http://minio.velero.svc:9000 | No |
Overwrite platform parameters
New with KubeOps 1.6.0 (apiVersion kubeops/kubeopsctl/alpha/v5)
As a user, you can overwrite platform parameters by adding your desired values to the advanced parameters of the respective component (e.g. keycloak, velero, harbor, prometheus). An exception is rook-ceph, which has two advancedParameters sections: one for the rook-operator Helm chart and another for the rook-cluster Helm chart.
...
veleroValues:
s3ForcePathStyle: false #default would be true
advancedParameters:
keycloakValues:
namespace: "kubeops" #default is "keycloak"
advancedParameters:
harborValues:
namespace: harbor
advancedParameters:
prometheusValues:
privateRegistry: true
advancedParameters:
rookValues:
cluster:
advancedParameters:
      dataDirHostPath: "/var/lib/rook" # Default is "/var/lib/rook"
operator:
advancedParameters:
resources:
limits:
          cpu: "500m" # Default is "100m"
...
Overwriting list elements
If you want to overwrite parameters inside a list, you need to overwrite the whole list element. Take the rook Helm package as an example:
...
cephBlockPools:
- name: replicapool
# see https://github.com/rook/rook/blob/master/Documentation/CRDs/Block-Storage/ceph-block-pool-crd.md#spec for available configuration
spec:
failureDomain: host
replicated:
size: 3
# Enables collecting RBD per-image IO statistics by enabling dynamic OSD performance counters. Defaults to false.
# For reference: https://docs.ceph.com/docs/master/mgr/prometheus/#rbd-io-statistics
# enableRBDStats: true
storageClass:
enabled: true
name: rook-ceph-block
isDefault: true
reclaimPolicy: Retain
allowVolumeExpansion: true
volumeBindingMode: "Immediate"
...
If we wanted to change the number of replicas of the replicapool in the Ceph block pools (under spec.replicated.size), we would also need to repeat the storageClass and the other specs of the replicapool, because list elements can only be overwritten, not merged. If we set only the replica size in the advanced parameters, the failureDomain and all other fields would be empty.
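To illustrate, a complete overwrite of the list element could look like the following sketch. The key names are taken from the default values above; the replica size of 2 is an arbitrary illustrative value, and you should verify against your KubeOps version that advancedParameters accepts the cephBlockPools list in this form:

```yaml
rookValues:
  cluster:
    advancedParameters:
      cephBlockPools:
        - name: replicapool
          spec:
            failureDomain: host        # must be repeated, or it would be lost
            replicated:
              size: 2                  # the only value actually being changed
          storageClass:                # the whole storageClass must be repeated as well
            enabled: true
            name: rook-ceph-block
            isDefault: true
            reclaimPolicy: Retain
            allowVolumeExpansion: true
            volumeBindingMode: "Immediate"
```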
4 - KubeOps Version
KubeOps Version
Here is the list of KubeOps versions and the tool versions they support. Make sure to install or upgrade according to supported versions only.
KubeOps | Supported KOSI Version | Supported LIMA Version | Supported kubeopsctl Version | Deprecation Date |
---|---|---|---|---|
KubeOps 1.7.0 | KOSI 2.12.X | LIMA 1.7.X | kubeopsctl 1.7.X | 01.10.2026 |
KubeOps 1.6.0 | KOSI 2.11.X | LIMA 1.6.X | kubeopsctl 1.6.X | 01.11.2025 |
KubeOps 1.7.0
Tools | Supported App Version | Chart Version | Supported Package Version | SHA256 Checksum |
---|---|---|---|---|
calicomultus | V 0.0.5 | | kubeops/calicomultus:0.0.5 | 6e2dfd8135160fce5643f4b2b71de6e6af47925fcbaf38340704d25a79c39c09 |
cert-manager | V 1.15.3 | V 1.15.3 | kubeops/cert-manager:1.7.0 | f4622bd0d13ec000f06ceac3bd6637dda5fb165528549aa2c392cc36ecfabd71 |
clustercreate | V 1.7.0 | | kubeops/clustercreate:1.7.0 | d6e808632cfdb6d1c037ff7b9bc0c2b4fa7bdcc9a8a67ae6c5ed4fb0980308e5 |
fileBeat | V 8.5.1 | 8.5.1 | kubeops/filebeat-os:1.7.0 | 18b83d6a3355e9ef27ea34f5083dc13c73167c51f9beb9d2099c8544be564f3a |
harbor | V 2.12.0 | 1.16.0 | kubeops/harbor:1.7.0 | 577f8b9f68d72a6da1a22726e423cc484fa39fa674d0dfd6531ac47a588d31c0 |
helm | V 3.14.4 | | kubeops/helm:1.7.0 | 7cf5601288d239969e93f8ec44340505bb39474d19cfaeeb74a9c9a66d66adc5 |
ingress-nginx | V 1.11.5 | 4.11.5 | kubeops/ingress-nginx:1.7.0 | d0e09bf32a80c4958efccd4c6fb6f69295becdfdc9d842eb81e78a9ed0cbc174 |
keycloak | V 1.16.0 | 1.0.0 | kubeops/keycloak:1.7.0 | f0a89b62de8e62974f6c94270248a79d633f5ad7b10c8fb5da27a5c96e16badb |
kubeops-dashboard | V 0.26.0 | 0.26.0 | kubeops/kubeops-dashboard:1.7.0 | 2a0b442ef2fce5b0ef3662a6c9bc679717feacfcbb316f8f9234264e5e45e691 |
prometheus | V 0.76.1 | 62.7.0 | kubeops/kube-prometheus-stack:1.7.0 | f1a8f8324216c693d143981e2171d2cf6b5978ac537e496e09c52d4773b1f91c |
logstash | V 8.4.0 | 8.5.1 | kubeops/logstash-os:1.7.0 | 111f96e952f3f3010bf154d5c2e7040ee7c48ef9ff01338f9f8bf39e7276210b |
opa-gatekeeper | V 3.17.1 | v3.17.1 | kubeops/opa-gatekeeper:1.7.0 | 751a5fa0d3382026f4c7229e3bc4535cfd6be16c4da13fc85807283eff414acc |
opensearch-dashboards | V 2.19.1 | 2.28.0 | kubeops/opensearch-dashboards:1.7.0 | 8c9457b2c08339c1dcf253266eb07387cb0dd73e99ffc66bd0d301bfecf095a9 |
opensearch | V 2.19.1 | 2.32.0 | kubeops/opensearch-os:1.7.0 | 06327e6404ec47bc80dce1e0e2bd0511b8d4f9795d4fa70c12689769826c8425 |
rook | v1.15.6 cluster/v1.15.6 operator | v1.15.6 cluster/v1.15.6 operator | kubeops/rook-ceph:1.7.0 | bd7c5b920cb309d17e5fdaf50408f9f865edba2bb7566ff7e5fb732464d50b65 |
setup | V 1.7.0 | | kubeops/setup:1.7.0 | 580d0e079f981028a5bed0025826c6c09653a827cb7672f4eab9c98d0924d573 |
velero | V 1.13.2 | 6.4.0 | kubeops/velero:1.7.0 | 7eaf98c79a06c6ce69e46f8e855de0c309f74460c3fe7e2ff19bea7e84334cff |
KubeOps 1.6.9
Tools | Supported App Version | Chart Version | Supported Package Version | SHA256 Checksum |
---|---|---|---|---|
calicomultus | V 0.0.5 | | kubeops/calicomultus:0.0.5 | b3d249483c10cbd226093978e4d760553f2b2bf7e4a5d8176b56af2808e70aa1 |
cert-manager | V 1.15.3 | V 1.15.3 | kubeops/cert-manager:1.6.9 | 7df7fc7f008a14eb61247fc4c3de19b8e5b450071645dbb8e156accc3ae46e61 |
clustercreate | V 1.6.9 | | kubeops/clustercreate:1.6.9 | 9bed96f6fd7d3ad0790a9b174c6a5ff805b58c8800993c265261618ffa480d9f |
fileBeat | V 8.5.1 | 8.5.1 | kubeops/filebeat-os:1.6.9 | 60bbcdb6135f9a4d47514819a31c0afef91cb9c3bc642c949bf4d408c2ff16e9 |
harbor | V 2.12.0 | 1.16.0 | kubeops/harbor:1.6.9 | b86a12034c3b50f19f99849bd34ca5c28ca4f88b7afcfcc2c56e05c0acb9b153 |
helm | V 3.14.4 | | kubeops/helm:1.6.9 | 8497ed8517007542c3c22fdcf071b7db3e815e3069885ed6bc2d2ce9f2669eea |
ingress-nginx | V 1.11.5 | 4.11.5 | kubeops/ingress-nginx:1.6.9 | 3844b25ee7231fbbe6b1881721a196a670e761f5dedab7445280e05b1f44e1a2 |
keycloak | V 1.16.0 | 1.0.0 | kubeops/keycloak:1.6.9 | c4877b3f9e0d623f74106b9f4d1f9bddc7f0ffd43504f1b92898cd2030850f06 |
kubeops-dashboard | V 0.26.0 | 0.26.0 | kubeops/kubeops-dashboard:1.6.9 | 7aca956c4743f8e3607f30af2183df756952452ef6cc5aaed2944d0a71718336 |
prometheus | V 0.76.1 | 62.7.0 | kubeops/kube-prometheus-stack:1.6.9 | f706513404a4b688ec8a8074e868a5b2e2ad384591ab413d776c9fb85f81fa2e |
logstash | V 8.4.0 | 8.5.1 | kubeops/logstash-os:1.6.9 | e961a19c19bf06845ff3816b2ff4e37f46ec7ecb4bf8face81eb45e731780b58 |
opa-gatekeeper | V 3.17.1 | v3.17.1 | kubeops/opa-gatekeeper:1.6.9 | ab6f3f8ef380b1a8ae2f5a18194c162562e3ef9698382f128ff7741f76579dfc |
opensearch-dashboards | V 2.19.1 | 2.28.0 | kubeops/opensearch-dashboards:1.6.9 | c1f39f9c982887a7fa5d8613f360ca6681e9ac6bdd583172826a1181812886c4 |
opensearch | V 2.19.1 | 2.32.0 | kubeops/opensearch-os:1.6.9 | d8c1af8e3976d85551888816781d7fb50abf3d0f1fb084b8f0115dc610d1d67e |
rook | v1.15.6 cluster/v1.15.6 operator | v1.15.6 cluster/v1.15.6 operator | kubeops/rook-ceph:1.6.9 | c569c85de83064dc285e5b76774dc6bb5a716c44a25cfa8a4dbcdb17b75989fb |
setup | V 1.6.9 | | kubeops/setup:1.6.9 | 3a4de7e654302caafaad37c03a906ae1d95cdd018f991b16ac0c0a3fa17419b5 |
velero | V 1.13.2 | 6.4.0 | kubeops/velero:1.6.9 | 4a6200688dc691b385ca5fa54d7b75ceb8087280bd5d45981cdd4224266e3853 |
KubeOps 1.7.0_Beta0 supports
Tools | Supported App Version | Chart Version | Supported Package Version | SHA256 Checksum |
---|---|---|---|---|
calicomultus | V 0.0.5 | | kubeops/calicomultus:0.0.5 | 6e2dfd8135160fce5643f4b2b71de6e6af47925fcbaf38340704d25a79c39c09 |
cert-manager | V 1.15.3 | V 1.15.3 | kubeops/cert-manager:1.7.0_Beta0 | 4185b79d5d09a7cbbd6779c9e55d71ab99d6f2e46f3aed8abb2b97ba8aa586e4 |
clustercreate | V 1.7.0_Beta0 | | kubeops/clustercreate:1.7.0_Beta0 | 299f7ffe125e2ca8db1c8f1aacfc3b5783271906f56c78c5fc5a6e5730ca83e5 |
fileBeat | V 8.5.1 | 8.5.1 | kubeops/filebeat-os:1.7.0_Beta0 | d31d63fc1a5e086199c0b465dca0a8756c22c88800c4d1a82af7b8b9f108ce63 |
harbor | V 2.12.0 | 1.16.0 | kubeops/harbor:1.7.0_Beta0 | 38ea742f5c40bd59c217777f0707469c78353acb859296ae5c5f0fbac129fc32 |
helm | V 3.14.4 | | kubeops/helm:1.7.0_Beta0 | a9e4704fdb1b60791c0ff91851a2c69ed31782c865650f66c6a3f6ab96852568 |
ingress-nginx | V 1.11.5 | 4.11.5 | kubeops/ingress-nginx:1.7.0_Beta0 | 0b967f3a34fea7a12b86bc226599e32adb305371f1ab5368570ebb5fbc0021c6 |
keycloak | V 1.16.0 | 1.0.0 | kubeops/keycloak:1.7.0_Beta0 | 3aec97cbbb559954a038a2212e1a52a9720e47c4ba0d8088fe47b000f42c469a |
kubeops-dashboard | V 0.26.0 | 0.26.0 | kubeops/kubeops-dashboard:1.7.0_Beta0 | a777e3b9568cfc60d7a9adef8f81f2345b979c545b566317ed0bd8ed0cf9faf3 |
prometheus | V 0.76.1 | 62.7.0 | kubeops/kube-prometheus-stack:1.7.0_Beta0 | 9210e5ef28babfed186b47043e95e6014dd3eadcdb1dbd521a5903190ecd7062 |
logstash | V 8.4.0 | 8.5.1 | kubeops/logstash-os:1.7.0_Beta0 | 88d144bf4a8bdd0a78c30694aa723fd059cabfed246420251f0c35a22ba6b35f |
opa-gatekeeper | V 3.17.1 | v3.17.1 | kubeops/opa-gatekeeper:1.7.0_Beta0 | fb72ae157ece5f8b41716d6d1fe95e1a574ca7d5a9196c071f2c83bbd3faebe7 |
opensearch-dashboards | V 2.19.1 | 2.28.0 | kubeops/opensearch-dashboards:1.7.0_Beta0 | d4698f3252e14b12c68befb4cd3e0d6ac1f87b121f00a467722e878172effbad |
opensearch | V 2.19.1 | 2.32.0 | kubeops/opensearch-os:1.7.0_Beta0 | 80583692c4010b0df3ff4f02c933ce1ebd792b54e3c4e4c9d3713c2730a9e02c |
rook | v1.15.6 cluster/v1.15.6 operator | v1.15.6 cluster/v1.15.6 operator | kubeops/rook-ceph:1.7.0_Beta0 | 5720f56cde2eb37ef2b74ee0e5dc764555157503554936cc78e03514779ad2fd |
setup | V 1.7.0_Beta0 | | kubeops/setup:1.7.0_Beta0 | a6061c2795a52a772895c53ec334b595475da025b41d4cc14c6930d7d7cff018 |
velero | V 1.13.2 | 6.4.0 | kubeops/velero:1.7.0_Beta0 | a826c2b2189f9e0f60fcf571d7230cd079ebc2f1e7a9594a9f310ec530ea64a8 |
KubeOps 1.6.8 supports
Tools | Supported App Version | Chart Version | Supported Package Version | SHA256 Checksum |
---|---|---|---|---|
harbor | v2.12.1 | 1.16.0 | kubeops/harbor | 38a1471919eb95e69196b3c34aa682cdc471177e06680fc4ccda0c6394e83561 |
cert-manager | v1.15.3 | v1.15.3 | kubeops/cert-manager | 9b17986c8cb2cb18e276167dc63d3fe4a2a83d7e02c0fb7463676954d626dc88 |
filebeat | v8.5.1 | 8.5.1 | kubeops/filebeat | c888885c3001d5ecac8c6fe25f2c09a3352427153dc38994d3447d4a2b7fee2b |
ingress-nginx | v1.11.5 | 4.11.5 | kubeops/ingress-nginx | 664eb9b7dfba4a7516fc9fb68382f4ceaa591950fde7f9d8db6d82f2be802f3f |
keycloak | v22.0.1 | 16.0.5 | kubeops/keycloak | 469edff4c01f2dcd8339fe3debc23d8425cf8f86bedb91401dc6c18d9436396c |
kubeops-dashboard | v0.26.0 | 0.26.0 | kubeops/kubeops-dashboard | 0429b5dfe0dbf1242c6b6e9da08565578c7008a339cb6aec950f7519b66bcd1d |
logstash | v8.4.0 | 8.5.1 | kubeops/logstash | 6586d68ed7f858722796d7c623f1701339efc11eddb71d8b02985bb643fdec2f |
gatekeeper | v3.17.1 | 3.17.1 | kubeops/gatekeeper | 42bee78b7bb056e354c265384a0fdc36dc7999278ce70531219efe7b8b0759e6 |
prometheus | v0.76.1 | 62.7.0 | kubeops/prometheus | 987227f99dc8f57fa9ac1d5407af2d82d58ec57510ca91d540ebc0d5e0f011bc |
rook-ceph | v1.15.6 | v1.15.6 | kubeops/rook-ceph | 9dd9a5e96ccf2a7ebd1cb737ee4669fbdadee240f5071a3c2a993be1929b0905 |
velero | v1.13.2 | 6.4.0 | kubeops/velero | b53948b2565c60f434dffa1dba3fc21b679c5b08308a2dde421125a4b81616cc |
opensearch | v2.19.1 | 2.32.0 | kubeops/opensearch | 0e52bd9818be03c457d09132bd3c1a6790482bb7141f08987dc3bbf678d193bb |
opensearch-dashboards | v2.19.1 | 2.28.0 | kubeops/opensearch-dashboards | 137dd6c80ed753a4e4637c51039788f184bdee6afb2626fddb2937aea919cbd8 |
clustercreate | v1.6.8 | | kubeops/clustercreate:1.6.8 | 032e67d4799ea8d56424a0173012d301118291ab3cfdd561657f2225d6446e8e |
setup | v1.6.8 | | kubeops/setup:1.6.8 | bd9c5e71dc4564bede85d27da39e5d31e286624be9efbd1e662ecc52bb8b136b |
helm | v3.14.4 | | kubeops/helm:1.6.8 | f3909a4ac8c7051cc4893a587806681bc55abdbf9a3241dc3c29c165081bc7b0 |
calicomultus | V 0.0.5 | | kubeops/calicomultus:1.6.8 | b3d249483c10cbd226093978e4d760553f2b2bf7e4a5d8176b56af2808e70aa1 |
KubeOps 1.6.7 supports
Tools | Supported App Version | Chart Version | Supported Package Version | SHA256 Checksum |
---|---|---|---|---|
calicomultus | V 0.0.5 | | lima/calicomultus:0.0.5 | 18d458d6bda62efb37f6e07378bb90a8cee824286749c42154815fae01a10b62 |
cert-manager | V 1.15.3 | V 1.15.3 | kubeops/cert-manager:1.6.7 | 39c2b7fb490dd5e3ad8b8a03ec6287a6d02dd518b86efd83b3a1e815fd641c98 |
clustercreate | V 1.6.7 | | kubeops/clustercreate:1.6.7 | e83bd6e24dd5762e698d83d6e9e29480deda8bff693af7d835c83ba2e88ae3c2 |
fileBeat | V 8.5.1 | 8.5.1 | kubeops/filebeat-os:1.6.7 | 64c4922429702e39fa8eed54856525161150c7c4c6b5328a2ac90ce56588fd71 |
harbor | V 2.12.0 | 1.16.0 | kubeops/harbor:1.6.7 | 769e914a9f02a6ca78ec03418895e67c058f851ce24560474640c60dab4c730a |
helm | V 3.14.4 | | kubeops/helm:1.6.7 | c970844547cde59195bc1c5b4f17521597b9af1012f112c24185169492d59213 |
ingress-nginx | V 1.11.5 | 4.11.5 | kubeops/ingress-nginx:1.6.7 | deaf25204437c2812b459c9e2b68ae83bc5343a57ac2ab87d9d8dd4b3d06039d |
keycloak | V 22.0.1 | 16.0.5 | kubeops/keycloak:1.6.7 | 3829d879e3098b14f16709f97b579bb9446ff2984553b72bba39b238aaaf332a |
kubeops-dashboard | V 0.26.0 | 0.26.0 | kubeops/kubeops-dashboard:1.6.7 | f237297adb8b01b7ad7344321d69928273c7e1a7a342634401d71205297a90dd |
prometheus | V 0.76.1 | 62.7.0 | kubeops/kube-prometheus-stack:1.6.7 | 832b5019fe6f8d3949d768e98b182bcb84d05019ca854c08518313875ab4eedb |
logstash | V 8.4.0 | 8.5.1 | kubeops/logstash-os:1.6.7 | 955f38c63dc5d4a3a67450c85e262a2884711910cfd1ee56860279a07f5ef833 |
opa-gatekeeper | V 3.17.1 | v3.17.1 | kubeops/opa-gatekeeper:1.6.7 | 3257e829cc4829c190a069b2a6409ea32ed1a38031f45b8c880eb69b85173c64 |
opensearch-dashboards | V 2.16.0 | 2.22.0 | kubeops/opensearch-dashboards:1.6.7 | 8c8e4dca83591ef1ff8b23d94646d0098c2c575e193f6baf746e64a03aface05 |
opensearch | V 2.16.0 | 2.23.1 | kubeops/opensearch-os:1.6.7 | fb80f291318a6c00696a0a8775c571dea3ed7a2bec1b8d3394c07081b2409605 |
rook | v1.15.6 cluster/v1.15.6 operator | v1.15.1 cluster/v1.15.1 operator | kubeops/rook-ceph:1.6.7 | 7198a6b33e677399ad90a2c780a7bf8af96e00de5ed46eef8215f6626645f06f |
setup | V 1.6.7 | | kubeops/setup:1.6.7 | 2de24b9e24913e5f3966069de200644ae44b7777c7f94793a6f059f112649ea5 |
velero | V 1.13.2 | 6.4.0 | kubeops/velero:1.6.7 | 0ab6465bd5e8e422d06ce1fc2bd1d620e36bdedbc2105bc45e9a80d9f9e71e0d |
KubeOps 1.6.6 supports
Tools | Supported App Version | Chart Version | Supported Package Version | SHA256 Checksum |
---|---|---|---|---|
calicomultus | V 0.0.4 | | lima/calicomultus:0.0.4 | 0e32dcea34fe707248cf2348f0ed422c278574594e440db8ceac8190e3984dac |
cert-manager | V 1.15.3 | 1.15.3 | kubeops/cert-manager:1.6.6 | 63e2ef627351619ab9813a684106dc19b187c63d643b68096889d8e0abf0640b |
clustercreate | V 1.6.6 | | kubeops/clustercreate:1.6.6 | dc334cf0cede9352069e775c0ed4df606f468340f257c4fa5687db7a828906c9 |
fileBeat | V 8.5.1 | 8.5.1 | kubeops/filebeat-os:1.6.6 | 09ad9bf914b7d694619be3195b43c51157234f8bb5b5b24adfe399298f47e495 |
harbor | V 2.12.0 | 1.16.0 | kubeops/harbor:1.6.6 | 4f2a1112234edb3cf69ec4447b96bbc81593f61327ac93f6576ebe0ab1ee4d9b |
helm | V 3.14.4 | | kubeops/helm:1.6.6 | 9ea60096ce6faa4654b8eced71c27e733fa791bacfc40095dfc907fd9a7d5b46 |
ingress-nginx | V 1.10.0 | 4.10.0 | kubeops/ingress-nginx:1.6.6 | f518a5d909697b0275b4515dc1bc49a411b54992db469319e070809e8bbffd9e |
keycloak | V 22.0.1 | 16.0.5 | kubeops/keycloak:1.6.6 | 3d2781a454f0cbe98c611e42910fb0e199db1dec79ac970c08ed4e9735581c4c |
kubeops-dashboard | V 0.26.0 | 0.26.0 | kubeops/kubeops-dashboard:1.6.6 | d21106e44b52f30cb23cb01bf2217662d7b393fd11046cbcc4e9ff165a725c1b |
prometheus | V 0.76.1 | 62.7.0 | kubeops/kube-prometheus-stack:1.6.6 | ada9d5a69b8c277c2c9037097e6a994d6c20ff794b51a65c30bdf480cfb23e52 |
logstash | V 8.4.0 | 8.5.1 | kubeops/logstash-os:1.6.6 | 89b8ae46c9bbc90c9d96e45a801c580608d24e4eaef2adadadb646a3012ece72 |
opa-gatekeeper | V 3.17.1 | 3.17.1 | kubeops/opa-gatekeeper:1.6.6 | a9c5423fdfabf456fa18b9808b9e9c9ee9428d5f5c4035810b9dbc3bfb838e4c |
opensearch-dashboards | V 2.16.0 | 2.22.0 | kubeops/opensearch-dashboards:1.6.6 | 2f8f66e6e321b773fcd5fb66014600b4f9cffda4bcea9f9451802274561f3ff4 |
opensearch | V 2.16.0 | 2.23.1 | kubeops/opensearch-os:1.6.6 | 8ab9d1d09398083679a3233aaf73f1a664bd7162e99a1aef51b716cd8daa3e55 |
rook | v1.15.6 cluster/v1.15.6 operator | v1.15.1 cluster/v1.15.1 operator | kubeops/rook-ceph:1.6.6 | 14b8cb259d6a0bb73ac576de7a07ed76499b43551a3d8a44b76eea181013080e |
setup | V 1.6.6 | | kubeops/setup:1.6.6 | 92e392f170edb2edc5c92e265e9d92a4d6db5c6226f4603b33cece7361928089 |
velero | V 1.13.2 | 6.4.0 | kubeops/velero:1.6.6 | 98bde279f5a8b589a5234d63fba900235b07060c6554e9f822d41b072ddbd2f9 |
KubeOps 1.6.5 supports
Tools | Supported App Version | Chart Version | Supported Package Version | SHA256 Checksum |
---|---|---|---|---|
calicomultus | V 0.0.4 | | lima/calicomultus:0.0.4 | 0e32dcea34fe707248cf2348f0ed422c278574594e440db8ceac8190e3984dac |
cert-manager | V 1.15.3 | V 1.15.3 | kubeops/cert-manager:1.6.5 | a66cfcf7849f745033fc8d6140d7b1ebbccb013739b37e26d9eb6dd22e0fb973 |
clustercreate | V 1.6.5 | | kubeops/clustercreate:1.6.5 | a577edf4ea90710d041d31f434c4114d8efb4d6d9140ce39ca3b651f637b7147 |
fileBeat | V 8.5.1 | 8.5.1 | kubeops/filebeat-os:1.6.5 | 70c2fceba0bc5e3d4dc0b56ad5cae769f79dc439915b0757b9b041244582b923 |
harbor | V 2.12.0 | 1.16.0 | kubeops/harbor:1.6.5 | 9d3283235cf41073d1ade638218d8036cb35764473edc2a6d3046ca7b5435228 |
helm | V 3.14.4 | | kubeops/helm:1.6.5 | d1c67bc9084d647217ee57f2e9fd4df3cbeb50d771961423c9e8246651910daa |
ingress-nginx | V 1.10.0 | 4.10.0 | kubeops/ingress-nginx:1.6.5 | 9453b739e927f36cebe17b4da8f08f843693a52b049a358612aab82f8d1cc659 |
keycloak | V 22.0.1 | 16.0.5 | kubeops/keycloak:1.6.5 | df652caa301d5171a7a3ae1ae8191790ef9f0af6de2750edbf2629b9022ccb3b |
kubeops-dashboard | V 0.26.0 | 0.26.0 | kubeops/kubeops-dashboard:1.6.5 | ccbb8721aa9a5c60661726feac3b3fd63d6711875b3a8e816b1cbdc68c51f530 |
prometheus | V 0.76.1 | 62.7.0 | kubeops/kube-prometheus-stack:1.6.5 | c3544bd6ddbac3c9ac58b3445c8a868a979ce669d1096dcdafa9842b35edd2d7 |
logstash | V 8.4.0 | 8.5.1 | kubeops/logstash-os:1.6.5 | 2ec4835b326afce0cdb01d15bbe84dabfe988dab295b6d12114383dd528b7807 |
opa-gatekeeper | V 3.17.1 | v3.17.1 | kubeops/opa-gatekeeper:1.6.5 | f5e6d871c12d463430aacd5adfd9fbc728a3dbf684424c002de1ae8d0b4df014 |
opensearch-dashboards | V 2.16.0 | 2.22.0 | kubeops/opensearch-dashboards:1.6.5 | a28a3b2161b276385062072fa05ac9cd34447e207c701b0700c78f5e828ec133 |
opensearch | V 2.16.0 | 2.23.1 | kubeops/opensearch-os:1.6.5 | e8bf63cbbb452e3e5cf7e62e3c6324e7dad31d1713e306c3847770a7ef67ca3a |
rook | v1.15.6 cluster/v1.15.6 operator | v1.15.6 cluster/v1.15.6 operator | kubeops/rook-ceph:1.6.5 | a8ee95eaca0705f95884d54a188fa97e5c9080601dc3722a16a80a1599783caa |
setup | V 1.6.5 | | kubeops/setup:1.6.5 | 1ac0ab68976e4c6e1abd867a8d3a58391b2b0fddd24ba1aefbed6b0f5d60b9ab |
velero | V 1.13.2 | 6.4.0 | kubeops/velero:1.6.5 | 7c224434008d856b9fe0275ac0c528f865fb9a549a46bbeb90a34221a4d8c187 |
KubeOps 1.6.4 supports
Tools | Supported App Version | Chart Version | Supported Package Version | SHA256 Checksum |
---|---|---|---|---|
calicomultus | V 0.0.4 | | lima/calicomultus:0.0.4 | 0e32dcea34fe707248cf2348f0ed422c278574594e440db8ceac8190e3984dac |
cert-manager | V 1.15.3 | V 1.15.3 | kubeops/cert-manager:1.6.4 | 61fea41a31cdb7fefb2f4046c9c94ef08dc57523c0b8516ebc11f278b3d79b37 |
clustercreate | V 1.6.4 | | kubeops/clustercreate:1.6.4 | b9a0c9eefeebc6057abcecc7cd6e53956baf28614d48141e1530ae6a4f433f2b |
fileBeat | V 8.5.1 | 8.5.1 | kubeops/filebeat-os:1.6.4 | 23dd2674b3447b3459c8a2b65f550726b6c97ca588d5c7259eb788bec6e4d317 |
harbor | V 2.11.1 | 1.15.1 | kubeops/harbor:1.6.4 | b794a6504769abff5b4ebba7c6384f83409c8d7d8d7687e3e49eec8a31e1a192 |
helm | V 3.14.4 | | kubeops/helm:1.6.4 | 1309b1cefb132152cd6900954b6b68cce6ce3b1c9e878fc925d8ef0439eee5f1 |
ingress-nginx | V 1.10.0 | 4.10.0 | kubeops/ingress-nginx:1.6.4 | 24214c2e96cf949073ba2e132a57c03096f36f5920a6938656bd159242ce8ec2 |
keycloak | V 22.0.1 | 16.0.5 | kubeops/keycloak:1.6.4 | 835d63c0d905dca14ee1aa5bc830e4cb3506c948d1c076317993d2e1a8b083ba |
kubeops-dashboard | V 0.26.0 | 0.26.0 | kubeops/kubeops-dashboard:1.6.4 | 0ebf8ef4d2bf01bc5257c0bf5048db7e785743461ce1969847de0c9605562ef4 |
prometheus | V 0.76.1 | 62.7.0 | kubeops/kube-prometheus-stack:1.6.4 | 59eb96a2f09fa8b632d4958215fd45a82df3c0f7697281ea63e54f49d4a81601 |
logstash | V 8.4.0 | 8.5.1 | kubeops/logstash-os:1.6.4 | 3571e2ed554c8bd57272fa8e2de85e26e67a7dbf62d8622c5d796d5e3f8b6cf5 |
opa-gatekeeper | V 3.17.1 | v3.17.1 | kubeops/opa-gatekeeper:1.6.4 | f5ce384bd332f3b6ffccd09b5824e92976019132b324c8fecbc261d14f2df095 |
opensearch-dashboards | V 2.16.0 | 2.22.0 | kubeops/opensearch-dashboards:1.6.4 | 6dd16d2e411bdde910fc3370c1aca73c3c934832e45174ec71887d74d70dfcec |
opensearch | V 2.16.0 | 2.23.1 | kubeops/opensearch-os:1.6.4 | cab021ed5f832057f2d4a7deaaccb1e2d2ab5d29bac502fb0daeebd8692a8178 |
rook | v1.15.6 cluster/v1.15.6 operator | v1.15.1 cluster/v1.15.1 operator | kubeops/rook-ceph:1.6.4 | 3f7c8c22406b5dc50add81f0df45a65d6d81ec47bbf3fb9935959ff870481601 |
setup | V 1.6.4 | | kubeops/setup:1.6.4 | 4760479e480453029f59152839d6624f7c5a7374fbc37ec2d7d14f8253ab9204 |
velero | V 1.13.2 | 6.4.0 | kubeops/velero:1.6.4 | 55136b3b4ea5aa8582b1300c37f084a48870f531496ed37a0849c33a63460b15 |
KubeOps 1.6.3 supports
Tools | Supported App Version | Chart Version | Supported Package Version | SHA256 Checksum |
---|---|---|---|---|
calicomultus | V 0.0.4 | | lima/calicomultus:0.0.4 | 0e32dcea34fe707248cf2348f0ed422c278574594e440db8ceac8190e3984dac |
cert-manager | V 1.15.3 | V 1.15.3 | kubeops/cert-manager:1.6.3 | 11105f523a2d8faf3bbfdca9e4d06145b4d52bad0ee0f16586266c26b59d5fe5 |
clustercreate | V 1.6.3 | | kubeops/clustercreate:1.6.3 | 9bce651b5d3caa5e83bfad25ef5d2908e16b2cf854168baf59b9ff586841e856 |
fileBeat | V 8.5.1 | 8.5.1 | kubeops/filebeat-os:1.6.3 | 36d0359590b0c5dd3e8f8cd4c5d769a1eea3c3009593cc465ae31f4d9fbeaa02 |
harbor | V 2.11.1 | 1.15.1 | kubeops/harbor:1.6.3 | 9a9d46f2c81a7596c8d00e920b3a733331d2def0676cc077b00749293e24255a |
helm | V 3.14.4 | | kubeops/helm:1.6.3 | f3e90f91c99314ad8357a11129602ddb693aa7792038306f903cff3791a22a3e |
ingress-nginx | V 1.10.0 | 4.10.0 | kubeops/ingress-nginx:1.6.3 | 97d27c7cfe437275994757e0d3395c1864fd1cd57f0441754c7ec2cf128893ab |
keycloak | V 22.0.1 | 16.0.5 | kubeops/keycloak:1.6.3 | 3d300a861d8024595fbc65be6010a3738384754c574bff9aca07d3dfc988671d |
kubeops-dashboard | V 0.26.0 | 0.26.0 | kubeops/kubeops-dashboard:1.6.3 | ab7a339a132138f732aa1a9b70e3308c449566920155f67e4a72a1f2591b09db |
prometheus | V 0.76.1 | 62.7.0 | kubeops/kube-prometheus-stack:1.6.3 | e24aa21f9bcdf900f8d15edeab380ac68921b937af2baa638971364264a9d6cd |
logstash | V 8.4.0 | 8.5.1 | kubeops/logstash-os:1.6.3 | 521d5238e1e6ca5adb12f088e279e05e47432b99008f99d5aed0bee75b740582 |
opa-gatekeeper | V 3.17.1 | v3.17.1 | kubeops/opa-gatekeeper:1.6.3 | 73d1e72c88da83889e48a908f6bac522d416e219b4d342dbcfff7ca987f32f49 |
opensearch-dashboards | V 2.16.0 | 2.22.0 | kubeops/opensearch-dashboards:1.6.3 | 0ef3767f2c1b134d539f5f69a5e74509c2d232ccd337f33eea1d792e0f538f43 |
opensearch | V 2.16.0 | 2.23.1 | kubeops/opensearch-os:1.6.3 | f9165115615e6f58ad320085bf73a37d559aa24e93225edd60cea203f8bdfe70 |
rook | v1.15.1 cluster/v1.15.1 operator | v1.15.1 cluster/v1.15.1 operator | kubeops/rook-ceph:1.6.3 | 13b274e95da154699f72ae8442d1dca654311805d33b33f3d1eb6ea93bc8d5fe |
setup | V 1.6.3 | | kubeops/setup:1.6.3 | cbe81f4169ead9c61bf65cf7b8cc47674a61ce9a6df37e6d8f7074254ea01d7f |
velero | V 1.13.2 | 6.4.0 | kubeops/velero:1.6.3 | b27addb2fc9d7498d82a649cdda61aec32b6f257377472fed243621dbc55b68b |
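The checksums in the tables above can be used to verify a downloaded package before installing it; a minimal sketch with sha256sum (the verify_sha256 helper and the stand-in file are illustrative, not part of KubeOps):

```shell
# Compare a file's SHA256 against an expected checksum (e.g. from the table).
verify_sha256() {
  expected="$1"; file="$2"
  actual=$(sha256sum "$file" | awk '{print $1}')
  if [ "$actual" = "$expected" ]; then echo "checksum OK"; else echo "checksum MISMATCH"; fi
}

# A real call would use the table value and your downloaded file, e.g.:
#   verify_sha256 cbe81f4169ead9c61bf65cf7b8cc47674a61ce9a6df37e6d8f7074254ea01d7f kubeops-setup-1.6.3.sina
# Demonstrated here on a stand-in file whose hash is known:
f=$(mktemp); printf 'hello' > "$f"
verify_sha256 "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824" "$f"
```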
KubeOps 1.6.2 supports
Tools | Supported App Version | Chart Version | Supported Package Version | SHA256 Checksum |
---|---|---|---|---|
calicomultus | V 0.0.4 | lima/calicomultus:0.0.4 | 0e32dcea34fe707248cf2348f0ed422c278574594e440db8ceac8190e3984dac | |
cert-manager | V 1.15.3 | V 1.15.3 | kubeops/cert-manager:1.6.1 | 8ca88b91370d395ea9bcf6f1967a38a2345ea7024936a3be86c51a8079f719a7 |
clustercreate | V 1.6.1 | kubeops/clustercreate:1.6.1 | 5aeec18ea4c960ee4301f9a7808f4eda7d76ec1811be15a6c8092155997a41ce | |
fileBeat | V 8.5.1 | 8.5.1 | kubeops/filebeat-os:1.6.1 | 0908fa15d1d85a59a9887ac8080a5e46f3ee13167f1fcaadefbf4b6229f0cf94 |
harbor | V 2.11.1 | 1.15.1 | kubeops/harbor:1.6.1 | 715b9ce2d0925d8207311fc078c10aa5dfe01685b47743203e17241e0c4ac3c7 |
helm | V 3.14.4 | kubeops/helm:1.6.1 | f149f8687285479e935207fc1c52e0c19e0bf21bc5b00bf11433f2fef7eb2800 | |
ingress-nginx | V 1.10.0 | 4.10.0 | kubeops/ingress-nginx:1.6.1 | 6a7d6c60c26d52a6e322422655e75d8a84040e3022c74a1341b3cc7dae3f1d14 |
keycloak | V 22.0.1 | 16.0.5 | kubeops/keycloak:1.6.1 | 135546a99aa8f25496262ed36a910f80f35c76f0f122652bd196a68b519a41e4 |
kubeops-dashboard | V 1.0.0 | 0.11.0 | kubeops/kubeops-dashboard:1.6.1 | 37c04d6cd7654847add82572c8b2d38520ea63aff47af3b222283b1d570f44a8 |
prometheus | V 0.76.1 | 62.7.0 | kubeops/kube-prometheus-stack:1.6.1 | c45db783a0e5c0475d9cd8e9c1309fa9af45410a8cca055f1c4028b8488cb4c9 |
logstash | V 8.4.0 | 8.5.1 | kubeops/logstash-os:1.6.1 | 19171f2aab866c53733147904aed8d39981f892aac0861dfd54354cdd98d0510 |
opa-gatekeeper | V 3.16.0 | v3.16.0 | kubeops/opa-gatekeeper:1.6.1 | 811f14f669324a7c9bfbac04aac074945c4aecffc926fc75126b44ff0bd41eb2 |
opensearch-dashboards | V 2.14.0 | 2.18.0 | kubeops/opensearch-dashboards:1.6.1 | 7985e684a549f2eada4f3bf9a6490dc38be9b525e8f43ad9ff0c9377bccb0b7b |
opensearch | V 2.16.0 | 2.23.1 | kubeops/opensearch-os:1.6.1 | cb804a50ab971ec55c893bd949127de2011503af37e221c0eb3ad83f5c78a502 |
rook | v1.12.5 cluster/v1.12.5 operator | v1.12.5 cluster/v1.12.5 operator | kubeops/rook-ceph:1.6.1 | 25e684fdc279b4f97cf1a5039f54fffbc1cf294f45935c20167dadd81a35ad52 |
setup | V 1.6.1 | kubeops/setup:1.6.1 | 5b40a96733c2e526e642f17d2941d7a9422ae0a858f14af343277051df96dc09 | |
velero | V 1.13.2 | 6.4.0 | kubeops/velero:1.6.1 | cb228d2c6fd69749e91444def89fd79be51bcb816cabc61c7032404f5257a767 |
KubeOps 1.6.1 supports
Tools | Supported App Version | Chart Version | Supported Package Version | SHA256 Checksum |
---|---|---|---|---|
calicomultus | V 0.0.4 | lima/calicomultus:0.0.4 | 0e32dcea34fe707248cf2348f0ed422c278574594e440db8ceac8190e3984dac | |
cert-manager | V 1.15.3 | V 1.15.3 | kubeops/cert-manager:1.6.1 | 8ca88b91370d395ea9bcf6f1967a38a2345ea7024936a3be86c51a8079f719a7 |
clustercreate | V 1.6.1 | kubeops/clustercreate:1.6.1 | 5aeec18ea4c960ee4301f9a7808f4eda7d76ec1811be15a6c8092155997a41ce | |
fileBeat | V 8.5.1 | 8.5.1 | kubeops/filebeat-os:1.6.1 | 0908fa15d1d85a59a9887ac8080a5e46f3ee13167f1fcaadefbf4b6229f0cf94 |
harbor | V 2.11.1 | 1.15.1 | kubeops/harbor:1.6.1 | 715b9ce2d0925d8207311fc078c10aa5dfe01685b47743203e17241e0c4ac3c7 |
helm | V 3.14.4 | kubeops/helm:1.6.1 | f149f8687285479e935207fc1c52e0c19e0bf21bc5b00bf11433f2fef7eb2800 | |
ingress-nginx | V 1.10.0 | 4.10.0 | kubeops/ingress-nginx:1.6.1 | 6a7d6c60c26d52a6e322422655e75d8a84040e3022c74a1341b3cc7dae3f1d14 |
keycloak | V 22.0.1 | 16.0.5 | kubeops/keycloak:1.6.1 | 135546a99aa8f25496262ed36a910f80f35c76f0f122652bd196a68b519a41e4 |
kubeops-dashboard | V 0.15.1 | 0.11.0 | kubeops/kubeops-dashboard:1.6.1 | 37c04d6cd7654847add82572c8b2d38520ea63aff47af3b222283b1d570f44a8 |
prometheus | V 0.76.1 | 62.7.0 | kubeops/kube-prometheus-stack:1.6.1 | c45db783a0e5c0475d9cd8e9c1309fa9af45410a8cca055f1c4028b8488cb4c9 |
logstash | V 8.4.0 | 8.5.1 | kubeops/logstash-os:1.6.1 | 19171f2aab866c53733147904aed8d39981f892aac0861dfd54354cdd98d0510 |
opa-gatekeeper | V 3.16.0 | v3.16.0 | kubeops/opa-gatekeeper:1.6.1 | 811f14f669324a7c9bfbac04aac074945c4aecffc926fc75126b44ff0bd41eb2 |
opensearch-dashboards | V 2.14.0 | 2.18.0 | kubeops/opensearch-dashboards:1.6.1 | 7985e684a549f2eada4f3bf9a6490dc38be9b525e8f43ad9ff0c9377bccb0b7b |
opensearch | V 2.16.0 | 2.23.1 | kubeops/opensearch-os:1.6.1 | cb804a50ab971ec55c893bd949127de2011503af37e221c0eb3ad83f5c78a502 |
rook | v1.12.5 cluster/v1.12.5 operator | v1.12.5 cluster/v1.12.5 operator | kubeops/rook-ceph:1.6.1 | 25e684fdc279b4f97cf1a5039f54fffbc1cf294f45935c20167dadd81a35ad52 |
setup | V 1.6.1 | kubeops/setup:1.6.1 | 5b40a96733c2e526e642f17d2941d7a9422ae0a858f14af343277051df96dc09 | |
velero | V 1.13.2 | 6.4.0 | kubeops/velero:1.6.1 | cb228d2c6fd69749e91444def89fd79be51bcb816cabc61c7032404f5257a767 |
KubeOps 1.6.0 supports
Tools | Supported App Version | Chart Version | Supported Package Version | SHA256 Checksum |
---|---|---|---|---|
calicomultus | V 0.0.4 | lima/calicomultus:0.0.4 | 0e32dcea34fe707248cf2348f0ed422c278574594e440db8ceac8190e3984dac | |
cert-manager | V 1.14.5 | v1.14.5 | kubeops/cert-manager:1.6.0 | 1a9ed861709cbfb05158f7610026acf5199749f989e1527ad48b80a277323765 |
clustercreate | V 1.6.0 | kubeops/clustercreate:1.6.0 | 730925a6231a4fc8c7abf162d5d47a0f60107cb4dfa825db6e52a15d769a812d | |
fileBeat | V 8.5.1 | 8.5.1 | kubeops/filebeat-os:1.6.0 | 061dc4feed5d970db3d353d29b8ef8256a826b146e0d95befbea4d5350874b8f |
harbor | V 2.9.3 | 1.13.3 | kubeops/harbor:1.6.0 | 3dd7dceb969dad59140e58631fd3a0c9f60ed22e2f1c2e1d087118e9c7592f26 |
helm | V 3.14.4 | kubeops/helm:1.6.0 | cb53f7b751473dd96f435d9f614e51edeaea99f2ca57a3710b59c788540d48d5 | |
ingress-nginx | V 1.10.0 | 4.10.0 | kubeops/ingress-nginx:1.6.0 | 068618eb258c2558c3097ed19344da9caad0d7b44a8252b160cd36ef4425b790 |
keycloak | V 22.0.1 | 16.0.5 | kubeops/keycloak:1.6.0 | 236b6955dc08707d6e645625681da0e356c865da9810695d4af7fc2509c36f25 |
kubeops-dashboard | V 0.15.1 | 0.11.0 | kubeops/kubeops-dashboard:1.6.0 | 5564ec8dfa33bb332e2863b171580bffebad3dc379a1fd365bddf5fc1343caac |
prometheus | V 0.73.2 | 58.6.0 | kubeops/kube-prometheus-stack:1.6.0 | 0286d6a05e61081e3abe783a36512bf372a3184e6f91457819a2b2c4046ce35a |
logstash | V 8.4.0 | 8.5.1 | kubeops/logstash-os:1.6.0 | 88851fe2d5ec269a21811b15bb9cd11778d6564560eefe3938e874202298f3f1 |
opa-gatekeeper | V 3.16.0 | v3.16.0 | kubeops/opa-gatekeeper:1.6.0 | ad841610be5ce624abeb6e439e3e353bd2f1240ca613e24ebdc13f36e8891a1a |
opensearch-dashboards | V 2.14.0 | 2.18.0 | kubeops/opensearch-dashboards:1.6.0 | 6364ffb7dbe05ea16a685ddf6e3d3a2b59ef6e8b28e5a1194710a5c37ae72c40 |
opensearch | V 2.14.0 | 2.20.0 | kubeops/opensearch-os:1.6.0 | 16e1699fe187962fc58190a79d137db4c07723f2a84a889f393830b0093dba82 |
rook | v1.12.5 cluster/v1.12.5 operator | v1.12.5 cluster/v1.12.5 operator | kubeops/rook-ceph:1.6.0 | 48f79af13a0da86ea5019c78c24aa52c719d03a6ea2ab4e332b2078df0c02a16 |
setup | V 1.6.0 | kubeops/setup:1.6.0 | e2dd0419e17bbd2deaaea1f2888d391749afc0f550145c1e6a3ef5d5fba3a6a2 | |
velero | V 1.13.2 | 6.4.0 | kubeops/velero:1.6.0 | 2b9e27dcf3a927ebe044f332b597d399d99a1a95660f7a186cf7fb3658b3676d |
KubeOps 1.6.0_Beta1 supports
Tools | Supported App Version | Chart Version | Supported Package Version | SHA256 Checksum |
---|---|---|---|---|
fileBeat | V 8.5.1 | 8.5.1 | kubeops/filebeat-os:1.6.0_Beta1 | 72d21348d6153beb9b36c287900579f1100ccd7333f63ff30edc576cfcb47250 |
harbor | V 2.9.3 | 1.13.3 | kubeops/harbor:1.6.0_Beta1 | 317c7a931bb7f1f5d1d10dd373355f048e350878c0eee086c67714b104fad7cb |
helm | V 3.14.4 | kubeops/helm:1.6.0_Beta1 | 7890eb0c45ae420b664c655175844d84194520ae20429ad3b9d894eb865a8e66 | |
logstash | V 8.4.0 | 8.5.1 | kubeops/logstash-os:1.6.0_Beta1 | 3c90093bc613adc8919cd3cc9f7f87ecad72a89b85ea30a9c161a5c59e9e925b |
opa-gatekeeper | V 3.16.0 | v3.16.0 | kubeops/opa-gatekeeper:1.6.0_Beta1 | 3f655af62141c33437fe1183700c9f5ae5bd537b84df0d649023ae2cdc83cd11 |
opensearch | V 2.14.0 | 2.20.0 | kubeops/opensearch-os:1.6.0_Beta1 | 0953ab749ccdf8b03648f850298f259504d40338bffe03dde2d6ab27ff0cb787 |
opensearch-dashboards | V 2.14.0 | 2.18.0 | kubeops/opensearch-dashboards:1.6.0_Beta1 | 9b15ab03a8be7c0e7515056e7b46d2ca9a425690701a7a77afb2b4455790041e |
prometheus | V 0.73.2 | 58.6.0 | kubeops/kube-prometheus-stack:1.6.0_Beta1 | 277131992a7b70669e8aa2a299417da15a4631c89c9cca0f89128a1f2d81e532 |
rook | v1.12.5 cluster/v1.12.5 operator | v1.12.5 cluster/v1.12.5 operator | kubeops/rook-ceph:1.6.0_Beta1 | 495a9afeb61ff50800c6bc9931b934ee75bd78c988f8fa47a8ee79299f1a3b51 |
cert-manager | V 1.14.5 | v1.14.5 | kubeops/cert-manager:1.6.0_Beta1 | 220c892ed25f126e63da55f942a690a4d0443f5ed27e66b6532cdf573bb597af |
ingress-nginx | V 1.10.0 | 4.10.0 | kubeops/ingress-nginx:1.6.0_Beta1 | c21c12901d4f0542928234a4695c579fc24588a5d83ad27a61321c6b697f5588 |
kubeops-dashboard | V 1.0.0 | 0.11.0 | kubeops/kubeops-dashboard:1.6.0_Beta1 | 2fb230a9a9f2a3bfa5e4d588c5f63f12d1a8bc462f664ddd9b1d088f9ea141ac |
keycloak | V 22.0.1 | 16.0.5 | kubeops/keycloak:1.6.0_Beta1 | a9a3dedc583ec3d1b481e4799a50fe440af542f5b71176e0c80ba2e66d08bcdb |
velero | V 1.13.2 | 6.4.0 | kubeops/velero:1.6.0_Beta1 | 9bf96e91902aa8caff4e2245ae7d08058bfa099079fbf4ba77df08b4c697d654 |
clustercreate | V 1.6.0 | kubeops/clustercreate:1.6.0_Beta1 | 36dd0d62ff6a405d8dab1c330972cd033d73413cab578d51a9249738a0f14385 | |
setup | V 1.6.0 | kubeops/setup:1.6.0_Beta1 | 6c9a96e026dbe7820f56c22767765adadd12466056d02de19d9bcae2c9fbbcde | |
calicomultus | V 0.0.4 | lima/calicomultus:0.0.4 | a58aa03128ee88f3803803186c357f7daab252d8f3ae51f4aea124e8f4939f7f |
KubeOps 1.6.0_Beta0 supports
Tools | Supported App Version | Chart Version | Supported Package Version | SHA256 Checksum |
---|---|---|---|---|
fileBeat | V 8.5.1 | 8.5.1 | kubeops/filebeat-os:1.6.0_Beta0 | a4d399bcd9efb238b07aee4b54ad11de132599801608dffc69ca4eee04f71c07 |
harbor | V 2.9.3 | 1.13.3 | kubeops/harbor:1.6.0_Beta0 | c345a0bb5fd80414405588583843978f8ca7dc31e07cbbf1c0db956866bc9e4d |
helm | V 3.14.4 | kubeops/helm:1.6.0_Beta0 | 0ea84bc7b77dff23a13a1a2a9426930f68845e0b5a1481c2362f3d895215274f | |
logstash | V 8.4.0 | 8.5.1 | kubeops/logstash-os:1.6.0_Beta0 | 75d42128535c5d30bc175287a3b9c04a193698bff64a830873c02ae697573127 |
opa-gatekeeper | V 3.16.0 | v3.16.0 | kubeops/opa-gatekeeper:1.6.0_Beta0 | e19e933869c2feb73bea8838a7f4bfcf0bf19090bae97cbf84407241ea3ca973 |
opensearch | V 2.14.0 | 2.20.0 | kubeops/opensearch-os:1.6.0_Beta0 | d2f58718d691946ea60bebe8eec6629f78f290405fe3fa572cec41b81106526e |
opensearch-dashboards | V 2.14.0 | 2.18.0 | kubeops/opensearch-dashboards:1.6.0_Beta0 | af6b12543a1e4cc863b06709ccbf67dec02db0f68d359916950a839abc581e5e |
prometheus | V 0.73.2 | 58.6.0 | kubeops/kube-prometheus-stack:1.6.0_Beta0 | 2d773591c3dda297c00cc37abed74d1cf1d1575feb2a69610f0bdc6fc9a69040 |
rook | v1.12.5 cluster/v1.12.5 operator | v1.12.5 cluster/v1.12.5 operator | kubeops/rook-ceph:1.6.0_Beta0 | c3e95ec2fb9b96346cba802dd010a00fd1ddd791a2ce2cbefa464cfbb4a922cc |
cert-manager | V 1.14.5 | v1.14.5 | kubeops/cert-manager:1.6.0_Beta0 | 792759e538124e8307daf9abb81aef203655a102a713a05a0b3b547b8c19dd99 |
ingress-nginx | V 1.10.0 | 4.10.0 | kubeops/ingress-nginx:1.6.0_Beta0 | 41f64cea80d92a6356a713fb612a5bafbe6a527b2bd9e21e974347dd3f3ad0d2 |
kubeops-dashboard | V 1.0.0 | 0.11.0 | kubeops/kubeops-dashboard:1.6.0_Beta0 | a9522a68a6be45358b96a78527ca3653439b2c24c5ab349ac6767e003dee80a4 |
keycloak | V 22.0.1 | 16.0.5 | kubeops/keycloak:1.6.0_Beta0 | f3dcc5dd3b21d5da83c72f757146df3ddc32e5b793f7c6039df751ab88ccc2b4 |
velero | V 1.13.2 | 6.4.0 | kubeops/velero:1.6.0_Beta0 | 4653976cf762030e859fe83af4ac0f0830d61dec0a9f40d33ab590743a6baebe |
clustercreate | V 1.6.0 | kubeops/clustercreate:1.6.0_Beta0 | 29dc5a9d903eb2d9ac836f580e1ca4ff2691f24989bdb1c31313526de29e0208 | |
setup | V 1.6.0 | kubeops/setup:1.6.0_Beta0 | 7c41ace358a4e62fb0c31a920456308086a1bda4294e1ff0ab26763ae41da9bd | |
calicomultus | V 0.0.4 | lima/calicomultus:0.0.4 | a58aa03128ee88f3803803186c357f7daab252d8f3ae51f4aea124e8f4939f7f |
5 - Glossary
Glossary
This section defines a glossary of common KubeOps terms.
SINA package
A SINA package is a .sina file created by bundling package.yaml with other essential YAML files and artifacts. This package is ready to install on your Kubernetes clusters.
KubeOps Hub
KubeOps Hub is a secure repository where published SINA packages can be stored and shared. You are welcome to contribute to and use the public hub; at the same time, KubeOps provides a way to access your own private hub.
Installation Address
It is the distinctive address automatically generated for each published package on KubeOps Hub. It is constructed from the name of the package creator, the package name and the package version.
You can use this address when installing the package on your Kubernetes cluster.
It is indicated by the install column in KubeOps Hub.
Deployment name
When a package is installed, SINA creates a deployment name to track that installation.
Alternatively, SINA also lets you specify a deployment name of your choice during the installation.
A single package may be installed many times into the same cluster, creating multiple deployments.
It is indicated by the Deployment column in the list of package deployments.
Tasks
As the name suggests, “Tasks” in package.yaml are one or more sets of instructions to be executed. These are defined by utilizing Plugins.
Plugins
SINA provides many functions which enable you to define tasks to be executed using your package. These are called Plugins. They are the crucial part of your package development.
LIMAROOT Variable
LIMAROOT is an environment variable for LIMA. It is the place where LIMA stores information about your clusters.
The environment variable LIMAROOT is set by default to /var/lima. However, LIMA also lets you set your own LIMAROOT.
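A minimal sketch of overriding the default location (the path under $HOME is an arbitrary example):

```shell
# LIMA stores cluster information under $LIMAROOT (default: /var/lima).
# To use a custom location instead, export the variable before running LIMA:
export LIMAROOT="$HOME/lima"    # example path, not a required location
mkdir -p "$LIMAROOT"
echo "LIMAROOT is $LIMAROOT"
```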
KUBEOPSROOT Variable
The environment variable KUBEOPSROOT stores the location of the SINA plugins and the config.yaml. To use the variable, the config.yaml and the plugins have to be copied manually.
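A sketch of setting the variable and preparing the directory (the copy source paths are hypothetical placeholders):

```shell
# KUBEOPSROOT tells KubeOps where the SINA plugins and config.yaml live.
export KUBEOPSROOT="$HOME/kubeops"   # example location
mkdir -p "$KUBEOPSROOT/plugins"
# Then copy config.yaml and the plugins into place manually, e.g.:
#   cp /path/to/config.yaml "$KUBEOPSROOT/"
#   cp -r /path/to/plugins/. "$KUBEOPSROOT/plugins/"
```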
apiVersion
It shows the supported KubeOps tool API version. You do not need to change it unless otherwise specified.
Registry
As the name suggests, it is the location where docker images can be stored. You can either use the default KubeOps registry or specify your own local registry for AirGap environments. You need an internet connection to use the default registry provided by KubeOps.
Maintenance Package
KubeOps provides a package for the supported Kubernetes tools. These packages help you update the Kubernetes tools to the desired versions on your clusters along with the dependencies.
Cluster
In computing, a cluster refers to a group of interconnected computers or servers that work together as a single system.
These machines, or nodes, are typically networked and collaborate to execute tasks or provide services. Clusters are commonly used in various fields such as distributed computing, high-performance computing, and cloud computing to improve reliability, scalability, and performance. In the context of technologies like Kubernetes, a cluster consists of multiple nodes managed collectively to deploy, manage, and scale containerized applications.
Container
A container is a lightweight, standalone package that includes everything needed to run a piece of software, including the code, runtime, libraries, and dependencies.
Containers are isolated from each other and from the underlying infrastructure, providing consistency and portability across different environments. Kubernetes manages containers, orchestrating their deployment, scaling, and management across a cluster of nodes. Containers are often used to encapsulate microservices or individual components of an application, allowing for efficient resource utilization and simplified deployment processes.
Drain-node
A Drain Node is a feature in distributed systems, especially prevalent in Kubernetes, used for gracefully removing a node from a cluster.
It allows the system to evict all existing workload from the node and prevent new workload assignments before shutting it down, ensuring minimal disruption to operations.
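In plain Kubernetes terms, draining corresponds to the standard kubectl workflow; a minimal sketch (worker1 is a placeholder node name):

```shell
# Evict workloads and mark the node unschedulable before maintenance:
kubectl drain worker1 --ignore-daemonsets --delete-emptydir-data
# ... perform the maintenance, then allow scheduling again:
kubectl uncordon worker1
```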
Kube-proxy
Kube-Proxy, short for Kubernetes Proxy, is a network proxy that runs on each node in a Kubernetes cluster. Its primary responsibility is to manage network connectivity for Kubernetes services. Its main tasks include service proxying and load balancing.
Kubelet
Kubelet is a crucial component of Kubernetes responsible for managing individual nodes in a cluster. It ensures that containers are running in pods as expected, maintaining their health and performance.
Kubelet communicates with the Kubernetes API server to receive instructions about which pods should be scheduled and executed on its node. It also monitors the state of these pods, reporting any issues back to the API server. Kubelet plays a vital role in the orchestration and management of containerized workloads within a Kubernetes cluster.
Node
A Kubernetes node oversees and executes pods.
It serves as the operational unit (virtual or physical machine) for executing assigned tasks. Similar to how pods bring together multiple containers to collaborate, a node gathers complete pods to work in unison. In large-scale operations, the goal is to delegate tasks to nodes with available pods ready to handle them.
Pod
In Kubernetes, a pod groups containers and is the smallest unit managed by the system.
Each pod shares an IP address among its containers and resources like memory and storage. This allows treating the containers as a single application, similar to traditional setups where processes run together on one host. Often, a pod contains just one container for simple tasks, but for more complex operations requiring collaboration among multiple processes with shared data, multi-container pods simplify deployment.
For example, in an image-processing service creating JPEGs, one pod might have containers for resizing images and managing background tasks or data cleanup, all working together.
Registry
Helm registry serves as a centralized repository for Helm charts, facilitating the discovery, distribution, and installation of Kubernetes applications and services.
It allows users to easily find, share, and consume pre-packaged Kubernetes resources, streamlining the deployment process in Kubernetes environments.
Zone
A “zone” typically refers to a subset of the overall cluster that shares certain characteristics, such as geographic location or hardware specifications. Zoning helps distribute resources strategically and can enhance fault tolerance by ensuring redundancy within distinct zones.
6 - FAQs
FAQ - Kubeopsctl
Known Issues
ImagepullBackoffs in Cluster
If you have ImagePullBackOffs in your cluster, e.g. for Prometheus, you can simply run the kubeopsctl change registry command again, e.g.:
kubeopsctl change registry -r <your master ip>:30002/library -t localhost:30002/library -f kubeopsctl.yaml
FAQ - KubeOps KOSI
Error Messages
There is an error message regarding Remote-Certificate
- Error: http://hub.kubernative.net/dispatcher?apiversion=3&vlientversion=2.X.0 : 0 (X means per version)
- CentOS 7 cannot update the version by itself (ca-certificates-2021.2.50-72.el7_9.noarch).
  - Fix: yum update ca-certificates -y or yum update
- Manual download and install of the ca-certificates RPM:
  - Download: curl http://mirror.centos.org/centos/7/updates/x86_64/Packages/ca-certificates-2021.2.50-72.el7_9.noarch.rpm -o ca-certificates-2021.2.50-72.el7_9.noarch.rpm
  - Install: yum install ca-certificates-2021.2.50-72.el7_9.noarch.rpm -y
KOSI Usage
Can I use KOSI with sudo?
- At the moment, KOSI has no sudo support.
- Docker and Helm, which are required, need sudo permissions.
I get an error message when I try to search an empty Hub?
- Known bug, will be fixed in a later release.
- Need at least one package in the Hub before you can search.
Package Configuration
In my package.yaml, can I use uppercase characters as a name?
- Currently, only lowercase characters are allowed.
- This will be fixed in a later release.
I have an error message that says “Username or password contain non-Latin characters”?
- Known bug, may occur with incorrect username or password.
- Please ensure both are correct.
In my template.yaml, can I just write a value without an associated key?
- No, a YAML file requires a key-value structure.
Do I have to use the template plugin in my KOSI package?
- No, you don’t have to use the template plugin if you don’t want to.
I have an error message that says “reference not set to an instance of an object”?
- Error from our tool for reading YAML files.
- Indicates an attempt to read a value from a non-existent key in a YAML file.
I try to template but the value of a key stays empty.
- Check the correct path of your values.
- If your key contains “-”, the template plugin may not recognize it.
- Removing “-” will solve the issue.
FAQ - KubeOps LIMA
Error Messages
LIMA Cluster not ready
- You have to apply the calico.yaml in the $LIMAROOT folder:
kubectl apply -f $LIMAROOT/calico.yaml
read header failed: Broken pipe (for LIMA version >= 0.9.0)
- LIMA stops at the line ansible Playbook : COMPLETE : Ansible playbooks complete.
- Search $LIMAROOT/dockerLogs/dockerLogs_latest.txt for Broken pipe. Starting from the line with Broken pipe, check whether the following lines exist:
debug3: mux_client_read_packet: read header failed: Broken pipe
debug2: Received exit status from master 1
Shared connection to vli50707 closed.
<vli50707> ESTABLISH SSH CONNECTION FOR USER: demouser
<vli50707> SSH: ansible.cfg set ssh_args: (-C)(-o)(ControlMaster=auto)(-o)
(ControlPersist=60s)
If this is the case, the line ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s in the file /etc/ansible/ansible.cfg inside the currently running lima container must be commented out or removed.
Example:
docker container ls
CONTAINER ID   IMAGE                                                  COMMAND       CREATED      STATUS      PORTS   NAMES
99cabe7133e5   registry.preprod.kubernative.net/kubeops/lima:v0.8.0   "/bin/bash"   6 days ago   Up 6 days           lima-v0.8.0
docker exec -it 99cabe7133e5 bash
vi /etc/ansible/ansible.cfg
Change the line ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s to #ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s
or delete the line.
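The same edit can be done non-interactively with sed; a sketch, demonstrated here on a stand-in copy of the file (inside the lima container you would point cfg at /etc/ansible/ansible.cfg instead):

```shell
# Comment out the ssh_args line in ansible.cfg so the multiplexed SSH
# connection ("Broken pipe") problem goes away.
cfg=$(mktemp)    # stand-in; in the container: cfg=/etc/ansible/ansible.cfg
printf 'ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s\n' > "$cfg"
sed -i 's/^ssh_args/#ssh_args/' "$cfg"   # prefix the line with '#'
grep '^#ssh_args' "$cfg"                 # confirm it is now commented out
```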
I want to delete the cluster master node and rejoin the cluster. When trying to rejoin the node a problem occurs and rejoining fails. What can be done?
To delete the cluster master, we need to set the cluster master to a different master machine first.
- On the admin machine: change the IP address from the current to the new cluster master in:
  - $LIMAROOT/<name_of_cluster>/clusterStorage.yaml
  - ~/.kube/config
- Delete the node
- Delete the images to prevent interference: ctr -n k8s.io i rm $(ctr -n k8s.io i ls -q)
- Change the IP on the new cluster master in /etc/kubernetes/admin.conf
- Change the IPs in the config maps:
  - kubectl edit cm kubeadm-config -n kube-system
  - kubectl edit cm kube-proxy -n kube-system
  - kubectl edit cm cluster-info -n kube-public
- Restart kubelet
- Rejoin the node
Using LIMA on RHEL8 fails to download metadata for repo “rhel-8-for-x86_64-baseos-rpms”. What should I do?
This is a common problem which happens now and then, but the real source of error is difficult to identify. Nevertheless, the workaround is quick and easy: clean up the current repo data, refresh the subscription-manager and update the whole operating system. This can be done with the following commands:
dnf clean all
rm -frv /var/cache/dnf
subscription-manager refresh
dnf update -y
How does LIMA handle SELinux?
SELinux will be temporarily deactivated during the execution of LIMA tasks. After the execution is finished, SELinux is automatically reactivated. This indicates you are not required to manually enable SELinux every time while working with LIMA.
My pods are stuck: config-update 0/1 ContainerCreating
- These pods are responsible for updating the loadbalancer; you can update them manually and delete the pod.
- You can try redeploying the daemonset to the kube-system namespace.
My master can not join, it fails when creating /root/.kube
Try the following commands on the master:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Some nodes are missing the loadbalancer
- Check if the loadbalancer staticPod file can be found in the manifest folder of the node.
- If it isn't there, please copy it from another node.
Some nodes didn’t upgrade. What to do now?
- Retry upgrading your cluster.
- If LIMA thinks you are already on the target version, edit the stored data of your cluster at $LIMAROOT/myClusterName/clusterStorage.yaml. Set the key kubernetesVersion to the lowest Kubernetes version present on a node in your cluster.
Could not detect a supported package manager from the followings list: [‘PORTAGE’, ‘RPM’, ‘PKG’, ‘APT’], or the required PYTHON library is not installed. Check warnings for details.
- Check if you have a package manager installed.
- You have to install python3 with yum install python and then create a symlink from python to python3 with update-alternatives --config python.
Aborting, target uses SELINUX but PYTHON bindings (LIBSELINUX-PYTHON) aren’t installed!
You have to install libselinux-python on your cluster machine so you can install a firewall via LIMA.
FAQ - KubeOps PIA
The httpd service is terminating too long. How can I force the shut down?
- Use the following command to force shutting down the httpd service:
kubectl delete deployment pia-httpd --grace-period=0 --force
- Most deployments have a networking service, like our httpd does. Delete the networking service with the command:
kubectl delete svc pia-httpd-service --grace-period=0 --force
I get the error that some nodes are not ‘Ready’. How do I fix the problem?
- Use the kubectl get nodes command to find out which node is not ready.
- To identify the problem, get access to the shell of the non-ready node. Use systemctl status kubelet to get status information about the state of kubelet.
- The most common cause of this error is that the kubelet fails to register the node automatically. In this case, the kubelet must be restarted manually on the non-ready machine with systemctl enable kubelet and systemctl start kubelet. You may also need to restart containerd with systemctl stop containerd and systemctl restart containerd.
- If the issue persists, the reason behind the error can be evaluated by your cluster administrators.
I checked my clusterStorage.yaml after the cluster creation and there is only an entry for master1
This error occurs sporadically and will be fixed in a later version. The error has no effect.
FAQ KubeOps PLATFORM
Support of S3 storage configuration doesn’t work
At the moment, the kosi-package rook-ceph:1.1.2 (utilized in kubeOps 1.1.3) is employing a Ceph version with a known bug that prevents the proper setup and utilization of object storage via the S3 API. If you require the functionality provided by this storage class, we suggest considering the use of kubeOps 1.0.7. This particular version does not encounter the aforementioned issue and provides comprehensive support for S3 storage solutions.
Change encoding to UTF-8
Please make sure that your uservalues.yaml
is using UTF-8 encoding.
If you get issues with encoding, you can convert your file to UTF-8 with the following command (here the source encoding is assumed to be ISO-8859-1; write to a new file, since iconv cannot safely overwrite its input):
iconv -f ISO-8859-1 -t UTF-8 uservalues.yaml > uservalues-utf8.yaml
How to update Calico Multus?
- Get the podSubnet located in clusterStorage.yaml ($LIMAROOT/<clustername>/clusterStorage.yaml)
- Create a values.yaml with the key podSubnet and the value from the previous step.
Example:
podSubnet: 192.168.0.0/17
- Get the deployment name of the current calicomultus installation with the kosi list command.
Example:
| Deployment | Package | PublicHub | Hub |
|-------------|--------------------------------------|--------------|----------|
| 39e6da | local/calicomultus:0.0.1 | | local |
- Update the deployment with kosi update lima/calicomultus:0.0.2 --dname <yourdeploymentname> --hub=public -f values.yaml
  - --dname: important parameter, mandatory for the update command.
  - -f values.yaml: important so that the right podSubnet is used.
Known issue:
error: resource mapping not found for name: calico-kube-controllers namespace: ... from calico.yaml: no matches for kind PodDisruptionBudget in version policy/v1beta1
ensure CRDs are installed first
Create Cluster-Package with firewalld:
If you want to create a cluster with firewalld and the kubeops/clustercreate:1.0. package, you have to manually pull the firewalld maintenance package for your OS first, after executing the kubeops/setup:1.0.1 package.
Opensearch pods do not start:
If the following message appears in the OpenSearch pod logs, the kernel parameter vm.max_map_count is too low:
ERROR: [1] bootstrap checks failed
[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
On all control-plane and worker nodes the line vm.max_map_count=262144 must be added to the file /etc/sysctl.conf. After that, the following command must be executed in the console on all control-plane and worker nodes: sysctl -p
Finally, the OpenSearch pods must be restarted.
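The steps above can be consolidated into a short sketch (run as root on every control-plane and worker node; the namespace and label in the restart command are placeholders, not fixed KubeOps values):

```shell
# Raise the virtual memory map limit required by OpenSearch:
echo 'vm.max_map_count=262144' >> /etc/sysctl.conf
sysctl -p
# Then restart the OpenSearch pods, e.g.:
#   kubectl delete pod -n <opensearch-namespace> -l <opensearch-pod-selector>
```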
Does the KubeOps Platform have a vendor lock-in?
No
, the KubeOps Platform does not have a vendor lock-in. It is built entirely on open standards and Kubernetes technologies, ensuring you retain full control over your infrastructure at all times.
If you decide to discontinue using the KubeOps Platform, you can:
- Export data and configurations: All data and settings are available in standardized formats.
- Migrate workloads: Your applications can run on any Kubernetes environment without requiring modifications.
- Replace modules: Features like monitoring, security, or lifecycle management can be gradually replaced with alternative tools.
Your infrastructure will remain fully operational throughout the transition. Additionally, we provide comprehensive documentation and optional support to ensure a smooth migration process.
FAQ - KubeOps KUBEOPSCTL
Known issues:
The namespace for packages must remain consistent. For example, if you deploy a package in the “monitoring” namespace, all Kosi updates should also be applied within the same namespace.
HA capability is only available after 12 hours. For earlier HA capability, manually move the file /etc/kubernetes/manifests/haproxy.yaml out of the folder and back in again.
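A sketch of that move-out/move-in cycle on the node (the 30-second pause is an assumed wait time, not a documented value):

```shell
# Temporarily remove the haproxy static pod manifest so the kubelet
# tears the pod down, then restore it to recreate the pod:
mv /etc/kubernetes/manifests/haproxy.yaml /tmp/haproxy.yaml
sleep 30    # give the kubelet time to remove the static pod
mv /tmp/haproxy.yaml /etc/kubernetes/manifests/haproxy.yaml
```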
After upgrading a node or zone it is possible that the lima container is still running. Please confirm with podman ps -a whether a lima container is running. Remove the lima container with podman rm -f <container id>. After that you can start another upgrade of the node or zone.
Sometimes the rook-ceph PDBs block the Kubernetes upgrade if you have 3 workers, so you have to delete the rook-ceph PDBs so that the nodes can be drained during the Kubernetes upgrade process. The PDBs are created dynamically, so they may be recreated after some time.
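A sketch of inspecting and deleting those PDBs (assuming rook-ceph runs in the rook-ceph namespace):

```shell
# List the PodDisruptionBudgets that may block node draining:
kubectl get pdb -n rook-ceph
# Delete them so the upgrade can drain the nodes (they are recreated dynamically):
kubectl delete pdb --all -n rook-ceph
```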
If the calico or the multus images have an ImagePullBackOff, you need to execute kosi pull --hub public lima/calicomultus:0.0.3 -o calico.tgz -r masternode:5000 -t localhost:5001 for all master nodes.
Even if you have the updateRegistry parameter in your yaml file set to true, the images will not be rewritten. You can use lima update -r (clustername from the yaml file).
The rook-ceph dashboard is inaccessible with kubeopsctl v1.6.2.
An additional worker or master is not added to the existing cluster. We faced that issue with kubeopsctl 1.5.0: after cluster creation an additional master or worker node should be joined, but the kubeopsctl logs show that the additional node couldn't be found. In $KUBEOPSROOT/lima/dockerLogs/dockerLogs_latest.txt at the bottom of the file we found the error Variable useInsecureRegistry is not defined. After checking $KUBEOPSROOT/lima/test/clusterStorage.yaml (test is the name of our cluster; in your case it is the cluster name you gave in the kubeopsctl.yaml file) we found out that there is the entry useInsecureRegistry: without a value. After we changed it to useInsecureRegistry: false and tried to add the additional node, it worked.
Update the kubeopsctl.yaml from 1.6.X to 1.7.X: you need to change the ipAdress parameter to ipAddress in your kubeopsctl.yaml file and in all generated yaml files in $KUBEOPSROOT.