Installation

KubeOps Installation and Setup #

Welcome to the very first step to getting started with KubeOps. In this section, you will get to know about

  • hardware, software and network requirements
  • steps to install the required software
  • key configurations for KubeOps

Prerequisites #

Minimum hardware and OS requirements for a Linux machine are:

| OS | Minimum Requirements |
| --- | --- |
| Red Hat Enterprise Linux 8 | 4 CPU cores, 16 GB memory |
| OpenSUSE 15 | 4 CPU cores, 16 GB memory |
At least one machine should be used as an admin machine for cluster lifecycle management.
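The steps in this guide assume these minimums are met; a quick shell check (the threshold values are taken from the table above):

```shell
# Report CPU cores and memory against the 4-core / 16 GB minimum
cpus=$(nproc)
mem_kb=$(grep MemTotal /proc/meminfo | awk '{print $2}')
mem_gb=$(( mem_kb / 1024 / 1024 ))
echo "cores=${cpus} memory=${mem_gb}GB (minimum: 4 cores, 16 GB)"
```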

Requirements on admin #

The following requirements must be fulfilled on the admin machine.

  1. All the utilized users require sudo privileges. If you run KubeOps as a non-root user, that user needs sudo rights; in openSUSE and RHEL 8 environments, add the user to the wheel group.

  2. The admin machine must be synchronized with the current time.
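To verify time synchronization, standard tools suffice; a minimal sketch (timedatectl is only available on systemd-based hosts such as RHEL 8 and openSUSE):

```shell
# Print the current UTC time; on systemd hosts also query the NTP sync flag
date -u
timedatectl show -p NTPSynchronized 2>/dev/null || true
```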

  3. You need an internet connection to use the default KubeOps registry registry1.kubernative.net/lima.

    A local registry can be used in an air-gapped environment. KubeOps only supports secure registries by default.
    If you nevertheless use an insecure registry, it is important to list it as an insecure registry in /etc/containers/registries.conf.
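If you do have to work with an insecure registry, an entry in registries.conf might look like the following sketch (v2 format; the registry address is a placeholder):

```toml
# /etc/containers/registries.conf -- hypothetical local registry
[[registry]]
location = "registry.local:5000"
insecure = true
```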
  4. sina 2.6.0 must be installed on your machine. See the Quick Start Guide for instructions on how to do this.

  5. Podman 2.2.1 must be installed on your machine.

    To install Podman on RHEL 8, use the following command:

    sina install --hub public --dname podmanrhel8 kubeops/kubeops-podmanrhel8:2.2.1
    

    Note: Podman can conflict with containerd, so it is recommended to remove the containerd.io package before installing Podman.

    To install Podman on OpenSUSE, use the following command:

    sina install --hub public --dname podmanlp kubeops/kubeops-podmanlp:2.2.1
    

    Note: Podman can conflict with containerd, so it is recommended to remove the containerd.io package before installing Podman.

  6. You must install kubeops-basic-plugins:0.4.0.

    Simply type in the following command to install the Basic-Plugins.

    sina install --hub=public pia/kubeops-basic-plugins:0.4.0
    

    Note: the plugins must be installed as the root user.

  7. You must install the kubeops-kubernetes-plugins:0.5.0.

    Simply type in the following command to install the Kubernetes-Plugins.

    sina install --hub public pia/kubeops-kubernetes-plugins:0.5.0
    

Requirements for each node #

The following requirements must be fulfilled on each node.

  1. All the utilized users require sudo privileges. If you run KubeOps as a non-root user, that user needs sudo rights; in openSUSE and RHEL 8 environments, add the user to the wheel group.

  2. Every machine must be synchronized with the current time.

  3. You have to assign a unique, lowercase hostname to every machine you are using.

    We recommend using self-explanatory hostnames.

    To set the hostname on your machine use the following command:

    hostnamectl set-hostname <name of node>
    
    • Example
      Use the commands below to set the hostnames on each machine to admin, master, node1, and node2.
      hostnamectl set-hostname admin
      hostnamectl set-hostname master 
      hostnamectl set-hostname node1
      hostnamectl set-hostname node2
      

    Note: setting the hostname requires sudo privileges.

    It is recommended that a DNS service is running. If you do not have a DNS service, you can add the entries to the /etc/hosts file instead. An example entry in the /etc/hosts file could be:

    10.2.10.12 admin
    10.2.10.13 master1
    10.2.10.14 master2
    10.2.10.15 master3
    10.2.10.16 node1
    10.2.10.17 node2
    10.2.10.18 node3
    

  4. To establish an SSH connection between your machines, you either need an SSH key or you need to install sshpass.

    1. Generate an SSH key on the admin machine using the following command:

      ssh-keygen
      

      Two keys will be generated in the ~/.ssh directory:
      id_rsa (the private key) and id_rsa.pub (the public key).

    2. Copy the SSH key from the admin machine to your node machine(s) with the following command:

      ssh-copy-id <ip address or hostname of your node machine>
      
    3. Now try establishing a connection to your node machine(s):

      ssh <ip address or hostname of your node machine>
      
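Since ssh-keygen prompts before overwriting an existing key, it can be useful to check for one before step 1; a small sketch:

```shell
# Check whether an RSA key pair already exists before running ssh-keygen
if [ -f "${HOME}/.ssh/id_rsa.pub" ]; then
    echo "existing key: ${HOME}/.ssh/id_rsa.pub"
else
    echo "no key found, run ssh-keygen"
fi
```

If you use sshpass instead of key-based authentication, ssh-copy-id can typically be wrapped with it (e.g. `sshpass -p '<password>' ssh-copy-id <host>`); treat the exact invocation as an assumption to verify against your sshpass version.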

Installing KubeOps #

  1. Create a uservalues.yaml file with the respective information, as shown in the example below, in order to use the KubeOps package.

Example uservalues.yaml: #

### General values for registry access ###
kubeOpsUser: "demo" # mandatory
kubeOpsUserPassword: "Password" # mandatory
kubeOpsUserMail: "demo@demo.net" # mandatory
imagePullRegistry: "registry1.kubernative.net/lima" # mandatory, the registry from which the images for the cluster are pulled
localRegistry: false # mandatory, set to true if you use a local registry
### Values for setup configuration ###
clusterName: "example" # mandatory
clusterUser: "root" # mandatory
kubernetesVersion: "1.24.8" # mandatory, check lima documentation
#masterHost: # optional if you have a hostname, defaults to the value of "masterIP"
masterIP: 10.2.10.12 # mandatory
firewall: "nftables" # mandatory, default "nftables"
pluginNetwork: "calico" # mandatory, default "calico"
containerRuntime: "containerd" # mandatory
limaRoot: "/var/lima" # optional, default: "/var/lima"
clusterOS: "Red Hat Enterprise Linux" # optional, either "Red Hat Enterprise Linux" or "openSUSE Leap"; remove this line to use the OS installed on the admin machine (it must be one of these two)
### Additional values for cluster configuration
useInsecureRegistry: false # optional, default is false
ignoreFirewallError: false # optional, default is false
serviceSubnet: 192.168.128.0/17 # optional, default "192.168.128.0/17"
podSubnet: 192.168.0.0/17 # optional, default "192.168.0.0/17"
debug: true # optional, default is true
logLevel: vvvvv # optional, default "vvvvv"
systemCpu: "1" # optional, default "1"
systemMemory: "2G" # optional, default "2G"
sudo: true # optional, default is true

# in case localRegistry is set to true, use the following commands:
# sina pull --hub public kubeops/rook-ceph:1.0.0 -o $LIMAROOT/rookceph.sina -r <local docker registry>
# sina pull --hub public kubeops/harbor:1.0.0 -o $LIMAROOT/harbor.sina -r <local docker registry>
# sina pull --hub public kubeops/sina-kube-prometheus-stack:1.0.0 -o $LIMAROOT/prometheus.sina -r <local docker registry>
# sina pull --hub public kubeops/sina-opensearch-os:1.0.0 -o $LIMAROOT/opensearch.sina -r <local docker registry>
# sina pull --hub public kubeops/sina-opensearch-dashboards:1.0.0 -o $LIMAROOT/opensearch-ds.sina -r <local docker registry>
# sina pull --hub public kubeops/sina-filebeat-os:1.0.0 -o $LIMAROOT/filebeat.sina -r <local docker registry>
# sina pull --hub public kubeops/sina-logstash-os:1.0.0 -o $LIMAROOT/logstash.sina -r <local docker registry>


# additional controlplanes, excludes the masterIP
controlPlaneList: # at least two additional control planes are required; keep in mind that the total number of control planes must be odd
  - 10.2.10.13
  - 10.2.10.14

# additional workers
workerList: # at least three additional workers are required
  - 10.2.10.15
  - 10.2.10.16
  - 10.2.10.17

rook-ceph: true # mandatory, set to true if you want to install it into your cluster
harbor: true # mandatory, set to true if you want to install it into your cluster
opensearch: true # mandatory, set to true if you want to install it into your cluster
opensearch-dashboards: true # mandatory, set to true if you want to install it into your cluster
logstash: true # mandatory, set to true if you want to install it into your cluster
filebeat: true # mandatory, set to true if you want to install it into your cluster
prometheus: true # mandatory, set to true if you want to install it into your cluster
opa: true # mandatory, set to true if you want to install it into your cluster

storageClass: "rook-cephfs" # optional, default value is "rook-cephfs", delete the line if you don't use it
### Values for Rook-Ceph ###
cluster:
  storage:
    useAllNodes: true # default value: true
    useAllDevices: true # default value: true
    # Global filter to select only certain device names. This example matches names starting with sda or sdb.
    # Will only be used if useAllDevices is set to false and will be ignored if individual devices have been specified on a node.
    deviceFilter: "^sd[a-b]"
    config:
      # Name of a device or lvm to use for the metadata of OSDs on each node.
      # Performance can be improved by using a low-latency device (SSD or NVMe) as the metadata device, while other spinning-platter (HDD) devices on the node store the data.
      # This global setting will be overwritten by the corresponding node-level setting. 
      metadataDevice: "sda"
    # Names of individual nodes in the cluster that should have their storage included.
    # Will only be used if useAllNodes is set to false. Specific configurations of individual nodes override the global settings.
    nodes:
      - name: "<ip-address of node_1>"
        devices:
          - name: "sdb"
      - name: "<ip-address of node_2>"
        deviceFilter: "^sd[a-b]"
        config:
          metadataDevice: "sda"
  resources:
    mgr:
      requests:
        cpu: "500m" # optional
        memory: "1Gi" # optional
    mon:
      requests:
        cpu: "2" # optional
        memory: "4Gi" # optional
    osd:
      requests:
        cpu: "2" # optional
        memory: "4Gi" # optional

#-------------------------------------------------------------------------------------------------------------------------------
### Values for Postgres ###
postgrespass: "password" # mandatory, set password for harbor postgres access 
postgres:
  resources:
    requests:
      storage: 2Gi # mandatory, depending on storage capacity
#-------------------------------------------------------------------------------------------------------------------------------
### Values for Redis ###
redispass: "password" # mandatory, set password for harbor redis access
redis:
  resources:
    requests:
      storage: 2Gi # mandatory depending on storage capacity
#-------------------------------------------------------------------------------------------------------------------------------
### Values for Harbor deployment ###
## For a detailed explanation of each key see: https://artifacthub.io/packages/helm/harbor/harbor/1.8.1#configuration ##
harborpass: "password" # mandatory: set password for harbor access 
externalURL: https://10.2.10.13:30003 # mandatory, the IP address and port at which Harbor is accessible from outside the cluster
harborPersistence:
  persistentVolumeClaim:
    registry:
      size: 5Gi # mandatory, depending on storage capacity
    chartmuseum:
      size: 5Gi # mandatory, depending on storage capacity
    jobservice:
      size: 1Gi # mandatory, depending on storage capacity
    database:
      size: 1Gi # mandatory, depending on storage capacity
    redis:
      size: 1Gi # mandatory, depending on storage capacity
    trivy:
      size: 5Gi # mandatory, depending on storage capacity


#--------------------------------------------------------------------------------------------------------------------------------
### Values for Logstash deployment ###
## For a detailed explanation of each key see: https://github.com/elastic/helm-charts/releases/tag/v7.16.3 ##
volumeClaimTemplate:
  resources:
    requests:
      storage: 1Gi # mandatory, depending on storage capacity


#--------------------------------------------------------------------------------------------------------------------------------
### Values for OpenSearch deployment ###
## For a detailed explanation of each key see: https://github.com/opensearch-project/helm-charts/tree/main/charts/opensearch ##
openSearchTemplate:
  opensearchJavaOpts: "-Xmx512M -Xms512M"
  resources:
    requests:
      cpu: "250m"
      memory: "1024Mi"
    limits:
      cpu: "300m"
      memory: "3072Mi"
  persistence:
    size: 4Gi

#--------------------------------------------------------------------------------------------------------------------------------
### Values for Prometheus deployment ###
privateRegistry: false # optional, default is false
grafanaUsername: "user"
grafanaPassword: "password"
retentionSize: "24GB"
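For orientation, a stripped-down uservalues.yaml covering only the mandatory registry and setup keys could look like the sketch below (values are placeholders; the full example above, including the mandatory package toggles and passwords, remains the reference):

```yaml
kubeOpsUser: "demo"
kubeOpsUserPassword: "Password"
kubeOpsUserMail: "demo@demo.net"
imagePullRegistry: "registry1.kubernative.net/lima"
localRegistry: false
clusterName: "example"
clusterUser: "root"
kubernetesVersion: "1.24.8"
masterIP: 10.2.10.12
firewall: "nftables"
pluginNetwork: "calico"
containerRuntime: "containerd"
```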
  2. Install the KubeOps setup package. Run the following command with your uservalues.yaml:

    sina install --hub=public kubeops/setup:1.0.1 -f uservalues.yaml
    
    Note: This must be done as the root user or as a user with sudo privileges.
  3. You can check whether lima is installed correctly on your machine with the following command:

    lima version
    

    If the output shows the lima version and related information, KubeOps LIMA has been installed successfully.

Working with KubeOps #

Before starting with a KubeOps cluster, it is important to check whether Docker is running.

  • To verify that Docker is running, use the following command:

    systemctl status docker
    
  • To enable and start Docker, use the following command:

    systemctl enable --now docker
    
    
Note: This must be done with the root user or with a user with sudo privileges.

Create Cluster #

In order to create a cluster, run the following command with your uservalues.yaml:

  sina install --hub=public kubeops/clustercreate:1.0.2 -f uservalues.yaml
Note: If a firewall is already in use on the machine, that firewall will be used and the firewall setting from uservalues.yaml is ignored.

Your cluster is up and running! Now you are ready to explore and use other KubeOps features. Check out our How-to Guides to learn more.