QuickStart #

This is a comprehensive guide to getting started with a simple cluster.

Requirements #

A total of 7 machines are required:

  • one admin machine
  • three master nodes
  • three worker nodes

You can choose between Red Hat Enterprise Linux 8 and openSUSE 15. All of your machines need the same OS.
Below you can see the minimal requirements for CPU and memory:

| OS | Minimum Requirements |
| --- | --- |
| Red Hat Enterprise Linux 8 | 4 CPU cores, 16 GB memory |
| openSUSE 15 | 4 CPU cores, 16 GB memory |

Two unformatted hard disks of 50 GB each are required for each worker node.

Requirements on Admin #

The following requirements must be fulfilled on the admin machine:

  1. All users involved require sudo privileges. If you run KubeOps as a non-root user, the user needs sudo rights; on both openSUSE and RHEL 8, add the user to the wheel group.

  2. The admin machine must be synchronized with the current time.

  3. An internet connection is required to use the default KubeOps registry registry1.kubernative.net/lima.

  4. Create and log in to your account on the KubeOps official website.

  5. Download KubeOps SINA 2.6.0 from the official KubeOps download page.

  6. Install SINA

Use the following commands to set up and install SINA on RHEL 8:

echo 'export KUBEOPSROOT="/home/myuser/kubeops"' >> $HOME/.bashrc
echo 'export LIMAROOT="/home/myuser/kubeops/lima"' >> $HOME/.bashrc
source $HOME/.bashrc
mkdir -p $KUBEOPSROOT
mkdir -p $LIMAROOT

sudo dnf install -y sina*.rpm
sina version

mkdir -p $KUBEOPSROOT/sina
sudo cp /var/kubeops/sina/config.yaml $KUBEOPSROOT/sina/config.yaml
sudo cp -r /var/kubeops/plugins $KUBEOPSROOT/plugins

sudo rm -rf /var/kubeops

sudo chown -cR myuser $KUBEOPSROOT

vi $KUBEOPSROOT/sina/config.yaml
-> change the plugin path to <home user directory>/kubeops/plugins

sina install --hub public --dname podmanrhel8 kubeops/kubeops-podmanrhel8:2.2.1
podman version

Note: Podman can conflict with containerd, so it is recommended to remove the containerd.io package before installing the Podman package.

Use the following commands to set up and install SINA on openSUSE:

echo 'export KUBEOPSROOT="/home/myuser/kubeops"' >> $HOME/.bashrc
echo 'export LIMAROOT="/home/myuser/kubeops/lima"' >> $HOME/.bashrc
source $HOME/.bashrc
mkdir -p $KUBEOPSROOT
mkdir -p $LIMAROOT
  
sudo zypper install -y --allow-unsigned-rpm sina*.rpm
sina version

mkdir -p $KUBEOPSROOT/sina
sudo cp /var/kubeops/sina/config.yaml $KUBEOPSROOT/sina/config.yaml
sudo cp -r /var/kubeops/plugins $KUBEOPSROOT/plugins

sudo rm -rf /var/kubeops

sudo chown -cR myuser /home/myuser/kubeops

vi $KUBEOPSROOT/sina/config.yaml
-> change the plugin path to <home user directory>/kubeops/plugins

sina install --hub public --dname podmanlp kubeops/kubeops-podmanlp:2.2.1
podman version

Note: Podman can conflict with containerd, so it is recommended to remove the containerd.io package before installing the Podman package.

  7. You must install kubeops-basic-plugins:0.4.0.

    Simply type in the following command to install the Basic-Plugins.

    sina install --hub=public pia/kubeops-basic-plugins:0.4.0
    
  8. You must install kubeops-kubernetes-plugins:0.5.0.

    Simply type in the following command to install the Kubernetes-Plugins.

    sina install --hub public pia/kubeops-kubernetes-plugins:0.5.0
    

Requirements on Master and Worker Nodes #

The following requirements must be fulfilled on master and worker nodes:

  1. All users involved require sudo privileges. If you run KubeOps as a non-root user, the user needs sudo rights; on both openSUSE and RHEL 8, add the user to the wheel group.

  2. Every machine must be synchronized with the current time.

  3. You have to assign lowercase unique hostnames for every machine you are using.

    We recommend using self-explanatory hostnames.

    To set the hostname on your machine use the following command:

    hostnamectl set-hostname <name of node>
    
    • Example
      Use the commands below to set the hostname on the respective machine to admin, master1, master2, master3, node1, node2, or node3.
      hostnamectl set-hostname admin  
      hostnamectl set-hostname master1  
      hostnamectl set-hostname master2  
      hostnamectl set-hostname master3  
      hostnamectl set-hostname node1  
      hostnamectl set-hostname node2  
      hostnamectl set-hostname node3  
      

    Requires sudo privileges

    It is recommended to have a DNS service running. If you don't have a DNS service, you can maintain the /etc/hosts file instead. An example entry in the /etc/hosts file could be:

    10.2.10.12 admin
    10.2.10.13 master1
    10.2.10.14 master2
    10.2.10.15 master3
    10.2.10.16 node1
    10.2.10.17 node2
    10.2.10.18 node3
    
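If you manage several nodes, the entries above can be generated with a small loop instead of being typed by hand. This is a sketch using the example IPs and hostnames from this guide; substitute your own values:

```shell
# Generate /etc/hosts entries from two parallel arrays (example values from
# this guide; replace them with your own IPs and hostnames).
ips=(10.2.10.12 10.2.10.13 10.2.10.14 10.2.10.15 10.2.10.16 10.2.10.17 10.2.10.18)
names=(admin master1 master2 master3 node1 node2 node3)
for i in "${!ips[@]}"; do
  printf '%s %s\n' "${ips[$i]}" "${names[$i]}"
done
```

Pipe the output into `sudo tee -a /etc/hosts` on each machine to append it.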

  4. To establish SSH connections between your machines, the admin machine's SSH key must be distributed to each master and worker node.

    1. Generate an SSH key on the admin machine using the following command:

      ssh-keygen
      

      Two keys will be generated in the ~/.ssh directory: id_rsa (the private key) and id_rsa.pub (the public key).

    2. Copy the SSH public key from your admin machine to all node machines with ssh-copy-id, e.g.:

      ssh-copy-id master1
      
    3. Now try to establish a connection to the node machines from your admin machine, e.g.:

      ssh master1
      
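Steps 2 and 3 have to be repeated for every node, so a loop keeps this short. The hostnames below are the example names from this guide, and the `echo` makes it a dry run; remove the `echo` to actually distribute the key:

```shell
# Dry run: print the ssh-copy-id command for every node (example hostnames
# from this guide). Remove the `echo` to actually copy the key.
for node in master1 master2 master3 node1 node2 node3; do
  echo ssh-copy-id "$node"
done
```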

Platform Setup #

To install your cluster, the following steps are required:

  1. Create the uservalues.yaml file
vi uservalues.yaml

Example uservalues.yaml #

kubeOpsUser: "demo" # change to your username
kubeOpsUserPassword: "Password" # change to your password
kubeOpsUserMail: "demo@demo.net" # change to your email
imagePullRegistry: "registry1.kubernative.net/lima"

clusterName: "example" 
clusterUser: "root" 
clusterOS: "Red Hat Enterprise Linux" # optional, defaults to the OS installed on the admin machine
kubernetesVersion: "1.24.8" 
masterIP: 10.2.10.13 
firewall: "nftables" 
pluginNetwork: "calico" 
containerRuntime: "containerd" 
limaRoot: "<home user directory>/kubeops/lima" # change <home user directory> to your home directory 
kubeOpsRoot: "<home user directory>/kubeops"

localRegistry: false

controlPlaneList: 
  - 10.2.10.14 # use the IP address of master2 here
  - 10.2.10.15 # use the IP address of master3 here

workerList: 
  - 10.2.10.16 # use the IP address of worker1 here
  - 10.2.10.17 # use the IP address of worker2 here
  - 10.2.10.18 # use the IP address of worker3 here

rook-ceph: true
harbor: true
opensearch: true
opensearch-dashboards: true
logstash: true
filebeat: true
prometheus: true
opa: true # mandatory key; set to true if you want to install OPA into your cluster

storageClass: "rook-cephfs" # optional, default value is "rook-cephfs", delete the line if you don't use it
###Values for Rook-Ceph###
cluster:
  storage:
    useAllNodes: true # default value: true
    useAllDevices: true # default value: true
    # Global filter to select only certain device names. This example matches names starting with sda or sdb.
    # Will only be used if useAllDevices is set to false and will be ignored if individual devices have been specified on a node.
    deviceFilter: "^sd[a-b]"
    config:
      # Name of a device or lvm to use for the metadata of OSDs on each node.
      # Performance can be improved by using a low-latency device (SSD or NVMe) as the metadata device, while other spinning platter (HDD) devices on a node are used to store data.
      # This global setting will be overwritten by the corresponding node-level setting. 
      metadataDevice: "sda"
    # Names of individual nodes in the cluster that should have their storage included.
    # Will only be used if useAllNodes is set to false. Specific configurations of individual nodes override the global settings.
    nodes:
      - name: "<ip-address of node_1>"
        devices:
          - name: "sdb"
      - name: "<ip-address of node_2>"
        deviceFilter: "^sd[a-b]"
        config:
          metadataDevice: "sda"
  resources:
    mgr:
      requests:
        cpu: "500m" # optional
        memory: "1Gi" # optional
    mon:
      requests:
        cpu: "2" # optional
        memory: "4Gi" # optional
    osd:
      requests:
        cpu: "2" # optional
        memory: "4Gi" # optional

postgrespass: "password" # change to your password 
postgres:
  resources:
    requests:
      storage: 2Gi 

redispass: "password" # change to your password 
redis:
  resources:
    requests:
      storage: 2Gi 

harborpass: "password" # change to your password 
externalURL: https://10.2.10.13:30003 # change to ip address of master1 and port
harborPersistence:
  persistentVolumeClaim:
    registry:
      size: 5Gi 
    chartmuseum:
      size: 5Gi 
    jobservice:
      size: 1Gi 
    database:
      size: 1Gi 
    redis:
      size: 1Gi 
    trivy:
      size: 5Gi 

volumeClaimTemplate:
  resources:
    requests:
      storage: 1Gi 

openSearchTemplate:
  opensearchJavaOpts: "-Xmx512M -Xms512M"
  resources:
    requests:
      cpu: "250m"
      memory: "1024Mi"
    limits:
      cpu: "300m"
      memory: "3072Mi"
  persistence:
    size: 4Gi

privateRegistry: false 
grafanaUsername: "user"
grafanaPassword: "password"
retentionSize: "24GB"
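A note on the `deviceFilter` value above: it is a regular expression matched against device names, so you can preview which names a filter would select before committing it to uservalues.yaml. The candidate names below are examples; list the real devices on a node with `lsblk`:

```shell
# Preview which device names the Rook-Ceph deviceFilter "^sd[a-b]" matches.
# The candidate list is an example; on a real node use `lsblk -o NAME` instead.
printf 'sda\nsdb\nsdc\nnvme0n1\n' | grep -E '^sd[a-b]'
```

Only `sda` and `sdb` pass the filter; `sdc` and `nvme0n1` are excluded.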
  2. Platform preparation
sina install --hub=public kubeops/setup:1.0.1 -f uservalues.yaml
  3. Platform installation
sina install --hub=public kubeops/clustercreate:1.0.2 -f uservalues.yaml
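Before running the setup and clustercreate packages, it can be worth sanity-checking the node IPs you entered in uservalues.yaml. A minimal sketch, assuming bash and the example addresses from this guide:

```shell
# Rough format check of node IP addresses (example values from this guide;
# replace the list with the IPs from your uservalues.yaml).
for ip in 10.2.10.13 10.2.10.14 10.2.10.15 10.2.10.16 10.2.10.17 10.2.10.18; do
  if [[ ! "$ip" =~ ^([0-9]{1,3}\.){3}[0-9]{1,3}$ ]]; then
    echo "malformed IP: $ip"
  fi
done
echo "IP format check finished"
```

This only catches obvious typos in the dotted-quad format; it does not verify that the addresses are reachable.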