1 - About-Lima

What is LIMA?

LIMA is a Cluster Lifecycle Manager application which helps you create and manage a high-availability Kubernetes cluster.

[Figure: LIMA overview]


Why use LIMA?

Existing Kubernetes environments require management at the individual cluster level. Tasks like installing and updating software have to be repeated on every single cluster from time to time. Manual management of Kubernetes clusters therefore becomes complex, time-consuming and error-prone.

The main goal behind LIMA is to make Kubernetes secure and easy to use for everyone. LIMA enables you to automate the management of thousands of Kubernetes clusters.

Highlights
  • Clusters can be managed from a centralized management node.
  • LIMA can be used in air-gapped environments.
  • LIMA reduces the time and effort of your IT team by automating the management of multiple clusters.
  • LIMA supports various container runtimes.
  • LIMA also supports various plugin networks.

What does LIMA offer?

LIMA enables you to set up and manage your cluster from a centralized management node without needing to access the individual cluster nodes.

Cluster Lifecycle Management of a Kubernetes cluster with LIMA includes:
  • Creating a cluster.
  • Adding nodes.
  • Deleting nodes.
  • Upgrading the Kubernetes version.
  • Updating non-Kubernetes software in the cluster.
  • Renewing certificates.

Click here to download and get started with LIMA now.

2 - Documentation-Lima

KubeOps-LIMA

This guide describes all LIMA features and explains, step by step and with examples, how to use them.

Before you begin, please make sure that

  1. you have installed the required Maintenance Packages
  2. the required software is up and running

LIMA features

Before we take a look at the LIMA features, please note that in order to use them:

  1. You need a healthy cluster created with LIMA.
  2. To get this cluster running, it is necessary to apply a plugin network to your cluster.

Plugin-network support


You can apply a plugin network to your cluster using LIMA in two ways:

  1. You can install a plugin network during the cluster creation process by editing the clusterconfig (see clusterconfig keys for more information),
    OR
  2. You can install it after you have created a cluster with LIMA (see lima get/install overlay).

Container runtime support


LIMA enables you to select the container runtime for your cluster. You can choose between containerd and crio.

By default, crio is the runtime used in the cluster. You can select your desired container runtime in the clusterconfig before creating a cluster. See clusterconfig keys to learn how to select the container runtime.

Additionally, you can even change the container runtime after cluster creation (see clusterconfig keys for more information).

Support of updating non-kubernetes software


Note: If not already applied, updating your cluster with a loadbalancer is required to use LIMA properly.

To get an overview of how and which components you can update, see lima update.

Cluster upgrade support


LIMA makes it possible to upgrade your cluster from an outdated kubernetes version to a more recent one. See lima upgrade for more information.

Certificate renewal support


If you want to renew the certificates in your cluster with a single command, use lima renew cert.

How to configure a cluster or node using a YAML file

In order to create a cluster or add a node to your cluster you need to provide detailed configuration in the form of a YAML file.

  1. Create a YAML file with your desired name. We recommend using self-explanatory names for ease of use.
  2. Use the syntax provided in the examples below and update the specifications according to your needs.
Note: You can reuse the same file multiple times by changing its content, meaning the file name can remain the same but you must change the specifications in the file.
This YAML file must always start with the matching apiVersion line, for example apiVersion: lima/clusterconfig/<version> for a cluster configuration or apiVersion: lima/nodeconfig/<version> for a node configuration.

Configure cluster using YAML


In order to create a cluster you have to create a YAML file which contains all the essential configuration specifications of the cluster. This specific YAML file can only create a single cluster.

Below are two examples showing the structure of the file createCluster.yaml: one with only the mandatory entries and one with every available entry.

Please refer to Cluster Config API Objects under the Attachments section for detailed explanation of syntax.

Mandatory YAML syntax

apiVersion: lima/clusterconfig/v1alpha2
spec:
  clusterName: example
  kubernetesVersion: 1.22.4
  apiEndpoint: 10.2.1.11:6443
  masterHost: mydomain.net OR 10.2.1.11

Complete YAML syntax

apiVersion: lima/clusterconfig/v1alpha2
spec:
  clusterName: example
  masterUser: root
  masterPassword: toor
  masterHost: mydomain.net OR 10.2.1.11
  kubernetesVersion: 1.21.5
  registry: registry1.kubernative.net/lima
  useInsecureRegistry: false
  ignoreFirewallError: false
  firewall: firewalld
  apiEndpoint: 10.2.1.11:6443
  serviceSubnet: 192.168.128.0/20
  podSubnet: 192.168.144.0/20
  debug: true
  logLevel: v
  systemCpu: 100m
  systemMemory: 100Mi
  sudo: false
  containerRuntime: crio
  pluginNetwork:
    type: weave
    parameters:
      weavePassword: re4llyS7ron6P4ssw0rd
  auditLog: false
  serial: 1
  seLinuxSupport: true
Note: The YAML file above is only used for the initial cluster set up. It is important to know that you cannot set up another cluster with the exact same YAML file.
To add master nodes or worker nodes, use the addNode.yaml file shown below.

Note: You can name the YAML files as you want, but it is recommended to use self-explanatory names.

Note: Please use alphanumeric characters only for the weave password.


To learn more about the YAML syntax and the specification of the API Objects please see the dedicated section under Attachments.


Configure node using YAML


In order to add a master node you have to create a YAML file containing all the essential configuration specifications for that node.

Below is an example to show the structure of the file addNode.yaml. The YAML file contains the specification for one master node.

Please refer to the Node Config API Objects under the Attachments section for detailed explanation of syntax.

apiVersion: lima/nodeconfig/v1alpha1
clusterName: example
spec: 
  masters:
  - host: 10.2.1.11
    user: root
    password: password
    systemCpu: 200m
    systemMemory: 200Mi
  workers: {}
Note: It is also possible to add multiple nodes at once to your cluster. Refer to the Add multiple nodes to a single master cluster at once section for more information.

To learn more about the YAML syntax and the specification of the API Objects please see the dedicated section under Attachments.


How to use certificates

Instead of using a password you can use certificates.
https://www.digitalocean.com/community/tutorials/how-to-set-up-ssh-keys-on-centos7

The method is the same for SLES, openSUSE and RHEL.
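A minimal sketch of the usual key-based setup, assuming OpenSSH is installed on the admin node and the target node (the user, host and key comment are placeholders):

ssh-keygen -t rsa -b 4096 -C "admin@limahost"   # generate a key pair on the admin node

ssh-copy-id root@10.2.1.11                      # copy the public key to the target node

ssh root@10.2.1.11                              # verify that login works without a password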


How to use LIMA

The following detailed examples show how you can use LIMA.

Note: The create command is used both for creating a cluster and for adding nodes to the cluster. To create a cluster, use create cluster. Similarly, to add nodes, use create nodes.

Set up a single node cluster for testing


Important: This node is only suitable as an example installation or for testing.

  1. Create a YAML file which contains cluster configuration.

We are now using the createCluster.yaml file from above.


Run the create cluster command on the admin node to create a cluster with one node.

lima create cluster -f createCluster.yaml

Note: Now you have set up a regular single master cluster.
To also use this master node as a worker node for testing production workloads, you have to remove the taint of the master node.

Now remove the taint:

kubectl taint nodes --all node-role.kubernetes.io/master-
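To check that the taint is gone and the node is schedulable, a quick sketch on the admin node (assuming kubectl is configured there; the node name is a placeholder):

kubectl get nodes                                 # node should be in Ready state

kubectl describe node <node name> | grep Taints   # should print: Taints: <none>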

[Figure: Set up a single node cluster for testing]


To learn more about taints please view the ‘Attachments’ section under taints and tolerations.


Set up a single master cluster


Note: This node is not suitable for production workloads.
Please add another worker node as shown below for production workloads.

First create a cluster config YAML file.

We are now using the createCluster.yaml file from above.


Run the create cluster command on the admin node to create a cluster with one cluster master.

lima create cluster -f createCluster.yaml

[Figure: Set up a single cluster master]


Add node to a single master cluster


Note: Only worker nodes that are added with addNode.yaml are suitable for production workloads.

Now create a config YAML file.

We are now using the addNode.yaml file with the specification for a master node.


Run the create nodes command on the admin node to add the new master node to your cluster.

lima create nodes -f addNode.yaml

[Figure: Add node to a single master cluster]


Add multiple nodes to a single master cluster at once


It is possible to add multiple nodes at once to your cluster.
You do that by listing your desired nodes in the spec section of the addNode.yaml file.


We are now creating a new file addNode1.yaml with the desired nodes.
Keep in mind that there are two types of nodes: master nodes and worker nodes.
Put each desired node in its specific spec category.

Note: You can reuse the previous YAML file addNode.yaml by changing its content. For this example we are using a new file.
The file name can stay the same every time, but the content must specify different nodes in order to work.

apiVersion: lima/nodeconfig/v1alpha1
clusterName: example
spec: 
  masters:
  - host: 10.2.1.7
    user: root
    password: password
    systemCpu: 300m
    systemMemory: 300Mi
  - host: 10.2.1.13
    user: root
    password: password
  - host: master1.kubernative.net
  workers:
  - host: 10.2.1.12
    user: root
    password: password
    systemCpu: 200m
    systemMemory: 200Mi
  - host: 10.2.1.9
    user: root
    password: password

The YAML file now contains the configuration for three new master nodes and two new worker nodes.

Note: For more information about the YAML syntax and the specification of the API Objects please see the dedicated section under Attachments.


Now run the create nodes command on the admin node to add the nodes to your cluster.

lima create nodes -f addNode1.yaml

[Figure: Add multiple nodes to a single master cluster at once]


Your cluster now has a total of:

  • 5 master nodes (1 cluster master from the initial set up, 1 individually added master, 3 newly added masters)
  • 2 worker nodes (the 2 newly added ones)

Delete nodes from the kubernetes cluster


If you want to remove a node from your cluster you can run the delete command on the admin node.

lima delete -n <node which should be deleted> <name of your cluster>

So now we delete worker node 2 from our existing kubernetes cluster named example with the following command:

lima delete -n 10.2.1.9 example
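Afterwards you can verify on the admin node that the node has left the cluster (assuming kubectl is configured there):

kubectl get nodes    # 10.2.1.9 should no longer be listed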

This is how our cluster looks now:

[Figure: Delete nodes from the kubernetes cluster]


Upgrade your kubernetes cluster


Serial
LIMA offers two options with the upgrade command. The first option is the upgrade plan command, which shows which kubernetes versions are supported.

lima upgrade plan <name of your cluster>

Note: For the kubernetes versions LIMA currently supports, see the list of supported kubernetes versions.


The second option is to upgrade your cluster. Run the upgrade apply command on the admin node to upgrade your cluster.

lima upgrade apply -v <version> <name of your cluster>

Note: It is not possible to skip a kubernetes version. If you want to upgrade your kubernetes cluster from version 1.20.x to version 1.22.x, you first need to upgrade to version 1.21.x.

Note: Downgrading is not supported.

Back to our already existing cluster: we are upgrading from version 1.21.5 to version 1.22.5 with the following command:

lima upgrade apply -v 1.22.5 example
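Because versions cannot be skipped, a jump across two minor versions takes two invocations. A sketch for a cluster still on 1.20.x (the exact patch versions depend on the maintenance packages you have pulled):

lima upgrade apply -v 1.21.5 example

lima upgrade apply -v 1.22.5 example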

Show the version of LIMA


Run the version command on the admin node to check the current version of LIMA.

lima version

Show available maintenance packages for LIMA


Run the get maintenance command on the admin node to list the currently available maintenance packages.

lima get maintenance

An example output of the command could be as follows:

Software     Version   Status       Softwarepackage
kubernetes   1.22.4    downloaded   lima/kubernetes:1.22.4



Software:
Software which the maintenance package belongs to

Version:
Affected software version

Status:

Status       Description
not found    Package not found
downloaded   Package locally and remotely available
only local   Package locally available
available    Package remotely available
unknown      Unknown package



Softwarepackage:
Name of the package on our Software Hub

Pull available maintenance packages for LIMA


Note: Make sure LIMAROOT is set correctly! LIMAROOT must be set to /var/lima or your own specified path!

Note: Before pulling a package, use lima get maintenance to get an overview of all available maintenance packages.

Run the lima pull maintenance command on the admin node to pull remotely available maintenance packages.

lima pull maintenance <package>

It is possible to pull more than one package with a single pull invocation. For example:

lima pull maintenance lima/kubernetes:1.23.5 lima/dockerEL7:18.09.1

After pulling, the packages are available in $LIMAROOT/packages.
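A typical end-to-end sketch on the admin node (the package name is an example; pick one listed by lima get maintenance):

lima get maintenance                           # list available packages

lima pull maintenance lima/kubernetes:1.22.4   # pull the desired package

ls $LIMAROOT/packages                          # the pulled package shows up here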

Get available plugin networks


Run the lima get overlay command on the admin node to get an overview of all available plugin networks.

lima get overlay 

Additionally you can get all configurable parameters for a certain plugin network with the following command:

lima get overlay <plugin-network package> 

For example:

lima get overlay lima/weavenetinstaller:0.0.4

…shows an output like:

apiVersion:
  description: Show version of validation File
  pattern: ^lima/weave/v1alpha1$
  type: string
parameters:
  properties:
    weavePassword:
      description: "Weave needs the password to encrypt its communication if you dont offer a weavePassword a password will be generated randomly for you. Ensure you use a secure Password if you set a password by yourself".
      pattern: ^([a-zA-Z0-9_]){9,128}$|^$
      type: string

Install available plugin networks


Note: To get an overview of available overlay networks use lima get overlay

Run the lima install overlay command on the admin node to deploy a plugin network in your running cluster. With -c or --cni you can specify which overlay network you want to install.

lima install overlay <clusterName> -c <plugin-network package>

If the chosen plugin network allows configurable parameters you can pass them over with
-f or --file:

lima install overlay <clusterName> -c <plugin-network package> -f <parameters.yaml>

For example:

lima install overlay testcluster -c lima/weavenetinstaller:0.0.2 -f parameters.yaml

…with parameters.yaml content:

apiVersion: lima/weave/v1alpha1
parameters:
  weavePassword: "superSecurePassword"

Setting the verbosity of LIMA


Note: This section does not influence the logging produced by LIMA's Ansible component.

The logging of LIMA is sorted into different groups that can log at different log-levels.

(descending = increased verbosity)

LIMA log level   Scope
ERROR            program failure (can not be muted)
WARNING          potential program failure
INFO             program status
DEBUG1           generalized debug messages
DEBUG2           detailed debug messages
DEBUG3           variable dump

Note: Group names are case sensitive, log-levels are not.

LIMA logging group   Default log-level   Scope
default              INFO                parent of all other groups
cmd                  INFO                command logic
container            INFO                container management
messaging            INFO                logging
storage              INFO                file management
util                 INFO                helper functions
verify               INFO                validation

All LIMA commands accept the flag lima -l <group>:<log-level>,<group>:<log-level>,... <command> for setting the log-level of one or more groups. These settings are not permanent and fall back to INFO if not set explicitly. The default group overwrites all other groups.
Example: setting all groups

lima -l default:Warning version

Example: specific groups

lima -l cmd:Error,container:DEBUG3 version

Renew all certificates from nodes


LIMA can renew all certificates for a specific cluster on all control-plane nodes.

lima renew cert <clusterName>

Note: Renewing certificates can take several minutes because all certificate-dependent services are restarted.

Note: This command renews all certificates on the existing control plane; there is no option to renew individual certificates.

An example of using renew cert on your cluster “example”:

lima renew cert example

Update non-kubernetes software of your kubernetes cluster


LIMA can perform a cluster-wide yum update of all its dependencies with the update command.

lima update <clustername>

Note: Updating may take several minutes, depending on how long ago the last update was performed.

Note: This command updates the entire cluster; there is no option to single out a node.

Back to our already existing cluster: we update it with the following command:

lima update example

Flags

-s, --serial int
As seen in other commands, the serial flag allows you to set the batch size of a given process.

-b, --loadbalancer
The loadbalancer flag alters the update command to additionally update the internal loadbalancer on all your nodes. Using this flag is only necessary if changes to the loadbalancer occur. This happens when the workload in your cluster shifts; the loadbalancer will change, for example, if you add or remove nodes from your cluster.

-r, --registry
With the -r flag you can push your images, which are located in $LIMAROOT/images, into the local registry on the master nodes. The images are pulled from the registry which is stated in the createCluster.yaml.

-t, --template string
The template flag only works in conjunction with the loadbalancer flag and provides a path to a custom loadbalancer configuration.
The file below represents the standard HAProxy configuration LIMA uses. The configuration is templated using the Jinja2 syntax.
masterIPs is a variable that provides a list of all master node addresses in your cluster.

Note: The template flag can also be used at cluster creation:

lima create cluster -f <create.yaml> -t <template.yaml>
global
    daemon
    maxconn 1024
    stats socket /var/run/haproxy.sock mode 755 expose-fd listeners level user

defaults
    timeout connect 10s
    timeout client 30s
    timeout server 30s

frontend localhost
    bind *:7443
    option tcplog
    mode tcp
    default_backend nodes

backend nodes
    mode tcp
    balance roundrobin
    option ssl-hello-chk

    {% for ip in masterIPs %}
    server webserver{{ ip }} {{ ip }}:{{ apiEndpointPort }} check inter 1s downinter 30s fall 1 rise 4
    {% endfor %}
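A usage sketch that combines the flags described above to update the internal loadbalancer of our example cluster with a custom template (template.yaml is a placeholder for your adapted copy of the configuration above):

lima update example -b -t template.yaml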

Change the settings of your cluster storage


LIMA allows you to directly edit one or more values of your clusterStorage.yaml, which can be found in your $LIMAROOT folder. Most of these values are used as defaults when running a LIMA command.

The following flags correspond to config values that can be changed by LIMA:

--ansible_become_pass             example: 'password123'
--apiEndpointIp                   example: '10.2.1.11'
--apiEndpointPort                 example: '6443'
-d, --debug                       example: 'true'
-f, --file                        example: 'clusterStorage.yaml'
--firewall                        example: 'iptables'
-i, --ignoreFirewallError         example: 'false'
-l, --logLevel                    example: 'vvvvv'
--masterHost                      example: '10.2.1.11'
--masterPassword                  example: 'password123'
--masterUser                      example: 'root'
--registry                        example: 'registry1.kubernative.net/lima'
--sudo                            example: 'false'
--systemCpu                       example: '200m'
--systemMemory                    example: '200Mi'
-u, --useInsecureRegistry         example: 'true'
--weavePassword                   example: 'password1_'

WARNING: Be sure that you know what you are doing when you change these settings, as there is always the danger of breaking your cluster when you alter configuration files.

Note: It is possible to add multiple flags to change several values at once.

Note: This command only changes the cluster storage YAML. It does not apply any changes in your cluster. For example, overwriting your kubernetes version in the config will not install a higher kubernetes version. To upgrade kubernetes, see ‘Upgrade your kubernetes cluster’.

For our example we set the system resource parameters. These values will be used when joining new masters and workers if no specific value is set in the used nodeConfig file.

To apply the new config YAML, we run the command:

lima change config --systemCpu '200m' --systemMemory '200Mi' example

Change the container runtime of your cluster


Serial
LIMA allows you to switch between containerd and crio as your container runtime.

lima change runtime -r <runtime> <cluster name>

Note: After changing your runtime, it can take several minutes before all pods are running again. In general, we recommend restarting your cluster after changing runtimes.

An example of how to change from CRI-O to containerd looks like this:

lima change runtime -r containerd example


Change the usage of audit logging in your cluster


Configure audit logs for your cluster with the following command:


lima change auditlog -a <true/false> <cluster name>

Note: This command is experimental and can fail. Please check the results after execution.

An example of how to turn audit logging off:

lima change auditlog -a false example

Serial Flag


With the serial flag, you can run a command simultaneously on a given number of nodes.
The commands supporting the serial flag are marked with the tag serial.
Example:

lima change runtime -r crio example -s 2

Now the runtime change will run in parallel on two nodes at a time.

Attachments

Changes by Lima

Below you can see the Lima process and its changes to the system.
The processes and changes are listed for each node separately.

Installing the Lima RPM


During the installation of the Lima RPM the following changes are made:

  • set setenforce to 0 (temporarily)
  • set the default for the environment variable $LIMAROOT to /var/lima
  • activate ipv4 ip forwarding in the /etc/sysctl.conf file
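You can verify these settings on the node after the installation, for example:

getenforce                   # expected: Permissive

echo $LIMAROOT               # expected: /var/lima unless set otherwise

sysctl net.ipv4.ip_forward   # expected: net.ipv4.ip_forward = 1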

On master node when setting up a cluster


Note: Shown are only the changes made when using the YAML file createCluster.yaml.

1. sshkeyscan
On all hosts listed in the inventory.yaml file.

2. firewallCheck
Collect installed services and packages. No changed files.

3. firewallInstall
Installs iptables rpm if necessary and starts firewalld/iptables or none.

4. openPortsCluster (Skipped if ignoreFirewallError==true and no firewall installed/recognised/mentioned)
4.1 Opens master_ports: 2379-2380/tcp, 6443/tcp, 6784/tcp, 9153/tcp, 10250/tcp, 10251/tcp, 10252/tcp
and depending on the chosen pluginNetwork:
weave_net_ports: 6783/tcp, 6783-6784/udp
or
calico_ports: 179/tcp.

4.2 Reloads the firewall if firewalld is used / persists iptables firewall changes through reboot

5. systemSelinux
5.1 Set setenforce to 0.
Setenforce persists through reboot

6. systemSwap
6.1 Disable swap.
System reboots after disabling swap.

7. podman
7.1 Install podman and dependencies.
7.2 Enable and start podman service.

8. containerdInsecureRegistry
8.1 Check if /etc/containerd/config.toml exists
8.2 If not: Create it.
8.3 Append the insecure registries.
8.4 Restart containerd.

9. systemNetBridge
9.1 modprobe br_netfilter && systemctl daemon-reload.
9.2 Enable ipv4 forwarding.
9.3 Enable netfilter on bridge.

10. kubeadmKubeletKubectl
10.1 Install all required rpms.
10.2 Enable and start service kubelet.

11. kubernetesCluster
11.1 Create kubeadmconfig from template.
11.2 Create k8s user config directory.
11.3 Copy admin config to /root/.kube/config.


On clustermaster node when node is added to the cluster


Note: Shown are only the changes made when the YAML files addNode.yaml and addNode1.yaml are used for adding nodes.

1. kubernetesCreateToken
Clustermaster creates join-token with following command:

kubeadm token create --print-join-command

2. getCert
2.1 Upload certs and get certificate key.
2.2 Writes output from kubeadm token create --print-join-command --control-plane --certificate-key <cert_key> into var.


On master nodes which will be joined to the cluster


1. sshkeyscan
On all hosts listed in the inventory.yaml file.

2. firewallCheck
Collect installed services and packages. No changed files.

3. firewallInstall
Installs iptables rpm if necessary and starts firewalld/iptables or none.

4. openPortsMaster (Skipped if ignoreFirewallError==true and no firewall installed/recognised/mentioned)
4.1 Opens master_ports: 2379-2380/tcp, 6443/tcp, 6784/tcp, 9153/tcp, 10250/tcp, 10251/tcp, 10252/tcp
and depending on the chosen pluginNetwork:
weave_net_ports: 6783/tcp, 6783-6784/udp
or
calico_ports: 179/tcp.
4.2 Reloads the firewall if firewalld is used / persists iptables firewall changes through reboot

5. systemSelinux
5.1 Set setenforce to 0.
Setenforce persists through reboot.

6. systemSwap
6.1 Disable swap.
System reboots after disabling swap.

7. containerd
7.1 Install containerd and dependencies.
7.2 Enable and start containerd service.

8. containerdInsecureRegistry
8.1 Check if /etc/containerd/config.toml exists
8.2 If not: Create it.
8.3 Append the insecure registries.
8.4 Restart containerd.

9. systemNetBridge
9.1 modprobe br_netfilter && systemctl daemon-reload.
9.2 Enable ipv4 forwarding.
9.3 Enable netfilter on bridge.

10. kubeadmKubeletKubectl
10.1 Install all required rpms.
10.2 Enable and start service kubelet.

11. kubernetesJoinNode
Master node joins with token generated by cluster.


On Kubernetes workers


1. sshkeyscan
on all hosts listed in the inventory.yaml file.

2. firewallCheck
Collect installed services and packages. No changed files.

3. firewallInstall
Installs iptables rpm if necessary and starts firewalld/iptables or none.

4. openPortsWorker (Skipped if ignoreFirewallError==true and no firewall installed/recognised/mentioned)
4.1 Opens worker_ports: 10250/tcp, 30000-32767/tcp
and depending on the chosen pluginNetwork:
weave_net_ports: 6783/tcp, 6783-6784/udp
or
calico_ports: 179/tcp.
4.2 Reloads the firewall if firewalld is used / persists iptables firewall changes through reboot

5. systemSelinux
5.1 Set setenforce to 0.
Setenforce persists through reboot.

6. systemSwap
6.1 Disable swap.
System reboots after disabling swap.

7. containerd
7.1 Install containerd and dependencies.
7.2 Enable and start containerd service.

8. containerdInsecureRegistry
8.1 Check if /etc/containerd/config.toml exists
8.2 If not: Create it.
8.3 Append the insecure registries.
8.4 Restart containerd.

9. systemNetBridge
9.1 modprobe br_netfilter && systemctl daemon-reload.
9.2 Enable ipv4 forwarding.
9.3 Enable netfilter on bridge.

10. kubeadmKubeletKubectl
10.1 Install all required rpms.
10.2 Enable and start service kubelet.

11. kubernetesJoinNode
Worker node joins with token generated by cluster.



Product environment considerations


You need:

  • DNS server (optional)
  • Persistent storage (NFS) (optional)
  • Internet access to the Kubernative registry
  • Your own local registry in an air-gapped environment

[Figure: Recommended architecture]


Failure tolerance
Having multiple master nodes ensures that services remain available if master nodes fail. In order to guarantee the availability of master nodes, they should be deployed in odd numbers (e.g. 3, 5, 7, 9, etc.).
An odd-size cluster tolerates the same number of failures as an even-size cluster but with fewer nodes. The difference can be seen by comparing even- and odd-sized clusters:

Cluster Size   Majority   Failure Tolerance
1              1          0
2              2          0
3              2          1
4              3          1
5              3          2
6              4          2
7              4          3
8              5          3
9              5          4

Adding a member to bring the size of cluster up to an even number doesn’t buy additional fault tolerance. Likewise, during a network partition, an odd number of members guarantees that there will always be a majority partition that can continue to operate and be the source of truth when the partition ends.


Installation scenarios


Note: Please consider How to install Lima for the instructions.

1. Install from registry


[Figure: Install from registry]


2. Install on AirGap environment


[Figure: Install in an AirGap environment]


Kubernetes Networking


Pod- and service subnet
In Kubernetes, every pod has its own routable IP address. Kubernetes networking, through the network plugin that you are required to install (e.g. Weave), takes care of routing all requests internally between hosts to the appropriate pod. External access is provided through a service or load balancer which Kubernetes routes to the appropriate pod.


The pod subnet is set by the user in the createCluster.yaml file. Every pod has its own IP address. The pod subnet range has to be big enough to fit all of the pods. Because, by design, pods come and go and nodes reboot in Kubernetes, services were built into Kubernetes to address the problem of changing IP addresses.


In Kubernetes, a Service is an abstraction which defines a logical set of pods and a policy by which to access them (sometimes this pattern is called a micro-service).
A Kubernetes service manages the state of a set of pods: the service is an abstraction over pods which assigns a virtual IP address to a set of pod IP addresses.

Note: The podsubnet and the servicesubnet must have different IP ranges.
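For illustration, a minimal Service manifest (name, selector and ports are placeholders) that puts a stable virtual IP in front of a set of pods:

apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: example      # matches pods labeled app=example
  ports:
  - port: 80          # port exposed on the service's virtual IP
    targetPort: 8080  # port the pods listen on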

[Figure: Service]


Persistent Storage


Please see the link below to learn more about persistent storage.


https://docs.openshift.com/container-platform/4.4/storage/understanding-persistent-storage.html#understanding-persistent-storage

Cluster Storage


The state of each cluster is saved under the path set in the environment variable LIMAROOT and represents which nodes are joined with the cluster master.
In LIMAROOT the clusterName is used as a folder name; this folder includes a clusterStorage.yaml.
The clusterStorage.yaml contains all the information about the cluster.


The structure of the clusterStorage.yaml file looks like this:

apiVersion: lima/storage/v1alpha2
config:
  clusterName: example
  kubernetesVersion: 1.23.5
  registry: registry1.kubernative.net/lima
  useInsecureRegistry: true
  apiEndpoint: 10.2.1.11:6443
  serviceSubnet: 192.168.128.0/20
  podSubnet: 192.168.144.0/20
  clusterMaster: master1.kubernative.net OR 10.2.1.11

nodes:
  masters:
    master1.kubernative.net: {}
    10.2.1.50: {}
    10.2.1.51: {}
  workers:
    worker1.kubernative.net: {}
    worker2.kubernative.net: {}
    10.2.1.54: {} 

Taints and tolerations


Taints allow a node to reject a set of pods.


Tolerations are applied to pods, and allow (but do not require) the pods to schedule onto nodes with matching taints.


Taints and tolerations work together to ensure that pods are not scheduled onto inappropriate nodes. One or more taints are applied to a node; this marks that the node should not accept any pods that do not tolerate the taints.

Example:
The master node has a taint that prevents it from being used for production workloads.
So first remove the taint in order to use the master for production workloads.

Important: Removing the taint from a master node is only recommended for testing!
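Alternatively, instead of removing the taint, a single pod can declare a toleration for it. A minimal sketch (pod name and image are placeholders) tolerating the master taint shown earlier in this guide:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: app
    image: busybox:1.33.0
  tolerations:
  - key: node-role.kubernetes.io/master  # the taint removed in the single node example
    operator: Exists
    effect: NoSchedule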


Explanation of YAML file syntax


clusterconfig API Objects


Structure of the clusterconfig API Object with version lima/clusterconfig/v1alpha2.

An example to show the structure of the file createCluster.yaml:

apiVersion: lima/clusterconfig/v1alpha2
spec:
  clusterName: example
  kubernetesVersion: 1.23.5
  registry: registry1.kubernative.net/lima
  useInsecureRegistry: true
  apiEndpoint: 10.2.1.11:6443
  serviceSubnet: 192.168.128.0/20
  podSubnet: 192.168.144.0/20
  masterHost: worker1.kubernative.net OR 10.2.1.11

apiVersion (Mandatory):
Version string which defines the format of the API Object
The only currently supported version is:

apiVersion: lima/clusterconfig/v1alpha2

clusterName (Mandatory):
Name for the cluster used to address the cluster.
Should consist of only uppercase/lowercase letters, numbers and underscores.
Example:

clusterName: example

kubernetesVersion (Mandatory):
Defines the Kubernetes version to be installed. This value must follow the Kubernetes version convention: the valid format is ‘#.#.#’, where each ‘#’ stands for a number of an available version.
Example:

kubernetesVersion: 1.23.5

registry (Optional):
Address of the registry where the kubernetes images are stored.
This value has to be a valid IP address or valid DNS name.
Example:

registry: 10.2.1.12

Default:

registry: registry1.kubernative.net/lima

useInsecureRegistry (Mandatory):
The value defines whether the used registry is a secure or an insecure registry. This value can be either true or false; only lowercase letters are allowed.
Example:

useInsecureRegistry: true

debug (Optional):
The value defines whether the user wants to see output from the pipe or not. This value can be either true or false; only lowercase letters are allowed.
Example:

debug: true  

Default:

debug: false  

ignoreFirewallError (Optional):
The value defines whether firewall errors are ignored or not. This value can be either true or false; only lowercase letters are allowed.
Example:

ignoreFirewallError: true  

Default:

ignoreFirewallError: false  

firewall (Optional):
The value defines which firewall is used. This value can be either iptables or firewalld; only lowercase letters are allowed.
Example:

firewall: iptables

apiEndpoint (Mandatory):
The IP address and port under which the apiserver can be reached. This value consists of an IP address followed by a colon and a port. Usually the IP address of a clustermaster or a load balancer is used.
Example:

apiEndpoint: 10.2.1.11:6443

serviceSubnet (Optional):
Defines the subnet to be used for the services within kubernetes. This subnet has to be given in CIDR format, but it is not checked for validity.
Example:

serviceSubnet: 1.1.1.0/24

Default:

serviceSubnet: 192.168.128.0/20

containerRuntime (Optional):
Sets the container runtime environment of the cluster. The valid options are crio and containerd.
Example:

containerRuntime: containerd

Default:

containerRuntime: crio

podSubnet (Optional):
Defines the subnet used by the pods within kubernetes. This subnet has to be given in CIDR format.
Example:

podSubnet: 1.1.2.0/24

Default:

podSubnet: 192.168.144.0/20

Note: The podsubnet and the servicesubnet must have different IP ranges.

systemCpu (Optional):
The CPU value is used to keep resources free for the operating system. If this value is left empty, 100m will be used. The range of values is from 0.001 to 0.9, or 1m to 500,000m.
Example:

systemCpu: 500m

Default:

systemCpu: 100m

systemMemory (Optional):

The memory value is used to keep resources free for the operating system. If this value is left empty, 100Mi will be used. You can express memory as a plain integer or as a fixed-point number using one of these suffixes: E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki.
Example:

systemMemory: 1Gi

Default:

systemMemory: 100Mi

logLevel (Optional):
Describes how detailed the output is logged for the user. The following values can be used: v, vv, vvv, vvvv, vvvvv.
Example:

logLevel: vv

Default:

logLevel: vvvvv

masterHost (Mandatory):
Name of the node to be installed as the first master. This value can be either a correct FQDN or a specific IP Address.
Example:

masterHost: 10.2.1.11

masterUser (Optional):
Specification of the user to be used to connect to the node. If this field is left empty, “root” will be used to run the Ansible scripts on the cluster master. Should consist of only uppercase/lowercase letters, numbers and underscores.
Example:

masterUser: root

Default:

masterUser: root

masterPassword (Optional):
Specification of the user password to be used to connect to the node. Requirements for the password:

  • The password length must be between 1 and 128 characters.
  • Allowed: alphanumeric characters and the following symbols: ‘_!?-^@#$%*&():.,;<>’

Example:

masterPassword: password

Note: If you are not using ‘masterPassword’ you need to use certificates.

sudo (Optional):

A boolean flag that decides whether podman commands run with sudo or not.
Example:

sudo: false

Default:

sudo: true

pluginNetwork (Optional): Currently supported as plugin networks are calico, calico with multus CNI, and weave.

To get an overview of which pluginNetwork supports which parameters, use lima get overlay.

Example:

pluginNetwork:
  type: weave
  parameters:
    weavePassword: "superSecurePassword"

Note: If you don't select a pluginNetwork, your cluster will not install any plugin network during cluster creation! You should either select a pluginNetwork before cluster creation or use lima install overlay.
Please use alphanumeric characters only for the weave password.

auditLog (Optional):
Enables the Kubernetes AuditLog functionality. In order to do so, be sure to create a policy.yaml and save the file under .../limaRoot/auditLog/policy.yaml. Example:

auditLog: true

Note: To learn more about creating a policy.yaml for the audit log functionality, visit the official Kubernetes documentation on audit logging.
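A minimal policy.yaml sketch in the upstream Kubernetes audit policy format (logs every request at metadata level; tighten the rules for production use):

apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata  # log request metadata only, not request or response bodies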

seLinuxSupport (Optional):
Turns SELinux on or off:

  true: Enforcing
  false: Permissive

seLinuxSupport: true

Note: If SELinux is enforcing and you want to install a firewall on your target machine, you need to pre-install libselinux-python!

serial (Optional):
Specifies whether a LIMA command should be run simultaneously on a given number of nodes.

Example:

serial: 2

nodeconfig API Objects


An example to show the structure of the file addNode.yaml:

apiVersion: lima/nodeconfig/v1alpha1
clusterName: example
spec: 
  masters:
  - host: 10.2.1.11
    user: root
    password: password
    systemCpu: 200m
    systemMemory: 200Mi
  - host: master1.kubernative.net
  workers:
  - host: 10.2.1.12
    user: root
    password: password
  - host: worker1.kubernative.net

masters
Optional
A list of all master nodes in the nodelist.
Each node must have a hostname.
The user and password are optional.


An example to show the structure of the spec file addNode.yaml.
The excerpt shows the structure of the masters spec object:

apiVersion: lima/nodeconfig/v1alpha1
clusterName: example
spec: 
  masters:
  - host: 10.2.1.11
    user: root
    password: admin123456
  - host: master.kubernative.net

host:
Mandatory
Each host has a unique identifier.
The hostname can be either a specific IP Address OR a correct FQDN.
Example:

host: 10.2.1.11

or

host: master.kubernative.net

user:
Optional
Specification of the user to be used to connect to the node.
Example:

user: root

Default:

user: root

password:
Optional
Specification of the password to be used to connect to the node.
Example:

password: admin123456

systemCpu:
Optional
Specification of which system resources (CPU) are to be reserved for the connection to the node. Example:

systemCpu: 200m

Default:

systemCpu: 100m

systemMemory:
Optional
Specification of which system resources (memory) are to be reserved for the connection to the node. Example:

systemMemory: 200Mi

Default:

systemMemory: 100Mi


workers
Optional
A list of all worker nodes in the nodelist.
Each node must have a hostname.
The user and password are optional.


An example to show the structure of the spec file addNode.yaml.
The excerpt shows the structure of the workers spec object:

apiVersion: lima/nodeconfig/v1alpha1
clusterName: example
spec: 
  workers:
  - host: 10.2.1.12
    user: root
    password: password
  - host: worker.kubernative.net

host:
Mandatory
Each host has a unique identifier.
The hostname can be either a specific IP Address OR a correct FQDN.
Example:

host: 10.2.1.12

or

host: worker.kubernative.net

user:
Optional
Specification of the user to be used to connect to the node.
Example:

user: root

Default:

user: root

password:
Optional
Specification of the password to be used to connect to the node.
Example:

password: admin123456

systemCpu:
Optional
Specification of which system resources (CPU) are to be reserved for the connection to the node. Example:

systemCpu: 200m

Default:

systemCpu: 100m

systemMemory:
Optional
Specification of which system resources (memory) are to be reserved for the connection to the node. Example:

systemMemory: 200Mi

Default:

systemMemory: 100Mi

Use LIMA as a User

When you use LIMA as a user, you have to pay attention to some details. First of all, the user has to be present on all nodes. This user needs a home directory and needs sudo privileges.

  • With the following command you can create a new user with a home directory and membership in the sudo group.

    useradd -m -G <sudo group> testuser
    

    For example on RHEL or opensuse/SLES environments, the wheel group is the sudo group.

  • You also need to set a password for the user.
    The following command allows you to set the password:

    passwd testuser
    
  • For all new nodes you need a new public ssh key.
    It is recommended to use a comment containing the username and host.

    ssh-keygen -C "testuser@master1"
    

This command creates a new ssh key pair. The public ssh key is located in the .ssh folder in the home directory and has the name id_rsa.pub. The content of this file needs to be in the authorized_keys file on the admin node; that authorized_keys file is also located in the .ssh folder in the home directory, as shown in the sketch below.
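A minimal sketch for appending the generated public key to the authorized_keys file (paths assume the ssh-keygen defaults for the testuser home directory):

cat /home/testuser/.ssh/id_rsa.pub >> /home/testuser/.ssh/authorized_keys

chmod 600 /home/testuser/.ssh/authorized_keys  # the file must not be group or world writable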

You also need to pay attention to the createCluster.yaml file, because it contains the masterUser and masterPassword parameters; you have to update these to your new user.

...
    masterUser: root
    masterPassword: toor
...

Then you can create a cluster as a user:

lima create cluster -f createCluster.yaml

Linkpage


Kubernative Homepage https://kubeops.net/

Kubernetes API reference https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/

CNCF Landscape https://landscape.cncf.io


3 - Installation-Guide-lima

Install Lima on AirGap environment

  1. Set up a local docker registry in your AirGap environment.
    Manual: https://docs.docker.com/registry/deploying/
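A minimal throwaway registry as sketched in the Docker deployment guide linked above (add TLS and authentication for production use):

docker run -d -p 5000:5000 --restart=always --name registry registry:2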

  2. Pull the following images with a machine that has docker installed and internet access:

The lima images:

docker pull registry.kubernative.net/lima:<lima-version> //example v0.8.0-beta-4 

docker pull registry.kubernative.net/autoreloadhaproxy:2.4.0-alpine

docker pull registry.kubernative.net/registry:2.7.1

docker pull registry.kubernative.net/docker.io/busybox:1.33.0

All images found in the imagelist.yaml in your kubernetes-<version> package:


docker pull gcr.io/google-containers/kube-apiserver:<kubernetes-version> //example v1.20.5

docker pull gcr.io/google-containers/kube-controller-manager:<kubernetes-version>

docker pull gcr.io/google-containers/kube-proxy:<kubernetes-version>

docker pull gcr.io/google-containers/kube-scheduler:<kubernetes-version>

docker pull gcr.io/google-containers/pause:<version> //example 3.1

docker pull gcr.io/google-containers/coredns:<version> //example 1.6.5

docker pull gcr.io/google-containers/etcd:<version> //example 3.4.3-0

Using weavenet:

docker pull docker.io/weaveworks/weave-kube:2.6.2

docker pull docker.io/weaveworks/weave-npc:2.6.2

Using calico:

docker pull docker.io/calico/cni:v3.18.1

docker pull docker.io/calico/pod2daemon-flexvol:v3.18.1

docker pull docker.io/calico/node:v3.18.1

docker pull docker.io/calico/kube-controllers:v3.18.1

  3. Tag them:

The lima images:

docker tag registry.kubernative.net/lima:<lima-version> <your registry>/lima:<lima-version>

docker tag registry.kubernative.net/autoreloadhaproxy:2.4.0-alpine <your registry>/autoreloadhaproxy:2.4.0-alpine

docker tag registry.kubernative.net/registry:2.7.1 <your registry>/registry:2.7.1

docker tag registry.kubernative.net/docker.io/busybox:1.33.0  <your registry>/docker.io/busybox:1.33.0

All images found in the imagelist.yaml in your kubernetes-<version> package:


docker tag gcr.io/google-containers/kube-apiserver:<kubernetes-version> <your registry>/gcr.io/google-containers/kube-apiserver:<kubernetes-version>

docker tag gcr.io/google-containers/kube-controller-manager:<kubernetes-version> <your registry>/gcr.io/google-containers/kube-controller-manager:<kubernetes-version>

docker tag gcr.io/google-containers/kube-proxy:<kubernetes-version> <your registry>/gcr.io/google-containers/kube-proxy:<kubernetes-version>

docker tag gcr.io/google-containers/kube-scheduler:<kubernetes-version> <your registry>/gcr.io/google-containers/kube-scheduler:<kubernetes-version>

docker tag gcr.io/google-containers/pause:<version> <your registry>/gcr.io/google-containers/pause:<version>

docker tag gcr.io/google-containers/coredns:<version> <your registry>/gcr.io/google-containers/coredns:<version>

docker tag gcr.io/google-containers/etcd:<version> <your registry>/gcr.io/google-containers/etcd:<version>

Using weavenet:

docker tag docker.io/weaveworks/weave-kube:2.6.2 <your registry>/docker.io/weaveworks/weave-kube:2.6.2

docker tag docker.io/weaveworks/weave-npc:2.6.2 <your registry>/docker.io/weaveworks/weave-npc:2.6.2

Using calico:

docker tag docker.io/calico/cni:v3.18.1 <your registry>/docker.io/calico/cni:v3.18.1

docker tag docker.io/calico/pod2daemon-flexvol:v3.18.1 <your registry>/docker.io/calico/pod2daemon-flexvol:v3.18.1

docker tag docker.io/calico/node:v3.18.1 <your registry>/docker.io/calico/node:v3.18.1

docker tag docker.io/calico/kube-controllers:v3.18.1 <your registry>/docker.io/calico/kube-controllers:v3.18.1

  4. Export your images as tar files:

The lima images:

docker save -o ./ansible.tar <your registry>/lima:<lima-version>

docker save -o ./autoreloadhaproxy.tar <your registry>/autoreloadhaproxy:2.4.0-alpine

docker save -o ./registry.tar <your registry>/registry:2.7.1

docker save -o ./busybox.tar  <your registry>/docker.io/busybox:1.33.0

All images found in the imagelist.yaml in your kubernetes-<version> package:


docker save -o ./kube-apiserver.tar <your registry>/gcr.io/google-containers/kube-apiserver:<kubernetes-version>

docker save -o ./kube-controller-manager.tar <your registry>/gcr.io/google-containers/kube-controller-manager:<kubernetes-version>

docker save -o ./kube-proxy.tar <your registry>/gcr.io/google-containers/kube-proxy:<kubernetes-version>

docker save -o ./kube-scheduler.tar <your registry>/gcr.io/google-containers/kube-scheduler:<kubernetes-version>

docker save -o ./pause.tar <your registry>/gcr.io/google-containers/pause:<version>

docker save -o ./coredns.tar <your registry>/gcr.io/google-containers/coredns:<version>

docker save -o ./etcd.tar <your registry>/gcr.io/google-containers/etcd:<version>

Using weavenet:

docker save -o ./weave-kube.tar <your registry>/docker.io/weaveworks/weave-kube:2.6.2

docker save -o ./weave-npc.tar <your registry>/docker.io/weaveworks/weave-npc:2.6.2

Using calico:

docker save -o ./cni.tar <your registry>/docker.io/calico/cni:v3.18.1

docker save -o ./pod2daemon-flexvol.tar <your registry>/docker.io/calico/pod2daemon-flexvol:v3.18.1

docker save -o ./node.tar <your registry>/docker.io/calico/node:v3.18.1

docker save -o ./kube-controllers.tar <your registry>/docker.io/calico/kube-controllers:v3.18.1

  5. Move all tar files to a place that has access to your registry and docker installed.

  6. Extract all image tar files:

The lima images:

docker load -i ./ansible.tar

docker load -i ./autoreloadhaproxy.tar

docker load -i ./registry.tar

docker load -i ./busybox.tar

All images found in the imagelist.yaml in your kubernetes-<version> package:


docker load -i ./kube-apiserver.tar 

docker load -i ./kube-controller-manager.tar 

docker load -i ./kube-proxy.tar 

docker load -i ./kube-scheduler.tar 

docker load -i ./pause.tar 

docker load -i ./coredns.tar 

docker load -i ./etcd.tar 

Using weavenet:

docker load -i ./weave-kube.tar 

docker load -i ./weave-npc.tar 

Using calico:

docker load -i ./cni.tar 

docker load -i ./pod2daemon-flexvol.tar 

docker load -i ./node.tar 

docker load -i ./kube-controllers.tar 

  7. Push all images into your local registry:

The lima images:

docker push <your registry>/lima:<lima-version>

docker push <your registry>/autoreloadhaproxy:2.4.0-alpine

docker push <your registry>/registry:2.7.1

docker push  <your registry>/docker.io/busybox:1.33.0

All images found in the imagelist.yaml in your kubernetes-<version> package:


docker push <your registry>/gcr.io/google-containers/kube-apiserver:<kubernetes-version>

docker push <your registry>/gcr.io/google-containers/kube-controller-manager:<kubernetes-version>

docker push <your registry>/gcr.io/google-containers/kube-proxy:<kubernetes-version>

docker push <your registry>/gcr.io/google-containers/kube-scheduler:<kubernetes-version>

docker push <your registry>/gcr.io/google-containers/pause:<version>

docker push <your registry>/gcr.io/google-containers/coredns:<version>

docker push <your registry>/gcr.io/google-containers/etcd:<version>

Using weavenet:

docker push <your registry>/docker.io/weaveworks/weave-kube:2.6.2

docker push <your registry>/docker.io/weaveworks/weave-npc:2.6.2

Using calico:

docker push <your registry>/docker.io/calico/cni:v3.18.1

docker push <your registry>/docker.io/calico/pod2daemon-flexvol:v3.18.1

docker push <your registry>/docker.io/calico/node:v3.18.1

docker push <your registry>/docker.io/calico/kube-controllers:v3.18.1

  8. Install LIMA with SINA:

sina install <lima package>

OR

On CentOS 7

yum install <lima.rpm>

On OpenSUSE 15.1

zypper install <lima.rpm>

  9. Install kubectl on your admin node as shown in Chapter Install Kubectl

  10. Install Docker on your admin node as shown in Chapter Install Docker

  11. Check if LIMA is running:

lima version

  12. Set ‘registry’ to your local registry name when setting up a new cluster:

apiVersion: lima/clusterconfig/v1alpha2
spec:
  ...
  registry: <your registry address>
  ...

Plugin-Networks on AirGap environment


Note: Your cluster won't be ready until you follow the instructions mentioned below.

If you want to run a cluster in an air-gapped environment you have to use sina to deploy a plugin network. First of all you need a .sina package available locally which contains the plugin network.

For example pulling calico as a Plugin-Network:

sina pull -o calico lima/calicoinstaller:0.0.1

If your cluster is up and running you can easily deploy the plugin network. Run the following command on your admin node:

sina install -p calico.sina -f $LIMAROOT/values.yaml

4 - Lima-Ports

Ports for lima clusters

kubernetes ports (RHEL8):

- 2379-2380/tcp
- 6784/tcp
- 7443/tcp
- 9153/tcp
- 10250/tcp
- 10251/tcp
- 10252/tcp
- 5000-5001/tcp
- 3300/tcp
- 6789/tcp
- 6800-7300/tcp
- apiEndpointPort (from createCluster.yaml)

kubernetes ports (OpenSuse):

- 2379-2380/tcp
- 6784/tcp
- 7443/tcp
- 9153/tcp
- 10250/tcp
- 10251/tcp
- 10252/tcp
- 10255/tcp
- 5000-5001/tcp
- 3300/tcp
- 6789/tcp
- 6800-7300/tcp
- apiEndpointPort (from createCluster.yaml)

weave net ports:

- 6783/tcp
- 6783-6784/udp

calico ports:

- 179/tcp