Single Sign-On with Keycloak

Learn how to configure Keycloak for Single Sign-On, securely expose it using Kubernetes Ingress and TLS, and integrate it with kubeops and other Kubernetes applications.

In this guide, you will learn how to implement Single Sign-On (SSO) using Keycloak. We will walk through the complete flow—from understanding SSO for platforms and services such as Rook Ceph, Harbor, and other Kubernetes applications, to configuring Keycloak, exposing it securely, and integrating it with kubeops.

By the end of this guide, you will be able to:

  • Understand how Keycloak enables centralized authentication
  • Configure Keycloak for SSO
  • Securely expose Keycloak using Kubernetes Ingress and TLS
  • Integrate Keycloak with kubeops for authentication and authorization
  • Validate and troubleshoot the SSO login flow

Let’s get started on enabling secure and seamless authentication with Keycloak.

1 - SSO for dashboard

Learn how to configure Single Sign-On (SSO) for KubeOps Dashboard using Keycloak with OIDC.

Single Sign-On (SSO) with Keycloak for KubeOps Dashboard

This guide describes how to configure authentication for the KubeOps Dashboard (Headlamp) using Keycloak (OIDC) in a kubeops-managed Kubernetes environment.


Prerequisites

Before proceeding, ensure the following requirements are met:

  • Keycloak is already installed and running
  • kubeops is installed and operational

Step 1: Extract Keycloak CA certificate

  • On your admin host, extract the Keycloak server certificate with OpenSSL:
  openssl s_client -showcerts -connect dev04.kubeops.net:443 </dev/null | openssl x509 -outform PEM > keycloak-ca.crt
  • Copy the CA certificate to each master:

    scp keycloak-ca.crt <master-node>:/etc/kubernetes/pki/

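Before distributing the file, it is worth sanity-checking that the extraction produced a usable PEM certificate. The snippet below is an illustration: it generates a throwaway self-signed certificate standing in for keycloak-ca.crt, then runs the same x509 check you would run against the real file.

```shell
# Generate a throwaway self-signed cert as a stand-in for keycloak-ca.crt
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -out /tmp/demo-keycloak-ca.crt -days 1 -subj "/CN=dev04.kubeops.net" 2>/dev/null
# The same check applies to the real keycloak-ca.crt: confirm subject and validity dates
openssl x509 -in /tmp/demo-keycloak-ca.crt -noout -subject -dates
```

If the subject or the notBefore/notAfter dates look wrong, re-run the extraction before copying the file to the masters.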
Step 2: Update kube-apiserver yaml

  1. On every master, edit /etc/kubernetes/manifests/kube-apiserver.yaml and add the OIDC flags to the kube-apiserver command:

    spec:
      containers:
      - command:
        - --oidc-issuer-url=https://dev04.kubeops.net/keycloak/realms/master
        - --oidc-client-id=headlamp
        - --oidc-username-claim=preferred_username
        - --oidc-groups-claim=groups
        - "--oidc-username-prefix=oidc:"
        - "--oidc-groups-prefix=oidc:"
        - --oidc-ca-file=/etc/kubernetes/pki/keycloak-ca.crt
    
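The username and group prefixes determine the identities Kubernetes RBAC will see. The snippet below is an illustration with assumed claim values: with --oidc-username-claim=preferred_username and --oidc-username-prefix=oidc:, a token issued for "alice" in group "headlamp" authenticates as the Kubernetes user "oidc:alice" in group "oidc:headlamp".

```shell
# Sample token payload (illustrative values, not from a real token)
payload='{"preferred_username":"alice","groups":["headlamp"]}'
# Apply the username claim plus the configured prefix, as the apiserver does
user="oidc:$(echo "$payload" | python3 -c 'import sys,json; print(json.load(sys.stdin)["preferred_username"])')"
echo "$user"  # prints oidc:alice
```

This is why the ClusterRoleBinding created later must reference the prefixed group name "oidc:headlamp", not plain "headlamp".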

Step 3: Create a Keycloak client for Headlamp

  • Create a client for headlamp

    • Client ID: headlamp
    • Client type: OpenID Connect
    • Access type: Confidential
    • Client authentication: Enabled
    • Standard flow: Enabled
    • Direct access grants: Disabled
  • Valid Redirect URIs

    Add the following redirect URI:

    https://headlamp.<your_DNS_name>/*
    
  • Web Origins

    <your_DNS_name>
    

Step 4: Create a client scope for Headlamp

  • Create a client scope

    • Assigned Client Scope: headlamp-dedicated
  • For the groups claim, use the Group Membership mapper in Keycloak:

    • Mapper type: Group Membership
    • Name: groups
    • Token Claim Name: groups
    • Add to ID token: ON
    • Add to access token: ON
    • Add to user info: ON
    • Add to token introspection: ON

Step 5: Create a user Group and user in Keycloak

Create a group named headlamp (if it does not already exist) and a user under that group.

Step 6: Create ClusterRoleBinding for Headlamp group

  1. Use the following YAML to create the ClusterRoleBinding:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: headlamp-admin-user
    subjects:
    - kind: Group
      name: "oidc:headlamp" # The group name from the Keycloak token, including the configured "oidc:" prefix
      apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: ClusterRole
      name: cluster-admin
      apiGroup: rbac.authorization.k8s.io

The subject name “oidc:headlamp” must match the Keycloak group name (headlamp) combined with the oidc: group prefix configured on the kube-apiserver.

  2. Apply the ClusterRoleBinding file:
    kubectl apply -f headlamp-clusterrolebinding.yaml

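Binding to cluster-admin grants every member of oidc:headlamp full control over the cluster. If a read-only dashboard is preferred, the same pattern works with the built-in view ClusterRole; a sketch (the binding name headlamp-view-user is an assumed example, the group name is the one used above):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: headlamp-view-user
subjects:
- kind: Group
  name: "oidc:headlamp"
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view          # built-in read-only role instead of cluster-admin
  apiGroup: rbac.authorization.k8s.io
```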
Step 7: Get client secret

After creating the client, copy the client secret.
This value will be used in the next step.

Step 8: Prepare Headlamp values (enterprise.yaml)

Configure enterprise.yaml:

packages:
- name: kubeops-dashboard
  enabled: true
  values:
    standard:
      namespace: monitoring
      service:
        nodePort: 30007
      hostname: "headlamp.dev04.kubeops.net"
      path: "/"
    advanced:
      config:
        extraArgs:
          - "--in-cluster"
          - "--plugins-dir=/headlamp/plugins"
          - "--oidc-client-id=headlamp"
          - "--oidc-idp-issuer-url=https://dev04.kubeops.net/keycloak/realms/master"
          - "--oidc-scopes=openid,profile,email"
          - "--insecure-ssl"
          - "--oidc-client-secret=<client-secret>"

Replace <client-secret> with the secret retrieved in Step 7.
The value of --oidc-client-id must match the Keycloak client ID (headlamp).

Step 9: Install Headlamp

Deploy Headlamp with the updated enterprise.yaml.

2 - SSO for Harbor

Learn how to configure Single Sign-On (SSO) for Harbor using Keycloak with OIDC in a Kubernetes environment.

Single Sign-On (SSO) with Keycloak for Harbor

This guide describes how to configure Harbor authentication using Keycloak (OIDC) in a kubeops-managed Kubernetes environment.


Prerequisites

Before proceeding, ensure the following requirements are met:

  • Keycloak is already installed and running
  • Keycloak is exposed using Kubernetes Ingress
  • A valid DNS record is configured for Keycloak and Harbor
  • TLS is enabled with a trusted Certificate Authority (CA)
  • kubeops is installed and operational

Step 1: Prepare Keycloak (Realm, User, and Client)

In this step, we configure Keycloak for Harbor SSO. Keycloak is assumed to be already installed, exposed via Ingress, and reachable over HTTPS.

Create Realm

Ensure a realm named kubeops-dashboards exists.
If it does not exist, create it in the Keycloak admin console.

  • Realm name: kubeops-dashboards
  • Enabled: true

Create User

Ensure a user named kubeops exists in the kubeops-dashboards realm.
If the user does not exist, create it and set credentials.

  • Username: kubeops
  • Enabled: true
  • Set a permanent password

Create Client (Harbor)

Create a client for Harbor in the kubeops-dashboards realm.

  • Client ID: harbor
  • Client type: OpenID Connect
  • Access type: Confidential
  • Client authentication: Enabled
  • Standard flow: Enabled
  • Direct access grants: Disabled

Valid Redirect URIs

Add the following redirect URI:

https://<your_DNS_name>/c/oidc/callback

Web Origins

<your_DNS_name>

Client Secret

After creating the client, copy the client secret.
This value will be used in the Harbor configuration:

oidc_client_id: harbor
oidc_client_secret: <CLIENT_SECRET>

Create Secret

kubectl create secret generic <your_secret_name> -n <your_harbor_namespace> \
    --from-literal client_id=<your_oidc_client_id> \
    --from-literal client_secret=<your_oidc_client_secret>

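As a declarative alternative, the same Secret can be created from a manifest. A sketch, assuming the secret is named oidc-harbor so that it matches the secretKeyRef used by the Harbor values in the next step:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: oidc-harbor          # must match the secretKeyRef name in the Harbor values
  namespace: <your_harbor_namespace>
type: Opaque
stringData:                  # stringData lets you supply plain-text values
  client_id: harbor
  client_secret: <CLIENT_SECRET>
```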
Step 2: Prepare Harbor Values

The following kubeops package configuration enables Harbor and integrates it with Keycloak using OIDC authentication.

apiVersion: kubeops/kubeopsctl/enterprise/beta/v1

deleteNs: false
localRegistry: false

packages:
  - name: harbor
    enabled: true
    values:
      standard:
        namespace: <your_harbor_namespace>
        harborpass: "password"
        databasePassword: "password"
        redisPassword: "password"
        externalURL: <your_DNS_name>
        nodePort: 30002
        hostname: harbor.dev04.kubeops.net
        harborPersistence:
          persistentVolumeClaim:
            registry:
              size: 40Gi
              storageClass: "rook-cephfs"
            jobservice:
              jobLog:
                size: 1Gi
                storageClass: "rook-cephfs"
            database:
              size: 1Gi
              storageClass: "rook-cephfs"
            redis:
              size: 1Gi
              storageClass: "rook-cephfs"
            trivy:
              size: 5Gi
              storageClass: "rook-cephfs"

      advanced:
        core:
          extraEnvVars:
            - name: OIDC_CLIENT_ID
              valueFrom:
                secretKeyRef:
                  name: oidc-harbor 
                  key: client_id
            - name: OIDC_CLIENT_SECRET
              valueFrom:
                secretKeyRef:
                  name: oidc-harbor 
                  key: client_secret
            - name: CONFIG_OVERWRITE_JSON
              value: |
                {
                  "auth_mode": "oidc_auth",
                  "oidc_name": "keycloak",
                  "oidc_endpoint": "https://<your_DNS_name>/keycloak/realms/kubeops-dashboards",
                  "oidc_client_id": "$(OIDC_CLIENT_ID)",
                  "oidc_client_secret": "$(OIDC_CLIENT_SECRET)",
                  "oidc_scope": "openid,profile,email",
                  "oidc_verify_cert": true,
                  "oidc_auto_onboard": true
                }                

Notes

  • Ensure the OIDC client in Keycloak matches the oidc_client_id and oidc_client_secret values.
  • The externalURL and hostname must match the Harbor DNS name exactly.
  • oidc_auto_onboard: true allows users to be created automatically in Harbor upon first login.
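The CONFIG_OVERWRITE_JSON value must be valid JSON or Harbor will reject the configuration. A quick local syntax check of the block used above (the $(...) references and <your_DNS_name> placeholder are treated as plain strings here):

```shell
# Pipe the JSON through python's json module; it fails loudly on syntax errors
cat <<'EOF' | python3 -m json.tool
{
  "auth_mode": "oidc_auth",
  "oidc_name": "keycloak",
  "oidc_endpoint": "https://<your_DNS_name>/keycloak/realms/kubeops-dashboards",
  "oidc_client_id": "$(OIDC_CLIENT_ID)",
  "oidc_client_secret": "$(OIDC_CLIENT_SECRET)",
  "oidc_scope": "openid,profile,email",
  "oidc_verify_cert": true,
  "oidc_auto_onboard": true
}
EOF
```

A common failure mode is a trailing comma after the last key, which YAML block scalars will happily pass through to Harbor.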

3 - SSO for rook-ceph

Learn how to configure Single Sign-On (SSO) for rook-ceph using Keycloak with OIDC in a Kubernetes environment.

Single Sign-On (SSO) with Keycloak for rook-ceph

This guide describes how to configure rook-ceph authentication using Keycloak (OIDC) in a kubeops-managed Kubernetes environment.


Prerequisites

Before proceeding, ensure the following requirements are met:

  • Keycloak is already installed and running
  • rook-ceph is already installed and running
  • kubeops is installed and operational

Step 1: Prepare Keycloak (Realm, User)

In this step, we configure Keycloak for rook-ceph SSO.

Create Realm

Ensure a realm named kubeops-dashboards exists.
If it does not exist, create it in the Keycloak admin console.

  • Realm name: kubeops-dashboards
  • Enabled: true

Create User

Ensure a user named kubeops exists in the kubeops-dashboards realm.
If the user does not exist, create it and set credentials.

  • Username: kubeops
  • Enabled: true
  • Set a permanent password

Step 2: Create Client (rook-ceph)

Create a client for rook-ceph in the kubeops-dashboards realm with the following settings.

  • Client ID: rook-ceph
  • Client type: OpenID Connect
  • Access type: Confidential
  • Client authentication: Enabled
  • Standard flow: Enabled
  • Direct access grants: Disabled

Valid Redirect URIs

Add the following redirect URI:

https://<your_DNS_name>/oauth2/callback

Web Origins

Add the following web origin:

<your_DNS_name>

Step 3: Get Client Secret

In the Keycloak admin console, open the rook-ceph client and copy the client secret. This value will be used by oauth2-proxy and referenced in the next steps:

oidc_client_id: rook-ceph
oidc_client_secret: <CLIENT_SECRET>

Generate a secure random cookie secret.

python3 -c 'import os,base64; print(base64.urlsafe_b64encode(os.urandom(32)).decode())'
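oauth2-proxy requires the cookie secret to decode to 16, 24, or 32 bytes. A quick check of the generated value (re-running the generator above):

```shell
# Generate the cookie secret as above, then verify its decoded length
cookie_secret=$(python3 -c 'import os,base64; print(base64.urlsafe_b64encode(os.urandom(32)).decode())')
python3 -c "import base64; print(len(base64.urlsafe_b64decode('$cookie_secret')))"  # prints 32
```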

Step 4: Create Kubernetes Secret

Create a Kubernetes Secret containing the OAuth2 credentials. Note: the command below uses client-id="ceph-dashboard"; verify that this value matches your Keycloak client ID (the client created in Step 2 is named rook-ceph).

kubectl create secret generic oauth2-proxy-credentials \
    --from-literal=client-id="ceph-dashboard" \
    --from-literal=client-secret="<client-secret>" \
    --from-literal=cookie-secret="<cookie-secret>" \
    -n rook-ceph

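The equivalent declarative Secret manifest, as a sketch (values are placeholders; as noted above, the client-id must match your Keycloak client ID):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: oauth2-proxy-credentials
  namespace: rook-ceph
type: Opaque
stringData:
  client-id: "ceph-dashboard"       # verify against the Keycloak client ID
  client-secret: "<client-secret>"
  cookie-secret: "<cookie-secret>"
```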
Step 5: Prepare values for oauth2-proxy

The following kubeops values configuration integrates the rook-ceph dashboard with Keycloak using OIDC authentication.

Use the client secret and cookie secret derived in the steps above.

global:
  # Global registry to pull the images from
  imageRegistry: ""
  # To help compatibility with other charts which use global.imagePullSecrets.
  imagePullSecrets: []
  #   - name: pullSecret1
  #   - name: pullSecret2

## Override the deployment namespace
##
namespaceOverride: ""

# Force the target Kubernetes version (it uses Helm `.Capabilities` if not set).
# This is especially useful for `helm template` as capabilities are always empty
# due to the fact that it doesn't query an actual cluster
kubeVersion:

# Oauth client configuration specifics
config:
  # Add config annotations
  annotations: {}
  # OAuth client ID
  clientID: "ceph-dashboard"
  # OAuth client secret
  clientSecret: "<client-secret>"
  # List of secret keys to include in the secret and expose as environment variables.
  # By default, all three secrets are required. To exclude certain secrets
  # (e.g., when using federated token authentication), remove them from this list.
  # Example to exclude client-secret:
  # requiredSecretKeys:
  #   - client-id
  #   - cookie-secret
  requiredSecretKeys:
    - client-id
    - client-secret
    - cookie-secret
  # Create a new secret with the following command
  # openssl rand -base64 32 | head -c 32 | base64
  # Use an existing secret for OAuth2 credentials (see secret.yaml for required fields)
  # Example:
  # existingSecret: secret
  cookieSecret: "<cookie-secret>"
  # The name of the cookie that oauth2-proxy will create
  # If left empty, it will default to the release name
  cookieName: ""
  google: {}
    # adminEmail: xxxx
    # useApplicationDefaultCredentials: true
    # targetPrincipal: xxxx
    # serviceAccountJson: xxxx
    # Alternatively, use an existing secret (see google-secret.yaml for required fields)
    # Example:
    # existingSecret: google-secret
    # groups: []
    # Example:
    #  - group1@example.com
    #  - group2@example.com
  # Default configuration, to be overridden
  configFile: |-
    provider = "keycloak-oidc"
    oidc_issuer_url = "https://dev04.kubeops.net/keycloak/realms/master"
    email_domains = [ "*" ]
    upstreams = [ "file:///dev/null" ]

    pass_user_headers = true
    set_xauthrequest = true
    pass_access_token = true
  # Custom configuration file: oauth2_proxy.cfg
  # configFile: |-
  #   pass_basic_auth = false
  #   pass_access_token = true
  # Use an existing config map (see configmap.yaml for required fields)
  # Example:
  # existingConfig: config

alphaConfig:
  enabled: false
  # Add config annotations
  annotations: {}
  # Arbitrary configuration data to append to the server section
  serverConfigData: {}
  # Arbitrary configuration data to append to the metrics section
  metricsConfigData: {}
  # Arbitrary configuration data to append
  configData: {}
  # Arbitrary configuration to append
  # This is treated as a Go template and rendered with the root context
  configFile: ""
  # Use an existing config map (see secret-alpha.yaml for required fields)
  existingConfig: ~
  # Use an existing secret
  existingSecret: "oauth2-proxy-credentials"

image:
  registry: ""
  repository: "oauth2-proxy/oauth2-proxy"
  # appVersion is used by default
  tag: ""
  pullPolicy: "IfNotPresent"
  command: []

# Optionally specify an array of imagePullSecrets.
# Secrets must be manually created in the namespace.
# ref: https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod
imagePullSecrets: []
  # - name: myRegistryKeySecretName

# Set a custom containerPort if required.
# This will default to 4180 if this value is not set and the httpScheme set to http
# This will default to 4443 if this value is not set and the httpScheme set to https
# containerPort: 4180

extraArgs:
  - --provider=keycloak-oidc
  - --set-xauthrequest=true
  - --pass-user-headers=true
  - --pass-access-token=true
  - --skip-oidc-discovery=true
  - --oidc-issuer-url=https://dev04.kubeops.net/keycloak/realms/master
  - --login-url=https://dev04.kubeops.net/keycloak/realms/master/protocol/openid-connect/auth
  - --redeem-url=https://dev04.kubeops.net/keycloak/realms/master/protocol/openid-connect/token
  - --validate-url=https://dev04.kubeops.net/keycloak/realms/master/protocol/openid-connect/userinfo
  - --oidc-jwks-url=https://dev04.kubeops.net/keycloak/realms/master/protocol/openid-connect/certs
  - --ssl-insecure-skip-verify=true
  - --cookie-secure=true

extraEnv: []

envFrom: []
# Load environment variables from a ConfigMap(s) and/or Secret(s)
# that already exists (created and managed by you).
# ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#configure-all-key-value-pairs-in-a-configmap-as-container-environment-variables
#
# PS: Changes in these ConfigMaps or Secrets will not be automatically
#     detected and you must manually restart the relevant Pods after changes.
#
#  - configMapRef:
#      name: special-config
#  - secretRef:
#      name: special-config-secret

# -- Custom labels to add into metadata
customLabels: {}

# To authorize individual email addresses
# That is part of extraArgs but since this needs special treatment we need to do a separate section
authenticatedEmailsFile:
  enabled: false
  # Defines how the email addresses file will be projected, via a configmap or secret
  persistence: configmap
  # template is the name of the configmap what contains the email user list but has been configured without this chart.
  # It's a simpler way to maintain only one configmap (user list) instead changing it for each oauth2-proxy service.
  # Be aware the value name in the extern config map in data needs to be named to "restricted_user_access" or to the
  # provided value in restrictedUserAccessKey field.
  template: ""
  # The configmap/secret key under which the list of email access is stored
  # Defaults to "restricted_user_access" if not filled-in, but can be overridden to allow flexibility
  restrictedUserAccessKey: ""
  # One email per line
  # example:
  # restricted_access: |-
  #   name1@domain
  #   name2@domain
  # If you override the config with restricted_access it will configure a user list within this chart what takes care of the
  # config map resource.
  restricted_access: ""
  annotations: {}
  # helm.sh/resource-policy: keep

service:
  type: ClusterIP
  # when service.type is ClusterIP ...
  # clusterIP: 192.0.2.20
  # when service.type is LoadBalancer ...
  # loadBalancerIP: 198.51.100.40
  # loadBalancerSourceRanges: 203.0.113.0/24
  # when service.type is NodePort ...
  # nodePort: 80
  portNumber: 80
  # Protocol set on the service
  appProtocol: http
  annotations: {}
  # foo.io/bar: "true"
  # configure externalTrafficPolicy
  externalTrafficPolicy: ""
  # configure internalTrafficPolicy
  internalTrafficPolicy: ""
  # configure service target port
  targetPort: ""
  # Configures the service to use IPv4/IPv6 dual-stack.
  # Ref: https://kubernetes.io/docs/concepts/services-networking/dual-stack/
  ipDualStack:
    enabled: false
    ipFamilies: ["IPv6", "IPv4"]
    ipFamilyPolicy: "PreferDualStack"
  # Configure traffic distribution for the service
  # Ref: https://kubernetes.io/docs/concepts/services-networking/service/#traffic-distribution
  trafficDistribution: ""

## Create or use ServiceAccount
serviceAccount:
  ## Specifies whether a ServiceAccount should be created
  enabled: true
  ## The name of the ServiceAccount to use.
  ## If not set and create is true, a name is generated using the fullname template
  name:
  automountServiceAccountToken: true
  annotations: {}
  ## imagePullSecrets for the service account
  imagePullSecrets: []
    # - name: myRegistryKeySecretName

# Network policy settings.
networkPolicy:
  create: false
  ingress: []
  egress: []

ingress:
  enabled: false
  # className: nginx
  path: /
  # Only used if API capabilities (networking.k8s.io/v1) allow it
  pathType: ImplementationSpecific
  # Used to create an Ingress record.
  # hosts:
  # - chart-example.local
  # Extra paths to prepend to every host configuration. This is useful when working with annotation based services.
  # Warning! The configuration is dependant on your current k8s API version capabilities (networking.k8s.io/v1)
  # extraPaths:
  # - path: /*
  #   pathType: ImplementationSpecific
  #   backend:
  #     service:
  #       name: ssl-redirect
  #       port:
  #         name: use-annotation
  labels: {}
  # annotations:
  #   kubernetes.io/ingress.class: nginx
  #   kubernetes.io/tls-acme: "true"
  # tls:
  # Secrets must be manually created in the namespace.
  # - secretName: chart-example-tls
  #   hosts:
  #     - chart-example.local

# Gateway API HTTPRoute configuration
# Ref: https://gateway-api.sigs.k8s.io/api-types/httproute/
gatewayApi:
  enabled: false
  # The name of the Gateway resource to attach the HTTPRoute to
  # Example:
  # gatewayRef:
  #   name: gateway
  #   namespace: gateway-system
  gatewayRef:
    name: ""
    namespace: ""
  # HTTPRoute rule configuration
  # rules:
  # - matches:
  #   - path:
  #       type: PathPrefix
  #       value: /
  rules: []
  # Hostnames to match in the HTTPRoute
  # hostnames:
  # - chart-example.local
  hostnames: []
  # Additional labels to add to the HTTPRoute
  labels: {}
  # Additional annotations to add to the HTTPRoute
  annotations: {}

resources: {}
  # limits:
  #   cpu: 100m
  #   memory: 300Mi
  # requests:
  #   cpu: 100m
  #   memory: 300Mi

# Container resize policy for runtime resource updates
# Ref: https://kubernetes.io/docs/tasks/configure-pod-container/resize-container-resources/
resizePolicy: []
  # - resourceName: cpu
  #   restartPolicy: NotRequired
  # - resourceName: memory
  #   restartPolicy: RestartContainer

extraVolumes: []
  # - name: ca-bundle-cert
  #   secret:
  #     secretName: <secret-name>

extraVolumeMounts: []
  # - mountPath: /etc/ssl/certs/
  #   name: ca-bundle-cert

# Additional containers to be added to the pod.
extraContainers: []
  #  - name: my-sidecar
  #    image: nginx:latest

# Additional Init containers to be added to the pod.
extraInitContainers: []
  #  - name: wait-for-idp
  #    image: my-idp-wait:latest
  #    command:
  #    - sh
  #    - -c
  #    - wait-for-idp.sh

priorityClassName: ""

# hostAliases is a list of aliases to be added to /etc/hosts for network name resolution
hostAliases: []
# - ip: "10.xxx.xxx.xxx"
#   hostnames:
#     - "auth.example.com"
# - ip: 127.0.0.1
#   hostnames:
#     - chart-example.local
#     - example.local

# [TopologySpreadConstraints](https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/) configuration.
# Ref: https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling
# topologySpreadConstraints: []

# Affinity for pod assignment
# Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
# affinity: {}

# Tolerations for pod assignment
# Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
tolerations: []

# Node labels for pod assignment
# Ref: https://kubernetes.io/docs/user-guide/node-selection/
nodeSelector: {}

# Whether to use secrets instead of environment values for setting up OAUTH2_PROXY variables
proxyVarsAsSecrets: true

# Configure Kubernetes liveness and readiness probes.
# Ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/
# Disable both when deploying with Istio 1.0 mTLS. https://istio.io/help/faq/security/#k8s-health-checks
livenessProbe:
  enabled: true
  initialDelaySeconds: 0
  timeoutSeconds: 1

readinessProbe:
  enabled: true
  initialDelaySeconds: 0
  timeoutSeconds: 5
  periodSeconds: 10
  successThreshold: 1

# Configure Kubernetes security context for container
# Ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
securityContext:
  enabled: true
  allowPrivilegeEscalation: false
  capabilities:
    drop:
      - ALL
  readOnlyRootFilesystem: true
  runAsNonRoot: true
  runAsUser: 2000
  runAsGroup: 2000
  seccompProfile:
    type: RuntimeDefault

deploymentAnnotations: {}

podAnnotations: {}

podLabels: {}

replicaCount: 1

revisionHistoryLimit: 10

strategy: {}

enableServiceLinks: true

## PodDisruptionBudget settings
## ref: https://kubernetes.io/docs/concepts/workloads/pods/disruptions/
## One of maxUnavailable and minAvailable must be set to null.
podDisruptionBudget:
  enabled: true
  maxUnavailable: null
  minAvailable: 1
  # Policy for when unhealthy pods should be considered for eviction.
  # Valid values are "IfHealthyBudget" and "AlwaysAllow".
  # Ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/#unhealthy-pod-eviction-policy
  unhealthyPodEvictionPolicy: ""

## Horizontal Pod Autoscaling
## ref: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80
  # targetMemoryUtilizationPercentage: 80
  annotations: {}
  # Configure HPA behavior policies for scaling if needed
  # Ref: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#configuring-scaling-behavior
  behavior: {}
    # scaleDown:
    #   stabilizationWindowSeconds: 300
    #   policies:
    #   - type: Percent
    #     value: 100
    #     periodSeconds: 15
    #   selectPolicy: Min
    # scaleUp:
    #   stabilizationWindowSeconds: 0
    #   policies:
    #   - type: Percent
    #     value: 100
    #     periodSeconds: 15
    #   - type: Pods
    #     value: 4
    #     periodSeconds: 15
    #   selectPolicy: Max

# Configure Kubernetes security context for pod
# Ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
podSecurityContext: {}

# whether to use http or https
httpScheme: http

initContainers:
  # if the redis sub-chart is enabled, wait for it to be ready
  # before starting the proxy
  # creates a role binding to get, list, watch, the redis master pod
  # if service account is enabled
  waitForRedis:
    enabled: true
    image:
      repository: "alpine"
      tag: "latest"
      pullPolicy: "IfNotPresent"
    # uses the kubernetes version of the cluster
    # the chart is deployed on, if not set
    kubectlVersion: ""
    securityContext:
      enabled: true
      allowPrivilegeEscalation: false
      capabilities:
        drop:
          - ALL
      readOnlyRootFilesystem: true
      runAsNonRoot: true
      runAsUser: 65534
      runAsGroup: 65534
      seccompProfile:
        type: RuntimeDefault
    timeout: 180
    resources: {}
      # limits:
      #   cpu: 100m
      #   memory: 300Mi
      # requests:
      #   cpu: 100m
      #   memory: 300Mi

# Additionally authenticate against a htpasswd file. Entries must be created with "htpasswd -B" for bcrypt encryption.
# Alternatively supply an existing secret which contains the required information.
htpasswdFile:
  enabled: false
  existingSecret: ""
  entries: []
  # One row for each user
  # example:
  # entries:
  #  - testuser:$2y$05$gY6dgXqjuzFhwdhsiFe7seM9q9Tile4Y3E.CBpAZJffkeiLaC21Gy

# Configure the session storage type, between cookie and redis
sessionStorage:
  # Can be one of the supported session storage cookie|redis
  type: cookie
  redis:
    # Name of the Kubernetes secret containing the redis & redis sentinel password values (see also `sessionStorage.redis.passwordKey`)
    existingSecret: ""
    # Redis password value. Applicable for all Redis configurations. Taken from redis subchart secret if not set. `sessionStorage.redis.existingSecret` takes precedence
    password: ""
    # Key of the Kubernetes secret data containing the redis password value. If you use the redis sub chart, make sure
    # this password matches the one used in redis-ha.redisPassword (see below).
    passwordKey: "redis-password"
    # Can be one of standalone|cluster|sentinel
    clientType: "standalone"
    standalone:
      # URL of redis standalone server for redis session storage (e.g. `redis://HOST[:PORT]`). Automatically generated if not set
      connectionUrl: ""
    cluster:
      # List of Redis cluster connection URLs. Array or single string allowed.
      connectionUrls: []
      # - "redis://127.0.0.1:8000"
      # - "redis://127.0.0.1:8001"
    sentinel:
      # Name of the Kubernetes secret containing the redis sentinel password value (see also `sessionStorage.redis.sentinel.passwordKey`). Default: `sessionStorage.redis.existingSecret`
      existingSecret: ""
      # Redis sentinel password. Used only for sentinel connection; any redis node passwords need to use `sessionStorage.redis.password`
      password: ""
      # Key of the Kubernetes secret data containing the redis sentinel password value
      passwordKey: "redis-sentinel-password"
      # Redis sentinel master name
      masterName: ""
      # List of Redis cluster connection URLs. Array or single string allowed.
      connectionUrls: []
      # - "redis://127.0.0.1:8000"
      # - "redis://127.0.0.1:8001"

# Enables and configure the automatic deployment of the redis-ha subchart
redis-ha:
  # provision an instance of the redis-ha sub-chart
  enabled: false
  # Redis specific helm chart settings, please see:
  # https://artifacthub.io/packages/helm/dandydev-charts/redis-ha#general-parameters
  #
  # Recommended:
  #
  # redisPassword: xxxxx
  # replicas: 1
  # persistentVolume:
  #   enabled: false
  #
  # If you install Redis using this sub chart, make sure that the password of the sub chart matches the password
  # you set in sessionStorage.redis.password (see above).
  #
  # If you want to use redis in sentinel mode see:
  # https://artifacthub.io/packages/helm/dandydev-charts/redis-ha#redis-sentinel-parameters

# Enables apiVersion deprecation checks
checkDeprecation: true

# Allows graceful shutdown
# terminationGracePeriodSeconds: 65
# lifecycle:
#   preStop:
#     exec:
#       command: [ "sh", "-c", "sleep 60" ]

metrics:

  # Enable Prometheus metrics endpoint

  enabled: true

  # Serve Prometheus metrics on this port

  port: 44180

  # when service.type is NodePort ...

  # nodePort: 44180

  # Protocol set on the service for the metrics port

  service:

    appProtocol: http

  serviceMonitor:

    # Enable Prometheus Operator ServiceMonitor

    enabled: false

    # Define the namespace where to deploy the ServiceMonitor resource

    namespace: ""

    # Prometheus Instance definition

    prometheusInstance: default

    # Prometheus scrape interval

    interval: 60s

    # Prometheus scrape timeout

    scrapeTimeout: 30s

    # Add custom labels to the ServiceMonitor resource

    labels: {}

    ## scheme: HTTP scheme to use for scraping. Can be used with `tlsConfig` for example if using istio mTLS.

    scheme: ""

    ## tlsConfig: TLS configuration to use when scraping the endpoint. For example if using istio mTLS.

    ## Of type: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#tlsconfig

    tlsConfig: {}

    ## bearerTokenFile: Path to bearer token file.

    bearerTokenFile: ""

    ## Used to pass annotations that are used by the Prometheus installed in your cluster to select Service Monitors to work with

    ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#prometheusspec

    annotations: {}

    ## Metric relabel configs to apply to samples before ingestion.

    ## [Metric Relabeling](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#metric_relabel_configs)

    metricRelabelings: []

    # - action: keep

    #   regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'

    #   sourceLabels: [__name__]

    ## Relabel configs to apply to samples before ingestion.

    ## [Relabeling](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config)

    relabelings: []

    # - sourceLabels: [__meta_kubernetes_pod_node_name]

    #   separator: ;

    #   regex: ^(.*)$

    #   targetLabel: nodename

    #   replacement: $1

    #   action: replace

# Extra K8s manifests to deploy

extraObjects: []
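If you run the Prometheus Operator, you can scrape oauth2-proxy by flipping the ServiceMonitor switch shown in the values above. A minimal override sketch, assuming your Prometheus instance selects ServiceMonitors by a `release: kube-prometheus-stack` label (adjust the label to whatever your Prometheus selector actually matches):

    metrics:
      enabled: true
      serviceMonitor:
        enabled: true
        interval: 60s
        labels:
          release: kube-prometheus-stack  # assumption: label your Prometheus selector matches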

Step 6: Install oauth2-proxy Helm chart

Use the following steps to install oauth2-proxy from its Helm chart, replacing the chart's default values with the values file prepared above:

    helm repo add oauth2-proxy https://oauth2-proxy.github.io/manifests
    helm pull oauth2-proxy/oauth2-proxy
    tar -xzvf oauth2-proxy-10.1.0.tgz
    mv values.yaml oauth2-proxy/values.yaml
    helm install oauth2-proxy oauth2-proxy/ -n rook-ceph
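Before installing, the values file must point oauth2-proxy at your Keycloak realm. A minimal sketch of the relevant `values.yaml` keys, assuming a Keycloak client named `ceph-dashboard` in the `master` realm; the client secret and cookie secret are placeholders you must fill in:

    config:
      clientID: "ceph-dashboard"
      clientSecret: "<client-secret-from-keycloak>"
      # Generate with: openssl rand -base64 32 | tr -- '+/' '-_'
      cookieSecret: "<32-byte-base64-secret>"
    extraArgs:
      provider: keycloak-oidc
      oidc-issuer-url: https://dev04.kubeops.net/keycloak/realms/master
      set-xauthrequest: "true"  # expose X-Auth-Request-User to nginx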

Step 7: Update Rook Ceph configuration

Configure the Ceph Manager dashboard to trust the authenticated user that the proxy passes in a header:

    ceph config-key set mgr/dashboard/external_auth true
    ceph config-key set mgr/dashboard/external_auth_header_name "X-Remote-User"
    ceph config-key set mgr/dashboard/external_auth_logout_url "https://dev04.kubeops.net/oauth2/sign_out?rd=https://dev04.kubeops.net/keycloak/realms/master/protocol/openid-connect/logout?client_id=ceph-dashboard"
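The `external_auth_logout_url` value chains two redirects: it first signs the user out of oauth2-proxy, then (via the `rd` parameter) ends the Keycloak session for the `ceph-dashboard` client. Decomposed into shell variables, the URL above is built like this:

```shell
# Keycloak end-session endpoint for the ceph-dashboard client
KEYCLOAK_LOGOUT="https://dev04.kubeops.net/keycloak/realms/master/protocol/openid-connect/logout?client_id=ceph-dashboard"
# oauth2-proxy sign-out, redirecting (rd=) to the Keycloak logout afterwards
LOGOUT_URL="https://dev04.kubeops.net/oauth2/sign_out?rd=${KEYCLOAK_LOGOUT}"
echo "$LOGOUT_URL"
```

Swap `dev04.kubeops.net`, the realm, and the client ID for your own environment when composing this value.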

Step 8: Update ceph-dashboard Ingress

Add the following annotations to the ceph-dashboard Ingress so that nginx delegates authentication to oauth2-proxy and forwards the authenticated user to the dashboard:

    metadata:
      annotations:
        cert-manager.io/cluster-issuer: kubeops-ca-issuer
        kubernetes.io/ingress.class: nginx
        meta.helm.sh/release-name: rook-ceph-cluster
        meta.helm.sh/release-namespace: rook-ceph
        nginx.ingress.kubernetes.io/auth-response-headers: X-Auth-Request-User
        nginx.ingress.kubernetes.io/auth-signin: https://dev04.kubeops.net/oauth2/start?rd=$escaped_request_uri
        nginx.ingress.kubernetes.io/auth-url: https://dev04.kubeops.net/oauth2/auth
        nginx.ingress.kubernetes.io/configuration-snippet: |
          proxy_set_header X-Remote-User $upstream_http_x_auth_request_user;

Step 9: Create oauth2-proxy Ingress

Create an Ingress that routes the `/oauth2` path to the oauth2-proxy service:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: oauth2-proxy-ingress
      namespace: rook-ceph # Namespace where the proxy runs
      annotations:
        kubernetes.io/ingress.class: nginx
    spec:
      rules:
      - host: dev04.kubeops.net
        http:
          paths:
          - path: /oauth2
            pathType: Prefix
            backend:
              service:
                name: oauth2-proxy
                port:
                  number: 80
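For the flow above to work, the `ceph-dashboard` client in Keycloak must also allow oauth2-proxy's callback URL. A hypothetical partial client export illustrating the relevant fields (a confidential client whose redirect URI points at the `/oauth2/callback` path served by the Ingress above):

    {
      "clientId": "ceph-dashboard",
      "protocol": "openid-connect",
      "publicClient": false,
      "redirectUris": ["https://dev04.kubeops.net/oauth2/callback"]
    }

If the redirect URI does not match, Keycloak rejects the login with an "Invalid parameter: redirect_uri" error.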