Accessing Dashboards with OIDC
This how-to guide explains how to enable OIDC-based access to your Kubernetes application dashboards using kubeopsctl. You will configure a single enterprise-values.yaml file that defines the required components. Once the configuration is applied, the dashboards for Harbor, Grafana, Rook-Ceph, OpenSearch, Keycloak, the KubeOps dashboard, and Prometheus become accessible via their configured hostnames and paths. Following the steps in this guide, you will set up a consistent, centralized OIDC integration and make your dashboards securely available through a browser.
Prerequisites
- yq should be installed on all master nodes. For more information, refer to the yq installation guide.
- Keycloak should be installed before other packages.
- Pull all required packages with the 'kubeopsctl pull' command. For more information, refer to Pull Packages with KubeOpsctl.
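Before continuing, it can help to confirm the prerequisites on each master node. A minimal sketch; the `require_cmd` helper is our own illustration, not a kubeopsctl feature:

```shell
#!/bin/sh
# Check that a command is available on this node; print a hint if it is missing.
require_cmd() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "ok: $1 found at $(command -v "$1")"
    return 0
  else
    echo "missing: $1 is not installed on this node" >&2
    return 1
  fi
}

# yq is required on every master node; run this check on each of them.
require_cmd yq || echo "install yq on this node before continuing"
```

Run the same check on each master node (for example over SSH), since yq must be present on all of them.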
Step 1: Configure enterprise-values.yaml
To connect to the dashboards with OIDC, configure your enterprise-values.yaml file as follows:
apiVersion: kubeops/kubeopsctl/enterprise/beta/v1
privateRegistry: false
kubeopsuser: <user_name>
imagepullregistry: registry.kubeops.net/kubeops/kubeops
kubeopsuserpassword: <user_password>
kubeopsusermail: <user_email>
deleteNs: false
localRegistry: false
packages:
  - name: ingress-nginx
    enabled: true
    values:
      standard:
        namespace: ingress-nginx
        externalIPs:
          - <>
      advanced:
  - name: cert-manager
    enabled: true
    values:
      standard:
        namespace: cert-manager
        replicaCount: 3
        logLevel: 2
      advanced:
        ca:
          emailLetsEncrypt: <email>
        ingressName: nginx
  - name: opa-gatekeeper
    enabled: true
    values:
      standard:
        namespace: opa-gatekeeper
      advanced:
  - name: velero
    enabled: true
    values:
      standard:
        namespace: "velero"
        accessKeyId: "your_s3_storage_username"
        secretAccessKey: "your_s3_storage_password"
        useNodeAgent: false
        defaultVolumesToFsBackup: false
        provider: "aws"
        bucket: "velero"
        useVolumeSnapshots: false
        backupLocationConfig:
          region: "minio"
          s3ForcePathStyle: true
          s3Url: "http://minio.velero.svc:9000"
      advanced:
  - name: rook-ceph
    enabled: true
    values:
      standard:
        hostname: <domain_name>
        namespace: rook-ceph
        cluster:
          resources:
            mgr:
              requests:
                cpu: "500m"
                memory: "512Mi"
            mon:
              requests:
                cpu: "1"
                memory: "1Gi"
            osd:
              requests:
                cpu: "1"
                memory: "1Gi"
        dashboard:
          enabled: "true"
        operator:
          data:
            rookLogLevel: "DEBUG"
  - name: harbor
    enabled: true
    values:
      standard:
        namespace: harbor
        harborpass: "topsecret"
        databasePassword: "topsecret"
        redisPassword: "topsecret"
        externalURL: http://10.2.10.110:30002
        nodePort: 30002
        hostname: <domain_name> # set this to make the dashboard reachable via OIDC
        harborPersistence:
          persistentVolumeClaim:
            registry:
              size: 40Gi
              storageClass: "rook-cephfs"
            jobservice:
              jobLog:
                size: 1Gi
                storageClass: "rook-cephfs"
            database:
              size: 1Gi
              storageClass: "rook-cephfs"
            redis:
              size: 1Gi
              storageClass: "rook-cephfs"
            trivy:
              size: 5Gi
              storageClass: "rook-cephfs"
      advanced:
  - name: kube-prometheus-stack
    enabled: true
    values:
      standard:
        namespace: monitoring
        grafanaUsername: admin
        grafanaPassword: topsecret
        retentionSize: "24GB"
        grafanaResources:
          hostname: <domain_name> # set this to make the dashboard reachable via OIDC
          nodePort: 30211
          retention: 10d
          retentionSize: "24GB"
          storageClass: "rook-cephfs"
          storage: 25Gi
        prometheusResources:
          hostname: <domain_name> # set this to make the dashboard reachable via OIDC
          retention: 10d
          retentionSize: "24GB"
          storageClass: "rook-cephfs"
          storage: 25Gi
      advanced:
  - name: keycloak
    enabled: true
    values:
      standard:
        namespace: "keycloak"
        storageClass: "rook-cephfs"
        nodePort: "30180"
        hostname: <domain_name> # set this to make the dashboard reachable via OIDC
        keycloak:
          auth:
            adminUser: admin
            adminPassword: topsecret
        postgresql:
          auth:
            postgresUserPassword: "changeme"
            username: bn_keycloak
            password: "changeme"
            database: bitnami_keycloak
          volumeSize: "8Gi"
      advanced:
  - name: kubeops-dashboard
    enabled: true
    values:
      standard:
        namespace: monitoring
        hostname: <domain_name> # set this to make the dashboard reachable via OIDC
        service:
          nodePort: 30007
      advanced:
  - name: filebeat-os
    enabled: true
    values:
      standard:
        namespace: logging
      advanced:
  - name: logstash-os
    enabled: true
    values:
      standard:
        namespace: logging
        volumeClaimTemplate:
          accessModes:
            - ReadWriteMany
          resources:
            requests:
              storage: 1Gi
          storageClass: "rook-cephfs"
      advanced:
  - name: opensearch-os
    enabled: true
    values:
      standard:
        hostname: <domain_name>
        namespace: logging
        opensearchJavaOpts: "-Xmx512M -Xms512M" # optional, default is -Xmx512M -Xms512M
        resources:
          requests:
            cpu: "250m" # optional, default is 250m
            memory: "1024Mi" # optional, default is 1024Mi
          limits:
            cpu: "300m" # optional, default is 300m
            memory: "3072Mi" # optional, default is 3072Mi
        persistence:
          size: 4Gi # mandatory
          enabled: "true" # optional, default is true
          enableInitChown: "false" # optional, default is false
          labels:
            enabled: "false" # optional, default is false
          storageClass: "rook-cephfs" # optional, default is rook-cephfs
          accessModes:
            - "ReadWriteMany" # optional, default is {ReadWriteMany}
        securityConfig:
          enabled: "false" # optional, default is false
        replicas: "3" # optional, default is 3
      advanced:
  - name: opensearch-dashboards
    enabled: true
    values:
      standard:
        hostname: <domain_name>
        namespace: logging
        nodePort: 30050
      advanced:
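Before applying the file, it is worth verifying that every angle-bracket placeholder (such as <domain_name> or <user_password>) has been replaced with a real value. A small sketch; the `check_placeholders` helper is our own illustration, not part of kubeopsctl:

```shell
#!/bin/sh
# Fail if a values file still contains unfilled <placeholder> tokens.
check_placeholders() {
  file="$1"
  [ -f "$file" ] || { echo "no such file: $file" >&2; return 2; }
  leftover=$(grep -n '<[a-z_]*>' "$file" || true)
  if [ -n "$leftover" ]; then
    echo "unfilled placeholders in $file:" >&2
    echo "$leftover" >&2
    return 1
  fi
  echo "no placeholders left in $file"
}

check_placeholders enterprise-values.yaml || echo "fill in the placeholders before running kubeopsctl apply"
```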
Step 2: Apply the Configuration with kubeopsctl
Apply the enterprise-values.yaml configuration using the following command:
kubeopsctl apply -f enterprise-values.yaml
Wait until all components are deployed and running before proceeding.
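"Wait until all components are running" can be scripted instead of checked by eye. The sketch below assumes a configured kubectl; `pods_ready` only parses `kubectl get pods` output, so the polling loop itself is left as a comment:

```shell
#!/bin/sh
# Return 0 when the given `kubectl get pods -A --no-headers` output
# contains no pods outside the Running/Completed states.
pods_ready() {
  # Column 4 of `kubectl get pods -A --no-headers` is the STATUS field.
  echo "$1" | awk 'NF > 0 && $4 != "Running" && $4 != "Completed" { bad++ }
                   END { exit (bad > 0) }'
}

# Poll the cluster until everything settles (requires kubectl access):
# while ! pods_ready "$(kubectl get pods -A --no-headers)"; do
#   echo "waiting for pods..."; sleep 10
# done
```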
Step 3: Connect to Dashboards via Hostname
To access the dashboards via OIDC, use the hostnames you configured in enterprise-values.yaml.
Harbor is an exception: you can access it directly using the base domain name.
Use the following paths, replacing <domain_name> with your configured domain:
- <domain_name>/ for Harbor
- <domain_name>/grafana for Grafana
- <domain_name>/ceph-dashboard for Rook-Ceph
- <domain_name>/opensearch for OpenSearch
- <domain_name>/keycloak for Keycloak
- <domain_name>/kubeops-dashboard for KubeOps Dashboard
- <domain_name>/prometheus for Prometheus
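Once DNS resolves your domain, a quick reachability check of each path can be scripted. `dashboard_url` and the path list below mirror the table above; the domain is a placeholder you must replace, and the curl line is commented out because it needs a live cluster:

```shell
#!/bin/sh
# Build the dashboard URL for a given domain and path ("/" for Harbor).
dashboard_url() {
  echo "https://$1$2"
}

DOMAIN=example.com   # replace with your configured <domain_name>
for path in / /grafana /ceph-dashboard /opensearch /keycloak /kubeops-dashboard /prometheus; do
  url=$(dashboard_url "$DOMAIN" "$path")
  echo "$url"
  # curl -sko /dev/null -w '%{http_code}\n' "$url"   # expect 200 or a redirect to the OIDC login
done
```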