Deploy Rok Components¶
At this point, you have configured everything and are ready to install Rok. This guide walks you through deploying Rok: you will create the Rok namespaces and then deploy the Rok Operator, Rok kmod, the external services, and the RokCluster CR.
Fast Forward
If you have already deployed the Rok components, expand this box to fast-forward.
Go to your GitOps repository, inside your rok-tools management environment:

root@rok-tools:~# cd ~/ops/deployments

Save your state:

root@rok-tools:~/ops/deployments# rok-j2 deploy/env.rok.j2 -o deploy/env.rok

Commit your changes:

root@rok-tools:~/ops/deployments# git commit -am "Deploy Rok Components"

Proceed to the Verify section.
Choose one of the following options to deploy Rok:
- Option 1: Deploy Rok Components Automatically (preferred).
- Option 2: Deploy Rok Components Manually.
Air Gapped
Follow Option 2 and proceed with the manual installation.
What You’ll Need¶
- A configured management environment.
- Your clone of the Arrikto GitOps repository.
- An existing Kubernetes cluster.
- A cloud identity with access to your cloud provider’s storage service.
- Access to your object storage service for Rok.
- Access to Arrikto’s private container registry.
- A configured Rok user.
- Account management for Rok.
- A Rok version that supports the kernel of your Kubernetes nodes.
Option 1: Deploy Rok Components Automatically (preferred)¶
Choose one of the following options, based on your platform.
Deploy Rok by following the on-screen instructions on the rok-deploy user interface.

If rok-deploy is not already running, start it from within your rok-tools management environment.
Proceed to the Summary section.
Option 2: Deploy Rok Components Manually¶
If you want to deploy Rok manually, follow the instructions below.
Procedure¶
Go to your GitOps repository, inside your rok-tools management environment:

root@rok-tools:~# cd ~/ops/deployments

Deploy the Rok Operator:

root@rok-tools:~/ops/deployments# rok-deploy --apply rok/rok-operator/overlays/deploy

Deploy Rok kmod:

root@rok-tools:~/ops/deployments# rok-deploy --apply rok/rok-kmod/overlays/deploy

Deploy etcd.
Edit the kustomization manifest. Choose one of the following options, based on your platform:

For EKS, edit rok/rok-external-services/etcd/overlays/deploy/kustomization.yaml to use the eks overlay as base:

bases:
- ../eks  # <-- Edit this line to point to the eks overlay

For AKS, edit rok/rok-external-services/etcd/overlays/deploy/kustomization.yaml to use the aks overlay as base:

bases:
- ../aks  # <-- Edit this line to point to the aks overlay

For GKE, edit rok/rok-external-services/etcd/overlays/deploy/kustomization.yaml to use the gke overlay as base:

bases:
- ../gke  # <-- Edit this line to point to the gke overlay

For on-prem, specify the storage class to use for etcd persistent volumes:

root@rok-tools:~/ops/deployments# export ETCD_STORAGE_CLASS=rok-local-path

Note: This storage class is provided by local-path-provisioner and is backed by local disks. We opt not to use the local-path storage class, which is available by default in Bright Kubernetes clusters and backed by NFS, to avoid making the NFS server a single point of failure. Rok etcd runs with multiple replicas, so it can tolerate a single node failure at a time.

Then configure the on-prem overlay:

root@rok-tools:~/ops/deployments# rok-j2 \
>    rok/rok-external-services/etcd/overlays/on-prem/patches/pvc.yaml.j2 \
>    -o rok/rok-external-services/etcd/overlays/on-prem/patches/pvc.yaml

Then edit rok/rok-external-services/etcd/overlays/deploy/kustomization.yaml to set the on-prem overlay as base:

bases:
- ../on-prem  # <-- Edit this line to point to the on-prem overlay

Specify the desired etcd cluster size:

root@rok-tools:~/ops/deployments# export ETCD_CLUSTER_SIZE=3

Render the patch for the etcd cluster size:

root@rok-tools:~/ops/deployments# j2 \
>    rok/rok-external-services/etcd/overlays/deploy/patches/cluster-size.yaml.j2 \
>    -o rok/rok-external-services/etcd/overlays/deploy/patches/cluster-size.yaml

Commit your changes:

root@rok-tools:~/ops/deployments# git commit -am "Configure etcd for our platform"

Apply the manifests:

root@rok-tools:~/ops/deployments# rok-deploy --apply \
>    rok/rok-external-services/etcd/overlays/deploy
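The fault-tolerance claim in the note above follows from etcd's quorum rule: a cluster of N members needs floor(N/2) + 1 members alive to serve writes. A quick sketch of the arithmetic for the cluster size chosen above (this is only an illustration, not a deployment step):

```shell
# Failures an etcd cluster can tolerate while keeping quorum:
# quorum = floor(N/2) + 1, so tolerated failures = N - quorum.
ETCD_CLUSTER_SIZE=3
QUORUM=$(( ETCD_CLUSTER_SIZE / 2 + 1 ))
TOLERATED=$(( ETCD_CLUSTER_SIZE - QUORUM ))
echo "size=${ETCD_CLUSTER_SIZE} quorum=${QUORUM} tolerated-failures=${TOLERATED}"
```

This is why the note can promise surviving a single node failure at a time with three replicas: two members remain, which still forms a quorum.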
Deploy Redis.
Edit the kustomization manifest. Choose one of the following options, based on your platform:
For EKS, edit rok/rok-external-services/redis/overlays/deploy/kustomization.yaml to set the eks overlay as base:

bases:
- ../eks  # <-- Edit this line to point to the eks overlay

For AKS, edit rok/rok-external-services/redis/overlays/deploy/kustomization.yaml to set the aks overlay as base:

bases:
- ../aks  # <-- Edit this line to point to the aks overlay

For GKE, edit rok/rok-external-services/redis/overlays/deploy/kustomization.yaml to set the gke overlay as base:

bases:
- ../gke  # <-- Edit this line to point to the gke overlay

For on-prem, specify the storage class to use for Redis persistent volumes:

root@rok-tools:~/ops/deployments# export REDIS_STORAGE_CLASS=local-path

Note: In Bright Kubernetes clusters, the local-path storage class is available by default and is backed by NFS.

Then configure the on-prem overlay:

root@rok-tools:~/ops/deployments# rok-j2 \
>    rok/rok-external-services/redis/overlays/on-prem/patches/pvc.yaml.j2 \
>    -o rok/rok-external-services/redis/overlays/on-prem/patches/pvc.yaml

Then edit rok/rok-external-services/redis/overlays/deploy/kustomization.yaml to set the on-prem overlay as base:

bases:
- ../on-prem  # <-- Edit this line to point to the on-prem overlay

Commit your changes:

root@rok-tools:~/ops/deployments# git commit -am \
>    "Configure Redis for our platform"

Apply the manifests:

root@rok-tools:~/ops/deployments# rok-deploy --apply \
>    rok/rok-external-services/redis/overlays/deploy
Deploy S3Proxy (Azure only):
root@rok-tools:~/ops/deployments# rok-deploy --apply \
>    rok/rok-external-services/s3proxy/overlays/deploy

Deploy the kubeflow namespace:

root@rok-tools:~/ops/deployments# rok-deploy --apply \
>    kubeflow/manifests/common/kubeflow-namespace/overlays/deploy

Deploy the Kubeflow Gateway in the kubeflow namespace:

root@rok-tools:~/ops/deployments# rok-deploy --apply \
>    kubeflow/manifests/common/istio-1-14/kubeflow-istio-resources/overlays/deploy

Deploy cert-manager resources, needed by the skel resources:

root@rok-tools:~/ops/deployments# rok-deploy --apply \
>    rok/cert-manager/cert-manager/overlays/deploy

Deploy Kyverno resources, needed by the skel resources:

root@rok-tools:~/ops/deployments# rok-deploy --apply rok/kyverno/overlays/deploy

Deploy CRDs needed by the skel resources:

root@rok-tools:~/ops/deployments# rok-deploy --apply \
>    kubeflow/manifests/apps/admission-webhook/upstream/overlays/deploy

Deploy the skel resources:

root@rok-tools:~/ops/deployments# rok-deploy --apply \
>    kubeflow/manifests/common/skel-resources/overlays/deploy

Deploy the Reception server in the kubeflow namespace:

root@rok-tools:~/ops/deployments# rok-deploy --apply \
>    kubeflow/manifests/apps/reception/overlays/deploy

Important
When a user logs in to Arrikto EKF for the first time, the Reception server will create a new Profile for this user. The Profile Controller will then handle this new Profile and create a dedicated namespace for this user.
To disable the automatic Profile creation, and consequently the automatic creation of dedicated user namespaces, follow the Disable Automatic Profile Creation guide.
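For reference, the Profile object that gets created for a first-time user looks roughly like the following. This is a hedged sketch based on the upstream Kubeflow Profile CRD; the exact fields and the `kubeflow-user` / `user@example.com` values are illustrative, not what Reception necessarily produces:

```yaml
# Sketch of a Kubeflow Profile custom resource (upstream kubeflow.org/v1 shape).
apiVersion: kubeflow.org/v1
kind: Profile
metadata:
  name: kubeflow-user          # hypothetical; typically derived from the user's identity
spec:
  owner:
    kind: User
    name: user@example.com     # hypothetical; the identity the user logged in with
```

The Profile Controller reconciles such an object into a namespace named after the Profile, with RBAC bindings granting the owner access.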
Deploy the Profile Controller in the kubeflow namespace:

root@rok-tools:~/ops/deployments# rok-deploy --apply \
>    kubeflow/manifests/apps/profiles/upstream/overlays/deploy

Deploy roles necessary for RBAC configuration:

root@rok-tools:~/ops/deployments# rok-deploy --apply \
>    kubeflow/manifests/common/kubeflow-roles/overlays/deploy

Specify the Kubelet root directory. Choose one of the following options, based on your platform.
EKS, AKS, and GKE use the default Kubelet root directory. Skip this step.
For on-prem, if the Kubelet root directory of your cluster is not the default /var/lib/kubelet, configure Rok to use the directory of your installation:

Set the Kubelet root directory used in your installation:

root@rok-tools:~/ops/deployments# export KUBELET_ROOT_DIR=<DIR>

Replace <DIR> with your Kubelet root directory. For example:

root@rok-tools:~/ops/deployments# export KUBELET_ROOT_DIR=/var/lib/kubelet

Note: For Bright Kubernetes clusters, use /cm/local/apps/kubernetes/var/kubelet.

Configure the on-prem overlay to use the Kubelet root directory that you specified in the previous step:

root@rok-tools:~/ops/deployments# rok-j2 \
>    rok/rok-cluster/overlays/on-prem/patches/kubelet-root-dir.yaml.j2 \
>    -o rok/rok-cluster/overlays/on-prem/patches/kubelet-root-dir.yaml

Commit your changes:

root@rok-tools:~/ops/deployments# git commit -am "Set Kubelet root directory"
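If you are unsure which root directory your kubelet uses, one way to find out is to read it from the kubelet command line on a node. The sketch below operates on a hard-coded sample command line and assumes the kubelet was started with an explicit --root-dir flag; on a real node you would inspect the running process instead:

```shell
# Hypothetical kubelet command line; on a real node you could obtain it with:
#   tr '\0' ' ' < /proc/$(pgrep -o kubelet)/cmdline
cmdline='/usr/bin/kubelet --root-dir=/cm/local/apps/kubernetes/var/kubelet --v=2'

# Extract the value of --root-dir; fall back to the default when the flag is absent.
dir=$(echo "$cmdline" | grep -o -- '--root-dir=[^ ]*' | cut -d= -f2)
echo "${dir:-/var/lib/kubelet}"
```

If the flag is absent from the command line, the kubelet may still set a non-default root directory via its config file, so check that as well before assuming the default.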
Optional
If you wish your Rok cluster to trust one or more custom CAs, for example, to be able to securely connect to your S3 service, you need to:
Obtain the certificate authority (CA) bundle of your choice and copy it to your clipboard. For example, a CA bundle might look like this:
-----BEGIN CERTIFICATE-----
MIIDyjCCArKgAwIBAgIQKX7Wxtqubey4K/qRvAFCETANBgkqhkiG9w0BAQsFADBM
MRUwEwYDVQQKEwxjZXJ0LW1hbmFnZXIxMzAxBgNVBAMTKmE0OTI0ODE5MzU5MjM0
...
-----END CERTIFICATE-----

Edit rok/rok-cluster/components/cacerts/cacerts and paste the contents of your certificate or certificate bundle. For example, the final result should look like this:

-----BEGIN CERTIFICATE-----
MIIDyjCCArKgAwIBAgIQKX7Wxtqubey4K/qRvAFCETANBgkqhkiG9w0BAQsFADBM
MRUwEwYDVQQKEwxjZXJ0LW1hbmFnZXIxMzAxBgNVBAMTKmE0OTI0ODE5MzU5MjM0
...
-----END CERTIFICATE-----

Enable the cacerts Kustomize component in the corresponding kustomization file, if it is not already enabled. Choose one of the following options, based on your platform.

For EKS, AKS, and GKE, edit rok/rok-cluster/overlays/deploy/kustomization.yaml so that it contains the following lines:

components:
- ../../components/cacerts

For on-prem, the cacerts Kustomize component is enabled by default; skip this step.

Commit your changes:

root@rok-tools:~/ops/deployments# git commit -am "Specify trusted CA bundle"
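If you need the cluster to trust more than one CA, note that a PEM bundle is simply the certificates concatenated back to back. A sketch with hypothetical file names (ca-corp.pem and ca-s3.pem stand in for your own CA files, and the PEM bodies are placeholders):

```shell
# Hypothetical CA files; substitute your own. We fabricate two placeholder
# PEM blocks here only to demonstrate the concatenation.
printf '%s\n' '-----BEGIN CERTIFICATE-----' 'AAAA' '-----END CERTIFICATE-----' > ca-corp.pem
printf '%s\n' '-----BEGIN CERTIFICATE-----' 'BBBB' '-----END CERTIFICATE-----' > ca-s3.pem

# A PEM bundle is just the certificates one after another.
cat ca-corp.pem ca-s3.pem > cacerts

# Sanity check: each certificate should appear once in the bundle.
grep -c -- '-----BEGIN CERTIFICATE-----' cacerts
```

The resulting file is what you would paste into rok/rok-cluster/components/cacerts/cacerts.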
Specify any extra trusted CIDRs for the Rok CSI Access Servers. Choose one of the following options, based on your platform.

For EKS, AKS, and GKE, skip this step.

For on-prem, if your Kubernetes nodes have more than one network interface and intra-cluster traffic may come from any of them, specify the CIDRs that are not known to Kubernetes, that is, the addresses that do not show up as the InternalIP of the nodes:

Specify the trusted CIDRs:

root@rok-tools:~/ops/deployments# export ROK_ACCESS_SERVER_TRUSTED_CIDRS=<CIDR>

Replace <CIDR> with your trusted CIDR. For example:

root@rok-tools:~/ops/deployments# export ROK_ACCESS_SERVER_TRUSTED_CIDRS=10.0.0.1/24

Note: If you have more than one trusted CIDR, use a comma-separated list.

Configure the on-prem overlay to use the trusted CIDRs that you specified in the previous step:

root@rok-tools:~/ops/deployments# rok-j2 \
>    rok/rok-cluster/overlays/on-prem/patches/access-server.yaml.j2 \
>    -o rok/rok-cluster/overlays/on-prem/patches/access-server.yaml

Commit your changes:

root@rok-tools:~/ops/deployments# git commit -am "Configure CSI trusted CIDRs"
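The comma-separated form mentioned in the note looks like the sketch below, where 10.0.0.0/24 and 192.168.10.0/24 are hypothetical extra networks. Splitting the list is a handy way to eyeball each entry before rendering the overlay:

```shell
# Two hypothetical extra networks, comma separated, with no spaces.
ROK_ACCESS_SERVER_TRUSTED_CIDRS=10.0.0.0/24,192.168.10.0/24

# Split the comma-separated list into positional parameters and list them.
IFS=','
set -- $ROK_ACCESS_SERVER_TRUSTED_CIDRS
unset IFS
count=$#
echo "number of trusted CIDRs: $count"
for cidr in "$@"; do
    echo "trusted: $cidr"
done
```

Keep the list free of whitespace; a stray space around a comma would end up inside a CIDR string in the rendered patch.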
Configure the rok-cluster kustomization to use overlays and components based on your platform.

For EKS, edit rok/rok-cluster/overlays/deploy/kustomization.yaml to use eks as base in the deploy overlay:

bases:
- ../eks

Then commit your changes:

root@rok-tools:~/ops/deployments# git commit -am "Use eks overlay in rok-cluster"
For AKS, edit rok/rok-cluster/overlays/deploy/kustomization.yaml to use aks as base in the deploy overlay:

bases:
- ../aks

Then commit your changes:

root@rok-tools:~/ops/deployments# git commit -am "Use aks overlay in rok-cluster"
For GKE, edit rok/rok-cluster/overlays/deploy/kustomization.yaml to use gke as base in the deploy overlay:

bases:
- ../gke

Then commit your changes:

root@rok-tools:~/ops/deployments# git commit -am "Use gke overlay in rok-cluster"
For on-prem, specify the platform to use:

root@rok-tools:~/ops/deployments# export PLATFORM=on-prem

Then configure the deploy overlay to use the on-prem overlay:

root@rok-tools:~/ops/deployments# rok-j2 \
>    rok/rok-cluster/overlays/deploy/kustomization.yaml.j2 \
>    -o rok/rok-cluster/overlays/deploy/kustomization.yaml

Air Gapped

Patch the kustomization to use the mirrored images:

root@rok-tools:~/ops/deployments# rok-image-patch \
>    --kustomizations rok/rok-cluster/overlays/deploy

Follow the on-screen instructions and provide any necessary input.

Then commit your changes:

root@rok-tools:~/ops/deployments# git commit -am "Use on-prem overlay in rok-cluster"
Deploy the RokCluster CR:

root@rok-tools:~/ops/deployments# rok-deploy --apply rok/rok-cluster/overlays/deploy

Deploy the Rok CSI Unpin Controller:

root@rok-tools:~/ops/deployments# rok-deploy --apply rok/rok-csi-unpin-controller/overlays/deploy

Deploy Dex in the auth namespace:

root@rok-tools:~/ops/deployments# rok-deploy --apply \
>    kubeflow/manifests/common/dex/overlays/deploy

Note: Dex runs as a StatefulSet that requires storage from Rok.

Deploy AuthService in the istio-system namespace:

root@rok-tools:~/ops/deployments# rok-deploy --apply \
>    kubeflow/manifests/common/oidc-authservice/overlays/deploy

Deploy the Rok Monitoring Stack in the monitoring namespace:

root@rok-tools:~/ops/deployments# rok-deploy --apply rok/monitoring/overlays/deploy

Important
By default, no user has access to the Rok Monitoring Stack. To allow access for specific users, follow the Grant Rok Monitoring Stack Admin Privileges guide.
See also
- Learn more about the Rok Monitoring Stack on the EKF Monitoring user guide.
Deploy Istio related resources for EKF:
root@rok-tools:~/ops/deployments# rok-deploy --apply rok/expose-ekf/overlays/deploy

Note: This does not yet expose EKF to the outside world. You will do that in the Expose EKF guide.
Save your state:

root@rok-tools:~/ops/deployments# rok-j2 deploy/env.rok.j2 -o deploy/env.rok

Commit your changes:

root@rok-tools:~/ops/deployments# git commit -am "Deploy Rok Components"

Mark your progress:

root@rok-tools:~/ops/deployments# export DATE=$(date -u "+%Y-%m-%dT%H.%M.%SZ")
root@rok-tools:~/ops/deployments# git tag \
>    -a deploy/${DATE?}/develop/rok \
>    -m "Deploy Rok Components"
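The tag name created above follows a deploy/<timestamp>/develop/rok pattern, with the UTC timestamp using dots instead of colons so that it is a valid git ref name. A quick sketch of how the name is composed:

```shell
# UTC timestamp with dots instead of colons (colons are not allowed in git refs).
DATE=$(date -u "+%Y-%m-%dT%H.%M.%SZ")
TAG="deploy/${DATE}/develop/rok"
echo "$TAG"
```

Annotated tags like this one give you a durable, sortable record of each deployment milestone in the GitOps repository.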
Verify¶
Go to your GitOps repository, inside your rok-tools management environment:

root@rok-tools:~# cd ~/ops/deployments

Restore the required context from previous sections:

root@rok-tools:~/ops/deployments# source <(cat deploy/env.cloudidentity)
root@rok-tools:~/ops/deployments# export ROK_CLUSTER_NAMESPACE

Verify that the Rok Operator, Rok Disk Manager, and Rok kmod Pods are up and running. Check the Pod status and verify that field STATUS is Running and field READY is N/N for all Pods:
root@rok-tools:~/ops/deployments# kubectl -n rok-system get pods
NAME                     READY   STATUS    RESTARTS   AGE
rok-disk-manager-tmwqz   1/1     Running   0          31s
rok-kmod-8g48m           1/1     Running   0          37s
rok-operator-0           2/2     Running   0          59s

Verify that the Dex Pod is up and running. Check the Pod status and verify that field STATUS is Running and field READY is 2/2:
root@rok-tools:~/ops/deployments# kubectl -n auth get pods
NAME    READY   STATUS    RESTARTS   AGE
dex-0   2/2     Running   0          65s

Verify that the AuthService Pod is up and running. Check the Pod status and verify that field STATUS is Running and field READY is 1/1:
root@rok-tools:~/ops/deployments# kubectl get pods -n istio-system -l app=authservice
NAME            READY   STATUS    RESTARTS   AGE
authservice-0   1/1     Running   0          9m27s

Verify that the cert-manager Pods are up and running. Check the Pod status and verify that field STATUS is Running and field READY is 1/1 for all Pods:
root@rok-tools:~/ops/deployments# kubectl -n cert-manager get pods
NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-6d86476c77-bl9rs              1/1     Running   0          9m
cert-manager-cainjector-5b9cd446fd-n5jpd   1/1     Running   0          9m
cert-manager-webhook-64d967c45-cdfwh       1/1     Running   0          9m

Verify that the Kyverno Pod is up and running. Check the Pod status and verify that field STATUS is Running and field READY is 1/1:
root@rok-tools:~/ops/deployments# kubectl -n kyverno get pods
NAME                       READY   STATUS    RESTARTS   AGE
kyverno-544fc576bb-gbc9l   1/1     Running   0          9m

Verify that the skel resources, Reception server, and Profile Controller Pods are up and running. Check the Pod status and verify that field STATUS is Running and field READY is N/N for all Pods:
root@rok-tools:~/ops/deployments# kubectl -n kubeflow get pods
NAME                                            READY   STATUS    RESTARTS   AGE
admission-webhook-deployment-5d4cf6bbdb-gfrkv   2/2     Running   0          9m
kubeflow-reception-54497df69c-psvvp             2/2     Running   0          9m
profiles-deployment-6777bccfdc-l4l6z            3/3     Running   0          9m

Verify that the rok-init job has completed successfully. Check the job status and verify that field COMPLETIONS is 2/2:

root@rok-tools:~/ops/deployments# kubectl -n ${ROK_CLUSTER_NAMESPACE?} get job
NAME       COMPLETIONS   DURATION   AGE
rok-init   2/2           59s        24m

Verify that the etcd, Redis, Rok CSI, Rok CSI Unpin Controller, and Rok Pods are up and running. Check the Pod status and verify that field STATUS is Running and field READY is N/N for all Pods:
root@rok-tools:~/ops/deployments# kubectl -n ${ROK_CLUSTER_NAMESPACE?} get pods
NAME                                              READY   STATUS    RESTARTS   AGE
rok-9brt8                                         2/2     Running   0          5m23s
rok-csi-controller-0                              5/5     Running   0          5m21s
rok-csi-guard-ip--172-31-18-161.eu-central-1...   1/1     Running   0          5m21s
rok-csi-node-49ncb                                3/3     Running   0          5m22s
rok-csi-unpin-controller-64b46c74b5-hmvnk         1/1     Running   0          5m25s
rok-etcd-0                                        2/2     Running   0          7m11s
rok-redis-0                                       3/3     Running   0          6m51s

Verify that the Rok cluster is up and running. Verify that field HEALTH is OK and field PHASE is Running:
root@rok-tools:~/ops/deployments# kubectl get rokcluster -n ${ROK_CLUSTER_NAMESPACE?} rok
NAME   VERSION                      HEALTH   TOTAL MEMBERS   READY MEMBERS   PHASE     AGE
rok    release-1.5-l0-release-1.5   OK       2               2               Running   2m4s

Verify that the Rok Monitoring Stack is up and running:
root@rok-tools:~/ops/deployments# kubectl get pods -n monitoring
NAME                                   READY   STATUS    RESTARTS   AGE
grafana-6d7d7b78f7-6flm7               2/2     Running   0          2m17s
kube-state-metrics-765c7c7f95-chkzn    4/4     Running   0          2m16s
node-exporter-zng26                    2/2     Running   0          2m16s
prometheus-k8s-0                       3/3     Running   1          2m15s
prometheus-operator-5f75d76f9f-fmpp5   3/3     Running   0          8m24s

Ensure that Prometheus has successfully discovered the needed targets so that it can pull metrics periodically:
root@rok-tools:~/ops/deployments# kubectl exec -ti -n monitoring sts/prometheus-k8s \
>    -c prometheus -- wget -qO - localhost:9090/metrics | grep 'discovered.*rok-metrics'
prometheus_sd_discovered_targets{config="serviceMonitor/rok/rok-metrics/0",name="scrape"} 7

root@rok-tools:~/ops/deployments# kubectl exec -ti -n monitoring sts/prometheus-k8s \
>    -c prometheus -- wget -qO - localhost:9090/metrics | grep 'discovered.*rok-etcd-metrics'
prometheus_sd_discovered_targets{config="serviceMonitor/rok/rok-etcd-metrics/0",name="scrape"} 7

root@rok-tools:~/ops/deployments# kubectl exec -ti -n monitoring sts/prometheus-k8s \
>    -c prometheus -- wget -qO - localhost:9090/metrics | grep 'discovered.*rok-redis-metrics'
prometheus_sd_discovered_targets{config="serviceMonitor/rok/rok-redis-metrics/0",name="scrape"} 7
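Each matched line is a Prometheus metric in the text exposition format: a metric name, a set of labels in braces, and a sample value. If you want to script this check, the discovered-target count is simply the last whitespace-separated field. A sketch, using a sample line copied from the output above:

```shell
# A sample line from the Prometheus /metrics output shown above.
line='prometheus_sd_discovered_targets{config="serviceMonitor/rok/rok-metrics/0",name="scrape"} 7'

# The sample value is the last whitespace-separated field of the line.
count=${line##* }
echo "discovered targets: $count"

# Treat zero discovered targets as a failure of the verification.
test "$count" -gt 0 && echo "OK"
```

The exact count depends on the size of your cluster; what matters for verification is that it is greater than zero for each of the three ServiceMonitors.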