Deploy Rok Components

At this point, you have configured everything and you are ready to install Rok. This guide will walk you through deploying Rok. More specifically, you will create the Rok namespaces and then deploy the Rok Operator, Rok kmod, external services, and the RokCluster CR.

Choose one of the following options to deploy Rok:

What You’ll Need

Option 1: Deploy Rok Components Automatically (preferred)

Choose one of the following options, based on your platform.

Deploy Rok by following the on-screen instructions on the rok-deploy user interface.

If rok-deploy is not already running, start it with:

root@rok-tools:~# rok-deploy --run-from rok
[Screenshot: the rok-deploy user interface at the Rok deployment step]

Proceed to the Summary section.

Rok does not currently support automatic deployment on Azure Cloud, Google Cloud, or on premises. On these platforms, follow the instructions in the Option 2: Deploy Rok Components Manually section to deploy Rok manually.

Option 2: Deploy Rok Components Manually

If you want to deploy Rok manually, follow the instructions below.

Procedure

  1. Go to your GitOps repository, inside your rok-tools management environment:

    root@rok-tools:~# cd ~/ops/deployments
  2. Deploy the Rok Operator:

    root@rok-tools:~/ops/deployments# rok-deploy --apply rok/rok-operator/overlays/deploy
  3. Deploy Rok kmod:

    root@rok-tools:~/ops/deployments# rok-deploy --apply rok/rok-kmod/overlays/deploy
  4. Deploy etcd.

    1. Edit the kustomization manifest. Choose one of the following options, based on your platform:

      Edit rok/rok-external-services/etcd/overlays/deploy/kustomization.yaml to use the eks overlay as base:

      bases:
      - ../eks # <-- Edit this line to point to the eks overlay

      Edit rok/rok-external-services/etcd/overlays/deploy/kustomization.yaml to use the aks overlay as base:

      bases:
      - ../aks # <-- Edit this line to point to the aks overlay

      Edit rok/rok-external-services/etcd/overlays/deploy/kustomization.yaml to use the gke overlay as base:

      bases:
      - ../gke # <-- Edit this line to point to the gke overlay

      Specify the storage class to use for etcd persistent volumes:

      root@rok-tools:~/ops/deployments# export ETCD_STORAGE_CLASS=rok-local-path

      Note

      This storage class is provided by local-path-provisioner and is backed by local disks. We opt not to use the local-path storage class, which is available by default in Bright Kubernetes clusters and is backed by NFS, to avoid making the NFS server a single point of failure. Rok etcd runs with multiple replicas, so it can tolerate a single node failure at a time.

      Configure the on-prem overlay:

      root@rok-tools:~/ops/deployments# rok-j2 \
      >    rok/rok-external-services/etcd/overlays/on-prem/patches/pvc.yaml.j2 \
      >    -o rok/rok-external-services/etcd/overlays/on-prem/patches/pvc.yaml

      Edit rok/rok-external-services/etcd/overlays/deploy/kustomization.yaml to set the on-prem overlay as base:

      bases:
      - ../on-prem # <-- Edit this line to point to the on-prem overlay
    2. Specify the desired etcd cluster size:

      root@rok-tools:~/ops/deployments# export ETCD_CLUSTER_SIZE=3
    3. Render the patch for the etcd cluster size:

      root@rok-tools:~/ops/deployments# j2 \
      >    rok/rok-external-services/etcd/overlays/deploy/patches/cluster-size.yaml.j2 \
      >    -o rok/rok-external-services/etcd/overlays/deploy/patches/cluster-size.yaml
    4. Commit your changes:

      root@rok-tools:~/ops/deployments# git commit -am "Configure etcd for our platform"
    5. Apply the manifests:

      root@rok-tools:~/ops/deployments# rok-deploy --apply \
      >    rok/rok-external-services/etcd/overlays/deploy
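Taken together, the sub-steps above leave the etcd deploy overlay pointing at your platform base plus the rendered cluster-size patch. The following is a rough sketch only, for EKS; the exact kustomization keys (for example, whether the patch is wired in via patchesStrategicMerge) are assumptions and may differ in your repository:

```yaml
# rok/rok-external-services/etcd/overlays/deploy/kustomization.yaml (sketch)
bases:
- ../eks                        # platform base chosen in sub-step 1
patchesStrategicMerge:
- patches/cluster-size.yaml     # rendered from cluster-size.yaml.j2 in sub-step 3
```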
  5. Deploy Redis.

    1. Edit the kustomization manifest. Choose one of the following options, based on your platform:

      Edit rok/rok-external-services/redis/overlays/deploy/kustomization.yaml to set the eks overlay as base:

      bases:
      - ../eks # <-- Edit this line to point to the eks overlay

      Edit rok/rok-external-services/redis/overlays/deploy/kustomization.yaml to set the aks overlay as base:

      bases:
      - ../aks # <-- Edit this line to point to the aks overlay

      Edit rok/rok-external-services/redis/overlays/deploy/kustomization.yaml to set the gke overlay as base:

      bases:
      - ../gke # <-- Edit this line to point to the gke overlay

      Specify the storage class to use for Redis persistent volumes:

      root@rok-tools:~/ops/deployments# export REDIS_STORAGE_CLASS=local-path

      Note

      In Bright Kubernetes clusters, the local-path storage class is available by default and is backed by NFS.

      Configure the on-prem overlay:

      root@rok-tools:~/ops/deployments# rok-j2 \
      >    rok/rok-external-services/redis/overlays/on-prem/patches/pvc.yaml.j2 \
      >    -o rok/rok-external-services/redis/overlays/on-prem/patches/pvc.yaml

      Edit rok/rok-external-services/redis/overlays/deploy/kustomization.yaml to set the on-prem overlay as base:

      bases:
      - ../on-prem # <-- Edit this line to point to the on-prem overlay
    2. Commit your changes:

      root@rok-tools:~/ops/deployments# git commit -am \
      >    "Configure Redis for our platform"
    3. Apply the manifests:

      root@rok-tools:~/ops/deployments# rok-deploy --apply \
      >    rok/rok-external-services/redis/overlays/deploy
  6. Deploy S3Proxy (Azure only):

    root@rok-tools:~/ops/deployments# rok-deploy --apply \
    >    rok/rok-external-services/s3proxy/overlays/deploy
  7. Deploy the kubeflow namespace:

    root@rok-tools:~/ops/deployments# rok-deploy --apply \
    >    kubeflow/manifests/common/kubeflow-namespace/overlays/deploy
  8. Deploy the Kubeflow Gateway in the kubeflow namespace:

    root@rok-tools:~/ops/deployments# rok-deploy --apply \
    >    kubeflow/manifests/common/istio-1-14/kubeflow-istio-resources/overlays/deploy
  9. Deploy cert-manager resources, needed by the skel resources:

    root@rok-tools:~/ops/deployments# rok-deploy --apply \
    >    rok/cert-manager/cert-manager/overlays/deploy
  10. Deploy Kyverno resources, needed by the skel resources:

    root@rok-tools:~/ops/deployments# rok-deploy --apply rok/kyverno/overlays/deploy
  11. Deploy CRDs needed by the skel resources:

    root@rok-tools:~/ops/deployments# rok-deploy --apply \
    >    kubeflow/manifests/apps/admission-webhook/upstream/overlays/deploy
  12. Deploy the skel resources:

    root@rok-tools:~/ops/deployments# rok-deploy --apply \
    >    kubeflow/manifests/common/skel-resources/overlays/deploy
  13. Deploy the Reception server in the kubeflow namespace:

    root@rok-tools:~/ops/deployments# rok-deploy --apply \
    >    kubeflow/manifests/apps/reception/overlays/deploy

    Important

    When a user logs in to Arrikto EKF for the first time, the Reception server will create a new Profile for this user. The Profile Controller will then handle this new Profile and create a dedicated namespace for this user.

    To disable the automatic Profile creation, and consequently the automatic creation of dedicated user namespaces, follow the Disable Automatic Profile Creation guide.

  14. Deploy the Profile Controller in the kubeflow namespace:

    root@rok-tools:~/ops/deployments# rok-deploy --apply \
    >    kubeflow/manifests/apps/profiles/upstream/overlays/deploy
  15. Deploy roles necessary for RBAC configuration:

    root@rok-tools:~/ops/deployments# rok-deploy --apply \
    >    kubeflow/manifests/common/kubeflow-roles/overlays/deploy
  16. Specify the Kubelet root directory. Choose one of the following options, based on your platform.

    EKS, AKS, and GKE use the default Kubelet root directory. On these platforms, skip this step.

    If the Kubelet root directory of your cluster is not the default /var/lib/kubelet, configure Rok to use the directory of your installation:

    1. Set the Kubelet root directory used in your installation:

      root@rok-tools:~/ops/deployments# export KUBELET_ROOT_DIR=<DIR>

      Replace <DIR> with your Kubelet root directory. For example:

      root@rok-tools:~/ops/deployments# export KUBELET_ROOT_DIR=/var/lib/kubelet

      Note

      For Bright Kubernetes clusters, use /cm/local/apps/kubernetes/var/kubelet.

    2. Configure the on-prem overlay to use the Kubelet root directory that you specified in the previous step:

      root@rok-tools:~/ops/deployments# rok-j2 \
      >    rok/rok-cluster/overlays/on-prem/patches/kubelet-root-dir.yaml.j2 \
      >    -o rok/rok-cluster/overlays/on-prem/patches/kubelet-root-dir.yaml
    3. Commit your changes:

      root@rok-tools:~/ops/deployments# git commit -am "Set Kubelet root directory"
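If you are unsure of the Kubelet root directory on a node, one way to find it (an illustration, not part of the guide) is to inspect the kubelet process arguments for a --root-dir flag; when the flag is absent, the default /var/lib/kubelet applies. A sketch, using a sample command line:

```shell
# Sample kubelet command line; on a real node you could obtain this with
# something like: tr '\0' ' ' < /proc/$(pgrep -o kubelet)/cmdline
cmdline='/usr/bin/kubelet --config=/etc/kubernetes/kubelet.conf --root-dir=/cm/local/apps/kubernetes/var/kubelet'

# Extract the --root-dir value; fall back to the default when the flag is absent.
dir=$(echo "$cmdline" | grep -o -- '--root-dir=[^ ]*' | cut -d= -f2)
echo "${dir:-/var/lib/kubelet}"
```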
  17. Optional

    If you wish your Rok cluster to trust one or more custom CAs, for example, to be able to securely connect to your S3 service, you need to:

    1. Obtain the certificate authority (CA) bundle of your choice and copy it to your clipboard. For example, a CA bundle might look like this:

      -----BEGIN CERTIFICATE-----
      MIIDyjCCArKgAwIBAgIQKX7Wxtqubey4K/qRvAFCETANBgkqhkiG9w0BAQsFADBM
      MRUwEwYDVQQKEwxjZXJ0LW1hbmFnZXIxMzAxBgNVBAMTKmE0OTI0ODE5MzU5MjM0
      ...
      -----END CERTIFICATE-----
    2. Edit rok/rok-cluster/components/cacerts/cacerts and paste the contents of your certificate or certificate bundle. For example, the final result should look like this:

      -----BEGIN CERTIFICATE-----
      MIIDyjCCArKgAwIBAgIQKX7Wxtqubey4K/qRvAFCETANBgkqhkiG9w0BAQsFADBM
      MRUwEwYDVQQKEwxjZXJ0LW1hbmFnZXIxMzAxBgNVBAMTKmE0OTI0ODE5MzU5MjM0
      ...
      -----END CERTIFICATE-----
    3. Enable the cacerts Kustomize component in the corresponding kustomization file if it is not already enabled. Choose one of the following options, based on your platform.

      For EKS, AKS, and GKE deployments, edit rok/rok-cluster/overlays/deploy/kustomization.yaml so that it contains the following lines:

      components:
      - ../../components/cacerts

      The cacerts Kustomize component is enabled by default for on-premises deployments. In that case, skip this step.

    4. Commit your changes:

      root@rok-tools:~/ops/deployments# git commit -am "Specify trusted CA bundle"
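A quick sanity check before committing (an illustration, not part of the guide) is to count the certificates in your bundle, since each one starts with a BEGIN marker. Here we build a tiny sample bundle in /tmp with placeholder bodies; in practice, point at rok/rok-cluster/components/cacerts/cacerts:

```shell
# Build a small sample bundle (placeholder base64 bodies, for illustration only).
cat > /tmp/sample-cacerts <<'EOF'
-----BEGIN CERTIFICATE-----
MIIDyjCCArKgAwIBAgIQKX7Wxtqubey4K/qRvAFCET
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIEAzCCAuugAwIBAgIUXo6pXW2XDA0Cb3a1O3kQaG
-----END CERTIFICATE-----
EOF

# Count the certificates in the bundle: one BEGIN marker per certificate.
grep -c -- '-----BEGIN CERTIFICATE-----' /tmp/sample-cacerts
```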
  18. Specify any extra trusted CIDRs for Rok CSI Access Servers. Choose one of the following options, based on your platform.

    On EKS, AKS, and GKE, skip this step.

    If your Kubernetes nodes have more than one network interface and intra-cluster traffic may come from any of them, specify the CIDRs that are not known to Kubernetes, that is, the addresses that do not appear as the InternalIP of the nodes.

    1. Specify the trusted CIDRs:

      root@rok-tools:~/ops/deployments# export ROK_ACCESS_SERVER_TRUSTED_CIDRS=<CIDR>

      Replace <CIDR> with your trusted CIDR. For example:

      root@rok-tools:~/ops/deployments# export ROK_ACCESS_SERVER_TRUSTED_CIDRS=10.0.0.1/24

      Note

      To specify more than one CIDR, use a comma-separated list.

    2. Configure the on-prem overlay to use trusted CIDRs that you specified in the previous step:

      root@rok-tools:~/ops/deployments# rok-j2 \
      >    rok/rok-cluster/overlays/on-prem/patches/access-server.yaml.j2 \
      >    -o rok/rok-cluster/overlays/on-prem/patches/access-server.yaml
    3. Commit your changes:

      root@rok-tools:~/ops/deployments# git commit -am "Configure CSI trusted CIDRs"
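As a quick format check (an illustration, reusing the variable name from the step above), you can verify that every comma-separated entry looks like an IPv4 CIDR before rendering the patch:

```shell
# Example value; replace with your own list of trusted CIDRs.
ROK_ACCESS_SERVER_TRUSTED_CIDRS=10.0.0.0/24,192.168.0.0/16

# Each entry must look like a.b.c.d/prefix; collect any entry that does not.
bad=$(echo "$ROK_ACCESS_SERVER_TRUSTED_CIDRS" | tr ',' '\n' | \
      grep -Ev '^([0-9]{1,3}\.){3}[0-9]{1,3}/[0-9]{1,2}$' || true)
if [ -z "$bad" ]; then echo "all entries look like CIDRs"; else echo "check: $bad"; fi
```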
  19. Configure the rok-cluster kustomization to use overlays and components based on your platform.

    1. Edit rok/rok-cluster/overlays/deploy/kustomization.yaml to use eks as base in the deploy overlay:

      bases:
      - ../eks
    2. Commit your changes:

      root@rok-tools:~/ops/deployments# git commit -am "Use eks overlay in rok-cluster"
    1. Edit rok/rok-cluster/overlays/deploy/kustomization.yaml to use aks as base in the deploy overlay:

      bases:
      - ../aks
    2. Commit your changes:

      root@rok-tools:~/ops/deployments# git commit -am "Use aks overlay in rok-cluster"
    1. Edit rok/rok-cluster/overlays/deploy/kustomization.yaml to use gke as base in the deploy overlay:

      bases:
      - ../gke
    2. Commit your changes:

      root@rok-tools:~/ops/deployments# git commit -am "Use gke overlay in rok-cluster"
    1. Specify the platform to use:

      root@rok-tools:~/ops/deployments# export PLATFORM=on-prem
    2. Configure the deploy overlay to use the on-prem overlay:

      root@rok-tools:~/ops/deployments# rok-j2 \
      >    rok/rok-cluster/overlays/deploy/kustomization.yaml.j2 \
      >    -o rok/rok-cluster/overlays/deploy/kustomization.yaml
    3. Commit your changes:

      root@rok-tools:~/ops/deployments# git commit -am "Use on-prem overlay in rok-cluster"
  20. Deploy the RokCluster CR:

    root@rok-tools:~/ops/deployments# rok-deploy --apply rok/rok-cluster/overlays/deploy
  21. Deploy the Rok CSI Unpin Controller:

    root@rok-tools:~/ops/deployments# rok-deploy --apply rok/rok-csi-unpin-controller/overlays/deploy
  22. Deploy Dex in the auth namespace:

    root@rok-tools:~/ops/deployments# rok-deploy --apply \
    >    kubeflow/manifests/common/dex/overlays/deploy

    Note

    Dex runs as a StatefulSet that requires storage from Rok.

  23. Deploy AuthService in the istio-system namespace:

    root@rok-tools:~/ops/deployments# rok-deploy --apply \
    >    kubeflow/manifests/common/oidc-authservice/overlays/deploy
  24. Deploy the Rok Monitoring Stack in the monitoring namespace:

    root@rok-tools:~/ops/deployments# rok-deploy --apply rok/monitoring/overlays/deploy

    Important

    By default, no user has access to the Rok Monitoring Stack. To allow access for specific users, follow the Grant Rok Monitoring Stack Admin Privileges guide.

    See also

    • Learn more about the Rok Monitoring Stack in the EKF Monitoring user guide.
  25. Deploy Istio related resources for EKF:

    root@rok-tools:~/ops/deployments# rok-deploy --apply rok/expose-ekf/overlays/deploy

    Note

    This will not expose EKF to the outside world yet. You will do that in the Expose EKF guide.

  26. Save your state:

    root@rok-tools:~/ops/deployments# rok-j2 deploy/env.rok.j2 -o deploy/env.rok
  27. Commit your changes:

    root@rok-tools:~/ops/deployments# git commit -am "Deploy Rok Components"
  28. Mark your progress:

    root@rok-tools:~/ops/deployments# export DATE=$(date -u "+%Y-%m-%dT%H.%M.%SZ")
    root@rok-tools:~/ops/deployments# git tag \
    >    -a deploy/${DATE?}/develop/rok \
    >    -m "Deploy Rok Components"

Verify

  1. Go to your GitOps repository, inside your rok-tools management environment:

    root@rok-tools:~# cd ~/ops/deployments
  2. Restore the required context from previous sections:

    root@rok-tools:~/ops/deployments# source <(cat deploy/env.cloudidentity)
    root@rok-tools:~/ops/deployments# export ROK_CLUSTER_NAMESPACE
  3. Verify that the Rok Operator, Rok Disk Manager, and Rok kmod Pods are up and running. Check the Pod status and verify field STATUS is Running and field READY is N/N for all Pods:

    root@rok-tools:~/ops/deployments# kubectl -n rok-system get pods
    NAME                     READY   STATUS    RESTARTS   AGE
    rok-disk-manager-tmwqz   1/1     Running   0          31s
    rok-kmod-8g48m           1/1     Running   0          37s
    rok-operator-0           2/2     Running   0          59s
  4. Verify that the Dex Pod is up and running. Check the Pod status and verify field STATUS is Running and field READY is 2/2:

    root@rok-tools:~/ops/deployments# kubectl -n auth get pods
    NAME    READY   STATUS    RESTARTS   AGE
    dex-0   2/2     Running   0          65s
  5. Verify that the AuthService Pod is up and running. Check the Pod status and verify field STATUS is Running and field READY is 1/1:

    root@rok-tools:~/ops/deployments# kubectl get pods -n istio-system -l app=authservice
    NAME            READY   STATUS    RESTARTS   AGE
    authservice-0   1/1     Running   0          9m27s
  6. Verify that the cert-manager Pods are up and running. Check the Pod status and verify field STATUS is Running and field READY is 1/1 for all Pods:

    root@rok-tools:~/ops/deployments# kubectl -n cert-manager get pods
    NAME                                       READY   STATUS    RESTARTS   AGE
    cert-manager-6d86476c77-bl9rs              1/1     Running   0          9m
    cert-manager-cainjector-5b9cd446fd-n5jpd   1/1     Running   0          9m
    cert-manager-webhook-64d967c45-cdfwh       1/1     Running   0          9m
  7. Verify that the Kyverno Pod is up and running. Check the Pod status and verify field STATUS is Running and field READY is 1/1:

    root@rok-tools:~/ops/deployments# kubectl -n kyverno get pods
    NAME                       READY   STATUS    RESTARTS   AGE
    kyverno-544fc576bb-gbc9l   1/1     Running   0          9m
  8. Verify that the skel resources, Reception server, and Profile Controller Pods are up and running. Check the Pod status and verify field STATUS is Running and field READY is N/N for all Pods:

    root@rok-tools:~/ops/deployments# kubectl -n kubeflow get pods
    NAME                                            READY   STATUS    RESTARTS   AGE
    admission-webhook-deployment-5d4cf6bbdb-gfrkv   2/2     Running   0          9m
    kubeflow-reception-54497df69c-psvvp             2/2     Running   0          9m
    profiles-deployment-6777bccfdc-l4l6z            3/3     Running   0          9m
  9. Verify that the rok-init job has completed successfully. Check the job status and verify field COMPLETIONS is 2/2:

    root@rok-tools:~/ops/deployments# kubectl -n ${ROK_CLUSTER_NAMESPACE?} get job
    NAME       COMPLETIONS   DURATION   AGE
    rok-init   2/2           59s        24m
  10. Verify that the etcd, Redis, Rok CSI, Rok CSI Unpin Controller, and Rok Pods are up and running. Check the Pod status and verify field STATUS is Running and field READY is N/N for all Pods:

    root@rok-tools:~/ops/deployments# kubectl -n ${ROK_CLUSTER_NAMESPACE?} get pods
    NAME                                              READY   STATUS    RESTARTS   AGE
    rok-9brt8                                         2/2     Running   0          5m23s
    rok-csi-controller-0                              5/5     Running   0          5m21s
    rok-csi-guard-ip--172-31-18-161.eu-central-1...   1/1     Running   0          5m21s
    rok-csi-node-49ncb                                3/3     Running   0          5m22s
    rok-csi-unpin-controller-64b46c74b5-hmvnk         1/1     Running   0          5m25s
    rok-etcd-0                                        2/2     Running   0          7m11s
    rok-redis-0                                       3/3     Running   0          6m51s
  11. Verify that the Rok cluster is up and running. Verify that field HEALTH is OK and field PHASE is Running:

    root@rok-tools:~/ops/deployments# kubectl get rokcluster -n ${ROK_CLUSTER_NAMESPACE?} rok
    NAME   VERSION                      HEALTH   TOTAL MEMBERS   READY MEMBERS   PHASE     AGE
    rok    release-1.5-l0-release-1.5   OK       2               2               Running   2m4s
  12. Verify that the Rok Monitoring Stack is up and running:

    root@rok-tools:~/ops/deployments# kubectl get pods -n monitoring
    NAME                                   READY   STATUS    RESTARTS   AGE
    grafana-6d7d7b78f7-6flm7               2/2     Running   0          2m17s
    kube-state-metrics-765c7c7f95-chkzn    4/4     Running   0          2m16s
    node-exporter-zng26                    2/2     Running   0          2m16s
    prometheus-k8s-0                       3/3     Running   1          2m15s
    prometheus-operator-5f75d76f9f-fmpp5   3/3     Running   0          8m24s
  13. Ensure that Prometheus has successfully discovered the needed targets so that it can pull metrics periodically:

    root@rok-tools:~/ops/deployments# kubectl exec -ti -n monitoring sts/prometheus-k8s \
    >    -c prometheus -- wget -qO - localhost:9090/metrics | grep 'discovered.*rok-metrics'
    prometheus_sd_discovered_targets{config="serviceMonitor/rok/rok-metrics/0",name="scrape"} 7
    root@rok-tools:~/ops/deployments# kubectl exec -ti -n monitoring sts/prometheus-k8s \
    >    -c prometheus -- wget -qO - localhost:9090/metrics | grep 'discovered.*rok-etcd-metrics'
    prometheus_sd_discovered_targets{config="serviceMonitor/rok/rok-etcd-metrics/0",name="scrape"} 7
    root@rok-tools:~/ops/deployments# kubectl exec -ti -n monitoring sts/prometheus-k8s \
    >    -c prometheus -- wget -qO - localhost:9090/metrics | grep 'discovered.*rok-redis-metrics'
    prometheus_sd_discovered_targets{config="serviceMonitor/rok/rok-redis-metrics/0",name="scrape"} 7
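The numeric value at the end of each prometheus_sd_discovered_targets line is the number of discovered scrape targets. As a small illustration of reading it out (not part of the guide, using a sample line copied from the output above):

```shell
# Sample metrics line, as returned by the wget calls above.
line='prometheus_sd_discovered_targets{config="serviceMonitor/rok/rok-metrics/0",name="scrape"} 7'

# The target count is the last whitespace-separated field.
count=$(echo "$line" | awk '{print $NF}')
echo "$count"
```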

Summary

You have successfully deployed Rok on Kubernetes. You can consume Rok’s storage through the rok StorageClass, take instant snapshots of your applications, and restore applications to an earlier state. You can create a time machine for your applications and travel back in time!

What’s Next

The next step is to test Rok and verify it works properly.