Deploy Rok¶
To declaratively configure and deploy Rok we use Kustomize, a tool that is
also natively built into kubectl. In a nutshell, the final manifests are
generated by combining the maintainer’s kustomization directories (bases)
with the end user’s variants (overlays). Since the version built into
kubectl is old, we opt to use a newer one to build the final Kubernetes
manifests and apply them afterwards with kubectl apply.
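For illustration only, building an overlay with a standalone kustomize binary and piping the result to kubectl looks roughly like this; the overlay path is one of the deploy overlays used later in this guide:
$ kustomize build rok/rok-cluster/overlays/deploy | kubectl apply -f -
Throughout this guide, the rok-deploy --apply commands are used instead of invoking kustomize and kubectl by hand.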
Option 1: Deploy with the Rok deployment CLI (preferred)¶
The standard way to set up Rok on Kubernetes is to use rok-deploy, an
interactive CLI utility that helps you declaratively configure and deploy a
Rok cluster on Kubernetes using GitOps, with minimal effort.
Assuming you have prepared your management environment, e.g., you are inside
a rok-tools Pod (or Docker container), you can simply run:
root@rok-tools-0:/# rok-deploy
You will be prompted with a graphical interface that will ask you a series of questions to tailor your installation based on your platform, environment and preferences.

Important
The Rok deployment CLI assumes access to both AWS and Kubernetes, e.g., via
~/.kube/config and ~/.aws/{config,credentials}, or via the environment.
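Before starting, you can quickly verify that both are reachable with standard kubectl and AWS CLI calls, for example:
$ kubectl cluster-info
$ aws sts get-caller-identity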
During this process rok-deploy will:
- Clone the https://github.com/arrikto/deployments GitOps repository
  locally, either with SSH or username/token authentication, and check out
  the master branch by default. The Rok installation will track this branch
  and will use it on upgrades to fetch the changed manifests.
- Validate that needed utilities (e.g., git, kubectl, aws) are present on
  your system.
- Validate access to both your cloud provider (e.g., AWS) and the Kubernetes
  cluster that is about to host your Rok installation.
- Calculate and conditionally create cloud resources (e.g., AWS IAM Roles
  and Policies) that are needed so that Rok has full access to S3 buckets.
- Automatically generate YAML patches for Rok based on the user’s input
  (e.g., storage options, auth credentials, S3 configuration, etc.).
- Commit all changes locally.
- Ask for confirmation and deploy Rok and its external services on
  Kubernetes, including etcd, Redis, PostgreSQL, Istio, Dex and AuthService
  (authentication proxy).
Note
The Rok deployment CLI is at an early development stage and will be gradually extended with more features.
You can always view the auto-generated commits that rok-deploy creates in
the GitOps repository, under ~/ops/deployments by default. For example:
commit f99e865ede6c677b230d43bf82c25baaca53948e
Author: Rok Deploy v0.15-pre-1303-gab1b01db9 <no-reply@arrikto.com>
Date:   Mon May 25 18:09:06 2020 +0300

    Update Rok manifests

commit bb5e79dc67e1b35a938acfe0d6854a8097e5ec23
Author: Rok Deploy v0.15-pre-1303-gab1b01db9 <no-reply@arrikto.com>
Date:   Mon May 25 18:08:58 2020 +0300

    Update CloudFormation stack to authorize Rok on S3 buckets
Important
Make sure you mirror the GitOps repo to a private remote to be able to recover it in any case.
Once rok-deploy completes successfully, your Rok cluster will be
up-and-running shortly.
Important
Some parts of the guide are not yet automated by rok-deploy, so you need to
run them manually once all Rok Pods are ready:
- Make Rok the default StorageClass (optional), so that your applications will get Rok-backed storage by default, along with its data management capabilities.
- Setup Rok’s Monitoring Stack (optional), so that a full-fledged monitoring stack runs alongside Rok.
- Setup Rok Accounts (optional), if you want to test Rok.
Option 2: Deploy manually¶
Important
This guide uses GitOps, which is the practice of using git to track changes
to your infrastructure configuration. The automated deployment creates
commits for each section without needing any further action. For the manual
deployment, you should commit your changes to the repo at the end of each
section, or whenever you see fit.
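For example, once you have cloned the GitOps repository under ~/ops/deployments (see the next section), a manual commit at the end of a section could look like this; the commit message is illustrative:
$ cd ~/ops/deployments
$ git add -A
$ git commit -m "Configure Rok storage and S3 settings"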
Clone the GitOps repository¶
At this point you need to clone a deployment repository provided by Arrikto.
In the following we assume you have cloned it under ~/ops. Here are example
commands:
$ mkdir -p ~/ops
$ cd ~/ops
$ git clone <ARRIKTO_PROVIDED_REPOSITORY> deployments
$ cd deployments
Important
Make sure you mirror the GitOps repo to a private remote to be able to recover it in any case.
Configure AWS¶
Rok requires access to Amazon’s S3 Storage Service to use it as its external data store for immutable snapshots of your volumes.
To grant Rok access to S3, use an IAM Role for Service Account.
Note
Using a K8s Service Account to assume an IAM role is a security best practice. GCP and Azure also support this feature via Workload Identity and Pod Identity, respectively.
Create IAM resources automatically¶
Given the EKS cluster name and a proper AWS environment, use the Arrikto-provided script that automates the official IAM Roles for Service Accounts guide. Specifically, it will:
- Discover the OIDC provider of the EKS cluster.
- Create an IAM role and allow the K8s service account that Rok runs with to assume it without any extra credentials.
- Create an IAM policy that allows full S3 access for a specific bucket prefix.
- Attach the IAM policy to the IAM role so that Rok will have access to S3.
$ rok-s3-authorize
Important
In case you do not have permissions to create IAM resources,
rok-s3-authorize will fail. In this case, run it in dry-run mode:
$ rok-s3-authorize --dry-run
The CLI tool will prompt you for an output file, to which it will write a CloudFormation stack with the required IAM role and policy. You can then send this file to your IAM administrators or, if you are the administrator, deploy the stack by following the instructions in the Create IAM resources manually section.
Important
Obtain the generated Role ARN and bucket prefix and proceed to the Configure Rok section.
Create IAM resources manually¶
In this section we will grant a Rok cluster full access to S3 buckets with a specific prefix, i.e., related to an existing EKS cluster.
Note
You do not need to go through this section if you have already gone through the Create IAM resources automatically section. The manual steps in the current section are included here for the sake of completeness.
To properly name the AWS resources to be created, make sure you set the following environment variables:
$ export ROK_CLUSTER_NAME=rok
$ export ROK_CLUSTER_NAMESPACE=rok
$ export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text)
$ export OIDC_PROVIDER=$(aws eks describe-cluster --name $CLUSTERNAME \
>     --query "cluster.identity.oidc.issuer" \
>     --output text | sed -e "s/^https:\/\///")
$ export STACK_NAME=$([ $ROK_CLUSTER_NAME != "rok" ] || [ $ROK_CLUSTER_NAMESPACE != "rok" ] \
>     && echo rok-$AWS_DEFAULT_REGION-$CLUSTERNAME-$ROK_CLUSTER_NAMESPACE-$ROK_CLUSTER_NAME \
>     || echo rok-$AWS_DEFAULT_REGION-$CLUSTERNAME)
Note
The name of the CloudFormation stack depends on the values of the
ROK_CLUSTER_NAME and ROK_CLUSTER_NAMESPACE environment variables. In the
majority of cases these would be rok and rok, respectively, so we prefer to
omit the -rok-rok literal from the name of the stack and the IAM resources
to be created.
Format the existing template of the CloudFormation stack with the necessary IAM roles and policies:
$ j2 rok/eks/s3-iam-resources.yaml.j2 -o s3-iam-resources.yaml
The CloudFormation template to be formatted lives at rok/eks/s3-iam-resources.yaml.j2 in the GitOps repository.
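For orientation, here is a minimal, hypothetical sketch of what such a stack typically contains, based on the behavior described in the Create IAM resources automatically section. It is not the actual Arrikto template; the resource names and the rok service account name are assumptions, and the <...> placeholders must be replaced with your own values:
# Illustrative sketch only; not the actual Arrikto template.
AWSTemplateFormatVersion: "2010-09-09"
Description: IAM role and policy that authorize Rok on S3 (sketch)
Resources:
  # IAM role that the Rok service account assumes via the EKS OIDC provider.
  RokS3Role:
    Type: AWS::IAM::Role
    Properties:
      RoleName: <role_name>
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: Allow
          Action: sts:AssumeRoleWithWebIdentity
          Principal:
            Federated: arn:aws:iam::<account_id>:oidc-provider/<OIDC_PROVIDER>
          Condition:
            StringEquals:
              # Only the Rok service account in the Rok namespace may assume the role.
              "<OIDC_PROVIDER>:sub": system:serviceaccount:<ROK_CLUSTER_NAMESPACE>:rok
  # Managed policy granting full S3 access on the Rok bucket prefix only.
  RokS3Policy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      Roles:
      - !Ref RokS3Role
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: Allow
          Action: s3:*
          Resource:
          - arn:aws:s3:::rok-<account_id>-<region>-<eks_cluster_name>*
          - arn:aws:s3:::rok-<account_id>-<region>-<eks_cluster_name>*/*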
Share it with your administrator or directly deploy the CloudFormation stack with:
$ aws cloudformation deploy --stack-name $STACK_NAME \
>     --template-file s3-iam-resources.yaml \
>     --capabilities CAPABILITY_NAMED_IAM
Where:
- $STACK_NAME is the name that is associated with the stack.
- s3-iam-resources.yaml is the file that contains the CloudFormation stack.
- CAPABILITY_NAMED_IAM specifies that the template is using IAM resources with custom names.
Configure Rok¶
Tweak kustomization files so they comply with your EKS environment.
Important
The provided manifests contain a deploy overlay to use as a guide for
building your own overlay, so we edit it in-place. Should conflicts arise in
these files, always keep the local changes. deploy overlays are user-owned.
S3 Region¶
You need to set the S3 endpoint and region according to your preferences,
e.g., based on $AWS_DEFAULT_REGION. Alternatively, you can obtain the region
directly from the K8s nodes:
$ kubectl get nodes -o json | \
> jq -r '.items[].metadata.labels["failure-domain.beta.kubernetes.io/region"]'
To do so, edit rok/rok-cluster/overlays/deploy/patches/storage.yaml and
specify the values that correspond to your S3 Object Store:
s3:
endpoint: https://s3.<region>.amazonaws.com
region: <region>
S3 bucket prefix¶
Instruct Rok to use a specific prefix for S3 buckets. You have to use the
prefix specified in the s3-iam-resources.yaml CF template that was formatted
based on environment variables in the Configure AWS section.
To do so, add the following configuration in the
rok/rok-cluster/overlays/deploy/patches/configvars.yaml Kustomize patch file:
spec:
configVars:
...
daemons.s3d.bucket_prefix: rok-<account_id>-<region>-<eks_cluster_name>
Here is an example of how the above snippet would actually look:
spec:
configVars:
...
daemons.s3d.bucket_prefix: rok-123451234512-us-west-2-arrikto-cluster
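If the environment variables from the Configure AWS section are still set in your shell, you can construct the prefix value directly (assuming $CLUSTERNAME holds your EKS cluster name, as in that section):
$ echo rok-$AWS_ACCOUNT_ID-$AWS_DEFAULT_REGION-$CLUSTERNAME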
Important
Rok will create a number of buckets with this specific prefix. Please note that Rok assumes it owns all buckets with names starting with this prefix, e.g., for Garbage Collection purposes, so this prefix must not be shared with any other application.
IAM Role for service account¶
Instruct Rok to use a specific IAM role. You have to use the ARN of the IAM
Role specified in the s3-iam-resources.yaml CF template that was formatted
based on environment variables in the Configure AWS section. To do so, add
the following configuration in the
rok/rok-cluster/overlays/deploy/patches/storage.yaml Kustomize patch file:
s3:
...
AWSRoleARN: arn:aws:iam::<account_id>:role/<role_name>
Here is an example of how the above snippet would actually look:
s3:
  ...
  AWSRoleARN: arn:aws:iam::123451234512:role/rok-us-west-2-arrikto-cluster
Important
Make sure that the ARN of the AWS Role, as well as the AWS Account ID it includes, are correct.
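To double-check, you can retrieve the role’s ARN with the AWS CLI; the role name below comes from the example above, so substitute your own:
$ aws iam get-role --role-name rok-us-west-2-arrikto-cluster \
>     --query Role.Arn --output text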
Private Docker Registry access¶
In order to pull container images for Rok and its components, you need to
copy the Arrikto-provided dockerconfig.json file, which contains a token
with pull access to the arrikto-deploy GCP Container Registry, to certain
locations under the kustomization tree of the GitOps repo. See the Configure
access to Arrikto’s Private Registry section for more details.
Assuming you have dockerconfig.json under /root/dockerconfig.json:
$ cp /root/dockerconfig.json rok/rok-cluster/overlays/deploy/secrets/dockerconfig.json
$ cp /root/dockerconfig.json rok/rok-operator/overlays/deploy/secrets/dockerconfig.json
$ cp /root/dockerconfig.json rok/rok-disk-manager/overlays/deploy/secrets/dockerconfig.json
$ cp /root/dockerconfig.json rok/rok-kmod/overlays/deploy/secrets/dockerconfig.json
Note
Kustomize will read these files, auto-generate Secrets, and pass them to the
individual Rok components, so that they can pull from the arrikto-deploy
container registry on your behalf.
Configure Authentication¶
Rok authenticates users using OIDC. We use Dex as our default OIDC Provider and AuthService as our OIDC Client (authenticating proxy). In this section we describe how to set up authentication for Rok, using Dex and AuthService.
More specifically, you will need to:
- Change the password of the default user.
- Change the credentials of the OIDC client.
Important
If you are planning to integrate Rok with an OIDC Provider other than Dex, e.g., GitLab, you will need to edit your installation after completing it with Dex.
By default, Dex is installed with a single static user. To change the default user’s password or create new users, you have to modify Dex’s ConfigMap. To change the password of the default user:
Pick a password for the default user, with handle user, and hash it using
bcrypt:
$ python3 -c 'from passlib.hash import bcrypt; import getpass; print(bcrypt.using(rounds=12, ident="2y").hash(getpass.getpass()))'
Edit rok/rok-external-services/dex/overlays/deploy/patches/static-user-passwd.yaml
and fill the relevant field with the hash of the password you chose:
...
staticPasswords:
- email: user
  hash: <enter the generated hash here>
Generate OIDC Client credentials for the AuthService. AuthService uses these credentials to authenticate to Dex. You must fill the credentials in both Dex and AuthService kustomizations:
$ export OIDC_CLIENT_ID="authservice"
$ export OIDC_CLIENT_SECRET="$(openssl rand -base64 32)"
$ j2 rok/rok-external-services/dex/base/secret_params.env.j2 \
>     -o rok/rok-external-services/dex/overlays/deploy/secrets/secret_params.env
$ j2 rok/rok-external-services/authservice/base/secret_params.env.j2 \
>     -o rok/rok-external-services/authservice/overlays/deploy/secrets/secret_params.env
Note
For a full set of configuration options, see the relevant documentation of the Dex and AuthService projects.
Deploy Rok¶
At this point, you have everything configured and ready to be installed. You can install all components at once by running:
$ rok-deploy --apply install/rok
Alternatively, you can go through the process step-by-step, as described in the following subsections.
Important
The above command currently should run only once, during the initial
deployment. To upgrade Rok you should follow the version-specific Upgrade
notes. Note that in case you integrate Rok with Kubeflow, the above command
will override some components that Kubeflow also installs, i.e., dex and
authservice, and might break the deployment.
Create Rok namespaces¶
Create the rok and rok-system namespaces needed to host Rok and its system
components:
$ rok-deploy --apply rok/rok-namespaces/overlays/deploy
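You can verify that both namespaces exist before moving on:
$ kubectl get namespace rok rok-system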
Deploy Rok Operator¶
Rok Operator watches and reconciles RokCluster resources across Kubernetes
namespaces and comes preconfigured with certain options:
$ rok-deploy --apply rok/rok-operator/overlays/deploy
Deploy Rok Disk Manager¶
In order to provision storage, Rok CSI requires a Volume Group (VG) to be available on each node. To this end, we will deploy Rok Disk Manager, the component that detects, manages, and prepares the local disks found on Kubernetes nodes so that Rok can use them later:
$ rok-deploy --apply rok/rok-disk-manager/overlays/deploy
Deploy Rok kmod¶
Important
Before deploying rok-kmod make sure that the version you are about to deploy
supports the kernel of your Kubernetes nodes.
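You can list the kernel version that each node runs with a standard kubectl query:
$ kubectl get nodes -o json | \
> jq -r '.items[].status.nodeInfo.kernelVersion'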
Rok kmod is a helper that runs as a DaemonSet and dynamically inserts the pre-built kernel modules that Rok needs into the running kernel of each Kubernetes node:
$ rok-deploy --apply rok/rok-kmod/overlays/deploy
Note
Instead of executing the above 4 commands one-by-one, you can execute the
equivalent rok-deploy one:
$ rok-deploy --apply rok/{rok-namespaces,rok-operator,rok-disk-manager,rok-kmod}/overlays/deploy
Deploy Rok external services¶
Rok uses external services to operate. Namely, it leverages
- etcd as a key-value store,
- Redis as a key-value store,
- PostgreSQL as its database,
- Istio as its Service Mesh,
- Dex as its OIDC Provider (optional if you have another provider), and
- AuthService as its authentication proxy.
Deploy Istio CRDs and resources in the istio-system namespace:
$ rok-deploy --apply rok/rok-external-services/istio/istio-1-5-7/istio-crds-1-5-7/overlays/deploy
$ rok-deploy --apply rok/rok-external-services/istio/istio-1-5-7/istio-namespace-1-5-7/overlays/deploy
$ rok-deploy --apply rok/rok-external-services/istio/istio-1-5-7/istio-install-1-5-7/overlays/deploy
Deploy Rok’s external services in the rok namespace:
$ rok-deploy --apply rok/rok-external-services/etcd/overlays/deploy
$ rok-deploy --apply rok/rok-external-services/postgresql/overlays/deploy
$ rok-deploy --apply rok/rok-external-services/redis/overlays/deploy
Deploy Dex and AuthService: Dex resources live in the auth namespace, while
AuthService ones live in the istio-system namespace:
$ rok-deploy --apply rok/rok-external-services/dex/overlays/deploy
$ rok-deploy --apply rok/rok-external-services/authservice/overlays/deploy
Note
Instead of executing the above commands one-by-one, you can execute the
equivalent rok-deploy one:
$ rok-deploy --apply rok/rok-external-services/{etcd,postgresql,redis,dex,authservice}/overlays/deploy
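After the above commands complete, you can watch the external services come up in their respective namespaces, for example:
$ kubectl get pods -n rok
$ kubectl get pods -n istio-system
$ kubectl get pods -n auth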
Deploy RokCluster CR¶
In Kubernetes, a Rok cluster is represented by a RokCluster custom resource,
along with StorageClass and VolumeSnapshotClass resources that are intended
to expose CSI to consumers and applications.
Deploy the Rok cluster:
$ rok-deploy --apply rok/rok-cluster/overlays/deploy
After a while, the Rok cluster should be up and running:
$ kubectl get rokcluster -n rok rok
NAME   VERSION                     HEALTH   TOTAL MEMBERS   READY MEMBERS   PHASE     AGE
rok    v0.15-pre-1004-g7943b112f   OK       3               3               Running   2m4s
You can also view events related to the newly deployed Rok cluster with:
$ kubectl describe rokcluster -n rok rok
Important
If, for any reason, the cluster initialization failed, read the Rok cleanup section to delete the existing Rok cluster together with its state. Then, re-deploy Rok starting from the Deploy Rok external services section.
Setup Monitoring Stack¶
In case you wish to increase system observability you can deploy Rok’s monitoring stack, a full-fledged collection of monitoring and visualization components that allow you to instantly inspect the state of your physical nodes, Kubernetes cluster and running services.
For more information about this stack along with detailed deployment steps, you can read our guide on Monitoring.
Setup Storage Class¶
Important
This will make applications that request a PVC without explicitly specifying a storage class run on Rok. Since this storage is on top of local SSDs, even with a snapshot policy in place, in case of a node failure the PVC will be automatically recovered from a previous snapshot, i.e., the application will go back in time.
Set Rok storage class as the default, instead of gp2:
$ kubectl annotate storageclass rok \
> storageclass.kubernetes.io/is-default-class=true
$ kubectl annotate volumesnapshotclass rok \
> snapshot.storage.kubernetes.io/is-default-class=true
$ kubectl annotate storageclass gp2 --overwrite \
> storageclass.kubernetes.io/is-default-class=false
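You can verify the change by listing the storage classes; the rok StorageClass should now be marked as (default) and gp2 should not:
$ kubectl get storageclass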
Setup Rok Accounts¶
This section will guide you through creating and configuring the namespaces that will be accessible to each Rok user.
Important
If you are planning to deploy Kubeflow alongside Rok, Kubeflow’s profiles controller will provision the namespaces and RBAC rules automatically. Therefore, in this case you can skip this section.
In order to allow users to access resources on Rok, you must give them the
rok-admin ClusterRole in each namespace they should be allowed to access.
Since the Rok UI will by default display the namespace that matches the
username upon login, you will need to ensure that the namespace and
RoleBinding exist for all users created in the Configure Authentication
section.
First, you need to create all required namespaces. To create the namespace
user, i.e., the namespace for the resources of user user, you can run the
following command:
$ kubectl create namespace user
Then, for each namespace, and for each user you want to provide access to
that namespace, you need to create a RoleBinding for that user in the
namespace. For example, to provide access to user user in namespace user,
you need to run the following command:
$ kubectl create rolebinding rok-admin-user --namespace user \
> --clusterrole rok-admin --user user
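If you have several users, a small loop over their handles can create the namespace and RoleBinding for each one. This is a convenience sketch; the handles alice and bob are placeholders, and it assumes one namespace per user, named after the user:
$ for u in alice bob; do
>     kubectl create namespace $u
>     kubectl create rolebinding rok-admin-$u --namespace $u \
>         --clusterrole rok-admin --user $u
> done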
What’s Next?¶
Congratulations, you have deployed Rok on Kubernetes!
You can consume Rok’s storage through the rok StorageClass, take instant
snapshots of your applications, and restore applications to an earlier
state, effectively traveling back in time!
You can continue to the Test Rok section to test your installation or the Expose Services section to expose Rok to the outside world. Also, if you want to update your Rok installation with GitOps you can read the Configure Rok section.