Upgrades on Kubernetes

This guide describes the necessary steps to upgrade an existing Rok cluster on Kubernetes from version v1 to version v2.

Important

Note that, depending on your source and target Rok versions, tweaks might be needed to complete the steps provided in this guide. For this reason, always refer to the version-specific upgrade guide that corresponds to the upgrade you wish to perform.

Upgrade your management environment

We assume that you have followed the Deploy Rok Components guide, and have successfully set up a full-fledged rok-tools management environment either in local Docker or in Kubernetes.

Before proceeding with the core upgrade steps, you first need to upgrade your management environment, so that you use CLI tools and utilities, such as rok-deploy, that are compatible with the Rok version you are upgrading to.

Important

When you upgrade your management environment, all previous data (GitOps repository, files, user settings, etc.) is preserved in either a Docker volume or a Kubernetes PVC, depending on your environment. This volume or PVC is mounted in the new rok-tools container so that old data is adopted.
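
Before upgrading, you can quickly confirm that this data exists. The commands below are a minimal sketch; the namespace placeholder and the host directory are assumptions based on the deployment steps in this guide:

# Kubernetes: the PVC that backs the rok-tools StatefulSet
$ kubectl get pvc -n <rok_tools_namespace>
# Docker: the host directory that is bind-mounted into rok-tools
$ ls rok-tools-data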

For Kubernetes, simply apply the latest rok-tools manifests:

$ kubectl apply -f <download_root>/rok-tools-eks.yaml

Note

In case you see the following error:

The StatefulSet "rok-tools" is invalid: spec: Forbidden: updates to
statefulset spec for fields other than 'replicas', 'template', and
'updateStrategy' are forbidden

make sure you first delete the existing rok-tools StatefulSet with:

$ kubectl delete sts rok-tools

and then re-apply.
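
Optionally, wait for the updated rok-tools Pod to roll out before continuing; kubectl can track the StatefulSet rollout directly (assuming the RollingUpdate update strategy):

$ kubectl rollout status sts/rok-tools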

For Docker, first delete the old container:

$ docker stop <OLD_ROK_TOOLS_CONTAINER_ID>
$ docker rm <OLD_ROK_TOOLS_CONTAINER_ID>

and then create a new one with the previous data and the new image:

$ docker run -ti \
>     -p 8080:8080 \
>     --entrypoint /bin/bash \
>     -v $(pwd)/rok-tools-data:/root \
>     gcr.io/arrikto/rok-tools:release-1.4-l0-release-1.4.4
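
Since the new container drops you into an interactive shell, you can immediately verify that the previous data was adopted; the path below assumes your data (e.g., the GitOps repo) lives under /root, as mounted above:

# Run inside the new rok-tools container
$ ls /root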

Upgrade manifests

We assume that you have followed the Deploy Rok Components guide, and have a local GitOps repo with Arrikto-provided manifests. Once Arrikto releases a new Rok version and pushes updated deployment manifests, you have to follow the standard GitOps workflow:

  1. Fetch latest upstream changes, pushed by Arrikto
  2. Rebase local changes on top of the latest upstream ones and resolve conflicts, if any
  3. Tweak manifests based on Arrikto-provided instructions, if necessary
  4. Commit everything
  5. Re-apply manifests

When you initially deploy Rok on Kubernetes, either automatically using rok-deploy or manually, you end up with a deploy overlay in each Rok component or external service, which is the overlay that gets applied to Kubernetes. In the GitOps deployment repository, Arrikto provides manifests that include the deploy overlay in each Kustomize app/package as scaffolding, so that users can get started quickly and record their preferences.

As a result, fetch/rebase might lead to conflicts since both Arrikto and the end-user might modify the same files that are tracked by Git. In this scenario, the most common and obvious solution is to keep the user's changes since they are the ones that reflect the existing deployment.

In case of breaking changes, e.g., parts of YAML documents that are absolutely necessary to perform the upgrade, or others that might be deprecated, Arrikto will inform users via version-specific upgrade notes of all actions that need to be taken.

Note

It is the user's responsibility to apply valid manifests and kustomizations after a rebase. In case of uncertainty do not hesitate to coordinate with Arrikto's Tech Team for support.
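
After the rebase (see the steps below), a quick way to catch malformed kustomizations before applying anything is to render each deploy overlay and discard the output; for example, for one of the overlays used later in this guide:

$ kubectl kustomize rok/rok-cluster/overlays/deploy > /dev/null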

We will use git to update local manifests:

  1. Fetch latest upstream changes:

    $ git fetch --all -p
    
  2. Rebase and favor local changes upon conflicts:

    $ git rebase -Xtheirs
    

    Important

    During rebase, the sides are swapped, i.e., ours is the so-far rebased series, and theirs is the working branch. For more information on the merge strategy read the official git-scm docs.

  3. The above may still cause conflicts, e.g., when a file was modified locally but removed upstream:

    CONFLICT (modify/delete): kubeflow/kfctl_config.yaml deleted in origin/develop and modified in HEAD~61. Version HEAD~61 of kubeflow/kfctl_config.yaml left in tree.
    

    We suggest deleting those files, i.e., the ones git status reports as DU (deleted by us), e.g.:

    $ git status --porcelain | awk '{if ($1=="DU") print $2}' | xargs git rm
    

    And proceed with the rebase:

    $ git rebase --continue
    
  4. (Optional) Edit deploy overlays based on version-specific upgrade notes.

  5. Commit changes, if any.

Important

Make sure you mirror the GitOps repo to a private remote, so that you can recover it at any time.
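
A minimal sketch, assuming a private remote named mirror (both the name and the URL are placeholders):

$ git remote add mirror <private_remote_url>
$ git push mirror --all
$ git push mirror --tags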

Drain rok-csi nodes

To ensure minimal disruption of Rok services, follow the instructions below to drain the Rok CSI nodes, and wait for any pending Rok CSI operations to complete, before performing the upgrade.

During the upgrade, any pending Rok tasks will be canceled, so it is advisable to run the following steps in a period of inactivity, e.g., when no pipelines or snapshot policies are running. Since pausing/queuing everything is currently not an option, you can monitor the Rok logs and wait until nothing has been logged for, say, 30 seconds:

$ kubectl -n rok logs -l app=rok-csi-controller -c csi-controller -f --tail=100
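
As a scriptable alternative to watching the stream, you can ask kubectl for recent log lines only; if the following prints 0, nothing has been logged during the last 30 seconds:

$ kubectl -n rok logs -l app=rok-csi-controller -c csi-controller \
>     --since=30s --tail=-1 | wc -l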

Note

Finding a period of inactivity is the ideal scenario but, depending on the deployment, it may not be feasible, e.g., when tens of recurring pipelines are running. In such a case, the end user will simply see some of them fail.

  1. Scale down the rok-operator StatefulSet:

    $ kubectl -n rok-system scale sts rok-operator --replicas=0
    
  2. Ensure rok-operator has scaled down to zero (a scripted alternative is sketched after this list):

    $ kubectl get sts rok-operator -n rok-system
    
  3. Scale down the rok-csi-controller StatefulSet:

    $ kubectl -n rok scale sts rok-csi-controller --replicas=0
    
  4. Ensure rok-csi-controller has scaled down to zero:

    $ kubectl get sts rok-csi-controller -n rok
    
  5. Watch the rok-csi-node logs and ensure that all pending operations have finished, i.e., nothing has been logged for the last 30 seconds:

    $ kubectl -n rok logs -l app=rok-csi-node -c csi-node -f --tail=100
    
  6. Continue with the Upgrade components section.
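
For steps 2 and 4 above, the replica count can also be read programmatically; a minimal sketch using jsonpath, where both commands should print 0 once draining is done:

$ kubectl -n rok-system get sts rok-operator \
>     -o jsonpath='{.status.replicas}{"\n"}'
$ kubectl -n rok get sts rok-csi-controller \
>     -o jsonpath='{.status.replicas}{"\n"}'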

Upgrade components

We assume that you are already running a v1 Rok cluster on Kubernetes and that you also have access to the v2 kustomization tree you are upgrading to.

Since a Rok cluster on Kubernetes consists of multiple components, we need to upgrade each one of them. Throughout the guide, we will keep track of these components, as listed in the table below:

Component         v1  v2
RokCluster CR     ✓
RokCluster CRD    ✓
Rok Operator      ✓
Rok Disk Manager  ✓
Rok kmod          ✓

During the upgrade, Rok Operator will remove all members from the cluster and add a dedicated one to perform the upgrade. The cluster will be scaled down to zero and a Kubernetes Job will run to upgrade the cluster config on etcd and run any needed migrations. Finally, the cluster will be scaled back up to its initial size.

1. Increase observability (optional)

To gain insight into the status of the cluster upgrade, execute the following commands in a separate window:

  • For live cluster status:

    $ watch kubectl get rokcluster -n rok
    
  • For live cluster events:

    $ watch 'kubectl describe rokcluster -n rok rok | tail -n 20'
    

2. Inspect current version (optional)

Get current images and version from the RokCluster CR:

$ kubectl describe rokcluster rok -n rok
...
Spec:
  Images:
    Rok:      gcr.io/arrikto-deploy/roke:release-1.4-l0-release-1.4.4
    Rok CSI:  gcr.io/arrikto-deploy/rok-csi:release-1.4-l0-release-1.4.4
Status:
  Version:        release-1.4-l0-release-1.4.4

3. Upgrade Rok Disk Manager

Apply the v2 Rok Disk Manager manifests:

$ kubectl apply -k rok/rok-disk-manager/overlays/deploy

Component         v1  v2
RokCluster CR     ✓
RokCluster CRD    ✓
Rok Operator      ✓
Rok Disk Manager      ✓
Rok kmod          ✓
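
Optionally, wait for the new Rok Disk Manager Pods to roll out. The resource kind, name, and namespace below are assumptions based on a typical deployment; adjust them to match your manifests:

$ kubectl -n rok-system rollout status ds/rok-disk-manager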

4. Upgrade Rok kmod

Apply the v2 Rok kmod manifests:

$ kubectl apply -k rok/rok-kmod/overlays/deploy

Component         v1  v2
RokCluster CR     ✓
RokCluster CRD    ✓
Rok Operator      ✓
Rok Disk Manager      ✓
Rok kmod              ✓
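
The same kind of check applies here, again assuming the DaemonSet name and namespace match your manifests:

$ kubectl -n rok-system rollout status ds/rok-kmod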

5. Upgrade Rok cluster

Apply the v2 Rok cluster manifests. Note that, since rok-operator is still scaled down to zero, the actual upgrade will not start until you upgrade Rok Operator in the next step:

$ kubectl apply -k rok/rok-cluster/overlays/deploy

Component         v1  v2
RokCluster CR         ✓
RokCluster CRD    ✓
Rok Operator      ✓
Rok Disk Manager      ✓
Rok kmod              ✓

6. Upgrade Rok Operator

Apply the v2 Operator manifests:

$ kubectl apply -k rok/rok-operator/overlays/deploy

Note

The above command also updates the RokCluster CRD.

After the manifests have been applied, ensure Rok Operator has become ready by running the following command:

$ watch kubectl get pods -n rok-system -l app=rok-operator

Component         v1  v2
RokCluster CR         ✓
RokCluster CRD        ✓
Rok Operator          ✓
Rok Disk Manager      ✓
Rok kmod              ✓
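
To double-check that the operator now runs the v2 image, you can inspect the StatefulSet directly; the container index below is an assumption, adjust it if your manifests differ:

$ kubectl -n rok-system get sts rok-operator \
>     -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'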

7. Verify successful upgrade

  1. Check the status of the cluster upgrade Job (a scripted alternative is sketched after this list):

    $ kubectl get job -n rok rok-upgrade-XYZ
    
  2. Ensure that Rok is up and running after the upgrade Job finishes:

    $ kubectl get rokcluster -n rok rok
    NAME   VERSION  HEALTH   TOTAL MEMBERS   READY MEMBERS   PHASE     AGE
    rok    v2       OK       1               1               Running   1h18m
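
If you script this verification, kubectl wait can block until the upgrade Job from step 1 completes; the timeout below is an arbitrary example:

$ kubectl -n rok wait --for=condition=complete --timeout=30m \
>     job/rok-upgrade-XYZ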