Upgrade Volume Snapshot CRDs

This guide will walk you through upgrading the volume snapshot CRDs from version v1alpha1 to v1beta1.

Fast Forward

If you have already upgraded the volume snapshot CRDs, meaning you are upgrading from EKF 1.5-rc1 or later, proceed to the Verify section.

Warning

The volume snapshot v1beta1 CRDs are not backwards compatible with the current v1alpha1 volume snapshot CRDs. The procedure of upgrading these CRDs will delete all the existing Kubernetes VolumeSnapshot, VolumeSnapshotContent and VolumeSnapshotClass objects from the Kubernetes API Server.

As such, you will lose any snapshots that you have manually created. However, you will not lose any snapshots that Rok created and registered; these will remain available through the Rok UI.

This guide expects all the existing VolumeSnapshot objects on the cluster to be of the rok volume snapshot class.
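To double-check that assumption before you proceed, you can print the class of every existing snapshot. This is a convenience sketch, assuming the v1alpha1 API's spec.snapshotClassName field:

```shell
# Print "<namespace>/<name> <class>" for every VolumeSnapshot in the cluster;
# any class other than "rok" is outside the scope of this guide.
kubectl get volumesnapshots -A -o json \
    | jq -r '.items[]
        | "\(.metadata.namespace)/\(.metadata.name) \(.spec.snapshotClassName)"'
```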

See also

Read more about upgrading the volume snapshot CRDs on the upstream Kubernetes CSI repository.

What You’ll Need

Check Your Environment

The v1alpha1 Volume Snapshot CRDs are created by the external-snapshotter sidecar once it starts running. This means that deleted CRDs are not recreated as long as the sidecar does not restart. If the external-snapshotter does restart, the CRDs will be recreated and you will have to repeat this procedure.
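You can gauge that risk by checking how many times the controller's containers have restarted so far. A sketch, assuming the rok namespace and the rok-csi-controller-0 Pod name used elsewhere in this guide:

```shell
# Sum the restart counts of all containers in the rok-csi-controller Pod;
# a growing number means the CRDs may get recreated behind your back.
kubectl get pod -n rok rok-csi-controller-0 -o json \
    | jq '[.status.containerStatuses[].restartCount] | add'
```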

Deleting the CRDs will delete all CR objects. Note that the Rok CSI must be up and running, so that the underlying snapshots get deleted along with the VolumeSnapshot objects. If it is not running, the deletion of the VolumeSnapshot objects will not succeed and you will not be able to remove the corresponding CRDs.

  1. Ensure that the Rok CSI is up-and-running. Verify that field STATUS is Running and field READY is n/n for all Pods:

    root@rok-tools:~# kubectl get pods -A -l 'app in (rok-csi-node,rok-csi-controller)'
    NAMESPACE   NAME                   READY   STATUS    RESTARTS   AGE
    rok         rok-csi-controller-0   4/4     Running   0          18h
    rok         rok-csi-node-8r8g6     2/2     Running   0          2d17h
    rok         rok-csi-node-b7kwc     2/2     Running   1          18h

Procedure

  1. List the VolumeSnapshot objects across all namespaces of the cluster. These volume snapshots will be deleted in the next step:

    root@rok-tools:~# kubectl get volumesnapshots -A
    NAMESPACE       NAME              AGE
    kubeflow-user   myvolume-snap-1   1d
  2. Delete the existing VolumeSnapshots CRD:

    root@rok-tools:~# kubectl delete crd volumesnapshots.snapshot.storage.k8s.io
    customresourcedefinition.apiextensions.k8s.io "volumesnapshots.snapshot.storage.k8s.io" deleted

    Important

    This command will delete all existing VolumeSnapshot objects, as well as the VolumeSnapshots CRD, and it is expected to take some time to complete. Please wait until the command returns.

    You can watch the remaining volume snapshots in a separate terminal by running:

    root@rok-tools:~# watch kubectl get volumesnapshots -A

    Note that if a PVC is provisioned from a volume snapshot, the volume snapshot will not be deleted until the PVC is ready.
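To spot such blocking PVCs ahead of time, you can list every PVC that was provisioned from a volume snapshot. A convenience sketch, not part of the official procedure:

```shell
# Print every PVC whose dataSource is a VolumeSnapshot, along with the
# snapshot it references; these PVCs can delay snapshot deletion.
kubectl get pvc -A -o json \
    | jq -r '.items[]
        | select(.spec.dataSource.kind == "VolumeSnapshot")
        | "\(.metadata.namespace)/\(.metadata.name) <- \(.spec.dataSource.name)"'
```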

  3. Ensure that the VolumeSnapshots CRD has been deleted:

    root@rok-tools:~# kubectl get crd volumesnapshots.snapshot.storage.k8s.io
    Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "volumesnapshots.snapshot.storage.k8s.io" not found
  4. Delete the existing VolumeSnapshotContents CRD:

    root@rok-tools:~# kubectl delete crd volumesnapshotcontents.snapshot.storage.k8s.io
    customresourcedefinition.apiextensions.k8s.io "volumesnapshotcontents.snapshot.storage.k8s.io" deleted
  5. Ensure that the VolumeSnapshotContents CRD has been deleted:

    root@rok-tools:~# kubectl get crd volumesnapshotcontents.snapshot.storage.k8s.io
    Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "volumesnapshotcontents.snapshot.storage.k8s.io" not found
  6. Delete the existing VolumeSnapshotClasses CRD:

    root@rok-tools:~# kubectl delete crd volumesnapshotclasses.snapshot.storage.k8s.io
    customresourcedefinition.apiextensions.k8s.io "volumesnapshotclasses.snapshot.storage.k8s.io" deleted
  7. Ensure that the VolumeSnapshotClasses CRD has been deleted:

    root@rok-tools:~# kubectl get crd volumesnapshotclasses.snapshot.storage.k8s.io
    Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "volumesnapshotclasses.snapshot.storage.k8s.io" not found
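Before redeploying, you can confirm in one pass that none of the three CRDs remain. A convenience sketch:

```shell
# Report each snapshot CRD as deleted or still present.
for crd in volumesnapshots volumesnapshotcontents volumesnapshotclasses; do
    if kubectl get crd "${crd}.snapshot.storage.k8s.io" >/dev/null 2>&1; then
        echo "${crd} CRD still present"
    else
        echo "${crd} CRD deleted"
    fi
done
```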
  8. Deploy the Snapshot Controller and the new CRDs. Choose one of the following options based on your cloud provider:

    root@rok-tools:~# rok-deploy --apply rok/snapshot-controller/overlays/deploy

    Troubleshooting

    Failed to apply customresourcedefinition/volumesnapshotclasses.snapshot.storage.k8s.io

    If while trying to deploy the Snapshot Controller Kustomize package you get an error similar to the following:

    Failed to apply customresourcedefinition/volumesnapshotclasses.snapshot.storage.k8s.io. The CustomResourceDefinition "volumesnapshotclasses.snapshot.storage.k8s.io" is invalid: status.storedVersions[0]: Invalid value: "v1alpha1": must appear in spec.versions

    it means the v1alpha1 CRDs have been recreated, most probably because the rok-csi-controller Pod restarted. Please begin the Procedure again.

    1. Delete the Kyverno policy which disables the original GKE-provided volume snapshot CRDs:

      root@rok-tools:~/ops/deployments# rok-deploy --delete rok/csi-disable-v1beta1/overlays/deploy
    2. Uninstall Kyverno:

      root@rok-tools:~/ops/deployments# rok-deploy --delete rok/kyverno/overlays/deploy

    GKE runs the Snapshot Controller and will apply the v1beta1 volume snapshot CRDs.
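Since GKE applies the CRDs asynchronously, the Verify steps below may fail for a short while. A minimal polling sketch to wait for the first CRD to reappear:

```shell
# Poll until the GKE-managed Snapshot Controller has recreated the
# VolumeSnapshot CRD before running the verification steps.
until kubectl get crd volumesnapshots.snapshot.storage.k8s.io >/dev/null 2>&1; do
    sleep 5
done
echo "VolumeSnapshot CRD is available"
```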

Verify

  1. Verify that the Snapshot Controller is up-and-running. Ensure that field READY is 1/1:

    root@rok-tools:~# kubectl get sts -n kube-system snapshot-controller
    NAME                  READY   AGE
    snapshot-controller   1/1     1m
  2. Verify that you have successfully deployed the v1beta1 VolumeSnapshot CRD:

    root@rok-tools:~# kubectl get crd \
    >     -o json volumesnapshots.snapshot.storage.k8s.io \
    >     | jq '.spec.versions[].name'
    "v1"
    "v1beta1"
  3. Verify that you have successfully deployed the v1beta1 VolumeSnapshotContent CRD:

    root@rok-tools:~# kubectl get crd \
    >     -o json volumesnapshotcontents.snapshot.storage.k8s.io \
    >     | jq '.spec.versions[].name'
    "v1"
    "v1beta1"
  4. Verify that you have successfully deployed the v1beta1 VolumeSnapshotClass CRD:

    root@rok-tools:~# kubectl get crd \
    >     -o json volumesnapshotclasses.snapshot.storage.k8s.io \
    >     | jq '.spec.versions[].name'
    "v1"
    "v1beta1"
  1. Verify that GKE has deployed the v1beta1 VolumeSnapshot CRD:

    root@rok-tools:~# kubectl get crd \
    >     -o json volumesnapshots.snapshot.storage.k8s.io \
    >     | jq '.spec.versions[].name'
    "v1beta1"
  2. Verify that GKE has deployed the v1beta1 VolumeSnapshotContent CRD:

    root@rok-tools:~# kubectl get crd \
    >     -o json volumesnapshotcontents.snapshot.storage.k8s.io \
    >     | jq '.spec.versions[].name'
    "v1beta1"
  3. Verify that GKE has deployed the v1beta1 VolumeSnapshotClass CRD:

    root@rok-tools:~# kubectl get crd \
    >     -o json volumesnapshotclasses.snapshot.storage.k8s.io \
    >     | jq '.spec.versions[].name'
    "v1beta1"
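The per-CRD checks above can also be collapsed into a single loop; a convenience sketch using the same jq-based verification:

```shell
# Check that every snapshot CRD serves v1beta1 and print one line per CRD.
for crd in volumesnapshots volumesnapshotcontents volumesnapshotclasses; do
    if kubectl get crd -o json "${crd}.snapshot.storage.k8s.io" \
            | jq -e '.spec.versions | any(.name == "v1beta1")' >/dev/null; then
        echo "${crd}: serves v1beta1"
    else
        echo "${crd}: v1beta1 MISSING"
    fi
done
```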

What’s Next

The next step is to upgrade Rok.