Local thickly provisioned volumes¶
Up to v0.11, Rok used the Linux kernel’s thin-provisioning device mapper target to provision and snapshot local volumes. The latest version of Rok (v0.12) adds support for thickly provisioned local volumes using linear mappings over local devices. This offers increased performance (more than double the IOPS) as well as improved stability (the thin device mapper target behaves erratically and panics when there is no space left).
Below we describe the steps needed to upgrade to thickly provisioned volumes.
Currently we do not support offline migration from a previously used thin pool to a new thick VG, so users should back up their data before migrating. The following steps describe the process to migrate a workload running on Kubernetes, backed by Rok on thinly provisioned volumes:
- Back up all PVs that use the Rok Storage Class, by registering them on the local Rok installation.
- Remove all PVCs/PVs backed by Rok.
- Replace the --volume-pool command-line argument of the rok-csi-node DaemonSet with --thick and --volume-group=VG. The latter should point to the Volume Group to use for local storage.
- Present the backed-up PVs.
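As a sketch, the container arguments of the rok-csi-node DaemonSet after this change might look like the following. The --thick and --volume-group arguments come from the steps above; the container layout and the Volume Group name "rokvg" are illustrative placeholders — adapt them to your actual manifest:

```yaml
# Hypothetical excerpt of the rok-csi-node DaemonSet spec after the migration.
# The VG name "rokvg" is a placeholder for your local Volume Group.
containers:
  - name: rok-csi-node
    args:
      # - --volume-pool=...        # removed: thin-provisioned pool
      - --thick                    # use thickly provisioned (linear) volumes
      - --volume-group=rokvg       # Volume Group backing local storage
```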
The upgrade of Rok appliances is performed by replacing the BOOT disk with the new version of the appliance and then running some scripts that perform all the upgrade steps. The upgrade procedure consists of upgrading the master node first and then the remaining nodes, in a rolling upgrade approach.
Upgrade the master node¶
Find the master node of the cluster:
$ rok-cluster-ctl member-list
Ensure that it will remain the master node of the cluster by setting it as the only master-capable node. This step is required to avoid automatic failover to nodes that are using older versions.
Inspect which nodes are master-capable:
$ rok-config get | grep master_capable
Then remove the master capability from all other nodes, by running the following command for each node:
$ rok-config set cluster.members.$host.master_capable=False
You can run the following one-liner to perform the above commands:
$ for i in $(rok-config get | grep master_capable | grep -v $(hostname) | grep -v "^host"); do rok-config set $(echo $i | sed 's/True/False/'); done
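To illustrate what the one-liner does, the sed step simply flips each matching key from True to False. The sample lines below are hypothetical `rok-config get` output, used here only for demonstration:

```shell
#!/bin/sh
# Hypothetical sample of `rok-config get | grep master_capable` output
lines='cluster.members.node1.master_capable=True
cluster.members.node2.master_capable=True'

# Flip True to False on each line, exactly as the one-liner's sed step does
result=$(printf '%s\n' "$lines" | sed 's/True/False/')
printf '%s\n' "$result"
```

Each resulting `key=False` line is then passed to `rok-config set`.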
Power the machine off, replace the BOOT disk with the new version of the appliance, and power the machine on. Log in to the node and run the following command, which completes all tasks required to upgrade the shared state of the Rok cluster:
$ rok-cluster-upgrade-v0.12.0 upgrade-cluster
After upgrading the cluster state, you need to run the following command to upgrade the node:
$ rok-cluster-upgrade-v0.12.0 upgrade-node
At this point, the master node is successfully upgraded to v0.12.0. You can continue by upgrading the remaining nodes of the cluster.
Upgrade the slave nodes¶
Upgrade the code of every node by replacing each node’s BOOT disk. Then run the upgrade-node command on each node:
$ rok-cluster-upgrade-v0.12.0 upgrade-node
Running upgrade-cluster is not required here, since that command must run only on the master node.
After all nodes have been upgraded, the upgrade to v0.12.0 is complete.
Rok v0.12 introduces hierarchical maps which can be created only with the
v4 map format. After upgrading all nodes to v0.12, you can upgrade all
maps with the following steps:
Set the map version for newly created maps, by setting the
composer.map_version cluster configuration variable:
$ rok-config set composer.map_version=4
$ rok-config apply
Upgrade existing maps with:
$ rok-composer-tool map-upgrade --all
The above command will create a backup map named after the previous map version.
Ensure that everything is working as expected with:
$ rok-composer-tool verify
Delete map backups with:
$ rok-cluster-gc --delete-backups
Hierarchical maps are still an experimental and incomplete feature,
and are enabled only when setting the composer.map_version
configuration variable to 4.
Rok S3 daemon¶
When using the Rok S3 daemon in a cloud whose S3 service does not implement the versioning-related S3 API calls, such as IBM S3 or MinIO, include the following line in the [s3d.0] section of /etc/rok/daemons.conf:
assume-no-versioning = true
Setting this option makes the S3 daemon assume that versioning is disabled, and completely prevents it from performing any versioning-related calls to the S3 service. Note that this option also implies the --no-versioning argument, which can therefore optionally be removed.
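Put together, the relevant part of /etc/rok/daemons.conf might look like the following sketch. Only the assume-no-versioning line comes from the text above; any other options your installation sets in this section should be kept as they are:

```ini
# Hypothetical excerpt of /etc/rok/daemons.conf.
# Only assume-no-versioning is taken from the documentation above;
# keep your existing [s3d.0] options alongside it.
[s3d.0]
assume-no-versioning = true
```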