This section describes configuration options available to the Rok administrator for fine-tuning a Rok installation. Note that all examples follow the Edit-Commit-Apply workflow: edit the corresponding YAML manifests inside your local GitOps deployment repository, commit your changes, and re-apply the manifests.
Configure the garbage collection of Rok tasks
By default, successful Rok tasks are garbage collected after one week, and all other tasks are garbage collected after one month. You can modify this behavior by providing a custom value for the following configuration variables:
gw.task_gc.all.cronspec: A cron schedule expression for the garbage collection of all tasks, regardless of their outcome. The default is to run daily at 03:00. Set this to an empty string to disable garbage collection for all tasks.

gw.task_gc.all.max_age: The time interval after which a task is deleted. The default is 1 month. Set this to 0 to disable garbage collection for all tasks.

gw.task_gc.success.cronspec: A cron schedule expression for garbage collecting only successful tasks. The default is to run daily at 04:00. Set this to an empty string to disable garbage collection for successful tasks.

gw.task_gc.success.max_age: The time interval after which a successful task is deleted. You can set this to a shorter duration than gw.task_gc.all.max_age to garbage collect successful tasks earlier. The default is 1 week. Set this to 0 to disable garbage collection for successful tasks.
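For example, if you prefer to keep successful tasks around until the catch-all schedule deletes them, you could disable their dedicated garbage collection by setting both success variables to their documented "disabled" values. This is a sketch following the same configVars format used in this section:

```yaml
configVars:
  # Disable the dedicated GC of successful tasks; the catch-all
  # gw.task_gc.all.* settings continue to apply.
  gw.task_gc.success.cronspec: ""
  gw.task_gc.success.max_age: "0"
```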
For example, to decrease the duration for which successful tasks are retained to 3 days and run garbage collection for successful tasks every 8 hours, perform the following steps:
Go to the deployment repository:
$ cd ~/ops/deployments
Edit rok/rok-cluster/overlays/deploy/patches/configvars.yaml and add the gw.task_gc.success.max_age: "3 days" and gw.task_gc.success.cronspec: "0 */8 * * *" configuration variables, as follows:
...
configVars:
  ...
  gw.task_gc.success.max_age: "3 days"
  gw.task_gc.success.cronspec: "0 */8 * * *"
Commit the new options:
$ git add rok/rok-cluster/overlays/deploy
$ git commit -m "GC successful Rok tasks after 3 days"
Re-apply the Rok cluster overlay:
$ kubectl apply -k rok/rok-cluster/overlays/deploy
This section describes the actions the admin should take in order to scale an EKS cluster in and out gracefully, without losing any data.
If an EC2 instance (EKS worker node) gets terminated in an unexpected manner, data will be lost. As such, the following actions should be avoided:
- Decrementing the desired size of the ASG
- Terminating an EC2 instance directly from the console
- Deleting a whole nodegroup
Scaling down the nodegroup via the ASG can have catastrophic implications, since it does not allow Rok to properly drain the node (and migrate any volumes) before deleting the corresponding EC2 instance. This is described in more detail in the Amazon EC2 Auto Scaling instance lifecycle document, where we see that the ASG will remove the instance after about 15 minutes, even if the drain operation has not finished.
To prevent that from happening, we have to make sure that the ASG will not delete any instances by itself as part of a scale-in operation. We achieve that by enabling scale-in protection:
- at the ASG level, i.e., for newly created instances
- at the instance level, i.e., for existing instances
Since setting the scale-in protection cannot be done via EKS, we will operate directly on the underlying ASG. Specifically:
Specify the EKS cluster to operate on:
$ export CLUSTERNAME=arrikto-demo-cluster
Obtain the list of nodegroups:
$ aws eks list-nodegroups \
>     --cluster-name $CLUSTERNAME \
>     --query nodegroups --output text
Now repeat the following steps for each of the nodegroups obtained in the previous step:
$ export NODEGROUP=general-workers
Obtain the nodegroup’s underlying ASG:
$ ASG=$(aws eks describe-nodegroup \
>     --cluster-name $CLUSTERNAME \
>     --nodegroup-name $NODEGROUP \
>     --query 'nodegroup.resources.autoScalingGroups[0].name' \
>     --output text)
Check the current scale-in protection configuration at the ASG level:
$ aws autoscaling describe-auto-scaling-groups \
>     --auto-scaling-group-names $ASG \
>     --query 'AutoScalingGroups[0].NewInstancesProtectedFromScaleIn' \
>     --output text
and at the instance level:
$ aws autoscaling describe-auto-scaling-groups \
>     --auto-scaling-group-names $ASG | \
>     jq -r '.AutoScalingGroups[].Instances[] | .InstanceId, .ProtectedFromScaleIn' | \
>     paste - -
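To see what the instance-level check prints, here is a jq-and-paste pipeline of the same shape run against a trimmed, hypothetical excerpt of describe-auto-scaling-groups output (the instance IDs are made up):

```shell
# Hypothetical, trimmed describe-auto-scaling-groups output (made-up IDs).
fixture='{"AutoScalingGroups":[{"Instances":[
  {"InstanceId":"i-0aaa1111bbb22233c","ProtectedFromScaleIn":true},
  {"InstanceId":"i-0ddd4444eee55566f","ProtectedFromScaleIn":false}]}]}'

# Emit "<instance-id> <protected>" pairs, one instance per line.
echo "$fixture" | \
  jq -r '.AutoScalingGroups[].Instances[] | .InstanceId, .ProtectedFromScaleIn' | \
  paste - -
```

In this fixture the second instance still needs protection enabled.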
Enable scale-in protection at the ASG level:
$ aws autoscaling update-auto-scaling-group \
>     --auto-scaling-group-name $ASG \
>     --new-instances-protected-from-scale-in
Enable scale-in protection at the instance level:
$ aws autoscaling describe-auto-scaling-groups \
>     --auto-scaling-group-names $ASG | \
>     jq -r '.AutoScalingGroups[].Instances[].InstanceId' | \
>     xargs aws autoscaling set-instance-protection \
>     --auto-scaling-group-name $ASG \
>     --protected-from-scale-in \
>     --instance-ids
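For convenience, the per-nodegroup steps above can be combined into a single shell function that iterates over every nodegroup of a cluster and enables scale-in protection at both levels. This is a sketch under the same assumptions as the commands above (a configured AWS CLI and EKS-managed nodegroups); the function name protect_nodegroups is ours, not part of any CLI:

```shell
# Sketch: enable scale-in protection for every nodegroup of an EKS cluster.
# Mirrors the manual steps above; protect_nodegroups is a hypothetical helper.
protect_nodegroups() {
    cluster=$1
    for ng in $(aws eks list-nodegroups --cluster-name "$cluster" \
                    --query nodegroups --output text); do
        # Obtain the nodegroup's underlying ASG.
        asg=$(aws eks describe-nodegroup --cluster-name "$cluster" \
                  --nodegroup-name "$ng" \
                  --query 'nodegroup.resources.autoScalingGroups[0].name' \
                  --output text)
        # ASG level: protect newly created instances.
        aws autoscaling update-auto-scaling-group \
            --auto-scaling-group-name "$asg" \
            --new-instances-protected-from-scale-in
        # Instance level: protect all existing instances.
        ids=$(aws autoscaling describe-auto-scaling-groups \
                  --auto-scaling-group-names "$asg" \
                  --query 'AutoScalingGroups[0].Instances[].InstanceId' \
                  --output text)
        if [ -n "$ids" ]; then
            aws autoscaling set-instance-protection \
                --auto-scaling-group-name "$asg" \
                --protected-from-scale-in \
                --instance-ids $ids
        fi
        echo "protected nodegroup $ng (ASG $asg)"
    done
}
```

Call it as protect_nodegroups "$CLUSTERNAME", then verify the result with the describe commands above.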
Currently, automatic scale-in is not supported, because Rok intentionally places workloads that cannot be migrated on each node where a Rok volume exists, to guard against scale-in operations. In the future, the Cluster Autoscaler will be extended to take that into account and support automatic scale-in operations.
To scale in the cluster manually, the administrator should:
Select a K8s node that they want to remove.
Start a drain operation on the selected node:
$ kubectl drain --ignore-daemonsets --delete-local-data NODE
Rok will snapshot the volumes on that node, move them elsewhere, unguard the node, and allow the drain operation to complete.
When the drain has finished, the Cluster Autoscaler will see that the node is now empty and consider it unneeded.
After a period of time (scale-down-unneeded-time), the Cluster Autoscaler will terminate the EC2 instance and reduce the desired size of the ASG.
Currently, automatic scale-out in case of insufficient Rok storage is not supported.
If a Pod gets scheduled on a node with insufficient Rok storage, the PVC will be stuck in the Pending state. Reporting storage capacity and rescheduling Pods if storage fails to be provisioned is supported in K8s 1.19 and is in alpha state (see https://kubernetes.io/docs/concepts/storage/storage-capacity/#rescheduling).
Still, if a Pod becomes unschedulable due to insufficient resources (CPU, RAM), the Cluster Autoscaler will trigger a scale-out, i.e., it will increase the desired size of the ASG, and eventually a new K8s node will be added.
To scale up the cluster manually, update the nodegroup's scaling configuration directly via EKS:
$ aws eks update-nodegroup-config \
>     --cluster-name $CLUSTERNAME \
>     --nodegroup-name general-workers \
>     --scaling-config minSize=2,maxSize=5,desiredSize=4
This will add a new node to the K8s cluster, and the Rok operator will scale the RokCluster members accordingly.
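It may take a few minutes for the new node to join the cluster and become Ready. Here is a small sketch that waits for a given number of Ready nodes; wait_for_nodes is a hypothetical helper name, not part of Rok or kubectl:

```shell
# Sketch: block until at least $1 nodes report Ready.
# wait_for_nodes is a made-up helper; assumes kubectl is configured.
wait_for_nodes() {
    want=$1
    while true; do
        # -w matches "Ready" as a whole word, so "NotReady" is not counted.
        ready=$(kubectl get nodes --no-headers | grep -cw Ready)
        [ "$ready" -ge "$want" ] && break
        echo "waiting: $ready/$want nodes Ready"
        sleep 10
    done
    echo "$ready nodes Ready"
}
```

For example, run wait_for_nodes 4 after the update-nodegroup-config command above.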