Add Disks for Rok to Use

Rok can run on any instance type, as long as there are disks available for it to use.

  • For instance types that have Local NVMe Disks, e.g., Lsv2-series, Rok will automatically find and use all of them.
  • For instance types without Local NVMe Disks, e.g., DSv2-series, you will need one or more extra data disks, all of the exact same size. Rok will use all extra data disks attached at LUNs 60-63.
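
To see which disks Rok would find on a given node, you can inspect its block devices. The sketch below is an illustration, not part of Rok: the find_rok_disks helper is hypothetical, and the /dev/disk/azure/scsi1/lun<N> symlink layout is an assumption about the standard Azure Linux udev rules. Run it from a shell on the node itself:

```shell
# Hypothetical helper: check a node for disks Rok can use.
# DEVDIR defaults to /dev, so the same logic can also be exercised
# against a test directory.
find_rok_disks() {
    devdir=${1:-/dev}
    found=no
    # Local NVMe disks (Lsv2-series) show up as /dev/nvme*n*.
    for d in "$devdir"/nvme*n*; do
        [ -e "$d" ] && { echo "local NVMe disk: $d"; found=yes; }
    done
    # Extra data disks appear as /dev/disk/azure/scsi1/lun<N> symlinks
    # (an assumption about the standard Azure Linux image); Rok only
    # considers LUNs 60-63.
    for d in "$devdir"/disk/azure/scsi1/lun6[0-3]; do
        [ -e "$d" ] && { echo "Rok data disk: $d"; found=yes; }
    done
    [ "$found" = yes ]
}

# On a worker node you would simply run:
#   find_rok_disks && echo "Rok has storage to use"
```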

Note

If the instances of your user node pool already have the necessary storage attached, you may proceed to the Verify section.

What You’ll Need

Procedure

If you chose a node size without local NVMe disks, you will have to attach an extra data disk for Rok to use as local storage. To do that, modify the underlying Virtual machine scale set (VMSS) of your node pool.

  1. Sign in to the Azure portal.

  2. Search for Kubernetes services and select the AKS cluster you previously created, arrikto-cluster.

  3. On the sidebar, under Settings, click Properties.

  4. Search for Infrastructure resource group, and click MC_arrikto_arrikto-cluster_eastus.

  5. Filter for vmss to find the underlying Virtual machine scale sets.

    ../../../_images/vmss.png
  6. Click the VMSS for your user node pool, aks-workers-XXXXXXXX-vmss.

  7. On the sidebar, under Settings, click Disks.

  8. Click Create and attach a new disk.

  9. For Data disk:

    • Set LUN to 63 (the last one available).
    • Set Storage type to Premium SSD (the default).
    • Set Size to 1000 GiB.
    ../../../_images/vmss-data-disk.png
  10. Click Save and wait for this change to complete.

  11. On the sidebar, under Settings, click Instances.

  12. Select all instances of the VMSS, and click Upgrade. Azure will hotplug the data disk without restarting/recreating the instances.
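
If you prefer working from your management environment, the portal steps above can also be scripted with the Azure CLI. This is a sketch, not the documented flow: the resource group and VMSS names below are placeholders you must replace with the values for your cluster, and it assumes you have already run az login:

```shell
# Sketch: attach the Rok data disk with the Azure CLI instead of the portal.
# Both variables are placeholders -- substitute the values for your cluster.
AZ_NODE_RESOURCE_GROUP=${AZ_NODE_RESOURCE_GROUP:-MC_arrikto_arrikto-cluster_eastus}
VMSS_NAME=${VMSS_NAME:-aks-workers-XXXXXXXX-vmss}

if command -v az >/dev/null 2>&1 && az account show >/dev/null 2>&1; then
    # Add a 1000 GiB Premium SSD at LUN 63 to the VMSS model...
    az vmss disk attach \
        --resource-group "${AZ_NODE_RESOURCE_GROUP}" \
        --vmss-name "${VMSS_NAME}" \
        --lun 63 \
        --size-gb 1000 \
        --sku Premium_LRS
    # ...and roll the change out to all existing instances.
    az vmss update-instances \
        --resource-group "${AZ_NODE_RESOURCE_GROUP}" \
        --name "${VMSS_NAME}" \
        --instance-ids '*'
else
    echo "Azure CLI not logged in; run the commands above from rok-tools."
fi
```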

Verify

To verify that all instances of your user node pool have the necessary storage attached, this section guides you through iterating over them and ensuring that at least one of the following conditions holds for each instance:

  • They use a Storage Optimized VM size (Lsv2-series)
  • They have at least one extra data disk attached at LUNs 60-63.

Switch to your management environment and:

  1. Find the node resource group of your AKS cluster:

    root@rok-tools:~# export AZ_NODE_RESOURCE_GROUP=$(az aks show -o tsv \
    >     --resource-group ${AZ_RESOURCE_GROUP?} \
    >     --name ${CLUSTERNAME?} \
    >     --query nodeResourceGroup)
    
  2. Find the VMSS in the node resource group that corresponds to the workers node pool:

    root@rok-tools:~# az resource list -o tsv \
    >     --resource-type Microsoft.Compute/virtualMachineScaleSets \
    >     --resource-group ${AZ_NODE_RESOURCE_GROUP?} \
    >     --query "[?tags.poolName=='workers'].name"
    
  3. For each VMSS found:

    1. Specify the VMSS to operate on:

      root@rok-tools:~# export VMSS_NAME=aks-workers-XXXXX-vmss
      
    2. Inspect all instances of the VMSS to find their VM size and their Data Disk LUN(s) (if any). Verify that Column2 is a Storage Optimized VM size or that Column3 reports LUN(s) 60-63:

      root@rok-tools:~# az vmss list-instances -o table \
      >    --resource-group ${AZ_NODE_RESOURCE_GROUP?} \
      >    --name ${VMSS_NAME?} \
      >    --query '[].[name,sku.name,storageProfile.dataDisks[].lun]'
      Column1                      Column2         Column3
      ---------------------------  --------------- -------
      aks-workers-42403446-vmss_0  Standard_L8s_v2 []
      aks-workers-42403446-vmss_1  Standard_L8s_v2 []
      aks-workers-42403446-vmss_2  Standard_DS2_v2 [63]
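
If you prefer a scripted check to eyeballing the table, the sketch below applies the same two conditions to each row. It is an illustration, not an official Rok tool; the check_rok_storage helper is hypothetical, and in practice you would pipe the output of the az vmss list-instances command above (with -o tsv instead of -o table) into it, instead of the sample rows shown here:

```shell
# Sketch: flag instances that have no storage for Rok to use.
# Input rows: name<TAB>vm-size<TAB>lun [lun...], i.e. the -o tsv form of
# the az vmss list-instances query used above.
check_rok_storage() {
    while IFS="$(printf '\t')" read -r name size luns; do
        case "$size" in
            # Storage Optimized sizes ship with local NVMe disks.
            Standard_L*_v2) echo "$name: OK ($size has local NVMe)"; continue ;;
        esac
        ok=no
        for lun in $luns; do
            [ "$lun" -ge 60 ] && [ "$lun" -le 63 ] && ok=yes
        done
        [ "$ok" = yes ] \
            && echo "$name: OK (data disk at LUN $luns)" \
            || echo "$name: MISSING storage for Rok"
    done
}

# Sample rows matching the table above:
{
    printf 'aks-workers-42403446-vmss_0\tStandard_L8s_v2\n'
    printf 'aks-workers-42403446-vmss_1\tStandard_DS2_v2\t63\n'
} | check_rok_storage
```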
      

Summary

You have successfully added local storage on your user node pool for Rok to use.

What’s Next

The next step is to enable Pod Identities in your AKS cluster.