Authorize Access to Object Storage

This section will guide you through giving Rok access to the cloud provider’s Object Storage service.

Important

The provided manifests contain a deploy overlay to use as a starting point for your own overlay, so you can edit it in place. Should conflicts arise in these files, always keep your local changes, since deploy overlays are user-owned.
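
If a later update to the provided manifests conflicts with your edits in a deploy overlay, keep your local version when resolving the conflict. Here is a minimal sketch with git, assuming the update arrives as a merge from an upstream branch; the branch and file names below are illustrative:

    root@rok-tools:~/ops/deployments# git merge origin/master
    root@rok-tools:~/ops/deployments# # On a conflict in a deploy overlay, keep the local, user-owned version.
    root@rok-tools:~/ops/deployments# git checkout --ours -- rok/rok-cluster/overlays/deploy/patches/configvars.yaml
    root@rok-tools:~/ops/deployments# git add rok/rok-cluster/overlays/deploy/patches/configvars.yaml
    root@rok-tools:~/ops/deployments# git commit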

Choose one of the following options to give Rok access to the Object Storage service:

Option 1: Authorize Access to Object Storage Automatically (preferred)

In this section you will configure Rok to have access to object storage resources in an automated manner, using the rok-deploy CLI.

Procedure

Choose one of the following options, based on your cloud provider.

AWS

To configure AWS so that Rok has access to S3, run rok-deploy and follow the on-screen instructions.

You may now proceed to the Summary section.

Azure

Rok does not currently support automatic authorization to assume an Azure Managed Identity. Please follow the instructions in the Option 2: Authorize Access to Object Storage Manually section to authorize Rok to use the Azure Managed Identity manually.

Option 2: Authorize Access to Object Storage Manually

In this section you will manually configure Rok to have access to object storage resources.

Procedure

Choose one of the following options, based on your cloud provider.

AWS

  1. Go inside your clone of the GitOps repo:

    root@rok-tools:~# cd ~/ops/deployments
    
  2. Set a default S3 region, according to your preferences, e.g., based on $AWS_DEFAULT_REGION.

    1. (Optional) Obtain your region directly from the Kubernetes nodes:

      root@rok-tools:~/ops/deployments# kubectl get nodes -o json | \
      >     jq -r '.items[].metadata.labels["failure-domain.beta.kubernetes.io/region"]'
      
    2. Edit rok/rok-cluster/overlays/deploy/patches/storage.yaml and specify the values that correspond to your S3 Object Store, by replacing <REGION> with your region:

      s3:
        endpoint: https://s3.<REGION>.amazonaws.com
        region: <REGION>
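
      Here is an example of how the above snippet would look, assuming the us-west-2 region used in the examples later in this section:

      s3:
        endpoint: https://s3.us-west-2.amazonaws.com
        region: us-west-2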
      
  3. Configure the rok/rok-cluster/overlays/deploy/patches/configvars.yaml Kustomize patch file, by replacing <ACCOUNT_ID>, <REGION>, and <EKS_CLUSTER_NAME> with your account id, your region, and the name of your cluster, respectively:

    spec:
      configVars:
        ...
        daemons.s3d.bucket_prefix: rok-<ACCOUNT_ID>-<REGION>-<EKS_CLUSTER_NAME>
    

    Here is an example of how the above snippet would look:

    spec:
      configVars:
        ...
        daemons.s3d.bucket_prefix: rok-123451234512-us-west-2-arrikto-cluster
    

    The configuration above instructs Rok to use a specific prefix for S3 buckets. This prefix is also specified in the s3-iam-resources.yaml file that you configured in the previous section.

    Important

    Rok will create a number of buckets with this specific prefix. Please note that Rok assumes it owns all buckets with names starting with this prefix, e.g., for Garbage Collection purposes, so this prefix must not be shared with any other application.
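
    To check whether any existing buckets already use the prefix you chose, you can list your buckets filtered by that prefix with the AWS CLI. This is an optional verification step, not part of the original procedure; the prefix below reuses the earlier example:

    root@rok-tools:~/ops/deployments# aws s3api list-buckets \
    >     --query "Buckets[?starts_with(Name, 'rok-123451234512-us-west-2-arrikto-cluster')].Name" \
    >     --output text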

  4. Instruct Rok to use a specific IAM role, by configuring the rok/rok-cluster/overlays/deploy/patches/storage.yaml Kustomize patch file. To do so, replace <ACCOUNT_ID> and <ROLE_NAME> with your AWS account ID and the name of the IAM role for service accounts that you previously created, respectively.

    Note

    You can find the ARN of the IAM role in the s3-iam-resources.yaml CloudFormation template that you formatted based on environment variables in Create Cloud Identity.

    s3:
      ...
      AWSRoleARN: arn:aws:iam::<ACCOUNT_ID>:role/<ROLE_NAME>
    

    Here is an example of how the above snippet would look:

    s3:
      ...
      AWSRoleARN: arn:aws:iam::123451234512:role/rok-us-west-2-arrikto-cluster
    

    Important

    Make sure that the ARN of the AWS role, including your AWS account ID, is correct.
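
    As an optional sanity check (not part of the original procedure), you can print the role's ARN with the AWS CLI and compare it with the value you set in storage.yaml. The role name below reuses the earlier example:

    root@rok-tools:~/ops/deployments# aws iam get-role \
    >     --role-name rok-us-west-2-arrikto-cluster \
    >     --query Role.Arn --output text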

Azure

  1. Go inside your clone of the GitOps repo:

    root@rok-tools:~# cd ~/ops/deployments
    
  2. Retrieve the Resource ID of the Managed Identity:

    root@rok-tools:~/ops/deployments# IDENTITY_RESOURCE_ID="$(az identity show -g ${RESOURCE_GROUP?} -n ${IDENTITY_NAME?} --query id -otsv)"
    
  3. Create the namespaces used by Rok:

    root@rok-tools:~/ops/deployments# rok-deploy --apply rok/rok-namespaces/overlays/deploy
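
    Optionally, list the namespaces and verify that the Rok namespaces now exist. This verification step is not part of the original procedure:

    root@rok-tools:~/ops/deployments# kubectl get namespaces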
    
  4. Create a Pod Identity for S3Proxy in the rok namespace:

    root@rok-tools:~/ops/deployments# POD_IDENTITY_NAME=rok-s3proxy
    root@rok-tools:~/ops/deployments# az aks pod-identity add \
    >  --resource-group ${RESOURCE_GROUP?} \
    >  --cluster-name ${AKS_CLUSTER_NAME?} \
    >  --namespace ${NAMESPACE?} \
    >  --name ${POD_IDENTITY_NAME?} \
    >  --identity-resource-id ${IDENTITY_RESOURCE_ID?}
    
    Troubleshooting
    The command failed with a ‘Bad Request’ error

    Azure may take some time to enable Pod Identities in a cluster. When trying to create a Pod Identity before the changes are fully propagated, the following error may occur:

    Operation failed with status: 'Bad Request'. Details: Cluster identity has no assignment permission over identity 'resourceId: /subscriptions/f7a20dff-0a55-42bd-bec6-a18c6c370d0e/resourcegroups/arr/providers/Microsoft.ManagedIdentity/userAssignedIdentities/s3proxy - clientId: 6c5f59e0-e4f3-4ac8-939c-66cf87d8056d - objectId: 95520a25-3c7d-4b94-b516-cdc4cb2d881a'. Please grant at least 'Managed Identity Operator' permission before assigning pod identity
    

    If you see this error, wait for a few minutes for Azure to propagate permissions, and try creating the Pod Identity again.
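
    After the command succeeds, you can optionally confirm that the Pod Identity was created, using the same az aks pod-identity command group (a verification step added here for convenience):

    root@rok-tools:~/ops/deployments# az aks pod-identity list \
    >  --resource-group ${RESOURCE_GROUP?} \
    >  --cluster-name ${AKS_CLUSTER_NAME?} \
    >  --output table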

  5. Configure S3Proxy to access the Azure Storage Account:

    root@rok-tools:~/ops/deployments# export STORAGE_ACCOUNT_NAME=rokstorageaccount
    root@rok-tools:~/ops/deployments# j2 rok/rok-external-services/s3proxy/overlays/deploy/config.env.j2 -o rok/rok-external-services/s3proxy/overlays/deploy/config.env
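
    Optionally, inspect the rendered file and confirm that it references your Storage Account (a convenience check, not part of the original procedure):

    root@rok-tools:~/ops/deployments# cat rok/rok-external-services/s3proxy/overlays/deploy/config.env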
    
  6. Edit rok/rok-external-services/s3proxy/overlays/deploy/patches/deployment.yaml to set the aadpodidbinding label to the name of the Pod Identity you created, so S3Proxy can use it to access Azure Blob Storage.

    spec:
      template:
        metadata:
          labels:
            aadpodidbinding: "rok-s3proxy"
    
  7. Generate random credentials for Rok to access S3Proxy:

    root@rok-tools:~/ops/deployments# export S3PROXY_IDENTITY="$(openssl rand -base64 16)"
    root@rok-tools:~/ops/deployments# export S3PROXY_CREDENTIAL="$(openssl rand -base64 32)"
    
  8. Provide the generated credentials to S3Proxy:

    root@rok-tools:~/ops/deployments# j2 rok/rok-external-services/s3proxy/overlays/deploy/secrets/credentials.env.j2 -o rok/rok-external-services/s3proxy/overlays/deploy/secrets/credentials.env
    
  9. Edit rok/rok-cluster/overlays/deploy/kustomization.yaml to set the parent of the deploy kustomization overlay to aks:

    bases:
    - ../aks
    
  10. Edit rok/rok-cluster/overlays/deploy/patches/configvars.yaml to set the daemons.s3d.access_key_id and daemons.s3d.secret_access_key Rok Cluster configuration variables to the credentials you generated above.

    spec:
      configVars:
        daemons.s3d.access_key_id: "<S3PROXY_IDENTITY>"
        daemons.s3d.secret_access_key: "<S3PROXY_CREDENTIAL>"
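
    To copy the generated values into the patch file, you can print the environment variables you exported in step 7 (added here for convenience):

    root@rok-tools:~/ops/deployments# echo ${S3PROXY_IDENTITY?}
    root@rok-tools:~/ops/deployments# echo ${S3PROXY_CREDENTIAL?}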
    
  11. Track all changes in the git repository:

    root@rok-tools:~/ops/deployments# git add rok/rok-cluster rok/rok-external-services
    
  12. Commit the changes:

    root@rok-tools:~/ops/deployments# git commit -m "Configure Azure Blob Storage access for Rok"
    

Summary

You have successfully configured Rok to use the object storage service of your cloud provider.

What’s Next

The next step is to grant Rok access to Arrikto’s private container registry, so that it can pull images from it.