Configure ELB Subnets

This section will guide you through selecting the Classic AWS Load Balancer scheme and configuring the subnets it will use.

Procedure

  1. Go to your GitOps repository, inside your rok-tools management environment:

    root@rok-tools:~# cd ~/ops/deployments
  2. Restore the required context from previous sections:

    root@rok-tools:~/ops/deployments# source <(cat deploy/env.{aws-subnets,eks-cluster})
    root@rok-tools:~/ops/deployments# export AWS_SUBNETS_PUBLIC
    root@rok-tools:~/ops/deployments# export AWS_SUBNETS_PRIVATE
    root@rok-tools:~/ops/deployments# export EKS_CLUSTER
  3. Decide on the scheme of the Classic AWS Load Balancer you want to use.

    Choose a public ELB if you want to access it through the internet:

    root@rok-tools:~/ops/deployments# export SERVING_EKS_ELB_SCHEME=internet-facing

    Choose a private ELB if you want to only access it internally through a VPC:

    root@rok-tools:~/ops/deployments# export SERVING_EKS_ELB_SCHEME=internal
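
    Note

    If you are unsure whether a subnet is public or private, you can inspect its route table: a public subnet routes to an internet gateway (igw-*). The following is a minimal sketch, assuming the AWS CLI is configured and SUBNET holds one of your subnet IDs; note that subnets without an explicit route table association fall back to the VPC's main route table, which this filter will not match.

    ```shell
    # Sketch: decide if a subnet is public by looking for an internet
    # gateway route in its associated route table. SUBNET is a placeholder.
    SUBNET=subnet-0b936cdc4fae6862a
    ROUTES=$(aws ec2 describe-route-tables \
        --filters Name=association.subnet-id,Values="${SUBNET?}" \
        --query "RouteTables[].Routes[].GatewayId" \
        --output text)
    if echo "${ROUTES}" | grep -q 'igw-'; then
        echo "${SUBNET} is public (routes to an internet gateway)"
    else
        echo "${SUBNET} appears private (no internet gateway route found)"
    fi
    ```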
  4. Specify the subnets that the ELB will use. Choose one of the following options based on your ELB scheme.

    For a public (internet-facing) ELB: select the subnets from the pool of public subnets, AWS_SUBNETS_PUBLIC. These subnets should reside in at least two availability zones.

    root@rok-tools:~/ops/deployments# export SERVING_EKS_ELB_SUBNETS=${AWS_SUBNETS_PUBLIC?} \
    > && echo ${SERVING_EKS_ELB_SUBNETS?}
    subnet-0b936cdc4fae6862a subnet-0110cc3509ed64a7e

    Note

    Advanced Networking: We recommend you use all of the available public subnets. However, if you have specific networking requirements, you can explicitly specify a subset of them with:

    root@rok-tools:~/ops/deployments# export SERVING_EKS_ELB_SUBNETS="<SUBNET1> <SUBNET2>" \
    > && echo ${SERVING_EKS_ELB_SUBNETS?}

    For a private (internal) ELB: select the subnets from the pool of private subnets, AWS_SUBNETS_PRIVATE. These subnets should reside in at least two availability zones.

    root@rok-tools:~/ops/deployments# export SERVING_EKS_ELB_SUBNETS=${AWS_SUBNETS_PRIVATE?} \
    > && echo ${SERVING_EKS_ELB_SUBNETS?}
    subnet-018e3b5b3ec930ccb subnet-074cebd1b78c50066

    Note

    Advanced Networking: We recommend you use all of the available private subnets. However, if you have specific networking requirements, you can explicitly specify a subset of them with:

    root@rok-tools:~/ops/deployments# export SERVING_EKS_ELB_SUBNETS="<SUBNET1> <SUBNET2>" \
    > && echo ${SERVING_EKS_ELB_SUBNETS?}
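
    Note

    You can sanity-check your selection before proceeding. The following sketch, assuming SERVING_EKS_ELB_SUBNETS is exported as above and the AWS CLI is configured, counts the distinct availability zones the chosen subnets cover:

    ```shell
    # Sketch: verify the selected ELB subnets span at least two AZs.
    AZS=$(aws ec2 describe-subnets \
        --subnet-ids ${SERVING_EKS_ELB_SUBNETS?} \
        --query "Subnets[].AvailabilityZone" \
        --output text)
    DISTINCT=$(echo "${AZS}" | tr '\t' '\n' | sort -u | wc -l)
    if [ "${DISTINCT}" -ge 2 ]; then
        echo "OK: subnets span ${DISTINCT} availability zones"
    else
        echo "WARNING: subnets span only ${DISTINCT} availability zone" >&2
    fi
    ```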
  5. Tag the subnets that the ELB will use so that they can be auto-discovered from the AWS Load Balancer Controller:

    root@rok-tools:~/ops/deployments# aws ec2 create-tags --resources ${SERVING_EKS_ELB_SUBNETS?} \
    > --tags Key="kubernetes.io/cluster/${EKS_CLUSTER?}",Value="shared"
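
    Note

    To confirm the tag landed on every subnet, you can query the tags back. This is a sketch, assuming EKS_CLUSTER and SERVING_EKS_ELB_SUBNETS are exported as above; the tr call converts the space-separated subnet list into the comma-separated form that --filters expects.

    ```shell
    # Sketch: list the cluster discovery tag on each ELB subnet.
    aws ec2 describe-tags \
        --filters \
            Name=resource-id,Values="$(echo ${SERVING_EKS_ELB_SUBNETS?} | tr ' ' ',')" \
            Name=key,Values="kubernetes.io/cluster/${EKS_CLUSTER?}" \
        --query "Tags[].[ResourceId,Value]" \
        --output text
    ```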
  6. Save your state:

    root@rok-tools:~/ops/deployments# j2 deploy/env.serving-eks-elb-subnets.j2 \
    > -o deploy/env.serving-eks-elb-subnets
  7. Commit your changes:

    root@rok-tools:~/ops/deployments# git commit -am "Configure ELB Subnets for Serving"
  8. Mark your progress:

    root@rok-tools:~/ops/deployments# export DATE=$(date -u "+%Y-%m-%dT%H.%M.%SZ")
    root@rok-tools:~/ops/deployments# git tag -a deploy/${DATE?}/release-1.5/serving-eks-elb-subnets \
    > -m "Configure ELB Subnets for Serving"

Verify

  1. Go to your GitOps repository, inside your rok-tools management environment:

    root@rok-tools:~# cd ~/ops/deployments
  2. Restore the required context from previous sections:

    root@rok-tools:~/ops/deployments# source <(cat deploy/env.{aws-subnets,eks-cluster,serving-eks-elb-subnets})
    root@rok-tools:~/ops/deployments# export AWS_SUBNETS_PUBLIC
    root@rok-tools:~/ops/deployments# export AWS_SUBNETS_PRIVATE
    root@rok-tools:~/ops/deployments# export EKS_CLUSTER
    root@rok-tools:~/ops/deployments# export SERVING_EKS_ELB_SUBNETS
  3. List the subnets that the Classic AWS Load Balancer will use:

    root@rok-tools:~/ops/deployments# aws ec2 describe-subnets \
    > --subnet-ids ${SERVING_EKS_ELB_SUBNETS?} \
    > --query "Subnets[].[SubnetId,AvailabilityZone,Tags[?Key==\`kubernetes.io/cluster/${EKS_CLUSTER?}\`]|[0].Value]" \
    > --output table
    -------------------------------------------------------
    |                   DescribeSubnets                   |
    +---------------------------+--------------+----------+
    |  subnet-0b936cdc4fae6862a |  us-east-1a  |  shared  |
    |  subnet-0110cc3509ed64a7e |  us-east-1b  |  shared  |
    +---------------------------+--------------+----------+
    root@rok-tools:~/ops/deployments# aws ec2 describe-subnets \
    > --subnet-ids ${SERVING_EKS_ELB_SUBNETS?} \
    > --query "Subnets[].[SubnetId,AvailabilityZone,Tags[?Key==\`kubernetes.io/cluster/${EKS_CLUSTER?}\`]|[0].Value]" \
    > --output table
    -------------------------------------------------------
    |                   DescribeSubnets                   |
    +---------------------------+--------------+----------+
    |  subnet-018e3b5b3ec930ccb |  us-east-1a  |  shared  |
    |  subnet-074cebd1b78c50066 |  us-east-1b  |  shared  |
    +---------------------------+--------------+----------+
  4. Verify that the selected ELB subnets match the ELB scheme. Choose one of the following options based on your ELB scheme.

    For a public (internet-facing) ELB: ensure that the ELB subnets are public, that is, that every subnet in the list of step 3 appears in the list of public subnets. To list the public subnets:

    root@rok-tools:~/ops/deployments# echo ${AWS_SUBNETS_PUBLIC?}
    subnet-0b936cdc4fae6862a subnet-0110cc3509ed64a7e

    For a private (internal) ELB: ensure that the ELB subnets are private, that is, that every subnet in the list of step 3 appears in the list of private subnets. To list the private subnets:

    root@rok-tools:~/ops/deployments# echo ${AWS_SUBNETS_PRIVATE?}
    subnet-018e3b5b3ec930ccb subnet-074cebd1b78c50066
  5. Ensure that the subnets in the list of step 3 do not all belong to the same availability zone, that is, that the second column shows at least two distinct AZs across all rows.

  6. Ensure that the subnets in the list of step 3 have the kubernetes.io/cluster/<EKS_CLUSTER> tag set, that is, the third column shows the value shared for every single row.

Summary

You have successfully selected the Classic AWS Load Balancer scheme and configured its subnets.

What’s Next

The next step is to configure and install the NGINX Ingress Controller.