Create EKS Cluster

In this section we will create an EKS cluster running Kubernetes 1.19.x that allows public access to the Kubernetes API endpoint only from trusted CIDRs, i.e., behind a firewall.

  1. First choose the cluster name and the trusted CIDRs:

    $ export CIDRS=$CIDR
    $ export CLUSTERNAME=$AWS_ACCOUNT-$AWS_IAM_USER-cluster
    
  2. Obtain the AWS account ID:

    $ export ACCOUNT_ID=$(aws sts get-caller-identity | jq -r '.Account')
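
    To verify the value, print it (the account ID shown here is a placeholder):

    $ echo $ACCOUNT_ID
    123456789012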
    
  3. Decide on the VPC configuration used by the cluster control plane, i.e., the Kubernetes master nodes.

  4. Specify the subnets you will use to host resources for your cluster. For the EKS control plane you must specify at least two subnets in different Availability Zones. We advise you to select all available subnets in the VPC, including the private ones (if any), so that you can deploy worker nodes on private subnets and use the internal Kubernetes endpoint. See Select subnets above for how to obtain the desired subnet IDs, and make sure you set the SUBNETIDS environment variable accordingly.
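
    For example, assuming VPCID is already set, a minimal sketch that selects all subnets in the VPC would be:

    $ export SUBNETIDS=$(aws ec2 describe-subnets \
    >     --filters Name=vpc-id,Values=${VPCID?} \
    >     --query 'Subnets[].SubnetId' \
    >     --output text | xargs)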

  5. Specify the security groups you want to use (up to five).

    To list the security groups in the VPC run:

    $ aws ec2 describe-security-groups \
    >    --filter Name=vpc-id,Values=${VPCID?} \
    >    --query 'SecurityGroups[].[GroupId,GroupName,Description]' \
    >    --output table
    

    To use the security group created in the previous section along with the default one:

    $ export SECURITYGROUPIDS=$(aws ec2 describe-security-groups --filters Name=vpc-id,Values=${VPCID?} Name=group-name,Values=${SECURITYGROUP?},default | jq -r '.SecurityGroups[].GroupId' | xargs)
    

    Warning

    If you have specific network requirements, e.g., you use pre-existing security groups, and you already know the security group IDs, you can specify them explicitly with:

    $ export SECURITYGROUPIDS="sg-1 sg-2"
    
  6. Specify the VPC configuration for the cluster control plane:

    $ export RESOURCES_VPC_CONFIG="subnetIds=${SUBNETIDS// /,},securityGroupIds=${SECURITYGROUPIDS// /,},endpointPublicAccess=true,endpointPrivateAccess=true,publicAccessCidrs=${CIDRS// /,}"
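
    To inspect the resulting value (the subnet IDs, security group IDs, and CIDR shown below are placeholders):

    $ echo $RESOURCES_VPC_CONFIG
    subnetIds=subnet-0aaa,subnet-0bbb,securityGroupIds=sg-0ccc,sg-0ddd,endpointPublicAccess=true,endpointPrivateAccess=true,publicAccessCidrs=203.0.113.0/24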
    
  7. Create an EKS cluster:

    $ aws eks create-cluster \
    >      --name ${CLUSTERNAME?} \
    >      --role-arn arn:aws:iam::${ACCOUNT_ID?}:role/eksClusterRole \
    >      --resources-vpc-config ${RESOURCES_VPC_CONFIG?} \
    >      --tags owner=${AWS_ACCOUNT?}/${AWS_IAM_USER?} \
    >      --kubernetes-version 1.19
    
  8. Verify that the EKS cluster exists:

    $ aws eks describe-cluster --name ${CLUSTERNAME?}
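
    Cluster creation typically takes several minutes. To check just the cluster status, or to block until it becomes ACTIVE, you can use, for example:

    $ aws eks describe-cluster --name ${CLUSTERNAME?} --query cluster.status --output text
    $ aws eks wait cluster-active --name ${CLUSTERNAME?}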
    

Enable IAM roles for Kubernetes Service Accounts

Note

You must wait for your cluster to become ACTIVE before you can create an OIDC provider for it.

Create an OIDC provider and associate it with the Kubernetes cluster to enable IAM roles for service accounts:

$ eksctl utils associate-iam-oidc-provider --cluster $CLUSTERNAME --approve
Troubleshooting
The command fails with:

[i]  eksctl version 0.16.0
[i]  using region us-east-1
[!]  retryable error (RequestError: send request failed
caused by: Put http://169.254.169.254/latest/api/token: net/http: request canceled (Client.Timeout exceeded while awaiting headers))

Ensure that the PUT response hop limit (HttpPutResponseHopLimit) of your EC2 instance is greater than 1.
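
For example, assuming your management environment is the EC2 instance in question, a sketch to raise the hop limit would be:

$ aws ec2 modify-instance-metadata-options \
>     --instance-id <instance-id> \
>     --http-put-response-hop-limit 2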

To verify:

$ export OIDC_PROVIDER=$(aws eks describe-cluster --name $CLUSTERNAME --query "cluster.identity.oidc.issuer" --output text | sed -e "s/^https:\/\///")
$ aws iam get-open-id-connect-provider \
>     --open-id-connect-provider-arn arn:aws:iam::$ACCOUNT_ID:oidc-provider/$OIDC_PROVIDER
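
The issuer host stored in OIDC_PROVIDER typically looks like the following (the ID shown is a placeholder):

$ echo $OIDC_PROVIDER
oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE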

Access EKS Cluster

To access your newly created EKS cluster you need to update your kubeconfig. For more information, see the official EKS kubeconfig docs:

$ aws eks update-kubeconfig --name $CLUSTERNAME

Inspect the generated config using kubectl:

$ kubectl config current-context
$ kubectl config view --minify=true

Note

Set the --minify flag to output info only for the current context.

Since the EKS cluster is behind a firewall, make sure you have network access to the load balancer backing the Kubernetes API endpoint:

$ kubectl config view -o json --raw --minify=true | jq -r '.clusters[0].cluster.server'
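
For a quick reachability check you can issue an unauthenticated request to that URL; any HTTP response (e.g., 401 or 403) means the endpoint is reachable, while a timeout indicates the firewall is blocking you. A minimal sketch:

$ curl -sk -o /dev/null -w '%{http_code}\n' \
>     $(kubectl config view -o json --raw --minify=true | jq -r '.clusters[0].cluster.server')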

If your management environment runs on an EC2 instance, assign the cluster security group created by EKS to your instance; otherwise, you will not be able to access the Kubernetes API. To do so:

  1. Obtain the ClusterSecurityGroup ID of your EKS cluster:

    $ esg=$(aws eks describe-cluster \
    >     --name ${CLUSTERNAME?} \
    >     --query cluster.resourcesVpcConfig.clusterSecurityGroupId \
    >     --output text)
    
  2. Obtain the instance ID of your EC2 instance:

    $ instance_id=$(curl 169.254.169.254/latest/meta-data/instance-id)
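
    If your instance enforces IMDSv2, you will need a session token first, for example:

    $ TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
    >     -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
    $ instance_id=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
    >     169.254.169.254/latest/meta-data/instance-id)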
    
  3. Obtain the security groups of your EC2 instance:

    $ isg=$(aws ec2 describe-instances \
    >     --instance-ids $instance_id \
    >     --query 'Reservations[].Instances[].SecurityGroups[].GroupId' \
    >     --output text)
    
  4. Update the security groups of your EC2 instance:

    $ aws ec2 modify-instance-attribute \
    >     --instance-id $instance_id \
    >     --groups $isg $esg
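
    To confirm that the cluster security group is now attached to the instance, you can, for example, list its security groups again:

    $ aws ec2 describe-instances \
    >     --instance-ids $instance_id \
    >     --query 'Reservations[].Instances[].SecurityGroups[].GroupId' \
    >     --output text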
    

Verify that you can access your EKS cluster:

$ kubectl get nodes
No resources found in default namespace.
Troubleshooting
The command fails with an ‘Unauthorized’ error.
If you try to access an existing cluster, make sure that the cluster creator provides access to your IAM user or role by following the (Optional) Share EKS cluster section.

(Optional) Share EKS cluster

In case you wish to allow other users to access your EKS cluster, you need to:

  1. Edit the aws-auth ConfigMap in the kube-system namespace and add an entry for each IAM user or IAM role you wish to grant access to:

    mapUsers: |
      - userarn: arn:aws:iam::<AWS_ACCOUNT_ID>:user/<AWS_IAM_USER>
        username: <AWS_IAM_USER>
        groups:
          - system:masters
    mapRoles: |
      - rolearn: arn:aws:iam::<AWS_ACCOUNT_ID>:role/<AWS_IAM_ROLE>
        username: system:node:{{EC2PrivateDNSName}}
        groups:
          - system:masters
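
    For example, you can open the ConfigMap for editing with:

    $ kubectl -n kube-system edit configmap aws-auth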
    

    Important

    If the aws-auth ConfigMap does not exist in your cluster, there is an example one in your GitOps repository under rok/eks/aws-auth.yaml that you can edit and apply directly.

  2. Make sure additional users have sufficient permissions on EKS resources (see https://docs.aws.amazon.com/eks/latest/userguide/security_iam_id-based-policy-examples.html).

    Note

    For example, create a new group with the corresponding policy, e.g., AmazonEKSAdminPolicy, and add the user to this group (see https://github.com/kubernetes-sigs/aws-iam-authenticator/issues/174#issuecomment-476442197).
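
    A minimal sketch, assuming you have already created a customer-managed policy named AmazonEKSAdminPolicy, and using a hypothetical group name eks-admins:

    $ # "eks-admins" is a hypothetical group name; the AmazonEKSAdminPolicy policy is assumed to exist
    $ aws iam create-group --group-name eks-admins
    $ aws iam attach-group-policy \
    >     --group-name eks-admins \
    >     --policy-arn arn:aws:iam::${ACCOUNT_ID?}:policy/AmazonEKSAdminPolicy
    $ aws iam add-user-to-group --group-name eks-admins --user-name <AWS_IAM_USER>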

  3. Have the user follow the Configure CLI guide so that they can access AWS resources with aws.

  4. Have the user follow the Access EKS Cluster guide so that they can access Kubernetes with kubectl.

    Important

    In case the Kubernetes API server is firewalled, the user needs to make sure they are connecting from a trusted source, e.g., via a trusted VPN.