Create EKS Cluster

In this section we will create an EKS cluster running Kubernetes 1.18.x whose public Kubernetes endpoint is firewalled, i.e., reachable only from the trusted CIDRs you specify.

  1. First choose the cluster name and the trusted CIDRs:

    $ export CIDRS=$CIDR
    $ export CLUSTERNAME=$AWS_ACCOUNT-$AWS_IAM_USER-cluster
    
  2. Obtain the AWS account ID:

    $ export ACCOUNT_ID=$(aws sts get-caller-identity | jq -r '.Account')
    
  3. Decide on the VPC configuration used by the cluster control plane, i.e., the Kubernetes master nodes.

  4. Specify the subnets you will use to host resources for your cluster. For the EKS control plane you must specify at least two subnets in different Availability Zones. We advise you to select all available subnets in the VPC, including the private ones (if any), so that you can deploy worker nodes on private subnets and use the internal Kubernetes endpoint. See Select subnets above for how to obtain the desired subnet IDs and make sure you set the SUBNETIDS environment variable accordingly (a sample command is sketched right after this list).

  5. Specify the security groups you want to use (up to five):

    To use the security group created in the previous section:

    $ export SECURITYGROUPIDS=$(aws ec2 describe-security-groups --filters Name=vpc-id,Values=${VPCID?} Name=group-name,Values=${SECURITYGROUP?},default | jq -r '.SecurityGroups[].GroupId' | xargs)
    

    Warning

    If you have specific network requirements, e.g., you need to use pre-existing security groups, and you already know the security group IDs, you can specify them explicitly with:

    $ export SECURITYGROUPIDS="sg-1 sg-2"
    
  6. Create an EKS cluster:

    $ aws eks create-cluster \
    >      --name ${CLUSTERNAME?} \
    >      --role-arn arn:aws:iam::${ACCOUNT_ID?}:role/eksClusterRole \
    >      --resources-vpc-config subnetIds=${SUBNETIDS// /,},securityGroupIds=${SECURITYGROUPIDS// /,},endpointPublicAccess=true,endpointPrivateAccess=true,publicAccessCidrs=${CIDRS// /,} \
    >      --tags owner=${AWS_ACCOUNT?}/${AWS_IAM_USER?} \
    >      --kubernetes-version 1.18
    
  7. Verify that the EKS cluster exists:

    $ aws eks describe-cluster --name ${CLUSTERNAME?}
    
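If you have not already exported VPCID and SUBNETIDS while following the Select subnets section, the following minimal sketch selects all subnets of the default VPC (the default-VPC filter is only an assumption; adjust it to match your environment):

# Assumption: the cluster lives in the default VPC; change the filter if it does not.
$ export VPCID=$(aws ec2 describe-vpcs --filters Name=isDefault,Values=true | jq -r '.Vpcs[0].VpcId')
# Collect all subnet IDs of that VPC as a space-separated list.
$ export SUBNETIDS=$(aws ec2 describe-subnets --filters Name=vpc-id,Values=${VPCID?} | jq -r '.Subnets[].SubnetId' | xargs)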

Enable IAM roles for Kubernetes Service Accounts

Note

You must wait for your cluster to become ACTIVE before you can create an OIDC provider for it.
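
For example, you can check the status (or block until the cluster becomes ACTIVE) with:

$ aws eks describe-cluster --name ${CLUSTERNAME?} --query "cluster.status" --output text
$ aws eks wait cluster-active --name ${CLUSTERNAME?}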

Create an OIDC provider and associate it with the Kubernetes cluster to enable IAM roles for service accounts:

$ eksctl utils associate-iam-oidc-provider --cluster $CLUSTERNAME --approve

To verify:

$ export OIDC_PROVIDER=$(aws eks describe-cluster --name $CLUSTERNAME --query "cluster.identity.oidc.issuer" --output text | sed -e "s/^https:\/\///")
$ aws iam get-open-id-connect-provider \
>     --open-id-connect-provider-arn arn:aws:iam::$ACCOUNT_ID:oidc-provider/$OIDC_PROVIDER

Access EKS Cluster

To access your newly created EKS cluster, you need to update your kubeconfig. For more information, see the official EKS kubeconfig docs:

$ aws eks update-kubeconfig --name $CLUSTERNAME

Inspect the generated config using kubectl:

$ kubectl config current-context
$ kubectl config view --minify=true

Note

The --minify flag limits the output to the current context only.

Since the EKS cluster is behind a firewall, make sure you have access to the underlying ALB of the Kubernetes endpoint:

$ kubectl config view -o json --raw --minify=true | jq -r '.clusters[0].cluster.server'

(Optional) Share EKS cluster

If you wish to allow other users to access your EKS cluster, you need to:

  1. Edit the kube-system/aws-auth ConfigMap and add an entry for each user you wish to grant access to (an example kubectl command is sketched at the end of this section):

    mapUsers: |
       - userarn: arn:aws:iam::<account_id>:user/<username>
         username: <username>
         groups:
           - system:masters
    
  2. Make sure additional users have sufficient permissions on EKS resources (see https://docs.aws.amazon.com/eks/latest/userguide/security_iam_id-based-policy-examples.html).

    Note

    For example, create a new group with the corresponding policy, e.g., AmazonEKSAdminPolicy, and add the user to this group (see https://github.com/kubernetes-sigs/aws-iam-authenticator/issues/174#issuecomment-476442197); a sample sequence of commands is sketched at the end of this section.

  3. Have the user follow the Configure CLI guide so that they can access AWS resources with aws.

  4. Have the user follow the Access EKS Cluster guide so that they can access Kubernetes with kubectl.

    Important

    If the Kubernetes API server is firewalled, the user needs to make sure they are connecting from a trusted source, e.g., via a trusted VPN.
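
To apply the mapping from step 1, one option is to open the ConfigMap directly in your editor and add the mapUsers entry shown above:

$ kubectl edit -n kube-system configmap/aws-auth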
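
For step 2, assuming you have already created a customer-managed IAM policy named AmazonEKSAdminPolicy as described in the linked issue (the group name EKSAdmins and <username> below are placeholders), a minimal sketch is:

# Assumption: AmazonEKSAdminPolicy already exists in your account; EKSAdmins is an arbitrary group name.
$ aws iam create-group --group-name EKSAdmins
$ aws iam attach-group-policy --group-name EKSAdmins --policy-arn arn:aws:iam::${ACCOUNT_ID?}:policy/AmazonEKSAdminPolicy
$ aws iam add-user-to-group --group-name EKSAdmins --user-name <username>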