Expose Services on AWS with ALB

This section will walk you through the steps required to expose running services in your cluster to the outside world.

Important

This guide assumes that you already have access to a local clone of the Arrikto-provided deployments repo.

This guide uses example.com as an example hosted zone and demo.example.com as an example sub-domain to expose services running on your EKS cluster. Make sure you set the following environment variables as needed, based on your preferences:

$ export CLUSTERNAME=arrikto-demo-cluster
$ export DOMAIN=example.com
$ export SUBDOMAIN=demo.example.com

Eventually, the whole DOMAIN should be delegated to the AWS nameservers and should not have any A records, while the SUBDOMAIN should resolve to the same addresses as the DNS name of the ALB.
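
Once everything in this guide is in place, a quick sanity check of the DNS setup could look like the following. This is just a sketch, assuming the host utility is available on your workstation:

$ host -t A ${DOMAIN}
$ host ${SUBDOMAIN}

The first command should report that the DOMAIN has no A record, and the second should resolve to the addresses of the ALB.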

We will configure the following:

  1. ExternalDNS to create dynamic DNS entries for the example.com domain.
  2. AWS Certificate Manager (ACM) to create a wildcard SSL certificate for the demo.example.com sub-domain.
  3. Application Load Balancer (ALB) to expose the NGINX Ingress and do TLS termination with a matching ACM certificate.
  4. NGINX Ingress Controller to manage Ingress resources and expose services with them.
  5. Kubernetes Ingress to forward traffic from NGINX to the Istio IngressGateway.

Note

Depending on your environment and infrastructure, you may want to skip deploying some of the components above.

After successfully running this guide, your cluster will have the following Ingress resources:

$ kubectl get ingress -A
NAMESPACE     NAME          HOSTS             ADDRESS                                                                 PORTS  AGE
ingress-nginx ingress-nginx *                 89ba9fd5-ingressnginx-ingr-8872-1476654995.us-west-2.elb.amazonaws.com  80     3h3m
istio-system  istio-ingress demo.example.com  89ba9fd5-ingressnginx-ingr-8872-1476654995.us-west-2.elb.amazonaws.com  80     106m

Before proceeding, ensure you work inside the Arrikto-provided deployments repo. For example:

$ cd ~/ops/deployments

Set Up ExternalDNS

See also the official guide for deploying ExternalDNS on AWS.

Important

The ExternalDNS controller does not work in an air-gapped environment. If you are deploying on an air-gapped cluster, you can skip the deployment of the ExternalDNS controller; in that case you will have to create the A record manually, after configuring Istio.

Set Up a Hosted Zone

Important

If you have already set up a hosted zone for your DOMAIN, skip the "Only once" steps below; otherwise you will end up with a dangling hosted zone, i.e., certificate validation and dynamic DNS updates will not work as expected.

  1. (Only once) Create the hosted zone:

    $ aws route53 create-hosted-zone \
    >     --name "${DOMAIN}." \
    >     --caller-reference "aws-$(date +%s)"
    
  2. Ensure that only one hosted zone exists for the desired domain:

    $ ZONES=$(aws route53 list-hosted-zones-by-name --output json --dns-name "${DOMAIN}." | jq -r '.HostedZones[].Id' | wc -l)
    $ [[ "$ZONES" -eq 1 ]] && echo OK
    
  3. Obtain the zone ID:

    $ export AWS_ZONE_ID=$(aws route53 list-hosted-zones-by-name --output json --dns-name "${DOMAIN}." | jq -r '.HostedZones[].Id' | xargs)
    
  4. Obtain the nameservers for the zone:

    $ aws route53 list-resource-record-sets \
    >     --output json \
    >     --hosted-zone-id $AWS_ZONE_ID \
    >     --query "ResourceRecordSets[?Type == 'NS']" | \
    >         jq -r '.[0].ResourceRecords[].Value'
    
  5. (Only once) Configure your nameservers to delegate the whole DOMAIN to the AWS nameservers listed above, e.g.:

    ns-2048.awsdns-64.com
    ns-2049.awsdns-65.net
    ns-2050.awsdns-66.org
    ns-2051.awsdns-67.co.uk
    
  6. Check existing records in the zone:

    $ aws route53 list-resource-record-sets \
    >     --output json \
    >     --hosted-zone-id $AWS_ZONE_ID
    
  7. Verify domain delegation:

    $ host -a $DOMAIN
    ...
    ;; ANSWER SECTION:
    example.com.  900   IN SOA   ns-2048.awsdns-64.com. awsdns-hostmaster.amazon.com. 1 7200 900 1209600 86400
    example.com.  3600  IN NS ns-2048.awsdns-64.com.
    example.com.  3600  IN NS ns-2049.awsdns-65.net.
    example.com.  3600  IN NS ns-2050.awsdns-66.org.
    example.com.  3600  IN NS ns-2051.awsdns-67.co.uk.
    

Configure Cloud Identity

  1. Create the necessary policy to allow ExternalDNS to update Route 53 resource record sets and hosted zones:

    $ aws iam create-policy \
    >     --policy-name AllowExternalDNSUpdates \
    >     --policy-document file://rok/external-dns/iam-policy.json
    

    The JSON policy document is taken from the official guide for deploying ExternalDNS on AWS; see rok/external-dns/iam-policy.json for its contents.

  2. Set the necessary environment variables:

    $ export IAM_ROLE_NAME=eks-external-dns-$CLUSTERNAME
    $ export IAM_ROLE_DESCRIPTION=ExternalDNS
    $ export IAM_POLICY_NAME=AllowExternalDNSUpdates
    $ export SERVICE_ACCOUNT_NAMESPACE=default
    $ export SERVICE_ACCOUNT_NAME=external-dns
    

Associate the IAM Role and Policy with a Kubernetes Service Account, as described in the official IAM Roles for Service Accounts guide:

  1. Obtain the necessary info for the EKS cluster:

    $ export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text)
    $ export OIDC_PROVIDER=$(aws eks describe-cluster --name $CLUSTERNAME --query "cluster.identity.oidc.issuer" --output text | sed -e "s/^https:\/\///")
    
  2. Render the provided trust policy document template to substitute the missing variables:

    $ j2 rok/eks/iamsa-trust.json.j2 -o iam-$IAM_ROLE_NAME-trust.json
    
  3. Commit the formatted JSON file to the local GitOps repository:

    $ git add iam-$IAM_ROLE_NAME-trust.json
    $ git commit -m "Add JSON trust policy document for $IAM_ROLE_NAME"
    
  4. Create the role:

    $ aws iam create-role \
    >     --role-name $IAM_ROLE_NAME \
    >     --assume-role-policy-document file://iam-$IAM_ROLE_NAME-trust.json \
    >     --description "$IAM_ROLE_DESCRIPTION"
    
  5. Attach the desired policy to the created role:

    $ aws iam attach-role-policy \
    >     --role-name $IAM_ROLE_NAME \
    >     --policy-arn=arn:aws:iam::$AWS_ACCOUNT_ID:policy/$IAM_POLICY_NAME
    
  6. Verify:

    $ aws iam get-role --role-name $IAM_ROLE_NAME
    $ aws iam list-attached-role-policies --role-name $IAM_ROLE_NAME
    

Apply Kustomization

  1. Specify the IAM role to use by tweaking the ServiceAccount-related patch to set the corresponding annotation, i.e., edit rok/external-dns/overlays/deploy/patches/sa.yaml:

    annotations:
      eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/eks-external-dns  # <-- Update this line
    
  2. Specify the domain to operate on by tweaking the corresponding Deployment-related patch to add an extra argument, i.e., edit rok/external-dns/overlays/deploy/patches/deploy.yaml:

    - --domain-filter=example.com  # <-- Update this line with $DOMAIN.
    
  3. Commit changes:

    $ git commit -am "Configure ExternalDNS"
    
  4. Apply the kustomization:

    $ rok-deploy --apply rok/external-dns/overlays/deploy
    
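After the kustomization is applied, you can optionally verify that ExternalDNS picked up the IAM role annotation and is running. The following is a sketch; it assumes the ServiceAccount and Deployment are both named external-dns and live in the default namespace, as configured above:

$ kubectl -n default get sa external-dns \
>     -o jsonpath='{.metadata.annotations.eks\.amazonaws\.com/role-arn}'
$ kubectl -n default logs deploy/external-dns | tail -n 20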

Important

This will not work in an air-gapped environment because Route 53 does not support VPC endpoints.

Install cert-manager

Note

We install cert-manager as part of Kubeflow as well. You can skip this section if you have already deployed Kubeflow.

The AWS Load Balancer Controller uses cert-manager to create a self-signed certificate for its webhook.

Install cert-manager version 0.11.0 using our Kustomizations:

  1. Install kube-system resources:

    $ rok-deploy --apply rok/cert-manager/cert-manager-kube-system-resources/overlays/deploy
    
  2. Install cert-manager resources along with a self-signed ClusterIssuer:

    $ rok-deploy --apply rok/cert-manager/cert-manager/overlays/deploy
    

Deploy Load Balancer

To deploy the AWS Load Balancer Controller we follow the steps specified in the Amazon EKS provided guide.

Find and Tag Public Subnets

At this point we will tag the VPC subnets that we want the load balancers and AWS Load Balancer Controller to be aware of and use. Since we will create an external load balancer, we have to tag the public subnets of the VPC, as needed.

Important

The AWS Load Balancer Controller retrieves the available subnets and tries to resolve at least two qualified subnets. Subnets must contain the kubernetes.io/cluster/<cluster name> tag with a value of shared or owned, and the kubernetes.io/role/elb tag, signifying that they should be used for ALBs. Additionally, there must be at least two subnets in unique Availability Zones, as required by ALBs.
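
For instance, to list the subnets that already carry the cluster tag, you could run something like the following. This is a sketch; adjust the filters to match your setup:

$ aws ec2 describe-subnets \
>     --filters "Name=tag-key,Values=kubernetes.io/cluster/${CLUSTERNAME}" \
>     --query 'Subnets[].[SubnetId,AvailabilityZone]'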

Find Public Subnets

This section will help you find all public subnets of the VPC that your EKS cluster lives in. See also the Select subnets section where we specify the subnets to use for the control plane and the nodegroups.

Warning

If you have specific network requirements, e.g., use a subset of available subnets, and you already know the VPC Subnet IDs, you can specify them explicitly with:

$ export SUBNETS="subnet-1 subnet-2 subnet-3"

Then, skip the rest of this section and continue to the Tag Public Subnets section.

For subnets in general:

“If a subnet’s traffic is routed to an internet gateway, the subnet is known as a public subnet.”

For the default VPC:

“By default, a default subnet is a public subnet, because the main route table sends the subnet’s traffic that is destined for the internet to the internet gateway.”

Thus, to decide whether a subnet is a public one or not, we have to inspect the route tables of the VPC. Quoting from describe-route-table man page:

“Each subnet in your VPC must be associated with a route table. If a subnet is not explicitly associated with any route table, it is implicitly associated with the main route table. This command does not return the subnet ID for implicit associations.”

Here we will inspect the route table of each subnet, or the main route table of the VPC, and search for default route entries, i.e., 0.0.0.0/0 routes that use an Internet Gateway.

  1. Obtain the ID of the VPC that your EKS cluster lives in:

    $ export VPCID=$(aws eks describe-cluster --name ${CLUSTERNAME?} | jq -r '.cluster.resourcesVpcConfig.vpcId')
    
  2. Find the internet gateway of the VPC:

    $ export IGW=$(aws ec2 describe-internet-gateways --filters Name=attachment.vpc-id,Values=${VPCID?} | jq -r '.InternetGateways[].InternetGatewayId')
    
  3. Find the route table inside the VPC that has a default route that uses the Internet Gateway above:

    $ export RTB=$(aws ec2 describe-route-tables \
    >                  --filters Name=vpc-id,Values=${VPCID} \
    >                            Name=route.destination-cidr-block,Values=0.0.0.0/0 \
    >                            Name=route.gateway-id,Values=${IGW} | \
    >                  jq -r '.RouteTables[].RouteTableId')
    
  4. Obtain the main route table of the VPC:

    $ export MRTB=$(aws ec2 describe-route-tables \
    >                   --filters Name=vpc-id,Values=${VPCID} \
    >                             Name=association.main,Values=true | \
    >                   jq -r '.RouteTables[].RouteTableId')
    
  5. If the route table is the main route table of the VPC, obtain all subnets of the VPC:

    $ [[ "$RTB" == "$MRTB" ]] && export SUBNETS=$(aws ec2 describe-subnets --filters Name=vpc-id,Values=$VPCID | jq -r '.Subnets[].SubnetId' | xargs)
    

    Otherwise, obtain the subnets that are explicitly associated with this route table by specifying its id:

    $ [[ "$RTB" != "$MRTB" ]] && export SUBNETS=$(aws ec2 describe-route-tables --route-table-id ${RTB?} | jq -r '.RouteTables[].Associations[].SubnetId' | xargs)
    

Tag Public Subnets

This section assumes that you have already specified the public VPC subnets you wish to use.

  1. View the retrieved public VPC subnets:

    $ echo ${SUBNETS?}
    
  2. Tag the retrieved VPC subnets, as needed:

    Caution

    The following command will add the kubernetes.io/role/elb tag to the specified VPC subnets. Adding this tag to private subnets will break any existing ALB and can cause disruption of incoming traffic.

    $ aws ec2 create-tags \
    >     --resources ${SUBNETS?} \
    >     --tags Key=kubernetes.io/role/elb,Value=1
    
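To confirm that the tags were applied, you can describe the tagged subnets again. This is an optional sanity check:

$ aws ec2 describe-subnets \
>     --subnet-ids ${SUBNETS?} \
>     --query 'Subnets[].{ID:SubnetId,Tags:Tags}'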

Configure Cloud Identity

  1. Create the necessary policy to allow the AWS Load Balancer Controller pod to make calls to AWS APIs on your behalf:

    $ aws iam create-policy \
    >     --policy-name AWSLoadBalancerControllerIAMPolicy \
    >     --policy-document file://rok/aws-load-balancer-controller/iam-policy.json
    

    The JSON policy document is taken from the official AWS Load Balancer Controller on Amazon EKS guide; see rok/aws-load-balancer-controller/iam-policy.json for its contents.

  2. Set the necessary environment variables:

    $ export IAM_ROLE_NAME=eks-aws-load-balancer-controller-${CLUSTERNAME?}
    $ export IAM_ROLE_DESCRIPTION="AWS Load Balancer Controller"
    $ export IAM_POLICY_NAME=AWSLoadBalancerControllerIAMPolicy
    $ export SERVICE_ACCOUNT_NAMESPACE=kube-system
    $ export SERVICE_ACCOUNT_NAME=aws-load-balancer-controller
    

Associate the IAM Role and Policy with a Kubernetes Service Account, as described in the official IAM Roles for Service Accounts guide:

  1. Obtain the necessary info for the EKS cluster:

    $ export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text)
    $ export OIDC_PROVIDER=$(aws eks describe-cluster --name $CLUSTERNAME --query "cluster.identity.oidc.issuer" --output text | sed -e "s/^https:\/\///")
    
  2. Render the provided trust policy document template to substitute the missing variables:

    $ j2 rok/eks/iamsa-trust.json.j2 -o iam-$IAM_ROLE_NAME-trust.json
    
  3. Commit the formatted JSON file to the local GitOps repository:

    $ git add iam-$IAM_ROLE_NAME-trust.json
    $ git commit -m "Add JSON trust policy document for $IAM_ROLE_NAME"
    
  4. Create the role:

    $ aws iam create-role \
    >     --role-name $IAM_ROLE_NAME \
    >     --assume-role-policy-document file://iam-$IAM_ROLE_NAME-trust.json \
    >     --description "$IAM_ROLE_DESCRIPTION"
    
  5. Attach the desired policy to the created role:

    $ aws iam attach-role-policy \
    >     --role-name $IAM_ROLE_NAME \
    >     --policy-arn=arn:aws:iam::$AWS_ACCOUNT_ID:policy/$IAM_POLICY_NAME
    
  6. Verify:

    $ aws iam get-role --role-name $IAM_ROLE_NAME
    $ aws iam list-attached-role-policies --role-name $IAM_ROLE_NAME
    

Migrate from v1.0

Important

In case you are upgrading from v1.0 to v2.0, i.e., from ALB Ingress Controller to AWS Load Balancer Controller, you have to assign some extra permissions to the IAM role for the new controller to be able to manage existing AWS resources, e.g., LoadBalancers, SecurityGroups, etc.

  1. Create the extra policy to allow the AWS Load Balancer Controller pod to manage existing AWS resources created by the previous ALB Ingress Controller:

    $ aws iam create-policy \
    >     --policy-name AWSLoadBalancerControllerExtraIAMPolicy \
    >     --policy-document file://rok/aws-load-balancer-controller/iam-policy-v1-to-v2-additional.json
    

    The JSON policy document is taken from the official AWS Load Balancer Controller on Amazon EKS guide; see rok/aws-load-balancer-controller/iam-policy-v1-to-v2-additional.json for its contents.

  2. Attach the policy to the previously created IAM role:

    $ aws iam attach-role-policy \
    >     --role-name ${IAM_ROLE_NAME?} \
    >     --policy-arn=arn:aws:iam::${AWS_ACCOUNT_ID?}:policy/AWSLoadBalancerControllerExtraIAMPolicy
    

Apply Kustomization

Note

The tracked AWS Load Balancer Controller manifests are obtained from the official repository.

  1. Specify the IAM role to use by editing the ServiceAccount-related patch to set the corresponding annotation, i.e., edit rok/aws-load-balancer-controller/overlays/deploy/patches/sa.yaml:

    annotations:
      eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/eks-aws-load-balancer-controller  # <-- Update this line
    
  2. Specify the cluster name to use by tweaking the corresponding Deployment-related patch to add an extra argument, i.e., edit rok/aws-load-balancer-controller/overlays/deploy/patches/deploy.yaml:

    value: "--cluster-name=CLUSTERNAME"  # <-- Update this line
    
  3. Commit changes:

    $ git commit -am "Configure AWS Load Balancer Controller"
    
  4. Apply the kustomization:

    $ rok-deploy --apply rok/aws-load-balancer-controller/overlays/deploy
    
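Before proceeding, you can verify that the controller is up and running. The following assumes the Deployment is named aws-load-balancer-controller and runs in the kube-system namespace, as in the upstream manifests:

$ kubectl -n kube-system get deploy aws-load-balancer-controller
$ kubectl -n kube-system logs deploy/aws-load-balancer-controller | tail -n 20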

Create Certificate

Important

AWS Certificate Manager (ACM) does not create the DNS records required for validation on Route 53 automatically. You can, however, make an AWS API call to Route 53 to create them.

  1. Request a certificate for your sub-domain, e.g., demo.example.com:

    $ aws acm request-certificate \
    >     --domain-name ${SUBDOMAIN} \
    >     --subject-alternative-names "*.${SUBDOMAIN}" \
    >     --validation-method DNS
    

    Note

    We request a wildcard FQDN to be included in the Subject Alternative Name extension of the ACM certificate so that we can expose multiple virtual hosts, such as GitLab- or Kubeflow-related services.

  2. Obtain the ARN of the certificate:

    $ export CERT=$(aws acm list-certificates | \
    >     jq -r '.CertificateSummaryList[]  | select(.DomainName == "'$SUBDOMAIN'") | .CertificateArn')
    
  3. Obtain the underlying zone:

    $ export AWS_ZONE_ID=$(aws route53 list-hosted-zones-by-name --output json --dns-name "${DOMAIN}." | jq -r '.HostedZones[0].Id')
    
  4. Create the necessary CNAME records on Route 53:

    $ aws acm describe-certificate --certificate-arn $CERT | \
    >    jq -r '.Certificate.DomainValidationOptions[].ResourceRecord|.Name,.Value' | paste - - | \
    >        while read name value; do
    >            aws route53 change-resource-record-sets \
    >                --hosted-zone-id $AWS_ZONE_ID \
    >                --change-batch '{"Comment": "Add CNAME for ACM DNS Validation",
    >                                 "Changes": [
    >                                    {
    >                                      "Action": "UPSERT",
    >                                      "ResourceRecordSet": {
    >                                        "Name": "'$name'",
    >                                        "Type": "CNAME",
    >                                        "TTL": 300,
    >                                        "ResourceRecords": [
    >                                          {
    >                                            "Value": "'$value'"
    >                                          }
    >                                        ]
    >                                      }
    >                                    }
    >                                  ]
    >                                }'
    >        done
    

    Note

    We need to create a CNAME record for each domain name included in the certificate, i.e., the CN plus the SANs. Since the certificate we requested is associated with two domain names, we will create two CNAME records.

  5. Wait until ACM issues your certificate:

    $ aws acm describe-certificate --certificate-arn $CERT
    
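The certificate is ready to use once its status becomes ISSUED. To poll just the status field, you can use a --query expression, for example:

$ aws acm describe-certificate \
>     --certificate-arn $CERT \
>     --query 'Certificate.Status' \
>     --output text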

Deploy NGINX Ingress Controller

To deploy the NGINX Ingress Controller we follow the steps specified in the official NGINX installation guide for AWS. We will use an Application Load Balancer to expose the NGINX Ingress Controller. We will terminate TLS at the ALB using the ACM certificate we created previously.

Note

The tracked NGINX Ingress Controller manifests are obtained from the official repository.

  1. Edit rok/nginx-ingress-controller/overlays/deploy/patches/ingress-alb.yaml to specify the ACM certificate to use:

    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-west-1:111111111111:certificate/9b414703-707a-4589-a0ef-86b3d38df62f  # <-- Update this line.
    
  2. In case of a private-only EKS cluster, you have to use an internal ALB instead of an internet-facing one. Edit rok/nginx-ingress-controller/overlays/deploy/patches/ingress-alb.yaml and update the corresponding annotation:

    alb.ingress.kubernetes.io/scheme: internal   # <-- Update this line.
    
  3. To place the ALB behind a firewall, edit rok/nginx-ingress-controller/overlays/deploy/patches/ingress-alb.yaml and specify the desired trusted CIDRs in the corresponding annotation:

    alb.ingress.kubernetes.io/inbound-cidrs: 1.2.3.4/32
    

    Note

    You can set multiple trusted inbound CIDRs by specifying them as a comma-separated list. For more information, see the official AWS Load Balancer Controller docs.

  4. Commit changes:

    $ git commit -am "Expose NGINX Ingress Controller with an ALB"
    
  5. Apply the kustomization:

    $ rok-deploy --apply rok/nginx-ingress-controller/overlays/deploy
    
  6. Wait until the AWS Load Balancer Controller provisions the necessary AWS resources:

    $ kubectl get ingress -n ingress-nginx
    NAME            HOSTS   ADDRESS                                                                    PORTS   AGE
    ingress-nginx   *       e53a524a-ingressnginx-ingr-1234-592794601.eu-central-1.elb.amazonaws.com   80      64d
    

    Important

    If the Ingress object does not get an ADDRESS, inspect the logs of aws-load-balancer-controller in the kube-system namespace; most probably you will find something like:

    "msg"="Reconciler error" "error"="failed to build LoadBalancer configuration due to retrieval of subnets failed to resolve 2 qualified subnets.
    
  7. Obtain the address of the ALB:

    $ kubectl get ingress -n ingress-nginx ingress-nginx -o json | jq -r '.status.loadBalancer.ingress[].hostname'
    
  8. Edit rok/nginx-ingress-controller/overlays/deploy/patches/service-alb.yaml and set the externalName of the ingress-nginx service:

    externalName: demo.example.com  # <-- Update this line.
    
  9. Commit changes:

    $ git commit -am "Set externalName of ingress-nginx service"
    
  10. Re-apply manifests:

    $ rok-deploy --apply rok/nginx-ingress-controller/overlays/deploy
    
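As a final check for this section, ensure that the NGINX Ingress Controller Pods are up and running:

$ kubectl get pods -n ingress-nginx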

Expose Istio

To expose Istio to the outside world, follow the steps below.

Route Traffic

You have already deployed Istio and exposed Rok at /rok through the Istio IngressGateway. You now need to create an Ingress in order to route traffic from the NGINX Pods to the Istio IngressGateway Pods.

  1. Edit rok/rok-external-services/istio/istio-1-9/istio-install/overlays/deploy/kustomization.yaml and use the ingress-nginx overlay:

    resources:
    - ../arrikto   # <- change to "../ingress-nginx"
    
  2. Edit rok/rok-external-services/istio/istio-1-9/istio-install/overlays/deploy/kustomization.yaml and uncomment the patch for ingress host:

    patches:
    - path: patches/ingress-host.yaml
      target:
        kind: Ingress
        name: istio-ingress
    
  3. Edit rok/rok-external-services/istio/istio-1-9/istio-install/overlays/deploy/patches/ingress-host.yaml to set the host field of the Ingress rule:

    value: demo.example.com  # <-- Update this line with SUBDOMAIN.
    
  4. Commit changes:

    $ git commit -am "Expose Istio via an NGINX Ingress"
    
  5. Apply the kustomization:

    $ rok-deploy --apply rok/rok-external-services/istio/istio-1-9/istio-install/overlays/deploy
    
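After a while, the istio-ingress Ingress should obtain the DNS name of the ALB as its ADDRESS, and ExternalDNS should create a matching record for your SUBDOMAIN. You can verify both with something like the following (DNS propagation may take a few minutes):

$ kubectl get ingress -n istio-system istio-ingress
$ host ${SUBDOMAIN}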

Important

In case of a private-only EKS cluster, ExternalDNS will not be able to update Route 53 records automatically, so you have to manually create an alias record, i.e., an A record pointing to the internal ALB created by the AWS Load Balancer Controller.
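
The following is a sketch of creating such an alias record with the AWS CLI. It assumes the hosted zone ID from the previous sections ($AWS_ZONE_ID) and the ingress-nginx Ingress created earlier; adjust the names to your environment:

$ export ALB_DNS=$(kubectl get ingress -n ingress-nginx ingress-nginx -o json | jq -r '.status.loadBalancer.ingress[].hostname')
$ export ALB_ZONE_ID=$(aws elbv2 describe-load-balancers --query "LoadBalancers[?DNSName=='${ALB_DNS}'].CanonicalHostedZoneId" --output text)
$ aws route53 change-resource-record-sets \
>     --hosted-zone-id $AWS_ZONE_ID \
>     --change-batch '{"Comment": "Alias record for the internal ALB",
>                      "Changes": [
>                        {
>                          "Action": "UPSERT",
>                          "ResourceRecordSet": {
>                            "Name": "'$SUBDOMAIN'",
>                            "Type": "A",
>                            "AliasTarget": {
>                              "HostedZoneId": "'$ALB_ZONE_ID'",
>                              "DNSName": "'$ALB_DNS'",
>                              "EvaluateTargetHealth": false
>                            }
>                          }
>                        }
>                      ]}'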

Configure X-Forwarded-* Settings

The Istio IngressGateway may have one or more HTTP proxies in front of it. These proxies set the X-Forwarded-For and X-Forwarded-Proto headers, which are important for our applications to work correctly. However, these headers are also security sensitive, so the Istio IngressGateway proxy needs to know how many trusted proxies run in front of it.

  1. If you are deploying the Istio IngressGateway behind HTTP proxies, edit rok/rok-external-services/istio/istio-1-9/istio-install/overlays/deploy/kustomization.yaml resources to include the trusted-front-proxies.yaml file:

    resources:
    ...
    # - trusted-front-proxies.yaml  # <-- Uncomment this line.
    
  2. Edit rok/rok-external-services/istio/istio-1-9/istio-install/overlays/deploy/trusted-front-proxies.yaml and set the xff_num_trusted_hops setting to the number of trusted proxies in front of the Istio IngressGateway. In this case, we have the ALB and NGINX in front of it, so we set xff_num_trusted_hops: 2:

    apiVersion: networking.istio.io/v1alpha3
    kind: EnvoyFilter
    metadata:
      name: xff-trust-hops
      namespace: istio-system
    spec:
      [...]
              xff_num_trusted_hops: 2  # <-- Update this line accordingly.
    
  3. Commit changes:

    $ git add rok/rok-external-services/istio/istio-1-9/istio-install/overlays/deploy
    $ git commit -m "istio: Configure XF settings"
    
  4. Apply changes:

    $ rok-deploy --apply rok/rok-external-services/istio/istio-1-9/istio-install/overlays/deploy
    

Configure Access for External Clients

Clients that are external to the Kubernetes cluster (e.g., a bot or a user’s laptop) can access the cluster using Kubernetes Service Accounts as their identity. They store a long-lived token in their environment and use it to create time- and audience-bound tokens, to securely access the APIs exposed through the Istio Gateway (e.g., Rok, Kubeflow, etc.). You can read more at the External Access user guide.

To enable users to issue short-lived tokens, you need to expose the TokenRequest API of the Kubernetes API-Server, so that it’s accessible to external clients. To expose the TokenRequest API under demo.example.com/kubernetes:

  1. Edit rok/kubernetes-proxy/overlays/deploy/patches/ingress_host.json with your host:

    [
        {
          "op": "replace",
          "path": "/spec/rules/0/host",
          "value": "demo.example.com"
        }
    ]
    
  2. Commit changes:

    $ git commit -am "ingress: Expose Kubernetes TokenRequest API under /kubernetes"
    
  3. Apply changes:

    $ rok-deploy --apply rok/kubernetes-proxy/overlays/deploy
    

Note

This section exposes only the TokenRequest API, not the whole Kubernetes API-Server.

Test Running Services

We will make use of an echoserver pod and expose it:

  • behind the NGINX Ingress Controller, via an Ingress
  • behind Istio, via a VirtualService

First, deploy the echoserver resources according to the official docs:

$ rok-deploy --apply rok/echoservice/overlays/deploy
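
Optionally, verify that the echoserver Pod is up before testing. Since the exact namespace depends on the kustomization, a namespace-wide check is the simplest:

$ kubectl get pods -A | grep echoserver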

Test Ingress NGINX

First expose echoserver via an Ingress:

  1. Edit rok/echoservice/echoserver-ingress.yaml to set the host field to a name under your sub-domain, e.g., echoserver.demo.example.com.

  2. Commit the change:

    $ git commit -am "echoservice: Set host in Ingress"
    
  3. Apply the Ingress resource:

    $ kubectl apply -f rok/echoservice/echoserver-ingress.yaml
    
  4. Wait for the DNS changes to propagate.

  5. Test it with curl:

    $ curl https://echoserver.demo.example.com/echoserver/
    ...
    x-forwarded-for=1.2.3.4, 172.31.6.74
    x-forwarded-host=echoserver.demo.example.com
    x-forwarded-port=443
    x-forwarded-prefix=/echoserver
    x-forwarded-proto=https
    x-original-forwarded-for=1.2.3.4
    x-real-ip=1.2.3.4
    x-request-id=11f2cb5ff4778be50b14680d23b12712
    x-scheme=https
    

Note

In case you have already deployed AuthService, you have to update SKIP_AUTH_URLS to include /echoserver so that no authentication is required.

Test Istio

First expose echoserver via a VirtualService:

  1. Create the VirtualService for Istio:

    $ kubectl apply -f rok/echoservice/echoserver-virtualservice.yaml
    
  2. Test it with curl:

    $ curl https://demo.example.com/echoserver/
    ...
    x-envoy-decorator-operation=...
    x-envoy-external-address=1.2.3.4
    x-envoy-original-path=/echoserver/
    x-envoy-peer-metadata=....
    x-envoy-peer-metadata-id=...
    x-forwarded-for=1.2.3.4, 172.31.31.247,172.31.23.139
    x-forwarded-host=demo.example.comdemo.example.com
    x-forwarded-port=443
    x-forwarded-prefix=/echoserver/
    x-forwarded-proto=https
    x-original-forwarded-for=1.2.3.4
    x-real-ip=1.2.3.4
    x-request-id=8a8732d9-644e-90c2-885f-1363d27348a0
    x-scheme=https
    

Note

The x-real-ip is calculated based on x-forwarded-for, which is updated by the proxies in front of our server, i.e., the Application Load Balancer managed by Amazon and the NGINX Ingress Controller running in our cluster. The value depends on the xff_num_trusted_hops setting we configured previously.

Visit Rok

Visit the Rok UI at your sub-domain, e.g., https://demo.example.com/rok/.

What’s Next

The next step is to deploy Kubeflow.