Deploy NGINX Ingress Controller

In this section you will configure and deploy the NGINX Ingress Controller and expose it using a Classic Load Balancer.

Check Your Environment

Before you can load balance traffic to an application, your VPC must meet the following requirements (you can verify the subnet tags with the AWS CLI, as shown after this list):

  • It should have at least two subnets in different Availability Zones, all of which are either public or private. These subnets should have the following tag:

    • Key: kubernetes.io/cluster/<CLUSTERNAME>
    • Value: shared
  • The private subnets should have the following tag so that Kubernetes knows which subnets to use for internal load balancers:

    • Key: kubernetes.io/role/internal-elb
    • Value: 1
  • The public subnets should have the following tag so that Kubernetes knows which subnets to use for external load balancers:

    • Key: kubernetes.io/role/elb
    • Value: 1
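
For example, you can list the subnets that carry your cluster tag and review their tags with the AWS CLI. This is a quick sanity check, assuming the AWS CLI is available and configured inside your rok-tools environment; replace <CLUSTERNAME> with the name of your EKS cluster:

    root@rok-tools:~# aws ec2 describe-subnets \
        --filters 'Name=tag:kubernetes.io/cluster/<CLUSTERNAME>,Values=shared' \
        --query 'Subnets[].{ID:SubnetId,AZ:AvailabilityZone,Tags:Tags}'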

Procedure

  1. Go to your GitOps repository, inside your rok-tools management environment:

    root@rok-tools:~# cd ~/ops/deployments
    
  2. Edit rok/nginx-ingress-controller/overlays/deploy/kustomization.yaml and use service-elb as the base, instead of the default ingress-alb:

    bases:
    #- ../ingress-alb
    - ../service-elb
    #- ../service-azurelb
    
  3. Edit rok/nginx-ingress-controller/overlays/deploy/kustomization.yaml and use the service-elb patch, instead of the default ingress-alb and service-alb patches:

    patches:
    #- path: patches/ingress-alb.yaml
    #- path: patches/service-alb.yaml
    - path: patches/service-elb.yaml
    #- path: patches/service-azurelb.yaml
    
  4. Edit rok/nginx-ingress-controller/overlays/deploy/patches/service-elb.yaml and set the aws-load-balancer-internal annotation based on the type of Load Balancer you are going to create. For an internet-facing (public) Load Balancer, set it to "false":

    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: "false"

    For an internal (private) Load Balancer, set it to "true":

    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    
  5. Restrict access to your Classic Load Balancer to specific trusted CIDRs. Edit rok/nginx-ingress-controller/overlays/deploy/patches/service-elb.yaml and set loadBalancerSourceRanges to the desired trusted CIDRs. Leave the default value of 0.0.0.0/0 if you want to allow access from anywhere; a more restrictive example follows the snippet below:

    spec:
      loadBalancerSourceRanges:
      - "0.0.0.0/0"
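
    For example, to allow access only from your organization's networks, you could replace 0.0.0.0/0 with their CIDR ranges. The ranges below are placeholders from the reserved documentation address space, not values to copy as-is:

    spec:
      loadBalancerSourceRanges:
      - "203.0.113.0/24"
      - "198.51.100.0/24"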
    
  6. Commit your changes:

    root@rok-tools:~/ops/deployments# git commit -am "Expose NGINX Ingress Controller with a Classic Load Balancer"
    
  7. Deploy NGINX Ingress Controller:

    root@rok-tools:~/ops/deployments# rok-deploy --apply rok/nginx-ingress-controller/overlays/deploy
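
    Optionally, wait for the rollout to finish before moving on to verification. This is a sketch that assumes the Deployment is named nginx-ingress-controller, as the pod name in the Verify section suggests:

    root@rok-tools:~/ops/deployments# kubectl -n ingress-nginx rollout status deploy/nginx-ingress-controller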
    

Verify

  1. Verify that the NGINX Ingress Controller is up and running. Check the pod status and verify that the STATUS field is Running and the READY field is 1/1:

    root@rok-tools:~/ops/deployments# kubectl -n ingress-nginx get pods
    NAME                                        READY   STATUS    RESTARTS AGE
    nginx-ingress-controller-7f74f657bd-ln59l   1/1     Running   0        1m
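
    If the pod is not Ready, you can inspect the controller logs for errors. This assumes the Deployment is named nginx-ingress-controller, matching the pod name above:

    root@rok-tools:~/ops/deployments# kubectl -n ingress-nginx logs deploy/nginx-ingress-controller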
    
  2. Verify that the Load Balancer Service has an external endpoint. The EXTERNAL-IP field should show the DNS name of your Classic Load Balancer:

    root@rok-tools:~/ops/deployments# kubectl -n ingress-nginx get service
    NAME           TYPE          CLUSTER-IP   EXTERNAL-IP                                                             PORT(S)                      AGE
    ingress-nginx  LoadBalancer  10.32.1.249  a4d794bfa6d7e440facc4398bf96edde-992601283.us-east-1.elb.amazonaws.com  80:30099/TCP,443:30719/TCP   1m
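
    Optionally, you can send a request to the Load Balancer to confirm that traffic reaches the controller. This is a sketch: replace <EXTERNAL-IP> with the DNS name shown in the EXTERNAL-IP column, and run it from a network that your loadBalancerSourceRanges allow. Until you create an Ingress for your applications, the controller's default backend is expected to return an HTTP 404. Note that the DNS name may take a few minutes to become resolvable:

    root@rok-tools:~/ops/deployments# curl -k https://<EXTERNAL-IP>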
    
    Troubleshooting
    The Service object does not get an EXTERNAL-IP.
    1. Describe the service:

      root@rok-tools:~/ops/deployments# kubectl describe service -n ingress-nginx ingress-nginx
      
    2. If you see an event like the following:

      Events:
        Type     Reason                   Age   From                Message
        ----     ------                   ----  ----                -------
        Warning  UnAvailableLoadBalancer  1m    service-controller  There are no available nodes for LoadBalancer
      

      it means that your subnets are misconfigured.

    3. Go back to the Check Your Environment section above and verify your VPC and subnet configuration, especially the required subnet tags.
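
      If a required tag is missing, you can add it with the AWS CLI. This is a sketch with a hypothetical subnet ID; replace subnet-0123456789abcdef0 with the ID of the affected subnet and adjust the tag key and value to the ones that are missing:

      root@rok-tools:~/ops/deployments# aws ec2 create-tags \
          --resources subnet-0123456789abcdef0 \
          --tags Key=kubernetes.io/role/elb,Value=1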

Summary

You have successfully deployed the NGINX Ingress Controller and exposed it using a Classic Load Balancer.

What’s Next

The next step is to expose Istio, our service mesh.