Deploy NGINX Ingress Controller

In this section you will configure and deploy the NGINX Ingress Controller and expose it using a Classic Load Balancer.

Procedure

  1. Go to your GitOps repository, inside your rok-tools management environment:

    root@rok-tools:~# cd ~/ops/deployments
  2. Edit rok/nginx-ingress-controller/overlays/deploy/kustomization.yaml and use service-elb as base:

    bases:
    #- ../ingress-alb
    - ../service-elb
    #- ../service-azurelb
  3. Edit rok/nginx-ingress-controller/overlays/deploy/kustomization.yaml and enable only the service-elb patch:

    patches:
    #- path: patches/ingress-alb.yaml
    #- path: patches/service-alb.yaml
    - path: patches/service-elb.yaml
    #- path: patches/service-azurelb.yaml
  4. Edit rok/nginx-ingress-controller/overlays/deploy/patches/service-elb.yaml and set the aws-load-balancer-internal annotation. Choose one of the following options, based on the ELB scheme:

    For a public (internet-facing) ELB:

    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: "false" # <-- Update this line.

    For a private (internal) ELB:

    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: "true" # <-- Update this line.
  5. Enable the firewall in your Classic Load Balancer and allow access only to specific CIDRs. Choose one of the following options, based on your ELB scheme:

    If your ELB is public (internet-facing), edit rok/nginx-ingress-controller/overlays/deploy/patches/service-elb.yaml and set loadBalancerSourceRanges to the desired trusted CIDRs. Leave the default value of 0.0.0.0/0 if you want to allow access for everyone:

    spec:
      loadBalancerSourceRanges:
      - "0.0.0.0/0" # <-- Update this line.

    If your ELB is private (internal), skip specifying any CIDRs, since the ELB is not reachable from outside your VPC.
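
    For reference, with the public option the service-elb.yaml patch might end up looking similar to the sketch below. The apiVersion, kind, and metadata shown here are assumptions about how the patch is structured, and 203.0.113.0/24 is only an example CIDR; keep whatever the file already contains and change only the annotation and the source ranges:

    apiVersion: v1
    kind: Service
    metadata:
      name: ingress-nginx          # assumed to match the existing patch; do not rename
      namespace: ingress-nginx
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-internal: "false"  # public (internet-facing) ELB
    spec:
      loadBalancerSourceRanges:
      - "203.0.113.0/24"           # example trusted CIDR; replace with your own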

  6. Commit your changes:

    root@rok-tools:~/ops/deployments# git commit -am "Expose NGINX Ingress Controller with a Classic Load Balancer"
  7. Deploy NGINX Ingress Controller:

    root@rok-tools:~/ops/deployments# rok-deploy --apply rok/nginx-ingress-controller/overlays/deploy
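
    Optionally, if you want to inspect the exact manifests that the overlay renders, kubectl's built-in kustomize support can print them. This is only a convenience check and assumes the overlay is a plain kustomization with no rok-deploy-specific processing:

    root@rok-tools:~/ops/deployments# kubectl kustomize rok/nginx-ingress-controller/overlays/deploy | less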

Verify

  1. Verify that the NGINX Ingress Controller is up and running. Check the Pod status and verify that the STATUS field is Running and the READY field is 1/1:

    root@rok-tools:~/ops/deployments# kubectl -n ingress-nginx get pods
    NAME                                        READY   STATUS    RESTARTS   AGE
    ingress-nginx-controller-7f74f657bd-ln59l   1/1     Running   0          1m
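
    If the Pod is still starting, you can optionally wait for it to become Ready instead of polling. The label selector below assumes the controller Pods carry the upstream app.kubernetes.io/component=controller label; adjust it to match your manifests:

    root@rok-tools:~/ops/deployments# kubectl wait --for=condition=Ready pods -n ingress-nginx -l app.kubernetes.io/component=controller --timeout=300s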
  2. Verify that the Load Balancer Service has an external IP:

    root@rok-tools:~/ops/deployments# kubectl -n ingress-nginx get service
    NAME            TYPE           CLUSTER-IP    EXTERNAL-IP                                                               PORT(S)                      AGE
    ingress-nginx   LoadBalancer   10.32.1.249   a4d794bfa6d7e440facc4398bf96edde-992601283.us-east-1.elb.amazonaws.com   80:30099/TCP,443:30719/TCP   1m
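
    Optionally, once the EXTERNAL-IP is populated and the ELB hostname resolves, you can confirm that NGINX answers. For a public ELB, and from a host allowed by your trusted CIDRs, a request that matches no Ingress rule should get a 404 from the controller's default backend. Here <EXTERNAL-IP> is a placeholder for the hostname shown above:

    root@rok-tools:~/ops/deployments# curl -k https://<EXTERNAL-IP>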

    Troubleshooting

    The Service object does not get an EXTERNAL-IP.

    1. Describe the service:

      root@rok-tools:~/ops/deployments# kubectl describe service -n ingress-nginx ingress-nginx
    2. If you see an event like the following:

      Events:
        Type     Reason                   Age  From                Message
        ----     ------                   ---  ----                -------
        Warning  UnAvailableLoadBalancer  1m   service-controller  There are no available nodes for LoadBalancer

      it means that your subnets are misconfigured.

    3. Verify your subnets configuration.
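
      For an internet-facing ELB, the public subnets of your cluster's VPC typically need the kubernetes.io/role/elb tag set to 1, while for an internal ELB the private subnets typically need kubernetes.io/role/internal-elb set to 1. As a starting point, you can list the subnets and their tags; <VPC_ID> is a placeholder for your cluster's VPC ID:

      root@rok-tools:~/ops/deployments# aws ec2 describe-subnets --filters Name=vpc-id,Values=<VPC_ID> --query "Subnets[].{Subnet:SubnetId,Tags:Tags}"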

Summary

You have successfully deployed the NGINX Ingress Controller and exposed it using a Classic Load Balancer.

What’s Next

The next step is to expose Istio, our service mesh.