Ways of Exposing Kubernetes Applications Externally
Kubernetes is an open-source container orchestration tool that automates operational tasks such as deploying, scaling, and managing containerized applications. It continuously works to keep resources in the desired state defined in their manifests.

A Pod is the smallest deployable unit in Kubernetes, and each pod can run multiple containers. Each pod gets an IP address based on the CNI plugin installed. Pods are not permanent: new pods are created on configuration changes, dynamic scaling, node replacement, and so on. Hence, we use a Service to expose pods as a network service.

Kubernetes Service

A Service identifies pods by their labels and has a stable virtual IP and DNS name within the cluster. How a Service is exposed is controlled by its type field:

  • ClusterIP: The default. Provides a private virtual IP reachable only from inside the cluster.
  • NodePort: Exposes the ClusterIP on a specific port on every worker node.
  • LoadBalancer: Builds on ClusterIP and NodePort and expects the cloud provider's service controller to create a load balancer. Depending on the load balancer type, it can add extra features; on AWS these include DDoS protection, SSL termination, WAF integration, etc.
The following is a service manifest for type LoadBalancer.
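A minimal sketch of such a manifest; the service name, label, and ports here are illustrative assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app                # illustrative name
spec:
  type: LoadBalancer
  selector:
    app: my-app               # must match the labels on the target pods
  ports:
    - port: 80                # port exposed by the load balancer
      targetPort: 8080        # container port inside the pods
```

Once applied on a cloud cluster such as EKS, the service controller provisions an external load balancer, and its address appears in the EXTERNAL-IP column of `kubectl get svc my-app`.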


If we have multiple endpoints to expose externally, using a separate load balancer for each service increases the infrastructure’s complexity and cost. Ingress solves this: together with services, it provides external access to pods through a single entry point.



Ingress is a built-in Kubernetes API object that exposes services externally. It holds a set of rules for routing traffic; each rule includes a host, a path, and the backend to which matching traffic is routed. It can also be configured for SSL termination and load balancing. You can define a default backend to handle requests that match no rule.

A basic ingress definition looks as follows.
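As a sketch — the host, service name, and port here are illustrative assumptions:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
    - host: app.example.com          # route traffic sent to this host
      http:
        paths:
          - path: /                  # match all paths under /
            pathType: Prefix
            backend:
              service:
                name: my-app         # backing service to forward to
                port:
                  number: 80
```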

No ingress controller is installed by default; we need to install one separately. There are ingress controllers from the major cloud providers as well as popular open-source ones such as the NGINX ingress controller. AWS provides the AWS Load Balancer Controller.

AWS Load Balancer Controller

This approach uses an external load balancer that routes traffic to the pods via the service. Target groups are configured with either IP addresses or instances as targets.

Annotations in the ingress manifest pass information to the ingress controller, such as the load balancer name, the target type (ip, which registers pods directly, or instance), whether the load balancer is internal or internet-facing, etc.
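For example, an ingress annotated for the AWS Load Balancer Controller might look like this; the load balancer name and backend service are illustrative assumptions, while the annotation keys are the controller's own:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    alb.ingress.kubernetes.io/load-balancer-name: my-alb   # name for the ALB to create
    alb.ingress.kubernetes.io/scheme: internet-facing      # or "internal"
    alb.ingress.kubernetes.io/target-type: ip              # register pod IPs directly
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
```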


For each ingress object, the controller configures target groups and listener rules on an ALB. The default ingressClassName for the AWS Load Balancer Controller is alb. A default listener rule returning a fixed 404 response is created. You can group multiple ingress resources behind a single load balancer with the alb.ingress.kubernetes.io/group.name annotation, and control their evaluation order with alb.ingress.kubernetes.io/group.order.

The controller can be configured to watch a single namespace or all namespaces; it currently does not support watching an arbitrary subset of namespaces. It natively supports HTTP-to-HTTPS redirection, WAF integration, and authentication.

Nginx Ingress Controller

In this approach, we use the NGINX ingress controller, an in-cluster layer-7 reverse proxy. It routes traffic from outside to internal resources according to its configuration. This adds an extra layer inside the cluster: the NGINX controller pods, themselves exposed by a service, take on the responsibility of routing traffic. Installation and configuration of the controller are the cluster operator's responsibility.


The default NGINX backend returns a 404 error; you can override it with the controller's controller.defaultBackend setting or with an annotation on the ingress.
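A sketch of the annotation form, assuming a fallback service named my-fallback exists in the same namespace:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    # serve my-fallback instead of the controller's 404 page when no rule matches
    nginx.ingress.kubernetes.io/default-backend: my-fallback
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
```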

Multiple ingress resources can be merged, but without the group and order controls available in the AWS Load Balancer Controller. The default ingress class name is nginx.


As we have seen, the AWS Load Balancer Controller provides a highly available, elastic approach and offloads the infrastructure burden from the cluster itself. The NGINX controller gives more flexibility, but it must be run and maintained inside the cluster, which adds some operational workload. Based on whether we prioritize less operational workload or more flexibility, we can choose the right controller, or even use multiple controllers for different use cases.


About CloudThat

CloudThat is an official AWS (Amazon Web Services) Advanced Consulting and Training Partner and a Microsoft Gold Partner, helping people develop cloud knowledge and helping businesses aim for higher goals using best-in-industry cloud computing practices and expertise. We are on a mission to build a robust cloud computing ecosystem by disseminating knowledge on technological intricacies within the cloud space. Our blogs, webinars, case studies, and white papers enable all stakeholders in the cloud computing sphere.

Drop a query if you have any questions regarding Kubernetes, and I will get back to you quickly.

To get started, go through our Consultancy page and Managed Services Package, CloudThat’s offerings.


1. What is the default load balancer type created for a service of type LoadBalancer in EKS?

ANS: – By default, it provisions a Classic Load Balancer, which is now considered a legacy ELB. If you need a specific ELB type, you can specify it with the service.beta.kubernetes.io/aws-load-balancer-type annotation (for example, nlb).
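For instance, a Service annotated to request an NLB — the service name, label, and ports here are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    # provision a Network Load Balancer instead of the default Classic ELB
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```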

2. How do you configure the AWS Load Balancer Controller to watch only a particular namespace?

ANS: – Set the watchNamespace value to watch a single namespace, or leave it unset to watch all namespaces.

WRITTEN BY Dharshan Kumar K S

Dharshan Kumar is a Research Associate at CloudThat. He has working knowledge of various cloud platforms such as AWS, Microsoft Azure, and GCP. He is interested in learning more about AWS's Well-Architected Framework and writes about it.


