Creating Internal and External Load Balancers from EKS – Kubernetes - Middleware Inventory (2023)

In this article, we are going to see how to create internal and external load balancers using the Kubernetes service on EKS

When we think about making an application or deployment available externally or internally over a Domain name

We think of Kubernetes Ingress, but that’s not the only way to expose your service.

There are three ways to make a service available externally (irrespective of whether the load balancer is external or internal).


Three Ways to expose a Service through AWS Load Balancer

Setting the Service type to NodePort

Services that need to be exposed to the outside world can be configured with the NodePort type. In this method, each cluster node opens a port on the node itself (hence the name) and redirects traffic received on that port to the underlying service.

You can add selected machines/nodes to a target group of an existing or new load balancer.

The target group would use the NodePort as the target port.
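As a quick sketch of what that looks like, here is a minimal NodePort service for the Tomcat deployment used later in this article (the service name and the nodePort value are illustrative assumptions, not from the original):

```yaml
# Sketch of a NodePort service; the name and nodePort value are assumptions
apiVersion: v1
kind: Service
metadata:
  name: tomcat-nodeport-svc
spec:
  type: NodePort
  selector:
    app: tomcatinfra
  ports:
    - port: 80          # service port inside the cluster
      targetPort: 8080  # container port the app listens on
      nodePort: 30080   # port opened on every node (must be in the 30000-32767 range)
```

With this in place, you would register the nodes in a target group pointing at port 30080.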

Setting the Service type to LoadBalancer ( Discussed in this article )

An extension of the NodePort type: this makes the service accessible through a dedicated load balancer, provisioned from the cloud infrastructure Kubernetes is running on.

The load balancer redirects traffic to the node port across all the nodes. Clients connect to the service through the load balancer’s IP.

In this type, load balancer creation and target group creation are done automatically.

In the case of an EKS cluster, the AWS Load Balancer Controller takes care of that job; it needs to be manually installed and set up.

Creating an Ingress resource ( Not covered in this article )

A radically different mechanism for exposing multiple services through a single IP address. It operates at the HTTP level (layer 7) and can thus offer more features than layer 4 services can. It is comparable to an AWS Application Load Balancer.

We are not going to cover Ingress in this article; we have a dedicated article for K8s Ingress you can refer to:

Kubernetes Ingress Example on Google Cloud

In this article we are going to use only a Service, with the type LoadBalancer.

Setting the Service type to LoadBalancer does not automatically create the load balancer for you.


For EKS to be able to create a load balancer, certain configurations and service accounts need to be in place.

To enable EKS to create and manage AWS load balancers, we need to deploy an additional controller named AWS Load Balancer Controller, which was earlier known as the AWS Ingress Controller.

Installing the AWS LB Controller add-on to EKS

Here are the quick installation and configuration steps to install AWS LB Controller on your EKS Cluster.

We presume you have already created an EKS cluster.

If you have not created one yet, refer to this article to create one: EKS Cluster with Karpenter Autoscaling – Terraform

To manage your existing EKS cluster, AWS provides a CLI named eksctl, which you can download/install from here.

eksctl can make use of your awscli profiles for authentication and to communicate with your AWS account.
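For example, assuming you already have a named profile configured with aws configure (the profile name devops here is an illustrative assumption), you can point eksctl at it like this:

```shell
# Select a named awscli profile; eksctl picks it up from the environment
export AWS_PROFILE=devops

# Sanity check: list the clusters eksctl can see in the region
eksctl get cluster --region us-east-2
```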

Enabling OIDC in our EKS Cluster

Let us begin by enabling OpenID Connect (OIDC) in our EKS cluster. This is a feature that lets you associate IAM roles directly with Kubernetes service accounts. Read more about it here.

⇒ eksctl utils associate-iam-oidc-provider \
    --region us-east-2 \
    --cluster gritfyeks \
    --approve

Creating IAM Policy

Once we have enabled OIDC in our EKS cluster, we can go ahead and download the iam_policy configuration that needs to be created.

You can directly download it using curl

⇒ curl -o iam_policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.2.0/docs/install/iam_policy.json

The content of the iam_policy.json file is available on GitHub if you would like to copy it directly.

Now you can use this JSON file to create your IAM policy using the aws iam create-policy command:

⇒ aws iam create-policy \
    --policy-name AWSLoadBalancerControllerIAMPolicy \
    --policy-document file://iam_policy.json

Creating an IAM Service Account and Attaching the Policy

Now it’s time to create an iamserviceaccount in our EKS cluster. We are going to use eksctl for this.

You need to update the cluster name and region before trying this out.

This creates a service account named aws-load-balancer-controller in the kube-system namespace, and the service account is associated with the IAM policy we created earlier.

You might have to change just the following things before running the command:

  • Cluster name
  • Policy ARN (just your AWS account number is enough)
  • AWS Region
⇒ eksctl create iamserviceaccount \
    --cluster ${YOUR_CLUSTER_NAME} \
    --region ${YOUR_AWS_REGION} \
    --namespace kube-system \
    --name aws-load-balancer-controller \
    --attach-policy-arn arn:aws:iam::${YOUR_AWS_ACCOUNT_NUMBER}:policy/AWSLoadBalancerControllerIAMPolicy \
    --override-existing-serviceaccounts \
    --approve

Now that we have the necessary service account and OIDC provider in place, we can go ahead and deploy the aws-load-balancer-controller using Helm.

Installing AWS load balancer controller in EKS with Helm

If you do not have Helm installed on your local system, please install it before continuing. You can find more information about Helm here.

Let us begin by adding the necessary chart repository to Helm:

# helm repo add eks https://aws.github.io/eks-charts
# helm repo update

Once you have executed helm repo add and helm repo update, you are good to install the aws-load-balancer-controller:

⇒ helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
    -n kube-system \
    --set clusterName=${Your Cluster Name} \
    --set serviceAccount.create=false \
    --set serviceAccount.name=aws-load-balancer-controller

Once the Helm chart is successfully deployed, we can verify by listing the aws-load-balancer-controller deployment in the kube-system namespace:

⇒ kubectl get deployments aws-load-balancer-controller -n kube-system
NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
aws-load-balancer-controller   2/2     2            2           56s

Once you have validated that the deployment is present and live, we can move on to the next phase of this article.

We are going to deploy and test some sample applications in our EKS Cluster.

This is the Image or Application we are going to deploy.

Docker Tomcat Example – Dockerfile for Tomcat, Docker Tomcat Image

Creating a new deployment in EKS Cluster

To test the load balancer, we need to first deploy some applications to the Kubernetes cluster.

I have taken our famous saravak/tomcat8 image and deployed it to the cluster with the following single-line command:

⇒ kubectl create deployment tomcatinfra --image=saravak/tomcat8
deployment.apps/tomcatinfra created

But in real life you would ideally create a deployment with a YAML file, with many more customizations.

Since our objective is to test the load balancer with EKS, I am speeding things up with a single-line deployment creation.

Now the deployment is created. The next stage is where the Load Balancer is going to be created.

Creating an AWS External Load Balancer – with a K8s Service on EKS

Now we need to expose our application as a service. To keep things simple, we are going to use a one-liner command for this:

⇒ kubectl expose deployment tomcatinfra --port=80 --target-port=8080 --type LoadBalancer
service/tomcatinfra exposed

When you run the kubectl expose command with your deployment name and the port, the service is auto-created.

Here is the YAML file for the service, if you do not want to use the one-liner command:

apiVersion: v1
kind: Service
metadata:
  name: tomcatappsvc
spec:
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  ports:
    - name: http
      port: 80
      targetPort: 8080
  selector:
    app: tomcatinfra
  type: LoadBalancer

As you can see, we are also defining what type of service has to be created, using --type LoadBalancer in both formats.

If all configurations are in place and done right, you will see that a new load balancer has been created.

By default, when you expose a service, it becomes a publicly available load balancer. To make it private we need special annotations; we will get there shortly.

For now, we have seen how to create an external load balancer with the aws-load-balancer-controller and expose our deployment as a service.

Now let us access our application to validate that it is reachable.
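One way to validate it, sketched here, is to read the load balancer hostname from the service status and curl it. The service name tomcatinfra matches the kubectl expose command above; note that the DNS name may take a minute or two to start resolving after the load balancer is provisioned.

```shell
# Grab the external load balancer hostname from the service status
LB_HOST=$(kubectl get svc tomcatinfra \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')

# Hit the Tomcat app through the load balancer
curl -I "http://${LB_HOST}/"
```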

(Video) SREcon16 Europe - Scaling Shopify's Multi-Tenant Architecture across Multiple Datacenters

Creating an AWS Internal Load Balancer – with a K8s Service on EKS

We have seen how to create an external load balancer with a service; we used a one-liner command to expose our deployment.

It has created the External Load Balancer automatically.

Now we are going to see how to create an internal load balancer with a Service.

By default, when you create a service, it exposes the load balancer to the public, but this can be controlled using certain annotations.

Let us take a look at the YAML file we are going to use to create our service.

As the Classic Load Balancer is going to be deprecated by AWS shortly, I have chosen NLB. So we are now going to create an internal Network Load Balancer with EKS.

apiVersion: v1
kind: Service
metadata:
  name: tomcatappsvc
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    service.beta.kubernetes.io/aws-load-balancer-scheme: internal
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  ports:
    - name: http
      port: 80
      targetPort: 8080
  selector:
    app: tomcatinfra
  type: LoadBalancer

Our deployment is still the same as we have used in the last example.

kubectl create deployment tomcatinfra --image=saravak/tomcat8

This is a simple Tomcat application that exposes port 8080; in our service that is the target port, and our service load balancer is going to listen on port 80.

The targetPort should always point to the port the application exposes: 80 => 8080.

Now let us create the service using this YAML file and validate.

kubectl apply -f internalservice.yml
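After applying, a quick way to confirm the service and its internal load balancer hostname is sketched below (this assumes the service in internalservice.yml is named tomcatappsvc, like the earlier example):

```shell
# The EXTERNAL-IP column shows the NLB DNS name once it is provisioned;
# for an internal scheme it typically starts with "internal-"
kubectl get svc tomcatappsvc

# The same hostname appears under "LoadBalancer Ingress" in describe output
kubectl describe svc tomcatappsvc | grep -i ingress
```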

Here is a quick screen recording of me applying this YAML file in my EKS cluster.


As you can see in the screen recording, the internal load balancer was created.

Here are some more screenshots I have taken from the AWS console for the Load Balancer

As you can see, a new Network Load Balancer has been created and the scheme is set to internal.

This means we have successfully created an internal load balancer using the LB Controller.

Since this is an internal load balancer, you have to be present within the VPC internal network to be able to access this.

If you have a VPN server, that is the best way to connect to the internal network.

Otherwise, you can launch a machine in the VPC and try to access it from there; that is what I am going to do.


I launched a new Windows server in the same VPC we created manually and am going to test this load balancer from there.

You can see that the URL is available and accessible from the Windows server launched in the same VPC, but not externally.

The only difference between external and internal load balancer creation is the annotations we set.

There are more annotations available, which you can explore in the List of EKS Load Balancer Annotations.

More Annotations to Customize the LoadBalancer Creation

We have used only the few annotations necessary for making the LB an internal load balancer.

I know there are more options to configure/customize during the LB creation and there are more annotations available as well.

Here are some things you can customize and control using Annotations.

  • Healthcheck attributes
  • Enabling Access log
  • Adding SSL Certificate and TLS Listener to your NLB
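As an illustrative, hedged sketch of those three customizations on an NLB service (the certificate ARN, account number, and bucket name are placeholders, not values from this article; see the annotation list linked below for the authoritative reference):

```yaml
metadata:
  annotations:
    # Health check tuning
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: HTTP
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-path: /
    # TLS listener backed by an ACM certificate (ARN is a placeholder)
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-2:111122223333:certificate/EXAMPLE
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    # Access logs shipped to S3 (bucket name is a placeholder)
    service.beta.kubernetes.io/aws-load-balancer-attributes: access_logs.s3.enabled=true,access_logs.s3.bucket=my-lb-logs
```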

Find the complete list of LoadBalancer Annotations for EKS here

Conclusion

In this article, we have seen how to create Internal and external load balancers using LB Controller on EKS.

We installed the LB Controller on our EKS cluster, deployed a sample application, exposed it as a service, and created both an internal Network Load Balancer and an external load balancer.

We also validated the internal NLB with a machine created and available on the local VPC.

If you have any questions, please let me know in the comments section.

For any professional support reach us at [emailprotected]

Thanks
Sarav


