Kubernetes LoadBalancer Installation and Configuration

Kubernetes is the de facto standard for running containers with high availability and many other great features, such as auto-scaling, automation, and service metadata. However, one of the crucial components of running Kubernetes to serve production applications effectively is the Kubernetes load balancer. What is the Kubernetes load balancer, and how is it implemented?

Introduction to Kubernetes LoadBalancer

Load balancers are generally placed in front of critical application servers and evenly distribute incoming traffic load so that all servers are used, increasing availability.

Cloud providers can provide upstream external load balancers that can be used for load balancing your Kubernetes services. In addition, you can use bare metal Kubernetes load balancer solutions on-premises.

By default, Kubernetes services are not exposed with external IP addresses, so outside network traffic cannot reach them. Instead, each service is configured with a cluster IP address that has no external access.


External traffic must reach an external IP address and service port to communicate with services inside the Kubernetes cluster. Load balancers and ingress controllers are how most modern Kubernetes clusters expose services to the outside world.

Communicating with an Internal Kubernetes Service

By default, Kubernetes has three service types you can configure for services running in the Kubernetes cluster:

  • ClusterIP service
  • NodePort service
  • LoadBalancer service

Below, you can see a Kubernetes cluster with services configured with different service types. Note how the LoadBalancer service type is the only one configured with an external IP, which can be a public IP address.

The service definition specifies which service type is used when a new service is created. Note the different service types below in the output of kubectl get all -A.

Viewing ClusterIP, NodePort, and LoadBalancer service types in a Kubernetes cluster

ClusterIP Service

The ClusterIP service is the default service type. It assigns the service an IP address dynamically provisioned from a private network range that exists only inside the Kubernetes cluster itself.

Kubernetes pods can communicate with one another directly via internal routing within the cluster. However, you cannot connect to the internal cluster IP addresses from outside the cluster without an ingress controller or external load balancer exposing the services.

The default service type in Kubernetes is ClusterIP
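As a minimal sketch (the service and label names here are illustrative, not from the cluster shown above), a ClusterIP service manifest looks like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-world        # hypothetical service name
spec:
  type: ClusterIP          # the default; may be omitted
  selector:
    app: hello-world       # matches the labels on the backing pods
  ports:
    - port: 80             # cluster-internal port of the service
      targetPort: 80       # container port traffic is forwarded to
```

Because the type is ClusterIP, this service receives only an internal IP and is reachable solely from inside the cluster.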

NodePort Service

The NodePort service type configures a static external port that maps to the internal service port. You will need a separate external port for each internal service, as these can't overlap.

Kubernetes NodePort services can only use ports in the default range 30000-32767. This can present challenges due to the limited number of ports and firewall rules that must allow non-standard ports.

A NodePort service is exposed on the IP address of every Kubernetes node, whether the nodes run on bare metal or as virtual machines.
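A NodePort manifest might look like the following sketch (names are illustrative); the nodePort field is optional, and Kubernetes picks a free port from the range if it is omitted:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-world-nodeport   # hypothetical service name
spec:
  type: NodePort
  selector:
    app: hello-world           # matches the labels on the backing pods
  ports:
    - port: 80                 # cluster-internal service port
      targetPort: 80           # container port
      nodePort: 30080          # must fall in the 30000-32767 range
```

The service is then reachable at http://&lt;any-node-ip&gt;:30080 from outside the cluster.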

LoadBalancer Service Type

The LoadBalancer type maps an external IP address on an external load balancer to the internal cluster IP addresses. You can see the assigned load balancer IP and internal cluster IP addresses when you edit a service.

Viewing a service manifest defining the service type

When the service pods spin up, a new IP address is provisioned for the service from the external load balancer. A LoadBalancer service also gets health checks. If a health check fails, the load balancer removes the failing instances, and traffic is no longer forwarded to them.


Creating Services with Kubectl

With the kubectl expose command, you can create a new service and expose the ports to the relevant Kubernetes pods.

kubectl expose deployment hello-world --port=80 --target-port=80

Creating Services with a Service Manifest

You can also use a service manifest to create services and define the service type. The service manifest is a great way to define services with DevOps processes, infrastructure as code, etc. In the service spec, the service type and other configurations are set.
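As a sketch of such a manifest (the names are illustrative), here is a LoadBalancer service equivalent to the kubectl expose command above, but with the service type set explicitly in the spec:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-world-lb     # hypothetical service name
spec:
  type: LoadBalancer       # request an external load balancer IP
  selector:
    app: hello-world       # matches the labels on the backing pods
  ports:
    - port: 80             # port exposed on the load balancer
      targetPort: 80       # container port
```

You would apply it with kubectl apply -f service.yaml; the EXTERNAL-IP column of kubectl get services then shows the address assigned by the load balancer once provisioning completes.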

Service Controller

In Kubernetes, the service controller watches Kubernetes service objects for changes. It can then create, update, or delete load balancers in the cloud. Glue code in the cluster calls out to the cloud service provider's APIs to manage the load balancer functions that handle client requests.

Service Discovery

Kubernetes service discovery is the mechanism that allows an application running on a group of pods to be exposed as a network service. With service discovery, multiple pods are reachable under a single DNS name.

This construct is important because it allows Kubernetes internal load balancing to distribute traffic between the pods. If any of the pods behind the service fail, traffic is forwarded only to the remaining healthy pods.

Load Balancing the Kubernetes Control Plane

In addition to load balancing application services, it is essential to load balance the Kubernetes control plane API, which runs on port 6443 by default. You can balance traffic between your control plane nodes using a virtual IP address shared between them. If one control plane node fails, API traffic fails over to a healthy node.
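One common way to do this (a sketch, not the only option; the node IPs and names are placeholders) is an HAProxy TCP frontend on port 6443 balancing across the control plane nodes, often paired with keepalived to float the virtual IP between load balancer hosts:

```haproxy
frontend kube-apiserver
    bind *:6443
    mode tcp
    default_backend control-plane

backend control-plane
    mode tcp
    balance roundrobin
    option tcp-check
    # placeholder control plane node addresses; health-checked via TCP
    server cp1 10.0.0.11:6443 check
    server cp2 10.0.0.12:6443 check
    server cp3 10.0.0.13:6443 check
```

Because the API server speaks TLS end to end, mode tcp passes the encrypted traffic through without terminating it at the load balancer.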

Adding a Load Balancer to a Kubernetes Cluster

Kubernetes does not ship a built-in network load balancer for bare-metal clusters. It only contains code that calls out to cloud provider environments for load balancer IP addresses when a service of type LoadBalancer spins up. On bare metal, it expects an external load balancer implementation to assign IP addresses to LoadBalancer services.


Ingress Controller

An ingress controller is one of the most popular and robust ways to handle load balancing of services. It provides the most sophisticated approach and is what many use in their Kubernetes clusters to expose production services and route traffic to the internal services.

Ingress controllers let you define routing rules for incoming traffic, including routing based on the fully qualified domain name. This way, you don't have to dedicate a NodePort to each service to get traffic into it.
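An Ingress resource expressing such an FQDN-based rule might look like this sketch (the hostname and service name are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ingress      # hypothetical name
spec:
  rules:
    - host: hello.example.com    # route by fully qualified domain name
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello-world   # internal service to route to
                port:
                  number: 80
```

The ingress controller watches these resources and forwards any request whose Host header matches hello.example.com to the hello-world service.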

A very popular ingress controller for bare-metal Kubernetes is Traefik. Below is the Traefik dashboard, which you can expose to visualize ingress routes and other services. Click a service name for more detail.

Viewing the Traefik dashboard

Installing MetalLB Kubernetes Load Balancer

MetalLB is a bare-metal Kubernetes load balancer that provides a quick and easy load balancer for self-hosted Kubernetes clusters. In a solution like Microk8s, you can easily install MetalLB using the following command:

sudo microk8s enable metallb:<IP address range>

Below, we have created a new MetalLB deployment with the IP address range configured.

Enabling and configuring MetalLB in Microk8s

MetalLB is not a DHCP server, so you must carve out a range of IP addresses from your network that you want to use for exposing services, and define this range in the MetalLB configuration, as shown above.

MetalLB can operate in Layer 2 mode, which uses ARP to advertise the IP addresses assigned to the services, or it can use BGP. To test the MetalLB installation, we can deploy a simple service in Kubernetes and make sure it receives an IP address.
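Outside of Microk8s's addon wrapper, recent MetalLB releases are configured with custom resources in the metallb-system namespace. A sketch with an illustrative address range, pairing an address pool with a Layer 2 advertisement:

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250   # carved-out range; adjust to your network
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool                  # advertise this pool via ARP
```

With these in place, MetalLB assigns LoadBalancer services addresses from the pool and answers ARP requests for them on the local network.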

Let’s deploy a quick NGINX web server.

sudo microk8s helm repo add bitnami https://charts.bitnami.com/bitnami

sudo microk8s helm install <your server name> bitnami/nginx
Installing an easy NGINX web server for testing

Now we can check whether the service has been assigned an IP address from the MetalLB pool. We can issue the command kubectl get all -A and view the services. The new web server has correctly received an IP in the range defined in the MetalLB configuration.

Viewing the IP address of the NGINX server configured with the MetalLB load balancer

Wrapping Up

The Kubernetes load balancer is essential for hosting production containerized services in Kubernetes clusters. It exposes internal Kubernetes services on IP addresses reachable by external clients. Self-hosted Kubernetes clusters present an additional challenge, since you must supply your own load balancer solution, whereas cloud service providers do this for you. However, MetalLB and other open-source Kubernetes load balancers are simple to use and provide great load-balancing features for your K8s pods.

