MetalLB is a load-balancer implementation for bare metal Kubernetes clusters, using standard routing protocols. This chart bootstraps a MetalLB installation on a Kubernetes cluster using the Helm package manager, providing an implementation for LoadBalancer Service objects.

MetalLB is a cluster service, and as such can only be deployed as a cluster singleton. Running multiple installations of MetalLB in a single cluster is not supported.

Prerequisites

  • A Kubernetes cluster, running Kubernetes 1.9.0 or later, that does not already have network load-balancing functionality.
  • A cluster network configuration that can coexist with MetalLB.
  • Some IPv4 addresses for MetalLB to hand out.
  • Depending on the operating mode, you may need one or more routers capable of speaking BGP.

CONCEPTS

MetalLB hooks into your Kubernetes cluster, and provides a network load-balancer implementation. In short, it allows you to create Kubernetes services of type “LoadBalancer” in clusters that don’t run on a cloud provider, and thus cannot simply hook into paid products to provide load-balancers.

It has two features that work together to provide this service: address allocation, and external announcement.

Address Allocation

In a cloud-enabled Kubernetes cluster, you request a load-balancer, and your cloud platform assigns an IP address to you. In a bare metal cluster, MetalLB is responsible for that allocation.

MetalLB cannot create IP addresses out of thin air, so you do have to give it pools of IP addresses that it can use. MetalLB will take care of assigning and unassigning individual addresses as services come and go, but it will only ever hand out IPs that are part of its configured pools.

How you get IP address pools for MetalLB depends on your environment. If you’re running a bare metal cluster in a colocation facility, your hosting provider probably offers IP addresses for lease. In that case, you would lease, say, a /26 of IP space (64 addresses) and provide that range to MetalLB for cluster services.

Alternatively, your cluster might be purely private, providing services to a nearby LAN but not exposed to the internet. In that case, you could pick a range of IPs from one of the private address spaces (so-called RFC1918 addresses), and assign those to MetalLB. Such addresses are free and work fine as long as you’re only providing cluster services to your LAN.

Or, you could do both! MetalLB lets you define as many address pools as you want, and doesn’t care what “kind” of addresses you give it.
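
For illustration, here is a minimal sketch of what such pools look like in MetalLB’s ConfigMap-based configuration (older MetalLB releases read a ConfigMap named config in the metallb-system namespace; newer releases use IPAddressPool resources instead). The pool names and ranges below are placeholders: one stands in for a leased public range, the other for a private RFC1918 range.

  address-pools:
  - name: leased-public      # hypothetical pool, e.g. a /26 leased from your hosting provider
    protocol: layer2         # announcement protocol; see “External Announcement” below
    addresses:
    - 203.0.113.0/26
  - name: private-lan        # hypothetical pool drawn from RFC1918 space for LAN-only services
    protocol: layer2
    addresses:
    - 192.168.144.0/28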

External Announcement

Once MetalLB has assigned an external IP address to a service, it needs to make the network beyond the cluster aware that the IP “lives” in the cluster. MetalLB uses standard routing protocols to achieve this: ARP, NDP, or BGP.

Layer 2 mode (ARP/NDP)

In layer 2 mode, one machine in the cluster takes ownership of the service and uses standard address discovery protocols (ARP for IPv4, NDP for IPv6) to make those IPs reachable on the local network. From the LAN’s point of view, the announcing machine simply has multiple IP addresses.

The layer 2 mode sub-page has more details on the behavior and limitations of layer 2 mode.
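
As a sketch, a complete layer 2 configuration in the ConfigMap format looks like the following; the address range is a placeholder and must be a block of unused IPs on the nodes’ local network (depending on the chart version, the same content may be supplied through the chart’s values instead).

  apiVersion: v1
  kind: ConfigMap
  metadata:
    namespace: metallb-system   # the namespace MetalLB is commonly installed into
    name: config
  data:
    config: |
      address-pools:
      - name: default
        protocol: layer2
        addresses:
        - 192.168.1.240-192.168.1.250

With this in place, MetalLB elects one node to answer ARP (or NDP) queries for each assigned service IP in that range.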

BGP

In BGP mode, all machines in the cluster establish BGP peering sessions with nearby routers that you control and tell those routers how to forward traffic to the service IPs. Using BGP allows for true load balancing across multiple nodes, and fine-grained traffic control thanks to BGP’s policy mechanisms.

The BGP mode sub-page has more details on BGP mode’s operation and limitations.
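
As a sketch, a BGP configuration in the same ConfigMap format adds a peers section alongside the pool; the router address and AS numbers below are placeholders for a router you control.

  peers:
  - peer-address: 10.0.0.1     # hypothetical address of a router you control
    peer-asn: 64501
    my-asn: 64500
  address-pools:
  - name: default
    protocol: bgp
    addresses:
    - 198.51.100.0/24

Each node that can reach the peer establishes a session and advertises host routes for the assigned service IPs, letting the router spread traffic across multiple nodes.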

Kubernetes does not offer an implementation of network load balancers (Services of type LoadBalancer) for bare metal clusters. The network load-balancer implementations that Kubernetes does ship with are all glue code that calls out to various IaaS platforms (GCP, AWS, Azure…). If you’re not running on one of those platforms, LoadBalancer services will remain in the “pending” state indefinitely when created.

Bare metal cluster operators are left with two lesser tools to bring user traffic into their clusters, “NodePort” and “external IPs” services. Both of these options have significant downsides for production use, which makes bare metal clusters second-class citizens in the Kubernetes ecosystem.

MetalLB aims to redress this imbalance by offering a Network LB implementation that integrates with standard network equipment so that external services on bare metal clusters also “just work” as much as possible.
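
To make the contrast concrete, here is a plain Service of type LoadBalancer; the name, selector, and ports are hypothetical. Without a load-balancer implementation its external IP stays pending; with MetalLB installed and configured, the Service is assigned an address from one of the pools and announced to the network, and kubectl get service then shows that address under EXTERNAL-IP.

  apiVersion: v1
  kind: Service
  metadata:
    name: my-app              # hypothetical service fronting some Deployment
  spec:
    type: LoadBalancer
    selector:
      app: my-app
    ports:
    - port: 80
      targetPort: 8080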
