Adding kube-vip after setting up the cluster

2024-05-26

Network preparation

First, we need to decide what the virtual IP (VIP) for the control plane will be: 192.168.1.30 in my case.
Since I decided to reuse the IP that my DHCP server had initially assigned to the only existing control plane node, I needed to make a couple of changes.

The initial configuration was:
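(A sketch, assuming a dnsmasq-based DHCP server; the MAC address aa:bb:cc:dd:ee:ff and the host name are hypothetical.)

# hypothetical dnsmasq reservation: the control plane node owns 192.168.1.30
dhcp-host=aa:bb:cc:dd:ee:ff,192.168.1.30,k3s-server-1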

I changed it to:
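(Again a dnsmasq-style sketch: the node moves to the hypothetical 192.168.1.31, freeing .30 for the VIP.)

# the node now receives 192.168.1.31, leaving 192.168.1.30 free for the VIP
dhcp-host=aa:bb:cc:dd:ee:ff,192.168.1.31,k3s-server-1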

Warning

If you want to use kubectl at this stage, you need to change the IP address in your client’s ~/.kube/config file. This is only a temporary change, since later you will use the VIP address instead.
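For example (a sketch: 192.168.1.31 stands in for whatever new address your node received):

# temporarily point kubectl at the node's new address until the VIP is up
sed -i 's|server: https://192.168.1.30:6443|server: https://192.168.1.31:6443|' ~/.kube/config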

Note

Since the VIP is the same as the address of the previously sole control plane node, I don’t need to change the worker configuration: the workers will simply end up communicating with the load-balanced VIP instead of directly with the control plane node.
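You can confirm this on a worker: assuming the agent was installed with the standard K3s install script, its server URL already points at what is now the VIP:

# on a worker node: the agent already targets the VIP, so nothing to change
grep K3S_URL /etc/systemd/system/k3s-agent.service.env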

Setting up the control plane

As stated in the main article¹, K3s already provides a service load balancer; we want to disable it, since it will be replaced by kube-vip. We also want to let K3s know that it will be reached through a public IP that is different from the node’s own.
To make these changes, we need to update the K3s service start command by editing the /etc/systemd/system/k3s.service file and make the execution command look like this:

ExecStart=/usr/local/bin/k3s \
    server \
    --disable=traefik \
    --disable=servicelb \
    --tls-san=192.168.1.30

This will be needed for each new control plane node we are going to add.
To apply the changes you need to, in order: stop the service, edit the file, reload the systemd configuration, and start the service again:

service k3s stop
vi /etc/systemd/system/k3s.service   # or your editor of choice
systemctl daemon-reload
service k3s start

If the service is not stopped, the changes might not take effect.
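As a quick sanity check, the ServiceLB pods, which K3s names with an svclb- prefix, should disappear from kube-system:

# no svclb-* pods should remain once servicelb is disabled
kubectl -n kube-system get pods | grep svclb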

Attention

At this point, your Services will not be able to obtain a public IP and therefore will not work.

Then, following the documentation, we want to locate the auto-deploy folder on our control plane, which is /var/lib/rancher/k3s/server/manifests.

Note

The kube-vip documentation assumes we are creating a new cluster; since ours is already running, we will find the folder already created.

The next step is to create the manifests needed to run kube-vip within the cluster, which we will then copy into the auto-deploy folder. So SSH into the control plane node and go ahead:

# fetch the RBAC manifest as the basis of the combined manifest
curl https://kube-vip.io/manifests/rbac.yaml > ~/kube-vip.yaml
echo "---" >> ~/kube-vip.yaml

# the VIP chosen earlier and the latest kube-vip release
export VIP=192.168.1.30
export KVVERSION=$(curl -sL https://api.github.com/repos/kube-vip/kube-vip/releases | jq -r ".[0].name")

# run the kube-vip image through containerd to generate the manifest
alias kube-vip="ctr image pull ghcr.io/kube-vip/kube-vip:$KVVERSION; ctr run --rm --net-host ghcr.io/kube-vip/kube-vip:$KVVERSION vip /kube-vip"

# append the generated DaemonSet manifest to the RBAC one
kube-vip manifest daemonset \
    --address $VIP \
    --inCluster \
    --taint \
    --controlplane \
    --services \
    --arp \
    --leaderElection >> ~/kube-vip.yaml

# hand everything over to the K3s auto-deploy folder
sudo mv ~/kube-vip.yaml /var/lib/rancher/k3s/server/manifests

Let’s dive into the details of the flags:

- --address: the VIP that kube-vip will advertise.
- --inCluster: use the in-cluster ServiceAccount created by the RBAC manifest instead of a kubeconfig file.
- --taint: restrict the DaemonSet to run only on control plane nodes.
- --controlplane: enable the control plane VIP, i.e. load balancing of the Kubernetes API server.
- --services: enable load balancing for Services of type LoadBalancer.
- --arp: advertise the VIP through ARP (layer 2).
- --leaderElection: use Kubernetes leader election so that, in ARP mode, only one node at a time holds the VIP.

Note

The documentation also passes the --interface flag when creating the manifest; I decided not to pass it, as omitting it enables automatic interface selection.

Todo

I blindly followed the documentation without even trying to apply the configs with kubectl; this might be interesting to test, as it fits better with the way I handle the configuration.
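Before relying on it, you can check that K3s picked the manifest up and that the DaemonSet is running (kube-vip-ds is the name recent kube-vip versions generate; adjust if yours differs):

# the manifest should sit in the auto-deploy folder...
sudo ls /var/lib/rancher/k3s/server/manifests
# ...and the DaemonSet should show one pod per control plane node
kubectl -n kube-system get daemonset kube-vip-ds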

After a while, you will be able to use kubectl as always, now through the VIP address.
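Once it responds, you can point your kubeconfig back at the VIP, reverting the temporary change from the warning above (192.168.1.31 is again the hypothetical interim node address):

# switch the client back to the VIP now that kube-vip answers on it
sed -i 's|server: https://192.168.1.31:6443|server: https://192.168.1.30:6443|' ~/.kube/config
kubectl get nodes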

The last step is to finally enable load balancing for Services, which is done through a cloud controller. The cloud controller is in charge of handling any network responsibility specific to the environment where the cluster is “located”, such as providing a public IP address to expose Services. It communicates with Kubernetes through the cloud controller manager, which is a control plane component.

kube-vip provides the kube-vip-cloud-controller component, which is responsible for setting the Service IP address; the address is then picked up by the kube-vip pods and, in my case, advertised through ARP.

The installation is straightforward. First, we need to create a ConfigMap containing the address(es) we want assigned to the Services. In my case, I decided to have Ingress-NGINX as the reverse proxy for all the Services, so I only need one IP address, the same one I used for the control plane:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kubevip
  namespace: kube-system
data:
  cidr-global: 192.168.1.30/32

More options are available to manage the IP addresses, such as per-namespace CIDRs and explicit address ranges.
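For instance, following the documented cidr-<namespace>/range-<namespace> key naming (the addresses below are hypothetical):

apiVersion: v1
kind: ConfigMap
metadata:
  name: kubevip
  namespace: kube-system
data:
  cidr-default: 192.168.1.64/28               # CIDR reserved for Services in the "default" namespace
  range-global: 192.168.1.210-192.168.1.219   # explicit range for every other namespace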

I then created a Kustomization file that pulls in the required manifests together with the ConfigMap itself:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - configmap.yaml
  - https://raw.githubusercontent.com/kube-vip/kube-vip-cloud-provider/main/manifest/kube-vip-cloud-controller.yaml
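With the ConfigMap and the Kustomization in the same directory, deploying is a single command:

kubectl apply -k .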

After this last step, Ingress-NGINX will be assigned the 192.168.1.30 IP address and will start receiving traffic again.
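To verify, the ingress controller’s Service should now report the VIP as its external IP (namespace and Service name assume a standard Ingress-NGINX installation; adjust to your setup):

# the EXTERNAL-IP column should show 192.168.1.30
kubectl -n ingress-nginx get svc ingress-nginx-controller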


1. K3s load balancing with kube-vip