2024-05-26
The first thing we need to decide is the virtual IP (VIP) that will be used for the control plane: 192.168.1.30 in my case.
Since I decided to use the IP that my DHCP had initially assigned to the only existing control plane, I needed to make a couple of changes.
The initial configuration was:

- 192.168.1.30: control plane node
- 192.168.1.31: worker node

I changed it to:

- 192.168.1.30: VIP (no longer assigned to any node)
- 192.168.1.31: worker node
- 192.168.1.34: control plane node

leaving .32 and .33 free, as I’m planning to add 2 additional control planes in the near future.

[!WARNING]
If you want to use `kubectl` at this stage, you need to change the IP address in the client `~/.kube/config` file. This would only be a temporary change, since later you will use the VIP address instead.
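In my case this means temporarily pointing the client at the control plane node’s new address (192.168.1.34). Assuming a stock K3s kubeconfig, where the cluster entry is named `default`, the switch can be done without editing the file by hand:

```sh
# Temporarily talk to the API server via the node's new IP;
# this will be switched to the VIP once kube-vip is up.
kubectl config set-cluster default --server=https://192.168.1.34:6443
```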
[!NOTE]
Since the VIP is the same as the IP previously assigned to the only existing control plane node, I don’t need to change the worker configuration: the worker will later communicate with the load-balanced VIP instead of directly with the control plane node.
As stated in the main article¹, K3s already provides a service load balancer; we want to disable it, since it will be replaced by kube-vip. We also want to let K3s know that it will be called through a public IP that is different from the node’s own.
To make these changes, we need to update the K3s service start command by editing the `/etc/systemd/system/k3s.service` file and making the execution command look like this:
ExecStart=/usr/local/bin/k3s \
server \
--disable=traefik \
--disable=servicelb \
--tls-san=192.168.1.30 \
This will be needed for each new control plane node we are going to add.
To apply the changes you need to, in order, stop the service, edit the file, reload the systemd configuration, and start the service again:
service k3s stop
# edit /etc/systemd/system/k3s.service as shown above
systemctl daemon-reload
service k3s start
If the service is not stopped, the changes might not take effect.
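Nothing kube-vip specific here, but it’s worth making sure the service came back up with the new flags before moving on; for example:

```sh
# The running k3s process listed under CGroup should now include
# --disable=servicelb and the --tls-san entry for the VIP.
systemctl status k3s --no-pager
```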
[!ATTENTION]
At this point, your Services will not be able to obtain a public IP and therefore will not work.
Then, following the documentation, we want to locate the auto-deploy folder on our control plane, which is `/var/lib/rancher/k3s/server/manifests`.
[!NOTE]
The kube-vip documentation assumes we are creating a new cluster; since ours is already running, we will find the folder already created.
The next step is to create the manifests needed to run kube-vip within the cluster, which we will then copy into the auto-deploy folder. So SSH into the control plane node and go ahead:
curl https://kube-vip.io/manifests/rbac.yaml > ~/kube-vip.yaml
echo "---" >> ~/kube-vip.yaml
export VIP=192.168.1.30
export KVVERSION=$(curl -sL https://api.github.com/repos/kube-vip/kube-vip/releases | jq -r ".[0].name")
alias kube-vip="ctr image pull ghcr.io/kube-vip/kube-vip:$KVVERSION; ctr run --rm --net-host ghcr.io/kube-vip/kube-vip:$KVVERSION vip /kube-vip"
kube-vip manifest daemonset \
--address $VIP \
--inCluster \
--taint \
--controlplane \
--services \
--arp \
--leaderElection >> ~/kube-vip.yaml
sudo mv ~/kube-vip.yaml /var/lib/rancher/k3s/server/manifests
Let’s dive into the details:

- `curl https://kube-vip.io/manifests/rbac.yaml > ~/kube-vip.yaml` just retrieves the required access control (RBAC) manifests that give kube-vip the rights to watch Kubernetes objects;
- `echo "---" >> ~/kube-vip.yaml` adds a document separator to our YAML file, since I decided to pipe all the configuration into a single file;
- the `VIP` and `KVVERSION` variables are set so we can use the values later;
- the `kube-vip` alias is created for cleaner execution later; it basically runs a command on a downloaded Docker image of kube-vip;
- `kube-vip manifest` creates the actual YAML manifest (which is eventually redirected to the previously created file); here is an explanation of the params we are passing in:
  - `--address $VIP`: the control plane load balancer virtual IP;
  - `--inCluster`: required when running as a DaemonSet; makes kube-vip run with a ServiceAccount named `kube-vip`;
  - `--taint`: adds node affinity rules forcing the kube-vip pods to only run on control plane nodes;
  - `--controlplane`: enables the control plane load balancing;
  - `--services`: enables the service load balancing;
  - `--arp`: uses the ARP mode (or layer 2); a node is assigned the virtual IP and traffic will pass through it;
  - `--leaderElection`: used in ARP mode to elect a leader, which broadcasts the VIP.

[!NOTE]
The documentation also creates the manifest passing the `--interface` flag; I decided not to pass it, as this enables automatic interface selection.
[!TODO]
I blindly followed the documentation without even trying to apply the configs with `kubectl`; this might be interesting to test, as it fits better the way I handle the configuration.
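If you want to try that route, the starting point would presumably be something along these lines (just a sketch I have not tested; it assumes the generated `~/kube-vip.yaml` is applied before being moved away):

```sh
# Apply the combined RBAC + DaemonSet manifest directly,
# instead of relying on the K3s auto-deploy folder.
kubectl apply -f ~/kube-vip.yaml
```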
After a while you will be able to use `kubectl` as always with the VIP address.
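To double check that everything is in place, I find something along these lines useful (again, `default` is the cluster name a stock K3s kubeconfig uses; adjust it if yours differs):

```sh
# Point the kubeconfig back at the VIP now that kube-vip is serving it.
kubectl config set-cluster default --server=https://192.168.1.30:6443
# The kube-vip DaemonSet pods should be running on the control plane node(s).
kubectl -n kube-system get pods -o wide | grep kube-vip
```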
The last step is to finally enable load balancing for Services, which is done through a cloud controller. The cloud controller is in charge of handling any network responsibility specific to the environment where the cluster is “located”, such as providing a public IP address to expose Services. The cloud controller communicates with Kubernetes through the cloud controller manager, which is a control plane component.
kube-vip provides the kube-vip-cloud-controller component, which is responsible for setting the Service IP address; the address is then picked up by the kube-vip pods and, in my case, advertised through ARP.
The installation is straightforward: first, we need to create a ConfigMap containing the address(es) we want assigned to the Services. In my case, I decided to have Ingress-NGINX as the reverse proxy for all the Services, so I only need 1 IP address, the same one I used for the control plane:
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubevip
  namespace: kube-system
data:
  cidr-global: 192.168.1.30/32
More options are available to manage the IP addresses.
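For example, as far as I understand the kube-vip-cloud-provider options, pools can also be scoped to a single namespace through `cidr-<namespace>` or `range-<namespace>` keys; a hypothetical variant (the namespace and range below are just placeholders) could look like:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubevip
  namespace: kube-system
data:
  # Hypothetical per-namespace pool: LoadBalancer Services in the
  # "ingress-nginx" namespace would get an address from this range.
  range-ingress-nginx: 192.168.1.40-192.168.1.45
```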
I then created a Kustomize file to define the required manifests and the ConfigMap itself:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- configmap.yaml
- https://raw.githubusercontent.com/kube-vip/kube-vip-cloud-provider/main/manifest/kube-vip-cloud-controller.yaml
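With `configmap.yaml` and the Kustomization file (saved as `kustomization.yaml`) sitting in the same directory, applying everything is a single command:

```sh
# Deploy the kube-vip cloud controller and its ConfigMap in one shot.
kubectl apply -k .
```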
After this last step, Ingress-NGINX will be assigned the 192.168.1.30 IP address and will start (again) to receive traffic.
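A quick way to confirm is to check that the LoadBalancer Service now has an external IP (the exact namespace depends on how Ingress-NGINX was installed):

```sh
# The Ingress-NGINX LoadBalancer Service should report 192.168.1.30
# under EXTERNAL-IP once the cloud controller has done its job.
kubectl get svc --all-namespaces | grep LoadBalancer
```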