I have been thinking about setting up a Kubernetes cluster for quite some time now and finally decided to start with the initial setup. The goal is to eventually migrate all my running docker-compose stacks and other containers to the new cluster. I decided to use k3s as it’s a lightweight Kubernetes distribution that is quite popular in the selfhosted community. To make the setup HA, I also made use of kube-vip, which provides a virtual IP and load balancer!

I will install the cluster without servicelb, as it will be replaced by kube-vip, and without traefik, as I want to install that myself using Helm and my own values.

Requirements

To get an HA setup, I need three server nodes for the cluster and some workers.

Node           OS               IP
k3s-01         Rocky Linux 8.9  192.168.129.191
k3s-02         Rocky Linux 8.9  192.168.129.192
k3s-03         Rocky Linux 8.9  192.168.129.193
k3s-worker-01  Rocky Linux 8.9  192.168.129.194
k3s-worker-02  Rocky Linux 8.9  192.168.129.195
k3s-worker-03  Rocky Linux 8.9  192.168.129.196

Initial install

To install k3s we can use a really handy tool which makes the install quite easy. The tool is called k3sup and makes spinning up a k3s cluster a piece of cake.

Let’s start off by installing k3sup:

# Install
curl -sLS https://get.k3sup.dev | sh
sudo install k3sup /usr/local/bin/

# Check installation
k3sup --help

!!! This tool uses SSH to connect to the nodes where k3s needs to be installed, so make sure SSH key-based connections are configured.
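
If you have not set that up yet, a minimal sketch could look like this (the key path is a hypothetical choice, and the user/IP range is taken from the node table above; adjust both to your environment):

```shell
# Hypothetical dedicated key pair for the cluster nodes (no passphrase)
mkdir -p "$HOME/.ssh"
KEY="$HOME/.ssh/k3s_homelab"
[ -f "$KEY" ] || ssh-keygen -t ed25519 -f "$KEY" -N "" -q

# Copy the public key to every node so k3sup can log in
for ip in 192.168.129.19{1..6}; do
  ssh-copy-id -o ConnectTimeout=5 -i "$KEY.pub" "rein@$ip"
done
```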

Next we use the tool to setup our first server node:

k3sup install --ip 192.168.129.191 \
--user rein \
--sudo \
--tls-san 192.168.129.230 \
--cluster \
--local-path /home/rein/.kube/config \
--context k3s-homelab \
--k3s-extra-args "--disable servicelb --disable traefik --node-ip=192.168.129.191"

Now we can export the kubeconfig file so we can communicate with k3s using kubectl; you could even add this to your .bashrc or .zshrc:

export KUBECONFIG=/home/rein/.kube/config
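
To make that permanent, appending the export to your shell rc file works too (shown for bash; use .zshrc for zsh):

```shell
# Persist the kubeconfig location for new shell sessions
echo 'export KUBECONFIG="$HOME/.kube/config"' >> "$HOME/.bashrc"
```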

To get started with kube-vip, we need to apply the RBAC manifest. This provides kube-vip with the required permissions, as it will run as a daemonset:

kubectl apply -f https://kube-vip.io/manifests/rbac.yaml 

Kube-vip

Now we can set up kube-vip properly on the k3s server node we just created. Start off by making an SSH connection to the host and becoming root:

ssh 192.168.129.191

sudo -i

Next up is pulling the kube-vip container image:

ctr image pull docker.io/plndr/kube-vip:latest

Following the documentation, we will now create an alias to run the kube-vip command:

alias kube-vip="ctr run --rm --net-host docker.io/plndr/kube-vip:latest vip /kube-vip"

Now we can create our daemonset for kube-vip by utilizing the alias:

kube-vip manifest daemonset \
--arp \
--interface eth0 \
--address 192.168.129.230 \
--controlplane \
--leaderElection \
--taint \
--inCluster | tee /var/lib/rancher/k3s/server/manifests/kube-vip.yaml

If everything worked, we should now see the daemonset loaded and ready using kubectl:

kubectl get ds -A

NAMESPACE     NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
kube-system   kube-vip-ds   1         1         1       1            1           <none>          3m

Ok great! This daemonset provides us with a virtual IP address that will act as our load-balancing IP for the control plane.
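
As a quick sanity check, the VIP should now answer on the network, and the API server behind it should respond on port 6443 (the HTTP status will typically be a 401 since k3s disables anonymous auth, which is fine here; it proves the apiserver is reachable through the VIP):

```shell
# The virtual IP should respond to ping...
ping -c 3 192.168.129.230

# ...and the Kubernetes API behind it should answer TLS on 6443
curl -ks -o /dev/null -w '%{http_code}\n' https://192.168.129.230:6443/
```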

Adding other server nodes

With kube-vip set up, we can use the new virtual IP to add more server nodes to the cluster! I made a small bash script to do this, just to make my life a little easier and so I can maybe reuse it in the future:

#!/bin/bash
set -e

export NODE_2="192.168.129.192"
export NODE_3="192.168.129.193"
export LB_IP="192.168.129.230"
export USER=rein

# The second node joins
k3sup join \
  --server \
  --ip $NODE_2 \
  --user $USER \
  --server-user $USER \
  --server-ip $LB_IP \
  --k3s-extra-args "--disable servicelb --disable traefik --node-ip=$NODE_2"

# The third node joins
k3sup join \
  --server \
  --ip $NODE_3 \
  --user $USER \
  --server-user $USER \
  --server-ip $LB_IP \
  --k3s-extra-args "--disable servicelb --disable traefik --node-ip=$NODE_3"

After running the script, the new server nodes are added to the cluster:

kubectl get nodes

NAME     STATUS   ROLES                       AGE  VERSION
k3s-01   Ready    control-plane,etcd,master   11m  v1.28.6+k3s2
k3s-02   Ready    control-plane,etcd,master   6m   v1.28.6+k3s2
k3s-03   Ready    control-plane,etcd,master   5m   v1.28.6+k3s2

The daemonset should now also be replicated to the two new nodes:

kubectl get ds -A

NAMESPACE     NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
kube-system   kube-vip-ds   3         3         3       3            3           <none>          13m

Adding worker nodes

Here we do almost the same thing, but set the nodes up as workers (agents) instead. These will be used to run all the pods and workloads.

#!/bin/bash
set -e

export NODE_1="192.168.129.194"
export NODE_2="192.168.129.195"
export NODE_3="192.168.129.196"
export LB_IP="192.168.129.230"
export USER=rein

# The first node joins
k3sup join \
  --ip $NODE_1 \
  --user $USER \
  --server-user $USER \
  --server-ip $LB_IP

# The second node joins
k3sup join \
  --ip $NODE_2 \
  --user $USER \
  --server-user $USER \
  --server-ip $LB_IP

# The third node joins
k3sup join \
  --ip $NODE_3 \
  --user $USER \
  --server-user $USER \
  --server-ip $LB_IP

After running the script, the new worker nodes are added to the cluster:

kubectl get nodes

NAME              STATUS   ROLES                       AGE     VERSION
k3s-01            Ready    control-plane,etcd,master   1d     v1.28.6+k3s2
k3s-02            Ready    control-plane,etcd,master   1d     v1.28.6+k3s2
k3s-03            Ready    control-plane,etcd,master   1d     v1.28.6+k3s2
k3s-worker-01     Ready    <none>                      1d     v1.28.6+k3s2
k3s-worker-02     Ready    <none>                      1d     v1.28.6+k3s2
k3s-worker-03     Ready    <none>                      1d     v1.28.6+k3s2

The last thing I will do is label the new nodes as workers. That way I can tell my deployments to run only on these nodes:

kubectl label nodes k3s-worker-0{1,2,3} worker=true
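
A deployment can then be pinned to the workers with a nodeSelector on that label. A minimal sketch, where the nginx deployment is just a hypothetical example workload:

```shell
# Write an example Deployment that only schedules on the labeled workers
cat > nginx-on-workers.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      nodeSelector:
        worker: "true"   # matches the label set with kubectl above
      containers:
        - name: nginx
          image: nginx:1.25
EOF

# kubectl apply -f nginx-on-workers.yaml
```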

Installing Traefik ingress controller

As I want some custom values for my Traefik installation, I decided to install it separately rather than via the k3sup setup command.

Values.yaml

globalArguments:
  - "--global.sendanonymoususage=false"
  - "--global.checknewversion=true"

additionalArguments:
  - "--serversTransport.insecureSkipVerify=true"
  - "--log.level=INFO"

deployment:
  enabled: true
  replicas: 3
  annotations: {}
  podAnnotations: {}
  additionalContainers: []
  initContainers: []

ports:
  web:
    redirectTo:
      port: websecure
      priority: 10
  websecure:
    tls:
      enabled: true
      
ingressRoute:
  dashboard:
    enabled: false

providers:
  kubernetesCRD:
    enabled: true
    ingressClass: traefik-external
    allowExternalNameServices: true
  kubernetesIngress:
    enabled: true
    allowExternalNameServices: true
    publishedService:
      enabled: false

rbac:
  enabled: true

service:
  enabled: true
  type: LoadBalancer
  annotations: {}
  labels: {}
  loadBalancerSourceRanges: []
  externalIPs: []

The installation:

helm repo add traefik https://helm.traefik.io/traefik
helm repo update
kubectl create namespace traefik
helm install --namespace=traefik traefik traefik/traefik --values=values.yaml

Success! I now have an HA k3s cluster running. The next steps will be to learn about storage and networking in Kubernetes. Once I’ve learned a bit about those, I can start gradually migrating my containers to the k3s stack!