Now that I have a cluster set up with 3 server nodes and 3 worker nodes, I can start looking at my options for persistent storage. I decided to go for Longhorn as my storage solution. I could easily run Longhorn on the nodes that are already inside the cluster, but I decided to create 3 extra nodes dedicated to storage. I have a good amount of resources left in my homelab infrastructure, and this setup makes the storage more highly available.

Requirements

To start off, I will create the 3 extra Longhorn nodes:

| Node | OS | IP |
| --- | --- | --- |
| k3s-01 | Rocky Linux 8.9 | 192.168.129.191 |
| k3s-02 | Rocky Linux 8.9 | 192.168.129.192 |
| k3s-03 | Rocky Linux 8.9 | 192.168.129.193 |
| k3s-worker-01 | Rocky Linux 8.9 | 192.168.129.194 |
| k3s-worker-02 | Rocky Linux 8.9 | 192.168.129.195 |
| k3s-worker-03 | Rocky Linux 8.9 | 192.168.129.196 |
| k3s-longhorn-01 | Rocky Linux 8.9 | 192.168.129.197 |
| k3s-longhorn-02 | Rocky Linux 8.9 | 192.168.129.198 |
| k3s-longhorn-03 | Rocky Linux 8.9 | 192.168.129.199 |

All the nodes that will make use of Longhorn need iscsi-initiator-utils installed, since Longhorn attaches its volume mounts over iSCSI!
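On Rocky Linux that boils down to installing the package and making sure the iscsid daemon is running. A sketch of what I'd run on each node that needs access to the volumes:

```shell
# Install the iSCSI initiator tools (Rocky Linux / dnf)
sudo dnf install -y iscsi-initiator-utils

# Make sure the iSCSI daemon starts now and on boot
sudo systemctl enable --now iscsid
```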

Node install

I will once again make use of k3sup to add the 3 new nodes to the cluster. Afterwards, I will add an additional label to these nodes to tag them as storage nodes:

#!/bin/bash
set -e

export LB_IP="192.168.129.230"
export NODE_1="192.168.129.197"
export NODE_2="192.168.129.198"
export NODE_3="192.168.129.199"
export USER=rein

# The first node joins
k3sup join \
  --ip $NODE_1 \
  --user $USER \
  --server-user $USER \
  --server-ip $LB_IP

# The second node joins
k3sup join \
  --ip $NODE_2 \
  --user $USER \
  --server-user $USER \
  --server-ip $LB_IP

# The third node joins
k3sup join \
  --ip $NODE_3 \
  --user $USER \
  --server-user $USER \
  --server-ip $LB_IP

I will label these nodes to only be used as storage nodes. It’s also important to label the worker nodes that will be using the storage:

kubectl label nodes k3s-worker-0{1,2,3} longhorn=true
kubectl label nodes k3s-longhorn-0{1,2,3} longhorn=true
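The `0{1,2,3}` part is plain shell brace expansion, so each command labels all three nodes in one go. Prefixing the command with `echo` shows what kubectl actually receives:

```shell
# Brace expansion happens before the command runs, so kubectl
# sees the three node names as separate arguments
echo kubectl label nodes k3s-worker-0{1,2,3} longhorn=true
# → kubectl label nodes k3s-worker-01 k3s-worker-02 k3s-worker-03 longhorn=true
```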

Longhorn install

With the nodes ready to go, I can install Longhorn inside the k3s cluster. This is easily done by following the official documentation, but I wanted one small adjustment to only deploy the pods on the Longhorn nodes! So I downloaded the deployment file and made a small change to the deployment manifests:

wget https://raw.githubusercontent.com/longhorn/longhorn/v1.6.0/deploy/longhorn.yaml

The following needs to be added to the deployment-ui, deployment-driver and daemonset-sa manifests, under the pod template spec:

apiVersion: apps/v1
kind: Deployment
...
spec:
  template:
    spec:
      # Only deploy on longhorn labeled nodes
      nodeSelector:
        longhorn: "true"
...

After this small change, I can apply the manifest and let longhorn deploy across the different nodes:

kubectl apply -f longhorn.yaml
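Once applied, it's worth checking that every Longhorn pod actually landed on one of the labeled nodes. The default manifest deploys everything into the longhorn-system namespace:

```shell
# List the Longhorn pods together with the node each one was scheduled on
kubectl -n longhorn-system get pods -o wide

# Wait until the manager daemonset has fully rolled out
kubectl -n longhorn-system rollout status daemonset/longhorn-manager
```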

Longhorn web-UI

After the deployment is done, the UI will be ready and should have been assigned a load balancer IP. When navigating to this IP, you will be greeted by the Longhorn web UI.

The first thing I did was go to the “Node” tab, where I disabled scheduling for the worker nodes. This way storage will only be scheduled on the Longhorn nodes, while the workers can still access the volumes.
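The UI is the straightforward way to do this, but the same toggle also lives on Longhorn's per-node custom resource (nodes.longhorn.io), whose spec carries an allowScheduling flag. A sketch of the CLI equivalent, assuming the default longhorn-system namespace:

```shell
# Flip the same switch as the UI's "Node" tab by patching
# spec.allowScheduling on the Longhorn Node custom resource
kubectl -n longhorn-system patch nodes.longhorn.io k3s-worker-01 \
  --type merge -p '{"spec":{"allowScheduling":false}}'
```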

Alright, great. Now I can start using Longhorn persistent volume claims inside my Kubernetes cluster!

A small example for one of my deployments:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: longhorn-speedtest-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: speedtest
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: speedtest
  template:
    metadata:
      labels:
        app.kubernetes.io/name: speedtest
    spec:
      # Only deploy on the worker nodes
      nodeSelector:
        worker: "true"
      containers:
      - name: speedtest
        image: henrywhitaker3/speedtest-tracker:latest
        imagePullPolicy: IfNotPresent
        env:
        - name: OOKLA_EULA_GDPR
          value: "true"
        volumeMounts:
        - name: speedtest-vol
          mountPath: /config
      volumes:
      - name: speedtest-vol
        persistentVolumeClaim:
          claimName: longhorn-speedtest-pvc
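After applying this, the claim should go to Bound once Longhorn has provisioned the volume, and the volume itself shows up as a custom resource in the longhorn-system namespace. A quick check:

```shell
# The PVC should report STATUS Bound once provisioning is done
kubectl get pvc longhorn-speedtest-pvc

# Longhorn tracks each provisioned volume as a custom resource
kubectl -n longhorn-system get volumes.longhorn.io
```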

Voila.