Now that my cluster is fully set up and ready to receive some workloads, I decided to start off with something that will make my life easier! As I love Infrastructure as Code and I'm already using Puppet to manage my VMs, I wanted something similar for my Kubernetes setup and containers. This is where ArgoCD comes in! It's a declarative GitOps tool for Kubernetes that translates my declarative YAML files into deployments, configurations and more. Perfect!

Requirements

The only real requirements are:

  • kubectl
  • kubeconfig file to access an existing cluster
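Before installing anything, a quick sanity check confirms both requirements are in place. This assumes the kubeconfig for the cluster is already active in the current shell:

```shell
# Verify kubectl can reach the cluster and the nodes are Ready
kubectl cluster-info
kubectl get nodes
```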

Installation

Start off by creating a namespace for ArgoCD and deploying the premade manifest:

kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

Everything should now be installed and deployed after a few moments.

Next up is making sure that the newly deployed ArgoCD is reachable from outside the k3s cluster. This can be done in different ways, but for the initial setup I decided to use a LoadBalancer service type:

kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'
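As a side note: if no LoadBalancer range is available yet, a port-forward is a quick temporary way to reach the UI (not part of my final setup, just an alternative while testing):

```shell
# Forward local port 8080 to the argocd-server service's HTTPS port;
# the UI is reachable at https://localhost:8080 while this runs
kubectl port-forward svc/argocd-server -n argocd 8080:443
```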

In order for this service to actually get a LoadBalancer IP, I had to make sure a range was added to the kube-vip setup for the argocd namespace:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubevip
  namespace: kube-system
data:
  range-argocd: 192.168.129.210-192.168.129.215

Now checking if the service got an IP:

kubectl get svc -n argocd

NAME                                      TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)                      AGE
argocd-applicationset-controller          ClusterIP      10.43.135.134   <none>            7000/TCP,8080/TCP            1d
argocd-dex-server                         ClusterIP      10.43.133.220   <none>            5556/TCP,5557/TCP,5558/TCP   1d
argocd-metrics                            ClusterIP      10.43.109.191   <none>            8082/TCP                     1d
argocd-notifications-controller-metrics   ClusterIP      10.43.233.191   <none>            9001/TCP                     1d
argocd-redis                              ClusterIP      10.43.23.173    <none>            6379/TCP                     1d
argocd-repo-server                        ClusterIP      10.43.88.28     <none>            8081/TCP,8084/TCP            1d
argocd-server                             LoadBalancer   10.43.203.179   192.168.129.210   80:30226/TCP,443:30099/TCP   1d
argocd-server-metrics                     ClusterIP      10.43.246.124   <none>            8083/TCP                     1d

Great, when browsing to that address we now get the login page!

Initial login

To log in on the web UI of ArgoCD we need the secret that was created with the deployment. The password in it is base64 encoded and will need to be decoded before it can be used to log in:

kubectl get secret argocd-initial-admin-secret -n argocd -o yaml
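Instead of copying the base64 string out of the YAML by hand, the password field can be extracted and decoded in one go:

```shell
# Pull only the .data.password field and pipe it through base64 -d
kubectl get secret argocd-initial-admin-secret -n argocd \
  -o jsonpath='{.data.password}' | base64 -d; echo
```

The default username is admin.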

Example deployment

Now that ArgoCD is set up, I can start off by creating my first deployment on it! As I will try to move all my docker-compose stacks to k3s, I will use one of my containers as an example (speedtest).

I will start off by converting the docker-compose file to a Kubernetes manifest:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: longhorn-speedtest-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: speedtest
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: speedtest
  template:
    metadata:
      labels:
        app.kubernetes.io/name: speedtest
    spec:
      nodeSelector:
        worker: "true"
      containers:
      - name: speedtest
        image: henrywhitaker3/speedtest-tracker:latest
        imagePullPolicy: IfNotPresent
        env:
        - name: OOKLA_EULA_GDPR
          value: "true"
        volumeMounts:
        - name: speedtest-vol
          mountPath: /config
      volumes:
      - name: speedtest-vol
        persistentVolumeClaim:
          claimName: longhorn-speedtest-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: speedtest
  namespace: default
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: speedtest
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP

This yaml file combines the deployment, service and PVC needed to deploy my speedtest container in the k3s cluster.

To now make use of ArgoCD, I need to create an Application manifest that it will read and act on:

---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata: 
  name: speedtest
  namespace: argocd # Application resources are only picked up from the argocd namespace by default
spec:
  project: default
  source:
    repoURL: 'http://192.168.129.170:3000/homelab/homelab_kube' # Gitea repository URL
    path: speedtest
    targetRevision: HEAD
    directory:
      recurse: true
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: default
  syncPolicy:
    automated:
      selfHeal: true
      prune: true
    syncOptions:
      - CreateNamespace=true

With this file created, I can apply it and ArgoCD will pick it up and give me a nice overview of the deployment:

kubectl apply -f configuration.yaml
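The same state is also visible from the CLI, which is handy for a quick check without opening the web UI:

```shell
# List Application resources across all namespaces with their sync
# and health state; speedtest should report Synced/Healthy once done
kubectl get applications -A
```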


The next step will be to migrate all of my docker-compose stacks to my new k3s setup. I procrastinated on my k3s setup for a long time and I have no idea why; I have learned so much already just by setting everything up and reading the documentation. The fact that everything can be deployed through declarative YAML files makes it even better.

My next steps will be to go deeper into Kubernetes and learn more about secrets, users and permissions.