Rebuilding my homelab (Part 2): provisioning with Terraform
To keep my homelab infrastructure reproducible and version-controlled, I use OpenTofu (a community-driven fork of Terraform) to provision all virtual machines on my Proxmox hypervisor.
The entire setup lives in a Git repository and follows a simple but clean structure:
terraform/
├── main.tf
├── outputs.tf
├── provider.tf
├── variables/
│   ├── k3s/variables.tfvars
│   └── vm/variables.tfvars
└── variables.tf
This structure keeps the configuration flexible and modular. The .tfvars files define machine-specific variables, grouped by purpose: one file for general VMs and another for my k3s cluster nodes. These are passed into the main config using workspaces or CLI flags, depending on what I'm provisioning, as shown below.
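For instance, provisioning the k3s nodes from my workstation looks roughly like this. It mirrors the CI steps shown later in this post; I'm assuming the k3s workspace is named after its variables directory, like the vm one is:

cd terraform
tofu init
tofu workspace select k3s || tofu workspace new k3s
tofu plan -var-file=variables/k3s/variables.tfvars -out=tfplan
tofu apply tfplan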
VM Definition
Each VM is provisioned using a set of common parameters, like disk size, CPU, memory, and Proxmox template ID. Here’s what a variables.tfvars snippet looks like:
node_info = {
  "k3s-01" = {
    ip          = "192.168.129.191"
    vmid        = 100
    size        = 25
    cores       = 2
    sockets     = 1
    memory      = 4096
    ssd         = true
    pve         = "pve1"
    template_id = 901
    qemu_agent  = true
  },
  ...
}
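For completeness, variables.tf needs a matching declaration for this map. The post only shows the tfvars side, so the typing below is my reconstruction, with the attribute types inferred from the values above:

# variables.tf (sketch): the node_info map that the tfvars files populate.
variable "node_info" {
  type = map(object({
    ip          = string
    vmid        = number
    size        = number
    cores       = number
    sockets     = number
    memory      = number
    ssd         = bool
    pve         = string
    template_id = number
    qemu_agent  = bool
  }))
}

# SSH public key referenced by the VM resource below.
variable "ssh_key_main" {
  type = string
}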
These values are looped over with for_each in the main Terraform config, creating one VM per entry in the map. The VM resource looks like this:
resource "proxmox_virtual_environment_vm" "homelab_node" {
# General VM settings
for_each = var.node_info
name = each.key
node_name = each.value.pve
vm_id = each.value.vmid
stop_on_destroy = true
clone {
datastore_id = "local-lvm"
vm_id = each.value.template_id
node_name = each.value.pve
}
agent {
enabled = each.value.qemu_agent
}
cpu {
cores = each.value.cores
sockets = each.value.sockets
type = "host"
}
memory {
dedicated = each.value.memory
}
disk {
ssd = each.value.ssd
iothread = false
size = each.value.size
interface = "scsi0"
datastore_id = "local-lvm"
cache = "none"
}
vga {
type = "serial0"
}
network_device {
bridge = "vmbr0"
}
# Cloud Init Settings
initialization {
ip_config {
ipv4 {
address = "${each.value.ip}/23"
gateway = "192.168.128.1"
}
}
dns {
domain = "local.imrein.com"
servers = ["192.168.129.55", "192.168.129.101", "1.1.1.1"]
}
user_account {
keys = [var.ssh_key_main]
password = "xxx"
username = "xxx"
}
}
}
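The repo's outputs.tf isn't shown in this post, but a natural use for it is exposing the name-to-IP mapping of the provisioned nodes. A minimal sketch (the output name is mine):

# outputs.tf (sketch): map each VM name to its configured IP,
# handy for building an inventory or a quick SSH lookup.
output "node_ips" {
  value = { for name, node in var.node_info : name => node.ip }
}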
CI/CD with Gitea Actions
One of the key goals of this setup is to embrace a GitOps workflow — even for infrastructure provisioning. To automate the process, I’ve integrated Gitea Actions to run OpenTofu whenever changes are pushed to the VM definition files.
Here’s the exact workflow that gets triggered for the VM workspace:
name: Deploy VM nodes

on:
  push:
    paths:
      - terraform/variables/vm/**

jobs:
  Deploy:
    runs-on: ubuntu-latest
    env:
      TF_VAR_PROXMOX_VE_API_TOKEN: "${{ secrets.PM_API_TOKEN_ID }}=${{ secrets.PM_API_TOKEN_SECRET }}"
      TF_VAR_S3_KEY_ID: "${{ secrets.S3_KEY_ID }}"
      TF_VAR_S3_KEY_SECRET: "${{ secrets.S3_KEY_SECRET }}"
    steps:
      - name: Check out the codebase
        uses: actions/checkout@v4

      - name: Install OpenTofu
        run: |
          curl -sSLo tofu.zip https://github.com/opentofu/opentofu/releases/download/v1.9.1/tofu_1.9.1_linux_amd64.zip
          unzip tofu.zip -d /usr/local/bin
          tofu version

      - name: Format check
        run: tofu fmt -check -recursive -diff

      - name: Tofu init
        run: cd terraform; tofu init -upgrade

      - name: Set Workspace
        run: |
          cd terraform; tofu workspace select vm || tofu workspace new vm

      - name: Tofu validate code
        run: cd terraform; tofu validate

      - name: Tofu plan
        run: cd terraform; tofu plan -var-file=variables/vm/variables.tfvars -out=tfplan

      - name: Tofu apply
        run: cd terraform; tofu apply -auto-approve tfplan
This ensures that:
- Any changes to the VM definitions automatically trigger a validation and deployment pipeline
- Infrastructure changes are always run from version-controlled code
- Provisioning is hands-off and predictable
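For context on the secrets: the PM_API_TOKEN_* pair is concatenated into a Proxmox API token for the provider (the proxmox_virtual_environment_vm resource comes from the bpg/proxmox provider), while the S3_KEY_* values presumably authenticate the remote state backend in an S3-compatible store. A sketch of what provider.tf likely contains; the endpoint URL and variable wiring are assumptions on my part:

# provider.tf (sketch): the resource names above imply the bpg/proxmox
# provider. The API token arrives through the TF_VAR_PROXMOX_VE_API_TOKEN
# env var set in the workflow.
terraform {
  required_providers {
    proxmox = {
      source = "bpg/proxmox"
    }
  }
}

variable "PROXMOX_VE_API_TOKEN" {
  type      = string
  sensitive = true
}

provider "proxmox" {
  endpoint  = "https://pve1.local.imrein.com:8006/" # placeholder URL
  api_token = var.PROXMOX_VE_API_TOKEN
}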
By codifying and automating everything — from VM creation to CI integration — I've turned a one-off manual process into a repeatable, self-documented system. Future changes are as simple as editing a .tfvars file and pushing to Git.