---
title: "Homemade Kubernetes"
date: 2025-08-18T10:30:00-03:00
draft: false
summary: "Why I went with k3s for local homelab."
---
tl;dr: wanted to learn k8s properly and wanted some high availability for some services. Also solves loneliness ;)
I started running into high-availability issues with some of my services. I wanted to make sure that my self-hosted applications (like Jellyfin) would remain accessible even if one of my servers went down. This led me to explore Kubernetes as a solution.
As you may or may not know, k8s is a container orchestration platform that automates the deployment, scaling, and management of containerized applications. However, it comes with a lot of complexity and operational overhead. So I set up my cluster with k3s, a lightweight Kubernetes distribution. It turned out to be a good starting point; I've been using it ever since and it has been working wonders so far.
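For reference, getting a basic k3s cluster going is only a couple of commands. This is a sketch, not my exact setup — the server hostname and token below are placeholders:

```shell
# Install k3s on the server (master) node; this also sets up kubectl.
curl -sfL https://get.k3s.io | sh -

# The join token for worker nodes lives on the server at:
sudo cat /var/lib/rancher/k3s/server/node-token

# On each worker node, join the cluster (replace host and token):
curl -sfL https://get.k3s.io | K3S_URL=https://<server-host>:6443 \
    K3S_TOKEN=<node-token> sh -
```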
Currently, all the config files live on an NFS server, which makes configurations easier to manage and back up. For this, I'm using nfs-subdir-external-provisioner to manage PVCs through NFS. I have also set up two backup cronjobs: one for the local servers and another for a remote server.
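A rough sketch of how that provisioner gets wired up — the NFS server address, export path, and claim name are placeholders, not my actual values:

```shell
# Install the provisioner via its Helm chart, pointing it at the NFS export:
helm repo add nfs-subdir-external-provisioner \
    https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner \
    nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=192.168.1.10 \
    --set nfs.path=/exported/path

# PVCs then just reference the chart's storage class (default: nfs-client),
# and each claim gets its own subdirectory on the NFS share:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jellyfin-config   # hypothetical claim
spec:
  storageClassName: nfs-client
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 1Gi
EOF
```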
## Pros and cons
Pros that I have noticed:
- Easy to set up and manage: k3s is designed to be lightweight and simple to install.
- High availability: if a server goes down, I can still reach the services hosted on the others.
  - I haven't been able to set up a properly HA k3s cluster yet, as I need more hardware.
  - Currently, I'm on a single-master setup.
- Backups are easy to manage when all configurations live in one place.
- Cronjobs are a breeze to set up and manage, especially if you need to perform backup rituals.
- "Enterprise-grade" cluster in your home!
- Have fun :)
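To illustrate how little it takes to express one of those backup cronjobs as a Kubernetes CronJob — the name, image, schedule, command, and claim names here are all hypothetical:

```shell
kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nfs-backup              # hypothetical name
spec:
  schedule: "0 3 * * *"         # every day at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: backup
              image: alpine:3.20
              # Placeholder command: archive the mounted config volume.
              command: ["sh", "-c", "tar czf /backup/configs-$(date +%F).tar.gz /configs"]
              volumeMounts:
                - name: configs
                  mountPath: /configs
                - name: backup
                  mountPath: /backup
          volumes:
            - name: configs
              persistentVolumeClaim:
                claimName: app-configs    # hypothetical PVC
            - name: backup
              persistentVolumeClaim:
                claimName: backup-target  # hypothetical PVC
EOF
```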
Cons:
- Complexity: while k3s simplifies many aspects of Kubernetes, it still requires a certain level of understanding of container orchestration concepts.
- Single point of failure: in my current setup, the single master node is a potential point of failure. If it goes down, the entire cluster becomes unavailable.
  - This can be solved with a multi-master setup, but that requires additional hardware.
- Learning curve: Kubernetes has a steep learning curve -- which is good for people like me.
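When I do get the hardware, k3s supports a multi-master setup with embedded etcd out of the box (you want an odd number of server nodes for quorum). A sketch, with host and token as placeholders:

```shell
# First server: initialize the embedded etcd cluster.
curl -sfL https://get.k3s.io | sh -s - server --cluster-init

# Additional servers: join the first one.
curl -sfL https://get.k3s.io | K3S_TOKEN=<node-token> \
    sh -s - server --server https://<first-server>:6443
```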
## Current setup
This is my current (possibly outdated) setup:

- 2 Orange Pis running k3s
  - 4 GB RAM, 4C/4T, and a 256 GB SD card each
- 1 Mini PC
  - 6 GB RAM, 2C/4T, 64 GB internal storage + a 512 GB SD card
- 1 Proxmox host
  - 32 GB RAM, 6C/12T, 1 TB SSD
  - Currently runs these VMs with k3s:
    - 1 prod-like VM
    - 1 dev-like VM
    - 1 work sandbox VM
On the technical side, I haven't made my setup / scripts / configurations public yet.
I believe that everyone should try this at home, be it on dedicated hardware or in a VM. It's a great way to learn and experiment with Kubernetes in a controlled environment.
I'm still running some services on Docker itself, but I'm slowly migrating them to k8s. Some, like DNS and the Traefik reverse proxy, are a bit more complex to set up.
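For what it's worth, k3s actually ships Traefik by default, so exposing a service can be as small as an IngressRoute. A sketch — the route name, hostname, and Service are hypothetical, and the API group depends on your Traefik version:

```shell
kubectl apply -f - <<'EOF'
apiVersion: traefik.io/v1alpha1   # traefik.containo.us/v1alpha1 on older Traefik
kind: IngressRoute
metadata:
  name: jellyfin                  # hypothetical route
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`jellyfin.example.lan`)   # placeholder hostname
      kind: Rule
      services:
        - name: jellyfin          # hypothetical Service
          port: 8096
EOF
```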