first commit

2025-09-17 15:23:54 -03:00
commit 5e1ee4c0f2
23 changed files with 2022 additions and 0 deletions

8
.gitignore vendored Normal file

@@ -0,0 +1,8 @@
media/
default/*tt*
certs/ca*
lab
rbac
cronjobs

46
README.md Normal file

@@ -0,0 +1,46 @@
# Haven
**A *forever-work-in-progress* self-hosted server setup**

Based on a multi-node k3s cluster running on VMs and bare-metal hardware.

The application configs are stored on an NFS share backed by an SSD dedicated to this purpose. For that I'm using `nfs-subdir-external-provisioner` as a dynamic storage provisioner, with a specific path set on each PVC. Some other data is stored on a NAS server, also exposed as an NFS share.

The cluster runs `k3s` with `nginx` as the ingress controller. For load balancing I'm using `MetalLB` in layer 2 mode, and `cert-manager` provides a local CA and certificates (Vaultwarden requires TLS).

For more information on the setup, check out [SETUP.md](SETUP.md).
## Namespaces
- default
- ArchiveBox
- Homarr
- Homepage
- It-tools
- Notepad
- Searxng
- Uptimekuma
- Vaultwarden
- dns
- AdGuardHome
- AdGuardHome-2 (2nd instance)
- AdGuard-Sync
- infra
- Haven Notify (my own internal service)
- Beszel
- Beszel Agent (running as DaemonSet)
- Code Config (vscode for internal config editing)
- WireGuard Easy
- dev
- Gitea Runner (x64)
- Gitea Runner (arm64)
- lab
- Nothing yet, just a playground/sandbox namespace
- metallb-system
- MetalLB components
- cert-manager
- Cert-Manager components
## Todo:
- Move archivebox data to its own PVC on NAS
- Move uptimekuma to `infra` namespace
- Add links to each application's docs
- Add links to server scripts

65
SETUP.md Normal file

@@ -0,0 +1,65 @@
## Install nfs-subdir-external-provisioner
```bash
# Add the chart repository first (skip if it's already added)
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --set nfs.server=192.168.15.61 \
  --set nfs.path=/export/config \
  --set storageClass.name=nfs-client \
  --set storageClass.pathPattern='${.PVC.namespace}/${.PVC.annotations.nfs.io/storage-path}'
```
Make it the default StorageClass:
```bash
current_default=$(kubectl get storageclass -o jsonpath='{.items[?(@.metadata.annotations.storageclass\.kubernetes\.io/is-default-class=="true")].metadata.name}')
if [ -n "$current_default" ]; then
  kubectl annotate storageclass "$current_default" storageclass.kubernetes.io/is-default-class- --overwrite
fi
kubectl annotate storageclass nfs-client storageclass.kubernetes.io/is-default-class=true --overwrite
```
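To sanity-check the install, confirm the provisioner pod is running and that `nfs-client` now shows `(default)` (the label below assumes the chart's default `app` label):
```bash
# The provisioner pod should be Running; nfs-client should be marked "(default)"
kubectl get pods -l app=nfs-subdir-external-provisioner
kubectl get storageclass nfs-client
```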
Example PVC usage:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-config
  namespace: default
  annotations:
    nfs.io/storage-path: "app-config"
spec:
  storageClassName: "nfs-client"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```
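With the `pathPattern` set at install time, this claim maps to a predictable directory on the NFS export. A sketch of where the data ends up, assuming the `/export/config` path from above:
```bash
# On the NFS server (192.168.15.61) the provisioner creates:
#   /export/config/<namespace>/<value of the nfs.io/storage-path annotation>
ls /export/config/default/app-config
```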
## Install MetalLB
```bash
kubectl create ns metallb-system
helm repo add metallb https://metallb.github.io/metallb
helm install metallb metallb/metallb --namespace metallb-system
```
Configure MetalLB with the `IPAddressPool` and `L2Advertisement` from [metallb-system/address-pool.yaml](metallb-system/address-pool.yaml), and apply it:
```bash
kubectl apply -f metallb-system/address-pool.yaml
```
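To verify the pool and advertisement were created (resource names as defined in the manifest):
```bash
kubectl get ipaddresspools.metallb.io,l2advertisements.metallb.io -n metallb-system
```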
## Install cert-manager
```bash
kubectl create namespace cert-manager
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.15.1/cert-manager.yaml
# Create the private key for local CA
openssl genrsa -out ca.key 4096
# Create the root certificate, valid for 10 years
openssl req -x509 -new -nodes -key ca.key -sha256 -days 3650 -out ca.crt -subj "/CN=Homelab CA"
# Create secret and ClusterIssuer
kubectl create secret tls internal-ca-secret --cert=ca.crt --key=ca.key -n cert-manager
kubectl apply -f cert-manager/cluster-issuer.yaml
```
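Once applied, the ClusterIssuer should report `Ready`, and any certificates it signs (such as Vaultwarden's) should eventually show `READY=True`:
```bash
kubectl get clusterissuer internal-ca
kubectl get certificates -A
```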

8
cert-manager/cluster-issuer.yaml Normal file

@@ -0,0 +1,8 @@
# internal-issuer.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: internal-ca
spec:
ca:
secretName: internal-ca-secret

145
default/archivebox.yaml Normal file

@@ -0,0 +1,145 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: sonic
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app: sonic
template:
metadata:
labels:
app: sonic
spec:
containers:
- name: sonic
image: archivebox/sonic:latest
imagePullPolicy: Always
ports:
- containerPort: 1491
env:
- name: SEARCH_BACKEND_PASSWORD
valueFrom:
secretKeyRef:
name: password
key: password
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: archivebox
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app: archivebox
template:
metadata:
labels:
app: archivebox
spec:
containers:
- name: archivebox
image: archivebox/archivebox:latest
imagePullPolicy: Always
ports:
- containerPort: 8000
env:
- name: SONIC_HOST
value: "sonic.default.svc.cluster.local"
- name: SONIC_PORT
value: "1491"
- name: SEARCH_BACKEND_ENGINE
value: "sonic"
- name: SONIC_PASSWORD
valueFrom:
secretKeyRef:
name: password
key: password
- name: ADMIN_USERNAME
value: "ivanch"
- name: ADMIN_PASSWORD
valueFrom:
secretKeyRef:
name: password
key: password
- name: CSRF_TRUSTED_ORIGINS
value: "archive.haven"
- name: ALLOWED_HOSTS
value: "*"
- name: PUBLIC_ADD_VIEW
value: "false"
volumeMounts:
- name: archivebox-data
mountPath: /data
volumes:
- name: archivebox-data
persistentVolumeClaim:
claimName: archivebox-data
---
apiVersion: v1
kind: Service
metadata:
name: sonic-svc
namespace: default
spec:
selector:
app: sonic
ports:
- protocol: TCP
port: 1491
targetPort: 1491
---
apiVersion: v1
kind: Service
metadata:
name: archivebox-svc
namespace: default
spec:
selector:
app: archivebox
ports:
- protocol: TCP
port: 8000
targetPort: 8000
---
# PersistentVolumeClaim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: archivebox-data
namespace: default
annotations:
nfs.io/storage-path: "archivebox-data"
spec:
storageClassName: "nfs-client"
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
limits:
storage: 30Gi
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: archivebox-ingress
namespace: default
spec:
ingressClassName: nginx
rules:
- host: "archive.haven"
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: archivebox-svc
port:
number: 8000

104
default/homarr.yaml Normal file

@@ -0,0 +1,104 @@
---
# 1) Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: homarr
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app: homarr
template:
metadata:
labels:
app: homarr
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/arch
operator: In
values:
- amd64
containers:
- name: homarr
image: ghcr.io/homarr-labs/homarr:latest
imagePullPolicy: Always
env:
- name: PUID
value: "1000"
- name: PGID
value: "1000"
- name: SECRET_ENCRYPTION_KEY
value: "c60b894215be5e4cc0fdd209aada8d83386b20579138ca143bc267c4c0042d08"
ports:
- containerPort: 7575
name: homarr-port
volumeMounts:
- name: homarr-config
mountPath: /appdata
resources:
requests:
cpu: 250m
memory: 512Mi
limits:
cpu: 250m
memory: 1Gi
volumes:
- name: homarr-config
persistentVolumeClaim:
claimName: homarr-config
---
# 2) Service
apiVersion: v1
kind: Service
metadata:
name: homarr
namespace: default
spec:
type: ClusterIP
selector:
app: homarr
ports:
- port: 7575
targetPort: homarr-port
---
# 3) PersistentVolumeClaim (for /appdata)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: homarr-config
namespace: default
annotations:
nfs.io/storage-path: "homarr-labs-config"
spec:
storageClassName: "nfs-client"
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
---
# 4) Ingress (nginx)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: homarr
namespace: default
spec:
ingressClassName: nginx
rules:
- host: homarr.lab
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: homarr
port:
number: 7575

206
default/homepage.yaml Normal file

@@ -0,0 +1,206 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: homepage
namespace: default
labels:
app.kubernetes.io/name: homepage
secrets:
- name: homepage
---
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
name: homepage
namespace: default
labels:
app.kubernetes.io/name: homepage
annotations:
kubernetes.io/service-account.name: homepage
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: homepage
labels:
app.kubernetes.io/name: homepage
rules:
- apiGroups:
- ""
resources:
- namespaces
- pods
- nodes
verbs:
- get
- list
- apiGroups:
- extensions
- networking.k8s.io
resources:
- ingresses
verbs:
- get
- list
- apiGroups:
- traefik.io
resources:
- ingressroutes
verbs:
- get
- list
- apiGroups:
- gateway.networking.k8s.io
resources:
- httproutes
- gateways
verbs:
- get
- list
- apiGroups:
- metrics.k8s.io
resources:
- nodes
- pods
verbs:
- get
- list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: homepage
labels:
app.kubernetes.io/name: homepage
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: homepage
subjects:
- kind: ServiceAccount
name: homepage
namespace: default
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: homepage
namespace: default
labels:
app.kubernetes.io/name: homepage
spec:
revisionHistoryLimit: 3
replicas: 1
strategy:
type: RollingUpdate
selector:
matchLabels:
app.kubernetes.io/name: homepage
template:
metadata:
labels:
app.kubernetes.io/name: homepage
annotations:
configmap.reloader/checksum: "{{ include (print $.Template.BasePath \"/app/config/services.yaml\") . | sha256sum }}"
spec:
serviceAccountName: homepage
automountServiceAccountToken: true
enableServiceLinks: true
containers:
- name: homepage
image: "ghcr.io/gethomepage/homepage:latest"
imagePullPolicy: Always
env:
- name: HOMEPAGE_ALLOWED_HOSTS
value: homepage.haven # required, may need port. See gethomepage.dev/installation/#homepage_allowed_hosts
ports:
- name: http
containerPort: 3000
protocol: TCP
livenessProbe:
httpGet:
path: /
port: 3000
initialDelaySeconds: 30
periodSeconds: 10
readinessProbe:
httpGet:
path: /
port: 3000
initialDelaySeconds: 5
periodSeconds: 5
volumeMounts:
- name: logs
mountPath: /app/config/logs
- name: homepage-config
mountPath: /app/config
- name: homepage-config
mountPath: /app/public/images
subPath: images
volumes:
- name: homepage-config
persistentVolumeClaim:
claimName: homepage-config
- name: logs
emptyDir: {}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: homepage-config
namespace: default
annotations:
nfs.io/storage-path: "homepage-config"
spec:
storageClassName: "nfs-client"
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
name: homepage
namespace: default
labels:
app.kubernetes.io/name: homepage
annotations:
spec:
type: ClusterIP
ports:
- port: 3000
targetPort: http
protocol: TCP
name: http
selector:
app.kubernetes.io/name: homepage
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: homepage
namespace: default
labels:
app.kubernetes.io/name: homepage
# annotations:
# gethomepage.dev/description: Dynamically Detected Homepage
# gethomepage.dev/enabled: "true"
# gethomepage.dev/group: Cluster Management
# gethomepage.dev/icon: homepage.png
# gethomepage.dev/name: Homepage
spec:
ingressClassName: nginx
rules:
- host: "homepage.haven"
http:
paths:
- path: "/"
pathType: Prefix
backend:
service:
name: homepage
port:
number: 3000

60
default/it-tools.yaml Normal file

@@ -0,0 +1,60 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: it-tools
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app: it-tools
template:
metadata:
labels:
app: it-tools
spec:
containers:
- name: it-tools
image: corentinth/it-tools:latest
imagePullPolicy: Always
ports:
- containerPort: 80
readinessProbe:
httpGet:
path: /
port: 80
initialDelaySeconds: 5
periodSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
name: it-tools-svc
namespace: default
spec:
selector:
app: it-tools
ports:
- protocol: TCP
port: 80
targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: it-tools-ingress
namespace: default
spec:
ingressClassName: nginx
rules:
- host: "tools.haven"
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: it-tools-svc
port:
number: 80

81
default/notepad.yaml Normal file

@@ -0,0 +1,81 @@
---
# 1) Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: notepad
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app: notepad
template:
metadata:
labels:
app: notepad
spec:
containers:
- name: notepad
image: jdreinhardt/minimalist-web-notepad:latest
imagePullPolicy: Always
ports:
- containerPort: 80
volumeMounts:
- name: notepad-data
mountPath: /var/www/html/_tmp
volumes:
- name: notepad-data
persistentVolumeClaim:
claimName: notepad-data
---
# 2) Service
apiVersion: v1
kind: Service
metadata:
name: notepad
namespace: default
spec:
type: ClusterIP
selector:
app: notepad
ports:
- port: 80
targetPort: 80
---
# 3) PersistentVolumeClaim (NFS via nfs-client)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: notepad-data
namespace: default
annotations:
nfs.io/storage-path: "notepad-data"
spec:
storageClassName: "nfs-client"
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
# 4) Ingress (nginx)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: notepad
namespace: default
spec:
ingressClassName: nginx
rules:
- host: notepad.lab
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: notepad
port:
number: 80

86
default/searxng.yaml Normal file

@@ -0,0 +1,86 @@
---
# 1) Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: searxng
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app: searxng
template:
metadata:
labels:
app: searxng
spec:
containers:
- name: searxng
image: searxng/searxng:latest
imagePullPolicy: Always
env:
- name: PUID
value: "1000"
- name: PGID
value: "1000"
ports:
- containerPort: 8080
name: searxng-port
volumeMounts:
- name: searxng-config
mountPath: /etc/searxng
volumes:
- name: searxng-config
persistentVolumeClaim:
claimName: searxng-config
---
# 2) Service
apiVersion: v1
kind: Service
metadata:
name: searxng
namespace: default
spec:
type: ClusterIP
selector:
app: searxng
ports:
- port: 8080
targetPort: searxng-port
---
# 3) PersistentVolumeClaim (for /etc/searxng)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: searxng-config
namespace: default
annotations:
nfs.io/storage-path: "searxng-config"
spec:
storageClassName: "nfs-client"
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
---
# 4) Ingress (nginx)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: searxng
namespace: default
spec:
ingressClassName: nginx
rules:
- host: search.haven
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: searxng
port:
number: 8080

100
default/uptime-kuma.yaml Normal file

@@ -0,0 +1,100 @@
---
# 1) Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: uptimekuma
namespace: default
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: uptimekuma
template:
metadata:
labels:
app: uptimekuma
spec:
containers:
- name: uptimekuma
image: louislam/uptime-kuma:1
imagePullPolicy: Always
env:
- name: PUID
value: "1000"
- name: PGID
value: "1000"
ports:
- containerPort: 3001
name: uptimekuma-port
livenessProbe:
httpGet:
path: /
port: 3001
initialDelaySeconds: 30
periodSeconds: 60
readinessProbe:
httpGet:
path: /
port: 3001
initialDelaySeconds: 5
periodSeconds: 5
volumeMounts:
- name: uptimekuma-config
mountPath: /app/data
volumes:
- name: uptimekuma-config
persistentVolumeClaim:
claimName: uptimekuma-config
---
# 2) Service
apiVersion: v1
kind: Service
metadata:
name: uptimekuma
namespace: default
spec:
type: ClusterIP
selector:
app: uptimekuma
ports:
- port: 3001
targetPort: uptimekuma-port
---
# 3) PersistentVolumeClaim (for /app/data)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: uptimekuma-config
namespace: default
annotations:
nfs.io/storage-path: "uptimekuma-config"
spec:
storageClassName: "nfs-client"
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
---
# 4) Ingress (nginx)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: uptimekuma
namespace: default
spec:
ingressClassName: nginx
rules:
- host: uptimekuma.haven
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: uptimekuma
port:
number: 3001

123
default/vaultwarden.yaml Normal file

@@ -0,0 +1,123 @@
---
# 1) Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: vaultwarden
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app: vaultwarden
template:
metadata:
labels:
app: vaultwarden
spec:
containers:
- name: vaultwarden
image: vaultwarden/server:latest
imagePullPolicy: Always
env:
- name: DOMAIN
value: "https://vault.haven"
- name: ADMIN_TOKEN
valueFrom:
secretKeyRef:
name: vaultwarden-admin-token
key: ADMIN_TOKEN
ports:
- containerPort: 80
name: vault-port
volumeMounts:
- name: vaultwarden-data
mountPath: /data
resources:
requests:
cpu: 250m
memory: 64Mi
limits:
cpu: 250m
memory: 256Mi
volumes:
- name: vaultwarden-data
persistentVolumeClaim:
claimName: vaultwarden-data
---
# 2) Service
apiVersion: v1
kind: Service
metadata:
name: vaultwarden
namespace: default
spec:
type: ClusterIP
selector:
app: vaultwarden
ports:
- port: 80
targetPort: vault-port
---
# 3) PersistentVolumeClaim (for /data)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: vaultwarden-data
namespace: default
annotations:
nfs.io/storage-path: "vaultwarden-data"
spec:
storageClassName: "nfs-client"
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
---
# 4) Ingress (nginx)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: vaultwarden
namespace: default
annotations:
cert-manager.io/cluster-issuer: internal-ca
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
ingressClassName: nginx
tls:
- hosts:
- vault.haven
secretName: vaultwarden-tls
rules:
- host: vault.haven
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: vaultwarden
port:
number: 80
---
# 5) Public Ingress (nginx)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: vaultwarden-public
namespace: default
spec:
ingressClassName: nginx
rules:
- host: vault.ivanch.me
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: vaultwarden
port:
number: 80

6
dev/README.md Normal file

@@ -0,0 +1,6 @@
## Creating gitea-runner secrets
```bash
kubectl create secret generic gitea-runner-token \
--from-literal=REGISTRATION_TOKEN='your_gitea_runner_token' -n dev
```

182
dev/gitea-runner.yaml Normal file

@@ -0,0 +1,182 @@
# --- ConfigMap for the AMD64 Runner ---
apiVersion: v1
kind: ConfigMap
metadata:
name: gitea-runner-amd64-config
namespace: dev
data:
config.yaml: |
# Registration token and Gitea instance URL should be managed via secrets
runner:
capacity: 4
timeout: 1h
labels:
- "ubuntu-amd64:docker://docker.gitea.com/runner-images:ubuntu-latest"
- "ubuntu-latest:docker://docker.gitea.com/runner-images:ubuntu-latest"
- "ubuntu-slim:docker://docker.gitea.com/runner-images:ubuntu-latest-slim"
---
# --- ConfigMap for the ARM64 Runner ---
apiVersion: v1
kind: ConfigMap
metadata:
name: gitea-runner-arm64-config
namespace: dev
data:
config.yaml: |
runner:
capacity: 4
timeout: 1h
labels:
- "ubuntu-arm64:docker://docker.gitea.com/runner-images:ubuntu-latest-slim"
---
# PersistentVolumeClaim for AMD64
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: gitea-runner-amd64-pvc
namespace: dev
annotations:
nfs.io/storage-path: "gitea-runner-amd64-pvc"
spec:
storageClassName: "nfs-client"
accessModes:
- ReadWriteMany
resources:
requests:
storage: 8Mi
---
# PersistentVolumeClaim for ARM64
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: gitea-runner-arm64-pvc
namespace: dev
annotations:
nfs.io/storage-path: "gitea-runner-arm64-pvc"
spec:
storageClassName: "nfs-client"
accessModes:
- ReadWriteMany
resources:
requests:
storage: 8Mi
---
# --- Deployment for the AMD64 Runner ---
apiVersion: apps/v1
kind: Deployment
metadata:
name: gitea-runner-amd64
namespace: dev
spec:
replicas: 1
selector:
matchLabels:
app: gitea-runner-amd64
template:
metadata:
labels:
app: gitea-runner-amd64
spec:
containers:
- name: gitea-runner
image: gitea/act_runner:latest
imagePullPolicy: Always
volumeMounts:
- name: config-volume
mountPath: /etc/gitea-runner/config.yaml
subPath: config.yaml
- name: docker-socket
mountPath: /var/run/docker.sock
- name: gitea-runner-amd64-pvc
mountPath: /data
env:
- name: GITEA_RUNNER_REGISTRATION_TOKEN
valueFrom:
secretKeyRef:
name: gitea-runner-token
key: REGISTRATION_TOKEN
- name: GITEA_INSTANCE_URL
value: https://git.ivanch.me
- name: GITEA_RUNNER_NAME
value: k8s-runner-amd64
- name: CONFIG_FILE
value: /etc/gitea-runner/config.yaml
volumes:
- name: config-volume
configMap:
name: gitea-runner-amd64-config
- name: docker-socket
hostPath:
path: /var/run/docker.sock
- name: gitea-runner-amd64-pvc
persistentVolumeClaim:
claimName: gitea-runner-amd64-pvc
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- iris
---
# --- Deployment for the ARM64 Runner ---
apiVersion: apps/v1
kind: Deployment
metadata:
name: gitea-runner-arm64
namespace: dev
spec:
replicas: 1
selector:
matchLabels:
app: gitea-runner-arm64
template:
metadata:
labels:
app: gitea-runner-arm64
spec:
containers:
- name: gitea-runner
image: gitea/act_runner:latest
imagePullPolicy: Always
volumeMounts:
- name: config-volume
mountPath: /etc/gitea-runner/config.yaml
subPath: config.yaml
- name: docker-socket
mountPath: /var/run/docker.sock
- name: gitea-runner-arm64-pvc
mountPath: /data
env:
- name: GITEA_RUNNER_REGISTRATION_TOKEN
valueFrom:
secretKeyRef:
name: gitea-runner-token
key: REGISTRATION_TOKEN
- name: GITEA_INSTANCE_URL
value: https://git.ivanch.me
- name: GITEA_RUNNER_NAME
value: k8s-runner-arm64
- name: CONFIG_FILE
value: /etc/gitea-runner/config.yaml
volumes:
- name: config-volume
configMap:
name: gitea-runner-arm64-config
- name: docker-socket
hostPath:
path: /var/run/docker.sock
- name: gitea-runner-arm64-pvc
persistentVolumeClaim:
claimName: gitea-runner-arm64-pvc
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/arch
operator: In
values:
- arm64

21
dns/README.md Normal file

@@ -0,0 +1,21 @@
## Set up AdGuard Sync credentials
```bash
kubectl create secret generic adguardhome-password \
--from-literal=password='your_adguardhome_password' \
--from-literal=username='your_adguardhome_username' -n dns
```
## Set AdGuardHome as the CoreDNS upstream
1. Edit the CoreDNS configmap:
```bash
kubectl edit configmap coredns -n kube-system
```
2. Replace the `forward` line with the following:
```
forward . <ADGUARDHOME_IP> <ADGUARDHOME_IP_2>
```
This makes CoreDNS forward external queries to the primary AdGuardHome instance, with the second instance as a fallback, instead of the default upstream resolver.
You can also forward to `/etc/resolv.conf` to use the node's own DNS resolver, but that depends on it being configured properly. *Since it's Linux, we never know.*
Ideally, since DNS is needed to pull the AdGuardHome container image in the first place, list AdGuardHome first and a public DNS server second as a fallback.
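For reference, the resulting Corefile section would look roughly like this. This is a sketch using the MetalLB IPs assigned to the two AdGuardHome services in this repo plus a public resolver as a last resort; `policy sequential` keeps the first upstream preferred:
```
.:53 {
    # ... other plugins (errors, health, kubernetes, cache, ...) stay as-is
    forward . 192.168.15.200 192.168.15.201 1.1.1.1 {
        policy sequential
    }
}
```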

118
dns/adguard-sync.yaml Normal file

@@ -0,0 +1,118 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: adguardsync-pvc
namespace: dns
annotations:
nfs.io/storage-path: "adguardsync-config"
spec:
storageClassName: "nfs-client"
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Mi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: adguardsync
namespace: dns
spec:
strategy:
type: Recreate
replicas: 1
selector:
matchLabels:
app: adguardsync
template:
metadata:
labels:
app: adguardsync
spec:
containers:
- name: adguardsync
image: ghcr.io/bakito/adguardhome-sync:latest
imagePullPolicy: Always
ports:
- containerPort: 8080
protocol: TCP
name: web-port
env:
- name: CRON
value: "*/2 * * * *"
- name: RUN_ON_START
value: "true"
- name: LOG_LEVEL
value: "info"
- name: ORIGIN_URL
value: "http://adguard.haven"
- name: ORIGIN_USERNAME
valueFrom:
secretKeyRef:
name: adguardhome-password
key: username
- name: ORIGIN_PASSWORD
valueFrom:
secretKeyRef:
name: adguardhome-password
key: password
- name: REPLICA1_URL
value: "http://adguard2.haven"
- name: REPLICA1_USERNAME
valueFrom:
secretKeyRef:
name: adguardhome-password
key: username
- name: REPLICA1_PASSWORD
valueFrom:
secretKeyRef:
name: adguardhome-password
key: password
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 100m
memory: 128Mi
volumeMounts:
- name: adguardsync-storage
mountPath: /config
volumes:
- name: adguardsync-storage
persistentVolumeClaim:
claimName: adguardsync-pvc
---
apiVersion: v1
kind: Service
metadata:
name: adguardsync-svc
namespace: dns
spec:
type: ClusterIP
selector:
app: adguardsync
ports:
- name: web
port: 8080
targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: adguardsync-ingress
namespace: dns
spec:
ingressClassName: nginx
rules:
- host: adguardsync.haven
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: adguardsync-svc
port:
number: 8080

145
dns/adguard.yaml Normal file

@@ -0,0 +1,145 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: adguardhome-pvc
namespace: dns
annotations:
nfs.io/storage-path: "adguardhome-config"
spec:
storageClassName: "nfs-client"
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: adguardhome
namespace: dns
spec:
strategy:
type: Recreate
replicas: 1
selector:
matchLabels:
app: adguardhome
template:
metadata:
labels:
app: adguardhome
spec:
affinity:
nodeAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
preference:
matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- nexus
containers:
- name: adguardhome
image: adguard/adguardhome:latest
imagePullPolicy: Always
ports:
- containerPort: 53
protocol: TCP
- containerPort: 53
protocol: UDP
- containerPort: 3000
protocol: TCP
name: install-port
- containerPort: 80
protocol: TCP
name: web-port
resources:
requests:
cpu: 100m
memory: 256Mi
volumeMounts:
- name: adguardhome-storage
mountPath: /opt/adguardhome/work
- name: adguardhome-storage
mountPath: /opt/adguardhome/conf
volumes:
- name: adguardhome-storage
persistentVolumeClaim:
claimName: adguardhome-pvc
---
apiVersion: v1
kind: Service
metadata:
name: adguardhome-svc
namespace: dns
spec:
type: LoadBalancer
selector:
app: adguardhome
loadBalancerIP: 192.168.15.200
ports:
- name: dns-tcp
port: 53
targetPort: 53
protocol: TCP
- name: dns-udp
port: 53
targetPort: 53
protocol: UDP
- name: web
port: 80
targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: adguard-install-svc
namespace: dns
spec:
type: ClusterIP
selector:
app: adguardhome
ports:
- name: install
port: 3000
targetPort: 3000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: adguardhome-ingress
namespace: dns
spec:
ingressClassName: nginx
rules:
- host: adguard.haven
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: adguardhome-svc
port:
number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: adguardhome-install-ingress
namespace: dns
spec:
ingressClassName: nginx
rules:
- host: install.adguard.haven
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: adguard-install-svc
port:
number: 3000

145
dns/adguard2.yaml Normal file

@@ -0,0 +1,145 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: adguardhome2-pvc
namespace: dns
annotations:
nfs.io/storage-path: "adguardhome2-config"
spec:
storageClassName: "nfs-client"
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: adguardhome2
namespace: dns
spec:
strategy:
type: Recreate
replicas: 1
selector:
matchLabels:
app: adguardhome2
template:
metadata:
labels:
app: adguardhome2
spec:
affinity:
nodeAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
preference:
matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- iris
containers:
- name: adguardhome2
image: adguard/adguardhome:latest
imagePullPolicy: Always
ports:
- containerPort: 53
protocol: TCP
- containerPort: 53
protocol: UDP
- containerPort: 3000
protocol: TCP
name: install-port
- containerPort: 80
protocol: TCP
name: web-port
resources:
requests:
cpu: 100m
memory: 256Mi
volumeMounts:
- name: adguardhome2-storage
mountPath: /opt/adguardhome/work
- name: adguardhome2-storage
mountPath: /opt/adguardhome/conf
volumes:
- name: adguardhome2-storage
persistentVolumeClaim:
claimName: adguardhome2-pvc
---
apiVersion: v1
kind: Service
metadata:
name: adguardhome2-svc
namespace: dns
spec:
type: LoadBalancer
selector:
app: adguardhome2
loadBalancerIP: 192.168.15.201
ports:
- name: dns-tcp
port: 53
targetPort: 53
protocol: TCP
- name: dns-udp
port: 53
targetPort: 53
protocol: UDP
- name: web
port: 80
targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: adguard2-install-svc
namespace: dns
spec:
type: ClusterIP
selector:
app: adguardhome2
ports:
- name: install
port: 3000
targetPort: 3000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: adguardhome2-ingress
namespace: dns
spec:
ingressClassName: nginx
rules:
- host: adguard2.haven
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: adguardhome2-svc
port:
number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: adguardhome2-install-ingress
namespace: dns
spec:
ingressClassName: nginx
rules:
- host: install.adguard2.haven
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: adguard2-install-svc
port:
number: 3000

43
infra/beszel-agent.yaml Normal file

@@ -0,0 +1,43 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: beszel-agent
namespace: infra
spec:
selector:
matchLabels:
app: beszel-agent
template:
metadata:
labels:
app: beszel-agent
spec:
hostNetwork: true
containers:
- env:
- name: PORT
value: "45876"
- name: KEY
valueFrom:
secretKeyRef:
name: beszel-key
key: SECRET-KEY
image: henrygd/beszel-agent:latest
imagePullPolicy: Always
name: beszel-agent
ports:
- containerPort: 45876
hostPort: 45876
restartPolicy: Always
tolerations:
- effect: NoSchedule
key: node-role.kubernetes.io/master
operator: Exists
- effect: NoSchedule
key: node-role.kubernetes.io/control-plane
operator: Exists
updateStrategy:
rollingUpdate:
maxSurge: 0
maxUnavailable: 100%
type: RollingUpdate

86
infra/beszel.yaml Normal file

@@ -0,0 +1,86 @@
---
# 1) Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: beszel
namespace: infra
spec:
replicas: 1
selector:
matchLabels:
app: beszel
template:
metadata:
labels:
app: beszel
spec:
containers:
- name: beszel
image: henrygd/beszel:latest
imagePullPolicy: Always
env:
- name: PUID
value: "1000"
- name: PGID
value: "1000"
ports:
- containerPort: 8090
name: beszel-port
volumeMounts:
- name: beszel-config
mountPath: /beszel_data
volumes:
- name: beszel-config
persistentVolumeClaim:
claimName: beszel-config
---
# 2) Service
apiVersion: v1
kind: Service
metadata:
name: beszel
namespace: infra
spec:
type: ClusterIP
selector:
app: beszel
ports:
- port: 80
targetPort: beszel-port
---
# 3) PersistentVolumeClaim (for /beszel_data)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: beszel-config
namespace: infra
annotations:
nfs.io/storage-path: "beszel-config"
spec:
storageClassName: "nfs-client"
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
---
# 4) Ingress (nginx)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: beszel
namespace: infra
spec:
ingressClassName: nginx
rules:
- host: beszel.haven
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: beszel
port:
number: 80

103
infra/code-config.yaml Normal file

@@ -0,0 +1,103 @@
---
# 1) Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: code-config
namespace: infra
spec:
replicas: 1
selector:
matchLabels:
app: code-config
template:
metadata:
labels:
app: code-config
spec:
containers:
- name: code-config
image: lscr.io/linuxserver/code-server:latest
imagePullPolicy: Always
env:
- name: PUID
value: "1000"
- name: PGID
value: "1000"
- name: PROXY_DOMAIN
value: "code-config.haven"
- name: DEFAULT_WORKSPACE
value: "/k8s-config"
resources:
requests:
memory: 512Mi
cpu: 200m
limits:
memory: 1Gi
cpu: 500m
ports:
- containerPort: 8443
name: code-port
volumeMounts:
- name: code-config
mountPath: /config
- name: k8s-config
mountPath: /k8s-config
volumes:
- name: code-config
persistentVolumeClaim:
claimName: code-config
- name: k8s-config
nfs:
server: 192.168.15.61
path: /export/config
---
# 2) Service
apiVersion: v1
kind: Service
metadata:
name: code-config
namespace: infra
spec:
type: ClusterIP
selector:
app: code-config
ports:
- port: 8443
targetPort: code-port
---
# 3) PersistentVolumeClaim (for /config)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: code-config
namespace: infra
annotations:
nfs.io/storage-path: "code-config"
spec:
storageClassName: "nfs-client"
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
---
# 4) Ingress (nginx)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: code-config
namespace: infra
spec:
ingressClassName: nginx
rules:
- host: code-config.haven
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: code-config
port:
number: 8443

124
infra/wg-easy.yaml Normal file

@@ -0,0 +1,124 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: wg-easy-pvc
namespace: infra
annotations:
nfs.io/storage-path: "wg-easy-config"
spec:
storageClassName: "nfs-client"
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: wg-easy
namespace: infra
spec:
strategy:
type: Recreate
replicas: 1
selector:
matchLabels:
app: wg-easy
template:
metadata:
labels:
app: wg-easy
spec:
affinity:
nodeAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
preference:
matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- nexus
containers:
- name: wg-easy
image: ghcr.io/wg-easy/wg-easy:latest
imagePullPolicy: Always
ports:
- containerPort: 51820
protocol: UDP
name: wg-port
- containerPort: 51821
protocol: TCP
name: web-port
env:
- name: LANG
value: en
- name: WG_HOST
value: vpn.ivanch.me
- name: WG_MTU
value: "1420"
- name: UI_TRAFFIC_STATS
value: "true"
- name: UI_CHART_TYPE
value: "0"
- name: WG_ENABLE_ONE_TIME_LINKS
value: "true"
- name: UI_ENABLE_SORT_CLIENTS
value: "true"
securityContext:
capabilities:
add:
- NET_ADMIN
- SYS_MODULE
resources:
requests:
cpu: 100m
memory: 256Mi
volumeMounts:
- name: wg-easy-volume
mountPath: /etc/wireguard
restartPolicy: Always
volumes:
- name: wg-easy-volume
persistentVolumeClaim:
claimName: wg-easy-pvc
---
apiVersion: v1
kind: Service
metadata:
name: wg-easy-svc
namespace: infra
spec:
type: LoadBalancer
selector:
app: wg-easy
loadBalancerIP: 192.168.15.202
ports:
- name: wg-port
port: 51820
targetPort: 51820
protocol: UDP
- name: web-port
port: 51821
targetPort: 51821
protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: wg-easy-ingress
namespace: infra
spec:
ingressClassName: nginx
rules:
- host: vpn.haven
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: wg-easy-svc
port:
number: 51821

17
metallb-system/address-pool.yaml Normal file

@@ -0,0 +1,17 @@
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
name: default-pool
namespace: metallb-system
spec:
addresses:
- 192.168.15.200-192.168.15.220
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
name: default-advertisement
namespace: metallb-system
spec:
ipAddressPools:
- default-pool