content/archives.md
---
title: "Posts"
layout: "posts"
url: "/posts/"
summary: posts
---

content/posts/api-gateway-terraform.md
---
title: "AWS API Gateway with Terraform"
date: 2022-12-01T15:30:00-03:00
draft: false
summary: "Creating API Gateway endpoints with Terraform."
---

When we first started using the AWS API Gateway, one of the things that bothered us was having to manage a lot of resources spread across thousands of lines in a couple of Terraform files. Keeping them up to date took attention and time, both of which are critical in software development, as we all know.

So we decided to create a module to help us with this. Big thanks to [Stephano](https://www.linkedin.com/in/stephano-macedo/), who helped me a lot!

## Before
Basically, whenever we develop a new API, we need to create three resources in the API Gateway: a gateway_resource, a gateway_method, and a gateway_integration, and then connect them all using their respective IDs.

Let's suppose an endpoint called `/users/all`. This is a snippet of the code we had before:

#### Resource
```terraform
resource "aws_api_gateway_resource" "api_users_all" {
  rest_api_id = aws_api_gateway_rest_api.api.id
  parent_id   = aws_api_gateway_resource.api_users.id
  path_part   = "all"
}
```

#### Method
```terraform
resource "aws_api_gateway_method" "api_users_all" {
  rest_api_id   = aws_api_gateway_rest_api.api.id
  resource_id   = aws_api_gateway_resource.api_users_all.id
  http_method   = "GET"
  authorization = "NONE" # required by the provider; "CUSTOM" when using an authorizer

  request_parameters = {
    "method.request.header.Authorization" = true
  }
}
```

#### Integration
```terraform
resource "aws_api_gateway_integration" "api_users_all" {
  rest_api_id = aws_api_gateway_rest_api.api.id
  resource_id = aws_api_gateway_resource.api_users_all.id
  http_method = aws_api_gateway_method.api_users_all.http_method

  type                    = "HTTP_PROXY"
  integration_http_method = "GET"
  uri                     = "https://api.example.com/users/all"

  request_parameters = {
    # maps the integration request header to the method request header
    "integration.request.header.Authorization" = "method.request.header.Authorization"
  }
}
```

Obviously there is more code than that, but this is the main part of it, and it's what we will focus on.

## Creating a module
Now we can create a module to help us. We can start by creating a separate folder for the module, let's call it `terraform/modules/api`; inside it there will be a couple of files:

### variables.tf
Here we define the variables the module receives from the outside. Note that this is just the essentials; you will add more as you need.
```terraform
# This is the parent resource ID in case we have something like /users/all/prune
variable "parent_id" {
  description = "Resource Parent ID"
  type        = string
}

# This is the last part of the path, we can infer it from the endpoint URI
variable "path_part" {
  description = "Path Part"
  type        = string
}

# Here we will put all the HTTP methods that the endpoint will accept
variable "http_methods" {
  description = "HTTP Methods"
  type        = list(string)
  default     = []
}

# The complete endpoint URI
variable "uri" {
  description = "URI"
  type        = string
  default     = ""
}

# The API Gateway ID
variable "gateway_id" {
  description = "API Gateway ID"
  type        = string
}

# If we have a URI that won't accept any HTTP method, we set this to true
variable "only_resource" {
  description = "Only create the resource"
  type        = bool
  default     = false
}

# Authorization as an example, so that we can pass the headers
variable "authorization" {
  description = "Required authorization"
  type        = bool
  default     = false
}
```

### outputs.tf
This file exposes at least one important value: the `resource_id`. It's needed when we have an endpoint like `/users/all/prune`, whose child resource needs a `parent_id`.
```terraform
output "resource_id" {
  value = local.resource_id
}
```

### locals.tf
As we referenced the `resource_id` in the `outputs.tf`, we need to define it in the `locals.tf`.
```terraform
locals {
  // this join is needed because we can't simply do aws_api_gateway_resource.api_resource.id
  resource_id = join("", aws_api_gateway_resource.api_resource[*].id)

  // if path_part starts with '{' and ends with '}', it's a path variable:
  // take all the middle characters; if it's empty, it's a normal path
  path_variable = length(regexall("{.*}", var.path_part)) > 0 ? substr(var.path_part, 1, length(var.path_part) - 2) : ""

  // in case we need Authorization
  integration_request_parameters = var.authorization ? {
    "integration.request.header.Authorization" = "method.request.header.Authorization"
  } : {}

  method_request_parameters = {
    "method.request.header.Authorization" = var.authorization
  }
}
```

### gateway.resources.tf
Here is where the fun begins; thank God it's pretty straightforward. All of the variables come from the `variables.tf` file.
```terraform
resource "aws_api_gateway_resource" "api_resource" {
  rest_api_id = var.gateway_id
  parent_id   = var.parent_id
  path_part   = var.path_part
}
```

### gateway.methods.tf
Since we need one `aws_api_gateway_method` per HTTP method, we use `count` to iterate over the `var.http_methods` list and create one method resource for each entry.
```terraform
resource "aws_api_gateway_method" "api_method" {
  count       = var.only_resource ? 0 : length(var.http_methods)
  rest_api_id = var.gateway_id
  resource_id = local.resource_id

  http_method   = var.http_methods[count.index]
  authorization = var.authorization ? "CUSTOM" : "NONE"

  // Got a path variable? No problem! We deal with that too right here
  request_parameters = merge(local.method_request_parameters, local.path_variable != "" ? {
    "method.request.path.${local.path_variable}" = true
  } : {})
}
```

### gateway.integrations.tf
The same idea goes for the `aws_api_gateway_integration`.
```terraform
resource "aws_api_gateway_integration" "api_integration" {
  count       = var.only_resource ? 0 : length(var.http_methods)
  rest_api_id = var.gateway_id
  resource_id = local.resource_id
  http_method = aws_api_gateway_method.api_method[count.index].http_method

  // same type as in the example before; this could also be made a variable
  type                    = "HTTP_PROXY"
  integration_http_method = var.http_methods[count.index]
  uri                     = var.uri

  // Aahh, I see your path variable; let's do some magic here
  request_parameters = merge(local.integration_request_parameters, local.path_variable != "" ? {
    "integration.request.path.${local.path_variable}" = "method.request.path.${local.path_variable}"
  } : {})
}
```

## Using the module

Now that we have the module, we can use it in our `main.tf` file. We will use the same example as before, but this time through the module, and we will create a few other endpoints as examples as well.
```terraform
# this is our main API endpoint; we don't want to receive any request here, so we only create the resource
# the gateway config comes from a terraform_remote_state data source (see the Note below)
# /users (only resource)
module "api_users" {
  source = "./modules/api"

  gateway_id    = data.terraform_remote_state.gateway.outputs.gateway_config.gateway_id
  parent_id     = data.terraform_remote_state.gateway.outputs.gateway_config.root_endpoints.api_root
  path_part     = "users"
  only_resource = true
}

# /users/all (get)
module "api_users_all" {
  source = "./modules/api"

  gateway_id   = data.terraform_remote_state.gateway.outputs.gateway_config.gateway_id
  parent_id    = module.api_users.resource_id
  path_part    = "all"
  http_methods = ["GET"]
  uri          = "http://api.example.com/users/all"
}

# /users/all/{userid} (get, post, put, delete)
module "api_users_all_userid" {
  source = "./modules/api"

  gateway_id   = data.terraform_remote_state.gateway.outputs.gateway_config.gateway_id
  parent_id    = module.api_users_all.resource_id
  path_part    = "{userid}"
  http_methods = ["GET", "POST", "PUT", "DELETE"]
  uri          = "http://api.example.com/users/all/{userid}"
}

# and so on...
```

## Conclusion
For one endpoint, we went from having to manage 15 lines split across 3 files to just 5 lines inside one file. If you have to manage hundreds of endpoints, that's a great help.

## WWW-Authenticate header
We can also pass the `WWW-Authenticate` header along with the request, for example. We first tried to do that by adding it to the request parameters like the others, but it didn't work: API Gateway was not passing `WWW-Authenticate` through to our API, and that was because of the header's name. Rename it, to `WWW-Authenticate-Header` for example, and it will work.

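As an illustration, a minimal sketch of what that mapping could look like in the module's locals, assuming the renamed `WWW-Authenticate-Header` from the example above (this exact hook point is an assumption, not the original code):

```terraform
locals {
  // hypothetical sketch: forward the header under a renamed key, since API
  // Gateway won't pass the literal WWW-Authenticate name through
  integration_request_parameters = var.authorization ? {
    "integration.request.header.Authorization"           = "method.request.header.Authorization"
    "integration.request.header.WWW-Authenticate-Header" = "method.request.header.WWW-Authenticate-Header"
  } : {}
}
```
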
## Note
This code has not been tested *as is*, but it has been tested as part of a bigger project. There is always room for improvement and more possibilities depending on the context, but it's a good start.

A lot of Terraform code has been omitted, such as declaring the `terraform_remote_state` or the `authorizer_id`, which you will need if using authorization `"CUSTOM"`.
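
For reference, a minimal sketch of those omitted pieces, with placeholder values rather than anything from the original project, might look like:

```terraform
# Hypothetical remote state holding the gateway configuration referenced in main.tf
data "terraform_remote_state" "gateway" {
  backend = "s3"
  config = {
    bucket = "my-terraform-state"        # placeholder bucket
    key    = "gateway/terraform.tfstate" # placeholder key
    region = "us-east-1"
  }
}

# Hypothetical custom authorizer; its ID would be passed to aws_api_gateway_method
resource "aws_api_gateway_authorizer" "custom" {
  name           = "custom-authorizer"
  rest_api_id    = data.terraform_remote_state.gateway.outputs.gateway_config.gateway_id
  type           = "TOKEN"
  authorizer_uri = aws_lambda_function.authorizer.invoke_arn # placeholder Lambda
}
```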

content/posts/automated-changelogs-gitlab.md
---
title: "Automated Changelogs on GitLab"
date: 2023-05-15T22:38:55-03:00
draft: false
summary: "Changelog automation on GitLab CI"
---

Changelogs are good, mainly if you need to keep track of what was changed in a release. But they can be a pain to write, especially when you have a lot of commits, several people working on the same project, lots of tasks, and so on. A good spot for some **automation**.

There are a couple of ways to build an automated changelog system; we will focus on one that uses GitLab CI and the project's commit messages. We will also assume that *releases are made through git tags*.

For this, we will start with a few requirements:
* Agree on a commit message pattern, for example: "[TASK-200] Fixing something for a task on Jira";
* Generate the release notes/changelog in a specific part of the pipeline (for example, the production release);
* The release notes generation takes place when creating a tag.

We will take advantage of these two commands:
1. `git log --pretty=format:"%s" --no-merges <tag>..HEAD` - This gives us the commit messages from the last tag to `HEAD`;
2. `git describe --abbrev=0 --tags` - This gives us the latest tag.
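
For example, on a hypothetical repository whose latest tag is `v1.2.0`, the two commands would behave roughly like this (the tag name and output are invented for illustration):

```bash
$ git describe --abbrev=0 --tags
v1.2.0
$ git log --pretty=format:"%s" --no-merges v1.2.0..HEAD
[TASK-201] Add the user export endpoint
[BUG-77] Fix pagination off-by-one
```
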
## Creating a basic pipeline

Let's start by creating a basic pipeline that will run on the production release.

```yaml
run:
  script:
    - echo "Running the pipeline"

.generateChangelog:
  image: python:latest
  stage: test
  script:
    - echo "Generating changelog..."
    # Generate changelog here
  artifacts:
    name: changelog.txt
    paths:
      - changelog.txt
    when: always
    expire_in: 1 week

deploy:
  stage: deploy
  extends:
    - .generateChangelog
  rules:
    - if: $CI_COMMIT_TAG
      when: manual
  environment: production
```

We output the changelog into a file named `changelog.txt` and then use the `artifacts` keyword to save it.

## Generating the changelog

Note that we set the image to `python:latest` on the `.generateChangelog` job; this is because we will use a Python script to generate the changelog. In the script we will define two functions: one that returns the latest tag, and another that gets the commits between the latest tag and `HEAD`.

To call OS commands we will use the `subprocess` module, and to get the output from a command we will use the `communicate()` function. In case of an error, we can add some error handling (more on this later).

```python
import subprocess as sp

def get_last_tag():
    pipe = sp.Popen('git describe --abbrev=0 --tags', shell=True, stdout=sp.PIPE, stderr=sp.PIPE)
    prev_tag, err = pipe.communicate()

    # If it returns 0, it means it was successful
    if pipe.returncode == 0:
        return prev_tag.strip()

def get_commits():
    prev_tag = get_last_tag().decode('utf-8')

    print('Previous tag: ' + prev_tag)

    # Note: rev-list with --format also emits "commit <sha>" lines;
    # those get filtered out later by the message-pattern check
    pipe = sp.Popen('git rev-list ' + prev_tag + '..HEAD --format=%s', shell=True, stdout=sp.PIPE, stderr=sp.PIPE)
    commits, err = pipe.communicate()

    # Only dealing with return code 0 for now
    if pipe.returncode == 0:
        commits = commits.strip().decode('utf-8').split('\n')

    return commits
```

Now we should get the list of commits that we want. Calling `get_commits()` returns a string list with all the commits, but there could be some we don't want to show in the changelog, for example: `Merge branch 'master' into 'develop'`. **This is where having a pattern helps.**

```python
def get_formatted_commits():
    commits = get_commits()

    formatted_commits = []

    for commit in commits:
        if commit.startswith('[TASK-') or commit.startswith('[BUG-'):
            formatted_commits.append(commit)

    return formatted_commits
```

This gives us only the important commit messages, the ones matching the pattern we want. We can improve it further by adding a regex, transforming `formatted_commits` into a `set` of task numbers, doing some parsing, making API calls, whatever we want (see the sketch below). For now, we will keep it simple and stick to the basics.
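
As one possible refinement, a regex could reduce the commits to a sorted set of unique task numbers; a minimal sketch, reusing the `get_commits()` function above (the function name here is made up):

```python
import re

# Hypothetical refinement: extract unique task numbers with a regex
def get_task_numbers():
    pattern = re.compile(r'^\[((?:TASK|BUG)-\d+)\]')
    tasks = set()
    for commit in get_commits():
        match = pattern.match(commit)
        if match:
            tasks.add(match.group(1))
    return sorted(tasks)
```
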
## Writing the changelog

Now that we have the commits we want, we can write them to a file. We will use the `open` function to open the file and write the commits to it.

```python
def write_changelog():
    commits = get_formatted_commits()

    with open('changelog.txt', 'w') as f:
        for commit in commits:
            f.write(commit + '\n')
```
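
The pipeline will run this as a standalone script (`python changelog.py` below), so a minimal entry point, assuming the functions above live in `changelog.py`, could be:

```python
# Hypothetical entry point; not part of the original snippets
if __name__ == '__main__':
    write_changelog()
    print('changelog.txt written')
```
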
## Putting it all together on the pipeline yaml file

Now that we have everything we need, we can put it all together in the pipeline yaml file.

```yaml
run:
  script:
    - echo "Running the pipeline"

.generateChangelog:
  image: python:latest
  stage: test
  script:
    - echo "Generating changelog..."
    - git tag -d $(git describe --abbrev=0 --tags) || true
    - python changelog.py
  artifacts:
    name: changelog.txt
    paths:
      - changelog.txt
    when: always
    expire_in: 1 week

deploy:
  stage: deploy
  extends:
    - .generateChangelog
  rules:
    - if: $CI_COMMIT_TAG
      when: manual
  environment: production
```

Note that we had to add the `git tag -d $(git describe --abbrev=0 --tags)` command to delete the latest tag locally. Since the pipeline runs on the newly created tag, `git describe` would otherwise return the current tag and the changelog would come out empty; deleting it makes `git describe` return the previous tag instead. The `|| true` is there to make sure the pipeline doesn't fail if no tag exists yet.

## Error handling

We can improve this further by adding some error handling. For example, if we don't have any tags yet, we can fall back to a default hash (the start of the git history).

```python
import sys
import subprocess as sp

def get_last_tag():
    pipe = sp.Popen('git describe --abbrev=0 --tags', shell=True, stdout=sp.PIPE, stderr=sp.PIPE)
    prev_tag, err = pipe.communicate()

    # If it's successful, we return the tag name
    if pipe.returncode == 0:
        return prev_tag.strip()
    else:
        # Otherwise, we fall back to the first commit hash
        pipe = sp.Popen('git rev-list --max-parents=0 HEAD', shell=True, stdout=sp.PIPE, stderr=sp.PIPE)
        first_commit, err = pipe.communicate()

        if pipe.returncode == 0:
            return first_commit.strip()
        else:
            # If that fails too, something else is wrong: print the error and exit
            print('Error: Could not get the last commit hash')
            print(err.strip())
            sys.exit(1)
```

Further error handling and improvements can be added; this is just a proof of concept. On another note, the code hasn't been tested *as is*, so there might be some errors.

content/posts/error-handling-dotnet.md
---
title: ".NET - Proper API error handling"
date: 2024-06-20T20:00:06-03:00
draft: false
summary: "Because returning stack traces isn't secure."
---

The main idea behind having centralized error handling is that we can process any unhandled exception to:
* Return formatted responses without revealing any internal functionality
* Ensure issues are properly recorded in logs and other monitoring systems (like Sentry)
* Make sure all errors have the same external behavior

For that, we will use a new middleware class:
```csharp
using System.Net;

public class ErrorResponse
{
    public string Message { get; set; }
}

public class ErrorHandlerMiddleware
{
    private readonly RequestDelegate _next;
    private readonly ILogger<ErrorHandlerMiddleware> _logger;
    private readonly IHostEnvironment _env;

    public ErrorHandlerMiddleware(RequestDelegate next, ILogger<ErrorHandlerMiddleware> logger, IHostEnvironment env)
    {
        _next = next;
        _logger = logger;
        _env = env;
    }

    public async Task Invoke(HttpContext httpContext)
    {
        // Attempts to execute the next action on the HTTP chain.
        // If it fails, we log the exception and trigger the HandleErrorAsync method.
        try
        {
            await _next(httpContext);
        }
        catch (Exception ex)
        {
            _logger.LogError(ex, "An unhandled exception has occurred: {Message}", ex.Message);
            await HandleErrorAsync(httpContext, ex);
        }
    }

    private async Task HandleErrorAsync(HttpContext context, Exception exception)
    {
        context.Response.ContentType = "application/json";
        context.Response.StatusCode = (int) HttpStatusCode.InternalServerError;

        ErrorResponse errorResponse;

        if (_env.IsDevelopment())
        {
            // In development, we want to see the full details for easier debugging.
            errorResponse = new ErrorResponse
            {
                Message = exception.ToString()
            };
        }
        else
        {
            // In production, we return a generic message to avoid leaking details.
            errorResponse = new ErrorResponse
            {
                Message = "An internal server error occurred. Please try again later."
            };
        }

        // We use the modern System.Text.Json for serialization via WriteAsJsonAsync
        await context.Response.WriteAsJsonAsync(errorResponse);
    }
}
```

We will also define a new extension class to register this middleware:
```csharp
public static class ErrorHandlerExtensions
{
    public static IApplicationBuilder UseErrorHandler(this IApplicationBuilder appBuilder)
    {
        // Note: the type parameter must match the middleware class defined above
        return appBuilder.UseMiddleware<ErrorHandlerMiddleware>();
    }
}
```

And then we just configure it in the `Configure()` method at startup:
```csharp
public void Configure(IApplicationBuilder app)
{
    app.UseErrorHandler();
}
```
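
If you are on the minimal hosting model (.NET 6+), where there is no `Startup.Configure()`, the equivalent wiring in `Program.cs` would look something like this sketch (the `/boom` endpoint is just a hypothetical way to trigger the handler):

```csharp
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Register the error handler first so it wraps the rest of the pipeline
app.UseErrorHandler();

// Hypothetical endpoint that always throws, to test the middleware
app.MapGet("/boom", () => { throw new Exception("test"); });

app.Run();
```
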
Now, when there's an unhandled issue during API execution, the API will return something like this in production (note that `WriteAsJsonAsync` camel-cases property names by default):
```json
{
  "message": "An internal server error occurred. Please try again later."
}
```

Sources:
* [https://learn.microsoft.com/en-us/aspnet/core/fundamentals/middleware/write?view=aspnetcore-8.0](https://learn.microsoft.com/en-us/aspnet/core/fundamentals/middleware/write?view=aspnetcore-8.0)

content/posts/home-k8s.md
---
title: "Homemade Kubernetes"
date: 2025-08-18T10:30:00-03:00
draft: false
summary: Why I went with k3s for my local homelab.
---

tl;dr: wanted to learn k8s properly and wanted some high availability for some services. Also solves loneliness ;)

---

I started to have some issues regarding high availability for some services. I wanted to make sure that my self-hosted applications (like Jellyfin) would remain accessible even if one of my servers went down. This led me to explore Kubernetes as a solution.

As you may or may not know, k8s is a container orchestration platform that automates the deployment, scaling, and management of containerized applications. However, it comes with a lot of complexity and operational overhead. I set up my cluster using [k3s](https://k3s.io/), a lightweight Kubernetes distribution. It seemed like a good starting point; I've been using it ever since and it has been working wonders so far.

Currently I'm running everything with all the config files on an NFS server, which makes managing configurations easier and backup-ready. For this, I'm using the `nfs-subdir-external-provisioner` to manage PVCs through NFS; a sketch of such a claim follows below. I have also set up two backup cronjobs: one for the local servers and another for a remote server.
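
As an illustration, a claim against the provisioner might look like this; `nfs-client` is the chart's default storage class name, but yours may differ, and the claim name is made up:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jellyfin-config        # hypothetical claim for a service's config files
spec:
  storageClassName: nfs-client # default class from nfs-subdir-external-provisioner
  accessModes:
    - ReadWriteMany            # NFS allows shared read-write across nodes
  resources:
    requests:
      storage: 1Gi
```
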
## Pros and cons

Pros that I have noticed:
* **Easy to set up and manage**: k3s is designed to be lightweight and easy to install
* **High availability**: if a server goes down, I can still access the services hosted on the others
  * I haven't been able to set up a properly HA k3s cluster yet, as I need more hardware
  * Currently, I'm using a single master-node setup
* **Backups** are easy to manage if you have all configurations in one place.
* **Cronjobs** are a breeze to set up and manage, mainly if you need to perform backup rituals (see the sketch after this section).
* **"Enterprise-grade"** cluster in your home!
* **Have fun :)**

Cons:
* **Complexity**: While k3s simplifies many aspects of Kubernetes, it still requires a certain level of understanding of container orchestration concepts.
* **Single point of failure**: In my current setup, the single master node is a potential point of failure. If it goes down, the entire cluster becomes unavailable.
  * This can be solved with a multi-master setup, but it requires additional hardware.
* **Learning curve**: Kubernetes has a steep learning curve -- which is good for people like me.
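
Speaking of backup cronjobs, a minimal sketch of one might look like the following; the image, schedule, and paths are placeholders, not my actual configuration:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: config-backup            # hypothetical backup job
spec:
  schedule: "0 3 * * *"          # every day at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: backup
              image: alpine:latest
              command: ["/bin/sh", "-c"]
              # tar the NFS-backed config directory; paths are placeholders
              args: ["tar czf /backup/configs-$(date +%F).tar.gz /configs"]
```
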
## Current setup

This is my current (might be outdated) setup:
* 2 Orange Pi running k3s
  - 4 GB RAM, 4C/4T, and a 256GB SD card on each
* 1 Mini PC
  - 6 GB RAM, 2C/4T, 64GB internal memory + 512GB SD card
* Proxmox
  - 32 GB RAM, 6C/12T, 1 TB SSD
  - Currently I run these VMs with k3s:
    - 1 prod-like VM
    - 1 dev-like VM
    - 1 work sandbox VM

On the technical side, I haven't made my setup / scripts / configurations public yet.

---

I believe that everyone should try this at home, be it on dedicated hardware/server or in a VM. It's a great way to learn and experiment with Kubernetes in a controlled environment.

I'm still running some services on Docker itself, but I'm slowly migrating them to k8s. Some services, like DNS and the Traefik reverse proxy, are a bit more complex to set up.

content/posts/projetos.md
---
title: "Projetos"
date: 2022-04-05T13:39:09-03:00
draft: true
---

Nothing here yet, but there was supposed to be.

content/posts/selfhost.md
---
title: "Self Hosting"
date: 2025-01-19T14:00:00-03:00
draft: false
summary: "Everyone should have Netflix at home"
---

[Why I'm slowly changing to Kubernetes.](https://blog.ivanch.me/posts/home-k8s/)

# Honorable Mentions
* [Proxmox VE](https://www.proxmox.com/) - Gotta put those VMs somewhere.
* [Proxmox VE Helper Scripts](https://community-scripts.github.io/ProxmoxVE/) - Easy deploys.
* [OpenMediaVault](https://www.openmediavault.org/) - NAS made simple.

## Necessary ones
* [AdGuard](https://hub.docker.com/r/adguard/adguardhome) - DNS-based adblocker service (also useful to block malware and other unwanted things).
  * Easy-setup alternative: [PiHole](https://hub.docker.com/r/pihole/pihole) - Same thing, but easier to set up.
* [Dockge](https://dockge.kuma.pet/) - Container and Compose management.
  * Alternative: [Portainer](https://www.portainer.io/) - Container management.
* [Traefik](https://hub.docker.com/_/traefik) - Reverse proxy manager.
  * Alternative: [Nginx Proxy Manager](https://nginxproxymanager.com/)
* [WatchTower](https://containrrr.dev/watchtower/) - Automatic container updates.
  * My lightweight alternative is my own `.sh` script, run every 4 days, that updates all containers on a specific server (see the sketch after this list).
* [Paperless](https://docs.paperless-ngx.com/) - Keep those important documents and papers organized, with easy searching.
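
That update script is roughly like the sketch below; the stacks directory is a placeholder for wherever your compose files live:

```bash
#!/bin/sh
# Hypothetical update script, run from cron every 4 days:
# pull newer images and recreate each compose stack
for dir in /opt/stacks/*/; do
  (cd "$dir" && docker compose pull && docker compose up -d)
done
# clean up superseded images
docker image prune -f
```
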
## Misc
* [Homarr](https://homarr.dev/) - A stylish dashboard with all services and sometimes some nice widgets.
* [Beszel](https://beszel.dev/) - Server monitor with some useful alarms.
* [Uptime Kuma](https://uptime.kuma.pet/) - Status monitoring for applications.
* [Gitea](https://gitea.com/) - Homemade GitHub (with Actions!)
* [Notepad](https://github.com/pereorga/minimalist-web-notepad) - Homemade dontpad.
* [Code Server](https://hub.docker.com/r/linuxserver/code-server/) - VSCode inside a Docker container.
* [FileBrowser](https://filebrowser.org/installation#docker/) - Hosting files made easier.
* [nginx](https://hub.docker.com/_/nginx/) - Let's all love nginx.
* [WireGuard](https://hub.docker.com/r/linuxserver/wireguard) - Personal VPN tunnel.
* [it-tools](https://hub.docker.com/r/corentinth/it-tools) - Some useful tools that we use every now and then.

## Media (*arr stack)
* [Jellyfin](https://hub.docker.com/r/linuxserver/jellyfin/) - Homemade Netflix (I hate Plex).
* [Transmission](https://hub.docker.com/r/linuxserver/transmission/) - Torrent client with a simple web interface.
  * Alternative: [qBittorrent](https://hub.docker.com/r/linuxserver/qbittorrent) - A more advanced web interface.
* [Prowlarr](https://hub.docker.com/r/linuxserver/prowlarr/) - Torrent tracker aggregator.
* [Sonarr](https://hub.docker.com/r/linuxserver/sonarr/) - TV show management (Torrent integration).
* [Radarr](https://hub.docker.com/r/linuxserver/radarr/) - Movie management (Torrent integration).
* [Lidarr](https://hub.docker.com/r/linuxserver/lidarr/) - Music management (Torrent integration), though I don't use this one.

## Game server
* [Minecraft Server](https://hub.docker.com/r/itzg/minecraft-server/) - For that 2-week period every 3 years.

content/posts/unhealthy-workers.md
---
title: "Unhealthy Workers"
date: 2025-06-20T23:11:44-03:00
draft: true
summary: "Put those workers to work!"
---