Compare commits: ca1071de69 ... 3feca8b295 (2 commits: 3feca8b295, 45426301e9)
content/auto-server-setup/index.md (new file, 64 lines)
@@ -0,0 +1,64 @@
+++
title = "Automatic Server Setup"
date = 2026-03-31
description = "Automating my server setup"
draft = true

[extra]
toc = true

[taxonomies]
categories = ["Homelab"]
tags = ["Homelab"]
+++
Completely automating the setup of new servers in my homelab, from OS install to service deployment, using PXE/iPXE and Ansible.

<!-- more -->
# Intro: Network booting and PXE

You might have heard the terms "network boot" and "PXE" before, but what do they actually mean, and why should you care?

Simply put, both describe an alternative to normal booting and OS installation, in which you get all the necessary data over the network.

PXE is an older standard that implements the network booting process: it dynamically fetches the kernel, the initramfs, and everything else needed to boot an OS over the network.
Okay, but why should you care? For me personally, the most interesting part is not necessarily having your entire OS live on the network, but using these tools to install the OS.

So instead of running around with a USB stick that contains your Linux installation media, you have one central setup in your network, and any host can simply boot from it to install an OS.
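That "central setup in your network" can be as small as a single dnsmasq instance. As a rough sketch (the address range, paths, and boot filename are placeholders, and this assumes an existing DHCP server, so dnsmasq only runs as a proxy):

```ini
# /etc/dnsmasq.conf -- minimal PXE boot service (sketch)
# proxy mode: leave address assignment to the existing DHCP server
dhcp-range=192.168.1.0,proxy
# serve boot files over TFTP
enable-tftp
tftp-root=/srv/tftp
# network boot program handed to PXE clients
# (here: a custom iPXE build for BIOS clients)
dhcp-boot=undionly.kpxe
```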
In my case, the machine first uses PXE to boot into a custom iPXE build, which has an embedded iPXE script that simply chainloads another, machine-specific iPXE script from my server.

- If the machine has been configured in my tool, the chainloaded iPXE script is a simple script that boots an Ubuntu installer with some extra options.
- If the machine has not been configured in my tool, the chainloaded iPXE script currently displays a kind of error message to the user and waits for input.

I want to expand on this a bit, so that it instead boots into a minimal Linux environment that provides everything needed to then go and configure the server in my tool.
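The embedded script can stay tiny, since all the machine-specific logic lives on the server. A minimal sketch of what such a script might look like (the URL is a placeholder; the real endpoint depends on your setup):

```text
#!ipxe
dhcp
# chainload a machine-specific script, identified by MAC address;
# drop to the iPXE shell if the server is unreachable
chain http://boot.example.lan/ipxe?mac=${net0/mac} || shell
```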
# Automating the OS install

Okay, so you can now get your installation media over the network and avoid having to run around with a USB stick (heck, even finding an unused one is annoying for me). But you still have to go through all the steps in the installer manually. Or do you?
Well, you see, most major Linux distributions with an installer also have a way to provide the configuration one would normally set in the installer directly. Specifically for my case, Ubuntu has something called "autoinstall"[^autoinstall], which is similar in spirit to cloud-init[^cloud-init].

The configuration can be provided in a variety of ways; for simplicity, I decided to serve it over HTTP.
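To give a feel for the format, here is a minimal, hypothetical autoinstall snippet (all values are placeholders; the autoinstall docs define the full schema):

```yaml
#cloud-config
autoinstall:
  version: 1
  identity:
    hostname: node01        # placeholder
    username: admin         # placeholder
    password: "$6$..."      # crypted password hash goes here
  ssh:
    install-server: true    # install and enable the SSH server
  storage:
    layout:
      name: lvm             # use the guided LVM layout
```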
Based on this, I wrote a small piece of software that serves a dynamically generated configuration, based on which machine requested it.

So you specify some machine-specific things in my tool:

- the MAC address to identify the machine
- the serial numbers of the drives you want to use as your boot drives
- networking-related settings
- ? TODO

and then the corresponding configuration for the installer is created automatically.
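None of the names below come from my actual tool; this is just a hypothetical sketch of the idea: a registry of machines keyed by MAC address, from which an installer config is rendered per request.

```python
from dataclasses import dataclass


@dataclass
class Machine:
    mac: str                      # identifies the machine
    hostname: str
    boot_drive_serials: list[str]  # drives to install onto


# hypothetical registry; in practice this would live in a database or config file
MACHINES = {
    "aa:bb:cc:dd:ee:ff": Machine(
        mac="aa:bb:cc:dd:ee:ff",
        hostname="node01",
        boot_drive_serials=["S1234", "S5678"],
    ),
}


def render_autoinstall(mac: str) -> dict:
    """Build an autoinstall-style config for the machine with this MAC,
    or a fallback payload signalling 'unknown machine'."""
    machine = MACHINES.get(mac.lower())
    if machine is None:
        return {"error": f"unknown machine {mac}"}
    return {
        "autoinstall": {
            "version": 1,
            "identity": {"hostname": machine.hostname},
            # match boot drives by serial number instead of device name,
            # since /dev/sdX ordering is not stable
            "storage": {"boot_serials": machine.boot_drive_serials},
        },
    }
```

An HTTP handler then only needs to look up the requester's MAC and serialize the result of `render_autoinstall` as the installer's config.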
# Why not MAAS?

Well...
# Future Work

## Deploy to "production"

Currently this is mostly still an experimental setup, but I actually want to deploy it in my homelab.

For this I am planning on setting up one or two Raspberry Pis, which might also serve some other critical services.
## Better help for unknown config

Currently it is a bit annoying to get the first configuration for a client going, because you need to find all the serial numbers etc.

In the long run I want to boot into a minimal live Linux setup[^blog_tiny_linux_from_scrach], which one can then use to find all the info needed to set up the config.
# References

[^autoinstall]: [autoinstall docs](https://canonical-subiquity.readthedocs-hosted.com/en/latest/intro-to-autoinstall.html)

[^cloud-init]: [cloud-init homepage](https://cloud-init.io/)

[^blog_tiny_linux_from_scrach]: [Blog post about creating a minimal Linux kernel with busybox](https://blinry.org/tiny-linux/)
@@ -3,16 +3,23 @@ title = "Ceph RBD Benchmarking"
 date = 2026-02-22
 description = "My recent ceph rbd benchmarking adventure"
+
+[extra]
+toc = true
 
 [taxonomies]
 categories = ["Homelab"]
 tags = ["Homelab", "Ceph"]
 +++
 
-## Motivation
+Running a small set of benchmarks to find out what my ceph cluster is capable of.
+
+<!-- more -->
+
+# Motivation
 I have been running a ceph cluster in my homelab for about 2 years now, but never properly benchmarked it, let alone wrote down my findings or any potential conclusions.
 I still don't know what kind of performance to expect or what would be considered good or expected for my setup, but having numbers without context is still better than no numbers at all.
 
-## Benchmark Setup
+# Benchmark Setup
 This covers all the steps I did to setup my benchmarking, so anyone could follow along and for me to reference later to repeat benchmarks properly.
 
 {% details(summary="Create Benchmark User in Ceph") %}
@@ -49,7 +56,7 @@ Setup the benchmark itself:
 5. `mount /dev/rbd0 /mnt/bench`
 {% end %}
 
-## Benchmarks
+# Benchmarks
 All benchmarks are run with the same configuration, only changing the access patterns (read/write, random/sequential).
 Key configuration options are:
 - using libaio
@@ -101,7 +108,7 @@ bs=4M
 ```
 {% end %}
 
-## Results
+# Results
 {% details(summary="Random Reads") %}
 {{ fio_benchmark(path="content/ceph-benchmarking/benchmarks/random_read.json") }}
 {% end %}
@@ -118,7 +125,7 @@ bs=4M
 {{ fio_benchmark(path="content/ceph-benchmarking/benchmarks/seq_write.json") }}
 {% end %}
 
-## Conclusion
+# Conclusion
 1. Overall I am satisfied with the performance of the cluster for my current use-case
 2. There is a lot of room for improvement in the low queue-depth range
 3. The network is not really a limiting factor currently
@@ -126,7 +133,7 @@ bs=4M
 - My client used for testing was limited by the network, evident by the fact that the highest speed achieved is ~1.2GB/s (~10Gb/s)
 4. My smallest node (the embedded epyc) could be the limiting factor as in some benchmarks, it reached 100% cpu usage, while my other nodes never exceeded 40%
 
-## Extra Details
+# Extra Details
 {% details(summary="Cluster Hardware") %}
 - 10 GbE Networking between all nodes
 - Node
@@ -175,7 +182,7 @@ jq '[.jobs[] | { iodepth: ."job options".iodepth, bs: ."job options".bs, operati
 ```
 {% end %}
 
-## Future Work
+# Future Work
 - Try directly on the block device
 - Try this using xfs instead of ext4
 - Try this with and without drive caches