Finish ceph-benchmark post for now
@@ -1,8 +1,7 @@
 +++
-title = "Ceph Benchmarking"
-date = 2026-02-21
-description = "The results of some of my recent ceph benchmarks"
-draft = true
+title = "Ceph RBD Benchmarking"
+date = 2026-02-22
+description = "My recent ceph rbd benchmarking adventure"
 
 [taxonomies]
 categories = ["Homelab"]
@@ -11,8 +10,12 @@ tags = ["Homelab", "Ceph"]
 
 ## Motivation
 I have been running a ceph cluster in my homelab for about 2 years now, but I have never properly benchmarked it, let alone written down my findings or any potential conclusions.
+I still don't know what kind of performance would be considered good or expected for my setup, but having numbers without context is still better than no numbers at all.
 
-## Setup everything for the Benchmarking
+## Benchmark Setup
+This covers all the steps I took to set up my benchmarking, so anyone can follow along and I can reference it later to repeat the benchmarks properly.
+
+{% details(summary="Create Benchmark User in Ceph") %}
 On a machine in the ceph-cluster already:
 1. Generate a minimal ceph config using `ceph config generate-minimal-conf`
 2. Create a user for Benchmarking
@@ -22,7 +25,9 @@ On a machine in the ceph-cluster already:
    - `mon`: `profile rbd`
    - `osd`: `profile rbd pool=test-rbd`
 3. Get the keyring configuration using `ceph auth get client.linux_pc`
+{% end %}
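The user-creation steps above boil down to two commands. This is a sketch based on the caps listed in the post, not copied from it; `get-or-create` both creates the user and prints its keyring:

```shell
# Step 2: create the benchmark user with rbd profiles scoped to the test pool.
# User name (client.linux_pc) and pool (test-rbd) follow the post's examples.
ceph auth get-or-create client.linux_pc \
    mon 'profile rbd' \
    osd 'profile rbd pool=test-rbd'

# Step 3: print the keyring configuration for the new user.
ceph auth get client.linux_pc
```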
 
+{% details(summary="Configure Ceph as a client on the Benchmark machine") %}
 On the client machine doing the Benchmarking:
 1. Install the basic ceph tools: `apt-get install -y ceph-common`
 2. Load the rbd kernel module: `modprobe rbd`
@@ -33,16 +38,19 @@ On the client machine doing the Benchmarking:
    - Copy the keyring configuration to `/etc/ceph/ceph.client.linux_pc.keyring`
    - `chmod 644 /etc/ceph/ceph.client.linux_pc.keyring`
 5. Confirm your configuration is working by running `ceph -s -n client.linux_pc`
+{% end %}
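The client setup above as one sketch; it assumes the minimal config and keyring were already copied over from the cluster (the middle steps fall outside this hunk):

```shell
apt-get install -y ceph-common     # step 1: ceph CLI and rbd tooling
modprobe rbd                       # step 2: kernel rbd support

# The minimal ceph.conf is assumed to live at /etc/ceph/ceph.conf,
# and the keyring was copied from the cluster node (file name is illustrative).
install -m 644 linux_pc.keyring /etc/ceph/ceph.client.linux_pc.keyring

ceph -s -n client.linux_pc         # step 5: confirm the client reaches the cluster
```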
 
+{% details(summary="Setting up the Benchmark on the Benchmark machine") %}
 Set up the benchmark itself:
 1. `rbd create -n client.linux_pc --size 10G --pool test-rbd bench-volume`
 2. `rbd -n client.linux_pc device map --pool test-rbd bench-volume` (which should create a new block device, likely `/dev/rbd0`)
 3. `mkfs.ext4 /dev/rbd0`
 4. `mkdir /mnt/bench`
 5. `mount /dev/rbd0 /mnt/bench`
+{% end %}
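The five steps above can be chained into a single script. Capturing the mapped device name avoids hard-coding `/dev/rbd0`, which is only the likely default:

```shell
rbd create -n client.linux_pc --size 10G --pool test-rbd bench-volume

# `rbd device map` prints the path of the newly mapped block device
DEV=$(rbd -n client.linux_pc device map --pool test-rbd bench-volume)

mkfs.ext4 "$DEV"
mkdir -p /mnt/bench
mount "$DEV" /mnt/bench
```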
 
 ## Benchmarks
-All benchmarks are run with the same configuration, only changing the access (read/write, random/sequential).
+All benchmarks are run with the same configuration, only changing the access patterns (read/write, random/sequential).
 Key configuration options are:
 - using libaio
 - direct io
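Given the options listed (libaio, direct IO) and the `bs=4M` visible in the job file, a matching fio job could look like the following sketch; the job name, size, and runtime are assumptions, not taken from the post:

```ini
# Hypothetical fio job consistent with the options above.
[global]
ioengine=libaio
direct=1
directory=/mnt/bench
size=8G
runtime=60
time_based=1

[seq-read]
rw=read
bs=4M
```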
@@ -111,7 +119,7 @@ bs=4M
 {% end %}
 
 ## Conclusion
-1. Overall I am satisfied with the performance of the cluster for my current use-case.
+1. Overall I am satisfied with the performance of the cluster for my current use-case
 2. There is a lot of room for improvement in the low queue-depth range
 3. The network is not really a limiting factor currently
    - None of the nodes in the cluster exceeded 500 MiB/s of TX or RX, so there is plenty of room for growth
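Point 3 can be sanity-checked with quick arithmetic: a 10 Gb/s link tops out around 1192 MiB/s, so a 500 MiB/s peak is only about 40% utilisation:

```shell
# 10 Gb/s converted to MiB/s (decimal gigabits, binary mebibytes)
line_rate=$((10 * 1000 * 1000 * 1000 / 8 / 1024 / 1024))
echo "line rate: ${line_rate} MiB/s"            # 1192
echo "peak use:  $((500 * 100 / line_rate)) %"  # 41
```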
@@ -120,7 +128,7 @@ bs=4M
 
 ## Extra Details
 {% details(summary="Cluster Hardware") %}
-- 10 Gb Networking between all nodes
+- 10 GbE Networking between all nodes
 - Node
   - Ryzen 5 5500
   - 64GB RAM