+++
title = "Ceph Benchmarking"
date = 2026-03-01
description = "The results of some of my recent ceph benchmarks"
draft = true

[taxonomies]
categories = ["Homelab"]
tags = ["Homelab", "Ceph"]
+++

## Setting everything up for the benchmarking

On a machine that is already part of the Ceph cluster:

1. Generate a minimal Ceph config using `ceph config generate-minimal-conf`
2. Create a user for benchmarking
   1. Create a new user: `ceph auth add client.linux_pc`
   2. Edit the caps for my use case:
      - `mgr`: `profile rbd pool=test-rbd`
      - `mon`: `profile rbd`
      - `osd`: `profile rbd pool=test-rbd`
   3. Get the keyring configuration using `ceph auth get client.linux_pc`

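The add-then-edit-caps steps above can also be collapsed into a single command. This is a sketch, assuming the `test-rbd` pool already exists; it requires a reachable cluster, so it is not something you can run standalone:

```shell
# Create the benchmark user with the rbd-profile caps in one step
# and write its keyring straight to a file (assumes the test-rbd
# pool already exists on the cluster).
ceph auth get-or-create client.linux_pc \
  mon 'profile rbd' \
  osd 'profile rbd pool=test-rbd' \
  mgr 'profile rbd pool=test-rbd' \
  -o /etc/ceph/ceph.client.linux_pc.keyring
```

Unlike `ceph auth add`, `get-or-create` is idempotent and prints (or with `-o`, writes) the keyring directly, which saves the separate `ceph auth get` step.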
On the client machine doing the benchmarking:

1. Install the basic Ceph tools: `apt-get install -y ceph-common`
2. Load the rbd kernel module: `modprobe rbd`
3. Set up the local Ceph config:
   - Copy the generated configuration to `/etc/ceph/ceph.conf`
   - `chmod 644 /etc/ceph/ceph.conf`
4. Set up the local Ceph keyring:
   - Copy the keyring configuration to `/etc/ceph/ceph.client.linux_pc.keyring`
   - `chmod 644 /etc/ceph/ceph.client.linux_pc.keyring`
5. Confirm the configuration is working by running `ceph -s -n client.linux_pc`

Set up the benchmark itself:

1. `rbd create -n client.linux_pc --size 10G --pool test-rbd bench-volume`
2. `rbd -n client.linux_pc device map --pool test-rbd bench-volume` (this should create a new block device, likely `/dev/rbd0`)
3. `mkfs.ext4 /dev/rbd0`
4. `mkdir /mnt/bench`
5. `mount /dev/rbd0 /mnt/bench`

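Once the runs are finished, the same setup can be torn down again. A sketch of the inverse steps, assuming `/dev/rbd0` was the mapped device and the volume is no longer needed:

```shell
# Undo the benchmark setup (assumes /dev/rbd0 was the mapped device)
umount /mnt/bench
rbd -n client.linux_pc device unmap /dev/rbd0
rbd -n client.linux_pc rm --pool test-rbd bench-volume
```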
## Benchmarks
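The individual fio invocations aren't reproduced here; a representative run for the random-read case, writing JSON that the `fio_benchmark` shortcode can consume, might look like the following. Block size, iodepth, runtime, and file size are illustrative assumptions, not the exact parameters behind the results below:

```shell
# One random-read run against the mounted rbd volume,
# with JSON output for the charts. Parameters are illustrative.
fio --name=random_read \
    --directory=/mnt/bench \
    --rw=randread \
    --bs=4k \
    --iodepth=1 \
    --numjobs=1 \
    --size=1G \
    --runtime=60 \
    --time_based \
    --ioengine=libaio \
    --direct=1 \
    --output-format=json \
    --output=random_read.json
```

`--direct=1` bypasses the page cache, so the numbers reflect the rbd device rather than client-side caching.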
## Results
{% details(summary="Random Reads - 1 Job") %}
{{ fio_benchmark(path="content/ceph-benchmarking/benchmarks/random_read.json") }}
{% end %}

{% details(summary="Random Writes - 1 Job") %}
{{ fio_benchmark(path="content/ceph-benchmarking/benchmarks/random_write.json") }}
{% end %}

{% details(summary="Sequential Reads - 1 Job") %}
{{ fio_benchmark(path="content/ceph-benchmarking/benchmarks/seq_read.json") }}
{% end %}

{% details(summary="Sequential Writes - 1 Job") %}
{{ fio_benchmark(path="content/ceph-benchmarking/benchmarks/seq_write.json") }}
{% end %}

## TODO

- Try directly on the block device
- Try this using xfs instead of ext4
- Try this with and without drive caches

## Details

{% details(summary="Command to convert raw data into vis data") %}

```bash
jq '[.jobs[] | { iodepth: ."job options".iodepth, bs: ."job options".bs, operations: { iops: .write.iops, bw_bytes: .write.bw_bytes } }]' content/ceph-benchmarking/benchmarks/raw_random_write.json | jq '
# collect the unique block-size labels (in encounter order)
(map({key: .bs, value: 1}) | from_entries | keys_unsorted) as $labels
| {
    labels: $labels,
    iodepths: (
      group_by(.iodepth)
      | map(
          . as $group
          | {
              iodepth: ($group[0].iodepth | tonumber),
              iops: [
                $labels[] as $l
                | ($group[] | select(.bs == $l) | .operations.iops) // null
              ],
              bw: [
                $labels[] as $l
                | ($group[] | select(.bs == $l) | .operations.bw_bytes) // null
              ]
            }
        )
    )
  }
'
```

{% end %}
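To sanity-check the aggregation stage, the same filter can be run on a tiny hand-written input (the values here are made up, two iodepths at a single block size):

```shell
# Feed a minimal synthetic dataset through the aggregation stage
# from above and print the resulting vis structure.
out=$(jq -nc '
  [ {iodepth: "1", bs: "4k", operations: {iops: 100, bw_bytes: 409600}},
    {iodepth: "2", bs: "4k", operations: {iops: 150, bw_bytes: 614400}} ]
  | (map({key: .bs, value: 1}) | from_entries | keys_unsorted) as $labels
  | {
      labels: $labels,
      iodepths: (
        group_by(.iodepth)
        | map(
            . as $group
            | {
                iodepth: ($group[0].iodepth | tonumber),
                iops: [ $labels[] as $l | ($group[] | select(.bs == $l) | .operations.iops) // null ],
                bw:   [ $labels[] as $l | ($group[] | select(.bs == $l) | .operations.bw_bytes) // null ]
              }
          )
      )
    }')
echo "$out"
```

Each `iodepths` entry carries one `iops`/`bw` value per label, with `null` filling any block size that a given iodepth never ran.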