Benchmark Results

Explore real performance data from an 800 GiB end-to-end backup and restore benchmark. Select metrics, scrub through the timeline, and watch the operation unfold.

Methodology

This benchmark ran a full backup and restore cycle on a single-node Kafka cluster:

| Parameter | Value |
| --- | --- |
| Dataset | 800 GiB |
| Record size | 1 MiB |
| Total records | 819,200 |
| Partitions | 96 |
| Compression | zstd level 3 |
| Storage backend | S3-compatible (MinIO) |
| Compressed backup size | 478.7 GiB (1.67x ratio) |
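The derived figures in the table follow directly from the parameters above; a short sketch makes the arithmetic explicit (the constants are copied from the table, not from the benchmark code):

```python
MIB_PER_GIB = 1024

dataset_gib = 800
record_size_mib = 1
compressed_gib = 478.7

# Total records = dataset size in MiB / record size in MiB
total_records = dataset_gib * MIB_PER_GIB // record_size_mib  # 819,200

# Compression ratio = uncompressed size / compressed size
ratio = dataset_gib / compressed_gib  # ~1.67x with zstd level 3

print(total_records, round(ratio, 2))
```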

Results

| Phase | Duration | Throughput |
| --- | --- | --- |
| Backup | 105.8 min | 129.0 MiB/s |
| Restore | 94.2 min | 145.0 MiB/s |
| End-to-end | 3.67 hours | |
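The throughput column can be cross-checked from the dataset size and phase durations alone; this is a back-of-the-envelope sketch using the table's numbers (small rounding differences against the reported figures are expected):

```python
MIB_PER_GIB = 1024

dataset_mib = 800 * MIB_PER_GIB  # 800 GiB dataset, expressed in MiB

# Phase durations from the results table, converted to seconds
backup_s = 105.8 * 60
restore_s = 94.2 * 60

# Throughput = total data moved / phase duration
backup_tput = dataset_mib / backup_s    # ~129 MiB/s
restore_tput = dataset_mib / restore_s  # ~145 MiB/s

print(f"backup: {backup_tput:.1f} MiB/s, restore: {restore_tput:.1f} MiB/s")
```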

Source and restored topics both contain exactly 819,200 records — zero data loss, bit-perfect restore.

How to read the chart

  • Backup Phase (teal): Records are read from Kafka and written to MinIO. Throughput ramps up as the pipeline fills, then plateaus at steady-state.
  • Restore Phase (mauve): Records are read from MinIO and written back to Kafka. Restore progress tracks completion percentage while throughput shows the data rate.

Use the metric selector on the right to toggle additional metrics like per-container CPU, memory, network, and disk I/O.

Reproducing these benchmarks

The benchmark infrastructure is in the kafka-backup repository. See the scripts/benchmark-800gb directory for the Docker Compose setup, Prometheus configuration, and benchmark runner script.