Time-Series Performance Tuning Guide

Last Updated: January 4, 2026


This guide covers optimization strategies for HeliosDB’s time-series compression features.

Table of Contents

  1. Performance Targets
  2. Block Size Optimization
  3. Compression Ratio Optimization
  4. Throughput Optimization
  5. Memory Optimization
  6. Latency Optimization
  7. Benchmarking
  8. Production Recommendations

Performance Targets

Baseline Performance

| Metric | Target | Notes |
| --- | --- | --- |
| Compression ratio | 8-12x | For typical IoT/metrics data |
| Compression latency | <5 ms / 1K pts | Per 1,000 data points |
| Decompression latency | <3 ms / 1K pts | Per 1,000 data points |
| Throughput | 500K+ pts/sec | Sustained rate |
| Memory per batch | <1 MB | For a 10K-point batch |

Measuring Current Performance

```rust
use std::time::Instant;
use heliosdb_storage::timeseries::BatchCompressor;

fn benchmark_compression(data: &[(Vec<u64>, Vec<f64>)]) {
    let compressor = BatchCompressor::default();
    let start = Instant::now();
    let mut total_points = 0;
    let mut total_original = 0;
    let mut total_compressed = 0;

    for (timestamps, values) in data {
        let compressed = compressor.compress_batch(timestamps, values, None).unwrap();
        total_points += timestamps.len();
        total_original += timestamps.len() * 16; // 8-byte timestamp + 8-byte value
        total_compressed += compressed.len();
    }

    let duration = start.elapsed();
    let throughput = total_points as f64 / duration.as_secs_f64();
    println!("Points: {}", total_points);
    println!("Duration: {:?}", duration);
    println!("Throughput: {:.0} pts/sec", throughput);
    println!("Ratio: {:.2}x", total_original as f64 / total_compressed as f64);
}
```

Block Size Optimization

Block size affects the tradeoff between compression ratio and latency.

Guidelines

| Block Size | Compression Ratio | Latency | Use Case |
| --- | --- | --- | --- |
| 256 | Lower (-10%) | Lowest | Real-time streaming |
| 512 | Moderate | Low | Interactive queries |
| 1024 (default) | Good | Moderate | General purpose |
| 2048 | Better (+5%) | Higher | Batch processing |
| 4096 | Best (+10%) | Highest | Archival storage |

Configuration

```rust
use heliosdb_storage::timeseries::BatchCompressionConfig;

// Real-time: prioritize latency
let realtime_config = BatchCompressionConfig {
    block_size: 256,
    ..Default::default()
};

// Archival: prioritize compression
let archival_config = BatchCompressionConfig {
    block_size: 4096,
    ..Default::default()
};
```

Finding Optimal Block Size

```rust
fn find_optimal_block_size(sample_data: &[(Vec<u64>, Vec<f64>)]) {
    for block_size in [256, 512, 1024, 2048, 4096] {
        let config = BatchCompressionConfig {
            block_size,
            ..Default::default()
        };
        let compressor = BatchCompressor::new(config);
        let start = Instant::now();
        let mut total_original = 0;
        let mut total_compressed = 0;

        for (ts, vals) in sample_data {
            let compressed = compressor.compress_batch(ts, vals, None).unwrap();
            total_original += ts.len() * 16; // 8-byte timestamp + 8-byte value
            total_compressed += compressed.len();
        }

        let duration = start.elapsed();
        println!(
            "Block size {}: ratio={:.2}x, time={:?}",
            block_size,
            total_original as f64 / total_compressed as f64,
            duration
        );
    }
}
```

Compression Ratio Optimization

Data Characteristics That Affect Compression

| Characteristic | Impact on Ratio | Recommendation |
| --- | --- | --- |
| Regular intervals | Better | Use consistent sampling |
| Slowly changing values | Better | Natural for metrics |
| Random values | Worse | Consider pre-processing |
| Many unique metrics | Worse | Use dictionary compression |
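One pre-processing option for noisy values is quantization: XOR-style float compression works best when consecutive values share trailing mantissa bits, so rounding away measurement noise can recover much of the ratio. A minimal sketch, independent of HeliosDB (the `quantize` helper is illustrative, not part of the API), and lossy by design:

```rust
/// Quantize values to a fixed number of decimal places so consecutive
/// floats share more bits (better XOR compression). Lossy: only apply
/// when the workload tolerates reduced precision.
fn quantize(values: &[f64], decimals: u32) -> Vec<f64> {
    let scale = 10f64.powi(decimals as i32);
    values.iter().map(|v| (v * scale).round() / scale).collect()
}

fn main() {
    let noisy = vec![21.700001239, 21.699998772, 21.700003051];
    let clean = quantize(&noisy, 3);
    // All three points collapse to the same bit pattern.
    assert!(clean.iter().all(|&v| v == 21.7));
    println!("{:?}", clean);
}
```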

Improving Compression Ratio

1. Sort by timestamp

```rust
// Unsorted data compresses poorly
let mut data: Vec<(u64, f64)> = load_data();

// Sort for better compression
data.sort_by_key(|(ts, _)| *ts);

let timestamps: Vec<u64> = data.iter().map(|(t, _)| *t).collect();
let values: Vec<f64> = data.iter().map(|(_, v)| *v).collect();
```

2. Use consistent intervals

```rust
// Irregular intervals: worse compression
// [1000, 1050, 1200, 1201, 5000]

// Regular intervals: better compression
// [1000, 2000, 3000, 4000, 5000]
```
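When timestamps carry small jitter around a known sampling interval, snapping them to the grid makes every delta identical, which delta-of-delta timestamp encoding compresses to near-zero bits. A sketch (the `snap_to_grid` helper is hypothetical, and the operation is lossy, so only use it when jitter is noise rather than signal):

```rust
/// Snap timestamps to the nearest multiple of `interval` (e.g. 1000 ms).
/// Lossy: only safe when jitter is small relative to the sampling interval.
fn snap_to_grid(timestamps: &[u64], interval: u64) -> Vec<u64> {
    timestamps
        .iter()
        .map(|&ts| ((ts + interval / 2) / interval) * interval)
        .collect()
}

fn main() {
    let jittered = vec![1003, 1998, 3012, 4001];
    assert_eq!(snap_to_grid(&jittered, 1000), vec![1000, 2000, 3000, 4000]);
}
```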

3. Enable all compression types

```rust
let config = BatchCompressionConfig {
    compress_timestamps: true,
    compress_values: true,
    compress_metrics: true, // Enable dictionary compression
    ..Default::default()
};
```

4. Increase block size

```rust
let config = BatchCompressionConfig {
    block_size: 4096, // Larger blocks = better ratio
    ..Default::default()
};
```

Throughput Optimization

Parallel Processing

```rust
use rayon::prelude::*;

fn parallel_compression(batches: Vec<(Vec<u64>, Vec<f64>)>) -> Vec<Vec<u8>> {
    batches
        .par_iter()
        // map_init reuses one compressor per worker thread instead of
        // allocating a new one for every batch.
        .map_init(BatchCompressor::default, |compressor, (ts, vals)| {
            compressor.compress_batch(ts, vals, None).unwrap()
        })
        .collect()
}
```

Batch Size Tuning

```rust
// Too small: per-batch overhead dominates
let small_batch = 100;       // ~10K pts/sec
// Too large: memory pressure
let large_batch = 1_000_000; // risks memory issues
// Optimal: 10K-100K points per batch
let optimal_batch = 50_000;  // ~500K pts/sec
```
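The clamping logic above can be sketched as a small helper that splits oversized inputs into batches at the target size (`split_batches` is an illustrative helper, not part of the HeliosDB API):

```rust
/// Split `n_points` into batch sizes no larger than `target`,
/// keeping each batch inside the 10K-100K sweet spot.
fn split_batches(n_points: usize, target: usize) -> Vec<usize> {
    (0..n_points)
        .step_by(target)
        .map(|start| target.min(n_points - start))
        .collect()
}

fn main() {
    // 120K points at a 50K target -> batches of 50K, 50K, 20K.
    assert_eq!(split_batches(120_000, 50_000), vec![50_000, 50_000, 20_000]);
}
```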

Pipeline Processing

```rust
use tokio::sync::mpsc;

async fn pipeline_compression(
    mut rx: mpsc::Receiver<(Vec<u64>, Vec<f64>)>, // recv() needs `mut`
    tx: mpsc::Sender<Vec<u8>>,
) {
    let compressor = BatchCompressor::default();
    while let Some((ts, vals)) = rx.recv().await {
        let compressed = compressor.compress_batch(&ts, &vals, None).unwrap();
        tx.send(compressed).await.unwrap();
    }
}
```

Memory Optimization

Memory Usage Guidelines

| Batch Size | Memory (Compression) | Memory (Decompression) |
| --- | --- | --- |
| 1K points | ~100 KB | ~50 KB |
| 10K points | ~800 KB | ~400 KB |
| 100K points | ~8 MB | ~4 MB |
| 1M points | ~80 MB | ~40 MB |
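For rough capacity planning, the larger rows in the table work out to about 80 bytes/point while compressing and 40 bytes/point while decompressing (small batches carry extra fixed overhead). A back-of-the-envelope estimator, illustrative only; measure your own workload before relying on it:

```rust
/// Ballpark memory estimate derived from the table above:
/// ~80 bytes/point compressing, ~40 bytes/point decompressing.
fn estimated_bytes(points: usize, decompression: bool) -> usize {
    points * if decompression { 40 } else { 80 }
}

fn main() {
    // 10K points ~= 800 KB to compress, ~= 400 KB to decompress.
    assert_eq!(estimated_bytes(10_000, false), 800_000);
    assert_eq!(estimated_bytes(10_000, true), 400_000);
}
```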

Reducing Memory Usage

1. Process in smaller batches

```rust
const BATCH_SIZE: usize = 10_000;

// `data` is a Vec<(u64, f64)>; split each chunk into column vectors
// before handing it to the compressor.
for chunk in data.chunks(BATCH_SIZE) {
    let (ts, vals): (Vec<u64>, Vec<f64>) = chunk.iter().copied().unzip();
    let compressed = compressor.compress_batch(&ts, &vals, None)?;
    write_to_storage(&compressed).await?;
}
```

2. Stream processing

```rust
use tokio::io::{AsyncRead, AsyncWrite};

// Don't load all data into memory
async fn stream_compress(
    mut reader: impl AsyncRead + Unpin,
    mut writer: impl AsyncWrite + Unpin,
) -> Result<(), Box<dyn std::error::Error>> {
    let compressor = BatchCompressor::default();
    let mut batch_ts = Vec::with_capacity(10_000);
    let mut batch_vals = Vec::with_capacity(10_000);

    while let Some(point) = read_point(&mut reader).await? {
        batch_ts.push(point.timestamp);
        batch_vals.push(point.value);
        if batch_ts.len() >= 10_000 {
            let compressed = compressor.compress_batch(&batch_ts, &batch_vals, None)?;
            write_chunk(&mut writer, &compressed).await?;
            batch_ts.clear();
            batch_vals.clear();
        }
    }

    // Flush the final partial batch.
    if !batch_ts.is_empty() {
        let compressed = compressor.compress_batch(&batch_ts, &batch_vals, None)?;
        write_chunk(&mut writer, &compressed).await?;
    }
    Ok(())
}
```

3. Reuse compressor instance

```rust
// Good: reuse one compressor across batches
let compressor = BatchCompressor::default();
for batch in batches {
    let compressed = compressor.compress_batch(&batch.ts, &batch.vals, None)?;
}

// Bad: create a new compressor each time
for batch in batches {
    let compressor = BatchCompressor::default(); // Allocation overhead
    let compressed = compressor.compress_batch(&batch.ts, &batch.vals, None)?;
}
```

Latency Optimization

Latency Targets

| Workload | Compression | Decompression |
| --- | --- | --- |
| Real-time | <1 ms / 1K pts | <0.5 ms / 1K pts |
| Interactive | <5 ms / 1K pts | <3 ms / 1K pts |
| Batch | <10 ms / 1K pts | <5 ms / 1K pts |

Reducing Latency

1. Smaller block sizes

```rust
let config = BatchCompressionConfig {
    block_size: 256, // Faster processing
    ..Default::default()
};
```

2. Disable unnecessary compression

```rust
// If timestamps are already compact
let config = BatchCompressionConfig {
    compress_timestamps: false, // Skip timestamp compression
    compress_values: true,
    compress_metrics: false,    // Skip dictionary compression
    ..Default::default()
};
```

3. Pre-allocate buffers

```rust
// Pre-allocate for known batch sizes to avoid reallocation during the hot path
let mut timestamps = Vec::with_capacity(10_000);
let mut values = Vec::with_capacity(10_000);
```

Benchmarking

Running Benchmarks

```shell
# Run all compression benchmarks
cargo bench --package heliosdb-storage --bench compression_performance

# Run a specific benchmark
cargo bench --package heliosdb-storage --bench compression_performance -- batch_compression

# Profile with flamegraph
cargo bench --package heliosdb-storage --bench compression_performance -- --profile-time=30
```

Benchmark Interpretation

```
batch_compression_throughput/compress/1000
                        time:   [2.1 ms 2.2 ms 2.3 ms]
                        thrpt:  [434.78 K elem/s 454.55 K elem/s 476.19 K elem/s]
```

  • time: compression time for 1,000 points (lower bound, estimate, upper bound)
  • thrpt: throughput in points per second (the lower throughput bound corresponds to the upper time bound)

Custom Benchmarks

```rust
use criterion::{criterion_group, criterion_main, Criterion, Throughput};
use heliosdb_storage::timeseries::BatchCompressor;

fn compression_benchmark(c: &mut Criterion) {
    let compressor = BatchCompressor::default();
    // generate_test_data: your own helper producing (Vec<u64>, Vec<f64>)
    let (timestamps, values) = generate_test_data(10_000);

    let mut group = c.benchmark_group("compression");
    group.throughput(Throughput::Elements(10_000));
    group.bench_function("compress_10k", |b| {
        b.iter(|| compressor.compress_batch(&timestamps, &values, None).unwrap())
    });
    group.finish();
}

criterion_group!(benches, compression_benchmark);
criterion_main!(benches);
```

Production Recommendations

General Settings

```rust
let production_config = BatchCompressionConfig {
    block_size: 1024,          // Good balance of ratio and latency
    compress_timestamps: true,
    compress_values: true,
    compress_metrics: true,
    min_ratio: 1.5,            // Only compress if worthwhile
    ..Default::default()
};
```

Workload-Specific Settings

| Workload | Block Size | Batch Size | Parallelism |
| --- | --- | --- | --- |
| IoT (high volume) | 2048 | 50K | 4-8 threads |
| Metrics (regular) | 1024 | 10K | 2-4 threads |
| Financial (low latency) | 256 | 1K | 1 thread |
| Archival (offline) | 4096 | 100K | All cores |
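The table above can be encoded as a small lookup so deployments pick settings by workload name. The `tuning_for` helper and its string keys are hypothetical, not part of HeliosDB; thread counts are left to the deployment (for example, a thread pool sized to match the Parallelism column):

```rust
/// Map a workload label to (block_size, batch_size) per the table above.
/// Illustrative helper; adjust keys and values to your deployment.
fn tuning_for(workload: &str) -> (usize, usize) {
    match workload {
        "iot" => (2048, 50_000),
        "metrics" => (1024, 10_000),
        "financial" => (256, 1_000),
        "archival" => (4096, 100_000),
        _ => (1024, 10_000), // general-purpose default
    }
}

fn main() {
    assert_eq!(tuning_for("financial"), (256, 1_000));
    assert_eq!(tuning_for("unknown"), (1024, 10_000));
}
```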

Monitoring

```rust
// Log compression statistics periodically
let stats = compressor.stats();
metrics::gauge!("compression.ratio", stats.avg_compression_ratio());
metrics::gauge!("compression.throughput", stats.points_per_second());
metrics::histogram!("compression.latency_ms", stats.avg_latency_ms());

if stats.avg_compression_ratio() < 2.0 {
    warn!("Low compression ratio: {:.2}x", stats.avg_compression_ratio());
}
```