Time-Series Performance Tuning Guide
Last Updated: January 4, 2026
This guide covers optimization strategies for HeliosDB’s time-series compression features.
Table of Contents
- Performance Targets
- Block Size Optimization
- Compression Ratio Optimization
- Throughput Optimization
- Memory Optimization
- Latency Optimization
- Benchmarking
- Production Recommendations
Performance Targets
Baseline Performance
| Metric | Target | Notes |
|---|---|---|
| Compression ratio | 8-12x | For typical IoT/metrics data |
| Compression latency | <5ms/1K pts | Per 1000 data points |
| Decompression latency | <3ms/1K pts | Per 1000 data points |
| Throughput | 500K+ pts/sec | Sustained rate |
| Memory per batch | <1 MB | For 10K point batch |
Measuring Current Performance
```rust
use std::time::Instant;

use heliosdb_storage::timeseries::BatchCompressor;

fn benchmark_compression(data: &[(Vec<u64>, Vec<f64>)]) {
    let compressor = BatchCompressor::default();

    let start = Instant::now();
    let mut total_points = 0;
    let mut total_original = 0;
    let mut total_compressed = 0;

    for (timestamps, values) in data {
        let compressed = compressor.compress_batch(timestamps, values, None).unwrap();
        total_points += timestamps.len();
        total_original += timestamps.len() * 16;
        total_compressed += compressed.len();
    }

    let duration = start.elapsed();
    let throughput = total_points as f64 / duration.as_secs_f64();

    println!("Points: {}", total_points);
    println!("Duration: {:?}", duration);
    println!("Throughput: {:.0} pts/sec", throughput);
    println!("Ratio: {:.2}x", total_original as f64 / total_compressed as f64);
}
```
Block Size Optimization
Block size affects the tradeoff between compression ratio and latency.
Guidelines
| Block Size | Compression Ratio | Latency | Use Case |
|---|---|---|---|
| 256 | Lower (-10%) | Lower | Real-time streaming |
| 512 | Moderate | Low | Interactive queries |
| 1024 (default) | Good | Moderate | General purpose |
| 2048 | Better (+5%) | Higher | Batch processing |
| 4096 | Best (+10%) | Highest | Archival storage |
Configuration
use heliosdb_storage::timeseries::BatchCompressionConfig;
```rust
// Real-time: prioritize latency
let realtime_config = BatchCompressionConfig {
    block_size: 256,
    ..Default::default()
};

// Archival: prioritize compression
let archival_config = BatchCompressionConfig {
    block_size: 4096,
    ..Default::default()
};
```
Finding Optimal Block Size
```rust
fn find_optimal_block_size(sample_data: &[(Vec<u64>, Vec<f64>)]) {
    for block_size in [256, 512, 1024, 2048, 4096] {
        let config = BatchCompressionConfig { block_size, ..Default::default() };
        let compressor = BatchCompressor::new(config);

        let start = Instant::now();
        let mut total_original = 0;
        let mut total_compressed = 0;

        for (ts, vals) in sample_data {
            let compressed = compressor.compress_batch(ts, vals, None).unwrap();
            total_original += ts.len() * 16; // 8 bytes timestamp + 8 bytes value
            total_compressed += compressed.len();
        }

        let duration = start.elapsed();

        println!(
            "Block size {}: ratio={:.2}x, time={:?}",
            block_size,
            total_original as f64 / total_compressed as f64,
            duration
        );
    }
}
```
Compression Ratio Optimization
Data Characteristics That Affect Compression
| Characteristic | Impact on Ratio | Recommendation |
|---|---|---|
| Regular intervals | Better | Use consistent sampling |
| Slowly changing values | Better | Natural for metrics |
| Random values | Worse | Consider pre-processing |
| Many unique metrics | Worse | Use dictionary compression |
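The "random values" row above suggests pre-processing before compression. One common technique is quantizing floats to a fixed precision, since XOR-style float compressors benefit from values that share low-order mantissa bits. This is a minimal illustrative sketch; the `quantize` helper is not part of the HeliosDB API, and quantization is lossy, so pick a precision matching your measurement accuracy.

```rust
/// Round values to a fixed number of decimal places before compression.
/// Quantized values share low-order bits, which XOR-based value
/// compression exploits. Lossy: choose `decimals` to match sensor accuracy.
fn quantize(values: &[f64], decimals: u32) -> Vec<f64> {
    let scale = 10f64.powi(decimals as i32);
    values.iter().map(|v| (v * scale).round() / scale).collect()
}

fn main() {
    // Noisy sensor readings collapse to identical values at 2 decimals.
    let raw = vec![21.73812, 21.74091, 21.73677];
    let q = quantize(&raw, 2);
    assert_eq!(q, vec![21.74, 21.74, 21.74]);
    println!("{:?}", q);
}
```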
Improving Compression Ratio
1. Sort by timestamp
```rust
// Unsorted data compresses poorly
let mut data: Vec<(u64, f64)> = load_data();

// Sort for better compression
data.sort_by_key(|(ts, _)| *ts);

let timestamps: Vec<u64> = data.iter().map(|(t, _)| *t).collect();
let values: Vec<f64> = data.iter().map(|(_, v)| *v).collect();
```
2. Use consistent intervals
```rust
// Irregular intervals: worse compression
// [1000, 1050, 1200, 1201, 5000]

// Regular intervals: better compression
// [1000, 2000, 3000, 4000, 5000]
```
3. Enable all compression types

```rust
let config = BatchCompressionConfig {
    compress_timestamps: true,
    compress_values: true,
    compress_metrics: true, // Enable dictionary
    ..Default::default()
};
```
4. Increase block size

```rust
let config = BatchCompressionConfig {
    block_size: 4096, // Larger blocks = better ratio
    ..Default::default()
};
```
Throughput Optimization
Parallel Processing
```rust
use rayon::prelude::*;

fn parallel_compression(batches: Vec<(Vec<u64>, Vec<f64>)>) -> Vec<Vec<u8>> {
    batches
        .par_iter()
        .map(|(ts, vals)| {
            let compressor = BatchCompressor::default();
            compressor.compress_batch(ts, vals, None).unwrap()
        })
        .collect()
}
```
Batch Size Tuning
```rust
// Too small: overhead dominates
let small_batch = 100; // ~10K ops/sec

// Too large: memory pressure
let large_batch = 1_000_000; // Memory issues

// Optimal: 10K-100K per batch
let optimal_batch = 50_000; // ~500K pts/sec
```
Pipeline Processing
```rust
use tokio::sync::mpsc;

async fn pipeline_compression(
    mut rx: mpsc::Receiver<(Vec<u64>, Vec<f64>)>, // recv() needs &mut self
    tx: mpsc::Sender<Vec<u8>>,
) {
    let compressor = BatchCompressor::default();

    while let Some((ts, vals)) = rx.recv().await {
        let compressed = compressor.compress_batch(&ts, &vals, None).unwrap();
        tx.send(compressed).await.unwrap();
    }
}
```
Memory Optimization
Memory Usage Guidelines
| Batch Size | Memory (Compression) | Memory (Decompression) |
|---|---|---|
| 1K points | ~100 KB | ~50 KB |
| 10K points | ~800 KB | ~400 KB |
| 100K points | ~8 MB | ~4 MB |
| 1M points | ~80 MB | ~40 MB |
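The table above works out to roughly 80 bytes of working memory per point during compression and 40 during decompression, which makes budget planning simple arithmetic. The helpers below are illustrative only (not a HeliosDB API); treat the per-point figures as ballpark numbers and measure your own workload.

```rust
/// Ballpark working-memory estimates derived from the guideline table
/// (~80 bytes/point compressing, ~40 bytes/point decompressing).
fn estimated_compress_bytes(points: usize) -> usize {
    points * 80
}

/// Largest batch size that fits a given compression-memory budget.
fn max_batch_for_budget(budget_bytes: usize) -> usize {
    budget_bytes / 80
}

fn main() {
    // 10K points ≈ 800 KB, matching the table row.
    assert_eq!(estimated_compress_bytes(10_000), 800_000);
    // An 8 MB budget allows batches of ~100K points.
    assert_eq!(max_batch_for_budget(8_000_000), 100_000);
}
```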
Reducing Memory Usage
1. Process in smaller batches
```rust
const BATCH_SIZE: usize = 10_000;

// `data` is a Vec<(u64, f64)>; split each chunk into column vectors
// before compressing.
for chunk in data.chunks(BATCH_SIZE) {
    let ts: Vec<u64> = chunk.iter().map(|(t, _)| *t).collect();
    let vals: Vec<f64> = chunk.iter().map(|(_, v)| *v).collect();
    let compressed = compressor.compress_batch(&ts, &vals, None)?;
    write_to_storage(&compressed).await?;
}
```
2. Stream processing
```rust
// Don't load all data into memory
async fn stream_compress(
    mut reader: impl AsyncRead + Unpin,
    mut writer: impl AsyncWrite + Unpin,
) -> Result<(), Box<dyn std::error::Error>> {
    let compressor = BatchCompressor::default();
    let mut batch_ts = Vec::with_capacity(10_000);
    let mut batch_vals = Vec::with_capacity(10_000);

    while let Some(point) = read_point(&mut reader).await? {
        batch_ts.push(point.timestamp);
        batch_vals.push(point.value);

        if batch_ts.len() >= 10_000 {
            let compressed = compressor.compress_batch(&batch_ts, &batch_vals, None)?;
            write_chunk(&mut writer, &compressed).await?;
            batch_ts.clear();
            batch_vals.clear();
        }
    }

    // Flush any trailing partial batch
    if !batch_ts.is_empty() {
        let compressed = compressor.compress_batch(&batch_ts, &batch_vals, None)?;
        write_chunk(&mut writer, &compressed).await?;
    }

    Ok(())
}
```
3. Reuse compressor instance
```rust
// Good: reuse compressor
let compressor = BatchCompressor::default();
for batch in batches {
    let compressed = compressor.compress_batch(&batch.ts, &batch.vals, None)?;
}

// Bad: create new compressor each time
for batch in batches {
    let compressor = BatchCompressor::default(); // Allocation overhead
    let compressed = compressor.compress_batch(&batch.ts, &batch.vals, None)?;
}
```
Latency Optimization
Latency Targets
| Workload | Compression | Decompression |
|---|---|---|
| Real-time | <1ms/1K pts | <0.5ms/1K pts |
| Interactive | <5ms/1K pts | <3ms/1K pts |
| Batch | <10ms/1K pts | <5ms/1K pts |
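Each per-1K-point latency budget above implies a minimum sustained throughput (1,000 points divided by the budget). A quick sketch of that conversion; the `min_throughput` helper is illustrative, not part of the HeliosDB API:

```rust
/// Convert a per-1K-point latency budget (in ms) into the minimum
/// sustained throughput it implies, in points per second.
fn min_throughput(latency_ms_per_1k: f64) -> f64 {
    1_000.0 / (latency_ms_per_1k / 1_000.0)
}

fn main() {
    // The 5 ms/1K interactive compression target implies >= 200K pts/sec.
    assert_eq!(min_throughput(5.0).round() as u64, 200_000);
    // The 1 ms/1K real-time target implies >= 1M pts/sec.
    assert_eq!(min_throughput(1.0).round() as u64, 1_000_000);
}
```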
Reducing Latency
1. Smaller block sizes
```rust
let config = BatchCompressionConfig {
    block_size: 256, // Faster processing
    ..Default::default()
};
```
2. Disable unnecessary compression

```rust
// If timestamps are already compact
let config = BatchCompressionConfig {
    compress_timestamps: false, // Skip timestamp compression
    compress_values: true,
    compress_metrics: false, // Skip dictionary
    ..Default::default()
};
```
3. Pre-allocate buffers

```rust
// Pre-allocate for known batch sizes
let mut timestamps = Vec::with_capacity(10_000);
let mut values = Vec::with_capacity(10_000);
```
Benchmarking
Running Benchmarks
```bash
# Run all compression benchmarks
cargo bench --package heliosdb-storage --bench compression_performance

# Run specific benchmark
cargo bench --package heliosdb-storage --bench compression_performance -- batch_compression

# Profile with flamegraph
cargo bench --package heliosdb-storage --bench compression_performance -- --profile-time=30
```
Benchmark Interpretation
```text
batch_compression_throughput/compress/1000
    time:   [2.1 ms 2.2 ms 2.3 ms]
    thrpt:  [434.78 K elem/s 454.55 K elem/s 476.19 K elem/s]
```
- time: Compression time for 1000 points
- thrpt: Throughput in points per second
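The throughput line is derived directly from the measured time (element count divided by seconds), so the two can be cross-checked by hand:

```rust
fn main() {
    // thrpt = elements / time: 1000 points in 2.2 ms ≈ 454.55 K elem/s,
    // matching the middle throughput estimate in the sample output above.
    let thrpt = 1000.0_f64 / 2.2e-3;
    assert!((thrpt - 454_545.45).abs() < 1.0);
    println!("{:.2} elem/s", thrpt);
}
```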
Custom Benchmarks
```rust
use criterion::{criterion_group, criterion_main, Criterion, Throughput};

fn compression_benchmark(c: &mut Criterion) {
    let compressor = BatchCompressor::default();
    let (timestamps, values) = generate_test_data(10_000);

    let mut group = c.benchmark_group("compression");
    group.throughput(Throughput::Elements(10_000));

    group.bench_function("compress_10k", |b| {
        b.iter(|| compressor.compress_batch(&timestamps, &values, None).unwrap())
    });

    group.finish();
}

criterion_group!(benches, compression_benchmark);
criterion_main!(benches);
```
Production Recommendations
General Settings
```rust
let production_config = BatchCompressionConfig {
    block_size: 1024,          // Good balance
    compress_timestamps: true,
    compress_values: true,
    compress_metrics: true,
    min_ratio: 1.5,            // Only compress if worthwhile
};
```
Workload-Specific Settings
| Workload | Block Size | Batch Size | Parallelism |
|---|---|---|---|
| IoT (high volume) | 2048 | 50K | 4-8 threads |
| Metrics (regular) | 1024 | 10K | 2-4 threads |
| Financial (low latency) | 256 | 1K | 1 thread |
| Archival (offline) | 4096 | 100K | All cores |
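The table above can be encoded as a small preset selector so workload choices live in one place. This is illustrative glue code: the `Workload` enum and the `(block_size, batch_size, threads)` tuple are stand-ins for wiring the values into a real `BatchCompressionConfig`, not part of the HeliosDB API.

```rust
/// Workload presets from the recommendations table.
#[derive(Clone, Copy)]
enum Workload {
    IotHighVolume,
    MetricsRegular,
    FinancialLowLatency,
    ArchivalOffline,
}

/// Returns (block_size, batch_size, threads) for a workload,
/// using the upper end of each thread range.
fn preset(w: Workload) -> (usize, usize, usize) {
    match w {
        Workload::IotHighVolume => (2048, 50_000, 8),
        Workload::MetricsRegular => (1024, 10_000, 4),
        Workload::FinancialLowLatency => (256, 1_000, 1),
        Workload::ArchivalOffline => (4096, 100_000, num_cpus()),
    }
}

/// "All cores" for the archival row, via the standard library.
fn num_cpus() -> usize {
    std::thread::available_parallelism().map(|n| n.get()).unwrap_or(1)
}

fn main() {
    let (block, batch, threads) = preset(Workload::FinancialLowLatency);
    assert_eq!((block, batch, threads), (256, 1_000, 1));
}
```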
Monitoring
```rust
// Log compression statistics periodically
let stats = compressor.stats();

metrics::gauge!("compression.ratio", stats.avg_compression_ratio());
metrics::gauge!("compression.throughput", stats.points_per_second());
metrics::histogram!("compression.latency_ms", stats.avg_latency_ms());

if stats.avg_compression_ratio() < 2.0 {
    warn!("Low compression ratio: {:.2}x", stats.avg_compression_ratio());
}
```
Related Documentation
- README.md - Feature overview
- USER_GUIDE.md - Comprehensive guide
- QUICK_START.md - Getting started
- Technical Details - Implementation reference