Purpose-built storage engine optimized for modern hardware.
Replacing legacy storage with a native solution designed for HeliosDB.
HeliosCore is a native storage engine designed specifically for HeliosDB's unique capabilities: database branching, vector search, and time-travel queries. Unlike generic key-value stores that require workarounds for these features, HeliosCore was built from the ground up with them as first-class primitives.
-- Features powered by HeliosCore's
-- purpose-built storage layer
-- Zero-cost branch creation
CREATE BRANCH feature_auth
FROM main;
-- Native time-travel queries
SELECT * FROM users
AS OF '2026-01-15 09:00:00';
-- Vector search with optimized I/O
SELECT content,
embedding <=> $query AS distance
FROM documents
ORDER BY distance
LIMIT 10;
-- All three features leverage the same
-- underlying storage primitives
HeliosCore bypasses the operating system's page cache entirely using O_DIRECT, giving the storage engine complete control over what data lives in memory. This eliminates double-buffering overhead and delivers predictable, consistent latency regardless of OS memory pressure.
# Latency comparison under load
# (synthetic benchmark, 8-core NVMe)
With OS Page Cache (buffered I/O):
P50 latency: 0.8ms
P99 latency: 12.4ms # spikes from eviction
P999 latency: 45.2ms # tail latency
With HeliosCore O_DIRECT:
P50 latency: 0.6ms
P99 latency: 1.8ms # predictable
P999 latency: 3.1ms # tight tail
Key insight:
Not just a faster median: dramatically
better tail latency, with no surprise
stalls from OS memory management.
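Under the hood, O_DIRECT imposes strict alignment rules: file offsets, buffer addresses, and transfer sizes must all be multiples of the device block size. A minimal Python sketch of the alignment arithmetic a direct-I/O engine must perform (the 4 KB block size and the `direct_io_window` helper are illustrative assumptions, not HeliosCore internals):

```python
BLOCK = 4096  # assumed logical block size; a real engine queries the device

def align_down(n: int, block: int = BLOCK) -> int:
    """Round an offset down to the nearest block boundary."""
    return n - (n % block)

def align_up(n: int, block: int = BLOCK) -> int:
    """Round a length up to the nearest block boundary."""
    return -(-n // block) * block

def direct_io_window(offset: int, length: int):
    """Compute the aligned (offset, length) a direct-I/O read must use to
    cover the requested byte range, plus the slice offset into that buffer."""
    start = align_down(offset)
    end = align_up(offset + length)
    return start, end - start, offset - start

# Reading 100 bytes at offset 5000 needs one aligned 4 KB read:
print(direct_io_window(5000, 100))   # (4096, 4096, 904)
# A range spanning a block boundary needs two blocks:
print(direct_io_window(4000, 200))   # (0, 8192, 4000)
```

The engine reads the aligned window, then slices out the requested bytes; this is the bookkeeping the OS page cache normally does on your behalf.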
HeliosCore is designed for the hardware of today and tomorrow. It takes full advantage of NVMe SSDs, large memory capacities, and multi-core processors — capabilities that legacy storage engines (designed in the HDD era) cannot fully exploit.
# HeliosCore hardware utilization
NVMe SSD Layer
Queue depth: Multi-queue I/O submission
I/O pattern: Aligned, direct, parallel
Read-ahead: Workload-adaptive prefetch
CPU Layer
Threading: Work-stealing thread pool
SIMD: AVX2/AVX-512 for hot paths
Lock strategy: Lock-free where possible
Memory Layer
Buffer pool: Workload-aware eviction
Allocation: Arena-based, zero-copy
Page sizes: Adaptive (4KB - 2MB)
Result:
Scales linearly with hardware.
More cores = more throughput.
More NVMe = more IOPS.
More RAM = larger working set.
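As a toy illustration of multi-queue I/O submission, the sketch below issues independent page reads concurrently via `os.pread`, which takes an explicit offset so threads never contend on a shared file position (POSIX-only; the page size and worker count are illustrative assumptions, not HeliosCore's actual I/O path):

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

PAGE = 4096  # illustrative page size

# Build a small test file of 8 pages, each filled with its page number.
data = b"".join(bytes([i]) * PAGE for i in range(8))
fd, path = tempfile.mkstemp()
os.write(fd, data)

def read_page(page_no: int) -> bytes:
    # os.pread takes an explicit offset, so concurrent reads never
    # fight over a shared file cursor -- each one is independent.
    return os.pread(fd, PAGE, page_no * PAGE)

# Submitting all page reads at once lets the OS service them in
# parallel, keeping an NVMe device's multiple queues busy.
with ThreadPoolExecutor(max_workers=4) as pool:
    pages = list(pool.map(read_page, range(8)))

assert all(pages[i][0] == i for i in range(8))
os.close(fd)
os.remove(path)
```

Python threads are fine here because the work is I/O-bound; a native engine would use io_uring or per-core submission queues instead.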
HeliosCore implements a custom buffer pool with workload-aware eviction. Instead of simple LRU, the buffer pool learns from access patterns and adapts its caching strategy to your specific workload.
# Buffer pool configuration
[storage.buffer_pool]
size = "8GB"
eviction_policy = "adaptive"
# Memory budget breakdown
# (auto-adjusted based on workload)
row_data_ratio = 0.40
index_data_ratio = 0.30
vector_data_ratio = 0.20
query_memory_ratio = 0.10
# Warm-up configuration
[storage.warmup]
enabled = true
strategy = "access_pattern"
# Preloads hot pages on startup
# based on historical access logs
# Total memory limit
[memory]
max_total = "12GB"
oom_action = "evict_and_retry"
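To make "workload-aware eviction" concrete, here is a toy cache that scores pages by a blend of frequency and recency rather than pure LRU. The `AdaptiveCache` class and its 0.5 recency weight are illustrative assumptions, not HeliosCore's published policy:

```python
class AdaptiveCache:
    """Toy buffer pool: evicts the page with the lowest blended
    frequency + recency score. An illustrative sketch only."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.pages = {}       # page_id -> data
        self.hits = {}        # page_id -> access count
        self.last_used = {}   # page_id -> logical timestamp
        self.clock = 0

    def get(self, page_id, loader):
        self.clock += 1
        if page_id in self.pages:
            self.hits[page_id] += 1
        else:
            if len(self.pages) >= self.capacity:
                self._evict()
            self.pages[page_id] = loader(page_id)
            self.hits[page_id] = 1
        self.last_used[page_id] = self.clock
        return self.pages[page_id]

    def _evict(self):
        # Score = access count + weighted recency; the page with the
        # lowest combined score is evicted, so a frequently hit page
        # survives even if it was not the most recently touched.
        victim = min(self.pages,
                     key=lambda p: self.hits[p] + 0.5 * self.last_used[p])
        for d in (self.pages, self.hits, self.last_used):
            del d[victim]

cache = AdaptiveCache(capacity=2)
cache.get("A", lambda p: f"page:{p}")
cache.get("A", lambda p: f"page:{p}")   # "A" is now hot
cache.get("B", lambda p: f"page:{p}")
cache.get("C", lambda p: f"page:{p}")   # evicts "B", keeps the hot page
assert "A" in cache.pages and "B" not in cache.pages
```

Pure LRU would have evicted "A" here; the frequency term is what lets the pool keep a hot page resident through a scan of cold data.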
HeliosCore treats compression as a first-class feature, not an afterthought. Per-column adaptive compression automatically selects the best algorithm for each data type, achieving 70-90% storage reduction without manual tuning.
# Per-column adaptive compression
# (automatically selected by HeliosCore)
Column: user_id (INTEGER)
Algorithm: Delta + Bit-packing
Ratio: 12:1
Column: email (VARCHAR)
Algorithm: FSST (short strings)
Ratio: 4:1
Column: created_at (TIMESTAMP)
Algorithm: Delta-of-delta + ZSTD
Ratio: 20:1
Column: description (TEXT)
Algorithm: ZSTD (level 3)
Ratio: 5:1
Column: embedding (VECTOR(768))
Algorithm: Product Quantization
Ratio: 384:1
Column: status (ENUM)
Algorithm: Dictionary encoding
Ratio: 32:1
# Overall storage reduction: ~80%
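The integer case above can be sketched in a few lines: delta encoding turns sequential IDs into tiny differences, and bit-packing stores each delta in the minimum width. A simplified model (not HeliosCore's actual codec) showing why sequential `user_id` values compress so well:

```python
def delta_encode(values):
    """Delta encoding: keep the first value, then store each value as
    the difference from its predecessor. Sequential IDs collapse to
    a run of tiny, highly packable deltas."""
    return [values[0]] + [b - a for a, b in zip(values, values[1:])]

def bits_needed(deltas):
    # Bit-packing width: just enough bits for the largest delta.
    return max(d.bit_length() for d in deltas) or 1

ids = list(range(1_000_000, 1_000_100))   # 100 sequential user_ids
deltas = delta_encode(ids)

raw_bits = 64 * len(ids)                  # stored naively as 64-bit ints
# One 64-bit base value plus 99 bit-packed deltas:
packed_bits = 64 + bits_needed(deltas[1:]) * (len(ids) - 1)
print(f"compression ratio: {raw_bits / packed_bits:.0f}:1")
```

With purely sequential IDs every delta is 1, so each fits in a single bit; real data with gaps packs a little less tightly, which is why the table above quotes 12:1 rather than the ideal.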
HeliosCore's storage layer natively understands database branching. Copy-on-write at the storage level means branch creation is instantaneous regardless of database size, and branches share unchanged data automatically.
-- Create a branch (instant, any DB size)
CREATE BRANCH feature_auth
FROM main;
-- Switch to branch and make changes
USE BRANCH feature_auth;
ALTER TABLE users
ADD COLUMN role VARCHAR(50);
INSERT INTO users (name, role)
VALUES ('admin', 'superuser');
-- Main branch is completely unaffected
USE BRANCH main;
SELECT * FROM users;
-- No 'role' column here
-- Merge when ready
MERGE BRANCH feature_auth
INTO main;
-- Storage: only modified pages
-- are stored separately.
-- Shared data = zero overhead.
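The storage mechanism behind this can be sketched as a shallow page-table copy plus copy-on-write. The `Branch` class below is an illustrative model, not HeliosCore's on-disk format:

```python
class Branch:
    """Toy copy-on-write branch. Forking copies only the page table
    (a shallow dict copy), never the page contents; a page gets new
    storage only when a branch first writes to it."""

    def __init__(self, pages=None):
        self.pages = dict(pages or {})   # page_id -> page contents

    def fork(self):
        # Branch creation copies the page *table*, not the pages:
        # cost is proportional to the table size, not the data size.
        return Branch(self.pages)

    def write(self, page_id, data):
        # Copy-on-write: only the modified page diverges.
        self.pages[page_id] = data

    def read(self, page_id):
        return self.pages[page_id]

main = Branch({1: "users v1", 2: "orders v1"})
feature = main.fork()                  # instant, regardless of data size
feature.write(1, "users v2 (+role)")   # only page 1 diverges

assert main.read(1) == "users v1"        # main is unaffected
assert feature.read(2) is main.read(2)   # unchanged pages are shared
```

Merging then amounts to pointing `main` at the branch's diverged pages; everything the branch never touched was shared all along.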
HeliosCore provides write-ahead logging with instant crash recovery. Data durability is guaranteed even during unexpected failures — power loss, kernel panics, or hardware failures. Recovery is automatic and fast.
# WAL configuration
[storage.wal]
enabled = true
sync_mode = "fsync"
# Options: fsync, fdatasync, async
# Checkpoint configuration
[storage.checkpoint]
interval = "5m"
wal_size_trigger = "256MB"
# Checkpoint when WAL exceeds 256MB
# or every 5 minutes, whichever first
# Page integrity
[storage.integrity]
page_checksums = true
checksum_algorithm = "crc32c"
# Hardware-accelerated on modern CPUs
# Recovery behavior
[storage.recovery]
mode = "automatic"
max_recovery_time = "30s"
# Typical recovery: < 2 seconds
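The recovery model can be sketched in a few lines: every write is appended to the log before the in-memory state changes, so replaying the log reconstructs all acknowledged writes after a crash. The `WAL` class below is an illustrative in-memory model, not HeliosCore's actual log format:

```python
import json

class WAL:
    """Toy write-ahead log: a record is appended (in a real engine,
    fsync'd) *before* the state change it describes, so replay after
    a crash recovers exactly the acknowledged writes."""

    def __init__(self):
        self.log = []      # stand-in for an fsync'd append-only file
        self.state = {}

    def put(self, key, value):
        record = json.dumps({"op": "put", "key": key, "value": value})
        self.log.append(record)   # durable before the state mutates
        self.state[key] = value

    def recover(self):
        # Crash recovery: rebuild state by replaying the log in order.
        # Replay time is proportional to the log written since the last
        # checkpoint, not to total database size.
        state = {}
        for record in self.log:
            r = json.loads(record)
            if r["op"] == "put":
                state[r["key"]] = r["value"]
        return state

wal = WAL()
wal.put("user:1", "alice")
wal.put("user:1", "alice-admin")
wal.put("user:2", "bob")

wal.state = {}   # simulate losing all in-memory state in a crash
assert wal.recover() == {"user:1": "alice-admin", "user:2": "bob"}
```

Checkpointing (the `wal_size_trigger` above) exists precisely to bound this replay: once pages are checkpointed, the log prefix behind them can be discarded.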
HeliosCore eliminates the compromises that come with building on top of generic key-value stores like RocksDB or LevelDB.
| Characteristic | RocksDB / LevelDB | HeliosCore |
|---|---|---|
| Write amplification | 10-30x (LSM compaction) | 1-3x (page-oriented) |
| Latency consistency | Spikes during compaction | Predictable P99 |
| Compaction stalls | Yes, can block writes | No compaction needed |
| Database branching | Not supported | Native, zero-cost CoW |
| Time-travel queries | Snapshot-based (expensive) | Native MVCC versioning |
| Vector data layout | Generic KV (suboptimal) | Vector-aware I/O paths |
| Space amplification | 1.1-2x (level compaction) | Near 1x with compression |
| Memory management | Block cache + OS page cache | Unified buffer pool (O_DIRECT) |
| Crash recovery | WAL replay + manifest | Instant WAL recovery (<2s) |
| Compression | Per-block, single algorithm | Per-column, adaptive |
- Page-oriented design avoids the 10-30x write amplification of LSM-tree compaction. SSDs last longer and writes are faster.
- No background compaction stalls. P99 latency is tight and consistent, even under heavy write workloads.
- Copy-on-write at the storage level enables zero-cost branch creation. No external tool or workaround needed.
- Per-column algorithm selection. 70-90% storage reduction with zero manual tuning. 384x vector compression.
- Bypasses the OS page cache for predictable memory usage. No double-buffering, no surprise evictions.
- Sub-2-second recovery from any crash. WAL replay is proportional to recent changes, not database size.
HeliosCore powers the storage layer in both HeliosDB Lite and HeliosDB Full editions. Start building with the next-generation storage engine today.