HeliosDB API Reference
Version 7.0 Last Updated: January 4, 2026
Table of Contents
- Introduction
- API Overview
- Authentication
- Storage APIs
- Vector Database APIs
- Graph Database APIs
- Document Store APIs
- Time-Series APIs
- Replication APIs
- Cache APIs
- Transaction APIs
- Protocol Compatibility
- Error Handling
- Best Practices
Introduction
The HeliosDB API provides a comprehensive set of interfaces for interacting with the database system. This reference covers all public APIs organized by functional area. Each API includes:
- Function signatures with parameter and return types
- Usage examples
- Error conditions
- Performance considerations
API Overview
HeliosDB provides multiple API interfaces to accommodate different use cases and integration patterns:
- Rust Native APIs: For embedded use cases and maximum performance (documented below)
- RESTful HTTP APIs: For web services, microservices, and cross-language integration
- SQL Interface: ANSI SQL:2016 compliant with HeliosDB extensions - see SQL_API_REFERENCE.md
- Wire Protocol Compatibility: PostgreSQL, MongoDB, Redis, Cassandra, ClickHouse, Oracle TNS
API Design Principles
- Consistency: All APIs follow consistent naming, error handling, and response patterns
- Performance: Zero-copy operations, async/await, connection pooling
- Security: Authentication, authorization, encryption, input validation
- Observability: Comprehensive logging, metrics, tracing
- Backward Compatibility: Semantic versioning, deprecation warnings
REST API Base URLs
```
Production:  https://api.heliosdb.com
Development: http://localhost:8080
```

API Versioning
All HTTP APIs are versioned in the URL path:
```
/api/v1/...   # Current stable version
/api/v2/...   # Future version (when available)
```

Related HTTP API Documentation
For RESTful HTTP APIs, see these dedicated references:
- Tenant Management API - Multi-tenancy endpoints
- Configuration API - Tenant configuration
- Row-Level Security API - RLS policy management
- Streaming API - Event streaming
- Job Management API - Background jobs
- OpenAPI Specification - Full REST API spec
Authentication
API Key Authentication
For REST API and service-to-service communication:
```bash
curl -H "Authorization: Bearer hdb_sk_abc123xyz..." \
  https://api.heliosdb.com/api/v1/tenants
```

JWT Token Authentication
For user authentication with fine-grained permissions:
```bash
# Obtain token
curl -X POST http://localhost:8080/api/v1/auth/login \
  -H "Content-Type: application/json" \
  -d '{"username": "user", "password": "pass"}'

# Use token
curl -H "Authorization: Bearer eyJhbGc..." \
  http://localhost:8080/api/v1/tables
```

Database Authentication
For PostgreSQL wire protocol connections:
```bash
psql "postgresql://username:password@localhost:5432/heliosdb"
```

Supported Methods:
- Password (MD5, SCRAM-SHA-256)
- Certificate-based (mTLS)
- LDAP/Active Directory
- OAuth 2.0 / OIDC
Import Conventions
```rust
// Storage
use heliosdb_storage::{LsmStorageEngine, StorageConfig};

// Vector
use heliosdb_vector::{VectorIndex, HnswIndex, DistanceMetric};

// Graph
use heliosdb_graph::{GraphEngine, GraphConfig, TraversalMode};

// Document
use heliosdb_document::{DocumentStore, Collection, DocumentId};

// Time-Series
use heliosdb_storage::timeseries::{TimeSeriesEngine, TimeSeriesPoint};

// Replication
use heliosdb_replication::{ReplicaManager, ReplicaConfig};

// Cache
use heliosdb_unified_cache::{UnifiedCacheManager, CacheConfig};
```

Storage APIs
The storage layer provides the foundational LSM-tree based storage engine with advanced features including HCC compression, temporal tables, and cloud tiering.
LsmStorageEngine
The core storage engine using Log-Structured Merge-trees.
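An LSM tree buffers writes in a sorted in-memory table (the memtable) and periodically flushes it to immutable sorted runs; a read consults the memtable first, then the runs from newest to oldest, so the latest write wins. A toy sketch of that read path in plain Rust (illustrative only, not the HeliosDB internals):

```rust
use std::collections::BTreeMap;

/// Toy LSM shape: one mutable memtable plus immutable sorted runs, newest first.
pub struct ToyLsm {
    memtable: BTreeMap<String, String>,
    runs: Vec<BTreeMap<String, String>>,
}

impl ToyLsm {
    pub fn new() -> Self {
        ToyLsm { memtable: BTreeMap::new(), runs: Vec::new() }
    }

    /// Writes always go to the in-memory table.
    pub fn put(&mut self, key: &str, value: &str) {
        self.memtable.insert(key.to_string(), value.to_string());
    }

    /// Simulate flushing the memtable to an immutable sorted run.
    pub fn flush(&mut self) {
        let full = std::mem::take(&mut self.memtable);
        self.runs.insert(0, full);
    }

    /// Newest write wins: check the memtable, then runs newest-to-oldest.
    pub fn get(&self, key: &str) -> Option<&String> {
        self.memtable
            .get(key)
            .or_else(|| self.runs.iter().find_map(|run| run.get(key)))
    }
}
```

A real engine adds a write-ahead log, bloom filters, and background compaction on top of this shape; the sketch only shows why a recent `put` shadows older flushed data.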
Constructor
```rust
pub async fn new(path: impl AsRef<Path>, config: StorageConfig) -> Result<Self>
```

Parameters:
- path: Directory path for storage data
- config: Storage configuration options
Returns: Result<LsmStorageEngine>
Example:
```rust
use heliosdb_storage::{LsmStorageEngine, StorageConfig};

let config = StorageConfig::default();
let engine = LsmStorageEngine::new("/data/helios", config).await?;
```

get
```rust
pub async fn get(&self, key: &Key) -> Result<Option<Value>>
```

Retrieve a value by key from the storage engine.
Parameters:
key: The key to look up
Returns: Result<Option<Value>> - The value if found, None otherwise
Example:
```rust
use heliosdb_common::Key;

let key = Key::from("user:123");
if let Some(value) = engine.get(&key).await? {
    println!("Found value: {:?}", value);
}
```

put
```rust
pub async fn put(&self, key: Key, value: Value) -> Result<()>
```

Insert or update a key-value pair.
Parameters:
- key: The key to store
- value: The value to store
Returns: Result<()>
Example:
```rust
use heliosdb_common::{Key, Value};

let key = Key::from("user:123");
let value = Value::from(b"user data".to_vec());
engine.put(key, value).await?;
```

delete
```rust
pub async fn delete(&self, key: &Key) -> Result<()>
```

Delete a key-value pair.
Parameters:
key: The key to delete
Returns: Result<()>
Example:
```rust
let key = Key::from("user:123");
engine.delete(&key).await?;
```

scan
```rust
pub async fn scan(&self, start: &Key, end: &Key) -> Result<Vec<(Key, Value)>>
```

Scan a range of keys.
Parameters:
- start: Start of the key range (inclusive)
- end: End of the key range (exclusive)
Returns: Result<Vec<(Key, Value)>> - All key-value pairs in range
Example:
```rust
let start = Key::from("user:000");
let end = Key::from("user:999");
let results = engine.scan(&start, &end).await?;
```

HCC Compression
Hybrid Columnar Compression for efficient data storage.
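Columnar layouts compress well because neighboring values in one column tend to be similar. Run-length encoding is the simplest member of the codec family that hybrid columnar schemes draw on; a minimal sketch (illustrative only, not the HccV2 algorithm):

```rust
/// Run-length encode a column of values as (value, run_length) pairs.
pub fn rle_encode(column: &[i64]) -> Vec<(i64, usize)> {
    let mut runs: Vec<(i64, usize)> = Vec::new();
    for &v in column {
        match runs.last_mut() {
            // Extend the current run when the value repeats.
            Some((last, n)) if *last == v => *n += 1,
            // Otherwise start a new run.
            _ => runs.push((v, 1)),
        }
    }
    runs
}

/// Expand (value, run_length) pairs back into the original column.
pub fn rle_decode(runs: &[(i64, usize)]) -> Vec<i64> {
    runs.iter()
        .flat_map(|&(v, n)| std::iter::repeat(v).take(n))
        .collect()
}
```

Seven repeated values collapse to three pairs here; production codecs layer dictionary, delta, and bit-packing encodings the same way, per column.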
HccV2Compressor
```rust
pub struct HccV2Compressor {
    config: HccV2Config,
}

impl HccV2Compressor {
    pub fn new(config: HccV2Config) -> Self

    pub fn compress(&self, data: &[ColumnData]) -> Result<Vec<CompressedColumn>>
}
```

Example:
```rust
use heliosdb_storage::{HccV2Compressor, HccV2Config};

let config = HccV2Config::default();
let compressor = HccV2Compressor::new(config);

let columns = vec![/* column data */];
let compressed = compressor.compress(&columns)?;
```

TimeSeriesEngine
Optimized storage and querying for time-series data.
Constructor
```rust
pub async fn new(path: impl AsRef<Path>, strategy: PartitionStrategy) -> Result<Self>
```

Parameters:
- path: Storage directory
- strategy: Partitioning strategy (Hourly, Daily, Monthly, Yearly)
Returns: Result<TimeSeriesEngine>
Example:
```rust
use heliosdb_storage::timeseries::{TimeSeriesEngine, PartitionStrategy};

let engine = TimeSeriesEngine::new(
    "/data/timeseries",
    PartitionStrategy::Daily,
).await?;
```

write_point
```rust
pub async fn write_point(
    &mut self,
    metric: impl Into<String>,
    value: f64,
    timestamp: Option<u64>,
) -> Result<()>
```

Write a single time-series data point.
Parameters:
- metric: Name of the metric/series
- value: Numerical value
- timestamp: Unix timestamp in milliseconds (None = current time)
Returns: Result<()>
Example:
```rust
// Write current temperature
engine.write_point("sensor.temperature", 23.5, None).await?;

// Write historical data
let timestamp = 1635724800000; // 2021-11-01
engine.write_point("sensor.temperature", 21.3, Some(timestamp)).await?;
```

query_range
```rust
pub async fn query_range(
    &self,
    metric: &str,
    start_time: u64,
    end_time: u64,
) -> Result<Vec<TimeSeriesPoint>>
```

Query data points within a time range.
Parameters:
- metric: Metric name to query
- start_time: Start timestamp (inclusive)
- end_time: End timestamp (exclusive)
Returns: Result<Vec<TimeSeriesPoint>>
Example:
```rust
let start = 1635724800000; // 2021-11-01
let end = 1635811200000;   // 2021-11-02
let points = engine.query_range("sensor.temperature", start, end).await?;

for point in points {
    println!("{}: {}", point.timestamp, point.value);
}
```

set_retention_policy
```rust
pub fn set_retention_policy(&mut self, policy: RetentionPolicy)
```

Configure automatic data retention/expiration.
Parameters:
policy: Retention policy configuration
Example:
```rust
use heliosdb_storage::timeseries::RetentionPolicy;
use std::time::Duration;

// Keep data for 30 days
let policy = RetentionPolicy::new(Duration::from_secs(30 * 24 * 3600));
engine.set_retention_policy(policy);
```

Temporal Tables
Bi-temporal database support for tracking data history.
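A temporal table keeps every version of a row together with the time it became valid, so a query can reconstruct the table's state "as of" any past timestamp. The core lookup, sketched for a single versioned value (illustrative only, not the TemporalEngine internals):

```rust
/// Versions of one row as (valid_from_timestamp, value), sorted ascending by time.
/// Returns the value in effect at `timestamp`: the latest version at or before it.
pub fn as_of<'a>(versions: &'a [(u64, String)], timestamp: u64) -> Option<&'a str> {
    versions
        .iter()
        .rev() // scan newest-first so the first match is the latest applicable version
        .find(|(valid_from, _)| *valid_from <= timestamp)
        .map(|(_, value)| value.as_str())
}
```

With a salary history of `(1000, "70000")` then `(2000, "75000")`, an as-of query at 1500 sees the old salary, at 2500 the new one, and before 1000 nothing; `query_as_of` applies the same rule across every row of a table.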
TemporalEngine
```rust
pub struct TemporalEngine {
    // ...
}

impl TemporalEngine {
    pub fn new(storage_path: impl AsRef<Path>) -> Result<Self>

    pub async fn create_table(&mut self, name: String, columns: Vec<String>) -> Result<()>

    pub async fn insert(&mut self, table: &str, row: TemporalRow) -> Result<()>

    pub async fn query_as_of(
        &self,
        table: &str,
        timestamp: u64,
    ) -> Result<Vec<TemporalRow>>
}
```

Example:
```rust
use heliosdb_storage::temporal::{TemporalEngine, TemporalRow};

let mut engine = TemporalEngine::new("/data/temporal")?;

// Create table
engine.create_table(
    "employees".to_string(),
    vec!["id".to_string(), "name".to_string(), "salary".to_string()],
).await?;

// Insert data
let row = TemporalRow::new(vec![
    ("id", "123"),
    ("name", "Alice"),
    ("salary", "75000"),
]);
engine.insert("employees", row).await?;

// Query historical state
let timestamp = 1635724800000;
let history = engine.query_as_of("employees", timestamp).await?;
```

Vector Database APIs
HeliosDB provides high-performance vector similarity search with multiple indexing algorithms and distance metrics.
VectorIndex Trait
The base trait for all vector index implementations.
```rust
pub trait VectorIndex: Send + Sync {
    async fn insert(&mut self, id: u64, vector: Vec<f32>) -> Result<()>;
    async fn search(&self, query: &[f32], k: usize) -> Result<Vec<(u64, f32)>>;
    async fn remove(&mut self, id: u64) -> Result<bool>;
}
```

HnswIndex
Hierarchical Navigable Small World graph index for fast approximate nearest neighbor search.
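HNSW trades a small amount of recall for large speedups over exact search, so it helps to keep in mind the baseline it approximates: scan every vector and keep the k closest. A plain-Rust sketch of that exact baseline with L2 distance (illustrative only, not the index implementation):

```rust
/// Exact k-nearest-neighbor search by linear scan over (id, vector) pairs.
/// This is the ground truth an ANN index like HNSW approximates.
pub fn knn_exact(vectors: &[(u64, Vec<f32>)], query: &[f32], k: usize) -> Vec<(u64, f32)> {
    let mut scored: Vec<(u64, f32)> = vectors
        .iter()
        .map(|(id, v)| {
            // L2 (Euclidean) distance between the stored vector and the query.
            let dist = v
                .iter()
                .zip(query)
                .map(|(a, b)| (a - b) * (a - b))
                .sum::<f32>()
                .sqrt();
            (*id, dist)
        })
        .collect();
    scored.sort_by(|a, b| a.1.partial_cmp(&b.1).unwrap());
    scored.truncate(k); // keep only the k nearest
    scored
}
```

The linear scan is O(N · d) per query; HNSW's graph layers cut that to roughly logarithmic hops, with `m` and `ef_construction` controlling how closely results match this baseline.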
Constructor
```rust
pub fn new(
    dimensions: usize,
    m: usize,
    ef_construction: usize,
    metric: DistanceMetric,
) -> Self
```

Parameters:
- dimensions: Vector dimensionality
- m: Number of bi-directional links per node (typically 16)
- ef_construction: Size of dynamic candidate list during construction (typically 200)
- metric: Distance metric (L2, Cosine, InnerProduct)
Returns: HnswIndex
Example:
```rust
use heliosdb_vector::{HnswIndex, DistanceMetric};

let index = HnswIndex::new(
    768,                    // OpenAI embeddings dimension
    16,                     // M parameter
    200,                    // ef_construction
    DistanceMetric::Cosine, // Distance metric
);
```

insert
```rust
pub async fn insert(&mut self, id: u64, vector: Vec<f32>) -> Result<()>
```

Insert a vector into the index.
Parameters:
- id: Unique identifier for the vector
- vector: Dense vector of floats
Returns: Result<()>
Example:
```rust
let embedding = vec![0.1, 0.2, 0.3, /* ... 768 dimensions */];
index.insert(12345, embedding).await?;
```

search
```rust
pub async fn search(&self, query: &[f32], k: usize) -> Result<Vec<(u64, f32)>>
```

Search for the k nearest neighbors to a query vector.
Parameters:
- query: Query vector
- k: Number of nearest neighbors to return
Returns: Result<Vec<(u64, f32)>> - Vector of (id, distance) pairs sorted by distance
Example:
```rust
let query = vec![0.15, 0.25, 0.35, /* ... */];
let results = index.search(&query, 10).await?;

for (id, distance) in results {
    println!("ID: {}, Distance: {}", id, distance);
}
```

set_ef
```rust
pub fn set_ef(&mut self, ef: usize)
```

Set the ef parameter for the search-time quality/speed tradeoff.
Parameters:
ef: Size of dynamic candidate list during search (higher = better quality, slower)
Example:
```rust
// Higher quality search
index.set_ef(500);

// Faster search
index.set_ef(50);
```

IvfIndex
Inverted File Index with product quantization for memory-efficient vector search.
Constructor
```rust
pub fn new(
    dimensions: usize,
    num_clusters: usize,
    quantization: QuantizationType,
    metric: DistanceMetric,
) -> Self
```

Parameters:
- dimensions: Vector dimensionality
- num_clusters: Number of clusters/partitions (typically sqrt(N))
- quantization: None, ProductQuantization, or ScalarQuantization
- metric: Distance metric
Returns: IvfIndex
Example:
```rust
use heliosdb_vector::{IvfIndex, QuantizationType, DistanceMetric};

let index = IvfIndex::new(
    768,
    256, // 256 clusters
    QuantizationType::ProductQuantization,
    DistanceMetric::L2,
);
```

train
```rust
pub async fn train(&mut self, training_vectors: &[Vec<f32>]) -> Result<()>
```

Train the index using sample vectors (required before insertion).
Parameters:
training_vectors: Sample vectors for clustering
Returns: Result<()>
Example:
```rust
let training_data = vec![
    vec![0.1, 0.2, /* ... */],
    vec![0.3, 0.4, /* ... */],
    // ... 10,000+ vectors recommended
];
index.train(&training_data).await?;
```

HybridSearchEngine
Combines vector search with metadata filtering and text search.
Constructor
```rust
pub fn new(config: SearchConfig) -> Result<Self>
```

Example:
```rust
use heliosdb_vector::hybrid::{HybridSearchEngine, HybridQuery, FilterOp};

let engine = HybridSearchEngine::new(config)?;

// Hybrid query with filters
let query = HybridQuery {
    vector: Some(vec![0.1, 0.2, /* ... */]),
    text: Some("machine learning".to_string()),
    filters: vec![
        FilterOp::Eq {
            field: "category".to_string(),
            value: "AI".into(),
        },
    ],
    limit: 10,
};

let results = engine.search(query).await?;
```

Distance Metrics
```rust
pub enum DistanceMetric {
    L2,           // Euclidean distance
    Cosine,       // Cosine similarity
    InnerProduct, // Dot product
    Manhattan,    // L1 distance
    Hamming,      // Hamming distance (binary vectors)
    Jaccard,      // Jaccard similarity (sets)
}
```

Example:
```rust
use heliosdb_vector::distance::{
    euclidean_distance, cosine_distance, dot_product,
};

let v1 = vec![1.0, 2.0, 3.0];
let v2 = vec![4.0, 5.0, 6.0];

let l2_dist = euclidean_distance(&v1, &v2);
let cos_dist = cosine_distance(&v1, &v2);
let dot = dot_product(&v1, &v2);
```

Graph Database APIs
HeliosDB provides comprehensive graph querying capabilities including traversal, pathfinding, and pattern matching.
GraphEngine
The main graph database engine.
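The traversal methods below are depth-bounded graph walks. Breadth-first order, which `TraversalMode::BreadthFirst` exposes, can be sketched over a plain adjacency list (illustrative only; the engine adds cycle detection, iteration limits, and caching on top):

```rust
use std::collections::{HashMap, HashSet, VecDeque};

type NodeId = u64;

/// Breadth-first traversal from `start`, visiting nodes level by level
/// and never expanding past `max_depth` hops.
pub fn bfs(adj: &HashMap<NodeId, Vec<NodeId>>, start: NodeId, max_depth: usize) -> Vec<NodeId> {
    let mut visited: HashSet<NodeId> = HashSet::from([start]);
    let mut order = vec![start];
    let mut frontier: VecDeque<(NodeId, usize)> = VecDeque::from([(start, 0)]);

    while let Some((node, depth)) = frontier.pop_front() {
        if depth == max_depth {
            continue; // do not expand beyond the depth bound
        }
        for &next in adj.get(&node).into_iter().flatten() {
            if visited.insert(next) {
                order.push(next); // first visit: record and enqueue
                frontier.push_back((next, depth + 1));
            }
        }
    }
    order
}
```

On the diamond graph 1→{2,3}, 2→4, 3→4, a depth-1 walk returns only the direct neighbors, while depth 2 reaches node 4 exactly once despite its two incoming edges.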
Constructor
```rust
pub async fn new(config: GraphConfig) -> Result<Self>
```

Parameters:
config: Graph configuration
Returns: Result<GraphEngine>
Example:
```rust
use heliosdb_graph::{GraphEngine, GraphConfig};

let config = GraphConfig {
    max_depth: 100,
    max_paths: 1000,
    enable_cycle_detection: true,
    max_iterations: 10000,
    cache_size: 10000,
    enable_optimization: true,
};

let engine = GraphEngine::new(config).await?;
```

register_graph
```rust
pub async fn register_graph(&self, name: String) -> Result<()>
```

Create a new named graph.
Parameters:
name: Unique graph identifier
Returns: Result<()>
Example:
```rust
engine.register_graph("social_network".to_string()).await?;
```

add_node
```rust
pub async fn add_node(&self, graph: &str, node: Node) -> Result<()>
```

Add a node to the graph.
Parameters:
- graph: Graph name
- node: Node with id, label, and properties
Returns: Result<()>
Example:
```rust
use heliosdb_graph::Node;
use std::collections::HashMap;

let mut properties = HashMap::new();
properties.insert("name".to_string(), json!("Alice"));
properties.insert("age".to_string(), json!(30));

let node = Node {
    id: 1,
    label: "Person".to_string(),
    properties,
};

engine.add_node("social_network", node).await?;
```

add_edge
```rust
pub async fn add_edge(&self, graph: &str, edge: Edge) -> Result<()>
```

Add an edge to the graph.
Parameters:
- graph: Graph name
- edge: Edge with id, source, target, label, weight, and properties
Returns: Result<()>
Example:
```rust
use heliosdb_graph::Edge;

let edge = Edge {
    id: 1,
    source: 1,
    target: 2,
    label: "KNOWS".to_string(),
    weight: 1.0,
    properties: HashMap::new(),
};

engine.add_edge("social_network", edge).await?;
```

traverse
```rust
pub async fn traverse(
    &self,
    start: NodeId,
    mode: TraversalMode,
    max_depth: usize,
) -> Result<Vec<NodeId>>
```

Traverse the graph from a starting node.
Parameters:
- start: Starting node ID
- mode: BreadthFirst or DepthFirst
- max_depth: Maximum traversal depth
Returns: Result<Vec<NodeId>> - Visited node IDs in traversal order
Example:
```rust
use heliosdb_graph::TraversalMode;

// Breadth-first traversal
let nodes = engine.traverse(1, TraversalMode::BreadthFirst, 5).await?;

// Depth-first traversal
let nodes = engine.traverse(1, TraversalMode::DepthFirst, 10).await?;
```

shortest_path
```rust
pub async fn shortest_path(
    &self,
    graph: &str,
    source: NodeId,
    target: NodeId,
) -> Result<Option<Path>>
```

Find the shortest path between two nodes using Dijkstra's algorithm.
Parameters:
- graph: Graph name
- source: Source node ID
- target: Target node ID
Returns: Result<Option<Path>> - Path with nodes, edges, and total cost
Example:
```rust
if let Some(path) = engine.shortest_path("social_network", 1, 100).await? {
    println!("Path length: {}", path.len());
    println!("Total cost: {}", path.cost);
    println!("Nodes: {:?}", path.nodes);
}
```

all_paths
```rust
pub async fn all_paths(
    &self,
    graph: &str,
    source: NodeId,
    target: NodeId,
    max_length: usize,
) -> Result<Vec<Path>>
```

Find all paths between two nodes up to a maximum length.
Parameters:
- graph: Graph name
- source: Source node ID
- target: Target node ID
- max_length: Maximum path length
Returns: Result<Vec<Path>>
Example:
```rust
let paths = engine.all_paths("social_network", 1, 100, 5).await?;
println!("Found {} paths", paths.len());
```

detect_cycles
```rust
pub async fn detect_cycles(&self, graph: &str) -> Result<Vec<Vec<NodeId>>>
```

Detect all cycles in the graph.
Parameters:
graph: Graph name
Returns: Result<Vec<Vec<NodeId>>> - List of cycles (each cycle is a list of node IDs)
Example:
```rust
let cycles = engine.detect_cycles("social_network").await?;
for cycle in cycles {
    println!("Cycle: {:?}", cycle);
}
```

connected_components
```rust
pub async fn connected_components(&self, graph: &str) -> Result<Vec<Vec<NodeId>>>
```

Find strongly connected components.
Parameters:
graph: Graph name
Returns: Result<Vec<Vec<NodeId>>> - List of components
Example:
```rust
let components = engine.connected_components("social_network").await?;
println!("Found {} components", components.len());
```

Document Store APIs
JSON document storage with schema validation, indexing, and MongoDB-compatible queries.
DocumentStore
High-level API for document operations.
Constructor
```rust
pub fn new(path: impl AsRef<Path>) -> Result<Self>
```

Parameters:
path: Storage directory
Returns: Result<DocumentStore>
Example:
```rust
use heliosdb_document::DocumentStore;

let store = DocumentStore::new("/data/documents")?;
```

insert
```rust
pub fn insert(
    &self,
    collection: &Collection,
    id: &DocumentId,
    data: serde_json::Value,
) -> Result<Document>
```

Insert a document into a collection.
Parameters:
- collection: Collection name
- id: Document ID
- data: JSON document data
Returns: Result<Document>
Example:
```rust
use heliosdb_document::{Collection, DocumentId};
use serde_json::json;

let collection = Collection::new("users");
let id = DocumentId::new("user123");
let data = json!({
    "name": "Alice",
    "email": "alice@example.com",
    "age": 30,
    "tags": ["active", "premium"]
});

let doc = store.insert(&collection, &id, data)?;
```

get
```rust
pub fn get(&self, collection: &Collection, id: &DocumentId) -> Result<Option<Document>>
```

Retrieve a document by ID.
Parameters:
- collection: Collection name
- id: Document ID
Returns: Result<Option<Document>>
Example:
```rust
let collection = Collection::new("users");
let id = DocumentId::new("user123");

if let Some(doc) = store.get(&collection, &id)? {
    println!("Document: {:?}", doc.data);
}
```

update
```rust
pub fn update(
    &self,
    collection: &Collection,
    id: &DocumentId,
    update: serde_json::Value,
) -> Result<Option<Document>>
```

Update a document.
Parameters:
- collection: Collection name
- id: Document ID
- update: Updated fields
Returns: Result<Option<Document>>
Example:
```rust
let update = json!({
    "email": "alice.new@example.com",
    "age": 31
});

if let Some(doc) = store.update(&collection, &id, update)? {
    println!("Updated: {:?}", doc.data);
}
```

delete
```rust
pub fn delete(&self, collection: &Collection, id: &DocumentId) -> Result<bool>
```

Delete a document.
Parameters:
- collection: Collection name
- id: Document ID
Returns: Result<bool> - true if deleted, false if not found
Example:
```rust
let deleted = store.delete(&collection, &id)?;
if deleted {
    println!("Document deleted");
}
```

find
```rust
pub fn find(&self, collection: &Collection, filter: Filter) -> Result<Vec<Document>>
```

Query documents with a filter.
Parameters:
- collection: Collection name
- filter: Query filter
Returns: Result<Vec<Document>>
Example:
```rust
use heliosdb_document::{Filter, FilterOp};

// Find all users older than 25
let filter = Filter {
    op: FilterOp::Gt {
        field: "age".to_string(),
        value: json!(25),
    },
};

let docs = store.find(&collection, filter)?;
println!("Found {} documents", docs.len());
```

aggregate
```rust
pub fn aggregate(
    &self,
    collection: &Collection,
    pipeline: Vec<AggregationStage>,
) -> Result<Vec<serde_json::Value>>
```

Execute an aggregation pipeline.
Parameters:
- collection: Collection name
- pipeline: Aggregation stages
Returns: Result<Vec<serde_json::Value>>
Example:
```rust
use heliosdb_document::AggregationStage;

let pipeline = vec![
    AggregationStage::Match {
        filter: Filter {
            op: FilterOp::Gte {
                field: "age".to_string(),
                value: json!(18),
            },
        },
    },
    AggregationStage::Group {
        by: "$age".to_string(),
        accumulator: json!({ "count": { "$sum": 1 } }),
    },
];

let results = store.aggregate(&collection, pipeline)?;
```

register_schema
```rust
pub fn register_schema(
    &self,
    collection: &str,
    schema: serde_json::Value,
) -> Result<()>
```

Register a JSON Schema for validation.
Parameters:
- collection: Collection name
- schema: JSON Schema definition
Returns: Result<()>
Example:
```rust
use heliosdb_document::{SchemaBuilder, PropertyBuilder};

let schema = SchemaBuilder::new()
    .property("name", PropertyBuilder::string()
        .min_length(1)
        .max_length(100)
        .build())
    .property("email", PropertyBuilder::string()
        .pattern(r"^[^\s@]+@[^\s@]+\.[^\s@]+$")
        .build())
    .property("age", PropertyBuilder::integer()
        .minimum(0)
        .maximum(150)
        .build())
    .required(vec!["name", "email"])
    .build();

store.register_schema("users", schema)?;
```

watch
```rust
pub fn watch(&self, collection: Collection) -> Result<ChangeStream>
```

Watch a collection for real-time changes.
Parameters:
collection: Collection to watch
Returns: Result<ChangeStream>
Example:
```rust
let collection = Collection::new("users");
let mut stream = store.watch(collection)?;

tokio::spawn(async move {
    while let Some(event) = stream.next().await {
        match event {
            Ok(change) => println!("Change: {:?}", change.event_type),
            Err(e) => eprintln!("Error: {}", e),
        }
    }
});
```

Time-Series APIs
Specialized APIs for time-series data with retention policies, downsampling, and forecasting.
TimeSeriesQueryEngine
Advanced time-series query engine with windowing and aggregations.
```rust
pub struct TimeSeriesQueryEngine {
    // ...
}

impl TimeSeriesQueryEngine {
    pub fn new(storage: Arc<TimeSeriesEngine>) -> Self

    pub async fn query_with_window(
        &self,
        metric: &str,
        start: u64,
        end: u64,
        window: Window,
    ) -> Result<Vec<WindowedResult>>
}
```

Example:
```rust
use heliosdb_storage::timeseries::{
    TimeSeriesQueryEngine, Window, WindowType,
};
use std::time::Duration;

let query_engine = TimeSeriesQueryEngine::new(engine_arc);

// Query with 5-minute tumbling windows
let window = Window {
    window_type: WindowType::Tumbling,
    size: Duration::from_secs(300),
    aggregations: vec!["avg", "min", "max"],
};

let results = query_engine.query_with_window(
    "cpu.usage",
    start_time,
    end_time,
    window,
).await?;

for result in results {
    println!("Window: {} - Avg: {}", result.window_start, result.avg);
}
```

DownsamplingEngine
Reduce data granularity while preserving trends.
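Downsampling replaces each fixed-size bucket of raw points with a single aggregate, keeping the trend at a fraction of the storage. The average-per-bucket case can be sketched in a few lines (illustrative only, not the DownsamplingEngine algorithm, which works on timestamped points and configurable aggregation functions):

```rust
/// Collapse consecutive buckets of `bucket_size` values into their means.
/// A trailing partial bucket is averaged over however many values it holds.
pub fn downsample_avg(values: &[f64], bucket_size: usize) -> Vec<f64> {
    values
        .chunks(bucket_size)
        .map(|bucket| bucket.iter().sum::<f64>() / bucket.len() as f64)
        .collect()
}
```

Six raw points with a bucket size of 2 become three averaged points; the same idea with hour-sized buckets is what turns a million raw samples into a compact trend line.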
```rust
pub struct DownsamplingEngine {
    // ...
}

impl DownsamplingEngine {
    pub fn new(config: DownsamplingConfig) -> Self

    pub async fn downsample(
        &self,
        points: Vec<TimeSeriesPoint>,
        target_count: usize,
    ) -> Result<Vec<TimeSeriesPoint>>
}
```

Example:
```rust
use heliosdb_storage::timeseries::{
    DownsamplingEngine, DownsamplingConfig, AggregationFunction,
};
use std::time::Duration;

let config = DownsamplingConfig {
    interval: Duration::from_secs(3600), // 1 hour
    function: AggregationFunction::Avg,
};

let downsampler = DownsamplingEngine::new(config);

// Reduce 1 million points to 1000 points
let downsampled = downsampler.downsample(points, 1000).await?;
```

TagIndex
Multi-dimensional indexing for time-series tags.
```rust
pub struct TagIndex {
    // ...
}

impl TagIndex {
    pub fn new() -> Self

    pub async fn insert_series(
        &mut self,
        series_id: u64,
        tags: HashMap<String, String>,
    ) -> Result<()>

    pub async fn query(&self, query: TagQuery) -> Result<Vec<u64>>
}
```

Example:
```rust
use heliosdb_storage::timeseries::{TagIndex, TagQuery};

let mut index = TagIndex::new();

// Insert series with tags
let mut tags = HashMap::new();
tags.insert("host".to_string(), "server01".to_string());
tags.insert("region".to_string(), "us-west".to_string());
tags.insert("env".to_string(), "production".to_string());

index.insert_series(123, tags).await?;

// Query by tags
let query = TagQuery::and(vec![
    TagQuery::equals("host", "server01"),
    TagQuery::equals("region", "us-west"),
]);

let series_ids = index.query(query).await?;
```

Replication APIs
Read replica support with automatic failover and load balancing.
ReplicaManager
Manages read replicas and distributes read traffic.
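Round-robin is the simplest of the load-balancing strategies exposed further below: hand out replica addresses in strict rotation so each gets an equal share of reads. A thread-safe sketch using an atomic counter (illustrative only, not the ReplicaManager implementation, which also tracks health and lag):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

/// Rotates through replica addresses; safe to call from many threads at once.
pub struct RoundRobin {
    replicas: Vec<String>,
    next: AtomicUsize,
}

impl RoundRobin {
    pub fn new(replicas: Vec<String>) -> Self {
        assert!(!replicas.is_empty(), "need at least one replica");
        RoundRobin { replicas, next: AtomicUsize::new(0) }
    }

    /// Each call claims the next slot in rotation; the modulo wraps around.
    pub fn pick(&self) -> &str {
        let i = self.next.fetch_add(1, Ordering::Relaxed);
        &self.replicas[i % self.replicas.len()]
    }
}
```

Least-connections and latency-based strategies replace the counter with live per-replica state but keep the same "pick one address per read" interface.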
Constructor
```rust
pub async fn new(config: ReplicaConfig) -> Result<Self>
```

Parameters:
config: Replica configuration
Returns: Result<ReplicaManager>
Example:
```rust
use heliosdb_replication::{ReplicaManager, ReplicaConfig, ReplicationMode};
use std::time::Duration;

let config = ReplicaConfig {
    primary_address: "127.0.0.1:5432".to_string(),
    replica_addresses: vec![
        "127.0.0.1:5433".to_string(),
        "127.0.0.1:5434".to_string(),
    ],
    replication_mode: ReplicationMode::Async,
    lag_threshold_ms: 1000,
    health_check_interval: Duration::from_secs(5),
    enable_failover: true,
    session_affinity: false,
};

let manager = ReplicaManager::new(config).await?;
```

get_replica_for_read
```rust
pub async fn get_replica_for_read(
    &self,
    strategy: LoadBalancingStrategy,
) -> Result<String>
```

Get a replica address for read operations.
Parameters:
strategy: Load balancing strategy
Returns: Result<String> - Replica address
Example:
```rust
use heliosdb_replication::LoadBalancingStrategy;

// Round-robin
let replica = manager.get_replica_for_read(
    LoadBalancingStrategy::RoundRobin,
).await?;

// Least connections
let replica = manager.get_replica_for_read(
    LoadBalancingStrategy::LeastConnections,
).await?;

// Lowest latency
let replica = manager.get_replica_for_read(
    LoadBalancingStrategy::LatencyBased,
).await?;
```

get_replication_stats
```rust
pub async fn get_replication_stats(&self) -> Result<Vec<ReplicationStats>>
```

Get replication lag and health statistics.
Returns: Result<Vec<ReplicationStats>>
Example:
```rust
let stats = manager.get_replication_stats().await?;

for stat in stats {
    println!("Replica: {}", stat.replica_address);
    println!("  Lag: {}ms", stat.lag_ms);
    println!("  Status: {:?}", stat.status);
    println!("  Bytes replicated: {}", stat.bytes_replicated);
}
```

WalSender
PostgreSQL-compatible WAL streaming for physical replication.
```rust
pub struct WalSender {
    // ...
}

impl WalSender {
    pub fn new(config: WalSenderConfig) -> Self

    pub async fn create_replication_slot(
        &self,
        slot_name: String,
        slot_type: SlotType,
        output_plugin: Option<String>,
        database: Option<String>,
    ) -> Result<()>

    pub async fn start_replication(
        &self,
        slot_name: String,
        start_lsn: u64,
    ) -> Result<()>
}
```

Example:
```rust
use heliosdb_replication::wal_sender::{WalSender, WalSenderConfig, SlotType};

let sender = WalSender::new(WalSenderConfig::default());

// Create physical replication slot
sender.create_replication_slot(
    "replica_slot".to_string(),
    SlotType::Physical,
    None,
    None,
).await?;

// Start streaming from LSN
sender.start_replication("replica_slot".to_string(), 0).await?;
```

Cache APIs
Unified caching system with ML-based eviction and tiered storage.
UnifiedCacheManager
Comprehensive caching with multiple eviction strategies and compression.
Constructor
```rust
pub fn new(config: CacheConfig) -> Self
```

Parameters:
config: Cache configuration
Returns: UnifiedCacheManager
Example:
```rust
use heliosdb_unified_cache::{
    UnifiedCacheManager, CacheConfig, EvictionPolicyType, CompressionType,
};

let config = CacheConfig {
    max_size: 1024 * 1024 * 1024, // 1GB
    eviction_policy: EvictionPolicyType::Hybrid,
    enable_ml: true,
    enable_compression: true,
    compression_type: CompressionType::Zstd,
    compression_threshold: 1024,
    enable_tiered: true,
    l1_size: 256 * 1024 * 1024,
    l2_size: Some(1024 * 1024 * 1024),
    ..Default::default()
};

let cache = UnifiedCacheManager::new(config);
```

insert
```rust
pub async fn insert(
    &self,
    key: CacheKey,
    value: Vec<u8>,
    ttl: Option<Duration>,
) -> Result<()>
```

Insert a value into the cache.
Parameters:
- key: Cache key
- value: Value to cache (will be compressed if enabled)
- ttl: Optional time-to-live
Returns: Result<()>
Example:
```rust
use heliosdb_unified_cache::CacheKey;
use std::time::Duration;

let key = CacheKey::new("user:123");
let value = vec![1, 2, 3, 4, 5];

// Insert with 1-hour TTL
cache.insert(key, value, Some(Duration::from_secs(3600))).await?;
```

get
```rust
pub async fn get(&self, key: &CacheKey) -> Result<Option<Vec<u8>>>
```

Retrieve a value from the cache.
Parameters:
key: Cache key
Returns: Result<Option<Vec<u8>>> - Decompressed value if found
Example:
let key = CacheKey::new("user:123");
if let Some(value) = cache.get(&key).await? { println!("Cache hit! Value: {:?}", value);} else { println!("Cache miss");}invalidate
```rust
pub async fn invalidate(&self, key: &CacheKey) -> Result<bool>
```

Remove a value from the cache.
Parameters:
key: Cache key
Returns: Result<bool> - true if removed
Example:
```rust
let removed = cache.invalidate(&CacheKey::new("user:123")).await?;
```

get_stats
```rust
pub fn get_stats(&self) -> CacheStats
```

Get cache performance statistics.
Returns: CacheStats
Example:
```rust
let stats = cache.get_stats();

println!("Hit rate: {:.2}%", stats.hit_rate() * 100.0);
println!("Hits: {}", stats.hits);
println!("Misses: {}", stats.misses);
println!("Evictions: {}", stats.evictions);
println!("Current size: {} bytes", stats.current_size);
println!("Entries: {}", stats.current_entries);
```

Transaction APIs
ACID transactions with multiple isolation levels.
TransactionParticipant
Local transaction support with 2PC coordination.
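In two-phase commit, the coordinator first asks every participant to prepare (durably promise it can commit), and issues the global commit only if all of them vote yes; a single no vote aborts everyone. The decision rule itself is tiny, sketched here in isolation (illustrative only, not HeliosDB's coordinator):

```rust
#[derive(Debug, PartialEq)]
pub enum Outcome {
    Commit,
    Abort,
}

/// Phase 1 collects one prepare vote per participant;
/// phase 2's global decision is Commit only if every vote was yes.
pub fn decide(prepare_votes: &[bool]) -> Outcome {
    if prepare_votes.iter().all(|&voted_yes| voted_yes) {
        Outcome::Commit
    } else {
        Outcome::Abort // any participant that cannot prepare aborts the whole transaction
    }
}
```

The hard parts of a real implementation are durability and recovery (a prepared participant must survive a crash still able to commit), not the vote count; the XaParticipant methods below map one-to-one onto these phases.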
```rust
pub struct TransactionParticipant {
    // ...
}

impl TransactionParticipant {
    pub fn new(storage: Arc<LsmStorageEngine>) -> Self

    pub async fn begin_transaction(
        &self,
        isolation_level: IsolationLevel,
    ) -> Result<TransactionId>

    pub async fn read(
        &self,
        txn_id: TransactionId,
        key: &Key,
    ) -> Result<Option<Value>>

    pub async fn write(
        &self,
        txn_id: TransactionId,
        key: Key,
        value: Value,
    ) -> Result<()>

    pub async fn commit(&self, txn_id: TransactionId) -> Result<()>

    pub async fn rollback(&self, txn_id: TransactionId) -> Result<()>
}
```

Example:
```rust
use heliosdb_storage::{TransactionParticipant, IsolationLevel};

let participant = TransactionParticipant::new(storage_arc);

// Begin transaction
let txn_id = participant.begin_transaction(
    IsolationLevel::Serializable,
).await?;

// Perform operations
let key = Key::from("account:123");
let current = participant.read(txn_id, &key).await?;

let new_value = Value::from(b"updated".to_vec());
participant.write(txn_id, key, new_value).await?;

// Commit or rollback
match participant.commit(txn_id).await {
    Ok(_) => println!("Transaction committed"),
    Err(e) => {
        participant.rollback(txn_id).await?;
        eprintln!("Transaction failed: {}", e);
    }
}
```

XaParticipant
Distributed transaction support with XA protocol.
```rust
pub struct XaParticipant {
    // ...
}

impl XaParticipant {
    pub fn new(config: XaParticipantConfig, storage: Arc<LsmStorageEngine>) -> Self

    pub async fn xa_start(&mut self, xid: String) -> Result<LocalTransactionId>

    pub async fn xa_end(&mut self, local_txn_id: LocalTransactionId) -> Result<()>

    pub async fn xa_prepare(&mut self, local_txn_id: LocalTransactionId) -> Result<()>

    pub async fn xa_commit(&mut self, local_txn_id: LocalTransactionId) -> Result<()>

    pub async fn xa_rollback(&mut self, local_txn_id: LocalTransactionId) -> Result<()>
}
```

Example:
```rust
use heliosdb_storage::{XaParticipant, XaParticipantConfig};

let config = XaParticipantConfig::default();
let participant = XaParticipant::new(config, storage_arc);

// Two-phase commit
let xid = "global-txn-123".to_string();

// Start the transaction branch and execute work
let local_id = participant.xa_start(xid.clone()).await?;
// ... perform operations ...
participant.xa_end(local_id).await?;

// Phase 1: Prepare
participant.xa_prepare(local_id).await?;

// Phase 2: Commit or rollback
match participant.xa_commit(local_id).await {
    Ok(_) => println!("XA transaction committed"),
    Err(e) => {
        participant.xa_rollback(local_id).await?;
        eprintln!("XA transaction rolled back: {}", e);
    }
}
```

Protocol Compatibility
HeliosDB provides wire-protocol compatibility with multiple database systems, allowing you to use existing client libraries.
PostgreSQL Wire Protocol
Connect using any PostgreSQL client (port 5432):
```bash
psql -h localhost -p 5432 -U helios -d mydb
```

```python
import psycopg2

conn = psycopg2.connect(
    host="localhost",
    port=5432,
    database="mydb",
    user="helios",
    password="password"
)

cursor = conn.cursor()
cursor.execute("SELECT * FROM orders LIMIT 10")
results = cursor.fetchall()
```

MongoDB Wire Protocol
Connect using MongoDB drivers (port 27017):
```javascript
const { MongoClient } = require('mongodb');

const client = new MongoClient('mongodb://localhost:27017');
await client.connect();

const db = client.db('mydb');
const collection = db.collection('orders');

const results = await collection.find({ status: 'pending' }).toArray();
```

Redis Protocol (RESP3)
Connect using Redis clients (port 6379):
```bash
redis-cli -h localhost -p 6379
```

```python
import redis

r = redis.Redis(host='localhost', port=6379)
r.set('key1', 'value1')
value = r.get('key1')
```

Cassandra CQL
Connect using Cassandra drivers (port 9042):
```bash
cqlsh localhost 9042
```

```python
from cassandra.cluster import Cluster

cluster = Cluster(['localhost'])
session = cluster.connect('mykeyspace')

rows = session.execute('SELECT * FROM orders WHERE user_id = 100')
for row in rows:
    print(row)
```

Error Handling
All HeliosDB APIs return Result<T> types with specific error enums.
Common Error Types
```rust
// Storage errors
pub enum HeliosError {
    Io(std::io::Error),
    Serialization(String),
    Corruption(String),
    KeyNotFound(String),
    TransactionConflict,
    // ...
}

// Vector errors
pub enum VectorError {
    DimensionMismatch { expected: usize, got: usize },
    InvalidParameter(String),
    IndexNotTrained,
    // ...
}

// Graph errors
pub enum GraphError {
    NodeNotFound(NodeId),
    EdgeNotFound(EdgeId),
    CycleDetected,
    MaxDepthExceeded(usize),
    PathNotFound(NodeId, NodeId),
    // ...
}

// Document errors
pub enum DocumentError {
    SchemaValidation(String),
    InvalidQuery(String),
    CollectionNotFound(String),
    // ...
}

// Replication errors
pub enum ReplicationError {
    NoHealthyReplicas,
    ReplicationLagExceeded { lag_ms: u64, threshold_ms: u64 },
    ConnectionFailed(String),
    // ...
}
```

Error Handling Patterns
```rust
use heliosdb_common::HeliosError;

// Pattern 1: Match on specific errors
match engine.get(&key).await {
    Ok(Some(value)) => println!("Found: {:?}", value),
    Ok(None) => println!("Not found"),
    Err(HeliosError::Io(e)) => eprintln!("IO error: {}", e),
    Err(HeliosError::Corruption(msg)) => eprintln!("Corruption: {}", msg),
    Err(e) => eprintln!("Other error: {}", e),
}

// Pattern 2: Use the ? operator with anyhow
use anyhow::Result;

async fn my_function() -> Result<()> {
    let value = engine.get(&key).await?;
    // ...
    Ok(())
}
```
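Pattern 2 pairs well with anyhow's `Context` trait (part of the anyhow crate itself), which attaches a descriptive message while the error propagates. The `engine` and `Key` types below are the ones used throughout this reference; this is a sketch, not an additional documented API:

```rust
// Pattern 2b: Add context while propagating with `?`
use anyhow::{Context, Result};

async fn load_value(engine: &LsmStorageEngine, key: &Key) -> Result<Option<Value>> {
    // `with_context` is lazy: the message is only built on the error path
    engine.get(key).await
        .with_context(|| format!("failed to load value for key {:?}", key))
}
```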
```rust
// Pattern 3: Map errors
let result = engine.get(&key).await
    .map_err(|e| format!("Failed to get key: {}", e))?;
```

Best Practices
1. Connection Management
```rust
// Use Arc for shared access
use std::sync::Arc;

let engine = Arc::new(LsmStorageEngine::new("/data", config).await?);

// Clone the Arc for multiple tasks
let engine_clone = engine.clone();
tokio::spawn(async move {
    engine_clone.get(&key).await
});
```

2. Batch Operations
```rust
// Bad: individual operations
for i in 0..1000 {
    engine.put(Key::from(format!("key{}", i)), value.clone()).await?;
}

// Good: batch writes
let mut batch = Vec::new();
for i in 0..1000 {
    batch.push((Key::from(format!("key{}", i)), value.clone()));
}
// Use the batch write API when available
```

3. Resource Cleanup
```rust
// Always use RAII or explicit cleanup
{
    let cache = UnifiedCacheManager::new(config);
    // ... use cache ...
} // Automatically cleaned up

// Or explicitly
let cache = UnifiedCacheManager::new(config);
// ... use cache ...
drop(cache);
```

4. Configuration Tuning
```rust
// Production settings
let config = StorageConfig {
    memtable_size: 256 * 1024 * 1024,       // 256MB
    block_cache_size: 1024 * 1024 * 1024,   // 1GB
    compaction_threads: 4,
    enable_compression: true,
    max_open_files: 10000,
    ..Default::default()
};
```

5. Monitoring
```rust
// Regular metrics collection
tokio::spawn(async move {
    let mut interval = tokio::time::interval(Duration::from_secs(60));
    loop {
        interval.tick().await;

        let stats = cache.get_stats();
        log::info!("Cache hit rate: {:.2}%", stats.hit_rate() * 100.0);

        // `?` can't be used inside the spawned task, so handle the error explicitly
        if let Ok(repl_stats) = manager.get_replication_stats().await {
            for stat in repl_stats {
                log::info!("Replica {} lag: {}ms", stat.replica_address, stat.lag_ms);
            }
        }
    }
});
```

6. Error Recovery
```rust
// Implement retry logic for transient failures
use tokio::time::{sleep, Duration};

async fn retry_operation<F, T>(mut op: F, max_retries: u32) -> Result<T>
where
    F: FnMut() -> Result<T>,
{
    let mut attempts = 0;
    loop {
        match op() {
            Ok(result) => return Ok(result),
            Err(_) if attempts < max_retries => {
                attempts += 1;
                // Linear backoff: 100ms, 200ms, 300ms, ...
                sleep(Duration::from_millis(100 * attempts as u64)).await;
            }
            Err(e) => return Err(e),
        }
    }
}
```

Conclusion
This API reference covers the core functionality of HeliosDB 7.0. For additional details:
- See individual package documentation: cargo doc --open
- Review examples in each package's examples/ directory
- Consult the User Guide for conceptual overviews
- Check the Quick Reference for common operations
For support, visit: https://github.com/your-org/heliosdb/issues
Advanced APIs
Self-Healing Database
The self-healing module provides automatic detection and repair of database issues.
SelfHealingManager
```rust
use heliosdb_self_healing::{SelfHealingManager, SelfHealingConfig};

pub struct SelfHealingManager { /* ... */ }

impl SelfHealingManager {
    pub fn new(config: SelfHealingConfig) -> Self
    pub async fn start(&mut self) -> Result<()>
    pub async fn check_health(&self) -> Result<HealthStatus>
    pub async fn trigger_repair(&self, issue: Issue) -> Result<RepairResult>
}
```

Example:
```rust
use heliosdb_self_healing::{SelfHealingManager, SelfHealingConfig};

let config = SelfHealingConfig {
    check_interval: Duration::from_secs(60),
    auto_repair: true,
    max_repair_attempts: 3,
    enable_ml_predictions: true,
};

let mut manager = SelfHealingManager::new(config);
manager.start().await?;

// Check health status
let status = manager.check_health().await?;
if !status.is_healthy {
    println!("Issues detected: {:?}", status.issues);
}
```

Materialized Views
Intelligent materialized views with automatic refresh and incremental updates.
MaterializedViewEngine
```rust
use heliosdb_materialized_views::{MaterializedViewEngine, ViewDefinition};

pub struct MaterializedViewEngine { /* ... */ }

impl MaterializedViewEngine {
    pub async fn new(storage: Arc<LsmStorageEngine>) -> Result<Self>
    pub async fn create_view(&mut self, definition: ViewDefinition) -> Result<()>
    pub async fn refresh_view(&self, view_name: &str) -> Result<()>
    pub async fn query_view(&self, view_name: &str) -> Result<Vec<Row>>
}
```

Example:
```rust
use heliosdb_materialized_views::{MaterializedViewEngine, ViewDefinition};

let mut engine = MaterializedViewEngine::new(storage_arc).await?;

// Create a materialized view
let definition = ViewDefinition {
    name: "user_stats".to_string(),
    query: "SELECT age, COUNT(*) as count FROM users GROUP BY age".to_string(),
    refresh_interval: Some(Duration::from_secs(300)),
    incremental: true,
};

engine.create_view(definition).await?;

// Query the view (much faster than re-running the underlying query)
let results = engine.query_view("user_stats").await?;
```

ETL Pipeline
Automated ETL with AI-powered transformations and data quality checks.
IngestionPipeline
```rust
use heliosdb_etl::{IngestionPipeline, IngestionConfig, TransformationRule};

pub struct IngestionPipeline { /* ... */ }

impl IngestionPipeline {
    pub fn new(config: IngestionConfig) -> Self
    pub async fn ingest(&mut self, source: DataSource) -> Result<IngestionStats>
    pub fn add_transformation(&mut self, rule: TransformationRule)
    pub async fn start_streaming(&mut self) -> Result<()>
}
```

Example:
```rust
use heliosdb_etl::{
    IngestionPipeline, IngestionConfig, DataSource,
    TransformationRule, ValidationRule,
};

let config = IngestionConfig {
    batch_size: 1000,
    enable_cdc: true,
    enable_quality_checks: true,
    parallelism: 4,
};

let mut pipeline = IngestionPipeline::new(config);

// Add transformation rules
pipeline.add_transformation(TransformationRule::RenameColumn {
    from: "old_name".to_string(),
    to: "new_name".to_string(),
});

pipeline.add_transformation(TransformationRule::FilterRows {
    condition: "age >= 18".to_string(),
});

// Ingest data
let source = DataSource::Csv {
    path: "/data/input.csv".to_string(),
    delimiter: ',',
    has_header: true,
};

let stats = pipeline.ingest(source).await?;
println!("Ingested {} rows in {:.2}s", stats.rows_processed, stats.duration_secs);
```

Multi-Master Replication
CRDT-based multi-master replication with conflict resolution.
MultiMasterReplicator
```rust
use heliosdb_multi_master::{MultiMasterReplicator, ReplicatorConfig, CrdtType};

pub struct MultiMasterReplicator { /* ... */ }

impl MultiMasterReplicator {
    pub fn new(config: ReplicatorConfig) -> Self
    pub async fn add_master(&mut self, node_id: String, address: String) -> Result<()>
    pub async fn write(&self, key: Key, value: Value, crdt_type: CrdtType) -> Result<()>
    pub async fn read(&self, key: &Key) -> Result<Option<Value>>
    pub async fn sync(&self) -> Result<SyncStats>
}
```

Example:
```rust
use heliosdb_multi_master::{
    MultiMasterReplicator, ReplicatorConfig, ConflictResolution, CrdtType,
};

let config = ReplicatorConfig {
    node_id: "node1".to_string(),
    sync_interval: Duration::from_secs(10),
    conflict_resolution: ConflictResolution::LastWriteWins,
};

let mut replicator = MultiMasterReplicator::new(config);

// Add other master nodes
replicator.add_master(
    "node2".to_string(),
    "192.168.1.2:5000".to_string()
).await?;

// Write using a CRDT
replicator.write(
    Key::from("counter:1"),
    Value::from(vec![0, 0, 0, 1]), // counter value: 1
    CrdtType::Counter
).await?;

// Sync with other masters
let stats = replicator.sync().await?;
println!("Synced {} operations", stats.operations_synced);
```

Distributed Query Optimizer
AI-powered query optimization for distributed execution.
DistributedQueryOptimizer
```rust
use heliosdb_distributed_optimizer::{
    DistributedQueryOptimizer, QueryPlan, ExecutionStats,
};

pub struct DistributedQueryOptimizer { /* ... */ }

impl DistributedQueryOptimizer {
    pub fn new(config: OptimizerConfig) -> Self
    pub async fn optimize(&self, query: &str) -> Result<QueryPlan>
    pub async fn execute(&self, plan: QueryPlan) -> Result<Vec<Row>>
    pub async fn explain(&self, query: &str) -> Result<String>
}
```

Example:
```rust
use heliosdb_distributed_optimizer::{
    DistributedQueryOptimizer, OptimizerConfig,
};

let config = OptimizerConfig {
    enable_ai: true,
    cost_threshold: 1000.0,
    max_parallel_shards: 16,
};

let optimizer = DistributedQueryOptimizer::new(config);

// Optimize a query
let query = "SELECT * FROM users JOIN orders ON users.id = orders.user_id";
let plan = optimizer.optimize(query).await?;

println!("Execution plan:");
println!("{}", optimizer.explain(query).await?);

// Execute the optimized plan
let results = optimizer.execute(plan).await?;
```

Global Distributed Cache
Multi-region cache with intelligent prefetching and hotspot detection.
GlobalCacheCoordinator
```rust
use heliosdb_global_cache::{
    GlobalCacheCoordinator, CacheTopology, ReplicationStrategy,
};

pub struct GlobalCacheCoordinator { /* ... */ }

impl GlobalCacheCoordinator {
    pub async fn new(topology: CacheTopology) -> Result<Self>
    pub async fn get(&self, key: &CacheKey) -> Result<Option<Vec<u8>>>
    pub async fn set(
        &self,
        key: CacheKey,
        value: Vec<u8>,
        strategy: ReplicationStrategy
    ) -> Result<()>
    pub async fn detect_hotspots(&self) -> Result<Vec<Hotspot>>
    pub async fn prefetch(&self, patterns: Vec<AccessPattern>) -> Result<()>
}
```

Example:
```rust
use heliosdb_global_cache::{
    GlobalCacheCoordinator, CacheTopology, ReplicationStrategy, Region,
};

let topology = CacheTopology {
    regions: vec![
        Region { name: "us-west".to_string(), nodes: 3 },
        Region { name: "us-east".to_string(), nodes: 3 },
        Region { name: "eu-west".to_string(), nodes: 3 },
    ],
    replication_factor: 2,
};

let coordinator = GlobalCacheCoordinator::new(topology).await?;

// Set with regional replication
coordinator.set(
    CacheKey::new("user:123"),
    vec![1, 2, 3, 4],
    ReplicationStrategy::Regional
).await?;

// Detect and optimize for hotspots
let hotspots = coordinator.detect_hotspots().await?;
for hotspot in hotspots {
    println!("Hotspot: {} (access count: {})", hotspot.key, hotspot.access_count);
}
```

Deadlock Detection
Distributed deadlock detection with predictive capabilities.
DeadlockDetector
```rust
use heliosdb_deadlock_detection::{DeadlockDetector, DetectorConfig};

pub struct DeadlockDetector { /* ... */ }

impl DeadlockDetector {
    pub fn new(config: DetectorConfig) -> Self
    pub async fn start_monitoring(&mut self) -> Result<()>
    pub async fn detect_deadlocks(&self) -> Result<Vec<Deadlock>>
    pub async fn resolve_deadlock(&self, deadlock: Deadlock) -> Result<()>
    pub async fn predict_deadlocks(&self) -> Result<Vec<PotentialDeadlock>>
}
```

Example:
```rust
use heliosdb_deadlock_detection::{DeadlockDetector, DetectorConfig, ResolutionStrategy};

let config = DetectorConfig {
    check_interval: Duration::from_secs(5),
    enable_prediction: true,
    auto_resolve: true,
    resolution_strategy: ResolutionStrategy::VictimSelection,
};

let mut detector = DeadlockDetector::new(config);
detector.start_monitoring().await?;

// Detect deadlocks
let deadlocks = detector.detect_deadlocks().await?;
if !deadlocks.is_empty() {
    println!("Found {} deadlocks", deadlocks.len());
    for deadlock in deadlocks {
        detector.resolve_deadlock(deadlock).await?;
    }
}

// Predict potential deadlocks
let predictions = detector.predict_deadlocks().await?;
for prediction in predictions {
    println!("Potential deadlock: probability={:.2}", prediction.probability);
}
```

Quantum Computing Integration
Quantum algorithms for optimization problems.
QuantumOptimizer
```rust
use heliosdb_quantum::{QuantumOptimizer, QuantumCircuit, QuantumConfig};

pub struct QuantumOptimizer { /* ... */ }

impl QuantumOptimizer {
    pub fn new(config: QuantumConfig) -> Result<Self>
    pub async fn optimize_query(&self, query: &str) -> Result<OptimizedPlan>
    pub async fn solve_tsp(&self, cities: Vec<City>) -> Result<Route>
    pub fn create_circuit(&self, gates: Vec<Gate>) -> Result<QuantumCircuit>
    pub async fn execute_circuit(&self, circuit: QuantumCircuit) -> Result<Vec<f64>>
}
```

Example:
```rust
use heliosdb_quantum::{QuantumOptimizer, QuantumConfig, SimulatorType, City, Gate};

let config = QuantumConfig {
    simulator_type: SimulatorType::StateVector,
    num_qubits: 10,
    optimization_level: 3,
};

let optimizer = QuantumOptimizer::new(config)?;

// Solve a traveling salesman problem
let cities = vec![
    City { x: 0.0, y: 0.0 },
    City { x: 1.0, y: 2.0 },
    City { x: 3.0, y: 1.0 },
    // ... more cities
];

let route = optimizer.solve_tsp(cities).await?;
println!("Optimal route distance: {:.2}", route.total_distance);
```

Neuromorphic Computing
Spiking neural networks for pattern recognition and anomaly detection.
NeuromorphicEngine
```rust
use heliosdb_neuromorphic::{
    NeuromorphicEngine, SpikingNeuralNetwork, NeuronConfig,
};

pub struct NeuromorphicEngine { /* ... */ }

impl NeuromorphicEngine {
    pub fn new(config: NeuronConfig) -> Self
    pub async fn train(&mut self, data: Vec<Sample>) -> Result<TrainingStats>
    pub async fn detect_anomaly(&self, input: Vec<f64>) -> Result<AnomalyScore>
    pub async fn recognize_pattern(&self, pattern: Vec<u8>) -> Result<ClassificationResult>
}
```

Example:
```rust
use heliosdb_neuromorphic::{
    NeuromorphicEngine, NeuronConfig, Sample,
};

let config = NeuronConfig {
    num_neurons: 1000,
    connection_probability: 0.1,
    learning_rate: 0.01,
    threshold: 1.0,
};

let mut engine = NeuromorphicEngine::new(config);

// Train on time-series data
let training_data: Vec<Sample> = load_training_data();
let stats = engine.train(training_data).await?;

// Detect anomalies in real time
let input = vec![0.5, 0.6, 0.9, 1.2, 0.8]; // unusual spike
let score = engine.detect_anomaly(input).await?;

if score.is_anomaly {
    println!("Anomaly detected! Score: {:.2}", score.score);
}
```

Energy-Aware Optimization
Power management and carbon footprint optimization.
EnergyOptimizer
```rust
use heliosdb_energy_optimizer::{
    EnergyOptimizer, EnergyConfig, PowerGovernor,
};

pub struct EnergyOptimizer { /* ... */ }

impl EnergyOptimizer {
    pub fn new(config: EnergyConfig) -> Self
    pub async fn optimize_workload(&self, workload: Workload) -> Result<OptimizedWorkload>
    pub async fn get_power_usage(&self) -> Result<PowerMetrics>
    pub async fn calculate_carbon_footprint(&self) -> Result<CarbonMetrics>
    pub fn set_power_mode(&mut self, mode: PowerMode) -> Result<()>
}
```

Example:
```rust
use heliosdb_energy_optimizer::{
    EnergyOptimizer, EnergyConfig, PowerMode,
};

let config = EnergyConfig {
    target_power_watts: 150.0,
    enable_carbon_tracking: true,
    prefer_renewable: true,
    dynamic_frequency_scaling: true,
};

let mut optimizer = EnergyOptimizer::new(config);

// Set low-power mode during off-peak hours
optimizer.set_power_mode(PowerMode::LowPower)?;

// Get current metrics
let power = optimizer.get_power_usage().await?;
println!("Current power usage: {:.2}W", power.current_watts);

let carbon = optimizer.calculate_carbon_footprint().await?;
println!("Carbon footprint: {:.2}kg CO2", carbon.total_kg);
```

REST API Usage
Authentication
```bash
# Using an API key
curl -H "X-API-Key: your-api-key" \
  http://localhost:8080/api/v1/query
```
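The bearer-token variant below assumes you already hold a JWT. Token issuance is not documented in this reference; if the server exposes a login endpoint (the path /api/v1/auth/login here is an assumption, not a documented route), the flow might look like:

```shell
# Hypothetical login endpoint -- verify the actual route in your deployment
curl -X POST http://localhost:8080/api/v1/auth/login \
  -H "Content-Type: application/json" \
  -d '{ "username": "helios", "password": "password" }'
# The response body would contain the JWT to pass as "Authorization: Bearer <token>"
```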
```bash
# Using a JWT bearer token
curl -H "Authorization: Bearer your-jwt-token" \
  http://localhost:8080/api/v1/query
```

SQL Queries
```bash
# Execute a SQL query
curl -X POST http://localhost:8080/api/v1/query \
  -H "Content-Type: application/json" \
  -H "X-API-Key: your-key" \
  -d '{
    "query": "SELECT * FROM users WHERE age > $1 LIMIT 10",
    "parameters": [21]
  }'
```

Vector Search
```bash
# Search for similar vectors
curl -X POST http://localhost:8080/api/v1/vector/search \
  -H "Content-Type: application/json" \
  -H "X-API-Key: your-key" \
  -d '{
    "vector": [0.1, 0.2, 0.3, 0.4],
    "k": 10,
    "index": "embeddings_index",
    "metric": "cosine"
  }'
```

Document Operations
```bash
# Insert a document
curl -X POST http://localhost:8080/api/v1/documents/users \
  -H "Content-Type: application/json" \
  -H "X-API-Key: your-key" \
  -d '{
    "id": "user123",
    "data": {
      "name": "Alice",
      "email": "alice@example.com",
      "age": 30
    }
  }'
```
```bash
# Query documents (the filter is the URL-encoded JSON {"age":{"$gt":25}})
curl "http://localhost:8080/api/v1/documents/users?filter=%7B%22age%22:%7B%22%24gt%22:25%7D%7D" \
  -H "X-API-Key: your-key"
```

Time-Series Operations
```bash
# Write a time-series point
curl -X POST http://localhost:8080/api/v1/timeseries/write \
  -H "Content-Type: application/json" \
  -H "X-API-Key: your-key" \
  -d '{
    "metric": "cpu.usage",
    "value": 75.5,
    "timestamp": 1635724800000,
    "tags": { "host": "server01", "region": "us-west" }
  }'
```
```bash
# Query time-series data
curl -X POST http://localhost:8080/api/v1/timeseries/query \
  -H "Content-Type: application/json" \
  -H "X-API-Key: your-key" \
  -d '{
    "metric": "cpu.usage",
    "start": 1635724800000,
    "end": 1635811200000,
    "aggregation": "avg",
    "interval": 300
  }'
```

GraphQL API Usage
Basic Queries
```graphql
# Query users
query {
  users(where: { age: { gt: 25 } }, orderBy: { name: ASC }, take: 10) {
    id
    name
    email
    age
    createdAt
  }
}
```
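Over HTTP, queries like the one above are typically sent as a JSON POST. HeliosDB's GraphQL route is not specified in this reference, so the /api/v1/graphql path below is an assumption modeled on the other versioned endpoints:

```shell
# Endpoint path is an assumption -- confirm the GraphQL route for your deployment
curl -X POST http://localhost:8080/api/v1/graphql \
  -H "Content-Type: application/json" \
  -H "X-API-Key: your-key" \
  -d '{ "query": "query { users(take: 10) { id name } }" }'
```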
```graphql
# Get a single user
query {
  user(id: "123") {
    id
    name
    email
    posts {
      id
      title
      content
    }
  }
}
```

Mutations
```graphql
# Create a user
mutation {
  createUser(name: "Alice", email: "alice@example.com", age: 30) {
    id
    name
    email
  }
}

# Update a user
mutation {
  updateUser(id: "123", email: "newemail@example.com") {
    id
    email
  }
}

# Delete a user
mutation {
  deleteUser(id: "123") {
    id
  }
}
```

Subscriptions
```graphql
# Subscribe to user creations
subscription {
  userCreated {
    id
    name
    email
    createdAt
  }
}

# Subscribe to updates to a specific user
subscription {
  userUpdated(id: "123") {
    id
    name
    email
  }
}
```

Performance Tuning
Storage Configuration
```rust
let config = StorageConfig {
    // Memory settings
    memtable_size: 256 * 1024 * 1024,       // 256MB
    block_cache_size: 1024 * 1024 * 1024,   // 1GB
    write_buffer_size: 64 * 1024 * 1024,    // 64MB

    // Compaction
    compaction_threads: 4,
    max_background_jobs: 8,
    target_file_size: 64 * 1024 * 1024,     // 64MB

    // I/O
    use_direct_io: true,
    max_open_files: 10000,
    enable_mmap: true,

    // Compression
    enable_compression: true,
    compression_type: CompressionType::Zstd,
    compression_level: 3,

    // Advanced
    enable_bloom_filters: true,
    bloom_filter_bits_per_key: 10,
    enable_statistics: true,
};
```

Vector Index Tuning
```rust
// For high recall (slower but more accurate)
let index = HnswIndex::new(
    768,
    32,   // Higher M = better recall
    400,  // Higher ef_construction = better index quality
    DistanceMetric::Cosine
);
index.set_ef(500); // Higher ef = better search quality

// For speed (lower recall)
let index = HnswIndex::new(
    768,
    8,    // Lower M = faster
    100,  // Lower ef_construction = faster build
    DistanceMetric::Cosine
);
index.set_ef(50); // Lower ef = faster search
```

Cache Optimization
```rust
let config = CacheConfig {
    // Size and eviction
    max_size: 2 * 1024 * 1024 * 1024, // 2GB
    eviction_policy: EvictionPolicyType::Hybrid,
    enable_ml: true,

    // Compression
    enable_compression: true,
    compression_type: CompressionType::Zstd,
    compression_threshold: 1024,
    compression_level: 3,

    // Tiering
    enable_tiered: true,
    l1_size: 512 * 1024 * 1024,            // 512MB hot tier
    l2_size: Some(2 * 1024 * 1024 * 1024), // 2GB warm tier
    l3_enabled: true,                      // Distributed tier

    // Performance
    num_shards: 16,
    enable_prefetch: true,
    prefetch_window: 100,
};
```

Query Optimization
```rust
// Enable query optimization
let optimizer_config = OptimizerConfig {
    enable_ai: true,
    enable_statistics: true,
    enable_cost_model: true,
    parallel_execution: true,
    max_parallel_workers: 8,
    adaptive_planning: true,
};

// Use prepared statements
let stmt = db.prepare("SELECT * FROM users WHERE id = $1")?;
for id in user_ids {
    let results = stmt.query(&[&id])?;
    // Process results
}

// Use connection pooling
let pool = ConnectionPool::new(PoolConfig {
    min_connections: 5,
    max_connections: 20,
    connection_timeout: Duration::from_secs(30),
    idle_timeout: Some(Duration::from_secs(600)),
    max_lifetime: Some(Duration::from_secs(3600)),
});
```

Monitoring and Observability
Metrics Collection
```rust
use heliosdb_metrics::{MetricsCollector, MetricType};

let collector = MetricsCollector::new();

// Collect metrics periodically
tokio::spawn(async move {
    let mut interval = tokio::time::interval(Duration::from_secs(60));
    loop {
        interval.tick().await;

        let metrics = collector.collect_all().await;

        println!("=== Database Metrics ===");
        println!("Queries/sec: {}", metrics.queries_per_second);
        println!("Cache hit rate: {:.2}%", metrics.cache_hit_rate * 100.0);
        println!("Replication lag: {}ms", metrics.replication_lag_ms);
        println!("Storage size: {} GB", metrics.storage_size_bytes / (1024 * 1024 * 1024));
        println!("Active connections: {}", metrics.active_connections);
    }
});
```

Distributed Tracing
```rust
use heliosdb_distributed_tracing::{TracingContext, Span};

// Create a trace
let ctx = TracingContext::new("query-execution");

// Start a span
let span = ctx.start_span("execute-query");

// Execute the operation
let result = execute_query(&query).await?;

// End the span
span.finish();

// View the trace
let trace = ctx.get_trace();
println!("Trace duration: {}ms", trace.duration_ms);
for span in trace.spans {
    println!("  {}: {}ms", span.name, span.duration_ms);
}
```

Health Checks
```rust
use heliosdb_common::health::{HealthChecker, ComponentHealth};

let checker = HealthChecker::new();

// Check all components
let health = checker.check_all().await?;

if health.is_healthy() {
    println!("System healthy");
} else {
    println!("System unhealthy:");
    for component in health.components {
        if !component.healthy {
            println!("  {}: {}", component.name, component.message);
        }
    }
}
```

Security Best Practices
Authentication
```rust
use heliosdb_security::{AuthManager, AuthConfig, TokenType};

let auth_manager = AuthManager::new(AuthConfig {
    jwt_secret: std::env::var("JWT_SECRET")?,
    token_expiry: Duration::from_secs(3600),
    refresh_token_expiry: Duration::from_secs(86400 * 7),
    enable_2fa: true,
});

// Authenticate a user
let token = auth_manager.authenticate(username, password).await?;

// Verify the token
let claims = auth_manager.verify_token(&token)?;
```

Encryption
```rust
use heliosdb_security::{EncryptionManager, EncryptionConfig};

let encryption = EncryptionManager::new(EncryptionConfig {
    algorithm: EncryptionAlgorithm::Aes256Gcm,
    key_rotation_interval: Duration::from_secs(86400 * 30),
    enable_at_rest: true,
    enable_in_transit: true,
});

// Encrypt data
let plaintext = b"sensitive data";
let ciphertext = encryption.encrypt(plaintext)?;

// Decrypt data
let decrypted = encryption.decrypt(&ciphertext)?;
```

Access Control
```rust
use heliosdb_security::{AccessControl, Permission, Role};

let ac = AccessControl::new();

// Define roles
ac.create_role(Role {
    name: "admin".to_string(),
    permissions: vec![
        Permission::Read,
        Permission::Write,
        Permission::Delete,
        Permission::Admin,
    ],
})?;

// Check a permission
if ac.has_permission(&user, Permission::Write)? {
    // Allow the operation
}
```
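A role only takes effect once it is attached to a user. The signatures above do not list an assignment method, so the `assign_role` call below is hypothetical; consult the heliosdb_security package docs for the actual API:

```rust
// `assign_role` is a hypothetical method name -- the real API may differ
ac.assign_role(&user, "admin")?;

// Subsequent permission checks reflect the role's grants
if ac.has_permission(&user, Permission::Delete)? {
    // privileged delete allowed
}
```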