
Redis to HeliosDB Migration Guide


Version: 1.0 Last Updated: 2026-01-04


Table of Contents

  1. Introduction
  2. Compatibility Overview
  3. Pre-Migration Assessment
  4. Conceptual Mapping
  5. Step-by-Step Migration
  6. Command Mapping
  7. Feature Mapping
  8. Caching Layer Considerations
  9. Performance Comparison
  10. Common Issues and Troubleshooting

1. Introduction

1.1 Why Migrate from Redis to HeliosDB?

Redis has long been the go-to solution for in-memory caching and key-value storage. However, organizations increasingly face challenges with:

  • Data persistence complexity: Managing RDB snapshots and AOF files requires operational expertise
  • Memory limitations: All data must fit in RAM, creating scaling challenges
  • Dual-system architecture: Running Redis alongside a relational database increases complexity
  • Limited query capabilities: Redis lacks SQL-style querying for analytics

HeliosDB solves these challenges by providing:

| Capability | Redis | HeliosDB |
|---|---|---|
| In-memory caching | Native | Native (4-tier cache) |
| Persistent storage | RDB/AOF | LSM-tree native |
| SQL queries | No | Full SQL + NL2SQL |
| Multi-protocol | Redis only | 9+ protocols |
| Transactions | MULTI/EXEC | Full ACID |
| Analytics | Limited | OLAP + OLTP |
| Vector search | No | Native embeddings |

1.2 Migration Benefits

Unified Architecture

  • Single database for caching AND persistence
  • Eliminate Redis-to-database synchronization code
  • Reduce infrastructure complexity

Enhanced Capabilities

  • Query cache data with SQL
  • Join cached data with persistent tables
  • Automatic cache invalidation via triggers
  • Built-in TTL with timestamp precision

Operational Simplicity

  • No separate Redis cluster management
  • Unified backup and recovery
  • Single monitoring stack
  • Consistent security model

1.3 Migration Approaches

| Approach | Downtime | Complexity | Use Case |
|---|---|---|---|
| Direct cutover | Minutes | Low | Small datasets (<10GB) |
| Dual-write | Zero | Medium | Production applications |
| Gradual migration | Zero | Medium | Large datasets |
| Shadow mode | Zero | High | Mission-critical systems |

2. Compatibility Overview

2.1 HeliosDB Redis Wire Protocol Support

HeliosDB implements the Redis RESP3 protocol with 90% command compatibility:

Overall Compatibility: 90%
Protocol Support:
- RESP3 Protocol: 100% (HELLO, push types, all data types)
- String Commands: 100% (GET, SET, INCR, APPEND, etc.)
- List Commands: 100% (LPUSH, RPOP, LRANGE, etc.)
- Set Commands: 100% (SADD, SMEMBERS, SINTER, etc.)
- Sorted Set Commands: 100% (ZADD, ZRANGE, ZINCRBY, etc.)
- Hash Commands: 100% (HSET, HGET, HGETALL, etc.)
- Pub/Sub: 100% (SUBSCRIBE, PUBLISH, PSUBSCRIBE)
- Streams: 100% (XADD, XREAD, XGROUP, etc.)
- Transactions: 100% (MULTI, EXEC, WATCH)
- Scripting: 80% (EVAL, EVALSHA - core Lua support)

2.2 Driver Compatibility

| Driver | Version | Status | Notes |
|---|---|---|---|
| redis-py (Python) | 4.5+ | Full | RESP3 native |
| node-redis (Node.js) | 4.x | Full | Connection pooling |
| Jedis (Java) | 4.x | Full | Standard usage |
| go-redis (Go) | 9.x | Full | Context support |
| StackExchange.Redis (C#) | 2.x | Full | Multiplexing |
| redis-rs (Rust) | 0.23+ | Full | Async support |

2.3 Connection Compatibility

# Existing Redis connection
import redis
client = redis.Redis(host='redis-server', port=6379)
# HeliosDB connection - just change host/port
client = redis.Redis(host='heliosdb-server', port=6379, protocol=3)
# All existing code works unchanged
client.set('key', 'value')
client.get('key') # 'value'

3. Pre-Migration Assessment

3.1 Redis Data Structure Inventory

Run the following analysis on your Redis instance:

# Count keys by type
redis-cli --scan | while read key; do
  type=$(redis-cli TYPE "$key")
  echo "$type"
done | sort | uniq -c | sort -rn
# Sample output:
# 45000 string
# 12000 hash
# 8500 list
# 5200 set
# 3800 zset
# 500 stream

Automated inventory script:

import redis
from collections import defaultdict

def analyze_redis_inventory(host='localhost', port=6379):
    client = redis.Redis(host=host, port=port, decode_responses=True)
    inventory = defaultdict(lambda: {'count': 0, 'memory': 0, 'ttl_count': 0})
    cursor = 0
    while True:
        cursor, keys = client.scan(cursor=cursor, count=1000)
        for key in keys:
            key_type = client.type(key)
            memory = client.memory_usage(key) or 0
            ttl = client.ttl(key)
            inventory[key_type]['count'] += 1
            inventory[key_type]['memory'] += memory
            if ttl > 0:
                inventory[key_type]['ttl_count'] += 1
        if cursor == 0:  # SCAN returns cursor 0 when iteration completes
            break
    return dict(inventory)

# Run analysis
inventory = analyze_redis_inventory()
for dtype, stats in inventory.items():
    print(f"{dtype}: {stats['count']} keys, {stats['memory']/1024/1024:.2f} MB")

3.2 Memory Usage Analysis

# Overall memory stats
redis-cli INFO memory
# Key metrics to capture:
# - used_memory_human: Total memory used
# - used_memory_peak_human: Peak memory usage
# - used_memory_dataset: Data memory (excluding overhead)
# - maxmemory: Configured limit
# Identify large keys
redis-cli --bigkeys
# Memory analysis by key pattern
redis-cli --memkeys

Memory capacity planning:

| Redis Memory | HeliosDB Recommendation |
|---|---|
| < 10 GB | Standard instance |
| 10-50 GB | Enhanced caching tier |
| 50-200 GB | Distributed deployment |
| > 200 GB | Sharded cluster |

3.3 Key Naming Pattern Analysis

import re
from collections import defaultdict

def analyze_key_patterns(client, sample_size=10000):
    """Analyze key naming patterns for schema design."""
    patterns = defaultdict(int)
    cursor = 0
    count = 0
    while count < sample_size:
        cursor, keys = client.scan(cursor=cursor, count=1000)
        for key in keys:
            # Extract pattern (replace IDs with placeholders)
            pattern = re.sub(r':\d+', ':*', key)
            pattern = re.sub(r':[\w-]{32,}', ':*uuid*', pattern)
            patterns[pattern] += 1
            count += 1
        if cursor == 0:
            break
    return sorted(patterns.items(), key=lambda x: -x[1])

# Example output:
# [('user:*:profile', 15000),
#  ('session:*uuid*', 8500),
#  ('cache:api:*', 5200),
#  ('leaderboard:daily:*', 1200)]

3.4 Access Pattern Analysis

import time
import redis

# Enable Redis keyspace notifications for access analysis:
# CONFIG SET notify-keyspace-events KEA

def capture_command_stats(duration_seconds=3600):
    """Capture command usage statistics over a sampling window."""
    client = redis.Redis()
    # Get command stats before
    stats_before = client.info('commandstats')
    time.sleep(duration_seconds)
    # Get command stats after
    stats_after = client.info('commandstats')
    # Calculate differences
    command_usage = {}
    for cmd, after_stats in stats_after.items():
        before_stats = stats_before.get(cmd, {'calls': 0})
        calls = after_stats['calls'] - before_stats['calls']
        if calls > 0:
            command_usage[cmd] = calls
    return sorted(command_usage.items(), key=lambda x: -x[1])

4. Conceptual Mapping

4.1 Keys to Table Rows

Redis key-value pairs can map to HeliosDB tables:

Redis: HeliosDB:
user:1001:name = "Alice" -> INSERT INTO users (id, name) VALUES (1001, 'Alice')
user:1001:email = "a@b.c" -> ... (additional column)
user:1001:score = 100 -> ... (additional column)

Unified schema design:

-- Create table for user data (replaces user:* keys)
CREATE TABLE users (
    id BIGINT PRIMARY KEY,
    name VARCHAR(255),
    email VARCHAR(255),
    score INTEGER DEFAULT 0,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- Enable HeliosDB intelligent caching
ALTER TABLE users ENABLE INTELLIGENT_CACHING;

4.2 Strings Mapping

| Redis Pattern | HeliosDB Equivalent |
|---|---|
| SET key value | `INSERT INTO kv (key, value) VALUES (...)` |
| GET key | `SELECT value FROM kv WHERE key = ...` |
| INCR counter | `UPDATE counters SET value = value + 1 WHERE id = ...` |
| SETEX key 3600 value | Insert with TTL timestamp column |

HeliosDB schema for string data:

-- Generic key-value table with TTL support
CREATE TABLE kv_store (
    key VARCHAR(255) PRIMARY KEY,
    value TEXT,
    expires_at TIMESTAMP,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- Index for TTL cleanup
CREATE INDEX idx_kv_expires ON kv_store(expires_at)
WHERE expires_at IS NOT NULL;

-- Counter table with atomic increment support
CREATE TABLE counters (
    name VARCHAR(255) PRIMARY KEY,
    value BIGINT DEFAULT 0
);
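The schema above can sit behind a thin data-access layer so application code keeps a Redis-like surface. A minimal sketch, assuming the kv_store and counters tables defined above — the helper names and the (sql, params) convention are illustrative, not a HeliosDB API; the pairs run on any DB-API cursor:

```python
# Hypothetical (sql, params) builders mirroring Redis string commands
# against the kv_store / counters tables defined above.

def set_cmd(key, value):
    # SET key value (upsert)
    return (
        "INSERT INTO kv_store (key, value) VALUES (%s, %s) "
        "ON CONFLICT (key) DO UPDATE SET value = EXCLUDED.value",
        (key, value),
    )

def setex_cmd(key, ttl_seconds, value):
    # SETEX key ttl value -- the relative TTL becomes an absolute timestamp
    return (
        "INSERT INTO kv_store (key, value, expires_at) "
        "VALUES (%s, %s, NOW() + %s * INTERVAL '1 second') "
        "ON CONFLICT (key) DO UPDATE "
        "SET value = EXCLUDED.value, expires_at = EXCLUDED.expires_at",
        (key, value, ttl_seconds),
    )

def get_cmd(key):
    # GET key -- lazily ignores expired rows
    return (
        "SELECT value FROM kv_store WHERE key = %s "
        "AND (expires_at IS NULL OR expires_at > NOW())",
        (key,),
    )

def incr_cmd(name, by=1):
    # INCR / INCRBY -- atomic server-side increment, creating the row if absent
    return (
        "INSERT INTO counters (name, value) VALUES (%s, %s) "
        "ON CONFLICT (name) DO UPDATE "
        "SET value = counters.value + EXCLUDED.value "
        "RETURNING value",
        (name, by),
    )
```

Usage is then a one-liner at each call site, e.g. `cur.execute(*setex_cmd('session:abc', 3600, token))`.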

4.3 Hashes Mapping

| Redis | HeliosDB |
|---|---|
| HSET user:1 name Alice | JSONB column or dedicated table |
| HGET user:1 name | `SELECT data->>'name' FROM users WHERE id = 1` |
| HGETALL user:1 | `SELECT data FROM users WHERE id = 1` |
| HINCRBY user:1 score 10 | JSONB increment function |

JSONB approach (recommended for flexible schemas):

-- Using JSONB for hash-like data
CREATE TABLE entities (
    id VARCHAR(255) PRIMARY KEY,
    data JSONB NOT NULL DEFAULT '{}',
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- Query examples:
-- HGET equivalent
SELECT data->>'name' FROM entities WHERE id = 'user:1';

-- HSET equivalent
UPDATE entities
SET data = data || '{"name": "Alice"}'::jsonb
WHERE id = 'user:1';

-- HINCRBY equivalent
UPDATE entities
SET data = jsonb_set(data, '{score}',
    ((COALESCE(data->>'score', '0')::int + 10)::text)::jsonb)
WHERE id = 'user:1';
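As with strings, the hash commands can be wrapped in small statement builders. A hedged sketch against the entities table above (the helper names are hypothetical; each returns a (sql, params) pair for a DB-API cursor):

```python
import json

# Hypothetical (sql, params) builders mirroring Redis hash commands on
# the JSONB entities table defined above.

def hset_cmd(entity_id, field, value):
    # HSET key field value -- merge a one-field object into data
    return (
        "UPDATE entities SET data = data || %s::jsonb, updated_at = NOW() "
        "WHERE id = %s",
        (json.dumps({field: value}), entity_id),
    )

def hget_cmd(entity_id, field):
    # HGET key field -- ->> extracts the field as text
    return ("SELECT data->>%s FROM entities WHERE id = %s", (field, entity_id))

def hincrby_cmd(entity_id, field, delta):
    # HINCRBY key field delta -- COALESCE treats a missing field as 0
    return (
        "UPDATE entities SET data = jsonb_set(data, ARRAY[%s], "
        "((COALESCE(data->>%s, '0')::int + %s)::text)::jsonb) "
        "WHERE id = %s",
        (field, field, delta, entity_id),
    )
```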

4.4 Lists Mapping

| Redis | HeliosDB |
|---|---|
| LPUSH queue:jobs job1 | Array prepend or queue table |
| RPOP queue:jobs | Array pop or dequeue operation |
| LRANGE list 0 -1 | Array slice or SELECT with ORDER BY |

Queue implementation:

-- Queue table with FIFO support
CREATE TABLE job_queue (
    id BIGSERIAL PRIMARY KEY,
    queue_name VARCHAR(255) NOT NULL,
    payload JSONB NOT NULL,
    priority INTEGER DEFAULT 0,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    processed_at TIMESTAMP
);

-- Index for efficient dequeue
CREATE INDEX idx_queue_pending ON job_queue(queue_name, priority DESC, id)
WHERE processed_at IS NULL;

-- LPUSH equivalent (add to queue)
INSERT INTO job_queue (queue_name, payload) VALUES ('jobs', '{"task": "process"}');

-- RPOP equivalent (dequeue oldest)
WITH next_job AS (
    SELECT id FROM job_queue
    WHERE queue_name = 'jobs' AND processed_at IS NULL
    ORDER BY id ASC LIMIT 1
    FOR UPDATE SKIP LOCKED
)
UPDATE job_queue SET processed_at = NOW()
WHERE id = (SELECT id FROM next_job)
RETURNING *;
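The SKIP LOCKED dequeue generalizes into a worker loop: SKIP LOCKED lets many workers poll the same queue without blocking on each other's claimed rows. A sketch under the job_queue schema in this section — `process_next` and the handler callback are illustrative, and a psycopg2-style connection is assumed:

```python
# Worker-loop sketch around the SKIP LOCKED dequeue above.

DEQUEUE_SQL = """
    WITH next_job AS (
        SELECT id FROM job_queue
        WHERE queue_name = %s AND processed_at IS NULL
        ORDER BY id ASC LIMIT 1
        FOR UPDATE SKIP LOCKED
    )
    UPDATE job_queue SET processed_at = NOW()
    WHERE id = (SELECT id FROM next_job)
    RETURNING id, payload
"""

def process_next(conn, queue_name, handler):
    """Claim and process one job; return False if the queue is empty.

    If handler raises, the caller should roll back so the job
    becomes visible to workers again.
    """
    with conn.cursor() as cur:
        cur.execute(DEQUEUE_SQL, (queue_name,))
        row = cur.fetchone()
        if row is None:
            conn.rollback()   # nothing claimed; release the transaction
            return False
        job_id, payload = row
        handler(payload)      # work happens inside the claiming transaction
        conn.commit()         # the claim persists only on success
        return True
```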

4.5 Sets Mapping

| Redis | HeliosDB |
|---|---|
| SADD users:online user1 | Insert into set table |
| SMEMBERS users:online | SELECT DISTINCT from set table |
| SINTER set1 set2 | INTERSECT query |

Set implementation:

-- Set membership table
CREATE TABLE set_members (
    set_name VARCHAR(255) NOT NULL,
    member VARCHAR(255) NOT NULL,
    added_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    PRIMARY KEY (set_name, member)
);

-- SADD equivalent
INSERT INTO set_members (set_name, member)
VALUES ('users:online', 'user1')
ON CONFLICT DO NOTHING;

-- SMEMBERS equivalent
SELECT member FROM set_members WHERE set_name = 'users:online';

-- SINTER equivalent
SELECT member FROM set_members WHERE set_name = 'set1'
INTERSECT
SELECT member FROM set_members WHERE set_name = 'set2';

-- SUNION equivalent
SELECT DISTINCT member FROM set_members
WHERE set_name IN ('set1', 'set2');

4.6 Sorted Sets Mapping

| Redis | HeliosDB |
|---|---|
| ZADD leaderboard 100 alice | Insert with score column |
| ZRANGE leaderboard 0 9 | SELECT with ORDER BY LIMIT |
| ZINCRBY leaderboard 10 alice | UPDATE score += value |

Leaderboard implementation:

-- Sorted set (leaderboard) table
CREATE TABLE leaderboards (
    board_name VARCHAR(255) NOT NULL,
    member VARCHAR(255) NOT NULL,
    score DOUBLE PRECISION NOT NULL,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    PRIMARY KEY (board_name, member)
);

-- Index for efficient range queries
CREATE INDEX idx_leaderboard_score ON leaderboards(board_name, score DESC);

-- ZADD equivalent
INSERT INTO leaderboards (board_name, member, score)
VALUES ('monthly', 'alice', 100)
ON CONFLICT (board_name, member)
DO UPDATE SET score = EXCLUDED.score, updated_at = NOW();

-- ZRANGE equivalent (top 10)
SELECT member, score FROM leaderboards
WHERE board_name = 'monthly'
ORDER BY score DESC
LIMIT 10;

-- ZINCRBY equivalent
UPDATE leaderboards
SET score = score + 10, updated_at = NOW()
WHERE board_name = 'monthly' AND member = 'alice';

-- ZRANK equivalent
SELECT rank FROM (
    SELECT member, RANK() OVER (ORDER BY score DESC) AS rank
    FROM leaderboards WHERE board_name = 'monthly'
) ranked WHERE member = 'alice';

4.7 TTL Mapping

Redis TTL to HeliosDB timestamp columns:

-- Table with TTL support
CREATE TABLE cache_entries (
    key VARCHAR(255) PRIMARY KEY,
    value TEXT,
    expires_at TIMESTAMP,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- SETEX equivalent (set with expiration)
INSERT INTO cache_entries (key, value, expires_at)
VALUES ('session:abc', 'token123', NOW() + INTERVAL '1 hour')
ON CONFLICT (key) DO UPDATE
SET value = EXCLUDED.value, expires_at = EXCLUDED.expires_at;

-- GET with TTL check
SELECT value FROM cache_entries
WHERE key = 'session:abc'
AND (expires_at IS NULL OR expires_at > NOW());

-- TTL equivalent
SELECT EXTRACT(EPOCH FROM (expires_at - NOW()))::INTEGER AS ttl
FROM cache_entries WHERE key = 'session:abc';

-- Automatic cleanup (scheduled job or HeliosDB background process)
DELETE FROM cache_entries WHERE expires_at < NOW();
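One subtlety the SQL above does not reproduce by itself: Redis TTL has fixed return conventions (-2 for a missing key, -1 for a key with no expiry, otherwise remaining seconds). A small pure-Python sketch of that mapping, with illustrative function names:

```python
from datetime import datetime, timedelta, timezone

# Reproduces Redis TTL return conventions on top of the expires_at
# column: -2 for a missing (or already expired) key, -1 for no expiry,
# else remaining whole seconds. Illustrative sketch only.

def ttl_seconds(row_exists, expires_at, now=None):
    now = now or datetime.now(timezone.utc)
    if not row_exists:
        return -2                    # key does not exist
    if expires_at is None:
        return -1                    # key exists with no expiry
    remaining = (expires_at - now).total_seconds()
    return int(remaining) if remaining > 0 else -2  # expired counts as gone

def expires_at_for(ttl, now=None):
    # SETEX-style: convert a relative TTL into an absolute timestamp
    now = now or datetime.now(timezone.utc)
    return now + timedelta(seconds=ttl)
```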

5. Step-by-Step Migration

5.1 Phase 1: Export Redis Data

Option A: Using SCAN + DUMP/RESTORE

import json
import redis

def export_redis_to_json(redis_host, redis_port, output_file):
    """Export Redis data to JSON format."""
    client = redis.Redis(host=redis_host, port=redis_port, decode_responses=False)
    export_data = []
    cursor = 0
    while True:
        cursor, keys = client.scan(cursor=cursor, count=1000)
        for key in keys:
            key_str = key.decode('utf-8')
            key_type = client.type(key).decode('utf-8')
            ttl = client.ttl(key)
            entry = {
                'key': key_str,
                'type': key_type,
                'ttl': ttl if ttl > 0 else None
            }
            # Extract value based on type
            if key_type == 'string':
                entry['value'] = client.get(key).decode('utf-8', errors='replace')
            elif key_type == 'hash':
                entry['value'] = {k.decode(): v.decode() for k, v in client.hgetall(key).items()}
            elif key_type == 'list':
                entry['value'] = [v.decode() for v in client.lrange(key, 0, -1)]
            elif key_type == 'set':
                entry['value'] = [v.decode() for v in client.smembers(key)]
            elif key_type == 'zset':
                entry['value'] = [(m.decode(), s) for m, s in client.zrange(key, 0, -1, withscores=True)]
            export_data.append(entry)
        if cursor == 0:
            break
    with open(output_file, 'w') as f:
        json.dump(export_data, f, indent=2)
    return len(export_data)

# Export
count = export_redis_to_json('localhost', 6379, 'redis_export.json')
print(f"Exported {count} keys")

Option B: Using RDB file

# Trigger RDB snapshot
redis-cli BGSAVE
# Wait for completion
redis-cli LASTSAVE
# Copy RDB file
cp /var/lib/redis/dump.rdb ./redis_backup.rdb
# Parse RDB using rdb-tools
rdb --command json ./redis_backup.rdb > redis_data.json

5.2 Phase 2: Schema Design in HeliosDB

Based on your key pattern analysis, design appropriate schemas:

-- Connect to HeliosDB (PostgreSQL protocol)
-- psql -h heliosdb-server -p 5432 -U admin -d mydb

-- 1. User data (from user:* hash keys)
CREATE TABLE users (
    id BIGINT PRIMARY KEY,
    data JSONB NOT NULL DEFAULT '{}',
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- 2. Sessions (from session:* string keys with TTL)
CREATE TABLE sessions (
    session_id VARCHAR(64) PRIMARY KEY,
    user_id BIGINT REFERENCES users(id),
    token TEXT NOT NULL,
    expires_at TIMESTAMP NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
CREATE INDEX idx_sessions_expires ON sessions(expires_at);
CREATE INDEX idx_sessions_user ON sessions(user_id);

-- 3. Cache entries (from cache:* keys)
CREATE TABLE cache (
    cache_key VARCHAR(512) PRIMARY KEY,
    value TEXT,
    content_type VARCHAR(100) DEFAULT 'text/plain',
    expires_at TIMESTAMP,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- 4. Job queues (from queue:* list keys)
CREATE TABLE job_queues (
    id BIGSERIAL PRIMARY KEY,
    queue_name VARCHAR(255) NOT NULL,
    payload JSONB NOT NULL,
    status VARCHAR(20) DEFAULT 'pending',
    priority INTEGER DEFAULT 0,
    attempts INTEGER DEFAULT 0,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    started_at TIMESTAMP,
    completed_at TIMESTAMP
);
CREATE INDEX idx_jobs_pending ON job_queues(queue_name, priority DESC, id)
WHERE status = 'pending';

-- 5. Leaderboards (from leaderboard:* zset keys)
CREATE TABLE leaderboards (
    board_name VARCHAR(255) NOT NULL,
    member_id VARCHAR(255) NOT NULL,
    score DOUBLE PRECISION NOT NULL,
    metadata JSONB DEFAULT '{}',
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    PRIMARY KEY (board_name, member_id)
);
CREATE INDEX idx_leaderboard_ranking ON leaderboards(board_name, score DESC);

-- Enable HeliosDB caching on hot tables
ALTER TABLE sessions ENABLE INTELLIGENT_CACHING;
ALTER TABLE cache ENABLE INTELLIGENT_CACHING;
ALTER TABLE leaderboards ENABLE INTELLIGENT_CACHING;

5.3 Phase 3: Data Transformation and Import

import json
import psycopg2
from datetime import datetime, timedelta

def import_to_heliosdb(json_file, heliosdb_config):
    """Import Redis export to HeliosDB."""
    conn = psycopg2.connect(**heliosdb_config)
    cur = conn.cursor()
    with open(json_file) as f:
        data = json.load(f)
    stats = {'users': 0, 'sessions': 0, 'cache': 0, 'queues': 0, 'leaderboards': 0}
    for entry in data:
        key = entry['key']
        value = entry['value']
        ttl = entry.get('ttl')
        expires_at = datetime.now() + timedelta(seconds=ttl) if ttl else None
        try:
            # Route by key pattern
            if key.startswith('user:'):
                # Extract user ID and import
                parts = key.split(':')
                if len(parts) >= 2:
                    user_id = int(parts[1])
                    if entry['type'] == 'hash':
                        cur.execute("""
                            INSERT INTO users (id, data) VALUES (%s, %s)
                            ON CONFLICT (id) DO UPDATE SET data = users.data || %s
                        """, (user_id, json.dumps(value), json.dumps(value)))
                        stats['users'] += 1
            elif key.startswith('session:'):
                session_id = key.split(':')[1]
                cur.execute("""
                    INSERT INTO sessions (session_id, token, expires_at)
                    VALUES (%s, %s, %s)
                    ON CONFLICT (session_id) DO UPDATE
                    SET token = EXCLUDED.token, expires_at = EXCLUDED.expires_at
                """, (session_id, value, expires_at))
                stats['sessions'] += 1
            elif key.startswith('cache:'):
                cur.execute("""
                    INSERT INTO cache (cache_key, value, expires_at)
                    VALUES (%s, %s, %s)
                    ON CONFLICT (cache_key) DO UPDATE
                    SET value = EXCLUDED.value, expires_at = EXCLUDED.expires_at
                """, (key, json.dumps(value) if isinstance(value, (dict, list)) else value, expires_at))
                stats['cache'] += 1
            elif key.startswith('queue:'):
                queue_name = key.split(':')[1]
                if isinstance(value, list):
                    for item in value:
                        cur.execute("""
                            INSERT INTO job_queues (queue_name, payload)
                            VALUES (%s, %s)
                        """, (queue_name, json.dumps({'data': item})))
                    stats['queues'] += 1
            elif key.startswith('leaderboard:'):
                board_name = ':'.join(key.split(':')[1:])
                if entry['type'] == 'zset':
                    for member, score in value:
                        cur.execute("""
                            INSERT INTO leaderboards (board_name, member_id, score)
                            VALUES (%s, %s, %s)
                            ON CONFLICT (board_name, member_id) DO UPDATE SET score = EXCLUDED.score
                        """, (board_name, member, score))
                    stats['leaderboards'] += 1
            conn.commit()
        except Exception as e:
            conn.rollback()
            print(f"Error importing {key}: {e}")
    cur.close()
    conn.close()
    return stats

# Run import
config = {'host': 'heliosdb-server', 'port': 5432, 'user': 'admin', 'password': 'secret', 'dbname': 'mydb'}
stats = import_to_heliosdb('redis_export.json', config)
print(f"Import complete: {stats}")

5.4 Phase 4: Application Connection Changes

Before (Redis):

import redis

class CacheService:
    def __init__(self):
        self.redis = redis.Redis(host='redis-server', port=6379)

    def get_user(self, user_id):
        data = self.redis.hgetall(f'user:{user_id}')
        return {k.decode(): v.decode() for k, v in data.items()}

    def set_session(self, session_id, token, ttl=3600):
        self.redis.setex(f'session:{session_id}', ttl, token)

After (HeliosDB - Option 1: Redis Protocol):

import redis

class CacheService:
    def __init__(self):
        # Just change host - existing code works!
        self.redis = redis.Redis(
            host='heliosdb-server',
            port=6379,
            protocol=3  # RESP3
        )

    def get_user(self, user_id):
        # Unchanged - Redis protocol compatible
        data = self.redis.hgetall(f'user:{user_id}')
        return {k.decode(): v.decode() for k, v in data.items()}

    def set_session(self, session_id, token, ttl=3600):
        # Unchanged
        self.redis.setex(f'session:{session_id}', ttl, token)

After (HeliosDB - Option 2: SQL Protocol for enhanced features):

import psycopg2
import psycopg2.pool
from contextlib import contextmanager

class EnhancedCacheService:
    def __init__(self):
        self.pool = psycopg2.pool.ThreadedConnectionPool(
            minconn=5, maxconn=20,
            host='heliosdb-server', port=5432,
            user='admin', password='secret', dbname='mydb'
        )

    @contextmanager
    def get_conn(self):
        conn = self.pool.getconn()
        try:
            yield conn
        finally:
            self.pool.putconn(conn)

    def get_user(self, user_id):
        with self.get_conn() as conn:
            with conn.cursor() as cur:
                cur.execute("SELECT data FROM users WHERE id = %s", (user_id,))
                row = cur.fetchone()
                return row[0] if row else None

    def set_session(self, session_id, user_id, token, ttl=3600):
        with self.get_conn() as conn:
            with conn.cursor() as cur:
                cur.execute("""
                    INSERT INTO sessions (session_id, user_id, token, expires_at)
                    VALUES (%s, %s, %s, NOW() + %s * INTERVAL '1 second')
                    ON CONFLICT (session_id) DO UPDATE
                    SET token = EXCLUDED.token, expires_at = EXCLUDED.expires_at
                """, (session_id, user_id, token, ttl))
            conn.commit()

    # New capability: SQL-powered queries not possible in Redis
    def get_active_sessions_by_user(self, user_id):
        with self.get_conn() as conn:
            with conn.cursor() as cur:
                cur.execute("""
                    SELECT session_id, created_at, expires_at
                    FROM sessions
                    WHERE user_id = %s AND expires_at > NOW()
                    ORDER BY created_at DESC
                """, (user_id,))
                return cur.fetchall()

6. Command Mapping

6.1 String Commands

| Redis Command | HeliosDB SQL Equivalent | Notes |
|---|---|---|
| SET key value | `INSERT INTO kv (key, value) VALUES ($1, $2) ON CONFLICT (key) DO UPDATE SET value = $2` | Upsert |
| GET key | `SELECT value FROM kv WHERE key = $1` | Direct lookup |
| SETEX key ttl value | `INSERT INTO kv (key, value, expires_at) VALUES ($1, $2, NOW() + $3 * INTERVAL '1 second')` | With TTL |
| SETNX key value | `INSERT INTO kv (key, value) VALUES ($1, $2) ON CONFLICT DO NOTHING` | Only if not exists |
| INCR key | `UPDATE counters SET value = value + 1 WHERE key = $1 RETURNING value` | Atomic increment |
| INCRBY key n | `UPDATE counters SET value = value + $2 WHERE key = $1 RETURNING value` | Increment by N |
| MSET k1 v1 k2 v2 | `INSERT INTO kv (key, value) VALUES ($1, $2), ($3, $4) ON CONFLICT (key) DO UPDATE SET value = EXCLUDED.value` | Batch insert |
| MGET k1 k2 | `SELECT key, value FROM kv WHERE key IN ($1, $2)` | Batch get |
| DEL key | `DELETE FROM kv WHERE key = $1` | Delete |
| EXISTS key | `SELECT 1 FROM kv WHERE key = $1 LIMIT 1` | Check existence |
| TTL key | `SELECT EXTRACT(EPOCH FROM (expires_at - NOW()))::INT FROM kv WHERE key = $1` | Get TTL |

6.2 Hash Commands to JSONB

| Redis Command | HeliosDB JSONB Equivalent |
|---|---|
| HSET key field value | `UPDATE entities SET data = data \|\| jsonb_build_object($2, $3) WHERE id = $1` |
| HGET key field | `SELECT data->>$2 FROM entities WHERE id = $1` |
| HMSET key f1 v1 f2 v2 | `UPDATE entities SET data = data \|\| $2::jsonb WHERE id = $1` |
| HGETALL key | `SELECT data FROM entities WHERE id = $1` |
| HDEL key field | `UPDATE entities SET data = data - $2 WHERE id = $1` |
| HEXISTS key field | `SELECT data ? $2 FROM entities WHERE id = $1` |
| HKEYS key | `SELECT jsonb_object_keys(data) FROM entities WHERE id = $1` |
| HVALS key | `SELECT value FROM entities, jsonb_each_text(data) WHERE id = $1` |
| HINCRBY key field n | `UPDATE entities SET data = jsonb_set(data, ARRAY[$2], ((data->>$2)::int + $3)::text::jsonb) WHERE id = $1` |

6.3 List Commands to Array/Queue

| Redis Command | HeliosDB Equivalent |
|---|---|
| LPUSH key value | `INSERT INTO queue (queue_name, payload) VALUES ($1, $2)` |
| RPUSH key value | Same as LPUSH (order determined by ID) |
| LPOP key | `DELETE FROM queue WHERE id = (SELECT id FROM queue WHERE queue_name = $1 ORDER BY id DESC LIMIT 1 FOR UPDATE SKIP LOCKED) RETURNING payload` |
| RPOP key | `DELETE FROM queue WHERE id = (SELECT id FROM queue WHERE queue_name = $1 ORDER BY id ASC LIMIT 1 FOR UPDATE SKIP LOCKED) RETURNING payload` |
| LRANGE key 0 -1 | `SELECT payload FROM queue WHERE queue_name = $1 ORDER BY id` |
| LLEN key | `SELECT COUNT(*) FROM queue WHERE queue_name = $1` |
| LINDEX key n | `SELECT payload FROM queue WHERE queue_name = $1 ORDER BY id OFFSET $2 LIMIT 1` |

6.4 Sorted Set Commands to ORDER BY

| Redis Command | HeliosDB SQL Equivalent |
|---|---|
| ZADD lb score member | `INSERT INTO leaderboards (board_name, member_id, score) VALUES ($1, $3, $2) ON CONFLICT (board_name, member_id) DO UPDATE SET score = EXCLUDED.score` |
| ZSCORE lb member | `SELECT score FROM leaderboards WHERE board_name = $1 AND member_id = $2` |
| ZRANK lb member | `SELECT rank - 1 FROM (SELECT member_id, RANK() OVER (ORDER BY score) AS rank FROM leaderboards WHERE board_name = $1) r WHERE member_id = $2` |
| ZRANGE lb 0 9 | `SELECT member_id, score FROM leaderboards WHERE board_name = $1 ORDER BY score ASC LIMIT 10` |
| ZREVRANGE lb 0 9 | `SELECT member_id, score FROM leaderboards WHERE board_name = $1 ORDER BY score DESC LIMIT 10` |
| ZINCRBY lb n member | `UPDATE leaderboards SET score = score + $2 WHERE board_name = $1 AND member_id = $3 RETURNING score` |
| ZCOUNT lb min max | `SELECT COUNT(*) FROM leaderboards WHERE board_name = $1 AND score BETWEEN $2 AND $3` |
| ZCARD lb | `SELECT COUNT(*) FROM leaderboards WHERE board_name = $1` |

7. Feature Mapping

7.1 Pub/Sub to HeliosDB Real-Time Events

Redis Pub/Sub:

# Publisher
redis_client.publish('news:sports', 'Score update!')

# Subscriber
pubsub = redis_client.pubsub()
pubsub.subscribe('news:sports')
for message in pubsub.listen():
    print(message['data'])

HeliosDB Equivalent (using PostgreSQL LISTEN/NOTIFY):

import select
import psycopg2

# Publisher
conn = psycopg2.connect(...)
cur = conn.cursor()
cur.execute("NOTIFY news_sports, 'Score update!'")
conn.commit()

# Subscriber
conn = psycopg2.connect(...)
conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
cur = conn.cursor()
cur.execute("LISTEN news_sports")
while True:
    if select.select([conn], [], [], 5) != ([], [], []):
        conn.poll()
        while conn.notifies:
            notify = conn.notifies.pop(0)
            print(f"Channel: {notify.channel}, Payload: {notify.payload}")
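One porting detail worth automating: Redis channel names like news:sports are not valid unquoted NOTIFY channel identifiers, which allow only letters, digits, and underscores and are truncated at PostgreSQL's 63-byte identifier limit. A small sanitizer sketch (the function name is illustrative):

```python
import re

# Map a Redis channel name to a valid unquoted NOTIFY channel
# identifier: letters/digits/underscores, not starting with a digit,
# at most 63 bytes. Illustrative helper.

def to_pg_channel(redis_channel):
    name = re.sub(r'[^a-zA-Z0-9_]', '_', redis_channel).lower()
    if not name or name[0].isdigit():
        name = 'ch_' + name          # identifiers cannot start with a digit
    return name[:63]                 # PostgreSQL identifier length limit
```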

HeliosDB Advanced Webhooks (for external integrations):

-- Create webhook for real-time events
CREATE WEBHOOK score_updates
ON INSERT TO game_scores
EXECUTE 'https://api.example.com/webhooks/scores'
WITH (
    headers = '{"Authorization": "Bearer token123"}',
    format = 'json',
    batch_size = 100,
    retry_count = 3
);

7.2 Lua Scripting Alternatives

Redis Lua script:

-- Rate limiter
local key = KEYS[1]
local limit = tonumber(ARGV[1])
local current = tonumber(redis.call('GET', key) or 0)
if current < limit then
  redis.call('INCR', key)
  return 1
else
  return 0
end

HeliosDB SQL function (recommended):

CREATE OR REPLACE FUNCTION rate_limit(
    p_key VARCHAR,
    p_limit INT,
    p_window_seconds INT
) RETURNS BOOLEAN AS $$
DECLARE
    v_count INT;
BEGIN
    -- Atomic rate limit check and increment
    INSERT INTO rate_limits (key, count, window_start)
    VALUES (p_key, 1, NOW())
    ON CONFLICT (key) DO UPDATE
    SET count = CASE
            WHEN rate_limits.window_start < NOW() - (p_window_seconds || ' seconds')::INTERVAL
            THEN 1
            ELSE rate_limits.count + 1
        END,
        window_start = CASE
            WHEN rate_limits.window_start < NOW() - (p_window_seconds || ' seconds')::INTERVAL
            THEN NOW()
            ELSE rate_limits.window_start
        END
    RETURNING count INTO v_count;
    RETURN v_count <= p_limit;
END;
$$ LANGUAGE plpgsql;

-- Usage
SELECT rate_limit('user:123:api', 100, 60); -- 100 requests per 60 seconds
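Before wiring the policy to the database, the fixed-window behavior of rate_limit() can be checked in-process. A pure-Python sketch mirroring the same reset-or-increment logic (the class name is illustrative, not a HeliosDB client API):

```python
import time

# In-process mirror of the SQL function's fixed-window policy:
# reset the window when it has elapsed, otherwise increment, and
# allow the call while the count is within the limit.

class FixedWindowLimiter:
    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self._state = {}                 # key -> (window_start, count)

    def allow(self, key, now=None):
        now = time.monotonic() if now is None else now
        start, count = self._state.get(key, (now, 0))
        if now - start >= self.window:   # window elapsed: start a new one
            start, count = now, 0
        count += 1
        self._state[key] = (start, count)
        return count <= self.limit
```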

HeliosDB WASM stored procedure (for complex logic):

-- Register WASM procedure
CREATE PROCEDURE rate_limiter(key TEXT, limit_count INT)
LANGUAGE wasm
AS 'path/to/rate_limiter.wasm';
-- Call WASM procedure
CALL rate_limiter('user:123', 100);

7.3 Transactions (MULTI/EXEC)

Redis transaction:

pipe = redis_client.pipeline()
pipe.incr('counter')
pipe.hset('user:1', 'last_action', 'purchase')
pipe.lpush('audit:log', 'user:1 made purchase')
results = pipe.execute()

HeliosDB transaction:

import psycopg2

conn = psycopg2.connect(...)
try:
    with conn.cursor() as cur:
        cur.execute("UPDATE counters SET value = value + 1 WHERE name = 'counter'")
        cur.execute("UPDATE users SET data = data || %s WHERE id = 1",
                    ('{"last_action": "purchase"}',))
        cur.execute("INSERT INTO audit_log (entity, action) VALUES ('user:1', 'purchase')")
    conn.commit()
except Exception:
    conn.rollback()
    raise

7.4 Clustering

| Redis Cluster | HeliosDB Equivalent |
|---|---|
| Hash slots | Partition keys |
| CLUSTER NODES | System tables |
| Replica sets | Multi-master CRDT |
| Resharding | Online rebalancing |

HeliosDB sharding configuration:

-- Create sharded table
CREATE TABLE distributed_cache (
    key VARCHAR(255) PRIMARY KEY,
    value TEXT,
    shard_key VARCHAR(255) GENERATED ALWAYS AS (substring(key, 1, 10)) STORED
) PARTITION BY HASH (shard_key);

-- Create partitions
CREATE TABLE distributed_cache_0 PARTITION OF distributed_cache
FOR VALUES WITH (MODULUS 4, REMAINDER 0);
CREATE TABLE distributed_cache_1 PARTITION OF distributed_cache
FOR VALUES WITH (MODULUS 4, REMAINDER 1);
-- ... etc

8. Caching Layer Considerations

8.1 HeliosDB 4-Tier Cache Architecture

HeliosDB provides a built-in intelligent caching system:

Tier 1: L1 CPU Cache (per-connection)
- Query result caching
- Prepared statement caching
- ~1ms latency
Tier 2: L2 Shared Memory Cache
- Cross-connection sharing
- Hot data prefetching
- ~5ms latency
Tier 3: Distributed Cache
- Cross-node caching
- Consistent hashing
- ~10-50ms latency
Tier 4: Edge Cache
- Geographic distribution
- CDN integration
- Variable latency

8.2 Enabling Intelligent Caching

-- Enable on specific tables
ALTER TABLE hot_data ENABLE INTELLIGENT_CACHING;

-- Configure cache policy
ALTER TABLE sessions SET (
    cache_policy = 'write_through',
    cache_ttl = 3600,
    cache_priority = 'high'
);

-- Enable query result caching
SET heliosdb.query_cache = 'on';
SET heliosdb.query_cache_size = '1GB';

8.3 Cache Invalidation Strategies

-- Automatic invalidation on write
-- (Default behavior - no configuration needed)
-- Manual invalidation
SELECT heliosdb_cache_invalidate('sessions', 'session_id', 'abc123');
-- Pattern-based invalidation
SELECT heliosdb_cache_invalidate_pattern('cache:user:*');
-- Time-based expiration (automatic)
-- Data with expires_at column is automatically evicted

8.4 When to Use Redis Protocol vs SQL

| Use Case | Recommended Protocol |
|---|---|
| Simple key-value cache | Redis (drop-in replacement) |
| Session management | Redis or SQL |
| Leaderboards | SQL (better analytics) |
| Job queues | SQL (better reliability) |
| Complex queries on cached data | SQL |
| Cross-entity joins | SQL |
| Real-time analytics | SQL |
| Pub/Sub messaging | Redis |

9. Performance Comparison

9.1 Benchmark Results

| Operation | Redis | HeliosDB (Redis) | HeliosDB (SQL) | Notes |
|---|---|---|---|---|
| GET (cached) | 0.1ms | 0.15ms | 0.3ms | Memory access |
| SET | 0.1ms | 0.2ms | 0.5ms | Write-ahead log |
| INCR | 0.1ms | 0.15ms | 0.3ms | Atomic update |
| HGETALL | 0.2ms | 0.25ms | 0.4ms | Hash retrieval |
| ZADD | 0.15ms | 0.2ms | 0.4ms | Sorted insert |
| ZRANGE (top 10) | 0.1ms | 0.15ms | 0.3ms | Range query |
| Pipeline (100 ops) | 2ms | 2.5ms | 5ms | Batch operations |

9.2 Throughput Comparison

Single Node (16 cores, 64GB RAM):
+---------------------+----------+------------------+
| Workload            | Redis    | HeliosDB         |
+---------------------+----------+------------------+
| Read-heavy (95/5)   | 500K/sec | 450K/sec         |
| Write-heavy (50/50) | 200K/sec | 180K/sec         |
| Mixed analytics     | N/A      | 100K/sec + SQL   |
+---------------------+----------+------------------+

Distributed (5 nodes):
+---------------------+----------+------------------+
| Workload            | Redis    | HeliosDB         |
+---------------------+----------+------------------+
| Read-heavy          | 2M/sec   | 2M/sec           |
| Write-heavy         | 800K/sec | 750K/sec         |
| Cross-shard query   | Limited  | Full SQL support |
+---------------------+----------+------------------+

9.3 Optimization Tips

Connection pooling:

# Redis protocol
import redis
pool = redis.ConnectionPool(
    host='heliosdb-server',
    port=6379,
    max_connections=100
)

# SQL protocol
import psycopg2.pool
pool = psycopg2.pool.ThreadedConnectionPool(
    minconn=10,
    maxconn=100,
    host='heliosdb-server',
    port=5432
)

Pipelining for batch operations:

# Redis protocol pipelining
pipe = client.pipeline(transaction=False)
for i in range(1000):
    pipe.set(f'key:{i}', f'value:{i}')
pipe.execute()

# SQL protocol batch insert
cur.executemany(
    "INSERT INTO kv (key, value) VALUES (%s, %s)",
    [(f'key:{i}', f'value:{i}') for i in range(1000)]
)
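For very large imports, building one giant parameter list holds everything in memory and runs as a single huge transaction. A small chunking helper lets each batch go through executemany() and commit independently (the helper name is illustrative):

```python
from itertools import islice

# Split an iterable of rows into fixed-size batches so each batch can
# be passed to executemany() and committed on its own. Pure helper.

def chunked(rows, size=1000):
    it = iter(rows)
    while True:
        batch = list(islice(it, size))
        if not batch:
            return
        yield batch

# Usage sketch (assuming a DB-API connection `conn`):
# with conn.cursor() as cur:
#     for batch in chunked(all_rows, 1000):
#         cur.executemany("INSERT INTO kv (key, value) VALUES (%s, %s)", batch)
#         conn.commit()  # bound the transaction size per batch
```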

10. Common Issues and Troubleshooting

10.1 Connection Issues

Problem: Connection refused on port 6379

# Check if Redis protocol is enabled
heliosdb-cli config get redis.enabled
# Enable Redis protocol
heliosdb-cli config set redis.enabled true
heliosdb-cli config set redis.port 6379
# Restart to apply
systemctl restart heliosdb

Problem: Authentication failed

# Ensure correct credentials
client = redis.Redis(
    host='heliosdb-server',
    port=6379,
    password='your_password',  # Check this matches
    username='default'         # Or your ACL username
)

10.2 Data Type Mismatches

Problem: Binary data corruption

# Always use decode_responses=False for binary data
client = redis.Redis(
    host='heliosdb-server',
    port=6379,
    decode_responses=False  # Keep as bytes
)

# Or handle encoding explicitly
value = client.get('binary_key')
text = value.decode('utf-8', errors='replace')

Problem: Integer overflow

-- Use BIGINT for large counters
CREATE TABLE counters (
    name VARCHAR(255) PRIMARY KEY,
    value BIGINT DEFAULT 0  -- Not INTEGER
);

10.3 TTL and Expiration

Problem: Keys not expiring

-- Ensure background cleanup is running
SELECT heliosdb_start_ttl_cleanup();

-- Or set up scheduled cleanup
CREATE EXTENSION IF NOT EXISTS pg_cron;
SELECT cron.schedule('cleanup-expired', '* * * * *',
    'DELETE FROM cache_entries WHERE expires_at < NOW()');

Problem: TTL precision issues

-- Use TIMESTAMP WITH TIME ZONE for consistent TTL
ALTER TABLE cache_entries
ALTER COLUMN expires_at TYPE TIMESTAMP WITH TIME ZONE;

10.4 Performance Issues

Problem: Slow queries after migration

-- Analyze tables after bulk import
ANALYZE users;
ANALYZE cache_entries;
ANALYZE leaderboards;
-- Check for missing indexes
SELECT * FROM pg_stat_user_tables WHERE n_live_tup > 10000;
-- Create indexes for common access patterns
CREATE INDEX CONCURRENTLY idx_cache_key_prefix
ON cache_entries (substring(cache_key, 1, 20));

Problem: Memory usage higher than expected

-- Enable compression for large values
ALTER TABLE cache_entries SET (compression = 'lz4');
-- Monitor cache hit rates
SELECT * FROM heliosdb_cache_stats();
-- Tune cache size
SET heliosdb.cache_size = '4GB';

10.5 Cluster and Replication

Problem: Inconsistent reads after failover

-- Ensure read-your-writes consistency
SET heliosdb.consistency_level = 'session';
-- Or use synchronous replication
ALTER SYSTEM SET synchronous_commit = 'on';

10.6 Migration Rollback

If you need to rollback to Redis:

# 1. Export from HeliosDB
pg_dump -h heliosdb-server -t cache_entries -t users --data-only > rollback.sql
# 2. Convert to Redis commands (custom script)
python convert_sql_to_redis.py rollback.sql > redis_restore.txt
# 3. Import to Redis
cat redis_restore.txt | redis-cli --pipe
# 4. Update application connection strings
# Revert to original Redis configuration
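The convert_sql_to_redis.py step is left as a custom script above. One hedged sketch of its core: emit each row as a RESP-encoded command, the framing redis-cli --pipe consumes most reliably (the function name is illustrative):

```python
# Encode one Redis command as a RESP array of bulk strings:
# *<argc>\r\n then $<len>\r\n<payload>\r\n for each argument.

def to_resp(*args):
    """e.g. to_resp('SET', 'k', 'v') -> bytes for redis-cli --pipe."""
    parts = [b"*%d\r\n" % len(args)]
    for arg in args:
        data = arg if isinstance(arg, bytes) else str(arg).encode()
        parts.append(b"$%d\r\n%s\r\n" % (len(data), data))
    return b"".join(parts)
```

A restore file is then just the concatenation of `to_resp('SET', key, value)` calls (plus `to_resp('EXPIRE', key, ttl)` where a TTL survived), written in binary mode and piped into `redis-cli --pipe`.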

Appendix A: Migration Checklist

Pre-Migration

  • Complete Redis inventory (keys, types, memory)
  • Analyze access patterns and hot keys
  • Design HeliosDB schema
  • Set up HeliosDB environment
  • Test connectivity with both protocols

Migration Execution

  • Export Redis data (SCAN or RDB)
  • Create HeliosDB tables
  • Import and transform data
  • Validate data integrity (row counts, checksums)
  • Update application connection strings
  • Test application functionality

Post-Migration

  • Monitor performance metrics
  • Verify cache hit rates
  • Check TTL cleanup is working
  • Document any behavioral differences
  • Decommission Redis (after validation period)

Appendix B: Quick Reference Card

Connection Strings

# Redis protocol (drop-in)
redis://heliosdb-server:6379
# PostgreSQL protocol (full features)
postgresql://user:pass@heliosdb-server:5432/dbname

Common Operations

-- Enable caching on table
ALTER TABLE mytable ENABLE INTELLIGENT_CACHING;
-- Check cache stats
SELECT * FROM heliosdb_cache_stats();
-- Invalidate cache
SELECT heliosdb_cache_invalidate('table', 'key_column', 'key_value');
-- TTL cleanup
DELETE FROM mytable WHERE expires_at < NOW();


Document Version: 1.0 Last Updated: 2026-01-04 Author: HeliosDB Documentation Team