Redis to HeliosDB Migration Guide
Version: 1.0 Last Updated: 2026-01-04
Table of Contents
- Introduction
- Compatibility Overview
- Pre-Migration Assessment
- Conceptual Mapping
- Step-by-Step Migration
- Command Mapping
- Feature Mapping
- Caching Layer Considerations
- Performance Comparison
- Common Issues and Troubleshooting
1. Introduction
1.1 Why Migrate from Redis to HeliosDB?
Redis has long been the go-to solution for in-memory caching and key-value storage. However, organizations increasingly face challenges with:
- Data persistence complexity: Managing RDB snapshots and AOF files requires operational expertise
- Memory limitations: All data must fit in RAM, creating scaling challenges
- Dual-system architecture: Running Redis alongside a relational database increases complexity
- Limited query capabilities: Redis lacks SQL-style querying for analytics
HeliosDB solves these challenges by providing:
| Capability | Redis | HeliosDB |
|---|---|---|
| In-memory caching | Native | Native (4-tier cache) |
| Persistent storage | RDB/AOF | LSM-tree native |
| SQL queries | No | Full SQL + NL2SQL |
| Multi-protocol | Redis only | 9+ protocols |
| Transactions | MULTI/EXEC | Full ACID |
| Analytics | Limited | OLAP + OLTP |
| Vector search | No | Native embeddings |
1.2 Migration Benefits
Unified Architecture
- Single database for caching AND persistence
- Eliminate Redis-to-database synchronization code
- Reduce infrastructure complexity
Enhanced Capabilities
- Query cache data with SQL
- Join cached data with persistent tables
- Automatic cache invalidation via triggers
- Built-in TTL with timestamp precision
Operational Simplicity
- No separate Redis cluster management
- Unified backup and recovery
- Single monitoring stack
- Consistent security model
1.3 Migration Approaches
| Approach | Downtime | Complexity | Use Case |
|---|---|---|---|
| Direct cutover | Minutes | Low | Small datasets (<10GB) |
| Dual-write | Zero | Medium | Production applications |
| Gradual migration | Zero | Medium | Large datasets |
| Shadow mode | Zero | High | Mission-critical systems |
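For the dual-write approach, the cutover logic is usually a thin wrapper around both stores: writes go to both, reads prefer the new store and backfill it on a miss. A minimal sketch (the `DualWriteCache` class and the dict-backed stubs are illustrative, not a HeliosDB API; in practice the two clients would be a Redis client and a HeliosDB client):

```python
class DualWriteCache:
    """Write to both the old and new store; read from the new store,
    falling back to the old one and lazily backfilling on a miss."""

    def __init__(self, old_client, new_client):
        self.old = old_client
        self.new = new_client

    def set(self, key, value):
        # Writes go to both stores so either can serve reads during migration
        self.new.set(key, value)
        self.old.set(key, value)

    def get(self, key):
        # Prefer the new store; backfill it from the old store on a miss
        value = self.new.get(key)
        if value is None:
            value = self.old.get(key)
            if value is not None:
                self.new.set(key, value)  # lazy migration
        return value

# Demo with dict-backed stubs standing in for real clients
class DictClient:
    def __init__(self):
        self._d = {}
    def set(self, k, v):
        self._d[k] = v
    def get(self, k):
        return self._d.get(k)

old, new = DictClient(), DictClient()
old.set('user:1', 'alice')        # pre-existing data in the old store
cache = DualWriteCache(old, new)
print(cache.get('user:1'))        # read falls back to old store, then backfills
print(new.get('user:1'))          # new store now holds the key too
```

Once the new store serves all reads without falling back, the old store can be retired.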
2. Compatibility Overview
2.1 HeliosDB Redis Wire Protocol Support
HeliosDB implements the Redis RESP3 protocol with 90% command compatibility:
Overall Compatibility: 90%
Protocol Support:
- RESP3 Protocol: 100% (HELLO, push types, all data types)
- String Commands: 100% (GET, SET, INCR, APPEND, etc.)
- List Commands: 100% (LPUSH, RPOP, LRANGE, etc.)
- Set Commands: 100% (SADD, SMEMBERS, SINTER, etc.)
- Sorted Set Commands: 100% (ZADD, ZRANGE, ZINCRBY, etc.)
- Hash Commands: 100% (HSET, HGET, HGETALL, etc.)
- Pub/Sub: 100% (SUBSCRIBE, PUBLISH, PSUBSCRIBE)
- Streams: 100% (XADD, XREAD, XGROUP, etc.)
- Transactions: 100% (MULTI, EXEC, WATCH)
- Scripting: 80% (EVAL, EVALSHA - core Lua support)
2.2 Driver Compatibility
| Driver | Version | Status | Notes |
|---|---|---|---|
| redis-py (Python) | 4.5+ | Full | RESP3 native |
| node-redis (Node.js) | 4.x | Full | Connection pooling |
| Jedis (Java) | 4.x | Full | Standard usage |
| go-redis (Go) | 9.x | Full | Context support |
| StackExchange.Redis (C#) | 2.x | Full | Multiplexing |
| redis-rs (Rust) | 0.23+ | Full | Async support |
2.3 Connection Compatibility
```python
# Existing Redis connection
import redis
client = redis.Redis(host='redis-server', port=6379)

# HeliosDB connection - just change host/port
client = redis.Redis(host='heliosdb-server', port=6379, protocol=3)

# All existing code works unchanged
client.set('key', 'value')
client.get('key')  # b'value'
```
3. Pre-Migration Assessment
3.1 Redis Data Structure Inventory
Run the following analysis on your Redis instance:
```bash
# Count keys by type
redis-cli --scan | while read key; do
  type=$(redis-cli TYPE "$key")
  echo "$type"
done | sort | uniq -c | sort -rn

# Sample output:
#  45000 string
#  12000 hash
#   8500 list
#   5200 set
#   3800 zset
#    500 stream
```
Automated inventory script:
```python
import redis
from collections import defaultdict

def analyze_redis_inventory(host='localhost', port=6379):
    client = redis.Redis(host=host, port=port, decode_responses=True)
    inventory = defaultdict(lambda: {'count': 0, 'memory': 0, 'ttl_count': 0})

    cursor = 0
    while True:
        cursor, keys = client.scan(cursor=cursor, count=1000)
        for key in keys:
            key_type = client.type(key)
            memory = client.memory_usage(key) or 0
            ttl = client.ttl(key)

            inventory[key_type]['count'] += 1
            inventory[key_type]['memory'] += memory
            if ttl > 0:
                inventory[key_type]['ttl_count'] += 1
        if cursor == 0:
            break

    return dict(inventory)

# Run analysis
inventory = analyze_redis_inventory()
for dtype, stats in inventory.items():
    print(f"{dtype}: {stats['count']} keys, {stats['memory']/1024/1024:.2f} MB")
```
3.2 Memory Usage Analysis
```bash
# Overall memory stats
redis-cli INFO memory

# Key metrics to capture:
# - used_memory_human: Total memory used
# - used_memory_peak_human: Peak memory usage
# - used_memory_dataset: Data memory (excluding overhead)
# - maxmemory: Configured limit

# Identify large keys
redis-cli --bigkeys

# Memory analysis by key pattern
redis-cli --memkeys
```
Memory capacity planning:
| Redis Memory | HeliosDB Recommendation |
|---|---|
| < 10 GB | Standard instance |
| 10-50 GB | Enhanced caching tier |
| 50-200 GB | Distributed deployment |
| > 200 GB | Sharded cluster |
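The thresholds above can be encoded directly into a pre-migration assessment script. A small sketch (the function name and tier labels simply restate the table's rows; they are not a HeliosDB API):

```python
def heliosdb_deployment_tier(redis_memory_gb: float) -> str:
    """Map the current Redis memory footprint to the deployment tier
    suggested by the capacity-planning table above."""
    if redis_memory_gb < 10:
        return 'Standard instance'
    if redis_memory_gb <= 50:
        return 'Enhanced caching tier'
    if redis_memory_gb <= 200:
        return 'Distributed deployment'
    return 'Sharded cluster'

# Example: feed in used_memory from `redis-cli INFO memory` (in GB)
print(heliosdb_deployment_tier(4))    # Standard instance
print(heliosdb_deployment_tier(120))  # Distributed deployment
```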
3.3 Key Naming Pattern Analysis
```python
import re
from collections import defaultdict

def analyze_key_patterns(client, sample_size=10000):
    """Analyze key naming patterns for schema design."""
    patterns = defaultdict(int)

    cursor = 0
    count = 0
    while count < sample_size:
        cursor, keys = client.scan(cursor=cursor, count=1000)
        for key in keys:
            # Extract pattern (replace IDs with placeholders)
            pattern = re.sub(r':\d+', ':*', key)
            pattern = re.sub(r':[\w-]{32,}', ':*uuid*', pattern)
            patterns[pattern] += 1
            count += 1
        if cursor == 0:
            break

    return sorted(patterns.items(), key=lambda x: -x[1])

# Example output:
# [('user:*:profile', 15000),
#  ('session:*uuid*', 8500),
#  ('cache:api:*', 5200),
#  ('leaderboard:daily:*', 1200)]
```
3.4 Access Pattern Analysis
```python
import time
import redis

# Enable Redis keyspace notifications for access analysis:
# CONFIG SET notify-keyspace-events KEA

# Monitor command frequency
def capture_command_stats(duration_seconds=3600):
    """Capture command usage statistics over a time window."""
    client = redis.Redis()

    # Get command stats before
    stats_before = client.info('commandstats')

    time.sleep(duration_seconds)

    # Get command stats after
    stats_after = client.info('commandstats')

    # Calculate differences
    command_usage = {}
    for cmd, after_stats in stats_after.items():
        before_stats = stats_before.get(cmd, {'calls': 0})
        calls = after_stats['calls'] - before_stats['calls']
        if calls > 0:
            command_usage[cmd] = calls

    return sorted(command_usage.items(), key=lambda x: -x[1])
```
4. Conceptual Mapping
4.1 Keys to Table Rows
Redis key-value pairs can map to HeliosDB tables:
```
Redis:                          HeliosDB:
user:1001:name = "Alice"   ->   INSERT INTO users (id, name) VALUES (1001, 'Alice')
user:1001:email = "a@b.c"  ->   ... (additional column)
user:1001:score = 100      ->   ... (additional column)
```
Unified schema design:
```sql
-- Create table for user data (replaces user:* keys)
CREATE TABLE users (
    id BIGINT PRIMARY KEY,
    name VARCHAR(255),
    email VARCHAR(255),
    score INTEGER DEFAULT 0,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- Enable HeliosDB intelligent caching
ALTER TABLE users ENABLE INTELLIGENT_CACHING;
```
4.2 Strings Mapping
| Redis Pattern | HeliosDB Equivalent |
|---|---|
| `SET key value` | `INSERT INTO kv (key, value) VALUES (...)` |
| `GET key` | `SELECT value FROM kv WHERE key = ...` |
| `INCR counter` | `UPDATE counters SET value = value + 1 WHERE id = ...` |
| `SETEX key 3600 value` | Insert with TTL timestamp column |
HeliosDB schema for string data:
```sql
-- Generic key-value table with TTL support
CREATE TABLE kv_store (
    key VARCHAR(255) PRIMARY KEY,
    value TEXT,
    expires_at TIMESTAMP,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- Index for TTL cleanup
CREATE INDEX idx_kv_expires ON kv_store(expires_at)
WHERE expires_at IS NOT NULL;

-- Counter table with atomic increment support
CREATE TABLE counters (
    name VARCHAR(255) PRIMARY KEY,
    value BIGINT DEFAULT 0
);
```
4.3 Hashes Mapping
| Redis | HeliosDB |
|---|---|
| `HSET user:1 name Alice` | JSONB column or dedicated table |
| `HGET user:1 name` | `SELECT data->>'name' FROM users WHERE id = 1` |
| `HGETALL user:1` | `SELECT data FROM users WHERE id = 1` |
| `HINCRBY user:1 score 10` | JSONB increment function |
JSONB approach (recommended for flexible schemas):
```sql
-- Using JSONB for hash-like data
CREATE TABLE entities (
    id VARCHAR(255) PRIMARY KEY,
    data JSONB NOT NULL DEFAULT '{}',
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- Query examples:
-- HGET equivalent
SELECT data->>'name' FROM entities WHERE id = 'user:1';

-- HSET equivalent
UPDATE entities
SET data = data || '{"name": "Alice"}'::jsonb
WHERE id = 'user:1';

-- HINCRBY equivalent
UPDATE entities
SET data = jsonb_set(data, '{score}',
    ((COALESCE(data->>'score', '0')::int + 10)::text)::jsonb)
WHERE id = 'user:1';
```
4.4 Lists Mapping
| Redis | HeliosDB |
|---|---|
| `LPUSH queue:jobs job1` | Array prepend or queue table |
| `RPOP queue:jobs` | Array pop or dequeue operation |
| `LRANGE list 0 -1` | Array slice or SELECT with ORDER BY |
Queue implementation:
```sql
-- Queue table with FIFO support
CREATE TABLE job_queue (
    id BIGSERIAL PRIMARY KEY,
    queue_name VARCHAR(255) NOT NULL,
    payload JSONB NOT NULL,
    priority INTEGER DEFAULT 0,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    processed_at TIMESTAMP
);

-- Index for efficient dequeue
CREATE INDEX idx_queue_pending ON job_queue(queue_name, priority DESC, id)
WHERE processed_at IS NULL;

-- LPUSH equivalent (add to queue)
INSERT INTO job_queue (queue_name, payload)
VALUES ('jobs', '{"task": "process"}');

-- RPOP equivalent (dequeue oldest)
WITH next_job AS (
    SELECT id FROM job_queue
    WHERE queue_name = 'jobs' AND processed_at IS NULL
    ORDER BY id ASC
    LIMIT 1
    FOR UPDATE SKIP LOCKED
)
UPDATE job_queue SET processed_at = NOW()
WHERE id = (SELECT id FROM next_job)
RETURNING *;
```
4.5 Sets Mapping
| Redis | HeliosDB |
|---|---|
| `SADD users:online user1` | Insert into set table |
| `SMEMBERS users:online` | SELECT DISTINCT from set table |
| `SINTER set1 set2` | INTERSECT query |
Set implementation:
```sql
-- Set membership table
CREATE TABLE set_members (
    set_name VARCHAR(255) NOT NULL,
    member VARCHAR(255) NOT NULL,
    added_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    PRIMARY KEY (set_name, member)
);

-- SADD equivalent
INSERT INTO set_members (set_name, member)
VALUES ('users:online', 'user1')
ON CONFLICT DO NOTHING;

-- SMEMBERS equivalent
SELECT member FROM set_members WHERE set_name = 'users:online';

-- SINTER equivalent
SELECT member FROM set_members WHERE set_name = 'set1'
INTERSECT
SELECT member FROM set_members WHERE set_name = 'set2';

-- SUNION equivalent
SELECT DISTINCT member FROM set_members
WHERE set_name IN ('set1', 'set2');
```
4.6 Sorted Sets Mapping
| Redis | HeliosDB |
|---|---|
| `ZADD leaderboard 100 alice` | Insert with score column |
| `ZRANGE leaderboard 0 9` | SELECT with ORDER BY LIMIT |
| `ZINCRBY leaderboard 10 alice` | UPDATE with `score = score + value` |
Leaderboard implementation:
```sql
-- Sorted set (leaderboard) table
CREATE TABLE leaderboards (
    board_name VARCHAR(255) NOT NULL,
    member VARCHAR(255) NOT NULL,
    score DOUBLE PRECISION NOT NULL,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    PRIMARY KEY (board_name, member)
);

-- Index for efficient range queries
CREATE INDEX idx_leaderboard_score ON leaderboards(board_name, score DESC);

-- ZADD equivalent
INSERT INTO leaderboards (board_name, member, score)
VALUES ('monthly', 'alice', 100)
ON CONFLICT (board_name, member)
DO UPDATE SET score = EXCLUDED.score, updated_at = NOW();

-- ZREVRANGE equivalent (top 10 by score)
SELECT member, score FROM leaderboards
WHERE board_name = 'monthly'
ORDER BY score DESC
LIMIT 10;

-- ZINCRBY equivalent
UPDATE leaderboards
SET score = score + 10, updated_at = NOW()
WHERE board_name = 'monthly' AND member = 'alice';

-- ZRANK equivalent (rank by descending score)
SELECT rank FROM (
    SELECT member, RANK() OVER (ORDER BY score DESC) as rank
    FROM leaderboards WHERE board_name = 'monthly'
) ranked WHERE member = 'alice';
```
4.7 TTL Mapping
Redis TTL to HeliosDB timestamp columns:
```sql
-- Table with TTL support
CREATE TABLE cache_entries (
    key VARCHAR(255) PRIMARY KEY,
    value TEXT,
    expires_at TIMESTAMP,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- SETEX equivalent (set with expiration)
INSERT INTO cache_entries (key, value, expires_at)
VALUES ('session:abc', 'token123', NOW() + INTERVAL '1 hour')
ON CONFLICT (key) DO UPDATE
SET value = EXCLUDED.value, expires_at = EXCLUDED.expires_at;

-- GET with TTL check
SELECT value FROM cache_entries
WHERE key = 'session:abc'
AND (expires_at IS NULL OR expires_at > NOW());

-- TTL equivalent
SELECT EXTRACT(EPOCH FROM (expires_at - NOW()))::INTEGER as ttl
FROM cache_entries WHERE key = 'session:abc';

-- Automatic cleanup (scheduled job or HeliosDB background process)
DELETE FROM cache_entries WHERE expires_at < NOW();
```
5. Step-by-Step Migration
5.1 Phase 1: Export Redis Data
Option A: Export via SCAN with type-specific reads
```python
import redis
import json

def export_redis_to_json(redis_host, redis_port, output_file):
    """Export Redis data to JSON format."""
    client = redis.Redis(host=redis_host, port=redis_port, decode_responses=False)

    export_data = []
    cursor = 0

    while True:
        cursor, keys = client.scan(cursor=cursor, count=1000)
        for key in keys:
            key_str = key.decode('utf-8')
            key_type = client.type(key).decode('utf-8')
            ttl = client.ttl(key)

            entry = {
                'key': key_str,
                'type': key_type,
                'ttl': ttl if ttl > 0 else None
            }

            # Extract value based on type
            if key_type == 'string':
                entry['value'] = client.get(key).decode('utf-8', errors='replace')
            elif key_type == 'hash':
                entry['value'] = {k.decode(): v.decode() for k, v in client.hgetall(key).items()}
            elif key_type == 'list':
                entry['value'] = [v.decode() for v in client.lrange(key, 0, -1)]
            elif key_type == 'set':
                entry['value'] = [v.decode() for v in client.smembers(key)]
            elif key_type == 'zset':
                entry['value'] = [(m.decode(), s) for m, s in client.zrange(key, 0, -1, withscores=True)]

            export_data.append(entry)
        if cursor == 0:
            break

    with open(output_file, 'w') as f:
        json.dump(export_data, f, indent=2)

    return len(export_data)

# Export
count = export_redis_to_json('localhost', 6379, 'redis_export.json')
print(f"Exported {count} keys")
```
Option B: Using RDB file
```bash
# Trigger RDB snapshot
redis-cli BGSAVE

# Wait for completion
redis-cli LASTSAVE

# Copy RDB file
cp /var/lib/redis/dump.rdb ./redis_backup.rdb

# Parse RDB using rdb-tools
rdb --command json ./redis_backup.rdb > redis_data.json
```
5.2 Phase 2: Schema Design in HeliosDB
Based on your key pattern analysis, design appropriate schemas:
```sql
-- Connect to HeliosDB (PostgreSQL protocol)
-- psql -h heliosdb-server -p 5432 -U admin -d mydb

-- 1. User data (from user:* hash keys)
CREATE TABLE users (
    id BIGINT PRIMARY KEY,
    data JSONB NOT NULL DEFAULT '{}',
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- 2. Sessions (from session:* string keys with TTL)
CREATE TABLE sessions (
    session_id VARCHAR(64) PRIMARY KEY,
    user_id BIGINT REFERENCES users(id),
    token TEXT NOT NULL,
    expires_at TIMESTAMP NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE INDEX idx_sessions_expires ON sessions(expires_at);
CREATE INDEX idx_sessions_user ON sessions(user_id);

-- 3. Cache entries (from cache:* keys)
CREATE TABLE cache (
    cache_key VARCHAR(512) PRIMARY KEY,
    value TEXT,
    content_type VARCHAR(100) DEFAULT 'text/plain',
    expires_at TIMESTAMP,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- 4. Job queues (from queue:* list keys)
CREATE TABLE job_queues (
    id BIGSERIAL PRIMARY KEY,
    queue_name VARCHAR(255) NOT NULL,
    payload JSONB NOT NULL,
    status VARCHAR(20) DEFAULT 'pending',
    priority INTEGER DEFAULT 0,
    attempts INTEGER DEFAULT 0,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    started_at TIMESTAMP,
    completed_at TIMESTAMP
);

CREATE INDEX idx_jobs_pending ON job_queues(queue_name, priority DESC, id)
WHERE status = 'pending';

-- 5. Leaderboards (from leaderboard:* zset keys)
CREATE TABLE leaderboards (
    board_name VARCHAR(255) NOT NULL,
    member_id VARCHAR(255) NOT NULL,
    score DOUBLE PRECISION NOT NULL,
    metadata JSONB DEFAULT '{}',
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    PRIMARY KEY (board_name, member_id)
);

CREATE INDEX idx_leaderboard_ranking ON leaderboards(board_name, score DESC);

-- Enable HeliosDB caching on hot tables
ALTER TABLE sessions ENABLE INTELLIGENT_CACHING;
ALTER TABLE cache ENABLE INTELLIGENT_CACHING;
ALTER TABLE leaderboards ENABLE INTELLIGENT_CACHING;
```
5.3 Phase 3: Data Transformation and Import
```python
import json
from datetime import datetime, timedelta

import psycopg2

def import_to_heliosdb(json_file, heliosdb_config):
    """Import Redis export to HeliosDB."""
    conn = psycopg2.connect(**heliosdb_config)
    cur = conn.cursor()

    with open(json_file) as f:
        data = json.load(f)

    stats = {'users': 0, 'sessions': 0, 'cache': 0, 'queues': 0, 'leaderboards': 0}

    for entry in data:
        key = entry['key']
        value = entry['value']
        ttl = entry.get('ttl')
        expires_at = datetime.now() + timedelta(seconds=ttl) if ttl else None

        try:
            # Route by key pattern
            if key.startswith('user:'):
                # Extract user ID and import
                parts = key.split(':')
                if len(parts) >= 2:
                    user_id = int(parts[1])
                    if entry['type'] == 'hash':
                        cur.execute("""
                            INSERT INTO users (id, data) VALUES (%s, %s)
                            ON CONFLICT (id) DO UPDATE SET data = users.data || %s
                        """, (user_id, json.dumps(value), json.dumps(value)))
                        stats['users'] += 1

            elif key.startswith('session:'):
                session_id = key.split(':')[1]
                cur.execute("""
                    INSERT INTO sessions (session_id, token, expires_at)
                    VALUES (%s, %s, %s)
                    ON CONFLICT (session_id) DO UPDATE
                    SET token = EXCLUDED.token, expires_at = EXCLUDED.expires_at
                """, (session_id, value, expires_at))
                stats['sessions'] += 1

            elif key.startswith('cache:'):
                cur.execute("""
                    INSERT INTO cache (cache_key, value, expires_at)
                    VALUES (%s, %s, %s)
                    ON CONFLICT (cache_key) DO UPDATE
                    SET value = EXCLUDED.value, expires_at = EXCLUDED.expires_at
                """, (key,
                      json.dumps(value) if isinstance(value, (dict, list)) else value,
                      expires_at))
                stats['cache'] += 1

            elif key.startswith('queue:'):
                queue_name = key.split(':')[1]
                if isinstance(value, list):
                    for item in value:
                        cur.execute("""
                            INSERT INTO job_queues (queue_name, payload)
                            VALUES (%s, %s)
                        """, (queue_name, json.dumps({'data': item})))
                    stats['queues'] += 1

            elif key.startswith('leaderboard:'):
                board_name = ':'.join(key.split(':')[1:])
                if entry['type'] == 'zset':
                    for member, score in value:
                        cur.execute("""
                            INSERT INTO leaderboards (board_name, member_id, score)
                            VALUES (%s, %s, %s)
                            ON CONFLICT (board_name, member_id)
                            DO UPDATE SET score = EXCLUDED.score
                        """, (board_name, member, score))
                    stats['leaderboards'] += 1

            conn.commit()

        except Exception as e:
            conn.rollback()
            print(f"Error importing {key}: {e}")

    cur.close()
    conn.close()
    return stats

# Run import
config = {'host': 'heliosdb-server', 'port': 5432,
          'user': 'admin', 'password': 'secret', 'dbname': 'mydb'}
stats = import_to_heliosdb('redis_export.json', config)
print(f"Import complete: {stats}")
```
5.4 Phase 4: Application Connection Changes
Before (Redis):
```python
import redis

class CacheService:
    def __init__(self):
        self.redis = redis.Redis(host='redis-server', port=6379)

    def get_user(self, user_id):
        data = self.redis.hgetall(f'user:{user_id}')
        return {k.decode(): v.decode() for k, v in data.items()}

    def set_session(self, session_id, token, ttl=3600):
        self.redis.setex(f'session:{session_id}', ttl, token)
```
After (HeliosDB - Option 1: Redis Protocol):
```python
import redis

class CacheService:
    def __init__(self):
        # Just change host - existing code works!
        self.redis = redis.Redis(
            host='heliosdb-server',
            port=6379,
            protocol=3  # RESP3
        )

    def get_user(self, user_id):
        # Unchanged - Redis protocol compatible
        data = self.redis.hgetall(f'user:{user_id}')
        return {k.decode(): v.decode() for k, v in data.items()}

    def set_session(self, session_id, token, ttl=3600):
        # Unchanged
        self.redis.setex(f'session:{session_id}', ttl, token)
```
After (HeliosDB - Option 2: SQL Protocol for enhanced features):
```python
import psycopg2
import psycopg2.pool
from contextlib import contextmanager

class EnhancedCacheService:
    def __init__(self):
        self.pool = psycopg2.pool.ThreadedConnectionPool(
            minconn=5, maxconn=20,
            host='heliosdb-server', port=5432,
            user='admin', password='secret', dbname='mydb'
        )

    @contextmanager
    def get_conn(self):
        conn = self.pool.getconn()
        try:
            yield conn
        finally:
            self.pool.putconn(conn)

    def get_user(self, user_id):
        with self.get_conn() as conn:
            with conn.cursor() as cur:
                cur.execute("SELECT data FROM users WHERE id = %s", (user_id,))
                row = cur.fetchone()
                return row[0] if row else None

    def set_session(self, session_id, user_id, token, ttl=3600):
        with self.get_conn() as conn:
            with conn.cursor() as cur:
                cur.execute("""
                    INSERT INTO sessions (session_id, user_id, token, expires_at)
                    VALUES (%s, %s, %s, NOW() + %s * INTERVAL '1 second')
                    ON CONFLICT (session_id) DO UPDATE
                    SET token = EXCLUDED.token, expires_at = EXCLUDED.expires_at
                """, (session_id, user_id, token, ttl))
            conn.commit()

    # New capability: SQL-powered queries not possible in Redis
    def get_active_sessions_by_user(self, user_id):
        with self.get_conn() as conn:
            with conn.cursor() as cur:
                cur.execute("""
                    SELECT session_id, created_at, expires_at
                    FROM sessions
                    WHERE user_id = %s AND expires_at > NOW()
                    ORDER BY created_at DESC
                """, (user_id,))
                return cur.fetchall()
```
6. Command Mapping
6.1 String Commands
| Redis Command | HeliosDB SQL Equivalent | Notes |
|---|---|---|
| `SET key value` | `INSERT INTO kv (key, value) VALUES ($1, $2) ON CONFLICT (key) DO UPDATE SET value = $2` | Upsert |
| `GET key` | `SELECT value FROM kv WHERE key = $1` | Direct lookup |
| `SETEX key ttl value` | `INSERT INTO kv (key, value, expires_at) VALUES ($1, $2, NOW() + $3 * INTERVAL '1 second')` | With TTL |
| `SETNX key value` | `INSERT INTO kv (key, value) VALUES ($1, $2) ON CONFLICT DO NOTHING` | Only if not exists |
| `INCR key` | `UPDATE counters SET value = value + 1 WHERE key = $1 RETURNING value` | Atomic increment |
| `INCRBY key n` | `UPDATE counters SET value = value + $2 WHERE key = $1 RETURNING value` | Increment by N |
| `MSET k1 v1 k2 v2` | `INSERT INTO kv (key, value) VALUES ($1, $2), ($3, $4) ON CONFLICT (key) DO UPDATE SET value = EXCLUDED.value` | Batch insert |
| `MGET k1 k2` | `SELECT key, value FROM kv WHERE key IN ($1, $2)` | Batch get |
| `DEL key` | `DELETE FROM kv WHERE key = $1` | Delete |
| `EXISTS key` | `SELECT 1 FROM kv WHERE key = $1 LIMIT 1` | Check existence |
| `TTL key` | `SELECT EXTRACT(EPOCH FROM (expires_at - NOW()))::INT FROM kv WHERE key = $1` | Get TTL |
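These mappings can be sanity-checked without a HeliosDB instance, since the upsert-based SET/GET/INCR pattern works on any SQL engine. A sketch using SQLite as a stand-in (note that SQLite uses `?` placeholders and the `excluded` pseudo-table, where the table above uses PostgreSQL-style `$n` parameters; the wrapper functions are illustrative):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE kv (key TEXT PRIMARY KEY, value TEXT)")
conn.execute("CREATE TABLE counters (key TEXT PRIMARY KEY, value INTEGER DEFAULT 0)")

def sql_set(key, value):
    # SET key value -> upsert
    conn.execute("INSERT INTO kv (key, value) VALUES (?, ?) "
                 "ON CONFLICT(key) DO UPDATE SET value = excluded.value",
                 (key, value))

def sql_get(key):
    # GET key -> direct lookup
    row = conn.execute("SELECT value FROM kv WHERE key = ?", (key,)).fetchone()
    return row[0] if row else None

def sql_incr(key):
    # INCR key -> atomic increment via upsert
    conn.execute("INSERT INTO counters (key, value) VALUES (?, 1) "
                 "ON CONFLICT(key) DO UPDATE SET value = value + 1",
                 (key,))
    return conn.execute("SELECT value FROM counters WHERE key = ?", (key,)).fetchone()[0]

sql_set('greeting', 'hello')
print(sql_get('greeting'))  # hello
print(sql_incr('hits'))     # 1
print(sql_incr('hits'))     # 2
```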
6.2 Hash Commands to JSONB
| Redis Command | HeliosDB JSONB Equivalent |
|---|---|
| `HSET key field value` | `UPDATE entities SET data = data \|\| jsonb_build_object($2, $3) WHERE id = $1` |
| `HGET key field` | `SELECT data->>$2 FROM entities WHERE id = $1` |
| `HMSET key f1 v1 f2 v2` | `UPDATE entities SET data = data \|\| $2::jsonb WHERE id = $1` |
| `HGETALL key` | `SELECT data FROM entities WHERE id = $1` |
| `HDEL key field` | `UPDATE entities SET data = data - $2 WHERE id = $1` |
| `HEXISTS key field` | `SELECT data ? $2 FROM entities WHERE id = $1` |
| `HKEYS key` | `SELECT jsonb_object_keys(data) FROM entities WHERE id = $1` |
| `HVALS key` | `SELECT value FROM entities, jsonb_each_text(data) WHERE id = $1` |
| `HINCRBY key field n` | `UPDATE entities SET data = jsonb_set(data, ARRAY[$2], ((data->>$2)::int + $3)::text::jsonb) WHERE id = $1` |
6.3 List Commands to Array/Queue
| Redis Command | HeliosDB Equivalent |
|---|---|
| `LPUSH key value` | `INSERT INTO queue (queue_name, payload) VALUES ($1, $2)` |
| `RPUSH key value` | Same as LPUSH (order determined by ID) |
| `LPOP key` | `DELETE FROM queue WHERE id = (SELECT id FROM queue WHERE queue_name = $1 ORDER BY id DESC LIMIT 1 FOR UPDATE SKIP LOCKED) RETURNING payload` |
| `RPOP key` | `DELETE FROM queue WHERE id = (SELECT id FROM queue WHERE queue_name = $1 ORDER BY id ASC LIMIT 1 FOR UPDATE SKIP LOCKED) RETURNING payload` |
| `LRANGE key 0 -1` | `SELECT payload FROM queue WHERE queue_name = $1 ORDER BY id` |
| `LLEN key` | `SELECT COUNT(*) FROM queue WHERE queue_name = $1` |
| `LINDEX key n` | `SELECT payload FROM queue WHERE queue_name = $1 ORDER BY id OFFSET $2 LIMIT 1` |
6.4 Sorted Set Commands to ORDER BY
| Redis Command | HeliosDB SQL Equivalent |
|---|---|
| `ZADD lb score member` | `INSERT INTO leaderboards (board_name, member_id, score) VALUES ($1, $3, $2) ON CONFLICT (board_name, member_id) DO UPDATE SET score = EXCLUDED.score` |
| `ZSCORE lb member` | `SELECT score FROM leaderboards WHERE board_name = $1 AND member_id = $2` |
| `ZRANK lb member` | `SELECT rank-1 FROM (SELECT member_id, RANK() OVER (ORDER BY score) as rank FROM leaderboards WHERE board_name = $1) r WHERE member_id = $2` |
| `ZRANGE lb 0 9` | `SELECT member_id, score FROM leaderboards WHERE board_name = $1 ORDER BY score ASC LIMIT 10` |
| `ZREVRANGE lb 0 9` | `SELECT member_id, score FROM leaderboards WHERE board_name = $1 ORDER BY score DESC LIMIT 10` |
| `ZINCRBY lb n member` | `UPDATE leaderboards SET score = score + $2 WHERE board_name = $1 AND member_id = $3 RETURNING score` |
| `ZCOUNT lb min max` | `SELECT COUNT(*) FROM leaderboards WHERE board_name = $1 AND score BETWEEN $2 AND $3` |
| `ZCARD lb` | `SELECT COUNT(*) FROM leaderboards WHERE board_name = $1` |
7. Feature Mapping
7.1 Pub/Sub to HeliosDB Real-Time Events
Redis Pub/Sub:
```python
# Publisher
redis_client.publish('news:sports', 'Score update!')

# Subscriber
pubsub = redis_client.pubsub()
pubsub.subscribe('news:sports')
for message in pubsub.listen():
    print(message['data'])
```
HeliosDB Equivalent (using PostgreSQL LISTEN/NOTIFY):
```python
import select
import psycopg2

# Publisher
conn = psycopg2.connect(...)
cur = conn.cursor()
cur.execute("NOTIFY news_sports, 'Score update!'")
conn.commit()

# Subscriber
conn = psycopg2.connect(...)
conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
cur = conn.cursor()
cur.execute("LISTEN news_sports")

while True:
    if select.select([conn], [], [], 5) != ([], [], []):
        conn.poll()
        while conn.notifies:
            notify = conn.notifies.pop(0)
            print(f"Channel: {notify.channel}, Payload: {notify.payload}")
```
HeliosDB Advanced Webhooks (for external integrations):
```sql
-- Create webhook for real-time events
CREATE WEBHOOK score_updates
ON INSERT TO game_scores
EXECUTE 'https://api.example.com/webhooks/scores'
WITH (
    headers = '{"Authorization": "Bearer token123"}',
    format = 'json',
    batch_size = 100,
    retry_count = 3
);
```
7.2 Lua Scripting Alternatives
Redis Lua script:
```lua
-- Rate limiter
local key = KEYS[1]
local limit = tonumber(ARGV[1])
local current = tonumber(redis.call('GET', key) or 0)
if current < limit then
    redis.call('INCR', key)
    return 1
else
    return 0
end
```
HeliosDB SQL function (recommended):
```sql
CREATE OR REPLACE FUNCTION rate_limit(
    p_key VARCHAR,
    p_limit INT,
    p_window_seconds INT
) RETURNS BOOLEAN AS $$
DECLARE
    v_count INT;
BEGIN
    -- Atomic rate limit check and increment
    INSERT INTO rate_limits (key, count, window_start)
    VALUES (p_key, 1, NOW())
    ON CONFLICT (key) DO UPDATE
    SET count = CASE
            WHEN rate_limits.window_start < NOW() - (p_window_seconds || ' seconds')::INTERVAL
            THEN 1
            ELSE rate_limits.count + 1
        END,
        window_start = CASE
            WHEN rate_limits.window_start < NOW() - (p_window_seconds || ' seconds')::INTERVAL
            THEN NOW()
            ELSE rate_limits.window_start
        END
    RETURNING count INTO v_count;

    RETURN v_count <= p_limit;
END;
$$ LANGUAGE plpgsql;

-- Usage
SELECT rate_limit('user:123:api', 100, 60);  -- 100 requests per 60 seconds
```
HeliosDB WASM stored procedure (for complex logic):
```sql
-- Register WASM procedure
CREATE PROCEDURE rate_limiter(key TEXT, limit_count INT)
LANGUAGE wasm
AS 'path/to/rate_limiter.wasm';

-- Call WASM procedure
CALL rate_limiter('user:123', 100);
```
7.3 Transactions (MULTI/EXEC)
Redis transaction:
```python
pipe = redis_client.pipeline()
pipe.incr('counter')
pipe.hset('user:1', 'last_action', 'purchase')
pipe.lpush('audit:log', 'user:1 made purchase')
results = pipe.execute()
```
HeliosDB transaction:
```python
conn = psycopg2.connect(...)
try:
    with conn.cursor() as cur:
        cur.execute("UPDATE counters SET value = value + 1 WHERE name = 'counter'")
        cur.execute("UPDATE users SET data = data || %s WHERE id = 1",
                    ('{"last_action": "purchase"}',))
        cur.execute("INSERT INTO audit_log (entity, action) VALUES ('user:1', 'purchase')")
    conn.commit()
except Exception as e:
    conn.rollback()
    raise
```
7.4 Clustering
| Redis Cluster | HeliosDB Equivalent |
|---|---|
| Hash slots | Partition keys |
| CLUSTER NODES | System tables |
| Replica sets | Multi-master CRDT |
| Resharding | Online rebalancing |
HeliosDB sharding configuration:
```sql
-- Create sharded table
CREATE TABLE distributed_cache (
    key VARCHAR(255) PRIMARY KEY,
    value TEXT,
    shard_key VARCHAR(255) GENERATED ALWAYS AS (substring(key, 1, 10)) STORED
) PARTITION BY HASH (shard_key);

-- Create partitions
CREATE TABLE distributed_cache_0 PARTITION OF distributed_cache
    FOR VALUES WITH (MODULUS 4, REMAINDER 0);
CREATE TABLE distributed_cache_1 PARTITION OF distributed_cache
    FOR VALUES WITH (MODULUS 4, REMAINDER 1);
-- ... etc
```
8. Caching Layer Considerations
8.1 HeliosDB 4-Tier Cache Architecture
HeliosDB provides a built-in intelligent caching system:
Tier 1: L1 CPU Cache (per-connection)
- Query result caching
- Prepared statement caching
- ~1ms latency

Tier 2: L2 Shared Memory Cache
- Cross-connection sharing
- Hot data prefetching
- ~5ms latency

Tier 3: Distributed Cache
- Cross-node caching
- Consistent hashing
- ~10-50ms latency

Tier 4: Edge Cache
- Geographic distribution
- CDN integration
- Variable latency

8.2 Enabling Intelligent Caching
```sql
-- Enable on specific tables
ALTER TABLE hot_data ENABLE INTELLIGENT_CACHING;

-- Configure cache policy
ALTER TABLE sessions SET (
    cache_policy = 'write_through',
    cache_ttl = 3600,
    cache_priority = 'high'
);

-- Enable query result caching
SET heliosdb.query_cache = 'on';
SET heliosdb.query_cache_size = '1GB';
```
8.3 Cache Invalidation Strategies
```sql
-- Automatic invalidation on write
-- (Default behavior - no configuration needed)

-- Manual invalidation
SELECT heliosdb_cache_invalidate('sessions', 'session_id', 'abc123');

-- Pattern-based invalidation
SELECT heliosdb_cache_invalidate_pattern('cache:user:*');

-- Time-based expiration (automatic)
-- Data with expires_at column is automatically evicted
```
8.4 When to Use Redis Protocol vs SQL
| Use Case | Recommended Protocol |
|---|---|
| Simple key-value cache | Redis (drop-in replacement) |
| Session management | Redis or SQL |
| Leaderboards | SQL (better analytics) |
| Job queues | SQL (better reliability) |
| Complex queries on cached data | SQL |
| Cross-entity joins | SQL |
| Real-time analytics | SQL |
| Pub/Sub messaging | Redis |
9. Performance Comparison
9.1 Benchmark Results
| Operation | Redis | HeliosDB (Redis) | HeliosDB (SQL) | Notes |
|---|---|---|---|---|
| GET (cached) | 0.1ms | 0.15ms | 0.3ms | Memory access |
| SET | 0.1ms | 0.2ms | 0.5ms | Write-ahead log |
| INCR | 0.1ms | 0.15ms | 0.3ms | Atomic update |
| HGETALL | 0.2ms | 0.25ms | 0.4ms | Hash retrieval |
| ZADD | 0.15ms | 0.2ms | 0.4ms | Sorted insert |
| ZRANGE (top 10) | 0.1ms | 0.15ms | 0.3ms | Range query |
| Pipeline (100 ops) | 2ms | 2.5ms | 5ms | Batch operations |
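Numbers like these depend heavily on hardware, network, and workload, so it is worth reproducing the latency columns in your own environment. A small harness for that (pass any zero-argument callable, e.g. `lambda: client.get('key')`; the dict store below is only a stand-in so the sketch runs without a server):

```python
import time
import statistics

def measure_latency(op, iterations=1000):
    """Time a single-operation callable and report p50/p99 latency
    in milliseconds."""
    samples = []
    for _ in range(iterations):
        start = time.perf_counter()
        op()
        samples.append((time.perf_counter() - start) * 1000.0)
    samples.sort()
    return {
        'p50_ms': statistics.median(samples),
        'p99_ms': samples[int(len(samples) * 0.99) - 1],
    }

# Demo against an in-memory dict standing in for a client
store = {'key': 'value'}
result = measure_latency(lambda: store.get('key'))
print(f"p50={result['p50_ms']:.4f}ms p99={result['p99_ms']:.4f}ms")
```

Run the same harness against Redis, HeliosDB over the Redis protocol, and HeliosDB over SQL to build your own version of the table above.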
9.2 Throughput Comparison
Single Node (16 cores, 64GB RAM):

| Workload | Redis | HeliosDB |
|---|---|---|
| Read-heavy (95/5) | 500K/sec | 450K/sec |
| Write-heavy (50/50) | 200K/sec | 180K/sec |
| Mixed analytics | N/A | 100K/sec + SQL |

Distributed (5 nodes):

| Workload | Redis | HeliosDB |
|---|---|---|
| Read-heavy | 2M/sec | 2M/sec |
| Write-heavy | 800K/sec | 750K/sec |
| Cross-shard query | Limited | Full SQL support |

9.3 Optimization Tips
Connection pooling:
```python
import redis
import psycopg2.pool

# Redis protocol
pool = redis.ConnectionPool(
    host='heliosdb-server', port=6379,
    max_connections=100
)

# SQL protocol
pool = psycopg2.pool.ThreadedConnectionPool(
    minconn=10, maxconn=100,
    host='heliosdb-server', port=5432
)
```
Pipelining for batch operations:
```python
# Redis protocol pipelining
pipe = client.pipeline(transaction=False)
for i in range(1000):
    pipe.set(f'key:{i}', f'value:{i}')
pipe.execute()

# SQL protocol batch insert
cur.executemany(
    "INSERT INTO kv (key, value) VALUES (%s, %s)",
    [(f'key:{i}', f'value:{i}') for i in range(1000)]
)
```
10. Common Issues and Troubleshooting
10.1 Connection Issues
Problem: Connection refused on port 6379
```bash
# Check if Redis protocol is enabled
heliosdb-cli config get redis.enabled
```

```bash
# Enable Redis protocol
heliosdb-cli config set redis.enabled true
heliosdb-cli config set redis.port 6379
```

```bash
# Restart to apply
systemctl restart heliosdb
```

Problem: Authentication failed

```python
# Ensure correct credentials
client = redis.Redis(
    host='heliosdb-server',
    port=6379,
    password='your_password',  # Check this matches
    username='default'         # Or your ACL username
)
```

10.2 Data Type Mismatches
Problem: Binary data corruption
```python
# Always use decode_responses=False for binary data
client = redis.Redis(
    host='heliosdb-server',
    port=6379,
    decode_responses=False  # Keep as bytes
)
```

```python
# Or handle encoding explicitly
value = client.get('binary_key')
text = value.decode('utf-8', errors='replace')
```

Problem: Integer overflow

```sql
-- Use BIGINT for large counters
CREATE TABLE counters (
    name VARCHAR(255) PRIMARY KEY,
    value BIGINT DEFAULT 0  -- Not INTEGER
);
```

10.3 TTL and Expiration
Problem: Keys not expiring
```sql
-- Ensure background cleanup is running
SELECT heliosdb_start_ttl_cleanup();
```

```sql
-- Or set up scheduled cleanup
CREATE EXTENSION IF NOT EXISTS pg_cron;
SELECT cron.schedule('cleanup-expired', '* * * * *',
    'DELETE FROM cache_entries WHERE expires_at < NOW()');
```

Problem: TTL precision issues

```sql
-- Use TIMESTAMP WITH TIME ZONE for consistent TTL
ALTER TABLE cache_entries
ALTER COLUMN expires_at TYPE TIMESTAMP WITH TIME ZONE;
```

10.4 Performance Issues
Problem: Slow queries after migration
```sql
-- Analyze tables after bulk import
ANALYZE users;
ANALYZE cache_entries;
ANALYZE leaderboards;
```

```sql
-- Check for missing indexes
SELECT * FROM pg_stat_user_tables WHERE n_live_tup > 10000;
```

```sql
-- Create indexes for common access patterns
CREATE INDEX CONCURRENTLY idx_cache_key_prefix
ON cache_entries (substring(cache_key, 1, 20));
```

Problem: Memory usage higher than expected

```sql
-- Enable compression for large values
ALTER TABLE cache_entries SET (compression = 'lz4');
```

```sql
-- Monitor cache hit rates
SELECT * FROM heliosdb_cache_stats();
```

```sql
-- Tune cache size
SET heliosdb.cache_size = '4GB';
```

10.5 Cluster and Replication
Problem: Inconsistent reads after failover
```sql
-- Ensure read-your-writes consistency
SET heliosdb.consistency_level = 'session';
```

```sql
-- Or use synchronous replication
ALTER SYSTEM SET synchronous_commit = 'on';
```

10.6 Migration Rollback
If you need to roll back to Redis:
```bash
# 1. Export from HeliosDB
pg_dump -h heliosdb-server -t cache_entries -t users --data-only > rollback.sql
```
```bash
# 2. Convert to Redis commands (custom script)
python convert_sql_to_redis.py rollback.sql > redis_restore.txt
```
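The conversion script is left to you, since it depends on your dump format. As one hedged sketch of its core, assuming the key-value rows were exported as tab-separated `key<TAB>value` lines (a real `convert_sql_to_redis.py` would need to parse whatever `pg_dump` actually produced): `redis-cli --pipe` accepts the RESP wire format, which is straightforward to emit.

```python
def to_resp(*parts):
    """Encode one command in the RESP wire format that redis-cli --pipe accepts."""
    out = [f"*{len(parts)}\r\n"]
    for p in parts:
        data = str(p)
        # RESP bulk strings are length-prefixed in bytes
        out.append(f"${len(data.encode('utf-8'))}\r\n{data}\r\n")
    return "".join(out)

def dump_line_to_set(line):
    """Turn one tab-separated key<TAB>value dump line into a RESP SET command."""
    key, value = line.rstrip("\n").split("\t", 1)
    return to_resp("SET", key, value)

cmd = dump_line_to_set('user:1001\t{"name": "Alice"}')
# cmd begins with '*3\r\n$3\r\nSET\r\n', the RESP framing for SET
```

Note that rows which originally carried a TTL would also need an `EXPIREAT` command derived from the `expires_at` column; the sketch above handles only plain string keys.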
```bash
# 3. Import to Redis
cat redis_restore.txt | redis-cli --pipe
```
```bash
# 4. Update application connection strings
# Revert to original Redis configuration
```

Appendix A: Migration Checklist
Pre-Migration
- Complete Redis inventory (keys, types, memory)
- Analyze access patterns and hot keys
- Design HeliosDB schema
- Set up HeliosDB environment
- Test connectivity with both protocols
Migration Execution
- Export Redis data (SCAN or RDB)
- Create HeliosDB tables
- Import and transform data
- Validate data integrity (row counts, checksums)
- Update application connection strings
- Test application functionality
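For the "validate data integrity" step above, row counts catch gross errors but not silently corrupted values. One hedged sketch of a checksum you can compute on both sides (feed it (key, value) pairs from a Redis `SCAN` loop on one side and a `SELECT` on the other, then compare the digests):

```python
import hashlib

def dataset_checksum(pairs):
    """Order-independent SHA-256 digest over (key, value) string pairs."""
    h = hashlib.sha256()
    # Sort so that iteration order (SCAN vs. SELECT) cannot affect the digest
    for key, value in sorted(pairs):
        h.update(key.encode("utf-8"))
        h.update(b"\x00")  # separator so ("ab","c") != ("a","bc")
        h.update(value.encode("utf-8"))
        h.update(b"\x01")
    return h.hexdigest()

redis_side = [("user:1", "alice"), ("user:2", "bob")]
helios_side = [("user:2", "bob"), ("user:1", "alice")]
assert dataset_checksum(redis_side) == dataset_checksum(helios_side)
```

For large datasets, checksum per key prefix (e.g. `user:*`, `session:*`) rather than the whole keyspace, so a mismatch points you at the affected slice.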
Post-Migration
- Monitor performance metrics
- Verify cache hit rates
- Check TTL cleanup is working
- Document any behavioral differences
- Decommission Redis (after validation period)
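For the "verify cache hit rates" item, raw hit and miss counters (such as those a stats view like `heliosdb_cache_stats()` might expose; the column layout here is an assumption) are easiest to track as a single ratio. A trivial sketch, guarding the empty case:

```python
def cache_hit_rate(hits, misses):
    """Hit rate as a fraction in [0, 1]; None when no lookups have been recorded."""
    total = hits + misses
    if total == 0:
        return None  # avoid division by zero on a freshly started cache
    return hits / total

rate = cache_hit_rate(9500, 500)  # -> 0.95
```

A sustained hit rate noticeably below what Redis delivered usually means the HeliosDB cache size or the caching policy on the hot tables needs tuning before Redis is decommissioned.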
Appendix B: Quick Reference Card
Connection Strings
```bash
# Redis protocol (drop-in)
redis://heliosdb-server:6379
```

```bash
# PostgreSQL protocol (full features)
postgresql://user:pass@heliosdb-server:5432/dbname
```

Common Operations
```sql
-- Enable caching on table
ALTER TABLE mytable ENABLE INTELLIGENT_CACHING;
```

```sql
-- Check cache stats
SELECT * FROM heliosdb_cache_stats();
```

```sql
-- Invalidate cache
SELECT heliosdb_cache_invalidate('table', 'key_column', 'key_value');
```

```sql
-- TTL cleanup
DELETE FROM mytable WHERE expires_at < NOW();
```

Related Documentation
- Redis Protocol Documentation
- Redis Compatibility Matrix
- Redis Configuration Guide
- General Migration Guide
- Protocol Compatibility Matrix
Document Version: 1.0 Last Updated: 2026-01-04 Author: HeliosDB Documentation Team