Snowflake to HeliosDB Migration Guide

Version: 1.0 | Last Updated: January 2026 | Compatibility Target: Snowflake (All Editions)


Table of Contents

  1. Introduction
  2. Compatibility Overview
  3. Pre-Migration Assessment
  4. Conceptual Mapping
  5. Connection String Migration
  6. Data Type Mapping
  7. Virtual Warehouse Migration
  8. Time Travel Migration
  9. VARIANT Semi-Structured Data Migration
  10. FLATTEN Function Migration
  11. COPY INTO Migration
  12. Stage and Storage Integration
  13. Application Connectivity
  14. SQL Function Mapping
  15. Known Limitations
  16. Performance Considerations
  17. Post-Migration Validation
  18. Common Issues and Troubleshooting
  19. Rollback Procedures

1. Introduction

1.1 Why Migrate from Snowflake to HeliosDB?

Organizations choose to migrate from Snowflake to HeliosDB for several strategic reasons:

Cost Optimization

Cost Factor | Snowflake | HeliosDB | Benefit
Compute Credits | Per-second billing | Fixed resource allocation | Predictable costs
Storage | Separated pricing | Unified pricing | Simplified billing
Data Transfer | Egress charges | Included | 70-90% reduction
Multi-cluster Warehouse | Premium charges | Included | Included by default
Time Travel (90 days) | Additional storage cost | Included | No additional charge
Fail-safe | Additional storage cost | Configurable | Flexible retention

Typical Annual Savings: 40-70% for data-intensive workloads

Multi-Model Capabilities

HeliosDB extends Snowflake’s capabilities with native multi-model support:

  • Relational SQL: Full Snowflake SQL compatibility (90%+)
  • Document/JSON: MongoDB-compatible document storage alongside VARIANT
  • Key-Value: Redis-compatible operations for caching
  • Time-Series: Native time-series optimizations
  • Graph: Cypher query language support
  • Vector: AI/ML embeddings with similarity search

Architecture Advantages

Capability | Snowflake | HeliosDB
Deployment | Cloud-only (SaaS) | Cloud + On-premises
Data Sovereignty | Limited regions | Full control
Multi-Protocol | SQL only | 9+ protocols
Real-time Streaming | Streams/Tasks | Native CDC + Kafka
ML/AI Integration | External tools | Native in-database
HTAP | Separate warehouse | Unified workload

1.2 Migration Goals

This guide helps you achieve:

  1. Zero data loss during migration with full VARIANT support
  2. Minimal application changes due to 90%+ SQL compatibility
  3. Preserved Time Travel functionality with identical syntax
  4. Seamless semi-structured data migration with full FLATTEN support
  5. Improved cost efficiency with predictable resource pricing

2. Compatibility Overview

2.1 Snowflake Protocol Support in HeliosDB

HeliosDB implements Snowflake SQL API compatibility for seamless integration:

Category | Coverage | Notes
SQL API | 90%+ | Full statement execution
Authentication | Supported | Key pair, password
Time Travel | 100% | AT/BEFORE syntax fully supported
VARIANT | 100% | Full semi-structured support
FLATTEN | 100% | All modes supported
Virtual Warehouses | 100% | All operations supported
COPY INTO | 100% | All file formats

2.2 Supported Snowflake Features

Fully Supported (Direct Migration)

  • DDL: CREATE DATABASE/SCHEMA/TABLE/VIEW, ALTER, DROP
  • DML: SELECT, INSERT, UPDATE, DELETE, MERGE
  • Transactions: COMMIT, ROLLBACK, BEGIN/END TRANSACTION
  • Time Travel: AT(TIMESTAMP), AT(OFFSET), AT(STATEMENT), BEFORE, UNDROP
  • Semi-Structured: VARIANT, OBJECT, ARRAY data types
  • Functions: PARSE_JSON, FLATTEN, OBJECT_CONSTRUCT, ARRAY_AGG
  • Warehouses: CREATE/ALTER/DROP WAREHOUSE, suspend/resume
  • Data Loading: COPY INTO from all supported formats

Partially Supported (Requires Modification)

Snowflake Feature | HeliosDB Support | Migration Notes
GEOGRAPHY type | Full support | Uses PostGIS-compatible functions
External Tables | Stage-based access | Configure storage integration
Materialized Views | Full support | Same refresh semantics
User-Defined Functions (SQL) | Supported | UDF syntax compatible

Not Supported (Requires Redesign)

Snowflake Feature | HeliosDB Alternative | Migration Approach
Streams | HeliosDB CDC | Use Change Data Capture
Tasks | HeliosDB Scheduler | Use DBMS_SCHEDULER or cron
Pipes | COPY INTO with scheduling | Batch or streaming ingestion
Dynamic Tables | Materialized Views | Use MVs with refresh
Snowpark (Python/Java/Scala) | Native SQL + UDFs | Rewrite as SQL UDFs
External Functions | HTTP/REST integration | Use HeliosDB REST API

2.3 SQL Compatibility Matrix

SQL Feature | Snowflake | HeliosDB | Notes
CTEs (WITH clause) | Yes | Yes | Identical syntax
Window Functions | Yes | Yes | All functions supported
PIVOT/UNPIVOT | Yes | Yes | Same syntax
QUALIFY | Yes | Yes | Filter window results
SAMPLE/TABLESAMPLE | Yes | Yes | Row sampling
MATCH_RECOGNIZE | Limited | No | Use window functions
Recursive CTEs | Yes | Yes | Same syntax
LATERAL joins | Yes | Yes | Same syntax
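
Most of these features need no translation at all. As a quick smoke test, a query combining QUALIFY with a window function should run unchanged on both systems (table and column names here are illustrative):

-- Latest order per customer; identical on Snowflake and HeliosDB
SELECT customer_id, order_id, order_date
FROM orders
QUALIFY ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY order_date DESC) = 1;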

3. Pre-Migration Assessment

3.1 Snowflake Environment Inventory

Before migration, catalog your Snowflake environment:

Database and Schema Inventory

-- List all databases
SHOW DATABASES;
-- List all schemas in a database
SHOW SCHEMAS IN DATABASE my_database;
-- Get database sizes
SELECT
DATABASE_NAME,
DATABASE_OWNER,
CREATED,
LAST_ALTERED,
COMMENT
FROM INFORMATION_SCHEMA.DATABASES
ORDER BY DATABASE_NAME;
-- Table inventory with sizes
SELECT
TABLE_CATALOG,
TABLE_SCHEMA,
TABLE_NAME,
TABLE_TYPE,
ROW_COUNT,
BYTES,
RETENTION_TIME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA NOT IN ('INFORMATION_SCHEMA')
ORDER BY BYTES DESC NULLS LAST;

VARIANT Column Analysis

-- Find all VARIANT columns
SELECT
TABLE_CATALOG,
TABLE_SCHEMA,
TABLE_NAME,
COLUMN_NAME,
DATA_TYPE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE DATA_TYPE IN ('VARIANT', 'OBJECT', 'ARRAY')
ORDER BY TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME;
-- Sample VARIANT structures (for migration planning)
SELECT
column_name,
TYPEOF(column_name) as variant_type,
COUNT(*) as occurrences
FROM my_variant_table
GROUP BY 1, 2;

3.2 Virtual Warehouse Analysis

-- Warehouse inventory
SHOW WAREHOUSES;
-- Warehouse usage patterns
SELECT
WAREHOUSE_NAME,
START_TIME,
END_TIME,
WAREHOUSE_SIZE,
CREDITS_USED
FROM SNOWFLAKE.ACCOUNT_USAGE.WAREHOUSE_METERING_HISTORY
WHERE START_TIME >= DATEADD(day, -30, CURRENT_TIMESTAMP())
ORDER BY START_TIME DESC;
-- Query patterns by warehouse
SELECT
WAREHOUSE_NAME,
COUNT(*) as query_count,
AVG(EXECUTION_TIME) as avg_execution_ms,
SUM(CREDITS_USED_CLOUD_SERVICES) as total_credits
FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
WHERE START_TIME >= DATEADD(day, -30, CURRENT_TIMESTAMP())
GROUP BY WAREHOUSE_NAME
ORDER BY query_count DESC;

3.3 Time Travel Usage Analysis

-- Check Time Travel retention settings
SELECT
TABLE_CATALOG,
TABLE_SCHEMA,
TABLE_NAME,
RETENTION_TIME
FROM INFORMATION_SCHEMA.TABLES
WHERE RETENTION_TIME > 0
ORDER BY RETENTION_TIME DESC;
-- Find Time Travel queries in history
SELECT
QUERY_TEXT,
START_TIME,
EXECUTION_STATUS
FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
WHERE QUERY_TEXT ILIKE '%AT(TIMESTAMP%'
OR QUERY_TEXT ILIKE '%AT(OFFSET%'
OR QUERY_TEXT ILIKE '%BEFORE(STATEMENT%'
LIMIT 100;

3.4 Stream and Task Inventory (Requires Redesign)

-- List all streams
SHOW STREAMS;
-- Stream details
SELECT
STREAM_CATALOG,
STREAM_SCHEMA,
STREAM_NAME,
TABLE_NAME,
TYPE,
STALE
FROM INFORMATION_SCHEMA.STREAMS;
-- List all tasks
SHOW TASKS;
-- Task details
SELECT
NAME,
DATABASE_NAME,
SCHEMA_NAME,
OWNER,
SCHEDULE,
STATE
FROM TABLE(INFORMATION_SCHEMA.TASK_HISTORY())
WHERE SCHEDULED_TIME >= DATEADD(day, -7, CURRENT_TIMESTAMP());

3.5 Migration Complexity Assessment

Calculate your migration complexity score:

Factor | Points | Your Score
Databases < 5 | 1 |
Databases 5-20 | 2 |
Databases > 20 | 3 |
VARIANT columns < 10 | 1 |
VARIANT columns 10-50 | 2 |
VARIANT columns > 50 | 3 |
No Streams/Tasks | 0 |
Uses Streams/Tasks | 3 |
No Snowpark usage | 0 |
Uses Snowpark | 4 |
Data size < 100GB | 1 |
Data size 100GB-1TB | 2 |
Data size > 1TB | 3 |

Total Score Interpretation:

  • 1-5: Simple migration (1-2 weeks)
  • 6-10: Medium complexity (2-4 weeks)
  • 11+: Complex migration (4+ weeks)
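
The rubric is easy to script. A minimal Python sketch, assuming you feed it the numbers your inventory queries return (the thresholds simply mirror the table above):

def migration_complexity(databases, variant_columns, uses_streams_or_tasks,
                         uses_snowpark, data_size_gb):
    """Score a Snowflake environment using the rubric above."""
    score = 1 if databases < 5 else 2 if databases <= 20 else 3
    score += 1 if variant_columns < 10 else 2 if variant_columns <= 50 else 3
    score += 3 if uses_streams_or_tasks else 0
    score += 4 if uses_snowpark else 0
    score += 1 if data_size_gb < 100 else 2 if data_size_gb <= 1024 else 3
    if score <= 5:
        return score, "Simple migration (1-2 weeks)"
    if score <= 10:
        return score, "Medium complexity (2-4 weeks)"
    return score, "Complex migration (4+ weeks)"

print(migration_complexity(8, 25, True, False, 500))
# (9, 'Medium complexity (2-4 weeks)')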

4. Conceptual Mapping

4.1 Snowflake to HeliosDB Object Mapping

Snowflake Concept | HeliosDB Equivalent | Notes
Account | Cluster/Instance | HeliosDB instance
Virtual Warehouse | Compute Pool | Resource allocation
Database | Database | Direct mapping
Schema | Schema | Direct mapping
Table | Table | Full compatibility
View | View | Full compatibility
Materialized View | Materialized View | Same functionality
Stage | Storage Mount | File access
File Format | File Format | Same concept
Sequence | Sequence | Compatible syntax
User | User | Direct mapping
Role | Role | Direct mapping
Resource Monitor | Resource Limits | Different implementation

4.2 Warehouse Size to Compute Pool Mapping

Snowflake Size | Credits/Hour | HeliosDB Compute Pool | Recommended Memory
X-Small | 1 | xs-pool | 8GB
Small | 2 | small-pool | 16GB
Medium | 4 | medium-pool | 32GB
Large | 8 | large-pool | 64GB
X-Large | 16 | xlarge-pool | 128GB
2X-Large | 32 | 2xlarge-pool | 256GB
3X-Large | 64 | 3xlarge-pool | 512GB
4X-Large | 128 | 4xlarge-pool | 1TB

4.3 Storage Tier Mapping

Snowflake Storage | HeliosDB Storage | Configuration
Active storage | Hot tier | Default storage
Time Travel | MVCC + retention | Configurable retention
Fail-safe | Archive tier | Optional configuration
External stage (S3) | S3 storage mount | Native S3 access
External stage (Azure) | Azure storage mount | Native Azure access
External stage (GCS) | GCS storage mount | Native GCS access

5. Connection String Migration

5.1 Connection Parameter Mapping

Snowflake Parameter | HeliosDB Parameter | Notes
account | account | Use "heliosdb" or custom
user | user | Same concept
password | password | Same concept
warehouse | warehouse | Maps to compute pool
database | database | Same concept
schema | schema | Same concept
role | role | Same concept
host | host | HeliosDB server hostname
port | port | Default: 443

5.2 Connection String Examples

Before (Snowflake)

# Standard connection
snowflake://user:password@account.snowflakecomputing.com:443/database/schema?warehouse=COMPUTE_WH
# With full parameters
snowflake://my_user:my_password@xy12345.us-east-1.snowflakecomputing.com:443/MY_DATABASE/PUBLIC?warehouse=ANALYTICS_WH&role=ANALYST

After (HeliosDB)

# Standard connection
snowflake://user:password@heliosdb.example.com:443/database/schema?warehouse=COMPUTE_WH&account=heliosdb
# With full parameters
snowflake://my_user:my_password@heliosdb.example.com:443/MY_DATABASE/PUBLIC?warehouse=ANALYTICS_WH&role=ANALYST&account=heliosdb

5.3 Python Connection Migration

Snowflake (Original)

import snowflake.connector
# Snowflake connection
conn = snowflake.connector.connect(
account="xy12345.us-east-1",
user="my_user",
password="my_password",
warehouse="COMPUTE_WH",
database="MY_DATABASE",
schema="PUBLIC",
role="ANALYST"
)
cursor = conn.cursor()
cursor.execute("SELECT * FROM users LIMIT 10")
rows = cursor.fetchall()
conn.close()

HeliosDB (Migrated)

import snowflake.connector
# HeliosDB connection - only host/account changes
conn = snowflake.connector.connect(
host="heliosdb.example.com", # Changed: HeliosDB host
port=443,
account="heliosdb", # Changed: HeliosDB account identifier
user="my_user",
password="my_password",
warehouse="COMPUTE_WH", # Same warehouse name
database="MY_DATABASE", # Same database
schema="PUBLIC", # Same schema
role="ANALYST" # Same role
)
cursor = conn.cursor()
cursor.execute("SELECT * FROM users LIMIT 10") # Same SQL
rows = cursor.fetchall()
conn.close()

5.4 JavaScript/Node.js Connection Migration

Snowflake (Original)

const snowflake = require('snowflake-sdk');
const connection = snowflake.createConnection({
account: 'xy12345.us-east-1',
username: 'my_user',
password: 'my_password',
warehouse: 'COMPUTE_WH',
database: 'MY_DATABASE',
schema: 'PUBLIC'
});
connection.connect((err, conn) => {
if (err) {
console.error('Connection failed:', err);
return;
}
console.log('Connected to Snowflake');
});

HeliosDB (Migrated)

const snowflake = require('snowflake-sdk');
const connection = snowflake.createConnection({
host: 'heliosdb.example.com', // Added: HeliosDB host
port: 443, // Added: Port
account: 'heliosdb', // Changed: HeliosDB account
username: 'my_user',
password: 'my_password',
warehouse: 'COMPUTE_WH', // Same warehouse
database: 'MY_DATABASE', // Same database
schema: 'PUBLIC' // Same schema
});
connection.connect((err, conn) => {
if (err) {
console.error('Connection failed:', err);
return;
}
console.log('Connected to HeliosDB');
});

5.5 Environment Variable Migration

Snowflake Environment Variables

# Snowflake
export SNOWFLAKE_ACCOUNT=xy12345.us-east-1
export SNOWFLAKE_USER=my_user
export SNOWFLAKE_PASSWORD=my_password
export SNOWFLAKE_WAREHOUSE=COMPUTE_WH
export SNOWFLAKE_DATABASE=MY_DATABASE
export SNOWFLAKE_SCHEMA=PUBLIC

HeliosDB Environment Variables

# HeliosDB - compatible variable names
export SNOWFLAKE_HOST=heliosdb.example.com
export SNOWFLAKE_PORT=443
export SNOWFLAKE_ACCOUNT=heliosdb
export SNOWFLAKE_USER=my_user
export SNOWFLAKE_PASSWORD=my_password
export SNOWFLAKE_WAREHOUSE=COMPUTE_WH
export SNOWFLAKE_DATABASE=MY_DATABASE
export SNOWFLAKE_SCHEMA=PUBLIC

6. Data Type Mapping

6.1 Complete Data Type Mapping

Snowflake Type | HeliosDB Type | Notes
NUMBER | NUMBER | Full precision support
NUMBER(p,s) | NUMBER(p,s) | Identical
DECIMAL | DECIMAL | Alias for NUMBER
NUMERIC | NUMERIC | Alias for NUMBER
INT | INT | INTEGER alias
INTEGER | INTEGER | Same
BIGINT | BIGINT | Same
SMALLINT | SMALLINT | Same
TINYINT | TINYINT | Same
BYTEINT | TINYINT | Maps to TINYINT
FLOAT | FLOAT | Double precision
FLOAT4 | FLOAT4 | Single precision
FLOAT8 | FLOAT8 | Double precision
DOUBLE | DOUBLE | Same
DOUBLE PRECISION | DOUBLE PRECISION | Same
REAL | REAL | Same
VARCHAR | VARCHAR | Variable character
VARCHAR(n) | VARCHAR(n) | With length
CHAR | CHAR | Fixed character
CHAR(n) | CHAR(n) | With length
STRING | VARCHAR | Maps to VARCHAR
TEXT | TEXT | Unlimited text
BINARY | BINARY | Binary data
BINARY(n) | BINARY(n) | With length
VARBINARY | VARBINARY | Variable binary
BOOLEAN | BOOLEAN | Same
DATE | DATE | Same
TIME | TIME | Same
TIME(p) | TIME(p) | With precision
TIMESTAMP | TIMESTAMP | Default NTZ behavior
TIMESTAMP_NTZ | TIMESTAMP_NTZ | No timezone
TIMESTAMP_LTZ | TIMESTAMP_LTZ | Local timezone
TIMESTAMP_TZ | TIMESTAMP_TZ | With timezone
VARIANT | VARIANT | Full support
OBJECT | OBJECT | JSON object
ARRAY | ARRAY | JSON array
GEOGRAPHY | GEOGRAPHY | Geospatial
GEOMETRY | GEOMETRY | Geospatial

6.2 NUMBER Type Migration

-- Snowflake NUMBER types
CREATE TABLE snowflake_numbers (
id NUMBER(38,0), -- Integer
price NUMBER(10,2), -- Decimal
quantity NUMBER, -- Default precision
rate NUMBER(5,4), -- High precision decimal
big_number NUMBER(38,0) -- Max precision
);
-- HeliosDB (identical syntax)
CREATE TABLE heliosdb_numbers (
id NUMBER(38,0), -- Integer
price NUMBER(10,2), -- Decimal
quantity NUMBER, -- Default precision
rate NUMBER(5,4), -- High precision decimal
big_number NUMBER(38,0) -- Max precision
);

6.3 Timestamp Type Migration

-- Snowflake timestamp types
CREATE TABLE snowflake_timestamps (
created_at TIMESTAMP_NTZ, -- No timezone
updated_at TIMESTAMP_LTZ, -- Local timezone
event_time TIMESTAMP_TZ, -- With timezone
log_time TIMESTAMP -- Default (NTZ)
);
-- HeliosDB (identical syntax)
CREATE TABLE heliosdb_timestamps (
created_at TIMESTAMP_NTZ, -- No timezone
updated_at TIMESTAMP_LTZ, -- Local timezone
event_time TIMESTAMP_TZ, -- With timezone
log_time TIMESTAMP -- Default (NTZ)
);
-- Timestamp operations work identically
SELECT
created_at,
CONVERT_TIMEZONE('UTC', 'America/New_York', created_at) as eastern_time,
DATEADD(day, 7, created_at) as week_later,
DATEDIFF(hour, created_at, CURRENT_TIMESTAMP()) as hours_ago
FROM timestamps_table;

6.4 Semi-Structured Type Migration

-- Snowflake semi-structured types
CREATE TABLE snowflake_json (
id INT,
data VARIANT, -- Any JSON value
metadata OBJECT, -- JSON object
tags ARRAY -- JSON array
);
-- HeliosDB (identical syntax)
CREATE TABLE heliosdb_json (
id INT,
data VARIANT, -- Any JSON value
metadata OBJECT, -- JSON object
tags ARRAY -- JSON array
);
-- Insert works identically
INSERT INTO heliosdb_json (id, data, metadata, tags)
VALUES (
1,
PARSE_JSON('{"name": "Alice", "age": 30}'),
OBJECT_CONSTRUCT('source', 'web', 'version', '2.0'),
ARRAY_CONSTRUCT('premium', 'active', 'verified')
);

7. Virtual Warehouse Migration

7.1 Warehouse Creation Migration

Snowflake Warehouse

-- Snowflake warehouse creation
CREATE WAREHOUSE analytics_wh WITH
WAREHOUSE_SIZE = 'LARGE'
WAREHOUSE_TYPE = 'STANDARD'
AUTO_SUSPEND = 300
AUTO_RESUME = TRUE
MIN_CLUSTER_COUNT = 1
MAX_CLUSTER_COUNT = 4
SCALING_POLICY = 'STANDARD'
INITIALLY_SUSPENDED = TRUE
COMMENT = 'Analytics workload warehouse';

HeliosDB Warehouse (Identical)

-- HeliosDB warehouse creation (same syntax)
CREATE WAREHOUSE analytics_wh WITH
WAREHOUSE_SIZE = 'LARGE'
AUTO_SUSPEND = 300
AUTO_RESUME = TRUE
MIN_CLUSTER_COUNT = 1
MAX_CLUSTER_COUNT = 4
SCALING_POLICY = 'STANDARD'
INITIALLY_SUSPENDED = TRUE;

7.2 Warehouse Management Operations

-- All operations work identically in HeliosDB
-- Resize warehouse
ALTER WAREHOUSE analytics_wh SET WAREHOUSE_SIZE = 'XLARGE';
-- Suspend warehouse
ALTER WAREHOUSE analytics_wh SUSPEND;
-- Resume warehouse
ALTER WAREHOUSE analytics_wh RESUME;
-- Modify auto-suspend
ALTER WAREHOUSE analytics_wh SET AUTO_SUSPEND = 600;
-- Scale cluster count
ALTER WAREHOUSE analytics_wh SET
MIN_CLUSTER_COUNT = 2
MAX_CLUSTER_COUNT = 8;
-- Drop warehouse
DROP WAREHOUSE IF EXISTS temp_wh;
-- Show warehouse status
SHOW WAREHOUSES;
SHOW WAREHOUSES LIKE 'ANALYTICS%';
-- Switch warehouse context
USE WAREHOUSE analytics_wh;

7.3 Warehouse Size Equivalence

Snowflake Size | HeliosDB Compute Allocation
X-Small | 1 vCPU, 8GB RAM
Small | 2 vCPUs, 16GB RAM
Medium | 4 vCPUs, 32GB RAM
Large | 8 vCPUs, 64GB RAM
X-Large | 16 vCPUs, 128GB RAM
2X-Large | 32 vCPUs, 256GB RAM
3X-Large | 64 vCPUs, 512GB RAM
4X-Large | 128 vCPUs, 1TB RAM

7.4 Multi-Cluster Warehouse Migration

-- Snowflake multi-cluster warehouse
CREATE WAREHOUSE scaling_wh WITH
WAREHOUSE_SIZE = 'MEDIUM'
MIN_CLUSTER_COUNT = 1
MAX_CLUSTER_COUNT = 10
SCALING_POLICY = 'STANDARD';
-- HeliosDB multi-cluster warehouse (same syntax)
CREATE WAREHOUSE scaling_wh WITH
WAREHOUSE_SIZE = 'MEDIUM'
MIN_CLUSTER_COUNT = 1
MAX_CLUSTER_COUNT = 10
SCALING_POLICY = 'STANDARD';
-- Economy scaling policy
CREATE WAREHOUSE economy_wh WITH
WAREHOUSE_SIZE = 'SMALL'
MIN_CLUSTER_COUNT = 1
MAX_CLUSTER_COUNT = 4
SCALING_POLICY = 'ECONOMY';

8. Time Travel Migration

8.1 Time Travel Syntax Compatibility

HeliosDB supports the exact same Time Travel syntax as Snowflake:

AT(TIMESTAMP) Queries

-- Snowflake Time Travel
SELECT * FROM orders
AT(TIMESTAMP => '2024-01-15 10:30:00'::TIMESTAMP_NTZ);
-- HeliosDB Time Travel (identical)
SELECT * FROM orders
AT(TIMESTAMP => '2024-01-15 10:30:00'::TIMESTAMP_NTZ);
-- Using a computed timestamp (24 hours ago)
SELECT * FROM orders
AT(TIMESTAMP => DATEADD(hours, -24, CURRENT_TIMESTAMP()));
-- HeliosDB equivalent
SELECT * FROM orders
AT(TIMESTAMP => DATEADD(hours, -24, CURRENT_TIMESTAMP()));

AT(OFFSET) Queries

-- Snowflake: Query data from 1 hour ago
SELECT * FROM inventory
AT(OFFSET => -3600);
-- HeliosDB: Identical syntax
SELECT * FROM inventory
AT(OFFSET => -3600);
-- Query from 1 day ago
SELECT * FROM daily_metrics
AT(OFFSET => -86400);

AT(STATEMENT) and BEFORE(STATEMENT) Queries

-- Get query ID from query history
-- Snowflake
SELECT query_id FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY())
WHERE QUERY_TEXT LIKE 'UPDATE orders%'
ORDER BY START_TIME DESC
LIMIT 1;
-- Query data at specific statement
SELECT * FROM orders
AT(STATEMENT => '01a1b2c3-d4e5-f6a7-b8c9-d0e1f2a3b4c5');
-- Query data before specific statement
SELECT * FROM orders
BEFORE(STATEMENT => '01a1b2c3-d4e5-f6a7-b8c9-d0e1f2a3b4c5');
-- HeliosDB: Both syntaxes work identically
SELECT * FROM orders
AT(STATEMENT => 'statement-id');
SELECT * FROM orders
BEFORE(STATEMENT => 'statement-id');

8.2 CHANGES Clause Migration

-- Snowflake: Track changes
SELECT *
FROM orders
CHANGES(INFORMATION => DEFAULT)
AT(TIMESTAMP => '2024-01-01 00:00:00')
END(TIMESTAMP => '2024-01-02 00:00:00');
-- HeliosDB: Same syntax
SELECT *
FROM orders
CHANGES(INFORMATION => DEFAULT)
AT(TIMESTAMP => '2024-01-01 00:00:00')
END(TIMESTAMP => '2024-01-02 00:00:00');
-- With append-only information
SELECT *
FROM events
CHANGES(INFORMATION => APPEND_ONLY)
AT(TIMESTAMP => DATEADD(day, -1, CURRENT_TIMESTAMP()))
END(TIMESTAMP => CURRENT_TIMESTAMP());

8.3 UNDROP Migration

-- Snowflake: Undrop table
DROP TABLE orders;
UNDROP TABLE orders;
-- HeliosDB: Same syntax
DROP TABLE orders;
UNDROP TABLE orders;
-- Undrop schema
DROP SCHEMA analytics;
UNDROP SCHEMA analytics;
-- Undrop database
DROP DATABASE warehouse_db;
UNDROP DATABASE warehouse_db;

8.4 Clone with Time Travel

-- Snowflake: Clone at historical point
CREATE TABLE orders_backup CLONE orders
AT(TIMESTAMP => '2024-01-15 00:00:00');
-- HeliosDB: Same syntax
CREATE TABLE orders_backup CLONE orders
AT(TIMESTAMP => '2024-01-15 00:00:00');
-- Clone at offset
CREATE TABLE inventory_snapshot CLONE inventory
AT(OFFSET => -86400); -- 24 hours ago
-- Clone schema
CREATE SCHEMA analytics_backup CLONE analytics
AT(TIMESTAMP => '2024-01-01 00:00:00');

8.5 Time Travel Configuration

-- Snowflake: Set retention period
ALTER TABLE orders SET DATA_RETENTION_TIME_IN_DAYS = 90;
-- HeliosDB: Same syntax
ALTER TABLE orders SET DATA_RETENTION_TIME_IN_DAYS = 90;
-- Check current retention
SHOW TABLES LIKE 'orders';
-- Create table with specific retention
CREATE TABLE audit_log (
id INT,
action VARCHAR(100),
timestamp TIMESTAMP
)
DATA_RETENTION_TIME_IN_DAYS = 90;

9. VARIANT Semi-Structured Data Migration

9.1 VARIANT Column Operations

-- Create table with VARIANT
CREATE TABLE events (
id INT,
event_time TIMESTAMP,
event_data VARIANT
);
-- Insert JSON data
INSERT INTO events (id, event_time, event_data)
VALUES (
1,
CURRENT_TIMESTAMP(),
PARSE_JSON('{
"user_id": 12345,
"action": "purchase",
"items": [
{"product_id": 101, "name": "Widget", "price": 29.99},
{"product_id": 102, "name": "Gadget", "price": 49.99}
],
"metadata": {
"browser": "Chrome",
"platform": "Windows"
}
}')
);
-- Query VARIANT data (works identically in HeliosDB)
SELECT
id,
event_data:user_id::INT as user_id,
event_data:action::STRING as action,
event_data:items[0].name::STRING as first_item,
event_data:metadata.browser::STRING as browser
FROM events;

9.2 Path Notation

-- Snowflake path notation
SELECT
data:level1:level2:field::STRING as nested_value,
data:array[0].property::NUMBER as array_value,
data:"special-key"::STRING as special_key -- Keys with special chars
FROM json_table;
-- HeliosDB path notation (identical)
SELECT
data:level1:level2:field::STRING as nested_value,
data:array[0].property::NUMBER as array_value,
data:"special-key"::STRING as special_key
FROM json_table;

9.3 Type Casting from VARIANT

-- All type casts work identically
SELECT
data:id::INT as id_int,
data:id::STRING as id_string,
data:price::FLOAT as price_float,
data:price::NUMBER(10,2) as price_decimal,
data:active::BOOLEAN as is_active,
data:timestamp::TIMESTAMP as event_timestamp,
data:date::DATE as event_date,
data:nested::VARIANT as nested_variant,
data:items::ARRAY as items_array,
data:config::OBJECT as config_object
FROM variant_table;

9.4 VARIANT Functions

-- PARSE_JSON
SELECT PARSE_JSON('{"name": "Alice", "age": 30}') as json_data;
-- TO_JSON
SELECT TO_JSON(OBJECT_CONSTRUCT('name', 'Alice', 'age', 30)) as json_string;
-- TRY_PARSE_JSON (safe parsing)
SELECT TRY_PARSE_JSON(potentially_invalid_json) as parsed;
-- TO_VARIANT
SELECT TO_VARIANT(123) as variant_number;
SELECT TO_VARIANT('hello') as variant_string;
-- GET and GET_PATH
SELECT
GET(data, 'field') as value1,
GET_PATH(data, 'level1.level2.field') as value2
FROM json_table;
-- TYPEOF
SELECT
TYPEOF(data:number_field), -- Returns 'INTEGER' or 'DECIMAL'
TYPEOF(data:string_field), -- Returns 'VARCHAR'
TYPEOF(data:array_field), -- Returns 'ARRAY'
TYPEOF(data:object_field) -- Returns 'OBJECT'
FROM variant_table;

9.5 OBJECT Functions

-- OBJECT_CONSTRUCT
SELECT OBJECT_CONSTRUCT(
'id', user_id,
'name', user_name,
'email', email,
'metadata', OBJECT_CONSTRUCT(
'created', created_at,
'updated', updated_at
)
) as user_json
FROM users;
-- OBJECT_CONSTRUCT_KEEP_NULL (preserve NULL values)
SELECT OBJECT_CONSTRUCT_KEEP_NULL(
'field1', value1,
'field2', NULL,
'field3', value3
);
-- OBJECT_INSERT
SELECT OBJECT_INSERT(existing_object, 'new_key', 'new_value') as updated;
-- OBJECT_DELETE
SELECT OBJECT_DELETE(existing_object, 'key_to_remove') as updated;
-- OBJECT_KEYS
SELECT OBJECT_KEYS(data) as keys FROM json_table;

9.6 ARRAY Functions

-- ARRAY_CONSTRUCT
SELECT ARRAY_CONSTRUCT(1, 2, 3, 4, 5) as numbers;
SELECT ARRAY_CONSTRUCT('a', 'b', 'c') as letters;
-- ARRAY_AGG
SELECT
category,
ARRAY_AGG(product_name) as products,
ARRAY_AGG(DISTINCT product_name) as unique_products
FROM products
GROUP BY category;
-- ARRAY_SIZE
SELECT ARRAY_SIZE(data:items) as item_count FROM orders;
-- ARRAY_CONTAINS
SELECT *
FROM users
WHERE ARRAY_CONTAINS('admin'::VARIANT, data:roles);
-- ARRAY_APPEND / ARRAY_PREPEND
SELECT
ARRAY_APPEND(existing_array, 'new_element') as appended,
ARRAY_PREPEND(existing_array, 'first_element') as prepended;
-- ARRAY_CAT (concatenate arrays)
SELECT ARRAY_CAT(array1, array2) as combined;
-- ARRAY_COMPACT (remove NULLs)
SELECT ARRAY_COMPACT(array_with_nulls) as cleaned;
-- ARRAY_DISTINCT
SELECT ARRAY_DISTINCT(array_with_duplicates) as unique_values;
-- ARRAY_INTERSECTION
SELECT ARRAY_INTERSECTION(array1, array2) as common_elements;
-- ARRAY_SLICE
SELECT ARRAY_SLICE(large_array, 0, 10) as first_ten;

10. FLATTEN Function Migration

10.1 Basic FLATTEN

-- Snowflake FLATTEN
SELECT
o.id as order_id,
o.customer_id,
f.value:product_id::INT as product_id,
f.value:name::STRING as product_name,
f.value:price::FLOAT as price,
f.value:quantity::INT as quantity
FROM orders o,
LATERAL FLATTEN(input => o.items) f;
-- HeliosDB FLATTEN (identical syntax)
SELECT
o.id as order_id,
o.customer_id,
f.value:product_id::INT as product_id,
f.value:name::STRING as product_name,
f.value:price::FLOAT as price,
f.value:quantity::INT as quantity
FROM orders o,
LATERAL FLATTEN(input => o.items) f;

10.2 FLATTEN Output Columns

Both Snowflake and HeliosDB FLATTEN return the same columns:

Column | Description | Example
SEQ | Sequence number | 1
KEY | Key (for objects) | "name"
PATH | Full path to element | [0].items[1]
INDEX | Array index | 0, 1, 2, ...
VALUE | The element value | {"id": 1, ...}
THIS | The input expression | Original array
-- Access all FLATTEN output columns
SELECT
f.seq,
f.key,
f.path,
f.index,
f.value,
f.this
FROM json_table t,
LATERAL FLATTEN(input => t.data:items) f;

10.3 FLATTEN with PATH Parameter

-- Flatten nested structure at specific path
SELECT
t.id,
f.value:sku::STRING as sku,
f.value:quantity::INT as qty
FROM orders t,
LATERAL FLATTEN(input => t.data, path => 'order.line_items') f;
-- HeliosDB equivalent (same syntax)
SELECT
t.id,
f.value:sku::STRING as sku,
f.value:quantity::INT as qty
FROM orders t,
LATERAL FLATTEN(input => t.data, path => 'order.line_items') f;

10.4 FLATTEN with OUTER

-- OUTER preserves rows with empty/null arrays
SELECT
o.id,
o.customer_id,
f.value:name::STRING as item_name
FROM orders o,
LATERAL FLATTEN(input => o.items, outer => true) f;
-- Without OUTER, orders with no items are excluded
-- With OUTER, they appear with NULL for flattened columns

10.5 FLATTEN with MODE

-- MODE = 'ARRAY' (default) - flatten arrays only
SELECT f.value
FROM data,
LATERAL FLATTEN(input => data:array_field, mode => 'ARRAY') f;
-- MODE = 'OBJECT' - flatten object keys
SELECT f.key, f.value
FROM data,
LATERAL FLATTEN(input => data:object_field, mode => 'OBJECT') f;
-- MODE = 'BOTH' - flatten both arrays and objects
SELECT f.key, f.value
FROM data,
LATERAL FLATTEN(input => data:mixed_field, mode => 'BOTH') f;

10.6 Recursive FLATTEN

-- Recursively flatten nested structures
SELECT
f.path as full_path,
f.key as field_name,
f.value as field_value,
TYPEOF(f.value) as value_type
FROM config_table c,
LATERAL FLATTEN(input => c.config, recursive => true) f
WHERE TYPEOF(f.value) NOT IN ('OBJECT', 'ARRAY');
-- Useful for flattening deeply nested JSON
-- Returns all leaf values with their full paths

10.7 Multiple FLATTEN Operations

-- Flatten multiple arrays
SELECT
o.id as order_id,
items.value:name::STRING as item_name,
tags.value::STRING as tag
FROM orders o,
LATERAL FLATTEN(input => o.data:items) items,
LATERAL FLATTEN(input => items.value:tags) tags;
-- Cross-product of all items with all their tags

10.8 FLATTEN with Aggregation

-- Aggregate over flattened data
SELECT
o.customer_id,
COUNT(DISTINCT f.value:product_id) as unique_products,
SUM(f.value:price::FLOAT * f.value:quantity::INT) as total_value
FROM orders o,
LATERAL FLATTEN(input => o.items) f
GROUP BY o.customer_id;

11. COPY INTO Migration

11.1 Basic COPY INTO Syntax

-- Snowflake COPY INTO
COPY INTO my_table
FROM @my_stage/data/
FILE_FORMAT = (TYPE = 'JSON')
ON_ERROR = 'CONTINUE';
-- HeliosDB COPY INTO (identical)
COPY INTO my_table
FROM @my_stage/data/
FILE_FORMAT = (TYPE = 'JSON')
ON_ERROR = 'CONTINUE';

11.2 File Format Specifications

CSV Format

-- Snowflake CSV
COPY INTO my_table
FROM @stage/csv_files/
FILE_FORMAT = (
TYPE = 'CSV'
FIELD_DELIMITER = ','
RECORD_DELIMITER = '\n'
SKIP_HEADER = 1
FIELD_OPTIONALLY_ENCLOSED_BY = '"'
NULL_IF = ('NULL', 'null', '')
EMPTY_FIELD_AS_NULL = TRUE
COMPRESSION = 'GZIP'
);
-- HeliosDB CSV (same syntax)
COPY INTO my_table
FROM @stage/csv_files/
FILE_FORMAT = (
TYPE = 'CSV'
FIELD_DELIMITER = ','
RECORD_DELIMITER = '\n'
SKIP_HEADER = 1
FIELD_OPTIONALLY_ENCLOSED_BY = '"'
NULL_IF = ('NULL', 'null', '')
EMPTY_FIELD_AS_NULL = TRUE
COMPRESSION = 'GZIP'
);

JSON Format

-- JSON file format
COPY INTO my_table
FROM @stage/json_files/
FILE_FORMAT = (
TYPE = 'JSON'
STRIP_OUTER_ARRAY = TRUE
STRIP_NULL_VALUES = FALSE
COMPRESSION = 'AUTO'
);

Parquet Format

-- Parquet file format
COPY INTO my_table
FROM @stage/parquet_files/
FILE_FORMAT = (TYPE = 'PARQUET')
MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE;

Avro Format

-- Avro file format
COPY INTO my_table
FROM @stage/avro_files/
FILE_FORMAT = (TYPE = 'AVRO');

ORC Format

-- ORC file format
COPY INTO my_table
FROM @stage/orc_files/
FILE_FORMAT = (TYPE = 'ORC');

11.3 COPY INTO from Cloud Storage

Amazon S3

-- Snowflake S3 access
COPY INTO my_table
FROM 's3://my-bucket/data/path/'
CREDENTIALS = (
AWS_KEY_ID = 'AKIAIOSFODNN7EXAMPLE'
AWS_SECRET_KEY = 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY'
)
FILE_FORMAT = (TYPE = 'PARQUET')
PATTERN = '.*\.parquet';
-- HeliosDB S3 access (same syntax)
COPY INTO my_table
FROM 's3://my-bucket/data/path/'
CREDENTIALS = (
AWS_KEY_ID = 'AKIAIOSFODNN7EXAMPLE'
AWS_SECRET_KEY = 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY'
)
FILE_FORMAT = (TYPE = 'PARQUET')
PATTERN = '.*\.parquet';

Azure Blob Storage

-- Azure Blob access
COPY INTO my_table
FROM 'azure://myaccount.blob.core.windows.net/mycontainer/path/'
CREDENTIALS = (AZURE_SAS_TOKEN = '?sv=2020-08-04&ss=b&srt=sco...')
FILE_FORMAT = (TYPE = 'JSON');

Google Cloud Storage

-- GCS access
COPY INTO my_table
FROM 'gcs://my-bucket/path/'
CREDENTIALS = (GCS_CREDENTIALS = '...')
FILE_FORMAT = (TYPE = 'CSV');

11.4 COPY INTO with Error Handling

-- Continue on errors
COPY INTO my_table
FROM @stage/data/
FILE_FORMAT = (TYPE = 'CSV')
ON_ERROR = 'CONTINUE';
-- Skip entire file on error
COPY INTO my_table
FROM @stage/data/
FILE_FORMAT = (TYPE = 'CSV')
ON_ERROR = 'SKIP_FILE';
-- Skip file after N errors
COPY INTO my_table
FROM @stage/data/
FILE_FORMAT = (TYPE = 'CSV')
ON_ERROR = 'SKIP_FILE_3'; -- Skip after 3 errors
-- Abort on first error (default)
COPY INTO my_table
FROM @stage/data/
FILE_FORMAT = (TYPE = 'CSV')
ON_ERROR = 'ABORT_STATEMENT';

11.5 COPY INTO for Export

-- Export to stage
COPY INTO @my_stage/export/
FROM my_table
FILE_FORMAT = (TYPE = 'PARQUET')
HEADER = TRUE
OVERWRITE = TRUE;
-- Export with query
COPY INTO @my_stage/filtered_export/
FROM (
SELECT id, name, created_at
FROM my_table
WHERE created_at >= '2024-01-01'
)
FILE_FORMAT = (TYPE = 'CSV' COMPRESSION = 'GZIP');
-- Partitioned export
COPY INTO @my_stage/partitioned/
FROM my_table
PARTITION BY (date_column)
FILE_FORMAT = (TYPE = 'PARQUET')
MAX_FILE_SIZE = 104857600; -- 100MB per file

11.6 Named File Formats

-- Create file format
CREATE FILE FORMAT my_csv_format
TYPE = 'CSV'
FIELD_DELIMITER = '|'
SKIP_HEADER = 1
NULL_IF = ('NULL', 'null');
-- Use named file format
COPY INTO my_table
FROM @stage/data/
FILE_FORMAT = my_csv_format;
-- Alter file format
ALTER FILE FORMAT my_csv_format SET SKIP_HEADER = 2;
-- Drop file format
DROP FILE FORMAT my_csv_format;

12. Stage and Storage Integration

12.1 Internal Stage Migration

-- Snowflake internal stage
CREATE STAGE my_internal_stage
FILE_FORMAT = (TYPE = 'JSON');
-- HeliosDB internal stage (same syntax)
CREATE STAGE my_internal_stage
FILE_FORMAT = (TYPE = 'JSON');
-- Stage with directory table
CREATE STAGE my_stage
DIRECTORY = (ENABLE = TRUE);
-- List stage contents
LIST @my_internal_stage;
LIST @my_internal_stage/subdir/;
LIST @my_internal_stage PATTERN = '.*\.json';

12.2 External Stage Migration

S3 External Stage

-- Snowflake S3 stage
CREATE STAGE s3_stage
URL = 's3://my-bucket/path/'
CREDENTIALS = (
AWS_KEY_ID = 'AKIAIOSFODNN7EXAMPLE'
AWS_SECRET_KEY = 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY'
)
FILE_FORMAT = (TYPE = 'PARQUET');
-- HeliosDB S3 stage (same syntax)
CREATE STAGE s3_stage
URL = 's3://my-bucket/path/'
CREDENTIALS = (
AWS_KEY_ID = 'AKIAIOSFODNN7EXAMPLE'
AWS_SECRET_KEY = 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY'
)
FILE_FORMAT = (TYPE = 'PARQUET');

Azure External Stage

-- Azure Blob stage
CREATE STAGE azure_stage
URL = 'azure://myaccount.blob.core.windows.net/mycontainer/'
CREDENTIALS = (AZURE_SAS_TOKEN = '?sv=2020-08-04...')
FILE_FORMAT = (TYPE = 'CSV');

GCS External Stage

-- GCS stage
CREATE STAGE gcs_stage
URL = 'gcs://my-bucket/path/'
CREDENTIALS = (GCS_CREDENTIALS = '...')
FILE_FORMAT = (TYPE = 'JSON');

12.3 Storage Integration Migration

-- Snowflake storage integration
CREATE STORAGE INTEGRATION my_s3_integration
TYPE = EXTERNAL_STAGE
STORAGE_PROVIDER = 'S3'
ENABLED = TRUE
STORAGE_AWS_ROLE_ARN = 'arn:aws:iam::123456789012:role/my-role'
STORAGE_ALLOWED_LOCATIONS = ('s3://bucket1/', 's3://bucket2/');
-- HeliosDB storage integration (same syntax)
CREATE STORAGE INTEGRATION my_s3_integration
TYPE = EXTERNAL_STAGE
STORAGE_PROVIDER = 'S3'
ENABLED = TRUE
STORAGE_AWS_ROLE_ARN = 'arn:aws:iam::123456789012:role/my-role'
STORAGE_ALLOWED_LOCATIONS = ('s3://bucket1/', 's3://bucket2/');
-- Use with stage
CREATE STAGE integrated_stage
URL = 's3://bucket1/data/'
STORAGE_INTEGRATION = my_s3_integration
FILE_FORMAT = (TYPE = 'PARQUET');

12.4 PUT and GET Commands

-- Upload file to stage
PUT file:///local/path/data.csv @my_stage/upload/;
-- Upload with options
PUT file:///local/path/*.csv @my_stage/
PARALLEL = 4
AUTO_COMPRESS = TRUE;
-- Download from stage
GET @my_stage/data.csv file:///local/download/;
-- Download with pattern
GET @my_stage/ file:///local/download/
PATTERN = '.*\.parquet';

12.5 Table Stage and User Stage

-- Table stage (automatic per table)
-- Access with @%table_name
COPY INTO my_table
FROM @%my_table/incoming/
FILE_FORMAT = (TYPE = 'CSV');
-- User stage (automatic per user)
-- Access with @~
PUT file:///local/data.csv @~/uploads/;
LIST @~;
-- Copy from user stage
COPY INTO my_table
FROM @~/uploads/
FILE_FORMAT = (TYPE = 'CSV');

13. Application Connectivity

13.1 Python (snowflake-connector-python)

Basic Connection and Query

import snowflake.connector
from snowflake.connector import DictCursor
# HeliosDB connection
conn = snowflake.connector.connect(
host="heliosdb.example.com",
port=443,
account="heliosdb",
user="my_user",
password="my_password",
warehouse="COMPUTE_WH",
database="ANALYTICS",
schema="PUBLIC"
)
# Basic query
cursor = conn.cursor()
cursor.execute("SELECT * FROM users LIMIT 10")
for row in cursor.fetchall():
print(row)
# Using DictCursor for named columns
cursor = conn.cursor(DictCursor)
cursor.execute("SELECT id, name, email FROM users")
for row in cursor.fetchall():
print(f"User: {row['name']}, Email: {row['email']}")
cursor.close()
conn.close()

Time Travel Query

from datetime import datetime, timedelta
# Connect to HeliosDB
conn = snowflake.connector.connect(
host="heliosdb.example.com",
account="heliosdb",
user="my_user",
password="my_password"
)
cursor = conn.cursor()
# Time travel to yesterday
yesterday = (datetime.now() - timedelta(days=1)).strftime('%Y-%m-%d %H:%M:%S')
cursor.execute(f"""
SELECT COUNT(*) as order_count
FROM orders
AT(TIMESTAMP => '{yesterday}'::TIMESTAMP)
""")
result = cursor.fetchone()
print(f"Orders yesterday: {result[0]}")
# Time travel with offset
cursor.execute("""
SELECT *
FROM inventory
AT(OFFSET => -3600)
WHERE product_id = 101
""")
for row in cursor.fetchall():
print(row)

VARIANT and FLATTEN Query

# Query VARIANT data
cursor.execute("""
SELECT
id,
data:user_id::INT as user_id,
data:action::STRING as action,
data:metadata.browser::STRING as browser
FROM events
WHERE data:action::STRING = 'purchase'
""")
for row in cursor.fetchall():
print(f"Event {row[0]}: User {row[1]} - {row[2]} via {row[3]}")
# FLATTEN query
cursor.execute("""
SELECT
o.id as order_id,
f.value:name::STRING as product,
f.value:price::FLOAT as price
FROM orders o,
LATERAL FLATTEN(input => o.items) f
""")
for row in cursor.fetchall():
print(f"Order {row[0]}: {row[1]} - ${row[2]:.2f}")

Batch Operations

# Batch insert
data = [
(1, 'Alice', 'alice@example.com'),
(2, 'Bob', 'bob@example.com'),
(3, 'Charlie', 'charlie@example.com')
]
cursor.executemany(
"INSERT INTO users (id, name, email) VALUES (%s, %s, %s)",
data
)
conn.commit()

13.2 JavaScript/Node.js (snowflake-sdk)

Basic Connection

const snowflake = require('snowflake-sdk');
// Create connection
const connection = snowflake.createConnection({
host: 'heliosdb.example.com',
port: 443,
account: 'heliosdb',
username: 'my_user',
password: 'my_password',
warehouse: 'COMPUTE_WH',
database: 'ANALYTICS',
schema: 'PUBLIC'
});
// Connect
connection.connect((err, conn) => {
if (err) {
console.error('Connection failed:', err);
return;
}
console.log('Connected to HeliosDB with ID:', conn.getId());
});

Promise-Based Wrapper

const snowflake = require('snowflake-sdk');
class HeliosDBClient {
constructor(config) {
this.connection = snowflake.createConnection({
host: config.host || 'localhost',
port: config.port || 443,
account: config.account || 'heliosdb',
username: config.user,
password: config.password,
warehouse: config.warehouse,
database: config.database,
schema: config.schema
});
}
async connect() {
return new Promise((resolve, reject) => {
this.connection.connect((err, conn) => {
if (err) reject(err);
else resolve(conn);
});
});
}
async execute(sql, binds = []) {
return new Promise((resolve, reject) => {
this.connection.execute({
sqlText: sql,
binds: binds,
complete: (err, stmt, rows) => {
if (err) reject(err);
else resolve({ stmt, rows });
}
});
});
}
async close() {
return new Promise((resolve, reject) => {
this.connection.destroy((err, conn) => {
if (err) reject(err);
else resolve(conn);
});
});
}
}
// Usage
async function main() {
const client = new HeliosDBClient({
host: 'heliosdb.example.com',
user: 'my_user',
password: 'my_password',
warehouse: 'COMPUTE_WH',
database: 'ANALYTICS'
});
await client.connect();
// Time travel query
const { rows } = await client.execute(`
SELECT * FROM orders
AT(OFFSET => -3600)
LIMIT 10
`);
console.log('Historical orders:', rows);
// FLATTEN query
const { rows: items } = await client.execute(`
SELECT
o.id,
f.value:name::STRING as product
FROM orders o,
LATERAL FLATTEN(input => o.items) f
`);
console.log('Order items:', items);
await client.close();
}
main().catch(console.error);

13.3 Java (JDBC)

import net.snowflake.client.jdbc.SnowflakeBasicDataSource;
import java.sql.*;
public class HeliosDBExample {
public static void main(String[] args) throws SQLException {
// Configure data source
SnowflakeBasicDataSource ds = new SnowflakeBasicDataSource();
ds.setServerName("heliosdb.example.com");
ds.setPortNumber(443);
ds.setAccount("heliosdb");
ds.setUser("my_user");
ds.setPassword("my_password");
ds.setWarehouse("COMPUTE_WH");
ds.setDatabaseName("ANALYTICS");
ds.setSchema("PUBLIC");
// Connect and query
try (Connection conn = ds.getConnection()) {
// Time travel query
String timeTravelSql = """
SELECT * FROM orders
AT(TIMESTAMP => DATEADD(hour, -1, CURRENT_TIMESTAMP()))
LIMIT 10
""";
try (Statement stmt = conn.createStatement();
ResultSet rs = stmt.executeQuery(timeTravelSql)) {
while (rs.next()) {
System.out.println("Order ID: " + rs.getInt("id"));
}
}
// VARIANT query
String variantSql = """
SELECT
id,
data:user_id::INT as user_id,
data:action::STRING as action
FROM events
""";
try (Statement stmt = conn.createStatement();
ResultSet rs = stmt.executeQuery(variantSql)) {
while (rs.next()) {
System.out.println("User: " + rs.getInt("user_id") +
", Action: " + rs.getString("action"));
}
}
}
}
}

13.4 Go (gosnowflake)

package main
import (
"database/sql"
"fmt"
"log"
_ "github.com/snowflakedb/gosnowflake"
)
func main() {
// Connection string for HeliosDB
dsn := fmt.Sprintf(
"%s:%s@%s:%d/%s/%s?warehouse=%s&account=%s",
"my_user",
"my_password",
"heliosdb.example.com",
443,
"ANALYTICS",
"PUBLIC",
"COMPUTE_WH",
"heliosdb",
)
db, err := sql.Open("snowflake", dsn)
if err != nil {
log.Fatal(err)
}
defer db.Close()
// Time travel query
rows, err := db.Query(`
SELECT id, customer_id, total
FROM orders
AT(OFFSET => -3600)
LIMIT 10
`)
if err != nil {
log.Fatal(err)
}
defer rows.Close()
for rows.Next() {
var id, customerID int
var total float64
if err := rows.Scan(&id, &customerID, &total); err != nil {
log.Fatal(err)
}
fmt.Printf("Order %d: Customer %d, Total $%.2f\n", id, customerID, total)
}
}

14. SQL Function Mapping

14.1 Semi-Structured Functions

Snowflake Function | HeliosDB Support | Notes
PARSE_JSON(expr) | Supported | Parse JSON string
TRY_PARSE_JSON(expr) | Supported | Safe JSON parse
TO_JSON(variant) | Supported | Convert to JSON string
TO_VARIANT(expr) | Supported | Convert to VARIANT
OBJECT_CONSTRUCT(k, v, ...) | Supported | Build object
OBJECT_CONSTRUCT_KEEP_NULL(...) | Supported | Build with NULLs
OBJECT_DELETE(obj, key) | Supported | Remove key
OBJECT_INSERT(obj, k, v) | Supported | Add key
OBJECT_KEYS(obj) | Supported | Get keys array
OBJECT_AGG(key, value) | Supported | Aggregate to object
ARRAY_CONSTRUCT(...) | Supported | Build array
ARRAY_AGG(expr) | Supported | Aggregate to array
ARRAY_SIZE(array) | Supported | Array length
ARRAY_CONTAINS(val, array) | Supported | Check membership
ARRAY_APPEND(array, val) | Supported | Append element
ARRAY_CAT(arr1, arr2) | Supported | Concatenate
ARRAY_COMPACT(array) | Supported | Remove NULLs
ARRAY_DISTINCT(array) | Supported | Unique values
ARRAY_INTERSECTION(arr1, arr2) | Supported | Common elements
ARRAY_SLICE(arr, from, to) | Supported | Get slice
GET(variant, key) | Supported | Get by key
GET_PATH(variant, path) | Supported | Get by path
TYPEOF(variant) | Supported | Get type
FLATTEN(...) | Supported | Unnest arrays
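
All of the calls above behave as they do in Snowflake. A combined sketch, assuming an events table whose data VARIANT column carries user, tags, and meta fields (names are illustrative):

-- Build, inspect, and reshape a VARIANT value in one pass
SELECT
  TYPEOF(data) AS top_level_type,
  GET_PATH(data, 'user.address.city')::STRING AS city,
  ARRAY_SIZE(data:tags) AS tag_count,
  OBJECT_INSERT(data:meta, 'migrated', TRUE) AS updated_meta
FROM events;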

14.2 String Functions

Snowflake Function | HeliosDB Support | Notes
CONCAT(s1, s2, ...) | Supported | Concatenate
CONCAT_WS(sep, s1, s2, ...) | Supported | Concat with separator
CONTAINS(s, sub) | Supported | Check substring
STARTSWITH(s, prefix) | Supported | Check prefix
ENDSWITH(s, suffix) | Supported | Check suffix
SPLIT(s, sep) | Supported | Split to array
SPLIT_PART(s, sep, n) | Supported | Get nth part
LISTAGG(col, sep) | Supported | Aggregate strings
TRIM/LTRIM/RTRIM(s) | Supported | Remove whitespace
UPPER/LOWER(s) | Supported | Case conversion
LENGTH/LEN(s) | Supported | String length
SUBSTR/SUBSTRING(s, start, len) | Supported | Extract substring
REPLACE(s, from, to) | Supported | Replace substring
REGEXP_REPLACE(s, pat, rep) | Supported | Regex replace
REGEXP_SUBSTR(s, pat) | Supported | Regex extract
REGEXP_COUNT(s, pat) | Supported | Regex count
REGEXP_LIKE(s, pat) | Supported | Regex match
RPAD/LPAD(s, len, pad) | Supported | Pad string
REVERSE(s) | Supported | Reverse string
INITCAP(s) | Supported | Title case
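
A few of these combined, on hypothetical contact-table columns:

-- Normalize and split string fields
SELECT
  INITCAP(TRIM(full_name)) AS display_name,
  SPLIT_PART(email, '@', 2) AS email_domain,
  REGEXP_REPLACE(phone, '[^0-9]', '') AS digits_only,
  CONCAT_WS('-', country_code, region) AS locale_key
FROM contacts;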

14.3 Date and Time Functions

Snowflake Function | HeliosDB Support | Notes
CURRENT_DATE() | Supported | Current date
CURRENT_TIME() | Supported | Current time
CURRENT_TIMESTAMP() | Supported | Current timestamp
DATEADD(part, n, date) | Supported | Add interval
DATEDIFF(part, d1, d2) | Supported | Date difference
DATE_PART(part, date) | Supported | Extract part
DATE_TRUNC(part, date) | Supported | Truncate date
DAYNAME(date) | Supported | Day name
MONTHNAME(date) | Supported | Month name
YEAR/MONTH/DAY(date) | Supported | Extract component
HOUR/MINUTE/SECOND(ts) | Supported | Extract component
WEEK(date) | Supported | Week number
QUARTER(date) | Supported | Quarter number
TO_DATE(expr) | Supported | Convert to date
TO_TIME(expr) | Supported | Convert to time
TO_TIMESTAMP(expr) | Supported | Convert to timestamp
TO_TIMESTAMP_NTZ(expr) | Supported | No timezone
TO_TIMESTAMP_LTZ(expr) | Supported | Local timezone
TO_TIMESTAMP_TZ(expr) | Supported | With timezone
CONVERT_TIMEZONE(from, to, ts) | Supported | Timezone conversion
LAST_DAY(date) | Supported | Last day of month
ADD_MONTHS(date, n) | Supported | Add months
MONTHS_BETWEEN(d1, d2) | Supported | Month difference
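
Typical calendar arithmetic, using the orders table from earlier examples:

-- Date math that runs unchanged on both systems
SELECT
  DATE_TRUNC('month', order_date) AS order_month,
  DATEDIFF(day, order_date, CURRENT_DATE()) AS age_days,
  LAST_DAY(order_date) AS month_end,
  ADD_MONTHS(order_date, 3) AS renewal_date
FROM orders;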

14.4 Numeric Functions

Snowflake Function | HeliosDB Support | Notes
ABS(n) | Supported | Absolute value
CEIL/CEILING(n) | Supported | Round up
FLOOR(n) | Supported | Round down
ROUND(n, scale) | Supported | Round
TRUNC/TRUNCATE(n, scale) | Supported | Truncate
MOD(n, m) | Supported | Modulo
POWER(base, exp) | Supported | Exponentiation
SQRT(n) | Supported | Square root
EXP(n) | Supported | Exponential
LN/LOG(n) | Supported | Natural log
LOG(base, n) | Supported | Log base
SIGN(n) | Supported | Sign (-1, 0, 1)
RANDOM() | Supported | Random number
UNIFORM(min, max, gen) | Supported | Uniform random
GREATEST(v1, v2, ...) | Supported | Maximum
LEAST(v1, v2, ...) | Supported | Minimum
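
A short example over hypothetical order-line columns:

-- Rounding, bounding, and bucketing
SELECT
  ROUND(price * quantity, 2) AS line_total,
  GREATEST(discount, 0) AS floored_discount,
  MOD(order_id, 16) AS shard_bucket,
  SIGN(balance) AS balance_direction
FROM order_lines;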

14.5 Conditional Functions

Snowflake Function | HeliosDB Support | Notes
CASE WHEN ... END | Supported | Conditional
IFF(cond, true, false) | Supported | Inline IF
IFNULL(expr, default) | Supported | NULL handling
NVL(expr, default) | Supported | NULL handling
NVL2(expr, not_null, null) | Supported | NULL handling
NULLIF(e1, e2) | Supported | NULL if equal
COALESCE(e1, e2, ...) | Supported | First non-NULL
ZEROIFNULL(expr) | Supported | Zero if NULL
NULLIFZERO(expr) | Supported | NULL if zero
DECODE(expr, v1, r1, ...) | Supported | Value mapping
TRY_CAST(expr AS type) | Supported | Safe cast
TRY_TO_NUMBER(expr) | Supported | Safe to number
TRY_TO_DATE(expr) | Supported | Safe to date
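
These are the workhorses of defensive migration queries; an illustrative staging-table example:

-- NULL handling and safe casts
SELECT
  IFF(status = 'active', 1, 0) AS is_active,
  COALESCE(nickname, first_name, 'guest') AS display_name,
  TRY_TO_NUMBER(raw_amount) AS amount,
  NULLIF(region, '') AS region_or_null
FROM users_staging;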

14.6 Aggregate Functions

Snowflake Function | HeliosDB Support | Notes
COUNT(*) | Supported | Row count
COUNT(DISTINCT col) | Supported | Distinct count
SUM(col) | Supported | Sum
AVG(col) | Supported | Average
MIN(col) | Supported | Minimum
MAX(col) | Supported | Maximum
MEDIAN(col) | Supported | Median
MODE(col) | Supported | Mode
STDDEV(col) | Supported | Standard deviation
VARIANCE(col) | Supported | Variance
LISTAGG(col, sep) | Supported | String aggregation
ARRAY_AGG(col) | Supported | Array aggregation
OBJECT_AGG(key, value) | Supported | Object aggregation
APPROX_COUNT_DISTINCT(col) | Supported | Approximate distinct
APPROX_PERCENTILE(col, p) | Supported | Approximate percentile
HLL(col) | Supported | HyperLogLog
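
Exact and approximate aggregates side by side, on an illustrative orders table with a category column:

-- Compare exact vs. approximate distinct counts per category
SELECT
  category,
  COUNT(DISTINCT customer_id) AS exact_customers,
  APPROX_COUNT_DISTINCT(customer_id) AS approx_customers,
  MEDIAN(total) AS median_order
FROM orders
GROUP BY category;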

14.7 Window Functions

Snowflake Function | HeliosDB Support | Notes
ROW_NUMBER() | Supported | Row numbering
RANK() | Supported | Ranking with gaps
DENSE_RANK() | Supported | Ranking no gaps
NTILE(n) | Supported | Bucket distribution
LAG(col, n, default) | Supported | Previous row value
LEAD(col, n, default) | Supported | Next row value
FIRST_VALUE(col) | Supported | First in window
LAST_VALUE(col) | Supported | Last in window
NTH_VALUE(col, n) | Supported | Nth in window
PERCENT_RANK() | Supported | Percent rank
CUME_DIST() | Supported | Cumulative distribution
SUM() OVER (...) | Supported | Running sum
AVG() OVER (...) | Supported | Running average
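
A compact example combining offsets and running totals (the same pattern appears in the validation queries of Section 17.4):

-- Per-customer ordering history
SELECT
  customer_id,
  order_date,
  total,
  LAG(total) OVER (PARTITION BY customer_id ORDER BY order_date) AS prev_total,
  SUM(total) OVER (PARTITION BY customer_id ORDER BY order_date) AS running_total
FROM orders;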

15. Known Limitations

15.1 Unsupported Features

Snowflake Feature | Status | Alternative in HeliosDB
Streams | Not supported | Use CDC (Change Data Capture) tables
Tasks | Not supported | Use DBMS_SCHEDULER or external scheduler
Pipes | Not supported | Use COPY INTO with scheduled jobs
Dynamic Tables | Not supported | Use Materialized Views with refresh
Snowpark | Not supported | Use SQL UDFs or external processing
External Functions | Limited | Use HeliosDB REST API
Java/Scala UDFs | Not supported | Use SQL UDFs
Python UDFs | Not supported | Use SQL UDFs
Stored Procedures (JavaScript) | Limited | Use SQL stored procedures
Data Sharing | Not supported | Use cross-database access
Secure Views | Supported | Same syntax
Row Access Policies | Supported | Same syntax
Tags | Partial | Basic tagging supported
Data Classification | Not supported | Manual classification
Search Optimization | Not supported | Use standard indexes

15.2 Stream Migration Alternatives

-- Snowflake Stream approach
CREATE STREAM orders_stream ON TABLE orders;
SELECT * FROM orders_stream; -- Get changes
-- HeliosDB CDC approach
-- Enable CDC on table
ALTER TABLE orders ENABLE CHANGE_TRACKING;
-- Query changes using Time Travel
SELECT *
FROM orders
CHANGES(INFORMATION => DEFAULT)
AT(TIMESTAMP => DATEADD(minute, -5, CURRENT_TIMESTAMP()))
END(TIMESTAMP => CURRENT_TIMESTAMP());
-- Or create a CDC table
CREATE TABLE orders_cdc AS
SELECT *, CURRENT_TIMESTAMP() as captured_at
FROM orders
WHERE 1=0;
-- Scheduled capture using triggers or external job

15.3 Task Migration Alternatives

-- Snowflake Task
CREATE TASK hourly_aggregation
WAREHOUSE = compute_wh
SCHEDULE = 'USING CRON 0 * * * * UTC'
AS
INSERT INTO hourly_stats
SELECT DATE_TRUNC('hour', created_at), COUNT(*)
FROM orders
WHERE created_at >= DATEADD(hour, -1, CURRENT_TIMESTAMP())
GROUP BY 1;
-- HeliosDB alternative using scheduler
-- Option 1: DBMS_SCHEDULER (Oracle-compatible)
BEGIN
DBMS_SCHEDULER.CREATE_JOB(
job_name => 'HOURLY_AGGREGATION',
job_type => 'PLSQL_BLOCK',
job_action => '
BEGIN
INSERT INTO hourly_stats
SELECT DATE_TRUNC(''hour'', created_at), COUNT(*)
FROM orders
WHERE created_at >= DATEADD(hour, -1, CURRENT_TIMESTAMP())
GROUP BY 1;
COMMIT;
END;
',
start_date => SYSTIMESTAMP,
repeat_interval => 'FREQ=HOURLY; BYMINUTE=0',
enabled => TRUE
);
END;
-- Option 2: External cron job calling HeliosDB
-- 0 * * * * /usr/bin/heliosdb-cli -c "INSERT INTO hourly_stats..."

15.4 Pipe Migration Alternatives

-- Snowflake Pipe (continuous ingestion)
CREATE PIPE my_pipe
AUTO_INGEST = TRUE
AS COPY INTO my_table FROM @my_stage;
-- HeliosDB alternative: Scheduled COPY
-- Create a job that runs periodically
BEGIN
DBMS_SCHEDULER.CREATE_JOB(
job_name => 'INGEST_FROM_STAGE',
job_type => 'PLSQL_BLOCK',
job_action => '
BEGIN
EXECUTE IMMEDIATE ''
COPY INTO my_table
FROM @my_stage/incoming/
FILE_FORMAT = (TYPE = ''''JSON'''')
ON_ERROR = ''''CONTINUE''''
'';
COMMIT;
END;
',
start_date => SYSTIMESTAMP,
repeat_interval => 'FREQ=MINUTELY; INTERVAL=5',
enabled => TRUE
);
END;
-- Or use HeliosDB streaming ingestion
-- (Native Kafka integration available)

15.5 Dynamic Table Migration

-- Snowflake Dynamic Table
CREATE DYNAMIC TABLE order_summary
TARGET_LAG = '1 minute'
WAREHOUSE = compute_wh
AS
SELECT
DATE_TRUNC('hour', order_time) as hour,
COUNT(*) as order_count,
SUM(total) as total_revenue
FROM orders
GROUP BY 1;
-- HeliosDB alternative: Materialized View with scheduled refresh
CREATE MATERIALIZED VIEW order_summary AS
SELECT
DATE_TRUNC('hour', order_time) as hour,
COUNT(*) as order_count,
SUM(total) as total_revenue
FROM orders
GROUP BY 1;
-- Schedule refresh
BEGIN
DBMS_SCHEDULER.CREATE_JOB(
job_name => 'REFRESH_ORDER_SUMMARY',
job_type => 'PLSQL_BLOCK',
job_action => 'BEGIN REFRESH MATERIALIZED VIEW order_summary; END;',
repeat_interval => 'FREQ=MINUTELY; INTERVAL=1',
enabled => TRUE
);
END;

16. Performance Considerations

16.1 Query Optimization

-- HeliosDB automatic query optimization
-- No changes needed for most queries
-- Enable parallel query execution
ALTER SESSION SET PARALLEL_QUERY = TRUE;
-- Check query plan
EXPLAIN
SELECT
customer_id,
SUM(total) as total_spent
FROM orders
WHERE order_date >= '2024-01-01'
GROUP BY customer_id;
-- Detailed execution analysis
EXPLAIN ANALYZE
SELECT
customer_id,
SUM(total) as total_spent
FROM orders
WHERE order_date >= '2024-01-01'
GROUP BY customer_id;

16.2 Clustering and Indexes

-- Snowflake clustering keys
ALTER TABLE orders CLUSTER BY (customer_id, order_date);
-- HeliosDB equivalent (same syntax supported)
ALTER TABLE orders CLUSTER BY (customer_id, order_date);
-- Additionally, HeliosDB supports traditional indexes
CREATE INDEX idx_orders_customer ON orders(customer_id);
CREATE INDEX idx_orders_date ON orders(order_date);
-- Composite index
CREATE INDEX idx_orders_cust_date ON orders(customer_id, order_date);

16.3 Statistics Collection

-- HeliosDB statistics update
ANALYZE orders;
-- Analyze specific columns
ANALYZE orders(customer_id, order_date, total);
-- Scheduled statistics update
BEGIN
DBMS_SCHEDULER.CREATE_JOB(
job_name => 'NIGHTLY_ANALYZE',
job_type => 'PLSQL_BLOCK',
job_action => 'BEGIN ANALYZE; END;',
repeat_interval => 'FREQ=DAILY; BYHOUR=2',
enabled => TRUE
);
END;

16.4 Result Caching

-- HeliosDB query result caching (automatic)
-- Results are cached based on query patterns
-- Check cache status
SHOW PARAMETER RESULT_CACHE;
-- Force cache bypass for testing
SELECT /*+ NO_RESULT_CACHE */ *
FROM orders
WHERE customer_id = 123;

16.5 Warehouse Sizing Guidelines

Workload Type | Snowflake Size | HeliosDB Recommendation
Light OLTP | X-Small | xs-pool with 8GB RAM
Standard OLTP | Small | small-pool with 16GB RAM
Mixed workload | Medium | medium-pool with 32GB RAM
Analytics | Large | large-pool with 64GB RAM
Heavy analytics | X-Large | xlarge-pool with 128GB RAM
Data science | 2X-Large+ | 2xlarge-pool+ with 256GB+ RAM

16.6 Performance Monitoring

-- Query history analysis
SELECT
query_text,
execution_time,
bytes_scanned,
rows_returned
FROM INFORMATION_SCHEMA.QUERY_HISTORY
WHERE execution_time > 60000 -- Over 60 seconds
ORDER BY execution_time DESC
LIMIT 20;
-- Resource usage
SELECT
warehouse_name,
AVG(execution_time) as avg_time,
MAX(execution_time) as max_time,
SUM(bytes_scanned) as total_bytes
FROM INFORMATION_SCHEMA.QUERY_HISTORY
GROUP BY warehouse_name;

17. Post-Migration Validation

17.1 Data Integrity Validation

Row Count Verification

-- Generate row count comparison queries
SELECT
'SELECT ''' || TABLE_NAME || ''' as table_name, COUNT(*) as row_count FROM ' ||
TABLE_CATALOG || '.' || TABLE_SCHEMA || '.' || TABLE_NAME || ';' as validation_query
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA = 'PUBLIC'
AND TABLE_TYPE = 'BASE TABLE';
-- Run on both Snowflake and HeliosDB, compare results

Checksum Validation

-- Column-level checksum
SELECT
SUM(HASH(id, customer_id, order_date, total)) as table_checksum
FROM orders;
-- Run identical query on both systems and compare

17.2 VARIANT Data Validation

-- Verify VARIANT columns migrated correctly
SELECT
id,
data:key1::STRING as key1,
data:key2::NUMBER as key2,
TYPEOF(data:nested_array) as nested_type
FROM variant_table
LIMIT 100;
-- Compare structure analysis
SELECT
TYPEOF(data) as data_type,
COUNT(*) as occurrences
FROM variant_table
GROUP BY 1;

17.3 Time Travel Validation

-- Verify Time Travel works
SELECT COUNT(*)
FROM orders
AT(OFFSET => -3600);
-- Verify retention is preserved
SHOW TABLES LIKE 'orders';
-- Check DATA_RETENTION_TIME_IN_DAYS column

17.4 Query Result Validation

-- Create validation queries
-- Run on both Snowflake and HeliosDB
-- Simple aggregation
SELECT
DATE_TRUNC('month', order_date) as month,
COUNT(*) as order_count,
SUM(total) as total_revenue
FROM orders
WHERE order_date >= '2024-01-01'
GROUP BY 1
ORDER BY 1;
-- FLATTEN validation
SELECT
COUNT(*) as total_items,
SUM(f.value:price::FLOAT) as total_value
FROM orders o,
LATERAL FLATTEN(input => o.items) f;
-- Window function validation
SELECT
customer_id,
order_date,
total,
ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY order_date) as order_num,
SUM(total) OVER (PARTITION BY customer_id ORDER BY order_date) as running_total
FROM orders
WHERE customer_id = 1234;

17.5 Application Validation Checklist

Category | Validation Item | Status
Connection | Application connects successfully | [ ]
Connection | Connection pooling works | [ ]
Queries | Basic SELECT queries work | [ ]
Queries | VARIANT queries return correct data | [ ]
Queries | FLATTEN operations work | [ ]
Queries | Time Travel queries work | [ ]
DML | INSERT operations work | [ ]
DML | UPDATE operations work | [ ]
DML | DELETE operations work | [ ]
DDL | CREATE TABLE works | [ ]
DDL | ALTER TABLE works | [ ]
Warehouse | Warehouse suspend/resume works | [ ]
Data Loading | COPY INTO works | [ ]
Performance | Query times are acceptable | [ ]
Performance | Batch operations perform well | [ ]
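
One way to work through the data-integrity rows of this checklist is a small script that runs the same counts against both systems. A sketch, assuming both endpoints accept the standard Snowflake connector and that the table list (hypothetical here) fits in memory:

import snowflake.connector

TABLES = ["orders", "customers", "events"]  # hypothetical table list

def row_counts(conn_params, tables):
    """Return {table: row_count} for one system."""
    conn = snowflake.connector.connect(**conn_params)
    try:
        cur = conn.cursor()
        return {t: cur.execute(f"SELECT COUNT(*) FROM {t}").fetchone()[0]
                for t in tables}
    finally:
        conn.close()

snowflake_counts = row_counts({"account": "xy12345.us-east-1", "user": "my_user",
                               "password": "my_password", "database": "MY_DATABASE",
                               "schema": "PUBLIC", "warehouse": "COMPUTE_WH"}, TABLES)
helios_counts = row_counts({"host": "heliosdb.example.com", "port": 443,
                            "account": "heliosdb", "user": "my_user",
                            "password": "my_password", "database": "MY_DATABASE",
                            "schema": "PUBLIC", "warehouse": "COMPUTE_WH"}, TABLES)

for table in TABLES:
    status = "OK" if snowflake_counts[table] == helios_counts[table] else "MISMATCH"
    print(f"{table}: {snowflake_counts[table]} vs {helios_counts[table]} -> {status}")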

18. Common Issues and Troubleshooting

18.1 Connection Issues

Issue | Cause | Solution
Connection refused | Server not running | Check HeliosDB status
Authentication failed | Wrong credentials | Verify username/password
SSL/TLS error | Certificate issue | Verify SSL configuration
Warehouse not found | Warehouse doesn't exist | Create warehouse or check name
Timeout | Long-running query | Increase timeout or optimize query

Debug Connection

import snowflake.connector
import logging
# Enable debug logging
logging.basicConfig(level=logging.DEBUG)
try:
conn = snowflake.connector.connect(
host="heliosdb.example.com",
port=443,
account="heliosdb",
user="my_user",
password="my_password"
)
print("Connection successful")
except Exception as e:
print(f"Connection failed: {e}")

18.2 SQL Compatibility Issues

QUALIFY Clause Issues

-- If QUALIFY causes issues (should work in HeliosDB)
-- Original
SELECT *
FROM (
SELECT *, ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY order_date DESC) as rn
FROM orders
)
QUALIFY rn = 1;
-- Alternative if needed
SELECT * FROM (
SELECT *, ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY order_date DESC) as rn
FROM orders
) sub
WHERE rn = 1;

MATCH_RECOGNIZE Issues

-- MATCH_RECOGNIZE is not supported
-- Use window functions instead
-- Snowflake MATCH_RECOGNIZE
SELECT *
FROM events
MATCH_RECOGNIZE (
PARTITION BY user_id
ORDER BY event_time
MEASURES ...
);
-- HeliosDB alternative using window functions
WITH lagged AS (
SELECT *,
LAG(event_type) OVER (PARTITION BY user_id ORDER BY event_time) as prev_event,
LAG(event_time) OVER (PARTITION BY user_id ORDER BY event_time) as prev_time
FROM events
)
SELECT *
FROM lagged
WHERE event_type = 'purchase' AND prev_event = 'view';

18.3 VARIANT Issues

Path Notation Differences

-- Verify path notation works correctly
SELECT
data:simple_key, -- Basic key
data:nested.key, -- Nested with dot
data:nested:key, -- Nested with colon (Snowflake style)
data:"key-with-dashes", -- Special characters
data:array[0], -- Array access
data:array[0].nested -- Combined
FROM variant_table;

Type Casting Issues

-- If type casting fails, use TRY_CAST
SELECT
TRY_CAST(data:id AS INT) as id,
TRY_CAST(data:amount AS FLOAT) as amount,
TRY_CAST(data:date AS DATE) as date_value
FROM variant_table;

18.4 Time Travel Issues

Issue | Cause | Solution
Data too old | Retention exceeded | Increase retention period
Statement ID not found | ID expired or invalid | Use timestamp instead
UNDROP fails | Object already exists | Rename existing object first
-- Check if data is available for Time Travel
SELECT
TABLE_NAME,
RETENTION_TIME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME = 'orders';
-- If UNDROP fails due to existing object
ALTER TABLE orders RENAME TO orders_temp;
UNDROP TABLE orders;
-- Compare and merge if needed

18.5 Performance Issues

-- Identify slow queries
SELECT
query_text,
execution_time,
bytes_scanned,
compilation_time
FROM INFORMATION_SCHEMA.QUERY_HISTORY
WHERE execution_time > 30000
ORDER BY execution_time DESC;
-- Check for missing indexes
EXPLAIN ANALYZE
SELECT * FROM orders WHERE customer_id = 12345;
-- Look for table scans that should be index scans
-- Create index if needed
CREATE INDEX idx_orders_customer ON orders(customer_id);

19. Rollback Procedures

19.1 Pre-Migration Rollback Preparation

Before migration, ensure rollback capability:

-- Create backup of critical tables in Snowflake
CREATE TABLE orders_backup CLONE orders;
CREATE TABLE customers_backup CLONE customers;
-- Document warehouse configurations
SHOW WAREHOUSES;
-- Export DDL for recreation
SELECT GET_DDL('TABLE', 'orders');
SELECT GET_DDL('WAREHOUSE', 'COMPUTE_WH');

19.2 Rollback Steps

Step 1: Stop Application Traffic to HeliosDB

# Update load balancer or DNS to stop traffic to HeliosDB
# Update application connection strings back to Snowflake

Step 2: Export Any New Data from HeliosDB

-- Export any data created in HeliosDB during migration period
COPY INTO @export_stage/new_orders/
FROM (
SELECT * FROM orders
WHERE created_at > '2024-01-15 00:00:00' -- Migration start time
)
FILE_FORMAT = (TYPE = 'PARQUET');

Step 3: Restore Snowflake Connection

# Revert to Snowflake connection
conn = snowflake.connector.connect(
account="xy12345.us-east-1", # Back to Snowflake
user="my_user",
password="my_password",
warehouse="COMPUTE_WH",
database="MY_DATABASE"
)

Step 4: Import New Data to Snowflake (if needed)

-- Import any data created during HeliosDB period
COPY INTO orders
FROM @import_stage/new_orders/
FILE_FORMAT = (TYPE = 'PARQUET');

19.3 Rollback Checklist

Step | Action | Status
1 | Stop HeliosDB writes | [ ]
2 | Verify Snowflake connectivity | [ ]
3 | Update connection strings | [ ]
4 | Test application connectivity | [ ]
5 | Sync any delta data | [ ]
6 | Validate data integrity | [ ]
7 | Resume Snowflake operations | [ ]
8 | Notify stakeholders | [ ]

19.4 Rollback Decision Criteria

Consider rollback if:

  • Data integrity issues that cannot be resolved within SLA
  • Performance degradation >50% on critical queries
  • Critical features not working (VARIANT queries, Time Travel)
  • Application functionality failures affecting users

Do NOT rollback for:

  • Minor performance differences (optimize instead)
  • Non-critical feature differences
  • Issues with clear workarounds
  • Learning curve adjustments

Appendix A: Quick Reference Card

Essential Commands

-- Connect using Snowflake connector
-- Only change: host and account parameters
-- Time Travel
SELECT * FROM table AT(TIMESTAMP => '...');
SELECT * FROM table AT(OFFSET => -3600);
-- VARIANT access
SELECT data:field::TYPE FROM table;
-- FLATTEN
SELECT f.value FROM table, LATERAL FLATTEN(input => array_col) f;
-- Warehouse management
CREATE WAREHOUSE wh WITH WAREHOUSE_SIZE = 'MEDIUM';
ALTER WAREHOUSE wh SUSPEND;
ALTER WAREHOUSE wh RESUME;
-- COPY INTO
COPY INTO table FROM @stage FILE_FORMAT = (TYPE = '...');

Migration Summary

Component | Effort | Notes
Connection strings | Low | Change host/account only
SQL queries | Low | 90%+ compatible
VARIANT data | Low | Identical syntax
Time Travel | Low | Identical syntax
Virtual Warehouses | Low | Same operations
Streams | High | Redesign to CDC
Tasks | Medium | Use scheduler
Snowpark | High | Rewrite to SQL

Appendix B: Migration Timeline Template

Week | Phase | Activities
1 | Assessment | Inventory Snowflake environment, identify complexity
2 | Planning | Design migration strategy, identify unsupported features
3 | Setup | Provision HeliosDB, configure warehouses
4-5 | Schema Migration | Export DDL, create objects in HeliosDB
6-8 | Data Migration | COPY INTO data, verify VARIANT migration
9 | Testing | Validate SQL compatibility, Time Travel, FLATTEN
10 | Application Update | Update connection strings, test applications
11 | Cutover | Switch production to HeliosDB
12 | Stabilization | Monitor, optimize, resolve issues


Document Version History:

Version | Date | Changes
1.0 | January 2026 | Initial release