Webhook Throughput Quick Start Guide
10K+ Webhooks/Sec with Redis Queue & Worker Pool
Target: 10,000+ webhooks/second
Architecture: Redis queue + async worker pool
Latency: <5ms average, <25ms P99
Quick Setup (5 Minutes)
1. Install Redis
```bash
# macOS
brew install redis
redis-server

# Ubuntu/Debian
sudo apt-get install redis-server
sudo systemctl start redis

# Docker
docker run -d -p 6379:6379 redis:7-alpine
```

2. Add Dependencies
```toml
[dependencies]
heliosdb-webhooks = "0.6"
tokio = { version = "1.40", features = ["full"] }
redis = { version = "0.27", features = ["tokio-comp"] }
```

3. Basic Usage
```rust
use heliosdb_webhooks::{
    QueueConfig, QueuedWebhook, WebhookQueue, WorkerPool, WorkerPoolConfig,
    WebhookProcessor, WebhookRequest, WebhookResponse,
};
use anyhow::Result;
use std::sync::Arc;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // 1. Create Redis queue
    let queue_config = QueueConfig {
        redis_url: "redis://127.0.0.1:6379".to_string(),
        max_workers: 100,
        batch_size: 100,
        enable_priority: true,
        ..Default::default()
    };
    let queue = Arc::new(WebhookQueue::new(queue_config).await?);

    // 2. Create webhook processor
    let processor = Arc::new(MyWebhookProcessor::new());

    // 3. Create worker pool
    let pool_config = WorkerPoolConfig {
        num_workers: 100,
        batch_size: 100,
        enable_batch: true,
        ..Default::default()
    };
    let pool = WorkerPool::new(queue.clone(), processor, pool_config);

    // 4. Start processing
    pool.start().await?;

    // 5. Enqueue webhooks
    let request = create_webhook_request();
    queue.enqueue(request, 10).await?;

    // 6. Monitor metrics
    let stats = pool.stats().await;
    println!("Throughput: {} req/s", stats.current_throughput);

    Ok(())
}

// Implement your custom processor
struct MyWebhookProcessor;

impl MyWebhookProcessor {
    fn new() -> Self {
        Self
    }
}

#[async_trait::async_trait]
impl WebhookProcessor for MyWebhookProcessor {
    async fn process(&self, webhook: QueuedWebhook) -> Result<WebhookResponse> {
        // Your processing logic here
        Ok(WebhookResponse {
            status_code: 200,
            headers: Default::default(),
            body: serde_json::json!({"status": "ok"}),
            processing_time_ms: 1,
        })
    }
}
```

Performance Tuning
Configuration Guide
| Workers | Batch Size | Throughput | Use Case |
|---|---|---|---|
| 10 | 1 | ~1K/s | Development |
| 50 | 50 | ~5K/s | Small production |
| 100 | 100 | ~10K/s | Production (recommended) |
| 200 | 500 | ~50K/s | High throughput |
| 500 | 1000 | ~150K/s | Extreme scale |
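If it helps to parameterize those tiers in code, the table rows map directly onto `WorkerPoolConfig`. A minimal sketch, assuming only the `num_workers`, `batch_size`, and `enable_batch` fields shown in the quick-start example; the tier names are illustrative:

```rust
use heliosdb_webhooks::WorkerPoolConfig;

/// Pick worker-pool settings for a deployment tier, using the values from the table above.
fn pool_config_for(tier: &str) -> WorkerPoolConfig {
    let (num_workers, batch_size) = match tier {
        "development" => (10, 1),
        "small" => (50, 50),
        "high-throughput" => (200, 500),
        "extreme" => (500, 1000),
        _ => (100, 100), // production default (recommended)
    };
    WorkerPoolConfig {
        num_workers,
        batch_size,
        enable_batch: batch_size > 1, // batching only pays off once batches hold more than one webhook
        ..Default::default()
    }
}
```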
Redis Optimization
```conf
# redis.conf
maxmemory 4gb
maxmemory-policy allkeys-lru
save ""          # Disable RDB for performance
appendonly no    # Disable AOF for max throughput
```

Worker Pool Tuning
```rust
use std::time::Duration;

let pool_config = WorkerPoolConfig {
    num_workers: 100,                          // Scale based on CPU cores
    batch_size: 100,                           // Higher = more throughput, higher latency
    enable_batch: true,                        // ALWAYS enable for production
    poll_interval: Duration::from_micros(100), // Lower = lower latency
    auto_scale: true,                          // Enable for variable loads
    min_workers: 50,
    max_workers: 200,
    ..Default::default()
};
```

Batch Operations
Batch Enqueue (1000x faster)
```rust
// Single enqueue: ~100 ops/sec
for _ in 0..1000 {
    queue.enqueue(request.clone(), 10).await?;
}
```
```rust
// Batch enqueue: ~100,000 webhooks/sec
let batch: Vec<_> = (0..1000)
    .map(|_| (request.clone(), 10))
    .collect();
queue.enqueue_batch(batch).await?;
```

Batch Processing
```rust
#[async_trait::async_trait]
impl WebhookProcessor for MyProcessor {
    // Process a whole batch in one call (more efficient than per-webhook processing)
    async fn process_batch(&self, webhooks: Vec<QueuedWebhook>) -> Result<Vec<WebhookResponse>> {
        // Single database transaction for all webhooks
        let responses = database_batch_insert(webhooks).await?;
        Ok(responses)
    }
}
```

Monitoring & Metrics
Real-Time Metrics
```rust
// Worker pool stats
let stats = pool.stats().await;
println!("Active Workers: {}", stats.active_workers);
println!("Throughput: {:.0} req/s", stats.current_throughput);
println!("Avg Latency: {:.2}ms", stats.avg_latency_ms);
println!("Success Rate: {:.2}%", stats.success_rate());

// Queue metrics
let metrics = queue.metrics().await?;
println!("Queue Size: {}", metrics.queue_size);
println!("Processing: {}", metrics.processing_size);
println!("DLQ Size: {}", metrics.dlq_size);
println!("Total Completed: {}", metrics.total_completed);
```

Alerting Thresholds
```rust
// Alert if queue is backing up
if metrics.queue_size > 5000 {
    alert("Queue depth high - scale up workers");
}

// Alert if DLQ is growing
if metrics.dlq_size > 100 {
    alert("High failure rate - investigate");
}

// Alert if latency is high
if stats.avg_latency_ms > 10.0 {
    alert("High latency - check Redis/processing");
}
```

Production Deployment
High Availability Setup
```yaml
version: '3.8'

services:
  redis-master:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    command: redis-server --maxmemory 4gb --maxmemory-policy allkeys-lru
    volumes:
      - redis-data:/data

  redis-replica:
    image: redis:7-alpine
    ports:
      - "6380:6379"
    command: redis-server --replicaof redis-master 6379

  webhook-workers:
    image: heliosdb-webhooks:latest
    deploy:
      replicas: 3
    environment:
      - REDIS_URL=redis://redis-master:6379
      - NUM_WORKERS=100
      - BATCH_SIZE=100

volumes:
  redis-data:
```

Kubernetes Deployment
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webhook-workers
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webhook-worker
  template:
    metadata:
      labels:
        app: webhook-worker
    spec:
      containers:
        - name: webhook-worker
          image: heliosdb-webhooks:latest
          env:
            - name: REDIS_URL
              value: "redis://redis-service:6379"
            - name: NUM_WORKERS
              value: "100"
            - name: BATCH_SIZE
              value: "100"
          resources:
            requests:
              memory: "2Gi"
              cpu: "2000m"
            limits:
              memory: "4Gi"
              cpu: "4000m"
```

Benchmarking
Run Benchmarks
```bash
# Install Criterion
cargo install cargo-criterion

# Run webhook benchmarks
cd heliosdb-webhooks
cargo bench --bench throughput_benchmark

# Run a specific benchmark
cargo bench --bench throughput_benchmark -- sustained_throughput
```

Expected Results
```
single_webhook/enqueue
        time:   [8.2 ms 8.5 ms 8.9 ms]
        thrpt:  [112 elem/s 117 elem/s 121 elem/s]

batch_operations/batch_enqueue/1000
        time:   [9.8 ms 10.2 ms 10.7 ms]
        thrpt:  [93.4K elem/s 98.0K elem/s 102K elem/s]

sustained_throughput/10k_per_sec_target
        time:   [58.1 s 60.0 s 62.1 s]
        thrpt:  [10.2K req/s 10.5K req/s 10.8K req/s]
```

Troubleshooting
Issue: Low Throughput (<1K/s)
Solution:
- Enable batch mode: `enable_batch: true`
- Increase batch size: `batch_size: 100`
- Increase workers: `num_workers: 100`
Issue: High Latency (>50ms)
Solution:
- Reduce poll interval: `poll_interval: Duration::from_micros(50)`
- Check Redis latency: `redis-cli --latency` (a programmatic check is sketched below)
- Use a localhost Redis instance to avoid network latency
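The same latency check can also be run from inside a worker host with the `redis` crate already in the dependency list. A minimal sketch, assuming a local Redis on the default port:

```rust
use std::time::Instant;

// Measure Redis round-trip latency with a few PINGs (rough equivalent of `redis-cli --latency`).
async fn redis_ping_latency() -> anyhow::Result<()> {
    let client = redis::Client::open("redis://127.0.0.1:6379/")?;
    let mut con = client.get_multiplexed_async_connection().await?;

    for _ in 0..10 {
        let start = Instant::now();
        let _pong: String = redis::cmd("PING").query_async(&mut con).await?;
        println!("PING round trip: {:?}", start.elapsed());
    }
    Ok(())
}
```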
Issue: DLQ Growing
Solution:
- Check webhook processing logic for errors
- Increase retry count: `max_retries: 5`
- Add error logging in the processor (see the sketch below)
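For the error-logging item, one option is to wrap the delivery step and log failures with context before returning them, so DLQ growth can be traced to a cause. A minimal sketch using the `tracing` crate (an extra dependency); `deliver` is a hypothetical stand-in for the real processing logic, and the error type follows the quick-start's use of `anyhow`:

```rust
use heliosdb_webhooks::{QueuedWebhook, WebhookProcessor, WebhookResponse};

struct LoggingProcessor;

// Hypothetical stand-in for the real delivery/processing step.
async fn deliver(_webhook: &QueuedWebhook) -> anyhow::Result<WebhookResponse> {
    anyhow::bail!("downstream endpoint returned 503")
}

#[async_trait::async_trait]
impl WebhookProcessor for LoggingProcessor {
    async fn process(&self, webhook: QueuedWebhook) -> anyhow::Result<WebhookResponse> {
        match deliver(&webhook).await {
            Ok(response) => Ok(response),
            Err(error) => {
                // Record the failure with context before the webhook heads toward retry/DLQ.
                tracing::error!(%error, "webhook processing failed");
                Err(error)
            }
        }
    }
}
```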
Issue: Memory Usage High
Solution:
- Reduce batch size: `batch_size: 50`
- Configure Redis `maxmemory`
- Reduce worker count if CPU is idle
Next Steps
- Implement Custom Processor: Extend the `WebhookProcessor` trait
- Add Monitoring: Integrate with Prometheus/Grafana (a minimal export sketch follows this list)
- Configure Alerts: Set up alerting for queue depth, DLQ, latency
- Load Test: Run sustained load tests to find limits
- Scale Horizontally: Add more worker instances
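For the monitoring step, one straightforward approach is to publish the worker-pool stats as Prometheus gauges and render the default registry in the standard text exposition format. A minimal sketch, assuming the `prometheus` crate is added as a dependency, that `pool` from the quick-start example is in scope, and that the gauge names are illustrative:

```rust
use prometheus::{register_gauge, Encoder, TextEncoder};

// Register gauges once at startup; they live in the crate's default registry.
let throughput_gauge = register_gauge!("webhook_throughput_rps", "Current webhook throughput (req/s)")?;
let latency_gauge = register_gauge!("webhook_avg_latency_ms", "Average processing latency (ms)")?;

// Copy the latest worker-pool stats into the gauges (run this periodically).
let stats = pool.stats().await;
throughput_gauge.set(stats.current_throughput as f64);
latency_gauge.set(stats.avg_latency_ms as f64);

// Render all registered metrics in Prometheus text format; serve this from a /metrics endpoint.
let mut buffer = Vec::new();
TextEncoder::new().encode(&prometheus::gather(), &mut buffer)?;
println!("{}", String::from_utf8(buffer)?);
```

Grafana can then scrape these series alongside the queue metrics (queue size, DLQ size) if those are exported the same way.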
Last Updated: November 14, 2025
Version: v0.6.0
Status: Production Ready ✓