Redis Caching System
Overview
CONA uses Redis as an in-memory caching layer to significantly improve application performance by reducing database query latency. Redis stores frequently accessed data in memory, providing sub-millisecond access times for cached data.

Why Redis?
Performance Benefits
- Query Latency Reduction: Reduces database query time from ~100ms to ~5ms
- Workflow Throughput: Improves Temporal workflow performance by reducing I/O bottlenecks
- User Experience: Faster page loads and real-time data updates
- Cost Efficiency: Reduces database load, allowing for smaller database instances
Use Cases in CONA
- Chart of Accounts Caching: Frequently accessed account data and lists
- Organization Settings: User preferences and configuration data
- Posting Matrix Rules: Complex accounting rules that are expensive to compute
- Integration Data: Cached results from external API calls
- Session Data: User authentication and session information
Contents

This document covers:
- Why Redis is used - Performance benefits and use cases
- Architecture overview - How Redis fits into the system
- Configuration details - Production and development setup
- Implementation patterns - Code examples and best practices
- Cache version management - Automatic stale cache detection and invalidation
- Monitoring integration - How to connect with Redis Insights
- Troubleshooting guide - Common issues and solutions
- Security considerations - Access control and data protection
- Cost optimization - Current costs and optimization strategies
Redis Configuration
Production Environment
Provider: Redis Cloud

Configuration:
- Database: 13661105
- Subscription: 2958056
- Region: Frankfurt (fra)
- Memory: 250MB (Dataset: 125MB, Total: 250MB)
- High Availability: Single zone
- Connections: 30 concurrent connections
- Persistence: RDB snapshots every 6 hours
- Network Cap: 100GB/month

Connection string format: `rediss://:{REDIS_PASSWORD}@{REDIS_HOST}:{REDIS_PORT}`

GUI client: Redis Insight
Development Environment
Local Redis Instance: The local instance runs in a Docker container. To spin it up, run `pnpm run redis:start:dev`.
Environment Variables
Required Variables:
- `REDIS_HOST` - Redis server hostname
- `REDIS_PORT` - Redis server port
- `REDIS_PASSWORD` - Redis authentication password
- `REDIS_TLS` - Set to `"true"` for TLS connections (production)
- `REDIS_ENV` - Environment identifier for cache key prefixing (defaults to no prefix in production)
- `REDIS_ENABLED` - Set to `"false"` to disable Redis (all cache operations will be no-ops)
- `REDIS_CACHE_VERSION` - Manual cache version override (highest priority); if not set, automatically uses `VERCEL_GIT_COMMIT_SHA` (first 8 chars) on Vercel, falling back to `"1"` if neither is available. Used to detect stale cache after Redis restarts.
Cache Version Management
CONA implements automatic cache version management to prevent serving stale data after Redis restarts or deployments.

How It Works
- On Redis Startup: The system checks the current cache version stored in Redis
- Version Comparison: Compares stored version with expected version from environment
- Automatic Flush: If versions don’t match, the entire cache is flushed to prevent stale data
- Version Update: The new version is stored in Redis for future checks
Version Sources (Priority Order)
- Manual Override (`REDIS_CACHE_VERSION`): Highest priority, set explicitly
- Vercel Git Commit (`VERCEL_GIT_COMMIT_SHA`): Automatically available on Vercel deployments
- Default Fallback (`"1"`): Used when no other version is available
Configuration Examples
Production (Automatic): No extra configuration is needed; the version is derived from `VERCEL_GIT_COMMIT_SHA`.

When Cache Gets Flushed

- Deployment: A new `VERCEL_GIT_COMMIT_SHA` triggers an automatic flush
- Manual Version Change: Updating `REDIS_CACHE_VERSION` flushes the cache
- First Run: Initial Redis connection sets the version (no flush)
- Version Mismatch: Any mismatch between stored and expected version
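For the manual-override case, the variable can be set in the deployment environment. The value below is a hypothetical example, not a real CONA setting:

```shell
# Hypothetical .env entry forcing a cache flush on the next deploy,
# e.g. alongside a schema migration. Any new value triggers the flush.
REDIS_CACHE_VERSION="schema-v2"
```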
Implementation
Import Redis Functions
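The module path and export names below are illustrative assumptions; CONA's actual Redis module may expose different names.

```typescript
// Hypothetical import of the cache helpers discussed in this document.
import { startRedis, getCache, setCache, deleteCache } from "@/lib/redis";
```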
Redis Client Initialization
Redis is automatically initialized on application startup via the `startRedis()` function. This function:
- Connection Management: Handles Redis connection with automatic retry logic
- Error Handling: Gracefully handles connection failures and returns status
- Version Checking: Automatically checks and updates cache version on connection
- Reconnection: Implements exponential backoff retry strategy (4 attempts max)
- Return Value: Returns a `RedisStartResult` object with connection status

Retry configuration:
- Initial delay: 4 seconds
- Max attempts: 4
- Backoff: Geometric progression (4s, 8s, 16s, 32s)
- Total timeout: ~65 seconds (sum of all delays + 5s buffer)
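The retry schedule above can be expressed as a small calculation. This is a sketch of the documented parameters only; the helper names are hypothetical:

```typescript
// Documented retry schedule: 4 attempts, geometric backoff starting at 4s.
const INITIAL_DELAY_MS = 4_000;
const MAX_ATTEMPTS = 4;

// Delays per attempt: 4s, 8s, 16s, 32s.
export function retryDelaysMs(): number[] {
  return Array.from({ length: MAX_ATTEMPTS }, (_, i) => INITIAL_DELAY_MS * 2 ** i);
}

// Total budget: sum of all delays plus the 5s buffer (~65s, as documented).
export function totalTimeoutMs(): number {
  return retryDelaysMs().reduce((sum, d) => sum + d, 0) + 5_000;
}
```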
Core Cache Functions
All cache functions include:
- Graceful Degradation: Return `null` or no-op if Redis is unavailable
- Environment Prefixing: Automatically prefix keys with `REDIS_ENV` in non-production environments
- Error Handling: Log warnings but don’t throw errors (allows the app to continue)
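A minimal sketch of these three behaviors, assuming a hypothetical `getCache` wrapper around a client exposing only `get` (names and interface are illustrative, not CONA's actual API):

```typescript
// Illustrative cache-get wrapper: graceful degradation, env prefixing,
// and warn-don't-throw error handling.

interface MinimalClient {
  get(key: string): Promise<string | null>;
}

// Non-production environments get a REDIS_ENV prefix; production keys are bare.
export function prefixKey(key: string, env?: string): string {
  return env ? `${env}:${key}` : key;
}

export async function getCache(
  client: MinimalClient | null,
  key: string,
  env?: string,
): Promise<string | null> {
  if (!client) return null; // Redis disabled or unavailable: no-op, app continues
  try {
    return await client.get(prefixKey(key, env));
  } catch (err) {
    console.warn("cache get failed", err); // log a warning, never throw
    return null;
  }
}
```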
Cache Key Conventions
`{domain}:{type}:{organizationId}:{identifier}`

For example, a chart-of-accounts list for one organization might be keyed `accounts:list:{organizationId}:all`.
Bulk Cache Invalidation Functions
CONA provides specialized functions for invalidating related caches.

Usage Patterns
1. Cache-Aside Pattern
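A cache-aside sketch under assumed names: check the cache first, fall back to the database on a miss, then populate the cache with a TTL. The client interface, key, and 5-minute TTL are illustrative assumptions:

```typescript
// Cache-aside: read from cache, fall back to DB, then populate the cache.
interface CacheClient {
  get(key: string): Promise<string | null>;
  set(key: string, value: string, ttlSeconds: number): Promise<void>;
}

export async function getAccounts(
  cache: CacheClient,
  orgId: string,
  loadFromDb: (orgId: string) => Promise<object>,
): Promise<object> {
  const key = `accounts:list:${orgId}:all`; // follows the documented key convention
  const hit = await cache.get(key);
  if (hit) return JSON.parse(hit);          // cache hit: skip the database entirely
  const data = await loadFromDb(orgId);     // cache miss: query the database
  await cache.set(key, JSON.stringify(data), 300); // hypothetical 5-minute TTL
  return data;
}
```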
2. Write-Through Pattern
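A write-through sketch, again with assumed names: the database is written first as the source of truth, then the cache is refreshed in the same operation so readers never see stale data. The key and 1-hour TTL are illustrative:

```typescript
// Write-through: update the DB, then immediately refresh the cache.
interface WriteCache {
  set(key: string, value: string, ttlSeconds: number): Promise<void>;
}

export async function updateSettings(
  cache: WriteCache,
  orgId: string,
  settings: object,
  saveToDb: (orgId: string, s: object) => Promise<void>,
): Promise<void> {
  await saveToDb(orgId, settings);                       // 1. write to the source of truth
  const key = `settings:org:${orgId}:current`;           // hypothetical key
  await cache.set(key, JSON.stringify(settings), 3600);  // 2. refresh the cache
}
```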
3. Cache Invalidation Strategies
Monitoring & Debugging
Redis Insights Integration
Production Redis Cloud Dashboard: https://cloud.redis.io/#/databases/13661105/subscription/2958056/view-bdb/configuration

Key Metrics to Monitor:
- Memory Usage: Currently 5.2MB / 250MB (2.1%) - excellent utilization
- Hit Rate: Target >90% cache hit rate
- Connection Count: Monitor concurrent connections
- Key Count: Track number of cached keys
- Operations per Second: Monitor Redis performance
- Network Usage: 0GB / 100GB monthly cap (0% used)
- Cache Version: Check the `cache:version` key to see the current version
Redis CLI Access
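A connection template, assuming the documented environment variables (substitute real values; this is not runnable as-is):

```shell
# Connect to the production instance; --tls matches REDIS_TLS=true
redis-cli -h {REDIS_HOST} -p {REDIS_PORT} -a {REDIS_PASSWORD} --tls

# Once connected, a quick health check:
#   PING                 -> PONG
#   GET cache:version    -> current cache version
```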
Application Logging
The Redis implementation includes comprehensive logging:
- Connection Events: Logs when Redis connects, reconnects, or fails
- Version Checks: Logs cache version initialization and mismatches
- Cache Flushes: Logs when cache is flushed due to version mismatch
- Error Handling: Logs warnings when cache operations fail (allows graceful degradation)
Example log messages:
- "Redis connection established successfully"
- "Cache version initialized"
- "Cache version mismatch - flushing cache"
- "Cache flushed due to version mismatch"
Best Practices
1. TTL (Time To Live) Strategy
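One way to structure a TTL strategy is a single constants object, tiered by how volatile and how expensive each data type is. The specific values below are illustrative examples, not CONA's actual settings:

```typescript
// Illustrative TTL tiers (seconds). Shorter for volatile or externally
// sourced data, longer for expensive-to-compute, rarely changing data.
export const TTL_SECONDS = {
  session: 60 * 15,        // user sessions: volatile
  accounts: 60 * 5,        // chart of accounts lists: updated frequently
  postingMatrix: 60 * 60,  // posting matrix rules: expensive, change rarely
  integration: 60 * 2,     // external API results: keep short to limit staleness
} as const;
```

Centralizing TTLs this way keeps expiration policy in one reviewable place instead of scattered across call sites.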
2. Cache Key Design
- Hierarchical: Use colons to separate levels (`domain:type:org:id`)
- Consistent: Follow the same pattern across all cache keys
- Descriptive: Make keys self-documenting
- Environment-aware: Automatically prefixed with `REDIS_ENV` in non-production
3. Error Handling
All cache functions are designed to fail gracefully:4. Memory Management
- Set appropriate TTLs to prevent memory bloat
- Use bulk invalidation functions for related caches
- Monitor memory usage regularly in Redis Insights
- Cache version management automatically prevents stale data accumulation
5. Cache Version Management
- Automatic on Vercel: No configuration needed - uses git commit SHA
- Manual for migrations: Set `REDIS_CACHE_VERSION` when deploying schema changes
- Development: Uses the default version, or set manually for testing
- Monitor logs: Watch for version mismatch warnings
Troubleshooting
Common Issues
1. Connection Timeouts
   - Check Redis Cloud status
   - Verify network connectivity
   - Check connection pool settings
   - Review retry logs (max 4 attempts with exponential backoff)
2. Memory Issues
   - Monitor memory usage in Redis Insights
   - Check for memory leaks in key patterns
   - Adjust TTL values
   - Use bulk invalidation functions
3. Performance Issues
   - Check cache hit rates
   - Monitor Redis CPU usage
   - Verify key patterns are efficient
   - Check for version mismatch flushes (may cause a temporary performance hit)
4. Data Inconsistency
   - Ensure proper cache invalidation
   - Check for race conditions
   - Verify write-through patterns
   - Review cache version logs for unexpected flushes
5. Stale Cache After Restart
   - Cache version management should automatically handle this
   - Check logs for version mismatch warnings
   - Verify `REDIS_CACHE_VERSION` or `VERCEL_GIT_COMMIT_SHA` is set correctly
   - Manually flush if needed: `redis-cli FLUSHDB`
Debug Commands
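A few standard `redis-cli` commands that are useful when investigating the issues above (run against a development instance, not production):

```shell
redis-cli INFO memory            # memory usage and fragmentation
redis-cli DBSIZE                 # number of cached keys
redis-cli GET cache:version      # current cache version
redis-cli --scan --pattern 'accounts:*'   # sample keys without blocking (prefer SCAN over KEYS)
```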
Security
Access Control
- Password Protection: All Redis instances use strong passwords
- TLS Encryption: Production Redis uses TLS for secure connections
- Network Isolation: Redis Cloud provides network-level security
- Environment Separation: Development and production use separate instances
- Key Prefixing: Non-production environments use the `REDIS_ENV` prefix to prevent cross-contamination
Data Protection
- No Sensitive Data: Never cache passwords, tokens, or PII
- TTL Enforcement: All cached data has appropriate expiration
- Encryption: Sensitive cached data should be encrypted before storage
- Version Management: Automatic cache flushing prevents stale sensitive data
Cost Optimization
Current Costs
- Redis Cloud: ~$10/month for 250MB memory (estimated based on usage)
- Performance Gain: Reduces database load, allowing smaller DB instances
- ROI: Significant performance improvement for minimal cost
Optimization Strategies
- Right-size Memory: Monitor usage and adjust memory allocation
- Efficient TTLs: Set appropriate expiration times
- Key Compression: Use shorter, efficient key names
- Pattern Cleanup: Regular cleanup of unused keys
- Version Management: Automatic flushing prevents accumulation of stale data
Future Enhancements
Planned Features
- Cache Warming: Pre-populate cache with frequently accessed data
- Distributed Caching: Support for multiple Redis instances
- Cache Analytics: Detailed hit/miss ratio tracking
- Automatic Scaling: Dynamic memory allocation based on usage
- Cache Versioning: Enhanced version management with metadata tracking
Integrations
- Temporal Workflows: Cache workflow state and results
- API Rate Limiting: Use Redis for rate limiting external APIs
- Session Management: Store user sessions in Redis
- Real-time Features: Use Redis for pub/sub messaging