Redis Caching System

Overview

CONA uses Redis as an in-memory caching layer to significantly improve application performance by reducing database query latency. Redis stores frequently accessed data in memory, providing sub-millisecond access times for cached data.

Why Redis?

Performance Benefits

  • Query Latency Reduction: Reduces database query time from ~100ms to ~5ms
  • Workflow Throughput: Improves Temporal workflow performance by reducing I/O bottlenecks
  • User Experience: Faster page loads and real-time data updates
  • Cost Efficiency: Reduces database load, allowing for smaller database instances

Use Cases in CONA

  1. Chart of Accounts Caching: Frequently accessed account data and lists
  2. Organization Settings: User preferences and configuration data
  3. Posting Matrix Rules: Complex accounting rules that are expensive to compute
  4. Integration Data: Cached results from external API calls
  5. Session Data: User authentication and session information

Architecture

┌─────────────────┐    Cache Miss     ┌─────────────────┐
│   WebApp        │ ────────────────► │   Supabase      │
│   (Next.js)     │                   │   PostgreSQL    │
└─────────────────┘                   └─────────────────┘
         │                                     │
         │ Cache Hit                           │
         ▼                                     │
┌─────────────────┐    Store Result           │
│     Redis       │ ◄─────────────────────────┘
│   (In-Memory)   │
└─────────────────┘

         │ Redis Functions
         ▼
┌─────────────────┐
│   @cona/core    │
│   (Business     │
│    Logic)       │
└─────────────────┘

This documentation provides comprehensive coverage of Redis usage in CONA, including:
  1. Why Redis is used - Performance benefits and use cases
  2. Architecture overview - How Redis fits into the system
  3. Configuration details - Production and development setup
  4. Implementation patterns - Code examples and best practices
  5. Cache version management - Automatic stale cache detection and invalidation
  6. Monitoring integration - How to connect with Redis Insight
  7. Troubleshooting guide - Common issues and solutions
  8. Security considerations - Access control and data protection
  9. Cost optimization - Current costs and optimization strategies

Redis Configuration

Production Environment

Provider: Redis Cloud

Configuration:
  • Database: 13661105
  • Subscription: 2958056
  • Region: Frankfurt (fra)
  • Memory: 250MB (Dataset: 125MB, Total: 250MB)
  • High Availability: Single zone
  • Connections: 30 concurrent connections
  • Persistence: RDB snapshots every 6 hours
  • Network Cap: 100GB/month
Connection Details:
REDIS_HOST=redis-13661105.c1.eu-west-1-1.ec2.cloud.redislabs.com
REDIS_PORT=16462
REDIS_PASSWORD=<secure-password>
REDIS_TLS=true
REDIS_ENV=production
Connection string: rediss://:{REDIS_PASSWORD}@{REDIS_HOST}:{REDIS_PORT}
GUI client: Redis Insight

Development Environment

Local Redis Instance: The local instance runs in a Docker container; start it with pnpm run redis:start:dev.
REDIS_HOST=localhost
REDIS_PORT=6380
REDIS_PASSWORD=redisLoca1passw0rd
REDIS_TLS=false
REDIS_ENV=development

Environment Variables

Required Variables:
  • REDIS_HOST - Redis server hostname
  • REDIS_PORT - Redis server port
  • REDIS_PASSWORD - Redis authentication password
  • REDIS_TLS - Set to "true" for TLS connections (production)
  • REDIS_ENV - Environment identifier used to prefix cache keys (no prefix is applied in production)
Optional Variables:
  • REDIS_ENABLED - Set to "false" to disable Redis (all cache operations will be no-ops)
  • REDIS_CACHE_VERSION - Manual cache version override (highest priority)
    • If not set, automatically uses VERCEL_GIT_COMMIT_SHA (first 8 chars) on Vercel
    • Falls back to "1" if neither is available
    • Used to detect stale cache after Redis restarts

Cache Version Management

CONA implements automatic cache version management to prevent serving stale data after Redis restarts or deployments.

How It Works

  1. On Redis Startup: The system checks the current cache version stored in Redis
  2. Version Comparison: Compares stored version with expected version from environment
  3. Automatic Flush: If versions don’t match, the entire cache is flushed to prevent stale data
  4. Version Update: The new version is stored in Redis for future checks

Version Sources (Priority Order)

  1. Manual Override (REDIS_CACHE_VERSION): Highest priority, set explicitly
  2. Vercel Git Commit (VERCEL_GIT_COMMIT_SHA): Automatically available on Vercel deployments
  3. Default Fallback ("1"): Used when no other version is available
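
The priority order above can be expressed as a small resolver. This is an illustrative sketch (the function name is hypothetical; the actual logic lives inside @cona/core):

```typescript
// Hypothetical sketch of the documented version resolution order.
function resolveCacheVersion(env: Record<string, string | undefined>): string {
  // 1. Manual override has the highest priority
  if (env.REDIS_CACHE_VERSION) return env.REDIS_CACHE_VERSION;
  // 2. Vercel exposes the git commit SHA; the first 8 characters are used
  if (env.VERCEL_GIT_COMMIT_SHA) return env.VERCEL_GIT_COMMIT_SHA.slice(0, 8);
  // 3. Default fallback when neither is set
  return "1";
}

// e.g. const version = resolveCacheVersion(process.env);
```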

Configuration Examples

Production (Automatic):
# No configuration needed - uses VERCEL_GIT_COMMIT_SHA automatically
# Cache automatically invalidates on every deployment
Manual Version Control:
# Set explicit version (useful for database migrations)
REDIS_CACHE_VERSION=2.1.0
Development:
# Uses default "1" or set manually
REDIS_CACHE_VERSION=dev-1

When Cache Gets Flushed

  • Deployment: New VERCEL_GIT_COMMIT_SHA triggers automatic flush
  • Manual Version Change: Updating REDIS_CACHE_VERSION flushes cache
  • First Run: Initial Redis connection sets version (no flush)
  • Version Mismatch: Any mismatch between stored and expected version
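
The flush decision above reduces to a single comparison. A hypothetical sketch (not the @cona/core source) of the check performed on startup:

```typescript
// Hypothetical sketch of the startup version check described above.
// On first run there is no stored version, so nothing is flushed --
// the expected version is simply written. Any mismatch triggers a flush.
function shouldFlushCache(storedVersion: string | null, expectedVersion: string): boolean {
  if (storedVersion === null) return false; // first run: set version, no flush
  return storedVersion !== expectedVersion; // mismatch: flush stale data
}
```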

Implementation

Import Redis Functions

// Import Redis functions from @cona/core
import {
  startRedis,
  getFromCache,
  setInCache,
  deleteFromCache,
  deleteFromCacheByPattern,
  deleteAllChartOfAccountsCache,
  deleteAllPostingMatrixCache,
  deletAllGlDimensionsCache,
  deleteAllOrganizationCache,
  REDIS_DYNAMIC_KEYS,
} from "@cona/core/redis";

Redis Client Initialization

Redis is automatically initialized on application startup via the startRedis() function. This function:
  • Connection Management: Handles Redis connection with automatic retry logic
  • Error Handling: Gracefully handles connection failures and returns status
  • Version Checking: Automatically checks and updates cache version on connection
  • Reconnection: Implements exponential backoff retry strategy (4 attempts max)
  • Return Value: Returns a RedisStartResult object with connection status
// Automatically called on app startup (e.g., in instrumentation.ts)
import { startRedis, type RedisStartResult } from "@cona/core/redis";

const result: RedisStartResult = await startRedis();

// Result structure:
// {
//   connected: boolean,
//   reason: "already-started" | "disabled" | "connected" | "error",
//   message?: string  // Error message if reason is "error"
// }

// Handle result
if (result.connected) {
  if (result.reason === "connected") {
    console.log("✅ Redis connection established successfully");
  } else if (result.reason === "already-started") {
    console.log("ℹ️ Redis already initialized");
  }
} else {
  if (result.reason === "disabled") {
    console.log("ℹ️ Redis is disabled (REDIS_ENABLED=false)");
  } else if (result.reason === "error") {
    console.error("⚠️ Redis connection failed:", result.message);
    // App continues to work without Redis - all cache operations return null
  }
}
Connection Retry Strategy:
  • Initial delay: 4 seconds
  • Max attempts: 4
  • Backoff: Geometric progression (4s, 8s, 16s, 32s)
  • Total timeout: ~65 seconds (sum of all delays + 5s buffer)
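
The retry schedule above (4 s initial delay, doubling per attempt, 4 attempts) can be computed directly; this sketch uses hypothetical helper names:

```typescript
// Sketch of the documented retry schedule: 4s, 8s, 16s, 32s.
function retryDelaysMs(initialMs = 4000, maxAttempts = 4): number[] {
  // Each attempt doubles the previous delay (geometric progression)
  return Array.from({ length: maxAttempts }, (_, i) => initialMs * 2 ** i);
}

// Total wait before giving up: sum of all delays plus a 5s buffer.
function totalTimeoutMs(bufferMs = 5000): number {
  return retryDelaysMs().reduce((sum, d) => sum + d, 0) + bufferMs;
}
```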

Core Cache Functions

All cache functions include:
  • Graceful Degradation: getFromCache returns null and setInCache is a no-op when Redis is disabled or unavailable
  • Environment Prefixing: Keys are automatically prefixed with REDIS_ENV in non-production
  • Error Handling: Read and write operations log warnings instead of throwing (the app continues without cache); delete operations throw so callers can react to failed invalidation
// Get data from cache
export const getFromCache = async <T>(key: string): Promise<T | null> => {
  // Returns null if Redis disabled, unavailable, or on error
  // Automatically handles environment prefixing
};

// Store data in cache
export const setInCache = async <T>(key: string, value: T, ttl: number): Promise<void> => {
  // No-op if Redis disabled or unavailable
  // Automatically handles environment prefixing
};

// Delete specific key
export const deleteFromCache = async (key: string): Promise<void> => {
  // Throws error on failure (caller can handle)
  // Automatically handles environment prefixing
};

// Delete by pattern (wildcard support)
export const deleteFromCacheByPattern = async (pattern: string): Promise<void> => {
  // Uses SCAN for efficient pattern matching
  // Throws error on failure (caller can handle)
  // Automatically handles environment prefixing
};
Environment Key Prefixing: Keys are automatically prefixed with REDIS_ENV in non-production environments:
// Production: key = "organization:details:org-123"
// Development: key = "development:organization:details:org-123"
// Staging: key = "staging:organization:details:org-123"
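
The prefixing rule shown above can be sketched as follows (the helper name is hypothetical; the real logic is internal to @cona/core):

```typescript
// Hypothetical sketch of environment-based key prefixing:
// production keys are left bare, other environments get "<env>:".
function prefixKey(key: string, redisEnv?: string): string {
  if (!redisEnv || redisEnv === "production") return key;
  return `${redisEnv}:${key}`;
}
```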

Cache Key Conventions

// packages/core/src/redis/constants.ts
export const REDIS_DYNAMIC_KEYS = {
  organizationDetails: (organizationId: string) => `organization:details:${organizationId}`,
  chartOfAccountsListOrderedAsc: (organizationId: string, onlyReconcileAccounts: boolean) =>
    `chart_of_accounts:list:ordered:asc:${organizationId}:${onlyReconcileAccounts}`,
  chartOfAccountsSearch: (organizationId: string) => `chart_of_accounts:search:${organizationId}`,
  postingMatricesGrouped: (organizationId: string) => `posting_matrix:grouped:${organizationId}`,
  glDimensionsList: (organizationId: string) => `gl_dimensions:list:${organizationId}`,
  glDimensionsListNotAccount: (organizationId: string) =>
    `gl_dimensions:list:not_account:${organizationId}`,
};
Key Structure: {domain}:{type}:{organizationId}:{identifier}
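
For example, the builders above resolve to concrete keys like this (the constants are reproduced here so the snippet is self-contained):

```typescript
// Subset of REDIS_DYNAMIC_KEYS, copied from the constants above.
const REDIS_DYNAMIC_KEYS = {
  organizationDetails: (organizationId: string) => `organization:details:${organizationId}`,
  chartOfAccountsListOrderedAsc: (organizationId: string, onlyReconcileAccounts: boolean) =>
    `chart_of_accounts:list:ordered:asc:${organizationId}:${onlyReconcileAccounts}`,
};

const orgKey = REDIS_DYNAMIC_KEYS.organizationDetails("org-123");
const coaKey = REDIS_DYNAMIC_KEYS.chartOfAccountsListOrderedAsc("org-123", true);
```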

Bulk Cache Invalidation Functions

CONA provides specialized functions for invalidating related caches:
// Invalidate all chart of accounts caches
await deleteAllChartOfAccountsCache(organizationId);

// Invalidate all posting matrix caches
await deleteAllPostingMatrixCache(organizationId);

// Invalidate all GL dimensions caches
await deletAllGlDimensionsCache(organizationId);

// Invalidate ALL organization caches (use with caution)
await deleteAllOrganizationCache(organizationId);

Usage Patterns

1. Cache-Aside Pattern

// Get data with caching
export async function getChartOfAccountById(organizationId: string, id: string) {
  const cacheKey = REDIS_DYNAMIC_KEYS.chartOfAccountsById(organizationId, id);

  // Try cache first
  const cached = await getFromCache<ChartOfAccount>(cacheKey);
  if (cached) {
    return cached;
  }

  // Cache miss - fetch from database
  const account = await prisma.chartOfAccount.findUnique({
    where: { id, organizationId },
  });

  if (account) {
    // Store in cache for 1 hour
    await setInCache(cacheKey, account, 3600);
  }

  return account;
}

2. Write-Through Pattern

// Update data with cache invalidation
export async function updateChartOfAccount(organizationId: string, id: string, data: UpdateData) {
  // Update database
  const updated = await prisma.chartOfAccount.update({
    where: { id, organizationId },
    data,
  });

  // Invalidate related caches
  await deleteFromCache(REDIS_DYNAMIC_KEYS.chartOfAccountsListOrderedAsc(organizationId, true));
  await deleteFromCache(REDIS_DYNAMIC_KEYS.chartOfAccountsListOrderedAsc(organizationId, false));
  await deleteFromCache(REDIS_DYNAMIC_KEYS.chartOfAccountsSearch(organizationId));
  await deleteFromCache(REDIS_DYNAMIC_KEYS.chartOfAccountsById(organizationId, id));

  return updated;
}

3. Cache Invalidation Strategies

// Invalidate all chart of accounts for an organization
await deleteFromCacheByPattern(`chart_of_accounts:*:${organizationId}:*`);

// Invalidate specific account
await deleteFromCache(REDIS_DYNAMIC_KEYS.chartOfAccountsById(organizationId, id));

// Use bulk invalidation function (recommended)
await deleteAllChartOfAccountsCache(organizationId);

Monitoring & Debugging

Redis Insight Integration

Production Redis Cloud Dashboard: https://cloud.redis.io/#/databases/13661105/subscription/2958056/view-bdb/configuration

Key Metrics to Monitor:
  • Memory Usage: Currently 5.2MB / 250MB (~2%) - ample headroom
  • Hit Rate: Target >90% cache hit rate
  • Connection Count: Monitor concurrent connections
  • Key Count: Track number of cached keys
  • Operations per Second: Monitor Redis performance
  • Network Usage: 0GB / 100GB monthly cap (0% used)
  • Cache Version: Check cache:version key to see current version

Redis CLI Access

# Connect to production Redis
redis-cli -h redis-13661105.c1.eu-west-1-1.ec2.cloud.redislabs.com -p 16462 -a <password> --tls

# List all keys (avoid on busy production instances - KEYS blocks the server; prefer SCAN)
KEYS *

# Get key value
GET "organization:details:org-123"

# Check cache version
GET "cache:version"

# Monitor commands in real-time
MONITOR

# Get memory usage
INFO memory

# Get key statistics
INFO keyspace

Application Logging

The Redis implementation includes comprehensive logging:
  • Connection Events: Logs when Redis connects, reconnects, or fails
  • Version Checks: Logs cache version initialization and mismatches
  • Cache Flushes: Logs when cache is flushed due to version mismatch
  • Error Handling: Logs warnings when cache operations fail (allows graceful degradation)
Check application logs for messages like:
  • "Redis connection established successfully"
  • "Cache version initialized"
  • "Cache version mismatch - flushing cache"
  • "Cache flushed due to version mismatch"

Best Practices

1. TTL (Time To Live) Strategy

// Different TTLs for different data types
const TTL = {
  SHORT: 300, // 5 minutes - frequently changing data
  MODERATE: 1800, // 30 minutes - semi-volatile data
  MEDIUM: 3600, // 1 hour - moderately stable data
  LONG: 86400, // 24 hours - rarely changing data
  VERY_LONG: 604800, // 7 days - static reference data
};

// Usage
await setInCache(key, data, TTL.MEDIUM);

2. Cache Key Design

  • Hierarchical: Use colons to separate levels (domain:type:org:id)
  • Consistent: Follow the same pattern across all cache keys
  • Descriptive: Make keys self-documenting
  • Environment-aware: Automatically prefixed with REDIS_ENV in non-production

3. Error Handling

All cache functions are designed to fail gracefully:
// getFromCache - returns null on error (allows fallback to DB)
const cached = await getFromCache<Data>(key);
if (!cached) {
  // Fallback to database
}

// setInCache - no-op on error (allows app to continue)
await setInCache(key, data, ttl); // Won't throw

// deleteFromCache - throws on error (caller can handle)
try {
  await deleteFromCache(key);
} catch (error) {
  // Handle deletion failure
}

4. Memory Management

  • Set appropriate TTLs to prevent memory bloat
  • Use bulk invalidation functions for related caches
  • Monitor memory usage regularly in Redis Insight
  • Cache version management automatically prevents stale data accumulation

5. Cache Version Management

  • Automatic on Vercel: No configuration needed - uses git commit SHA
  • Manual for migrations: Set REDIS_CACHE_VERSION when deploying schema changes
  • Development: Uses default version or set manually for testing
  • Monitor logs: Watch for version mismatch warnings

Troubleshooting

Common Issues

  1. Connection Timeouts
    • Check Redis Cloud status
    • Verify network connectivity
    • Check connection pool settings
    • Review retry logs (max 4 attempts with exponential backoff)
  2. Memory Issues
    • Monitor memory usage in Redis Insight
    • Check for memory leaks in key patterns
    • Adjust TTL values
    • Use bulk invalidation functions
  3. Performance Issues
    • Check cache hit rates
    • Monitor Redis CPU usage
    • Verify key patterns are efficient
    • Check for version mismatch flushes (may cause temporary performance hit)
  4. Data Inconsistency
    • Ensure proper cache invalidation
    • Check for race conditions
    • Verify write-through patterns
    • Review cache version logs for unexpected flushes
  5. Stale Cache After Restart
    • Cache version management should automatically handle this
    • Check logs for version mismatch warnings
    • Verify REDIS_CACHE_VERSION or VERCEL_GIT_COMMIT_SHA is set correctly
    • Manually flush if needed: redis-cli FLUSHDB

Debug Commands

# Check Redis status
redis-cli ping

# Get detailed info
redis-cli info

# Check specific key
redis-cli get "key-name"

# Check key TTL
redis-cli ttl "key-name"

# Check cache version
redis-cli get "cache:version"

# Monitor all commands
redis-cli monitor

# Flush entire database (use with caution)
redis-cli FLUSHDB

Security

Access Control

  • Password Protection: All Redis instances use strong passwords
  • TLS Encryption: Production Redis uses TLS for secure connections
  • Network Isolation: Redis Cloud provides network-level security
  • Environment Separation: Development and production use separate instances
  • Key Prefixing: Non-production environments use REDIS_ENV prefix to prevent cross-contamination

Data Protection

  • No Sensitive Data: Never cache passwords, tokens, or PII
  • TTL Enforcement: All cached data has appropriate expiration
  • Encryption: Sensitive cached data should be encrypted before storage
  • Version Management: Automatic cache flushing prevents stale sensitive data
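
Where a sensitive value must be cached at all, it can be encrypted before being passed to setInCache. A minimal Node sketch using AES-256-GCM; this is illustrative only and not part of @cona/core (function names are hypothetical):

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Encrypt a value before caching: random 12-byte IV, 16-byte auth tag,
// packed as base64(iv || tag || ciphertext).
function encryptForCache(plaintext: string, key: Buffer): string {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const enc = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return Buffer.concat([iv, cipher.getAuthTag(), enc]).toString("base64");
}

// Decrypt a value read back from the cache.
function decryptFromCache(payload: string, key: Buffer): string {
  const buf = Buffer.from(payload, "base64");
  const decipher = createDecipheriv("aes-256-gcm", key, buf.subarray(0, 12));
  decipher.setAuthTag(buf.subarray(12, 28));
  return Buffer.concat([decipher.update(buf.subarray(28)), decipher.final()]).toString("utf8");
}
```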

Cost Optimization

Current Costs

  • Redis Cloud: ~$10/month for 250MB memory (estimated based on usage)
  • Performance Gain: Reduces database load, allowing smaller DB instances
  • ROI: Significant performance improvement for minimal cost

Optimization Strategies

  1. Right-size Memory: Monitor usage and adjust memory allocation
  2. Efficient TTLs: Set appropriate expiration times
  3. Key Compression: Use shorter, efficient key names
  4. Pattern Cleanup: Regular cleanup of unused keys
  5. Version Management: Automatic flushing prevents accumulation of stale data

Future Enhancements

Planned Features

  1. Cache Warming: Pre-populate cache with frequently accessed data
  2. Distributed Caching: Support for multiple Redis instances
  3. Cache Analytics: Detailed hit/miss ratio tracking
  4. Automatic Scaling: Dynamic memory allocation based on usage
  5. Cache Versioning: Enhanced version management with metadata tracking

Integrations

  1. Temporal Workflows: Cache workflow state and results
  2. API Rate Limiting: Use Redis for rate limiting external APIs
  3. Session Management: Store user sessions in Redis
  4. Real-time Features: Use Redis for pub/sub messaging