Temporal Workflow Architecture
This document explains CONA’s event-driven architecture for importing financial data and processing accounting impacts using Temporal workflows.
Core Concept
CONA uses a two-phase approach to financial data processing:
- Ingestion Phase - Import data from external sources (Shopify, PayPal) and create documents
- Accounting Phase - Generate accounting impact (general ledger entries) for those documents
Architecture Overview
Phase 1: Data Ingestion
Purpose
Transform external financial data into standardized CONA documents with line items.
How It Works
Shopify Integration:
- Fetches orders and payments via Shopify API
- Creates sales invoices, credit notes, and payment documents
- Processes in chunks for memory efficiency
- Handles customers, addresses, line items (products, shipping, discounts)
PayPal Integration:
- Reads transactions from CSV uploads
- Creates payment documents (main payment + optional fee documents)
- Tracks processing status per CSV record
- Handles duplicate detection via paypal_payment_id
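A minimal sketch of how this duplicate check could look, assuming a PostgreSQL documents table with a unique index on paypal_payment_id and the node-postgres client; every name except paypal_payment_id is illustrative:

```typescript
// Hypothetical activity that imports one PayPal CSV row; only paypal_payment_id
// comes from this document, the table layout is illustrative.
import { Pool } from 'pg';

const pool = new Pool(); // connection settings come from the PG* environment variables

export async function createPaypalPaymentDocument(record: {
  paypalPaymentId: string;
  currency: string;
  grossAmount: string;
}): Promise<string | null> {
  // ON CONFLICT DO NOTHING relies on a unique index on paypal_payment_id, so
  // re-processing the same CSV row can never create a second payment document.
  const result = await pool.query(
    `INSERT INTO documents (type, paypal_payment_id, currency, total)
     VALUES ('payment', $1, $2, $3)
     ON CONFLICT (paypal_payment_id) DO NOTHING
     RETURNING id`,
    [record.paypalPaymentId, record.currency, record.grossAmount],
  );
  // null means this payment was already imported (duplicate row or CSV re-upload)
  return result.rows[0]?.id ?? null;
}
```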
Key Operations
Result: Documents are created in the database with:
- Complete line item breakdown
- accounting_ready_at timestamp (signals ready for accounting)
- accounting_ready_version = 1 (version tracking for changes)
- Corresponding job in accounting_work_queue
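The hand-off between the two phases could look roughly like the following sketch, assuming node-postgres; accounting_ready_at, accounting_ready_version, and the accounting_work_queue table come from this document, the remaining names are illustrative:

```typescript
// Sketch of the final step of ingestion: mark the document accounting-ready and
// enqueue its job, inside a transaction owned by the caller.
import { PoolClient } from 'pg';

export async function markDocumentAccountingReady(
  client: PoolClient, // the caller owns the surrounding transaction
  documentId: string,
  organizationId: string,
): Promise<void> {
  // Signal that the document is complete and ready for the accounting phase.
  await client.query(
    `UPDATE documents
        SET accounting_ready_at = now(),
            accounting_ready_version = 1
      WHERE id = $1`,
    [documentId],
  );
  // One queue job per document per organization, enforced by a unique constraint.
  await client.query(
    `INSERT INTO accounting_work_queue (document_id, organization_id, status, version)
     VALUES ($1, $2, 'pending', 1)
     ON CONFLICT (document_id, organization_id) DO NOTHING`,
    [documentId, organizationId],
  );
}
```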
Phase 2: Accounting Processing
Purpose
Convert business documents into accounting entries (debits and credits) using posting matrix rules.
How It Works
The syncAccountingQueueWorkflow runs as an infinite loop:
- Polls the queue for pending jobs (batch of 50)
- Atomically leases jobs using FOR UPDATE SKIP LOCKED
- Processes 3 jobs concurrently
- Creates general ledger entries based on posting rules
- Marks jobs as completed
- Uses continueAsNew to reset workflow history and loop forever
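A minimal sketch of this loop with the Temporal TypeScript SDK; the activity names (pollPendingJobs, processJob), the iteration cap, and the batching details are assumptions rather than the actual implementation:

```typescript
import { proxyActivities, continueAsNew, sleep } from '@temporalio/workflow';
import type * as activities from './activities'; // hypothetical activities module

const { pollPendingJobs, processJob } = proxyActivities<typeof activities>({
  startToCloseTimeout: '5 minutes',
});

const BATCH_SIZE = 50;              // jobs leased per poll
const CONCURRENCY = 3;              // jobs processed at once
const IDLE_POLL_INTERVAL = '10 seconds';
const MAX_ITERATIONS = 500;         // keep workflow history bounded

export async function syncAccountingQueueWorkflow(): Promise<void> {
  for (let i = 0; i < MAX_ITERATIONS; i++) {
    // Lease up to BATCH_SIZE pending jobs (FOR UPDATE SKIP LOCKED inside the activity).
    const jobs = await pollPendingJobs(BATCH_SIZE);

    if (jobs.length === 0) {
      await sleep(IDLE_POLL_INTERVAL); // idle: wait before polling again
      continue;
    }

    // Process the leased jobs in small concurrent slices.
    for (let start = 0; start < jobs.length; start += CONCURRENCY) {
      const slice = jobs.slice(start, start + CONCURRENCY);
      await Promise.all(slice.map((job) => processJob(job)));
    }
  }
  // Reset workflow history and keep looping forever.
  await continueAsNew<typeof syncAccountingQueueWorkflow>();
}
```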
Job State Machine
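In outline, a job moves through the states below; pending, completed, and failed appear elsewhere in this document, while processing is an assumed name for the leased state:

```typescript
// Illustrative job states and transitions, reconstructed from the behaviour
// described in this document.
export type JobStatus = 'pending' | 'processing' | 'completed' | 'failed';

export const JOB_TRANSITIONS: Record<JobStatus, JobStatus[]> = {
  pending: ['processing'],                        // worker leases the job
  processing: ['completed', 'pending', 'failed'], // success, transient retry or stale lease, or attempts exhausted
  completed: [],                                  // terminal
  failed: ['pending'],                            // manual reset after investigation
};
```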
Processing Flow
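The lease step at the heart of the flow can be expressed as a single UPDATE built around FOR UPDATE SKIP LOCKED. A sketch of the activity behind pollPendingJobs, assuming a PostgreSQL-backed queue and node-postgres; accounting_work_queue comes from this document, the column names are illustrative:

```typescript
import { Pool } from 'pg';

const pool = new Pool();

export async function pollPendingJobs(batchSize: number) {
  // SKIP LOCKED lets several workers poll the same table without blocking on
  // each other or leasing the same row twice.
  const { rows } = await pool.query(
    `UPDATE accounting_work_queue
        SET status = 'processing', leased_at = now()
      WHERE id IN (
            SELECT id
              FROM accounting_work_queue
             WHERE status = 'pending'
             ORDER BY created_at
             LIMIT $1
             FOR UPDATE SKIP LOCKED)
      RETURNING id, document_id, version`,
    [batchSize],
  );
  return rows;
}
```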
Version Tracking
Why It Matters
Documents can be modified after creation (e.g., adding line items). Version tracking ensures accounting is always processed with the latest data.
How It Works
Key Points:
- Documents start at version 1
- Adding line items to a document with accounting increments the version
- Workers check version before processing
- Mismatches cause automatic rescheduling with the correct version
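One way a processJob-style activity could enforce that check, sketched with node-postgres; only accounting_ready_version comes from this document, the job shape and remaining columns are illustrative:

```typescript
import { Pool } from 'pg';

const pool = new Pool();

export async function processJob(job: { id: string; documentId: string; version: number }) {
  const { rows } = await pool.query(
    `SELECT accounting_ready_version FROM documents WHERE id = $1`,
    [job.documentId],
  );
  if (rows.length === 0) return; // document no longer exists; nothing to post

  const currentVersion: number = rows[0].accounting_ready_version;
  if (currentVersion !== job.version) {
    // The document changed after this job was enqueued: reschedule it with the
    // latest version instead of posting stale data.
    await pool.query(
      `UPDATE accounting_work_queue
          SET status = 'pending', version = $2
        WHERE id = $1`,
      [job.id, currentVersion],
    );
    return;
  }

  // ...create GL entries for exactly this version of the document...
}
```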
Key Design Patterns
1. Idempotency
Every operation can be safely retried without side effects:
- Duplicate documents detected by external IDs (shopify_order_id, paypal_payment_id)
- Jobs use unique constraint: one job per document per organization
- Accounting entries can be deleted and recreated
2. Atomic Operations
All state changes happen within database transactions:
- Document + line items created atomically
- Job state changes within transactions
- GL entries created atomically per document
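A sketch of the delete-and-recreate pattern for GL entries inside a single transaction, assuming node-postgres; the general_ledger_entries table name and the entry shape are assumptions:

```typescript
import { Pool } from 'pg';

const pool = new Pool();

export async function postDocument(
  documentId: string,
  entries: Array<{ account: string; debit: string; credit: string }>,
): Promise<void> {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');

    // Remove entries from any previous attempt so retries never double-post.
    await client.query(
      `DELETE FROM general_ledger_entries WHERE document_id = $1`,
      [documentId],
    );

    for (const entry of entries) {
      await client.query(
        `INSERT INTO general_ledger_entries (document_id, account, debit, credit)
         VALUES ($1, $2, $3, $4)`,
        [documentId, entry.account, entry.debit, entry.credit],
      );
    }

    // Either the whole document posts or none of it does.
    await client.query('COMMIT');
  } catch (err) {
    await client.query('ROLLBACK');
    throw err;
  } finally {
    client.release();
  }
}
```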
3. Decoupled Phases
Ingestion and accounting are completely independent:
- Ingestion workflows can run without accounting being ready
- Accounting queue can catch up asynchronously
- Failures in one phase don’t block the other
4. Resilient Failure Handling
Transient errors: Automatic retry with exponential backoff (5min, 15min, 45min, 2h, 6h)
Permanent errors: Jobs marked as failed after 6 attempts; these require manual investigation
Worker crashes: Stale jobs (leased > 15min) automatically recovered
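The listed delays roughly match an exponential policy with a 3x backoff coefficient starting at 5 minutes. Stale-lease recovery could be as simple as the following sketch, assuming node-postgres; the leased_at column and 'processing' status are illustrative names:

```typescript
import { Pool } from 'pg';

const pool = new Pool();

export async function recoverStaleJobs(): Promise<number> {
  // A job leased more than 15 minutes ago is assumed to belong to a crashed
  // worker; setting it back to 'pending' lets the queue workflow pick it up again.
  const result = await pool.query(
    `UPDATE accounting_work_queue
        SET status = 'pending', leased_at = NULL
      WHERE status = 'processing'
        AND leased_at < now() - interval '15 minutes'`,
  );
  return result.rowCount ?? 0;
}
```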
Configuration
Ingestion Workers
- Shopify: 10 concurrent activities, 20 concurrent workflows
- PayPal: 10 concurrent activities, 20 concurrent workflows
- Optimized for high-throughput batch processing
Accounting Queue Worker
- Concurrency: 3 activities, 5 workflows
- Batch size: 50 jobs per poll
- Poll interval: 10 seconds when idle, immediate when active
- Optimized for consistent, controlled processing
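With the Temporal TypeScript SDK, the concurrency settings above map onto Worker options roughly as follows; the task queue names and file paths are illustrative:

```typescript
import { Worker } from '@temporalio/worker';
import * as activities from './activities'; // hypothetical activities module

async function run() {
  const ingestionWorker = await Worker.create({
    taskQueue: 'ingestion',               // Shopify / PayPal workflows
    workflowsPath: require.resolve('./workflows'),
    activities,
    maxConcurrentActivityTaskExecutions: 10,
    maxConcurrentWorkflowTaskExecutions: 20,
  });

  const accountingWorker = await Worker.create({
    taskQueue: 'accounting-queue',        // syncAccountingQueueWorkflow
    workflowsPath: require.resolve('./workflows'),
    activities,
    maxConcurrentActivityTaskExecutions: 3,
    maxConcurrentWorkflowTaskExecutions: 5,
  });

  await Promise.all([ingestionWorker.run(), accountingWorker.run()]);
}

run().catch((err) => {
  console.error(err);
  process.exit(1);
});
```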
Performance Characteristics
- Shopify sync: ~250 orders/minute
- PayPal CSV: ~100 transactions/minute
- Accounting queue: ~22 jobs/minute average, ~50 jobs/minute peak
Monitoring
Quick Health Checks
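A possible pair of queue health checks, sketched with node-postgres; the status and created_at columns are assumptions:

```typescript
import { Pool } from 'pg';

const pool = new Pool();

export async function printQueueHealth(): Promise<void> {
  // Queue depth per state: a growing 'pending' count means processing is behind.
  const byStatus = await pool.query(
    `SELECT status, count(*) AS jobs
       FROM accounting_work_queue
      GROUP BY status`,
  );
  console.table(byStatus.rows);

  // Age of the oldest pending job: a rough measure of end-to-end accounting lag.
  const lag = await pool.query(
    `SELECT now() - min(created_at) AS oldest_pending
       FROM accounting_work_queue
      WHERE status = 'pending'`,
  );
  console.log('Oldest pending job age:', lag.rows[0].oldest_pending);
}
```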
Temporal UI
- View active workflow executions at https://cloud.temporal.io
- Track workflow history and activity retries
- Monitor continue-as-new chains
- Debug activity failures
When Things Go Wrong
Common Issues
Queue growing faster than processing:
- Check worker is running
- Increase batch size or concurrency
- Scale horizontally (add more workers)
Jobs stuck in the failed state:
- Check last_error field for error message
- Fix underlying issue (missing rules, bad data)
- Reset job to pending state
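For example, resetting a single job after the fix, sketched with node-postgres; last_error appears above, while the attempts column is an assumption:

```typescript
import { Pool } from 'pg';

const pool = new Pool();

export async function resetFailedJob(jobId: string): Promise<void> {
  // Clear the error and attempt counter so the queue workflow retries from scratch.
  await pool.query(
    `UPDATE accounting_work_queue
        SET status = 'pending',
            attempts = 0,
            last_error = NULL
      WHERE id = $1
        AND status = 'failed'`,
    [jobId],
  );
}
```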
Document has no accounting entries:
- Check accounting_ready_at is set
- Verify job exists in accounting_work_queue
- Check posting rules exist for document type
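The three checks above could be scripted along the following lines, assuming node-postgres; the posting_rules table and most column names are assumptions:

```typescript
import { Pool } from 'pg';

const pool = new Pool();

export async function diagnoseDocument(documentId: string): Promise<void> {
  const doc = await pool.query(
    `SELECT type, accounting_ready_at FROM documents WHERE id = $1`,
    [documentId],
  );
  if (doc.rows.length === 0) {
    console.log('document not found');
    return;
  }
  console.log('accounting_ready_at:', doc.rows[0].accounting_ready_at ?? 'NOT SET');

  const job = await pool.query(
    `SELECT status, version, last_error
       FROM accounting_work_queue
      WHERE document_id = $1`,
    [documentId],
  );
  console.log('queue job:', job.rows[0] ?? 'MISSING');

  const rules = await pool.query(
    `SELECT count(*) AS rule_count FROM posting_rules WHERE document_type = $1`,
    [doc.rows[0].type],
  );
  console.log('posting rules for document type:', rules.rows[0].rule_count);
}
```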