E2E Testing Conventions
This document outlines the standards and conventions used for automated end-to-end (E2E) testing across the CONA platform, including test identification, organization, and implementation patterns with Playwright.
Automation Framework
All E2E tests are implemented using Playwright, which provides:
- Cross-browser testing (Chromium, Firefox, WebKit)
- Reliable selectors and auto-waiting mechanisms
- Network request interception and mocking (sketched after this list)
- Parallel test execution
- Visual comparison capabilities
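As referenced above, here is a minimal sketch of network interception; the `/api/orders` route pattern and the `empty-state` test id are illustrative assumptions, not part of the platform:

```ts
import { test, expect } from '@playwright/test';

test('renders an empty state when the orders API returns no data', async ({ page }) => {
  // Intercept the (hypothetical) orders endpoint and fulfill it with an empty list.
  await page.route('**/api/orders', (route) =>
    route.fulfill({ status: 200, contentType: 'application/json', body: '[]' })
  );
  await page.goto('/orders');
  await expect(page.getByTestId('empty-state')).toBeVisible();
});
```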
Test Implementation Structure
Test Identification System
To effectively manage and track our extensive test suite, we use a standardized ID system across all test categories:
Test ID Format
- Basic Format: `{Module}-{Category}-{Number}`
- Example: `SH-CORE-001`
- File naming: `{testId}.spec.ts` (e.g., `SH-CORE-001.spec.ts`)
Module Prefixes
- `SH`: Shopify integration tests
- `DT`: DATEV integration tests
- `PP`: PayPal integration tests
- `CORE`: Core platform tests
- `API`: API-specific tests
Category Types
- `CORE`: Core business logic tests
- `INT`: Integration tests between systems
- `ERR`: Error handling and boundary case tests
- `PERF`: Performance and load tests
- `E2E`: End-to-end user journey tests
- `SEC`: Security-focused tests
Numbering Convention
- Three-digit sequential numbering (e.g., 001, 002, 003)
- Numbers are assigned within each module-category combination
- Gaps in numbering are allowed for future insertions
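As an illustration, the format can be checked mechanically. This minimal sketch derives a validation pattern purely from the module prefixes and category types defined above:

```ts
// Validates IDs such as SH-CORE-001 against the convention above.
const TEST_ID_PATTERN = /^(SH|DT|PP|CORE|API)-(CORE|INT|ERR|PERF|E2E|SEC)-\d{3}$/;

function isValidTestId(id: string): boolean {
  return TEST_ID_PATTERN.test(id);
}

isValidTestId('SH-CORE-001'); // true
isValidTestId('SH-CORE-1');   // false: the number must be three digits
```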
Benefits of Test ID System
- Traceability: Clear mapping between requirements and test cases
- Reporting: Structured test reporting and result analysis
- Communication: Common reference points when discussing test results
- Organization: Logical grouping of related test scenarios
- Maintenance: Easier identification of test coverage gaps
Test Documentation Standards
Each test should be documented with:
- ID: Unique test identifier
- Title: Brief, descriptive title
- Scenario: Concise description of what’s being tested
- Preconditions: Required system state before test execution
- Automation Verifications: What the automated test verifies
- Expected Results: Clear description of expected outcomes
- Test Data Requirements: Any specific test data needed
Example Test Documentation
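A minimal sketch in the format above; apart from the ID SH-CORE-001 (the format example used earlier), all field values are illustrative:

```
ID: SH-CORE-001
Title: Shopify order sync creates a matching platform order
Scenario: A paid Shopify order is imported and appears in the platform's order list
Preconditions: Shopify store connected; order import enabled
Automation Verifications: Order appears with the correct total, currency, and line items
Expected Results: The imported order reaches a synced state after import completes
Test Data Requirements: Seeded Shopify test store with at least one product
```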
Playwright Implementation Guidelines
- Page Objects: Implement page object pattern for all UI interactions
- API Helpers: Create reusable API helpers for backend interactions
- Fixtures: Use Playwright fixtures for setup/teardown and shared context
- Selectors: Prefer data-testid attributes for element selection
- Assertions: Use explicit assertions for all verification points
- Timeouts: Configure appropriate timeouts for async operations
- Retries: Implement retry mechanisms for flaky network operations
Example Test Implementation
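A condensed sketch exercising the guidelines above (page object, data-testid selectors, explicit assertions, a timeout sized for async work); the OrderListPage class, route, and selectors are hypothetical:

```ts
import { test, expect, type Page } from '@playwright/test';

// Hypothetical page object encapsulating all UI interactions with the order list.
class OrderListPage {
  constructor(private readonly page: Page) {}

  async goto() {
    await this.page.goto('/orders');
  }

  orderRow(orderNumber: string) {
    // data-testid attributes are preferred for element selection.
    return this.page.getByTestId(`order-row-${orderNumber}`);
  }
}

test('SH-CORE-001: synced Shopify order appears in the order list', async ({ page }) => {
  const orders = new OrderListPage(page);
  await orders.goto();

  // Explicit assertions with a timeout sized for the asynchronous sync operation.
  await expect(orders.orderRow('1001')).toBeVisible({ timeout: 30_000 });
  await expect(orders.orderRow('1001')).toContainText('synced');
});
```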
Test Data Management
- Use dedicated test data that doesn’t affect production
- Create specific seed data for each test category
- Document data dependencies between tests
- Use data factories to generate test data programmatically (see the factory sketch after this list)
- Reset test data between test runs
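A minimal factory sketch for the bullet above; the Order shape and its default values are assumptions for illustration:

```ts
// Hypothetical order shape; adjust to the real domain model.
interface Order {
  id: string;
  total: number;
  currency: string;
  status: 'pending' | 'synced' | 'failed';
}

let counter = 0;

// Factory: sensible defaults plus per-test overrides, so each test
// declares only the fields it actually cares about.
function makeOrder(overrides: Partial<Order> = {}): Order {
  counter += 1;
  return {
    id: `TEST-ORDER-${counter}`,
    total: 100,
    currency: 'EUR',
    status: 'pending',
    ...overrides,
  };
}

// Usage: a synced order for a happy-path scenario.
const syncedOrder = makeOrder({ status: 'synced' });
```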
CI/CD Integration
- All tests are executed in CI/CD pipelines on pull requests
- Tests are categorized by execution time (fast, medium, slow); see the tagging sketch after this list
- Critical path tests run on every PR, full suite runs nightly
- Test results are published to the test reporting dashboard
- Visual test reports are generated for failures with screenshots
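One possible way to implement the execution-time categorization and critical-path selection above is to tag test titles and filter with Playwright's --grep flag; the tags shown are assumptions, not an established project convention:

```ts
import { test, expect } from '@playwright/test';

// Title tags can be selected in CI, e.g.:
//   npx playwright test --grep @critical     # every PR
//   npx playwright test                      # full suite, nightly
test('SH-CORE-001: order sync @critical @fast', async ({ page }) => {
  await page.goto('/orders');
  await expect(page.getByTestId('order-list')).toBeVisible();
});
```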
Test Environments
- Development: Local development environment for test writing
- Integration: Shared test environment for basic validation
- Staging: Production-like environment for full test suite
- Production: Subset of non-destructive tests for monitoring
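A sketch of one way to point the suite at these environments from a single configuration; the hostnames and the TEST_ENV variable name are placeholders:

```ts
// playwright.config.ts
import { defineConfig } from '@playwright/test';

// Placeholder URLs; the real environment hosts differ.
const baseURLs: Record<string, string> = {
  development: 'http://localhost:3000',
  integration: 'https://integration.example.com',
  staging: 'https://staging.example.com',
  production: 'https://app.example.com',
};

export default defineConfig({
  use: {
    // Select the target environment via an (assumed) TEST_ENV variable.
    baseURL: baseURLs[process.env.TEST_ENV ?? 'development'],
  },
  retries: process.env.CI ? 2 : 0, // retry flaky network operations in CI
});
```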