Testing Strategy
A comprehensive testing strategy is one of the six domains the CTA must address. The review board evaluates whether you can design a test strategy that covers the full spectrum — from unit tests to performance testing — and tie it to the specific risks in the scenario. A weak testing strategy is one of the most common reasons CTA candidates receive low scores in the Development Lifecycle domain.
The Test Pyramid
The test pyramid is the foundational model for balancing test types. The base (unit tests) should be the largest layer, with fewer, broader, and more expensive tests at each level above it.
```mermaid
flowchart TD
    subgraph Pyramid["Test Pyramid"]
        direction TB
        E2E["End-to-End Tests<br/>Fewest | Slowest | Most Expensive<br/>Full user journeys across systems"]
        INT["Integration Tests<br/>Moderate count | Medium speed<br/>Cross-object, cross-system interactions"]
        UNIT["Unit Tests<br/>Most numerous | Fastest | Cheapest<br/>Individual methods and classes"]
    end
    E2E ~~~ INT
    INT ~~~ UNIT
    style E2E fill:#e76f51,stroke:#c45a3f,color:#fff
    style INT fill:#f4a261,stroke:#d4823e,color:#000
    style UNIT fill:#2d6a4f,stroke:#1b4332,color:#fff
```
Salesforce Test Pyramid Mapping
| Pyramid Layer | Salesforce Implementation | Tools | Automation Level |
|---|---|---|---|
| Unit Tests | Apex @isTest methods | Apex Testing Framework | Fully automated |
| Component Tests | LWC Jest tests | Jest, @salesforce/sfdx-lwc-jest | Fully automated |
| Integration Tests | Apex tests with HttpCalloutMock | Apex Testing Framework | Fully automated |
| System Tests | Scratch org deployment + test execution | Salesforce CLI, CI/CD | Automated |
| UAT | Business user testing scenarios | Manual, Provar, Copado RT | Semi-automated |
| Performance Tests | Load testing, query analysis | Salesforce Optimizer, custom tools | Semi-automated |
| E2E Tests | Full process testing across systems | Provar, Copado Robotic Testing, Selenium | Automated or manual |
Apex Unit Testing
The 75% Coverage Requirement
Salesforce requires a minimum of 75% code coverage to deploy Apex to production. But 75% is the floor, not the goal.
Coverage vs Quality
75% coverage that tests nothing meaningful is worse than 60% coverage that validates actual business logic. The review board does not want to hear “we will achieve 75% coverage.” They want to hear “we will achieve 85%+ coverage with meaningful assertions that validate business outcomes, not just exercise code paths.”
Test Class Best Practices
Structure every test class with:
- Test data factory: Centralized test data creation to avoid duplication
- Positive tests: Verify the happy path works correctly
- Negative tests: Verify error handling works (invalid data, permission errors)
- Bulk tests: Verify the code handles 200 records (trigger bulkification)
- Boundary tests: Verify behavior at limits (0 records, max records, null values)
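As one hedged illustration of the negative-test bullet above, a test can assert that invalid input raises a meaningful error. The `AccountService` class and its `TerritoryException` type are assumed names for this sketch, not platform types:

```apex
@isTest
static void shouldRejectAccountWithoutBillingCountry() {
    // Deliberately omit BillingCountry to exercise the error path
    Account acct = new Account(Name = 'No Country');
    Test.startTest();
    try {
        AccountService.assignTerritories(new List<Account>{ acct });
        // Reaching this line means no exception was thrown -- fail the test
        System.assert(false, 'Expected an exception for a missing BillingCountry');
    } catch (AccountService.TerritoryException e) {
        // Assert on the message so the test validates behavior, not just survival
        System.assert(e.getMessage().contains('BillingCountry'),
            'Error message should name the missing field');
    }
    Test.stopTest();
}
```

Catching the specific custom exception (rather than a generic `Exception`) matters: a bare `catch (Exception e)` would also swallow the `System.assert(false, ...)` failure and silently pass the test.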
Test Data Strategies
| Strategy | Description | When to Use |
|---|---|---|
| Test Data Factory | Centralized @isTest utility class that creates standard test records | Always — should be the default |
| @TestSetup | Method that creates test data once for all test methods in the class | When multiple test methods need the same base data |
| SeeAllData=true | Tests can see real org data | Almost never — only for specific platform features that require it |
| Static Resources | CSV files loaded as test data | Bulk test scenarios with specific data patterns |
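A minimal sketch of the Test Data Factory pattern from the table (the class name, method signature, and field values are illustrative, so align them with your org's required fields and validation rules):

```apex
@isTest
public class TestDataFactory {
    // Returns (but does not insert) Accounts with required fields populated,
    // so each test decides whether to insert directly or inside @TestSetup.
    public static List<Account> createAccounts(Integer count) {
        List<Account> accounts = new List<Account>();
        for (Integer i = 0; i < count; i++) {
            accounts.add(new Account(
                Name = 'Test Account ' + i,
                BillingCountry = 'US'
            ));
        }
        return accounts;
    }
}
```

Leaving the insert to the caller keeps the factory reusable across positive, negative, bulk, and boundary tests.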
```apex
@isTest
private class AccountServiceTest {

    @TestSetup
    static void setupTestData() {
        // Use test data factory for consistent test data
        List<Account> accounts = TestDataFactory.createAccounts(200);
        insert accounts;
    }

    @isTest
    static void shouldAssignTerritoryForUSAccounts() {
        // Arrange
        List<Account> accounts = [SELECT Id, BillingCountry FROM Account
                                  WHERE BillingCountry = 'US'];

        // Act
        Test.startTest();
        AccountService.assignTerritories(accounts);
        Test.stopTest();

        // Assert -- meaningful assertions, not just "it didn't crash".
        // Re-query only the US accounts the service should have updated.
        List<Account> updated = [SELECT Id, Territory__c FROM Account
                                 WHERE BillingCountry = 'US'];
        for (Account a : updated) {
            System.assertNotEquals(null, a.Territory__c,
                'Territory should be assigned for US accounts');
        }
    }
}
```

Test Isolation
- Test.startTest()/Test.stopTest(): Reset governor limits and execute async code synchronously
- @isTest: Test classes do not count against org code size limits
- SeeAllData=false (default): Tests are isolated from org data — this is the correct default
- Mock callouts: Use HttpCalloutMock for external service testing — never make real callouts in tests
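The governor-limit reset and synchronous async execution described above can be sketched as follows (`OrderSyncQueueable` is a hypothetical job class, not a platform type):

```apex
@isTest
static void shouldRunAsyncWorkBeforeAsserting() {
    Test.startTest();
    // Async work enqueued here (future, Queueable, Batch) executes synchronously
    // when Test.stopTest() runs, inside a fresh set of governor limits.
    System.enqueueJob(new OrderSyncQueueable()); // hypothetical Queueable
    Test.stopTest();
    // The job has completed by this point, so assert on its side effects here
    // rather than polling or waiting.
}
```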
Integration Testing (Mock Callouts)
Testing integrations without making real HTTP calls:
```apex
@isTest
private class ERPIntegrationTest {

    private class ERPCalloutMock implements HttpCalloutMock {
        public HttpResponse respond(HttpRequest req) {
            HttpResponse res = new HttpResponse();
            res.setStatusCode(200);
            res.setBody('{"status":"success","orderId":"ORD-123"}');
            return res;
        }
    }

    @isTest
    static void shouldSyncOrderToERP() {
        // Arrange: create the order to sync
        Order__c testOrder = new Order__c(Name = 'Test Order');
        insert testOrder;

        Test.setMock(HttpCalloutMock.class, new ERPCalloutMock());

        Test.startTest();
        ERPIntegrationService.syncOrder(testOrder);
        Test.stopTest();

        // Assert the integration result was processed correctly
        Order__c updated = [SELECT ERP_Order_Id__c FROM Order__c
                            WHERE Id = :testOrder.Id];
        System.assertEquals('ORD-123', updated.ERP_Order_Id__c);
    }
}
```

UAT Planning
User Acceptance Testing validates that the solution meets business requirements from the user’s perspective.
UAT Planning Framework
| Phase | Activities | Duration | Participants |
|---|---|---|---|
| Preparation | Write test scripts, prepare test data, train testers | 1-2 weeks | BA, QA, PM |
| Execution | Execute test scripts, log defects | 1-3 weeks | Business users, BA |
| Defect Resolution | Fix defects, retest | 1-2 weeks | Developers, QA |
| Sign-off | Business stakeholder approval | 1-3 days | Business sponsor |
UAT Test Script Template
Each UAT test script should include:
- Test ID: Unique identifier (e.g., UAT-SALES-001)
- Business Process: Which business process is being tested
- Preconditions: What must be true before the test starts
- Steps: Numbered, specific steps the tester follows
- Expected Result: What should happen at each step
- Actual Result: What the tester observed (filled during execution)
- Pass/Fail: Did it meet expectations?
- Defect Reference: Link to defect if failed
UAT Environment Requirements
- Partial Copy or Full Copy sandbox with representative data
- Data masking applied for PII (do not expose real customer data to UAT testers)
- User accounts set up with production-equivalent profiles and permission sets
- Integration endpoints pointing to test environments of external systems
- Documentation including release notes and known limitations
Performance Testing
Performance testing is critical in CTA scenarios involving Large Data Volumes (LDV), high transaction volumes, or complex integrations.
Performance Test Types
| Test Type | What It Validates | Tools |
|---|---|---|
| Load Testing | System behavior under expected load | Custom Apex batch, data loader |
| Stress Testing | System behavior beyond expected load | Custom scripts, JMeter for APIs |
| Query Performance | SOQL query execution time with production-scale data | Query Plan tool, Developer Console |
| Page Load | Lightning page render time | Salesforce Performance Assistant |
| API Throughput | API calls per time period | Custom test harness |
| Batch Processing | Batch Apex completion time at scale | Apex test with large data sets |
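As one possible shape for the "custom Apex batch" load-testing approach in the table above, production-scale data can be generated with a self-chaining Queueable. All names here are illustrative, and this should only ever run in a sandbox:

```apex
// Generates large volumes of Account data in a sandbox so that queries,
// triggers, and batch jobs can be exercised at realistic scale.
// Illustrative sketch only -- never run against production.
public class LoadTestDataGenerator implements Queueable {
    // Stay well under the 10,000 DML row per-transaction governor limit
    private static final Integer RECORDS_PER_JOB = 5000;
    private Integer jobsRemaining;

    public LoadTestDataGenerator(Integer jobsRemaining) {
        this.jobsRemaining = jobsRemaining;
    }

    public void execute(QueueableContext ctx) {
        List<Account> accounts = new List<Account>();
        for (Integer i = 0; i < RECORDS_PER_JOB; i++) {
            accounts.add(new Account(Name = 'LoadTest-' + ctx.getJobId() + '-' + i));
        }
        insert accounts;
        if (jobsRemaining > 1) {
            // Chain the next job; Queueables may enqueue one child job each
            System.enqueueJob(new LoadTestDataGenerator(jobsRemaining - 1));
        }
    }
}
```

For example, `System.enqueueJob(new LoadTestDataGenerator(200));` would seed roughly one million Accounts, enough to make Query Plan analysis and batch timing measurements meaningful.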
Performance Testing Checklist
- Tested with production-scale data volume (use Full Copy sandbox)
- SOQL queries analyzed with Query Plan tool for selective filters
- Batch Apex tested with expected record counts
- API endpoint load tested with expected concurrent users
- Lightning page load tested with Salesforce Optimizer
- Integration tested with production-volume message throughput
- Report and dashboard performance validated with full data
CTA Performance Signal
The review board expects you to proactively identify performance risks in the scenario. If the scenario mentions “10 million Account records” or “500 concurrent users,” you must address performance testing in your architecture. State what you will test, how, and what the acceptance criteria are.
Test Automation Tools
Salesforce-Specific Test Automation
| Tool | Type | Strengths | Considerations |
|---|---|---|---|
| Provar | Salesforce-native E2E | Understanding of SF DOM, CI/CD integration | License cost |
| Copado Robotic Testing | Salesforce-native E2E | No-code test creation, CI/CD built-in | License cost, learning curve |
| Selenium | Generic web E2E | Free, large community | Fragile with SF Lightning DOM |
| Playwright | Generic web E2E | Modern API, reliable | Requires SF DOM knowledge |
| Jest (LWC) | Component unit tests | Official Salesforce support | LWC only, no Apex |
| PMD | Static analysis | Free, catches common Apex issues | Configuration needed |
| ESLint | Static analysis (JS) | Standard for LWC JavaScript | Configuration needed |
Test Automation Strategy
```mermaid
flowchart TD
    A[Code Committed] --> B[CI Pipeline Triggered]
    B --> C[Static Analysis<br/>PMD + ESLint]
    C --> D{Pass?}
    D -->|No| E[Block Merge<br/>Notify Developer]
    D -->|Yes| F[Create Scratch Org]
    F --> G[Deploy Source]
    G --> H[Run Apex Unit Tests]
    H --> I[Run LWC Jest Tests]
    I --> J{Coverage >= 85%<br/>and All Pass?}
    J -->|No| E
    J -->|Yes| K[Deploy to SIT]
    K --> L[Run Integration Tests]
    L --> M[Run E2E Tests<br/>Provar/Copado RT]
    M --> N{All Pass?}
    N -->|No| O[Log Defects<br/>Block Release]
    N -->|Yes| P[Ready for UAT]
    style E fill:#e76f51,stroke:#c45a3f,color:#fff
    style O fill:#e76f51,stroke:#c45a3f,color:#fff
    style P fill:#2d6a4f,stroke:#1b4332,color:#fff
```
Test Environment Mapping
Different test types require different environments. Mismatching tests to environments produces unreliable results — unit tests do not need production data, but performance tests are meaningless without it.
```mermaid
flowchart TD
    subgraph Tests["Test Types"]
        UT[Unit Tests<br/>Apex + LWC Jest]
        CT[Component Tests<br/>LWC Jest]
        IT[Integration Tests<br/>Mock Callouts]
        ST[System Tests<br/>Cross-Object Flows]
        UAT_T[UAT<br/>Business Scenarios]
        PT[Performance Tests<br/>Load + Query]
        E2E[E2E Tests<br/>Full User Journeys]
    end
    subgraph Envs["Environments"]
        SO[Scratch Orgs<br/>Ephemeral, no data]
        DEV[Dev Sandbox<br/>Developer type]
        SIT[SIT Sandbox<br/>Partial Copy]
        UAT_E[UAT Sandbox<br/>Partial Copy]
        STAGE[Staging Sandbox<br/>Full Copy]
    end
    UT --> SO
    UT --> DEV
    CT --> SO
    CT --> DEV
    IT --> SO
    IT --> SIT
    ST --> SIT
    UAT_T --> UAT_E
    PT --> STAGE
    E2E --> SIT
    E2E --> UAT_E
    style SO fill:#2d6a4f,stroke:#1b4332,color:#fff
    style DEV fill:#2d6a4f,stroke:#1b4332,color:#fff
    style SIT fill:#4ecdc4,stroke:#3ab5ad,color:#000
    style UAT_E fill:#f4a261,stroke:#d4823e,color:#000
    style STAGE fill:#e76f51,stroke:#c45a3f,color:#fff
```
| Test Type | Environment | Why This Environment |
|---|---|---|
| Unit / Component | Scratch Org or Dev Sandbox | Fast feedback, no data dependency, isolated |
| Integration (Mock) | Scratch Org or SIT | Mock callouts need no external systems |
| System / Integration (Live) | SIT (Partial Copy) | Needs cross-object data and integration endpoints |
| UAT | UAT (Partial Copy) | Business users need representative data and separate env |
| Performance | Staging (Full Copy) | Only valid with production-scale data volume |
| E2E | SIT or UAT | Full user journeys across connected systems |
CTA Environment-Test Alignment
When presenting to the review board, explicitly map each test type to an environment. This demonstrates that you understand both why certain tests exist and where they must run to produce valid results. A common mistake is claiming “we will run performance tests” without specifying that they must run against a Full Copy sandbox with production-scale data.
Testing Strategy for CTA Scenarios
What to Include in Your CTA Presentation
- Test approach: Which test types you will use and why
- Automation scope: What will be automated vs manual
- Coverage targets: Not just 75% — meaningful coverage with assertions
- Environment mapping: Which tests run in which environment
- UAT plan: How business users will validate
- Performance testing: What performance risks exist and how you will validate
- Regression strategy: How you will prevent regression in future releases
Common CTA Scenario Testing Risks
| Scenario Element | Testing Risk | Mitigation |
|---|---|---|
| High data volume | Performance degradation | Load test with Full Copy sandbox |
| Complex integrations | Integration failures | Mock callout tests + E2E integration test |
| Multiple user personas | Permission/sharing issues | Profile-specific test scenarios |
| Data migration | Data quality issues | Validation scripts + reconciliation tests |
| Multi-cloud | Cross-cloud compatibility | Cross-org integration test suite |
| AppExchange packages | Package upgrade regression | Regression test suite post-upgrade |
Related Topics
- Environment Strategy — Which environments support each testing phase
- CI/CD & Deployment — How testing integrates into the deployment pipeline
- Governance Model — Test governance and quality gates
- Risk Management — Testing as risk mitigation
- Decision Guides — Testing approach decision flowchart
- Declarative vs Programmatic — Testing implications of Flow vs Apex
Sources
- Salesforce Developer Documentation: Apex Testing
- Salesforce Developer Documentation: Testing Lightning Web Components
- Salesforce Architects: Testing Strategy
- Salesforce Help: Salesforce Optimizer
- Martin Fowler: Test Pyramid
- Provar: Salesforce Test Automation
- CTA Study Groups: Community testing strategy patterns