Testing Strategy

A comprehensive testing strategy is one of the six domains the CTA must address. The review board evaluates whether you can design a test strategy that covers the full spectrum — from unit tests to performance testing — and tie it to the specific risks in the scenario. A weak testing strategy is one of the most common reasons CTA candidates receive low scores in the Development Lifecycle domain.

The Test Pyramid

The test pyramid is the foundational model for balancing test types. The base (unit tests) should be the largest layer; each level above it contains fewer tests, but each test covers broader scope at higher cost.

flowchart TD
    subgraph Pyramid["Test Pyramid"]
        direction TB
        E2E["End-to-End Tests<br/>Fewest | Slowest | Most Expensive<br/>Full user journeys across systems"]
        INT["Integration Tests<br/>Moderate count | Medium speed<br/>Cross-object, cross-system interactions"]
        UNIT["Unit Tests<br/>Most numerous | Fastest | Cheapest<br/>Individual methods and classes"]
    end

    E2E ~~~ INT
    INT ~~~ UNIT

    style E2E fill:#e76f51,stroke:#c45a3f,color:#fff
    style INT fill:#f4a261,stroke:#d4823e,color:#000
    style UNIT fill:#2d6a4f,stroke:#1b4332,color:#fff

Salesforce Test Pyramid Mapping

| Pyramid Layer | Salesforce Implementation | Tools | Automation Level |
|---|---|---|---|
| Unit Tests | Apex @isTest methods | Apex Testing Framework | Fully automated |
| Component Tests | LWC Jest tests | Jest, @salesforce/sfdx-lwc-jest | Fully automated |
| Integration Tests | Apex tests with HttpCalloutMock | Apex Testing Framework | Fully automated |
| System Tests | Scratch org deployment + test execution | Salesforce CLI, CI/CD | Automated |
| UAT | Business user testing scenarios | Manual, Provar, Copado RT | Semi-automated |
| Performance Tests | Load testing, query analysis | Salesforce Optimizer, custom tools | Semi-automated |
| E2E Tests | Full process testing across systems | Provar, Copado Robotic Testing, Selenium | Automated or manual |

Apex Unit Testing

The 75% Coverage Requirement

Salesforce requires a minimum of 75% code coverage to deploy Apex to production. But 75% is the floor, not the goal.

Coverage vs Quality

75% coverage that tests nothing meaningful is worse than 60% coverage that validates actual business logic. The review board does not want to hear “we will achieve 75% coverage.” They want to hear “we will achieve 85%+ coverage with meaningful assertions that validate business outcomes, not just exercise code paths.”
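The difference is easiest to see side by side. A minimal sketch, assuming a hypothetical DiscountService.applyDiscount(Id) method that applies a 10% discount to an Opportunity:

```apex
@isTest
private class DiscountServiceTest {

    // Weak: exercises the code path (counts toward coverage) but asserts nothing.
    @isTest
    static void weakCoverageOnly() {
        Opportunity opp = new Opportunity(Name = 'Test', StageName = 'Prospecting',
            CloseDate = Date.today(), Amount = 1000);
        insert opp;
        DiscountService.applyDiscount(opp.Id); // no assertion -- validates nothing
    }

    // Meaningful: asserts the business outcome the method exists to produce.
    @isTest
    static void shouldApplyTenPercentDiscount() {
        Opportunity opp = new Opportunity(Name = 'Test', StageName = 'Prospecting',
            CloseDate = Date.today(), Amount = 1000);
        insert opp;
        Test.startTest();
        DiscountService.applyDiscount(opp.Id);
        Test.stopTest();
        opp = [SELECT Amount FROM Opportunity WHERE Id = :opp.Id];
        System.assertEquals(900, opp.Amount,
            'A 10% discount should reduce Amount from 1000 to 900');
    }
}
```

Both methods produce identical coverage numbers; only the second would catch a broken discount calculation.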

Test Class Best Practices

Structure every test class with:

  1. Test data factory: Centralized test data creation to avoid duplication
  2. Positive tests: Verify the happy path works correctly
  3. Negative tests: Verify error handling works (invalid data, permission errors)
  4. Bulk tests: Verify the code handles 200 records (trigger bulkification)
  5. Boundary tests: Verify behavior at limits (0 records, max records, null values)
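Negative and boundary tests (items 3 and 5) are the ones most often skipped. A minimal sketch, assuming the AccountService from this page exposes a hypothetical inner TerritoryException for invalid input — adjust to the service's real error contract:

```apex
@isTest
private class AccountServiceNegativeTest {

    // Negative: invalid data should raise a handled, descriptive error.
    @isTest
    static void shouldRejectAccountWithoutCountry() {
        Account acc = new Account(Name = 'No Country');
        insert acc;
        try {
            AccountService.assignTerritories(new List<Account>{ acc });
            System.assert(false, 'Expected an exception for a missing BillingCountry');
        } catch (AccountService.TerritoryException e) {
            // Hypothetical exception type -- assert the message is actionable.
            System.assert(e.getMessage().contains('BillingCountry'));
        }
    }

    // Boundary: an empty list should be a safe no-op, not a crash.
    @isTest
    static void shouldHandleEmptyList() {
        Test.startTest();
        AccountService.assignTerritories(new List<Account>());
        Test.stopTest();
        // Completing without an exception is the assertion here.
    }
}
```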

Test Data Strategies

| Strategy | Description | When to Use |
|---|---|---|
| Test Data Factory | Centralized @isTest utility class that creates standard test records | Always — should be the default |
| @TestSetup | Method that creates test data once for all test methods in the class | When multiple test methods need the same base data |
| SeeAllData=true | Tests can see real org data | Almost never — only for specific platform features that require it |
| Static Resources | CSV files loaded as test data | Bulk test scenarios with specific data patterns |

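The factory referenced in the AccountServiceTest example might look like the sketch below. It returns uninserted records so each test controls its own DML; the field defaults (including the US BillingCountry) are assumptions to suit this page's examples:

```apex
@isTest
public class TestDataFactory {
    // Central place for record defaults; tests override only what they assert on.
    public static List<Account> createAccounts(Integer count) {
        List<Account> accounts = new List<Account>();
        for (Integer i = 0; i < count; i++) {
            accounts.add(new Account(
                Name = 'Test Account ' + i,
                BillingCountry = 'US' // assumed default for these examples
            ));
        }
        return accounts;
    }
}
```

For the Static Resources strategy, Test.loadData(Account.sObjectType, 'resourceName') loads records from a CSV uploaded as a static resource.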
@isTest
private class AccountServiceTest {

    @TestSetup
    static void setupTestData() {
        // Use a test data factory for consistent test data
        List<Account> accounts = TestDataFactory.createAccounts(200);
        insert accounts;
    }

    @isTest
    static void shouldAssignTerritoryForUSAccounts() {
        // Arrange
        List<Account> accounts = [SELECT Id, BillingCountry
                                  FROM Account
                                  WHERE BillingCountry = 'US'];

        // Act
        Test.startTest();
        AccountService.assignTerritories(accounts);
        Test.stopTest();

        // Assert -- meaningful assertions, not just "it didn't crash"
        List<Account> updated = [SELECT Id, Territory__c
                                 FROM Account
                                 WHERE BillingCountry = 'US'];
        for (Account a : updated) {
            System.assertNotEquals(null, a.Territory__c,
                'Territory should be assigned for US accounts');
        }
    }
}

Test Isolation

  • Test.startTest() / Test.stopTest(): Reset governor limits, execute async code synchronously
  • @isTest: Test classes do not count against org code size limits
  • SeeAllData=false (default): Tests are isolated from org data — this is the correct default
  • Mock callouts: Use HttpCalloutMock for external service testing — never make real callouts in tests
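The first bullet deserves an illustration: async work enqueued between Test.startTest() and Test.stopTest() finishes synchronously at stopTest(), so its results can be asserted immediately. A sketch, assuming a hypothetical TerritoryQueueable job that stamps Territory__c:

```apex
@isTest
private class AsyncIsolationTest {

    @isTest
    static void shouldCompleteQueueableBeforeAsserts() {
        Account acc = new Account(Name = 'Async Test', BillingCountry = 'US');
        insert acc;

        Test.startTest();
        // Hypothetical Queueable that assigns Territory__c on the account.
        System.enqueueJob(new TerritoryQueueable(new Set<Id>{ acc.Id }));
        Test.stopTest(); // forces the queued job to run here, synchronously

        acc = [SELECT Territory__c FROM Account WHERE Id = :acc.Id];
        System.assertNotEquals(null, acc.Territory__c,
            'Queueable should have completed before this assertion');
    }
}
```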

Integration Testing (Mock Callouts)

Testing integrations without making real HTTP calls:

@isTest
private class ERPIntegrationTest {

    private class ERPCalloutMock implements HttpCalloutMock {
        public HttpResponse respond(HttpRequest req) {
            HttpResponse res = new HttpResponse();
            res.setStatusCode(200);
            res.setBody('{"status":"success","orderId":"ORD-123"}');
            return res;
        }
    }

    @isTest
    static void shouldSyncOrderToERP() {
        // Arrange -- create the order the service will sync
        Order__c testOrder = new Order__c();
        insert testOrder;
        Test.setMock(HttpCalloutMock.class, new ERPCalloutMock());

        Test.startTest();
        ERPIntegrationService.syncOrder(testOrder);
        Test.stopTest();

        // Assert the integration result was processed correctly
        Order__c updated = [SELECT ERP_Order_Id__c FROM Order__c
                            WHERE Id = :testOrder.Id];
        System.assertEquals('ORD-123', updated.ERP_Order_Id__c,
            'ERP order id from the mock response should be stored');
    }
}

UAT Planning

User Acceptance Testing validates that the solution meets business requirements from the user’s perspective.

UAT Planning Framework

| Phase | Activities | Duration | Participants |
|---|---|---|---|
| Preparation | Write test scripts, prepare test data, train testers | 1-2 weeks | BA, QA, PM |
| Execution | Execute test scripts, log defects | 1-3 weeks | Business users, BA |
| Defect Resolution | Fix defects, retest | 1-2 weeks | Developers, QA |
| Sign-off | Business stakeholder approval | 1-3 days | Business sponsor |

UAT Test Script Template

Each UAT test script should include:

  1. Test ID: Unique identifier (e.g., UAT-SALES-001)
  2. Business Process: Which business process is being tested
  3. Preconditions: What must be true before the test starts
  4. Steps: Numbered, specific steps the tester follows
  5. Expected Result: What should happen at each step
  6. Actual Result: What the tester observed (filled during execution)
  7. Pass/Fail: Did it meet expectations?
  8. Defect Reference: Link to defect if failed

UAT Environment Requirements

  • Partial Copy or Full Copy sandbox with representative data
  • Data masking applied for PII (do not expose real customer data to UAT testers)
  • User accounts set up with production-equivalent profiles and permission sets
  • Integration endpoints pointing to test environments of external systems
  • Documentation including release notes and known limitations

Performance Testing

Performance testing is critical in CTA scenarios involving Large Data Volumes (LDV), high transaction volumes, or complex integrations.

Performance Test Types

| Test Type | What It Validates | Tools |
|---|---|---|
| Load Testing | System behavior under expected load | Custom Apex batch, data loader |
| Stress Testing | System behavior beyond expected load | Custom scripts, JMeter for APIs |
| Query Performance | SOQL query execution time with production-scale data | Query Plan tool, Developer Console |
| Page Load | Lightning page render time | Salesforce Performance Assistant |
| API Throughput | API calls per time period | Custom test harness |
| Batch Processing | Batch Apex completion time at scale | Apex test with large data sets |
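Query performance can get a first-pass check directly in Apex before reaching for the Query Plan tool. A sketch of an anonymous-Apex timing harness; the Territory__c filter is an illustrative placeholder for the query under review:

```apex
// Run as anonymous Apex in a Full Copy sandbox with production-scale data.
Long started = System.currentTimeMillis();
List<Account> rows = [SELECT Id FROM Account
                      WHERE Territory__c = 'WEST'
                      LIMIT 50000];
Long elapsed = System.currentTimeMillis() - started;

// Capture both wall-clock time and governor consumption for the report.
System.debug('Rows returned: ' + rows.size()
    + ', elapsed ms: ' + elapsed
    + ', query rows consumed: ' + Limits.getQueryRows());
// Flag for Query Plan analysis if elapsed exceeds the team's acceptance criterion.
```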

Performance Testing Checklist

  • Tested with production-scale data volume (use Full Copy sandbox)
  • SOQL queries analyzed with Query Plan tool for selective filters
  • Batch Apex tested with expected record counts
  • API endpoint load tested with expected concurrent users
  • Lightning page load tested with Salesforce Optimizer
  • Integration tested with production-volume message throughput
  • Report and dashboard performance validated with full data

CTA Performance Signal

The review board expects you to proactively identify performance risks in the scenario. If the scenario mentions “10 million Account records” or “500 concurrent users,” you must address performance testing in your architecture. State what you will test, how, and what the acceptance criteria are.

Test Automation Tools

Salesforce-Specific Test Automation

| Tool | Type | Strengths | Considerations |
|---|---|---|---|
| Provar | Salesforce-native E2E | Understanding of SF DOM, CI/CD integration | License cost |
| Copado Robotic Testing | Salesforce-native E2E | No-code test creation, CI/CD built-in | License cost, learning curve |
| Selenium | Generic web E2E | Free, large community | Fragile with SF Lightning DOM |
| Playwright | Generic web E2E | Modern API, reliable | Requires SF DOM knowledge |
| Jest (LWC) | Component unit tests | Official Salesforce support | LWC only, no Apex |
| PMD | Static analysis | Free, catches common Apex issues | Configuration needed |
| ESLint | Static analysis (JS) | Standard for LWC JavaScript | Configuration needed |

Test Automation Strategy

flowchart TD
    A[Code Committed] --> B[CI Pipeline Triggered]
    B --> C[Static Analysis<br/>PMD + ESLint]
    C --> D{Pass?}
    D -->|No| E[Block Merge<br/>Notify Developer]
    D -->|Yes| F[Create Scratch Org]
    F --> G[Deploy Source]
    G --> H[Run Apex Unit Tests]
    H --> I[Run LWC Jest Tests]
    I --> J{Coverage >= 85%<br/>and All Pass?}
    J -->|No| E
    J -->|Yes| K[Deploy to SIT]
    K --> L[Run Integration Tests]
    L --> M[Run E2E Tests<br/>Provar/Copado RT]
    M --> N{All Pass?}
    N -->|No| O[Log Defects<br/>Block Release]
    N -->|Yes| P[Ready for UAT]

    style E fill:#e76f51,stroke:#c45a3f,color:#fff
    style O fill:#e76f51,stroke:#c45a3f,color:#fff
    style P fill:#2d6a4f,stroke:#1b4332,color:#fff

Test Environment Mapping

Different test types require different environments. Mismatching tests to environments produces unreliable results — unit tests do not need production data, but performance tests are meaningless without it.

flowchart TD
    subgraph Tests["Test Types"]
        UT[Unit Tests<br/>Apex + LWC Jest]
        CT[Component Tests<br/>LWC Jest]
        IT[Integration Tests<br/>Mock Callouts]
        ST[System Tests<br/>Cross-Object Flows]
        UAT_T[UAT<br/>Business Scenarios]
        PT[Performance Tests<br/>Load + Query]
        E2E[E2E Tests<br/>Full User Journeys]
    end

    subgraph Envs["Environments"]
        SO[Scratch Orgs<br/>Ephemeral, no data]
        DEV[Dev Sandbox<br/>Developer type]
        SIT[SIT Sandbox<br/>Partial Copy]
        UAT_E[UAT Sandbox<br/>Partial Copy]
        STAGE[Staging Sandbox<br/>Full Copy]
    end

    UT --> SO
    UT --> DEV
    CT --> SO
    CT --> DEV
    IT --> SO
    IT --> SIT
    ST --> SIT
    UAT_T --> UAT_E
    PT --> STAGE
    E2E --> SIT
    E2E --> UAT_E

    style SO fill:#2d6a4f,stroke:#1b4332,color:#fff
    style DEV fill:#2d6a4f,stroke:#1b4332,color:#fff
    style SIT fill:#4ecdc4,stroke:#3ab5ad,color:#000
    style UAT_E fill:#f4a261,stroke:#d4823e,color:#000
    style STAGE fill:#e76f51,stroke:#c45a3f,color:#fff

| Test Type | Environment | Why This Environment |
|---|---|---|
| Unit / Component | Scratch Org or Dev Sandbox | Fast feedback, no data dependency, isolated |
| Integration (Mock) | Scratch Org or SIT | Mock callouts need no external systems |
| System / Integration (Live) | SIT (Partial Copy) | Needs cross-object data and integration endpoints |
| UAT | UAT (Partial Copy) | Business users need representative data and a separate env |
| Performance | Staging (Full Copy) | Only valid with production-scale data volume |
| E2E | SIT or UAT | Full user journeys across connected systems |

CTA Environment-Test Alignment

When presenting to the review board, explicitly map each test type to an environment. This demonstrates that you understand both why certain tests exist and where they must run to produce valid results. A common mistake is claiming “we will run performance tests” without specifying that they must run against a Full Copy sandbox with production-scale data.

Testing Strategy for CTA Scenarios

What to Include in Your CTA Presentation

  1. Test approach: Which test types you will use and why
  2. Automation scope: What will be automated vs manual
  3. Coverage targets: Not just 75% — meaningful coverage with assertions
  4. Environment mapping: Which tests run in which environment
  5. UAT plan: How business users will validate
  6. Performance testing: What performance risks exist and how you will validate
  7. Regression strategy: How you will prevent regression in future releases

Common CTA Scenario Testing Risks

| Scenario Element | Testing Risk | Mitigation |
|---|---|---|
| High data volume | Performance degradation | Load test with Full Copy sandbox |
| Complex integrations | Integration failures | Mock callout tests + E2E integration tests |
| Multiple user personas | Permission/sharing issues | Profile-specific test scenarios |
| Data migration | Data quality issues | Validation scripts + reconciliation tests |
| Multi-cloud | Cross-cloud compatibility | Cross-org integration test suite |
| AppExchange packages | Package upgrade regression | Regression test suite post-upgrade |