DevOps Best Practices

DevOps best practices and anti-patterns for Salesforce development lifecycle management. These patterns reflect the practical wisdom expected of a CTA — not just knowing the tools, but knowing how to use them effectively in enterprise contexts.

Best Practices

1. Source Control Everything

Every piece of metadata should be in source control — not just Apex and LWC, but Flows, custom objects, permission sets, and configuration. If it is not in source control, it does not exist from a release management perspective.

What to track:

  • All Apex classes and triggers
  • All LWC and Aura components
  • Flows and Flow versions
  • Custom objects and fields
  • Permission sets and profiles
  • Page layouts and record types
  • Named credentials and custom metadata
  • Validation rules and duplicate rules
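
As a concrete starting point, this metadata can be pulled into a source-format project with the Salesforce CLI. A sketch, assuming the sf CLI (v2) and an authenticated org; the alias my-sandbox and the exact type list are illustrative, so adjust both to what your team tracks:

```shell
# Retrieve the metadata types listed above into source format
# so they can be committed to the repository.
sf project retrieve start \
  --metadata ApexClass ApexTrigger LightningComponentBundle AuraDefinitionBundle \
             Flow CustomObject PermissionSet Profile Layout RecordType \
             NamedCredential CustomMetadata \
  --target-org my-sandbox
```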

2. Automate the Pipeline

Manual deployment is the single biggest source of release errors. Automate every step you can:

  • Automated testing on every pull request
  • Automated deployment to integration environments
  • Automated validation before production deployment
  • Automated notifications for deployment status

```mermaid
flowchart LR
    A[Developer Commit] --> B[Pull Request]
    B --> C[CI: Run Apex Tests]
    C --> D{Tests Pass?}
    D -->|No| E[Fix & Re-push]
    E --> B
    D -->|Yes| F[Code Review]
    F --> G[Merge to Main]
    G --> H[CD: Validate in Staging]
    H --> I{Validation Pass?}
    I -->|No| J[Investigate & Fix]
    J --> A
    I -->|Yes| K[Deploy to Production]
    K --> L[Post-Deploy Monitoring]
```
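
The "CI: Run Apex Tests" stage of the pipeline can be a single CLI step. A sketch, assuming the sf CLI (v2) and an org authenticated in CI under the illustrative alias ci-org:

```shell
# Run all local Apex tests; a non-zero exit code fails the build.
# JUnit output feeds most CI dashboards; coverage is reported alongside.
sf apex run test \
  --test-level RunLocalTests \
  --code-coverage \
  --result-format junit \
  --output-dir test-results \
  --wait 30 \
  --target-org ci-org
```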

The CTA Standard

In a CTA scenario, always recommend CI/CD automation. Even if the customer currently uses change sets, propose a modernization path. The review board expects you to guide the customer toward best practices, not just validate their current state.

3. Validate Before Deploying

Always run a check-only (validation) deployment before the actual deployment:

  • Catches missing dependencies before they block the real deployment
  • Runs all tests in the target environment, revealing org-specific failures
  • Can be done during business hours without affecting users
  • Gives the team confidence before the deployment window
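
With the sf CLI, a check-only run is its own command. A sketch, assuming a production org alias of production:

```shell
# Check-only validation: compiles metadata and runs tests, saves nothing.
sf project deploy validate \
  --source-dir force-app \
  --test-level RunLocalTests \
  --target-org production
```

A successful validation returns a job ID that `sf project deploy quick` can promote during the deployment window without re-running tests, as long as it happens within the platform's quick-deploy window.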

4. Maintain Environment Parity

Keep sandbox environments as close to production as possible:

  • Refresh sandboxes on a regular cadence
  • Run post-copy scripts to set up environment-specific configuration
  • Monitor sandbox drift (metadata differences from production)
  • Use tools like Gearset to compare environments and identify drift

5. Use Feature Flags for Safe Deployment

Deploy code to production behind feature flags. Activate features separately from deployment:

  • Reduces deployment risk (code is deployed but inactive)
  • Enables gradual rollout to user groups
  • Provides a kill switch for problematic features
  • Decouples deployment from release — deploy weekly, release monthly
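
One common on-platform implementation is to gate the feature behind a custom permission and grant it through a permission set; rollout then becomes permission set assignment rather than deployment. A sketch; the permission set name and username are illustrative:

```shell
# Roll the feature out to a pilot user by assigning the gating permission set.
sf org assign permset \
  --name Feature_NewQuoting \
  --on-behalf-of pilot.user@example.com \
  --target-org production
```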

6. Document Architecture Decisions

Every significant decision should be recorded in an ADR (Architecture Decision Record). Future teams will need to understand not just what was built, but why.

7. Plan for Rollback

Every production deployment needs a rollback plan:

  • Unlocked packages: Install previous version
  • Metadata deployment: Deploy previous source version (forward-fix preferred)
  • Data changes: Backup data before migration, have restore scripts ready
  • Configuration changes: Document pre-change state for manual rollback
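
For the first two paths, the CLI makes the rollback mechanical. A sketch; the package version ID and git tag are placeholders for your own prior version:

```shell
# Unlocked package: install the previous package version.
sf package install --package 04tXXXXXXXXXXXXXXX --target-org production --wait 20

# Metadata: restore the last known-good source and redeploy (forward-fix).
git checkout <last-good-tag> -- force-app
sf project deploy start --source-dir force-app --test-level RunLocalTests --target-org production
```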

8. Separate Metadata Deployment from Data Changes

Deploy metadata and data changes separately:

  • Metadata first (objects, fields, automation)
  • Data second (records, record type assignments, permission assignments)
  • This allows independent rollback of each layer
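
With the sf CLI, the two layers map to separate commands, each with its own rollback point. A sketch; file paths are illustrative:

```shell
# Layer 1: metadata (objects, fields, automation).
sf project deploy start --source-dir force-app --test-level RunLocalTests --target-org production

# Layer 2: data, only after the metadata deployment verifies.
sf data import tree --files data/record-type-assignments.json --target-org production
```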

9. Test in Production-Like Environments

UAT and staging should mirror production as closely as possible:

  • Full Copy sandbox for staging (if budget allows)
  • Production-scale data for performance testing
  • Production-equivalent user profiles and permissions
  • Production-equivalent integration endpoints (test environments of external systems)

10. Monitor After Deployment

Deployment is not done when the deployment succeeds. Monitor the production environment for:

  • Error rates in Apex exception logs
  • Page load performance changes
  • Integration success/failure rates
  • User-reported issues in the first 24-48 hours
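
Proactive dashboards are the goal, but even the CLI supports a first-day spot check, assuming debug logging is enabled for the affected users:

```shell
# List recent debug logs and pull one down to scan for exceptions.
sf apex list log --target-org production
sf apex get log --log-id <log-id> --target-org production
```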

Anti-Patterns

Anti-Pattern: The Wild West Org

What it looks like: No source control. Developers and admins make changes directly in production. No deployment process. No testing.

Why it is bad: No audit trail. No rollback capability. No way to know what changed. Conflicts between changes. Production outages from untested changes.

Fix: Implement source control as the first step. Then sandbox development, then CI/CD. Do not try to implement everything at once — phase the adoption.

Anti-Pattern: Change Set Theater

What it looks like: Change sets are used for all deployments, but the process is manual — components are selected by memory, tests are skipped “because they passed in the sandbox,” and there is no record of what was deployed.

Why it is bad: Human error in component selection. No rollback. No deployment history. No automated testing. Components are frequently missed.

Fix: Migrate to Salesforce CLI + source control. If change sets must remain in use for the short term, at minimum document every deployment and run tests every time.
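
The migration target is a deployment fully described by files in source control. A sketch, assuming a manifest committed alongside the source:

```shell
# Every component is named in manifest/package.xml, which is itself in git,
# so the deployment is recorded and repeatable -- no selection by memory.
sf project deploy start \
  --manifest manifest/package.xml \
  --test-level RunLocalTests \
  --target-org production
```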

Anti-Pattern: The Single Sandbox

What it looks like: One sandbox for all development, testing, and UAT. Multiple developers work in the same sandbox simultaneously.

Why it is bad: Developers overwrite each other’s work. UAT is contaminated by in-progress development. No clean testing environment. Deployment conflicts.

Fix: At minimum, separate development from testing. Use developer sandboxes or scratch orgs for development. Reserve a Partial Copy for UAT. Add a staging environment for pre-production validation.

```mermaid
flowchart LR
    subgraph Dev["Development"]
        D1[Dev Sandbox 1]
        D2[Dev Sandbox 2]
        SO[Scratch Orgs]
    end
    subgraph Test["Testing"]
        SIT[SIT / QA Sandbox]
    end
    subgraph UAT["User Acceptance"]
        UAT1[Partial Copy Sandbox]
    end
    subgraph Staging["Pre-Production"]
        STG[Full Copy Sandbox]
    end
    subgraph Prod["Production"]
        PROD[Production Org]
    end
    D1 --> SIT
    D2 --> SIT
    SO --> SIT
    SIT --> UAT1
    UAT1 --> STG
    STG --> PROD
```

Anti-Pattern: Tests as Coverage Padding

What it looks like: Test classes that create data and call methods but never assert anything meaningful. Tests exist solely to reach the 75% coverage threshold.

Why it is bad: Tests do not catch bugs. Code passes all tests but fails in production. False sense of security. Coverage number is meaningless.

Fix: Require meaningful assertions in every test method. Review test quality during code review, not just coverage percentage. Set a team standard of 85%+ coverage with business-logic assertions.

Anti-Pattern: Big Bang Deployment

What it looks like: Months of development are deployed to production all at once. No phased rollout. No feature flags. All or nothing.

Why it is bad: High risk of failure. If anything goes wrong, everything must be rolled back. Debugging is difficult because many changes land at once. Users are overwhelmed by all changes simultaneously.

Fix: Deploy frequently in small increments. Use feature flags to deploy code without activating features. Phase rollouts by user group or feature area.

Anti-Pattern: No Rollback Plan

What it looks like: Production deployment proceeds without a documented plan for what to do if things go wrong. The implicit plan is “we’ll figure it out.”

Why it is bad: When a deployment fails at 10 PM on a Thursday, “figure it out” means panic, ad-hoc changes, and potentially making things worse.

Fix: Document the rollback plan before every deployment. Include: what to revert, how to revert it, who does it, and what the communication plan is.

Anti-Pattern: Ignoring Sandbox Drift

What it looks like: Sandboxes are refreshed infrequently. Over time, the sandbox metadata diverges significantly from production. Deployments that work in the sandbox fail in production.

Why it is bad: Tests pass in a sandbox that does not reflect production. Deployments fail due to missing dependencies that exist in production but not in the stale sandbox.

Fix: Refresh sandboxes on a regular cadence. Use environment comparison tools (Gearset, Copado) to identify and resolve drift before it causes deployment failures.
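
If a commercial comparison tool is not available, a rough drift check is possible with the CLI alone: retrieve the same manifest from both orgs and diff the results. A sketch; paths and aliases are illustrative, and exact retrieve flags vary by CLI version (check `sf project retrieve start --help`):

```shell
# Snapshot both orgs with the same manifest, then diff the source trees.
sf project retrieve start --manifest manifest/package.xml --target-org production --output-dir /tmp/prod-snapshot
sf project retrieve start --manifest manifest/package.xml --target-org my-sandbox --output-dir /tmp/sandbox-snapshot
diff -r /tmp/prod-snapshot /tmp/sandbox-snapshot
```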

Checklist: DevOps Maturity Assessment

Use this to evaluate the current state and propose improvements in a CTA scenario:

| Practice | Level 1 (Basic) | Level 2 (Standard) | Level 3 (Advanced) |
| --- | --- | --- | --- |
| Source Control | None | Apex/LWC only | All metadata |
| Deployment | Change sets | CLI manual | CI/CD automated |
| Testing | Manual only | Apex tests in pipeline | Full pyramid automated |
| Environments | 1 sandbox | Dev + UAT | Dev + SIT + UAT + Staging |
| Monitoring | Reactive | Error logs reviewed | Proactive dashboards |
| Rollback | None planned | Manual documented | Automated package rollback |
| Governance | None | Informal reviews | ARB + CAB + ADRs |
