
Development Lifecycle Trade-offs

Every development lifecycle decision involves trade-offs. The CTA review board expects you to articulate what you gain and what you sacrifice with each choice. This page covers the major trade-off dimensions across Domain 6.

How to present trade-offs

Use the formula: “I chose [option] because [scenario-specific reason]. The trade-off is [downside], which I mitigate by [mitigation].” This demonstrates you considered alternatives and can defend your position.


1. Agile vs Waterfall

```mermaid
flowchart TD
    A[Project Methodology Decision] --> B{Requirements<br/>well-defined?}
    B -->|Yes| C{Regulatory<br/>sign-offs required?}
    B -->|No| F[Agile]
    C -->|Yes| D{Team has<br/>agile maturity?}
    C -->|No| F
    D -->|Yes| E[Hybrid: Waterfall gates<br/>+ Agile sprints]
    D -->|No| G[Waterfall]
    F --> H["Fast feedback, flexible scope"]
    G --> I["Predictable budget, formal artifacts"]
    E --> J["Compliance + velocity"]
```

This is the most fundamental project methodology trade-off. CTA scenarios rarely have a single “right” answer — the choice depends on the customer’s organizational maturity, timeline, and risk tolerance.

| Dimension | Agile (Scrum/SAFe) | Waterfall |
|---|---|---|
| Feedback speed | Every 2-week sprint | After full build (months) |
| Scope flexibility | High — backlog is reprioritized continuously | Low — scope locked after requirements phase |
| Risk discovery | Early — working software each sprint | Late — issues found in UAT or later |
| Documentation | Lighter, living docs | Comprehensive upfront documentation |
| Stakeholder involvement | High and continuous | Front-loaded, then review gates |
| Team maturity required | Self-organizing, cross-functional | Defined roles, sequential handoffs |
| Budget predictability | Variable — scope adjusts to budget | Fixed — budget tied to signed scope |
| Compliance readiness | Requires discipline for audit trails | Naturally creates audit-ready artifacts |
| CTA scenario frequency | Very common — most scenarios assume agile | Appears in regulated or government scenarios |

When Each Side Wins

Agile wins when: Requirements are evolving, stakeholders are available for regular feedback, the team is experienced with iterative delivery, or the project has high uncertainty.

Waterfall wins when: Regulatory requirements demand formal sign-offs at each stage, the scope is truly fixed and well-understood, the organization lacks agile maturity, or contractual obligations require upfront deliverables.

Hybrid approach: Many CTA scenarios benefit from a hybrid — waterfall for the overall program structure (milestones, gates) with agile sprints for execution within each phase. This satisfies compliance while maintaining delivery velocity.


2. Change Sets vs CLI (Salesforce DX)

| Dimension | Change Sets | Salesforce CLI / SFDX |
|---|---|---|
| Learning curve | Low — point-and-click in Setup | Higher — command line, project structure |
| Repeatability | Manual — must recreate each time | Automated — scripted, version-controlled |
| CI/CD compatibility | None — cannot automate | Full — integrates with any CI/CD tool |
| Rollback | Destructive changes only, no true rollback | Version control enables rollback |
| Team collaboration | Sequential — one deployer at a time | Parallel — merge workflows |
| Metadata coverage | Limited subset of metadata types | Broader metadata coverage |
| Audit trail | Deployment history in Setup | Git history, PR reviews, pipeline logs |
| Org complexity | Sufficient for simple orgs | Required for complex, multi-team orgs |

When Each Side Wins

Change sets win when: Small team (1-2 admins), simple org, infrequent deployments, no CI/CD requirement, or admin-heavy team without CLI experience.

CLI/SFDX wins when: Multiple developers, complex metadata, CI/CD pipeline needed, unlocked packages in use, scratch org development model, or audit requirements demand git history.

CTA board expectation

The board expects CTAs to recommend Salesforce DX for any enterprise-scale scenario. Recommending change sets for a complex, multi-team implementation will raise concerns about your architectural maturity. However, you should acknowledge that change sets may coexist during a transition period.
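To ground the recommendation, a pipeline typically wraps the Salesforce CLI in scripts. A minimal sketch in Python (illustrative only — the command shape follows the current `sf` v2 syntax, but flags should be verified against the CLI version in use):

```python
# Sketch: build Salesforce CLI deploy commands for a CI/CD pipeline.
# Assumes the sf (v2) CLI command syntax; verify flags against your version.

def build_deploy_command(target_org, check_only=True,
                         source_dir="force-app",
                         test_level="RunLocalTests"):
    """Assemble an 'sf project deploy' invocation as an argument list.

    check_only=True yields a validation-only run, which CI can execute on
    every pull request; the same list with check_only=False performs the
    real deployment after approval.
    """
    action = "validate" if check_only else "start"
    return ["sf", "project", "deploy", action,
            "--target-org", target_org,
            "--source-dir", source_dir,
            "--test-level", test_level]

# A pipeline step would pass this list to subprocess.run(...)
print(" ".join(build_deploy_command("qa-sandbox")))
```

A pull-request check would run the validate form against a sandbox; merging to the main branch would trigger the start form against the target org.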


3. Full Copy vs Partial Copy Sandbox

| Dimension | Full Copy Sandbox | Partial Copy Sandbox |
|---|---|---|
| Data fidelity | Production-identical data | Sampled subset via sandbox templates |
| Refresh time | Hours to days (large orgs) | Minutes to hours |
| Storage cost | Matches production storage | Fraction of production storage |
| License cost | Included with Enterprise+ (limited qty) | More available per edition |
| Testing realism | Highest — real data volumes and relationships | Lower — may miss edge cases in data |
| Data sensitivity | Contains real PII/PHI (requires masking) | Smaller attack surface (but still needs masking) |
| Refresh frequency | Limited (29-day interval) | More frequent refreshes possible |
| LDV testing | Accurate performance testing | Cannot test real data volume behavior |

When Each Side Wins

Full copy wins when: Performance testing with realistic data volumes, UAT requiring production-identical data, data migration validation, or integration testing with real data relationships.

Partial copy wins when: Development work, unit testing, training environments, cost-constrained projects, or when data privacy regulations make full copy impractical without significant masking investment.
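Refresh intervals often drive the choice as much as cost. A small sketch (Python; the 29-day Full Copy interval is noted above, while the 5-day Partial Copy and 1-day Developer intervals are assumptions to be verified against Salesforce's current documented limits):

```python
from datetime import date, timedelta

# Assumed minimum refresh intervals in days (verify against current limits).
REFRESH_INTERVAL_DAYS = {"full": 29, "partial": 5, "developer": 1}

def next_refresh(sandbox_type, last_refresh):
    """Earliest date a sandbox of the given type may be refreshed again."""
    return last_refresh + timedelta(days=REFRESH_INTERVAL_DAYS[sandbox_type])

print(next_refresh("full", date(2024, 1, 1)))     # → 2024-01-30
print(next_refresh("partial", date(2024, 1, 1)))  # → 2024-01-06
```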

The masking question

The CTA board frequently asks about data masking in sandboxes. Always mention post-copy masking (an Apex class implementing the SandboxPostCopy interface) for sensitive data, regardless of sandbox type. This demonstrates security awareness.
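In a real org the masking runs server-side as Sandbox Post Copy Apex; the transformation idea can be sketched in Python, with hypothetical field names:

```python
import hashlib

def mask_email(email):
    """Replace the local part with a deterministic hash and a safe domain.

    Deterministic masking keeps record matching intact across objects
    while making the original address unrecoverable in the sandbox.
    """
    digest = hashlib.sha256(email.lower().encode()).hexdigest()[:12]
    return f"user_{digest}@example.invalid"

def mask_contact(record):
    """Mask the PII fields of a Contact-like dict (field names illustrative)."""
    masked = dict(record)
    if masked.get("Email"):
        masked["Email"] = mask_email(masked["Email"])
    if masked.get("Phone"):
        masked["Phone"] = "555-0100"  # reserved fictional number
    return masked

print(mask_contact({"Email": "jane@acme.com", "Phone": "415-555-1234"}))
```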


4. Manual vs Automated Testing

| Dimension | Manual Testing | Automated Testing |
|---|---|---|
| Setup cost | Low — no tooling needed | High — framework, scripts, maintenance |
| Execution speed | Slow — human-paced | Fast — runs in minutes |
| Regression coverage | Inconsistent — depends on tester | Consistent — same tests every time |
| Exploratory value | High — humans find unexpected issues | Low — only tests what is scripted |
| Maintenance burden | None (ad hoc) | Ongoing — tests break as UI/logic changes |
| CI/CD integration | Cannot gate deployments | Gates deployments automatically |
| Scalability | Linear cost (more testers = more cost) | Fixed cost after initial investment |
| Confidence level | Variable | High and measurable |

When Each Side Wins

Manual testing wins when: Exploratory testing, UX validation, complex business process verification, one-time migration validation, or early-stage projects where automation ROI is unclear.

Automated testing wins when: Regression testing across releases, CI/CD pipeline gating, high-frequency deployments, large test suites, or long-term projects where automation investment pays off.

Recommended CTA approach: Present a test pyramid with automated unit tests at the base (Apex test classes, 75%+ coverage), automated integration tests in the middle, and manual exploratory/UAT at the top. This demonstrates a balanced, cost-effective strategy.
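The pyramid can double as a pipeline gate. A sketch (Python; the 75% floor is Salesforce's production Apex coverage requirement, while the shape check and counts are illustrative):

```python
def deployment_gate(coverage_pct, unit, integration, e2e):
    """Return (ok, reasons) for a release candidate.

    - Salesforce requires at least 75% aggregate Apex coverage to deploy
      to production.
    - Pyramid shape check (illustrative): more unit tests than integration
      tests, and more integration tests than end-to-end tests.
    """
    reasons = []
    if coverage_pct < 75:
        reasons.append(f"coverage {coverage_pct}% below 75% minimum")
    if not (unit > integration > e2e):
        reasons.append("counts do not form a pyramid (unit > integration > e2e)")
    return (not reasons, reasons)

ok, why = deployment_gate(82.0, unit=400, integration=60, e2e=12)
print(ok)  # → True
```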


5. Centralized vs Federated Center of Excellence (CoE)

```mermaid
flowchart TD
    A[CoE Model Decision] --> B{Org size?}
    B -->|"Small/Medium"| C{Strict compliance?}
    B -->|"Large (5+ BUs)"| D{BUs share<br/>customer data?}
    C -->|Yes| E[Centralized CoE]
    C -->|No| F{Shared SF expertise?}
    F -->|Limited| E
    F -->|Distributed| G[Hub-and-Spoke Hybrid]
    D -->|Yes| G
    D -->|No| H{BUs have mature<br/>SF teams?}
    H -->|Yes| I[Federated CoE]
    H -->|No| G
    E --> J["Strong governance, bottleneck risk"]
    G --> K["Balanced: central standards, BU speed"]
    I --> L["BU autonomy, consistency risk"]
```

| Dimension | Centralized CoE | Federated CoE |
|---|---|---|
| Governance strength | Strong — single authority | Variable — depends on BU compliance |
| Standards consistency | High — one set of standards | Risk of divergence across BUs |
| Speed of delivery | Slower — bottleneck at central team | Faster — BU teams self-serve |
| Knowledge concentration | Risk of single point of failure | Knowledge distributed across org |
| Cost | Lower headcount, shared resources | Higher — each BU needs expertise |
| Innovation | Controlled, methodical | Faster experimentation at BU level |
| Scalability | Bottleneck as org grows | Scales with organization |
| BU autonomy | Low — must go through central team | High — BUs own their roadmap |

When Each Side Wins

Centralized wins when: Organization is small/medium, strict compliance requirements, shared customer data across BUs, limited Salesforce expertise in the org, or early in the Salesforce journey.

Federated wins when: Large organization with independent BUs, diverse business processes, multiple orgs, BUs have mature Salesforce teams, or speed of delivery is prioritized over consistency.

Hybrid model: Most enterprise CTA scenarios benefit from a “hub and spoke” model — a central CoE sets standards, manages shared components, and governs architecture decisions, while BU teams execute within those guardrails. This balances governance with delivery speed.


6. Short vs Long Release Cycles

| Dimension | Short Cycles (2-4 weeks) | Long Cycles (Quarterly+) |
|---|---|---|
| Feedback incorporation | Rapid — issues fixed quickly | Slow — feedback waits for next release |
| Risk per release | Low — small change sets | High — large change sets, more unknowns |
| Testing burden | Lighter per release | Heavy — full regression each cycle |
| Coordination overhead | Continuous — always releasing | Concentrated — big release events |
| User disruption | Frequent small changes | Infrequent but larger disruptions |
| Rollback complexity | Simple — small delta to revert | Complex — large delta with dependencies |
| CI/CD requirement | Essential | Nice-to-have (can deploy manually) |
| Salesforce release alignment | Can react to platform releases quickly | May conflict with three-times-yearly Salesforce releases |

When Each Side Wins

Short cycles win when: CI/CD is mature, automated testing is in place, business requires rapid feature delivery, or the team follows DevOps practices.

Long cycles win when: Regulatory change management requirements, limited testing resources, complex multi-system deployments requiring coordination, or organizations with formal CAB processes.

Salesforce release cadence

Salesforce releases 3 times per year (Spring, Summer, Winter). Your release strategy must account for these platform releases. Short cycles can absorb platform changes incrementally; long cycles must plan for potential breaking changes in a single big effort.
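The alignment check can be automated in release planning. A sketch (Python; the release months are assumptions — actual preview and go-live dates vary by year and instance, per Salesforce's published release calendar):

```python
from datetime import date

# Assumed typical release months (Spring ~Feb, Summer ~Jun, Winter ~Oct);
# actual dates vary by year and instance.
PLATFORM_RELEASE_MONTHS = {2: "Spring", 6: "Summer", 10: "Winter"}

def conflicts_with_platform_release(planned):
    """Return the platform release name if a go-live lands in a release month."""
    return PLATFORM_RELEASE_MONTHS.get(planned.month)

print(conflicts_with_platform_release(date(2025, 6, 15)))  # → Summer
print(conflicts_with_platform_release(date(2025, 7, 15)))  # → None
```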


Trade-off Analysis Framework

Use this template when analyzing any dev lifecycle trade-off at the board:

| Step | Question | Example |
|---|---|---|
| 1 | What is the business driver? | "Need to deploy weekly to respond to market changes" |
| 2 | What are the options? | "Short release cycles with CI/CD vs quarterly releases" |
| 3 | What does each option optimize for? | "Speed and risk reduction vs coordination simplicity" |
| 4 | What does each option sacrifice? | "Short cycles need CI/CD investment; long cycles delay feedback" |
| 5 | What does the scenario context favor? | "Multiple teams, evolving requirements favor short cycles" |
| 6 | What is the mitigation for the trade-off? | "Invest in automated testing to support short cycles" |
| 7 | What is the recommendation? | "2-week sprints with automated deployment pipeline" |

Cross-Domain Connections

Dev lifecycle trade-offs connect directly to other domains.

