Development Lifecycle Trade-offs
Every development lifecycle decision involves trade-offs. The CTA review board expects you to articulate what you gain and what you sacrifice with each choice. This page covers the major trade-off dimensions across Domain 6.
How to present trade-offs
Use the formula: “I chose [option] because [scenario-specific reason]. The trade-off is [downside], which I mitigate by [mitigation].” This demonstrates you considered alternatives and can defend your position.
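To make the formula mechanical, here is a minimal Python sketch that renders it as one defensible sentence (the function name and the sample values are hypothetical, purely for illustration):

```python
def tradeoff_statement(option, reason, downside, mitigation):
    """Render the board-answer formula as a single defensible sentence."""
    return (f"I chose {option} because {reason}. "
            f"The trade-off is {downside}, which I mitigate by {mitigation}.")

print(tradeoff_statement(
    "2-week agile sprints",
    "requirements are still evolving and the business needs fast feedback",
    "weaker upfront budget predictability",
    "fixing team size and sprint length so cost per sprint is constant",
))
```

The point is the shape, not the code: every recommendation you give the board should decompose cleanly into those four slots.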
1. Agile vs Waterfall
```mermaid
flowchart TD
A[Project Methodology Decision] --> B{Requirements<br/>well-defined?}
B -->|Yes| C{Regulatory<br/>sign-offs required?}
B -->|No| F[Agile]
C -->|Yes| D{Team has<br/>agile maturity?}
C -->|No| F
D -->|Yes| E[Hybrid: Waterfall gates<br/>+ Agile sprints]
D -->|No| G[Waterfall]
F --> H["Fast feedback, flexible scope"]
G --> I["Predictable budget, formal artifacts"]
E --> J["Compliance + velocity"]
```
Agile vs waterfall is the most fundamental project methodology trade-off. CTA scenarios rarely have a single “right” answer — the choice depends on the customer’s organizational maturity, timeline, and risk tolerance.
| Dimension | Agile (Scrum/SAFe) | Waterfall |
|---|---|---|
| Feedback speed | Every 2-week sprint | After full build (months) |
| Scope flexibility | High — backlog is reprioritized continuously | Low — scope locked after requirements phase |
| Risk discovery | Early — working software each sprint | Late — issues found in UAT or later |
| Documentation | Lighter, living docs | Comprehensive upfront documentation |
| Stakeholder involvement | High and continuous | Front-loaded, then review gates |
| Team maturity required | Self-organizing, cross-functional | Defined roles, sequential handoffs |
| Budget predictability | Variable — scope adjusts to budget | Fixed — budget tied to signed scope |
| Compliance readiness | Requires discipline for audit trails | Naturally creates audit-ready artifacts |
| CTA scenario frequency | Very common — most scenarios assume agile | Appears in regulated or government scenarios |
When Each Side Wins
Agile wins when: Requirements are evolving, stakeholders are available for regular feedback, the team is experienced with iterative delivery, or the project has high uncertainty.
Waterfall wins when: Regulatory requirements demand formal sign-offs at each stage, the scope is truly fixed and well-understood, the organization lacks agile maturity, or contractual obligations require upfront deliverables.
Hybrid approach: Many CTA scenarios benefit from a hybrid — waterfall for the overall program structure (milestones, gates) with agile sprints for execution within each phase. This satisfies compliance while maintaining delivery velocity.
2. Change Sets vs CLI (Salesforce DX)
| Dimension | Change Sets | Salesforce CLI / SFDX |
|---|---|---|
| Learning curve | Low — point-and-click in Setup | Higher — command line, project structure |
| Repeatability | Manual — must recreate each time | Automated — scripted, version-controlled |
| CI/CD compatibility | None — cannot automate | Full — integrates with any CI/CD tool |
| Rollback | None — change sets cannot delete components, so reverting requires manual cleanup | Version control enables rollback — redeploy a prior commit, with destructive changes as needed |
| Team collaboration | Sequential — one deployer at a time | Parallel — merge workflows |
| Metadata coverage | Limited subset of metadata types | Broader metadata coverage |
| Audit trail | Deployment history in Setup | Git history, PR reviews, pipeline logs |
| Org complexity | Sufficient for simple orgs | Required for complex, multi-team orgs |
When Each Side Wins
Change sets win when: Small team (1-2 admins), simple org, infrequent deployments, no CI/CD requirement, or admin-heavy team without CLI experience.
CLI/SFDX wins when: Multiple developers, complex metadata, CI/CD pipeline needed, unlocked packages in use, scratch org development model, or audit requirements demand git history.
CTA board expectation
The board expects CTAs to recommend Salesforce DX for any enterprise-scale scenario. Recommending change sets for a complex, multi-team implementation will raise concerns about your architectural maturity. However, you should acknowledge that change sets may coexist during a transition period.
3. Full Copy vs Partial Copy Sandbox
| Dimension | Full Copy Sandbox | Partial Copy Sandbox |
|---|---|---|
| Data fidelity | Production-identical data | Sampled subset via sandbox templates |
| Refresh time | Hours to days (large orgs) | Minutes to hours |
| Storage cost | Matches production storage | Fraction of production storage |
| License cost | Included only with top editions (Unlimited/Performance); paid add-on otherwise | Included with Enterprise and above; more licenses available per edition |
| Testing realism | Highest — real data volumes and relationships | Lower — may miss edge cases in data |
| Data sensitivity | Contains real PII/PHI (requires masking) | Smaller attack surface (but still needs masking) |
| Refresh frequency | Limited (29-day interval) | More frequent refreshes possible |
| LDV testing | Accurate performance testing at production volumes | Not representative — template-sampled data (roughly up to 10K records per object) |
When Each Side Wins
Full copy wins when: Performance testing with realistic data volumes, UAT requiring production-identical data, data migration validation, or integration testing with real data relationships.
Partial copy wins when: Development work, unit testing, training environments, cost-constrained projects, or when data privacy regulations make full copy impractical without significant masking investment.
The masking question
The CTA board frequently asks about data masking in sandboxes. Always mention post-copy masking scripts (an Apex class implementing the `SandboxPostCopy` interface) for sensitive data, regardless of sandbox type. This demonstrates security awareness.
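The masking logic itself is easy to sketch. The production version would be an Apex class implementing the `SandboxPostCopy` interface, but the core idea transfers; below is an illustrative Python sketch with a hypothetical `PII_FIELDS` set. Deterministic hashing keeps the masked data internally consistent, so relationships remain testable without exposing real values:

```python
import hashlib

# Hypothetical field list -- adapt to the objects and fields in your org.
PII_FIELDS = {"Email", "Phone", "SSN__c"}

def mask_record(record: dict) -> dict:
    """Replace PII values with deterministic, non-reversible placeholders.

    The same real value always masks to the same fake value, so joins and
    duplicate-detection logic still behave realistically after masking.
    """
    masked = dict(record)
    for field in PII_FIELDS & record.keys():
        digest = hashlib.sha256(str(record[field]).encode()).hexdigest()[:10]
        if field == "Email":
            masked[field] = f"user_{digest}@example.invalid"
        else:
            masked[field] = f"MASKED_{digest}"
    return masked

print(mask_record({"Name": "Acme", "Email": "jane@acme.com", "Phone": "555-0100"}))
```

Note the `.invalid` domain: masked emails must never be deliverable, or a sandbox workflow could spam real customers.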
4. Manual vs Automated Testing
| Dimension | Manual Testing | Automated Testing |
|---|---|---|
| Setup cost | Low — no tooling needed | High — framework, scripts, maintenance |
| Execution speed | Slow — human-paced | Fast — runs in minutes |
| Regression coverage | Inconsistent — depends on tester | Consistent — same tests every time |
| Exploratory value | High — humans find unexpected issues | Low — only tests what is scripted |
| Maintenance burden | None (ad hoc) | Ongoing — tests break as UI/logic changes |
| CI/CD integration | Cannot gate deployments | Gates deployments automatically |
| Scalability | Linear cost (more testers = more cost) | Fixed cost after initial investment |
| Confidence level | Variable | High and measurable |
When Each Side Wins
Manual testing wins when: Exploratory testing, UX validation, complex business process verification, one-time migration validation, or early-stage projects where automation ROI is unclear.
Automated testing wins when: Regression testing across releases, CI/CD pipeline gating, high-frequency deployments, large test suites, or long-term projects where automation investment pays off.
Recommended CTA approach: Present a test pyramid with automated unit tests at the base (Apex test classes, 75%+ coverage), automated integration tests in the middle, and manual exploratory/UAT at the top. This demonstrates a balanced, cost-effective strategy.
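The break-even arithmetic behind “automation investment pays off” is worth making explicit at the board. A back-of-envelope sketch (all figures are hypothetical person-hours):

```python
import math

def breakeven_releases(setup_cost, manual_cost_per_release, maintenance_per_release):
    """Releases needed before an automated suite beats pure manual regression.

    All costs are in person-hours. Returns None when per-release test
    maintenance eats the whole saving, i.e. automation never pays off.
    """
    saving = manual_cost_per_release - maintenance_per_release
    if saving <= 0:
        return None
    return math.ceil(setup_cost / saving)

# Hypothetical figures: 200h to build the suite, 40h of manual regression
# per release, 8h of ongoing test maintenance per release.
print(breakeven_releases(200, 40, 8))  # breaks even on the 7th release
```

The same arithmetic explains when manual testing wins: on a short-lived project that will see only two or three releases, the setup cost is never recovered.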
5. Centralized vs Federated Center of Excellence (CoE)
```mermaid
flowchart TD
A[CoE Model Decision] --> B{Org size?}
B -->|"Small/Medium"| C{Strict compliance?}
B -->|"Large (5+ BUs)"| D{BUs share<br/>customer data?}
C -->|Yes| E[Centralized CoE]
C -->|No| F{Shared SF expertise?}
F -->|Limited| E
F -->|Distributed| G[Hub-and-Spoke Hybrid]
D -->|Yes| G
D -->|No| H{BUs have mature<br/>SF teams?}
H -->|Yes| I[Federated CoE]
H -->|No| G
E --> J["Strong governance, bottleneck risk"]
G --> K["Balanced: central standards, BU speed"]
I --> L["BU autonomy, consistency risk"]
```
A[CoE Model Decision] --> B{Org size?}
B -->|"Small/Medium"| C{Strict compliance?}
B -->|"Large (5+ BUs)"| D{BUs share<br/>customer data?}
C -->|Yes| E[Centralized CoE]
C -->|No| F{Shared SF expertise?}
F -->|Limited| E
F -->|Distributed| G[Hub-and-Spoke Hybrid]
D -->|Yes| G
D -->|No| H{BUs have mature<br/>SF teams?}
H -->|Yes| I[Federated CoE]
H -->|No| G
E --> J["Strong governance, bottleneck risk"]
G --> K["Balanced: central standards, BU speed"]
I --> L["BU autonomy, consistency risk"]
```
| Dimension | Centralized CoE | Federated CoE |
|---|---|---|
| Governance strength | Strong — single authority | Variable — depends on BU compliance |
| Standards consistency | High — one set of standards | Risk of divergence across BUs |
| Speed of delivery | Slower — bottleneck at central team | Faster — BU teams self-serve |
| Knowledge concentration | Risk of single point of failure | Knowledge distributed across org |
| Cost | Lower headcount, shared resources | Higher — each BU needs expertise |
| Innovation | Controlled, methodical | Faster experimentation at BU level |
| Scalability | Bottleneck as org grows | Scales with organization |
| BU autonomy | Low — must go through central team | High — BUs own their roadmap |
When Each Side Wins
Centralized wins when: Organization is small/medium, strict compliance requirements, shared customer data across BUs, limited Salesforce expertise in the org, or early in the Salesforce journey.
Federated wins when: Large organization with independent BUs, diverse business processes, multiple orgs, BUs have mature Salesforce teams, or speed of delivery is prioritized over consistency.
Hybrid model: Most enterprise CTA scenarios benefit from a “hub and spoke” model — a central CoE sets standards, manages shared components, and governs architecture decisions, while BU teams execute within those guardrails. This balances governance with delivery speed.
6. Short vs Long Release Cycles
| Dimension | Short Cycles (2-4 weeks) | Long Cycles (Quarterly+) |
|---|---|---|
| Feedback incorporation | Rapid — issues fixed quickly | Slow — feedback waits for next release |
| Risk per release | Low — small change sets | High — large change sets, more unknowns |
| Testing burden | Lighter per release | Heavy — full regression each cycle |
| Coordination overhead | Continuous — always releasing | Concentrated — big release events |
| User disruption | Frequent small changes | Infrequent but larger disruptions |
| Rollback complexity | Simple — small delta to revert | Complex — large delta with dependencies |
| CI/CD requirement | Essential | Nice-to-have (can deploy manually) |
| Salesforce release alignment | Can react to platform releases quickly | May conflict with 3x yearly Salesforce releases |
When Each Side Wins
Short cycles win when: CI/CD is mature, automated testing is in place, business requires rapid feature delivery, or the team follows DevOps practices.
Long cycles win when: Regulatory change management requirements, limited testing resources, complex multi-system deployments requiring coordination, or organizations with formal CAB processes.
Salesforce release cadence
Salesforce releases 3 times per year (Spring, Summer, Winter). Your release strategy must account for these platform releases. Short cycles can absorb platform changes incrementally; long cycles must plan for potential breaking changes in a single big effort.
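To make the cadence point concrete, a small sketch: given a deployment cadence, how long after each platform release until your next team release can ship an adjustment? The dates below are illustrative placeholders, not actual Salesforce release dates (check Salesforce Trust for the real schedule):

```python
from datetime import date, timedelta

# Illustrative seasonal release dates -- real dates vary by year and instance.
platform_releases = [date(2025, 2, 15), date(2025, 6, 14), date(2025, 10, 11)]

def next_deploy_after(platform_date, cadence_days, first_deploy):
    """Days from a platform release to the team's next scheduled deployment."""
    d = first_deploy
    while d < platform_date:
        d += timedelta(days=cadence_days)
    return (d - platform_date).days

first = date(2025, 1, 6)  # hypothetical first deployment of the year
for cadence_days, label in [(14, "2-week sprints"), (91, "quarterly releases")]:
    lags = [next_deploy_after(p, cadence_days, first) for p in platform_releases]
    print(f"{label}: worst-case wait to ship an adjustment = {max(lags)} days")
```

With these illustrative dates, a 2-week cadence can ship an adjustment within days of any platform release, while a quarterly cadence can leave a breaking platform change unaddressed for months.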
Trade-off Analysis Framework
Use this template when analyzing any dev lifecycle trade-off at the board:
| Step | Question | Example |
|---|---|---|
| 1 | What is the business driver? | “Need to deploy weekly to respond to market changes” |
| 2 | What are the options? | “Short release cycles with CI/CD vs quarterly releases” |
| 3 | What does each option optimize for? | “Speed and risk reduction vs coordination simplicity” |
| 4 | What does each option sacrifice? | “Short cycles need CI/CD investment; long cycles delay feedback” |
| 5 | What does the scenario context favor? | “Multiple teams, evolving requirements favor short cycles” |
| 6 | What is the mitigation for the trade-off? | “Invest in automated testing to support short cycles” |
| 7 | What is the recommendation? | “2-week sprints with automated deployment pipeline” |
Cross-Domain Connections
Dev lifecycle trade-offs connect directly to other domains:
- Declarative vs Programmatic — deployment complexity differs dramatically between declarative config and custom Apex
- Integration Trade-offs — integration deployment adds pipeline complexity (connected app configs, API versioning, middleware releases)
- System Architecture Trade-offs — org strategy decisions (single vs multi-org) multiply the deployment pipeline complexity
Sources
- Salesforce DX Developer Guide
- Salesforce DevOps Center
- Trailhead: Org Development Model
- Salesforce Architect: Environment Strategy
- Sam Newman, “Building Microservices” (deployment patterns)
- Gene Kim et al., “The Phoenix Project” (DevOps principles)
- CTA coaching community notes on dev lifecycle trade-offs