
Solution 07: ConnectAll Communications

AI-Generated Content — Use for Reference Only

This content is AI-generated and has only been validated by AI review processes. It has NOT been reviewed or validated by certified Salesforce CTAs or human subject matter experts. Do not rely on this content as authoritative or completely accurate. Use it solely as a reference point for your own study and preparation. Always verify architectural recommendations against official Salesforce documentation.

Spoiler Warning

Attempt Scenario 07: ConnectAll Communications FIRST. Set a 180-minute timer and build your own solution before reading this.

Assumptions

  1. Communications Cloud (Industries package) and Industries CPQ licensed for product catalog and order management
  2. Amdocs BSS/OSS remains billing system of record; Salesforce is the CRM/engagement layer
  3. MuleSoft Anypoint available as integration middleware
  4. CDR data stays in a dedicated Snowflake data warehouse — never in Salesforce core
  5. 1.8M residential + 120K business customers fit in a single org with proper data architecture
  6. Copado or Gearset for CI/CD (not DevOps Center, which is unsuited for scratch-org development at 25-developer scale)
  7. Salesforce Field Service with offline capability for field technicians
  8. Data Cloud licensed for CDR access and analytics

Key Architectural Decisions

Decision 1: CDR Data Stays External (Data Cloud + MuleSoft API)

I chose to keep CDRs (2.8M records/day) in Snowflake, with Data Cloud ingesting daily usage summaries and a MuleSoft REST API providing on-demand agent access.

Rejected Big Objects because of their limited query capabilities (SOQL must filter on the composite index fields in order, with a range filter allowed only on the last field queried), and because 2.8M daily inserts would consume API limits rapidly. Agent query patterns require flexible date-range and phone-number lookups that Big Objects handle poorly. Also rejected full Data Cloud ingestion because query latency for individual customer lookups cannot reliably meet the sub-3-second SLA.

Implementation: Snowflake stores raw CDRs (7-year retention). Data Cloud ingests daily summaries for analytics/churn scoring. MuleSoft REST API queries Snowflake for the agent-facing “View Usage Details” action. CPNI access controls enforced at both Salesforce permission and Snowflake row-security levels.
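The CPNI control described above (permission check plus mandatory access logging around every CDR lookup) can be sketched as a thin wrapper on the on-demand query. This is a minimal illustration only: `query_api` stands in for the MuleSoft REST call to Snowflake, and none of the names are real Salesforce or MuleSoft APIs.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Callable

@dataclass
class CdrService:
    query_api: Callable          # placeholder for the MuleSoft REST call to Snowflake
    audit_log: list = field(default_factory=list)

    def view_usage(self, agent_id: str, has_cpni_perm: bool,
                   phone: str, start: date, end: date):
        # CPNI: every CDR access must be both authorized and logged,
        # including denied attempts, to support FCC/PUC audit reporting.
        if not has_cpni_perm:
            self.audit_log.append(("DENIED", agent_id, phone))
            raise PermissionError("CPNI permission required")
        self.audit_log.append(("ALLOWED", agent_id, phone))
        return self.query_api(phone, start, end)
```

In the real design the same check happens twice: a Salesforce permission gates the "View Usage Details" action, and Snowflake row security enforces it again server-side.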

Decision 2: Industries CPQ + OmniStudio for Product Catalog

I chose Communications Cloud with Industries CPQ for catalog modeling, pricing rules, and guided selling.

Rejected custom build (standard Products/Price Books + custom objects) because the 2,500-product catalog with bundle dependencies, step-up pricing, geographic eligibility, and cross-service discounts would require thousands of lines of custom Apex, creating long-term maintenance burden.

Implementation: EPC models all products/bundles/promotions declaratively. OmniStudio FlexCards power agent workspace with guided selling. Amdocs master catalog reconciled nightly via MuleSoft with drift alerting. Step-up pricing modeled as time-based pricing terms with automated milestone tracking.
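The nightly Amdocs-to-EPC reconciliation with drift alerting might look like the following sketch, assuming both catalogs can be flattened to `{product_code: attributes}` maps. The dict shapes and the threshold default are illustrative, not the real MuleSoft flow.

```python
def reconcile(amdocs: dict, epc: dict, alert_threshold: int = 10):
    """Compare the Amdocs master catalog against EPC and collect drift."""
    drift = []
    for code, attrs in amdocs.items():
        if code not in epc:
            drift.append((code, "missing in EPC"))
        elif epc[code] != attrs:
            drift.append((code, "attribute mismatch"))
    # Products in EPC that no longer exist in the master are also drift
    for code in epc.keys() - amdocs.keys():
        drift.append((code, "not in Amdocs master"))
    return drift, len(drift) > alert_threshold
```

The boolean drives the alert; the drift list itself would be written to a tracking object so product managers can resolve each discrepancy.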

Decision 3: Trunk-Based Development with Feature Flags

I chose trunk-based development with short-lived feature branches and feature flags for 3 distributed teams (25 developers).

Rejected GitFlow because long-lived branches were the root cause of ConnectAll’s current deployment conflicts. With 25 developers modifying shared metadata, long-lived branches guarantee late-stage merge pain.

Implementation: Single main branch. Feature branches max 2-3 days. Feature flags via CMDT gate incomplete features. Cross-team PR reviews mandatory for shared objects. CODEOWNERS defines team boundaries. CI validates metadata on every PR. Weekly production releases with hotfix path.

Critical Diagrams

Data Architecture: What Lives Where

graph TB
    subgraph Legend
        direction LR
        NEW["🟢 NEW - Being Built"]
        KEEP["⚪ KEEPING - No Changes"]
        RETIRE["🔴 RETIRING - Decommissioning"]
        INT["🟠 INTEGRATION LAYER"]
    end

    subgraph SF["Salesforce Core"]
        A[Customers — 1.92M]
        B[Subscriptions — ~6M]
        C[Cases — rolling 2 years]
        D[Industries CPQ Catalog]
    end

    subgraph DC["Data Cloud"]
        F[Usage Summaries]
        G[Churn Scoring]
    end

    subgraph MULE["MuleSoft Anypoint"]
        API[API Gateway + ESB]
    end

    subgraph EXT["Snowflake"]
        I[CDR Raw Data<br/>7-year / ~7B+ records]
        J[Billing History<br/>108M transactions]
    end

    subgraph AMDOCS["Amdocs BSS/OSS"]
        L[Master Billing SOR]
        M[Master Product Catalog]
    end

    L -->|Batch daily billing sync| API
    M -->|Batch nightly catalog sync| API
    API -->|Batch daily| SF
    API -->|Batch nightly catalog| D
    I -.->|REST on-demand CDR query| API
    API -.->|REST on-demand| A
    F -->|Calculated insights| G
    G -->|Churn scores| A

    style A fill:#cce5ff,stroke:#0d6efd
    style B fill:#cce5ff,stroke:#0d6efd
    style C fill:#cce5ff,stroke:#0d6efd
    style D fill:#d4edda,stroke:#28a745
    style F fill:#d4edda,stroke:#28a745
    style G fill:#d4edda,stroke:#28a745
    style API fill:#fff3cd,stroke:#fd7e14
    style I fill:#e9ecef,stroke:#6c757d
    style J fill:#e9ecef,stroke:#6c757d
    style L fill:#e9ecef,stroke:#6c757d
    style M fill:#e9ecef,stroke:#6c757d

Examiner Focus

The single most important decision is what data lives on-platform vs. off-platform. Putting 108M billing records or billions of CDRs into Salesforce is a critical failure.

CI/CD Pipeline and Environment Strategy

graph LR
    subgraph Dev["Developer Environments"]
        D1[Platform Team — 2 SB]
        D2[Integration Team — 2 SB]
        D3[Product Team — 2 SB]
    end

    subgraph Shared["Shared Environments"]
        QA[Integration Test<br/>Partial Copy]
        UAT[UAT — Full Copy]
        STG[Staging — Full Copy]
        PROD[Production]
    end

    D1 & D2 & D3 -->|PR + CI| QA
    QA -->|test gate| UAT
    UAT -->|sign-off| STG -->|approval| PROD

Environment Count

10 total: 6 developer sandboxes (2 per team for rotation), 1 integration test, 1 UAT, 1 staging, 1 production. Minimum viable for 25 developers across 3 teams.

Identity & SSO Flow

sequenceDiagram
    participant User as Internal User<br/>(Call Center / Store)
    participant Browser as Browser
    participant Okta as Okta<br/>(Corporate IdP)
    participant SF as Salesforce

    User->>Browser: Navigate to Salesforce
    Browser->>Okta: Redirect (SP-initiated SSO)
    Okta->>Okta: Authenticate (MFA — Okta Verify push)
    Okta->>Browser: SAML 2.0 Assertion
    Browser->>SF: POST SAML to ACS URL
    SF->>SF: Match Federation ID → User record
    SF->>Browser: Session Cookie + Agent Workspace

sequenceDiagram
    participant Sub as Subscriber<br/>(1.92M residential + business)
    participant Browser as Browser / Mobile App
    participant SF as Salesforce<br/>(Experience Cloud)

    Sub->>Browser: Navigate to MyConnectAll
    Browser->>SF: Login page
    SF->>SF: Email/Password + MFA (SMS OTP)
    SF->>Browser: Session Cookie + Customer Dashboard

sequenceDiagram
    participant Tech as Field Technician<br/>(1,200)
    participant App as SFS Mobile App
    participant Okta as Okta<br/>(Corporate IdP)
    participant SF as Salesforce

    Tech->>App: Launch Field Service app
    App->>Okta: OAuth 2.0 + PKCE
    Okta->>Okta: Biometric / MFA
    Okta->>App: Access Token + Refresh Token
    App->>SF: API calls with Bearer token
    SF->>App: Work order data + offline cache

Internal users (650 call center + 85 stores + HQ staff): SAML 2.0 SP-initiated SSO with Okta as the corporate IdP, assumed here for its mature Salesforce integration and adaptive MFA. Okta Verify push for MFA. Retail store associates sign in with individual Okta credentials on shared kiosk hardware, so every action remains attributable to a named user.

Field technicians (1,200): OAuth 2.0 with PKCE flow via Okta for the SFS mobile app. Biometric authentication (fingerprint/face) as the primary factor with Okta Verify as fallback. Refresh tokens enable offline session continuity — technicians maintain access during connectivity gaps without re-authentication.

Subscribers (1.92M): Experience Cloud native login for the MyConnectAll portal rebuild. SMS OTP for MFA (telecom subscribers already have verified phone numbers). Business accounts support authorized user roles with the account holder controlling access. Self-registration via account number + last 4 of SSN + service ZIP for identity verification.

Integration Error Handling

| Integration | Pattern | Retry Strategy | Dead Letter Queue | Monitoring | Fallback |
| --- | --- | --- | --- | --- | --- |
| Amdocs Billing Sync (daily batch) | Batch flat file via MuleSoft | Full batch re-run on systemic failure; record-level retry for individual errors | Failed billing records → Billing_Sync_Error__c with account ID + error | Alert if batch not complete by 7 AM; reconciliation count mismatch > 0.1% | Previous day’s billing data displayed with “as of” timestamp; escalation to Amdocs support |
| Amdocs Catalog Sync (nightly) | Batch via MuleSoft | Full re-run on failure; delta comparison on retry | Catalog drift records → Catalog_Drift__c with product code + discrepancy detail | Nightly reconciliation report; alert if > 10 product mismatches | EPC changes frozen until sync confirmed; product managers notified |
| CDR Query (on-demand) | Sync REST via MuleSoft → Snowflake | 2 retries: 500ms, 2s backoff (3-second SLA) | N/A (stateless query) | Alert on avg latency > 2s; circuit breaker at 5s | “Usage details temporarily unavailable” message; agent offers to email the usage report when available |
| Genesys CTI (real-time) | Real-time CTI connector | 2 retries: 200ms, 1s backoff | N/A (real-time screen pop) | Alert on > 3s screen pop latency; CTI dashboard | Screen pop fails gracefully — agent manually searches account; call recording still captured |
| Nagios Outage Correlation | Event-driven via MuleSoft | 3 retries: 1s, 5s, 30s backoff | Failed outage events → Anypoint MQ DLQ | Alert on unprocessed outage events > 5 min old | Manual outage creation by network ops; suppression rules still check ticket history |
| Data Cloud Churn Scoring | Batch daily | Re-run on failure; previous day’s scores retained | Failed score calculations logged in Data Cloud | Alert if scoring pipeline not complete by 8 AM | Previous day’s churn scores used; retention team notified of stale data |
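The per-integration retry schedules above can all be expressed through one generic helper — a sketch only, since in this design the real retry policies live in MuleSoft, not in application code. The backoff list encodes each schedule, e.g. `[0.5, 2.0]` for the CDR query and `[0.2, 1.0]` for Genesys CTI.

```python
import time

def call_with_retries(fn, backoffs, sleep=time.sleep):
    """Invoke fn once, then retry after each delay in backoffs (seconds)."""
    last_err = None
    for delay in [0.0] + list(backoffs):
        if delay:
            sleep(delay)          # injectable for testing
        try:
            return fn()
        except Exception as err:  # broad catch is fine for a sketch
            last_err = err
    raise last_err
```

Total worst-case latency is the sum of delays plus per-attempt timeouts, which is why the CDR schedule stops at 2s: a third retry would blow the 3-second SLA.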

A/B Rollout Error Isolation

ConnectAll uses feature flags (Custom Metadata Types) to control A/B rollouts across 85 retail stores and 3 call centers. If an error spike is detected post-deployment via Splunk alerting (> 5x baseline error rate within 30 minutes), the feature flag is toggled off, instantly reverting affected users to the previous experience. This is faster than a full rollback and limits blast radius to the specific feature.
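The toggle-off decision reduces to a small threshold check — a sketch of the stated 5x-baseline rule; the function name and inputs are illustrative, with the actual evaluation happening in Splunk alerting.

```python
def should_disable_flag(baseline_rate: float, window_errors: int,
                        window_requests: int, multiplier: float = 5.0) -> bool:
    """True when the windowed error rate exceeds multiplier x baseline."""
    if window_requests == 0:
        return False  # no traffic in the window, nothing to judge
    observed = window_errors / window_requests
    return observed > multiplier * baseline_rate
```

With a 1% baseline, 6 errors in 100 requests (6%) trips the kill switch; 4 in 100 does not.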

Requirements Addressed

  1. Unified customer view (< 3 seconds) — Industries Party model unifies 4 legacy data structures; MuleSoft caches hot account data (Reqs 1, 2)
  2. Product catalog management — Industries CPQ with EPC for 2,500 products; product managers maintain without developers (Reqs 6, 7, 8, 9)
  3. CDR data architecture — Snowflake for raw CDRs (7-year), Data Cloud for summaries, MuleSoft API for on-demand access (Reqs 16, 17)
  4. Churn reduction — Data Cloud scoring + automated retention workflows + specialized retention team routing (Req 3)
  5. Multi-team DevOps — Trunk-based development, CODEOWNERS governance, 10 environments, feature flags (Reqs 30, 31, 32, 33)
  6. Trouble ticket automation — Outage-to-ticket correlation, escalation rules, technician dispatch (Reqs 11, 12, 13)
  7. CPNI compliance — Logged/audited CDR access, Snowflake row-security, Salesforce FLS + Event Monitoring (Req 23)
  8. Self-service portal — Experience Cloud replacing MyConnectAll; payments, outage reporting, usage (Req 25)
  9. Zero-disruption deployments — Sunday 2-6 AM windows, feature flags, 30-minute rollback capability (Req 34)

Governance & DevOps

flowchart LR
    PLT[Platform Team<br/>Dev Sandbox x2] --> INT[Integration Test<br/>Partial Copy]
    INTG[Integration Team<br/>Dev Sandbox x2] --> INT
    PROD_T[Product Team<br/>Dev Sandbox x2] --> INT
    INT --> UAT[UAT — Full Copy<br/>1.92M Subscribers]
    UAT --> STG[Staging — Full Copy<br/>Pre-Production]
    STG --> PROD[Production]

    HF[Hotfix] -.-> STG

Branching Strategy

Trunk-based development with short-lived feature branches and feature flags. This directly addresses the root cause of ConnectAll’s deployment conflicts — long-lived branches with late-stage merge pain.

  • main — single source of truth. Always deployable. Protected branch requiring 2 approvals.
  • feature/* — max 2-3 day branches. Each team works on small, mergeable increments.
  • Feature flags via Custom Metadata Types gate incomplete features in production. The 47 promotional offers/year are configured as CMDT records, not code deployments.
  • CODEOWNERS file defines team boundaries: Platform team owns core objects and sharing; Integration team owns MuleSoft flows and Apex callouts; Product team owns EPC catalog and OmniStudio.
  • Cross-team PR reviews mandatory for shared objects (Account, Case, custom platform objects).
  • Weekly production releases (Sunday 2-6 AM window). Hotfix path: direct to staging with 2 senior approvals.
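The team boundaries above might look like the following CODEOWNERS sketch. The team handles and metadata paths are illustrative, not the real repository layout; the last rule implements the cross-team review requirement for shared objects.

```
# Hypothetical CODEOWNERS — handles and paths are illustrative
force-app/main/default/objects/                 @connectall/platform-team
force-app/main/default/sharingRules/            @connectall/platform-team
force-app/main/default/classes/*Callout*.cls    @connectall/integration-team
force-app/main/default/omniScripts/             @connectall/product-team

# Shared objects require review from both owning teams
force-app/main/default/objects/Account/         @connectall/platform-team @connectall/product-team
```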

Sandbox Strategy

| Sandbox Type | Count | Purpose |
| --- | --- | --- |
| Developer | 6 | 2 per team (Platform, Integration, Product) for individual development with rotation. |
| Partial Copy | 1 | Integration test environment with Amdocs test instance, Genesys test, and Snowflake dev warehouse connections. |
| Full Copy | 2 | 1 for UAT with the full 1.92M subscriber dataset for performance validation; 1 for staging (pre-production validation). |

Testing Strategy

High-volume performance testing is critical given 1.92M subscribers, 2.8M daily CDRs, and 85K concurrent portal sessions.

  • Apex unit tests: >80% coverage. Amdocs callout mocks with realistic SOAP/XML and flat file payloads. Industries CPQ configuration tests validating bundle pricing, step-up schedules, and geographic eligibility.
  • Integration testing: End-to-end Amdocs billing sync with record count reconciliation. CDR query latency validation (< 3 seconds for single customer 30-day lookup). Genesys CTI screen pop latency (< 3 seconds). Catalog sync drift detection.
  • Performance testing: Load test simulating 85K concurrent portal sessions. Agent workspace page load under 3 seconds with full subscriber data. Industries CPQ order processing throughput for peak promotional periods.
  • A/B rollout testing: Feature flag toggle validation — confirm feature activates/deactivates cleanly per store/call center group. Error rate monitoring integration with Splunk.
  • UAT: 3-week cycle with call center supervisors, retail store managers, product managers, and network ops. Each team validates their domain workflows. Business accounts tested with multi-service bundles and authorized user access.
  • Regression: Automated test suite covering order flow (new service, upgrade, disconnect, transfer), trouble ticket lifecycle, and portal bill payment. Run on every merge to main.
  • CPNI compliance testing: Verify CDR access logging captures every query. Confirm FLS prevents unauthorized CDR visibility. Audit trail report generation for FCC/PUC submissions.

CoE / Governance

Chief Transformation Officer sponsors the program with a dedicated Platform Owner in Charlotte.

  • Post-go-live ownership: Transition from 25-person SI team to internal CoE of 8 (3 developers, 2 admins, 2 integration specialists, 1 architect). SI retains 4 developers for 6 months post-go-live knowledge transfer.
  • Change management: CODEOWNERS enforces team boundaries. All metadata changes require PR with automated CI validation. Cross-team changes (touching shared objects) require architecture review.
  • Conflict detection: Automated pre-merge conflict scanning in CI pipeline. Metadata-level diff analysis flags overlapping changes before they reach integration test. This directly addresses the billing outage root cause.
  • Release cadence: Weekly production releases (Sunday 2-6 AM). 30-minute rollback capability via Copado/Gearset rollback snapshots. Feature flags enable instant disable without rollback. Hotfix path available for critical issues.
  • Amdocs coordination: Joint release calendar with Amdocs BSS team. Catalog changes follow Amdocs-first workflow: product managers configure in Amdocs, nightly sync propagates to EPC, validation confirms alignment before activation.

Risk Assessment

| Risk | Likelihood | Impact | Mitigation |
| --- | --- | --- | --- |
| Amdocs catalog sync drift — EPC diverges over time | High | High | Nightly MuleSoft reconciliation with discrepancy alerting; EPC changes require Amdocs-first approval |
| CDR query latency exceeds 3-second SLA | Medium | High | Snowflake query caching; pre-compute last 90 days per subscriber; MuleSoft response caching; graceful timeout fallback |
| Data migration quality — 4 legacy models with acquisition inconsistencies | High | Critical | 3-month Informatica cleansing phase; Amdocs as golden source; automated validation scripts comparing record counts |
| Multi-team metadata conflicts | Medium | Medium | CODEOWNERS enforcement; automated PR conflict detection; weekly cross-team sync; shared component registry |

Domain Scoring Notes

D3 Data (HEAVY): Clear on-platform vs. off-platform strategy with volume numbers. Data ownership: Amdocs = billing master, Salesforce = CRM master, Snowflake = analytical/archival. Migration strategy for 4 legacy models and 3 acquisition normalizations. LDV handling: skinny tables, selective indexes, archival. CPNI compliance embedded in data access model.

D4 Solution Architecture (HEAVY): Industries CPQ is the correct answer — custom build is a critical failure. Product modeling depth: EPC specs, offerings, bundles, pricing terms, eligibility, step-up schedules. Churn signals flow from Data Cloud scoring to agent routing. Self-service portal: Experience Cloud for MyConnectAll rebuild.

D6 Dev Lifecycle (HEAVY): Trunk-based with clear rationale tied to 3-team structure. Environment strategy with enough sandboxes (judges will count). CODEOWNERS-style governance. Zero-disruption deployment for retail/call centers. 30-minute rollback target. Feature-flag-gated deployments during low-traffic windows (Sunday 2-6 AM) with Copado/Gearset rollback snapshots; feature flags hide incomplete work and enable instant rollback without redeployment.

Unified Customer Model

Industries Party model: Individual object for residential contacts, Account for business entities. Service Accounts linked via Account hierarchy, one per service line. Subscription Management objects track active/historical subscriptions per service address. Migration uses a golden record strategy with Amdocs billing data as authoritative for address and service details. Acquisition data normalization handled in pre-migration ETL (Informatica) that standardizes account numbers, product codes, and addresses before loading.
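The golden-record precedence could be sketched as a field-level merge in the pre-migration ETL, with Amdocs winning for the fields it masters and the legacy CRM value kept otherwise. The field names here are illustrative, not the real mapping.

```python
# Fields where Amdocs billing data is authoritative (illustrative list)
AMDOCS_AUTHORITATIVE = {"service_address", "billing_account_no", "service_plan"}

def merge_golden_record(crm: dict, amdocs: dict) -> dict:
    """Build the golden record: Amdocs overrides CRM on the fields it masters."""
    merged = dict(crm)
    for f in AMDOCS_AUTHORITATIVE:
        if amdocs.get(f) is not None:  # keep CRM value if Amdocs has no data
            merged[f] = amdocs[f]
    return merged
```

Running this after Informatica standardizes account numbers and addresses means conflicts are resolved deterministically rather than record by record.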

What Would Fail

Critical Failures

  1. Storing CDRs in Salesforce — 2.8M/day is explicitly designed to test platform limits knowledge. CDRs stay external with on-demand access.

  2. Building a custom product catalog — ignoring Industries CPQ for 2,500 products with bundles, step-up pricing, and geographic eligibility. Judges see this as not knowing the platform.

  3. No environment strategy — mentioning “sandboxes” without specifying count, type, and how 25 developers across 3 teams share them.

  4. GitFlow for 3 distributed teams — long-lived branches with big-bang merges repeat ConnectAll’s exact current problem.

  5. Ignoring CPNI — treating CDR access as standard FLS. CPNI is a federal regulatory requirement with audit obligations requiring explicit access logging and compliance controls.