Domain Grilling: D5 Integration

AI-Generated Content — Use for Reference Only

This content is AI-generated and has only been validated by AI review processes. It has NOT been reviewed or validated by certified Salesforce CTAs or human subject matter experts. Do not rely on this content as authoritative or completely accurate. Use it solely as a reference point for your own study and preparation. Always verify architectural recommendations against official Salesforce documentation.

Integration is the highest-failure domain on the CTA exam. Judges expect you to name the exact pattern, justify it against alternatives, address error handling, and explain the timing. Vague answers like “we will use an API” or “we will integrate with middleware” will lose you points immediately.

Type 1: Invalid — “Your Solution Won’t Work”

Q1.1: Fire-and-Forget without order confirmation

Judge: “You proposed Fire and Forget for ERP order creation — don’t you need immediate confirmation of the Order ID before the user moves to the next screen?”

What they’re testing: Understanding of when fire-and-forget is inappropriate and request-reply is required.

Model answer: “You are correct — the user workflow requires the ERP Order ID before proceeding. I would revise this to a Request-Reply pattern using a synchronous REST callout through MuleSoft, with a default 10-second timeout configurable up to 120 seconds. The trade-off is tighter coupling to the ERP, which I mitigate with a circuit breaker in MuleSoft and a graceful degradation path that queues the order if the ERP is unavailable.”
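The graceful-degradation path in this answer can be sketched in a few lines. This is a minimal, illustrative sketch only — `create_order_in_erp`, the queue, and the timeout value are hypothetical stand-ins, not actual MuleSoft or Salesforce APIs:

```python
import queue

# Hypothetical in-memory queue; a real design would use a durable store.
RETRY_QUEUE = queue.Queue()

def create_order_in_erp(order: dict) -> str:
    """Placeholder for the synchronous REST callout to the ERP.
    Raises TimeoutError when the ERP is unavailable or slow."""
    raise TimeoutError("ERP unavailable")

def submit_order(order: dict, timeout_s: int = 10) -> dict:
    """Request-Reply with graceful degradation: return the ERP Order ID
    on success; otherwise queue the order for replay and tell the user."""
    try:
        order_id = create_order_in_erp(order)
        return {"status": "confirmed", "erp_order_id": order_id}
    except TimeoutError:
        RETRY_QUEUE.put(order)  # replayed once the ERP recovers
        return {"status": "queued", "erp_order_id": None}
```

The key design point is that the failure path returns a meaningful status to the caller instead of propagating the timeout, so the UI can show "order queued" rather than an error.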


Q1.2: Platform Events retention gap

Judge: “You said Platform Events for your inventory sync. What’s the replay window? What happens if the subscriber is down for 72 hours?”

What they’re testing: Knowledge of Platform Event retention limits and recovery strategy.

Model answer: “High-volume Platform Events have a 72-hour retention window. If the subscriber is down for exactly 72 hours we are at the boundary — anything beyond that and events are lost permanently. My architecture includes a fallback batch reconciliation job that runs every 24 hours using Bulk API 2.0 to detect and correct any drift. I would also implement subscriber lag monitoring that alerts the operations team when lag exceeds 12 hours, well before hitting the retention limit.”


Q1.3: Bulk API governor limit implications

Judge: “What are the governor limit implications of your batch integration processing 500K records? You have not mentioned serial versus parallel mode.”

What they’re testing: Deep knowledge of Bulk API 2.0 processing modes and their implications.

Model answer: “For 500K records, I would use Bulk API 2.0, which processes batches automatically — note that Bulk API 2.0 does not expose the serial/parallel concurrency mode option that Bulk API 1.0 had. If these records have parent-child relationships — for example, Accounts and their child Contacts in the same job — the automatic parallel processing risks UNABLE_TO_LOCK_ROW errors due to lock contention. In that case, I would split into separate sequential jobs: Accounts first, then Contacts referencing the parent Account External IDs via upsert. If serial processing is specifically required to avoid lock contention, I would fall back to Bulk API 1.0 with serial concurrency mode. The daily Bulk API batch limit is 15,000 batches per 24 hours (shared across Bulk API 1.0 and 2.0), and max file size per upload is 150 MB, so for 500K records I would partition into multiple uploads of roughly 100K records each.”
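The partitioning arithmetic in that answer is easy to get wrong under pressure, so it helps to have it worked out. A minimal sketch, assuming an average record size of 300 bytes (the limits are the published Bulk API figures, but the helper and its names are illustrative):

```python
import math

DAILY_BATCH_LIMIT = 15_000          # shared across Bulk API 1.0 and 2.0, per 24 hours
MAX_UPLOAD_BYTES = 150 * 1024**2    # 150 MB per Bulk API 2.0 ingest upload

def plan_uploads(total_records: int, avg_record_bytes: int,
                 records_per_upload: int = 100_000) -> int:
    """Return the number of CSV uploads needed, verifying that each
    chunk stays under the 150 MB file-size cap."""
    upload_bytes = records_per_upload * avg_record_bytes
    assert upload_bytes <= MAX_UPLOAD_BYTES, "shrink records_per_upload"
    return math.ceil(total_records / records_per_upload)
```

For the 500K-record scenario with ~300-byte records, five uploads of 100K records each comfortably clear the file-size cap.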


Q1.4: External Objects in a trigger-dependent workflow

Judge: “You proposed Salesforce Connect with External Objects for your product catalog lookup. But your solution also has a trigger on the product object that calculates pricing. How does that work?”

What they’re testing: Knowledge of External Object limitations.

Model answer: “That is a conflict in my design. External Objects cannot participate in Apex triggers, process builders, most Flow operations, or approval processes. If the pricing calculation trigger is required, Data Virtualization is not the right pattern here. I would revise this to a nightly Batch Data Synchronization using Bulk API 2.0 to replicate the product catalog into a custom Product__c object in Salesforce, enabling triggers and automations to run. The trade-off is data staleness between syncs, which I would mitigate by scheduling the sync to run multiple times per day during business hours if near-real-time pricing is needed.”


Type 2: Missed — “You Haven’t Addressed…”

Q2.1: No error handling on any integration

Judge: “Every integration you have shown has a happy path. What happens when the ERP is down for 2 hours on a Friday afternoon?”

What they’re testing: Whether you have designed for failure, not just success.

Model answer: “I should have addressed this explicitly. My error handling strategy has four layers. First, retry with exponential backoff — 3 retries at 1s, 2s, 4s intervals with jitter to prevent thundering herd. Second, a circuit breaker implemented via Platform Cache that opens after 5 consecutive failures, preventing further calls for 60 seconds before attempting a half-open test call. Third, a dead letter queue using a custom Integration_Error__c object that captures the source system, target system, payload, error message, retry count, and correlation ID. Fourth, automated alerting via PagerDuty for critical failures with auto-created Jira tickets. The operations team reviews the DLQ dashboard and triggers bulk resubmit once the ERP recovers.”
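The first two layers of that strategy — exponential backoff with jitter and a circuit breaker — can be sketched concretely. This is a minimal in-memory illustration of the pattern described above, not the Platform Cache implementation the answer calls for; class and function names are my own:

```python
import random
import time

class CircuitBreaker:
    """Opens after N consecutive failures, stays open for a cooldown,
    then allows a single half-open test call. (A real Salesforce design
    would keep this state in Platform Cache, shared across transactions.)"""
    def __init__(self, threshold: int = 5, cooldown_s: float = 60.0):
        self.threshold = threshold
        self.cooldown_s = cooldown_s
        self.failures = 0
        self.opened_at = None

    def allow(self) -> bool:
        if self.opened_at is None:
            return True
        # Half-open: permit a test call once the cooldown has elapsed.
        return time.monotonic() - self.opened_at >= self.cooldown_s

    def record(self, success: bool) -> None:
        if success:
            self.failures, self.opened_at = 0, None
        else:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()

def call_with_retry(fn, breaker, retries=3, base_delay_s=1.0):
    """3 retries at ~1s/2s/4s with jitter to avoid a thundering herd."""
    for attempt in range(retries + 1):
        if not breaker.allow():
            raise RuntimeError("circuit open")
        try:
            result = fn()
            breaker.record(True)
            return result
        except Exception:
            breaker.record(False)
            if attempt == retries:
                raise
            time.sleep(base_delay_s * 2**attempt * random.uniform(0.5, 1.5))
```

Jitter (the `random.uniform` factor) is what prevents every retrying client from hammering the ERP at the same instant when it comes back up.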


Q2.2: Missing idempotency design

Judge: “You have Platform Events publishing to three subscribers. What prevents duplicate processing when a subscriber reconnects and replays events?”

What they’re testing: Understanding of at-least-once delivery and idempotency requirements.

Model answer: “Platform Events deliver at-least-once, meaning duplicates are expected on replay. Each subscriber must be idempotent. For the Salesforce subscriber processing order updates, I use upsert with an External ID field — sending the same order twice produces the same result. For the external warehouse subscriber, each event includes a correlation ID, and the warehouse system checks a processed-events table before processing. For the analytics subscriber, I use a payload hash deduplication check. This idempotency design is mandatory for any at-least-once delivery system.”
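The dedup logic behind that answer is simple enough to sketch. A minimal illustration combining the correlation-ID check and the payload-hash fallback mentioned above — the in-memory `seen` set stands in for a persisted processed-events table, and all names are hypothetical:

```python
import hashlib
import json

class IdempotentSubscriber:
    """Skip events whose correlation ID (or, when no ID exists, whose
    payload hash) has already been processed. Replaying the same event
    twice therefore produces the same end state."""
    def __init__(self):
        self.seen = set()
        self.processed = []

    def handle(self, event: dict) -> bool:
        key = event.get("correlation_id") or hashlib.sha256(
            json.dumps(event, sort_keys=True).encode()).hexdigest()
        if key in self.seen:
            return False              # duplicate replay: safely ignored
        self.seen.add(key)
        self.processed.append(event)  # real work would happen here
        return True
```

The `sort_keys=True` in the hash fallback matters: the same logical payload must always produce the same hash regardless of key ordering.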


Q2.3: No API limit budgeting

Judge: “You have 6 integrations hitting the Salesforce API. Have you calculated your daily API consumption against the org limit?”

What they’re testing: Whether you budget API calls as an architectural constraint.

Model answer: “I should have included an API budget table. With an Unlimited Edition org, the allocation formula is 100,000 base requests per 24 hours plus 5,000 per Salesforce user license. With 200 users, that gives 100,000 + (200 x 5,000) = 1,100,000 daily requests. My integrations consume approximately: ERP order sync at 5,000 calls per day, billing invoice sync at 2,000, marketing contact sync at 10,000, portal user requests at 50,000, reporting data extract at 1,000 via Bulk API, and real-time address validation at 3,000. That totals about 71,000 against a 1,100,000 limit — roughly 6% utilization with significant headroom. I would also implement an API consumption monitoring dashboard that alerts at 80% utilization.”
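The budget arithmetic from that answer can be laid out explicitly. The base and per-license figures are the standard published Unlimited Edition values, but verify them against current Salesforce documentation; the per-integration volumes are the scenario's own estimates:

```python
BASE_REQUESTS = 100_000     # Unlimited Edition base allocation per 24 hours
PER_USER_LICENSE = 5_000    # additional requests per Salesforce user license

def daily_limit(users: int) -> int:
    return BASE_REQUESTS + users * PER_USER_LICENSE

integration_calls = {
    "erp_order_sync": 5_000,
    "billing_invoice_sync": 2_000,
    "marketing_contact_sync": 10_000,
    "portal_user_requests": 50_000,
    "reporting_extract": 1_000,      # via Bulk API
    "address_validation": 3_000,
}

limit = daily_limit(200)                    # 1,100,000
consumed = sum(integration_calls.values())  # 71,000
utilization = consumed / limit              # ~6.5%
```

Presenting the budget as a table like this, with an explicit utilization percentage, is exactly what the judge is probing for.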


Q2.4: Missing authentication strategy

Judge: “How does your ERP authenticate to Salesforce? You showed the data flow but not the auth flow.”

What they’re testing: Whether you have designed the security layer of your integrations.

Model answer: “For the ERP-to-Salesforce system-to-system integration, I would use OAuth 2.0 Client Credentials flow — this is the modern replacement for the deprecated Username-Password flow and does not require a user context. The Connected App in Salesforce defines the scope restrictions and rate limits. On the Salesforce-initiated side, all outbound callouts use Named Credentials, which securely manage token storage, refresh, and credential rotation without hardcoding secrets. The Connected App is governed via the ARB to ensure proper scope restrictions and lifecycle management.”


Type 3: Suboptimal — “Have You Considered…?”

Q3.1: Middleware overkill for a single integration

Judge: “You used MuleSoft for a single integration between Salesforce and the address validation service. Can you justify that cost over a direct API call?”

What they’re testing: Cost-consciousness and knowing when middleware is NOT justified.

Model answer: “You are right to challenge that. For a single, simple address validation integration with no data transformation and no routing to multiple systems, MuleSoft is overkill. I would revise this to a direct REST callout from Salesforce using Named Credentials. The middleware justification checklist calls for middleware when there are 4+ connections, complex transformations, orchestration needs, or centralized monitoring requirements. This single integration meets none of those criteria. I would reserve MuleSoft for the complex multi-system integrations in the scenario — the ERP, billing, and data warehouse connections — where the API-led connectivity layers and centralized error handling provide genuine value.”


Q3.2: REST where SOAP is more appropriate

Judge: “Your integration with the legacy SAP system uses REST. Have you verified that SAP exposes a REST endpoint for this service?”

What they’re testing: Protocol awareness and not defaulting to REST for everything.

Model answer: “That is a fair challenge. Many legacy SAP integrations expose SOAP/WSDL endpoints rather than REST, particularly older SAP ECC installations. If SAP only exposes SOAP, I would use MuleSoft’s SAP connector at the System API layer to handle the SOAP-to-REST translation, exposing a clean REST API from the Process API layer for Salesforce to consume. This way, Salesforce always talks REST, and the protocol complexity is abstracted in middleware. If the customer has SAP S/4HANA with OData services enabled, REST is viable directly. I would validate the available SAP endpoints in the discovery phase.”


Q3.3: CDC where Platform Events are better

Judge: “You are using Change Data Capture for your order notification integration. But the business event is ‘order submitted,’ which does not map to a single record change. Have you considered Platform Events?”

What they’re testing: Understanding the distinction between CDC (data change tracking) and Platform Events (custom business events).

Model answer: “You are correct. CDC automatically publishes events when records are created, updated, or deleted and includes field-level change tracking. But ‘order submitted’ is a business event that may span multiple record changes — updating the Order status, creating Order Line Items, and triggering fulfillment. A Platform Event like Order_Submitted__e with custom fields for OrderId, Amount, and a CorrelationId is the correct choice here because I control the schema and the semantics. CDC is the right choice when I need to track what fields changed on a specific object. For business events with custom semantics, Platform Events are appropriate.”


Q3.4: Synchronous where async would improve UX

Judge: “Your inventory allocation call blocks the user for up to 10 seconds while waiting for the warehouse system. Have you considered an asynchronous pattern to improve the user experience?”

What they’re testing: Pattern selection based on UX requirements, not just technical feasibility.

Model answer: “I originally chose synchronous because the user needs to see available inventory. However, if the warehouse system response time averages 6-8 seconds, a near-real-time pattern would provide better UX. I would revise to a Fire-and-Forget with callback: publish a Platform Event to request inventory allocation, return immediately to the user with a ‘processing’ status, and use a Pub/Sub API callback to update the UI via an LWC component with Emp API (lightning/empApi) when the allocation completes. The trade-off is eventual consistency — the user sees a brief delay rather than a blocked screen. I would add a fallback timeout of 30 seconds that alerts the user if the allocation has not completed.”


Type 4: Rationale Missing — “WHY Did You Choose…?”

Q4.1: OAuth flow justification

Judge: 🎨 “Draw the OAuth 2.0 JWT Bearer flow for your system-to-system integration and explain why you chose it over Client Credentials.”

What they’re testing: Deep understanding of OAuth flows and the ability to visualize them under pressure.

Model answer: “The JWT Bearer flow works as follows: The external system creates a JWT assertion signed with a private key, containing the issuer (client ID), subject (the integration user), audience (Salesforce token endpoint), and expiration. It POSTs this to Salesforce’s token endpoint. Salesforce validates the signature using the pre-registered X.509 certificate on the Connected App, verifies the claims, and returns an access token. I chose JWT Bearer over Client Credentials for two reasons. First, JWT Bearer allows the external system to assert any authorized user’s identity via the ‘subject’ claim, providing flexibility to impersonate different integration users for different contexts. Client Credentials flow always runs as a single fixed ‘Run As’ user configured on the Connected App — both flows operate in a user context with sharing and FLS enforced, but JWT Bearer offers more control over which user context is used. Second, JWT Bearer uses certificate-based authentication (private key signing), providing stronger security than Client Credentials’ client_secret approach for this high-trust server-to-server scenario.”
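The assertion structure described in that answer can be made concrete. The sketch below builds only the unsigned portion of the JWT so it stays self-contained — a real assertion must be RS256-signed with the private key matching the Connected App's certificate (for example via the `PyJWT` or `cryptography` libraries), and the resulting token is then POSTed to the Salesforce token endpoint with `grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer`. The function name and parameter values are illustrative:

```python
import base64
import json
import time

def b64url(data: bytes) -> str:
    """Base64url encoding without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def jwt_assertion_unsigned(client_id: str, username: str, expiry_s: int = 180) -> str:
    """Build the header.claims portion of a JWT Bearer assertion.
    A real flow appends '.' + b64url(RS256 signature over this string)."""
    header = {"alg": "RS256"}
    claims = {
        "iss": client_id,                       # Connected App consumer key
        "sub": username,                        # integration user to assert
        "aud": "https://login.salesforce.com",  # token endpoint audience
        "exp": int(time.time()) + expiry_s,     # short-lived by design
    }
    return (b64url(json.dumps(header).encode())
            + "." + b64url(json.dumps(claims).encode()))
```

Note how the `sub` claim is what gives JWT Bearer its flexibility over Client Credentials: the external system can assert different integration users per context, while Client Credentials is pinned to one "Run As" user.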


Q4.2: ETL vs ESB distinction

Judge: “Describe two scenarios where you would choose ETL over ESB and explain why.”

What they’re testing: Understanding of the fundamental difference between ETL (batch data movement) and ESB (real-time message routing and orchestration).

Model answer: “First scenario: nightly data warehouse synchronization. The requirement is to extract 2 million Account and Opportunity records from Salesforce, transform them into a star schema format, and load into the data warehouse for BI reporting. This is batch, scheduled, volume-heavy, and transformation-intensive — a textbook ETL use case using Informatica Cloud or MuleSoft Batch Module with Bulk API 2.0. An ESB would be wrong here because there is no real-time message routing or event-driven orchestration needed.

Second scenario: initial data migration from a legacy CRM. The requirement is a one-time migration of 5 million records with complex transformation rules, deduplication, and data quality checks. ETL tools like Informatica or Talend excel at extract-transform-load with data quality rules. An ESB is designed for ongoing real-time message mediation, not one-time bulk data movement.”


Q4.3: Middleware layer justification

Judge: “Walk me through why you placed MuleSoft between Salesforce and the ERP rather than using a direct Salesforce-to-ERP connection.”

What they’re testing: Architectural justification for middleware, not just default usage.

Model answer: “Four reasons justify middleware here. First, transformation complexity: the ERP uses a proprietary order format that requires field mapping, data enrichment from a reference database, and format conversion — this logic belongs in a Process API, not in Salesforce Apex. Second, fan-out routing: the order event needs to reach the ERP, the warehouse system, and the billing platform — middleware routes one event to three destinations. Third, centralized error handling: the dead letter queue, circuit breaker, and retry logic live in MuleSoft with a unified operations dashboard rather than being reimplemented in each system. Fourth, API governance: MuleSoft API Manager enforces rate limiting, SLA tiers, and versioning policies across all API consumers. The API-led connectivity layers give us reusability — the System API for SAP can be reused by any future Process API without building a new integration.”


Q4.4: Named Credentials over hardcoded tokens

Judge: “Why Named Credentials instead of storing the OAuth token in a custom setting?”

What they’re testing: Security best practice awareness for integration authentication.

Model answer: “Named Credentials are mandatory for any callout authentication in a CTA scenario. They provide three capabilities that custom settings cannot: automatic token refresh without custom code, encrypted credential storage managed by the platform, and per-callout authentication injection that eliminates the risk of developer error. Storing tokens in custom settings means writing custom Apex for token refresh, managing encryption manually, and risking token exposure if a developer queries the custom setting in a debug log. Named Credentials also support credential rotation without code deployment and provide an audit trail of authentication events. There is no acceptable reason to hardcode credentials or store tokens in custom settings.”


Type 5: Cascading — “If You Change X, What Happens to Y?”

Q5.1: Sync-to-async cascading impact

Judge: “You just changed your order integration from synchronous to asynchronous. Does your near-real-time dashboard still work?”

What they’re testing: Understanding how changing an integration pattern cascades to other design elements.

Model answer: “Changing to asynchronous means the order data in Salesforce will lag behind the ERP by the processing time of the async queue — potentially seconds to minutes. My real-time dashboard currently queries Salesforce for order status. With async processing, the dashboard will show stale data during the propagation window. I would revise the dashboard to subscribe to Platform Events via an LWC using the Emp API (lightning/empApi), so the dashboard updates in near-real-time when the order confirmation event arrives from the callback. I would also add a ‘Last Synced’ timestamp on the dashboard so users know the data freshness. Additionally, my reporting strategy needs revision — any reports requiring real-time order data should query the ERP directly via Data Virtualization or accept a batch sync window.”


Q5.2: Data model change breaks integration payload

Judge: “You just split your single Order object into Order Header and Order Line Items. What happens to your integration with the ERP that expects a flat order payload?”

What they’re testing: Ability to trace data model changes through integration contracts.

Model answer: “The data model change breaks the integration contract. My MuleSoft Process API currently maps a flat Salesforce Order to the ERP order format. With the split into Order Header and Order Line Items, I need to update the Process API to perform a composite query — fetching the Order Header and its related Line Items, then flattening them into the ERP’s expected payload format. The System API for Salesforce adds a new endpoint that returns the denormalized order. On the Salesforce side, my CDC configuration needs to be updated to track changes on both objects, not just one. The Platform Event schema for Order_Submitted__e remains unchanged since it carries the Order ID — the middleware handles the data assembly. My test scripts need new test cases for the parent-child query and the transformation logic.”


Q5.3: Adding a new subscriber to existing event stream

Judge: “The business just added a requirement for a customer notification system that needs to subscribe to the same order events. What is the impact on your existing integration?”

What they’re testing: Understanding of event-driven architecture scalability and shared delivery allocations.

Model answer: “Adding a new subscriber to an existing Platform Event channel is architecturally straightforward — that is the strength of pub/sub. The new notification subscriber connects to the Order_Submitted__e channel via Pub/Sub API with its own managed subscription and independent cursor. However, the impact I need to assess is on the shared delivery allocation. Platform Events and CDC share the same daily delivery allocation — default 50,000 event deliveries per 24-hour period. Adding a new subscriber increases the delivery count. I would verify the current utilization and request a higher allocation if needed. I also need to ensure the new subscriber is idempotent, has its own error handling and DLQ, and does not create a bottleneck that could affect the other subscribers. The circuit breaker and retry logic for the notification service must be independent of the existing order processing subscribers.”
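The allocation impact in that answer comes down to one multiplication that candidates often miss: deliveries scale with subscriber count, not just event volume. A sketch with illustrative numbers (the 50,000 default is the published shared allocation for Platform Events and CDC on many editions, but confirm against current documentation):

```python
DEFAULT_DELIVERY_ALLOCATION = 50_000  # shared Platform Event + CDC deliveries per 24h

def projected_deliveries(events_per_day: int, subscriber_count: int) -> int:
    """Each subscriber receives its own copy of every event, so adding
    a subscriber multiplies deliveries even though publish volume is flat."""
    return events_per_day * subscriber_count

# Illustrative: adding a 4th subscriber to a 10,000-event/day channel.
before = projected_deliveries(10_000, 3)         # 30,000 deliveries/day
after = projected_deliveries(10_000, 4)          # 40,000 deliveries/day
headroom = DEFAULT_DELIVERY_ALLOCATION - after   # 10,000 remaining
```

At 80% utilization after the change, this is exactly the point at which the answer's recommendation to request a higher allocation becomes concrete.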


Q5.4: Middleware failure cascading to Salesforce

Judge: “MuleSoft goes down completely for 30 minutes. Walk me through the impact on every integration in your landscape.”

What they’re testing: Understanding of single points of failure and resilience design.

Model answer: “The impact depends on which integrations route through MuleSoft versus direct connections. My hybrid design has direct connections for address validation and portal CDC — those are unaffected. For MuleSoft-routed integrations: the ERP order sync uses Fire-and-Forget via Platform Events — events accumulate on the Salesforce event bus during the outage and are delivered when MuleSoft recovers, within the 72-hour retention window. The billing invoice sync uses Remote Call-In through MuleSoft — incoming invoices from billing will fail and need billing-side retry logic or queuing. The data warehouse batch sync only runs nightly — if the outage does not overlap with the batch window, there is no impact. For the real-time integrations, I would implement MuleSoft CloudHub 2.0 with multiple workers across availability zones to minimize single-point-of-failure risk. I would also add a health check endpoint that Salesforce calls before routing critical transactions, falling back to a direct callout for the most critical paths.”

This is a personal study site for Salesforce CTA exam preparation. Built with AI assistance. Not affiliated with Salesforce.