
Integration Best Practices

This page compiles integration best practices from official Salesforce documentation, CTA coaches, and real-world enterprise experience. Organized into four categories — design, implementation, testing, and operations — with an anti-patterns section covering what NOT to do.


Design Best Practices

1. Start with the Business Requirement, Not the Technology

Every integration decision must trace back to a business need. The CTA board will challenge you if you cannot articulate WHY a particular pattern was chosen.

| Good Reasoning | Bad Reasoning |
| --- | --- |
| "Real-time sync because the call center needs current customer data during calls" | "Real-time sync because it's more modern" |
| "Batch because the data warehouse only needs nightly refreshes" | "Batch because it's easier" |
| "Middleware because 8 systems need shared transformations" | "Middleware because it's enterprise best practice" |

2. Design for Failure First

Assume every external system will go down, every network will have latency spikes, and every data payload will occasionally be malformed.

  • Define error handling strategy BEFORE implementation
  • Every integration touchpoint needs: retry strategy, timeout, fallback, monitoring
  • Document the degraded-mode behavior: What does the user see when the integration fails?
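The touchpoint checklist above (retry, timeout, fallback) can be sketched in a few lines of Python. This is a minimal illustration, not a Salesforce API: the function names and the exception types chosen are assumptions, and a real implementation would also log and emit metrics.

```python
import time

def call_with_resilience(operation, retries=3, base_delay=0.1, fallback=None):
    """Call an external system with retry, backoff, and a degraded-mode fallback."""
    for attempt in range(retries):
        try:
            return operation()
        except (TimeoutError, ConnectionError):
            if attempt < retries - 1:
                time.sleep(base_delay * (2 ** attempt))  # exponential backoff between retries
    return fallback  # degraded mode: the documented answer to "what does the user see?"

# A callout that fails twice before succeeding, to exercise the retry path
attempts = []
def flaky_callout():
    attempts.append(1)
    if len(attempts) < 3:
        raise ConnectionError("endpoint unreachable")
    return "live customer data"

def always_down():
    raise TimeoutError("no response")

recovered = call_with_resilience(flaky_callout, base_delay=0)
degraded = call_with_resilience(always_down, base_delay=0, fallback="cached snapshot")
```

Note that the fallback is decided at design time, per touchpoint: here a cached snapshot stands in for the live call, which is exactly the degraded-mode behavior the checklist asks you to document.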

3. Minimize Coupling

Loosely coupled integrations are more resilient and maintainable.

| Tight Coupling | Loose Coupling |
| --- | --- |
| Synchronous calls blocking transactions | Asynchronous, event-driven |
| Direct system-to-system connections | Middleware or event bus mediating |
| Shared database access | API-based data exchange |
| Hardcoded endpoints | Named Credentials + Custom Metadata |
| Schema-aware consumers | Contract-first with versioning |

4. Choose the Right Granularity

APIs that are too fine-grained create chatty integrations. APIs that are too coarse-grained transfer unnecessary data.

The Goldilocks zone

A single API call should represent a meaningful business operation. “Create an Order with Line Items” is better than separate calls for the Order header and each Line Item (too chatty) or a single call that also creates the Account, Contact, and Payment Method (too coarse).
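The Goldilocks rule can be made concrete with a payload sketch. Everything here is hypothetical (the field names, the `validate_order_request` helper); the point is only that one call carries one business operation, and anything outside that operation is rejected as too coarse.

```python
# One call = one meaningful business operation: an Order with its Line Items.
order_request = {
    "order": {"customerId": "C-1001", "currency": "USD"},
    "lineItems": [
        {"sku": "SKU-1", "qty": 2, "unitPrice": 25.0},
        {"sku": "SKU-2", "qty": 1, "unitPrice": 99.0},
    ],
}

def validate_order_request(payload):
    """Reject payloads that smuggle in unrelated operations (too coarse)."""
    allowed = {"order", "lineItems"}
    extra = set(payload) - allowed
    if extra:
        raise ValueError(f"unrelated operations in order call: {sorted(extra)}")
    if not payload.get("lineItems"):
        raise ValueError("order must include at least one line item")  # too chatty otherwise
    return True

valid = validate_order_request(order_request)
```

A payload that also tried to create the Account or Payment Method would fail this validation, pushing those operations into their own calls.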

5. Plan for Data Volume Growth

Today’s 1,000 records per day becomes 100,000 in two years. Design integrations that can scale.

| Volume Today | Design For | Technology |
| --- | --- | --- |
| < 1,000/day | 10,000/day | REST API (can migrate to Bulk) |
| 1,000–10,000/day | 100,000/day | Bulk API 2.0 from the start |
| 10,000+/day | 1,000,000/day | Bulk API 2.0 + middleware + partitioning |

6. Use Named Credentials

Named Credentials manage authentication and endpoint configuration declaratively. They eliminate hardcoded credentials, handle token refresh, and support admin-managed configuration.

  • Store all external endpoints in Named Credentials
  • Use Principal types: Named Principal (shared), Per User (individual context)
  • External Credentials (new model) support multiple authentication protocols
  • Never store credentials in Custom Settings, custom metadata, or Apex code

7. Apply the Principle of Least Privilege

Every integration should have the minimum permissions needed.

  • Create dedicated integration users (not admin accounts)
  • Use Permission Sets specifically for integration needs
  • Scope OAuth tokens to required object/field access
  • Restrict IP ranges for Connected Apps
  • Enable only the APIs needed (disable SOAP if using REST only)

Implementation Best Practices

8. Use External IDs for Data Matching

External ID fields enable upsert operations and are the foundation of reliable data synchronization.

| Without External ID | With External ID |
| --- | --- |
| Query to find record, then decide insert vs update | Single upsert call handles both |
| Race conditions with concurrent integrations | Idempotent by design |
| Extra API calls for lookup | Fewer API calls, cleaner code |

9. Implement Idempotency Everywhere

Any operation that might be retried must produce the same result when executed multiple times.

  • Use External ID + upsert for data sync
  • Include idempotency keys in event payloads
  • Design database operations as idempotent (upsert, not insert)
  • Check for existing records before creating
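For event payloads, the idempotency-key check from the list above looks like this in outline. The `idempotencyKey` field name is an assumption, and the in-memory set stands in for a durable store (a database table or cache) that survives restarts.

```python
processed = set()  # in production this would be a durable store, not process memory

def handle_event(event):
    """Process an event at most once by checking its idempotency key first."""
    key = event["idempotencyKey"]
    if key in processed:
        return "skipped"       # duplicate delivery or replay: do nothing
    # ... apply the business operation here ...
    processed.add(key)
    return "processed"

event = {"idempotencyKey": "evt-42", "type": "OrderShipped"}
first = handle_event(event)
second = handle_event(event)   # same event redelivered by the broker
```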

10. Handle Partial Failures

Bulk operations can partially succeed. Your design must handle records that succeed alongside records that fail.

```mermaid
flowchart TD
    A[Bulk API Job<br/>10,000 Records] --> B{Result}
    B --> C[9,800 Success]
    B --> D[200 Failed]
    D --> E[Parse Error Details]
    E --> F{Fixable?}
    F -->|Yes: data quality| G[Fix Data<br/>Resubmit Failed Records]
    F -->|No: systemic| H[Alert Team<br/>Manual Review]
    G --> I[Retry Only<br/>Failed Records]
```

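The branch in the flowchart (parse errors, split fixable from systemic) can be sketched as a small triage function. The result shape (`id`, `success`, `error`) loosely mirrors bulk result records but is simplified, and treating `DATA_QUALITY` as the only fixable class is an illustrative assumption.

```python
def split_bulk_results(results):
    """Separate successes from failures and bucket failures by whether they are retryable."""
    succeeded = [r for r in results if r["success"]]
    failed = [r for r in results if not r["success"]]
    fixable = [r for r in failed if r["error"] == "DATA_QUALITY"]   # fix data, resubmit
    systemic = [r for r in failed if r["error"] != "DATA_QUALITY"]  # alert team, manual review
    return succeeded, fixable, systemic

results = [
    {"id": "001A", "success": True,  "error": None},
    {"id": "001B", "success": False, "error": "DATA_QUALITY"},
    {"id": "001C", "success": False, "error": "UNABLE_TO_LOCK_ROW"},
]
ok, retry_these, escalate = split_bulk_results(results)
```

Only `retry_these` goes back into the job queue; resubmitting the full batch would reprocess the 9,800 successes and is exactly what idempotent design protects against.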
11. Version Your APIs

Unversioned APIs create upgrade nightmares. Always version from day one.

| Strategy | Example | Best For |
| --- | --- | --- |
| URL versioning | `/api/v2/orders` | REST APIs; clear and simple |
| Header versioning | `Accept: application/vnd.myapi.v2+json` | When URL changes are undesirable |
| Query parameter | `/api/orders?version=2` | Quick and dirty; not recommended |

12. Log Everything, but Smartly

  • Log request/response pairs with correlation IDs
  • Log at appropriate levels (DEBUG for success, ERROR for failures)
  • Do NOT log sensitive data (passwords, tokens, PII)
  • Include timestamps, source system, target system, operation type
  • Use structured logging (JSON) for machine parsing
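Putting the five bullets together, a structured log entry might be built like this. The field names and the `SENSITIVE` redaction list are illustrative assumptions; the important properties are the correlation ID, the machine-parseable JSON shape, and that secrets never reach the log.

```python
import json
import uuid
from datetime import datetime, timezone

SENSITIVE = {"password", "token", "ssn"}  # illustrative; extend per your data classification

def log_entry(source, target, operation, payload, correlation_id=None, level="DEBUG"):
    """Build a structured (JSON) log line with a correlation ID and redacted secrets."""
    safe = {k: ("***" if k in SENSITIVE else v) for k, v in payload.items()}
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "level": level,
        "correlationId": correlation_id or str(uuid.uuid4()),
        "source": source,
        "target": target,
        "operation": operation,
        "payload": safe,
    })

line = log_entry("Salesforce", "ERP", "upsertAccount",
                 {"accountId": "001A", "token": "secret-value"},
                 correlation_id="corr-123", level="ERROR")
```

Passing the same `correlationId` to the downstream system (for example in a request header) is what lets you trace one transaction across every hop.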

Testing Best Practices

13. Test at Every Layer

| Layer | What to Test | Tools |
| --- | --- | --- |
| Unit | Individual callout logic, data mapping | Apex test classes, mock callouts |
| Integration | End-to-end data flow between systems | Sandbox with test environments |
| Performance | Volume, throughput, latency | Bulk load testing, JMeter |
| Failure | Error handling, retry, circuit breaker | Simulated failures, chaos testing |
| Security | Auth flows, data exposure, injection | OAuth validation, penetration testing |

14. Use Mock Callouts in Apex Tests

Apex tests cannot make real HTTP callouts. Use HttpCalloutMock and StaticResourceCalloutMock to simulate external responses.

  • Test success responses, error responses, and timeouts
  • Test with realistic payload sizes
  • Test with malformed responses (what happens when the external system returns unexpected data?)
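HttpCalloutMock works by injecting a fake response in place of the real HTTP transport. The same dependency-injection idea, sketched here in plain Python rather than Apex (the transport signature and the `/accounts/...` path are hypothetical), shows how success and error responses are both exercised without a network.

```python
def get_account(transport, account_id):
    """Callout logic that depends on an injected transport, so tests can swap in a mock."""
    status, body = transport("GET", f"/accounts/{account_id}")
    if status == 200:
        return {"ok": True, "name": body["name"]}
    return {"ok": False, "error": f"HTTP {status}"}

# Mock transports standing in for HttpCalloutMock implementations
def mock_success(method, path):
    return 200, {"name": "Acme"}

def mock_server_error(method, path):
    return 503, None

happy = get_account(mock_success, "001A")
sad = get_account(mock_server_error, "001A")
```

A third mock returning malformed JSON (or a timeout exception) covers the remaining bullets above.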

15. Test with Production-Like Volumes

Integrations that work with 100 records often break with 100,000. Always test with volumes that approximate production.

  • Use Bulk API test jobs with realistic record counts
  • Test concurrent integrations (what happens when two batch jobs run simultaneously?)
  • Verify governor limits are not exceeded under load
  • Test with data that triggers sharing rules, validation rules, and triggers

16. Test Failure Scenarios Explicitly

| Scenario | How to Simulate | What to Verify |
| --- | --- | --- |
| External system down | Stop test server | Circuit breaker activates, DLQ works |
| Network timeout | Add artificial delay | Timeout handling, retry triggers |
| Rate limit (429) | Return 429 response | Backoff activates, no data loss |
| Partial bulk failure | Mix valid/invalid records | Success records commit, failures route to error handling |
| Auth token expired | Invalidate token | Token refresh, retry with new token |
| Duplicate event | Replay same event | Idempotency prevents double processing |
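The rate-limit row is the one most often left untested, so here is what "backoff activates, no data loss" looks like as a testable sketch. The function name is hypothetical, and delays are recorded instead of slept so the test runs instantly.

```python
def call_with_backoff(callout, max_attempts=4):
    """Retry a rate-limited call with exponential backoff; the payload is never dropped."""
    delays = []
    for attempt in range(max_attempts):
        status, body = callout()
        if status != 429:
            return status, body, delays
        delays.append(2 ** attempt)  # 1s, 2s, 4s, ... (recorded here instead of sleeping)
    return 429, None, delays         # exhausted: hand off to DLQ, do not discard

# Simulated endpoint: rate-limited twice, then healthy
responses = iter([(429, None), (429, None), (200, "payload")])
status, body, delays = call_with_backoff(lambda: next(responses))
```

The same harness covers the "external system down" row by making the fake endpoint fail every attempt and asserting the message lands in the DLQ rather than vanishing.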

Operations Best Practices

17. Monitor Proactively, Not Reactively

Do not wait for users to report integration failures. Build monitoring that alerts BEFORE business impact.

  • Dashboard showing integration health metrics (success rate, latency, volume)
  • Alerts for: DLQ depth, failure rate spikes, API limit consumption, circuit breaker state
  • Regular review cadence (weekly integration health review)
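The alert list above reduces to comparing a handful of metrics against thresholds. This sketch is illustrative only: the metric names, threshold values, and alert strings are all assumptions, and a real system would feed these from your monitoring platform rather than a dict.

```python
def evaluate_health(metrics, thresholds):
    """Compare integration health metrics against alert thresholds; return triggered alerts."""
    alerts = []
    if metrics["dlq_depth"] > thresholds["dlq_depth"]:
        alerts.append("DLQ depth above threshold")
    if metrics["failure_rate"] > thresholds["failure_rate"]:
        alerts.append("Failure rate spike")
    if metrics["api_calls_used"] / metrics["api_call_limit"] > thresholds["api_usage"]:
        alerts.append("API limit consumption high")
    if metrics["circuit_breaker_open"]:
        alerts.append("Circuit breaker open")
    return alerts

metrics = {"dlq_depth": 120, "failure_rate": 0.02, "api_calls_used": 96000,
           "api_call_limit": 100000, "circuit_breaker_open": False}
thresholds = {"dlq_depth": 100, "failure_rate": 0.05, "api_usage": 0.9}
alerts = evaluate_health(metrics, thresholds)
```

Running a check like this on a schedule, and paging on a non-empty result, is the difference between proactive and reactive monitoring.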

18. Document Integration Contracts

Every integration should have a documented contract that both teams agree to.

| Contract Element | Purpose |
| --- | --- |
| Endpoint URLs | Where to connect |
| Authentication | How to authenticate |
| Request/Response schema | Data format and validation rules |
| Rate limits | Throttling expectations |
| SLA | Uptime, response time, support hours |
| Error codes | What each error means and expected handling |
| Versioning policy | How and when versions change |
| Escalation contacts | Who to call when things break |

19. Plan for Maintenance Windows

External systems have maintenance windows. Your integrations must handle planned downtime gracefully.

  • Queue messages during maintenance windows
  • Automatically retry after maintenance ends
  • Communicate maintenance schedules across teams
  • Test integration recovery after extended downtime
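The queue-then-retry behavior from the first two bullets can be sketched as a small wrapper. The class name and API are hypothetical; in practice the queue would be a durable store (a platform queue or middleware buffer), not process memory, and the maintenance flag would come from a published schedule.

```python
from collections import deque

class MaintenanceAwareSender:
    """Queue messages while the target is in a maintenance window; flush when it ends."""
    def __init__(self, send):
        self.send = send
        self.in_maintenance = False
        self.queue = deque()

    def submit(self, message):
        if self.in_maintenance:
            self.queue.append(message)   # hold instead of failing
            return "queued"
        return self.send(message)

    def end_maintenance(self):
        """Automatically retry everything held during the window, in order."""
        self.in_maintenance = False
        results = [self.send(m) for m in self.queue]
        self.queue.clear()
        return results

delivered = []
sender = MaintenanceAwareSender(lambda m: delivered.append(m) or "sent")
sender.in_maintenance = True
q1 = sender.submit("msg-1")
q2 = sender.submit("msg-2")
flushed = sender.end_maintenance()
```

Replaying the queue in order preserves message sequencing, which matters whenever later updates depend on earlier ones.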

20. Maintain Integration Runbooks

Every production integration should have a runbook covering:

  • Normal operation description
  • Common failure modes and resolutions
  • How to restart/retry failed processes
  • How to verify data integrity after failures
  • Escalation procedures and contacts

Anti-Patterns: What NOT to Do

Design Anti-Patterns

| Anti-Pattern | Why It Fails | What to Do Instead |
| --- | --- | --- |
| SOAP for everything | Verbose, slow, unnecessary for modern systems | Use REST as default, SOAP only for legacy requirements |
| God integration | One integration handles all data exchange | Separate integrations by business domain and timing |
| Polling every 5 seconds | Wastes API calls, false sense of real-time | Use event-driven (CDC, Platform Events, Pub/Sub) |
| Storing credentials in code | Security vulnerability, audit failure | Named Credentials, always |
| No error handling | Silent failures, data inconsistency | Retry + DLQ + monitoring + alerting |
| Ignoring API limits | Production outage when limits hit | Budget API calls, use Bulk/Composite to optimize |

Implementation Anti-Patterns

| Anti-Pattern | Why It Fails | What to Do Instead |
| --- | --- | --- |
| Insert instead of upsert | Duplicates on retry, no idempotency | External ID + upsert |
| Synchronous for bulk | Governor limit violations, timeouts | Bulk API 2.0 for volume > 200 records |
| Hardcoded endpoints | Cannot change without deployment | Custom Metadata + Named Credentials |
| No correlation IDs | Cannot trace transactions across systems | Generate and pass correlation IDs |
| Ignoring field-level security | Integration user bypasses FLS | Respect WITH SECURITY_ENFORCED or stripInaccessible |
| Triggering recursion | Integration writes trigger flows that trigger integrations | Bypass flags, transaction control |

Operational Anti-Patterns

| Anti-Pattern | Why It Fails | What to Do Instead |
| --- | --- | --- |
| No monitoring | Failures discovered by end users | Proactive alerting and dashboards |
| Manual retry only | Does not scale, human bottleneck | Automated retry with manual escalation for edge cases |
| No runbooks | Knowledge trapped in individuals | Documented procedures for every integration |
| Testing only in production | Risk of data corruption | Sandbox testing with production-like data |
| Shared integration user | Cannot audit or attribute actions | Separate integration users per system |

The cardinal sin

The single worst integration anti-pattern is building an integration with no error handling and no monitoring. When (not if) it fails, nobody knows until a business user reports missing data — which could be days or weeks later. By then, the data inconsistency may be unrecoverable. Always build error handling and monitoring FIRST, not as an afterthought.


Related Topics

  • Identity & SSO — integration security best practices depend on OAuth flows, Named Credentials, and SSO configuration
  • Testing Strategy — integration testing (mocking external services, contract testing) is critical to integration quality
  • Reporting & Analytics — integration monitoring and analytics provide visibility into integration health

Sources