Case Study 03: GlobalTrade Logistics — Q&A Preparation

AI-Generated Content — Use for Reference Only

This content is AI-generated and has only been validated by AI review processes. It has NOT been reviewed or validated by certified Salesforce CTAs or human subject matter experts. Do not rely on this content as authoritative or completely accurate. Use it solely as a reference point for your own study and preparation. Always verify architectural recommendations against official Salesforce documentation.

Q&A Format

Duration: 30 minutes following a 30-minute presentation
Strategy: State your position, give the reasoning, acknowledge the trade-off. Do not ramble — keep answers to 1-2 minutes.

Data Architecture

Q1: 413M tracking events in Big Objects — how does an agent look up an 18-month-old shipment?

The shipment record itself is always in standard objects. Tracking events older than 90 days are in Big Objects. When an agent opens the shipment, hot-tier events load instantly. A “Load Full History” button triggers async SOQL against the Big Object, returning in 5-10 seconds. Key milestone dates are denormalized on the Shipment record, so the agent gets essential information immediately without querying tracking events at all.
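The hot/warm split in this answer can be sketched end-to-end. This is illustrative Python, not Apex: the store names, the `load_events` helper, and the reference date are stand-ins, with the `full_history` closure playing the role of the async SOQL callback.

```python
from datetime import datetime, timedelta

HOT_TIER_DAYS = 90  # events newer than this live in standard objects

def shipment_summary(shipment):
    """Instant view: milestone dates denormalized onto the Shipment record,
    so no tracking-event query is needed at all."""
    keys = ("pickup_date", "customs_cleared_date", "delivered_date")
    return {k: shipment[k] for k in keys}

def load_events(shipment_id, hot_store, archive, as_of=None):
    """Hot-tier events return immediately; the full timeline requires an
    async archive query (async SOQL against the Big Object in the design)."""
    as_of = as_of or datetime(2024, 6, 1)  # stand-in for "now"
    cutoff = as_of - timedelta(days=HOT_TIER_DAYS)
    hot = [e for e in hot_store.get(shipment_id, []) if e["ts"] >= cutoff]

    def full_history():  # stands in for the "Load Full History" async callback
        merged = hot_store.get(shipment_id, []) + archive.get(shipment_id, [])
        return sorted(merged, key=lambda e: e["ts"])

    return hot, full_history
```

The agent-facing page would render `shipment_summary` and the hot list synchronously, invoking `full_history` only on demand.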

Q2: Why Big Objects instead of Heroku Postgres for warm/cold tiers?

Big Objects stay within the Salesforce platform boundary — accessible via async SOQL, reportable in CRM Analytics, governed by the same security model. An external database creates a separate security perimeter and compliance gap. The trade-off: Big Objects have limited query flexibility (first N index fields only). I designed the index with ShipmentId first and EventTimestamp second, covering the primary access pattern. Data Cloud handles analytics queries that do not fit this index.
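The index design described above could be expressed in Big Object metadata roughly as follows. Object and field names are illustrative and the non-index field definitions are omitted; the point is that `ShipmentId__c` leads the index, so per-shipment lookups stay queryable, optionally narrowed by `EventTimestamp__c`.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- TrackingEvent__b.object-meta.xml (illustrative names; field
     definitions for the index columns omitted for brevity) -->
<CustomObject xmlns="http://soap.sforce.com/2006/04/metadata">
    <deploymentStatus>Deployed</deploymentStatus>
    <label>Tracking Event</label>
    <pluralLabel>Tracking Events</pluralLabel>
    <indexes>
        <fullName>TrackingEventIndex</fullName>
        <label>Tracking Event Index</label>
        <!-- Index order defines queryability: filters must use
             ShipmentId__c first, then optionally EventTimestamp__c -->
        <fields>
            <name>ShipmentId__c</name>
            <sortDirection>ASC</sortDirection>
        </fields>
        <fields>
            <name>EventTimestamp__c</name>
            <sortDirection>DESC</sortDirection>
        </fields>
    </indexes>
</CustomObject>
```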

Q3: What about Salesforce Data Archive instead of Big Objects?

Data Archive is a valid alternative. I chose Big Objects as the more mature, well-understood pattern with proven scale at 400M+. If Data Archive has more production reference customers at this scale by implementation start, I would revisit. The architecture supports swapping — the tiering batch logic is the same regardless of warm-tier destination.

Integration

Q4: MuleSoft for 40+ carriers is expensive. Justify it.

Without middleware, GTL maintains 40+ point-to-point Apex integrations across three orgs; that current state costs $4.2M/year in maintenance and incident response. MuleSoft provides a canonical data model that reduces 40 carrier contracts to one, centralized monitoring in a single dashboard, and carrier onboarding through a single reusable system API. License cost is roughly $300-400K/year against $4.2M in avoided maintenance and incident spend, so ROI is clear within year one.
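The ROI claim reduces to simple arithmetic. The sketch below uses the figures quoted above; the one-time build cost is a hypothetical placeholder, since the case study does not state one.

```python
# Back-of-envelope ROI for the middleware decision. The $4.2M and
# $300-400K figures come from the case study; the one-time build
# cost is an assumed placeholder, not a sourced number.
current_annual_cost = 4_200_000      # point-to-point maintenance + incidents
mulesoft_license = 400_000           # upper end of the ~$300-400K/yr estimate
hypothetical_build_cost = 1_500_000  # assumed one-time implementation spend

annual_savings = current_annual_cost - mulesoft_license
year_one_net = annual_savings - hypothetical_build_cost
```

Even at the high end of the license estimate and with a seven-figure build cost assumed, year one stays net positive.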

Q5: What if MuleSoft goes down?

MuleSoft CloudHub provides 99.99% uptime SLA. Critical pattern: Anypoint MQ queues messages between process and system layers with automatic retry. Platform Events provide guaranteed delivery with replay. Portal tracking pages use a 60-second MuleSoft cache so users see recent data even during brief outages. The design is eventually consistent, not synchronous.

Q6: How do you handle 720,000 daily FrostGuard readings?

I do not bring all 720K readings into Salesforce. MuleSoft ingests the MQTT stream and does two things: it aggregates readings into 5-minute summaries (720K reduced to ~144K/day) stored as Temperature Log records, and it evaluates every raw reading against thresholds in real time — an excursion immediately publishes a Platform Event that bypasses aggregation, meeting the 2-minute SLA. Raw 60-second data stays in FrostGuard.
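The aggregate-and-flag split can be sketched in a few lines. This is illustrative Python, not MuleSoft DataWeave: the threshold value, field names, and `process_window` helper are assumptions.

```python
from statistics import mean

THRESHOLD_C = -15.0  # hypothetical excursion threshold for a frozen lane

def process_window(readings):
    """One 5-minute window per sensor: emit a single summary record for
    the Temperature Log, but flag every raw reading that breaches the
    threshold so it can publish a Platform Event immediately."""
    excursions = [r for r in readings if r["temp_c"] > THRESHOLD_C]
    summary = {
        "sensor": readings[0]["sensor"],
        "min_c": min(r["temp_c"] for r in readings),
        "max_c": max(r["temp_c"] for r in readings),
        "avg_c": round(mean(r["temp_c"] for r in readings), 2),
    }
    return summary, excursions

# Five 60-second readings collapse into one 5-minute summary,
# which is where the 720K -> ~144K/day reduction comes from.
summaries_per_day = 720_000 // 5
```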

Q7: EU SAP ECC to S/4HANA upgrade overlaps with migration. How?

MuleSoft makes this manageable. Salesforce-to-SAP traffic flows through a MuleSoft process API that calls an SAP system API. Today that system API speaks BAPI/RFC to ECC. When the upgrade completes, I build a new system API that speaks OData to S/4HANA and swap the routing. The process API and Salesforce see no change. Sequencing: EU Salesforce migration at month 10, SAP swap at months 16-18. Never both simultaneously.
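The swap described above is the classic API-led routing change, sketched here in illustrative Python rather than Mule configuration. All class names are hypothetical; the point is that the process API's contract is identical whichever system API sits behind it.

```python
class EccSystemApi:
    """Today's system API: BAPI/RFC against SAP ECC (simulated)."""
    def get_order(self, order_id):
        return {"source": "ECC/BAPI", "order_id": order_id}

class S4SystemApi:
    """Replacement system API: OData against S/4HANA (simulated)."""
    def get_order(self, order_id):
        return {"source": "S4/OData", "order_id": order_id}

class OrderProcessApi:
    """The process API that Salesforce calls. Swapping the SAP backend
    is a routing change here; the caller's contract never changes."""
    def __init__(self, system_api):
        self._backend = system_api

    def get_order(self, order_id):
        raw = self._backend.get_order(order_id)
        return {"orderId": raw["order_id"]}  # canonical, backend-agnostic shape
```

Cutover is constructing `OrderProcessApi` with the new backend; no consumer changes.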

Security

Q8: Performance impact of Territory Management at 35K accounts?

Territory assignment rules process on account create/modify. At 35K accounts with hundreds of daily changes, this is manageable. Sharing recalculation on hierarchy changes runs async — for 35K accounts across 18 territories, it completes within hours. I deliberately apply Territory Management only on Account (not Shipment at 27M records). Shipments inherit via Controlled by Parent.

Q9: How do you enforce EU data residency in a single global org?

Three layers. First, Hyperforce in Frankfurt — all data at rest physically in EU. Second, Shield Platform Encryption with tenant-managed keys for Japan/Singapore sovereign fields. Third, Data Classification metadata tagging personal data by applicable regulation for automated compliance reporting. Trade-off: APAC users experience slightly higher latency, mitigated by UI optimization and CDN caching.

Q10: German rep should not see Japanese data. But what about a cross-border shipment?

If the Japanese client is a global account, the criteria-based sharing rule grants both APAC and EU teams access. The German rep sees the shipment through the consignee account relationship. If not a global account, the German rep has no visibility. Shipment visibility follows Account via Controlled by Parent. A cross-border shipment involves two parties (shipper + consignee) — each team sees it through their own account context.

Migration

Q11: Deduplication strategy for 2,800 FrostLine overlapping accounts?

Three tiers by risk. Tier 1: top 200 by revenue get manual data steward review. Tier 2: remaining 2,600 use automated Informatica matching (Company Name fuzzy + Tax ID exact), GTL record survives, FrostLine fields merge where GTL is blank. Tier 3: 700 FrostLine-only accounts load directly. Pre-merge validation flags matches below 85% confidence for manual review.
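Tier 2's matching and survivorship rules can be sketched without Informatica. This is illustrative Python using stdlib `difflib` as a stand-in fuzzy matcher; the field names and `CONFIDENCE_FLOOR` constant mirror the 85% threshold above.

```python
from difflib import SequenceMatcher

CONFIDENCE_FLOOR = 0.85  # matches below this go to manual review

def match_confidence(gtl, frostline):
    """Exact Tax ID match is decisive; otherwise fall back to fuzzy
    company-name similarity (Informatica plays this role in the design)."""
    if gtl.get("tax_id") and gtl["tax_id"] == frostline.get("tax_id"):
        return 1.0
    return SequenceMatcher(
        None, gtl["name"].lower(), frostline["name"].lower()
    ).ratio()

def merge(gtl, frostline):
    """Survivorship: the GTL record wins; FrostLine values fill only
    fields that are blank on the GTL side."""
    return {k: gtl.get(k) or frostline.get(k) for k in set(gtl) | set(frostline)}
```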

Q12: How do you handle NA’s 420 Process Builders?

Check execution logs first; expect 30-40% to be dormant, and deprecate those. Active ones with straightforward logic go through the Migrate to Flow tool. Complex Process Builders with multiple criteria nodes get manually rewritten as Flows post-migration. Salesforce still executes Process Builders during migration, so conversion to Flow is technical debt scheduled for the first two sprints after NA go-live.

Q13: Rollback plan if NA parallel run fails?

During parallel run, the old org remains system of record. Both systems run; data writes go to both via MuleSoft dual-write. If critical issues surface (discrepancies, integration failures, adoption below 70%), we extend the parallel run — we do not cut over. Rollback is simply “stop using the new org.” The sunk cost is the parallel-run period, not the entire migration.

Portal, Cold-Chain, and Governance

Q14: 8,000 concurrent portal users — how does Experience Cloud handle this?

Four measures. CDN caching for static assets (read-heavy app). LWC lazy loading — summary first, full timeline on interaction. Tracking data served from a MuleSoft experience API with a 60-second cache, not direct SOQL. Two-month load testing phase (months 20-22) with k6 simulating 8K concurrent users to find bottlenecks pre-launch.

Q15: The FrostLine MD worries about “big company processes” slowing his team. How do you address this?

Tiered governance. GARB only reviews cross-region and schema-level changes. Cold-chain gets its own admin scope with autonomy for cold-chain-specific config: layouts, flows, reports, dashboards. They deploy on the same bi-weekly cadence but their changes are scoped to cold-chain objects and do not require cross-region validation. Direct communication: “Your team keeps its speed for cold-chain changes. Governance only kicks in when changes touch shared objects or integrations.”

Question Categorization

| Domain                         | Questions                          |
| ------------------------------ | ---------------------------------- |
| D1 System Architecture         | Covered in presentation deep dives |
| D2 Security                    | Q8, Q9, Q10                        |
| D3 Data                        | Q1, Q2, Q3, Q11                    |
| D5 Integration                 | Q4, Q5, Q6, Q7                     |
| D6 Dev Lifecycle               | Q12, Q13                           |
| D4 Solution / D7 Communication | Q14, Q15                           |

Q&A Survival Rules

  1. Answer the question asked — do not pivot to a topic you prepared better for
  2. State position first, then reasoning: “I chose X because Y. I rejected Z because W.”
  3. Name the trade-off proactively — judges respect honesty over pretending there is no cost
  4. Say “I don’t know” when appropriate: “I would validate that during the design phase”
  5. Stay within 1-2 minutes per answer