Domain Grilling: D7 Communication
AI-Generated Content — Use for Reference Only
This content is AI-generated and has only been validated by AI review processes. It has NOT been reviewed or validated by certified Salesforce CTAs or human subject matter experts. Do not rely on this content as authoritative or completely accurate. Use it solely as a reference point for your own study and preparation. Always verify architectural recommendations against official Salesforce documentation.
Communication is independently scored and assessed throughout the entire presentation and Q&A — there is no separate “communication section.” Judges evaluate whether you articulate benefits and limitations clearly, use visualization tools effectively, and handle unexpected roadblocks with composure. This domain tests consulting acumen as much as technical knowledge.
Type 1: Invalid — “Your Solution Won’t Work”
Q1.1: Unreadable diagram challenge
Judge: “Your System Landscape diagram has 15 boxes and I can’t read the labels. This is supposed to communicate your architecture to me — it’s not working. How would you improve it?”
What they’re testing: Ability to recognize and correct a communication failure under pressure.
Model answer: “You are right — 15 boxes on a single diagram exceeds the readability threshold of 10-12 elements per diagram. I would restructure into two levels: a Level 1 System Landscape with Salesforce at the center, the middleware layer, and grouped external system categories — ERP, Marketing, Analytics, and Portals — as 6-8 boxes with high-level integration types labeled on each arrow. Then a Level 2 Integration Detail diagram that expands each external system group with specific protocols, timing, and error handling. This follows the one-diagram-one-message principle. I would also ensure every arrow has a label showing data flow direction and protocol, and include a legend with color coding for system status — green for new, gray for retained, red for retiring.”
Q1.2: Requirement without a solution
Judge: “What requirement is this solution addressing? If you don’t know what you’re solving for, we can’t score you.”
What they’re testing: Whether every solution element is explicitly tied to a stated requirement.
Model answer: “I apologize for not making that connection explicit. This component addresses requirement [specific number or description] from the scenario — the need for [business capability]. I should be tying every design decision to a specific requirement using the format: ‘This addresses requirement X, and I chose approach Y because of constraint Z.’ Let me reframe: the business need is [restate the requirement], which drives this design choice because [specific reasoning]. I will be more deliberate about stating the requirement-to-solution mapping for each subsequent element.”
Q1.3: Contradicting your own design
Judge: “Ten minutes ago you said your sharing model uses private OWD for Accounts. Now you are describing a portal integration that assumes public read access. Which is it?”
What they’re testing: Internal consistency of your architecture across the presentation.
Model answer: “You have caught an inconsistency in my design. The correct model is private OWD for Accounts, which I stated earlier and which is driven by the territory-based data isolation requirement. My portal integration needs to be revised to respect this. For the Experience Cloud portal, external users should see only their own account data via the customer community sharing set, not public read access. I would use sharing rules or Apex managed sharing to grant specific portal users access to their related accounts. Thank you for catching this — maintaining consistency across the seven domains is exactly what the review board is evaluating. Let me note this adjustment and trace the impact: my integration user for the portal API also needs to run in the correct sharing context, which means authenticating as a dedicated integration user — for example via the OAuth 2.0 JWT Bearer flow — whose record access is explicitly provisioned rather than assumed.”
Q1.4: Solution dump without context
Judge: “You have been describing features for 5 minutes without once telling me which business problem they solve. I need you to tie this to the scenario.”
What they’re testing: Whether you lead with business context or technology.
Model answer: “You are right, and I am going to reset. The business problem is [restate the core challenge from the scenario]. The specific requirement driving this section of my architecture is [requirement]. I chose [approach] because it addresses [business need] while respecting [constraint]. Going forward, I will follow the pattern: identify the requirement first, state my recommendation, explain why, acknowledge the trade-off, and note alternatives considered. Every slide should answer the question ‘what problem does this solve’ before explaining the how.”
Type 2: Missed — “You Haven’t Addressed…”
Q2.1: Time management failure
Judge: “You have spent 25 minutes on your data model but haven’t shown governance, migration, or deployment yet. How do you adjust?”
What they’re testing: Time management and ability to prioritize under pressure.
Model answer: “I need to adjust immediately. I have 20 minutes remaining and three critical sections to cover. I will spend 5 minutes on migration — covering the phased approach, key tools, and data validation strategy at a high level. Then 5 minutes on governance and deployment — covering the CI/CD pipeline, sandbox topology, and CoE model using a single summary slide. Then 5 minutes on a closing summary that sweeps across all seven domains, highlighting the top 3 decisions and trade-offs. I will reserve 5 minutes for Q&A buffer. The key trade-off is depth versus completeness — it is better to address all domains at a summary level than to deeply cover some and completely skip others. I will invite questions on any area where the judges want me to go deeper.”
Q2.2: Deferred question never addressed
Judge: “You deferred my sharing model question saying ‘I’ll cover it later.’ It has been 10 minutes and you haven’t. Address it now.”
What they’re testing: Accountability for commitments made during the presentation.
Model answer: “I apologize for not circling back. The sharing model for this scenario uses private OWD for Accounts, Opportunities, and Cases to enforce territory-based data isolation. I implement sharing through three layers: first, a role hierarchy that mirrors the sales territory structure for vertical access. Second, criteria-based sharing rules for cross-territory collaboration scenarios — for example, when a shared customer spans two territories. Third, Apex managed sharing for the most complex access patterns where the customer portal needs dynamic access based on the user’s account relationship. The trade-off is performance — Apex managed sharing adds processing overhead and maintenance complexity, but it is necessary for the dynamic access patterns in this scenario. I should have covered this when I showed the role hierarchy diagram rather than deferring.”
Q2.3: Missing stakeholder perspective
Judge: “You have presented a technically sound architecture but haven’t once mentioned change management or user adoption. How will 500 users transition to this new system?”
What they’re testing: Whether you think beyond technology to organizational change.
Model answer: “That is a critical gap in my presentation. For 500 users transitioning from the legacy system, I would plan three phases of change management. First, a stakeholder analysis identifying champions and resistors in each department — the Sales VP and Service Director are key sponsors who need executive briefings. Second, a phased rollout starting with a pilot group of 25 power users across 3 business units who validate the solution and become internal advocates. Third, a training program with role-based content — admins get configuration training, managers get reporting dashboards, and end users get process-specific walkthroughs. I would also recommend a hypercare period of 4-6 weeks post-go-live with dedicated support resources. The governance model includes a feedback loop where user adoption metrics drive iteration priorities.”
Q2.4: No risk acknowledgment
Judge: “You have presented your entire architecture without mentioning a single risk. Are there no risks in this solution?”
What they’re testing: Maturity to proactively identify and communicate risks.
Model answer: “Every architecture has risks, and I should have addressed them explicitly. The top three risks for this solution are: first, integration complexity — with 6 external systems and a middleware layer, the integration testing surface area is large, and an ERP outage during the critical holiday period could impact order processing. I mitigate this with circuit breakers, DLQs, and a dedicated integration monitoring dashboard. Second, data migration quality — migrating 2 million records from the legacy system risks data quality issues that surface post-go-live. I mitigate with automated validation scripts and a two-week parallel run. Third, user adoption — the 500 users have worked with the legacy system for years, and resistance to change is the most common reason CRM implementations underdeliver. I mitigate with the phased rollout and champion network I just described.”
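The circuit-breaker mitigation named in that answer is a well-known resilience pattern worth being able to explain from first principles if a judge probes. A minimal sketch of its state logic in Python — the class name, thresholds, and timings here are illustrative, not any Salesforce or MuleSoft API:

```python
import time

class CircuitBreaker:
    """Open the circuit after `threshold` consecutive failures so callers
    fail fast instead of hammering a down ERP; allow a retry probe after
    `reset_after` seconds (simplified half-open behavior)."""

    def __init__(self, threshold=3, reset_after=30.0):
        self.threshold = threshold
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def allow_request(self):
        if self.opened_at is None:
            return True
        # Half-open: let one probe through after the cool-down period.
        return time.monotonic() - self.opened_at >= self.reset_after

    def record_success(self):
        self.failures = 0
        self.opened_at = None

    def record_failure(self):
        self.failures += 1
        if self.failures >= self.threshold:
            self.opened_at = time.monotonic()
```

The value in the review board context is being able to say precisely what "circuit breaker open state" means: downstream callers stop attempting delivery entirely, rather than retrying into an outage.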
Type 3: Suboptimal — “Have You Considered…?”
Q3.1: Diagram improvement suggestion
Judge: “Your integration diagram shows all happy paths. Where are the error flows?”
What they’re testing: Diagram maturity and whether you design for failure.
Model answer: “Fair point. A production-ready integration diagram must show the error path alongside the happy path. I would add a ‘retry 3x with backoff, then DLQ’ annotation on each integration arrow, and include the dead letter queue as a separate component in the diagram with an arrow to the operations monitoring dashboard. For the most critical integration — the ERP order sync — I would show the full error flow: retry with exponential backoff, circuit breaker open state, dead letter queue routing, PagerDuty alert, and manual resubmit path. A single error arrow with clear labeling demonstrates architectural maturity without cluttering the diagram. This is the difference between an architect who designs for the demo and one who designs for production.”
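The "retry 3x with backoff, then DLQ" annotation describes a standard delivery pattern; a minimal, language-agnostic sketch in Python makes the flow concrete. All names here are illustrative, not a Salesforce or MuleSoft API:

```python
import time

def deliver_with_retry(send, message, dead_letter_queue,
                       max_attempts=3, base_delay=1.0):
    """Attempt delivery up to max_attempts times with exponential
    backoff; route the message to a dead letter queue on final failure
    so an operator can inspect and manually resubmit it."""
    for attempt in range(max_attempts):
        try:
            return send(message)  # happy path: delivery succeeds
        except ConnectionError:
            if attempt == max_attempts - 1:
                dead_letter_queue.append(message)  # DLQ for manual resubmit
                return None
            time.sleep(base_delay * (2 ** attempt))  # backoff: 1s, 2s, 4s...
```

On a diagram, this entire function collapses to the single annotated error arrow the answer describes: retry with backoff, then the DLQ component, then the monitoring and resubmit path.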
Q3.2: Overly technical language for mixed audience
Judge: “I am the CFO. I don’t care about technical architecture. Explain why this solution is worth $2 million.”
What they’re testing: Ability to translate technical decisions into business value for a non-technical stakeholder.
Model answer: “This solution delivers three measurable business outcomes. First, revenue protection: the unified customer view eliminates the current problem where 15% of leads are lost between the legacy system and email — based on the scenario’s stated 50,000 annual leads, that represents approximately 7,500 recovered opportunities. Second, operational efficiency: automating the order-to-fulfillment process that currently requires 3 manual handoffs reduces order processing time from 48 hours to near-real-time, which based on the scenario’s volume translates to recovering 2 FTE equivalents. Third, risk reduction: the current legacy system is unsupported and the compliance gap it creates exposes the company to regulatory penalties. The $2 million investment pays back through recovered revenue, reduced headcount dependency, and eliminated compliance risk. I would present a formal business case with projected ROI to the executive steering committee.”
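The revenue-protection figure in that answer is simple arithmetic, and being able to reproduce it on the spot is part of the credibility of the business case. A quick sanity check, using the scenario's stated numbers (the volumes and rates are hypothetical scenario inputs, not real data):

```python
# Scenario inputs: 50,000 annual leads, 15% lost between systems.
annual_leads = 50_000
lost_lead_rate = 0.15

# Opportunities recovered per year once the unified customer view
# closes the gap between the legacy system and email.
recovered = int(annual_leads * lost_lead_rate)  # 7500
```

Walking a CFO through the two inputs and one multiplication is far more persuasive than asserting the 7,500 figure unexplained.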
Q3.3: Scripted vs conversational delivery
Judge: “Your presentation feels rehearsed and scripted. I am asking you to explain your data model in your own words, not from your slides.”
What they’re testing: Whether you truly understand your design or are reciting memorized content.
Model answer: “Fair feedback. Let me step away from the slides. The core data model challenge in this scenario is the many-to-many relationship between Partners and their customer Accounts. A single Account can work with multiple Partners, and each Partner manages hundreds of Accounts. I modeled this with a junction object called Partner_Account__c that holds the relationship metadata — which Partner owns which Account, since when, and their current status. I chose a junction object over a lookup because both sides of the relationship need to be required, and I need to enforce referential integrity. The LDV consideration is the Account object at 2 million records — I would request custom indexes from Salesforce on the key filter fields used in the most common SOQL queries. That is the essence of the data model decision.”
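The junction-object choice in that answer can be sketched structurally. A minimal Python illustration of why both sides of the relationship must be required on a junction record — the class and field names mirror the answer's Partner_Account__c naming for readability, and nothing here is Salesforce API code:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class PartnerAccount:
    """Junction record: one row per Partner-to-Account relationship,
    holding the relationship metadata (ownership, start date, status).
    Both parent references are required, mirroring the two required
    master-detail relationships on a Salesforce junction object."""
    partner_id: str
    account_id: str
    since: date
    status: str = "Active"

    def __post_init__(self):
        # A junction row without both parents is meaningless. This is
        # the referential-integrity guarantee that an optional lookup
        # relationship would not enforce.
        if not self.partner_id or not self.account_id:
            raise ValueError("Both partner_id and account_id are required")
```

Explaining the design at this level, in your own words, is exactly what the judge is asking for: the junction row exists only because two required parents exist.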
Q3.4: Cutting an artifact under time pressure
Judge: “If you had to cut one artifact from your presentation due to time, which one and why?”
What they’re testing: Prioritization judgment and understanding of artifact weighting.
Model answer: “If forced to cut one artifact, I would cut the detailed migration strategy and cover it in two sentences during my governance section. My reasoning: the Big 3 diagrams — System Landscape, Data Model, and Role Hierarchy — are the highest-weighted artifacts and must stay. The integration architecture is critical given the 6 external systems. The security and identity model is non-negotiable for this scenario’s compliance requirements. Migration, while important, can be summarized as ‘phased approach using Bulk API 2.0 and an ETL tool with automated validation scripts and a 2-week parallel run,’ which conveys the architectural thinking without a dedicated diagram. If a judge wants to go deeper on migration during Q&A, I can elaborate. The principle is: cover all seven domains with varying depth rather than skip any domain entirely.”
Type 4: Rationale Missing — “WHY Did You Choose…?”
Q4.1: Stakeholder disagreement handling
Judge: “How would you handle a stakeholder who disagrees with your single-org recommendation and insists on a multi-org strategy?”
What they’re testing: Consulting acumen and stakeholder management skills.
Model answer: “I would not dismiss their concern or immediately defend my position. First, I would listen to understand their reasoning — there may be a legitimate business constraint I missed, such as regulatory data isolation requirements or an acquisition timeline. Second, I would present the trade-off analysis: a single org provides unified reporting, shared customer data, and simpler governance, while a multi-org strategy provides stronger data isolation, independent release cycles, and reduced blast radius — but at the cost of duplicated configuration, cross-org integration complexity, and higher licensing costs. Third, I would tie the decision to the specific requirements: ‘The scenario states that all 3 BUs share customers and need unified reporting — this requirement is significantly harder to achieve in a multi-org strategy.’ If their concern is data isolation, I would demonstrate that sharing rules, permission sets, and territories can achieve the same isolation within a single org. Ultimately, if they provide new information that invalidates my assumption, I adapt.”
Q4.2: Trade-off articulation for a key decision
Judge: “You chose MuleSoft for integration. What did you give up by choosing it, and what would make you change your mind?”
What they’re testing: Whether you understand the costs and limitations of your choices, not just the benefits.
Model answer: “Three things I gave up. First, cost: MuleSoft licensing starts at $80K+ annually, which is significant for a mid-market customer. If the budget were tighter and the integration landscape were simpler — say 2-3 systems with straightforward mappings — I would switch to direct Salesforce-to-system integrations using Named Credentials and Apex callouts. Second, complexity: MuleSoft requires specialized skills — if the customer’s team has no MuleSoft experience, the ramp-up time is 2-3 months, which impacts the project timeline. If the customer already had Dell Boomi deployed and the scenario did not require API-led reusability, I would recommend Boomi instead. Third, operational overhead: MuleSoft adds another platform to monitor, patch, and maintain. What would make me change: if the scenario had only 2 external integrations with no transformation complexity, MuleSoft would be unjustified and I would use direct integration.”
Q4.3: Diagram notation and tool choices
Judge: “Why did you choose this specific diagram style? What notation standards are you following?”
What they’re testing: Whether your visual communication is deliberate and standards-based.
Model answer: “I use the Salesforce diagram conventions from the Architect website and Lucidchart’s Salesforce shape library. My color coding is consistent across all diagrams: blue for Salesforce platform components, orange for the integration and middleware layer, gray for external systems, and green for user-facing channels. System status is indicated by border style: solid green border for new systems being implemented, standard gray fill for systems being retained, and dashed red border for systems being retired. Every arrow is labeled with both the data flow direction and the integration protocol. I include a legend on every diagram — a missing legend is a characteristic of failing artifacts. The notation is deliberate: it allows a judge to quickly identify what is new, what is existing, and how data flows without me needing to explain every element verbally.”
Q4.4: Defer vs answer immediately
Judge: “I have asked you about your sharing model, but you are in the middle of explaining your data model. Do you stop what you are doing and answer me, or defer?”
What they’re testing: Real-time prioritization and respect for the judge’s time.
Model answer: “I answer you now. The judge’s question takes priority over my planned presentation flow for two reasons. First, the question may be time-sensitive — the judge may be trying to understand a dependency before they lose the context. Second, deferring creates a debt that I must remember to pay, and if I forget, it counts against me. For the sharing model: I use private OWD for Accounts and Opportunities with criteria-based sharing rules for cross-territory access. I can elaborate further after completing the data model section, or I can go deeper now — which would you prefer? This approach shows respect for the judge’s question while offering to manage the time impact collaboratively.”
Type 5: Cascading — “If You Change X, What Happens to Y?”
Q5.1: Security revision cascading to integration
Judge: “You just acknowledged a flaw in your security model and revised your sharing from public read/write to private. Does your revised sharing model still work with the integration you described 20 minutes ago?”
What they’re testing: Ability to trace cascading impacts across domains in real time.
Model answer: “Changing from public read/write to private OWD has a direct impact on my integration design. My integration user currently runs API calls that assume org-wide visibility to all Account records. With private OWD, the integration user can only see records they own or that are shared to their role. I need to revise the integration in one of two ways: either grant the integration user a role at the top of the hierarchy so they inherit visibility to all records below, or use a system-level API call with a user that has ‘View All Data’ permission — though that permission should be used sparingly. The Platform Event subscribers run in system context, so CDC and Platform Event triggers are unaffected. However, any Apex REST endpoint called by external systems needs CRUD/FLS enforcement, and with private OWD, query results will be filtered by the running user’s access. I would also verify my reporting design — dashboards that aggregate cross-territory data need to run as a user with appropriate access or use analytics snapshots.”
Q5.2: Time pressure forcing architectural trade-offs
Judge: “The project timeline just got cut by 4 weeks. You still have the same scope. What do you sacrifice and what are the consequences?”
What they’re testing: Ability to make and communicate difficult trade-offs under constraint.
Model answer: “With 4 fewer weeks and the same scope, I would make three deliberate trade-offs. First, reduce the migration scope — instead of migrating all 2 million historical records, I migrate only the last 12 months of active records and archive the rest in the legacy system with read-only access via Data Virtualization using Salesforce Connect. The consequence is that historical reporting requires querying External Objects, which are slower and have limited SOQL support. Second, defer the automated regression test suite — ship with manual regression testing for the first release and build the automation suite in the first post-go-live sprint. The consequence is higher risk of regression bugs in the second release. Third, simplify the middleware layer — reduce the number of API-led connectivity layers from three to two by combining System and Process APIs for the less complex integrations. The consequence is reduced reusability for future integrations, which I accept as technical debt to address post-go-live.”
Q5.3: Judge introduces a new constraint mid-presentation
Judge: “I forgot to mention — this company just acquired a European subsidiary with its own Salesforce org. How does this change your architecture?”
What they’re testing: Ability to adapt your architecture to a significant new constraint in real time.
Model answer: “That changes three areas of my architecture. First, org strategy: I now need to decide between merging the European org into my single org or maintaining a multi-org architecture with Salesforce-to-Salesforce integration. Given GDPR data residency requirements for the European subsidiary, I would maintain two orgs with a cross-org integration pattern using MuleSoft to synchronize shared customer data while keeping EU citizen data physically in the European org. Second, identity: I need a federated SSO model where both orgs authenticate against a shared Identity Provider, with the European org potentially requiring additional MFA compliance for EU regulations. Third, reporting: unified cross-org reporting requires either a data warehouse that aggregates from both orgs or CRM Analytics with cross-org data sync. The System Landscape diagram needs revision to show the two-org architecture with the middleware layer handling cross-org data flows. This is a significant architectural change that would normally warrant a dedicated discovery phase.”
Q5.4: Presentation revision under live feedback
Judge: “Based on all the feedback we have given you in the last 15 minutes, if you could redo your opening 5 minutes, what would you change?”
What they’re testing: Self-awareness, ability to incorporate feedback, and meta-cognition about communication effectiveness.
Model answer: “Three changes. First, I would lead with the business problem and critical requirements before showing any diagrams. I spent too long walking through system boxes without first establishing what problem they solve. The opening should be: ‘This company faces three core challenges: [1, 2, 3]. My architecture prioritizes [design philosophy] because [business reason].’ Second, I would show my System Landscape at a higher level — 8 boxes maximum with grouped external systems — and use it as a roadmap for the rest of the presentation, referencing back to it as I go deeper into each area. Third, I would explicitly state my top 3 assumptions upfront so that if a judge disagrees with an assumption, we can address it before I build 40 minutes of architecture on a faulty foundation. The meta-lesson is that communication is not about showing everything I know — it is about helping the judges understand my reasoning as efficiently as possible.”
This is a personal study site for Salesforce CTA exam preparation. Built with AI assistance. Not affiliated with Salesforce.