Agentforce Architecture
Agentforce is Salesforce’s autonomous AI agent platform and the company’s biggest strategic push. It allows organizations to deploy AI agents that reason, plan, and take actions within Salesforce. As the fastest-growing organic product in Salesforce history (18,500+ customers by early 2026), Agentforce is increasingly expected to appear in CTA scenarios. A CTA must understand the architecture deeply — not just what it does, but when to recommend it, when to avoid it, and how to defend that decision before the review board.
Why This Matters for CTA
Agentforce has been added to the Salesforce Admin exam as of December 2025 and is rapidly entering CTA scenario discussions. Review boards now expect candidates to consider AI agents as part of the solution landscape. You do not need to recommend Agentforce in every scenario, but you must demonstrate you evaluated it and can articulate why it does or does not fit.
Atlas Reasoning Engine
The Atlas Reasoning Engine is the core of Agentforce. It implements System 2 reasoning — deliberate, step-by-step analysis rather than fast pattern matching. This distinction is architecturally significant: Atlas does not just generate text — it plans, acts, observes, and reflects in a loop until the goal is met.
The ReAct Reasoning Loop
Atlas uses a Reason-Act-Observe (ReAct) cycle. Salesforce’s engineering team confirmed through extensive experimentation that ReAct-style prompting yields significantly better results than Chain-of-Thought (CoT) alone.
```mermaid
flowchart TD
A["User Query"] --> B["Topic Classification<br/>(intent detection across defined topics)"]
B --> C["RAG: Data Retrieval<br/>(Data Cloud, Knowledge, CRM records)"]
C --> D["Augmented Prompt Assembly<br/>(query + retrieved data + topic instructions + guardrails)"]
D --> E["LLM Reasoning — Plan<br/>(System 2: step-by-step plan generation)"]
E --> F{"Plan sufficient?"}
F -->|"No — needs more data"| C
F -->|"Yes"| G["Action Selection<br/>(choose Flow, Apex, API, MuleSoft, or Prompt)"]
G --> H["Action Execution<br/>(deterministic execution within Salesforce)"]
H --> I["Observe Results<br/>(evaluate action output)"]
I --> J{"Goal achieved?"}
J -->|"No — refine plan"| E
J -->|"Yes"| K["Self-Reflection & Validation<br/>(check against guardrails + toxicity)"]
K --> L{"Passes all<br/>guardrails?"}
L -->|"No"| M["Escalate to Human Agent"]
L -->|"Yes"| N["Response to User"]
N --> O["Audit Trail Logged<br/>(full interaction recorded)"]
style B fill:#264653,stroke:#1d3640,color:#fff
style C fill:#2a9d8f,stroke:#21867a,color:#fff
style E fill:#1a535c,stroke:#0d3b44,color:#fff
style H fill:#2d6a4f,stroke:#1b4332,color:#fff
style K fill:#e76f51,stroke:#c45a3f,color:#fff
style M fill:#9d0208,stroke:#6a040f,color:#fff
```
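In code terms, the loop reduces to a plan/act/observe cycle with a reflection gate before the response goes out. The sketch below is illustrative only; every name in it (`retrieve`, `plan`, `execute`, `passes_guardrails`) is a hypothetical stand-in, not an Atlas API.

```python
# Illustrative sketch of a Reason-Act-Observe (ReAct) loop.
# Not Salesforce's implementation; all function names are hypothetical.

def react_loop(query, retrieve, plan, execute, passes_guardrails, max_turns=5):
    """Plan, act, observe, and reflect until the goal is met or we escalate."""
    context = retrieve(query)                  # RAG: pull grounding data
    for _ in range(max_turns):
        step = plan(query, context)            # System 2: step-by-step plan
        if step.get("needs_more_data"):        # plan insufficient -> retrieve again
            context = retrieve(step["follow_up_query"])
            continue
        result = execute(step["action"])       # deterministic action execution
        if result.get("goal_achieved"):        # observe: did the action meet the goal?
            response = result["response"]
            # self-reflection: validate against guardrails before replying
            return response if passes_guardrails(response) else "ESCALATE"
    return "ESCALATE"                          # loop budget exhausted -> human handoff
```

Note the two exits to "ESCALATE": a guardrail failure and an exhausted loop budget both route to a human, mirroring the diagram's escalation path.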
Atlas Architecture Internals
| Component | Role | Architectural Significance |
|---|---|---|
| Planner | Translates user goals into step-wise plans using the LLM | Enables multi-step reasoning, not single-shot responses |
| Action Selector | Determines which tool/action to invoke based on the plan | Bridges AI reasoning with deterministic Salesforce execution |
| Tool Execution Engine | Dynamically invokes actions (Flows, Apex, APIs) | Actions execute within Salesforce governor limits and security model |
| Memory Module | Maintains conversation history and context | Enables multi-turn conversations with context retention |
| Reflection Module | Evaluates results and retries/optimizes if needed | Self-correction reduces hallucination and improves accuracy |
Async, Event-Driven Design
Atlas uses an asynchronous, event-driven architecture internally. It adopts a publish-subscribe pattern that decouples component nodes. Agent roles, behaviors, and states are defined declaratively via YAML configuration, allowing engineers to define agents without extensive code.
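The decoupling this enables can be illustrated with a minimal publish-subscribe bus. This is a generic pattern sketch, not Atlas internals:

```python
# Minimal publish-subscribe sketch illustrating decoupled component nodes.
# Generic pattern illustration only; not the Atlas implementation.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, payload):
        for handler in self._subscribers[topic]:
            handler(payload)

# Components react to events instead of calling each other directly,
# so a planner node never holds a reference to an executor node.
bus = EventBus()
log = []
bus.subscribe("plan.ready", lambda p: log.append(f"executing {p['action']}"))
bus.publish("plan.ready", {"action": "Get Order Status"})
```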
Agent Component Model
Every Agentforce agent is composed of four core building blocks: Topics, Actions, Instructions, and Guardrails. Understanding their relationships is essential for designing effective agents.
Topics
Topics are the foundation — they define what an agent can do and scope its behavior to specific business domains. The Atlas reasoning engine detects user intent and routes to the matching topic.
| Topic Element | Purpose | Example |
|---|---|---|
| Classification Description | Tells the engine when to activate this topic | “Use when customer asks about order status or shipping” |
| Scope | Defines what the agent can do within this topic | “Can look up orders, check tracking, initiate returns” |
| Instructions | Detailed behavioral guidance and business rules | “Always verify customer identity before sharing order details” |
| Actions | The specific tasks the agent can execute | Flow: Get Order Status, Apex: Calculate Refund |
Topic Design Best Practice
Salesforce recommends no more than 15 actions per topic. Too many actions create ambiguity for the reasoning engine and degrade response quality. Design narrow, focused topics rather than broad catch-all topics.
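The topic elements in the table above map naturally onto a small data structure. A hypothetical sketch (not the Agent Builder schema) that also enforces the 15-action guideline:

```python
# Sketch of a topic definition enforcing the <=15 actions guideline.
# Field names mirror the table above; this is not the Agent Builder schema.
from dataclasses import dataclass, field

MAX_ACTIONS_PER_TOPIC = 15  # Salesforce's recommended ceiling

@dataclass
class Topic:
    classification_description: str   # when to activate this topic
    scope: str                        # what the agent may do here
    instructions: list = field(default_factory=list)
    actions: list = field(default_factory=list)

    def validate(self):
        if len(self.actions) > MAX_ACTIONS_PER_TOPIC:
            raise ValueError(
                f"{len(self.actions)} actions exceeds the recommended "
                f"{MAX_ACTIONS_PER_TOPIC}; split into narrower topics"
            )
        return True

orders = Topic(
    classification_description="Use when customer asks about order status or shipping",
    scope="Can look up orders, check tracking, initiate returns",
    instructions=["Always verify customer identity before sharing order details"],
    actions=["Flow: Get Order Status", "Apex: Calculate Refund"],
)
```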
Action Types
Actions are the executable tasks agents perform. They bridge AI reasoning with deterministic Salesforce automation.
| Action Type | When to Use | Strengths | Limitations |
|---|---|---|---|
| Flow Actions | Declarative business logic, admin-maintained processes | Low-code, reuses existing Flows, fast to deploy | Limited to Flow capabilities |
| Apex Actions | Complex calculations, custom logic, bulk operations | Full programmatic control, deterministic results | Requires developer, code maintenance |
| MuleSoft API Actions | Cross-system integration, external data retrieval | Connects to any system via APIs and connectors | Requires MuleSoft licensing and expertise |
| Prompt Template Actions | Personalized text generation, summaries, recommendations | AI-generated content grounded in CRM data | Non-deterministic output |
| External Service Actions | Simple REST API calls with OpenAPI spec | No-code API integration from within Flows | Limited to APIs with OpenAPI specs |
Guardrails
Guardrails constrain agent behavior and enforce safety boundaries. They operate at multiple levels.
| Guardrail Type | What It Controls | Example |
|---|---|---|
| Topic-level instructions | Scope boundaries per domain | “Never discuss competitor products” |
| Ethical guardrails | Prevent harmful/biased outputs | Toxicity detection, bias filtering |
| Security guardrails | Prevent prompt injection and data leakage | Input sanitization, PII masking |
| Escalation rules | Define when to hand off to humans | “Escalate if customer sentiment is negative after 3 turns” |
| Action constraints | Limit what actions can be triggered | “Cannot issue refunds over $500 without human approval” |
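An action constraint such as the refund rule can be thought of as a pre-execution check. A hypothetical sketch (the platform expresses this declaratively, not in code):

```python
# Sketch of an action-constraint guardrail: refunds over a threshold
# require human approval. Hypothetical check, not a platform API.
REFUND_APPROVAL_THRESHOLD = 500  # USD, per the example constraint

def check_refund_constraint(amount, human_approved=False):
    """Return the next step for a requested refund action."""
    if amount <= REFUND_APPROVAL_THRESHOLD:
        return "execute"                 # within the agent's own authority
    if human_approved:
        return "execute"                 # large refund, approval on file
    return "escalate_for_approval"       # block the action and route to a human
```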
Einstein Trust Layer
The Trust Layer is the enterprise-grade security architecture that makes Agentforce production-ready. It is not optional — every Agentforce interaction passes through it.
```mermaid
flowchart LR
A["User Prompt"] --> B["PII/PCI/PHI Masking<br/>(named entity detection)"]
B --> C["Secure Data Retrieval<br/>(RAG via Data Cloud)"]
C --> D["Dynamic Grounding<br/>(augment with CRM data)"]
D --> E["LLM Processing<br/>(zero-retention gateway)"]
E --> F["Toxicity Detection<br/>(scan response for harmful content)"]
F --> G["De-masking<br/>(restore masked entities)"]
G --> H["Audit Trail Logged<br/>(full interaction metadata stored)"]
H --> I["Response to User"]
style B fill:#e76f51,stroke:#c45a3f,color:#fff
style E fill:#264653,stroke:#1d3640,color:#fff
style F fill:#e76f51,stroke:#c45a3f,color:#fff
style H fill:#f4a261,stroke:#d4823e,color:#000
```
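The mask/de-mask round trip can be illustrated with a toy example. The real Trust Layer uses named entity detection across PII, PCI, and PHI; the regex below is only a stand-in to show the placeholder-substitution pattern:

```python
# Toy sketch of the mask -> LLM -> de-mask round trip.
# The real Trust Layer uses named entity detection; this regex stand-in
# only illustrates the placeholder-substitution pattern.
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def mask(text):
    """Replace each email with a placeholder; return masked text + restore map."""
    restore = {}
    def _sub(match):
        token = f"<PII_{len(restore)}>"
        restore[token] = match.group(0)
        return token
    return EMAIL_RE.sub(_sub, text), restore

def demask(text, restore):
    for token, original in restore.items():
        text = text.replace(token, original)
    return text

masked, restore = mask("Contact jane@example.com about the case")
# The LLM only ever sees the masked text:
llm_reply = f"I will email {list(restore)[0]} today"
final = demask(llm_reply, restore)
```

The key property is that the sensitive value never crosses the LLM boundary; only the placeholder does, and it is restored after toxicity scanning.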
| Trust Layer Feature | What It Does | CTA Significance |
|---|---|---|
| PII Masking | Substitutes sensitive data with placeholders before LLM processing | Meets GDPR/CCPA/HIPAA requirements for data minimization |
| Zero Data Retention | LLM providers (OpenAI, etc.) never store prompt or response data | Contractual guarantee — data is wiped after response generation |
| Toxicity Detection | Scans both input prompts and AI responses for harmful content | Prevents brand-damaging or offensive agent behavior |
| Dynamic Grounding | Augments prompts with real Salesforce data to reduce hallucination | Accuracy depends on data quality in Data Cloud |
| Audit Trail | Logs every interaction including original prompt, masked prompt, toxicity scores | Stored in Data 360 for compliance and analytics |
| Prompt Defense | Detects and blocks prompt injection attacks | Prevents users from manipulating the agent into unauthorized actions |
CTA Board Question: “How does the Trust Layer protect customer data?”
Model answer: “Every prompt passes through the Einstein Trust Layer before reaching the LLM. PII, PCI, and PHI data is masked using named entity detection. The LLM operates under a zero-retention agreement — no customer data is stored or used for model training. Responses are scanned for toxicity before delivery. Every interaction is logged in an audit trail in Data 360, providing the compliance team full visibility. This architecture means the customer’s data never leaves Salesforce’s security perimeter in an unprotected form.”
Data Cloud Integration and RAG
Agentforce’s intelligence depends on Data Cloud for retrieval-augmented generation (RAG). Without Data Cloud, agents lack the unified cross-system customer context that makes them most effective.
How RAG Works in Agentforce
- Retrieval — The system searches Data Cloud’s unified data graph (structured CRM data + unstructured content processed by Data 360) to find relevant context for the user’s query
- Augmentation — Retrieved data is injected into the LLM prompt alongside the user query, topic instructions, and guardrails
- Generation — The enriched prompt is sent to the LLM, producing a response grounded in real customer data rather than generic training data
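The three steps compose straightforwardly. A sketch with injected stand-ins for retrieval and the LLM (hypothetical names; real retrieval goes through Data Cloud):

```python
# Sketch of retrieval-augmented generation in three steps.
# `retrieve` and `llm` are hypothetical stand-ins, injected so the
# sketch stays self-contained and testable.

def rag_respond(query, retrieve, llm, instructions, guardrails):
    # 1. Retrieval: find relevant context in the unified data graph
    context = retrieve(query)
    # 2. Augmentation: assemble the enriched prompt
    prompt = "\n".join([
        f"Instructions: {instructions}",
        f"Guardrails: {guardrails}",
        f"Context: {context}",
        f"User query: {query}",
    ])
    # 3. Generation: the LLM answers grounded in real data, not training data
    return llm(prompt)
```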
Data Sources Available to Agents
| Data Source | Access Method | Use Case |
|---|---|---|
| CRM Records | Direct Salesforce queries | Account, Contact, Case, Opportunity data |
| Knowledge Articles | Knowledge base search | FAQ resolution, product information |
| Data Cloud Unified Profile | RAG via Data 360 | Cross-system customer context |
| Unstructured Content | Vector search via Data 360 | PDFs, emails, chat transcripts, documents |
| External Systems | MuleSoft API actions | ERP data, inventory, external databases |
Data Cloud Dependency
Data Cloud is strongly recommended for advanced RAG use cases, but basic agents can function with direct CRM queries and Knowledge articles without Data Cloud. However, without Data Cloud, agents lack the unified cross-system context needed for sophisticated RAG grounding. In a CTA scenario, if the customer requires complex data unification or unstructured content search, you must factor the Data Cloud implementation into your timeline, budget, and architecture.
Agent Types Comparison
| Agent Type | Primary Use Case | Key Actions | When to Recommend |
|---|---|---|---|
| Service Agent | Case deflection, customer self-service | Knowledge search, case creation, order lookup, escalation | High-volume service orgs needing 24/7 coverage |
| Sales Agent (SDR) | Lead qualification, outreach, pipeline generation | Lead research, email drafting, meeting scheduling | Large lead volumes with limited SDR headcount |
| Marketing Agent | Campaign execution, segmentation, personalization | Audience building, content generation, journey optimization | Marketing teams wanting AI-driven campaign execution |
| Commerce Agent | Product discovery, shopping assistance | Intent-aware search, product recommendations, cart assistance | B2C commerce sites needing guided shopping |
| Contact Center Agent | Unified voice + digital channel handling | Omnichannel routing, real-time transcription, agent assist | Multi-channel contact centers (GA, rolling out progressively since early 2026) |
| Industry Agents | Vertical-specific workflows | Varies by industry (claims, quoting, care coordination) | Financial Services, Automotive, Manufacturing orgs |
| Custom Agent | Any business-specific workflow | User-defined topics and actions | Unique processes not covered by pre-built agents |
Agent Builder Configuration
Agent Builder is the low-code tool for building and configuring Agentforce agents. The new agent building experience became generally available in February 2026.
Configuration Workflow
- Select Agent Type — Choose a pre-built agent or create from scratch. The agent type determines available topics and actions
- Define Topics — Create topics with classification descriptions, scope, and instructions. Each topic represents a domain the agent handles
- Assign Actions — Map Flow, Apex, MuleSoft, or Prompt Template actions to each topic (max 15 actions per topic recommended)
- Set Guardrails — Define escalation rules, action constraints, and behavioral boundaries
- Test in Conversation Preview — Simulate conversations and inspect the plan canvas to see which topics, actions, and reasoning the agent used
- Deploy — Activate on channels (web chat, WhatsApp, Slack, Experience Cloud, voice)
Natural Language Agent Creation
Agentforce 2.0+ supports interpreting natural language instructions like “Onboard New Product Managers” to auto-generate agents. These agents combine pre-built skills with custom logic, but always review and refine the generated configuration before production deployment.
Licensing and Cost Model
Agentforce pricing has evolved significantly since launch. A CTA must understand the cost model to evaluate ROI.
| Pricing Model | How It Works | Best For | Cost |
|---|---|---|---|
| Conversations | Flat fee per completed conversation | External-facing customer agents with predictable volume | ~$2/conversation |
| Flex Credits | Consumption-based; each action = 20 credits | Any Agentforce use case; pay for what you use | $0.10/action (packs of 100K credits for $500) |
| Per-User Licensing | Monthly fee per user for AI capabilities | Agentforce add-ons ($125/user/mo) or Editions ($550+/user/mo) | $125-$550+/user/month (approximate — verify current pricing at salesforce.com/agentforce as pricing evolves rapidly) |
Pricing Constraints
Credits and Conversations cannot be combined in the same org — each org must choose one pricing model. Factor this into multi-use-case architectures. Also budget for Data Cloud licensing as a prerequisite.
CTA Board Question: “What’s the cost model and ROI justification?”
Model answer: “Agentforce uses consumption-based pricing. For the service use case, at $2 per conversation, if we deflect 10,000 cases per month that currently cost $15 each in agent time, the monthly cost is $20,000 against $150,000 in savings — a 7.5x ROI. However, we must also factor Data Cloud licensing, implementation costs, and the ongoing tuning effort. I would recommend a 90-day pilot with 2-3 topics to validate deflection rates before committing to full rollout. Real-world results show 40-70% case deflection rates for well-configured agents.”
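The arithmetic in that model answer is easy to sanity-check. The figures below come from the answer itself and are illustrative, not guaranteed pricing:

```python
# Sanity check of the deflection ROI arithmetic in the model answer above.
# Figures are illustrative; verify current Agentforce pricing before quoting.

def deflection_roi(deflected_cases, cost_per_conversation, cost_per_human_case):
    """Return (monthly agent cost, monthly human cost avoided, ROI multiple)."""
    agent_cost = deflected_cases * cost_per_conversation
    avoided_cost = deflected_cases * cost_per_human_case
    return agent_cost, avoided_cost, avoided_cost / agent_cost

# 10,000 deflected cases at $2/conversation vs $15/case in agent time
agent_cost, avoided, roi = deflection_roi(10_000, 2, 15)
```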
Agentforce vs Einstein Bots vs Flow
This is a critical decision matrix for CTA scenarios. These are complementary tools, not competitors.
| Factor | Agentforce | Einstein Bots | Flow Automation |
|---|---|---|---|
| Intelligence | LLM-powered reasoning, multi-step planning | Rule-based NLU, keyword matching | Deterministic logic, no NLU |
| Conversations | Natural language, context-aware | Scripted dialog trees | No conversational interface |
| Autonomy | Autonomous multi-step execution | Follows predefined paths | Executes predefined logic |
| Data access | RAG via Data Cloud + CRM | Knowledge articles, CRM queries | Direct CRM queries |
| Complexity handled | High (ambiguous, multi-step) | Low-Medium (structured queries) | Medium (rule-based branching) |
| Setup effort | Medium (topics, actions, testing) | Medium (intents, dialogs) | Low (declarative builder) |
| Cost | $2/conversation or flex credits | Included in Service Cloud | Included in platform |
| Maintenance | Ongoing tuning, monitoring | Intent training updates | Standard Flow maintenance |
| Best for | Complex customer interactions needing reasoning | High-volume simple FAQ deflection | Internal automation, no user dialog |
Decision Flowchart: When to Use What
```mermaid
flowchart TD
Start(["Customer Interaction<br/>Automation Need"]) --> Q1{"Requires natural<br/>language understanding?"}
Q1 -->|"No"| R1["Flow/Apex Automation<br/>(deterministic, no conversation)"]
Q1 -->|"Yes"| Q2{"Multi-step reasoning<br/>or ambiguous queries?"}
Q2 -->|"No"| Q3{"High volume,<br/>simple FAQ-style?"}
Q3 -->|"Yes"| R2["Einstein Bots<br/>(cost-effective for simple deflection)"]
Q3 -->|"No"| R3["Agentforce — Single Topic<br/>(focused AI agent)"]
Q2 -->|"Yes"| Q4{"Data Cloud available<br/>or justified?"}
Q4 -->|"No"| R4["Flow + Apex + Custom LWC<br/>(avoid AI dependency)"]
Q4 -->|"Yes"| Q5{"Existing Flows/Apex<br/>reusable as actions?"}
Q5 -->|"Yes"| R5["Agentforce with<br/>Existing Actions"]
Q5 -->|"No"| R6["Phased: Build Actions First<br/>then Add Agentforce"]
style R1 fill:#2d6a4f,stroke:#1b4332,color:#fff
style R2 fill:#4ecdc4,stroke:#3ab5ad,color:#000
style R3 fill:#1a535c,stroke:#0d3b44,color:#fff
style R4 fill:#f4a261,stroke:#d4823e,color:#000
style R5 fill:#1a535c,stroke:#0d3b44,color:#fff
style R6 fill:#f4a261,stroke:#d4823e,color:#000
```
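The same branching can be written as a decision function, which is handy when talking through a scenario aloud. A sketch of the diagram's logic, not official guidance:

```python
# The decision flowchart expressed as nested conditionals.
# Same branching as the diagram; each argument answers one question node.

def choose_automation(needs_nlu, multi_step_reasoning, high_volume_faq,
                      data_cloud_justified, reusable_actions):
    if not needs_nlu:
        return "Flow/Apex Automation"
    if not multi_step_reasoning:
        return "Einstein Bots" if high_volume_faq else "Agentforce (single topic)"
    if not data_cloud_justified:
        return "Flow + Apex + Custom LWC"
    if reusable_actions:
        return "Agentforce with existing actions"
    return "Phased: build actions first, then add Agentforce"
```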
Security and Sharing Model
Agentforce respects the Salesforce security model. Agents do not bypass permissions; they operate strictly within whatever access the running user already has.
| Security Aspect | How It Works | CTA Implication |
|---|---|---|
| User context | Agent inherits the permissions of the Salesforce user it runs as | Each Agent User should be unique and follow least-privilege |
| Sharing model | Agent respects role hierarchy, sharing rules, OWD settings | Data visibility matches what that user would see in the UI |
| Field-level security | Agent can only read/write fields the user has access to | FLS misconfigurations can cause silent agent failures |
| CRUD permissions | Agent can only perform CRUD operations the user is authorized for | Permission sets must be carefully scoped for agent users |
| Attribute-based policies | Fine-grained rules beyond RBAC (e.g., “Only EU sales agents can trigger pricing workflows”) | Supports complex regulatory requirements |
CTA Board Question: “How does the agent respect data access controls?”
Model answer: “Agentforce agents inherit the permissions and sharing rules of the Salesforce user they run as. They do not bypass the security model — if the user cannot see a record, the agent cannot see it either. We create a dedicated Agent User with a permission set following least-privilege principles. The agent can only access objects and fields explicitly granted. For external-facing agents, we use a guest user profile with the minimum permissions needed. This aligns with Salesforce’s shared responsibility model.”
Testing and Monitoring
Agent quality requires systematic testing and production monitoring. Salesforce provides several tools.
| Tool | Purpose | When to Use |
|---|---|---|
| Conversation Preview | Manual testing in Agent Builder; inspect plan canvas and reasoning | During development and topic configuration |
| Testing Center | Batch testing; simulate hundreds of interactions in one run | Pre-deployment validation, regression testing |
| Enhanced Event Logs | Capture full interaction details for debugging | Troubleshooting specific conversation failures |
| Agent Health Monitoring | Real-time dashboards tracking error rate and latency | Production monitoring with 5-minute interval metrics |
| Agentforce Command Centre | Unified dashboard for all production agents | Enterprise-scale agent fleet management |
| Utterance Analysis | Analyze conversation patterns and unhandled intents | Ongoing optimization and topic gap identification |
Key Metrics to Track
| Metric | What It Measures | Target |
|---|---|---|
| Deflection rate | % of conversations resolved without human handoff | 40-70% for well-configured agents |
| Error rate | % of agent responses that fail | <5% |
| Average interaction latency | Time from request to response | <3 seconds |
| Escalation rate | % of conversations escalated to human | Track trend — should decrease over time |
| Customer satisfaction | Post-conversation CSAT score | At or above human agent baseline |
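The first three metrics fall out directly from conversation outcomes. A sketch using a hypothetical outcome-log shape (not the Enhanced Event Log schema):

```python
# Sketch: computing deflection, escalation, and error rates from outcomes.
# The outcome labels here are hypothetical, not the Enhanced Event Log schema.

def conversation_metrics(outcomes):
    """outcomes: list of 'resolved', 'escalated', or 'error' labels."""
    total = len(outcomes)
    return {
        "deflection_rate": outcomes.count("resolved") / total,
        "escalation_rate": outcomes.count("escalated") / total,
        "error_rate": outcomes.count("error") / total,
    }

m = conversation_metrics(["resolved"] * 6 + ["escalated"] * 3 + ["error"])
```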
CTA Board Question: “How do you test and monitor agent quality?”
Model answer: “We test at three levels. First, manual testing in Conversation Preview during development to validate topic routing and action execution. Second, batch testing in Testing Center before deployment to simulate hundreds of scenarios and catch regressions. Third, production monitoring via Agent Health Monitoring dashboards tracking error rate, latency, and deflection rate at 5-minute intervals. We also use Utterance Analysis to identify unhandled intents and feed those back into topic design. The Command Centre gives leadership a unified view across all agents.”
CTA Scenario Use Cases
Scenario 1: Service Deflection
Situation: A telecom company handles 50,000 cases/month. 60% are routine (billing inquiries, plan changes, outage status). Average cost per agent-handled case is $12.
Recommendation: Deploy Agentforce Service Agent with three topics: Billing Inquiries, Plan Management, and Outage Status. Expose existing Flows (Get Bill Summary, Change Plan, Check Outage Map) as agent actions.
Expected outcome: 40-50% deflection rate = 20,000-25,000 fewer agent-handled cases/month. At $2/conversation vs $12/case, that is $240K-$300K in avoided handling costs against $40K-$50K in conversation fees, for net monthly savings of roughly $200K-$250K.
Why not Flow alone? Customers describe problems in natural language (“my bill is too high” vs “I want to see my last 3 invoices”). The agent must interpret intent, not just follow a decision tree.
Scenario 2: Sales Lead Qualification
Situation: A B2B SaaS company receives 5,000 inbound leads/month. Their 10-person SDR team can only touch 2,000. The rest go cold.
Recommendation: Deploy Agentforce SDR Agent to research and qualify all inbound leads using Data Cloud unified profiles. Agent drafts personalized outreach and schedules meetings for qualified leads.
Expected outcome: Based on Salesforce’s internal deployment, SDR agents generated $1.7M in new pipeline from dormant leads in one year. The agent handles initial qualification; human SDRs focus on high-value conversations.
Scenario 3: Multi-Channel Commerce
Situation: A retailer needs consistent customer experience across web chat, WhatsApp, and mobile app. Current support is fragmented across channels.
Recommendation: Deploy Agentforce Contact Center (GA, rolling out progressively since early 2026) with unified voice + digital channel handling. Agent maintains conversation context across channels and hands off to human agents with full history.
Trade-off: This is a newer capability (early maturity). Mitigate with phased rollout: start with web chat, add WhatsApp after 60 days, add voice after 120 days.
Scenario 4: When NOT to Recommend Agentforce
Situation: A small manufacturing company wants to automate their quote approval process. It follows rigid business rules: orders under $10K are auto-approved, $10K-$50K need manager approval, over $50K need VP approval.
Recommendation: Standard Approval Process + Flow. This is deterministic logic with no ambiguity or natural language component. Agentforce would add unnecessary cost and complexity.
Why not Agentforce? No NLU needed, no customer conversation, purely rule-based. A Flow handles this in hours; Agentforce would take weeks and cost per-conversation/credit fees for zero added value.
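The scenario's rules underline the point: the entire routing is a few lines of deterministic logic. Sketched below for illustration; in the org itself this belongs in a standard Approval Process:

```python
# The scenario's quote-approval rules as deterministic logic.
# In the actual org this is a standard Approval Process + Flow; the
# sketch just shows there is nothing for an AI agent to reason about.

def approval_route(order_amount):
    if order_amount < 10_000:
        return "auto-approved"
    if order_amount <= 50_000:
        return "manager approval"
    return "VP approval"
```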
Multi-Agent Orchestration
Agentforce supports multi-agent architectures where specialized agents collaborate to solve complex problems.
| Concept | How It Works |
|---|---|
| Primary Agent | Single point of contact for the user; routes tasks to specialists |
| Specialist Agents | Focused agents handling specific domains (billing, shipping, returns) |
| Agent2Agent (A2A) | Open protocol for connecting Agentforce to third-party AI agents |
| Task Routing | Atlas reviews each agent’s description and capabilities to route intelligently |
| Context Sharing | Conversation context is maintained across agent handoffs |
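The primary/specialist pattern can be sketched as description-based dispatch. Atlas routes via LLM reasoning over each agent's description; the keyword matching below is only a stand-in for the dispatch shape:

```python
# Sketch of primary -> specialist routing with shared context.
# Atlas routes via LLM reasoning over agent descriptions; this keyword
# stand-in only illustrates the dispatch-and-handoff pattern.

SPECIALISTS = {
    "billing": "Billing Agent",
    "shipping": "Shipping Agent",
    "return": "Returns Agent",
}

def route(user_message, context):
    """Primary agent: pick a specialist and pass the shared context along."""
    lowered = user_message.lower()
    for keyword, agent in SPECIALISTS.items():
        if keyword in lowered:
            return agent, context      # context survives the handoff
    return "Primary Agent", context    # no specialist matched; handle directly
```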
Multi-Agent Complexity
Multi-agent orchestration is powerful but adds architectural complexity. For CTA scenarios, start with a single well-configured agent before recommending multi-agent patterns. The review board will question why you need multiple agents if a single agent with multiple topics would suffice.
Gotchas and Trade-Offs
Critical Gotchas
- Data Cloud is strongly recommended — Without it, advanced RAG grounding is severely limited (basic agents can still use CRM queries and Knowledge). Budget for Data Cloud if unified cross-system context is needed.
- Credit consumption adds up — At $0.10/action, an agent executing 5 actions per conversation costs $0.50 plus the conversation fee. Model costs carefully.
- Hallucination risk remains — Grounding reduces but does not eliminate hallucination. Always include human escalation paths.
- Over-automation danger — Not every process needs an AI agent. Deterministic logic in Flows is faster, cheaper, and more predictable.
- Testing is non-trivial — AI agents are non-deterministic. The same input may produce different outputs. Testing requires statistical validation, not binary pass/fail.
- Pricing model lock-in — Conversations and Flex Credits cannot be combined in one org. Choose carefully based on use case mix.
- Agent user permissions — Misconfigured agent user permissions are a common cause of silent agent failures. Apply least-privilege rigorously.
AI vs Deterministic Automation Trade-Offs
| Factor | AI Agent (Agentforce) | Deterministic (Flow/Apex) |
|---|---|---|
| Flexibility | Handles ambiguous input | Requires structured input |
| Predictability | Non-deterministic — same input may vary | Deterministic — same input, same output |
| Cost | Per-conversation/action pricing | Included in platform licensing |
| Maintenance | Ongoing tuning, topic refinement | Standard CI/CD, version control |
| Testing | Statistical validation required | Binary pass/fail testing |
| Speed to build | Days for basic agent; weeks for tuning | Hours for basic Flow; days for complex Apex |
| Scalability | Scales with conversation volume (and cost) | Scales with platform limits (free) |
| User experience | Natural language, conversational | Structured forms, guided processes |
Related Topics
- Modern Platform Features — Broader AI/Einstein features and platform evolution
- Declarative vs Programmatic — When Flow/Apex is the better choice over AI
- Decision Guides — Agentforce vs traditional automation decision flowchart
- Build vs Buy — Evaluating Agentforce against custom AI solutions
- Trade-Offs — Deeper trade-off analysis for all technology choices
Sources
- Salesforce Agentforce Platform
- Inside the Brain of Agentforce — Atlas Reasoning Engine — Salesforce Engineering
- How the Atlas Reasoning Engine Powers Agentforce — Salesforce
- Agentforce Guide: How To Get Started — Salesforce
- Agentforce Pricing — Salesforce
- Einstein Trust Layer — Salesforce
- Agentforce Observability and Monitoring — Salesforce
- Best Practices for Secure Agentforce Implementation — Salesforce Blog
- Agent Builder Basics — Trailhead
- Agentforce Testing Tools and Strategies — Trailhead
- Agentforce Guardrails and Trust Patterns — Trailhead
- Data Cloud Powered Agentforce — Trailhead
- Agentforce Customer Success Stories — Salesforce
- Agentic Patterns and Implementation — Salesforce Architects
- Agentforce Actions Guide (2026) — Composio
- Connecting Agentforce to Data Cloud for Grounding With RAG — Salesforce Ben
- Best Practices for Building Agentforce Apex Actions — Salesforce Developers Blog
- Agentforce vs Einstein Bots Comparison — Salesforce Help
- Best Practices for Agent User Permissions — Salesforce Help