
Key Takeaways (TL;DR)
- What is it? A default-enabled Salesforce setting that allows Salesforce to use your org's customer data for global AI model training, research, and development
- Where to find it: Setup → Einstein → Opt Out of Customer Data Access
- Impact: Your organization's data may be contributing to Salesforce's global AI model training right now — without your explicit knowledge
- Action required: Review the setting and make an intentional, documented decision with your legal and security stakeholders
- Time to review: Under 5 minutes to locate and change the setting
- Bottom line: Opting out does NOT break Einstein, Agentforce, Copilot, or any AI features in your org — it only stops Salesforce from using your data for their global model improvements
A recent conversation that went viral among Salesforce professionals has put a spotlight on something most admins didn't know existed: a default-enabled setting buried in Salesforce Setup that grants Salesforce access to your organization's customer data — not for your org's benefit, but for Salesforce's own AI training, research, and development purposes.
The reaction was swift. Thousands of Salesforce admins, architects, and security professionals shared the discussion, many expressing shock that the setting existed at all — and that it was turned on by default.
But before you panic, let's be clear: this is not a data breach, and this is not a scandal. This is a configuration choice that every Salesforce customer has the right to make — once they know it exists. The problem isn't that the option is available. The problem is that most organizations have never been asked to make a conscious decision about it.
In this guide, we'll walk you through exactly what this setting does, how to find it, what the Einstein Trust Layer actually protects (and what it doesn't), and a decision framework for whether your organization should opt out.
What Is the "Opt Out of Customer Data Access" Setting?
The Opt Out of Customer Data Access setting is a Salesforce configuration option that controls whether Salesforce can use your organization's customer data for purposes beyond running your org's own Einstein AI features.
When you have not opted out (the default state), Salesforce may use your customer data on an aggregate basis to:
- Train global predictive AI models that power Einstein features across all Salesforce customers
- Improve Salesforce services and features through data analysis and benchmarking
- Conduct research and development for new services and capabilities
When you opt out, Salesforce employees can no longer view your customer data outside of a support case, a pilot, or as otherwise described in your legal agreements. Your data stops contributing to Salesforce's broader AI training pipeline.
What This Setting Is NOT
This setting is not the Einstein Trust Layer. It is not the toggle that enables or disables Einstein AI features in your org. It is a separate, independent control that governs Salesforce's downstream use of your data — distinct from how Einstein AI operates within your instance.
How to Find and Review the Setting (Step-by-Step)
Follow these exact steps to locate the Opt Out of Customer Data Access setting in your Salesforce org:
Step 1: Navigate to Setup
Click the gear icon in the upper-right corner of your Salesforce instance and select Setup.
Step 2: Search for Einstein Settings
In the Quick Find box on the left sidebar, type "Einstein" or "Opt Out".
Step 3: Open "Opt Out of Customer Data Access"
Click on Opt Out of Customer Data Access under the Einstein section in Setup. The exact path is:
Setup → Einstein → Opt Out of Customer Data Access
Step 4: Review the Current State
You'll see a toggle labeled "Allow Salesforce Access to Customer Data" (or similar phrasing). If it is toggled ON, Salesforce can currently use your customer data for AI training and research.
Step 5: Make Your Decision
If you choose to opt out, toggle the setting to disable Salesforce access. If you choose to keep it enabled, document that decision with your stakeholders.
Step 6: Document Your Decision
Regardless of your choice, document the following:
- Date of review
- Decision made (opted out or kept enabled)
- Stakeholders involved (legal, security, compliance, IT leadership)
- Reasoning for the decision
- Next review date (we recommend quarterly)
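The documentation checklist above can be captured in a simple machine-readable record. The sketch below is one illustrative format — the field names and helper are our own invention, not a Salesforce or regulatory standard — but keeping the record structured makes quarterly reviews and audits easier.

```python
from datetime import date, timedelta
import json

def build_decision_record(decision, stakeholders, reasoning,
                          review_interval_days=90):
    """Assemble an opt-out decision record for compliance files.

    Field names are illustrative, not an official schema. The default
    review interval of 90 days matches the quarterly cadence recommended
    in this guide.
    """
    reviewed = date.today()
    return {
        "setting": "Opt Out of Customer Data Access",
        "date_of_review": reviewed.isoformat(),
        "decision": decision,  # "opted_out" or "kept_enabled"
        "stakeholders": stakeholders,
        "reasoning": reasoning,
        "next_review_date": (reviewed
                             + timedelta(days=review_interval_days)).isoformat(),
    }

record = build_decision_record(
    decision="opted_out",
    stakeholders=["Legal", "Security", "IT Leadership"],
    reasoning="Org handles regulated financial data; secondary use not approved.",
)
print(json.dumps(record, indent=2))
```

Storing these records in version control (or your GRC tool) gives you a timestamped trail showing the decision was intentional, not accidental.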
Einstein Trust Layer vs. Opt Out of Customer Data Access: What's the Difference?
This is where confusion runs rampant in the Salesforce community. These are two completely separate controls that serve different purposes. Here's a clear comparison:
| Feature | Einstein Trust Layer | Opt Out of Customer Data Access |
|---|---|---|
| What it does | Protects your data during real-time AI operations (inference, predictions, generative AI prompts) | Controls whether Salesforce uses your data for global AI model training and R&D |
| When it applies | Every time Einstein AI processes a request in your org | Whenever Salesforce would use your data for its own training, research, or development |
| PII masking | Yes — automatically masks sensitive data before sending to LLMs | Not applicable — this controls data usage policy, not real-time processing |
| Zero-data retention | Yes — LLM partners don't store your data | Not applicable — this is about Salesforce's own internal data usage |
| Can you disable it? | No — it's mandatory and always on | Yes — you can toggle it in Setup |
| Impact on Einstein features | Essential — Einstein won't work without it | None — opting out does NOT break any features |
| Who benefits | Your org (data stays protected during AI use) | Salesforce (they use your data to improve their global models) |
| Certifications | ISO 27001/27017/27018, ISO 42001, FedRAMP | Governed by Master Subscription Agreement |
The Critical Distinction
The Einstein Trust Layer protects your data when YOU use AI. It ensures PII is masked, data isn't retained by third-party LLMs, and all interactions are logged.
The Opt Out setting controls whether SALESFORCE uses your data. It determines whether your customer data contributes to Salesforce's global AI model training, research, and development activities.
These two controls are completely independent. The Trust Layer works regardless of your opt-out status. Opting out does not weaken any Trust Layer protections. And keeping the setting enabled does not give you additional Trust Layer benefits.
Does Opting Out Break Einstein Features?
No. This is the single most important fact in this article.
Opting out of customer data access does NOT disable, degrade, or break any of the following Einstein features in your org:
- ✅ Einstein GPT / Generative AI — continues to work normally
- ✅ Agentforce — fully functional
- ✅ Einstein Activity Capture — unaffected
- ✅ Einstein Copilot — operates as expected
- ✅ Einstein Lead Scoring — still works (uses org-specific models)
- ✅ Einstein Opportunity Insights — unaffected
- ✅ Prompt Builder — fully functional
- ✅ Data Cloud — operates independently
All of these features rely on the Einstein Trust Layer for data protection during use — not on your data access opt-in status. The opt-out setting only controls whether your data is used for Salesforce's own global model improvements.
The one potential trade-off: opting out may mean your data doesn't contribute to global model personalization improvements that could benefit the broader Salesforce ecosystem. However, most organizations — especially those with sensitive customer data — will find this an acceptable trade-off.
The SaaS Industry Pattern: You've Seen This Before
Salesforce is not the first major SaaS platform to face scrutiny over default opt-in AI data training. This is part of a well-documented industry pattern:
Zoom (August 2023)
Zoom updated its Terms of Service to grant itself broad rights to use customer content — including audio, video, and chat — for AI training. The backlash was immediate and severe. Zoom reversed course within days, explicitly promising not to use customer content for AI training without consent.
Adobe (June 2024)
Adobe updated its Terms of Use with language that was perceived as granting the company rights to use creative professionals' work for AI training. The creative community revolted. Adobe employees publicly criticized the decision. Adobe was forced to clarify and update its terms.
LinkedIn (September 2024)
LinkedIn quietly enabled AI training on user content — including posts, comments, and profile data — by default. The change triggered regulatory scrutiny in the EU and widespread user backlash. Users had to manually navigate to a buried privacy setting to opt out.
Slack (May 2024)
Slack faced intense criticism after users discovered the platform was using customer messages, files, and data to train its machine learning models by default. Users were shocked to learn that opting out required sending an email to Slack — there was no in-app toggle.
The Pattern
Every one of these companies:
- Enabled AI data training by default, requiring customers to opt out rather than opt in
- Buried the setting or disclosure in terms of service or privacy documentation
- Faced significant backlash when the community discovered the practice
- Had to clarify, reverse, or modify their approach
Salesforce's "Opt Out of Customer Data Access" setting fits squarely within this pattern. The key difference: Salesforce does provide an in-app setting to control this — you just need to know it exists.
Why Regulated Industries Should Pay Extra Attention
While every organization should review this setting, businesses in regulated industries face additional considerations:
Financial Services
- SEC and FINRA require firms to maintain strict control over client data
- Customer financial data contributing to third-party AI training could create regulatory complications
- Fiduciary duty demands that client data usage be transparent and approved
- Compliance teams should treat this setting as a mandatory audit item
Healthcare
- HIPAA requires Business Associate Agreements (BAAs) for any third party that handles Protected Health Information (PHI)
- While Salesforce has a BAA program, the AI training use case may require separate evaluation
- Patient data used for global AI model training raises consent questions
- Healthcare organizations should consult legal counsel before leaving this setting enabled
Insurance
- State regulatory frameworks often restrict how policyholder data can be used
- AI model training on claims data could raise questions about data purpose limitations
- NAIC model laws around data governance are becoming increasingly strict
General Data Privacy (GDPR / CCPA)
- GDPR Article 5 requires data to be collected for "specified, explicit, and legitimate purposes" — AI training may not fall within your stated purpose
- CCPA/CPRA defines "sale" and "sharing" of personal information broadly — aggregate data use for AI training could trigger disclosure requirements
- Opting out may reduce exposure to "sale/sharing" interpretations under these frameworks
- GDPR's Right to Object (Article 21) gives data subjects the right to object to processing for purposes like AI training
Decision Framework: Should Your Organization Opt Out?
Not every organization needs to make the same decision. Here's a framework to guide your evaluation:
You Should Strongly Consider Opting Out If:
- You handle sensitive customer data (financial records, health information, legal documents, personal identifiers)
- You operate in a regulated industry (financial services, healthcare, insurance, government)
- Your data governance policy restricts secondary use of customer data
- Your customers have privacy expectations that would conflict with third-party AI training
- You're subject to GDPR, CCPA/CPRA, HIPAA, SOX, PCI-DSS, or similar frameworks
- Your security team requires explicit approval for any data sharing beyond contracted services
- Your organization has completed a data classification exercise and identified high-sensitivity data in Salesforce
You Might Keep It Enabled If:
- You've reviewed the setting with legal and security and have documented approval
- Your data is low-sensitivity (public information, non-PII business data)
- You want to contribute to Salesforce's global model improvements and understand the trade-off
- Your Master Subscription Agreement explicitly covers this use case
- Your compliance framework doesn't restrict secondary data processing
Regardless of Your Decision:
- ✅ Document the decision in your security/compliance records
- ✅ Involve legal, security, and IT leadership in the evaluation
- ✅ Set a recurring review cadence (quarterly recommended)
- ✅ Communicate the decision to relevant stakeholders
- ✅ Include this setting in your Salesforce security audit checklist
Best Practices for Salesforce Data Governance in the AI Era
The Opt Out of Customer Data Access setting is just one piece of a comprehensive AI governance strategy. Here are broader best practices every Salesforce admin should follow:
1. Conduct a Full Einstein Settings Audit
Review all Einstein-related settings in your org, not just this one. Understand what data is being used, how, and by whom.
2. Implement a Data Classification Framework
Classify your Salesforce data by sensitivity level (public, internal, confidential, restricted). This makes governance decisions faster and more consistent.
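A classification framework can be as lightweight as a lookup table that maps Salesforce fields to sensitivity levels. The sketch below is a minimal illustration — the object/field inventory and the `SSN__c` custom field are hypothetical; your own classification exercise would produce the real mapping.

```python
# Sensitivity levels ordered from least to most restricted.
LEVELS = ["public", "internal", "confidential", "restricted"]

# Illustrative mapping of Salesforce fields to levels. This inventory is
# hypothetical; a real one comes from your classification exercise.
CLASSIFICATION = {
    "Account.Website": "public",
    "Opportunity.StageName": "internal",
    "Contact.Email": "confidential",
    "Contact.SSN__c": "restricted",  # hypothetical custom field
}

def highest_sensitivity(fields):
    """Return the most restrictive level among the given fields.

    Unclassified fields default to "internal" so nothing silently
    counts as public.
    """
    ranks = [LEVELS.index(CLASSIFICATION.get(f, "internal")) for f in fields]
    return LEVELS[max(ranks)]

# A dataset's sensitivity is driven by its most restricted field.
print(highest_sensitivity(["Account.Website", "Contact.SSN__c"]))  # restricted
```

Once every object has a level, governance questions like "should this data feed any external AI pipeline?" become a lookup rather than a debate.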
3. Review Your Master Subscription Agreement
Understand exactly what your contract with Salesforce allows regarding data usage. If the AI training clause concerns you, negotiate specific terms at renewal.
4. Enable and Monitor the Audit Trail
Use Salesforce's built-in audit trail capabilities to track changes to security settings, data access patterns, and AI feature usage.
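Changes to settings like the opt-out toggle are recorded in Salesforce's Setup Audit Trail, which is queryable via SOQL (the `SetupAuditTrail` object). The sketch below works on sample rows shaped like SOQL results so it runs standalone; in practice you would fetch the rows through the API, and the exact `Section`/`Display` strings for Einstein changes may differ from these invented examples.

```python
# Sample rows shaped like results from Salesforce's SetupAuditTrail object.
# The Display text here is invented for illustration; real code would fetch
# rows via the API, e.g. with a SOQL query such as:
#   SELECT Action, Section, Display, CreatedDate FROM SetupAuditTrail
SAMPLE_AUDIT_ROWS = [
    {"Section": "Einstein",
     "Display": "Changed Opt Out of Customer Data Access",
     "CreatedDate": "2025-01-15T10:02:00Z"},
    {"Section": "Manage Users",
     "Display": "Reset password for a user",
     "CreatedDate": "2025-01-14T09:00:00Z"},
]

def einstein_setting_changes(rows):
    """Filter audit rows down to Einstein-related configuration changes."""
    return [
        r for r in rows
        if "einstein" in (r.get("Section") or "").lower()
        or "einstein" in (r.get("Display") or "").lower()
    ]

for row in einstein_setting_changes(SAMPLE_AUDIT_ROWS):
    print(row["CreatedDate"], row["Display"])
```

Running a filter like this on a schedule (and alerting on hits) turns the quarterly review from a manual check into a monitored control.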
5. Create an AI Governance Policy
Document your organization's policies around AI data usage, including which features are approved, how data flows through AI systems, and who has authority to change settings.
6. Train Your Admin Team
Ensure all Salesforce admins understand the implications of data access settings and know how to review them. This shouldn't be a one-person responsibility.
7. Engage Legal and Compliance Early
Don't wait for an audit or a regulatory inquiry. Proactively involve legal counsel in AI governance decisions.
8. Stay Current on Salesforce Releases
Salesforce updates its privacy and AI features regularly. Review release notes for any changes to data handling, and subscribe to Salesforce's Trust and Security communications.
Frequently Asked Questions (FAQ)
Is Salesforce using my customer data to train AI?
By default, yes. Unless you have specifically opted out via Setup → Einstein → Opt Out of Customer Data Access, Salesforce may use your organization's customer data on an aggregate basis to train global predictive AI models, improve services, and conduct research and development. Opting out stops this usage, apart from the limited exceptions (such as support cases) described in your legal agreements.
Does opting out of customer data access break Einstein or Agentforce?
No. Opting out does not break, disable, or degrade any Einstein features in your org. Einstein GPT, Agentforce, Einstein Activity Capture, Einstein Copilot, and all other AI features continue to work normally. The opt-out only controls whether Salesforce uses your data for their global model improvements.
What is the difference between the Einstein Trust Layer and the Opt Out of Customer Data Access setting?
The Einstein Trust Layer is a mandatory security architecture that protects your data during real-time AI operations — it masks PII, enforces zero-data retention with LLM partners, and logs all interactions. The Opt Out of Customer Data Access is a separate setting that controls whether Salesforce uses your data for its own AI training and R&D. They are completely independent.
How do I opt out of Salesforce customer data access?
Navigate to Setup → Einstein → Opt Out of Customer Data Access. Toggle the setting to disable Salesforce's access to your customer data. The change takes effect immediately. Document your decision and the date for your compliance records.
Does opting out affect Salesforce's Einstein Trust Layer protections?
No. The Einstein Trust Layer operates independently and remains fully active regardless of your opt-out status. PII masking, zero-data retention, toxicity detection, and audit logging all continue to function as designed.
Is the Salesforce data opt-out setting compliant with GDPR and CCPA?
Opting out aligns with GDPR and CCPA/CPRA data minimization principles by preventing your data from being used for purposes beyond your contracted services. For organizations subject to these regulations, opting out reduces potential exposure to "sale/sharing" interpretations and helps satisfy data purpose limitation requirements.
What data does Salesforce use when the setting is enabled?
When the setting is enabled (not opted out), Salesforce may use your customer data on an aggregate basis for AI model training and service improvements. This includes insights, reports, and scoring from Einstein services. Salesforce excludes certain categories by default, including credit card numbers, government IDs, racial/ethnic origin data, and financial information. PII is automatically masked by the Einstein Trust Layer before reaching any LLM partners.
Should regulated industries opt out of Salesforce data access?
Many compliance professionals and legal advisors recommend that organizations in regulated industries — including financial services, healthcare, insurance, and government — opt out of customer data access as a precautionary measure. Even though Salesforce's Trust Layer provides robust protections, the secondary use of data for AI training may raise questions under industry-specific regulations like HIPAA, SOX, FINRA guidelines, and state privacy laws.
How often should I review the Opt Out of Customer Data Access setting?
We recommend reviewing this setting quarterly or whenever there is a major Salesforce release. Include it in your regular security audit checklist, and re-evaluate whenever your organization's data governance policies, regulatory landscape, or Salesforce contract terms change.
Can Salesforce change this setting without my knowledge?
Salesforce may update the functionality or scope of this setting through platform releases. It's important to subscribe to Salesforce's release notes and security bulletins, and to review your Einstein settings after each major release to confirm your opt-out status is still in effect.
Conclusion: Make an Intentional Decision
The Opt Out of Customer Data Access setting isn't a threat — it's a governance opportunity. Every Salesforce org should know this setting exists, understand what it controls, and make a conscious, documented decision about whether to opt out.
For many organizations — especially those handling sensitive data or operating in regulated environments — opting out is the prudent choice. For others, keeping the setting enabled may be acceptable after a thorough review with legal and security stakeholders.
What's not acceptable is leaving it in its default state simply because nobody knew it was there.
Take 5 minutes today. Go to Setup → Einstein → Opt Out of Customer Data Access. Make your decision. Document it. Move on.
How Vantage Point Can Help
At Vantage Point, we help organizations navigate the intersection of Salesforce, AI, and data governance. Our AI Governance Assessment service includes:
- Full Einstein settings audit — every AI-related configuration reviewed and documented
- Data classification and governance framework development
- Trust Layer optimization — ensuring your Einstein AI implementation follows security best practices
- Regulatory compliance alignment — mapping your Salesforce data practices to GDPR, CCPA, HIPAA, SOX, and industry-specific requirements
- Ongoing governance monitoring — quarterly reviews and release-specific impact assessments
Whether you're implementing Agentforce, deploying Einstein Copilot, or simply trying to understand what your Salesforce org is doing with your data, we bring the expertise to help you move forward with confidence.
Ready to audit your Salesforce AI governance settings? Contact Vantage Point today →
About Vantage Point
Vantage Point is a Salesforce, HubSpot, and AI implementation partner that helps businesses unlock the full potential of their CRM investments. From Sales Cloud and Service Cloud to Data Cloud, MuleSoft, and Agentforce, we bring deep technical expertise paired with strategic business insight. Our partnerships with Salesforce, HubSpot, Anthropic (Claude AI), Aircall, and Workato enable us to deliver integrated, future-ready solutions across any industry.
Learn more at vantagepoint.io
