AI & Confidentiality: What Every Coach, Consultant, and Therapist Needs to Know

If you’re working with a coach, consultant, therapist, or mentor, or you’re in one of those roles yourself, then this article is for you. AI tools are quickly becoming part of how we prepare for sessions, process notes, and even generate insights. But these tools raise serious privacy concerns that few are talking about.
Note: I realize this may sound a bit like the early days of cloud computing — when enterprise hardware vendors warned that “no Fortune 500 company would ever co-mingle their data with competitors over the public internet.” We all know how that turned out: cloud is now the standard.
My intention isn’t to resist innovation or sow fear. I use AI tools daily and encourage others to explore their potential. But just like cloud computing required new security and governance practices, AI demands the same level of thoughtfulness, especially when it comes to client confidentiality.
This piece is a call for awareness, not avoidance. Use AI, but do it with your eyes wide open.
In this guide, I’ll break down how leading AI platforms handle your data, what risks practitioners and clients need to watch for, and how to protect your practice without falling behind on innovation.
The landscape of coaching, mental health, and personal development is undergoing a profound transformation, driven by the rapid advancement of Artificial Intelligence.
For consultants, therapists and coaches, AI presents an unprecedented opportunity to enhance their practice, improve client outcomes, and streamline administrative work. AI tools are already assisting with session preparation, generating personalized homework, and offering real-time insights into communication patterns. Clients, too, benefit from AI-powered journaling prompts, structured reflections, and non-judgmental dialogue between sessions. The gains in efficiency and support are clear.
But this power comes with a critical responsibility: safeguarding client privacy. While consulting, coaching and therapy are fundamentally different disciplines, they all demand absolute confidentiality in their client relationships.
Consider the highly sensitive information a CEO might share with a trusted coach: deep-seated conflicts with the board or leadership team, confidential strategic pivots, upcoming acquisitions, or personal struggles affecting performance.
A significant risk emerges when using off-the-shelf, public AI models. This sensitive information could inadvertently be used to train the underlying models. This poses a serious threat to confidentiality and may result in data exposure or ethical breaches, including potential violations of laws such as HIPAA (USA), GDPR (EU), PIPEDA (Canada), CCPA (California), and the UK Data Protection Act. It may also conflict with non-disclosure agreements (NDAs), professional ethics codes, or other contractual or regulatory obligations related to confidentiality.
Client trust hinges on the confidentiality maintained by the practitioner and any tools they use.
The Privacy Landscape of Popular AI Models
Before diving into specific platforms, it's crucial to acknowledge that the privacy policies and Terms of Service for AI models are dynamic and subject to frequent updates. The information in this article is current as of July 2025, when it was written. As a responsible practitioner, always refer to the most current documentation directly from the AI service provider to ensure you have the latest information regarding their data handling practices.
Understanding the privacy policies of leading AI platforms is essential for any practitioner considering their use.
OpenAI ChatGPT
OpenAI’s ChatGPT is available in several tiers: Free, Plus, and Pro for individuals, and Team and Enterprise for organizations, alongside a separate developer API. Each tier has different privacy implications.
Free and Plus (consumer tiers):
By default, when Chat History & Training is enabled:
- Your conversations may be used to train OpenAI’s models.
- A subset of interactions may be reviewed by human evaluators if flagged for Trust & Safety reasons.
To protect privacy:
- Go to Settings → Data Controls → Chat History & Training, and toggle it off.
- When this is turned off, your chats won’t be used to train the model, and won’t appear in your history.
- OpenAI may still retain conversations for up to 30 days for abuse monitoring, but they are not used for training.
Deleted chats are typically removed within 30 days unless required by legal obligation (e.g. litigation-related data preservation).
Team and Enterprise tiers:
- Conversations are not used for model training.
- Data is encrypted in transit and at rest.
- Admins control retention and access settings.
- Enterprise is SOC 2 certified and complies with GDPR and related data frameworks.
Privacy trade-offs:
Disabling chat history and training means ChatGPT will not remember past interactions or preferences — limiting personalization but significantly strengthening data confidentiality.
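For practitioners comfortable with light scripting, the developer API mentioned above offers a more explicit route than the consumer app: under OpenAI's published API data-usage terms, inputs and outputs sent through the API are not used for model training by default, though you should confirm the current terms before sending anything sensitive. Below is a minimal, illustrative sketch only, assuming the official openai Python package and an API key stored in an environment variable; the prompt content is hypothetical.

```python
# Illustrative sketch: calling the OpenAI API directly instead of the consumer ChatGPT app.
# Assumes the official `openai` Python package and an API key in the OPENAI_API_KEY environment variable.
# Verify OpenAI's current API data-usage terms yourself before sending anything sensitive.
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# Keep prompts free of identifying client details wherever possible.
response = client.chat.completions.create(
    model="gpt-4o",  # example model name; choose one available on your account
    messages=[
        {"role": "system", "content": "You help a coach draft anonymized session-preparation questions."},
        {"role": "user", "content": "Suggest three reflective questions for a leader facing a difficult board decision."},
    ],
)

print(response.choices[0].message.content)
```

Even on the API, the safest habit is the same as anywhere else: strip names, companies, and other identifying details from prompts before they leave your machine.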
Google Gemini
Google’s Gemini platform offers powerful capabilities, but privacy-conscious professionals — especially therapists, coaches, consultants, and mentors — must actively manage their data settings.
By default, if “Gemini Apps Activity” is enabled, your conversations may be reviewed by human evaluators and used to improve Google’s AI models. These chats are anonymized but may be retained for up to three years, even after deletion. To prevent this, users must turn off Gemini Apps Activity and manually delete past activity.
Even with these settings disabled, Google may temporarily store conversations for up to 72 hours to support service continuity and safety checks. This short-term storage is not used for model training.
Enterprise use: In Google Workspace enterprise environments, Gemini conversations are not used for training and are not reviewed by humans. Data handling is governed by organizational admin policies.
Privacy trade-offs: Turning off activity tracking means losing access to features like memory, personalization, and session continuity. While this reduces convenience, it strengthens privacy and limits long-term data exposure.
Anthropic Claude
Anthropic’s Claude follows a privacy-first approach across all tiers, including Free, Claude Pro, and Commercial deployments.
By default, Claude does not use your inputs or outputs to train its models unless you explicitly opt in — for example, by submitting feedback via thumbs up or down. Conversations that violate usage policies may be flagged and reviewed by Anthropic’s Trust & Safety team.
- Flagged content may be stored for up to 2 years.
- Trust & Safety metadata may be retained for up to 7 years.
- Conversations deleted by the user are removed within 30 days unless legally required to be retained.
Feedback caution: Submitting feedback can lead to data being stored and used for model improvement. For maximum discretion, users should avoid providing feedback.
Memory limitations: Claude does not currently offer persistent memory across conversations. Each new chat starts without context from previous ones, even for paid users, and no prior conversation data is used to personalize future responses.
Claude for Organizations (Commercial Use): Anthropic offers Claude through enterprise channels such as Amazon Bedrock and Google Cloud's Vertex AI. In these environments:
- Inputs and outputs are never used to train models.
- No human review takes place, even for flagged content.
- Data remains within the customer's secure environment.
- Organizations control retention, access, and infrastructure security.
Privacy trade-offs: Claude’s default settings are strong, but flagged content and feedback can still introduce risk. Enterprise deployments offer the highest level of control and isolation for regulated industries.
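For organizations already running on a cloud provider, the enterprise route described above keeps Claude traffic inside their own environment. The sketch below is illustrative only, assuming boto3 is installed, AWS credentials are configured, and your account has been granted access to an Anthropic model on Amazon Bedrock; the model ID is an example and should be checked against what is available in your region.

```python
# Illustrative sketch: invoking Claude through Amazon Bedrock so prompts stay within your own AWS account.
# Assumes boto3, configured AWS credentials, and Bedrock model access already granted in the console.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")  # example region

response = bedrock.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # example model ID; verify in your Bedrock console
    messages=[
        {
            "role": "user",
            "content": [{"text": "Draft anonymized follow-up questions for a coaching client navigating a merger."}],
        }
    ],
)

print(response["output"]["message"]["content"][0]["text"])
```

Retention, logging, and access controls for calls like this are governed by your own AWS account settings, which is precisely the appeal of the enterprise route.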
Microsoft Copilot
Microsoft’s Copilot is available in both consumer and enterprise tiers, with the strongest protections offered through Microsoft 365 Copilot for commercial use.
Enterprise deployments:
- Managed through Microsoft Entra ID (formerly Azure AD).
- Prompts, documents, and outputs are not used to train Microsoft’s foundation models.
- Data is encrypted in transit and at rest, and never leaves the organization’s Microsoft 365 tenant.
- Copilot complies with GDPR, SOC 2, and Microsoft’s EU Data Boundary commitments.
- Enterprise terms are governed by a Data Protection Addendum and include admin-level control over identity, retention, and access policies.
Consumer Copilot: In tools like Word, Excel, Windows, and Bing, consumer versions of Copilot may use data for product improvement or training unless users opt out. These settings vary by platform, region, and software version and may change during preview or beta phases.
Privacy trade-offs: Enterprise Copilot offers strong data isolation and compliance for regulated settings. Consumer versions, if misconfigured, may share prompt data with Microsoft. Carefully reviewing and adjusting data-sharing settings is essential for maintaining confidentiality.
The Ultimate Safeguard: Localized AI Chatbots
While diligently configuring cloud AI settings is a vital step in protecting client confidentiality, the gold standard for absolute privacy is to run AI models entirely offline using localized chatbot tools.
This setup ensures:
- Absolute Privacy: No data ever leaves your device or touches external servers
- No Training Risk: Your conversations are never used to train public AI models
- Full Control: You manage the data, the model, and the entire environment
Thanks to tools like Ollama, LM Studio, GPT4All, and llama.cpp, it is now easier than ever to set up private AI on your own machine. These platforms support open-source models like Gemma, Mistral, and Llama 3, and can be used to generate therapeutic prompts, organize client notes, or support research — all without compromising privacy.
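As a concrete illustration of how simple a local setup can be, here is a minimal sketch assuming Ollama is installed and running on your machine and a model such as Llama 3 has already been pulled (for example with `ollama pull llama3`). The request goes only to localhost, so the prompt and the response never leave your device.

```python
# Minimal local-only sketch: querying a model served by Ollama on this machine.
# Assumes Ollama is running locally (default port 11434) and the llama3 model has been pulled.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",
        "prompt": "Suggest three gentle journaling prompts for a client working on work-life boundaries.",
        "stream": False,  # return one complete JSON response instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()

print(resp.json()["response"])
```

The same pattern works with LM Studio and llama.cpp, both of which expose local HTTP endpoints, though their exact APIs differ.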
Security tip: Secure your AI environment with strong passwords, up-to-date software, antivirus protection, and physical access control.
A Path Forward for Responsible AI Integration
To ethically integrate AI into coaching, therapy, or consulting, a multi-layered approach is essential:
- Educate Yourself
Understand each AI tool’s capabilities, limitations, and privacy risks.
- Prioritize Privacy-First Tools
Choose services with clear privacy policies and strong default protections.
- Configure Settings Carefully
Disable data-sharing and training features in all cloud-based tools. Recheck settings regularly.
- Explore Localized Solutions
Use offline AI setups where data sensitivity is high.
- Get Informed Consent
Always inform clients when using AI in your practice. Update contracts to reflect how AI is used and protected. While some coaches avoid formal contracts, written agreements are strongly encouraged and ethically expected by ICF, EMCC, and AC.
- Maintain Human Oversight
AI can support your work, but it must never replace your professional judgment, empathy, or the human connection at the core of your practice.
As AI technology continues to advance and regulatory landscapes adapt, staying informed about evolving best practices and legal requirements will be an ongoing responsibility for practitioners committed to ethical integration.
Conclusion
AI is reshaping how we work, think, and support our clients. But with great power comes great responsibility. By prioritizing privacy, maintaining ethical rigor, and staying informed, practitioners can embrace the promise of AI without compromising the trust at the core of every client relationship.
Feel free to repost this to raise awareness and protect others' privacy in AI-powered conversations.