ENTERPRISE SECURITY

Shadow AI and PII: How Employees Are Accidentally Sending Sensitive Data to ChatGPT

Your GDPR compliance programme is sophisticated. Your security team is well-funded. Your employees are trained. And right now, someone at your company is pasting a customer database extract into ChatGPT to “just quickly summarize it.”

The Shadow AI Epidemic

Shadow AI refers to employees using AI tools without IT approval or security review. It is the AI equivalent of shadow IT — employees installing unauthorised software — but with a critical difference: AI tools are consumer-grade, always-on, and accessed entirely through the browser, making them invisible to most corporate security stacks.

  • 78% of employees use unapproved AI tools at work (Gartner, 2025)
  • 47 shadow AI applications in use at the average enterprise (Salesforce, 2025)
  • 11% of data pasted into ChatGPT by enterprise users is sensitive (Cyberhaven, 2024)
  • $4.88M average cost of a data breach involving AI systems (IBM, 2025)

How Shadow AI Leaks PII (5 Vectors)

Vector 1: Copy-Paste from Internal Systems

The most common vector. An employee working on a customer support ticket copies the customer's details (name, email, account number, complaint history) into ChatGPT to "help me write a professional response." The customer's PII is now in OpenAI's systems. The employee did not intend harm; they were trying to be efficient.

Vector 2: Document Upload and Summarization

Many AI tools now accept file uploads. Employees upload contracts, HR files, financial reports, and customer data exports to get summaries, translations, or analysis. These documents often contain extensive personal data. Once uploaded to a cloud AI service, the data is out of your control regardless of the service’s privacy policy.

Vector 3: Code Generation with Real Data

Developers frequently paste real database schemas, API responses containing live user data, or log files with embedded PII into AI coding assistants to debug or generate code. “Here’s the JSON response from our users API, help me parse the nested address field” — and the real user data goes with it.
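A low-tech mitigation developers can apply today is to scrub live values out of an API response before pasting it into an assistant, preserving the JSON shape the assistant needs while dropping the data it does not. A minimal sketch (the field list and regex are illustrative, not a complete PII detector):

```python
import json
import re

# Keys that commonly hold PII in user-facing APIs (illustrative, not exhaustive).
SENSITIVE_KEYS = {"name", "email", "phone", "address", "account_number"}
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def redact(value):
    """Recursively replace sensitive values while keeping the JSON structure."""
    if isinstance(value, dict):
        return {
            k: "<REDACTED>" if k.lower() in SENSITIVE_KEYS else redact(v)
            for k, v in value.items()
        }
    if isinstance(value, list):
        return [redact(item) for item in value]
    if isinstance(value, str):
        # Catch emails embedded in free-text fields
        return EMAIL_RE.sub("<EMAIL>", value)
    return value

response = {"user": {"id": 7, "email": "jane@example.com",
                     "address": {"city": "Berlin", "street": "Hauptstr. 1"}}}
print(json.dumps(redact(response)))
```

The assistant can still explain how to parse the nested `address` field; it just never sees a real address.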

Vector 4: Email and Communication Drafting

Sales teams paste customer CRM records into AI to draft personalised emails. HR teams paste interview notes to generate feedback letters. Legal teams paste case details to draft correspondence. Each represents PII leaving your organisation’s boundaries without any security review.

Vector 5: AI-Powered Browser Extensions

Dozens of browser extensions use AI to improve writing, summarize pages, and assist with forms — and many of them transmit page content (which may include PII visible on screen) to third-party AI APIs. An employee who installs a “helpful writing assistant” extension may not realise it is sending every page they visit to an external server.

The Hidden Cost of AI Data Exposure

Shadow AI incidents are difficult to detect and expensive when discovered. Unlike traditional data breaches (which are event-based and traceable), shadow AI exposure is continuous and cumulative: hundreds of employees making thousands of individual decisions, each representing a small regulatory risk, collectively representing a massive compliance exposure.

The cost calculation must include:

  • GDPR notification obligations: Any personal data transmitted to unauthorized third parties without adequate safeguards constitutes a personal data breach requiring assessment and potential DPA notification within 72 hours
  • Regulatory fines: Up to €20M or 4% of global annual turnover for serious GDPR violations
  • Litigation exposure: Data subjects whose PII was exposed have rights of compensation under GDPR Article 82
  • Reputational damage: Enterprise customers increasingly audit supplier AI usage policies as part of vendor risk management
  • EU AI Act exposure: For organizations building AI systems, shadow AI that processes personal data may trigger EU AI Act obligations your legal team is not aware of

What Your CISO Needs to Enforce

Technical controls are more effective than policy-only approaches. Research consistently shows that employees bypass security policies when they perceive them as friction to productivity. The solution is not to ban AI — that is both unenforceable and counterproductive. The solution is to provide a safe channel for AI use that makes the secure path the easy path.

The CISO agenda for shadow AI should have three components:

  1. Visibility: Know which AI tools are being used. Browser extension inventory, DNS monitoring, and CASB (Cloud Access Security Broker) integration can identify shadow AI usage patterns without reading employee content.
  2. Policy: Publish an AI Acceptable Use Policy that specifies which AI tools are approved, what data classifications are permissible to share, and the anonymization requirements for any PII-adjacent use case.
  3. Technical enforcement: Deploy the anonymize.solutions Chrome Extension enterprise-wide to intercept and anonymize PII before it reaches any AI platform, turning the unsafe path into a safe one without blocking AI use.
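The visibility step can start very simply: match outbound DNS queries against a list of known AI-platform domains, without inspecting any content. A hedged sketch, assuming a plain "timestamp host domain" log format (the domain list is illustrative; a real deployment would feed CASB or resolver logs):

```python
# Known consumer AI endpoints to flag (illustrative, not exhaustive).
AI_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "claude.ai",
    "gemini.google.com", "copilot.microsoft.com", "perplexity.ai",
}

def flag_ai_queries(dns_log_lines):
    """Return (timestamp, host, domain) tuples for queries to known AI platforms."""
    hits = []
    for line in dns_log_lines:
        # Assumed log format: "<timestamp> <client-host> <queried-domain>"
        ts, host, domain = line.split()
        if domain.lower().rstrip(".") in AI_DOMAINS:
            hits.append((ts, host, domain))
    return hits

log = [
    "2025-06-01T09:12:03 laptop-42 chatgpt.com",
    "2025-06-01T09:12:07 laptop-42 intranet.example.net",
]
print(flag_ai_queries(log))
```

Because only domains are matched, this gives usage patterns without reading employee content, which keeps the monitoring itself GDPR-friendly.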

The Technical Solution: Intercept Before It Reaches the LLM

The most elegant solution to shadow AI PII leakage is to intercept PII at the point of transmission, before it reaches the AI platform. This approach:

  • Does not require employees to change their behaviour
  • Does not block productive AI use
  • Works across all AI platforms (ChatGPT, Claude, Gemini, Copilot, Perplexity, and more)
  • Creates an audit trail without reading employee content
  • Converts a compliance risk into a compliant workflow

Chrome Extension: How It Works

The anonymize.solutions Chrome Extension operates at the browser level, monitoring text input to AI chat interfaces. When a user begins typing or pasting content that contains detected PII, the extension:

  1. Detects entities in real-time — 320+ entity types, 48 languages, running on the anonymize.solutions API with sub-second latency
  2. Anonymizes before submission — PII is replaced with consistent pseudonyms or AES-256-GCM encrypted tokens before the text is sent to the AI platform
  3. Optionally de-anonymizes responses — If the user needs the AI’s response to reference real names (for example, drafting a personalised email), the extension decrypts the response client-side for the authorised user
  4. Logs anonymization events — Enterprise deployments can forward anonymization event logs (entity counts and types, not content) to SIEM systems for compliance monitoring
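Step 2 (consistent pseudonyms) can be illustrated with a keyed hash: the same entity always maps to the same placeholder within a session, so the AI's answer stays coherent without ever seeing the real value, and the mapping can be reversed client-side for step 3. A standard-library sketch of the technique, not the extension's actual implementation (the extension additionally supports AES-256-GCM encrypted tokens):

```python
import hmac, hashlib

SESSION_KEY = b"per-session-secret"  # would be generated fresh per session

def pseudonym(entity_type: str, value: str) -> str:
    """Derive a stable, content-free placeholder for a detected entity."""
    digest = hmac.new(SESSION_KEY, value.encode(), hashlib.sha256).hexdigest()
    return f"<{entity_type}_{digest[:8]}>"

text = "Jane Doe (jane@example.com) filed complaint #8812."
# In the real extension, entities come from the detection API (step 1);
# here the mapping is built by hand for illustration.
replacements = {
    "Jane Doe": pseudonym("PERSON", "Jane Doe"),
    "jane@example.com": pseudonym("EMAIL", "jane@example.com"),
}
anonymized = text
for original, placeholder in replacements.items():
    anonymized = anonymized.replace(original, placeholder)
print(anonymized)

# De-anonymization (step 3) reverses the mapping client-side:
restored = anonymized
for placeholder, original in ((v, k) for k, v in replacements.items()):
    restored = restored.replace(placeholder, original)
assert restored == text
```

The keyed hash is what makes pseudonyms consistent across a conversation: ask a follow-up question about the same person and the model sees the same placeholder.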

Enterprise Policy Template

The following policy template can be adapted for inclusion in your AI Acceptable Use Policy:

AI Tool Usage — PII Handling Requirements

All use of AI tools (approved and unapproved) must comply with the following data handling requirements:

  1. Personal data of customers, employees, or third parties must not be submitted to external AI services without prior anonymization.
  2. The anonymize.solutions Chrome Extension is deployed on all corporate devices and must remain active during AI platform use.
  3. Employees using AI for tasks involving personal data must use the anonymize-then-query workflow described in the AI Safety Guide.
  4. Any AI-assisted output that will be shared externally must be reviewed for AI-generated PII (synthetic but realistic personal data) before sending.

Violations are subject to disciplinary action under the Data Protection Policy.

Building a Shadow AI Governance Framework

A complete shadow AI governance framework has four layers:

  • Discovery: Continuous monitoring of AI tool usage across the organisation (CASB, DNS, extension inventory)
  • Policy: Clear AI Acceptable Use Policy with data classification requirements and approved tool list
  • Technical enforcement: Chrome Extension deployment for real-time PII interception; MCP Server for developer workflows; REST API for application-layer controls
  • Monitoring and response: Anonymization event logging, SIEM integration, and incident response procedure for AI-related data breaches
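The monitoring layer forwards metadata only. A sketch of what a content-free anonymization event might look like before it is shipped to a SIEM (the field names here are illustrative, not the product's actual schema; note there is no message text, only entity counts and types):

```python
import json
from datetime import datetime, timezone

def build_event(user_id: str, platform: str, entity_counts: dict) -> str:
    """Build a content-free anonymization event for SIEM forwarding."""
    event = {
        "event_type": "pii_anonymized",
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user_id,            # pseudonymous user identifier
        "platform": platform,       # e.g. "chatgpt", "claude"
        "entities": entity_counts,  # counts by type, never the values
        "total_entities": sum(entity_counts.values()),
    }
    return json.dumps(event)

print(build_event("u-1042", "chatgpt", {"PERSON": 2, "EMAIL": 1, "IBAN": 1}))
```

Counts and types are enough to spot anomalies (one user suddenly anonymizing hundreds of IBANs) without the monitoring itself becoming a second PII store.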

Conclusion: Enable AI, Control PII

Shadow AI is not a behaviour problem. It is a tool gap problem. Employees use unauthorised AI tools because authorised alternatives are not as capable, are harder to access, or are not yet available. The solution that actually works is to make the safe path the easy path: deploy the Chrome Extension enterprise-wide, adopt the MCP Server for developer AI workflows, and give employees a productive AI experience that is also compliant by design.

Enterprise deployment: The Chrome Extension supports enterprise deployment via Google Workspace admin, Microsoft Intune, or standard browser management policies. Contact our team to discuss enterprise licensing and SIEM integration options. Request enterprise demo →
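For managed Chrome, force-installing an extension uses the standard ExtensionInstallForcelist browser policy; Google Workspace admin and Intune expose the same setting through their own consoles. A sketch of the managed-policy JSON (the 32-character extension ID below is a placeholder, not the real ID):

```json
{
  "ExtensionInstallForcelist": [
    "aaaabbbbccccddddeeeeffffgggghhhh;https://clients2.google.com/service/update2/crx"
  ]
}
```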

Stop Shadow AI PII Leakage

Chrome Extension + enterprise deployment + SIEM logging. Real-time PII interception for ChatGPT, Claude, Gemini, and 50+ AI platforms. Works without changing employee workflows.