What Is an MCP Server and Why Does It Matter for Privacy?

The Model Context Protocol (MCP), introduced by Anthropic, is an open standard that lets AI assistants like Claude Desktop, Cursor, and VS Code Copilot call external tools mid-conversation. Instead of manually copying text into a separate anonymization app, MCP lets you type a single prompt — and the AI automatically routes the data through your chosen tools before processing.

For privacy teams, this is transformative. The anonymize.solutions MCP Server inserts itself as an invisible privacy layer: your sensitive text goes in, a pseudonymized version comes out, the LLM works on the safe version, and the original values are restored in the response. The LLM never sees actual names, IDs, or financial data.

Zero-Knowledge guarantee: The MCP Server runs in your environment. Your API key authenticates against the anonymize.solutions API over TLS. No plaintext PII is ever logged on our servers — only you hold the mapping table.

The 6 MCP Operators for PII Protection

The MCP Server exposes 6 operators as callable tools. Each has a distinct role in the anonymization lifecycle:

1. anonymize

Purpose: Replace detected PII with placeholder tokens (e.g., [PERSON_1], [EMAIL_1]). Stores the mapping for later de-anonymization.

Example prompt: "Anonymize the following support ticket before I send it for analysis: [paste text]"
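
Conceptually, the operator is a substitution pass that builds a mapping table as it goes. A minimal stdlib sketch of that idea (the regexes and function names here are illustrative only; the real service uses NLP-based detection across many more entity types):

```python
import re

def anonymize(text: str) -> tuple[str, dict[str, str]]:
    """Replace simple PII patterns with numbered placeholders.

    Returns the anonymized text plus the placeholder -> original
    mapping needed later for de-anonymization. Illustrative regexes
    only; the real service uses NLP-based detection.
    """
    patterns = {
        "EMAIL": r"[\w.+-]+@[\w-]+\.[\w.]+",
        "US_SSN": r"\b\d{3}-\d{2}-\d{4}\b",
    }
    mapping: dict[str, str] = {}
    for label, pattern in patterns.items():
        for n, match in enumerate(re.findall(pattern, text), start=1):
            placeholder = f"[{label}_{n}]"
            mapping[placeholder] = match
            text = text.replace(match, placeholder)
    return text, mapping

safe, table = anonymize("Contact john@acme.com, SSN 123-45-6789")
# safe == "Contact [EMAIL_1], SSN [US_SSN_1]"
```

The mapping table stays local; only the placeholder version leaves your machine for the LLM.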

2. detect

Purpose: Scan text and return a structured list of all PII found — type, position, confidence score — without modifying the text. Useful for audits and risk assessment.

Example prompt: "Detect all PII in this log file and tell me what types are present."
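
The shape of a detect result can be pictured as a list of typed findings with character spans. A stdlib sketch under the same caveat as above (illustrative patterns only; the real service also attaches a confidence score to each finding):

```python
import re

def detect(text: str) -> list[dict]:
    """Return structured PII findings without modifying the text.

    Each finding records the entity type, character span, and matched
    value. Illustrative regexes only.
    """
    patterns = {
        "EMAIL": r"[\w.+-]+@[\w-]+\.[\w.]+",
        "PHONE": r"\+?\d[\d\s-]{7,}\d",
    }
    findings = []
    for label, pattern in patterns.items():
        for m in re.finditer(pattern, text):
            findings.append({"type": label, "start": m.start(),
                             "end": m.end(), "value": m.group()})
    return findings

report = detect("Reach maria@example.org or +44 20 7946 0958")
```

Because the input text is returned untouched, detect is safe to run repeatedly during an audit.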

3. analyze

Purpose: Similar to detect, but returns richer metadata including entity counts, compliance risk level (GDPR/HIPAA/PCI-DSS), and suggested preset. Designed for compliance officers.

Example prompt: "Analyze this database export for HIPAA compliance risk."
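
The step up from detect to analyze is aggregation: entity counts plus a risk verdict. A sketch with a hypothetical risk heuristic of my own (the real service's scoring rules are not documented here):

```python
from collections import Counter

def analyze(findings: list[dict]) -> dict:
    """Summarize detect-style findings into compliance metadata.

    Hypothetical heuristic for illustration: health identifiers imply
    HIPAA scope, card data implies PCI-DSS, anything else falls under
    GDPR.
    """
    counts = Counter(f["type"] for f in findings)
    if counts.get("MRN") or counts.get("NPI"):
        risk, preset = "high", "hipaa"
    elif counts.get("CREDIT_CARD"):
        risk, preset = "high", "pci-dss"
    elif counts:
        risk, preset = "medium", "gdpr"
    else:
        risk, preset = "low", "gdpr"
    return {"entity_counts": dict(counts), "risk_level": risk,
            "suggested_preset": preset}

summary = analyze([{"type": "MRN", "value": "78234"},
                   {"type": "EMAIL", "value": "a@b.org"}])
# summary["suggested_preset"] == "hipaa"
```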

4. de-anonymize

Purpose: Restore original values from placeholders using the session mapping table. Completes the anonymize → process → de-anonymize cycle.

Example prompt: "De-anonymize the AI's response so I can send it to the actual customer."
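
De-anonymization is the inverse substitution, driven by the session mapping table. A self-contained sketch (the mapping shown is a stand-in for whatever the anonymize step produced in your session):

```python
def deanonymize(text: str, mapping: dict[str, str]) -> str:
    """Swap placeholders back to their original values.

    `mapping` is the placeholder -> original table produced during
    anonymization and held only in the local session.
    """
    for placeholder, original in mapping.items():
        text = text.replace(placeholder, original)
    return text

restored = deanonymize("Send the invoice to [PERSON_1] at [EMAIL_1].",
                       {"[PERSON_1]": "John Smith",
                        "[EMAIL_1]": "john@acme.com"})
# restored == "Send the invoice to John Smith at john@acme.com."
```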

5. encrypt

Purpose: Replace PII with reversible encrypted tokens using AES-256-GCM. Unlike anonymize (which uses readable placeholders), encrypt produces opaque tokens that can only be reversed with your key. Ideal for storing data in third-party systems.

Example prompt: "Encrypt all PII in this CSV before uploading to our data warehouse."

6. decrypt

Purpose: Restore encrypted tokens to their original values using your encryption key. Requires the same key used during encryption — enforcing strict access control.

Example prompt: "Decrypt the PII fields in this record for the authorized HR review."
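
The encrypt and decrypt operators form one reversible, key-bound pair. Python's standard library has no AES-256-GCM, so the sketch below uses a keyed XOR pad purely to illustrate the concept of an opaque token that only the key holder can reverse; the real operators use authenticated AES-256-GCM, which this stand-in is not:

```python
import base64, os

def encrypt_value(value: str, key: bytes) -> str:
    """Turn a PII value into an opaque, reversible token.

    Stand-in only: a repeating XOR pad, NOT real AES-256-GCM.
    Demonstrates that the token is meaningless without the key.
    """
    data = value.encode()
    pad = (key * (len(data) // len(key) + 1))[:len(data)]
    cipher = bytes(a ^ b for a, b in zip(data, pad))
    return base64.urlsafe_b64encode(cipher).decode()

def decrypt_value(token: str, key: bytes) -> str:
    """Reverse encrypt_value with the same key."""
    cipher = base64.urlsafe_b64decode(token)
    pad = (key * (len(cipher) // len(key) + 1))[:len(cipher)]
    return bytes(a ^ b for a, b in zip(cipher, pad)).decode()

key = os.urandom(32)  # 256-bit key, held only by you
token = encrypt_value("123-45-6789", key)
assert decrypt_value(token, key) == "123-45-6789"
```

Real AES-256-GCM additionally authenticates the ciphertext, so tampering with a stored token is detected at decryption time.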

Installation: JSON Configuration

Add the anonymize.solutions MCP Server to your Claude Desktop or Cursor configuration file. For Claude Desktop, the config lives at ~/Library/Application Support/Claude/claude_desktop_config.json on macOS or %APPDATA%\Claude\claude_desktop_config.json on Windows.

{
  "mcpServers": {
    "anonymize-solutions": {
      "command": "npx",
      "args": ["-y", "@anonymize-solutions/mcp-server"],
      "env": {
        "ANONYMIZE_API_KEY": "your-api-key-here",
        "ANONYMIZE_PRESET": "gdpr",
        "ANONYMIZE_LANGUAGE": "en",
        "ANONYMIZE_BASE_URL": "https://api.anonymize.solutions/v1"
      }
    }
  }
}

For VS Code with Copilot's MCP support, add an equivalent entry to your .vscode/mcp.json workspace file (VS Code nests servers under a top-level "servers" key rather than "mcpServers"). Restart the AI assistant after saving.

Verifying the Installation

Once installed, type this prompt in Claude Desktop:

Use the anonymize tool to anonymize: "John Smith's email is john@acme.com and his SSN is 123-45-6789"

You should receive: "[PERSON_1]'s email is [EMAIL_1] and his SSN is [US_SSN_1]"

Workflow: Anonymize → Send to LLM → De-anonymize Response

The complete privacy-preserving workflow chains anonymize → process → de-anonymize; the MCP Server handles each stage automatically when you combine them in a single prompt:

Step 1 — User prompt:
"Anonymize the following patient note, then summarize it for a billing review,
then de-anonymize the summary."

Patient note: "Maria Rossi, DOB 12/03/1981, MRN 78234, was admitted for
chest pain. Her insurer is BlueCross policy #BC-9923. She lives at
42 Maple Street, Boston MA."

Step 2 — MCP anonymizes (internally):
"[PERSON_1], DOB [DATE_1], MRN [MRN_1], was admitted for chest pain.
Her insurer is [ORG_1] policy #[ID_1]. She lives at [ADDRESS_1]."

Step 3 — LLM summarizes the anonymized text (LLM never sees real PII)

Step 4 — MCP de-anonymizes the LLM's output before returning to user

This entire cycle happens in a single conversation turn. You see the final, de-anonymized summary; the LLM provider's logs contain only the anonymized version.
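
The four steps above can be sketched as one function. Detection is stubbed with a fixed two-entry mapping and the LLM call is a stand-in parameter, so the example is self-contained; the point is that the model only ever sees placeholders:

```python
def run_private_workflow(text: str, llm_summarize) -> str:
    """Chain anonymize -> LLM -> de-anonymize in one turn.

    `llm_summarize` stands in for the model call. Detection is stubbed
    with a fixed mapping for brevity; the real server detects entities
    automatically.
    """
    mapping = {"[PERSON_1]": "Maria Rossi", "[MRN_1]": "78234"}
    safe = text
    for placeholder, original in mapping.items():
        safe = safe.replace(original, placeholder)       # step 2: anonymize
    summary = llm_summarize(safe)                        # step 3: LLM sees placeholders only
    for placeholder, original in mapping.items():
        summary = summary.replace(placeholder, original) # step 4: restore
    return summary

out = run_private_workflow(
    "Maria Rossi, MRN 78234, admitted for chest pain.",
    lambda safe: f"Billing summary: {safe}",
)
# out contains the real name again; the stub LLM only ever saw "[PERSON_1]"
```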

Entity Presets: GDPR, HIPAA, PCI-DSS, Custom

The ANONYMIZE_PRESET environment variable controls which entity types the MCP Server detects. Choose the preset that matches your compliance requirement:

gdpr
Detects: name, email, phone, address, national ID, IP address, date of birth, location.
Best for: EU organizations, general data processing.

hipaa
Detects: all 18 PHI identifiers, including name, DOB, MRN, NPI, insurance ID, dates, geographic data, and biometrics.
Best for: healthcare organizations, US medical data.

pci-dss
Detects: credit card numbers (Luhn-validated), CVV, expiry date, cardholder name, bank account, routing number.
Best for: payment processors, e-commerce.

custom
Detects: fully configurable via API; add regex patterns, NLP entity types, and keyword lists.
Best for: industry-specific identifiers (employee IDs, policy numbers, etc.).
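
Switching presets is a one-line change to the env block of the JSON configuration shown in the installation section, e.g. for a healthcare workspace:

```json
"env": {
  "ANONYMIZE_API_KEY": "your-api-key-here",
  "ANONYMIZE_PRESET": "hipaa",
  "ANONYMIZE_LANGUAGE": "en",
  "ANONYMIZE_BASE_URL": "https://api.anonymize.solutions/v1"
}
```

Restart the AI assistant after saving, as with any config change.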

Real-World Use Cases

Code Review with Customer Data

Developers often paste database query results into Claude to debug issues. Without anonymization, real customer records — names, emails, account numbers — enter the LLM context. With the MCP Server, the prompt is automatically anonymized before the AI sees it.

Prompt: "Anonymize this query result, then help me optimize the SQL that produced it."

Document Analysis for Legal Teams

Legal documents contain client names, case numbers, financial amounts, and dates. Anonymize before asking Claude to summarize or extract key terms. De-anonymize the output before sending to the client.

API Debugging with Production Logs

Production logs frequently contain PII in request payloads, headers, and error messages. Detect and redact PII before pasting logs into Cursor for debugging assistance.

Prompt: "Detect PII in this API log, then help me find the 500 error root cause."

Customer Data Processing Pipelines

For teams building data pipelines with AI-assisted code generation, use the encrypt operator to pseudonymize sample data used during development. The actual schema and processing logic remain visible; personal values are replaced with encrypted tokens.

Zero-Knowledge: How It Works in the MCP Context

Zero-Knowledge in the MCP context means three things:

  1. Your API key authenticates the request — the anonymize.solutions API processes the text over TLS and returns anonymized output. No plaintext is stored on our servers after the response is sent.
  2. The mapping table lives in your session — the MCP Server holds the entity-to-placeholder mapping in memory for the duration of your session. It is never transmitted to the LLM or stored externally.
  3. Encryption keys never leave your device — when using the encrypt operator, your encryption key is derived locally using Argon2id and used client-side. We receive only the ciphertext.
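
Local key derivation can be pictured as follows. The article names Argon2id, which is not in Python's standard library, so hashlib.scrypt plays the same memory-hard-KDF role in this sketch; the salt and passphrase are placeholders:

```python
import hashlib

def derive_key(passphrase: str, salt: bytes) -> bytes:
    """Derive a 256-bit encryption key locally from a passphrase.

    Stand-in: scrypt instead of the Argon2id named in the article,
    since Argon2id is not in the stdlib. Either way, the derived key
    never leaves the device.
    """
    return hashlib.scrypt(passphrase.encode(), salt=salt,
                          n=2**14, r=8, p=1, dklen=32)

key = derive_key("correct horse battery staple", salt=b"per-user-salt-16")
assert len(key) == 32  # 256-bit key
```

The same passphrase and salt always yield the same key, which is what makes decryption possible later without ever transmitting the key.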

This architecture means that even if an LLM provider were breached, the data in their logs would be pseudonymized placeholders — not real PII.

Compliance: GDPR and EU AI Act Article 10

Using the MCP Server supports two critical legal requirements simultaneously:

GDPR Article 25 (Privacy by Design): Routing AI workflows through an anonymization layer before data reaches an LLM constitutes a technical measure implementing data minimization and privacy by design. The LLM only ever processes the minimum data necessary — pseudonymized text.

EU AI Act Article 10 (Data Governance): Article 10 requires that training and validation data used in high-risk AI systems be subject to "appropriate data governance practices." Using anonymized data in AI-assisted workflows supports compliance with this requirement for operational data fed to AI models.

Both requirements are documented in the anonymize.solutions compliance export, available under the Managed Private and Self-Managed packages.

Related Articles

Consistent Pseudonymization for RAG Pipelines

How to anonymize data for vector embeddings without breaking retrieval — same entity, same token, every time.

PII in LLM Prompts: Risks and Solutions

What happens when PII enters a large language model — and the technical controls that prevent it.


All Integration Options

REST API, MCP Server, Chrome Extension, Office Add-in, Desktop App — see every way to connect.


Get the MCP Server

Add privacy-preserving AI workflows to Claude Desktop, Cursor, or VS Code in under 5 minutes. Your PII never reaches the LLM.