Ensure PII is not sent to model providers in violation of GDPR
Teams often rely on Zero Data Retention (ZDR) agreements but still send raw PII into the LLM. Guardrails filter personal data before it goes to the model, closing the gaps ZDR does not cover.
What's at stake
- GDPR requires a lawful basis for processing personal data; sending it to a third-party model provider may not qualify
- ZDR agreements prevent storage but don't prevent the model from seeing the data
- Model providers may still use your data for abuse monitoring even under ZDR
- Enterprise customers in the EU require proof that personal data never leaves your controlled environment
- Data protection authorities increasingly scrutinize AI systems that process personal data
How to solve this
ZDR is not enough. When you send personal data to a model provider, that provider receives and processes the data—even if they promise not to store it. From a GDPR perspective, you've transferred personal data to a third party.
The solution is to filter personal data before it reaches the model. Names, emails, phone numbers, addresses, and other identifiers should be replaced with tokens or removed entirely. The model processes the anonymized version.
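In code, that step looks roughly like the sketch below. It is illustrative only: the `redact` helper, the token format, and the regex-based detection are assumptions for the example, not Superagent's implementation, and real detection needs NER or a trained model rather than regexes alone.

```python
import re

# Illustrative regex patterns. Real detection needs NER or a trained model;
# regexes alone cannot reliably catch names or street addresses.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(text: str, mapping: dict[str, str] | None = None) -> tuple[str, dict[str, str]]:
    """Replace detected identifiers with tokens; return cleaned text and the re-mapping table."""
    if mapping is None:
        mapping = {}

    def substitute(pattern: re.Pattern, label: str, value: str) -> str:
        def repl(match: re.Match) -> str:
            token = f"[{label}_{len(mapping)}]"
            mapping[token] = match.group(0)
            return token
        return pattern.sub(repl, value)

    text = substitute(EMAIL_RE, "EMAIL", text)
    text = substitute(PHONE_RE, "PHONE", text)
    return text, mapping

clean, mapping = redact("Email jane.doe@example.com or call +1 415 555 0100.")
# clean   == "Email [EMAIL_0] or call [PHONE_1]."
# mapping == {"[EMAIL_0]": "jane.doe@example.com", "[PHONE_1]": "+1 415 555 0100"}
```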
This requires inspection at the API boundary between your system and the model provider. Every prompt, every context window, every piece of data in the request must be scanned and cleaned before transmission.
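One way to enforce that boundary, continuing the sketch above, is a thin wrapper around whatever function actually calls your provider, so no request can skip the scan. The `guarded_completion` and `send_to_provider` names are illustrative; any provider SDK call can be plugged in.

```python
from typing import Callable

Message = dict[str, str]

def guarded_completion(
    messages: list[Message],
    send_to_provider: Callable[[list[Message]], str],
) -> tuple[str, dict[str, str]]:
    """Scan every message, transmit only the cleaned copies, return the reply and the token map."""
    cleaned: list[Message] = []
    mapping: dict[str, str] = {}
    for msg in messages:
        text, mapping = redact(msg["content"], mapping)  # redact() from the sketch above
        cleaned.append({**msg, "content": text})
    # Only the cleaned messages ever cross the network boundary.
    return send_to_provider(cleaned), mapping
```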
How Superagent prevents this
Superagent provides guardrails for AI agents—small language models purpose-trained to detect and prevent failures in real time. These models sit at the boundary of your agent and inspect inputs, outputs, and tool calls before they execute.
For GDPR compliance, Superagent's Redact model filters outgoing requests before they reach your model provider. When your agent sends a prompt to OpenAI, Anthropic, or any other LLM, Redact scans the request for personal data and removes or masks it.
The model receives an anonymized version of the request. Names become [NAME], emails become [EMAIL], and custom patterns you define are replaced with appropriate tokens. Your response handling can re-map these tokens if needed for the final output.
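As an illustration of that re-mapping step (not Superagent's API), if you keep the token-to-value table on your side, restoring the original values in the model's reply is a simple string pass:

```python
def restore(reply: str, mapping: dict[str, str]) -> str:
    """Swap tokens back to the original values before the reply reaches the end user."""
    for token, original in mapping.items():
        reply = reply.replace(token, original)
    return reply

print(restore("Sent the summary to [EMAIL_0].", {"[EMAIL_0]": "jane.doe@example.com"}))
# -> "Sent the summary to jane.doe@example.com."
```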
This approach is provider-agnostic—it works with any LLM API. You maintain full control over what personal data leaves your system, with audit logs proving compliance for regulators and enterprise customers.