The AI firewall

Reasoning-driven runtime protection for every prompt and response.
Stops prompt injections, backdoors, and data leaks.

Open Source (MIT) • Backed by Y Combinator

The Threats LLMs Can't Defend Against

Every LLM call is an attack vector, exposing your application to prompt injections, data leaks, and backdoors.

Prompt Injections

Attackers manipulate AI behavior through crafted inputs, bypassing safety controls and hijacking system prompts

Data Leaks

Sensitive data like API keys, credentials, and PII gets exposed in AI responses through direct or indirect extraction

Backdoors

AI generates malicious code patterns that create vulnerabilities, security holes, or hidden exploits in your applications

Safety and security at inference speed

Sub-50ms decisions with full reasoning. Firewall, routing, vault, and observability — always on.

NinjaLM
A fine-tuned small language model that reasons about every request, catching novel attacks that static filters miss.
Model Router
A flexible router that directs requests by policy, cost, and latency, supporting any model across your stack.
Soon
Vault
A secure store for secrets and environment variables, scoped to prompts so data stays protected and under control.
Observability
Always-on logs and traces for every request, with decision reasoning available for debugging, compliance, and auditability.

Use Cases

Protect both what you build and what you use — from coding agents and internal apps to third-party AI tools.

AI Agents
Keep coding agents and autonomous agents safe from prompt injections, data leaks, and malicious code — without slowing them down.
Applications
Secure both what you build and what you use, from custom APIs and microservices to external tools like Claude Code, Cursor, and ChatGPT.

Deployment Options

Hosted
Managed solution, no maintenance
Start in seconds, scale automatically
Perfect for teams without on-premise requirements
Self-hosted
Deploy on-premise with full control
Complete data ownership
Enterprise-ready for strict requirements

Get Started in Seconds

Add protection with a single change — swap your API URL.
No refactoring required.

curl -X POST https://firewall.superagent.sh/messages \
  -H "Content-Type: application/json" \
  -H "x-api-key: your-key" \
  -d '{
    "model": "gpt-5",
    "messages": [
      {"role":"user","content":"Write a secure password reset email template."}
    ]
  }'

One change. Full protection.
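
If you call the API from code rather than curl, the same single change applies. A minimal sketch using Python's requests library purely as an illustration; the URL, headers, and body mirror the curl example above, and "your-key" is a placeholder:

import requests

# Send the request to the firewall URL instead of calling the model provider directly.
response = requests.post(
    "https://firewall.superagent.sh/messages",
    headers={"Content-Type": "application/json", "x-api-key": "your-key"},
    json={
        "model": "gpt-5",
        "messages": [
            {"role": "user", "content": "Write a secure password reset email template."}
        ],
    },
)
print(response.json())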

Frequently Asked Questions

Everything you need to know about Superagent AI Firewall

What is Superagent AI Firewall?

Superagent is an AI firewall that provides reasoning-driven runtime protection for every LLM prompt and response. It stops prompt injections, backdoors, and data leaks in real time with sub-50ms decision making.

What threats does Superagent protect against?

Superagent protects against three core AI threats: prompt injections (attackers manipulating AI behavior), data leaks (sensitive information exposure), and backdoors (malicious code generation).

How fast is the protection?

Superagent provides sub-50ms decisions with full reasoning. Our NinjaLM model is fine-tuned specifically for threat detection, ensuring lightning-fast protection without compromising your AI application's performance.

Is Superagent open source?

Yes, Superagent is open source and released under the MIT License. You can find the code on GitHub and contribute to the project. We believe in transparent security.

What deployment options are available?

Superagent offers both hosted and self-hosted deployment options. The hosted solution is managed with no maintenance required, while self-hosted provides full control and data ownership for enterprise requirements.

How do I integrate Superagent?

Integration is simple: just swap your API URL to route through Superagent's firewall. No refactoring required. Add protection with a single change and get started in seconds.
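
As a sketch, if your application keeps its LLM endpoint in a single configuration value, the swap really is one line. The "before" URL below is a hypothetical placeholder; the firewall URL matches the quickstart above:

# Before: requests go straight to your model provider (hypothetical placeholder URL)
LLM_API_URL = "https://api.your-provider.example/v1/messages"

# After: the same requests route through the Superagent firewall
LLM_API_URL = "https://firewall.superagent.sh/messages"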

Your AI has no defenses
— until now

Get full protection in under a minute. No refactoring required.

Open Source • MIT License