Blog
Thoughts, updates, and insights from the Superagent team.
SOC 2 is table stakes now. Here's what actually matters for AI products.
A few years ago, a SOC 2 report was a real differentiator. If you were selling to enterprise, that badge meant something. That's not the world we live in anymore.
Red Teaming AI Agents: What We Learned From 50 Assessments
After red teaming 50 AI agents across different companies, industries, and setups, we've identified critical patterns that teams need to understand. Here's what actually matters when securing AI agents in production.
Open Source AI Models: A Safety Score Reality Check
The open source AI movement has democratized access to powerful language models, letting developers and organizations deploy sophisticated AI systems without vendor lock-in or prohibitive costs. But how do these models actually hold up on safety?
Your System Prompt Is the First Thing Attackers Probe
When attackers target AI agents, they don't start with sophisticated exploits. They start by probing the system prompt—the instructions that define your agent's behavior, tools, and boundaries.
Your RAG Pipeline Is One Prompt Away From a Jailbreak
RAG is marketed as a safety feature, but connect it to agents that browse, call APIs, or touch databases, and every document becomes a potential jailbreak payload. Learn how malicious files, knowledge base poisoning, and indirect prompt injection turn RAG into an attack surface—and how to defend against it.
Practical guide to building safe & secure AI agents
System prompts aren't enough to secure AI agents. As agents move from chatbots to systems that read files, hit APIs, and touch production, we need real runtime protection. Learn how to defend against prompt injection, poisoned tool results, and the 'lethal trifecta' with practical guardrails.
Join our newsletter
We'll share announcements and content about AI safety.