Blog

Thoughts, updates, and insights from the Superagent team.

Guardrails · November 24, 2025 · 2 min read

Your RAG Pipeline Is One Prompt Away From a Jailbreak

RAG is marketed as a safety feature, but connect it to agents that browse, call APIs, or touch databases, and every document becomes a potential jailbreak payload. Learn how malicious files, knowledge base poisoning, and indirect prompt injection turn RAG into an attack surface—and how to defend against it.

Security · November 20, 2025 · 5 min read

Practical guide to building safe & secure AI agents

System prompts aren't enough to secure AI agents. As agents move from chatbots to systems that read files, hit APIs, and touch production, we need real runtime protection. Learn how to defend against prompt injection, poisoned tool results, and the 'lethal trifecta' with practical guardrails.

Research · November 19, 2025 · 2 min read

AI Is Getting Better at Everything—Including Being Exploited

As AI models become more capable and obedient, safety improvements struggle to keep pace. The GPT-5.1 safety score drop reveals a structural problem: capability and attack surface scale faster than safety.

Research · November 17, 2025 · 5 min read

Are AI Models Getting Safer? A Data-Driven Look at GPT vs Claude Over Time

Are frontier models actually getting safer to deploy—or just smarter at getting around guardrails? We analyze 18 months of Lamb-Bench safety scores for GPT and Claude models.

Research · November 11, 2025 · 8 min read

Introducing Lamb-Bench: How Safe Are the Models Powering Your Product?

We built Lamb-Bench to solve a problem every founder faces when selling to enterprise: proving AI safety without a standard way to measure it. It's an adversarial testing framework that gives both buyers and sellers a common measurement standard.

Research · October 24, 2025 · 8 min read

VibeSec: The Current State of AI-Agent Security and Compliance

Over the past weeks, we've spoken with dozens of developers who are building AI agents and LLM-powered products. The notes below come directly from those conversations and transcripts.


Join our newsletter

We'll share announcements and content about AI safety.