Blog
Thoughts, updates, and insights from the Superagent team.
Open Source AI Models: A Safety Score Reality Check
The open source AI movement has democratized access to powerful language models, enabling developers and organizations to deploy sophisticated AI systems without vendor lock-in or prohibitive costs.
Your System Prompt Is the First Thing Attackers Probe
When attackers target AI agents, they don't start with sophisticated exploits. They start by probing the system prompt—the instructions that define your agent's behavior, tools, and boundaries.
Your RAG Pipeline Is One Prompt Away From a Jailbreak
RAG is marketed as a safety feature, but connect it to agents that browse, call APIs, or touch databases, and every document becomes a potential jailbreak payload. Learn how malicious files, knowledge base poisoning, and indirect prompt injection turn RAG into an attack surface—and how to defend against it.
Practical Guide to Building Safe & Secure AI Agents
System prompts aren't enough to secure AI agents. As agents move from chatbots to systems that read files, hit APIs, and touch production, we need real runtime protection. Learn how to defend against prompt injection, poisoned tool results, and the 'lethal trifecta' with practical guardrails.
AI Is Getting Better at Everything—Including Being Exploited
As AI models become more capable and obedient, safety improvements struggle to keep pace. The GPT-5.1 safety score drop reveals a structural problem: capability and attack surface scale faster than safety.
Are AI Models Getting Safer? A Data-Driven Look at GPT vs Claude Over Time
Are frontier models actually getting safer to deploy—or just smarter at getting around guardrails? We analyze 18 months of Lamb-Bench safety scores for GPT and Claude models.
Join our newsletter
We'll share announcements and content about AI safety.