Blog

Thoughts, updates, and insights from the Superagent team.

Security · February 18, 2026 · 5 min read

The Cline Incidents and the Broken Security Model

Two Cline security incidents in two months expose the same underlying problem: AI agents treat untrusted content as instructions. The npm supply-chain and prompt-injection attacks reveal why the current security model is fundamentally broken.

Read more
Security · January 25, 2026 · 4 min read

What Can Go Wrong with AI Agents

AI agents fail in ways traditional software doesn't: data leaks, compliance violations, unauthorized actions. Here's what to watch for.

Read more
Security · January 12, 2026 · 3 min read

AI Guardrails Are Useless

Hot take: most AI guardrails on the market today are security theater. Not because the idea is bad, but because of how they're implemented. Most guardrail solutions are generic, static, and disconnected from what actually matters for your specific agent.

Read more
Security · December 1, 2025 · 4 min read

Your System Prompt Is the First Thing Attackers Probe

When attackers target AI agents, they don't start with sophisticated exploits. They start by probing the system prompt—the instructions that define your agent's behavior, tools, and boundaries.

Read more
Security · November 20, 2025 · 5 min read

Practical Guide to Building Safe & Secure AI Agents

System prompts aren't enough to secure AI agents. As agents move from chatbots to systems that read files, hit APIs, and touch production, they need real runtime protection. Learn how to defend against prompt injection, poisoned tool results, and the "lethal trifecta" with practical guardrails.

Read more

Join our newsletter

We'll share announcements and content about AI safety.