Blog
Thoughts, updates, and insights from the Superagent team.
Introducing Lamb-Bench: How Safe Are the Models Powering Your Product?
We built Lamb-Bench to solve a problem every founder faces when selling to enterprise: proving AI safety when there is no standard way to measure it. Lamb-Bench is an adversarial testing framework that gives buyers and sellers a common measurement standard.
VibeSec: The Current State of AI-Agent Security and Compliance
Over the past few weeks, we've spoken with dozens of developers building AI agents and LLM-powered products. The notes below come directly from those conversations and transcripts.
The March of Nines
The gap between a working demo and a reliable product is vast. Andrej Karpathy calls this the 'march of nines': each additional nine of reliability takes as much work as all the previous ones combined. This is the hidden engineering challenge behind every production AI system.
The case for small language models
Most agents today rely on large, general-purpose models built to do everything. If your agent has a single, well-defined job, it should also have a model designed for that job. This is the case for small language models: models that handle one task, run locally, and can be retrained as your data evolves.
Why Your AI Agent Needs More Than Content Safety
You've enabled Azure Content Safety or Llama Guard. Your AI agent still isn't secure. Here's why content filtering isn't enough when your AI takes actions.
Shipped: Runtime Redaction and Command-Line Security
The past two weeks brought runtime redaction, a powerful CLI, URL whitelisting, and a developer experience that puts security directly in your workflow. Here's what shipped and why it matters for teams building with AI agents.
Join our newsletter
We'll share announcements and content about AI safety.