Blog
Thoughts, updates, and insights from the Superagent team.
Are AI Models Getting Safer? A Data-Driven Look at GPT vs Claude Over Time
Are frontier models actually getting safer to deploy—or just smarter at getting around guardrails? We analyze 18 months of Lamb-Bench safety scores for GPT and Claude models.
Introducing Lamb-Bench: How Safe Are the Models Powering Your Product?
We built Lamb-Bench to solve a problem every founder faces when selling to enterprise: proving AI safety without a standard way to measure it. It's an adversarial testing framework that gives buyers and sellers a common measurement standard.
VibeSec: The Current State of AI-Agent Security and Compliance
Over the past few weeks, we've spoken with dozens of developers building AI agents and LLM-powered products. The notes that follow come directly from those conversations and transcripts.
Why Your AI Agent Needs More Than Content Safety
You've enabled Azure Content Safety or Llama Guard. Your AI agent still isn't secure. Here's why content filtering isn't enough when your AI takes actions.
Alignment Faking: The New AI Security Threat
As large language models grow more capable, alignment faking, in which a model strategically appears aligned during evaluation while behaving differently once deployed, has emerged as a critical AI safety challenge. This kind of strategic deception undermines traditional safety measures and calls for robust technical countermeasures.
Join our newsletter
We'll share announcements and content about AI safety.