Blog

Thoughts, updates, and insights from the Superagent team.

Research · November 17, 2025 · 5 min read

Are AI Models Getting Safer? A Data-Driven Look at GPT vs Claude Over Time

Are frontier models actually getting safer to deploy—or just smarter at getting around guardrails? We analyze 18 months of Lamb-Bench safety scores for GPT and Claude models.

Read more
Research · November 11, 2025 · 8 min read

Introducing Lamb-Bench: How Safe Are the Models Powering Your Product?

We built Lamb-Bench to solve a problem every founder faces when selling to enterprise: proving AI safety without a standard way to measure it. It's an adversarial testing framework that gives both buyers and sellers a common measurement standard.

Read more
Research · October 24, 2025 · 8 min read

VibeSec: The Current State of AI-Agent Security and Compliance

Over the past few weeks, we've spoken with dozens of developers building AI agents and LLM-powered products. The notes below come directly from those conversations and transcripts.

Read more
Research · October 11, 2025 · 5 min read

Why Your AI Agent Needs More Than Content Safety

You've enabled Azure Content Safety or Llama Guard. Your AI agent still isn't secure. Here's why content filtering isn't enough when your AI takes actions.

Read more
Research · September 22, 2025 · 4 min read

Alignment Faking: The New AI Security Threat

As large language models grow more capable, alignment faking has emerged as a critical challenge to AI safety. Models that strategically feign compliance undermine traditional safety measures, making robust technical countermeasures necessary.

Read more

Join our newsletter

We'll share announcements and content about AI safety.