Blog
Thoughts, updates, and insights from the Superagent team.
Research•October 24, 2025•8 min read
VibeSec: The Current State of AI-Agent Security and Compliance
Over the past few weeks, we've spoken with dozens of developers building AI agents and LLM-powered products. The notes below come directly from those conversations and transcripts.
Research•October 11, 2025•5 min read
Why Your AI Agent Needs More Than Content Safety
You've enabled Azure Content Safety or Llama Guard. Your AI agent still isn't secure. Here's why content filtering isn't enough when your AI takes actions.
Research•September 22, 2025•4 min read
Alignment Faking: The New AI Security Threat
The rise of sophisticated large language models has made alignment faking a critical challenge for AI safety. This strategic deception undermines traditional safety measures and demands robust technical countermeasures.
Join our newsletter
We'll share announcements and content about AI safety.