Blog

Thoughts, updates, and insights from the Superagent team.

Announcements · January 6, 2026 · 2 min read

Introducing Superagent Guard

Purpose-trained models that detect prompt injections, identify jailbreak attempts, and enforce guardrails at runtime. Optimized for deployment as a security layer in AI agent systems.

Announcements · October 10, 2025 · 4 min read

Shipped: Runtime Redaction and Command-Line Security

The past two weeks brought runtime redaction, a powerful CLI, URL whitelisting, and a developer experience that puts security directly in your workflow. Here's what shipped and why it matters for teams building with AI agents.

Announcements · September 24, 2025 · 2 min read

Introducing Superagent — Defend Your AI Agents at Runtime

Today, we are proud to announce Superagent — the runtime defense platform that keeps your AI agents safe from prompt injections, malicious tool calls, and data leaks.

Announcements · August 19, 2025 · 3 min read

Announcing Support for Cursor Agent and OpenCode

Every developer has preferences. Some love Claude's reasoning approach. Others prefer Cursor's interface and workflow. But you shouldn't have to compromise on security just because you prefer a certain agent. VibeKit's universal agent support provides a consistent security and observability layer that works across all your preferred agents.

Announcements · August 12, 2025 · 3 min read

Introducing VibeKit CLI

Every time you run an AI coding agent, you're giving it direct access to your environment. That moment of hesitation before you let the agent execute commands? We solved that. VibeKit is the safety layer that should have existed from day one.

Announcements · July 31, 2025 · 3 min read

Introducing Dagger Local Sandboxes

VibeKit now supports Dagger-powered local sandboxes: fully local AI code execution with container isolation and zero cloud dependencies, for maximum privacy and performance in AI coding workflows.

