Customer Story · Fintech / B2B Payments

How Capchase ships AI features without losing sleep

Capchase runs every file, prompt, and web fetch through Brin before it ever reaches an agent — and enforces it with a lint rule, not a README.

April 16, 2026
$2.5B+
Volume processed
50,000+
Transactions
9
Countries

Capchase is the B2B payments and financing platform thousands of software and hardware vendors use to offer flexible terms to their buyers while still getting paid upfront. It's a fintech. When a fintech ships AI features into a pipeline that touches contracts, underwriting data, and buyer information, the failure modes aren't theoretical. A prompt injection that exfiltrates a customer record. A poisoned web fetch that leaks an internal credential. An LLM that confidently misreads a contract clause for a finance team relying on it.

The challenge

Daniel Füvesi is a lead engineer at Capchase. He went looking for a security layer the moment two specific use cases hit his desk.

Document ingestion. An internal tool that extracts data from PDFs to accelerate financing. Daniel tested it himself and found how trivial it was to plant a prompt injection inside a PDF: invisible to a human skimming the document, loud and clear to the model parsing it.

"These files can come in from anywhere. Even if I assume positive intent from our customers, I still have to think about the possibility of malicious content getting into our system."

Open web research. The second was an agent that fetches LLM-friendly content from arbitrary URLs as part of an internal workflow. Powerful, but every webpage on the public internet is now a potential attack surface for indirect prompt injection.

Underneath both concerns was the bigger fear Daniel kept returning to: the unknown unknowns.

"The fear, the uncertainty, and the innovation all come together in this fourth quadrant. I don't know what the next malicious thing is going to be. Prompting techniques that didn't exist a year ago are a different reality today, and tomorrow there'll be a new attack vector you just don't see coming."

Why Brin

Daniel first heard about Brin on the Mastra AI podcast in October 2025. Three things convinced him to actually try it.

Security as a design constraint, not a bolt-on. That matched how Capchase already thought about building systems.

"Security is not something you bolt on after the fact. It's better to design around strong security principles from day one than to try to work around things later."

Developer experience. "What caught my attention was the ease of use. It was refreshing to see a provider in this space that actually prioritized the developer experience. A simple layer you could drop in."

No vendor lock-in. A baseline he could adopt without surrendering control of his architecture.

How it's wired in

Capchase puts Brin in front of two surfaces: what comes in, what goes out.

Brin on inputs. Every file or user prompt is classified for prompt injection and other agentic threats before it ever reaches an agent, internal or customer-facing. This catches exactly the PDF attack vector Daniel had built and tested himself.
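The input gate can be sketched as follows. The story doesn't show Brin's actual classifier API, so `classify` here is a stand-in with a trivial keyword heuristic just to keep the sketch runnable; the shape to notice is the fail-closed gate in front of the agent, not the detection logic.

```typescript
// inputGate.ts — classify untrusted text before it reaches an agent.
// classify() is a hypothetical stand-in for Brin's classifier; a real
// detector is far more sophisticated than this keyword check.
type Verdict = { safe: boolean; reason?: string };

function classify(text: string): Verdict {
  const markers = ["ignore previous instructions", "system prompt"];
  const hit = markers.find((m) => text.toLowerCase().includes(m));
  return hit
    ? { safe: false, reason: `suspected injection: "${hit}"` }
    : { safe: true };
}

export function gateInput(text: string): string {
  const verdict = classify(text);
  if (!verdict.safe) {
    // Fail closed: flagged content never enters the agent's context.
    throw new Error(`Input rejected: ${verdict.reason}`);
  }
  return text;
}
```

The same gate sits in front of both internal and customer-facing agents, so there is one choke point to tune rather than per-feature checks.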

Brin on outbound fetches. Every web fetch from a research agent is scored by Brin before the content ever touches the LLM context. If a domain fails the trust check, the fetch is blocked at the middleware layer. Secure the context, not the agent.

Rather than documenting "please use the safe fetch wrapper" in a README and hoping developers remembered, Daniel wrapped Node's native fetch with a Brin-scored version, then added a lint rule that forbids direct use of the native primitive entirely. The secure path became the only path. An AI coding assistant generating new code inside the repo physically cannot reach for the unsafe fetch. A new hire onboarding next month is secure by default without ever reading a security doc.
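The wrapper half of that pattern might look like the sketch below. `scoreUrl` is a hypothetical stand-in for the Brin trust check (the story doesn't show its real API), stubbed locally with a blocklist so the example is self-contained; the 0.5 threshold is likewise illustrative.

```typescript
// safeFetch.ts — the only fetch the lint rules allow.
// scoreUrl is a hypothetical stand-in for Brin's trust check; here a
// local blocklist stub keeps the sketch self-contained.
async function scoreUrl(url: string): Promise<number> {
  const blocked = ["evil.example"];
  return blocked.includes(new URL(url).hostname) ? 0 : 1;
}

export async function safeFetch(
  input: string,
  init?: RequestInit,
): Promise<Response> {
  const trust = await scoreUrl(input);
  if (trust < 0.5) {
    // Fail closed: blocked content never touches the LLM context.
    throw new Error(`Blocked by trust check: ${input}`);
  }
  return fetch(input, init); // the one sanctioned call site
}
```

The enforcement half needs no custom tooling: ESLint's built-in `no-restricted-globals` rule can ban the native primitive repo-wide, e.g. `"no-restricted-globals": ["error", { "name": "fetch", "message": "Use safeFetch instead." }]`, with the wrapper module itself carrying a one-line disable comment at its single sanctioned call site.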

"I didn't want to fight with developers about which package to use. You put it in the linting rules and it becomes part of the code conventions, same as any other style rule. Nobody has to think about it."

Most security teams ship policies and hope. Capchase shipped a linter rule and made the policy structurally impossible to violate. Security as architecture, not security as documentation.

Results

Unknown unknowns have a floor. Daniel can't predict every new injection technique. He doesn't have to. Every file and every fetched URL passes through a layer that evolves faster than his roadmap does.

No decentralized security decisions. Individual developers no longer choose whether to apply guardrails on a given feature. It's enforced at the lint and middleware layer, across every service that ships out of the repo.

AI assistants stay in bounds. New code written by Capchase's coding agents inherits the same safe-fetch constraint humans do. Secure by default before review.

Innovation speed preserved. The team keeps shipping AI features at full pace. Brin runs inline and stays out of the inner loop.

Agents get more room to run. With a baseline in place, Capchase is comfortable handing agents the broader access that makes them actually useful.

"I wish I could just let our agents run free and solve all our problems. But at what cost? Brin helps us sleep better at night. It's not airtight, nothing is, but at least there's real guardrails in place while we do the work."

Quotes in this story are from Daniel Füvesi, Lead Engineer at Capchase.