opinions

The 1700s paradox

3 min read
By Alan Zabihi

In a thought-provoking post on X, Klarna CEO Sebastian Siemiatkowski offered an intriguing analogy about AI capability. Imagine standing in the year 1700 and claiming that humans could build cars, computers, and rockets. Were you right or wrong? The materials existed. The human brain had the same capacity it has today. Yet these achievements lay centuries in the future.

This paradox perfectly captures our current moment in artificial intelligence. We have models with remarkable reasoning capabilities – the equivalent of the human brain's raw potential. But like our ancestors in 1700, we're missing something crucial: the infrastructure to turn that potential into reality.

The power and limits of raw capability

Sebastian argues that AI has achieved the fundamental breakthrough of "reasoning." He's right. Today's models can do many of the things we can: reason through problems, create new ideas, and apply knowledge across domains.

But just as 1700s humans couldn't immediately leap to building rockets, we can't simply point these models at complex problems and expect transformative results. The raw computational capability – while necessary – isn't sufficient.

The two paths we must walk

This reveals a crucial insight: we need to advance in two distinct but complementary directions.

The first path is continuing to develop smarter models through increased computation and learning. OpenAI's recently announced o3 model demonstrates this trajectory, achieving breakthrough performance in complex reasoning tasks through massive computational scale. Their success proves that pushing the boundaries of model training and raw computational power remains crucial.

The second path is building infrastructure. At the foundation, we need specialized AI hardware like GPUs and LPUs to enable massive computation. That hardware in turn requires physical infrastructure – datacenters to house it and power plants, even nuclear ones, to run it. Above that, we need software infrastructure like open-source interpreters and libraries that make these capabilities accessible. Finally, we need human interfaces, like Superagent's workspace for autonomous agents, to put this computational power to use. These pieces must work together as an ecosystem to unlock AI's full potential.

Beyond automation

However, there's a crucial difference between our vision and Sebastian's. While he focuses on AI's capability to automate current human work, we must aim higher. The goal isn't simply to automate existing office tasks or replace current workflows. That would be like using early industrial machinery to merely replicate manual craft production, rather than enabling entirely new classes of goods like automobiles.

Instead, we need to create conditions where AI can achieve things that were previously impossible. Infrastructure that doesn't just enable automated work, but entirely new categories of work. Systems where humans and AI can collaborate to solve problems we couldn't even approach before.

Alan Zabihi

Co-founder & CEO

Follow on X

