For decades, the internet worked on a kind of unspoken balance. Writing software was hard, so only a relatively small group of trained developers did it. Finding bugs was just as difficult, which meant
many vulnerabilities stayed buried for years, sometimes decades. It wasn’t perfect, but it kept things stable enough. That balance is now starting to fall apart.
In a recent essay in the New York Times, Raffi Krikorian, chief technology officer at Mozilla, warns that new AI systems are changing both sides of that equation at once. They are making it easier than ever to create software, and at the same time, far easier to find what’s wrong with it.
The shift is already visible. A powerful AI model developed by Anthropic recently uncovered vulnerabilities that had gone unnoticed for years, among them a 27-year-old flaw in OpenBSD and a 16-year-old bug in FFmpeg, both widely used pieces of software that quietly power parts of the internet.
These aren’t obscure tools. They sit behind things people use every day, from streaming services to secure networks. And the kinds of vulnerabilities being found are the same ones that can be used in ransomware attacks or to break into sensitive systems.
At the same time, AI is lowering the barrier to building software itself.
A growing number of people are now using AI tools to turn simple instructions into working apps, a trend sometimes called “vibe coding.” A shop owner can describe an inventory system, or a clinic can outline a patient portal, and the software gets built for them.
But much of this code is being created without serious security review.
That creates a new kind of risk: the same tools that make it easier to build software also make it easier to break it.
For years, the difficulty of both writing and exploiting code created a kind of safety buffer. Now, that buffer is disappearing. As the essay puts it, the old balance that kept the internet “safe enough” is effectively over.
There’s another layer to the problem.
Much of the internet runs on open-source software, code maintained not by large corporations, but by small teams and volunteers. These projects often operate on limited budgets, even though they support systems used by millions.
The concern is that while large companies may get early access to advanced AI tools to protect themselves, smaller developers and independent creators may not.
That gap could leave parts of the internet far more exposed.
Krikorian argues that the solution isn’t to slow down innovation, but to rethink how security is built into it. AI tools that generate code should also be designed to secure it. And the developers maintaining critical infrastructure need more support, not less.
Because the real shift isn’t just technological; it’s structural.
The internet is no longer just being built by experts. It’s being built by everyone. And as that happens, the systems that protect it need to evolve just as quickly.
The question now isn’t whether AI will change the internet. It already is. The question is whether the protections keeping it stable can keep up.