On the morning of July 19, 2024, a single content update from CrowdStrike crashed roughly 8.5 million Windows machines and brought airlines, hospitals, banks, broadcasters, and emergency services to a stop within hours. Fortune 500 companies absorbed an estimated $5.4 billion in direct losses, of which insurance covered only ten to twenty per cent. There was no attacker, no malware, no breach. Fitch Ratings pointed to “a growing risk of single points of failure” and warned that the risk would only intensify as companies consolidated onto fewer dominant vendors. We called the event an outage. The more honest description would have been a confession.
For most of its history, cybersecurity has rested on a comforting story. Systems are essentially sound. Failures arrive as identifiable defects, awaiting discovery and repair. The whole industrial apparatus of modern defence, including vulnerability scanners, bug bounties, patch cycles, and the elaborate accounting of CVEs, treats brokenness as the exception and the foundation as reliable. That story was always partly fiction. After CrowdStrike, and after what arrived this April, it is no longer tenable.
On April 7, 2026, Anthropic released a preview of a frontier AI model called Claude Mythos, restricting access to twelve launch partners and roughly forty additional organisations under Project Glasswing. The launch partners include AWS, Apple, Google, Microsoft, JPMorganChase, the Linux Foundation, NVIDIA, and Palo Alto Networks, among others. In its initial testing, Mythos identified thousands of previously unknown vulnerabilities across major operating systems and browsers, including a 27-year-old bug in OpenBSD and a 17-year-old remote code execution vulnerability in FreeBSD that allowed any unauthenticated user on the internet to take complete control of a server running NFS. The UK AI Security Institute reported that Mythos was the first AI model able to complete its test of an end-to-end network compromise.
The natural impulse is to read this as the arrival of a faster scanner. That impulse misses what is actually new. What sets Mythos apart is not the speed at which it finds bugs. It is its ability to reason about systems whole. Anthropic’s own researcher, Nicholas Carlini, described the capability in unusually plain terms: the model can chain three, four, or five vulnerabilities into sophisticated end-to-end exploits, in a way no individual finding would predict. It runs autonomously on large, unfamiliar codebases. It works on binary-only software. It holds an entire system in view and asks not what is broken, but what is brittle.
For thirty years, defenders have spent their careers hunting bugs. Attackers are about to start hunting decisions.
The distinction matters more than it sounds. A bug is local: a flaw in a single function, a misconfigured server, a missing input check. Bugs can be fixed in isolation. The whole logic of modern security depends on this locality, on the assumption that risk is bounded.
A design choice is something else. It is a trade-off, made at one moment, that becomes a permanent feature of how a system behaves under stress. A microservice architecture trades latency for modularity. Tight coupling trades resilience for performance. A shared identity layer trades defence in depth for operational simplicity. None of these decisions is wrong on its own. Each, at scale, becomes the surface across which the next attack will travel.
I have come to think of this as composition risk: the vulnerability that emerges not from any single component but from the way components are wired together. It is invisible to code review, barely visible in architecture diagrams, and only becomes legible when something fails. By then, the cost has compounded across every system that shares the same composition. The CrowdStrike incident was a composition failure, not a code failure. The faulty file was a small thing. The architecture that allowed it to reach 8.5 million endpoints simultaneously, with no staggered rollout and no customer-side throttle, was the actual exposure.
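To make composition risk concrete, here is a minimal sketch in Python, using an invented dependency map rather than any real organisation's architecture. It treats systems as nodes in a graph and asks the question composition risk forces on us: if this one shared component fails, how far does the failure travel?

```python
from collections import defaultdict

# Hypothetical dependency map: each system -> the shared components it relies on.
# The edges encode composition, not code quality.
DEPENDS_ON = {
    "payments":      ["identity-provider", "cloud-region-a", "observability"],
    "trading":       ["identity-provider", "cloud-region-a"],
    "mobile-app":    ["identity-provider", "api-gateway"],
    "api-gateway":   ["cloud-region-a", "observability"],
    "observability": ["cloud-region-a"],
}

def blast_radius(component: str) -> set[str]:
    """Return every system that transitively depends on `component`."""
    # Invert the graph: component -> systems that depend on it directly.
    dependents = defaultdict(set)
    for system, deps in DEPENDS_ON.items():
        for dep in deps:
            dependents[dep].add(system)

    # Walk outward from the failing component.
    affected, frontier = set(), [component]
    while frontier:
        node = frontier.pop()
        for system in dependents.get(node, ()):
            if system not in affected:
                affected.add(system)
                frontier.append(system)
    return affected

if __name__ == "__main__":
    components = {dep for deps in DEPENDS_ON.values() for dep in deps}
    radii = {c: blast_radius(c) for c in components}
    for c, affected in sorted(radii.items(), key=lambda kv: -len(kv[1])):
        print(f"{c}: failure reaches {len(affected)} systems -> {sorted(affected)}")
```

Run against the hypothetical map above, the shared cloud region turns out to reach every other system, which is exactly the kind of fact that never appears in a CVE count or a controls checklist.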
Mythos is the first technology that can reason about composition risk at scale. That is why this moment matters.
Mythos does not create new attack surface. The attack surface was always there. Every dependency we never inventoried, every shared library we never re-audited, every architectural decision nobody has revisited in fifteen years has been quietly accumulating into latent surface. Anthropic’s testing found vulnerabilities going back nearly three decades, sitting inside software that runs the modern internet. They did not emerge in 2026. They have been there. The defining feature of the moment is not that risk has increased. It is that risk has become legible.
That changes the politics of security entirely. When weaknesses were buried, executives could plausibly defer them. Once a tool exists that can produce a comprehensive list of an organisation’s structural weaknesses in an afternoon, deferral becomes a documented choice. Liability follows visibility. The lawyers will get there before the engineers do.
Charles Perrow saw this forty years ago. In Normal Accidents, his 1984 study of disasters in nuclear plants, aviation, and chemical refineries, he argued that certain failures are not the consequence of carelessness but of complexity itself. When a system has enough interacting parts, tightly coupled, no level of diligence prevents the eventual cascade. He called such failures normal because they were structural. The systems we now run dwarf anything Perrow studied; we have been operating infrastructure that exceeds human comprehension for decades. Mythos does not change that fact. It ends our ability to ignore it.
For thirty years, attackers held a structural advantage that defenders learned to live with: attackers needed to find one flaw, defenders had to cover all of them. That asymmetry was tolerable at human speed. It is breaking now. Mythos sits with defenders for now, through Glasswing, but capabilities of this class will reach attackers far sooner than the industry has internalised. Jeff Williams, chief technology officer of Contrast Security, told Foreign Policy that “within six to nine months” other countries will either build their own equivalent models or find ways to access existing ones. The Mythos preview was breached within two weeks of launch by a private Discord community that guessed the URL. The defender window is narrower than the announcements suggest, and what organisations do with it will determine whether the next decade favours attackers or defenders. This is a generational policy moment being treated, in most boardrooms, as a procurement question.
Nowhere is the cost of misreading this higher than in financial services.
Modern technology and operational risk modelling rest on the assumption that adverse events are independent, or at most weakly correlated. That assumption holds well enough most of the time to allow portfolios to be diversified, capital to be allocated efficiently, and insurance to be priced. When the underlying technology systems are themselves correlated through shared architecture, the assumption collapses precisely when it matters most. A common vulnerability in a widely deployed authentication library is not one risk repeated across many institutions. It is one risk affecting all of them simultaneously, in the same minute, in the same direction.
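The arithmetic behind that collapse is blunt. A rough sketch, with invented numbers chosen purely for illustration: twenty institutions that fail independently almost never fail together, while twenty institutions wired to the same dependency fail together exactly as often as that dependency does.

```python
# Illustrative only: the probabilities are invented to show the shape of the problem,
# not to estimate any real institution's risk.
n_institutions = 20
p_fail = 0.01  # annual failure probability of one institution (or of the shared dependency)

# Case 1: failures are independent. Probability that all twenty fail in the same year.
p_all_independent = p_fail ** n_institutions

# Case 2: all twenty share one dependency. If it fails, they all fail together.
p_all_shared = p_fail

print(f"Independent failures, all 20 at once: {p_all_independent:.0e}")  # ~1e-40
print(f"Shared dependency,   all 20 at once: {p_all_shared:.0e}")        # 1e-02
```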
Cyber insurance markets have been adjusting to this reality for years. The CrowdStrike outage produced what CyberCube called potentially the single worst loss in the cyber insurance sector in two decades, and yet, because policy waiting periods and exclusions blunted the impact, fewer than one per cent of insured companies globally filed a claim. The lesson the underwriting community took from this was not that the risk was overstated. It was that policy structure had failed to keep pace with the architecture. The World Economic Forum’s Global Cybersecurity Outlook 2026 puts the broader picture in stark numbers: 65 per cent of large companies now cite third-party and supply chain vulnerabilities as their greatest cyber resilience challenge, up from 54 per cent the previous year, and 87 per cent identify AI-related vulnerabilities as the fastest-growing cyber risk.
The regulatory frame is catching up too slowly. DORA, in force across the EU since January 2025, requires financial entities to monitor third-party concentration, and last November the European Supervisory Authorities designated their first Critical ICT Third-Party Providers for direct oversight, with Microsoft Ireland on the list. These are real steps. But they look at provider concentration, not at architectural composition shared across institutions. The specific common dependencies that turn many institutions into a single risk remain outside regulatory view. A bank that uses the same cloud, the same identity provider, the same observability stack, and the same model APIs as twenty of its peers is not running an independent system. It is running a node in a larger one. The illusion of independent risk has been one of the most expensive mistakes the financial sector has ever made. It is about to become unaffordable.
I would expect three things to happen within the next twenty-four months. Cyber insurance carriers will introduce architectural exposure as a separate underwriting category, distinct from controls maturity, with a material pricing differential between concentrated and distributed architectures. Rating agencies will begin to incorporate technology composition into operational resilience scores, the way they currently incorporate funding concentration into liquidity scores. And boards will start asking a question they have never asked before: what is our composition risk, and with whom do we share it?
Serious defence in this picture does not look like a bigger security operations centre or a faster patch cycle, though both will still be needed. It looks like the institutionalisation of three things the industry currently treats as engineering preferences rather than risk controls.
The first is what I would call architectural attestation. Today, organisations attest to controls: multifactor authentication, encryption at rest, vulnerability management. None of this captures the architectural composition that determines blast radius. A meaningful attestation would describe shared dependencies, common providers, and the specific systems whose failure would cascade across the organisation. Auditors are not currently equipped to evaluate this. They will need to be.
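No standard format for such an attestation exists yet, so the following is only a sketch of what one might record, with invented field names: the shared dependencies, which of them are common to peer institutions, and the systems that fail when they do.

```python
from dataclasses import dataclass, field

# Hypothetical schema: these field names are invented for illustration, not an existing standard.
@dataclass
class SharedDependency:
    name: str                       # e.g. an identity provider or endpoint agent
    provider: str                   # the vendor operating it
    systems_affected: list[str]     # internal systems that fail if it fails
    shared_with_peers: bool         # do peer institutions rely on the same provider?
    last_architecture_review: str   # ISO date of the last deliberate re-decision

@dataclass
class ArchitecturalAttestation:
    organisation: str
    period: str
    dependencies: list[SharedDependency] = field(default_factory=list)

    def single_points_of_failure(self, threshold: int = 3) -> list[SharedDependency]:
        """Dependencies whose failure cascades across more than `threshold` systems."""
        return [d for d in self.dependencies if len(d.systems_affected) > threshold]
```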
The second is continuous architectural simulation. Penetration testing and red teaming are point-in-time exercises that cannot keep up with the rate at which modern systems change, and they were never designed to evaluate composition. The next generation of defensive practice will involve constant adversarial simulation against environments that mirror production, run by the same class of model that attackers use, scoped specifically to find composition vulnerabilities before they are exploited. Institutions that build this capability now will hold a decade-long advantage over those that do not.
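What “scoped specifically to find composition vulnerabilities” might mean in practice is easiest to show with a sketch, again with invented component names and severities. Instead of scoring findings one at a time, the simulation enumerates chains of reachable components, so that several individually minor weaknesses connecting an internet-facing service to a critical data store surface as one serious path.

```python
# Hypothetical environment: an edge means "a foothold on A gives reachable access to B".
# Severities are per-component findings; the object of interest is the chain, not the node.
REACHES = {
    "public-web":        ["api-gateway"],
    "api-gateway":       ["identity-provider", "observability"],
    "observability":     ["build-server"],
    "identity-provider": ["customer-db"],
    "build-server":      ["customer-db"],
}
FINDING_SEVERITY = {  # 1 = low, 10 = critical; numbers are invented
    "public-web": 2, "api-gateway": 3, "observability": 2,
    "identity-provider": 4, "build-server": 3, "customer-db": 0,
}

def exploit_chains(start: str, target: str, path=None):
    """Yield every cycle-free path of reachable components from `start` to `target`."""
    path = (path or []) + [start]
    if start == target:
        yield path
        return
    for nxt in REACHES.get(start, []):
        if nxt not in path:  # avoid revisiting components
            yield from exploit_chains(nxt, target, path)

if __name__ == "__main__":
    for chain in exploit_chains("public-web", "customer-db"):
        hops = " -> ".join(chain)
        worst_single = max(FINDING_SEVERITY[c] for c in chain)
        print(f"{hops}  (worst single finding: {worst_single}/10, hops: {len(chain) - 1})")
```

The point of the sketch is the output, not the graph search: every chain it prints is composed of findings that would each look tolerable in isolation.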
The third is architectural simplicity as a board-level priority. Most complexity in modern systems is not necessary; it is the residue of decisions made under deadline pressure or organisational politics. A system you can hold in your head is a system you can defend. A system nobody fully understands is a system that will fail in ways no one anticipated. Boards that do not start rewarding the engineers who simplify, instead of the ones who add capability, will be operating outside their fiduciary duty within the decade.
Cybersecurity is the first place this dynamic appears, but it will not be the last. Mythos is an early instance of a more general phenomenon: artificial intelligence that can reason about whole systems faster than those systems can be redesigned. The same capability will appear, in the next few years, in supply chains, energy grids, capital markets, and biology. In each case, the institutions that depend on those systems will face the same choice cybersecurity faces now. They can keep treating risk as a localised problem of identifiable defects. Or they can recognise that risk has become a structural property of the systems themselves, and that the institutions designed to govern those systems were built for a slower, simpler world.
The deeper question is not whether to defend against Mythos. It is whether we can build institutions capable of governing systems we no longer fully understand. That is the work of the next decade, and it will determine whether technology continues to compound human capacity or begins to compound human exposure.
For most of the modern internet’s life, security has been a kind of janitorial work. We hunted for spills and cleaned them up. We patched, we audited, we wrote ever more elaborate rules about who could touch what. The implicit promise was that diligence would eventually exhaust the supply of spills.
The supply cannot be exhausted. What is changing is not the diligence required of defenders. It is the location of the problem.
The future of security will not be about eliminating weaknesses. It will be about understanding the systems we have created, and having the discipline to build the next ones differently.
Aditya Vikram Kashyap is currently Vice-President at Morgan Stanley, New York. Kashyap is an award-winning technology leader. His core competencies focus on enterprise-scale AI, digital transformation, and building ethical innovation cultures. Views expressed are personal and solely those of the author, and do not necessarily reflect News18’s views.