Debunking AI Security Hype
Recent claims about advanced AI models like Anthropic's proprietary 'Mythos' being indispensable for national security are largely exaggerated marketing.
While these models can indeed discover software vulnerabilities, the process is resource-intensive: it can take thousands of attempts and substantial financial investment, as the $100 million project fund behind Mythos suggests. Independent research indicates that cheaper, open-weight models can achieve comparable results. For continuous scanning, what matters is the cost and accessibility of flaw discovery, not exclusive access to a single cutting-edge model. Moreover, many of the vulnerabilities these models surface are not readily exploitable, and the fixation on 'zero-day' flaws overshadows what cyberattacks typically exploit: unpatched known vulnerabilities and inadequate defense-in-depth.
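The cost argument above can be made concrete with a back-of-the-envelope model. All figures below are hypothetical placeholders chosen for illustration, not measured prices for any real model or service:

```python
# Back-of-the-envelope cost model for continuous vulnerability scanning.
# Every number here is a hypothetical placeholder, not a measured price.

def scan_cost(cost_per_attempt: float, attempts_per_flaw: int, flaws_per_cycle: int) -> float:
    """Expected spend to surface `flaws_per_cycle` candidate flaws,
    assuming each flaw takes `attempts_per_flaw` model attempts."""
    return cost_per_attempt * attempts_per_flaw * flaws_per_cycle

# Hypothetical comparison: frontier proprietary API vs. self-hosted open-weight model.
proprietary = scan_cost(cost_per_attempt=0.50, attempts_per_flaw=1000, flaws_per_cycle=20)
open_weight = scan_cost(cost_per_attempt=0.02, attempts_per_flaw=1500, flaws_per_cycle=20)

print(f"proprietary per scan cycle: ${proprietary:,.0f}")   # $10,000
print(f"open-weight per scan cycle: ${open_weight:,.0f}")   # $600
```

The point of the sketch is that for a recurring scan, per-attempt cost dominates even if the cheaper model needs more attempts per flaw, which is why accessibility matters more than having the single strongest model.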
Open Source: The Sovereign Choice
The discourse around advanced AI models and cybersecurity, particularly proprietary systems like 'Mythos', is often driven by a self-serving motive: erecting regulatory barriers against competitors, especially those in China and India that favor open-source AI. India's critical information infrastructure (CII) faces an unacceptable supply-chain risk if it becomes dependent on proprietary models. Such reliance means access to vital cyber defenses could be revoked by the foreign policy decisions of another nation, effectively conceding digital sovereignty. The claim that this dependency improves security is not merely a fallacy but the inverse of the truth. Embracing open-source AI lets India modify and locally run its AI tools, retaining control and fostering innovation, a stark contrast to the precarious position of relying on systems hostage to foreign policy shifts.
Openness Enhances Security
The debate over whether AI advances favor attackers or defenders has an answer rooted in software philosophy. Proprietary software has historically leaned on "security through obscurity" by hiding its source code, so AI's ability to analyze even stripped binaries erodes a core defense. Free and Open Source Software (FOSS), by contrast, thrives on openness, much as modern cryptography assumes its algorithms are public, embodying the principle that "given enough eyeballs, all bugs are shallow." AI now amplifies this openness: FOSS developers can point diverse AI toolchains at their code to find and fix vulnerabilities, extending the "eyeballs" to include AI agents and raw compute. Attackers may find proprietary systems easier to exploit, but defenders using open-source models gain a decisive advantage, the ability to adapt and harden their systems collectively and efficiently, making open-source AI a net positive for overall cybersecurity.
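As one illustration of the "AI eyeballs" idea, here is a minimal sketch of a review harness that wraps each source file in an audit prompt for a locally run open-weight model. The `query_local_model` function is a deliberate stub: no specific inference API is assumed, since that depends on whichever toolchain a project deploys.

```python
from pathlib import Path

# Hypothetical audit prompt; a real harness would tune this per language.
AUDIT_PROMPT = (
    "You are a security reviewer. List any memory-safety, injection, or "
    "logic flaws in the following file, with line references:\n\n{code}"
)

def build_audit_prompts(repo_root: str, suffixes=(".c", ".py")) -> dict:
    """Collect source files under repo_root and wrap each in an audit prompt."""
    prompts = {}
    for path in Path(repo_root).rglob("*"):
        if path.is_file() and path.suffix in suffixes:
            prompts[str(path)] = AUDIT_PROMPT.format(code=path.read_text())
    return prompts

def query_local_model(prompt: str) -> str:
    # Stub: replace with a call to whatever locally hosted open-weight
    # model you run. No specific API or endpoint is assumed here.
    raise NotImplementedError

if __name__ == "__main__":
    for filename, prompt in build_audit_prompts(".").items():
        print(f"would audit {filename} ({len(prompt)} chars)")
```

Because the model runs locally, the same loop can be scheduled on every commit without sending code, or sovereignty, to a foreign-controlled service.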
Learning from Past Hype
History shows that claims of AI models being "too dangerous" to release, such as OpenAI's initial stance on GPT-2, often stem from self-serving hype rather than genuine threats. Anthropic's current narrative surrounding 'Mythos' echoes this pattern. India must learn to disregard such commercially motivated claims and instead champion the adoption of Free and Open Source Software (FOSS) and open AI models. This strategic shift is crucial for safeguarding digital sovereignty and bolstering national security. By prioritizing open-source solutions, India can avoid the pitfalls of technological dependency and foster an environment where its cybersecurity infrastructure is robust, adaptable, and independently controlled, rather than vulnerable to the whims of external powers or the marketing strategies of AI developers.