What is the story about?
In the Mahabharata, Krishna mythologically wielded the Sudarshana Chakra, a weapon cosmically unconstrained within mythic space and time which, after striking its intended target, would return to adorn his index finger. Similarly, Russia’s development of the Burevestnik missile system, marketed as having a virtually unlimited range and a claimed 15-hour, 15,000-km flight capable of traversing the globe, and its likely convergence with artificial intelligence systems pose a real threat to humankind.
The recent fraying of nuclear tempers between the US and Russia has revived the danger of a global nuclear arms race, with US President Donald Trump ordering the restart of nuclear tests, a step from which the US has refrained since its 1992 testing moratorium, a restraint later reinforced by the Comprehensive Test Ban Treaty (CTBT).
The old mechanisms to prevent the spread of nuclear weapons have grown antiquated and lost much of their relevance with the entry of artificial intelligence (AI) into the nuclear arms race. AI integration across the commissioning, deployment, and launch phases of nuclear forces is set to compress decision timelines, amplify sensing and targeting, and reshape command-and-control.
These shifts make a three-way arms race among the United States, Russia, and China faster, more opaque, and more escalation-prone. Combined with ongoing arsenal growth and novel Russian systems like Burevestnik and Poseidon, AI adoption risks turning modernisation into an action–reaction cycle that undermines human judgement and the fragile norms that have restrained nuclear dangers since the Cold War.
Krishna, an Avatara of Vishnu, did not use the Chakra to threaten humankind; he used it as his Maya to blot out the sun so that Arjuna could keep his vow and kill Jayadratha before sunset. This time around, the merger of the nuclear command-and-control structure with AI threatens to play out like Shiva’s Tandav, with the potential to destroy the world. Strategic analysts warn the world is entering a new era as nuclear powers build up and diversify arsenals, creating a permissive backdrop for rapid AI adoption into sensing, planning, logistics, and command networks.
SIPRI’s 2025 assessments underscore rising nuclear risks as constraints fade, making any destabilising technology—especially AI—a pivotal variable in future crisis dynamics. Simultaneously, Russia’s showcasing of Burevestnik and Poseidon advertises bypass routes around missile defences, magnifying incentives for adversaries to field AI-enabled counter-detection, tracking, and interception tools.
Commissioning Nuclear Weapons With AI
In the commissioning phase—spanning R&D, materials, and systems integration—AI improves modelling, discovery, and optimisation across vast parameter spaces, accelerating iteration on warheads, delivery systems, etc. US stockpile stewardship already relies on advanced simulation and subcritical experiments, and AI-enabled methods are increasingly woven into this science-based certification ecosystem. As tools scale, states can shorten design cycles and reduce uncertainty without explosive testing, lowering the political and technical barriers to fielding novel capabilities at pace.
AI-enabled verification and validation can cut both ways by catching defects earlier and enabling predictive maintenance across nuclear platforms, but they can also breed false confidence if models are trained on sparse or biased data. Arms-control experts warn that overreliance on algorithmic outputs in safety and surety contexts risks systematic errors that propagate across an enterprise accustomed to model-driven certification.
The net effect is a faster commissioning pipeline where advantages are contingent on data quality and model governance rather than raw engineering alone—inviting secrecy and hedging that may spur adversaries to match the pace.
Furthermore, the adoption of AI capabilities by one nuclear power could incentivise destabilising responses from others, including arms racing, arsenal modernisation, renunciation of "no first use" policies, or heightened alert statuses. The integration of AI into nuclear operations also carries risks of its own: AI-driven decision-support systems and autonomous weapons could undermine crisis stability by increasing the likelihood of miscalculation during high-alert scenarios, potentially leading to accidental or inadvertent escalation.
Historical incidents like the 1983 Stanislav Petrov episode, in which a Soviet officer judged a satellite warning of incoming US missiles to be a false alarm and thereby averted a possible nuclear response, highlight the dangers of leaning on automated systems. Experts and policymakers broadly agree that without robust governance, agile policies, and international norms, the AI-nuclear nexus could lead to catastrophic outcomes, including nuclear conflict.
AI Disruption
AI-enabled sensing promises to make once-hidden forces easier to find, and alongside detection comes AI-accelerated targeting—prioritising aimpoints, recommending strike packages, and modelling adversary movements—which can shorten the “kill chain” and push commanders toward speed over deliberation. The same techniques applied to active defences and countermeasures drive action–reaction dynamics, with each side seeking dominance in find–fix–finish loops that AI compresses to machine time, not human time. Perception alone can destabilise: if leaders suspect an opponent can use AI to negate survivability, they may disperse forces, alter alert postures, or pre-delegate authorities, all of which reduce crisis stability.
There is no public evidence that any nuclear state has embedded AI directly into formal launch authorisation today, but indirect integration across adjacent systems is expanding and can cascade into nuclear decisions. Analysts highlight the risk of autonomous or semi-autonomous AI agents that learn and act across networks, including the potential for emergent behaviours and unexpected interactions under stress, especially in cyber-contested environments. In the absence of transparency and technical guardrails, the perception that one side is edging toward automation of nuclear decision loops can be as destabilising as the actual adoption.
SIPRI’s 2025 data—Russia at roughly 5,459 warheads, the US at 5,177, and China around 600—anchors a strategic triangle where qualitative modernisation increasingly matters as much as raw numbers. China’s rapid expansion from a historically minimal deterrent creates pressure to field modern command, control, and delivery systems, where AI-enabled data fusion is attractive for scale and speed.
Russia’s Burevestnik and Poseidon emphasise penetrative second-strike and coastal devastation narratives, inviting AI-intensive US and Chinese investments in maritime security, anomaly detection, and autonomous undersea systems to preserve or restore survivability.
As each side leans on AI to hedge against the others’ putative breakthroughs, the cycle feeds on itself: better search creates better concealment, which then demands better search, all under compressing timelines and opaque model performance. The absence of a successor to New START and the suspension of its transparency mechanisms exacerbate this opacity, leaving AI-enabled inferences to fill gaps once handled by inspections and data exchanges. In such conditions, worst-case planning thrives, and AI becomes both the instrument and the accelerant of insecurity spirals.
US discussion about resuming explosive nuclear testing intersects with AI in two ways: first, testing would validate designs beyond the reach of simulation; second, the existence of powerful simulation and stewardship tools argues against the need to test. The United States has maintained its arsenal through a science-based Stockpile Stewardship Program relying on non-explosive experiments and advanced computing, increasingly supplemented by AI for data-rich modelling and diagnostics. Many experts therefore contend there is no practical justification for explosive testing, warning that a detonation would spur reciprocal tests by Russia and China and destroy the non-testing norm.
If testing resumes, AI will likely accelerate test analysis cycles and post-shot interpretations, tightening feedback loops in ways that could speed warhead modernisation across competitors. Conversely, restraint paired with AI-enhanced stewardship can sustain confidence intervals without crossing the explosive threshold, preserving a critical stabiliser in the global regime. The policy choice thus hinges on whether AI is used to justify breaking norms or to reinforce them through better non-explosive assurance.
How AI Disrupts Each Phase
Deployment: AI-enabled analytics challenge the survivability assumptions underpinning deterrence by making once-elusive assets more visible, at least intermittently. This can incentivise hair-trigger postures and preemption in crises as each side fears losing forces on the ground, at sea, or in the air. Integration into defences and countermeasures adds a rapid action–reaction layer that accelerates the arms race beyond human processing speeds.
Commissioning: AI accelerates discovery, optimisation, and certification, compressing time-to-field for new systems and software across the enterprise. Where data are high quality and governance strong, safety and surety can improve; where they are not, failure modes can become systemic and harder to detect. The resulting speed advantage pressures adversaries to keep up, increasing parallel efforts and secrecy that undermine mutual predictability.
Launch cycles: Decision support systems risk automation bias under time pressure, while autonomous agents introduce emergent behaviours and cyber pathways. Even if AI never “pushes the button”, its invisible influence on perception, timing, and confidence can tip leaders toward escalation or misinterpretation. Managing perception becomes as vital as managing technology because suspicion of AI-driven shortcuts can destabilise deterrence even absent direct integration.
Analysts propose bounding AI’s nuclear role through strict human-in-the-loop requirements, red-teaming against adversarial manipulation, and robust validation and verification of any model used anywhere in the nuclear enterprise. Internationally, dialogues that treat the AI–nuclear nexus explicitly—through transparency measures, testing of incident hotlines for AI-generated warnings, and shared taxonomies of unsafe use cases—can reduce misperception. Track 2 and Track 1.5 exchanges, paired with national policies that prohibit AI from authorising or executing nuclear launches, can create reinforcing norms even in the absence of formal treaties.
States should prioritise resilience to AI-enabled intelligence, surveillance and reconnaissance (ISR) by investing in decoys, mobility, deception, and hardened communications, alongside carefully designed thresholds to prevent inadvertent escalatory signalling. Within nuclear enterprises, governance must include dataset provenance tracking, interpretability thresholds for any system influencing nuclear decisions, and independent oversight to audit performance and drift. Without these steps, AI adoption risks becoming a race to the bottom on speed and opacity rather than a controlled enhancement of safety and stability.
The Russia, China and US Outlook
Russia’s unveiling of Burevestnik and Poseidon, framed as second-strike “assurance” systems, invites adversaries to counter with AI-enabled sensing, targeting, and defence concepts, intensifying competitive dynamics around survivability. China’s accelerated buildup from a small base drives investments in command, control, delivery and decision support to manage scaling forces under compressed timelines. US institutions are already integrating AI into the broader nuclear enterprise for security, authentication, and decision assistance, raising the urgency to define boundaries that preserve human judgement.
If the US resumes explosive testing, Russia has stated it would “act accordingly”, and China has warned against breaking the moratorium, creating a permissive context for AI-enhanced test-driven modernisation across all three states. If testing restraint holds, AI’s disruptive potential remains high but can be channelled into stewardship, verification, and crisis communications tools that strengthen stability rather than degrade it. Which path prevails will depend less on raw capability and more on governance choices made in the next few years as New START’s expiration looms and transparency mechanisms atrophy.
Conclusion
AI will not by itself cause a nuclear arms race, but in an era of eroding guardrails and active arsenal growth, it is the accelerant that can turn parallel modernisation into a rapid, destabilising competition across commissioning, deployment, and launch cycles. Without firm human-in-the-loop commitments, transparent guardrails and renewed channels for data exchange and incident management, the United States, Russia, and China are likely to race toward speed and opacity rather than safety and stability. The strategic choice now is whether to harness AI to strengthen stewardship, verification, and communication—or to let it compress timelines, erode survivability, and make miscalculation more likely in the most dangerous domain of statecraft.
(Amitabh Singh teaches at the School of International Studies, JNU, New Delhi. Views expressed are personal and solely those of the author. They do not necessarily reflect Firstpost’s views.)