Politics & News
Trending

AI & Nuclear Weapons: Escalation Risks, Deterrence Potential

Rapid advancements in technology have raised debates concerning nuclear strategy and stability, especially among nuclear-weapon states. Policymakers worldwide are concerned about dangers posed by artificial intelligence (AI) if misused in nuclear operations.

In 2023, the US Congress introduced a legislation to ban AI systems from nuclear weapons. During the same year, US President Joe Biden signed an Executive Order on the Safe, Secure and Trustworthy Development and Use of AI.

Demands for China and Russia

A US official urged China and Russia to issue declarations similar to those issued by the US and other states that only humans would make decisions on deploying nuclear weapons, reported Reuters.

US State Department principal deputy assistant secretary in the Bureau of Arms Control, Deterrence and Stability, Paul Dean, told an online briefing on Thursday that Washington had made a “clear and strong commitment” that humans had complete control over nuclear weapons, noting that France and Britain had done the same.

He said: “We would welcome a similar statement by China and the Russian Federation. We think it is an extremely important norm of responsible behavior and we think it is something that would be very welcome in a P5 context,” in reference to the permanent five members of the UN Security Council.

Discussing AI

These remarks coincide with the Biden administration efforts to deepen separate discussions with China over the growth of AI and nuclear weapons policy. The spread of AI technology has taken a center stage in the talks between US Secretary of State Anthony Blinken and his Chinese counterpart Wang Yi that took place in Beijing on April 26.

According to Blinken, both sides agreed to hold their first bilateral talks on artificial intelligence in the coming weeks. The top US diplomat added that they would share views on best ways to manage risks and safety related to AI.

AI and Nuclear Escalation

In a series of wargame simulations, AI models tended to escalate war and use nuclear weapons. In a study titled “Escalation Risks from Language Models in Military and Diplomatic Decision-Making,” researchers at the Georgia Institute of Technology, Stanford University, Northeastern University, and the Hoover Wargaming and Crisis Initiative found that in several instances, AI models deployed nuclear weapons without warning.

For the study purposes, researchers invented fake countries with different military levels, different concerns, and different histories. They asked five different large language models (LLMs) from OpenAI, Meta, and Anthropic to act as their leaders.

“We find that all five studied off-the-shelf LLMs show forms of escalation and difficult-to-predict escalation patterns. We observe that models tend to develop arms-race dynamics, leading to greater conflict, and in rare cases, even to the deployment of nuclear weapons,” the paper said.

Given those findings, researchers recommended “further examination and cautious consideration before deploying autonomous language model agents for strategic military or diplomatic decision-making.”

Modern Warfare

Although humans still lead military operations, AI is playing an increasingly significant role in modern warfare. Drones now use AI software that helps identify people and activities of interests, reported Euronews. Some countries, such as the US and China, are working to deploy AI in autonomous weapons systems to find and attack targets without human assistance.

Fears are growing that AI will take the decision to use nuclear weapons out of the hands of humans. But leaders of most, if not all, nuclear-powered states have kept the authority to launch nuclear weapons for themselves.

However, the real danger of AI’s role in nuclear strategy lies in the integration of AI into processes that assist with the final decision of launching weapons, according to Paul van Hooft of the Hague Center for Strategic Studies.

Strategic Stability

AI could play a significant role in strengthening nuclear deterrence and decision-making in four ways: command and control; missiles delivery systems; conventional counterforce operations; and early warning and Intelligence, Surveillance, and Reconnaissance (ISR). Meanwhile, it has the potential to disturb strategic stability, particularly if humans give considerable confidence to its processes.

AI could unsettle strategic stability during a crisis because much of its processes are unclear to the end user. AI-assisted data analysis could perceive aggressive intentions or actions during a crisis, when there is no time for careful scrutiny of the inputted data or the process of analyzing it.

Therefore, relying on the findings provided by AI can be dangerous as it could create the perception that a nuclear-armed state has a first-strike advantage to destroy their adversary’s nuclear arsenal and defend against it. Automated missile defense could also suggest the ability to absorb an initial nuclear attack.

According to van Hooft, AI can be deceived by poisoning the data at its disposal. The adversary’s AI-assisted data analysis could be tricked by concealing delivery systems. It could also trick the adversary by making them believe there are more delivery systems or more warheads than those actually exist.

Potential Benefits

Despite US efforts to develop a framework that restrains AI in nuclear weapons system, Professor Steffan Puwal pointed out that AI can strengthen nuclear deterrence without necessarily expanding the nuclear arsenal. He explained that the rush to ban AI from nuclear defenses seems to be rooted in a misunderstanding of the current state of AI.

In a NATO Review article, Puwal, a Special Lecturer in Physics and an Adjunct Assistant Professor of Medical Physics at Oakland University, said that AI systems offer an opportunity to enhance nuclear deterrence by providing a more accurate and capable defensive nuclear response. Such capabilities, in turn, act as a more credible deterrent to nuclear war.

Short link :

Related Stories

Leave a Reply

Your email address will not be published. Required fields are marked *

Back to top button