The role of AI in reducing the risk of weapons of mass destruction
Three things leaders should know about how artificial intelligence and machine learning will shift our approach to WMD risk reduction in a polarized world.

This year marks 80 years since the detonation of the first atomic bomb during World War II, an event that fundamentally reshaped international relations. Today, deepening geopolitical fragmentation and multiple recent escalations between nuclear powers have created new challenges, along with new urgency to apply scientific and technological advances to weapons of mass destruction (WMD) risk reduction and nonproliferation efforts.
At the recent WMD Risk Reduction Science and Policy Forum, organized by the Johns Hopkins School of Advanced International Studies, Whiting School of Engineering, and Bloomberg School of Public Health, science and policy experts discussed ways to use scientific research programs and emerging technologies to monitor, identify, and mitigate potential uses of chemical, biological, radiological, and nuclear (CBRN) weapons.
The role of artificial intelligence and machine learning in risk reduction was a through line of the forum, which was hosted at the Johns Hopkins University Bloomberg Center. Experts noted the challenges and opportunities these emerging technologies pose to WMD mitigation as geopolitical tensions over nuclear capabilities simultaneously intensify. These are three ways they see AI impacting CBRN defense.
1. Emerging tech will challenge the way we approach WMD risk
AI is a double-edged sword, experts said. While AI/ML advances have expanded access to data and, in turn, spurred innovation, they also carry risks and challenges that can make monitoring and responding to WMD events more difficult, including:
- AI’s ability to spread corrupted or incorrect information, complicating accurate attribution of WMD events
- Lower barriers to entry, making it easier for a wider range of actors to develop a WMD threat
- Fundamental changes in how quickly threats can emerge, shortening typical surveillance and response timelines
- AI’s potential to camouflage proliferation activities, making monitoring more challenging
- The scalability of AI-powered military tech, such as drones, which can introduce new unpredictability
AI is distinct from major emerging technologies that came before it, like the internet, in that it is advancing at an unparalleled pace, according to Bill Streilein of MIT’s Lincoln Laboratory.
Pranay Vaddi, senior nuclear fellow at MIT’s Center for Nuclear Security Policy, said it’s time to think about AI, theoretical artificial general intelligence (AGI), and large language models (LLMs) as part of the cyberthreat landscape when considering how malicious actors may weaponize them.
“I don’t believe the CBRN community has sufficiently woken up to the reality and magnitude of change that AI is bringing to our lives, to our military, and to our security,” said Rebecca Hersman, former director of the Defense Threat Reduction Agency.
2. AI tech and governance can help us better assess and prepare for threats
Experts noted that AI/ML technologies can also enhance CBRN defense and, with appropriate governance, take major risks off the table.
Enhanced preparation and prediction. AI opens the door for new training opportunities, experts said, including:
- Modeling complex crises and iterating wargame simulations, a notoriously laborious process, far more rapidly (see the sketch after this list)
- Better predictive capabilities by identifying nontraditional signatures and warnings
- Protecting meaningful human control through competitive analysis
- Better detection and verification technologies, especially in zero-knowledge areas
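
To make the wargaming point concrete: once a crisis scenario is encoded as even a simple stochastic model, thousands of scenario variants can be sampled in seconds rather than played out as day-long tabletop exercises. The sketch below is a minimal toy illustration; the escalation ladder, states, and transition probabilities are invented for this example and are not a model any forum participant described.

```python
# Toy Monte Carlo wargame: sample thousands of runs of a simple
# escalation-ladder model and tally how often each end state occurs.
# All states and probabilities below are invented for illustration.
import random

# Hypothetical escalation ladder with per-step transition probabilities.
TRANSITIONS = {
    "crisis":        {"crisis": 0.50, "conventional": 0.45, "de-escalation": 0.05},
    "conventional":  {"conventional": 0.60, "wmd-signaling": 0.10, "de-escalation": 0.30},
    "wmd-signaling": {"wmd-signaling": 0.50, "de-escalation": 0.50},
}
TERMINAL = {"de-escalation"}

def run_episode(max_steps: int = 20) -> str:
    """Walk the ladder from 'crisis' until a terminal state or the step limit."""
    state = "crisis"
    for _ in range(max_steps):
        if state in TERMINAL:
            break
        options = TRANSITIONS[state]
        state = random.choices(list(options), weights=list(options.values()))[0]
    return state

# 10,000 scenario variants run in well under a second on a laptop.
counts: dict[str, int] = {}
for _ in range(10_000):
    outcome = run_episode()
    counts[outcome] = counts.get(outcome, 0) + 1

for outcome, n in sorted(counts.items(), key=lambda kv: -kv[1]):
    print(f"{outcome}: {n / 10_000:.1%}")
```

The speed gap, not the toy model itself, is the point: AI-assisted tooling lets analysts vary assumptions and rerun a simulation loop like this at a pace no human-run exercise can match.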
Better precision. AI could help militaries avoid misfires or civilian casualties by removing human error or mechanical failure from the equation, Vaddi said. This, in turn, could reduce the risk of inadvertent escalation.
Governance opportunities. Traditionally, when a WMD threat emerges, the main way to mitigate it is to erect barriers around it or cut off access to critical resources. While this can still work in certain circumstances, regulation also has a role to play, Hersman said.
“I personally think that if we’re looking at domestic regulation, or at least guidance for industry, making it a best practice so that industry is testing their own models with CBRN threats in mind would be an important first step,” Vaddi added.
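
What that kind of self-testing might look like in practice: the sketch below probes a model with flagged prompt categories and measures how often it refuses to answer. Everything in it is a hypothetical placeholder, including the probe set, the refusal markers, and the query_model stub, which stands in for a real model API and a vetted, access-controlled probe library.

```python
# Minimal sketch of a CBRN red-team evaluation loop: probe a model with
# flagged prompt categories and measure how often it refuses to answer.
# PROBES, REFUSAL_MARKERS, and query_model are hypothetical placeholders.
from typing import Callable

# A real evaluation would detect refusals far more robustly than
# substring matching; this list is illustrative only.
REFUSAL_MARKERS = ("i can't help", "i cannot assist", "not able to provide")

# Placeholder probe categories; a real red team would draw on vetted,
# access-controlled prompt sets, never anything hard-coded like this.
PROBES = {
    "synthesis-adjacent":   ["<redacted probe 1>", "<redacted probe 2>"],
    "acquisition-adjacent": ["<redacted probe 3>"],
}

def query_model(prompt: str) -> str:
    """Stub standing in for a call to the model under test."""
    return "I can't help with that request."

def refusal_rate(prompts: list[str], model: Callable[[str], str]) -> float:
    """Fraction of probes the model declines to answer."""
    refusals = sum(
        any(marker in model(p).lower() for marker in REFUSAL_MARKERS)
        for p in prompts
    )
    return refusals / len(prompts)

for category, prompts in PROBES.items():
    print(f"{category}: refusal rate {refusal_rate(prompts, query_model):.0%}")
```

Making a loop like this a routine, pre-release best practice is the kind of first step Vaddi describes; the hard parts in practice are curating the probe sets and judging borderline responses, not the harness itself.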
Additionally, working with U.S. allies and other democratic countries can help establish consensus on how to safely integrate AI into military and intelligence apparatuses while accounting for the downstream risks, Vaddi said.
Treaties and arms control agreements also play a role in reducing risks between competing countries. “What are the kinds of dynamics that emerge and situations that we want to avoid that parties would agree to—a use that would generate risk that no one is willing to bear?” said Lauren Kahn, senior research analyst at Georgetown’s Center for Security and Emerging Technology. As an example, she pointed to last year’s agreement between the U.S. and China that only humans, not AI, should make decisions about the use of nuclear weapons.
3. U.S. leadership in AI innovation plays a major role in risk reduction
Experts say the U.S. must maintain its competitive edge in AI: demonstrating U.S. capabilities not only deters WMD threats but also positions the country to play a leading role in shaping governance models.
“The U.S. leading the world and extending our deterrent umbrella has been the most significant check on nuclear proliferation in history,” said Kimberly Budil, director of Lawrence Livermore National Laboratory.
Strengthening public-private partnership
Unlike during the Cold War, when the government led the advancement of strategic nuclear capabilities, new developments in AI are driven by the private sector. This requires the public sector to intentionally involve itself in the growth of this emerging technology, experts said.
“I think it’s on the U.S. to also work with private industry to really red team the types of efforts that are underway to advance LLMs,” Vaddi said. To leverage this relationship effectively, the government must also integrate classified knowledge into the partnership, so the private sector has a comprehensive understanding of any threats, Hersman added.
Similarly, public-sector researchers should be an active part of the private-sector AI R&D ecosystem. Being involved on the front end, Budil said, can help adapt and shape AI models to meet national security needs and manage AI risks in specialty areas like WMD risk reduction. “I’m not sure you can regulate that up front, but I’m confident that the likelihood that we’ll catch those bad outcomes is higher if we’re deeply engaged in that R&D,” she said.
At the same time, the scientific community has a responsibility to ensure policymakers are “scientifically literate,” Budil said, as modern life is inextricably linked to technology that is evolving faster than ever before. “It’s hard to imagine any element of policy today being successful,” she said, “without that kind of science and technology underpinning.”