Securing the Nuclear Future: AI’s Role in Nuclear Security and Supply Chains
Author: Camila Mateos Betancourt
Artificial Intelligence (AI) is transforming industries across the globe, and the nuclear sector is no exception. While AI brings promising tools to improve safety, efficiency, and predictive capabilities, it also introduces significant vulnerabilities, particularly in the nuclear supply chain. From the risk of counterfeit components to new opportunities for cyberattacks, malicious actors now have more sophisticated tools at their disposal. Although AI adoption in nuclear infrastructure remains limited, primarily appearing on the administrative side, the risks and opportunities it presents require urgent attention and proactive governance.
In Episode 236 of The International Risk Podcast, host Dominic Bowen engages with Dr. Sarah Case Lackner, Senior Fellow at the Vienna Center for Disarmament and Non-Proliferation, to delve into the dual-edged impact of AI on nuclear security.
The AI Paradox: Tool or Threat in Nuclear Systems?
Artificial intelligence is slowly entering the nuclear sector, mostly on the business side. While operational technologies remain cautious, discussions around predictive maintenance and automation are growing.
AI can detect anomalies, manage data more efficiently, and even predict component failures before they happen. But the very algorithms that can support nuclear safety can also be misused, whether through adversarial attacks, data poisoning, or deepfakes. The consequences in a sector dealing with radioactive materials could be catastrophic.
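To make the anomaly-detection idea concrete, here is a minimal, purely illustrative sketch: flagging sensor readings that deviate sharply from a recent baseline using a rolling z-score. The function name, the threshold, and the "coolant temperature" scenario are all hypothetical examples, not a description of any deployed nuclear-plant system, which would rely on far more sophisticated, validated models.

```python
# Illustrative sketch only: a rolling z-score anomaly flag.
# All names and parameters here are hypothetical.
from statistics import mean, stdev

def flag_anomalies(readings, window=10, threshold=3.0):
    """Return indices of readings that deviate more than `threshold`
    standard deviations from the mean of the preceding `window` values."""
    anomalies = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Example: steady (simulated) temperature readings with one injected spike
temps = [300.0 + 0.1 * (i % 3) for i in range(30)]
temps[20] = 315.0  # injected fault
print(flag_anomalies(temps))  # → [20]
```

The same logic cuts both ways, as the episode notes: an attacker who can poison the baseline data can shift the mean and standard deviation enough that a genuine fault never crosses the threshold.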
A major concern raised during the episode is the rise of counterfeit components in the nuclear supply chain. Even without AI, counterfeit items, such as explosive pagers, have slipped through procurement processes. With generative AI and manipulated data, counterfeiters now have more advanced tools to fake certifications, simulate authenticity, and bypass verification.

Explainability, Oversight, and the Human Factor
One of the biggest challenges in integrating AI into nuclear systems is the question of trust. AI systems often function as “black boxes,” making decisions that even experts struggle to interpret. In high-stakes environments, this opacity is dangerous. Over-reliance on AI outputs, known as automation bias, can mask threats or mislead decision-makers, especially in emergency situations. The takeaway? Critical safety decisions should remain in human hands.
That’s why training and capacity building emerged as a core theme in our podcast discussion. Nuclear operators, regulators, and suppliers must invest in AI expertise, not just AI tools. “Think about your data,” Dr. Case Lackner urges. “Think about the cyber implications of what you’re doing. Verify everything. And train your people, across every level.”
From Awareness to Action: What Risk Professionals Must Do Now
For international risk professionals not yet considering AI in nuclear security, the message is clear: start now. The sector’s past underestimation of cyber threats offers a cautionary tale.
Invest in research, red-team your systems to expose weaknesses, and engage experts before incidents happen. Some national regulators already require adversarial testing as part of their security protocols, but global consistency remains lacking.
The podcast also touches on broader governance challenges. AI and nuclear regulation operate in parallel silos, with few frameworks bridging them. As countries apply different standards and data policies, ensuring secure, sovereign data flows across borders becomes a growing risk. Risk professionals can help fill this gap by advocating for stronger public-private partnerships, standardized AI assessments, and more inclusive international governance models.
For more information on the topic, dive into Dr. Sarah Case Lackner’s latest co-authored publication:
“Nuclear Security and the Nuclear Supply Chain in the Age of Artificial Intelligence”
