In a stark warning, Hamza Chaudhry, a national security expert at the Future of Life Institute, has expressed grave concerns about the potentially catastrophic impact of convincing ‘deepfakes’ on global stability.
Deepfakes—a portmanteau of “deep learning” and “fake”—are synthetic media that have been digitally manipulated to appear real. Such computer-generated images and videos may depict people or events that do not exist, or make real people convincingly appear to say and do things they never did.
Drawing parallels to the historic 1983 event in which Soviet Lt. Col. Stanislav Petrov averted disaster because he did not believe computer-generated indicators falsely showing that the United States had launched a first-strike nuclear attack, Chaudhry emphasized the heightened risk posed by rapid advancements in artificial intelligence (AI).
On September 26, 1983, three weeks after the Soviet military had shot down Korean Air Lines Flight 007, Petrov was the duty officer at the Oko nuclear early-warning command center when the system reported that a missile had been launched from the United States, followed by up to five more.
Petrov, a lieutenant colonel in the Soviet Air Defense Forces, judged the alert to be a false alarm, even as sirens blared, warning lights flashed and computer screens showed U.S. nuclear missiles on their way. His decision not to report the warning up the chain of command prevented a retaliatory catastrophe.
However, in today’s landscape, Chaudhry noted that the proliferation of AI-driven disinformation has significantly complicated the decision-making process during critical moments.
In today’s society, the vast majority of people get their information about the world and formulate opinions based on content from the internet.
Consequently, anyone capable of creating deepfakes can release misinformation and influence the masses to behave in ways that advance the faker’s agenda, which could wreak havoc on both micro and macro scales.
On a small scale, deepfakers can, for example, create personalized videos that appear to show a relative pleading for a large sum of money to get out of an emergency, then send them to unsuspecting victims, enabling scams at an unprecedented level.
On a large scale, fake videos of important world leaders stating made-up claims could incite violence and even war.
“Imagine a scenario where Petrov faces similarly alarming but convincingly fabricated evidence of an impending nuclear attack,” Chaudhry said. “The consequences of such misinformation, coupled with shortened response times in modern crises, could escalate tensions to a catastrophic level.”
The implications of deepfake technology extend beyond nuclear threats, encompassing a spectrum of challenges ranging from political polarization and election integrity to cybersecurity and public health.
Chaudhry pointed to the potential for disinformation to disrupt geopolitical dynamics, citing ongoing concerns about Russia’s disinformation campaigns regarding military-biological labs in Ukraine.
Chaudhry served as a Gleitsman Leadership Fellow at Harvard University’s Center for Public Leadership and has previously worked at the Nuclear Threat Initiative and the Council on Foreign Relations, so his counsel is worth heeding.
“The risk of attributing false information to geopolitical adversaries poses a multifaceted threat, from delegitimizing war efforts to complicating responses to emerging crises,” said Chaudhry.
Moreover, the sophistication of AI-driven disinformation tools raises concerns about their misuse in financial scams and targeted cyberattacks, underscoring the need for stringent safeguards to mitigate these risks.
“Recent advancements in AI are profoundly changing how we produce, distribute and consume information,” said Chaudhry. “AI-driven disinformation has affected political polarization, election integrity, hate speech, trust in science and financial scams. As half the world heads to the ballot box in 2024 and deepfakes target everyone from President Biden to Taylor Swift, the problem of misinformation is more urgent than ever before.”
“Although a nuclear confrontation based on fake intelligence may seem unlikely, the stakes during crises are high and timelines are short, creating situations where fake data could well tilt the balance toward nuclear war,” said Chaudhry, who noted that an ICBM launched from Russia could reach the U.S. within 25 minutes, and a submarine-launched missile could strike even sooner.
Chaudhry’s call to action focuses on preemptive measures, urging scrutiny of AI systems at their development stages to identify and address disinformation risks.
Such proactive strategies, Chaudhry argues, are imperative not only for safeguarding democracy and economic stability but also for protecting national security and ensuring the safety of citizens.
As the world grapples with the evolving landscape of AI and its potential ramifications, experts emphasize the critical importance of addressing these challenges head-on to avert potential global crises stemming from advanced deepfake technology. Yet, as with many other existential threats, such as the climate crisis, income inequality and plastic pollution, Congress is lagging far behind on this one.
Americans need to wake up and demand action, instead of continuing to be easily distracted or falling for demagoguery and deceit.