
When a tsunami alert was issued on July 29, 2025, AI-generated videos of gigantic waves crashing into coastlines racked up millions of views on social media. Screenshots from YouTube/GogglesOn2025
On the evening of July 29, 2025, phones in coastal communities around the Pacific Rim lit up with tsunami alerts. Within minutes, social media feeds filled with AI-generated videos of tsunamis, including dramatic airplane-window perspectives that appeared to show coastlines swallowed by walls of water. Many of these clips garnered millions of views before moderators and fact-checkers could respond. Meanwhile, the AI chatbot embedded in the social media site X, called Grok, incorrectly told users that the alerts were canceled. In the midst of the disaster response to a powerful earthquake off the coast of Russia, the online information environment was polluted with AI misinformation.
That incident was a symptom of a larger problem. AI is not only making misinformation more convincing but also making it available at a moment’s notice. From natural disasters and humanitarian emergencies to nuclear incidents and geopolitical crises, anyone with a smartphone can add to the confusion. Because emergency communication is almost always a race against time, the damage done can be serious, even if corrections eventually arrive.
During crises, falsehoods tend to travel farther and faster than corrections. A 2018 MIT study found that false news on X (then called Twitter) spreads “farther, faster, deeper, and more broadly” than accurate reports: true stories took roughly six times as long to reach the same audience, and falsehoods were 70 percent more likely to be retweeted. Consumer-friendly generative AI tools, which began appearing in 2022, are making the problem worse. Research on crowd-sourced fact-checking finds that corrections often arrive after posts have already peaked; the median half-life of engagement on viral posts can be under two hours, faster than most verification and moderation cycles.
Realistic-looking “deepfakes” and AI-assisted hoaxes feed what legal scholars call the “liar’s dividend”: Once people know fakes exist, they become more willing to dismiss inconvenient real evidence as fake, or hesitate when a genuine warning is issued. In an emergency, hesitation costs lives.
Misinformation has cropped up across all types of disasters. As with the recent tsunami scare, any major weather event is now accompanied by AI-generated visuals that exaggerate the damage and muddy the timeline. January’s Los Angeles wildfires saw apocalyptic scenes generated by AI. July’s Texas Hill Country floods spawned AI montages presented as real reporting. And during Hurricanes Helene and Milton in the fall of 2024, countless AI-generated videos circulated as authentic footage.
AI misinformation isn’t confined to natural hazards. The most recent escalation between Israel and Iran produced a surge of AI-generated imagery depicting damage and offensive capabilities, making it harder to determine what was really happening during the conflict. In Russia’s war with Ukraine, a deepfake of President Volodymyr Zelenskyy urging surrender appeared after the Ukraine-24 news channel’s website was hacked. In 2023, a bogus announcement of martial law, delivered by a deepfaked Vladimir Putin, aired in Russian border regions following broadcast hacks.
In these recent conflicts, official government channels have also posted AI misinformation. Sometimes it is used purposely to reinforce narratives, while in other cases it may be spread unintentionally.
For nuclear weapons and nuclear energy, the stakes are high. In May 2025, during a tense standoff between India and Pakistan, social media platforms were inundated with AI-produced satellite images, fabricated strike footage, and even a fake audio clip of a Pakistani commander declaring a nuclear alert. These posts had the potential to shape public perception and heighten the risk of escalation before they were debunked.
In 2023, when Japan began releasing treated wastewater from the Fukushima nuclear disaster cleanup, misinformation regarding the safety of the water flooded social media. In addition to misattributed videos and false testimonies, AI-generated images of mutated sea life were shared as proof of radiation harm.
While the damage social media has done is extensive, it is important to remember that it also saves lives. Real-time posts have helped responders triage needs, find stranded people, and map damage. Researchers continue to show how curated social-data streams can supercharge situational awareness. That’s why the goal should not be to dam the entire river, but rather to divert the worst torrents while amplifying the channels that demonstrably help.
Most of the biggest social media platforms now have community-driven moderation and context features. Research suggests that tools such as X’s Community Notes and YouTube’s Information Panels help reduce misinformation sharing. But in an emergency, “eventually right” is functionally wrong.
Even if those tools worked perfectly, they wouldn’t change the algorithms that reward attention-grabbing and controversial content. Today, all major social platforms pay creators through engagement-driven ad revenue sharing, which turns these spaces into breeding grounds for misinformation. This is especially true during an emergency, when more people are seeking the latest information and moderation staff and tools are stretched thin.
Guarding against AI misinformation isn’t about limiting speech. It’s about making reliable, life-saving information rise above the noise quickly, while preserving free public discourse the rest of the time. Six strategies can help make that possible:
Don’t just debunk, prebunk. Debunking AI misinformation is helpful, but “prebunking” can also be effective. In the same way that a vaccine primes the immune system before infection, prebunking exposes audiences to common misinformation tactics before they encounter them, making viral AI content less likely to take hold. Popular platforms and emergency managers should consider prebunking exaggerated damage footage, fabricated official warnings, and other common fakes in advance of the relevant season, whether hurricane, wildfire, election, or flu, so that audiences already recognize the telltale signs of AI-generated content.
Create a crisis mode for feeds. Europe’s Digital Services Act already includes a crisis-response mechanism for very large online platforms. In an emergency, platforms can be required to adapt their recommendation systems, boost official information, and cooperate with trusted flaggers. This idea deserves global uptake, with transparent operation and well-defined parameters. In practice, crisis mode should throttle virality for hazard keywords in affected regions, insert official alerts at the top of relevant searches and timelines, and require a click-through to repost unverified content.
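What might that look like in practice? The sketch below is purely illustrative: the Post and CrisisState types, the hazard keyword list, and the multipliers are all invented here to show how a ranking pipeline could damp unverified hazard content, pin official alerts, and gate resharing behind a click-through during a declared crisis. No platform’s actual system is implied.

```python
from dataclasses import dataclass, field

# All names and numbers here are hypothetical; this is a minimal sketch of the
# crisis-mode behavior described above, not any platform's actual ranking code.

HAZARD_KEYWORDS = {"tsunami", "evacuation", "wildfire", "radiation", "flood"}

@dataclass
class Post:
    text: str
    region: str
    is_official_alert: bool = False
    has_content_credentials: bool = False

@dataclass
class CrisisState:
    active_regions: set = field(default_factory=set)

def adjust_ranking(post: Post, base_score: float, crisis: CrisisState) -> float:
    """Damp virality for unverified hazard content and boost official alerts."""
    if post.region not in crisis.active_regions:
        return base_score                 # normal ranking outside the crisis zone
    if post.is_official_alert:
        return base_score * 10.0          # pin official information near the top
    mentions_hazard = any(k in post.text.lower() for k in HAZARD_KEYWORDS)
    if mentions_hazard and not post.has_content_credentials:
        return base_score * 0.2           # throttle unverified hazard content
    return base_score

def requires_click_through(post: Post, crisis: CrisisState) -> bool:
    """Ask users to confirm before resharing unverified content in a crisis region."""
    return (post.region in crisis.active_regions
            and not post.is_official_alert
            and not post.has_content_credentials)
```

The point is not these particular numbers but the default they encode: during a declared emergency, unverified hazard content earns less reach, not more.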
Promote provenance. Content-credential standards such as the one developed by the Coalition for Content Provenance and Authenticity (C2PA) already enable cameras and software to cryptographically sign and embed metadata that tells people when an image was captured, edited, or generated by AI. So far, however, adoption across social media platforms has been minimal. Especially during a crisis, posts should display credentials prominently, and public agencies should commit to publishing credentialed images so that the public can identify authentic information.
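To illustrate the idea, here is a deliberately simplified stand-in written with the widely used Python cryptography package. Real content credentials are richer: C2PA manifests are embedded in the media file itself and signed with certificate chains rather than a single raw key. The sketch only shows the core pattern a platform would follow, verifying a signed provenance manifest before displaying a “credentialed” badge.

```python
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def sign_manifest(manifest: dict, key: Ed25519PrivateKey) -> bytes:
    """An agency or camera signs a provenance manifest for an image."""
    payload = json.dumps(manifest, sort_keys=True).encode()
    return key.sign(payload)

def credentials_verify(manifest: dict, signature: bytes,
                       issuer_key: Ed25519PublicKey) -> bool:
    """A platform checks the signature before showing a 'credentialed' badge."""
    payload = json.dumps(manifest, sort_keys=True).encode()
    try:
        issuer_key.verify(signature, payload)
        return True
    except InvalidSignature:
        return False

# Example with a hypothetical agency key and manifest fields.
agency_key = Ed25519PrivateKey.generate()
manifest = {"capture_time": "2025-07-29T23:55:00Z",
            "device": "field-camera-01",
            "generator": None}          # None = not AI-generated
sig = sign_manifest(manifest, agency_key)
assert credentials_verify(manifest, sig, agency_key.public_key())
```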
Stop funding misinformation machines. Monetizing posts during emergencies invites engagement farming first and due diligence later. Social media platforms should suspend revenue sharing for rapidly spreading, unverified content during crises. They could even redirect that money to official emergency channels and local newsrooms for the duration of the event.
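A hedged sketch of how such a payout rule might work follows, with an invented velocity threshold and an invented revenue-share rate; the mechanics of any real monetization program will differ.

```python
# Hypothetical payout rule: suspend revenue sharing for fast-spreading,
# unverified posts while a crisis is declared, and redirect the withheld
# share to a relief pool for official channels and local newsrooms.

VELOCITY_THRESHOLD = 1_000  # shares per hour; illustrative, not a real figure

def creator_payout(ad_revenue: float, shares_per_hour: float,
                   is_verified: bool, crisis_active: bool) -> tuple[float, float]:
    """Return (amount paid to the creator, amount redirected to the relief pool)."""
    share = ad_revenue * 0.55   # illustrative revenue-share rate
    if crisis_active and not is_verified and shares_per_hour > VELOCITY_THRESHOLD:
        return 0.0, share       # withhold and redirect for the duration of the event
    return share, 0.0
```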
Restructure moderation efforts. In declared emergencies, platforms should set clear priorities for fact-checkers and moderation teams when reviewing and labeling posts. When a post spreads quickly past a set threshold, it should move into a priority queue for review. If the information is verifiably false, the platform should notify accounts that have already viewed it and add a temporary note to the post, updating it as new facts come in.
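The sketch below shows one way such a priority queue could work, using Python’s standard heapq module. The threshold, the field names, and the attach_note and notify placeholders are all assumptions made for illustration, not any platform’s moderation tooling.

```python
import heapq
import time

SPREAD_THRESHOLD = 500  # shares per hour; an illustrative trigger, not a real figure

class ModerationQueue:
    """Fast-spreading posts jump the line for human review."""

    def __init__(self) -> None:
        # Each entry is (-velocity, enqueue time, post_id) so the fastest-spreading
        # post is popped first, with ties broken by arrival order.
        self._heap: list[tuple[float, float, str]] = []

    def observe(self, post_id: str, shares_per_hour: float) -> None:
        if shares_per_hour >= SPREAD_THRESHOLD:
            heapq.heappush(self._heap, (-shares_per_hour, time.time(), post_id))

    def next_for_review(self) -> str | None:
        """Return the fastest-spreading unreviewed post, if any."""
        if not self._heap:
            return None
        _, _, post_id = heapq.heappop(self._heap)
        return post_id

def attach_note(post_id: str, note: str) -> None:
    """Placeholder: attach a temporary context label to the post."""
    print(f"[note on {post_id}] {note}")

def notify(user: str, message: str) -> None:
    """Placeholder: send an in-app notification."""
    print(f"[to {user}] {message}")

def handle_verdict(post_id: str, is_false: bool, viewers: list[str]) -> None:
    """If reviewers find the post false, label it and alert accounts that saw it."""
    if is_false:
        attach_note(post_id, "Disputed by emergency officials; updates to follow.")
        for user in viewers:
            notify(user, f"A post you viewed ({post_id}) has been flagged as false.")
```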
Sound the alarm twice. People can’t live in a permanent red alert. Authorities should ensure that the “all clear” signal is communicated as deliberately as the initial warning. They should also use signed content and consistent branding so that audiences learn to recognize official resolution messages. FEMA’s rumor-control pages and the International Atomic Energy Agency’s communication playbooks are good starts. Scaling these into always-on, signed, and shareable assets would help anchor trust after one disaster and before the next.
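On the agency side, the counterpart of the credential check sketched earlier is simply to sign resolution messages with the same published keys used for the original warning. The snippet below, again using the cryptography package with invented message fields and a throwaway key, shows how small that step is.

```python
import json

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_all_clear(agency_key: Ed25519PrivateKey, event_id: str, issued_at: str) -> dict:
    """Produce a signed, shareable 'all clear' notice (fields are illustrative)."""
    message = {"event_id": event_id, "status": "all_clear", "issued_at": issued_at}
    payload = json.dumps(message, sort_keys=True).encode()
    return {"message": message, "signature": agency_key.sign(payload).hex()}

# Example with a throwaway key; a real agency would use its published key pair.
notice = sign_all_clear(Ed25519PrivateKey.generate(),
                        event_id="tsunami-2025-07-29",
                        issued_at="2025-07-30T10:00:00Z")
```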
None of these steps require new detection software or an AI safety breakthrough. They simply require changing defaults, aligning incentives, and rehearsing public-information practices with consistency and seriousness. If emergency managers and social media platforms act now, they can enable truth to travel just as fast as falsehoods—and maybe even get there first.