AI, fact-checks, and the fight for truth

Falsehoods, fabrications, fake news – disinformation is nothing new. For centuries, people have taken deliberate action to mislead the public. In medieval Europe, Jewish communities were persecuted because people believed conspiracy theories suggesting that Jews spread the Black Death by poisoning wells. In 1937, Joseph Stalin doctored newspaper photographs to remove those who no longer aligned with him, altering the historical record to fit the political ambitions of the present.

The advent of social media helped democratise access to information – giving (almost) anyone, (almost) anywhere, the ability to create and disseminate ideas, opinions, and make-up tutorials to millions of people all over the world. Bad actors, or just misinformed ones, can now share whatever they want with whomever they want at an unprecedented scale. Thanks to generative AI tools, it’s now even cheaper and easier to create misleading audio or visual content at scale.

This new, more polluted information environment has real-world impact. For our institutions (however imperfect they may be), a disordered information ecosystem leads to everything from lower voter turnout and impeded emergency responses during natural disasters to mistrust in evidence-based health advice.

Like any viral TikTok moment, trends in misinformation and disinformation will also evolve. New technologies create new opportunities for scale and impact; new platforms give access to new audiences. In the same way BBC Research & Development’s Advisory team explored trends shaping the future of social media, we now look to the future of disinformation. We want to know how misinformation and disinformation are changing – and what technologies drive that change. Most importantly, we want to understand public service media’s role in enabling a healthier information ecosystem beyond our journalistic output.

What we’re seeing

R&D has already been developing new tools and standards for dealing with trust online. As a founding member of the Coalition for Content Provenance and Authenticity (C2PA), we recently trialled content credentials with BBC Verify. We’ve also built deepfake detection tools to help journalists assess whether a video or a photo has been altered by AI. But it’s important to understand where things are going, not just where they are today. Based on some preliminary expert interviews, a new picture is emerging:

Anti-anti-disinformation and fake facts

Recent decisions to disband fact-checking efforts at large social media platforms (Meta, X) have weakened the infrastructure that supports truth and accountability. Critics raise valid questions about how far fact-checking actually prevents the spread of disinformation online, but the current narrative frames the platforms’ decisions as anti-censorship and pro-free-speech. This means less funding and support are available for fact-checking or authentication activities. It is coupled with a growing trend of state-sponsored fact-checking initiatives that mimic legitimate efforts but really serve political agendas. This anti-anti-disinformation backlash makes the choice to address the problem in any way a political act.

Swinging gates

It’s already been said many times: generative AI will impact how we create, distribute and access information (whether true or false). What’s becoming increasingly apparent, though, is that AI tools are becoming the next generation of information gatekeepers (see AI-enabled smart TVs, Google’s AI Mode, etc.). These tools often rely on a narrow set of sources and show inconsistent quality, raising concerns for publishers and audiences alike about source provenance, bias, and amplification of misleading content. In the not-so-distant future, it may not actually be possible to pick and choose where we get our information – instead we’ll have to take what’s given to us by AI intermediaries.

Who knows, wins

We still don’t know much about the real-world impact of mis- and disinformation. This is partly because researchers don’t have unfettered access to social media platforms and their data, and partly because many of the studies that do exist focus on psychological impacts rather than behavioural change. We know even less about large language models and how they might change our behaviour or beliefs. They aren’t being rigorously evaluated for harms or utility, especially in how they source and present information. We’re still in the early days of AI deployment, but it’s becoming increasingly clear that we need to understand the unintended consequences of its adoption.

What happens next?

Reality, truth, authenticity – these ideas raise big questions: Whose reality is it anyway? Can we be comfortable with half-truths (or outright lies) if we trust the person who’s telling them? Does AI create something new and different, or is it just the same playbook but faster, cheaper and more far-reaching? We hope this work will help identify concrete steps we – R&D, the BBC and public service media – can take to strengthen the information ecosystem in the face of all this philosophical and technical complexity. If you want to read what we find, a report summarising our findings will be released in 2026.
