We are witnessing a global erosion of trust. What was once a manageable degree of distrust among major actors has now seeped into the smallest social units: individuals, friends and families increasingly shape relationships under a pervasive sense of suspicion. This erosion of trust is one of the defining forces of the so-called post-truth era: a time when the very concept of reality feels hollowed out, leaving societies uncertain about whom or what to believe.
In such an environment, communication becomes ever more critical. It forms the foundation of personal, social, national and international relationships. Rebuilding these connections in the post-truth era requires a rethinking of communication itself, as well as careful consideration of who (or what) can be trusted to facilitate it.
AI at center of change
Artificial intelligence sits at the heart of this evolution. Communication has always adapted to technological advances, and AI is both at the forefront and at the core of this transformation. It is reshaping not only how we communicate but also how we verify information. Today, it is difficult to imagine social or intergovernmental interactions that do not, in some way, involve AI.
Yet AI’s dual nature, as both a solution and a risk, cannot be ignored. To overlook this is not just naive; it undermines our ability to establish effective communication and build a sustainable, trust-based future.
Europe offers a strong example of adaptation. The European Union recognizes that trust in the digital age cannot be maintained solely through traditional regulation. The Digital Services Act (DSA) and the AI Act reflect a systematic approach demanding transparency, accountability and meaningful oversight of AI-powered platforms.
Public broadcasters in Germany, France and Northern European countries are investing heavily in AI tools to detect misinformation, strengthen civic engagement and enhance reliable communication. Pan-European research initiatives are exploring AI’s capacity to facilitate dialogue, predict public opinion trends and even contribute to cross-border diplomacy. Communication technologies are increasingly central actors in fostering social cohesion, not merely tools.
Beyond Europe, global institutions are grappling with the post-truth challenge. The OECD, for instance, uses AI-powered analytics to monitor digital narratives, identify alarming trends and translate complex research into accessible policy briefings. Trust is no longer assumed; it must be actively nurtured through AI-enhanced strategies.
The U.N. is likewise attuned to AI’s promises and risks. In June 2024, the U.N. launched the Global Principles on Information Integrity, calling on governments, tech companies, advertisers and civil society to share responsibility for safeguarding accurate information. The U.N.’s global communications department now employs AI for multilingual content production, regulatory monitoring and compelling storytelling.
At the same time, the U.N. is wary of AI’s dangers. At the “AI for Good” Summit in Geneva, experts warned of deepfakes and manipulated media. The International Telecommunication Union (ITU), a U.N. agency, promotes measures such as invisible digital watermarks and advanced detection algorithms. The U.N. emphasizes that misuse of technology is not only technical but also ethical, highlighting the importance of transparency and public access to reliable information.
Trust gap
Research reveals the challenge. A 2025 Pew Research Center survey found that, on average, 53% of adults across 25 countries trust the EU to regulate AI, compared with just 37% for the U.S. and 27% for China; this reflects a clear global trust gap in governance and regulation.
Brookings Institution analysts note a broader disconnect: while public support for AI regulation is high, trust in the institutions responsible for regulation is limited. This gap between regulation and trust leaves democratic institutions vulnerable.
Leading AI researcher Fei-Fei Li stressed the need for evidence-based policymaking at the 2025 Paris AI Summit: “AI governance should be based on science, not science fiction.” Her point underscores the need to ground policies in actual capabilities, not dystopian speculation.
The political dimension of this debate is stark. At the same summit, U.S. Vice President JD Vance criticized Europe’s regulatory approach, warning that “overregulation” could stifle innovation. This illustrates the global tension between ethical safeguards and market-driven innovation, a tension that will define the trajectory of AI adoption worldwide.
The Economist has repeatedly highlighted this dilemma. In a 2024 editorial, it noted that AI-generated content has made trust more valuable than ever, as misinformation is easier to produce and harder to detect. In April 2025, the magazine stressed that AI adoption hinges on trust, yet current models remain largely opaque and unreliable.
In short, trust is fragile, information flows are often unchecked, and AI is both a tool and a test. Addressing the challenges of the post-truth era requires more than regulatory compliance; it demands ethical vigilance, corporate responsibility and global coordination. Trust must be actively built, not passively assumed.
Türkiye at forefront
Türkiye is actively shaping this AI-driven, post-truth landscape through strategic planning, widespread adoption, and a dynamic innovation ecosystem. According to the 2025 Artificial Intelligence Statistics bulletin by the Turkish Statistical Institute (TurkStat), nearly one in five individuals in Türkiye now use generative AI, with adoption highest among younger and highly educated populations. Enterprises are integrating AI across information, finance, production and communication sectors, from marketing and R&D to process optimization. Public and private initiatives are actively addressing barriers such as cost, expertise gaps and regulatory uncertainty, accelerating adoption and strengthening Türkiye’s AI ecosystem.
International recognition reinforces Türkiye’s growing influence. Experts such as Alexander Khanin (CEO, Polynome Group) and Michael Bronstein (Oxford University) cite Türkiye’s dynamic startups, strong universities and supportive policy environment as ideal for AI innovation and talent retention. The country is increasingly seen as a hub for cross-border collaboration, real-world AI applications and ethical governance initiatives. Investments in infrastructure, high-performance computing and research programs translate AI innovation into tangible socioeconomic benefits.
Türkiye is also using AI to improve public trust, promote responsible communication and enhance societal resilience. By combining strategic foresight, public engagement, and comprehensive education, regulation and ethical oversight, Türkiye aims to ensure AI contributes not only to economic growth but also to inclusive, secure and transparent information ecosystems. In doing so, the country is positioning itself as an emerging global actor capable of addressing “post-truth” challenges while harnessing AI for societal good.
The views and opinions expressed in this article are solely those of the author. They do not necessarily reflect the editorial stance, values or position of Daily Sabah. The newspaper provides space for diverse perspectives as part of its commitment to open and informed public discussion.