Lis Anderson is founder and director of PR consultancy AMBITIOUS and an experienced agency MD with 25 years in the communications industry.
Would you bet the farm on financial advice from someone you’ve never met? Expand that question: Would you bet the farm on financial advice from someone you’ve never met, but whose online presence proves them to be a trusted expert?
The reason I’m posing this question is that some big news came out of AI land recently: The number one source cited in AI responses is Reddit.
The likely reason for this is that Reddit is a rich source of authentic human interaction. But while it’s a treasure trove of community engagement, AI tools often fail to see the context or irony behind comments. A heavily upvoted comment signals engagement, but it doesn’t necessarily provide factual information.
This makes trusting AI responses a risky business, particularly when Google search already has a system in place for promoting quality content.
The Story So Far
Google is far from perfect. But something it has done particularly well is stem the flow of unhelpful and potentially damaging information through ranking systems that factor in experience, expertise, authoritativeness and trustworthiness (E-E-A-T) and “Your Money or Your Life” (YMYL) parameters. With these little letters, Google attempts to ensure the information its end users receive is both accurate and trustworthy. Think of them as a series of automated checks and balances, underpinned by search quality raters: actual humans who verify both the quality and efficacy of online sources.
E-E-A-T helps ensure that an author is, in fact, the real deal and that their insights are based on tangible experience you can trust. YMYL applies an extra layer of scrutiny almost exclusively to content related to health and wealth.
While E-E-A-T applies to all sectors and businesses, it’s particularly critical for YMYL topics. So, if you’re in professional and financial services or any kind of health- or healthcare-adjacent sector, these parameters strongly govern your online presence.
In real terms, that means the quality of your content needs to be matched by the quality of your source. At its simplest, a blog post on how to invest for retirement needs to be authored by an expert in that subject to achieve higher visibility.
The more content is spread across varied and trusted sources, like quality media outlets, the better. This is the strategy we’ve been following for many years; it’s an all-encompassing content and media approach designed to increase both individual and business profiles, which, over time, increases the online visibility of both.
But AI has changed things.
The Current State Of Play
We know that AI tools can make mistakes and that they can hallucinate and even deceive. As the Columbia Journalism Review found, they are all programmed to provide an answer, but whether that answer is correct or not is incidental.
We’re using tools that can effectively circumvent the E-E-A-T practices we’ve all come to know and rely on. AI can go wrong, and it has. There was the case of Google’s AI Overviews telling users to put glue on pizza and claiming that geologists recommend eating one rock per day.
But there have been more serious cases: A 60-year-old man replaced salt in his diet with sodium bromide after seeking medical advice from ChatGPT. Subsequently, he spent three weeks in the hospital with bromide toxicity.
The problem is that some large language models are designed to support your search, not to challenge or query you. They are the ultimate yes-men, reinforcing your ideas and opinions even if those notions are bad for you.
OpenAI took action to mitigate this issue with ChatGPT. But Steven Adler, a former OpenAI safety researcher, insists there’s still a lot of work to be done: “AI companies are a long way from having strong enough monitoring/detection and response to cover the wide volume of their activity.”
Why AI Needs Guardrails
If you’re using ChatGPT, Perplexity or even Grok to ask for advice on serious matters like mortgages, investments and retirement accounts, can you be 100% certain that the information you’re given is credible?
What if the platform gives you advice that goes outside the letter of the law? In 2024, that’s exactly what MyCity, a Microsoft-powered chatbot, did to New York-based entrepreneurs: It claimed that business owners could take a cut of their workers’ tips and fire workers who complain about harassment.
Google introduced its YMYL parameters in 2013 to combat bad actors and potentially dangerous “insights” being filtered into search. But AI seems to be undermining that.
AI responses are improving, but they’re often still incorrect or, at times, completely made up. Sources such as Reddit rely heavily on context for their meaning, and that context is something LLMs often fail to distinguish.
As business leaders, we need to be conscious of the content we put out, as well as what we bring into our businesses. When using AI tools, we have to implement the proper internal safeguards and training processes to make sure that we’re using these tools effectively and not just falling for the hype.

This starts with taking a people-first approach, ensuring that AI tools and outputs are informed and shaped by human experience. That means checking all primary and secondary sources, keeping sensitive information private and training teams on the most effective way to write AI prompts. But most importantly, it means not taking AI responses as gospel. The notion of easy productivity gains is exciting, but when using AI tools, we must check, check and check again. We must create processes whereby our AI tools work for us, not the other way around.
AI doesn’t seem to have the same E-E-A-T foundations as Google. But productivity shouldn’t come at the cost of responsibility. If an AI-led mistake creeps into your output, you cannot blame the tool. AI needs guardrails, and those start with us.
