
Heathrow’s recent shutdown is a timely reminder that when things break, trust takes the hit. As marketers invest in AI-powered tools, here’s what they can do to stay protected.
When Heathrow Airport ground to a halt earlier this month, it wasn’t AI gone rogue; it was tech gone dark. No automated system failure. No AI glitch. But the headlines still rang alarm bells – because whether it’s infrastructure, automation or AI, we’re seeing just how fragile modern systems can be when things go wrong.
AI is embedded in every touchpoint of modern business. It is streamlining processes, automating decisions and powering more personalized experiences. But it’s also introducing new forms of risk – ones that many brands haven’t fully prepared for.
The incident may seem far removed from the world of marketing, but it’s a wake-up call. The tools we rely on – from chatbots to ad platforms – are increasingly powered by AI. When they fail, the consequences don’t just hit operations. They hit trust, customer experience and brand reputation.
Whether it’s a system glitch, human error or a cyberattack exploiting new tech, our growing reliance on AI is creating new kinds of breakdowns and raising questions marketers need to ask before buying into the next AI-powered solution.
Why marketers should care
It might sound like a tech or an infrastructure issue, but really, it’s a trust issue. And that’s a marketer’s business. AI is now built into everything – from how we create and serve content, personalize ads and engage customers to how we track performance. But when it misfires? The damage is visible, immediate and deeply tied to your brand.
We’ve already seen examples of:
- Customer chatbots going rogue, making offensive or false claims or producing misleading outputs.
- Media buying algorithms and ad systems placing brand content next to inappropriate or unsafe material.
- Deepfakes and AI-generated scams duping audiences.
- AI models trained on biased data, reinforcing harmful stereotypes.
Marketers themselves might not be responsible for building these systems, but they are responsible for the brand experience they help shape.
Several high-profile brands have learned the hard way what happens when AI isn’t used responsibly or with clear oversight. Take Balenciaga, which faced backlash for a deepfake-style campaign that blurred the line between edgy and unethical. Or Volkswagen, which stirred controversy by digitally resurrecting a deceased Brazilian singer for a commercial, raising serious questions about consent and taste.
Then there’s Toys R Us, which used OpenAI’s video tool Sora to create a brand film. Instead of praise, it drew criticism for its cold, uncanny tone, highlighting the reputational danger of using AI-generated content without human creative judgment.
It’s not just about tone, either. Snapchat’s ‘My AI’ feature raised safety concerns over bizarre, sometimes unsettling responses to users, while Levi’s use of AI-generated models in a campaign meant to champion diversity was widely seen as tone-deaf.
What can go wrong – and why it matters to your buying decisions
As brands race to plug AI into every corner of their tech stack, from media buying to customer service, what should you look out for?
- Faulty recommendations: If an AI system is trained on the wrong data, it can make bad calls, leading to wasted spend or a poor customer experience.
- Bias in outputs: Some tools create content or decisions that exclude or stereotype certain audiences. That’s not just bad ethics – it’s bad business.
- Lack of transparency: If something goes wrong, can your provider explain why? If not, you’ll struggle to fix it or to reassure your audience.
- Security blind spots: AI systems can be manipulated. You don’t want your brand’s chatbot being hijacked or your customer data exposed.
Smart investment starts with smarter questions
In a world of algorithmic decisions and synthetic content, trust becomes the differentiator. Brands that use AI without a clear understanding of its limits are playing a high-stakes game.
And the governance gap doesn’t help. While regulators are racing to catch up – the EU AI Act and the UK’s AI Safety Institute, for example – many businesses are already deploying AI systems with little oversight or testing. The speed of adoption is far outpacing the readiness of most marketing teams.
As Gen AI tools proliferate across creative departments and AI powers ever more touchpoints in the customer journey, marketers must ask: are we building with integrity? Are we pressure-testing for bias, for safety, for alignment with our brand values? Because if we’re not, it’s not just a technical risk – it’s a brand one.
In the age of AI, it’s not just about what the tools do – it’s about how they’re built, how they’re tested and how they respond when things go wrong.
So before you invest, ask:
- How does this tool protect my brand?
- Can I trust the data it uses?
- Is there a human in the loop?
- What happens if the system fails?
Things you can do to build trust into your models:
- Stress-test your tools: Work with vendors who can explain how their models were trained, and run simulations to see how systems behave in edge cases.
- Build diverse data inputs: To avoid reinforcing bias, ensure your data sources are representative.
- Keep a human in the loop: Especially for high-stakes decisions, customer service interactions, or public-facing creative.
- Create a crisis protocol: If your AI tool fails, do you know how to respond? And who’s responsible?
Invest in intelligence – but don’t neglect brand safety
There’s no denying that AI can help you move faster, create more and reach audiences in powerful new ways. But it can also amplify mistakes, scale bias or expose your brand to risks you didn’t see coming – at lightning speed.
The more connected our systems become, the more fragile they are in the face of error or unintended consequences. So don’t just look for the smartest tool; look for the safest partner. Because when trust is on the line, security isn’t a feature but a foundation.