
Artificial intelligence is no longer a novelty for agencies; it’s becoming a daily partner in how work gets done at many firms. One big differentiator lies in an agency’s approach to AI adoption—smart leaders understand that with new powers come new responsibilities. Successful AI-adoption strategies rely not only on advanced tools, but also on clear guardrails that keep creativity, accuracy and client trust at the center of every use case.
As many agency leaders are finding, such safeguards are less about slowing AI down and more about guiding it toward better outcomes. Following the best practices that members of Forbes Agency Council share below can help agencies protect clients’ brands and their own reputations as they integrate AI to improve the results they achieve together.
1. Write A Creative Brief For AI
Before letting AI generate anything, require that a “mini brief” be written for the machine, just like you’d brief a designer or copywriter. It forces discipline, ensures brand voice is considered up front and reframes AI as a junior team member that still needs proper direction. – Katie Meyer, MoonLab Productions
2. Keep Creative Judgment Human
As agencies, we need to remember that AI has no point of view—at least not yet. Don’t sacrifice your role as the arbiter of what “good” looks like for the sake of convenience and speed. Intervene. Question. Challenge. Use AI to guide, but don’t let it decide. That’s your job. – Stratton Cherouny, The Office of Experience
3. Make Sure Strategy Leads AI
The key safeguard is ensuring AI outputs serve strategic goals, not just tactical metrics. AI might optimize for the wrong things or miss crucial brand context. Regular human reviews confirm AI recommendations support long-term objectives and prevent drift. The danger isn’t AI failing—it’s AI succeeding at the wrong thing. Strategy must lead; AI follows. – Dennis Kirwan, Dymic Digital
4. Use AI Checkers For Content Integrity
Not only are agencies using AI, but so are our clients. Agencies should start utilizing AI checkers to verify that the content shared is not taken directly from ChatGPT. It is our job as a PR agency to guide our clients into developing their own voice and creating unique content based on their insights and domain expertise. – Ayelet Noff, SlicedBrand
5. Fact-Check Every Output
Since AI is known to hallucinate, fact-checking is incredibly important. With that said, our agency primarily uses AI to get the creative juices flowing—the human brain and creative process are what set us apart. – Jodi Amendola, Amendola Communications
6. Prioritize Cybersecurity In AI Use
As agencies embrace AI, the safeguard often overlooked is cybersecurity. With automation and vibe coding accelerating deployment, we risk introducing hidden vulnerabilities faster than we can detect them. Building robust security checks into every AI workflow ensures adoption strengthens, rather than weakens, the business. – Fernando Beltran, Identika LLC
7. Protect The Brand’s Point Of View
As agencies adopt AI, the real safeguard isn’t just editing for tone; it’s protecting the brand’s point of view. AI can mimic voice but can’t replicate judgment, personality, perspective or strategic intent. Without clear guardrails rooted in brand belief, AI will only scale sameness. – Amy Packard Berry, Sparkpr
8. Keep Humans In The Loop
Agencies should always pair AI with a human-in-the-loop review. We see the best results when we use human judgment and cultural insight to check AI outputs, ensuring our work is accurate, ethical and on-brand. Guardrails plus creativity equals AI that adds value instead of risk. – Dean Broadhead, broadhead
9. Stress-Test For Bias And Fairness
One critical safeguard is to check for bias at the decision points where AI influences outcomes. Speed is easy to measure, but fairness and accuracy are harder—and more important. By stress-testing results against real human behavior, agencies can ensure AI adoption builds trust instead of eroding it. – Sarah Procopio, Thrive Marketing Science
10. Maintain Human Oversight In All Work
The primary safeguard is to ensure humans are always in the loop. Approach AI as an enabler of people’s capabilities, rather than as a tool that simply automates them. Whether AI is involved or not, we always have to be critical of our work and closely review it prior to approval. Even if AI can reduce the time spent on tasks, we can’t reduce the rigor in review cycles and content authentication. – Dani Mariano, Razorfish
11. Use A Filter To Instill Discipline
The safeguard isn’t more rules—it’s discipline. Every function and every industry needs a quality check that asks: Does this use of AI create clarity, equity and originality, or does it just add noise? AI should expand capacity, not cut corners. The filter is simple: Strategy leads, humans decide, and the work must move business and culture forward—always. – Shanna Apitz, Hunt Adkins
12. Establish An AI Policy And Review Process
AI really comes into its own when you scale up. This means it’s not practical to check every output from the AI. Agencies need a strong AI policy that implements guardrails around how and when AI is used and a suitable process that allows review of a sample of the outputs. This review process must build understanding over multiple campaigns, rather than simply focusing on the latest. – Mike Maynard, Napier Partnership Limited
13. Balance AI And Human Capacity
Historically, one of the key pitfalls of the agency business model has been resource gaps in delivering the scope of work. Today, AI integration is not yet comprehensive; it addresses certain workflow phases while humans lead others. Balancing AI’s capacity to enhance workflows with humans’ ability to react is key to ensuring positive outcomes from AI adoption. – Oksana Matviichuk, OM Strategic Forecasting
14. Form An AI Ethics And Usage Council
One safeguard is to form an internal AI ethics and usage council that sets agencywide standards for how AI tools are tested and deployed. This group should serve as AI gatekeepers—ensuring transparency, minimizing bias and protecting brand trust, while keeping adoption focused on driving efficiency, measurable results and sustainable growth. – Paula Chiocchi, Outward Media, Inc.