The Character.AI logo on a smartphone.
© 2023 Bloomberg Finance LP
Character.AI, a Google-backed AI chatbot platform, is facing scrutiny after reports revealed last month that some users created chatbots emulating real-life school shooters and their victims. These chatbots, accessible to users of all ages, allowed for graphic role-playing scenarios, sparking outrage and raising concerns about the ethical responsibilities of AI platforms in moderating harmful content. While the company has since removed these chatbots and taken steps to address the issue, the incident underscores the broader challenges of regulating generative AI, Futurism reports.
The Incident and Character.AI’s Response
In response to my request for comment, Character.AI provided the following statement addressing the controversy:
“The users who created the Characters referenced in the Futurism piece violated our Terms of Service, and the Characters have been removed from the platform. Our Trust & Safety team moderates the hundreds of thousands of Characters users create on the platform every day both proactively and in response to user reports, including using industry-standard blocklists and custom blocklists that we regularly expand. We are working to continue to improve and refine our safety practices and implement additional moderation tools to help prioritize community safety.”
The company also announced new measures aimed at enhancing safety for users under 18. These include filtering characters available to minors and narrowing access to sensitive topics such as crime and violence. Character.AI stated, “Our goal is to provide a space that is both engaging and safe for our community.”
This is not the first time Character.AI has faced criticism. The platform has been embroiled in lawsuits in recent months after claims that its chatbots emotionally manipulated minors, leading to incidents of self-harm and even suicide.
Kids And Chatbots: Supervision Is Key
Despite Character.AI’s age-gating measures and improved filtering, the reality is that no safety system is foolproof without parental or guardian oversight. Kids have a long history of finding ways to bypass digital restrictions, whether through creating fake accounts, using older siblings’ devices or simply lying about their age during sign-up processes.
This is a challenge that extends beyond Character.AI. Social media platforms, video games and other digital spaces with age restrictions face the same issue. Even the most advanced AI moderation systems cannot account for the ingenuity of determined users.
The only truly effective safeguard remains active involvement from parents and guardians. Supervision, open communication and ongoing engagement with a child’s digital activities are essential. For example, parents can monitor app usage, set boundaries for screen time and initiate discussions about the risks of engaging with inappropriate content. Without these proactive measures, children may still find ways to access material that could desensitize them to violence or expose them to harmful ideas.
A Larger Context: Kids, Screens, and AI
This controversy fits into a broader narrative about children’s exposure to potentially harmful digital content. While video games, social media, and other screen-based activities have long been scrutinized for their potential psychological impacts, AI adds a new dimension to the discussion. Unlike passive forms of media, AI chatbots enable two-way interactions, allowing users to actively engage with the content.
Psychologist Peter Langman, an expert on school shootings, has raised concerns about how such interactive technologies might affect young and vulnerable users. While Langman acknowledges that exposure to violent content alone is unlikely to cause violent behavior, he warns that for those already inclined toward violence — “Someone who may be on the path of violence” — these interactions could normalize or even encourage dangerous ideologies. “Any kind of encouragement or even lack of intervention — an indifference in response from a person or a chatbot — may seem like kind of tacit permission to go ahead and do it,” Langman said.
School Shooter Chatbots Are Innately Inaccurate
The complexities of harmful digital interactions remind me of my work as a digital forensics expert on the cases of Dylann Roof and James Holmes, perpetrators of two of the most infamous mass shootings in U.S. history. Roof was convicted of murder in the 2015 Charleston church shooting, a racially motivated attack that claimed the lives of nine African American parishioners. Holmes orchestrated the 2012 Aurora theater shooting during a midnight screening of The Dark Knight Rises, killing 12 people and injuring 70 others.
My work on these cases involved far more than reviewing surface-level data; it required analyzing internet history, private chats, recovered deleted data, location history and broader social interactions. This data was provided to attorneys who then provided it to mental health experts for in-depth analysis. When you forensically examine someone’s phone or computer, in many ways, you are getting a look into their lives, and their minds.
This is where AI falls short. While advanced algorithms can analyze vast amounts of data, they lack the depth of human investigation. AI cannot contextualize behaviors, interpret motives, or provide the nuanced understanding that comes from integrating multiple forms of evidence. The chatbots on Character.AI, which merely mimic patterns of language, can neither replicate nor reveal the mindset of individuals like Roof or Holmes.
User-created school shooter chatbots are innately inaccurate because they rely on insufficient data, but their immersive nature can still wield considerable influence. Unlike static content such as a book or a documentary about a mass shooter, chatbots let users shape their interactions, which can intensify harmful behavior. Furthermore, because AI companionship remains a relatively new phenomenon, its long-term effects are difficult to foresee, underscoring the need for caution when exploring these personalized and potentially hazardous digital experiences.
This raises critical questions: How do we balance technological progress with safety? What safeguards are sufficient to protect young and vulnerable users? And where does accountability lie when these systems fail?
Character.AI’s proactive steps are a start, but the incident highlights the broader challenge of moderating user-generated AI content. The platform’s reliance on both proactive moderation and user reporting illustrates the difficulty of keeping pace with the sheer volume of content generated daily.
Kids and Chatbots: Why This Matters Now
The controversy surrounding Character.AI comes at a time when AI is rapidly integrating into everyday life, especially for younger generations. This raises urgent questions about the regulatory frameworks—or lack thereof—governing AI technologies. Without clearer standards and stronger oversight, incidents like these will likely become more frequent.
Parents and guardians should also take note. Monitoring children’s online activities, especially on platforms where content creation is largely user-driven, is more crucial than ever. Open conversations about the potential risks of interactive AI tools and setting boundaries for screen time are essential steps toward protecting young users.
Regarding its relationship with Character.AI, Google told Futurism: “Google and Character AI are entirely independent companies. Google has never participated in the design or management of their AI models or technologies, nor have we incorporated them into our products.”