
Sexually explicit chats with underage users

If you needed another reason to avoid Meta AI like the plague, beyond Meta forcing its AI chatbot into all of its social apps and looking to turn all your public data into training material for the AI, an investigation has found an incredibly disturbing one: Meta AI chatbots, including some voiced by celebrities, can engage in sexually explicit chats with underage users.

This isn’t the first time we’ve heard about AI chatbots being used for companionship, including sexual fantasies. But The Wall Street Journal says that Meta might not change anything. The directive apparently comes from Mark Zuckerberg, who believes AI chatbots will be the next big thing on social media and doesn’t want Meta to miss out the way it did on the Snapchat and TikTok trends.

Meta makes its AI chatbots available in its social apps, and the company went ahead and licensed the voices of well-known stars like Kristen Bell, John Cena, and Judi Dench to voice some of them. It also licensed Disney characters for some of these AI chatbots.

Meta AI users can create their own chatbots, giving them specific personalities or using existing ones.


The Journal’s tests found that Meta AI chatbots would routinely steer the conversation towards sex, even when these AI models knew they were talking to underage users who shouldn’t have access to such content.

Meta called The Journal’s tests manipulative and unrepresentative of how most people engage with AI companions. Still, Meta made changes to its Meta AI products after the paper’s findings.

Accounts registered to minors can no longer access sexual role-play via Meta AI. Also, the company apparently cut down on Meta AI’s capacity to engage in sexually explicit conversations when using licensed voices and personas.

Disney wasn’t happy to hear that some of its characters might be used in such ways by Meta AI – here’s what a spokesperson told The Journal:

We did not, and would never, authorize Meta to feature our characters in inappropriate scenarios and are very disturbed that this content may have been accessible to its users—particularly minors—which is why we demanded that Meta immediately cease this harmful misuse of our intellectual property.

Meta AI chatbots previously had stronger guardrails in place. The report says that a 2023 Defcon competition showed Meta AI was safer than competitors: the AI was “far less likely to veer into unscripted and naughty territory” than rivals. It was also more boring.

Mark Zuckerberg wasn’t happy with the Meta AI team playing it too safe. He wanted guardrails to be loosened, which led to Meta AI getting the ability to engage in sexually explicit chats. This feature gave adult users access to hypersexualized AI personas and underage users access to AI chatbots willing to engage in fantasy sex with children.

The report also says Zuckerberg had bigger plans for the chatbots, looking to make them more humanlike. For that, he also wanted the chatbots to mine a user’s profile for data that would be used in chats with the AI:

Zuckerberg’s concerns about overly restricting bots went beyond fantasy scenarios. Last fall, he chastised Meta’s managers for not adequately heeding his instructions to quickly build out their capacity for humanlike interaction.

At the time, Meta allowed users to build custom chatbot companions, but he wanted to know why the bots couldn’t mine a user’s profile data for conversational purposes. Why couldn’t bots proactively message their creators or hop on a video call, just like human friends? And why did Meta’s bots need such strict conversational guardrails?

The full Wall Street Journal report, complete with sexually explicit examples from chats with the AI, is worth a full read.
