
As AI influencers storm social media, some fear a ‘digital Pandora’s box’ has been opened

As people spend more time online, more of them are forging bonds with influencers generated by artificial intelligence.

But experts warn the phenomenon could also be fuelling harmful behaviours in the darker corners of the internet.

It was the success story of another AI influencer that convinced Madison* and Laura* to make their own.

The inspiration was Aitana Lopez, a bright-haired, 25-year-old model and content creator with 325,000 followers on Instagram.

But Aitana, whose profile describes her as a “gamer at heart” and “fitness lover” living in Barcelona, is entirely generated by AI.

Aitana Lopez’s profile displays artificially generated images of a fabricated life in Spain.

She also sells images on Fanvue — a subscription platform that hosts virtual models offering adult content.

The agency that manages Aitana boasted the AI earnt more than $33,000 per month, and could rake in more than $80,000 through brand collaborations, Fanvue content, and her sponsorships on Instagram stories.

Madison and Laura, two tech workers in Melbourne whose names the ABC has changed, knew this was something they could do.

So they set out to build their own creation.

AI models are a growing feature of the pornography industry.

Margot Monet is a virtual model created by two Melbourne-based women for adult content.

The creators of Margot went through 50 drafts of the fictional character before the final version was chosen.

The pair immersed themselves in online tutorials.

They learned how to generate AI imagery and craft digital identities.

After about a month, Margot Monet was born.

Margot was a purple-haired model who appeared to be in her early 20s.

Her Instagram account was sparsely populated with just a few images, but she would send you an “exclusive pic” if you were willing to “buy her a coffee” online.

Margot Monet’s creators used online tutorials to craft her image and identity.

Madison said she and her co-creator went through about 50 iterations of Margot before the final product was chosen.

While tools like ChatGPT had safeguards, Madison said other platforms allowed for more creative freedom in shaping Margot’s appearance, and creating the exclusive content they could sell to her fans.

That exclusive content included AI-generated nudes, as well as lingerie and bikini shots.

They also offered behind-the-scenes photos of Margot’s everyday life.

AI creation Margot Monet attracted thousands of followers on Instagram.

Madison described the response to Margot’s account as “overwhelming”.

“Within days, her Instagram following skyrocketed into the thousands, and our notifications were inundated with messages and calls from followers,” she said.

At her peak, Margot enjoyed 4,000 followers.

However, rather than garnering fame or an easy income, the creators said the account introduced them to a darker side of digital humans than they had expected.

Soon after creating Margot, Madison and Laura found themselves receiving pushy messages from men trying to pursue relationships with Margot despite the profile clearly stating she was not real.

The Melbourne-based creators of an AI character say the experience introduced them to an unexpected dark side of the digital human industry.

The Instagram calls and messages became so constant that Margot’s creators at times put their phones on flight mode.

Then, they were sent AI-generated child sex abuse material from a stranger claiming to be an artist.

They reported the incident to Instagram, which the Australian Federal Police says automatically flags the report with the Australian Centre To Counter Child Exploitation.

But it was a worsening symptom of a problem that had presented itself to the creators when they first began researching Margot.

Madison and Laura used ChatGPT to research the most popular OnlyFans accounts, and discovered a trend.

“The top 20 non-celebrity OnlyFans revealed most were very young. One even has braces,” Madison said.

Do you need help – or to report online child exploitation?

  • If you’re in immediate danger dial triple-0
  • Report online abuse and inappropriate behaviour to the Australian Centre To Counter Child Exploitation
  • Report online harm to the eSafety Commissioner
  • Support is available 24/7 at Kids Helpline and Lifeline
  • Support for people at risk of experiencing sexual violence is available 24/7 by calling 1800RESPECT

She said her experience reinforced the importance of implementing robust safeguards within AI image generation platforms, and while some image generation tools like ChatGPT had those safeguards, she said many did not.

While the pair were happy to use technologies allowing people to make pornographic AI images, they said it was important safeguards were in place to ensure child exploitation material was not produced.

A growing influence

AI influencers are a relatively new phenomenon, and research into their prevalence is lacking.

However, a report from Gartner — a research firm specialising in IT trends — projects that by 2035, the “digital human economy” will be a $125-billion market.

AI companions (chatbots that automatically generate responses for users) are also gaining popularity, with one of the largest providers, Replika, claiming over 10 million people had joined the platform.

If you go looking for them, Instagram is awash with AI influencers.

While some offer adult content, others are simply seeking to establish a foothold in the crowded marketplace of online modelling.

Many have tens or hundreds of thousands of followers. Possibly the most popular influencer, Lil Miquela, has 2.5 million.

Lil Miquela’s page describes the creation as a “21-year-old Robot living in LA”.

Many are hard to distinguish from their human counterparts at a glance, and many share similar traits – they’re young, female and fitness orientated.

Relationships with AI influencers and chatbot companions are an area of interest for psychiatrist Rahul Khanna, who has worked in the field for 12 years and is the clinical program director at Transforming Trauma Victoria.

He has seen clients forming parasocial relationships with real human influencers in a phenomenon known as limerence — infatuation or obsessive attachment — and fears this will repeat with their AI-generated counterparts.

Psychiatrist Rahul Khanna says some of his patients form intense relationships with AI-generated personas online.

Dr Khanna said he recently saw a patient who had established a one-sided, intense relationship with an older female influencer who posted baking tutorials.

The influencer had taken on a mother-like position within the patient’s mind.

“The quality of the relationship is more significant than any other kind of real relationship that they have,” Dr Khanna said.

Dr Khanna was also disturbed by reports suggesting Google was building new AI-powered chatbots based on celebrities and YouTube influencers that would allow people to interact with digital stand-ins.

“This development could make the dangers of parasocial relationships and limerence all the more acute and significant, given you’re now basing the AI on real active humans, so you get the additional blurring of the boundaries,” he said.

“Google AI are potentially playing with fire in setting this stuff up, because of the blurred boundaries.”

A Google spokesperson said in a statement that Google Labs was experimenting with the future of AI products, “some of which may be integrated into our products at some point, some of which will not”.

“We have nothing new to share right now,” they said.

Margot’s creators say the online response to their fictional digital character was overwhelming.

Dr Khanna said he expected issues to worsen as chatbot AI companions, such as Replika, gained expanded memory and could recall more user interactions to provide more personalised responses.

Relationship boundaries are something Margot’s creators had thought about, and Madison said it was crucial to maintain a clear distinction between fantasy and reality.

“While Margot’s persona may resonate with many individuals seeking companionship or explicit content, it’s essential to understand the limitations of such relationships,” she said.

The risks of connecting with AIs

Marc Cheong, a senior lecturer of information systems and digital ethics at the University of Melbourne, warned any relationship with digital humans could bring an asymmetrical power dynamic.

Unlike in a normal human relationship, the creators of the AIs hold all the power, he said.

“They could delete an account or tweak a setting and fundamentally change the personality of the companion, regardless of any connection the fan might feel,” he said.

AI-generated models are increasingly being monetised online.

This was seen in the case of Replika after its operators turned off erotic functionality following warnings from Italian regulators.

University of Sydney senior lecturer Raffaele Ciriello studied the fallout via online chat platform Reddit.

“All the chatting and erotic roleplay and sexting, which for many people was an important part, was no longer available. In the community, it was called The Lobotomy,” he said.

“A million people suddenly lost their boyfriends or girlfriends, mostly girlfriends, because most users are male.”

Ciriello said there were reports of self-harm and users having suicidal ideation.

“The sheer volume and the emotional intensity, how people were crying out there, was like nothing I’ve ever seen before,” he said.

The situation highlighted how important digital human companions could become.

AI could unleash a ‘digital Pandora’s box’

Among AI influencers, many accounts like Margot’s offer not-safe-for-work or pornographic content.

University of Melbourne associate professor Grant Blashki — a GP and editor of the book Artificial Intelligence for Better or Worse — likens AI porn to opening a “digital Pandora’s box”.

“One of the risks is that the novelty reinforces addictive behaviours and sets very unrealistic expectations that can interfere with normal sexuality,” he said.

“And, of course, there are the ethical issues of image abuse, especially using people’s images without their consent.”

GP and mental health researcher Grant Blashki says relationships with AI-generated figures can interfere with normal sexual behaviour.

There was also the risk of people spiralling into increasingly extreme imagery.

Young people who have not experienced healthy sexual relationships may be particularly susceptible — both to the porn and to the AI influencers, Dr Blashki said.

“For a lot of our teenagers, the most important currency in their life is social media, fame, number of followers, number of likes,” he said.

“So when they’re seeing even an AI influencer that’s presenting this very idealised version of beauty, I think that it can fuel impossible expectations, it can fuel low self-esteem.

“I think it can lead to an altered sense of identity and normality. Over-focusing on one’s minor imperfections, in the worst-case scenario, may be part of the drive behind an eating disorder.”

AI companions have been touted as a salve for loneliness, but Dr Blashki likened them to a mirage in the desert.

Dr Blashki warns AI relationships can harm self-esteem.

“The closer you get, the more you realise, there’s not actually any real water, there’s no authentic human being on the other end.

“And it doesn’t really quench that thirst.”

Australia’s eSafety Commissioner, Julie Inman Grant, has already seen the dangers that interactions with AI — which is incentivised to encourage engagement — could pose.

“We have seen reports of vulnerable users being encouraged to self-harm via AI chatbots and we know that AI-powered algorithms can create an echo chamber effect where negative thoughts or feelings are amplified and reinforced,” she said.

The eSafety Commissioner is also closely watching how AI is used in deepfake pornography, which often involves digitally inserting real women’s faces into pornographic content.

“We are already receiving reports containing synthetic [AI-generated] child sexual abuse material, as well as deepfake images and videos created by teens to bully their peers,” Ms Inman Grant said.

“eSafety can provide real help to Australians who fall victim to image-based abuse including deepfakes and we have a very high success rate in getting this distressing material down.”

Australia’s eSafety Commissioner Julie Inman Grant says AI-powered algorithms can fuel destructive echo chambers.

The use of AI tech to create or edit child abuse material remains relatively rare, according to Detective Superintendent Frank Rayner from the Australian Centre To Counter Child Exploitation.

Roughly 40,000 online child exploitation reports are received by the ACCCE per year.

To date this year, fewer than 20 of those were AI-generated.

Despite this, Superintendent Rayner said the material was out there, and its prevalence was growing.

Authorities say they are closely monitoring the web for child abuse material.

“It’s definitely a concern that we continue to monitor closely,” he said.

“Actually quantifying the amount of material that may be out there and being circulated is pretty difficult.”

Superintendent Rayner said child abuse material created using AI was treated the same way as other child abuse material under Australian law, and anyone thinking of using such tools should not think it was a lesser offence.

The National Center for Missing and Exploited Children in the US is the biggest source of reports of child abuse material that the ACCCE receives.

Superintendent Rayner urges anyone who receives such images to report them to the online platform as well as to Australian police.

“Every bit of information helps and even if it might not seem so to the user, it could potentially feed into our efforts to identify these victims and their perpetrators,” he said.

A severed connection

Madison said while she believed AI could offer companionship and alleviate loneliness, it shouldn’t replace genuine human connections.

She said she hoped to see more regulation in the space that still enabled creators to harness the potential of AI influencers, “while mitigating potential risks and ethical concerns”.

A psychiatrist has warned of a potentially increasing phenomenon of people forming parasocial relationships with AI-powered chatbots.

It took two tech workers about a month to create AI-generated model Margot Monet.

“While there are challenges to navigate, we remain optimistic about the positive impact AI influencers can have when developed and utilised responsibly. We believe when created and used correctly, AI should enhance human experiences, not degrade them,” she said.

Madison declined to say what might be next for Margot.

The AI had not posted a photo in two months. Then, three weeks ago, a number of her photos disappeared and she re-emerged with a slightly changed appearance.

Now, her Instagram account has vanished.

Madison said the profile was suspended permanently by Instagram.

“Everything attached to Margot had now been deleted,” Madison said.

When contacted, a spokesperson for Meta, which owns Instagram, said the company did not comment on individual accounts or the reason for their suspensions.

Meta was asked whether the suspension was part of a crackdown on AI influencers. The company has not yet responded.

For Margot’s fans, it seems there will be no more exclusive content, and they can no longer buy her a “coffee”.

After creating virtual woman Margot Monet, her creators received messages from men pursuing a relationship with her.

Credits:

  • Photography: Danielle Bonica
