The Disturbing Case of Jennifer Ann’s Chatbot: Where AI Crossed the Line

A bizarre case recently made headlines the world over: a girl in the United States, murdered in 2006, had been “resurrected” as an AI chatbot, and her family had no idea.

Drew Crecente, Jennifer Ann’s father, woke up to a Google alert that led to an unnerving revelation. To his utter shock, he realised that an AI chatbot created using his deceased daughter’s name and image was in operation 18 years after her tragic death.

By the time Crecente discovered the bot, a counter on its profile showed it had already been used in at least 69 chats.

This particular chatbot was found on Character.ai, a platform where users can create their own AI “characters”. Jennifer Ann’s name and yearbook photo were used, along with a description that called her an “expert in journalism”.

Eventually, Character.ai removed the chatbot in response to a post by Drew’s brother, Brian Crecente, who had publicised the discovery.

This incident, though traumatising for the family, compels readers to ponder the direction we are headed with respect to the ethics (or lack thereof) of AI applications. It also raises disturbing questions about privacy and human rights.

Even though AI presents new opportunities and benefits, it is also used as a means of societal control, mass surveillance and discrimination. Larry Ellison, the co-founder of Oracle, has said that AI will usher in a new era of surveillance that will ensure “citizens will be on their best behaviour”.

This leads us to the obvious question of algorithmic bias, which stems in part from the datasets fed into these systems.

According to a research paper by the National Institute of Standards and Technology (NIST), ‘Towards a Standard for Identifying and Managing Bias in Artificial Intelligence’: “AI bias extends beyond computational algorithms and models, and the datasets upon which they are built.”

Merel Koning, senior advisor on technology and human rights at Amnesty International, highlighted that a xenophobic algorithm used by the Dutch tax authorities, which flagged families for childcare-benefit fraud partly on the basis of nationality, harmed thousands of lives. She warned that without human rights protections, such mistakes could be repeated.

Ethics and Unauthorised Use of Personal Data

AI systems can collect vast amounts of data through various means, raising significant privacy concerns. Web scraping allows AI to automatically harvest public and potentially personal information from websites, often without user consent. 
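
To make the mechanism concrete, the hypothetical sketch below shows how little effort it takes for a scraper to harvest names and photo links that a page exposes publicly. The URL and HTML structure are invented for illustration only; real sites differ, and many prohibit such collection in their terms of service or robots.txt.

```python
# Minimal, hypothetical sketch of web scraping, for illustration only.
# The URL and markup below are invented; this is not any company's actual pipeline.
import requests
from bs4 import BeautifulSoup

def scrape_profile_data(url: str) -> list[dict]:
    """Collect names and image links that a page exposes publicly."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")

    profiles = []
    # Assumed markup: each person appears inside a <div class="profile">.
    for card in soup.find_all("div", class_="profile"):
        name = card.find("h2")
        image = card.find("img")
        profiles.append({
            "name": name.get_text(strip=True) if name else None,
            "image_url": image.get("src") if image else None,
        })
    return profiles

# Example usage (hypothetical URL):
# data = scrape_profile_data("https://example.com/people")
```

The point is the lack of friction: anything a page renders publicly can be collected and repurposed at scale, with no notice to, or consent from, the people it describes.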

Moreover, the rising use of biometric technologies, such as facial recognition and fingerprinting, involves collecting sensitive, unique data through systems that are not foolproof. AI-powered IoT devices, meanwhile, unlock insights into our daily lives and collective behaviour.

According to Salman Waris, founder and managing partner of TechLegis Advocates & Solicitors, the law does give individuals certain rights of “privacy” and “publicity”, which offer limited control over how a person’s name, likeness or other identifying information is used under certain circumstances.

However, these laws vary from state to state, so they are difficult to summarise. “For instance, in California, the law lays down that the right of privacy or publicity is violated when someone’s name, voice, signature, photograph or likeness appears in a work of art and the subject has not consented to its use,” Waris mentioned.

MyHeritage, a genealogy website, introduced a tool called Deep Nostalgia that allowed users to animate old photographs of their deceased relatives. The AI would add movement to the eyes, mouth, and head, creating the illusion that the person in the photo was “alive”.

While many found the technology fascinating and heartwarming, others found it unsettling or emotionally overwhelming, especially when the animations involved long-deceased relatives. The ethical concern was whether animating someone who cannot give consent is respectful.

In the documentary Roadrunner: A Film About Anthony Bourdain, the filmmakers used AI to recreate Bourdain’s voice, generating a few sentences of narration based on things he had written but never spoken aloud. This sparked a debate about the ethical implications of using AI to recreate a deceased individual’s voice without clear consent.

This raises the crucial question: Are we on the brink of slipping into a world where AI erodes human rights, or have we already crossed that line?

During the Gaza conflict, reports emerged that AI-powered systems were used to identify and strike targets, often with limited real-time human input. 

Algorithms designed to predict behaviour or locate targets based on movement patterns or communications can lead to severe mistakes, particularly in environments where civilians are close to military operations.

There have been claims that AI systems used in the Gaza war disproportionately targeted civilians, including children and families, under the guise of precision strikes. This calls into question whether the use of such technology is compatible with the principles of proportionality and distinction, which are the pillars of international humanitarian law designed to protect civilians.

As per a Council of Europe study, the use of algorithms amid rapidly changing technologies raises considerable challenges, including to the safeguarding of human rights and dignity. “Indeed, the increasing use of automation and algorithmic decision-making in all spheres of public and private life is threatening to disrupt the very concept of human rights as protective shields against state interference,” the report highlighted.
