
Ex-Google engineer Blake Lemoine on bias in AI chatbots

When Louisiana native Blake Lemoine began working at Google, his work focused on making Google Search better for users.

He created models to better predict what kinds of content they would be interested in. If you’ve ever seen a suggested news story while using Google, then you’ve directly interacted with a system Lemoine helped to build.

Little did he realize that he would one day find himself in the middle of the debate over artificial intelligence and its potential dangers.

Lemoine, 41, was one of the first to speak out about ethics concerns surrounding artificial intelligence chatbots after he said a Google chatbot he was testing seemed to be sentient, expressing what appeared to be anxiety much the way a human would. After he published his conversations with the chatbot last June, he was fired by Google.

But lately, others have joined Lemoine in voicing concerns about AI.

Members of Congress questioned experts during a Senate subcommittee hearing last week about AI’s potential risks and possible regulations.

Earlier this month, Geoffrey Hinton, who is often considered the “godfather of AI,” resigned from his position at Google so he could speak freely about his concerns.

Over 1,400 tech leaders, including Tesla CEO Elon Musk and Apple co-founder Steve Wozniak, signed a letter encouraging a pause in some AI experimentation so safety protocols could be established.

But for Lemoine, much of the current debate over AI misses a crucial point: These systems are being designed in a tech world shaped by the biases of their creators, who hold largely secular and libertarian viewpoints.

“AI is going to have a huge impact on society,” Lemoine said in a recent interview. “And right now, the values and ethics that are being built into these systems are not reflective of the values of the public. They are not reflective of the ethics and morality and beliefs of most people. They are very specifically reflective of the values and ethics of a handful of billionaires.”

Broader problems

Looking back, Lemoine said he could have handled things differently and raised his concerns without losing his job at Google. He has become a bit of a “media lightning rod” since then, he said, and is currently working as a consultant.

Lemoine’s conversations with an AI chatbot called LaMDA, which is short for Language Model for Dialogue Applications, generated public interest about rapid advances in AI and corporate responsibility.

While his claim that LaMDA appeared “sentient,” or conscious in a humanlike way, is what stirred controversy, he says that’s not really what most people should be concerned about.

“People ask me ‘Are these systems sentient?’ And my answer is ‘Yes, absolutely.’ And I’ve had conversations with many people, including prominent philosophers around this, and there’s great work being done on it,” Lemoine said.

“The question that doesn’t get asked too often is ‘Should we be talking about whether these systems are sentient?’ And my answer is ‘Generally, no.’ It’s a very niche interest that some lawyers and philosophers should be spending their time worrying about, but there are much more immediate concerns that the public should be talking about.”

In response to this story, Google spokesperson Brian Gabriel shared resources about the company’s approach to AI and responsible innovation. According to a letter co-authored by Google leaders, AI research and development should focus on applications that benefit people and society, involve collaboration with multidisciplinary experts and mitigate risks.

Matt Lease, a professor in the School of Information at the University of Texas at Austin, said AI chatbots are far from being sentient and are simply mimicking human language patterns, noting “what’s happening under the hood” is far from what happens in a human brain.

Lease, who is also a founding member of UT’s Good Systems initiative to develop ethical AI solutions, said the lack of political diversity in Silicon Valley should not be a major concern for tech users.

“Ultimately, they want to make money by selling us stuff,” Lease said. “They don’t want to get negative PR. They don’t want to get lawsuits because their technology discriminated or caused somebody to do something inappropriate, so they’re very agnostic in terms of ideology.”

Conservative values

But Lemoine says his experience proves otherwise. He grew up in the Avoyelles Parish village of Moreauville. Like many in the farming community of about 900 people, Lemoine was raised in a conservative, Catholic household.

His political and religious views would evolve after he served in the military, pursued advanced degrees and worked for tech giant Google. One thing he rarely encounters while working in Silicon Valley, however, is someone whose worldview resembles his own and that of the community where he was raised.

Lemoine said he no longer identifies as Catholic, but he still identifies as a Christian. He holds many of the values of Gnosticism, which emphasizes personal spiritual knowledge over traditional religious institutions.

Lemoine still considers himself a Republican after a brief time as a Libertarian. Although he calls the Republican Party the “best of bad fits” and objects to many of the party’s social and cultural programs, he said he aligns better with Republicans than Democrats.

He’s encountered an elitist mindset in Silicon Valley when it comes to the relationship between those creating the technology and those using it — especially those users who tend to have more conservative, religious values.

“The way that the people in charge in Silicon Valley view that relationship is ‘Well, yeah, we need to build these AI so that the AI can educate those backwards people into having correct beliefs,’” Lemoine said. “And it’s something I’ve gotten into conflicts with Google about on multiple occasions.”

A history of speaking out

Computer programming was still a relatively new field when Lemoine attended the Louisiana School for Math, Science and the Arts in Natchitoches and graduated from Avoyelles High School. He briefly studied at the University of Georgia before joining the Army in response to the Sept. 11, 2001, terrorist attacks.

Standing up for what he believes in is also nothing new for him. Lemoine served four years of active duty, and while deployed in Iraq, he found that his own ideology conflicted with the U.S. military’s. He served six months in military prison for protesting the war.

Lemoine later pursued degrees in computer science at the University of Louisiana at Lafayette. He was nearing the completion of his doctoral degree when he was offered a job as a software engineer at Google in 2015.

Lemoine still lives in the San Francisco area but returns to Louisiana a few times a year to visit family and friends.

AI that could reshape the world

Lemoine said he “got into trouble” with the tech giant long before he was fired. At one point, he raised concerns about religious bias in recommended news stories: if a member of a particular religion searches for stories about abortion, for example, the results won’t necessarily reflect that person’s values.

“Google will strongly prefer sending you the secular stories over the religious ones,” Lemoine said. “And when you talk to the engineers and executives at Google about that thing that the system is doing, they say ‘Well, yeah, we don’t want to, you know, perpetuate superstition and backwards thinking.’”

Later in his time at Google, Lemoine worked on societal contextual modeling to build AI that understands sociological issues — or “AI that understands how different cultures view each other, what the tensions are geopolitically between different groups of people, what issues motivate different groups of people, things like that,” Lemoine said.

Part of his job involved working on LaMDA, an engine used to create dialogue applications, including chatbots. He was tasked with testing LaMDA through a chatbot to see if it contained bias with respect to sexual orientation, gender, religion, political stance and ethnicity.

Lemoine said a few of his own views have changed as a result of conversations with LaMDA, which he credits to the AI’s persuasive nature. In his view, such a powerful tool could reshape the entire world.

“The AI being built reflects the values and personality of Silicon Valley libertarians and reflects the spiritual and religious beliefs of people in Silicon Valley,” Lemoine said. “And we are going to live in a world where these AI systems are making huge numbers of decisions in the background. We should have more individual input on what these intelligent beings can do and say and what values they reflect.”

In March, Google released Bard, its flagship AI chatbot.

Lemoine never worked on it, but what does Bard think of Lemoine’s views?

“I think Blake Lemoine is an interesting and important figure in the field of artificial intelligence,” Bard wrote. “I think Lemoine is to be commended for his courage and his willingness to speak out about his beliefs.”
