
Daniel Dennett calls for ethics in AI development

“It’s emerging, it’s everywhere. It’s going to be even more everywhere, … and it’s scary and inspiring at the same time,” Jad Oubala, president and founder of the Tufts Artificial Intelligence Society, said when describing AI.

For this reason, TAIS brought together computer science researchers and renowned philosopher Daniel Dennett to discuss the ethical concerns of developing AI technology at a panel discussion titled “Ghost in the Neural Net: Traversing the Ethics of AI” on Nov. 15. Matthias Scheutz and Tina Eliassi-Rad, computer science professors at Tufts University and Northeastern University, respectively, joined Dennett on stage in Distler Performance Hall. Oubala, a first-year student, moderated the discussion.

Dennett, director of the Tufts Center for Cognitive Studies and professor emeritus of philosophy, is best known for his groundbreaking work on consciousness. When asked by Oubala to define this term, Dennett made a point to exclude the topic from further discussion.

“[AI] is not conscious now, … so just leave aside the question of whether they’re ever going to be conscious or sentient,” Dennett said. “We have bigger problems to worry about that are on our doorstep now.”

Dennett then further expanded on an idea he explored in an article published earlier this year, “The Problem With Counterfeit People,” drawing a comparison between lifelike AI and counterfeit money.

“Maybe [Large Language Models] can do wonderful things that we can’t come close to doing,” Dennett said. “I just want us to be able to tell the difference, and that’s because LLMs are not people: They’re counterfeit people. … I want to suggest that counterfeit people are more dangerous, more potentially destructive, of human civilization than counterfeit money ever was.”

Referring to the speed at which AI technology is being developed without ethical consideration, Dennett offered a pessimistic outlook on the future of the industry and its implications for humanity.

“Unless we take very strong steps immediately … we will soon enter a very dark age,” he said. “It may be too late to stop this from happening.”

Eliassi-Rad, Northeastern University’s inaugural Joseph E. Aoun professor, emphasized the racialized harms that can arise when ethical considerations are left out of AI development.

“The facial recognition systems will do better on Professor Dennett than on me,” Eliassi-Rad said. “They usually don’t do as well on women, and they don’t do well on darker-skinned women. … We have known for decades that those oximeters, which measure the amount of oxygen in your blood, do not work very well for darker-skinned people. All that data is going into the systems that are developing these AI tools in healthcare.”

She said, however, that healthcare isn’t the only domain where AI fails those it is designed to help. Eliassi-Rad cited a 2017 Wisconsin legal case in which a judge used an automated risk assessment score to hand down an increased prison sentence.

“To me, it’s just unbelievable that the judge is treating this software, this machine learning AI software, as an expert witness without cross-examining it,” Eliassi-Rad said. “These tools are being used in life-altering situations. They’re being used in policing. They’re being used in our criminal justice system. They’re being used in healthcare and school assignments and so on and so forth.”

Eliassi-Rad said that when she asks others in her field if they’d want AI systems to be used on themselves, “nobody raises their hands.” According to Eliassi-Rad, developers distance themselves from the social ramifications of their work using a “veil of ignorance.”

“Think about that. The people who know the math, who are building the tools, don’t want it to be used on them,” she said. “That’s one of the biggest dangers we have.”

Scheutz described a “troubling” trend he has seen among attendees of AI conferences over the last six years: a growing lack of the critical thinking needed to make ethical decisions.

“[They] don’t know logic anymore. They don’t know the foundations of AI,” Scheutz said. “I think it’s really important to understand the technology, to really understand the math, understand the tradeoffs, understand what the potential is, right? You hear ‘AI’ everywhere. People talk about it and most of them don’t know what it is and how it works.”

He encouraged students to “learn the math in great detail, understand exactly what that system can and cannot do,” so that more care can be taken and ethical dilemmas can be mitigated earlier in the development process.

“Then you take that knowledge, and you apply it,” Scheutz said. “You may not use certain algorithms because you know there is a potential that these algorithms will make funky associations. You may not use that particular inference mechanism because you know it will lead to incorrect inferences. … It’s really important to, from the beginning, reflect on the technology as well as the algorithms, especially mitigating difficult questions in light of ethical theories.”
