
OpenAI Invests $1 Million in Duke’s MADLAB ‘AI Morality’ Research

News graphic featuring the logo of OpenAI. (Image: eWeek)


OpenAI awarded a $1 million grant to a Duke University team of researchers exploring AI algorithms capable of predicting people’s moral judgments. The grant funds academic research into “AI morality” as debates over the ethical use of artificial intelligence continue to gain steam in public discourse. Duke has been studying how AI can help people make ethical decisions, producing resources on ethics involving the latest AI technologies.

The study, “Making Moral AI,” is being conducted by Duke’s Moral Attitudes and Decisions Lab (MADLAB) and led by Walter Sinnott-Armstrong, professor of practical ethics and the project’s principal investigator. Co-investigator Jana Schaich Borg of the Social Science Research Institute joins him on the project.

MADLAB is an interdisciplinary laboratory designed to understand various factors that shape and influence people’s “moral attitudes, decisions, and judgments.” Researchers have been trying to understand the role of AI and how it can be used as a “moral GPS” to help people make better judgments guided by ethical standards and traditions, drawing on “computer science, data science, philosophy, economics, game theory, psychology, and neuroscience.”

About the OpenAI Grant

OpenAI’s grant will be used to “develop algorithms that can predict human moral judgments in scenarios involving conflicts among morally relevant features in medicine, law, and business,” according to a Duke press release. Few further details about the research have been made public, but the grant is a promising first step for OpenAI’s investment in such an effort. Ethics is highly nuanced, and it remains a long shot for current technologies to grasp the subtleties of moral judgment, especially when human emotion plays a crucial role.

AI models are trained to compute over data and statistics, and large language models rely mostly on patterns in language and reasoning to generate predictions. Morality research, by contrast, spans several fields, including philosophy, sociocultural studies, and psychology. Combining these disciplines with AI, computer science, and data science would be an enormous task, and fully integrating insights from the social sciences and philosophy into AI algorithms will take time.
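To make the gap concrete, here is a minimal, purely illustrative sketch of how moral-judgment prediction can be framed as text classification. The scenarios and labels below are invented toy data, and the bag-of-words pipeline is a generic baseline; neither reflects MADLAB’s actual method or dataset.

```python
# Illustrative sketch only: framing moral-judgment prediction as text
# classification. Scenarios and labels are hypothetical toy data,
# NOT from Duke's MADLAB study.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical scenarios paired with assumed crowd-sourced verdicts.
scenarios = [
    "A doctor lies to a patient to spare them distress.",
    "A lawyer reports a client's planned fraud to regulators.",
    "A manager takes credit for an employee's work.",
    "A nurse breaks protocol to save a patient's life.",
]
labels = ["wrong", "permissible", "wrong", "permissible"]

# TF-IDF features plus logistic regression: a deliberately simple
# baseline that captures only surface language patterns.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(scenarios, labels)

# Predict a verdict for an unseen (also hypothetical) scenario.
print(model.predict(["An accountant hides losses from shareholders."]))
```

A model like this can only echo wording correlations in its training labels, which is precisely the limitation noted above: it has no grasp of context, intent, or emotion, the ingredients that make human moral judgment so difficult to formalize.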

Read our guide to navigating AI’s ethical challenges or our look at the ethics of generative AI models to learn more.

