
In the era of deep-fake videos, tech companies must not dismantle their ethics teams

Someone forwarded me a story about Microsoft laying off its ethics team. My first thought was “fake news.” It’s surprising to learn that Microsoft even had an ethics department. It’s even stranger to hear that the group has been disbanded at a time when technological innovation is getting wild.

These are the days of deep-fake videos, internet trolls, and artificial intelligence (AI). And so, in chasing down this story, I used my best internet skills. I checked multiple sources. I refused to believe websites I had never heard of. Eventually I found a report on Popular Science. A reporter there named Andrew Paul explained, “This month saw the surprise dissolution of Microsoft’s entire Ethics & Society team — the latest casualty in the company’s ongoing layoffs affecting 10,000 employees.”

The article explains that the Ethics & Society team once had 30 members. It was reduced to seven people in 2022. And now it is gone. The article notes that Microsoft still has a department of “Responsible AI.” That led me to search Microsoft’s website for the Responsible AI department. There I discovered a number of documents and reports based on the following six principles: fairness, inclusiveness, reliability and safety, privacy and security, transparency, and accountability. It’s reassuring to see that Microsoft has this guidance in place. But one wonders who will administer it as personnel are cut.

Anyway, I recount how I tracked down this story as an example of online critical literacy. You need to actively search for information, rather than letting it flow into your feed. You should check multiple sources, rather than relying on the first click. Double check URLs to make sure they’re not phony. Seek legitimate sources in mainstream or legacy media. Corporate documents, policy statements, and legal filings are also useful. And legitimate sources of information typically include an author’s name.

Of course, it requires effort and experience to sort things out. It helps to understand that the internet, in all of its tainted glory, is as much about making dollars as it is about making sense. Websites want clicks. They entice with spicy stories and sexy pictures. Algorithms force-feed us stories and images. Search engines profit when we click.


There is money and mayhem to be made online. So, you should enter that space with a suspicious mind. Don’t take anything at face value.

This is especially true as AI and deep fakes become better. I discussed the challenge of AI in a previous column. Here, let’s consider deep fakes.

Two recent deep-fake stories are worth considering. In one, students made a deep-fake video of a school principal uttering a racist rant that included threats of violence. In another, actress Emma Watson’s likeness was used in a sexualized ad for an app that could be used to, you guessed it, make deep fakes.

In the first case, it is easy to see how deep fakes could be weaponized, as a fake video could be used to discredit an enemy. In the second case, the goal appears to be to allow for customized pornography, where any face could be “swapped” into a porn video. In the first case, yikes. In the second case, yuck.

One solution to this problem takes us back to the ethics teams at big tech corporations. Now is the time to build these teams up — not tear them down. These groups should be monitoring content and establishing norms and guidelines for the use of technology. Beyond that, we need a full-fledged movement for better education about media literacy, critical internet usage, and respectful community standards for the online world. And lawyers and legislators need to regulate and litigate.

Someone said recently that the internet broke our democracy. It is also possible to imagine how deep-fake technology can break people’s hearts. But this kind of damage can be prevented with ethical guidance, wise legislation, and human ingenuity.

I look forward to reading future stories about the expansion of ethics teams at tech companies. Maybe someday there will be college majors and high school classes in critical thinking and the internet. Of course, when I run across these stories, I’ll double and triple check them to make sure they are not fake news.

Andrew Fiala is a professor of philosophy and director of The Ethics Center at Fresno State. Contact him: fiala.andrew@gmail.com.


