AI for good, with caveats: How a keynote speaker was censored during an international artificial intelligence summit

Abeba Birhane gives a keynote speech at the AI for Good Global Summit 2025 in Geneva. Image: AI for Good Global Summit

GENEVA, Switzerland — On Tuesday, the United Nations’ flagship platform for artificial intelligence, the AI for Good Global Summit 2025, kicked off in Geneva. But the summit’s opening wasn’t without controversy. Hours before the keynote speaker, Abeba Birhane—founder and lead of the TCD AI Accountability Lab (AIAL) and one of Time magazine’s 2023 100 Most Influential People in AI—was set to take the stage, organizers asked her to remove some of her slides.

Specifically, the organizers wanted Birhane to “remove anything that mentions ‘Palestine’ [or] ‘Israel’ and replace ‘genocide’ with ‘war crimes’” and to remove “a slide that explains illegal data torrenting by Meta.”

“In the end, it was either remove everything that names names (big tech particularly) and remove logos or cancel my talk,” Birhane, whose research focuses on algorithmic bias and AI ethics and fairness, wrote in a Bluesky post.

I spoke to Birhane to understand what happened behind the scenes, how it undermines the spirit of the summit, and what the industry can do to ensure better and fairer AI implementation across the field.

Editor’s note: The resulting discussion has been edited and condensed for length and clarity.

Sara Goudarzi: You were set to give a keynote speech titled “AI for social good: the new face of technosolutionism.” What happened before you took the stage?

Abeba Birhane: I got an email very early in the morning to come and rehearse for my talk, which was scheduled for around 10:30 a.m. I arrived just after 8:00 a.m., and very friendly organizers sat me down and started talking about how great my work is and how credible a scientist I am. It went on and on. They were going around and around, and I knew something was wrong. Then I asked, “What is it? Tell me.” And they said I couldn’t go on stage with the kind of content I had on my slides and that I must either change my keynote into a fireside chat or radically alter the content of my presentation. They invite me every year for a fireside chat panel, and I feel it’s not worth my time, so I don’t accept. This year, the only reason I accepted is that it’s a keynote speech, a platform that allows me to communicate with people who need to hear my message.

Then we started negotiating. I opened my laptop; we went slide by slide through my talk, removing bits every time. One of their main concerns was a slide where I had indicated “no AI for war crimes,” which carried the logos of Microsoft, Amazon, Google Cloud, Palantir, and Cisco; they wanted me to remove that. I had already removed a lot of things. I removed content that mentioned Gaza, Palestine, and Israel. I edited “genocide” to “war crimes.” I had removed a slide that connected Meta with illegal data torrenting practices. For me, that was the limit. So, they went and discussed it and came back and said that if I didn’t remove that one image, or add hundreds of other logos to that slide so it wouldn’t incriminate those particular companies, I couldn’t give the talk. Another speaker happened to be there and casually mentioned that maybe I could talk to The New York Times. I think that made the organizers worry. So, they likely calculated that just letting me speak was an easier way out. I managed to keep that one slide and delivered the keynote very stressed and shaking inside.

Goudarzi: Why was it important to cover these elements in your speech?

Birhane: I’m a scientist by training, so everything I presented was either work that has ample empirical evidence to back it up or analysis and articles that are already in the public domain. I don’t have any intention of singling out Microsoft, Amazon, Google, or Palantir, but I am doing that because existing records clearly show that they are working with authoritarian regimes to provide cloud infrastructure and various technologies that are powering war and exacerbating injustice. And for me, the AI for Good Summit is all about doing good, all about the Sustainable Development Goals. And within the Sustainable Development Goals, I think SDG 16, the goal on peace and justice, is way off track. Corporations, on the one hand, are using AI for social good as a shield to say, “look, we are working on fundamental rights and sustainability.” But on the other hand, they’re providing the technology that is fueling war. It’s hypocrisy. So, I’m pointing that out, again, using existing evidence. I’m not making anything up.

Goudarzi: What does what happened say about the summit and the industry as a whole?

Birhane: With the summit, I am honestly very disappointed, because when they claim AI for social good, it feels like it’s only good for AI companies, good for the industry, good for authoritarian governments, and good for their own appearance. They pride themselves on having tens of thousands in attendance every year, on sponsorships, and on the number of apps that are built, and for me that really is not a good measure of impact. A good measure of impact is actual improvement of lives on the ground. Everywhere you look here feels like any other tech summit rather than a social good summit. So, it’s really disappointing to witness something that is supposed to stand for social good be completely overtaken by a corporate agenda, advancing and accelerating the interests of big corporations rather than doing any meaningful work.

Goudarzi: Do you think it’s a reflection of the AI industry as a whole?

Birhane: When you look around at the demos, it’s full of robots. And when you look at the talks, so many of the speakers are tech executives and CEOs. So, to some extent, it’s not just a reflection of the tech industry; it feels like this is the tech industry. This is an AI-focused initiative that has completely folded to, or embraced, the AI industry and is now advancing its agenda.

Goudarzi: Some of your speech and research focuses on how AI models, data, and algorithms don’t consider marginalized communities. What happened on Tuesday morning was yet another example of discounting certain communities.

Birhane: Well, obviously what happened is not right. I think a Black woman being censored after getting an invite and really going through a stressful situation is not a good look by any standard. I don’t know how else to explain it: It’s very disheartening.

Goudarzi: What can the industry do to better serve the greater population? In other words, how can AI really be used for good?

Birhane: Oh, this is a big question. AI is a very broad term and can cover a very wide range of products, research, applications, and use cases. So, it’s difficult to talk about AI, and whether it can do any good in the world, without really specifying an application or domain. But since many people tend to equate AI with generative AI, I’ll answer your question in relation to generative AI.

For me, the harms—the environmental destruction from the energy consumed by, and the water needed to cool, data centers; the extractive business model in which each of us contributes training data but is never consulted, made aware, or asked for consent; and the gig workers who are paid very little and do some of the most psychologically taxing work in content moderation and data labeling—make it really difficult to see how generative AI can be a net positive.

On top of that, a lot of applications around generative AI tend to operate on a “trust me bro” kind of approach, rather than rigorously testing and assessing these models. Take simple appliances in our homes, like a toaster. It has to go through rigorous standards, assessment, and assurance strategies before it’s out on the market, before it reaches the public. AI, on the other hand, has a much more significant impact on society, yet we have very few guardrails. We have very little regulation and almost no enforcement mechanisms. So, considering all this, it’s really difficult to see how generative AI can be beneficial.

Goudarzi: Can it be changed to be more beneficial?

Birhane: I want to say yes in theory, but that would require a fundamental rethinking of AI as we know it. We would have to give up on a lot of the values we aspire to in machine learning. Larger, bigger-scale general AI has to go. We need smaller models, purpose-built and controlled and managed by small communities. Small, purposely built AI might serve some good, but you have to strip it of its capitalist drives. And currently, capitalism and the AI business model are really intertwined.

Also, currently, AI is used in all kinds of harmful ways to punish people. This also has to go. We have to use it to aid people, rather than as a gotcha. If you look at the prison system, recidivism algorithms are there to catch people. If you look at welfare algorithms, they are there to catch fraudsters. So, most AI that is disseminated or deployed in the social space is there as a punishment mechanism, not as something that will help people. Instead of recidivism algorithms, imagine developing something that helps people rehabilitate into society when they come out of prison. But you can’t make money from that, so we don’t have such tools. Imagine we refocused our definition of crime toward tech CEOs or white-collar criminals and used tools to catch those evading taxes. But again, that’s not in the interest of the powerful few. So, it doesn’t happen.

We also have a lot of AI that is really resuscitating eugenics and physiognomy. These are some of the elements that must change if AI were to be used for good.

Goudarzi: Would you do another event at AI for Good?

Birhane: I don’t think I’d be invited again. It’s very unlikely.
