
Shaping Our AI Future: Madhumita Murgia on Agency, Ethics, and Resistance

Excerpts from an interview:
Q: Can you tell us about what Code Dependent is about?
A: I’ve been writing about tech since 2012. When I started out, it was all about the disruption and innovation that tech entrepreneurs bring to society, and how it has changed the way we live, the way we work and the way we interact with each other over the last decade or so. I’ve been very optimistic about tech entrepreneurship, so for me the obvious thing would have been to write a book about the opportunities of AI and the hugely positive ways it might influence and change our lives. But in the decade that I have been reporting on the impact of technology on business and on society, I started to notice a lot of darker undercurrents.

I started writing this more than two years ago, prior to the whole ChatGPT explosion. But I could already see how AI had seeped into our lives in very invisible but very powerful ways: recommendation systems on social media, or telling us what to listen to on Spotify, or what to watch on Netflix or Amazon Prime Video, and so on. So I wanted to understand how it is changing lives around the world today, and what that tells us about our future. I went looking for human stories around the world, across the nine countries featured in my book. Through each of these stories of ordinary people and their interactions with AI, I’ve tried to explore the ripple effects and consequences of automation on our lives, and to help us grapple with what our world might look like as it becomes increasingly automated.
Q: What is interesting is that you have these case studies across nine countries, whether it is the data-scrubbing teams in Africa or the data mining happening in other countries, but there is a sinister edge to it all, which is very worrying.
A: When I started out, the book was about finding stories of all kinds, not necessarily from leaders of companies building AI products. There’s so much focus on that side already; I wanted the other side of the coin, which is the rest of us: how we experience technology, how we use it, and how it’s going to change the way we live. Through my stories, I discovered a lot of unintended harms, and many of the consequences I was finding were the opposite of what was intended when these systems were rolled out.

I can give you an example. I interviewed a single mother in Amsterdam called Diana. She has two young sons who, at the time of this encounter, were 14 and 16, or 14 and 15. The Amsterdam govt had created an algorithm, a predictive system, to figure out which young people would go on to commit serious crimes, and her two boys had ended up on its lists. One list was for children who had already committed crimes and were predicted to commit even worse crimes in future. The other was mostly for brothers and siblings of these children, who in some cases had never even committed a crime. The goal was not to imprison them in advance but to help these families and to prevent those crimes from occurring. But in practice, it tore these families apart. It made the parents feel that they had done something wrong; many of them were single mothers who felt that their children would be taken away from them. So it ended up having the opposite effect from what was hoped. And this comes up again and again when we try to implement automation in very sensitive parts of our society and daily life.

So yes, it did end up being darker and more sinister than even I had expected. But I think that’s the reality of how things are today. For me, it’s about sending a message: so what are we going to do about it? If this is how things are today, how can we change this story? How can we participate and shape it into a brighter future?
Listen to the interview here:

TOI Bookmark Interview With Madhumita Murgia

Q: A lot of those born in this digital age think that any tool that comes their way, so long as it is accessible, is acceptable and can be adopted.
A: That’s why I chose to tell my story through the lens of people. My hope is that we, and even a generation younger than me, won’t sleepwalk into a world where a few companies control the automation of so much of our society, from entertainment to dating, to the way we learn at school, to our relationships with our doctors and our govt. I think it’s really important for us to have agency in this discussion, because it’s happening now.
Q: You can have agency only if you recognise that you have free will and that you have rights. What do you do when people seem to believe that this is simply how the world is?
A: I think we as humans tend to have an automation bias: if it’s a machine that hands down a decision or tells you an answer, you tend to trust it, because somehow we are hardwired to believe that computers are more accurate and more efficient than us. And this has been transferred into our relationship with AI. For me, a huge part of what I hope to bring to the awareness of people who read my book, and my reporting more broadly, is that that isn’t how AI works. Take self-driving cars as an example, which require AI to run. No matter how many thousands of hours these cars have racked up driving around, gathering data and trying to predict, we still haven’t managed to cover every single edge case. This is why these cars end up having accidents. Human behaviour in particular is very difficult to predict, because it’s chaotic and doesn’t follow definitive patterns. And yet we’re trying to bring AI into these domains: into human creativity, like writing and making art and movies, but also into deciding who should get bail or who should be given a govt welfare benefit, and things like that. We should always have human accountability for these systems, and we shouldn’t be treating them like calculators that have the answers.
Q: How is this over-reliance on digital going to impact human society in the long run?
A: We forget that behind these systems are just a few companies from what is currently a small corner of the world: the west coast of the US. These are the puppet-masters behind these systems. They collect the data we give these systems by talking to them, writing, uploading documents and so on. And ultimately they are profit-making entities. When you are interacting with a system that talks to you, tells you what to do, suggests interesting things you should buy or read, and influences your thinking through what it says, then it becomes very easy for advertisers to manipulate you, because they understand the ways you think and the ways you speak.
Q: It parallels the imperialistic ways of conquering the physical world in the 19th century, during the first industrial revolution.
A: I look at examples of Western companies coming, say, to India to collect health data, using Asha workers as front-line data collectors, but never really reporting back what they are doing with that data. Through this concept of data colonialism, you start to see these companies pop up all over the world, supporting govts and becoming the infrastructure that govts and hospitals rely on. They have moved far past being just consumer product companies, similar to the way the East India Company grew to be much more than just a corporation.
Q: Collective will and energy can push back.
A: The entire final third of my book looks at resistors, people who are fighting back against a faceless algorithm or a boss that is just computer software. You have collectives and unions for gig workers in Latin America, in Africa, in Europe and in many other parts of the world who are fighting back. I talk about Maya Wang, a human rights activist in China who is pushing back against the Communist Party and finding ways to break through the tyranny there, even though she is just one person in the face of it. That, I think, is what will help us come together and decide: what are our lines in the sand? Where is it okay to have automation? And we need to speak up now, because it’s being rolled out now.
