Voices of the Newsroom: Media literacy in the age of generative AI

C.M.: I know, does this feel, like, strange for you?

E.R.: It feels great. I’ve always wanted to be on the opposite side, but I haven’t really had the opportunity, so it feels really cool. Thank you for having me.

C.M.: Of course, I’m so happy to have been able to give you that opportunity. So, I’m curious what initially drove you to cover the topic of generative AI. It’s of course a hot-button issue right now, but what else about it specifically piqued your interest?

E.R.: So, this is a little bit of a weird way to go about this, but of course, AI is huge and it’s grown exponentially within the last few years as ChatGPT is used more and more, and now with Sora AI, photorealistic video keeps progressing closer and closer to reality, so I was really interested in the topic. But I did see a TikTok that was talking about how “AI is looking so realistic, one day it’s just going to look like real life to the point of being indecipherable,” and that really piqued my interest. And so I was thinking, how does this relate to our education? How does this relate to the way in which we personally view mass media and kind of critically analyze that in our brains? And there’s a lot to cover with it.

C.M.: Yeah, it’s definitely a really important topic in education right now, and I feel like it’s also very important for us as journalists in this world of the Loyolan. You’ve been with the Loyolan for a long time, you’re very clearly passionate about journalism, and I was wondering how you might have used this passion for journalism to inform the way you went about gathering more information and more insight into the power and the possible risks of generative AI.

E.R.: Well yeah, journalism — a lot about that has to do with just sharing the truth, sharing with our audiences, the people that are either reading or listening or watching the things that we produce — they trust us to share truthful information. And a lot of what is happening here with AI is that people are being fed misinformation. They’re being fed things that are generally either not true or aren’t real or are untrustworthy, and there’s a lot behind that. But again, that’s what drew me to this idea of AI being part of media literacy and kind of changing the way in which we go about viewing and consuming mass media.

C.M.: For sure. And media literacy is such an important part of journalism, being able to decipher what is real and what is not, and this is something that we really have drilled into us at the Loyolan. But if credibility is threatened so easily by generative AI, what might this mean for how media is consumed and understood going forward from here?

E.R.: You know, traditionally media literacy is kind of just something a person has. It’s the ability for someone to critically analyze something that’s presented to them through media, whether that be journalism or social media or otherwise, and be able to determine its accuracy, verify if it’s true or if it’s credible. The way in which we use our media literacy skills depends on the media we are consuming, but generally it means that we question what is being fed to us, check different sources that are sharing information under the sort of same umbrella and make an informed decision or create an idea in our minds of what is true. Now, with what is happening with AI, that’s something completely different. Media literacy in the traditional sense doesn’t really work here the way that we originally thought it could or the way it originally did. There’s not a lot of opportunity to check if something is real, and sometimes we just don’t really want to. With AI, it’s become much more difficult to verify information or even combat disinformation. However, while media literacy usually means being skeptical and questioning what we’re seeing, that’s not really what we’re doing now. I mean, I’m a TikTok and Instagram user. Colin, I’m sure you are as well.

C.M.: I absolutely am. 

E.R.: I know you are!

C.M.: Yes!

E.R.: How often do you tap the screen to scroll? How quickly do you move past a TikTok or an Instagram post? Like, that’s not something that is easily measured, of course, but we know it’s fast. That’s because we consume so much all at once in just one sitting, but we don’t really fact-check. There’s no reason for us to if we’re just consuming so quickly, and that’s something that the people I spoke to really emphasized as well.

C.M.: Yeah, what other insights were you able to get from some of the people you spoke to?

E.R.: So, I talked to Dr. Christopher Finlay, who is a communications professor here at LMU, and he really emphasized this idea of media literacy now being an aspiration, some sort of hope that we have. Another thing to note is that humans seek information that reinforces our already set beliefs. We kind of, like, go down these rabbit holes that validate those beliefs, whatever they may be, and the algorithm shows us what we like. AI video is different in a sense, yes, but it is intensifying the existing problem. We think something is true, and we let it be true. So with AI, our capability to determine something’s accuracy or verify if it’s true or credible, it’s just not as possible as it once was. So, media literacy is going to change. How it will be defined is something that we will see in the future. Another person I spoke with was Dr. Andrew Forney, a computer science professor here at LMU, and he really emphasized this idea of what’s trustworthy versus misleading or exaggerated. He talked more from a computer science professor’s perspective, of course, but a huge part of what he emphasized to me is that human thinking is completely different than AI thinking. Humans imagine things in completely different ways. We imagine what-if scenarios. We think of how the world actually works and use the learned experiences of others to think about new situations. Well, generative AI models, like ChatGPT or image and video generators, are essentially mimics that predict what is likely based on their training data. AI doesn’t really have real-world experiences, so it actually doesn’t understand the world or the cause and effect of the world the way that we do.

Andrew Forney (A.F.): If you really want to get down and do the deep work, the thing where you need to do discovery, you need to do something new, why turn to a mimic that’s really just going to regurgitate something that’s already been done?

E.R.: So, he kind of said that the problem isn’t just fake videos or images or information presented by generative AI. It’s the biases and the rejection of things that don’t match our beliefs, but it’s also the social media algorithm that feeds us information reinforcing what we already believe. Videos don’t really give us an opportunity to question them. You either see something and think it’s real and move on, or you see something and think it’s fake and move on. That’s it.

C.M.: Very simple.

E.R.: Right. And that is not what media literacy is. It’s not simple at all. Forney pointed out that AI is great for quick answers, but it’s bad if you’re trying to do any form of original thinking. He used this going-to-the-gym analogy.

A.F.: The analogy I like to use a lot is, you know, using [generative AI] heavily when you’re in your education is a lot like sending someone else to go to the gym for you. Yeah, the weights get lifted, but that’s not really the point.

C.M.: Well, I also know that you teamed up with our audience engagement department to quiz LMU students at Wellness Wednesday on whether or not they could tell the difference between real videos and videos generated by AI. How well did the students you talked to fare at deciphering which videos were real and which were artificial?

E.R.: So yes, I tested students at Wellness Wednesday to see if they could identify whether the videos I showed them were AI or real. And as you could probably suspect, students had a difficult time. Whether that be thinking a real video was fake or thinking an AI video was real, people guessed based on their intuition, based on their gut, and a lot of the time they were wrong. Now, what does this say about media literacy? We have this natural tendency to trust our guts, but our guts can be really wrong a lot of the time. And that’s something that Dr. Finlay pointed out as well. Intuition is not great at detecting misinformation, even if we’re confident. AI videos and photos and information do not make that any easier. AI does not hold any accountability. So in this quiz I did with students, a lot of them either thought that what they were seeing was real and were disappointed when they found out it was AI, or thought something that was real was AI because they were in this place in their heads where they just didn’t know. And that says a lot about how AI has exponentially progressed, but it also says a lot about how much we lean on intuition instead of critical thinking. There’s no way for them to do outside research in the moment to inform their answers. So, media literacy is just completely changing with AI, and education isn’t really keeping up with it.

C.M.: I totally see that. And with media literacy being in such a dire position, how do you think we can improve our ability to detect content produced by generative AI or in general content that contains misinformation?

E.R.: That’s a great question. Yeah. As I said, we aren’t keeping up with the pace of AI and how quickly it has grown. So now media literacy requires a redesign. How it gets redesigned is the major question here. But media literacy right now means remembering that AI systems are powerful mimics, like Dr. Forney said. They’re not reasoners. They’re not real. If we let AI replace our own critical thinking instead of supporting it, where you ask it a question and it gives you a simple answer that helps you come to a conclusion rather than reaching that conclusion for you, we do risk losing our own individuality. We risk losing our own creativity. And we also lose our ability to tell what’s true, and even the ability to want to tell what’s true in the first place. So, we’ve got to understand how quickly misinformation can spread when we don’t question what’s true. The hard part is that in the next few years, we won’t really be able to tell if an AI video is AI. I have a hard time doing it already.

C.M.: Me as well. It’s really tricky sometimes.

E.R.: It’s tricky. There are some telltale signs, of course. But the better it gets and the more training data it takes in, the harder it is for us to tell. It’s our responsibility to consistently verify, but also to redefine media literacy as a whole. It’s not what it once was.

C.M.: Absolutely. Well, Emma, thank you so much for joining me back in the studio today.

E.R.: Thank you, Colin. I’m so happy to be here. Thank you for having me.

C.M.: Of course. It’s been a pleasure.

C.M.: This has been “Voices of the Newsroom,” a podcast produced by the Los Angeles Loyolan. Opinions and ideas expressed in this podcast are those of individual student content creators and are not those of Loyola Marymount University, its board of trustees or its student body. This episode was produced and edited by Colin Mills, audio producer, with special thanks to Emma Russell, enterprise reporter, for her participation. You can watch her video on generative AI at laloyolan.com. Feedback about this episode can be submitted to editor@theloyolan.com. Thanks for listening, have a wonderful break and we’ll see you next semester.
