Students are at the forefront of the AI ethics dilemma

As universities begin to embrace Artificial Intelligence, they owe it to their students to equip them with guidance for responsible and ethical use. As intermediaries between a product and its users, institutions have a responsibility to the consumer — in this case, students, faculty and staff — to disclose the risks associated with emerging technologies.

USC’s rollout of ChatGPT-5, Perplexity AI, Zoom AI Companion and similar tools like NotebookLM reflects the University’s optimistic stance toward AI in higher education. While researchers have achieved many feats using these tools, there has been minimal discussion of the risks we expose ourselves to by relying on them.

This silence on the moral dilemma AI presents speaks volumes, especially given the extreme cases of such issues in the news today.

In an interview with the Daily Trojan, Elisa Warford, associate professor of technical communication practice at the Viterbi School of Engineering, said that her main concern in the realm of education is students’ relinquishment of critical thinking skills.

“It concerns me that they did just hand this over without much guidance, especially at a time when faculty … [are] trying to keep up and adjust our teaching and our assignments and there’s a whole range of faculty views,” Warford said. 

A larger concern is that students — and people at large — are doubting their own skills and crediting AI more than they should, deferring to a seemingly omnipotent entity. This phenomenon has been termed “automation bias.”

“People still need to develop a domain expertise so that even if you’re just going to oversee the AI, you still have [the] knowledge and judgment,” Warford said. “There’s a worry that the human would just be like, ‘Yeah, it’s fine.’ … We know that [Large Language Models] hallucinate and that they make mistakes.” 

Key figures at tech companies like Anthropic and OpenAI have openly expressed concerns about the fallibility of AI tools — especially in situations where even a minor error could mean the difference between life and death. 

Secretary of Defense Pete Hegseth was so eager to get the government’s hands on AI for military purposes that he went as far as giving Anthropic’s CEO a deadline to open the company’s technology to unrestricted government use. In doing so, the military disregarded developers’ warnings that AI tools remain error-prone and ill-suited for high-risk assignments such as weapons deployment.

AI ethics deserves equal consideration from every institution that employs these tools and from the individuals who comprise those institutions.

When AI companies themselves warn of the hallucinations and imperfections of these tools, we should take them at their word. Even top researchers struggle to understand exactly how AI works, meaning the general public is not even close to comprehending it.

We must remind ourselves that it’s precisely because AI systems are opaque and not understood by the average user that they entice us. This, however, does not mean that AI tools can be relied on to a greater extent than we rely on our own skills.

The clash between Anthropic and the Pentagon is a prime example of a tendency to quickly accept the risks of imperfection in AI because we have the privilege of being shielded from the consequences.

Students now are at a unique advantage, however, in that we are learning the ropes of AI and exploring our careers simultaneously. If we choose to, we can shape our careers to align with this changing landscape.

But with that, we must remind ourselves that AI has been and always will be a mere tool to help us, not an independent entity that can function without human oversight. 

Yuval Noah Harari, historian and bestselling author of “Sapiens,” said that AI doesn’t have a self-correcting mechanism in the way that humans do. Constrained by the information that already exists, AI can’t correct its own mistakes without human intelligence to identify the flaw.

So, at this crossroads of attitudes toward AI, what’s critical to remember is that artificial and human intelligence will always be inextricable. Our own ethics with regard to AI use are the most important consideration in paving the path forward.
