To secure a humane future, we must address a critical question: Can AI be trained to operate ethically?
There are no easy answers. Machine ethics is elusive because the scope is global, the rules are lax, and little accountability is mandated.
The risks of unethical conduct could be catastrophic. Insofar as powerful AI models are outpacing human reasoning, our species faces an ambiguous future.
Contention over how to act responsibly surfaced in a recent clash between the Department of Defense and big technology companies. Anthropic, a leading start-up, refused bids to remove safety guardrails that restrict deploying its AI in certain classified military operations without sufficient human participation. In response, the Pentagon held that a private company should not tell the government when to use AI.
While AI's capabilities in biology could save untold lives, its potential to aid in the release of toxins is harrowing. And without input from a human doctor, a life-altering diagnosis could be scientifically wrong and morally irresponsible.
Anthropic has already faced litigation over its AI system, known as Claude, and had to settle in court. The newest AI systems and projects are adept at finding flaws in software. They could wreak havoc on crucial global infrastructure, including the internet and electricity grids. But how should AI be penalized for criminal behavior?
On the one side, tech utopians place great faith in AI. On the other, tech dystopians despair over unanticipated consequences.
Proprietary knowledge held by major tech companies diminishes transparency. And governments withhold technological information from bad actors. Although the Trump administration is considering overseeing the distribution of AI tools, it still opposes significant regulatory measures.
Another complication is that a top tier of society gains the most from the AI industry, while the majority, especially in the global South, bears the cost. A growing AI divide leaves countries with weak digital infrastructure vulnerable to powerful companies like Google and OpenAI, whose data centers consume minerals, energy and water, and spew pollution.
That said, three ethical quandaries stand out: enhanced surveillance, technological unemployment and autonomous warfare.
In the first, the internet enables corporations to extract a vast amount of personal data for profit and steer human activity. Shoshana Zuboff, professor emerita at Harvard Business School, has shown how human experience is used to create and sell prediction products that shape behavior. AI power can thereby erode privacy, threaten democracy and undermine freedom.
The second stems from technology’s role in displacing humans from work. With automation, job loss is widely experienced in accounting, the arts, filmmaking, healthcare, programming, engineering, architecture and other pursuits. Technological unemployment is only partially offset by new niches in the information economy. Since work can be a source of meaning in life, we must ask whether allowing this free fall in well-being is ethically tolerable.
Third, lethal autonomous weapons systems make decisions with little or no human involvement. The U.S. employs pilotless drones in its military operations. So, too, killer robots remove humans from the kill chain and program other machines.
The Israeli historian Yuval Noah Harari conjures the prospect of technological totalitarianism. Superintelligent machines might go to war with one another, fall into the hands of rogue cyber-attackers, or destroy the forms of human intelligence that they embody. Fiction and film writers have unleashed their imaginations to alert the public to the haunting possibility of digital dictatorships.
Proposals for checking algorithmic/AI power, such as regulatory reforms, audits by third-party inspectors and review boards to safeguard citizens’ rights, have barely been instituted. Meanwhile, resistance to ethically dubious practices, known as “techlash,” by digital activists is mounting. On-the-job walkouts, petitions and protest rallies have become commonplace.
Before it’s too late, there is a fleeting opportunity to infuse stringent ethical standards of democratic rule into AI. There are signs that people across myriad classes, occupations and ways of life are calling for inculcating ethical codes into advanced technology. To make this scenario happen, youth, particularly the Gen Z movement, can provide impetus, just as it has fostered political change in countries ranging from Bangladesh, Madagascar and Nepal to Peru.
I remain hopeful: Hammering out a consensus on which, and whose, ethics should govern AI is a struggle that can be won.
Jim Mittelman, a Boulder resident and Camera columnist, is an educator, activist, and author. His books include “The Globalization Syndrome: Transformation and Resistance,” “Implausible Dream: The World-Class University and Repurposing Higher Education,” and “Runaway Capitalism.”
