As AI evolves, scientists’ fears escalate: Ted Diadiun

CLEVELAND — You might be familiar with the Terminator movies – or at least the first two, featuring Arnold Schwarzenegger as a cyborg sent back from a future in which machines in an artificial intelligence network called Skynet were trying to take over the world, threatening to destroy human civilization in the bargain:

In the future, a resistance fighter named John Connor is about to lead the humans to victory over the machines when Skynet uses time travel to send an assassin back to 1984 and kill Connor’s mother, so that he is never born. Schwarzenegger plays a bad cyborg trying to kill her in the first movie, and a good cyborg trying to protect the teen-aged Connor in the second.

I’ve read that the idea for the movie arrived in creator James Cameron’s mind as a fever dream in 1982. But 41 years later, recent coverage in The New York Times and other media makes the dream seem less feverish and more prescient.

“A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn,” read the May 30 headline on nytimes.com.

There have been no breakthroughs that would let us travel back in time and correct past mistakes, but the Times story quoted a one-sentence warning about what could be a catastrophic mistake, signed by more than 350 executives, researchers and engineers who have worked on AI.

A previous, longer cautionary statement, signed by more than a thousand AI experts and calling for a moratorium on research in the field, was published a couple of months ago, but this one was more dire:

“Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” said the open letter, released by a nonprofit organization called the Center for AI Safety.

If you’re like me, until just recently, you were reasonably sanguine about AI, unconcerned that something like Skynet or the murderous computer HAL from “2001: A Space Odyssey” could ever pose a threat, and content to let the computer geeks worry about it. The most ominous danger seemed to be to the nation’s English and history teachers, whose students might be able to use AI to compose themes and reports.

But if the people who have seen the capabilities and potential of AI from the inside, and know far more about it than you and I do, are thinking that it might lead to the extinction of the human race, perhaps we ought to add this to our list of terrors that keep us up at night.

The scientists and others left the specifics of what frightens them, and what ought to be done about it, purposefully vague, so as not to exclude those who might disagree on the details – or perhaps to avoid giving ideas to those of bad intent.

But in simple terms, the uneasiness seems to stem from the realization that vast networks of computers are increasingly able to synthesize all the knowledge in the world and “learn” from it – which could lead to consequences undreamed of by their creators and unhindered by any sense of morality or ethics.

A more complete explanation than that is far beyond the scope of this column, but a succinct explanation of AI’s capabilities and possible repercussions was presented in a fascinating “60 Minutes” segment that aired April 19. If you haven’t seen it, you should.

The piece is replete with interviews and explanations and a demonstration of soccer-playing robots. But the thing that knocked me out was a demonstration of the capabilities of the Google chatbot named “Bard,” which is powered by a self-contained program that is mostly self-taught, according to its creators.

In the words of correspondent Scott Pelley, “Bard appeared to possess the sum of human knowledge, with microchips more than 100,000 times faster than the human brain.”

As one demonstration, Pelley asked Bard to complete what has been called the shortest novel ever written, often attributed to Ernest Hemingway. It is six words: “For sale: baby shoes, never worn.”

In five seconds, Bard responded with what Pelley described as a “deeply human tale with characters it invented, including a man whose wife could not conceive, and a stranger grieving after a miscarriage, and longing for closure.”

Pelley then asked Bard to give him the tale in verse. Five more seconds, and out came a poem, with this closing stanza:

She bought them, held them close,
And felt her pain subside.
She knew her baby’s soul
Would always be alive.

Again … an explanation of how a machine could capture the emotion and humanity of such a tale from only a six-word prompt is beyond the scope of this column, even if I could provide it. But “the breathtaking insight into the mystery of faith,” as Pelley described it, leads one logically, and a bit terrifyingly, to wonder what these machines might be capable of as they continue to “learn.”

The program also offered a cautionary tale on the spread of misinformation for those who might rely on the research and capabilities of Bard and other technology:

Bard was asked to produce an essay on inflation. Seconds later, there it was, complete with recommendations for five books with more information. But it turned out that Bard had fabricated the titles out of thin air – authors and all.

Computer experts call these glitches “hallucinations,” and thus far have not found a foolproof way to eliminate them.

On a lighter note, there’s a lawyer in New York who must wish he had watched at least this part of the “60 Minutes” piece on AI.

Representing a plaintiff who was suing Avianca Airlines because he had been struck on the knee by a serving cart, the lawyer submitted a 10-page brief citing several seemingly relevant court decisions. Unfortunately for the lawyer, as reported in a New York Times story late last month, he had used ChatGPT for his research, and the program had simply made up the cases.

That bit of deserved hilarity aside, more and more people are becoming concerned enough about the future of AI that they are urging a moratorium on research and development. With all the evil, avaricious and irresponsible people in the world, how such a moratorium might be policed and enforced is a mystery – perhaps an impossibility.

But one thing seems certain:

Whatever we do, we’d better get it right the first time. There won’t be anyone from the future saying, “I’ll be back,” and turning things around for us.

Ted Diadiun is a member of the editorial board of cleveland.com and The Plain Dealer.

To reach Ted Diadiun: tdiadiun@cleveland.com

Have something to say about this topic?

* Send a letter to the editor, which will be considered for print publication.

* Email general questions, comments or corrections regarding this opinion article to Elizabeth Sullivan, director of opinion, at esullivan@cleveland.com.
