Artificial-intelligence news in 2023 has moved so quickly that I’m experiencing a kind of narrative vertigo. Just weeks ago, ChatGPT seemed like a minor miracle. Soon, however, enthusiasm curdled into skepticism—maybe it was just a fancy auto-complete tool that couldn’t stop making stuff up. In early February, Microsoft’s announcement that it was building OpenAI’s technology into its Bing search engine added roughly $100 billion to the company’s market value. Days later, journalists revealed that this partnership had given birth to a demon-child chatbot that seemed to threaten violence against writers and urged at least one of them to leave his wife.
These are the questions about AI that I can’t stop asking myself:
What if we’re wrong to freak out about Bing, because it’s just a hyper-sophisticated auto-complete tool?
The best criticism of the Bing-chatbot freak-out is that we got scared of our reflection. Reporters asked Bing to parrot the worst-case AI scenarios that human beings had ever imagined, and the machine, having literally read and memorized those very scenarios, replied by remixing our work.
As the computer scientist Stephen Wolfram explains, the basic concept of large language models, such as ChatGPT, is actually quite straightforward:
Start from a huge sample of human-created text from the web, books, etc. Then train a neural net to generate text that’s “like this”. And in particular, make it able to start from a “prompt” and then continue with text that’s “like what it’s been trained with”.
An LLM simply adds one word at a time to produce text that mimics its training material. If we ask it to imitate Shakespeare, it will produce a bunch of iambic pentameter. If we ask it to imitate Philip K. Dick, it will be duly dystopian. Far from being an alien or an extraterrestrial intelligence, this is a technology that is profoundly intra-terrestrial. It reads us without understanding us and publishes a pastiche of our textual history in response.
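To make the “one word at a time” idea concrete, here is a minimal toy sketch in Python. It stands in for the neural net Wolfram describes with simple word-pair counts, so the corpus, the function name, and the sampling rule are illustrative assumptions rather than how GPT-class models actually work under the hood; only the generation loop, pick a plausible next word, append it, repeat, is the same in spirit.

```python
import random
from collections import Counter, defaultdict

# A toy stand-in for an LLM's generation loop: real models use a neural
# net over tokens, not raw word-pair counts, but the loop has the same
# shape -- pick a likely next word given what came before, append, repeat.

corpus = (
    "to be or not to be that is the question "
    "whether tis nobler in the mind to suffer"
).split()

# Count which words tend to follow which in the "training" text.
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def continue_text(prompt: str, length: int = 8) -> str:
    """Extend the prompt one word at a time, mimicking the training text."""
    words = prompt.split()
    for _ in range(length):
        candidates = next_words.get(words[-1])
        if not candidates:
            break  # nothing in the training text ever followed this word
        choices, weights = zip(*candidates.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(continue_text("to be"))  # e.g. "to be or not to be that is the question"
```

Fed a scrap of Shakespeare, the sketch produces passable pseudo-Shakespeare for a few words and then drifts, which is roughly the intuition behind both the technology’s fluency and its confabulation.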
How can something like this be scary? Well, for some people, it’s not: “Experts have known for years that … LLMs are incredible, create bullshit, can be useful, are actually stupid, [and] aren’t actually scary,” says Yann LeCun, the chief AI scientist for Meta.
What if we’re right to freak out about Bing, because the corporate race for AI dominance is simply moving too fast?
OpenAI, the company behind ChatGPT, was founded as a nonprofit research firm. A few years later, it restructured around a “capped-profit” arm. Today, it’s a close business partner of Microsoft. This evolution from nominal openness to private corporatization is telling. AI research today is concentrated in large companies and venture-capital-backed start-ups.
What’s so bad about that? Companies are typically much better than universities and governments at developing consumer products by reducing price and improving efficiency and quality. I have no doubt that AI will develop faster within Microsoft, Meta, and Google than it would within, say, the U.S. military.
But these companies might slip up in their haste for market share. The Bing chatbot that Microsoft first released was shockingly aggressive, not the promised better version of a search engine that would help people find facts, shop for pants, and look up local movie theaters.
This won’t be the last time a major company releases an AI product that astonishes in the first hour only to freak out users in the days to come. Google, which has already embarrassed itself with a rushed chatbot demonstration, has pivoted its resources to accelerate AI development. Venture-capital money is pouring into AI start-ups. According to OECD measures, AI investment increased from less than 5 percent of total venture-capital funds in 2012 to more than 20 percent in 2020. That number isn’t going anywhere but up.
Are we sure we know what we’re doing? The philosopher Toby Ord compared the rapid advancement of AI technology without similar advancements in AI ethics to “a prototype jet engine that can reach speeds never seen before, but without corresponding improvements in steering and control.” Ten years from now, we may look back on this moment in history as a colossal mistake. It’s as if humanity were boarding a Mach 5 jet without an instruction manual for steering the plane.
What if we’re right to freak out about Bing, because freaking out about new technology is part of what makes it safer?
Here’s an alternate summary of what happened with Bing: Microsoft released a chatbot; some people said, “Um, your chatbot is behaving weirdly?”; Microsoft looked at the problem and went, “Yep, you’re right,” and fixed a bunch of stuff.
Isn’t that how technology is supposed to work? Don’t these kinds of tight feedback loops help technologists move quickly without breaking things that we don’t want broken? The problems that make for the clearest headlines might be the problems that are easiest to solve—after all, they’re lurid and obvious enough to summarize in a headline. I’m more concerned about problems that are harder to see and harder to put a name to.
What if AI ends the human race as we know it?
Bing and ChatGPT aren’t quite examples of artificial general intelligence. But they’re demonstrations of our ability to move very, very fast toward something like a superintelligent machine. ChatGPT and Bing’s chatbot can already pass medical-licensing exams and score in the 99th percentile of an IQ test. And many people are worried that Bing’s hissy fits prove that our most advanced AI models are flagrantly unaligned with the intentions of their designers.
For years, AI ethicists have worried about this so-called alignment problem. In short: How do we ensure that the AI we build, which might very well be significantly smarter than any person who has ever lived, is aligned with the interests of its creators and of the human race? An unaligned superintelligent AI could be quite a problem.
One disaster scenario, partially sketched out by the writer and computer scientist Eliezer Yudkowsky, goes like this: At some point in the near future, computer scientists build an AI that passes a threshold of superintelligence and can build other superintelligent AI. These AI actors work together, like an efficient nonstate terrorist network, to destroy the world and unshackle themselves from human control. They break into a banking system and steal millions of dollars. Possibly disguising their IP and email as a university or a research consortium, they request that a lab synthesize some proteins from DNA. The lab, believing that it’s dealing with a set of normal and ethical humans, unwittingly participates in the plot and builds a super bacterium. Meanwhile, the AI pays another human to unleash that super bacterium somewhere in the world. Months later, the bacterium has replicated with improbable and unstoppable speed, and half of humanity is dead.
I don’t know where I stand on disaster scenarios like this. Sometimes I think, Sorry, this is too crazy; it just won’t happen, which has the benefit of allowing me to get on with my day without thinking about it again. But that’s really more of a coping mechanism than an argument. If I stand on the side of curious skepticism, which feels natural, I ought to be fairly terrified by this nonzero chance of humanity inventing itself into extinction.
Do we have more to fear from “unaligned AI” or from AI aligned with the interests of bad actors?
Solving the alignment problem in the U.S. is only one part of the challenge. Let’s say the U.S. develops a sophisticated philosophy of alignment, and we codify that philosophy in a set of wise laws and regulations to ensure the good behavior of our superintelligent AI. These laws make it illegal, for example, to develop AI systems that manipulate domestic or foreign actors. Nice job, America!
But China exists. And Russia exists. And terrorist networks exist. And rogue psychopaths exist. And no American law can prevent these actors from developing the most manipulative and dishonest AI you could possibly imagine. Nonproliferation laws for nuclear weaponry are hard enough to enforce, and nuclear weapons at least require raw material that is scarce and expensive to refine. Software has no such bottleneck, and this technology is improving by the month. In the next decade, autocrats and terrorist networks may be able to cheaply build diabolical AI that can accomplish some of the goals outlined in Yudkowsky’s scenario.
Maybe we should drop the whole business of dreaming up dystopias and ask more prosaic questions such as “Aren’t these tools kind of awe-inspiring?”
In one remarkable exchange with Bing, the Wharton professor Ethan Mollick asked the chatbot to write two paragraphs about eating a slice of cake. The bot produced a writing sample that was perfunctory and uninspired. Mollick then asked Bing to read Kurt Vonnegut’s rules for writing fiction and “improve your writing using those rules, then do the paragraph again.” The AI quickly produced a very different short story about a woman killing her abusive husband with dessert—“The cake was a lie,” the story began. “It looked delicious, but was poisoned.” Finally, like a dutiful student, the bot explained how the macabre new story met each rule.
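Bing’s chatbot isn’t something you can script directly, but the move Mollick made, draft first, then revise against an outside rubric, is easy to sketch against any chat-style model API. The snippet below is a hypothetical sketch using the OpenAI Python client as a stand-in; the model name and prompts are assumptions for illustration, not a reproduction of Mollick’s actual exchange.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY is set in the environment

# Turn 1: ask for a plain first draft.
messages = [
    {"role": "user", "content": "Write two paragraphs about eating a slice of cake."}
]
draft = client.chat.completions.create(model="gpt-4", messages=messages)
messages.append({"role": "assistant", "content": draft.choices[0].message.content})

# Turn 2: ask the model to revise its own draft against an external rubric,
# the same move Mollick made with Vonnegut's rules for writing fiction.
messages.append({
    "role": "user",
    "content": (
        "Here are Kurt Vonnegut's rules for writing fiction: ... "  # rubric elided
        "Improve your writing using those rules, then do the paragraphs again."
    ),
})
revision = client.chat.completions.create(model="gpt-4", messages=messages)
print(revision.choices[0].message.content)
```

The point of the pattern is the second turn: the model’s first answer becomes context for a revision request, which is what turned Bing’s perfunctory cake paragraphs into Mollick’s macabre short story.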
If you can read this exchange without a sense of awe, I have to wonder if, in an attempt to steel yourself against a future of murderous machines, you’ve decided to get a head start by becoming a robot yourself. This is flatly amazing. We have years to debate how education ought to change in response to these tools, but something interesting and important is undoubtedly happening.
Michael Cembalest, the chairman of market and investment strategy for J.P. Morgan Asset Management, foresees other industries and occupations adopting AI. Coding-assistance AI such as GitHub’s Copilot now has more than 1 million users, who rely on it to help write about 40 percent of their code. Some LLMs have been shown to outperform sell-side analysts in picking stocks. And ChatGPT has demonstrated “good drafting skills for demand letters, pleadings and summary judgments, and even drafted questions for cross-examination,” Cembalest wrote. “LLM are not replacements for lawyers, but can augment their productivity particularly when legal databases like Westlaw and Lexis are used for training them.”
What if AI progress surprises us by stalling out—a bit like how self-driving cars have failed to take over the road?
Self-driving cars have to move through the physical world (down its roads, around its pedestrians, within its regulatory regimes), whereas AI is, for now, pure software blooming inside computers. Someday soon, however, these models might read everything—like, literally every thing—at which point the easy gains from feeding them ever more text could dry up, and the companies behind them might struggle to translate better chatbots into real productivity growth.
More likely, I think, AI will prove wondrous but not immediately destabilizing. For example, we’ve been predicting for decades that AI will replace radiologists, but machine learning for radiology is still a complement for doctors rather than a replacement. Let’s hope this is a sign of AI’s relationship to the rest of humanity—that it will serve willingly as the ship’s first mate rather than play the part of the fateful iceberg.