I Promoted AI for Years and Automated Myself Out of a Job

I should have known this moment would come. As a technology evangelist, I’d spent years minimizing AI’s potential to destroy jobs. Every paper I wrote about AI and its practical cousin, Robotic Process Automation (RPA)—which uses software robots to accomplish “thinking tasks” like reading emails—boasted that the software would liberate employees from drudgery and enable them “to do more meaningful work,” or words to that effect.

But now, the very technology that I helped promote has put me out of a job.

I’ve been evangelizing the business benefits of enterprise technology for more than two decades, first as a marketing executive and more recently as a freelance writer. At IBM, I was known as a “Software Evangelist,” and my work always had me delivering secular sermons about the redemptive potential of the latest computer program or hardware.

As an apostle of digital transformation, a prophet of workflow orchestration, I gushed with messages from the gods of tech about how products like AI and RPA would allow corporations to be born again—or at least generate higher earnings.

And then, earlier this year, things started to go a little quiet. When you’re a freelancer, a slowdown is never good, but I grew more alarmed as a client notified me that they would no longer require my services because they were switching their content writing to AI. Others simply dropped me and didn’t send a note. Several more accused me of using AI in my writing, so why would they pay me?

That was quite a smear. After offering a legally binding attestation that I have not used, and never will use, AI in my writing, I worked up the nerve to ask a question that’s on the minds of many writers: How do you think the AI software gins up writing that sounds like something I wrote? It does so by stealing.

Generative AI programs—models that can create images, text, and sounds—and specifically large language models (LLMs) train their systems by ingesting and analyzing millions of words written by people like me. Then, they spit out their own rendering of those words, which read an awful lot like human writing.

It’s a Kafkaesque experience to try to prove that you have not stolen writing from a machine that stole your writing in the first place. That seems to be the whole point, though. The AI is so good, you can’t tell if its output was written by a person or a machine—not an auspicious situation for Homo sapiens who own laptops.

My chutzpah didn’t change my client’s mind, but oh, the cosmic comeuppance of it all. My business isn’t dead, but it’s wounded, and I deserve some of the blame.

I’d been an eager cog in the machinery of disruption, which has been grinding up American jobs since well before I was born. Kurt Vonnegut’s frighteningly prescient novel about automation, Player Piano, which features factory workers who train machines to do their jobs only to end up on the street, debuted more than 70 years ago. However, the phenomenon of humans being displaced by machines predated him by a few centuries.

Player Piano also contains the remarkable prediction that “computing machines,” as Vonnegut’s characters called them, would soon take over for many thinking tasks. He saw it in 1952, and it quickly came to pass, with computers pink-slipping people who sorted checks at banks, answered phones, and inked entries into paper accounting ledgers.

Behind every such mass unemployment event was an army of people like me, screaming about the business benefits of automation. I was happy to earn a living contributing to unexpected bouts of joblessness, as long as they happened to other people.

The pace of disruptive change is accelerating. Take the RPA-powered “virtual employee.” Next time you contact your insurance company, you might interact with an AI-driven virtualized insurance claims adjuster with a name like “Sally.” She’s an RPA bot who handles your claim in collaboration with other bots, or the rare human being. Sally has an AI-generated personality, photo, and voice, but she doesn’t require an office or sick leave. With Sally on the job, insurance companies can bless their human employees with opportunities to do more meaningful work elsewhere.

Some of these innovations are undeniably beneficial. Cybersecurity operations, for instance, would be paralyzed without AI. The problem is that people pushed out of work by disruptions increasingly have nowhere to go. So much of that meaningful other work I so passionately evangelized is unavailable in the form of steady jobs.

Working Americans of all ages have now joined what I call Generation D (Disrupted). For the majority of us, a career now comprises a series of short-term jobs punctuated by regular layoffs driven by technology changes, mergers, or a need to goose earnings. As we age, it gets harder to find jobs, especially as the hiring process itself is now controlled by AI-based software that embodies age bias.

Some of my Gen D cohort have been able to adapt. I transitioned into freelancing after getting laid off from IBM in 2009, along with 10,000 others, a month after the CEO told shareholders that people were IBM’s greatest asset. I’m turning 60 in just over a year, and I’m not sure I can reinvent myself yet again. This may be one disruption too many. Younger members of Generation D are better equipped for the struggle, though a lot of them seem to be adrift, too.

The gig economy could be a solution, though AI appears to be mangling that, too. Upwork tells me I can audition to edit AI-generated articles for $8 an hour. I’m not quite there yet, but ask me about it in a few months.
