Gen AI’s awkward adolescence: The rocky path to maturity

Is it possible that the generative AI revolution will never mature beyond its current state? That seems to be the suggestion from deep learning skeptic Gary Marcus, who in a recent blog post pronounced that the generative AI “bubble has begun to burst.” Gen AI refers to systems that can create new content, such as text, images, code or audio, based on patterns learned from vast amounts of existing data. Certainly, several recent news stories and analyst reports have questioned the immediate utility and economic value of gen AI, especially bots based on large language models (LLMs).

We’ve seen such skepticism before about new technologies. Newsweek famously published an article in 1995 claiming the internet would fail, arguing that the web was overhyped and impractical. Today, as we navigate a world transformed by the internet, it’s worth considering whether current skepticism about gen AI might be equally shortsighted. Could we be underestimating AI’s long-term potential while focusing on its short-term challenges?

For example, Goldman Sachs recently cast doubt on the technology in a report titled “Gen AI: Too much spend, too little benefit?” And a new survey from freelance marketplace company Upwork revealed that “nearly half (47%) of employees using AI say they have no idea how to achieve the productivity gains their employers expect, and 77% say these tools have actually decreased their productivity and added to their workload.”

A year ago, industry analyst firm Gartner listed gen AI at the “peak of inflated expectations.” However, the firm more recently said the technology was slipping into the “trough of disillusionment.” Gartner defines this as the point when interest wanes as experiments and implementations fail to deliver. 

[Figure: Gartner’s Hype Cycle for generative AI. Source: Gartner]

While Gartner’s recent assessment points to a phase of disappointment with early gen AI, this cyclical pattern of technology adoption is not new. The buildup of expectations, commonly referred to as hype, is a natural component of human behavior. We are attracted to the shiny new thing and the potential it appears to offer. Unfortunately, the early narratives that emerge around new technologies are often wrong. Translating that potential into real-world benefits and value is hard work, and it rarely goes as smoothly as expected.

Analyst Benedict Evans recently discussed “what happens when the utopian dreams of AI maximalism meet the messy reality of consumer behavior and enterprise IT budgets: It takes longer than you think, and it’s complicated.” Overestimating the promises of new systems is at the very heart of bubbles.

All of this is another way of stating an observation made decades ago. Roy Amara, a Stanford University computer scientist and long-time head of the Institute for the Future, said in 1973 that “we tend to overestimate the impact of a new technology in the short run, but we underestimate it in the long run.” The truth of this statement has been so widely observed that it is now known as “Amara’s Law.”

The fact is that it often just takes time for a new technology and its supporting ecosystem to mature. In 1977, Ken Olsen — the CEO of Digital Equipment Corporation, which was then one of the world’s most successful computer companies — said: “There is no reason anyone would want a computer in their home.” Personal computing technology was then immature, as this was several years before the IBM PC was introduced. However, personal computers subsequently became ubiquitous, not just in our homes but in our pockets. It just took time. 

The likely progression of AI technology

Given the historical context, it’s intriguing to consider how AI might evolve. In a 2018 study, PwC described three overlapping waves of AI-driven automation stretching into the 2030s, each with its own degree of impact: the algorithm wave, which PwC projected into the early 2020s; the augmentation wave, expected to prevail into the late 2020s; and the autonomy wave, expected to mature in the mid-2030s.

This projection appears prescient, as so much of the discussion now centers on how AI augments human abilities and work. For example, IBM’s first Principle for Trust and Transparency states that the purpose of AI is to augment human intelligence. An HBR article, “How generative AI can augment human creativity,” explores the human-plus-AI relationship. And JPMorgan Chase & Co. CEO Jamie Dimon said that AI technology could “augment virtually every job.”

There are already many such examples. In healthcare, AI-powered diagnostic tools are improving the accuracy of disease detection, while in finance, AI algorithms are strengthening fraud detection and risk management. Customer service is also benefiting, with sophisticated chatbots providing 24/7 assistance and streamlining customer interactions. These examples illustrate that AI, while not yet revolutionary, is steadily augmenting human capabilities and improving efficiency across industries.

Augmentation is not the full automation of human tasks, nor is it likely to eliminate many jobs. In this way, the current state of AI is akin to other computer-enabled tools such as word processing and spreadsheets. Once mastered, these were definite productivity enhancers, but they did not fundamentally change the world. The augmentation wave accurately reflects where AI technology stands today.

Short of expectations

Much of the hype has been built around the expectation that gen AI is revolutionary, or soon will be. The gap between that expectation and current reality is leading to disillusionment and fears of an AI bubble bursting. What is missing from this conversation is a realistic timeframe. Evans tells a story about venture capitalist Marc Andreessen, who liked to say that every failed idea from the dotcom bubble would work now. It just took time.

AI development and implementation will continue to progress. It will move faster and be more dramatic in some industries than in others, and it will accelerate in certain professions. In other words, there will be ongoing examples of impressive gains in performance and capability, alongside stories where AI technology is perceived to come up short. The gen AI future, then, will be very uneven. This is its awkward adolescent phase.

The AI revolution is coming

Gen AI will indeed prove to be revolutionary, although perhaps not as soon as the more optimistic experts have predicted. More than likely, the most significant effects of AI will be felt in about ten years, coinciding with what PwC described as the autonomy wave. This is when AI will be able to analyze data from multiple sources, make decisions and take physical actions with little or no human input; in other words, when AI agents are fully mature.

As we approach the autonomy wave in the mid-2030s, we may witness AI applications that seem like science fiction today, such as precision medicine and humanoid robots, becoming mainstream. It is in this phase, for example, that fully autonomous driverless vehicles may appear at scale.

Today, AI is already augmenting human capabilities in meaningful ways. The AI revolution isn’t just coming; it’s unfolding before our eyes, albeit more gradually than some predicted. Perceived slowing of progress or payoff could lead to more stories about AI falling short of expectations, and to greater pessimism about its future. Clearly, the journey is not without its challenges. Longer term, in line with Amara’s Law, AI will mature and live up to the revolutionary predictions.

Gary Grossman is EVP of technology practice at Edelman.

