While AI has seen significant advancements in recent years, the foundations for this rapid progress were laid in the late 1950s. Its growth has accelerated substantially since then, reaching a fever pitch in 2022 when generative AI entered the public market via ChatGPT.
This launch was groundbreaking, and it has sparked a race in the tech industry to bring AI to its full potential. It's a race akin to the Cold War's arms race, complete with tensions that could have devastating consequences depending on how key players choose to act.
But if the work on AI began in the 1950s, why are we only now seeing it become a race? And why are there high tensions around its development? The answers to these questions are nuanced, with many moving elements and factors at work. This article aims to address them clearly and concisely.
AI Is Growing Exponentially
AI computing has roughly tracked Moore's Law, the observation that the number of transistors on a chip, and with it available computing power, doubles about every two years, producing exponential growth. The most recent developments indicate that we've reached a pivotal moment in the tech industry, where advancements are more dramatic and influential than they were ten or more years ago, because more is happening in less time.
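As a back-of-the-envelope illustration (not a claim about any specific chip or model), doubling every two years compounds quickly; the sketch below simply computes the growth factor implied by that doubling schedule:

```python
# Illustrative only: compound doubling every two years,
# the growth pattern Moore's Law describes.

def doublings(years, period=2):
    """Number of complete doublings over a span of years."""
    return years // period

def growth_factor(years, period=2):
    """Relative capacity after `years`, assuming one doubling per period."""
    return 2 ** doublings(years, period)

# Over a decade: five doublings, a 32x increase.
print(growth_factor(10))  # 32
# Over two decades: ten doublings, roughly a 1000x increase.
print(growth_factor(20))  # 1024
```

The point of the arithmetic is the article's point in miniature: a decade of doubling yields a 32-fold increase, but the second decade multiplies that by another 32, which is why progress feels far more dramatic now than it did ten or more years ago.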
Fulfilling the Potential of AI
AI has the makings to become more world-changing than the development of social media or the introduction of the internet. It’s difficult to fault the tech minds that want to be the ones to bring it to the world; not only would it be an achievement testing their skills and capabilities, but it would also be a history-making one.
However, rushing to achieve AI’s full potential is a double-edged sword, introducing one of many tensions in its development: a disregard for human safety. TIME magazine compares the rush to what we saw during the development of social media, saying, “In a winner-takes-all battle for power, Big Tech and their venture-capitalist backers risk repeating past mistakes, including social media’s cardinal sin: prioritizing growth over safety.”
The concerns aren’t just noted by those watching the tech developments unfold. Even the people working on AI, like OpenAI’s Sam Altman and Tesla’s Elon Musk, fear the implications that AI could have in the long term. Musk even fears it has the potential to bring about the dystopian worlds depicted in science-fiction movies or books.
Obtaining Market Dominance
Perhaps the strongest driving force shaping an AI race is the competition to become the dominant AI provider on the market. Following the public release of generative AI tools, like DALL-E and ChatGPT, it became clear that there was serious public interest — and where there’s interest, there’s money. If projections are correct, AI could inject trillions into the world’s economy and line the pockets of AI developers.
Following OpenAI’s lead, other organizations started pushing to release their AI models to remain competitive in the tech landscape and hopefully be the company to reign supreme — turning the development of AI into a race.
A host of AI programs is now available to the general public beyond ChatGPT, like Microsoft's Copilot and Google's Gemini. These programs are already making waves across industries, supporting work and taking over entire tasks. Even industries as different as marketing and online gambling are seeing impacts from AI.
However, this drive to dominate the market has become another source of concern and tension. It has raised questions like, "Should AI and its potential impacts be left in the hands of private organizations?", "Do policymakers need to intervene?", and many others relating to the ethics and morals of AI development and use.
Lacking Regulations
Adding fuel to the fire is the lack of regulation of AI's development. There's no policy mandating that developers slow their work or put certain safeguards in place, despite calls for such measures from AI researchers and developers themselves.
The call was made through an open letter that WIRED says “argued that AI is advancing so quickly and unpredictably that it could eliminate countless jobs, flood us with disinformation, and — as a wave of panicky headlines reported — destroy humanity.” Although the letter underscored serious concerns and was signed by major players in the tech industry, including AI developers like Elon Musk, the response to these concerns was minimal.
Some policymakers, most notably in the European Union, have created regulations for how AI can be used, restricting its use in high-risk decision-making or in ways that could infringe on privacy and other basic rights. However, these rules did not address several commonly raised issues, like the source material used to train AI models. Artists and creatives, in particular, have been vocal about the ethics of using their work to train AI without permission.
Final Remarks
The technology behind generative AI, and the broader pursuit of artificial general intelligence, has been growing exponentially for decades. In recent years, we've reached a milestone in that growth where AI now has real-world applications, and real-world implications.
It has attracted a mixture of excitement, interest, and intense scrutiny. The development has led to an arms race among major technology players, each competing for a lasting position as the leader in AI and the many benefits that come with it.
This race has been fraught with worries and tensions between companies and the people shaping the programs, and it has highlighted that our policies and regulations aren’t equipped for this rapidly evolving technology. With all the tensions and concerns brought up with the AI race, one can’t help but wonder: Will this be a race we’ll all win or one we’ll all lose?