
At the time of its release in late 2022, OpenAI’s artificial intelligence (AI) platform, ChatGPT, was seen as the most significant technological innovation in generations. Ensuing advancements in AI have reinforced this view. Less noticed has been a more recent development that could be equally revolutionary: AI models beginning to exhibit reasoning.
“We have come very far, very fast,” says Denny Fish, a portfolio manager at Janus Henderson Investors. “ChatGPT was quickly embraced as a horizontal platform that could alter the utility of technology, and use cases have started to mature across functions, ranging from customer support and content creation to coding and marketing.”
Other foundational models have joined ChatGPT, including Anthropic’s Claude, Google’s Gemini, Meta’s Llama and Mistral. These platforms have competed throughout AI’s training phase, and thus far the scaling laws – the relationship whereby greater data and computing inputs yield outsized gains in model capability – have held.
Enter test-time inference
After an intense training phase, many AI models are on the cusp of transitioning to their more operational inference phase. The goal is for these models to eventually achieve artificial general intelligence (AGI). This pursuit took a step forward in late 2024 with the release of OpenAI’s o1 model, code-named Strawberry, which marries reasoning and memory with large language models (LLMs).
What does this mean? Rather than interpreting inputs and predicting a “next step,” AI models instead think through problems iteratively to identify the best solution or path forward. Models can now learn from each iteration, with every additional step producing data that can be referenced in future uses, which should lead to exponentially greater accuracy. This advancement is called test-time inference (also referred to as test-time compute).
Test-time inference is akin to the approach behind board game programmes developed by Alphabet’s DeepMind. Unlike the original AlphaGo, which was trained on records of human games, its successor AlphaGo Zero was not preloaded with human data; it learned solely by playing against itself and quickly surpassed human-level play. An example from a professional setting is an AI startup that sought to deploy the technology to execute paralegal work. It quickly advanced to associate-level tasks, and one can presume an AI partner is not far behind.
These advancements have moved the goalposts. Initially, the expectation was that scaling laws would diminish as the AI training phase matured. Instead, a new set of scaling laws has emerged, owing partly to the data produced within test-time inference.
A mad scramble?
Thus far, AI’s rollout has been relatively orderly. Recent advancements may change that. Rather than a tapering of capital expenditure as the training phase matures, investment levels may be sustained as AI platforms scramble to procure sufficient computing capacity to operate test-time inference.
This stage necessitates platforms being closer to the customer, and already the next generation of AI innovators is being funded to build upon these foundations. Grasping the magnitude of the opportunity, software companies are seeking to integrate AI into their offerings, and services companies are actively exploring ways to leverage AI to grow their businesses and improve efficiencies.
Going mainstream
The possibility remains that, as the AI training phase matures, scaling laws might exhibit diminishing returns, thus lowering the demand for capex-intensive infrastructure.
The more likely scenario is new, complementary scaling laws taking hold that should keep AI-related investment high. Fortifying this argument is the AI opportunity shifting from the multi-billion-dollar software market to the multi-trillion-dollar services sector.
With AI’s potential becoming increasingly visible, players outside of services and research are staking their claim. Governments are developing “sovereign AI” to protect data, increase economic returns and maintain a degree of technological independence1. Within industry, enterprises are seeking to deploy AI to optimise manufacturing processes, design factories and integrate efficiency-enhancing bots across operations.
An investor’s perspective
With computing intensity likely increasing with the deployment of test-time inference, recent impressive investment levels by hyperscalers could be sustained over the next couple of years. This is favourable for the producers of graphics processing units (GPUs), application-specific integrated circuits (ASICs) and other segments integral to AI infrastructure. The introduction of the next generation of considerably more powerful GPUs should reinforce this trend.
Within software, the shift from AI training to reasoning should benefit both infrastructure and applications software companies. The latter could see greater demand as providers harness AI to deliver solutions for high-value business processes and workflows.
Outside the tech sector, power supply must be addressed given AI platforms’ energy intensity. Hyperscalers are considering solutions to avoid energy bottlenecks, including deploying dedicated power sources onsite, co-locating near nuclear plants and studying the feasibility of small modular reactors (SMRs) and fuel cells.
Early iterations of AI could seem at once sophisticated and rudimentary, sometimes delivering dubious, if not comical, outputs. An intense training phase has honed these models, and the advent of reasoning could yield capabilities that were very recently considered theoretical.
“The speed at which all of this is occurring means there are few segments of the global economy and financial markets that won’t feel the impact of AI’s deployment,” says Fish. “While secular in nature, we think it’s reasonable for investors to expect the AI theme to translate into monetisation over the near- to mid-term.”
Disclaimer
Technology industries can be significantly affected by obsolescence of existing technology, short product cycles, falling prices and profits, competition from new market entrants, and general economic conditions. A concentrated investment in a single industry could be more volatile than the performance of less concentrated investments and the market as a whole.
Energy industries can be significantly affected by fluctuations in energy prices and supply and demand of fuels, conservation, the success of exploration projects, and tax and other government regulations.
Concentrated investments in a single sector, industry or region will be more susceptible to factors affecting that group and may be more volatile than less concentrated investments or the market as a whole.
Volatility measures risk using the dispersion of returns for a given investment.
The opinions and views expressed are as of the date published and are subject to change. They are for information purposes only and should not be used or construed as an offer to sell, a solicitation of an offer to buy, or a recommendation to buy, sell or hold any security, investment strategy or market sector. No forecasts can be guaranteed. Opinions and examples are meant as an illustration of broader themes, are not an indication of trading intent and may not reflect the views of others in the organization. It is not intended to indicate or imply that any illustration/example mentioned is now or was ever held in any portfolio. Janus Henderson Group plc through its subsidiaries may manage investment products with a financial interest in securities mentioned herein and any comments should not be construed as a reflection on the past or future profitability. There is no guarantee that the information supplied is accurate, complete, or timely, nor are there any warranties with regards to the results obtained from its use. Past performance is no guarantee of future results. Investing involves risk, including the possible loss of principal and fluctuation of value.
Janus Henderson is a trademark of Janus Henderson Group plc or one of its subsidiaries. © Janus Henderson Group plc.