From devices to on-prem to the public cloud, getting telco AI right involves bringing more new players into an already rapidly expanding ecosystem
It's still early days for advanced artificial intelligence (AI) and generative AI (gen AI) with the telecoms set, but the big idea is that customer-facing and internal automation, enabled by AI, could (hopefully) fundamentally change the value proposition operators can put into the market. And that's market in two senses: new products and services would help expand the addressable market, specifically within the enterprise space, and potentially convince financial markets that AI-powered operators are a growth story rather than a safe dividend with flat growth prospects. But before any of that happens, a lot of other things need to happen and, given the scale and complexity, doing those things will require an even bigger ecosystem than the one that already services the sector.
The rise of gen AI comes at a time when communications service providers were already going through major technological and operating model overhauls. The transition to multi-cloud network operations environments, the reskilling needed to manage the new pace of change that cloud necessitates, and the move toward hardware/software disaggregation in the radio access network (RAN) were already heavy lifts. And now AI.
Some key trend lines that speak to the expanding ecosystem operators need around them to get AI right came up during the recent Telco AI Forum, available on demand here. Standouts were the changing nature of customer interaction, the organizational changes needed for humans to work effectively alongside AI-enabled solutions to boost productivity, on-device AI setting the stage for a sort of hybrid processing paradigm, a potential network re-architecture that considers where compute is (or needs to be) in order to support AI use cases and, underlying it all, the people and skills needed to make it all work.
Blue Planet Vice President of Products, Alliances and Architectures Gabriele Di Piazza, formerly of Google Cloud and VMware, rightly called out that new players are becoming increasingly relevant to telecoms–the hyperscalers with the money to stand up GPU clusters at global scale and the companies that develop large language models (LLMs), for instance. There will need to be a good bit of ecosystem-level dialogue to "try to understand what can be done to tune an LLM specific for the telco industry," he said. And he likened the necessary shift in operating model to the advent of DevOps alongside cloud-native–which is very much still a work in progress for operators. "I think the same dynamic is at play right now in terms of management of AI, in terms of supervision, operations, and so I think it will be a big skills transformation happening as well."
The radio as the “ultimate bottleneck” that telco AI could address
Looking more narrowly at the RAN, Keysight Technologies' Balaji Raghothaman said gen AI for customer care-type applications is fairly well established but, "When it comes to the network itself, it's very much a work in progress." AI can improve processes like network planning, traffic shaping, mobility management and so on. "But I think the challenge and focus for me is really on energy efficiency because, as we blow up our capacity expectations, we are having to add…more and more antennas to our radios and then blast at higher power."
The radio, he said, is the "ultimate bottleneck" in the network and requires the majority of compute and the energy needed for that compute. "The radio is where the action is. There are laws of physics-types of limits that have to be conquered and AI can play an important role." From an ecosystem perspective, Raghothaman said early attempts leaned toward the proprietary, black-box end of the spectrum whereas the movement now is towards collaborative, multi-vendor implementations and emerging standardization.
"This is really opening up the space," he said, "but also leading into new and interesting areas of how different vendors collaborate and exchange models, but still keep their innovative edge to themselves. This is going to be the emerging big area of…struggle as we accept AI into this wireless network space."
Expanding from the network out to the actual end user, KORE Wireless Vice President of Engineering Jorrit Kronjee looked at the rise of powerful chipsets that can run multi-billion-parameter LLMs on-device, meaning no edge or central cloud is needed to deliver an AI-enabled outcome to a user. Thinking about that opportunity, he said, "I think when we really start re-imagining what will it look like with AI, we may come up with a whole new suite of products that can really benefit the customer in terms of reliability and always-on…Next to that, I think there are more and more devices that are coming into the market that can run AI models locally…which will open up a whole new set of use cases for customers."
Back to the earlier conversation around where compute should go in a network based on the need to run various AI workloads, Kronjee said, "We can now start running AI at the edge," meaning the far, far edge–the device. "You can have these models make decisions locally which would reduce your latency, so you can make much quicker decisions compared to having an AI model run in the cloud somewhere." Another big piece is the transport cost (or lack thereof): running a workload right there on the device avoids the round trip to the cloud entirely.
More on the architectural point, Di Piazza said, "If you start thinking both of moving AI to the edge or even the data center, I think this actually starts to change the compute architecture that has existed for the last 30 years." With CPU-centric approaches giving way to more distributed offloading and acceleration, "I think we'll see a major change in the next maybe two to five years." But, he said, "Not necessarily everything means changing the location of compute. In fact, it's important to understand the application profile to be delivered." He noted that while AR/VR could well be served from central data centers and still meet latency requirements, another, perhaps sleeper, consideration is data residency requirements. Regardless, "Compute will be much more distributed."
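To make that placement calculus concrete, here is a minimal sketch, assuming a simplified three-tier layout (device, edge, central cloud), of how an application profile's model size, latency budget and data residency needs might drive where an AI workload runs. Every site name, capacity, latency figure and region in it is a hypothetical assumption for illustration, not something presented at the forum.

```python
# A minimal, hypothetical sketch of AI workload placement across a
# device / edge / cloud hierarchy. All names, capacities, latencies
# and regions below are illustrative assumptions, not measured values.

from dataclasses import dataclass
from typing import Optional

@dataclass
class AppProfile:
    name: str
    model_params_b: float            # model size needed, billions of parameters
    latency_budget_ms: float         # end-to-end latency the use case tolerates
    residency_region: Optional[str]  # data must stay in this region, if set

# Hypothetical sites: (max model size in B params, round-trip latency ms, region).
# Insertion order doubles as preference order: closest to the user first.
# The device's region is None because on-device data never leaves the user,
# which we treat as trivially satisfying any residency requirement.
SITES = {
    "device": (3.0, 40.0, None),      # local chipset: small models, no network hop
    "edge": (30.0, 80.0, "eu"),       # regional edge data center
    "cloud": (1000.0, 150.0, "us"),   # central cloud: biggest models, longest trip
}

def place(app: AppProfile) -> str:
    """Return the first site that satisfies model size, latency and residency."""
    for site, (max_params_b, rtt_ms, region) in SITES.items():
        if app.model_params_b > max_params_b:
            continue  # model does not fit at this site
        if rtt_ms > app.latency_budget_ms:
            continue  # round trip would blow the latency budget
        if app.residency_region and region not in (None, app.residency_region):
            continue  # data may not leave the required region
        return site
    return "unplaceable"  # no site satisfies this profile

if __name__ == "__main__":
    workloads = [
        AppProfile("on-device assistant", 1.0, 50.0, None),
        AppProfile("AR/VR rendering assist", 10.0, 100.0, None),
        AppProfile("EU customer analytics", 20.0, 500.0, "eu"),
    ]
    for app in workloads:
        print(f"{app.name}: runs at '{place(app)}'")
```

Run as-is, the small assistant stays on-device while the larger workloads land at the edge; loosen or tighten any one constraint and the answer moves, which is exactly Di Piazza's point about understanding the application profile before relocating compute.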
Thinking beyond 5G and on to 6G, Raghothaman highlighted the opportunity around AI-enabled network digital twins. He said a country-scale digital twin of a network would be a "vital" tool for experimentation. The digital replica, "where they can run simulations of new scenarios overnight or in a day where that would have literally taken a year to run in the past…I think is going to be very interesting."
From the operator perspective, Antonietta Mastroianni, chief digital and IT officer for Belgian service provider Proximus, focused her comments on how the move from "isolated use cases" using AI to broad deployment is "an essential shift" that "is changing completely the organizing model…We have moved from improvements here and there into completely revolutionizing the operating model, the skills of the people, the landscape not only in terms of technologies but also…how the organization is designed. It's unbelievable the shift that is happening…The opportunity is immense."