
How to get telco AI right—it takes an (even bigger) ecosystem

From devices to on-prem to the public cloud, getting telco AI right involves bringing more new players into an already rapidly expanding ecosystem

It’s still early days for advanced artificial intelligence (AI) and generative AI (gen AI) in telecoms, but the big idea is that customer-facing and internal automation, enabled by AI, could fundamentally change the value proposition operators bring to market. Market in two senses: new products and services would expand the addressable market, particularly in the enterprise space, and could convince financial markets that AI-powered operators are a growth story rather than a safe dividend with flat prospects. But before any of that happens, a lot of other things need to happen first and, given the scale and complexity involved, doing them will require an even bigger ecosystem than the one that already serves the sector.

The rise of gen AI comes at a time when communications service providers are already working through major technological and operating model overhauls. The transition to multi-cloud network operations environments, the reskilling needed to manage the faster pace of change that cloud demands, and the move toward hardware/software disaggregation in the radio access network (RAN) were already heavy lifts. And now AI.

Some key trend lines that speak to the expanding ecosystem operators need around them to get AI right came up during the recent Telco AI Forum, available on demand here. Standouts were the changing nature of customer interaction, the organizational changes needed for humans to work effectively alongside AI-enabled solutions to boost productivity, on-device AI setting the stage for a sort of hybrid processing paradigm, a potential network re-architecture that considers where compute is (or needs to be) in order to support AI use cases and, underlying it all, the people and skills needed to make it all work. 

Blue Planet Vice President of Products, Alliances and Architectures Gabriele Di Piazza, formerly of Google Cloud and VMware, rightly called out that new players are becoming increasingly relevant to telecoms–the hyperscalers with the money to stand up GPU clusters at global scale and the companies that develop large language models (LLMs), for instance. There will need to be a good bit of ecosystem-level dialogue to “try to understand what can be done to tune an LLM specific for the telco industry,” he said. And he likened the necessary shift in operating model to the advent of DevOps alongside cloud-native–which is very much still a work in progress for operators. “I think the same dynamic is at play right now in terms of management of AI, in terms of supervision, operations, and so I think it will be a big skills transformation happening as well.”

The radio as the “ultimate bottleneck” that telco AI could address

Looking more narrowly at the radio access network (RAN), Keysight Technologies’ Balaji Raghothaman said gen AI for customer care-type applications is fairly well established but, “When it comes to the network itself, it’s very much a work in progress.” AI can improve processes like network planning, traffic shaping and mobility management. “But I think the challenge and focus for me is really on energy efficiency because, as we blow up our capacity expectations, we are having to add…more and more antennas to our radios and then blast at higher power.”

The radio, he said, is the “ultimate bottleneck” in the network and requires the majority of compute and the energy needed for that compute. “The radio is where the action is. There are laws of physics-types of limits that have to be conquered and AI can play an important role.” From an ecosystem perspective, Raghothaman said early attempts leaned toward the proprietary, black box end of the spectrum whereas the movement now is towards collaborative, multi-vendor implementations and emerging standardization. 

“This is really opening up the space,” he said, “but also leading into new and interesting areas of how different vendors collaborate and exchange models, but still keep their innovative edge to themselves. This is going to be the emerging big area of…struggle as we accept AI into this wireless network space.”

Expanding from the network out to the actual end user, KORE Wireless Vice President of Engineering Jorrit Kronjee looked at the rise of powerful chipsets that can run multi-billion-parameter LLMs on-device, meaning no edge or central cloud is needed to deliver an AI-enabled outcome to a user. Thinking about that opportunity, he said, “I think when we really start re-imagining what will it look like with AI, we may come up with a whole new suite of products that can really benefit the customer in terms of reliability and always-on…Next to that, I think there are more and more devices that are coming into the market that can run AI models locally…which will open up a whole new set of use cases for customers.”

Back to the earlier conversation around where compute should go in a network based on the need to run various AI workloads, Kronjee said, “We can now start running AI at the edge,” meaning the far, far edge–the device. “You can have these models make decisions locally which would reduce your latency, so you can make much quicker decisions compared to having an AI model run in the cloud somewhere.” Another big piece here is the transport cost (or lack thereof) associated with a roundtrip from a device to run an AI workload vs. running that workload right there on the device. 
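The latency and transport-cost tradeoff Kronjee describes can be made concrete with a toy placement decision. The sketch below is purely illustrative, not any operator's or vendor's actual logic: the `Workload` fields, latency figures and `choose_placement` helper are all assumptions made up for this example. The cloud path pays a network round trip plus uplink transfer time on top of inference, while the on-device path pays only (slower) local inference.

```python
# Hypothetical sketch: deciding whether to run an AI workload on-device
# or in the cloud, based on estimated end-to-end latency. All names and
# numbers here are illustrative assumptions, not a real telco API.

from dataclasses import dataclass

@dataclass
class Workload:
    input_bytes: int        # payload that would have to be uplinked to the cloud
    device_infer_ms: float  # estimated on-device inference time
    cloud_infer_ms: float   # estimated cloud inference time (bigger model, faster silicon)

def end_to_end_cloud_ms(w: Workload, uplink_mbps: float, rtt_ms: float) -> float:
    """Cloud path: network round trip + uplink transfer + cloud inference."""
    transfer_ms = (w.input_bytes * 8) / (uplink_mbps * 1000)  # bits / (bits per ms)
    return rtt_ms + transfer_ms + w.cloud_infer_ms

def choose_placement(w: Workload, uplink_mbps: float, rtt_ms: float) -> str:
    """Pick whichever path has the lower estimated end-to-end latency."""
    return "device" if w.device_infer_ms <= end_to_end_cloud_ms(w, uplink_mbps, rtt_ms) else "cloud"

# Small prompt: the round trip dominates, so running locally wins.
small = Workload(input_bytes=2_000, device_infer_ms=50, cloud_infer_ms=20)
print(choose_placement(small, uplink_mbps=20, rtt_ms=60))   # device

# Heavy job: local inference is too slow, so the cloud wins despite the transport cost.
heavy = Workload(input_bytes=500_000, device_infer_ms=900, cloud_infer_ms=120)
print(choose_placement(heavy, uplink_mbps=20, rtt_ms=60))   # cloud
```

In a real hybrid setup the decision would also weigh battery, privacy and data residency, but even this crude latency comparison shows why small, latency-sensitive workloads naturally gravitate to the device.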

More on the architectural point, Di Piazza said, “If you start thinking both of moving AI to the edge or even the data center, I think this actually starts to change the compute architecture that has existed for the last 30 years.” With CPU-centric approaches giving way to more distributed offloading and acceleration, “I think we’ll see a major change in the next maybe two to five years.” But, he said, “Not necessarily everything means changing the location of compute. In fact, it’s important to understand the application profile to be delivered.” He noted that while AR/VR could well be served from central data centers and still meet latency requirements, another, often overlooked, consideration is data residency. Regardless, “Compute will be much more distributed.”

Thinking beyond 5G and onto 6G, Raghothaman highlighted the opportunity around AI-enabled network digital twins. He said a country-scale digital twin of a network would be a “vital” tool for experimentation. The digital replica “where they can run simulations of new scenarios overnight or in a day where that would have literally taken a year to run in the past…I think is going to be very interesting.” 

From the operator perspective, Antonietta Mastroianni, chief digital and IT officer for Belgian service provider Proximus, focused her comments on how the move from “isolated use cases” using AI to broad deployment is “an essential shift” that “is changing completely the organizing model…We have moved from improvements here and there into completely revolutionizing the operating model, the skills of the people, the landscape not only in terms of technologies but also…how the organization is designed. It’s unbelievable the shift that is happening…The opportunity is immense.”

