What The New AI Architecture Means For Business

Bruce Kelley is CTO and SVP of NetScout, leading technology strategies for product and service solutions.

Enterprise networks have been growing more complex for as long as I’ve worked in this business, but what we’re experiencing today is categorically different. The rapid rise of artificial intelligence, and particularly the shift toward agentic AI, is introducing an entirely new layer of architectural complexity. It’s not just more systems, more data and more cloud. It’s a foundational change, on par with what networking teams experienced during the transition to the cloud. And once again, visibility is at risk.

Just as cloud and containerization changed the rules for performance monitoring and security, the proliferation of AI agents will challenge our ability to see and understand what’s happening on our networks.

This change is already underway. Intelligent agents, built on top of generative AI models and capable of acting autonomously, are beginning to automate real business functions across industries. They’re transforming everything from travel booking to healthcare diagnostics, and soon their presence will be as routine as cloud workloads are today.

However, AI agents don’t function like traditional applications. They’re frequently composed of multiple microservices, often distributed across different environments, connected through APIs and driven by real-time data from databases and third-party sources. Their behaviors are probabilistic, not deterministic. That means they don’t just follow pre-written instructions; they interpret and decide. And because of the way these agents operate—through encrypted connections, ephemeral compute instances and autonomous logic—it becomes incredibly difficult to trace how decisions are made or where things went wrong.
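To make that distinction concrete, here is a minimal sketch contrasting the two modes. It is illustrative only: llm_complete is a hypothetical stand-in for a real model API, and the randomness simply mimics the probabilistic behavior described above.

```python
import json
import random

def llm_complete(prompt: str) -> str:
    # Hypothetical stand-in for a generative model API call; the random
    # choice mimics the probabilistic behavior the article describes.
    action = random.choice(["lookup_inventory", "call_pricing_api", "escalate_to_human"])
    return json.dumps({"action": action, "confidence": round(random.random(), 2)})

def traditional_handler(order_total: float) -> str:
    # A traditional application: deterministic and rule-based, so every
    # decision is fully traceable from the code alone.
    return "apply_discount" if order_total > 100 else "no_discount"

def agent_step(context: str) -> dict:
    # An agent interprets context and decides; the same input can yield a
    # different action on each run, which is what makes tracing hard.
    return json.loads(llm_complete(f"Given: {context}. Choose the next action."))

if __name__ == "__main__":
    print(traditional_handler(120.0))         # always the same output
    print(agent_step("customer order #4521")) # may differ run to run
```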

There’s no single statistic that captures the extent of the change in complexity, but a few paint a compelling picture. About 71% of internet traffic now consists of API calls, and the typical company maintains more than 600 API endpoints, a number that is rapidly increasing. More than 75% of network traffic is encrypted, while 70% of containers live for less than five minutes. Agentic AI will massively increase traffic volumes, and the market for it is expected to reach $155 billion by 2030.

This lack of transparency isn’t just a technical challenge. It’s a looming operational risk for businesses that rely on performance and uptime, especially in sectors like finance, healthcare and logistics.

A New Stack For A New Reality

Supporting the rise of AI agents is a new kind of enterprise IT architecture: the automation stack. It’s layered from the ground up. At the base are specialized chips and GPUs designed for AI workloads; above that sits cloud infrastructure and containerized environments; then come the large language models; and finally, at the top, are the agents themselves, each developed to solve specific business problems in different verticals.
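As a rough mental model, the stack can be expressed as an explicit dependency chain. The layer names below are illustrative, not a vendor taxonomy:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Layer:
    name: str
    runs_on: Optional[str]  # the layer directly beneath it

# The automation stack, bottom to top (layer names are illustrative).
stack = [
    Layer("AI-optimized silicon (GPUs, accelerators)", None),
    Layer("Cloud infrastructure and containers", "AI-optimized silicon (GPUs, accelerators)"),
    Layer("Large language models", "Cloud infrastructure and containers"),
    Layer("Vertical-specific agents", "Large language models"),
]

for layer in stack:
    base = layer.runs_on or "(hardware base)"
    print(f"{layer.name} -> runs on: {base}")
```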

This layered stack enables a degree of automation and intelligence that was previously impossible, but it also introduces interdependencies across systems that are harder than ever to monitor. Companies need to understand the relationships between these components from end to end. Every individual component can be healthy while the communication between them is not, resulting in a poor experience.
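A minimal sketch of what that end-to-end view means in practice, with made-up component names, latency samples and budget: each endpoint reports healthy, yet one link between two healthy components breaches its latency budget.

```python
import statistics

# Illustrative data: every component reports healthy, but one link between
# two healthy components has degraded latency (all names and numbers made up).
component_health = {"agent-gateway": "ok", "llm-service": "ok", "vector-db": "ok"}
link_latency_ms = {
    ("agent-gateway", "llm-service"): [12, 14, 13, 15],
    ("llm-service", "vector-db"): [9, 480, 510, 495],  # the degraded hop
}

SLO_MS = 100  # hypothetical per-hop latency budget

for (src, dst), samples in link_latency_ms.items():
    p50 = statistics.median(samples)
    endpoints_ok = component_health[src] == "ok" and component_health[dst] == "ok"
    if endpoints_ok and p50 > SLO_MS:
        print(f"{src} -> {dst}: both endpoints healthy, but median latency "
              f"{p50}ms breaches the {SLO_MS}ms budget")
```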

Why Observability Is Breaking Down

AI agents are already interfacing with enterprise networks through data that is encrypted, transient and machine-generated, which makes traffic patterns harder to analyze with traditional tools. As these systems grow more dynamic, spanning multicloud platforms and edge compute nodes and increasingly leveraging 5G connectivity, organizations are losing their ability to understand what’s happening on their networks.

That’s because autonomous systems don’t report availability and service problems the way people do. Human users complain when service degrades; autonomous agents won’t. You’ll need to detect problems before they impact performance or security, even when there’s no obvious signal.
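One simple illustration of detection without a human signal is baseline-deviation monitoring. This is a sketch only, with hypothetical error counts and a standard z-score test, not a production detector:

```python
import statistics

def silent_anomaly(baseline: list[float], current: float, z_threshold: float = 3.0) -> bool:
    # Flag a metric that drifts from its learned baseline: agents won't
    # file a ticket, so the monitor has to notice on its own.
    mean = statistics.fmean(baseline)
    stdev = statistics.stdev(baseline)
    if stdev == 0:
        return current != mean
    return abs(current - mean) / stdev > z_threshold

# Hypothetical per-minute API error counts observed during normal operation.
baseline_errors = [2.0, 3.0, 2.0, 4.0, 3.0, 2.0, 3.0, 3.0, 2.0, 4.0]
print(silent_anomaly(baseline_errors, current=18.0))  # True: investigate now
```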

Automation is essential to handle this scale, but automation without observability is dangerous. Without the right tools to inspect traffic and understand system health, organizations risk making high-stakes decisions in the dark.

The Business Risk Is Real

Today’s digital services are the backbone of global operations, and when they falter, the consequences are immediate: missed trades in the finance world, delayed diagnoses in healthcare, lost revenue in retail or logistics and damaged trust overall.

As companies deploy more intelligent agents to streamline workflows and serve customers, their IT teams must have updated tools to match. Traditional observability platforms, built for static systems and rule-based monitoring, won’t cut it. You need solutions designed to parse encrypted, distributed, real-time environments. That means capturing raw packets, the fundamental building blocks of digital interactions. Packets provide the data needed to generate precise, reliable metadata, offering actionable intelligence and deep visibility into complex interactions. That intelligence enables a comprehensive understanding of service relationships rather than a myopic view of individual component health, and that kind of knowledge fuels better, smarter and faster decision making.
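To illustrate the packet-to-metadata idea, here is a small sketch using the open-source Scapy library. It needs capture privileges to run, and the flow fields shown are a simplified subset of what production systems extract; the point is that even with encrypted payloads, endpoints, ports, packet counts and volumes remain observable.

```python
from collections import defaultdict
from scapy.all import sniff, IP, TCP  # pip install scapy; needs capture privileges

flows = defaultdict(lambda: {"packets": 0, "bytes": 0})

def to_flow_metadata(pkt):
    # Reduce each raw packet to compact flow metadata. Even when the payload
    # is encrypted, endpoints, ports, packet counts and volumes stay visible.
    if pkt.haslayer(IP) and pkt.haslayer(TCP):
        key = (pkt[IP].src, pkt[IP].dst, pkt[TCP].sport, pkt[TCP].dport)
        flows[key]["packets"] += 1
        flows[key]["bytes"] += len(pkt)

sniff(prn=to_flow_metadata, count=200)  # sample 200 packets, default interface

for (src, dst, sport, dport), stats in flows.items():
    print(f"{src}:{sport} -> {dst}:{dport} "
          f"packets={stats['packets']} bytes={stats['bytes']}")
```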

Resilience Is Where Observability And Security Converge

There’s another shift underway as well: the convergence of observability and security. Once siloed domains with separate toolsets, these areas are now understood to be deeply related, and you want them communicating. This is the new path toward resilience: tools that cross-pollinate observability capabilities with security.

Resiliency means more than just uptime. It means having systems that can detect, adapt to and recover from failures and attacks without manual intervention. In the new era of agentic AI and dynamic networks, resiliency will depend on how well organizations unify their visibility and security strategies.

The shift to agentic AI represents both the greatest automation opportunity and the greatest operational blind spot businesses have faced since the cloud revolution. Companies that proactively invest in resiliency by combining observability and security will avoid the painful scramble of retrofitting visibility into a fully autonomous, encrypted maze of AI agents. Those who wait will find themselves trying to build observability scaffolding around black boxes they can no longer decipher, turning what should be a competitive advantage into a costly and complex remediation project.
