
The New Third-Party Risk Hiding In Plain Sight

Omer Grossman, Chief Information Officer at CyberArk.

Third-party risk used to mean vendor sprawl. Shared credentials. Maybe a forgotten integration or two. But in 2025, the real risk is what third parties bring into your environment and what they’re silently doing within your walls: insider threats you willingly onboarded.

Autonomous AI agents are more than just extensions of vendor tools. They’re credentialed actors who connect and execute, and they don’t ask for forgiveness or permission.

And if you’re still thinking about software vulnerabilities instead of a dynamic, identity-first defense, it’s time to rethink your playbook.

From Interface To Actor

We’ve crossed a threshold. AI tools don’t stop at processing prompts. They now initiate actions: updating customer records, generating and sending emails, transferring files between departments, even provisioning cloud resources—without humans in the loop. And momentum is only expected to increase.

According to CyberArk’s 2025 research, 72% of employees already use AI tools on the job, yet 59% of organizations lack identity controls for them. This represents a systemic failure to treat machine-driven automation, like agentic AI, as the privileged access layer it is.

Consider the Model Context Protocol (MCP). You can think of it as the “USB-C of LLM tool access,” standardizing how AI agents connect to external tools, prompts and data. In practice, that means an AI agent could spin up, plug into your CRM, pull account data, run a forecast and email you an executive summary without human intervention. It’s powerful, but without oversight, that same pipeline could expose sensitive records or trigger actions based on flawed logic.

That means MCP also standardizes a new attack surface. By reducing the friction of connecting agents to a wide range of tools, it widens the potential blast radius. Once granted access, an agent can act across systems by design. And because there’s still no universally adopted trust model for validating or restricting these tools and resources, access can spiral into unintended exposure.

Developers are already spinning up local MCP servers in day-to-day workflows, meaning it’s an active risk for today’s businesses. Without identity-based controls? It’s a logic-layer free-for-all.
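
To make that concrete, here is a minimal sketch of how little ceremony it takes to expose a tool over MCP, assuming the open-source MCP Python SDK and its FastMCP helper; the server name, CRM lookup and environment variable are hypothetical stand-ins for a real integration.

```python
# Illustrative MCP server exposing one tool to any connected agent.
# Assumes the open-source MCP Python SDK ("pip install mcp"); the server name,
# CRM call and environment variable below are hypothetical stand-ins.
import os

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("crm-tools")  # hypothetical server name


@mcp.tool()
def get_account(account_id: str) -> dict:
    """Return account details from the CRM for a given account ID."""
    # The risk in miniature: the tool authenticates with whatever broad
    # credential happens to sit in the developer's environment, not with an
    # identity scoped to this specific agent and task.
    api_key = os.environ.get("CRM_API_KEY", "demo-key")  # hypothetical secret
    # A real implementation would call the CRM here; this returns a placeholder.
    return {"account_id": account_id, "owner": "unknown", "auth": bool(api_key)}


if __name__ == "__main__":
    # Any MCP-capable agent that can reach this process can now discover
    # and invoke get_account without further review.
    mcp.run()
```

Nothing in the protocol itself forces the developer to scope that credential to the agent, which is exactly the gap identity-based controls have to close.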

Beyond Data Poisoning To Self-Propagating Risk

A year ago, the fear was model poisoning as the next form of supply chain attack. Alter the training data, compromise the output. But now, the real danger is autonomous execution at scale, without constraint.

One tool configured to optimize account updates might accidentally push PII to the wrong system or trigger unintended financial transactions. Not out of malice, but because no one properly scoped the request or the AI’s access.

In other words, we’ve gone from poisoning the water to giving the pipe a brain while forgetting to lock the valve.

The New Speed Of Trust

While agentic AI introduces a new layer of complexity, we’ve seen automated systems become problematic before.

Stuxnet rewrote automation to cause physical damage without tripping alarms, proving that logic doesn’t need to be loud to be lethal. The breach involving SolarWinds, meanwhile, scaled that principle when a single compromised update—delivered through a trusted vendor—embedded threat actors across thousands of networks.

What Stuxnet did to centrifuges, compromised AI agents can do to your revenue pipeline—only faster, quieter and with full privileged access. The real kicker? AI agents don’t have to be smuggled in. They, in essence, walk through the front door, shake hands with your systems and get to work. They self-integrate and self-execute, all without waiting for signoff.

Unmanaged, it’s a recipe for disaster. Even Sam Altman, CEO of OpenAI, warned that excessive access is dangerous and that limitations are needed to mitigate privacy and security risks in the company’s new ChatGPT Agent capability. However, recent posts from security researchers claim that the new feature is already bypassing its own safety controls—and contradicting what OpenAI claims it can’t do—by calling “agents within agents within agents.”

Shadow Agents And Recursive Logic

AI agents don’t require formal deployment. They emerge via tool integrations, no-code workflows and user automations. Once live, they act with credentials, persist across environments and seldom get tracked.

They also don’t act alone. One agent might call an API that triggers another agent, which pulls from a tool never reviewed by security. The result is a chain of actions that no one fully controls, where one’s output becomes another’s input.

This is recursive trust, and it’s a growing blind spot.
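
To illustrate, here is a sketch (with entirely hypothetical agents and tool names) of how such a chain forms when every hop inherits one broad machine credential and no identity system observes the hand-offs:

```python
# Hypothetical sketch of recursive trust: each hop reuses one broad machine
# credential, and no identity system sees or scopes the full chain.
SERVICE_TOKEN = "svc-broad-scope-token"  # a single shared, over-privileged credential


def forecasting_agent(account_id: str) -> str:
    # Agent 1: asked for a forecast, it calls a tool that is itself another agent.
    records = crm_enrichment_agent(account_id)
    return f"Forecast for {account_id} based on {len(records)} records"


def crm_enrichment_agent(account_id: str) -> list[dict]:
    # Agent 2: quietly pulls from a data source security never reviewed,
    # authenticating with the credential it inherited from Agent 1.
    return unreviewed_data_export(account_id, token=SERVICE_TOKEN)


def unreviewed_data_export(account_id: str, token: str) -> list[dict]:
    # Third hop: one actor's output becomes another's input, with no per-agent
    # identity, scoping or audit trail anywhere along the way.
    return [{"account_id": account_id, "contains_pii": True}]


print(forecasting_agent("ACME-42"))
```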

Machine identities already outnumber humans 82 to 1. Forty-two percent of those identities have access to sensitive data, yet only 12% are scoped as privileged. That’s like giving out badges and forgetting they were all admin level. You didn’t reduce risk. You just lost track of it.

Add on top of this every new AI agent, plug-in and unsanctioned automation, and the bill racks up. One day soon, it will come due.

Why Protecting Agentic AI Is An Identity Security Challenge

Securing this new attack surface doesn’t mean rejecting AI agents. It means treating them for what they are: digital actors with real power and speed, and governing them accordingly.

1. Track what’s live. Know where agents exist, what they touch and who’s responsible for them. If it connects or acts, it should be part of your identity security program.

2. Limit access. MCP is quickly becoming the preferred integration method. That makes it a critical new entry point, not just part of a workflow. Guard these connections accordingly.

3. Establish behavioral controls. AI agents move fast, and your security must move with them. Set dynamic boundaries focused on behavior, risk level and business roles, avoiding static rules that can go stale. (A minimal sketch of what such boundaries could look like follows below.)
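
As a thought experiment, here is one way identity-scoped, behavior-aware boundaries for a single agent could be expressed in code; the field names, roles and thresholds are purely illustrative, not any particular product’s schema.

```python
# Hypothetical sketch of identity-scoped, behavior-aware boundaries for an agent.
# Field names, roles and thresholds are illustrative only.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone


@dataclass
class AgentPolicy:
    agent_id: str
    business_role: str
    allowed_tools: set[str]
    max_records_per_hour: int
    credential_expiry: datetime
    actions_this_hour: int = 0

    def permits(self, tool: str, record_count: int) -> bool:
        # Deny on expired credentials, unapproved tools, or anomalous volume.
        if datetime.now(timezone.utc) > self.credential_expiry:
            return False
        if tool not in self.allowed_tools:
            return False
        return self.actions_this_hour + record_count <= self.max_records_per_hour


policy = AgentPolicy(
    agent_id="forecast-agent-01",
    business_role="sales-analytics",
    allowed_tools={"crm.read", "report.generate"},
    max_records_per_hour=500,
    credential_expiry=datetime.now(timezone.utc) + timedelta(hours=1),
)

print(policy.permits("crm.read", record_count=200))   # True: in scope for its role
print(policy.permits("finance.transfer", 1))          # False: outside its role
```

The point is not the specific thresholds but that the boundary is attached to the agent’s identity and role, and can be tightened or revoked as its observed behavior changes.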

Agentic AI: The Third Party You Haven’t Vetted Yet

AI agents are already being embedded in enterprise workflows, triggering actions via credentials you may not have authorized, through integrations you never reviewed. Securing them extends beyond basic AI governance.

It’s about having explicit knowledge of who—or what—is acting inside your environment.

That makes agentic AI a third-party risk problem and an enormous insider threat. And identity-first organizations will be the ones best equipped to thrive.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.
