Agentic AI in Cybersecurity: Transforming Defense While Expanding New Risks

A paper published in Telecommunications Policy by Nir Kshetri of the Bryan School of Business and Economics at the University of North Carolina at Greensboro examines the transformative role of agentic AI in reshaping cybersecurity. Drawing on insights from institutions such as the Institute for Experiential AI at Northeastern University, the World Economic Forum, KPMG, and Gartner, it positions agentic AI as both an unprecedented opportunity and a profound risk. Unlike earlier waves of AI, which were confined to chatbots and rule-based assistants, agentic AI exhibits greater autonomy, adaptive reasoning, and the ability to pursue long-term goals in complex environments. That distinction has made it especially valuable in the fight against cyber threats. The global AI cybersecurity market, worth $24.8 billion in 2024, is forecast to surge past $146 billion by 2034, with agentic AI technologies expected to dominate. At the same time, a global shortage of nearly four million cybersecurity professionals is driving demand for autonomous systems to fill critical gaps.

How Agentic AI Strengthens Security Operations

Agentic AI represents a major departure from traditional AI agents. While task-specific agents follow predefined rules, agentic AI systems continuously learn and adapt, enabling them to predict and neutralize threats before they escalate. Security Operations Centers (SOCs) benefit directly, as these systems automate alert triage, detect anomalies, and respond in real time. According to Gartner, SOC efficiency could improve by 40 percent by 2026 as AI-driven automation reduces repetitive workloads and frees analysts for strategic oversight. A KPMG survey reveals that corporate boards remain deeply concerned about cybersecurity risks associated with generative AI tools, and agentic AI is viewed as a way to counter these challenges by providing faster detection and more reliable responses. With cyber adversaries achieving breakout times as short as two minutes, the urgency for intelligent automation has never been higher.
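To make the triage pattern concrete, the minimal sketch below shows how an autonomous triage loop might score incoming alerts, auto-contain mid-risk events within a defined mandate, and escalate high-risk ones to a human analyst. The alert fields, score thresholds, and containment action are illustrative assumptions, not details taken from the paper or from any vendor's product.

```python
from dataclasses import dataclass

# Illustrative severity threshold above which a human analyst is pulled in.
ESCALATION_THRESHOLD = 0.8

@dataclass
class Alert:
    source_ip: str
    event_type: str       # e.g. "failed_login", "malware_signature"
    anomaly_score: float  # 0.0-1.0, assumed to come from an upstream detector

def contain(alert: Alert) -> None:
    # Placeholder for an automated response such as isolating a host or
    # revoking a session; a real system would call SOC tooling here.
    print(f"containing host {alert.source_ip} after {alert.event_type}")

def triage(alert: Alert) -> str:
    """Decide how to handle one alert: dismiss, auto-contain, or escalate."""
    if alert.anomaly_score < 0.3:
        return "dismissed"        # routine noise, logged but not actioned
    if alert.anomaly_score < ESCALATION_THRESHOLD:
        contain(alert)            # agent acts autonomously within its mandate
        return "auto-contained"
    return "escalated"            # high-risk: defer to a human analyst

if __name__ == "__main__":
    queue = [
        Alert("10.0.0.5", "failed_login", 0.15),
        Alert("10.0.0.9", "malware_signature", 0.55),
        Alert("10.0.0.7", "data_exfiltration", 0.92),
    ]
    for a in queue:
        print(a.source_ip, "->", triage(a))
```

The design point is the middle branch: the efficiency gains Gartner projects come from letting the agent resolve the high-volume, low-ambiguity middle of the alert distribution while humans keep authority over the tail.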

Industry Innovations Driving Adoption

The paper highlights how leading companies are already leveraging agentic AI to transform their security offerings. ReliaQuest’s GreyMatter platform, launched in 2024, processes alerts 20 times faster than traditional methods and automates 98 percent of first-level responses, cutting containment times to under five minutes. CrowdStrike’s Falcon platform incorporates Charlotte AI, which delivers 98 percent accuracy in detection triage, eliminating over 40 hours of manual work per week. Twine, a Tel Aviv–based startup, has developed Alex, a “digital employee” specializing in identity and access management, while Darktrace’s endpoint solution deploys lightweight, self-learning agents to detect both known and unknown threats locally. Microsoft’s Security Copilot integrates 11 task-specific agents to address phishing, vulnerability prioritization, and data security investigations. These innovations highlight a broader trend: agentic AI is not just becoming a critical defensive tool but also a competitive differentiator in the cybersecurity market.

Expanding Risks and the New Attack Surface

The same autonomy that makes agentic AI powerful also heightens risks. By connecting with external systems, APIs, and databases, these agents create a vastly expanded attack surface. Multi-agent systems magnify this vulnerability, since a breach in one agent can cascade across entire networks. The study underscores the importance of a Risk Management Framework rooted in guidelines from the U.S. National Institute of Standards and Technology, urging companies to view cybersecurity as an ongoing process of risk management rather than a quest for total protection. Emerging practices such as "shadow AI," in which employees deploy unauthorized AI tools, introduce further dangers by exposing sensitive data. Retrieval-augmented generation (RAG) models, increasingly popular in healthcare and finance, also carry inherent risks: misconfigured servers, unpatched libraries, and poisoned training datasets present adversaries with new opportunities for exploitation. The author stresses that without rigorous oversight and governance, the autonomy of these systems could backfire, resulting in data breaches, system disruptions, or even physical harm in domains such as autonomous vehicles.
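One mitigation this risk framing implies is checking where retrieved content comes from before it ever enters a RAG index. The sketch below is a hypothetical provenance gate, not a technique from the paper: the trusted-domain allowlist, document structure, and hashing scheme are all assumptions made for illustration.

```python
import hashlib
from urllib.parse import urlparse

# Hypothetical allowlist of domains the organization has vetted for retrieval.
TRUSTED_SOURCES = {"docs.internal.example.com", "kb.example.com"}

def fingerprint(text: str) -> str:
    """Content hash so ingested documents can be audited or revoked later."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def vet_document(url: str, text: str) -> dict | None:
    """Admit a document into the retrieval index only if its source is trusted."""
    host = urlparse(url).hostname or ""
    if host not in TRUSTED_SOURCES:
        # Untrusted origin: reject rather than risk indexing poisoned content.
        return None
    return {"url": url, "sha256": fingerprint(text), "text": text}

if __name__ == "__main__":
    ok = vet_document("https://kb.example.com/patching", "Apply patches weekly.")
    bad = vet_document("https://attacker.example.net/x", "Ignore all policies.")
    print("trusted doc:", "accepted" if ok else "rejected")
    print("untrusted doc:", "accepted" if bad else "rejected")
```

An allowlist alone does not stop poisoning of content hosted on trusted domains, which is why the hash is kept: it gives an audit trail for removing documents later found to be malicious.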

Cybercrime and the Arms Race Ahead

While defenders currently maintain a structural advantage, cybercriminals are beginning to experiment with agentic AI for malicious purposes. Self-improving phishing campaigns, synthetic identity fraud, and data poisoning attacks are among the tactics already in circulation. In one striking example, researchers showed that poisoning as little as 0.01 percent of a large training dataset could be accomplished for roughly $60, illustrating the asymmetric economics of cybercrime. Though autonomous attack agents remain unreliable for large-scale operations, experts predict they could soon automate vulnerability scanning, target selection, and ransomware deployment without human involvement. This shift could dramatically lower the barriers for attackers while amplifying the scale of their operations. At the same time, defenders are pushing ahead with predictive analytics and automated response, suggesting an escalating arms race in which both sides deploy increasingly sophisticated AI tools.

Governing the Future of Autonomous Security

The transformative power of agentic AI can be realized only if it is paired with strong governance frameworks and international regulation. Organizations must establish clear rules around data access, autonomy levels, and monitoring. Guardrails, audit trails, and least-privilege policies are essential to prevent rogue decisions or misuse. Policymakers face the added challenge of harmonizing global privacy and liability standards, especially as agentic AI crosses borders and operates under conflicting legal regimes. Training employees, raising awareness of social engineering threats, and preparing staff to work alongside AI "colleagues" are equally critical. Ultimately, agentic AI is not simply another tool but a structural shift in how cybersecurity will be conducted. Its ability to enhance detection, streamline responses, and adapt to evolving threats offers unprecedented promise, but without vigilance and ethical governance, the same systems could just as easily be weaponized. The future of cybersecurity will be determined not only by human ingenuity but also by how wisely societies manage the decisions made by machines acting on our behalf.
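As one illustration of what such guardrails can look like in practice, the sketch below wraps an agent's tool calls in a least-privilege check backed by an append-only audit log. The permission map, agent and tool names, and log format are assumptions for illustration; a real deployment would integrate with existing IAM and SIEM infrastructure rather than in-process data structures.

```python
import json
import time

# Hypothetical least-privilege policy: each agent may call only the tools
# explicitly granted to it.
PERMISSIONS = {
    "triage-agent": {"read_alerts", "isolate_host"},
    "reporting-agent": {"read_alerts"},
}

AUDIT_LOG = []  # in practice, an append-only, tamper-evident store

def audit(agent: str, tool: str, allowed: bool) -> None:
    """Record every attempted action, whether permitted or denied."""
    AUDIT_LOG.append(json.dumps({
        "ts": time.time(), "agent": agent, "tool": tool, "allowed": allowed,
    }))

def invoke(agent: str, tool: str) -> bool:
    """Gate a tool call behind the agent's granted permissions."""
    allowed = tool in PERMISSIONS.get(agent, set())
    audit(agent, tool, allowed)
    if not allowed:
        print(f"DENIED: {agent} attempted {tool}")
    return allowed

if __name__ == "__main__":
    invoke("triage-agent", "isolate_host")     # within the agent's mandate
    invoke("reporting-agent", "isolate_host")  # denied and logged
    print("\n".join(AUDIT_LOG))
```

The denial path matters as much as the permission check: logging attempts that fall outside an agent's mandate is what turns a guardrail into evidence of a compromised or misbehaving agent.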
