Swapnil Chawande is a Cybersecurity Leader at PG&E.
AI systems now analyze millions of security events daily—far beyond human capacity. Yet as we embrace this frontier, we must ask: At what ethical cost?
After more than a decade working at the intersection of AI and defense, I’ve seen firsthand how autonomous security is reshaping the protection of critical infrastructure. These tools offer remarkable efficiency, but they raise a vital question: How do we balance AI’s speed and scale with the need for human judgment?
This is a question I’ve devoted much of my work to answering. Whether designing threat detection models or rolling out new AI-driven security processes, my focus has always been on responsible innovation. I don’t see the future as a contest between machines and people, but as a partnership in which each contributes its unique strengths to defenses that are smarter and grounded in the right ethics.
The Rise Of Autonomous Security Systems
Let’s start with what we mean by autonomous security: AI-powered technologies that identify, evaluate and respond to cyber threats without human intervention, like an ultra-sophisticated immune system defending an organization’s digital assets. What once took days or weeks now happens instantly.
The main benefits stand out:
• Speed And Scalability: AI can process an eye-watering number of events in mere seconds. This sheer scale means that nuanced attack patterns—ones a human might completely miss—are flagged almost instantly.
• Proactive Defense: We’re not just building digital walls anymore. Today’s AI learns and adapts, spotting risks on the horizon and addressing them before they cause real harm.
• Operational Efficiency: By automating the routine, these systems let human analysts play to their strengths by digging into complex investigations and focusing on big-picture defense strategy rather than drowning in endless alerts.
From my perspective, implementing these systems has been a night-and-day transformation. Integrating machine learning into threat detection and response significantly strengthened defenses and cut response times—often by over 50%. In cybersecurity, that speed can mean the difference between containing a threat and facing a major breach.
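To make that concrete, here’s a minimal sketch of the kind of anomaly-based detection such a pipeline might build on. It uses scikit-learn’s IsolationForest over a few connection features; the feature names, numbers and thresholds are illustrative assumptions, not any production model.

```python
# Minimal sketch: unsupervised anomaly detection over connection logs.
# Feature names and data are illustrative assumptions, not a production model.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Illustrative features per connection: [bytes_sent, duration_sec, failed_logins]
baseline = rng.normal(loc=[5_000, 30, 0.2], scale=[1_500, 10, 0.5], size=(1_000, 3))

# Train on mostly benign history; the model learns what "normal" looks like.
detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(baseline)

# Score new events: a prediction of -1 flags an outlier worth an analyst's attention.
new_events = np.array([
    [5_200, 28, 0],    # looks routine
    [90_000, 2, 14],   # large transfer, short session, many failed logins
])
for event, label in zip(new_events, detector.predict(new_events)):
    verdict = "ANOMALY -> escalate" if label == -1 else "normal"
    print(event, verdict)
```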
Ethical Challenges In Autonomous Security
Of course, with this much power comes a heavier ethical responsibility. We simply can’t deploy autonomous security without examining the risks that come with it.
Bias In AI Models
To be honest, AI models learn from the data we feed them—biases included. If that data is flawed, the system can mirror or even amplify those biases. A threat detection tool might then flag legitimate activity as suspicious, unfairly targeting certain groups or regions, leading to reputational harm or even discrimination.
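One practical safeguard is to audit detectors for disparate outcomes before they reach production. The sketch below compares a detector’s false-positive rate across two traffic segments; the events, segment names and the disparity threshold are invented for illustration.

```python
# Sketch: auditing a detector for disparate false-positive rates across segments.
# All data, segment names and the fairness threshold are illustrative assumptions.
from collections import defaultdict

# (segment, model_flagged, actually_malicious) per event
events = [
    ("region_a", True,  False), ("region_a", False, False),
    ("region_a", False, False), ("region_a", True,  True),
    ("region_b", True,  False), ("region_b", True,  False),
    ("region_b", True,  False), ("region_b", False, False),
]

flagged_benign = defaultdict(int)
benign_total = defaultdict(int)
for segment, flagged, malicious in events:
    if not malicious:                  # only benign traffic counts toward the FPR
        benign_total[segment] += 1
        flagged_benign[segment] += int(flagged)

rates = {s: flagged_benign[s] / benign_total[s] for s in benign_total}
print(rates)  # e.g. {'region_a': 0.33..., 'region_b': 0.75}

# A large gap between segments is a signal to re-examine the training data.
if max(rates.values()) - min(rates.values()) > 0.05:
    print("Disparity detected: review training data and thresholds.")
```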
Accountability And Transparency
Another major challenge is the “black box” problem. When an AI system makes a decision—like shutting down a network segment or missing a critical threat—who’s accountable? The developer, the ops team or the business? Without transparency into how decisions are made, accountability becomes murky and creates a risk none of us can afford.
Overreliance On Automation
AI excels at analyzing data and spotting patterns, but it lacks context and ethical judgment. In major incidents, machine logic can’t replace human intuition or awareness of broader impacts like public safety. Relying too heavily on automation without human oversight risks critical errors.
Privacy Concerns
Autonomous systems rely on vast data such as user logs, network activity, communications and more. But where do we draw the line between security and privacy? There are no simple answers; it’s a constant balancing act that demands transparency and ongoing dialogue.
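One concrete pattern for that balancing act is pseudonymizing identifiers before analysis, so models learn from behavior rather than identities. Here’s a minimal sketch assuming a keyed-hash approach; the field names and salt handling are illustrative.

```python
# Sketch: pseudonymizing log records before they reach an analytics pipeline.
# Field names and salt management are illustrative assumptions; a real
# deployment would keep the salt in a secrets store and rotate it.
import hashlib
import hmac

SALT = b"rotate-me-from-a-secrets-store"  # placeholder, not a real secret

def pseudonymize(value: str) -> str:
    """Keyed hash: stable enough for correlation, not reversible without the salt."""
    return hmac.new(SALT, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

record = {"user": "jdoe@example.com", "src_ip": "10.1.2.3", "action": "login_failed"}
safe_record = {
    "user": pseudonymize(record["user"]),
    "src_ip": pseudonymize(record["src_ip"]),
    "action": record["action"],  # behavioral fields stay in the clear
}
print(safe_record)
```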
The Essential Role Of Human Oversight
So, what’s the answer? Ditching AI isn’t an option, but leaving it unsupervised shouldn’t be either. The solution lies in thoughtful human oversight, built right into the system.
The best security strategies I’ve seen are hybrid ones. AI handles the heavy lifting by sifting through endless routine alerts, but humans are there to step in for the tough decisions and nuanced judgment calls. That means we need to invest in our teams by upskilling cybersecurity pros so they understand how these AI systems work, where they excel and, crucially, where their limits lie.
Mentoring and team-building are huge parts of this. I firmly believe we should prepare professionals to work alongside these advanced systems, not compete with them. Let’s turn them into masters of the tool set, rather than mere operators.
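To make that hybrid division of labor concrete, here’s a minimal sketch of confidence-gated triage, assuming an upstream model emits a threat score between 0 and 1; the thresholds and asset tiers are invented for illustration.

```python
# Sketch: routing alerts between automation and human analysts.
# Thresholds, score semantics and asset tiers are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    threat_score: float   # model confidence that this is malicious, 0..1
    critical_asset: bool  # e.g. safety-relevant OT systems

def triage(alert: Alert) -> str:
    # Anything touching critical infrastructure always gets human eyes.
    if alert.critical_asset:
        return "HUMAN_REVIEW"
    if alert.threat_score >= 0.95:
        return "AUTO_CONTAIN"   # high confidence it's malicious: isolate automatically
    if alert.threat_score <= 0.10:
        return "AUTO_DISMISS"   # high confidence it's benign
    return "HUMAN_REVIEW"       # the ambiguous middle is where judgment lives

for a in [Alert("endpoint-17", 0.98, False),
          Alert("vpn-gw", 0.52, False),
          Alert("scada-hmi", 0.30, True)]:
    print(a.source, "->", triage(a))
```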
Regulatory And Policy Considerations
While frameworks like the NIST Cybersecurity Framework and regulations like GDPR are helpful, they weren’t built for the specific challenges that AI introduces. There’s a real need for industrywide standards that spell out how we address bias, ensure transparency and create accountability for autonomous systems.
I’ve long advocated for zero-trust architecture and policy automation. Zero-trust assumes no user or device is inherently trusted, which reduces risks when AI makes a wrong call. Consistent policy automation also helps unlock the “black box,” making AI-driven decisions more transparent and predictable.
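Here’s a minimal sketch of what that policy automation can look like, assuming a default-deny evaluator whose rules and request attributes are invented for illustration. Because every decision records which rule fired, the logic stays auditable rather than opaque.

```python
# Sketch: default-deny (zero-trust) policy evaluation with an audit trail.
# The rule set and request attributes are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Request:
    user_role: str
    device_compliant: bool
    resource: str

# Ordered allow rules; each carries an ID so every decision is explainable.
RULES = [
    ("R1-analyst-readonly", lambda r: r.user_role == "analyst"
                                      and r.device_compliant
                                      and r.resource.startswith("logs/")),
    ("R2-admin-patching",   lambda r: r.user_role == "admin"
                                      and r.device_compliant
                                      and r.resource.startswith("patch/")),
]

def evaluate(request: Request) -> tuple[str, str]:
    for rule_id, predicate in RULES:
        if predicate(request):
            return "ALLOW", rule_id
    return "DENY", "default-deny"  # zero trust: nothing is implicitly trusted

decision, why = evaluate(Request("analyst", True, "logs/firewall"))
print(decision, "because", why)    # ALLOW because R1-analyst-readonly
decision, why = evaluate(Request("analyst", False, "logs/firewall"))
print(decision, "because", why)    # DENY because default-deny
```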
At a broader level, applying AI in national security requires global cooperation. Without alignment, we risk a digital arms race in which even small mistakes could prove enormously costly.
Strategic Guidance For Cybersecurity Leaders
Looking forward, here are the three key principles I believe should shape our use of AI in security:
1. Transparency And Accountability: AI systems have to be explainable. Everyone from engineers to executives ought to understand the “why” behind important decisions. Trust in our tools starts with clear communication.
2. Human-AI Collaboration: Rather than framing this as humans versus machines, we should lean into partnership. AI should empower security teams, not replace them, so that together we’re better equipped to spot threats and make the right calls.
3. Continuous Growth And Adaptation: Cyber threats are always changing, and our strategies should too. That means ongoing training, regularly updating AI models and refining our hybrid workflows as new risks come into view.
As AI becomes ever more tightly linked with security, it is our responsibility as cybersecurity infrastructure architects, legislators and business leaders to keep ethics at the forefront. The systems we are building must be worthy of our trust.
In the end, cybersecurity’s future won’t be about humans versus AI, but rather about humans and AI working together to create resilient, ethical defenses for the world we all share.
