
Ambuj Kumar is co-founder and CEO of Simbian, a mission-driven company solving security with AI.
Fifty-one seconds. That’s all it took, according to CrowdStrike research, for the fastest cybercriminal in 2024 to go from first click to deep inside their target’s network. No malware, no flashing red alerts—just silent, surgical moves.
Meanwhile, security operations center (SOC) teams are drowning. Google Cloud research found that over 60% of security leaders say the alert volume is overwhelming. Sixty-four percent are buried in false positives, according to the SANS Institute. And the talent gap? Still 4.8 million unfilled seats (registration required).
The cost of failure has barely moved: The average breach costs $4.4 million.
The way forward isn’t more headcount or yet another dashboard. It’s a deliberate journey from manual operations to safe, smart automation. One way to frame this is the L0-L4 model for security automation, a concept I’ve borrowed from how the automotive industry (registration required) assesses the different levels of self-driving cars. This blueprint lets companies sequence capabilities, guardrails and governance so automation can accelerate without losing control.
The L0-L4 Security Automation Model
The truth is, human analysts were never meant to fight alone at this scale or velocity. They excel at judgment, strategy and connecting the dots. Machines excel at pattern recognition, repetition and acting in milliseconds. Automation is simply the division of labor that the modern SOC demands.
With the L0-L4 model, each of the five levels defines scope, guardrails and governance. Progression is measured by what the system is allowed to do on its own, how it proves safety and how quickly it hands control back to humans when uncertainty or risk rises. Here are the different levels:
Level 0: Manual Operations
This is where human analysts handle detection, investigation and response from start to finish, with automation limited to dashboards, queries and one-off scripts.
At this stage, teams face long alert queues, inconsistent playbook use and uneven evidence capture. The human-in-the-loop model often becomes a bottleneck at scale, with alert fatigue and false positives piling up.
To move forward, organizations need documented playbooks, a standardized incident taxonomy and clearly defined change-control procedures and access boundaries. These foundations are critical before advancing toward higher automation.
Level 1: Assisted Response
At this stage, AI agents gather context, propose actions and generate artifacts such as tickets, queries and containment plans. Execution still requires human approval.
Core capabilities include automated enrichment, root-cause hypotheses and recommended actions with confidence scores. A dry-run mode shows the intended commands and their blast radius before any action is taken.
Guardrails keep agents read-only by default, with immutable evidence logs to preserve trust. Example outputs range from auto-built case files with timelines and indicators to drafted firewall rules or identity revocation plans.
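The dry-run idea above can be sketched in a few lines. This is an illustrative shape, not a product API: the `ProposedAction` class, its field names and the example commands are all hypothetical, and the point is simply that a Level 1 agent renders intended commands and blast radius while execution waits for explicit human approval.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    """A Level 1 agent output: a recommendation, never an execution."""
    description: str
    commands: list[str]          # exact commands a human would run
    blast_radius: list[str]      # assets the action would touch
    confidence: float            # agent confidence score, 0.0-1.0
    approved: bool = False       # flips only on explicit human sign-off

    def dry_run(self) -> str:
        """Render intended commands and scope without executing anything."""
        lines = [f"PROPOSED: {self.description} (confidence {self.confidence:.0%})"]
        lines += [f"  would run: {c}" for c in self.commands]
        lines += [f"  would affect: {a}" for a in self.blast_radius]
        return "\n".join(lines)

# Hypothetical example: a drafted domain block awaiting approval.
action = ProposedAction(
    description="Block phishing domain at the proxy",
    commands=["proxy-cli block evil.example.com --ttl 24h"],
    blast_radius=["egress proxy policy (all users)"],
    confidence=0.87,
)
print(action.dry_run())
assert not action.approved  # read-only by default: nothing runs without sign-off
```

Keeping the proposal as inert data, rather than a callable, is what makes the read-only-by-default guardrail enforceable.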
To progress beyond this stage, organizations must achieve low dispute rates on agent recommendations, meet approval SLAs and ensure analysts consistently trust the agent’s justifications.
Level 2: Pre-Approved, Low-Risk Automation
Here, AI agents execute narrow, reversible actions within strict policy-as-code constraints. No human approval is required, as long as the action falls within predefined risk boundaries.
Guardrails are critical. Agents operate within allowlists and denylists, follow rate limits and are scoped to specific environments such as a segment, tenant or project. A manual kill switch remains in place for immediate overrides.
Typical actions include quarantining a single endpoint, disabling a suspected access token, blocking a domain temporarily pending review or rotating a compromised API key.
Before moving forward, organizations must maintain a verified fix rate that meets targets, achieve containment time service level objectives (SLOs) for 99% of incidents and demonstrate zero material incidents from autonomous actions over a defined period.
Level 3: Conditional Automation
The system runs complete playbooks when risk is clearly bounded, escalating to humans only when uncertainty or potential impact grows.
Core capabilities include dynamic risk scoring with thresholds, staged enforcement, cross-tool coordination (EDR, IdP, cloud, email) and clear “why this, why now, what if not” explanations.
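Dynamic risk scoring with a threshold can be sketched as below. The weights, inputs and cutoff are hypothetical placeholders; a real system would calibrate them against incident history. The shape of the decision is the point: low composite risk lets the playbook run, anything above the threshold escalates.

```python
def risk_score(asset_criticality: float, action_impact: float, model_uncertainty: float) -> float:
    """Composite risk in [0, 1]; weights here are purely illustrative."""
    return min(1.0, 0.4 * asset_criticality + 0.4 * action_impact + 0.2 * model_uncertainty)

AUTONOMY_THRESHOLD = 0.35  # illustrative cutoff, tuned per environment

def decide(score: float) -> str:
    """Below threshold: run the full playbook. At or above: hand back to a human."""
    return "run_playbook" if score < AUTONOMY_THRESHOLD else "escalate_to_human"

# A low-stakes action on a low-value asset stays autonomous;
# a high-impact action under model uncertainty escalates.
print(decide(risk_score(0.2, 0.3, 0.1)))   # low composite risk
print(decide(risk_score(0.9, 0.8, 0.5)))   # high composite risk
```

Note that uncertainty is a first-class input: the same action on the same asset escalates when the model is less sure, which is exactly the handback behavior Level 3 requires.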
Guardrails require dual control for identity or network-wide changes, strict separation of duties and continuous policy testing in a simulation sandbox.
Typical actions include disabling phished accounts across identity and SaaS, ring-fencing Kubernetes namespaces, rolling back malicious IaC changes or blocking malicious attachments fleet-wide.
Advancement is measured by a rising risk-adjusted automation rate and a declining human-handback rate, without losing precision.
Level 4: Mission Automation
At the highest level, goal-driven agents operate 24/7 with mission objectives such as “minimize lateral movement” across identity, endpoint, network, cloud and SaaS, extending to edge locations with intermittent connectivity.
They plan over long horizons with memory; act across multiple domains; enforce policies on-device where latency, power or sovereignty demand it; and adapt through closed-loop learning with drift detection.
Guardrails include policy-as-code as the governing framework, continuous assurance via pre-deployment tests and invariants and independent audit pipelines producing immutable, signed logs mapped to SOC 2/ISO/NIST controls.
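The immutable-log requirement can be illustrated with a hash chain, sketched below under simplifying assumptions: each entry commits to its predecessor's hash, so any after-the-fact edit breaks verification. A production pipeline would additionally sign entries and ship them to independent storage; this toy class shows only the tamper-evidence mechanism.

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only, hash-chained log: editing any past entry breaks the chain."""

    def __init__(self):
        self.entries = []
        self._prev = "0" * 64  # genesis hash

    def append(self, action: str, detail: dict) -> dict:
        entry = {"ts": time.time(), "action": action, "detail": detail, "prev": self._prev}
        # Hash the canonical serialization of the entry body.
        entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self._prev = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute every hash and link; any tampering returns False."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev"] != prev:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

Mapping each logged action to a SOC 2/ISO/NIST control identifier in the `detail` field is one way such a log can feed audit evidence directly.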
Success is measured by board-level metrics and external audit sign-off.
Safety And Governance By Design
Security automation must be deterministically safe atop probabilistic models. At every level:
• Policy-As-Code: This defines what’s allowed; the model proposes how.
• Reversibility First: Every write action pairs with automated rollback and time-boxed enforcement windows.
• Explainability That IR Actually Uses: Action graphs and evidence chains—not heatmaps—answer: why this action, why now, based on what and what was the outcome?
• Explicit Handback Moments: Thresholds (confidence, risk, impact, uncertainty) force escalation to humans within seconds, not minutes, when needed.
• Auditability: Immutable, signed logs and evidence packs per action satisfy auditors and accelerate investigations.
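The reversibility-first principle above can be sketched as a wrapper that refuses to represent a write action without its rollback. The class and the domain-blocking example are hypothetical; the invariant is the point: apply and rollback travel together, inside a time-boxed enforcement window.

```python
import time
from dataclasses import dataclass
from typing import Callable

@dataclass
class ReversibleAction:
    """Every write action ships with its rollback and an enforcement window."""
    apply: Callable[[], None]
    rollback: Callable[[], None]
    ttl_seconds: int  # time-boxed enforcement window

    def execute(self) -> float:
        self.apply()
        # Return the expiry time; a scheduler would call self.rollback at expiry.
        return time.time() + self.ttl_seconds

# Toy state standing in for a real proxy policy.
blocked: set[str] = set()

action = ReversibleAction(
    apply=lambda: blocked.add("evil.example.com"),
    rollback=lambda: blocked.discard("evil.example.com"),
    ttl_seconds=24 * 3600,
)
action.execute()
assert "evil.example.com" in blocked      # enforced
action.rollback()
assert "evil.example.com" not in blocked  # cleanly reverted
```

Because the rollback is constructed before the action runs, an incident responder (or an automated handback) can always revert with one call, without reconstructing what the agent did.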
A Final Word To Boards And CISOs
Automation isn’t a finish line; it’s a discipline. When done right, it elevates humans to the high ground: strategy, simulation and sharp-eyed supervision. The agents? They handle the 3 a.m. chaos, executing reversible, policy-locked moves without breaking stride.
Once you’ve established a baseline, you start at L1, earn your stripes at L2, seal the loop at L3 and only go after L4 when the payoff demands the firepower. Automation isn’t about replacing people. It’s about designing a SOC that thinks faster than the threat, scales without burnout and treats time as the most valuable control plane you have.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.