Automation convenience and security debt

Partner Content: As AI-assisted coding tools creep into every corner of software development, teams are starting to discover a less comfortable side effect of all that efficiency: security flaws generated at machine speed.

Used wisely, AI “vibe coding” (letting large language models write code) can genuinely accelerate development. But as developers outsource more of the cognitive heavy lifting to machines, a subtle risk emerges: AI automation complacency. It’s the same quiet lapse of vigilance that occurs when pilots rely too heavily on autopilot. Everything feels under control until it isn’t.

Intruder’s experience with vibe coding is a real-world case study of the security impact of AI. Here’s what we learned, and what it means for organizations using AI to code.

When the honeypot bit back

To deliver our Rapid Response service, we deploy honeypots that capture emerging exploits in the wild. In one case, we couldn’t find an open-source tool that met our needs, so we did what many teams now do – we vibe-coded one with AI. We deployed it as intentionally vulnerable infrastructure in an isolated environment, where compromise was expected — but we still gave the code a brief review for safety before launch.

After a few weeks of testing, the logs started to ring alarm bells. Some files that should have been saved under attacker IP addresses were instead being named after payloads. That suggested user input was reaching a place where we expected trusted data.

The vulnerability we didn’t expect

A closer look at the code revealed the problem: the AI had added logic to pull client-supplied IP headers and treat them as the visitor’s IP. Those headers are only safe when they come from a trusted proxy, otherwise they’re effectively under the client’s control. This means the site visitor can easily spoof their IP address or use the header to inject payloads – a vulnerability we often find in penetration tests.
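The honeypot’s code isn’t shown here, but in Go the two patterns look roughly like this (a minimal sketch, not the actual tool; the trusted proxy address is a placeholder): the naive version trusts X-Forwarded-For outright, while the safer version only honours it when the TCP peer is a known proxy and the value actually parses as an IP.

```go
package main

import (
	"fmt"
	"log"
	"net"
	"net/http"
	"strings"
)

// clientIPNaive shows the vulnerable shape: a client-supplied header is
// treated as the visitor's IP, so it can be spoofed or carry a payload.
func clientIPNaive(r *http.Request) string {
	if xff := r.Header.Get("X-Forwarded-For"); xff != "" {
		return xff // attacker-controlled
	}
	host, _, _ := net.SplitHostPort(r.RemoteAddr)
	return host
}

// clientIP only honours the header when the TCP peer is a trusted proxy,
// and checks that the value actually parses as an IP address.
func clientIP(r *http.Request, trustedProxies map[string]bool) string {
	host, _, err := net.SplitHostPort(r.RemoteAddr)
	if err != nil {
		host = r.RemoteAddr
	}
	if trustedProxies[host] {
		first := strings.TrimSpace(strings.Split(r.Header.Get("X-Forwarded-For"), ",")[0])
		if ip := net.ParseIP(first); ip != nil {
			return ip.String()
		}
	}
	return host // the socket address, which the client cannot forge
}

func main() {
	trusted := map[string]bool{"10.0.0.5": true} // hypothetical proxy address
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "naive=%q safe=%q\n", clientIPNaive(r), clientIP(r, trusted))
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```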

In this case the attacker’s payload was being inserted into that header, which explains the unusual directory name. There wasn’t any major impact and we saw no sign of a full exploit chain, but the attacker did gain some control over the program’s execution and it wasn’t far from being much worse. If the IP value had been used differently, it could easily have led to local file disclosure or server-side request forgery.
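The exact file-handling code isn’t public either, but the risk is easy to illustrate: once an unvalidated header value is treated as an IP and dropped into a filesystem path, a crafted value can walk out of the capture directory. A hedged sketch (the paths and function name are illustrative, not Intruder’s):

```go
package main

import (
	"fmt"
	"net"
	"path/filepath"
)

// saveDirFor builds a per-visitor capture directory. The point is that an
// unvalidated "IP" taken from a header must never reach a filesystem path
// unchecked.
func saveDirFor(baseDir, visitorIP string) (string, error) {
	// Refuse anything that is not a literal IP address. A spoofed header
	// value like "../../var/log" or "$(id)" fails this check.
	if net.ParseIP(visitorIP) == nil {
		return "", fmt.Errorf("refusing non-IP value %q", visitorIP)
	}
	return filepath.Join(baseDir, visitorIP), nil
}

func main() {
	for _, v := range []string{"203.0.113.7", "../../var/log", "$(id)"} {
		dir, err := saveDirFor("/var/honeypot/captures", v)
		fmt.Println(dir, err)
	}
}
```

The same principle applies if the value is later used to build a URL or an outbound request, which is where the server-side request forgery risk comes from.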

It wasn’t a serious breach, but it showed how easily AI can slip basic mistakes into code that looks perfectly fine at first glance.

SAST didn’t catch it

Could a static application security testing (SAST) tool have helped? We ran the code through Semgrep OSS and Gosec but neither flagged the vulnerability.

That’s not a failure of those tools. It’s a limitation of static analysis. Detecting this particular flaw requires contextual understanding: that the client-supplied IP headers were being used without validation, and that no trust boundary was enforced. It’s the kind of nuance that’s obvious to a human pentester, but easily missed when reviewers put too much trust in the machine.
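One way to cover the gap is to encode the trust boundary as a behavioural test rather than hoping a scanner infers it. A minimal sketch, assuming the hypothetical clientIP helper from the earlier example lives in the same package:

```go
package main

import (
	"net/http/httptest"
	"testing"
)

// A behavioural test encodes the trust boundary that static analysis can't
// infer: a spoofed X-Forwarded-For from an untrusted peer must never become
// the recorded visitor IP.
func TestSpoofedForwardedForIsIgnored(t *testing.T) {
	req := httptest.NewRequest("GET", "/", nil)
	req.RemoteAddr = "198.51.100.20:54321"                // untrusted peer
	req.Header.Set("X-Forwarded-For", "../../etc/passwd") // attacker payload

	got := clientIP(req, map[string]bool{"10.0.0.5": true})
	if got != "198.51.100.20" {
		t.Fatalf("expected the socket address, got %q", got)
	}
}
```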

Not a one-off fluke

If this sounds like a one-off, think again. For instance, we’ve seen AI reasoning models repeatedly produce AWS IAM roles vulnerable to privilege escalation even when prompted for secure configurations. In one case, the model acknowledged its mistake (“You’re absolutely right…”) before producing yet another insecure policy. It took four iterations and manual coaching to reach a safe result.
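The model’s actual output isn’t published, so the following is only an illustration of the shape of the problem: broad IAM actions granted on every resource, versus a policy scoped to the single action and ARN the workload needs (the bucket name is a placeholder).

```go
package main

import "fmt"

// Illustrative only: broad IAM actions on Resource "*" let the principal
// escalate its own privileges, for example by creating a new version of a
// policy attached to it or by attaching a managed admin policy to its role.
const insecurePolicy = `{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["iam:CreatePolicyVersion", "iam:AttachRolePolicy", "iam:PassRole"],
    "Resource": "*"
  }]
}`

// A tighter policy grants only the action the workload needs against a
// specific resource ARN, which removes the escalation path.
const scopedPolicy = `{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:GetObject"],
    "Resource": "arn:aws:s3:::example-bucket/*"
  }]
}`

func main() {
	fmt.Println(insecurePolicy)
	fmt.Println(scopedPolicy)
}
```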

The new shape of human error

It’s easy to worry about AI putting code generation in the hands of people with no development or security background, and we should. A recent study of AI-based low-code platforms found thousands of exploitable vulnerabilities in automatically generated applications. But what’s even more concerning is that experienced engineers and security teams are making the same mistakes, thanks to a new kind of complacency that comes with automation.

That means the burden falls on the organization producing the product to ensure its security. If your developers are using AI to help write code, it’s time to check your code-review policies and your CI/CD and SAST controls to make sure you’re not caught out by this new class of risk.

It’s only a matter of time before AI-generated vulnerabilities become widespread. Few organizations will ever admit that a weakness came from their use of AI, but we suspect it’s already happening. This won’t be the last you hear of it — of that much we’re sure.

Contributed by Intruder.
