
AI-generated code surge may increase software flaw risk

The exponential growth of AI-generated code is poised to substantially increase developer toil, Harness warns, leaving organisations more vulnerable to software flaws escaping into production. The warning comes as a new report finds that nine out of ten developers now use AI-assisted coding tools to accelerate software delivery, sharply increasing the volume of code being shipped.

Martin Reynolds, Field CTO at Harness, articulated the urgency of the situation: “Generative AI has been a game-changer for developers, as eight-week projects can suddenly be completed in four. However, as the volume of code being shipped to the business increases, so does the blast radius if developers fail to rigorously test it for flaws and vulnerabilities. AI might not introduce new security gaps to the delivery pipeline, but it does mean there’s more code being funnelled through existing ones.”

Reynolds further explained the potential consequences by drawing a parallel to a recent significant security event: “When the Log4j vulnerability was discovered, developers spent months finding affected components to remediate the threat. In the world of generative AI, they’d have to find the same needle in a much larger haystack.”

The company emphasises that the only effective way to manage the explosion of AI-generated code is to employ AI itself to automate the analysis, testing, and remediation processes. Harness suggests a multi-faceted approach to reduce the risk and mitigate increased developer toil.

First, organisations should integrate security into every phase of the software delivery lifecycle (SDLC) by building secure and governed pipelines. This integration aims to automate every test, check, and verification required. A policy-as-code approach can prevent new code from entering production if it fails to meet strict requirements for availability, performance, and security.
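To make the policy-as-code idea concrete, here is a minimal illustrative sketch in Python (not Harness's actual policy engine; the policy fields and threshold values are assumptions): a gate function that refuses to promote a build to production unless it meets security, quality, and performance requirements.

```python
# Hypothetical policy-as-code gate: block promotion to production unless a
# build meets thresholds for security, test coverage, and performance.
# The policy keys and values below are illustrative assumptions.
POLICY = {
    "max_critical_vulns": 0,    # no critical vulnerabilities allowed
    "min_test_coverage": 0.80,  # at least 80% test coverage
    "max_p95_latency_ms": 300,  # performance budget for the p95 latency
}

def gate(build: dict) -> tuple[bool, list[str]]:
    """Return (allowed, violations) for a candidate build."""
    violations = []
    if build["critical_vulns"] > POLICY["max_critical_vulns"]:
        violations.append(f"{build['critical_vulns']} critical vulnerability(ies)")
    if build["test_coverage"] < POLICY["min_test_coverage"]:
        violations.append(f"coverage {build['test_coverage']:.0%} below threshold")
    if build["p95_latency_ms"] > POLICY["max_p95_latency_ms"]:
        violations.append(f"p95 latency {build['p95_latency_ms']}ms over budget")
    return (not violations, violations)

# A build with one critical vulnerability is rejected, with the reason recorded.
allowed, reasons = gate(
    {"critical_vulns": 1, "test_coverage": 0.91, "p95_latency_ms": 250}
)
```

Because the policy lives in version-controlled code rather than in a wiki, every check runs automatically on every build, and changing a threshold is itself a reviewable change.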

Second, rigorous code attestation practices are essential. Recent incidents like SolarWinds and MOVEit have underscored the importance of extending secure delivery practices beyond an organisation’s own walls. IT leaders are advised to automate processes needed to monitor and control open-source software components and third-party artefacts, such as generating a Software Bill of Materials (SBOM) and conducting Supply Chain Levels for Software Artifacts (SLSA) attestation.
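For illustration, the sketch below assembles a minimal SBOM document in the CycloneDX JSON style from a list of dependencies. It is a hypothetical toy, not a replacement for dedicated SBOM tooling, and the helper name and input shape are assumptions:

```python
import json

def make_sbom(components: list[dict]) -> dict:
    """Build a minimal CycloneDX-style SBOM document from name/version pairs."""
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "components": [
            {"type": "library", "name": c["name"], "version": c["version"]}
            for c in components
        ],
    }

sbom = make_sbom([
    {"name": "log4j-core", "version": "2.14.1"},
    {"name": "requests", "version": "2.31.0"},
])
print(json.dumps(sbom, indent=2))
```

With an SBOM like this recorded for every shipped artefact, finding each service that bundles a vulnerable component, the Log4j “needle” Reynolds describes, becomes a query over structured data rather than a months-long manual hunt.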

Finally, Harness recommends using generative AI to remediate security issues instantly. The same AI capabilities that help development teams create code faster can also triage and analyse vulnerabilities quickly, allowing teams to work through security backlogs and address critical risks promptly with far less toil.

Reynolds notes that without proper safeguards, AI’s benefits can be self-defeating: “The whole point of AI is to make things easier, but without the right quality assurance and security measures, developers could lose all the time they have saved. Enterprises must consider the developer experience in every measure or new technology they implement to accelerate innovation.”

He concludes by emphasising the benefits of robust guardrails: “By putting robust guardrails in place and using AI to enforce them, developers can more freely leverage automation to supercharge software delivery. At the same time, teams will spend less time on remediation and other workloads that increase toil. Ultimately, this reduces operational overheads while increasing security and compliance, creating a win-win scenario.”
