
Can Generative AI Be Trusted to Fix Your Code?

Not yet — but it can help make incremental progress in reducing vulnerability backlogs.

DARKReading

Organizations worldwide are in a race to adopt AI technologies into their cybersecurity programs and tools. A majority of developers (65%) are using or plan to use AI in testing efforts over the next three years. Many security applications will benefit from generative AI, but is fixing code one of them?

For many DevSecOps teams, generative AI represents the holy grail for clearing their growing vulnerability backlogs. Well over half (66%) of organizations say their backlogs consist of more than 100,000 vulnerabilities, and over two-thirds of findings reported by static application security testing (SAST) remain open three months after detection, with 50% still open after 363 days. The dream is that a developer could simply ask ChatGPT to “fix this vulnerability,” and the hours and days previously spent remediating vulnerabilities would be a thing of the past.

It’s not an entirely crazy idea, in theory. After all, machine learning has been used effectively in cybersecurity tools for years to automate processes and save time, and AI is hugely beneficial when applied to simple, repetitive tasks. In practice, though, applying generative AI to complex codebases has flaws. Without human oversight and explicit direction, DevSecOps teams could end up creating more problems than they solve.

Generative AI Advantages and Limitations Related to Fixing Code

AI can be an incredibly powerful tool for simple, low-risk cybersecurity analysis, monitoring, and even remediation. The concern arises when the stakes become consequential. This is ultimately an issue of trust.

Researchers and developers are still determining how capable the new generative AI technology is of producing complex code fixes. Generative AI relies on existing, available information to make decisions. This is helpful for tasks like translating code from one language to another or fixing well-known flaws. For example, if you ask ChatGPT to “write this JavaScript code in Python,” you are likely to get a good result. Using it to fix a cloud security misconfiguration is similarly useful, because the relevant documentation is publicly available and easy to find, and the steps are simple to follow.
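To make the “well-known flaw” case concrete, here is a minimal, illustrative sketch (the get_user functions and the sqlite3 usage are hypothetical examples, not drawn from any specific incident) of the sort of fix a generative model tends to produce reliably, because both the vulnerable pattern and its remedy are documented everywhere:

```python
import sqlite3

# Vulnerable pattern: user input concatenated directly into the query string (SQL injection).
def get_user_vulnerable(conn: sqlite3.Connection, username: str):
    query = "SELECT id, email FROM users WHERE username = '" + username + "'"
    return conn.execute(query).fetchone()

# The kind of fix a generative model produces reliably, because the pattern is
# thoroughly documented: a parameterized query that keeps data out of the SQL text.
def get_user_fixed(conn: sqlite3.Connection, username: str):
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchone()
```

The fix is essentially pattern substitution, which plays to the strengths of a model trained on vast amounts of existing code.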

However, fixing most code vulnerabilities requires acting on a unique set of circumstances and details, introducing a more complex scenario for the AI to navigate. The AI might provide a “fix,” but without verification, it should not be trusted. Generative AI, by definition, can’t create something that is not already known, and it can experience hallucinations that result in fake outputs.
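One practical way to supply that verification, sketched below under the assumption that the project has a test suite: pin the intended behavior with tests before accepting any AI-suggested patch. The sanitize_filename function and its rules are hypothetical, chosen only to illustrate the idea.

```python
# A minimal sketch of "verify before trusting": pin the intended behavior with tests
# so an AI-suggested patch is checked against them before it is merged.
from pathlib import PurePosixPath

def sanitize_filename(user_supplied: str) -> str:
    # Candidate fix under review (possibly AI-generated): strip any directory
    # components so the value cannot escape the intended upload directory.
    return PurePosixPath(user_supplied).name

def test_blocks_path_traversal():
    assert sanitize_filename("../../etc/passwd") == "passwd"

def test_preserves_ordinary_names():
    # A patch that simply rejected every input would also "close" the finding;
    # this check catches that kind of over-correction.
    assert sanitize_filename("report-2023.pdf") == "report-2023.pdf"
```

A patch that merely silences the scanner but breaks legitimate inputs fails the second test, which is exactly the kind of over-correction an unverified “fix” can introduce.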

In a recent example, a lawyer is facing serious consequences after using ChatGPT to help write court filings that cited six nonexistent cases the AI tool invented. If AI were to hallucinate methods that do not exist and then use those methods in code, the result would be wasted time on a “fix” that won’t compile. Additionally, according to OpenAI’s GPT-4 whitepaper, new exploits, jailbreaks, and emergent behaviors will be discovered over time and will be difficult to prevent. So AI security tools and third-party solutions must be carefully vetted and regularly updated to ensure they do not become unintended backdoors into the system.

To Trust or Not to Trust?

It’s an interesting dynamic to see the rapid adoption of generative AI play out at the height of the zero-trust movement. The majority of cybersecurity tools are built on the idea that organizations should never trust, always verify. Generative AI, by contrast, is built on inherent trust in the information made available to it by known and unknown sources. This clash in principles is a fitting metaphor for the persistent struggle organizations face in balancing security and productivity, a tension that feels especially acute right now.

While generative AI might not yet be the holy grail DevSecOps teams were hoping for, it can help make incremental progress in reducing vulnerability backlogs. For now, it can be applied to simple fixes. For more complex fixes, teams will need to adopt a verify-to-trust methodology that harnesses the power of AI guided by the knowledge of the developers who wrote and own the code.
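As a rough illustration of what verify-to-trust could look like in practice, the sketch below assumes the AI-suggested patch is already applied on a working branch, that the project uses pytest, and that a SAST scanner such as Bandit is installed. None of these tools or steps are prescribed here; a real pipeline would be tailored to the team’s own stack.

```python
"""A minimal sketch of a verify-to-trust gate for AI-suggested fixes.

Assumptions (illustrative only): the proposed patch is already applied on a
working branch, the project uses pytest, and the Bandit SAST scanner is
installed. Nothing merges automatically; a passing gate only queues the
change for review by the developers who own the code.
"""
import subprocess
import sys

CHECKS = [
    ["pytest", "-q"],          # behavior still matches the pinned expectations
    ["bandit", "-r", "src"],   # the scanner reports no remaining findings
]

def gate() -> int:
    for cmd in CHECKS:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"Rejecting AI-suggested fix: {' '.join(cmd)} failed.")
            return result.returncode
    print("Checks passed; route the change to a human owner for review.")
    return 0

if __name__ == "__main__":
    sys.exit(gate())
```

The important design choice is that a green gate does not merge anything; it only earns the change a place in the review queue of the developers who own the code.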
