News Desk 2024: Can GenAI Write Secure Code?

GenAI’s 30%-50% coding productivity boost comes with a downside: it’s also generating vulnerabilities. Veracode’s Chris Wysopal discusses what he has found in this News Desk interview from Black Hat USA.


It’s faster, for sure, but generative artificial intelligence (GenAI) is learning how to code just like a human developer would — and that means picking up mistakes along the way.

GenAI and large language models (LLMs) learn how to code from open source code, which is itself flawed, explained Chris Wysopal, CTO and co-founder of Veracode, at this year’s Dark Reading News Desk at Black Hat USA 2024.

"[GenAI] can write a lot more code a lot faster," he said. “So what you end up with is more code and more vulnerabilities per unit of time.”

The trick then becomes finding and fixing vulnerabilities at the same pace.

In his research presented at the conference, “From HAL to HALT: Thwarting Skynet’s Siblings in the GenAI Coding Era” (a title itself written by an LLM), Wysopal proposes that the best way forward is using AI to find and fix the vulnerabilities in AI-generated software code. But we’re not there yet.
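
A rough sketch of what such a find-and-fix loop could look like in practice is below. It assumes the open source static analyzer Bandit as a stand-in for the “find” step and stubs out the “fix” step as a remediation prompt you would send to a security-tuned model; none of this reflects Veracode’s actual tooling.

```python
"""Hypothetical find-and-fix loop for AI-generated Python code.

Assumes Bandit is installed (pip install bandit). The fix step is only a
placeholder prompt; wiring it to a real code-remediation LLM is left out.
"""
import json
import subprocess
import sys


def find_vulnerabilities(path: str) -> list[dict]:
    """Run Bandit over the generated code and return its JSON findings."""
    result = subprocess.run(
        ["bandit", "-f", "json", "-q", path],
        capture_output=True,
        text=True,
    )
    report = json.loads(result.stdout or "{}")
    return report.get("results", [])


def build_fix_prompt(finding: dict, source: str) -> str:
    """Turn one Bandit finding into a remediation prompt for a fix model."""
    return (
        f"The following code triggers {finding['test_id']} "
        f"({finding['issue_text']}) at line {finding['line_number']}. "
        f"Rewrite it to remove the vulnerability without changing behavior:\n\n"
        f"{source}"
    )


if __name__ == "__main__":
    target = sys.argv[1]
    with open(target) as handle:
        source = handle.read()
    for finding in find_vulnerabilities(target):
        # In a real pipeline this prompt would go to a security-tuned LLM;
        # here it is just printed for inspection.
        print(build_fix_prompt(finding, source))
```

The point of the sketch is the shape of the loop, not the tools: generated code goes straight into an automated scanner, and every finding is packaged for an automated fix, so remediation can keep pace with generation.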

According to his research, the current model that writes the most secure code is an LLM called StarCoder. That’s not to say it writes code completely free of vulnerabilities, but it’s the best thing going. ChatGPT 4.0 and ChatGPT 2.5 were also found to be fairly adept at writing secure code.

“A general purpose LLM can fix some security bugs, but it turns out it’s not great at it,” Wysopal says. “And so we’re seeing specialized LLMs that are trained just to secure code.”

Wysopal and his team at Veracode are working on that, as are other companies, he adds.
