
Vulnerabilities, AI Compete for Software Developers' Attention

This year, the majority of developers adopted AI assistants to help with coding and improve code output, but they are also introducing more vulnerabilities that take longer to remediate.


Less than two years after the general release of ChatGPT, most software developers have adopted AI assistants for programming. That has boosted efficiency, but it has also led to a faster development cadence that makes maintaining security more difficult.

Developers are on track to download more than 6.6 trillion software components in 2024, which includes a 70% increase in downloads of JavaScript components and an 87% increase in Python modules, according to the annual “State of the Software Supply Chain” report from Sonatype. At the same time, the mean time to remediate vulnerabilities in those open source projects has grown significantly over the past seven years, from about 25 days in 2017 to more than 300 days in 2024.

One likely reason: The advent of AI is driving speedier development cycles, making security more difficult, says Brian Fox, chief technology officer of Sonatype. The majority of developers now use AI tools in their development process, according to a recent Stack Overflow survey, with 62% of coders saying they used an AI assistant, up from 44% last year.

“AI has quickly become a powerful tool for speeding up the coding process, but the pace of security has not progressed as quickly, and it’s creating a gap that is leading to lower-quality, less-secure code,” he says. “We’re headed in the right direction, but the true benefit of AI will come when developers don’t have to sacrifice quality or security for speed.”


Security researchers have warned that AI code generation could result in more vulnerabilities and novel attacks. For instance, a group of researchers demonstrated the ability to poison the large language models (LLMs) used for code generation with maliciously exploitable code at the USENIX Security Symposium in August. In March, researchers with an LLM security vendor showed that attackers could use AI hallucinations as a way to direct developers and their applications to malicious packages.
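One practical defense against hallucinated dependencies is to verify that any package an assistant suggests actually exists in the official registry before installing it. Below is a minimal Python sketch using PyPI’s public JSON API; the package names passed on the command line stand in for whatever an assistant proposed. An existence check is only a first filter, since an attacker may have already registered a commonly hallucinated name, so pairing it with registry metadata such as release age and download counts gives a stronger signal.

```python
# Minimal sketch: before installing a package an AI assistant suggests,
# confirm it actually exists on PyPI. A hallucinated name that an attacker
# later registers would otherwise resolve to attacker-controlled code.
import sys
import urllib.error
import urllib.request


def package_exists_on_pypi(name: str) -> bool:
    """Return True if PyPI's JSON API knows the package, False on a 404."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise  # other HTTP errors are ambiguous; surface them


if __name__ == "__main__":
    for pkg in sys.argv[1:]:
        verdict = "found" if package_exists_on_pypi(pkg) else "NOT on PyPI -- do not install"
        print(f"{pkg}: {verdict}")
```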

Developers also have growing concerns about the potential for AI assistants to suggest or propagate vulnerable code. While the majority of developers (56%) expect AI assistants to produce usable code, only 23% expect that code to be secure, and a larger group (40%) do not believe AI assistants produce secure code at all, according to research by software development firm JetBrains and the University of California at Irvine, published in June.

[Chart: Open source projects take longer to remediate vulnerabilities. Source: Sonatype]

Many developers remain unsettled by the speed of change wrought by AI coding tools, and there is likely more to come, says Jimmy Rabon, senior product manager at Black Duck Software, a software-integrity tools provider.


“We haven’t seen the long-term effects of adding something that can code at the level of a junior- or intermediate-level developer and at massive scale,” he says. “My expectation is that we will see more intermediate mistakes — the basic mistakes that you would make as a junior or intermediate level developer — and [issues with] understanding the context of where some of the data flows.”

2024: The Year of the Developer’s AI Assistant

While the majority of developers overall now use AI assistants, adoption in business environments is even higher: More than 90% of developers there use AI assistants, according to Black Duck’s 2024 Global State of DevSecOps survey. AI as a tool for developers is well entrenched and “will never go away,” Rabon says.

Yet many developers don’t have the experience to judge whether code provided by an AI assistant is safe. Entry-level developers, for example, are more trusting of AI-produced code than their more experienced counterparts, with 49% trusting the accuracy of AI-generated code versus 42% of experienced developers, according to Stack Overflow’s annual developer survey.


In addition, AI tools will affect developer education and could make it harder for entry-level developers to gain the skills needed to advance in their careers, experts say. Reliance on AI to complete simple programming projects could reduce the need for new or entry-level developers, who typically tackle simpler coding tasks, removing a training path, Sonatype’s Fox says.

“The development community is aging, and the introduction of AI poses potential risks to younger generations,” he says. “If AI can handle the tasks previously assigned to budding developers, how will they gain the experience needed to replace older developers exiting the industry?”

Automatic Generation of Secure Code

Until the companies behind AI assistants create training datasets that contain secure code suggestions, or put in place guardrails against vulnerable and malicious code generation, organizations will have to deploy automated software security tools to check the work of any coding assistant, as in the sketch below.
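As a concrete illustration, a team could gate assistant-generated changes behind a static analysis pass in CI. This minimal Python sketch assumes the open source Bandit scanner is installed (pip install bandit) and that AI-assisted code lands under a src/ directory; both are placeholders, not a prescribed setup.

```python
# Minimal sketch: run a security scan over AI-assisted code before merge.
# Assumes Bandit is installed; the src/ path is a placeholder.
import subprocess
import sys


def scan_generated_code(path: str = "src/") -> int:
    """Run Bandit recursively over `path` and return its exit code,
    which is nonzero when findings are reported."""
    result = subprocess.run(
        ["bandit", "-r", path, "-ll"],  # -ll: report medium severity and up
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    return result.returncode


if __name__ == "__main__":
    # Fail the CI job when the scan reports findings, so insecure
    # assistant-generated code cannot merge unreviewed.
    sys.exit(scan_generated_code())
```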

The good news is that, between the additional security checks and the fast evolution of code-generation assistants, the security of software and applications could eventually become much stronger, says Black Duck’s Rabon.

“There are certain basic security flaws that I think will disappear,” he says. “If you asked an AI system to generate code, why should it ever [suggest an insecure function?] … I don’t think that we’ve had enough time to really see the dramatic effects of [such capabilities] or prove them out.”

