
Large Language AI Models Have Real Security Benefits

Complex neural networks, including GPT-3, can deliver useful cybersecurity capabilities, such as explaining malware and quickly classifying websites, researchers find.

DARKReading

GPT-3, the large neural network created through extensive training on massive datasets, provides a variety of benefits to cybersecurity applications, including natural-language-based threat hunting, easier categorization of unwanted content, and clearer explanations of complex or obfuscated malware, according to research to be presented at the Black Hat USA conference next week.

Using the third version of the Generative Pre-trained Transformer — more commonly known as GPT-3 — two researchers with cybersecurity firm Sophos found that the technology could turn natural-language queries, such as “show me all word processing software that is making outgoing connections to servers in South Asia,” into requests to a security information and event management (SIEM) system. GPT-3 is also very good at taking a small number of examples of website classifications and then using those to categorize other sites, finding commonalities between criminal sites or between exploit forums.
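
The Black Hat materials do not include the researchers' prompts, but the general few-shot pattern behind this kind of translation is straightforward. The sketch below is a hypothetical illustration, not Sophos's code: it assumes the pre-1.0 `openai` Python client and an invented SIEM query syntax, and shows how a handful of examples embedded in the prompt can steer GPT-3 toward emitting a structured query for a new question.

```python
import openai  # pre-1.0 client; assumes OPENAI_API_KEY is set in the environment

# Hypothetical few-shot prompt: each pair maps an analyst question to a
# made-up SIEM query syntax. None of these examples come from the Sophos research.
FEW_SHOT_PROMPT = """\
Translate the analyst's question into a SIEM query.

Question: show me all processes that wrote to the Windows registry today
Query: process_events | where action == "registry_write" and time > ago(1d)

Question: list hosts with more than 100 failed logins in the last hour
Query: auth_events | where outcome == "failure" and time > ago(1h) | summarize count() by host | where count_ > 100

Question: {question}
Query:"""

def nl_to_siem_query(question: str) -> str:
    """Ask GPT-3 to complete the few-shot prompt with a structured query."""
    response = openai.Completion.create(
        model="text-davinci-002",      # a GPT-3-era completion model
        prompt=FEW_SHOT_PROMPT.format(question=question),
        temperature=0,                 # deterministic output for tooling
        max_tokens=128,
        stop=["\n\n"],                 # stop before a new example would begin
    )
    return response["choices"][0]["text"].strip()

print(nl_to_siem_query(
    "show me all word processing software making outgoing "
    "connections to servers in South Asia"))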

Both applications of GPT-3 can save companies and cybersecurity analysts significant time, says Joshua Saxe, one of the two authors of the Black Hat research and chief scientist for artificial intelligence at Sophos.

“We are not using GPT-3 in production at this point, but I do see GPT-3 and large deep-learning models — the ones you cannot build on commodity hardware — I see those models as important for strategic cyber defense,” he says. “We are getting much better — dramatically better — results using a GPT-3-based approach than we would get with traditional approaches using smaller models.”

The research is the latest application of GPT-3 to show the model’s surprising effectiveness at translating natural-language queries into machine commands, program code, and images. The creator of GPT-3, OpenAI, has teamed up with GitHub, for example, to create an automated pair programming system, Copilot, that can generate code from natural-language comments and simple function names.

GPT-3 is a generative neural network: a deep-learning model that uses its learned ability to recognize patterns to produce new content. Generative models can also be paired with a second, pattern-recognizing network that scores their output. A machine learning (ML) system trained to recognize images, for example, can rank results from a generative network that turns text into original art. Automating that feedback loop lets researchers quickly build new artificial intelligence (AI) systems, like the art-producing DALL-E.
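
The article's description of this pairing is high-level; the toy sketch below illustrates only the generate-then-score loop it gestures at. Both the generator and the scorer here are stand-in functions (random word combinations and keyword overlap), not real neural networks.

```python
import random

WORDS = ["castle", "ocean", "neon", "fox", "storm", "garden", "glass", "moon"]

def generate_candidate() -> str:
    """Stand-in generator: in a real system this would be a generative network."""
    return " ".join(random.sample(WORDS, 3))

def score(candidate: str, prompt: str) -> int:
    """Stand-in recognizer: in a real system a second network would rank
    how well the candidate matches the prompt."""
    return len(set(candidate.split()) & set(prompt.split()))

def generate_best(prompt: str, n_candidates: int = 50) -> str:
    """Automated feedback loop: sample many candidates and keep the one
    the recognizer scores highest."""
    return max((generate_candidate() for _ in range(n_candidates)),
               key=lambda c: score(c, prompt))

print(generate_best("a neon fox in a moon garden"))
```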

The technology is so effective that an AI researcher at Google claimed one implementation of a large-language chatbot model had become sentient.

While the nuanced learning of the GPT-3 model surprised the Sophos researchers, they are far more focused on the utility of the technology to ease the job of cybersecurity analysts and malware researchers. In their upcoming presentation at Black Hat, Saxe and fellow Sophos researcher Younghoo Lee will show how the largest neural networks can deliver useful and surprising results.

In addition to creating queries for threat hunting and classifying websites, the Sophos researchers used generative training to improve the GPT-3 model’s performance on specific cybersecurity tasks. For example, the researchers took an obfuscated, complicated PowerShell script, had GPT-3 translate it under a range of different parameter settings, and then compared each translation’s functionality to that of the original script. The configuration whose translation stayed closest in functionality to the original was deemed the best and was then used for further training.
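
Sophos has not published this pipeline, so the sketch below is only a guess at its shape. It sweeps GPT-3 sampling parameters over the same deobfuscation prompt and keeps the configuration whose output scores closest to the original script. The `behavioral_similarity` function is a crude stand-in (token overlap) for whatever functional comparison the researchers actually used, such as sandboxed execution.

```python
import itertools
import openai  # pre-1.0 client, as in the earlier sketch

def deobfuscate(script: str, temperature: float, top_p: float) -> str:
    """Ask GPT-3 to rewrite an obfuscated PowerShell script in plain form."""
    response = openai.Completion.create(
        model="text-davinci-002",
        prompt=("Rewrite this obfuscated PowerShell script as clear, "
                "commented PowerShell:\n\n" + script + "\n\nRewritten:\n"),
        temperature=temperature,
        top_p=top_p,
        max_tokens=512,
    )
    return response["choices"][0]["text"]

def behavioral_similarity(original: str, candidate: str) -> float:
    """Stand-in comparison: the real check would verify that both scripts
    do the same thing, not just that they share tokens."""
    a, b = set(original.split()), set(candidate.split())
    return len(a & b) / max(len(a | b), 1)

def best_config(script: str):
    """Grid-search sampling parameters; keep the closest translation."""
    grid = itertools.product([0.0, 0.3, 0.7], [0.9, 1.0])
    scored = (((t, p), behavioral_similarity(script, deobfuscate(script, t, p)))
              for t, p in grid)
    return max(scored, key=lambda item: item[1])
```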

“GPT-3 can do about as well as the traditional models but with a tiny handful of training examples,” Saxe says.

Companies have invested in AI and ML as essential tools for improving the efficiency of their technology, with “AI/ML” becoming a must-have term in product marketing.

Yet ways to exploit AI/ML models have jumped from whiteboard theory to practical attacks. Government contractor MITRE and a group of technology firms have created an encyclopedia of adversarial attacks on AI systems. Known as the Adversarial Threat Landscape for Artificial-Intelligence Systems, or ATLAS, the catalog of techniques ranges from abusing a model’s real-time learning to poison its training data, as happened with Microsoft’s Tay chatbot, to evading an ML model’s detection capabilities, as researchers did with Cylance’s malware detection engine.
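
ATLAS catalogs techniques rather than code, but training-data poisoning of the sort that derailed Tay is easy to demonstrate in miniature. The sketch below is a generic illustration unrelated to any of the named systems: it flips a fraction of the training labels for a toy classifier and measures how test accuracy degrades as the poisoning fraction grows.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy dataset standing in for any model that learns from user-supplied data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def accuracy_with_poisoning(flip_fraction: float) -> float:
    """Flip a fraction of training labels (label-flipping poisoning)
    and report the retrained model's accuracy on clean test data."""
    rng = np.random.default_rng(0)
    y_poisoned = y_tr.copy()
    idx = rng.choice(len(y_poisoned), int(flip_fraction * len(y_poisoned)),
                     replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # invert the chosen binary labels
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
    return model.score(X_te, y_te)

for frac in (0.0, 0.1, 0.3, 0.45):
    print(f"poisoned {frac:.0%} of labels -> "
          f"accuracy {accuracy_with_poisoning(frac):.3f}")
```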

In the end, AI likely has more to offer defenders than attackers, Saxe says. Still, while the technology is worth using, it will not dramatically shift the balance between attackers and defenders, he says.

“The overall goal of the talk is to convince people that these large language models are not just hype, they are real, and we need to find where they fit in our cybersecurity toolbox,” Saxe says.
