
Lessons for CISOs From OWASP's LLM Top 10

It’s time to start regulating LLMs to ensure they’re accurately trained and ready to handle business deals that could affect the bottom line.


Kevin Bocek, Chief Innovation Officer, Venafi

April 23, 2024

5 Min Read

Source: Bakhtiar Zein via Alamy Stock Photo

COMMENTARY

OWASP recently released its top 10 list for large language model (LLM) applications, in an effort to educate the industry on potential security threats to be aware of when deploying and managing LLMs. This release is a notable step in the right direction for the security community, as developers, designers, architects, and managers now have 10 areas to clearly focus on.

Similar to the National Institute of Standards and Technology (NIST) framework and the Cybersecurity and Infrastructure Security Agency (CISA) guidelines provided for the security industry, OWASP's list creates an opportunity for better alignment within organizations. With this knowledge, chief information security officers (CISOs) and security leaders can ensure the best security precautions are in place around fast-evolving LLM technologies. LLMs are just code, so we need to apply what we have learned about authenticating and authorizing code to prevent misuse and compromise. This is why identity provides the kill switch for AI: the ability to authenticate and authorize each model and its actions, and to stop a model when misuse, compromise, or errors occur.

Adversaries Are Capitalizing on Gaps in Organizations

As security practitioners, we've long talked about what adversaries are doing, such as data poisoning, supply chain vulnerabilities, excessive agency, model theft, and more. This OWASP list for LLMs is proof that the industry is recognizing where the risks are. To protect our organizations, we have to course-correct quickly and be proactive.

Generative artificial intelligence (GenAI) is putting a spotlight on a new wave of software risks that are rooted in the same capabilities that made it powerful in the first place. Every time a user asks an LLM a question, the model draws on content ingested from countless Web sources to produce its response or output. While every new technology comes with new risks, LLMs are especially concerning because they're so different from the tools we're used to.

Almost all of the top 10 LLM threats center on a compromise of authentication for the identities used in the models. The different attack methods run the gamut, affecting not only the identities of model inputs but also the identities of the models themselves, as well as their outputs and actions. This has a knock-on effect and calls for authentication in the code-signing and creation processes to halt the vulnerability at the source.
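
To make the code-signing idea concrete, here is a minimal sketch of verifying a detached signature on a model artifact before it is ever loaded. This is not any particular vendor's workflow; the file names are hypothetical and the publisher key is assumed to come from your own trust store.

```python
# Sketch only: verify a publisher's detached Ed25519 signature on a model
# artifact before loading it. File names and the trusted key are placeholders.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


def verify_model_artifact(model_path: str, sig_path: str, pubkey_bytes: bytes) -> bool:
    """Return True only if the artifact's hash matches the publisher's signature."""
    with open(model_path, "rb") as f:
        digest = hashlib.sha256(f.read()).digest()
    with open(sig_path, "rb") as f:
        signature = f.read()
    try:
        Ed25519PublicKey.from_public_bytes(pubkey_bytes).verify(signature, digest)
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    # The 32-byte raw public key would come from your organization's trust store.
    trusted_key = open("publisher_ed25519.pub", "rb").read()
    if not verify_model_artifact("model.safetensors", "model.safetensors.sig", trusted_key):
        raise RuntimeError("Model artifact failed signature verification; refusing to load")
```

Refusing to load anything that fails verification is what "halting the vulnerability at the source" looks like in practice: a poisoned or swapped artifact never reaches production.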

Authenticating Training and Models to Prevent Poisoning and Misuse

With more machines talking to each other than ever before, organizations must authenticate the identities used to send information and data from one machine to another. A model needs to authenticate the code it runs so that it can extend that authentication to the machines it talks to. Because models are vulnerable and worth watching closely, an issue with the initial input or the model itself will have a domino effect. Models, and their inputs, must be authenticated; otherwise, security teams will be left questioning whether this is the model they trained and whether it is using the plug-ins they approved. When models can use APIs and other models' credentials, authorization must be well defined and managed, and each model must be authenticated with its own unique identity.
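
As an illustration of per-model identities, the following sketch assumes each model instance is issued a short-lived, scoped token that downstream services check before honoring a call. The model IDs, scopes, and signing key are hypothetical; in production the credential would come from your identity infrastructure rather than a hard-coded secret.

```python
# Sketch only: mint a short-lived, scoped identity per model instance and check
# it before authorizing an action. Model IDs, scopes, and the key are placeholders.
import time

import jwt  # PyJWT

SIGNING_KEY = "replace-with-a-managed-secret"


def issue_model_identity(model_id: str, allowed_scopes: list[str], ttl_seconds: int = 300) -> str:
    """Bind a unique, expiring credential to one model and its approved actions."""
    now = int(time.time())
    claims = {"sub": model_id, "scopes": allowed_scopes, "iat": now, "exp": now + ttl_seconds}
    return jwt.encode(claims, SIGNING_KEY, algorithm="HS256")


def authorize_model_call(token: str, required_scope: str) -> bool:
    """Authenticate the calling model and confirm it may take this action."""
    try:
        claims = jwt.decode(token, SIGNING_KEY, algorithms=["HS256"])
    except jwt.PyJWTError:
        return False  # unknown, expired, or tampered identity
    return required_scope in claims.get("scopes", [])


# Example: the summarization model may call the search API, but not the payments API.
token = issue_model_identity("summarizer-v2", ["search:read"])
assert authorize_model_call(token, "search:read")
assert not authorize_model_call(token, "payments:write")
```

Because every credential is bound to one model and expires quickly, revoking or simply declining to renew it answers the "is this the model we trained, using the plug-ins we approved?" question with something enforceable.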

We saw this play out recently with AT&T's outage, which was attributed to a "software configuration error" and left thousands of people without cellphone service during their morning commute. The same week, Google experienced a bug that was very different but equally concerning: its Gemini image generator produced historically inaccurate images, raising concerns about diversity and bias in AI. In both cases, the data used to train GenAI models and LLMs, along with the lack of guardrails around it, was the root of the problem. To prevent issues like this in the future, AI firms need to spend more time and money adequately training their models and better curating the data that informs them.

To build a secure, resilient system, CISOs and security leaders should design it so that each model works alongside other models. That way, an adversary stealing one model does not collapse the entire system, and it enables a kill-switch approach: you can shut off a single model and keep working while protecting the company's intellectual property. This puts security teams in a much stronger position and prevents further damage.
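
A kill-switch approach can be as simple as a registry in which each model has its own identity, so revoking one model leaves a sibling serving. The sketch below is illustrative only; the model names and stand-in handlers are hypothetical, and a real deployment would tie revocation to the same identity system that issued the models' credentials.

```python
# Sketch only: a registry where each model has its own identity, so revoking one
# (the kill switch) leaves a sibling model serving. Names and handlers are stand-ins.
from typing import Callable


class ModelRegistry:
    def __init__(self) -> None:
        self._models: dict[str, Callable[[str], str]] = {}
        self._revoked: set[str] = set()

    def register(self, model_id: str, handler: Callable[[str], str]) -> None:
        self._models[model_id] = handler

    def revoke(self, model_id: str) -> None:
        """Kill switch: stop trusting a single model without taking the system down."""
        self._revoked.add(model_id)

    def invoke(self, prompt: str, preference: list[str]) -> str:
        # Route to the first registered model whose identity has not been revoked.
        for model_id in preference:
            if model_id in self._models and model_id not in self._revoked:
                return self._models[model_id](prompt)
        raise RuntimeError("No trusted model available")


registry = ModelRegistry()
registry.register("primary-llm", lambda p: f"[primary] {p}")
registry.register("fallback-llm", lambda p: f"[fallback] {p}")

registry.revoke("primary-llm")  # e.g., after detecting compromise or misuse
print(registry.invoke("Summarize today's incident report", ["primary-llm", "fallback-llm"]))
```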

Acting on Lessons From the List

For security leaders, I recommend taking OWASP’s guidance and asking your CISO or C-level execs how the organization is scoring on these vulnerabilities overall. This framework holds us all more accountable for delivering market-level security insights and solutions. It is encouraging that we have something to show our CEO and board to illustrate how we’re doing when it comes to risk preparedness.

As we continue to see risks arise with LLMs and AI customer service tools, as we just did with Air Canada's chatbot promising a reimbursement to a traveler, companies will be held accountable for those mistakes. It's time to start regulating LLMs to ensure they're accurately trained and ready to handle business deals that could affect the bottom line.

In conclusion, this list serves as a useful framework for the emerging Web vulnerabilities and risks we need to pay attention to when using LLMs. While more than half of the top 10 risks can be largely mitigated by a kill switch for AI, companies will still need to evaluate their options when deploying new LLMs. If the right tools are in place to authenticate the inputs and models, as well as the models' actions, companies will be better equipped to apply the AI kill-switch idea and prevent further damage. While this may seem daunting, there are ways to protect your organization as AI and LLMs make their way into your network.

About the Author(s)

Kevin Bocek, Chief Innovation Officer, Venafi

As Chief Innovation Officer, Kevin is leading Venafi’s growth in new innovations, and is at the forefront of Venafi’s cutting-edge machine identity management for workload identity, Kubernetes and AI. He also drives Venafi’s technology ecosystem and developer community to futureproof customer success and is responsible for the company’s Machine Identity Management Development Fund, which has sponsored innovations with more than 50 developers globally. Kevin brings more than 25 years of experience in cybersecurity with industry leaders including RSA Security, PGP Corporation, IronKey, CipherCloud, Thales, nCipher, and Xcert.
