OWASP Releases AI Security Guidance
OWASP has released guidance materials addressing how to respond to deepfakes, AI security best practices, and how to secure open source and commercial generative AI applications.
The Open Worldwide Application Security Project (OWASP) has announced new security guidance materials to help organizations identify and manage the risks associated with the adoption, deployment, and management of large language models (LLMs) and generative artificial intelligence (GenAI) applications.
The guidance is part of the OWASP Top 10 for LLM Application Security Project, a global, community-led open source project. Since its inception in 2023, the group has released research, guidance, and resource materials to help organizations develop a comprehensive strategy encompassing governance, collaboration, and practical tools.
The “Guide for Preparing and Responding to Deepfake Events” illustrates the problems posed by “hyper-realistic digital forgeries.” An outgrowth of the AI Cyber Threat Intelligence initiative, this resource offers pragmatic defense strategies to help organizations stay secure as deepfake technology improves.
The “Center of Excellence Guide” helps businesses establish best practices and frameworks for AI security. It shows organizations how to build systems for risk management, how to coordinate across security, legal, data science, and operations teams, and how to develop and enforce security policy and educate staff on AI security.
The “AI Security Solution Landscape Guide” is a broad reference on how to secure both open source and commercial LLM and GenAI applications. It categorizes existing and emerging security products and gives guidance on how to think about risks identified in the Top 10 list.
The project brings together more than 500 cybersecurity and AI experts from companies and organizations around the world to identify LLM vulnerabilities and mitigations. In early 2024, the project expanded its focus to include strategic stakeholders, like CISOs and compliance officers, in addition to developers, data scientists, and other security practitioners.
“We’re two years into the generative AI boom, and attackers are using AI to get smarter and faster,” said Steve Wilson, project lead for the OWASP Top 10 for LLM Project, in a statement. “Security leaders and software developers need to do the same. Our new resources arm organizations with the tools they need to stay ahead of these increasingly sophisticated threats.”
About the Author
Jennifer Lawinski is a writer and editor with more than 20 years’ experience in media, covering a wide range of topics including business, news, culture, science, technology, and cybersecurity. After earning a Master’s degree in Journalism from Boston University, she started her career as a beat reporter for The Daily News of Newburyport. She has since written for a variety of publications including CNN, Fox News, Tech Target, CRN, CIO Insight, MSN News, and Live Science. She lives in Brooklyn with her partner and two cats.