
Tech Giants Agree to Standardize AI Security

The Coalition for Secure AI is a consortium of influential AI companies aiming to develop tools to secure AI applications and set up an ecosystem for sharing best practices.

DARKReading


The largest and most influential artificial intelligence (AI) companies are joining forces to map out a security-first approach to the development and use of generative AI.

The Coalition for Secure AI, also called CoSAI, aims to provide the tools to mitigate the risks involved in AI. The goal is to create standardized guardrails, security technologies, and tools for the secure development of models.

“Our initial workstreams include AI and software supply chain security and preparing defenders for a changing cyber landscape,” CoSAI said in a statement.

The initial efforts include creating a secure bubble and systems of checks and balances around the access and use of AI, and creating a framework to protect AI models from cyberattacks, according to Google, one of the coalition’s founding members. Google, OpenAI, and Anthropic develop the most widely used large language models (LLMs). Other members include Microsoft, IBM, Intel, Nvidia, and PayPal.

“AI developers need — and end users deserve — a framework for AI security that meets the moment and responsibly captures the opportunity in front of us. CoSAI is the next step in that journey, and we can expect more updates in the coming months,” wrote Google’s vice president of security engineering, Heather Adkins, and Google Cloud’s chief information security officer, Phil Venables.

AI Safety as a Priority

AI has raised a host of cybersecurity concerns since the launch of ChatGPT in 2022. Those include its misuse in social engineering attacks to penetrate systems and in deepfake videos that spread misinformation. At the same time, security firms such as Trend Micro and CrowdStrike are now turning to AI to help companies root out threats.

AI safety, trust, and transparency are important as results could steer organizations into faulty — and sometimes harmful — actions and decisions, says Gartner analyst Avivah Litan.

“AI cannot run on its own without guardrails to rein it in — errors and exceptions need to be highlighted and investigated,” Litan says.

AI security issues could multiply with technologies such as AI agents, add-ons to LLMs that draw on custom data to generate more accurate answers.

“The right tools need to be in place to automatically remediate all but the most opaque exceptions,” Litan says.

US President Joe Biden has challenged the private sector to prioritize AI safety and ethics, citing AI’s potential to propagate inequity and to compromise national security.

In July 2023, the Biden administration secured voluntary commitments from major companies that are now part of CoSAI to develop safety standards, share safety test results, and prevent AI’s misuse for creating biological materials and for fraud and deception.

CoSAI will work with other organizations, including the Frontier Model Forum, Partnership on AI, OpenSSF, and MLCommons, to develop common standards and best practices.

MLCommons told Dark Reading this week that it will release an AI safety benchmarking suite this fall; the suite will rate LLMs on their responses related to hate speech, exploitation, child abuse, and sex crimes.

CoSAI will be managed by OASIS Open, which, like the Linux Foundation, manages open source development projects. OASIS is best known for its work on the XML standard and on the OpenDocument Format (ODF), an open alternative to Microsoft Word’s .doc file format.

About the Author(s)

Agam Shah has covered enterprise IT for more than a decade. Outside of machine learning, hardware, and chips, he’s also interested in martial arts and Russia.
