Vulnerabilities Exposed Hugging Face to AI Supply Chain Attacks

By Deeba Ahmed

Wiz.io, known for its cloud security expertise, and Hugging Face, a leader in open-source AI tools, are combining their knowledge to develop solutions that address the security flaws described below. This collaboration signifies a growing focus on securing the foundation of AI advancements.

Cybersecurity firm Wiz.io found that AI-as-a-service (aka AI cloud) platforms like Hugging Face are exposed to critical risks that could allow threat actors to escalate privileges, gain cross-tenant access, and potentially take over continuous integration and continuous deployment (CI/CD) pipelines.

Understanding The Problem

Training and running AI models requires powerful GPUs, so the work is often outsourced to AI service providers, much as companies consume cloud infrastructure from AWS, GCP, or Azure. Hugging Face's offering is called the Hugging Face Inference API.
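For context, a model hosted on such a service is typically queried over plain HTTP. Below is a minimal sketch of calling the hosted Inference API for a public model; the token is a placeholder, and the endpoint shape reflects Hugging Face's hosted API at the time of writing.

```python
import requests

# Placeholder token for illustration; a real call needs a valid HF token.
API_URL = "https://api-inference.huggingface.co/models/gpt2"
HEADERS = {"Authorization": "Bearer hf_xxx"}

def query(payload: dict) -> dict:
    # POST the inference request; the service runs the model server-side.
    response = requests.post(API_URL, headers=HEADERS, json=payload)
    response.raise_for_status()
    return response.json()

print(query({"inputs": "AI supply chain security is"}))
```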

Wiz Research was able to compromise the service that runs custom models by uploading a malicious model of its own and using container-escape techniques, gaining cross-tenant access to other customers’ models stored on Hugging Face.

The platform supports various AI model formats, two prominent ones being PyTorch (Pickle) and Safetensors. Python’s Pickle format is known to be unsafe: it allows remote code execution upon deserialization of untrusted data. Hugging Face does assess Pickle files uploaded to its platform and highlights those it deems dangerous.
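Why is deserializing untrusted Pickle data equivalent to running untrusted code? A minimal sketch (not taken from the Wiz report): any Python object can define `__reduce__`, and pickle invokes whatever callable it returns during loading. A harmless `echo` stands in for an attacker’s payload here.

```python
import os
import pickle

class Malicious:
    # pickle calls __reduce__ when serializing; on load it will invoke
    # the returned callable with the given arguments.
    def __reduce__(self):
        return (os.system, ("echo arbitrary code ran during unpickling",))

payload = pickle.dumps(Malicious())
pickle.loads(payload)  # executes the command; a real payload could spawn a reverse shell
```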

However, the researchers cloned a legitimate Pickle-based model (gpt2), modified it to run a reverse shell upon loading, and uploaded the hand-crafted model as a private model. Interacting with it through the Inference API feature, they obtained a reverse shell. The exercise showed that crafting a PyTorch model that executes arbitrary code is straightforward, and that uploading such a model to Hugging Face allowed them to execute code inside the Inference API infrastructure.
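On the defensive side, here is a hedged sketch of two loading patterns that avoid executing pickled code; the file names are placeholders:

```python
import torch
from safetensors.torch import load_file

# Option 1: refuse to unpickle arbitrary objects; torch.load raises an
# error if the file contains anything beyond plain tensor data.
weights = torch.load("model.bin", weights_only=True)

# Option 2: Safetensors stores raw tensors only, so loading a
# .safetensors file cannot trigger code execution by design.
weights = load_file("model.safetensors")
```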

Potential Risks

The potential impact is devastating, as attackers could access millions of private AI models and apps. Wiz highlights two critical risks: shared inference infrastructure takeover, where a malicious model runs on the same infrastructure that serves other tenants’ models, and shared CI/CD takeover, where a malicious AI application takes over the CI/CD cluster and stages a supply chain attack through the pipeline.

Furthermore, adversaries can attack AI models, AI/ML applications, and inference infrastructure using various methods, such as crafted inputs that cause incorrect predictions or outright malicious models. AI models are often treated as black boxes and dropped into applications, yet few tools exist to verify a model’s integrity, so developers must be cautious when downloading models, for example by checking published checksums (as sketched below).
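One basic precaution, sketched here with a hypothetical known-good digest, is to compare a downloaded model file’s SHA-256 hash against a value published out-of-band by the model author:

```python
import hashlib

# Hypothetical digest published by the model author (placeholder value).
EXPECTED_SHA256 = "<known-good sha256 hex digest>"

def sha256_of(path: str) -> str:
    # Hash the file in chunks so large model files do not exhaust memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

if sha256_of("model.bin") != EXPECTED_SHA256:
    raise RuntimeError("Model file does not match the published checksum")
```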

“Using an untrusted AI model could introduce integrity and security risks to your application and is equivalent to including untrusted code within your application,” Wiz Research’s report explained.

Hugging Face and Wiz Research Join Hands

Open-source artificial intelligence (AI) hub Hugging Face and Wiz.io have collaborated to address the security risks associated with AI-powered services. The joint effort highlights the importance of proactive measures to ensure the responsible and secure development and deployment of AI technologies.

Commenting on this, Nick Rago, VP of Product Strategy at Salt Security, added: “Securing the critical cloud infrastructure that houses AI models is crucial and the findings by Wiz are significant. It is also imperative that security teams recognize the vehicle in which AI is trained and serviced is an API, and rigorous security must be applied at that level to ensure the security of AI supply chains.”

A Concerning Scenario

This discovery comes at a time when concerns about data safety in AI-based tools are already high. An AvePoint survey shows that less than half of organizations are confident they can use AI safely, with 71% concerned about data privacy and security before implementation and 61% worried about internal data quality.

Despite the widespread use of AI tools like ChatGPT and Google Gemini, fewer than half of organizations have an AI Acceptable Use Policy. Additionally, 45% of organizations experienced unintended data exposure during AI implementation.

The widespread adoption of AI across industries necessitates strong security measures. Vulnerabilities like these could allow attackers to manipulate AI models, steal sensitive data, or disrupt critical operations.
