Security
Headlines
HeadlinesLatestCVEs

Headline

AI Solutions Are the New Shadow IT

Ambitious Employees Tout New AI Tools, Ignore Serious SaaS Security RisksLike the SaaS shadow IT of the past, AI is placing CISOs and cybersecurity teams in a tough but familiar spot. Employees are covertly using AI with little regard for established IT and cybersecurity review procedures. Considering ChatGPT’s meteoric rise to 100 million users within 60 days of launch, especially with little

The Hacker News
#xss#vulnerability#web#google#intel#backdoor#ssrf#oauth#auth#ibm#The Hacker News

Ambitious Employees Tout New AI Tools, Ignore Serious SaaS Security Risks

Like the SaaS shadow IT of the past, AI is placing CISOs and cybersecurity teams in a tough but familiar spot.

Employees are covertly using AI with little regard for established IT and cybersecurity review procedures. Considering ChatGPT’s meteoric rise to 100 million users within 60 days of launch, especially with little sales and marketing fanfare, employee-driven demand for AI tools will only escalate.

As new studies show some workers boost productivity by 40% using generative AI, the pressure for CISOs and their teams to fastrack AI adoption — and turn a blind eye to unsanctioned AI tool usage — is intensifying.

But succumbing to these pressures can introduce serious SaaS data leakage and breach risks, particularly as employees flock to AI tools developed by small businesses, solopreneurs, and indie developers.

AI Security Guide

Download AppOmni’s CISO Guide to AI Security - Part 1

AI evokes inspiration, confusion, and skepticism — especially among CISOs. AppOmni’s newest CISO Guide examines common misconceptions about AI security, giving you a balanced perspective on today’s most polarizing IT topic.

Get It Now

Indie AI Startups Typically Lack the Security Rigor of Enterprise AI

Indie AI apps now number in the tens of thousands, and they’re successfully luring employees with their freemium models and product-led growth marketing strategy. According to leading offensive security engineer and AI researcher Joseph Thacker, indie AI app developers employ less security staff and security focus, less legal oversight, and less compliance.

Thacker breaks down indie AI tool risks into the following categories:

  • Data leakage: AI tools, particularly generative AI using large language models (LLMs), have broad access to the prompts employees enter. Even ChatGPT chat histories have been leaked, and most indie AI tools aren’t operating with the security standards that OpenAI (the parent company of ChatGPT) apply. Nearly every indie AI tool retains prompts for “training data or debugging purposes,” leaving that data vulnerable to exposure.
  • Content quality issues: LLMs are suspect to hallucinations, which IBM defines as the phenomena when LLMS “perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate.” If your organization hopes to rely on an LLM for content generation or optimization without human reviews and fact-checking protocols in place, the odds of inaccurate information being published are high. Beyond content creation accuracy pitfalls, a growing number of groups such as academics and science journal editors have voiced ethical concerns about disclosing AI authorship.
  • Product vulnerabilities: In general, the smaller the organization building the AI tool, the more likely the developers will fail to address common product vulnerabilities. For example, indie AI tools can be more susceptible to prompt injection, and traditional vulnerabilities such as SSRF, IDOR, and XSS.
  • Compliance risk: Indie AI’s absence of mature privacy policies and internal regulations can lead to stiff fines and penalties for non-compliance issues. Employers in industries or geographies with tighter SaaS data regulations such as SOX, ISO 27001, NIST CSF, NIST 800-53, and APRA CPS 234 could find themselves in violation when employees use tools that don’t abide by these standards. Additionally, many indie AI vendors have not achieved SOC 2 compliance.

In short, indie AI vendors are generally not adhering to the frameworks and protocols that keep critical SaaS data and systems secure. These risks become amplified when AI tools are connected to enterprise SaaS systems.

Connecting Indie AI to Enterprise SaaS Apps Boosts Productivity — and the Likelihood of Backdoor Attacks

Employees achieve (or perceive) significant process improvement and outputs with AI tools. But soon, they’ll want to turbocharge their productivity gains by connecting AI to the SaaS systems they use every day, such as Google Workspace, Salesforce, or M365.

Because indie AI tools depend on growth through word of mouth more than traditional marketing and sales tactics, indie AI vendors encourage these connections within the products and make the process relatively seamless. A Hacker News article on generative AI security risks illustrates this point with an example of an employee who finds an AI scheduling assistant to help manage time better by monitoring and analyzing the employee’s task management and meetings. But the AI scheduling assistant must connect to tools like Slack, corporate Gmail, and Google Drive to obtain the data it’s designed to analyze.

Since AI tools largely rely on OAuth access tokens to forge an AI-to-SaaS connection, the AI scheduling assistant is granted ongoing API-based communication with Slack, Gmail, and Google Drive.

Employees make AI-to-SaaS connections like this every day with little concern. They see the possible benefits, not the inherent risks. But well-intentioned employees don’t realize they might have connected a second-rate AI application to your organization’s highly sensitive data.

Figure 1: How an indie AI tool achieves an OAuth token connection with a major SaaS platform. Credit: AppOmni

AI-to-SaaS connections, like all SaaS-to-SaaS connections, will inherit the user’s permission settings. This translates to a serious security risk as most indie AI tools follow lax security standards. Threat actors target indie AI tools as the means to access the connected SaaS systems that contain the company’s crown jewels.

Once the threat actor has capitalized on this backdoor to your organization’s SaaS estate, they can access and exfiltrate data until their activity is noticed. Unfortunately, suspicious activity like this often flies under the radar for weeks or even years. For instance, roughly two weeks passed between the data exfiltration and public notice of the January 2023 CircleCI data breach.

Without the proper SaaS security posture management (SSPM) tooling to monitor for unauthorized AI-to-SaaS connections and detect threats like large numbers of file downloads, your organization sits at a heightened risk for SaaS data breaches. SSPM mitigates this risk considerably and constitutes a vital part of your SaaS security program. But it’s not intended to replace review procedures and protocols.

How to Practically Reduce Indie AI Tool Security Risks

Having explored the risks of indie AI, Thacker recommends CISOs and cybersecurity teams focus on the fundamentals to prepare their organization for AI tools:

1. Don’t Neglect Standard Due Diligence

We start with the basics for a reason. Ensure someone on your team, or a member of Legal, reads the terms of services for any AI tools that employees request. Of course, this isn’t necessarily a safeguard against data breaches or leaks, and indie vendors may stretch the truth in hopes of placating enterprise customers. But thoroughly understanding the terms will inform your legal strategy if AI vendors break service terms.

2. Consider Implementing (Or Revising) Application And Data Policies

An application policy provides clear guidelines and transparency to your organization. A simple “allow-list” can cover AI tools built by enterprise SaaS providers, and anything not included falls into the “disallowed” camp. Alternatively, you can establish a data policy that dictates what types of data employees can feed into AI tools. For example, you can forbid inputting any form of intellectual property into AI programs, or sharing data between your SaaS systems and AI apps.

3. Commit To Regular Employee Training And Education

Few employees seek indie AI tools with malicious intent. The vast majority are simply unaware of the danger they’re exposing your company to when they use unsanctioned AI.

Provide frequent training so they understand the reality of AI tools data leaks, breaches, and what AI-to-SaaS connections entail. Trainings also serve as opportune moments to explain and reinforce your policies and software review process.

4. Ask The Critical Questions In Your Vendor Assessments

As your team conducts vendor assessments of indie AI tools, insist on the same rigor you apply to enterprise companies under review. This process must include their security posture and compliance with data privacy laws. Between the team requesting the tool and the vendor itself, address questions such as:

  • Who will access the AI tool? Is it limited to certain individuals or teams? Will contractors, partners, and/or customers have access?
  • What individuals and companies have access to prompts submitted to the tool? Does the AI feature rely on a third party, a model provider, or a local model?
  • Does the AI tool consume or in any way use external input? What would happen if prompt injection payloads were inserted into them? What impact could that have?
  • Can the tool take consequential actions, such as changes to files, users, or other objects?
  • Does the AI tool have any features with the potential for traditional vulnerabilities to occur (such as SSRF, IDOR, and XSS mentioned above)? For example, is the prompt or output rendered where XSS might be possible? Does web fetching functionality allow hitting internal hosts or cloud metadata IP?

AppOmni, a SaaS security vendor, has published a series of CISO Guides to AI Security that provide more detailed vendor assessment questions along with insights into the opportunities and threats AI tools present.

5. Build Relationships and Make Your Team (and Your Policies) Accessible

CISOs, security teams, and other guardians of AI and SaaS security must present themselves as partners in navigating AI to business leaders and their teams. The principles of how CISOs make security a business priority break down to strong relationships, communication, and accessible guidelines.

Showing the impact of AI-related data leaks and breaches in terms of dollars and opportunities lost makes cyber risks resonate with business teams. This improved communication is critical, but it’s only one step. You may also need to adjust how your team works with the business.

Whether you opt for application or data allow lists — or a combination of both — ensure these guidelines are clearly written and readily available (and promoted). When employees know what data is allowed into an LLM, or which approved vendors they can choose for AI tools, your team is far more likely to be viewed as empowering, not halting, progress. If leaders or employees request AI tools that fall out of bounds, start the conversation with what they’re trying to accomplish and their goals. When they see you’re interested in their perspective and needs, they’re more willing to partner with you on the appropriate AI tool than go rogue with an indie AI vendor.

The best odds for keeping your SaaS stack secure from AI tools over the long term is creating an environment where the business sees your team as a resource, not a roadblock.

Found this article interesting? Follow us on Twitter and LinkedIn to read more exclusive content we post.

The Hacker News: Latest News

APT-K-47 Uses Hajj-Themed Lures to Deliver Advanced Asyncshell Malware