8 Tips on Leveraging AI Tools Without Compromising Security

AI tools can deliver quick and easy results and offer huge business benefits — but they also bring hidden risks.

DARKReading

COMMENTARY

Advances in artificial intelligence (AI) and machine learning (ML) technologies continue to receive significant news coverage, both for the benefits they promise and for the dangers they could unleash. Forecasts such as the Nielsen Norman Group's estimate that AI tools may improve an employee's productivity by 66% have companies everywhere eager to adopt these tools immediately. Other experts warn that AI usage carries negative consequences, including inaccurate results, data leaks, and data theft. So, how can companies employ these powerful AI/ML tools without compromising their security? Below, we'll touch on the risks AI tools can present and offer eight tips to help your company leverage these tools as securely as possible.

The Risks of AI Tools

In general, AI/ML is simply statistics at scale. All AI models rely on data to statistically generate results for their focus area. Therefore, most risks revolve around that data, especially when it is confidential or sensitive. Companies using external AI/ML tools run the risk that the vendor will use their data to train its algorithms, accidentally leak it, or even steal their intellectual property. Additionally, an AI tool trained on bad data will generate inaccurate results. Fortunately, many AI tools are quite powerful and, with a few precautions, can be used securely.

Tip 1: Remember, if it’s free, you are the product.

If you use a free AI tool or service, you should assume it may leverage the data you provide. Many early AI services and tools, including ChatGPT, employ a usage model similar to social media services like Facebook and TikTok. While you don't pay money to use those sites, you share your private data, which those companies use to target ads and monetize. Similarly, a free AI service can gather data from your devices and retain your prompts, which it uses to train its model. That may not seem malicious, but you never know how an AI service will monetize your data, and if that company gets breached, threat actors may gain access to it.

Tip 2: Have legal review agreements.

To learn how an AI tool or service will handle your data, read the vendor's end-user license agreement, master service agreement, terms and conditions, and privacy policy. These lengthy, complex documents often use tricky language to obscure how the vendor might use your data, so company lawyers should review them. Your legal department knows the regulatory requirements for safeguarding your sensitive data, including personally identifiable information. It can interpret these documents capably and alert you if any terms put your data at risk. A data privacy officer who specializes in the General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), or other privacy regulations can also help review these agreements.

Tip 3: Guard internal data.

A simple risk of using external AI tools is that an employee, intentionally or not, could share sensitive or confidential data with the tool, exposing it to misuse. To mitigate this threat, companies must apply least privilege principles to who can access confidential or sensitive data. Unfortunately, some organizations don't adequately block employees from accessing corporate data their roles don't require. If you limit employees to only the data they need, you minimize the amount they could divulge to an external AI tool.

Tip 4: Review settings and consider paying for privacy.

Whether AI tools or services are free or paid, they often have settings for added privacy. Some tools can be configured not to store your prompt data. Review your AI tool's privacy settings and configure them to your preferences. Also, while free tools typically reserve the right to use your data, you can pay licensing fees for enterprise versions that offer more protection. These versions often commit to not using your data for training and to keeping it segmented.

Tip 5: Validate the security of AI tool vendors.

Vendor and supply chain security validation should be a default practice for any new external tool or service partner you adopt. Digital supply chain breaches prove that vendors can become the weakest link in your security. That's why third-party risk management (TPRM) products and processes are crucial to information security, and why a formal TPRM process is vital when adopting AI tools and service partners.

Tip 6: Leverage local open source AI frameworks and tools.

The accessibility of online, external AI tools that are free and user-friendly brings risk. To ensure data privacy, consider avoiding external tools altogether. Instead, you can deploy free, open source AI frameworks and tools yourself and keep all data in-house. These frameworks require more work, as well as AI and data science expertise, so they carry their own cost to use.
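As a minimal sketch of what "local" can mean, the snippet below runs a small open source language model entirely on your own hardware using the Hugging Face transformers library, so prompts and outputs never leave your environment. The model choice (distilgpt2) and the prompt are purely illustrative assumptions; any open model that fits your use case and hardware would work.

```python
# Minimal local-inference sketch using the open source Hugging Face
# "transformers" library. The model weights are downloaded once; after
# that, all prompts and generated text stay on your own hardware.
from transformers import pipeline

# "distilgpt2" is only an illustrative, lightweight open model.
generator = pipeline("text-generation", model="distilgpt2")

prompt = "Draft a short reminder about our internal data-handling policy:"
result = generator(prompt, max_new_tokens=60, num_return_sequences=1)

print(result[0]["generated_text"])
```

The trade-off is control for convenience: you own the infrastructure, updates, and tuning, but no third party ever sees your data.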

Tip 7: Track your company’s AI usage.

It's difficult to track every asset, software-as-a-service (SaaS) tool, or data store your employees use, because the cloud and web services have empowered everyone to adopt web-based tools on their own. The same goes for AI-based SaaS offerings. To monitor which external AI tools your employees use, how, and with what data, audit and document AI usage as part of the buying process. By identifying all the internal and external AI tools in use, you can better understand the risks you face.
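As one hedged illustration of such an audit, the sketch below scans an exported web-proxy log for connections to well-known AI SaaS domains and summarizes which users contacted them. The file name, column names, and domain list are all assumptions; adapt them to whatever your proxy, firewall, or CASB actually exports.

```python
# Illustrative audit sketch: summarize which users reached known AI SaaS
# domains, based on a CSV proxy-log export with "user" and "url" columns.
# The file name, columns, and domain list are assumptions, not a standard.
import csv
from collections import defaultdict
from urllib.parse import urlparse

AI_DOMAINS = {"chat.openai.com", "api.openai.com", "gemini.google.com", "claude.ai"}

usage = defaultdict(set)  # user -> AI domains that user contacted

with open("proxy_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        host = urlparse(row["url"]).hostname or ""
        if host in AI_DOMAINS:
            usage[row["user"]].add(host)

for user, domains in sorted(usage.items()):
    print(f"{user}: {', '.join(sorted(domains))}")
```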

Tip 8: Create an AI policy and raise risk awareness.

In security, everything starts with policy. The risks your company faces depend on your organization's specific mission and needs, and on the data you use. Companies that traffic in public data may face low data-sharing risk and can allow a permissive AI policy, while those that deal in confidential matters will need stringent rules around data use in external AI tools. Ultimately, you must craft an AI policy tailored to your company's needs. Once you have that policy, communicate it, along with the risks associated with AI tools, to your employees regularly.

AI tools can deliver quick and easy results and offer huge business benefits while also bringing hidden risks. Fortunately, by adopting a few precautions, you can leverage AI tools securely and be on your way to realizing the benefits they promise.

About the Author(s)

Corey Nachreiner, Chief Security Officer, WatchGuard Technologies

Corey Nachreiner is the chief security officer (CSO) of WatchGuard Technologies. Recognized as a thought leader in IT security, Nachreiner spearheads WatchGuard’s technology and security vision and direction. He has operated at the frontline of cybersecurity for 25 years, evaluating and making accurate predictions about information security trends. As an authority on network security and an internationally quoted commentator, Nachreiner’s expertise and ability to dissect complex security topics make him a sought-after speaker at forums such as Gartner, Infosec, and RSA. He is also a regular contributor to leading publications including CNET, Dark Reading, Forbes, Help Net Security, and more. Find him on www.secplicity.org.
