The rise of AI-powered criminals: Identifying threats and opportunities

Monday, August 14, 2023 08:08

  • AI’s influence is growing across the security space, bringing with it major implications for cybercriminals and defenders.
  • The recent adoption of AI has raised significant concerns for cybersecurity due to the many ways that criminals can use AI for disruption and profit.
  • Defenders and law enforcement can use AI to strengthen cybersecurity and counteract illicit activities.

The past decade has seen massive adoption of machine learning and artificial intelligence. An increasing number of organizations have been leveraging these technologies to automate their operations and improve their products and services.

Although organizations have used machine learning (ML) and artificial intelligence (AI) extensively for some time, many users first interacted with these technologies only in the past few months, in the form of generative AI that produces text, code, images and other digital assets from limited input. The likes of ChatGPT have brought AI to the top of the public’s mind, fueling an intensive race for AI development.

As with any innovation, the use of AI is expected to have both positive and negative effects on global culture as we know it, and I suspect that cybercrime will be one of the areas most affected. On the negative side, AI can help streamline criminals’ operations, making them more efficient, sophisticated and scalable while allowing them to evade detection and attribution. On the positive side, defenders and law enforcement can use AI to counteract advancements in illicit activity by developing new tools, tactics and strategies to automate data analysis, detect illicit activity predictively and attribute criminal activity more effectively.

It is important to acknowledge that the AI use cases discussed in this blog vary considerably in complexity for both criminals and defenders. Some can be accomplished with readily available AI-enabled tools, while others demand advanced technical skills, costly infrastructure and considerable time investments.

Empowering cybercrime

Cybercriminals are expected to benefit in many ways from advancements in machine learning and artificial intelligence.

A major area of impact of AI tools in cybercrime is the reduced need for human involvement in certain aspects of cybercriminal organizations, such as software development, scamming and extortion. This, in turn, decreases the need to recruit new members and lowers operational costs by reducing headcount. Crime-related “job” postings typically find their way onto dark web forums and other anonymous channels in an effort to preserve the authors’ anonymity, but the practice carries significant risk, as it could expose the identities and operations of criminals to whistleblowers and undercover law enforcement agents.

AI presents another avenue for cybercriminals to exploit: analyzing enormous amounts of information, including leaked data. This analysis empowers them to identify vulnerabilities or high-value targets, enabling more precise and effective attacks that could yield greater financial gains. Big data analytics is a complex undertaking that requires significant processing power, which limits its application to large criminal organizations and state-sponsored actors capable of harnessing such resources.

Another area of criminal activity that can thrive with AI is the development of more sophisticated phishing and social engineering attacks. This includes the creation of remarkably realistic deepfakes, deceitful websites, disinformation campaigns, fraudulent social media profiles and AI-powered scam bots. To illustrate the impact, consider an incident from 2020 in which an AI-powered voice-cloning attack successfully impersonated a CEO, resulting in the theft of more than $240,000 from a UK-based energy company. Similarly, in India, criminals employed a machine learning model to analyze and mimic the writing style of a victim’s email contacts to create highly personalized and persuasive phishing emails.

The use of AI is also anticipated to be prevalent among state-sponsored actors and prominent criminal organizations seeking to propagate disinformation and manipulate the public. Such tactics involve the creation and dissemination of deceptive content, including deepfakes, voice cloning and bot deployments. Evidence of such practices already exists: a cybercriminal group employed AI for social media manipulation and to spread disinformation about the COVID-19 pandemic, relying on machine learning to identify emerging trends and generate highly convincing fake news articles.

Malware development can also be affected, as AI allows authors to streamline the process and create more sophisticated and adaptable malware. AI-powered malware can employ advanced techniques to evade detection by security solutions, using “self-metamorphic” mechanisms that change its behavior based on the environment it operates in. Furthermore, criminals could harness AI to build AI-powered malware development kits, whose AI agents learn from the latest tools, tactics and procedures (TTPs) employed by malware authors and stay updated with the latest advancements in security. An example of AI-powered malware was demonstrated by the researchers behind DeepLocker, who showcased how AI can be used to enhance targeted attacks, triggering exploitation only when the intended target is present and evading detection by concealing the payload within benign applications.

Counteracting cybercrime

On the other side, cybersecurity professionals, defenders, and law enforcement agencies can harness the power of AI to counteract the advancements made in cybercrime. They can utilize AI to develop innovative tools, tactics, and strategies in their fight against malicious activities.

Areas such as threat detection and prevention will be at the forefront of AI security research. Many existing security tools rely solely on malicious signatures and user input, which renders them ineffective against advanced attacks. Consequently, an increasing number of vendors are turning to machine learning (ML) and AI technologies to achieve more precise and effective threat detection. Prominent examples include Cisco Secure Endpoint and Cisco Umbrella, which use advanced machine learning to detect and mitigate suspicious behavior in an automated manner on end hosts and networks, respectively. The inclusion of these technologies is likely to counter the AI-generated malware discussed above.
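
The following is a minimal, illustrative sketch of the behavior-based idea behind such tools, not an implementation of any Cisco product: an unsupervised anomaly detector (scikit-learn’s IsolationForest) is trained on a baseline of invented network-flow features and then flags flows that deviate from that baseline, rather than matching known signatures.

```python
# A minimal sketch of behavior-based detection: learn a baseline from "normal"
# network-flow features and flag deviations, instead of matching signatures.
# All feature values below are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical flow features: [bytes_sent, bytes_received, duration_s, distinct_ports]
normal_flows = np.column_stack([
    rng.normal(5_000, 1_500, 1_000),    # typical upload volume
    rng.normal(50_000, 10_000, 1_000),  # typical download volume
    rng.normal(30, 10, 1_000),          # typical session length
    rng.integers(1, 4, 1_000),          # few destination ports per flow
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_flows)

# A flow resembling data exfiltration: huge upload, tiny download, many ports.
suspicious = np.array([[900_000, 2_000, 600, 40]])
print(detector.predict(suspicious))  # -1 flags an anomaly, 1 means consistent with the baseline
```

In practice, the choice of features, the quality of the baseline and threshold tuning determine how useful such a detector is; the numbers here exist purely to show the workflow.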

Analyzing large amounts of data to identify indicators of compromise can be a tedious undertaking, consuming considerable time and money. One area that can benefit from AI is therefore incident response and forensics: the automated analysis of large volumes of logs, system images, network traffic and user behavior to identify indicators of compromise (IOCs) and adversarial activity. AI can help speed up investigations, identify patterns that may be difficult to detect manually and provide insights into the techniques and tools used by adversaries, allowing more companies globally to have incident response and forensic capabilities.
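
As a hedged sketch of what such automation can look like at its simplest, the example below combines two triage steps over invented log records: exact matching against a hypothetical IOC feed, and a frequency heuristic that surfaces rarely seen outbound activity for analyst review. Real incident-response tooling layers far richer models on top of this, but the two-stage structure is representative.

```python
# Illustrative log triage: (1) match entries against known IOCs,
# (2) surface rare process activity that may warrant analyst review.
# Hosts, processes and IPs (RFC 5737 documentation ranges) are made up.
from collections import Counter

known_bad_ips = {"203.0.113.7", "198.51.100.23"}  # hypothetical IOC feed
logs = [
    {"host": "web01", "process": "nginx",      "dst_ip": "192.0.2.10"},
    {"host": "web01", "process": "nginx",      "dst_ip": "192.0.2.10"},
    {"host": "hr03",  "process": "powershell", "dst_ip": "203.0.113.7"},
    {"host": "db02",  "process": "certutil",   "dst_ip": "198.51.100.99"},
]

# Stage 1: exact IOC hits.
ioc_hits = [e for e in logs if e["dst_ip"] in known_bad_ips]

# Stage 2: frequency analysis -- processes that rarely make outbound
# connections are promoted for review even without a known IOC.
process_counts = Counter(e["process"] for e in logs)
rare_activity = [e for e in logs if process_counts[e["process"]] == 1]

print("IOC matches:", ioc_hits)
print("Rare outbound activity:", rare_activity)
```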

Another potential use of AI by defenders and law enforcement alike is to enhance the attribution of criminal activity through the analysis of multiple data points, including attack signatures, malware characteristics and historical attack patterns, tools, tactics and procedures. By examining these data sets, AI can identify patterns and trends that help cybersecurity experts narrow down the potential origin of an attack. This attribution is valuable because it provides insights into the motives and capabilities of the attackers, allowing for a better understanding of their tactics and potential future threats. It also allows defenders to more accurately identify adversaries that attempt to evade identification through misleading attribution (e.g., using the techniques, methodologies and tools of another hacking group), an existing practice that defenders must consider when performing attribution. Such capabilities are primarily expected in the arsenal of state-affiliated cyber agencies as well as, at the corporate level, among threat intelligence providers.
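
A simple way to illustrate the pattern-matching side of attribution is to compare the set of MITRE ATT&CK techniques observed in an incident against historical profiles of known groups. The sketch below uses Jaccard similarity over invented technique sets; real attribution weighs far more evidence (infrastructure, malware lineage, timing, language artifacts) and should never rest on TTP overlap alone.

```python
# Hedged sketch of TTP-based attribution: rank hypothetical group profiles by
# how much their known ATT&CK technique sets overlap with what was observed.
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two sets (0.0 when both are empty)."""
    return len(a & b) / len(a | b) if a | b else 0.0

group_profiles = {  # illustrative, not real group profiles
    "GroupA": {"T1566", "T1059.001", "T1105", "T1027"},
    "GroupB": {"T1190", "T1505.003", "T1003", "T1021.002"},
}
observed = {"T1566", "T1059.001", "T1027", "T1486"}

scores = {name: jaccard(observed, ttps) for name, ttps in group_profiles.items()}
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: overlap {score:.2f}")
```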

ML algorithms and AI are set to see expanded use in automated analysis and threat identification. Through the automated analysis of security-related data from multiple sources, such as threat intelligence feeds, dark web monitoring and open-source intelligence, emerging threats can be identified and mitigated effectively. Cisco Talos has been leveraging AI for several years to automate threat intelligence operations, such as classifying similarly rendered web pages, identifying spoofing attempts through logo analysis, classifying phishing emails based on text analytics and analyzing binary similarities. Although existing work around emerging threats has proven highly effective, AI will advance the area further by allowing more automated data collection, analysis and correlation at a larger scale, facilitating the identification of patterns and trends that may signify new attack techniques or threat actors. This empowers cybersecurity professionals to respond proactively to emerging cyber threats by leveraging AI’s ability to process and interpret vast amounts of data swiftly and accurately.
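
As an illustration of the text-analytics approach to phishing classification mentioned above (and not a description of Talos’ internal models), the sketch below trains a TF-IDF plus logistic-regression pipeline on a handful of invented email bodies and then scores a new message.

```python
# Minimal text-based phishing classifier sketch: TF-IDF features over email
# bodies feeding a linear model. The emails and labels are toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your account has been suspended, verify your password immediately",
    "Urgent: confirm your banking details to avoid account closure",
    "Meeting notes attached from today's project sync",
    "Quarterly report draft for your review before Friday",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = benign (illustrative labels)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

# Expected to lean toward 1 (phishing) given the vocabulary overlap.
print(model.predict(["Please verify your password to restore account access"]))
```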

AI can also serve as a valuable tool for predictive analytics, enabling the anticipation of potential cyber threats and vulnerabilities based on historical data and patterns. By analyzing data from past attacks and adversaries, AI systems can identify common trends, patterns or groupings that may indicate or trigger future attacks. This capability empowers cybersecurity experts to take a more proactive stance on security, such as promptly patching vulnerabilities or implementing supplementary security controls, to mitigate potential risks before adversaries exploit them. Additionally, AI-driven predictive analytics allows for closer monitoring of adversaries’ activities, enabling experts to anticipate and prepare for new attacks. By leveraging AI in this manner, cybersecurity professionals can enhance their defenses and stay one step ahead of evolving threats. A sizable body of predictive cybercrime research already exists, highlighting how AI can practically support cybercrime research and how predictive analysis can be performed on social and economic factors using Bayesian and Markov models.
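
To make the Markov-style prediction concrete, the sketch below estimates stage-to-stage transition probabilities from a few invented attack sequences and uses them to suggest the most likely next adversary action; the sequences and resulting probabilities are illustrative only.

```python
# Illustrative Markov-chain sketch: estimate transition probabilities between
# attack stages from (invented) historical sequences, then predict what is
# likely to follow an observed stage.
from collections import defaultdict

sequences = [
    ["phishing", "credential_theft", "lateral_movement", "exfiltration"],
    ["phishing", "malware_dropper", "lateral_movement", "ransomware"],
    ["exploit_public_app", "webshell", "lateral_movement", "exfiltration"],
]

# Count observed stage-to-stage transitions.
transitions = defaultdict(lambda: defaultdict(int))
for seq in sequences:
    for current, nxt in zip(seq, seq[1:]):
        transitions[current][nxt] += 1

def predict_next(stage: str) -> dict:
    """Return the estimated probability of each next stage given the current one."""
    counts = transitions[stage]
    total = sum(counts.values())
    return {nxt: n / total for nxt, n in counts.items()} if total else {}

# With the toy data above: {'exfiltration': 0.666..., 'ransomware': 0.333...}
print(predict_next("lateral_movement"))
```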

The rise of AI presents new challenges and great opportunities as its user base and applications continue to expand. The effective and targeted utilization of AI-related technologies will play a pivotal role for cybersecurity experts and law enforcement agencies in detecting, defending against, and attributing digital criminal behavior. By harnessing the power of AI, these entities can enhance their capabilities in combating evolving threats and ensuring the security of digital ecosystems. As the landscape of cybercrime evolves, embracing AI will be instrumental in staying ahead of adversaries.
