AI Could Generate 10,000 Malware Variants, Evading Detection in 88% of Cases
Cybersecurity researchers have found that it’s possible to use large language models (LLMs) to generate new variants of malicious JavaScript code at scale in a manner that can better evade detection. “Although LLMs struggle to create malware from scratch, criminals can easily use them to rewrite or obfuscate existing malware, making it harder to detect,” Palo Alto Networks Unit 42 researchers said.