
AI Can Generate Over 10,000 Malware Variants, Evading Detection in 88% of Cases


AI's Role in Malware Generation

Cybersecurity researchers have uncovered a concerning use of large language models (LLMs): generating thousands of malicious JavaScript variants at scale, making them significantly harder to detect.


"Although LLMs struggle to craft malware entirely from scratch, cybercriminals can use them to rewrite or obfuscate existing malware, enhancing its ability to evade detection," researchers at Palo Alto Networks Unit 42 revealed in a recent analysis. "By prompting LLMs to perform more natural-looking transformations, the resulting malware becomes even harder to identify."


This iterative process not only creates natural-looking malicious code but also degrades the effectiveness of malware classification systems. Over time, these systems may misclassify malicious code as benign, significantly reducing detection accuracy.


Evasive Tactics: The LLM Advantage

Despite security guardrails implemented by LLM providers to prevent misuse, cybercriminals continue to exploit these tools. Notably, malicious platforms like WormGPT are being marketed to automate phishing campaigns and generate sophisticated malware.


Unit 42 demonstrated how LLMs could rewrite existing malware samples using various techniques to bypass machine learning-based detection models such as Innocent Until Proven Guilty (IUPG) or PhishingJS. Through these methods, researchers successfully generated over 10,000 JavaScript variants while preserving the original functionality of the code.


Key transformation techniques include (see the sketch after this list):

  • Variable Renaming: Changing variable names to obscure intent.
  • String Splitting: Breaking strings into smaller, less obvious segments.
  • Junk Code Insertion: Adding meaningless code to confuse analyzers.
  • Whitespace Removal: Stripping unnecessary spaces and line breaks, changing the code's surface appearance.
  • Code Reimplementation: Completely restructuring the code while retaining its behavior.
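
To make these techniques concrete, here is a minimal before-and-after sketch in TypeScript. It is illustrative only: the "payload" is a harmless stand-in, and every name is invented rather than taken from Unit 42's samples (whitespace removal is omitted for readability).

```typescript
// Original form: intent is easy to spot and fingerprint.
function exfiltrate(data: string): string {
  return "https://collector.example/upload?d=" + data;
}

// Rewritten form: identical behavior after variable renaming, string
// splitting, junk code insertion, and partial reimplementation.
function q1(a: string): string {
  const t = [1, 2, 3].map((n) => n * 0); // junk code: result never used
  void t;
  const p0 = "https://" + "coll" + "ector.example"; // string splitting
  const p1 = ["/up", "load", "?d="].join("");
  return `${p0}${p1}${a}`; // reimplemented concatenation
}
```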


The result? Malicious JavaScript that appears benign to detection tools, with the rewrites flipping malware classifiers’ verdicts from malicious to benign 88% of the time.
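
Conceptually, this verdict flipping is a rewrite-and-test loop. The sketch below assumes two hypothetical helpers, rewriteWithLLM and classify; Unit 42 has not published its pipeline, and no real LLM or classifier is wired up here.

```typescript
type Verdict = "malicious" | "benign";

// Hypothetical stand-ins for an LLM rewriting step and a detection model.
declare function rewriteWithLLM(source: string): Promise<string>;
declare function classify(source: string): Promise<Verdict>;

// Rewrite the sample until the classifier's verdict flips, or give up.
async function flipVerdict(sample: string, maxRounds = 10): Promise<string | null> {
  let current = sample;
  for (let round = 0; round < maxRounds; round++) {
    current = await rewriteWithLLM(current); // natural-looking transformation
    if ((await classify(current)) === "benign") {
      return current; // verdict flipped: malicious to benign
    }
  }
  return null; // every variant was still flagged
}
```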


These variants also evade detection on platforms like VirusTotal, making them a significant threat. Furthermore, LLM-based obfuscation produces more natural-looking rewrites than tools like obfuscator.io, whose output is easier to detect and fingerprint.


Leveraging AI for Cyber Defense

Interestingly, while LLMs pose a threat, they also offer opportunities for defense. Unit 42 highlighted that the same AI-driven obfuscation methods could be leveraged to generate training data, ultimately improving the robustness of malware detection models.
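
A minimal sketch of that defensive idea, reusing the same hypothetical rewriteWithLLM helper to expand a labeled training corpus with obfuscated variants:

```typescript
declare function rewriteWithLLM(source: string): Promise<string>; // assumed helper

interface LabeledSample {
  source: string;
  label: "malicious" | "benign";
}

// Expand a labeled corpus with LLM-rewritten variants; each variant
// inherits the label of the sample it was derived from.
async function augmentTrainingSet(
  samples: LabeledSample[],
  variantsPerSample = 5
): Promise<LabeledSample[]> {
  const augmented = [...samples];
  for (const s of samples) {
    for (let i = 0; i < variantsPerSample; i++) {
      augmented.push({ source: await rewriteWithLLM(s.source), label: s.label });
    }
  }
  return augmented;
}
```

Retraining a detector on the augmented set exposes it to the same natural-looking transformations attackers would use, which is what the researchers suggest could harden real-world classification.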


"The scale of new malicious code variants could rise with generative AI, but these techniques could also enhance machine learning models’ resilience," the researchers noted.


Emerging Threats: TPUXtract Attack

In another alarming disclosure, researchers from North Carolina State University unveiled a side-channel attack dubbed TPUXtract. The method lets attackers extract the hyperparameters of machine learning models running on Google Edge Tensor Processing Units (TPUs) with 99.91% accuracy.


The TPUXtract attack targets the electromagnetic signals emitted during neural network computations to infer model hyperparameters. This includes sensitive configurations like layer types, number of nodes, kernel sizes, and activation functions.
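
At a very high level, this kind of extraction can be pictured as matching an observed trace against pre-recorded signatures of candidate layer configurations. The sketch below is a conceptual illustration only; the correlation-based matching and every name in it are assumptions, not the researchers' actual method, which involves far more signal processing.

```typescript
interface LayerCandidate {
  description: string; // e.g. "conv2d, 3x3 kernel, ReLU"
  signature: number[]; // pre-recorded EM trace for this configuration
}

// Pearson correlation between two equal-length traces.
function correlation(a: number[], b: number[]): number {
  const n = a.length;
  const mean = (xs: number[]) => xs.reduce((s, x) => s + x, 0) / n;
  const ma = mean(a);
  const mb = mean(b);
  let cov = 0;
  let va = 0;
  let vb = 0;
  for (let i = 0; i < n; i++) {
    cov += (a[i] - ma) * (b[i] - mb);
    va += (a[i] - ma) ** 2;
    vb += (b[i] - mb) ** 2;
  }
  return cov / Math.sqrt(va * vb);
}

// Choose the candidate whose signature best matches the observed trace.
function inferLayer(trace: number[], candidates: LayerCandidate[]): LayerCandidate {
  return candidates.reduce((best, c) =>
    correlation(trace, c.signature) > correlation(trace, best.signature) ? c : best
  );
}
```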


“Most notably, this is the first comprehensive attack capable of extracting previously unseen models,” the researchers emphasized.


The implications are severe. By stealing AI architecture, adversaries could recreate functional replicas or close surrogates of proprietary models, facilitating intellectual property theft or enabling downstream cyberattacks. However, the attack requires physical access to the target device and expensive equipment to capture electromagnetic traces, limiting its practicality.


Conclusion

The rise of generative AI has created a double-edged sword for cybersecurity. While it empowers defenders with tools to enhance detection, it also gives adversaries unprecedented capabilities to craft sophisticated malware and launch innovative attacks. As AI continues to evolve, the cybersecurity community must remain vigilant and proactive to counter these emerging threats.