The Rise of AI-Generated Malware: A New Threat Landscape
HP has recently uncovered an email campaign in which malware was delivered by an AI-generated dropper, a finding researchers view as a step toward genuinely novel AI-crafted malware payloads.
Unveiling the Threat
In June 2024, HP identified a phishing email disguised as an invoice that carried an encrypted HTML attachment, a form of HTML smuggling designed to slip past detection. While phishing attacks using encrypted files are not unusual, the method employed in this instance caught the researchers' attention. Patrick Schlapfer, principal threat researcher at HP, noted that the AES decryption key was embedded in JavaScript inside the attachment itself, rather than following the more typical approach of sending a pre-encrypted archive file.
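To make the mechanics concrete, the sketch below shows how an analyst might replay that in-browser decryption step offline once the key material has been recovered from the attachment's JavaScript. It is a minimal illustration in Python using the pycryptodome library; the cipher mode, key size, and file names are assumptions, since HP has not published those specifics.

```python
# Minimal analyst-side sketch: decrypt an HTML-smuggled payload once the AES
# key and IV have been pulled out of the attachment's JavaScript.
# ASSUMPTIONS: CBC mode, PKCS#7 padding, and the file names below are
# illustrative; the real campaign's parameters were not disclosed.
from Crypto.Cipher import AES          # pip install pycryptodome
from Crypto.Util.Padding import unpad


def decrypt_smuggled_payload(ciphertext: bytes, key: bytes, iv: bytes) -> bytes:
    """Reverse the decryption step the smuggled JavaScript performs in the browser."""
    cipher = AES.new(key, AES.MODE_CBC, iv)
    return unpad(cipher.decrypt(ciphertext), AES.block_size)


if __name__ == "__main__":
    with open("attachment_payload.bin", "rb") as f:   # extracted ciphertext blob
        blob = f.read()

    # Key and IV recovered by reading the attachment's JavaScript (hypothetical values).
    key = bytes.fromhex("00" * 32)
    iv = bytes.fromhex("00" * 16)

    with open("decrypted_payload.html", "wb") as out:
        out.write(decrypt_smuggled_payload(blob, key, iv))
```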
Once decrypted, the attachment resembled a website but contained a VBScript together with the widely used AsyncRAT infostealer. The VBScript acted as a dropper: it wrote variables to the registry, planted a JavaScript file that was executed as a scheduled task, and ultimately generated a PowerShell script that launched the AsyncRAT payload.
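That persistence chain, registry writes, a scheduled task running a JavaScript file, and a PowerShell stage, leaves artifacts that are straightforward to hunt for. The Python sketch below illustrates the idea on a Windows host; it is not HP's tooling, and the per-user Run key is only one illustrative persistence location, since the exact registry paths used by this dropper have not been disclosed.

```python
# Triage sketch for a Windows host: surface scheduled tasks that launch a script
# host or PowerShell, and list per-user Run key entries.
# ASSUMPTIONS: Python 3 on Windows; schtasks column names assume an
# English-language installation.
import csv
import io
import subprocess
import winreg

SUSPICIOUS = ("wscript", "cscript", "powershell", ".js")


def scheduled_tasks_of_interest():
    """Yield (task name, action) for tasks whose action invokes a script host or PowerShell."""
    output = subprocess.run(
        ["schtasks", "/query", "/fo", "CSV", "/v"],
        capture_output=True, text=True, check=True,
    ).stdout
    for row in csv.DictReader(io.StringIO(output)):
        if row.get("TaskName") == "TaskName":   # skip repeated header rows
            continue
        action = (row.get("Task To Run") or "").lower()
        if any(token in action for token in SUSPICIOUS):
            yield row.get("TaskName"), action


def run_key_entries():
    """List values under the per-user Run key, one common dropper persistence spot."""
    path = r"Software\Microsoft\Windows\CurrentVersion\Run"
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, path) as key:
        index = 0
        while True:
            try:
                name, value, _ = winreg.EnumValue(key, index)
            except OSError:                     # no more values
                break
            yield name, value
            index += 1


if __name__ == "__main__":
    for task_name, action in scheduled_tasks_of_interest():
        print(f"[scheduled task] {task_name}: {action}")
    for name, value in run_key_entries():
        print(f"[run key] {name} = {value}")
```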
Notable Features of the Malware
While many aspects of this attack align with standard malware behavior, one detail stood out: the VBScript was exceptionally well organized, with comments explaining each significant command. Schlapfer pointed out that such clarity is rare in malware, which is typically obfuscated and stripped of comments. Adding to the oddity, the script was written in French, an uncommon choice among malware developers.
This led researchers to speculate that the script may have been generated by an AI rather than crafted by a human coder. To test this theory, they used their own generative AI to produce a similar script, which yielded results with a comparable structure and commentary. Although this doesn’t constitute definitive proof, researchers are confident that this dropper malware was indeed AI-generated.
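One way to operationalize that observation is a simple comment-density check: scripts that arrive heavily commented and neatly structured deviate from the norm for droppers and may deserve a closer look. The heuristic below is purely illustrative, not HP's methodology, and its threshold is an arbitrary assumption.

```python
# Illustrative heuristic: measure how densely a script is commented. Unusually
# thorough comments were the clue that pointed researchers toward an AI author.
# ASSUMPTION: the 30% cutoff below is arbitrary, chosen only for demonstration.
def comment_density(script_text: str,
                    comment_prefixes=("'", "REM ", "//", "#")) -> float:
    """Return the fraction of non-blank lines that begin with a comment marker."""
    lines = [line.strip() for line in script_text.splitlines() if line.strip()]
    if not lines:
        return 0.0
    prefixes = tuple(p.upper() for p in comment_prefixes)
    commented = sum(1 for line in lines if line.upper().startswith(prefixes))
    return commented / len(lines)


if __name__ == "__main__":
    with open("sample.vbs", encoding="utf-8", errors="replace") as f:
        density = comment_density(f.read())
    flag = "(unusually well documented)" if density > 0.3 else ""
    print(f"comment density: {density:.0%} {flag}")
```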
The Implications for Cybercrime
This discovery prompts several critical questions. Why was the malware left unobfuscated? Why were the comments not removed? Such choices suggest that the attacker may be a newcomer to the world of cybercrime, leveraging generative AI to streamline their efforts. Alex Holland, co-lead principal threat researcher, emphasized that this attack requires minimal resources: AsyncRAT is freely available, HTML smuggling requires little expertise, and only one command-and-control server is necessary for managing the infostealer.
This analysis reinforces the view that the attacker is likely inexperienced, having left explicit clues to the script's AI origins. Without those comments, it would have been nearly impossible to ascertain that the script was AI-generated.
The Broader AI Threat Landscape
The critical question arises: if this low-level attack was conducted by an inexperienced adversary, could more skilled attackers be using AI without leaving such obvious traces? The answer appears to be affirmative, and it’s plausible that seasoned criminals are harnessing generative AI in ways that remain undetectable.
Holland remarked, “We’ve long suspected that generative AI could be employed to create malware. Now we have tangible evidence that criminals are using AI in real-world attacks.” This marks a significant step toward the anticipated emergence of new AI-generated payloads that extend beyond basic droppers.
Looking to the Future
As generative AI technology continues to advance rapidly, predicting the timeline for more sophisticated AI-generated malware becomes increasingly complex. Holland speculates that we may witness these developments within the next couple of years.