Generative artificial intelligence malware used in phishing attacks

Pierluigi Paganini September 24, 2024

HP researchers detected a dropper that was generated by generative artificial intelligence services and used to deliver AsyncRAT malware.

While investigating a malicious email, HP researchers discovered malware generated with generative artificial intelligence services and used to deliver the AsyncRAT malware.

The AI-generated malware was discovered in June 2024. The phishing message used an invoice-themed lure and an encrypted HTML attachment that relied on HTML smuggling to avoid detection. The encryption method stood out because the attacker embedded the AES decryption key in JavaScript within the attachment itself, which is unusual. Upon decryption, the attachment mimics a website but contains VBScript that acts as a dropper for the AsyncRAT remote access trojan. The VBScript modifies the Registry, drops a JavaScript file that is executed as a scheduled task, and creates a PowerShell script that triggers the AsyncRAT payload.
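Because the decryption key ships inside the attachment itself, an analyst who obtains the email can recover the smuggled payload offline. The following minimal Python sketch illustrates that analyst-side step under stated assumptions: the key, IV and encrypted blob have already been copied out of the attachment's JavaScript into local files, and the file names and the AES-CBC mode are hypothetical, since the report does not specify them.

    # Analyst-side sketch: decrypt an HTML-smuggled payload offline once the
    # AES key and IV embedded in the attachment's own JavaScript are known.
    # File names and the AES-CBC mode are assumptions for illustration.
    from base64 import b64decode
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def read_b64(path: str) -> bytes:
        with open(path) as f:
            return b64decode(f.read())

    key = read_b64("extracted_key.b64")          # key copied from the script
    iv = read_b64("extracted_iv.b64")            # IV copied from the script
    ciphertext = read_b64("extracted_blob.b64")  # the smuggled encrypted blob

    decryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    padded = decryptor.update(ciphertext) + decryptor.finalize()
    payload = padded[: -padded[-1]]              # strip PKCS#7 padding

    with open("recovered_attachment.html", "wb") as f:
        f.write(payload)

This is also why embedding the key is a curious choice: once the attachment is in a defender's hands, the encryption offers no real protection against analysis.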

The analysis of the code revealed that the threat actors behind the campaign had commented almost the entire code. This is unusual among malware authors, who typically aim to make the analysis of their malicious code as difficult as possible.

“Interestingly, when we analyzed the VBScript and the JavaScript, we were surprised to find that the code was not obfuscated. In fact, the attacker had left comments throughout the code, describing what each line does, even for simple functions. Genuine code comments in malware are rare because attackers want to make their malware as difficult to understand as possible,” reads HP’s Threat Insights Report for Q2 2024. “Based on the scripts’ structure, consistent comments for each function and the choice of function names and variables, we think it’s highly likely that the attacker used GenAI to develop these scripts (T1588.007). The activity shows how GenAI is accelerating attacks and lowering the bar for cybercriminals to infect endpoints.”


Threat actors have been using generative AI to craft phishing lures, but its use in creating malicious code has been rare. The case described by HP highlights how generative artificial intelligence is accelerating cyberattacks and making it easier for criminals to develop malware.

“The scripts’ structure, comments and choice of function names and variables were strong clues that the threat actor used GenAI to create the malware,” concludes the report. “The activity shows how GenAI is accelerating attacks and lowering the bar for cybercriminals to infect endpoints.”


(SecurityAffairs – hacking, generative artificial intelligence malware)


