Russia’s AI-powered bot farm operates on X, US and its allies warn

Pierluigi Paganini July 11, 2024

The US and its allies disrupted an AI-powered, Russia-linked bot farm that operated on the social media platform X and relied on the Meliorator AI software.

The U.S. FBI and Cyber National Mission Force, along with Dutch and Canadian intelligence and security agencies, warned social media companies about Russian state-sponsored actors using covert AI software, Meliorator, in disinformation campaigns. Affiliates of Russia’s media organization RT used Meliorator to create fake online personas to spread disinformation on X. The campaigns targeted various countries, including the U.S., Poland, Germany, the Netherlands, Spain, Ukraine, and Israel.

“Although the tool was only identified on X, the authoring organizations’ analysis of Meliorator indicated the developers intended to expand its functionality to other social media platforms.” reads the report. “The authoring organizations’ analysis also indicated the tool is capable of the following:

  • Creating authentic appearing social media personas en masse;
  • Deploying content similar to typical social media users;
  • Mirroring disinformation of other bot personas;
  • Perpetuating the use of pre-existing false narratives to amplify malign foreign influence; and
  • Formulating messages, to include the topic and framing, based on the specific archetype of the bot.”

As early as 2022, RT had access to the AI-powered bot farm generation and management software Meliorator. By June 2024, it was operational only on X (formerly Twitter), with plans to expand to other platforms. The software includes an admin panel called “Brigadir” and a seeding tool named “Taras,” and is accessed via a virtual network computing (VNC) connection. Developers managed Meliorator using Redmine software, hosted at dtxt.mlrtr[.]com.

The identities (also called “souls”) of these bots are determined by selecting specific parameters or archetypes; the experts said that any unselected fields are auto-generated. Bot archetypes group ideologically aligned bots through an algorithm that constructs each bot’s persona, including location, political ideologies, and biographical data. Taras creates these identities and the AI software registers them on social media platforms. The identities are stored in a MongoDB database, enabling ad hoc queries, indexing, load-balancing, aggregation, and server-side JavaScript execution.
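For illustration only, the sketch below shows how persona records of this kind could be stored and queried in MongoDB. The collection layout, field names, and values are assumptions made for the example and are not drawn from the advisory.

```python
# Minimal, hypothetical sketch of storing bot personas ("souls") in MongoDB.
# Collection name, field names, and values are assumptions for illustration;
# none of them are taken from the advisory. Assumes a local MongoDB instance.
from pymongo import ASCENDING, MongoClient

client = MongoClient("mongodb://localhost:27017")
souls = client["example_db"]["souls"]

# A persona document built from an archetype: location, ideology, bio data.
souls.insert_one({
    "archetype": "example_archetype",
    "location": "example_city",
    "political_ideology": "example_ideology",
    "bio": "auto-generated biographical text",
    "platform": "X",
})

# The capabilities the advisory highlights: indexing and ad hoc queries.
souls.create_index([("archetype", ASCENDING)])
for doc in souls.find({"archetype": "example_archetype"}):
    print(doc["location"], doc["bio"])
```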

Meliorator manages automated scenarios or actions for a soul or group of souls through the “thoughts” tab. The software can instruct personas to like, share, repost, and comment on others’ posts, including videos or links. It also allows for maintenance tasks, creating new registrations, and logging into existing profiles.
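As a rough illustration of what such a scenario might look like as plain data, the snippet below defines a hypothetical “thought” that applies a set of actions to a group of souls. The structure and every field name are invented for the example and do not reflect the tool’s actual format.

```python
# Hypothetical representation of an automated scenario ("thought") applied to
# a group of souls. Every field name and value here is an assumption made for
# illustration, not the tool's real schema.
thought = {
    "name": "example_scenario",
    "souls": ["soul_001", "soul_002"],                  # personas the scenario targets
    "actions": [
        {"type": "like",    "target": "post"},
        {"type": "repost",  "target": "post"},
        {"type": "comment", "target": "post", "text": "example reaction"},
    ],
    "maintenance": {"login_existing": True, "register_new": False},
}

# A scheduler could iterate over the actions and dispatch them per soul.
for soul in thought["souls"]:
    for action in thought["actions"]:
        print(f"{soul}: {action['type']} -> {action['target']}")
```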

“The creators of the Meliorator tool considered a number of barriers to detection and attempted to mitigate those barriers by coding within the tool the ability to obfuscate their IP, bypass dual factor authentication, and change the user agent string.” continues the report. “Operators avoid detection by using a backend code designed to auto-assign a proxy IP address to the AI generated persona based on their assumed location.”

The report also details the infrastructure associated with the bot farm and provides mitigations.


(SecurityAffairs – hacking, disinformation)


