The Irish Data Protection Commission (DPC) has fined Meta Platforms Ireland Limited (MPIL) €91 million ($100 million) for storing the passwords of hundreds of millions of users in plaintext, violating data protection regulations.
In 2019, Meta disclosed that it had inadvertently stored some users’ passwords in plaintext on its internal systems, without encrypting them.
“As part of a routine security review in January, we found that some user passwords were being stored in a readable format within our internal data storage systems. This caught our attention because our login systems are designed to mask passwords using techniques that make them unreadable,” the company said at the time. “We have fixed these issues and as a precaution we will be notifying everyone whose passwords we have found were stored in this way.”
The company pointed out that these passwords were only visible to people inside Facebook and said it found no evidence that anyone internally abused or improperly accessed them.
Meta estimated that the incident impacted hundreds of millions of Facebook Lite users, tens of millions of other Facebook users, and tens of thousands of Instagram users.
Facebook Lite is a simplified version of Facebook primarily used in regions with limited internet connectivity.
The social media giant reported the incident to the Irish Data Protection Commission (DPC), which launched an investigation into the company’s data storage practices in April 2019.
“The Data Protection Commission (DPC) has today announced its final decision following an inquiry into Meta Platforms Ireland Limited (MPIL). This inquiry was launched in April 2019, after MPIL notified the DPC that it had inadvertently stored certain passwords of social media users in ‘plaintext’ on its internal systems (i.e. without cryptographic protection or encryption).” reads DPC’s statement.
“The DPC submitted a draft decision to the other Concerned Supervisory Authorities across the EU/EEA in June 2024, as required under Article 60 of the GDPR. No objections to the draft decision were raised by the other authorities. The decision, which was made by the Commissioners for Data Protection, Dr. Des Hogan and Dale Sunderland, and notified to MPIL yesterday, September 26, includes a reprimand and a fine of €91 million.”
The Irish Data Protection Commission (DPC) stated that it will release its full decision and additional details about the incident at a later date.
“It is widely accepted that user passwords should not be stored in plaintext, considering the risks of abuse that arise from persons accessing such data. It must be borne in mind, that the passwords the subject of consideration in this case, are particularly sensitive, as they would enable access to users’ social media accounts.” said Deputy Commissioner at the DPC, Graham Doyle.
Follow me on Twitter and Mastodon
(SecurityAffairs – hacking)
Pavel Durov, the founder and CEO of Telegram, was arrested at Bourget airport near Paris on Saturday evening. According to the media, the arrest is linked to a French investigation into the lack of content moderators on Telegram, a situation which authorities believe has advantaged criminal activity.
“Durov was travelling aboard his private jet, TF1 said on its website, adding he had been targeted by an arrest warrant in France as part of a preliminary police investigation.” reads the report. “TF1 and BFM both said the investigation was focused on a lack of moderators on Telegram, and that police considered that this situation allowed criminal activity to go on undeterred on the messaging app.”
Durov may face indictment as the investigation continues.
According to authorities, the lack of moderation allowed several kinds of criminal activity, including child pornography, money laundering, fraud, and drug trafficking.
Over the years, Telegram has become the privileged communication channel for cybercriminals and other threat actors. Telegram and the French Interior Ministry have not yet commented on the news.
Telegram Messenger is a cloud-based, cross-platform instant messaging service launched in 2013 for iOS and Android. It allows users to exchange messages, share media, and hold voice or video calls, with features like end-to-end encryption for voice calls and optional Secret Chats.
Telegram also supports social networking, enabling large public groups, channels for one-way updates, and the ability to post stories.
Founded by Nikolai and Pavel Durov, Telegram’s headquarters are in Dubai, UAE, with servers distributed globally. As of July 2024, Telegram has over 950 million monthly active users, with India leading in user numbers. The app was the most downloaded worldwide in January 2021 and reached 1 billion downloads by August 2021.
The U.S. FBI and Cyber National Mission Force, along with Dutch and Canadian intelligence and security agencies, warned social media companies about Russian state-sponsored actors using covert AI software, Meliorator, in disinformation campaigns. Affiliates of Russia’s media organization RT used Meliorator to create fake online personas to spread disinformation on X. The campaigns targeted various countries, including the U.S., Poland, Germany, the Netherlands, Spain, Ukraine, and Israel.
“Although the tool was only identified on X, the authoring organizations’ analysis of Meliorator indicated the developers intended to expand its functionality to other social media platforms.” reads the joint advisory. “The authoring organizations’ analysis also indicated the tool is capable of the following:”
As early as 2022, RT had access to the AI-powered bot farm generation and management software Meliorator. By June 2024, it was operational only on X (formerly Twitter), with plans to expand to other platforms. The software includes an admin panel called “Brigadir” and a seeding tool named “Taras,” and is accessed via a virtual network computing (VNC) connection. Developers managed Meliorator using Redmine software, hosted at dtxt.mlrtr[.]com.
The identities (also called “souls”) of these bots are determined by selecting specific parameters or archetypes. The experts said that any unselected fields are auto-generated. Bot archetypes group ideologically aligned bots through an algorithm that constructs each bot’s persona, including location, political ideologies, and biographical data. Taras creates these identities and the AI software registers them on social media platforms. The identities are stored in a MongoDB database, enabling ad hoc queries, indexing, load-balancing, aggregation, and server-side JavaScript execution.
Meliorator manages automated scenarios or actions for a soul or group of souls through the “thoughts” tab. The software can instruct personas to like, share, repost, and comment on others’ posts, including videos or links. It also allows for maintenance tasks, creating new registrations, and logging into existing profiles.
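The advisory does not publish the bot database schema, so the sketch below is purely a hypothetical illustration of how such MongoDB-style persona documents and an ad hoc exact-match query might look. All field names, handles, and values are invented for the example.

```python
# Hypothetical persona ("soul") documents, modeled on the advisory's description
# of archetype, location, and biographical fields. Every value here is invented.
souls = [
    {"handle": "user_001", "archetype": "local_news_fan", "location": "US", "language": "en"},
    {"handle": "user_002", "archetype": "local_news_fan", "location": "DE", "language": "de"},
    {"handle": "user_003", "archetype": "crypto_skeptic", "location": "US", "language": "en"},
]

def match(doc: dict, query: dict) -> bool:
    """Minimal stand-in for a MongoDB find() filter: exact match on each queried field."""
    return all(doc.get(key) == value for key, value in query.items())

# Ad hoc query: all English-language souls sharing one archetype.
group = [s["handle"] for s in souls
         if match(s, {"archetype": "local_news_fan", "language": "en"})]
assert group == ["user_001"]
```

Grouping personas by archetype like this is what lets an operator direct coordinated "thoughts" (likes, reposts, comments) at an ideologically consistent cluster of accounts at once.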
“The creators of the Meliorator tool considered a number of barriers to detection and attempted to mitigate those barriers by coding within the tool the ability to obfuscate their IP, bypass dual factor authentication, and change the user agent string.” continues the advisory. “Operators avoid detection by using a backend code designed to auto-assign a proxy IP address to the AI generated persona based on their assumed location.”
The report also provides the infrastructure associated with the bot farm and mitigations.
Brazil’s data protection authority, Autoridade Nacional de Proteção de Dados (ANPD), has imposed a temporary ban on Meta from processing users’ personal data for training its artificial intelligence (AI) models.
“The National Data Protection Authority (ANPD) issued today a Preventive Measure determining the immediate suspension, in Brazil, of the validity of the new privacy policy of the company Meta, which authorized the use of personal data published on its platforms for the purpose of training artificial intelligence (AI) systems.” reads the announcement published by ANPD.
ANPD also announced a daily fine of R$50,000 for non-compliance.
The Board of Directors issued a Preventive Measure due to the “use of an inadequate legal basis for data processing, insufficient disclosure of clear and accessible information about privacy policy changes and data processing, excessive limitations on the exercise of data subjects’ rights, and processing of children’s and adolescents’ personal data without proper safeguards.”
Meta’s updated privacy policy allows the social media giant to use public posts for its AI systems.
Meta expressed disappointment with the decision, claiming its practices comply with Brazilian privacy laws.
“This is a step backwards for innovation, competition in AI development and further delays bringing the benefits of AI to people in Brazil,” Meta said.
Human Rights Watch recently published a report revealing that LAION-5B, a major image-text dataset used for training AI models, includes identifiable photos of Brazilian children. These models can be used by tools employed to create malicious deepfakes that put even more children at risk of exploitation.
In June, Meta announced it was delaying the training of its large language models (LLMs) using public content shared by adults on Facebook and Instagram, following a request from the Irish Data Protection Commission (DPC).
Meta announced it is delaying the training of its large language models (LLMs) using public content shared by adults on Facebook and Instagram, following a request from the Irish Data Protection Commission (DPC).
“The DPC welcomes the decision by Meta to pause its plans to train its large language model using public content shared by adults on Facebook and Instagram across the EU/EEA. This decision followed intensive engagement between the DPC and Meta.” reads the DPC’s statement. “The DPC, in co-operation with its fellow EU data protection authorities, will continue to engage with Meta on this issue.”
Meta added it is disappointed by the request from the Irish Data Protection Commission (DPC). The social network giant pointed out that this is a step “backwards for European innovation, competition in AI development and further delays bringing the benefits of AI to people in Europe.”
“We’re disappointed by the request from the Irish Data Protection Commission (DPC), our lead regulator, on behalf of the European DPAs, to delay training our large language models (LLMs) using public content shared by adults on Facebook and Instagram — particularly since we incorporated regulatory feedback and the European DPAs have been informed since March.” reads the statement from Meta. “This is a step backwards for European innovation, competition in AI development and further delays bringing the benefits of AI to people in Europe.”
The company explained that its AI models, including LLMs, are already available in other parts of the world. Meta explained that, to provide a better service to its European communities, it needs to train the models on relevant information that reflects the diverse languages, geography, and cultural references of the people in Europe. For this reason, the company initially planned to train its large language models using the content that its users in the EU have publicly shared on its products and services.
Meta intended to implement these changes on June 26, giving users the option to opt out of data usage by submitting a request.
“We remain highly confident that our approach complies with European laws and regulations. AI training is not unique to our services, and we’re more transparent than many of our industry counterparts.” continues the statement. “We are committed to bringing Meta AI, along with the models that power it, to more people around the world, including in Europe. But, put simply, without including local information we’d only be able to offer people a second-rate experience. This means we aren’t able to launch Meta AI in Europe at the moment.”
Meta added that the delay will allow it to address requests from the U.K. Information Commissioner’s Office (ICO) before starting the training.
Threat actors exploited a zero-day vulnerability in the video-sharing platform TikTok to hijack high-profile accounts. The vulnerability resides in the direct messages feature implemented by the platform, according to Forbes.
The malware spreads through direct messages within the app and only requires the user to open a message. The compromised accounts did not post content, and the extent of the impact is unclear. TikTok spokesperson Alex Haurek stated that their security team is aware of the exploit and has taken measures to stop the attack and prevent future incidents. The company is also working with affected account owners to restore access.
The list of compromised accounts includes CNN, Paris Hilton, and Sony, however, it’s still unclear how many accounts have been impacted.
The company did not share technical details about the vulnerability exploited by the attackers.
“Our security team is aware of a potential exploit targeting a number of brand and celebrity accounts. We have taken measures to stop this attack and prevent it from happening in the future. We’re working directly with affected account owners to restore access, if needed.” TikTok spokesperson Alex Haurek told Forbes.
Haurek pointed out that the attacks compromised a very small number of accounts.
Semafor first reported that CNN’s TikTok account had been hacked, forcing the broadcaster to take down its account for several days.
The TikTok spokesperson also added that their security team was recently alerted of malicious actors targeting CNN’s account.
TikTok remarked that it is committed to maintaining the platform’s integrity and will continue to monitor for any further fraudulent activity.
In August 2022, Microsoft researchers disclosed a high-severity flaw in the TikTok Android app, which could have allowed attackers to hijack users’ accounts with a single click. The experts stated that the vulnerability would have required chaining with other flaws to hijack an account. Microsoft reported the issue to TikTok in February 2022, and the company quickly addressed it. Microsoft confirmed that it is not aware of attacks in the wild exploiting the bug.
Cybereason researchers warn that threat actors are utilizing Facebook messages to spread the Snake malware, a Python-based information stealer.
The researchers noticed that the threat actors are maintaining three different Python Infostealer variants. Two of these variants are regular Python scripts, whereas the third variant is an executable.
Once the malware has siphoned the credentials from the infected system, it transmits them to different platforms, such as Discord and GitHub, by abusing their APIs.
The campaign has been active since at least August 2023 when it was disclosed by a cybersecurity researcher on X.
Threat actors sent Facebook Messenger direct messages to victims, attempting to trick them into downloading archive files such as RAR or ZIP files. The archives contain two downloaders, a batch script and a cmd script, with the final downloader used to drop the appropriate Python Infostealer variant on the victim’s system.
“The archived file contains a BAT script which is the first downloader initiating the infection chain. The BAT script attempts to download a ZIP file via the cURL command, placing the downloaded file under the directory C:\Users\Public as myFile.zip. The BAT script proceeds to spawn another PowerShell command to extract the CMD script vn.cmd from the ZIP file and proceeds with its infection.” reads the report published by Cybereason. “The CMD script vn.cmd is the primary script responsible for downloading and executing the Python Infostealer.”
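Based on the file paths quoted from the Cybereason report, a defender could triage a Windows host with a minimal file-indicator check like the one below. This is an illustrative sketch, not official tooling: the myFile.zip path comes from the report, while the vn.cmd location is an assumption about where extraction might land.

```python
import os

# File-path indicators: the first is quoted in the Cybereason write-up,
# the second is an assumed extraction location used here for illustration.
INDICATORS = [
    r"C:\Users\Public\myFile.zip",  # ZIP dropped by the BAT downloader
    r"C:\Users\Public\vn.cmd",      # assumed location of the extracted CMD script
]

def check_indicators(paths=INDICATORS) -> list[str]:
    """Return any indicator files present on this host."""
    return [p for p in paths if os.path.exists(p)]

hits = check_indicators()
print("Suspicious artifacts found:" if hits else "No known artifacts found.", hits)
```

Real detection would of course combine file paths with behavioral signals (cURL spawned from a BAT script, PowerShell extracting archives), since file names are trivial for attackers to change.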
The infostealer can gather sensitive data from a range of web browsers.
Let me highlight that Coc Coc Browser is a browser widely used by the Vietnamese community. The selection of this browser also suggests that there was a specific demand to target the Vietnamese community at some point.
The researchers noticed that the infostealer is also able to gather cookie information specific to Facebook.
“Aside from cookies and credential information, project.py dumps cookie information specific to Facebook cookiefb.txt to disk. This behavior is likely for the Threat Actor to hijack the victim’s Facebook account, potentially to expand their infection.” continues the report.
The researchers attribute the campaign to Vietnamese-speaking individuals based on a few indicators, including comments in the scripts, naming conventions, and the presence of the Coc Coc Browser in the list of targeted browsers.
The report includes the MITRE ATT&CK mapping for this campaign.
EU consumer groups are calling on the bloc to sanction the company – which owns Facebook, Instagram and WhatsApp – for allegedly breaching privacy rules. Earlier this week, Meta announced it will set up a team to tackle disinformation and the abuse of generative AI in the run-up to the European Parliament elections – amid concerns about fake news. Abdulvehab Ejupi, a journalist at TRT International, reports.
Below are the questions and answers of my interview:
What specific GDPR rules do the consumer groups claim Meta is not complying with?
Consumer groups assert that Meta is not adhering to various rules established by the European privacy regulation, the GDPR.
Regardless, I emphasize that, according to GDPR, Meta needs a legitimate reason to collect and use individuals’ data, such as explicit consent, contractual necessity, or a legal obligation.
How does the European Consumer Organisation view Meta’s data processing practices in relation to surveillance-based business models?
The European Consumer Organization strongly opposes Meta’s business model for data collection and processing. They perceive this approach as inconsistent with the core rules of GDPR and believe it represents a serious threat to individual privacy rights.
What has been the criticism regarding Meta’s recent launch of paid, ad-free subscriptions in Europe, and how does Meta defend this move?
Critics argue that Meta is charging users for a basic privacy setting while still collecting extensive data. On the other hand, Meta contends that its subscription model is a legal and compliant response to evolving regulations regarding user consent and data processing.
Below is the video of my interview:
In 2023, the European Union fined Meta $1.3 billion for transferring user data to the US. This is the biggest fine since the adoption of the General Data Protection Regulation (GDPR) by the European Union (EU) on May 25, 2018.
Meta addressed a critical Facebook vulnerability that could have allowed attackers to take control of any account.
The Nepalese researcher Samip Aryal described the flaw as a rate-limiting issue in a specific endpoint of Facebook’s password reset flow. An attacker could have exploited the flaw to take over any Facebook account by brute-forcing a particular type of nonce.
Meta awarded the researcher for reporting the security issue as part of Facebook’s bug bounty program.
The researcher discovered that the issue impacts Facebook’s password reset procedure when the user selects “Send Code via Facebook Notification.”
Analyzing the vulnerable endpoint, the researcher discovered three conditions that opened the door to a brute-force attack.
Choosing the option “Send Code via Facebook Notification” will send a POST request to:
POST /ajax/recover/initiate/ HTTP/1.1
with the parameter recover_method=send_push_to_session_login.
Then the researcher attempted to send the 6-digit code ‘000000’ and analyzed the POST request sent to the vulnerable endpoint:
POST /recover/code/rm=send_push_to_session_login&spc=0&fl=default_recover&wsr=0 HTTP/1.1
where the “n” parameter holds the nonce.
At this stage, brute-forcing this 6-digit value had become a trivial task for the expert.
“There was no rate limiting on this endpoint, thus the matching code was responded back with a 302 status code. Use this code to log in/reset the FB account password for the user account.” reads the write-up published by Aryal.
The researcher noticed that upon exploiting this vulnerability, Facebook would notify the targeted user. The notification would either display the six-digit code directly or prompt the user to tap the notification to reveal the code.
The researcher reported the flaw to Meta on January 30, 2024, and the company addressed the issue on February 2, 2024. This vulnerability had a huge impact, and Meta recognized it as a zero-click account takeover exploit. Aryal is currently ranked in Facebook’s Hall of Fame 2024.
Hackers hijacked the X account of the US Securities and Exchange Commission (SEC) and used it to publish fake news on the Bitcoin ETF approval.
“Today the SEC grants approval to Bitcoin ETFs for listing on registered national security exchanges,” read the fake message, which was promptly removed.
“The approved Bitcoin ETFs will be subject to ongoing surveillance and compliance measures to ensure continued investor protection.” The message also included a picture of the SEC Chair Gary Gensler with a fake message applauding the approval.
The news had an immediate impact on the cryptocurrency industry: the price of Bitcoin temporarily jumped to $48,000 before dropping to around $45,000 after the SEC’s denial.
A Bitcoin ETF (Exchange-Traded Fund) is a financial product that mirrors the price of Bitcoin. Traded on major stock exchanges like stocks and bonds, ETFs enable investors to trade Bitcoin indirectly without direct involvement with the cryptocurrency.
Regulating ETFs in the United States, the SEC (Securities and Exchange Commission) imposes specific criteria for approval. This includes a requirement for a transparent and well-regulated Bitcoin market.
Despite increasing investor and financial institution support, the SEC has been cautious about approving a Bitcoin ETF, expressing concerns about Bitcoin’s volatility and the absence of a regulated cryptocurrency market.
The SEC Chair Gensler published the following message from his account, revealing the hack of the SEC’s X account.
“The SEC has not approved the listing and trading of spot bitcoin exchange-traded products.” Gensler wrote.
It’s unclear how threat actors hijacked the SEC’s social media account or whether it was protected with 2FA.
The SEC notified law enforcement and launched an investigation into the security breach.
Recently, several other prominent accounts on X have been hacked, including that of a networking vendor.