Brazil’s data protection authority, Autoridade Nacional de Proteção de Dados (ANPD), has imposed a temporary ban on Meta from processing users’ personal data for training its artificial intelligence (AI) models.
“The National Data Protection Authority (ANPD) issued today a Preventive Measure determining the immediate suspension, in Brazil, of the validity of the new privacy policy of the company Meta, which authorized the use of personal data published on its platforms for the purpose of training artificial intelligence (AI) systems.” reads the announcement published by ANPD.
ANPD also announced a daily fine of R$50,000 for non-compliance.
The Board of Directors issued a Preventive Measure due to the “use of an inadequate legal basis for data processing, insufficient disclosure of clear and accessible information about privacy policy changes and data processing, excessive limitations on the exercise of data subjects’ rights, and processing of children’s and adolescents’ personal data without proper safeguards.”
Meta’s updated privacy policy allows the social media giant to use public posts for its AI systems.
Meta expressed disappointment with the decision, claiming its practices comply with Brazilian privacy laws.
“This is a step backwards for innovation, competition in AI development and further delays bringing the benefits of AI to people in Brazil,” the company said.
Human Rights Watch recently published a report revealing that LAION-5B, a major image-text dataset used for training AI models, includes identifiable photos of Brazilian children. Models trained on such data can be used by tools employed to create malicious deepfakes, putting even more children at risk of exploitation.
In June, Meta announced it was delaying the training of its large language models (LLMs) using public content shared by adults on Facebook and Instagram, following a request from the Irish Data Protection Commission (DPC).
Meta said it was disappointed by the request from the Irish Data Protection Commission (DPC); the social network giant argued that this is a step “backwards for European innovation, competition in AI development and further delays bringing the benefits of AI to people in Europe.”
“We’re disappointed by the request from the Irish Data Protection Commission (DPC), our lead regulator, on behalf of the European DPAs, to delay training our large language models (LLMs) using public content shared by adults on Facebook and Instagram — particularly since we incorporated regulatory feedback and the European DPAs have been informed since March.” reads the statement from Meta. “This is a step backwards for European innovation, competition in AI development and further delays bringing the benefits of AI to people in Europe.”
The company explained that its AI, including its LLMs, is already available in other parts of the world. Meta said that to provide a better service to its European communities, it needs to train the models on relevant information that reflects the diverse languages, geography and cultural references of the people in Europe. For this reason, the company initially planned to train its large language models using the content that its users in the EU have publicly shared on its products and services.
Meta added that the delay will allow it to address requests from the U.K. Information Commissioner’s Office (ICO) before starting the training.
(SecurityAffairs – hacking, Meta)