Brazil’s data protection authority, the Autoridade Nacional de Proteção de Dados (ANPD), has imposed a temporary ban preventing Meta from processing users’ personal data to train its artificial intelligence (AI) models.
“The National Data Protection Authority (ANPD) issued today a Preventive Measure determining the immediate suspension, in Brazil, of the validity of the new privacy policy of the company Meta, which authorized the use of personal data published on its platforms for the purpose of training artificial intelligence (AI) systems,” reads the announcement published by the ANPD.
The ANPD also announced a daily fine of R$50,000 for non-compliance.
The Board of Directors issued a Preventive Measure due to the “use of an inadequate legal basis for data processing, insufficient disclosure of clear and accessible information about privacy policy changes and data processing, excessive limitations on the exercise of data subjects’ rights, and processing of children’s and adolescents’ personal data without proper safeguards.”
Meta’s updated privacy policy allows the social media giant to use public posts to train its AI systems.
Meta expressed disappointment with the decision, claiming its practices comply with Brazilian privacy laws.
“This is a step backwards for innovation, competition in AI development and further delays bringing the benefits of AI to people in Brazil,” a Meta spokesperson said.
Human Rights Watch recently published a report revealing that LAION-5B, a major image-text dataset used to train AI models, includes identifiable photos of Brazilian children. Models trained on this data can power tools used to create malicious deepfakes, putting even more children at risk of exploitation.
In June, Meta announced it was delaying the training of its large language models (LLMs) on public content shared by adults on Facebook and Instagram, following a request from the Irish Data Protection Commission (DPC).
“We’re disappointed by the request from the Irish Data Protection Commission (DPC), our lead regulator, on behalf of the European DPAs, to delay training our large language models (LLMs) using public content shared by adults on Facebook and Instagram — particularly since we incorporated regulatory feedback and the European DPAs have been informed since March,” reads the statement from Meta. “This is a step backwards for European innovation, competition in AI development and further delays bringing the benefits of AI to people in Europe.”
The company explained that its AI, including the Llama LLM, is already available in other parts of the world. To provide a better service to its European communities, Meta said it needs to train its models on relevant information that reflects the diverse languages, geography, and cultural references of people in Europe. For this reason, the company initially planned to train its large language models on content that its users in the EU have publicly shared on its products and services.
Pierluigi Paganini
(SecurityAffairs – hacking, Meta)