Meta Platforms faces 11 complaints over proposed changes that would allow it to use personal data to train its AI models without consent, potentially violating EU privacy rules.
Advocacy group NOYB (None of Your Business) is urging national privacy regulators to act immediately to block the practice, saying Meta's updated privacy policy, due to take effect on June 26, would let the company use years of personal posts, private images, and online tracking data for its AI technology.
NOYB has already filed multiple complaints against Meta and other large tech companies, accusing them of violating the EU's General Data Protection Regulation (GDPR), under which breaches can draw fines of up to 4% of a company's global turnover.
Meta maintains that it has a legitimate interest in using user data to train and develop its generative AI models and other AI tools, which it says can be shared with third parties.
NOYB founder Max Schrems said in a statement that the European Court of Justice had already ruled on this issue in 2021.
"The European Court (CJEU) has clearly stated that Meta has no 'legitimate interest' in advertising that could override user data protection rights," he said.
"However, the company tries to use the same argument to justify training unspecified 'AI technologies.' It seems Meta is once again blatantly ignoring the CJEU ruling," Schrems said, adding that opting out is extremely complicated.
"Shifting the burden to users is completely absurd. The law requires Meta to obtain opt-in consent from users, not to provide a hidden and misleading opt-out form," Schrems said, adding, "If Meta wants to use your data, they must ask for your permission, not make users beg to be excluded."
NOYB is calling on data protection authorities in Austria, Belgium, France, Germany, Greece, Italy, Ireland, the Netherlands, Norway, Poland, and Spain to launch urgency procedures, given that the changes are about to take effect.