Meta, Facebook's parent company, recently announced that it will begin training its AI models on user posts.
The goal is to improve its machine-learning systems: Facebook AI research stands to benefit from data reflecting Europe's many languages and cultures. However, the plan has sparked controversy.
European Users Can Opt Out
Meta's decision to include European user data in AI training marks a major shift.
Until now, the company had avoided using this data because of Europe's strict privacy laws. Now, Meta is moving forward despite the legal challenges involved.
"To properly serve our European communities, the models that power AI at Meta need to be trained on relevant information that reflects the diverse languages, geography, and cultural references of the people in Europe who will use them," Meta stated.
"To do this, we want to train our large language models that power AI features using the content that people in the EU have chosen to share publicly on Meta's products and services."
Meta says it will use publicly shared content for this AI training, including posts, comments, photos, and other content from users aged 18 and above. The company claims that private messages will not be used.
The company said it has sent billions of notifications to European users since May 22.
These notifications allow users to decline participation before the new AI training rules take effect on June 26. Users who opt out will have their posts excluded from AI training.
The situation is different for individuals outside of Europe, who have no ability to opt out.
Meta's AI models will be trained on data from non-European users without their consent. This compulsory participation already applies to the training data used in Meta's LLaMa 3.
Furthermore, upcoming AI models will continue to use this data unless Meta revises its policy.
Privacy Concerns and Legal Challenges
Although Meta likely feels it's in an excellent position to start using European user data, it's hard to imagine there being no pushback.
Before the social media giant even made its public announcement, it signaled its intentions via an update to its privacy policy last week. That prompted the consumer privacy advocacy group NOYB ("none of your business") to file complaints across Europe.
NOYB's complaints challenge the move in several European countries, arguing that the notifications were insufficient because EU privacy rules require Meta to obtain opt-in consent from users rather than relying on an opt-out.
The fact that data cannot realistically be scrubbed from an LLM or other AI model once training is complete is also likely to cause problems under the European Union's right to be forgotten.
The relationship between Meta and the EU has always been strained.
This year, the EU initiated investigations into Meta, focusing on issues such as child safety and the spread of misinformation during EU elections.
Impact on Global Users
Meta's policies offer fewer protections to users outside of Europe. These users cannot opt out of AI training, and Facebook AI research will use their data without obtaining their consent.
The lack of choice could result in dissatisfaction among Meta's global user base.
While European users can safeguard their privacy, users in other regions have no such option. Meta's actions may spark discussions about privacy and data usage on a global scale.