Meta Suspends AI Development in EU and Brazil Over Data Usage Concerns
Meta's evolving generative AI push appears to have hit a snag, with the company forced to scale back its AI efforts in both the EU and Brazil due to regulatory scrutiny over how it's utilizing user data in the process.
First off, in the EU, Meta has announced that it will withhold its multimodal models, a key element of its coming AR glasses and other tech, due to "the unpredictable nature of the European regulatory environment" at present.
As first reported by Axios, Meta's scaling back its AI push in EU member nations due to concerns about potential violations of EU rules around data usage.
Last month, advocacy group NOYB called on EU regulators to investigate Meta's recent policy changes that will enable it to utilize user data to train its AI models, arguing that the changes are in violation of the GDPR.
As per NOYB:
"Meta is basically saying that it can use 'any data from any source for any purpose and make it available to anyone in the world', as long as it's done via 'AI technology'. This is clearly the opposite of GDPR compliance. 'AI technology' is an extremely broad term. Much like 'using your data in databases', it has no real legal limit. Meta doesn't say what it will use the data for, so it could either be a simple chatbot, extremely aggressive personalised advertising or even a killer drone."
As a result, the EU Commission urged Meta to clarify its processes around user permissions for data usage, which has now prompted Meta to scale back its plans for future AI development in the region.
Worth noting, too, that UK regulators are also examining Meta's changes, and how it plans to access user data.
Meanwhile, in Brazil, Meta's removing its generative AI tools after Brazilian authorities raised similar questions about its new privacy policy in regard to personal data usage.
This is one of the key questions around AI development: human input, and a lot of it, is needed to train these advanced models. And within that, people should arguably have the right to decide whether or not their content is utilized in these models.
Because as we've already seen with artists, many AI creations end up looking very similar to actual people's work. Which opens up a whole new copyright concern, and when it comes to personal images and updates, like those shared to Facebook, you can also imagine that regular social media users will have similar concerns.
At the least, as noted by NOYB, users should have the right to opt out, and it seems somewhat questionable that Meta's trying to sneak through new permissions within a more opaque policy update.
What will that mean for the future of Meta's AI development? Well, in all likelihood, not a heap, at least initially.
Over time, more and more AI projects are going to be looking for human data inputs, like those available via social apps, to power their models, but Meta already has so much data that these restrictions likely won't change its overall development just yet.
In future, if a lot of users were to opt out, that could become more problematic for ongoing development. But at this stage, Meta already has large enough internal models to experiment with that the developmental impact would seemingly be minimal, even if it is forced to remove its AI tools in some regions.
But it could slow Meta's AI rollout plans, and its push to be a leader in the AI race.
Though, then again, NOYB has also called for a similar investigation into OpenAI, so all of the major AI projects could well be impacted by the same.
The final outcome, then, is that EU, UK and Brazilian users won't have access to Meta's AI chatbot. Which is likely no big loss, considering user responses to the tool, but it may also impact the release of Meta's coming hardware devices, including new versions of its Ray-Ban glasses and VR headsets.
By that time, presumably, Meta would have worked out an alternative solution, but it could highlight more questions about data permissions, and what people are signing up to in all regions.
Which may have a broader impact, beyond these regions. It's an evolving concern, and it'll be interesting to see how Meta looks to resolve these latest data challenges.