EU Implements New Laws on AI Development
In some ways, the EU is well ahead on technology regulation, taking proactive steps to ensure that consumer protection is factored into the evolving digital landscape.
But in others, EU regulations can stifle development, imposing onerous systems that don't really serve their intended purpose, and simply adding more hurdles for developers.
Case in point: Today, the EU has announced a new set of regulations designed to police the development of AI, with a range of measures around the ethical and acceptable use of people's data to train AI systems.
And there are some interesting provisions in there. For example:
"The new rules ban certain AI applications that threaten citizens' rights, including biometric categorization systems based on sensitive characteristics and untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases. Emotion recognition in the workplace and schools, social scoring, predictive policing (when it is based solely on profiling a person or assessing their characteristics), and AI that manipulates human behavior or exploits people's vulnerabilities will also be forbidden."
You can see how these regulations are intended to address some of the more concerning elements of AI usage. But at the same time, these rules can only be applied in retrospect, and there's plenty of evidence to suggest that AI tools that can do these things will be, and already have been, created, even if that was not the intention in their initial development.
So under these rules, EU officials will then be able to ban those apps once they get released. But they'll still be built, and will likely still be made available through alternative means.
I guess the new rules will at least give EU officials legal backing to take action in such cases. But it seems a little pointless to be reining things in after the fact, particularly if those same tools are going to be available in other regions either way.
Which is a broader concern with AI development overall, in that developers from other nations will not be beholden to the same regulations. That could see Western nations fall behind in the AI race, stifled by restrictions that aren’t implemented universally.
EU developers could be particularly hamstrung in this respect, because again, many AI tools will be able to do these things, even if that's not the intention in their creation.
Which, I guess, is part of the challenge in AI development. We don't know exactly how these systems will work until they do, and as AI theoretically gets "smarter" and starts piecing together more elements, there are going to be potentially harmful uses for them, with almost every tool set to enable some form of unintended misuse.
Really, the laws should relate more specifically to the language models and data sets behind AI tools, not the tools themselves. That would enable officials to focus on what information is being sourced, and how, and to limit unintended consequences in this respect, without restricting AI system development itself.
That's really the main impetus here anyway: policing what data is gathered, and how it's used.
In which case, EU officials wouldn't necessarily need a dedicated AI law, which could limit development, but rather an amendment to the existing Digital Services Act (DSA) covering expanded data usage.
Though either way, policing this is going to be a challenge, and it'll be interesting to see how EU officials look to enact these new rules in practice.
You can read an overview of the new EU Artificial Intelligence Act here.