European Union legislators are set to give final approval to the 27-nation bloc’s artificial intelligence law, putting the world-leading rules on track to take effect later this year.
Members of the European Parliament are poised to vote in favour of the Artificial Intelligence Act five years after it was first proposed.
The AI Act is expected to act as a global signpost for other governments grappling with how to regulate the fast-developing technology.
Dragos Tudorache, a Romanian lawmaker who was a co-leader of the Parliament negotiations on the draft law, said: “The AI Act has nudged the future of AI in a human-centric direction, in a direction where humans are in control of the technology and where it — the technology — helps us leverage new discoveries, economic growth, societal progress and unlock human potential.”
🇪🇺 Democracy: 1️⃣ | Lobby: 0️⃣
I welcome the overwhelming support from European Parliament for our #AIAct —the world's 1st comprehensive, binding rules for trusted AI.
Europe is NOW a global standard-setter in AI.
We are regulating as little as possible — but as much as needed! pic.twitter.com/t4ahAwkaSn
— Thierry Breton (@ThierryBreton) March 13, 2024
Big tech companies generally have supported the need to regulate AI while lobbying to ensure any rules work in their favour.
OpenAI chief executive Sam Altman caused a minor stir last year when he suggested the ChatGPT maker could pull out of Europe if it could not comply with the AI Act — before backtracking to say there were no plans to leave.
Like many EU regulations, the AI Act was initially intended to act as consumer safety legislation, taking a “risk-based approach” to products or services that use artificial intelligence.
The riskier an AI application, the more scrutiny it faces. Low-risk systems, such as content recommendation systems or spam filters, will face only light rules, such as disclosing that they are powered by AI. The EU expects most AI systems to fall into this category.
High-risk uses of AI, such as in medical devices or critical infrastructure like water or electrical networks, face tougher requirements like using high-quality data and providing clear information to users.
The EU delivered. We have forever attached to the concept of Artificial Intelligence the fundamental values that form the basis of our societies. With that alone, the #AIAct has nudged the future of AI in a human-centric direction. pic.twitter.com/T8oIIVZvBc
— Dragoș Tudorache (@IoanDragosT) March 12, 2024
Some AI uses are banned because they are deemed to pose an unacceptable risk, like social scoring systems that govern how people behave, some types of predictive policing and emotion recognition systems in schools and workplaces.
Other banned uses include police scanning faces in public using AI-powered remote “biometric identification” systems, except for serious crimes like kidnapping or terrorism.
The law’s early drafts focused on AI systems carrying out narrowly limited tasks, like scanning CVs and job applications. The astonishing rise of general-purpose AI models, exemplified by OpenAI’s ChatGPT, sent EU policymakers scrambling to keep up.
They added provisions for so-called generative AI models, the technology underpinning AI chatbot systems that can produce unique and seemingly lifelike responses, images and more.
Developers of general-purpose AI models – from European start-ups to OpenAI and Google – will have to provide a detailed summary of the text, pictures, video and other internet data used to train their systems, as well as comply with EU copyright law.
AI-generated deepfake pictures, video or audio of existing people, places or events must be labelled as artificially manipulated.
There’s extra scrutiny for the biggest and most powerful AI models that pose “systemic risks”, which include OpenAI’s GPT-4 – its most advanced system – and Google’s Gemini.
The EU says it is worried that these powerful AI systems could “cause serious accidents or be misused for far-reaching cyber attacks”.
EU officials also fear generative AI could spread “harmful biases” across many applications, affecting many people.
Companies that provide these systems will have to assess and mitigate the risks; report any serious incidents, such as malfunctions that cause someone’s death or serious harm to health or property; put cybersecurity measures in place; and disclose how much energy their models use.