After three days of intense negotiations, the Council of the European Union (EU) and the European Parliament have reached a provisional deal on the proposed Artificial Intelligence (AI) Act. This landmark draft law is designed to ensure that AI systems used in the EU are safe, respect fundamental rights, and uphold EU values. It also aims to boost investment and innovation in AI within the EU.
"AI Act: 🇪🇺 Council and 🇪🇺 Parliament strike a deal on the first rules for AI in the world #AIAct" — EU Council Press (@EUCouncilPress), December 9, 2023
Main Points of the Provisional Agreement
Compared with the initial Commission proposal, the provisional agreement makes several key updates:
- New rules for high-impact general-purpose AI models that could pose systemic risks in the future, as well as for high-risk AI systems.
- A revised governance system with some enforcement powers at EU level.
- An extended list of prohibitions, though law enforcement may use remote biometric identification in public spaces under strict conditions.
- Stronger protection of rights, including a mandatory assessment of how high-risk AI systems affect fundamental rights.
Detailed Aspects of the Agreement
This section explains the provisional agreement on AI in the EU. It clarifies what counts as AI, separating high-risk and low-risk AI. It outlines prohibited AI uses and sets guidelines for police use of AI. The section also covers how these AI rules will be governed, the fines for breaking them, and steps being taken to protect people’s rights and encourage AI development.
Definitions and Scope
The agreement refines the definition of AI systems to make it clearer and aligns it with international standards. It also states that these rules don’t apply to military uses, national security, or research and innovation.
AI Systems: High-Risk and Prohibited Practices
The agreement establishes a horizontal layer of protection, classifying certain AI systems as high-risk. AI posing only limited risk will face light transparency obligations, such as disclosing that content was AI-generated. High-risk AI systems will be permitted but must meet a set of requirements. The agreement also clarifies the allocation of responsibilities across the AI value chain and how these rules interact with other EU laws.
AI practices deemed to pose an unacceptable risk will be banned outright. These include cognitive and behavioural manipulation, untargeted scraping of facial images, emotion recognition in the workplace, social scoring, inference of sensitive data such as sexual orientation, and some forms of predictive policing.
Law Enforcement Exceptions
The agreement allows for certain AI uses by law enforcement, with strict rules to protect fundamental rights. This includes emergency uses of AI tools and controlled use of AI for real-time facial recognition.
General Purpose AI Systems and Foundation Models
The agreement introduces dedicated rules for general-purpose AI systems and foundation models, requiring transparency from all of them and imposing stricter obligations on high-impact models.
New Governance Structure
A new AI office within the Commission will oversee the most advanced AI models. A scientific panel of independent experts will advise this office, and an AI Board made up of member states' representatives will support coordination and provide advice.
Penalties
The agreement sets fines for violations of the AI Act, with proportionate caps for smaller companies and startups. Each penalty is the higher of a fixed amount or a percentage of the company's global annual turnover in the previous financial year:
- €35 million or 7% of global annual turnover for using banned AI applications.
- €15 million or 3% of global annual turnover for violations of the AI Act's obligations.
- €7.5 million or 1.5% of global annual turnover for supplying incorrect information.
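The "whichever is more" rule above can be illustrated with a short arithmetic sketch. This is not legal advice; the tier names and the `ai_act_fine` helper are illustrative inventions, and only the euro amounts and percentages come from the agreement as reported here.

```python
# Illustrative sketch of the AI Act penalty rule: the fine is the higher of
# a fixed amount or a share of global annual turnover (previous financial year).
# Percentages are stored in tenths of a percent to keep the arithmetic exact.
FINE_TIERS = {
    "banned_application": (35_000_000, 70),    # €35M or 7.0%
    "obligation_violation": (15_000_000, 30),  # €15M or 3.0%
    "incorrect_information": (7_500_000, 15),  # €7.5M or 1.5%
}

def ai_act_fine(tier: str, global_turnover: int) -> int:
    """Return the applicable fine in euros: the higher of the fixed amount
    or the percentage of global annual turnover for the given tier."""
    fixed, pct_tenths = FINE_TIERS[tier]
    return max(fixed, global_turnover * pct_tenths // 1000)

# A company with €1 billion turnover using a banned application:
# 7% of €1B is €70M, which exceeds the €35M floor, so €70M applies.
print(ai_act_fine("banned_application", 1_000_000_000))  # 70000000
```

For smaller firms the fixed amount usually dominates: at €100 million turnover, 1.5% is only €1.5 million, so supplying incorrect information would still attract the €7.5 million figure.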
The agreement also provides that any natural or legal person may file a complaint with the relevant market surveillance authority if they believe the AI Act is being infringed. Such complaints will be handled in line with that authority's established procedures.
Transparency and Fundamental Rights Protection
The agreement requires a rights impact assessment for high-risk AI systems and more transparency in their use.
Measures to Support Innovation
The agreement updates the measures supporting AI innovation, including regulatory sandboxes for real-world testing and dedicated support for smaller companies.
The Next Steps
After reaching this initial agreement, the next few weeks will involve more detailed work to complete the new regulations. Once this is done, the agreed-upon text will be presented to the member countries’ representatives for their approval.
Both the EU Council and Parliament need to formally agree on the text, which will also go through a final check to ensure its language and legal terms are correct. After this, it will officially become law.
The new AI Act is planned to take effect two years after it becomes law, although some aspects of it might be applied sooner.