EU set to adopt world’s first AI legislation that will ban facial recognition in public places
The European Union (EU) is leading the race to regulate artificial intelligence (AI). Putting an end to three days of negotiations, the European Council and the European Parliament reached a provisional agreement earlier today on what’s set to become the world’s first comprehensive regulation of AI.
Carme Artigas, the Spanish Secretary of State for digitalization and AI, called the agreement a “historical achievement” in a press release. Artigas said that the rules struck an “extremely delicate balance” between encouraging safe and trustworthy AI innovation and adoption across the EU and protecting the “fundamental rights” of citizens.
The draft legislation, the Artificial Intelligence Act, was first proposed by the European Commission in April 2021. The Parliament and EU member states will vote to approve the draft legislation next year, but the rules will not come into effect until 2025.
A risk-based approach to regulating AI
The AI Act takes a risk-based approach: the higher the risk an AI system poses, the more stringent the rules. To achieve this, the regulation will classify AI systems and identify those deemed "high-risk."
AI systems deemed non-threatening and low-risk will be subject to "very light transparency obligations." For instance, such systems will be required to disclose that their content is AI-generated so that users can make informed decisions.
For high-risk AI systems, the legislation imposes a number of obligations and requirements, including:
Human Oversight: The act mandates a human-centered approach, emphasizing clear and effective human oversight mechanisms for high-risk AI systems. This means having humans in the loop, actively monitoring and overseeing the AI system's operation. Their role includes ensuring the system works as intended, identifying and addressing potential harms or unintended consequences, and ultimately holding responsibility for its decisions and actions.
Transparency and Explainability: Demystifying the inner workings of high-risk AI systems is crucial for building trust and ensuring accountability. Developers must provide clear and accessible information about how their systems make decisions. This includes details on the underlying algorithms, training data, and potential biases that may influence the system’s outputs.
Data Governance: The AI Act emphasizes responsible data practices, aiming to prevent discrimination, bias, and privacy violations. Developers must ensure the data used to train and operate high-risk AI systems is accurate, complete, and representative. Data minimization principles are also crucial: systems should collect only the information necessary for their function, minimizing the risk of misuse or breaches. Furthermore, individuals must have clear rights to access, rectify, and erase their data used in AI systems, empowering them to control their information and ensure its ethical use.
Risk Management: Proactive risk identification and mitigation will become a key requirement for high-risk AIs. Developers must implement robust risk management frameworks that systematically assess potential harms, vulnerabilities, and unintended consequences of their systems.
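The tiered structure described above can be sketched as a simple data model. This is purely an illustrative paraphrase, not the legal text: the tier names, obligation strings, and function below are hypothetical, and the actual act defines its categories and duties in far more detail.

```python
from enum import Enum

# Hypothetical illustration of the AI Act's risk-based approach.
# Tier names and obligation strings are paraphrases, not legal terms.
class RiskTier(Enum):
    MINIMAL = "minimal"   # light transparency obligations
    HIGH = "high"         # full set of obligations

# Illustrative mapping from risk tier to the obligations summarized above.
OBLIGATIONS = {
    RiskTier.MINIMAL: [
        "disclose that content is AI-generated",
    ],
    RiskTier.HIGH: [
        "human oversight",
        "transparency and explainability",
        "data governance",
        "risk management",
    ],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the (illustrative) obligations attached to a risk tier."""
    return OBLIGATIONS[tier]
```

The point of the sketch is the shape of the regulation, not its content: obligations scale with the assessed risk of the system, so classification into a tier is the step that determines which duties apply.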