The European AI Act: Key Compliance Actions
Artificial Intelligence (AI) has become a central pillar of companies’ digital strategies. With the adoption of the Artificial Intelligence Regulation (AI Act) by the European Union, organizations must now navigate a strict regulatory framework to ensure ethical and compliant AI usage.
The European AI Regulation entered into force on August 1, 2024. Its obligations apply gradually, with most provisions becoming applicable on August 2, 2026 (Article 113 of the regulation).
Classification of AI Systems by Risk Levels
The AI Act categorizes AI systems into four levels of risk:
- Unacceptable risk: Prohibited (e.g., subliminal manipulation systems, AI for social scoring).
- High risk: Regulated (e.g., AI in healthcare, education, or public safety).
- Limited risk: Transparency obligations (e.g., chatbots must disclose they are not human).
- Minimal risk: No specific requirements (e.g., entertainment applications).
An AI system is considered high-risk if it is intended to be used as a product, or as a safety component of a product (medical devices, autonomous vehicles, connected toys, etc.), covered by the EU harmonization legislation listed in Annex I and required to undergo a third-party conformity assessment, or if it is intended for one of the purposes described in Annex III.
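Many organizations operationalize this classification by tagging each system in an internal AI register. The following Python sketch is illustrative only: the tier names, the `AISystem` fields, and the example entries are our own assumptions, not terms defined by the regulation.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """The four risk tiers defined by the AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited (Article 5)
    HIGH = "high"                  # regulated (Article 6, Annexes I and III)
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no specific requirements


@dataclass
class AISystem:
    """One entry in a hypothetical internal AI inventory."""
    name: str
    purpose: str
    tier: RiskTier
    annex_iii_use_case: bool = False  # falls under an Annex III purpose?
    safety_component: bool = False    # safety component of an Annex I product?


# Example inventory entries (fictional systems, for illustration only).
inventory = [
    AISystem("cv-screening", "Rank job applications", RiskTier.HIGH,
             annex_iii_use_case=True),
    AISystem("support-chatbot", "Answer customer questions", RiskTier.LIMITED),
    AISystem("spam-filter", "Filter internal email", RiskTier.MINIMAL),
]

# List every system that triggers high-risk obligations.
high_risk = [s for s in inventory if s.tier is RiskTier.HIGH]
print([s.name for s in high_risk])
```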
Obligations of Actors Involved in High-Risk AI Systems
Providers of high-risk AI systems bear the heaviest compliance burden: they must perform a conformity assessment before the system is placed on the market or put into service, operate post-market monitoring systems, and maintain procedures for reporting serious incidents. Deployers may also have incident reporting obligations.
- Reporting deadlines: Serious incidents must be reported within strict deadlines, immediately in the most severe cases. Reports must be submitted to the market surveillance authorities of the EU member states where the incident occurred; multiple reports may be required if the incident affects several jurisdictions.
- Rights of affected individuals: Affected individuals have the right to an explanation of individual decisions taken on the basis of output from certain high-risk AI systems (Article 86).
Prohibited Practices
Article 5 lists eight prohibited practices deemed to pose an unacceptable risk. These prohibitions take effect on February 2, 2025:
- Techniques exploiting vulnerable groups
- Subliminal, manipulative, or deceptive techniques
- Emotional inference in workplaces or schools
- Social scoring in certain cases
- Real-time remote biometric identification in public spaces for law enforcement
- Biometric categorization to infer race, political opinions, union membership, religious beliefs, sexual life, or sexual orientation
- Crime prediction based on profiling
- Data scraping to create facial recognition databases
Many of these prohibitions carry exceptions, requiring case-by-case analysis. The list is not final: the Commission must reassess it annually.
Sanctions for Non-Compliance
Organizations that fail to comply with the AI Act may face significant penalties (Articles 99, 100, and 101):
- Administrative fines: Up to €35 million or 7% of global annual revenue, whichever is higher (so a company with €1 billion in turnover faces a ceiling of €70 million); lower caps apply to less serious infringements
- Corrective measures: Authorities may order the withdrawal or suspension of non-compliant AI systems
- Universal enforcement: The prohibitions apply regardless of the actor’s role (provider, deployer, distributor, or importer)
7 Key Practical Actions to Ensure Compliance with the EU AI Act
Here are seven essential steps to ensure compliance with the European AI Act:
1. Scope and Applicability
- Identify affected actors: Evaluate whether you, your suppliers, or your clients fall within the AI Act’s scope.
- Verify affected AI systems or models: Identify which AI systems fall into the regulated risk categories.
- Ensure AI awareness: Prepare training programs and awareness measures to comply with obligations effective February 2, 2025.
2. Compliance with Prohibited Practices
- Review your AI systems: Check whether your systems engage in prohibited practices (e.g., subliminal manipulation, remote biometric recognition); a screening sketch follows these steps.
- Implement annual updates: Track changes in the list of prohibited practices and assess possible exceptions.
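To make that review repeatable, the eight Article 5 categories can be encoded as a simple screening checklist, as in the sketch below. The checklist keys are our own paraphrases, and a raised flag signals the need for legal analysis, not a conclusion; exceptions may apply.

```python
# The eight Article 5 categories, paraphrased as checklist keys.
PROHIBITED_PRACTICES = [
    "exploits_vulnerable_groups",
    "subliminal_or_manipulative_techniques",
    "emotion_inference_workplace_or_school",
    "social_scoring",
    "realtime_remote_biometric_id_law_enforcement",
    "biometric_categorisation_sensitive_traits",
    "crime_prediction_from_profiling",
    "facial_recognition_database_scraping",
]


def screen_system(answers: dict[str, bool]) -> list[str]:
    """Return the prohibited-practice flags raised for one system.

    `answers` maps each checklist key to True if the review found that
    the practice may apply. A non-empty result means the system needs
    case-by-case legal analysis, since exceptions may apply.
    """
    return [p for p in PROHIBITED_PRACTICES if answers.get(p, False)]


# Example: a fictional workplace-analytics tool.
flags = screen_system({"emotion_inference_workplace_or_school": True})
print(flags)  # -> ['emotion_inference_workplace_or_school']
```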
3. High-Risk AI Systems
- Determine high-risk criteria: Verify if your AI systems are classified as “high risk” under Article 6 and the Annexes of the AI Act.
- Identify your role in the value chain: Clarify your obligations based on your position (provider, deployer, importer, distributor, etc.).
4. General-Purpose AI Models
- Understand key concepts: Familiarize yourself with critical notions, such as general-purpose AI models and models presenting systemic risk.
- Review internal governance: Adjust your processes to ensure compliance.
- Assess legal aspects: Include intellectual property analyses and mechanisms to monitor systemic risk thresholds.
5. Experimental Regulatory Environments
- Participate in AI sandboxes: Prepare strategies to test AI in regulated environments.
- Choose the right country: Select the most suitable EU member state for your testing needs.
- Obtain an exit report: Use this document to accelerate compliance assessment.
6. Transparency
- For AI providers:
  - Implement machine-readable labeling for AI-generated content (see the sketch after this list).
  - Add disclaimers for direct AI-human interactions.
- For AI users:
  - Clearly label deepfakes.
  - Inform individuals when using biometric or emotion recognition systems.
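What counts as “machine-readable” labeling is still being standardized (industry efforts such as C2PA content credentials are one direction). Purely as an illustration, with a field schema of our own invention, a provider could attach a provenance label to each generated output:

```python
import json
from datetime import datetime, timezone


def provenance_sidecar(model_name: str, output_id: str) -> str:
    """Build a JSON label marking content as AI-generated.

    The schema here is hypothetical; real deployments should follow an
    emerging standard (e.g., C2PA) rather than an ad-hoc format.
    """
    label = {
        "ai_generated": True,
        "generator": model_name,
        "output_id": output_id,
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(label, indent=2)


print(provenance_sidecar("example-model-v1", "img-0001"))
```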
7. Monitoring and Governance
- Post-market surveillance: Develop plans to monitor AI systems after their release (the Commission is due to adopt a template for post-market monitoring plans by February 2, 2026).
- Report serious incidents: Integrate clear procedures into your quality management systems to identify and report major incidents (e.g., fundamental rights violations); a deadline sketch follows below.
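As commonly summarized from Article 73, different kinds of serious incident carry different maximum reporting deadlines: 15 days in the general case, 10 days in the event of a death, and 2 days for a widespread infringement. The sketch below encodes that summary as a simple lookup; the type names are our own, and the deadlines should be verified against the regulation’s final text.

```python
from datetime import date, timedelta
from enum import Enum


class IncidentKind(Enum):
    """Incident categories with distinct Article 73 deadlines (summarized)."""
    GENERAL = 15                  # general case: within 15 days of awareness
    DEATH = 10                    # death of a person: within 10 days
    WIDESPREAD_INFRINGEMENT = 2   # widespread infringement: within 2 days


def reporting_deadline(awareness: date, kind: IncidentKind) -> date:
    """Latest date to notify the market surveillance authority.

    These deadlines are an indicative summary of Article 73; always
    check the regulation's final text and any implementing guidance.
    """
    return awareness + timedelta(days=kind.value)


# Example: a provider becomes aware of a general serious incident.
print(reporting_deadline(date(2026, 9, 1), IncidentKind.GENERAL))  # 2026-09-16
```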
By following these steps, you will be better prepared to integrate the AI Act requirements and ensure effective implementation. For further details or expert guidance, do not hesitate to contact us.