
EU AI Act Receives Final Approval

Press Release
May 22, 2024
2 min read

Yesterday, the Council of the European Union gave final approval to the EU AI Act, clearing the last of many hurdles on the bill's path to becoming the world's first comprehensive regulation of artificial intelligence.

Some provisions have been clarified since the political agreement reached back in December, so if you would like to read the law yourself, you can find it here.

The law retains its tiered, risk-based approach, broken down as follows:

  • Unacceptable risk: AI systems that pose a clear threat to the safety, livelihoods, and rights of people, such as social scoring and dark patterns. These systems are banned outright.
  • High risk: AI systems used in specific high-risk use cases, factoring in both the severity of the possible harm and how likely it is to occur. These include resume-scanning and housing-application tools, and they carry mandatory conformity assessments.
  • Limited risk: AI systems that pose limited risk to individuals, such as chatbots. These systems carry transparency obligations so users know they are interacting with AI.
  • Minimal/no risk: AI systems that pose minimal or no risk, such as AI-enabled video games or spam filters. Per the EU Commission, the majority of AI systems will fall within this category and carry no additional compliance obligations.

The most important distinction is whether an organization is a provider or a deployer of an AI system, as the obligations for each differ, much like the controller-processor distinction under the GDPR.

Providers of high-risk systems have the following baseline obligations (among a much wider set of overall obligations):

  • Establish risk management and quality management systems
  • Use only high-quality datasets
  • Meet transparency obligations toward deployers and end users
  • Maintain comprehensive technical documentation and logs
  • Provide for human oversight and automatic event recording
  • Complete conformity assessments and the declaration of conformity
  • Meet accessibility requirements

Deployer obligations include:

  • Use systems in accordance with the provider's instructions
  • Conduct Data Protection Impact Assessments (DPIAs) and Fundamental Rights Impact Assessments
  • Ensure human oversight and report incidents
  • Verify that input data is relevant and representative
  • Retain logs
  • Notify end users of the use of AI systems where necessary

Exemptions from the Act include:

  • AI models or systems used solely for the purpose of scientific research
  • Use of AI systems for purely household activities
  • AI systems used exclusively for defense/military purposes

With the legislation's clock now running, the rough timeline for its provisions to take effect is as follows:

  • October 2024 - Ban on AI systems within the “unacceptable risk” category
  • January 2025 - AI Act Codes of Conduct begin to apply
  • April 2025 - Governance rules and obligations for General Purpose AI (GPAI) become applicable
  • April 2026 - The EU AI Act generally begins to apply to AI systems

The full guidelines can be found here.

The press release can be found here.