
Quick Hits on EU's AI Act

Regulations
James Grieco
Dec 12, 2023
4 min read

After two marathon days of negotiation, the Council of the EU and the European Parliament reached agreement on the AI Act, the world's first comprehensive AI regulation. The deal follows months of intense talks and compromises, as the European Union has once again positioned itself as the world leader in regulation, just as it did seven years ago when it passed the General Data Protection Regulation (GDPR) to rein in snowballing data-handling malpractice.

The most important thing to note about the announcement of the AI Act is that the full text is not finished and will not be made public until sometime in 2024. In fact, the EU has several technical sessions scheduled over the coming month, so while the outline of the regulation is set, many of the details still need to be hammered out.

With that caveat, here are our quick takes on this monumental news and the discourse around it.

The Tiered Approach Makes Sense

The final result of all the negotiations is a tiered, risk-based approach, as outlined in this image courtesy of Telefonica: obligations scale with risk, from minimal-risk systems with no new obligations, to limited-risk systems subject to transparency requirements, to high-risk systems facing strict requirements, up to unacceptable-risk systems that are banned outright.

EU representatives had largely agreed on a risk-based approach a few months back, but last-minute concerns and disagreements nearly derailed the tiered system. At the eleventh hour, however, a new agreement was reached, and as a result we have the tiered system outlined above.

Refusing to paint AI with a broad brush is smart: it avoids feeding the fictionalized image of AI that many people hold and does not compound existing public worries about the technology. This approach also ensures the regulation will not stifle innovation, a concern many in the tech community, particularly in the U.S., voiced upon the AI Act's announcement.

For much of AI development, only transparency obligations will apply, and those will neither stand in the way of nor detract from progress. For high-risk and unacceptable-risk systems, viewing regulation as a shackle rather than a guardrail is the wrong way to look at it. This technology is still going to be developed, but now it will do so with a level of oversight that lets the public adopt finished products with more confidence in their safety and trustworthiness.
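For illustration, here is a minimal sketch in Python of the tiered logic outlined above; the tier names and example obligations are assumptions drawn from published summaries, not from the final legal text:

```python
# Illustrative sketch of a tiered, risk-based classification like the one
# described above. Tier names and obligations are assumptions based on
# published summaries, not the final text of the AI Act.

OBLIGATIONS_BY_TIER = {
    "minimal": "no new obligations",
    "limited": "transparency obligations (e.g. disclosing AI-generated content)",
    "high": "strict requirements: risk assessments, human oversight, documentation",
    "unacceptable": "prohibited outright",
}

def obligations_for(tier: str) -> str:
    """Look up the example obligations for a given risk tier."""
    try:
        return OBLIGATIONS_BY_TIER[tier]
    except KeyError:
        raise ValueError(f"unknown risk tier: {tier!r}")

for tier in ("minimal", "limited", "high", "unacceptable"):
    print(f"{tier:>12}: {obligations_for(tier)}")
```

The point of the structure is simply that obligations scale with risk rather than applying uniformly to every AI system.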

What Will Be the Global Response to the AI Act?

As noted above, the EU prioritized being the first in the world to pass comprehensive AI legislation, and it takes great pride in having accomplished that feat.

The AI Act will apply to any use of AI that impacts EU citizens, which is also how the GDPR operates. That means it does not matter where an AI service is based: it will need to comply with the new law or shut itself out of the European market entirely (a financial nonstarter for most tech companies; see Meta's new paid subscription in Europe).

Just as the GDPR spawned a tidal wave of cookie banners and updated privacy notices when it entered into effect in 2018, creating one of the most confusing stretches in American internet history, the AI Act will necessitate changes among companies developing and deploying AI.

With companies needing to comply, how will other regions of the world adapt? Will they pass their own legislation on AI development and use, or leave it to Europe to set the floor for what AI compliance looks like? The U.S. has been vocal on the matter, but can Congress step up and actually pass a comprehensive law to match Europe's?

Data Privacy Overlaps TBD

As data privacy expert Luiza Jarovsky pointed out in her most recent newsletter, there is unavoidable overlap between the GDPR and the AI Act. This is a given, as data protection is one of the core tenets of AI governance, but without the full text of the AI Act to go through, many questions, such as what human oversight of AI looks like in practice and how risk assessments should be conducted, remain muddled.

It is unlikely that one regulation will preempt the other, but unclear regulatory overlap will only make preparing for the AI Act harder. This is an issue the EU Council and Parliament need to clear up as soon as possible.

While the technical details of the law could make this a moot point for many, since only systems with immense computing power are expected to fall into the high-risk or unacceptable-risk categories, crystallizing the regulatory role of data protection within an expanding sphere of AI governance is paramount to ensuring the AI Act's message lands properly.

The law is about creating safe, non-exploitative AI systems for the public, and handling data appropriately and compliantly is the basis for that.

AI Act Enforcement Numbers Look Severe

Speaking of the GDPR, the maximum penalty for a violation is 4% of annual worldwide turnover, a ceiling that has already produced fines in the hundreds of millions of dollars for companies like Meta. The AI Act raises that figure to 7% of annual worldwide turnover for the most serious violations, signaling just how seriously the EU takes AI governance.
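To make the jump concrete, here is a minimal sketch in Python comparing the two turnover-based ceilings cited above; the annual turnover figure is a hypothetical chosen purely for illustration, not any real company's financials:

```python
# Illustrative only: compares the maximum-fine ceilings cited above
# (4% of annual worldwide turnover under the GDPR vs. 7% under the AI Act)
# for a hypothetical company with an assumed turnover.

GDPR_MAX_RATE = 0.04    # 4% of annual worldwide turnover
AI_ACT_MAX_RATE = 0.07  # 7% of annual worldwide turnover

def max_fine(annual_turnover_eur: float, rate: float) -> float:
    """Return the turnover-based maximum fine for a given rate."""
    return annual_turnover_eur * rate

# Hypothetical Big Tech-scale turnover: EUR 100 billion per year.
turnover = 100_000_000_000

print(f"GDPR ceiling:   EUR {max_fine(turnover, GDPR_MAX_RATE):,.0f}")
print(f"AI Act ceiling: EUR {max_fine(turnover, AI_ACT_MAX_RATE):,.0f}")
# For this hypothetical turnover, the ceiling rises from EUR 4 billion
# to EUR 7 billion.
```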

Considering that running even a large language model service like ChatGPT costs hundreds of millions of dollars a year, the first wave of AI technology is likely to come from Big Tech (indeed, Google released its own AI system, Gemini, this month). That means potential AI Act violations will immediately carry price tags in the hundreds of millions of dollars, hopefully a deterrent that pushes the world's largest companies to approach AI development with more consideration than they otherwise would in an unregulated world.

Not Everyone Is Happy, and That's Okay

The AI Act is truly groundbreaking as the world's first comprehensive regulation of artificial intelligence. Even with a solid framework, this initial version will never be the most thorough, ironclad regulation on the issue. But the law can serve as a solid basis and entry point for future developments, and that is how it should be viewed.

Some in the tech world, as mentioned above, view the introduction of this regulation as a wet blanket on innovation.

Others have decried what is missing from the law's announcement, saying the act does not go far enough. Amnesty International issued a press release calling the law's failure to impose an outright, unconditional ban on facial recognition technology for mass surveillance "a hugely missed opportunity."

Not everyone is happy with the law right now, but the struggle to pass the AI Act before the year's end and the upcoming 2024 EU Parliamentary elections was real. Governance is compromise by nature, and with the pressure mounting thanks to AI's unprecedented development speed and market growth, getting a bill out the door on time should be commended.

The full text will clear up many questions the privacy and tech communities currently have, and will certainly create many more. That is the nature of the beast and the task governments undertake when trying to keep pace with innovation. In a world as rapidly evolving as ours, it's a good start.