This article was published on August 2, 2024

EU AI Act enters into force, sets global standard for AI governance

All companies providing services in the bloc must comply



The European Union’s AI Act entered into force yesterday. It is the world’s first comprehensive regulation for artificial intelligence.

First agreed upon in December 2023, the law applies a risk-based approach. The strictest measures apply only to “high-risk” systems, including tools used in employment and law enforcement. The regulation entirely prohibits AI systems deemed “unacceptable,” such as social scoring or police profiling.

For “minimal-risk” AI, such as spam filters, there are no additional requirements. For “limited-risk” systems like chatbots, companies need to inform users that they’re interacting with AI.

The Act also introduces a special set of rules for general-purpose AI, which includes foundation models such as the ones powering ChatGPT.

“The European approach to technology puts people first and ensures that everyone’s rights are preserved,” EU competition chief Margrethe Vestager said in a statement.

The majority of the Act’s provisions will apply from August 2026. The ban on AI systems that pose “unacceptable risks” will apply in six months, while the rules for general-purpose AI will take effect in one year.

All companies that provide services or products in the bloc fall under the law’s scope. Failure to comply can trigger fines of up to 7% of a company’s annual global turnover.

Eyes on the EU

Since its inception, the AI Act has sparked controversy. A number of European businesses have raised objections, fearing that the rules could harm competition and impede innovation.

For others, it represents a major step toward responsible AI.

“The Act’s risk-based approach is designed to ensure AI technologies are developed and used responsibly, providing a framework that balances innovation with the protection of citizens’ rights,” said Maria Koskinen, AI Policy Manager at AI governance startup Saidot.

Whichever view prevails, companies need to start planning their path to compliance now.

“This is the time for organisations to map their AI projects, classify their AI systems, and risk assess their use-cases,” said Enza Iannopollo, principal analyst at tech advisory firm Forrester.

“They also need to execute a compliance roadmap that is specific to the amount and combination of use-cases they have.”

While the Act’s effect on business operations and competitiveness remains to be seen, the law has made the EU a global leader in setting policy benchmarks for the technology.

“We are witnessing the dawn of a new era of AI governance,” Koskinen said.

“The world is closely watching how the EU will enforce these regulations, and the impact it will have on global AI practices cannot be overstated.”
