Reuters

AI models with systemic risks given pointers on how to comply with EU AI rules

By Foo Yun Chee

BRUSSELS (Reuters) - The European Commission set out guidelines on Friday to help AI models it has determined pose systemic risks, and which therefore face tougher obligations to mitigate potential threats, comply with the European Union's artificial intelligence regulation (the AI Act).

The move aims to counter criticism from some companies about the AI Act and its regulatory burden, while providing more clarity to businesses that face fines ranging from 7.5 million euros ($8.7 million) or 1.5% of turnover to 35 million euros or 7% of global turnover for violations.

The AI Act, which became law last year, will apply from Aug. 2 to AI models with systemic risks and foundation models such as those made by Google, OpenAI, Meta Platforms, Anthropic and Mistral. Companies have until Aug. 2 next year to comply with the legislation.

The Commission defines AI models with systemic risk as those with very advanced computing capabilities that could have a significant impact on public health, safety, fundamental rights or society.

The first group of models will have to carry out model evaluations, assess and mitigate risks, conduct adversarial testing, report serious incidents to the Commission and ensure adequate cybersecurity protection against theft and misuse.

General-purpose AI (GPAI) or foundation models will be subject to transparency requirements such as drawing up technical documentation, adopting copyright policies and providing detailed summaries about the content used for algorithm training.

"With today's guidelines, the Commission supports the smooth and effective application of the AI Act," EU tech chief Henna Virkkunen said in a statement.

($1 = 0.8597 euros)

(Reporting by Foo Yun Chee; Editing by Elaine Hardcastle)
