AI Act definitive text endorsed by EU Member States

February 2, 2024
Copyright and AI

On February 2nd, 2024, the Committee of Permanent Representatives of the European Union (COREPER), made up of the ambassadors of the 27 EU Member States, adopted the definitive text of the AI Act.

Background

A compromise position was reached in December between the European Parliament, the Commission and the Council (the trilogue), but negotiations on a definitive text were still ongoing up to last week. Some of the leading countries in the EU, notably France (Germany and Italy also weighed in at one point), considered the political agreement reached in December too restrictive and supported lower transparency requirements in the AI Act (e.g. non-binding codes of conduct) to make training AI models easier for European AI companies. This position raised serious concerns amongst rightsholders, including GESAC, which advocated for the inclusion of transparency obligations for providers of general-purpose AI models. France dropped its reservations just before the COREPER meeting, subject to certain conditions, including that the regulation balance transparency against the protection of trade secrets and not impose overburdening obligations on companies. Germany also said that it would scrutinise implementation to ensure that it met its requirements.

What it entails

One of the cornerstones of the AI Act is that AI systems must comply with EU copyright law. The final agreement states that providers of general-purpose AI models will need “to put in place a policy to respect Union copyright law, as well as make publicly available a sufficiently detailed summary about the content used for training of the general purpose AI model, based on a template provided by the AI Office”, without prejudice to trade secrets. The text also introduces a disclosure requirement on providers to “embed technical solutions that enable marking in a machine-readable format and detection that the output has been generated or manipulated by an AI system and not a human” (Recital 70a).
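
To make “marking in a machine-readable format” concrete: one simple (if easily stripped) approach is to attach provenance metadata to generated files. The sketch below is purely illustrative, not a mechanism mandated by the Act; it assumes the Pillow imaging library, and the metadata keys and model name are invented for the example.

    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    def mark_as_ai_generated(input_path: str, output_path: str) -> None:
        """Embed a machine-readable provenance marker in a PNG's metadata."""
        image = Image.open(input_path)
        metadata = PngInfo()
        # Invented keys: the AI Act does not prescribe a specific format.
        metadata.add_text("ai_generated", "true")
        metadata.add_text("generator", "example-model-v1")
        image.save(output_path, pnginfo=metadata)

    def detect_ai_marker(path: str) -> bool:
        """Check for the marker embedded by mark_as_ai_generated."""
        image = Image.open(path)
        return getattr(image, "text", {}).get("ai_generated") == "true"

In practice a plain metadata tag is trivially removed, so industry efforts such as the C2PA content-credentials standard pursue tamper-evident provenance, which is closer to the robust marking the provision appears to envisage.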

The rules for general-purpose AI models (GPAI) were introduced into the text last year in response to the rise of foundation models, such as large language models (“LLMs”). The specific obligations for these models can be grouped into the following four categories, with additional obligations for those general-purpose AI models that entail systemic risks:

  • drawing up and keeping up to date technical documentation of the model, including the training and testing process and evaluation results;
  • providing transparency to downstream system providers looking to integrate the model into their own AI systems;
  • putting in place a policy for compliance with copyright law;
  • publishing a detailed summary of the training data used in the model’s development (see the sketch after this list).
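
The AI Office’s template has not yet been published, so its structure is unknown. Purely as a hypothetical illustration of what a “sufficiently detailed summary” of training content might cover, here is a sketch in Python; every field name is an assumption of this example, not language from the Act.

    # Hypothetical training-content summary; all field names are invented.
    training_content_summary = {
        "model_name": "example-model-v1",
        "data_collection_period": "2020-2023",
        "data_sources": [
            {
                "category": "public web crawl",
                "description": "publicly accessible web pages",
                "copyright_measures": "text-and-data-mining opt-outs honoured",
            },
            {
                "category": "licensed corpora",
                "description": "datasets licensed from rightsholders",
                "copyright_measures": "covered by licence agreements",
            },
        ],
    }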

Providers of GPAI models with systemic risk must also perform model evaluations, implement risk assessment and mitigation measures, maintain incident response and reporting procedures, and ensure an adequate level of cybersecurity protection. The Commission may adopt delegated acts to amend the thresholds that determine when a model poses systemic risk. The AI Act’s rules target the general-purpose “models” that underpin AI tools: not the customer-facing apps, but the software architecture integrated into different providers’ products. Developers of these models, such as those powering ChatGPT or Google’s Bard, will have to keep detailed technical documentation, help the companies or people deploying their models understand the tools’ functionality and limits, provide a summary of the copyrighted material (such as texts or images) used to train the models, and cooperate with the European Commission and the national enforcement authorities on compliance with the rulebook.

Some general-purpose models are labelled a “systemic risk” on account of their power and reach, e.g. an ability to precipitate catastrophic events. Developers of these systems will also have to put mitigation strategies in place and pass on details of any incident to the Commission’s new “AI Office”, which is lined up to police the rules.

Role of Member States and the Commission

Member States hold a key role in applying and enforcing the AI Act. Each Member State must designate one or more national competent authorities to supervise the application and implementation of the AI Act and to carry out market surveillance activities. Each Member State is also expected to designate one national representative to sit on the soon-to-be-established European Artificial Intelligence Board.

In addition, a new European AI Office will be established within the Commission; it will supervise general-purpose AI models, cooperate with the European Artificial Intelligence Board, and be supported by a scientific panel of independent experts.

The new entity will be primarily responsible for supervising compliance by providers of GPAI models, while also playing a supporting role in other aspects, such as the enforcement of the rules on AI systems by national authorities.

Rightsholder organisations in Brussels (including GESAC) expect to have a role to play in providing input to the European AI Office.

Penalties and enforcement

The AI Act comes with strong enforcement powers and potentially high penalties. Penalties must be effective, proportionate and dissuasive, and can significantly impact businesses. They range from €7.5m to €35m, or 1.5% to 7% of global annual turnover (whichever is higher), depending on the severity of the infringement.
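
As a worked illustration of the “whichever is higher” rule (the company figures below are invented, not drawn from the Act):

    def ai_act_fine(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
        """Return the higher of the fixed cap and the turnover-based amount."""
        return max(fixed_cap_eur, turnover_eur * pct)

    # Hypothetical company with EUR 2bn global annual turnover at the top
    # tier (EUR 35m or 7%): 7% of EUR 2bn = EUR 140m, which exceeds EUR 35m.
    print(ai_act_fine(2_000_000_000, 35_000_000, 0.07))  # 140000000.0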

Next steps in adoption and timeline

Following the endorsement of February 2nd, the European Parliament has scheduled meetings of its Committees on Civil Liberties, Justice and Home Affairs (LIBE) and on the Internal Market and Consumer Protection (IMCO) for 14 and 15 February 2024, in preparation for the plenary vote of the full Parliament. This will be followed by a final legal-linguistic revision by both co-legislators, with lawyers and linguists cleaning up the text. Formal adoption by the Council and the Parliament is expected at the end of February or the beginning of March. Once approved, the EU’s AI Act will become the world’s first comprehensive law on AI, setting the standard for AI regulatory frameworks globally.

Following its publication, it can be expected that the AI Act (as now drafted) will enter into force before the summer break. Some provisions, notably the prohibitions on certain AI practices, will become mandatory six months later, i.e. around the end of the year. Most other provisions will then be phased in gradually in the course of 2025 and 2026.
