AI Regulation: EU Parliament and Member States agree on Regulation on Artificial Intelligence

The Artificial Intelligence Act (AI Act) will regulate artificial intelligence across the EU. After months of trilogue negotiations, the European Parliament and the Council have provisionally agreed on a joint version of the regulation, which the European Commission had originally proposed in April 2021. The extensive compromise text was recently leaked; formal adoption is still pending. The regulation is to become applicable two years after its entry into force, with shorter transition periods for certain systems.

What is the AI Act?

With the AI Act, the European Commission is expanding its digital strategy, which also includes the Digital Services Act (DSA) and the Digital Markets Act (DMA). Based on its internal market competence, the Commission has drawn up a regulation specifically tailored to AI applications. As with the planned AI Liability Directive (COM(2022) 496), it is thus departing from its previous soft-law approach in this area. EU citizens are to benefit from AI and be effectively protected from its risks, while incentives for innovation are to be preserved as far as possible.

The AI Act covers AI systems in both the private and public sectors, for example in advertising, recruitment and the justice system. The Union legislator justifies the need for regulation with so-called black-box effects and the potential for abuse and manipulation. It divides AI systems into different risk classes and attaches corresponding requirements to each. These requirements are aimed primarily at the providers and users of such systems, but in part also at manufacturers and retailers of AI products.

Which requirements apply to which AI systems?

AI applications are classified according to their risk level and made subject to correspondingly graded requirements.

According to the regulation, some practices pose an unacceptable risk to fundamental rights and are prohibited outright (“Prohibited Artificial Intelligence Practices”). These include applications that exploit the vulnerabilities of particularly vulnerable groups in order to harmfully influence their behavior. Social scoring systems and person-based predictive policing systems are generally prohibited.

High-risk AI systems are permitted under the regulation but heavily regulated. These include, for example, biometric systems and systems used in healthcare, education or human resources, in critical infrastructure or in public administration. Before placing such systems on the market, providers must, among other things, undergo a conformity assessment, comply with specific data-security and human-oversight requirements, and carry out a fundamental rights impact assessment.

Specific transparency obligations apply to lower-risk systems that interact with humans (e.g. chatbots or text-to-image generators). For example, deepfakes, i.e. AI-manipulated images or videos of supposedly real people or events, must be labeled as such. The topic recently attracted heightened public debate in connection with fake nude images of Taylor Swift.

No binding obligations apply to systems classified as low-risk. Developers of AI foundation models (e.g. OpenAI with GPT-4, Meta with Llama 2 and Google with Gemini) are nevertheless subject to a number of obligations. Among other things, they must produce technical documentation and put a copyright-compliance policy in place.

What exceptions are provided?

Exemptions may apply to biometric systems on internal security grounds, for example in combating terrorism. To ensure that small and medium-sized enterprises and start-ups are not overly burdened by the regulation and remain competitive with large tech companies, it provides relief measures for them, such as “AI regulatory sandboxes”, i.e. controlled environments for developing and testing AI systems under regulatory supervision.

What are the consequences of violating the requirements?

The member states designate the supervisory authorities responsible for monitoring and enforcing the regulation. In addition, an AI authority (the European AI Office) will be set up within the Commission and will be responsible for foundation models. For breaches of the respective obligations, fines of between EUR 7.5 million and EUR 35 million, or between 1.5% and 7% of annual global turnover, are envisaged.

(31 January 2024)