General News

Legal rules for AI in the EU


The AI Regulation is the first set of rules to establish legal standards for dealing with artificial intelligence in the European Union, but it is also likely to pose challenges for companies.

The rapid pace of technological progress has prompted the institutions of the European Union to introduce standardised regulations in the field of artificial intelligence (AI) that comply with European fundamental rights and safety requirements while also promoting innovation in the field of AI. Around three years after the European Commission's first draft, the European Parliament approved the final proposal for the regulation on 13 March 2024.

The AI Regulation applies both to users of AI systems located in the EU and to providers (or their "authorised representatives") who place AI systems on the market or put them into operation in the Union, regardless of where these providers are established. The scope of the Regulation also includes AI systems whose output is used in the EU. "Users" are natural or legal persons as well as public authorities or other bodies that use an AI system under their own responsibility, unless the AI system is used in the context of a personal and non-professional activity.

The AI Regulation is essentially based on the risk potential of AI applications, which are categorised into risk levels. As the risk increases, so do the requirements for the respective AI systems and their actors. Certain AI practices that pose an unacceptable risk are banned altogether (e.g. social scoring).


Supervision & transparency


The regulation focuses on high-risk AI systems, which must fulfil clearly defined requirements based on the risk assessment and whose actors are subject to various obligations throughout the entire AI life cycle, including the establishment of a quality management system. High-risk AI systems, such as those used in critical infrastructure, must be placed under "human supervision".

Transparency obligations apply to AI systems that are intended to interact with humans. So-called deepfakes must be labelled as such. AI systems that do not fall into a risk category can initially be operated without further requirements. In view of the penalties for non-compliance, it is advisable for providers of AI systems in particular to familiarise themselves with the requirements before the AI Regulation's obligations take effect in 2026 and to take initial compliance measures where necessary.

The AI Regulation is undoubtedly a milestone in the European AI strategy and reflects the demands of the technological age. Whether its rules can in practice guarantee safety in dealing with AI while at the same time safeguarding fundamental rights, however, remains to be seen.