Info Magazine LGP NEWS 02/2021

Who is liable for the crimes committed by AI?

Artificial intelligence (AI) is omnipresent today. We encounter it in the operating theatre in the form of robot surgeons, as autonomous vehicles on the road, or in court as a decision-making aid for judges. While AI technologies are developing relentlessly and gaining ever greater practical importance, legislators are lagging behind in establishing a legal framework. 

Although AI is a constant presence in our digitalised lives and can certainly play a significant role in a greener future, it is currently barely regulated. De facto, there are no legal provisions that specifically regulate artificial intelligence. A start has been made at the European level, where the EU Commission has proposed “Measures to promote excellence in AI and regulations” as a basis for future regulation and has categorised AI into a multi-level risk pyramid. In this way, the Commission wishes to define clearly what AI should and should not be permitted to do. Social scoring as practised in China, for example, would be impermissible under this framework. AI systems such as chatbots would be classified as limited risk and thus permitted under certain conditions (user consent). AI used in critical infrastructure such as transport, or as a safety component of products (robot-assisted surgery), would, however, be classified as high risk. 

In addition to the planned regulation, the Commission has also convened a panel of specialists: the High-Level Expert Group on AI (AI HLEG), which focuses on seven key requirements for the ethical handling of AI. These include privacy and data governance as well as the pursuit of societal and environmental well-being (keyword: green energy). These key requirements have not yet been translated into legally binding regulations. 

Legal framework 

The lack of legal provisions raises some fundamental legal questions: Who is liable if an AI makes a mistake? The manufacturer, the importer or perhaps the AI itself? How should the data protection situation be assessed? Is all data protected? It also raises the intriguing question of whether AI can in fact actively contribute to a sustainable and more environmentally friendly future. 

Recourse to existing laws is necessary in order to answer these critical questions. In liability cases, the Railway and Motor Vehicle Liability Act (EKHG) or the Product Liability Act (PHG) would be applicable. The latter, however, only applies if an AI system qualifies as a product, a point on which opinions differ widely. If it does qualify, and a person is killed, physically injured or otherwise harmed by a product defect, then either the manufacturer or the importer of the AI is liable for compensation. Problems arise if a company manufactures the hardware for its product but has the AI software supplied by a third party. A good example would be a car manufacturer who procures the car’s AI system externally. Who is then liable if the AI fails? The supplier of the AI, or the company that brings the finished car to market? There are also discussions on whether the AI itself should be made liable by assigning it its own legal identity: a so-called e-person. 

It is relatively easy to explain why there has been no need for an e-person until now: the so-called autonomous action of an AI is currently limited and can almost always be traced back to human behaviour, because an AI system must be programmed by humans. Technology has not (yet) reached the point where an AI system becomes so autonomous that it no longer needs a human component. The concept of the e-person should, however, be kept in mind for the future, as it cannot be ruled out that AI systems will become even more autonomous. 

Manufacturers of a component can also be liable for defects. The example of a car with AI software can once again serve to illustrate this. If an error in the AI leads to damage and can be clearly distinguished from the hardware, the manufacturer of the AI can be held responsible. If a mechanical part of the car is faulty and this leads to damage, the car manufacturer will be held liable. However, where the boundaries between the interactions of AI, hardware and other third-party data blur, it can become very difficult or even impossible to identify a responsible party. Within the current legal framework, joint and several liability would be the only solution. Or, preferably, new laws could be enacted that clearly regulate these cases. 

Provisions for data protection 

The issue of data protection also plays a major role in connection with AI. Simply put, AI is characterised by the collection of huge amounts of data (big data). It is therefore more important than ever to comply with the provisions of the GDPR when processing personal data through AI. According to Art 6 GDPR, the processing of personal data always requires a legal basis, such as the consent of the data subject or an overriding legitimate interest on the part of the controller. It would be conceivable, and quite problematic, if an AI autonomously collected “new” data not covered by any of these legal bases. The functioning of AI could also conflict with the principle of data minimisation under Art 5(1)(c) GDPR. After all, AI systems need vast amounts of data in order to function and develop further. Also worth considering is compatibility with Art 17 GDPR and the right to erasure, or the right to be forgotten. Can an AI delete or forget data at all? And if this were possible, could the deleted data ever be restored? In data protection law, too, AI thus raises many questions to which there is still no legal answer. 

AI also plays an important role in climate protection and ecology. Does the use of AI actually have a positive impact on our climate or on achieving the climate goals formulated by the EU? The answer is ambivalent: artificial intelligence can contribute significantly to climate protection. It is already being used, for example, in agriculture and to prevent illegal deforestation. However, it would be wrong to assume that artificial intelligence automatically leads to greater climate protection. Here, too, sustainability must be consciously prioritised and appropriate framework conditions put in place. After all, AI applications require large amounts of energy and resources and could therefore increase global electricity consumption. 

In conclusion, many questions remain unanswered within the current legal framework. However, it is also clear that AI development has outpaced the legislator. While recourse to existing laws is possible to a certain extent, there is a lack of new, more specific laws that directly target and explicitly regulate AI systems. 


Dr. Julia Andras, Attorney-at-Law and Managing Partner at LANSKY, GANZGER + partner
(from left to right: Clarissa Wölfer, Philip Grünberger, Marie Posch)

This year, LGP once again employed summer interns who were able to get a taste of the everyday life of a large law firm. This time, Clarissa Wölfer, Marie Posch and Philip Grünberger, among others, completed their internship at LGP. The three students dealt with the exciting and promising topic of “artificial intelligence” and the relevant legal issues it raises. Artificial intelligence not only has moral, ethical and technical aspects, but also raises numerous legal questions – especially in the area of liability for criminal offences or damages, but also in the area of data protection. LGP is pleased and proud to present this jointly authored article.
