INTELLIGENZA ARTIFICIALE E RESPONSABILITÀ PENALE PER DANNO DA PRODOTTO
(Artificial Intelligence and Criminal Liability for Harm Caused by Products)
BERTOLESI, RICCARDO
2020
Abstract
This thesis focuses on the relationship between the development of autonomous artificial intelligence systems ("AI systems") and criminal law. In particular, it deals with the following question: do the laws governing product liability adequately address the criminal liability issues raised by AI systems? Non-human agents, including robots and software agents, and especially those using advanced artificial intelligence, are becoming increasingly autonomous in terms of the complexity of the tasks they can perform, their potential causal impact on the world unmediated by human agents, and the diminishing ability of human agents to understand, predict, or control how they operate. From a legal perspective, one of the main problems raised by this technological development is the uncertain status of liability for the effects caused by artificial agents. In some cases, AI systems are capable of operating not only without the supervision of a human agent, but also without an entirely predefined mode of operation. As a matter of fact, robots can be programmed to act independently, to "learn" from their environment, and to experiment with new strategies that cannot be predicted in advance. The problem has legal as well as philosophical implications. On 16 February 2017, the European Parliament adopted a Resolution with recommendations to the Commission on Civil Law Rules on Robotics, calling for reflection, in relation to autonomous robots, on "whether the ordinary rules on liability are sufficient or whether it calls for new principles and rules to provide clarity on the legal liability of various actors concerning responsibility for the acts and omissions of robots". Civil law scholarship has already begun to address this problem, but in criminal law the discussion is still in its early stages. If an intelligent agent causes harm to individuals or property (e.g. killing or injuring a person, or destroying an asset), who should be held criminally responsible?
The manufacturer, the owner, or the AI system itself? The thesis addresses the issue from the perspective of the criminal liability of the manufacturer of artificial intelligence systems, through a reappraisal of the traditional category of "allowed risk". The discussion is accompanied by examples drawn from the recent spread of self-driving vehicles, to help the reader better understand the practical implications of AI systems.
File | Size | Format
---|---|---
phd_unimi_R11696.pdf (Open Access from 10/08/2021) | 3.94 MB | Adobe PDF
Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/20.500.14242/80265
URN:NBN:IT:UNIMI-80265