
Robots as Moral Agents. The Effect of Action Consequence Valence on Moral Responsibility and Intentionality Attributions

O'REILLY, ZIGGY
2025

Abstract

It is increasingly important to investigate which factors could influence judgements about artificial agents’ capacities, as these judgements could shape public opinion and policy-making. In this thesis I investigate, using text vignettes, whether the valence of a robot’s action consequence influences judgements of intentional action and moral responsibility. I also investigate whether the type of agent (human vs. robot), mentalistic descriptions of an agent’s actions, and pre-exposure type (images vs. text) further modulate these judgements. First, I ran a series of studies using an adaptation of text vignettes that have been shown in the literature to yield the ‘side-effect effect.’ These vignettes are referred to throughout the thesis as ‘side-effect effect text vignettes.’ Second, I ran a series of studies with modified text vignettes from which the mentalistic descriptions of the agent were removed. These vignettes are henceforth referred to as ‘modified vignettes.’ In the first series of studies, using the side-effect effect text vignettes, we found that when participants were pre-exposed to textual descriptions of the humanoid robot iCub, they judged the moral responsibility of the human and robot agents similarly: they attributed more moral responsibility to both agents when their actions led to negative consequences than when they led to positive consequences. However, when participants were pre-exposed to images of a humanoid robot, there was no effect of the valence of the action consequence on moral responsibility ratings. For intentional action ratings, we found that the valence of the action consequence did not influence ratings, regardless of whether participants were pre-exposed to textual descriptions or to images of the humanoid robot.
This pattern of results differed for the human agent: participants attributed higher intention to the human agent’s action when it resulted in negative consequences than when it resulted in positive consequences. This could be because intentional action is more easily dissociated from moral responsibility when evaluating robot actions than when evaluating human actions. We also found that the side-effect effect text vignettes increased participants’ tendency to adopt the intentional stance towards the robot, which could be due to the mentalistic language in the vignettes. In the second series of studies, we removed the mentalistic descriptions of the agent from the text vignettes. With these modified text vignettes, we found that participants judged the intentionality of the human and robot agents’ actions similarly, regardless of pre-exposure type: intention scores for actions that led to positive consequences were higher than for actions that led to negative consequences. This pattern was reflected in praise/blame ratings once we modified the moral responsibility question to enhance clarity. Specifically, regardless of pre-exposure type, participants attributed more praise to the robot and human agents when their actions led to positive consequences than blame when their actions led to negative consequences. Interestingly, this pattern is the reverse of that observed with the side-effect effect vignettes. We suggest that this reversal could be due to participants defaulting to charitable explanations of the agent’s actions when explicit references to mental states are removed. Thus, the results presented in this thesis show that the presence of mentalistic descriptions and the valence of action consequences can influence the degree to which individuals attribute intention and moral responsibility to a robot’s actions.
These findings are particularly relevant for the field of AI and social robotics, as they highlight that the narrative framing of an artificial agent’s behaviour could influence public perception of its moral responsibility and the intent behind its actions.
21 March 2025
English
Università degli Studi di Torino
File: Final_Robots as Moral Agents_Thesis_ZiggyOReilly.pdf (open access, 2.91 MB, Adobe PDF)

Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14242/200543
The NBN code of this thesis is URN:NBN:IT:UNITO-200543