Human technology interaction: Financial decision making and delegation to algorithms

Ismagilova, Zilia
2023

Abstract

This doctoral thesis consists of three essays in the field of human-technology interaction, examined through the lens of behavioural and experimental economics. The three essays represent three strands that illuminate human-machine interaction from different angles. The first essay contributes to the study of human-machine relations by addressing how a relative lack of resources affects human judgment and decision-making in the financial domain. It discusses how policy can leverage emerging technologies to design choice architectures that support more risk-aware decision-making by vulnerable socioeconomic groups. It further discusses how behavioural policy initiatives aimed at helping resource-deprived individuals make better financial decisions might be assisted by recent developments in Artificial Intelligence (AI), and the ethical considerations these raise. The second essay focuses on individual decision-making in a risky environment with algorithmic assistance. Through an online experiment, it investigates how humans cognitively offload tasks to algorithms under different time constraints. The results demonstrate that the presence of an AI assistant benefits decision-making only when its accuracy is high. The third essay continues the investigation of human-technology interactions, focusing on how information about the outcome of an investment affects participants' reward and punishment behaviour, depending on whether they interact with a human or an algorithmic agent. Specifically, I conduct an experiment investigating the interaction between outcome bias and human/algorithm responsibility.
6 July 2023
English
decision-making, SES, AI, nudging, choice architecture, online experiment, algorithm, shift blame, delegation decisions
Ploner, Matteo
Università degli studi di Trento
TRENTO
126
Files in this item:
Thesis Zilia Ismagilova.pdf (open access, 3.35 MB, Adobe PDF)
Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14242/61168
The NBN code of this thesis is URN:NBN:IT:UNITN-61168