
The Reciprocity of Trust: From Psychological Foundations to Social Robots Implementation

SACCO, FEDERICA
2026

Abstract

This thesis is situated within the multidisciplinary field of social robotics, a domain that intersects psychology, cognitive science, artificial intelligence, and robotics engineering. Its primary aim is to investigate human-robot interaction (HRI) and the perception of Artificial Intelligence (AI) systems through an integrated framework that combines empirical psychological research with computational modeling. As robots and AI technologies increasingly enter work, educational, and care contexts, fundamental questions arise regarding trust formation, social acceptance, cognitive representation, and ethical integration. Rather than approaching these issues from a purely engineering-driven or philosophical perspective, this research adopts an empirical, data-driven, and interdisciplinary methodology to propose a pragmatic, operational framework for understanding and designing socially aware AI systems. The thesis is structured around three complementary studies, each addressing human-AI relations at a different level of analysis: psychometric, computational, and applied-educational.

The first study, psychometric and social in nature, validated the General Attitudes Towards Artificial Intelligence Scale (GAAIS) in the Italian cultural context. Given the rapid diffusion of AI technologies and the scarcity of validated instruments in Italy, the study aimed to provide a reliable tool for measuring public attitudes. The scale was administered to a large, heterogeneous sample, and confirmatory factor analysis supported the original two-factor structure, confirming its internal consistency and structural robustness. Further correlational and regression analyses revealed that attitudes toward AI are not homogeneous but vary significantly with demographic variables (particularly gender and age) and with psychological dimensions such as epistemic trust.
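As a concrete illustration of the kind of internal-consistency check reported for the scale, the sketch below computes Cronbach's alpha on simulated Likert-type responses. The simulated data, the six-item subscale size, and the random seed are illustrative assumptions, not the thesis's actual GAAIS dataset or results.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 1-5 Likert responses for a six-item subscale: each respondent's
# answers are driven by one latent attitude plus item-level noise.
rng = np.random.default_rng(0)
latent = rng.normal(3.0, 1.0, size=(200, 1))
responses = np.clip(np.round(latent + rng.normal(0.0, 0.7, size=(200, 6))), 1, 5)

alpha = cronbach_alpha(responses)
print(round(alpha, 2))
```

Because every simulated item loads on the same latent factor, alpha comes out high here; with real multi-factor data the check would be run separately per subscale.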
These findings highlight cognitive and socio-cultural biases that influence AI acceptance, and they provide a validated instrument for future Italian research on technology perception and digital transformation.

The second study shifted the focus from human perception to artificial cognition, addressing a core challenge in social robotics: the generalization of trust in dynamic and uncertain environments. While humans form trust judgments from limited interactions and extend them to new but similar agents, artificial systems often struggle with this generalization. The study hypothesized that learning trust from sparse interaction data is a critical bottleneck in adaptive human-robot collaboration. To address this, a novel cognitive architecture was developed, integrating perception, categorization, and trust-learning mechanisms. The system employed computer vision algorithms, specifically Haar cascade classifiers for face detection and YOLOv8 for object recognition, to identify and classify informants into archetypal categories. Experimental simulations were conducted with a virtual robot interacting with archetype-based informants across three phases: familiarization, decision-making, and trust generalization. Results demonstrated that the architecture successfully learned trust patterns and generalized them to novel agents sharing similar features, providing a computational model for developing more resilient, context-sensitive, and socially adaptive robots capable of functioning in complex, real-world environments.

The third study explored the applied dimension of social robotics, investigating the acceptance of humanoid robots in educational contexts. Using qualitative methods (focus groups and semi-structured interviews), 17 teachers evaluated the potential of the NAO humanoid robot in classroom settings. Thematic analysis revealed predominantly positive perceptions.
Teachers viewed the robot not as a replacement for human educators but as a supportive tool capable of enhancing engagement, sustaining attention, assisting students with learning difficulties, and facilitating group management. An especially relevant finding concerned the integration of the robot with a Large Language Model (LLM) such as ChatGPT. Participants perceived this integration as transformative: it reduces programming complexity, enables real-time personalization of activities, and allows dynamic interaction adapted to students' needs. This suggests that the combination of embodied robotics and advanced language models may represent a promising direction for future educational technologies.

Taken together, the three studies offer a multi-layered contribution to the understanding of human-AI relations. At the societal level, the thesis validates a psychometric instrument for measuring attitudes toward AI in Italy. At the computational level, it proposes an innovative architecture addressing trust learning and generalization in artificial systems. At the applied level, it provides empirical insights into the practical and pedagogical implications of social robots in education. Overall, the findings delineate concrete steps toward safer, more trustworthy, and socially responsible AI systems. By integrating psychological measurement, cognitive modeling, and applied research, this thesis contributes to shaping a more adaptive and ethically informed coexistence between humans and intelligent machines in contemporary society.
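The familiarization / decision-making / generalization loop of the second study can be sketched, under strongly simplified assumptions, as category-level trust estimation. The archetype names, the running-accuracy update, and the prior-based fallback below are illustrative choices for exposition, not the architecture actually implemented in the thesis.

```python
from collections import defaultdict

class TrustModel:
    """Toy category-level trust learner: trust in an archetype is the
    smoothed running accuracy of informants classified into that archetype."""

    def __init__(self, prior: float = 0.5):
        self.prior = prior
        self.stats = defaultdict(lambda: [0, 0])  # category -> [correct, total]

    def familiarize(self, category: str, informant_was_correct: bool) -> None:
        """Familiarization phase: record one observed outcome for a category."""
        correct, total = self.stats[category]
        self.stats[category] = [correct + int(informant_was_correct), total + 1]

    def trust(self, category: str) -> float:
        """Decision phase: trust estimate for a (possibly novel) informant.
        Unseen categories fall back toward the prior, so a novel informant
        inherits the trust of the archetype it is classified into -- this is
        the generalization step."""
        correct, total = self.stats[category]
        return (correct + self.prior) / (total + 1)

model = TrustModel()
for outcome in [True, True, True, False]:    # mostly reliable "teacher" archetype
    model.familiarize("teacher", outcome)
for outcome in [False, False, True, False]:  # mostly unreliable "trickster" archetype
    model.familiarize("trickster", outcome)

# A novel informant classified as "teacher" inherits the learned trust.
print(model.trust("teacher") > model.trust("trickster"))  # True
```

In the thesis, the classification step feeding this loop is performed by the vision pipeline (Haar cascade face detection plus YOLOv8 object recognition); here the category label is simply given as a string.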
18 March 2026
Language: English
artificial intelligence
social robots
theory of mind
Marchetti, Antonella
Files in this record:
PhD_THESIS_SACCO_FEDERICA.pdf

Under embargo until 20/03/2029

License: Creative Commons
Size: 2.99 MB
Format: Adobe PDF

Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14242/362315
The NBN code of this thesis is URN:NBN:IT:UNIPI-362315