The Architecture of Influence: Mapping the Boundaries of Manipulation in Human-Machine Interaction

LIMONGELLI, ROCCO
2026

Abstract

This thesis examines the multifaceted legal and ethical challenges arising from potentially manipulative and deceptive practices in human-machine interaction, with a specific focus on social robots and AI-based technologies. As these technologies become increasingly integrated into everyday life, they raise profound questions about human autonomy, dignity, and the normative frameworks that should govern their development and deployment.

The research begins by establishing robust definitional and technical foundations for understanding artificial intelligence and social robotics. Artificial intelligence is conceptualised through its methodological evolution from early symbolic approaches to contemporary machine learning paradigms, including deep learning, reinforcement learning, and knowledge representation systems. Social robots are defined as embodied AI systems designed for meaningful social interaction according to cultural and contextual norms, distinguished from general-purpose robots by their focus on social communication, emotional recognition, and adherence to social expectations. This technical understanding provides essential context for the subsequent ethical and legal analysis.

Building on these foundations, the thesis develops a comprehensive taxonomy of manipulation and deception in human-machine interaction. It traces the evolution from user-centred "agreeable design" to potentially problematic behavioural manipulation, examining how AI systems may exploit cognitive biases, heuristics, and psychological vulnerabilities through their opacity, personalisation capabilities, and anthropomorphic features. The research establishes crucial conceptual distinctions between persuasion, manipulation, and deception: persuasion engages rational capacities transparently, manipulation covertly exploits decision-making vulnerabilities, and deception creates false beliefs through various mechanisms. The psychological impact of anthropomorphic design receives particular attention, with analysis of how human-like interfaces trigger automatic social responses that may bypass rational evaluation. Design strategies including vocal characteristics, facial expressions, movement patterns, personalisation algorithms, and narrative elements are examined for their role in shaping user perceptions and potentially facilitating manipulation. The thesis critically evaluates claims of "benevolent deception" in contexts such as elder care, interrogating whether manipulative practices might be justified when they purportedly serve user interests.

A significant contribution of the research is its examination of intent and responsibility in AI manipulation. The thesis challenges the notion of a "responsibility gap", demonstrating that despite the distributed nature of AI development and the emergent properties of some systems, responsibility remains attributable to developers and manufacturers on the basis of foreseeability, professional role, and established principles of product liability. This analysis has important implications for both regulatory approaches and the corporate governance of AI development.

The legal dimension of the thesis identifies the protected interests potentially harmed by manipulative AI practices, including autonomy, cognitive liberty, human dignity, and psychological wellbeing. These interests receive varying protection across legal frameworks, including constitutional and human rights law, consumer protection regulations, data protection frameworks, sectoral regulations, tort law, and contract law. The research demonstrates that the appropriate legal framework depends critically on which form of manipulation is at issue, the context in which it occurs, and the specific interests threatened.

A comprehensive comparative legal analysis reveals significant variation in regulatory approaches across jurisdictions, with particular attention to the European Union's evolving framework. The thesis examines horizontal instruments such as the Unfair Commercial Practices Directive, the General Data Protection Regulation, and the E-Commerce Directive, alongside the emerging AI-specific regulation embodied in the AI Act. The AI Act's risk-based approach and its explicit prohibition of practices that deploy manipulative or deceptive techniques or exploit specific vulnerabilities are analysed for their potential effectiveness and limitations. The research further explores comparative perspectives from diverse national contexts, identifying both convergence around core concerns and significant divergence in regulatory implementation. Common themes include recognition of the potential harm from exploitation of psychological vulnerabilities, the importance of transparency about AI capabilities, and enhanced protection for vulnerable populations. However, jurisdictions differ significantly in their conceptual framing of, and practical approaches to, these issues.

The principle of self-determination provides a foundational normative framework for evaluating these challenges. The thesis demonstrates that meaningful autonomy requires not only freedom from coercion but also adequate information, supportive choice environments, protection against exploitation of psychological vulnerabilities, and preservation of the conditions for both individual and collective self-governance. This multidimensional understanding of autonomy has significant implications for regulatory design.

The findings reveal several critical tensions with implications for future research and policy development: the tension between innovation and protection, the relationship between transparency and the effectiveness of disclosure, the balance between universal principles and contextual evaluation, and the distinction between individual and collective harms. These tensions suggest the need for regulatory frameworks that engage directly with technological architecture rather than focusing exclusively on outcomes. The thesis concludes that protecting meaningful self-determination in human-machine interaction demands multifaceted approaches addressing technical design, legal governance, economic incentives, and social norms. It advocates for the development of "legal protection by design", hybrid oversight mechanisms combining traditional legal instruments with technical standards, and autonomy-enhancing constraints that foster rather than undermine human agency. In a world increasingly mediated by intelligent systems designed to predict and influence human behaviour, these approaches are essential for ensuring that technology serves genuine human needs and values while preserving the fundamental capacity for self-determination.
8 January 2026
Italian
Keywords: AI, Deception, HRI, Law, Manipulation, Robot
BERTOLINI, ANDREA
MARIA CRISTINA GAETA
GIOVANNA CAPILLI
Files in this item:

File: v8_RL_Final_Thesis_.pdf
Embargoed until 30/05/2028
License: All rights reserved
Size: 933.06 kB
Format: Adobe PDF

Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14242/356386
The NBN code of this thesis is URN:NBN:IT:SSSUP-356386