
Perception-Informed Adaptation for Physically Interactive Robots in Autonomous and Collaborative Settings

OZDAMAR, IDIL
2026

Abstract

Advances in perception, learning, and compliant control are enabling robots to interact with the physical world in ways previously limited to human dexterity. Real progress now lies in blending these capabilities into unified systems that respond intelligently to uncertainty and real-time feedback. How to achieve such integration remains unresolved, a challenge that this thesis approaches by asking: how can physically interactive robots be endowed with adaptive perception–action capabilities that allow them to interpret human intent, understand environmental context, and regulate contact dynamics in uncertain and unstructured settings? The thesis begins by examining scenarios where human perception itself is severely constrained, making the robot responsible for providing both motion assistance and environmental awareness. This challenge is most pronounced in assistive guidance for visually impaired users, where the human partner cannot independently perceive the surrounding environment or anticipate hazards within it. To address this, an adaptive robot-assisted 3D navigation framework was developed, integrating human tracking, adaptive impedance control, and three-dimensional path planning to deliver responsive physical guidance. By combining human-state monitoring with obstacle perception, the system continuously generates safe trajectories and avoids static and dynamic obstacles at varying heights through adaptive guidance. The research further extends to interaction scenarios in which the human's perceptual limitations arise not from impairment but from the task itself. In collaborative object transportation, large, bulky objects can obstruct the operator's field of view, restricting awareness and elevating the risk of collision. These conditions demand not only adaptive control but also shared perceptual reasoning between human and robot.
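As an illustration of how adaptive impedance control can couple human-state monitoring to physical guidance, consider the following minimal sketch. The gain schedule, the `human_confidence` signal, and all function names are hypothetical conveniences for this example, not the formulation used in the thesis:

```python
import numpy as np

def impedance_force(x, v, x_des, v_des, stiffness, damping):
    """Cartesian impedance law: spring-damper force pulling the
    end-effector toward the desired guidance trajectory."""
    return stiffness * (x_des - x) + damping * (v_des - v)

def adapt_stiffness(k_min, k_max, human_confidence):
    """Hypothetical schedule: soften guidance when the tracked human
    state is uncertain, stiffen it when tracking is reliable."""
    return k_min + (k_max - k_min) * np.clip(human_confidence, 0.0, 1.0)

# Confident tracking -> stiffer, more assertive guidance
k = adapt_stiffness(50.0, 400.0, human_confidence=0.9)   # 365.0 N/m
f = impedance_force(x=np.array([0.0, 0.0, 1.0]),
                    v=np.zeros(3),
                    x_des=np.array([0.1, 0.0, 1.0]),     # 10 cm ahead in x
                    v_des=np.zeros(3),
                    stiffness=k,
                    damping=2.0 * np.sqrt(k))            # critically damped (unit mass)
# f = [36.5, 0.0, 0.0] N: a gentle pull along the planned path
```

The interpolation between a soft and a stiff gain is one simple way to realize "adaptive" guidance; the actual adaptation law in the thesis is not reproduced here.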
To support such cooperation, a perception-informed collaborative transportation framework was developed in which the robot augments its partner's awareness by sensing and reacting to environmental conditions that may fall outside the human's attention or field of observation. By merging vibrotactile feedback that informs the human about nearby obstacles with proactive robot behaviour that constrains unsafe motions, the framework enhances shared awareness and supports reliable cooperation. However, introducing an object into the interaction has an important consequence: friction distorts the forces that reveal human intent, making alternative perceptual cues essential. To address this, a context-aware collaboration strategy is proposed in which the robot first assesses frictional surface forces and then combines this information with human motion intention estimation to achieve synchronized collaboration with the human partner. These contributions highlight how complementary perceptual modalities can be fused to maintain coordinated, reliable interaction in human-robot partnerships. While the previous frameworks emphasized shared perception and cooperative awareness between humans and robots in contact-rich settings, such interactions are not limited to human involvement. Among these forms of physical contact, pushing is a key motion primitive that extends robotic manipulation capabilities, particularly for tasks such as clearing paths or transporting large, unwieldy, or ungraspable objects. In such scenarios, unpredictability arises from unknown object properties, friction, and complex contact dynamics. Unlike prior studies that assume a priori knowledge of such information or rely on simplifications that fail to capture real-world variability, this thesis introduces a reactive non-prehensile manipulation strategy that uses tactile perception from the robot's pushing region to regulate motion without requiring any object-specific prior information.
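The idea of separating frictional surface forces from the operator's intent can be pictured with a simple Coulomb-friction decomposition. This is an illustrative sketch only: the friction model, function names, and force decomposition are assumptions for the example, not the context-aware strategy actually proposed in the thesis:

```python
import numpy as np

def estimate_intent_force(f_measured, normal_force, velocity, mu):
    """Remove an estimated kinetic (Coulomb) friction component from the
    measured interaction force, so the residual better reflects the
    human partner's motion intent."""
    speed = np.linalg.norm(velocity)
    if speed < 1e-9:
        return f_measured.copy()          # no sliding: no kinetic friction to remove
    f_friction = -mu * normal_force * (velocity / speed)  # friction opposes sliding
    return f_measured - f_friction

# Object slides along +x; friction "hides" part of the human's push.
intent = estimate_intent_force(f_measured=np.array([5.0, 0.0, 0.0]),
                               normal_force=10.0,   # N pressing on the surface
                               velocity=np.array([1.0, 0.0, 0.0]),
                               mu=0.3)
# intent = [8.0, 0.0, 0.0] N: the human is pushing harder than the sensor
# reading alone suggests, because 3 N went into overcoming friction
```

The residual intent estimate could then feed an intention estimator or admittance controller; here it only demonstrates why friction must be accounted for before interaction forces are interpreted as commands.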
The strategy allows the robot to dynamically adjust its motion and achieve robust object transport even toward targets located behind the robot, a case often excluded in the existing literature. Since equipping different robots with identical perception capabilities is rarely feasible, transferring control strategies between platforms remains a key challenge. A learning-based cross-modal substitution approach developed in this thesis shows that the tactile-based reactive controller can also operate effectively with alternative sensing modalities. The method preserves consistent manipulation performance across diverse sensing conditions and opens new possibilities for generalizing other control strategies beyond their specific sensor dependencies. Together, these contributions provide insights for developing frameworks for perception-informed adaptation. They detail how human-state and environment monitoring, frictional context, and inferred contact conditions can be systematically interpreted and embedded into control laws that regulate contact behaviour, whether the robot interacts directly with a human, indirectly through a jointly manipulated object, or with an object alone, each case necessitating a different form of real-time adaptation. The resulting principles have potential relevance across application areas, including industrial co-manipulation, logistics, and assistive mobility, indicating directions for developing physically interactive robots that are situationally aware, robust to sensing variability, and responsive.
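One way to picture a tactile-driven reactive pusher is through the pressure distribution on the pushing surface: if the contact centroid drifts toward one edge, the object is escaping that way, and the robot steers to re-center it. This is a toy sketch under an assumed 1-D taxel geometry and sign convention; it is not the controller developed in the thesis:

```python
import numpy as np

def contact_centroid(pressure, taxel_coords):
    """Pressure-weighted centroid of contact along the pushing surface
    (metres, relative to the surface centre)."""
    total = pressure.sum()
    if total <= 0.0:
        return 0.0                 # no contact detected
    return float((pressure * taxel_coords).sum() / total)

def steering_correction(centroid_offset, gain=2.0):
    """Yaw-rate command that turns toward the drifting object to
    re-centre the contact (sign convention assumed for the example)."""
    return -gain * centroid_offset

# Five taxels spanning 4 cm of pusher; contact biased toward the +y edge
coords = np.array([-0.02, -0.01, 0.0, 0.01, 0.02])
pressure = np.array([0.0, 0.0, 0.2, 0.5, 0.3])
offset = contact_centroid(pressure, coords)    # 0.011 m off-centre
omega = steering_correction(offset)            # -0.022 rad/s corrective yaw
```

Because the feedback signal is just "where is the object on my pusher", no object-specific model is needed, which is the property the cross-modal substitution approach then exploits by reconstructing an equivalent signal from other sensors.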
26 February 2026
English
Dr. Arash Ajoudani
MASSOBRIO, PAOLO
Università degli studi di Genova
Files in this product:

File: phdunige_5095894.pdf
Embargo until 26/02/2027
License: All rights reserved
Size: 13.8 MB
Format: Adobe PDF

Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14242/359753
The NBN code of this thesis is URN:NBN:IT:UNIGE-359753