
Design and Integration of Multi-Modal Sensor Systems for Enhanced Robotic Manipulation: Applications in Grasping & Control Across Adaptive Grippers and Robotic Platforms

FAZAL, TANZEEL AHMAD
2026

Abstract

This thesis presents the design, development, and application of three new variants of multi-modal sensor systems, each created to address key challenges in robotic manipulation, perception, and human-robot interaction. Modern robots are increasingly expected to operate in unstructured, sensitive, and shared environments. To perform effectively in such contexts, they require compact, high-resolution, and versatile sensing platforms that combine tactile, proximity, and orientation/posture data. This research responds to that need through a step-by-step development process in which each sensor builds on the previous design to progressively expand capability and application. The first prototype integrates eight optoelectronic components, a time-of-flight (ToF) sensor for depth perception, and a 6-axis Inertial Measurement Unit (IMU). It provided the foundation for multimodal sensing, enabling the collection of proximity, motion, and acceleration data. This early platform served as a testbed for calibration, signal integration, and basic interaction tasks, establishing the principles for sensor fusion in robotic systems. The second variant extended these capabilities through a 4×3 matrix array of optoelectronic elements with enhanced spatial resolution, combined with a 6-axis IMU and a modular interface. Its adaptable design allows mounting on a wide range of robotic grippers, including both commercial devices and custom in-lab prototypes. This version was validated in three distinct domains:
1. Delicate object handling, such as fresh food, where sensitive force control was required.
2. Human-Robot Interaction (HRI), where the system enabled safer, more intuitive collaboration between humans and robots.
3. Satellite robotics, where the IMU alone provided the feedback used to enhance attitude control.
These applications confirmed the sensor’s robustness, real-time performance, and flexibility.
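The abstract does not specify which fusion algorithm the first prototype used, so purely as an illustration of the principle it mentions, the sketch below shows a minimal complementary filter, one common way to fuse 6-axis IMU data (gyroscope rate plus accelerometer tilt) into a single-axis orientation estimate. All function names and the `alpha` weight are illustrative assumptions, not taken from the thesis.

```python
import math

def accel_to_pitch(ax, ay, az):
    """Estimate pitch (rad) from a static accelerometer reading in g units."""
    return math.atan2(-ax, math.sqrt(ay * ay + az * az))

def complementary_filter(pitch, gyro_rate, accel_pitch, dt, alpha=0.98):
    """One update step: trust the integrated gyro short-term (weight alpha)
    and the accelerometer-derived angle long-term (weight 1 - alpha).

    pitch       -- previous pitch estimate (rad)
    gyro_rate   -- angular rate about the pitch axis (rad/s)
    accel_pitch -- pitch derived from the gravity vector (rad)
    dt          -- sample period (s)
    """
    return alpha * (pitch + gyro_rate * dt) + (1.0 - alpha) * accel_pitch
```

The gyro path is drift-prone but smooth; the accelerometer path is noisy but drift-free, which is why the two are blended rather than used alone.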
The third and most advanced sensor is a fingertip-integrated module developed for the SEEDS anthropomorphic robotic gripper, closely resembling a human fingertip in form. It incorporates PWM-controlled optoelectronic elements, a 9-axis Bosch IMU with onboard processing, and a high-resolution ToF sensor, all embedded directly within the fingertip. This miniaturized design delivers high-frequency, low-latency data, supporting precise force estimation, object shape recognition, and compliant grasping. The result is a sensing solution that enables dexterous, human-like manipulation. Alongside these multi-modal sensor developments, this thesis also addresses vision-based perception challenges through a novel stereo vision approach for detecting thin wire features. Standard 3D cameras are often bulky and costly, making them unsuitable for fine robotic manipulation. To overcome this, a stereo vision system using two low-cost, off-the-shelf endoscopic cameras was developed, allowing integration directly into the robot’s end-effector. By applying state-of-the-art vision algorithms, this system achieved a diameter estimation error below 10% and a location error below 3% for wires only a few millimeters in diameter or less. This contribution demonstrates how compact, affordable vision systems can support fine-grained robotic perception tasks, expanding the scope of multimodal sensing in robotics. The contributions of this thesis can be summarized as:
1. A progressive methodology for the scalable and modular design of robotic sensor systems.
2. The integration of optoelectronic, inertial, and depth sensing technologies into compact, customizable formats.
3. The development of a low-cost stereo vision system for robotic manipulation.
4. Experimental validation of the sensors in diverse application domains, including space robotics, food handling, and HRI.
5. The establishment of a foundation for future work in soft robotics, teleoperation, and assistive technologies.
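The thesis' actual wire-measurement pipeline is not detailed in the abstract; as a hedged sketch of the underlying geometry only, the standard rectified-stereo relations below show how a wire's depth follows from disparity (Z = f·B/d) and how its image-plane width back-projects to a metric diameter (D = w·Z/f). All parameter values in the example are hypothetical, not the calibration of the endoscopic cameras used in the thesis.

```python
def triangulate_depth(disparity_px, focal_px, baseline_m):
    """Depth of a point seen in a rectified stereo pair: Z = f * B / d.

    disparity_px -- horizontal pixel offset between left and right views
    focal_px     -- focal length expressed in pixels
    baseline_m   -- distance between the two camera centers (m)
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

def wire_diameter(width_px, depth_m, focal_px):
    """Back-project an image-plane width to a metric diameter: D = w * Z / f."""
    return width_px * depth_m / focal_px

# Hypothetical numbers: f = 600 px, baseline = 3 cm, disparity = 60 px
depth = triangulate_depth(60.0, 600.0, 0.03)      # 0.3 m
diameter = wire_diameter(4.0, depth, 600.0)       # a 4 px wide wire -> 2 mm
```

The same relations show why a short baseline (as between two fingertip-mounted endoscopes) still resolves millimeter-scale wires at close range: depth error scales with Z²/(f·B), and Z here is small.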
Overall, this thesis advances the field of tactile and perceptual sensing in robotics by introducing adaptable, high-performance, and application-specific sensory systems. In addition to tactile and proprioceptive sensing, it establishes a pathway toward vision-augmented multimodal perception, enabling richer environmental awareness and more intelligent robot behavior. These contributions open new opportunities for sensor-driven control strategies, machine learning-based perception, and enhanced robot cognition, while also fostering safer human-robot collaboration and paving the way for next-generation intelligent machine applications.
28 Jan 2026
English
PIROZZI, SALVATORE
SGORBISSA, ANTONIO
Università degli studi di Genova
File for this record: phdunige_5495733.pdf (7.29 MB, Adobe PDF)
Open access
License: All rights reserved

Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14242/356338
The NBN code of this thesis is URN:NBN:IT:UNIGE-356338