Towards Effortless Prosthetic Grasping through a Vision-based Shared Autonomy Pipeline
VASILE, FEDERICO
2025
Abstract
Upper-limb amputations significantly impact quality of life, and while prosthetic devices can restore some lost functionality, control complexity remains a major barrier to effective user-driven prosthesis control. Most commercial prostheses are based on electromyography (EMG) or mechanomyography (MMG), relating these input signals to the velocity of the prosthesis motors. When more than one degree of freedom is available, the Sequential Switching and Control paradigm is used: only one joint at a time is driven, and the user gives an explicit input signal to switch between the different degrees of freedom, resulting in cumbersome control. Relieving the user from complex control input modalities is therefore of high interest in prosthetics. To address these limitations, this PhD project proposes a shared autonomy pipeline that integrates computer vision with the standard EMG-based mechanism for prosthesis control, thus reducing the cognitive burden on the user and increasing the prosthesis dexterity. This work starts with the exploration of different camera placements (i.e., eye-in-hand vs. egocentric), followed by the camera's integration into the prosthetic device. For prosthesis control, two entities can be identified: the thumb and the wrist. Initially, they are tackled separately; then, as a final outcome, they are integrated into a single pipeline achieving closed-loop wrist control during the approach, followed by hand and thumb pre-shape prediction for grasping. Given the scarce availability of visual data in the prosthetic scenario, and to avoid tedious data collection, this work follows a sim-to-real approach, where the models are trained on synthetically generated data and deployed zero-shot in the real world. I therefore validate each component of the pipeline first in simulation and then in the real world. As a further contribution, a novel pipeline exploring fully autonomous control, rather than shared user-machine collaboration, is proposed. This approach leverages knowledge gained from methods in robotics and introduces a novel framework to adapt them to prosthetic applications. The method is tested on both able-bodied and amputee subjects to assess the system's usability, and a pilot study on cognitive load shows that the proposed pipeline has the potential to reduce the user's mental effort. Overall, the results collected throughout the PhD suggest that vision-based prosthetic grasping systems have the potential both to reduce the cognitive burden on the user and to foster a more natural grasping action. These characteristics are crucial for easing the use of prosthetic hands and minimizing rejection of the prosthetic device. This work paves the way for further use of Artificial Intelligence methods in the control of prosthetic devices in more complex scenarios, such as long-horizon tasks and bimanual manipulation.
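For concreteness, the Sequential Switching and Control paradigm described in the abstract can be summarized in a short sketch. This is purely illustrative and not taken from the thesis: the joint names, proportional gain, and co-contraction threshold below are hypothetical, and a real controller would also filter and calibrate the EMG signals.

```python
# Illustrative sketch (not from the thesis) of Sequential Switching and
# Control: one EMG amplitude pair drives the velocity of a single joint at
# a time, and an explicit trigger (e.g., a brief co-contraction) cycles to
# the next degree of freedom. All names, thresholds, and gains are
# hypothetical.

DOFS = ["hand_open_close", "wrist_rotation", "thumb_abduction"]
GAIN = 1.0              # hypothetical proportional gain (velocity per unit EMG)
SWITCH_THRESHOLD = 0.8  # hypothetical co-contraction level that triggers a switch


class SequentialSwitchingController:
    def __init__(self) -> None:
        self.active_dof = 0  # only one joint is driven at a time

    def update(self, emg_flexor: float, emg_extensor: float) -> dict:
        """Map normalized EMG amplitudes in [0, 1] to joint velocities."""
        # Explicit user input (co-contraction of both muscles) switches DoF.
        if emg_flexor > SWITCH_THRESHOLD and emg_extensor > SWITCH_THRESHOLD:
            self.active_dof = (self.active_dof + 1) % len(DOFS)
            return {dof: 0.0 for dof in DOFS}  # pause motion while switching

        # Proportional velocity control: the signed difference of the two
        # antagonist signals drives only the currently selected joint.
        velocity = GAIN * (emg_flexor - emg_extensor)
        command = {dof: 0.0 for dof in DOFS}
        command[DOFS[self.active_dof]] = velocity
        return command


# Example: flexing drives the active joint; co-contracting switches joints.
controller = SequentialSwitchingController()
print(controller.update(0.5, 0.1))  # drives hand_open_close at +0.4
print(controller.update(0.9, 0.9))  # switch: wrist_rotation becomes active
print(controller.update(0.1, 0.6))  # drives wrist_rotation at -0.5
```

Even this toy version makes the abstract's point visible: every change of joint requires a deliberate user action, which is what the proposed vision-based shared autonomy pipeline aims to remove.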
| File | Size | Format | Access |
| --- | --- | --- | --- |
| phdunige_5184598_1.pdf | 8.67 MB | Adobe PDF | open access |
| phdunige_5184598_2.pdf | 14.09 MB | Adobe PDF | open access |
Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/20.500.14242/208975
URN:NBN:IT:UNIGE-208975