Advanced garment modeling for object-centric assistive dressing policies
DELEHELLE, DONATIEN
2025
Abstract
Despite the growing demand for intelligent robotic systems and the substantial progress achieved in related domains such as rigid object manipulation, the field of garment manipulation continues to face considerable challenges and remains comparatively underdeveloped. This lag is primarily due to the intrinsic complexity of garments as manipulation targets. Unlike rigid objects, garments are characterised by extreme thinness, a high-dimensional configuration space, frequent self-occlusion, complex and non-linear deformation dynamics, and a vast diversity of shapes, textures, and material properties. These properties severely limit the effectiveness of conventional manipulation strategies, which are typically predicated on rigid-body assumptions and relatively simple interaction dynamics. Recent advances in data-driven methods, particularly in reinforcement learning and computer vision, have demonstrated strong potential in addressing similarly high-dimensional and non-linear problems. These approaches, however, are critically dependent on the availability of large, annotated datasets to support generalisable learning. In the context of garments, generating such datasets is particularly burdensome due to the continuous nature of their configuration spaces and the difficulty of annotating states that are often partially or wholly occluded.

This thesis addresses these bottlenecks by introducing an object-centric framework for efficient learning in garment manipulation tasks, grounded in a surface-based representation of deformable objects. Our approach addresses a central challenge in robotic manipulation of deformable materials: the need for data efficiency, generalisability, and real-world applicability. The proposed framework consists of three principal components. First, we develop a compact representation of garment configurations using intrinsic surface geometry, enabling dimensionality reduction without sacrificing expressive power. Second, we design learning architectures that exploit maximal access to surface information during training, leading to substantial gains in sample efficiency. Third, we establish a robust method for transferring learned surface-based quantities, such as affordances or manipulation policies, to real-world scenarios, even when the physical objects differ from those seen during training.

Together, these components form a cohesive pipeline that integrates geometry-aware modelling, efficient data generation, and robust learning. We empirically validate our methods on a variety of simulated and real-world tasks, demonstrating improvements in both training efficiency and generalisation performance. Our results suggest that grounding the learning process in the intrinsic geometry of surfaces provides a principled route to handling the complexity of deformable object manipulation.
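The abstract describes the surface-based representation only in general terms. As a purely illustrative sketch of the underlying idea, and not the method actually developed in the thesis, the snippet below shows one common way a quantity defined on a garment's surface (e.g. a per-point affordance score) can be stored in a fixed 2D intrinsic (UV) parameterisation, so that it stays well defined regardless of how the garment is deformed in 3D. All names, parameters, and data here are hypothetical.

```python
import numpy as np

# Illustrative assumption: every surface point of the garment carries fixed
# intrinsic (u, v) coordinates in [0, 1]^2, independent of its 3D pose.

def store_in_uv(uv_coords, values, resolution=16):
    """Rasterise per-point surface values onto a regular grid over the UV domain."""
    grid = np.zeros((resolution, resolution))
    counts = np.zeros((resolution, resolution))
    idx = np.clip((uv_coords * (resolution - 1)).astype(int), 0, resolution - 1)
    for (u, v), val in zip(idx, values):
        grid[v, u] += val
        counts[v, u] += 1
    # Average the values that fell into each cell; empty cells stay zero.
    return np.divide(grid, counts, out=np.zeros_like(grid), where=counts > 0)

def lookup_from_uv(grid, uv_query):
    """Read a stored surface quantity back at arbitrary UV locations."""
    res = grid.shape[0]
    idx = np.clip((uv_query * (res - 1)).astype(int), 0, res - 1)
    return grid[idx[:, 1], idx[:, 0]]

# Toy data: 5000 surface points with random UV coordinates and an affordance
# score that depends only on intrinsic position (distance to a "collar" region
# near uv = (0.5, 1.0)), not on any 3D configuration of the garment.
rng = np.random.default_rng(0)
uv = rng.random((5000, 2))
affordance = np.exp(-10.0 * np.sum((uv - np.array([0.5, 1.0])) ** 2, axis=1))

grid = store_in_uv(uv, affordance)
print(lookup_from_uv(grid, np.array([[0.5, 0.95], [0.1, 0.1]])))
```

Because the stored quantity lives in the intrinsic 2D domain rather than in 3D space, the same grid could in principle be queried for any deformation, or any garment that shares the template parameterisation, which is the kind of transfer the abstract alludes to.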
| File | Size | Format |
|---|---|---|
| phdunige_5353592.pdf (open access, Licence: All rights reserved) | 10.75 MB | Adobe PDF |
Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/20.500.14242/312675
URN:NBN:IT:UNIGE-312675