Full-field structural identification through hybrid video-based motion estimation and signal-driven modal analysis
AZIZI, SAMIRA
2025
Abstract
Vibration response is an important indicator that provides insight into structural dynamics, structural health monitoring, and structural control. It is generally measured by attaching contact-based sensors, such as accelerometers and strain gauges, to record acceleration or strain at particular points of a structure. Because contact sensors carry practical and methodological limitations, the community has increasingly turned to noncontact approaches. Installing wired accelerometers and strain gauges on large civil structures is labor intensive, costly, and often intrusive to operations. Cabling and power distribution add complexity and potential failure points. Sensor mass and mounting can introduce local stiffening or mass loading. Sparse layouts provide only pointwise measurements that may miss critical spatial variations. In contrast, noncontact sensing, particularly vision-based methods, enables rapid and safe deployment with minimal interruption, offers dense or full-field coverage from a standoff location, and uses standard cameras with modern algorithms to recover displacement and modal information with high spatial resolution. Although image-processing methods are effective in structural measurement and identification, there is no one-size-fits-all approach. Pixel intensity is governed by both the temporal motion and the spatial structure of the scene. A clear view of each method's strengths and weaknesses is therefore needed to build a framework that is both reliable and robust. An appropriate method should be chosen according to the specific case and the frame-to-frame image displacement, which can range from subpixel (<1–2 pixels) to large (tens to hundreds of pixels). Measurement methods can be categorized into two groups: intensity-based methods, which relate displacement to changes in pixel intensity, and phase-based methods, which use the phase extracted from the image intensity.
Methods that operate on intensity face the challenge of calculating displacements smaller than a pixel, while phase-based methods encounter difficulties with displacements larger than a pixel. Intensity-based methods, through iterative processes, can offer highly accurate measurements; however, these computations require interpolation, which is computationally intensive, and the iterative search can become trapped in local optima of the objective function. Phase-based methods, on the other hand, extract phase information using filters that are robust against changes in lighting and noise. However, these filters must be oriented along the direction of motion, whereas different parts of a structure have varying shapes and edges, so neither the displacement direction nor the edge orientation is uniform. Additionally, selecting the initial parameters is crucial in these methods. This thesis aims to advance vision-based structural identification and achieve more accurate measurements in each category by addressing these challenges with hybrid (global–local) block-matching frameworks and a complementary phase-based multiscale optical-flow framework. We begin by laying the theoretical groundwork of visual features as measures in the spatial and frequency domains, to extract as much information as possible about the structural properties of the visual signal for motion estimation. Subsequently, by integrating video-processing methods with output-only modal analysis, we propose three contributions. Two of these contributions combine a stochastic population-based method with a deterministic search method to estimate the frame-to-frame image displacement (best-matching motion) robustly and with subpixel accuracy. The stochastic search helps avoid entrapment in suboptimal local solutions during motion estimation.
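To make the intensity-based refinement concrete, the following is a minimal, self-contained sketch (not the implementation developed in this thesis) of the classic first-order idea behind gradient-based subpixel estimation, reduced to one dimension: a small shift d between two frames satisfies g(x) ≈ f(x) − d·f′(x), so d follows from a least-squares fit against the spatial gradient. The function name and test signal are illustrative.

```python
import numpy as np

def gradient_subpixel_shift(f, g):
    """First-order estimate of a small shift d, assuming g[n] ~ f[n - d].

    Linearizing f gives g - f ~ -d * f', so d is the least-squares
    solution against the central-difference gradient of f.
    """
    df = 0.5 * (f[2:] - f[:-2])   # spatial gradient (per sample), interior points
    r = g[1:-1] - f[1:-1]         # frame-to-frame intensity difference
    return -np.sum(df * r) / np.sum(df * df)

# Synthetic check: a smooth signal delayed by a known subpixel amount.
n = np.arange(400)
d_true = 0.3                                 # shift in samples (subpixel)
f = np.sin(2 * np.pi * n / 60)
g = np.sin(2 * np.pi * (n - d_true) / 60)    # f delayed by d_true samples
d_est = gradient_subpixel_shift(f, g)
```

This one-shot linearization is only valid for small motions, which is exactly why larger displacements call for the coarse search stages discussed next.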
The first framework is a fully targetless subpixel motion-estimation method that combines Particle Swarm Optimization (PSO) and gradient refinement in parallel within each PSO iteration. In each iteration of the PSO-based Gradient-Based Block Matching (PGBM) method, PSO explores candidate motions while an integrated gradient-based step refines the best candidates; at the end of the iteration, a deterministic refinement is applied before proceeding to the next PSO iteration. This parallel coupling helps navigate texture variation and spatial-frequency challenges more effectively. On a synthetic shifted pattern, PSO with adaptive parameters accurately recovered motion and outperformed constant-parameter PSO. Subsequently, the PGBM scheme was applied without adaptive PSO tuning and delivered comparable accuracy at lower computational cost. The framework was validated on railway-bridge and light-pole structure videos and on a laboratory beam; the recovered motions enabled Stochastic Subspace Identification (SSI) to extract modal frequencies and mode shapes with high accuracy and close agreement with reference values. The second framework is a multi-step Smart Block Matching with Reduced Error Gradient (SBM-REG) approach, in which the stochastic and deterministic stages run in series. We first run a population-based block-matching search to estimate the coarse displacement. Once this stage is completed, the deterministic gradient-based refinement takes over to reach subpixel accuracy. This framework also uses adaptive search limits to increase robustness, a pixel-subsampling strategy with a suitable cost function to reduce computation, and an error-cancellation strategy to remove the systematic error of the gradient-based method. On a synthetic shifted pattern, the first stage reliably identified static and subpixel-motion blocks; focusing on these blocks in the REG stage yielded precise subpixel estimates and canceled the systematic error.
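The series (coarse-then-fine) structure of the second framework can be illustrated with a deliberately simplified 1-D sketch: an exhaustive integer search over a squared-difference cost stands in for the population-based SBM stage, and a three-point parabolic fit around the cost minimum stands in for the deterministic REG refinement. Names, cost function, and signals are illustrative, not the thesis code.

```python
import numpy as np

def coarse_to_fine_shift(f, g, max_shift=10):
    """Two-stage 1-D shift estimate, assuming g[n] ~ f[n - d].

    Stage 1: integer search minimizing a squared-difference cost
    (a stand-in for the population-based coarse block matching).
    Stage 2: three-point parabolic fit around the minimum
    (a stand-in for the deterministic subpixel refinement).
    """
    shifts = np.arange(-max_shift, max_shift + 1)
    core = slice(max_shift, len(f) - max_shift)   # region valid for every shift
    cost = np.array([np.sum((np.roll(g, -s)[core] - f[core]) ** 2)
                     for s in shifts])
    k = int(np.argmin(cost))
    d = float(shifts[k])
    if 0 < k < len(cost) - 1:                     # subpixel refinement step
        c_m, c_0, c_p = cost[k - 1], cost[k], cost[k + 1]
        denom = c_m - 2.0 * c_0 + c_p
        if denom > 0:
            d += 0.5 * (c_m - c_p) / denom
    return d

# Synthetic check: integer-plus-fractional delay of a smooth pattern.
n = np.arange(400)
d_true = 3.4
f = np.sin(2 * np.pi * n / 50)
g = np.sin(2 * np.pi * (n - d_true) / 50)   # f delayed by 3.4 samples
d_est = coarse_to_fine_shift(f, g)
```

The point of the two stages is the same as in SBM-REG: the global search cannot get stuck in a wrong local minimum within its window, while the local fit supplies the subpixel precision the integer search lacks.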
In laboratory tests on a six-story model and a cantilever beam, locations with high-amplitude vibration were accurately tracked by the SBM stage without prior knowledge of the amplitude, and the subsequent REG refinement preserved subpixel accuracy, all within a single coherent pipeline. Applying a Blind Source Separation (BSS) technique then identified four mode shapes and their vibration frequencies with high precision and close agreement with references. While these two methods are region-based, the third framework is pixel-based and provides full-field measurement. Using a method built on the monogenic phase from a spherical quadrature filter, two-dimensional full-field displacements, ranging from large to very small, are measured within a multiscale framework. Quaternionic Phase-based Motion Estimation (QPME) and Enhanced Quaternionic Phase-based Motion Estimation (EQM) were validated through extensive tests on synthetic data with displacement control, laboratory models, and real in-service bridges. In each case, motion was extracted and then modal identification was performed using two different methods. On a rotated cantilever beam, QPME outperformed a traditional phase-based method, delivering more accurate mode shapes and confirming the rotation-invariant behavior of the underlying filter. On the Tacoma Narrows Bridge videos, full-field cable mode shapes were recovered, and deck motions at the edge were tracked. The multiscale EQM pyramid provided robust estimates across both large and small vibration amplitudes on different deck regions and revealed a torsional mode that was not visible with single-scale QPME. The results show that the frameworks deliver robust, high-resolution, and scalable full-field motion and modal analysis, paving the way for more effective and accessible vision-based structural health monitoring (SHM) in real applications.
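The phase-based principle the third framework builds on can be illustrated, in a drastically simplified global 1-D form (not the monogenic/quaternionic formulation of the thesis), by the Fourier-shift property: a spatial shift appears as a linear phase ramp across frequency, so a small displacement can be read off the slope of the cross-spectrum phase. Names and signals are illustrative.

```python
import numpy as np

def fourier_phase_shift(f, g):
    """Global 1-D shift d, assuming g[n] ~ f[n - d], read from the slope
    of the cross-spectrum phase ramp (phase-correlation principle)."""
    F, G = np.fft.rfft(f), np.fft.rfft(g)
    cross = F * np.conj(G)            # phase = +2*pi*k*d/N when g is f delayed by d
    k = np.arange(len(cross))
    keep = slice(1, len(cross) // 4)  # low frequencies: phase unwraps reliably
    phase = np.unwrap(np.angle(cross[keep]))
    slope = np.polyfit(k[keep], phase, 1)[0]
    return slope * len(f) / (2 * np.pi)

# Synthetic check: delay a broadband smooth signal by a subpixel amount,
# applied exactly via the same Fourier-shift property.
rng = np.random.default_rng(0)
N, d_true = 256, 0.37
kernel = np.exp(-0.5 * (np.arange(-8, 9) / 3.0) ** 2)
f = np.convolve(rng.standard_normal(N), kernel, mode="same")
kk = np.arange(N // 2 + 1)
g = np.fft.irfft(np.fft.rfft(f) * np.exp(-2j * np.pi * kk * d_true / N), n=N)
d_est = fourier_phase_shift(f, g)
```

Because phase wraps at 2π, this global reading fails once the motion grows beyond roughly a pixel at the higher frequencies, which is precisely the limitation the multiscale (pyramid) strategy of EQM is designed to remove.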
| File | License | Size | Format |
|---|---|---|---|
| FinalThesisSamiraAzizi.pdf (access only from BNCF and BNCR) | All rights reserved | 83.11 MB | Adobe PDF |
Documents in UNITESI are protected by copyright, and all rights are reserved unless otherwise indicated.
https://hdl.handle.net/20.500.14242/356120
URN:NBN:IT:POLIMI-356120