Physical Domain Adversarial Attacks: Bridging Theory and Real World Threats
PURNEKAR, NISCHAY
2025
Abstract
The objective of this thesis is to investigate the feasibility of adversarial attacks in the physical domain, where inputs undergo real-world transformations such as printing, scanning, and photographic acquisition. While adversarial machine learning has been studied extensively in the digital setting, its physical-domain implications remain less explored. The study focuses on two representative applications of high societal relevance: Printer Source Attribution (PSA) and License Plate Detection (LPD). Both tasks require perturbations that survive complex physical and environmental distortions while remaining visually unobtrusive.

The thesis makes several contributions toward this objective. First, it introduces a realistic Print-and-Scan (P&S) simulation framework capable of modeling degradations such as color shifts, texture variations, ink dispersion, and scanning noise. By embedding this simulator into adversarial optimization, perturbations can be crafted that remain effective after physical reproduction. The proposed simulators are shown to be valuable not only for crafting attacks but also for improving the robustness of multimedia-security systems, since they enable reproducible, controlled evaluations under realistic conditions. Building on this foundation, the research proposes novel adversarial strategies for PSA, demonstrating that printer-specific forensic signatures can be suppressed to mislead attribution systems even after documents are reprinted. For LPD, it designs localized perturbations confined to the license plate region, which are shown to effectively disrupt state-of-the-art object detectors both in simulation and under real-world acquisition conditions.

Beyond these specific tasks, the framework establishes a unified methodology for generating physically robust adversarial examples and enables reproducible robustness evaluations under realistic operational constraints. Extensive experimental validation confirms that conventional digital-only attacks typically fail once subjected to physical distortions, while simulation-guided perturbations retain significant adversarial power. By combining methodological innovations with comprehensive experimental evidence, the thesis delivers a systematic understanding of physical adversarial threats and provides practical tools for their assessment. Overall, it contributes to the development of more resilient, reliable, and socially responsible machine learning systems, reinforcing the trustworthiness of technologies deployed in critical infrastructures.
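The abstract's core technique, embedding a P&S simulator inside the attack's optimization loop, can be illustrated with a minimal sketch. The code below is not the thesis implementation: it assumes a toy differentiable P&S proxy (`ps_proxy`) and a PGD-style loop in PyTorch, and all names and parameters (`simulation_guided_attack`, `mask`, `eps`, `draws`) are hypothetical. An optional mask confines the perturbation to a region such as a license plate, mirroring the localized LPD attack described above; for a detector, the cross-entropy term would be replaced by the detector's confidence loss.

```python
# Minimal, hypothetical sketch of simulation-guided adversarial optimization
# (PyTorch). Not the thesis implementation: ps_proxy is a toy stand-in for a
# learned Print-and-Scan simulator, and all names/parameters are illustrative.
import torch
import torch.nn.functional as F

def ps_proxy(x):
    """Toy differentiable P&S channel: random color shift, ink-dispersion-like
    smoothing, and scanner-style additive noise."""
    color = 1.0 + 0.05 * torch.randn(3, 1, 1, device=x.device)   # color shift
    x = x * color
    x = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1)      # ink dispersion
    x = x + 0.02 * torch.randn_like(x)                           # scanning noise
    return x.clamp(0.0, 1.0)

def simulation_guided_attack(model, x, y, mask=None,
                             eps=8 / 255, steps=100, lr=1e-2, draws=4):
    """Craft a perturbation that stays adversarial under simulated P&S.
    `mask` (0/1 tensor) optionally confines it to a region, e.g. a plate."""
    if mask is None:
        mask = torch.ones_like(x)
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        loss = 0.0
        for _ in range(draws):  # expectation over simulated reproductions
            x_adv = (x + mask * delta).clamp(0.0, 1.0)
            logits = model(ps_proxy(x_adv))
            # For a detector, minimize its confidence loss here instead.
            loss = loss - F.cross_entropy(logits, y)
        opt.zero_grad()
        (loss / draws).backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the change visually unobtrusive
    return (x + mask * delta).detach().clamp(0.0, 1.0)
```

Averaging the loss over several random simulator draws approximates an expectation over physical reproductions, which is what lets such perturbations survive the actual print-and-scan channel where purely digital attacks typically fail.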
| File | Size | Format | |
|---|---|---|---|
| PhD_Thesis_Purnekar.pdf (open access; License: Creative Commons) | 167.71 MB | Adobe PDF | View/Open |
Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/20.500.14242/354140
URN:NBN:IT:UNIPI-354140