The research activity of the doctoral candidate Felice Antonio Merra started from a study of the state of the art of recommender systems, carried out in parallel with a study of the state of the art of adversarial machine learning techniques and of the security of recommender systems. It emerged immediately that recommender systems can be targeted by adversarial attacks, with very serious consequences for the platforms that rely on them. The identification of the lack of a common research resource, and of possible new attacks and defenses, led to organizing my research work into four macro phases: carrying out a literature review, developing techniques to interpret the effectiveness of attacks, proposing new attack strategies, and studying the effects of defense techniques on the overall performance of a recommender. In the first research line I carried out a literature review covering hundreds of articles, from whose analysis I published a survey proposing a taxonomy, open issues, and new research directions. Some of these issues became the subject of my second research line, in which I proposed a model for interpreting the robustness of collaborative systems subjected to all state-of-the-art attacks; in the same line, I also proposed a new attack strategy that incorporates (publicly available) semantic information. In the third research line I proposed several attack strategies that can destroy the reliability of a multimedia recommender system through the insertion of adversarial images whose perturbations are completely invisible to human beings, and I studied new defenses that can protect such systems. The fourth research line was devoted to the analysis of model-based recommender systems (such as matrix factorization systems): here I proposed a new iterative attack technique showing that there exist strategies that make the recommender system behave completely at random, losing everything it learned during training. Moreover, I studied mathematically the effect of an adversarial training technique that has gained considerable popularity in the recommender systems community, finding that the improvement in accuracy comes with an amplification of the popularity bias of the recommendations.
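As a rough illustration of the adversarial training procedure analyzed in the fourth research line, the following minimal sketch applies an APR-style adversarial regularizer on top of a BPR matrix-factorization model: the user/item embeddings are perturbed in the direction that increases the pairwise ranking loss, and the model is then updated on both the clean and the perturbed losses. All names, hyper-parameters, and the toy interaction data are illustrative assumptions, not the exact setting studied in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class APRMF:
    """BPR matrix factorization with an APR-style adversarial regularizer (illustrative sketch)."""

    def __init__(self, n_users, n_items, k=16, lr=0.05, eps=0.5, reg_adv=1.0, reg=0.01):
        self.P = 0.1 * rng.standard_normal((n_users, k))   # user embeddings
        self.Q = 0.1 * rng.standard_normal((n_items, k))   # item embeddings
        self.lr, self.eps, self.reg_adv, self.reg = lr, eps, reg_adv, reg

    def _bpr_grads(self, u, i, j, d_pu=0.0, d_qi=0.0, d_qj=0.0):
        """Gradients of the pairwise loss -ln sigma(x_ui - x_uj), optionally at perturbed embeddings."""
        pu, qi, qj = self.P[u] + d_pu, self.Q[i] + d_qi, self.Q[j] + d_qj
        c = sigmoid(pu @ (qi - qj)) - 1.0        # dL/dx_uij
        return c * (qi - qj), c * pu, -c * pu    # dL/dP[u], dL/dQ[i], dL/dQ[j]

    def step(self, u, i, j):
        # 1) gradients of the clean BPR loss
        g_pu, g_qi, g_qj = self._bpr_grads(u, i, j)
        # 2) adversarial perturbation of the embeddings: one normalized step
        #    in the direction that increases the BPR loss (FGSM-like)
        unit = lambda g: self.eps * g / (np.linalg.norm(g) + 1e-12)
        d_pu, d_qi, d_qj = unit(g_pu), unit(g_qi), unit(g_qj)
        # 3) gradients of the BPR loss evaluated at the perturbed embeddings
        a_pu, a_qi, a_qj = self._bpr_grads(u, i, j, d_pu, d_qi, d_qj)
        # 4) APR update: clean loss + reg_adv * adversarial loss + L2 regularization
        self.P[u] -= self.lr * (g_pu + self.reg_adv * a_pu + self.reg * self.P[u])
        self.Q[i] -= self.lr * (g_qi + self.reg_adv * a_qi + self.reg * self.Q[i])
        self.Q[j] -= self.lr * (g_qj + self.reg_adv * a_qj + self.reg * self.Q[j])

# Toy usage: 50 users, 100 items, random (user, positive item, negative item) triples.
model = APRMF(n_users=50, n_items=100)
for _ in range(10_000):
    model.step(rng.integers(50), rng.integers(100), rng.integers(100))
```

Note that this sketch normalizes each embedding's perturbation independently, whereas APR as usually formulated normalizes the whole perturbation at budget eps; the point here is only the two-term objective (clean plus adversarially perturbed loss) whose side effects on accuracy and popularity bias the fourth research line studies.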
Adversarial Machine Learning in Recommender Systems
Merra, Felice Antonio
2022
Abstract
Recommender systems are ubiquitous. They influence our digital lives when, for instance, we select the news to read, the product to buy, the friend to connect with, and the movie to watch. While enormous academic research effort has focused on producing high-quality recommendations that maximize customer satisfaction, little has been devoted to studying the integrity and security of these systems. Is there an underlying relationship between the characteristics of the historical user-item interactions and the efficacy of fake-user/feedback injection strategies against collaborative models? Can public semantic data be used to craft more potent attacks that raise the recommendability of victim items? Can a malicious user (i.e., the adversary) poison or evade the image data of visual recommenders with adversarially perturbed product images? What defensive solutions can reduce the effectiveness of test-time adversarial attacks? Is the family of model-based recommenders more vulnerable to multi-step gradient-based adversarial perturbations? Furthermore, is adversarial training still an effective robustification in this scenario, and does this training defense affect beyond-accuracy and bias performance? This dissertation intends to pave the way towards more robust recommender systems, beginning with an understanding of how a model can be made more robust, the cost of robustness in terms of recommendation quality, and the adversarial risks of modern recommenders. Drawing inspiration from the literature on the security of collaborative models against the insertion of hand-engineered fake profiles, and from recent advances of adversarial machine learning in other research areas such as computer vision, the thesis contributes along several directions: (i) a practical framework to interpret the impact of data characteristics on the robustness of collaborative recommenders; (ii) powerful attack strategies that exploit publicly available semantic data; (iii) the identification of severe adversarial vulnerabilities of visual-based recommender models, where adversaries can break recommendation integrity and push products to the highest recommendation positions with simple, human-imperceptible perturbations of product images; (iv) a novel defense method that protects visual recommenders against test-time adversarial attacks; (v) robust adversarial perturbation methods capable of completely breaking the accuracy of matrix factorization recommenders; and (vi) a formal study examining how adversarial training reduces the recommendation quality of state-of-the-art model-based recommenders.
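To make the visual attack in contribution (iii) concrete, the sketch below shows an FGSM-style "push" perturbation of a product image against a toy VBPR-like visual scorer. The feature extractor, the user visual-preference vector, the image, and the budget `eps` are all hypothetical stand-ins introduced only for illustration; they are not the models or settings evaluated in the thesis.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-ins for a visual recommender's image pipeline (hypothetical, for illustration):
# a frozen CNN feature extractor and a dot-product scorer for one target user.
feature_extractor = nn.Sequential(
    nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 16),
)
for p in feature_extractor.parameters():
    p.requires_grad_(False)
user_visual_profile = torch.randn(16)  # assumed visual preference vector of the target user

def visual_score(image):
    """Predicted relevance of the item image for the target user."""
    return (feature_extractor(image) @ user_visual_profile).squeeze()

item_image = torch.rand(1, 3, 64, 64)  # original product image, pixels in [0, 1]
eps = 4.0 / 255.0                      # perturbation budget; small values stay imperceptible

# FGSM-style push attack: one signed-gradient step that increases the item's predicted score.
image = item_image.clone().requires_grad_(True)
visual_score(image).backward()
adv_image = (item_image + eps * image.grad.sign()).clamp(0.0, 1.0)

print(float(visual_score(item_image)), float(visual_score(adv_image)))
```

With a budget this small, the perturbed image is visually indistinguishable from the original, yet its predicted score for the targeted user typically increases; this "push" effect, its consequences at ranking level, and possible defenses are what the thesis investigates at scale.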
File | Size | Format
---|---|---
Adversarial_Machine_Learning_in_Recommender_Systems.pdf (open access) | 4.09 MB | Adobe PDF
Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/20.500.14242/64394
URN:NBN:IT:POLIBA-64394