
Representation and Detection of Malware Families using Explainable Approaches

IADAROLA, GIACOMO
2023

Abstract

Malware analysis and detection is a long-standing research topic in the cybersecurity field. In the last decade, the massive quantity of available data has pushed researchers toward data-driven approaches, also because classical methods (such as signature-based techniques) cannot scale to such volumes of data. Nevertheless, these methods have vulnerabilities and weaknesses that deserve thorough investigation. One of the most significant controversies concerns the "explainability" of such methodologies. This thesis studies how malware can be represented to highlight malicious behavior and which detection techniques can be applied to classify such samples. We represent malware as graphs and images and adopt deep learning and model-checking techniques to distinguish between malicious and benign samples. The dissertation is guided by a comparison of these methodologies and by their "explainability": we aim to provide a malware detection methodology whose output predictions are easily interpretable by humans.
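
The abstract does not specify the exact pipeline, but a common way to obtain the "malware as image" representation it mentions is to reinterpret a sample's raw bytes as a grayscale bitmap that an image classifier (e.g., a convolutional network) can consume. The following Python sketch illustrates this generic idea only under that assumption; the file path, image width, and resize target are hypothetical choices, not values taken from the thesis.

import numpy as np
from PIL import Image

def binary_to_image(path, width=256):
    # Read the sample's raw bytes and reinterpret them as unsigned 8-bit pixels.
    data = np.frombuffer(open(path, "rb").read(), dtype=np.uint8)
    height = len(data) // width
    # Drop any trailing bytes that do not fill a complete row, then reshape.
    pixels = data[: height * width].reshape(height, width)
    return Image.fromarray(pixels, mode="L")

# Hypothetical usage: convert a sample and resize it to a fixed input size
# before feeding it to a convolutional classifier.
# img = binary_to_image("sample.apk").resize((128, 128))
# img.save("sample.png")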
17 Feb 2023
Italian
cybersecurity
deep learning
explainable AI
malware analysis
model checking
Martinelli, Fabio
Micheli, Alessio
Mercaldo, Francesco
Files in this record:

File: Iadarola_PhDThesis.pdf
Access: open access
Size: 16.55 MB
Format: Adobe PDF

File: Iadarola_Report_PhD_Activities_and_Publications.pdf
Access: not available
Size: 102.16 kB
Format: Adobe PDF

Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14242/216717
The NBN code of this thesis is URN:NBN:IT:UNIPI-216717