
Information-theoretic models of confidentiality and privacy

2014

Abstract

In a variety of contexts involving the execution of software or hardware that manipulates sensitive data, one would like public observables not to leak information that should be kept secret. Since it is practically impossible to avoid leakage entirely, there is growing interest in the quantitative aspects of information flow analysis. This is also related to privacy, that is, the protection of sensitive information concerning individuals. In this context, we first study methods to measure the average amount of leakage due to system execution, quantifying the possibility of inferring the secret information from the observables. We assume an attacker that can make a single guess after observing a certain number of independent executions of the program. We study the asymptotic behaviour of information leakage and the dichotomy between protection of the whole secret and protection of a property of the secret. An average measure of leakage may not be adequate when the privacy of individuals is at stake. To address this issue, we introduce a strong semantic notion of security that expresses the absence of any privacy breach above a given level of seriousness, irrespective of any background information. We then analyse this notion along two dimensions, worst vs. average case and single vs. repeated observations, and clarify its relation to differential privacy. Finally, motivated by the complexity of exactly computing quantitative information leakage, we study statistical approaches to its estimation when only black-box access to the system is provided and little is known about the input generation mechanism.
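The black-box statistical estimation mentioned in the abstract can be illustrated with a minimal sketch: sample input/output pairs from a system treated as an opaque channel, then compute the empirical min-entropy leakage as the log-ratio of the posterior to the prior guessing probability. The `parity` channel, the function name, and the uniform-prior assumption below are illustrative choices, not taken from the thesis.

```python
import math
import random
from collections import Counter

def estimate_min_entropy_leakage(channel, inputs, n_samples=100_000, seed=0):
    """Empirically estimate the min-entropy leakage of a black-box channel
    under a uniform prior on `inputs`, from sampled (input, output) pairs.
    The empirical maximum is a biased (optimistic) estimator for small samples."""
    rng = random.Random(seed)
    joint = Counter()
    for _ in range(n_samples):
        x = rng.choice(inputs)       # uniform prior on the secret
        y = channel(x)               # one black-box observation
        joint[(x, y)] += 1
    # Prior vulnerability: best blind guess succeeds with probability 1/|inputs|.
    prior_v = 1.0 / len(inputs)
    # Posterior vulnerability: for each observed output, guess the input that
    # co-occurred with it most often.
    outputs = {y for (_, y) in joint}
    post_v = sum(max(joint[(x, y)] for x in inputs if (x, y) in joint)
                 for y in outputs) / n_samples
    return math.log2(post_v / prior_v)

# Hypothetical example channel: a 4-bit secret leaks only its parity,
# so the leakage should be close to 1 bit.
secrets = list(range(16))
parity = lambda x: bin(x).count("1") % 2
print(round(estimate_min_entropy_leakage(parity, secrets), 2))
```

Because only samples are used, the estimate fluctuates around the true value (1 bit for the parity channel); the thesis's concern with estimation from black-box access is precisely about controlling this kind of statistical error.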
July 2014
English
QA75 Electronic computers. Computer science
Boreale, Prof. Michele
Scuola IMT Alti Studi di Lucca
Files in this item:
Paolini_phdthesis.pdf (Adobe PDF, 849.08 kB, open access)
Type: other attached material

Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14242/144895
The NBN code of this thesis is URN:NBN:IT:IMTLUCCA-144895