PROBABILISTIC MODEL CHECKING WITH MARKOV MODELS SEMANTICS: NEW DEVELOPMENTS AND APPLICATIONS
TERMINE, ALBERTO
2023
Abstract
Contemporary society is increasingly dependent on autonomous computational systems. Ensuring that these systems behave appropriately and do what they have been designed for is thus fundamental. Several techniques have been developed to this purpose. Among them, model checking comprises a family of methods that use computational logic and automatic procedures to check whether a system's behaviour satisfies given desirable properties.

In recent years, due to the increasing prevalence of stochastic models, conventional model checking techniques have proven inadequate. The consequence has been the development of probabilistic model checking, a new research programme aimed at developing proper tools to check the behaviour of stochastic systems. Among these, probabilistic model checking with Markov model semantics and its related languages (i.e., the Probabilistic Computation Tree Logic (PCTL) and its extensions) has been particularly successful and is nowadays applied across a variety of systems and domains. Nevertheless, there remain numerous unexplored developments and applications in this field. In this doctoral dissertation, we concentrate on three such aspects that hold significant relevance to contemporary AI research, dedicating a chapter to each. The dissertation is structured as follows.

In the first chapter, we present the essential background on probability theory and Markov models. We first introduce fundamental terminology and recall the main philosophical interpretations of probability, the Kolmogorov axiomatization, and the associated calculus. We then turn to the application of probability to the study of stochastic systems. Our primary emphasis is on Markov models, including discrete-time Markov chains, Markov reward models, and Markov decision processes: these models are considered the reference formalism for representing stochastic systems in the domain of model checking.

In the second chapter, we provide an overview of the state of the art of model checking, with particular emphasis on both single-agent and multi-agent probabilistic model checking. In the opening section, we discuss the fundamental ideas of the program verification research field, of which model checking represents one of the contemporary evolutions. We focus in particular on the philosophical debate that emerged in the 1980s between advocates and opponents of formal methods in computer science; this debate can be considered a pivotal milestone in the shift from traditional program verification methods to modern techniques such as model checking. In the subsequent sections, we introduce the various formalisms used in model checking for representing computational systems, specifying their properties, and verifying their behaviour. For standard (non-probabilistic) model checking, these formalisms include transition systems and their related logical languages (i.e., LTL and CTL); for probabilistic model checking, they include Markov models and the related language PCTL. We conclude the chapter by examining some recent extensions of probabilistic model checking suitable for the analysis of the epistemic and probabilistic properties of stochastic multi-agent systems. Notably, the formalisms we consider include the logics CTLK [113], PCTLK [175], and COGWED [28]; the latter two serve as the foundation for the developments and applications discussed in the remaining chapters of the dissertation.
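To make the core task concrete, the following is a minimal sketch of the reachability computation underlying standard PCTL model checking. It is our own illustration rather than code from the dissertation: the three-state chain, the "goal" label, and the 0.9 threshold are invented for the example.

```python
import numpy as np

# Illustrative 3-state chain: 0 = initial, 1 = failure (absorbing),
# 2 = "goal" (absorbing). Rows are source states and must sum to 1.
P = np.array([
    [0.1, 0.3, 0.6],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
])
goal = {2}

# Reachability probabilities Pr_s(F goal) form the least fixed point of
# x = P x with x pinned to 1 on goal states; iterating from 0 converges.
x = np.zeros(len(P))
for s in goal:
    x[s] = 1.0
for _ in range(10_000):
    x_new = P @ x
    for s in goal:
        x_new[s] = 1.0
    if np.max(np.abs(x_new - x)) < 1e-12:
        break
    x = x_new

print(x)            # state 0 reaches the goal with probability 2/3
print(x[0] >= 0.9)  # the PCTL-style check P>=0.9 [F goal] at state 0: False
```

In standard (precise) models, the same fixed point can equivalently be obtained by solving a system of linear equations, which is the formulation generalised in Chapter 4.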
In the third chapter, we explore a potential connection between probabilistic model checking and eXplainable AI (XAI), a recently born field of research aimed at developing methods and tools to make opaque machine learning systems more humanly understandable. The chapter is divided into two main parts. In the first part, we analyse the so-called opacity problem from a more philosophical point of view, focusing on elucidating its many dimensions and explaining how these are actually addressed within the XAI research programme. In the second part, we restrict our focus to a specific XAI framework, i.e., post-hoc explanation methods based on surrogate models. More specifically, we propose a logic and related model-checking procedures for specifying and checking relevant reliability properties of XAI explanations for opaque ML systems provided via surrogate models. Among such reliability properties, we consider in particular transparency, accuracy, and trustworthiness. We introduce a language with appropriate operators to specify these properties with respect to a single surrogate model or a group of surrogate models, propose a semantics for these operators based on a multi-agent Markov structure, and develop related model-checking algorithms that extend the established procedures introduced in Chapter 2. Finally, we conclude with some computational complexity considerations and remarks about further developments of this research.

In the fourth chapter, we present an extension of probabilistic model checking to the domain of imprecise probabilities. We start with a brief introduction to the latter, focusing in particular on the reasons motivating their usefulness in the model checking domain. We argue that imprecise probabilistic models may be of fundamental importance for model checking applications, particularly since they permit relaxing the stationarity assumptions associated with the use of Markov models. This allows for the analysis of non-stationary systems (i.e., systems whose stochastic behaviour changes over time) and of systems whose behaviour is not fully known. In the first part of the chapter, we introduce imprecise Markov models for both single- and multi-agent systems and present feasible methods to compute relevant inferences over them. In the second part, we present a logic that enables the specification of properties of systems represented by imprecise Markov models. The logic, called Epistemic-Imprecise Probabilistic Computation Tree Logic (EIPCTL), is obtained as an extension of the well-known PCTL. We equip this logic with a suitable semantics defined over imprecise Markov models and with related model checking procedures based on the feasible inferences discussed in the first part of the chapter. We conclude the chapter by outlining some computational complexity considerations. Notably, we argue that one of the greatest advantages of working with imprecise probabilities is the possibility of relaxing the stationarity assumptions that limit the applicability of model checking without incurring an increase in computational complexity. In fact, we demonstrate that all the relevant tasks involved in model checking, which can be resolved in standard models by solving systems of linear equations, can be resolved in imprecise models by solving linear programming tasks whose complexity is always polynomial in the number of states of the model.
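To illustrate this last point informally, the sketch below computes a lower reachability probability for a small interval-valued Markov chain by solving one linear program per state in each iteration. It is our own illustration under invented interval bounds, not the dissertation's EIPCTL procedure, which is more general.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 3-state interval chain: per-state lower/upper bounds on
# transition probabilities (state 1 = failure, state 2 = goal, both fixed).
lower = np.array([[0.0, 0.2, 0.4],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
upper = np.array([[0.2, 0.4, 0.8],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
goal = {2}
n = len(lower)

def min_expectation(x, lo, hi):
    """Minimise p . x over distributions p with lo <= p <= hi, sum(p) = 1."""
    res = linprog(c=x, A_eq=np.ones((1, n)), b_eq=[1.0],
                  bounds=list(zip(lo, hi)), method="highs")
    return res.fun

# Least-fixed-point iteration for the *lower* probability of F goal:
# at each step, an adversarial choice of transition matrix is realised
# state by state via one small LP.
x = np.zeros(n)
for s in goal:
    x[s] = 1.0
for _ in range(10_000):
    x_new = np.array([1.0 if s in goal else
                      min_expectation(x, lower[s], upper[s])
                      for s in range(n)])
    if np.max(np.abs(x_new - x)) < 1e-9:
        break
    x = x_new

print(x)  # per-state lower bounds on Pr(F goal); state 0 converges to 0.5
```

Each LP has one variable per successor state, so every iteration remains polynomial in the size of the model, consistent with the complexity claim above.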
In the fifth chapter, we present a novel application of Markov model semantics to the problem of logical omniscience, which has been considered one of the major issues in the development of an epistemic logic suitable for representing real-world agents since the 1960s. In the first part of the chapter, after a brief introduction to epistemic logic, we analyze a variety of solutions proposed for the problem of logical omniscience, both inside and outside the model-theoretic tradition, and, for each of them, we briefly discuss the reasons that make them not completely satisfactory. In the second part, we introduce our new framework for epistemic logic, which is based on understanding the deductive reasoning of real-world agents as a "state-space exploration task" that can be modeled via Markov decision processes and reinforcement learning. Within this framework, we provide a new definition of explicit knowledge according to which an agent explicitly knows a certain proposition if and only if it is able to derive that proposition from the information it is actually aware of by making the best use of its finite resources. Based on this definition, we introduce a new logic of knowledge and awareness called "Markov Deduction Structures Logic". By exploiting a combination of probabilistic model checking and reinforcement learning techniques, we then outline a semantics and a related model checking framework for this logic. In the third part, we present a dynamic multi-agent extension of the logic for modeling agents' actions; these actions may include learning inferential rules from other agents or exchanging resources and information with them. Although we do not offer an experimental evaluation of our framework, we advance some considerations concerning its implementation in the conclusions.

The dissertation builds upon a range of preliminary results previously presented at multiple international workshops and conferences and subsequently published in scientific journals and conference proceedings. The opacity taxonomy included in Chapter 3 is largely based on the analogous taxonomy developed in [62]. The Ex-PCTL logic, also presented in Chapter 3, resumes and extends the ATCTL (Accuracy and Trustworthiness Computation Tree Logic) developed in [160]. The main results of Chapter 4 have been collected in a journal paper forthcoming in SN Computer Science; they unify and extend two previous works on model checking with imprecise Markov models, [157] and [158], of which the former develops a model checking framework for imprecise Markov reward models, while the latter focuses on model checking stochastic multi-agent systems with imprecise probabilities. An early draft of the framework developed in Chapter 5 was presented at the Logic Colloquium 2021, while the extended version presented here has been recently submitted to IJAR.
File | Access | Size | Format
---|---|---|---
phd_unimi_R12745.pdf | open access | 2.29 MB | Adobe PDF
Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/20.500.14242/172447
URN:NBN:IT:UNIMI-172447