Fast mass computation of sensitivities and effective hedging of financial products
DALUISO, ROBERTO
2019
Abstract
This thesis is dedicated to the development of a toolkit for computing the sensitivities of prices of financial products, and to their practical use in multi-dimensional settings. Specifically, we address two main points, corresponding to the two parts of this work:

1. The traditional theoretical setting in which sensitivities-based hedging is justified involves questionable idealizations, such as continuous-time portfolio rebalancing at no cost. Do more realistic assumptions change the way in which sensitivities should be used?
2. When the number of risk drivers is very large, estimating the sensitivities becomes a computationally demanding task. How can many of them be calculated efficiently?

Part I is concerned with the effective use of multiple sensitivities in practice. Chapter 1 studies how hedging is affected by the interaction between different underlying instruments, modelled through instantaneous diffusive correlation. This parameter would play no role under idealized continuous rebalancing, but we find that it does when rebalancing times are finite in number and possibly different across instruments, as is often the case in practice. Under suitable assumptions, we derive a strategy in which the sensitivities are combined in a nontrivial way: some hedge positions are at times left unrebalanced, because the corresponding exposure can be partly offset by overweighting or underweighting other, correlated hedges. Chapter 2 considers how the practice of periodically recalibrating model parameters to market data affects the way in which sensitivities should be interpreted. Indeed, recalibration effectively falsifies the distributional assumptions behind the pricing model, so that a formalization within traditional no-arbitrage theory based on stochastic processes is almost hopeless. We therefore propose an alternative mathematisation based on differential geometry, which describes the degrees of freedom available in the construction of the hedging portfolio in this setting.

Part II focuses on the efficient computation of large numbers of sensitivities. Chapter 3 concentrates on first-order sensitivities of prices whose computation is costly because it requires Monte Carlo simulation. Our starting point is that, for continuous payoffs, the pathwise application of a computer-science technique known as adjoint algorithmic differentiation yields remarkably fast and accurate price gradients of arbitrary length; however, its generalization to discontinuous payoffs such as digital options is nontrivial. The new algorithm proposed here distinguishes itself by extending the pathwise adjoint method in a particularly natural way, and by its empirically very low Monte Carlo uncertainty. Chapter 4 looks for fast algorithms to compute the full second-order sensitivity matrix of a Monte Carlo price. Many combinations of first-order estimators have been tried in the literature for this purpose, and our first contribution is a systematic theoretical and empirical comparison of these proposals. Since none of the alternatives proves satisfactory in all settings, we then propose two original methods: the first generalizes the idea of the previous chapter, while the second leverages a functional relation between first- and second-order derivatives. The former shows excellent generality and computational times. The latter has more limited applicability, but it is by far the most effective in at least one relevant example, and it has theoretical interest as the first practical estimator of the full Hessian whose complexity, measured as a multiple of that of the price-only implementation, does not grow with the dimension of the problem.
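The sketch below is a minimal, hypothetical illustration of the baseline technique the abstract refers to for continuous payoffs (not the thesis's algorithm for discontinuous payoffs): pathwise adjoint algorithmic differentiation of a Monte Carlo price, here obtained via reverse-mode automatic differentiation in JAX on an assumed multi-asset Black-Scholes basket call. The model, parameter values, and function names are illustrative assumptions, not taken from the thesis.

```python
# Minimal sketch (not the thesis's algorithm): pathwise adjoint algorithmic
# differentiation of a Monte Carlo price via reverse-mode AD in JAX.
# Example model: basket call under multi-asset Black-Scholes; all names
# (spots, vols, corr_chol, strike, ...) are illustrative assumptions.
import jax
import jax.numpy as jnp

def mc_price(spots, vols, corr_chol, strike, maturity, rate, normals):
    # One-step exact simulation of correlated lognormal terminal prices.
    z = normals @ corr_chol.T                              # correlate Gaussians
    drift = (rate - 0.5 * vols**2) * maturity
    s_t = spots * jnp.exp(drift + vols * jnp.sqrt(maturity) * z)
    payoff = jnp.maximum(s_t.mean(axis=1) - strike, 0.0)   # continuous payoff
    return jnp.exp(-rate * maturity) * payoff.mean()

n_assets, n_paths = 20, 50_000
key = jax.random.PRNGKey(0)
normals = jax.random.normal(key, (n_paths, n_assets))
spots = jnp.full(n_assets, 100.0)
vols = jnp.full(n_assets, 0.2)
corr_chol = jnp.linalg.cholesky(0.3 * jnp.ones((n_assets, n_assets))
                                + 0.7 * jnp.eye(n_assets))

# Reverse-mode (adjoint) differentiation returns the whole gradient with
# respect to spots and vols (deltas and vegas) at a cost that is a small
# constant multiple of one price evaluation, independently of the number
# of inputs -- the "arbitrary length" property mentioned in the abstract.
price, (deltas, vegas) = jax.value_and_grad(mc_price, argnums=(0, 1))(
    spots, vols, corr_chol, 100.0, 1.0, 0.01, normals)
```

For discontinuous payoffs such as digital options this naive pathwise gradient is biased, since the derivative of the indicator function vanishes almost everywhere on each path; that is the gap addressed by the new algorithm of Chapter 3.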
File | Access | Size | Format
---|---|---|---
phd_unimib_802676.pdf | open access | 1.52 MB | Adobe PDF
Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/20.500.14242/77474
URN:NBN:IT:UNIMIB-77474