Optimal and Efficient Learning In Classification

DELLA VECCHIA, ANDREA
2023

Abstract

We study a natural extension of classical empirical risk minimization, where the hypothesis space is a random subspace of a given space. In particular, we consider possibly data-dependent subspaces spanned by a random subset of the data, recovering Nyström approaches for kernel methods as a special case. Restricting to random subspaces naturally leads to computational savings, but the question is whether the corresponding learning accuracy is degraded. These statistical-computational trade-offs have recently been explored for the least squares loss and for self-concordant loss functions, such as the logistic loss. Here, we extend these results to convex Lipschitz loss functions that might not be smooth, such as the hinge loss used in support vector machines. This unified analysis requires developing new proofs, which use different technical tools to establish fast rates. Our main results show the existence of different settings, depending on how hard the learning problem is, in which computational efficiency can be improved with no loss in statistical accuracy. The analysis is also specialized to smooth loss functions. In the final part of the thesis we convert our surrogate risk bounds into classification error bounds and compare the hinge loss with the square loss.
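As a rough illustration of the setting described in the abstract, the following Python sketch performs empirical risk minimization with the hinge loss over a random subspace: the classifier is constrained to the span of kernel functions centred at a randomly sampled subset of the training points (a Nyström subspace). This is not the code from the thesis; the Gaussian kernel, the projected subgradient solver, and all parameter values are illustrative assumptions.

# Minimal sketch (not the thesis code): Nystrom-subspace ERM with the hinge loss.
# The hypothesis is f(x) = sum_j beta_j k(x, x_{i_j}), with {i_1, ..., i_m} a
# uniformly random subset of the training indices. All names are illustrative.
import numpy as np

def gaussian_kernel(A, B, gamma=0.5):
    # k(x, x') = exp(-gamma ||x - x'||^2)
    sq = (A ** 2).sum(1)[:, None] + (B ** 2).sum(1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def nystrom_hinge_erm(X, y, m=50, lam=1e-2, steps=500, lr=0.1, seed=0):
    """Subgradient descent on the regularized average hinge loss in the random subspace."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)   # Nystrom landmarks
    K_nm = gaussian_kernel(X, X[idx])                  # n x m kernel block
    K_mm = gaussian_kernel(X[idx], X[idx])             # m x m kernel block
    beta = np.zeros(m)
    for _ in range(steps):
        margins = y * (K_nm @ beta)
        active = margins < 1                           # points where the hinge is active
        # subgradient of (1/n) sum_i max(0, 1 - y_i f(x_i)) w.r.t. beta
        grad = -(K_nm[active] * y[active, None]).sum(0) / len(X) if active.any() else 0.0
        grad = grad + lam * (K_mm @ beta)              # gradient of (lam/2) beta^T K_mm beta
        beta = beta - lr * grad
    return idx, beta

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(400, 2))
    y = np.sign(X[:, 0] * X[:, 1] + 0.1 * rng.normal(size=400))  # noisy XOR-like labels
    idx, beta = nystrom_hinge_erm(X, y)
    preds = np.sign(gaussian_kernel(X, X[idx]) @ beta)
    print("train accuracy:", (preds == y).mean())

The point of the construction is that the number of coefficients scales with the number of landmarks m rather than with the sample size n, which is where the computational savings discussed in the abstract come from.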
29-May-2023
English
DE VITO, ERNESTO
ROSASCO, LORENZO
VIGNI, STEFANO
Università degli studi di Genova
File: phdunige_4794275.pdf (Adobe PDF, 1.12 MB, open access)
Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14242/63674
The NBN code of this thesis is URN:NBN:IT:UNIGE-63674