255948
20190509131756.0
10.5075/epfl-thesis-8765
doi
THESIS
eng
8765
Learning without Smoothness and Strong Convexity
Lausanne
EPFL
2018
2018
157
Theses
Prof. Patrick Thiran (president); Prof. Volkan Cevher (thesis director); Prof. Martin Jaggi, Prof. Jérôme Bolte, Prof. Pradeep Ravikumar (examiners)
Recent advances in statistical learning and convex optimization have inspired many successful practices. Standard theories assume smoothness---bounded gradient, Hessian, etc.---and strong convexity of the loss function. Unfortunately, such conditions may not hold in important real-world applications, and sometimes fulfilling them incurs unnecessary performance degradation. Below are three examples.
1. The standard theory for variable selection via L_1-penalization considers only the linear regression model, as the corresponding quadratic loss function has a constant Hessian and admits an exact second-order Taylor series expansion. In practice, however, non-linear regression models are often chosen to match data characteristics.
2. The standard theory for convex optimization considers almost exclusively smooth functions. Important applications such as portfolio selection and quantum state estimation, however, correspond to loss functions that violate the smoothness assumption; existing convergence guarantees for optimization algorithms hence do not apply.
3. The standard theory for compressive magnetic resonance imaging (MRI) guarantees the restricted isometry property (RIP)---a smoothness and strong convexity condition on the quadratic loss restricted to the set of sparse vectors---via random uniform sampling. The random uniform sampling strategy, however, empirically yields unsatisfactory signal reconstruction compared with heuristic sampling approaches.
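To make example 1 concrete: the standard (linear-model) setting is L_1-penalized least squares, commonly solved by proximal gradient descent (ISTA). The sketch below is an illustration of that classical setting, not code from the thesis; all function names and parameters are my own.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_lasso(A, b, lam, iters=500):
    """Minimize 0.5 * ||A x - b||^2 + lam * ||x||_1 by proximal gradient (ISTA)."""
    # Step size 1/L, where L = ||A||_2^2 bounds the (constant) Hessian A^T A --
    # exactly the smoothness constant the standard theory relies on.
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)  # gradient of the quadratic loss
        x = soft_threshold(x - grad / L, lam / L)
    return x
```

Note that the constant step size 1/L exists only because the quadratic loss has a bounded Hessian; for the non-linear regression models the abstract mentions, this global constant is unavailable, which is where the localized smoothness conditions come in.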
In this thesis, we provide rigorous solutions to the three problems above and other related problems. For the first two, our key idea is to consider weaker, localized versions of the smoothness condition instead. For the third, our solution is to propose a new theoretical framework for compressive MRI: We pose compressive MRI as a statistical learning problem and solve it by empirical risk minimization. Interestingly, the RIP is not required in this framework.
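Example 2's portfolio selection problem illustrates why smoothness fails: the log-loss of a log-optimal portfolio has an unbounded gradient near the simplex boundary. Mirror descent (one of the record's keywords) with the entropic mirror map is a natural method for such simplex-constrained problems. The following is a minimal sketch of that generic method under my own naming and parameter choices, not the algorithm or analysis from the thesis.

```python
import numpy as np

def entropic_mirror_descent(returns, iters=200, eta=0.1):
    """Minimize f(x) = -(1/T) * sum_t log(returns[t] @ x) over the simplex.

    This log-loss is convex but not globally smooth: its gradient blows up
    as returns @ x approaches 0 near the simplex boundary.
    """
    T, n = returns.shape
    x = np.full(n, 1.0 / n)  # start at the uniform portfolio
    for _ in range(iters):
        grad = -returns.T @ (1.0 / (returns @ x)) / T
        # Entropic mirror step: multiplicative update, then renormalize
        # so x stays in the probability simplex.
        x = x * np.exp(-eta * grad)
        x /= x.sum()
    return x
```

The multiplicative update keeps every iterate strictly inside the simplex, which is one reason mirror descent suits objectives whose gradients are unbounded at the boundary.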
smoothness
strong convexity
statistical learning
convex optimization
variable selection
lasso
quantum state estimation
mirror descent
compressive MRI
Li, Yen-Huan
247574
221971
Cevher, Volkan
dir.
199128
3075313
http://infoscience.epfl.ch/record/255948/files/EPFL_TH8765.pdf
LIONS
oai:infoscience.epfl.ch:255948
DOI
STI
thesis
GLOBAL_SET
STI
IEL
EDIC
LIONS
2018-07-13
2018
8765/THESES
EPFL
PUBLISHED
THESIS