Infoscience

doctoral thesis

Learning without Smoothness and Strong Convexity

Li, Yen-Huan  
2018

Recent advances in statistical learning and convex optimization have inspired many successful practices. Standard theories assume smoothness (bounded gradient, Hessian, etc.) and strong convexity of the loss function. Unfortunately, these conditions may not hold in important real-world applications, and sometimes fulfilling them incurs unnecessary performance degradation. Below are three examples.

  1. The standard theory for variable selection via L_1-penalization considers only the linear regression model, as the corresponding quadratic loss function has a constant Hessian and admits an exact second-order Taylor expansion. In practice, however, non-linear regression models are often chosen to match the characteristics of the data.

  2. The standard theory of convex optimization considers almost exclusively smooth functions. Important applications such as portfolio selection and quantum state estimation, however, correspond to loss functions that violate the smoothness assumption, so existing convergence guarantees for optimization algorithms do not apply.

  3. The standard theory for compressive magnetic resonance imaging (MRI) guarantees the restricted isometry property (RIP), a smoothness and strong convexity condition on the quadratic loss restricted to the set of sparse vectors, via uniformly random sampling. Empirically, however, uniformly random sampling yields unsatisfactory signal reconstruction performance in comparison with heuristic sampling approaches.
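To make the first example concrete, here is a minimal sketch of the standard setting it refers to: L_1-penalized least squares (the lasso) under a linear model, solved by proximal gradient descent (ISTA). This is an illustration only, not code or an algorithm from the thesis; the data and parameters are made up.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, n_iters=1000):
    """Minimize 0.5 * ||A x - y||^2 + lam * ||x||_1 by proximal gradient."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - y)           # gradient of the quadratic loss
        x = soft_threshold(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))         # 50 measurements, 100 variables
x_true = np.zeros(100)
x_true[:5] = 1.0                           # 5-sparse ground truth
y = A @ x_true                             # noiseless linear observations
x_hat = ista(A, y, lam=0.1)
print(np.flatnonzero(np.abs(x_hat) > 0.5)) # indices of the selected variables
```

The constant Hessian A^T A of the quadratic loss is what makes the classical analysis of this estimator go through; replacing the linear model with a non-linear one breaks exactly this structure.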

In this thesis, we provide rigorous solutions to the three examples above and other related problems. For the first two problems, our key idea is to consider weaker, localized versions of the smoothness condition instead. For the third, our solution is to propose a new theoretical framework for compressive MRI: we pose compressive MRI as a statistical learning problem and solve it by empirical risk minimization. Interestingly, the RIP is not required in this framework.
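As an illustration of the kind of non-smooth problem the second example refers to, the following sketch runs entropic mirror descent (exponentiated gradient), one of the subject keywords of this record, on a log-optimal portfolio objective whose gradient is unbounded near the boundary of the simplex. The data and step-size choices are assumptions for the sake of illustration; this is not the thesis's algorithm or code.

```python
import numpy as np

def entropic_mirror_descent(R, n_iters=2000, step=0.1):
    """Exponentiated-gradient updates for minimizing
    f(x) = -(1/T) * sum_t log(r_t . x) over the probability simplex."""
    T, n = R.shape
    x = np.full(n, 1.0 / n)                 # start from the uniform portfolio
    for _ in range(n_iters):
        grad = -R.T @ (1.0 / (R @ x)) / T   # gradient of f at x
        x = x * np.exp(-step * grad)        # multiplicative (mirror) step
        x /= x.sum()                        # renormalize onto the simplex
    return x

rng = np.random.default_rng(1)
R = 1.0 + 0.1 * rng.standard_normal((200, 4))  # synthetic gross returns
x_star = entropic_mirror_descent(R)
print(x_star)                                   # a point on the simplex
```

Because log(r_t . x) diverges as x approaches certain faces of the simplex, the gradient of f is unbounded and the classical bounded-gradient analyses do not cover this problem, which is why localized smoothness conditions are needed.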

Type
doctoral thesis
DOI
10.5075/epfl-thesis-8765
Author(s)
Li, Yen-Huan  
Advisors
Cevher, Volkan
Jury
Prof. Patrick Thiran (president); Prof. Volkan Cevher (thesis director); Prof. Martin Jaggi, Prof. Jérôme Bolte, Prof. Pradeep Ravikumar (examiners)

Date Issued
2018
Publisher
EPFL
Publisher place
Lausanne
Public defense date
2018-07-13
Thesis number
8765
Number of pages
157

Subjects
Smoothness • strong convexity • statistical learning • convex optimization • variable selection • lasso • quantum state estimation • mirror descent • compressive MRI

EPFL units
LIONS  
Faculty
STI  
School
IEL  
Doctoral School
EDIC  
Available on Infoscience
July 10, 2018
Use this identifier to reference this record
https://infoscience.epfl.ch/handle/20.500.14299/147203
  • Contact
  • infoscience@epfl.ch


Infoscience is a service managed and provided by the Library and IT Services of EPFL. © EPFL, all rights reserved