
Foundations and Trends® in Machine Learning, Vol. 10, Issue 3-4

Non-convex Optimization for Machine Learning

By Prateek Jain, Microsoft Research, India, prajain@microsoft.com | Purushottam Kar, IIT Kanpur, India, purushot@cse.iitk.ac.in

 
Suggested Citation
Prateek Jain and Purushottam Kar (2017), "Non-convex Optimization for Machine Learning", Foundations and Trends® in Machine Learning: Vol. 10: No. 3-4, pp. 142-363. http://dx.doi.org/10.1561/2200000058

Publication Date: 04 Dec 2017
© 2017 P. Jain and P. Kar
 
Subjects
Optimization,  Robustness,  Statistical learning theory,  Sparse representations,  Computational learning,  Design and analysis of algorithms
 


Abstract

A vast majority of machine learning algorithms train their models and perform inference by solving optimization problems. In order to capture the learning and prediction problems accurately, structural constraints such as sparsity or low rank are frequently imposed, or the objective itself is designed to be a non-convex function. This is especially true of algorithms that operate in high-dimensional spaces or that train non-linear models such as tensor models and deep networks. The freedom to express the learning problem as a non-convex optimization problem gives immense modeling power to the algorithm designer, but such problems are often NP-hard to solve. A popular workaround has been to relax non-convex problems to convex ones and use traditional methods to solve the (convex) relaxed optimization problems. However, this approach may be lossy and, even so, presents significant challenges for large-scale optimization. On the other hand, direct approaches to non-convex optimization have met with resounding success in several domains and remain the methods of choice for the practitioner, as they frequently outperform relaxation-based techniques; popular heuristics include projected gradient descent and alternating minimization. These heuristics, however, are often poorly understood in terms of their convergence and other properties. This monograph presents a selection of recent advances that bridge a long-standing gap in our understanding of these heuristics. We hope that an insight into the inner workings of these methods will allow the reader to appreciate the unique marriage of task structure and generative models that allows these heuristic techniques to (provably) succeed. The monograph leads the reader through several widely used non-convex optimization techniques, as well as applications thereof. The goal of this monograph is both to introduce the rich literature in this area and to equip the reader with the tools and techniques needed to analyze these simple procedures for non-convex problems.
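To make the projected gradient descent heuristic mentioned above concrete, here is a minimal sketch (not taken from the monograph itself) of non-convex projected gradient descent for sparse recovery, where the projection step keeps only the s largest-magnitude entries of the iterate; this variant is commonly known as iterative hard thresholding. The function names, step-size choice, and toy data below are illustrative assumptions.

```python
import numpy as np

def hard_threshold(z, s):
    """Project z onto the (non-convex) set of s-sparse vectors
    by keeping its s largest-magnitude entries."""
    out = np.zeros_like(z)
    idx = np.argsort(np.abs(z))[-s:]
    out[idx] = z[idx]
    return out

def iht(X, y, s, step=None, iters=200):
    """Non-convex projected gradient descent (iterative hard
    thresholding) for min_w 0.5*||X w - y||^2 s.t. ||w||_0 <= s."""
    n, d = X.shape
    if step is None:
        # a conservative step size based on the spectral norm of X
        # (an illustrative choice, not the only valid one)
        step = 1.0 / (np.linalg.norm(X, 2) ** 2)
    w = np.zeros(d)
    for _ in range(iters):
        grad = X.T @ (X @ w - y)                 # gradient of 0.5*||Xw - y||^2
        w = hard_threshold(w - step * grad, s)   # gradient step, then projection
    return w

# toy usage: recover a 5-sparse vector from noisy linear measurements
rng = np.random.default_rng(0)
n, d, s = 100, 300, 5
X = rng.standard_normal((n, d)) / np.sqrt(n)
w_true = np.zeros(d)
w_true[rng.choice(d, s, replace=False)] = rng.standard_normal(s)
y = X @ w_true + 0.01 * rng.standard_normal(n)
w_hat = iht(X, y, s)
print("recovery error:", np.linalg.norm(w_hat - w_true))
```

Note that the projection here is onto a non-convex set (the s-sparse vectors), which is precisely what distinguishes this method from classical convex projected gradient descent.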

DOI: 10.1561/2200000058
ISBN (paperback): 978-1-68083-368-3, 212 pp., $99.00
ISBN (e-book, PDF): 978-1-68083-369-0, 212 pp., $260.00
Table of contents:
Preface
Mathematical Notation
Part I: Introduction and Basic Tools
1. Introduction
2. Mathematical Tools
Part II: Non-convex Optimization Primitives
3. Non-Convex Projected Gradient Descent
4. Alternating Minimization
5. The EM Algorithm
6. Stochastic Optimization Techniques
Part III: Applications
7. Sparse Recovery
8. Low-rank Matrix Recovery
9. Robust Linear Regression
10. Phase Retrieval
References

Non-convex Optimization for Machine Learning

Non-convex Optimization for Machine Learning takes an in-depth look at the basics of non-convex optimization with applications to machine learning. It introduces the rich literature in this area and equips the reader with the tools and techniques needed to analyze these simple procedures for non-convex problems.

Non-convex Optimization for Machine Learning is as self-contained as possible without losing focus on its main topic of non-convex optimization techniques. Entire chapters are devoted to presenting a tutorial-like treatment of basic concepts in convex analysis and optimization, as well as their non-convex counterparts. As such, this monograph can be used for a semester-length course on the basics of non-convex optimization with applications to machine learning. It is also possible to cherry-pick individual portions, such as the chapter on sparse recovery or the EM algorithm, for inclusion in a broader course. Courses in machine learning, optimization, and signal processing may all benefit from the inclusion of such topics.

Non-convex Optimization for Machine Learning concludes with a look at four interesting applications in the areas of machine learning and signal processing and explores how the non-convex optimization techniques introduced earlier can be used to solve these problems.
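As a taste of how these techniques apply, the following is a minimal sketch, not taken from the monograph, of alternating minimization for one of the applications mentioned above: low-rank matrix completion. With one factor held fixed, each update reduces to an ordinary (regularized) least-squares problem. All names, the regularization constant, and the toy data are illustrative assumptions.

```python
import numpy as np

def als_complete(M, mask, r, iters=50, reg=1e-6):
    """Alternating minimization for low-rank matrix completion:
    fit M ~ U @ V.T on the observed entries (mask == 1) by
    alternately solving least-squares problems for U and V."""
    m, n = M.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((m, r))
    V = rng.standard_normal((n, r))
    for _ in range(iters):
        # update each row of U with V fixed (row-wise least squares)
        for i in range(m):
            obs = mask[i] == 1
            A = V[obs]
            U[i] = np.linalg.solve(A.T @ A + reg * np.eye(r), A.T @ M[i, obs])
        # update each row of V with U fixed
        for j in range(n):
            obs = mask[:, j] == 1
            A = U[obs]
            V[j] = np.linalg.solve(A.T @ A + reg * np.eye(r), A.T @ M[obs, j])
    return U, V

# toy usage: complete a random rank-2 matrix from ~50% of its entries
rng = np.random.default_rng(1)
m, n, r = 60, 50, 2
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
mask = rng.random((m, n)) < 0.5
U, V = als_complete(M, mask, r)
print("relative error:", np.linalg.norm(U @ V.T - M) / np.linalg.norm(M))
```

The objective here is non-convex in (U, V) jointly, yet convex in each factor separately; this structure is exactly what alternating minimization exploits, and the monograph analyzes when such schemes provably converge.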

 
MAL-058