Foundations and Trends® in Machine Learning, Vol. 17, Issue 5

Automated Deep Learning: Neural Architecture Search Is Not the End

By Xuanyi Dong, Complex Adaptive Systems Lab, University of Technology Sydney, Australia and Brain Team, Google Research, USA, xuanyi.dxy@gmail.com | David Jacob Kedziora, Complex Adaptive Systems Lab, University of Technology Sydney, Australia, david.kedziora@uts.edu.au | Katarzyna Musial, Complex Adaptive Systems Lab, University of Technology Sydney, Australia, katarzyna.musial-gabrys@uts.edu.au | Bogdan Gabrys, Complex Adaptive Systems Lab, University of Technology Sydney, Australia, bogdan.gabrys@uts.edu.au

 
Suggested Citation
Xuanyi Dong, David Jacob Kedziora, Katarzyna Musial and Bogdan Gabrys (2024), "Automated Deep Learning: Neural Architecture Search Is Not the End", Foundations and Trends® in Machine Learning: Vol. 17: No. 5, pp 767-920. http://dx.doi.org/10.1561/2200000119

Publication Date: 27 Feb 2024
© 2024 X. Dong et al.
 
Subjects
Deep learning, Model choice, Classification and prediction, Optimization, Evaluation, Reinforcement learning, Bayesian learning
 
Keywords
Automated deep learning (AutoDL), neural architecture search (NAS), hyperparameter optimization (HPO), automated data engineering, hardware search, automated deployment, life-long learning, persistent learning, adaptation, automated machine learning (AutoML), autonomous machine learning (AutonoML), deep neural networks, deep learning
 


Abstract

Deep learning (DL) has proven to be a highly effective approach for developing models in diverse contexts, including visual perception, speech recognition, and machine translation. However, the end-to-end process for applying DL is not trivial. It requires grappling with problem formulation and context understanding, data engineering, model development, deployment, continuous monitoring and maintenance, and so on. Moreover, each of these steps typically relies heavily on humans, in terms of both knowledge and interactions, which impedes the further advancement and democratization of DL. Consequently, in response to these issues, a new field has emerged over the last few years: automated deep learning (AutoDL). This endeavor seeks to minimize the need for human involvement and is best known for its achievements in neural architecture search (NAS), a topic that has been the focus of several surveys. That stated, NAS is not the be-all and end-all of AutoDL. Accordingly, this review adopts an overarching perspective, examining research efforts into automation across the entirety of an archetypal DL workflow. In so doing, this work also proposes a comprehensive set of ten criteria by which to assess existing work in both individual publications and broader research areas. These criteria are: novelty, solution quality, efficiency, stability, interpretability, reproducibility, engineering quality, scalability, generalizability, and eco-friendliness. Thus, ultimately, this review provides an evaluative overview of AutoDL in the early 2020s, identifying where future opportunities for progress may exist.

DOI: 10.1561/2200000119

Paperback: ISBN 978-1-63828-318-8, 166 pp., $99.00
E-book (PDF): ISBN 978-1-63828-319-5, 166 pp., $155.00
Table of contents:
1. Introduction
2. AutoDL: An Overview
3. Automated Problem Formulation
4. Automated Data Engineering
5. Neural Architecture Search
6. Hyperparameter Optimization
7. Automated Deployment
8. Automated Maintenance
9. Critical Discussion and Future Directions
10. Conclusions
Acknowledgments
References

Automated Deep Learning: Neural Architecture Search Is Not the End

Deep learning (DL) has proven to be a highly effective approach for developing models in diverse contexts, including visual perception, speech recognition, and machine translation. Automated deep learning (AutoDL) endeavors to minimize the need for human involvement and is best known for its achievements in neural architecture search (NAS).

In this monograph, the authors examine research efforts into automation across the entirety of an archetypal DL workflow. In so doing, they propose a comprehensive set of ten criteria by which to assess existing work in both individual publications and broader research areas, namely novelty, solution quality, efficiency, stability, interpretability, reproducibility, engineering quality, scalability, generalizability, and eco-friendliness.

Aimed at students and researchers, this monograph provides an evaluative overview of AutoDL in the early 2020s, identifying where future opportunities for progress may exist.

 
MAL-119