DOI: 10.5555/2971808.2971918
Research article, free access

Conditional deep learning for energy-efficient and enhanced pattern recognition

Published: 14 March 2016

Abstract

Deep learning neural networks have emerged as one of the most powerful classification tools for vision-related applications. However, the computational and energy requirements of such deep networks can be quite high, so their energy-efficient implementation is of great interest. Although traditionally the entire network is used to recognize every input, we observe that classification difficulty varies widely across inputs in real-world datasets: only a small fraction of inputs require the full computational effort of the network, while the large majority can be classified correctly with very low effort. In this paper, we propose Conditional Deep Learning (CDL), in which the convolutional layer features are used to identify the difficulty of an input instance and to conditionally activate the deeper layers of the network. We achieve this by cascading a linear network of output neurons onto each convolutional layer and monitoring the output of that linear network to decide whether classification can be terminated at the current stage. The proposed methodology thus enables the network to dynamically adjust its computational effort to the difficulty of the input data while maintaining competitive classification accuracy. We evaluate our approach on the MNIST dataset. Our experiments demonstrate that CDL yields a 1.91x reduction in the average number of operations per input, which translates to a 1.84x improvement in energy. In addition, our results show an improvement in classification accuracy from 97.5% to 98.9% compared to the original network.
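The staged early-termination idea in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the convolutional stages are stood in for by random ReLU feature maps, and a top-softmax confidence threshold is assumed as the exit criterion (the paper monitors the output of a cascaded linear network at each convolutional layer; its exact decision rule and trained weights are not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

class Stage:
    """One stage of the cascade: a feature map standing in for a conv
    layer, plus a cascaded linear output layer (the early-exit classifier)."""
    def __init__(self, in_dim, feat_dim, n_classes):
        self.W_feat = rng.standard_normal((feat_dim, in_dim)) * 0.1
        self.W_out = rng.standard_normal((n_classes, feat_dim)) * 0.1

    def forward(self, x):
        h = np.maximum(self.W_feat @ x, 0.0)   # ReLU features
        return h, softmax(self.W_out @ h)      # linear exit output

def cdl_predict(stages, x, threshold=0.9):
    """Return (predicted_class, stages_used): stop at the first exit whose
    top softmax probability clears the confidence threshold; otherwise
    fall through to the final stage. Easy inputs exit early, so the
    average number of operations per input drops."""
    h = x
    for i, stage in enumerate(stages, start=1):
        h, probs = stage.forward(h)
        if probs.max() >= threshold or i == len(stages):
            return int(probs.argmax()), i
```

Lowering the threshold trades accuracy for effort: with `threshold=0.0` every input exits at the first stage, while `threshold=1.0` forces the full network, which is the knob behind the reported operations/energy savings.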


      Published In

      DATE '16: Proceedings of the 2016 Conference on Design, Automation & Test in Europe
      March 2016
      1779 pages
      ISBN:9783981537062
      • General Chair:
      • Luca Fanucci,
      • Program Chair:
      • Jürgen Teich

      Sponsors

      • IMEC
      • Systematic Paris-Region Systems & ICT Cluster
      • DREWAG
      • AENEAS
      • Technical University of Dresden
      • CMP: Circuits Multi Projets
      • PENTA
      • CISCO
      • OFFIS: Oldenburger Institut für Informatik
      • Goethe University Frankfurt

      Publisher

      EDA Consortium

      San Jose, CA, United States

      Author Tags

      1. conditional activation
      2. deep learning convolutional neural network
      3. energy efficiency
      4. enhanced accuracy

      Qualifiers

      • Research-article

      Cited By

      • (2024) Early-Exit Deep Neural Network - A Comprehensive Survey. ACM Computing Surveys 57(3), 1-37. doi:10.1145/3698767. Online publication date: 22-Nov-2024.
      • (2024) Adapting Neural Networks at Runtime: Current Trends in At-Runtime Optimizations for Deep Learning. ACM Computing Surveys 56(10), 1-40. doi:10.1145/3657283. Online publication date: 14-May-2024.
      • (2023) When does confidence-based cascade deferral suffice? Proceedings of the 37th International Conference on Neural Information Processing Systems, 9891-9906. doi:10.5555/3666122.3666553. Online publication date: 10-Dec-2023.
      • (2023) Resource-Efficient Convolutional Networks: A Survey on Model-, Arithmetic-, and Implementation-Level Techniques. ACM Computing Surveys 55(13s), 1-36. doi:10.1145/3587095. Online publication date: 13-Jul-2023.
      • (2022) Post-hoc estimators for learning to defer to an expert. Proceedings of the 36th International Conference on Neural Information Processing Systems, 29292-29304. doi:10.5555/3600270.3602394. Online publication date: 28-Nov-2022.
      • (2022) A Construction Kit for Efficient Low Power Neural Network Accelerator Designs. ACM Transactions on Embedded Computing Systems 21(5), 1-36. doi:10.1145/3520127. Online publication date: 8-Oct-2022.
      • (2022) DynO: Dynamic Onloading of Deep Neural Networks from Cloud to Device. ACM Transactions on Embedded Computing Systems 21(6), 1-24. doi:10.1145/3510831. Online publication date: 18-Oct-2022.
      • (2021) Synergistically Exploiting CNN Pruning and HLS Versioning for Adaptive Inference on Multi-FPGAs at the Edge. ACM Transactions on Embedded Computing Systems 20(5s), 1-26. doi:10.1145/3476990. Online publication date: 17-Sep-2021.
      • (2021) Adaptive Inference through Early-Exit Networks. Proceedings of the 5th International Workshop on Embedded and Mobile Deep Learning, 1-6. doi:10.1145/3469116.3470012. Online publication date: 25-Jun-2021.
      • (2021) ApproxNet: Content and Contention-Aware Video Object Classification System for Embedded Clients. ACM Transactions on Sensor Networks 18(1), 1-27. doi:10.1145/3463530. Online publication date: 5-Oct-2021.
