
DOI: 10.1145/2968456.2976766 · Research Article · Public Access

Going deeper than deep learning for massive data analytics under physical constraints

Published: 01 October 2016 Publication History

Abstract

Deep Neural Networks (DNNs) are a set of powerful yet computationally complex learning mechanisms that are projected to dominate various artificial intelligence and massive data analytics domains. Physical viability constraints, such as timing, memory, and energy efficiency, are standing challenges in realizing the true potential of DNNs. We propose DeLight, a set of novel methodologies that bring physical constraints in as design parameters in the training and execution of DNN architectures. We use physical profiling to bound the network size in accordance with the pertinent platform's characteristics. An automated customization methodology is proposed to adaptively conform DNN configurations to the characteristics of the underlying hardware while minimally affecting inference accuracy. The key to our approach is a new content- and resource-aware transformation of the data to a lower-dimensional embedding, by which learning the correlation between data samples requires a significantly smaller number of neurons. We leverage the performance gain achieved by the data transformation to enable the training of multiple DNN architectures that can be aggregated to further boost inference accuracy. An accompanying API is also developed for rapid prototyping of an arbitrary DNN application customized to the platform. Proof-of-concept evaluations deploying different imaging, audio, and smart-sensing applications demonstrate up to 100-fold performance improvement over state-of-the-art DNN solutions.
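The abstract does not specify DeLight's actual transformation, so the following is only an illustrative sketch of the general idea: projecting data onto a lower-dimensional embedding shrinks the DNN's input layer, and with it the first weight matrix. The function name, the choice of a PCA-style SVD projection, and all dimensions below are assumptions for illustration, not the paper's method.

```python
import numpy as np

def project_to_embedding(X, k):
    """Project samples onto their top-k principal directions via SVD.

    X: (n_samples, n_features) data matrix.
    Returns (X_embedded, components), with X_embedded of shape (n_samples, k).
    """
    Xc = X - X.mean(axis=0)                # center the data
    # economy-size SVD; rows of Vt are the principal directions
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                    # (k, n_features)
    return Xc @ components.T, components

# A 1000-dimensional input shrinks to a 64-dimensional embedding,
# so a DNN's first-layer weight matrix is ~16x smaller.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 1000))
Z, W = project_to_embedding(X, k=64)
print(Z.shape)   # (200, 64)
```

The saved compute could then be spent training several such reduced networks and aggregating their predictions, as the abstract describes.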


Cited By

  • (2021) Blackthorn: Latency Estimation Framework for CNNs on Embedded Nvidia Platforms. IEEE Access, vol. 9, pp. 110074-110084. DOI: 10.1109/ACCESS.2021.3101936
  • (2017) TinyDL: Just-in-time deep learning solution for constrained embedded systems. 2017 IEEE International Symposium on Circuits and Systems (ISCAS), pp. 1-4, May 2017. DOI: 10.1109/ISCAS.2017.8050343


    Published In

    CODES '16: Proceedings of the Eleventh IEEE/ACM/IFIP International Conference on Hardware/Software Codesign and System Synthesis
    October 2016
    294 pages
    ISBN:9781450344838
    DOI:10.1145/2968456
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

    Publisher

    Association for Computing Machinery

    New York, NY, United States



    Conference

    ESWEEK'16
    ESWEEK'16: TWELFTH EMBEDDED SYSTEM WEEK
    October 1 - 7, 2016
    Pittsburgh, Pennsylvania

    Acceptance Rates

    Overall Acceptance Rate 280 of 864 submissions, 32%

