DOI: 10.1145/3583133.3596333

EvoPrunerPool: An Evolutionary Pruner using Pruner Pool for Compressing Convolutional Neural Networks

Published: 24 July 2023

Abstract

This paper proposes EvoPrunerPool, an Evolutionary Pruner using a Pruner Pool for compressing Convolutional Neural Networks. EvoPrunerPool formulates filter pruning as a search problem: identifying the right set of pruners from a pool of off-the-shelf filter pruners and applying them in an appropriate sequence to incrementally sparsify a given Convolutional Neural Network. The efficacy of EvoPrunerPool is demonstrated on the LeNet model using the MNIST dataset and on the deeper VGG-19 model using the CIFAR-10 dataset, and its performance is benchmarked against state-of-the-art model compression approaches. Experiments show that the proposed Evolutionary Pruner is both competitive and effective. Since EvoPrunerPool employs the native representation of a popular machine learning framework and filter pruners from a well-known AutoML toolkit, the proposed approach is both extensible and generic. Consequently, a typical practitioner can use EvoPrunerPool without any in-depth understanding of filter pruning in particular or model compression in general.
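The search the abstract describes — evolving a sequence of pruner choices drawn from a fixed pool — can be sketched in plain Python. Everything below is illustrative, not the paper's implementation: the pool entries (`prune_small`, `prune_every_other`, `prune_random`) are toy stand-ins for real filter pruners, the weights are a flat list rather than a network, and the fitness is a toy sparsity-vs-magnitude trade-off instead of validation accuracy.

```python
import random

# Stand-in "pruner pool": each pruner zeroes out weights by a different rule.
# (Illustrative only -- the paper draws real filter pruners from an AutoML toolkit.)
def prune_small(w):
    """Zero out roughly the smallest 20% of weights by magnitude."""
    k = int(len(w) * 0.2)
    thresh = sorted(abs(x) for x in w)[k] if k else 0.0
    return [0.0 if abs(x) < thresh else x for x in w]

def prune_every_other(w):
    """Zero out every second weight (a crude structured pattern)."""
    return [0.0 if i % 2 else x for i, x in enumerate(w)]

def prune_random(w):
    """Zero out ~10% of weights at random (seeded for determinism)."""
    rng = random.Random(0)
    return [0.0 if rng.random() < 0.1 else x for x in w]

POOL = [prune_small, prune_every_other, prune_random]

def apply_sequence(seq, weights):
    """Apply the chosen pruners in order, incrementally sparsifying."""
    for idx in seq:
        weights = POOL[idx](weights)
    return weights

def fitness(seq, weights):
    """Toy objective: reward sparsity while retaining weight magnitude."""
    pruned = apply_sequence(seq, weights)
    sparsity = sum(1 for x in pruned if x == 0.0) / len(pruned)
    retained = sum(abs(x) for x in pruned) / (sum(abs(x) for x in weights) or 1.0)
    return sparsity + retained

def evolve(weights, seq_len=3, pop_size=8, gens=10, seed=1):
    """Simple (mu + lambda)-style evolution over pruner sequences."""
    rng = random.Random(seed)
    pop = [[rng.randrange(len(POOL)) for _ in range(seq_len)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda s: fitness(s, weights), reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        for p in parents:
            c = p[:]
            c[rng.randrange(seq_len)] = rng.randrange(len(POOL))  # point mutation
            children.append(c)
        pop = parents + children
    return max(pop, key=lambda s: fitness(s, weights))

best = evolve([0.5, -0.1, 0.9, 0.05, -0.7, 0.3, -0.02, 0.8])
```

The individual is just a list of pool indices, so swapping in real pruners (e.g. from an AutoML toolkit) only changes what the `POOL` entries do, not the search loop itself.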


Cited By

  • (2024) Multi-objective evolutionary architectural pruning of deep convolutional neural networks with weights inheritance. Information Sciences 685 (Dec. 2024), 121265. DOI: 10.1016/j.ins.2024.121265

    Published In

    GECCO '23 Companion: Proceedings of the Companion Conference on Genetic and Evolutionary Computation
    July 2023
    2519 pages
    ISBN:9798400701207
    DOI:10.1145/3583133
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

    Publisher

    Association for Computing Machinery, New York, NY, United States

    Author Tags

    1. evolutionary pruning
    2. pruner pool
    3. model compression and filter pruning

    Qualifiers

    • Research-article

    Conference

    GECCO '23 Companion

    Acceptance Rates

    Overall Acceptance Rate 1,669 of 4,410 submissions, 38%


    Article Metrics

    • Downloads (last 12 months): 63
    • Downloads (last 6 weeks): 0
    Reflects downloads up to 25 Nov 2024

