An Inverted Residual based Lightweight Network for Object Detection in Sweeping Robots

Published in Applied Intelligence.

Abstract

Object detection is a core capability that sweeping robots require to carry out their cleaning tasks. Deep learning-based object detection has achieved high accuracy; however, owing to the large number of backbone network layers, detection speed depends heavily on the computing hardware, which hinders application to real-time sweeping tasks. To address this problem, this paper proposes an accurate, fast, and lightweight You Only Look Once (YOLO) network for sweeping robots. First, an improved online enhancement method is proposed to enrich the original dataset. Second, an inverted residual block-based lightweight network is constructed for object recognition. Finally, an optimized spatial pyramid pooling method is added after the backbone network to further improve performance. Comparative experiments with many state-of-the-art methods show that, at a similar model size, the new model is 13.79% more accurate than YOLOv4_tiny. Furthermore, its model size is only 1/9 that of YOLOv4 while achieving similar accuracy, demonstrating the advantages of the proposed model in terms of size, accuracy, and speed.
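To illustrate why inverted residual blocks yield a lightweight backbone, the sketch below counts weights for a standard 3×3 convolution versus the depthwise separable filtering that inverted residual blocks (MobileNetV2-style) are built on. This is not the paper's exact architecture; the channel sizes are illustrative assumptions.

```python
def standard_conv_params(c_in, c_out, k=3):
    # Every output channel mixes all input channels: c_in * c_out * k * k weights.
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k=3):
    # Depthwise k x k (one spatial filter per input channel),
    # followed by a 1x1 pointwise convolution that mixes channels.
    return c_in * k * k + c_in * c_out

std = standard_conv_params(64, 64)        # 36864 weights
sep = depthwise_separable_params(64, 64)  # 4672 weights
print(std, sep, round(std / sep, 1))      # roughly 7.9x fewer weights
```

The spatial filtering cost grows linearly rather than quadratically in the channel count, which is the main source of the model-size reduction such backbones achieve.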


Figs. 1–10 appear in the full article.



Acknowledgements

This work was partially supported by National Key R&D Program of China grant #2019YFB1310003, National Science Foundation of China grant #61903267 and China Postdoctoral Science Foundation grant #2020M681691 awarded to Wenzheng Chi.

Author information


Corresponding author

Correspondence to Wenzheng Chi.

Ethics declarations

Conflicts of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Lv, Y., Liu, J., Chi, W. et al. An Inverted Residual based Lightweight Network for Object Detection in Sweeping Robots. Appl Intell 52, 12206–12221 (2022). https://doi.org/10.1007/s10489-021-03104-9

