Abstract
Object detection is a core capability that sweeping robots require to carry out their cleaning tasks. Deep learning-based object detection has achieved high accuracy; however, because backbone networks contain many layers, detection speed is constrained by the available computing hardware, which hinders application to real-time sweeping tasks. To address this problem, this paper proposes an accurate, fast, and lightweight You Only Look Once (YOLO) network for sweeping robots. First, an improved online enhancement method is proposed to enrich the original dataset. Second, a lightweight network based on inverted residual blocks is constructed for object recognition. Finally, an optimized spatial pyramid pooling module is added behind the backbone network to further improve performance. Comparative experiments against several state-of-the-art methods show that the accuracy of the new model is 13.79% higher than that of YOLOv4_tiny at a similar model size. Furthermore, its model size is only 1/9 that of YOLOv4 while achieving similar accuracy, demonstrating the advantages of the proposed model in terms of size, accuracy, and speed.
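For context on the building blocks named above, the following is a minimal PyTorch sketch of an inverted residual block (in the MobileNetV2 style) and a spatial pyramid pooling (SPP) layer of the kind typically placed behind a YOLO backbone. It is an illustration, not the paper's implementation; the channel counts, expansion ratio, and pooling kernel sizes (5, 9, 13) are assumptions.

```python
# Illustrative sketch only: an inverted residual block and an SPP layer.
# All layer sizes here are assumed, not taken from the paper.
import torch
import torch.nn as nn


class InvertedResidual(nn.Module):
    """Expand (1x1) -> depthwise (3x3) -> project (1x1), with a skip
    connection when the input and output shapes match."""
    def __init__(self, in_ch, out_ch, stride=1, expand_ratio=6):
        super().__init__()
        hidden = in_ch * expand_ratio
        self.use_skip = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 1, bias=False),              # expand
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, hidden, 3, stride, 1,
                      groups=hidden, bias=False),                  # depthwise
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, out_ch, 1, bias=False),              # linear projection
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        y = self.block(x)
        return x + y if self.use_skip else y


class SPP(nn.Module):
    """Concatenate max-pooled features at several kernel sizes to fuse
    multi-scale context behind the backbone."""
    def __init__(self, pool_sizes=(5, 9, 13)):
        super().__init__()
        self.pools = nn.ModuleList(
            [nn.MaxPool2d(k, stride=1, padding=k // 2) for k in pool_sizes])

    def forward(self, x):
        return torch.cat([x] + [p(x) for p in self.pools], dim=1)


if __name__ == "__main__":
    x = torch.randn(1, 32, 56, 56)
    y = SPP()(InvertedResidual(32, 32)(x))
    print(y.shape)  # torch.Size([1, 128, 56, 56]): 32 * (1 + 3) channels
```

Stacking such blocks keeps the parameter count small because the expensive 3x3 convolution is depthwise, while the SPP layer fuses receptive fields of several sizes at negligible parameter cost.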
Acknowledgements
This work was partially supported by the National Key R&D Program of China under grant #2019YFB1310003, the National Science Foundation of China under grant #61903267, and the China Postdoctoral Science Foundation under grant #2020M681691 awarded to Wenzheng Chi.
Ethics declarations
Conflicts of interest
The authors declare that they have no conflict of interest.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
About this article
Cite this article
Lv, Y., Liu, J., Chi, W. et al. An Inverted Residual based Lightweight Network for Object Detection in Sweeping Robots. Appl Intell 52, 12206–12221 (2022). https://doi.org/10.1007/s10489-021-03104-9