Abstract
In traditional automated assembly lines, configuring robotic arm operations typically relies on manual programming, which is inconvenient and time-consuming. Computer vision offers an alternative and has recently become popular in robotics. With the breakthroughs of deep learning in computer vision, image processing methods based on deep neural networks (DNNs) have been applied in a variety of industrial scenarios. This paper explores the application of DNN-based image segmentation to factory assembly lines. We developed an image dataset of electronic components for our experiments, and used region-proposal-based deep convolutional neural networks (CNNs) to detect and segment objects and to generate masks for the corresponding electronics. Experimental results show that the proposed framework achieves high recognition accuracy and strong robustness, which will be helpful for robotic arm setup in automated assembly lines.
S. Ma and X. Fan—Equal contributions.
Acknowledgement
This work was supported in part by the National Key R&D Program of China (2018YFB1308000); in part by the National Natural Science Foundation of China (61772508 and U1713213); in part by the Shenzhen Technology Project (JCYJ20170413152535587, JCYJ20170307164023599, and JSGG20170823091924128); in part by the CAS Key Technology Talent Program and the Guangdong Technology Program (2016B010108010, 2016B010125003, and 2017B010110007); and in part by the Shenzhen Engineering Laboratory for 3D Content Generating Technologies ([2017] 476).
Appendix
In this appendix, we present some test results.
In the last two rows of images, we chose test samples with complex object distributions, which better illustrate the effectiveness of the neural network on this task. Note that even when the objects are crowded and some are occluded by others, the network can still detect and segment them with high accuracy (Fig. 3).
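For use in robotic arm setup, each predicted instance mask must be turned into an actionable coordinate, such as a candidate pick point. A minimal sketch, assuming the mask arrives as a binary NumPy array (the function name is ours for illustration, not from the paper):

```python
import numpy as np

def mask_centroid(mask: np.ndarray):
    """Return the centroid (row, col) of a binary instance mask,
    or None if the mask is empty. The centroid is a simple candidate
    pick point for an unoccluded, roughly convex part."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return float(ys.mean()), float(xs.mean())

# Toy mask: one rectangular "component" in a 100x100 frame.
mask = np.zeros((100, 100), dtype=bool)
mask[20:40, 50:70] = True
print(mask_centroid(mask))  # (29.5, 59.5)
```

For occluded or irregularly shaped parts, the centroid may fall outside the object itself; a distance-transform maximum inside the mask is a common, more robust alternative.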
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this paper
Ma, S., Fan, X., Wang, L., Cheng, J., Xu, C. (2019). Neural Network Based Electronics Segmentation. In: Yu, H., Liu, J., Liu, L., Ju, Z., Liu, Y., Zhou, D. (eds) Intelligent Robotics and Applications. ICIRA 2019. Lecture Notes in Computer Science, vol 11744. Springer, Cham. https://doi.org/10.1007/978-3-030-27541-9_43
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-27540-2
Online ISBN: 978-3-030-27541-9