Neural Network Based Electronics Segmentation

  • Conference paper in: Intelligent Robotics and Applications (ICIRA 2019)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 11744)

Abstract

In traditional automated assembly lines, configuring the operations of a robotic arm relies on manual programming, which is inconvenient and time consuming. Computer vision offers an alternative and has recently become popular in robotics. With the breakthroughs of deep learning in computer vision, image processing methods based on deep neural networks (DNNs) have been applied in a variety of industrial scenes. This paper explores the application of DNN-based image segmentation to factory assembly lines. We build an image dataset of electronic components for our experiments, and use deep convolutional neural networks (CNNs) based on region proposals to detect and segment objects and to generate a mask for each electronic component. Experimental results show that the proposed framework achieves high recognition accuracy and strong robustness, which makes it useful for configuring robotic arms in automated assembly lines.
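
The paper does not include an implementation, but a region-proposal-based instance segmentation pipeline of the kind described above can be sketched with an off-the-shelf Mask R-CNN. The minimal Python sketch below assumes a recent torchvision; the number of component classes, the checkpoint name electronics_maskrcnn.pth, and the score threshold are illustrative placeholders, not details taken from the paper.

```python
# Minimal sketch of a region-proposal-based detector/segmenter (Mask R-CNN style).
# Model choice, class count, checkpoint path, and threshold are assumptions.
import torch
import torchvision
from torchvision.transforms import functional as F
from PIL import Image

NUM_CLASSES = 1 + 4  # background + hypothetical electronic-component classes


def build_model(num_classes: int):
    # Start from a COCO-pretrained Mask R-CNN with a ResNet-50 FPN backbone and
    # replace the box and mask heads so they predict the custom classes.
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = torchvision.models.detection.faster_rcnn.FastRCNNPredictor(
        in_features, num_classes)
    in_channels_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = torchvision.models.detection.mask_rcnn.MaskRCNNPredictor(
        in_channels_mask, 256, num_classes)
    return model


def segment(image_path: str, score_thresh: float = 0.7):
    model = build_model(NUM_CLASSES)
    # model.load_state_dict(torch.load("electronics_maskrcnn.pth"))  # hypothetical fine-tuned weights
    model.eval()
    img = F.to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        out = model([img])[0]
    keep = out["scores"] > score_thresh
    # Binarize the soft mask predictions; each kept detection yields one instance mask.
    masks = out["masks"][keep, 0] > 0.5
    return out["boxes"][keep], out["labels"][keep], masks
```

In practice the replaced heads would first be fine-tuned on the custom electronics dataset before the model is used for inference on the assembly line.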

S. Ma and X. Fan—Equal contributions.



Acknowledgement

This work was supported in part by the National Key R&D Program of China (2018YFB1308000); in part by the National Natural Science Foundation of China (61772508 and U1713213); in part by the Shenzhen Technology Project (JCYJ20170413152535587, JCYJ20170307164023599, and JSGG20170823091924128); in part by the CAS Key Technology Talent Program and the Guangdong Technology Program (2016B010108010, 2016B010125003, and 2017B010110007); and in part by the Shenzhen Engineering Laboratory for 3D Content Generating Technologies ([2017] 476).

Author information

Corresponding author

Correspondence to Lei Wang.

Appendix

In this appendix, we show additional testing results.

In the last two rows of images, we choose test samples with complex object distributions, which better illustrate the effectiveness of the neural network on this task. Note that even though the objects are crowded and some occlude one another, the network still detects and segments them with high accuracy (Fig. 3).
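
To inspect results like those in Fig. 3, the predicted instance masks can be overlaid on the input image. The short sketch below assumes detections in the format produced by the torchvision-style model sketched earlier (a stack of boolean masks); the colors and blending factor are arbitrary choices for illustration.

```python
# Small visualization sketch: blend each boolean instance mask (N, H, W) onto the image.
# Assumes the mask format of the earlier sketch; colors and alpha are arbitrary.
import numpy as np
from PIL import Image


def overlay_masks(image: Image.Image, masks: np.ndarray, alpha: float = 0.5) -> Image.Image:
    canvas = np.asarray(image.convert("RGB"), dtype=np.float32)
    rng = np.random.default_rng(0)
    for mask in masks:
        # Pick a random color per instance and alpha-blend it over the masked pixels.
        color = rng.integers(0, 256, size=3).astype(np.float32)
        canvas[mask] = (1 - alpha) * canvas[mask] + alpha * color
    return Image.fromarray(canvas.astype(np.uint8))
```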

Fig. 3. Object recognition and instance segmentation results


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Ma, S., Fan, X., Wang, L., Cheng, J., Xu, C. (2019). Neural Network Based Electronics Segmentation. In: Yu, H., Liu, J., Liu, L., Ju, Z., Liu, Y., Zhou, D. (eds) Intelligent Robotics and Applications. ICIRA 2019. Lecture Notes in Computer Science (LNAI), vol. 11744. Springer, Cham. https://doi.org/10.1007/978-3-030-27541-9_43

  • DOI: https://doi.org/10.1007/978-3-030-27541-9_43

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-27540-2

  • Online ISBN: 978-3-030-27541-9

  • eBook Packages: Computer Science, Computer Science (R0)
