Abstract
Picking objects in an unstructured environment is an important step toward greater autonomy of robot manipulation in real applications. A primary challenge of the task is identifying individual objects in cluttered sensor readings. In this paper, a real-time segmentation algorithm is proposed to partition a scene into objects using only depth and geometric information. We model the scene as a graph in which surfaces are the nodes and the geometric relations between surfaces are the edges; each relation is characterized by the convexity and connectivity of the two neighboring surfaces. On top of the segmentation result, a measure is developed to suggest grasping proposals for the robot. Our method has advantages over RGB-based and learning-based methods: it is robust to illumination variation and requires no sample collection, which makes deployment more convenient. The method is evaluated on public datasets to validate its feasibility and effectiveness, demonstrating better performance than other depth-based image segmentation methods. In addition, a real-world robot grasping experiment is conducted to investigate its suitability for on-site production.
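To make the graph-based idea above concrete, here is a minimal, self-contained Python sketch (not the authors' implementation): surface patches with centroids and normals serve as graph nodes, a local-convexity test on each neighboring pair decides whether an edge connects two patches of the same object, and a union-find pass merges convexly connected patches into object hypotheses. The Surface class, the convexity tolerance, and the toy adjacency list are all illustrative assumptions.

import numpy as np

class Surface:
    """A planar surface patch, reduced to its centroid and unit normal."""
    def __init__(self, centroid, normal):
        self.centroid = np.asarray(centroid, dtype=float)
        n = np.asarray(normal, dtype=float)
        self.normal = n / np.linalg.norm(n)

def is_convex(a, b, angle_tol_deg=10.0):
    """Local convexity test: the edge a-b is convex if the two normals
    'open away' from each other along the displacement between the
    centroids (a common criterion in convexity-based segmentation;
    the tolerance is a guess)."""
    d = b.centroid - a.centroid
    d = d / np.linalg.norm(d)
    margin = np.sin(np.deg2rad(angle_tol_deg))
    return np.dot(a.normal, d) - np.dot(b.normal, d) <= margin

def segment(surfaces, adjacency):
    """Union-find merge of surfaces joined by convex edges.
    `adjacency` is an iterable of (i, j) index pairs of neighboring
    surfaces (e.g. patches sharing a boundary in the depth image)."""
    parent = list(range(len(surfaces)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i, j in adjacency:
        if is_convex(surfaces[i], surfaces[j]):
            parent[find(i)] = find(j)      # same object hypothesis
    return [find(i) for i in range(len(surfaces))]

# Toy usage: two faces of a box (convex edge -> one object) plus a
# patch leaning toward the top face (concave edge -> separate object).
faces = [Surface([0, 0, 1.0], [0, 0, 1]),     # top of a box
         Surface([1, 0, 0.5], [1, 0, 0]),     # side of the same box
         Surface([-1, 0, 0.5], [1, 0, 1])]    # concave neighbor
adj = [(0, 1), (0, 2)]
print(segment(faces, adj))  # -> [1, 1, 2]: faces 0 and 1 merged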
Acknowledgment
This work is supported by the National Natural Science Foundation of China (Grant Nos. U1609210, 61473258, and U1509210).
Copyright information
© 2017 Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Chen, Y., Wang, Y., Hu, J., Xiong, R. (2017). Picking from Clutter: An Object Segmentation Method for Robot Grasping. In: Sun, F., Liu, H., Hu, D. (eds) Cognitive Systems and Signal Processing. ICCSIP 2016. Communications in Computer and Information Science, vol 710. Springer, Singapore. https://doi.org/10.1007/978-981-10-5230-9_35
DOI: https://doi.org/10.1007/978-981-10-5230-9_35
Publisher Name: Springer, Singapore
Print ISBN: 978-981-10-5229-3
Online ISBN: 978-981-10-5230-9