Abstract
In this paper, we propose a grasping network that uses feature augmentation to address the poor generalization of grasp detection to novel objects. Taking GSNet as the baseline, we propose three modules: Gaussian Noise Mix (GNM), Resblock, and Local Features Interpolation (LFI). GNM augments the backbone features during training to reduce the model's empirical risk when dealing with novel samples. Resblock expands the prior weights of the features, improving the accuracy and contribution of point-wise features to the global context. LFI inserts hybrid features into the seed-point features, enhancing their local geometric representation and reducing the model's dependence on the sample data. The effectiveness of our method is verified on the GraspNet-1Billion dataset through simulation, ablation, and generalization experiments. Our model achieves state-of-the-art performance, with a maximum improvement over the baseline of 13.07% on the similar set and 4.06% on the most challenging novel set. In real-world experiments, similar and novel objects are selected to verify that the network's generalization to novel objects is significantly improved.
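To make the feature-augmentation idea concrete, the following is a minimal sketch of Gaussian-noise-based augmentation of point-wise backbone features, in the spirit of the GNM module. The noise statistics, mixing ratio, and the point at which it is applied in GSNet are assumptions for illustration only, not the authors' implementation.

```python
# Minimal sketch of Gaussian-noise feature mixing (illustrative; the exact GNM
# formulation is defined in the paper, not here).
import torch
import torch.nn as nn


class GaussianNoiseMix(nn.Module):
    """Mix point-wise backbone features with Gaussian noise during training."""

    def __init__(self, sigma: float = 0.1, mix_prob: float = 0.5):
        super().__init__()
        self.sigma = sigma        # std of the injected noise (hypothetical default)
        self.mix_prob = mix_prob  # probability of augmenting a given batch

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, N, C) point-wise features from the 3D backbone.
        if not self.training or torch.rand(1).item() > self.mix_prob:
            return feats  # identity at inference time
        noise = torch.randn_like(feats) * self.sigma
        lam = torch.rand(feats.shape[0], 1, 1, device=feats.device)  # per-sample mix ratio
        return (1.0 - lam) * feats + lam * (feats + noise)


# Usage: applied to backbone output before the grasp heads, e.g.
# feats = GaussianNoiseMix()(backbone(point_cloud))
```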
Acknowledgements
This work was supported by the National Key R&D Program of China (Grant No. 2022YFB4700400), the National Natural Science Foundation of China (Grant No. 62073249), and the Key R&D Program of Hubei Province (Grant No. 2023BBB011).
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Zhang, L., Lin, Y., Xu, Y., Min, H. (2024). A 6-DoF Grasping Network Using Feature Augmentation for Novel Domain Generalization. In: Huang, D.-S., Pan, Y., Guo, J. (eds.) Advanced Intelligent Computing Technology and Applications. ICIC 2024. Lecture Notes in Computer Science, vol. 14873. Springer, Singapore. https://doi.org/10.1007/978-981-97-5615-5_1
DOI: https://doi.org/10.1007/978-981-97-5615-5_1
Publisher Name: Springer, Singapore
Print ISBN: 978-981-97-5614-8
Online ISBN: 978-981-97-5615-5