
Neural network-based robot visual positioning for intelligent assembly

Abstract

A fundamental task in robotic assembly is the pick-and-place operation. Generally, this operation consists of three subtasks: guiding the robot to the target and positioning the manipulator in an appropriate pose, picking up the object, and moving the object to a new location. In situations where the pose of the target may vary in the workspace, sensory feedback becomes indispensable to guide the robot to the object. Ideally, local image features should be clearly visible and unoccluded in multiple views of the object. In reality, this may not always be the case. Local image features are often rigidly constrained to a particular target and may require specialized feature localization algorithms. We present a visual positioning system that addresses feature extraction issues for a class of objects that have smooth or curved surfaces. In this work, the visual sensor consists of an arm-mounted camera and a grid pattern projector that produces images with a local surface description of the target. The projected pattern is always visible in the image and is sensitive to variations in the object’s pose. A set of low-order geometric moments globally characterizes the observed pattern, eliminating the need for feature localization and overcoming the point correspondence problem. A neural network then learns the complex relationship between the robot’s pose displacements and the observed variations in the image features. After training, visual feedback guides the robot to the target from any arbitrary location in the workspace. The system’s applicability is demonstrated using a five-degrees-of-freedom (DOF) industrial robot.
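The global pattern description rests on low-order geometric moments. As a concrete illustration, the sketch below computes the raw moments m_pq = Σ_x Σ_y x^p y^q I(x, y) of a projected-pattern image and stacks them into a fixed-length feature vector. This is a minimal sketch assuming a grayscale NumPy image; the function names, the choice of maximum order, and the normalization by the zeroth moment m_00 are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def geometric_moments(image, max_order=2):
    """Raw geometric moments m_pq = sum_x sum_y x^p y^q I(x, y)
    for all orders p + q <= max_order of a 2-D grayscale image."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]          # pixel coordinate grids
    moments = {}
    for p in range(max_order + 1):
        for q in range(max_order + 1 - p):
            moments[(p, q)] = float(np.sum((xs ** p) * (ys ** q) * image))
    return moments

def moment_feature_vector(image, max_order=2):
    """Stack the low-order moments into a fixed-length feature vector.
    Dividing by m_00 (total intensity) is one simple way to reduce
    sensitivity to overall brightness -- an assumption, not the
    paper's stated normalization."""
    m = geometric_moments(image, max_order)
    m00 = m[(0, 0)] if m[(0, 0)] != 0 else 1.0
    return np.array([m[key] / m00 for key in sorted(m) if key != (0, 0)])
```

Because each moment integrates over the whole image, the feature vector requires no feature localization and no point correspondences: any deformation of the projected grid caused by a pose change shifts the moment values directly.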
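The neural network then learns the mapping from variations in these moment features to the robot’s pose displacements. The sketch below shows one plausible realization: a small two-layer perceptron trained by gradient descent, driving a closed-loop positioning routine. The architecture, activation, learning rate, and the `grab_image`/`robot.move_relative` interfaces are hypothetical stand-ins; the paper’s actual network and robot interface are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

class PoseNet:
    """Two-layer perceptron: moment-feature vector -> pose displacement
    (e.g., 5 outputs for a 5-DOF arm). A sketch, not the paper's model."""

    def __init__(self, n_in, n_hidden, n_out):
        self.W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, n_out))
        self.b2 = np.zeros(n_out)

    def forward(self, x):
        self.h = np.tanh(x @ self.W1 + self.b1)   # hidden activations
        return self.h @ self.W2 + self.b2

    def train_step(self, x, target, lr=1e-3):
        """One squared-error backpropagation step on a single
        (features, displacement) training pair."""
        y = self.forward(x)
        err = y - target
        dW2 = np.outer(self.h, err)
        dh = (err @ self.W2.T) * (1.0 - self.h ** 2)  # through tanh
        dW1 = np.outer(x, dh)
        self.W2 -= lr * dW2; self.b2 -= lr * err
        self.W1 -= lr * dW1; self.b1 -= lr * dh
        return 0.5 * float(err @ err)

# Closed-loop positioning after training. `grab_image`, `robot`, and
# `target_features` are hypothetical placeholders for the sensor and
# manipulator interfaces:
#
#     while not converged:
#         features = moment_feature_vector(grab_image())
#         d_pose = net.forward(features - target_features)
#         robot.move_relative(d_pose)
```

One natural way to collect training pairs, consistent with the abstract’s description, is to perturb the robot by known displacements around the target pose and record the corresponding changes in the moment features.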



Author information

Correspondence to Dhanesh Ramachandram.


Cite this article

Ramachandram, D., Rajeswari, M. Neural network-based robot visual positioning for intelligent assembly. Journal of Intelligent Manufacturing 15, 219–231 (2004). https://doi.org/10.1023/B:JIMS.0000018034.76366.b8
