Co-manipulation with a library of virtual guiding fixtures

Published in: Autonomous Robots

Abstract

Virtual guiding fixtures constrain the movements of a robot to task-relevant trajectories, and have been successfully applied to, for instance, surgical and manufacturing tasks. Whereas previous work has considered guiding fixtures for single tasks, in this paper we propose a library of guiding fixtures for multiple tasks, together with methods for (1) creating and adding guides based on machine learning; (2) selecting guides on-line based on a probabilistic implementation of guiding fixtures; (3) refining existing guides with an incremental learning method. We demonstrate in an industrial task that a library of guiding fixtures provides an intuitive haptic interface for joint human–robot completion of tasks, and improves performance in terms of task execution time, mental workload and errors.
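
As a rough, illustrative sketch of how such a library can be queried on-line, the Python snippet below blends the guiding forces of several guides according to how likely each guide is given the current end-effector state. The per-guide interfaces (a likelihood and a force) and all names are hypothetical assumptions for illustration, not the implementation described in the paper; the actual code is referenced in the notes below (https://github.com/graiola/virtual-fixtures).

```python
import numpy as np

class GuideLibrary:
    """Minimal sketch of a library of virtual guides (illustrative only).

    Each guide is assumed to expose:
      likelihood(x): how plausible the current end-effector position x is
                     under this guide's trajectory model
      force(x, dx):  the assistance force this guide would apply
    """

    def __init__(self, guides):
        self.guides = guides

    def guiding_force(self, x, dx):
        # Responsibility of each guide: its normalized likelihood at x.
        lik = np.array([g.likelihood(x) for g in self.guides])
        if lik.sum() < 1e-12:           # far from every guide: no assistance
            return np.zeros_like(x)
        resp = lik / lik.sum()
        # Blend the individual guiding forces by their responsibilities.
        return sum(r * g.force(x, dx) for r, g in zip(resp, self.guides))
```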

Notes

  1. Our previous work (Raiola et al. 2015a, b) focussed on the theoretical framework underlying multiple virtual guides, as well as an analysis of their stability. This paper focusses instead on the pragmatic implementation of a library of such guides, including a full user study.

  2. Equation (6) contains the inverse of the matrix \((\mathbf{J}_{\text{vm}}^\intercal B \mathbf{J}_{\text{vm}})\), which may become singular in some configurations. This problem and possible solutions are presented in Section 2.3 of Raiola (2017); a generic regularization is sketched after these notes.

  3. Note that M is not strictly necessary to describe the model, but it is useful for the incremental training (see the sketch after these notes).

  4. The covariance matrix \(\varvec{\Sigma}_{e,S}\) is actually a scalar, because the phase is always one-dimensional. For consistency, we nevertheless use the bold symbol \(\varvec{\Sigma}\) rather than \(\sigma^2\).

  5. The stability of such probabilistically weighted virtual guides is analyzed in Raiola et al. (2015b).

  6. For this reason, we call the resulting guides “Hard Guides”.

  7. We call the resulting guides “Soft Guides”.

  8. In practice, the initial estimate is frequently obtained with the K-means clustering algorithm, which provides the initial values for the priors, means and covariance matrices (see the sketch after these notes).

  9. The threshold \(\mathscr{C} = 0.01\) is used in our case.

  10. https://www.isybot.com.

  11. The code used to generate and interact with the library of virtual guides is available at https://github.com/graiola/virtual-fixtures.
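
Note 2 points out that the matrix \((\mathbf{J}_{\text{vm}}^\intercal B \mathbf{J}_{\text{vm}})\) to be inverted may become singular. One generic remedy, shown below as a minimal Python sketch, is a damped (Tikhonov-regularized) inverse; this is a standard workaround, not necessarily the solution presented in Raiola (2017).

```python
import numpy as np

def damped_inverse(J_vm, B, damping=1e-6):
    """Damped (Tikhonov-regularized) inverse of J_vm^T B J_vm.

    Adding a small multiple of the identity keeps the matrix invertible
    near singular configurations, at the cost of a slight bias.
    """
    A = J_vm.T @ B @ J_vm
    return np.linalg.inv(A + damping * np.eye(A.shape[0]))
```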
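
Notes 3 and 8 mention keeping the per-component point counts M and initializing the Gaussian mixture model with K-means. The sketch below shows one common way to do both with scikit-learn; the hyper-parameters, the means-only incremental update, and the helper names are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

def init_gmm(data, n_components):
    """Batch initialization: K-means provides the starting priors, means and
    covariances, then EM refines them (hyper-parameters are assumptions)."""
    km = KMeans(n_clusters=n_components, n_init=10).fit(data)
    gmm = GaussianMixture(n_components=n_components, covariance_type='full',
                          means_init=km.cluster_centers_).fit(data)
    # Keep the per-component point counts M: they allow new demonstrations
    # to be folded in later without revisiting the old data.
    M = np.round(gmm.weights_ * len(data)).astype(int)
    return gmm, M

def incremental_mean_update(gmm, M, new_data):
    """Update each component mean as a count-weighted average of the old mean
    and the new points assigned to it (means only, for brevity)."""
    resp = gmm.predict_proba(new_data)   # responsibilities of the new points
    for k in range(gmm.n_components):
        n_k = resp[:, k].sum()
        if n_k < 1e-9:
            continue
        new_mean = resp[:, k] @ new_data / n_k
        gmm.means_[k] = (M[k] * gmm.means_[k] + n_k * new_mean) / (M[k] + n_k)
        M[k] += int(round(n_k))
    return gmm, M
```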

References

  • Aarno, D., Ekvall, S., & Kragic, D. (2005). Adaptive virtual fixtures for machine-assisted teleoperation tasks. In ICRA (pp. 897–903).

  • Abbott, J. J. (2005). Virtual fixtures for bilateral telemanipulation. Ph.D. thesis, Johns Hopkins University.

  • Abdi, H., & Williams, L. J. (2010). Tukey's honestly significant difference (HSD) test. In N. J. Salkind (Ed.), Encyclopedia of research design (pp. 1–5). Thousand Oaks, CA: Sage.

  • Amor, H. B., Neumann, G., Kamthe, S., Kroemer, O., & Peters, J. (2014). Interaction primitives for human–robot cooperation tasks. In 2014 IEEE international conference on robotics and automation (ICRA) (pp. 2831–2837). IEEE.

  • Becker, B. C., Maclachlan, R. A., Lobes, L. A., Hager, G. D., & Riviere, C. N. (2013). Vision-based control of a handheld surgical micromanipulator with virtual fixtures. IEEE Transactions on Robotics, 29(3), 674–683.

  • Bettini, A., Marayong, P., Member, S., Lang, S., Okamura, A. M., & Hager, G. D. (2004). Vision assisted control for manipulation using virtual fixtures. In International conference on intelligent robots and systems (IROS) (pp. 1171–1176).

  • Bowyer, S. A., & Rodriguez y Baena, F. (2013). Dynamic frictional constraints for robot assisted surgery. In World haptics conference (WHC), 2013 (pp. 319–324). https://doi.org/10.1109/WHC.2013.6548428.

  • Bowyer, S. A., Davies, B. L., & Rodriguez y Baena, F. (2014). Active constraints/virtual fixtures: A survey. IEEE Transactions on Robotics, 30(1), 138–157. https://doi.org/10.1109/TRO.2013.2283410.

  • Boy, E. S., Burdet, E., Teo, C. L., & Colgate, J. E. (2007). Investigation of motion guidance with scooter cobot and collaborative learning. IEEE Transactions on Robotics, 23(2), 245–255.

  • Calinon, S. (2007). Incremental learning of gestures by imitation in a humanoid robot. In Proceedings of the 2007 ACM/IEEE international conference on human–robot interaction (pp. 255–262).

  • Calinon, S., Guenter, F., & Billard, A. (2007). On learning, representing and generalizing a task in a humanoid robot. IEEE Transactions on Systems, Man and Cybernetics, 37(2), 286–298. (Special issue on robot learning by observation, demonstration and imitation).

  • Calinon, S., Bruno, D., & Caldwell, D. G. (2014). A task-parameterized probabilistic model with minimal intervention control. In Proceedings of IEEE international conference on robotics and automation (ICRA), Hong Kong, China (pp. 3339–3344).

  • Colgate, J. E., Peshkin, M. A., & Klostermeyer, S. H. (2003). Intelligent assist devices in industrial applications: A review. In IROS (pp. 2516–2521).

  • David, O., Russotto, F. X., Simoes, M. D. S., & Measson, Y. (2014). Collision avoidance, virtual guides and advanced supervisory control teleoperation techniques for high-tech construction: Framework design. Automation in Construction, 44, 63–72.

  • Davies, B., Jakopec, M., Harris, S. J., Baena, F. R. Y., Barrett, A., Evangelidis, A., et al. (2006). Active-constraint robotics for surgery. Proceedings of the IEEE, 94(9), 1696–1704. https://doi.org/10.1109/JPROC.2006.880680.

  • Dumora, J. (2014). Contribution à l'interaction physique homme-robot : application à la comanipulation d'objets de grandes dimensions. Ph.D. thesis, Montpellier 2.

  • Ewerton, M., Maeda, G., Kollegger, G., Wiemeyer, J., & Peters, J. (2016). Incremental imitation learning of context-dependent motor skills. In 2016 IEEE-RAS 16th international conference on humanoid robots (Humanoids) (pp. 351–358). IEEE.

  • Girden, E. R. (1992). ANOVA: Repeated measures (Vol. 84). London: Sage.

  • Held, L., & Sabanés Bové, D. (2013). Applied statistical inference: Likelihood and Bayes. New York: Springer.

  • Hermann, M., Pentek, T., & Otto, B. (2016). Design principles for Industrie 4.0 scenarios. In 2016 49th Hawaii international conference on system sciences (HICSS) (pp. 3928–3937). IEEE.

  • Ho, S. C., Hibberd, R. D., & Davies, B. L. (1995). Robot assisted knee surgery. IEEE Engineering in Medicine and Biology Magazine, 14(3), 292–300. https://doi.org/10.1109/51.391774.

  • Joly, L., & Andriot, C. (1995). Imposing motion constraints to a force reflecting telerobot through real-time simulation of a virtual mechanism. In 1995 IEEE international conference on robotics and automation, 1995. Proceedings (Vol. 1, pp. 357–362). https://doi.org/10.1109/ROBOT.1995.525310.

  • Kuang, A., Payandeh, S., Zheng, B., Henigman, F., & MacKenzie, C. (2004). Assembling virtual fixtures for guidance in training environments. In 12th International symposium on haptic interfaces for virtual environment and teleoperator systems, 2004. HAPTICS ’04. Proceedings (pp. 367–374). https://doi.org/10.1109/HAPTIC.2004.1287223.

  • Lee, D., & Ott, C. (2011). Incremental kinesthetic teaching of motion primitives using the motion refinement tube. Autonomous Robots, 31(2–3), 115–131.

  • Li, M., & Okamura, A. M. (2003). Recognition of operator motions for real-time assistance using virtual fixtures. In Proceedings of 11th symposium on haptic interfaces for virtual environments and teleoperator systems (pp. 125–131).

  • Lin, H. C., Marayong, P., Mills, K., Karam, R., Kazanzides, P., Okamura, A. M., & Hager, G. D. (2006). Portability and applicability of virtual fixtures across medical and manufacturing tasks. In IEEE international conference on robotics and automation (pp. 225–340).

  • Marayong, P., Li, M., Okamura, A. M., & Hager, G. D. (2003). Spatial motion constraints: Theory and demonstrations for robot guidance using virtual fixtures. In ICRA (pp. 1954–1959). IEEE.

  • Medina, J. R., Lee, D., & Hirche, S. (2012). Risk-sensitive optimal feedback control for haptic assistance. In 2012 IEEE international conference on robotics and automation (ICRA) (pp. 1025–1031). IEEE.

  • Mollard, Y., Munzer, T., Baisero, A., Toussaint, M., & Lopes, M. (2015). Robot programming from demonstration, feedback and transfer. In 2015 IEEE/RSJ international conference on intelligent robots and systems (IROS) (pp. 1825–1831). https://doi.org/10.1109/IROS.2015.7353615.

  • Nolin, J. T., Stemniski, P. M., & Okamura, A. M. (2003). Activation cues and force scaling methods for virtual fixtures. In Proceedings of 11th international symposium on haptic interfaces for virtual environment and teleoperator systems (pp. 404–409).

  • Pezzementi, Z., Hager, G. D., & Okamura, A. M. (2007). Dynamic guidance with pseudoadmittance virtual fixtures. In IEEE international conference on robotics and automation (pp. 1761–1767).

  • Raiola, G. (2017). Co-manipulation with a library of virtual guides. Ph.D. thesis, Université Paris-Saclay.

  • Raiola, G., Lamy, X., & Stulp, F. (2015a). Co-manipulation with multiple probabilistic virtual guides. In 2015 IEEE/RSJ international conference on intelligent robots and systems (IROS) (pp. 7–13). IEEE.

  • Raiola, G., Rodriguez-Ayerbe, P., Lamy, X., Tliba, S., & Stulp, F. (2015b). Parallel guiding virtual fixtures: Control and stability. In 2015 IEEE international symposium on intelligent control (ISIC) (pp. 53–58). IEEE.

  • Rosenberg, L. (1993). Virtual fixtures: Perceptual tools for telerobotic manipulation. In Proceedings of IEEE virtual reality international symposium.

  • Rozo, L., Calinon, S., Caldwell, D. G., Jiménez, P., & Torras, C. (2016). Learning physical collaborative robot behaviors from human demonstrations. IEEE Transactions on Robotics, 32(3), 513–527. https://doi.org/10.1109/TRO.2016.2540623.

  • Ryden, F., Stewart, A., & Chizeck, H. (2013). Advanced telerobotic underwater manipulation using virtual fixtures and haptic rendering. Oceans-San Diego, 2013, 1–8.

  • Sanchez Restrepo, S., Raiola, G., Chevalier, P., Lamy, X., & Sidobre, D. (2017). Iterative virtual guides programming for human–robot comanipulation. In IEEE international conference on advanced intelligent mechatronics (AIM).

  • Vakanski, A., Mantegh, I., Irish, A., & Janabi-Sharifi, F. (2012). Trajectory learning for robot programming by demonstration using hidden Markov model and dynamic time warping. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 42(4), 1039–1052. https://doi.org/10.1109/TSMCB.2012.2185694.

  • Wrede, S., Emmerich, C., Grünberg, R., Nordmann, A., Swadzba, A., & Steil, J. (2013). A user study on kinesthetic teaching of redundant robots in task and configuration space. Journal of Human–Robot Interaction, 2(1), 56–81.

  • Yoon, H., Wang, R., & Hutchinson, S. (2014). Modeling user’s driving-characteristics in a steering task to customize a virtual fixture based on task-performance. In 2014 IEEE international conference on robotics and automation (ICRA) (pp. 625–630). https://doi.org/10.1109/ICRA.2014.6906920.

  • Yu, W., Alqasemi, R., Dubey, R., & Pernalete, N. (2005). Telemanipulation assistance based on motion intention recognition. In Proceedings of the 2005 IEEE international conference on robotics and automation, 2005. ICRA 2005 (pp. 1121–1126). https://doi.org/10.1109/ROBOT.2005.1570266.

Acknowledgements

This project has received funding from DIGITEO (www.digiteo.fr).

Author information

Corresponding author

Correspondence to Gennaro Raiola.

Additional information

This is one of several papers published in Autonomous Robots as part of the Special Issue on Learning for Human–Robot Collaboration.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (mp4 32045 KB)

About this article

Cite this article

Raiola, G., Restrepo, S.S., Chevalier, P. et al. Co-manipulation with a library of virtual guiding fixtures. Auton Robot 42, 1037–1051 (2018). https://doi.org/10.1007/s10514-017-9680-7
