A Robotic Recording and Playback Platform for Training Surgeons and Learning Autonomous Behaviors Using the da Vinci Surgical System
Figure 1. An envisioned immersive teaching environment using real surgical data. The trainee would be immersed in video, audio, and haptics to synchronously see, hear, and feel the pre-recorded surgery. It would also enable searching the recordings and adjusting the rate of playback.

Figure 2. An overview of the recording and playback system enabled by the da Vinci Research Kit (DVRK), which allows data capture and playback for the envisioned machine learning and training applications.

Figure 3. Applications enabled by a recording and playback system are shown on the right. They include automation of tasks, immersive training, and procedure planning.

Figure 4. (Left) Our da Vinci Surgical System, used as a test platform for algorithm implementation and subject testing. (Left, inset) The modified peg transfer task. (Right) Our software simulation of the da Vinci test platform, used for algorithm prototyping and data playback/visualization. The simulated robot closely matches the real one, allowing rapid development and testing to be done first in simulation.

Figure 5. The recorded data include stereo videos, kinematic data from both surgeon controllers, and kinematic data from all instrument arms, among other data. Everything is synchronized with timestamps. This example is from recording and playback on a da Vinci Standard Surgical System.

Figure 6. The relevant joints of the right hand controller. The left hand controller is symmetric to this one.

Figure 7. A diagram of the 10 recordings made to evaluate the system. A human operator moved the hand controllers for the top two source boxes, whereas the proportional–integral–derivative (PID) control system played back the corresponding source recordings for the bottom boxes.

Figure 8. Experimental setup for evaluating system accuracy. Camera-based tags on the hand controllers capture their movement during recording and playback sessions.

Figure 9. Comparison between the source data (during recording) and the replayed data (both without and with hands in the hand controllers) in terms of the x-, y-, and z-axis positions of the endpoint of the left hand controller. This position data was computed using joint feedback and a kinematic model of the hand controller.

Figure 10. Distances between the source data (during recording) and the replayed data (both without and with hands in the hand controllers) in terms of the end position of the left hand controller during slow movements. The top graph shows playback using the initial PID parameters (without our tuning); the bottom graph shows playback using tuned PID parameters.

Figure 11. Distances between the source data (during recording) and the replayed data (both without and with hands in the hand controllers) in terms of the end position of the left hand controller during faster movements. The top graph shows playback using the initial PID parameters (without our tuning); the bottom graph shows playback using tuned PID parameters.

Figure 12. Distance between left hand controller positions measured using (1) a camera and an optically tracked marker (an AprilTag) and (2) joint feedback and a kinematic model of the hand controller.
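The endpoint positions in the figures above are computed from joint feedback via a kinematic model of the hand controller (cf. the Denavit kinematic notation cited in the references). As an illustrative sketch only, the snippet below chains one Denavit–Hartenberg homogeneous transform per link; the parameters describe a hypothetical three-link planar arm, not the actual da Vinci hand controller.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform for one Denavit-Hartenberg link."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def endpoint_position(joint_angles, dh_params):
    """Chain the per-link transforms and return the endpoint (x, y, z)."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_params):
        T = T @ dh_transform(theta, d, a, alpha)
    return T[:3, 3]

# Hypothetical 3-link planar arm: (d, a, alpha) per link, lengths in meters
params = [(0.0, 0.3, 0.0), (0.0, 0.25, 0.0), (0.0, 0.1, 0.0)]
pos = endpoint_position([0.0, 0.0, 0.0], params)  # fully extended along x
```

With all joint angles at zero, the transforms reduce to translations along x, so the endpoint lands at the summed link lengths.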
Abstract
1. Introduction
2. Literature Survey
2.1. Current Methods of Surgical Training and Evaluation
2.2. Data for Automation of Surgical Procedures Using Machine Learning
3. Materials and Methods
3.1. da Vinci Surgical System and da Vinci Research Kit (DVRK)
3.2. Robot Operating System Recording Software
3.3. Optimization of Playback Accuracy
3.3.1. System of Coupled Joints
3.3.2. Tuning of the PID Control System
3.4. Evaluation of System Accuracy
3.4.1. Collection of Test Data
3.4.2. Processing of Test Data
3.4.3. Analysis of Test Data
- We computed the differences between a source recording and recordings of the system playing back the movements of that source. This included playback without a user touching the hand controllers and playback while a user gently rested a hand in the hand controller, letting the system guide it.
- We compared the performance of the playback system before and after tuning of the system’s PID parameters.
- We evaluated how well the system handled slow movements as opposed to faster movements of the hand controller.
- We compared the data provided by the system’s internal kinematic feedback to the data of an external optical tracking system.
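The first of the analysis steps above, per-sample distances between a source trajectory and its replayed counterpart, summarized by minimum, maximum, mean, and standard deviation as in the results tables, can be sketched as follows. This is a minimal sketch assuming the two trajectories have already been aligned onto common timestamps; the function name is illustrative, not from the authors' software.

```python
import numpy as np

def trajectory_errors(source, replayed):
    """Euclidean distance between corresponding endpoint samples.

    Both inputs are N x 3 arrays of (x, y, z) positions sampled at the
    same timestamps; returns summary statistics of the distances.
    """
    source = np.asarray(source, dtype=float)
    replayed = np.asarray(replayed, dtype=float)
    dists = np.linalg.norm(source - replayed, axis=1)
    return {
        "min": dists.min(),
        "max": dists.max(),
        "mean": dists.mean(),
        "std": dists.std(),
    }

# Toy example (positions in mm): replayed path offset by 1 mm in x
src = np.zeros((100, 3))
rep = src + np.array([1.0, 0.0, 0.0])
stats = trajectory_errors(src, rep)  # every sample is exactly 1 mm away
```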
4. Results
4.1. Overall Assessment of Playback Accuracy
4.2. Analysis of Error in Playback
4.2.1. Results of PID Tuning
4.2.2. Comparison of Different Speeds of Hand Controller Motions
4.2.3. External Verification of Tracking
5. Discussion
6. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
- Sridhar, A.N.; Briggs, T.P.; Kelly, J.D.; Nathan, S. Training in robotic surgery—An overview. Curr. Urol. Rep. 2017, 18, 58. [Google Scholar] [CrossRef] [PubMed]
- DiMaio, S.; Hasser, C. The da Vinci research interface. In Proceedings of the MICCAI Workshop on Systems and Arch. for Computer Assisted Interventions, New York, NY, USA, 6–10 September 2008. [Google Scholar]
- Kumar, R.; Jog, A.; Vagvolgyi, B.; Nguyen, H.; Hager, G.; Chen, C.C.; Yuh, D. Objective measures for longitudinal assessment of robotic surgery training. J. Thorac. Cardiovasc. Surg. 2012, 143, 528–534. [Google Scholar] [CrossRef] [PubMed]
- Jain, K.; Weinstein, G.S.; O’Malley, B.W.; Newman, J.G. Robotic Surgery Training. In Atlas of Head and Neck Robotic Surgery; Springer: Cham, Switzerland, 2017; pp. 27–31. [Google Scholar]
- Yang, K.; Perez, M.; Hubert, N.; Hossu, G.; Perrenot, C.; Hubert, J. Effectiveness of an integrated video recording and replaying system in robotic surgical training. Ann. Surg. 2017, 265, 521–526. [Google Scholar] [CrossRef] [PubMed]
- Moles, J.J.; Connelly, P.E.; Sarti, E.E.; Baredes, S. Establishing a training program for residents in robotic surgery. Laryngoscope 2009, 119, 1927–1931. [Google Scholar] [CrossRef] [PubMed]
- Curry, M.; Malpani, A.; Li, R.; Tantillo, T.; Jog, A.; Blanco, R.; Ha, P.K.; Califano, J.; Kumar, R.; Richmon, J. Objective assessment in residency-based training for transoral robotic surgery. Laryngoscope 2012, 122, 2184–2192. [Google Scholar] [CrossRef] [PubMed]
- Sperry, S.M.; Weinstein, G.S. The University of Pennsylvania curriculum for training otorhinolaryngology residents in transoral robotic surgery. ORL 2014, 76, 342–352. [Google Scholar] [CrossRef] [PubMed]
- Hong, M.; Rozenblit, J.W. A haptic guidance system for Computer-Assisted Surgical Training using virtual fixtures. In Proceedings of the 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Budapest, Hungary, 9–12 October 2016; pp. 002230–002235. [Google Scholar]
- van der Meijden, O.A.; Schijven, M.P. The value of haptic feedback in conventional and robot-assisted minimal invasive surgery and virtual reality training: A current review. Surg. Endosc. 2009, 23, 1180–1190. [Google Scholar] [CrossRef] [PubMed]
- Teo, C.L.; Burdet, E.; Lim, H. A robotic teacher of Chinese handwriting. In Proceedings of the 10th Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems (HAPTICS 2002), Orlando, FL, USA, 24–25 March 2002; pp. 335–341. [Google Scholar]
- Daluja, S.; Golenberg, L.; Cao, A.; Pandya, A.K.; Auner, G.W.; Klein, M.D. An Integrated Movement Capture and Control Platform Applied Towards Autonomous Movements of Surgical Robots. In Studies in Health Technology and Informatics – Volume 142: Medicine Meets Virtual Reality 17; IOS Press Ebooks: Amsterdam, The Netherlands, 2009; pp. 62–67. [Google Scholar]
- Garudeswaran, S.; Cho, S.; Ohu, I.; Panahi, A.K. Teach and Playback Training Device for Minimally Invasive Surgery. Minim. Invasive Surg. 2018, 2018, 4815761. [Google Scholar] [CrossRef] [PubMed]
- Ho, J.; Ermon, S. Generative adversarial imitation learning. In Proceedings of the 30th International Conference on Neural Information Processing Systems, Barcelona, Spain, 5–10 December 2016; pp. 4565–4573. [Google Scholar]
- Duan, Y.; Andrychowicz, M.; Stadie, B.; Ho, O.J.; Schneider, J.; Sutskever, I.; Abbeel, P.; Zaremba, W. One-shot imitation learning. In Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 1087–1098. [Google Scholar]
- Tremblay, J.; To, T.; Molchanov, A.; Tyree, S.; Kautz, J.; Birchfield, S. Synthetically Trained Neural Networks for Learning Human-Readable Plans from Real-World Demonstrations. arXiv, 2018; arXiv:1805.07054. [Google Scholar]
- Gao, Y.; Vedula, S.S.; Reiley, C.E.; Ahmidi, N.; Varadarajan, B.; Lin, H.C.; Tao, L.; Zappella, L.; Béjar, B.; Yuh, D.D.; et al. JHU-ISI gesture and skill assessment working set (JIGSAWS): A surgical activity dataset for human motion modeling. In Proceedings of the MICCAI Workshop: M2CAI, Boston, MA, USA, 14–18 September 2014; p. 3. [Google Scholar]
- Ellis, R.D.; Munaco, A.J.; Reisner, L.A.; Klein, M.D.; Composto, A.M.; Pandya, A.K.; King, B.W. Task analysis of laparoscopic camera control schemes. Int. J. Med. Robot. Comput. Assist. Surg. 2016, 12, 576–584. [Google Scholar] [CrossRef] [PubMed]
- Satava, R.M. How the future of surgery is changing: Robotics, telesurgery, surgical simulators and other advanced technologies. J. Chir. 2009, 5, 311–325. [Google Scholar]
- Fard, M.J.; Pandya, A.K.; Chinnam, R.B.; Klein, M.D.; Ellis, R.D. Distance-based time series classification approach for task recognition with application in surgical robot autonomy. Int. J. Med. Robot. Comput. Assist. Surg. 2016, 13, e1766. [Google Scholar] [CrossRef] [PubMed]
- Yip, M.; Das, N. Robot autonomy for surgery. arXiv, 2017; arXiv:1707.03080. [Google Scholar]
- Chen, Z.; Deguet, A.; Taylor, R.; DiMaio, S.; Fischer, G.; Kazanzides, P. An Open-Source Hardware and Software Platform for Telesurgical Robotics Research. In Proceedings of the MICCAI Workshop on Systems and Architecture for Computer Assisted Interventions, Nagoya, Japan, 22–26 September 2013. [Google Scholar]
- Open Source Robotics Foundation. RViz. 16 May 2018. Available online: http://wiki.ros.org/rviz (accessed on 3 December 2018).
- Quigley, M.; Conley, K.; Gerkey, B.; Faust, J.; Foote, T.; Leibs, J.; Wheeler, R.; Ng, A.Y. ROS: An open-source Robot Operating System. In Proceedings of the ICRA Workshop on Open Source Software, Kobe, Japan, 12–17 May 2009. [Google Scholar]
- Eslamian, S.; Reisner, L.A.; King, B.W.; Pandya, A.K. Towards the Implementation of an Autonomous Camera Algorithm on the da Vinci Platform. In Proceedings of the Medicine Meets Virtual Reality 22—NextMed, MMVR 2016, Los Angeles, CA, USA, 7–9 April 2016. [Google Scholar]
- Chen, Z.; Deguet, A.; Kazanzides, P. cisst/SAW stack for the da Vinci Research Kit. 22 October 2018. Available online: https://github.com/jhu-dvrk/sawIntuitiveResearchKit/ (accessed on 3 December 2018).
- Denavit, J.; Hartenberg, R.S. A kinematic notation for lower-pair mechanisms based on matrices. ASME J. Appl. Mech. 1955, 22, 215–221. [Google Scholar]
- Olson, E. AprilTag: A robust and flexible visual fiducial system. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, 9–13 May 2011; pp. 3400–3407. [Google Scholar]
- Donias, H.W.; Karamanoukian, R.L.; Glick, P.L.; Bergsland, J.; Karamanoukian, H.L. Survey of resident training in robotic surgery. Am. Surg. 2002, 68, 177. [Google Scholar] [PubMed]
- Pandya, A.; Reisner, L.; King, B.; Lucas, N.; Composto, A.; Klein, M.; Ellis, R. A Review of Camera Viewpoint Automation in Robotic and Laparoscopic Surgery. Robotics 2014, 3, 310–329. [Google Scholar] [CrossRef]
- Yavuzer, G.; Selles, R.; Sezer, N.; Sütbeyaz, S.; Bussmann, J.B.; Köseoğlu, F.; Atay, M.B.; Stam, H.J. Mirror therapy improves hand function in subacute stroke: A randomized controlled trial. Arch. Phys. Med. Rehabil. 2008, 89, 393–398. [Google Scholar] [CrossRef] [PubMed]
Initial PID parameters of the hand controller joints:

Joint | Proportional Gain Kp | Integral Gain Ki | Derivative Gain Kd | Nonlinear Coeff. Kn
---|---|---|---|---
Outer yaw | 30 | 1 | 1.5 | 0
Shoulder pitch | 30 | 1 | 1.5 | 0
Elbow pitch | 30 | 1 | 1.5 | 0
Wrist pitch | 20 | 0 | 0.4 | 0
Wrist yaw | 10 | 0 | 0.3 | 0.35
Wrist roll | 1.2 | 0 | 0.04 | 0.35
Wrist platform | 2 | 0.5 | 0.15 | 1
Tuned PID parameters of the hand controller joints:

Joint | Proportional Gain Kp | Integral Gain Ki | Derivative Gain Kd | Nonlinear Coeff. Kn
---|---|---|---|---
Outer yaw | 39 | 1 | 5 | 0
Shoulder pitch | 1 | 6 | 5.8 | 0
Elbow pitch | 5 (3) | 4.6 (3.6) | 4 | 0
Wrist pitch | 10 | 0.06 | 0.7 | 0
Wrist yaw | 10 | 0 | 0.3 | 0.35
Wrist roll | 1.2 | 0.016 | 0.04 | 0.35
Wrist platform | 2 | 0.5 | 0.15 | 1
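For context, a minimal discrete PID update using gains like those tabulated above might look as follows. This is a generic sketch, not the DVRK's actual controller: the tables also list a nonlinear coefficient Kn, whose exact role in the control law is not specified here, so only the standard P, I, and D terms are implemented.

```python
class PID:
    """Discrete single-joint PID controller (generic sketch)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0   # accumulated error for the I term
        self.prev_error = 0.0  # previous error for the D term

    def update(self, setpoint, measured):
        """One control step: returns the commanded joint effort."""
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# Example: outer-yaw joint with the tuned gains, at an assumed 1 kHz loop rate
ctrl = PID(kp=39.0, ki=1.0, kd=5.0, dt=0.001)
effort = ctrl.update(setpoint=0.1, measured=0.0)
```

During playback, `setpoint` would be the recorded joint position at the current timestamp and `measured` the joint's encoder feedback.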
Comparison | Initial PID: Min. (mm) | Initial PID: Max. (mm) | Initial PID: Mean (mm) | Initial PID: Std. Dev. (mm) | Tuned PID: Min. (mm) | Tuned PID: Max. (mm) | Tuned PID: Mean (mm) | Tuned PID: Std. Dev. (mm)
---|---|---|---|---|---|---|---|---
Source to No Hands | 0.69 | 8.14 | 5.07 | 1.28 | 0.49 | 5.62 | 3.59 | 0.88
Source to With Hands | 0.54 | 8.81 | 4.93 | 1.56 | 0.46 | 6.37 | 3.85 | 1.06
Comparison | Initial PID: Min. (mm) | Initial PID: Max. (mm) | Initial PID: Mean (mm) | Initial PID: Std. Dev. (mm) | Tuned PID: Min. (mm) | Tuned PID: Max. (mm) | Tuned PID: Mean (mm) | Tuned PID: Std. Dev. (mm)
---|---|---|---|---|---|---|---|---
Source to No Hands | 0.41 | 10.57 | 5.51 | 1.92 | 0.93 | 7.22 | 3.87 | 1.35
Source to With Hands | 0.38 | 11.94 | 5.60 | 2.69 | 0.43 | 8.01 | 4.29 | 1.69
Min. (mm) | Max. (mm) | Mean (mm) | Std. Dev. (mm)
---|---|---|---
0.29 | 16.11 | 4.88 | 3.11
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Pandya, A.; Eslamian, S.; Ying, H.; Nokleby, M.; Reisner, L.A. A Robotic Recording and Playback Platform for Training Surgeons and Learning Autonomous Behaviors Using the da Vinci Surgical System. Robotics 2019, 8, 9. https://doi.org/10.3390/robotics8010009