Author manuscript; available in PMC 2021 Feb 18.
Published in final edited form as: IEEE Int Conf Robot Autom. 2016 Jun 9;2016:708–714. doi: 10.1109/icra.2016.7487197

Plenoptic Cameras in Surgical Robotics: Calibration, Registration, and Evaluation

Azad Shademan, Ryan S. Decker, Justin Opfermann, Simon Leonard, Peter C. W. Kim, Axel Krieger
PMCID: PMC7891458  NIHMSID: NIHMS1663308  PMID: 33614192

Abstract

Three-dimensional sensing of changing surgical scenes would improve the function of surgical robots. This paper explores the requirements and utility of a new type of depth sensor, the plenoptic camera, for surgical robots. We present a metric calibration procedure for the plenoptic camera and the registration of its coordinate frame to the robot (hand-eye calibration). We also demonstrate its utility in robotic needle insertion and the application of sutures in phantoms. The metric calibration accuracy is 1.14 ± 0.80 mm for the plenoptic camera and 1.57 ± 0.90 mm for the hand-eye calibration. The accuracy of the needle insertion task is 1.79 ± 0.35 mm for the entire robotic system. Additionally, the accuracy of suture placement with the presented system is 1.80 ± 0.43 mm. Finally, we report consistent suture spacing, with a standard deviation of only 0.11 mm in inter-suture distances. The measured accuracy of less than 2 mm with consistent suture spacing is a promising result toward repeatable, leak-free suturing with a robotic tool and a plenoptic depth imager.

I. INTRODUCTION

In robotic surgery, visualization of the field is critical for both the surgeon and the robot. Quantitative imaging has the potential to transform the way we perform surgery by measuring the scene accurately and providing additional information to the surgeon. One of the most important challenges is to acquire a reliable, real-time 3D model of the surgical scene during the operation.

Several technologies are currently available for quantitative imaging in the operating room, each with unique advantages and drawbacks. Magnetic resonance imaging (MRI) [1], computed tomography (CT) [2], and 3D ultrasound [3] are readily available 3D medical imaging technologies, but they are difficult to integrate into real-time surgery. Stereo endoscopic images have been used to generate 3D models [4] and to reconstruct and track surfaces of soft tissue [5]. Since surgeons use stereoscopic endoscopes for depth perception during minimally invasive surgery (MIS), the hardware is already available. However, quantitative imaging relies on accurate calibration of the endoscopes and on stereo matching, which is practically challenging because of scene deformations [6]. Structured-light 3D endoscopes with sub-millimeter accuracy have recently been developed, but remain in the research stage [7].

The demand for improved clinical outcomes [8] reflects a trend toward more precise and accurate application of technology and quantified functional outcomes in surgery. 3D quantification is already used in dermatology diagnosis [9], wound [10] and burn evaluation [11], polyp detection [12], and differentiation of healthy and cancerous tissues [13], all applications where tissue morphology is important. For example, the precision [14] and accuracy of suture placement [15] could be informed by a 3D camera.

Newly available plenoptic cameras have not been widely used in surgical settings to date. The integration of a new technology into a robotic workflow typically requires the following steps: calibrating the camera, verifying that the reported measurements are accurate, registering the camera to the robot, and evaluating performance in a relevant setting. This paper addresses these requirements for the potential application of plenoptic cameras to robotic surgery. It details the procedures and practical obstacles facing metric calibration and registration, and demonstrates performance in a surgical task requiring millimetric accuracy.

For robotic interventions, quantitative measurement of the surgical field is necessary, and the performance of such quantitative imagers contributes to the overall capability of robotic systems. Clinical use of real-time quantitative 3D imagers will drive the adoption of numerically informed procedures. Plenoptic cameras have been discussed in previous studies [16]–[18], focusing mainly on theory of operation and technological improvements. There has been recent progress on the calibration of unfocused lenslet-based plenoptic cameras [19], such as the commercially available Lytro cameras, and of focused lenslet-based plenoptic cameras, such as the commercially available Raytrix cameras [20]. Plenoptic calibration is complicated because a single scene point is projected onto multiple image points. Evaluation of metric plenoptic calibration in realistic machine vision applications has not been performed to date.

Our previous work on vision-based guidance of surgical robots used a 2D camera for a planar suturing task [21] with an accuracy of 0.5 mm [22]. The integration of a plenoptic camera as a depth sensor transitions this system into the third dimension. We previously reported good measurement performance while observing static scenes with the plenoptic camera [23], independent of a robotic system. Here, we contribute by detailing the calibration and registration methods and testing their utility in a relevant, dynamic surgical task. We investigate the integration and performance of a plenoptic camera to guide the motions of a surgical robot arm. The metric calibration of the plenoptic camera provides a metric point cloud, which can be registered to the end-effector of the manipulator using standard hand-eye calibration methods if the robot is in the camera's field of view (FOV). While hand-eye calibration is a well-discussed topic in the literature when the camera is rigidly attached to the robot [24], there are unique workspace and FOV requirements when calibrating a long surgical tool attached to a manipulator. For completeness, we present and evaluate a simple hand-eye calibration in which the plenoptic camera only observes a partial view of the surgical tool within a small FOV.

II. Materials and Methods

A. System Description

The core components of the system are a light-weight 7-DOF robot arm (LWR 4+, KUKA Robotics Corp., Augsburg, Germany), a surgical tool (Endo360, EndoEvolution, Raynham, MA), a plenoptic camera with software for quantitative 3D measurement (R12, Raytrix GmbH, Kiel, Germany) and a custom FOV of 70 × 65 × 30 mm [25], and custom software developed in Open RObot COntrol Software (OROCOS) [26] and the Robot Operating System (ROS) [27], as shown in Fig. 1. The KUKA Robot Controller (KRC) computer handles the kinematics, dynamics, control, and generation of motion trajectories. A simple program written in the KUKA Robot Language (KRL) enables communication between the KRC and the OROCOS components and ROS packages over Ethernet using the Fast Research Interface (FRI) [28]. The Raytrix camera has an Application Programming Interface (API) with proprietary binary libraries and some read-only parameters. The API provides a virtual depth image and a focused image, which are captured on a dedicated Windows machine with an NVidia GeForce GTX Titan graphics card. Virtual 3D depth is processed at 10 frames per second (fps) at a pixel resolution of 2008 × 1508.

Figure 1.

Robotic surgical system showing the robot arm, plenoptic (3D) camera, surgical tool, the Graphical Surgeon Interface and the interconnectivity of software modules.

B. Plenoptic Camera Model

Plenoptic cameras have a main lens and an additional micro lens array (MLA). Image features are matched between nearby microlens images, from which virtual depth is calculated by parallax. A simplified plenoptic camera model based on [20] is shown in Fig. 2. The distance $b$ between the image sensor plane and the MLA and the distance $a$ between the MLA and the virtual image plane define the virtual depth $\nu = a/b$. Parameter $b$ is proprietary and provided by the manufacturer. In Fig. 2, the image feature seen in two microlens images corresponds to $I_1$ and $I_2$. The disparity between the matched points, $\Delta I = |I_2 - I_1|$, and the distance between the microlens centers, $D$, are used to calculate the virtual depth following the intercept theorem: $\nu = D / \Delta I$. The MLA makes it possible to evaluate the depth of an image feature seen in more than one micro lens image (Fig. 3a and 3b). The camera manufacturer provides software to find the centers of the micro lenses, from which the distance $D$ can be found.
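As a concrete illustration of the intercept-theorem relation (a hypothetical numeric example, not a measurement from our setup): if a feature is matched in two adjacent microlens images with a disparity of $\Delta I = 8$ pixels and the corresponding lens centers are $D = 24$ pixels apart, then

$$\nu = \frac{D}{\Delta I} = \frac{24}{8} = 3,$$

i.e., the virtual image point lies at $a = 3b$ in front of the MLA. Converting $\nu$ to metric depth requires the calibration described in Section II-D.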

Figure 2.

The plenoptic camera model. The MLA makes it possible to evaluate the virtual depth of an image feature seen in more than one micro lens image.

Figure 3.

Plenoptic images. (a) Raw image of micro lens array (MLA) showing a surgical tool and suture pad, (b) a magnified view showing the details of MLA images, (c) processed extended depth-of-field image, and (d) virtual depth image color-coded according to depth, with red areas being closer to the camera.

The software also enables the generation of an extended depth-of-field image, in which the entire image is in focus provided the subject lies inside the depth-of-field range (Fig. 3c). The camera provides a grayscale image with additional depth information at each pixel coordinate (Fig. 3d). The intrinsic camera parameters include the focal length, the focus distance, the distance between the MLA and the image sensor, and the pixel size. It is also necessary to include distortion coefficients to rectify the image for metric depth measurement. Through our calibration and custom scripts, this information is translated into a metric point cloud and used to position the arm.
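The conversion from per-pixel depth to a 3D point follows standard pinhole back-projection. The sketch below is illustrative only: it assumes the depth values are already metrically calibrated (Section II-D), and the function and variable names are ours rather than part of the Raytrix API or our released software.

#include <vector>

// Minimal pinhole back-projection sketch: converts a rectified depth image
// (already metrically calibrated, in mm) into a 3D point cloud in the camera
// frame. Intrinsics fx, fy, cx, cy come from Zhang-style calibration; the
// names and structure are illustrative, not the Raytrix API.
struct Point3 { double x, y, z; };

std::vector<Point3> backProject(const std::vector<double>& depth_mm,
                                int width, int height,
                                double fx, double fy, double cx, double cy)
{
  std::vector<Point3> cloud;
  cloud.reserve(depth_mm.size());
  for (int v = 0; v < height; ++v) {
    for (int u = 0; u < width; ++u) {
      const double Z = depth_mm[v * width + u];
      if (Z <= 0.0) continue;                 // skip pixels with no valid depth
      const double X = (u - cx) * Z / fx;     // pinhole model back-projection
      const double Y = (v - cy) * Z / fy;
      cloud.push_back({X, Y, Z});
    }
  }
  return cloud;
}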

C. Command Interface

The system performance is evaluated on phantom suturing pads, which are used to train surgeons. We use the system as a point-and-click positioner for the robotic arm. The user needs an easy-to-use command interface to select the target tool position and communicate it to the robot. A graphical user interface was developed based on RViz, the visualization component of the ROS framework, using ROS messages and topics.

An operator clicks on the desired suture location $P_c$ within the visualized point cloud on the RViz display:

$$P_c = \begin{bmatrix} X_c & Y_c & Z_c & 1 \end{bmatrix}^T,$$

where the coordinates are expressed in the camera frame after metric calibration. This point is then expressed in the robot base frame to generate Cartesian motions:

$$P_0 = \begin{bmatrix} X_0 & Y_0 & Z_0 & 1 \end{bmatrix}^T = {}^{c}T_{0}\, P_c,$$

where the rigid-body transformation ${}^{c}T_{0}$ takes points from the camera frame to the robot base frame and is often called the hand-eye calibration. Fig. 4 shows the command interface and the transformations. In Section II-E, we describe how the hand-eye transformation is found.
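A minimal sketch of this transformation step is shown below, assuming Eigen for the linear algebra (the paper does not prescribe a particular library); the ROS plumbing that delivers the RViz click is omitted.

#include <Eigen/Dense>

// Transform an operator-selected point from the camera frame into the robot
// base frame using the 4x4 hand-eye transform cT0 (illustrative Eigen sketch).
Eigen::Vector3d cameraToRobotBase(const Eigen::Matrix4d& cT0,
                                  const Eigen::Vector3d& Pc)
{
  Eigen::Vector4d Pc_h(Pc.x(), Pc.y(), Pc.z(), 1.0);  // homogeneous coordinates
  Eigen::Vector4d P0_h = cT0 * Pc_h;                  // P0 = cT0 * Pc
  return P0_h.head<3>();                              // drop the homogeneous 1
}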

Figure 4.

The command interface is a visual display of the 3D point cloud representing the surgical scene through the lens of the plenoptic camera. The operator clicks on a target point $P_c$, which is expressed in the camera frame and transformed to the robot frame through ${}^{c}T_{0}$. The tool position is known through the forward kinematics ${}^{0}T_{8}$.

D. Metric Depth Calibration

Metric depth calibration ensures that the values reported by our imager are at the correct scale and location with respect to the camera coordinate system. After the intrinsic parameters are calculated, the camera natively reports the virtual depth, which is the distance between the virtual image point and the microlens array, as shown in [20]. As we will see, this virtual depth does not map linearly to rectified metric coordinates.

The camera data are streamed over Ethernet to a custom ROS node developed in C++, where the subsequent rectification and conversion to metric coordinates are performed (Fig. 1, Raytrix node).

We adopt a pinhole camera model for the extended depth-of-field (total-focus) grayscale image and find the camera intrinsic parameters using Zhang's well-known camera calibration algorithm [29] to generate a rectified extended depth-of-field image. Using a checkerboard pattern and the intrinsic parameters, the reference coordinate frame of the checkerboard is expressed in the camera coordinate frame. For metric calibration of virtual depth, we rigidly attach a flat calibration pattern (Fig. 5a) to the robot arm and manually move the robot to align the checkerboard to the XY plane of the camera coordinate system. Alignment is verified by a ROS node for pose estimation (Fig. 5b), which takes as input the extended depth-of-field image of the pattern, the intrinsic camera parameters, and the known size of the checkerboard. A separate gradient detection module is run on the rectified 2D extended depth-of-field image to find the corners of the checkerboard, which are then overlaid on the corresponding point cloud points as seen in Fig. 5c. This is possible because the camera API provides a one-to-one matching between the extended depth-of-field image pixels and the depth image.
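The pose-estimation step can be sketched as follows. This is an illustrative implementation assuming OpenCV, which provides Zhang-style calibration and PnP pose estimation; the paper's actual ROS node is not reproduced here, and the function name is ours.

#include <opencv2/calib3d.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

// Estimate the checkerboard pose in the camera frame from the rectified
// extended depth-of-field image. K and dist are the intrinsics and distortion
// coefficients from Zhang's calibration; pattern gives the inner-corner grid.
bool estimateCheckerboardPose(const cv::Mat& gray, cv::Size pattern,
                              double square_mm, const cv::Mat& K,
                              const cv::Mat& dist, cv::Mat& rvec, cv::Mat& tvec)
{
  std::vector<cv::Point2f> corners;
  if (!cv::findChessboardCorners(gray, pattern, corners)) return false;
  cv::cornerSubPix(gray, corners, cv::Size(5, 5), cv::Size(-1, -1),
                   cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.01));

  // Object points of the checkerboard corners in the planar pattern frame (mm).
  std::vector<cv::Point3f> object;
  for (int r = 0; r < pattern.height; ++r)
    for (int c = 0; c < pattern.width; ++c)
      object.emplace_back(c * square_mm, r * square_mm, 0.0f);

  // rvec/tvec express the checkerboard frame in the camera frame.
  return cv::solvePnP(object, corners, K, dist, rvec, tvec);
}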

Figure 5.

Metric calibration of the plenoptic camera. (a) The plenoptic camera views the checkerboard, which is attached to a robot arm. (b) The checkerboard coordinate frame is overlaid on the rectified extended depth-of-field plenoptic image. (c) Corner points are detected in 2D and overlaid on the point cloud (red). Figure best seen in color.

While moving the checkerboard at regular fixed intervals in Z, we observe a non-linear depth distortion related to the distance from the sensor. This can be seen in Fig. 6a and 6b. As the reference moves away from the camera, its reported size in the XY plane increases disproportionately, when it should remain constant. There is also a distortion of depth values along the Z axis [20]. We evaluate the reported virtual depth in a FOV larger than the one selected for the surgical task. We find that the distortion in our chosen FOV can be approximated with linear regressions.

Figure 6.

Distortion of observed checkerboard reference. (a) Isometric view of the checkerboard corners before metric calibration. (b) Side view of the checkerboard corners with our chosen FOV highlighted in red. Inside the highlighted area, the distortions can be approximated with a linear function. (c) Geometric calculation of the vanishing point from lines fit to the distorted calibration pattern within the FOV. Dimensions are unit-less as these measurements were taken prior to metric calibration.

The best-fit 3D lines that go through the four corners intersect, in the least-squares sense, at a vanishing point in front of the camera. The vanishing point and the four corner points approximate a pyramid (Fig. 6c). We moved the planar calibration pattern at fixed displacements of 10 mm along the z-axis, which allows the z scale to be found using the faces of the pyramid and the principle of similar triangles. Once the z scale is found, and because the calibration pattern has known x and y dimensions, the x and y scales can be calculated as a function of actual depth. In essence, we rectify and scale the pyramid into a rectangular box in which the calibration checkerboard is reported with a constant size. This box is the calibrated FOV, within which the reported virtual depth and XY locations are translated into metric 3D measurements.
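The sketch below illustrates the linear approximations used inside the chosen FOV: a least-squares line maps virtual depth to metric Z using the known 10 mm steps, and a second line maps metric Z to an XY scale factor derived from the known checkerboard dimensions. The structure and names are hypothetical and simplified (the full rectification also uses the per-corner pyramid geometry); it is intended only to make the procedure concrete.

#include <vector>
#include <cstddef>

// Ordinary least-squares fit of y = a*x + b, used twice below.
static void fitLine(const std::vector<double>& x, const std::vector<double>& y,
                    double& a, double& b)
{
  double sx = 0, sy = 0, sxx = 0, sxy = 0;
  const std::size_t n = x.size();
  for (std::size_t i = 0; i < n; ++i) { sx += x[i]; sy += y[i]; sxx += x[i]*x[i]; sxy += x[i]*y[i]; }
  a = (n * sxy - sx * sy) / (n * sxx - sx * sx);
  b = (sy - a * sx) / n;
}

struct MetricDepthMap {
  double az, bz;   // virtual depth -> metric Z (mm)
  double as, bs;   // metric Z (mm) -> XY scale factor
  double zFromVirtual(double v) const { return az * v + bz; }
  double xyScale(double z_mm) const { return as * z_mm + bs; }
};

// Calibrate from observations of the checkerboard moved in known 10 mm steps:
// the virtual depth of the pattern plane at each step, the commanded metric Z,
// and the observed (uncalibrated) checkerboard square size vs. its known size.
MetricDepthMap calibrateDepth(const std::vector<double>& virtual_depth,
                              const std::vector<double>& known_z_mm,
                              const std::vector<double>& observed_square,
                              double known_square_mm)
{
  MetricDepthMap m;
  fitLine(virtual_depth, known_z_mm, m.az, m.bz);           // z scale
  std::vector<double> z, scale;
  for (std::size_t i = 0; i < virtual_depth.size(); ++i) {
    z.push_back(m.zFromVirtual(virtual_depth[i]));
    scale.push_back(known_square_mm / observed_square[i]);  // restore true XY size
  }
  fitLine(z, scale, m.as, m.bs);                            // XY scale vs. depth
  return m;
}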

E. Robot-Camera (Hand-Eye) Calibration

Once we are satisfied that the camera reports correct metric measurements, we must determine the camera position and orientation with respect to the robot base frame, a step also known as hand-eye calibration. To accomplish this, we observe a calibration object with both the camera and the robot. The object is the checkerboard pattern seen previously in Fig. 5b. Using the rectified plenoptic camera image and the pose estimator described in the previous section, we segment the checkerboard boundaries and determine its position and orientation with respect to the camera coordinate system. The checkerboard pose with respect to the robot is then determined.

We cannot use a reference feature or a camera directly attached to the robot arm, as is standard practice [30], [31], because the long surgical tool puts our workspace and camera view out of reach of the robot arm and necessitates a pointer rod. A precisely machined calibration pointer rod is therefore attached to the robot arm. The pointer rod is designed to provide less than 0.2 mm of runout, i.e., at most 0.1 mm of positioning error in the XY plane of the tool. This ensures that the tool model closely matches reality and allows us to know the location of the pointer rod tip very accurately. The rod is then used to touch the four corners of the planar calibration checkerboard and quantify their positions in the robot coordinate system. Once we have at least four point correspondences, we can solve for the rigid transformation between the two coordinate frames. In practice, we use two sets of four-point correspondences, measured at different depths with respect to the camera, to estimate the transformation from non-coplanar points. This registers the camera to the robot and allows position commands from the plenoptic point cloud to be executed in the robot base coordinate frame.
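The rigid transformation can be estimated with the standard least-squares (Kabsch/SVD) solution over the corresponding corner points, as sketched below with Eigen. This is an illustrative implementation; the paper does not specify the exact solver used.

#include <Eigen/Dense>
#include <vector>

// Least-squares rigid registration between corresponding points expressed in
// the camera frame and in the robot base frame. Returns the 4x4 hand-eye
// transform cT0 such that P_robot ≈ cT0 * P_camera.
Eigen::Matrix4d estimateHandEye(const std::vector<Eigen::Vector3d>& cam,
                                const std::vector<Eigen::Vector3d>& robot)
{
  const std::size_t n = cam.size();
  Eigen::Vector3d mc = Eigen::Vector3d::Zero(), mr = Eigen::Vector3d::Zero();
  for (std::size_t i = 0; i < n; ++i) { mc += cam[i]; mr += robot[i]; }
  mc /= double(n); mr /= double(n);

  Eigen::Matrix3d H = Eigen::Matrix3d::Zero();           // cross-covariance
  for (std::size_t i = 0; i < n; ++i)
    H += (cam[i] - mc) * (robot[i] - mr).transpose();

  Eigen::JacobiSVD<Eigen::Matrix3d> svd(H, Eigen::ComputeFullU | Eigen::ComputeFullV);
  Eigen::Matrix3d R = svd.matrixV() * svd.matrixU().transpose();
  if (R.determinant() < 0) {                             // handle reflection case
    Eigen::Matrix3d V = svd.matrixV();
    V.col(2) *= -1.0;
    R = V * svd.matrixU().transpose();
  }
  Eigen::Vector3d t = mr - R * mc;

  Eigen::Matrix4d T = Eigen::Matrix4d::Identity();
  T.topLeftCorner<3,3>() = R;
  T.topRightCorner<3,1>() = t;
  return T;
}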

F. Phantom Targeting

To evaluate the total system accuracy, we target 10 pre-marked locations on a soft-tissue suture pad (3-Dmed, Franklin, OH). These locations are distributed randomly throughout the FOV and at different heights. The distance between the predetermined and targeted points counts as the error. The error is a 3D Euclidean norm derived from the error in the XY plane of the suture pad and the error in depth, or bite size, of the suture. These are measured as shown in Fig. 7.

Figure 7.

Metrics to assess system accuracy. (a) Random points are marked on a suture pad. (b) Needle insertion error is defined as the distance from the desired target dot center to the midpoint of the suture thread entry and exit points on the phantom. A planar accuracy Δxy, a depth accuracy Δz, and an overall targeting accuracy E for needle insertion are measured. (c) Suture spacing S is defined as the distance between consecutive running sutures.

The XY error is the distance from the target dot to the midpoint of the suture entry and exit points. The depth error is the distance from the desired to the achieved bite depth. These are combined using Pythagoras' theorem to find the overall 3D targeting error (Fig. 7b). To evaluate the precision and accuracy of suture placement along a curvilinear track, we also measure the consistency of suture spacing (the distance between consecutive stitches) after completing 6 stitches across a curvilinear track with a desired spacing of 3 mm (Fig. 7c). Both tests are repeated 3 times at different locations within the camera FOV. These tests combine the errors of the camera, the robot, and the user clicking on target dots into one metric. We measure the XY accuracy, depth, and spacing consistency using a pair of calipers (Products Engineering Corporation, Torrance, CA).
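Explicitly, the overall targeting error combines the two measured components as

$$E = \sqrt{\Delta_{xy}^2 + \Delta_z^2},$$

where $\Delta_{xy}$ is the planar error measured on the suture pad and $\Delta_z$ is the bite-depth error.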

III. Results

A. Accuracy of Metric Calibration of Plenoptic Camera

The camera was evaluated previously [23] in terms of accuracy and precision. An accurate metric calibration rectifies the distorted pyramid and reports a checkerboard of uniform size at any depth in the calibrated FOV. We observe the checkerboard again, detecting a grid of 110 points at 5 depths, and quantify the deviation from this ideal rectangular grid across a volume of 50 × 45 × 20 mm. The accuracy was found to be 1.14 ± 0.80 mm on average, and errors were concentrated at the edges of the FOV, as seen in Fig. 8. To test precision, the camera viewed a flat plane at 6 different locations within the camera FOV. The resulting 6 point clouds were split into 15 sections, and a plane was fit to each section. The deviation of each point from its local plane is the precision at that location. The average precision was found to be 0.97 ± 0.77 mm for this FOV. In comparison, the 3D reconstruction error with a calibrated stereo endoscope can range from 1 mm at 0.25 Hz [32], to 3 mm at a comparable framerate of 1.77 Hz, to 10 mm at 4.9 Hz [33]. These accuracy and precision tests give us confidence that our metric calibration was successful, and we can proceed to apply this technology in a demanding surgical task.
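The per-section precision metric can be sketched as a plane fit followed by point-to-plane residuals, for example via the centroid and the direction of least variance. The Eigen-based code below is an illustrative reconstruction of this metric, not the exact implementation.

#include <Eigen/Dense>
#include <vector>
#include <cmath>

// Fit a plane to one section of the point cloud (centroid plus
// smallest-singular-vector normal) and report the mean absolute
// point-to-plane distance as the local precision.
double planeFitResidual(const std::vector<Eigen::Vector3d>& pts)
{
  Eigen::Vector3d centroid = Eigen::Vector3d::Zero();
  for (const auto& p : pts) centroid += p;
  centroid /= double(pts.size());

  Eigen::MatrixXd A(pts.size(), 3);                 // centered points, one per row
  for (std::size_t i = 0; i < pts.size(); ++i)
    A.row(i) = (pts[i] - centroid).transpose();

  Eigen::JacobiSVD<Eigen::MatrixXd> svd(A, Eigen::ComputeThinV);
  Eigen::Vector3d normal = svd.matrixV().col(2);    // direction of least variance

  double sum = 0.0;
  for (const auto& p : pts)
    sum += std::fabs(normal.dot(p - centroid));     // point-to-plane distance
  return sum / double(pts.size());
}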

Figure 8.

Accuracy results showing the rectified metric data [25]. Units are in [mm]. The black points are the locations of expected measurements, and the colored points are observations, color-coded according to accuracy error. Note the higher errors toward the FOV edges, due to radial distortion. This test was conducted for a 50 × 45 × 20 mm FOV.

B. Accuracy of Hand-eye Calibration

To evaluate the success of the hand-eye calibration, we compare the position of the tip of the calibration rod, measured in the robot base frame as it touched the corners of the calibration pattern, with the coordinates of the same point from the point cloud in the camera frame transformed to the robot frame using the calibrated ${}^{c}T_{0}$. The calibration pattern was moved within the FOV of the plenoptic camera 5 times, each time recording and comparing 4 corner points. The average reprojection error over the 20 locations is 1.57 mm with a standard deviation of 0.90 mm. The significance of this result is the low standard deviation of 0.90 mm, which is similar to the metric calibration standard deviation of 0.80 mm. This means that the addition of a long surgical tool does not grossly affect accuracy if care is taken to design the tool with small runout. This result gives us confidence that the registration allows a precise task to be informed by the camera and executed by the robot.

It should be emphasized that these results are obtained within the calibrated FOV of the camera. Because we do not model the radial distortion of the plenoptic camera, the metric calibration breaks down outside the calibrated FOV, where the linearity assumption no longer holds. In these circumstances, we have observed reprojection errors of 4 mm or higher.

C. Phantom Targeting

As a robust evaluation of our system performance in a relevant setting, we target various locations on the suture pad seen in Fig. 7a. After the pad is held firmly in place, the user clicks on 30 desired suture locations and allows the robot to move and place stitches. The results of this experiment are given in Tables I and II.

TABLE I.

Targeting accuracy E for both the needle insertion task (30 suture targets) and the linear suturing task (18 sutures). Mean μ and standard deviation σ in [mm] are reported.

Task               Metric   μ [mm]   σ [mm]
Needle insertion   Δxy      0.92     0.57
                   Δz       1.40     0.37
                   E        1.80     0.43
Linear suturing    Δxy      0.96     0.17
                   Δz       1.44     0.34
                   E        1.79     0.35

TABLE II.

Inter-stitch spacing measured on three sets of running sutures with 6 stitches per running suture. Mean μ and standard deviation σ in [mm] are reported.

Task              μ [mm]   σ [mm]
Ideal             3.00     0.00
Linear suturing   2.94     0.11

The clinical performance of the camera can be judged by the targeting error and the consistency of suture spacing, which were measured on the suture pad as seen in Fig. 9. The average 3D-norm targeting error for all 30 targets across both phantoms was 1.80 mm with a standard deviation of 0.43 mm. The average gap between suture placements for all 18 stitches on the curvilinear phantom (three runs of 6 stitches each) was 2.94 mm with a standard deviation of 0.11 mm. The low standard deviation is evidence that the camera performed well as a positioning tool and enabled easy and consistent targeting for a surgical task.

Figure 9.

(a) Phantom suture pad showing the results of the targeting test and the linear suturing test. (b) Magnified view of a representative targeting task and targeting accuracy E. (c) Magnified view of a representative linear suturing task with targeting error E and spacing gap S outlined for one suture. Both (b) and (c) assess overall system accuracy.

IV. Conclusion

There are unique challenges associated with the metric calibration of plenoptic cameras, including non-linear depth distortion and radial distortion. For a smaller FOV, a successful calibration enables a clinically relevant task with precise and accurate measurement. We demonstrated millimetric accuracy of our metric calibration, with only a small degradation when registering the camera to the robot. This enabled the system to be used effectively for the task of suture placement. The emergence of plenoptic cameras suitable for this task is encouraging for the future of single-lens depth imagers in other applications. Another plenoptic camera is available from Lytro, a company focused initially on consumer and creative applications. The chosen Raytrix camera includes quantitative measurement software, enabling us to build our own applications.

We validated that depth from plenoptic cameras is sufficient for guiding a robotic surgical task. The plenoptic 3D reconstruction errors are comparable to those of some stereoscopic 3D reconstruction methods and should improve as the technology matures.

Our results can be extended to other domains that require a 3D depth sensor. In particular, we were impressed by how well metal objects were reconstructed by this technology and can envision its use in manufacturing.

Future work will explore other calibration methods to rectify larger portions of the camera FOV, fusion of plenoptic imaging with other camera technologies, laparoscopic embodiments, and the inference of more actionable information from the plenoptic point cloud. We also plan to expand applications to include in vivo tests.

Acknowledgment

The authors would like to thank Arne Erdmann, Christian Perwass, and Stefano Spyropoulos for help with the Raytrix software and calibration, and Dr. Jin Kang and Richard Cha for help with camera assessment and theory.


References

[1] Gilson WD, Yang Z, French BA, and Epstein FH, "Measurement of myocardial mechanics in mice before and after infarction using multislice displacement-encoded MRI with 3D motion encoding," Am. J. Physiol. Heart Circ. Physiol., vol. 288, no. 3, pp. H1491–H1497, Mar. 2005.
[2] Boda-Heggemann J, Köhler FM, Wertz H, Ehmann M, Hermann B, Riesenacker N, Küpper B, Lohr F, and Wenz F, "Intrafraction motion of the prostate during an IMRT session: a fiducial-based 3D measurement with Cone-beam CT," Radiat. Oncol. Lond. Engl., vol. 3, p. 37, 2008.
[3] Treece G, Prager R, Gee A, and Berman L, "3D ultrasound measurement of large organ volume," Med. Image Anal., vol. 5, no. 1, pp. 41–54, Mar. 2001.
[4] Kowalczuk J, Meyer A, Carlson J, Psota ET, Buettner S, Pérez LC, Farritor SM, and Oleynikov D, "Real-time three-dimensional soft tissue reconstruction for laparoscopic surgery," Surg. Endosc., vol. 26, no. 12, pp. 3413–3417, Dec. 2012.
[5] Stoyanov D, Mylonas GP, Deligianni F, Darzi A, and Yang GZ, "Soft-tissue motion tracking and structure estimation for robotic assisted MIS procedures," in Med. Image Comput. Comput.-Assist. Interv. (MICCAI), vol. 8, no. Pt 2, 2005, pp. 139–146.
[6] Lourenço M, Stoyanov D, and Barreto JP, "Visual Odometry in Stereo Endoscopy by Using PEaRL to Handle Partial Scene Deformation," in Augmented Environments for Computer-Assisted Interventions, Linte CA, Yaniv Z, Fallavollita P, Abolmaesumi P, and Holmes DR III, Eds. Springer International Publishing, 2014, pp. 33–40.
[7] Schmalz C, Forster F, Schick A, and Angelopoulou E, "An endoscopic 3D scanner based on structured light," Med. Image Anal., vol. 16, no. 5, pp. 1063–1072, Jul. 2012.
[8] Treasure T, Valencia O, Sherlaw-Johnson C, and Gallivan S, "Surgical Performance Measurement," Health Care Manag. Sci., vol. 5, no. 4, pp. 243–248, Nov. 2002.
[9] Hain T et al., "Indications for Optical Shape Measurements in Orthopaedics and Dermatology," Med. Laser Appl., vol. 17, no. 1, pp. 55–58, 2002.
[10] Wilson DM, Iwata BA, and Bloom SE, "Computer-Assisted Measurement of Wound Size Associated with Self-Injurious Behavior," J. Appl. Behav. Anal., vol. 45, no. 4, pp. 797–808, Dec. 2012.
[11] Tylman W et al., "Computer-aided approach to evaluation of burn wounds," 2011.
[12] Parot V, Lim D, González G, Traverso G, Nishioka NS, Vakoc BJ, and Durr NJ, "Photometric stereo endoscopy," J. Biomed. Opt., vol. 18, no. 7, p. 076017, 2013.
[13] Jiang H, Iftimia NV, Xu Y, Eggert JA, Fajardo LL, and Klove KL, "Near-infrared optical imaging of the breast with model-based reconstruction," Acad. Radiol., vol. 9, no. 2, pp. 186–194, Feb. 2002.
[14] Waseda M, Inaki N, Torres Bermudez JR, Manukyan G, Gacek IA, Schurr MO, Braun M, and Buess GF, "Precision in stitches: Radius Surgical System," Surg. Endosc., vol. 21, no. 11, pp. 2056–2062, Nov. 2007.
[15] Seki S, "Accuracy of suture placement," Br. J. Surg., vol. 74, no. 3, pp. 195–197, Mar. 1987.
[16] Adelson EH and Wang JYA, "Single lens stereo with a plenoptic camera," IEEE Trans. Pattern Anal. Mach. Intell., vol. 14, no. 2, pp. 99–106, 1992.
[17] Ng R, Levoy M, Brédif M, Duval G, Horowitz M, and Hanrahan P, "Light field photography with a hand-held plenoptic camera," Stanford Univ. Computer Science Tech. Rep. CSTR 2005-02, 2005.
[18] Levoy M and Hanrahan P, "Light Field Rendering," in Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, New York, NY, USA, 1996, pp. 31–42.
[19] Dansereau DG, Pizarro O, and Williams SB, "Decoding, Calibration and Rectification for Lenselet-Based Plenoptic Cameras," in 2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013, pp. 1027–1034.
[20] Johannsen O, Heinze C, Goldluecke B, and Perwaß C, "On the Calibration of Focused Plenoptic Cameras," in Time-of-Flight and Depth Imaging: Sensors, Algorithms, and Applications, Grzegorzek M, Theobalt C, Koch R, and Kolb A, Eds. Springer Berlin Heidelberg, 2013, pp. 302–317.
[21] Leonard S, Wu KL, Kim Y, Krieger A, and Kim PCW, "Smart Tissue Anastomosis Robot (STAR): A Vision-Guided Robotics System for Laparoscopic Suturing," IEEE Trans. Biomed. Eng., vol. 61, no. 4, pp. 1305–1317, Apr. 2014.
[22] Leonard S, Shademan A, Kim Y, Krieger A, and Kim PC, "Smart Tissue Anastomosis Robot (STAR): Accuracy evaluation for supervisory suturing using near-infrared fluorescent markers," in 2014 IEEE International Conference on Robotics and Automation (ICRA), 2014, pp. 1889–1894.
[23] Decker R, Shademan A, Kim P, and Krieger A, "Performance Evaluation and Clinical Applications of 3D Plenoptic Cameras," in Next-Generation Robotics II; Machine Intelligence and Bio-inspired Computation: Theory and Applications IX, Proc. SPIE, vol. 9494, 2015.
[24] Strobl KH and Hirzinger G, "Optimal Hand-Eye Calibration," in 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2006, pp. 4647–4653.
[25] Perwass C and Wietzke L, "Single lens 3D-camera with extended depth-of-field," in Proc. SPIE, vol. 8291, 2012, p. 829108.
[26] "The Orocos Component Builder's Manual." [Online]. Available: http://www.orocos.org/stable/documentation/rtt/v2.x/doc-xml/orocos-components-manual.html. [Accessed: 31-Aug-2015].
[27] "ROS.org | Powering the world's robots." [Online]. Available: http://www.ros.org.
[28] Schreiber G, Stemmer A, and Bischoff R, "The fast research interface for the KUKA lightweight robot."
[29] Zhang Z, "A Flexible New Technique for Camera Calibration," IEEE Trans. Pattern Anal. Mach. Intell., vol. 22, no. 11, pp. 1330–1334, Nov. 2000.
[30] Tsai RY and Lenz RK, "A new technique for fully autonomous and efficient 3D robotics hand/eye calibration," IEEE Trans. Robot. Autom., vol. 5, no. 3, pp. 345–358, Jun. 1989.
[31] Horaud R and Dornaika F, "Hand-Eye Calibration," Int. J. Robot. Res., vol. 14, no. 3, pp. 195–210, Jun. 1995.
[32] Stoyanov D, Darzi A, and Yang GZ, "A practical approach towards accurate dense 3D depth recovery for robotic laparoscopic surgery," Comput. Aided Surg., vol. 10, no. 4, pp. 199–208, Jan. 2005.
[33] Parchami M, Cadeddu JA, and Mariottini G-L, "Endoscopic stereo reconstruction: A comparative study," in 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2014, pp. 2440–2443.
