Article

Development of New Generation Portable Camera-Aided Surgical Simulator for Cognitive Training in Laparoscopic Cholecystectomy

by Yucheng Li 1,*, Victoria Nelson 1, Cuong T. Nguyen 2, Irene Suh 3, Suvranu De 2, Ka-Chun Siu 3 and Carl Nelson 1,3
1 Department of Mechanical & Materials Engineering, University of Nebraska-Lincoln, Lincoln, NE 68588, USA
2 Center for Modeling, Simulation and Imaging in Medicine (CEMSIM), Rensselaer Polytechnic Institute, Troy, NY 12180, USA
3 Department of Health & Rehabilitation Sciences, University of Nebraska Medical Center, Omaha, NE 68198, USA
* Author to whom correspondence should be addressed.
Electronics 2025, 14(4), 793; https://doi.org/10.3390/electronics14040793
Submission received: 1 January 2025 / Revised: 14 February 2025 / Accepted: 16 February 2025 / Published: 18 February 2025
(This article belongs to the Special Issue Virtual Reality Applications in Enhancing Human Lives)

Abstract

Laparoscopic cholecystectomy (LC) is the standard procedure for gallbladder removal, but improper identification of anatomical structures can lead to biliary duct injury (BDI). The critical view of safety (CVS) is a standardized technique designed to mitigate this risk. However, existing surgical training systems primarily emphasize haptic feedback and physical skill development, making them expensive and less accessible. This paper presents the next-generation Portable Camera-Aided Surgical Simulator (PortCAS), a cost-effective, portable, vision-based surgical training simulator designed to enhance cognitive skill acquisition in LC. The system consists of an enclosed physical module equipped with a vision system, a single-board computer for real-time instrument tracking, and a virtual simulation interface that runs on a user-provided computer. Unlike traditional simulators, PortCAS prioritizes cognitive training over force-based interactions, eliminating the need for costly haptic components. The system was evaluated through user studies assessing accuracy, usability, and training effectiveness. Results demonstrate that PortCAS provides a sufficiently accurate tracking performance for training surgical skills such as CVS, offering a scalable and accessible solution for surgical education.

1. Introduction

Surgical skills are fundamental to healthcare, from basic wound care to intricate diagnostic and therapeutic interventions. Beyond technical proficiency, effective surgical training also requires a focus on knowledge acquisition and the development of professional attitudes within a comprehensive educational framework [1]. Traditional surgical training methods, which rely on observation, practice, and teaching, are resource-intensive and often inaccessible, and the digital revolution has increasingly rendered them obsolete [2,3].
Simulation is a valuable tool for surgical training, offering safe and controlled environments for learners to practice repeatedly. The Association of Surgeons in Training defines simulation as any activity designed to replicate a system or environment to assess, inform, and modify behavior. By utilizing physical models, computer programs, or hybrid systems, simulation enables skill acquisition and assessment through feedback and objective performance metrics [4]. Laparoscopic training is essential in most surgical disciplines. In the late 1980s and early 1990s, laparoscopic surgery training was frequently conducted without formal assessment of skills or competency [5]. Since then, surgical training has undergone a significant transformation, with simulation becoming a key component [6]. Simulation models are now the main method for teaching laparoscopic skills [7,8].
The importance of surgical training models and the challenges associated with existing models are discussed in [9]. Various surgical training models have been developed to meet the growing demand for hands-on practice, including systems from Kroton Medical Technology [10], Simulab [11], and 3D-printed models [12]. However, these models are often plagued by limitations such as high costs, immobility, and lack of reusability. In addition, rural physicians face significant barriers due to limited access to major medical centers and training facilities, further exacerbating the challenges to obtain consistent and effective surgical training [13,14].
Surgical simulators vary significantly in cost and complexity, ranging from low-tech, low-cost options such as the Fundamentals of Laparoscopic Surgery (FLS) trainer to high-tech, high-cost haptic-based trainers. The cost of low-fidelity simulators (or in some cases, components thereof) ranges from under USD 10 to approximately USD 230, while commercial systems can cost hundreds to many thousands of dollars [15]. Some of the available training simulators with somewhat higher cost include the LapVR system [16] and the Lübecker Toolbox [17,18], and these remain much less expensive than the highest-fidelity simulators. However, studies indicate that high-cost trainers do not necessarily provide inherent advantages over their lower-cost counterparts [19]. Several low-cost laparoscopic trainers have been developed in recent years, including the FLS training simulator [20], do-it-yourself (DIY) box trainers [21], the eoSim portable box trainer (eoSurgical, Edinburgh, Scotland) [22], the iTrainer with iPad 3 [17], LABOT [23], and the 3D-printed SCLT trainer [24]. A key feature of low-cost trainers is portability, which requires a compact and durable design. This characteristic is particularly essential for enabling remote surgical training in underserved or rural areas [25]. These low-cost innovations hold great promise for laparoscopic skill training by making it more accessible [26].
The techniques used to prevent biliary duct injury during open cholecystectomy have proven challenging to implement in laparoscopic cholecystectomy (LC) [27]. To enhance safety in LC, the critical view of safety (CVS) was developed to minimize biliary injuries. In 2014, SAGES established the Safe Cholecystectomy Task Force (SCTF) to identify the causes of biliary duct injury and promote a safety-focused approach [28]. In 2022, SAGES launched the CVS Challenge [29] to develop effective clinical methods for assessing CVS achievement. Many surgical procedures, including LC, require complex cognitive decision-making. For instance, achieving CVS demands that trainees accurately determine the appropriate moment and location for dissecting the hepatic artery. This decision is based on whether the cystic duct and artery have been clearly identified and sufficiently cleared, ensuring it is safe to proceed to the next step. Unlike motor skill training, which emphasizes instrument handling and force application, cognitive training focuses on enhancing a trainee’s ability to recognize critical anatomical structures and make informed intraoperative decisions.
To address the challenges of high-cost surgical training systems, limited accessibility in low-resource settings, and the difficulty of effectively training and assessing the CVS, this paper introduces a prototype system built on prior research [30,31,32], which developed the Portable Camera-Aided Surgical Simulator (PortCAS), a portable and affordable laparoscopic training simulator. Previous studies established the foundation by validating a comprehensive approach to collecting image data, processing it to track instrument locations, and integrating those locations into a virtual reality (VR) application. The next-generation PortCAS is designed to prioritize cognitive training (procedural steps) over physical skill development (hand–eye coordination and haptic feedback). Since cognitive-focused training does not require haptic interaction, a low-cost trainer is appropriate, with costs comparable to physical-only analog systems such as FLS. The new PortCAS offers a sufficiently accurate yet cost-effective surgical training experience compared to existing market models (e.g., laparoscopic cholecystectomy simulators [11]). It features a user-friendly setup with minimal software installation requirements. The system comprises three key components: a vision-equipped enclosed module, a single-board computer for instrument tracking, and a virtual simulation interface running on a user-provided computer. This modular design enhances portability and accessibility while maintaining compatibility with surgeons’ preferred instruments and personal laptops. While the current simulation environment focuses on laparoscopic gallbladder surgery, the system is adaptable for future surgical procedures. The limitations identified in [30,31,32] drove the objectives for additional development.

2. Materials and Methods

2.1. Design Requirement

Before developing our latest surgical training simulator, the following design requirements were established to address the objectives of portability, ease of use, and simulation realism:
  • Portability: The system must be compact, with dimensions not exceeding 22 × 14 × 9 inches [33] and a weight under 4 pounds, enabling effortless transportation. It should fit within a carry-on bag or small case to ensure deployment in various settings, including rural areas where access to dedicated training facilities may be constrained.
  • Realism and Accuracy: The simulator must provide precise surgical instrument tracking, maintaining a spatial accuracy of less than 5 mm in critical operational areas. This level of fidelity is essential for ensuring that trainees develop confidence and proficiency in performing complex procedures, such as laparoscopic cholecystectomy (LC).
  • Real-Time Performance: The data processing pipeline, including marker detection and 3D position estimation, must support a refresh rate of at least 10 frames per second (FPS), enabling smooth and responsive interaction with the VR simulation environment.

2.2. Portable Laparoscopic Training System Design

The workflow of the proposed laparoscopic training system, as shown in Figure 1, integrates a seamless 4-stage process that bridges the physical enclosure box with the VR environment, enabling realistic surgical simulation. The first stage involves image and video capture using three smartphones equipped with cameras and a custom-developed app. This app includes an image segmentation program to detect and track instrument markers in real-time. The three smartphones integrated into the PortCAS system are equipped with built-in flashlights that are activated during simulator operation. These flashlights provide adequate lighting to capture clear images of the color markers, ensuring accurate position calculations. In the second stage, the smartphones process the captured images to identify the pixel coordinates of these markers, transmitting the data to a Raspberry Pi. The third stage employs the Raspberry Pi, which runs a triangulation algorithm to calculate the 3D spatial positions of the markers based on the input from multiple cameras. This critical step ensures precise estimation of instrument locations within the simulated surgical environment. Finally, in the fourth stage, the calculated marker positions are sent to a vision computer that integrates them into the VR environment, enabling real-time rendering of surgical instruments and their movements. Separating the simulation process into these subtasks and executing each one on distinct hardware provides the system with a modularity that is particularly useful during development and allows optimal use of the strengths of each component device. The overall system operates with a refresh rate of approximately 10 Hz. Each stage of this workflow is detailed in the subsequent sections, demonstrating how the system effectively combines hardware and software to achieve a portable and efficient surgical training solution. To achieve these design requirements, the following modeling assumptions have been established to guide the development of the simulation framework:
  • The smartphones are fixed at the designed positions of the proposed portable system (Figure 2). The cameras are located at the centers of the camera windows on the box. Smartphones do not change positions during usage.
  • All smartphones and their cameras have the same parameters, same performance, and same settings. There is no need for repeat calibration for each camera.
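As a concrete illustration of how marker pixel coordinates might travel from the phones to the Raspberry Pi in the workflow described above, the following Python sketch implements a simple line-based TCP collector. The message format, port number, and data structures are illustrative assumptions, not the actual protocol used by the PortCAS app.

```python
# Hypothetical Raspberry Pi-side collector: each phone streams text lines of
# the form "<camera_id>,<u>,<v>" containing the pixel coordinates of one
# marker centroid. Protocol, port, and message format are assumptions.
import socket
import threading

latest = {}               # camera_id -> (u, v), most recent pixel coordinates
lock = threading.Lock()

def handle_phone(conn):
    buf = b""
    with conn:
        while True:
            data = conn.recv(1024)
            if not data:
                break
            buf += data
            while b"\n" in buf:
                line, buf = buf.split(b"\n", 1)
                try:
                    cam_id, u, v = line.decode().strip().split(",")
                    with lock:
                        latest[int(cam_id)] = (float(u), float(v))
                except ValueError:
                    continue      # skip malformed lines

def serve(port=5000):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen(3)                 # up to three phones
    while True:
        conn, _ = srv.accept()
        threading.Thread(target=handle_phone, args=(conn,), daemon=True).start()
```

A separate loop on the Raspberry Pi could then read the latest coordinates at roughly 10 Hz and run the triangulation described in Section 2.5.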
Figure 1. Workflow diagram of the portable surgical training simulator. The system comprises three components: (1) three smartphones equipped with cameras, an installed app, and an image segmentation program; (2) a Raspberry Pi running a triangulation program to estimate marker positions in space; and (3) a vision computer that renders the VR environment. Smartphones capture marker pixel coordinates and transmit the data to the Raspberry Pi. The Raspberry Pi processes the data to calculate marker positions and sends the results to the computer, which generates the immersive VR simulation.
Figure 2. Portable enclosure design and assembly process. (A) Unfolded enclosure: The piece is laser-cut from 1/8″ plywood and connected with 12″ hinges using 1/8″ rivets for foldability. (B) Folded enclosure: The compact design achieves a folded volume of 12.25″ × 12″ × 1.25″ for portability. (C) Installed enclosure: Tabs and slots securely connect the panels to form the working structure. (D) Fully assembled prototype: The enclosure is equipped with three smartphones and two laparoscopic graspers, ready for simulation use.
The design of the physical enclosure prioritized ease of assembly and disassembly to ensure users can efficiently deploy and store the device. Additionally, the construction minimized the use of threaded fasteners to prevent components from loosening during transport. The prototype’s enclosure box is shown in Figure 2. The physical enclosure of the prototype is referred to as the tent. The disassembled tent is shown in Figure 2A,B, while the assembled version is depicted in Figure 2C,D. The prototype is constructed from laser-cut 1/8″ plywood, with 12″ hinges secured to each plywood piece using 1/8″ rivets. This design takes the form of an A-frame tent, consisting of two primary sections: the base and the frame. The base folds accordion-style, featuring three hinges arranged to alternate folding directions. The frame is connected via a single hinge, linking the front and back faces of the device. The base and frame interlock using friction. Three Samsung Galaxy A03 smartphones are positioned along the left side, front face, and right side of the enclosure to detect instrument positions inside. The design is flat-foldable and measures approximately 1.25″ thick when fully folded. The enclosure incorporates portholes for surgical instruments, camera windows for image capture, and removable ledges to support the phones. This design is efficient due to the simple assembly and disassembly process, consisting of only two non-fastened parts. The tent is both flat-foldable and lightweight, making it well suited for easy transport and deployment. A model of the enclosure is presented in Figure 3, which illustrates the 3D representation and layout of the physical enclosure design.
The system uses three cameras to distribute viewpoints as evenly as possible to maximize coverage of the enclosure’s interior. Camera positions are optimized based on their Field of View (FOV) and the condition number of the triangulation matrix, as discussed in a later section. To accurately determine an object’s spatial coordinates within the workspace, the object must be visible to at least two cameras. The accuracy and performance of this estimation method are validated in subsequent examples. The workspace within the enclosure refers to the region where the tips of the laparoscopic graspers are visible to at least two cameras, enabling effective position estimation. The functional workspace is slightly smaller than the total physical volume of the enclosure.

2.3. Image Segmentation

Color-based segmentation is applied to camera images to identify the tip locations of surgical instruments. Each instrument’s jaw is marked with a distinct colored marker. The color segmentation algorithm isolates the specified color while removing all other colors from the image. Figure 4A illustrates multiple color markers in a sample image frame. Applying the segmentation algorithm, Figure 4B isolates the two target colors, leaving only the relevant markers. Subsequent processing identifies the contours and centroids of each colored object, as shown in Figure 4C, with contours highlighted in green and centroids in red. These centroids are then used by the triangulation algorithm presented in Section 2.5 to calculate the x-, y-, and z-coordinates of the surgical instrument tips.
Segmentation is performed using camera images captured by three Samsung Galaxy A03 smartphones positioned on the enclosure’s front and sides. A custom-developed application on the phones automatically segments the instrument tip based on color detection. Color segmentation was chosen over other methods due to its simplicity, rapid implementation, and high accuracy in detecting instrument markers. The position of the instrument tip is then calculated using a triangulation algorithm on the Raspberry Pi, as detailed in Section 2.5.
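A minimal OpenCV sketch of this color-based segmentation step is given below: one marker color is thresholded in HSV space, the largest contour is extracted, and its centroid is returned as the pixel coordinate passed on to triangulation. The HSV bounds are placeholders, not the values used on the phones.

```python
# Minimal color-segmentation sketch (OpenCV): isolate one marker color,
# find its largest contour, and return the centroid in pixel coordinates.
# The HSV bounds below are placeholders, not the values used in PortCAS.
import cv2
import numpy as np

def marker_centroid(frame_bgr, hsv_low=(40, 80, 80), hsv_high=(80, 255, 255)):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_low), np.array(hsv_high))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None                               # marker not visible
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])   # centroid (x_p, y_p)
```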

2.4. Single Camera Calibration

The mapping between the position in the camera image and the position in the real-world frame is established as follows. The origin in the camera image is translated to the center of the image,
$$\begin{bmatrix} x_c \\ y_c \end{bmatrix} = \begin{bmatrix} x_p \\ y_p \end{bmatrix} - \begin{bmatrix} c_x \\ c_y \end{bmatrix}, \qquad (1)$$
where $x_p$ and $y_p$ are the pixel coordinates of a position in the camera image, and $c_x$ and $c_y$ are the pixel coordinates of the image center. The target position in the real-world frame can be written as
$$d_c = \lambda \begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix}, \qquad (2)$$
where λ is an unknown scaling factor. The third coordinate can be calculated as
$$z_c = a_0 + a_2 \rho^2 + a_3 \rho^3 + a_4 \rho^4, \qquad (3)$$
where $\rho = \sqrt{x_c^2 + y_c^2}$ is the distance of the point from the image center, so $z_c$ depends solely on this distance. The polynomial coefficients $a_0$, $a_2$, $a_3$, and $a_4$ are described by the Scaramuzza model [34] and can be determined through camera calibration. Although the factor $\lambda$ remains unknown, once the coordinate $z_c$ is calculated, the direction vector from the origin to the target position in the camera frame can be determined as
$$t_c = \frac{d_c}{\lVert d_c \rVert} = \frac{1}{\sqrt{x_c^2 + y_c^2 + z_c^2}} \begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix}. \qquad (4)$$
The direction from the camera to the target position in the world frame is
$$t = A\,t_c, \qquad (5)$$
where the matrix $A \in SO(3)$ represents the camera’s orientation in the world frame, determined during the camera’s installation.
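To make Equations (1)–(5) concrete, the following numpy sketch maps a pixel coordinate to a unit viewing ray in the world frame using a Scaramuzza-style polynomial model. The coefficient array and rotation matrix are calibration inputs; the function and parameter names are assumptions for illustration, not the calibration code used in PortCAS.

```python
# Sketch of Equations (1)-(5): map a pixel (x_p, y_p) to a unit direction
# vector in the world frame. 'a' holds the polynomial coefficients
# [a0, a1, a2, a3, a4]; a1 is unused, matching Equation (3). 'A' is the
# camera's world-frame orientation (a 3x3 rotation matrix).
import numpy as np

def pixel_to_world_ray(xp, yp, cx, cy, a, A):
    xc, yc = xp - cx, yp - cy                    # Eq. (1): recenter on image center
    rho = np.hypot(xc, yc)                       # distance from the image center
    zc = a[0] + a[2] * rho**2 + a[3] * rho**3 + a[4] * rho**4   # Eq. (3)
    d = np.array([xc, yc, zc])
    t_c = d / np.linalg.norm(d)                  # Eq. (4): unit ray, camera frame
    return A @ t_c                               # Eq. (5): rotate into world frame
```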

2.5. Triangulation

The triangulation determines the three-dimensional spatial position of a target marker by using the pixel coordinates extracted through image segmentation from multiple camera views. The algorithm uses stereoscopic projection [35] to compute the target position from the pixel coordinates of a single marker centroid. The system uses three cameras to locate a target marker, as shown in Figure 5B. Figure 5A illustrates the local view with rays from the three cameras, along with the estimated and target positions. The marker must be visible to at least two cameras for position determination. Each camera pair calculates the marker’s position independently, yielding two position estimates per pair, or six estimates when all three cameras are used. For each camera pair, the target position is described by
$$d_t = d_i + c_i t_i, \qquad d_t = d_j + c_j t_j, \qquad (6)$$
where $i$ and $j$ denote the indices of the two cameras. The camera positions are determined by the enclosure geometry. The unit vectors $t_i$ and $t_j$, which point from the cameras to the target, are obtained using Equation (5). Although the distances from the cameras to the target are initially unknown, by combining the two equations in Equation (6), the distances from the cameras to the target position can be calculated as
$$\begin{bmatrix} c_i \\ c_j \end{bmatrix} = \begin{bmatrix} -t_i & t_j \end{bmatrix}^{-1} \left( d_i - d_j \right). \qquad (7)$$
Substituting Equation (7) back into Equation (6) yields two distinct estimates of the target position. Geometrically, these estimates correspond to the closest points on the rays extending from the two cameras toward the target, as illustrated in Figure 5A. The three cameras form three unique camera pairs, resulting in six distinct target position estimates. The final triangulation estimate is calculated as the arithmetic mean of these six values, which improves accuracy. Incorporating three cameras also introduces redundancy, ensuring reliable tracking even if one camera loses sight of the marker.
As illustrated in Figure 1, the triangulation process is executed on the single-board computer within the second orange module of the workflow. The triangulated x-, y-, and z-coordinates (in mm) are transmitted from the single-board computer to the user PC. These coordinates are used in the VR environment to render the instruments for simulation.
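The pairwise solve in Equations (6) and (7) and the averaging over camera pairs can be sketched as follows. Because Equation (7) stacks three scalar constraints against two unknown ray lengths, the sketch solves each pair in a least-squares sense; the function and variable names are assumptions for illustration.

```python
# Sketch of the pairwise triangulation in Equations (6)-(7): for each camera
# pair, solve for the ray lengths in a least-squares sense, take the two
# closest points on the rays, and average over all pairs (six estimates for
# three cameras). d[k] are camera positions, t[k] world-frame unit rays,
# all given as numpy arrays of shape (3,).
import numpy as np
from itertools import combinations

def triangulate(d, t):
    estimates = []
    for i, j in combinations(range(len(d)), 2):
        M = np.column_stack((-t[i], t[j]))          # 3x2 system from Eq. (6)
        rhs = d[i] - d[j]
        (ci, cj), *_ = np.linalg.lstsq(M, rhs, rcond=None)
        estimates.append(d[i] + ci * t[i])          # closest point on ray i
        estimates.append(d[j] + cj * t[j])          # closest point on ray j
    return np.mean(estimates, axis=0)               # averaged target estimate
```

For example, triangulate([d0, d1, d2], [t0, t1, t2]) would return the averaged position passed on to the VR stage.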

2.6. Camera Layout Assessment

To assess the camera layout on the enclosure, an equation is required that relates all camera positions to the estimated target positions. This equation incorporates a matrix analogous to the Jacobian matrix in robotics [36], mapping between two variable spaces. The camera positions are then assessed using the condition number of this Jacobian-inspired matrix. The derivation of this matrix is outlined below.
In this system, multiple cameras are used to estimate the target position, with three cameras employed in this study. Similar to Equation (6), the target position is independently defined for each camera as follows:
$$d_t = d_0 + c_0 t_0, \qquad d_t = d_1 + c_1 t_1, \qquad d_t = d_2 + c_2 t_2. \qquad (8)$$
In contrast to Equation (7), which combines only two of these equations, combining all three equations in Equation (8) allows the distance from the cameras to the target position to be expressed as follows:
$$\begin{bmatrix} c_0 \\ c_1 \\ c_2 \end{bmatrix} = \begin{bmatrix} -t_0 & t_1 & 0 \\ 0 & -t_1 & t_2 \\ t_0 & 0 & -t_2 \end{bmatrix}^{-1} \begin{bmatrix} d_0 - d_1 \\ d_1 - d_2 \\ d_2 - d_0 \end{bmatrix}. \qquad (9)$$
The condition number of the matrix in Equation (9) is used to assess the camera layout. The method for calculating the condition number is described in [36]. The optimization aims to minimize the condition number by selecting different camera layouts on the enclosure. The results of the assessment and optimization are presented in Section 3.1.
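A minimal sketch of this layout metric follows, assuming the sign convention used in the reconstruction of Equation (9) above: the three pairwise ray constraints are stacked into a 9 × 3 matrix, and the layout is scored by its condition number (the ratio of the largest to the smallest singular value).

```python
# Sketch of the layout metric in Equations (8)-(9): stack the three pairwise
# ray constraints into a 9x3 matrix and score the camera layout by its
# condition number. t0, t1, t2 are world-frame unit rays from the three
# cameras to the target; the sign convention follows Equation (9) above.
import numpy as np

def layout_condition_number(t0, t1, t2):
    c0, c1, c2 = (np.asarray(t).reshape(3, 1) for t in (t0, t1, t2))
    z = np.zeros((3, 1))
    M = np.block([[-c0,  c1,   z],
                  [  z, -c1,  c2],
                  [ c0,   z, -c2]])
    return np.linalg.cond(M)      # large values indicate ill-conditioned layouts
```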

3. Results

3.1. Camera Layout Optimization

To evaluate the camera layouts, tests were conducted with various configurations, all maintaining the same distance to a single target position, as illustrated in Figure 6. In other words, the cameras in the simulated scenarios, Figure 6A–I, are positioned on a common spherical surface. The nine camera layouts form a matrix of scenarios. In each column of the matrix, all cameras are placed at the same height relative to the target position, corresponding to the same polar angle θ relative to the sphere’s central axis. In each row, the layouts share the same azimuthal angle distribution ϕ on the sphere. For consistency, one camera is always located on the symmetry plane, while the other two are symmetrically distributed on either side. The azimuthal distribution angle ϕ defines the angle between the first camera and the other two cameras.
The condition number of the matrix in Equation (9) is evaluated for multiple scenarios. Figure 7 presents a contour map displaying the condition numbers across a continuous range of θ and ϕ values, corresponding to different camera layouts. The nine scenarios, Figure 6A–I, are indicated at their respective locations on the map.
The lowest condition number is observed for scenario E (Figure 6E), where θ = 54.8° and ϕ = 120°. The results suggest that evenly distributed cameras enhance the system’s ability to estimate the target position. The middle column of the scenario matrix, comprising Figure 6B,E,H, exhibits lower condition numbers compared to the other six scenarios. This suggests that optimizing the system’s capability requires selecting a polar angle θ within the range of 50° to 60°. In the design of the system’s enclosure, the height positions of the three cameras can be set based on this optimal range for θ.
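The contour map in Figure 7 can be approximated by sweeping θ and ϕ over a grid and evaluating the same condition number at each point. The sketch below assumes cameras on a unit sphere centered on the target, one camera at azimuth 0 and two at ±ϕ, all at polar angle θ; these geometric conventions and grid ranges are illustrative assumptions.

```python
# Sketch of the layout sweep behind Figure 7: for each (theta, phi) pair,
# place one camera at azimuth 0 and two at +/- phi on a unit sphere around
# the target, build the 9x3 matrix of Equation (9), and record its condition
# number. Geometry conventions and grid ranges are illustrative assumptions.
import numpy as np

def camera_ray(theta, phi):
    # Camera position on the unit sphere; the ray points at the target (origin).
    pos = np.array([np.sin(theta) * np.cos(phi),
                    np.sin(theta) * np.sin(phi),
                    np.cos(theta)])
    return -pos                                   # unit length by construction

def condition_number(t0, t1, t2):
    c0, c1, c2 = (t.reshape(3, 1) for t in (t0, t1, t2))
    z = np.zeros((3, 1))
    return np.linalg.cond(np.block([[-c0, c1, z], [z, -c1, c2], [c0, z, -c2]]))

thetas = np.radians(np.linspace(30, 90, 61))
phis = np.radians(np.linspace(30, 180, 76))
cond_map = np.array([[condition_number(camera_ray(th, 0.0),
                                       camera_ray(th, ph),
                                       camera_ray(th, -ph))
                      for ph in phis] for th in thetas])
# The (theta, phi) entry with the smallest value in cond_map identifies the
# best-conditioned layout over this grid, analogous to Figure 7.
```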

3.2. Whole-System Accuracy Test

A whole-system accuracy test is conducted to validate the effectiveness of the surgical training system. Figure 8D presents the schematic of the enclosure’s interior. The world frame is defined at the center of the enclosure’s ground board. A 5 × 5 grid of target positions, shown in Figure 8A, is selected on the ground within the enclosure’s workspace. Recall that the kinematic model of the enclosure, including the cameras, world frame, and the matrix of target positions used in the experiment, is illustrated in Figure 3. To evaluate the accuracy of target position estimation, the exact coordinates of the target positions are measured. The positions of the three cameras, two symmetric cameras depicted in Figure 8B and one positioned on the symmetric plane shown in Figure 8C, are measured.
The results of the accuracy test are presented in Figure 9, which illustrates the performance of the system in estimating target positions. Figure 9A–C display the accuracy of the system in the xy-plane, yz-plane, and xz-plane, respectively. Each plane shows the target positions (red grid) alongside the estimated positions (blue points). Figure 9D provides a 3D view of the results.
The overall spatial error distance is 8.2 ± 3.9 mm, as shown in Figure 9. The error along the x-axis (red arrow) is 3.3 ± 1.6 mm, along the y-axis (green arrow) is 7.1 ± 4.3 mm, and along the z-axis (blue arrow) is 0.8 ± 0.9 mm. The results confirm the accuracy and reliability of the system in estimating target positions for surgical training. The largest error in estimated marker positions occurs along the y-axis. The primary source of this error is the manufacturing tolerances in the wooden enclosure and the slight misalignment of the cameras during assembly. Note that, if the enclosure dimensions and camera positions were perfectly aligned with the theoretical setup, the largest error would likely occur along the camera axis due to the challenges of depth estimation in vision-based systems. The camera layout also contributes to this discrepancy. This analysis highlights potential hardware improvements.
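The per-axis and overall error statistics quoted above (mean ± standard deviation) can be reproduced from paired estimated and ground-truth positions as in the sketch below; the array names and shapes are assumptions for illustration.

```python
# Sketch of the accuracy metrics: per-axis and overall mean +/- standard
# deviation of the error between estimated and ground-truth target positions,
# both given as N x 3 arrays in millimeters.
import numpy as np

def error_stats(estimated, ground_truth):
    err = np.asarray(estimated) - np.asarray(ground_truth)   # N x 3 signed errors
    stats = {axis: (np.abs(err[:, k]).mean(), np.abs(err[:, k]).std())
             for k, axis in enumerate("xyz")}
    dist = np.linalg.norm(err, axis=1)                        # spatial error distance
    stats["distance"] = (dist.mean(), dist.std())
    return stats
```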

3.3. Simulation in iMSTK

The Interactive Medical Simulation Toolkit (iMSTK) is an open-source platform for real-time medical simulations, using position-based dynamics to model tissue interactions. This section presents the simulation results obtained using the iMSTK platform, demonstrating the integration of the designed system. The motion of the laparoscopic simulation is controlled via markers attached to the trainee’s hand-held laparoscopic instrument. The positions of the markers are computed on the Raspberry Pi and then transferred to the user’s laptop via the Secure Shell (SSH) protocol. The iMSTK program reads the transferred data and updates the pose of the laparoscopic gripper within the simulation environment on the user’s laptop. These markers track the instrument’s position and orientation, ensuring that the virtual environment accurately reflects the trainee’s movements. The iMSTK environment also allows dynamically changing the viewpoint, representing the effect of laparoscope manipulation during the surgical procedure.
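As an illustration of the data path from the Raspberry Pi to the user’s laptop, the sketch below reads triangulated coordinates streamed as text over a locally forwarded port (for example, via SSH port forwarding) and hands each position to a pose-update callback. The message format, port, and update_gripper_pose callback are assumptions for illustration; the actual iMSTK integration is not shown here.

```python
# Hypothetical laptop-side reader: the Raspberry Pi streams triangulated
# marker coordinates (in mm) as text lines "x,y,z", reachable on a local
# port after SSH port forwarding. Format, port, and callback are assumptions.
import socket

def stream_marker_positions(port=6000, update_gripper_pose=print):
    with socket.create_connection(("localhost", port)) as sock:
        buf = b""
        while True:
            data = sock.recv(1024)
            if not data:
                break
            buf += data
            while b"\n" in buf:
                line, buf = buf.split(b"\n", 1)
                try:
                    x, y, z = map(float, line.decode().strip().split(","))
                except ValueError:
                    continue                      # skip malformed lines
                update_gripper_pose((x, y, z))    # hand off to the simulation loop
```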
Figure 10A–D present the simulation results of the VR environment for laparoscopic training, showcasing realistic visualizations of the gallbladder and surrounding organs. The system effectively replicates key procedural steps, including dissecting connective tissue to expose anatomical structures and isolating the gallbladder for removal. The simulation integrates precise hand motion capture, enhancing the realism and educational value of the training experience. The figure illustrates the system setup with two laparoscopic grippers, demonstrating their ability to interact with virtual organs. Specific interactions include grasping the liver or gallbladder and performing tissue dissection, highlighting the system’s functionality and relevance to surgical practice.

4. Discussion

The proposed PortCAS system provides a low-cost, portable solution for cognitive training in laparoscopic cholecystectomy. Unlike high-cost haptic trainers, PortCAS focuses on procedural learning rather than force feedback, making it an accessible alternative to traditional simulators. Its modular design ensures ease of use, requiring minimal setup without dedicated hardware beyond commonly available devices. However, there are limitations to the current implementation. The system does not include force or tactile feedback, which could limit the development of motor skills necessary for real-world surgical performance. Additionally, the tracking method relies on smartphone cameras, which may introduce accuracy limitations compared to specialized optical tracking systems.
Medical feedback is crucial in evaluating the effectiveness of the simulator. Future studies will involve collaboration with surgeons and trainees to assess the system’s usability, training effectiveness, and integration into existing surgical education curricula. Compared to other portable trainers, such as the FLS box trainer [21] and eoSim [22], PortCAS offers a virtual environment that enhances cognitive engagement while maintaining a cost similar to physical-only analog systems. However, further validation is necessary to quantify its educational impact and ensure that the training outcomes align with established surgical skill benchmarks.

5. Conclusions

This paper presents a novel hybrid hardware/VR surgical training system, developed as the new generation of PortCAS, designed to enhance laparoscopic skills in a controlled and immersive environment. The development of a compact, foldable enclosure coupled with a multi-camera system ensures accurate and reliable tracking of surgical instruments. The results from the accuracy tests demonstrate that the system achieves effective performance, with estimation errors primarily concentrated along the y-axis due to camera layout configurations and assembly tolerances. The simulation using iMSTK provides a realistic and interactive virtual environment, further improving the realism of surgical training.
The combination of VR simulation with hardware that is simple, robust, and modular offers significant advantages. Future work could focus on further optimizing camera placement, refining the hybrid system’s hardware–software integration, and expanding the scope of the training system to include more advanced laparoscopic procedures.

Author Contributions

Conceptualization, K.-C.S. and C.N.; Methodology, Y.L., V.N. and C.N.; Software, Y.L., V.N. and C.T.N.; Validation, Y.L., V.N. and C.T.N.; Investigation, Y.L., V.N., C.T.N., K.-C.S. and C.N.; Writing—original draft, Y.L.; Writing—review and editing, Y.L., K.-C.S. and C.N.; Supervision, S.D., K.-C.S. and C.N.; Project administration, I.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Institutes of Health under award number R56EB030053 as well as NASA Nebraska Space Grant under award number 80NSSC20M0112. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study.

Acknowledgments

Research reported in this paper was supported by the National Institutes of Health under award number R56EB030053 as well as NASA Nebraska Space Grant under award number 80NSSC20M0112. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. Software development assistance from NebDev and clinical consultation from D. Oleynikov are appreciated and acknowledged.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Kneebone, R. Simulation in surgical training: Educational issues and practical implications. Med. Educ. 2003, 37, 267–277. [Google Scholar] [CrossRef] [PubMed]
  2. Gurusamy, K.S.; Aggarwal, R.; Palanivelu, L.; Davidson, B.R. Virtual reality training for surgical trainees in laparoscopic surgery. Cochrane Database Syst. Rev. 2013, 2013, CD006575. [Google Scholar] [CrossRef]
  3. Suresh, D.; Aydin, A.; James, S.; Ahmed, K.; Dasgupta, P. The role of augmented reality in surgical training: A systematic review. Surg. Innov. 2023, 30, 366–382. [Google Scholar] [CrossRef] [PubMed]
  4. Motola, I.; Devine, L.A.; Chung, H.S.; Sullivan, J.E.; Issenberg, S.B. Simulation in healthcare education: A best evidence practical guide. AMEE Guide No. 82. Med. Teach. 2013, 35, e1511–e1530. [Google Scholar] [CrossRef]
  5. Lefor, A.K. Robotic and laparoscopic surgery of the pancreas: An historical review. BMC Biomed. Eng. 2019, 1, 2. [Google Scholar] [CrossRef]
  6. Lefor, A.K.; Harada, K.; Kawahira, H.; Mitsuishi, M. The effect of simulator fidelity on procedure skill training: A literature review. Int. J. Med Educ. 2020, 11, 97. [Google Scholar] [CrossRef]
  7. Torricelli, F.C.; Barbosa, J.A.B.; Marchini, G.S. Impact of laparoscopic surgery training laboratory on surgeon’s performance. World J. Gastrointest. Surg. 2016, 8, 735. [Google Scholar] [CrossRef]
  8. Palter, V.N.; Grantcharov, T.P. Simulation in surgical education. Can. Med. Assoc. J. 2010, 182, 1191–1196. [Google Scholar] [CrossRef]
  9. De Loose, J.; Weyers, S. A laparoscopic training model for surgical trainees. Gynecol. Surg. 2017, 14, 24. [Google Scholar] [CrossRef]
  10. Kroton-Medical Technology. Kroton Training Modules-Laparoscopic. 2024. Available online: https://kroton.info/en/27-laparoscopic (accessed on 21 December 2024).
  11. Simulab Corporation. Simulab Products-Laparoscopic Cholecystectomy Model. 2024. Available online: https://simulab.com/products/laparoscopic-cholecystectomy-model (accessed on 21 December 2024).
  12. Hanisch, M.; Kroeger, E.; Dekiff, M.; Timme, M.; Kleinheinz, J.; Dirksen, D. 3D-printed surgical training model based on real patient situations for dental education. Int. J. Environ. Res. Public Health 2020, 17, 2901. [Google Scholar] [CrossRef]
  13. Borgstrom, D.C.; Deveney, K.; Hughes, D.; Rossi, I.R.; Rossi, M.B.; Lehman, R.; LeMaster, S.; Puls, M. Rural surgery. Curr. Probl. Surg. 2022, 59, 101173. [Google Scholar] [CrossRef]
  14. Walker, J.P. Status of the rural surgical workforce. Surg. Clin. 2020, 100, 869–877. [Google Scholar] [CrossRef] [PubMed]
  15. Li, M.M.; George, J. A systematic review of low-cost laparoscopic simulators. Surg. Endosc. 2017, 31, 38–48. [Google Scholar] [CrossRef] [PubMed]
  16. Iwata, N.; Fujiwara, M.; Kodera, Y.; Tanaka, C.; Ohashi, N.; Nakayama, G.; Koike, M.; Nakao, A. Construct validity of the LapVR virtual-reality surgical simulator. Surg. Endosc. 2011, 25, 423–428. [Google Scholar] [CrossRef]
  17. Yoon, R.; Del Junco, M.; Kaplan, A.; Okhunov, Z.; Bucur, P.; Hofmann, M.; Alipanah, R.; McDougall, E.M.; Landman, J. Development of a novel iPad-based laparoscopic trainer and comparison with a standard laparoscopic trainer for basic laparoscopic skills testing. J. Surg. Educ. 2015, 72, 41–46. [Google Scholar] [CrossRef]
  18. Laubert, T.; Esnaashari, H.; Auerswald, P.; Höfer, A.; Thomaschewski, M.; Bruch, H.P.; Keck, T.; Benecke, C. Conception of the Lübeck Toolbox curriculum for basic minimally invasive surgery skills. Langenbeck’s Arch. Surg. 2018, 403, 271–278. [Google Scholar] [CrossRef]
  19. Geissler, M.E.; Bereuter, J.P.; Geissler, R.B.; Kowalewski, K.F.; Egen, L.; Haney, C.; Schmidt, S.; Fries, A.; Buck, N.; Weiß, J.; et al. Comparison of laparoscopic performance using low-cost laparoscopy simulators versus state-of-the-art simulators: A multi-center prospective, randomized crossover trial. Surg. Endosc. 2025, 1–10. [Google Scholar] [CrossRef]
  20. McCluney, A.; Vassiliou, M.; Kaneva, P.; Cao, J.; Stanbridge, D.; Feldman, L.; Fried, G. FLS simulator performance predicts intraoperative laparoscopic skill. Surg. Endosc. 2007, 21, 1991–1995. [Google Scholar] [CrossRef]
  21. Sellers, T.; Ghannam, M.; Asantey, K.; Klei, J.; Olive, E.; Roach, V. Low-cost laparoscopic skill training for medical students using homemade equipment. MedEdPORTAL 2019, 15, 10810. [Google Scholar] [CrossRef]
  22. Sloth, S.B.; Jensen, R.D.; Seyer-Hansen, M.; Christensen, M.K.; De Win, G. Remote training in laparoscopy: A randomized trial comparing home-based self-regulated training to centralized instructor-regulated training. Surg. Endosc. 2021, 36, 1444–1455. [Google Scholar] [CrossRef]
  23. Soriero, D.; Atzori, G.; Barra, F.; Pertile, D.; Massobrio, A.; Conti, L.; Gusmini, D.; Epis, L.; Gallo, M.; Banchini, F.; et al. Development and validation of a homemade, low-cost laparoscopic simulator for resident surgeons (LABOT). Int. J. Environ. Res. Public Health 2020, 17, 323. [Google Scholar] [CrossRef] [PubMed]
  24. Lubet, A.; Renaux-Petel, M.; Marret, J.B.; Rod, J.; Sibert, L.; Delbreilh, L.; Liard, A. Multi-center evaluation of the first, low-cost, open source and totally 3D-printed pediatric laparoscopic trainer. Heliyon 2024, 10, e40550. [Google Scholar] [CrossRef]
  25. Shahrezaei, A.; Sohani, M.; Taherkhani, S.; Zarghami, S.Y. The impact of surgical simulation and training technologies on general surgery education. BMC Med. Educ. 2024, 24, 1297. [Google Scholar] [CrossRef] [PubMed]
  26. Bökkerink, G.M.; Joosten, M.; Leijte, E.; Verhoeven, B.H.; de Blaauw, I.; Botden, S.M. Take-home laparoscopy simulators in pediatric surgery: Is more expensive better? J. Laparoendosc. Adv. Surg. Tech. 2021, 31, 117–123. [Google Scholar] [CrossRef] [PubMed]
  27. Strasberg, S.M.; Sanabria, J.R.; Clavien, P.A. Complications of laparoscopic cholecystectomy. Can. J. Surg. J. Can. Chir. 1992, 35, 275–280. [Google Scholar]
  28. Pucher, P.H.; Brunt, L.M.; Fanelli, R.D.; Asbun, H.J.; Aggarwal, R. SAGES expert Delphi consensus: Critical factors for safe surgical practice in laparoscopic cholecystectomy. Surg. Endosc. 2015, 29, 3074–3085. [Google Scholar] [CrossRef]
  29. SAGES. The SAGES Critical View of Safety Challenge. 2024. Available online: https://www.sages.org/the-sages-cvs-challenge/ (accessed on 30 December 2024).
  30. Nelson, V.; Ang, H.W.; Thengvall, S.; Nelson, C.A.; Li, H.; Suh, I.; Siu, K.C. Improved Portable Instrument Tracking System for Surgical Training. In Proceedings of the 2024 Design of Medical Devices Conference. American Society of Mechanical Engineers, Minneapolis, MN, USA, 8–10 April 2024; Volume 87752, p. V001T06A006. [Google Scholar]
  31. Zahiri, M.; Booton, R.; Nelson, C.A.; Oleynikov, D.; Siu, K.C. Virtual reality training system for anytime/anywhere acquisition of surgical skills: A pilot study. Mil. Med. 2018, 183, 86–91. [Google Scholar] [CrossRef]
  32. Zahiri, M.; Booton, R.; Siu, K.C.; Nelson, C.A. Design and evaluation of a portable laparoscopic training system using virtual reality. J. Med. Devices 2017, 11, 011002. [Google Scholar] [CrossRef]
  33. U.S. News & World Report. Carry-On Luggage Size and Weight Limits by Airline. 2024. Available online: https://travel.usnews.com/features/carry-on-luggage-sizes-size-restrictions-by-airline (accessed on 30 December 2024).
  34. Scaramuzza, D.; Martinelli, A.; Siegwart, R. A toolbox for easily calibrating omnidirectional cameras. In Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, China, 9–13 October 2006; pp. 5695–5701. [Google Scholar]
  35. Duda, R.O.; Hart, P.E.; Stork, D.G. Pattern Classification and Scene Analysis; Wiley: New York, NY, USA, 1973; Volume 3. [Google Scholar]
  36. Gosselin, C.; Angeles, J. A global performance index for the kinematic optimization of robotic manipulators. J. Mech. Robot. 1991, 113, 220–226. [Google Scholar] [CrossRef]
Figure 3. The schematic of the enclosure shows the enclosure design and layout for camera positioning and triangulation analysis. The Remote Center of Motion (RCM) is the fixed position where the laparoscopic gripper passes through and is secured within the enclosure. The red, green, and blue arrows represent the x-, y-, and z-axes, respectively.
Figure 4. Color-based segmentation is applied to identify surgical instrument tips. (A) shows color markers, (B) highlights the segmented colors, and (C) shows centroids for triangulation.
Figure 5. (A) Local view showing the rays from three cameras and the estimated target position. (B) Triangulation setup illustrating camera rays and target position. The red, green, and blue arrows represent the x-, y-, and z-axes, respectively.
Figure 6. Test scenarios for assessing camera layouts. All camera configurations are positioned on the surface of a sphere centered on the target position. (A,D,G) correspond to θ = 80°; (B,E,H) to θ = 54.8°; and (C,F,I) to θ = 85°. (A–C) have an azimuthal angle distribution of ϕ = 150°; (D–F) have ϕ = 120°; and (G–I) have ϕ = 60°. The red, green, and blue arrows represent the x-, y-, and z-axes, respectively.
Figure 7. Contour map of condition numbers for various camera layout scenarios, computed based on Equation (9). The map covers a continuous range of θ and ϕ, illustrating the impact of these parameters on the condition number. The nine discrete scenarios (A–I) from Figure 6 are marked at their corresponding locations on the map for reference.
Figure 8. (A) Illustration of the 5 × 5 grid of target positions within the enclosure’s workspace, with each square measuring 15 mm × 15 mm. (B) Positioning of the two symmetric cameras. (C) Positioning of the camera on the symmetric plane. (D) Schematic of the enclosure’s interior showing camera placement and target grid.
Figure 9. Accuracy test results comparing estimated target positions (blue points) to ground truth target positions (red grid) across different planes: (A) x y -plane, (B) y z -plane, (C) x z -plane, and (D) 3D view of the workspace. All dimensions are presented in millimeters. The red, green, and blue arrows represent the x-, y-, and z-axes, respectively.
Figure 10. Simulation results of the VR environment for laparoscopic training. The system visualizes the gallbladder and surrounding organs with realistic coloration and texture to enhance realism. Interaction with virtual organs demonstrates key procedural steps, including connective tissue dissection and gallbladder isolation. (A) Illustration of the setup with two laparoscopic grippers. (B) Demonstration of both devices grasping either the liver or the gallbladder. (C) Illustration of the left arm grasping the liver while the right arm dissects fat tissue. (D) Depiction of the right arm grasping the liver while the left arm dissects fat tissue.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
