Abstract
This paper details the process of prototyping a Surgical Instrumentation Simulator using Virtual Reality and gesture-based natural interaction. Our prototype paired a cost-efficient mobile headset with a smartphone screen for stereoscopic display, creating a low-cost setup. We applied an iterative approach to our prototyping process, and user testing included both students and professionals from the health field. Results showed that users found the proposed interaction techniques satisfactory. In addition, the low-cost hardware proved sufficient in quality to support an immersive experience.
1 Introduction
Since their arrival in the early 2010s, Head Mounted Displays (HMDs) have gained relevance in the video game, simulation, health, and education industries. They stand out by allowing users greater immersion in first-person experiences. Furthermore, advances in interaction and visualization technology for HMDs have let users feel closer to a real environment, especially in systems using body tracking devices that bring the user's hands or entire body into the virtual world. Immersion was also improved by progress in graphical engines and HMD screen resolutions, promoting visual realism in such experiences.
Examples of interaction techniques for Virtual Reality (VR) exist in both academia and commercial applications. They use a variety of interaction technologies such as camera tracking, navigation systems, single-hand sensory controllers, and traditional game controllers. Each device offers the user different functionality and degrees of immersion [1].
In this article, we first document the current use of HMDs for medical training. Medical applications using HMDs include endoscopic surgery [2], image examination, tridimensional mapping in high-risk locations [3], and surgery simulators. We then detail the construction and user testing of a surgical instrumentation simulator with health students and professionals. Health students need to train the recognition and organization of surgical instruments. Such instruments are expensive to purchase, and educational institutions in some countries do not give students the necessary time to practice with them. Therefore, a low-cost simulator would not only reduce the cost of acquiring surgical materials but also allow students to learn from their homes.
2 Presence and Interaction
An interaction designer strives to build interactive products that are easy to learn, effective, and pleasant to use. The benefits of good interaction design to a VR system include greater immersion in the virtual environment, a better sense of presence, a smoother learning curve, and better general usability. According to Bowman [4], an interaction technique is a method that allows a user to carry out a task through a user interface. An interaction technique includes both hardware and software: it maps information from an input device to an action and renders the result through the output devices. Thus, interaction design seeks performance, usability, and utility [4].
Performance is how well a user and the system perform an action, and it reflects the efficiency and precision of the product. Usability indicates if an interactive product is pleasant, easy to use, and effective to users. Utility means that the developed interactions fully support, or complete, the user’s goals.
The main goal of VR technology is to create a sense of presence for users. It immerses them in a virtual environment so that they feel as if they were in a “real” world [5]. VR is an advanced interaction technique that allows immersion, navigation, and interaction in a synthetic, tridimensional, computer-generated environment, using multi-sensorial channels [6]. Accordingly, users seek immersive experiences in VR.
Through technologies such as body tracking and haptic feedback, interactive devices can add sensory channels to VR experiences when combined with an HMD. Such is the case with the Oculus Rift controller: VR applications can use input from its touch and movement sensors to render an accurate representation of the user's hands. If the user has a closed grip on the controller, the application shows their hands closed. The Leap Motion is another example of a device that enables hand presence, although it does its tracking through infrared cameras. It detects a multitude of gestures through high-precision hand and finger tracking, and it is a low-cost device for VR. When used with VR, the Leap Motion is mounted on the HMD, providing a 150-degree range of detection.
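As an illustration of how hand presence can be driven from tracking data, the sketch below polls the Leap Motion C# SDK for per-hand grab and pinch strength; the class name and the closed-grip threshold are our assumptions, not details from this paper.

```csharp
// Minimal sketch: reading grab/pinch strength from the Leap Motion C# SDK
// to drive a rendered hand's pose. The 0.8 threshold is an assumed value.
using Leap;

public class HandPresencePoller
{
    private readonly Controller controller = new Controller();

    public void Poll()
    {
        Frame frame = controller.Frame();      // latest tracking frame
        foreach (Hand hand in frame.Hands)
        {
            float grab = hand.GrabStrength;    // 0 = open palm, 1 = fist
            float pinch = hand.PinchStrength;  // 0 = apart, 1 = thumb-index touch
            bool gripClosed = grab > 0.8f;
            // An application would update its virtual hand model here,
            // e.g. closing the rendered fingers when gripClosed is true.
        }
    }
}
```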
3 Related Work
The paper “Virtual Training of Fire Wardens Through Immersive 3D Environments” [7] presents an immersive interface using an Oculus Rift and a Leap Motion. The authors used gesture-based interaction for choosing paths and actions in a fire escape simulation, where users selected one of two or three pre-defined paths. Similarly, our work uses the Leap Motion as an input device, with gestures for actions such as moving to waypoints in the virtual space.
In their study, Dorabjee et al. [8] analyzed ten applications using Leap Motion and an HMD. They presented design suggestions and tendencies for creating natural interfaces. Following Wixon and Wigdor, they claimed that natural interfaces do not need to be natural, but rather need to make users behave naturally while interacting with the product. Wixon and Wigdor also state five basic characteristics an interface needs to be considered natural. The first is “Evocative”: the interface evokes a natural behavior for the user to interact with the system. “Unmediated” is the second, meaning users must interact with the system without a secondary system, using their hands or body instead. Next is “Fast and few”: the user should interact with objects through quick interactions related to their natural properties instead of a set of commands. “Contextual”, the fourth characteristic, prescribes that the interface presents an environment where the user naturally knows how to perform the actions. Lastly, the fifth characteristic is “Intuition”: objects respond to actions the way the user would expect them to. In this paper we describe a prototype that employs a natural interface with Wixon and Wigdor's five characteristics.
Experiences with HMDs may cause undesirable physical symptoms in the user, which Davis et al. [9] call cybersickness. The symptoms include nausea, vomiting, and eyestrain, and they occur due to the disconnect between the virtual and real experiences. For instance, if a user is motionless in the real world but moving in the virtual world, the mismatch confuses their brain and might cause cybersickness. Factors that affect the chance of cybersickness include user age, playing stance, and hardware framerate and latency.
Settgast et al. [10] compared the level of immersion of two VR systems, DAVE and HMD, in scenarios of observation, emotion, and interaction. Users were also checked for cybersickness after each session. Results showed that for all tasks users preferred the DAVE system because they perceived their actions and the visuals as “more realistic”. They attributed this to the screens in the DAVE system, which rendered images at a larger scale. Their tests showed, however, that users presented slightly higher levels of cybersickness after the DAVE session than after the HMD session. During our tests, we asked users about cybersickness symptoms after using the low-cost equipment prototype.
4 Objectives
In this paper, we present and evaluate a surgical instrumentation simulator prototype for educational purposes. The prototype aims to promote immersion and procedures close to the real experience while maintaining a low cost. Accordingly, we checked participants for cybersickness symptoms after the tests.
5 Methodology
Our methodology had four stages: Technology Selection, Prototype Construction, User Tests, and Feedback and Data Analysis. We first observed existing interaction techniques from academia and the industry, identifying the hardware most suited to our project. Then we developed the prototype aided by interaction design and level design techniques. During the prototype’s construction, we performed preliminary tests to iterate over the interaction techniques and level design. With the prototype completed, we started user tests to evaluate interactions and hardware components.
5.1 Technology Choice and Prototype Development
For the prototype's hardware, our team investigated market options regarding price and user evaluations. We chose a US$ 45.00 basic mobile HMD from the Brazilian brand Beenoculus, alongside a US$ 150.00 LG Nexus 4 smartphone and a US$ 79.99 Leap Motion. A computer was also necessary for rendering the video and streaming it to the smartphone using the Trinus VR app through a Wi-Fi or USB connection.
We chose to develop the prototype using the Unity 5 game engine because it was a free platform with strong support for VR and the Leap Motion. The engine's illumination features were also decisive, allowing a good level of visual fidelity. The project setup, shown in Fig. 1, uses the mobile HMD with the Leap Motion attached and the smartphone as a stereoscopic screen.
The prototype was developed through an iterative process. We generated test builds for small groups of users to evaluate the interaction techniques and complexity of the tasks. During these iterations, we adjusted, added and removed interactions and changed the task order. Sections 5.2 and 5.3 detail the changes made and their motivations.
5.2 Step Descriptions
The goal of the prototype was to simulate the organization of surgical instruments on a desk. Our first iterations had the player taking objects from an unorganized desk and placing them on another. However, users expressed discomfort with moving between the tables frequently, so we split the process into two tasks: categorization and organization. In the former, the user takes objects from the unorganized desk, separating them into groups. Next, in the organization step, the user repositions the objects, sorting them accordingly.
The prototype initially had 41 objects which the user had to organize in sectors, one for each category of tools. However, because users struggled to differentiate small instruments of similar size, 8 objects were removed, leaving 33 in the final iteration. Figure 2 shows the layout of the table.
In the categorization phase, the user collects the objects belonging to each category and places them in a floating tray. Incorrectly categorized objects return to their initial position on the table, and the player receives a new, empty tray for each category. A text indicator for the current category shows the number of remaining objects. Due to the table size, the user must move through the virtual space during this task, which is done by pointing (Fig. 3).
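A minimal Unity sketch of this categorization check is shown below. All class and field names (Instrument, CategoryTray) are ours, invented for illustration; the paper does not describe its implementation.

```csharp
// Sketch of the categorization step: a tray trigger that accepts tools of
// the current category and returns miscategorized ones to the table.
using UnityEngine;
using UnityEngine.UI;

// Hypothetical component carrying each tool's category and starting pose.
public class Instrument : MonoBehaviour
{
    public string category;
    [HideInInspector] public Vector3 initialPosition;
    [HideInInspector] public Quaternion initialRotation;

    private void Awake()
    {
        initialPosition = transform.position;
        initialRotation = transform.rotation;
    }
}

public class CategoryTray : MonoBehaviour
{
    public string currentCategory;   // e.g. "Scissors"
    public Text remainingLabel;      // on-screen counter for this category
    public int remaining;

    private void OnTriggerEnter(Collider other)
    {
        Instrument instrument = other.GetComponent<Instrument>();
        if (instrument == null) return;

        if (instrument.category == currentCategory)
        {
            remaining--;
            remainingLabel.text = currentCategory + ": " + remaining + " left";
        }
        else
        {
            // Wrong category: send the object back to its initial position.
            other.transform.position = instrument.initialPosition;
            other.transform.rotation = instrument.initialRotation;
        }
    }
}
```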
In the organization phase, the user has a tray with objects to place on the table. The table has slots for the tools that change color to show whether the objects are in the correct order. When the user completes a section, they move automatically to the next one, until all are complete.
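The color feedback could be implemented as below; this is a sketch under our own naming, reusing the hypothetical Instrument component from the previous snippet.

```csharp
// Sketch of an organization-table slot that tints green or red depending
// on whether the placed instrument matches the expected one.
using UnityEngine;

public class InstrumentSlot : MonoBehaviour
{
    public string expectedCategory;   // which tool belongs in this slot
    private Renderer slotRenderer;

    private void Awake()
    {
        slotRenderer = GetComponent<Renderer>();
    }

    private void OnTriggerEnter(Collider other)
    {
        Instrument instrument = other.GetComponent<Instrument>();
        if (instrument == null) return;
        slotRenderer.material.color =
            instrument.category == expectedCategory ? Color.green : Color.red;
    }
}
```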
5.3 Interaction Techniques
We started the prototype development by implementing tool manipulation. The user could pinch objects to carry them, then release the pinch to drop them. In addition, we prototyped a looking interaction that showed object details when the user looked at them. Our initial tests showed that, although the pinch interaction was effective and intuitive, a fraction of the users naturally tried to grab objects by grasping them, so we added a grasp interaction as well. Furthermore, we noticed users had difficulty grabbing small objects placed close to each other. To mitigate this problem, we added a highlight to the closest object. We also added that highlight to the places where tools could be placed.
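One possible realization of the nearest-object highlight is sketched below; the tag name, radius, and highlight color are our assumptions.

```csharp
// Sketch: tint the grabbable object closest to the tracked hand.
using UnityEngine;

public class NearestObjectHighlighter : MonoBehaviour
{
    public Transform handAnchor;           // follows the Leap-tracked palm
    public float highlightRadius = 0.15f;  // metres; an assumed value
    private Renderer current;
    private Color originalColor;

    private void Update()
    {
        Renderer nearest = null;
        float best = highlightRadius;
        // For a sketch we search by tag; a real build would cache this list.
        foreach (GameObject obj in GameObject.FindGameObjectsWithTag("Grabbable"))
        {
            float d = Vector3.Distance(handAnchor.position, obj.transform.position);
            if (d < best)
            {
                best = d;
                nearest = obj.GetComponent<Renderer>();
            }
        }
        if (nearest == current) return;
        if (current != null) current.material.color = originalColor;  // clear old
        if (nearest != null)
        {
            originalColor = nearest.material.color;
            nearest.material.color = Color.yellow;                    // highlight
        }
        current = nearest;
    }
}
```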
Early in testing, it became clear that the natural reach of the user's hands was insufficient for manipulating distant objects. Making the user walk around the virtual space would be undesirable, so we first prototyped interactions for moving the table instead: the user would grab specific spots on the table to drag it. This, however, proved unintuitive and hard to convey visually, so it was discarded. Next, we tested an interaction that moves the user instead: if they pointed at a destination for a while, they would move to it. Our tests showed that it was effective, so we kept it. The final interactions of highlight, movement, and object manipulation are shown in Fig. 4.
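The dwell-based movement can be sketched as follows; the hold time, ray length, and waypoint tag are assumptions, and the finger ray would come from the Leap Motion's tracked index finger.

```csharp
// Sketch of point-to-move: if the user keeps pointing at a waypoint for a
// hold time, relocate the player rig to it.
using UnityEngine;

public class PointToMove : MonoBehaviour
{
    public Transform player;          // rig root to relocate
    public float holdSeconds = 1.5f;  // dwell time before moving (assumed)
    private Transform target;
    private float heldFor;

    // Called each frame with a ray along the extended index finger.
    public void UpdatePointing(Ray fingerRay)
    {
        RaycastHit hit;
        if (Physics.Raycast(fingerRay, out hit, 10f) &&
            hit.collider.CompareTag("Waypoint"))
        {
            if (hit.transform == target)
            {
                heldFor += Time.deltaTime;
            }
            else
            {
                target = hit.transform;   // started pointing at a new waypoint
                heldFor = 0f;
            }
            if (heldFor >= holdSeconds)
            {
                player.position = target.position;  // move the user
                heldFor = 0f;
            }
        }
        else
        {
            target = null;
            heldFor = 0f;
        }
    }
}
```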
5.4 Metrics
For the usability analysis of this prototype, we used Sauro and Lewis's PSSUQ (Post-Study System Usability Questionnaire) [11]. The survey had 16 questions about usability on a 1 to 7 Likert scale, where 1 and 7 corresponded to “completely agree” and “completely disagree”. Similarly to Davis et al. [9], our cybersickness evaluation consisted of asking “Have you felt any discomfort using this prototype?”, to which the user answered with a number from 1 to 10, where 1 meant “I had no discomfort” and 10 meant “I feel sick”. We analyzed participants' answers and complaints to identify problems in the prototype.
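For reference, the subscale averages reported in Sect. 6 can be computed as below. The item-to-subscale mapping follows the common 16-item PSSUQ layout (items 1–6 SysUse, 7–12 InfoQual, 13–15 IntQual) and should be treated as our assumption rather than a detail stated in the paper.

```csharp
// Sketch: averaging one participant's 16 PSSUQ answers (1..7 scale)
// into the subscale scores used in the results section.
using System.Linq;

public static class Pssuq
{
    public static double SysUse(double[] r)   { return r.Take(6).Average(); }          // items 1-6
    public static double InfoQual(double[] r) { return r.Skip(6).Take(6).Average(); }  // items 7-12
    public static double IntQual(double[] r)  { return r.Skip(12).Take(3).Average(); } // items 13-15
    public static double Overall(double[] r)  { return r.Average(); }                  // items 1-16
}
```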
5.5 User Tests
The prototype's user sample comprised 19 users (11 male), of which 8 were health students or professionals. Of the users within the health sector, 2 were heart surgeons, 1 was a resident doctor, 4 were medical students, and 1 was a nursing student. The remaining participants were students of computer science, law, or electronic engineering. Users were between 20 and 31 years old, with an average age of 24 (SD = 3.9). We executed test sessions in a hospital room, at the GRVM research group, and in the house of one of the developers; Fig. 5 shows test sessions. After completing the tests, the users answered a survey about discomfort and prototype usability. We organized the data in table form, which the next section details.
6 Results
6.1 Data Analysis
Test sessions lasted between 16 and 35 min, and users spent an average of 23 min (SD = 4.1) to complete the test. Looking at Fig. 6, we notice that 14 users reported total comfort or minimal discomfort when using the prototype: they scored their level of nausea at 1 or 2 on the scale, where 1 means “no discomfort” and 10 means “I am nauseous (vomiting)”. Only one participant (female) was unable to complete the test; she had symptoms of nausea and dizziness after 5 min of use and gave up the tasks. This participant did not answer the PSSUQ questionnaire because she did not complete the test. Another participant, also female, presented dizziness soon after finishing the test and removing the HMD. She reported loss of spatial orientation after leaving the virtual environment and rated the discomfort at 8.
Table 1 shows the averages of the usability questionnaire. The average system usefulness score (SysUse) was 1.79, which indicates that users found the prototype easy to use. Users had no problem learning the interactions proposed by the simulator. The command that took the longest to learn was walking in the virtual environment, which worked through the pointing gesture. When we gave users the “walk” command, most of them felt so present in the virtual environment that they walked forward in the real world. The interaction also presented problems because some participants stretched their arm forward, making it impossible for the sensor to see their hand.
While executing some gestures, users noted the absence of error messages when software problems happened. The main problem in the prototype occurred in the grab interaction: the Leap Motion mirrored participants' hands when they picked up instruments, which caused objects to drop and frustrated users. In addition, errors were caused by the Leap Motion's limited range: users moved their hands out of the sensor's bounds, and their hands stopped being rendered in the simulator. The system did, however, report task errors to users; for example, if the user placed an object in the wrong order, the corresponding slot turned red. Looking at item 7 of Table 1, we can see that users rated the error messages poorly, with an average score of 3.66. This happened because users who ran into implementation errors or Leap Motion sensor problems gave high (unfavorable) scores on this item.
Overall, users approved of the system interface. The data in Table 1 show that users gave good marks for Interface Quality (IntQual), with an average of 1.81. Users readily understood the highlight function, the positioning of the target texts, and the help provided through colors. The Information Quality (InfoQual) score was also good, with a 2.19 average. To complete the tasks, users needed to know the name and type of the instruments they were manipulating. However, the team noted that some users were unable to see the names of the objects in the scenario because they had not adjusted the lens convergence correctly.
The interface did not have a reticle in the center of the screen, and the absence of this component generated an interaction problem. To learn the name of a surgical instrument, users had to center the object on the HMD screen. However, some participants did not center the object by moving their head; they just moved their eyes inside the HMD and could not see the name of the object they were looking at. After completing all the tests, the development team realized the need to add a reticle to the interface, a component that would show users exactly where to look. Users also identified a display issue: the smartphone screen's 720p resolution was low, which made some visualization tasks difficult. One participant commented, “The resolution was so low that sometimes I could not distinguish the difference between haemostat and surgical scissors”.
7 Discussions
The team developed the project in about 5 months and evaluated the interactions with 19 users, who approved the interaction design. A small part of the sample (2 users) suffered from cybersickness. However, the vast majority of users felt that the hardware did not generate physical problems; therefore, the low-cost setup is safe to use. Participants spent an average of 23 min to complete the test, with more time spent on the first task than on the second. This happened because the first task required users to become familiar with the movement, visualization, and interactions of the virtual environment.
Range limits, occlusions, and Leap Motion sensor failures were the main usability problems. At times, users placed their hand out of the sensor's range, resulting in tracking failures. At other times, one hand occluded the other: when the user placed one hand in front of the other, the occluded hand was not rendered in the virtual environment. The team mitigated the Leap Motion's accuracy problems by reducing the number of objects from 41 to 33, which helped to make the pinch interaction more accurate.
While developing the prototype, the team realized that users found the grab interaction more natural than the pinch interaction, so the project supported both gestures for the pick-up action. However, the grab technique generated Leap Motion failures because the sensor could not identify the users' thumbs. Sometimes, the Leap Motion mirrored users' hands and caused objects to drop on the table.
Not implementing a reticle on the simulator screen caused interface problems: some users could not tell which object was in the center of the HMD screen. Users were able to memorize actions intuitively. However, the development team noticed problems with the point-to-walk interaction, as users caused Leap Motion failures by stretching their arms out of the sensor's bounds.
Highlights marked component locations, and snaps and waypoints helped users to complete tasks. At the start of the research, the team planned the level design as a single phase, with the goal of giving users full freedom to decide which task to begin with. The team then realized that users were getting tired due to the large number of goals and the need for constant movement. Therefore, the development team decided to split the level in two. Separating the complex task of organizing a surgical instrumentation table into several mini-tasks optimized the users' time.
8 Conclusions and Future Work
This research successfully built the intended prototype. The low-cost setup proved sufficient for an immersive experience. From analyzing the test results and feedback, we concluded that most users were satisfied by the system usability and found the interface pleasant. However, two of the 19 participants showed cybersickness symptoms, which might hinder such a product’s accessibility.
As future work, we intend to solve the issues found by our testers and run more tests with a larger sample of health care professionals. Our software should be refined to solve problems caused by jitter in the Leap Motion input and to prevent errors that hinder the user experience. Regarding the prototype hardware, we would like to use a smartphone with a 1080p screen to ease observation tasks. Finally, improvements to the interface would include a reticle for a clearer indication of the looking interaction and a range indicator for the maximum hand distance.
References
Anthes, C., Hernández, R., Weidemann, M., Kranzlmüller, D.: State of the art of virtual reality technology. In: 2016 IEEE Aerospace Conference, Big Sky, MT, USA (2016)
Sony: Sony introduces a medical head mount display system for use in endoscopic surgery (2014). http://www.sony.co.uk/pro/press/pr-sony-head-mount
CNN: Google Cardboard saves baby's life (2016). http://edition.cnn.com/2016/01/07/health/google-cardboard-baby-saved/
Bowman, D.A., Kruijff, E., LaViola Jr., J.J., Poupyrev, I.: 3D User Interfaces: Theory and Practice. Addison-Wesley, Westford (2004)
Slater, M., Wilbur, S.: A framework for immersive virtual environments: speculations on the role of presence in virtual environments. Presence Teleoperators Virtual Environ. 6(6), 603–616 (1997)
Burdea, G., Coiffet, P.: Virtual Reality Technology. Wiley, New York (1994)
Diez, H., García, S., Mujika, A., Moreno, A., Oyarzun, D.: Virtual training of fire wardens through immersive 3D environments. In: Web3D 2016, Anaheim, CA, USA, 22–24 July 2016
Dorabjee, R., Bown, O., Sarkar, S., Tomitsch, M.: Back to the future: identifying interface trends from the past, present and future in immersive applications. In: OzCHI 2015, Melbourne, VIC, Australia, 07–10 December 2015
Davis, S., Nesbitt, K., Nalivaiko, E.: A systematic review of cybersickness. In: IE 2014, Newcastle, NSW, Australia (2014)
Settgast, V., Pirker, J., Lontschar, S., Maggale, S., Gütl, C.: Evaluating experiences in different virtual reality setups. In: Wallner, G., Kriglstein, S., Hlavacs, H., Malaka, R., Lugmayr, A., Yang, H.-S. (eds.) ICEC 2016. LNCS, vol. 9926, pp. 115–125. Springer, Cham (2016). doi:10.1007/978-3-319-46100-7_10
Sauro, J., Lewis, J.R.: Quantifying the User Experience: Practical Statistics for User Research. Elsevier, Waltham (2012)