Abstract
Virtual reality (VR) is a promising tool that is increasingly used in many fields, such as sports science and medicine, in which virtual walking can be enabled through detailed modeling of the physical environment. However, the visualization of a virtual environment using a head-mounted display (HMD) differs from reality, and it is still unclear whether visual perception works equivalently in VR. The purpose of the current study was to compare spatial orientation between the real world (RW) and VR. To this end, participants had to walk blindfolded to differently placed objects in a real and a virtual environment that did not differ in physical properties. They were equipped with passive markers to track the position of the back of their hand, which was used to indicate each object's location. The first task was to walk blindfolded from one starting position to differently placed sport-specific objects requiring different degrees of rotation (0°, 45°, 180°, and 225°) after observing them for 15 s. A three-way repeated-measures ANOVA indicated no significant difference between RW and VR for the different degrees of rotation (p > 0.05). In addition, the participants were asked to walk blindfolded three times from a new starting position to two objects, whose order differed between conditions. Except for one case, no significant differences in the pathways between RW and VR were found (p > 0.05). This study supports the view that participants behave similarly in VR and in real-world interactions, endorsing its use.
1 Introduction
In recent years, virtual reality (VR) has been increasingly used for many purposes, e.g., rehabilitation for people with impaired vision (Palieri et al. 2018), sports training (Pastel et al. 2020a, b, c; Petri et al. 2018, 2019), or therapy for anxiety disorders (Powers and Emmelkamp 2008). The use of VR is not restricted to entertainment; it has also been integrated into science because of its enormous advantages. VR applications allow a user to explore large virtual environments in a smaller physical space (Hirt et al. 2018). Advanced computer technology makes it possible to use realistic computer-generated virtual environments that provide a greater degree of experimental control and less physically demanding experiences (Kimura et al. 2017). VR also has the potential to increase the motivation of children (Harris and Reid 2005) or to enhance learning (Sattar et al. 2019).
One factor that affects the quality of perceiving virtual environments is the kind of VR application used, since applications differ in the sense of presence they evoke in the virtual environment. A head-mounted display (HMD) is known for an increased feeling of presence and for providing high immersion compared to other applications (Mondellini et al. 2018). Since the majority of the population is not familiar with wearing an HMD, physical discomfort, known as cybersickness, may occur, which could affect the feeling of presence (Mondellini et al. 2018; Witmer and Singer 1998). An established method to measure cybersickness is the simulator sickness questionnaire (SSQ) (Kennedy et al. 1993), which has been used in numerous studies (e.g., Christensen et al. 2018; Tregillus et al. 2017; Walch et al. 2017).
Spatial orientation skill should not be reduced to a single ability. Generally, spatial orientation allows us to determine our location in relation to the environment (Carbonell-Carrera and Saorin 2018). It is also known as the ability to remain oriented in a spatial environment when the objects in this environment are observed from different positions (Fleishman and Dusek 1971). Wolbers and Hegarty (2010) gave a good overview of the components that ensure spatial orientation. They described the ability to find one's way, which involves basic perceptual and memory-related processes, as a complex construct because multisensory information must be integrated over space and time (Wolbers and Hegarty 2010). The link between spatial orientation and memory is shown by the finding that deficits in both serve as an early marker for pathological cognitive decline (Flanagin et al. 2019). Accordingly, different tests have been used to analyze spatial memory, for instance by requiring participants to memorize the order of objects on a map and to reconstruct it from memory (Lehnung et al. 1998). In addition, many studies have focused on spatial navigation, which is defined as the ability to find the way between places in the environment (Bruder et al. 2012; Diersch and Wolbers 2019). Participants were often asked to complete wayfinding or homing tasks, which are essential in our daily life (Cao et al. 2019; Ishikawa 2019; Kitchin 1994). During such tasks, the user perceives the space and acquires spatial knowledge and orientation about it (Carbonell-Carrera and Saorin 2018). The user develops a cognitive map, defined as the internal cartographic representation of the surrounding environment (Carbonell-Carrera and Saorin 2018). An additional factor that helps us orient in an unknown environment is distance perception.
Perceived distances in VR have already been compared with the real environment by having participants estimate distances verbally and by walking them while wearing a head-mounted display; these studies showed no significant differences between the conditions, or at least tendencies toward equal estimations (Kelly et al. 2017). Since we used a successor system of the HTC Vive (the HTC Vive Pro, Taiwan), distance estimation was not considered in the current study, as equal estimations for egocentric perception have already been demonstrated. Egocentric reference systems specify location and orientation relative to the observer, whereas the allocentric reference frame works independently of the observer (Wolbers and Wiener 2014). Previous studies showed that allocentric information is used for coding targets for memory-guided reaching in depth (Klinghammer et al. 2016). The authors emphasized the importance of both reference systems but also referred to studies that identified the egocentric reference system as playing the dominant role in specifying object locations (Battaglia-Mayer et al. 2003; Klatzky 1998). For measuring environmental spatial abilities, pointing is a commonly used method that can be varied to examine different aspects of spatial ability (Flanagin et al. 2019; Kimura et al. 2017). Kimura et al. (2017) found that participants could reorient using either the geometry of the room or the implemented features (objects), with feature-based cues appearing to have more impact on spatial ability skills in virtual environments.
Despite the advances enabled by higher computing power and more practicable applications, it is still unknown whether information processing occurs similarly in RW and VR. Only a few studies have investigated whether visual perception works equivalently in VR compared to RW (Pastel et al. 2020a, b, c), even though the visualization of the VR environment is generated artificially. The visual system relies on distance and depth cues, which help us locate objects in a virtual scene (Ghinea et al. 2018). Most studies have considered spatial navigation because of its high relevance in daily life, and pointing has frequently been used to measure the ability to orient in a new environment. In sports, it is important to move precisely to defend against an opponent's attacks, to form a mental image of each teammate's position, or to grasp appropriate objects such as a ball or racket. For adequate training in VR, it should be ensured that these skills can be performed in the same way as under real-world conditions, even without seeing the whole body, as was the case in other studies (Kimura et al. 2017; Petri et al. 2018).
The aim of the study was to compare the ability to orient in a new environment by letting participants move to different sport-specific objects in a real and a virtual environment. The focus was on whether the participants were able to move actively to each object without using another locomotion technique such as teleportation or an omnidirectional treadmill like the CyberWalk (Brewster et al. 2019; Souman et al. 2011). Two main tasks were developed to examine spatial orientation skills: first, participants walked blindfolded to differently placed objects requiring different degrees of rotation (rotation task); second, they walked blindfolded to a previously announced order of objects, to observe free movement under more complex conditions (pathway task). Because of the repeated task demands, possible habituation leading to improved performance was examined in the rotation task (first and second run). Further co-factors that can affect the participants' performance were also examined, such as memory, previous experience in VR, and time for completing the tasks. Other studies have reported that user characteristics such as cognitive abilities can influence spatial orientation (Coughlan et al. 2018; León et al. 2016). At present, rising computing power leads to more realistic graphics, and the perception of virtual environments increasingly resembles that of natural scenes. Nevertheless, given conflicting research findings, the comparison of spatial orientation between virtual and real environments should be examined further.
2 Methods
A within-subject study was designed and conducted in accordance with the Declaration of Helsinki. The approval of the Ethics Committee of the Otto-von-Guericke University at the Medical Faculty and University Hospital Magdeburg was obtained under the number 132/16.
2.1 Experimental apparatus
2.1.1 Hardware
An HTC Vive Pro (HTC, Taiwan) with a field of view of 110° and a total resolution of 2880 × 1600 pixels was used for visualization of the virtual environment. To run the VR environment smoothly, a high-performance desktop equipped with an Intel i7 CPU, 16 GB memory, a 512 GB SSD, and an Nvidia GTX 1080 8 GB graphics card was used. A motion capture system (Vicon, Oxford, UK) comprising 13 cameras with a sampling rate of 200 Hz was used to capture the location of each marker accurately. The VR controller (HTC Vive) was used to match the positions of the virtual objects with those in reality.
2.1.2 Software
A high-fidelity VR environment, intended to prevent a conflict between the real world and the virtual environment, was created with Blender using the scales and textures of the objects in the real world. The same room was also used during the experiment in the real environment (see Fig. 1). The created virtual environments were then imported into Unity3D (version 2019.1), and SteamVR (version 2.5) was used to enable users to interact in virtual reality. Visual Studio 2017 was used to implement the C# program for Unity that controlled the studies.
The raw data were captured and prepared with Vicon Nexus (version 2.4, Oxford, UK). The results reported in the next section were processed and calculated with MATLAB R2019a. Finally, statistical analyses were performed with SPSS (α = 0.05); the relevant statistical methods are explained in detail later.
2.1.3 Participants and experimental setup
Twenty young, healthy adults (8 males and 12 females, average age = 23.1 ± 3.32 years) were recruited for this study. Ten participants reported previous experience with immersion in virtual environments; this experience consisted of participation in other VR studies, and none owned a VR application for private use. The study was conducted in a test room (see Fig. 1) equipped with the Vicon system described previously. In the middle of the test room, an area was marked with a white rope placed between four chairs to provide sensory feedback for the participants during blindfolded walking. All tasks were conducted within this fixed area (5.5 m × 7.5 m). Inside the area, four different objects (a red pylon, a pink ball, a yellow slalom pole, and a white ergometer) were placed at fixed positions. The objects in VR were placed via a controller with a programmed function at the same coordinates as the objects in the real environment. As shown in Fig. 1, the position of each object was switched when the condition (RW or VR) changed. This reduced the learning effect that could occur because each participant completed every task in both the real and the virtual environment on the same day. The distances and the required degrees of rotation remained the same. Six reflective markers were attached to each participant to capture movement coordinates with the Vicon system: one on the sternum, one at the center of each scapula, one placed orthogonally to the glenoid cavity of the shoulder, one at the back of the hand right next to the joint of the index finger (this marker was used for the calculation of the deviations later), and one on the elbow.
2.2 Procedure
Before the experiment started, the participants agreed to and signed the consent form and then filled in a questionnaire regarding their previous experience with VR, including VR gaming, immersive 360-degree movies, or other relevant applications. After the questionnaires were completed, the experimenter explained the whole experimental procedure and measured each participant's interpupillary distance to ensure clear visual input from the HMD. Each participant completed the experiment in both conditions, but the order was randomized. Before starting the experiment, each participant went through the first memory and orientation test. For the memory test, participants studied ten words for one minute. Afterward, they performed the orientation test (part of the Berlin intelligence structure test, BIS), which began with an observation phase (90 s of observing black-colored buildings from a bird's-eye perspective). Next, the participants received the same sheet without the colored buildings and had to mark all previously colored buildings they could remember. Afterward, the participant was asked to reproduce all words named in the memory test; recall of the words was repeated at the end of the study.
When all preliminary tests were completed, the participants were guided to the starting position (SP) as shown in Fig. 1 and received further instructions. The first task was the rotation task (RT). Each object was observed for 15 s from the SP. Thereafter, the visual scene was darkened in VR, and in RW the eyes were covered with a blindfold. The participants then had to walk to the object and indicate its position using the marker placed at the back of their hand, right next to the joint of the index finger, as a reference. Afterward, the participants were guided back to the SP without receiving any feedback about their performance. The visual scene was presented again so that the next object, placed at a different degree of rotation in the room, could be observed (see Fig. 1). Each object was approached only once.
After the RT, the participants in RW had their blindfolds removed, and in VR the visual scene became visible again. The participants then had a 2-min observation phase in which they could walk with uncovered vision through the whole scene to gain further experience with the environment. They then returned to the SP, and vision was covered again (Fig. 1). Afterward, they were guided to a new position (P1, P2, or P3) from which they had to walk blindfolded to a previously announced order of objects (e.g., first to the ball and second to the ergometer). This task is referred to as the pathway task (PT). A total of three pathways were performed, each preceded by the two-minute observation phase. For each pathway, the participants walked to two objects (Table 1).
The last task was a repetition of the first task, requiring participants to walk to objects at different degrees of rotation, in order to measure possible habituation (see Fig. 2). After completing all tasks in both environments, the 10 words from the beginning of the study were queried again. After the tasks in VR were completed, the SSQ was administered. When the experiment was finished, the participants were asked to fill out a self-created questionnaire about the strategies they used. To obtain the subjectively estimated difficulty of each task across the conditions (RW and VR), a scale from 0 points (no subjective difficulty) to 10 points (very difficult) was used.
2.2.1 Data analysis
The quality of the orientation to the objects was measured as the two-dimensional deviation (cm) between the marker placed on the back of the preferred hand, right next to the joint of the index finger, and the real position of each object. In addition, the time (s) from the starting position (or the changed perspective for the pathways) until the participant reached the object was captured. The deviations for the rotation and pathway tasks were calculated using a MATLAB script. The dataset was checked against the requirements of the statistical analysis (normal distribution, no significant outliers, and sphericity). Effect sizes were obtained using Cohen's f, with f = 0.10 defined as a small, f = 0.25 as a moderate, and f = 0.40 as a large effect (Cohen 2013, pp. 285–287). SPSS, version 25, was used to run the statistics. To assess memory skills, a Wilcoxon signed-rank test was used to reveal possible differences in remembering the listed words and the buildings. To determine whether a correlation exists between memory skills and walking accuracy (distance to the objects in cm), the Spearman rank correlation was used because of the small sample size and non-parametric data. This was also done for the analysis of correlations between RW and VR for the RT and PT. The level of significance was set to α = 0.05.
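The deviation measure described above reduces to a Euclidean distance in the ground plane. A minimal Python sketch of the calculation (the function name and the coordinates are illustrative, not the authors' MATLAB script):

```python
import numpy as np

def horizontal_deviation_cm(marker_xy, object_xy):
    """Two-dimensional (ground-plane) deviation in cm between the
    hand-marker position and the true object position."""
    marker = np.asarray(marker_xy, dtype=float)
    obj = np.asarray(object_xy, dtype=float)
    # Euclidean distance in the horizontal plane.
    return float(np.linalg.norm(marker - obj))

# Hypothetical trial: the hand stops 30 cm short and 40 cm to the side.
print(horizontal_deviation_cm([130.0, 240.0], [100.0, 200.0]))  # 50.0
```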
2.3 Rotation task (RT)
For the comparisons of the deviations (cm), time for completion (s), and subjective estimation of difficulty, which served as the dependent variables, a three-way repeated-measures ANOVA with the factors degree of rotation [0°, 45°, 180°, 225°], condition [RW, VR], and run [first, second] was conducted. If sphericity was not given, the Greenhouse-Geisser-corrected values were used for the analyses. Although the data were non-parametric in some cases, we chose to conduct the ANOVA, since previous studies have shown its robustness in terms of power under violations of normality, considering the distribution of skewness and kurtosis (Blanca et al. 2017). Dunn-Bonferroni-corrected post-hoc tests were used for the pairwise comparisons between the differently placed objects. Significant outliers were removed based on boxplot analysis (participants 12 and 17).
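The Dunn-Bonferroni procedure amounts to pairwise comparisons evaluated against a Bonferroni-corrected significance level (α divided by the number of pairs). A simplified sketch in Python with SciPy, using invented deviation values (the actual analysis was run in SPSS, which bases the comparisons on estimated marginal means):

```python
from itertools import combinations

from scipy import stats

def bonferroni_pairwise(groups, alpha=0.05):
    """Paired t-tests between all rotation conditions, evaluated
    against a Bonferroni-corrected alpha (alpha / number of pairs)."""
    pairs = list(combinations(sorted(groups), 2))
    corrected = alpha / len(pairs)
    results = {}
    for a, b in pairs:
        t, p = stats.ttest_rel(groups[a], groups[b])
        results[(a, b)] = {"t": t, "p": p, "significant": p < corrected}
    return corrected, results

# Invented deviations (cm) of five participants per rotation angle.
deviations = {
    "0°":   [12.1, 15.3, 10.8, 14.0, 11.5],
    "45°":  [22.4, 25.1, 19.9, 27.3, 24.2],
    "180°": [30.2, 28.7, 33.5, 29.9, 31.4],
    "225°": [35.8, 33.1, 38.4, 36.0, 34.2],
}
corrected_alpha, post_hoc = bonferroni_pairwise(deviations)  # alpha = 0.05 / 6
```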
2.4 Pathway task (PT)
For the pathways, participant 14 was removed from the dataset because of technical problems. Given normally distributed data and sphericity, a two-way repeated-measures ANOVA was conducted. Furthermore, Dunn-Bonferroni-corrected post-hoc tests were used for the pairwise comparisons, applying the corrected significance level to avoid alpha-error accumulation. As in the first task, the dependent variables were deviation (cm), subjective estimation of difficulty (0 points = no subjective difficulty to 10 points = very difficult), and time for completion (s), whereas the pathways (three in each condition) were treated as the independent variable.
We compared the performances of the participants between the conditions (RW and VR). In addition, an analysis within each condition was made to obtain supporting information about similar behavior in both environments independently. Comparing the first and second runs of the RT allowed us to examine possible habituation to the task demands in both conditions. As a first step, we examined whether habituation occurred from the starting condition to the following one, in order to exclude possible learning effects between conditions. After that, further analyses with the associated factors were conducted.
3 Results
The results are divided into two parts. The first part focuses on the comparison of performances within and between the conditions (RW and VR), based on the deviations (cm) and times for completion (s) in the rotation and pathway tasks. The second part describes the analysis of the memory skills, the orientation skills (BIS), and the subjective estimation of difficulty. Before the main analysis, it was examined whether differences occurred between the starting condition and the subsequent one. No significant differences were found between them for any dependent variable [deviations in cm, time for completion in s, subjective estimation of difficulty], neither for the rotation task nor for the pathway task (p > 0.05). This allowed the subsequent analysis to focus on the comparison of both conditions. In addition, two runs were made within the rotation task, and the analyses within and between conditions were conducted separately.
3.1 Rotation task
The deviation (cm), time for completion (s), and subjective estimation of difficulty (0–10) in the rotation task are presented in Tables 2, 3, and 4.
For the deviations, no significant differences were found between the conditions (RW and VR) or between the runs (first and second). The results indicate significant differences in the deviations among the differently placed objects requiring different degrees of rotation. The participants walked most accurately to the object requiring no rotation (in RW the bar and in VR the pylon); the other objects were reached equally well in both conditions. Participants needed more time to walk to objects that required larger rotations in both conditions. As with the accuracy of the deviations, most differences were found between the object requiring no rotation and the others, and this was observable in both conditions. No significant differences were found between the first and the second run for any rotation (p > 0.05). An overview of the difficulty estimations is given in Table 4. Generally, no differences were detected between RW and VR for any degree of rotation in either run (p > 0.05), and for all rotation degrees there were no significant differences between the first and the second run in either condition (p > 0.05). The objects that required less rotation were subjectively easier to reach than the others. In VR, the participants reported higher difficulty in completing the tasks; however, these differences were not significant (p > 0.05).
The comparison of the different rotations showed that the participants rated the task as more difficult when the degree of rotation increased (see Table 4). This is reflected in an increased effect size, e.g., when the first object, which required no rotation, was compared to the last object, which required a 225° rotation (Fig. 1).
3.2 Pathways
The results of the pathway task indicate no significant differences between RW and VR for the deviations, time for completion, or subjective estimation of difficulty (see Table 5). For pathway 2, the participants needed significantly more time to reach the end than for the other pathways; this effect was stronger in RW than in VR.
3.3 Simulator sickness
The results of the SSQ showed high values for nausea (11.45 ± 12.61), oculomotor symptoms (18.95 ± 14.02), and disorientation (20.88 ± 19.43). The overall average value was 19.45. Previous research (Stanney et al. 1997) classified scores below 5 as negligible, 5–10 as minimal, 10–15 as significant, 15–20 as concerning, and above 20 as indicating a problematic simulator. However, the participants neither complained about any symptoms nor criticized the VR environment. They mostly complained about the absence of feedback, which made it hard for them to estimate their accomplishments. Only one participant complained about dizziness, caused by a lost signal between the lighthouses and the HMD, which normally did not occur. Four participants reported perceiving smaller distances and object sizes in VR than in RW, but this did not cause physical discomfort.
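For reference, the SSQ subscale and total scores reported above follow the weighting scheme of Kennedy et al. (1993): each of 16 symptoms is rated 0–3, the raw symptom sums per subscale are multiplied by fixed weights, and the total score scales the combined raw sum. A sketch of the scoring (the example raw sums are invented, not participant data):

```python
# Fixed subscale weights from Kennedy et al. (1993).
WEIGHTS = {"nausea": 9.54, "oculomotor": 7.58, "disorientation": 13.92}
TOTAL_WEIGHT = 3.74

def ssq_scores(raw_sums):
    """Compute weighted SSQ scores from the raw (0-3) symptom sums
    of each subscale; raw_sums keys must match WEIGHTS."""
    scores = {scale: raw_sums[scale] * w for scale, w in WEIGHTS.items()}
    scores["total"] = sum(raw_sums.values()) * TOTAL_WEIGHT
    return scores

# Invented participant with mostly oculomotor symptoms.
print(ssq_scores({"nausea": 1, "oculomotor": 3, "disorientation": 1}))
```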
3.4 Memory
No significant differences were found between the first and second time points in the number of remembered words (p > 0.05). A highly significant correlation between the results of the short-term and long-term memory tests was found (r = 0.81, p < 0.01, N = 20). No significant correlations were found between the building-recall test and short-term memory (r = 0.16, p = 0.954, N = 20) or long-term memory (r = 0.088, p = 0.711, N = 20). The ability to memorize words therefore did not influence the remembering of marked buildings from the bird's-eye perspective. Likewise, no significant correlations were found between memorizing ability (neither for the words nor for the remembered buildings) and the accuracy of reaching the objects, neither for the rotations nor for the pathways (p > 0.05).
4 Discussion
The goal of this study was to compare the ability to orient toward differently placed objects in a virtual and a real environment. To check whether the participants differed in accuracy (cm), they were equipped with markers to trace their positions in two-dimensional space. The sport-specific objects were placed in positions that required different degrees of rotation to reach them blindfolded. In addition, the starting position was varied, and the participants then had to walk blindfolded to a different order of objects. The two-dimensional deviation was captured and calculated for the degrees of rotation and for each pathway the participants had to walk. The results are divided into two main parts. The first part concentrates on the ability to rotate in RW and VR (rotation task). The second part shows whether the participants were able to memorize each object's location in each condition by walking to two objects as accurately as possible in a specific order (pathway task). In both tasks, the two-dimensional deviations (cm), the time for completion, and the subjective estimation of difficulty were treated as dependent variables and were defined as parameters revealing the quality of spatial orientation skills in both conditions. Furthermore, we used the SSQ to test whether the VR simulation affected the participants' state of mind, and we examined possible correlations between memory and orientation skills.
Although doubts about using the HTC Vive for scientific purposes still exist (Niehorster et al. 2017), the results showed no significant differences between RW and VR in the two-dimensional deviations for the objects that required different degrees of rotation (0°, 45°, 180°, and 225°). A closer look revealed differences between object 1 (requiring no rotation, 0°) and the other placed objects (see Table 2). The extent of rotation had predominantly no impact on the ability to rotate in either condition (see Tables 2, 3, and 4). The same quality of walking across different degrees of rotation in both conditions supports the view that the perception of the environments worked similarly, which is essential for virtual walking (Cirio et al. 2013).
Nevertheless, the ability to orient was only measured by walking toward the objects and by measuring the two-dimensional distance (in cm). To reveal more information about the ability to rotate between objects, further analyses of the degrees of rotation should be conducted, such as of the locomotion trajectories between two oriented points in space (Cirio et al. 2013). Although higher performance could be expected from active exploration in the wayfinding task (Cao et al. 2019), the statistics showed no significant differences between the first and the second run; no habituation occurred after remaining longer in VR or in RW. Although the participants found the second run easier, no significant differences were found in accuracy (deviations in cm) or pace (time in seconds).
The study did not represent a fully realistic setting, since visual feedback is normally always present while exploring new environments. The locations of objects are represented in the egocentric (Thompson and Henriques 2011) and the allocentric reference system (Schütz et al. 2015), and both systems are necessary when humans perform memory-guided reaching movements (Byrne and Crawford 2010). The current study involved both, since the participants had to observe and simultaneously extract information from the starting position as well as explore the scenes by walking through them. It is therefore difficult to differentiate between the two reference systems, especially as other studies showed that allocentric information is used in 3D VR to reach for memorized objects (Klinghammer et al. 2016) and in perceptual tasks (Murgia and Sharkey 2019). We refer to Klinghammer et al. (2016), who gave a good overview of the role of each reference system.
Examining the pathways also revealed no significant differences between RW and VR (see Table 5). Changing the starting position, and therefore the perspective, led to no significant differences between the conditions in the two-dimensional deviations. Except for one case, no differences in the time needed to complete each pathway were found; the only difference occurred within the RW condition. The deviations in the rotation task were smaller than those in the pathways, and the participants stated that they had fewer problems imagining the position of each object from the egocentric perspective. Since no differences in the deviations could be found, it can be assumed that the basic locomotion tasks were performed in a stereotyped manner, meaning that the participants followed similar trajectories when walking from object to object (Hicheur et al. 2007).
The subjective estimation of difficulty and the time for completion did not deliver surprising results: larger rotations were rated as more difficult and took longer to complete in both conditions. In contrast to previous studies, however, tasks in VR were not completed more slowly than in RW (Pastel et al. 2020a, b, c; Read and Saleem 2017). The participants verbally stated that they were less accurate or needed more time in VR, but these differences were not significant, neither across the different degrees of rotation (Table 4) nor across the different pathways (Table 5) between RW and VR. This is also inconsistent with other studies, in which tasks completed in VR were rated as more difficult (Pastel et al. 2020a, b, c). A reason could be the simple task demands of the present study, since no complex movements were needed to complete them successfully. The results showed that the bar and the pylon were rated as the easiest to reach, whereas the ergometer and the ball were rated as more difficult.
The task demands consisted of localizing the objects with the marker placed at the back of the hand, right next to the joint of the index finger, while no vision was provided. This presupposes proprioceptive knowledge in order to specify an accurate object position. During the observation of the scene between the pathways, no visualization of the participants' own bodies was provided; previous studies have shown that this factor can have a negative impact on performance (Pastel et al. 2020a, b, c). We ensured that there was no loss of tracking during the observation of the virtual scene due to large shifts in the offset to the physical ground plane (Niehorster et al. 2017). The results of this study showed, however, that reaching an object (grasping the object and associating it with a specific position) could be completed without any restrictions compared to the real condition, even when no whole-body visualization was provided.
Overall, the study showed that VR is a useful tool for analyzing spatial orientation and that the visual input received from the HMD worked equivalently to that of the real world, which is in line with previous studies (Kimura et al. 2017; Pastel et al. 2020a, b, c). Since navigational deficits have been found in cognitive aging and neurodegenerative diseases (Cushman et al. 2008; Laczó et al. 2018), further investigations with seniors should be conducted to compare their ability to orient precisely to objects in a virtual scene.
5 Limitations
The transfer of the current study to realistic scenarios is limited due to its laboratory setting and standardized conduct. In sports, for example, accurate movements need to be executed under time pressure and in more complex scenarios including teammates, opponents, field restrictions, and interacting objects (ball, racket, etc.). In addition, the current task demands guided the participants to focus on only one feature, such as a single static object in the rotation task, and to proceed stepwise through the pathways, which is also not in line with realistic sports scenarios. During the observation phases, the participants' own body was not visualized, which could have affected performance. To form a valid conclusion, more people should be tested to increase the statistical power and to substantiate the equivalence of RW and VR in terms of spatial skills. Furthermore, additional exclusion criteria for the selection of participants should be considered, such as experience in VR, gender distribution, or the time of testing.
6 Conclusion
The results of the current study support the similarity of the ability to reach objects in VR compared to the real environment. Nevertheless, the subjective impression of the virtual environment seems to differ due to graphical limitations and the restricted field of view (110°). Regarding the use of VR in sports, more sport-related tasks should be implemented and completed by participants to verify this tool as a valid and reliable method.
Availability of data and material
Yes
Code availability
Yes
References
Battaglia-Mayer A, Caminiti R, Lacquaniti F, Zago M (2003) Multiple levels of representation of reaching in the parieto-frontal network. Cereb Cortex 13(10):1009–1022. https://doi.org/10.1093/cercor/13.10.1009
Blanca MJ, Alarcón R, Arnau J, Bono R, Bendayan R (2017) Non-normal data: Is ANOVA still a valid option? Psicothema 29(4):552–557. https://doi.org/10.7334/psicothema2016.383
Brewster S, Fitzpatrick G, Cox A, Kostakos V (eds) (2019) Proceedings of the 2019 CHI conference on human factors in computing systems (CHI '19). ACM Press
Bruder G, Interrante V, Phillips L, Steinicke F (2012) Redirecting walking and driving for natural navigation in immersive virtual environments. IEEE Trans Visual Comput Graph 18(4):538–545. https://doi.org/10.1109/TVCG.2012.55
Byrne PA, Crawford JD (2010) Cue reliability and a landmark stability heuristic determine relative weighting between egocentric and allocentric visual information in memory-guided reach. J Neurophysiol 103(6):3054–3069. https://doi.org/10.1152/jn.01008.2009
Cao L, Lin J, Li N (2019) A virtual reality based study of indoor fire evacuation after active or passive spatial exploration. Comput Hum Behav 90:37–45. https://doi.org/10.1016/j.chb.2018.08.041
Carbonell-Carrera C, Saorin JL (2018) Virtual learning environments to enhance spatial orientation. EURASIA J Math Sci Technol Educ. https://doi.org/10.12973/ejmste/79171
Christensen JV, Mathiesen M, Poulsen JH, Ustrup EE, Kraus M (2018) Player experience in a VR and non-VR multiplayer game. In: Richir S (ed) Proceedings of the virtual reality international conference (VRIC '18). ACM Press, pp 1–4. https://doi.org/10.1145/3234253.3234297
Cirio G, Olivier A-H, Marchal M, Pettré J (2013) Kinematic evaluation of virtual walking trajectories. IEEE Trans Visual Comput Graph 19(4):671–680. https://doi.org/10.1109/TVCG.2013.34
Cohen J (2013) Statistical power analysis for the behavioral sciences, 2nd edn. Taylor and Francis
Coughlan G, Laczó J, Hort J, Minihane A-M, Hornberger M (2018) Spatial navigation deficits—overlooked cognitive marker for preclinical Alzheimer disease? Nat Rev Neurol 14(8):496–506. https://doi.org/10.1038/s41582-018-0031-x
Cushman LA, Stein K, Duffy CJ (2008) Detecting navigational deficits in cognitive aging and Alzheimer disease using virtual reality. Neurology 71(12):888–895. https://doi.org/10.1212/01.wnl.0000326262.67613.fe
Diersch N, Wolbers T (2019) The potential of virtual reality for spatial navigation research across the adult lifespan. J Exp Biol. https://doi.org/10.1242/jeb.187252
Flanagin VL, Fisher P, Olcay B, Kohlbecher S, Brandt T (2019) A bedside application-based assessment of spatial orientation and memory: approaches and lessons learned. J Neurol 266(Suppl 1):126–138. https://doi.org/10.1007/s00415-019-09409-7
Fleishman JJ, Dusek ER (1971) Reliability and learning factors associated with cognitive tests. Psychol Rep 29(2):523–530. https://doi.org/10.2466/pr0.1971.29.2.523
Ghinea M, Frunză D, Chardonnet J-R, Merienne F, Kemeny A (2018) Perception of absolute distances within different visualization systems: HMD and CAVE. In: de Paolis LT, Bourdot P (eds) Augmented reality, virtual reality, and computer graphics. Lecture notes in computer science, vol 10850. Springer, pp 148–161. https://doi.org/10.1007/978-3-319-95270-3_10
Harris K, Reid D (2005) The influence of virtual reality play on children’s motivation. Can J Occup Therapy Revue Can D’ergotherapie 72(1):21–29. https://doi.org/10.1177/000841740507200107
Hicheur H, Pham Q-C, Arechavaleta G, Laumond J-P, Berthoz A (2007) The formation of trajectories during goal-oriented locomotion in humans. I. A stereotyped behaviour. Eur J Neurosci 26(8):2376–2390. https://doi.org/10.1111/j.1460-9568.2007.05836.x
Hirt C, Zank M, Kunz A (2018) Geometry extraction for ad hoc redirected walking using a SLAM device. In: de Paolis LT, Bourdot P (eds) Augmented reality, virtual reality, and computer graphics. Lecture notes in computer science, vol 10850. Springer, pp 35–53. https://doi.org/10.1007/978-3-319-95270-3_3
Ishikawa T (2019) Satellite navigation and geospatial awareness: long-term effects of using navigation tools on wayfinding and spatial orientation. Prof Geogr 71(2):197–209. https://doi.org/10.1080/00330124.2018.1479970
Kelly JW, Cherep LA, Siegel ZD (2017) Perceived space in the HTC vive. ACM Trans Appl Percept 15(1):1–16. https://doi.org/10.1145/3106155
Kennedy RS, Lane NE, Berbaum KS, Lilienthal MG (1993) Simulator sickness questionnaire: an enhanced method for quantifying simulator sickness. Int J Aviat Psychol 3(3):203–220. https://doi.org/10.1207/s15327108ijap0303_3
Kimura K, Reichert JF, Olson A, Pouya OR, Wang X, Moussavi Z, Kelly DM (2017) Orientation in virtual reality does not fully measure up to the real-world. Sci Rep 7(1):18109. https://doi.org/10.1038/s41598-017-18289-8
Kitchin RM (1994) Cognitive maps: what are they and why study them? J Environ Psychol 14(1):1–19. https://doi.org/10.1016/S0272-4944(05)80194-X
Klatzky RL (1998) Allocentric and egocentric spatial representations: definitions, distinctions, and interconnections. In: Freksa C, Habel C, Wender KF (eds) Spatial cognition. Lecture notes in computer science, vol 1404. Springer, Berlin, pp 1–17. https://doi.org/10.1007/3-540-69342-4_1
Klinghammer M, Schütz I, Blohm G, Fiehler K (2016) Allocentric information is used for memory-guided reaching in depth: a virtual reality study. Vis Res 129:13–24. https://doi.org/10.1016/j.visres.2016.10.004
Laczó J, Parizkova M, Moffat SD (2018) Spatial navigation, aging and Alzheimer’s disease. Aging 10(11):3050–3051. https://doi.org/10.18632/aging.101634
Lehnung M, Leplow B, Friege L, Herzog A, Ferstl R, Mehdorn M (1998) Development of spatial memory and spatial orientation in preschoolers and primary school children. Br J Psychol 89(3):463–480. https://doi.org/10.1111/j.2044-8295.1998.tb02697.x
León I, Tascón L, Cimadevilla JM (2016) Age and gender-related differences in a spatial memory task in humans. Behav Brain Res 306:8–12. https://doi.org/10.1016/j.bbr.2016.03.008
Mondellini M, Arlati S, Greci L, Ferrigno G, Sacco M (2018) Sense of presence and cybersickness while cycling in virtual environments: their contribution to subjective experience. In: de Paolis LT, Bourdot P (eds) Augmented reality, virtual reality, and computer graphics. Lecture notes in computer science, vol 10850. Springer, pp 3–20. https://doi.org/10.1007/978-3-319-95270-3_1
Murgia A, Sharkey PM (2019) Estimation of distances in virtual environments using size constancy. Int J Virtual Reality 8(1):67–74. https://doi.org/10.20870/IJVR.2009.8.1.2714
Niehorster DC, Li L, Lappe M (2017) The accuracy and precision of position and orientation tracking in the HTC vive virtual reality system for scientific research. i-Perception 8(3):2041669517708205. https://doi.org/10.1177/2041669517708205
Palieri M, Guaragnella C, Attolico G (2018) Omero 2.0. In: de Paolis LT, Bourdot P (eds) Augmented reality, virtual reality, and computer graphics. Lecture notes in computer science, vol 10850. Springer, pp 21–34. https://doi.org/10.1007/978-3-319-95270-3_2
Pastel S, Chen CH, Bürger D, Naujoks M, Martin LF, Petri K, Witte K (2020a) Spatial orientation in virtual environment compared to real-world. J Mot Behav. https://doi.org/10.1080/00222895.2020.1843390
Pastel S, Chen C-H, Martin L, Naujoks M, Petri K, Witte K (2020b) Comparison of gaze accuracy and precision in real-world and virtual reality. Virtual Reality. https://doi.org/10.1007/s10055-020-00449-3
Pastel S, Chen C-H, Petri K, Witte K (2020c) Effects of body visualization on performance in head-mounted display virtual reality. PLoS ONE 15(9):e0239226. https://doi.org/10.1371/journal.pone.0239226
Petri K, Bandow N, Witte K (2018) Using several types of virtual characters in sports—a literature survey. Int J Comput Sci Sport 17(1):1–48. https://doi.org/10.2478/ijcss-2018-0001
Petri K, Emmermacher P, Danneberg M, Masik S, Eckardt F, Weichelt S, Bandow N, Witte K (2019) Training using virtual reality improves response behavior in karate kumite. Sports Eng. https://doi.org/10.1007/s12283-019-0299-0
Powers MB, Emmelkamp PMG (2008) Virtual reality exposure therapy for anxiety disorders: a meta-analysis. J Anxiety Disord 22(3):561–569. https://doi.org/10.1016/j.janxdis.2007.04.006
Read JM, Saleem JJ (2017) Task performance and situation awareness with a virtual reality head-mounted display. Proc Hum Factors Ergon Soc Annu Meet 61(1):2105–2109. https://doi.org/10.1177/1541931213602008
Sattar MU, Palaniappan S, Lokman A, Hassan A, Shah N, Riaz Z (2019) Effects of virtual reality training on medical students’ learning motivation and competency. Pak J Med Sci 35(3):852–857. https://doi.org/10.12669/pjms.35.3.44
Schütz I, Henriques DYP, Fiehler K (2015) No effect of delay on the spatial representation of serial reach targets. Exp Brain Res 233(4):1225–1235. https://doi.org/10.1007/s00221-015-4197-9
Souman JL, Giordano PR, Schwaiger M, Frissen I, Thümmel T, Ulbrich H, de Luca A, Bülthoff HH, Ernst MO (2011) CyberWalk. ACM Trans Appl Percept 8(4):1–22. https://doi.org/10.1145/2043603.2043607
Stanney KM, Kennedy RS, Drexler JM (1997) Cybersickness is not simulator sickness. Proc Hum Factors Ergon Soc Annu Meet 41(2):1138–1142. https://doi.org/10.1177/107118139704100292
Thompson AA, Henriques DYP (2011) The coding and updating of visuospatial memory for goal-directed reaching and pointing. Vis Res 51(8):819–826. https://doi.org/10.1016/j.visres.2011.01.006
Tregillus S, Al Zayer M, Folmer E (2017) Handsfree omnidirectional VR navigation using head tilt. In: Mark G, Fussell S, Lampe C, Schraefel MC, Hourcade JP, Appert C, Wigdor D (eds) Proceedings of the 2017 CHI conference extended abstracts on human factors in computing systems (CHI EA '17). ACM Press, pp 4063–4068. https://doi.org/10.1145/3025453.3025521
Walch M, Frommel J, Rogers K, Schüssel F, Hock P, Dobbelstein D, Weber M (2017) Evaluating VR driving simulation from a player experience perspective. In: Mark G, Fussell S, Lampe C, Schraefel MC, Hourcade JP, Appert C, Wigdor D (eds) Proceedings of the 2017 CHI conference extended abstracts on human factors in computing systems (CHI EA '17). ACM Press, pp 2982–2989. https://doi.org/10.1145/3027063.3053202
Witmer BG, Singer MJ (1998) Measuring presence in virtual environments: a presence questionnaire. Presence Teleoper Virtual Environ 7:225–240
Wolbers T, Hegarty M (2010) What determines our navigational abilities? Trends Cogn Sci 14(3):138–146. https://doi.org/10.1016/j.tics.2010.01.001
Wolbers T, Wiener JM (2014) Challenges for identifying the neural mechanisms that support spatial navigation: the impact of spatial scale. Front Hum Neurosci 8:571. https://doi.org/10.3389/fnhum.2014.00571
Acknowledgements
The study was financed by the German Research Foundation (DFG) under Grant WI 1456/22-1. We thank Leon Wischerath and Mark Schmitz for helping in collecting the data and conducting the experiment.
Funding
Open Access funding enabled and organized by Projekt DEAL. The study was financed by the German Research Foundation (DFG) under Grant WI 1456/22-1.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Ethics approval
The approval of the Ethics Committee of the Otto-von-Guericke University at the Medical Faculty and University Hospital Magdeburg was obtained under the number 132/16.
Consent to participate
Available.
Consent for publication
All authors agreed to the publication process.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Pastel, S., Bürger, D., Chen, C.H. et al. Comparison of spatial orientation skill between real and virtual environment. Virtual Reality 26, 91–104 (2022). https://doi.org/10.1007/s10055-021-00539-w