Abstract
A head-mounted display cannot cover a visual field as wide as that of natural viewing (the out-of-view problem). To enhance visual cognition of an immersive environment, previous studies have developed various guidance designs that visualize the location or direction of items presented in the users’ surroundings. However, two issues regarding the guidance effects remain unresolved: How do the guidance effects differ with each guided direction? How much cognitive load does the guidance require? To investigate these issues, we performed a visual search task in an immersive environment and measured the time to find a target and the time spent recognizing a guidance design. In this task, participants searched for a target presented on a head-mounted display and reported the target color while using a guidance design. The guidance designs (a moving window, 3D arrow, radiation, spherical gradation, and 3D radar) and target directions were manipulated. The search times showed an interaction effect between guidance designs and guided directions; e.g., the 3D arrow and radar shortened the search time for targets presented behind users. The recognition times showed that participants required short times to recognize the details of the moving window and radiation but long times for the 3D arrow, spherical gradation, and 3D radar. These results suggest that the moving window and radiation are effective with respect to cognitive load, whereas the 3D arrow and radar are effective for guiding users’ attention to necessary items presented out of view.
1 Introduction
Viewing 3D content using a head-mounted display (HMD) is known to cause the out-of-view problem (Gruenefeld et al. 2017a), which impairs users’ spatial cognition of surrounding objects. For 2D content on a flat display, users can easily view the necessary information because almost all events are presented within the field of view. For 3D content, however, visual events (e.g., computer graphics and real-life movies recorded by a 360-degree camera) are presented omnidirectionally (e.g., to the left, right, and behind users). In this case, users can have trouble recognizing necessary information. Given that the effective visual angle of an HMD is smaller than the natural viewing angle (e.g., 110 degrees in the HTC VIVE Pro vs. 180 degrees in the natural field of view), the out-of-view problem is a key obstacle to developing accessible virtual reality (VR) content.
To resolve the out-of-view problem, previous works have developed various guidance designs that visualize the location or direction of items presented in the users’ surroundings. One guidance design is an extension of the visual field, in which target items are transformed by projection into augmented items in the field of view. Various visual extension designs exist, such as the 3D radar (Gruenefeld et al. 2019), EyeSee360 (Gruenefeld et al. 2017b), fisheye lens (Orlosky et al. 2014), mirror ball (Bork et al. 2018), panorama vision (Ardouin et al. 2012), and spider vision (Fan et al. 2014). For instance, the 3D radar transforms target items surrounding users into a small map and superimposes it on the field of view. The other technique is attentional guidance, in which users’ attention is guided to target items by augmented items such as FlyingARrow (Gruenefeld et al. 2018b), 3D Halo/Wedge (Gruenefeld et al. 2018a), and SWAVE (Renner and Pfeiffer 2017). For instance, FlyingARrow appears in the field of view and moves to the location of the target item. Attentional guidance is based on the cognitive mechanism of automatic attentional capture (Jonides and Yantis 1988; Pratt et al. 2010).
These guidance designs generally enhance the spatial cognition of users’ surroundings. The guidance effect has been evaluated by the time required to detect target items in a search task (search time: Bork et al. 2018; Gruenefeld et al. 2018a). On this point, the 3D radar may enhance the accuracy of target localization because it requires users to switch their visual perspective between the ego-centric and bird’s-eye views to locate items in a 360-degree environment; this switch has been reported to enhance the spatial cognition of users’ surroundings (Gorisse et al. 2017). Another important criterion for evaluating the guidance effect is cognitive load, i.e., the effortfulness of the processes involved in recognizing a guidance design (e.g., reading images and locating target items). A measure of cognitive load is the time required to recognize the details of the guidance design (recognition time: Chen et al. 2018), because cognitive load delays human responses (Levy and Pashler 2001). Given this, the 3D radar may delay the localization of target items in a 360-degree environment because the perspective switch increases the cognitive load on users (Friedman et al. 2008). Users are frequently required to respond to targets anywhere in a 360-degree environment (e.g., detecting enemies in a VR game), and in such situations, guidance designs that produce a small cognitive load are preferred.
However, two issues remain unresolved: (a) how do the guidance effects differ with each guided direction? (b) how much cognitive load does the guidance require? Regarding the first issue, no study has evaluated the guidance effect across omnidirectional surroundings, although this is important because human attention is biased among spatial directions. For example, humans tend to pay attention to the left side (the pseudo-neglect phenomenon: Jewell and McCourt 2000; Zago et al. 2017). Moreover, it is more difficult to pay attention to areas slightly above the head and behind users (Harada and Ohyama 2019). Regarding the second issue, few studies have quantitatively measured the cognitive load of guidance in a 360-degree environment. In particular, it remains unclear how guidance designs delay the localization of target items. Both issues bear on the design of accessible VR content. For example, VR games require users to detect important items quickly (e.g., bullets and enemies in a battle scene) and to localize information accurately (e.g., a destination and facilities in a moving scene). In the former case, users require guidance that produces a small cognitive load, whereas in the latter case, they need guidance that is accurate for each direction.
This study aimed to evaluate the effect of guidance designs on spatial cognition in a 360-degree environment with respect to the search time for each direction and the recognition time. To examine these issues, we conducted a visual search task, which is frequently used in cognitive science to investigate attentional mechanisms from behavioral (Finlayson and Grove 2015; Huang and Pashler 2005) and brain perspectives (Bichot et al. 2005; Leonards et al. 2000). In this task, participants were instructed to utilize a guidance design to search for a target presented on an HMD. The guidance design and the direction of the target were manipulated to evaluate the guidance effects for each direction. We mainly examined two points. The first was how the search times for each target direction differed between guidance designs; the search times were also used to create criteria maps in which the guidance effects for each direction were visualized as heat maps. The second was how the times required to recognize the guidance designs (i.e., cognitive loads) differed between designs.
2 Experimental methods
2.1 Participants
Thirty students from the University of Tsukuba (15 men and 15 women) aged 19–24 (M = 21.70, SD = 1.51) participated in the experiment. All participants reported normal or corrected-to-normal visual acuity and were naïve as to the purpose of the experiment. A post hoc power analysis with G*Power showed that the power was 0.754, which is comparable to the criterion of 0.80 (Cohen 1992).
2.2 Materials
Our systems were mostly based on Harada and Ohyama (2019, 2020) (Fig. 1). Virtual images were presented on an HMD equipped with an eye-tracking system (Tobii Pro VR Integration on HTC VIVE) by a laptop personal computer (PC) (DELL Alienware R4). The GPU was GeForce GTX1080 (NVIDIA). Participants’ responses were received by two controllers (HTC VIVE). Unity (2018.4.13f1) and SteamVR (version 1.10.32) were employed to control the presentation of the visual images and record data. The HMD and controllers were tracked by two sensors (HTC Base station 1.0). The cable connecting the PC and HMD was hung on the ceiling.
The target was a white or black “ + ” (2.86° × 2.86° of visual angle), with its direction defined as a combination of latitude and longitude. The spatial interval between possible target directions was 45°. The distractor was a white or black “T” (2.86° × 1.43° of visual angle) rotated 0°, 90°, 180°, or 270°. The spatial interval between distractors was 22.5°, except at latitudes 67.5° and -67.5° (where the longitudinal interval was 45°). No distractor was presented in the direction of the target or the guidance design. The directions of the target and distractors are shown in Fig. 2.
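Directions defined as latitude/longitude pairs must be converted to positions around the participant in order to place stimuli in the scene. The paper does not report its conversion, but a standard spherical-to-Cartesian mapping would look like the following sketch (assuming a Unity-style y-up, z-forward convention; the function and variable names are ours, not the authors’):

```python
import math

def direction_to_vector(latitude_deg, longitude_deg, radius=1.0):
    """Convert a (latitude, longitude) target direction into a 3D point on a
    sphere centered on the participant. Latitude 0 / longitude 0 corresponds
    to straight ahead; positive latitude is upward. Illustrative only: the
    paper does not specify its exact coordinate convention."""
    lat = math.radians(latitude_deg)
    lon = math.radians(longitude_deg)
    x = radius * math.cos(lat) * math.sin(lon)  # rightward
    y = radius * math.sin(lat)                  # upward
    z = radius * math.cos(lat) * math.cos(lon)  # forward
    return (x, y, z)
```

For example, a target at latitude 0° and longitude 0° lands directly in front of the participant, and latitude 90° lands directly overhead.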
Before the evaluations, we classified existing visual guidance designs into three categories. The first category was the moving design, in which a salient item appears in the user’s visual field and moves toward the direction of the target (e.g., FlyingARrow). This design relies on the attentional capture mechanism (Jonides and Yantis 1988; Pratt et al. 2010), in which attention is automatically captured by exogenous stimuli. The second category was the orientation design, in which the location of the target is shown by a decentering item (e.g., halo, wedge, and SWAVE) or a pointing item (e.g., a pointing arrow). This design relies on attention-context coordination, in which attention is allocated based on contextual cues such as the direction of an arrow (Ristic and Kingstone 2012) or another person’s gaze (Friesen et al. 2004). The third category was the visual extension, discussed in the introduction.
Five guidance designs were evaluated (Fig. 3). From the moving design category, the moving window was selected: a red circle that moved from the central position (UD0 × LR0) toward the target direction. The 3D arrow, radiation, and spherical gradation were selected from the orientation design category. The 3D arrow was presented at the central position and pointed straight toward the target direction. The radiation comprised eight lines that converged on the target direction. The spherical gradation was a sphere surrounding the participant; its color graded from black to white, and the whitest area indicated the target direction. From the visual extension category, the 3D radar was selected. This comprised a transparent sphere, a small black dot, and a small red dot: the sphere was a schematic field covering 360°, the black dot represented the participant’s position, and the red dot represented the target direction.
2.3 Procedure
For each participant, the experiment (250 trials) was conducted in a quiet room within approximately 90 min, including rest periods. After informed consent was obtained, participants received instructions for the experiment and were asked to wear the HMD and hold a controller in each hand.
The sequence of an experimental trial was as follows (Fig. 4). After the start button was pressed, a fixation cross was presented for 500 ms. Subsequently, the guidance design, the target, and several distractors were presented simultaneously. The presentation time of the guidance design was restricted to 500 ms. The target and distractors remained until the response button was pressed or 25 s had passed, and the target direction was randomly selected from one of the 25 directions. Each participant was instructed to search for the target and report its color as accurately and quickly as possible by pressing the button.
We divided the experimental session into five blocks, and the total number of trials was 250 [visual guidance designs (5) × target directions (25) × repetitions (2)]. The guidance design was manipulated between different blocks, and the target direction was manipulated within one block. Additionally, the order of guidance design was counterbalanced between participants.
2.4 Data analysis
Search times and recognition times were measured as guidance effects. The search time was the duration from the onset of a guidance design to target detection (i.e., the button press). The recognition time was the duration from the onset of a guidance design to the start of the search (when the gaze moved out of a 20°-diameter circle around the fixation cross). The search time includes all eye movements during the trial, whereas the recognition time includes only those before the target search began; the search time therefore comprises the recognition time plus the remaining search duration. The search times were converted into heat maps to visualize the guidance effect for each direction, using the Kriging method (Yang et al. 2004) with Surfer (Golden Software).
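The recognition-time criterion above (gaze leaving a 20°-diameter circle around the fixation cross) can be sketched as a small function over gaze samples. This is our own illustration of the stated definition, with hypothetical variable names, not the authors’ analysis code:

```python
import math

def recognition_time(gaze_samples, radius_deg=10.0):
    """Return the time (ms) from guidance onset until the gaze first leaves
    a circle of the given radius (half of the paper's 20-degree diameter)
    around the fixation cross. `gaze_samples` is a list of
    (time_ms, azimuth_deg, elevation_deg) tuples relative to the cross.
    Returns None if the gaze never leaves the circle (no search started)."""
    for t, az, el in gaze_samples:
        if math.hypot(az, el) > radius_deg:
            return t
    return None
```

The search time would then simply be the button-press timestamp for the same trial, so recognition time is always a lower bound on search time.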
As another measure of guidance effects, eye movements were obtained from data combining the left and right eyes and were analyzed in terms of fixations and saccades. Because eye tracking under immersive conditions has several problems (Clay et al. 2019), such as the vergence-accommodation conflict (Hoffman et al. 2008) and calibration drift (Tripathi and Guenter 2016), it may be difficult to apply results obtained in such a situation to daily situations. Although further studies of eye-tracking methods in immersive VR are needed in general, we believe the impact of these problems on our research is limited: our purpose was to evaluate guidance designs used in VR environments, and this evaluation was conducted under identical conditions, so any effect of the eye-tracking problems would occur equally under all guidance conditions. A fixation was defined as a gaze that dwelled for a minimum duration of 150 ms (as in Sitzmann et al. 2018) within a circular area spanning 2° of visual angle in diameter. The saccade threshold was defined as the median velocity of each trial, and saccades were detected with the Microsaccade Toolbox (Engbert et al. 2015), whose algorithm can detect not only microsaccades but also saccades (Mitsudo and Nakamizo 2010).
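The fixation criterion above (a dwell of at least 150 ms within a 2°-diameter area) can be implemented with a dispersion-threshold detector. The paper does not publish its algorithm, so the following is only an I-DT-style sketch under those two parameters, with our own function and variable names:

```python
def detect_fixations(samples, max_dispersion_deg=2.0, min_duration_ms=150):
    """Dispersion-threshold fixation detection (I-DT style) using the paper's
    parameters: gaze must stay within a 2-degree-diameter area for at least
    150 ms. `samples` is a list of (time_ms, x_deg, y_deg) gaze samples.
    Returns (start_ms, end_ms, n_samples) tuples for each fixation."""
    fixations = []
    i, n = 0, len(samples)
    while i < n:
        j = i
        xs, ys = [samples[i][1]], [samples[i][2]]
        while j + 1 < n:
            nx, ny = samples[j + 1][1], samples[j + 1][2]
            # dispersion = x-range + y-range; grow the window while it fits
            if (max(xs + [nx]) - min(xs + [nx])) + \
               (max(ys + [ny]) - min(ys + [ny])) > max_dispersion_deg:
                break
            xs.append(nx)
            ys.append(ny)
            j += 1
        if samples[j][0] - samples[i][0] >= min_duration_ms:
            fixations.append((samples[i][0], samples[j][0], j - i + 1))
            i = j + 1
        else:
            i += 1
    return fixations
```

A velocity-based saccade detector with a per-trial median threshold, as used via the Microsaccade Toolbox, would operate on the sample-to-sample velocities of the same data.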
For statistical significance tests, RStudio was used to conduct analyses of variance (ANOVAs: see Turner and Thayer 2001 for basic information). Although ANOVAs assume normality of the data, time data (e.g., search times and recognition times) tend to follow an ex-Gaussian rather than a Gaussian distribution (Dawson 1988). However, such non-normality of errors has been reported to have little influence on ANOVA results (David and Johnson 1951; Kanji 1977; Schmider et al. 2010), and ANOVAs have indeed been performed on time data in a wide range of fields (Ganel and Goodale 2003; Greene et al. 2001; Hicks et al. 2004; Joy et al. 2021). Therefore, the use of ANOVAs for time data is common in the relevant literature. Moreover, to control for errors, the ANOVAs were performed with a multilevel model (the “lmer” function in the “lmerTest” package). This model can control for variance among participants, target colors, and the order of guidance designs by entering these variables as random effects, while the effects of the experimental factors are examined by entering them as fixed effects (see Baayen et al. 2008; Bates et al. 2015 for details). In these analyses, we started with a model that included the relevant factors and their interactions as fixed effects. A random intercept-only model was used, in which participants, target color, and the order of guidance designs were entered. Multiple comparisons were performed using the “lsmeans” function in the “lsmeans” package (p values were corrected using Tukey’s method).
Because targets could not be presented at U45 × LR0 for 20 participants due to program errors, the data obtained from that location in these participants were excluded from the analysis. Moreover, data from incorrect responses, hasty responses (less than 100 ms), and outlier search times (greater than the mean + 2SD) were excluded based on previous studies (Franconeri and Simons 2003; Henderson and Macquistan 1993).
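The trial-exclusion criteria above (incorrect responses, responses under 100 ms, and search times above the mean + 2SD) can be expressed as a simple filter. This is an illustrative sketch; the field names and the order of applying the criteria are our assumptions, not taken from the paper:

```python
from statistics import mean, stdev

def exclude_trials(trials):
    """Apply the exclusion criteria to a list of trial dicts with keys
    'correct' (bool) and 'rt' (search time in ms): drop incorrect responses,
    hasty responses (< 100 ms), and outliers above mean + 2SD of the
    remaining search times."""
    valid = [t for t in trials if t['correct'] and t['rt'] >= 100]
    if len(valid) < 2:
        return valid  # too few trials to estimate a standard deviation
    m = mean(t['rt'] for t in valid)
    sd = stdev(t['rt'] for t in valid)
    return [t for t in valid if t['rt'] <= m + 2 * sd]
```

Note that the mean and SD are computed after the accuracy and hastiness filters, so an extreme outlier cannot inflate the cutoff that is meant to remove it.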
3 Results
The search times, eye movement data, and recognition times were analyzed with ANOVAs. These results are shown in Table 1, and their details are reported in the following sections.
3.1 Search time
Figure 5a illustrates the search times averaged across the 30 participants, which indicate the extent to which each design effectively guided participants to the direction of target items. To test the effects of guidance design and target direction, a two-way within-participants ANOVA was performed on the search times with fixed effects of guidance design (5) and target direction (25). The test revealed significant main effects of guidance design and target direction and a significant two-way interaction (Table 1). A multiple comparison test revealed significant differences between guidance designs. In Fig. 5b, the colors show p values; redder cells indicate faster search times for the guidance design on the left than for the design at the top. These results indicate that the moving window and radiation improved the search for targets presented in the frontal area (i.e., from L45 to R45), whereas the 3D arrow and radar improved the search for targets presented behind users (e.g., LR180). This suggests that the guidance effects change with the guided direction: the moving window and radiation are effective for guidance in the frontal direction, and the 3D arrow and radar are effective for guidance in the backward direction.
In an attempt to visualize the guidance effects for spatial directions, the search times were converted to criteria maps. Figure 6 illustrates the criteria maps in which the color becomes red, yellow, green, and blue as the search time increases. The interval between the grids was 45°, and the cross points of the latitude and longitude grids show the directions of target appearance.
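The criteria maps interpolate search times measured on the 45° grid into a continuous surface. The paper used the Kriging method in Surfer; as a simpler, self-contained stand-in, inverse-distance weighting (IDW) illustrates the same grid-to-surface step (our own sketch, not the authors’ procedure; angular wrap-around at ±180° is ignored here):

```python
import math

def idw_interpolate(known, query, power=2):
    """Inverse-distance-weighted interpolation over the latitude/longitude
    grid. `known` is a list of ((lat, lon), search_time) pairs for measured
    directions; `query` is a (lat, lon) pair to interpolate. Shown only as a
    simple stand-in for the Kriging method used in the paper."""
    num = den = 0.0
    for (lat, lon), value in known:
        d = math.hypot(lat - query[0], lon - query[1])
        if d == 0:
            return value  # query falls exactly on a measured grid point
        w = 1.0 / d ** power
        num += w * value
        den += w
    return num / den
```

Evaluating such a function over a dense grid of directions and coloring by value yields a heat map of the kind shown in Fig. 6.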
3.2 Eye movement
Figure 7 shows the number of fixations, the duration of a fixation, the number of saccades, and the length of saccades. Effective guidance would decrease the number of fixations, fixation duration, and number of saccades and extend the length of saccades, because these measures indicate the extent to which attention was captured by distractor items. To test the effect of guidance design, one-way within-participants ANOVAs were performed on the four dependent variables with a fixed effect of guidance design (5). These tests consistently revealed significant main effects of guidance design (Table 1). In particular, the moving window produced relatively few fixations and saccades and shorter fixation durations (see Fig. 7 for the significant differences and the Supplementary Materials for their statistical values). This indicates that the moving window made users ignore the distractor items more strongly than the other guidance designs, suggesting effective guidance.
3.3 Recognition time
Figure 8 illustrates the recognition times for the guidance designs averaged across the 30 participants, which indicate the amount of cognitive load caused by each design. To test the effect of guidance design, a one-way within-participants ANOVA was performed on the recognition times with a fixed effect of guidance design (5). The test showed a significant main effect (Table 1). A multiple comparison test revealed several significant differences (see Fig. 8 for details and the Supplementary Materials for their statistical values): in particular, participants required short times to recognize the details of the moving window and radiation but long times for the 3D arrow, spherical gradation, and 3D radar. This suggests small cognitive loads for the former two designs and large cognitive loads for the latter three.
4 Discussion
To evaluate the guidance effect for each direction and the associated cognitive load, the present study conducted a visual search task in a 360-degree environment. Notably, our results suggest that guidance effects involve a trade-off among directions. This study thus provides empirical evidence for effectively designing visual guidance in a 360-degree context.
4.1 Evaluation for each guidance design
The moving window and the radiation were similar with respect to their guidance effects. The search times showed that these two designs most precisely guided attention toward the front of users. Moreover, the recognition times showed that the two designs required a small cognitive load to utilize. This is consistent with previous findings showing that animated items automatically capture attention (Franconeri and Simons 2003) and that decentering designs promote the localization of targets (Baudisch and Rosenholtz 2003). However, these two designs were less effective when targets were located behind participants (i.e., LR180), suggesting that the moving window and radiation are limited by the HMD’s field of view. From this perspective, the moving window and radiation would precisely guide users to information located in front of them.
Compared with the moving window and radiation, the 3D radar was ineffective for guiding attention toward the front of participants but effective toward their back. These findings are consistent with the idea that users switch viewpoints between the ego-centric and bird’s-eye views to utilize the 3D radar, which would promote spatial cognition of target items located out of view. This idea is further supported by the recognition times, which show that a larger cognitive load was required to utilize the 3D radar. Given that switching between different perspectives is associated with cognitive abilities such as executive function (Friedman et al. 2008), the 3D radar requires a large amount of cognitive resources. Alternatively, the cognitive load may be due to the amount of information contained in the 3D radar. This design shows not only the target direction but also additional information such as the location of the user and the surrounding area. This large amount of detail would delay the recognition of the target direction because the set size of information in a visual field increases cognitive load (Palmer 1994; Wolfe 2010). Our results suggest that the 3D radar accurately guides users to necessary information irrespective of direction but delays the recognition of the guidance itself.
Interestingly, the 3D arrow struck a balance between the moving window/radiation and the 3D radar. The search times suggest that the 3D arrow was (a) more effective than the 3D radar but less effective than the moving window/radiation for guidance toward the front of participants and (b) more effective than the moving window/radiation but less effective than the 3D radar for guidance toward their back. Moreover, the recognition times for the 3D arrow were longer than for the moving window/radiation but shorter than for the 3D radar. A plausible explanation is that the 3D arrow may promote not only attentional guidance (Ristic and Kingstone 2012) but also perspective taking. If the 3D arrow points at a target located out of view, it is more difficult to recognize the target direction visually. In this case, users may take an object-based or allocentric viewpoint (Maringelli et al. 2001) to recognize the target direction. In other words, the 3D arrow may guide attention when targets are in the field of view and may lead users to take an allocentric viewpoint when targets are located out of view.
Unlike the other four designs, the spherical gradation was entirely ineffective for 360-degree guidance. The numbers of fixations and saccades suggest that the spherical gradation requires users to search a large portion of the 360-degree environment.
4.2 Practical application
The criteria maps and recognition time data suggest that the appropriate guidance design depends on the requirements and the context. In particular, the moving window and radiation are useful when quick responses are needed, and the 3D radar is useful when accurate guidance is needed. For example, users of social VR content (e.g., VR chat, Rec Room, and cluster) can freely search for other users, communicate with them, and create virtual environments. When communicating among multiple people, users must quickly and dynamically switch their attention to whichever user takes the turn in the conversation. In this way, users of VR content frequently require dynamic responses, and such switches of attentional direction can be assisted by the moving window and radiation. In contrast, users may have difficulty searching for other users and necessary items due to the out-of-view problem; this difficulty would be reduced by the visual guidance of the 3D radar. This assistance could also be applied to VR content in other fields, such as VR games, training, and remote operation. For instance, users of action games must detect the locations of enemies, weapons, and escape routes as quickly as possible; the detection of such necessary items would be assisted by the moving window and radiation with small cognitive loads. Such mixed use of different guidance designs could be applied to a wide range of fields that need visual assistance.
Additionally, our methodology could be applied to the evaluation of guidance designs in actual VR content. Recent HMDs include eye-tracking systems (e.g., the HTC VIVE Pro Eye and NeU-VR 1.0) that can record users’ eye movements during the use of VR content. These devices provide two types of eye data: (a) the time spent viewing a visual guidance design and (b) the time spent searching for the guided target. These data show the extent to which guidance is accurate and quick in a given content, which would contribute to the development of guidance designs for each VR content.
4.3 Limitations and next step
The significance of this study is that it clarifies the general effects of guidance designs on directional accuracy and cognitive load. To control for the effect of content type, we excluded the factor of context. In other words, one limitation of the present study is that it does not investigate contextual effects on the guidance effects and cognitive loads. Actual VR content involves different contexts, such as backgrounds, target items, and tasks. Given that users’ attention changes with context (Harada and Ohyama 2020), the interaction between guidance and context is important for predicting visual cognition and needs to be investigated in future studies.
Another limitation is that this study does not investigate the effects of design parameters. We selected five designs, but each guidance design can be rendered with different physical features (e.g., color, size, and viewing distance). For example, the criteria maps suggest that the spherical gradation was less accurate than the other four designs. This may be related to the complexity of its physical features: the spherical gradation is more complex than the other designs, which would impair visual cognition because the amount of information influences cognitive resources (Mackworth 1965). In other words, optimally tuning these physical parameters may enhance the guidance effects of the spherical gradation. We did not examine this issue because it was beyond the scope of this study; future work is needed to investigate it while manipulating each physical feature parameter.
Additionally, there is a limitation concerning long-term use. The effect of long-term use on attentional guidance is important for applications in professional situations. Especially for novice users, guidance designs that produce small cognitive loads (e.g., the moving window and radiation) would be useful, because large cognitive loads limit the mental capacity available to working memory processes for learning new skills (Paas et al. 2003). However, long-term use may decrease cognitive loads owing to perceptual learning (Goldstone 1998), in which the visual cognition of certain stimuli becomes automatic through repeated presentation. This suggests that, although the moving window and radiation are effective for guiding novice users, other guidance designs may also be useful in long-term use. Although the present study helps explain the effect of cognitive loads on long-term use, further studies are needed to evaluate the details of learning effects on guidance designs over long-term use.
5 Conclusion
In this study, we quantitatively evaluated the effects of visual guidance designs for each direction and their cognitive loads. We found that (a) the guidance effect varied with the combination of design and guided direction and that (b) cognitive loads were larger for the 3D radar and smaller for the moving window and radiation. We also developed criteria maps of the guidance effect for spatial directions, which provide a framework for assisting user cognition. These maps can be used in designing accessible 3D user interfaces.
References
Ardouin J, Lécuyer A, Marchal M, Riant C, Marchand E (2012) FlyVIZ: a novel display device to provide humans with 360 vision by coupling catadioptric camera with HMD. In: Proceedings of the 18th ACM symposium on virtual reality software and technology, pp 41‒44
Baayen RH, Davidson DJ, Bates DM (2008) Mixed-effects modeling with crossed random effects for subjects and items. J Mem Lang 59:390–412. https://doi.org/10.1016/j.jml.2007.12.005
Bates D, Mächler M, Bolker BM, Walker SC (2015) Fitting linear mixed-effects models using lme4. J Stat Softw 67:1–48. https://doi.org/10.18637/jss.v067.i01
Baudisch P, Rosenholtz R (2003) Halo: a technique for visualizing off-screen locations. In: Proceedings of the SIGCHI conference on human factors in computing systems, pp 481‒488
Bichot NP, Rossi AF, Desimone R (2005) Parallel and serial neural mechanisms for visual search in macaque area V4. Science 308:529–534. https://doi.org/10.1126/science.1109676
Bork F, Schnelzer C, Eck U, Navab N (2018) Towards efficient visual guidance in limited field-of-view head-mounted displays. IEEE Trans vis Comput Graph 24:2983–2992. https://doi.org/10.1109/TVCG.2018.2868584
Chen J, Wang RQ, Lin Z, Guo X (2018) Measuring the cognitive loads of construction safety sign designs during selective and sustained attention. Saf Sci 105:9–21. https://doi.org/10.1016/j.ssci.2018.01.020
Clay V, König P, König SU (2019) Eye tracking in virtual reality. J Eye Mov Res. https://doi.org/10.16910/jemr.12.1.3
Cohen J (1992) A power primer. Psychol Bull 112:155–159. https://doi.org/10.1037/0033-2909.112.1.155
David FN, Johnson NL (1951) The effect of non-normality on the power function of the F-Test in the analysis of variance. Biometrika 38:43–57. https://doi.org/10.1093/biomet/38.1-2.43
Dawson MRW (1988) Fitting the ex-Gaussian equation to reaction time distributions. Behav Res Methods Instrum Comput 20:54–57. https://doi.org/10.3758/BF03202603
Engbert R, Sinn P, Mergenthaler K, Trukenbrod H (2015) Microsaccade toolbox: potsdam mind research repository. Retrieved from http://read.psych.uni-potsdam.de/index.php?option=com_content&view=article&id=140:engbert-et-al-2015-microsaccadetoolbox-for-r&catid=26:publications&Itemid=34. Accessed 8 Jan 2020
Fan K, Huber J, Nanayakkara S, Inami M (2014) SpiderVision: extending the human field of view for augmented awareness. In: Proceedings of the 5th augmented human international conference, pp 1–8
Finlayson NJ, Grove PM (2015) Visual search is influenced by 3D spatial layout. Atten Percept Psychophys 77:2322–2330. https://doi.org/10.3758/s13414-015-0924-3
Franconeri SL, Simons DJ (2003) Moving and looming stimuli capture attention. Percept Psychophys 65:999–1010. https://doi.org/10.3758/BF03194829
Friedman D, Nessler D, Johnson R, Ritter W, Bersich M (2008) Age-related changes in executive function: an event-related potential (ERP) investigation of task-switching. Neuropsychol Dev Cogn B Aging Neuropsychol Cogn 15:95–128. https://doi.org/10.1080/13825580701533769
Friesen C, Ristic J, Kingstone A (2004) Attentional effects of counterpredictive gaze and arrow cues. J Exp Psychol Hum Percept Perform 30:319–329. https://doi.org/10.1037/0096-1523.30.2.319
Ganel T, Goodale MA (2003) Visual control of action but not perception requires analytical processing of object shape. Nature 426:664–667. https://doi.org/10.1038/nature02156
Goldstone RL (1998) Perceptual learning. Annu Rev Psychol 49:585–612. https://doi.org/10.1146/annurev.psych.49.1.585
Gorisse G, Christmann O, Amato EA, Richir S (2017) First-and third-person perspectives in immersive virtual environments: presence and performance analysis of embodied users. Front Robot AI 4:33. https://doi.org/10.3389/frobt.2017.00033
Greene JD, Sommerville RB, Nystrom LE, Darley JM, Cohen JD (2001) An fMRI investigation of emotional engagement in moral judgment. Science 293:2105–2108. https://doi.org/10.1126/science.1062872
Gruenefeld U, Ali AE, Boll S, Heuten W (2018a) Beyond halo and wedge: visualizing out-of-view objects on head-mounted virtual and augmented reality devices. In: Proceedings of the 20th international conference on human-computer interaction with mobile devices and services, pp 1–11
Gruenefeld U, Ali AE, Heuten W, Boll S (2017a) Visualizing out-of-view objects in head-mounted augmented reality. In: Proceedings of the 19th international conference on human-computer interaction with mobile devices and services, pp 1–7
Gruenefeld U, Ennenga D, Ali AE, Heuten W, Boll S (2017b) EyeSee360: designing a visualization technique for out-of-view objects in head-mounted augmented reality. In: Proceedings of the 5th symposium on spatial user interaction, pp 109–118
Gruenefeld U, Koethe I, Lange D, Weiß S, Heuten W (2019) Comparing techniques for visualizing moving out-of-view objects in head-mounted virtual reality. In: 2019 IEEE conference on virtual reality and 3D user interfaces (VR), pp 742–746
Gruenefeld U, Lange D, Hammer L, Boll S, Heuten W (2018b) FlyingARrow: pointing towards out-of-view objects on augmented reality devices. In: Proceedings of the 7th ACM international symposium on pervasive displays, pp 1‒6
Harada Y, Ohyama J (2019) Spatiotemporal characteristics of 360-degree basic attention. Sci Rep 9:16083. https://doi.org/10.1038/s41598-019-52313-3
Harada Y, Ohyama J (2020) The effect of task-irrelevant spatial contexts on 360-degree attention. PLoS ONE 15:e0237717. https://doi.org/10.1371/journal.pone.0237717
Henderson JM, Macquistan AD (1993) The spatial distribution of attention following an exogenous cue. Percept Psychophys 53:221–230. https://doi.org/10.3758/BF03211732
Hicks M, Nichols S, O’Malley C (2004) Comparing the roles of 3D representations in audio and audio-visual collaborations. Virtual Real 7:148–163. https://doi.org/10.1007/s10055-004-0126-0
Hoffman DM, Girshick AR, Akeley K, Banks MS (2008) Vergence-accommodation conflicts hinder visual performance and cause visual fatigue. J vis 8:33. https://doi.org/10.1167/8.3.33
Huang L, Pashler H (2005) Attention capacity and task difficulty in visual search. Cognition 94:B101–B111. https://doi.org/10.1016/j.cognition.2004.06.006
Jewell G, McCourt ME (2000) Pseudoneglect: a review and meta-analysis of performance factors in line bisection tasks. Neuropsychologia 38:93–110. https://doi.org/10.1016/S0028-3932(99)00045-7
Jonides J, Yantis S (1988) Uniqueness of abrupt visual onset in capturing attention. Percept Psychophys 43:346–354. https://doi.org/10.3758/bf03208805
Joy T, Ugur E, Ayhan I (2021) Trick the body trick the mind: avatar representation affects the perception of available action possibilities in virtual reality. Virtual Real. https://doi.org/10.1007/s10055-021-00511-8
Kanji GK (1977) Power aspects of analysis of variance in random effects models: a simulation study. Int J Math Educ Sci Technol 8(3):293–297. https://doi.org/10.1080/0020739770080305
Leonards U, Sunaert S, Hecke PV, Orban GA (2000) Attention mechanisms in visual search: an fMRI study. J Cogn Neurosci 12:61–75. https://doi.org/10.1162/089892900564073
Levy J, Pashler H (2001) Is dual-task slowing instruction dependent? J Exp Psychol Hum Percept Perform 27:862–869. https://doi.org/10.1037/0096-1523.27.4.862
Mackworth NH (1965) Visual noise causes tunnel vision. Psychon Sci 3:67–68
Maringelli F, McCarthy J, Steed A, Slater M, Umilta C (2001) Shifting visuo-spatial attention in a virtual three-dimensional space. Brain Res Cogn Brain Res 10:317–322. https://doi.org/10.1016/s0926-6410(00)00039-2
Mitsudo H, Nakamizo S (2010) Illusory motion produced by dichoptic stimuli during saccades. Perception 39:1591–1605. https://doi.org/10.1068/p6739
Orlosky J, Wu Q, Kiyokawa K, Takemura H, Nitschke C (2014) Fisheye vision: peripheral spatial compression for improved field of view in head mounted displays. In: Proceedings of the 2nd ACM symposium on spatial user interaction, pp 54–61
Paas F, Tuovinen JE, Tabbers H, Van Gerven PWM (2003) Cognitive load measurement as a means to advance cognitive load theory. Educ Psychol 38:63–71. https://doi.org/10.1207/S15326985EP3801_8
Palmer J (1994) Set-size effects in visual search: the effect of attention is independent of the stimulus for simple tasks. Vis Res 34:1703–1721. https://doi.org/10.1016/0042-6989(94)90128-7
Pratt J, Radulescu PV, Guo RM, Abrams RA (2010) It’s alive! animate motion captures visual attention. Psychol Sci 21:1724–1730. https://doi.org/10.1177/0956797610387440
Renner P, Pfeiffer T (2017) Attention guiding techniques using peripheral vision and eye tracking for feedback in augmented-reality-based assistance systems. In: 2017 IEEE symposium on 3D user interfaces (3DUI), pp 186–194
Ristic J, Kingstone A (2012) A new form of human spatial attention: automated symbolic orienting. Vis Cogn 20:244–264. https://doi.org/10.1080/13506285.2012.658101
Schmider E, Ziegler M, Danay E, Beyer L, Bühner M (2010) Is it really robust? Reinvestigating the robustness of ANOVA against violations of the normal distribution assumption. Methodology 6:147–151. https://doi.org/10.1027/1614-2241/a000016
Sitzmann V, Serrano A, Pavel A, Agrawala M, Gutierrez D, Masia B, Wetzstein G (2018) Saliency in VR: how do people explore virtual environments? IEEE Trans vis Comput Graph 24:1633–1642. https://doi.org/10.1109/TVCG.2018.2793599
Tripathi S, Guenter B (2017) A statistical approach to continuous self-calibrating eye gaze tracking for head-mounted virtual reality systems. In: 2017 IEEE winter conference on applications of computer vision (WACV), pp 862–870
Turner JR, Thayer JF (2001) Introduction to analysis of variance: design, analysis & interpretation. SAGE Publications, California
Wolfe JM (2010) Visual search. Curr Biol 20:346–349. https://doi.org/10.1016/j.cub.2010.02.016
Yang CS, Kao SP, Lee FB, Hung PS (2004) Twelve different interpolation methods: a case study of Surfer 8.0. In: Proceedings of the XXth ISPRS congress, pp 778–785
Zago L, Petit L, Jobard G, Hay J, Mazoyer B, Tzourio-Mazoyer N et al (2017) Pseudoneglect in line bisection judgement is associated with a modulation of right hemispheric spatial attention dominance in right-handers. Neuropsychologia 94:75–83. https://doi.org/10.1016/j.neuropsychologia.2016.11.024
Acknowledgements
We thank Drs. Naoya Todo and Ryuta Iseki for their assistance with the statistical analyses. We also thank Ryo Hanashima, Airi Takase, and Etsuko Hiraide for their participation in the pilot experiment of this study and for their helpful comments.
Funding
This work was supported by the Council for Science, Technology and Innovation, "Cross-ministerial Strategic Innovation Promotion Program (SIP), Big-data and AI-enabled Cyberspace Technologies" (funding agency: NEDO).
Author information
Authors and Affiliations
Contributions
YH developed the experimental settings and programs, conducted the experiment, analyzed the data, and wrote the first draft of this paper. JO proposed the concept of this research and managed the overall project. Both authors discussed the results and wrote the paper.
Corresponding author
Ethics declarations
Conflict of interest
There are no conflicts of interest to declare.
Ethics approval
This experiment was approved by the ethics committee of the National Institute of Advanced Industrial Science and Technology and conducted according to the principles of the Declaration of Helsinki.
Data availability
Access to the raw data is restricted for ethical reasons. For the data set supporting the conclusions of this paper, please contact the corresponding author.
Code availability
For the code, please contact the corresponding author.
Consent to participate/for publication
Written informed consent for participation and publication was obtained from all participants before the experiment.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Information
Below is the link to the electronic supplementary material.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Harada, Y., Ohyama, J. Quantitative evaluation of visual guidance effects for 360-degree directions. Virtual Reality 26, 759–770 (2022). https://doi.org/10.1007/s10055-021-00574-7