

Virtual Reality and Augmented Reality

A special issue of Multimodal Technologies and Interaction (ISSN 2414-4088).

Deadline for manuscript submissions: closed (20 December 2022) | Viewed by 73284

Special Issue Editors


Guest Editor
1. School of Information Technology and Mathematical Sciences, University of South Australia, Adelaide, SA 5000, Australia
2. Empathic Computing Laboratory, The University of Auckland, Auckland 1010, New Zealand
Interests: augmented reality; virtual reality; HCI; empathic computing; bioengineering

Guest Editor
CYENS - Centre of Excellence, Dimarchias Square 23, Nicosia 1016, Cyprus
Interests: virtual reality; augmented reality; human–machine interaction; brain–computer interfaces; serious games; procedural modeling

Guest Editor
Nottingham School of Art and Design, Nottingham Trent University, Nottingham, UK
Interests: ubiquitous computing; mobile computing; augmented reality (AR); cross/extended reality (XR); interaction design

Guest Editor
Department of Computer Science and Information Engineering, National Central University, Taoyuan, Taiwan
Interests: machine learning; computational intelligence; swarm intelligence; neural networks; fuzzy systems; optimization algorithms; pattern recognition; image processing; rehabilitation technology; bioinformatics processing; robotics; e-learning; augmented reality; human–computer interfaces and interactions

Special Issue Information

Dear Colleagues,

In recent years, VR and AR technologies have made remarkable progress. Fundamental problems such as tracking and registration have been largely solved, and applications in education, medicine, architecture, automobiles, advertising, entertainment, art, and culture that researchers could only dream of 20 years ago are in practical use today. As basic research has come to fruition, expectations for VR and AR have grown, and opportunities for advanced studies have expanded. What should remote communication look like once real-time three-dimensional reconstruction is realized and 5G high-speed communication becomes widespread? What information should be selected and presented to the user when advanced situational awareness becomes possible? What short- and long-term effects will augmented vision or body modification have on our bodies and minds? VR and AR are not just high-level computing environments; they are becoming the next generation of social infrastructure. Technologies such as artificial intelligence, human augmentation, and brain science are progressing and merging with VR and AR, becoming a driving force that lifts them to even higher levels.

This Special Issue calls for studies that open up new horizons for VR and AR. In addition to research that steadily improves on existing problems, we welcome papers that present new possibilities for VR and AR. Topics of interest include but are not limited to the following:

  • 360 video;
  • VR/AR applications;
  • Artificial intelligence/machine learning for VR/AR;
  • Brain science for VR/AR;
  • VR/AR collaboration;
  • Computer graphics for VR/AR;
  • Computer vision for VR/AR;
  • Content creation and management for VR/AR;
  • Context awareness for VR/AR;
  • Education with VR/AR;
  • Entertainment, narrative and games with VR/AR;
  • Multimodal VR/AR;
  • Display technologies for VR/AR;
  • Ethics/humanity in VR/AR;
  • Human augmentations with VR/AR;
  • Human–computer interactions in VR/AR;
  • Human factors in VR/AR;
  • Perception/presence in VR/AR;
  • Performance, cultural heritage and art in VR/AR;
  • Physiological sensing for VR/AR;
  • User experience/usability in VR/AR;
  • Virtual humans/avatars in VR/AR;
  • Visualization/visual analytics with VR/AR;
  • Wellbeing with VR/AR.

Prof. Dr. Mark Billinghurst
Prof. Dr. Fotis Liarokapis
Prof. Dr. Lars Holmquist
Prof. Dr. Mu-Chun Su
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Multimodal Technologies and Interaction is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (19 papers)


Research


16 pages, 4284 KiB  
Article
The Influence of Avatar Personalization on Emotions in VR
by Rivu Radiah, Daniel Roth, Florian Alt and Yomna Abdelrahman
Multimodal Technol. Interact. 2023, 7(4), 38; https://doi.org/10.3390/mti7040038 - 29 Mar 2023
Cited by 6 | Viewed by 5514
Abstract
In this paper, we investigate the impact of avatar personalization on perceived emotions. Avatar embodiment is a crucial aspect of collaborative and social virtual reality (VR) systems. Previous research found that avatar appearance impacts the acceptability of the virtual body and changes users’ behavior. While virtual embodiment has been extensively investigated, we know very little about how embodiment affects users’ experienced emotions. In a user study (N = 40), we applied an autobiographical recall method to evoke happiness and investigated the influence of different types of avatar embodiment (personalized same-gender, personalized opposite-gender, non-personalized same-gender, and non-personalized opposite-gender) on participants’ perceived emotion. We recorded both self-reported assessments and physiological data to observe the emotional responses elicited by the use of different avatars. We found significant differences in happiness between the personalized same-gender avatar and the personalized opposite-gender avatar. We provide empirical evidence demonstrating the influence of avatar personalization on emotions in VR. We conclude with recommendations for users and designers of virtual reality experiences.
Figures:
Figure 1: We investigate how different avatar embodiment influences user emotions in VR. We used the participant’s photograph (A) to generate four types of embodiment: (B) personalized same-gender avatar, (C) personalized opposite-gender avatar, (D) non-personalized same-gender avatar, and (E) non-personalized opposite-gender avatar.
Figure 2: Virtual room setup from different perspectives. (a) Initial room setup. (b) Avatar in game mode. (c) Participant in front of mirror. (d) Participant view of the SAM scale.
Figure 3: Box plot of SAM values (valence, arousal, and dominance). The median is shown in red.
14 pages, 2023 KiB  
Article
Higher Education in the Pacific Alliance: Descriptive and Exploratory Analysis of the Didactic Potential of Virtual Reality
by Álvaro Antón-Sancho, Pablo Fernández-Arias and Diego Vergara
Multimodal Technol. Interact. 2023, 7(3), 30; https://doi.org/10.3390/mti7030030 - 15 Mar 2023
Cited by 3 | Viewed by 1743
Abstract
In this paper, we conducted descriptive quantitative research on the assessment of virtual reality (VR) technologies in higher education in the countries of the Pacific Alliance (PA). Specifically, differences between PA countries in these perceptions were identified, and the behavior of the gender and knowledge-area gaps in each of them was analyzed. A validated quantitative questionnaire was used for this purpose. We found that PA professors express high ratings of VR but point out strong disadvantages regarding its use in lectures; in addition, they report a low self-concept of their digital competence. Notable differences were identified among the PA countries: Mexico is the country with the most marked gender gaps, while Chile has strong gaps by area of knowledge. We give some recommendations toward favoring a homogeneous process of integrating VR into higher education in the PA countries.
Figures:
Figure 1: Distribution of participants by country of origin and gender.
Figure 2: Distribution of participants by country of origin and knowledge area.
Figure 3: Average responses (out of 5) differentiated by country and gender.
Figure 4: Average responses (out of 5) differentiated by country and knowledge area.
Figure 5: Ranking of the PA countries according to the average scores given in each family of questions in the questionnaire.
22 pages, 2829 KiB  
Article
Learning about Victims of Holocaust in Virtual Reality: The Main, Mediating and Moderating Effects of Technology, Instructional Method, Flow, Presence, and Prior Knowledge
by Miriam Mulders
Multimodal Technol. Interact. 2023, 7(3), 28; https://doi.org/10.3390/mti7030028 - 6 Mar 2023
Cited by 4 | Viewed by 2627
Abstract
The goal of the current study was to investigate the effects of a virtual reality (VR) simulation of Anne Frank’s hiding place on learning. In a 2 × 2 experiment, 132 middle school students learned about the living conditions of Anne Frank, a girl of Jewish heritage during the Second World War, through desktop VR (DVR) and head-mounted display VR (HMD-VR) (media conditions). Approximately half of each group engaged in an explorative vs. an expository learning approach (method condition): the exposition group received instructions on how to explore the hiding place stepwise, whereas the exploration group experienced it autonomously. Beyond the main effects of media and method, the mediating effects of the learning process variables presence and flow and the moderating effects of contextual variables (e.g., prior technical knowledge) were analyzed. The results revealed that HMD-VR led to significantly improved evaluation and (though not statistically significantly) perspective-taking with Anne, but less knowledge gain compared to DVR. Further results showed that adding instructions and segmentation in the exposition group led to significantly increased knowledge gain compared to the exploration group; for perspective-taking and evaluation, no differences were detected. A significant interaction between media and method was not found. No moderating effects of contextual variables were observed, but mediating effects were: for example, the feeling of presence within VR can fully explain the relationship between media and learning. These results support the view that learning processes are crucial for learning in VR and that studies neglecting these processes may be confounded. Hence, the results point out that media comparison studies are limited because they do not consider the complex interactions of media, instructional methods, learning processes, and contextual variables.
Figures:
Figure 1: Research design.
Figure 2: Room of Anne Frank and Fritz Pfeffer (https://www.annefrank.org/en/about-us/news-and-press/news/2018/6/12/anne-frank-house-vr-launched/, accessed on 12 February 2023).
Figure 3: The hiding place.
Figure 4: Study process (illustration based on [82]).
Figure 5: Knowledge and perspective-taking: pre to post.
Figure 6: Mediation principles (illustration based on [92]). Annotations: IV = independent variable, DV = dependent variable, M = mediator.
Figure 7: A significant complete mediation: smooth automated flow to satisfaction. ** p < 0.01, *** p < 0.001.
16 pages, 1983 KiB  
Article
Assessing Heuristic Evaluation in Immersive Virtual Reality—A Case Study on Future Guidance Systems
by Sebastian Stadler, Henriette Cornet and Fritz Frenkler
Multimodal Technol. Interact. 2023, 7(2), 19; https://doi.org/10.3390/mti7020019 - 9 Feb 2023
Cited by 3 | Viewed by 3008
Abstract
A variety of evaluation methods for user interfaces (UIs) exist, such as usability testing, cognitive walkthrough, and heuristic evaluation. However, UIs such as guidance systems at transit hubs must be evaluated in their intended application field to allow the effective and valid identification of usability flaws. What if evaluations are not feasible in real environments, or laboratory conditions cannot be ensured? Based on adapted heuristics, the present study combines heuristic evaluation with immersive Virtual Reality (VR) to identify usability flaws of dynamic guidance systems (DGS) at transit hubs. The study involved usability evaluations of nine DGS concepts using the newly proposed method. The results show that, compared to computer-based heuristic evaluations, the use of immersive VR led to the identification of a greater number of “severe” usability flaws as well as more usability flaws overall. In a qualitative assessment, immersive VR is validated as a suitable tool for conducting heuristic evaluations, with significant advantages such as the creation of realistic experiences under laboratory conditions. Future work seeks to further establish the suitability of immersive VR for heuristic evaluations and to compare the proposed method to other evaluative methods.
Figures:
Figure 1: Digital twin of a Singaporean transit hub including a baseline guidance system.
Figure 2: User interfaces of the immersive VR simulation to change and adapt the environment.
Figure 3: Two experts during a scenario of the heuristic evaluation.
Figure 4: Summary of DGS concepts tested via the heuristic evaluation in immersive VR.
Figure 5: Comparison of identified usability flaws and their means by display technology (desktop-based vs. immersive VR).
25 pages, 2075 KiB  
Article
Designing to Leverage Presence in VR Rhythm Games
by Robert Dongas and Kazjon Grace
Multimodal Technol. Interact. 2023, 7(2), 18; https://doi.org/10.3390/mti7020018 - 9 Feb 2023
Cited by 4 | Viewed by 3032
Abstract
Rhythm games are known for their engaging gameplay and have gained renewed popularity with the adoption of virtual reality (VR) technology. While VR rhythm games have achieved commercial success, there is a lack of research on how and why they are engaging, and the connection between that engagement and immersion or presence. This study aims to understand how the design of two popular VR rhythm games, Beat Saber and Ragnarock, leverages presence to immerse players. Through a mixed-methods approach, utilising the Multimodal Presence Scale and a thematic analysis of open-ended questions, we discovered four mentalities which characterise user experiences: action, game, story and musical. We discuss how these mentalities can mediate presence and immersion, suggesting considerations for how designers can leverage this mapping for similar or related games.
Figures:
Figure 1: Screenshots of the two games used in this study. Top: Beat Saber, using laser swords to slice blocks in time with music. Bottom: Ragnarock, using hammers to hit drums in time with music.
Figure 2: Comparison of physical presence ratings, showing very high physical presence in both titles; Beat Saber scored significantly higher on the overall Physical sub-scale and on statements 1 (“The virtual environment seemed real to me”) and 4 (“While I was in the virtual environment, I had a sense of ‘being there’”).
Figure 3: Comparison of social presence ratings, showing low scores for Beat Saber (as expected) and average scores for Ragnarock, which scored significantly higher on statement 6 (“I felt like I was in the presence of another person in the virtual environment”).
Figure 4: Comparison of self presence ratings, showing very high results in both titles with no significant differences.
Figure 5: Thematic map indicating the hierarchy between player mentalities and themes from the thematic analysis. Four mentalities were identified: Action (physical engagement and movement), Game (ability and level completion), Musical (rhythm and beat), and Narrative (fictional elements of the world).
Figure 6: Mappings between presence (after Lee, 2004) and immersion (after Nilsson et al., 2016) for each of the four mentalities.
Figure 7: Translation of the mentalities to the 3-axis taxonomy of immersion from Nilsson (2016): the Action, Game, and Story mentalities map directly to System, Challenge-based, and Narrative immersion, respectively, while the Musical mentality can be considered immersive across all three categories.
24 pages, 18483 KiB  
Article
Enhancing Operational Police Training in High Stress Situations with Virtual Reality: Experiences, Tools and Guidelines
by Olivia Zechner, Lisanne Kleygrewe, Emma Jaspaert, Helmut Schrom-Feiertag, R. I. Vana Hutter and Manfred Tscheligi
Multimodal Technol. Interact. 2023, 7(2), 14; https://doi.org/10.3390/mti7020014 - 31 Jan 2023
Cited by 20 | Viewed by 7155
Abstract
Virtual Reality (VR) provides great opportunities for police officers to train decision-making and acting (DMA) in cognitively demanding and stressful situations. This paper presents a summary of findings from a three-year project, including requirements collected from experienced police trainers and industry experts and quantitative and qualitative results of human factor studies and field trials. The findings include advantages of VR training, such as the possibility to safely train high-risk situations in controllable and reproducible training environments, to include a variety of avatars that would be difficult to use in real-life training (e.g., vulnerable populations or animals), and to handle dangerous equipment (e.g., explosives); they also highlight challenges such as tracking, locomotion, and intelligent virtual agents. The importance of strong alignment between training didactics and technical possibilities is highlighted and potential solutions are presented. Furthermore, training outcomes are transferable to real-world police duties and may apply to other domains that would benefit from simulation-based training.
Figures:
Figure 1: VR setup for field trials.
Figure 2: Multiple layers of scenario design.
Figure 3: Tactical belt and physical props.
Figure 4: Boxplots of change in HR and HRV relative to baseline.
Figure 5: SHOTPROS Stress Dashboard: Stress Cue Control (left), Stress Level Assessment (middle), live VR view (right).
Figure 6: In-Action Monitoring panel.
Figure 7: After-Action Review.
17 pages, 1934 KiB  
Article
Does Augmented Reality Help to Understand Chemical Phenomena during Hands-On Experiments?–Implications for Cognitive Load and Learning
by Hendrik Peeters, Sebastian Habig and Sabine Fechner
Multimodal Technol. Interact. 2023, 7(2), 9; https://doi.org/10.3390/mti7020009 - 19 Jan 2023
Cited by 10 | Viewed by 3870
Abstract
Chemical phenomena are only observable on a macroscopic level, whereas they are explained by entities on a non-visible level. Students often demonstrate limited ability to link these different levels. Augmented reality (AR) offers the possibility to increase contiguity by embedding virtual models into hands-on experiments. This paper therefore presents a pre- and post-test study investigating how learning and cognitive load are influenced by AR during hands-on experiments. Three comparison groups (AR, animation, and filmstrip), with a total of N = 104 German secondary school students, conducted and explained two hands-on experiments. Whereas the AR group was allowed to use an AR app showing virtual models of the processes on the submicroscopic level during the experiments, the two other groups were provided with the same dynamic or static models after experimenting. Results indicate no significant learning gain for the AR group, in contrast to the two other groups. The perceived intrinsic cognitive load was higher for the AR group in both experiments, as was the extraneous load in the second experiment. It can be concluded that AR could not unleash its theoretically derived potential in the present study.
Figures:
Figure 1: Study design; the instruments used are explained in the Instruments section.
Figure 2: Example from the filmstrip of the first experiment (translated into English).
Figure 3: (a) Main menu of the AR app with the different environments to select (translated into English); (b) the AR app during the experiment.
Figure 4: Mean scores in domain-specific knowledge for the different groups.
17 pages, 787 KiB  
Article
Multimodal Augmented Reality Applications for Training of Traffic Procedures in Aviation
by Birgit Moesl, Harald Schaffernak, Wolfgang Vorraber, Reinhard Braunstingl and Ioana Victoria Koglbauer
Multimodal Technol. Interact. 2023, 7(1), 3; https://doi.org/10.3390/mti7010003 - 26 Dec 2022
Cited by 8 | Viewed by 2461
Abstract
Mid-air collision is one of the top safety risks in general aviation. This study describes and experimentally assesses multimodal Augmented Reality (AR) applications for training traffic procedures in accordance with Visual Flight Rules (VFR). AR has the potential to complement conventional flight instruction by bridging the gap between theory and practice and by relieving the time and performance pressure associated with limited simulator time. However, it is critical to assess the impact of AR in the specific domain and to identify any potential negative learning transfer. Multimodal AR applications were developed to address various areas of training: guidance and feedback for the correct scanning pattern, estimating whether encountering traffic is on a collision course, and applying the relevant rules. The AR applications also provided performance feedback for collision detection, avoidance, and priority decisions. The experimental assessment was conducted with 59 trainees (28 women, 31 men) assigned to an experimental group (AR training) and a control group (simulator training). Tests without AR in the flight simulator show that the group that trained with AR obtained levels of performance similar to the control group. There was no negative training effect of AR on trainees’ performance, workload, situational awareness, emotion, or motivation. After training, the tasks were perceived as less challenging, the accuracy of collision detection improved, and the trainees reported less intense negative emotions and fear of failure. Furthermore, a scanning pattern test in AR showed that the AR training group performed the scanning pattern significantly better than the control group. In addition, there was a significant gender effect on emotion, motivation, and preferences for AR features, but not on performance: women liked the voice interaction with AR and the compass hologram more than men, while men liked the traffic holograms and the AR projection field more than women. These results are important because they provide experimental evidence for the benefits of multimodal AR applications that could be used to complement flight simulator training.
Figures:
Figure 1: Augmented Reality (AR) moving-ball scheme for the scanning pattern.
Figure 2: AR moving-ball guidance for the scanning pattern in the application.
Figure 3: AR heading exercise.
Figure 4: Scanning performance of the AR and conventional training groups in the AR post-test. Error bars represent standard errors.
Figure 5: Mean scores of the ARI subscales in female and male trainees from the experimental group that used the AR applications. Error bars represent standard errors.
19 pages, 5449 KiB  
Article
“AR The Gods of Olympus”: Design and Pilot Evaluation of an Augmented Reality Educational Game for Greek Mythology
by Evangelos Ventoulis and Stelios Xinogalos
Multimodal Technol. Interact. 2023, 7(1), 2; https://doi.org/10.3390/mti7010002 - 22 Dec 2022
Cited by 13 | Viewed by 3514
Abstract
Although important, teaching and learning theoretical subjects such as History is considered by many students to be unappealing. Alternative teaching approaches include the use of educational games and augmented reality (AR) applications or, more recently, AR educational games. Such games are considered to increase students’ interest in the subject and lead to better learning outcomes. However, studies on the use of AR educational games in the classroom are sparse, and further research is necessary. In this article, we present an AR-enhanced educational game for teaching History (Greek Mythology) to 3rd grade Primary school students in Greece. The game, called “AR The Gods of Olympus”, consists of three mini games: an AR game with the gods/goddesses of Olympus using narration; a memory game with cards depicting the gods and their symbols; and a quiz game. To study the effectiveness of the game and students’ experience and perceptions of it, a study was carried out with primary school students who used the game in the classroom. The study utilized a pre-/post-test design, a brief questionnaire based on the MEEGA+ model for evaluating educational games, and observation of students during game play. Students’ performance improved after playing the game, although the difference was not statistically significant, and the game was positively perceived by students. The AR mini game in particular raised students’ interest and, as the students themselves stated, helped them “learn while playing”.
Figures:
Figures 1-9: The start screen of the game; the target image of Poseidon; scanning the target image; AR preview of the goddess Artemis; a student testing the app in class; correct matching of two cards and the icons disappearing after the match; and correct and wrong answers in the “Quiz game”.
Figures 10-15: Third and fourth grade students’ performance (correct answers) in the pre- and post-tests and their mean grades in the pre-/post-test.
18 pages, 953 KiB  
Article
Narrative Visualization with Augmented Reality
by Ana Beatriz Marques, Vasco Branco and Rui Costa
Multimodal Technol. Interact. 2022, 6(12), 105; https://doi.org/10.3390/mti6120105 - 26 Nov 2022
Cited by 1 | Viewed by 3150
Abstract
The following study addresses, from a design perspective, narrative visualization using augmented reality (AR) in real physical spaces, specifically in spaces with no semantic relation to the represented data. We intend to identify the aspects augmented reality adds, as narrative possibilities, to data visualization. In particular, we seek to identify what augmented reality introduces regarding the three dimensions of narrative visualization: view, focus, and sequence. For this purpose, we adopted a comparative analysis of a set of fifty case studies, specifically narrative visualizations using augmented reality from a journalistic scope, where narrative is a key feature. Despite the strong explanatory character of the analyzed cases, which sometimes limits the user’s agency, there is a strong interactive factor. We found that augmented reality can expand the narrative possibilities in the three dimensions mentioned (view, focus, and sequence), but especially regarding visual strategies where simulation plays an essential role. As a visual strategy, simulation can provide the context for communication or be the object of communication itself, as a replica.
Figures:
Figure 1: Exploration and explanation as complementary instances, as proposed by Thudt et al. [8].
Figure 2: Types of narrative structures found in the case studies.
Figure 3: Types of combinations between different narrative structures.
17 pages, 13899 KiB  
Article
Are the Instructions Clear? Evaluating the Visual Characteristics of Augmented Reality Content for Remote Guidance
by Bernardo Marques, Carlos Ferreira, Samuel Silva, Andreia Santos, Paulo Dias and Beatriz Sousa Santos
Multimodal Technol. Interact. 2022, 6(10), 92; https://doi.org/10.3390/mti6100092 - 14 Oct 2022
Cited by 5 | Viewed by 2315
Abstract
Augmented Reality (AR) solutions are emerging in multiple scenarios of application as Industry 4.0 takes shape. In particular, for remote collaboration, flexible mechanisms such as authoring tools can be used to generate instructions and assist human operators as they experience increased complexity in their daily tasks. In addition to the traditional handicap of ensuring instructions can be intuitively created without having to understand complicated AR concepts, another relevant topic is the fact that the quality of said instructions is not properly analyzed prior to the tools being evaluated. This means that the characteristics of the visual content are not adequately assessed beforehand. Hence, it is essential to be aware of the cognitive workload associated with AR instructions to ascertain whether they can be easily understood and accepted before being deployed in real-world scenarios. To address this, we focused on AR during sessions of remote guidance. Based on a participatory process with domain experts from the industry sector, a prototype for creating AR-based instructions was developed, and a user study with two parts was conducted: (1) first, a set of step-by-step instructions was produced, and their visual characteristics were evaluated by 129 participants based on a set of relevant dimensions; (2) afterward, these instructions were used by nine participants to understand if they could be used to assist on-site collaborators during real-life remote maintenance tasks. The results suggest that the AR instructions offer low visual complexity and considerable visual impact, clarity, and directed focus, thus improving situational understanding and promoting task resolution.
Figures:
Figure 1: On-site technician performing an intervention based on a set of instructions provided by a remote expert using the proposed AR-based authoring tool. Adapted from [43].
Figure 2: Information flow overview: an on-site technician shares the task context via video stream with a remote expert, who can freeze the stream and create AR-based instructions by annotating it; the technician then views the real world augmented with the instructions and performs the intervention.
Figure 3: On-site collaborator viewing an augmented instruction (provided by the remote expert) suggesting where a new component should be installed.
Figure 4: The step-by-step instructions analysed: (1) push the cables to the side to make some space; (2) remove the screws that hold the fan; (3) unplug the power cables from the energy module; (4) reach in and remove the fan; (5) install the new fan by repeating the procedure in reverse.
Figures 5-9: Boxplot charts for the visual characteristics of the Step 1-5 instructions.
Figure 10: Sum of participant ratings for the visual characteristics of the step-by-step instructions: Visual Complexity (VC), Visual Impact (VI), Clarity (CLA), Directed Focus (DF), and Inference Support (IS).
Figure 11: Dendrogram of the visual characteristics for the five dimensions per step.
Figure 12: Remote team members collaborating through the AR-based tool: on-site participant assisted by a remote expert.
20 pages, 2187 KiB  
Article
Participatory Design of Sonification Development for Learning about Molecular Structures in Virtual Reality
by Miguel Garcia-Ruiz, Pedro Cesar Santana-Mancilla, Laura Sanely Gaytan-Lugo and Adriana Iniguez-Carrillo
Multimodal Technol. Interact. 2022, 6(10), 89; https://doi.org/10.3390/mti6100089 - 12 Oct 2022
Cited by 2 | Viewed by 2443
Abstract
Background: Chemistry and biology students often have difficulty understanding molecular structures. Sonification (the rendition of data into non-speech sounds that convey information) can be used to support molecular understanding by complementing scientific visualization. A proper sonification design is important for its effective educational use. This paper describes a participatory design (PD) approach to designing and developing the sonification of a molecular structure model to be used in an educational setting. Methods: Biology, music, and computer science students and specialists designed a sonification of a model of an insulin molecule, following Spinuzzi’s PD methodology and involving evolutionary prototyping. The sonification was developed using open-source software tools used in digital music composition. Results and Conclusions: We tested our sonification played on a virtual reality headset with 15 computer science students. Questionnaire and observational results showed that multidisciplinary PD was useful and effective for developing an educational scientific sonification. PD allowed for speeding up and improving our sonification design and development. Making a usable (effective, efficient, and pleasant to use) sonification of molecular information requires the multidisciplinary participation of people with music, computer science, and molecular biology backgrounds.
Figures:
Figure 1: Students discussing the molecular sonification design.
Figure 2: Our PD methodology, based on Spinuzzi [51].
Figure 3: The sonification development process.
Figure 4: A VR headset displaying a molecular model using Sketchfab.
Figure 5: The molecular model of insulin shown on the Sketchfab website.
Figure 6: Graph showing the Likert-scale results from Table 2.
Figure A1: Scatter plots of the five questions asked to participants, described in Table 2.
21 pages, 6513 KiB  
Article
Interactive Scientific Visualization of Fluid Flow Simulation Data Using AR Technology-Open-Source Library OpenVisFlow
by Dennis Teutscher, Timo Weckerle, Ömer F. Öz and Mathias J. Krause
Multimodal Technol. Interact. 2022, 6(9), 81; https://doi.org/10.3390/mti6090081 - 14 Sep 2022
Cited by 1 | Viewed by 3038
Abstract
Computational fluid dynamics (CFD) is being used more and more in industry to understand and optimize processes such as fluid flows. At the same time, tools such as augmented reality (AR) are becoming increasingly important with the realization of Industry 5.0 to make data and processes more tangible. Bringing the two together paves the way for a new method of active learning and for an interesting and engaging way of presenting industry processes. It also enables students to reinforce their understanding of the fundamental concepts of fluid dynamics in an interactive way. However, this potential is not really being utilized yet. For this reason, in this paper, we aim to combine these two powerful tools. Furthermore, we present the framework of a modular open-source library for scientific visualization of fluid flow, “OpenVisFlow”, which simplifies the creation of such applications and enables seamless visualization without other software by allowing users to integrate the visualization step into the simulation code. Using this framework and the open-source extension ARCore, we show how a new markerless visualization tool can be implemented.
Figures:
Figure 1: Concept of the OpenVisFlow library, bridging the gap between simulation software and helper libraries to write programs and apps that scientifically visualize the calculated flows in AR, VR, or on an ordinary screen.
Figure 2: Schematic representation of the requests and data flows in OpenLBar using a REST interface.
Figure 3: Selection of the simulation data to be displayed above the QR code, including a user prompt for handling large files.
Figures 4, 5, 9, 12, 15-18: Class diagrams of the library’s components (WebApiDataManager, QRCodeVisualizer, GeometryOverlapVisualizer, ByteToAssetTransformer) and an overview of how the currently implemented classes relate to each other.
Figures 6-8: How the markerless placement algorithm casts a ray from the camera through a field of trackable feature points, chooses the point with the smallest distance to the ray, and accounts for depth discrepancies that would otherwise cause large scaling and positioning errors.
Figure 10: Example of geometry overlay on a decanter centrifuge, before placement and overlaid over the real centrifuge.
Figure 11: Results of a particle simulation in a decanter centrifuge at the first and last time steps, visualized using a polygon file format.
Figure 13: The simulations on the QR code at different zoom levels, shown for the cooling truck and the OLBee.
Figure 14: Implementation of the CuttingPlane UI (Cross Section Shader add-on from the Unity Asset Store) at frontal, sideways, and fully rotated orientations.
Figures 19-21: Questionnaire results on the usability, purpose, and applicability of OpenLBar.
Figure 22: Viewing the results on the QR code from different angles, for the cooling truck and the OLBee.
Figure 23: Results of geometry overlay using a decanter centrifuge as an example.
14 pages, 936 KiB  
Article
A Typology of Virtual Reality Locomotion Techniques
by Costas Boletsis and Dimitra Chasanidou
Multimodal Technol. Interact. 2022, 6(9), 72; https://doi.org/10.3390/mti6090072 - 25 Aug 2022
Cited by 20 | Viewed by 4703
Abstract
Researchers have proposed a wide range of categorization schemes to characterize the space of VR locomotion techniques. In a previous work, a typology of VR locomotion techniques was proposed, introducing motion-based, roomscale-based, controller-based, and teleportation-based types of VR locomotion. The facts that (i) the proposed typology is widely used and has made a significant research impact in the field and (ii) VR locomotion is a considerably active research field create the need for this typology to be kept up-to-date and valid. Therefore, the present study builds on this previous work, and the typology’s consistency is investigated through a systematic literature review. Altogether, 42 articles were included in this literature review, eliciting 80 instances of 10 VR locomotion techniques. The results indicated that the current typology cannot cover teleportation-based techniques enabled by motion (e.g., gestures and gazes). Therefore, the typology was updated, and a new type was added: “motion-based teleporting.”
Figures:
Figure 1: The VR locomotion typology, as presented in [3].
Figure 2: Flowchart of included/excluded articles.
Figure 3: The number of instances of the 10 locomotion techniques, as documented from the 42 reviewed articles.
Figure 4: The updated typology of VR locomotion techniques.
21 pages, 1612 KiB  
Article
Inter- and Transcultural Learning in Social Virtual Reality: A Proposal for an Inter- and Transcultural Virtual Object Database to be Used in the Implementation, Reflection, and Evaluation of Virtual Encounters
by Rebecca M. Hein, Marc Erich Latoschik and Carolin Wienrich
Multimodal Technol. Interact. 2022, 6(7), 50; https://doi.org/10.3390/mti6070050 - 25 Jun 2022
Cited by 3 | Viewed by 3160
Abstract
Visual stimuli are frequently used to improve memory, language learning or perception, and understanding of metacognitive processes. However, in virtual reality (VR), there are few systematically and empirically derived databases. This paper proposes the first collection of virtual objects based on empirical evaluation for inter- and transcultural encounters between English- and German-speaking learners. We used explicit and implicit measurement methods to identify cultural associations and the degree of stereotypical perception for each virtual stimulus (n = 293) through two online studies including native German- and English-speaking participants. The analysis resulted in a final, well-describable database of 128 objects (called the InteractionSuitcase). In future applications, the objects can be used as interaction or conversation assets and as a behavioral measurement tool in social VR applications, especially in the field of foreign language education. For example, learners in virtual encounters can use the objects to describe their culture, or teachers can intuitively assess stereotyped attitudes.
Figures:
Figure 1: The four-step process: Step 1, object collection; Step 2, screening against predefined criteria; Step 3, a study with a homogeneous sample examining recognition and cultural association in preparation for the IAT; Step 4, a second study with a heterogeneous sample verifying the Study 1 results and performing an implicit association test (IAT).
Figure 2: Study design and timeline of the first study.
Figure 3: Study design and timeline of the second study.
Figure A1: Time sequence of an assignment task, using the exercise task as an example.
Figure A2: An affective assignment task with positive and negative selection options, using the visual stimulus “apple” as an example.
Figure A3: The virtual objects identified and evaluated for the first proposal of the InteractionSuitcase.
16 pages, 2923 KiB  
Article
Vocational Training in Virtual Reality: A Case Study Using the 4C/ID Model
by Miriam Mulders
Multimodal Technol. Interact. 2022, 6(7), 49; https://doi.org/10.3390/mti6070049 - 24 Jun 2022
Cited by 9 | Viewed by 4015
Abstract
Virtual reality (VR) is an emerging technology with a variety of potential benefits for vocational training. This paper therefore presents a VR training application, based on the extensively validated 4C/ID model, for developing vocational competencies in the field of vehicle painting. The following 4C/ID components were designed using the associated ten-step approach: learning tasks, supportive information, procedural information, and part-task practice. The paper describes the instructional design process, including an elaborated blueprint for a VR training application for aspiring vehicle painters. We explain the model's principles and features and their suitability for designing VR vocational training that fosters integrated competence acquisition. Following the methodology of design-based research, several research methods (e.g., a target group analysis) and the ongoing development of prototypes enabled agile process structures. Results indicate that the 4C/ID model and the ten-step approach support the instructional design process when using VR in vocational training. Implementation and methodological issues that arose during the design process (e.g., limited time within VR) are also discussed. Full article
(This article belongs to the Special Issue Virtual Reality and Augmented Reality)
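To make the four 4C/ID components concrete, here is a hypothetical sketch of how a blueprint for the vehicle-painting training might be represented as data. The class and field names are illustrative and are not taken from the HandLeVR software; the example content paraphrases elements mentioned in the abstract and figures (heatmap, protective equipment, spray-gun distance beam, the "new part painting" task class).

from dataclasses import dataclass, field

@dataclass
class LearningTask:
    description: str
    support_level: str  # e.g., "worked example", "completion", "conventional"

@dataclass
class TaskClass:
    name: str
    learning_tasks: list[LearningTask] = field(default_factory=list)
    supportive_information: list[str] = field(default_factory=list)  # for non-recurrent skills
    procedural_information: list[str] = field(default_factory=list)  # recurrent skills, just in time
    part_task_practice: list[str] = field(default_factory=list)      # drill of selected routines

blueprint = TaskClass(
    name="new part painting",
    learning_tasks=[
        LearningTask("Watch an expert paint a door panel", "worked example"),
        LearningTask("Paint a door panel unaided", "conventional"),
    ],
    supportive_information=["heatmap of layer thickness", "personal protective equipment"],
    procedural_information=["beam showing spray-gun distance"],
    part_task_practice=["maintaining a constant gun distance"],
)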
Show Figures
Figure 1. The three-part structure of the VR training application.
Figure 2. The paint booth.
Figure 3. The 4C/ID model according to [12,13].
Figure 4. The HandLeVR project process.
Figure 5. Instructional design process within the HandLeVR project.
Figure 6. Supportive information: layer thickness (heatmap).
Figure 7. Overview of the task class "new part painting".
Figure 8. Supportive information: personal protective equipment.
Figure 9. Procedural information: a beam indicating the distance between the workpiece and the spray gun.
Figure 10. First task class: new part painting.
15 pages, 1056 KiB  
Article
Didactic Use of Virtual Reality in Colombian Universities: Professors’ Perspective
by Álvaro Antón-Sancho, Diego Vergara, Pablo Fernández-Arias and Edwan Anderson Ariza-Echeverri
Multimodal Technol. Interact. 2022, 6(5), 38; https://doi.org/10.3390/mti6050038 - 16 May 2022
Cited by 19 | Viewed by 3718
Abstract
This paper presents quantitative research on the perception of the didactic use of virtual reality among university professors in Colombia, with special attention to differences according to their area of knowledge, as the main variable, and gender and digital generation, as secondary variables. The study involved 204 professors from different Colombian universities. As an instrument, a purpose-designed survey with four Likert-type scales was used to measure different dimensions of the participants' perception of the use of virtual reality in the classroom. The answers were analyzed statistically, and differences in perception were identified by means of parametric tests according to (i) area of knowledge, (ii) gender, and (iii) digital generation of the participants. The results showed that the participants expressed high valuations of virtual reality despite having intermediate or low levels of digital competence. Gaps were identified in terms of area of knowledge, gender, and digital generation (digital natives or immigrants) with respect to opinions of virtual reality and digital competence. The highest valuations of virtual reality were given by professors of Humanities and by digital natives. It is suggested that Colombian universities implement training plans on digital competence for professors, aimed at strengthening knowledge of virtual reality. Full article
(This article belongs to the Special Issue Virtual Reality and Augmented Reality)
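The abstract reports parametric tests of Likert-scale responses across groups. As a minimal sketch of this kind of analysis, the following compares synthetic ratings (not the study's data) with a one-way ANOVA across areas of knowledge and Welch's t-test across digital generations; group sizes and score ranges are made up for illustration.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic 1-5 Likert ratings of VR's didactic value; illustrative only.
humanities = rng.integers(3, 6, size=40)   # skewed toward higher valuations
engineering = rng.integers(2, 5, size=40)
natives = rng.integers(3, 6, size=50)
immigrants = rng.integers(2, 5, size=50)

# One-way ANOVA across areas of knowledge (extend with more groups as needed).
f, p_area = stats.f_oneway(humanities, engineering)

# Welch's t-test (no equal-variance assumption) across digital generations.
t, p_gen = stats.ttest_ind(natives, immigrants, equal_var=False)

print(f"area of knowledge: F={f:.2f}, p={p_area:.3f}")
print(f"digital generation: t={t:.2f}, p={p_gen:.3f}")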
Show Figures
Figure 1. Main generations in terms of their degree of connection with digital technologies.
Figure 2. Variables considered in this study.

Review


26 pages, 2793 KiB  
Review
Design Considerations for Immersive Virtual Reality Applications for Older Adults: A Scoping Review
by Kiran Ijaz, Tram Thi Minh Tran, Ahmet Baki Kocaballi, Rafael A. Calvo, Shlomo Berkovsky and Naseem Ahmadpour
Multimodal Technol. Interact. 2022, 6(7), 60; https://doi.org/10.3390/mti6070060 - 20 Jul 2022
Cited by 26 | Viewed by 5824
Abstract
Immersive virtual reality (iVR) has gained considerable attention recently with the increasing affordability and accessibility of the hardware. iVR applications for older adults present tremendous potential for diverse interventions and innovations. The iVR literature, however, provides a limited understanding of the design considerations and evaluations pertaining to user experience (UX). To address this gap, we present a state-of-the-art scoping review of the literature on iVR applications developed for older adults over 65 years. We searched the ACM Digital Library, IEEE Xplore, Scopus, and PubMed (1 January 2010–15 December 2019) and found that 36 of 3874 papers met the inclusion criteria. We identified 10 distinct sets of design considerations covering target users and physical configuration, hardware use, and software design. Most studies captured episodic UX; only 2 captured anticipated UX, and 7 measured longitudinal experiences. We discuss the interplay between our findings and future directions to design effective, safe, and engaging iVR applications for older adults. Full article
(This article belongs to the Special Issue Virtual Reality and Augmented Reality)
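As an illustration of the screening step summarized in the review's PRISMA diagram, the following minimal sketch applies the inclusion criteria stated in the abstract (iVR, older adults over 65, publications from 1 January 2010 to 15 December 2019) to bibliographic records. The record fields and example entries are hypothetical, not the review's actual screening pipeline.

from dataclasses import dataclass
from datetime import date

@dataclass
class Record:
    title: str
    published: date
    uses_immersive_vr: bool
    min_participant_age: int

def passes_screening(r: Record) -> bool:
    # Date window and criteria paraphrase the review's stated scope.
    return (
        date(2010, 1, 1) <= r.published <= date(2019, 12, 15)
        and r.uses_immersive_vr
        and r.min_participant_age >= 65
    )

records = [
    Record("iVR exergame for seniors", date(2018, 5, 1), True, 67),
    Record("Desktop VR memory study", date(2016, 3, 9), False, 70),
]
included = [r for r in records if passes_screening(r)]
print(f"{len(included)} of {len(records)} records met the inclusion criteria")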
Show Figures
Figure 1. PRISMA flow diagram.
Figure 2. Studies classified based on objective and context.
Figure 3. Studies classified based on the sex ratio of the sample (a) and HMD type (b).
Figure 4. Design considerations grouped by themes and clusters.
Figure 5. Challenges reported for older adults in iVR.

Other


23 pages, 1676 KiB  
Systematic Review
Usability Assessments for Augmented Reality Head-Mounted Displays in Open Surgery and Interventional Procedures: A Systematic Review
by Ellenor J. Brown, Kyoko Fujimoto, Bennett Blumenkopf, Andrea S. Kim, Kimberly L. Kontson and Heather L. Benz
Multimodal Technol. Interact. 2023, 7(5), 49; https://doi.org/10.3390/mti7050049 - 9 May 2023
Cited by 1 | Viewed by 3487
Abstract
Augmented reality (AR) head-mounted displays (HMDs) are an increasingly popular technology. For surgical applications, the use of AR HMDs to display medical images or models may reduce invasiveness and improve task performance by enhancing understanding of the underlying anatomy. This technology may be particularly beneficial in open surgeries and interventional procedures for which the use of endoscopes, microscopes, or other visualization tools is insufficient or infeasible. While the capabilities of AR HMDs are promising, their usability for surgery is not well defined. This review identifies current trends in the literature, including device types, surgical specialties, and reporting of user demographics, and describes usability assessments of AR HMDs for open surgeries and interventional procedures. Assessments applied to other extended reality technologies are also included, to identify additional measures worth considering when evaluating AR HMDs. The PubMed, Web of Science, and EMBASE databases were searched through September 2022 for relevant articles describing user studies. User assessments most often addressed task performance. However, objective measurements of cognitive, visual, and physical loads, which are known to affect task performance and the occurrence of adverse events, were limited. Reporting of user demographics was also incomplete. This review reveals knowledge and methodology gaps in the usability of AR HMDs and demonstrates the potential impact of future usability research. Full article
(This article belongs to the Special Issue Virtual Reality and Augmented Reality)
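The review notes that measurements of cognitive, visual, and physical load were limited in the surveyed studies. One widely used workload instrument in usability studies of this kind is the NASA-TLX; whether any of the reviewed papers used it is not claimed here. As a minimal sketch, the raw (unweighted) TLX score is simply the mean of six 0-100 subscale ratings.

# Raw (unweighted) NASA-TLX: mean of six 0-100 subscale ratings.
# Illustrative sketch only; not an instrument attributed to the reviewed papers.

TLX_SUBSCALES = (
    "mental demand", "physical demand", "temporal demand",
    "performance", "effort", "frustration",
)

def raw_tlx(ratings: dict[str, float]) -> float:
    missing = set(TLX_SUBSCALES) - ratings.keys()
    if missing:
        raise ValueError(f"missing subscales: {missing}")
    return sum(ratings[s] for s in TLX_SUBSCALES) / len(TLX_SUBSCALES)

# Example ratings (made up): a moderately demanding AR-guided task.
print(raw_tlx({
    "mental demand": 70, "physical demand": 30, "temporal demand": 55,
    "performance": 40, "effort": 65, "frustration": 35,
}))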
Show Figures
Figure 1. PRISMA flowchart for the systematic literature review.
Figure 2. Time distribution of XR+ usability articles.
Figure 3. Time distribution of XR+ usability articles by hardware type.
Figure 4. Time distribution of (a) XR+ and (b) AR HMD usability articles for surgical planning vs. procedures.