

Evaluating Learning Outcomes of Virtual Reality Applications in Education: A Proposal for Digital Cultural Heritage

Published: 24 June 2023

Abstract

The surge of mobile Virtual Reality (VR) applications is attracting growing attention among researchers and practitioners. The recent literature demonstrates their benefits when used for educational purposes, since virtual immersion yields promising results for learning. Leveraging this trend, within so-called “digital didactics”, the need to gauge VR’s effectiveness in the didactic field has become paramount; so far, a method connecting traditional evaluation strategies to novel VR-based learning is still broadly missing. This paper investigates the problem of quantifying learning outcomes and proposes a new didactic evaluation method for Digital Cultural Heritage (DCH) learning. This research, conducted in an upper secondary school, proposes three new Key Performance Indicators, referring to Revised Bloom’s Taxonomy (RBT): Mnemonic (M), Transversal (T), and Disciplinary (D). A questionnaire was administered by the same teacher who holds the course, to evaluate how well the application communicated information. The participants were subdivided into two groups with the same knowledge base. The first group (1ACAT), the “VR group”, used the app at home to deepen their subject studies, while the second group (1AGR), the “control group”, consulted and studied the app only just before the test. The results demonstrate that the “control group” has a greater ability to handle purely mnemonic topics (1ACAT 46.9%, 1AGR 53.1%), such as dates and simple definitions. The skills reached by the “VR group” attest to both transversal (1ACAT 52.9%, 1AGR 47.1%) and disciplinary (1ACAT 52.5%, 1AGR 47.5%) knowledge. These results validate the use of VR in teaching, demonstrating both experiential value and student involvement, while also confirming the compensatory function of VR with respect to the irreplaceable role of teachers in guiding learners to learn.

1 Introduction

The leveraging of the latest technological achievements has infused society with ongoing innovation that makes the use of multimedia products unavoidable. With this has come an overwhelming range of ways to experience everyday situations through digital applications. Younger generations, sometimes called “digital natives” [1], have a boundless world of devices available that can be used for countless purposes, be they edifying or not. Among such technologies, Virtual Reality (VR) and Augmented Reality (AR) are gaining increasing attention in the education domain [2, 3, 4, 5]. In the last two decades, AR and VR have been adapted to learning activities to improve learning quality [6] and engagement [7], facing well-known challenges of classical learning approaches [8, 9] that include overcoming state-of-the-art limitations [10]. Rather than just exploiting these technologies for a temporary “wow effect”, several domains have already employed them, including industrial environments [11], healthcare [12], language learning [13], and the preservation of intangible cultural heritage [14]. In this context, all schooling levels have tested and adopted AR and VR in the classroom prolifically to exploit their potential. For example, in the review presented by [15], the authors discuss the advantages and disadvantages of using AR and VR in the classroom, and their ability to increase teaching performance. As stated by [16], four elements make VR suitable for such a context: experience, engagement, equitability, and everywhere. The work of [17] identified the three key elements of AR: contextuality, interactivity, and spatiality. Thus, there are several features to be examined that remain partially unexplored [18].
The educational domain is witnessing a turning point, and insiders have called for greater awareness of how to use the new tools. Among the more influential research challenges that remain open, how to go beyond the emotional impact of new technologies, how to understand the teacher’s role, and how to identify the educational domains where technologies can be most effective are particularly significant.
The comfort of younger generations with multimedia technologies has led to innovative learning methodologies and new research trends for teaching. New terms have been coined (e.g., “learning by research” and “scientific education based on inquiry” [19]) as knowledge based on research, surveying, and model building has been created. As the learning experience changes through the use of technologies, studying how technologies affect learning and skills has drawn interest, since they improve the traditional learning experience through interactive learning environments [20, 21, 22].
While technologies can trigger emotional effects that can benefit the learning process [23, 24], further investigation is needed to determine the actual performance of AR and VR technologies in teaching, which involves all aspects related to teaching and learning. Few studies have focused on such benefits to date [25], though in [26] the authors confirmed that students experienced strengthened motivation to learn through teaching activities built using AR and VR technologies. Learning through these technologies has proven more entertaining and engaging [27], but such evidence does not validate their potential regarding didactic action.
In the field of learning, many theories support experience enhancement as a vehicle for student involvement. Dewey affirmed that school education must promote more than codified knowledge, which is typical of traditional learning, but not sacrifice control in the name of ‘gaining experience’ [28]. Therefore, the role of a teacher guiding the learning process remains crucial, not only to transfer knowledge but rather to help learners to learn, even when the didactic method is combined with computer aided systems [29]. AR and VR applications facilitate the learning process but have not been subjected to in-depth and systematic study regarding real didactic value (i.e., in terms of specific knowledge and skills that can be spent in transversal, interdisciplinary, and long-term ways). This aspect requires deep study to verify whether the knowledge acquired by students remains over time as a cultural and metacognitive background, which can be applied to other contexts and in interdisciplinary ways.
Recent studies [30] have revealed 18 application domains that have been tested as VR benchmarks. Among them, art and architecture are the least explored, a trend confirmed by [31]. Conversely, the context of Digital Cultural Heritage (DCH) appears particularly suitable for AR and VR exploitation, due to their capacity to provide immersive experiences of masterpieces that allow deeper and more attractive knowledge [32]. Albeit in different domains, the work of [33] confirms their benefits. As stated in [30], there is a lack of well-established methodologies for evaluating learning outcomes.
This study fills these gaps by introducing Key Performance Indicators (KPIs) to measure and quantify student learning performance when teachers use VR as a teaching tool. This study aims to provide a general tool for evaluating the effectiveness of technological methods and their impacts on novel and relevant KPIs. Moreover, the present study aims to show that combining traditional teaching with VR application use provides better learning results. Skill acquisition processes reach higher levels if supported by an educator providing suitable tools to achieve training objectives. Tools that favour knowledge development follow the theory presented in [34], and VR contents, which can be inserted to expand teaching methods in knowledge transmission, fit that description. Like Vygotsky, Bloom [35] also studied experiential learning, and the current research has been developed from their theories.

1.1 Main Contributions and Paper Organization

The main contributions of this paper are summarised as follows:
(1)
Understand the engaging value of VR technologies as a primarily educational experience (edutainment). A cultural heritage related project based on VR has been exploited to drive the user through the immersive educational experience.
(2)
Define the benefits of VR through proper teacher-led use. For this, an experimental protocol has been introduced within daily educational activities in a real school environment.
(3)
Evaluate and quantify through indicators the contribution of VR to the learning process. A set of KPIs has been defined to measure the real benefit of VR for educational purposes.
The remainder of the paper is organised as follows. Among the large body of recent work in the field of education, the works most closely related to our experiment have been selected and analysed in Section 2. Afterwards, the methodology used to conduct the tests is described in Section 3; the experimental setup and the consequent protocol used for gathering the learning outcomes of the students are detailed in Section 4. Results of the experiments, together with the statistical validation of the proposed approach, can be found in Section 5. Discussions, benefits, pros and cons of our methodology, and findings are described in Section 6. Section 7 is dedicated to concluding remarks and future research directions.

2 Related Works

Much research has shown that VR technologies can significantly help students to improve their skills and knowledge. The use of these tools in the didactic field makes teaching and learning not only more attractive, but also more effective [36, 37, 38]. Students reach more accurate knowledge with greater efficiency by better assimilating topics and making them their own [39].
The work of [40] demonstrated that these tools improved learning by increasing knowledge sharing and curiosity through fun and lively virtual experiences. Although multiple AR and VR teaching applications exist, several open issues also relate to the use of this technology. VR applications are more widespread and used more than AR, especially in the field of video games, and this has made the younger generations skilled with VR devices [31, 41]. Taking this aspect into account, VR technology can be more easily introduced into a school educational path, as it is more familiar to the younger generations than AR. VR devices grant users immersive experiences where knowledge transmission crosses an experiential dynamic, increasing younger generations’ involvement in the learning practice [42]. The most suitable educational scenarios for VR systems are technical and scientific ones, allowing students to reproduce formulas or theories in a simplified and concrete way [43]. However, such activity could be interpreted as gamelike, diverting attention from its ultimate goal of increasing student knowledge and skill. VR could also be fruitfully applied to visual arts and architecture, as virtual technologies can show what is not visible and reconstruct lost heritage, allowing complete involvement with surrounding environments while displaying aspects and details that would not otherwise be perceived [44].

2.1 DCH Learning in VR Environments

In this section, some VR applications in the DCH domain are described. Moreover, the work of [45] has been examined, since it provides an overview of the state of the art of serious games in the DCH domain, emphasising the educational ambition of games.
Through the Nearpod educational platform,1 primary classes in the San Francisco Unified School District and a public school complex in Polk County, Florida, allow children to take virtual tours of Easter Island, ancient Egypt, a coral reef, and Mars. The Marin School of the Arts in Novato, California, has a wall covered with ultra-flat monitors that are used by many classes of children to create and manipulate \(360^\circ\) scenes, while on the opposite side a protected area has been set up for students to use an optical device: the HTC Vive headset.
In 2014 Mendel High School in Opava, Czech Republic, was the first European high school to create courses integrated with the latest generation VR technology. Using optical devices, Oculus Rift headsets, and a Leap Motion controller, they offer educational units in science and history.2
The Google Expeditions application3 has been designed for classwork, enabling virtual visits to the most attractive places in the world using just a mobile phone and a Google Cardboard headset.
Related to the DCH environment are the \(360^\circ\) views of the Vatican app,4 which provides tours and information.
Cave Automatic Virtual Environments (CAVE) [46] provide a fully-immersive VR experience in a room where the walls and floor are projection screens. Users wearing 3D glasses can move freely in the projected world. Although CAVE holds interesting educational and teaching potential, it is seldom used due to cost and dedicated space requirements; as a result, CAVE technology has rarely been adopted in the educational field for DCH.
In the context of underwater exploration of DCH, the work of [47] is of particular interest, as it proposes a VR application to overcome the limitations of underwater archaeological sites. The experimental phase demonstrates that the system can provide a learning experience with a high emotional impact, both for younger and for inexperienced users.

2.2 Learning with VR

In the literature, different works study Bloom’s Taxonomy (BT) for similar purposes. In the paper presented by [48], the authors compared three different learning experiences using the HTC VIVE VR headset: interactive 3D model, role play scene, and 3D creation of space. Considering BT, they grouped the levels of knowledge and established the learning experience that can be associated with each group.
More recently, in [49], the authors evaluated the learning outcomes of a VR application in the educational field, considering three levels of BT (applying, analysing, and evaluating). This choice was based on the fact that VR has the potential to influence the higher levels of BT.
Another recent paper integrated a VR application for teaching systems of linear equations. In that case, to evaluate the effectiveness of the learning, the authors adopted only the last four levels of knowledge (creating, evaluating, analysing, and applying) of BT [50].
The BT is also considered in [51] to evaluate the levels of knowledge reached using VR applications in education. Starting from the six levels of BT, the authors have developed 18 new indicators to establish the levels of knowledge. Furthermore, in [52], the authors propose a virtual heritage learning game based on an ancient Egyptian temple by using BT.
However, to the best of our knowledge, the current literature offers no quantitative evaluations that use KPIs to determine the educational validity of VR applications for the DCH domain. These indicators are measurable values important for teachers, since they demonstrate the effectiveness of student learning processes and therefore of the teaching method [53].
Research presented by [54] proposed a technological pedagogical content knowledge framework as an analysis tool for describing student competence with a constructive map. Teachers used these maps to better understand student competence levels, establishing objectives and following learning procedures. The recent work of [55] studied a technology acceptance model based on structural equation modeling [56] to evaluate student acceptance of tablet use as a technological tool in mathematics classes in a Middle Eastern University. The results highlighted user satisfaction and perceived usefulness.

3 Research Procedure and the Methodology Specification

Student learning achievements have been investigated in terms of mnemonic, transversal, and disciplinary outcomes, exploiting a VR application in the DCH domain. An explanatory overview of our main research steps is depicted in Figure 1.
Fig. 1.
Fig. 1. Workflow depicting main research steps. After VR application development, user tests were validated to assess the novel KPIs.
In particular, starting from a VR application for DCH, and after a user test that compared two different learning methods, the knowledge acquired was validated through the definition of three KPIs: Mnemonic (M), Transversal (T), and Disciplinary (D). Starting from an equal level of knowledge of the topic, two groups were formed, the “VR group” and the “control group”: the former was able to study the app thoroughly at home, the latter experienced the app only a few minutes before the test. For each question, a different weight has been assigned to connect traditional evaluation to VR-based evaluation. Thus, the numerical results have been analysed to demonstrate a connection between the traditional evaluation strategy and the KPIs.

3.1 Research Questions

Our research questions are as follows:
RQ1
To explore the benefits that VR introduces in the learning process of DCH contents, the following question arises: Comparing two different learning methods, which is the most suitable to be adopted in the classroom?
RQ2
Considering the lack of well-established evaluation models to quantify the level of knowledge reached by students using VR applications, the following question arises: Comparing two different methods of learning, is it possible to propose a reliable evaluation model capable of determining the levels of knowledge?
RQ3
To establish if VR stimulates mnemonic learning only or rather develops critical skills in exploiting knowledge, the following question must be answered: Which are the competences enhanced by the use of VR applications?
RQ4
To understand the role of the teacher as a mediator of the use of VR in the classroom or as a means of study, the following question arises: Can the use of VR replace the role of teacher and/or the standard didactic approach?

3.2 SmartMarca Project and its Relation to This Research

To evaluate how innovative educational paths improve learning, the test was carried out within the SmartMarca project, a platform specifically created to manage AR and VR contents for DCH [57]. The project has several beneficial and advantageous objectives for cultural tourism coming from digitisation, web publication, and utilisation.
This VR application allows an immersive visit to the Roman Theatre of Falerone, providing a contextual and interactive reading of historical and architectural information about the 3D model of the theatre. Elements and details are highlighted while the user moves within the virtual reconstruction. This application, besides providing a \(360^\circ\) view of a 3D model, exploits immersive visualisation through a stereoscopic mode, placing users completely inside the model (Figure 2).
Fig. 2.
Fig. 2. VR visualisation of a 3D model of the Falerone Theatre. View of the Scene’s front and side doors. Reading tags indicate constituent parts of the monument.
While the app was initially designed for tourism purposes, its VR permits an in-depth analysis worthy of didactic investigation. The teaching experience undertaken through the SmartMarca5 application has provided interesting results regarding the potential of digital tools introduced in daily school practice. This VR application of the Roman Theatre of Falerone is a valid navigation tool within an archaeological site, rebuilding salient architectural parts for the discovery and recognition of their nomenclature and function. The text boxes help learners to move around in a virtual archaeological site and facilitate the understanding of functions of the various parts that would otherwise not be understandable, being often destroyed and missing, as in most archaeological sites.

4 Experimental Design

The definitions and notations of quantities used in the paper are introduced in the following.
Acronym | Meaning
1ACAT | First year of the Costruzioni Ambiente Territorio course
1AGR | First year of the Grafica course
3D | Three-dimensional
AR | Augmented Reality
BT | Bloom’s Taxonomy
D | Disciplinary
DCH | Digital Cultural Heritage
KPI | Key Performance Indicator
M | Mnemonic
QRV | Questionnaire reliability verification
RBT | Revised Bloom’s Taxonomy
RQ | Research question
StdDev | Standard deviation
StdError | Standard error of the mean
T | Transversal
VR | Virtual Reality
In traditional teaching, reading textbooks and related didactic images is often unexciting and fails to offer the searching action that occurs with an app, which demands tag identification and examination of the accompanying text windows to locate information. In an app, images connected to questions facilitate information recognition in the explanatory captions (where tags are prepared by a teacher in the VR model), enabling immediate content associations by students.
The general objective of the experiment is to evaluate the didactic potential of VR applications with the aim of contributing to the introduction and implementation of tools that allow teachers to create tailored didactic proposals.
Moreover, the work aims to propose and analyse an evaluation methodology of learning achievements at the end of an educational path, carried out with the support of VR technology and applied to the study of architectural history. This methodology introduces indicators that facilitate the evaluation process itself.
The research has been included in the disciplinary path of the History of Architecture’s teaching curriculum in an upper secondary school course. The VR apps contained in the SmartMarca Project were the support for the didactic action undertaken within the school programs developed during the year, allowing the contributions of digital technology in student learning processes to be tested.

4.1 Participants

The participants in the methodological analysis are 37 students, all 14 years old: 18 (16 male and 2 female) from the first year of the C.A.T. Costruzioni Ambiente Territorio course (1ACAT), and 19 (14 male and 5 female) from the first year of the Graphics course (1AGR). Our analysis shows that gender has no significant effect on learning and on the use of VR. Moreover, it is worth mentioning the students’ inclination towards the use of VR. Indeed, in our previous study, the students were asked to express their attitude toward technology, demonstrating their willingness and preparedness. Interested readers can refer to [58] for details.
After following a common learning path developed through theoretical lessons and a guided tour of the archaeological remains of a Roman theatre, students were invited to carry out an online test. Both groups studied topics related to the test at home, but with a difference: the first group (1ACAT), the “VR group”, could familiarise themselves with the app at home to deepen their subject studies, while the second group (1AGR), the “control group”, consulted and studied the app only shortly before completing the test. Thus, the “VR group” had the advantage of more time available to study the content using the app, whereas the “control group” could consult the app only shortly before the test.

4.2 Didactic Evaluation

The test was carried out using Socrative,6 an online application that allows tests to be carried out while collecting results, data, and statistics related to student learning. All the tests were done at school during the annual course by the same teacher who holds the course. Table A.1 in the appendix reports the multiple-choice questionnaire administered to the students, where the first three questions represent the pre-test. This pre-test is the prerequisite and the base of knowledge for both groups.
The evaluation process during teaching practice must not be understood only as a simple listing of percentages or values obtained from the correct answers at the end of a test. The evaluation is obtained through a path that starts from the presentation of the disciplinary contents, moves to the choice of the learning methodology (classroom lesson, participatory lesson, problem solving, case study), and arrives at the identification of the key elements of a didactic unit to be transmitted, also through the use of a specific language. The first, formative phase of the exam follows, with queries, summary diagrams, and short summaries of the main concepts, allowing the students to assimilate the newly explored ideas and make the topic their own. The final exam (which can be oral, written, or graphic) contains various evaluation elements of the learning process. The types of test provide different parameters of information that make it possible to express an opinion on the level and degree of preparation achieved by the student as a whole.

4.3 Methodological Evaluation

To assess the levels reached by the students, questions were asked referring to BT levels [35]. These were investigated and classified according to six levels of cognitive process competence: remember, understand, apply, analyse, evaluate and create (Figure 3).
Fig. 3.
Fig. 3. Simplified representation of RBT, a classification of human cognition critical to the learning process. Source [59].
To simplify information collection, the levels were grouped into three descriptors that were used to estimate questionnaire answers, as shown in Figure 4.
Fig. 4.
Fig. 4. The new KPIs defined starting from the BT. The levels were grouped into three descriptors that were used to estimate questionnaire answers.
Table 1 shows how the levels of knowledge have been grouped, together with the meaning of each of Bloom’s descriptors and of our descriptors, to better explain our choice. The choice of adopting BT was made to keep the introduction of evaluation metrics for VR consistent; specifically, this taxonomy is widely adopted in Italian upper secondary schools, and it is the one used by teachers to evaluate the results of an examination. Thus, it has been transferred into descriptors for the evaluation of learning with the VR application.
Table 1.
Bloom levels | Research descriptors | Bloom level descriptors | Research level descriptors
Remember | M | Recognise and remember | Mnemonic application
Understand | T | Understand the meaning and interpret it, classifying and comparing | Transversal knowledge between disciplines and reworking of learned contents
Apply | | Apply the concepts and implement |
Analyse | D | Analyse to differentiate and organise | Contextualisation and readaptation of learned content
Evaluate | | Coordinate and evaluate |
Create | | Generate hypotheses and plan |
Table 1. Comparison Table between the Levels of RBT and the Descriptors used in the Questionnaire Answer Evaluation Analysis
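As a compact illustration of the grouping in Table 1, the following sketch (Python, purely illustrative; the dictionary and function names are ours, not part of the study) encodes how the six RBT levels collapse onto the three research descriptors.

```python
# Illustrative mapping of the six RBT cognitive levels onto the three
# research descriptors of Table 1 (M = Mnemonic, T = Transversal, D = Disciplinary).
RBT_TO_DESCRIPTOR = {
    "remember": "M",
    "understand": "T",
    "apply": "T",
    "analyse": "D",
    "evaluate": "D",
    "create": "D",
}

def descriptor_for(rbt_level: str) -> str:
    """Return the research descriptor (M, T, or D) associated with an RBT level."""
    return RBT_TO_DESCRIPTOR[rbt_level.lower()]
```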
At this point, we needed to codify formulas for reading the results obtained from the learning tests administered at the end of the app-using process. The level of investigation was expanded through a comparative reading of individual questions (Table 1), assigning indicators and descriptors to capture students’ information acquisition and their ability to transform that information into transversal, interdisciplinary, and metacognitive skills. Merely collecting the correct answer results from the two class-sample groups and making a general comparison was not sufficient; instead, it was necessary to analyse question types. The content and information provided by the app on the Falerone Theatre were developed by the teacher and included in a curricular educational path. In this way, students acquired a specific technical language necessary to understand the more complex and articulated content of the discipline, preparing for the continuation of their training activities.

4.4 KPIs Definition

The questions proposed for the learned-content test were elaborated by the teacher and articulated using different indicators within the requests, which can be summarised as follows:
Mnemonic (M): the question contains requests that only require mnemonic applications without reprocessing to understand content and context;
Transversal (T): the question requires transversal knowledge between disciplines and content reprocessing learned in other disciplinary contexts;
Disciplinary (D): the question requires a good level of content learning developed within the discipline, using readaptation and contextualisation.
The scale of values given to each indicator ranges from 0 to 3, based on the weight the indicator assumes within each question: the minimum value 0 corresponds to a question that does not contain elements of the indicator, while the maximum value 3 corresponds to a question that contains all the elements of the indicator, as shown in Table 2.
Table 2.
Item | Weight
Does not contain elements of the indicator | 0
Contains a few elements of the indicator | 1
Contains many elements of the indicator | 2
Contains all the elements of the indicator | 3
Table 2. Values Assigned to Each Indicator
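To make the weighting concrete, the sketch below (Python, illustrative only; the data structures and function names are ours) stores the 0–3 indicator weights of a few questions, taking the values from Table 6, and multiplies a class’s percentage of correct answers by each weight, which is the calibration later used to build Table 7.

```python
# Indicator weights (Mnemonic, Transversal, Disciplinary) on the 0-3 scale of
# Table 2, assigned by the teacher to each question; values from Table 6.
INDICATOR_WEIGHTS = {
    "Q1": {"M": 3, "T": 0, "D": 1},
    "Q2": {"M": 1, "T": 3, "D": 2},
    "Q3": {"M": 0, "T": 3, "D": 3},
}

def calibrated_scores(correct_pct: float, weights: dict) -> dict:
    """Multiply the percentage of correct answers by each indicator weight."""
    return {indicator: correct_pct * w for indicator, w in weights.items()}

# 44% of 1ACAT students answered Q1 correctly (Table 3):
# -> {'M': 132, 'T': 0, 'D': 44}, the Q1 row of Table 7 for 1ACAT.
print(calibrated_scores(44, INDICATOR_WEIGHTS["Q1"]))
```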

4.5 Questionnaire Reliability Verification

The questions were structured so that they could be classified, according to their complexity (greater or lesser) and type of request, within the research descriptors. Such descriptors were defined as follows: questions requiring a simple mnemonic application of the requested content were marked with M (M descriptor), those requiring content reprocessing via interpretation of the expressed request were marked with T (T descriptor), and those requiring a careful content analysis for a broader contextualisation were marked with D (D descriptor). Following the approach proposed by [60, 61, 62] to validate the internal consistency of the questionnaire, Cronbach’s reliability test [63] was carried out using its alpha coefficient to interpret the results obtained from the two student classes. The alpha coefficient expresses a measure of the relative weight of the variability associated with the items with respect to the variability associated with their sum, as in (1). The minimum recommended value for alpha is between 0.60 and 0.70 to ensure sufficient internal consistency of the investigation tool. Values between 0.70 and 0.80 show fair internal consistency.
\begin{equation} \alpha = \frac{k}{k-1}\left(1-\frac{\sum \nolimits _{i=1}^k\sigma _i^2}{\sigma _x^2}\right) \end{equation}
(1)
k = number of items;
\(\sigma _i^2\) = variance of each item;
\(\sigma _x^2\) = total variance of the test.
The reliability values associated with \(\alpha\) are the following:
\(\alpha \lt 0.4\) corresponds to a low reliability;
\(0.4\lt \alpha \lt 0.6\) corresponds to an uncertain reliability;
\(0.6\lt \alpha \lt 0.8\) corresponds to an acceptable reliability;
\(0.8\lt \alpha \lt 0.9\) corresponds to a good reliability.
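A minimal computational sketch of (1) is given below (Python with NumPy); it assumes the per-student item scores are available as a students × items array and uses the sample variance (ddof=1), which the paper does not state explicitly.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha as in Eq. (1): scores is a 2-D array of shape
    (n_students, n_items), e.g. 1/0 for correct/incorrect answers."""
    k = scores.shape[1]                         # number of items
    item_var = scores.var(axis=0, ddof=1)       # sigma_i^2 for each item
    total_var = scores.sum(axis=1).var(ddof=1)  # sigma_x^2 of the summed score
    return k / (k - 1) * (1 - item_var.sum() / total_var)

# Example on dummy data (5 students x 4 items), not the study's data.
rng = np.random.default_rng(0)
dummy = rng.integers(0, 2, size=(5, 4))
print(round(cronbach_alpha(dummy), 2))
```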

5 Experimental Results

5.1 Quantitative Data

The initial analysis, shown in Table 3, indicates that the average results achieved by the 1AGR class (68.5%) were better than those achieved by the 1ACAT class (66.4%).
Table 3.
Question (Falerone) | Correct answers 1ACAT (%) | Correct answers 1AGR (%)
Q1 | 44 | 87
Q2 | 61 | 53
Q3 | 78 | 67
Q4 | 44 | 67
Q5 | 61 | 67
Q6 | 78 | 47
Q7 | 89 | 80
Q8 | 83 | 87
Q9 | 78 | 60
Q10 | 67 | 93
Q11 | 67 | 60
Q12 | 50 | 53
Q13 | 61 | 73
Q14 | 83 | 100
Q15 | 67 | 27
Q16 | 50 | 53
Q17 | 78 | 93
Q18 | 100 | 93
Q19 | 56 | 47
Q20 | 44 | 73
Q21 | 89 | 60
Q22 | 50 | 93
Q23 | 50 | 53
Q24 | 78 | 93
Q25 | 56 | 33
Mean | 66.4 | 68.5
Table 3. Percentages of Correct Answers Provided by the Classes for Each Proposed Question
Table 4 reports the results of the statistical analysis obtained considering the data in Table 3. Mean, variance, standard deviation (StdDev), and standard error of the mean (StdError) are computed from the percentages of correct answers for each question.
Table 4.
Question | 1ACAT | 1AGR | Mean | Variance | StdDev | StdError
Q1 | 44 | 87 | 65.5 | 924.5 | 30.4 | 21.5
Q2 | 61 | 53 | 57 | 32 | 5.7 | 4
Q3 | 78 | 67 | 72.5 | 60.5 | 7.8 | 5.5
Q4 | 44 | 67 | 55.5 | 264.5 | 16.3 | 11.5
Q5 | 61 | 67 | 64 | 18 | 4.2 | 3
Q6 | 78 | 47 | 62.5 | 480.5 | 21.9 | 15.5
Q7 | 89 | 80 | 84.5 | 40.5 | 6.4 | 4.5
Q8 | 83 | 87 | 85 | 8 | 2.8 | 2
Q9 | 78 | 60 | 69 | 162 | 12.7 | 9
Q10 | 67 | 93 | 80 | 338 | 18.4 | 13
Q11 | 67 | 60 | 63.5 | 24.5 | 4.9 | 3.5
Q12 | 50 | 53 | 51.5 | 4.5 | 2.1 | 1.5
Q13 | 61 | 73 | 67 | 72 | 8.5 | 6
Q14 | 83 | 100 | 91.5 | 144.5 | 12.0 | 8.5
Q15 | 67 | 27 | 47 | 800 | 28.3 | 20
Q16 | 50 | 53 | 51.5 | 4.5 | 2.1 | 1.5
Q17 | 78 | 93 | 85.5 | 112.5 | 10.6 | 7.5
Q18 | 100 | 93 | 96.5 | 24.5 | 4.9 | 3.5
Q19 | 56 | 47 | 51.5 | 40.5 | 6.4 | 4.5
Q20 | 44 | 73 | 58.5 | 420.5 | 20.5 | 14.5
Q21 | 89 | 60 | 74.5 | 420.5 | 20.5 | 14.5
Q22 | 50 | 93 | 71.5 | 924.5 | 30.4 | 21.5
Q23 | 50 | 53 | 51.5 | 4.5 | 2.1 | 1.5
Q24 | 78 | 93 | 85.5 | 112.5 | 10.6 | 7.5
Q25 | 56 | 33 | 44.5 | 264.5 | 16.3 | 11.5
Table 4. Results of Statistical Analysis
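The per-question rows of Table 4 can be reproduced directly from the two class percentages of Table 3. The sketch below (Python; the function name is ours) uses the sample variance over the two values, with the standard error of the mean defined as StdDev divided by the square root of the number of observations, which is consistent with the reported figures.

```python
import math

def question_stats(pct_1acat: float, pct_1agr: float) -> tuple:
    """Mean, variance, StdDev, and StdError of the mean over the two class
    percentages of a single question (n = 2, sample variance with n - 1)."""
    values = (pct_1acat, pct_1agr)
    n = len(values)
    mean = sum(values) / n
    variance = sum((v - mean) ** 2 for v in values) / (n - 1)
    std_dev = math.sqrt(variance)
    std_error = std_dev / math.sqrt(n)
    return mean, variance, round(std_dev, 1), round(std_error, 1)

# Q1: 44% (1ACAT) and 87% (1AGR) -> (65.5, 924.5, 30.4, 21.5), as in Table 4.
print(question_stats(44, 87))
```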
Table 5 represents the statistical results after averaging the statistical indicators on the two groups (1ACAT and 1AGR).
Table 5.
Statistical parameters | 1ACAT | 1AGR
Mean | 66.4 | 68.5
Variance | 260.8 | 414.3
StdDev | 16.1 | 20.4
StdError | 3.2 | 4.1
Table 5. Mean Value for the Two Groups
Moreover, to compare the rate of correct answers, we provide Figure 5(a), which compares the correct answers given by the two groups, and Figure 5(b), which graphically reports the percentage of correct answers for each question.
Fig. 5.
Fig. 5. Rate of right answers given by the two groups (1ACAT and 1AGR). Figure 5(a) represents a comparison between the correct answers given by the two groups and Figure 5(b) reports the percentage of correct answers for each question.
The weight of each indicator assigned to each question by the teacher is shown in Table 6.
Table 6.
Question (Falerone) | Weight of Mnemonic (M) | Weight of Transversal (T) | Weight of Disciplinary (D)
Q1 | 3 | 0 | 1
Q2 | 1 | 3 | 2
Q3 | 0 | 3 | 3
Q4 | 2 | 1 | 3
Q5 | 2 | 1 | 3
Q6 | 0 | 2 | 3
Q7 | 3 | 0 | 1
Q8 | 3 | 0 | 1
Q9 | 1 | 2 | 3
Q10 | 3 | 1 | 1
Q11 | 3 | 1 | 1
Q12 | 3 | 0 | 0
Q13 | 1 | 2 | 2
Q14 | 3 | 1 | 0
Q15 | 1 | 2 | 3
Q16 | 1 | 2 | 2
Q17 | 1 | 2 | 3
Q18 | 1 | 3 | 3
Q19 | 1 | 2 | 2
Q20 | 3 | 1 | 0
Q21 | 3 | 1 | 0
Q22 | 1 | 2 | 2
Q23 | 0 | 2 | 3
Q24 | 3 | 1 | 1
Q25 | 0 | 1 | 3
Table 6. Weights of the Indicators Identified within Each Question
The test was carried out after allowing 1AGR (the “control group”) to study the app contents shortly before the test, while 1ACAT (the “VR group”) could study the app contents at home (i.e., traditionally) with longer times and procedures. A more detailed analysis of the exact answers collected was performed. Questions were associated with the three previously defined indicators obtained by simplifying RBT. The percentage of correct answers was then multiplied by the weight of each indicator and subsequently broken down by indicator type, to obtain a comparative analysis of the individual values. The pairwise comparisons for the pre-test revealed no statistical significance, indicating that the starting level of the two groups was equal.
Table 7 is obtained by combining the values of Table 3 and Table 6: the values of Table 3 have been multiplied by the weight assigned to each indicator, determined according to Table 2. Table 8 reports the correct answer percentages for each question multiplied by a weight of 3 for each indicator.
Table 7.
Question (Falerone) | Mnemonic 1ACAT (%) | Mnemonic 1AGR (%) | Transversal 1ACAT (%) | Transversal 1AGR (%) | Disciplinary 1ACAT (%) | Disciplinary 1AGR (%)
Q1 | 132 | 261 | 0 | 0 | 44 | 87
Q2 | 61 | 53 | 183 | 159 | 122 | 106
Q3 | 0 | 0 | 234 | 201 | 234 | 201
Q4 | 88 | 134 | 44 | 67 | 132 | 201
Q5 | 122 | 134 | 61 | 67 | 183 | 201
Q6 | 0 | 0 | 156 | 94 | 234 | 141
Q7 | 267 | 240 | 0 | 0 | 89 | 80
Q8 | 249 | 261 | 0 | 0 | 83 | 87
Q9 | 78 | 60 | 156 | 120 | 234 | 180
Q10 | 201 | 279 | 67 | 93 | 67 | 93
Q11 | 201 | 180 | 67 | 60 | 67 | 60
Q12 | 150 | 159 | 0 | 0 | 0 | 0
Q13 | 61 | 73 | 122 | 146 | 122 | 146
Q14 | 249 | 300 | 83 | 100 | 0 | 0
Q15 | 67 | 27 | 134 | 54 | 201 | 81
Q16 | 50 | 53 | 100 | 106 | 100 | 106
Q17 | 78 | 93 | 156 | 186 | 234 | 279
Q18 | 100 | 93 | 300 | 279 | 300 | 279
Q19 | 56 | 47 | 112 | 94 | 112 | 94
Q20 | 132 | 219 | 44 | 73 | 0 | 0
Q21 | 267 | 180 | 89 | 60 | 0 | 0
Q22 | 50 | 93 | 100 | 186 | 100 | 186
Q23 | 0 | 0 | 100 | 106 | 150 | 159
Q24 | 234 | 279 | 78 | 93 | 78 | 93
Q25 | 0 | 0 | 56 | 33 | 168 | 99
Mean | 115.7 | 128.7 | 97.7 | 95.1 | 122.2 | 118.4
Table 7. Analysis of Calibrated Values using Weights Assigned to Each Individual Question
Table 8.
Question (Falerone) | Correct 1ACAT (%) | Correct 1AGR (%) | Mnemonic 1ACAT | Mnemonic 1AGR | Transversal 1ACAT | Transversal 1AGR | Disciplinary 1ACAT | Disciplinary 1AGR
Q1 | 44 | 87 | 132 | 261 | - | - | - | -
Q2 | 61 | 53 | - | - | 183 | 159 | - | -
Q3 | 78 | 67 | - | - | 234 | 201 | 234 | 201
Q4 | 44 | 67 | - | - | - | - | 132 | 201
Q5 | 61 | 67 | - | - | - | - | 183 | 201
Q6 | 78 | 47 | - | - | - | - | 234 | 141
Q7 | 89 | 80 | 267 | 240 | - | - | - | -
Q8 | 83 | 87 | 249 | 261 | - | - | - | -
Q9 | 78 | 60 | - | - | - | - | 234 | 180
Q10 | 67 | 93 | 201 | 279 | - | - | - | -
Q11 | 67 | 60 | 201 | 180 | - | - | - | -
Q12 | 50 | 53 | 150 | 159 | - | - | - | -
Q13 | 61 | 73 | - | - | - | - | - | -
Q14 | 83 | 100 | 249 | 300 | - | - | - | -
Q15 | 67 | 27 | - | - | - | - | 201 | 81
Q16 | 50 | 53 | - | - | - | - | - | -
Q17 | 78 | 93 | - | - | - | - | 234 | 279
Q18 | 100 | 93 | - | - | 300 | 279 | 300 | 279
Q19 | 56 | 47 | - | - | - | - | - | -
Q20 | 44 | 73 | 132 | 219 | - | - | - | -
Q21 | 89 | 60 | 267 | 180 | - | - | - | -
Q22 | 50 | 93 | - | - | - | - | - | -
Q23 | 50 | 53 | - | - | - | - | 150 | 159
Q24 | 78 | 93 | 234 | 279 | - | - | - | -
Q25 | 56 | 33 | - | - | - | - | 168 | 99
Mean / Total | 66.4 | 68.5 | 2082 | 2358 | 717 | 639 | 1901 | 1722
Percentage | - | - | 46.9% | 53.1% | 52.9% | 47.1% | 52.5% | 47.5%
Table 8. Correct Answer Percentages for Each Question Multiplied by a Weight of 3 for Each Indicator
Obtained results were added together and translated as a percentage for a comparison of the values achieved by the two classes.
For the M (Mnemonic) indicator, the “control group” obtained a better rating than the “VR group” with a difference of about 13%. Being able to study app content just before the test favoured the “control group”, enabling easy storage of information containing numeric data, historical character names and simple definitions.
For the T (Transversal) indicator, the results were 3% better for the “VR group”, indicating a different response path being taken. A general count of the exact answers obtained did not permit detailed highlighting of educational values for individual questions administered, as stated previously. The indicator detected profound acquired knowledge reprocessing ability that allowed the readjustment of its founding cores to other contexts, including transversal ones. Each question required reflection on content already provided to students during previous lessons and different teaching units but was still preparatory to the activity in question. The 1ACAT class, processing app information, had time to assimilate the content and establish necessary cognitive connections with what had already been learned in previous lessons. Study time was the variable that favoured the best results for the 1ACAT class in this indicator.
For the D (Disciplinary) indicator, the “VR group” obtained 4% better results. Proposed questions contained topics covered widely in class and were deepened with images and examples. As such, both groups had the opportunity to learn essential content, connections, and didactic meaning. By detailing the weight within the administered questionnaire, a better learning result was obtained by the “VR group”, who studied at home and were able to reconnect the app content with the lesson content. Therefore, the app enabled deepening of acquired knowledge and content validation by crossing referencing information learned in the classroom with that contained in the app.
To provide additional reflective elements, a further analysis of the data collected through the indicators was carried out. The collected results were added by multiplying the percentage of exact answers (Table 3) by the maximum value of each indicator (i.e., a weight of 3), as reported in Table 8.
The weights of the answers provided by 1AGR students are clearly higher for the mnemonic indicator (53.1%) while the other indicators show 1ACAT students surpassing their colleagues: 52.9% and 52.5% for transversal and disciplinary, respectively. The weighted indicator category totals allow further validation of the differences between the results achieved. In Figure 6, a comparison between totals is obtained by adding the results of the exact answers multiplied by a weight of 3 for each indicator. In Figure 7, these values have been translated into a percentage. The mnemonic indicator values show the 1AGR class obtaining the best results by experiencing the app shortly before completing the questionnaire. For the transversal and disciplinary indicators, the 1ACAT class earned the best results by using the app to study at home, extending learning times, and enabling app content comparison with class content.
Fig. 6.
Fig. 6. Comparison of the total results obtained by the two classes, multiplying exact answer percentages by a weight value of 3 for each indicator. The mnemonic indicator values show the 1AGR class obtaining the best results by experiencing the app shortly before completing the questionnaire. For the transversal and disciplinary indicators, the 1ACAT class earned the best results by using the app to study at home, extending learning times and enabling app content comparison with class content.
Fig. 7.
Fig. 7. Comparison of the total results obtained by the two classes, multiplying exact answer percentages by a weight of 3 for each indicator, before translating the results into percentages. The mnemonic indicator values show the 1AGR class obtaining the best results by experiencing the app shortly before completing the questionnaire. For the transversal and disciplinary indicators, the 1ACAT class earned the best results by using the app to study at home, extending learning times and enabling app content comparison with class content.
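To make the aggregation behind Table 8 and Figures 6 and 7 explicit, the sketch below (Python; the data structures and names are ours, and only three questions are shown, so the printed split differs from the full-study values) keeps, for each indicator, the questions where that indicator carries the maximum weight of 3, multiplies the class percentage of correct answers by 3, sums the contributions per class, and converts the two totals into the percentage split reported in the last row of Table 8.

```python
# Correct-answer percentages (Table 3) and indicator weights (Table 6)
# for a subset of questions; the study uses all 25 questions.
CORRECT_PCT = {"Q1": {"1ACAT": 44, "1AGR": 87},
               "Q2": {"1ACAT": 61, "1AGR": 53},
               "Q3": {"1ACAT": 78, "1AGR": 67}}
WEIGHTS = {"Q1": {"M": 3, "T": 0, "D": 1},
           "Q2": {"M": 1, "T": 3, "D": 2},
           "Q3": {"M": 0, "T": 3, "D": 3}}

def indicator_totals(indicator: str) -> dict:
    """Sum, per class, 3 x correct-answer % over the questions in which the
    given indicator has the maximum weight of 3 (the columns of Table 8)."""
    totals = {"1ACAT": 0, "1AGR": 0}
    for question, weights in WEIGHTS.items():
        if weights[indicator] == 3:
            for group, pct in CORRECT_PCT[question].items():
                totals[group] += 3 * pct
    return totals

def percentage_split(totals: dict) -> dict:
    """Turn the two class totals into the percentage split of Table 8."""
    overall = sum(totals.values())
    return {group: round(100 * total / overall, 1) for group, total in totals.items()}

# On the full 25-question data this yields 46.9% (1ACAT) vs 53.1% (1AGR) for M.
print(percentage_split(indicator_totals("M")))
```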

5.2 Statistical Analysis

Returning to the levels indicated in RBT, the mnemonic indicator refers to the taxonomy’s first level (remembering) where the cognitive dimension consists of the basic thought skills of recognition and memorisation. This ability does not require students to employ analytical skills, such as understanding and elaboration, that are prerequisites for achieving more elaborate and consistent levels of knowledge, both in content and over time. These attitudes and processes were detected in higher skill levels (such as understanding, analysis, and evaluation), which our indicators call transversal and disciplinary. Achieving these levels indexed the ability to interpret acquired data through the learning process, implementing one’s own set of skills with personal and creative reprocessing to achieve a more elaborate and complete level of knowledge that can be spent in a transversal and original way.

5.2.1 Test of Reliability: Cronbach’s Alpha.

Considering the Cronbach’s alpha value related to the two classes, we obtained the value using (2) for the 1ACAT class.
\begin{equation} \alpha =\frac{25}{25-1}\biggl (1-\frac{2.60}{12.27}\biggr)=0.82 \end{equation}
(2)
The Cronbach’s alpha for the 1AGR class is expressed in (3):
\begin{equation} \alpha =\frac{25}{25-1}\biggl (1-\frac{4.14}{12.27}\biggr)=0.69 \end{equation}
(3)
The data collected from the “VR group” show good internal consistency and a good Cronbach’s alpha factor ( \(\alpha \gt 0.80\) ), confirming the correspondence of the results with the questionnaire form. This indicates that the answers possessed uniformity over the entire questionnaire: the average of the standard deviation for the individual results was close to the average of the standard deviation for the entire questionnaire. However, the Cronbach’s alpha factor for the “control group” results ( \(0.60 \lt \alpha \lt 0.70\) ) shows poor standard deviation uniformity, meaning many correct answers were given only for some types of questions while raising the general average of the positive results obtained. Basically, almost all students answered correctly only certain types of questions. These differing results provide understanding of how the teaching-learning process performed, with the “control group” producing positive results only for mnemonic questions.

5.2.2 Test of Significance: t-Test.

In order to validate the results, a statistical significance test was performed. As shown in Table 9, with a significance coefficient \(\alpha\) = 0.05, both P(T \(\le\) t) values exceed \(\alpha\) and the t statistic falls within the acceptance region, revealing that the difference between the two group means is not statistically significant.
Table 9.
Statistical parameters | 1ACAT | 1AGR
Mean | 66.5 | 68.5
Variance | 260.8 | 414.3
StdDev | 16.1 | 20.4
Observations | 25 | 25
Assumed difference between means | 0 |
Degrees of freedom (Gdl) | 24 |
t Stat | -0.5 |
P(T \(\le\) t) one tail | 0.3 |
t critical one tail | 1.7 |
P(T \(\le\) t) two tails | 0.6 |
t critical two tails | 2.1 |
Table 9. General Analysis
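For reference, the significance test summarised in Table 9 can be reproduced as a paired t-test over the 25 per-question percentages of Table 3; the sketch below uses SciPy, which is only one possible implementation since the paper does not state the software used.

```python
from scipy import stats

# Percentages of correct answers per question for the two classes (Table 3).
acat = [44, 61, 78, 44, 61, 78, 89, 83, 78, 67, 67, 50, 61,
        83, 67, 50, 78, 100, 56, 44, 89, 50, 50, 78, 56]
agr = [87, 53, 67, 67, 67, 47, 80, 87, 60, 93, 60, 53, 73,
       100, 27, 53, 93, 93, 47, 73, 60, 93, 53, 93, 33]

# Paired (dependent-samples) t-test on the 25 question scores.
t_stat, p_two_tails = stats.ttest_rel(acat, agr)
print(round(t_stat, 2), round(p_two_tails, 2))

# With alpha = 0.05, the two-tailed p-value (about 0.6) exceeds alpha, so the
# difference between the two class means is not statistically significant,
# consistent with the t Stat of about -0.5 reported in Table 9.
```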

6 Discussion

In the following section, we discuss the results by answering the research questions introduced in Section 3.1 and by identifying the method’s limitations.

6.1 VR for Learning (RQ1: Comparing Two Different Methods of Learning through the use of VR, Which is the Most Suitable to Propose in the Classroom?)

Our research showed that several factors contribute to achieving knowledge levels in the learning process. Our study’s use of VR to convey knowledge about the constituent parts of a Roman theatre encouraged greater student involvement. The didactic unit, which develops specific historical and architectural knowledge of an artefact, received a more precise and productive reading through app usage, showing that the study of architectural history (as of any other artistic expression) is favoured by a combination of images and technology enabling immersive reading. This positive result is demonstrated by the average percentage of exact answers obtained (Table 3 and Figure 5(a)). Values were between 66% and 68%, confirming that the topic was sufficiently acquired by all students. The content learning path was therefore facilitated by VR app usage, allowing all students to reach satisfactory levels of topic knowledge [64, 65].

6.2 Evaluation Model (RQ2: Comparing Two Different Methods of Learning, is it Possible to Propose an Evaluation Model Able to Properly Determine the Levels of Knowledge?)

The model created for KPI definition was applied to the individual questions administered in the questionnaire by associating each of them with a weight (from 0 to 3), classifying the link with each described individual level. This enabled the collection of more detailed information about the real value of the correct answers, permitting the deduction of the actual knowledge levels achieved. Collecting correct results alone is not exhaustive; it is fundamental to analyse each answer’s value within RBT, which allows a more precise breakdown of learning levels. The use of the app with different timing allowed different levels of content learning to be distinguished (Table 7 and Table 8). The added value, constituted by the introduction of VR technology in the reading of the architectural artefact, favoured a differentiation of study approaches whose fundamental variable was the study time factor.

6.3 Learning Evaluation (RQ3: Which are the Competences Enhanced by the use of VR Applications?)

The questionnaire administered to the students at the end of the teaching activities was structured to include questions requiring different acquired skills. Some only needed the application of mnemonic skills, facilitated by the images present in the questionnaire that helped recall the stored content. Other questions required learners to use synthesis or content-processing skills. An initial analysis revealed that the best results were obtained by the class using the SmartMarca application just before the assessment. By contrast, the home study supported by VR technology allowed different levels and types of content acquisition to be distinguished. Application use at home enabled focusing on information and its most complete reworking, thanks to more relaxed study times and a comparative mode contrasting book and app content. The combined use of traditional study and a VR application within an adequate study time leads to better knowledge and skill results. The class group that was able to review the app just before the test showed a greater ability to handle purely mnemonic topics, such as dates and simple definitions. The skills reached by the students who studied at home with the app, referring to RBT, attest to both conceptual and metacognitive knowledge (Figures 6 and 7). These students were able to transfer what they learned to other situations and forms, showing a deeper and more structured learning profile. Thus, the results validate the use of digital technology in teaching, demonstrating experiential value and student involvement [23]. Considering the small gap observed when using VR-based learning, our findings are in line with Wu et al. [66]. However, in both cases the percentage of correct answers is higher than 60%, which is above the mean obtained with a traditional lesson. This reference value was reported from the experience of the teacher, who compared the results of previous years, when the test on the same topic was administered without the application.

6.4 Teacher Role (RQ4: Can VR use Replace the Role of Teacher and/or the Standard Didactic Approach?)

Our research, which involves the definition and application of performance indicators within a questionnaire, has shown that a topic whose study is distributed over a long time and supported by VR application use deepens and strengthens student motivation while providing more solid and aware knowledge levels. Though the levels reached appeared inferior upon initial analysis, they proved to be higher performing and more complete. Correct answers were more evenly distributed, and all students had a homogeneous average preparation, validated by Cronbach’s alpha formula. The teacher is understood to retain the irreplaceable function of primary actor in knowledge transmission, as has also been investigated in a recent review [67]. In-depth activities carried out with VR favoured focusing on founding topics and conceptual re-elaboration of the topics dealt with. It was fundamental for students to manage information with the required learning times and to possess the metacognitive process that led to positive questionnaire results [16]. The evidence of our study corroborates Spector’s [68] findings, which affirm that advances in educational technology do not guarantee improved learning, and that ‘the focus should be on learning rather than on technology’. Our research was conducted following this philosophy, and we answered this question by analysing the results obtained. Considering the responses of the two groups, VR confirmed its capability to engage students, evidenced by their inclination towards the research experience; however, the teacher still plays a key role. All the tests conducted within the related teaching experiences have always confirmed that the intervention of the teacher, assisted by the technological tool, has allowed the achievement of better learning results. Only within a relationship between teacher and student, in a mechanism of mutual exchange, can the learning experience take place. Moreover, the role of the teacher is important not only to improve basic and preliminary skills, but also in the creation of the contents, which can be easily modified and adapted to the topic thanks to the digital tool [58].

6.5 Limitations

Our research has a number of limitations that nevertheless constitute the backbone for further developments and investigations. Each evaluation process in the didactic field requires a preliminary definition of disciplinary didactic objectives, which must be pursued throughout the whole teaching-learning process. For this reason, it was necessary to establish topic characteristics for evaluation. RBT levels should be structured within any questions given to students, which means each question should contain information to establish the knowledge level achieved by a student beyond the simple repetition of received information. Each question must then be assigned a separate weighting according to indicators established by the teacher. This process should be repeated each time such data collection is undertaken; being time consuming, it can hinder large-scale exploitation of the proposed methodology. Moreover, there are different levels and possible didactic methodologies for verifying learning and expressing subject assessment. Generally, all this is expressed with a judgement that refers to a docimological scale, but every teacher knows that behind such simple numbers there are several elements of judgement and synthesis related to the path a student has taken to reach their level of knowledge. Each class group also constitutes a different and difficult-to-classify working context. Therefore, teaching models are not always fully repeatable. The most receptive class will not need detailed goal setting, but will more easily reach articulated and deep levels of knowledge. Conversely, in classes with learning difficulties, the work will be slower and reach more basic levels of knowledge.
The added value of using VR will facilitate knowledge transmission, but it will be articulated according to different contexts. Diversifying didactics is all the more necessary in light of the growing presence of students with dyslexia, specific learning disorders (e.g., D.S.A.), or special educational needs (e.g., B.E.S.), as also shown in a work similar to ours, but in a musical context [69]. Each teacher, regardless of discipline, thus requires defined minimum learning objectives and course planning to facilitate teaching all students. Consequently, disciplinary objectives and relative learning levels require remodelling according to individual situations, making non-uniform evaluations necessary and further limiting the value of the KPIs defined in our research.
In light of the above-mentioned considerations, we can affirm that the definition of a well-established baseline (e.g., the RBT) constitutes only a starting point for question formulation. It must be followed by the attribution of weights identified in relation to the defined indicators. The aim of this research, thus, is to provide a possible method that can be adapted to other learning taxonomies and, obviously, should be moulded according to the topic. The learning process in the DCH domain is favoured by the use of visual contents, which constitute a privileged vehicle for information transmission. VR has been a strong aid for discernment capacity (namely, the transversal and disciplinary indicators), facilitating systematisation in the process of re-elaboration carried out by the students. Nonetheless, further studies are needed to confirm these results in other domains. The need for a transition from a subject-specific context to a discipline-specific context is highlighted by several studies [70]. This work moves in this direction by proposing a method for DCH, which will hopefully produce collective evidence on VR benefits. However, further testing is required to assess the validity of the proposed KPIs. Finally, as demonstrated by the recent literature [67], data collection methods have been based on questionnaires, while inferential statistics on differences has been the primary analysis method. Thus, the proposed method makes a step beyond the current state of the art by providing a method that, despite exploiting the existing baseline of evaluation in high school, makes it for the first time comparable with a VR-based learning approach. Digital tools are nowadays ready to gather huge amounts of data, and in the future we are confident that our platform will collect a larger dataset, which will make our findings more reliable.

7 Conclusion

Our study has proposed a quantitative method for evaluating student learning outcomes when a VR application is employed. The learning field of the experiment was DCH. To achieve the desired results, KPIs were defined using RBT as a model; this method enabled educational path validation by providing evaluation information for the obtained results. Each didactic action includes different methodologies and techniques, as well as an evaluative action, and each teacher adapts their teaching path to discipline objectives. To overcome such complexity, defining KPIs as indicators of level and objectives favoured the teaching-learning process. KPIs were used to validate the research path carried out, which saw the insertion of VR technology into a teaching unit; performance results were positive (see research question 1) and reached sufficiency on average. KPIs also allowed the definition of various knowledge levels and student skills, and an understanding of the real acquisition levels for the covered topics. It can be deduced that reaching higher levels of knowledge and topic re-elaboration requires combining application use and textbook study in a more systematic way, distributed over time. The whole didactic path should be structured consistently with the disciplinary objectives to be reached through the didactic action. The added value is represented by the possibility of assessing, in quantitative and possibly qualitative terms, knowledge transmission methods. The final assessment of the learning path provided, through the performance indicators, the level of skills reached by the students.
In future experimental evaluations, we plan to assess the benefits of VR on knowledge retention; moreover, in the future development of the application, we expect to add more interaction between the students and the real environment, in order to stimulate metacognitive skills.
Finally, we are planning to release a content creation framework specifically designed for schools, which will enable students to create their own virtual experiences without programming skills.

Footnotes

A Appendix

Table A.1.
ROMAN THEATRE OF FALERONE
Q1 (M) What was the name of the ancient Roman city of Falerone
    A. Faleriense | B. Falerio Piceno | C. Farfense | D. Falerio Romano
Q2 (T) Unlike the Greek theater, the Roman theater
    A. stands on a hillside | B. has only side entrances | C. stands on a flat ground | D. has a greater number of seats
Q3 (T/D) The construction technique is that
    A. dry masonry with overlapping stone blocks | B. masonry with brick facing and internal cementitious core | C. masonry with rows of bricks arranged in two heads | D. masonry with stone covering and cocciopesto core
Q4 (D) The place used for the choir and after used for magistrates and priests is
    A. Cavea | B. Proscenio | C. Tribunalia | D. Orchestra
Q5 (D) The main place for spectators is called
    A. Tribunalia | B. Parados | C. Orchestra | D. Cavea
Q6 (D) The service areas under the Cavea also have the function
    A. material storage for the show | B. harmonic resonance box | C. audience meeting point | D. ticket office
Q7 (M) How many spectators could the Falerone theater contain
    A. 3,000 | B. 1,600 | C. 20,000 | D. 150
Q8 (M) The Parados in the Roman theater is
    A. the cover of the cavea for repairing from the sun | B. the clothing of the actors | C. the wall covering the proscenium | D. the covered passage to access the orchestra
Q9 (D) The secondary actors were able to enter the stage
    A. from Porta Regia | B. from Vomitoria | C. from Parados | D. from Hospitalia
Q10 (M) In the II century AD, in the period of Antonino Pio, the Falerone theater
    A. was restored | B. was demolished | C. was rebuilt | D. was expanded
Q11 (M) The Porta Regia is the entrance
    A. of the emperor or king | B. of lead actors | C. of important audience | D. of magistrates
Q12 (M) The stage housed dedicated statues
    A. Giove | B. Minerva | C. Cerere | D. Apollo
Q13 (T/D) The steps that housed the public and the orchestra were lined
    A. in wood for better acoustics | B. in travertine and stone to give regularity to the structure | C. in cement conglomerate to reinforce the structure | D. in tufa blocks to lighten the structure
Q14 (M) How high is the Pulpitum compared to the Orchestra
    A. 3 metres | B. 1.30 metres | C. 20 centimetres | D. 5 metres
Q15 (D) What material the Pulpitum is made of
    A. marble | B. rammed earth | C. cocciopesto | D. wood
Q16 (T/D) The Roman city of Falerone was born as a settlement of
    A. veterans who had taken part in civil wars | B. new citizens from Rome who did not have accommodation in the city | C. the knights of Augustus who had fought against the Persians | D. veterans who had taken part in civil wars
Q17 (D) The Vomitoria are covered entrances
    A. placed around the theater to shelter from the rain or the sun | B. for actors | C. for spectators | D. to watch the actions of minor scenes
Q18 (T/D) The theater has a semicircular shape
    A. to allow the public to sit comfortably | B. to enhance vision and acoustics | C. to facilitate access to the cavea | D. to allow actors to see the public
Q19 (T/D) The Roman theater has an external portico
    A. TRUE | B. FALSE
Q20 (M) The summa cavea was intended for magistrates and officials
    A. TRUE | B. FALSE
Q21 (M) The ima cavea was intended for women and plebs
    A. TRUE | B. FALSE
Q22 (T/D) The backdrop of the theater scene has three doors
    A. TRUE | B. FALSE
Q23 (D) The velarium in the theater was used
    A. to close the scene | B. to protect from the sun | C. to avoid illuminating the scene | D. to avoid furnishing the proscenium
Q24 (M) The actors in ancient Rome were only men
    A. TRUE | B. FALSE
Q25 (D) Actors in Roman times used coturni
    A. to cover the head from the sun’s rays | B. to paint the hair according to the role played | C. to avoid having to change each scene | D. to be seen better by the spectators
Table A.1. Questionnaire given after the Experience on Falerone Theatre: The First Three Rows Represent the Pre-test; Column 3 shows the Possible Answers, with the Correct Answers Highlighted in Bold; Column 4 shows the Indicator with the Greater Weight
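For reproducibility, the following sketch is an assumption of ours and not part of the published material: it shows how an answer sheet for the questionnaire above could be aggregated into the three indicator scores. The question-to-indicator mapping reproduces the indicator column of Table A.1, while the equal per-question weighting and the function name indicator_scores are illustrative, since the actual weights are not reported here.

```python
# Minimal sketch: aggregating one answer sheet of the Falerone questionnaire
# into Mnemonic (M), Transversal (T), and Disciplinary (D) indicator scores.
# Questions tagged T/D in Table A.1 are counted for both indicators; equal
# weights per question are an assumption (the paper's weights are not shown).
QUESTION_INDICATORS = {
    "Q1": ["M"], "Q2": ["T"], "Q3": ["T", "D"], "Q4": ["D"], "Q5": ["D"],
    "Q6": ["D"], "Q7": ["M"], "Q8": ["M"], "Q9": ["D"], "Q10": ["M"],
    "Q11": ["M"], "Q12": ["M"], "Q13": ["T", "D"], "Q14": ["M"], "Q15": ["D"],
    "Q16": ["T", "D"], "Q17": ["D"], "Q18": ["T", "D"], "Q19": ["T", "D"],
    "Q20": ["M"], "Q21": ["M"], "Q22": ["T", "D"], "Q23": ["D"], "Q24": ["M"],
    "Q25": ["D"],
}

def indicator_scores(correct: dict) -> dict:
    """Return the share of correctly answered questions per indicator."""
    totals = {"M": 0, "T": 0, "D": 0}
    right = {"M": 0, "T": 0, "D": 0}
    for code, indicators in QUESTION_INDICATORS.items():
        for ind in indicators:
            totals[ind] += 1
            if correct.get(code, False):
                right[ind] += 1
    return {ind: right[ind] / totals[ind] for ind in totals}

# Example: a sheet with only Q1, Q3, and Q4 answered correctly.
print(indicator_scores({"Q1": True, "Q3": True, "Q4": True}))
```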

References

[1]
Marc Prensky. 2001. Digital natives, digital immigrants. On the Horizon 9, 5 (2001).
[2]
Kuo-Ting Huang, Christopher Ball, Jessica Francis, Rabindra Ratan, Josephine Boumis, and Joseph Fordham. 2019. Augmented versus virtual reality in education: An exploratory study examining science knowledge retention when using augmented reality/virtual reality mobile applications. Cyberpsychology, Behav. Social Netw. 22, 2 (2019), 105–110.
[3]
Faruk Arici, Pelin Yildirim, Şeyma Caliklar, and Rabia M. Yilmaz. 2019. Research trends in the use of augmented reality in science education: Content and bibliometric mapping analysis. Comput. Educ. 142 (2019), 103647.
[4]
Ling Cen, Dymitr Ruta, Lamees Mahmoud Mohd Said Al Qassem, and Jason Ng. 2019. Augmented immersive reality (AIR) for improved learning performance: A quantitative evaluation. IEEE Trans. Learn. Technol. 13, 2 (2019), 283–296.
[5]
Dilara Sahin and Rabia Meryem Yilmaz. 2020. The effect of augmented reality technology on middle school students’ achievements and attitudes towards science education. Comput. Educ. 144 (2020), 103710.
[6]
Doriana Cisternino, Laura Corchia, Valerio De Luca, Carola Gatto, Silvia Liaci, Liliana Scrivano, Anna Trono, and Lucio Tommaso De Paolis. 2021. Augmented reality applications to support the promotion of cultural heritage: The case of the Basilica of Saint Catherine of Alexandria in Galatina. Journal on Computing and Cultural Heritage (JOCCH) 14, 4 (2021), 1–30.
[7]
Ramy Hammady, Minhua Ma, and Carl Strathearn. 2020. Ambient information visualisation and visitors’ technology acceptance of mixed reality in museums. Journal on Computing and Cultural Heritage (JOCCH) 13, 2 (2020), 1–22.
[8]
Damtew Teferra and Philip G. Altbach. 2004. African higher education: Challenges for the 21st century. Higher Educ. 47, 1 (2004), 21–50.
[9]
Carl Benedikt Frey and Michael A. Osborne. 2017. The future of employment: How susceptible are jobs to computerisation? Technological Forecasting and Social Change 114 (2017), 254–280.
[10]
Rita C. Richey. 2008. Reflections on the 2008 AECT definitions of the field. TechTrends 52, 1 (2008), 24–25.
[11]
Roberto Pierdicca, Emanuele Frontoni, Rama Pollini, Matteo Trani, and Lorenzo Verdini. 2017. The use of augmented reality glasses for the application in industry 4.0. In Int. Conf. Augmented Reality, Virtual Reality and Computer Graphics. Springer, 389–401.
[12]
Egui Zhu, Arash Hadadgar, Italo Masiello, and Nabil Zary. 2014. Augmented reality in healthcare education: An integrative review. PeerJ 2 (2014), e469.
[13]
Vasiliki Liagkou, Dimitrios Salmas, and Chrysostomos Stylios. 2019. Realizing virtual reality learning environment for industry 4.0. Procedia CIRP 79 (2019), 712–717.
[14]
Elmedin Selmanović, Selma Rizvic, Carlo Harvey, Dusanka Boskovic, Vedad Hulusic, Malek Chahin, and Sanda Sljivo. 2020. Improving accessibility to intangible cultural heritage preservation using virtual reality. Journal on Computing and Cultural Heritage (JOCCH) 13, 2 (2020), 1–19.
[15]
Brian Boyles. 2017. Virtual reality and augmented reality in education. Center for Teaching Excellence, United States Military Academy, West Point, NY.
[16]
G. Cooper, H. Park, Z. Nasr, L. P. Thong, and R. Johnson. 2019. Using virtual reality in the classroom: Preservice teachers’ perceptions of its use as a teaching and learning tool. Educational Media Int. 56, 1 (2019), 1–13.
[17]
Jule M. Krüger, Alexander Buchholz, and Daniel Bodemer. 2019. Augmented reality in education: Three unique characteristics from a user’s perspective. In Proc. 27th Int. Conf. Computers in Education. 412–422.
[18]
Noureddine Elmqaddem. 2019. Augmented reality and virtual reality in education. Myth or reality? Int. J. Emerg. Technologies Learn. (iJET) 14, 03 (2019), 234–242.
[19]
Chengjiu Yin, Han-Yu Sung, Gwo-Jen Hwang, Sachio Hirokawa, Hui-Chun Chu, Brendan Flanagan, and Yoshiyuki Tabata. 2013. Learning by searching: A learning environment that provides searching and analysis facilities for supporting trend analysis activities. J. Educational Technol. Soc. 16, 3 (2013), 286–300.
[20]
Jacky C. P. Chan, Howard Leung, Jeff K. T. Tang, and Taku Komura. 2010. A virtual reality dance training system using motion capture technology. IEEE Trans. Learn. Technol. 4, 2 (2010), 187–195.
[21]
Patrick Salamin, Tej Tadi, Olaf Blanke, Frederic Vexo, and Daniel Thalmann. 2010. Quantifying effects of exposure to the third and first-person perspectives in virtual-reality-based training. IEEE Trans. Learn. Technol. 3, 3 (2010), 272–276.
[22]
Marc Ericson C. Santos, Angie Chen, Takafumi Taketomi, Goshiro Yamamoto, Jun Miyazaki, and Hirokazu Kato. 2013. Augmented reality learning experiences: Survey of prototype design and evaluation. IEEE Trans. Learn. Technol. 7, 1 (2013), 38–56.
[23]
K. E. Stavroulia, M. Christofi, E. Baka, D. Michael-Grigoriou, N. Magnenat-Thalmann, and A. Lanitis. 2019. Assessing the emotional impact of virtual reality-based teacher training. The International Journal of Information and Learning Technology.
[24]
Mohd Kamal Othman, Shaziti Aman, Nurfarahani Norman Anuar, and Ikram Ahmad. 2021. Improving children’s cultural heritage experience using game-based learning at a living museum. Journal on Computing and Cultural Heritage (JOCCH) 14, 3 (2021), 1–24.
[25]
Isabell Wohlgenannt, Jennifer Fromm, Stefan Stieglitz, Jaziar Radianti, and Tim A. Majchrzak. 2019. Virtual reality in higher education: Preliminary results from a design-science-research project. In Proc. 28th Int. Conf. Information Systems Development (ISD2019 Toulon, France). 1–10.
[26]
George Chang, Patricia Morreale, and Padmavathi Medicherla. 2010. Applications of augmented reality systems in education. In Int. Conf. Society for Information Technology and Teacher Education. Association for the Advancement of Computing in Education (AACE), 1380–1385.
[27]
R. Luckin and D. S. Fraser. 2011. Limitless or pointless? An evaluation of augmented reality technology in the school and home. Int. J. Technol. Enhanced Learn. 3, 5 (2011), 510–524.
[28]
John Dewey. 1986. Experience and education. In The Educational Forum, Vol. 50. Taylor & Francis, 241–252.
[29]
Elinda Ai-Lim Lee and Kok Wai Wong. 2008. A review of using virtual reality for learning. Trans. Edutainment I (2008), 231–241.
[30]
Jaziar Radianti, Tim A. Majchrzak, Jennifer Fromm, and Isabell Wohlgenannt. 2020. A systematic review of immersive virtual reality applications for higher education: Design elements, lessons learned, and research agenda. Comput. Educ. 147 (2020), 103778.
[31]
Mafkereseb Kassahun Bekele, Roberto Pierdicca, Emanuele Frontoni, Eva Savina Malinverni, and James Gain. 2018. A survey of augmented, virtual, and mixed reality for cultural heritage. J. Comput. and Cultural Heritage (JOCCH) 11, 2 (2018), 1–36.
[32]
Simona Naspetti, Roberto Pierdicca, Serena Mandolesi, Marina Paolanti, Emanuele Frontoni, and Raffaele Zanoli. 2016. Automatic analysis of eye-tracking data for augmented reality applications: A prospective outlook. In Int. Conf. Augmented Reality, Virtual Reality and Computer Graphics. Springer, 217–230.
[33]
Peng Chen, Xiaolin Liu, Wei Cheng, and Ronghuai Huang. 2017. A review of using augmented reality in education from 2011 to 2016. In Innovations in Smart Learning. Springer, 13–18.
[34]
Lev Semenovich Vygotsky. 1980. Mind in Society: The Development of Higher Psychological Processes. Harvard University Press.
[35]
Benjamin S. Bloom. 1956. Taxonomy of Educational Objectives, Vol. 1: Cognitive Domain. McKay, New York, 20–24.
[36]
Neven A. M. El Sayed, Hala H. Zayed, and Mohamed I. Sharawy. 2010. ARSC: Augmented reality student card. In Int. Conf. Computer Engineering (ICENCO). IEEE, 113–120.
[37]
Joanna K. Crosier, Sue Cobb, and John R. Wilson. 2002. Key lessons for the design and integration of virtual environments in secondary science. Comput. Educ. 38, 1-3 (2002), 77–94.
[38]
Hannes Kaufmann, Dieter Schmalstieg, and Michael Wagner. 2000. Construct3D: A virtual reality application for mathematics and geometry education. Educ. Inf. Technologies 5, 4 (2000), 263–276.
[39]
Chris Christou. 2010. Virtual reality in education. In Affective, Interactive and Cognitive Methods for E-learning Design: Creating an Optimal Education Experience. IGI Global, 228–243.
[40]
Michael Gargalakos, Elpida Giallouri, Aggelos Lazoudis, Sofoklis Sotiriou, and Franz X. Bogner. 2011. Assessing the impact of technology-enhanced field trips in science centers and museums. Adv. Sci. Lett. 4, 11-12 (2011), 3332–3341.
[41]
Melanie J. Maas and Janette M. Hughes. 2020. Virtual, augmented and mixed reality in K–12 education: A review of the literature. Technol. Pedagogy Educ. 29, 2 (2020), 231–249.
[42]
Eugene Ch’ng, Yue Li, Shengdan Cai, and Fui-Theng Leow. 2020. The effects of VR environments on the acceptance, experience, and expectations of cultural heritage learning. J. Comput. and Cultural Heritage (JOCCH) 13, 1 (2020), 1–21.
[43]
Diana Bogusevschi, Cristina Muntean, and Gabriel-Miro Muntean. 2020. Teaching and learning physics using 3D virtual learning environment: A case study of combined virtual reality and virtual laboratory in secondary school. J. Comput. Math. Sci. Teaching 39, 1 (2020), 5–18.
[44]
Paolo Clini, Ramona Quattrini, Emanuele Frontoni, Roberto Pierdicca, and Romina Nespeca. 2017. Real/not real: Pseudo-holography and augmented reality applications for cultural heritage. In Handbook of Research on Emerging Technologies for Digital Preservation and Information Modeling. IGI Global, 201–227.
[45]
Michela Mortara, Chiara Eva Catalano, Francesco Bellotti, Giusy Fiucci, Minica Houry-Panchetti, and Panagiotis Petridis. 2014. Learning cultural heritage by serious games. J. Cultural Heritage 15, 3 (2014), 318–325.
[46]
Carolina Cruz-Neira, Daniel J. Sandin, Thomas A. DeFanti, Robert V. Kenyon, and John C. Hart. 1992. The CAVE: Audio visual experience automatic virtual environment. Commun. ACM 35, 6 (1992), 64–73.
[47]
Fabio Bruno, Loris Barbieri, Antonio Lagudi, Marco Cozza, Alessandro Cozza, Raffaele Peluso, and Maurizio Muzzupappa. 2018. Virtual dives into the underwater archaeological treasures of South Italy. Virtual Reality 22, 2 (2018), 91–102.
[48]
Xiaozhe Yang, Pei-Yu Cheng, and Xue Yang. 2017. The impact of three types of virtual reality scene on learning. In Int. Conf. Educational Innovation through Technology (EITT). IEEE, 322–324.
[49]
Jani Holopainen, Antti Juhani Lähtevänoja, Osmo Mattila, Ilona Södervik, Essi Pöyry, Petri Parvinen, et al. 2020. Exploring the learning outcomes with various technologies: Proposing design principles for virtual reality learning environments. In Proc. 53rd Annu. Hawaii Int. Conf. System Sciences. University of Hawaii.
[50]
Yi-Chen Hsu. 2020. Exploring the learning motivation and effectiveness of applying virtual reality to high school mathematics. Universal J. Educational Res. 8, 2 (2020), 438–444.
[51]
Nori Barari, Morteza RezaeiZadeh, Abasalt Khorasani, and Farnoosh Alami. 2020. Designing and validating educational standards for E-teaching in virtual learning environments (VLEs), based on revised Bloom’s taxonomy. Interactive Learn. Environ. (2020), 1–13.
[52]
Jeffrey Jacobson, Kerry Handron, and Lynn Holden. 2009. Narrative and content combine in a learning game for virtual heritage. Distance Educ. 9, 2 (2009), 7–26.
[53]
Zhenan Feng, Vicente A. González, Robert Amor, Ruggiero Lovreglio, and Guillermo Cabrera-Guerrero. 2018. Immersive virtual reality serious games for evacuation training and research: A systematic literature review. Comput. Educ. 127 (2018), 252–266.
[54]
Ralph Saubern, Daniel Urbach, Matthew Koehler, and Michael Phillips. 2020. Describing increasing proficiency in teachers’ knowledge of the effective use of digital technology. Comput. Educ. 147 (2020), 103784.
[55]
Bashar Zogheib. 2019. Using structural equation modelling to study the influence of perceived usefulness and perceived compatibility on students’ attitudes towards using Ipad. Innovations, Technologies Res. Educ. (2019), 53–66.
[56]
Fred D. Davis. 1989. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly (1989), 319–340.
[57]
Emanuele Frontoni, Marina Paolanti, Mariapaola Puggioni, Roberto Pierdicca, and Michele Sasso. 2019. Measuring and assessing augmented reality potential for educational purposes: SmartMarca project. In Int. Conf. Augmented Reality, Virtual Reality and Computer Graphics. Springer, 319–334.
[58]
Mariapaola Puggioni, Emanuele Frontoni, Marina Paolanti, and Roberto Pierdicca. 2021. ScoolAR: An educational platform to improve students’ learning through virtual reality. IEEE Access 9 (2021), 21059–21070.
[59]
Marion G. Ben-Jacob. 2017. Assessment: Classic and innovative approaches. Open J. Social Sciences 5, 1 (2017), 46–51.
[60]
Balakrishnan Muniandy, Mei Yean Ong, Kia Kien Phua, and Saw Lan Ong. 2011. User acceptance of a key performance indicators monitoring system (KPI-MS) in higher education: An application of the technology acceptance model. In 2nd Int. Conf. Education Management Technology, Shanghai, China.
[61]
B. Muniandy, S. M. Y. Ong, K. K. Phua, and S. L. Ong. 2011. Assessing Key Performance Indicators Monitoring System (KPI-MS) of a university using technology acceptance model. International Journal of Social Science and Humanity 1, 3 (2011), 171.
[62]
Mahmudul Hasan, Nurazean Maarop, Ganthan Narayana Samy, Roslina Mohammad, Nurulhuda Firdaus Azmi, Noor Hafizah Hassan, and Nabilah Abdul Ghaffar. 2018. Measurement tool for assessing research information management system success. J. Telecommunication, Electron. Comput. Eng. (JTEC) 10, 3-2 (2018), 53–57.
[63]
Jum C. Nunnally. 1975. Psychometric theory—25 years ago and now. Educational Researcher 4, 10 (1975), 7–21.
[64]
Guan-Yu Lin. 2018. Anonymous versus identified peer assessment via a Facebook-based learning application: Effects on quality of peer feedback, perceived learning, perceived fairness, and attitude toward the system. Comput. Educ. 116 (2018), 81–92.
[65]
Shao-Chen Chang, Ting-Chia Hsu, and Morris Siu-Yung Jong. 2020. Integration of the peer assessment approach with a virtual reality design system for learning earth science. Comput. Educ. 146 (2020), 103758.
[66]
Bian Wu, Xiaoxue Yu, and Xiaoqing Gu. 2020. Effectiveness of immersive virtual reality using head-mounted displays on learning performance: A meta-analysis. Brit. J. Educational Technol. 51, 6 (2020), 1991–2005.
[67]
Heng Luo, Gege Li, Qinna Feng, Yuqin Yang, and Mingzhang Zuo. 2021. Virtual reality in K-12 and higher education: A systematic review of the literature from 2000 to 2019. J. Comput. Assisted Learn. (2021), 1–15.
[68]
J. Michael Spector. 2020. Remarks on progress in educational technology. Educational Technology Research and Development 68, 3 (2020), 833–836.
[69]
Edoardo Degli Innocenti, Michele Geronazzo, Diego Vescovi, Rolf Nordahl, Stefania Serafin, Luca Andrea Ludovico, and Federico Avanzini. 2019. Mobile virtual reality for musical genre learning in primary education. Comput. Educ. 139 (2019), 102–117.
[70]
Ayoung Suh and Jane Prophet. 2018. The state of immersive technology research: A literature analysis. Comput. Human Behav. 86 (2018), 77–90.
