1. Introduction
Massive Open Online Courses (MOOCs) have been with us for decades [1] and, more recently, have been recognised as facilitators for achieving the fourth of the United Nations (UN) 17 Sustainable Development Goals (SDGs), to "ensure inclusive and equitable quality education" [2,3]. Indeed, the inherent characteristics of online learning platforms, such as interactivity and their social and emotional dimensions, make them valuable learning tools for learners around the world. As far as interactivity is concerned, the growth of the digital ecosystem has favoured the emergence of numerous tools for creating dynamic and student-centered resources in MOOCs [4]. These resources include digital storytelling [5], infographics [6], interactive videos, and questionnaires, among others. Nevertheless, the social dimension has not yet been successfully exploited, since research studies have revealed that, despite the huge number of students typically enrolled in MOOCs, students usually learn in isolation [7]. Regarding the emotional dimension, recent findings show that MOOC students value content quality, user interface design, and enjoyment [8,9].
Moreover, MOOCs have to contend with low retention rates and a high no-show rate. Bezerra et al. [
10] divided the causes of the high dropout rates into two groups. On the one hand, some reasons are inherent to the instructional design of the course—duration, content, length, and types of activity—including the fact that some students sign up for the course out of curiosity, because it is free, or because they are interested in particular sections of it. In response to this problem, the educational community has contributed solutions such as the design of (short) "pills" of knowledge containing rich and compelling interactive content, as nanoMOOCs (nanoMOOCs are a new audio-visual educational format for knowledge pills; https://nanomoocs.cat/, accessed on 12 January 2023) do. On the other hand, other causes of low retention rates can be attributed to the student, such as little social interaction, boredom, tiredness, lack of motivation and time, and content that does not suit their skills. Thus, personalised solutions, an increase in cooperative activities among students, and the design of gamified experiences are potential ways of addressing these challenges.
Adaptive gamification, i.e., the design of personalised emotional learning experiences through gamification, is the approach that this paper focuses on. Gamification is the integration of Game Elements (from now on also referred to as GE) in non-game contexts in order to increase user participation and engagement in activities [
11]. In fact, research has shown that gamification has a positive impact on participation and retention in MOOCs [12]. However, integrating game elements and mechanics into MOOCs in a way that effectively enhances student engagement remains a challenge.
Additionally, the literature presents gamified experiences that mostly adopt the one-fits-all approach [
13], that is, a standard approach that is not tailored to individual needs and thus assumes that all users have the same gaming profile. In contrast, adaptive gamification has emerged as a means for adapting the gamified experience to user profiles [
14,
15]. These adaptive approaches use a
static profile of the player, which is computed at the beginning of the experience through the use of a player type questionnaire. Then, the gamified system presents the users with game elements that fit their initial (static) playing profile.
In a previous research paper [
16], we presented an adaptive gamification method that took this a step further, considering that the results of the player type questionnaire can be imprecise and also that player profiles can be dynamic. Thus, our method takes player profiles as initial information but also considers how these profiles change over time based on users’ interactions and opinions. Then, the users are provided with a personalized experience through the use of game elements that correspond to their
dynamic playing profile. In [
17], we presented a case study on the nanoMOOCs educational platform that provides the users with gamified knowledge pills, and performed a preliminary evaluation of the approach using a simulator with bots.
In this paper, we experimentally evaluate the contribution of the dynamic adaptive approach to participants’ experience with gamification. Specifically, we want to know whether our method has an impact on the hedonic dimension of user experience (UX). To do so, we compared Dynamic Adaptive Gamification (DynamicAG) and Static Adaptive Gamification (StaticAG) in terms of metrics that help us to measure user engagement with gamification. Note that this study is concerned with how students experienced (felt about) the gamification: to what extent did they interact with the game elements that appear throughout the gamified course? Did students like the look and feel of the game elements? Therefore, the impact of gamification on course completion and on engagement with educational activities is beyond the scope of this study. We evaluated our method with 63 students at schools in the city of Viladecans, Barcelona (Spain). Our results show that the dynamic adaptive gamification group interacted significantly more than the static adaptive gamification group, with a 40% increase in the mean number of interactions with game elements. Moreover, the dynamic adaptive strategy revealed a broader range of game elements to the users. Since the game elements were the same in both conditions, as expected, students in both groups had almost equal opinions regarding the elements’ look and feel.
The rest of the paper is structured as follows. The related work section reviews research works that aimed at providing the users with gamified personalized learning experiences. Next, we describe two adaptive gamification strategies, one that considers the profile of the player, but using a static player profile throughout the course, and our approach, which considers this profile as dynamic. The next section details the strategy followed to situate the game elements along the activities in the course. Then, the rewards logic defines the integration of students’ progress in the MOOC in terms of the gamification element rewards. The Materials and Methods section describes the details of the experiment to compare the StaticAG and DynamicAG approaches. The following sections present metrics, analyse the data gathered, and reflect upon the findings and limitations of our research. The last section provides conclusions drawn from this research and establishes directions for future work.
2. Related Work
The research literature on both MOOCs and blended learning environments has reported a positive influence of gamification on students’ motivation [
18], performance [
19,
20,
21], and collaboration [
22,
23]. In consequence, more and more commercial and open-source MOOC platforms are considering gamification either as a core feature or as a plug-in. For example, EdX’s Digital Badging system (
https://blog.edx.org/digital-badges-on-the-open-edx-platform, accessed on 12 January 2023) awards participants with badges when they complete a course, and Moodle’s Ranking Block (
https://moodle.org/plugins/block_ranking, accessed on 12 January 2023) shows a leaderboard displaying the points accumulated by students when performing educational activities.
However, the majority of MOOCs implement shallow gamification based simply on extrinsic motivation rewards like points, badges, or leaderboards. These basic approaches are evolving to include elements of entertainment, challenge and social interaction since these elements enhance students’ intention of continued usage of MOOCs as well as their course performance [
24]. In this study, we opt for using deep gamification mechanics [
25], based on intrinsic motivation rewards such as helping others and collaborative features like social status and social network.
Another challenge that gamified online courses face is to engage a variety of student personalities and preferences, and also to sustain participants’ engagement over time. A response to this challenge lies in acknowledging the diversity of participants and therefore in adaptive approaches, as suggested by [
26]. Actually, literature reviews dealing with player types research [
27,
28,
29] have provided thorough overviews of theories of player types, such as the one proposed by Bartle [
30], BrainHex [
31], and Hexad [
32]. Some of these taxonomies propose classifications focused solely on player types, others propose classifications that consider both learning (learner type) and gameplay aspects (player type), but all of them aimed to find a relationship between typologies of users and the game elements that best suit them [
33]. Moreover, these works conclude that users exhibit features of more than one kind of player type, as indicated by the results of publicly available questionnaires (as seen in The Average Type Scores of the HEXAD questionnaire
https://gamified.uk/UserTypeTest/user-type-test-results.php, accessed on 12 January 2023).
Approaches to personalised gamified learning experiences reveal two main aspects to consider: what to adapt and the criteria on which to base the adaptation. Regarding what to adapt, we can highlight the game elements to be shown to the student [
34,
35], the values of the rewards given to the student [
36], and the look and feel of the game elements. In relation to the criteria, the adaptation can be based on students’ (learner and player) profiles [
14,
37], their progress throughout the course, and their interaction with the game elements [
17]. In [
38], the authors suggested using their theoretical framework to modify the system at runtime based on the user’s preference and behaviour, but with neither a real application nor an experimental study that put the theoretical framework into practice. Another study [
15] proposed a matrix factorization method to obtain game element scores for each user and thus show students the element with the highest score. Previously mentioned works used either standard or self-reported questionnaires to capture static data, thus considering neither that the characteristics and preferences of individuals are dynamic in nature and can evolve over time, nor the possible lack of reliability of questionnaires [
39]. Our adaptive method explores this gap and proposes to dynamically capture real-time data from user interactions and, based on the data, provides the user with a personalised experience [
16,
17].
Regarding the AI technique that supported the adaptation task in different studies, we mainly found ML (Machine Learning)-based solutions. One study [
40] used machine learning to predict—based on the facial keypoint data of users—the results of a gamified activity. Other studies were not directly related to gamification in learning environments but to educational institutions or games. In [
41], the authors proposed a Case-Based Reasoning (CBR) architecture for designing adaptive text-based computer games. Data mining was also used for measuring institutional performance based on key performance indicators such as the number of schools, teachers, school locations, enrolment, and available facilities [
42]. Another approach, by Hallifax [
43], combined two machine learning techniques, Artificial Neural Networks (ANN) and CBR, to adapt gamification and learning content based on the learner’s profile. In line with [
15], our adaptive approach is based on matrix factorization, a well-known method of collaborative filtering in recommender systems.
3. Static Adaptive Gamification versus Dynamic Adaptive Gamification
In this section, we describe two Adaptive Gamification strategies, considering that diverse groups of users have differing motivations during an ongoing MOOC. Both strategies are based on the HEXAD player types classification [44], which guides the selection of the game elements shown to students during the MOOC, in contrast to “one-fits-all” strategies that assume that all students have the same profile.
One of the first approaches to adaptive gamification was proposed by Lavoué et al. [
15], who obtained a player profile at the beginning of the experiment, assuming that this player profile remained constant (static) over time. They relied on the BrainHex typology ([
31]) which represents the player profile as a vector of values of player types, so several player types were considered, and not only the predominant one [
45]. Moreover, they proposed a matrix factorization model to obtain the most suitable gamification element to be shown. They used two matrices, the first of which contains the player types of all the users, while the second matches the game elements and the player types (i.e., the Matching Matrix). The combination of these two matrices yields the game element scores for each user. Finally, the algorithm selects and shows the element with the highest score to the user.
We use the term Static Adaptive Gamification strategy,
StaticAG, to refer to a modified version of Lavoué’s [
15] original idea. In this static version [
16], we keep the main idea of initially obtaining a player profile and keeping it constant throughout the experience. However, we use the HEXAD classification of player types ([
32]), which differs slightly from the BrainHex classification used in their work. While BrainHex is a classification oriented towards video game players, HEXAD focuses more on the psychological aspects of gamified virtual environments. We also consider the case of participants who do not want to play (the non-player type). Then, in the
StaticAG version, we define the Player Profile vector as a 7-length array containing the HEXAD classification player types plus a non-player type. Consequently, we added the set of game elements related to these new player types—which is a subset of those proposed by Tondello [
44]. In
Figure 1, the Player Profile is shown as a 7-length vector, where each component represents the rating corresponding to each player type, in the range [0, 1]. For example, a HEXAD Individual Player Profile defined as (0.25, 0.15, 0.2, 0.4, 0.4, 0.1, 0) means that the student is 25% disruptor, 15% free spirit, 20% achiever, 40% player, 40% socializer, 10% philanthropist, and 0% non-player. Additionally,
Figure 1 shows the list of the considered Game Elements related to those Player Types. Following these changes in the player type classification and the addition of the non-player type, we adapted the Matching Matrix proposed by Lavoué [
15] in order to define the relationships between the new player types and the game elements most suitable for them (shown in
Figure 1, where the higher the value of the cell, the more suitable the game element is). Note that empty cells contain zero values denoting the incompatibility of the game element with the player type. The combination of a Player Profile and the Matching Matrix produces one scored value per game element (Step 1 in
Figure 1).
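To make Step 1 concrete, the following Python sketch combines a 7-length Player Profile with the Matching Matrix to produce one score per game element; the matrix values below are illustrative placeholders, not the actual values shown in Figure 1.

import numpy as np

# Player types: the six HEXAD types plus the added non-player type.
PLAYER_TYPES = ["disruptor", "free_spirit", "achiever",
                "player", "socializer", "philanthropist", "non_player"]

# Illustrative Matching Matrix: one row per game element, one column per player
# type, with values in [0, 1]; zero denotes incompatibility.
MATCHING_MATRIX = np.array([
    [0.9, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],  # development tool -> disruptor
    [0.0, 0.8, 0.2, 0.0, 0.0, 0.0, 0.0],  # easter egg
    [0.0, 0.0, 0.9, 0.3, 0.0, 0.0, 0.0],  # challenge
    [0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.0],  # points -> all player types
    # ... one row for each of the remaining game elements
])

def ge_scores(player_profile):
    # Step 1: the matrix-vector product yields one score per game element.
    return MATCHING_MATRIX @ np.asarray(player_profile)

# Example Player Profile from the text: 25% disruptor, 15% free spirit, etc.
print(ge_scores([0.25, 0.15, 0.2, 0.4, 0.4, 0.1, 0.0]))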
Note that the selection of the top-scored game element slightly constrains the experience of the user, who may end up losing their gamification engagement. In the static adaptive gamification approach, since the Player Profile does not fluctuate during the experience, the top-scored game element will always be the same and the user will therefore only interact with a single game element, even if that game element varies in appearance or reward values. In order to provide more variability in the gamified experience, we selected game elements by considering the scores of the game elements as probabilities. Thus, we used the normalised scores of the GE Scorings vector (Step 2 in
Figure 1) as the probability that each game element is shown. We selected a weighted random value from the GE Probabilities vector shown in Step 3 of Figure 1, instead of always selecting the top-scored one, as Lavoué did.
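A minimal sketch of Steps 2 and 3: the GE scores are normalised into a probability vector and a game element is drawn at random with those weights, rather than always taking the top-scored one.

import numpy as np

def select_game_element(scores, rng=None):
    # scores: one value per game element, as produced in Step 1.
    rng = rng or np.random.default_rng()
    scores = np.asarray(scores, dtype=float)
    if scores.sum() == 0:
        # Mostly non-player profile: fall back to a uniform random draw.
        return int(rng.choice(len(scores)))
    probabilities = scores / scores.sum()                  # Step 2: GE Probabilities
    return int(rng.choice(len(scores), p=probabilities))   # Step 3: weighted random pick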
However, as the initial player profile may either be inaccurately defined by the user when filling in the questionnaire or evolve slightly over time [
46], the Dynamic Adaptive Gamification strategy,
DynamicAG, relies on the fact that the behaviour (i.e., interactions) of each student is useful for tuning the values of his/her player profile during the experience [
16]. In our strategy, we re-estimate the Player Profile at certain milestones in the MOOC based on the learner’s interactions and opinions about the Game Elements shown to him/her earlier in the MOOC [
17]. Basically, the dynamic algorithm recomputes the player profile, i.e., all the player types, at the end of some learning activities in the MOOC in order to select the game element that is the best fit at that moment. Once the new Player Profile is obtained, we use the Matching Matrix, as in the static adaptive gamification strategy (shown in
Figure 1, Step 1), to obtain the next game element to be shown to the user. Note that the Matching Matrix has as many rows as the number of considered Game Elements (GE), where each row contains values between 0 and 1, representing how this Game Element affects the corresponding player type. For instance, the GE development tool applies to the Disruptor player type (first row) and the Points Game Element applies to all the player types (seventh row).
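The exact update rule is not reproduced here; the sketch below assumes a simple additive update in which two small tuning parameters (mentioned in the Discussion, values assumed) weight the interactions with, and the star ratings of, the game elements shown so far, projected back onto player types through the Matching Matrix.

import numpy as np

# Assumed small tuning parameters; the paper mentions two such parameters but
# does not give their values or the precise update rule.
ALPHA_INTERACTIONS = 0.02
BETA_OPINIONS = 0.02

def update_player_profile(profile, interactions, ratings, matching_matrix):
    # interactions: per-game-element interaction counts since the last milestone.
    # ratings: per-game-element mean star ratings, normalised to [0, 1].
    evidence = matching_matrix.T @ (ALPHA_INTERACTIONS * np.asarray(interactions)
                                    + BETA_OPINIONS * np.asarray(ratings))
    # Nudge the profile towards the player types whose game elements attracted
    # the student, keeping every component in the valid [0, 1] range.
    return np.clip(np.asarray(profile) + evidence, 0.0, 1.0)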
The DynamicAG strategy is especially useful for profiles that are mostly non-players. In this case, the system shows random game elements during the experience, giving students some opportunities to interact with them and to join the gamified experience. If a student still does not interact, we interpret this as a desire not to play, and gamification is disabled.
4. The Design of Adaptive Gamification in MOOCs
Embedding gamification in MOOCs to engage students is not an easy task. Game elements must be integrated into the MOOC in terms of graphic design, maintaining the same appearance as other course activities, but they must also be strongly connected to the student’s current progress in the MOOC. Moreover, as MOOCs host diverse player profiles, selecting game elements during the course based on different player types is crucial for engaging students in the gamified experience.
In the following sections, we describe how we managed the appearance of the game elements during the course, as well as the rewards logic that integrates a student’s progress in the MOOC with the gamification element rewards.
4.1. Game Element Distribution in the MOOC
A common non-gamified MOOC is a course organised in a sequence of learning activities following an itinerary (see
Figure 2). These learning activities are usually grouped in formative units to achieve diverse learning outcomes. Each formative unit contains formative activities, such as interactive info-graphics, presentations, videos, readings, etc., and graded activities, such as questionnaires and problems. When we consider gamifying a MOOC, the main objective is to engage students in the course by making boring activities more fun and rewarding according to their achievements. Thus, we propose the incorporation of game elements after the most difficult activities, such as those that require additional work from the student, which at the same time allows us to grant a prize based on a student’s effort or performance.
Additionally, we incorporated a dashboard so that the student could be informed at all times of his or her progress in the game (see top right of
Figure 3). In this figure, we also show how the gamification is embedded in the learning itinerary. In Step 1, students answer a Player Type Questionnaire to define their Player Profile, which is needed for both the StaticAG and DynamicAG adaptive strategies. Students have the choice of not answering the questionnaire, in which case we consider the player profile to be a non-player. In [
17], we proposed the use of the player type questionnaire [
32]
(https://www.gamified.uk/UserTypeTest2016, accessed on 1 December 2022). Users see their initial Player Profiles, i.e., the results of the questionnaire, in the Dashboard embedded in the course. The Dashboard highlights the Player Profile, showing the students their player types with the highest scores (see green circles). It also allows students to interact with already obtained game elements and to select their avatar. In Step 2, the course starts with the learning activities of the first formative unit. At the end of some activities, gamification becomes active when students ask for their award, and only when they claim it (Step 3) is the gamification algorithm triggered to suggest the game element to be displayed (Step 4). In this step, as we can see in the bottom right of
Figure 3, the student can perform several actions: (i) ask for more information, (ii) check her game status on the dashboard, (iii) play with the game element (for instance, play the lottery), and (iv) rate the current game element with stars. All these interactions are recorded by the gamification system and, in the case of dynamic adaptive gamification, are used to recompute the Player Profile. Finally, in Step 5, control passes back to the learning system, and the next activity in the formative unit of the course follows.
4.2. Rewarding Logic
It is worth mentioning that it is not only important to know the next game element to show, but it is also crucial that the gamification reflects the achievements in the course. Thus, the gamification system uses the progress of the students and their learning outcomes in the graded activities to rate the value of the reward contained in the game elements.
Our rewarding logic is based on a global score computed as a weighted value of the performance in the learning activities previously performed by the user, and the student’s current status (as shown in
Figure 2). We define the GlobalScore at some activity t as a weighted combination of two terms: the Performance, which is the normalised mean of the ratings obtained up to the current activity t, i.e., a value in the range [0, 1], and the Status, which indicates where the game element appears in the learning itinerary. The Status is also a value between 0 and 1, where 0 indicates that the learner is at the beginning of the course, and 1 means that the student has completed all course activities. The Status term allows us to better reward students as they approach the end of the course. Moreover, we select the avatar look based on the Status value, i.e., we use the newbie, medium, and pro avatars as the student’s status moves through three consecutive intervals of [0, 1] (see the different looks in Figure 2).
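As a sketch, the rewarding logic can be expressed as follows; the equal weighting of the two terms and the three equal status intervals for the avatars are assumptions, since the text only states that the terms are weighted and that three avatar looks exist.

def global_score(performance, status, w_performance=0.5):
    # performance: normalised mean of the ratings obtained so far, in [0, 1].
    # status: position of the game element in the learning itinerary, in [0, 1].
    # The 0.5 weighting is illustrative only.
    return w_performance * performance + (1.0 - w_performance) * status

def avatar_for(status):
    # Illustrative thirds of [0, 1]; the actual interval boundaries are not
    # reproduced here.
    if status < 1 / 3:
        return "newbie"
    if status < 2 / 3:
        return "medium"
    return "pro"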
The GlobalScore allows for the setup of some game elements, and is used either for computing their properties (e.g., rewards in lotteries) or for guiding changes to their visual appearance (e.g., icons of badges and gifts). The properties of the game element that are susceptible to change are related to the type of game element: (i) the values of rewards for Points, Challenges, Easter Eggs, Lotteries, Unlockables, and Gifts; (ii) difficulty in Challenges; (iii) types of Badges, i.e., badges, stars, medals, cups; (iv) the number of elements allowed to be modified with the Development Tools; and (v) the number of recommended friends in social networks. The visual elements that can be modified are the colours of the Easter Eggs, the symbols of the icons in Points and Badges, and some animations in the Challenges.
In summary,
Table 1 shows the game elements modifiable in terms of both their properties and appearance. For instance, challenges rely on the GlobalScore to shape their visual look and feel, showing different numbers of mountains to climb, and also to set up levels of difficulty and the reward that the users receive. Some other game elements (see Table 2), such as lotteries, adjust only their rewards according to the GlobalScore but maintain the same look and feel. Others only reflect information about the student’s progress in the learning itinerary, i.e., levels, and their gamification status, i.e., leaderboards, but without using the global score to configure their properties or appearance (see
Table 3).
5. Materials and Methods
This section describes the experimental setup required to answer our research question regarding the impact of the DynamicAG approach on the engagement of the user with gamification. We first present the course that was used for testing our adaptive dynamic gamification approach. We then give a detailed description of other aspects of the experiment including ethical approval, recruitment of participants on the course, experimental and control groups, data that we collected based on our research question, and the statistical methods that we used to analyse the data.
5.1. The Gamified Course: “How to Recycle Plastics from the Sea?”
The course was designed by experts in pedagogy at the Edebe company, based in Barcelona, Spain (https://edebe.com/, accessed on 12 January 2023). It is addressed to high-school students aged between 15 and 16. The goal of the course is to provide them with practical and theoretical knowledge about recycling plastics found in the sea and also to raise awareness about the problem of plastic spills. It contains four formative units, each one consisting of activities including readings, videos, interactive info-graphics, and questionnaires. These activities encourage students, through critical thinking, to reason about the problem of plastics in the sea and to propose creative and effective solutions.
The gamification of the course is designed so that the game elements appear when the participants have completed a series of activities. For example, a gamification element (GE) appears to motivate and reward the students when they have watched a video and then taken a quiz. As the gamification system is adaptive, different students are rewarded with different gamification elements depending on their player type.
5.2. Ethical Considerations
Prior to the beginning of our study, we formulated a statement regarding the ethical implications of interactions with human beings, for the approval of an ethics committee. It included a description of the expected impact of the research on the participants, and information about the voluntary nature of participation in the study, the data to be collected and anonymously stored following standards of encryption and security, and the informed consent documents for participants. As the participants were not adults, we prepared two consent forms, one for the parents and another for the school directors. The institutional ethical review board of the University of Barcelona approved our study in December 2021.
5.3. Recruitment of Participants
As the study was planned to be held from January to February 2022, we started the recruitment three months earlier so that the course could fit into the schools’ educational scheduling. We looked for students who were currently in the 3rd and/or 4th year of high school, and we restricted the search to the city of Viladecans (Barcelona), since it is sufficiently large to enable us to recruit students from diverse social and educational backgrounds. In addition, thanks to our experience with previous projects, we knew that schools in Viladecans were open to participating in innovative educational initiatives.
Firstly, a call was made to all schools in the town to explain the main characteristics of the project. Subsequently, groups of students from five schools that voluntarily confirmed their interest were randomly selected and assigned to the experimental (DynamicAG) and control (StaticAG) groups. The initial number of participants in the dynamic adaptive gamification group was 46, and 51 in the static adaptive gamification group, but due to circumstances mainly related to the coronavirus epidemic (forced isolation for teachers and students) these numbers dropped to 30 and 33, respectively. Gender was fairly evenly distributed in both groups (44.68% females and 55.32% males).
5.4. Experiment Design
We adopted a quasi-experimental design with a control group. The experiment was a between-subjects design where each participant in the same class took the course in one of the two conditions (dynamic adaptive gamification or static adaptive gamification). Once the groups of students were randomly assigned to either the experimental or the control group, to ensure that these groups were comparable, we administered the SIMS motivation test, with two additional questions, to individuals in both groups before starting the experiment.
The Situational Motivation Scale (SIMS) questionnaire has the objective of measuring intrinsic motivation in students. Four questions refer to the measurement of intrinsic motivation while another four questions measure the level of amotivation. All of them were extracted from the SIMS Spanish version validated by [
47]. We added two more questions to gauge whether there were significant differences in the preference for games/gamification between the subjects in the two experimental groups.
The data gathered with the SIMS questionnaire helped us to test the internal validity of the experiment since significant differences would have an impact on the interpretation of the results of the study. We observed that students started from a very similar state of motivation because no significant differences were found (see
Figure 4). Specifically, although the DynamicAG group had a slightly higher intrinsic motivation (IM) than the other group, this difference was not significant (Mann–Whitney U, p-value = 0.9265460; p > 0.05, so we fail to reject the null hypothesis). Amotivation was almost the same in both groups (a Mann–Whitney U test confirms this; p-value = 0.99498195). Similarly, the DynamicAG group started with higher results than the StaticAG group in the G-Gamer measure, but the difference was not significant either (Mann–Whitney U; p-value = 0.37294404). In the two groups, we observed that the “gamer motivation” (G) indicator was the highest, followed by the “intrinsic motivation” (IM) indicator and finally by the “amotivation” (A) indicator. The value of the G indicator reflects the willingness to “play” of both groups, which is desirable for performing our experiment.
5.5. Procedure
We created two versions of the course on the nanoMOOCs platform. They had identical activities but different gamification configurations: static adaptive and dynamic adaptive, respectively. Player profile updating was disabled in the course with static adaptive gamification. It is worth noting that participants could not tell which version of the course they were taking.
The experiment consisted of the following stages:
Presentation session in the schools. We visited the schools to instruct the students on platform registration as well as filling in the motivation pre-test and the HEXAD gamification user type questionnaire.
Course realisation. The students had 20 days to perform the course activities, which required a total of between 10 and 20 h. Since the course was online, participants were able to carry out the activities in any place where they had access to a computer with an Internet connection (such as at school, a library or their home). Although participating teachers were somewhat free to organise and monitor the progress of the educational content of the course, we provided them with a recommended plan for completing the course over the weeks of the experiment. During this stage we performed data collection.
Final session in the schools. This was performed face-to-face in one school and online in others, due to COVID-19 restrictions. We wanted to meet again with the participants to record their opinions and comments as well as to share the interesting data gathered throughout the experiment with the students.
5.6. Data Collection and Statistical Analysis
To answer our proposed research question regarding user engagement with gamification, we collected the following data (see
Figure 5):
Dashboard interactions: the number of interactions with the gamification dashboard by a student.
Game elements shown: the number of times that a Game Element was shown to a student during the experience.
Game element interactions: the number of user interactions with Game Elements during the whole experiment.
Time with game elements: the total time spent with game elements by a student.
Game element score: the mean score that a user gave to game elements during the course.
Note that we assume that the more engaged the students are with the gamification, the more they will interact with these GEs. Thus, all these metrics are related to the interaction with gamification elements and not to the interaction with other user interface elements.
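For clarity, the collected engagement data can be represented per student as in the following sketch; the field names are descriptive placeholders, not the identifiers used in the logging system.

from dataclasses import dataclass

@dataclass
class GamificationEngagement:
    dashboard_interactions: int   # clicks on the gamification dashboard
    elements_shown: int           # times a game element was displayed to the student
    element_interactions: int     # interactions with game elements during the experiment
    minutes_with_elements: float  # total time spent with game elements
    mean_element_score: float     # mean score given to game elements, normalised to [0, 1]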
Data gathered during the experiment were analysed using both descriptive statistics and non-parametric comparative tests to evaluate differences between the experimental and control groups. Before comparing means, we checked each measure for normality using the Kolmogorov–Smirnov test and for variance equality using the Levene test. The Brunner–Munzel test was used to compare means for non-normal distributions with unequal variances. In measurements with equal variances, mean comparisons were computed using Mann–Whitney U tests. Tests obtaining p-values lower than 0.05 were considered statistically significant. The calculations were computed using the SciPy package [
48].
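The test-selection logic described above can be sketched with SciPy as follows; this is a simplification, and a full analysis would also report effect sizes such as Cohen’s d.

from scipy import stats

def compare_groups(dynamic, static, alpha=0.05):
    # Check normality of each group (Kolmogorov-Smirnov on standardised data)
    # and equality of variances (Levene).
    normal = (stats.kstest(stats.zscore(dynamic), "norm").pvalue > alpha and
              stats.kstest(stats.zscore(static), "norm").pvalue > alpha)
    equal_var = stats.levene(dynamic, static).pvalue > alpha
    if not normal and not equal_var:
        # Non-normal distributions with unequal variances: Brunner-Munzel.
        name, result = "Brunner-Munzel", stats.brunnermunzel(dynamic, static)
    else:
        # Otherwise compare the groups with the Mann-Whitney U test.
        name, result = "Mann-Whitney U", stats.mannwhitneyu(dynamic, static)
    return name, result.pvalue, result.pvalue < alpha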
6. Results
This section analyses the gamification data collected during the experiment. First, we report on how the individual player profiles changed in the DynamicAG experimental group. Recall that the player profile did not change in the StaticAG control group. We analysed the evolution of the player type, computing the difference between the player profile at the beginning and at the end of the course. We observed that the player profile changed in 40% of the participants (18 out of the total of 44). In fact, these differences were very small, although greater than the maximum double-precision rounding error, but they were relevant enough to change the GE Probabilities associated with each Game Element (see Step 2 in
Figure 1). Consequently, these changes had an impact on the selection of Game Elements in the DynamicAG group and thus, the overall experience of the students in this group. Specifically, these dynamic changes in players’ profiles occurred in participants with a high number of interactions and a high average time spent with the gamified experience.
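A small sketch of how such a change can be detected, flagging any profile component whose drift exceeds double-precision rounding error:

import numpy as np

def profile_changed(initial_profile, final_profile):
    # Difference between the Player Profile at the beginning and at the end of
    # the course; changes count only if they exceed double-precision rounding error.
    drift = np.abs(np.asarray(final_profile) - np.asarray(initial_profile))
    return bool(np.any(drift > np.finfo(np.float64).eps))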
Figure 6 presents a comparison of the number of dashboard interactions between the StaticAG (blue bars) and DynamicAG (orange bars) groups. On the left, the histogram shows that the majority of the users in the StaticAG group interacted less with the Dashboard (all the values are between 0 and 10 clicks), while users in DynamicAG were more active in using the dashboard. Indeed, the comparison of the means (right side of the figure) confirms that the difference is significant (Brunner–Munzel test, p-value = 0.040 < 0.05; Cohen’s d = 0.5821). In relation to the dispersion in dashboard interactions, we analysed the data thoroughly and realised that it corresponded to students who played games regularly, suggesting that this element may have been familiar to them and that they were therefore more inclined to visit the dashboard.
Figure 7 presents a between-group comparison of the number of times game elements were shown. The figure shows the same trend as the previous one. Note that the display of the GEs requires the users to claim a reward by selecting the “I want an award” button (see the left side of Figure 5). Even though this button was equally available to both groups, the DynamicAG group was more engaged and participants in this group demanded awards more frequently. In fact, a comparison of the means (right side of the figure) shows a significant difference (Brunner–Munzel, p-value = 0.031749 < 0.05; Cohen’s d = 0.7175, between a medium and large effect size).
Game elements are not only aesthetically appealing but also interactive, since they allow students to get rewards, e.g., the lottery, the gift, or sending a tip to classmates. Figure 8 presents the average number of users’ interactions with game elements during the whole experience. The distribution of orange bars along the entire x axis indicates that students in the experimental condition (DynamicAG) interacted more with game elements than those in the control condition (Brunner–Munzel, p-value = 0.011 < 0.05; Cohen’s d = 1.0001, a large effect size). However, this metric gives low values in both conditions, which can be explained by the discoverability of the designed elements [
49]. That is, the students did not notice that some elements were interactive.
Another measure of students’ engagement with gamification is the average time that they spent interacting with game elements. Figure 9 shows certain parallels with the previous figure, especially in the low values for minutes. In fact, both measures are closely related (Mann–Whitney U, p-value = 0.27 > 0.05) because the more interactions students had with the game elements, the more time they spent on them. Nevertheless, measuring time is a non-trivial task as students may leave, for instance, the dashboard open while they are doing other things (such as getting up from the chair or chatting with classmates). Note that in this case, we obtained some outliers (i.e., values higher than 70 min), as the right part of the figure reveals.
We also gathered the score that students gave to the game elements that appeared throughout the course (see the game element score in the middle of Figure 5). To do so, we computed the mean score of each mechanic (normalized between 0 and 1) as the quotient between the sum of the scores given on each occurrence and the number of times, n, that the game element was shown. If a game element was shown more than once to a particular user, we added the score for each occurrence.
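In other words, for each game element and student, a minimal sketch of the computation is:

def mean_element_score(occurrence_scores):
    # occurrence_scores: the normalised score (in [0, 1]) given on each occasion
    # the game element was shown to this student.
    n = len(occurrence_scores)
    return sum(occurrence_scores) / n if n else None  # no score if never shown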
In general, DynamicAG scores equalled or slightly exceeded those obtained in StaticAG. Participants in both groups scored the mechanics almost equally (DynamicAG = 0.456, StaticAG = 0.483); indeed, this difference is not significant (Mann–Whitney U, p-value = 0.699). To further analyse the scores of the Game Elements, Figure 10 shows a histogram of scores detailed by game element. The majority of students in both conditions gave a medium score to GEs, but scores higher than 0.65 only occurred in the dynamic adaptive gamification condition.
As can be seen in the figure, several game elements have no score in StaticAG (badge, gift, lottery, development tool, knowledge share, and unlockable). The reason for this is that the fixed game elements corresponding to the initial player profile limited the number of mechanics shown to the students and, as a consequence, they were not able to score them. This was not the case for the DynamicAG group, where all 14 game elements were shown and scored by participants.
Specifically, game elements shown in Table 1 with both changeable properties and appearance, such as Easter eggs and challenges, scored well. The low score of the badge element may be due to its visual design: the badge variants looked similar, so the students probably could not perceive the differences and believed that they were receiving the same badge all the time. Nevertheless, this is a supposition that remains to be verified. Moreover, there are game elements with changeable properties but fixed visual appearance (see
Table 2), such as unlockable, lottery, development tools, and gifts, that obtained low scores. We think that, again, the discoverability of the elements may be the reason behind these results. If the users had been aware of how they functioned, their opinion could have been more positive. Finally, game elements targeting socializers (social status, social network) and the leaderboard were rated positively by participants.
7. Discussion
Overall, our results show that approaches to gamification that take into account the player profiles have a positive impact on the user experience. Indeed, these results are in line with previous research [
15]. However, we went further, updating the player profile during the experience. Our findings indicate that the dynamic approach, compared to a static player profile, fits students’ preferences better, as shown by the differences found in the metrics of engagement with gamification.
Moreover, we used weighted random probabilities to select, from the entire set of GEs, the one to show to the student in both the static and dynamic approaches (see Selection 1 in
Figure 1), making the static approach more “dynamic” than a purely static approach that would have selected the game element with the highest score (see Selection 1 in
Figure 1). Notice that our study could not evaluate the behaviour of non-player profiles because there were no participants who chose not to “play”, as expected with 15–16-year-old participants. Even so, the results of the DynamicAG group surpassed those in StaticAG.
Throughout the development of this project we learned some lessons to share regarding different aspects: (i) the setup of the DynamicAG approach, (ii) the instructional design of the course, and (iii) the design of gamification elements.
First, in relation to the DynamicAG approach, there are certain parameters used to change the Player Profile depending on the interactions and opinions of the students regarding game elements. To avoid large fluctuations in the Player Profile values, our method uses small values for these two parameters. However, these small values produced only small variations in the player profile. Consequently, the method for tuning these parameter values needs to be revisited.
Second, another aspect to be considered is the influence of the instructional design. We proposed this experiment in a nanoMOOC, which is a short and very focused learning itinerary. Therefore, the adaptive gamification method had to tune the Player Profile with limited data in a limited time. A possible extension is to maintain the gamification across several courses and thereby collect new data to continually improve the Player Profile model.
Third, an important factor is the design and distribution of the game elements across the span of the instructional activities. In the gamification design, we considered the incorporation of game elements after the most arduous activities. However, the gamification elements also require feedback regarding the student performance in those activities in order to adapt their visual aspect or to change their properties, such as rewards. This fact implies that activities that cannot be tracked by the learning platform, such as reading a pdf, cannot be rewarded with a game element. Moreover, both adaptive gamification approaches automatically select the game elements to display, but the DynamicAG performed better. First, it distributed a wider range of GEs than the StaticAG. Second, the GEs were generally scored higher by students. Future studies may analyse the impact that the changes in appearance and properties of GEs have on the whole experience.
Additionally, the architecture of the Open edX platform has some limitations. For instance, when students finish an activity, they must select the “I want an award” button to display the Game Element, which means that users may overlook such game elements.
Another challenge to face in the context of online courses is how to measure the “real”/“effective” time spent by students on activities: tracking the presence of the student in the course implies detecting the active window, capturing the student’s attention in this active window, and capturing the student’s face, among other signals. Thus, we recommend a careful choice of learning platform, since technological limitations can complicate the seamless integration of gamification strategies.