Explainable Artificial Intelligence for Human-Centric Data Analysis in Virtual Learning Environments

Jose M. Alonso¹ [0000-0003-3673-421X] and Gabriella Casalino² [0000-0003-0713-2260]

¹ Centro Singular de Investigación en Tecnoloxías Intelixentes (CiTIUS), Universidade de Santiago de Compostela, Spain
josemaria.alonso.moral@usc.es
² Department of Computer Science, University of Bari Aldo Moro, Italy
gabriella.casalino@uniba.it

Abstract. The amount of data to analyze in virtual learning environments (VLEs) grows exponentially every day. The daily interaction of students with VLE platforms leaves a digital footprint of the students' engagement with the learning materials and activities. This large and valuable source of information needs to be managed and processed to become useful. Educational Data Mining and Learning Analytics are two research branches that have recently emerged to analyze educational data. Artificial Intelligence (AI) techniques are commonly used to extract hidden knowledge from data and to construct models that can be used, for example, to predict students' outcomes. However, in the educational field, where the interaction between humans and AI systems is a main concern, there is a need to develop Explainable AI (XAI) systems that are able to communicate data analysis results in a human-understandable way. In this paper, we use an XAI tool called ExpliClas to facilitate data analysis in the context of the decision-making processes carried out by the stakeholders involved in the educational process. The Open University Learning Analytics Dataset (OULAD) has been used to predict students' outcomes, and both graphical and textual explanations of the predictions have shown the need for and the effectiveness of XAI in the educational field.

Keywords: Educational Data Mining · Data Science · Trustworthy AI · Explainable AI · Virtual Learning Environments

1 Introduction

The history of distance education starts almost two centuries ago with postal services [20]. With the advent of the Internet, significant changes have occurred, and the use of online distance learning (e-Learning, in short) platforms has grown exponentially. These virtual learning environments (VLEs) eliminate the physical distance between learners and courses, thus facilitating and favouring enrollment. In addition to online teaching material, VLEs provide a set of synchronous and asynchronous study assistance tools, such as chats, video lessons, forums, wikis, messaging systems, emails, etc. In these environments, the student's learning behaviour can be elicited by observing her interaction with the platform: the number of times the student has visited the main page, the number of messages she has exchanged with the professor, the number of extra resources that have been uploaded, and so on. Accordingly, the observation of the student's learning behaviour could be used to suggest adaptive feedback, customized assessment, and more personalized attention [24]. All the stakeholders involved in VLEs, such as teachers, tutors, students, and managers, can take advantage of the information obtained through educational data analysis.

Educational Data Mining (EDM) and Learning Analytics (LA) are two research branches that are attracting increasing attention. They use Artificial Intelligence (AI) techniques to collect, process, report and act on educational data, in order to improve the educational process [2]. Indeed, applying LA to historical VLE activity data makes it possible to predict students' failure or success, and it is commonly used to improve student retention [30].
Several studies have proved the effectiveness of EDM and LA techniques in analyzing educational data [8]. Most related work in the scientific literature is devoted to predicting students' performance [1, 10, 22]. In addition to prediction techniques, visualization techniques are applied to observe students' performance [11, 12, 15, 23], numerical methods can facilitate unveiling hidden learning skills [6], and learners can be grouped into categories automatically extracted from empirical data [7, 21]. There are also advanced techniques to manage and analyze big educational data [26-28, 31].
Getting effective explanations is becoming more and more important in the social sciences [19]. This general trend is confirmed in Education Science. Even though current AI tools have proved ready to find valuable knowledge in the context of EDM and LA, their effectiveness for decision-making support is still limited by a lack of explanation ability. Thus, in applications such as e-Learning, where the interaction between humans and AI systems is a main concern, there is a need to develop Explainable AI (XAI, in short) systems. It is worth noting that this is aligned with the XAI scientific challenge launched in 2016 by the USA Defense Advanced Research Projects Agency (DARPA), which remarked that "even though current AI systems offer many benefits in many applications, their effectiveness is limited by a lack of explanation ability when interacting with humans" [14]. XAI systems are expected to provide users with comprehensible explanations through natural interaction. Moreover, the European Commission emphasizes the importance of boosting innovation and investment in AI technologies as follows: "EU must therefore ensure that AI is developed and applied in an appropriate framework which promotes innovation and respects the Union's values and fundamental rights as well as ethical principles such as accountability and transparency"³. Hence, XAI systems are expected to be beneficial to society through fairness, transparency and explainability, regarding not only technical but also ethical and legal issues. The interested reader is kindly referred to [13], where an exhaustive review of XAI techniques is presented.

³ European Commission, Artificial Intelligence for Europe, Brussels, Belgium, Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions, Tech. Rep., 2018, SWD(2018) 137 final, https://ec.europa.eu/digital-single-market/en/news/communication-artificial-intelligence-europe
Providing students with explanations in relation to their learning activities is expected to be highly appreciated and to contribute to better student satisfaction and qualifications. Moreover, XAI systems may assist teachers and managers when designing courses and contents. In this paper, we describe the use of ExpliClas [4], a web service ready to provide users with multimodal (textual + graphical) explanations, in the context of e-Learning. Namely, we study the utility and effectiveness of the explanations automatically generated by ExpliClas when considering the Open University Learning Analytics Dataset (OULAD) [17].

The rest of this manuscript is organized as follows. Section 2 introduces materials and methods. Section 3 presents a use case. Section 4 concludes the paper with some final remarks and points out future work.

2 Preliminaries

2.1 The ExpliClas Web Service

ExpliClas [4] is an XAI tool aimed at facilitating the comprehension of AI systems by both expert and non-expert users. At the core of ExpliClas there are AI, Natural Language (NL) processing and generation techniques, argumentation, and Human-Computer Interaction technologies. Fortunately, all these technologies are transparent to the user thanks to a user-friendly interface. Accordingly, users only need to concentrate on reading textual explanations as well as visualizing simple and intuitive graphical model representations.

Currently, ExpliClas generates multimodal (textual + graphical) explanations, at both local and global level, related to Weka classifiers. Local explanations pay attention to how the given AI model is instantiated for single classifications, while global explanations look at the AI model itself. Notice that current global explanations are merely descriptive, i.e., they only report information on how good or bad the classifier is, with no indications on how to add/remove/modify a specific rule in order to improve the classification rate. At present, four Weka classifiers are available [9] (a minimal training sketch follows the list below):

– Three decision tree classifiers:
  - J48: an open-source Java implementation of the C4.5 decision tree algorithm [25];
  - RepTree: a fast implementation of C4.5 decision trees using information gain along with backfitting reduced-error pruning;
  - RandomTree: C4.5 decision trees that consider K randomly chosen attributes at each node;
– One fuzzy rule-based classifier:
  - FURIA: the Fuzzy Unordered Rule Induction Algorithm [16]. FURIA is one of the most outstanding fuzzy rule-based classification methods in terms of accuracy. It also produces compact rule bases, i.e., rule bases made up of a small number of rules (and antecedents per rule). In addition, its inference mechanism is based on a winner-class mechanism with weighted rules, in combination with the so-called rule stretching method, which is in charge of handling uncovered instances. The interested reader is referred to [16] for further details about FURIA. It is worth noting that fuzzy classifiers deal naturally with imprecision and uncertainty [29, 32]. In addition, a recent survey of the XAI research field has revealed the importance of fuzzy systems in the quest for XAI systems [3].
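For illustration purposes, the sketch below shows how one of these classifiers could be trained and evaluated with the Weka Java API [9]. It is a minimal example under stated assumptions, not the actual ExpliClas back-end code: the ARFF file name is an assumption, and FURIA (distributed as weka.classifiers.rules.FURIA via the fuzzyUnorderedRuleInduction package of the Weka package manager) would have to be installed separately.

    import java.util.Random;
    import weka.classifiers.Evaluation;
    import weka.classifiers.trees.J48;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class TrainOulad {
        public static void main(String[] args) throws Exception {
            // Load the dataset (the file name is an assumption for this sketch)
            Instances data = new DataSource("oulad.arff").getDataSet();
            // The binary class attribute (Fail/Pass) is assumed to be the last one
            data.setClassIndex(data.numAttributes() - 1);

            J48 tree = new J48(); // the C4.5 decision tree implementation
            // 10-fold cross-validation, as used in the case study of Section 3
            Evaluation eval = new Evaluation(data);
            eval.crossValidateModel(tree, data, 10, new Random(1));
            System.out.println(eval.toSummaryString());
            System.out.println(eval.toMatrixString()); // confusion matrix
        }
    }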
ExpliClas is made up of a REST API⁴ and a web client⁵. It is released as free software under the GNU General Public License. Interested readers can refer to [4] for more information about ExpliClas.

2.2 The OULAD Dataset

The Open University (OU)⁶ is a public distance learning university in the UK. It provides the research community with free open data⁷ related to its online courses. More precisely, the available data are structured in several csv files (courses.csv, assessments.csv, studentInfo.csv, and so on). They contain anonymized information taken from the OU database.

In this paper, we have first selected a subset of all the information that is available in the OU database. Then, we have built a dataset ready to be used by ExpliClas. In particular, data coming from three modules (codes AAA, BBB, CCC) have been considered. Furthermore, we have focused on the students' information that is useful to predict the students' outcomes, as this is the data analysis goal. Thus, a subset of attributes has been selected, and aggregated attributes have been created to summarize the students' behaviour, as described in Tables 1-3.

Only two outcomes (classes) have been considered out of the four classes in the original dataset: Fail (4219 students) and Pass (5959 students). It is worth noting that the Fail class has been obtained by merging the "Fail" and "Withdrawn" classes in the original dataset; likewise, the Pass class has been obtained by merging the original classes "Pass" and "Distinction" (a sketch of this merging step is given after the attribute list below). As a result, we have a binary classification dataset made up of 10178 samples and 21 attributes grouped in:
⁴ ExpliClas API: https://demos.citius.usc.es/ExpliClasAPI/
⁵ ExpliClas Web Client: https://demos.citius.usc.es/ExpliClas/
⁶ Open University (OU) website: http://www.open.ac.uk/
⁷ OU Open Data: https://analyse.kmi.open.ac.uk/open_dataset#data

Table 1. Description of attributes related to general information.

– Code Module: identification code of the module (AAA, BBB, CCC) to which the assessment belongs.
– Code Presentation: identification code of the presentation (2014J, 2014B, 2013J, 2013B): year + session.
– Gender: student's gender (M/F).
– Region: geographic region where the student lived while taking the module-presentation (Scotland, Wales, Ireland, London Region, Yorkshire Region, South Region, South East Region, South West Region, North Region, North Western Region, East Midlands Region, West Midlands Region, East Anglian Region).
– Highest Education: highest student's education level when enrolled (A Level or Equivalent, Lower Than A Level, HE Qualification, Post Graduate Qualification, No Formal quals).
– Imd Band: Index of Multiple Deprivation band of the place where the student lived when enrolled; a UK government measure of deprived areas in English local councils (values ∈ [0%, 100%]).
– Age Band: student's age band (0-35, 35-55, higher than 55).
– Disability: whether the student has declared a disability (Y/N).
– General information, such as gender or education level (see Table 1);
– Student assessment information, such as the average assessment score or the number of previous attempts (see Table 2);
– Student interactions with different materials in the platform, such as quiz, glossary, homepage, subpages, etc. (see Table 3).
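The class-merging step described above reduces the four original outcome values to the two classes used in this paper. A minimal sketch of the mapping is given below; it assumes the original labels come from the final_result column of studentInfo.csv, which is an assumption about the raw OULAD files rather than part of the paper itself.

    import java.util.Map;

    public class MergeOutcomes {
        // Merge the four original outcomes into two classes, as in the paper:
        // {Fail, Withdrawn} -> Fail and {Pass, Distinction} -> Pass
        static final Map<String, String> MERGE = Map.of(
                "Fail", "Fail",
                "Withdrawn", "Fail",
                "Pass", "Pass",
                "Distinction", "Pass");

        public static void main(String[] args) {
            // Illustrative values as they would appear in final_result
            for (String original : new String[]{"Withdrawn", "Distinction"}) {
                System.out.println(original + " -> " + MERGE.get(original));
            }
        }
    }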

3 Case Study

Data analysis results provided by XAI systems must be comprehensible to both expert and non-expert users in order to become trustworthy. That is, a general user (no matter her expertise in AI) should be able to answer "why" and "how" questions in the light of the outcomes provided by XAI systems.

In order to show the effectiveness of the ExpliClas tool in assisting users to understand the results of educational data analysis, we have selected two Weka classifiers (J48 and FURIA) to automatically build XAI models from the dataset described in the previous section. We describe below the classification performance of these models together with the associated explanations.

As we already introduced in the previous section, ExpliClas provides users with two different kinds of explanations: a global explanation that reports the classification results on the whole dataset, and local explanations that refer to single cases.
We first uploaded the OULAD dataset to ExpliClas and built a FURIA classifier, which achieved a 92.56% classification rate (10-fold cross-validation)

Table 2. Description of attributes related to the assessment of students.

– Number of previous attempts: number of times the student has attempted this module (numeric).
– Studied credits: total number of credits the student is studying (numeric).
– Number of assessments: number of assessments the student has submitted for the module (numeric).
– Average assessments score: weighted average score the student has obtained in his assessments for the module; different assessments could have different weights (numeric).

Table 3. Description of attributes related to VLE interactions.

– Resource: number of interactions with extra material given by the professor (numeric).
– Homepage: number of interactions with the module homepage (numeric).
– Forum: number of the student's messages in the module forum (numeric).
– Glossary: number of interactions with a hyperlinked dictionary that explains particular words in the module (numeric).
– Out content: number of interactions with extra platform material suggested by the professor (numeric).
– Subpage: number of interactions with course subpages that focus on a particular topic (numeric).
– Url: number of interactions with external resources linked by the professor (numeric).
– Out collaboration: number of collaborations among students (numeric).
– Quiz: number of interactions with questionnaires regarding the module contents (numeric).

with 28 fuzzy rules (16 rules pointing at class=Pass and 12 rules pointing at class=Fail).

Fig. 1 shows an example of a global explanation. The user can select the visualization mode (fuzzy rules and confusion matrices on training and test sets) through the menu in the upper part of the picture. At the bottom, the related explanation in natural language is reported: "There are 2 types of evaluation: Fail and Pass. This classifier is very reliable because correctly classified instances represent a 92,56%. There is confusion related to all types of evaluation.". This explanation "translates" into natural words (i.e., into a more human-understandable form) the content of the confusion matrix depicted in Fig. 2. On the one hand, the class Fail is confused with Pass in 601 out of 3618 students who really fail (16.61%). On the other hand, Pass is confused with Fail in 2.69% of students.

Fig. 1. Example of global explanation obtained with ExpliClas (FURIA classifier).

Fig. 2. Confusion matrix of the model obtained by FURIA. On the left the actual class
labels, on the top the predicted labels.

In order to illustrate the use of local explanations, we selected one student in the dataset. Fig. 3 shows the data values associated with all attributes. Moreover, the user can visualize and/or edit the semantic grounding behind the explanation model. By default, three linguistic terms are assigned to qualitatively describe each attribute: Low, Medium and High. Fig. 3(b) shows the definition of these linguistic terms for the attribute number of assessments. Of course, the user can edit this definition regarding both granularity (i.e., the number of terms) and semantics (i.e., the linguistic terms along with their numerical values).

Accordingly, local explanations have a multimodal nature, in the sense that they combine graphs and text. In the upper part of Fig. 4, a histogram visualizes the student's outcome (Pass) along with the associated activation degree in the interval [0, 1].
Fig. 3. Example of data values associated with one of the students in the OULAD dataset: (a) attribute values; (b) linguistic terms for the attribute number of assessments.

In the lower part of Fig. 4, the information included in the fired fuzzy rule is verbalized as "Evaluation is Pass because number of assessments is high, resource is low and forum is medium". Moreover, the system allows the user to browse the fuzzy rule base, and to expand the graphical representations of the selected rules. Fig. 5 shows the fuzzy sets in the winner fired fuzzy rule "IF number of assessments in [9, 10, inf, inf] and forum in [20, 24, inf, inf] and resource in [-inf, -inf, 13, 415] THEN class=Pass (CF=0.94)". The winner rule is the one with the maximum firing degree for the given instance. It is worth noting that the rule firing degree is computed with the minimum t-norm. Then, the output class is computed with the maximum t-conorm over all fired rules that point at the same output. The certainty factor CF is a weight that FURIA computes regarding the relevance of the rules in accordance with the training data. In case no rules are fired, FURIA applies the so-called rule stretching mechanism, which looks for slight modifications in the rule base with the aim of finding, on the fly, a new rule able to manage the given instance. The interested reader is kindly referred to [16] for further details about FURIA. Moreover, additional information about how to carefully design fuzzy models (with special attention to how to select the right fuzzy operators) is available in [29, 32].
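To make the inference step concrete, the sketch below computes the firing degree of the winner rule quoted above, using FURIA's trapezoidal [a, b, c, d] fuzzy sets and the minimum t-norm. The fuzzy-set bounds are taken from the quoted rule; the attribute values of the student are hypothetical.

    public class RuleFiring {
        // Membership of x in a trapezoidal fuzzy set [a, b, c, d]:
        // 0 outside [a, d], 1 inside [b, c], linear on the two slopes
        static double trapezoid(double x, double a, double b, double c, double d) {
            if (x < a || x > d) return 0.0;
            if (x < b) return (x - a) / (b - a);
            if (x > c) return (d - x) / (d - c);
            return 1.0;
        }

        public static void main(String[] args) {
            double INF = Double.POSITIVE_INFINITY;
            // Hypothetical attribute values for one student
            double assessments = 10, forum = 22, resource = 50;
            // Fuzzy sets of the winner rule quoted in the text
            double muAssess = trapezoid(assessments, 9, 10, INF, INF);
            double muForum = trapezoid(forum, 20, 24, INF, INF);
            double muResource = trapezoid(resource, -INF, -INF, 13, 415);
            // Rule firing degree: minimum t-norm over all antecedents
            double firing = Math.min(muAssess, Math.min(muForum, muResource));
            System.out.printf("firing degree = %.3f%n", firing); // 0.500 here
        }
    }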
Rules generated by FURIA have local semantics, i.e., the most suitable fuzzy sets are defined independently for each rule. This fact may jeopardize the interpretability of a fuzzy rule-based system that is automatically derived from data, like the one described in this section. As described in [5], setting up global semantics a priori is required when looking for interpretable fuzzy systems. Moreover, building interpretable fuzzy systems is a matter of careful design, because model interpretability cannot be granted only by the fact of using fuzzy sets and systems [29]. However, it is possible to add a linguistic layer to facilitate the interpretability of fuzzy rules even if they lack global semantics [18]. In ExpliClas, the global semantics is set up beforehand (and validated by experts if they are available) for a given dataset (see Fig. 3). All algorithms (e.g., FURIA or J48) share the same global semantics, which makes the comparison among generated explanations feasible. Then, the local semantics determined by fuzzy sets such as those depicted in Fig. 5 can be translated into natural words in the context of the previously defined global semantics. It is worth noting that a similarity measure (see eq. 1) is used to compare each fuzzy set with all the defined linguistic terms, and the one with the highest similarity degree is selected:

    S(A, B) = |A ∩ B| / |A ∪ B|    (1)
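In practice, eq. (1) can be evaluated on a sampled universe of discourse, taking the minimum and maximum of the membership degrees as fuzzy intersection and union, and the sum over the samples (sigma-count) as cardinality. The sketch below follows this reading of eq. (1); the two membership functions and the sampling range are illustrative assumptions, not the actual terms used by ExpliClas.

    import java.util.function.DoubleUnaryOperator;

    public class FuzzyJaccard {
        // S(A, B) = |A ∩ B| / |A ∪ B| on a sampled universe [lo, hi],
        // with min/max as intersection/union and sigma-counts as cardinalities
        static double similarity(DoubleUnaryOperator muA, DoubleUnaryOperator muB,
                                 double lo, double hi, int samples) {
            double inter = 0.0, union = 0.0;
            for (int i = 0; i <= samples; i++) {
                double x = lo + (hi - lo) * i / samples;
                double a = muA.applyAsDouble(x), b = muB.applyAsDouble(x);
                inter += Math.min(a, b);
                union += Math.max(a, b);
            }
            return union == 0.0 ? 0.0 : inter / union;
        }

        public static void main(String[] args) {
            // Hypothetical rule fuzzy set vs. the linguistic term "High"
            DoubleUnaryOperator ruleSet = x -> x < 9 ? 0 : (x < 10 ? x - 9 : 1);
            DoubleUnaryOperator high = x -> x < 6 ? 0 : (x < 12 ? (x - 6) / 6 : 1);
            System.out.printf("S = %.3f%n", similarity(ruleSet, high, 0, 20, 2000));
        }
    }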

Fig. 4. Example of local explanation (FURIA).

Fig. 5. Example of fired fuzzy rule (FURIA).

Once we have automatically translated the winner fuzzy rule into natural text, it is straightforward to understand the result of the fuzzy inference even if the reader is not an expert in fuzzy logic. The local explanation associated with our illustrative example (see Fig. 4) suggests that students who complete a high number of assessments along the courses are more likely to succeed, even if the number of messages exchanged through the forum is medium and the number of visited resources is low.
As in a real context, where more than one expert could be consulted, we used a second classifier to get a different point of view on the student's behaviour and the factors that could influence her outcome. Fig. 6 shows the local explanation generated by ExpliClas when data analysis is supported by the J48 classifier instead of the FURIA classifier.

Since J48 builds a binary decision tree instead of a fuzzy rule-based system, in this case the upper part of the picture shows a sketch of the tree where the fired branch is highlighted in green. This branch of the tree can be interpreted (from the root to the leaf) as an IF-THEN rule. It is worth noting that the same attribute may appear more than once (each time with a different split condition) in the same branch of the tree. As a result, there is an interval of values associated with each attribute, similar to the fuzzy sets defined by FURIA. Once again, there is a lack of global semantics in the classifier model. Fortunately, we can apply the same procedure that we introduced earlier in order to translate the local semantics associated with each branch of the tree into the context of the global semantics that is used to verbalize (with natural words) the model output. In our illustrative example, the graphical representation in Fig. 6 is interpreted as the following rule (with the same format previously described for FURIA rules): "IF number of assessments in [4, 9] and average assessment score in [450, 1139] THEN class=Pass (CF=100%)". Of course, ExpliClas verbalizes this rule into a natural text explanation in the lower part of the picture, with the aim of making it understandable to both expert and non-expert users.
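The branch-to-rule translation described above amounts to collapsing the repeated split conditions of one branch into a single interval per attribute. A minimal sketch of this step is given below; the condition encoding and the snake_case attribute names are illustrative assumptions, while the thresholds reproduce the J48 rule quoted in the text.

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class BranchToRule {
        public static void main(String[] args) {
            // Split conditions along the fired branch, from root to leaf
            String[][] branch = {
                    {"number_of_assessments", ">", "4"},
                    {"number_of_assessments", "<=", "9"},
                    {"average_assessment_score", ">", "450"},
                    {"average_assessment_score", "<=", "1139"}};

            // Collapse repeated tests into one (lower, upper] interval per attribute
            Map<String, double[]> intervals = new LinkedHashMap<>();
            for (String[] cond : branch) {
                double[] iv = intervals.computeIfAbsent(cond[0], k ->
                        new double[]{Double.NEGATIVE_INFINITY, Double.POSITIVE_INFINITY});
                double v = Double.parseDouble(cond[2]);
                if (cond[1].equals(">")) iv[0] = Math.max(iv[0], v); // tighten lower bound
                else iv[1] = Math.min(iv[1], v);                     // tighten upper bound
            }
            intervals.forEach((attr, iv) ->
                    System.out.println(attr + " in [" + iv[0] + ", " + iv[1] + "]"));
        }
    }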
Fig. 6. Example of local explanation (J48).

It is interesting to notice that, on the one hand, the two classifiers (FURIA and J48) agree on the student's outcome prediction (Pass) but, on the other hand, they identify different attributes as discriminant for the classification task. This could give teachers, managers, or tutors some insights about how to improve the learning process. For example, if some attributes turned out to be not relevant for any case under study, then they could be removed from the OULAD dataset. Of course, this means the e-Learning program should be revised and updated accordingly, to lighten the students' study load and the teachers' work. On the contrary, if some attributes were deemed essential to pass an examination, then the related tasks should be emphasized and strengthened, perhaps by changing the structure of the educational process.

4 Conclusions and Future Work

We have illustrated the use of the ExpliClas XAI tool in the context of educational data mining. A classification dataset was built with information extracted from the open data provided online by the Open University. ExpliClas provided us with illustrative examples of both global and local explanations related to the given dataset. In addition, ExpliClas automatically generated multimodal explanations consisting of a mixture of graphs and text. These explanations look natural, expressive and effective, similar to those expected to be made by humans.

It is worth noting that the rationale behind ExpliClas is completely transparent to the user, who can understand the reasoning that leads to a given output no matter the selected classification algorithm. The given linguistic layer, along with the global semantics that is enforced beforehand, makes the interpretation of results straightforward no matter the user's background and expertise. Moreover, the knowledge behind the explanation model can be edited and modified if needed to refine the generated explanations, in order to better fit the user's knowledge and intuition.

The case study has shown that the explanations given by ExpliClas are suitable for the stakeholders involved in VLEs: teachers, tutors, students, and managers. All these people are domain experts but not data analysts, so they need to deeply understand automatically generated results in order to trust them.

This is a preliminary work to show the need for XAI in the educational field. Several extensions could be explored, but first we need to evaluate the users' appreciation of the system. As future work, we will set up an online survey to ask human users (including students, teachers and managers) about the goodness of these explanations. Later, we will integrate them into an XAI decision-support tool.

Acknowledgments

Jose M. Alonso is a Ramón y Cajal Researcher (RYC-2016-19802). This research was also funded by the Spanish Ministry of Science, Innovation and Universities (grants RTI2018-099646-B-I00, TIN2017-84796-C2-1-R and TIN2017-90773-REDT) and the Galician Ministry of Education, University and Professional Training (grants ED431F 2018/02, ED431C 2018/29 and "accreditation 2016-2019, ED431G/08"), which are co-funded by the European Regional Development Fund (ERDF/FEDER program).
Gabriella Casalino is a member of the INdAM Research group GNCS.

References

1. Agudo-Peregrina, Á.F., Hernández-García, Á., Iglesias-Pradas, S.: Predicting academic performance with learning analytics in virtual learning environments: A comparative study of three interaction classifications. In: 2012 International Symposium on Computers in Education (SIIE). pp. 1–6. IEEE (2012)
2. Aldowah, H., Al-Samarraie, H., Fauzy, W.M.: Educational data mining and learning analytics for 21st century higher education: A review and synthesis. Telematics and Informatics 37, 13–49 (2019). https://doi.org/10.1016/j.tele.2019.01.007

3. Alonso, J.M., Castiello, C., Mencar, C.: A bibliometric analysis of the explainable
artificial intelligence research field. In: International Conference on Information
Processing and Management of Uncertainty in Knowledge-based Systems (IPMU).
pp. 3–15 (2018)
4. Alonso, J.M., Bugarín, A.: ExpliClas: Automatic generation of explanations in natural language for Weka classifiers. In: 2019 IEEE International Conference on Fuzzy Systems. pp. 1–6. IEEE (2019)
5. Alonso, J.M., Castiello, C., Mencar, C.: Interpretability of fuzzy systems: Current
research trends and prospects. In: Springer Handbook of Computational Intelli-
gence, pp. 219–237 (2015)
6. Casalino, G., Castiello, C., Del Buono, N., Esposito, F., Mencar, C.: Q-matrix
extraction from real response data using nonnegative matrix factorizations. In:
International Conference on Computational Science and Its Applications. pp. 203–
216. Springer (2017)
7. Castellano, G., Fanelli, A., Roselli, T.: Mining categories of learners by a com-
petitive neural network. In: IJCNN’01. International Joint Conference on Neural
Networks. Proceedings (Cat. No. 01CH37222). vol. 2, pp. 945–950. IEEE (2001)
8. Dutt, A., Ismail, M.A., Herawan, T.: A systematic review on educational data
mining. IEEE Access 5, 15991–16005 (2017)
9. Frank, E., Hall, M.A., Witten, I.H.: The WEKA Workbench. Online appendix for "Data Mining: Practical Machine Learning Tools and Techniques". Morgan Kaufmann (2016)
10. Elbadrawy, A., Polyzou, A., Ren, Z., Sweeney, M., Karypis, G., Rangwala, H.:
Predicting student performance using personalized analytics. Computer 49(4), 61–
69 (2016)
11. de-la Fuente-Valentín, L., Pardo, A., Hernández, F.L., Burgos, D.: A visual analytics method for score estimation in learning courses. J. UCS 21(1), 134–155 (2015)
12. Gonçalves, A.F.D., Maciel, A.M.A., Rodrigues, R.L.: Development of a data mining
education framework for visualization of data in distance learning environments.
In: The 29th International Conference on Software Engineering and Knowledge
Engineering, Wyndham Pittsburgh University Center, Pittsburgh, PA, USA, July
5-7, 2017. pp. 547–550 (2017). https://doi.org/10.18293/SEKE2017-130
13. Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., Pedreschi, D.: A survey of methods for explaining black box models. ACM Comput. Surv. 51(5), 93:1–93:42 (2018). https://doi.org/10.1145/3236009
14. Gunning, D.: Explainable Artificial Intelligence (XAI). Tech. Rep. DARPA-BAA-16-53, Defense Advanced Research Projects Agency (DARPA), Arlington, USA (2016)
15. Hernández-García, Á., González-González, I., Jiménez-Zarco, A.I., Chaparro-Peláez, J.: Visualizations of online course interactions for social network learning analytics. International Journal of Emerging Technologies in Learning (iJET) 11(07), 6–15 (2016)
16. Hühn, J., Hüllermeier, E.: FURIA: An algorithm for unordered fuzzy rule induction. Data Mining and Knowledge Discovery 19(3), 293–319 (2009). https://doi.org/10.1007/s10618-009-0131-8
17. Kuzilek, J., Hlosta, M., Zdrahal, Z.: Open university learning analytics dataset.
Scientific data 4, 170171 (2017)
18. Mencar, C., Alonso, J.M.: Paving the way to explainable artificial intelligence with fuzzy modeling. In: Fullér, R., Giove, S., Masulli, F. (eds.) WILF 2018 – 12th International Workshop on Fuzzy Logic and Applications, pp. 215–227. Springer (2019)
19. Miller, T.: Explanation in artificial intelligence: Insights from the social sciences.
Artificial Intelligence 267, 1–38 (2019)
20. Moore, J.L., Dickson-Deane, C., Galyen, K.: e-learning, online learning, and dis-
tance learning environments: Are they the same? The Internet and Higher Educa-
tion 14(2), 129–135 (2011)
21. Nen-Fu, H., Hsu, I., Chia-An, L., Hsiang-Chun, C., Jian-Wei, T., Tung-Te, F.,
et al.: The clustering analysis system based on students’ motivation and learn-
ing behavior. In: 2018 Learning With MOOCS (LWMOOCS). pp. 117–119. IEEE
(2018)
22. Nieto, Y., García-Díaz, V., Montenegro, C., Crespo, R.G.: Supporting academic decision making at higher educational institutions using machine learning-based algorithms. Soft Computing pp. 1–9 (2019)
23. Paiva, R., Bittencourt, I.I., Lemos, W., Vinicius, A., Dermeval, D.: Visualizing
learning analytics and educational data mining outputs. In: International Confer-
ence on Artificial Intelligence in Education. pp. 251–256. Springer (2018)
24. Preidys, S., Sakalauskas, L.: Analysis of students study activities in virtual learn-
ing environments using data mining methods. Technological and economic devel-
opment of economy 16(1), 94–108 (2010)
25. Quinlan, J.R.: C4.5: Programs for Machine Learning. Elsevier (2014)
26. Rabelo, T., Lama, M., Amorim, R.R., Vidal, J.C.: Smartlak: A big data architec-
ture for supporting learning analytics services. In: 2015 IEEE Frontiers in Educa-
tion Conference (FIE). pp. 1–5. IEEE (2015)
27. Romero, C., Ventura, S.: Educational data science in massive open online courses.
Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery 7(1),
e1187 (2017)
28. Sun, X., Zhou, W., Xiang, Q., Cui, B., Jin, Y.: Research on big data analytics technology of MOOC. In: 2016 11th International Conference on Computer Science & Education (ICCSE). pp. 64–68. IEEE (2016)
29. Trillas, E., Eciolaza, L.: Fuzzy Logic: An Introductory Course for Engineering
Students. Springer (2015)
30. Wolff, A., Zdrahal, Z., Nikolov, A., Pantucek, M.: Improving retention: predicting
at-risk students by analysing clicking behaviour in a virtual learning environment.
In: Proceedings of the third international conference on learning analytics and
knowledge. pp. 145–149. ACM (2013)
31. Xu, N., Ruan, B.: An application of big data learning analysis based on MOOC platform. In: 2018 9th International Conference on Information Technology in Medicine and Education (ITME). pp. 698–702. IEEE (2018)
32. Zadeh, L.A.: Fuzzy Sets. Information and Control 8, 338–353 (1965)
