Journal of Information Technology and Application in Education Vol. 3 Iss. 2, June 2014
doi: 10.14355/jitae.2014.0302.02
www.jitae.org
Quality Development of Learning Objects:
Comparison, Adaptation and Analysis of
Learning Object Evaluation Frameworks for
Online Courses:
E‐Learning Center of an Iranian University case study
Peyman Akhavan*1, Majid Feyz Arefi2
Department of Management and Soft Technologies, Malek‐Ashtar University of Technology, Tehran, Iran
*1akhavan@iust.ac.ir; 2majidfeyzarefi@gmail.com
Received 10 July 2013; Revised 23 December 2013; Accepted 4 March 2014; Published 27 May 2014
© 2014 Science and Engineering Publishing Company
Abstract
The purpose of the present study is to investigate and compare evaluation frameworks for learning objects and to derive the most important indices. After comparing, matching and analyzing the frameworks and identifying the most important indices, the e-content of two lessons of M.Sc. courses at one of the foresaid centers was evaluated. In this research we follow an analytic literature-review approach, moving from theoretical foundations to the practical application of theory, and then use a two-round Delphi technique in which experts identified the important criteria and their significance. The study aims to show the important scales for evaluating the learning objects that play the greatest role in the e-content of e-lessons, and to use a case study to detect weak points and improvable aspects. The findings reflect that most previous studies have focused on the quality of the objects and on the conformance of object formulation to e-learning standards, while the strategic and planning aspects of learning, which play a crucial role in the quality and effectiveness of the e-content of e-lessons, have not been studied sufficiently. Moreover, in these studies the classification of the scales is not based on specified aspects of learning object evaluation. The present study answers the following questions: Which factors affect the design and formulation of the quality of learning objects? Which aspects must one attend to for the effective design of the e-content of e-lessons? Which scales of learning object evaluation are the most frequently cited in previous studies? Which aspects are improvable and challenging, not having been focused on enough?
Introduction
The essential capability of e-learning goes beyond access to information; its interactive and communicative capabilities are its foundation. The purpose of qualitative e-learning is to combine variety and cohesion in an active learning environment that is mentally challenging. The level of these interactions goes beyond a one-way transmission of content and extends our range of thought with respect to the relations among the participants in the learning process (Grison and Anderson, 2003; Bonner and Lee, 2012).
Keywords
E-learning; E-content; Learning Object; Learning Evaluation; E-learning Standards; Instructional Design

E-Learning Elements
E-learning has been introduced as a highly functional term in the education field alongside information technology, and it has been treated as a long-term plan with large investments by educational centers, especially universities, in many countries (Triantafillou, et al. 2002; Kuo, 2012). Many universities and educational institutes around the world have been designed from the outset to provide e-learning in response to increasing educational demand. Betts reports that in many developed countries enrollment in e-learning courses is growing many times faster than higher education overall (Betts, 2009).
Object
E-learning elements fall into two categories. The first comprises elements that can be named and that exist physically, or at least electronically: learning files, management software and information banks. The second comprises conceptual elements, such as courses and lessons, which must be understood completely for any discussion of e-learning (Fallon and Brown, 2003).
Learning Objects and E‐content
An LO (learning object) is the smallest part of the content that can itself constitute a learning unit or carry meaning. The size of an LO can vary, but an LO performs best when it has a specific learning aim. An LO must be meaningful and independent of the surrounding content; in other words, it must not depend on other parts of the learning content to be complete. This means that each LO can be used in several lessons or courses (Fallon and Brown, 2003).

LOs can be regarded as the elements of e-learning. Because they conform to the same standards, any combination of them can be used, provided they match one another. By combining LOs, larger parts of the learning content can be formed, such as topics, lessons or whole courses (Fallon and Brown, 2003).

Knowledge element, learning resource, online material and learning component are all terms used in the same way as "LO" (Krauss and Ally, 2005).
E‐learning Standards
Aviation was one of the first industries to adopt computer-based learning on a large scale, in the early 1980s. The AICC was founded in 1988 (Fallon and Brown, 2003). The US Department of Defense later founded the Advanced Distributed Learning (ADL) initiative. Its primary mission was to develop and promote learning methods for the US military, but the results have many applications in other public and private sectors. ADL published the first edition of SCORM in 1999 (Fallon and Brown, 2003).
The foresaid standards offer guidance on both conceptual design and implementation. Some, such as AICC and IMS, focus on defining technical specifications, while others, such as SCORM, serve as reference models for implementation (Triantafillou, et al. 2002). Dublin Core and ARIADNE can be cited as other standards institutions. Dublin Core began by offering metadata standards: a model of fifteen elements supporting the online storage and retrieval of public resources. It differed from previous efforts in its simplicity, scalability, universal acceptance and capacity for development. ARIADNE has been active since 1996; its focus is on providing tools and protocols that support the production, storage, delivery and reuse of learning courses (Fallon and Brown, 2003).
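The fifteen-element Dublin Core model mentioned above can be illustrated as a simple metadata record. The sketch below uses the standard Dublin Core element names; the values describe a hypothetical learning object and are purely illustrative.

```python
# The fifteen Dublin Core metadata elements, with illustrative
# values for a hypothetical learning object.
dublin_core_record = {
    "title": "Introduction to Knowledge Management",
    "creator": "Example University E-Learning Center",
    "subject": "knowledge management",
    "description": "A reusable learning object for an M.Sc. course.",
    "publisher": "Example University",
    "contributor": "Course instructor",
    "date": "2014-06-01",
    "type": "InteractiveResource",
    "format": "text/html",
    "identifier": "urn:example:lo:km-001",
    "source": "course repository",
    "language": "en",
    "relation": "part of the Knowledge Management course",
    "coverage": "graduate education",
    "rights": "All rights reserved",
}

assert len(dublin_core_record) == 15  # the model's fifteen elements
```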
Technical LO Evaluation
The various approaches to LOs attempt to meet two common objectives: to reduce the overall costs of LOs, and to obtain better LOs (Wiley, 2003).
LO evaluation is a rather new concern. The increase in LOs, authors and design variety, and their accessibility to both trained and untrained educators, has led to interest in how to evaluate LOs and which scales are appropriate for judging their quality and usefulness (Haughey and Muirhead, 2005).
Literature Analysis
Evaluating learning objects requires developing criteria for judging them (Kurilovas and Dagiene, 2009).
Vargo et al. (2002) developed the Learning Object Review Instrument (LORI) for evaluating learning objects. LORI version 1.3 uses 10 criteria to evaluate learning objects: 1) presentation: aesthetics; 2) presentation: design for learning; 3) accuracy of content; 4) support for learning goals; 5) motivation; 6) interaction: usability; 7) interaction: feedback and adaptation; 8) reusability; 9) metadata and interoperability compliance; and 10) accessibility (Vargo, et al. 2003). The 10 criteria were derived from a literature review covering instructional design, computer science, multimedia development and educational psychology (Kurilovas and Dagiene, 2009). Each measure was weighted equally and rated on a four-point scale: weak, moderate, strong, perfect (Vargo, et al. 2003).
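Because LORI 1.3 weights its ten criteria equally, an overall score for a learning object is simply the mean of the ratings. The following sketch illustrates this aggregation; the function name and the sample ratings are ours, not part of LORI itself.

```python
def lori_score(ratings):
    """Aggregate equally weighted LORI ratings (1 = weak .. 4 = perfect)."""
    if not all(1 <= r <= 4 for r in ratings):
        raise ValueError("each rating must be on the 1-4 scale")
    return sum(ratings) / len(ratings)

# Ten criteria as rated by one reviewer (illustrative values):
ratings = [3, 4, 4, 3, 2, 3, 4, 2, 3, 4]
print(round(lori_score(ratings), 2))  # 3.2
```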
In the same year, Belfer et al. presented LORI version 1.4, with eight criteria: content quality, learning goal alignment, feedback and adaptation, motivation, presentation design, interaction usability, reusability, and the value of any accompanying instructor guide (Belfer, et al. 2002).
Nesbit et al. proposed version 1.5 of LORI in 2004, with nine criteria: content quality, learning goal alignment, feedback and adaptation, motivation, presentation design, interaction usability, accessibility, reusability and standards compliance (Leacock and Nesbit, 2007).
Krauss and Ally published a paper in 2005 based on LORI. Its aim was to identify the challenges and problems instructional designers face in designing a learning object and evaluating its effectiveness. The paper presents a framework of eight criteria for evaluating learning objects: content quality, learning goal alignment, feedback and adaptation, motivation, presentation design, interaction usability, reusability, and student/instructor guides (Krauss and Ally, 2005).
Susan Smith Nash's article "Learning objects, learning object repositories, and learning theory: Preliminary best practices for online courses", published in 2005, examines current practices around learning objects, reviews best practices, and, by combining theories of learning, proposes a new approach for improvement. The paper presents factors for determining the usability of learning objects: relevance, usability, cultural appropriateness, infrastructure support, redundancy of access, size of object, and relation to the infrastructure/delivery (Nash, 2005).
In 2006, Nicole Buzzetto-More and Kaye Pinhey presented an article entitled "Guidelines and standards for the development of fully online learning objects", introducing 18 qualitative criteria for evaluating the learning objects of online courses at the University of Maryland Eastern Shore: prerequisites, technology requirements, objectives and outcomes, activities that support learning, assessment, a variety of tools to enhance interaction, course materials, student support, frequent and timely feedback, appropriate pacing, expectations for student discussion/chat participation, grading, course content, navigation, display, multimedia (if appropriate), time devoted, and reusability (Buzzetto-More and Pinhey, 2006).
Paulsson and Naeve in 2006 suggested six action areas for establishing technical quality criteria for learning objects: 1) a narrow definition; 2) a mapping taxonomy; 3) more extensive standards; 4) best practice for the use of existing standards; 5) architecture models; and 6) the separation of pedagogy from the supporting technology of LOs. This model focuses on technical quality criteria for learning objects; other quality criteria, such as pedagogical quality, usability or functional quality, lie outside its scope. Some of these aspects of quality are addressed by Van Assche and Vourikari (2006), who suggested a quality framework for the whole life cycle of learning objects (Paulsson and Naeve, 2006).
A MELT content audit in 2007 included an in-depth examination of the project partners' existing content-quality guidelines. It proposed a checklist to help partners decide which content from their repositories should be made available for enrichment in the project. The checklist is divided into five categories: pedagogical, usability, reusability, accessibility and production. The list is by no means exhaustive, and not all of the criteria can always be applied to all learning objects. The MELT project partners seek to provide access to learning content that meets nationally recognized quality criteria (MELT, 2007).
In 2007, the SREB (Southern Regional Education Board), aiming to improve the quality of its training programs, provided a checklist for learning object evaluation with 10 criteria: 1) content quality; 2) learning goal alignment; 3) feedback; 4) motivation; 5) presentation design; 6) interface usability; 7) accessibility; 8) reusability; 9) standards compliance; and 10) intellectual property and copyright (SREB-SCORE, 2007).
In 2007, the Tele-University of Quebec, to improve the effectiveness, efficiency and flexibility of learning objects, implemented a quality assurance strategy called Quality for Reuse (Q4R), alongside the scientific projects started at the university and proper storage and retrieval strategies. These strategies are organized into four main groups: organizational strategies, followed by three strategies inspired by the life cycle of an LO, from its conception to its use/reuse and adaptation (Q4R, 2007).
In June 2008, the Ministry of Education and Science of Lithuania set out criteria for the technical evaluation of learning objects to support computer-assisted teaching, in an assessment instrument called the Lithuanian learning object evaluation tool. The criteria are: 1) methodical aspects; 2) user interface; 3) LO arrangement possibilities; 4) communication and collaboration possibilities and tools; 5) technical features; 6) documentation; and 7) implementation and maintenance expenditure (Kubilinskiene and Kurilovas, 2008).
Kurilovas and Dagiene in 2009, combining the frameworks of (Vargo et al, 2003), (Paulsson and Naeve, 2006), (MELT, 2007) and (Q4R, 2007) with studies from 2007 (Kurilovas, 2007), proposed an original set of LO evaluation criteria called the "Recommended learning objects technical evaluation tool". The tool includes LO technical evaluation criteria suited to the different stages of the LO life cycle. The criteria are: first (before LO inclusion in the LOR), narrow-definition compliance and reusability level (interoperability, decontextualisation level, cultural/learning diversity principles, accessibility, LO architecture, working stability, and design and usability); second (during LO inclusion in the LOR), membership or contribution control strategies and technical interoperability; and third (after LO inclusion in the LOR), retrieval quality and information quality (Kurilovas and Dagiene, 2009).
Guenaga et al. in 2012 introduced a tool for evaluating learning objects composed of two aspects, technology and pedagogy. In their survey, a questionnaire was designed in which each of these two aspects comprised several criteria (Guenaga, et al. 2012).
Kurilovas and Zilinskiene in 2013 introduced a new AHP (Analytic Hierarchy Process) method for evaluating the quality of learning scenarios. In this research, the qualitative measures of learning scenarios are divided into three sections: learning objects, learning activities and the learning environment, each with several indices in terms of internal quality and quality in use (Kurilovas and Zilinskiene, 2013).
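As a rough illustration of how an AHP-style method weights criteria, priorities can be approximated from a pairwise comparison matrix by the geometric-mean method. The 3x3 matrix below, comparing the three sections of a learning scenario, is invented for illustration and is not taken from Kurilovas and Zilinskiene.

```python
from math import prod

def ahp_weights(matrix):
    """Approximate AHP priority weights via the geometric-mean method."""
    n = len(matrix)
    geo_means = [prod(row) ** (1.0 / n) for row in matrix]
    total = sum(geo_means)
    return [g / total for g in geo_means]

# Illustrative pairwise comparisons over the three sections of a
# learning scenario (learning objects, activities, environment);
# the judgment values are invented.
matrix = [
    [1.0, 2.0, 4.0],
    [0.5, 1.0, 2.0],
    [0.25, 0.5, 1.0],
]
print([round(w, 3) for w in ahp_weights(matrix)])  # [0.571, 0.286, 0.143]
```

Because this example matrix is perfectly consistent, the geometric-mean weights coincide with the exact eigenvector priorities.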
Research Analysis
Analysis and Comparison of LO’s Evaluation
Frameworks and E‐content
This section analyzes and compares the frameworks and models that focus technically on LOs and e-content (Table 1).
Table 2 lists the qualitative scales of these fourteen frameworks with their respective symbols. Finally, in Table 3, the scales are matched to each other. Table 3 thus shows the frequency of each scale across the models and frameworks, and hence its importance; it also shows the differences between the models and frameworks and the scales that exist in only some of them. Such a comparison reveals the researchers' focus and the improvable challenges and aspects, so that future studies can address the weak and strong points.
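The kind of frequency comparison that Table 3 performs can be sketched as follows. The criterion sets below are only the subset quoted verbatim in the literature analysis above, so the counts are illustrative rather than a reproduction of Table 3.

```python
from collections import Counter

# Criteria as quoted in the literature analysis (illustrative subset).
frameworks = {
    "LORI 1.5 (Nesbit et al.)": {"content quality", "learning goal alignment",
                                 "feedback and adaptation", "motivation",
                                 "presentation design", "interaction usability",
                                 "accessibility", "reusability",
                                 "standards compliance"},
    "Krauss and Ally": {"content quality", "learning goal alignment",
                        "feedback and adaptation", "motivation",
                        "presentation design", "interaction usability",
                        "reusability", "student/instructor guides"},
    "SREB": {"content quality", "learning goal alignment", "feedback",
             "motivation", "presentation design", "interface usability",
             "accessibility", "reusability", "standards compliance",
             "intellectual property and copyright"},
}

# Frequency of each criterion across the frameworks: a high count marks
# a widely shared scale, a count of 1 marks a framework-specific one.
freq = Counter(c for criteria in frameworks.values() for c in criteria)
print(freq["reusability"])  # appears in all three frameworks above -> 3
```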
TABLE 1 LO'S EVALUATION FRAMEWORKS AND E-CONTENT

Symbol (Table 3) | Year | LO's evaluation framework / E-content
F1  | 2002 | Vargo et al. (LORI 1.3)
F2  | 2002 | Belfer et al. (LORI 1.4)
F3  | 2004 | Nesbit et al. (LORI 1.5)
F4  | 2005 | Krauss and Ally
F5  | 2005 | Susan Smith Nash
F6  | 2006 | Paulsson and Naeve
F7  | 2006 | Nicole Buzzetto-More and Kaye Pinhey
F8  | 2007 | MELT
F9  | 2007 | SREB (SREB-SCORE)
F10 | 2007 | Q4R
F11 | 2008 | Kubilinskiene and Kurilovas
F12 | 2009 | Kurilovas and Dagiene
F13 | 2012 | Guenaga et al.
F14 | 2013 | Kurilovas and Zilinskiene
TABLE 2 CRITERIA OF LO'S EVALUATION FRAMEWORKS AND E-CONTENT

Symbol (Table 3) | Criterion
C1  | Presentation aesthetics
C2  | Presentation design for learning
C3  | Accuracy of content
C4  | Support for learning goals
C5  | Motivation
C6  | Interaction usability
C7  | Interaction feedback and adaptation
C8  | Reusability
C9  | Standards compliance
C10 | Accessibility
C11 | Learning goal alignment
C12 | Student/instructor guides
C13 | More extensive standards
C14 | Best practice for use of existing standards
C15 | Architecture models
C16 | Separation of pedagogy from the supporting technology of LOs
C17 | Organizational strategies
C18 | Life-cycle strategies
C19 | Intellectual property and copyright
C20 | LOs arrangement possibilities
C21 | Communication and collaboration possibilities and tools
C22 | Technical features
C23 | Interoperability
C24 | Cultural/learning diversity principles
C25 | Technical interoperability
C26 | Retrieval quality
C27 | Information quality
C28 | Relevance
C29 | Redundancy of access
C30 | Infrastructure support
C31 | Size of object
C32 | Relation to the infrastructure/delivery
C33 | Assessment
C34 | Expectations for student discussion/chat participation
C35 | Navigation
TABLE 3 COMPARISON OF CRITERIA OF LO'S EVALUATION FRAMEWORKS AND E-CONTENT

[Table 3 is a matrix with the criteria C1-C35 of Table 2 as rows and the frameworks F1-F14 of Table 1 as columns, marking which frameworks address which criteria.]
Limits of Qualitative Frameworks of LO Evaluation
Undoubtedly, each of the frameworks, models and e-contents has its limits, arising from the particular activity of the respective institution, the model, or the strategy and purpose of the researchers. According to the tables above, LORI, Krauss and Ally, and SREB share many scales, focusing on e-content evaluation and the qualitative improvement of content with respect to certain standard components; they pay no attention to metadata, object details, an organization's technical features, or its strategies. Q4R, by contrast, has four main strategies for guaranteeing LO quality, which Kurilovas and Dagiene used as the basis of a more elaborate model. The remaining limits include the following: (LORI 1.3), (LORI 1.5), (MELT, 2007), (SREB, 2007) and (Guenaga, et al. 2012) did not investigate the different stages of the LO life cycle; Q4R did not sufficiently investigate the technical evaluation scales of LOs prior to their placement in the LO repository; LORI 1.3, LORI 1.5, MELT, SREB, Q4R and Guenaga did not sufficiently investigate the scales of reusability; and the models of (Kurilovas & Dagiene, 2009) and (Kurilovas & Zilinskiene, 2013) focus on the technical evaluation of LOs and set aside strategies and learning purposes, and consequently the fit between content and strategy.
Method of Research
Two‐round Delphi Method
We used the Delphi method to determine the validity of the paper's criteria. Delphi was designed by RAND in the 1950s as a structured communication technique for collecting data through collective opinion polling (Gallenseon, et al. 2002).
In fact, experts' opinions were used to assess the validity of the paper's criteria. In this two-round Delphi method, the panel comprised 16 professors and experts in the fields of e-learning, information technology, computer engineering, instructional technology and systems engineering. The questionnaire was designed on the following Likert scale: 1. strongly unimportant; 2. unimportant; 3. neutral; 4. important; 5. strongly important. The research structure is illustrated in Figure 1.
[Figure: two Delphi rounds. Round 1: validation of indicators; adding and subtracting indicators; output: Table 2. Round 2: review of indicators and determination of their importance; output: d1, ..., d12. Both rounds feed into the learning object evaluation.]

FIG. 1 RESEARCH STRUCTURE
After the literature study, an exploratory Delphi method was used, and appropriate criteria for evaluating the learning objects of e-content were examined in several specialized sessions and working groups. Besides the questionnaire survey, interviews and meetings were conducted. After reviewing the literature on learning objects and e-content evaluation and examining previously validated quality criteria, criteria appropriate to the scope and nature of these applications were identified in expert panels and specialized working groups. In the first round of the Delphi method, the indicators were identified and validated, and indicators were added or removed by the experts. In the second round, after review and approval of the selected indicators, a questionnaire with 35 criteria on the Likert scale was administered so that the experts could indicate their views on the importance of each indicator. After the two rounds, the indicators were fully identified and validated by the experts.
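The second-round decision rule can be sketched as follows: each expert rates each criterion on the five-point scale, and criteria whose mean rating falls below a cut-off are dropped. The panel size (16) and the 1-5 scale come from the text; the ratings and the 3.5 cut-off below are invented for illustration.

```python
def retained_criteria(ratings_by_criterion, cutoff=3.5):
    """Keep criteria whose mean Likert rating (1-5) reaches the cutoff."""
    return {
        name: sum(r) / len(r)
        for name, r in ratings_by_criterion.items()
        if sum(r) / len(r) >= cutoff
    }

# Illustrative second-round ratings from a 16-expert panel
# (1 = strongly unimportant .. 5 = strongly important); values invented.
ratings = {
    "Presentation design for learning": [5, 4, 5, 4, 4, 5, 4, 4,
                                         5, 4, 4, 5, 4, 4, 5, 4],
    "Life cycle strategies":            [2, 3, 2, 3, 2, 2, 3, 2,
                                         3, 2, 2, 3, 2, 2, 3, 2],
}
kept = retained_criteria(ratings)
print(sorted(kept))  # only the first criterion survives the cutoff
```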
Pair Comparisons and Importance Determination of Indices

As mentioned in the methodology of the research, in the two-round Delphi method the important indicators were identified and their significance was determined by the experts. These indicators are: Presentation design for learning, Accuracy of content, Motivation, Interaction usability, Interaction feedback and adaptation, Reusability, Standards compliance, Accessibility, Learning goal alignment, Architecture models, Organizational strategies, and Cultural/learning diversity principles, represented in Table 4 as d1, d2, d3, ..., d12, respectively.

In this method, before the comparison, one must determine the importance of each index and then derive its overall importance by scoring. In the white cells we perform a pairwise comparison, write the letter of the more important index, and then score the difference: zero for indices of equal importance, up to three for the greatest difference in importance, according to the experts' Likert ratings. Finally, the results were aggregated by summing the scores.

As shown in Table 5, d1, d4, d6 and d8 have the greatest importance, followed by d3, d5, d9 and d12; at the third level come d2, d10 and d11; and d7 ranks last.
TABLE 4 PAIRED COMPARISON INDICES

Each cell records the more important index and the score of the difference ("winner:score"); 0 denotes equal importance.

Index | d1   | d2    | d3   | d4   | d5   | d6   | d7    | d8   | d9   | d10   | d11
d2    | d1:2 |       |      |      |      |      |       |      |      |       |
d3    | d1:1 | d3:1  |      |      |      |      |       |      |      |       |
d4    | 0    | d4:2  | d4:1 |      |      |      |       |      |      |       |
d5    | d1:1 | d5:1  | 0    | d4:1 |      |      |       |      |      |       |
d6    | 0    | d6:2  | d6:1 | 0    | d6:1 |      |       |      |      |       |
d7    | d1:3 | d2:1  | d3:2 | d4:3 | d5:2 | d6:3 |       |      |      |       |
d8    | 0    | d8:2  | d8:1 | 0    | d8:1 | 0    | d8:3  |      |      |       |
d9    | d1:1 | d9:1  | 0    | d4:1 | 0    | d6:1 | d9:2  | d8:1 |      |       |
d10   | d1:2 | 0     | d3:1 | d4:2 | d5:1 | d6:2 | d10:1 | d8:2 | d9:1 |       |
d11   | d1:2 | 0     | d3:1 | d4:2 | d5:1 | d6:2 | d11:1 | d8:2 | d9:1 | 0     |
d12   | d1:1 | d12:1 | 0    | d4:1 | 0    | d6:1 | d12:2 | d8:1 | 0    | d12:1 | d12:1
TABLE 5 THE FINAL SCORE FOR EACH CRITERION USING PAIRED COMPARISONS WITH THE OTHER INDICATORS

Index | Total Score | Rating
d1    | 13          | d1 : 17.33 %
d2    | 1           | d4 : 17.33 %
d3    | 5           | d6 : 17.33 %
d4    | 13          | d8 : 17.33 %
d5    | 5           | d3 : 6.66 %
d6    | 13          | d5 : 6.66 %
d7    | 0           | d9 : 6.66 %
d8    | 13          | d12 : 6.66 %
d9    | 5           | d2 : 1.33 %
d10   | 1           | d10 : 1.33 %
d11   | 1           | d11 : 1.33 %
d12   | 5           | d7 : 0 %
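The aggregation behind Tables 4 and 5 can be reproduced directly: summing, for each index, the scores of the comparisons it wins gives the totals in Table 5, and dividing by the grand total gives the percentages. The sketch below transcribes the non-zero cells of Table 4; tied comparisons (scored 0) contribute nothing and are omitted.

```python
# (winner, score) pairs transcribed from the white cells of Table 4.
wins = [
    ("d1", 2), ("d1", 1), ("d3", 1), ("d4", 2), ("d4", 1),
    ("d1", 1), ("d5", 1), ("d4", 1), ("d6", 2), ("d6", 1), ("d6", 1),
    ("d1", 3), ("d2", 1), ("d3", 2), ("d4", 3), ("d5", 2), ("d6", 3),
    ("d8", 2), ("d8", 1), ("d8", 1), ("d8", 3),
    ("d1", 1), ("d9", 1), ("d4", 1), ("d6", 1), ("d9", 2), ("d8", 1),
    ("d1", 2), ("d3", 1), ("d4", 2), ("d5", 1), ("d6", 2), ("d10", 1),
    ("d8", 2), ("d9", 1),
    ("d1", 2), ("d3", 1), ("d4", 2), ("d5", 1), ("d6", 2), ("d11", 1),
    ("d8", 2), ("d9", 1),
    ("d1", 1), ("d12", 1), ("d4", 1), ("d6", 1), ("d12", 2), ("d8", 1),
    ("d12", 1), ("d12", 1),
]

totals = {f"d{i}": 0 for i in range(1, 13)}
for winner, score in wins:
    totals[winner] += score

grand_total = sum(totals.values())
percentages = {k: round(100 * v / grand_total, 2) for k, v in totals.items()}
print(totals["d1"], grand_total, percentages["d1"])  # 13 75 17.33
```

Note that 5/75 rounds to 6.67 %; Table 5 truncates this to 6.66 %.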
Case Study
Based on the indices identified by the experts, two of the e-lessons offered by an e-learning center of an Iranian university were evaluated.

This university's e-learning center conducted preliminary studies in 2003 and then launched a pilot project aimed at acquiring the knowledge needed to design and run e-learning systems and to establish them on the university's internal network. In 2008-2009 the project focused on one lesson in several M.Sc. fields, and by mid-2010 it was successfully delivering the Master's courses in those fields. Accordingly, the e-content of two lessons was selected for evaluation using a framework comprising the foresaid indices.
The content of the first lesson concerns human resources and is delivered as a Word file and a PowerPoint file. Eight three-hour sessions cover the 134-page Word file (font size 11). The content includes tables, figures and other aids to understanding, and each session ends with related questions. The PowerPoint file consists of 206 slides presenting the headlines. The center also provides recording rooms where instructors can record audio files for the lessons, chat with students and answer their questions.

The content of the second lesson concerns knowledge management. The course consists of a Word file (150 pages, font size 11) and a PowerPoint file presented like the first. Again, the content includes tables, figures and similar aids to understanding. In the design of both lessons, however, the instructors focus on the course subject matter rather than on learning, assessability, effectiveness and other aspects. For example, the content says nothing about needs assessment, the division of tasks between instructor and students, or strategies and methods for evaluating students; nor does it address strategies, purposes or session plans. Since the students are not physically present, instructors should present useful content creatively and with appropriate tools, while attending to environmental, cultural, social, telecommunication and other constraints. Instructors can increase effectiveness by using animations and films; these two lessons are weak on this point.
The contents of the lessons were evaluated by three expert instructors using a nine-question questionnaire and a Likert scale.
E‐content Evaluation of First Course
The results of the e-content evaluation of the first course are shown in Table 6.
E‐content Evaluation of Second Course
The results of the e-content evaluation of the second course are shown in Table 7.
TABLE 6. RESULTS OF E-CONTENT EVALUATION OF THE FIRST COURSE

Professors       | d1   | d2   | d3   | d4   | d5   | d6   | d7 | d8   | d9   | d10  | d11  | d12  | Average of all indices
First professor  | 2    | 4    | 3    | 4    | 3    | 3    | 5  | 4    | 2    | 2    | 3    | 2    | 3.08
Second professor | 3    | 5    | 3    | 4    | 4    | 2    | 4  | 5    | 3    | 3    | 3    | 2    | 3.42
Third professor  | 3    | 4    | 4    | 3    | 4    | 3    | 3  | 4    | 2    | 2    | 2    | 3    | 3.08
Average          | 2.66 | 4.33 | 3.33 | 3.66 | 3.66 | 2.66 | 4  | 4.33 | 2.33 | 2.33 | 2.66 | 2.33 | 3.19
TABLE 7. RESULTS OF E-CONTENT EVALUATION OF THE SECOND COURSE

Professors       | d1 | d2   | d3   | d4   | d5   | d6   | d7   | d8   | d9   | d10  | d11  | d12 | Average of all indices
First professor  | 2  | 3    | 3    | 2    | 4    | 3    | 4    | 5    | 3    | 3    | 2    | 2   | 3
Second professor | 2  | 4    | 3    | 3    | 3    | 2    | 4    | 4    | 2    | 2    | 2    | 2   | 2.75
Third professor  | 2  | 3    | 4    | 2    | 4    | 3    | 3    | 4    | 2    | 2    | 3    | 2   | 2.83
Average          | 2  | 3.33 | 3.33 | 2.33 | 3.66 | 2.66 | 3.66 | 4.33 | 2.33 | 2.33 | 2.33 | 2   | 2.86
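The averages in Tables 6 and 7 follow directly from the raters' scores. For instance, the per-professor means for the first course can be recomputed as below; the 3.08 and 3.42 figures in Table 6 are these means rounded to two decimals, while the per-index averages are truncated rather than rounded.

```python
# Likert ratings (1-5) of the first course on indices d1..d12,
# one row per rating professor, taken from Table 6.
ratings = {
    "First professor":  [2, 4, 3, 4, 3, 3, 5, 4, 2, 2, 3, 2],
    "Second professor": [3, 5, 3, 4, 4, 2, 4, 5, 3, 3, 3, 2],
    "Third professor":  [3, 4, 4, 3, 4, 3, 3, 4, 2, 2, 2, 3],
}

# Mean across the twelve indices for each professor.
row_means = {p: round(sum(r) / len(r), 2) for p, r in ratings.items()}
print(row_means["First professor"], row_means["Second professor"])  # 3.08 3.42

# Mean across professors for each index (d1 shown here).
d1_mean = sum(r[0] for r in ratings.values()) / len(ratings)
print(round(d1_mean, 2))  # 2.67 (reported as 2.66 in Table 6, truncated)
```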
Discussion
As shown in Tables 6 and 7, the content of the first lesson performs poorly on indices 1, 6, 9, 10, 11 and 12; that is, it is weak in Presentation design for learning, Reusability, Learning goal alignment, Architecture models, Organizational strategies and Cultural/learning diversity principles. The instructors' designs make weak use of visual, auditory, image-display and graphical elements relative to the learning purposes; the LOs are not designed at an acceptable level for reuse and effectiveness; and the content does not adequately serve the stated purposes. The content performs moderately on indices 3, 4 and 5, namely Motivation, Interaction usability, and Interaction feedback and adaptation: the use of tables and figures, the coherence of the content, and the questions and challenges raised are helpful, but the lack of visual slides, films and animations keeps these scores moderate. The content is likewise moderate in offering LOs with simple mutual relations and an operational link to qualitative features, and hence in matching students' needs and providing feedback to the instructors. Finally, it performs acceptably on indices 2, 7 and 8, namely Accuracy of content, Standards compliance and Accessibility, because the content is credible, accurate and sufficiently detailed, presents balanced ideas, complies with the foresaid standards and is accessible to everyone.

The content of the second lesson is weak in Presentation design for learning, Reusability, Learning goal alignment, Architecture models, Organizational strategies and Cultural/learning diversity principles; that is, its LOs lack the necessary capabilities and its e-content is poorly designed for meeting the purposes. It is moderate on indices 2, 3, 5 and 7, and acceptable only on index 8, Accessibility. Given the weaknesses and challenges in the center, instructors' ability and familiarity with the qualitative indices of LO evaluation should be improved in order to improve the e-contents. Designers should also engage with design models, learning technologies and approaches that meet the learning needs of the content. The offered contents can also be assessed in advance, with feedback given to the instructors.
Conclusion
In this study, the most important criteria for evaluating learning objects were derived from the literature and then examined in two rounds of Delphi with experts from various fields. After the appropriate indicators had been identified and their significance determined by the experts, paired comparisons were performed to establish the relative weight of each indicator. Then, as a case study, the e-content of two courses from the virtual e-learning programs of an Iranian e-learning center was evaluated by experts and professors in the field.

The study discussed topics such as the components of e-learning, learning objects and e-content, e-learning standards, the technical evaluation of learning objects, and the analysis and comparison of LO evaluation frameworks and e-content.

According to the results, the design and development of learning objects and e-content must address various aspects and dimensions, such as instructional design, strategy and the important qualitative aspects. This study identified 35 relevant criteria for evaluating learning objects and, through the surveys, revealed significant weaknesses and areas for improvement, for which suggestions were provided.
Summary and Suggestions
The evaluation of e-content helps organizations improve their content quality and offer more effective learning plans. Although there are many standards institutions, something more is needed to produce e-content of acceptable quality and effectiveness composed of useful LOs; in fact, a standard specifies only the minimum qualitative specifications of LOs.
The models and frameworks presented here have been evaluated by researchers. They are useful for producing better LOs and, as a result, better e-content; they aim to raise the quality of e-content by developing standards and additional features and components. The challenge, however, is to use a framework whose qualitative scales genuinely meet students' needs and preferences.
Given these limits, to evaluate e-content one must first determine one's aim and focus and then choose one of the models and frameworks. As noted above, for the technical evaluation of LOs the Kurilovas and Dagiene model covers more aspects and can be extended with respect to strategy. A main shortcoming of previous studies, however, is their neglect of the coordination between e-content and learning design. Design models and learning technologies should be reviewed so that e-contents can be evaluated with respect to effectiveness and learning aims, because the final aim of e-lessons is to present effective content and achieve high-quality learning; designers' abilities should therefore be improved.
In all the aforementioned models and frameworks, researchers chose scales for the evaluation, some covering the same aspects of content and others covering different ones. In the present study, important criteria were identified and confirmed by experts, and a case study was then performed in the e‐learning center of a university in Iran. In this research, the e‐content of two Master's lessons was evaluated by experts. The results show that the center faces problems in producing high‐quality e‐content. Recommendations were also made to develop the designers' abilities.
It is recommended that future studies determine and classify the main aspects of LO evaluation. Organizational strategies and learning purposes should also be investigated because of their importance in content production. It is further recommended that researchers examine the scales related to LO design in established models and learning technologies in order to increase the effectiveness of e‐content for students of e‐courses.
REFERENCES
Belfer, K., Nesbit, J. and Leacock, T. "Learning Object Review Instrument (LORI)." Version 1.4, 2002.
Betts, K. "Online Human Touch (OHT) Training & Support: A Conceptual Framework to Increase Faculty Engagement, Connectivity, and Retention in Online Education, Part 2." Journal of Online Learning and Teaching, Vol. 5 No. 1, 2009.
Bonner, J. M. and Lee, D. "Blended Learning in Organizational Setting." Journal of Information Technology and Application in Education, Vol. 1 No. 4, 164‐172, 2012.
Buzzetto‐More, N. and Pinhey, K. "Guidelines and Standards for the Development of Fully Online Learning Objects." Interdisciplinary Journal of Knowledge and Learning Objects 2 (2006): 95‐104.
Fallon, C. and Brown, S. "E‐Learning Standards: A Guide to Purchasing, Developing and Deploying Standards‐Conformant E‐Learning", 2003.
Gallenson, A., Heins, J. and Heins, T. "Macromedia MX: Creating Learning Objects." [Macromedia White Paper]. Macromedia Inc., 2002. Retrieved 6/02/06 from http://download.macromedia.com/pub/elearning/objects/mx_creating_lo.pdf
Garrison, D.R. and Anderson, T. "E‐Learning in the 21st Century: A Framework for Research and Practice", 2003.
Guenaga, M., Menchaca, I., Romero, S. and Eguiluz, A. "A Tool to Evaluate the Level of Inclusion of Digital Learning Objects." Procedia Computer Science 14 (2012): 148‐154.
Haughey, M. and Muirhead, B. "Evaluating Learning Objects for Schools", 2005. [online], http://www.usq.edu.au/electpub/e‐jist/docs/vol8_no1/fullpapers/Haughey_Muirhead.pdf
Krauss, F. and Ally, M. "A Study of the Design and Evaluation of a Learning Object and Implications for Content Development." Interdisciplinary Journal of Knowledge and Learning Objects 1 (2005): 1‐22.
Kubilinskienė, S. and Kurilovas, E. "Lithuanian Learning Objects Technical Evaluation Tool and its Application in Learning Object Metadata Repository." In: Informatics Education Contributing Across the Curriculum: Proceedings of the 3rd International Conference "Informatics in Secondary Schools – Evolution and Perspective" (ISSEP‐2008), Torun, Poland, July 1‐4, 2008. Selected papers, 147‐158.
Kuo, C. "Evaluation of E‐Learning Effectiveness in Culture and Arts Promotion: The Case of Cultural Division in Taiwan." Journal of Information Technology and Application in Education, Vol. 1 No. 1, 9‐18, 2012.
Kurilovas, E. "Digital Library of Educational Resources and Services: Evaluation of Components." Informacijos mokslai (Information Sciences), Vilnius, 42‐43 (2007): 69‐77.
Kurilovas, E. and Dagiene, V. "Learning Objects and Virtual Learning Environments Technical Evaluation Criteria." Electronic Journal of e‐Learning, Vol. 7 No. 2, 127‐136, 2009.
Kurilovas, E. and Zilinskiene, I. "New MCEQLS AHP Method for Evaluating Quality of Learning Scenarios." Technological and Economic Development of Economy, Vol. 19 No. 1, 78‐92, 2013.
Leacock, T.L. and Nesbit, J.C. "A Framework for Evaluating the Quality of Multimedia Learning Resources." Educational Technology & Society, Vol. 10 No. 2, 44‐59, 2007.
MELT. "Metadata Ecology for Learning and Teaching", 2007. Project web site. [online], http://melt‐project.eun.org.
Nash, S.S. "Learning Objects, Learning Object Repositories, and Learning Theory: Preliminary Best Practices for Online Courses." Interdisciplinary Journal of Knowledge and Learning Objects 1 (2005): 217‐228.
Paulsson, F. and Naeve, A. "Establishing Technical Quality Criteria for Learning Objects", 2006. [online], http://www.frepa.org/wp/wp‐content/files/paulsson‐Establ‐Tech‐Qual_finalv1.pdf
Q4R. "Quality for Reuse", 2007. Project web site. [online], http://www.q4r.org.
SREB‐SCORE. "Checklist for Evaluating SREB‐SCORE Learning Objects." Educational Technology Cooperative, 2007.
Triantafillou, E., Pomportsis, A. and Georgiadou, E. "AES‐CS: Adaptive Educational System Based on Cognitive Styles." Proc. of the AH 2002 Workshop on Adaptive Systems for Web‐Based Education, 2002.
Vargo, J., Nesbit, J.C. and Archambault, A. "Learning Object Evaluation: Computer‐Mediated Collaboration and Inter‐Rater Reliability." International Journal of Computers and Applications, Vol. 25 No. 3, 198‐205, 2003.
Wiley, D.A. "Learning Objects: Difficulties and Opportunities", 2003. [online], http://wiley.ed.usu.edu/docs/lo_do.pdf

Peyman Akhavan received his M.Sc. and Ph.D. degrees in industrial engineering from Iran University of Science and Technology, Tehran, Iran. His research interests are in knowledge management, information technology, innovation and strategic planning. He has published 6 books and more than 100 research papers in conferences and journals.

Majid Feyz Arefi received his B.Sc. degree in Pure Mathematics from Ferdowsi University of Mashhad, Iran in 2008. He received his M.Sc. degree in Industrial Engineering with a specialty in System Management and Productivity from Malek‐Ashtar University of Technology, Tehran, Iran in 2012. During his studies he has worked on topics such as e‐learning, learning objects, information technology, customer satisfaction and after‐sales services. His fields of interest include e‐learning, e‐content and co‐creation. He is currently continuing his research with Dr. Akhavan.