DOI: 10.1145/3270112.3270114

Adapting Kirkpatrick's evaluation model to technology enhanced learning

Published: 14 October 2018

Abstract

Experiments, case studies and surveys are part of the standard toolbox researchers use for validating their proposed educational tools and methods. Yet the breadth of such evaluations is often limited to a single dimension, such as assessing the effect on learning outcomes by means of experiments, or evaluating user acceptance and perceived utility by means of surveys. Besides a positive effect in the classroom, it is equally important that students transfer their knowledge to their working environment, which constitutes yet another evaluation dimension. The lack of a widely accepted validation method encompassing a broad set of dimensions hampers the comparability and synthesis of research on modelling education. This study adapts Kirkpatrick's model of training evaluation to the assessment of Technology Enhanced Learning (TEL), i.e. learning settings where teaching is supported by means of didactic tools. The adaptation proposes concrete metrics and instruments for each level of Kirkpatrick's model, and is demonstrated by means of a case study on a TEL environment for User Interface (UI) modelling that supports the learning of UI design principles.
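To make the structure of such an adaptation concrete, the sketch below models Kirkpatrick's four classic levels (Reaction, Learning, Behavior, Results) as a mapping from each level to candidate metrics and measurement instruments. The level names follow Kirkpatrick's original model; the specific metrics, instruments, and identifiers are illustrative placeholders suggested by the kinds of measures the abstract mentions (experiments on learning outcomes, acceptance surveys, knowledge transfer), not the paper's actual proposal.

```python
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class EvaluationLevel:
    """One level of Kirkpatrick's four-level model, with candidate
    metrics and the instruments used to measure them."""
    name: str
    metrics: list[str] = field(default_factory=list)
    instruments: list[str] = field(default_factory=list)


# Hypothetical mapping: the level names are Kirkpatrick's; the metrics
# and instruments are illustrative examples, not the paper's proposal.
KIRKPATRICK_TEL_LEVELS = [
    EvaluationLevel(
        name="Reaction",
        metrics=["user acceptance", "perceived utility"],
        instruments=["post-course survey (e.g. TAM- or SUS-style questionnaire)"],
    ),
    EvaluationLevel(
        name="Learning",
        metrics=["learning outcome gain"],
        instruments=["controlled experiment with pre- and post-test"],
    ),
    EvaluationLevel(
        name="Behavior",
        metrics=["transfer of knowledge to the working environment"],
        instruments=["follow-up survey", "workplace case study"],
    ),
    EvaluationLevel(
        name="Results",
        metrics=["longer-term impact on the organisation or curriculum"],
        instruments=["institutional indicators"],
    ),
]

if __name__ == "__main__":
    # Print the mapping level by level.
    for level in KIRKPATRICK_TEL_LEVELS:
        print(f"{level.name}:")
        print(f"  metrics:     {', '.join(level.metrics)}")
        print(f"  instruments: {', '.join(level.instruments)}")
```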

References

[1]
K. D. Schenk, N. P. Vitalari, and K. S. Davis, "Differences between novice and expert systems analysts: What do we know and what do we do?," J. Manag. Inf. Syst., vol. 15, no. 1, pp. 9--50, 1998.
[2]
G. Sedrakyan, M. Snoeck, and S. Poelmans, "Assessing the effectiveness of feedback enabled simulation in teaching conceptual modeling," Comput. Educ., vol. 78, pp. 367--382, 2014.
[3]
G. Sedrakyan and M. Snoeck, "Feedback-enabled MDA-prototyping effects on modeling knowledge," in Enterprise, Business-Process and Information Systems Modeling, Springer, 2013, pp. 411--425.
[4]
H. M. Walker, S. Fitzgerald, and J. F. Dooley, "Curricular assessment: Tips and techniques," in SIGCSE 2015, 2015, pp. 265--266.
[5]
D. G. Janelle, M. Hegarty, and N. S. Newcombe, "Spatial Thinking Across the College Curriculum: A Report on a Specialist Meeting," Spat. Cogn Comput., vol. 14, no. 2, pp. 124--141, 2014.
[6]
S. Wang, Y. Han, W. Wu, and Z. Hu, "Modeling student learning outcomes in studying programming language course," in Seventh International Conference on Information Science and Technology, 2017, pp. 263--270.
[7]
K. Garg, A. Sureka, and V. Varma, "A Case Study on Teaching Software Engineering Concepts using a Case-Based Learning Environment.," in QuASoQ/WAWSE/CMCE@APSEC, 2015, pp. 71--78.
[8]
D. L. Kirkpatrick, "Techniques for evaluating training programs," Tech. Eval. Train. programs, vol. 13, pp. 3--9, 1959.
[9]
D. L. Kirkpatrick, "Evaluation of training," in Training and development handbook: A guide to human resource development, R. L. Craig, Ed. New York: McGraw Hill: Springer, 1976.
[10]
R. L. Hammond, "Evaluation at the local level," in Educational evaluation: Theory and practice, B. R. Worthen and J. R. Sanders, Eds., 1973.
[11]
E. Holton, "The flawed four-level evaluation model," Hum. Resour. Dev. Q., vol. 7, no. 1, pp. 5--21, 1996.
[12]
D. L. Stufflebeam, "The CIPP model for evaluation," in International handbook of educational evaluation, Springer, 2003, pp. 31--62.
[13]
T. Sitzmann and J. M. Weinhardt, "Training engagement theory: A multilevel perspective on the effectiveness of work-related training," J. Manage., 2015.
[14]
J. Venable, J. Pries-Heje, and R. Baskerville, "FEDS: a framework for evaluation in design science research," Eur. J. Inf. Syst., vol. 25, no. 1, pp. 77--89, 2016.
[15]
R. Chinta, M. Kebritchi, and J. Elias, "A conceptual framework for evaluating higher education institutions," Int. J. Educ. Manag., vol. 30, no. 6, pp. 989--1002, 2016.
[16]
D. L. Galloway, "Evaluating distance delivery and e-learning: Is Kirkpatrick's model relevant?," Perform. Improv., vol. 44, no. 4, pp. 21--27, 2005.
[17]
L. Praslova, "Adaptation of Kirkpatrick's four level model of training criteria to assessment of learning outcomes and program evaluation in higher education," Educ. Assessment, Eval. Account., vol. 22, no. 3, pp. 215--225, 2010.
[18]
A. D. D. Ho, S. W. Arendt, T. Zheng, and K. A. Hanisch, "Exploration of hotel managers' training evaluation practices and perceptions utilizing Kirkpatrick's and Phillips's models," J. Hum. Resour. Hosp. Tour., vol. 15, no. 2, pp. 184--208, 2016.
[19]
J. D. Kirkpatrick and W. K. Kirkpatrick, Kirkpatrick's four levels of training evaluation. Association for Talent Development, 2016.
[20]
K. A. Moreau, "Has the new Kirkpatrick generation built a better hammer for our evaluation toolbox?," Med. Teach., vol. 39, no. 9, pp. 999--1001, 2017.
[21]
R. Bates, "A critical analysis of evaluation practice: the Kirkpatrick model and the principle of beneficence," Eval. Program Plann., vol. 27, no. 3, pp. 341--347, 2004.
[22]
D. L. Kirkpatrick and J. D. Kirkpatrick, Evaluating training programs: The four levels. Berrett-Koehler Publishers, Inc., 2006.
[23]
M. Paull, C. Whitsed, and A. Girardi, "Applying the Kirkpatrick model: Evaluating an Interaction for Learning Framework curriculum intervention," Issues Educ. Res., vol. 26, no. 3, pp. 490--507, 2016.
[24]
M. E. Van Buren and W. Erskine, "The 2002 State of the Industry Report," Alexandria, VA: American Society for Training and Development, 2002.
[25]
P. Donovan, "The Measurement of Transfer Using Return on Investment," in Transfer of Learning in Organizations, Springer, 2014, pp. 145--168.
[26]
D. N. Rouse, "Employing Kirkpatrick's evaluation framework to determine the effectiveness of health information management courses and programs," Perspect. Health Inf. Manag., vol. 8, no. Spring, 2011.
[27]
T. M. Hamtini, "Evaluating e-learning programs: An adaptation of Kirkpatrick's model to accommodate e-learning environments," J. Comput. Sci., vol. 4, no. 8, p. 693, 2008.
[28]
K. Chrysafiadi and M. Virvou, "PeRSIVA: An empirical evaluation method of a student model of an intelligent e-learning environment for computer programming," Comput. Educ., vol. 68, pp. 322--333, 2013.
[29]
P. Avogadro, S. Calegari, and M. Dominoni, "Designing the Content of a Social e-Learning Dashboard," in IC3K 2015, 2015, pp. 79--89.
[30]
C. Chatterjee, "Measurement of E-Learning Quality," in 3rd International Conference on Advanced Computing and Communication Systems, 2016.
[31]
G. Haupt and S. Blignaut, "Uncovering learning outcomes: explicating obscurity in learning of aesthetics in design and technology education," Int. J. Technol. Des. Educ., vol. 18, no. 4, pp. 361--374, 2008.
[32]
D. L. Moody, "The method evaluation model: a theoretical model for validating information systems design methods," ECIS 2003 Proc., p. 79, 2003.
[33]
F. D. Davis, "Perceived usefulness, perceived ease of use, and user acceptance of information technology," MIS Q., pp. 319--340, 1989.
[34]
J. Brooke, "SUS: A Quick and Dirty Usability Scale," in Usability Evaluation in Industry, London: Taylor & Francis, 1996.
[35]
M. Elkoutbi, I. Khriss, and R. K. Keller, "Generating user interface prototypes from scenarios," in Proceedings of the IEEE International Symposium on Requirements Engineering, 1999, pp. 150--158.
[36]
J. R. Lewis, "IBM Computer Usability Satisfaction Questionnaires: Psychometric Evaluation and Instructions for Use," Technical report, IBM Corp., Boca Raton, FL, 1993.
[37]
S. Poelmans and P. Wessa, "A Constructivist Approach in an e-Learning Environment for Statistics: a Students' Evaluation.," Interact. Learn. Environ., vol. 23, no. 3, pp. 385--401, 2015.
[38]
D. A. Cook and R. H. Ellaway, "Evaluating technology-enhanced learning: a comprehensive framework," Med. Teach., vol. 37, no. 10, pp. 961--970, 2015.
[39]
O. Erdinç and J. R. Lewis, "Psychometric Evaluation of the T-CSUQ: The Turkish Version of the Computer System Usability Questionnaire," Int. J. Hum. Comput. Interact., vol. 29, no. 5, pp. 319--326, 2013.
[40]
G. McArdle and M. Bertolotto, "Assessing the application of three-dimensional collaborative technologies within an e-learning environment," Interact. Learn. Environ., vol. 20, no. 1, pp. 57--75, 2012.
[41]
D. R. Krathwohl, "A revision of Bloom's taxonomy: An overview," Theory Pract., vol. 41, no. 4, pp. 212--218, 2002.
[42]
J. J. G. Van Merriënboer, R. E. Clark, and M. B. M. De Croock, "Blueprints for complex learning: The 4C/ID-model," Educ. Technol. Res. Dev., vol. 50, no. 2, pp. 39--61, 2002.
[43]
J. Ruiz, E. Serral, and M. Snoeck, "A Fully Implemented Didactic Tool for the Teaching of Interactive Software Systems," in Modelsward'2018, 2018, pp. 95--105.
[44]
M. Maguire, "Guidelines for a University Short Course on Human-Computer Interaction," in International Conference on Human-Computer Interaction, 2017, pp. 38--46.
[45]
J. Ruiz and M. Snoeck, "Assessing the effectiveness of learning UI design principles using FENIkS," IEEE Trans. Learn. Technol., under review, 2018.
[46]
J. C. Carver, L. Jaccheri, S. Morasca, and F. Shull, "A checklist for integrating student empirical studies with research and teaching goals," Empir. Softw. Eng., vol. 15, pp. 35--59, 2010.


Published In

MODELS '18: Proceedings of the 21st ACM/IEEE International Conference on Model Driven Engineering Languages and Systems: Companion Proceedings
October 2018
214 pages
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

In-Cooperation

  • IEEE CS

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. kirkpatrick model
  2. modelling education
  3. research methods

Qualifiers

  • Research-article

Conference

MODELS '18

Acceptance Rates

MODELS '18 Paper Acceptance Rate: 19 of 29 submissions, 66%
Overall Acceptance Rate: 144 of 506 submissions, 28%

Article Metrics

  • Downloads (last 12 months): 121
  • Downloads (last 6 weeks): 18
Reflects downloads up to 12 Feb 2025

Cited By

  • (2024) "Case Study of a Model that evaluates the Learner Experience with DICTs," Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, pp. 1--9. https://doi.org/10.1145/3613905.3637138. Online publication date: 11-May-2024
  • (2024) "Assessing the testing skills transfer of model-based testing on testing skill acquisition," Software and Systems Modeling, vol. 23, no. 4, pp. 953--971. https://doi.org/10.1007/s10270-023-01141-1. Online publication date: 22-Jan-2024
  • (2023) "Feasibility Study of a Model that evaluates the Learner Experience: A Quantitative and Qualitative Analysis," Proceedings of the XXII Brazilian Symposium on Human Factors in Computing Systems, pp. 1--11. https://doi.org/10.1145/3638067.3638119. Online publication date: 16-Oct-2023
  • (2023) "Proposal and Preliminary Evaluation of a Learner Experience Evaluation Model in Information Systems," Proceedings of the XIX Brazilian Symposium on Information Systems, pp. 308--316. https://doi.org/10.1145/3592813.3592919. Online publication date: 29-May-2023
  • (2023) "Effect of flipped classroom and automatic source code evaluation in a CS1 programming course according to the Kirkpatrick evaluation model," Education and Information Technologies, vol. 28, no. 10, pp. 13235--13252. https://doi.org/10.1007/s10639-023-11678-9. Online publication date: 24-Mar-2023
  • (2022) "Evaluation of In-Service Vocational Teacher Training Program: A Blend of Face-to-Face, Online and Offline Learning Approaches," Sustainability, vol. 14, no. 21, art. 13906. https://doi.org/10.3390/su142113906. Online publication date: 26-Oct-2022
  • (2021) "Adaptation of Kirkpatrick's Four-Level Model of Training Criteria to Evaluate Training Programmes for Head Teachers," Education Sciences, vol. 11, no. 3, art. 116. https://doi.org/10.3390/educsci11030116. Online publication date: 11-Mar-2021
  • (2020) "The Sandwich principle: assessing the didactic effect in lectures on 'cleft lips and palates'," BMC Medical Education, vol. 20, no. 1. https://doi.org/10.1186/s12909-020-02209-y. Online publication date: 15-Sep-2020
