Research Article

Understanding validity criteria in technology-enhanced learning: A systematic literature review

Published: 18 November 2024

Abstract

Technological aids are ubiquitous in today's educational environments. Whereas much of the dust has settled in the debate on how to validate traditional educational solutions, in the area of technology-enhanced learning (TEL) many questions remain open. Technologies often abstract away student behaviour by condensing actions into numbers, meaning teachers must assess student data rather than observe students directly. With the rapid adoption of artificial intelligence in education, it is timely to obtain a clear picture of the landscape of validity criteria relevant to TEL. In this paper, we conduct a systematic review of research on TEL interventions, in which we combine active learning for title and abstract screening with a backward snowballing phase. We extract information on the validity criteria used to evaluate TEL solutions, along with the methods employed to measure these criteria. By combining data on the research methods (qualitative versus quantitative) and knowledge sources (theory versus practice) used to inform validity criteria, we ground our results epistemologically. We find that validity criteria tend to be assessed more positively when quantitative methods are used, and that validation framework usage is both rare and fragmented. Yet we also find that the prevalence of different validity criteria, and the research methods used to assess them, are relatively stable over time, implying that a strong foundation exists for designing holistic validation frameworks with the potential to become commonplace in TEL research.
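The screening procedure described above can be made concrete with a small sketch. The following is a minimal, hypothetical Python example of certainty-based active learning for title and abstract screening, the general idea behind open-source screening tools such as ASReview; it is not the review's actual pipeline, and the toy records, oracle labels, and model choice (TF-IDF features with a Naive Bayes classifier) are all illustrative assumptions.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical toy corpus: each record stands in for a title plus abstract.
docs = [
    "validity criteria for technology enhanced learning interventions",
    "protein folding prediction with deep neural networks",
    "argument based validation of e-assessment score interpretations",
    "traffic flow forecasting for urban road networks",
    "reliability and fairness of automated scoring in online education",
    "crop yield estimation from multispectral satellite imagery",
]
oracle = np.array([1, 0, 1, 0, 1, 0])  # assumed reviewer judgements: 1 = relevant

X = TfidfVectorizer().fit_transform(docs)
labeled = [0, 1]  # seed set: one relevant and one irrelevant record
pool = [i for i in range(len(docs)) if i not in labeled]

while pool:
    # Retrain the classifier on everything screened so far.
    clf = MultinomialNB().fit(X[labeled], oracle[labeled])
    # Certainty-based sampling: show the reviewer the record that the
    # model currently rates as most likely to be relevant.
    relevant_col = list(clf.classes_).index(1)
    probs = clf.predict_proba(X[pool])[:, relevant_col]
    nxt = pool.pop(int(np.argmax(probs)))
    labeled.append(nxt)  # the reviewer screens it and its label is revealed
    print(f"screened record {nxt}: relevant={bool(oracle[nxt])}")

In practice the loop stops on a heuristic criterion (for example, a long run of consecutive irrelevant records) rather than exhausting the pool, and the backward snowballing phase then traverses the reference lists of included studies to recover relevant work the database search missed.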

Highlights

The first technology-enhanced learning (TEL) review with a pre-published protocol.
Usage of validation frameworks in TEL is rare and fragmented.
Criteria tend to be assessed more positively when quantitative methods are used.
The TEL validity criteria landscape has remained remarkably stable over time.
A strong foundation exists to design holistic validation frameworks for TEL.



Published In

Computers & Education, Volume 220, Issue C, October 2024, 168 pages

Publisher

Elsevier Science Ltd., United Kingdom

Author Tags

1. Evaluation methodologies
2. Mobile learning
3. Distance education and online learning
