DOI: 10.1145/3430895.3460140
research-article
Open access

Examining the Effects of Student Participation and Performance on the Quality of Learnersourcing Multiple-Choice Questions

Published: 08 June 2021

Abstract

While generating multiple-choice questions has been shown to promote deep learning, students often fail to realize this benefit and do not willingly participate in this activity. Additionally, the quality of the student-generated questions may be influenced by both their level of engagement and familiarity with the learning materials. Towards better understanding how students can generate high-quality questions, we designed and deployed a multiple-choice question generation activity in seven college-level online chemistry courses. From these courses, we collected data on student interactions and their contribution to the question-generation task. A total of 201 students enrolled in the courses and 57 of them elected to generate a multiple-choice question. Our results indicated that students were able to contribute quality questions, with 67% of them being evaluated by experts as acceptable for use. We further identified several student behaviors in the online courses that are correlated with their participation in the task and the quality of their contribution. Our findings can help teachers and students better understand the benefits of student-generated questions and effectively implement future learnersourcing activities.

Supplementary Material

MP4 File (L-at-S21-fp043.mp4)
In this study, we deploy a completely optional, low-stakes learnersourcing activity that asks students to generate a multiple-choice question (MCQ). 201 students enrolled in an online chemistry course were presented with this activity as they worked through the course, and 57 of those students elected to create and submit a question. We then had a team of experts review the quality of the questions using a 19-item rubric that has previously been used to assess the quality of MCQs. We found that around 28% of the students participated in this task, and there was a significant difference between students' participation rate in this task and their participation in the other activities in the course. We also found that 67% of the student-generated questions contained zero or one errors as deemed by the rubric, which means they are acceptable for use as is. There was a strong correlation between a student's question quality and their performance on the other activities in the course.
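
To make the reported numbers concrete, here is a minimal sketch (not the authors' analysis code) of the arithmetic behind them: the participation rate (57 of 201 students), the "acceptable" threshold of at most one rubric flaw, and a simple correlation between question quality and course performance. The flaw counts and performance scores below are hypothetical placeholders, and plain Pearson correlation stands in for whatever statistic the paper actually used.

```python
# Hedged sketch of the reported arithmetic; all per-student data are hypothetical.

enrolled = 201
submitted = 57
print(f"Participation rate: {submitted / enrolled:.1%}")   # ~28.4%, i.e. "around 28%"

# Hypothetical rubric flaw counts for submitted questions (19-item flaw rubric).
flaw_counts = [0, 1, 0, 2, 1, 0, 3, 1, 0, 0]

# Zero or one flaw counts as "acceptable for use as is" in the paper's terms.
acceptable = [c for c in flaw_counts if c <= 1]
print(f"Acceptable questions: {len(acceptable) / len(flaw_counts):.0%}")

def pearson(xs, ys):
    """Plain Pearson correlation; a stand-in for the paper's statistic."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical course performance (e.g., mean activity score) paired with a
# question-quality score derived from the rubric (fewer flaws => higher quality).
performance = [0.92, 0.75, 0.88, 0.60, 0.70, 0.95, 0.55, 0.73, 0.90, 0.85]
quality = [19 - c for c in flaw_counts]
print(f"Correlation (quality vs. performance): {pearson(quality, performance):.2f}")
```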





      Published In

      L@S '21: Proceedings of the Eighth ACM Conference on Learning @ Scale
      June 2021
      380 pages
      ISBN:9781450382151
      DOI:10.1145/3430895
      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      Published: 08 June 2021


      Author Tags

      1. learnersourcing
      2. multiple-choice questions
      3. online education
      4. question generation
      5. student participation

      Qualifiers

      • Research-article

      Funding Sources

      • California Governor's Office of Planning and Research
      • Institute of Education Sciences

      Conference

      L@S '21
      L@S '21: Eighth (2021) ACM Conference on Learning @ Scale
      June 22 - 25, 2021
      Virtual Event, Germany

      Acceptance Rates

      Overall Acceptance Rate 117 of 440 submissions, 27%


      Article Metrics

      • Downloads (last 12 months): 251
      • Downloads (last 6 weeks): 35
      Reflects downloads up to 09 Mar 2025


      Cited By

      • (2024) Learnersourcing: Student-generated Content @ Scale: 2nd Annual Workshop. Proceedings of the Eleventh ACM Conference on Learning @ Scale, 559-562. https://doi.org/10.1145/3657604.3664643. Online publication date: 9-Jul-2024.
      • (2024) PRIMM and Proper: Authentic Investigation in HE Introductory Programming with PeerWise and GitHub. Proceedings of the 8th Conference on Computing Education Practice, 33-36. https://doi.org/10.1145/3633053.3633062. Online publication date: 5-Jan-2024.
      • (2024) Can Autograding of Student-Generated Questions Quality by ChatGPT Match Human Experts? IEEE Transactions on Learning Technologies 17, 1600-1610. https://doi.org/10.1109/TLT.2024.3394807. Online publication date: 1-Jan-2024.
      • (2024) A Comparative Analysis of Different Large Language Models in Evaluating Student-Generated Questions. 2024 13th International Conference on Educational and Information Technology (ICEIT), 24-29. https://doi.org/10.1109/ICEIT61397.2024.10540914. Online publication date: 22-Mar-2024.
      • (2023) Crowdsourcing the Evaluation of Multiple-Choice Questions Using Item-Writing Flaws and Bloom's Taxonomy. Proceedings of the Tenth ACM Conference on Learning @ Scale, 25-34. https://doi.org/10.1145/3573051.3593396. Online publication date: 20-Jul-2023.
      • (2023) Dissecting Knowledge, Guessing, and Blunder in Multiple Choice Assessments. Applied Measurement in Education 36(1), 80-98. https://doi.org/10.1080/08957347.2023.2172017. Online publication date: 21-Feb-2023.
      • (2023) Multiple-Choice Questions for Teaching Quantitative Instrumental Element Analysis: A Follow-Up. Journal of Chemical Education 100(10), 4099-4105. https://doi.org/10.1021/acs.jchemed.3c00061. Online publication date: 7-Sep-2023.
      • (2023) Assessing the Quality of Multiple-Choice Questions Using GPT-4 and Rule-Based Methods. Responsive and Sustainable Educational Futures, 229-245. https://doi.org/10.1007/978-3-031-42682-7_16. Online publication date: 4-Sep-2023.
      • (2022) Learnersourcing: Student-generated Content @ Scale. Proceedings of the Ninth ACM Conference on Learning @ Scale, 259-262. https://doi.org/10.1145/3491140.3528286. Online publication date: 1-Jun-2022.
      • (2022) Learnersourcing in Theory and Practice: Synthesizing the Literature and Charting the Future. Proceedings of the Ninth ACM Conference on Learning @ Scale, 234-245. https://doi.org/10.1145/3491140.3528277. Online publication date: 1-Jun-2022.
