Abstract
The multiple-choice cloze (MCC) question format is commonly used to assess students’ comprehension. It is an especially useful format for intelligent tutoring systems (ITSs) because it is fully automatable and can be applied to any text. Unfortunately, very little is known about the factors that influence MCC question difficulty or student performance on such questions. To better understand student performance, we developed a model of MCC questions. Our model shows that the difficulty of the answer and the student’s response time are the most important predictors of student performance. Beyond showing the relative impact of its terms, the model provides evidence of a developmental trend in syntactic awareness beginning around the 2nd grade. The model also accounts for 10% more variance in students’ external test scores than the standard scoring method for MCC questions does.
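The paper itself contains no code; as a rough illustration of the kind of model the abstract describes, the sketch below fits a logistic regression predicting per-question correctness (a binary outcome) from answer difficulty and response time. Everything here is an assumption for illustration: the feature names, coefficients, and synthetic data are invented and are not the authors’ actual model or dataset.

```python
# Minimal sketch (NOT the authors' model): logistic regression predicting
# whether a student answers an MCC question correctly, using two of the
# predictors the abstract highlights. All data below is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Hypothetical per-response features:
answer_difficulty = rng.uniform(0.0, 1.0, n)                 # 0 = easy, 1 = hard
response_time = rng.lognormal(mean=1.5, sigma=0.5, size=n)   # seconds

# Simulate correctness: harder answers and longer response times
# (assumed here to signal struggle) lower the chance of a correct answer.
logit = 2.0 - 3.0 * answer_difficulty - 0.15 * response_time
p_correct = 1.0 / (1.0 + np.exp(-logit))
correct = rng.random(n) < p_correct

X = np.column_stack([answer_difficulty, response_time])
model = LogisticRegression().fit(X, correct)
print("coefficients:", model.coef_, "intercept:", model.intercept_)
```

Logistic regression is a natural choice for this sketch because the outcome (correct vs. incorrect) is binary; the fitted coefficients then indicate the relative impact of each predictor, which is the style of analysis the abstract reports.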
Copyright information
© 2006 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Hensler, B.S., Beck, J. (2006). Better Student Assessing by Finding Difficulty Factors in a Fully Automated Comprehension Measure. In: Ikeda, M., Ashley, K.D., Chan, T.-W. (eds.) Intelligent Tutoring Systems. ITS 2006. Lecture Notes in Computer Science, vol. 4053. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11774303_3
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-35159-7
Online ISBN: 978-3-540-35160-3