Abstract
Item selection is the key process by which computerized adaptive testing (CAT) effectively assesses examinees' knowledge states. Existing item selection algorithms rely mainly on information metrics and suffer from two issues: first, information-based methods cannot capture implicit cognitive information such as the relations between testing items and between knowledge components; second, information-based algorithms compute an item's suitability from examinees' knowledge states, which are estimated and therefore inherently imprecise. To address these two issues, this work employs reinforcement learning to learn the item selection algorithm automatically in a data-driven manner. The learned selection policy captures the implicit cognitive relations between testing items, avoids unnecessary item administrations, and does not depend on examinees' estimated knowledge states at all.
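The abstract does not spell out how the reinforcement-learning formulation works, so the following is a minimal, self-contained sketch of one way CAT item selection can be framed as an RL problem: the state encodes the response history, each action administers a not-yet-used item, and a reward is assigned at the end of the test. The toy item bank, the 1PL response simulator, the linear Q-function, and the placeholder reward are illustrative assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

N_ITEMS = 20          # size of the toy item bank (assumption)
TEST_LEN = 5          # items administered per examinee (assumption)
difficulty = rng.normal(0.0, 1.0, N_ITEMS)   # 1PL-style item difficulties

def simulate_response(theta, item):
    """Simulated examinee: probability of a correct answer under a 1PL model."""
    p = 1.0 / (1.0 + np.exp(-(theta - difficulty[item])))
    return int(rng.random() < p)

def encode_state(responses):
    """State = response history: -1 unadministered, 0 incorrect, 1 correct."""
    s = np.full(N_ITEMS, -1.0)
    for item, r in responses.items():
        s[item] = float(r)
    return s

# Linear Q-function: Q(s, a) = W[a] . s, one weight vector per candidate item.
W = np.zeros((N_ITEMS, N_ITEMS))
ALPHA, GAMMA, EPS = 0.05, 0.95, 0.1

def select_item(state, administered):
    """Epsilon-greedy selection over items not yet administered."""
    candidates = [a for a in range(N_ITEMS) if a not in administered]
    if rng.random() < EPS:
        return int(rng.choice(candidates))
    q_values = [W[a] @ state for a in candidates]
    return candidates[int(np.argmax(q_values))]

for episode in range(2000):
    theta = rng.normal()          # latent ability of a simulated examinee
    responses = {}
    state = encode_state(responses)
    for step in range(TEST_LEN):
        item = select_item(state, responses)
        responses[item] = simulate_response(theta, item)
        next_state = encode_state(responses)
        if step == TEST_LEN - 1:
            # Placeholder terminal reward: error of a naive ability estimate.
            # (The reward design used in the paper is not specified here.)
            theta_hat = np.mean([1.0 if r else -1.0 for r in responses.values()])
            target = -abs(theta - theta_hat)
        else:
            future = max(W[a] @ next_state
                         for a in range(N_ITEMS) if a not in responses)
            target = GAMMA * future
        # TD(0) update of the linear Q-function for the chosen item.
        td_error = target - W[item] @ state
        W[item] += ALPHA * td_error * state
        state = next_state
```

Note that the policy is trained purely from simulated response sequences; it never consults an explicit estimate of the examinee's knowledge state during selection, which mirrors the abstract's claim that the approach does not depend on estimated knowledge states.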
Acknowledgements
This research is supported by the National Natural Science Foundation of China (No. 62177009, 62077006, 62007025).