Improving the Item Selection Process with Reinforcement Learning in Computerized Adaptive Testing

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1831)

Included in the following conference series: International Conference on Artificial Intelligence in Education (AIED)


Abstract

Item selection is the key process by which computerized adaptive testing (CAT) effectively assesses examinees’ knowledge states. Existing item selection algorithms rely mainly on information metrics and suffer from two issues: first, information-based methods cannot capture implicit cognitive information such as the relations between testing items and knowledge components; second, they compute an item’s suitability from examinees’ knowledge states, which are estimated and therefore inherently imprecise. To address these issues, this work employs reinforcement learning to learn the item selection algorithm automatically in a data-driven manner. The learned policy captures the implicit cognitive relations between testing items, avoids unnecessary item administration, and does not depend on examinees’ estimated knowledge states at all.
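
The abstract does not spell out the concrete formulation, so the following is only a minimal, hypothetical sketch of how item selection could be framed as a reinforcement learning problem: the state is the response history of the current test, the actions are the not-yet-administered items, and a terminal reward scores how well the collected responses reveal the examinee’s mastery. The simulated examinee, the Q_MATRIX linking items to knowledge components, the reward definition, and the linear Q-learning agent below are all illustrative assumptions, not the authors’ method.

```python
# Hypothetical sketch: CAT item selection as reinforcement learning.
# NUM_ITEMS, NUM_KCS, Q_MATRIX, the simulated examinee, and the terminal
# reward are illustrative assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
NUM_ITEMS, NUM_KCS, TEST_LEN = 20, 5, 8
# Q-matrix: which knowledge components (KCs) each item requires.
Q_MATRIX = rng.integers(0, 2, size=(NUM_ITEMS, NUM_KCS))


def simulate_response(mastery, item):
    """Simulated examinee: answers correctly (up to slip/guess noise)
    iff all KCs required by the item are mastered."""
    required = Q_MATRIX[item] == 1
    knows = bool(np.all(mastery[required] == 1)) if required.any() else True
    return int(rng.random() < (0.9 if knows else 0.2))


def terminal_reward(responses, mastery):
    """Reward a response pattern from which mastery is easy to recover:
    guess a KC as mastered iff some item covering it was answered correctly."""
    guess = np.zeros(NUM_KCS)
    for item, r in enumerate(responses):
        if r == 1:
            guess[Q_MATRIX[item] == 1] = 1
    return float(np.mean(guess == mastery))


# Linear Q-function: Q(state, item) = W[item] . state, where the state encodes
# the test so far (0 = not asked, +1 = correct, -1 = incorrect, per item).
W = np.zeros((NUM_ITEMS, NUM_ITEMS))
EPS, ALPHA, GAMMA = 0.2, 0.05, 0.95

for episode in range(3000):
    mastery = rng.integers(0, 2, size=NUM_KCS)   # hidden knowledge state
    responses = np.zeros(NUM_ITEMS)
    for step in range(TEST_LEN):
        state = responses.copy()
        asked = responses != 0
        q_vals = W @ state
        q_vals[asked] = -np.inf                  # never repeat an item
        if rng.random() < EPS:                   # epsilon-greedy exploration
            action = int(rng.choice(np.flatnonzero(~asked)))
        else:
            action = int(np.argmax(q_vals))
        responses[action] = 1 if simulate_response(mastery, action) else -1

        done = step == TEST_LEN - 1
        reward = terminal_reward(responses, mastery) if done else 0.0
        if done:
            q_next = 0.0
        else:
            remaining = responses == 0
            q_next = float(np.max((W @ responses)[remaining]))
        # One-step Q-learning update on the weights of the chosen item.
        td_error = reward + GAMMA * q_next - W[action] @ state
        W[action] += ALPHA * td_error * state
```

After training, the greedy policy (argmax over the Q-values of unadministered items) picks the next item from the response history alone, with no intermediate estimate of the knowledge state, which is the property the abstract emphasizes.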



Acknowledgements

This research is supported by the National Natural Science Foundation of China (Nos. 62177009, 62077006, 62007025).

Author information

Corresponding author

Correspondence to Penghe Chen.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Pian, Y., Chen, P., Lu, Y., Song, G., Chen, P. (2023). Improving the Item Selection Process with Reinforcement Learning in Computerized Adaptive Testing. In: Wang, N., Rebolledo-Mendez, G., Dimitrova, V., Matsuda, N., Santos, O.C. (eds) Artificial Intelligence in Education. Posters and Late Breaking Results, Workshops and Tutorials, Industry and Innovation Tracks, Practitioners, Doctoral Consortium and Blue Sky. AIED 2023. Communications in Computer and Information Science, vol 1831. Springer, Cham. https://doi.org/10.1007/978-3-031-36336-8_35


  • DOI: https://doi.org/10.1007/978-3-031-36336-8_35

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-36335-1

  • Online ISBN: 978-3-031-36336-8

  • eBook Packages: Computer Science, Computer Science (R0)
