Abstract
Knowledge tracing (KT) is a fundamental tool for building personalized learning paths that let students take charge of their own learning pace. The core task of KT is to model a student's evolving knowledge state, which is challenging for three reasons. First, because real-world educational data are sparse, previous KT models ignore the high-order information in question-skill relations. Second, the long sequences of student interactions make it difficult for KT models to capture long-term dependencies. Third, the forgetting mechanism is complex and hard to model. To address these issues, we propose a Graph-based Attentive Knowledge-Search Model for Knowledge Tracing (GASKT). The model represents problems and skills as two types of nodes and uses R-GCN to incorporate problem-skill relevance through embedding propagation, which reduces the impact of sparse data. It further employs a modified attention mechanism to address the long-term dependency issue: the attention weight between questions is computed with scaled dot-product attention and then adjusted to fully account for forgetting. We conduct extensive experiments on several real-world benchmark datasets, and GASKT outperforms state-of-the-art KT models with at least a 1% AUC improvement.
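To make the attention modification concrete, the following is a minimal sketch, under our own assumptions, of how a scaled dot-product attention score between question embeddings could be discounted by an exponential forgetting term over the elapsed time between interactions. The abstract does not give the exact formulation, so the names `time_gaps` and `decay_rate` and the additive decay form are hypothetical, not the authors' definition.

```python
import numpy as np

def forgetting_attention(queries, keys, time_gaps, decay_rate=0.1):
    """Scaled dot-product attention whose scores are discounted by an
    exponential forgetting factor over the time elapsed between interactions.

    queries, keys : (seq_len, d) arrays of question embeddings
    time_gaps     : (seq_len, seq_len) array of elapsed time between interactions
    decay_rate    : hypothetical decay hyperparameter (not from the paper)
    """
    d = queries.shape[-1]
    # standard scaled dot-product scores
    scores = queries @ keys.T / np.sqrt(d)
    # hypothetical forgetting term in the spirit of the Ebbinghaus curve:
    # interactions further in the past contribute less to the prediction
    scores = scores - decay_rate * time_gaps
    # causal mask: a question may only attend to itself and earlier interactions
    mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
    scores = np.where(mask, -np.inf, scores)
    # softmax over the key dimension yields the attention weights
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return weights / weights.sum(axis=-1, keepdims=True)
```

Subtracting `decay_rate * time_gaps` before the softmax is equivalent to multiplying each attention weight by an exponential decay in the time gap, which is one common way to fold a forgetting curve into attention; the paper's actual mechanism may differ.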
C. Peng—Supported by the Scientific and Technological Innovation 2030 Major Projects under Grant 2018AAA0100902, the Shanghai Science and Technology Commission under Grant No. 20511100200 and OneSmart Education Group.