Explainable Automatic Grading with Neural Additive Models

  • Conference paper
  • First Online:
Artificial Intelligence in Education (AIED 2024)

Abstract

The use of automatic short answer grading (ASAG) models may help alleviate the time burden of grading while encouraging educators to incorporate open-ended items in their curricula more frequently. However, current state-of-the-art ASAG models are large neural networks (NNs), often described as “black boxes,” that provide no explanation of which characteristics of an input are important for the produced output. This opaque nature can be frustrating to teachers and students when they try to interpret, or learn from, an automatically generated grade. To create a powerful yet intelligible ASAG model, we experiment with a Neural Additive Model (NAM), which combines the performance of an NN with the explainability of an additive model. We use a Knowledge Integration (KI) framework from the learning sciences to guide feature engineering, creating inputs that reflect whether a student includes certain ideas in their response. We hypothesize that features indicating the inclusion (or exclusion) of predefined ideas will be sufficient for the NAM to achieve good predictive power and interpretability, since the same ideas would guide a human scorer using a KI rubric. We compare the performance of the NAM with that of another explainable model, logistic regression, using the same features, and with a non-explainable neural model, DeBERTa, that does not require feature engineering.
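To make the architecture described in the abstract concrete, the sketch below shows a minimal Neural Additive Model in PyTorch: one small subnetwork per binary KI-idea feature, whose scalar outputs are summed (plus a bias) to produce the grade logit, so each feature's contribution to the prediction can be inspected directly. This is an illustrative sketch under assumed details, not the authors' implementation; the feature count, hidden size, and returned outputs are hypothetical.

import torch
import torch.nn as nn

class NeuralAdditiveModel(nn.Module):
    """One small MLP per input feature; the prediction is the sum of their scalar outputs."""

    def __init__(self, n_features, hidden=16):
        super().__init__()
        # Each feature gets its own subnetwork mapping a single value to a scalar contribution.
        self.feature_nets = nn.ModuleList([
            nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(n_features)
        ])
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        # x: (batch, n_features) of 0/1 indicators for predefined KI ideas in a response.
        contributions = torch.cat(
            [net(x[:, i:i + 1]) for i, net in enumerate(self.feature_nets)], dim=1
        )
        logit = contributions.sum(dim=1) + self.bias
        # Returning per-feature contributions is what makes the predicted grade interpretable.
        return torch.sigmoid(logit), contributions

# Hypothetical usage with five idea-indicator features for one student response.
model = NeuralAdditiveModel(n_features=5)
x = torch.tensor([[1.0, 0.0, 1.0, 1.0, 0.0]])
prob_correct, per_feature = model(x)

Because the model is additive, plotting each subnetwork's output over its feature's values (here, 0 or 1) shows exactly how much including or omitting a given idea shifts the predicted score, which is the interpretability property the paper relies on.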



Author information

Corresponding author

Correspondence to Aubrey Condor.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Condor, A., Pardos, Z. (2024). Explainable Automatic Grading with Neural Additive Models. In: Olney, A.M., Chounta, I.-A., Liu, Z., Santos, O.C., Bittencourt, I.I. (eds.) Artificial Intelligence in Education. AIED 2024. Lecture Notes in Computer Science, vol. 14829. Springer, Cham. https://doi.org/10.1007/978-3-031-64302-6_2

  • DOI: https://doi.org/10.1007/978-3-031-64302-6_2

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-64301-9

  • Online ISBN: 978-3-031-64302-6

  • eBook Packages: Computer Science, Computer Science (R0)
