DOI: 10.1007/978-3-030-57717-9_1
Article

Exploring Artificial Jabbering for Automatic Text Comprehension Question Generation

Published: 14 September 2020

Abstract

Many educational texts lack comprehension questions, and authoring them consumes time and money. Thus, in this article, we ask to what extent artificial jabbering text generation systems can be used to generate textbook comprehension questions. Novel machine learning-based text generation systems jabber on a wide variety of topics with deceptively good performance. To expose the generated texts as such, one often has to understand the actual topic the system jabbers about. Hence, confronting learners with generated texts may cause them to question their level of knowledge. We built a novel prototype that generates comprehension questions given arbitrary textbook passages. We discuss the strengths and weaknesses of the prototype quantitatively and qualitatively. While our prototype is not perfect, we provide evidence that such systems have great potential as question generators, and we identify the most promising starting points that may lead to (semi-)automated generators supporting textbook authors and self-studying.
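The prototype described above relies on a neural language model to generate questions. As a much simpler, self-contained illustration of the underlying idea of deriving a comprehension question from an arbitrary passage (this is a toy cloze-style heuristic, not the authors' method; the stopword list and length threshold are illustrative assumptions):

```python
import re
from collections import Counter

# Toy stopword list; a real system would use a proper NLP resource.
STOPWORDS = {"the", "a", "an", "of", "in", "and", "to", "is", "are",
             "that", "for", "on", "as", "with", "by", "it", "its"}

def cloze_question(passage: str) -> tuple[str, str]:
    """Blank out the most frequent content word in the passage,
    yielding a fill-in-the-blank question plus its answer."""
    words = re.findall(r"[A-Za-z]+", passage)
    candidates = [w for w in words
                  if w.lower() not in STOPWORDS and len(w) > 3]
    if not candidates:
        raise ValueError("no candidate keyword found")
    # Most frequent candidate (ties broken by first appearance).
    answer, _ = Counter(w.lower() for w in candidates).most_common(1)[0]
    # Replace every occurrence of the answer with a blank.
    question = re.sub(rf"\b{re.escape(answer)}\b", "_____",
                      passage, flags=re.IGNORECASE)
    return question, answer
```

A sketch like this makes clear why the neural approach is attractive: frequency heuristics can only produce gap-fill items, whereas a language model can phrase open questions about the passage's content.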


Cited By

  • (2021) On the Linguistic and Pedagogical Quality of Automatic Question Generation via Neural Machine Translation. In: Technology-Enhanced Learning for a Free, Safe, and Sustainable World, pp. 289–294. DOI: 10.1007/978-3-030-86436-1_22. Online publication date: 20 September 2021

Published In

Addressing Global Challenges and Quality Education: 15th European Conference on Technology Enhanced Learning, EC-TEL 2020, Heidelberg, Germany, September 14–18, 2020, Proceedings
Sep 2020
507 pages
ISBN:978-3-030-57716-2
DOI:10.1007/978-3-030-57717-9

Publisher

Springer-Verlag

Berlin, Heidelberg

Author Tags

  1. Text comprehension
  2. Language models
  3. Automatic question generation
  4. Educational technology
