DOI: 10.1145/3649217.3653587
research-article
Open access

Explaining Code with a Purpose: An Integrated Approach for Developing Code Comprehension and Prompting Skills

Published: 03 July 2024

Abstract

Reading, understanding and explaining code have traditionally been important skills for novices learning programming. As large language models (LLMs) become prevalent, these foundational skills are more important than ever given the increasing need to understand and evaluate model-generated code. Brand new skills are also needed, such as the ability to formulate clear prompts that can elicit intended code from an LLM. Thus, there is great interest in integrating pedagogical approaches for the development of both traditional coding competencies and the novel skills required to interact with LLMs. One effective way to develop and assess code comprehension ability is with "Explain in plain English" (EiPE) questions, where students succinctly explain the purpose of a fragment of code. However, grading EiPE questions has always been difficult given the subjective nature of evaluating written explanations and this has stifled their uptake. In this paper, we explore a natural synergy between EiPE questions and code-generating LLMs to overcome this limitation. We propose using an LLM to generate code based on students' responses to EiPE questions -- not only enabling EiPE responses to be assessed automatically, but helping students develop essential code comprehension and prompt crafting skills in parallel. We investigate this idea in an introductory programming course and report student success in creating effective prompts for solving EiPE questions. We also examine student perceptions of this activity and how it influences their views on the use of LLMs for aiding and assessing learning.
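
The auto-grading mechanism the abstract describes, generating code from a student's plain-English explanation and then checking whether that code behaves like a reference solution, could be outlined roughly as below. This is an illustrative sketch only: the helper names (build_prompt, llm_generate_code, grade_response), the `solution` entry point, and the sample test cases are assumptions made for the example, not details taken from the paper.

```python
def build_prompt(student_explanation: str) -> str:
    """Wrap the student's plain-English explanation in a code-generation prompt."""
    return (
        "Write a Python function named `solution` that does the following:\n"
        f"{student_explanation}\n"
        "Return only the code."
    )


def llm_generate_code(prompt: str) -> str:
    """Placeholder for a call to a code-generating LLM.

    A real implementation would send `prompt` to a model API and return the
    code it produces; this stub only marks where that call would go.
    """
    raise NotImplementedError("Plug in an LLM client here.")


def grade_response(student_explanation: str,
                   test_cases: list[tuple[tuple, object]]) -> bool:
    """Generate code from the explanation and check it against instructor test cases."""
    code = llm_generate_code(build_prompt(student_explanation))
    namespace: dict = {}
    try:
        exec(code, namespace)            # run the model-generated code
        solution = namespace["solution"]  # expected entry point (an assumption)
    except Exception:
        return False                      # incomplete or broken code fails
    return all(solution(*args) == expected for args, expected in test_cases)


# Hypothetical EiPE question whose reference code returns the larger of two numbers.
tests = [((3, 7), 7), ((10, 2), 10), ((-1, -5), -1)]
# grade_response("returns the larger of its two arguments", tests)
```

Grading on the behaviour of the generated code, rather than on the wording of the explanation, is what lets vague or ambiguous responses fail naturally: an underspecified prompt is unlikely to yield code that passes all of the instructor's tests.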


Cited By

  • (2024) Prompting for Comprehension: Exploring the Intersection of Explain in Plain English Questions and Prompt Writing. Proceedings of the Eleventh ACM Conference on Learning @ Scale, 39-50. https://doi.org/10.1145/3657604.3662039. Online publication date: 9-Jul-2024.
  • (2024) Not Merely Useful but Also Amusing: Impact of Perceived Usefulness and Perceived Enjoyment on the Adoption of AI-Powered Coding Assistant. International Journal of Human–Computer Interaction, 1-13. https://doi.org/10.1080/10447318.2024.2375701. Online publication date: 11-Jul-2024.


      Published In

      ITiCSE 2024: Proceedings of the 2024 on Innovation and Technology in Computer Science Education V. 1
      July 2024
      776 pages
      ISBN:9798400706004
      DOI:10.1145/3649217
      This work is licensed under a Creative Commons Attribution 4.0 International License.

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Author Tags

      1. code comprehension
      2. cs1
      3. eipe
      4. explain in plain english
      5. introductory programming
      6. large language models
      7. llms
      8. prompting

      Qualifiers

      • Research-article

      Funding Sources

      • Research Council of Finland

      Conference

      ITiCSE 2024

      Acceptance Rates

      Overall Acceptance Rate 552 of 1,613 submissions, 34%

      Bibliometrics & Citations

      Article Metrics

      • Downloads (last 12 months): 377
      • Downloads (last 6 weeks): 131
      Reflects downloads up to 12 Nov 2024
