DOI: 10.1145/3632621.3671427
Poster

Refute Questions for Concrete, Cluttered Specifications

Published: 12 August 2024

Abstract

Learners must be able to critique AI-generated code, which is often plausible but not always appropriate. Refute Questions have been proposed as one way to develop this ability. Each such question has two components: a task specification and a purported solution (typically, a single function) for that task. Our prior work [1, 4, 5] has focused on Refute Questions where purported solutions are inappropriate because they are logically incorrect. Given such a question, learners must demonstrate their understanding of both the buggy function and the specification by providing an input on which the function’s behavior does not satisfy the specification.
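As a hypothetical illustration (not an example from the poster itself), a Refute Question pairs a specification with a purported solution, and the learner answers by supplying a refuting input. The task, function name, and bug below are all invented for this sketch:

```python
def is_positive_multiple_of_3(n: int) -> bool:
    """Specification: return True if and only if n is a POSITIVE multiple of 3."""
    # Purported (buggy) solution: checks divisibility but ignores positivity.
    return n % 3 == 0

# A learner refutes the solution by naming an input on which its
# behaviour violates the specification:
refuting_input = 0
assert is_positive_multiple_of_3(refuting_input)  # the function accepts 0 ...
assert not (refuting_input > 0)                   # ... but 0 is not positive
```

The learner's answer is just the input (here, `0` or any negative multiple of 3); demonstrating why it refutes the solution requires understanding both the specification and the code.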
In this work, we modify our conception of Refute Questions in two ways. First, we critique not only the correctness of the purported function but also aspects of its design, focusing on the appropriateness of the function's key design elements.
These design aspects are based on the first two steps of Felleisen et al.'s 6-step function design recipe [3]. Second, our prior work has only considered abstract task specifications. In our experience, AI-generated code is often well-designed for such specifications (e.g., see Figure 1). Thus, we consider task specifications that are concrete (i.e., they include task-specific details) and cluttered (i.e., they contain extraneous details). Abstracting away certain details from real-world specifications and eliminating clutter are important and often challenging skills for learners [6, 7, 8, 9, 10]. Further, today's code-generation tools often produce poorly designed code given concrete, cluttered specifications (e.g., see Figure 2), partly because the performance of the underlying AI model drops substantially as the length of the specification grows [2]. We believe that our new style of Refute Questions may help learners develop the ability to critique such code.
We illustrate our process for designing such Refute Questions with an example. We start with an abstract task specification such as the one shown in Figure 1. Next, we manually invent concretizations for the abstract terms 'integers' and 'sign' (respectively: 'bank accounts' and their 'type': active, dormant, or overdrawn). Further, we clutter the specification by inventing extraneous facts about the percentage of permissible accounts of each type. For these questions, learners critique the AI-generated code by answering a series of True/False questions related to the design aspects noted previously. Further, if the design is appropriate, learners critique the logical correctness of the code, as with standard Refute Questions.
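To make the concretization concrete, here is one possible shape such a task could take. Everything below (the data definition, the classification rule tying account type to balance, and all names) is an assumption invented for this sketch, not the poster's actual materials; it merely illustrates the design-recipe aspects (a data definition, then a signature and purpose statement) that learners would critique:

```python
from dataclasses import dataclass
from typing import Literal

# Design-recipe step 1 (data definition): an account type is one of
# three values, and an account is represented by its balance.
AccountType = Literal["active", "dormant", "overdrawn"]

@dataclass
class Account:
    balance: float

# Design-recipe step 2 (signature and purpose statement):
def account_type(acct: Account) -> AccountType:
    """Classify an account by its balance.

    Assumed rule for this sketch: negative balance -> overdrawn,
    zero balance -> dormant, positive balance -> active.
    """
    if acct.balance < 0:
        return "overdrawn"
    if acct.balance == 0:
        return "dormant"
    return "active"
```

A cluttered version of this specification might additionally state, say, permissible percentages of each account type; a well-designed solution would abstract those details away, and the True/False questions probe whether the AI-generated code did so.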
This poster will present findings from an initial study of Refute Questions for Concrete, Cluttered Specifications in the context of an online course on programming with AI code-generation tools, which runs in June 2024.

References

[1]
Nimisha Agarwal, Viraj Kumar, Arun Raman, and Amey Karkare. 2023. A Bug's New Life: Creating Refute Questions from Filtered CS1 Student Code Snapshots. In Proceedings of the ACM Conference on Global Computing Education Vol 1 (Hyderabad, India) (CompEd 2023). Association for Computing Machinery, New York, NY, USA, 7–14. https://doi.org/10.1145/3576882.3617916
[2]
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Pondé de Oliveira Pinto, Jared Kaplan, Harrison Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, et al. 2021. Evaluating Large Language Models Trained on Code. CoRR abs/2107.03374 (2021). arXiv:2107.03374. https://arxiv.org/abs/2107.03374
[3]
Matthias Felleisen, Robert Bruce Findler, Matthew Flatt, and Shriram Krishnamurthi. 2018. How to Design Programs: An Introduction to Programming and Computing. MIT Press.
[4]
Viraj Kumar. 2021. Refute: An Alternative to ‘Explain in Plain English’ Questions. In Proceedings of the 17th ACM Conference on International Computing Education Research (Virtual Event, USA) (ICER 2021). Association for Computing Machinery, New York, NY, USA, 438–440. https://doi.org/10.1145/3446871.3469791
[5]
Viraj Kumar and Arun Raman. 2023. Helping Students Develop a Critical Eye with Refute Questions. In Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 2 (Toronto ON, Canada) (SIGCSE 2023). Association for Computing Machinery, New York, NY, USA, 1181. https://doi.org/10.1145/3545947.3569636
[6]
Dastyni Loksa and Amy J Ko. 2016. The role of self-regulation in programming problem solving process and success. In Proceedings of the 2016 ACM conference on international computing education research. 83–91.
[7]
James Prather, Raymond Pettit, Kayla McMurry, Alani Peters, John Homer, and Maxine Cohen. 2018. Metacognitive difficulties faced by novice programmers in automated assessment tools. In Proceedings of the 2018 ACM Conference on International Computing Education Research. 41–50.
[8]
Yanyan Ren, Shriram Krishnamurthi, and Kathi Fisler. 2019. What help do students seek in TA office hours?. In Proceedings of the 2019 ACM Conference on International Computing Education Research. 41–49.
[9]
Jacqueline Whalley and Nadia Kasto. 2014. A qualitative think-aloud study of novice programmers’ code writing strategies. In Proceedings of the 2014 conference on Innovation & technology in computer science education. 279–284.
[10]
John Wrenn and Shriram Krishnamurthi. 2019. Executable examples for programming problem comprehension. In Proceedings of the 2019 ACM conference on international computing education research. 131–139.


Published In

ICER '24: Proceedings of the 2024 ACM Conference on International Computing Education Research - Volume 2
August 2024, 61 pages
ISBN: 9798400704765
DOI: 10.1145/3632621

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery, New York, NY, United States

Author Tags

  1. Code critique
  2. Function design
  3. Refute Questions

Qualifiers

  • Poster
  • Research
  • Refereed limited

Conference

ICER 2024
Overall Acceptance Rate: 189 of 803 submissions, 24%
