DOI: 10.1145/3341525.3387392

Misconception-Based Peer Feedback: A Pedagogical Technique for Reducing Misconceptions

Published: 15 June 2020

Abstract

Developing high-quality pedagogical materials and techniques is a challenging but important task. We leverage prior work identifying student misconceptions and difficulties in introductory computing courses to design misconception-based feedback (MBF) to address these difficulties. In MBF, peers working in pairs use prompts to guide their discussion of a recently completed coding assignment. A human autograder group (HAG) simulates the behavior of a typical autograder program, supplying only test cases and their correct outputs, allowing us to factor out the effect of the medium (computer vs. human). Participants completed conceptual pre-tests and post-tests that asked them to explain their reasoning, and we captured screen and audio recordings of the sessions. Our mixed-methods analysis looked for statistical differences in frequency counts of misconceptions and qualitatively analyzed the audio/video data and the language used in pre/post-test written responses for explanations of these differences. Significant benefits of MBF were seen on questions that required students to comprehend differences between pass-by-value and pass-by-reference. Not only did these questions show a greater reduction in misconceptions for the MBF group than for the HAG group, but the qualitative analysis provided evidence that students' improvement in language and understanding of the concepts from pre-test to post-test was directly tied to the MBF intervention. This study presents a promising, low-resource technique for addressing misconceptions about several important computer science concepts.
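The paper's study materials are not reproduced on this page; purely as an illustrative sketch of the pass-by-value versus pass-by-reference distinction the abstract highlights (the function names below are hypothetical, not drawn from the study), a minimal C++ example contrasts the two calling conventions:

#include <iostream>

// Pass-by-value: the function receives a copy of the argument,
// so the caller's variable is unchanged -- the behavior students
// holding this misconception often fail to predict.
void incrementByValue(int x) {
    x += 1;  // modifies only the local copy
}

// Pass-by-reference: the parameter is an alias for the caller's
// variable, so the change persists after the call returns.
void incrementByReference(int& x) {
    x += 1;  // modifies the caller's variable
}

int main() {
    int a = 0;
    incrementByValue(a);
    std::cout << a << '\n';  // prints 0: only the copy was incremented
    incrementByReference(a);
    std::cout << a << '\n';  // prints 1: the original was incremented
    return 0;
}

A student holding the misconception would expect both calls to change a; observing that only the by-reference call does is exactly the kind of discrepancy the MBF discussion prompts are meant to surface.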

Cited By

  • (2023) Explain Trace: Misconceptions of Control-Flow Statements. Computers, 12(10), 192. DOI: 10.3390/computers12100192. Online publication date: 24-Sep-2023
  • (2023) Experiences from Using Code Explanations Generated by Large Language Models in a Web Software Development E-Book. Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 1, 931-937. DOI: 10.1145/3545945.3569785. Online publication date: 2-Mar-2023
  • (2022) Automatic Generation of Programming Exercises and Code Explanations Using Large Language Models. Proceedings of the 2022 ACM Conference on International Computing Education Research - Volume 1, 27-43. DOI: 10.1145/3501385.3543957. Online publication date: 3-Aug-2022
  • (2021) Wrong Answers for Wrong Reasons: The Risks of Ad Hoc Instruments. Proceedings of the 21st Koli Calling International Conference on Computing Education Research, 1-11. DOI: 10.1145/3488042.3488045. Online publication date: 17-Nov-2021
  • (2021) Effects of Hints on Debugging Scratch Programs: An Empirical Study with Primary School Teachers in Training. Proceedings of the 16th Workshop in Primary and Secondary Computing Education, 1-10. DOI: 10.1145/3481312.3481344. Online publication date: 18-Oct-2021
  • (2021) When Wrong is Right: The Instructional Power of Multiple Conceptions. Proceedings of the 17th ACM Conference on International Computing Education Research, 184-197. DOI: 10.1145/3446871.3469750. Online publication date: 16-Aug-2021

    Published In

    ITiCSE '20: Proceedings of the 2020 ACM Conference on Innovation and Technology in Computer Science Education
    June 2020
    615 pages
    ISBN: 9781450368742
    DOI: 10.1145/3341525

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Author Tags

    1. active learning
    2. computing education
    3. misconceptions
    4. pedagogical content knowledge

    Qualifiers

    • Research-article

    Conference

    ITiCSE '20

    Acceptance Rates

    Overall Acceptance Rate 552 of 1,613 submissions, 34%
