
How Can Automatic Feedback Help Students Construct Automata?

Published: 10 March 2015

Abstract

In computer-aided education, the goal of automatic feedback is to provide a meaningful explanation of students' mistakes. We focus on providing feedback for constructing a deterministic finite automaton that accepts strings that match a described pattern. Natural choices for feedback are binary feedback (correct/wrong) and a counterexample of a string that is processed incorrectly. Such feedback is easy to compute but might not provide the student enough help. Our first contribution is a novel way to automatically compute alternative conceptual hints. Our second contribution is a rigorous evaluation of feedback with 377 students. We find that providing either counterexamples or hints is judged as helpful, increases student perseverance, and can improve problem completion time. However, both strategies have particular strengths and weaknesses. Since our feedback is completely automatic, it can be deployed at scale and integrated into existing massive open online courses.
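To illustrate the counterexample style of feedback the abstract describes, the sketch below finds a shortest string on which a student's DFA and the target DFA disagree, by breadth-first search over the product automaton. This is a generic construction, not the paper's implementation; the `(start, accepting, delta)` triple representation is a hypothetical encoding chosen for brevity.

```python
from collections import deque

def shortest_counterexample(dfa_a, dfa_b, alphabet):
    """Return a shortest string accepted by exactly one of the two DFAs,
    or None if they are equivalent.

    Each DFA is given as a (start, accepting, delta) triple, where
    delta maps (state, symbol) -> state and accepting is a set of states.
    """
    start_a, accept_a, delta_a = dfa_a
    start_b, accept_b, delta_b = dfa_b
    seen = {(start_a, start_b)}
    queue = deque([(start_a, start_b, "")])
    while queue:
        qa, qb, word = queue.popleft()
        # A reachable state pair where exactly one DFA accepts
        # witnesses a difference between the two languages.
        if (qa in accept_a) != (qb in accept_b):
            return word
        for sym in alphabet:
            nxt = (delta_a[(qa, sym)], delta_b[(qb, sym)])
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt[0], nxt[1], word + sym))
    return None  # no disagreement reachable: the DFAs are equivalent
```

Because the BFS explores state pairs in order of increasing word length, the first disagreement found yields a shortest counterexample, which is the most readable form of this feedback for a student.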

Supplementary Material

MP4 File (a9.mp4)




Reviews

Franz J Kurfess

Learning concepts like automata and formal languages can be challenging. Students often fail to see their relevance to more practical work, and the feedback they receive on lab exercises or homework assignments typically arrives with significant delay and may offer little meaningful information. From an instructor's or grader's perspective, assessing large numbers of similar exercises quickly becomes tedious. To provide real-time, meaningful feedback, the authors developed an automatic assessment system for deterministic finite automata. It gives instant, personalized feedback, allowing students to work on a problem incrementally until they find a correct solution. In addition to the binary (correct/incorrect) feedback and counterexamples reported in related work, the authors construct hints that suggest strategies for improving the student's current solution. The system was tested at several universities with hundreds of students in theoretical computer science classes. Although the evaluation emphasized comparing the different types of feedback, student scores also improved significantly relative to similar exercises completed by students in earlier classes that did not use such tools. Binary feedback was helpful, but less so than the other two types, as measured by differences in scores, number of attempts, and time needed to find a correct solution, and according to student feedback. Student preferences between hints and counterexamples were fairly evenly divided, and the empirical results were likewise inconclusive. Given how easy it is to incorporate such a tool into a class, either as an additional resource or as a replacement for some graded exercises, it is not surprising that a significant number of instructors at various institutions have adopted it. Online Computing Reviews Service



Published In

ACM Transactions on Computer-Human Interaction, Volume 22, Issue 2
Special Issue on Online Learning at Scale
April 2015
133 pages
ISSN: 1073-0516
EISSN: 1557-7325
DOI: 10.1145/2744768
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 10 March 2015
Accepted: 01 January 2015
Revised: 01 November 2014
Received: 01 May 2014
Published in TOCHI Volume 22, Issue 2


Author Tags

  1. A/B study
  2. autograding
  3. automata
  4. feedback

Qualifiers

  • Research-article
  • Research
  • Refereed

Funding Sources

  • NSF Expeditions in Computing

Article Metrics

  • Downloads (Last 12 months)63
  • Downloads (Last 6 weeks)16
Reflects downloads up to 27 Nov 2024


Cited By

  • (2024) Exploring Error Types in Formal Languages Among Students of Upper Secondary Education. Proceedings of the 24th Koli Calling International Conference on Computing Education Research, 1–8. DOI: 10.1145/3699538.3699540 (12 Nov 2024)
  • (2024) Fault Localization for Novice Programs Combining Static Analysis and Dynamic Detection. 2024 IEEE 48th Annual Computers, Software, and Applications Conference (COMPSAC), 94–102. DOI: 10.1109/COMPSAC61105.2024.00023 (2 Jul 2024)
  • (2024) Enhancing Python Learning with PyTutor: Efficacy of a ChatGPT-Based Intelligent Tutoring System in Programming Education. Computers and Education: Artificial Intelligence 7, 100309. DOI: 10.1016/j.caeai.2024.100309 (Dec 2024)
  • (2023) Automated Grading of Automata with ACL2s. Electronic Proceedings in Theoretical Computer Science 375, 77–91. DOI: 10.4204/EPTCS.375.7 (10 Mar 2023)
  • (2023) Framework for SQL Error Message Design: A Data-Driven Approach. ACM Transactions on Software Engineering and Methodology 33, 1, 1–50. DOI: 10.1145/3607180 (23 Nov 2023)
  • (2023) Using Micro Parsons Problems to Scaffold the Learning of Regular Expressions. Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1, 457–463. DOI: 10.1145/3587102.3588853 (29 Jun 2023)
  • (2023) MAATSE: Prototyping and Evaluating an Open and Modular E-Assessment Tool for STEM Education. 2023 IEEE 2nd German Education Conference (GECon), 1–6. DOI: 10.1109/GECon58119.2023.10295151 (2 Aug 2023)
  • (2023) Automated Grading of Regular Expressions. Programming Languages and Systems, 90–112. DOI: 10.1007/978-3-031-30044-8_4 (22 Apr 2023)
  • (2022) Teaching Simple Constructive Proofs with Haskell Programs. Electronic Proceedings in Theoretical Computer Science 363, 54–73. DOI: 10.4204/EPTCS.363.4 (26 Jul 2022)
  • (2022) Contribution of Automated Feedback to the English Writing Competence of Distance Foreign Language Learners. E-Learning and Digital Media 21, 1, 24–41. DOI: 10.1177/20427530221139579 (26 Nov 2022)
