DOI: 10.1145/3478431.3499315

Multilingual CS Education Pathways: Implications for Vertically-Scaled Assessment

Published: 22 February 2022

Abstract

The expansion of computer science (CS) into K-12 contexts has produced a diverse ecosystem of curricula designed for different grade levels, teaching a variety of concepts, and using a wide array of programming languages and environments. Many students will learn more than one programming language over the course of their studies. There is a growing need for CS assessments that can measure student learning over time, but these multilingual learning pathways create two challenges for assessment in computer science. First, validated assessments do not exist for all of the programming languages used in CS classrooms. Second, it is difficult to measure growth in student understanding over time when students move between programming languages as they progress in their CS education. In this position paper, we argue that the field of computing education research needs to develop methods and tools to better measure students' learning over time and across the different programming languages they learn along the way. In support of this position, we share data showing that students approach assessment problems differently depending on the programming language, even when the problems are conceptually isomorphic, and we discuss some approaches for developing multilingual assessments of student learning over time.
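To make the idea of conceptually isomorphic items concrete, here is a hypothetical example (not drawn from the paper's data): the same loop-tracing question posed in a Scratch-like block style and in Python. Both versions target the identical concept — repeated accumulation — yet differ only in surface notation, which is exactly the dimension along which the paper reports students responding differently.

```python
# Hypothetical isomorphic assessment item:
# "What is the value of total after this program runs?"
#
# Block-based version (Scratch-like pseudocode):
#   set total to 0
#   repeat 4 times
#       change total by 3
#
# Text-based version (Python), computing the same result:
total = 0
for _ in range(4):
    total += 3

print(total)  # both versions yield 12
```

A multilingual assessment would treat correct answers to either rendering as equivalent evidence of understanding iteration, while recording which modality the student saw.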


Published In

SIGCSE 2022: Proceedings of the 53rd ACM Technical Symposium on Computer Science Education - Volume 1
February 2022
1049 pages
ISBN:9781450390705
DOI:10.1145/3478431
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. assessment
  2. assessment validation
  3. computer science education
  4. k-12 education
  5. programming

Qualifiers

  • Research-article

Funding Sources

  • National Science Foundation

Conference

SIGCSE 2022
Acceptance Rates

Overall Acceptance Rate 1,595 of 4,542 submissions, 35%


