DOI: 10.1145/3321408.3322850

A cognitive diagnosis framework based on peer assessment

Published: 17 May 2019

Abstract

Given examinees' performance (i.e., scores) on each problem, cognitive diagnosis models can discover the latent characteristics of examinees. Traditional cognitive diagnosis models require teachers to provide scores promptly, so they can hardly be applied in large-scale scenarios such as Massive Open Online Courses (MOOCs). Peer assessment is a teaching activity in which students evaluate each other's assignments; the scores given by students can replace the teacher's assessments to a certain extent. In this paper, we propose a novel cognitive diagnosis model named the Peer-Assessment Cognitive Diagnosis Framework (PACDF), which combines peer assessment with cognitive diagnosis, aiming to reduce the burden on teachers. Specifically, we first propose a novel probabilistic graphical model that characterizes not only the relationship between true scores and the scores given by peer assessment, but also the relationship between examinees' skill proficiency and problem mastery. We then adopt a Markov Chain Monte Carlo (MCMC) sampling algorithm to estimate the parameters of the model. Finally, we use the model to predict examinees' performance. Experimental results show that PACDF can quantitatively explain and analyze examinees' skill proficiencies and thus performs better at predicting their performance.
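The pipeline the abstract describes (a latent true score generated from examinee ability, noisy peer grades conditioned on that true score, and MCMC sampling to recover the latent parameters) can be sketched on a toy model. The sketch below is an illustration, not the paper's PACDF: it assumes a single examinee, a Rasch-style item response for the true score, a single known peer-reliability parameter `reliab`, and a random-walk Metropolis-Hastings sampler; all names and values are invented for the example.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy setup (illustrative, not from the paper): one examinee with true
# ability theta_true answers 20 problems; each answer is graded by a peer
# whose reliability `reliab` is the probability the peer score matches
# the (unobserved) true score.
theta_true, reliab = 1.0, 0.85
difficulties = [random.gauss(0, 1) for _ in range(20)]
true_scores = [1 if random.random() < sigmoid(theta_true - b) else 0
               for b in difficulties]
peer_scores = [y if random.random() < reliab else 1 - y
               for y in true_scores]

def log_lik(theta):
    """Marginalize the unobserved true score out of each peer score."""
    ll = 0.0
    for b, z in zip(difficulties, peer_scores):
        p1 = sigmoid(theta - b)  # P(true score = 1 | theta)
        pz = (p1 * (reliab if z == 1 else 1 - reliab)
              + (1 - p1) * ((1 - reliab) if z == 1 else reliab))
        ll += math.log(pz)
    return ll

# Random-walk Metropolis-Hastings over theta with a N(0, 1) prior.
theta, samples = 0.0, []
for step in range(5000):
    prop = theta + random.gauss(0, 0.5)
    log_ratio = ((log_lik(prop) - 0.5 * prop ** 2)
                 - (log_lik(theta) - 0.5 * theta ** 2))
    if math.log(random.random()) < log_ratio:
        theta = prop
    if step >= 1000:  # discard burn-in
        samples.append(theta)

theta_hat = sum(samples) / len(samples)  # posterior-mean ability estimate
print(round(theta_hat, 2))
```

The full framework additionally models per-rater reliabilities and multi-skill proficiency vectors, which would make the state space larger but leave the sampling scheme conceptually the same: propose, score against the marginal likelihood of the observed peer grades, accept or reject.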


Cited By

  • (2024) Improving Grading Fairness and Transparency with Decentralized Collaborative Peer Assessment. Proceedings of the ACM on Human-Computer Interaction 8(CSCW1), 1–24. DOI: 10.1145/3637350. Online publication date: 26-Apr-2024.
  • (2022) Artificial Intelligence for Assessment and Feedback to Enhance Student Success in Higher Education. Mathematical Problems in Engineering 2022, 1–19. DOI: 10.1155/2022/5215722. Online publication date: 5-May-2022.


Published In

ACM TURC '19: Proceedings of the ACM Turing Celebration Conference - China
May 2019, 963 pages
ISBN: 9781450371582
DOI: 10.1145/3321408

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. automated assessment
  2. cognitive diagnosis
  3. peer assessment
  4. peer grading
  5. qualitative feedback

Qualifiers

  • Research-article

Conference

ACM TURC 2019
