How a Rubric Score Application Empowers Teachers’ Attitudes over Computational Thinking Leverage
Abstract
1. Introduction
2. Qualitative Assessment of Students’ Performance
3. Related Work
4. Technology Acceptance Model (TAM)
- RQ1. Is it probable that this Rubric Score Application will be perceived as useful to a significant extent by the teachers?
- RQ2. Is it probable that this Rubric Score Application will be perceived as easy to use to a significant extent by the teachers?
- RQ3. Does the perceived ease-of-use of this Rubric Score Application have a significant effect on its perceived usefulness?
- RQ4. Do the perceived usefulness and perceived ease-of-use significantly affect teachers’ satisfaction?
- RQ5. Is there a significant difference between teachers who teach STEM and non-STEM courses in terms of the frequency of usage of computational thinking as an assessment criterion?
- RQ6. What are the teachers’ attitudes towards the use of computational thinking as an assessment criterion?
5. Descriptive Assessment Application
6. Materials and Methods
6.1. Research Population
6.2. Data Collection Methods
6.3. Data Analysis Methods
6.3.1. Validity Test
6.3.2. Reliability Test
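Only the headings of these procedures survive in this excerpt. As an illustration, the following minimal Python sketch shows how the item-level validity check (Section 6.3.1, item-total correlations) and the reliability check (Section 6.3.2, Cronbach’s alpha) could be reproduced. The pandas/NumPy code and the simulated response matrix are assumptions for demonstration, not the authors’ actual tooling, and the published r(item, total) values may use the corrected variant that excludes each item from its own total.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for one factor (rows = respondents, cols = items)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def item_total_r(items: pd.DataFrame) -> pd.Series:
    """Uncorrected item-total Pearson correlations (validity check)."""
    total = items.sum(axis=1)
    return items.apply(lambda col: col.corr(total))

# Hypothetical response matrix: 15 teachers x the four PU items, 4-point scale.
rng = np.random.default_rng(0)
pu = pd.DataFrame(rng.integers(1, 5, size=(15, 4)),
                  columns=["Q1", "Q2", "Q3", "Q4"])

print(item_total_r(pu))     # compare with the r(item, total) table in Section 7.1
print(cronbach_alpha(pu))   # compare with the reported alpha (0.624 for PU)
```

Applied to the real Q1–Q11 responses, the two outputs would correspond to the validity and reliability tables reported in Sections 7.1 and 7.2.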
6.3.3. Normality Test
6.3.4. One-Sample T-Test
6.3.5. Randomness Test
6.3.6. Homoscedasticity Test
6.3.7. Multiple Linear Regression Analysis
6.3.8. T-Test
6.3.9. F-Test
6.3.10. The Coefficient of Determination (R Square)
6.3.11. Mann–Whitney U Test
7. Results
7.1. Validity Test
7.2. Reliability Test
7.3. Correlation Analysis
7.4. Research Question 1
- H0: It is probable that the Rubric Score Application will be perceived as useful to a significant extent by the teachers. (The perceived usefulness is statistically equal to a value of 2).
- H1: It is not probable that the Rubric Score Application will be perceived as useful to a significant extent by the teachers. (The perceived usefulness is not statistically equal to a value of 2).
7.4.1. Normality Test
7.4.2. One-Sample T-Test
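Since the body text is not reproduced here, a hedged sketch of the two steps named above: a Shapiro–Wilk normality check followed by a one-sample t-test of the mean perceived-usefulness score against the reference value of 2 named in H0. The `pu_scores` vector is fabricated for illustration; scipy’s `shapiro` and `ttest_1samp` are one standard way to run these tests, not necessarily the software the authors used.

```python
import numpy as np
from scipy import stats

# Hypothetical per-teacher PU factor scores (mean of Q1-Q4 on a 4-point scale);
# the real scores are not reproduced in this excerpt.
pu_scores = np.array([1.75, 2.00, 1.50, 1.25, 2.25, 1.50, 1.75, 2.00,
                      1.50, 1.75, 2.00, 1.25, 1.50, 1.75, 2.00])

w, p_norm = stats.shapiro(pu_scores)               # 7.4.1: normality check
t, p = stats.ttest_1samp(pu_scores, popmean=2.0)   # 7.4.2: H0 mean(PU) = 2

print(f"Shapiro-Wilk W = {w:.3f} (p = {p_norm:.3f})")
print(f"t = {t:.3f}, p = {p:.3f}")
```

The same pattern applies to RQ2 (Sections 7.5.1–7.5.2) with the perceived ease-of-use scores, and to RQ6 with the computational thinking variable tested against a value of 3.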
7.5. Research Question 2
- H2: It is probable that the Rubric Score Application will be perceived as easy to use to a significant extent by the teachers. (The perceived ease-of-use is statistically equal to a value of 2).
- H3: It is not probable that the Rubric Score Application will be perceived as easy to use to a significant extent by the teachers. (The perceived ease-of-use is not statistically equal to a value of 2).
7.5.1. Normality Test
7.5.2. One-Sample T-Test
7.6. Research Question 3
- H4: The perceived ease-of-use does not have a significant effect on the perceived usefulness.
- H5: The perceived ease-of-use has a significant effect on the perceived usefulness.
7.6.1. Normality Test
7.6.2. Randomness Test
7.6.3. Homoscedasticity Test
7.6.4. Simple Linear Regression Analysis
7.6.5. Coefficient of Determination
7.6.6. T-Test
7.6.7. F-Test
7.6.8. The Coefficient of Determination (R Square)
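A minimal statsmodels sketch of the Section 7.6 pipeline: fit PU on PEOU by ordinary least squares, check the residuals for randomness and homoscedasticity, and read the t test, F test and R² from the fit. The excerpt does not name the specific randomness or homoscedasticity tests used; a runs test and the Breusch–Pagan test are common choices and are assumptions here, as are the simulated scores.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan
from statsmodels.sandbox.stats.runs import runstest_1samp

# Hypothetical PEOU/PU factor scores for 15 teachers; values are illustrative.
rng = np.random.default_rng(1)
peou = rng.uniform(1.0, 4.0, 15)
pu = 0.5 + 0.7 * peou + rng.normal(0.0, 0.3, 15)

X = sm.add_constant(peou)            # intercept + PEOU predictor
model = sm.OLS(pu, X).fit()          # simple linear regression (7.6.4)

z, p_runs = runstest_1samp(model.resid)              # randomness of residuals (7.6.2)
bp_lm, bp_p, _, _ = het_breuschpagan(model.resid, X) # homoscedasticity (7.6.3)

print(model.summary())               # t test (7.6.6), F test (7.6.7), R^2 (7.6.8)
print(f"runs test p = {p_runs:.3f}, Breusch-Pagan p = {bp_p:.3f}")
```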
7.7. Research Question 4
- H6: The perceived usefulness and perceived ease-of-use do not have significant effects on satisfaction.
- H7: The perceived usefulness and perceived ease-of-use have significant effects on satisfaction.
7.7.1. Normality Test
7.7.2. Randomness Test
7.7.3. Homoscedasticity Test
7.7.4. Multiple Linear Regression Analysis
7.7.5. Coefficient of Determination
7.7.6. T-Test
7.7.7. F-Test
7.7.8. The Coefficient of Determination (R Square)
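The RQ4 model adds a second predictor. A sketch of the multiple linear regression of satisfaction on both TAM factors (Section 7.7), again with hypothetical factor means; the formula API is one convenient way to express the two-predictor model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical factor means for 15 teachers; illustrative values only.
rng = np.random.default_rng(2)
pu = rng.uniform(1.0, 4.0, 15)
peou = rng.uniform(1.0, 4.0, 15)
sat = 0.4 + 0.8 * pu + 0.1 * peou + rng.normal(0.0, 0.3, 15)
df = pd.DataFrame({"SAT": sat, "PU": pu, "PEOU": peou})

model = smf.ols("SAT ~ PU + PEOU", data=df).fit()
print(model.params)                    # partial coefficients (t tests, 7.7.6)
print(model.fvalue, model.f_pvalue)    # joint F test (7.7.7)
print(model.rsquared)                  # R^2 (7.7.8)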
7.8. Research Question 5
- H8: There is no significant difference between STEM and non-STEM teachers in terms of the frequency of use of the computational thinking criterion.
- H9: There is a significant difference between STEM and non-STEM teachers in terms of the frequency of use of the computational thinking criterion.
1. The ability to describe and represent a problem (Abstractive/Algorithmic thinking) (C1)
2. Computational thinking skills (with or without the use of digital methods) (C2)
3. The emergence of Scientific Practice Skills (observing, recording, classifying, comparing) (C3)
4. Computational thinking skills (Designing and solving problems using programming techniques) (C4)
5. Critical Thinking (C5)
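A sketch of the Mann–Whitney U comparison for RQ5, using the per-teacher sums from the two frequency tables reproduced later in this excerpt. Treating the first table as the STEM group and the second as the non-STEM group is an assumption, since the excerpt does not label the tables.

```python
from scipy import stats

# Per-teacher totals of computational-thinking-criterion use, taken from the
# two frequency tables in this excerpt; the group assignment is assumed.
group_a = [43, 13, 2, 21, 147, 27, 102, 58]   # first table (T1-T8)
group_b = [72, 23, 53, 142, 3, 35, 0, 0]      # second table (T8-T15)

u, p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"U = {u:.1f}, p = {p:.3f}")
```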
7.9. Research Question 6
- H10: The teachers’ attitudes towards computational thinking as an assessment criterion are positive. (The variable computational thinking is statistically equal to a value of 3).
- H11: The teachers’ attitudes towards computational thinking as an assessment criterion are not positive. (The variable computational thinking is not statistically equal to a value of 3).
7.10. Focus Groups
7.10.1. Teachers’ Qualitative Assessment of the Ease-of-Use of the Application
7.10.2. Teachers’ Qualitative Assessment of the Usefulness of the Application
7.10.3. Teachers’ Qualitative Assessment of Their Satisfaction with the Application
7.10.4. Teachers’ Qualitative Assessment of the Integration of Computational Thinking Criterion into Evaluative Process
7.10.5. General Comments
8. Discussion
9. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Appendix A
Criterion | Performance Level 1 | Performance Level 2 | Performance Level 3 | Performance Level 4
---|---|---|---|---
Reading comprehension | Fails to extract and process primary and secondary information from texts; identifies parts of speech and their functions. | With significant difficulty and help, extracts and processes primary and secondary information from texts; identifies parts of speech and their functions. | Satisfactorily extracts and processes primary and secondary information from texts; identifies parts of speech and their functions. | Extremely easily extracts and processes primary and secondary information from texts; identifies parts of speech and their functions.
Writing | Writes with poor ideas, many lexical and grammatical errors, poor vocabulary and stereotypical style. | Writes with several lexical and grammatical errors, moderate vocabulary and often stereotypical style. | Writes with few lexical and grammatical errors, satisfactory vocabulary and sometimes personal and sometimes stereotypical style. | Writes extremely easily without lexical and grammatical errors, with a wide vocabulary, various communicative goals and personal style.
Critical thinking | Fails to combine information, explain reasoning, and argue about situations asked of them (hypothetical or real). | With significant difficulty and help, combines information, explains reasoning, and argues about situations asked of them (hypothetical or real). | Satisfactorily combines information, explains reasoning and argues about situations asked of them (hypothetical or real). | Extremely easily combines information, explains reasoning, and argues about situations asked of them (hypothetical or real).
Participation-collaboration | Does not participate in collective actions and hesitates to ask for and offer help and does not seem to appreciate the value of the group. | Rarely participates in collective actions and often hesitates to ask for and offer help and does not have a high appreciation of the value of the group. | Several times participates in collective actions and often asks for and offers help and appreciates the value of the group. | Actively participates in collective actions and does not hesitate to ask for and offer help and appreciates the value of the group.
Diligence | Does not complete exercises or tasks assigned to them in the school environment or at home. | Rarely completes exercises or tasks assigned to them in the school environment or at home. | Often completes exercises or tasks assigned to them in the school environment or at home. | Always completes exercises or tasks assigned to them in the school environment or at home.
Computational thinking | Never uses flexible methods that improve learning and/or solve teaching or everyday problems. | Rarely uses flexible methods that improve learning and/or solve teaching or everyday problems. | Often perceives and implements more effective methods for solving teaching or everyday problems. | Always analyses, documents and implements flexible methods of solving teaching problems or everyday problems.
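The Appendix A rubric is the data at the heart of the scoring application. Purely as an illustration, here is one way it could be encoded for such an application; the structure and names are hypothetical, not the authors’ implementation, and only one criterion is spelled out.

```python
# Hypothetical encoding of the Appendix A rubric for a scoring application.
RUBRIC: dict[str, dict[int, str]] = {
    "Computational thinking": {
        1: "Never uses flexible methods that improve learning and/or "
           "solve teaching or everyday problems.",
        2: "Rarely uses flexible methods that improve learning and/or "
           "solve teaching or everyday problems.",
        3: "Often perceives and implements more effective methods for "
           "solving teaching or everyday problems.",
        4: "Always analyses, documents and implements flexible methods "
           "of solving teaching problems or everyday problems.",
    },
    # ... the remaining five criteria would be encoded the same way.
}

def descriptor(criterion: str, level: int) -> str:
    """Return the performance-level text a teacher sees when scoring."""
    return RUBRIC[criterion][level]

assert descriptor("Computational thinking", 3).startswith("Often")
```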
References
Participant Background Information | N | Percentages |
---|---|---|
Gender | ||
Male | 4 | 26.7% |
Female | 11 | 73.3% |
Grade level taught | ||
Primary | 3 | 20.0% |
Secondary | 12 | 80.0% |
Experience | ||
Descriptive student assessment | ||
Not at all, Low | 7 | 46.7% |
Satisfactory, Good | 8 | 53.3% |
ICT student assessment | ||
Not at all, Low | 8 | 53.3% |
Satisfactory, Good | 7 | 46.7% |
Teaching using CT principles | ||
Agree | 14 | 93.3% |
Neutral | 1 | 6.7% |
Teaching CT principles | ||
Agree | 13 | 86.6% |
Neutral | 1 | 6.7% |
No answer | 1 | 6.7% |
Teaching CT as a separate course | ||
Agree | 7 | 46.7% |
Neutral | 7 | 46.7% |
Disagree | 1 | 6.6% |
Factors | Items | Questions
---|---|---
PU | | The use of this application helped me to …
 | Q1 | become more productive.
 | Q2 | communicate essentially with parents.
 | Q3 | study the data and differentiate my teaching.
 | Q4 | monitor and evaluate the material I have taught.
PEOU | | I believe that the assessment application that I used …
 | Q5 | works as I expected it to.
 | Q6 | is stable during its operation.
 | Q7 | is compatible with school procedures.
SAT | Q8 | facilitates the teachers with the assessment of the students during distance learning.
 | Q9 | if it was commercially available (at an affordable price), I think it is a worthwhile purchase for teachers.
CT | Q10 | To what extent did you engage with the computational thinking criterion?
 | Q11 | To what extent did the computational thinking criterion match your subject?
Factors | Items | r (Item, Total)
---|---|---
PU | Q1 | 0.820
 | Q2 | 0.641
 | Q3 | 0.749
 | Q4 | 0.555
PEOU | Q5 | 0.794
 | Q6 | 0.913
 | Q7 | 0.903
SAT | Q8 | 0.863
 | Q9 | 0.888
CT | Q10 | 0.823
 | Q11 | 0.850
Factors | No. of Items | Cronbach’s Alpha
---|---|---|
PU | 4 | 0.624 |
PEOU | 3 | 0.831 |
SAT | 2 | 0.708 |
CT | 2 | 0.554 |
Factors | PU | PEOU | SAT | CT |
---|---|---|---|---|
PU | 1.000 | 0.700 | 0.815 | 0.365 |
PEOU | 0.700 | 1.000 | 0.482 | 0.082 |
SAT | 0.815 | 0.482 | 1.000 | 0.354 |
CT | 0.365 | 0.082 | 0.354 | 1.000 |
Teachers | C1 | C2 | C3 | C4 | C5 | Sum
---|---|---|---|---|---|---
T1 | - | 12 | 31 | - | - | 43
T2 | - | - | 13 | - | - | 13
T3 | - | 2 | - | - | - | 2
T4 | - | 7 | 14 | - | - | 21
T5 | 77 | 37 | - | 33 | - | 147
T6 | 14 | 13 | - | - | - | 27
T7 | 54 | 48 | - | - | - | 102
T8 | - | 38 | 20 | - | - | 58
Teachers | C1 | C2 | C3 | C4 | C5 | Sum
---|---|---|---|---|---|---
T8 | - | 53 | - | - | 19 | 72
T9 | - | 23 | - | - | - | 23
T10 | 29 | 24 | - | - | - | 53
T11 | - | 71 | - | - | 71 | 142
T12 | - | 3 | - | - | - | 3
T13 | - | 35 | - | - | - | 35
T14 | - | - | - | - | - | 0
T15 | - | - | - | - | - | 0