2019 - Kong & Abelson - Book - ComputationalThinkingEducation
Computational Thinking Education
Siu-Cheung Kong · Harold Abelson (Editors)
Computational Thinking Education
Editors

Siu-Cheung Kong
Department of Mathematics and Information Technology
The Education University of Hong Kong
Hong Kong, Hong Kong

Harold Abelson
Computer Science and Artificial Intelligence Laboratory
Massachusetts Institute of Technology
Cambridge, MA, USA
© The Editor(s) (if applicable) and The Author(s) 2019. This book is an open access publication.
Open Access This book is licensed under the terms of the Creative Commons Attribution 4.0
International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to
the original author(s) and the source, provide a link to the Creative Commons license and indicate if
changes were made.
The images or other third party material in this book are included in the book’s Creative Commons
license, unless indicated otherwise in a credit line to the material. If material is not included in the book’s
Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the
permitted use, you will need to obtain permission directly from the copyright holder.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the
relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the
authors or the editors give a warranty, express or implied, with respect to the material contained herein or
for any errors or omissions that may have been made. The publisher remains neutral with regard to
jurisdictional claims in published maps and institutional affiliations.
This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd.
The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721,
Singapore
Preface
Over the past few decades, Computational Thinking (CT) has gained widespread
attention and been regarded as one of the essential skills required by those growing
up in the digital era. To nurture the next generation to become creative problem-solvers, there is a growing need to incorporate CT education into the school curriculum. This book is an edited volume with a specific focus on CT education.
The chapters were contributed by a group of world-renowned scholars and
researchers, who pioneer research on CT education. To enable readers with various
interests to advance their knowledge in this fresh yet important field, this book
covers sub-themes that will be of interest to academics and educators, school
teachers, policymakers and other readers. The sub-themes include CT and tool
development, student competency and assessment, CT and programming education
in K-12, CT in K-12 STEM education and non-formal learning, teacher and mentor
development in K-12 education, and CT in educational policy and implementation.
School teachers will be particularly interested in the chapters on CT and programming in K-12 and on K-12 STEM education; educators and academics will be interested in the chapters on CT and tool development, student competency and assessment, and teacher and mentor development; policymakers will be particularly interested in the chapters on policy and implementation; and general readers will find chapters of interest across all sub-themes.
This edited volume was funded by CoolThink@JC, a cutting-edge four-year initiative created and funded by The Hong Kong Jockey Club Charities Trust, and co-created by The Education University of Hong Kong, the Massachusetts Institute of Technology and City University of Hong Kong.
CoolThink@JC strives to inspire younger generations to apply digital creativity
in their daily lives and prepare them for tackling future challenges in many fields.
Viewing CT as an indispensable capability that empowers students to move beyond mere technology consumption into problem-solving and innovation, and believing that primary school education is key to laying the foundations of CT for future participation in a computing-rich society, CoolThink@JC has educated over 16,500 upper primary students in Hong Kong at 32 pilot schools through
CT and programming education. The Education University of Hong Kong and
in computer studies. Among the issues discussed in these chapters, the key focus of
CTE is the importance of learning to think computationally.
1.1 Introduction
Logo was created at the Cambridge research firm Bolt Beranek and Newman
and then moved to the MIT Artificial Intelligence Laboratory with the start of the
MIT Logo project in 1969. Papert had worked at the University of Geneva with the renowned Swiss psychologist Jean Piaget, and he brought to Logo Piaget’s constructivist theory of learning, which emphasizes that children construct meaning through the interaction between experience and ideas. Papert’s extension of constructivism (and wordplay on the term), which he called constructionism, holds that learning happens ‘especially felicitously in a context where the learner is engaged in constructing a public entity’ (Papert, 1991).
For Papert, constructing entities with the Logo computer language could provide
such a felicitous context. The perspective arose that computing could be a powerful
intellectual tool for all children, and that technology, as Papert wrote, could become
…something children themselves will learn to manipulate, to extend, to apply to projects,
thereby gaining a greater and more articulate mastery of the world, a sense of the power of
applied knowledge and a self-confidently realistic image of themselves as intellectual agents.
(Papert, 1971)
The full expression of this idea and its characterisation as ‘computational thinking’
first appeared in Papert’s book Mindstorms (Papert, 1980), although Papert also
referred to the same idea as ‘procedural thinking’. He wrote:
In this book I have clearly been arguing that procedural thinking is a powerful intellectual tool and even suggested analogizing oneself to a computer as a strategy for doing it. … The cultural assimilation of the computer presence will give rise to computer literacy. This phrase is often taken as meaning knowing how to program, or knowing about the varied uses made of computers. But true computer literacy is not just knowing how to make use of computers and computational ideas. It is knowing when it is appropriate to do so. (Papert, 1980, p. 155)
Even here, in the first articulation of the idea, there was a concern to clarify the
distinction between computational (or procedural) thinking and the knowledge of
how to program or to use computational tools. This concern for clarification has
persisted through the growth of the CT movement and is present in several papers in
the present volume.
The appearance of the personal computer in the late 1970s produced an outburst
of optimism about computing’s potential to play a major role in K-12 education.
Apple II BASIC appeared in 1978 and Apple Pascal in 1979. MIT worked with
Texas Instruments to create a Logo implementation for the TI 99/4 home computer,
piloting it in a 450-student Dallas elementary school in 1980 and later in several
New York City public schools. Versions of Logo for the Apple II appeared in 1982
(Abelson, 1982a, b).
While the impact of these implementations on the emerging home hobbyist computer community was significant, there was little take-up in K-12 education. School adoption was meagre, and the vision that computation could be a powerful learning framework for everyone, not only for students working towards careers involving computer programming, found few adherents. As several leaders of the school computing movement observed in 2003: ‘while the literature points to the potential for
impact, the reality is sobering: to a first order approximation, the impact of comput-
ing technology over the past 25 years on primary and secondary education has been
essentially zero’ (Norris, Sullivan, Poirot, & Soloway, 2003).
However, the adoption of computers in K-12 began to increase with the tremendous growth in the impact of information technology on society and with the emergence of computing as a continuous presence in daily life. An important catalyst for change was Jeannette Wing’s seminal essay ‘Computational Thinking’ (Wing, 2006). Wing reintroduced the term ‘computational thinking’, together with the notion, in Papert’s tradition, that CT is not just programming and that it should be a fundamental skill for everyone. The appearance of Wing’s essay, contemporaneous with the start of an enormous expansion of the computing industry, led to a surge of interest in CT and computing in K-12 education that surprised many long-time observers of computing education. Even Wing herself observed:
“Not in my lifetime.” That’s what I said when I was asked whether we would ever see
computer science taught in K-12. It was 2009, and I was addressing a gathering of attendees
to a workshop on computational thinking convened by the National Academies. I’m happy
to say that I was wrong. (Wing, 2016)
Yet even with this burgeoning interest, there remains a widespread lack of clarity
about what exactly CT is, and the struggle continues to articulate its fundamentals.
The report of the 2009 National Academies workshop that Wing mentions above
expressed the motivation behind it:
Various efforts have been made to introduce K-12 students to the most basic and essential
computational concepts, and college curricula have tried to provide students a basis for
lifelong learning of increasingly new and advanced computational concepts and technologies.
At both ends of this spectrum, however, most efforts have not focused on fundamental
concepts.
One common approach to incorporating computation into the K-12 curriculum is to emphasize computer literacy, which generally involves using tools to create newsletters, documents, Web pages, multimedia presentations, or budgets. A second common approach is to emphasize computer programming by teaching students to program in particular programming languages such as Java or C++. A third common approach focuses on programming applications such as games, robots, and simulations.
But in the view of many computer scientists, these three major approaches—although useful
and arguably important—should not be confused with learning to think computationally.
(National Research Council, 2010)
It is sobering that the concern to distinguish CT from programming and from the use of computer tools is the same as that expressed by Papert in Mindstorms at the genesis of the CT movement 30 years earlier.
Many of the papers in this volume grapple with this same concern, and readers
will find several discussions of what ‘computational thinking’ means in the papers
that follow. Much of the 2009 NRC workshop was devoted to a discussion of this
same definitional question. The workshop participants, all experts in the field, did
not come to any clear agreement, nor do the authors in this volume. Yet as in the
NRC report, they agree that the cluster of ideas around CT is important in a world
being increasingly transformed by information technology.
A second theme in the papers in this volume is the need to confront issues of
educational computing at scale. One result of the increasing attention to CT is that
jurisdictions are beginning to mandate computing education in K-12. Estonia, Australia, New Zealand, Taiwan, the United Kingdom and the US states of Virginia, Arkansas and Indiana have already taken this step, and other nations are formulating strategies to do so. This has led to serious issues of equipment availability and teacher education; several of the papers below present overviews of plans enacted or in progress. Key among these issues is assessment, as expanding mandates for computing education bring growing demands for accountability to citizens and policymakers.
The chapters of the book were selected based on our conceptual framework of computational thinking education with six sub-themes, as illustrated in Fig. 1.1. At the top of Fig. 1.1 is ‘Computational Thinking and Tool Development’, the basic building block of CTE, which involves issues of the definition of CT and the design of programming environments for facilitating CT. Students’ CT development can occur in K-12 and can be combined with STEM education and non-formal learning, as captured by the sub-themes of ‘Computational Thinking and Programming Education in K-12’ and ‘Computational Thinking in K-12 STEM Education and Non-formal Learning’, respectively. To evaluate the effectiveness of students’ CT development, we need to consider assessment issues, including the articulation of the competencies involved and the latest methods of assessing CT, as reflected in the sub-theme of ‘Student Competency and Assessment’. Teacher and mentor development is a key factor in supporting the implementation of CTE, as captured by the sub-theme of ‘Teacher and Mentor Development in K-12 Education’. From a broader perspective, policy matters can also play a supportive role in CTE, as illustrated in the sub-theme of ‘Computational Thinking in Educational Policy and Implementation’. The chapters in this book were chosen according to these six sub-themes.
arising from its abstraction. The available abstractions, from either data representation or processing, form a repertoire of possible choices for generating computational artefacts. The programming environment supports different models of computation for users to choose from (e.g. a visual programming interface), thus enabling programming flexibility. Furthermore, in terms of abstract operational mechanisms and data structures, computational media in inquiry learning contexts have only finite representational flexibility. In the other chapter, Patton, Tissenbaum and Harunani document the development of App Inventor, an online platform for facilitating CT created at the Massachusetts Institute of Technology (MIT). They identify the logic and goals underlying the design of the platform and document how it can be used for educational purposes, focusing on empowerment through programming. These two chapters indicate the importance of abstraction and of a well-designed programming environment in helping students learn to think computationally.
Among the key issues that require further exploration are how to measure students’ CT ability and how to examine the effects of teaching CT, especially in terms of major assessment approaches and research methodologies. The sub-theme of ‘Student Competency and Assessment’ includes five chapters on related issues. Eickelmann presents a large-scale international comparative study on CT, with problem conceptualisation and solution operationalisation as the two main strands of constructs of students’ CT achievement to be assessed. The identification of these constructs echoes Papert’s idea that to think computationally involves ‘not just knowing how to make use of computers and computational ideas… [but] knowing when
There are three chapters under the sub-theme of ‘Computational Thinking and Programming Education in K-12’. Kong, using an example of learning prime and composite numbers through the development of an app, illustrates how learners’ CT can be developed in their learning of primary mathematics, highlighting the pedagogies that can be used. Tan, Yu and Lin, who regard CT as the cultivation of logical thinking and problem-solving skills, present a study on how CT can be taught using mathematical gamification. They develop mobile games to help students develop problem-solving skills and gain mathematical insights by solving linear equations. The difficulty at each level is calibrated based on the users’ performance to ensure a reasonable growth curve and prevent users from becoming frustrated at early levels. They argue that gamification can be considered an effective educational approach to gaining arithmetic proficiency and computational skills. Lee and Chan document the design of an educational website guided by a fun-based CT framework integrated with a knowledge management approach, and they discuss how the different components of this website can facilitate students’ mathematics learning. These chapters illustrate how CT and programming education can be implemented in K-12 classrooms.
Three chapters are included in the sub-theme of ‘Teacher and Mentor Development in K-12 Education’. Sanford and Naidu (2016) argue that ‘computational thinking does not come naturally and requires training and guidance’, and thus that qualified teachers for future CT education are urgently needed. Fields, Lui and Kafai identify the teaching practices that can support students’ iterative design in CT, including teachers’ modelling of their own CT processes and mistakes, and of students’ mistakes, in front of the whole class. They highlight an iterative design process as a crucial aspect of CT and the importance of revision and working through mistakes. Hsu considers the readiness for CT education from the perspective of school leaders, who rate computer hardware readiness and leadership support readiness most favourably, and instructor readiness and instructional resources readiness lower. Wong, Kwok, Cheung, Li and Lee discuss the supportive roles played by teaching assistants in CT classes and analyse their self-development through a process of service-oriented stress-adaptation-growth.
References
Norris, C., Sullivan, T., Poirot, J., & Soloway, E. (2003). No access, no use, no impact: Snapshot
surveys of educational technology in K-12. Journal of Research on Technology in Education,
36(1), 15–27.
Papert, S. (1971). Teaching children thinking. MIT Artificial Intelligence Laboratory, Memo No. 247 (Logo Memo No. 2). http://hdl.handle.net/1721.1/5835.
Papert, S. (1980). Mindstorms: Children, computers, and powerful ideas. New York: Basic Books.
Papert, S. (1991). Situating constructionism. In S. Papert & I. Harel (Eds.), Constructionism
(pp. 1–11). Norwood, NJ: Ablex.
Sanford, J. F., & Naidu, J. T. (2016). Computational thinking concepts for grade school. Contem-
porary Issues in Education Research, 9(1), 23–32.
Solomon, C. (1986). Computer environments for children: A reflection on theories of learning and
education. Cambridge, MA: MIT Press.
Wing, J. M. (2006). Computational thinking. Communications of the ACM, 49(3), 33–35.
Wing, J. M. (2016). Computational thinking, 10 years later. Microsoft Research Blog. https://www.
microsoft.com/en-us/research/blog/computational-thinking-10-years-later/.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0
International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing,
adaptation, distribution and reproduction in any medium or format, as long as you give appropriate
credit to the original author(s) and the source, provide a link to the Creative Commons license and
indicate if changes were made.
The images or other third party material in this chapter are included in the chapter’s Creative
Commons license, unless indicated otherwise in a credit line to the material. If material is not
included in the chapter’s Creative Commons license and your intended use is not permitted by
statutory regulation or exceeds the permitted use, you will need to obtain permission directly from
the copyright holder.
Part I
Computational Thinking and Tool
Development
Chapter 2
Computational Thinking—More Than
a Variant of Scientific Inquiry!
Abstract The essence of Computational Thinking (CT) lies in the creation of “logical artifacts” that externalize and reify human ideas in a form that can be interpreted
and “run” on computers. Various approaches to scientific inquiry (learning) also make
use of models that are construed as logical artifacts, but here the main focus is on
the correspondence of such models with natural phenomena that exist prior to these
models. To pinpoint the different perspectives on CT, we have analyzed the terminology of articles from different backgrounds and periods. This survey is followed by a
discussion of aspects that are specifically relevant to a computer science perspective.
Abstraction in terms of data and process structures is a core feature in this context.
As compared to a “free choice” of computational abstractions based on expressive
and powerful formal languages, models used in scientific inquiry learning typically
have limited “representational flexibility” within the boundaries of a predetermined
computational approach. For the progress of CT and CT education, it is important
to underline the nature of logical artifacts as the primary concern. As an example
from our own work, we elaborate on “reactive rule-based programming” as an entry
point that enables learners to start with situational specifications of action that can
be further expanded into more standard block-based iterative programs and thus
allows for a transition between different computational approaches. As an outlook
beyond current practice, we finally envisage the potential of meta-level programming
and program analysis techniques as a computational counterpart of metacognitive
strategies.
2.1 Introduction
Although Papert (1996) had already used the term “Computational Thinking” (CT)
ten years earlier, the current discussion of CT can be traced back to Wing (2006). Wing
characterized CT by stating that it “involves solving problems, designing systems,
and understanding human behavior, by drawing on the concepts fundamental to
computer science”. She emphasized that CT is not about thinking like a computer
but about how humans solve problems in a way that can be operationalized with and
on computers.
Both the sciences and the humanities are influenced by CT, since “computational concepts provide a new language for describing hypotheses and theories” in these fields of interest (Bundy, 2007). In addition, Barr and Stephenson (2011) formulated CT-related challenges for K-12 education in computer science, math, science, language and social studies.
Wing (2008) refined her first statement, pointing out that the essence and key of CT is abstraction, in a way that is more complex than in mathematics. Someone who builds a computational model and wants to include all properties seen in the real environment cannot enjoy “the clean, elegant or easily definable algebraic properties of mathematical abstraction”.
To bring CT to K-12, the International Society for Technology in Education and the Computer Science Teachers Association (ISTE and CSTA 2011) presented a definition of CT for K-12 education:
Computational Thinking is a problem-solving process that includes (but is not
limited to) the following characteristics:
• Formulating problems in a way that enables us to use a computer and other tools
to help solve them
• Logically organizing and analyzing data
• Representing data through abstractions, such as models and simulations
• Automating solutions through algorithmic thinking (a series of ordered steps)
• Identifying, analyzing, and implementing possible solutions with the goal of
achieving the most efficient and effective combination of steps and resources
• Generalizing and transferring this problem-solving process to a wide variety of
problems.
The first three bullet points highlight the importance of computational modeling as
part of the problem-solving process. General approaches to computational modeling
are typically facilitated by programming languages. However, we will see that the
creation of computational or formal models related to given problems can also make
use of other abstract tools such as automata or grammars.
As part of the same proposal, it is claimed that CT problem-solving should be
connected to realistic applications and challenges in science and humanities. This
would link the learning experience and the ensuing creations to relevant real-world
problems. However, realism in this sense can also lead to a “complexity overkill”
that obstructs the pure discovery of important basic building blocks (the nature of
which will be elaborated later). On the other hand, if models are simplified too much
they lose fidelity and ultimately credibility. Many computer science concepts do not, in their basic version, require application to a realistic, complex environment. Interactive game environments, for example, do not necessarily require an accurate modeling of physics but can still promote the learning of CT concepts.
CT activities typically result in the creation of logical artifacts that can be run, tested against the original intentions, and refined accordingly. The creation of an initial executable artifact can be very challenging for learners. Lee et al. (2011) presented a model with a low threshold for novices, promoted as the “Use-Modify-Create” progression (see Fig. 2.1).
In the initial phase of this process, learners use or analyze a predefined and executable (programming) artifact that typically contains a new construct or a new type of abstraction. In the subsequent modify phase, the learners alter the artifact, which causes observable changes in the output. At the beginning, these modifications are often confined to varying predefined parameters. The ensuing create phase is essential for the appropriation of new CT skills, giving learners the opportunity to be creative and express themselves through programming. Iteratively, the students create new computational artifacts, execute these, and evaluate the ensuing outputs. This progression
There are few meta-level studies of the existing CT literature. For example, Kalelioglu, Gülbahar, and Kukul (2016) found that CT is most frequently related to “problem-solving”, whereas Lockwood and Mooney (2017) point out that the definition of CT, as well as the embedding of CT in curricula, is still emerging, which makes a final characterization difficult.
To characterize the discourse on CT across different phases and perspectives, we have conducted a text analysis on selected corpora of scientific articles, using text mining tools to extract important concepts connected to CT. First, we defined three categories and selected corresponding corpora of articles:
Articles about CT mainly related to …
(i) … computer science education before 2006 (corpus of n = 14 articles),
(ii) … computer science education in 2006 and later (n = 33),
(iii) … inquiry-based learning in science (n = 16).
The distinction between the first two categories and corpora is based on a separation with respect to the Wing (2006) article. The third category is also limited to articles following up on Wing (2006), but with a focus on K-12 education in science and humanities classes. We will show how the concepts related to CT differ.
To achieve this, we extracted terms from the three corpora based on the standard tf-idf measure. Since the corpora differ in number of documents and volume, the extraction was normalized to yield about 100 terms per category. The reduction of terms is based on the sparsity of the document-term matrices. In the first step, the full paper texts (abstracts included) were loaded. After removing unnecessary symbols and stop words, the texts were lemmatized using a dictionary of base terms. Relevant composite terms such as “computational thinking” (comp_think), “computer science”, “problem solving” (prob_solv) or “computational model” (comp_model) are considered as possible target terms.
The sparsity criterion selects terms that appear in at least p documents, where p is determined by a threshold t. The threshold was adjusted per category to achieve the desired selection of approximately 100 terms. Table 2.1 shows the attributes of each matrix before and after removal of the sparse terms.
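The term-extraction pipeline described above (tf-idf weighting combined with a document-frequency cutoff that removes sparse terms) can be illustrated with a minimal sketch. This is not the authors’ actual tooling: the function name, the toy corpus, and the cutoff value are invented for the example, and a real pipeline would also include the stop-word removal and lemmatization steps mentioned above.

```python
import math
from collections import Counter

def tfidf_terms(docs, min_doc_freq=2):
    """Extract tf-idf-weighted terms from tokenized documents,
    dropping 'sparse' terms that appear in fewer than
    min_doc_freq documents (a simple sparsity criterion)."""
    n = len(docs)
    df = Counter()                       # document frequency per term
    for doc in docs:
        df.update(set(doc))
    # keep only terms that clear the sparsity cutoff
    vocab = sorted(t for t, c in df.items() if c >= min_doc_freq)
    rows = []
    for doc in docs:
        tf = Counter(doc)
        total = len(doc)
        # tf-idf weight for each retained term present in this document
        rows.append({t: (tf[t] / total) * math.log(n / df[t])
                     for t in vocab if tf[t] > 0})
    return vocab, rows

# Toy corpus standing in for the article collections
corpus = [
    "computational thinking abstraction algorithm".split(),
    "inquiry learning science simulation model".split(),
    "computational model abstraction science".split(),
]
vocab, weights = tfidf_terms(corpus, min_doc_freq=2)
print(vocab)  # terms surviving the sparsity cutoff
```

Terms occurring in only one document (e.g. “thinking”, “algorithm”) are filtered out, mirroring how the sparsity threshold removed rare terms from the word clouds; lowering `min_doc_freq` (or, per category, adjusting the threshold) changes how many terms survive.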
The frequency-based word clouds shown in Fig. 2.2 indicate differences in the corresponding portions of discourse for the three categories. Terms such as “learn”, “model”, “system”, and “problem” appear in each category but with different weights. The earlier work in computer science education (first category) frequently refers to “mathematics” and “systems”. Category (ii) contains an explicit notion of “computer science”, whereas “science” in general and “simulation” are characteristic of category (iii). Category (ii) is also more concerned with specific computer science concepts, such as “algorithm”, “abstraction”, and “computing”. A special focus in this category appears to be on “games”.
Figure 2.3 presents a comparison of selected frequent terms across all three categories. Here, each bar represents the tf-idf measure normalized to the selected term base.
Fig. 2.2 Word clouds of the terms with the highest tf-idf value for categories (i), (ii), and (iii)

Figure 2.3 corroborates the impressions gained from the word cloud diagrams. The profiles of categories (ii) and (iii) appear quite similar to each other compared with category (i). “Algorithm” and “abstraction” are more important concepts in category (ii), whereas “education” is more prominent in category (iii). The latter
observation corresponds to the disciplinary background of the contributions, which is more often education and less often computer science for category (iii) as compared to (ii). However, CT as a theme is even more frequently addressed in category (iii). Several papers in category (ii) use games as examples to illustrate general principles of CT, which explains the prominent role of this concept. For category (i), in addition to the focus on mathematics (cf. “the (Logo) turtle meets Euclid”—Papert 1996), this earlier discourse is much more centered on “knowledge” and “problem solving”2 as epistemological categories. In the later discourse, for both (ii) and (iii), the epistemological perspective is supplanted by instructional and cognitive frameworks.
In his definition of CT, the computer science pioneer Aho (2012) characterizes Computational Thinking as “the thought processes involved in formulating problems so their solution can be represented as computational steps and algorithms”. Before
2 Problem-solving itself is only present in some documents, so the sparsity threshold led to the removal of this term from the word cloud, even though its tf-idf value was high.
The essence of computational thinking is abstraction… First, our abstractions do not nec-
essarily enjoy the clean, elegant or easily definable algebraic properties of mathematical
abstractions, such as real numbers or sets, of the physical world… In working with layers of
abstraction, we necessarily keep in mind the relationship between each pair of layers, be it
defined via an abstraction function, a simulation relation, a transformation or a more general
kind of mapping. We use these mappings in showing the observable equivalence between
an abstract state machine and one of its possible refinements, in proving the correctness of
an implementation with respect to a specification and in compiling a program written in a
high-level language to more efficient machine code. And so the nuts and bolts in compu-
tational thinking are defining abstractions, working with multiple layers of abstraction and
understanding the relationships among the different layers. Abstractions are the ‘mental’
tools of computing.
those discoveries in the search for new understanding” (Ash, 2003). Inquiry learn-
ing leads students through various phases (Pedaste et al., 2015), typically starting
with an orientation phase followed by a conceptualization with idea generation and
the development of hypotheses. During the investigation phase, students engage in
experimentation, taking measurements to test their hypotheses. Finally, the results
are evaluated and discussed, which may lead to reformulation of hypotheses. This
is usually understood as a spiral or cyclic process that allows for repeated revisions
and refinements (Minner, Levy, & Century, 2010).
CT and IL practices overlap in the underlying cyclic phase models as learning
process structures, as exemplified by the use-modify-create progression and vari-
ous IL-related process models. However, the role of computational artifacts differs
between CT and IL: In IL, the artifact or model serves as a proxy for a scientific
target scenario (e.g., an ecosystem) and the main point is what the model can tell
us about the original. In CT, the computational artifact is of primary interest per se,
including the way it is built and its inherent principles. If a learner evaluates the
computational artifact (or model) at hand in an IL context, this will typically involve
a variation of parameters and possibly redefinition of behaviors. In a CT context, this
encompasses the reflection and redesign of the underlying computational model and
representation as well.
Sengupta, Kinnebrew, Basu, Biswas, and Clark (2013) elaborate in detail
on the relationships between IL and CT concepts, using the CTSiM environment
(“Computational Thinking in Simulation and Modeling”) as a concrete point of
reference. The environment as such is based on visual agent-based programming.
Their premise is that, from an educator’s perspective, students learn best when
they use design-based learning environments, an approach of “science in practice”
that involves “engag[ing] students in the process of developing the computational
representational practices” (Sengupta, Kinnebrew, Basu, Biswas, & Clark, 2013).
In this article, certain computational concepts and principles are related to aspects
of the IL environment. Abstraction is discussed in relation to the classical definition
given by the philosopher Locke
as the process in which “ideas taken from particular beings become general repre-
sentatives of all of the same kind” (Locke, 1700). As we have seen above, this is
not sufficient for the understanding of abstraction in a computer science sense. The
specified relationships between computer science concepts and structural and opera-
tional aspects found in the CTSiM environment are rich and relevant. Yet, we need to
distinguish between representational choices made in the design and implementation
of the environment and choices that are “handed over” to the learners operating in
the environment using the visual agent-based programming interface. These choices
are indeed limited.
Perkins and Simmons (1988) showed that novice misconceptions in mathematics,
science, and programming exhibit similar patterns in that conceptual difficulties in
each of these domains have both domain-specific roots (e.g., challenging concepts)
and domain general roots (e.g., difficulties pertaining to conducting inquiry, problem-
solving, and epistemological knowledge).
Regarding the structuring of learning processes and the enrichment of such processes
with computational media, inquiry learning in science and CT education are quite
closely related. However, a discourse that is primarily driven by pedagogical inspi-
rations and interest tends to neglect the importance of genuine computer science
concepts and their role in shaping CT. The essence of CT lies in the creation of
“logical artifacts” that externalize and reify human ideas in a form that can be
interpreted and “run” on computers. The understanding of the principles underlying
and constituting such logical artifacts, including “models of computation” in the
sense of Aho as well as specific “abstractions as constructs”, is of central importance
for CT. In contrast, in general scientific inquiry learning, computational models are
instrumental for understanding the domain of interest (e.g., the functioning of
ecosystems or certain chemical reactions). Usually, the computational media used
in scientific inquiry learning contexts are largely predetermined in terms of data
structures and processing mechanisms. In this sense, they are of limited “represen-
tational flexibility” regarding the free choice of data representations and algorithmic
strategies.
We have already seen that, even in terms of practical activities, CT cannot be reduced
to programming alone. However, there is no doubt that programming resonates with
and requires CT. Programs are the most prominent examples of computational
artifacts. Starting the construction and creation of a program is a difficult challenge,
especially for beginners. Guzdial (2004) discusses different approaches and tools
for novices, such as the Logo family, the rule-based family, and traditional program-
ming approaches. These tools provide different underlying models of computation as
a basis to create computational artifacts. Although there is a wide variety of options,
block-based programming tools have become a standard for introductory programming
(Weintrop & Wilensky, 2015). Nevertheless, students should be able to choose how
to represent their ideas as computational artifacts, depending on which tool best
supports the implementation of a given idea.
We propose a “reactive rule-based programming” tool, in which the user/learner
defines program elements as reactions of a programmable agent to situations and
events in the learning environment. There is a close interplay between “running”
and “teaching” the agent. Whenever the agent finds itself in a situation for which
there is no applicable rule, the learner is prompted to enter such a rule. The
condition part of the rule is generated by the system in correspondence to the given
situational parameters (context). The user then specifies the actions to be applied.
Once a suitable rule is entered, the system will execute it. As more and more rules are
provided, the system will be able to execute chains of actions without further user
input.
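The run/teach interplay described above can be sketched as a tiny rule engine. The following Python fragment is only an illustration of the principle; the class, method, and action names are our own and do not reflect the actual implementation:

```python
class ReactiveAgent:
    """Minimal sketch of the 'run or teach' loop: the agent acts on the
    first rule whose condition matches; otherwise the learner is asked
    to supply a new rule."""

    def __init__(self):
        # Each rule is a pair (condition, action); conditions are
        # predicates over the current situation, actions are move names.
        self.rules = []

    def add_rule(self, condition, action):
        self.rules.append((condition, action))

    def step(self, situation):
        # "Running": apply the first applicable rule (list order = priority).
        for condition, action in self.rules:
            if condition(situation):
                return action
        # "Teaching": no rule applies, so a new rule must be entered.
        return None  # in the described tool, this opens the rule editor

agent = ReactiveAgent()
# A rule taught by the learner: if the way ahead is free, move forward.
agent.add_rule(lambda s: s["front_free"], "move_forward")
print(agent.step({"front_free": True}))   # move_forward
print(agent.step({"front_free": False}))  # None -> prompt for a new rule
```

As more rules accumulate, `step` can be called repeatedly to execute chains of actions without learner intervention, mirroring the loop in Fig. 2.4.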
As can be seen in Fig. 2.4, the current context determines the conditions that must
be considered for the rule. If none of the already defined rules applies, the system
requests the student to define actions for the current situation. Then, the defined rule
can be executed, and the cycle either restarts in a new situation or terminates in a
predefined end state. This loop puts the learner in a first-person perspective,
identifying herself/himself with the agent in the environment.
Fig. 2.4 Flow diagram for the reactive rule-based programming paradigm
The reactive rule-based programming approach is the basis for ctMazeStudio,
a web-based programming environment for developing algorithms that steer an
agent trying to find its way out of a maze. The goal for the students is to implement a
universal solution that works on any maze. This learning environment contains three
components: the rule editor, the behavior stage and a rule library (Fig. 2.5).
As can be seen in Fig. 2.5, the rule editor (d) provides a visual programming
interface, which is available when a new situation is encountered. The editor
comprises a condition component (IF part) and an action component (THEN part).
For the given conditions, the students can select the desired actions, which apply to
the current situation and to all situations with the same conditions, thus defining a
local “reactive” behavior. The users can also delete conditions, which implies that
the corresponding rule will be applied more generally (generalization).
The rule library (c) manages the collection of all defined rules. In this user inter-
face, the students can edit or delete already defined rules, directly enter new rules
and change the order (and thus priority) of the rules to be checked. In the behavior
stage (a), the behavior of the agent is visualized. Depending on the entries in the
rule library, the corresponding actions are executed and the specific entry in the rule
library is highlighted.
Fig. 2.5 The ctMazeStudio user interface: behavior stage (a), rule library (c), and
rule editor (d); the agent’s callout reads “Only the way to the left is opened.”
To support CT, it is possible to apply different strategies to improve the program-
ming code. When learners investigate and test their rulesets in consecutive situations,
they may revise formerly defined rule sets through generalization (of conditions) or
reordering. The challenge is to create a maximally powerful ruleset with a minimum
number of rules. This requires a level of understanding that allows for predicting
global behavior based on locally specified rules. In the maze example, as one of the
first steps, a small set of rules will be created to implement a wall-following strategy.
This strategy can later be refined, e.g., to avoid circling around islands.
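Such a wall-following strategy can be expressed as a small, ordered ruleset. This Python sketch is our own illustration (the situation keys and action names are invented, not the ctMazeStudio encoding); three local rules suffice for a left-hand wall follower, with earlier rules taking priority:

```python
# A left-hand wall-following strategy as an ordered ruleset. A situation
# records which neighboring cells are open; the rule order encodes priority,
# and the final catch-all rule is a (maximally) generalized condition.
WALL_FOLLOW_RULES = [
    (lambda s: s["left_open"],  "turn_left_and_move"),
    (lambda s: s["front_open"], "move_forward"),
    (lambda s: True,            "turn_right"),  # generalized catch-all
]

def act(situation, rules=WALL_FOLLOW_RULES):
    """Return the action of the first applicable rule."""
    for condition, action in rules:
        if condition(situation):
            return action

print(act({"left_open": False, "front_open": True}))   # move_forward
print(act({"left_open": True,  "front_open": True}))   # turn_left_and_move
print(act({"left_open": False, "front_open": False}))  # turn_right
```

Predicting that these three local rules make the agent hug the left wall globally is exactly the kind of understanding the challenge demands.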
The ctMazeStudio environment can also be programmed through a block-
structured interface to represent conditions and loops governed by a global control
strategy. In this sense, it provides a choice between different models of computa-
tion and thus supports “representational flexibility”. Based on these options, we are
currently studying transitions from rule-based reactive to block-structured iterative
programs.
and efficient. An example challenge is finding the sum of all even Fibonacci numbers
with values below 4 million (Problem 2). These Euler problems define challenges
of both mathematical and computational nature. Manske and Hoppe (2014) introduced
several metrics to capture different features of the programming solutions. Static and
dynamic code metrics (including lines of code, cyclomatic complexity, frequency
of certain abstractions, and test results) cover structural quality and compliance,
while similarity-based metrics address originality. In a supervised machine learning
approach, classifiers have been generated based on these features together with cre-
ativity scores from expert judgments. The main problem encountered in this study
was the divergence of human classification related to creativity in programming.
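For illustration, the mentioned Euler challenge (Problem 2) admits a compact iterative solution; a straightforward Python version:

```python
def sum_even_fibonacci(limit=4_000_000):
    """Sum all even Fibonacci numbers with values below `limit`."""
    total, a, b = 0, 1, 2
    while a < limit:
        if a % 2 == 0:
            total += a
        a, b = b, a + b  # advance to the next Fibonacci number
    return total

print(sum_even_fibonacci())  # 4613732
```

Variants of such a solution differ in structure (recursion vs. iteration, use of abstractions such as generators), which is precisely what the static code metrics mentioned above are meant to capture.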
Logic programming in combination with so-called meta-interpreters allows for
dynamically reflecting deviations of actual program behavior from intended results or
specifications (Lloyd, 1987). The method of deductive diagnosis (Hoppe, 1994) uses
this technique in the context of a mathematics learning environment to identify and
pinpoint specific mistakes without having to provide an error or bug library. From a
CT point of view, these meta-level processing techniques are relevant extensions of the
formal repertoire. In the spirit of “open student modeling” (Bull & Kay, 2010), not
only the results but also the basic functioning of such meta-level analyses could be
made available to the learners to improve reflection on their computational artifacts
and to extend their understanding of computational principles.
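The diagnostic idea can be illustrated, in a much simplified propositional form, outside of logic programming. The following Python sketch is entirely our own construction (Hoppe, 1994 and Lloyd, 1987 work with full logic programs): a tiny meta-interpreter proves a goal while recording each rule application, and diagnosis checks every applied rule against an oracle representing the specification:

```python
# Knowledge base: propositional Horn rules (head, body) and basic facts.
# The rule names are invented for illustration only.
RULES = [("carry_ok", ["digits_aligned", "base_known"]),
         ("sum_ok", ["carry_ok", "column_sums_ok"])]
FACTS = {"digits_aligned", "base_known", "column_sums_ok"}

def solve(goal, rules, facts, trace):
    """Tiny meta-interpreter: prove `goal`, appending every applied
    rule (head, body) to `trace`."""
    if goal in facts:
        return True
    for head, body in rules:
        if head == goal and all(solve(sub, rules, facts, trace) for sub in body):
            trace.append((head, body))
            return True
    return False

def diagnose(goal, rules, facts, oracle):
    """Simplified deductive diagnosis: re-derive the goal and report the
    first applied rule whose conclusion the oracle (specification) rejects,
    without any pre-built bug library."""
    trace = []
    solve(goal, rules, facts, trace)
    for head, body in trace:
        if not oracle(head):
            return (head, body)
    return None

# An oracle encoding the intended behavior rejects "carry_ok":
print(diagnose("sum_ok", RULES, FACTS, lambda head: head != "carry_ok"))
# -> ('carry_ok', ['digits_aligned', 'base_known'])
```

The returned pair pinpoints the faulty inference step; exposing such traces to learners is one way to make the meta-level analysis itself an object of reflection.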
Of course, metacognition has also been addressed and discussed in several
approaches to inquiry learning (White, Shimoda, & Frederiksen, 1999; Manlove,
Lazonder, & Jong, 2006; Wichmann & Leutner, 2009). In these approaches, metacog-
nition is conceived as an element of human learning strategies, possibly supported by
technology but not simulated on a computational level. The examples above show that
“second order” computational reflection techniques can be applied to “first order”
computational artifacts in such a way as to reveal diagnostic information related to the
underlying human problem-solving and construction processes. In this sense, we can
identify computational equivalents of metacognitive strategies. Making these
second-order computations accessible to human learners as tools of self-reflection is
a big challenge, but certainly of interest for CT education.
2.4 Conclusion
The current scientific discourse centered around the notion of “Computational Think-
ing” is multi-disciplinary with contributions from computer science, cognitive sci-
ence, and education. In addition, the curricular application contexts of CT are mani-
fold. Still, it is important to conceive the basic computational concepts and principles
in such a way as to keep up with the level of understanding developed in modern
computer science. This is especially the case for the notion of “abstraction”.
Our comparison of the perspectives on CT from computer science education and
Inquiry Learning in science has brought forth the following main points:
1. The essence of Computational Thinking (CT) lies in the creation of “logical arti-
facts” that externalize and reify human ideas in a form that can be interpreted and
“run” on computers. Accordingly, CT sets a focus on computational abstractions
and representations—i.e., the computational artifact and how it is constituted is
of interest as such and not only as a model of some scientific phenomenon.
2. Beyond the common sense understanding of “abstraction”, computational
abstractions (plural!) are constructive mind tools. The available abstractions (for
data representation and processing) form a repertoire of possible choices for the
creation of computational artifacts.
3. Inquiry learning uses computational artifacts and/or systems as models of natural
phenomena, often in the form of (programmable) simulations. Here, the choice
of the computational representation is usually predetermined and not in the focus
of the learners’ own creative contributions.
4. Inquiry learning as well as CT-related learning activities both exhibit and rely on
cyclic patterns of model progression (cycle of inquiry steps - creation/revision
cycle).
There is a huge potential for a further productive co-development of CT-centric
educational environments and scenarios from multiple perspectives. “Representa-
tional flexibility” as the handing over of choices related to data structuring and
other abstractions to the learners is desirable from a computer science point of view.
This does not rule out the meaningful and productive use of more fixed compu-
tational models in other learning contexts. Yet, this future co-development of CT
should benefit from taking up and exploring new types of abstractions and models
of computation (including, e.g., different abstract engines or meta-level reasoning
techniques) to enrich the learning space of CT. This may also reconnect the discourse
to epistemological principles.
Acknowledgements This article is dedicated to the memory of Sören Werneburg who made impor-
tant and essential contributions to this work, including the full responsibility for the design and
implementation of the ctMazeStudio and ctGameStudio environments.
References
Aho, A. V. (2012). Computation and computational thinking. The Computer Journal, 55(7),
832–835.
Ash, D. (2003). Inquiry thoughts, views and strategies for the K–5 classroom. In Foundations: A
monograph for professionals in science, mathematics and technology education.
Barr, V., & Stephenson, C. (2011). Bringing computational thinking to K-12: What Is Involved and
what is the role of the computer science education community? ACM Inroads, 2(1), 48–54.
Basu, S., Kinnebrew, J., Dickes, A., Farris, A., Sengupta, P., Winger, J., … Biswas, G. (2012). A
science learning environment using a computational thinking approach. In Proceedings of the
20th International Conference on Computers in Education.
Bull, S., & Kay, J. (2010). Open learner models. In Advances in intelligent tutoring systems
(pp. 301–322). Springer.
Bundy, A. (2007). Computational thinking is pervasive. Journal of Scientific and Practical Com-
puting, 1(2), 67–69.
Curzon, P., & McOwan, P. W. (2016). The power of computational thinking: Games, magic and
puzzles to help you become a computational thinker. World Scientific.
diSessa, A. (2000). Changing minds: Computers, learning, and literacy. MIT Press.
Giere, R. (1988). Laws, theories, and generalizations.
Guzdial, M. (2004). Programming environments for novices. Computer Science Education
Research, 2004, 127–154.
Hartmann, W., Nievergelt, J., & Reichert, R. (2001). Kara, finite state machines, and the case for
programming as part of general education. In Proceedings IEEE Symposia on Human-Centric
Computing Languages and Environments.
Hoppe, H. U. (1994). Deductive error diagnosis and inductive error generalization for intelligent
tutoring systems. Journal of Interactive Learning Research, 5(1), 27.
Hu, C. (2011). Computational thinking: What it might mean and what we might do about it. In
Proceedings of the 16th Annual Joint Conference on Innovation and Technology in Computer
Science Education.
ISTE, & CSTA. (2011). Computational thinking in K–12 education leadership toolkit.
Kafura, D., & Tatar, D. (2011). Initial experience with a computational thinking course for computer
science students. In Proceedings of the 42nd ACM Technical Symposium on Computer Science
Education.
Kalelioglu, F., Gülbahar, Y., & Kukul, V. (2016). A framework for computational thinking based
on a systematic research review. Baltic Journal of Modern Computing, 4(3), 583.
Lee, I., Martin, F., Denner, J., Coulter, B., Allan, W., Erickson, J., et al. (2011). Computational
thinking for youth in practice. ACM Inroads, 2(1), 32–37.
Lehrer, R., & Schauble, L. (2006). Scientific thinking and science literacy. In Handbook of Child
Psychology.
Lloyd, J. W. (1987). Declarative error diagnosis. New Generation Computing, 5(2), 133–154.
Locke, J. (1700). An essay concerning human understanding.
Lockwood, J., & Mooney, A. (2017). Computational thinking in education: Where does it fit? A
systematic literary review.
Manlove, S., Lazonder, A. W., & Jong, T. D. (2006). Regulative support for collaborative scientific
inquiry learning. Journal of Computer Assisted Learning, 22(2), 87–98.
Manske, S., & Hoppe, H. U. (2014). Automated indicators to assess the creativity of solutions
to programming exercises. In 2014 IEEE 14th International Conference on Advanced Learning
Technologies (ICALT).
Matsuzawa, Y., Tanaka, Y., Kitani, T., & Sakai, S. (2017). A demonstration of evidence-based
action research using information dashboard in introductory programming education. In IFIP
World Conference on Computers in Education.
Minner, D. D., Levy, A. J., & Century, J. (2010). Inquiry-based science instruction—what is it and
does it matter? Results from a research synthesis years 1984 to 2002. Journal of Research in
Science Teaching, 47(4), 474–496.
Nersessian, N. J. (1992). How do scientists think? Capturing the dynamics of conceptual change in
science. Cognitive Models of Science, 15, 3–44.
NRC. (2008). Public participation in environmental assessment and decision making. National
Academies Press.
Papert, S. (1996). An Exploration in the Space of Mathematics Educations. International Journal
of Computers for Mathematical Learning, 1(1), 95–123.
Pedaste, M., Mäeots, M., Siiman, L. A., De Jong, T., Van Riesen, S. A., Kamp, E. T., et al. (2015).
Phases of inquiry-based learning: Definitions and the inquiry cycle. Educational Research Review,
14, 47–61.
Perkins, D. N., & Simmons, R. (1988). Patterns of misunderstanding: An integrative model for
science, math, and programming. Review of Educational Research, 58(3), 303–326.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0
International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing,
adaptation, distribution and reproduction in any medium or format, as long as you give appropriate
credit to the original author(s) and the source, provide a link to the Creative Commons license and
indicate if changes were made.
The images or other third party material in this chapter are included in the chapter’s Creative
Commons license, unless indicated otherwise in a credit line to the material. If material is not
included in the chapter’s Creative Commons license and your intended use is not permitted by
statutory regulation or exceeds the permitted use, you will need to obtain permission directly from
the copyright holder.
Chapter 3
MIT App Inventor: Objectives, Design,
and Development
3.1 Introduction
The smartphone is an information nexus in today’s digital age, with access to a nearly
infinite supply of content on the web, coupled with rich sensors and personal data.
However, people have difficulty harnessing the full power of these ubiquitous devices
for themselves and their communities. Most smartphone users consume technology
without being able to produce it, even though local problems can often be solved with
mobile devices. How then might they learn to leverage smartphone capabilities to
solve real-world, everyday problems? MIT App Inventor is designed to democratize
this technology and is used as a tool for learning computational thinking in a variety
The MIT App Inventor user interface includes two main editors: the design editor
and the blocks editor. The design editor, or designer (see Fig. 3.1), is a drag and drop
interface to lay out the elements of the application’s user interface (UI). The blocks
editor (see Fig. 3.2) is an environment in which app inventors can visually lay out
the logic of their apps using color-coded blocks that snap together like puzzle pieces
to describe the program. To aid in development and testing, App Inventor provides
a mobile app called the App Inventor Companion (or just “the Companion”) that
developers can use to test and adjust the behavior of their apps in real time. In this
way, anyone can quickly build a mobile app and immediately begin to iterate and
test.
Fig. 3.1 App Inventor’s design editor. App inventors drag components out from the palette (far
left) to the viewer (center left) to add them to the app. Inventors can change the properties of
the components (far right). An overview of the screen’s components and project media are also
displayed (center right)
In the design of MIT App Inventor, introducing mobile app development in
educational contexts was a central goal. Prior to its release, most development
environments for mobile applications were clunky, accessible only to those with
expertise in systems-level or embedded programming, or both. Even with Google’s
Android operating system
and the Java programming language, designing the user interface was a complex
task. Further, use of the platform required familiarity with Java syntax and seman-
tics, and the ability to debug Java compilation errors (e.g., misspelled variables or
misplaced semicolons) for success. These challenges presented barriers to entry for
individuals not versed in computer science, App Inventor’s target demographic. We
briefly highlight and discuss design goals for the App Inventor project, specifically,
the use of components to abstract some of the complexity of platform behavior, and
the use of blocks to eliminate complexity of the underlying programming language.
These goals can be further explained as aligning the visual language to the mental
models of young developers and enabling exploration through fast, iterative design.
34 E. W. Patton et al.
Fig. 3.2 App Inventor’s blocks editor. Blocks code is typically read left to right, top to bottom. In
this example, one would read “when Cat click, do call Meow play,” that is, play the meow sound
when the cat is clicked
Components are core abstractions in MIT App Inventor. Components reduce the
complexity of managing interactions with platform-specific application program-
ming interfaces (APIs) and details concerning state management of device hardware.
This allows the user to think about the problem at hand rather than the minutiae
typically required of application developers. For example, someone planning to use MIT
App Inventor to build an app to use the global positioning system (GPS) to track
movement need not be concerned with application lifecycle management, GPS soft-
ware and hardware locks, or network connectivity (in case location detection falls
back to network-based location). Instead, the app developer adds a location sensor
component that abstracts away this complexity and provides an API for enabling and
processing location updates. More concretely, this implementation reduces 629 lines
of Java code to 23 blocks, of which only two are required to accomplish location
tracking. This reduction in complexity enables app inventors to focus on the problem
at hand and quickly accomplish a goal.
Components are made up of three major elements: properties, methods, and events.
Properties control the state of the component and are readable and/or writable by the
app developer. For example, the enabled property of the location sensor includes the
functionality required to configure the GPS receiver and to manage its state while the
app is in use. Methods operate on multiple inputs and possibly return a result. Events
respond to changes in the device or app state based on external factors. For example,
when the app user changes their location, the location changed event allows the app
logic to respond to the change.
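The property/method/event anatomy of a component can be sketched generically. The following Python class is purely illustrative: it is not App Inventor's actual LocationSensor implementation, and all member names are our own:

```python
class LocationSensor:
    """Generic sketch of a component: properties hold readable/writable
    state, and events are callbacks fired when device state changes.
    (Illustrative only; not MIT App Inventor's real LocationSensor.)"""

    def __init__(self):
        self.enabled = False             # property: readable and writable
        self._latitude = 0.0
        self._longitude = 0.0
        self.on_location_changed = None  # event handler slot

    def update(self, lat, lon):
        # Called by the (simulated) platform when a new position fix arrives.
        self._latitude, self._longitude = lat, lon
        if self.enabled and self.on_location_changed:
            # Event: hand the change over to the app logic.
            self.on_location_changed(lat, lon)

sensor = LocationSensor()
sensor.enabled = True
sensor.on_location_changed = lambda lat, lon: print("moved to", lat, lon)
sensor.update(42.36, -71.09)  # prints: moved to 42.36 -71.09
```

The app developer only sets the property and supplies the event handler; everything below `update` (hardware locks, lifecycle, fallbacks) stays hidden inside the component.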
In MIT App Inventor, users code application behavior using a block-based program-
ming language. There are two types of blocks in App Inventor: built-in blocks and
component blocks. The built-in blocks library provides the basic atoms and opera-
tions generally available in other programming languages, such as Booleans, strings,
numbers, lists, mathematical operators, comparison operators, and control flow oper-
ators. Developers use component blocks (properties, methods, and events) to respond
to system and user events, interact with device hardware, and adjust the visual and
behavioral aspects of components.
All program logic is built on three top-level block types: global variable defini-
tions, procedure definitions, and component event handlers. Global variables provide
named slots for storing program states. Procedures define common behaviors that
can be called from multiple places in the code. When an event occurs on the device,
it triggers the corresponding application behavior prescribed in the event block. The
event handler block may reference global variables or procedures. By limiting the
top-level block types, there are fewer entities to reason about.
The development team for App Inventor considered a number of restrictions when
designing the environment. We examine a few design decisions, the rationale behind
them, and their effects on computational thinking within App Inventor.
The design editor for App Inventor allows developers to see how the app will appear
on the device screen and adjust the form factor of the visualized device (e.g., phone or
tablet). Adjustments to properties of the visual components, for example, background
color and size, are reflected in real time. Apps can also be run in a live development
mode using the Companion, which will be discussed in more detail below.
The App Inventor team recently added capability for creating map-based applica-
tions. The functionality allows app inventors to drag, drop, and edit markers, lines,
polygons, rectangles, and circles in their maps, as well as integrate web-based data
from geographic information systems (GIS) to build content-rich apps. This way, the
user can easily move content around without having to express most of this logic
in code.
Unlike many programming languages, App Inventor limits runtime creation of new
entities. This provides multiple benefits. First, by explicitly positioning all compo-
nents in the app, the user can visualize it clearly rather than having to reason about
things that will not exist until a future time. Second, it reduces the chances of users
introducing cyclic memory dependencies in the user interface that would eventually
cause the app to run out of memory. This encourages app inventors to think about
how to appropriately structure their applications and reuse components to avoid
overloading the system or their end users.
Indexing in App Inventor starts at 1, in line with children’s counting skills
(Gelman & Gallistel, 1978). This is unlike most programming languages, which
are more aligned with machine architecture and therefore start at 0.
A key feature of MIT App Inventor is its live development environment for mobile
applications. App Inventor provides this by means of a companion app installed
on the user’s mobile device. The App Inventor web interface sends code to the
companion app, which interprets the code and displays the app in real time to the
developer (Fig. 3.3). This way, the user can change the app’s interface and behavior
in real time. For example, a student making a game involving the ball component
may want to bounce the ball off the edge of the play area. However, an initial imple-
mentation might have the ball collide with the wall and then stop. After discovering
the Ball.EdgeReached event, the student can add the event and update the direc-
tion of the ball using the Ball.Bounce method. By testing the app and adjusting its
programming in response to undesired behavior, students can explore more freely.
Fig. 3.3 The MIT Companion app interface for Android (left). After establishing a connection
with the user’s browser session, the active project is displayed in the companion app (right). See
Fig. 3.1 for the designer view of the same project
The traditional build cycle for an Android app involves writing code in a text
editor or integrated development environment, and rebuilding the application for
testing may often take minutes, whereas making a change in the live development
environment typically takes effect in 1–2 s. Seeing changes reflected in the app
quickly means that students can explore and even make mistakes while exploring,
because the time cost of those mistakes is relatively small.
The App Inventor project began at Google in 2007 when Prof. Hal Abelson of MIT
went on sabbatical at Google Labs. The project leads were inspired by increased
interest in educational blocks programming languages, such as Scratch, and the
release of the new Android operating system. This educational project was migrated
to MIT when Google closed Google Labs in 2011. In this section, we briefly cover
inception and early development of the App Inventor platform, first at Google, and
then at MIT.
Hal Abelson conceived the idea of App Inventor while on sabbatical at Google
Labs in 2007. Abelson had previously taught a course at MIT on mobile program-
ming, but at the time mobile app development required significant investment on the
part of developers and development environments. Also in 2007, Google publicly
announced the Android operating system. Abelson and Mark Friedman of Google
began developing an intermediate language between the blocks language and Java
APIs for Android, called Yet Another Intermediate Language (YAIL). The project
was intended to help younger learners program for Android. Abelson and Friedman
generated YAIL from a block-based language based on OpenBlocks (Roque, 2007),
whose design drew on StarLogo TNG (Begel & Klopfer, 2007).
The user interface and related components embodied Papert’s idea of “powerful ideas
in mind-size bites” (Papert, 1993). The Google version of the project terminated at
the end of 2011, but the educational technology was transferred to MIT so that
development and educational aspects could continue (Kincaid, 2011). Prof. Abelson
joined Prof. Eric Klopfer of the Scheller Teacher Education Program lab and Prof.
Mitch Resnick of the MIT Media Lab, forming a group called the MIT Center for
Mobile Learning to carry on the App Inventor vision.
In late 2011, Google transferred stewardship of the App Inventor project to MIT.
Much of the development focused on increasing the platform’s capabilities to support
the project’s educational goals. During this period, the team developed additional
curricula and made them freely available to teachers for computer science and
computational thinking education. The MIT team also hosted a number of one-day
workshops, primarily in the northeastern United States, training teachers in the
pedagogy of App Inventor. To encourage self-guided learning, our materials now
emphasize guided and open exploration rather than step-by-step instructions. By
making mistakes, students have the opportunity to practice more of the computational
thinking principles, such as debugging, described by Brennan and Resnick (2012).
Technical development at MIT has focused on new components, including robotics
(LEGO™ EV3), cloud-oriented data storage (CloudDB), and geographic visualization
(Map). The App Inventor team also developed Internet of Things (IoT) extensions so
that learners can interact with physical hardware external to their mobile devices and
leverage the growing collection of small computer boards, such as Arduino, BBC
micro:bit, and Raspberry Pi. To this day, the team continues its development work,
creating complementary educational materials in parallel.
The primary aim of MIT App Inventor is to provide anyone with an interest in building
apps to solve problems with the tools necessary to do so. Instructional materials
developed by the team are primarily oriented toward teachers and students at the
middle- and high-school levels, but app inventors come in all ages from around the
world. In this section, we describe a few of the key components of the MIT App
Inventor educational strategy, including massive open online courses (MOOCs)
focused on MIT App Inventor, the Master Trainer (MT) program, the extensions
functionality of App Inventor that allows incorporation of new material for education,
and research projects that have leveraged App Inventor as a platform for enabling
domain-specific computing.
MIT provides special instruction to educators through the Master Trainers program.1
A prototype of the Master Trainers program began during a collaboration with the
Verizon App Challenge in 2012. Skilled App Inventor educators were recruited and
given a small amount of special training to help mentor and train teams who subse-
quently won the App Challenge. The current Master Trainers program was conceived
in 2015, to “grow a global community of experts on mobile app development who are
available to guide others through the exploration of mobile app creation…, thus pro-
viding a pathway into computer science, software development, and other disciplines
relevant in today’s digital world.”
1 http://appinventor.mit.edu/explore/master-trainers.html.
3.5.3 Extensions
Anyone with Java and Android programming experience can write their own components
for App Inventor using our extension mechanism. For example, MIT recently
published a suite of Internet of Things (IoT)-related extensions2 for interfacing with
Arduino 101 and BBC micro:bit microcontrollers, with support for other platforms
in development. Using these extensions, teachers can assemble custom curricula to
leverage these technologies in the classroom and encourage their students to explore
the interface between the world of software and the world of hardware.
We foresee the development of extensions related to artificial intelligence tech-
nologies, including deep learning, device support for image recognition, sentiment
analysis, natural language processing, and more. Ideally, these complex technologies
could be leveraged by anyone looking to solve a problem with the smartphone as a
platform.
2 http://iot.appinventor.mit.edu.
Traditional computer science curricula at the university level often focus on theory
and include evaluation tools (e.g., Big-O notation of algorithms) and comprehension
of the space and time complexity of data structures. Instead, App Inventor curricula
focus on using a language practically to solve real-world problems. Rather than plac-
ing emphasis on learning concepts such as linked lists or key–value mappings, App
Inventor hides the complexity of these data structures behind blocks so that students
can spend more time designing apps that perform data collection and analysis, or
integrate with a range of sensors and actuators interacting with external environ-
ments. This allows for a top-down, goal-based decomposition of the problem rather
than a bottom-up approach, although App Inventor does not preclude such a strategy.
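As a hedged sketch (plain Python rather than App Inventor blocks, with illustrative names and data), the top-down decomposition described above might start from the goal and delegate to smaller solvable pieces, with a built-in dictionary standing in for the key–value blocks App Inventor provides:

```python
# Illustrative top-down decomposition of a hypothetical sensor-logging app.
# The goal ("one summary per sensor") is stated first; the helpers it needs
# are defined afterwards. Names and data layout are assumptions, not part
# of App Inventor.

def summarize(readings):
    """Goal: map each sensor name to the average of its readings."""
    grouped = group_by_sensor(readings)
    return {sensor: average(values) for sensor, values in grouped.items()}

def group_by_sensor(readings):
    """Sub-goal 1: group (sensor, value) pairs by sensor name."""
    grouped = {}
    for sensor, value in readings:
        grouped.setdefault(sensor, []).append(value)
    return grouped

def average(values):
    """Sub-goal 2: reduce a list of readings to one number."""
    return sum(values) / len(values)
```

Here the dictionary plays the role of a key–value mapping block: the student reasons about sub-goals, not about how the mapping is implemented.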
The concept of computational thinking was first used by Seymour Papert in his sem-
inal book Mindstorms: Children, computers, and powerful ideas (1993); however, it
was largely brought into the mainstream consciousness by Jeannette Wing in 2006.
For Wing, computational thinking is the ability to think like a computer scientist. In
the decade since, many educational researchers have worked to integrate computa-
tional thinking into modern computing and STEM curricula (Tissenbaum, Sheldon,
& Sherman, 2018). However, the explosive growth of computational thinking has
also resulted in a fragmentation of its meaning, with educational researchers, cur-
riculum designers, and teachers using different definitions, educational approaches,
and methods of assessment (Denning, 2017). There have been attempts to reconcile
these differences (National Academy of Sciences, 2010) and to bring leading
While the growth of computational thinking has brought new awareness to the impor-
tance of computing education, it has also created new challenges. Many educational
initiatives focus solely on the programming aspects, such as variables, loops, con-
ditionals, parallelism, operators, and data handling (Wing, 2006), divorcing com-
puting from real-world contexts and applications. This decontextualization threatens
to make learners believe that they do not need to learn computing, as they cannot
envision a future in which they will need to use it, just as many see math and physics
education as unnecessary (Flegg et al., 2012; Williams et al., 2003).
This decontextualization of computing education from the actual lives of students
is particularly problematic for students underrepresented in the fields of computing
and engineering, such as women and other learners from nondominant groups. For
these students, there is a need for their work to have an impact in their community
and for it to help them develop a sense of fit and belonging (Pinkard et al., 2017).
Lee and Soep (2016) argue that a critical perspective on computing is essential
for students to develop a critical consciousness around what they are learning and
making: moving beyond simply whether students can program, to asking what
they are programming and why they are programming it.
In response, the App Inventor team advocates for a new approach to computing
education that we call computational action. The computational action perspective
on computing argues that while learning about computing, young people should also
have opportunities to create with computing in ways that have a direct impact on their lives
and their communities. Through our work with App Inventor, we have developed
two key dimensions for understanding and developing educational experiences that
support students in engaging in computational action: (1) computational identity
and (2) digital empowerment. Computational identity builds on prior research that
showed the importance of young people’s development of scientific identity for
future STEM growth (Maltese & Tai, 2010). We define computational identity as a
person’s recognition that they can use computing to create change in their lives and
potentially find a place in the larger community of computational problem-solvers.
Digital empowerment involves instilling in learners the belief that they can put their
computational identity into action in authentic and meaningful ways.
Computational action shares characteristics with other approaches for refocusing
computing education toward student-driven problem-solving, most notably compu-
tational participation (Kafai, 2016). Both computational action and computational
participation recognize the importance of creating artifacts that can be used by others.
However, there is a slight distinction between the conceptualizations of community
in the two approaches. In computational participation, community largely means the
broader community of learners engaging in similar computing practices (e.g., the
community of Scratch programmers that share, reuse, and remix their apps). While
such a learning community may be very beneficial to learners taking part in a com-
putational action curriculum, the community of greater importance is the one that
uses or is impacted by the learners’ created products (e.g., their family, friends, and
neighbors). This computational identity element of computational action acknowl-
edges the importance of learners feeling a part of a computing community (i.e., those
that build and solve problems with computing), but it is not a requirement that they
actively engage with this larger community. A small group of young app builders,
such as those described below, may develop significant applications and believe they
are authentically part of the computing community, without having connected with
or engaged with it in a deep or sustained way as would be expected in computational
participation.
Through students’ use of App Inventor, we have seen this computational action
approach produce amazing results. Students in the United States have developed apps
to help a blind classmate navigate their school (Hello Navi3); students in Moldova
developed an app to help people in their country crowdsource clean drinking water
(Apa Pura4); and as part of the CoolThink@JC project, students in Hong Kong
created an app, “Elderly Guardian Alarm,” to help the elderly when they got lost.
Across these projects, we see students engaging with and facilitating change in their
communities, while simultaneously developing computational identities.
3 https://www.prnewswire.com/news-releases/321752171.html.
4 The Apa Pura Technovation pitch video is available online at https://youtu.be/1cnLiSySizw.
We started the App of the Month program in 2015 in order to encourage App Inventors
to share their work with the community. Any user can submit their app to be judged
in one of four categories: Most Creative, Best Design, Most Innovative, and Inventor.
Submissions must be App Inventor Gallery links, so that any user can remix winning
apps. Furthermore, apps are judged in two divisions: youth and adult.
Now, 3 years after the program’s inception, approximately 40 apps are submitted
each month. More youth tend to submit than adults, and significantly more male
users submit than female users, especially in the adult division. While submissions
come in from all over the world, India and the USA are most highly represented.
Themes of submitted apps vary widely. Many students submit “all-in-one” apps
utilizing the Text to Speech and Speech Recognizer components. Adults often submit
learning apps for small children. Classic games, such as Pong, also get submitted
quite frequently. Teachers tend to submit apps that they use in their classrooms.
Perhaps most importantly, students and adults alike submit apps designed to solve
problems within their own lives or their communities. For example, a recent submitter
noticed that the Greek bus system is subject to many slowdowns, so he built an app
that tracks buses and their routes. Similarly, a student noticed that many of her peers
were interested in reading books, but did not know how to find books they would
like, so she built an app that categorizes and suggests popular books based on the
Goodreads website.
However, not all users fit the same mold. One student found that he enjoys logic-
and math-based games, and after submitting regularly for about a year, his skill
improved tremendously. Hundreds of people have remixed his apps from the Gallery,
and even downloaded them from the Google Play Store, encouraging the student to
pursue a full-time career in game development.
The App of the Month program, as a whole, encourages users to think of App
Inventor as a tool they can use in their daily lives and off-the-screen communities.
It also gives users an incentive to share their apps and recognition for their hard work.
Users go to App Inventor to solve problems—which makes them App Inventors
themselves.
3.7 Discussion
We have seen in detail many aspects of the MIT App Inventor program from the
development and educational perspective. There are some misconceptions, limita-
tions, and benefits that are important to highlight.
One common position detractors take is that blocks programming is not real pro-
gramming (often comparing blocks languages to text languages). This is a false
dichotomy if one understands programming to be the act of describing to a computer
some realization of a Turing machine. The examples presented in earlier sections
highlight how people use MIT App Inventor to solve real problems they face in their
communities. To this end, younger individuals recognize that through tools such as
App Inventor they can effect real change in their community, if not the whole world.
Novice users who begin learning programming with blocks languages also tend to
go further and continue more often than learners of textual languages (Weintrop &
Wilensky, 2015).
Another common misconception is that creating mobile applications is some-
thing that only experts and those who have a lot of experience programming can
do. However, students across the K-12 spectrum use App Inventor to develop their
own mobile applications with little to no prior experience. For instance, the Cool-
Think@JC curriculum targets over 15,000 students in Hong Kong from grades 4–6.
This intervention has enabled these elementary students to learn both to think com-
putationally and to develop their own apps to address local issues (Kong et al., 2017).
3.7.2 Limitations
5 http://appinventortojava.com/.
continue to grow the platform and user community and is a worthy subject for further
exploration.
Users of the App Inventor platform benefit from being able to repurpose the com-
putational thinking skills they learn to interface with physical space in the external
world. The visual programming of App Inventor and the abstraction and compart-
mentalization of concepts into components and blocks allow the app inventor to
focus more on decomposing their problems into solvable elements. The ease of
running apps on mobile devices lets students experience their own apps
as part of an ecosystem they interact with daily, and with which they are intimately
familiar. Because this encapsulation reduces the time it takes to build an app, even a
straightforward prototype, app inventors can iterate quickly without paying the
significant cost of the compile-load-run cycle typical of mobile
app development.
3.8 Conclusions
The MIT App Inventor project continues to push the boundaries of education within
the context of mobile app development. Its abstraction of hardware capabilities and
its reduction of complex logic into compact representations allow users to quickly
and iteratively develop projects that address real-world problems. We discussed
how App Inventor’s curriculum development incorporates elements of computa-
tional thinking and encourages computational action with real-world effects. We
also presented a number of projects that effectively accomplish this mission. We
continue to grow the platform to democratize access to newer technologies, prepar-
ing future generations for a world in which computational thinking is a central part
of problem-solving.
Acknowledgements The authors would like to thank Prof. Hal Abelson, Karen Lang, and Josh
Sheldon for their input, and discussions of material in the manuscript. App Inventor has received
financial support from Google, NSF grant #1614548, Hong Kong Jockey Club, Verizon Foundation,
and individual contributors.
References
Aho, A. V. (2012). Computation and computational thinking. The Computer Journal, 55(7),
832–835.
Begel, A., & Klopfer, E. (2007). StarLogo TNG: An introduction to game development. Journal of
E-Learning, 53, 146.
Brennan, K., & Resnick, M. (2012). New frameworks for studying and assessing the development
of computational thinking. In Proceedings of the 2012 AERA.
Deng, X. (2017). Group collaboration with App Inventor (Masters thesis, Massachusetts Institute
of Technology).
Denning, P. J. (2017). Remaining trouble spots with computational thinking. Communications of
the ACM, 60(6), 33–39.
Flegg, J., Mallet, D., & Lupton, M. (2012). Students’ perceptions of the relevance of mathematics in
engineering. International Journal of Mathematical Education in Science and Technology, 43(6),
717–732.
Fraser, N. (2013). Blockly: A visual programming editor. https://developers.google.com/blockly/.
Gelman, R., & Gallistel, C. R. (1978). The child’s understanding of number. Cambridge, MA:
Harvard University Press.
Harunani, F. (2016). AppVis: Enabling data-rich apps in App Inventor. Masters Thesis, University
of Massachusetts, Lowell.
Jain, A., Adebayo, J., de Leon, E., Li, W., Kagal, L., Meier, P., et al. (2015). Mobile application
development for crisis data. Procedia Engineering, 107, 255–262.
Kafai, Y. B. (2016). From computational thinking to computational participation in K-12 education.
Communications of the ACM, 59(8), 26–27.
Kincaid, J. (2011). Google gives Android App Inventor a new home at MIT Media Lab. Techcrunch.
Retrieved March 04, 2018, from https://techcrunch.com/2011/08/16/google-gives-android-app-
inventor-a-new-home-at-mit-media-lab/.
Kong, S., Abelson, H., Sheldon, J., Lao, A., Tissenbaum, M., & Lai, M. (2017). Curriculum activ-
ities to foster primary school students’ computational practices in block-based programming
environments. In Proceedings of Computational Thinking Education (p. 84).
Lee, C. H., & Soep, E. (2016). None but ourselves can free our minds: critical computational literacy
as a pedagogy of resistance. Equity & Excellence in Education, 49(4), 480–492.
Maloney, J., Resnick, M., Rusk, N., Silverman, B., & Eastmond, E. (2010). The Scratch programming
language and environment. ACM Transactions on Computing Education (TOCE), 10(4), 16.
Maltese, A. V., & Tai, R. H. (2010). Eyeballs in the fridge: Sources of early interest in science.
International Journal of Science Education, 32(5), 669–685.
Martin, F., Michalka, S., Zhu, H., & Boudelle, J. (2017, March). Using AppVis to build data-rich
apps with MIT App Inventor. In Proceedings of the 2017 ACM SIGCSE Technical Symposium on
Computer Science Education (pp. 740–740). ACM.
Morelli, R., De Lanerolle, T., Lake, P., Limardo, N., Tamotsu, E., & Uche, C. (2011, March). Can
Android App Inventor bring computational thinking to k-12. In Proceedings of the 42nd ACM
Technical Symposium on Computer Science Education (SIGCSE’11) (pp. 1–6).
Mota, J. M., Ruiz-Rube, I., Dodero, J. M., & Figueiredo, M. (2016). Visual environment for design-
ing interactive learning scenarios with augmented reality. International Association for Develop-
ment of the Information Society.
National Academies of Science. (2010). Report of a workshop on the scope and nature of compu-
tational thinking. Washington DC: National Academies Press.
Papadakis, S., & Orfanakis, V. (2016, November). The combined use of Lego Mindstorms NXT
and App Inventor for teaching novice programmers. In International Conference EduRobotics
2016 (pp. 193–204). Springer, Cham.
Papert, S. (1990). A critique of technocentrism in thinking about the school of the future. Cambridge,
MA: Epistemology and Learning Group, MIT Media Laboratory.
Papert, S. (1993). Mindstorms: Children, computers, and powerful ideas (2nd ed.). Basic Books.
Papert, S. (2000). What’s the big idea? Toward a pedagogy of idea power. IBM Systems Journal,
39(3–4), 720–729.
Pinkard, N., Erete, S., Martin, C. K., & McKinney de Royston, M. (2017). Digital Youth Divas:
Exploring narrative-driven curriculum to spark middle school girls’ interest in computational
activities. Journal of the Learning Sciences, (just-accepted).
Powell, W. W., & Snellman, K. (2004). The knowledge economy. Annual Reviews in Sociology, 30,
199–220.
Resnick, M., Maloney, J., Monroy-Hernández, A., Rusk, N., Eastmond, E., Brennan, K., … Kafai,
Y. (2009). Scratch: Programming for all. Communications of the ACM, 52(11), 60–67.
Roque, R. V. (2007). OpenBlocks: An extendable framework for graphical block programming
systems. Doctoral dissertation, Massachusetts Institute of Technology.
Tissenbaum, M., Sheldon, J., & Sherman, M. (2018). The state of the field in computational thinking
assessment. In To Appear in the Proceedings of the 2018 International Conference of the Learning
Sciences. London.
Turbak, F., Wolber, D., & Medlock-Walton, P. (2014, July). The design of naming features in
App Inventor 2. In 2014 IEEE Symposium on Visual Languages and Human-Centric Computing
(VL/HCC) (pp. 129–132). IEEE.
Weintrop, D., & Wilensky, U. (2015). To block or not to block, that is the question: Students’
perceptions of blocks-based programming. In Proceedings of the 14th International Conference
on Interaction Design and Children (IDC’15) (pp. 199–208).
Williams, C., Stanisstreet, M., Spall, K., Boyes, E., & Dickson, D. (2003). Why aren’t secondary
students interested in physics? Physics Education, 38(4), 324.
Wing, J. M. (2006). Computational thinking. Communications of the ACM, 49(3), 33–35.
Xie, B., Shabir, I., & Abelson, H. (2015, October). Measuring the usability and capability of app
inventor to create mobile applications. In Proceedings of the 3rd International Workshop on
Programming for Mobile and Touch (pp. 1–8). ACM.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0
International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing,
adaptation, distribution and reproduction in any medium or format, as long as you give appropriate
credit to the original author(s) and the source, provide a link to the Creative Commons license and
indicate if changes were made.
The images or other third party material in this chapter are included in the chapter’s Creative
Commons license, unless indicated otherwise in a credit line to the material. If material is not
included in the chapter’s Creative Commons license and your intended use is not permitted by
statutory regulation or exceeds the permitted use, you will need to obtain permission directly from
the copyright holder.
Part II
Student Competency and Assessment
Chapter 4
Measuring Secondary School Students’
Competence in Computational Thinking
in ICILS 2018—Challenges, Concepts,
and Potential Implications for School
Systems Around the World
Birgit Eickelmann
B. Eickelmann (B)
Institute for Educational Science, Paderborn University, Paderborn, Germany
e-mail: birgit.eickelmann@upb.de
Competences related to computational thinking are attracting increasing attention
in the discussion about what kinds of competences young people need in order to
participate effectively in the digital society and to be prepared for work as well as for
everyday life. The corresponding research can be situated within a broader understanding
of ICT literacy research (e.g., Ainley, Schulz, & Fraillon, 2016; Siddiq, Hatlevik,
Olsen, & Throndsen, 2016; ETS, 2007). From this perspective, computational thinking
adds to a new understanding of computer-related problem-solving. It not only broadens
previous definitions of ICT literacy but indeed opens up a new perspective. In this scope,
teaching and learning how to solve problems and how computer systems work means
that competence in computational thinking can be applied in different contexts (Ainley,
Schulz, & Fraillon, 2016). Accordingly, the current discussion centers on the question of where
these competences should and could be taught. Answering this question requires
developing an understanding of teaching and learning computational thinking and
establishing common ideas of computational concepts, practices, and perspectives
within a school system (Kong, 2016). Currently, computational thinking challenges
the work of education systems all over the world, especially with regard to the
development of competence models, teacher education, and curriculum integration
of computational thinking (Kafai, 2016; Bescherer & Fest, 2018). Looking at the
current developments, three approaches to support the acquisition of competences in
computational thinking and improve students’ achievement in computational think-
ing can be identified: (1) Cross-curricular approach: The first approach is to under-
stand computational thinking as a cross-curricular competence which can be taught
within different subjects, acknowledging that each subject has a particular view and
contribution to students’ acquisition of competences in computational thinking (Barr
& Stephenson, 2011). (2) Computer science approach: The second approach refers
to the understanding of computational thinking as being a substantial part of com-
puter science (e.g., Kong, 2016). In this understanding, thought processes related to
computational thinking and involved in formulating and solving problems can be
represented as computational steps and algorithms (Aho, 2011). This suggests, to a
certain extent, that computational thinking is best promoted in the context of teaching
computer science, and it is closely linked to computer science’s contribution to
modeling, programming, and
robotics. Following on from this school of thought, Kong (2016) proposes a compu-
tational thinking framework, based on a framework by Brennan and Resnick (2012),
to develop a curriculum in K-12 that promotes computational thinking through
programming. A distinctive feature of this framework is that although computational
thinking is assumed to draw “on the fundamental knowledge and skills of computer
science” (Kong, 2016, p. 379), it is held to be broader than computer science,
extending to problem-solving, system design, and
human behavior. Based on this, computational thinking is promoted in a separate
3-year curriculum. This results in a third approach, which is to develop a new learning area.
4 Measuring Secondary School Students’ Competence … 55
In the following section, the theoretical framework and the empirical approach of
measuring students’ achievements in computational thinking in the context of the
international comparative large-scale study ICILS 2018 will be presented. The first
section provides information on the study and its scope. The second section describes
the approach to researching computational thinking in the context of the study. It
provides information on the research design and on the study’s understanding of
computational thinking as a construct that can be measured with computer-based
tests. Furthermore, the research questions that are addressed
in the study ICILS 2018 as well as information on the relevant context factors are
presented. The last subsection deals with insights into national extensions imple-
mented by individual countries participating in the computational thinking option of
ICILS 2018.
56 B. Eickelmann
With ICILS 2018 (International Computer and Information Literacy Study), the
IEA (International Association for the Evaluation of Educational Achievement) is
completing the second cycle of ICILS. The study is an international comparative
assessment with participating countries from all around the world. As in ICILS
2013, the international study center of ICILS 2018 is located at the Australian
Council for Educational Research (ACER). ICILS 2013 focused for the first time
on computer and information literacy (CIL) as a competence area measured in inter-
national comparisons by conducting computer-based student tests for Grade 8 in 21
education systems around the world (Fraillon, Ainley, Schulz, Friedman, & Geb-
hardt, 2014). After having successfully run this first cycle of ICILS, the IEA decided
to conduct a second cycle (ICILS 2018). Acknowledging the rapid changes affecting
ICT in teaching and learning and the aspiration to conduct a future-oriented study,
ACER suggested adding computational thinking as an extension of the study. The
core and trend part of both ICILS 2013 and ICILS 2018 comprises student tests for
CIL and questionnaires on teaching and learning with ICT and individual, classroom
and school factors with regard to the acquisition of CIL. Within the scope of ICILS
2018, nine education systems (Denmark, France, Germany, Luxembourg, Portugal,
the U.S., the Republic of Korea and the benchmarking participants Moscow (Russia)
and the German federal state North Rhine-Westphalia) are making use of the inter-
national option and are participating in the additional module focusing on computa-
tional thinking. Each student of the representative student sample takes, in addition
to two 30-min CIL tests, two 25-min computer-based tests on computational think-
ing. From the research perspective, the development of the computer-based tests,
covering all aspects of computational thinking and making it work for Grade 8 stu-
dents, has probably been the most challenging part of the current cycle of the study.
The aforementioned student tests are complemented by questionnaires addressing
the tested students, the teachers in the participating schools, and the school principals
and ICT coordinators of the schools selected for participation in the study.
Questionnaire items of particular interest for computational thinking are added
to the student and teacher questionnaires. Furthermore,
all participating countries are asked to provide data about the education system and
its approach to teaching and learning with ICT by filling in a so-called national
context survey. This country-related questionnaire refers, for instance, to aspects of
educational goals, curricula, and teacher education related to the scope of the study.
Data collection took place in spring 2018 for the Northern Hemisphere and in
autumn 2018 for countries from the Southern Hemisphere. All education systems
participated with a representative school sample, comprising representative teacher
and student samples. The results of the study therefore allow for interpreting the
status quo of Grade 8 student achievement in CIL and, for those education systems
taking part in the international option, in the domain of computational thinking as well.
The study starts where most research on computational thinking begins, basing itself
on an adaptation of Wing’s (2006) statements on computational thinking. In her
understanding, computational thinking comprises fundamental skills which allow
individuals to solve problems by using computers: “Computational thinking is a way
humans solve problems; it is not trying to get humans to think like computers” (Wing,
2006, p. 35). Wing’s idea builds on Papert (1980), who developed essential features of computational thinking. In recent years, the field of computational thinking has
been continuously developed (e.g., Dede, Mishra, & Voogt, 2013; Mannila et al.,
2014; Voogt, Fisser, Good, Mishra, & Yadav, 2015).
In the context of ICILS 2018, computational thinking is defined as “an individ-
ual’s ability to recognize aspects of real-world problems which are appropriate for
computational formulation and to evaluate and develop algorithmic solutions to those
problems so that the solutions could be operationalized with a computer” (Fraillon,
Schulz, Friedman, & Duckworth, 2019). This understanding of computational thinking and its relevance for future generations leads to new tasks for schools and school systems, which must offer every child the possibility to participate effectively in the digital world. It is stressed that this approach sees young people not only as consumers in a digital world but also as reflective creators of content who need the corresponding competences (IEA, 2016).
Beyond this broader understanding, the study rests on a more detailed definition of the construct “computational thinking,” developed by taking previous research findings, relevant approaches, and understandings of computational thinking into account. The study’s understanding
of computational thinking skills also corresponds to international standards such as
the ISTE standards for students (2016). These standards describe “Computational Thinkers” as students who “develop and employ strategies for
58 B. Eickelmann
understanding and solving problems in ways that leverage the power of technolog-
ical methods to develop and test solutions” (p. 1), including skills such as problem
formulation, data collection and analysis, abstraction, modeling, algorithmic think-
ing, solution finding, use of digital tools, representation of data, decomposition, and
automation. The construct as it is addressed in ICILS 2018 consists of two strands
which are both subdivided into subareas (Fraillon, Schulz, Friedman, & Duckworth,
2019).
Strand I: Conceptualizing problems: The first strand refers to the conceptualiza-
tion of problems. Conceptualizing problems acknowledges that before solutions can
be developed, problems must first be understood and framed in a way that allows
algorithmic or system thinking to assist in the process of developing solutions. As
subareas it includes three aspects: 1. Knowing about and understanding computer
systems; 2. Formulating and analyzing problems; and 3. Collecting and representing
relevant data. Tasks that provide evidence of an individual’s ability to know about and understand computer systems include, for example, operating a system to produce relevant data for analysis or explaining why simulations help to solve problems.
Formulating problems entails the decomposition of a problem into smaller manage-
able parts and specifying and systematizing the characteristics of the task so that
a computational solution can be developed—possibly with the help of a computer.
Analyzing consists of making connections between the properties of, and the solutions developed for, previously experienced problems and new problems, in order to establish a conceptual framework that underpins the process of breaking down a large problem into a set of smaller, more manageable parts. Collecting and representing relevant data
comprises making effective judgements about problem-solving within systems. This
requires knowledge and understanding of the characteristics of the relevant data and
of the mechanisms available for collection, organization, and representation of the
data for analysis. This could, for instance, involve creating or using a simulation of
a complex system to produce data that may show specific patterns or characteristics.
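The idea of producing data through simulation can be sketched with a small example. The dice scenario and all names here are our own illustration, not a task from the ICILS instruments:

```python
import random
from collections import Counter

# Illustrative only: simulate a simple system (rolling two dice) to
# produce data whose distribution shows a specific pattern.
random.seed(42)  # fixed seed so the run is reproducible
rolls = [random.randint(1, 6) + random.randint(1, 6) for _ in range(10_000)]
counts = Counter(rolls)

# The simulated data reveal a pattern: sums near 7 occur most often,
# because more combinations of the two dice produce them.
most_common_sum, _ = counts.most_common(1)[0]
print(most_common_sum)
```

Even this toy run demonstrates the point made above: the simulation generates data in which patterns become visible that the learner can then collect, organize, and represent for analysis.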
Strand II: Operationalizing solutions: The second strand concerns operationalizing solutions. Operationalizing solutions comprises the processes associated with creating, implementing, and evaluating computer-based system responses to real-world
problems. It includes the iterative processes of planning for, implementing, testing,
and evaluating algorithmic solutions to real-world problems. The strand includes an
understanding of the needs of users and their likely interaction with the system under
development. The strand comprises two aspects: 1. Planning and evaluating solutions
and 2. Developing algorithms, programs, and interfaces. Tasks that provide evidence of an individual’s ability to develop algorithms, programs, and interfaces include, for instance, creating a simple algorithm or modifying an existing algorithm for a new purpose.
This understanding of computational thinking acted as a basis for the development
of the student tests. Each aspect is covered in at least one of the two computational
thinking test modules. Student test data are scaled using item response theory (IRT), and student achievement is then analyzed in relation to the context data gathered via the various types of questionnaires. The analyses are guided by the research questions, which are presented in the following section.
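As a hypothetical illustration of the scaling approach mentioned above (the ability and difficulty values are invented, and the study's actual IRT scaling is considerably more elaborate), the one-parameter logistic (Rasch) model relates a student's ability and an item's difficulty to the probability of a correct response:

```python
import math

def rasch_probability(theta: float, b: float) -> float:
    """One-parameter logistic (Rasch) model: probability that a student
    with ability theta answers an item of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Invented illustration: one student, three items of increasing difficulty.
theta = 0.5                       # ability on the logit scale
difficulties = [-1.0, 0.0, 1.5]   # easy, medium, hard item
probs = [rasch_probability(theta, b) for b in difficulties]

# A student whose ability equals an item's difficulty has a 50% chance.
assert rasch_probability(0.0, 0.0) == 0.5
# Easier items yield higher success probabilities.
assert probs[0] > probs[1] > probs[2]
```

Scaling of this kind is what allows achievement scores from different test modules, and from students in different countries, to be placed on a common metric.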
Taking the aforementioned research gap and the aims of the study into account,
the following three overarching research questions are addressed in the study. The
questions refer to different levels within education systems: the school system level,
the school and classroom level, and the individual student level.
(1) First, what variations exist in and across different countries in student achieve-
ment in computational thinking and what aspects of students’ personal and
social background are related to it?
This question is answered by gathering data on student achievement in computational thinking, using computer-based computational thinking tests. The student test data enable the compilation of national averages as well as the comparison of student achievement between countries. As in other international comparative studies, the
student achievement data also allows for in-depth analyses within countries, e.g.,
comparing student achievement between boys and girls, between students with and without a migration background, or between students from different socioeconomic backgrounds. If countries have chosen to stratify the student sample to differentiate between school types or tracks, analysis can also reveal differences and similarities between groups of students from different school types. These are just a few examples of potential analyses for obtaining information on student achievement in computational thinking and for describing efforts made in teaching and learning computational thinking.
(2) Second, what aspects of education systems, schools, and classroom practice
explain variation in student achievement in computational thinking?
This question focuses on the way in which computational thinking is implemented
in education systems, in schools, and in classroom practice. Data relevant to this
question will be collected via the questionnaires. These data enable, for instance, the interpretation and framing of findings that address the first question. Furthermore, information on educational practices and settings is gathered and provides insights into teaching and learning with ICT in the context of computational thinking.
(3) Third, how is student achievement in computational thinking related to their
computer and information literacy (CIL) and to their self-reported proficiency
in using computers?
This research question connects the core part of the study referring to CIL and the
additional part referring to computational thinking. By applying both the student test
on computational thinking and the student test on CIL within the same student sample, initial correlations between the two constructs can be examined. (For a detailed overview of the underlying construct of CIL in ICILS 2013, see Fraillon, Ainley, Schulz, Friedman, & Gebhardt, 2014; the exact constructs of CT and CIL in ICILS 2018 can be found in the study’s assessment framework, which also describes how the study envisages the differences between these two areas.) This also yields new fundamental theoretical knowledge as well as important information on how the teaching of both competences can be combined and how student learning might be better supported.
A comprehensive and more detailed list of research questions as well as further
details on instrument preparation and content can be found in the study’s assessment
framework (Fraillon, Schulz, Friedman, & Duckworth, 2019). A detailed overview
of all instruments and objects will be published by ACER (Australian Council for
Educational Research) in 2020 with the so-called ICILS 2018 user guide for the
international database.
As mentioned above, ICILS 2018 applies computer-based student tests for Grade
8 students and adds questionnaires for students, teachers, school principals, and
IT coordinators. Furthermore, a so-called national context survey questionnaire is
applied and filled in by the national study centers of the countries participating in
the study.
In the scope of the study, assessing students’ achievement in computational think-
ing by applying computer-based tests means developing tests that have an authentic
real-world focus to capture students’ imagination in an appropriate way. At the core of the test modules are “authoring tasks” built around authentic computer software applications (Fraillon, Schulz, & Ainley, 2013). The actual content of the test modules themselves
will be published in the report of the study in 2019. Additionally, with the aim of
exploring classroom practices regarding student use of computational thinking tasks,
ICILS 2018 gathers information from students via student questionnaires. Parts of
the questionnaires relate to computational thinking and are only applied in those
countries that have chosen to include the computational thinking module. Students are, for example, asked to specify the extent to which they have learned to carry out different computational thinking tasks in school. These tasks refer to the study’s computational
thinking construct described above. Furthermore, with respect to teacher attitudes
toward teaching computational thinking, teachers are asked about the value they
attach to teaching skills in the field of computational thinking. These skills also refer
to the abovementioned computational thinking construct and are part of the teacher
questionnaires. It is of note that the experts from the national centers, together with the international study center located at ACER in Melbourne, decided to include teachers of all school subjects rather than only computer science teachers. Based on this consensual decision, the study will allow for comparing teachers’ practices as well as attitudes between different subject areas.
Considering the increasing relevance of the next generation’s competences in the field
of computational thinking, several educational systems around the world have already
decided to implement computational thinking as an obligatory subject in school curricula. Although the approaches taken in implementing computational
thinking may differ between countries, it becomes clear that supporting computa-
tional thinking processes and competences is considered as a future-oriented part of
school education and adds to the traditional subjects and learning areas (Labusch
& Eickelmann, 2017). Developing an understanding of computational thinking that
leads to facilitating teaching it in schools, however, seems to be challenging. While
various perspectives on computational thinking and its relevance for school learning exist, empirical knowledge based on a sound database remains very limited. Complementing the many studies currently being conducted around the world,
the international comparative large-scale assessment ICILS 2018 for the first time
provides empirical findings on student achievement in computational thinking in
different education systems. The addition of questionnaire data to the results of the
References
Aho, A. V. (2011). Computation and computational thinking. Computer Journal, 55(7), 832–835.
Ainley, J., Schulz, W., & Fraillon, J. (2016). A global measure of digital and ICT literacy skills.
Background paper prepared for the 2016 Global Education Monitoring Report. Paris: UNESCO.
Barr, V., & Stephenson, C. (2011). Bringing computational thinking to K-12: What is involved and
what is the role of the computer science education community? ACM Inroads, 2(1), 48–54.
Bescherer, C., & Fest, A. (2018). Computational thinking in primary schools: Theory and causal model. In A. Tatnall & M. Webb (Eds.), Tomorrow’s learning: Involving everyone. IFIP advances
in information and communication technology. Springer.
Brennan, K., & Resnick, M. (2012). New frameworks for studying and assessing the development
of computational thinking. In A. F. Ball & C. A. Tyson (Eds.), Proceedings of the 2012 Annual
Meeting of the American Educational Research Association (pp. 1–25). Vancouver: American
Educational Research Association.
Caeli, E. N., & Bundsgaard, J. (2018). Computational thinking initiatives in Danish grade 8 classes.
A quantitative study of how students are taught to think computationally. Paper presented at ECER
2018 (European Conference on Educational Research), Bolzano, Italy.
Dede, C., Mishra, P., & Voogt, J. (2013). Advancing computational thinking in 21st century learning.
Presented at EDUsummIT. International Summit on ICT in Education, Dallas, TX.
Voogt, J., Fisser, P., Good, J., Mishra, P., & Yadav, A. (2015). Computational thinking in compulsory
education: Towards an agenda for research and practice. Education and Information Technologies,
1–14.
Wing, J. M. (2006). Computational thinking. Communications of the ACM, 49(3), 33–35.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0
International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing,
adaptation, distribution and reproduction in any medium or format, as long as you give appropriate
credit to the original author(s) and the source, provide a link to the Creative Commons license and
indicate if changes were made.
The images or other third party material in this chapter are included in the chapter’s Creative
Commons license, unless indicated otherwise in a credit line to the material. If material is not
included in the chapter’s Creative Commons license and your intended use is not permitted by
statutory regulation or exceeds the permitted use, you will need to obtain permission directly from
the copyright holder.
Chapter 5
Computational Thinking Processes
and Their Congruence
with Problem-Solving and Information
Processing
5.1 Introduction
However, this poses major challenges for schools and teachers, because then
noncomputer science teachers must also have skills that allow them to teach com-
putational thinking adequately. However, it turns out that schools are often already
“struggling with the way in which to use information technology in the classroom”
(Faber, Wierdsma, Doornbos, & van der Ven, 2017, p. 13). Faber et al. (2017) suggest “unplugged” lessons in computational thinking. When discussing what unplugged computational thinking activities might look like, problem-solving and cognitive abilities are, based on existing definitions, important aspects.
Taking a large congruence for granted gives researchers the added advantage of being able to resort to theories of problem-solving and cognitive abilities. Problem-solving,
especially complex problem-solving, has been explored for decades (Fischer, Greiff,
& Funke, 2017), thus providing a sound body of well-proven research.
Assuming that the study will show that not all students possess this competence
equally and to a sufficient degree, the question also arises as to which influencing
factors can explain differences in students’ achievement in computational thinking.
Personal and social factors can be taken into account, but when it comes to learning
and teaching computational thinking, the question of the role of schools and espe-
cially teachers arises. Therefore, the school context should not be disregarded as
a possible influence variable on students’ achievement of computational thinking.
How computational thinking should be learned and taught has been a controversial issue in the recent discourse on its implementation into curricula (Grover, 2017).
Some experts argue that knowledge of what constitutes computational thinking and
how to make full use of it makes it accessible (Barr, Harrison, & Conery, 2011).
From this perspective, it is assumed that understanding computational thinking and
its core elements “would enable teachers to integrate it in their teaching concepts”
(Labusch & Eickelmann, 2017, p. 103).
aspects of the school context are to be taken into account in the context of the in-school acquisition of competences in the field of computational thinking (5.2.4), in order to be able to explain variation in students’ achievement in computational thinking.
In recent years, computational thinking has often been mentioned in the problem-solving context (e.g., Korkmaz, Çakir, & Özden, 2017; Román-González, Pérez-González, & Jiménez-Fernández, 2017). Problem-solving is described as a transformation from an undesirable initial state to a desirable final state (Beecher, 2017)
by overcoming a barrier. Doing so means operating at high levels of thinking and
reasoning (Spector & Park, 2012). Binkley et al. (2012) turn their attention to the
challenge of teaching additional competences such as sophisticated thinking and
flexible problem-solving to put students in the best possible starting position for
participation in work, life, and society.
While the acquisition of problem-solving skills is a current topic of discussion,
studies into problem-solving skills and problem-solving processes have been conducted for decades. A problem-solving process is frequently described as a seven-stage cycle (Pretz, Naples, & Sternberg, 2003): (1) the recognition or identification
of a problem, (2) the definition and mental representation of the problem, (3) the
development of a strategy to solve the problem, (4) the organization of knowledge
concerning the problem, (5) the allocation of mental and physical resources to solving
the problem, (6) the monitoring of progress toward the goal, and (7) the evaluation
of the solution for accuracy. Although there might be several ways to solve a problem, and the one chosen will depend on the nature of the actual problem at hand, it
is assumed that they will all have the same processes in common. The first stage of
the problem-solving process is therefore of huge importance: The problem needs
to be both recognized and identified in order to select the best possible solution
since how a problem is solved depends on the actual problem itself. Since these
processes are very similar to computational thinking processes, it is important to
investigate their similarities and differences by exploring the congruence between
general problem-solving and computational thinking.
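To make the structure of the cycle explicit, it can be sketched as an ordered pipeline. This is our own hypothetical rendering of Pretz, Naples, and Sternberg's (2003) stages, not an implementation from the literature:

```python
# The seven stages of the problem-solving cycle, in order.
STAGES = (
    "recognize or identify the problem",
    "define and mentally represent the problem",
    "develop a strategy to solve the problem",
    "organize knowledge concerning the problem",
    "allocate mental and physical resources",
    "monitor progress toward the goal",
    "evaluate the solution for accuracy",
)

def run_cycle(problem, handlers):
    """Pass a working state through every stage in order; `handlers`
    maps each stage name to a function that transforms the state."""
    state = problem
    for stage in STAGES:
        state = handlers[stage](state)
    return state

# Trivial usage: handlers that merely record which stages were visited.
visited = []
trace = {s: (lambda state, s=s: visited.append(s) or state) for s in STAGES}
result = run_cycle("toy problem", trace)
assert result == "toy problem" and len(visited) == 7
```

The sketch also makes the claim above concrete: whatever concrete strategy a handler implements, every solution attempt passes through the same ordered stages.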
Many researchers and educators contend that computational thinking changes our
way of thinking (Bundy, 2007). Information processing theories and models offer
deeper insights into this subject: ACT-R (Adaptive Control of Thought—Revised)
(Anderson, 2010) and MEKIV (Model for elementary and complex human informa-
tion processing) (Hussy, 1998), for instance, are two global models that incorporate
a useful cognitive architecture for modeling complex problem-solving processes.
5 Computational Thinking Processes and Their Congruence … 69
They both have in common that they try to integrate multiple mental functions and
thus take a more realistic approach to the complexity of action control and thinking.
While ACT-R has a symbolic structure and can be used to create cognitive models
that simulate human cognitive processes (Hussy, 1998), MEKIV provides a more
theoretical description of elementary and complex human information processing,
starting with the sensory organs and ending in the motoric system. All information
stored in the sensory register is passed on to the long-term memory, where the incom-
ing pieces of information are compared with the stored patterns that have meaning.
Stimuli received via sensory organs are thus encoded and interpreted by translat-
ing them into cognitive representations, which are then stored in memory. In doing
so, the stimulus is correlated with existing knowledge, whereby it is enriched with
information. Prerequisites for this are the accessibility and retrievability of stored
knowledge. The process of categorizing the stimulus is of particular importance
when it comes to encoding. The encoded perception is itself stored in memory and
creates new memory contents in interaction with the knowledge already stored there.
Human behavior then represents the output of this information processing (Bless,
Fiedler, & Strack, 2004).
includes both the identification and the definition of problems. That means that problems have to be understood and framed before solutions can be formed (ibid.). Identifying the problem also permits a decision on whether it
can be solved using computational thinking. Understanding the problem to be solved
is an important prerequisite for the problem-solving process. Pólya (2004) points out
that “it is foolish to answer a question that you do not understand” (p. 6). While
a scientific investigation process begins with a hypothesis (Riley & Hunt, 2014), a
problem-solving process begins with a problem definition. Michaelson (2015) main-
tains that characterizing the problem is the most challenging part of problem-solving.
When a person presents a problem to someone else and asks them to solve it (e.g., in school, when a teacher asks her/his students to solve a problem), it is best if the solvers formulate the problem in their own words (Pólya, 2004). While it is not the rule that computing activities begin with problem identification and definition, or that they are always thought of in terms of problems to be solved, according to the above definition problem identification and problem definition may be included in computational thinking processes.
Miller (1956) found that human short-term memory is limited to 7 ± 2 items, which means that some problems are too complex for the human brain to solve unless they are first decomposed into subproblems that can be processed one at a time. This process
of breaking a problem “down into manageable steps” (IEA, 2016, p. 1) is called
decomposition. It is a core element of a computational thinking process and ensures
that even complex problems can be understood. While some computational thinking
processes belong to only one strand (conceptualizing problems or operationalizing
solutions), others—like pattern recognition and pattern matching—are relevant to
almost all aspects of a computational thinking process. Decomposing a problem by
analyzing it as a whole also requires knowledge of what constitutes manageable steps
and how they are related to each other and the whole problem. The decomposition
process is well known in language arts, where it is referred to as outlining (Barr &
Stephenson, 2011), i.e., organizing work by decomposing it into its main ideas (Riley
& Hunt, 2014). Data can support the computational thinking process by providing
important information. This means finding a data source, analyzing the data, and
using data structures to represent the data (Barr & Stephenson, 2011).
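As a minimal sketch of decomposition (the task and function name are ours, purely illustrative), a problem too large to handle at once can be split into manageable subproblems whose partial solutions are then combined:

```python
def total(values):
    """Decompose a summation problem: split the input in half until each
    piece is trivially small, solve the pieces, and combine the results."""
    if len(values) <= 2:    # a manageable subproblem: solve it directly
        return sum(values)
    mid = len(values) // 2  # decomposition step: break the problem in two
    return total(values[:mid]) + total(values[mid:])  # combine partial results

# The decomposed computation agrees with solving the problem in one piece.
assert total(list(range(1, 101))) == 5050
```

The same split-solve-combine shape underlies outlining in language arts as described above: the whole is organized by working out its parts and relating them back to each other.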
The problem fragments of the decomposition process are shaped into possible solu-
tions by modeling and revising them. A model can serve as an abstract representation
of a real-world situation (Frigg, 2002), which is why a modeled solution might also be
applied to real-world problems. Modeling a solution involves different processes and
Denning (2009) suggests that the basic idea behind computational thinking is in
essence algorithmic thinking. In the field of computer science, an algorithm is defined
as “any well-defined sequence of actions that takes a set of values as input and produces some set of values as output” (Riley & Hunt, 2014, p. 130). An algorithmic view
of life is assumed to be very valuable because many essential activities in life can be carried out by following simple and discrete steps. The thinking process required to formulate an algorithm differs in one aspect from the formulation of a well-defined rule of
72 A. Labusch et al.
An analysis of the above reveals the emergent need for a study which measures the
in-school acquisition of computational thinking as a cross-curricular competence,
using a research concept that integrates general problem-solving and school-related
parameters. The findings indicate that while a congruence between computational
thinking and problem-solving is often taken for granted, few scholars have actually
investigated this assumption. There is also no study with a substantial database that explores the congruence between computational thinking and problem-solving, as well as individual and school-related factors relating to the acquisition of competences in the field of computational thinking, with the aim of creating a holistic picture of computational thinking in schools.
As a result, the following research questions are addressed within this contribution:
1. To what extent are students’ achievements in computational thinking related to
their general problem-solving skills?
2. To what extent are students’ achievements in computational thinking related to
their general problem-solving skills when controlling for individual characteristics?
3. To what extent are students’ achievements in computational thinking related to
their general problem-solving skills when controlling for the school context?
It is assumed that, as has already been shown theoretically, there is a strong
positive relationship between students’ achievement in computational thinking and
their problem-solving skills (research question 1). However, it is also assumed that
there are other variables at the student and school level influencing this relationship (research questions 2 and 3). These assumptions have to be tested in a suitable study.
The methods and instruments to be used for the analyses will be aligned with the
aforementioned research questions. The data basis is provided by the international
comparative study IEA-ICILS 2018 (International Computer and Information Lit-
eracy Study), in which the authors are participating as members of the national study
center in Germany. In ICILS 2013, students’ computer and information literacy (CIL)
was measured on a representative basis in an international comparison. In the course
of the second cycle of ICILS in 2018, the IEA (International Association for the
Evaluation of Educational Achievement) is for the first time implementing a new,
Since contextual factors influence variations in human cognition, affect, and behavior, it is necessary to use, in addition to descriptive primary and secondary analyses, a multilevel statistical model to avoid problems of parameter misestimation and model misspecification (Rowe, 2009). It is now widely accepted that studies that
assess the impact of school education on students’ learning outcomes must take
into account the fact that students’ progress is influenced by complex, multilevel,
For the first time, the congruence between computational thinking and general
problem-solving will be examined under the control of other variables at student
and school level on the basis of a representative sample in the context of a national
extension of the IEA study ICILS 2018. The data were collected in 2018, and initial results will be available in 2019. Using a cognitive approach, these results might
provide a starting point for implementing computational thinking into curricula as
a multidisciplinary and cross-curricular key competence of the twenty-first century
and contribute to moving school systems into the digital age.
References
Anderson, J. R. (2010). How can the human mind occur in the physical universe? New York: Oxford
University Press.
Barr, D., Harrison, J., & Conery, L. (2011). Computational thinking: A digital age skill for everyone.
Learning & Leading with Technology, 38(6), 20–23.
Barr, V., & Stephenson, C. (2011). Bringing computational thinking to K-12: what is involved and
what is the role of the computer science education community? ACM Inroads, 2(1), 48–54.
Beecher, K. (2017). Computational Thinking: A beginner’s guide to problem-solving and program-
ming. Swindon, UK: BCS Learning & Development Limited.
Binkley, M., Erstad, O., Herman, J., Raizen, S., Ripley, M., Miller-Ricci, M., & Rumble, M. (2012).
Defining twenty-first century skills. In P. Griffin, B. McGaw & E. Care (Eds.), Assessment and
teaching of 21st century skills (pp. 17–66). Dordrecht: Springer.
Bless, H., Fiedler, K., & Strack, F. (2004). Social cognition. How individuals construct social reality
[Social psychology: A modular course]. Philadelphia, PA: Psychology Press.
Brennan, K., & Resnick, M. (2012). New frameworks for studying and assessing the development of
computational thinking. Paper presented at the Annual Meeting of the American Educational
Research Association, Vancouver, Canada.
Bundy, A. (2007). Computational thinking is pervasive. Journal of Scientific and Practical Com-
puting, 1(2), 67–69.
Curzon, P., & McOwan, P. W. (2017). The power of computational thinking: Games, magic and
puzzles to help you become a computational thinker. London, UK: World Scientific Publishing
Europe Ltd.
Denning, P. J. (2009). The profession of IT: Beyond computational thinking. Communications of
the ACM, 52(6), 28–30.
Eickelmann, B. (2017). Computational Thinking als internationales Zusatzmodul zu ICILS 2018 –
Konzeptionierung und Perspektiven für die empirische Bildungsforschung [Computational thinking as an international additional module of ICILS 2018: Conception and perspectives for empirical educational research]. Tertium Comparationis. Journal für International und Interkulturell Vergleichende Erziehungswissenschaft, 23(1),
47–61.
Faber, H. H., Wierdsma, M. D. M., Doornbos, R. P., & van der Ven, J. S. (2017). Teaching compu-
tational thinking to primary school students via unplugged programming lessons. Journal of the
European Teacher Education Network, 12, 13–24.
Fink, G. A. (2014). Markov models for pattern recognition: From theory to applications. London:
Springer.
Fischer, A., Greiff, S., & Funke, J. (2017). The history of complex problem solving. In B. Csapó &
J. Funke (Eds.), The nature of problem solving: Using research to inspire 21st century learning
(pp. 107–121). Paris: OECD Publishing.
Fraillon, J., Ainley, J., Schulz, W., Friedman, T., & Gebhardt, E. (2014). Preparing for life in a digital age: The IEA International Computer and Information Literacy Study international report. Amsterdam: International Association for the Evaluation of Educational Achievement (IEA).
Fraillon, J., Schulz, W., Friedman, T., & Duckworth, D. (2019). Assessment framework of ICILS
2018. Amsterdam: IEA.
Frigg, R. (2002). Models and representation: Why structures are not enough. Measurement in
physics and economics project discussion paper series. London: London School of Economics.
Grover, S. (2017). Assessing algorithmic and computational thinking in K-12: Lessons from a
middle school classroom. In P. Rich & C. B. Hodges (Eds.), Emerging research, practice, and
policy on computational thinking (pp. 269–288). Springer Publishing Company.
Grover, S., & Pea, R. (2013). Computational thinking in K–12: A review of the state of the field.
Educational Researcher, 42(1), 38–43.
Hussy, W. (1998). Denken und Problemlösen [Thinking and problem-solving] (2nd ed.). Stuttgart:
Kohlhammer.
IEA (2016). The IEA’s international computer and information literacy study (ICILS) 2018. What’s
next for IEA’s ICILS in 2018? Retrieved December 12, 2017, from http://www.iea.nl/fileadmin/
user_upload/Studies/ICILS_2018/IEA_ICILS_2018_Computational_Thinking_Leaflet.pdf.
Kong, S. C., Abelson, H., Sheldon, J., Lao, A., Tissenbaum, M., Lai, M., Lang, K., & Lao, N. (2017).
Curriculum activities to foster primary school students’ computational practices in block-based
programming environments. In S. C. Kong, J. Sheldon & K. Y. Li (Eds.), Conference Proceedings
of International Conference on Computational Thinking Education 2017 (pp. 84–89). Hong
Kong: The Education University of Hong Kong.
Korkmaz, Ö., Çakir, R., & Özden, M. Y. (2017). A validity and reliability study of the Computational
Thinking Scales (CTS). Computers in Human Behavior, 72, 558–569.
Labusch, A., & Eickelmann, B. (2017). Computational thinking as a key competence—A research
concept. In S. C. Kong, J. Sheldon & K. Y. Li (Eds.), Conference Proceedings of International
Conference on Computational Thinking Education 2017 (pp. 103–106). Hong Kong: The Edu-
cation University of Hong Kong.
Liu, Z., Zhi, R., Hicks, A., & Barnes, T. (2017). Understanding problem solving behavior of 6–8
graders in a debugging game. Computer Science Education, 27(1), 1–29.
Mannila, L., Dagiene, V., Demo, B., Grgurina, N., Mirolo, C., Rolandsson, L., & Settle, A. (2014).
Computational thinking in K-9 education. In Proceedings of the 2014 Conference on Innovation
& Technology in Computer Science Education (pp. 1–29).
Michaelson, G. (2015). Teaching Programming with computational and informational thinking.
Journal of Pedagogic Development, 5(1), 51–66.
Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63(2), 81–97.
Murphy, L., Lewandowski, G., McCauley, R., Simon, B., Thomas, L., & Zander, C. (2008). Debugging: The good, the bad, and the quirky—A qualitative analysis of novices’ strategies. SIGCSE
Bulletin, 40(1), 163–167.
Pólya, G. (2004). How to solve it: A new aspect of mathematical method. Princeton: Princeton
University Press.
Pretz, J. E., Naples, A. J., & Sternberg, R. J. (2003). Recognizing, defining and representing prob-
lems. In J. E. Davidson & R. J. Sternberg (Eds.), The psychology of problem solving (pp. 3–30).
Cambridge, UK: Cambridge University Press.
Ramalingam, V., Labelle, D., & Wiedenbeck, S. (2004). Self-efficacy and mental models in learning
to program. SIGCSE Bulletin Inroads, 36(3), 171–175.
Riley, D. D., & Hunt, K. A. (2014). Computational thinking for the modern problem solver. Boca
Raton, FL: CRC Press.
Román-González, M., Pérez-González, J.-C., & Jiménez-Fernandez, C. (2017). Which cognitive
abilities underlie computational thinking? Criterion validity of the Computational Thinking Test.
Computers in Human Behavior, 72, 678–691.
Román-González, M., Pérez-González, J.-C., Moreno-León, J., & Robles, G. (2016). Does compu-
tational thinking correlate with personality? The non-cognitive side of computational thinking.
In Paper presented at the Fourth International Conference on Technological Ecosystems for
Enhancing Multiculturality, Salamanca, Spain.
Rowe, K. J. (2009). Structural equation modeling in educational research. In T. Teo & M. S.
Khine (Eds.), Structural equation modeling in educational research: concepts and applications
(pp. 201–239). Rotterdam, The Netherlands: Sense Publishers.
Sanford, J. F., & Naidu, J. T. (2016). Computational thinking concepts for grade school. Contem-
porary Issues in Education Research, 9(1), 23–32.
Spector, J. M., & Park, S. W. (2012). Argumentation, critical reasoning, and problem solving. In S.
B. Fee & B. R. Belland (Eds.), The role of criticism in understanding problem solving (pp. 13–33).
New York: Springer.
Sullivan, A., & Bers, M. U. (2016). Girls, boys, and bots: Gender differences in young children’s
performance on robotics and programming tasks. Journal of Information Technology Education:
Innovations in Practice, 15, 145–165.
Voogt, J., Fisser, P., Good, J., Mishra, P., & Yadav, A. (2015). Computational thinking in compulsory
education: Towards an agenda for research and practice. Education and Information Technologies,
20(4), 715–728.
Wing, J. M. (2006). Computational thinking. Communications of the ACM, 49(3), 33–35.
Wing, J. M. (2008). Computational thinking and thinking about computing. Philosophical Trans-
actions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences,
366(1881), 3717–3725.
Wing, J. M. (2017). Computational thinking’s influence on research and education for all. Italian
Journal of Educational Technology, 25(2), 7–14.
Yadav, A., Gretter, S., Good, J., & McLean, T. (2017a). Computational thinking in teacher education.
In P. Rich & C. B. Hodges (Eds.), Emerging research, practice, and policy on computational
thinking (pp. 205–220). Springer Publishing Company.
Yadav, A., Stephenson, C., & Hong, H. (2017b). Computational thinking for teacher education.
Communications of the ACM, 60(4), 55–62.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0
International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing,
adaptation, distribution and reproduction in any medium or format, as long as you give appropriate
credit to the original author(s) and the source, provide a link to the Creative Commons license and
indicate if changes were made.
The images or other third party material in this chapter are included in the chapter’s Creative
Commons license, unless indicated otherwise in a credit line to the material. If material is not
included in the chapter’s Creative Commons license and your intended use is not permitted by
statutory regulation or exceeds the permitted use, you will need to obtain permission directly from
the copyright holder.
Chapter 6
Combining Assessment Tools
for a Comprehensive Evaluation
of Computational Thinking Interventions
Abstract Given that computational thinking (CT) is still a blurry psychological construct, its assessment remains a thorny, unresolved issue. Hence, in recent years, several assessment tools have been developed from different approaches to, and operational definitions of, CT. However, very little research has studied whether these instruments provide convergent measurements, and how to combine them properly in educational settings. In response, we first review a myriad of CT assessment tools and classify them according to their evaluative approach. Second, we report the results of two convergent validity studies involving three of these CT assessment tools, which come from different perspectives: the Computational Thinking Test, the Bebras Tasks, and Dr. Scratch. Finally, we propose a comprehensive model for evaluating the development of CT within educational scenarios and interventions, which includes the aforementioned and other reviewed assessment tools. Our comprehensive model is intended to assess CT at every cognitive level of Bloom’s taxonomy and throughout the various stages of typical educational interventions. Furthermore, the model explicitly indicates how to combine the different types of CT assessment tools harmoniously in order to answer the most common research questions in the field of CT education. Thus, this contribution may help scholars and policy-makers design accurate CT evaluations according to their inquiry goals.
M. Román-González (B)
Universidad Nacional de Educación a Distancia (UNED) – Facultad de Educación,
C/ Juan del Rosal, nº 14, Office 2.18, Madrid, Spain
e-mail: mroman@edu.uned.es
J. Moreno-León
Instituto Nacional de Tecnologías Educativas y de Formación del Profesorado (INTEF),
C/ Torrelaguna, nº 58, Madrid, Spain
e-mail: jesus.morenol@educacion.gob.es
G. Robles
Universidad Rey Juan Carlos (URJC), ETSI Telecomunicación, Camino del Molino s/n,
Fuenlabrada, Madrid, Spain
e-mail: grex@gsyc.urjc.es
6.1 Introduction
In the last decade, computational thinking (CT) (Wing, 2006) has emerged as an
umbrella term that refers to a broad set of problem-solving skills, which should be
acquired by the new generations to thrive in our computer-based world (Bocconi
et al., 2016). Thus, the use of the CT term has evolved and grown, even without
reaching a consensus about its definition (Kalelioglu, Gülbahar, & Kukul, 2016).
Moreover, the relation between CT and computer programming is blurry too. It is
assumed that computer programming enables CT to come alive, and it is the main way
to demonstrate CT skills (Lye & Koh, 2014). However, CT might be projected onto a
wide range of tasks that do not involve programming (Wing, 2008). In other words,
it is necessary to activate CT skills in order to program properly, but these skills
could be used in other contexts that are disconnected from computer programming.
Therefore, CT is a broader term than computer programming.
In a certain sense, the coining of CT as an umbrella term has been extremely useful, and its sudden success can be explained. First, the CT term has helped to place Computer Science Education beyond computer programming. Second, it has helped to lower the entry barriers to computer programming, in parallel with the appearance and rise of visual block languages; in the same vein, CT has provided a frame for focusing not on programming syntax, but on the underlying mental processes. As a result, CT is perceived as a friendly and nonthreatening term that has helped to bring Computer Science (CS) closer to teachers and to foster the CS4all movement. Third, CT is such a liquid term that it can be used more as an approach than as a concept; thus, CT has promoted the metaphor of “programming to learn” instead of “learning to program.” In other words, CT has made it possible to imagine a computational approach to any subject in the curriculum. Finally, the CT term has gathered not only cognitive skills, such as decomposition, pattern recognition, abstraction, and algorithmic design, but also noncognitive variables (Román-González, Pérez-González, Moreno-León, & Robles, 2018) and related soft skills such as persistence, self-confidence, tolerance of ambiguity, creativity, and teamwork, among others. In summary, the CT term has served as a response to a global phenomenon in which it has become evident that our lives, increasingly mediated by algorithms, demand a new set of skills for relating properly to the ubiquitous machines (Rushkoff, 2010; Sadin, 2015).
Nevertheless, the lack of definition of the CT term that proved useful in the past could become a burden for its future survival and development. Thus, there is not only a lack of consensus on a formal definition of CT, but also disagreement about how CT should be integrated into educational curricula (Lye & Koh, 2014), and especially about how it should be properly assessed (Grover, 2015; Grover & Pea, 2013). The latter is an extremely relevant and urgent topic, because without reliable and valid assessment tools, CT will struggle to consolidate its place in the educational system, and it runs a serious risk of disappearing as a construct worthy of consideration in educational psychology.
Expressed metaphorically, the CT term has had its naïve childhood, gone through an impulsive (and productive) adolescence, and is now entering adulthood through a process of demystification. As Shute, Sun, and Asbell-Clarke (2017) say, CT is being demystified and, if it is to survive, it is time to study it scientifically and to define operational CT models that can be empirically validated.
Ironically, although CT assessment seems to be the thorniest issue in the field, we consider that it also offers the biggest opportunity to reinforce CT as a serious and well-established psychological construct. Assessment and measurement imply operationally defining the construct, CT in this case, in order to design an assessment tool that must consequently be validated. Hence, advances in assessment can contribute decisively to consolidating CT as a solid concept, a variable worthy of being studied and developed. In this sense, this chapter aims to review the current state-of-the-art in CT assessment tools and to propose a comprehensive evaluation model that combines these tools effectively.
The chapter is structured as follows: in the next section, we review a myriad of
CT assessment tools and classify them according to their evaluative approach. In the
third section, we report the results of two convergent validity studies that involve
three of these CT assessment tools, which come from different perspectives: the
Computational Thinking Test, the Bebras Tasks, and Dr. Scratch. In the fourth section,
we propose a comprehensive model to evaluate CT interventions within educational
scenarios, which includes the aforementioned and other reviewed assessment tools.
Finally, in the fifth and last section, we offer our conclusions and we speculate about
future lines of research.
Without being exhaustive, and focusing on K-12 education, we can find the following
CT assessment tools, which can be classified depending on their evaluative approach:
• CT diagnostic tools: They are aimed at measuring the CT aptitudinal level of
the subject. Their major advantage is that they can be administered in pure pretest
condition (e.g., with subjects without any prior programming experience). Complementarily, the diagnostic tools can also be applied in posttest condition (i.e., after an educational intervention) in order to check whether CT ability has increased. Some of
the CT diagnostic tools are the Computational Thinking Test (Román-González,
2015; Román-González, Pérez-González, & Jiménez-Fernández, 2017b), the Test
for Measuring Basic Programming Abilities (Mühling, Ruf, & Hubwieser, 2015),
and the Commutative Assessment Test (Weintrop & Wilensky, 2015). All the
aforementioned tests are aimed at middle-school and/or high-school students; for
Babu, & Gundersen, 2014). This type of tool is especially suitable for assessing the degree of retention and transfer of CT once some time has elapsed since the end of a CT educational intervention.
• CT perceptions–attitudes scales: They are aimed at assessing the perceptions
(e.g., self-efficacy perceptions) and attitudes of the subjects not only about CT,
but also about related issues such as computers, computer science, computer pro-
gramming, or even digital literacy. Among scales targeted to students, we can name
the Computational Thinking Scales (CTS) (Korkmaz, Çakir, & Özden, 2017), the
Computational Thinking Skills Scale (CTSS) (Durak & Saritepeci, 2018), or the
Computer Programming Self -Efficacy Scale (CPSES) (Kukul, Gökçearslan, &
Günbatar, 2017). When we are interested in assessing the perceptions and atti-
tudes of teachers, the research work of Yadav, Mayfield, Zhou, Hambrusch, and
Korb (2014) can be highlighted. This kind of tools can be administered both before
and after a CT educational intervention.
• CT vocabulary assessments: Finally, these tools are intended to measure several elements and dimensions of CT as they are verbally expressed by the subjects. Such verbal expressions have been termed “computational thinking language” (e.g., see Grover, 2011).
It is worth noting that those different types of instruments have their own intrinsic
characteristics, which lead each of them to approach CT assessment in a particular
way. For example, while the diagnostic and the summative tools are based on stu-
dent responses to predefined CT items or questions, the formative–iterative and the
data-mining tools rely on the analysis of students’ programming creations and of student activity while developing CT, respectively. Thus, the information coming from each type of instrument has a different nature, and all of it must be harmonized and triangulated to reach a complete CT assessment of the individual, as will be exemplified in the following empirical section.
Consequently, if only one of the aforementioned types of CT assessment tools is utilized, it is very likely that an incomplete view of the students’ CT will be obtained. This incomplete and biased view can lead us to misunderstand the CT development of our students and to make wrong educational decisions. In the same vein, Brennan and Resnick (2012) have stated that assessing students’ computational competencies just by looking at the programs created by the learners could be clearly insufficient, so they have emphasized the need for multiple means of assessment. Following this line of reasoning, Grover (2015) affirms that different types of complementary assessment tools must be systematically combined to reach a total and comprehensive understanding of our students’ CT. Such combinations have been termed “systems of assessment,” the leitmotif to which we want to contribute in the next two sections.
Therefore, in the next section, we investigate these “systems of assessments” from
an empirical, psychometric approach. It is assumed that a “system of assessment” will be composed of instruments that provide convergent measures. Specifically, in the
next section, we study the convergence of the measurements provided by three of the
aforementioned CT assessment tools. Later on, in the fourth section, we speculate
In this section, we report two different convergent validity studies, which were carried
out with two independent samples. The first study investigates the convergent validity
of the Computational Thinking Test (CTt) with respect to a selection of Bebras
Tasks. The second study does the same, but between the CTt and Dr. Scratch. Before
reporting the results of both studies, some background and details about these three
CT assessment tools are offered:
• Computational Thinking Test (CTt): The CTt is a diagnostic assessment tool that consists of a multiple-choice instrument composed of 28 items, administered online within a maximum of 45 min. Each item of the CTt is
presented either in a “maze” or in a “canvas” interface, and is designed according
to the following three dimensions:
– Computational concept(s) addressed: Each item addresses one or more of the
following computational concepts, which appear in increasing difficulty and
which are progressively nested along the test: basic directions and sequences;
loops (repeat times, repeat until); conditionals (if, if/else); while conditional
loops; simple functions.
– Style of response options: In each item, responses are depicted in any of the
following two styles: “visual arrows” or “visual blocks”.
– Required cognitive task: In order to be solved, each item requires the subject to perform one of the following cognitive tasks: sequencing an algorithm, completing an incomplete algorithm, or debugging an incorrect algorithm.
The CTt has been shown to be reliable and valid for assessing CT in subjects between 10 and 16 years old (Román-González, 2015; Román-González et al., 2017b). Some examples of CTt items are shown in Figs. 6.1, 6.2, and 6.3; their specifications are detailed in the respective captions.
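The three design dimensions above can be sketched as a small data model. This is an illustrative sketch only: the class and enum names are ours, not part of the CTt itself.

```python
from dataclasses import dataclass
from enum import Enum

class Concept(Enum):
    """Computational concepts addressed by CTt items, in increasing difficulty."""
    SEQUENCES = "basic directions and sequences"
    LOOPS = "loops (repeat times, repeat until)"
    CONDITIONALS = "conditionals (if, if/else)"
    WHILE_LOOPS = "while conditional loops"
    FUNCTIONS = "simple functions"

class ResponseStyle(Enum):
    VISUAL_ARROWS = "visual arrows"
    VISUAL_BLOCKS = "visual blocks"

class CognitiveTask(Enum):
    SEQUENCING = "sequence an algorithm"
    COMPLETING = "complete an incomplete algorithm"
    DEBUGGING = "debug an incorrect algorithm"

@dataclass
class CTtItem:
    number: int
    interface: str          # "maze" or "canvas"
    concepts: list          # one or more Concepts, possibly nested
    style: ResponseStyle
    task: CognitiveTask

# Item #11 as described in the caption of Fig. 6.1.
item_11 = CTtItem(11, "maze", [Concept.LOOPS],
                  ResponseStyle.VISUAL_ARROWS, CognitiveTask.DEBUGGING)
```

Representing items this way makes it straightforward to check, for instance, that a test form balances the three cognitive tasks across concepts.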
• The Bebras Tasks: These tasks consist of a set of activities designed within the
context of the Bebras International Contest, a competition created in Lithuania
in 2003, which is aimed at promoting the interest and excellence of K-12 students
around the world in the field of CS from a CT perspective (Dagiene & Futschek,
Fig. 6.1 CTt, item #11 (“maze”): loops “repeat until + repeat times” (nested); “visual arrows”;
“debugging”
Fig. 6.2 CTt, item #18 (“maze”): loops “repeat until” + if/else conditional (nested); “visual blocks”;
“sequencing”
2008). Every year, the contest launches a new set of Bebras Tasks, which require
the students to transfer and project their CT skills in order to solve “real-life”
problems. Because of this feature, in the previous sections we classified the Bebras Tasks as a CT skill transfer assessment tool. Another advantage of the Bebras Tasks is that they are independent of any particular software or hardware, and they can even be administered to individuals without any prior programming
experience. The three Bebras Tasks used in our convergent validity study are shown
in Figs. 6.4, 6.5, and 6.6.
Fig. 6.3 CTt, item #26 (“canvas”): loops “repeat times” + simple functions (nested); “visual
blocks”; “completing”
Fig. 6.4 Bebras Task #1: Water Supply (CT dimension involved: logic-binary structures) (reprinted
by permission of http://bebras.org/)
• Dr. Scratch (http://www.drscratch.org/) (Moreno-León et al., 2015) is a free and open-source web application that analyzes, in an automated way, projects programmed in the Scratch language.
The score that Dr. Scratch assigns to a project is based on the degree of development
of seven dimensions of CT competence: abstraction and problem decomposition,
logical thinking, synchronization, parallelism, algorithmic notions of flow control,
Fig. 6.5 Bebras Task #2: Fast Laundry (CT dimensions involved: parallelism, algorithms)
(reprinted by permission of http://bebras.org/)
Fig. 6.6 Bebras Task #3: Abacus (CT dimension involved: abstraction, decomposition, algorithms)
(reprinted by permission of http://bebras.org/)
user interactivity, and data representation. These dimensions are statically evaluated by inspecting the source code of the analyzed project, and each is given a score from 0 to 3, resulting in a total evaluation (“mastery score”) that ranges from 0 to 21 when all seven dimensions are aggregated. In addition, Dr. Scratch generates a
feedback report that is displayed to learners, which includes ideas and proposals to
enhance the students’ CT skills. The feedback report also encourages learners to
try new Scratch blocks and structures, in order to improve the “mastery score” of
their next projects (see Fig. 6.7). Because of this feature, in the previous sections,
we have classified Dr. Scratch as a CT formative–iterative assessment tool.
The ecological validity of Dr. Scratch, i.e., its capacity to be implemented with positive results in school settings, has been demonstrated (Moreno-León et al., 2015). Furthermore, the convergent validity of Dr. Scratch with respect to the grades given by CS educators (Moreno-León, Román-González, Harteveld, & Robles, 2017), and with respect to several software engineering complexity metrics (Moreno-León, Robles, & Román-González, 2016), has already been reported. Finally, Dr. Scratch has also demonstrated discriminant validity in distinguishing between different types of Scratch projects, such as animations, art projects, music projects, stories, and games (Moreno-León, Robles, & Román-González, 2017).
Given that the CTt, the Bebras Tasks, and Dr. Scratch are aimed at assessing the same construct (i.e., CT) but approach this goal from different perspectives, total convergence (r > 0.7) is not expected among them, but rather partial convergence (0.4 < r < 0.7) (Carlson & Herdman, 2012).
Thus, the convergent validity of these three instruments was investigated through
two different correlational studies, with two independent samples. In the first study,
the CTt and the aforementioned selection of Bebras Tasks were concurrently administered to a sample of Spanish middle-school students (n = 179) in pure pretest condition (i.e., students without prior formal experience in programming or similar). A positive, moderate, and statistically significant correlation was found (r = 0.52).
As depicted in Fig. 6.8, the higher the CT ability of the subject (as measured by
the CTt) is, the more likely it is that the subject correctly transfers CT to real-life
problems (as measured by the Bebras Tasks).
Regarding the second study, it was performed in the context of an 8-week program-
ming course in the Scratch platform, following the Creative Computing (Brennan,
Balch, & Chung, 2014) curriculum and involving Spanish middle-school students
(n = 71). Before the course started, the CTt was administered to the students in pure pretest condition. After the programming course, students took a posttest with the CTt, and teachers selected the most advanced project of each student, which was analyzed with Dr. Scratch. These measures offered us the possibility of analyzing the convergent validity of the CTt and Dr. Scratch in predictive terms (CTt pretest * Dr. Scratch) and in concurrent terms (CTt posttest * Dr. Scratch). As in the first study, positive, moderate, and statistically significant correlations were found, both in predictive (r = 0.44) and in concurrent terms (r = 0.53) (Fig. 6.9).
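The convergence analysis used above can be illustrated with a small, self-contained sketch. The data are toy values, not the study's; the band cut-offs follow the r thresholds cited from Carlson and Herdman (2012).

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def convergence_band(r, low=0.4, high=0.7):
    """Classify |r| using the cut-offs applied in the studies above."""
    r = abs(r)
    if r > high:
        return "total convergence"
    if r > low:
        return "partial convergence"
    return "weak convergence"

# Toy data: two tools ranking the same five students similarly.
ctt_scores = [10, 14, 18, 21, 25]
bebras_scores = [1, 3, 2, 4, 5]
print(convergence_band(pearson_r(ctt_scores, bebras_scores)))
```

In practice one would also test the correlation for statistical significance (e.g., with `scipy.stats.pearsonr`, which returns the p-value alongside r), as the studies above report.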
As we expected, the CTt, the Bebras Tasks, and Dr. Scratch are just partially
convergent (0.4 < r < 0.7). This result is consistent with their different assessment
approaches: diagnostic-aptitudinal (CTt), skill transfer (Bebras Tasks), and formative–iterative (Dr. Scratch).
Fig. 6.7 Dr. Scratch assessment results and feedback report (bottom), for the Scratch project “Alice
in Wonderland—a platformer” (https://scratch.mit.edu/projects/176479438/) (top)
The empirical findings presented in the previous section have some implications. On the one hand, the merely partial convergence found implies that none of the CT assessment tools considered in our studies should be used in place of any of the others. In other words, since the scores coming from these different instruments are only moderately correlated, none of the measures can replace or be reduced to any of the others; rather, the three tools should be combined in school contexts in order to achieve a more sensitive portrait of the students’ CT skills. On the other hand, from a pedagogical point of view, the three assessment tools studied empirically seem to be theoretically complementary, as the weaknesses of some are the strengths of the others.
Thus, the CTt has some strengths that are common to other diagnostic tools, such as the fact that it can be administered collectively in pure pretest conditions, so it could be used for massive screening and the early detection of computationally talented subjects or of individuals with special needs in CT education. Moreover, the diagnostic tools (e.g., the CTt), and also the summative ones, can be utilized to collect quantitative data in pre–post evaluations of interventions aimed at developing CT. However, the CTt and other tools of the same type have some obvious weaknesses too, since they provide only static and decontextualized assessments, which are usually focused on computational “concepts” (Brennan & Resnick, 2012), ignoring “practices” and “perspectives” (see Table 6.1).
As a counterbalance to the above, the Bebras Tasks and other skill transfer tools provide naturalistic assessments, contextualized in significant “real-life” problems. Thus, this kind of tool can be used not only as a measuring instrument, but also as a set of tasks for teaching and learning CT in a meaningful way. When used strictly for assessment, skill transfer tools are especially powerful if administered after a certain time has elapsed since the end of the CT educational intervention, in order to check the degree of retention and transfer of the acquired CT skills. Nevertheless, the psychometric properties of these tasks are still far from being demonstrated, and some of them are at risk of being too tangential to the core of CT.
Table 6.1 (continued): adequacy with respect to “concepts” / “practices” / “perspectives” (a little adequacy, b moderate adequacy, c excellent adequacy, – no adequacy at all)
Data-mining tools: b / c / a
Perceptions–attitudes scales: – / a / c
Vocabulary assessments: – / a / c
Finally, Dr. Scratch complements the CTt and the Bebras Tasks, given that the former involves “computational practices” (Brennan & Resnick, 2012) that the other two do not, such as iterating, testing, remixing, or modularizing. However, Dr. Scratch, like other formative–iterative or data-mining tools, cannot be used in pure pretest conditions, since it is applied to programming projects after the student has learned at least some computer programming for a certain time. In other words, these formative–iterative and data-mining tools are affected by the kind of programming instruction delivered by the teacher and by the kind of programming projects that are proposed to the students.
In this vein, Table 6.1 presents a tentative proposal on the degree of adequacy of the different types of CT assessment tools with respect to the three CT dimensions stated in Brennan and Resnick’s (2012) framework: “computational concepts” (sequences, loops, events, parallelism, conditionals, operators, and data); “computational practices” (experimenting and iterating, testing and debugging, reusing and remixing, abstracting, and modularizing); and “computational perspectives” (expressing, connecting, and questioning).
All of the above leads us to affirm the complementarity of the different types
of CT assessment tools in educational scenarios, and to raise the clear possibility
of building up powerful “systems of assessments” through a proper combination of
them. Hence, from a chronological point of view, the degree of adequacy of each type of assessment tool at the different phases of CT educational interventions and evaluations is presented in Table 6.2.
Furthermore, when reflecting on the characteristics of the various and diverse CT assessment tools, it is possible to state which levels of Bloom’s (revised) taxonomy of cognitive processes are addressed by each type (Fig. 6.10). Thus, the diagnostic tools provide information about how students “remember” and “understand” some CT concepts; the skill transfer tools inform about the students’ competence to “analyze” and “apply” their CT skills in different contexts; and the formative–iterative tools allow the students to “evaluate” their own and others’ projects,
as well as to “create” better and more complex ones.
Table 6.2 Adequacy of each type of CT assessment tool at the different phases of a CT educational intervention (a little adequacy, b moderate adequacy, c excellent adequacy, – no adequacy at all):
Summative tools: a / – / c / b
Formative–iterative tools: – / c / b / –
Data-mining tools: – / c / b / –
Skill transfer tools: a / b / b / c
Perceptions–attitudes scales: c / – / c / a
Vocabulary assessments: a / – / c / b
In addition, the summative tools
serve as a transition between the lower and the intermediate levels of the taxonomy,
while the data-mining tools do so between the intermediate and the upper ones.
Nonetheless, some CT issues, such as usability, originality, and interface design,
may still resist assessment by tools and require human judgment instead. Finally,
since the perceptions–attitudes scales assess noncognitive rather than cognitive
processes, these tools are not placed along the pyramidal taxonomy but surround
and frame it.
As a final contribution, we propose some research designs (RD) that could together
compose a comprehensive evaluation model of CT interventions. For each research
design, we specify the kind of research question it addresses and the CT assessment
tools that should be used:
• Correlational–predictive RD: Through this RD, we investigate to what extent the
subject's current level of CT can explain or predict his/her future performance
in related tasks (e.g., his/her academic achievement in STEM subjects, his/her
computer programming competence, etc.). The right instruments to be utilized
in this kind of RD are the diagnostic tools. Once the predictive power of a CT
diagnostic tool has been established, then it can be used subsequently to detect
subjects who may need preventive CT educational interventions.
• Quasi-experimental RD: This type of RD aims to verify whether a certain inter-
vention has been effective in developing students' CT. A quasi-experimental
RD needs pretest and posttest measures, administered to treatment and control
groups, in order to monitor the causes of any changes in those measures.
The proper instruments to be used in this RD are the summative tools (and some
of the diagnostic tools too). Moreover, it is imperative that the intervention under
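As a loose illustration of the statistical core of these two designs, the following sketch uses entirely invented data and variable names (nothing here comes from the chapter itself):

```python
# Hypothetical sketch of the two research designs above; all data invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Correlational-predictive RD: does a diagnostic CT score predict
# later performance, e.g., a STEM course grade?
ct_diagnostic = rng.normal(50, 10, size=100)
stem_grade = 0.6 * ct_diagnostic + rng.normal(0, 8, size=100)
r, p_corr = stats.pearsonr(ct_diagnostic, stem_grade)

# Quasi-experimental RD: compare pre-to-post gains on a summative tool
# between a treatment group and a control group.
gain_treatment = rng.normal(5, 3, size=40)  # posttest minus pretest
gain_control = rng.normal(1, 3, size=40)
t, p_gain = stats.ttest_ind(gain_treatment, gain_control)
```

A significant positive correlation in the first design would support using the diagnostic tool predictively; a significant gain difference in the second would support the intervention's effectiveness.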
94 M. Román-González et al.
References
Basawapatna, A., Koh, K. H., Repenning, A., Webb, D. C., & Marshall, K. S. (2011). Recogniz-
ing computational thinking patterns. In Proceedings of the 42nd ACM Technical Symposium on
Computer Science Education (pp. 245–250). https://doi.org/10.1145/1953163.1953241.
Bocconi, S., Chioccariello, A., Dettori, G., Ferrari, A., Engelhardt, K., et al. (2016). Developing
computational thinking in compulsory education: Implications for policy and practice. Seville:
Joint Research Centre (European Commission). Retrieved from http://publications.jrc.ec.europa.
eu/repository/bitstream/JRC104188/jrc104188_computhinkreport.pdf.
Brennan, K., Balch, C., & Chung, M. (2014). Creative computing. Harvard Graduate School of
Education.
Brennan, K., & Resnick, M. (2012). New frameworks for studying and assessing the development
of computational thinking. In Proceedings of the 2012 Annual Meeting of the American Educa-
tional Research Association, Vancouver, Canada (pp. 1–25). Retrieved from http://scratched.gse.
harvard.edu/ct/files/AERA2012.pdf.
Carlson, K. D., & Herdman, A. O. (2012). Understanding the impact of convergent validity on
research results. Organizational Research Methods, 15(1), 17–32.
Chen, G., Shen, J., Barth-Cohen, L., Jiang, S., Huang, X., & Eltoukhy, M. (2017). Assessing
elementary students’ computational thinking in everyday reasoning and robotics programming.
Computers & Education, 109, 162–175. https://doi.org/10.1016/j.compedu.2017.03.001.
Dagiene, V., & Futschek, G. (2008). Bebras international contest on informatics and computer
literacy: Criteria for good tasks. In International Conference on Informatics in Secondary Schools-
Evolution and Perspectives (pp. 19–30). Berlin, Germany: Springer. https://doi.org/10.1007/978-
3-540-69924-8_2.
Daily, S. B., Leonard, A. E., Jörg, S., Babu, S., & Gundersen, K. (2014). Dancing Alice: Exploring
embodied pedagogical strategies for learning computational thinking. In Proceedings of the 45th
ACM Technical Symposium on Computer Science Education (pp. 91–96). https://doi.org/10.1145/
2538862.2538917.
Durak, H. Y., & Saritepeci, M. (2018). Analysis of the relation between computational thinking
skills and various variables with the structural equation model. Computers & Education, 116,
191–202. https://doi.org/10.1016/j.compedu.2017.09.004.
Eguiluz, A., Guenaga, M., Garaizar, P., & Olivares-Rodriguez, C. (2017). Exploring the progression
of early programmers in a set of computational thinking challenges via clickstream analysis. IEEE
Transactions on Emerging Topics in Computing (in press). https://doi.org/10.1109/TETC.2017.
2768550.
Grover, S. (2011). Robotics and engineering for middle and high school students
to develop computational thinking. In Annual Meeting of the American Educational
Research Association, New Orleans, LA. Retrieved from https://pdfs.semanticscholar.org/69a7/
c5909726eed5bd66719aad69565ce46bbdcc.pdf.
Grover, S. (2015). “Systems of assessments” for deeper learning of computational thinking in
K-12. In Proceedings of the 2015 Annual Meeting of the American Educational Research Associ-
ation (pp. 15–20). Retrieved from https://www.sri.com/sites/default/files/publications/aera2015-
_systems_of_assessments_for_deeper_learning_of_computational_thinking_in_k-12.pdf.
Grover, S., Basu, S., Bienkowski, M., Eagle, M., Diana, N., & Stamper, J. (2017). A framework for
using hypothesis-driven approaches to support data-driven learning analytics in measuring com-
putational thinking in block-based programming environments. ACM Transactions on Computing
Education (TOCE), 17(3), 14:1–14:25. https://doi.org/10.1145/3105910.
Grover, S., Bienkowski, M., Niekrasz, J., & Hauswirth, M. (2016). Assessing problem-solving
process at scale. In Proceedings of the Third (2016) ACM Conference on Learning @ Scale
(pp. 245–248).
Grover, S., & Pea, R. (2013). Computational thinking in K–12: A review of the state of the field.
Educational Researcher, 42(1), 38–43. https://doi.org/10.3102/0013189X12463051.
Kalelioglu, F., Gülbahar, Y., & Kukul, V. (2016). A framework for computational thinking
based on a systematic research review. Baltic Journal of Modern Computing, 4(3), 583–596.
Retrieved from http://www.bjmc.lu.lv/fileadmin/user_upload/lu_portal/projekti/bjmc/Contents/
4_3_15_Kalelioglu.pdf.
Koh, K. H., Basawapatna, A., Bennett, V., & Repenning, A. (2010). Towards the automatic recog-
nition of computational thinking for adaptive visual language learning. In 2010 IEEE Symposium
on Visual Languages and Human-Centric Computing (VL/HCC) (pp. 59–66). https://doi.org/10.
1109/VLHCC.2010.17.
Koh, K. H., Basawapatna, A., Nickerson, H., & Repenning, A. (2014). Real time assessment of
computational thinking. In 2014 IEEE Symposium on Visual Languages and Human-Centric
Computing (VL/HCC) (pp. 49–52). https://doi.org/10.1109/VLHCC.2014.6883021.
Korkmaz, Ö., Çakir, R., & Özden, M. Y. (2017). A validity and reliability study of the computational
thinking scales (CTS). Computers in Human Behavior. https://doi.org/10.1016/j.chb.2017.01.
005.
Kukul, V., Gökçearslan, S., & Günbatar, M. S. (2017). Computer programming self-efficacy scale
(CPSES) for secondary school students: Development, validation and reliability. Eğitim Teknolo-
jisi Kuram ve Uygulama, 7(1). Retrieved from http://dergipark.ulakbim.gov.tr/etku/article/view/
5000195912.
Lye, S. Y., & Koh, J. H. L. (2014). Review on teaching and learning of computational thinking
through programming: What is next for K-12? Computers in Human Behavior, 41, 51–61. https://
doi.org/10.1016/j.chb.2014.09.012.
Maiorana, F., Giordano, D., & Morelli, R. (2015). Quizly: A live coding assessment platform for
App Inventor. In 2015 IEEE Blocks and Beyond Workshop (Blocks and Beyond) (pp. 25–30).
https://doi.org/10.1109/BLOCKS.2015.7368995.
Meerbaum-Salant, O., Armoni, M., & Ben-Ari, M. (2013). Learning computer science concepts
with Scratch. Computer Science Education, 23(3), 239–264.
Moreno-León, J., Robles, G., & Román-González, M. (2015). Dr. Scratch: Automatic analysis
of Scratch projects to assess and foster computational thinking. RED. Revista de Educación a
Distancia, 15(46), 1–23. Retrieved from http://www.um.es/ead/red/46/moreno_robles.pdf.
Moreno-León, J., Robles, G., & Román-González, M. (2016). Comparing computational thinking
development assessment scores with software complexity metrics. In 2016 IEEE Global Engi-
neering Education Conference (EDUCON) (pp. 1040–1045). https://doi.org/10.1109/EDUCON.
2016.7474681.
Moreno-León, J., Robles, G., & Román-González, M. (2017). Towards data-driven learning paths
to develop computational thinking with Scratch. IEEE Transactions on Emerging Topics in Computing. https://doi.org/10.1109/TETC.2017.2734818.
Moreno-León, J., Román-González, M., Harteveld, C., & Robles, G. (2017). On the automatic
assessment of computational thinking skills: A comparison with human experts. In Proceed-
ings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems
(pp. 2788–2795). New York, NY, USA: ACM. https://doi.org/10.1145/3027063.3053216.
Mühling, A., Ruf, A., & Hubwieser, P. (2015). Design and first results of a psychometric test
for measuring basic programming abilities. In Proceedings of the Workshop in Primary and
Secondary Computing Education (pp. 2–10). https://doi.org/10.1145/2818314.2818320.
Ota, G., Morimoto, Y., & Kato, H. (2016). Ninja code village for Scratch: Function samples/function
analyser and automatic assessment of computational thinking concepts. In 2016 IEEE Symposium
on Visual Languages and Human-Centric Computing (VL/HCC) (pp. 238–239).
Román-González, M. (2015). Computational thinking test: Design guidelines and content valida-
tion. In Proceedings of the 7th Annual International Conference on Education and New Learning
Technologies (EDULEARN 2015) (pp. 2436–2444). https://doi.org/10.13140/RG.2.1.4203.4329.
Román-González, M., Moreno-León, J., & Robles, G. (2017a). Complementary tools for com-
putational thinking assessment. In S. C. Kong, J. Sheldon, & K. Y. Li (Eds.), Proceedings of
International Conference on Computational Thinking Education (CTE 2017) (pp. 154–159).
Chapter 7
Introducing and Assessing Computational Thinking in the Secondary Science Classroom
7.1 Introduction
practices, and systems thinking practices. Figure 7.1 depicts the practices within
each of these four strands.
Though they are not unique to STEM, these CT practices are common to the STEM
disciplines. In this way, they differ from the domain-general CT practices charac-
terized by Wing (2006) (e.g., using computer science concepts to solve problems
and design systems), the National Research Council (2010) (e.g., heuristic reason-
ing, search strategies, and problem abstraction and decomposition), and Brennan
and Resnick (2012) (e.g., being incremental and iterative, testing and debugging,
reusing and remixing, and abstracting and modularizing). We identified key activi-
ties relevant to each of the CT-STEM practices in our taxonomy and proposed those
as learning objectives. We have used these learning objectives to guide our develop-
ment of curricula and assessments that foster and evaluate students’ development of
computational thinking practices in STEM subjects at the secondary level.
In the study described herein, we analyze student gains in the modeling and
simulation practices strand of the taxonomy. We build on work we have done using
agent-based modeling in science classrooms (Blikstein & Wilensky, 2009; Sengupta
& Wilensky, 2009; Horn & Wilensky, 2012; Horn, Brady, Hjorth, Wagh, & Wilensky,
2014; Levy & Wilensky, 2009; Wilensky, 2003; Wilensky & Reisman, 2006). In
future work, we plan to analyze each of the four strands and gains in summative
assessments of CT-STEM practices.
7.3 Method
We show that a group of 9th grade students developed competencies for modeling and
simulation practices as a result of their engagement in our computational biology
curriculum. As evidence, we present findings from a quantitative analysis of 133
9th grade students’ written responses to assessments given before and after their
participation in three computational biology units.
The data in this study come from the fourth iteration of a design-based research cycle
(Collins, Joseph, & Bielaczyc, 2004). The implementation spanned the 2015–2016
school year and was tested in three 9th grade biology classrooms at a partner sec-
ondary school in a Midwestern city in the United States. Students were given a
CT-STEM practices pre-test (Weintrop et al., 2014) at the beginning of the school
year and a CT-STEM practices post-test at the end of the school year. Over the year
they participated in three CT-STEM biology units, each approximately four days
long. We investigated the role of the CT-STEM science units in students' development
of competencies for modeling and simulation practices by looking for statistically
significant gains in student scores for particular items from pre- to post-test.
7.3.2 Participants
The pre- and post-test were each given during one 50-min class period at the begin-
ning and end of the school year. Students took the tests individually on school laptops
in their biology classrooms. The pre- and post-tests were not designed to evaluate
students’ science content knowledge. Rather, they were meant to evaluate their devel-
opment of competencies relevant to CT-STEM practices. In this chapter, we present
results concerned with two particular learning objectives within our modeling and
simulation practices strand.
The first learning objective focuses on an activity relevant to the CT-STEM prac-
tice using computational models to understand a concept and states that a student
should be able to “explore a model by changing parameters in the interface or code.”
104 H. Swanson et al.
This is a very basic activity but it plays an important role in students’ (and sci-
entists’) abilities to learn about the relationship between particular parameters and
system behavior at the macro-level.
The second learning objective focuses on an activity relevant to the CT-STEM
practice assessing computational models and states that a student should be able to
“identify the simplifications made by a model.” This activity is important to students’
epistemological development, as it relates to their understanding of a computational
model as a tool that is both powerful and limited with regards to the construction of
new knowledge.
Both pre- and post-tests required students to interact with computational simulations,
which they were given basic instructions on how to operate. For the pre-test,
students interacted with a simulation (shown in Fig. 7.2) that modeled climate change
and showed the relationship between temperature and the amount of CO2 in the
atmosphere (Tinker & Wilensky, 2007). For the post-test, students explored a
simulation (shown in Fig. 7.3) that modeled the relationship between the pressure
of a gas, its volume, and the number of particles in a sealed environment (Wilensky,
1997a, 2003; Wilensky, Novak, & Levy, 2005).
Fig. 7.2 Screenshot of pre-test simulation that models the relationship between temperature and
atmospheric CO2 levels
Fig. 7.3 Screenshot of post-test simulation that models the relationship between the pressure of a
gas, its volume, and the number of particles
To assess students’ abilities to explore a model by changing parameters in the
interface or code, we analyzed their responses to test items (quoted below) that
asked them to attend to the relationships between adjustable parameters and system-
level characteristics. To assess students’ abilities to identify simplifications made by
a model, we analyzed their responses to test items (quoted below) that asked them
for the ways in which the simulations differed from the real-world. These assessment
items were selected to investigate students’ development with respect to the same
learning objectives across two very different computationally modeled phenomena.
tencies outlined in the rubrics) and then scored them. The researchers’ inter-rater
reliability for the pre-test was 97% for the item measuring the first learning objective
and 90% for the item measuring the second learning objective. Inter-rater reliabilities
for the post-test items were 95% and 80% respectively.
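The inter-rater reliability figures reported here are percentage agreement; a minimal sketch of that computation, with hypothetical ratings, is:

```python
# Percentage agreement between two raters (ratings invented for illustration).
def percent_agreement(rater1, rater2):
    if len(rater1) != len(rater2):
        raise ValueError("rating lists must be the same length")
    matches = sum(a == b for a, b in zip(rater1, rater2))
    return 100.0 * matches / len(rater1)

# Two raters scoring ten responses on a 0-3 rubric; they disagree on one.
r1 = [0, 1, 2, 3, 1, 1, 2, 0, 3, 2]
r2 = [0, 1, 2, 3, 1, 2, 2, 0, 3, 2]
print(percent_agreement(r1, r2))  # 90.0
```

Percentage agreement is the simplest reliability index; chance-corrected measures such as Cohen's kappa are common alternatives when categories are imbalanced.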
For the pre-test, students were asked to explore a model by changing its parameters
in the context of the greenhouse gas simulation. In particular, they responded to the
prompt: "Set cloud coverage to 0%. Take some time to experiment with different
settings for the 'CO2-amount' slider. What happens to the temperature if you increase
the amount of the CO2 in the model?” For the post-test, students were asked to explore
the model in the context of the gas-law simulation. In particular, they responded to
the question: “What values for container size and number of particles will result
in the lowest pressure in the container? What steps did you take to come up with
these values?” It is important to note that while both items are concerned with stu-
dents’ abilities to learn about a parameter’s influence on a system’s behavior, they are
inversely structured. While the pre-test item instructs students to change a parameter
and report its effect on the system, the post-test item instructs students to change
parameters until they achieve a specified system behavior. We argue that while they
are different, both items are concerned with the causal relationship between param-
eter values and system-level behavior and are therefore comparable assessments of
students’ abilities to explore a model by changing parameters in the interface or
code.
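For intuition only: at fixed temperature an ideal gas gives P ∝ N/V, so the post-test item has a determinate answer (fewest particles, largest container). A toy search over invented parameter ranges (not the simulation's actual ranges) makes this concrete:

```python
# Qualitative logic of the post-test item: for a fixed temperature,
# pressure scales like N / V (ideal-gas behaviour), so the lowest
# pressure comes from the fewest particles in the largest container.
# The parameter ranges below are invented, not those of the simulation.
particle_counts = range(10, 201, 10)
volumes = range(50, 501, 50)

settings = [(n, v) for n in particle_counts for v in volumes]
lowest = min(settings, key=lambda s: s[0] / s[1])
print(lowest)  # (10, 500): fewest particles, largest container
```

Students arriving at this answer experimentally, by sliding parameters and watching the pressure readout, are exercising exactly the parameter-system reasoning the item targets.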
We examined students’ pre- and post-test responses, sorting responses into cate-
gories based on similarities that were relevant to the learning objective. Three cat-
egories emerged that were representative of response types across both pre- and
post-test. These are comparing across trials, attending to explanatory factors, and
attending to parameter-system relationships. We identified these as three competen-
cies relevant to exploring a model by changing parameters in the interface or code.
These competencies are outlined, described, and illustrated with examples from the
data in Table 7.1.
We scored students’ responses by awarding one point for each competence demon-
strated in their response and taking the sum of these points. This resulted in scores
ranging from 0 to 3. We characterize the distribution of competencies (demonstrated
in both pre- and post-test) in our findings section.
As part of the pre-test, students were asked to identify the simplifications made by
the greenhouse simulation. As part of the post-test, they were asked to identify the
simplifications made by the gas-law simulation. For both tests, they responded to the
Table 7.1 Pre- and post-test rubric for analyzing students’ responses and characterizing the com-
petencies they drew upon when exploring a model by changing parameters in the interface or
code
Comparing across trials
Response compares data across multiple simulation trials. When exploring a model to learn more
about the dynamics of, or test a hypothesis regarding, a complex system, it is important to
observe more than one simulation run. This is because complex systems are inherently random
and the results of changing a parameter vary over different simulation trials. A pattern of
cause-effect relationships will hover around an average tendency, but this average tendency may
not be exactly embodied in one (or several) simulation trials. So, if a student only runs one trial,
they may have a misguided impression of a pattern in system behavior. It is also a good idea to
run multiple trials in order to systematically compare the effects of different parameter values on
system behavior.
Pre-test “When I increase the amount of CO2 the earth heats up much faster than it would
if the setting was lower.”
Post-test “To come up with these values I first tried putting the number of particles and the
container size at its max. After that, I tried the number of particles at its minimum
and the container size at its maximum.”
Attending to explanatory factors
Response provides some explanation for the relationship between system parameters and
macro-level patterns. Explanations such as this convey the students’ reasoning and suggest that
they are not only attending to cause and effect, but that they are going one step further and trying
to make sense of the relationship between cause and effect—a fundamental activity of science.
Pre-test “The carbon dioxide blocks the IR from reaching the sky but doesn’t stop the
sunlight from reaching the ground the higher you increase the Carbon Dioxide.”
Post-test “A bigger area and less particles shouldn’t produce a large amount of pressure
since it’s a lot of space for the particles.”
Attending to parameter-system relationships
Response describes relationship between system parameters and macro-level patterns. It is
important to attend to outcomes of the simulation when tinkering with or testing parameters, in
order to notice relationships between cause and effect. Simple qualitative characterizations of the
relationships within a system are a foundation for constructing more detailed or mathematical
relationships. A simple qualitative understanding of a cause-effect relationship can be a powerful
tool for reasoning about system dynamics and for conveying the big ideas about the relationships
within a system to others. In the scientific world, these “others” might be collaborators or
members of the scientific community at-large.
Pre-test “The temperature increases.”
Post-test “I slid the wall-position to its maximum and the number of particles to its
minimum.”
Table 7.2 Pre- and post-test rubric for analyzing students’ responses and characterizing the com-
petencies they drew upon when identifying simplifications made by a model
Attending to general issues—score: 1
Response refers to general, rather than specific, inaccuracies or missing factors. This suggests
that students understand that the model is not an accurate depiction of reality, however they have
not done the cognitive work of identifying a particular limitation.
Pre-test “In reality, other factors could come into play rather than just CO2 and clouds.”
Post-test “Inaccuracy in particles and wall position can make it different from the real
world.”
Attending to representational issues—score: 1
Response refers to representational limitations of the model. This suggests that students
understand that the model is not an accurate depiction of reality. This is not a “meaningful”
limitation compared to other limitations that students mentioned, as the simplification does not
influence the interactions between the elements of the model and therefore does not influence the
outcome of any given simulation trial.
Pre-test “Obviously, sunlight is not a bunch of little sticks raining down.”
Post-test “It’s not actually life size.”
Attending to issues of controllability—score: 2
Response refers to the existence of control over factors in the model that one does not have
control over in real life. This suggests that students understand the model is different from reality
because it allows certain conditions to be tested by being varied, which is impossible to do in
reality.
Pre-test “Because you can control how much CO2 and cloud coverage there is.”
Post-test “In real life, you cannot add or subtract molecules nor can you adjust the wall
positioning.”
Attending to issues of completeness—score: 2
Response refers to specific elements or factors that are missing from, or extraneous to, the model.
These students recognize that a model is an approximation of reality. They have compared it with
the real world and identified factors that are found in the real world but missing from the model.
It is probable they believe these factors are somehow important to the model and would change
the outcome of a simulation trial. Limitations such as these are important for scientists to
identify, because they help them interpret their results and recognize their limitations.
7.4 Findings
To test whether the computational biology units played a role in developing compe-
tencies for modeling and simulation practices, pre- and post-test scores for the two
items were compared using a Wilcoxon signed-rank test and competence frequencies
were compared using McNemar’s tests. We report the results of our analysis below.
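For readers who want to reproduce this kind of analysis, a minimal SciPy sketch of the paired Wilcoxon test follows, with invented 0-3 rubric scores standing in for the study's data:

```python
# Paired Wilcoxon signed-rank test on hypothetical pre/post rubric scores
# (0-3, as in the rubric); the study itself had n = 133 students.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
pre = rng.integers(0, 4, size=133)                         # invented pre-test scores
post = np.clip(pre + rng.integers(-1, 2, size=133), 0, 3)  # invented post-test scores

# correction=True applies the continuity correction noted in the text;
# tied (zero-difference) pairs are dropped under the default zero_method.
res = stats.wilcoxon(pre, post, correction=True)
print(res.statistic, res.pvalue)
```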
The class average for the pre-test item measuring students’ ability to explore a model
by changing parameters in the interface or code was a score of 1.24. The average
for the post-test item was a score of 1.46. The p-value obtained using a paired
Wilcoxon signed-rank test (with continuity correction) was 0.01389 (V = 1175.5).
The difference in student scores is therefore statistically significant at the 5%
level, which supports the claim that engagement in our curriculum helped students
improve with regard to this learning objective.
Fig. 7.4 Frequencies of competencies demonstrated in students' responses to the pre- and post-test
items assessing their mastery of learning objective 1 (pre-test/post-test frequencies: Comparing
Across Trials, 29/45; Attending to Explanatory Factors, 6/51; Attending to Parameter-System
Relationships, 118/84)
To gain a more nuanced understanding of how
students developed their abilities to explore a model, we compared the frequencies
of competencies they demonstrated in pre- and post-test responses. The bar chart
(Fig. 7.4) illustrates the number of students comparing across trials, attending to
explanatory factors, and attending to parameter-system relationships, on both the
pre- and post-test.
Notably, the frequencies increased from pre- to post-test for comparing across
trials and attending to explanatory factors. Frequencies decreased for attending to
parameter-system relationships. Below, we present results of statistical analyses
that show whether these changes in frequency may have been the result of students’
participation in our computational biology units.
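The McNemar's tests mentioned above act on paired binary outcomes (competency demonstrated or not, pre vs. post). SciPy has no built-in McNemar function, but the exact version reduces to a binomial test on the discordant counts; a sketch with invented counts:

```python
# Exact McNemar's test for a pre-to-post change in one binary competency flag.
from scipy import stats

b = 5    # invented: students demonstrating the competency on the pre-test only
c = 20   # invented: students demonstrating it on the post-test only
# Under the null of no change, each discordant pair is pre-only or post-only
# with probability 0.5, so the test is binomial on the discordant pairs.
result = stats.binomtest(min(b, c), n=b + c, p=0.5, alternative="two-sided")
print(result.pvalue)  # a small p-value suggests the pre-to-post shift is real
```

Concordant pairs (same outcome on both tests) carry no information about change and are excluded by construction.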
The class average for the pre-test item measuring students' ability to identify
simplifications made by a model was a score of 1.39. Their average post-test score
was 1.63. The p-value obtained using the Wilcoxon signed-rank test was 0.02
(V = 647.5).
Fig. 7.5 Frequencies of competencies demonstrated in students' responses to the pre- and post-test
items assessing their mastery of learning objective 2 (pre-test/post-test frequencies: General or
Representational Issues, 76/55; Controllability or Completeness, 31/60; Procedural Limitations,
11/14)
7.5 Discussion
We have presented findings from a quantitative analysis of 133 9th grade students’
written responses to assessments given before and after their participation in three
computational biology units. Our results suggest that our curriculum helped students
develop a number of important competencies for exploring a model by changing
parameters in the interface or code, such as comparing simulation results across mul-
tiple trials and moving beyond merely describing relationships between a parameter
and system behavior, to attending to explanatory factors in the model. Our results also
suggest that students developed important competencies for identifying simplifica-
tions made by a model, such as shifting attention from general and representational
limitations with the model to deeper limitations such as model completeness and
controllability. While our results are encouraging, we cannot rule out the possibility
that limitations of our experimental design (such as asymmetries between pre- and
post-test items discussed earlier) may have influenced our findings.
Our work is concerned with characterizing students’ engagement in computational
thinking practices in their secondary science classrooms. It is therefore in conversa-
tion with scholarship on the nature of computational thinking and the nature of com-
putational thinking in STEM. Previously, we created a taxonomy of computational
thinking practices used by experts in computational STEM disciplines. The findings
presented here provide insight into how students can develop expertise with respect to
modeling and simulation practices by characterizing, at a fine grain-size, the compe-
tencies students draw upon when exploring a model by changing its parameters in the
interface or code and identifying simplifications made by a model. Our research pro-
gram continues to uncover the space of competencies relevant to CT-STEM practices
representing all strands of our taxonomy and investigate how these competencies can
be developed through engagement with our computationally-enriched science cur-
riculum. In future work, we aim to connect our quantitative treatment with qualitative
analysis of student utterances, NetLogo log files, and work.
While the units investigated by this study featured NetLogo, other CT-STEM units
(which have been created as part of a larger curricular design effort) feature modeling
environments such as Molecular Workbench (Concord Consortium, 2010) and PhET
(Perkins et al., 2006). Other units introduce students to computational tools for data
analysis and problem solving, such as CoDAP (Finzer, 2016). Exposing students to
a diverse range of computational tools is meant to help them develop a flexible set
of CT-STEM practices.
In addition to understanding how our curriculum can support students’ devel-
opment of CT-STEM practices, our research aims to understand how engagement
in these practices can support students’ science content learning. Research already
points to the productivity of computational tools for science learning (Guzdial, 1994;
National Research Council, 2011; Redish & Wilson, 1993; Repenning, Webb, &
Ioannidou, 2010; Sengupta, Kinnebrew, Basu, Biswas, & Clark, 2013; Sherin, 2001;
Taub, Armoni, Bagno, & Ben-Ari, 2015; Wilensky & Reisman, 2006). As described
by restructuration theory, the representational form of knowledge influences how it
can be understood. The advance of computational tools has afforded representations
that have had profound influence on the way scientists understand phenomena. We
argue that these same tools can also be employed in science learning to make complex
content more accessible to students, while at the same time broadening engagement
with computational thinking.
References
Blikstein, P., & Wilensky, U. (2009). An atom is known by the company it keeps: A constructionist
learning environment for materials science using agent-based modeling. International Journal of
Computers for Mathematical Learning, 14(2), 81–119.
Brennan, K., & Resnick, M. (2012, April). New frameworks for studying and assessing the devel-
opment of computational thinking. In Proceedings of the 2012 Annual Meeting of the American
Educational Research Association, Vancouver, Canada (pp. 1–25).
Collins, A., Joseph, D., & Bielaczyc, K. (2004). Design research: Theoretical and methodological
issues. Journal of the Learning Sciences, 13(1), 15–42.
Concord Consortium. (2010). Molecular workbench. Java simulations and modeling tools
(2004–2013).
diSessa, A. A. (2001). Changing minds: Computers, learning, and literacy. MIT Press.
Finzer, W. (2016). Common online data analysis platform (CODAP). Emeryville, CA: The Concord
Consortium. [Online: concord.org/codap].
Goody, J. (1977). The domestication of the savage mind. New York: Cambridge University Press.
Guzdial, M. (1994). Software-realized scaffolding to facilitate programming for science learning.
Interactive Learning Environments, 4(1), 1–44.
Horn, M. S., & Wilensky, U. (2012). NetTango: A mash-up of NetLogo and Tern. In AERA 2012.
Horn, M. S., Brady, C., Hjorth, A., Wagh, A., & Wilensky, U. (2014, June). Frog pond: A code-first
learning environment on evolution and natural selection. In Proceedings of the 2014 Conference
on Interaction Design and Children (pp. 357–360). ACM.
Kaczmarczyk, L., & Dopplick, R. (2014). Rebooting the pathway to success: Preparing students
for computing workforce needs in the United States. Education Policy Committee, Association
for Computing Machinery.
Levy, F., & Murnane, R. (2004). The new division of labor: How computers are creating the new
job market. Princeton, NJ: Princeton University Press.
Levy, S. T., & Wilensky, U. (2009). Students’ learning with the connected chemistry (CC1) cur-
riculum: Navigating the complexities of the particulate world. Journal of Science Education and
Technology, 18(3), 243–254.
Margolis, J. (2008). Stuck in the shallow end: Education, race, and computing. Cambridge: The
MIT Press.
Margolis, J., & Fisher, A. (2003). Unlocking the clubhouse: Women in computing. Cambridge: The
MIT Press.
National Research Council. (2010). Report of a workshop on the scope and nature of computational
thinking. Washington, DC: The National Academies Press.
National Research Council. (2011). Learning science through computer games and simulations.
Washington, DC: The National Academies Press.
Novak, M., & Wilensky, U. (2011). NetLogo fish tank genetic drift model. Northwestern University,
Evanston, IL: Center for Connected Learning and Computer-Based Modeling.
Olson, D. R. (1994). The world on paper. New York: Cambridge University Press.
116 H. Swanson et al.
Papert, S. (1980). Mindstorms: Children, computers, and powerful ideas. New York, NY: Basic
Books Inc.
Perkins, K., Adams, W., Dubson, M., Finkelstein, N., Reid, S., Wieman, C., et al. (2006). PhET:
Interactive simulations for teaching and learning physics. The Physics Teacher, 44(1), 18–23.
Quinn, H., Schweingruber, H., & Keller, T. (Eds.). (2012). A framework for K-12 science education:
Practices, crosscutting concepts, and core ideas. National Academies Press.
Redish, E. F., & Wilson, J. M. (1993). Student programming in the introductory physics course:
MUPPET. American Journal of Physics, 61, 222–232.
Repenning, A., Webb, D., & Ioannidou, A. (2010). Scalable game design and the development of a
checklist for getting computational thinking into public schools. In Proceedings of the 41st ACM
Technical Symposium on Computer Science Education (pp. 265–269).
Sengupta, P., & Wilensky, U. (2009). Learning electricity with NIELS: Thinking with electrons and
thinking in levels. International Journal of Computers for Mathematical Learning, 14(1), 21–50.
Sengupta, P., Kinnebrew, J. S., Basu, S., Biswas, G., & Clark, D. (2013). Integrating computational
thinking with K-12 science education using agent-based computation: A theoretical framework.
Education and Information Technologies, 18(2), 351–380.
Sherin, B. L. (2001). A comparison of programming languages and algebraic notation as expressive
languages for physics. International Journal of Computers for Mathematical Learning, 6(1),
1–61.
Taub, R., Armoni, M., Bagno, E., & Ben-Ari, M. (2015). The effect of computer science on physics
learning in a computational science environment. Computers & Education, 87, 10–23.
Tinker, R., & Wilensky, U. (2007). NetLogo Climate Change model. Northwestern University,
Evanston, IL: Center for Connected Learning and Computer-Based Modeling.
Weintrop, D., Beheshti, E., Horn, M., Orton, K., Jona, K., Trouille, L., et al. (2016). Defining
computational thinking for mathematics and science classrooms. Journal of Science Education
and Technology, 25(1), 127–147.
Weintrop, D., Beheshti, E., Horn, M. S., Orton, K., Trouille, L., Jona, K., & Wilensky, U. (2014).
Interactive assessment tools for computational thinking in high school STEM classrooms. In D.
Reidsma, I. Choi, & R. Bargar (Eds.), Proceedings of Intelligent Technologies for Interactive
Entertainment: 6th International Conference, INTETAIN 2014, Chicago, IL, USA (pp. 22–25).
Springer International Publishing.
Wilensky, U. (1997a). NetLogo GasLab gas in a box model. Northwestern University, Evanston,
IL: Center for Connected Learning and Computer-Based Modeling. http://ccl.northwestern.edu/
netlogo/models/GasLabGasinaBox.
Wilensky, U. (1997b). NetLogo wolf sheep predation model. Northwestern University, Evanston,
IL: Center for Connected Learning and Computer-Based Modeling. http://ccl.northwestern.edu/
netlogo/models/WolfSheepPredation.
Wilensky, U. (1997c). NetLogo AIDS model. Northwestern University, Evanston, IL: Center for Con-
nected Learning and Computer-Based Modeling. http://ccl.northwestern.edu/netlogo/models/
AIDS.
Wilensky, U. (1999). NetLogo. Northwestern University, Evanston, IL: Center for Connected Learn-
ing and Computer-Based Modeling. http://ccl.northwestern.edu/netlogo/.
Wilensky, U. (2001). Modeling nature’s emergent patterns with multi-agent languages. In Proceed-
ings of EuroLogo (pp. 1–6).
Wilensky, U. (2003). Statistical mechanics for secondary school: The GasLab multi-agent modeling
toolkit. International Journal of Computers for Mathematical Learning, 8(1), 1–41.
Wilensky, U., Brady, C. E., & Horn, M. S. (2014). Fostering computational literacy in science
classrooms. Communications of the ACM, 57(8), 24–28.
Wilensky, U., Novak, M., & Levy S. T. (2005). NetLogo connected chemistry 6 volume and pressure
model. Northwestern University, Evanston, IL: Center for Connected Learning and Computer-
Based Modeling.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0
International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing,
adaptation, distribution and reproduction in any medium or format, as long as you give appropriate
credit to the original author(s) and the source, provide a link to the Creative Commons license and
indicate if changes were made.
The images or other third party material in this chapter are included in the chapter’s Creative
Commons license, unless indicated otherwise in a credit line to the material. If material is not
included in the chapter’s Creative Commons license and your intended use is not permitted by
statutory regulation or exceeds the permitted use, you will need to obtain permission directly from
the copyright holder.
Chapter 8
Components and Methods of Evaluating
Computational Thinking for Fostering
Creative Problem-Solvers in Senior
Primary School Education
Siu-Cheung Kong
8.1 Introduction
To keep pace with the Fourth Industrial Revolution, which has integrated physical,
digital and biological spheres in all aspects of life (World Economic Forum, 2016),
it is necessary to nurture the next generation to become creative problem-solvers in
the digital era. Computational Thinking (CT) is widely accepted as a fundamental
practice for equipping young people to formulate and solve problems in the digital era.
8.2 Background
CT is not a new idea; Papert (1980) advocated that children can learn to think expres-
sively about thinking and can foster procedural thinking through programming. He
argued that ‘computational technology and computational ideas can provide children
with new possibilities for learning, thinking, and growing emotionally as well as cog-
nitively’ (Papert, 1980, pp. 17–18). The idea of CT has gained global awareness and
discussion after Wing’s introduction in 2006. She suggested that everyone—not only
computer scientists—should learn and use CT (Wing, 2006). According to Wing, CT
to others and the technological world that they develop by expressing, connecting
and questioning during programming. According to the National Research Council
(2012), in the twenty-first century, it is important to assess a person’s competence in
the cognitive, intrapersonal and interpersonal domains. CT concepts and practices
represent the cognitive domain, such as learners’ programming knowledge and prac-
tices, while CT perspectives represent the intrapersonal and interpersonal domains,
such as learners’ expression using programming tools and connection to the dig-
italised world. Lye and Koh (2014) pointed out that most studies focused on CT
concepts, and fewer on practices and perspectives. This framework helps to provide
a comprehensive picture for evaluating CT development in schools.
This framework is also appropriate for young learners beginning to learn CT. In
Piaget's model of the four stages of development, learners aged 7–11 are developing
concrete operational thinking (Piaget, 1972). Thus, senior primary school learners
aged 9–11 are at a suitable starting point to learn basic programming
knowledge. As the context of this framework is in visual programming environments
such as Scratch and App Inventor, it enables learners to ‘learn programming concepts
and practice CT abilities while avoiding the syntax hurdle associated with text-based
programming languages’ (Araujo et al., 2016, p. 6). Within this context, Brennan and
Resnick’s framework facilitates the design of evaluation components and methods
to capture the progress and achievements of learners studying CT in senior primary
education.
8.3 Methodology
Grover et al. (2015, p. 199) wrote that ‘a system of assessments is beneficial to get
a comprehensive picture of learners’ CT learning.’ In other words, there is no single
evaluation component or method that can entirely measure learners’ CT achieve-
ments. Researchers, therefore, strive to develop both quantitative and qualitative
approaches to evaluate learners’ learning outcomes. This section sheds light on the
essential evaluation components and assessment methods of each of the three dimen-
sions proposed by Brennan and Resnick (2012): CT concepts, CT practices and CT
perspectives.
8.4.1 CT Concepts
by studies used to evaluate CT concepts. The tabulated results are sorted accord-
ing to the frequencies with which the methods were used. In terms of quantitative
approaches, most of the studies measured learners’ understanding of CT concepts
using test items, most of which were designed with multiple choice questions in the
programming context (Ericson & McKlin, 2012; Ruf, Mühling, & Hubwieser, 2014;
Zur-Bargury et al., 2013). Task and project rubrics were also commonly used so that
teachers could assess learners’ project work with scoring guidelines.
Among the studies using qualitative approaches, interviews were the most com-
mon method of evaluating CT concepts. Interviews are conducted to understand the
CT concepts that learners use to complete programming tasks or projects. Project
analysis was another method used to evaluate learners’ development in CT concepts.
Evaluators used the Scrape tool to analyse the programming blocks of projects.
Scrape can provide a record of the CT concepts that the learner utilised in his/her
projects (Brennan & Resnick, 2012; Burke, 2012). Studies also observed program-
ming lessons to investigate whether learners could correctly handle the CT concepts
in programming activities. One study asked learners to write reflective reports on what
kinds of CT concepts they used to complete their programming tasks and projects
(Zhong et al., 2016).
Duncan and Bell (2015) analysed the computing curricula for primary school in Eng-
land and recommended that key stage 2 learners in the 7–11 year age group acquire
concepts such as loops, conditionals, sequences and variables. One study found that
loops, conditionals, variables and operators were the fundamental programming con-
cepts used by children in the Scratch programming environment (Maloney, Peppler,
Kafai, Resnick, & Rusk, 2008). Wilson et al. (2012) found that over half of chil-
dren’s game-making projects used concepts such as loops, conditionals, sequences
and parallelism. Because event handling is commonly used in block-based program-
ming environments such as Scratch and App Inventor, the concept of event handling
should be included (Brennan & Resnick, 2012). Although some may believe that pro-
cedure and initialisation are difficult for senior primary learners, they are essential
concepts in programming. Procedures avoid the repetition of code and duplication
of commands (Marji, 2014), and initialisation helps a programme reach its
initial state. Therefore, if teachers introduce procedure and initialisation in class,
they can consider assessing learners’ abilities regarding these two concepts. In line
with the above discussion and the results tabulated in Table 8.2, the following CT
concepts should be evaluated in the K-12 CT curriculum: (1) loops, (2) condition-
als, (3) sequences, (4) parallelism, (5) data structures such as variables and lists, (6)
mathematical operators, functions and Boolean operators, (7) event handling, (8)
procedures and (9) initialisation. Table 8.4 summarises the proposed CT concepts at
the primary school level.
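Although the chapter's context is block-based environments such as Scratch and App Inventor, the proposed concepts can also be illustrated in text form. The sketch below is a hypothetical example (the function and names are invented, not drawn from the chapter) touching loops, conditionals, sequences, variables and lists, operators, a procedure and initialisation:

```python
# Illustrative sketch of several proposed CT concepts in text form.
# All names here are invented for illustration.

def count_even(numbers):          # procedure: reusable, avoids duplicated commands
    total = 0                     # initialisation: start from a known state
    for n in numbers:             # loop: repeat an action for each element
        if n % 2 == 0:            # conditional with a mathematical operator
            total = total + 1     # variable update, one step in a sequence
    return total

scores = [3, 8, 5, 10, 2]         # data structure: a list
print(count_even(scores))         # -> 3
```

A parallel or event-driven version of the same idea would, in Scratch or App Inventor, be expressed with concurrent scripts and event-handler blocks rather than a single sequential function.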
Because CT concepts are made up of different kinds of components, they can be
evaluated by an aggregation of learners’ achievement across components. Therefore,
CT concepts can be evaluated by designing quantitative test instruments with items
that each evaluate a particular component. Evaluators can use these test
instruments to measure the learning outcomes of learners and to administer pre- and
post-test designs to measure learners’ progression. Evaluators can also assess whether
learners apply CT concepts appropriately by looking into their tasks/projects with the
rubrics. Quantitative data can thus be obtained by accumulating the scores of learners
generated from the rubrics. It is proposed that test designs with multiple choice
questions be used to measure learners’ progression for large numbers of learners.
Evaluators can show programming segments in the test items and ask questions
related to the segments, such as finding the output from the programming segment (see Fig. 8.1).
Fig. 8.1 An example of testing the CT concept of conditionals in the form of a multiple choice
question
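An item of this kind could, in a text-based language, present a short segment and ask learners to predict its output; the segment and options below are an invented illustration, not the figure's actual item:

```python
# Hypothetical test item: "What does this segment print?"
x = 7
if x > 10:
    print("big")
elif x > 5:
    print("medium")
else:
    print("small")
# Options: (a) big  (b) medium  (c) small
# Tracing the conditionals, 7 is not greater than 10 but is greater
# than 5, so the segment prints "medium" and (b) is correct.
```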
8.4.2 CT Practices
Algorithmic thinking is widely regarded as the foundation of CT practices. Learners are required to define the
steps and develop instructions to solve a problem. Seven studies also suggested that
testing and debugging are the core of CT practices because it is ‘rare for a program to
work correctly the first time it is tried, and often several attempts must be made before
all errors are eliminated’ (Peterson, 2002, p. 93). Four past studies suggested that
problem decomposition and the practices of being incremental and iterative are indis-
pensable in the programming process. Learners should learn to repeatedly ‘develop a
little bit, then try it out, then develop more’ until the programme is complete (Brennan
& Resnick, 2012, p. 7). Problem decomposition involves breaking down problems
into smaller, more manageable tasks (Waller, 2016). Several researchers found that
planning and designing were part of CT practices. They were used to investigate
whether learners plan their solutions before they write the code or use trial and error
during programming. Additionally, researchers noted that the reuse and remix of the
works of other programmers are crucial in the online communities of Scratch and
Alice (Brennan & Resnick, 2012), and they encouraged novices to produce more
complicated creations by building on existing projects or ideas (Brennan & Resnick,
2012).
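As a sketch of problem decomposition, a small task (invented for illustration) can be broken into sub-tasks, each handled by its own procedure and then recombined:

```python
# Sketch: decomposing a small problem into manageable sub-tasks.
# The task and all helper names are invented for illustration.

def read_scores(raw):                  # sub-task 1: parse the input
    return [int(s) for s in raw.split(",")]

def top_score(scores):                 # sub-task 2: find the best result
    return max(scores)

def format_line(name, score):          # sub-task 3: present one result
    return f"{name}: {score}"

def scoreboard(name, raw):             # the sub-tasks recombined
    scores = read_scores(raw)
    return format_line(name, top_score(scores))

print(scoreboard("Ada", "12,9,17,5"))  # -> Ada: 17
```

Each sub-task can be developed and tried out on its own, which is what makes the decomposition useful for the incremental, test-as-you-go style described above.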
For evaluation methods, because of the difficulties in finding out how learners
tackle problems during programming, evaluators use both quantitative and qualita-
tive methods to measure learners’ CT practices. Table 8.6 summarises the methods
adopted by the studies to evaluate CT practices. The tabulated results are sorted by
the frequency with which the methods were used. Seven studies used task/project
rubrics to evaluate learners’ CT practices, as teachers marked the programming out-
comes of tasks or projects based on rubrics to evaluate the CT practices of learners.
Four projects used tests with task-based questions in coding and non-coding contexts
to assess learners' proficiency in CT practices. The proposed qualitative methods are
favourable for demonstrating learners' learning processes. Four studies proposed the use
of interviews to understand learners’ CT practices, as interviews enable teachers to
understand learners’ thoughts behind the code (Grover & Pea, 2013). Two studies
proposed the use of observations: researchers observe in classrooms, take notes and
videorecord the entire programming process to understand learners’ CT practices
(Grover & Pea, 2013). One study used learners’ reflection reports to understand their
programming practices. Reflection reports are self-assessments that are distributed
to learners after they finish a task or a project (Zhong et al., 2016), in which they
write out how they accomplished their problem-solving tasks.
Programming is not a simple process, and it might not have a clear order. However,
the components of CT practices can be divided into two categories: design practices
and programming practices. In the design practices category, learners design pro-
grammes. This category includes three practices: problem decomposition, abstracting
and modularising, and algorithmic thinking. Because people often perceive planning
as a ‘highly specialized skill for solving problems’ that requires scarce resources
(Lawler, 1997), planning and designing are not recommended for inclusion in the
programming process. Learners are supposed to decompose problems into sub-tasks
first (Waller, 2016). The practice of modularising is recommended to be merged with
abstracting. Learners will be able to connect the parts of the whole so that they can
test and debug different parts of the programme incrementally (Brennan & Resnick,
2012). The Center for Computational Thinking, Carnegie Mellon (2010) asserted
that algorithmic thinking is essential, as it helps learners to produce efficient, fair
and secure solutions. In sum, in line with the results in Table 8.5, problem decom-
position, abstracting and modularising, and algorithmic thinking are essential CT
practices for evaluation at this stage.
In the programming practices category, learners implement their designs to pro-
duce a concrete programme artefact. This stage includes three practices: reusing and
remixing, being iterative and incremental, and testing and debugging. The first prac-
tice helps novices create their own programmes by building on others’ works and
developing their own works incrementally (Brennan & Resnick, 2012). The practice
of being iterative and incremental is tied to testing and debugging: programmers have
to develop part of the programme and test it to ensure that it works; they then repeat
these steps and continue to develop the programme. In sum, in line with the results
in Table 8.5, reusing and remixing, being iterative and incremental, and testing and
debugging are essential CT practices for evaluation at this stage.
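The 'develop a little bit, then try it out, then develop more' cycle can be sketched as small increments, each tested before the next is added (an invented example, not taken from the chapter):

```python
# Sketch of being iterative and incremental, tied to testing and
# debugging: build a little, test it, then extend. Invented example.

def average(values):                   # increment 1: the core calculation
    return sum(values) / len(values)

assert average([2, 4, 6]) == 4         # test increment 1 before moving on

def average_or_zero(values):           # increment 2: handle the empty case
    return average(values) if values else 0

assert average_or_zero([]) == 0        # test increment 2
assert average_or_zero([2, 4, 6]) == 4 # and re-test the earlier behaviour
```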
This study proposes including problem formulating in the CT practices component.
Although Cuny et al. (2010, p. 1) defined CT as the
‘thought processes involved in formulating problems and their solutions’, none of
the reviewed studies proposed assessing learners’ abilities in problem formulation. In
the early twentieth century, Einstein argued that raising questions was more impor-
tant than solving problems. In his classic study, The Evolution of Physics, he wrote,
‘the formulation of a problem is often more essential than its solution, which may
be merely a matter of mathematical or experimental skill. To raise new questions,
new possibilities, to regard old problems from a new angle require creative imagina-
tion and marks real advances in science’ (Einstein & Infeld, 1938, p. 92). Because
learners’ creativity can be demonstrated by formulating creative problems, there is
a need to add this component to CT practices to produce creative problem-solvers
in the digitalised world. The aims of CT practices are thus to develop both creative
thinkers and problem-solvers (Brugman, 1991). Therefore, this study proposes to
include ‘problem formulating’ as one of the components of CT practices. Table 8.7
summarises the proposed evaluation components of the CT practices of this study:
(1) problem formulating, (2) problem decomposition, (3) abstracting and modular-
ising, (4) algorithmic thinking, (5) reusing and remixing, (6) being iterative and
incremental and (7) testing and debugging.
Like CT concepts, CT practices involve several components; these can be mea-
sured by an aggregation of learners’ achievement across components. One quanti-
tative method to evaluate CT practices is designing rubrics to assess the final pro-
gramming projects of learners. The criteria of the rubrics are the components of CT
practices, and there is a need to design descriptors of performance levels for each
CT practice. These rubrics help evaluators to review learners’ abilities to formulate
and solve problems. In addition to using rubrics to assess the CT practices of learn-
ers, SRI International proposed a task-based approach to assess CT practices. SRI
International also published a report titled ‘Assessment Design Patterns for Com-
putational Thinking Practices in Secondary Computer Science’ (Bienkowski, Snow,
Rutstein, & Grover, 2015), which presented an overview of four CT practice design
patterns and two supporting design patterns. The four CT practice design patterns are
analysing the effects of developments in computing, designing and implementing
creative solutions and artefacts, designing and applying abstractions and models,
and analysing learners' computational work and the work
of others. SRI International developed focal knowledge, skills and other attributes for
each CT practice, which illustrate the core abilities that learners need to acquire and
what should be assessed. Its report provided direction for evaluating CT practices
using the task-based approach. In this approach, evaluators’ first step is to define
the fundamental abilities of each CT practice. The next step is to design tasks that
include a scenario and to ask questions about the scenarios to test learners’ abili-
ties related to CT practice. Learners need to understand the scenarios and to answer
questions about selecting options, organising sequences of activities or performing
categorisations to demonstrate their abilities. For example, one of the focal skills of
the CT practice of testing and debugging is the ability to explain the cause
of an error. Figure 8.2 demonstrates an example of testing learners' understanding
of testing and debugging using a task with a scenario.
1. Scenario
An app developer developed a mobile location tracking app that can show the location of a
place on a map using latitude and longitude. However, a user reported that the app does not
show the location correctly on the map. Please identify the reason(s) why the error occurred.
(a) The mobile app developer was unable to find the correct latitude and longitude of
the location
(b) The mobile app developer was unable to mark the location correctly on the map
(c) The mobile app developer committed both mistakes (a) and (b)
Fig. 8.2 An example of testing the CT practice of testing and debugging using a task with a scenario
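To make the scenario concrete, option (b) could correspond to a marker routine that swaps latitude and longitude when plotting; the functions below are an invented illustration, not part of the chapter's materials:

```python
# Invented illustration of the Fig. 8.2 scenario: the coordinates are
# correct, but the marker routine mixes up latitude and longitude.

def mark_location_buggy(latitude, longitude):
    # Bug: the values are placed in the wrong fields, so the marker
    # appears at the wrong place on the map (option (b)).
    return {"lat": longitude, "lon": latitude}

def mark_location_fixed(latitude, longitude):
    return {"lat": latitude, "lon": longitude}

hong_kong = (22.3, 114.2)   # approximate latitude and longitude
assert mark_location_buggy(*hong_kong) != mark_location_fixed(*hong_kong)
assert mark_location_fixed(*hong_kong) == {"lat": 22.3, "lon": 114.2}
```

Explaining why the buggy version misplaces the marker is exactly the 'explain the cause of an error' focal skill the task targets.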
With this approach, tests with sets of task-based questions can be designed to
evaluate learners’ abilities with the various components of CT practices. Pre- and
post-test designs with task-based questions can enable evaluators to identify learners’
progression in CT practices.
Measuring learners' CT practices by directly analysing their processes of designing
and building programmes requires great effort. Observing learners as they design
and build programmes is one qualitative method.
Another is to interview them after they formulate, design and build their programmes
and to ask them semi-structured questions. It is difficult to ask primary school learn-
ers to write detailed reflection reports on how they formulate, design and build their
programmes; however, evaluators can ask learners to write simple reflection reports
on how they can improve their programmes after solving their computational tasks.
These qualitative methods can serve as supplements for evaluators seeking to under-
stand what practices learners used in their programming tasks. Table 8.7 summarises
the proposed evaluation components of CT practices at the primary school level.
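Aggregating rubric levels across the proposed practice components can be sketched as follows; the component names follow Table 8.7, while the scoring scheme itself is an invented assumption:

```python
# Sketch: aggregating rubric scores across CT practice components.
# Component names follow Table 8.7; the levels/weights are invented.

RUBRIC_COMPONENTS = [
    "problem formulating", "problem decomposition",
    "abstracting and modularising", "algorithmic thinking",
    "reusing and remixing", "being iterative and incremental",
    "testing and debugging",
]

def total_score(component_scores):
    """Sum per-component rubric levels into one project score."""
    missing = set(RUBRIC_COMPONENTS) - set(component_scores)
    if missing:
        raise ValueError(f"unscored components: {missing}")
    return sum(component_scores[c] for c in RUBRIC_COMPONENTS)

scores = {c: 3 for c in RUBRIC_COMPONENTS}   # e.g. level 3 on each
print(total_score(scores))                   # -> 21
```

In practice each component would carry its own performance-level descriptors, and pre-/post-project totals could be compared to track progression.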
8.4.3 CT Perspectives
The literature review indicated not only that CT concepts and practices are of
paramount importance in measuring learners' CT development, but also that learners'
perspectives on learning programming are vital. Brennan and Resnick (2012) pro-
posed three kinds of perspectives in the programming process—expressing, con-
necting and questioning—to investigate learners’ understanding of themselves and
their relationships to others and the technological world. Based on the literature
review, CT perspectives include not only Brennan and Resnick’s (2012) suggestions
(i.e. expressing, connecting and questioning) but also learners’ motivation beliefs,
namely, value and expectancy (Wigfield & Eccles, 2000; Chiu & Zeng, 2008), in their
understanding of themselves. Value refers to learners’ intrinsic motivations, such as
their attitudes towards and interest in learning programming. Expectancy refers to
learners’ programming confidence, which includes their programming self-efficacy
and self-concept. Programming self-efficacy is ‘people’s judgments of their capabil-
ities to organize and execute courses of action required to attain designated types of
performance’ in the context of programming (Bandura, 1986, p. 391; Kong, 2017),
while programming self-concept is the belief in one’s programming competence
(Chiu & Zeng, 2008). Previous studies indicated that researchers investigated both
learners’ motivation beliefs regarding learning programming and their perspectives
on the technological world.
The 10 related studies in the literature identify three components of CT perspec-
tives: (1) attitudes such as interest in programming and computing, (2) confidence
in programming and computing, programming self-efficacy and programming com-
petence and (3) expressing, connecting and questioning. Table 8.8 summarises the
components of CT perspectives proposed to be evaluated by previous studies. The
tabulated results are sorted according to the frequency with which they were dis-
cussed.
Most researchers agree that learners’ attitudes, such as their interest in program-
ming, should be included in this evaluation dimension. Researchers have focused on
learners’ programming confidence, self-efficacy and competence, and most have used
quantitative instruments to measure learners’ CT perspectives, although some have
used qualitative data to analyse their attitudes. Table 8.9 summarises the methods
adopted by studies to evaluate CT perspectives. Surveys are commonly used, with
five-point Likert scales used most frequently (e.g. Ericson & McKlin, 2012; Ruf
et al., 2014). Conducting surveys before and after a learning programme might
facilitate the investigation of learners’ CT perspectives. Qualitative methods such
(Response scale: Strongly Disagree / Disagree / Neutral / Agree / Strongly Agree)
Computational Identity
1. I feel associated with my classmates when participating in computer programming activities with them.
2. Learning computer programming with my classmates gives me a strong sense of belonging.
Programming Empowerment
1. I have confidence in my ability to use digital technologies.
2. I can solve problems with digital technologies.
Fig. 8.3 Sample items of the computational identity, programming empowerment and perspectives
of expressing, connecting and questioning survey instruments
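Scoring such a five-point instrument typically codes responses from 1 to 5 and averages the items within each construct; the sketch below assumes that coding, which is not specified here:

```python
# Sketch: scoring a five-point Likert instrument per construct.
# The 1-5 coding and the example responses are assumptions.

SCALE = {"Strongly Disagree": 1, "Disagree": 2, "Neutral": 3,
         "Agree": 4, "Strongly Agree": 5}

def construct_mean(responses):
    """Average the coded responses for one construct's items."""
    coded = [SCALE[r] for r in responses]
    return sum(coded) / len(coded)

# e.g. one learner's answers to the two computational identity items
identity = ["Agree", "Strongly Agree"]
print(construct_mean(identity))        # -> 4.5
```

Computing such construct means before and after a learning programme is one way to quantify shifts in learners' CT perspectives.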
8.5 Conclusion
closely linked with questioning in CT perspectives, which means that learners are
empowered to formulate problems in the computational context. In addition, this
study discussed the ideas of computational identity and programming empowerment
and proposed them as components for evaluating the perspectives of expressing,
connecting and questioning proposed by Brennan and Resnick (2012) to capture
learners’ motivation beliefs in a more comprehensive manner.
Past studies indicated that no single method could effectively measure learners’
CT development in the three dimensions; multiple methods are needed to evaluate
learning outcomes. To assess a large number of learners’ CT development, quantita-
tive methods are more feasible in terms of resources. This study proposed the design
of tests with multiple choice questions to evaluate the CT concepts of learners. Eval-
uators could also assess learners’ CT concepts by analysing their programming tasks
and projects with rubrics. Qualitative methods such as interviews, project analyses,
observations and simple reflection reports can be conducted as supplements to quanti-
tative approaches. CT practices can be evaluated by measuring programming project
outcomes with rubrics and the design of tests with task-based questions. Qualitative
methods such as interviews, observations and simple reflection reports can be con-
ducted when an in-depth understanding of a small number of learners is needed. CT perspectives
can be measured by well-designed survey instruments, and qualitative methods such
as interviews can be conducted if in-depth understanding is needed.
The future work of evaluating learners’ CT development is to design instruments
for measuring primary school learners in these three dimensions. Regarding CT con-
cepts and practices, researchers should design programming project rubrics and test
instruments to capture learners’ learning outcomes in these areas. Regarding CT per-
spectives, the constructs of computational identity, programming empowerment and
perspectives of expressing, connecting and questioning should be further explored
and established. Researchers are recommended to develop and validate instruments
for measuring learners’ computational identity and programming empowerment, and
how they express, connect and question in the digitalised world. The proposed evalu-
ation components and methods will be implemented in senior primary schools when
these instruments are developed. With appropriate evaluation components and meth-
ods, it is believed that schools will be in a better position to motivate and nurture
young learners to become creative problem formulators and problem-solvers in the
digitalised world.
References
Aho, A. V. (2012). Computation and computational thinking. The Computer Journal, 55(7),
832–835.
Araujo, A. L., Andrade, W. L., & Guerrero, D. D. (2016, October 12–15). A systematic mapping
study on assessing computational thinking abilities. In Proceedings of the IEEE Frontiers in
Education Conference (pp. 1–9). Erie, PA.
Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Englewood
Cliffs, NJ: Prentice-Hall.
8 Components and Methods of Evaluating Computational Thinking … 139
Barr, V., & Stephenson, C. (2011). Bringing computational thinking to K-12: What is involved and
what is the role of the computer science education community? ACM Inroads, 2(1), 48–54.
Barr, V., Harrison, J., & Conery, L. (2011). Computational thinking: A digital age skill for everyone.
ISTE Learning & Leading, 38(6), 20–22.
Bienkowski, M., Snow, E., Rutstein, D. W., & Grover, S. (2015). Assessment design patterns for
computational thinking practices in secondary computer science: A first look (SRI technical
report). Menlo Park, CA: SRI International. Retrieved November 21, 2016, from http://pact.sri.
com/resources.html.
Brennan, K., & Resnick, M. (2012, April). New frameworks for studying and assessing the devel-
opment of computational thinking. In A. F. Ball & C. A. Tyson (Eds.), Proceedings of the 2012
Annual Meeting of the American Educational Research Association (25 pp.). Vancouver, Canada:
American Educational Research Association.
Brugman, G. M. (1991). Problem finding: Discovering and formulating problems. European Journal
of High Ability, 2(2), 212–227.
Burke, Q. (2012). The markings of a new pencil: Introducing programming-as-writing in the middle
school classroom. Journal of Media Literacy Education, 4(2), 121–135.
Center for Computational Thinking, Carnegie Mellon. (2010). Retrieved November 15, 2017, from
https://www.cs.cmu.edu/~CompThink/.
Chiu, M. M., & Zeng, X. (2008). Family and motivation effects on mathematics achievement:
Analyses of students in 41 countries. Learning and Instruction, 18(4), 321–336.
Computer Science Teachers Association. (2011). K-12 computer science standards. Retrieved
November 21, 2016, from http://csta.acm.org/Curriculum/sub/K12Standards.html.
Cuny, J., Snyder, L., & Wing, J. M. (2010). Demystifying computational thinking for non-computer
scientists. Unpublished manuscript in progress. Retrieved April 16, 2017, from http://www.cs.
cmu.edu/~CompThink/resources/TheLinkWing.pdf.
Deci, E. L., & Ryan, R. M. (1985). Intrinsic motivation and self-determination in human behavior.
New York, NY: Plenum.
Denner, J., Werner, L., Campe, S., & Ortiz, E. (2014). Pair programming: Under what conditions
is it advantageous for middle school learners? Journal of Research on Technology in Education,
46(3), 277–296.
Duncan, C., & Bell, T. (2015, November 9–11). A pilot computer science and programming course
for primary school students. In Proceedings of the 10th Workshop in Primary and Secondary
Computing Education (pp. 39–48). London, England.
Einstein, A., & Infeld, L. (1938). The evolution of physics: The growth of ideas from early concepts
to relativity and quanta. New York, NY: Simon and Schuster.
Ericson, B., & McKlin, T. (2012, February 29–March 3). Effective and sustainable computing
summer camps. In Proceedings of the 43rd ACM Technical Symposium on Computer Science
Education (pp. 289–294). Raleigh, NC.
Fessakis, G., Gouli, E., & Mavroudi, E. (2013). Problem solving by 5–6 years old kindergarten
children in a computer programming environment: A case study. Computers & Education, 63,
87–97.
Gee, J. P. (2000). Identity as an analytic lens for research in education. Review of Research in
Education, 25, 99–125.
Giordano, D., & Maiorana, F. (2014, April 3–5). Use of cutting edge educational tools for an
initial programming course. In Proceedings of IEEE Global Engineering Education Conference
(pp. 556–563). Istanbul, Turkey.
Gouws, L., Bradshaw, K., & Wentworth, P. (2013, October 7–9). First year student performance
in a test for computational thinking. In Proceedings of the South African Institute for Computer
Scientists and Information Technologists Conference (pp. 271–277). East London, South Africa.
Grover, S., & Pea, R. (2013). Computational thinking in K-12: A review of the state of the field.
Educational Researcher, 42(1), 38–43.
140 S.-C. Kong
Grover, S., Cooper, S., & Pea, R. (2014, June 21–25). Assessing computational learning in K-12.
In Proceedings of the Conference on Innovation & Technology in Computer Science Education
(pp. 57–62). Uppsala, Sweden.
Grover, S., Pea, R., & Cooper, S. (2015). Designing for deeper learning in a blended computer
science course for middle school learners. Computer Science Education, 25(2), 199–237.
Huitt, W. (2007). Maslow’s hierarchy of needs. Educational psychology interactive. Valdosta, GA:
Valdosta State University. Retrieved November 8, 2017, from http://www.edpsycinteractive.org/
topics/regsys/maslow.html.
Kong, S. C. (2017). Development and validation of a programming self-efficacy scale for senior
primary school learners. In S. C. Kong, J. Sheldon & K. Y. Li (Eds.), Proceedings of the Interna-
tional Conference on Computational Thinking Education 2017 (pp. 97–102). Hong Kong: The
Education University of Hong Kong.
Kukul, V., & Gökçearslan, Ş. (2017). Computer programming self-efficacy scale (CPSES) for sec-
ondary school students: Development, validation and reliability. Eğitim Teknolojisi Kuram ve
Uygulama, 7(1), 158–158.
Lawler, R. W. (1997). Learning with computers. Exeter: Intellect Books.
Lye, S. Y., & Koh, J. H. (2014). Review on teaching and learning of computational thinking through
programming: What is next for K-12? Computers in Human Behavior, 41, 51–61.
Maguire, P., Maguire, R., Hyland, P., & Marshall, P. (2014). Enhancing collaborative learning
using pair programming: Who benefits? All Ireland Journal of Teaching and Learning in Higher
Education, 6(2), 1411–1425.
Makinen, M. (2006). Digital empowerment as a process for enhancing citizens’ participation. E-
learning, 3(3), 381–395.
Maloney, J. H., Peppler, K., Kafai, Y., Resnick, M., & Rusk, N. (2008, March 12–15). Programming
by choice: Urban youth learning programming with scratch. In Proceedings of the 39th SIGCSE
Technical Symposium on Computer Science Education (pp. 367–371). Portland, OR.
Marji, M. (2014). Learn to program with Scratch: A visual introduction to programming with games,
art, science, and math. San Francisco, CA: No Starch Press.
Maslow, A. H. (1943). A theory of human motivation. Psychological Review, 50(4), 370.
Meerbaum-Salant, O., Armoni, M., & Ben-Ari, M. (2013). Learning computer science concepts
with scratch. Computer Science Education, 23(3), 239–264.
Mueller, J., Beckett, D., Hennessey, E., & Shodiev, H. (2017). Assessing computational thinking
across the curriculum. In P. J. Rich & C. B. Hodges (Eds.), Emerging research, practice, and policy
on computational thinking (pp. 251–267). Cham, Switzerland: Springer International Publishing.
National Research Council. (2011). Report of a workshop on the pedagogical aspects of computational
thinking. Retrieved November 21, 2016, from https://www.nap.edu/catalog/13170/report-of-a-
workshop-on-the-pedagogical-aspects-of-computational-thinking.
National Research Council. (2012). Education for life and work: Developing transferable knowledge
and skills in the 21st century. Committee on defining deeper learning and 21st century skills.
Retrieved December 6, 2016, from http://www.p21.org/storage/documents/Presentations/NRC_
Report_Executive_Summary.pdf.
Papert, S. (1980). Mindstorms: Children, computers, and powerful ideas. New York, NY: Basic
Books.
Peterson, I. (2002). Mathematical treks: From surreal numbers to magic circles. Washington, DC:
Mathematical Association of America.
Piaget, J. (1972). Intellectual evolution from adolescence to adulthood. Human Development, 15(1),
1–12.
Rodriguez, B., Kennicutt, S., Rader, C., & Camp, T. (2017, March 8–11). Assessing computa-
tional thinking in CS unplugged activities. In Proceedings of the 2017 ACM SIGCSE Technical
Symposium on Computer Science Education (pp. 501–506). Seattle, Washington.
Román-González, M., Pérez-González, J., & Jiménez-Fernández, C. (2016). Which cognitive abil-
ities underlie computational thinking? Criterion validity of the computational thinking test.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0
International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing,
adaptation, distribution and reproduction in any medium or format, as long as you give appropriate
credit to the original author(s) and the source, provide a link to the Creative Commons license and
indicate if changes were made.
The images or other third party material in this chapter are included in the chapter’s Creative
Commons license, unless indicated otherwise in a credit line to the material. If material is not
included in the chapter’s Creative Commons license and your intended use is not permitted by
statutory regulation or exceeds the permitted use, you will need to obtain permission directly from
the copyright holder.
Part III
Computational Thinking and
Programming Education in K-12
Chapter 9
Learning Composite and Prime Numbers
Through Developing an App:
An Example of Computational Thinking
Development Through Primary
Mathematics Learning
Siu-Cheung Kong
9.1 Introduction
9.2 Background
Wing’s (2006) article popularised the term ‘computational thinking’ and defined it
as the ‘thought processes involved in formulating problems and their solutions so
that the solutions are represented in a form that can be effectively carried out by
an information-processing agent’ (p. 1). Computational thinking relates to design,
problem-solving and the understanding of human behaviour and draws on the basic
concepts of computer science. Although it originates from computer science, CT is
not exclusive to that field, and it goes well beyond computing. It is ‘reasoning about
the problems in computational terms’ (Syslo & Kwiatkowska, 2014, p. 2), which
means that learners are able to formulate problems in ways that help to develop
computer solutions.
9.2.1 CT Framework
Brennan and Resnick (2012) developed a CT framework structured in three
dimensions: CT concepts that learners engage with in programming, CT practices
that learners develop when solving computational problems and CT perspectives
that learners gain when interacting with technologies and relate to their experience
with the digital world. Regarding CT concepts, learners are expected to learn the
basic ingredients of programming such as sequences, conditionals, loops, events,
operators and data handling. When tackling computational problems, learners expe-
rience CT practices such as reusing and remixing, being incremental and iterative,
abstracting and modularising, testing and debugging, and algorithmic thinking. In
the programming process, learners have opportunities to develop CT perspectives
such as expressing, questioning and connecting with the digital world. Expressing
refers to learners’ opportunities to use programming as a medium for self-expression.
Questioning is the ability to ask questions about and with technology. Connecting
is the process by which learners create with and for others in programming. It helps
learners to experience working with other people and can lead to more accomplish-
ments resulting from discussions and collaborations. Creating for others enables
learners to experience positive and negative feedback in authentic contexts when
their creations are appreciated (or not appreciated) by other people. Connecting is
the most important component of CT perspectives, as it enables learners to connect
CT practices with the digital world so that they feel empowered and have an identity
in the digital world. All three dimensions of CT are essential for implementing CT
in school education.
One of the traditional methods of learning composite and prime numbers is through
the sieve of Eratosthenes. While it is generally taught at the tertiary level, a simplified
version is introduced in primary school: ‘sifting out the composite numbers, leaving
only the primes’ (Maletsky et al., 2004). Ploger and Hecht (2009) suggested using
the software Chartworld to introduce the sieve of Eratosthenes to primary learners in
a visualised and interactive way. Chartworld colours all multiples of a number when
that number is selected. For instance, after selecting the number 3, all multiples
of 3 except 3 will be highlighted. After selecting the numbers from 2 to 10, all
of their multiples will be highlighted. The remaining un-highlighted numbers are
prime numbers, except 1. Although this method enables young learners to learn about
composite and prime numbers, it is limited to finding prime numbers in a given range.
In this regard, there is a need to explore an alternative way to teach primary school
learners so that they can explore larger composite and prime numbers. This study
proposes an innovative pedagogy to guide primary school learners to make an app
that can find the factors of an inputted number.
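The sieve procedure described above can be sketched in a few lines of Python. This sketch is only an illustrative aid (the chapter's own activities use Chartworld and App Inventor): multiples of each number are "highlighted" by marking them non-prime, exactly as the sieve of Eratosthenes sifts out composites.

```python
def sieve_of_eratosthenes(limit):
    """Return the primes up to `limit` by sifting out composite numbers."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False  # 0 and 1 are neither prime nor composite here
    for n in range(2, int(limit ** 0.5) + 1):
        if is_prime[n]:
            # "Highlight" every multiple of n except n itself, as Chartworld does
            for multiple in range(n * n, limit + 1, n):
                is_prime[multiple] = False
    return [n for n in range(2, limit + 1) if is_prime[n]]

print(sieve_of_eratosthenes(30))  # → [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Note that, like Chartworld, the sieve only finds primes within a preset range, which is the limitation motivating the factor-finding app.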
In traditional mathematics lessons, senior primary school learners are taught that
prime numbers are divisible only by 1 and themselves, while composite numbers
have factors other than 1 and themselves. Yet mere rote memorisation of these defi-
nitions is not encouraged, as learners will not truly understand the concepts of com-
posite and prime numbers. This study introduces an inquiry pedagogical approach to
guide learners to develop these mathematics concepts together with the development
of an app. Learners will be initially introduced to examples of and ideas underlying
composite and prime numbers. Learners will then develop an app that finds the fac-
tors of an inputted number and tests whether the inputted number is a composite
or a prime number. The programming process helps learners to think more deeply
about the meaning of composite and prime numbers.
This study introduces the concepts of composite and prime numbers using an inquiry
approach, which is a learning process whereby ‘students are involved in their learning,
formulate questions, investigate widely and then build new understandings, meanings
and knowledge’ (Alberta Learning, 2004, p. 1). In this activity, learners will be asked
what a composite number and a prime number are. Teachers will then explain the
concepts with examples. Numbers with more than two factors are composite numbers,
while numbers with only two factors—1 and themselves—are prime numbers. When
finding the factors of a number, learners are reminded of the commutative properties
of multiplication. For instance, 8 can be expressed as ‘1 × 8 = 8’, ‘2 × 4 = 8’, ‘4
× 2 = 8’ and ‘8 × 1 = 8’. Because the products are the same irrespective of the
order of the factors, learners know that ‘4 × 2 = 8’ and ‘8 × 1 = 8’ are equivalent
to ‘2 × 4 = 8’ and ‘1 × 8 = 8’, respectively. When finding all of the factors of 8,
learners only need to explore ‘1 × 8 = 8’ and ‘2 × 4 = 8’ and do not need to consider
‘4 × 2 = 8’ and ‘8 × 1 = 8’. The order of operation is not important in finding the
factors of a number, and this is the commutative property of multiplication. This study
encourages learners to generalise composite and prime numbers from examples. The
concepts of composite and prime numbers can also be better understood by asking
young learners to explore the factors of a number through visualisation activities,
such as exploring with the online activity ‘Find the factorisations of the number’
in the Illumination Project (National Council of Teachers of Mathematics [NCTM],
n.d.). By choosing a particular number, such as 8, learners are invited to represent
all of the factorisations of 8 by selecting the appropriate rectangular planes.
Learners are guided to conclude that composite numbers can be represented by
more than one rectangular plane, while prime numbers can only be represented by a
single rectangular plane. This indicates that composite numbers have more than one
pair of factorisations, while prime numbers have only a single pair of factors: 1 and
themselves. Figure 9.1 shows a way to find all of the factors of 8 by listing all of
its factorisations. Using this form of representation, young primary school learners
can be easily guided to categorise numbers as composite or prime by counting the
factors of a selected number. Figure 9.2 lists examples of numbers that have only
two factors and numbers that have more than two factors in a range from 2 to 12.
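The categorisation in Fig. 9.2 can be reproduced by counting the factors of each number in the range. A minimal Python sketch (illustrative only; it is not part of the chapter's App Inventor activity):

```python
def count_factors(n):
    """Count the factors of a positive integer n."""
    return sum(1 for i in range(1, n + 1) if n % i == 0)

# Numbers with exactly two factors are prime; more than two factors, composite
primes = [n for n in range(2, 13) if count_factors(n) == 2]
composites = [n for n in range(2, 13) if count_factors(n) > 2]
print(primes)      # → [2, 3, 5, 7, 11]
print(composites)  # → [4, 6, 8, 9, 10, 12]
```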
After introducing composite and prime numbers by counting the number of factors of
a number, teachers can guide learners to develop an algorithm to generalise a method
Numbers with only two factors from 2 to 12: 2, 3, 5, 7, 11
Numbers with more than two factors from 2 to 12: 4, 6, 8, 9, 10, 12
Fig. 9.2 Categorising composite and prime numbers by counting the number of factors of a number
in the range from 2 to 12
Fig. 9.3 The sample screens display the results of testing the inputted number 16 when (a) the
‘Prime Number’ button is pressed and (b) the ‘Composite Number’ button is pressed
There is a need to guide learners to figure out the sub-tasks in building the app.
Decomposing the problems into sub-tasks is a high-level abstraction process. The
four sub-tasks are finding the factors of the inputted number, showing the factors on
the screen, deciding whether the number is a prime number and deciding whether the
number is a composite number. Figure 9.4 shows the high-level abstraction involved
in decomposing the problem into sub-tasks.
The most challenging task is to facilitate learners to find ways to check whether
a number is a factor of an inputted number. The use of examples is the best way to
guide learners. For example, learners can be asked to fill in the columns ‘Modulo’
and ‘Factor of 8?’ to check whether 1, 2, 3, 4, 5, 6, 7 and 8 are factors of 8 when it
is divided by 1, 2, 3, 4, 5, 6, 7 and 8, respectively, as listed in Table 9.1. Table 9.1
shows all of the results in the columns ‘Modulo’ and ‘Factor of 8?’ when 8 is
divided by 1, 2, 3, 4, 5, 6, 7 and 8, respectively. Learners are encouraged to discuss
in pairs and identify the pattern that a modulo of zero indicates a factor of the given
number.
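A short sketch can generate the same pattern as Table 9.1 (Python, for illustration; the column wording follows the table):

```python
# Reproduce the 'Modulo' and 'Factor of 8?' columns of Table 9.1
for divisor in range(1, 9):
    modulo = 8 % divisor
    is_factor = modulo == 0
    print(f"8 ÷ {divisor}: modulo = {modulo}, factor of 8? {'yes' if is_factor else 'no'}")
# Divisors 1, 2, 4 and 8 give a modulo of 0, so they are the factors of 8.
```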
The operator of modulo in programming needs to be introduced to learners to
prepare them to find an algorithm to check whether a number is a factor of a given
Fig. 9.4 A high-level abstraction of decomposing the problem into four sub-tasks
Fig. 9.5 The interface and code for learners to understand the operator of modulo in the program-
ming environment
Input an InputNumber
Create an empty FactorList
Repeat number from 1 to InputNumber:
    If InputNumber modulo number is 0, add number to FactorList
End Repeat
Print FactorList
Fig. 9.6 An algorithm for finding all of the factors of an inputted number
number. Figure 9.5 shows the interface and the code for learners to understand the
operator of modulo in the programming environment.
Learners are then asked to work in pairs to design an algorithm to find all factors
of an inputted number. Figure 9.6 shows an algorithm for finding all factors of an
inputted number.
Following the algorithmic design, learners can start to build an app to find all of the
factors of a given number by using the operator of modulo. Figure 9.7 shows the
code of a program for finding all of the factors of a positive integral number inputted
by a learner.
Figure 9.8 shows the code executed when the ‘Prime Number’ button and the
‘Composite Number’ button are clicked, respectively. The code in ‘primeButton’ and
‘compButton’ counts the number of factors of the input number to check whether it
is a prime or composite number.
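A minimal sketch of the same logic in plain Python (the app itself is built from App Inventor blocks; the function names below only loosely mirror the ‘findFactors’, ‘primeButton’ and ‘compButton’ handlers and are otherwise hypothetical):

```python
def find_factors(input_number):
    """Find all factors of a positive integer, as in the 'findFactors' procedure."""
    return [n for n in range(1, input_number + 1) if input_number % n == 0]

def is_prime(input_number):
    """'Prime Number' button: exactly two factors, 1 and the number itself."""
    return len(find_factors(input_number)) == 2

def is_composite(input_number):
    """'Composite Number' button: more than two factors."""
    return len(find_factors(input_number)) > 2

print(find_factors(16))  # → [1, 2, 4, 8, 16]
print(is_prime(16))      # → False
print(is_composite(16))  # → True
```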
Fig. 9.7 The code of a program for finding all of the factors of a positive integral number
Fig. 9.8 The code executed when the ‘Prime Number’ button and the ‘Composite Number’ button
are clicked, respectively
Learners can then test the program by checking whether the input number is a prime
number or a composite number. The possible answers are ‘This is a prime number’
or ‘This is not a prime number’ when the ‘Prime Number’ button is clicked and ‘This
is a composite number’ or ‘This is not a composite number’ when the ‘Composite
Number’ button is pressed. Figure 9.9 shows the sample screens displaying the results
Fig. 9.9 Sample screens displaying the results of testing the inputted number 953 when (a) the
‘Composite Number’ button and (b) the ‘Prime Number’ button are pressed
of testing the inputted number 953 when (a) the ‘Composite Number’ button and (b)
the ‘Prime Number’ button are pressed, respectively.
This app enables learners to test whether large positive numbers are prime or
composite numbers, and they will thus come to understand the importance of the
application of such testing in cryptography. This mathematical topic is selected as
the theme of this computational task because of the significant role of prime numbers
in cryptography. Email communication, instant messaging and satellite television,
which are ubiquitous in everyday life, often require data encryption for the safe
delivery of information. Encryption methodologies such as RSA use trapdoor one-
way functions to encrypt messages. These functions are easy to compute in the
forward direction, but difficult to compute in reverse without special information.
RSA utilises some properties of prime number multiplication to design a function
that satisfies the definition of the trapdoor one-way function (Salomaa, 1996). To
enable learners to experience a one-way function, multiplication and factorisation
are used as examples in this study to demonstrate the forward and reverse operations,
respectively. The app developed in this study enables learners to test whether a large
positive number is a composite or prime number. For example, the app can provide
the factors of a relatively large number, such as 563,879, from the perspective of
Fig. 9.10 The factors of a relatively large number are displayed after a time gap
young primary learners. As shown in Fig. 9.10, the factors of 563879 are 569 and
991, which are prime. Learners have to wait several minutes before they can see
the outcomes, during which time they can discuss and connect the function with the
work of encryption. The larger the number, the more time will be needed to identify
its factors. It is important for us to find sufficiently large numbers for the encryption
work so that our messages can be transmitted safely in all digital communications.
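The asymmetry of the one-way function can be seen in a short sketch: multiplying 569 by 991 is immediate, while recovering the factors of 563,879 requires many trial divisions (Python, assuming a simple trial-division strategy rather than the app's exhaustive search):

```python
def prime_factors(n):
    """Factor n by trial division — the slow 'reverse' direction of the one-way function."""
    factors, divisor = [], 2
    while divisor * divisor <= n:
        while n % divisor == 0:
            factors.append(divisor)
            n //= divisor
        divisor += 1
    if n > 1:
        factors.append(n)  # whatever remains is itself prime
    return factors

print(569 * 991)              # → 563879 — the fast forward direction
print(prime_factors(563879))  # → [569, 991] — the slow reverse direction
```

For numbers of cryptographic size the reverse direction becomes computationally infeasible, which is the property RSA relies on.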
Learners are then guided to find the number of factors of the number 1, which is
one. The outcomes will be ‘This is not a prime number’ and ‘This is not a composite
number’. At this stage, teachers can emphasise the definition of a prime number
and explain why 1 is not considered prime, which is rarely discussed in teaching.
This app can thus help teachers to trigger discussion among learners in classrooms.
Figure 9.11 shows the sample screens displaying the results of testing an inputted
number 1 when (a) the ‘Composite Number’ button and (b) the ‘Prime Number’
button are pressed.
Learners are then asked to find the number of factors of the number 0. The app will
return an empty list of factors because it is not designed to handle this case. There
are infinitely many factors for 0 when it is checked by the rule ‘the inputted number
can be divided by a positive integer and it is a factor when the modulo is zero’.
When 0 is divided by 1, the modulo is 0. Similarly, when 0 is divided by 2 or any
Fig. 9.11 The sample screens displaying the results of testing an inputted number 1 when (a) the
‘Composite Number’ button and (b) the ‘Prime Number’ button are pressed
other positive number, the modulo is 0. Therefore, 0 has infinitely many factors, and
the app will report ‘There are infinitely many factors’ when 0 is inputted for testing.
Thus, learners can understand the limitations of the computer program regarding this
case, which is worthy of discussion among learning peers and teachers. Figure 9.12
shows the message after clicking the ‘Show All Factors of the Number’ button when
0 is inputted.
Learners are then asked to write a conditional statement to respond to the input of
0. Figure 9.13 shows the conditional statement for handling the special case when
0 is inputted. This is an opportunity for learners to understand program debugging.
Learners will know that programming an app is an iterative process and that the
program needs to be completed incrementally. Learners can reuse the code for the
‘findFactors’ function shown in Fig. 9.13 to search for factors.
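A sketch of the same special-case handling in Python (the message text follows the app's wording from the chapter; the function name and everything else are illustrative):

```python
def show_factors(input_number):
    """Handle the special case of 0 before reusing the factor-finding code."""
    if input_number == 0:
        # 0 is divisible by every positive integer, so its factors cannot be listed
        return "There are infinitely many factors"
    return [n for n in range(1, input_number + 1) if input_number % n == 0]

print(show_factors(0))  # → There are infinitely many factors
print(show_factors(8))  # → [1, 2, 4, 8]
```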
With the categorisation of composite and prime numbers taking place in the above
manner, this study proposed four categories of numbers: composite numbers, prime
numbers, the number 1 and the number 0.
Fig. 9.12 The message ‘There are infinitely many factors’ after clicking the ‘Show All Factors of
the Number’ button when 0 is inputted
Fig. 9.13 The conditional statement for handling the case in which 0 is inputted in this app
In the process of building the app, learners learn CT concepts such as sequence,
loops, conditionals, events, operators, data handling and procedures (see Fig. 9.14).
The concept of sequences can be developed in building the ‘when showButton.Click’
program. Learners need to think carefully about the order of finding the factors
after considering the special case of 0. When building the ‘when showButton.Click’
program, learners will first consider whether the input number is 0. If so, the app
should tell users that it has an infinite number of factors instead of listing all of
its factors. If the input number is not 0, the program should call the ‘findFactors’
procedure, as all numbers except 0 have a finite number of factors. Learners will thus
understand the concept of sequence in greater depth in this case if they understand
that the order of instructions for finding factors or assigning the message ‘There are
infinitely many factors’ to the factorList should always come before showing the
factors or the message. The concept of repetition is also developed when building
the ‘findFactors’ codes. Using the ‘for each from to’ block, the program repeatedly
runs the blocks in the do section, testing the numbers ranging from 1 to the input
number in finding all of the factors. Learners can understand the concept of events
when using the ‘when showButton.Click’ block. They will observe when an event
occurs (i.e. clicking the ‘show’ button), and the app will trigger a series of tasks
for execution until completion. They can also learn the concept of naming/variables
when they create a factor list with the ‘initialise global name to’ block. The concept
of conditionals is embedded in the use of modulo to find the factors of the input
number. Using the ‘if-then’ block, the program will add the number to the factor
list if the modulo is 0 after the input number is divided by the number. Learners
can also develop the concept of operators when using the modulo to check the
remainders of the divisions and using ‘greater than’ and ‘equals’ blocks in the ‘when
primeButton.Click’ and ‘when compButton.Click’ programs. The concept of data
is introduced in the use of lists. After the program finds the factors by the modulo,
learners need to store all of the factors in the list before showing them. Learners
can also understand the use of procedures in a complex computational task like this
one. After the ‘findFactors’ program is built, the app will show the list of factors on
the screen. Therefore, a procedure that shows factors should be created. To show the
factor list, the ‘showButton’ is made. When the ‘showButton’ is clicked, the program
calls the ‘findFactors’ and ‘showFactors’ procedures that were created previously.
Learners thus know that using procedures can avoid bulky programs.
Learners can also experience CT practices, such as reusing and remixing, being
iterative and incremental, testing and debugging, abstracting and modularising, and
algorithmic thinking, in developing the app. They experience the practice of reusing
and remixing when they use the code block of the modulo from the simple app (see
Fig. 9.5) to create the ‘findFactors’ program (see Fig. 9.15). To reuse the code block
of the modulo from the simple app in making the Factor app, learners can save the
code in the backpack, and it can then be extracted when building the ‘findFactors’
program. The backpack is a copy-and-paste feature in App Inventor that allows users
to copy blocks from a previous workspace and paste them into the current workspace.
Learners can also experience the practice of being incremental and iterative in
the process of app development (see Fig. 9.16). Learners can be guided to initially
build a program that finds and shows the factors of an input number and then put
the program into testing and debugging. Learners can develop the program further to
check whether the inputted number is a prime or a composite number. Thus, learners
can experience the process of developing the app incrementally and iteratively.
Learners also learn about testing and debugging throughout the app development
process. One particular point concerns testing the cases of inputting 1 and 0 in this
example. When the user presses either of the buttons to
test whether 1 is a prime or composite number, the app will give no response (see
Fig. 9.17), as the original app only functions when the inputted number has two or
more factors. When the inadequacy is identified, learners have to modify the program
Fig. 9.15 Reuse and remix the modulo code from the simple app using the backpack feature in App Inventor
to handle the special case of inputting 1. Moreover, an error will be found if 0 is input.
Because 0 has an infinite number of factors, the app cannot list them; thus, the app
needs to be enhanced to inform the user that 0 has an unlimited number of factors, as
shown in Fig. 9.17. From these examples, learners will understand the importance
of testing and debugging in app development.
It is not realistic to expect primary school learners to be able to conduct high-level
abstraction in programming design; however, this app development exposes learners
to the high-level abstraction process. The four main sub-tasks highlighted at the
beginning of programming design (i.e. finding factors and showing factors of the
input number, checking whether the input number is a prime number and checking
whether the input number is a composite number) can be shown to learners if they
are unable to generate the idea after discussion. Teachers should not be frustrated
if learners are unable to decompose problems into sub-tasks at this stage, but they
should expose them to this process so that they appreciate that it is an important part
of the programming process. Before building the code, learners should be motivated
to conduct an algorithmic design that delineates the detailed instructions required to
solve the problem. This process facilitates the development of algorithmic thinking,
which is an essential cognitive skill in CT development (Angeli et al., 2016; Barr
& Stephenson, 2011; Wing, 2006). Again, we should not expect learners to be able
to work this out on their own; however, they should be guided and exposed to this
process so that the seeds are sown for later development and growth. Algorithmic
thinking skills help individuals to precisely state a problem, divide the problem
into well-defined sub-tasks and work out a step-by-step solution to tackle
Fig. 9.17 Learners experience testing and debugging after 1 and 0 are inputted
each sub-task (Cooper, Dann, & Pausch, 2000). Figure 9.6 shows the algorithm for
finding the factors with modulo.
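The factor-finding logic of Fig. 9.6, together with the special cases for 0 and 1 discussed above, can be sketched in Python (the function name and messages are illustrative, not the app's actual blocks):

```python
def show_factors(n: int) -> str:
    """Return the message the app would display for input n."""
    if n == 0:
        # 0 is divisible by every positive integer, so its factors
        # cannot be listed exhaustively.
        return "0 has an unlimited number of factors"
    # The modulo test: d is a factor of n exactly when n % d == 0.
    factors = [d for d in range(1, n + 1) if n % d == 0]
    if len(factors) == 1:          # only n == 1 has a single factor
        return "1 is neither prime nor composite"
    if len(factors) == 2:          # factors are exactly 1 and n
        return f"{n} is a prime number; factors: {factors}"
    return f"{n} is a composite number; factors: {factors}"

print(show_factors(12))  # → 12 is a composite number; factors: [1, 2, 3, 4, 6, 12]
```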
As mentioned in Chap. 8, problem formulation is an essential component of CT
practices. With the experience gained from developing the factor app, learners should
be encouraged to formulate problems using their pattern recognition experience.
Learners can be guided to reflect again on the pattern for identifying factors of a number
by considering the outcome of the modulo when the number is divided by a divisor
(Table 9.1). Teachers can ask learners to look for other patterns for formulating
problems; for instance, learners may discover patterns such as those shown
in Table 9.2, which presents a pattern of identifying odd numbers using a similar
strategy to that in Table 9.1.
Table 9.2 A table with the columns ‘Modulo’ and ‘Odd number?’ to be filled in by learners to
identify patterns in finding an odd or even number

Division   Divisor   Modulo   Odd number?
1 ÷ 2      2         1        ✓
2 ÷ 2      2         0
3 ÷ 2      2         1        ✓
4 ÷ 2      2         0
5 ÷ 2      2         1        ✓
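The pattern in Table 9.2 can be checked with a few lines of Python (a sketch for illustration; `odd_check` is a hypothetical helper, not part of the app):

```python
def odd_check(n: int):
    """Return (modulo, is_odd) for n divided by the divisor 2."""
    modulo = n % 2
    return modulo, modulo == 1

# Reproduce Table 9.2: divide each number by 2 and inspect the modulo.
for n in range(1, 6):
    modulo, odd = odd_check(n)
    print(f"{n} ÷ 2: divisor = 2, modulo = {modulo}, odd: {'✓' if odd else ''}")
```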
In the process of developing the app, learners have opportunities to develop their
CT perspectives, expressing themselves and connecting with and questioning the
digital world. An important issue in the digital world is the security of data transmis-
sion. To encode and decode messages in real-life protocols, two sufficiently large
prime numbers are used to compose a composite number. This app enables learners
to obtain an initial idea of the speed of computing when they use the app to test
whether a given relatively large number is a prime number or not. Through the app
development, learners can not only express what they understand about prime and
composite numbers in the coding process, but also connect these mathematical con-
cepts with encryption technology in the digital world and think of different ways
to encrypt messages. This encourages them to learn more mathematical concepts for
encryption. Learners will be better able to raise questions about the digital world and
privacy issues. Are the messages that they send in the digital world safe? Are there
any other methods for data encryption? Should they learn more about programming
in the digital world? They will also gain a sense of programming empowerment and
will feel autonomous and competent in contributing to society by developing apps
that can help them learn. We hope that in the long term, with the accumulation of
more programming experience, learners will develop their computational identity
and regard themselves as members of the computational community.
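The speed observation above can be reproduced outside the app. A plain trial-division primality check in Python (an illustrative sketch, not the app's actual code) lets learners time how quickly a computer tests a relatively large number:

```python
import math
import time

def is_prime(n: int) -> bool:
    """Trial division: n > 1 is prime iff no d in [2, sqrt(n)] gives n % d == 0."""
    if n < 2:
        return False
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False
    return True

start = time.perf_counter()
result = is_prime(999_983)                 # a six-digit candidate
print(result, f"({time.perf_counter() - start:.5f}s)")
```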
9.5 Conclusion
This study demonstrates how primary school learners develop an in-depth under-
standing of composite and prime numbers by building an app using App Inventor
that uses the number of factors to formulate the concepts of composite and prime
numbers. Learners can develop their CT concepts and experience CT practices in a
series of programming tasks, including high-level planning to find and show factors
and ground-level implementation of the modularised blocks to find and show the
factors and to check whether the input number is a prime or composite number. They
can also experience logical thinking and gain an in-depth understanding of compos-
ite and prime numbers in the process of building and testing the app. This example
shows that building an app can not only deepen their mathematics understanding but
also develop their CT in terms of concepts, practices and perspectives. Future work
is to evaluate the effectiveness of this pedagogy in classroom teaching. CT should
not be limited to programming lessons, but should be developed in young learners
in other subjects. It is also necessary to extend this pedagogy to other subjects such
as science, language learning and visual arts.
References
Alberta Learning. (2004). Focus on inquiry: A teacher’s guide to implementing inquiry-based learn-
ing. Edmonton, AB: Alberta Learning, Learning and Teaching Resources Branch.
Angeli, C., Fluck, A., Webb, M., Cox, M., Malyn-Smith, J., & Zagami, J. (2016). A K-6 computa-
tional thinking curriculum framework: Implication for teacher knowledge. Educational Technol-
ogy & Society, 19(3), 47–57.
Barr, V., & Stephenson, C. (2011). Bringing computational thinking to K-12: What is involved and
what is the role of the computer science education community? ACM Inroads, 2(1), 48–54.
Begel, A. (1996). LogoBlocks: A graphical programming language for interacting with the world.
Cambridge, MA: Massachusetts Institute of Technology.
Brennan, K., & Resnick, M. (2012). New frameworks for studying and assessing the development
of computational thinking. In Annual American Educational Research Association meeting. Vancouver, BC, Canada.
Cooper, S., Dann, W., & Pausch, R. (2000). Developing algorithmic thinking with Alice. In Pro-
ceedings of the information systems educators conference (pp. 506–539). Philadelphia, PA: AITP.
Eisenberg, M. (2002). Output devices, computation, and the future of mathematical crafts. Interna-
tional Journal of Computers for Mathematical Learning, 7(1), 1–44.
Foerster, K. (2016). Integrating programming into the mathematics curriculum: Combining Scratch
and geometry in grades 6 and 7. In Proceedings of the 17th annual conference on information
technology education (pp. 91–96). New York, NY: ACM.
Grover, S., & Pea, R. (2013). Computational thinking in K-12: A review of the state of the field.
Educational Researcher, 42(1), 38–43.
Hambrusch, S., Hoffmann, C., Korb, J. T., Haugan, M., & Hosking, A. L. (2009). A multidisciplinary
approach towards computational thinking for science majors. ACM SIGCSE Bulletin, 41(1),
183–187.
Hsu, T., & Hu, H. (2017). Application of the four phases of computational thinking and integration
of blocky programming in a sixth-grade mathematics course. In S. C. Kong, J. Sheldon, & K.
Y. Li (Eds.), Proceedings of international conference on computational thinking education 2017
(pp. 73–76). Hong Kong: The Education University of Hong Kong.
Jona, K., Wilensky, U., Trouille, L., Horn, M. S., Orton, K., Weintrop, D., & Beheshti, E. (2014,
January). Embedding computational thinking in science, technology, engineering, and math (CT-
STEM). Paper presented at the future directions in computer science education summit meeting,
Orlando, FL.
Kahn, K., Sendova, E., Sacristán, A. I., & Noss, R. (2011). Young students exploring cardinality
by constructing infinite processes. Technology, Knowledge and Learning, 16(1), 3–34.
Ke, F. (2014). An implementation of design-based learning through creating educational computer
games: A case study on mathematics learning during design and computing. Computers & Edu-
cation, 73(1), 26–39.
Lye, S. Y., & Koh, J. H. L. (2014). Review on teaching and learning of computational thinking
through programming: What is next for K-12? Computers in Human Behavior, 41(1), 51–61.
Maletsky, E., Roby, T., Andrews, A., Bennett, J., Burton, G., Luckie, L., McLeod, J., Newman, V.,
…, Scheer, J. (2004). Harcourt math (Grade 4, Florida Edition). Orlando, FL: Harcourt.
Maloney, J., Resnick, M., Rusk, N., Silverman, B., & Eastmond, E. (2010). The Scratch program-
ming language and environment. ACM Transactions on Computing Education, 10(4), 16.
Meerbaum-Salant, O., Armoni, M., & Ben-Ari, M. M. (2010). Learning computer science concepts
with Scratch. In Proceedings of the sixth international workshop on computing education research
(pp. 69–76). New York, NY: ACM.
Morelli, R., De Lanerolle, T., Lake, P., Limardo, N., Tamotsu, E., & Uche, C. (2010). Can Android
App Inventor bring computational thinking to K-12? In Proceedings of the 42nd ACM technical
symposium on computer science education (pp. 1–6). New York, NY: ACM.
National Council of Teachers of Mathematics. (n.d.). Illuminations: Factorize. https://www.nctm.
org/Classroom-Resources/Illuminations/Interactives/Factorize.
National Research Council. (2012). A framework for K-12 science education: Practices, crosscutting
concepts, and core ideas. Washington, DC: The National Academies Press.
Ploger, D., & Hecht, S. (2009). Enhancing children’s conceptual understanding of mathematics
through Chartworld Software. Journal of Research in Childhood Education, 23(3), 267–277.
Price, T., & Barnes, T. (2015). Comparing textual and block interfaces in a novice programming
environment. In Proceedings of the eleventh annual international conference on international
computing education research (pp. 91–99). Omaha, USA: ACM.
Repenning, A., Webb, D., & Ioannidou, A. (2010). Scalable game design and the development of a
checklist for getting computational thinking into public schools. In Proceedings of the 41st ACM
technical symposium on computer science education (pp. 265–269). New York, NY: Association
for Computing Machinery.
Salomaa, A. (1996). Public-key cryptography. Berlin, Germany: Springer.
Syslo, M. M., & Kwiatkowska, A. B. (2014). Learning mathematics supported by computa-
tional thinking. Constructionism and creativity. In G. Futschek, & C. Kynigos (Eds.), Construc-
tionism and creativity: Proceedings of the 3rd international constructionism conference 2014
(pp. 367–377). Vienna, Austria: Austrian Computer Society.
Wilensky, U. (1995). Paradox, programming, and learning probability: A case study in a connected
mathematics framework. Journal of Mathematical Behavior, 14(2), 253–280.
Wilensky, U., Brady, C., & Horn, M. (2014). Fostering computational literacy in science classrooms.
Communications of the ACM, 57(8), 24–28.
Wing, J. M. (2006). Computational thinking. Communications of the ACM, 49(3), 33–35.
Wing, J. M. (2011). Research notebook: Computational thinking—What and why? The link maga-
zine, Carnegie Mellon University, Pittsburgh. http://link.cs.cmu.edu/article.php?a=600.
Wolber, D., Abelson, H., & Friedman, M. (2014). Democratizing computing with App Inventor.
GetMobile: Mobile Computing and Communications, 18(4), 53–58.
Wolber, D., Abelson, H., Spertus, E., & Looney, L. (2011). App Inventor: Create your own android
apps. Sebastopol, CA: O’Reilly Media.
Yadav, A., Mayfield, C., Zhou, N., Hambrusch, S., & Korb, J. T. (2014). Computational thinking
in elementary and secondary teacher education. ACM Transactions on Computing Education
(TOCE), 14(1), 1–16.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0
International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing,
adaptation, distribution and reproduction in any medium or format, as long as you give appropriate
credit to the original author(s) and the source, provide a link to the Creative Commons license and
indicate if changes were made.
The images or other third party material in this chapter are included in the chapter’s Creative
Commons license, unless indicated otherwise in a credit line to the material. If material is not
included in the chapter’s Creative Commons license and your intended use is not permitted by
statutory regulation or exceeds the permitted use, you will need to obtain permission directly from
the copyright holder.
Chapter 10
Teaching Computational Thinking Using
Mathematics Gamification in Computer
Science Game Tournaments
Abstract One of the key foundations of computer science is abstract algebra. Ideas
of abstract algebra can be introduced to students at middle or pre-college schools to
cultivate their capacity for logical thinking and problem-solving skills as well as to
gain the mathematical competency required in computer science. In this book chapter,
we introduce ideas of mathematics gamification and a mobile app game, Algebra
Game, richly rooted in abstract algebra and first proposed by the mathematician
Terence Tao (2012a, b). On the surface, the Algebra Game teaches elementary
algebra, yet its game-play design possesses interesting abstract algebra ideas and
mathematics gamification potential. We define mathematics gamification as the
process of embedding mathematical concepts and their logical manipulations in a
puzzle game-like setting aided
by computers. We describe several mathematics gamification instances to enrich the
Algebra Game play. Finally, we evaluate the learning efficacy of the Algebra Game
mobile app software in computer science game tournaments modeled after eSports-
like computer games in order to scale up the number of students who can learn the
Algebra Game mathematics.
C. W. Tan (B)
The Institute for Pure and Applied Mathematics, Los Angeles, USA
e-mail: algebrachallenge@gmail.com
C. W. Tan · P.-D. Yu · L. Lin
Nautilus Software Technologies Limited, Department of Computer Science, City University of
Hong Kong, Kowloon Tong, Hong Kong
e-mail: pdyu@princeton.edu
L. Lin
e-mail: hkalexling@gmail.com
P.-D. Yu
Department of Electrical Engineering, Princeton University, Princeton, USA
10.1 Introduction
Marvin Minsky, in his 1970 Turing Award Lecture, asserted that “The computer sci-
entist thus has a responsibility to education…how to help the children to debug their
own problem-solving processes.” (Minsky 1970). Minsky pointed out that cultivating
the capacity for logical thinking and problem-solving skills of students, while they
are young, to learn foundational subjects such as mathematics is of the essence. The
emphasis is on the tools and motivations for students to acquire problem-solving
skills in lifelong learning of mathematics. Computer science and its software tech-
nologies might just offer an intriguing way for students to persist and persevere in
learning mathematics. We describe a preliminary pedagogical study on learning
K-12 mathematics through mathematics gamification ideas, which we tested at a
computer science tournament in Hong Kong.
We define mathematics gamification as the process of embedding mathematical
concepts into puzzle game-like instantiations that are aided by computing technolo-
gies. We focus on software development for typical computing technologies that run
on the learner's mobile device. Game playing is essentially the manipulation of
mathematical objects or structures in a logical manner so as to acquire useful
mathematical insights that are otherwise not obvious or not taught in traditional
classrooms.
Also, the engaging game-like nature can potentially motivate students and serve
as instructional tools for regular practice to gain proficiency in mathematics and
numeracy. There are several ways to implement ideas of mathematics gamification
in software that can be delivered to the students. We have chosen to deliver them
by mobile app software that presents potentially strong opportunities for students to
learn advanced mathematics in a systematic manner and at their own pace.
Any form of personalized learning technologies should provide a way to teach for
mastery, where students are allowed to progressively move on to other challenging
topics only after they have fully grasped the topic at hand. This requires a
personalized approach that caters to each student's learning pace and complements
traditional classroom teaching. Our mathematics gamification technology leverages
the ubiquitous availability of personal computing devices such as smartphones and
tablets, offering students the opportunity to personalize their mathematics learning
experience, allowing students to learn
new knowledge or perform self-assessments at their own pace, and therefore max-
imizing the efficiency of learning in order to “teach for mastery”. Another unique
advantage of our mathematics gamification technology is its ability to offer students
instant feedback as they play, which is a crucial part of “debugging” their thinking
process. In traditional classroom learning, students’ answers often are graded and
then returned weeks later. On the contrary, personalized learning technologies such
as our mathematics games (namely, Algebra Game and Algebra Maze, which we
will present in details in the following) enable students to get instant visual feedback
as they play, so that they can reconsider the situation and correct their moves, which
is an example of “thinking process debugging” on the go and in real time.
Tao’s insightful remarks aptly highlight two key facts, namely, that (i) certain kinds
of K-12 mathematics are amenable to game design that can motivate students to learn
and (ii) problem-solving skills can be cultivated through this gamifying process as a
means to learning the mathematical subject. In other words, there are several ways to
solve elementary algebra—strategizing moves in a mathematical puzzle game is one
of them. With the aid of computing technologies, this introduces novel perspectives
to learn elementary algebra for young students. Also, Tao (2012a, b) developed a
software mock-up of the game as shown in Fig. 10.1.
The idea of Tao’s algebra game is to reduce a given linear algebra equation to
a form with only “x” and a numerical value on the left-hand and right-hand sides,
respectively, through a selection of a finite number of given clues. In the following,
Fig. 10.1 Terence Tao’s software mock-up of his algebra game in 2012
we give an example using a screenshot of the game as shown in Fig. 10.1. Initially,
the puzzle state is the algebra equation “5x + 3 = x + 11”, and the given clues are the
three possibilities “Subtract 1”, “Divide by 2”, and “Subtract x”. The player chooses
one of the three possibilities by clicking on the avatar icon. Suppose the player
chooses “Subtract 1”; the algebra equation (correspondingly, the puzzle state) then
changes to “5x + 2 = x + 10” (since both sides of the original equation “5x + 3 =
x + 11” have one subtracted from them).
One possible “solution” to the puzzle given in Fig. 10.1 is the sequence of “Subtract
1” then “Subtract x” then “Divide by 2” then “Subtract 1” and then finally “Divide
by 2” to yield “x = 2”. This requires a total of five moves to reach the desired state.
It is important to note that what matters is not the final value of x, but it is rather the
inquisitive problem-solving process while playing that is valuable.
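The five-move solution above can be checked mechanically. A minimal Python model (our own representation, not Tao's software) encodes the equation a·x + b = c·x + d as a tuple and applies the clues one by one:

```python
# The state (a, b, c, d) stands for the equation a*x + b = c*x + d.
def apply_move(state, move):
    a, b, c, d = state
    if move == "Subtract 1":
        return (a, b - 1, c, d - 1)
    if move == "Subtract x":
        return (a - c, b, 0, d)
    if move == "Divide by 2":
        return tuple(v / 2 for v in state)
    raise ValueError(move)

state = (5, 3, 1, 11)                      # "5x + 3 = x + 11"
for move in ["Subtract 1", "Subtract x", "Divide by 2",
             "Subtract 1", "Divide by 2"]:
    state = apply_move(state, move)
print(state)                               # → (1.0, 0.0, 0.0, 2.0), i.e. "x = 2"
```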
The benefit to computational thinking is obvious: students learn a founda-
tional subject (e.g., mastering algebra) while playing. With regard to the game-
play design, there are several intriguing questions: first, how to engineer the
difficulty level of the game automatically? Second, how does a computer (not
human player) solve a given puzzle efficiently, i.e., with the fewest number of
moves? And, third, how to engage the human players in an entertaining man-
ner so that they keep on playing it and, unknowingly, develop a better number
sense or mathematical intuition and that such an improvement can be measured?
These questions were explored in The Algebra Game Project founded by the first
author (“The Algebra Game Project”, n.d.), and detailed answers to these questions
along with the software development will be published in other venues.
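The second question, solving a puzzle with the fewest moves, can be approached with a standard breadth-first search over puzzle states. This sketch is one possible approach under our own state encoding, not necessarily the project's algorithm:

```python
from collections import deque
from fractions import Fraction as F

# State (a, b, c, d) encodes a*x + b = c*x + d; clues as in Fig. 10.1.
MOVES = {
    "Subtract 1": lambda s: (s[0], s[1] - 1, s[2], s[3] - 1),
    "Subtract x": lambda s: (s[0] - s[2], s[1], s[2] * 0, s[3]),
    "Divide by 2": lambda s: tuple(v / 2 for v in s),
}

def solved(s):
    return s[0] == 1 and s[1] == 0 and s[2] == 0

def fewest_moves(start, max_depth=8):
    """Breadth-first search: the first solved state found uses the fewest moves."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, path = queue.popleft()
        if solved(state):
            return path
        if len(path) < max_depth:
            for name, step in MOVES.items():
                nxt = step(state)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [name]))
    return None

start = (F(5), F(3), F(1), F(11))          # "5x + 3 = x + 11"
print(fewest_moves(start))                 # a shortest solution (five moves)
```

Exact rational arithmetic (`Fraction`) keeps repeated halving lossless, so identical states are recognised and not revisited.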
In this and the next section, we describe mathematics gamification building on
Tao's (2012a, b) algebra game and the software implementation of our mobile apps,
the Algebra Game and the Algebra Maze; the mobile app software is freely available
to download from the Apple iOS Store or Google Play Store (“The Algebra Game
Project”, n.d.).
In Algebra Maze, we combine maze-solving elements and single-variable algebra
equation solving together as shown in Fig. 10.2, which is the game-play screenshot
of Algebra Maze. The goal is to move the purple avatar toward the treasure (i.e.,
equivalently, to solve the linear equation). Each movement of the avatar corresponds
to a mathematical operation on the equation given below the maze. For example, the
button “+1x” corresponds to the avatar moving one unit upward, and the button “+2”
corresponds to the avatar moving rightward two units. Hence, the operation on x is
an up–down movement, and the operation on the constant is a left–right movement
of the avatar. With the rules above, we can deduce that the position of the avatar
also has an algebraic meaning, i.e., each position in the maze represents a different
equation having different coefficients or constant values.
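This correspondence between positions and equations can be sketched as follows (the coordinate convention and the `press` helper are hypothetical, for illustration only):

```python
# Hypothetical sketch: the avatar's grid position mirrors the equation's
# left-hand side a*x + b, so each button edits the equation and moves
# the avatar at the same time.
def press(position, equation, button):
    (row, col), (a, b) = position, equation
    if button.endswith("x"):                 # e.g. "+1x": up/down movement
        dx = int(button[:-1])
        return (row - dx, col), (a + dx, b)  # up on screen = smaller row index
    d = int(button)                          # e.g. "+2": left/right movement
    return (row, col + d), (a, b + d)

pos, eq = (5, 0), (3, -2)                    # avatar at (5, 0); LHS is 3x - 2
pos, eq = press(pos, eq, "+1x")              # move one unit upward
pos, eq = press(pos, eq, "+2")               # move two units rightward
print(pos, eq)                               # → (4, 2) (4, 0)
```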
In the initial levels of Algebra Maze, the treasure is made visible, and then at
subsequent levels, the treasure is rendered invisible, i.e., hidden from the player as
shown in the right-hand side of Fig. 10.2. Hence, the player needs to make use of
the “information” in the given equation to deduce the location of the treasure. In
Fig. 10.2 Algebra maze mobile app game with maze-like gamification design and freely available
for download at iTunes app store and Google play store
some levels, the player has to first get a key, which is at a certain position in the
maze, before opening a locked door located near the treasure. This setting is
equivalent to asking the player to reach a certain equation first before solving
it. Finally, when the avatar locates the (potentially hidden) treasure, the
algebra equation will be in the desired form “x = numerical_solution”, i.e., the puzzle
is solved.
In order to make Algebra Maze more challenging, so that players can learn
thoroughly, we add new features at subsequent higher levels. One of the features
is “traps”: once the player steps on a trap, the buttons for movement control
will toggle, either from left–right to up–down or from up–down to left–right. For
example, the player can initially perform only “+s” or “−t” on the given equation,
and after the player has stepped on a trap, the operations will change from “+s” or
“−t” to “+ux” or “−vx”, which are the operations related to the x-term, where s, t, u,
and v are four constants. In Fig. 10.2, there are only left–right buttons, “+2” and “−1”,
in the leftmost screenshot. After the character steps on the trap, shown as the green
buttons, the left–right buttons are changed into the up–down buttons “+1x” and
“−2x”, as shown in the middle screenshot of Fig. 10.2.
Another way to increase the difficulty of Algebra Maze is to create “fake paths”
in the maze (Figs. 10.3 and 10.4). We define a “fake path” as a path that seems to
be a possible path to the treasure box, but it will, in fact, lead the player to a dead
end. In the following, we give illustrative examples of how to combine the fake path
and the “trap” together to design four mazes whose topological structures
look similar but which actually possess different difficulty levels. All four mazes
use the clue set {+2, −1, +1x, −2x}. The starting point is marked in dark blue, the
treasure box is marked in yellow, and the orange squares represent walls and
obstacles.
In Maze 1, which is the easiest one, there are two possible paths (red path and
green path) that the player can take in order to reach the treasure box. In Maze 2, we
limit the number of paths that the player can take by adding some obstacles, which
are indicated by the light blue square in the second figure; now the player can
only reach the treasure box along the red line. In this example, the original green
path becomes a “fake path” in Maze 2.
This is one possible way to enhance game-play design such that the difficulty
of Maze 2 is higher than that of Maze 1. In Maze 3, we have added some traps to
Fig. 10.4 Four different game designs of algebra maze with difficulty levels ordered from easy to
hard: Maze 1, Maze 2, Maze 3, and Maze 4
the maze, and the difficulty level will then further increase. Players need to think
carefully about how to overcome obstacles in order to reach the treasure box. The circles
in the third figure represent the traps (the green button in the screenshot). At the
beginning, there are only two movement control buttons, namely, the go-leftward
and go-rightward buttons. When the player steps on the traps, the movement control
buttons will then toggle to upward and downward buttons. In Maze 4, the difficulty
level can then be further increased by adding a dead end in the map. This means that,
when the player reaches the dead end, he/she will be trapped in that position and
cannot move any further. In such a case, the player either loses the game or can
retry it. The dead end is highlighted in light blue in the fourth figure. By leveling up
the maze gradually from Maze 1 to Maze 4, the Algebra Maze will become more
challenging and require the player to think deeply before making a move. These
four examples also point out that the elements in the clue set are strongly related to
the difficulty-level design. For example, we can leverage the Euclidean algorithm in
the clue set design. Students can even get to know and learn important foundational
computer science knowledge such as how the Euclidean algorithm works. In addition,
the positions of the traps limit the players’ moves (e.g., the dead end) and also need
to be considered when designing the clue set.
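The connection to the Euclidean algorithm can be made concrete: with constant clues +a and −b, the constant term can only be shifted in steps whose granularity is gcd(a, b). A sketch (our illustration, not the game's code):

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)."""
    while b:
        a, b = b, a % b
    return a

# With constant clues +2 and -1, any integer offset is reachable because
# gcd(2, 1) == 1; a clue set like +4 and -6 can only shift the constant
# by multiples of gcd(4, 6) == 2, which constrains which mazes are solvable.
print(gcd(2, 1), gcd(4, 6))                 # → 1 2
```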
In Algebra Game, unlike the Algebra Maze, we split the clues into two parts, one
part is the mathematical operators, i.e., addition “+”, subtraction “−”, multiplication
“*”, division “÷”, and the other part is the operand such as the x-term or the number
as shown in Fig. 10.5. Hence, the combination of the clues is more general than
the original design. The goal in Algebra Game is the same as in Tao's algebra game:
to reduce a given linear algebra equation to “x = numerical_solution” through a
selection of a finite number of given clues. As shown in the right-hand side of
Fig. 10.5, if the player “drags” the division button “÷” and “drops” it on the button
“2”, then the equation will be divided by two on both sides. Algebra Game is not
only about solving a linear equation but it also contains other mathematical insights.
For example, consider an equation “x − 23 = 2” with the clues “+, −” and “2,
Fig. 10.5 A puzzle instance in the algebra game mobile app game that challenges students to find
the right combinations of prime number factors for the number 105 in order to solve the puzzle.
Important mathematical results such as the fundamental theorem of arithmetic can be displayed as
a hint to facilitate a deeper understanding of mathematics for the game players
3”, then this is equivalent to asking whether we are able to use 2 and 3 to construct 23.
The players can develop their number sense through playing this game. Let us use
another example, consider an equation “24x = 48” with the clues “*, ÷” and “2,
3”, then this is equivalent to asking the players to factorize 24 by using 2 and 3
(prime numbers). Other than the factorization concept, there are many instances of
mathematical manipulations that can be embedded in the Algebra Game such as the
Frobenius’s problem (also known as the coin problem) in the form of making up a
number from two given clues. In essence, given the available clues at each level, the
player can only perform a limited number of operations, and this restriction helps to
stimulate computational thinking in finding the shortest sequence of moves to solve
the problem. If the player is familiar with the mathematical analysis underpinning
the difficulty level design in the Algebra Game, the player can even further develop
mathematical insights and intuition while playing the Algebra Game.
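For instance, the equation “24x = 48” with operator clues {*, ÷} and operand clues {2, 3} can be solved by repeatedly dividing both sides by clue numbers until the coefficient is 1, mirroring the factorization 24 = 2 · 2 · 2 · 3. A hypothetical greedy sketch (not the app's implementation):

```python
def divide_out(coeff, rhs, operands):
    """Greedily divide both sides of coeff*x = rhs by available operand clues."""
    moves = []
    progress = True
    while coeff != 1 and progress:
        progress = False
        for p in operands:
            if coeff % p == 0 and rhs % p == 0:
                coeff, rhs = coeff // p, rhs // p
                moves.append(f"÷{p}")
                progress = True
                break
    return coeff, rhs, moves

print(divide_out(24, 48, [2, 3]))   # → (1, 2, ['÷2', '÷2', '÷2', '÷3'])
```

The greedy choice works here because the right-hand side is a multiple of the coefficient; a full solver would also handle fractional intermediate states.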
The mathematics gamification process also requires analyzing the scoring rule
at each puzzle game level that can be evaluated according to different reasonable
design criteria. For example, scoring rules can be evaluated in terms of the number
of moves needed or the speed to solve each level in the case of the Algebra Game
and the number of “redo” on hitting obstacles or the speed to locate the hidden
treasures in the case of the Algebra Maze. A well-designed scoring system can
improve the difficulty design at each level, thereby improving students' learning
experience. Furthermore, concrete mathematics can be purposefully interleaved at
certain levels of the games. For example, after a consecutive sequence of games
involving factorization in the Algebra Game, the mathematical statement of the
Fundamental Theorem of Arithmetic (stating that every natural number greater than
1 is uniquely a product of prime numbers) can be displayed to the player in order to highlight
game features (e.g., the prime numbers as clues) with the mathematical rudiment.
In this way, the players learn about fundamental mathematical knowledge (such as
The Fundamental Theorem of Arithmetic in Euclid’s Elements that is not typically
taught in the classroom). Fig. 10.5 shows an example of such a puzzle. In
summary, we find that the Algebra Maze and the Algebra Game can provide players
with new perspectives to gaining new mathematical insights while training their
individual number sense and problem-solving skills that are critical to developing
their capacity to view mathematics at multiple abstract levels.
Fig. 10.6 Primary school student tournament of algebra maze at the computer science challenge
in May 2016
participants beforehand as they were posted online only after the CS Challenge was
over (Fig. 10.8).
We describe next the learning efficacy of the first task of Algebra Game and
Algebra Maze based on analysis of the data collected in the tournament. We evaluate
learning efficacy based on the time spent at each level, each move that a user has
taken, and the number of “redo” times at each level. The
difficulty at each level is calibrated based on our mathematical analysis of the game
(from easy to hard), and we expect to have a reasonable difficulty curve so that
players gain confidence instead of frustration at early levels. Let us evaluate Algebra
Game first. In Fig. 10.10, we see that the number of students who completed the
corresponding level shows an observable reduction at higher levels, e.g., about 20
percent from Level 9 to Level 10. This can also be observed in Fig. 10.9: the time
spent at Level 10 is almost double that at Level 9. In fact, the number of moves needed
at Level 10 is also almost double that at Level 9, as shown in Table 10.1. We conclude that the
total number of moves needed at each level is a crucial factor in the difficulty-level
calibration design of the game.
Interestingly, the average number of moves needed at Level 12 is around 8.8, and
yet the time spent at Level 12 is the highest. This implies that the total number of
moves needed is not the only factor that may affect the difficulty of the game for
human players. Finally, the reason for the longer time spent at Level 1 is that students
are initially warming up (as they get familiar with the game interface and rules). If we
omit the information at Level 1 and proceed to compute the correlation coefficient
Fig. 10.7 Secondary school student tournament of algebra game at the computer science challenge
in May 2016
Fig. 10.8 Percentage of the number of students versus the total number of completed levels of
algebra game with an unlimited number of levels a priori designed in the algebra game during the
20-min duration for the game tournament
Fig. 10.9 The average time a player spent at each level of the algebra game
between the time spent and the average number of moves taken, we obtain a correlation
coefficient of 0.807, which reveals that the two are highly correlated over the 12 levels
analyzed.
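The correlation computation described above can be sketched in a few lines. The per-level values below are hypothetical stand-ins, not the tournament measurements; Level 1 is omitted as a warm-up level, as in the analysis:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xd, yd = x - x.mean(), y - y.mean()
    return float((xd @ yd) / np.sqrt((xd @ xd) * (yd @ yd)))

# Hypothetical per-level data: average seconds spent and average moves
# taken at Levels 2..13 (Level 1 excluded as a warm-up level).
time_spent = [30, 35, 40, 42, 50, 55, 62, 70, 95, 100, 120, 118]
avg_moves  = [ 3,  3,  4,  4,  5,  5,  6,  7, 13,  14,   9,  16]

r = pearson_r(time_spent, avg_moves)
print(f"correlation between time and moves: {r:.3f}")
```

With the real tournament data, this computation yields the 0.807 reported above.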
From Fig. 10.9, we observe that the average time that students spent on a level in
the Algebra Game increases with the level of progression (except for the first level,
where students are just getting familiar with the game and learning the rules on how
to play the game, and thus the time spent is relatively higher). From a game design
perspective, a natural extension is to fine-tune the difficulty levels according to the
skill or knowledge of each player. In addition, how to classify the difficulty level of
the Algebra Game and integrate that with the player's learning curve is an interesting
direction that we will address in future release versions of the Algebra Game and
Fig. 10.10 Percentage of number of students versus the total number of completed levels of algebra
maze with a total of 45 levels a priori designed during the 20-min duration of the game tournament
Algebra Maze. This will allow the game-playing experience to be adaptive to the
personalized learning goal of mastering arithmetical and numeracy skills in algebra
manipulations (Fig. 10.10).
As part of classifying the difficulty level of the Algebra Game, we will address
the automatic generation of games in the Algebra Game and Algebra Maze so that
high-quality levels can be generated in real time. There are several ways to classify
the difficulty levels of the Algebra Game and Algebra Maze. One possibility is a machine
learning approach that automatically quantifies the difficulty of a given Algebra Game
puzzle. We can treat this as a simple regression problem in machine learning:
the input is the puzzle parameter of an Algebra Game level (initial equation, number
of moves allowed, set of allowed operations, etc.), and the output is a numeric value
measuring the difficulty of the given level. We can train this model by feeding it with
the puzzles we used in the game tournaments, e.g., the Computer Science Challenge,
and their corresponding difficulty measures derived from the average time spent by
human players. After the training, we can use the generated model to predict the
difficulty level of a particular generated level of Algebra Game. This can, in turn,
be presented to general users in the form of automatically generated Algebra Game
puzzles with increasing difficulties.
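The regression formulation above can be sketched with ordinary least squares. The feature encoding and training values below are illustrative assumptions, not the actual tournament-derived data or the model we deploy:

```python
import numpy as np

# Hypothetical training set: each row encodes one Algebra Game level as
# (number of moves needed, number of allowed operations, equation length);
# the target is the average solve time (seconds) observed for that level.
X = np.array([[ 3, 2,  5],
              [ 4, 2,  6],
              [ 6, 3,  7],
              [ 8, 3,  9],
              [ 9, 4,  9],
              [12, 4, 11]], dtype=float)
y = np.array([25, 30, 48, 70, 85, 118], dtype=float)

# Ordinary least squares with an intercept term: difficulty ~ X b + b0.
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_difficulty(moves, ops, eq_len):
    """Predict a difficulty score (in seconds) for an unseen level."""
    return float(np.array([moves, ops, eq_len, 1.0]) @ coef)

print(predict_difficulty(10, 4, 10))
```

In practice, the model could be any regressor; the point is only that puzzle parameters go in and a numeric difficulty estimate comes out.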
Another enhancement we are currently making to the algebra gamification system
is a personalized puzzle recommendation system, where the backend server can push
auto-generated personalized Algebra Game/Algebra Maze puzzles to each individual
student’s device. We call it “personalized” because the system will determine a
specific student’s need (e.g., a student might need more practice in factorization of
numbers, which can be seen from her past playing record in the system) and push the
most suitable puzzles to the student. Once we finish the difficulty level classification
task, the system can also push puzzles at different difficulty levels to suitable student
groups, so all students from different skill levels can find the game challenging.
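The selection logic of such a recommendation backend can be sketched as below. The record fields, topic names, and three-puzzle cutoff are illustrative assumptions, not the actual backend schema:

```python
# A minimal sketch of the "personalized push" described above: find the
# student's weakest topic from her play record, then pick puzzles on that
# topic closest to her skill level.

def weakest_topic(play_history):
    """Return the topic with the lowest success rate in a student's record."""
    stats = {}
    for rec in play_history:
        won, total = stats.get(rec["topic"], (0, 0))
        stats[rec["topic"]] = (won + rec["solved"], total + 1)
    return min(stats, key=lambda t: stats[t][0] / stats[t][1])

def recommend(play_history, puzzle_bank, skill_level):
    """Pick up to three puzzles on the weakest topic near the skill level."""
    topic = weakest_topic(play_history)
    candidates = [p for p in puzzle_bank if p["topic"] == topic]
    return sorted(candidates, key=lambda p: abs(p["difficulty"] - skill_level))[:3]

history = [{"topic": "factorization", "solved": 0},
           {"topic": "factorization", "solved": 1},
           {"topic": "linear-equations", "solved": 1}]
bank = [{"topic": "factorization", "difficulty": d} for d in (1, 3, 5, 8)]
print(recommend(history, bank, skill_level=4))
```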
10.7 Conclusions
References
Tao, T. (2012a). Gamifying algebra. Terence Tao’s Wordpress blog article. https://terrytao.
wordpress.com/2012/04/15/gamifying-algebra
Tao, T. (2012b). Software mockup of algebra game. https://scratch.mit.edu/projects/2477436.
The Algebra Game Project. https://algebragame.app.
Wing, J. M. (2006). Computational thinking. Communications of the ACM, 49(3), 33–35.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0
International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing,
adaptation, distribution and reproduction in any medium or format, as long as you give appropriate
credit to the original author(s) and the source, provide a link to the Creative Commons license and
indicate if changes were made.
The images or other third party material in this chapter are included in the chapter’s Creative
Commons license, unless indicated otherwise in a credit line to the material. If material is not
included in the chapter’s Creative Commons license and your intended use is not permitted by
statutory regulation or exceeds the permitted use, you will need to obtain permission directly from
the copyright holder.
Chapter 11
Mathematics Learning: Perceptions
Toward the Design of a Website Based
on a Fun Computational Thinking-Based
Knowledge Management Framework
11.1 Introduction
The application of computational thinking (CT) can be found in most institutions and
agencies today. Wing (2006) defines CT as “a universally applicable attitude and skill
set everyone, not just computer scientists, would be eager to learn and use.” With
11.1.1 Problem
others, they tend to only discuss video games. As time passes, their social commu-
nication will cease and they will have problems communicating with members of
society in general.
Considering the benefits of learning Mathematics and CT skills, it is proposed
that children’s interest should be diverted from unhealthy video games to more edu-
cational, Mathematics-based games. Furthermore, if we can make learning more fun
and enjoyable, we can incorporate Knowledge Management to enhance collaborative
learning.
11.1.2 Objectives
Computing involves not only programming and practicing computing skills but
also recursive thinking, model matching, and compositional reasoning. These will
enhance algorithmic thinking, parallel thinking, and practical thinking as well. As a
result, through CT, the learner develops problem-solving processes and dispositions.
These are very important to the advancement of computer systems and functions,
especially analysis and design as well as justifications to problems and solutions
(Togyer & Wing, 2017).
CT shows how individual knowledge and skills gained from computing subjects
can bring positive impact to society. The CT course website (2017) provides some
examples. In Economics, one can find cycle patterns in the rise and fall of a country’s
11 Mathematics Learning: Perceptions Toward … 187
11.3 Methodology
First of all, the Software Development Life Cycle (SDLC) is used to design the
game and to ensure the quality of the online game portal. For user requirements,
we carried out a questionnaire survey of students' and parents' opinions and
preferences with regard to our objectives and the proposed Website portal. Next, we
designed and developed the Website according to their opinions and preferences
188 C.-S. Lee and P.-Y. Chan
and the KM cycle. Stakeholders' involvement is critical; as such, their feedback
weighed heavily on our design and development. Consequently, we adopted an
agile methodology to design and develop incrementally, catering to user feedback.
For design and analysis, ease of use and usefulness, the two key assessment metrics
of the Technology Acceptance Model (Venkatesh & Davis, 2000), are also applied in
alpha and beta testings.
[Figure residue: Computational Thinking components (Decomposition, Pattern Recognition, Abstraction, Algorithm design) mapped to learning activities: worksheets/answer sheets, Scratch modularity/programming, and the Mathematics crossword]
which are of different difficulty levels, enable students to not only go through the
steps for each concept exercise but also to inductively realize that many problems
are similar. In Scratch programming, building blocks can be reused and remixed for
different fun outcomes.
Next is algorithm design and later, abstraction. For solution steps (algorithm
design), diverse formulas/methods of solving problems can lead to the same answer.
We choose a Mathematics crossword puzzle because it enables students to identify
different solutions to the same problem. This leads to generalization.
Due to the iterative nature of agile-based SDLC methodology, the sections below
present the progression/iterative design process and user testings, which have led to
the subsequent versions of the Website.
In the pilot test, a total of 27 responses from different age groups are gathered through
a questionnaire to identify respondents' opinions and preferences. Questions are scored
on a scale of one (not important) to five (most important). There are five different age
groups: children (6 years old to 12 years old), teenagers (13 years old to 19 years
old), young adults (20 years old to 35 years old), adults (36 years old to 59 years
old), and senior citizens (60 years old and above). The five different age groups are
categorized as such to derive a more holistic understanding of factors, which would
sustain both learning and future system development.
Young adults make up 40.7% of the sample, whereas adults only make up 7.4%.
There are 18.6% more female respondents than male respondents. During
the pilot test, the participants are shown the Website’s main page in wireframe format
(Fig. 11.2), followed by the survey questions.
A summary of the responses received from the participants is presented in
Table 11.3. The feedback obtained is used to further refine the proposed Website
portal.
The purpose of the alpha testing is twofold: (a) to ensure that all functions in the
Website portal are working; and (b) to obtain user feedback to refine the system.
Users can have an overview of the Website portal on the homepage (Fig. 11.3).
They can also access the Mathematics test to determine and improve their level of
Mathematics.
Worksheets are divided according to the students' age group so that each user is
directed to the worksheets targeted at her level:
• For Primary 1, basic Mathematics equations for users are included to familiarize
them with numbers and equations.
• For Primary 2, the multiplication table and basic problem-solving skills are
included.
• For Primary 3, the focus is mainly on money value and problem-solving skills.
Answer sheets are provided for each worksheet so that users can check their
performance and see different methods of solving a problem. Figures 11.4, 11.5, and
11.6 are sample screenshots of the questions and answers.
A forum page for parents and students is also included in the Website to enable
discussion, information sharing, and knowledge exchange. Moreover, the forum can
be used to help users improve their social skills. The game created using Scratch is
found in “Fun Time”.
Fig. 11.5 Primary 2 sample worksheet (number value, pattern recognition with regard to numbers,
numeric value, and operators)
For the alpha user testing, a total of 31 responses from different age groups are
collected. Similar to the pilot test, there are five categories of respondents. For the
question regarding the importance of understanding Mathematics in the real world,
35.5% of them said that it is important. From the range of one (not important) to five
(most important), none of them said that Mathematics is not important.
As for the overall rating of the Website, 16.1% of the respondents rate it as
average, while most users (61.3%) give it a rating of 4. To further elicit user require-
ments/expectations, user feedback is compiled (Table 11.4) and considered for further
testings.
Fig. 11.6 Primary 3 sample worksheet (numeric value, pattern recognition with regard to numbers,
numeric value, operators, and algorithm design)
For the final evaluation process, the design is based on the alpha users’ perceptions
and experience. We first present the design for the game and then the Website. Users
can select to start the game or to understand the game instructions. The voice narration
for the game is downloaded from the Watson text-to-speech demo.
Once users start the game, a brief introduction is presented. There are four stages
in this game. The first and second stages of the game require users to select all the
colored balls containing even and odd numbers in order to proceed to the next level.
As they proceed with the game, they would get points if they manage to answer the
given questions correctly. Once users manage to complete all the stages, they are
required to purchase the “tickets” with their points to watch the space videos as a
reward. The reward can demonstrate to users that mastering Mathematical skills is key
to earning points. Knowledge translation is involved when users apply Mathematical
skills that they have learnt to identify the correct answers.
Fig. 11.7 a Even number game (number value), b even number clip (number value)
If users do not know what even and odd numbers are, they can watch a short
clip explaining and teaching them what these numbers are. Figure 11.7a, b shows sample
screenshots of the even number game and clip.
Next is the game and/or reward using Scratch (Fig. 11.8). Students can manipulate
the actions, variables, and values for different outcomes.
Figure 11.8 shows a hide-and-seek game developed in Scratch. The seeker is
given 90 s to find the hider, who can hide at any random position. This game will be
repeated until the time runs out. After a couple of seconds, the seeker will show up
and start seeking the hider. If the seeker finds the hider at the x- and y-coordinates
(e.g., −167, −111), then the hider will be revealed and the user score will increase
by 1 point. The word “Congratulations” will then be displayed to let the user feel a
sense of achievement.
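The game loop described above can be approximated in a few lines. This is a simplified text-only sketch (the original is a Scratch program); the coordinate ranges, 30-unit hit tolerance, and the assumption that each seek attempt costs five seconds are all illustrative:

```python
import random

def play_round(seconds=90, seek_cost=5, rng=random):
    """Simulate one hide-and-seek session until the timer runs out."""
    score = 0
    time_left = seconds
    while time_left > 0:
        hider = (rng.randint(-240, 240), rng.randint(-180, 180))  # random hiding spot
        guess = (rng.randint(-240, 240), rng.randint(-180, 180))  # the seeker's guess
        if abs(guess[0] - hider[0]) < 30 and abs(guess[1] - hider[1]) < 30:
            score += 1  # hider revealed at matching x- and y-coordinates
            print("Congratulations")
        time_left -= seek_cost
    return score

print("final score:", play_round())
```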
The variability in values enables users to manipulate the pace of the game, as well
as the kinds of encouragement and sound effects they would like to include in the
game. The water drop sound in this example may not be appropriate, so it may be the
first value to be changed.
Finally, a Mathematics crossword puzzle enables users to click on the empty
boxes within the crossword to fill in the answers. The purpose of the crossword is to
show users that similar Mathematical functions such as addition and subtraction can
be found in different Mathematical equations, and that there are different compo-
nents/methods to arrive at a solution. In Fig. 11.9, users have to think of the solution
from multiple perspectives, simulating higher-level algorithm design. For example,
there are two ways to arrive at the total 10. We hope that with practice, users will
understand the link between algorithm design and numeric value, operators, and
problem-solving.
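The idea that several equations reach the same total can be made concrete by enumeration. This is an illustrative enumeration over small integers, not the puzzle generator used on the site:

```python
def equations_with_result(target, max_operand=12):
    """List all 'a + b' and 'a - b' equations over 0..max_operand equal to target."""
    found = []
    for a in range(max_operand + 1):
        for b in range(max_operand + 1):
            if a + b == target:
                found.append(f"{a} + {b} = {target}")
            if a - b == target:
                found.append(f"{a} - {b} = {target}")
    return found

sols = equations_with_result(10)
print(len(sols), "equations, e.g.:", sols[:3])
```

Each equation is one "path" a crossword cell might take to the same answer, which is exactly what the puzzle asks students to notice.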
Since we are trying to encourage learning in a community, a blog site enables
posting of updates regarding the subject matter. The forum enables users and their
parents to share ideas, knowledge, and problem-solving approaches.
For beta testing, a total of 30 responses from five different age groups are collected.
The age groups include children (6 years old to 12 years old), teenagers (13 years old
to 19 years old), young adults (20 years old to 35 years old), adults (36 years old to
59 years old), and senior citizens (60 years old and above). Young adults comprise
56.7% of the respondents, and adults 26.7%.
Table 11.5 shows the comparisons between the alpha- and beta testing results,
while Table 11.6 presents user expectations for the future Website and game devel-
opment. The Likert scale ranges from 1 (least important) to 5 (most important).
Although the perception outcomes are consistent and positive, the stakeholders
mainly prefer simple designs and manageable exercises and games. Furthermore,
users may find it challenging to find the solutions to problems all by themselves.
Hence, adaptations in the sequence and choice of activities would be necessary.
11.8 Significance
11.9 Conclusion
Parents should start introducing Mathematical concepts to their children from a young age.
To attract children's attention to learning, methods of teaching and learning should
be fun, as it might be difficult to regain their interest once it is lost.
The design of this study has incorporated Brennan and Resnick’s (2012) com-
putational perspective, computational practice, computational concepts, and KM
approaches. CT-based Mathematics provides a direct link to principles. Hence, it is
important to develop such skills. Our fun Computational Thinking-Knowledge Man-
agement (CT-KM) design and approach follow an inquiry-student-centered learning
framework. The role of technology in terms of exploration, construction as well as
communication in inquiry-student-centered learning parallels that of KM’s knowl-
edge acquisition, sharing, creation, and dissemination.
To validate our designs for the learning of Mathematics and CT, the opinions and
feedback of users from different age groups are obtained through user testings and
questionnaires. Although the target users of the Website are primary school students,
comments from young adults, adults, and senior citizens are taken into consideration
as they are the stakeholders who will influence and encourage the younger children.
Our findings indicate that the perceptions of different stakeholders toward CT-
based Mathematics learning have improved as the Website provides different ways
of learning to suit different contexts and needs. It is our hope that schools in Malaysia
will also see more creative outcomes to problems. Furthermore, we hope that by
enabling students to create, share, and discuss fun-based problems for others to
solve, they would feel encouraged to create and share knowledge, and eventually,
have a greater appreciation of Mathematics and CT.
Acknowledgements The first author would like to thank Prof. Tak-Wai Chan for his question "What
is Mathematics learning?", posed while she was a faculty member at the Graduate Institute of Network Technology,
National Central University, Taiwan, ROC. Thanks also to MIT’s Scratch for paving the way, the
2017 Computational Thinking conference for highlighting what CT is, and Dr. K. Daniel Wong for
prior collaboration on design thinking and CT.
References
Abelson, H. (2012). From computational thinking to computational values. In Special Interest Group
on Computer Science Education (SIGCSE) (pp. 239–240).
Adkins, S. S. (2016). The 2016–2021 Worldwide game-based learning market. Ambient Insight. In
Serious Play Conference. Retrieved April 30, 2017.
American Academy of Child and Adolescent Psychiatry. (2015). Video Games and Children:
Playing with Violence, Retrieved April 12, 2017, from http://www.aacap.org/AACAP/Families_
and_Youth/Facts_for_Families/FFF-Guide/Children-and-Video-Games-Playing-with-Violence-
091.aspx.
Biswas, C. (2015). The importance of Maths in everyday life. Guwahati News. Retrieved April
15, 2017, from http://timesofindia.indiatimes.com/city/guwahati/The-importance-of-maths-in-
everyday-life/articleshow/48323205.cms.
Brennan, K., & Resnick, M. (2012). New frameworks for studying and assessing the development of
computational thinking. In Proceedings of the 2012 Annual Meeting of the American Educational
Research.
Chan, T. W. (2010). How East Asian classrooms may change over the next 20 years. 25th Anniversary
Special Issue of Journal of Computer-Assisted Learning, January, 2010.
Chien, T. C., Chen, Z. H., & Chan, T. W. (2017). Exploring long-term behavior patterns in a book
recommendation system for reading. Educational Technology & Society, 20(2), 27–36.
Christopher, D. (2015). The negative effects of video game addiction. Livestrong.com. Retrieved
April 12, 2017, from http://www.livestrong.com/article/278074-negative-effects-of-video-game-
addiction.
Computationalthinkingcourse.withgoogle.com (2017). What is computational thinking?
Computational Thinking for Educators. Retrieved October 27, 2017, from https://
computationalthinkingcourse.withgoogle.com/unit.
Ertmer, P. A., Ottenbreit-Leftwich, A. T., Sadik, O., Sendurur, E., & Sendurur, P. (2012). Teacher
beliefs and technology integration practices: A critical relationship. Computers & Education,
59(2), 423–435.
Frappaolo, C. (2006). Knowledge management (2nd ed.). New York: Capstone Publishing.
Jashapara, A. (2011). Knowledge management: An integrated approach. London: Pearson.
Kazimoglu, C., Kiernan, M., Bacon, L., & MacKinnon, L. (2012). Learning programming at the
computational thinking level via digital game-play. Procedia Computer Science, 9, 522–531.
Kong, S. C. (2019). Partnership among schools in e-Learning implementation: Implications on
elements for sustainable development. Educational Technology & Society, 22(1), 28–43.
Kong, S. C., & Song, Y. (2013). A principle-based pedagogical design framework for developing
constructivist learning in a seamless learning environment: A teacher development model for
learning and teaching in digital classrooms. British Journal of Educational Technology, 44(6),
209–212.
Koster, R. (2004). Theory of fun for game design. New York: O’Reilly Media.
Lao, A. C. C., Cheng, H. N., Huang, M. C., Ku, O., & Chan, T. W. (2017, Jan). Examining moti-
vational orientation and learning strategies in Computer-Supported Self-Directed Learning (CS-
SDL) for Mathematics: The perspective of intrinsic and extrinsic goals. Journal of Educational
Computing Research, 54(8), 1168–1188.
Lee, C. S., Wong, J. W., & Ee, P. Y. (2017). Gamified mathematics practice: Designing with e-
commerce and computational concepts. In International Conference on Computational Thinking,
July 13–15, 2017, Hong Kong.
Lee C. S. & Wong K. D. (2017). An entrepreneurial narrative media-model framework for knowledge
building and open co-design. In IEEE SAI Computing, July 18–20, 2017, London, UK.
Maier, R. (2010). Knowledge management systems: Information and communication technologies
for knowledge management (3rd ed.). Heidelberg: Springer.
Nonaka, I., & Takeuchi, H. (1991). The knowledge-creating company: How Japanese companies
create the dynamics of innovation. Harvard Business Review, 69, 96–104.
Resnick, M. (2007). Sowing the seeds for a more creative society. In Learning and leading with
technology, December–January, 2007–2008, 18–22.
Snalune, P. (2015). The benefits of computational thinking. ITNOW, 57(4), 58–59. Retrieved Novem-
ber 4, 2017, from http://www.bcs.org/content/ConWebDoc/55416.
Song, Y., & Looi, C. K. (2012). Linking teacher beliefs, practices and student inquiry-based learning
in a CSCL environment: A tale of two teachers. International Journal of Computer-Supported
Collaborative Learning, 7(1), 129–159.
Togyer, J., & Wing, J. M. (2017). Research notebook: Computational thinking–what and why?
Carnegie Mellon School of Computer Science. Retrieved November 2, 2017, from https://www.
cs.cmu.edu/link/research-notebook-computational-thinking-what-and-why.
Venkatesh, V., & Davis, F. D. (2000). A theoretical extension of the technology acceptance model:
Four longitudinal field studies. Management Science, 46(2), 186–204.
Weintrop, D., Holbert, N. S., Horn, M., & Wilensky, U. (2016). Computational thinking in con-
structionist video games. International Journal of Game-Based Learning, 6(1), 1–17. Retrieved
November 2, 2017 from http://ccl.northwestern.edu/2016/WeintropHolbertHornWilensky2016.
pdf.
Wing, J. (2006). Computational thinking. Communications of the ACM, 49(3), 33–35.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0
International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing,
adaptation, distribution and reproduction in any medium or format, as long as you give appropriate
credit to the original author(s) and the source, provide a link to the Creative Commons license and
indicate if changes were made.
The images or other third party material in this chapter are included in the chapter’s Creative
Commons license, unless indicated otherwise in a credit line to the material. If material is not
included in the chapter’s Creative Commons license and your intended use is not permitted by
statutory regulation or exceeds the permitted use, you will need to obtain permission directly from
the copyright holder.
Part IV
Computational Thinking in K-12 STEM
Education and Non-formal Learning
Chapter 12
Defining and Assessing Students’
Computational Thinking in a Learning
by Modeling Environment
12.1 Introduction
Studies have shown that CT and STEM subjects share a reciprocal relationship.
There is evidence in the literature that students improve their understanding of
STEM topics when those topics are studied in a CT framework (e.g., Basu et al., 2017;
Sengupta et al., 2013; Weintrop et al., 2016a, b). Similarly, developing CT concepts and
practices in a science learning framework provides a context and a perspective for the
better understanding of CT. For example, the NRC (2010) report states that CT con-
cepts and practices are best acquired when studying them within domain disciplines.
If students were introduced to CT in programming contexts only, they might not
develop the skills to apply the generalized CT concepts across disciplines because
of the difficulties in the transfer of learning (NRC, 2011). Additionally, learning CT
12 Defining and Assessing Students’ Computational Thinking … 207
Figure 12.1 gives an overview of our STEM + CT framework. Central to the frame-
work is the Practices applied across STEM domains that use CT methods. There
are four types of practices: Systems Thinking, Problem-solving, Modeling and Sim-
ulation, and Data and Inquiry. The STEM + CT practices are the means to support
students’ synergistic learning and understanding of Domain Content and CT con-
cepts. The CT concepts include variables and assignments, sequential execution of
statements, loop structures, conditionals, functions, and events. These CT concepts
are fundamental to most programming environments.
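For readers less familiar with these terms, each listed CT concept can be illustrated by a one-line fragment. This is a generic Python sketch, not CTSiM's domain-specific block language:

```python
# Each CT concept from the framework, rendered as a minimal fragment.

speed = 0                      # variable and assignment
for step in range(3):          # loop structure
    speed = speed + 1          # sequential execution of statements
    if speed > 2:              # conditional
        speed = 0

def update_speed(current, delta):   # function (named abstraction over steps)
    return current + delta

handlers = {"tick": lambda s: update_speed(s, 1)}   # event mapped to a handler
speed = handlers["tick"](speed)
print(speed)
```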
We use Computational Thinking using Simulation and Modeling (CTSiM) as the
Learning Environment in our framework to help students foster the key Domain
and CT concepts and practices. CTSiM (Basu et al., 2017; Sengupta et al., 2013) is
an open-ended learning environment (OELE) that helps students achieve the syner-
[Fig. 12.1 residue: STEM + CT Practices diagram]
cycle in fish tanks (Basu et al., 2017; Sengupta et al., 2013; Zhang, Biswas, & Dong,
2017; Zhang & Biswas, 2018).
Figure 12.2 shows the user interface of four primary activities in CTSiM: (1)
reading the science library, (2) editing the conceptual model, (3) building the com-
putational model, and (4) comparing the behaviors generated by the students’ models
to those generated by an expert model. In this example, the student used computa-
tional blocks from a domain-specific block-based modeling language to implement
the update-speed-and-direction behavior of dye molecule agents. When executed,
the animation on the left depicts the motion of the molecules as modeled by the
student. Students can compare the motion of molecules in their simulation to the
motion of molecules in the expert simulation on the right side of the compare tab.
In addition, students can also compare the changes in key simulation variables over
time to the expert model. Note that students cannot see the implementation of the
expert simulation model; they can only observe the behaviors it generates.
Fig. 12.4 Following two students model building progression using the TED measure
Table 12.2 The assessment modality and evidence for the STEM + CT practices
Assessment modality and evidence
ST1 Summative: the correctness of the conceptual models;
ST2 Summative: the domain and CT pre-post questions and the correctness of the
computational models;
ST3 Summative: the domain and CT pre-post questions;
Formative: average computational edit chunk size
MS1 Summative: the correctness of the conceptual models;
Formative: SC-related CA metrics;
MS2 Summative: the correctness of the computational models
Formative: SC-related CA metrics;
MS3 Formative: SA-related CA metrics;
MS4 Formative: CA metrics associated with SA → SC, and SA → IA transitions
PS1 Formative: aggregated IA-, SC- and SA-related CA metrics;
PS2 Formative: computational model evolution and SC-related CA metrics;
PS3 Formative: aggregated SA and SC-related CA metrics; CA metrics associated with SA
→ SC (SA actions followed by SC actions) transitions
DI1 Formative: IA-related CA metrics (supporting action percentage, duration, etc.);
DI2 Summative: domain and CT pre-post questions;
Formative: inquiry learning activities in the system;
DI3 Summative: domain and CT pre-post questions;
Formative: inquiry learning activities in the system; SA- and SC-related CA metrics
DI4 Summative: answers to the problem-solving inquiry questions;
Formative: inquiry learning activities;
learning gains in the STEM domain content knowledge and key CT concepts; (2) the
synergy between STEM and CT learning; and (3) students’ application of STEM + CT
practices as well as their links to the learning gains and model-building performance.
We then computed pairwise correlations between the learning gains in CT, Accel-
eration, and Diffusion units, as well as the accuracy of the students’ computational
models in the Acceleration, Collision, and Diffusion units. Table 12.5 presents the
correlation coefficients (Spearman’s ρ) between all pairs of performance metrics.
The asterisks (*) indicate statistically significant correlations (p < 0.05).
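The pairwise computation behind a table like this can be sketched as follows. The per-student values are invented for illustration (they are not the study data), and tied values are handled with average ranks:

```python
import numpy as np
from itertools import combinations

def spearman_rho(x, y):
    """Spearman's rho: the Pearson correlation of the (average) ranks."""
    def ranks(v):
        v = np.asarray(v, float)
        order = np.argsort(v)
        r = np.empty(len(v))
        r[order] = np.arange(1, len(v) + 1)
        for val in np.unique(v):          # average the ranks of tied values
            mask = v == val
            r[mask] = r[mask].mean()
        return r
    rx, ry = ranks(x), ranks(y)
    rx -= rx.mean(); ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

# Hypothetical per-student metrics; in the study, each learning gain and
# each unit's model distance would be one such column.
metrics = {
    "ct_gain":        [2, 5, 1, 4, 3, 6],
    "diffusion_gain": [1, 4, 2, 5, 3, 6],
    "model_distance": [5, 2, 6, 1, 4, 3],
}
for a, b in combinations(metrics, 2):
    print(f"rho({a}, {b}) = {spearman_rho(metrics[a], metrics[b]):+.2f}")
```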
Students' learning gains in CT showed moderately high and statistically significant
correlations with the domain gains in the Acceleration (ρ = 0.32) and Diffusion
(ρ = 0.27) units, providing some evidence of synergistic learning of
STEM content knowledge and CT concepts. The fact that students who improved
more in their understanding of CT also achieved larger learning gains in the STEM
content supports the notion of synergistic learning through the CTSiM intervention.
Table 12.5 also shows that all of the computational model distances were nega-
tively1 correlated with the CT gains. Two out of three units (Collision and Diffusion)
had statistically significant correlations with the learning gains (ρ = −0.34 and ρ =
1A larger model distance from the correct model indicates a more incorrect model. Therefore, the
negative correlations between model distance and learning gains indicate that better
model-building ability is related to larger domain and CT learning gains.
−0.28), respectively. The relation between the acceleration learning gain and accel-
eration test score had a low correlation value and was not significant. In addition,
students' CT pre-test scores showed low correlations with model-building performance;
therefore, it is unlikely that students' prior CT knowledge was a significant
factor in their model building abilities. Overall, the results provide evidence of syner-
gistic learning of STEM domain and CT concepts as students worked on their model
building tasks, and it seemed to improve as students worked through different units.
The students’ computational model building performance was consistent across the
three units, as all computational model distances showed moderate correlation values
(and the correlations were statistically significant).
Finally, we show how Coherence Analysis (CA)-derived metrics (Kinnebrew et al., 2017)
can help characterize students' application of STEM + CT practices. Coherence
analysis provides a framework for defining a number of metrics related to individual
tasks students perform in the system (e.g., seeking information, building models, and
checking models) (Segedy et al., 2015). An introduction to the CA-derived metrics
was provided in Sect. 12.3.3. In previous work, we have developed a number of these
measures to analyze students’ work in CTSiM (Zhang et al., 2017). In this chapter,
we extend the collection of CA-derived measures to characterize the students’ use of
STEM + CT practices. Due to limited space, we only report the analyses on the
Diffusion unit. We first ran a feature selection algorithm to select features that
explained higher percentages of the total variance in the feature space. We assumed
these features would better distinguish students who used the STEM + CT practices
from those who did not. This approach also helped reduce the dimensionality of the
feature space for clustering.
216 N. Zhang and G. Biswas
The eight features obtained after feature selection and their descriptions
are summarized in Table 12.6. The features are described as percentages and were
computed with respect to the total number of actions performed by a student. For
example, the feature “Conceptual Model Edit Effort” of a student accounts for the
percentage of computational model edit actions among all her actions in CTSiM.
Table 12.6 also lists the STEM + CT practices aligned with the chosen features.
We then clustered the student data using the Expectation-Maximization (Gaussian
Mixture Model) algorithm to generate probabilistic models corresponding to each
cluster. The Calinski-Harabasz criterion was applied to select the number of clusters,
and applying this measure produced three clusters (Zhang et al., 2017).
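The cluster-selection procedure described here can be sketched with scikit-learn. The feature matrix below is a synthetic placeholder for the eight CA-derived features; the number of students and the cluster structure are invented for illustration.

```python
import numpy as np
from sklearn.metrics import calinski_harabasz_score
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Placeholder for 8 CA-derived features of 100 students, with some
# artificial cluster structure built in.
X = np.vstack([
    rng.normal(0.0, 1.0, size=(40, 8)),
    rng.normal(3.0, 1.0, size=(30, 8)),
    rng.normal(-3.0, 1.0, size=(30, 8)),
])

# Fit an EM-estimated Gaussian Mixture Model for each candidate k and
# keep the k with the best Calinski-Harabasz score.
best_k, best_score = None, -np.inf
for k in range(2, 7):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(X)
    score = calinski_harabasz_score(X, gmm.predict(X))
    if score > best_score:
        best_k, best_score = k, score
print(best_k, round(best_score, 1))
```

The per-cluster means and covariances of the chosen model (`gmm.means_`, `gmm.covariances_`) would then play the role of Table 12.7's per-cluster feature summaries.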
Table 12.7 reports the mean values and the standard deviations of the eight features
for the three clusters. We assumed the feature value probabilities for each cluster
would explain the differences in the use of STEM + CT practices among students.
We also report the results of single-factor analysis of variance (ANOVA) for each
had the highest number of testing and comparison (SA) actions (42% of their actions
were SA actions), and their computational model edit chunk sizes were the smallest
(again, the differences between the three groups were statistically significant). The
implication is that students in Cluster 1 tended to use more trial-and-error Problem-solving
and Modeling and Simulation practices. As shown in our previous work
(e.g., Basu et al., 2017; Segedy et al., 2015; Zhang & Biswas, 2018), these trial-and-error
approaches adversely affect students’ modeling tasks. These students also
had significantly larger computational model distances (mean distance = 7.9) when
compared to the other clusters (mean distances = 2.1 and 3.0, p < 0.01). In summary,
students in Cluster 1 had the least effective model-building behaviors, and they did not
achieve high domain learning gains. However, these types of tinkering behaviors were
not necessarily negative. Cluster 1 students remained engaged in their model-building
tasks, and their learning gains in CT were quite strong, although not significantly
higher than those of the students in Cluster 2.
Compared to students in Cluster 1, students in Cluster 2 spent less time reading
the domain library, yet their IA → SC transition rates were high compared to the
other groups. Correspondingly, they achieved the best computational model-building
performance (mean distance = 2.1 and average computational modeling distance
slope = −0.62) with the lowest percentages of SC and SA actions. The combined
SA (test + comparison) percentages were the lowest (20%), and the combined SC
(conceptual model-building + computational model-building) percentages were also
quite low (26%). This reflects good debugging and error-checking practices: they
did not need excessive SA actions while debugging because they were able to
pinpoint the issues with their computational models quickly. However, their average
computational edit chunk size was the largest, indicating that they did not frequently
switch from SC to other types of actions. The students in Cluster 2 had the highest
IA → SC rate (13%), while the other two clusters had rates of only 2–3%. As a result,
although these students did quite well on the practices related to Data and Inquiry
and Modeling and Simulation, their Problem-solving skills needed improvement.
This was also reflected by the fact that they had the lowest percentages of supported
computational model edit actions.
Students in Cluster 3 had very similar patterns of learning activity to Cluster 2
except in two features (average computational model edit chunk size and IA →
SC rates). A post hoc Tukey’s Honest Significant Difference test revealed that the
differences in these two features compared to Cluster 2 were statistically significant
(p = 0.02 and 0.03, respectively). The students in this cluster had high learning gains
in the domain and CT tests and were efficient model constructors as seen from the
slopes of their computational model TED values over time (they had small final
computational modeling distances and steep negative slopes similar to Cluster 2).
Students in this cluster performed the most computational model edits, and their
average model edit chunk sizes were in between those of Clusters 1 and 2, which
indicates they successfully applied problem-decomposition practices for the larger
model building tasks. They performed the smallest number of SA actions; however,
the percentage of supported computational model edits was the highest among all
clusters. This reflects proficient use of debugging and error-checking practices,
similar to Cluster 2. However, these students had a low IA → SC transition
rate, which implies that they could improve in their Data and Inquiry practices. As
a result, though they showed good learning gains in the STEM and CT concepts,
their computational model building performance was not as good as the students in
Cluster 2.
In summary, the clustering results revealed the differences in the use of STEM +
CT practices by the students. Although not all of the learning performance metrics
and CA metrics showed significant differences in the ANOVA results, analyzing
these differences in learning behaviors provided insights into the students’ learning
gains and model-building performance.
12.5 Conclusions
In this chapter, we stressed the benefit and importance of the synergistic learning
of STEM and CT concepts and practices. We extended our previous work on the
use of CT concepts and practices in STEM classrooms and refined the STEM + CT
framework to develop students’ synergistic STEM and CT learning. The STEM +
CT framework defines (1) the practices that are frequently applied in both STEM
and CT learning contexts, (2) the STEM domain content knowledge, (3) the set of
key CT concepts required for computational modeling, (4) the open-ended learning
environment, CTSiM, that fosters students’ learning of these STEM and CT concepts
and practices, and (5) the assessment framework that provides summative and formative
measures for evaluating student performance and learning behaviors. We then
used results from a recent CTSiM classroom study in the U.S. to demonstrate how
learning can be defined and analyzed with our STEM + CT framework. The results
show that students’ model-building performances were significantly correlated to
their STEM and CT learning, and students’ distinct model-building and problem-
solving behaviors in CTSiM were indicative of the model-building performance and
learning gains.
In future work, we will refine the set of key STEM + CT practices and the assess-
ment framework such that it will be compatible with other learning modalities, such
as collaborative learning. We will generalize the CTSiM framework and extend it to
different STEM domains, such as the C2STEM learning environment for high school
physics. Our overall goal is to study the feasibility and effectiveness of the CTSiM
learning environment in multiple contexts and domains.
References
Barr, D., Harrison, J., & Conery, L. (2011). Computational thinking: A digital age skill for everyone.
Learning & Leading with Technology, 38(6), 20–23.
Barr, V., & Stephenson, C. (2011). Bringing computational thinking to K-12: What is Involved and
what is the role of the computer science education community? ACM Inroads, 2(1), 48–54.
Basawapatna, A., Koh, K. H., Repenning, A., Webb, D. C., & Marshall, K. S. (2011). Recogniz-
ing computational thinking patterns. In Proceedings of the 42nd ACM Technical Symposium on
Computer Science Education—SIGCSE ’11, 245. http://doi.org/10.1145/1953163.1953241.
Basu, S., Biswas, G., & Kinnebrew, J. S. (2017). Learner modeling for adaptive scaffolding in a
computational thinking-based science learning environment. User Modeling and User-Adapted
Interaction, 27.
Basu, S., & Biswas, G. (2016). Providing adaptive scaffolds and measuring their effectiveness in
open-ended learning environments. In 12th International Conference of the Learning Sciences
(pp. 554–561). Singapore.
Bille, P. (2005). A survey on tree edit distance and related problems. Theoretical Computer Science,
337(1–3), 217–239.
Bransford, J. D., Brown, A. L., & Cocking, R. R. (2000). How people learn: Brain, mind, experience,
and school. Washington, DC: National Academies Press.
Brennan, K., & Resnick, M. (2012, April). New frameworks for studying and assessing the devel-
opment of computational thinking. In Proceedings of the 2012 Annual Meeting of the American
Educational Research Association, Vancouver, Canada (pp. 1–25).
Cheng, B., Ructtinger, L., Fujii, R., & Mislevy, R. (2010). Assessing systems thinking and complexity
in science (large-scale assessment technical report 7). Menlo Park, CA: SRI International.
García-Peñalvo, F. J., Reimann, D., Tuul, M., Rees, A., & Jormanainen, I. (2016). An overview of
the most relevant literature on coding and computational thinking with emphasis on the relevant
issues for teachers. Belgium: TACCLE3 Consortium.
Grover, S. (2015). “Systems of Assessments” for deeper learning of computational thinking in K-12.
In Proceedings of the 2015 Annual Meeting of the American Educational Research Association
(pp. 15–20).
Grover, S., & Pea, R. (2013). Computational thinking in K-12: A review of the state of the field.
Educational Researcher, 42(1), 38–43.
International Society for Technology in Education (ISTE), & Computer Science Teachers Associa-
tion (CSTA). (2011). Operational definition of computational thinking. Retrieved from
http://www.iste.org/learn/computational-thinking.
Kinnebrew, J. S., Segedy, J. R., & Biswas, G. (2017, April). Integrating model-driven and data-
driven techniques for analyzing learning behaviors in open-ended learning environments. IEEE
Transactions on Learning Technologies 10(2), 140–153.
Mislevy, R. J., Almond, R. G., & Lukas, J. F. (2003). A brief introduction to evidence-centered
design. ETS Research Report Series, 2003(1), 1–29.
National Research Council (U.S.). (2010). Report of a workshop on the scope and nature of com-
putational thinking. Washington, D.C: National Academies Press.
National Research Council (U.S.). (2011). Report of a workshop on the pedagogical aspects of
computational thinking. Washington, D.C: National Academies Press.
NGSS Lead States. (2013). Next generation science standards: For states, by states. Washington,
DC: The National Academies Press.
Piech, C., Sahami, M., Koller, D., Cooper, S., & Blikstein, P. (2012). Modeling how students learn to
program. In Proceedings of the 43rd ACM Technical Symposium on Computer Science Education
(pp. 153–160).
Repenning, A., Ioannidou, A., & Zola, J. (2000). AgentSheets: End-user programmable simulations.
Journal of Artificial Societies and Social Simulation, 3(3), 351–358.
Segedy, J. R., Kinnebrew, J. S., & Biswas, G. (2015). Using coherence analysis to characterize self-
regulated learning behaviors in open-ended learning environments. Journal of Learning Analytics,
2(1), 13–48.
Sengupta, P., Kinnebrew, J. S., Basu, S., Biswas, G., & Clark, D. (2013). Integrating computational
thinking with K-12 science education using agent-based computation: A theoretical framework.
Education and Information Technologies, 18(2), 351–380.
Weintrop, D., Beheshti, E., Horn, M., Orton, K., Jona, K., Trouille, L., et al. (2016a). Defining
computational thinking for mathematics and science classrooms. Journal of Science Education
and Technology, 25(1), 127–147.
Weintrop, D., Holbert, N., Horn, M. S., & Wilensky, U. (2016b). Computational thinking in con-
structionist video games. International Journal of Game-Based Learning, 6(1), 1–17.
Werner, L., Denner, J., Campe, S., & Kawamoto, D. C. (2012). The fairy performance assessment:
Measuring computational thinking in middle school. In The 43rd ACM Technical Symposium on
Computer Science Education (pp. 215–220).
Wing, J. M. (2006). Computational thinking. Communications of the ACM, 49(3), 33–35.
Wing, J. M. (2011). Research Notebook: Computational Thinking–What and Why?. Retrieved
January 1, 2017 from https://www.cs.cmu.edu/link/research-notebook-computational-thinking-
what-and-why.
Wilensky, U. (1999). NetLogo. Center for Connected Learning and Computer-Based Modeling,
Northwestern University, Evanston. Retrieved January 1, 2017 from http://ccl.northwestern.edu/
netlogo.
Wilensky, U., & Resnick, M. (1999). Thinking in levels: A dynamic systems approach to making
sense of the world. Journal of Science Education and Technology, 8(1), 3–19.
Zhang, N., & Biswas, G. (2017). Assessing students’ computational thinking in a learning by
modeling environment. In S.-C. Kong (Ed.), Proceedings of the International Conference on
Computational Thinking Education (pp. 11–16). Hong Kong: The Education University of Hong
Kong.
Zhang, N., Biswas, G., & Dong, Y. (2017). Characterizing students’ learning behaviors using unsu-
pervised learning methods. In E. André, R. Baker, X. Hu, M. M. T. Rodrigo, & B. du Boulay
(Eds.), Artificial intelligence in education. AIED 2017. Lecture Notes in Computer Science
(Vol. 10331, pp. 430–441). Cham: Springer.
Zhang, N., & Biswas, G. (2018). Understanding students’ problem-solving strategies in a synergistic
learning-by-modeling environment. In C. Penstein Rosé et al. (Eds.), Artificial intelligence in
education. AIED 2018. Lecture Notes in Computer Science (Vol. 10948). Cham: Springer.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0
International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing,
adaptation, distribution and reproduction in any medium or format, as long as you give appropriate
credit to the original author(s) and the source, provide a link to the Creative Commons license and
indicate if changes were made.
The images or other third party material in this chapter are included in the chapter’s Creative
Commons license, unless indicated otherwise in a credit line to the material. If material is not
included in the chapter’s Creative Commons license and your intended use is not permitted by
statutory regulation or exceeds the permitted use, you will need to obtain permission directly from
the copyright holder.
Chapter 13
Roles, Collaboration,
and the Development of Computational
Thinking in a Robotics Learning
Environment
13.1 Introduction
Computational Thinking (CT) reflects the habits of mind engaged in by computer
scientists, including, but not limited to, “…solving problems, designing systems, and
understanding human behavior, by drawing on the concepts fundamental to com-
puter science” (Wing, 2006). CT can be thought of as a broad foundation consisting
of the heuristics used by computer scientists and as a way to think about the diverse
thinking skills associated with computing. The Computer Science Teachers Associa-
tion (CSTA) has disseminated a definition of CT as including: formulating problems
in ways that enable us to use a computer to solve them, and automating solutions
through algorithmic thinking. They further indicate that these skills are important
because they create a tolerance for ambiguity, allow for persistence in working with
difficult problems, and provide an opportunity to practice collaborating with oth-
ers to achieve a common goal (Computer Science Teachers Association, 2016). An
understanding of the aspects of teaching CT is important because it has been argued
that CT skills are essential skills that should be taught to all school-aged learners
(Lee, et al., 2011). Not only because these skills are used by computer scientists, but
influence an abundance of other fields (Wing, 2006) and people with a command
of these competencies will be better positioned to participate in a world where the
computer is ubiquitous (Grover & Pea, 2013).
When working collaboratively, students take on roles as the group seeks to regulate
its work. Group roles are an important element of all collaborative learning, but
especially in a Computer-Supported Collaborative Learning (CSCL) environment
(Hoadley, 2010), such as robotics. According to Strijbos and De Laat (2010), roles
come into being in one of two ways: scripted or emergent. Scripted roles are those
that are assigned by a teacher to facilitate the process of collaborative learning, and
include precise instructions on how students should interact with one another in
the group setting to promote learning. This is contrasted with emergent roles that
“emerge spontaneously or are negotiated spontaneously by group members without
interference by the teacher or researcher” (p. 496).
Research on scripting collaborative roles for students demonstrates mixed results.
While there is a fair amount of support for the efficacy of providing students with
scripts for how to interact with one another while working collaboratively (Fischer,
Kollar, Mandl, & Haake, 2007; O’Donnell, 1999), there is also research that
indicates that scripts lose their efficacy when students are engaged in collaborative
learning over longer periods (Rummel & Spada, 2007), and that other approaches,
for example, students observing a model of collaborative learning, are more effective
than scripts in enabling successful collaborative learning in student groups (Rum-
mel & Spada, 2009). Moreover, providing students scripts for collaborative learning
interactions has been criticized as overly directive, depriving students of the oppor-
tunity to engage in the type of thinking that will lead to creativity in problem-solving
(Dillenbourg, 2002).
Meanwhile, robotics learning environments are multidimensional problem spaces
that afford multiple roles that may be taken up in an emergent fashion, including
the roles of programmer, builder, and analyst. The multiple tools in this problem space
can create a situation where students vie for control of the tools through adopting
certain roles (Jones & Issroff, 2005). Such vying for control can create tension in the
group and interfere with opportunities to learn (Sullivan & Wilson, 2015). Therefore,
it is not only the functions of the roles that are important; the shifting of roles and
the negotiation of roles may also affect student learning (Sullivan & Wilson, 2015;
Wise, Saghafian, & Padmanabhan, 2012).
In collaborative learning settings, work is meant to be done by the entire group
and through these social interactions new knowledge is built (Vygotsky, 1978). However,
collaborative groups do not always work well together; hence, these learning
interactions may not take place (Dillenbourg, 1999). Successful collaborative group
work requires ongoing, well-coordinated group interactions (Barron, 2003). Barron
has argued that the accomplishment of this coordination occurs through a complex
array of cognitive and social tasks, which may be categorized into three areas of
collaborative interaction: shared task alignment, joint attention, and mutuality. Col-
laborative groups may demonstrate varying degrees of these interactions, indicating
both higher and lower levels of coordination. In this way, the level of coordination
228 P. Kevin Keith et al.
will have an impact on the group’s learning outcomes. Recent research has shown that
students who have adopted emergent roles in group work can successfully achieve
higher levels of coordination (Sullivan & Wilson, 2015). Given this, and the fact that
emergent roles may better support creative thinking (Dillenbourg, 2002) scripted
collaborative learning was avoided. In our study, students were asked to collaborate,
they were not assigned specific roles, but they had access to the three primary roles
associated with the activity: programmer, builder, and analyst. Here, we examine
how the specific emergent roles the students inhabit impacts both their collaborative
interactions and their engagement in computational thinking.
This research proceeds from the Vygotskian perspective (Vygotsky, 1978), which holds that
through direct interaction with the robotics tools and engagement in collaborative
group discussions, students will develop knowledge and ideas about computation.
Here, we aim, through descriptive and qualitative analysis, to characterize the nature
of girls’ computational thinking as they “do” robotics. This analysis will serve as a
starting point for designing effective curricular and pedagogical scaffolds for sup-
porting girls’ development of computational thinking abilities and computer science
knowledge.
Our focus is on how emergent roles in the multidimensional problem space of
robotics relates to collaboration and to different types of computational thinking.
For the purposes of this study, we define three working modes: working individually,
working cooperatively (defined as working jointly toward a solution through a divide
and conquer approach), and working collaboratively (defined as working jointly
toward a solution through discussion and dialogue). This analysis is accomplished
through systematic observational methods and sequential analysis, which allows for
the understanding of behavior as it unfolds over time (Bakeman & Quera, 2011).
The goal of this research is to improve our knowledge of how the emergent roles
enabled in a robotics learning space are taken up and how they relate to collaborative
interactions, engagement in computational thinking and learning outcomes as mea-
sured by the difficulty of challenges undertaken. This knowledge will assist teachers
and curriculum developers in creating robotics learning settings that best support
student learning in computer science. Our specific research questions are presented
below.
RQ1. What are the role transitions made by novice programmers in this study?
RQ2. How do different roles in a robotics programming environment relate to dif-
ferent types of collaboration?
RQ3. How do different types of collaboration in a robotics programming environment
relate to different types of computational thinking?
RQ4. How does engagement in specific types of computational thinking relate to
student learning of robotics as measured by the difficulty of challenges undertaken?
13 Roles, Collaboration, and the Development of Computational … 229
13.2 Methods
This observational case study took place at a 1-day, all-girl introduction to robotics
event called “Girls Connect.” The students spent 2 hours learning how to design and
program the robot (LEGO® EV3); they then spent the remainder of the day working on
solving robotics challenges. The participants in the workshop included 17 girls, ages
8–13 (M = 11.725) who attended five different schools in New England. Purposeful
sampling was used to select students from various backgrounds and geographic areas
from the pool of students who volunteered for the event. Students were drawn from
schools that were struggling academically; four of the five schools that sent student
participants were not meeting state standards for student academic performance. All
of the participants were working with robotics for the first time. The “Girls Connect”
event was designed to introduce girls to the FIRST LEGO League (FLL)® in order to
stimulate their interest in the FLL and robotics. The workshop featured the FLL’s 2011
challenge, “Food Factor,” which includes 11 missions of varying degrees of
difficulty. The students were allowed to select the mission(s) they wished to solve. In
the morning of the 1-day workshop, the girl participants did team-building exercises,
learned a little about programming, learned about the Food Factor challenge, and
built a basic robotic car from blueprints. After lunch, the girls devoted themselves to
solving the robotics missions that were laid out on the Food Factor challenge board. In
our study, the aggregate of missions attempted across all groups was seven. In other
words, collectively the six groups of girls attempted to solve seven of the missions.
The students were divided into six teams (five teams of three and one team of two);
girls from the same schools were on the same team. Each of the six teams was given
color-coded t-shirts to wear for the day. For example, one team wore green t-shirts,
one yellow, etc. The t-shirts bore the “Girls Connect” logo and were presented both
as commemorative gifts to participating girls and also to function as an aid to the
researchers in keeping track of who was on which team as the girls roamed about
the room.
The data analyzed in this study are drawn from two of the teams who participated.
For purposes of analysis, we selected a team that appeared to demonstrate higher lev-
els of coordination (worked collaboratively), and a team that appeared to demonstrate
lower levels of coordination (worked in parallel), based on viewing of the videotapes.
The purpose of making this selection is to aid analysis of the relationship of role to
collaboration and computational thinking. By selecting a more collaborative team
and a less collaborative team, we are better able to delineate the relationship of our
constructs of interest. The data consists of the afternoon problem-solving activities
and discussions, and includes 3 h of video for each of two teams. Pseudonyms of
Anna, Becky, and Cindy were used to identify each of the girls on the light blue team,
and Janet, Kaylee, and Lisa on the dark blue team. Consent was obtained from the
parents of the participants and assent from the participants themselves.
We collected audio and video data at the 1-day event. Each of the girl participants
in the study wore a wireless microphone. Each group of girls had their own worktable,
a LEGO Mindstorms EV3 robotics kit, and a laptop computer. Two challenge arenas
were set up in the room so that the girls could test their solutions (see Fig. 13.2). A
stationary video camera was mounted at each group worktable to capture the building
and programming of the robots. Two additional cameras were focused on each of the
challenge arenas. From these data, we created a video and audio recording of each
group’s activity and discussion for the day as they moved between their individual
worktables and the challenge arenas. See Fig. 13.3 for an illustration of the room
and camera setup. A professional transcriptionist transcribed all group talk. We also
ran a screen capture program on each group’s laptop. In this way, we collected all
of the robotics programming activity engaged in by each group. This data includes
the final robotics program(s) created by each group. The data analysis unfolded over
four distinct phases including: (1) behavior analysis of the roles students enacted and
collaborative interactions observed; (2) discourse analysis of student talk related to
computational thinking; (3) descriptive statistics related to the observed roles and
collaborations; and (4) learning outcomes analysis based on a challenge difficulty
score, role enactment and observed collaboration.
The first phase was to record onset and offset times of certain behaviors as observed
on the videos. The role of the student and the type of collaboration were coded
as continuous and mutually exclusive. This means that all 3 h of video data were
coded (for each student) and no overlapping of the codes occurred. The unit of analysis
for this phase of data coding was “shift of focus.” In other words, when students
switched their attention from one activity to another activity, we initiated a new code
(Chi, 1997). The codes for the emergent roles were derived from attending the event
and watching the video recording numerous times and included Programmer, Tester,
Builder, Analyst, and Other.
The codes for collaboration are based on Forman and Cazden’s (1985) codes for
participation in groups as markers of coordination, discussed above. They included
Parallel (little to no focus on the group), Cooperative (Working together, focused
on own results), Collaborative (Working together and sharing ideas), and External
(Focused on something outside of the group).
Inter-rater reliability was assessed by training a second coder. A timed-event
alignment kappa (Bakeman & Quera, 2011) was used, since it allows for tolerances
between onset and offset times. Results for inter-rater reliability were κ = 0.83 for
role and κ = 0.92 for collaboration. In order to achieve these kappas, the coders
reviewed disagreements in their coding and resolved conflicts.
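For illustration, plain Cohen's kappa on two aligned code streams can be computed with scikit-learn. This is a simplification: the timed-event alignment kappa used here additionally tolerates small onset/offset differences between raters. The rater data below are hypothetical.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical per-second role codes from two raters over 100 seconds.
# The raters mostly agree but place the code boundaries slightly apart.
rater1 = ["Programmer"] * 40 + ["Tester"] * 30 + ["Builder"] * 30
rater2 = ["Programmer"] * 38 + ["Tester"] * 34 + ["Builder"] * 28

# Chance-corrected agreement between the two streams.
kappa = cohen_kappa_score(rater1, rater2)
print(round(kappa, 2))
```

With boundary disagreements of a few seconds, plain kappa already comes out high; a timed-event kappa with an onset/offset tolerance would score these streams even more favorably.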
The second phase focused on the discourse related to the robotics activity. First, we
segmented the 3-hour conversations into individual Troubleshooting Cycles (TSC) for
each group. As discussed above, the troubleshooting cycle is iterative and consists of
building the device, writing a program, testing the program, diagnosing and debugging
the program, and revising the program, or revising the design of the built device
(Sullivan, 2011). The transcripts were then coded using a priori codes we developed
based on the work of Wing (2006) and Barr and Stephenson (2011). These codes
have been synthesized to be relevant for the activities and type of behavior expected
and observed for novice programmers in a robotics environment. Table 13.1 presents
the coding scheme.
Two undergraduate students were trained in the use of the coding scheme and
all utterances from both groups over the course of the challenge period (3 h in
the afternoon) were coded. This chapter is concerned with students’ computational
thinking; therefore, all comments that were not related to computational thinking
were coded as Other. Inter-rater reliability was calculated utilizing Krippendorff’s
alpha (Krippendorff, 2004). The result for the discourse coding was α = 0.901,
which indicates that inter-rater reliability for this study was high.
The first step in phase III was to calculate descriptive statistics to summarize the
coded observations of student behavior. The total time and relative duration were
calculated for role and collaboration. Then, once we had segmented, coded, and
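The total-time and relative-duration computation can be sketched from onset/offset records like those produced in the first coding phase; the records and durations below are hypothetical.

```python
from collections import defaultdict

# Hypothetical (code, onset, offset) records in seconds for one student.
records = [
    ("Programmer", 0, 120),
    ("Tester", 120, 180),
    ("Programmer", 180, 300),
    ("Builder", 300, 360),
]

# Total time per role code.
totals = defaultdict(int)
for code, onset, offset in records:
    totals[code] += offset - onset

# Relative duration: share of the coded session spent in each role.
session = sum(totals.values())
relative = {code: t / session for code, t in totals.items()}
print(totals["Programmer"], round(relative["Programmer"], 2))  # 240 0.67
```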
scored the discourse data, we tabulated the instances of the CT codes in each TSC
for each group. As a result, we created one table of data for each group. The rows
of the tables were the eight CT codes described above. The columns listed out all
the TSCs of the group. In each cell of the tables, we put the number of instances we
observed for each CT code at each TSC. Using those tables as input, we conducted
one-way ANOVAs to compare the means of the number of instances of the CT codes
across all the TSCs. For each group, the null hypothesis we tested was whether the
discussions underlying the codes occurred at the same frequency on average across
the TSCs. To visually inspect the frequency differences among the groups, we then
created boxplots for the instance counts of each CT code for each group. We used
those plots to visually assist the comparison of how frequent the discussions were
in each group.
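One reading of this ANOVA setup (CT codes as the groups, per-TSC instance counts as the observations) can be sketched with SciPy; the counts below are synthetic, and only three of the eight codes are shown, with hypothetical code names.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(2)
# Synthetic instance counts of three CT codes across ten TSCs for one group.
counts = {
    "Problem decomposition": rng.poisson(5, size=10),
    "Debugging": rng.poisson(5, size=10),
    "Abstraction": rng.poisson(1, size=10),
}

# One-way ANOVA: do the codes occur at the same average frequency per TSC?
f_stat, p_value = f_oneway(*counts.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```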
The second step in phase III was to calculate the joint probabilities of roles and
types of collaboration. A joint probability is the probability that two events occur
together. This is also called Lag(0) analysis (Bakeman & Quera, 2011), since the
calculation describes the co-occurrence of events in the same time frame.
Transitional probabilities, or Lag(1) (Bakeman & Quera, 2011), were also calculated.
These show the probability of one event following another event.
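A minimal sketch of the Lag(1) computation on one coded event stream, assuming a simple frequency estimate of each transition; the role sequence is hypothetical.

```python
from collections import Counter, defaultdict

def transitional_probabilities(events):
    """Lag(1): P(next code | current code) from one coded event stream."""
    pair_counts = Counter(zip(events, events[1:]))  # adjacent code pairs
    totals = Counter(events[:-1])                   # times each code is followed
    probs = defaultdict(dict)
    for (a, b), n in pair_counts.items():
        probs[a][b] = n / totals[a]
    return probs

# Hypothetical role stream for one student.
seq = ["Builder", "Programmer", "Tester", "Programmer", "Tester", "Analyst"]
p = transitional_probabilities(seq)
print(p["Programmer"])  # {'Tester': 1.0}
print(p["Tester"])      # {'Programmer': 0.5, 'Analyst': 0.5}
```

The Lag(0) case is analogous: instead of counting adjacent pairs within one stream, one counts co-occurrences of the role code and the collaboration code in the same time frame.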
The last phase of the analysis included examining the difficulty of the programs that
each of the two groups attempted as a proxy for the level of learning each group had
achieved. We reasoned that the more confident each of the teams felt over the course
of the afternoon, the more likely they would be to select more challenging tasks to
accomplish on the game board. We developed a rubric to score these programs. This
rubric is based on the difficulty of the mission tasks presented on the FLL challenge
arena. There were 11 missions on this particular arena. However, the students in
our study only attempted seven of those missions. The missions (also referred to as
challenges in this chapter), consisted of moving the robot to different sections of the
game board in order to act upon materials placed on the game board. In the Food
Factor challenge, students learn about bacteria, overfishing, and other aspects of food
safety and food production. The missions relate to these ideas and consist of moving
materials, maneuvering around materials, and collecting materials.
The first author of this chapter, a computer science professor and an expert in
the field, calculated the difficulty of each of the missions by creating
an optimal solution for each task. Because the students were novices, with
no prior programming or robotics experience, the challenge solutions were simple
and relied primarily on moving the robot forward and backward and turning the
robot. These elements, forward/backward movement and turning, could be programmed by
234 P. Kevin Keith et al.
adjusting power to the motors, and/or setting the number of rotations of the attached
wheels on the robotic device. We have annotated an example of a student program,
drawn from the data, to highlight the nature of student programs (see Fig. 13.4).
The annotations highlight the meaning of the icons in each block, as well as provide
an English language translation of the meaning of each block as arranged in the
program. After developing the optimal solutions, we compared the relative
difficulty of each solution based on three variables: (1) the number of
programmed moves forward/backward needed, (2) the number of turns needed, and
(3) the distance to be crossed on the board to attempt that mission (based
on the number of wheel rotations). We reasoned that attempting a mission that was far away
required more precision in navigating the board to arrive at the mission. Therefore,
for the far missions, we doubled the sum of turns + moves to account for the added
difficulty presented by the distance. Student programs were then scored based on
the difficulty of the mission attempted. Table 13.2 presents the scoring rubric
for the seven missions attempted by participants in the study.
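Under the scoring scheme just described, a mission's difficulty score might be sketched as below; the actual per-mission values appear in Table 13.2, so the numbers here are only illustrative:

```python
def mission_difficulty(moves, turns, far):
    """Score a mission from its optimal solution: moves + turns, doubled if the
    mission is far from the robot's base (to account for the added precision
    needed to navigate the board)."""
    base = moves + turns
    return 2 * base if far else base

# Hypothetical missions: (forward/backward moves, turns, far-from-base?).
print(mission_difficulty(3, 2, far=False))  # 5
print(mission_difficulty(3, 2, far=True))   # 10
```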
In the final step, we focused on examining the relationship between the frequency
of the discussions most relevant to computational thinking and the overall
scores for the groups. These were the discussions that received the codes A, ATO,
ATV, D, and DO. For each group, we computed the frequency means of the five codes
across all the TSCs. Then, we plotted those means along with the overall program
scores in the same scatter plot for each group. To guide the interpretation of the
relationship, we used different colors and symbols for each group and each CT code.
13.3 Results
In answer to research question #1—What are the role transitions made by novice pro-
grammers in this study?, we present the results of our behavior analysis of transitional
probabilities; this gives us an overall view of how the students organized themselves.
Transitional probabilities in this analysis are indicated by the arrow (direction) and
percentage (probability). The transitional probabilities in this analysis specify the
probability of the next role taken up by the student given their current role. For
example, in Fig. 13.5, there is a 76% probability that, for Anna, the role
succeeding programmer would be tester. The light blue team (Fig. 13.5) took a divide-and-conquer
approach to solving the programming challenges. This is evidenced by the
lack of continuity in their paths between roles. Each student has a unique path,
indicating variation in student activity. Anna took on the primary role of programmer
and Becky the role of builder, which left Cindy with no primary individual role, although
she did take part in the collaborative role of analyst. The dark blue team (Fig. 13.6)
adopted a more collaborative strategy, highlighted by the similarity of
their role transitions. In other words, in visually examining the role transitions for
the dark blue team, there is less variation, indicating that the students were jointly
involved in sharing the roles and collaborating.
13.3.2 Collaboration
that she would be working collaboratively. Overall, and for both groups, the two roles
of tester and analyst do stand out as having a higher probability of being collaborative
for all students.
As can be seen from this table, the dark blue team attempted two missions that
were rated as more difficult, whereas the light blue team attempted two missions
that were relatively simpler. We take this as evidence of the knowledge the dark blue
team built, working collaboratively over the course of the day. With a greater level
of knowledge shared among the group, the group chose to attempt more challenging
missions.
The final analysis consists of a scatter plot, which visualizes the relationship of
student scores on their final programs (y-axis) and the amount and type of CT talk
each group engaged in during each TSC (x-axis). The scatter plot is presented in
Fig. 13.10.
Fig. 13.10 Scatter plots for the most frequent CT codes and program scores
The scatter plot indicates that the dark blue team, who had the highest program-
ming score, also had the highest mean for all of the computational thinking codes,
except for the design code. This team spent a lot of time discussing the algorithm
they were working to develop. Meanwhile, the light blue team spent a lot of time
discussing the design of the robot; the design code relates to the robot's physical elements.
13.4 Discussion
the execution of the program, and then jointly enact the analyst role in discussing
the executed program. This result is therefore expected rather than surprising. It is the
external and manipulative nature of the robotic activity that supports robust cognitive
engagement (Sullivan & Heffernan, 2016).
Collaboration in groups is difficult, but due to the expense of educational robotics,
many teachers are forced to create groups. In order to increase the probability that
groups will collaborate, scaffolding is necessary (Winne, Hadwin, & Perry, 2013).
However, in order for scaffolding to be successful, we must understand the conditions
for interactions and the interactions that are indicative of learning (Hoadley, 2010).
This research illustrates the importance of scaffolding role rotation in robotics activ-
ity. While some may be tempted to interpret these findings as support for a scripted
approach, we resist such an interpretation. Rather than providing students with scripts
for how to interact, we believe that a simple enforcement of the idea of role rotation
could support a group in building enough intersubjective understanding to develop
high levels of coordination in the group. Robotics is a very creative activity (Sul-
livan, 2017). Groups should be supported to share roles, but also given freedom to
explore problem solutions in an authentic way. Perhaps it is time to think about a
middle ground between completely emergent roles and highly scripted roles. We
would advocate for a middle ground in the case of robotics and other highly creative
activities. For example, one way to structure this would be to have students intention-
ally switch roles after a certain amount of time. In this way, the roles would rotate
among students and would support collaboration as students would gain knowledge
and share it with one another. Another idea would be to introduce a wide-screen
multi-touch display, in place of a laptop, for programming. It is possible that such a
technology would create more access to the programming tool and allow for greater
conversations among students.
Future research should focus on providing varying levels of scaffolded support
to students working with robotics. Is it enough to enforce a role rotation within
groups, or will students need more support to engage in collaboration? We believe
that collaboration within a group is influenced by a multitude of factors. However, we
also believe that setting certain conditions for participation may enable higher levels
of coordination, for example enforcing role rotation. Future research studies could
create conditions that include role rotation and those that support emergent roles.
Engagement in collaboration and the development of computational thinking could
then be measured in each of the conditions. Moreover, environments where students
are developing computational thinking abilities often have a material technological
component. Future research should examine how scaffolding student participation
by prescribing shared control of the material artifacts affects student discussions and
the ability to coordinate the work of the group. For example, as mentioned above, we
think that a wide screen multi-touch display may shift collaborative interactions by
virtue of allowing students to more closely observe programming activities. In this
scenario, students who are not directly manipulating the software could still mean-
ingfully participate through developing a deeper understanding of the programming
blocks, thereby improving their ability to reason about the program and recommend
possible changes to programs by virtue of close observation. Again, such research
may include conditions (widescreen, multi-touch display, vs. laptop). Such studies
will improve our ability to provide robust collaborative robotics learning environ-
ments for students.
References
Ali, S. R., Brown, S. D., & Loh, Y. (2017). Project HOPE: Evaluation of health science career
education programming for rural Latino and European American youth. The Career Development
Quarterly, 65, 57–71.
Bakeman, R., & Quera, V. (2011). Sequential analysis and observational methods for the behavioral
sciences. New York: Cambridge University Press.
Barr, V., & Stephenson, C. (2011). Bringing computational thinking to K-12: What is involved and
what is the role of the computer science education community? ACM Inroads, 2(1), 48–54.
Barron, B. (2003). When smart groups fail. Journal of the Learning Sciences, 12(3), 307–359.
Chi, M. T. H. (1997). Quantifying qualitative analysis of verbal data: A practical guide. Journal of
the Learning Sciences, 6(3), 271–315.
CSTA. (2016). Computational Thinking. Retrieved from https://csta.acm.org/Curriculum/sub/
CurrFiles/CompThinkingFlyer.pdf.
Dillenbourg, P. (1999). Collaborative learning: Cognitive and computational approaches. New
York, NY: Elsevier Science.
Dillenbourg, P. (2002). Over-scripting CSCL: The risks of blending collaborative learning with
instructional design. In P. A. Kirschner (Ed.), Three worlds of CSCL. Can we support CSCL
(pp. 61–91). Heerlen, NL: Open Universiteit Nederland.
Fischer, F., Kollar, I., Mandl, H., & Haake, J. (Eds.). (2007). Scripting computer-supported com-
munication of knowledge. Cognitive, computational, and educational perspectives. New York,
NY: Springer.
Forman, E. A., & Cazden, C. B. (1985). Exploring Vygotskian perspectives in education: The
cognitive value of peer interaction. In J. V. Wertsch (Ed.), Culture, communication, and cognition:
Vygotskian perspectives (pp. 323–347). New York: Cambridge University Press.
Grover, S., & Pea, R. (2013). Computational thinking in K-12: A review of the state of the field.
Educational Researcher, 42(38), 38–43. https://doi.org/10.3102/0013189X12463051.
Hoadley, C. (2010). Roles, design, and the nature of CSCL. Computers in Human Behavior, 26(4),
551–555.
Ji, P. Y., Lapan, R., & Tate, K. (2004). Vocational interests and career efficacy expectations in
relation to occupational sex-typing beliefs for eighth grade students [Electronic version]. Journal
of Career Development, 31(2), 143–154.
Jones, A., & Issroff, K. (2005). Learning technologies: Affective and social issues in computer-
supported collaborative learning. Computers & Education, 44(4), 395–408.
Krippendorff, K. (2004). Content analysis: An introduction to its methodology (2nd ed.). Thousand
Oaks, CA: Sage Publications.
Lee, I., Martin, F. L., Denner, J., Coulter, R., Allan, W., Erickson, J., Malyn-Smith, J., & Werner,
L. (2011). Computational thinking for youth in practice. ACM Inroads, 2(1), 32–37. https://doi.
org/10.1145/1929887.1929902.
National Instruments (2014). LabVIEW object-oriented programming FAQ. Retrieved from http://
www.ni.com/white-paper/3574/3n/.
National Science Foundation (2016). Building a foundation for CS for all. https://www.nsf.gov/
news/news_summ.jsp?cntn_id=137529.
O’Donnell, A. M. (1999). Structuring dyadic interaction through scripted cooperation. In A. M.
O’Donnell & A. King (Eds.), Cognitive perspectives on peer learning (pp. 179–196). Mahwah,
NJ: Erlbaum.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0
International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing,
adaptation, distribution and reproduction in any medium or format, as long as you give appropriate
credit to the original author(s) and the source, provide a link to the Creative Commons license and
indicate if changes were made.
The images or other third party material in this chapter are included in the chapter’s Creative
Commons license, unless indicated otherwise in a credit line to the material. If material is not
included in the chapter’s Creative Commons license and your intended use is not permitted by
statutory regulation or exceeds the permitted use, you will need to obtain permission directly from
the copyright holder.
Chapter 14
Video Games: A Potential Vehicle
for Teaching Computational Thinking
Sue Inn Ch’ng, Yeh Ching Low, Yun Li Lee, Wai Chong Chia
and Lee Seng Yeong
Abstract Previous studies in computer science education show that game playing is
negatively correlated with success in introductory programming classes (Wilson &
Shrock, ACM SIGCSE Bulletin, vol. 33, pp. 184–188, 2001). However, informally, we
observed that students who have previous gaming experience take to programming
tasks more easily than those without gaming experience. There have also been recent
studies showing that playing strategic video games can improve problem-solving
skills, which are essential in program design. This chapter presents the findings
of our study to identify if a correlation between previous gaming experience (game
playing) and individual computational thinking (CT) skills exists. To achieve this,
a survey was administered to undergraduate students taking an introductory
computing course to collect data on their gaming history, together with an individual
assignment in Scratch. Each project was subsequently analysed to determine the level
of mastery of core CT skills. The Cochran–Armitage test of trend was then run on
each CT skill category with respect to the coded gaming experience. The results of
our analysis show a correlation between gaming experience and specific categories
of the CT skills domain, particularly in the areas of abstraction and problem
decomposition and user interactivity. The outcome of our study should be beneficial,
as ways to leverage students' gaming experience in the classroom are also discussed.
14.1 Introduction
learn technical concepts through gameplay (Kazimoglu, Kiernan, Bacon, & MacK-
innon, 2012; Liu, Cheng, & Huang, 2011; Muratet, Torguet, Jessel, & Viallet, 2009).
Despite studies (Becker, 2001; Kazimoglu, Kiernan, Bacon, & MacKinnon, 2012;
Liu, Cheng, & Huang, 2011) reporting an improvement in student engagement and
motivation towards the CS content, these studies do not investigate the adoption
rate of these games as leisure activities at the end of the course or the effects of
extended usage of the designed serious games on students’ problem-solving and/or
programming skills over time.
On the other hand, children and adults learn best by playing. The work by Ch'ng,
Lee, Chia, and Yeong (2017) delineates the gameplay elements possessed by popular
COTS games of different genres that support key skills in computational
thinking. However, that study did not determine whether there is indeed a correlation
between playing video games and the mastery of key computational thinking skills. In this
chapter, we therefore present our research design and findings to answer the research
question of whether past gaming experience influences specific computational
thinking skills. If so, video games can be used as a vehicle to train students to think
logically in a fun environment, as an alternative to the forced use of serious games
or soldiering through the learning of a programming language to teach students
computational thinking.
The main idea behind the computational thinking movement is that knowledge and
skills derived from the field of computer science have far-reaching applications that
can benefit other fields too. However, since its inception, there have been
different definitions of the skills that computational thinking encompasses.
Some of these definitions are tightly coupled to programming, while others are
more loosely defined and general. For example, the CT skills listed by Moreno-León,
Robles, and Román-González (2015) are closely related to programming, while
the definitions provided by Lee, Mauriello, Ahn, and Bederson (2014) are general
in nature, with little reference made to programming. Barr and Stephenson
(2011) provided examples of how the nine core1 CT concepts and
capabilities may be embedded in activities across different disciplines. Table 14.1 lists the
different definitions of CT skills by different parties. A look at these definitions shows
repetition and overlap in some of the skills defined, such as abstraction, algorithm
design and problem decomposition.
It is common practice for researchers in the field of CT education (Berland & Lee,
2012; Kazimoglu, Kiernan, Bacon, & MacKinnon, 2012) to formulate their own
definition of CT skills by rationalizing from the literature and existing definitions.
For our work in this chapter, we use the CT skills defined by Moreno-León, Robles,
and Román-González (2015).
1 Definition proposed by the Computer Science Teachers Association (CSTA) and Inter-
national Society for Technology in Education (ISTE) for use in K-12 education.
250 S. I. Ch’ng et al.
Table 14.1 Table of comparison listing the different skills encompassing CT as defined by different
parties
Organization/Researchers CT skills Definitions
Google for Education Abstraction Identifying and extracting relevant
(Google, 2018) information to define main idea(s)
Algorithm design Creating an ordered series of instructions for
solving similar problems or for doing a task
Automation Having computers or machines do repetitive
tasks
Data collection Gathering information
Data analysis Making sense of data by finding patterns or
developing insights
Data representation Depicting and organizing data in appropriate
graphs, charts, words or images
Decomposition Breaking down data, processes or problems
into smaller, manageable chunks
Parallelization Simultaneously processing smaller tasks
from a larger task to more efficiently reach a
common goal
Pattern generalization Creating models, rules, principles, or theories
of observed patterns to test predicted
outcomes
Pattern recognition Observing patterns, trends and regularities in
data
Simulation Developing a model to imitate real-world
processes
BBC Bitesize (BBC, Decomposition Breaking down a complex problem or system
2018) into smaller, more manageable parts
Pattern recognition Looking for similarities among and within
problems
Abstraction Focusing on the important information only,
ignoring irrelevant detail
Algorithms Developing a step-by-step solution to the
problem, or the rules to follow to solve the
problem
Barr and Stephenson Data collection The process of gathering appropriate
(2011) information
Data analysis Making sense of data, finding patterns and
drawing conclusions
Data representation Depicting and organizing data in appropriate
graphs, charts, words or images
(continued)
14 Video Games: A Potential Vehicle for Teaching … 251
14.3 Methodology
Data was collected from 736 first-year undergraduate students taking an ‘Introduc-
tion to Computers’ course at a private university in Malaysia. This course teaches
students the basic concepts of what computers are, how computers store and process
information, communicate with each other and applications of computers in daily
life. The course also covers a brief introduction to programming, more specifically
software design lifecycle, basic programming constructs and the different types of
tools that can be used to develop software applications. Students were taught how
to use Microsoft© Office tools to solve problems and basic game design using MIT
Scratch (MIT, 2016) during the practical sessions of the course. Scratch was chosen
as the development platform so that students can focus on the design of the solution
instead of the syntax of a particular programming language. The students were from
two different schools—School of Computing and School of Business. Table 14.2
shows the composition of students from each school.
For the individual assignment, students were tasked to design a Catching Name
Game—a game whose objective is to collect characters that appear
on the screen to spell out words. The students were given the freedom to determine
the type of gameplay that they wished to submit for the assignment, but the game
had to include components of their own name to reduce the possibility of
students passing off someone else's work as their own. An online questionnaire was
administered to the students after they had submitted their Scratch assignment
to collect information on their gaming habits. Through the questionnaire, students
were asked to self-report their gaming habits (now and when they were young)
through multiple-choice questions (starting age and frequency of play) and open-
ended questions (name of their favourite video game); refer to Appendix 1 for the full
questionnaire. The rationale is that gaming is a memorable experience during
childhood or adolescence: students who truly played games, and who spent a
sizeable amount of their time doing so, would at the very
least remember the name of the game they played and/or be able to describe
its gameplay.
The starting age at which students reported their first foray into games and the
responses from the open-ended question were used as reference points to check the
validity of the responses. For example, if a respondent claimed to have started
playing Candy Crush at an age of less than 6 years old, this response would be
deemed invalid because Candy Crush was only released in the year 2012. Responses
that were incomplete, or that gave nonexistent/invalid games for either instance,
were excluded from the study. If the game title provided by the respondent at either
point in time—young and current—was valid, the respondent was categorized as having
‘previous gaming experience’. The assessment of the computational thinking skills in
the students' Scratch projects was done using the free web-based tool Dr. Scratch. The
tool analyses each Scratch project in seven CT dimensions (Moreno-León, Robles, &
Román-González, 2015). Each dimension is given a score that ranges from
0 to 3 according to the criteria provided in Table 14.3. The sum of the partial
scores from each dimension yields a CT score and, based on this score, the
website also provides feedback and suggestions for improvement.
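The aggregation of the seven partial scores into an overall CT score can be sketched as follows; the dimension labels are paraphrased from Table 14.3:

```python
# Seven Dr. Scratch-style CT dimensions, each scored from 0 to 3 points.
DIMENSIONS = ("abstraction and problem decomposition", "parallelism",
              "logical thinking", "synchronization", "flow control",
              "user interactivity", "data representation")

def ct_score(partial_scores):
    """Sum per-dimension scores (0-3 each) into an overall CT score (0-21)."""
    assert set(partial_scores) == set(DIMENSIONS), "one score per dimension"
    assert all(0 <= s <= 3 for s in partial_scores.values()), "scores are 0-3"
    return sum(partial_scores.values())

# A hypothetical project rated 'Developing' (2 points) in every dimension.
print(ct_score({d: 2 for d in DIMENSIONS}))  # 14
```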
Since each of the CT dimensions constitutes an ordinal variable and ‘Gaming Expe-
rience’ is a nominal categorical variable, the Cochran–Armitage test of trend was used
to investigate the relationship between each CT dimension and ‘Gaming Experience’.
All statistical analyses were conducted using SAS Enterprise Guide software.
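The study ran this test in SAS; as a sketch of the statistic involved, the Cochran–Armitage trend z can be computed directly. The counts below are hypothetical, and converting z to a p-value additionally requires the normal CDF:

```python
from math import sqrt

def cochran_armitage_z(successes, totals, scores=None):
    """Cochran-Armitage trend z-statistic for a binary outcome over ordered categories.

    successes[i] = respondents with gaming experience in ordinal category i,
    totals[i]    = all respondents in category i,
    scores[i]    = score assigned to category i (defaults to 0, 1, 2, ...).
    """
    k = len(totals)
    scores = list(range(k)) if scores is None else scores
    n = sum(totals)
    p_bar = sum(successes) / n
    # Trend statistic: score-weighted deviation of observed from expected successes.
    t = sum(s * (r - m * p_bar) for s, r, m in zip(scores, successes, totals))
    var = p_bar * (1 - p_bar) * (
        sum(s * s * m for s, m in zip(scores, totals))
        - sum(s * m for s, m in zip(scores, totals)) ** 2 / n)
    return t / sqrt(var)

# Proportions rising with the ordinal score give a positive z.
print(cochran_armitage_z([2, 5, 8], [10, 10, 10]))  # ~2.683
```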
Table 14.3 Description for each CT dimension assessed by Dr. Scratch (Moreno-León, Robles, &
Román-González, 2015; Robles, Moreno-León, Aivaloglou, & Hermans, 2017)
CT dimension | Basic (1 point) | Developing (2 points) | Proficient (3 points)
Abstraction and problem decomposition | More than one script and more than one sprite | Definition of blocks (creation of custom blocks) | Use of clones (instances of sprites)
Parallelism | Two scripts on green flag | Two scripts on key pressed or on the same sprite clicked | Two scripts on when I receive message, or video or input audio, or when backdrop changes to
Logical thinking | Use of ‘If’ blocks | Use of ‘If … else’ blocks | Logical operations
Synchronization | Wait | Message broadcast, stop all, stop program | Wait until, when backdrop change to, broadcast and wait
Flow control | Sequence of blocks | Repeat, forever | Repeat until
User interactivity | Green flag | Key pressed, sprite clicked, ask and wait, mouse | Webcam, input sound blocks
Data representation | Modifiers of sprite properties | Operations on variables | Operations on lists
Based on the results obtained in Table 14.4, it was observed that there is strong
evidence (p = 0.0100) of an association between the CT dimension of Abstraction
and Problem Decomposition and the gaming experience of students. A plausible
explanation for this correlation is that all games, regardless of genre, have
goals/missions and a reward mechanism that entices players to continue playing—an
attribute that makes games engaging. Players then try to find ways to maximize
these rewards while minimizing damage to their game characters during
gameplay (Ch’ng, Lee, Chia, & Yeong, 2017; Gee, 2008). This feature
requires players to determine the problem they are currently encountering and
to devise new solutions based on whatever information they have at hand, which may
differ greatly depending on the game. Adachi and Willoughby (2013)
noted that these are also exactly the features that promote problem-solving skills. We
posit that, perhaps, this is the attribute of COTS games that provides informal training
to players in the CT dimension of Abstraction and Problem Decomposition.
It was also observed that there is weak evidence (p = 0.0470) to support
the hypothesis of an association between the CT dimension of User Interactivity
and gaming experience. A possible explanation for this phenomenon is that when
students are exposed to video games through repeated play over the years, they
will indirectly pick up the basic elements needed to interact with computer software,
such as using the keyboard to input text and the mouse to make selections, compared
to those who have minimal or no exposure. Further investigation is needed to
determine whether the same observation applies to those exposed to repeated general
software usage, and not only to video games.
Since the p-values of the Cochran–Armitage z-statistics are greater than 0.05 for the
other CT dimensions considered in our study, we conclude that there is insufficient
evidence from our sample to support the hypothesis of an association
between ‘Parallelism’, ‘Logical Thinking’, ‘Synchronization’, ‘Flow Control’ or ‘Data
Representation’ and the gaming experience of the students.
Our findings show a correlation between previous gaming experience
and the CT dimensions of Abstraction and Problem Decomposition and User
Interactivity. At this moment, we cannot tell which particular aspects of specific
COTS video games support the cultivation of these skills, or why the correlation
exists, without further investigation. However, our findings do encourage the idea
that COTS games possess the potential to cultivate skills, as suspected by Klopfer,
Osterweil, and Salen (2009) and Shreve (2005). The question now is: how do we
actually harness this potential to make it work for CT education?
The report by Klopfer, Osterweil, and Salen (2009) presented creative ways
in which games can be incorporated into classrooms to support different aspects of
learning. The most common approaches currently used for CT education
are games as programming and reflective systems, via game-development
assignments and serious games respectively. However, the creation of serious games
takes time and skills that most educators do not possess, and the results often pale in
comparison to commercial titles (Barr, 2017). For the field of CT education, we believe
that video games have the same potential to be used as an effective tool for teaching
and learning within and outside the classroom. The results reported in our study
provide preliminary evidence of this. In the future, we plan to obtain concrete
evidence advocating the use of video games as a vehicle for learning and teaching
computational thinking by implementing the ideas put forth here in a real classroom
environment.
We are conducting research on the relationship between previous gaming expe-
rience and game development. Your responses to these questions will not affect your
grades for this subject. Please answer the following questions.
Yes
No
4. How old were you when you started playing video games?
Rarely
Occasionally
Frequently
7. Have you actively played video games over the past two years?
Yes
No
9. How long do you spend each day, on average, playing your favourite video game?
<1 h
1–2 h
3–4 h
5–6 h
>6 h
Your task in this exercise is to describe the steps that you take to play one of the
games that you frequently play at home. Games in this case can be any type of game,
ranging from board games and video games on a personal computer, mobile phone or
television console to physical activity games.
Components Instructions
Game title Name of the chosen game
Game description Provide a short description of the chosen game
Goal of the game State the main goal of the game that players
must fulfil to win the game
Game strategy Describe the steps that you take as a player to
achieve the goal of the game. You can include
screenshots or images to aid your explanation.
Be as detailed as possible in your description
so that another new player can use your
description as a walkthrough to complete the
game on his own
Sub-objective(s) of the game [Optional] Some games have
mini-games/missions embedded inside the
game itself. If the game has many
mini-games/missions, select one and state the
goal that players must fulfil in order to win or
complete the mini-game/mission
Mini-game/Mission strategy [Optional] Describe the steps that you take as
a player to achieve the goal of the
mini-game/mission within the game. You can
include screenshots or images to aid your
explanation. Be as detailed as possible in your
description so that another new player can use
your description as a walkthrough to complete
the same mini-game or mission on his own
References
Adachi, P. J. C., & Willoughby, T. (2013). More than just fun and games: The longitudinal relation-
ships between strategic video games, self-reported problem solving skills, and academic grades.
Journal of Youth and Adolescence, 42(7), 1041–1052.
Annetta, L. A. (2008). Video games in education: Why they should be used and how they are being
used. Theory into Practice, 47(3), 229–239.
Barba, L. A. (2016). Computational thinking: I do not think it means what you think it means.
Retrieved January 11, 2018, from http://lorenabarba.com/blog/computational-thinking-i-do-not-
think-it-means-what-you-think-it-means/.
Barr, M. (2017). Video games can develop graduate skills in higher education students: A ran-
domised trial. Computers & Education, 113, 86–97.
Barr, V., & Stephenson, C. (2011). Bringing computational thinking to K-12: What is involved and
what is the role of the computer science education community? ACM Inroads, 2(1), 48–54.
Basawapatna, A. R., Koh, K. H., & Repenning, A. (2010). Using scalable game design to teach
computer science from middle school to graduate school. In Proceedings of the fifteenth annual
conference on Innovation and technology in computer science education (pp. 224–228). ACM.
BBC. (2018). BBC—Introduction to computational thinking. Retrieved from https://www.bbc.co.
uk/education/guides/zp92mp3/revision/1.
Becker, K. (2001). Teaching with games: The minesweeper and asteroids experience. Journal of
Computing Sciences in Colleges, 17(2), 23–33.
Berland, M., & Lee, V. R. (2012). Collaborative strategic board games as a site for distributed com-
putational thinking. Developments in Current Game-Based Learning Design and Deployment,
285.
Blizzard. (2018). Blizzard entertainment: Classic games. Retrieved January 24, 2018, from http://
eu.blizzard.com/en-gb/games/legacy/.
Ch’ng, S., Lee, Y., Chia, W., & Yeong, L. (2017). Computational thinking affordances in video
games. In Proceedings of the International Conference on Computational Thinking Education
2017 (p. 133). Hong Kong: The Education University of Hong Kong.
Fletcher, G. H. L., & Lu, J. J. (2009). Human computing skills: Rethinking the K-12
experience. Communications of the ACM, 52(2), 23–25.
Frazer, A., Argles, D., & Wills, G. (2008). The same, but different: The educational affordances
of different gaming genres. In Eighth IEEE International Conference on Advanced Learning
Technologies, 2008. ICALT ’08. (pp. 891–893). IEEE.
Gee, J. P. (2005). Good video games and good learning. In Phi Kappa Phi Forum (Vol. 85, p. 33).
The Honor Society of Phi Kappa Phi.
Gee, J. P. (2008). Learning and games. In K. Salen (Ed.), The ecology of games: Connecting youth,
games, and learning (pp. 21–40). MIT Press.
Google. (2018). Google for education: Exploring computational thinking. Retrieved January 11,
2018, from https://edu.google.com/resources/programs/exploring-computational-thinking/#!ct-
overview.
Kazimoglu, C., Kiernan, M., Bacon, L., & MacKinnon, L. (2012). Learning programming at the
computational thinking level via digital game-play. Procedia Computer Science, 9, 522–531.
Klopfer, E., Osterweil, S., & Salen, K. (2009). Moving learning games forward. Cambridge, MA:
The Education Arcade.
Koulouri, T., Lauria, S., & Macredie, R. D. (2015). Teaching introductory programming: A quanti-
tative evaluation of different approaches. ACM Transactions on Computing Education (TOCE),
14(4), 26.
Lee, T. Y., Mauriello, M. L., Ahn, J., & Bederson, B. B. (2014). CTArcade: Computational thinking
with games in school age children. International Journal of Child-Computer Interaction, 2(1),
26–33.
Leutenegger, S., & Edgington, J. (2007). A games first approach to teaching introductory program-
ming. In ACM SIGCSE Bulletin (Vol. 39, pp. 115–118). ACM.
260 S. I. Ch’ng et al.
Liu, C.-C., Cheng, Y.-B., & Huang, C.-W. (2011). The effect of simulation games on the learning
of computational problem solving. Computers & Education, 57(3), 1907–1918.
Miller, M., & Hegelheimer, V. (2006). The SIMs meet ESL: Incorporating authentic computer
simulation games into the language classroom. Interactive Technology and Smart Education,
3(4), 311–328.
MIT. (2016). Scratch—Imagine, program, share. Retrieved from https://scratch.mit.edu/.
Monroy-Hernández, A., & Resnick, M. (2008). Empowering kids to create and share
programmable media. Interactions, 15(2), 50–53.
Moreno-León, J., Robles, G., & Román-González, M. (2015). Dr. Scratch: Automatic analysis
of scratch projects to assess and foster computational thinking. RED. Revista de Educación a
Distancia, 46, 1–23.
Muratet, M., Torguet, P., Jessel, J.-P., & Viallet, F. (2009). Towards a serious game to help students
learn computer programming. International Journal of Computer Games Technology, 2009, 3.
Papert, S. (1980). Mindstorms: Children, computers, and powerful ideas. Basic Books, Inc.
Papert, S. (1996). An exploration in the space of mathematics educations. International Journal of
Computers for Mathematical Learning, 1(1), 95–123.
Robles, G., Moreno-León, J., Aivaloglou, E., & Hermans, F. (2017). Software clones in scratch
projects: On the presence of copy-and-paste in computational thinking learning. In 2017 IEEE
11th International Workshop on Software Clones (IWSC) (pp. 1–7). IEEE.
Rosser, J. C., Lynch, P. J., Cuddihy, L., Gentile, D. A., Klonsky, J., & Merrell, R. (2007). The impact
of video games on training surgeons in the 21st century. Archives of Surgery, 142(2), 181–186.
Shreve, J. (2005). Let the games begin. Video games, once confiscated in class, are now a key
teaching tool. If they’re done right. George Lucas Educational Foundation.
Terzano, K., & Morckel, V. (2017). SimCity in the community planning classroom: Effects on
student knowledge, interests, and perceptions of the discipline of planning. Journal of Planning
Education and Research, 37(1), 95–105.
Wilson, B. C., & Shrock, S. (2001). Contributing to success in an introductory computer science
course: a study of twelve factors. In ACM SIGCSE Bulletin (Vol. 33, pp. 184–188). ACM.
Wing, J. M. (2006). Computational thinking. Communications of the ACM, 49(3), 33–35.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0
International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing,
adaptation, distribution and reproduction in any medium or format, as long as you give appropriate
credit to the original author(s) and the source, provide a link to the Creative Commons license and
indicate if changes were made.
The images or other third party material in this chapter are included in the chapter’s Creative
Commons license, unless indicated otherwise in a credit line to the material. If material is not
included in the chapter’s Creative Commons license and your intended use is not permitted by
statutory regulation or exceeds the permitted use, you will need to obtain permission directly from
the copyright holder.
Chapter 15
Transforming the Quality of Workforce
in the Textile and Apparel Industry
Through Computational Thinking
Education
Bessie Chong—Ph.D., Director of Group Training and Talent Management, Esquel Group.
Ronald Wong—Former Associate Director of Corporate Communications, Esquel Group.
15.1 Introduction
The textile and apparel industry has long been regarded as “traditional” and “old-
fashioned”. It is probably not an industry the new generation aspires to join. Founded
in 1978, Esquel started as a shirt maker. Over the last 40 years, Esquel developed
the capacity to weave innovative technologies into its people-centric culture. With
key production bases established in strategic locations in China, Malaysia, Vietnam,
Mauritius, and Sri Lanka, and a network of branches in the US, Europe, and Asia,
it offers a one-stop solution, from concept to rack. The annual sales turnover was
US$1.3 billion in 2016.
Esquel employs a diverse workforce of more than 57,000 people globally, united under the
corporate 5E culture—Ethics, Environment, Exploration, Excellence, and Education,
and driven by the mission of “Fun People Serving Happy Customers”. It operates
with an aspiration of “Making a Difference” by creating a positive impact on the
employees, communities, and environment. The key employee development strategy
is to “groom people from within”. As a nontraditional company in a traditional
industry, Esquel encourages and empowers employees to innovate and to challenge
the status quo by placing great emphasis on learning and people development in
facilitating employees to transform and upgrade.
In this day and age, competition in the textile and apparel manufacturing industry is
fierce. All players face structural challenges: rising labor and material costs, reduced
profit margins, shortened order lead times, and a shortage of skilled labor. The rise
of fast fashion further disrupts the industry by demanding quicker production cycles
and rapid prototyping in small orders. The traditional manufacturing model of long
lead times and mass production will find it challenging to survive.
However, the textile and apparel manufacturing industry employed over 75 mil-
lion people worldwide (Stotz & Kane, 2015), with an aggregate export amount of
over US$744 billion in 2015 (World Trade Organization, 2015). The industry is still
versatile and has huge potential. The question is: how do manufacturers stay com-
petitive while enabling sustainable growth amidst the changing environment? Many
players in this industry migrate their manufacturing bases and chase cheap labor
to stay competitive. On the contrary, Esquel decided to stay in locations where
it has good operating conditions and to cultivate the local talent pool. Esquel strives
to improve labor productivity to offset rising wages. The company recognizes the
importance of improving the added value of its people, providing an inclusive work
environment, and paying them well by integrating them into the technology rather
than replacing them with it.
15 Transforming the Quality of Workforce in the Textile … 263
The advent of the Fourth Industrial Revolution, also known as Industry 4.0, is
associated with the development of global industrial networks, to which all pro-
duction processes of a wide variety of enterprises will be connected throughout the
global supply chain. As a result, a computer interaction environment develops
around the modern human (Yastreb, 2015). That means employees would work with
cyber-physical systems in a smart factory environment and make use of Internet
of Things (IoT) technology and collected data to streamline operations, empower
lean supply chains, and make timely decisions. Ultimately, it will increase supply
chain agility, adaptability, and alignment (Lee, 2004) that foster productivity and
efficiency. The Fourth Industrial Revolution provides Esquel with an opportunity to
sustain its leading position in apparel operations.
While digitalization is profoundly reshaping current business models and operations
in the global economy, enterprises need to identify new value creation opportunities
in the process of moving to digital business. Esquel is expected to tap into
the digitalization process and optimize its supply chain. However, to facilitate this
transformation, it is vital for the employees to have some understanding of computer
programming regardless of the profession they are in. Programming will soon become
a basic job skill for everyone. The rise of robotics and artificial intelligence calls for
new skills and competencies. The new age employees need to be equipped with a
new set of skills in order to master technology, explore new possibilities and convert
the new ideas into actions. More importantly, we need to train them on how to think
systematically through developing their programming ability.
Without the right people, the effectiveness of technology would not be maximized,
nor would it create a positive impact on business results. Among Esquel’s 57,000
employees, only 3.4% hold formal technical qualifications, 12% possess a college
diploma or above, and 38% were born before the personal computer became popular.
The fear of using technology and the shortage of computer-literate employees soon
became barriers to transformation. The challenge is: how can we turn employees
into confident technology users? How can we empower them to come up with
continuous improvement ideas and solve their daily work problems systematically
and independently?
A campaign is needed to drive this transformation and to inspire the employees
to participate in the revolution. It would be a huge challenge, as the target group is
highly diversified in culture, age, and education, and spread over 20 operation sites
in 9 countries. It would also be hard to keep the learning momentum.
264 B. Chong and R. Wong
Programming is a skill that helps people learn how to think systematically. By devel-
oping computational thinking, people can break down complex problems into man-
ageable parts, look for similarities among and within problems and identify different
recommendations step by step. For people who don’t have technical knowledge, com-
putational thinking may seem too abstract and programming may seem too technical.
The fun and practical “App Inventor” application developed by the Massachusetts
Institute of Technology (MIT) was therefore identified as the main driver of this
campaign.
The simple graphical interface of App Inventor allows an inexperienced user to
create basic, fully functional mobile apps within an hour or less. It transforms the
complex language of text-based programming into visual, drag-and-drop building
blocks. Through this campaign, the easy-to-use interface would change employees’
perception of technology adoption and help them overcome their fear of using IT. It further
develops their logical reasoning skills, programming capabilities, and more impor-
tantly, computational thinking ability. Computational thinking is a fundamental skill
for everyone, and it is a way humans solve problems (Wing, 2006). It includes prob-
lem decomposition, algorithmic thinking, abstraction, and automation (Yadav, Good,
Voogt, & Fisser, 2017). By equipping employees with computational thinking ability,
the company can empower them to become innovative problem solvers, collabora-
tors as well as process owners. Yadav et al. (2017) further stressed that given the
irreplaceable role of computing in the working life of today, the competence to solve
problems in technology-rich environments is of paramount importance.
There is a need to pay attention to CT as part of the broader concept of digital literacy in
vocational education and training, as otherwise adults with only professional qualification
may not be well prepared for the working life in the twenty-first century (Yadav et al., 2017,
p. 1065).
This campaign is designed around how to change Attitudes, upgrade Skills, and build
Knowledge, as shown in Fig. 15.1.
It is impractical to have only IT colleagues provide classroom training and expect
employees to change their attitudes towards technology. Therefore, the role of IT
throughout the campaign is purposely downplayed to convince all employees that
programming can be taught to less technically savvy people. An “all-in”
approach was adopted. The campaign was carried out in five development phases:
(1) Pioneering, (2) Modeling, (3) Changing, (4) Cultivating, and (5) Realizing, as
shown in Fig. 15.2.
The first round of the “You Can Code” campaign started in 2015–2016. The campaign
used top-down and bottom-up approaches to engage colleagues at all levels. Platforms
including Yammer (Esquel’s internal social media networking tool), WeChat, the
intranet, and the company’s TV broadcasting, as well as traditional channels such as
notice boards and promotional booths at factories, were used to educate colleagues
and promote the training workshops and “Esquel’s App Challenge”. Through 28 work-
shops, over 2,430 training hours were provided to 1,200 participants, including 1,100
employees and 100 of their children from 10 different locations in the first 10 months
(Fig. 15.3). The strategy of teaching the children and letting them teach their parents
in turn proved effective. Overall, the impact was encouraging and a lot of
positive feedback was received:
Something looks complicated but can be very user-friendly for us in building an app. Useful
and valuable information/tools can be shared with the company!
— A Sales Manager in Hong Kong
The introduction of the online programme ‘App Inventor’ is useful for non-professionals to
build our own app.
— A Senior Sales Executive in Hong Kong
Easy to operate for dummies. All ordinary people can participate in creating an app without
the support of IT.
— An Engineering Officer from a Garment Factory in China
Fig. 15.1 The campaign framework
To change attitudes:
• by developing those ideas gradually from accepting, to understanding, to embracing, to exploring
• by showcasing to employees that everyone can code
To upgrade skills:
• by conducting workshops, activities, events and competitions
• by encouraging employees to innovate or to solve specific problems using technology
To build knowledge:
• by developing an independent learning mindset
• by creating a rich learning resource environment
Fig. 15.2 The five development phases of the campaign
(1) Pioneering: Workshops for the senior management team and board members were conducted to collect their feedback and get their buy-in. About 90 percent of them attended the training. Some of them also became Esquel’s pioneers.
(2) Modeling: Some General Managers and Directors were invited to be the trainers to conduct workshops for other colleagues, from workers to managers. Almost 300 employees were trained. They reinforced the notion of ‘If I can code, you can code too.’
(3) Changing: About 50 super-users were identified and trained to be the change agents or ambassadors. They joined a customized master trainer course. Then they delivered training at different operations.
(4) Cultivating: The master trainers launched a series of workshops and fun days for staff and their kids in order to cultivate the skills and mindset. The ambassadors set up information and promotional booths to educate frontline operators. A total of almost 800 people were trained.
(5) Realizing: The first “Esquel’s App Challenge” Competition was organized to encourage the application of the new skills. Over 400 mobile app ideas were submitted. Many interesting and practical apps were developed.
Fig. 15.3 Young colleagues and children of Esquel staff members learn how to code in an hour
The most important impact of this enterprise-wide campaign is the values cre-
ated, including attitude change towards technology, employee engagement, employer
branding, and process improvement. Many more app ideas from the employees were
received. This shows that after innovating once, employees are likely to innovate again.
Now, non-IT employees can perform some routine IT tasks, and some are even
able to build prototypes by themselves. This, in turn, allows IT professionals to focus
on enterprise-level app development.
The above prototype mobile apps (Table 15.1) were developed by non-IT col-
leagues based on their local needs. These applications help save time and improve
efficiency, while the broader benefits are incalculable. Department heads and IT
team are reviewing many more bottom-up initiatives from employees to further
enhance production efficiency. This campaign is an example of Esquel’s commit-
ment to upgrading their workers to become more knowledgeable. And at the same
time, the campaign also reinforces Esquel’s employer brand as a caring and nontra-
ditional company.
She also realized that the basic programming technique equipped her with compu-
tational thinking ability, which in turn helped her to become an independent thinker.
As a sewing worker, from time to time, she faced problems in operating her sewing
machine and managing sewing quality. Before she learned how to code, whenever
she came across a problem, she would simply ask a technician to fix it or change
some machine parts herself. She had never bothered to understand the problem,
its root cause, or how to prevent it in the future. But now, she has become proactive
in learning technical skills and has started to apply computational thinking to solve
her daily work problems. She also aspires to evolve
from a sewing worker to a technician one day.
Recruitment app
This app helps Human Resources colleagues streamline some
manual work during the recruitment process, such as marking test
papers and personnel data entry. Applicants can key in their basic
personal information in the app and take some simple aptitude
tests.
Fig. 15.4 Interface of the wardrobe application developed by Yang Huamei’s team and a photo of
Yang Huamei at work and training
The first phase of the “You Can Code” campaign was completed in 2016. It helped
employees leverage technology to solve their daily issues and improve productivity.
Indeed, it started the momentum. The rise of mobile app initiatives after the “You Can
Code” campaign is the result of the empowerment through computational action.
Esquel Carpool
Fig. 15.5 Design map and login page of Esquel Carpool application
line under the sun, rain, and wind. Commuting can easily take up half an hour or
even one full hour per trip. For those 2,000 employees with their own private cars,
the situation is not better than the others. Driving to work is not at all pleasant when
they have to be stuck in traffic and fight for the limited 200 parking spaces available
around the factories. Most of the time, employees have to park far away and take
another 10-minute walk back to the office.
How can Esquel make a difference for the colleagues so that they can save time
waiting for the bus, fighting the traffic, or looking for a parking space? How can
they save on gasoline bills while reducing their carbon footprint?
Can something be done to change their lifestyle and behavior, reduce the envi-
ronmental impact, and inspire others to contribute to building a green city?
through the app. After that, they can form a group chat to communicate directly for
boarding arrangement. What’s more, this app can also share the real-time location
of the company bus and show the carpool usage report.
Within the first 24 months, this app recorded more than 139,083 carpools, saving
more than 105,703 liters of gasoline and avoiding the emission of 243.1 tons of
carbon dioxide. The app helps realize the environmental benefits of carpooling.
More importantly, it provides a platform that connects colleagues from different
departments and promotes the caring culture.
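The reported savings can be sanity-checked with a standard gasoline emission factor. A quick worked check (the factor of roughly 2.3 kg of CO2 per liter is our assumption, not stated in the text):

```python
# Reported "Esquel Carpool" statistics from the text.
litres_saved = 105_703       # litres of gasoline saved in the first 24 months
reported_co2_tonnes = 243.1  # tonnes of CO2 emissions reported as avoided

# Assumed emission factor: burning one litre of gasoline releases
# roughly 2.3 kg of CO2 (a common approximation, not from the text).
KG_CO2_PER_LITRE = 2.3

estimated_co2_tonnes = litres_saved * KG_CO2_PER_LITRE / 1000
print(round(estimated_co2_tonnes, 1))  # 243.1, matching the reported figure
```

The two figures agree, which suggests the CO2 number was derived from the gasoline savings using a similar factor.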
The company is committed to providing this app free of charge to any company or
organization, and the app is readily available in the open-source community on GitHub.
Esquel is the first commercial entity to adopt App Inventor to train employees in
computational thinking. Even though computational thinking is rather conceptual
and hard to develop in a short period of time, the company has managed to change
attitudes, upgrade skills, and build knowledge through the development of the mobile
app.
The first round of the “You Can Code” campaign achieved initial success. The
“all-in” approach encouraged everyone to engage in the campaign. Many employ-
ees, including board members and sewing workers, joined the fun and easy “1-hour
programming” workshops. Employees’ kids were also invited; they, in turn, influ-
enced and motivated their parents to learn programming. The campaign engaged
people from primary students to Ph.D. graduates, aged from 6 to over 60.
It successfully engaged all levels of staff members by enrolling board members
and senior managers as pioneers, ambassadors, and trainers. Some even modeled the
skills and trained their teams at their sites. They jointly promoted the notion of “If
I can code, you can code too”, and successfully changed the perception that senior
staff are conservative and less tech-savvy.
The project team understands that what they have done is just a small step. In order
to build a programming culture and independent critical thinking skills, more effort
is required. Indeed, the project team also understands that what the employees
learned in the 1-hour App Inventor workshop is not sufficient. The next step is to
deepen what they have learned so that they embrace computational thinking skills
and unleash their creativity. Thus, the project team needs to provide a platform that
enables participants to organize and analyze information logically and creatively.
Then, they can break challenges down into small pieces and approach them using
programmatic thinking techniques. The exercise would reinforce participants’
interest in learning, creating, and modifying their own applications.
With this in mind, the “You Can Code 2.0” campaign started in November 2017.
The project team aims to take one step further, from creating a simple mobile applica-
tion to developing a simulated real-life application for the workplace. The company
plans to teach colleagues the essential skills to build a microcontroller-based embedded
system monitored by a mobile app developed with App Inventor.
The project team aims to train participants in basic programming techniques
connected with physical devices as an example. The participants can then replicate
the setting and methodology to create their own IoT or Arduino prototypes. More
importantly, the project team wants to teach the logic behind those programming
techniques. Once the participants figure out that most of the intelligent production and
process systems found in the work environment are in fact programmed with basic
computer logic, they will have the confidence to make recommendations and ask
questions about current practices and systems.
The explosive growth of the “Internet of Things” is changing how things are
operated. It allows people to innovate new designs and products at work and at home.
Capturing big data presents a huge opportunity for improving operational efficiency.
Esquel has many “Internet of Things” applications; for example, auto-guided
vehicles and drones are used in the factory for transportation. At the backend,
a knowledge base is established and linked to the intelligent control system, which
is further connected to smart devices. Thus, if colleagues understand how to connect
those devices to the backend system through the internet, they will be motivated
to build applications and devices that ease manual work. Indeed, IoT comes with
a combination of software and hardware. Colleagues can apply what they have
learned to program an Arduino and build a smart system by connecting basic
sensors and actuators for automation.
To launch the “You Can Code 2.0” workshops, a progressive and systematic learn-
ing path was designed. Three levels of workshops, according to the difficulty and
complexity of the app, were organized (Fig. 15.6). We targeted those who had not
joined an App Inventor workshop before and those who are interested in using
technology. As a result, the majority of participants are from the younger generation.
Fig. 15.6 Three levels of workshops for “You Can Code 2.0”
Level 1 workshop is a beginner course focusing on basic App Inventor techniques.
It aims to equip new participants with basic skills and to refresh returning
participants’ practice. At this level, apart from the app’s basic functions and
interface, not all block-building instructions are given: some are deliberately left
blank, and only partial instructions are provided. Participants need to complete the
instructions and build the blocks according to their own needs and understanding.
Custom interface designs and functions are encouraged so that participants can
demonstrate their creativity, the soundness of their logic, and their computational
thinking ability.
Level 2 workshop focuses on applying programming techniques to a micro-
ecosystem. Participants are required to develop an IoT-based Smart Plantation
Monitoring System to monitor the growth of a small plant. The mobile app collects
humidity and light data from the plant via a sensor node. Colleagues then use the
collected data to build a decision-making mechanism. With the combination of a
wireless sensor network, embedded development, and data transmission, colleagues
receive recommendations instantly, such as watering the plant or giving it more
light to keep it healthy. This workshop further examines how computer programming
can aid decision-making through an ecosystem experiment.
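The decision-making mechanism described above can be sketched as a simple threshold rule over the two sensor readings. The function name and threshold values below are illustrative assumptions, not the actual workshop material:

```python
def plant_recommendations(humidity_pct, light_lux,
                          min_humidity=40, min_light=500):
    """Return care recommendations for the monitored plant.

    humidity_pct: humidity reading from the sensor node (percent)
    light_lux:    light reading from the sensor node (lux)
    The threshold defaults are illustrative assumptions only.
    """
    recommendations = []
    if humidity_pct < min_humidity:
        recommendations.append("water the plant")
    if light_lux < min_light:
        recommendations.append("give the plant more light")
    return recommendations or ["no action needed"]

# A dry plant in a dim corner triggers both recommendations.
print(plant_recommendations(humidity_pct=25, light_lux=200))
```

In the workshop setting, the same rule would run against live data pushed from the sensor node, with the App Inventor app displaying the resulting recommendation.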
Level 3 workshop focuses on providing a simulated operation environment for
problem-solving. Participants are taught how to program an Arduino to perform
certain tasks, such as turning on lights, making a device turn, moving forward
and backward, and escaping from a maze.
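The maze-escape task illustrates the kind of decomposition the campaign promotes: represent the maze as a grid, then search for a path step by step. A minimal breadth-first-search sketch (written in Python purely for illustration; the workshop itself targets Arduino):

```python
from collections import deque

def escape_maze(grid, start, exit_cell):
    """Return the length of the shortest path from start to exit_cell
    in a grid maze ('#' = wall, '.' = open), or -1 if trapped."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        (r, c), steps = queue.popleft()
        if (r, c) == exit_cell:
            return steps
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == '.' and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), steps + 1))
    return -1

maze = ["..#",
        ".#.",
        "..."]
print(escape_maze(maze, (0, 0), (2, 2)))  # shortest escape: 4 steps
```

The same logic, translated into motor commands and distance-sensor checks, underlies a robot that physically escapes a maze.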
After the completion of all three levels of workshops, a second round of the
“Esquel’s App Challenge” competition will be organized to further stimulate
participants’ creativity and problem-solving ability. To deliver the series of
workshops across different locations, more ambassadors and change agents are
being identified. Esquel has therefore incorporated the basic App Inventor training
into its management trainee (MT) curriculum. After receiving the training, the MTs
will be responsible for teaching colleagues at their local sites. About 100 MTs,
spread across different locations, will gain exposure to the technology and take
ownership of fostering the learning culture in their own locations. The project team
strongly believes that peer learning is more effective in promoting the use of
technology and will help colleagues overcome their fear of technology adoption.
More local work-related or life-related needs can then be solved by employees
themselves, with flexibility. The active participants will be the voice and the next
generation of leaders who strive to make a difference.
15.5 Conclusion
References
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0
International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing,
adaptation, distribution and reproduction in any medium or format, as long as you give appropriate
credit to the original author(s) and the source, provide a link to the Creative Commons license and
indicate if changes were made.
The images or other third party material in this chapter are included in the chapter’s Creative
Commons license, unless indicated otherwise in a credit line to the material. If material is not
included in the chapter’s Creative Commons license and your intended use is not permitted by
statutory regulation or exceeds the permitted use, you will need to obtain permission directly from
the copyright holder.
Part V
Teacher and Mentor Development in K-12
Education
Chapter 16
Teaching Computational Thinking
with Electronic Textiles: Modeling
Iterative Practices and Supporting
Personal Projects in Exploring Computer
Science
D. A. Fields (B)
Utah State University, 2830 Old Main Hill, Logan, UT 84322, USA
e-mail: deborah.fields@usu.edu
D. Lui · Y. B. Kafai
University of Pennsylvania, 3700 Walnut Street, Philadelphia, PA 19104, USA
e-mail: deblui@upenn.edu
Y. B. Kafai
e-mail: kafai@upenn.edu
16.1 Introduction
The introduction of computational thinking into the K-12 curriculum has become a
global effort. Computational thinking (CT) was defined by Wing (2006) as a way of
approaching and conceptualizing problems, which draws upon concepts fundamental
to computer science such as abstraction, recursion, or algorithms. Early work in this
area primarily focused on defining computational thinking, specifically its cognitive
and educational implications as well as highlighting existing contexts for teaching
computational thinking (e.g., NRC, 2011). While much subsequent work has focused
on the development of different environments and tools for CT, as well as curricular
initiatives in the K-12 environment, there is growing need for more empirical work
situated in actual classroom environments (Grover & Pea, 2013).
Iterative design is an important aspect of computational thinking that involves
engaging in an adaptive process of design and implementation where students learn
to face challenges and persevere in fixing them (Brennan & Resnick, 2012). Yet
one glaring absence in the work on iteration in computational thinking is a lack of
understanding exactly how teachers can support such CT practices in their class-
rooms (Barr & Stephenson, 2011). Thus far, most studies of CT tools and environments
have had researchers themselves implement projects or have been situated in out-
of-school contexts where youth voluntarily engaged with topics of their own choosing
(e.g., Grover, Pea, & Cooper, 2015; Denner, Werner, & Ortiz, 2012). While these
studies provide important insights about the feasibility of engaging students in CT,
they could not address the critical issue of how computer science teachers, dealing
with large class sizes and curricular restrictions, can integrate CT into their class-
room activities by connecting technology, content, and pedagogy (Mishra & Koehler,
2006).
In this paper, we focus on how two high school teachers supported iterative prac-
tice as a core CT practice during their implementation of an eight-week (~40 h)
electronic textiles unit within their classrooms during the year-long Exploring Com-
puter Science (ECS) curriculum (Goode, Margolis, & Chapman, 2014). Electronic
textiles (e-textiles), or fabric-based computing, incorporate basic electronics such as
microcontrollers, actuators and sensors with textiles, conductive thread and similar
“soft” materials (see Buechley, Peppler, Eisenberg, & Kafai, 2013a). Two researchers
observed the daily implementation of the curriculum, documenting classroom activi-
ties and interactions in extensive field notes, video recordings and photos of students’
work. The following research question guided our analysis: “What kind of teaching
strategies did the teachers employ to support students’ iterative practice during the
e-textiles unit?” Our discussion focuses on the teachers’ modeling and personaliza-
tion strategies to make iteration accessible in students’ work, particularly through
classroom practices that support iteration.
16.2 Background
Iteration is a core computational thinking concept that can be applied within the body
of a computer program itself, but can also refer to the purposefully incremental
process of creating a computational artifact.
Here, it is not only the act of iteration that matters, but an awareness and explicit
acknowledgment of its importance in tackling problems in a systematic way—basi-
cally, learning how to think in an iterative way about real-world issues.
Iterative practices are also important within the field of engineering where they
involve engagement with trial-and-error processes, along with revision and refine-
ment of ideas over time (Barr & Stephenson, 2011; Lee et al., 2011). Within engineer-
ing education, these processes of iteration and revisions are more formally structured
into classroom practice through the model of the engineering design process, which
highlights the steps of prototyping, testing, and redesign (Tayal, 2013). From this
perspective, the practice of iterative design should be considered something to be
supported within both CT curricula and contexts. However, because CT-focused cur-
ricula and pedagogy are newer, there remains a critical need to highlight how iterative
design as a practice can be supported through pedagogical interventions.
Using e-textiles affords different opportunities to observe teaching strategies to
support iterative design because they (1) integrate CT within both programming
(i.e., software design) and engineering (i.e., circuit design, physical craft) and can
illustrate how teachers make connections between these contexts; (2) are hybrid
in nature (i.e., existing as textual code on the screen and as physical circuits on the
textile) and can make visible how teachers navigate between different modalities;
and (3) allow for creative expression and aesthetics through personalized projects
and can demonstrate how teachers respond to and are supportive of distinct student
interests. Focusing on two classrooms from the Exploring Computer Science (ECS)
program (Goode et al., 2014), we examined what strategies these experienced ECS
teachers used in their implementation of the new e-textiles curriculum unit (Fields,
Lui, & Kafai, 2017; Fields et al., 2018a, b). In this chapter we focus on strategies
that support iteration as a key computational thinking process that can be difficult to
implement in classrooms.
16.3 Methods
16.3.1 Context
Our e-textiles unit is embedded within the Exploring Computer Science (ECS) ini-
tiative, which comprises a one-year introductory computer science curriculum with
a 2-year professional development sequence. The curriculum consists of six units:
Human–Computer Interaction, Problem-Solving, Web Design, Introduction to Pro-
gramming (Scratch), Computing and Data Analysis, and Robotics (Lego Mind-
storms) (Goode & Margolis, 2011). The instructional design of the curriculum adopts
inquiry-based teaching practices so that all students are given opportunities to explore
and design investigations, think critically and test solutions, and solve real problems.
ECS has successfully increased diversity to representative rates in Los Angeles and
has subsequently scaled to other large urban districts and regions, now with over
500 teachers nationwide.
Within this successfully implemented, inquiry-based curriculum, we noted an
opportunity to broaden the range of computer science activities by including e-
textiles. The curriculum unit was co-developed by e-textiles and ECS experts to
combine best practices of teaching and crafting e-textiles based on a constructionist
philosophy alongside ECS principles, style, and writing. The curriculum contains
big ideas and recommended lesson plans, with much room for teachers to interpret
and bring in their own style. A final version of the curriculum can be found at http://
exploringcs.org/e-textiles.
The ECS e-textiles unit implemented for this study consisted of six projects, each
increasing in difficulty and creative freedom, that introduced concepts and skills
including conductive sewing and sensor design; simple, parallel, and computational
circuits (independently programmable); programming sequences, loops, condition-
als, and Boolean logic; and data from various inputs (switches and sensors). The
projects were as follows: (1) a paper-card using a simple circuit, (2) a “stitch-card”
with one LED sewn as a simple circuit, (3) a wristband with three LEDs in parallel,
(4) a felt project using a preprogrammed LilyTiny microcontroller and 3–4 LEDs,
(5) a classroom-wide mural project where pairs of students created portions that each
incorporated two switches to computationally create four lighting patterns, and (6)
a “human sensor” project that used two aluminum foil conductive patches that when
squeezed generated a range of data to be used as conditions for lighting effects. Stu-
dent artifacts included stuffed animals, paper cranes, and wearable shirts or hoodies,
all augmented with the sensors and actuators.
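The computational core of the later projects can be sketched in a few lines. The Python snippet below is purely illustrative — the students actually programmed LilyPad Arduino boards, and all function names, thresholds, and pattern labels here are invented. It shows how two on/off switches yield 2 × 2 = 4 Boolean combinations, one per lighting pattern in the mural project, and how conditionals can partition a range of sensor readings in the human sensor project.

```python
# Illustrative only: students programmed LilyPad Arduino boards; these
# function names, thresholds, and pattern labels are all invented.

def mural_pattern(switch_a, switch_b):
    """Mural project idea: two on/off switches give 2 x 2 = 4 combinations,
    so each pair of students could computationally trigger four patterns."""
    patterns = {
        (False, False): "off",
        (True, False): "twinkle",
        (False, True): "chase",
        (True, True): "all_on",
    }
    return patterns[(switch_a, switch_b)]

def human_sensor_pattern(reading):
    """Human sensor project idea: squeezing the foil patches yields a range
    of readings (0-1023, Arduino-style); conditionals pick an effect."""
    if reading < 200:      # no touch: stay dark
        return "off"
    elif reading < 600:    # light touch: gentle effect
        return "fade"
    else:                  # firm squeeze: blink
        return "blink"

print(mural_pattern(True, True))   # all_on
print(human_sensor_pattern(700))   # blink
```

The point of the sketch is the structure, not the specifics: Boolean combinations and threshold conditionals are exactly the concepts the unit lists (conditionals, Boolean logic, and data from switches and sensors).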
In Spring 2016 two high school teachers, each with 8–12 years of computer
science classroom teaching experience, piloted the e-textiles unit in their ECS classes
in two large public secondary schools in a major city in the western United States.
Both schools had socioeconomically disadvantaged students (59–89% of students
at each school) with ethnically non-dominant populations (i.e., the majority of the
students at each school included African American, Hispanic/Latino, or Southeast
Asian students).
The unit ran for approximately eight weeks (~40 h of class time, with
interruptions from holidays, testing, and other school obligations). The researchers
documented teaching with detailed field notes, in-class video and audio recordings,
and pictures/videos of student work, supplemented by three interviews with the teach-
ers before, during, and after the unit, and brief focus group interviews with students
at the end of the unit.
The analysis of field notes involved constant comparative analysis (see Charmaz,
2011) to (1) identify computational thinking practices exhibited during the e-textiles
unit, and then (2) compare these with the larger corpus of computational thinking
practices identified in the AP Computer Science Principles curriculum (see Fields
et al., 2017). Through this process, iteration stood out as a key area of learning. Then
the team re-coded the data to find all of the teaching practices in this area. Finally, the
team compared findings from observational data with the interviews from teachers
and students to see whether these practices came up from participants’ perspectives
and to understand these two areas in greater depth.
16.4 Findings
Within the e-textiles unit, students engaged with iteration at many stages: prototyping,
testing, and revising designs while tackling bugs and problems that arose in the
process. This was evident through the changes that students made in their projects,
including improvements in circuit diagrams, changes in and expansions of code, and
visible alterations in the physical projects themselves. In fact, iterating on project
ideas and implementation was one of the most frequently coded practices across
the data. Earlier work in e-textiles has documented similar changes in student design
(Fields, Kafai, & Searle, 2012; Kafai et al., 2014b). The focus of our findings here is on
how the teachers supported a culture of iteration and refinement in their classrooms.
What practices did they use to create an environment where sharing about mistakes
and seeing them as a means of learning was valued? Furthermore, how did this culture
support students’ awareness and acknowledgement of iteration as a key perspective
that could be used in service of creating a computational artifact? Below we outline
three main areas of teaching practices that helped to develop a classroom culture
of iteration: teachers’ modeling of their own mistakes, teachers’ modeling of
students’ mistakes, and support for students’ personal designs.
One key teaching practice involved teachers promoting their own mistakes, errors,
and less-than-perfect projects in front of the classroom. When introducing a new
project, for instance, Ben or Angela would show their own sample creations (which
were made during teacher professional development for the unit). This not only
served to give students ideas but also allowed the teachers to showcase their own
experience of revision and iteration, and coach students on tips for dealing with this
process. Consider the way Angela shared her work in her introduction to the LilyTiny
project in class, as highlighted within our field notes:
So let me tell you a little story. When I was working on one of my projects, I didn’t think
I needed a plan. “I’ll just do this,” I thought.’ Angela went on to explain that she worked
for two days on her project and then eventually had to take most of it apart because it didn’t
work. ‘If you don’t plan, it’s going to take you more time to take things out and fix it than
it would to do it right the first time. So you’re going to draw it all out, and you’ll use that
blueprint and then you’ll have your little map.’ (160405 field notes¹).
Here Angela told a self-deprecating story of her process of creating her project. She
highlighted how not having a predetermined plan for her e-textiles project—specifi-
cally, mapping out circuitry connections beforehand—meant that she ended up mak-
ing numerous mistakes while crafting and eventually had to redo her entire project.
She framed the circuit diagram as a way of creating a well-thought-out plan for imple-
mentation, a skill which is helpful not only with regard to crafting but also within
the context of creating a complex computational artifact that encompasses multiple
modes (i.e., sewing, circuitry, programming). It serves not only as a time-saving measure
to prevent costly mistake fixing but also as a way to help students structure their
construction time more efficiently. Note that the knowledge Angela shared was very pragmatic;
she did not ask students to trust her on her authority alone (i.e., “have a plan because
I said so”) but rather because of her personal experience.
The other teacher, Ben, similarly highlighted his own process of dealing with
mistakes when using his own project to aid in teaching. During a programming
lesson, Ben shared his personal project code as an exemplar and sample for students
to remix while using a preassembled e-textiles circuit board (the LilyPad Protosnap)
Ben: As an introduction, “everyone please open up my .pdf called Function Code. And I
want pairs to inspect the code and discuss what the code is going to do.” Two students
who were absent the prior day pointed to the switch and said that Number 2 should turn
on… They seemed confused. At that moment, Ben realized his error—the code that he
shared was something he wrote up for his own (extra) project that he is making that will
involve switches and buzzers—his variables refer to the wrong ports on the Protosnap
microcontroller. (160524 field notes).
Ben went on to explain this mistake to the class: “My apologies. I was messing
around with the buzzer last night [on my own project],” and then explained how the
connections were different from the LilyPad Protosnap boards students were using.
However, rather than starting over on the task, Ben highlighted his error to students,
inviting them to participate in how it could be fixed so that the code matched with
the boards (160524 field notes).
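The kind of mismatch Ben encountered — code whose variables reference one board's ports while the class is wiring a different board — can be illustrated with a small, entirely hypothetical Python sketch (all pin numbers and component names below are invented, and this is not the actual LilyPad code from the lesson):

```python
# Hypothetical sketch of the mismatch Ben described: code written for one
# board's port layout fails against a board wired differently. All pin
# numbers and component names are invented for illustration.

PROTOSNAP_PINS = {"switch": 2, "led": 5}          # the boards students used
CUSTOM_PROJECT_PINS = {"switch": 9, "buzzer": 7}  # Ben's own extra project

def check_code_against_board(code_pins, board_pins):
    """Return the components the code references that the board lacks."""
    return sorted(set(code_pins) - set(board_pins))

# Ben's code referenced his custom project's ports, so on the ProtoSnap
# the buzzer reference has no matching component:
missing = check_code_against_board(CUSTOM_PROJECT_PINS, PROTOSNAP_PINS)
print(missing)  # ['buzzer']
```

Seen this way, the students' confusion is a concrete debugging clue — the code and the hardware describe different port maps — which is precisely the kind of error Ben invited the class to help fix.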
¹ We use double quotation marks (“ ”) to show exact words of participants and single quotation
marks (‘ ’) to show paraphrased words, which occur most often in field notes where conversations
were typed in the moment rather than audio recorded and transcribed.
In modeling their own imperfect processes of creation and addressing their mis-
takes, the teachers, Angela and Ben, thereby promoted a classroom culture of itera-
tive practice, valuing process over product. While everyone was encouraged to make
full, working projects, the teachers stressed that the actual experience of creation and
learning would most likely include moments of failure, and subsequent revisions and
iterations. Students were encouraged to think that it was okay not to be perfect the
first time (or the second, third, fourth) they did something. Perfection, in these cases,
could prevent students from moving forward in their learning.
Just as the teachers showed their own projects and processes in front of the entire
class, they also showcased students’ challenges, mistakes, and in-process projects in
order to promote the practice of iteration and revision. For instance, Angela added
a journal question after the completion of the wristband activity that solicited chal-
lenges that students had faced: “Think about this week’s project, what was the biggest
challenge? If you had no challenges, what tips do you have for people who may be
struggling?” (160422 field notes). After students had some minutes to think and to
write, she invited various students to share out what they had written. Numerous
students shared advice such as “plan more,” echoing Angela’s lesson from above
regarding the importance of creating clear circuit diagrams as blueprints to help in
constructing a functional e-textile artifact. Students also mentioned other issues that
spanned across the multiple domains of e-textiles work. For instance, another student
brought up the polarity issue of “mixing up the positive and negative” when trying
to create a working sewn circuit. In response, Angela invited students to share sug-
gestions on how to avoid this issue; they suggested curling or twisting the positive
and negative sides of the LED wires to look different, developing symbolic means
to identify polarity.
Another way that the teachers modeled students’ mistakes was through reflections
between projects. For instance, after the wristband (project #3) was complete, Ben
provided some constructive thoughts to his class:
Ben: ‘I want to talk about the [project] we just did. Because those bracelets
look awesome, they look fantastic and you should be proud of yourselves.
There were a couple of things that I saw that you could improve … I saw
some sloppy stitching. Meaning that they were more than an inch. Some
of them were not pulled completely tight. I know that some of you might
not want to do the work of going in and out and in and out. But what might
be a concern?’
Student: ‘It would get caught in something.’
Ben: ‘Yes, and concise means a little tighter.’ (160418, field notes)
In this example, Ben went on to coach students about two other problems he saw
in student work on the wristbands. Each time, he presented a problem then asked
students why they thought it might be an issue, allowing the students to share their
expertise on these problems. In this way Ben also supported iterative design between
projects, not just within a project. This suggests that having a series of projects
provided more opportunities for iterative practices than just having a single project.
Students could improve their techniques across projects by recalling this ongoing cat-
alogue of mistakes and solutions, and essentially acting as problem-solving resources
for one another. Beyond merely supporting iterative activity, the teaching strategies
described above pushed students to explicitly consider how iterative thinking was
an essential part of computational work. By asking students to continually reflect
upon the challenges and issues they faced, both Angela and Ben were able to high-
light the inherently adaptive, trial-and-error nature of creating a functional e-textile
artifact. Students learned not only how to recognize potential problems, but how to
continually address these through more effective planning and ongoing implemen-
tation. From this standpoint, students became more cognizant of the powerful role
that iteration could play in developing their own increasingly complex projects over
time.
What inspired these practices of revealing and sharing mistakes, thus reinforcing
process over product, and supporting iteration as an activity and a perspective? One
thing that both teachers noted in their interviews with us was that doing the projects
themselves helped them understand what students were going through. As Angela
expressed, “A lot of times, we don’t do what we’re asking kids to do. It was great
doing the projects in the [professional development sessions] because it helped me
anticipate questions that might come up in class and think of ways to address them”
(160609, interview). Creating projects helped the teachers themselves to reflect on
their own processes of making them (including their own experience of trial and
error, revision, and iteration) and provided a base of stories and examples to share
with students. Thus one of the most important underlying aspects of the classroom
culture was the teachers’ emphasis on their own as well as students’ personalized
projects.
How did the teachers’ encouragement of personalized projects promote iterative think-
ing and design? Students were tasked to create projects within predetermined con-
straints throughout the unit (e.g., a light-up wristband, an interactive felt banner).
However, both Angela and Ben also actively encouraged students to develop their
own personal ideas through these assignments. This can be illustrated, in particular,
through the students’ final human sensor project, where a fabric object was modified
to include four to five LEDs that could be triggered, by readings from conductive
foil patches, to display at least four customized light pattern functions. From the start, the
teachers actively encouraged personalization by allowing students to either bring or
Fig. 16.1 Human sensor projects by students (top to bottom): Bridget’s jellyfish (top and bottom
views), Mateo’s Viva Mexico poncho (front and back), and Peter’s dog harness (top and bottom)
create their own personal objects for modification. This ranged from a “Viva Mex-
ico” poncho, to a dog walking harness, to a stuffed jellyfish made of fabric (see
Fig. 16.1). Further, teachers encouraged the students to customize the desired func-
tionality based on their own interests and desires. For instance, some projects had
a practical purpose (the walking harness that lit up when it was actually worn by
a dog), while others’ projects were more whimsical (the jellyfish, whose tentacles
glowed when triggered through play and interaction).
By pushing this personalization within the context of the given project constraints,
teachers inherently led students through an iterative process. One way this occurred
was in developing unique circuit diagrams for the diverse objects. For instance,
Angela had to assist students in thinking through the complex spatial dimensions
of their personalized objects.
From this standpoint, encouraging personalization not only creates natural oppor-
tunities for iterative design and troubleshooting, but also has the potential of motivat-
ing students to continually engage with and push through these processes. Consider-
ing that incremental and iterative work is so essential to computational thinking, the
personalization promoted by the teachers and afforded by the medium of e-textiles
therefore further enhances iteration.
16.5 Discussion
References
Ball, D., Thames, M. H., & Phelps, G. (2008). Content knowledge for teaching: What makes it
special? Journal of Teacher Education, 59(5), 389–407.
Barr, V., & Stephenson, C. (2011). Bringing computational thinking to K-12: What is involved and
what is the role of the computer science education community? ACM Inroads, 2(1), 48–54.
Brennan, K., & Resnick, M. (2012, April). New frameworks for studying and assessing the devel-
opment of computational thinking. Annual Meeting of the American Educational Research Asso-
ciation, Vancouver, BC, Canada.
Buchholz, B., Shively, K., Peppler, K., & Wohlwend, K. (2014). Hands on, hands off: Gendered
access in sewing and electronics practices. Mind, Culture, and Activity, 21(4), 1–20.
Buechley, L., Peppler, K., Eisenberg, M., & Kafai, Y. (Eds.). (2013a). Textile messages: Dispatches
from the world of e-textiles and education. New York, NY: Peter Lang.
Charmaz, K. (2011). Grounded theory methods in social justice research. The Sage Handbook of
Qualitative Research, 4, 359–380.
CollegeBoard. (2016). AP computer science principles: Course and exam description effective
fall 2016. New York, NY: CollegeBoard. Retrieved from https://secure-media.collegeboard.org/
digitalServices/pdf/ap/ap-computer-science-principles-course-and-exam-description.pdf.
Denner, J., Werner, L., & Ortiz, E. (2012). Computer games created by middle school girls: Can
they be used to measure understanding of computer science concepts? Computers & Education,
58, 240–249.
Fields, D. A., Kafai, Y. B., Nakajima, T. M., Goode, J., & Margolis, J. (2018a). Putting making
into high school computer science classrooms: Promoting equity in teaching and learning with
electronic textiles in Exploring Computer Science. Equity & Excellence in Education, 51(1),
21–35.
Fields, D. A., Shaw, M. S., & Kafai, Y. B. (2018b). Personal learning journeys: Reflective portfolios
as “objects-to-learn-with” in an e-textiles high school class. In V. Dagienė & E. Jasutė (Eds.),
Constructionism 2018: Constructionism, Computational Thinking and Educational Innovation:
Conference Proceedings (pp. 213–223). Vilnius, Lithuania. http://www.constructionism2018.fsf.
vu.lt/proceedings.
Fields, D. A., Kafai, Y. B., & Searle, K. A. (2012). Functional aesthetics for learning: Creative
tensions in youth e-textiles designs. In J. van Aalst, K. Thompson, M. J. Jacobson, & P. Reimann
(Eds.), The Future of Learning: Proceedings of the 10th International Conference of the Learning
Sciences (ICLS 2012), Full Papers (Vol. 1, pp. 196–203). Sydney, NSW, Australia: International
Society of the Learning Sciences.
Fields, D. A., Lui, D., & Kafai, Y. B. (2017). Teaching computational thinking with electronic
textiles: High school teachers’ contextualizing strategies in Exploring Computer Science. In S.
C. Kong, J. Sheldon, & R. K. Y. Li (Eds.), Conference Proceedings of International Conference
on Computational Thinking Education 2017 (pp. 67–72). Hong Kong: The Education University
of Hong Kong.
Goode, J., & Margolis, J. (2011). Exploring Computer Science: A case study of school reform.
ACM Transactions on Computing Education, 11(2), 12.
Goode, J., Margolis, J., & Chapman, G. (2014). Curriculum is not enough: The educational theory
and research foundation of the Exploring Computer Science professional development model. In
Proceedings of SIGCSE ’14 (pp. 493–498). New York, NY: ACM.
Griffin, J., Pirman, T., & Gray, B. (2016). Two teachers, two perspectives on CS principles. In
Proceedings of SIGCSE ’16 (pp. 461–466). New York, NY: ACM.
Grover, S., & Pea, R. (2013). Computational thinking in K–12: A review of the state of the field.
Educational Researcher, 42(1), 38–43.
Grover, S., Pea, R., & Cooper, S. (2015). Designing for deeper learning in a blended computer
science course for middle school students. Computer Science Education, 25(2), 199–237.
Guzdial, M. (2016). Drumming up support for AP CS principles. Communications of the ACM,
59(2), 12–13.
Kafai, Y. B., & Burke, Q. (2014). Connected code: Why children need to learn programming.
Cambridge, MA: MIT Press.
Kafai, Y. B., Fields, D. A., & Searle, K. A. (2014a). Electronic textiles as disruptive designs:
Supporting and challenging maker activities in schools. Harvard Educational Review, 84(4),
532–556.
Kafai, Y. B., Lee, E., Searle, K. S., Fields, D. A., Kaplan, E., & Lui, D. (2014b). A crafts-oriented
approach to computing in high school. ACM Transactions on Computing Education, 14(1), 1–20.
Lee, I., Martin, F., Denner, J., Coulter, B., Allan, W., Erickson, J., … Werner, L. (2011). Computa-
tional thinking for youth in practice. ACM Inroads, 2(1), 32–37.
Lui, D., Jayathirtha, G., Fields, D. A., Shaw, M., & Kafai, Y. B. (2018). Design considerations for
capturing computational thinking practices in high school students’ electronic textile portfolios.
In Proceedings of the International Conference of the Learning Sciences. London, UK.
Lui, D., Walker, J. T., Hanna, S., Kafai, Y. B., Fields, D. A., & Jayathirtha, G. (in press). Commu-
nicating computational concepts and practices within high school students’ portfolios of making
electronic textiles. Interactive Learning Environments.
Margolis, J., & Goode, J. (2016). Ten lessons for CS for all. ACM Inroads, 7(4), 58–66.
Margolis, J., Goode, J., & Ryoo, J. (2015). Democratizing computer science knowledge. Educational
Leadership, 72(4), 48–53.
Mishra, P., & Koehler, M. J. (2006). Technological pedagogical content knowledge: A new framework
for teacher knowledge. Teachers College Record, 108(6), 1017–1054.
National Research Council. (2011). Report of a workshop on pedagogical aspects of computational
thinking. Washington, DC: National Academy Press.
Penuel, W. R., Fishman, B. J., Cheng, B., & Sabelli, N. (2011). Organizing research and develop-
ment at the intersection of learning, implementation, and design. Educational Researcher, 40(7),
331–337.
Ragonis, N. (2012). Integrating the teaching of algorithmic patterns into computer science teacher
preparation programs. In Proceedings of ITiCSE ’12 (pp. 339–344). New York, NY: ACM.
Soloway, E., & Spohrer, J. C. (Eds.). (1989). Studying the novice programmer. Hillsdale, NJ:
Lawrence Erlbaum.
Tayal, S. P. (2013). Engineering design process. International Journal of Computer Science and
Communication Engineering, 1–5.
Tofel-Grehl, C., Fields, D. A., Searle, K., Maahs-Fladung, C., Feldon, D., Gu, G., & Sun, V. (2017).
Electrifying engagement in middle school science class: Improving student interest through e-
textiles. Journal of Science Education and Technology, 26(4), 406–417.
Weintrop, D., Beheshti, E., Horn, M., Orton, K., Jona, K., Trouille, L., & Wilensky, U. (2016).
Defining computational thinking for mathematics and science classrooms. Journal of Science
Education and Technology, 25(1), 127–147.
Wing, J. (2006). Computational thinking. Communications of the ACM, 49(3), 33–35.
Yadav, A., Mayfield, C., Zhou, N., Hambrusch, S., & Korb, J. T. (2014). Computational thinking
in elementary and secondary teacher education. ACM Transactions on Computing Education,
14(1), 1–16.
Chapter 17
A Study of the Readiness
of Implementing Computational
Thinking in Compulsory Education
in Taiwan
Ting-Chia Hsu
Abstract In recent years, Computational Thinking (CT) Education for K-12 stu-
dents and undergraduates has become an important and hotly discussed issue. In Tai-
wan, starting from August 2019, all students in secondary schools will be required
to develop computational thinking competencies. This chapter discusses not
only the preparation of students to learn CT but also the preparation required for
teachers and principals. In junior high schools, there will be six compulsory credits
each for information technology and for living technology. Integrating
CT into other courses, such as mathematics, is one of the approaches implemented
at the primary school level. This chapter explores an initiative in which teachers
integrated block-based programming into a mathematics course for the sixth-grade
students, and further studied the self-efficacies and motivations of the students while
they learn CT in the integrated course. The chapter also reports on investigations
of the attitudes of educational leaders, such as the K-12 principals, towards teacher
preparation for conducting CT education in their schools. The results of the study
indicate that, from the perspectives of the K-12 principals, the weakest part of
object readiness (facilities) in Taiwan in 2017 was the availability of classrooms
for maker activities. In terms of human resource readiness, instructional material
resource readiness, and leadership support (management readiness), teachers and
principals rated readiness at more than three but less than four points on a
five-point Likert scale, implying that there is still room in all these aspects to be
enhanced. Many teacher training courses will need to be carried out in the next 1 to
2 years because the technological and pedagogical content knowledge of the teachers
regarding CT education must continue to be strengthened.
17.1 Introduction
Computational thinking refers to the basic concepts and processes used for solv-
ing problems in the computer science domain. The term was officially proposed
in 2006 (Wing, 2006), and was later simplified into four phases for the curricu-
lum design of CT in the United States (e.g., https://code.org/curriculum/course3/
1/Teacher; http://cspathshala.org/2017/10/25/computational-thinking-curriculum/),
the United Kingdom (e.g., https://www.bbc.co.uk/education/guides/zp92mp3/
revision), India (e.g., https://www.nextgurukul.in/KnowledgeWorld/computer-
masti/what-is-computational-thinking/), and so on (Fig. 17.1).
As shown in Fig. 17.1, the first phase of the CT process is to decompose the
problem so that it can be analyzed and divided into several smaller subproblems.
This is called the “problem decomposition” phase. The second phase is to identify patterns in the data representation or data structure: if the students observe any repeated presentation of data or methods, they can recognize similarities, regularities, or commonalities, and therefore do not need to repeat work when they write out the solution steps. The third phase is to generalize or abstract the principles or factors into a formula or rule; the students try to model the patterns they found in the previous step and, after testing, abstract the critical factors into a model for solving the problem. Finally, in the fourth phase, they design the algorithm, ensuring that it includes all the steps for solving the problem systematically.
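To make the four phases concrete, the following is an illustrative sketch (not from the chapter; the function name and numbers are hypothetical) that applies them to an equation of the kind taught in the “equality axiom” unit discussed later:

```python
# Illustrative sketch of the four CT phases applied to solving a*x + b = c.
# Phase 1 (problem decomposition): split the task into two smaller steps:
#   remove b from both sides, then divide both sides by a.
# Phase 2 (pattern recognition): every such equation is solved by the same
#   two moves, whatever the numbers are.
# Phase 3 (abstraction): capture the pattern as a formula, x = (c - b) / a.
# Phase 4 (algorithm design): turn the formula into reusable solution steps.

def solve_linear(a, b, c):
    """Solve a*x + b = c for x using the equality axiom."""
    if a == 0:
        raise ValueError("a must be non-zero")
    c = c - b        # subtract b from both sides (equality preserved)
    x = c / a        # divide both sides by a (equality preserved)
    return x

print(solve_linear(3, 4, 19))  # 3x + 4 = 19  ->  x = 5.0
```

Here the reusable function is the “algorithm” of phase four; the comments mark where each earlier phase contributed.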
Although CT is not equal to programming, block-based programming languages such as Scratch, Blockly, mBlock, and App Inventor are good tools for developing students’ CT capabilities. CT has been defined as “the thought processes
involved in formulating problems and their solutions so that the solutions are rep-
resented in a form that can be effectively carried out by an information-processing
agent” (Cuny, Snyder, & Wing, 2010). The current study not only employed Scratch
to learn CT, but also used it to implement the solution to a problem that the students
encountered in their mathematics course. Scratch and other visual programming tools are suitable for use in different contexts such as games, science, music, and so
on (Maloney, Resnick, Rusk, Silverman, & Eastmond, 2010; Armoni, Meerbaum-
Salant, & Ben-Ari, 2015).
In a study by Maloney et al. (2008), when Scratch was introduced to young students
from 8 to 18 years old, the students were found to be highly motivated to write pro-
grams. Another study found that fifth and sixth graders perceived Scratch as being
useful, and that they had high motivation and positive attitudes toward using it (Sáez-
López, Román-González, & Vázquez-Cano, 2016). Ke (2014) applied Scratch with secondary school students to design mathematics games, and found that integrating block-based programming with mathematics game design could promote students’ potential to learn mathematics and resulted in significantly more positive attitudes toward mathematics. Furthermore, this method was beneficial for activating students’ reflection on their daily-life mathematical experiences. Mathematics concepts and block-based programming were integrated when the students solved problems or created games. The students not only achieved the mathematics learning targets, but also carried out CT and transferred their reasoning process into an abstract program. It has also been found that using block-based programming in computer science can promote students’ cognitive level and self-efficacy without producing high learning anxiety, and that students spend less time learning and creating new programs in comparison with text-based programming (Armoni et al., 2015).
The first study reported in this chapter integrated the block-based programming
software, Scratch, into a mathematics course, and applied the four phases of CT to
solve mathematics problems. The purpose of the study was to explore the correlations
between self-efficacy and learning motivation, and between self-efficacy and creative
tendency. From the results, the critical factor correlated with self-efficacy could be
identified when the students were involved in the proposed treatments. In addition,
this study also aimed to confirm whether the students made significant progress in
Mathematics and in problem-solving by using block-based programming. Therefore,
the research questions are as follows:
(1) Was the students’ learning effectiveness in mathematics significantly promoted after the treatment?
how to provide support and assistance to their teachers for enhancing CT education
(Israel, Pearson, Tapia, Wherfel, & Reese, 2015).
In addition, when it comes to CT education, visual programming is a critical
enabler. When the teachers design CT-related courses, they mostly use block-based
programming tools for the basic level. Cetin (2016) considered CT to be the foun-
dation, and applied Scratch to pre-service teachers’ training. The results indicated
that this did indeed help the teachers in arranging beginner courses, and the visual
programming environment could help teachers better understand CT (Cetin, 2016).
The second study reported in this chapter applied the same approach as in study one (i.e., visual programming for the mathematical learning unit) to the training of newly appointed K-12 principals. After they experienced the demonstrations
and training, we then investigated the readiness of their schools according to four
dimensions: technology readiness, teacher readiness, instructional resource readi-
ness, and leadership support. We also investigated the technological, pedagogical, and content knowledge (TPACK) of the teachers and the overall TPACK related to CT education, based on the real conditions the principals perceived. Therefore, the research questions for
the second study are as follows:
(4) Concerning the principals who had experienced this course during their pro-
fessional development training, how did they perceive the present readiness of
their school for conducting such CT courses?
(5) Concerning the principals who had experienced this course during their profes-
sional development training, how did they perceive the present TPACK of the
teachers in their school?
Overall, study one aimed to confirm the feasibility and effectiveness of conducting
CT education in K-12 courses. Study two explored the readiness of the leadership in
K-12 schools to implement and support CT education.
As mentioned above, two studies are integrated into this chapter. The following
section illustrates the research method including participant samples, measuring
tools, and the experimental process, as well as the research results for study one.
17.2.1 Participants
For research questions one to three in study one, the subjects included one class of
sixth graders of an elementary school in Taiwan. A total of 20 students participated in
the study. They were taught by the same instructor who had taught that mathematics
course and Scratch for more than 10 years. The average age of the students was 12.
The learning performance of CT includes three aspects which are concepts, per-
spectives, and practices (Brennan & Resnick, 2012). In study one, the research tools
included the pre-test and post-test of the mathematics learning achievements, the
post-test of Scratch Programming implementation, and the questionnaire for mea-
suring the students’ learning motivation, creative tendency, and self-efficacy.
The mathematics test sheets were developed by two experienced teachers. The pre-
test consisted of 10 calculation questions about the prior knowledge of the course
unit “equality axiom,” with a perfect score of 100. The post-test consisted of 10
calculation questions for assessing the students’ knowledge of the equality axiom
unit, with a perfect score of 100. For instance, the following is an example for the
elementary school students to practice mathematics and programming at the same
time.
On Sandy’s birthday, her father, mother, and brother go to a theme park with her. They participate in a competition in which they have to estimate the size of a facility shaped like a triangle. The host asks them to calculate the area of the triangle using block tools on a computer in order to earn points. If you were Sandy, how would you solve the problem using block-based programming to make the calculation automatic?
Table 17.1 shows one student’s answer to the abovementioned problem as an example of block-based programming and the CT process.
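As a hypothetical text-based analogue of such a solution (the actual student answer used Scratch blocks, which are not reproduced here), the abstraction phase yields the familiar formula and the algorithm phase wraps it for repeated, automatic calculation:

```python
# Hypothetical text-based analogue of the triangle-area task: the abstracted
# formula is area = base * height / 2, and the algorithm applies it to any
# set of measurements so the calculation is automatic.

def triangle_area(base, height):
    """Return the area of a triangle from its base and height."""
    return base * height / 2

# Made-up measurements, standing in for the guessed sizes of the facilities:
for base, height in [(6, 4), (10, 3)]:
    print(f"base={base}, height={height} -> area={triangle_area(base, height)}")
```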
In the post-test of programming performance, there was a total of five situated problems for the students to solve using block-based programming according to the four phases of CT. Each programming problem was scored out of 20 points: five points for assessing whether the students employed proper blocks, five points for checking the usage of variables, five points for evaluating the formula the students derived from the meaning of the problem in their program, and five points for confirming whether the output was correct. Consequently, the five programming problems were worth a total of 100 points.
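The scoring scheme just described can be sketched as follows; the marks below are made up for illustration and are not student data from the study:

```python
# Illustrative sketch of the scoring scheme: each of the five problems is
# scored on four 5-point criteria (proper blocks, variable usage, correct
# formula, correct output), giving at most 20 points per problem and 100
# points in total. The example marks are hypothetical.

CRITERIA = ("blocks", "variables", "formula", "output")

def score_problem(marks):
    """Sum one problem's criterion marks, each capped at 5 points."""
    return sum(min(marks[c], 5) for c in CRITERIA)

# Hypothetical marks for one student's five problems:
problems = [
    {"blocks": 5, "variables": 5, "formula": 5, "output": 5},
    {"blocks": 5, "variables": 4, "formula": 3, "output": 5},
    {"blocks": 4, "variables": 5, "formula": 5, "output": 0},
    {"blocks": 5, "variables": 5, "formula": 4, "output": 5},
    {"blocks": 3, "variables": 3, "formula": 2, "output": 0},
]
total = sum(score_problem(p) for p in problems)
print(total)  # out of 100
```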
The questionnaire of learning motivation was modified from the measure pub-
lished by Hwang, Yang, and Wang (2013). It consisted of seven items (e.g., “It is
important for me to learn what is being taught in this class”) with a 5-point rating
scheme. The Cronbach’s alpha value of the questionnaire was 0.823.
The self-efficacy questionnaire was derived from the questionnaire developed by
Pintrich, Smith, Garcia, and McKeachie (1991). It consists of eight items (e.g., “I’m
confident I can understand the basic concepts taught in this course”) with a five-point
Likert rating scheme. The Cronbach’s alpha value was 0.894.
The Creativity Assessment Packet (CAP) was revised from Williams (1991). It consisted of 50 items (e.g., “I have a vivid imagination”) with a 5-point rating scheme covering the scales of overall creativity, curiosity, imagination, complexity, and risk taking.
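The internal-consistency coefficients reported for these instruments can be reproduced from raw item responses. A minimal sketch of Cronbach’s alpha in pure Python, on made-up responses (not the study’s data):

```python
# Minimal sketch of Cronbach's alpha, the internal-consistency coefficient
# reported for each questionnaire:
#   alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
# The response matrix below is illustrative only.
from statistics import pvariance

def cronbach_alpha(responses):
    """responses: list of respondents, each a list of item scores."""
    k = len(responses[0])                      # number of items
    items = list(zip(*responses))              # scores grouped per item
    item_var = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(r) for r in responses])
    return k / (k - 1) * (1 - item_var / total_var)

data = [[4, 5, 4], [3, 4, 3], [5, 5, 4], [2, 3, 2]]  # 4 respondents, 3 items
print(round(cronbach_alpha(data), 3))
```

In practice such coefficients are computed over the full sample; values around .7 or above, as reported here, are commonly taken to indicate acceptable reliability.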
Before the experiment, the students were given time to get used to the block-based
programming environment. Figure 17.2 shows the flow chart of the experiment. Each
period in the mathematics class is 40 min in elementary school. At the beginning, the instructor spent 8 weeks (one period per week, eight periods in total) teaching the students to become familiar with the block-based programming environment. Before the learning activity of systematically applying the four phases of CT, the students completed the Creativity Assessment Packet measure, took the pre-test, and completed the learning motivation and self-efficacy questionnaires.
Thereafter, 3 weeks (one period per week, three periods in total) were spent on applying the four phases of CT and integrating block-based programming into the sixth-grade mathematics course. After the effectiveness of involving the CT process in mathematics was confirmed in study one, this part (three periods) was later demonstrated in the training course for the newly appointed principals, so that they could experience and observe how CT processes are involved in learning. The students practiced this method six times, each time taking half a period; therefore, six situated examples in total were implemented during the three periods of the mathematics course.
At the same time, the students learned mathematics by solving block-based programming problems through the four CT phases. After the learning activity, five programming problems in total were used to evaluate the students’ block-based programming performance, with the four CT phases involved in both the block-based programming and the mathematics problems. The test took 1.5 periods.
Finally, the students also spent one period on the pen-and-paper post-test of mathematics for measuring their learning achievements. In total, 15 periods were spent on the experiment, which lasted around three-fourths of a semester (i.e., 15 weeks). The experimental treatment after the pre-test lasted 5 weeks.
In study one, the pre- and post-test were compared via a paired-sample t test to assess
whether or not the students had made progress in the mathematics unit.
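As a hypothetical illustration of this analysis (the study’s raw scores are not reported here), the paired-sample t statistic can be computed in pure Python; the resulting |t| is then compared against a critical value of the t distribution with n − 1 degrees of freedom:

```python
# Minimal sketch of the paired-sample t statistic used to compare pre-test
# and post-test scores: t = mean(d) / (stdev(d) / sqrt(n)), where d are the
# paired differences. The score lists are illustrative, not the study's data.
from math import sqrt
from statistics import mean, stdev

def paired_t(pre, post):
    """Return the paired-sample t statistic for two matched score lists."""
    d = [b - a for a, b in zip(pre, post)]     # per-student differences
    n = len(d)
    return mean(d) / (stdev(d) / sqrt(n))

pre  = [60, 55, 70, 65, 50]   # hypothetical pre-test scores
post = [75, 60, 80, 70, 65]   # hypothetical post-test scores
print(round(paired_t(pre, post), 2))
```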
Their block-based programming performance was also assessed. Correlation anal-
ysis was performed to identify the relationship between the students’ block-based
programming performance and their post-test results.
Correlation analysis was also utilized to check the correlations among the students’ learning motivation, self-efficacy, and creative tendency after they learned mathematics through the four phases of CT, with visual programming integrated into the mathematics course.
Based on the above-mentioned data analysis methods, study one in this chapter
reports the cognition and perspectives of the students involving CT processes in their
mathematical learning.
The research design hypothesized that the students would make progress in the
learning objectives of the mathematics unit. Therefore, a paired-sample t test was
performed on the pre-test and post-test in the mathematics unit.
The students did not learn mathematics through conventional instruction; rather, the four phases of CT were applied to integrate block-based programming into the mathematics course. Table 17.2 reveals that this approach did indeed contribute to the students’ learning effectiveness: they made significant progress in the mathematics equality axiom unit after the experimental treatment (t = 2.72, p < 0.05).
The self-efficacy of the students applying the four phases of CT to integrate block-
based programming into the mathematics course was significantly correlated with
their learning motivation (Spearman correlation = 0.623, p < 0.01), but was not noticeably related to their creative tendency (Spearman correlation = 0.232, p > 0.05), as shown in Table 17.4. In sum, the learning motivation was positively
correlated with the students’ self-efficacy regarding CT processes in their learning.
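A minimal sketch of the Spearman rank correlation used in this analysis, on made-up ratings (pure Python, no statistics package): rank both variables (ties share their mean rank), then compute Pearson’s r on the ranks.

```python
# Illustrative Spearman rank correlation. The two rating lists below are
# hypothetical 5-point questionnaire scores, not the study's data.
from statistics import mean

def ranks(xs):
    """Average ranks (1-based), with tied values sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                      # extend over a run of tied values
        avg = (i + j) / 2 + 1           # mean rank of tied positions
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Pearson's r computed on the ranks of x and y."""
    rx, ry = ranks(x), ranks(y)
    mx, my = mean(rx), mean(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

motivation    = [3, 4, 5, 2, 4]   # hypothetical 5-point ratings
self_efficacy = [2, 4, 5, 1, 3]
print(round(spearman(motivation, self_efficacy), 3))
```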
As mentioned above, two studies are integrated into this chapter. The following
sections illustrate the participant samples, measuring tools, and the experimental
process for study two.
17.4.1 Participants
For study two, 24 newly appointed principals participated in the teacher training course. They were taught by the same instructor as in study one in the teacher training workshop.
In study two, the participants answered a questionnaire about the readiness for CT education at their school, and reported the situation they perceived regarding the TPACK (i.e., technological, pedagogical, and content knowledge) of their teachers. There are eight
scales in the questionnaire. The first four were revised from the readiness question-
naire of mobile learning (Yu, Liu, & Huang, 2016) which referred to an eclectic
e-learning readiness including object readiness, software readiness, and leadership
support (Darab & Montazer, 2011), and referred to the higher education m-learning
readiness model based on the theory of planned behavior (TPB; Cheon, Lee, Crooks,
& Song, 2012). Accordingly, object readiness, instructor readiness, instructional
resource readiness, and leadership support are important scales for evaluating the
readiness for putting something into practice at school, such as e-learning, mobile
learning, or CT. Therefore, this study employed the readiness questionnaire, and
the Cronbach’s reliability coefficient for each scale in the revised questionnaire was
.701 for object readiness, .673 for instructor readiness, .646 for instructional resource
readiness, and .835 for leadership support.
The relationship between teachers’ technological, pedagogical, and content
knowledge (TPACK) is clearly pointed out in the framework of the TPACK model
(Mishra & Koehler, 2006). Numerous studies have therefore adopted this model to
assess teachers’ professionalism or the effectiveness of teacher education (Chai, Koh,
& Tsai, 2010; Koehler, Mishra, & Yahya, 2007). This model has also been introduced
in another study for teachers to perform self-assessment (Schmidt et al., 2009). The
current study also employed the TPACK model (Chai et al., 2010) for the principals
to describe the school teachers in the technology domain. The Cronbach’s reliability
coefficient for each scale in the revised questionnaire was .840 for the knowledge of
technology, .884 for the knowledge of pedagogy, .943 for the knowledge of content,
and .908 for the overall TPACK.
After study one, the same instructor taught the CT course in the teacher training
workshop for the newly appointed principals. The teacher training workshop con-
sisted of 18 h. There were 9 h spent experiencing the CT process integrated with the
mathematics unit through the tool of visual programming. During the remaining 9 h,
they had to visit a school or institute where the infrastructure had been well established, and attend the training course introducing the requirements for conducting
the 12-year compulsory education in the technology domain.
After they experienced the CT course and completed the teacher training, the
principals filled out the questionnaires to assess their schools. One questionnaire was
revised from the readiness for mobile learning, and the other one was the TPACK
(i.e., Technological, Pedagogical, and Content Knowledge) model.
Table 17.5 Descriptive information for the first four scales: readiness

Object readiness. Description: For the current situation of equipment in the school, please answer the following questions. Sample item: There is enough information equipment such as computers for learning in the school, providing resources for technological courses.

Human resources readiness (instructor readiness). Description: For the current situation of teachers in your school, please answer the following questions. Sample item: There are full-time information technology teachers in my school.

Instructional resource readiness. Description: For the arrangement of teaching materials for the technology domain, please answer the following questions. Sample item: The teachers in my school have the capabilities to employ the official textbooks in the information technology courses.

Leadership support (management readiness). Description: For the attitude of school management, please answer the following questions. Sample item: School management proposes visions, policies, or plans that support and encourage the teaching as well as learning in the technological domain.
In study two, the descriptive information for the first four scales of readiness is shown
in Table 17.5, which is abstracted from the first questionnaire (i.e., readiness).
The reliability data suggest that the refined version of each scale for readiness
and TPACK has acceptable internal consistency. The investigation results show the
descriptive statistics for each item, including the mean scores and the standard devi-
ation.
From the investigation results in Table 17.6, the average scores of object readiness are quite low. From the items shown in the questionnaire in Appendix 17.1, it can be seen that information equipment such as computers for learning has been available in the schools for some time (Mean = 4.17, SD = 0.87); therefore, the principals gave higher scores for the computer hardware in the first item. However, where instruction requires equipment for hands-on activities, maker classrooms are relatively lacking at present (Mean = 1.88, SD = 1.36), which is the main reason why object readiness was reduced. In sum, the overall technology hardware and software for conducting compulsory education in the technology domain has not yet been well prepared, as it is still 2 years before compulsory education in the technology domain begins.
As for instructor readiness, there are not enough full-time faculty in the technology
domain according to the results of the human resources readiness scale, as shown
in Table 17.6. The teacher education institutes must speed up the cultivation of new teachers, and the K-12 schools should make recruiting teachers in the technology domain a top priority.
In terms of instructional material resource readiness, the teachers tended not to take
part in the teaching plan competitions. Therefore, holding teaching plan contests may
not be the best strategy to produce adequate instructional material in the technology
domain.
Finally, the leadership support was also taken into consideration for the readiness
of conducting compulsory education in the technology domain. The scale of leader-
ship support has the highest mean score among the four scales (Mean = 3.77, SD = 0.74). There is a strong tendency for the school leaders to put greater emphasis on students participating in DIY activities. In other words, school leadership tends to accept
that teachers can design problem-solving tasks integrating different disciplines and
hands-on activities in the future.
From the survey of TPACK for CT teachers, as shown in Table 17.7, it was found that
the teachers are partially prepared at present (see also Appendix 17.2). The expertise
of the present teachers was acceptable in terms of their knowledge of technology
from the principals’ point of view. However, the pedagogy of CT still has room for
improvement. From the investigation results, the teacher education institutes have to
put more effort into pedagogical research and training.
Study one not only put the four phases of CT into practice, but also applied them
to solve mathematics problems with the block-based programming language. The
results indicate that the implementation of programming activities was effective: the students’ results in the mathematics concepts post-test improved remarkably in comparison with the pre-test of the same mathematics unit. The programming implementation had a significantly positive correlation with the learning effectiveness of mathematics, implying that the students with better block-based programming scores outperformed the other students in the mathematics concepts post-test.
In addition to the CT concepts and practices, the perspectives of the students
were also assessed. Few studies have explored the relationships among self-efficacy,
learning motivation, and creative tendency. The results of study one found that the
students’ self-efficacy was correlated with their learning motivation, but not with
their creative tendency. In other words, the students who had higher learning moti-
vation possessed higher self-efficacy. This result was similar to that of a previous
study which pointed out that information literacy self-efficacy is associated with
both intrinsic and extrinsic motivation (Ross, Perkins, & Bodey, 2016). Another
study indicated that the creative tendency, such as curiosity as well as imagination,
and domain-specific knowledge are critical for students’ creative science problem-
finding ability (Liu, Hu, Adey, Cheng, & Zhang, 2013). An earlier study reported
that self-efficacy was closely related to creativity with intrinsic motivation completely
mediating this relationship (Prabhu, Sutton, & Sauser, 2008). From study one, it has
been confirmed that the students could learn CT and mathematics at the same time. In future studies, to promote students’ motivation, teachers could ask the students to design mathematics game programs with block-based programming tools. Future studies could also integrate CT into other subjects so that students can learn CT, programming, and subject knowledge (e.g., physics, mathematics) together.
After confirming the feasibility and benefits of conducting CT in study one, study
two further explored the readiness of the K-12 schools. Based on the results of this
investigation on the K-12 principals in study two, some suggestions to enhance the
preparation for involving CT education in the 12-year compulsory education are
given as follows.
It appears that object readiness, which refers to the educational hardware of the
technology domain at school, is the easiest part if the government is willing to devote
sufficient resources to the K-12 schools. However, the teachers have to be trained
so they know how to operate the new equipment, regardless of whether it is the
maker environment or computer technology products; otherwise, the money spent
on the hardware will be wasted. Even a well-equipped environment, which is expected to be constructed within the next 2 years, will not work without professional teachers.
Therefore, future studies could further analyze the regression between the readiness
of the teachers in the technology domain and the readiness of the hardware, and find
direct evidence for this inference.
Unfortunately, the participants perceived that the leadership and management
levels have not provided enough support for conducting CT education. In other
words, many people agree that CT education is important; nevertheless, people in
leadership roles have not put enough emphasis on it. We inferred that the reason for
this unexpected situation is that the literacy of CT is not included as part of the senior
high school or college entrance examinations. It is important that schools should not
just pay attention to the subjects related to senior high school or college entrance
examinations; liberal education should also be encouraged.
Study one was conducted in 2016, and study two was carried out in 2017, while the
compulsory education of CT will be put into practice in August 2019. Accordingly,
in the next 2 years, the related institutes have a large amount of work to do. The
report of this chapter provides some findings, references, and suggestions for both
K-12 school faculty and the Ministry of Education. The most important aspect is
teacher education. The teachers in the technology domain should be trained in the
requirements of instruction in the technology domain.
Acknowledgements This study is supported in part by the Ministry of Science and Technology in
Taiwan under contract number: MOST 105-2628-S-003-002-MY3 and MOST 107-2511-H-003-
031.
Appendix 17.1
(continued)

15. Currently, the school teachers do not have to worry about the teaching materials for the technology domain (Mean = 3.29, SD = 1.08)

Leadership support (management readiness)
16. School management proposes visions, policies, or plans that support and encourage the teaching as well as learning in the technological domain (Mean = 3.71, SD = 0.81)
17. The school has established a reward system for those who have outstanding teaching performance in technology (Mean = 3.58, SD = 0.88)
18. School management is gradually putting greater emphasis on students participating in DIY activities and contests (Mean = 3.92, SD = 0.83)
19. School management will encourage teachers and students to engage in a robot or programming competition if there is one (Mean = 3.79, SD = 1.10)
20. School management will encourage teachers and students to engage in a living technology contest if there is one (Mean = 3.83, SD = 1.09)
Appendix 17.2
(continued)

Knowledge of pedagogy
PK1-Our teachers can adapt their teaching style to different learners (Mean = 3.83, SD = 1.09)
PK2-Our teachers can adapt their teaching based upon what students currently do or do not understand (Mean = 3.79, SD = 1.02)
PK3-Our teachers can use a wide range of teaching approaches in a classroom setting (collaborative learning, direct instruction, inquiry learning, problem/project based learning, etc.) (Mean = 3.75, SD = 1.03)
PK4-Our teachers know how to assess student performance in a classroom (Mean = 3.96, SD = 0.91)

Knowledge of content
CK1-Our teachers have various ways and strategies of developing their understanding of computational thinking (Mean = 3.67, SD = 1.05)
CK2-Our teachers can think about the subject matter like an expert who specializes in computational thinking (Mean = 3.79, SD = 1.06)
CK3-Our teachers have sufficient knowledge of computational thinking (Mean = 3.75, SD = 1.03)

TPACK
TPACK1-Our teachers can teach lessons that appropriately combine computational thinking, technologies, and teaching approaches (Mean = 3.75, SD = 1.11)
TPACK2-Our teachers can use strategies that combine content, technologies, and teaching approaches (Mean = 3.46, SD = 1.14)
TPACK3-Our teachers can select technologies to use in the classroom that enhance what they teach, how they teach, and what students learn (Mean = 3.58, SD = 1.18)
TPACK4-Our teachers can provide leadership in helping others to coordinate the use of content, technologies, and teaching approaches at my school (Mean = 3.54, SD = 0.93)
References
Armoni, M., Meerbaum-Salant, O., & Ben-Ari, M. (2015). From scratch to “real” programming.
ACM Transactions on Computing Education (TOCE), 14(4), 25.
Balanskat, A., & Engelhardt, K. (2014). Computing our future: Computer programming and coding-
Priorities, school curricula and initiatives across Europe: European Schoolnet.
Brennan, K., & Resnick, M. (2012). New frameworks for studying and assessing the development of
computational thinking. In: Proceedings of the 2012 Annual Meeting of the American Educational
Research Association (pp. 1–25). Vancouver, Canada.
Cetin, I. (2016). Preservice teachers’ introduction to computing: exploring utilization of scratch.
Chapter 18
Self-development Through
Service-Oriented
Stress-Adaption-Growth (SOSAG)
Process in the Engagement
of Computational Thinking Co-teaching
Education
Mani M. Y. Wong, Ron C. W. Kwok, Ray C. C. Cheung, Robert K. Y. Li
and Matthew K. O. Lee
18.1 Introduction
Programming and computing-related skills are vital in the information age both for
personal and social development. In CoolThink@JC, City University of Hong Kong
(CityU) aims to provide professional education support to enhance programming
literacy among Hong Kong citizens through a series of elaborative teaching and
learning activities, in particular targeting the primary school student group in the
Hong Kong population.
Programming and coding have become a global initiative in many countries. The “Hour of Code” campaign, for example, was first launched by Code.org in the US in 2013 to provide free educational resources for all ages; over 100 million students worldwide have since tried an “Hour of Code”. In the UK and Australia, programming has been incorporated into the primary education curriculum. In Hong Kong, CityU Apps Lab (CAL) (http://appslab.hk) is a leading university organization offering free coding workshops to the public, and it hosted the first “Hour of Code” in the territory. Over 2,000 h of programming have been completed in previous “Hour of Code HK” workshops, and we, at CityU of Hong Kong, have offered over 10,000 h of programming lessons to beneficiaries by running “We Can Code” and “Go Code 2015” with the Sino Group.
In the world’s major economies, students from elementary to postgraduate level are becoming increasingly involved in learning the fundamentals of computer programs and programming skills. In the UK, a new version of the national computing curriculum was published on July 8, 2013 by GOV.UK, placing significant emphasis on computing skills. The new curriculum replaces basic word processing skills with more demanding tasks such as programming and understanding algorithms, and primary school children are to be taught how to write simple programs in computer languages.
In Singapore, Hong Kong’s competitor in many areas, a plan is being developed by the Infocomm Development Authority (IDA) that prescribes the progressive introduction of software programming classes into public schools. This would give students a unique opportunity to write programs in classroom settings using the teaching and educational resources available to other core curricula. The nation’s Ministry of Education has also begun discussing the necessity of incorporating programming into the national curriculum.
Estonia is without doubt taking the lead in programming education, having launched a nationwide scheme to teach schoolchildren from the ages of seven to nineteen how to write computer programs. It is one of the first countries to have a fully e-enabled government. The ProgeTiger initiative was started in January 2012 by the Estonian government, aiming to bring programming into classrooms and help raise Estonia’s technical competency. This small country of 1.3 million people is the home of Skype and has attracted sponsorship from well-known organizations such as the Mozilla Foundation.
It is of great significance that Hong Kong citizens grasp the basic principles and mechanisms of the digital devices that play such a large role in modern life, and be aware of the fundamentals of programming. Notably, when running the “Hour of Code HK” campaign, we observed that younger participants could complete the programming tasks in much less time than university students or adults. Nevertheless, Hong Kong at present still lacks the momentum needed to catch up with the world’s best.
We believe that students at an early age are able to understand and acquire computational thinking skills at a faster pace; therefore, in this project, we provide the students in the participating schools with three years of in-class training and out-of-class mentoring support, from junior and intermediate up to advanced level. The in-class training at each level consists of 8–14 lessons, each lasting around 35–45 min. The out-of-class mentoring support is provided by our university student mentors on a group basis (around two student mentors take care of a class of 40 students). The student mentors take part in this project through our established campus internship and other cocurricular experiential learning schemes.
In CoolThink@JC, a sustainable learning environment was created over a period of 3 years for the participating primary students to learn programming skills and sustain a positive learning attitude. The main role of the CityU team is to provide in-class manpower support and parent involvement support, and to facilitate effective learning in target schools. CityU Apps Lab, an education community at CityU with more than 600 university student members, provides this manpower support throughout the project. It is expected that within 3 years this community can grow to 1,000 members on campus, involving the CityU alumni network. Students from other Hong Kong higher education institutions who are passionate about computational thinking (CT) and programming education are also recruited to join the project.
Yet, one challenge identified by the research group is the diverse cultures and backgrounds of the recruited students; this cultural difference is expected to be overcome through the service-oriented stress-adaption-growth (SOSAG) process during recruitment and training.
To support interactions with primary school students, we work with the partnering organizations on this project to create a structured curriculum that eventually integrates the learning of existing subjects, such as mathematics and science, with the computational thinking skills the students have picked up. This has the potential to galvanize knowledge sharing and learning among the students.
In CoolThink@JC, 100 and 500 teaching assistants (TAs) were recruited by CityU from over 10 tertiary institutions in Hong Kong in the academic years 2016/17 and 2017/18, respectively, to serve the 32 pilot primary schools participating in computational thinking education in Hong Kong.
The main roles of TAs are to assist teachers with classroom teaching in the pilot primary schools, e.g., co-teaching CT and answering students’ enquiries in class. They also help to create a joyful and innovative learning environment and act as role models in the classroom (e.g., by providing a passionate and responsive presence).
Another major responsibility of TAs is to provide professional support to teachers in relation to teaching and learning. They have to motivate students to learn and encourage them to interact with others, for example, by praising students who have successfully completed the class exercises with creative ideas and are behaving well, and by encouraging them to assist other classmates. They also inspire students to generate creative ideas by encouraging them to finish tasks by themselves with appropriate guidance. They have to be aware of student progress and achievements, and report any concerns regarding student matters to their supervisors, namely the teaching leads (TLs).
TAs take the main role in providing support in CT lessons and act as the ultimate executors of co-teaching in primary schools. Before being assigned to serve in primary schools, potential candidates are trained and assessed, based on their performance in a series of tests and teaching practices, to become “qualified TAs”.
Unlike training in other subjects or skill sets, teaching CT is not easy, as learners are required to have a thorough understanding of both concepts and mechanisms in order to acquire the thinking skills needed for asking questions, understanding problems, and identifying solutions. Training TAs to be qualified to support CT teaching at a large scale is an even more challenging task. Instead of being “spoon-fed” the essential soft and hard skills needed for co-teaching CT, candidates are expected to show a good attitude and high motivation, especially in Stage 1, where there are frequent interactions between candidates and the assessor to examine candidates’ understanding of CT concepts, CT practices, and CT perspectives. This ensures that training and assessment at a large scale can be conducted smoothly while maintaining quality.
Among the hundreds of TAs recruited from various academic backgrounds and levels of experience, some do not have a relevant education background, while others lack relevant experience, e.g., in teaching or interacting with children. To overcome the cultural differences within such a large group of TAs, assessments using various methodologies are crucial for evaluating and maintaining the standard of TAs. Potential candidates are exposed to stress in four stages of assessment: a test via electronic submission and interview screening (Stage 1), a training assessment (Stage 2), a teaching practice assessment (Stage 3), and a probation assessment to become qualified TAs (Stage 4). The assessment stages are summarized in Fig. 18.1.
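The four-stage gating described above can be sketched as a simple sequential filter. This is purely an illustration: the stage names and the pass/fail logic below are our paraphrase of the chapter's description, not part of the CoolThink@JC materials.

```python
# Illustrative sketch of the TA qualification pipeline: a candidate advances
# only by passing each assessment stage in order. Names are hypothetical.

STAGES = [
    "Stage 1: Test via electronic submission and interview screening",
    "Stage 2: Training assessment",
    "Stage 3: Teaching practice assessment",
    "Stage 4: Probation assessment",
]

def qualify(results):
    """Given one pass/fail result per stage, in order, return the number of
    stages cleared; clearing all 4 means the candidate is a 'qualified TA'."""
    cleared = 0
    for passed in results:
        if not passed:
            break  # progression halts at the first failed stage
        cleared += 1
    return cleared

# A candidate who passes all four assessments becomes a qualified TA.
assert qualify([True, True, True, True]) == len(STAGES)
# Failing the teaching practice assessment halts progression after Stage 2.
assert qualify([True, True, False, True]) == 2
```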
In the various stages of assessment, stress is caused by intercultural differences including, but not limited to, (i) education background (e.g., education, programming) and (ii) work experience (e.g., teaching experience or experience with children). The potential challenges that new recruits to this program may face are highlighted and emphasized to an increasing extent in each stage so as to enlarge their stress, and thereby their eventual adaption and growth.
Many candidates expressed that they suffered varying degrees of anxiety, sorrow, and even pain at different stages, for various reasons: some candidates were not confident about learning CT when given a test related to Scratch in Stage 1; some were distressed when asked how they would handle student issues during the interview; and some worried about handling a big class of students and answering all their enquiries properly in the lesson.
To resolve the stress, adaptation takes place to promote a qualitative transformation toward growth (Kim, 2001). We consider this “culture shock” a “catalyst” for potential TAs to adapt quickly and grow so that they fit their roles in the service engagement.
18.3.2 Adaptation
Table 18.1 ITT factors in the stages to become qualified TAs in CoolThink@JC of Hong Kong

ITT factors | Stage 1—Pre-assessment | Stage 2—Training | Stage 3—Teaching practice | Stage 4—Co-teaching (at least 2 sessions)
Stress      | ✓                      | ✓                | ✓                         | ✓
Adaption    |                        | ✓                | ✓                         | ✓
Growth      |                        | ✓                | ✓                         | ✓
18.3.3 Growth
18.4 Evidence
In this study, data were collected and presented in the form of in-depth reflective summaries submitted by TAs. The extracted content of the reflective summaries was mapped against the corresponding factors of the stress-adaption-growth process of intercultural transformation theory (ITT) (Kim & Ruben, 1988). Five cases were examined in light of the ITT factors through the different stages of the SOSAG process. The TA subjects in this study were invited to reflect on a number of attributes (listed in Sects. 18.4.1–18.4.2) that are expected of a qualified TA in CoolThink@JC.
18.4.1 Attitude
• For example, handling a certain number of students’ inquiries and issues within the limited lesson time.
Table 18.2 summarizes the TAs’ self-review comments through the SOSAG process in their service engagement in CT education. The comments that correspond to the respective ITT factors are highlighted for further analysis.
18.5 Discussion
Table 18.2 Excerpts from reflective summaries submitted by teaching assistants in CoolThink@JC
(a) Stage 1—Pre-assessment
ITT factors | Stage 1—Pre-assessment
Stress “With knowledge on different computer languages such as C++, how come the
educational tools for primary school students will beat me? However, I recognized
that I might be too arrogant after pre-assessment as my background and knowledge
could not support answering. I felt nervous during interview. I actually have no idea
nor experience on CT questions and handling problems in given scenarios. There
was a lot of stress in the pre-assessment so I did not perform well enough.”
(Student 1)
“As a linguistics student, several programming languages are required to be learnt for
linguistic purposes. Yet, I am not confident of CT and programming. Though it
was undoubtedly stressful in the preparation, I successfully went through the
stages of online test and interview with sufficient self-learning and online research.”
(Student 2)
“My journey in CoolThink@JC began with pre-assessment. Without any relevant
background as majoring in Finance, I encountered many difficulties in the test
which were solved by searching information online and even asking advice from
friends. However, as I sowed, so I reaped. The pre-assessment helped me understand
some CT knowledge before moving to next stage. Then, I was pretty stressful during
interview as there are lots of CT mentioned on the question paper such as parallelism
and data manipulation. I tried to understand by relating each concept to the events
happened in my daily life.” (Student 3)
“Since I am a student studying Computer Science, almost all questions in the quiz
could be answered with confident. However, if I wanted to get all correct, I still need
to put some effort on revise some materials before start. This made me feel a little
worried about the difficulty of learning materials. After that, I was very nervous
in interview because I was weak in interview and also afraid of questions out of
preparation. During the interview, there were questions asking to deal with some
sudden problems in lesson. This made me start to have some stress on the
required soft skills of this job, such as how could I handle a class of students and
cooperate with teachers and other ambassadors.” (Student 4)
“It(CoolThink@JC) aroused my interest. However, there is a big problem as I am
not studying a subject related to CT and programming, and I do not have any idea of
CT concepts.” (Student 5)
Adaption
Growth
(continued)
Under sufficient guidance from supervisors, they can become good trainers who incorporate interactivity and foster thought-provoking conversations among peers.
18.5.2 Promotion
18.5.3 Challenges
Fig. 18.2 TAs from CoolThink@JC supported CT-related workshops in InnoTech 2017
Besides their co-teaching duties in primary schools, TAs also actively support CT workshops for the public; for example, some were sent to support programming workshops for the public at InnoTech Expo 2017, a large-scale innovation and technology event held at the Hong Kong Convention and Exhibition Centre. Through co-organizing the events with volunteers from different backgrounds, TAs learned new knowledge, such as using programs to control drones, and gained new exposure and insights into the application of programming in real life. The pictures in Fig. 18.2 show our TAs’ engagement in the InnoTech Expo 2017.
In the long run, TAs with experience in CT education will have a higher chance of engagement in CT education and related industries. Beyond CT education, TAs are also more likely to continue serving the community in different ways and be better prepared to become future pillars of society.
The potential development of qualified TAs is summarized and illustrated in
Fig. 18.3.
Fig. 18.3 The potential development of qualified TAs: the ITT factors (stress, adaption, growth) across Stage 1—Pre-assessment, Stage 2—Training, Stage 3—Teaching practice, and Stage 4—Co-teaching (at least 2 sessions), leading to promotion to senior/supervisory TA, supplementary training, and community service in the future
18.6 Conclusion
This chapter extends the existing intercultural transformation theory (ITT) and proposes a service-oriented stress-adaption-growth (SOSAG) process in the engagement of computational thinking co-teaching education. Through service engagement in CT education at primary schools, the SOSAG process took place and allowed TAs to undergo self-development across multiple stages.
Appendix
(continued)
ITT factors | Stage 1—Pre-assessment | Stage 2—Training | Stage 3—Teaching practice | Stage 4—Co-teaching

Adaption:
  Stage 2—Training: “Luckily, CoolThink@JC is a family; the professors gave us precious and useful feedbacks and suggestions from their expert aspect, also other groups provided supplementary methods from their experience. Besides, we enjoyed the excellent performance of other groups and learnt to handle other situations. These role-play games gave me a very good experience”
  Stage 3—Teaching practice: “But after the first teaching practice, I found that my worries were unnecessary. During the teaching practice, teachers were well trained with Scratch and App Inventor 2. They controlled the whole class and indicated what ambassadors should do. Also, I found that nowadays primary school students are very talented and enthusiastic. They would like to help their classmates to solve the problems together. Most of them could finish their own work timely or even earlier. Then they would start to observe their classmates’ works and see whether they could improve their own project or not. Therefore, I could give them some higher level task and strengthen their computational thinking. Even better, they could use different categories and blocks to optimize and yield their own project. Actually I was quite astonished that students had the ability to use other command block and make the block more concise and efficient. Also, sometimes they could fix the mistakes or change the parameters in the interface by observation. But as they were still primary school students, there might be something that was too difficult for them. With preparations before the class, I could provide suggestions to help them”
  Stage 4—Co-teaching: “Before the lesson, I went through the teaching material with full understanding, and tried the block and app before to confirm the steps. During the co-teaching, I had to follow the teaching progress of the assigned class. Although they were just lessons for primary school students, I had to cooperate with the teacher well to make sure that the lesson could continue fluently. Despite the class time was short, normally 30 min to 1 h, there were a lot of uncertainties during the class. Beside of the content delivering to the students, I had to make sure that the students paid attention during the lesson to prevent them missing teacher’s instructions. For example, we had to prevent and advice the student to surf another websites, such as Facebook and YouTube, that is not related to CoolThink@JC or teaching material. Also, I was responsible to pay extra attention to some students with special educational needs to make sure that they could follow the teaching material. Sometimes there were some naughty students who did not follow the instructions of the class; we had to help the teacher to maintain the good order of the classroom. Even more, as the teaching materials became more and more difficult, we had to practise ourselves well daily to train our ability and familiarity of the Scratch and App Invertor 2, otherwise we could not follow the lesson too”

Growth:
  Stage 2—Training: “Not only does it help me to solve common problems we face in co-teaching, but it also gains my experience in co-teaching and makes me calm down with no disarray. These helped me a lot”
  Stage 3—Teaching practice: “The teaching practice helped me a lot, as I gained the experience and knowledge to control the whole class and teach the kids, also it strengthens my CT ability”
  Stage 4—Co-teaching: “Until now, I am still just a qualified TA but not a good TA, as I believe a well-prepared TA has to practise and practise more to gain the experience”
(continued)
ITT factors | Stage 1—Pre-assessment | Stage 2—Training | Stage 3—Teaching practice | Stage 4—Co-teaching

Adaption:
  Stage 2—Training: “Hence, I realized self-study and practice are also important apart from training provided”
  Stage 3—Teaching practice: “During teaching practice, hands-on experience was the most valuable reward for me. It was my first time being a TA in school and I was glad that the whole teaching process was smooth and manageable”
  Stage 4—Co-teaching: “Furthermore, the teaching styles are very distinct in different schools. Through multiple duties in various schools, I have learned that I should always try my best to assist the students to catch up with their teacher. I realized that the attitude and behavior of teachers can actually have direct influences to their students. I believe every student can feel whether his or her teachers are teaching and guiding them from heart or just simply demonstrating the procedures of Scratch to them. Through observation and experience, I deeply understand being teacher is not an easy job. It requires a lot of time, energy and effort in order to be a responsible teacher and even TAs”

Growth:
  Stage 2—Training: “Generally, we all think we have stepped out our comfort zone and accepted new challenges beyond our school professions”
  Stage 3—Teaching practice: “Through the teaching practice, I also noticed the duties of a teacher are not only delivering the solid knowledge to students in the lesson, but also paying time and effort to prepare teaching materials beforehand. It is very crucial to ignite students’ passion in the subject and hence motivate them to learn and further develop their skills”
  Stage 4—Co-teaching: “To conclude, it is true to say that I have learned so much since the first day of this program. After going through different stages, I have become a qualified Teaching Assistant in CoolThink@JC. It has certainly been a precious experience in my University life. Not only do I step out my comfort zone to learn a discipline that I was not familiar with, but I also get a taste of being a TA in primary school. Even though being a teacher might not be the first priority in my future career path, various training in this program enable me to understand the importance of the sense of responsibility and having correct attitude at work. I have also realized nothing is impossible if we are willing to learn from our mistakes and get along with stress, and hopefully turn it into our motivation to become a better individual”
(continued)
ITT factors | Stage 1—Pre-assessment | Stage 2—Training | Stage 3—Teaching practice | Stage 4—Co-teaching

Adaption:
  Stage 2—Training: “I was no longer stressful about handling the Scratch because previous knowledge learned from pre-assessment allowed me to adapt new knowledge in CT faster. Besides, I learned about how to execute co-teaching practice in next academic year. What I should do and what should be avoided during co-teaching duty. Even though everything seemed complicated to me, I received help from supervisors. They were willing to help me when I had any problems encountered during the training”
  Stage 3—Teaching practice: “Since it was my first teaching practice, the supervisor is willing to give me another chance. When participating in another teaching practice, I understood that planning should be always done before moving, which allows me to have sufficient time and effort to complete the duty. Eventually, I came earlier because I did preparation before the practice”
  Stage 4—Co-teaching: “Fortunately, my partner was experienced with the situation. Thus, he told me to report it to the teacher after the teacher finished important slide. Eventually, everyone sat still and worked hard on the scratch together. Every co-teaching session may be boring to others, but not to me because I got to know something new every time. The method of creating an application was revised for me every time when I have co-teaching session. Besides, students’ creativity can always surprise me. For example, in doing the maze run on scratch, some students made it as Halloween version, while some added elements that were not mentioned. I believe in the future, some of them can become great programmer with lots of creativity and create more useful applications for us”
(continued)
(continued)
ITT factors | Stage 1—Pre-assessment | Stage 2—Training | Stage 3—Teaching practice | Stage 4—Co-teaching

Growth:
  Stage 2—Training: Nil
  Stage 3—Teaching practice: “I was satisfied to prove myself being able to handle the teaching practice”
  Stage 4—Co-teaching: “For myself, I learned CT, which encouraged me to solve problem by searching solution from different aspects. When solving problem, I would like to find out what was the root to tackle the problem effectively and efficiently. Besides, it is important to be responsible to perform or complete a task assigned satisfactorily and on time. If I don’t perform the task satisfactorily and on time, students and teacher may suffer as teacher cannot handle all the students at once and chaos may occur. I am grateful to CoolThink@JC which gives me chances to learn about the computational thinking and correct personality that I need to have for my career. Everyone I met here had always given me direction and feedback to push me one step forward on my career path”
(continued)
ITT factors | Stage 1—Pre-assessment | Stage 2—Training | Stage 3—Teaching practice | Stage 4—Co-teaching

Adaption:
  Stage 2—Training: “Hence, I took myself as a student that I might help later on and remember what problems I would face and observed how the teacher helped and solved those problems”
  Stage 3—Teaching practice: “I had more pressure in the second attempt and prepared more time than the last time for taking transportation, finding ways and walking to prevent being late again, therefore I arrived on time and could try to help the students with their needs. Experience really helped a lot. The more problems we handled, the more we experience we had to understand how to handle different cases. Therefore, I always notice how other ambassadors and teacher supported different students, remind myself how can I do in the similar condition”
  Stage 4—Co-teaching: “Before the lesson, there was some time for ambassadors to prepare. Since different classes might have different progress, I would read the learning materials and my co-teaching notes in the preparation time so to have a brief idea of what was taught and would be taught in the last and following lesson. Hence, no matter what the teacher was going to teach, I still could pick up quickly and know what and how should I help the students in the lesson. During the lesson, apart from students, it was important for ambassadors to pay attention to what the teacher was doing, in order to provide immediate support and response to the teacher to ensure a smooth lesson delivery. For example, when the teacher was mentioning some worksheets and we might help teacher to distribute the materials so to save time or when the teacher asked the students to follow to do the tasks, then we might start checking the progress of the students and provide help when necessary. Between ambassadors, we would have a labor division for looking after different students. If students in this group were in good progress, then we might observe other students”

Growth:
  Stage 2—Training: “This improved my performance on how could I observe the students who might want to ask some questions and assist them before they ask”
  Stage 3—Teaching practice: “I realized the importance of time management”
  Stage 4—Co-teaching: Nil
(continued)
ITT Stage 1––Pre- Stage 2—Training Stage Stage
factors assessment 3—Teaching 4—Co-teaching
practice
Adaption | Meanwhile, I was excited because I could edit an application on the smartphone myself. Also, it was a first experience for me to learn programming. At the end, there was a quiz which tested the TAs' understanding of Scratch and App Inventor. It was not that difficult after listening to the instruction of the teaching leader | In the classroom, there were other experienced TAs. I took them as a reference and I started to imitate the way they are | Nil
Growth | After accomplishing a set of assessments, I have finally become a TA | Gradually, I built up my own set of practices. I am confident to have co-teaching and I can be more mature in handling any difficulties | Up till now, I have joined the CoolThink@JC programme for more than a year. I have participated in different posts such as co-teaching, backup duties and the training programme. Meanwhile, I am fortunate to have become one of the supervisory TAs. I am delighted that I can be involved in supervisory duties. Each post provides different experiences for me. It makes me grow and it broadens my horizons. In CoolThink@JC, I do not only learn CT concepts, but also equip myself with different knowledge and skills
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0
International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing,
adaptation, distribution and reproduction in any medium or format, as long as you give appropriate
credit to the original author(s) and the source, provide a link to the Creative Commons license and
indicate if changes were made.
The images or other third party material in this chapter are included in the chapter’s Creative
Commons license, unless indicated otherwise in a credit line to the material. If material is not
included in the chapter’s Creative Commons license and your intended use is not permitted by
statutory regulation or exceeds the permitted use, you will need to obtain permission directly from
the copyright holder.
Part VI
Computational Thinking in Educational
Policy and Implementation
Chapter 19
Educational Policy and Implementation
of Computational Thinking
and Programming: Case Study
of Singapore
19.1 Introduction
Since Wing's (2006) argument that computational concepts, methods and tools can develop thinking skills and transform how we work and solve problems, and with the emergence of computation-related fields such as Data Science and Artificial Intelligence in recent years, there has been great interest from academia, industry and government in Computational Thinking (CT) and programming. Sites such as code.org, sponsored by industry giants like Google, provide free programming-learning resources to anyone who is interested. Though research in programming and computing education has been around for a few decades, going back to the introduction of Logo in the 1970s, there is renewed interest in learning programming and in how it develops CT skills. National governments, in addressing the shift from a knowledge/information economy to an economy driven by computation, are introducing educational policies to prepare their citizens to be future-ready.
Reflecting 10 years after her seminal publication on CT, Wing (2017) said she never
dreamt that Computer Science education, which was once available only at univer-
sity level, would be introduced in K-12 at such a large scale today. Governments,
educational authorities and schools are introducing Computer Science education at
different levels of education. In countries such as the United Kingdom, Lithuania, Finland, Korea and Japan, initiatives and policies have been introduced to bring Computational Thinking skills and programming into schools. This chapter describes Singapore's efforts to introduce CT and programming in education, from preschool to secondary school.
In 2014, Singapore launched the Smart Nation initiative, a nationwide effort to har-
ness technology in the sectors of business, government and home to improve urban
living, build stronger communities, grow the economy and create opportunities for
all residents to address the ever-changing global challenges (Smart Nation, 2017).
One of the key enablers of the initiative is developing the nation's computational capabilities. Programmes have been implemented to introduce and develop CT skills and programming capabilities for everyone from preschool children to adults. We survey
the landscape of K-10 CT and programming-related curricula in Singapore, which are
implemented by various government organisations. We present these programmes,
organised according to three groups: Preschool, Primary and Secondary.
19.3.1 Preschool
In Singapore, children aged from 3 to 6 years old attend preschools which are mostly
privately run. The Infocomm Media Development Authority (IMDA) launched the
Playmaker initiative with the aim of introducing Computational Thinking in the
Kindergarten and Preschools in Singapore (IMDA, 2017). There are over 3,000 preschools in Singapore, and the initial phase involved piloting the programme in 160 of them. IMDA's approach to introducing CT is to use electronic, robotic or programmable toys that engage young children in play while developing CT skills such as algorithmic thinking. IMDA provided a set of the toys to pilot centres for use in the classroom by the teachers.
The toys selected by IMDA for playful exploration of technology are: (1) Beebot;
(2) Circuit Stickers; and (3) Kibo. The Beebot is a toy with simple programmable
steps to control its movement. Children can program the toy to move it along a
path by logically sequencing the number of steps to move and control its direction.
348 P. Seow et al.
Playing with the Beebot can help young children develop problem-solving skills and logical thinking as they plan and program the movement of the toy. Kibo, developed by researchers at Tufts University, allows children to create a sequence of instructions by arranging Kibo wooden blocks. The blocks are scanned in sequence and the instructions are passed to the robot, which executes the steps. Circuit Stickers is a toolkit comprising peel-and-stick electronic components such as LEDs and conductive
projects embedded with LED stickers and sensors that respond to the environment
or external stimuli (see Fig. 19.1). Children can creatively ‘make’ while learning and
applying basic electricity concepts.
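The sequencing idea behind toys like the Beebot and Kibo can be sketched in code. The following is an illustrative sketch only — the command names and the grid model are invented for this example and are not the toys' actual interfaces. A stored sequence of simple movement commands is executed one at a time, just as a child plans a path step by step:

```python
# Illustrative sketch of a Beebot-style toy: a program is a list of
# simple commands, executed in order on a grid. (Hypothetical model,
# not the real toy's command set.)

DIRECTIONS = ["N", "E", "S", "W"]  # clockwise order of headings

def run_program(commands, x=0, y=0, facing="N"):
    """Execute a command sequence and return the final (x, y, facing)."""
    moves = {"N": (0, 1), "E": (1, 0), "S": (0, -1), "W": (-1, 0)}
    for cmd in commands:
        if cmd == "FORWARD":
            dx, dy = moves[facing]
            x, y = x + dx, y + dy
        elif cmd == "BACKWARD":
            dx, dy = moves[facing]
            x, y = x - dx, y - dy
        elif cmd == "LEFT":   # rotate heading anticlockwise
            facing = DIRECTIONS[(DIRECTIONS.index(facing) - 1) % 4]
        elif cmd == "RIGHT":  # rotate heading clockwise
            facing = DIRECTIONS[(DIRECTIONS.index(facing) + 1) % 4]
    return x, y, facing

# Plan a path: two steps north, turn right, one step east.
print(run_program(["FORWARD", "FORWARD", "RIGHT", "FORWARD"]))  # (1, 2, 'E')
```

The child's planning task — counting steps and choosing turns to reach a goal cell — is exactly the algorithmic thinking the toys are meant to develop.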
Preschool teachers in Singapore do not use much technology in the classroom, as the emphasis is more on literacy development and play. As a result, they typically have apprehensions and concerns about using technology in their lessons. To
address teachers' lack of experience and concerns, IMDA organises seminars and workshops for teachers to experience the use of the Beebot, Kibo and Circuit
Stickers. The hands-on sessions are facilitated by the instructors to introduce teachers
to the tech toys and work on simple projects. These experiences enable teachers to
understand the potential learning opportunities for their students by first learning
the technology for themselves. Hands-on sessions alleviate any potential fear of handling technology, as teachers experience the technology with support from instructors.
In preparing to pilot the Playmaker program and address the concerns of preschool
teachers, IMDA works with a local polytechnic which offers preschool training
for teachers. At the Preschool Learning Academy, preschool lecturers and trainers work together with technologists to formulate the use of the various tech toys in the preschool classroom. The learning experiences were shared among
the teachers. Such a collaboration created an opportunity to understand how the tools
could be used in the classroom and it also built the capacity among the trainers to
work with the teachers on how these tools can be used to develop the students' learning potential. The academy was able to provide ongoing professional development
to the current and new teachers.
Marina Bers and Amanda Sullivan were engaged by the IMDA to study the effectiveness of the Kibo programme in the preschools. They studied the preschools' implementation of a curriculum called 'Dances from Around the World'. They found that
the children were successful in mastering the foundational concepts of programming
and that the teachers were successful in promoting a creative environment (Sullivan
& Bers, 2017).
equipment during events. The clubs are now organised such that students have more opportunities to apply their creativity with computers through programming
and digital media. IMDA and the Information Technology Standards Committee (ITSC) also organise an annual programming competition, CodeXtreme, for students,
supported by educational institutions such as the universities, polytechnics, the Sin-
gapore Science Centre and technology industry partners such as RedHat and PayPal.
Students are encouraged to participate in CodeXtreme in which Primary school stu-
dents can work in teams to complete a given project with Scratch. Each team has an
adult as a supervisor and mentor. Prior to the competition, the students must attend
a workshop to equip themselves with the necessary skills for the challenge of the
hackathon.
In 2017, the Ministry of Education introduced a new Computing subject offered to students at 'O' Level. It replaced the existing Computer Studies subject (MOE, 2017). Students taking the subject learn to code in Python, a programming language previously taught only in 'A' Level Computing. In
the new syllabus design, students will develop CT and programming skills to create
solutions with technology to solve problems. In the old Computer Studies syllabus,
students were learning to be users of technology such as using software applications
and understanding aspects of technology.
The new Computing syllabus is built on the framework shown in Fig. 19.2: (1)
Computer as a Science; (2) Computer as a Tool; and (3) Computer in Society.
The dimension of Computer as a Science comprises the core components of Computational and Systems Thinking. Students will develop and apply CT skills
such as abstraction and algorithmic thinking to solve problems and develop solu-
tions through programming. Using both CT skills and systems thinking, students
are required to work on a project of their own interest. This is, however, a non-assessed component of the programme. It encourages students to take more ownership by identifying a problem that interests them and developing ideas to solve it using programming tools. The purpose of the non-assessed project work is to encourage students to be more creative in designing solutions without
the pressure of assessment. In the dimension of Computer as a Tool, students are
exposed to the use of hardware, technology, and devices that are used in the every-
day aspects of life at work and play. They learn about computer applications that
are used for productivity, communications and creative tools for completing specific
tasks such as video editing or creating websites. In Computer in Society, students
learn about issues in using computers such as intellectual property, data privacy,
Internet security and computer addiction. This dimension includes a component
on twenty-first century competencies to prepare students to be future-ready workers
in the use of technology for self-directed learning, working in collaboration with
others and fostering creativity.
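As a hedged illustration of how CT skills such as abstraction and algorithmic thinking might look in classroom Python (the scenario, data and function names here are invented for this sketch, not taken from the syllabus), a problem can be decomposed into small reusable functions and then combined:

```python
# Illustrative sketch: decompose "find the top scorer" into two small
# functions (abstraction), then combine them (algorithmic thinking).
# The record format and names are hypothetical.

def parse_record(line):
    """Split a 'name,score' string into a (name, int score) pair."""
    name, score = line.split(",")
    return name.strip(), int(score)

def top_scorer(lines):
    """Return the name with the highest score among the records."""
    records = [parse_record(line) for line in lines]
    return max(records, key=lambda record: record[1])[0]

results = ["Mei, 78", "Arun, 91", "Siti, 85"]
print(top_scorer(results))  # prints: Arun
```

Each function solves one clearly bounded sub-problem, so students can test and reuse the parts independently — the kind of decomposition the syllabus's CT dimension aims to cultivate.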
Currently, 22 out of about 150 secondary schools (15%) are offering Computing
as an O level subject. One of the reasons for the low number of schools is the lack
of teachers who can teach computing. There are relatively few teachers who have
Computing or Computer Science backgrounds who can teach programming. Teachers who are interested in teaching Computing and programming attend a year-long conversion course taught by Computer Science faculty from a university. They are given time by their schools to attend the course during school hours. In the course, they upgrade their Computer Science content knowledge, such as data structures and programming in Python. The goal of the course is to prepare and equip teachers with
the content and technical knowledge to teach computing. In addition to preparing
teachers for the new Computing curriculum, the Ministry of Education's Curriculum
Planning and Development Division (CPDD) organised workshops for teachers to
understand the aspects of the syllabus. Teachers are introduced to different pedago-
gies for teaching computing such as unplugged approaches and paired programming.
In the workshops, teachers also experienced the use of the tools for teaching such
as the Raspberry Pi. The workshop also served as a platform for teachers to raise
their concerns about teaching the subject, for example about how the project work
for students can be organised and implemented in the school.
In most schools, the typical number of students offered Computing ranges from
15 to 30. Computing is offered as an elective subject for the 'O' Levels, and students can opt in to the subject at the end of Secondary 2. A student is typically offered 7–8 'O' Level subjects in their Secondary 3 year. Subjects like Additional Mathematics and the Sciences are more favoured by students, as these subjects are viewed more favourably for admission to pre-university junior colleges and polytechnics. Hence, students who are initially interested in taking up Computing may instead choose Additional Mathematics, because of the limit on the number of subjects they can take and for pragmatic reasons of entry to pre-university courses.
Unlike countries such as Finland, England and Korea, Singapore is not including Computing or CT in compulsory education. Instead, Singapore's approach is to provide
opportunities for students to develop their interests in programming and computing
skills through touchpoint activities at various ages, as shown in Fig. 19.3. The Computing and CT skills introduced to the children are age-appropriate and can engage them in learning. Children can then progressively develop interest and
skills, leading them to select Computing as a subject in the ‘O’ levels. The following
sections describe the characteristics of the approach.
with teachers in designing and integrating computing with the subjects to address
the lack of teachers’ experience. Teachers’ capacity is continuously developed as they
gain competence in computing. The school iteratively improves the programme for sustainability and a richer learning experience for the students (Hameed et al., 2018).
The task of building CT and Computing skills requires the combined effort of multiple
agencies working together. They include government agencies such as the Infocomm
Media Development Authority (IMDA), Ministry of Education and the Ministry of
Social and Family Development, education centres like the Singapore Science Centre,
universities and educational training providers. These agencies have been working
together, sometimes also independently, to organise opportunities for students to
learn computational thinking skills by providing them with varied experiences. These
agencies can pool resources such as funding and support for initiating, implementing
and sustaining the programmes.
The learning ecosystem comprises resources and agents that develop learning pathways based on the interests of learners. Students have access to informal and formal resources in the learning
ecosystem (Basham, Israel, & Maynard, 2010). They can participate in enrichment
programmes, special-interest clubs and after-school activities, and have access to a myriad of digital media such as videos, social media and the Internet. In the learning
ecosystem, each agent plays an important role in the ecosystem in initiating and
sustaining the learning interest of learners. In the traditional classroom, tightly bound relationships and resources are the nexus through which instruction is delivered, curriculum is developed, and assessment, teaching and learning take place; these can now flow through, and be enhanced by, a vibrant learning ecosystem of resources for students.
Bell, Curzon, Cutts, Dagiene, and Haberman (2011) explain that informal outreach
programmes which downplay syntax-based software programming as a prerequisite
skill for engaging with Computer Science ideas can effectively make CT concepts
accessible to the students in short bursts without formal curriculum support. These
outreach programmes can operate outside of, or in conjunction with, the formal education system to expose students to computer science concepts, so that they can make
informed decisions about their career paths. Kafai and Burke (2013) observe that
developments in K-12 computer software programming education can be charac-
terised by a ‘social turn’, a shift in the field in which learning to code has shifted
from being a predominantly individualistic and tool-oriented approach to now one
that is decidedly sociologically and culturally grounded in the creation and sharing
of digital media. The three dimensions of this social turn are: (1) from writing code
to creating applications, (2) from composing ‘from scratch’ to remixing the work of
others and (3) from designing tools to facilitating communities. These three shifts
illustrate how the development of artefacts, tools and communities of programming
could catalyse the move from computational thinking to computational participation,
and hence broaden the students’ participation in computing. One of the most active
proponents of this social turn is the Maker Movement (Martin, 2015), a community of
hobbyists, tinkerers, engineers, hackers, and artists who creatively design and build
projects for both playful and useful ends. There is a growing interest among edu-
cators in bringing ‘making’ into K-12 education to enhance opportunities to utilise
CT concepts to engage in the practices of engineering, specifically, and STEM more
broadly. CT is considered the best practice for teaching computing and more broadly
to solve problems and design systems; however, as computing extends beyond the
desktop, Rode et al. (2015) argue we could transition from CT to computational mak-
ing as an educational framework via the Maker Movement. Besides using micro:bit,
many people in the Maker Movement also use the LilyPad Arduino, a fabric-based
construction kit that enables novices to design and build their own soft wearables
and other textile artefacts. (Buechley, Eisenberg, Catchen, & Crockett, 2008; Kafai
et al., 2013; Kafai, Lee, Searle, & Fields, 2014).
Currently, computing in formal education is offered in the 'O' Level and 'A' Level curricula. Pixel Labs, under the Code@SG movement (Pixel Labs, 2017), aims to develop programming and CT as a national capability of Singapore. Accordingly, programming and CT are being introduced to students from an early age. Children with special needs and the underprivileged are also encouraged to cultivate their interest in programming and technology development. The Infocomm Clubs
(Infocomm Club, 2017), a co-curricular activity programme for school children not
only excites students about Infocomm in a fun and meaningful way but also cultivates
leadership and entrepreneurship capabilities along with providing opportunities for
project work and community service projects at an early age. Many initiatives (Code in Community, 2017) offer free programming lessons to underprivileged children in Singapore. Thus, the main approach is to enthuse a broad base of students in computing and expose them to the possibilities of technology through enrichment programmes and co-curricular activities. Learning to code is part of the Applied Learning Programme (ALP) in 41 secondary
schools. In addition, MOE also partners IMDA to provide enrichment programmes
like the ‘Code for Fun’ and ‘Lab on Wheels’, which have been well received by
schools. There are secondary schools (33 schools in 2016) with Infocomm clubs,
which tap on the support of IMDA to provide learning in areas involving program-
ming such as app development and robotics.
At the National Institute of Education, efforts have been made to bring researchers, teachers, computing education trainers and students together to share ideas and practices in the learning of Computing and Computational Thinking. In 2017, a half-day symposium and workshop was organised within a major international education conference, at which representatives from the Infocomm Media Development Authority (IMDA), the National Institute of Education (NIE) and the National University of Singapore (NUS), together with school teachers and students, presented national-level initiatives on Computing Education, research findings, and teaching and learning experiences in Computing. A workshop gave attendees the opportunity to participate in various Computing learning activities such as unplugged activities, board game playing and physical computing with micro:bit and Arduino (Fig. 19.6). We plan to continually organise
more of these events to draw participants in the ecosystem together in the sharing,
learning and building of a vibrant community for Computing Education in Singapore.
Fig. 19.6 Workshop with computing learning activities such as physical computing with micro:bit, card games and unplugged activities to learn computing concepts
19.6 Summary
Schools can choose to offer computing based on students' needs, schools' niche programmes and the readiness of the teachers to teach computing. Teachers who are keen can choose to extend their capacity to teach computing.
Singapore as a nation can harness various agencies to work together in providing
a variety of learning experiences for children to be engaged in computing learning.
Thus, Singapore’s approach to CT education is promising but different from various
countries that have made CT part of compulsory education.
References
APFC. (2017). Preparing students for South Korea’s creative economy: The successes and chal-
lenges of educational reform. Retrieved February 13, 2017, from http://www.asiapacific.ca/
research-report/preparing-students-south-koreas-creative-economy-successes.
Basham, J. D., Israel, M., & Maynard, K. (2010). An ecological model of STEM education: Oper-
ationalizing STEM for all. Journal of Special Education Technology, 25(3), 9–19.
Bell, T., Curzon, P., Cutts, Q., Dagiene, V., & Haberman, B. (2011). Overcoming obstacles to CS
education by using non-programming outreach programmes. In Informatics in schools: Contributing to 21st century education (pp. 71–81). Springer. https://doi.org/10.1007/978-3-642-24722-4.
Bocconi, S., Chioccariello, A., & Earp, J. (2018). The Nordic approach to introducing computational
thinking and programming in compulsory education. Report prepared for the Nordic@BETT2018
Steering Group. https://doi.org/10.17471/54007.
Buechley, L., Eisenberg, M., Catchen, J., & Crockett, A. (2008). The LilyPad Arduino. In Proceeding
of the Twenty-Sixth Annual CHI Conference on Human Factors in Computing Systems—CHI ’08
(p. 423). https://doi.org/10.1145/1357054.1357123.
Code in Community. (2017). Google’s free coding classes are proving to be a major success—Sees
first 500 S’pore kids graduate. Retrieved May 22, 2017, from https://vulcanpost.com/611220/
code-community-google/.
DfE. (2017). National curriculum in England: Computing programmes of study. Retrieved February
13, 2017, from https://www.gov.uk/government/publications/national-curriculum-in-england-
computing-programmes-of-study/national-curriculum-in-england-computing-programmes-of-
study.
Hameed, S., Low, C.-W., Lee, P.-T., Mohamed, N., Ng, W.-B., Seow, P., & Wadhwa, B. (2018). A
school-wide approach to infusing Coding in the Curriculum (pp. 33–36). Paper presented at the
Proceedings of 2nd Computational Thinking Education Conference 2018, Hong Kong SAR, The
Education University of Hong Kong.
IMDA. (2017). PlayMaker changing the game. Retrieved February 13, 2017, from https://www.
imda.gov.sg/infocomm-and-media-news/buzz-central/2015/10/playmaker-changing-the-game.
InfoComm Club. (2017). Infocomm Clubs programme. Retrieved February 13, 2017, from https://
www.imda.gov.sg/imtalent/student-programmes/infocomm-clubs.
Japan Times. (2017). Computer programming seen as key to Japan’s place in ‘fourth indus-
trial revolution’. Retrieved February 13, 2017, from http://www.japantimes.co.jp/news/2016/
06/10/business/tech/computer-programming-industry-seen-key-japans-place-fourth-industrial-
revolution/#.WKG2P_l97b0.
Kafai, Y. B., & Burke, Q. (2013). The social turn in K-12 programming. In Proceedings of the 44th
ACM Technical Symposium on Computer Science Education—SIGCSE ’13 (p. 603). https://doi.
org/10.1145/2445196.2445373.
Kafai, Y. B., Searle, K., Kaplan, E., Fields, D., Lee, E., & Lui, D. (2013). Cupcake cushions, scooby
doo shirts, and soft boomboxes: e-textiles in high school to promote computational concepts,
practices, and perceptions. In Proceedings of the 44th ACM technical symposium on Computer
Science Education (pp. 311–316). ACM.
Kafai, Y. B., Lee, E., Searle, K., Fields, D., Kaplan, E., & Lui, D. (2014). A crafts-oriented approach
to computing in high school: Introducing computational concepts, practices, and perspectives with
electronic textiles. ACM Transactions on Computing Education (TOCE), 14(1), 1.
Looi, C. K., How, M. L., Wu, L., Seow, P., & Liu, L. (2018). Analysis of linkages between an
unplugged activity and the development of computational thinking. Journal of Computer Science
Education (JCSE). https://doi.org/10.1080/08993408.2018.1533297.
Low, J. M. (2014). Applied learning programme (ALP): A possible enactment of achieving authen-
tic learning in Singapore schools. In 40th Annual Conference (2014). Singapore: International
Association for Educational Assessment. Retrieved from http://www.iaea.info/documents/paper_
371f25129.pdf.
Martin, L. (2015). The promise of the maker movement for education. Journal of Pre-College
Engineering Education Research (J-PEER), 5(1). https://doi.org/10.7771/2157-9288.1099.
MOE. (2017). O level computing teaching and learning syllabus. Retrieved February
13, 2017, from https://www.moe.gov.sg/docs/default-source/document/education/syllabuses/
sciences/files/o-level-computing-teaching-and-learning-syllabus.pdf.
Pixel Labs. (2017). CODE@SG movement—Developing computational thinking as a national
capability. Retrieved October 10, 2017, from https://www.imda.gov.sg/industry-development/
programmes-and-grants/startups/programmes/code-sg-movement-developing-computational-
thinking-as-a-national-capability.
Prince, K., Saveri, A., & Swanson, J. (2015). Cultivating interconnections for vibrant and equitable
learning systems. Cincinnati, OH: KnowledgeWorks.
Rode, J. A., Weibert, A., Marshall, A., Aal, K., von Rekowski, T., El Mimouni, H., & Booker,
J. (2015, September). From computational thinking to computational making. In Proceedings
of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing
(pp. 239–250). ACM.
Smart Nation. (2017). Why smart nation. Retrieved February 13, 2017, from https://www.
smartnation.sg/about-smart-nation.
Sullivan, A., & Bers, M. U. (2017). Dancing robots: Integrating art, music, and robotics in Sin-
gapore’s early childhood centers. International Journal of Technology and Design Education,
1–22.
Wing, J. M. (2006). Computational thinking. Communications of the ACM, 49(3), 33–35.
Wing, J. M. (2017). Computational thinking, 10 years later. Retrieved December 13, 2017, from https://
www.microsoft.com/en-us/research/blog/computational-thinking-10-years-later/.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0
International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing,
adaptation, distribution and reproduction in any medium or format, as long as you give appropriate
credit to the original author(s) and the source, provide a link to the Creative Commons license and
indicate if changes were made.
The images or other third party material in this chapter are included in the chapter’s Creative
Commons license, unless indicated otherwise in a credit line to the material. If material is not
included in the chapter’s Creative Commons license and your intended use is not permitted by
statutory regulation or exceeds the permitted use, you will need to obtain permission directly from
the copyright holder.
Chapter 20
Teaching-Learning of Computational
Thinking in K-12 Schools in India
Sridhar Iyer
Abstract Schools in India have been offering computer studies as a subject for
the past two decades. However, there is wide variation in the choice of topics, as
well as treatment of a given topic. Typically, the focus is on usage- and skill-based
content for specific applications. There is very little emphasis on important thinking
skills of broad applicability, such as computational thinking and 21st century skills.
This chapter describes a 10-year long project, called Computer Masti, to integrate
thinking skills into computer studies. The project includes: curriculum across K-12
grades, textbooks that contain explicit chapters on thinking skills in each grade, and
teacher training so that teachers gain proficiency in teaching this content. This chapter
provides an overview of the rationale and content of the curriculum, examples of how
computational thinking skills are addressed in the textbooks and learning activities,
a summary of its implementation in schools in India, and some results of evaluation
studies.
20.1 Introduction
There are diverse perspectives of computational thinking (Papert, 1980; Wing, 2006;
diSessa, 2000; Barr & Stephenson, 2011; Brennan & Resnick, 2012). The Interna-
tional Society for Technology in Education and Computer Science Teachers Asso-
ciation (ISTE & CSTA, 2011) provides an operational definition of computational
thinking for K-12 education as
S. Iyer (B)
Department of Computer Science and Engineering, & Interdisciplinary Programme in
Educational Technology, Indian Institute of Technology Bombay, Mumbai, India
e-mail: sri@iitb.ac.in
Schools in India are affiliated to one of many Boards (CBSE; ICSE; IB; IGCSE; State
Board). Each Board prescribes curricula and conducts standardized examinations for
grades 10 and 12. Schools have been offering computer studies as a subject to their
students for the past two decades. However, there is wide variation in the choice
of topics, as well as treatment of a given topic. Typically, the focus is on usage-
and skill-based content for specific applications. There is very little emphasis on
important thinking skills of broad applicability, such as computational thinking and
twenty-first-century skills.
To address the above issues, the Computer Masti project was initiated at IIT Bombay
in 2007.
The project has three aspects: (i) Defining the curriculum, called CMC, (ii) Devel-
oping the textbooks, called Computer Masti, and (iii) Supporting schools to imple-
ment the curriculum. Developing thinking skills is a key focus across all these aspects.
CMC (Iyer et al., 2013) has explicit emphasis on thinking skills, i.e., the basic
procedures and methods used in making sense of complex situations and solving
problems. Thinking skills, such as algorithmic thinking, problem-solving, system-
atic information gathering, analysis and synthesis, and multiple representations, are
mapped to various grades. Computer applications and usage skills are introduced
only after motivating the need for developing the corresponding thinking skill.
20 Teaching-Learning of Computational Thinking … 365
Computer Masti textbooks (Iyer, Baru, Chitta, Khan, & Vishwanathan, 2012) have
explicit chapters for developing thinking skills across a variety of real-life contexts.
For example, students are introduced to computational thinking in grade 3 through
activities such as planning a picnic, and then go on to programming in Scratch
(Scratch). The teacher training support provided to schools has explicit emphasis on
the need for thinking skills, their applicability across subjects and grades, and how
they may be developed. Teachers are trained to go beyond teaching computer usage
skills and focus on thinking skills. As of 2018, Computer Masti is being implemented
in 700+ schools across India. Over the years, Computer Masti has been used by
more than 1.5 million students in India. The next section in this chapter provides an
overview of CMC curriculum. It is followed by examples of how thinking skills are
addressed in Computer Masti textbooks. The subsequent sections provide a summary
of its implementation in schools, and results of evaluation studies.
20.2 CMC Curriculum
CMC has explicit emphasis on teaching-learning of thinking process skills, which are the
basic procedures and methods used in making sense of complex situations, solv-
ing problems, conducting investigations and communicating ideas. CMC explicitly
addresses the thinking skills of algorithmic thinking, problem-solving, systematic
information gathering, brainstorming, analysis and synthesis of information, multiple
representations, and divergent thinking. Computer literacy skills are introduced
only after motivating the need for developing the corresponding thinking skill.
• Highlight the interconnectedness of knowledge, rather than addressing each topic
in isolation.
While mastery of a topic is important, recognizing the interconnectedness of
various topics and ideas leads students to construct a more expert-like knowledge
structure (Ellis & Stuen, 1998). Hence, CMC has emphasis on: (i) thematic integra-
tion, i.e., the integration of knowledge from various subjects into computer studies,
and the use of computer-based activities to strengthen knowledge in other subjects,
and (ii) spiral curriculum, i.e., the content of the curriculum is organized such that
themes and topics are revisited with increasing depth in each successive grade.
A subset of the thinking skills from CMC (Iyer et al., 2013), which correspond to
items from the operational definition of computational thinking (ISTE & CSTA,
2011), is provided in Table 20.1. Each row shows how a particular thinking skill is
gradually developed and advanced across grades 3–8.
For example, students begin to learn algorithmic thinking in grade 3, where they
identify and sequence the steps involved in carrying out simple activities at that grade,
such as walking like a robot. In grade 4, they learn to identify constraints and to use
branching and iteration to design solutions that meet those constraints. In grades 5 and
6, they learn to gather requirements, synthesize information, use representations, make
decisions, and apply their learning to tasks such as digital storytelling. In
grades 7 and 8, they apply their learning to more complex tasks such as designing
an app for senior citizens.
Algorithmic thinking skills are strengthened through Scratch programming in
grades 3–6, where students apply concepts of if-else, loops, event handling, input,
variables and lists. In grades 6–8, they move to writing flowcharts and programming
using a procedural language.
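The progression described above, from Scratch blocks to flowcharts and a procedural language, can be sketched in code. The chapter does not name the procedural language used, so the following Python fragment is only an illustration of how the Scratch concepts mentioned (loops, if-else, variables, lists) reappear in procedural form:

```python
# Procedural counterparts of common Scratch blocks:
# a "list" block, a "repeat" loop, a "variable", and an "if ... else".
scores = []                   # a Scratch list
for attempt in range(3):      # a "repeat 3" block
    score = attempt * 10      # a variable block (stand-in for sensed input)
    if score >= 20:           # an "if ... else" block
        scores.append(score)
    else:
        scores.append(0)
print(scores)  # [0, 0, 20]
```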
Computer Masti books (Iyer et al., 2012) bridge the gap between the prescriptive
curriculum and the teacher's need to transact it in the class. The books are labeled as
Level I–VIII, which can be mapped onto grades 1–8. They are sequenced in such
a way that students who have studied them from grades 1–8 would be able to meet
the requirements of various education boards of the country (CBSE, ICSE, IB, State
boards) for grade 9 onwards. These books are released under Creative Commons
license (CC) and can be freely downloaded (Iyer et al., 2012).
The Computer Masti books provide learning activities to address the thinking skills
in CMC. These books use real-world context, analogies and a narrative structure to
show the broad applicability of thinking skills not just in other subjects but also in
“real life”.
The books use pedagogical approaches that are established to be effective for
learning of thinking skills. The approaches used are:
• Inquiry-based learning: Inquiry-based learning (Barrett, Mac Labhrainn, & Fallon,
2005; Olson & Loucks-Horsley, 2000) is an approach in which students are actively engaged
in the learning process by asking questions, interacting with the real world, and
devising multiple methods to address the questions. For example, the teacher may
ask students to prepare a presentation on an unfamiliar topic, such as maintaining
an aquarium or embedded applications of computers. Students first identify what
they already know about the topic and what they need to find out. Then they
gather the relevant information from various sources, and synthesize it to create the
presentation. The teacher oversees the process, provides feedback and intervenes
wherever necessary. Detailed examples are given in Sect. 20.3.3.
20.3.2 Structure
Each Computer Masti book contains lessons, worksheets and activities based on the
themes in the CMC curriculum recommended for that particular grade. Each book
follows a “Concept before Skill” approach, focusing on conceptual understanding before
learning the usage skills associated with specific applications.
Each lesson has a narrative style, featuring two children who learn about
computers while they engage in their daily activities. A third character in the narrative,
a mouse look-alike, plays the role of a facilitator for inquiry-based learning. Students
make real-world connections and integrate knowledge via the context in the narrative.
Worksheets, activities and projects in each lesson are geared towards exploration,
collaborative learning, and reflection. Each lesson also has a ‘Teacher’s Corner’ that
provides guidelines for transacting the lesson as per the pedagogy recommended.
There are explicit lessons on stepwise thinking in grade 3, on logical thinking
in grade 4, gathering and synthesizing information in grade 5, and using multiple
representations in grade 6. Students are introduced to computer applications only
after they learn the corresponding thinking skills in a familiar context. For example,
they are introduced to Internet search only after they learn the thinking skill of
systematically gathering information and concepts related to organizing information.
They are formally introduced to programming only after they apply algorithmic
processes to daily life situations, such as planning a school play.
Figure 20.1 shows an example of algorithmic thinking from grades 3 and 4. The
facilitator (Moz) poses a problem of planning a picnic. The students (Tejas and Jyoti)
perform stepwise thinking to solve the problem by first listing the main sub-tasks, then
working out the details in each sub-task, and the sequencing of the steps. The broad
applicability of such thinking is reinforced by enclosing them in a “Concept box” as
shown. The corresponding “Teacher’s Corner” makes recommendations such as “Ask
them to give other examples where they follow steps to do the activity. For instance, ask
them to plan a birthday party for their classmate. What steps are involved? Ask them
to arrange all the steps they need to do, in proper sequence, so that the party turns out
to be well-organized. Ask them to identify the main steps and list the detailed steps.”
Students are given practice in applying this thinking through worksheets with
real-life scenarios, as shown in Fig. 20.2. This is followed by stepwise thinking with
conditions and branching, and connecting it to Scratch programming in grade 4, as
shown in Fig. 20.3.
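The same stepwise thinking (list the main sub-tasks, work out the details of each, then sequence them) can be rendered as a short program. The following Python sketch is purely illustrative; the sub-tasks and numbers are invented, not taken from the textbook:

```python
# Stepwise thinking for planning a picnic: main sub-tasks become
# functions, details live inside each function, and the top-level
# function fixes the sequence of steps.

def choose_place_and_date():
    return {"place": "lakeside park", "date": "Saturday"}

def prepare_food_list(num_children):
    # detail of one sub-task: one snack pack per child
    return ["snack pack %d" % (i + 1) for i in range(num_children)]

def arrange_transport(num_children, seats_per_bus=40):
    # a small decision inside a sub-task: how many buses are needed?
    return (num_children + seats_per_bus - 1) // seats_per_bus

def plan_picnic(num_children):
    plan = choose_place_and_date()                   # step 1
    plan["food"] = prepare_food_list(num_children)   # step 2
    plan["buses"] = arrange_transport(num_children)  # step 3
    return plan

print(plan_picnic(90)["buses"])  # 90 children need 3 buses
```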
In subsequent grades students apply their learning from programming activities
to real-life problem-solving. For example, in grade 7, they learn to use flowcharts
while programming, through activities such as “Draw a flowchart to find the tallest
student in a group of 5 students”. (Fig. 20.3: Activity for applying stepwise thinking
and programming a scenario.) Then, they are asked to apply their knowledge
of flowcharts to solve a real-life problem such as “You have to design the control
mechanism for the crossing of a railway track and a road. Draw a flowchart for the
functioning of the crossing gate to ensure that there are no accidents.”
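The tallest-student flowchart maps directly onto a loop with a running maximum. A possible rendering in Python (illustrative only; the chapter does not specify which procedural language the students use, and the sample heights are invented):

```python
def tallest(heights_cm):
    """Mirror the flowchart: start with the first height, then compare
    each remaining height against the tallest seen so far."""
    tallest_so_far = heights_cm[0]
    for h in heights_cm[1:]:
        if h > tallest_so_far:   # the decision box in the flowchart
            tallest_so_far = h
    return tallest_so_far

# Heights (in cm) of a group of 5 students -- invented sample data.
print(tallest([142, 150, 138, 155, 147]))  # 155
```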
Figure 20.4 shows an example of gathering and synthesis of information from grade
5. The facilitator (Moz) poses a problem of setting up a home aquarium. The students
(Tejas and Jyoti) apply thinking skills to solve the problem by analyzing the goal,
identifying the requirements of sub-tasks, gathering the information, and categorizing
it. This is followed by identifying constraints and decision-making, as shown in
Fig. 20.5. Then, they create a Scratch program for a virtual aquarium, as shown in
Fig. 20.6. The learning of the thinking skills is reinforced through worksheets and
activities. The “Teacher’s Corner” makes recommendations such as “Ask them to
plan an outdoor summer camp for the class using the thinking skills from this lesson.
Ensure that they explicitly identify the goal and analyze the requirements, gather and
consolidate information, and identify constraints and do decision-making.”
In subsequent grades, integration of multiple concepts is explicitly targeted. Stu-
dents apply their learning to complex problem-solving. For example, in Grade 8,
they do activities such as “Design a mobile app for senior citizens”. First, they create
surveys for identifying needs. To do this, they may interview senior citizens in their
neighborhood to identify events that need alerts, such as taking medicines on time.
Then, they identify the features that are required by most of the participants. To do
this, they may use spreadsheets for recording data, counting frequencies, drawing
graphs and so on. Finally, they design their product. To do this, they may create mind-
maps for categorizing features to be included, flowcharts to depict the functioning
of their app, and a presentation to market their product.
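The middle step above, identifying the features required by most participants, amounts to frequency counting over survey responses, which the students carry out in a spreadsheet. An equivalent sketch in Python, with invented sample data:

```python
from collections import Counter

# Each response lists the features one senior citizen asked for.
responses = [
    ["medicine alert", "large font"],
    ["medicine alert", "SOS button"],
    ["large font", "medicine alert"],
    ["SOS button"],
]

# Count requests per feature, as a spreadsheet COUNTIF would.
counts = Counter(feature for r in responses for feature in r)

# Shortlist the features requested by more than half the participants.
majority = len(responses) / 2
shortlist = [f for f, n in counts.items() if n > majority]
print(shortlist)  # ['medicine alert']
```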
20.4 Implementation
20.4.1 Reach
Over the years, Computer Masti has been used by more than 1.5 million students in
India. As of 2018, Computer Masti is being implemented in 700+ schools across
India, covering 26 of 29 states and 3 of 7 union territories. Table 20.2 shows year-wise data,
in terms of number of schools implementing Computer Masti and the approximate
number of students across all grades.
In addition, the Computer Masti books may be freely downloaded (Iyer et al.,
2012). The website's Google Analytics data for 2009–2018 shows over 100,000
pageviews and 35,000 users across 150 countries.
Table 20.2 Year-wise number of students across all grades: 4,000; 20,250; 72,500; 117,150; 175,200; 239,200; 258,050; 352,950; 461,500
20.4.3 MOOCs
In 2017, to scale up the teacher training, two MOOCs (Massive Open Online Courses)
specific to teaching-learning of computer studies in schools were created. One MOOC
dealt with computer fluency and thinking skills, while the other dealt with program-
ming, as per the CMC curriculum. The MOOCs were offered through the IITBombayX
platform (IITBx) in May–June 2017, and again in Nov–Dec 2017. The first offering
had 553 registered participants, of which 266 completed the course. The second
offering had 1644 registered participants, of which 219 completed the course. The
plan is to continue offering these MOOCs periodically, until the total number of
participants reaches a significant fraction of K-12 teachers in India.
20.5 Evaluation
In order to identify student perceptions of learning with Computer Masti, a study was
conducted in 2013–2014. Data was gathered through multiple focus group interviews
of 24 students in grade 6, in two urban schools. The key findings were:
• Students often mentioned the use of thinking skills in real-life contexts. They gave
examples of how stepwise thinking helped in drawing pictures, applying logical
thinking while solving math problems, and gathering and synthesis of information
while working on projects.
• Students reported that they enjoyed Scratch programming and used it as a tool in
everyday life. They gave examples of how they created animations and games.
• Students enjoyed solving worksheets that required them to apply thinking skills
in different contexts.
In addition, 49 (22 females, 27 males) students of grade 6 answered a survey to
determine their perceptions of Computer Masti. 97.9% of them reported that they
enjoyed doing the worksheets and activities, 93.8% perceived the learning of thinking
skills as useful, and 77.1% perceived that Computer Masti helped them in learning
of other subjects.
Although the number of respondents is small, the results indicate that students
overall have a positive perception about learning with Computer Masti.
There is anecdotal evidence that teachers perceive Computer Masti to be useful for
teaching-learning of thinking skills, as seen from the quotes below:
• “Computer Masti promotes out of the box thinking. Very innovative.”
• “CM can really help a student in improving all aspects of studies as well as life.”
• “Excellent learning experience for students. Develops analytical abilities.”
• “I have seen my son invoke his lessons in the general world. For example, when
he is getting ready for the day, he might refer to it as ‘changing my costume’
(Scratch), or he might narrate as he moves to a room ‘move 10 steps and then turn
right’. Similarly, I see students implement Computer Masti knowledge in other
subjects also.”
• “Students come up with lots of questions. As a result, the projects they carry out
are better planned and mature.”
• “Content such as Computer Masti will surely succeed in enhancing the core skills
in students, not just computing but also creative, critical thinking and problem-
solving.”
Teachers also perceive the training through MOOCs to be useful, as seen by these
quotes from the end-of-course surveys:
• “I liked the real-life examples involved at each step”
• “The course is very useful. It gave me confidence to teach computers to my stu-
dents”
• “It helped me to rethink some strategy that I believed was correct”
• “This course helps me to understand how teachers need to upgrade themselves
and try to be creative, rather than teaching in traditional way of teaching, as this
generation students need to go along with computer learning through which peer
learning, critical thinking can develop and become a responsible citizen too”
• “This course changed my way of teaching programming to my students. I started
implementing the approaches learnt in the course. Feeling enriched after the com-
pletion of course. Please keep organizing courses like these for the development
of teachers.”
More systematic studies with teachers are yet to be carried out.
20.6 Conclusion
Many schools in India offer some form of computer studies as a subject to their
students. This subject is suitable for learning thinking skills of broad applicability,
such as computational thinking and twenty-first-century skills, in addition to learning
computer application usage skills. One effort in this direction was the Computer Masti
project at IIT Bombay. The project defined the curriculum, developed the textbooks,
and provided teacher training support to schools. The curriculum and textbooks have
explicit emphasis on computational thinking skills, such as algorithmic thinking
and applying this problem-solving process to a wide variety of contexts. A growing
number of schools have adopted this curriculum over the past decade, and now the
teacher training has been scaled up through MOOCs on teaching-learning of the
curriculum. All these resources are released under Creative Commons licenses, so they may
benefit schools, teachers, and students across the world.
Acknowledgements The author worked with several teams, at various stages of this project, and
acknowledges their contributions. The co-authors of the current edition of the CMC curriculum
are: Farida Khan, Sahana Murthy, Vijayalakshmi Chitta, Malathy Baru and Usha Vishwanathan.
The co-authors of the first edition of the Computer Masti textbooks are: Vijayalakshmi Chitta,
Farida Khan, Usha Vishwanathan and Malathy Baru, with illustrations by Kaumudi Sahasrabudhe,
and design by Sameer Sahasrabudhe. The co-instructor for the teacher training MOOCs is Sahana
Murthy, supported by TAs: Shitanshu Mishra, Kavya Alse, Gargi Banerjee and Jayakrishnan M.
The implementation in schools (2009–2016) was done by InOpen Technologies, co-founded by
the author and Rupesh Shah. Since 2016, the Computer Masti work has been taken forward
by Next Education India Pvt. Ltd.
References
CITL. (1999). Being fluent with information technology. Washington, DC: Committee on Infor-
mation Technology Literacy, Computer Science and Telecommunications Board, Commission
on Physical Sciences, Mathematics, and Applications, National Research Council. National
Academy Press. http://www.nap.edu/catalog.php?record_id=6482.
CC. Creative commons. https://creativecommons.org/.
diSessa, A. A. (2000). Changing minds: Computers, learning, and literacy. Cambridge: MIT Press.
Ellis, A. K., & Stuen, C. J. (1998). The interdisciplinary curriculum. Larchmont, NY: Eye on
Education Inc.
IB. IB board. http://www.ibo.org/diploma/curriculum/.
ICSE. ICSE board. http://www.cisce.org/publications.aspx.
IGCSE. IGCSE board. http://www.cambridgeinternational.org/programmes-and-qualifications/
cambridge-secondary-2/cambridge-igcse/.
IITBx. IITBombayX Hybrid MOOCs platform. https://iitbombayx.in/.
ISTE, & CSTA. (2011). Operational definition of computational thinking for K-12 education. Inter-
national Society for Technology in Education and Computer Science Teachers Association. http://
www.iste.org/docs/ct-documents/computational-thinking-operational-definition-flyer.pdf.
Iyer, S., Baru, M., Chitta, V., Khan, F., & Vishwanathan, U. (2012). Computer Masti series of books.
InOpen Technologies. http://www.cse.iitb.ac.in/~sri/ssrvm/.
Iyer, S., Khan, F., Murthy, S., Chitta, V., Baru, M., & Vishwanathan, U. (2013). CMC: A model
computer science curriculum for K-12 schools. Technical Report: TR-CSE-2013-52. Department
of Computer Science and Engineering, Indian Institute of Technology Bombay. http://www.cse.
iitb.ac.in/~sri/ssrvm/.
Johnson, R. T., & Johnson, D. W. (1998). Action research: Cooperative learning in the science
classroom. Science and Children, 24, 31–32.
Marzano, R., Brandt, R., Hughes, C., Jones, B., Presselsen, B., Rankin, S., & Suhor, C. (1988).
Dimensions of thinking: A framework for curriculum and instruction. Association for Supervision
and Curriculum Development.
Next. Next Education India Pvt. Ltd. https://www.nexteducation.in/.
Olson, S., & Loucks-Horsley, S. (Eds.). (2000). Inquiry and the national science education
standards: A guide for teaching and learning. Committee on the Development of an Addendum to
National Science Education Standards on Scientific Inquiry, National Research Council. http://
www.nap.edu/openbook.php?isbn=0309064767.
Padilla, M. J. (1990). The science process skills. Research matters—To the science teacher no. 9004.
NARST publications. https://www.narst.org/publications/research/skill.cfm.
Papert, S. (1980). Mindstorms: Children, computers, and powerful ideas. New York: Basic Books.
Scratch. Scratch website. https://scratch.mit.edu/.
State Boards. List of boards in India. https://en.wikipedia.org/wiki/Boards_of_Education_in_India.
Totten, S., Sills, T., Digby, A., & Russ, P. (1991). Cooperative learning: A guide to research. New
York: Garland.
Wing, J. (2006). Computational thinking. Communications of the ACM, 49(3), 33–36.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0
International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing,
adaptation, distribution and reproduction in any medium or format, as long as you give appropriate
credit to the original author(s) and the source, provide a link to the Creative Commons license and
indicate if changes were made.
The images or other third party material in this chapter are included in the chapter’s Creative
Commons license, unless indicated otherwise in a credit line to the material. If material is not
included in the chapter’s Creative Commons license and your intended use is not permitted by
statutory regulation or exceeds the permitted use, you will need to obtain permission directly from
the copyright holder.