Enhancing Robot Programming With Visual Feedback and Augmented Reality
ABSTRACT

In our previous research, we showed that students using the educational robot Thymio and its visual programming environment were able to learn the important computer-science concept of event-handling. This paper extends that work by integrating augmented reality (ar) into the activities. Students used a tablet that displays in real time the event executed on the robot. The event is overlaid on the tablet over the image from a camera, which shows the location of the robot when the event was executed. In addition, visual feedback (fb) was implemented in the software. We developed a novel video questionnaire to investigate the performance of the students on robotics tasks. Data were collected comparing four groups: ar+fb, ar+non-fb, non-ar+fb, non-ar+non-fb. The results showed that students receiving feedback made significantly fewer errors on the tasks. Students using ar also made fewer errors, but this improvement was not statistically significant. Technical problems with the ar hardware and software showed where improvements are needed.

Categories and Subject Descriptors

K.3.2 [Computers & Education]: Computer and Information Science Education - Computer Science Education; I.2.9 [Robotics]

General Terms

Human Factors

Keywords

robotics in education; Thymio; Aseba; VPL; augmented reality; event-actions pair

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
ITiCSE'15, July 04–08, 2015, Vilnius, Lithuania.
Copyright is held by the owner/author(s). Publication rights licensed to ACM.
ACM 978-1-4503-3440-2/15/07 ...$15.00.
http://dx.doi.org/10.1145/2729094.2742585

1. INTRODUCTION

Robotics activities are widely used to introduce students to science, technology, engineering and mathematics (stem) in general, and to computer science in particular [6]. Robotics activities are exciting and fun, but we are also interested in investigating whether the activities lead to learning of stem subjects. In a previous paper [9], we described research conducted during an outreach program using the Thymio II educational robot and its Visual Programming Language (vpl). We showed that students successfully learned the important computer-science concept of event-handling.

However, while students were able to comprehend behaviors consisting of independent events, they had trouble with sequences of events. This paper explores two independent ways of improving their understanding of robotics programming: visual feedback (fb) that shows which event handler is currently being executed, and augmented reality (ar), as originally suggested by the first author [7].

The research methodology was also improved. In [9], learning was measured by administering a textual questionnaire containing exercises about vpl programs and the behaviors of the robot that could be observed when the programs were run. We observed that some young students found the textual questionnaire difficult to understand. Therefore, we implemented a new type of research instrument, a video questionnaire, in which the students make a multiple choice among several short video clips.

The performance of the students was measured in a 2×2 experimental setup: treatment groups that used ar compared with control groups that did not, and treatment groups that received fb compared with those that did not.

Section 2 describes the robot and the software environment, while Section 3 discusses previous work on ar in education and the ar system that we developed. The research methodology and the design of the video questionnaire are presented in Section 4. The results of the analysis, the discussion and the limitations of the research appear in Sections 5–7. Section 8 describes our plans for the future.

2. THYMIO II AND ASEBA

The Thymio II robot [11] (Figure 1) and its Aseba software were created at the Swiss Federal Institute of Technology
Figure 1: The Thymio II robot with a top image for
tracking by the camera of the tablet.
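A vpl program is a set of event-actions pairs: an event (a button press, an obstacle in front of a proximity sensor) triggers one or more actions (setting the leds, driving the motors). As a concrete illustration of this model, here is a minimal Python sketch of an event dispatcher using behaviors described later in Section 5. This illustrates the concept only; the event names, the `Robot` class, and the decorator are inventions for this sketch, not Aseba's actual API.

```python
# Hypothetical dispatcher illustrating VPL's event-actions pairs.
# Event names and the Robot class are inventions for this sketch.
class Robot:
    def __init__(self):
        self.top_led = "off"
        self.motors = (0, 0)          # (left speed, right speed)

HANDLERS = {}

def on_event(name):
    """Register an action for an event, like one VPL pair."""
    def register(action):
        HANDLERS[name] = action
        return action
    return register

@on_event("button.front")
def _button(robot):
    robot.top_led = "red"             # front button pressed -> red leds

@on_event("prox.front.left")
def _left(robot):
    robot.motors = (100, -100)        # obstacle on the left -> turn right

@on_event("prox.front.right")
def _right(robot):
    robot.motors = (-100, 100)        # obstacle on the right -> turn left

@on_event("prox.front.center")
def _center(robot):
    robot.motors = (100, 100)         # obstacle straight ahead -> forward

def dispatch(robot, event):
    """Run the action paired with an event, if any; events with
    no registered pair are ignored, as in VPL."""
    if event in HANDLERS:
        HANDLERS[event](robot)
```

Each decorated function corresponds to one visual pair in the vpl environment; the dispatcher plays the role of the robot's run-time event loop.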
5. RESULTS

5.1 The questionnaire

To compare the treatment vs. the control groups, we counted the number of mistakes for every participant. Table 1 shows the mean mistake count and the p-value of Pearson's chi-square test of its histograms, for the null hypothesis of no effect. We used Laplace smoothing (adding 1 to each bin) to apply the chi-square test even when the control group has 0 entries for a given mistake count. We see that using fb decreases the mistake count significantly, while using ar does not.

Table 2 shows the error rate of the answers for the four setups in the experiment: AF = ar and fb, AN = ar with no fb, NF = no ar but with fb, NN = neither ar nor fb. We see that some questions were answered correctly by almost all students, while other questions were more difficult and more than 30% of the students gave wrong answers. We also see that the error rate depends on the setup.

We see from Table 3 that the error rate is always lower with fb than without, and that this difference is significant for Q1 and Q8, and borderline significant for Q7.

Q1 showed a video of the robot turning right when presented with an object in front of its left sensor. The students had to select one of four programs, which differed in inverting the left/right sensors and the direction of movement. The correct program was: [program figure]

The students had to select one of four video clips that varied in the condition that could cause the robot's leds to become red. The correct video showed red when either the front button was pressed or an obstacle was placed in front of the robot. Although these questions are relatively simple, the students must reason about the spatial and logical relations between sensing and acting. The significant improvement when using fb probably indicates that the fb made the students more aware of these relations while experimenting with the robot.

Table 4 shows that the error rate is generally lower with ar than without, but the difference is not significant except for Q7, whose significance is borderline. In Q7, the students had to select one of four video clips that varied in the behavior of the robot when an object was placed in front of its sensors. The program caused the robot to turn right or left when an object was detected in front of the left or right sensor, respectively; when the object was detected by the center sensor, the robot moved forward.[9] This question required understanding the relation between two event-actions pairs in sequence and the specific sensor events. We believe that seeing the execution of the event-actions pairs in context improved the understanding of these relations.
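The Laplace-smoothed chi-square comparison described above can be sketched as follows. This is a minimal, dependency-free illustration, not the authors' analysis script; the function name and the binning scheme are assumptions.

```python
from collections import Counter

def smoothed_chi_square(counts_a, counts_b, max_mistakes):
    """Pearson chi-square statistic comparing two histograms of
    per-participant mistake counts, with Laplace smoothing (add 1
    to every bin) so that empty bins do not break the test."""
    bins = range(max_mistakes + 1)
    hist_a = Counter(counts_a)
    hist_b = Counter(counts_b)
    obs_a = [hist_a.get(b, 0) + 1 for b in bins]   # Laplace smoothing
    obs_b = [hist_b.get(b, 0) + 1 for b in bins]
    total_a, total_b = sum(obs_a), sum(obs_b)
    grand = total_a + total_b
    stat = 0.0
    for oa, ob in zip(obs_a, obs_b):
        col = oa + ob
        ea = col * total_a / grand   # expected under "no effect"
        eb = col * total_b / grand
        stat += (oa - ea) ** 2 / ea + (ob - eb) ** 2 / eb
    # Compare stat against a chi-square distribution with
    # len(bins) - 1 degrees of freedom to obtain a p-value
    # (e.g. scipy.stats.chi2.sf), omitted here to stay stdlib-only.
    return stat
```

Identical histograms yield a statistic of zero; the larger the statistic, the stronger the evidence against the null hypothesis of no effect.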
5.2 The usage data
To better understand the differences between treatment
and control groups for the different conditions, we investigated the usage data that we collected during the study.
Figure 5 compares the median time between consecutive
clicks on the run button in the vpl environment with the
median number of actions between two consecutive runs,
when using ar or not and when using fb or not. For ar,
there is a significant difference between the treatment group
and the control group. With ar, there were significantly fewer actions between runs and significantly less time between clicks (Mann-Whitney U test, p < 0.001). When fb is given, there is no significant difference in the usage data of the treatment and control groups.

      Q1    Q2    Q3    Q4    Q5    Q6    Q7    Q8
F    0.0   0.0   2.1  27.7  10.6  21.3   8.5  10.6
N   17.6   2.9   2.9  38.2  17.6  35.3  26.5  32.4
p    0.01  0.87  0.62  0.44  0.56  0.25  0.06  0.03

Table 3: The error rate (%) for fb/non-fb; p-values of Pearson's chi-square test. n = F:47, N:34.

[9] The program and videos can be examined at http://thymio.org/en:thymiopaper-vpl-iticse2015.
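The usage-data comparison relies on the Mann-Whitney U test. The U statistic itself can be computed in a few lines; the sketch below is illustrative only (the authors presumably used a standard statistics package, and the p-value step is omitted).

```python
def mann_whitney_u(xs, ys):
    """Mann-Whitney U statistics for two independent samples,
    using average ranks for ties. Returns (U_x, U_y); the smaller
    value is compared against tables or a normal approximation
    to obtain a p-value."""
    combined = [(v, "x") for v in xs] + [(v, "y") for v in ys]
    combined.sort(key=lambda t: t[0])
    n = len(combined)
    rank_sum_x = 0.0
    i = 0
    while i < n:
        # find the run of tied values starting at position i
        j = i
        while j < n and combined[j][0] == combined[i][0]:
            j += 1
        avg_rank = (i + 1 + j) / 2      # mean of 1-based ranks i+1 .. j
        for k in range(i, j):
            if combined[k][1] == "x":
                rank_sum_x += avg_rank
        i = j
    nx, ny = len(xs), len(ys)
    u_x = rank_sum_x - nx * (nx + 1) / 2
    u_y = nx * ny - u_x
    return u_x, u_y
```

Because the test is rank-based, it is appropriate for the skewed timing data reported here (medians of inter-click times), where a t-test's normality assumption would be doubtful.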
[Figure: median number of actions between runs]

• Several students tended to keep the tablet too close to the robot and therefore the tablet did not see the ground image.

• The use of the tablet was not uniform: some students did not use the tablet at all, while others seemed lost in contemplating reality through the tablet.

• The students did not always realize whether the tablet was tracking the ground or not.
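The first and third observations suggest giving students an explicit indication of whether tracking is active. One simple way to detect tracking loss is to correlate the current camera patch with the stored ground image; the sketch below is hypothetical (the actual system's tracker and its API are not described here) and uses plain normalized cross-correlation on flattened grayscale patches.

```python
import math

def ncc(patch_a, patch_b):
    """Zero-mean normalized cross-correlation of two equal-length
    grayscale patches (flattened lists of pixel intensities).
    Returns a value in [-1, 1]; values near 1 mean a strong match."""
    n = len(patch_a)
    mean_a = sum(patch_a) / n
    mean_b = sum(patch_b) / n
    num = sum((a - mean_a) * (b - mean_b)
              for a, b in zip(patch_a, patch_b))
    norm_a = math.sqrt(sum((a - mean_a) ** 2 for a in patch_a))
    norm_b = math.sqrt(sum((b - mean_b) ** 2 for b in patch_b))
    if norm_a == 0 or norm_b == 0:
        return 0.0          # flat patch: no texture to correlate
    return num / (norm_a * norm_b)

def tracking_ok(frame_patch, reference_patch, threshold=0.7):
    """Flag whether the camera still sees the reference ground
    image; the threshold is an assumption for this sketch."""
    return ncc(frame_patch, reference_patch) >= threshold
```

Surfacing a boolean like `tracking_ok` in the tablet interface (for instance, a red/green border) would let students notice immediately when they hold the tablet too close for the ground image to be visible.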