1 Introduction
Computational thinking (CT) activities have been increasingly introduced in educational settings. They have been shown to foster algorithmic thinking and to promote the decomposition of large problems into smaller ones, which ultimately benefits real-world problem-solving [74, 75]. As a result, coding environments have been proposed to support CT training both in schools and at home [30]. These range from fully virtual environments (e.g., Scratch [59]) to hybrid (e.g., Wonder Workshop's Dash) and fully tangible ones (e.g., ACCembly [60] and Torino [49]). The latter two have been particularly relevant in promoting access for children with disabilities. For example, in ACCembly, children with visual impairments use tangible (accessible) pieces to program a physical robot with audio feedback, while with Torino, children connect tangible pods to create music sequences.
Accessible coding environments have the potential to promote inclusive activities, where children with mixed abilities collaborate toward shared solutions. Collaborative learning has been linked to greater productivity, social skills, and supportive relationships, and to improved psychological health and self-esteem. In particular, children with visual impairments tend to be less isolated in collaboration-prone environments [13, 45].
While accessible coding environments have proven helpful in co-located activities [44, 53], their benefits and limitations in increasingly relevant remote settings are underexplored. Research in the Learning Sciences has shown that co-located collaboration is often preferred [31]. However, it may not always be possible (e.g., when children are home-bound or during after-school activities). While remote mixed-visual-ability collaboration raises several challenges concerning accessibility, engagement, communication, and awareness, it also opens new opportunities to design more equitable experiences by manipulating access to information. Remote collaboration also offers geographically dispersed teams the opportunity to work together while sharing the same technologies [9].
In this paper, we investigate the tradeoffs of remote and co-located collaboration between children with mixed-visual abilities in a CT activity. Specifically, we answer the following research question: What are the benefits and limitations of co-located and remote coding environments in relation to task performance, social behaviors, and user experience? To do so, we designed a tangible robotic coding environment where children could program a robot to act out a Sokoban-inspired game. In the game, the main character (here, an Ozobot Evo) has the goal of pushing a crate to a target position on a map (in this case, a LEGO-based plate where walls, the crate, and target positions can be felt by touch) – Fig. 1. To make it collaborative, we designed the activity with two interdependent roles: 1) the map explorer, a strategist exploring the map where the robot had to push the crate, and 2) the block commander, an executor programming the robot with tangible blocks. The environment allowed children to play the game in either a remote or a co-located context.
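To make the gameplay concrete, the following is a minimal sketch of Sokoban-style game logic in Python: a robot with a heading moves on a grid, pushing a crate one cell at a time, driven by a sequence of block instructions. The grid representation, instruction names, and function are illustrative assumptions, not the actual software behind our kit.

```python
# Hypothetical model of the Sokoban-inspired puzzle: cells are (x, y) tuples,
# y grows downward; the robot executes "forward" / "left" / "right" blocks.
DIRS = {"N": (0, -1), "E": (1, 0), "S": (0, 1), "W": (-1, 0)}
LEFT = {"N": "W", "W": "S", "S": "E", "E": "N"}
RIGHT = {v: k for k, v in LEFT.items()}

def run_program(walls, robot, heading, crate, target, program):
    """Execute a block sequence; return True if the crate ends on the target."""
    for op in program:                      # e.g. ["forward", "left", "forward"]
        if op == "left":
            heading = LEFT[heading]
        elif op == "right":
            heading = RIGHT[heading]
        elif op == "forward":
            dx, dy = DIRS[heading]
            nxt = (robot[0] + dx, robot[1] + dy)
            if nxt in walls:
                continue                    # robot bumps into a wall and stays put
            if nxt == crate:                # pushing the crate one cell ahead
                pushed = (nxt[0] + dx, nxt[1] + dy)
                if pushed in walls:
                    continue                # crate is blocked; nothing moves
                crate = pushed
            robot = nxt
    return crate == target
```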
We conducted an evaluation with children with mixed-visual abilities (10 pairs with ages between 10 and 17), where they had to play collaboratively in both co-located and remote settings. Results showed that all children were able to apply CT concepts in both environments; cooperation was higher in co-located scenarios, mostly because sighted children had more access to both workspaces; however, remote collaboration was more balanced and promoted more verbal communication between children. Although we acknowledge the relevance of social connection in mixed-ability collaborative systems, in this study we focused on the impact of the environment on collaboration effectiveness.
To inform designers and developers of future collaborative CT environments, we contribute: 1) empirical results on co-located and remote coding activities between children with mixed-visual abilities; 2) implications for enhancing computer-aided remote collaborative environments; and 3) an example of a single-user game turned into an asymmetric, accessible, and collaborative CT activity. These contributions are pivotal in the context of inclusive learning, where differences in ability, as well as the lack of supporting instruments, tend to segregate children.
5 Findings
We analyzed the data according to the following six dimensions: effectiveness, CT, communication, cooperation, engagement, and accessibility – their description is available in supplementary materials.
5.1 Effectiveness - Groups were more autonomous in co-located scenarios [F1]
To analyze effectiveness, we considered whether groups reached the proposed goal, the time they took to do so, and their autonomy from the researchers.
All the groups except one successfully reached the proposed goals [F1a]. Two groups, G3 and G10, finished all their maps with no help from the researchers. The group that did not finish, G7, quit a map after eighteen minutes. They started in a co-located scenario, and when they switched to the remote one, S7 did not like being the map explorer and asked to go back to the other room with her friend.
We did not find a statistically significant difference in completion time between remote and co-located scenarios (Z = −1.932, p > 0.05; AM = 10′57′′, GM = 9′26′′, SD = 5′44′′) [F1b]. However, we noticed high standard deviations, which could be related to three common issues: the robot’s lack of accuracy [F1b.1], causing children to repeat the sequence of instructions (e.g., G3); difficulties fitting the blocks on the tray (e.g., VI1) [F1b.2]; or, in remote settings, network issues that prevented instructions from being sent between devices (e.g., G6) [F1b.3].
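For illustration only, a paired non-parametric comparison of this kind (the Z scores reported here are consistent with, e.g., a Wilcoxon signed-rank test) could be computed as in the sketch below; the completion times are invented placeholders, not the study data.

```python
# Hedged sketch: paired comparison of per-group completion times across the
# two environments. Values are hypothetical; SciPy reports the W statistic,
# while the text above reports a normal-approximation Z score.
from scipy.stats import wilcoxon

remote_min    = [10.5, 12.0, 9.8, 15.2, 8.7, 11.4, 18.0, 9.9, 13.3, 10.1]
colocated_min = [ 8.9, 11.2, 9.1, 14.0, 9.5, 10.2, 16.5, 9.0, 12.8, 10.4]

stat, p = wilcoxon(remote_min, colocated_min)
print(f"W = {stat:.1f}, p = {p:.3f}")  # p > 0.05 -> no significant difference
```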
Regarding autonomy from the researchers, we observed that children asked for more help, and the researchers intervened more, in remote (N = 822) than in co-located (N = 264) scenarios (Z = −4.542, p < 0.001) [F1c]. When breaking down those interventions, we noticed that researchers suggested that children communicate workspace-related awareness to their partners more often in remote scenarios (N = 95) than in co-located scenarios (N = 11; Z = −4.318, p < 0.001) [F1c.1]. Such suggestions occurred whenever the researcher found that the pairs could help each other more by having access to the other’s workspace. We can also report more interventions in remote scenarios related to the children’s orientation difficulties [F1c.2] (e.g., R:“Where is your heart?” VI5:“Here”. R:“Which side is that?” VI5:“Left” R:“So where do you want the robot to go?” VI5:“Right”) and to the lack of coordination between partners (R:“Are you ready? You have to tell him”, talking to VI10). Lastly, we observed researchers fostering engagement during the sessions (R:“Did you make it?”, S3:“Yes, we did!”, R:“That is great!”). These interventions occurred more in remote scenarios (N = 453) than in co-located ones (N = 214; Z = −3.317, p < 0.001), due to the waiting times and lack of awareness of the peer’s status during the game [F1c.3].
5.2 CT - Children applied CT concepts during the gameplay [F2]
We observed that children applied CT concepts and practices while solving the Sokoban maps. The allocentric puzzles encouraged children to apply perspective-taking and laterality concepts to plan solutions. Children applied Data Collection [F2a], a CT concept concerning gathering the data needed to solve the problem at hand, by observing and touching the map, asking questions about the game, and identifying the robot’s and crate’s locations (e.g., VI9:“Take the crate here”). It is a fundamental task to initiate problem-solving, and we observed that it occurred at the beginning of each puzzle and after the robot’s execution. The researchers would also ask questions to encourage map exploration in almost every scenario. Children often moved the robot or their hand on the map to plan the algorithm while applying mental visualization and perspective-taking. In co-located scenarios, we could associate Data Collection with moments when partners helped each other and worked out strategies to then build the solution (e.g., VI2:“(...) Right”, S2:“Left!” while pointing to the map) [F2a.1].
Algorithms and Procedures are also fundamental to solving challenges computationally and to building sequences of instructions. This usually occurred after data collection and was associated with children applying laterality concepts at the same time [F2b]. We observed children applying algorithms and procedures mainly when they were map explorers and verbalized the instructions to give (e.g., S6:“(...) forwards”, VI6:“How many times?”, S6:“Three”). S1 was the only block commander in a remote environment who applied Algorithms and Procedures, by visualizing the instructions’ sequence and commenting on the shape of the map. In co-located scenarios, block commanders such as S2, S8, and VI9 applied these concepts while exploring the map with their partners.
When children recognized that an action went wrong or the robot did not end up where they intended, they began Debugging (e.g., VI2:“It will fail here... It will not catch the crate”) [F2c]. In co-located contexts, children helped each other find the problem and worked out strategies to build a new solution. In remote scenarios, the map explorers only realized the sequence would fail when the robot executed it. Only then could they start debugging the sequence of instructions or finding a new solution based on the current status.
Children also had the opportunity to apply Problem Decomposition [F2d], particularly when solving the third level (the most complex one). This level presented an obstacle, as it required more instructions than the tray allowed, forcing children to divide their sequence of instructions. Some dyads also divided the problem to make it simpler: on the first iteration, they placed the robot next to the crate, and on the second, they pushed the crate to the goal (e.g., R: “You gave two instructions. Do you want to add some more?” VI6: “No, just two for now.”).
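As a minimal illustration of this constraint, the sketch below splits a solution that exceeds the tray’s capacity into tray-sized batches that can be sent to the robot one after another; the capacity value and instruction names are assumptions for illustration, not the kit’s actual parameters.

```python
# Hypothetical tray capacity: a long solution must be divided into batches
# that fit on the tray and are executed in turn (problem decomposition).
TRAY_CAPACITY = 6  # assumed number of block slots on the tray

def split_into_batches(program, capacity=TRAY_CAPACITY):
    """Divide a long instruction sequence into tray-sized sub-sequences."""
    return [program[i:i + capacity] for i in range(0, len(program), capacity)]

solution = ["forward", "forward", "right", "forward", "left",
            "forward", "forward", "right", "forward"]
for batch in split_into_batches(solution):
    print(batch)  # each batch is placed on the tray and executed before the next
```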
5.3 Communication - Verbal communication exchange was higher in remote than in co-located settings [F3]
Communication includes verbal behaviors related to awareness of the workspace or the task. Regarding workspace awareness, each child verbally supplied or requested considerably more cues to understand their peer’s workspace in the remote environment (M = 15, SD = 14) than in the co-located environment (M = 3, SD = 4; Z = −4.469, p < 0.001) [F3a]. Examples of workspace awareness cues include asking the peer to wait or informing about an ongoing task. Because children have access to their partner’s workspace in co-located collaboration, the need for verbal awareness cues may decrease. Although visual access was limited for children with visual impairments in co-located games, they leveraged other channels to access information [F3a.1]. For example, VI6 kept her hand in front of the magic box in a co-located game, so that she knew when her partner was using it, and even helped him by opening the lid of the box. Conversely, in a remote collaboration over an audio call, verbal exchange is crucial to establish coordination among peers, as it was equally accessible to both children through auditory-only feedback [F3a.2].
When dissecting the sub-types of awareness cues exchanged among peers regardless of the environment, 78% of those communicative acts refer to status requests/supplies (e.g., S3:“What about now?”, or VI6:“Done!”), suggesting that most of the communication is to acknowledge requested actions or to inform about completion [F3b]. The remaining 22% refer to information about past, future, or ongoing actions (e.g., VI3:“I have removed all blocks.”, VI5:“I will now give you three instructions.”, VI9:“The robot is not going straight.”). The average number of communicative acts per child related to past, future, or ongoing actions is extremely low (M = 1, SD = 2). This suggests that, in both environments, children generally neither provide nor request much information about their own or the other’s actions. This lack of verbal awareness about the workspace has a higher impact on remote settings, which are characterized by lower workspace access. Although children could have verbally compensated for the lack of workspace access by reporting more often what was going on, they did not do so. Therefore, the creation of a mutual mental model of the ongoing teamwork was hindered and, in turn, so was coordination in remote settings.
Regarding task-related communicative acts, children used them more frequently in the remote environment (M = 20, SD = 12) than in the co-located environment (M = 13, SD = 9; Z = −2.591, p < 0.01) [F3c]. These included giving more instructions to their peer (e.g., VI1:“Turn left.”) and questioning or repeating instructions more often (e.g., S2:“How many move forward after the second turn right?”). The higher frequency of task-related communication can be associated with children either having to fix a higher number of previously wrong instructions or having to ask more often about unclear instructions. We also looked at the number of communicative acts related to workspace awareness grouped by role. When in the block commander role, each child supplied or requested awareness from their peer 12 times on average, while in the map explorer role, each child performed only 5 communicative acts related to awareness. Once again, these communicative acts were used to acknowledge received instructions or to set the pace for longer sequences of instructions. This difference might suggest that when children had the block commander role, they were also implicitly responsible for establishing coordination mechanisms with their partners [F3d]. A similar analysis grouped by the children’s visual acuity is in Sec. 5.6.
5.4 Cooperation - Co-located environments enable more positive cooperation, but also leave room for more negative cooperation [F4]
We looked at cooperation among peers and classified it as positive, negative, or neutral. In positive cooperation, children were engaged in finding a solution together; in negative cooperation, children substituted or ignored their peers; in neutral situations, children waited for each other but did not help. In the remote scenarios, children engaged in a total of 16 acts of positive cooperation, compared with 84 acts in the co-located scenarios. Helping behaviors occurred more frequently in the co-located environment (Z = −3.708, p < 0.001), due to easier access to the peer’s workspace and status [F4a]. Examples of positive cooperative acts include overcoming a previous misunderstanding or helping the partner. For example, S2 helped his partner by handing him the correct blocks to place on the tray and build the sequence.
Generally, negative acts of cooperation did not occur very often (N = 20), and we found no statistically significant difference between environments (Z = −1.042, p > 0.05) [F4b]. As an example of negative cooperation, when VI2 was the block commander in the remote environment, she put the tray in the magic box without communicating it, while her peer was about to give another instruction. One extreme example of uncooperative behavior in a co-located setting, which happened in only one of the forty games, was S8 taking control of both roles from his partner with visual impairments. In this case, S8 was the map explorer and took over the blocks and the board. Although this unbalanced situation did not occur very often, it is important to mention that taking over a peer’s role was only possible in co-located environments [F4b.1]. To some extent, remote environments facilitate and promote mutual respect for the other’s role. Overall, co-located environments enable more positive cooperation, such as helping behaviors, but also leave room for more negative cooperation, such as children getting in each other’s way.
We considered cooperation neutral whenever children waited for each other, which happened more frequently in the remote environment (N = 191) than in the co-located environment (N = 54; Z = −3.879, p < 0.001) [F4c]. On the one hand, waiting for the peer may reflect respect for the other’s role, the time it takes to complete an action, or time to think about a strategy. On the other hand, waiting may affect engagement and mirror an uncoordinated collaboration due to the lack of awareness of the peer’s status.
5.5 Engagement - The co-located experience was more engaging than the remote one [F5]
To assess children’s engagement, we looked at positive and negative behavioral measures of enjoyment and boredom, respectively, as well as answers to the final questionnaire. For instance, laughter or excitement displayed by children during the sessions was more frequent in the co-located environment (N = 45) than in the remote environment (N = 10; Z = −2.746, p < 0.01) [F5a] (e.g., VI8:“it’s now (...) Go! Go! Go!”). While coding the video data, we could also observe several moments of disengagement and boredom, which we associate with waiting for the peer to finish an action, troubleshooting, and other system issues [F5b]. Some children created new tasks or challenges while waiting for their peer; for instance, VI6 started sorting and organizing the blocks.
In the final questionnaire, we asked children about their enjoyment of the co-located and remote environments. The difference in reported enjoyment between the two environments was statistically significant (F(1) = 5.586, p < 0.05), indicating that children preferred the co-located scenarios (M = 4.90, SD = 0.31) over the remote scenarios (M = 4.65, SD = 0.49) [F5c]. We also analyzed how they perceived both environments in terms of collaboration, inclusion, and their self-relevance to the task. We found no statistically significant effect of environment on the perceived collaboration of the task (F(1) = 0.486, p > 0.05), on the perceived inclusion of the task (F(1) = 1.306, p > 0.05), or on the perceived self-relevance to the team’s performance (F(1) = 1.306, p > 0.05). Children reported the task as highly collaborative (M = 4.750, SD = 0.493) [F5d] and highly inclusive (M = 4.625, SD = 0.628) [F5e] in both environments. They also perceived their contributions as similarly relevant to the team’s performance in both environments (M = 4.400, SD = 0.778) [F5f]. These results support that children considered their participation and collaboration similarly balanced in both remote and co-located environments.
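For illustration, ratings of this kind can be compared with a one-way repeated-measures ANOVA, which yields F statistics of the form reported above; the sketch below uses invented ratings and hypothetical column names, not the actual questionnaire data.

```python
# Hedged sketch: within-subjects comparison of Likert ratings between the two
# environments using a repeated-measures ANOVA. Data are illustrative only.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

df = pd.DataFrame({
    "child":       list(range(20)) * 2,
    "environment": ["co-located"] * 20 + ["remote"] * 20,
    "enjoyment":   [5] * 17 + [4] * 3 + [5] * 13 + [4] * 7,  # hypothetical 1-5 ratings
})
res = AnovaRM(df, depvar="enjoyment", subject="child", within=["environment"]).fit()
print(res)  # prints an ANOVA table with F(1, 19) and the associated p-value
```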
Children were asked how much they enjoyed having each of the two roles, block commander and map explorer, in each of the two environments. We found no statistically significant differences in how much children enjoyed the map explorer role between environments (Z = −1.294, p > 0.05), nor the block commander role (Z = −1.890, p > 0.05) [F5g].
5.6 Accessibility - Sighted children helped their peers more, but only in co-located environments [F6]
The accessibility of the workspace considers children’s ability to identify and use the system’s components. When children encountered issues, such as distinguishing the blocks by color/embossing or fitting them on the tray, we assumed the system was at fault. We observed that all children identified the essential locations on the map and the necessary blocks to solve the problem, although occasionally with help from their peers or the researchers [F6a]. In general, children faced four issues on average per session (M = 4, SD = 4), most commonly when fitting the blocks on the tray or placing them in the correct position inside the magic box [F6a.1]. VI2 had particular difficulty placing the blocks on the tray throughout the session (N = 11); she would rotate and try different spots until the block fitted the three raised dots. While being the map explorer, VI1, VI7, and VI8 tried to follow the robot’s movement on the map with their hands, presumably because of the lack of feedback about its position and location [F6a.2]. These actions would disturb the robot’s movement and orientation, leading it to deviate from the intended path.
We also analyzed the impact of children’s visual ability on the dimensions of communication, cooperation, and engagement, by comparing results between children with visual impairments and sighted children. We summarize those results here as they reflect how balanced the activity was for each child of the mixed-visual ability pair.
In terms of communication, children with visual impairments exchanged approximately the same total number of communicative acts related to workspace awareness and to the task (N = 346 and N = 639, respectively) as sighted children (N = 348 and N = 685; U = 749, Z = −0.491, p > 0.05 and U = 727.5, Z = −0.702, p > 0.05, respectively) [F6b]. In terms of cooperation, sighted children helped their peers more often (N = 45) than children with visual impairments (N = 14; U = 586, Z = −2.762, p < 0.01) [F6c]. When breaking down these frequencies by environment, around 85% of them occurred in the co-located environment. Once again, this result is in line with the previous one, suggesting that awareness propels coordination and, in turn, cooperation.
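Because this comparison is between two groups of children (sighted vs. visually impaired) rather than within pairs, an unpaired rank test such as the Mann–Whitney U test produces U statistics of the form reported above; the per-child counts in the sketch below are placeholders, not the study data.

```python
# Hedged sketch: between-group comparison of per-child counts of helping acts
# using a Mann-Whitney U test. Counts are illustrative placeholders.
from scipy.stats import mannwhitneyu

sighted_helps = [4, 2, 3, 5, 1, 6, 3, 2, 4, 5]  # hypothetical per-child counts
vi_helps      = [1, 0, 2, 1, 0, 1, 2, 0, 1, 0]

u, p = mannwhitneyu(sighted_helps, vi_helps, alternative="two-sided")
print(f"U = {u:.0f}, p = {p:.3f}")
```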
Finally, regarding engagement, we found no differences in children’s laughter during the sessions, with a total of 14 occurrences for sighted children and 12 for children with visual impairments [F6d]. We also performed a statistical analysis of the questionnaire data, comparing children’s answers by level of visual acuity. No significant differences were found when comparing the overall activity, nor when comparing their experience in the two environments (p > 0.05 for all tests) [F6e].
6 Discussion
In the user study, children with mixed-visual abilities collaborated in remote and co-located environments while using a tangible robotic kit to perform a CT activity. We now discuss the lessons learned in each environment, considering the findings of the user study.
The Sokoban-inspired game allowed children with mixed-visual abilities to apply CT concepts by programming a solution together [F2]. Children’s performance [F1a] reinforces its contribution to the current state of the art at the intersection of three topics: CT activities for children with visual impairments, collaboration between children with mixed abilities [57, 60, 66], and remote collaboration. Mainly, our robotic kit raised the bar for tangible CT activities by exploring allocentric perspectives and increasing difficulty levels [F2b], whether in a co-located or remote setting. Moreover, it created a rich environment for both children to become interdependent and reach the final goal together [F5d, F5f, F6b, F6d, F6e], regardless of their visual ability [24]. Generally, we consider that our tangible robotic kit enabled children with mixed-visual abilities to train CT concepts together in both remote and co-located environments.
6.1 The Co-located Environment
When analyzing collaborative tasks in co-located environments, we learned that children were prone to help their peers whenever needed, or even to offer support and assistance spontaneously [65]. Although the roles were designed to create interdependence toward the final goal, the individual actions of each role could be performed alone by each child. As a result, each task did not require active collaboration or mutual cooperation. However, the properties of our co-located environment – physical proximity, higher access to the peer’s workspace, and the use of tangible objects – facilitated cooperative behaviors between children [F4a]. We observed that children in the co-located environment generally tended to follow a cooperative approach to perform the task.
The second lesson learned emerges from the previous one. The cooperative approach created more fluid and coordinated interactions between children, which helped them be more engaged with the activity [F5a, F5c]. Additionally, as children relied more on each other to perform the task, they became a more autonomous team and required less help from the researchers or their teachers [F1c]. Such autonomy promotes higher effectiveness on the task, contributing to children’s engagement. As a result, we highlight the effectiveness, autonomy, and engagement of children while using our tangible robotic kit in co-located environments.
During our analysis, we also identified a challenge to be considered when assessing the tradeoffs of deploying a coding activity in a co-located environment. When children with mixed-visual abilities collaborate side by side, the default access to each other’s workspace is unbalanced. While children with visual impairments can use hearing and touch (and partial vision, according to their level of visual acuity), sighted children can fully exploit their vision [F6c]. This unbalanced interaction favoring sighted children, combined with physical access to the peer’s workspace, opens the way to taking over the peer’s role or similar behaviors [F4b.1]. Although this type of situation occurred only once during our user study, we believe it mirrors a fragility of co-located environments. Therefore, in co-located settings, the extended access to the peer’s workspace and the physical proximity cause a generally unbalanced interaction and allow sighted children to display dominating behaviors.
The last challenge is communication, particularly related to awareness of the workspace. Previous studies have already reported the importance of communicating and understanding the environment’s status to reach inclusive collaboration [12, 42]. The results of our user study support and reinforce this known challenge, as we noticed that children, both sighted and visually impaired, did not often use communication to ask about or disclose their ongoing tasks [F3b].
The lack of workspace-awareness communication between children is a major challenge that reinforces the naturally unbalanced workspace access. Describing current efforts or the workspace status to a peer could have several benefits, such as propelling more helping behaviors by children with visual impairments, or even reducing the default unbalanced access to the workspace.
6.2 The Remote Environment
We now discuss the lessons learned from deploying collaborative tasks between children with mixed-visual abilities in the remote environment. First, our setup for remote work, in which children only communicated via an audio call, gave both children equal opportunity to access each other’s workspace. Although another typical setup for remote work includes video calls, which might increase awareness of the peer’s status depending on children’s visual acuity, awareness of the workspace would likely still be compromised, especially considering that each child’s workspace includes tangibles laid out on their table. For that reason, remote collaborations with tangible objects provide both children balanced access to each other’s workspace, regardless of their visual acuity.
Another lesson learned relates to how interdependently children behaved in the remote environment. Among the several properties that remote environments can have, the low access to each other’s workspace underscores the importance of verbal communication and coordination to achieve a common goal. The asymmetric roles in our robotic kit were designed precisely to be interdependent, similar to many other classroom activities in which children usually engage. Additionally, considering the previously mentioned balanced access for both children (sighted or with visual impairments), having interdependent roles also requires the system to be accessible and to support an inclusive collaboration. The increased number of communicative acts exchanged by children while playing the remote games [F3a] and the fact that almost all pairs finished the task [F1a] suggest they reached an acceptable level of coordination. Therefore, we argue that role asymmetry in remote environments fosters both the interdependency between children and their inclusive collaboration.
While analyzing the remote games, we identified two main challenges in the way children used our tangible robotic coding kit and interacted with each other. The first challenge refers to the lack of workspace awareness and the consequent individualistic approach to achieving the common goal. Previous studies have already reported that verbal communication between users is essential to keep a shared mental model of the activity [62, 73]. Our results showed that children mainly exploited verbal communication to reach coordination [F3b, F3c, F3d], by asking for or reporting the current status of actions to their peers (e.g., “Are you done?”, “I made it!”). However, children did not use verbal communication to raise awareness about their workspace, specifying what exactly they were doing or when they struggled with something while performing their actions. One might have expected that children would use verbal communication to bridge the reduced access to the peer’s workspace in the remote setup, but this did not occur. As a result, children tended to follow a more individualistic approach while performing their actions in the remote environment. This approach did not compromise the teams’ success, due to our design of the asymmetric roles, which did not require active collaboration between the children. However, further investigation must address its impact on other tasks or types of roles, and ways to improve this issue.
The last challenge identified in remote environments, which impacts engagement, autonomy, and effectiveness and is supported by previous work [29], concerns technical issues. While using the kit, the connection between the robot and the PC sometimes failed [F1b.1]. Additionally, the tool used to establish online audio calls (Zoom) also increased network demand [F1b.3], creating delays in communication and even dropped calls. The robot–kit connection failure was more common in remote environments, where the setup was more complex in order to support remote collaboration. In our setup, the results suggest the network issues affected children’s autonomy, by requiring more help from the researchers [F1c.1], and affected children’s engagement, due to the more frequent waiting periods and lack of timely communication [F4c, F5b]. However, we also acknowledge that the network issues were not the only reason behind children’s occasional disengagement. We observed that children relied more on the researchers for help, for instance when having orientation difficulties, instead of asking their peers.
The combined effect of network issues, reduced workspace awareness, ill-timed communication, and children’s individualistic performance compromised task effectiveness and reduced children’s autonomy and engagement.