DOI: 10.1145/3544548.3581261

Coding Together: On Co-located and Remote Collaboration between Children with Mixed-Visual Abilities

Published: 19 April 2023

Abstract

Collaborative coding environments foster learning, social skills, computational thinking training, and supportive relationships. In the context of inclusive education, these environments have the potential to promote inclusive learning activities for children with mixed-visual abilities. However, there is limited research focusing on remote collaborative environments, despite the opportunity to design new modes of access and control of content to promote more equitable learning experiences. We investigated the tradeoffs between remote and co-located collaboration through a tangible coding kit. We asked ten pairs of mixed-visual ability children to collaborate in an interdependent and asymmetric coding game. We contribute insights on six dimensions - effectiveness, computational thinking, accessibility, communication, cooperation, and engagement - and reflect on differences, challenges, and advantages between collaborative settings related to communication, workspace awareness, and computational thinking training. Lastly, we discuss design opportunities for tangibles, audio, roles, and tasks to create inclusive learning activities in remote and co-located settings.
Figure 1: Representation of each role's workspace and its elements in remote (top) and co-located (bottom) environments

1 Introduction

Computational thinking (CT) activities have been increasingly introduced in educational settings. CT has been shown to foster algorithmic thinking, promoting the decomposition of large problems into smaller ones, which ultimately benefits real-world problem-solving [74, 75]. As a result, coding environments have been proposed to support CT training both in schools and at home [30]. These range from fully virtual environments (e.g., Scratch [59]) to hybrid (e.g., Wonder Workshop's Dash) and fully tangible ones (e.g., ACCembly [60] and Torino [49]). The latter two have been particularly relevant in promoting access for children with disabilities. For example, in ACCembly, children with visual impairments use tangible (accessible) pieces to program a physical robot with audio feedback, while with Torino, children connect tangible pods to create music sequences.
Accessible coding environments have the potential to promote inclusive activities, where children with mixed abilities collaborate toward shared solutions. Collaborative learning has been linked to increased productivity, social skills, and supportive relationships, and to improved psychological health and self-esteem. In particular, children with visual impairments tend to be less isolated in collaboration-prone environments [13, 45].
While accessible coding environments have proven helpful in co-located activities [44, 53], their benefits and limitations in the increasingly relevant remote environments are underexplored. Research in the Learning Sciences has shown that co-located collaboration is often preferred [31]. However, it is not always possible (e.g., when children are home-bound or during after-school activities). While remote mixed-visual ability collaboration raises several challenges concerning accessibility, engagement, communication, and awareness, it also opens new opportunities to design more equitable experiences by manipulating access to information. Remote collaboration also presents an opportunity for geographically dispersed teams to work together while sharing the same technologies [9].
In this paper, we investigate the tradeoffs of remote and co-located collaboration between children with mixed-visual abilities in a CT activity. In particular, we answer the following research question: What are the benefits and limitations of co-located and remote coding environments in relation to task performance, social behaviors, and user experience? To do so, we designed a tangible robotic coding environment where children could program a robot to act out a Sokoban-inspired game. In the game, the main character (here, an Ozobot Evo) has the goal of pushing a crate to a target position on a map (in this case, a LEGO-based plate where walls, crate, and target positions can be felt by touch) – Fig. 1. To make it collaborative, we designed the activity with two interdependent roles: 1) the map explorer, a strategist exploring the map where the robot had to push the crate, and 2) the block commander, an executor programming the robot with tangible blocks. The environment allowed children to play the game in a remote or co-located context.
We conducted an evaluation with children with mixed-visual abilities (10 pairs with ages between 10 and 17), where they had to play collaboratively in both co-located and remote settings. Results showed that all children were able to apply CT concepts in both environments; cooperation was higher in co-located scenarios, mostly because sighted children had more access to both workspaces; however, remote collaboration was more balanced and promoted more verbal communication between children. Although we acknowledge the relevance of social connection in mixed-ability collaborative systems, in this study we focused on the impact of the environment on collaboration effectiveness.
To inform designers and developers of future collaborative CT environments, we contribute: 1) empirical results on co-located and remote coding activities between mixed-visual ability children; 2) implications to enhance computer-aided remote collaborative environments; and 3) an example of a single-user game turned into an asymmetric and accessible collaborative CT activity. These contributions are pivotal in the context of inclusive learning, where abilities, but also the lack of supporting instruments, tend to segregate children.

2 Related Work

This work addresses CT activities for children with mixed-visual abilities in collaborative contexts, both co-located and remote. We thus discuss prior literature on 1) CT and its accessibility for children with visual impairments; 2) inclusive collaboration, focusing particularly on education; and 3) remote collaboration with and for people with visual impairments.

2.1 CT for children with visual impairments

CT can be defined as “the thought processes involved in formulating a problem and expressing its solution(s) in such a way that a computer – human or machine – can effectively carry out” [75]. It involves learning computer science concepts and practices – e.g., sequences, operators, iteration, decomposition, and abstraction – and is now seen as a valuable skill with benefits that go beyond computer science. For instance, learning CT skills can foster one’s critical and logical thinking, problem-solving, creativity, and social abilities [6, 27, 74]. Such benefits alongside reports about both computer science illiteracy and women’s under-representation in computing [28, 72] have fostered a continuous increase of CT activities in educational settings in the last decade – in particular in K-12 education. Learning how to code plays a central role in CT activities and poses opportunities not only for mathematical reasoning, but also for causal, spatial, verbal, and social reasoning [64], broadening the impact and advantages of CT in education.
This call for action resulted in the development of various tools and coding kits for children, designed with CT in mind. Targeted at specific age groups, these solutions often rely on block-based programming and drag-and-drop environments where specific blocks represent different programming expressions or elements. These environments prevent errors and reduce the barriers for beginners, including younger children [17, 71]. Some of the most widely known tools are Blockly [20] and Scratch [59], two web-based visual programming tools enabling children to create their own programs and learn new concepts. Hybrid solutions include both virtual and physical/tangible components to create more engaging and playful experiences. For instance, Thymio [48] can be programmed in a similar way to Scratch and Blockly, but its output controls a mobile robot. Conversely, Strawbies [32] (or the commercial kit Coding Awbie [54]) uses tangible blocks to control a virtual main character. Other coding kits, such as Cubetto [4] and KIBO [7], are fully physical, leveraging tangibles both for input (with blocks) and output (with a robot). These kits promote learning while also being engaging and playful [6, 77]. By being multimodal – with the use of visual, audio, and tactile elements – these tools have the potential to support inclusive learning activities with children of different abilities. However, the needs of children with disabilities are often disregarded, resulting in inaccessible tools.
Children with visual impairments, in particular, are often excluded from participating by themselves in CT activities, as coding kits and tools lack engaging accessible elements [34, 50, 57]. As a consequence, teachers often try to adapt their activities to mitigate their systems' lack of accessibility [50]. In parallel, many research efforts have tried to design, implement, and evaluate novel coding kits or tools taking into account the needs of children with visual impairments [30, 51]. For instance, Milne and Ladner [46] provided guidelines to create accessible block-based touchscreen environments for children with visual impairments, while Pires et al. [57] provided insights for robot-based programming environments and the activities that they support. Further research and commercial efforts created accessible versions of the Blockly tool [10, 25, 51]. Other efforts on accessible coding kits leveraged multimodal interaction, with a major focus on tactile and auditory feedback. StoryBlocks [35] uses tangible blocks – with visual tags detected by a camera – that can be assembled in a workspace area and then executed to create audio stories. CodeRhythm [61] uses magnetic tangible blocks that can be connected to create simple melodies, while Project Torino [49, 69] uses physical instruction beads connected through cables to create both music and stories. Similarly, ACCembly [60] uses accessible tangible blocks, but its output controls the movement of a robot. Overall, these systems support children with visual impairments in learning CT skills through coding kits and activities. Still, few of these research efforts have focused on promoting social interactions and collaboration, despite the potential for CT activities to support social reasoning [64]. Two exceptions are Torino, which promoted collaboration between children [66], and ACCembly, between children and their families [60]. Further exploring collaborative activities among children with mixed-visual abilities has the potential to increase inclusion both in and out of the classroom.

2.2 Inclusive Collaboration

Collaboration is often favored in educational activities, as a way to encourage students to work together towards a shared goal. Collaborative learning has multiple advantages such as enhancing critical thinking and promoting social and communication skills, while also improving classroom results [23, 36]. On a higher level, collaboration is crucial to promote inclusion between people with different abilities but also poses grand challenges, particularly in tasks that require understanding the others’ actions and awareness of the environment or workspace.
Being aware of the environment provides an understanding of the activity and is necessary for successful collaboration, although it can be challenging both in mixed-visual ability contexts and between blind people [16]. Mendes et al. [42] explored how different auditory designs affected the workspace awareness of blind adult dyads exploring a large touchscreen tabletop. They noted that users would communicate often, but also search for each other's hands – or offer their hand to the other – to successfully complete their tasks. Chibaudel et al. [12] explored the use of a tangible interface in a collaborative treasure hunt, showing that a continuous understanding of the environment's status (in particular, the other's location) improved both the performance of the pair and the effectiveness of their communication. Other works with adults have tried to convey either auditory [73] or haptic [62] feedback to participants with visual impairments as a complement to visual feedback when collaborating with sighted people. These studies highlight that blind people struggle to maintain their awareness of the environment and its current state, making both cross-media consistency and verbal communication essential to keep a shared mental model. In addition, a shared mental model of objects and the virtual environment may be facilitated by tactile reference points such as walls, floors, and fixed objects [62].
In an alternative approach, Gonçalves et al. [24] explored collaboration in online gaming where the pair has interdependent asymmetric roles. Interdependence empowers the participants to take control during the activity by valuing everyone's contribution [5]. In that work, users have different roles that are essential to complete a task – one relying on visual feedback, the other on audio. They have access to complementary information, making communication fundamental.
In the context of children with mixed abilities, research has focused on the potential of technology to increase inclusion among children [26, 44, 53]. One popular way to facilitate children's collaboration and inclusion is to use robots and tangibles. Robots and their physical attributes are very engaging to all children [6, 30, 44] and can potentially foster inclusive behaviors among children with mixed visual abilities [44, 52]. Frequently, robots are used as the output of programming instructions, but they have also been used to, for instance, help children with mixed visual abilities learn letters and shapes [52], or to support inclusive play experiences in a school setting [3, 44]. On the other hand, multisensory tangibles also allow children to manipulate and experiment, promoting children's sharing, communication, and negotiation [18, 41, 56]. The tangible output that the robot and the tangibles represent helps children interchange different skills and senses, monitor their peers' actions, and remember the state of play – characteristics needed for a successful collaboration [73]. An alternative approach is to use a haptic virtual environment to provide tactile feedback that facilitates teamwork between children with mixed visual abilities [47]. Besides enabling the use of touch, which increases accessibility and inclusion, these types of materials are also very engaging and have the potential to increase inclusive learning experiences, collaboration, critical thinking, and group discussion [13].

2.3 (Inclusive) Remote Collaboration

Virtual and remote teamwork has been a reality for decades [39] – boosted by the Covid-19 pandemic in recent years [9, 58]. The ability to work and collaborate remotely with others enables geographically dispersed members to work together, making use of collaborative and communication technologies such as e-mail, video-conferencing tools, shared hosting, and version control services. While remote work (and learning) has the potential to increase inclusion, its feasibility is heavily linked to the accessibility and usability of collaborative tools, which are often neglected. This can have the opposite effect, contributing to exclusion and major barriers for people with disabilities. For instance, in the context of collaborative writing tools, Das et al. [14, 15] report an increased effort by people with visual impairments in accessing (inaccessible) tools, listing challenges not only in collaboration awareness (e.g., who edited or commented what), but also in organization dynamics (e.g., social, structural, and power). Recent efforts have tried to increase the accessibility of collaborative writing tools by increasing mixed-visual ability partners' awareness and control of collaborative actions [70], using auditory cues [38] and/or promoting interdependence [58]. There is still very limited work on remote collaboration for people with visual impairments, and works focusing on remote interaction usually concern sighted assistance – e.g., remote guidance [33] or task transference [78] – which can have a negative perception and effect in work settings [1].
Research on children's collaboration in remote settings is also scarce, with a few exceptions on multiplayer collaborative online games for children [2, 22]. For instance, Garzotto [22] studied collaboration among groups of children in remote and co-located settings. The results suggest that a collaborative online game should have clear goals and mutual teamwork benefits, and that it should not depend on individual activities, to avoid waiting times. Another study designed and evaluated a kinesthetic game to facilitate collaboration in a co-located group of children and in a group collaborating remotely [2]. The authors found that it was more challenging to negotiate turn-taking and make decisions in a remote setting, and that children preferred competitive over cooperative play.
When considering education, remote collaborative technologies remain underexplored [19], especially for children with visual impairments. An exception is the early work of Manshad et al. [40], who designed a remote tangible learning environment to support everyday classroom lessons using manipulatives. The system supports remote, active positioning of manipulatives, as well as their proximity, stacking, and orientation on a multi-touch tabletop surface. However, the prototype was not evaluated, so it was not possible to assess the impact of the design choices. In our work, we aim to explore how children collaborate and communicate in both co-located and remote scenarios as a way to characterize these two scenarios and their challenges, tradeoffs, and advantages.
Figure 2: The three Sokoban LEGO-compatible maps, Levels 1, 2, and 3

3 Tangible Robotic Coding Kit

Considering that mainstream coding kits could not be easily adapted to (accessible) remote settings, we developed a novel tangible robotic coding kit by following accessibility guidelines from prior research [13, 30, 44, 60, 66, 69]. This kit enables two children with mixed-visual abilities to collaboratively play a Sokoban-inspired game in a remote or co-located setting. The coding kit is composed of tangible elements for the map and coding blocks, an Ozobot Evo robot, a computer, and a magic box.

3.1 Tangible Sokoban

Sokoban is a 1980s puzzle game played on a square grid, in which the player controls a character that pushes crates to specific positions in the minimum number of moves. Each level can have more than one solution, and its complexity depends on the number of actions the solution requires. To solve each challenge, users apply computational competencies such as data collection, planning, problem-solving, debugging, and building sequences of instructions [37].
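To make these mechanics concrete, the following minimal Python sketch illustrates the core Sokoban move rule under a toy grid encoding of our own; it is illustrative only, not the coding kit's implementation.

```python
# Illustrative sketch of the Sokoban move rule (our own minimal grid
# encoding, not the coding kit's code).
WALL = "#"

def try_move(grid, robot, crate, step):
    """Attempt one move. grid: list of strings; robot/crate: (row, col);
    step: (d_row, d_col), e.g., (0, 1) for one cell to the right."""
    nxt = (robot[0] + step[0], robot[1] + step[1])
    if grid[nxt[0]][nxt[1]] == WALL:
        return robot, crate                  # move blocked by a wall
    if nxt == crate:
        behind = (crate[0] + step[0], crate[1] + step[1])
        if grid[behind[0]][behind[1]] == WALL:
            return robot, crate              # crate cannot be pushed
        return nxt, behind                   # robot advances, crate is pushed
    return nxt, crate                        # plain move, crate untouched

# On a 1x5 corridor, moving right pushes the crate one cell further right:
robot, crate = try_move(["#...#"], (0, 1), (0, 2), (0, 1))
assert (robot, crate) == ((0, 2), (0, 3))
```

A level is solved when the crate's position matches the goal position, which is what the children's instruction sequences must achieve.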
Focusing on the advantages of tangibles and multisensory robotic feedback for inclusive learning activities [13, 44, 66], we adapted the traditional digital game to be played collaboratively while applying CT concepts. Our design explores the potential of information asymmetry in collaborative learning activities. Moreover, since children with visual impairments tend to use more egocentric rather than allocentric representations, the game can present a positive challenge for spatial awareness development [12], and the potential to train laterality and perspective-taking skills [57].

3.2 Roles and their Workspace

In line with previous research on mixed-visual ability gaming, we aim to balance children’s communication and collaboration by separating information gathering and agency between two asymmetric roles [24].
Children take one of two roles in both collaborative environments: map explorer and block commander. The map explorer is responsible for exploring the map and deciding the path for the robot to push the crate to its goal. The map explorer's workspace has a LEGO-based map and an Ozobot Evo robot [55], which is the main character in the tangible Sokoban. The robot moves on top of the map in four directions, has LED lights and 8-bit sound, and carries an additional 3D-printed pusher [68] to push the 3D yellow crate [67]. The robot receives instructions from the block commander.
The block commander is responsible for assembling the sequence of instructions with the coding blocks for the robot to move. The workspace of the block commander is composed of 3D-printed coding blocks and a magic box. Coding blocks are used to control the robot's movement and direction. Children put the coding blocks on a 3D-printed tray that fits in the magic box, which abstracts the recognition and transfer of the coding blocks to their partner's robot. The magic box features a webcam, which is connected to a computer running a Python script – Fig. 1.
When both roles are co-located, a single computer recognizes the coding blocks and sends movement instructions to the robot via Bluetooth. In remote conditions, block recognition and communication with the robot are split between the block commander's and the map explorer's computers, respectively. The computers communicate via a client-server Internet connection (Fig. 1).
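As a rough illustration of this remote split, the sketch below sends a recognized instruction sequence from the block commander's computer to the map explorer's computer over a plain TCP connection. The port, the JSON message format, and the drive_robot() stub are assumptions for illustration, not the system's actual code.

```python
# Hedged sketch of the remote client-server split described above.
import json
import socket

PORT = 5005  # hypothetical

def send_sequence(host, instructions):
    """Block commander side: ship a recognized sequence such as
    ['forward', 'forward', 'turn_right'] to the map explorer's machine."""
    with socket.create_connection((host, PORT)) as conn:
        conn.sendall(json.dumps(instructions).encode() + b"\n")

def serve_one_sequence():
    """Map explorer side: accept one sequence and drive the robot with it."""
    with socket.create_server(("", PORT)) as server:
        conn, _addr = server.accept()
        with conn, conn.makefile() as stream:
            for step in json.loads(stream.readline()):
                drive_robot(step)

def drive_robot(step):
    # Placeholder for the Bluetooth command sent to the Ozobot Evo.
    print(f"robot executes: {step}")
```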

3.3 Coding Blocks and Tactile Maps

We know from previous work that block-based syntax allows focusing on code construction while reducing cognitive load and training fine motor skills [21, 35, 49, 57, 60]. We designed our tangible blocks considering the accessibility needs of children with mixed-visual abilities, such as high-contrast colors, embossing, support for following the executed code, and workspace organization [30, 60, 66, 69]. The blocks have four contrasting colors to facilitate their association and recognition (forward is yellow, backward is red, right is blue, and left is green), and an embossed blue arrow represents the direction. The workspace has a 3D-printed tray with raised dots designed to facilitate the correct placement and arrangement of blocks, and a box divided into four spaces, one for each type of block.
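The color-to-instruction mapping above can be summarized in a few lines; the sketch below uses our own encoding and leaves the camera-based recognition performed by the magic box out of scope.

```python
# Sketch of the block-to-instruction mapping described above; the webcam
# color detection done by the magic box is out of scope here.
BLOCK_COMMANDS = {
    "yellow": "forward",
    "red": "backward",
    "blue": "turn_right",
    "green": "turn_left",
}

def blocks_to_program(tray_colors):
    """Translate the block colors read off the tray, in order, into steps."""
    return [BLOCK_COMMANDS[color] for color in tray_colors]

assert blocks_to_program(["yellow", "yellow", "blue"]) == [
    "forward", "forward", "turn_right"]
```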
The customizable tangible maps are LEGO-based, a format familiar to all children [11, 13]. The robot's path on the map is made of green caps, and specific locations have different textures and colors: the robot's initial position (white with an oval sticker), the main crate's initial position (black with a white square), and the goal position (yellow with a carved X) – Fig. 2.
Figure 3: User scenario in a remote setting. A visually impaired child with the role of map explorer and a sighted block commander collaborate in one of the games to move the robot to push the crate toward its goal.

4 User Study

The goal of the user study was to investigate the benefits and challenges of remote and co-located collaboration in CT activities among children with mixed-visual abilities. We used the previously described tangible robotic coding kit that leverages asymmetric roles to create interdependence between a pair, meaning neither child can reach the final goal without both contributing and acting. We set up two environments to instantiate the same collaborative task both remotely and co-located.
In both collaborative environments, each child was responsible for the tangible objects required for their assigned role; i.e., the map explorer was responsible for the map and robot while the block commander for the coding blocks and magic box – Fig. 1. In the remote setting, the children were in different rooms and had a computer to communicate with each other through an online Zoom audio call. In the co-located environment, children were in the same room, sitting side by side.
The chosen setup for the two environments varied in the physical presence and proximity between the peers, and in the access each child had to the workspace of their partner. In the co-located environment, children had auditory, physical, and visual (conditioned by the visual acuity of each child) cues from their peer's workspace. In the remote environment, workspace awareness could only be achieved through verbal communication between the children, which aimed at creating more balanced workspace awareness for both children, regardless of their visual acuity [43]. In other words, we acknowledge and embrace that these two factors (proximity and workspace access) characterize the most common distinction between remote and co-located collaborations.

4.1 Participants

We conducted a study with 20 participants between 10 and 17 years old (M = 12.75, SD = 1.9) from three inclusive schools in our country. We asked 10 children with visual impairments to invite a sighted schoolmate to form pairs, and ensured that all participants were attending the 5th-8th grade, considering the national curriculum. We asked participants about their age, school grade, and previous robotics or coding experience. Additionally, educators mentioned that two of the participants were considerably older than their grade colleagues: one has a global developmental delay (VI7) and the other had repeated previous grades (S5). Only 4 participants reported previous robotic coding experience (VI3, VI7, VI9, VI10). Table 1 further describes the participants' demographics.
Group  ID    Gender  Age  School Grade  Visual Impairment  Coding Experience  Relationship
G1     VI1   M       13   8             Blind              No                 Friends
       S1    M       13   8             Sighted            Yes
G2     VI2   F       10   5             Low-Vision         No                 Friends
       S2    F       10   5             Sighted            -
G3     VI3   M       11   6             Low-Vision         Yes                Friends
       S3    M       12   6             Sighted            -
G4     VI4   M       12   7             Low-Vision         No                 Friends
       S4    M       14   7             Sighted            Yes
G5     VI5   F       11   6             Low-Vision         No                 Schoolmates
       S5    M       17   6             Sighted            No
G6     VI6   F       12   7             Low-Vision         Yes                Friends
       S6    M       13   7             Sighted            No
G7     VI7   F       17   5             Blind              Yes                Friends
       S7    F       12   5             Sighted            No
G8     VI8   M       14   6             Blind              No                 Schoolmates
       S8    M       13   7             Sighted            No
G9     VI9   M       12   5             Low-Vision         Yes                Friends
       S9    M       13   8             Sighted            No
G10    VI10  F       14   8             Low-Vision         No                 Friends
       S10   M       12   7             Sighted            Yes
Table 1: Participants in the study. The table describes the group and individual ID, gender, age, school grade, visual ability, previous coding experience, and relationship between partners.

4.2 Procedure

We manipulated the type of environment in a within-subjects design, so that each child had the opportunity to collaborate and solve puzzles in both the remote and co-located environments and with both roles. Children started in separate schoolrooms with a researcher per room; for children with visual impairments, their Inclusive Education Teacher was also present. At the beginning of the session, we explained how the session would proceed. Each child would start by solving the first level of the game individually and then move to the collaborative games with a randomly assigned role and environment. Each researcher explained the Sokoban game, how children could use the coding blocks to instruct the robot, and the representations of the positions of the crate and the robot on the tangible map. To familiarize children with the full setup [6, 44], they were free to explore the coding kit and solve the first level – Fig. 2 – while the researcher scaffolded their first interaction.
The remainder of the session was designed to investigate the tradeoffs between the two environments, co-located and remote. After both children completed the first level individually, the researchers explained the existing roles, map explorer and block commander, for the collaborative games – Fig. 4. Children collaborated in the first environment with a randomly assigned role and then exchanged roles in that same environment. Afterward, they changed to the second collaborative setting and performed both roles. Occasionally, researchers intervened to keep the activity flowing, encouraging children to overcome challenges, celebrating achievements, or prompting them to consider a different perspective. Fig. 3 illustrates an example of the interaction between a group during their collaborative game in a remote setting.
To ensure the same level of difficulty across collaborative settings, we used the same maps for levels two and three twice, but mirrored (see the sketch below). Children could solve them step-by-step or sequentially, applying laterality and perspective-taking concepts to understand the crate's movement. The first level had an L shape and required only two types of instructions (forward and one direction). The second level could be solved in a single iteration, with more instructions than the first level. The third level required at least two iterations with more instructions, included an obstacle, and needed at least three types of instructions to solve.
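For illustration, mirroring a grid-encoded level amounts to reversing each row: structure and difficulty are preserved while left/right laterality is swapped. The encoding and legend below are ours, not the study materials.

```python
# Sketch of map mirroring: same structure and difficulty, reversed
# laterality. Legend (R: robot start, C: crate, X: goal) is hypothetical.
def mirror_map(grid):
    """Mirror a level left-to-right; grid is a list of equal-length strings."""
    return [row[::-1] for row in grid]

level = ["#####",
         "#R C#",
         "##X##"]
print(mirror_map(level))  # ['#####', '#C R#', '##X##']
```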
Figure 4: Pairs during the user study sessions. On the left, a block commander in a remote environment. In the middle, a pair during a co-located session. On the right, a map explorer in a remote setting.

4.3 Measures and Data Analysis

All the sessions were video and audio recorded. To evaluate the collected data in light of our research question, our measures mirror task performance, social behaviors, and user experience. The performance of the coding activity considers the effectiveness of the team in solving the task [76] and the CT skills they practiced [60]. The measures of social behaviors focus mainly on the collaboration within each pair of participants, namely how they communicate and cooperate as a team, following related literature assessing collaboration [63] and workspace awareness with visually impaired people [42]. The user experience examines the engagement and the accessibility provided by the robotic kit.
Most measures were assessed through behavioral observation and coding. The two exceptions are the duration each pair took to solve the puzzle (effectiveness) and the final questions related to participants' engagement in the tasks. The questionnaire asked the same six questions for each environment to capture children's perspectives on the dimensions considered in the study (enjoyment, collaboration, and inclusion). The questions are inspired by previous research with children with visual impairments [3, 60] and use five-point Likert items. A full description of the final reported measures and the questions is detailed in the supplementary files.
Two researchers led the qualitative analysis of the videos using inductive and deductive coding [8]. The initial codebook was inspired by previous work related to workspace awareness [42], CT, Orientation & Mobility [60], cooperative strategies [63], and researcher-child relationship roles [76]. After a parallel round of coding, the researchers reached agreement on the codebook, adapting or removing codes and adding new ones regarding the system's interaction and accessibility. Using the final codebook, all qualitative data was coded in the ELAN software.
Finally, the quantitative analysis of the questionnaire answers, the time measures, and the frequencies of coded behaviors was performed in SPSS, using Wilcoxon signed-rank tests for comparisons between environments and Mann–Whitney U tests for comparisons between children's visual acuity.
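For readers replicating the analysis outside SPSS, equivalent non-parametric tests are available in SciPy; the sketch below uses illustrative toy data, not the study's measurements.

```python
# Equivalent non-parametric tests in SciPy (toy data for illustration only).
from scipy.stats import mannwhitneyu, wilcoxon

# Within-pair comparison between environments (paired samples):
colocated_counts = [12, 9, 15, 8, 11, 10, 14, 9, 13, 10]
remote_counts    = [18, 14, 16, 13, 17, 12, 19, 15, 16, 14]
print(wilcoxon(colocated_counts, remote_counts))      # Wilcoxon signed-rank

# Between-group comparison by visual acuity (independent samples):
vi_counts      = [3, 5, 2, 4, 6, 3, 4, 5, 2, 4]
sighted_counts = [6, 7, 5, 8, 6, 7, 9, 6, 7, 8]
print(mannwhitneyu(vi_counts, sighted_counts))        # Mann-Whitney U
```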

5 Findings

We analyzed the data according to the following six dimensions: effectiveness, CT, communication, cooperation, engagement, and accessibility – their description is available in supplementary materials.

5.1 Effectiveness - Groups were more autonomous in co-located scenarios [F1]

To analyze effectiveness, we considered the time groups took to reach the proposed goal, whether they completed it, and their autonomy from the researchers.
All the groups except one successfully reached the proposed goals [F1a]. Two groups, G3 and G10, finished all their maps with no help from the researchers. The group that did not finish, G7, quit a map after eighteen minutes. They started in a co-located scenario, and when they switched to the remote setting, S7 did not like being the map explorer and asked to go back to the other room with her friend.
We did not find a statistically significant difference in completion time between remote and co-located scenarios (Z = −1.932, p > 0.05; AM = 10′57′′, GM = 9′26′′, SD = 5′44′′) [F1b]. However, we noticed high standard deviations, which could be related to three common issues: the robot's lack of accuracy [F1b.1], causing children to repeat the sequence of instructions (e.g., G3); difficulties fitting the blocks on the tray (e.g., VI1) [F1b.2]; or, in remote settings, network issues that prevented instructions from being sent between devices (e.g., G6) [F1b.3].
Regarding autonomy from the researchers, we observed that children asked for more help and the researchers intervened more in remote (N = 822) than in co-located (N = 264) scenarios (Z = −4.542, p < 0.001) [F1c]. When breaking down those interventions, we noticed that researchers suggested that children communicate workspace-related awareness to their partners more in remote scenarios (N = 95) than in co-located scenarios (N = 11; Z = −4.318, p < 0.001) [F1c.1]. Such suggestions occurred whenever the researcher found that the pairs could help each other more by having access to the other's workspace. We can also report more interventions from the researchers in remote scenarios related to the children's orientation difficulties [F1c.2] (e.g., R:“Where is your heart?” VI5:“Here”. R:“Which side is that?” VI5:“Left” R:“So where do you want the robot to go?” VI5:“Right”) and to the lack of coordination between partners (R:“Are you ready? You have to tell him”, talking to VI10). Lastly, we observed researchers fostering engagement during the sessions (R:“Did you make it?”, S3:“Yes, we did!”, R:“That is great!”). These interventions occurred more in remote scenarios (N = 453) than in co-located ones (N = 214; Z = −3.317, p < 0.001), due to the waiting times and lack of awareness of the peer's status during the game [F1c.3].

5.2 CT - Children applied CT concepts during the gameplay [F2]

We observed that children applied CT concepts and practices while solving the Sokoban maps. The allocentric puzzles prompted children to apply perspective-taking and laterality concepts to plan solutions. Children applied Data Collection [F2a] – a CT concept about gathering the data necessary to solve the problem at hand – by observing and touching the map, asking questions about the game, and identifying the robot's and crate's locations (e.g., VI9:“Take the crate here”). It is considered a fundamental task to initiate problem-solving, and we observed that it occurred at the beginning of each puzzle and after the robot's execution. The researchers would also ask questions to encourage map exploration in almost every scenario. Children often moved the robot or their hand on the map to plan the algorithm while applying mental visualization and perspective-taking. In co-located scenarios, we could associate Data Collection with moments when partners helped each other and worked out strategies to then build the solution (e.g., VI2:“(...) Right”, S2:“Left!” while pointing to the map) [F2a.1].
Algorithms and Procedures are also fundamental to solving challenges computationally and to building sequences of instructions. We observed that this usually occurred after data collection and was associated with children applying laterality concepts at the same time [F2b]. Children applied Algorithms and Procedures mainly when they were map explorers and verbalized the instructions to give (e.g., S6:“(...) forwards”, VI6:“How many times?”, S6:“Three”). S1 was the only block commander in a remote environment who applied Algorithms and Procedures, by visualizing the instruction sequence and commenting on the shape of the map. In co-located scenarios, block commanders such as S2, S8, and VI9 applied these concepts while exploring the map with their partners.
When children recognized that an action went wrong or the robot did not end up where they intended, they began Debugging (e.g., VI2:“It will fail here... It will not catch the crate”) [F2c]. In co-located contexts, children helped each other find the problem and worked out strategies to build a new solution. In remote scenarios, the map explorers only realized the sequence would fail when the robot executed it. Only then could they start debugging the sequence of instructions or find a new solution based on the current status.
Children also had the opportunity to apply Problem Decomposition [F2d], particularly when solving the third level (the most complex one). This level presented an obstacle, as it required more instructions than the tray allowed, forcing children to divide their sequence of instructions. Some dyads also divided the problem to make it simpler: in the first iteration, they placed the robot next to the crate, and in the second, they pushed it to the goal (e.g., R: “You gave two instructions. Do you want to add some more?” VI6: “No, just two for now.”).

5.3 Communication - The verbal communication exchange was higher in remote than in co-located settings [F3]

Communication includes verbal behaviors related to the awareness of the workspace or the task. Regarding workspace awareness, each child verbally supplied or requested considerably more cues to understand their peer's workspace in the remote environment (M = 15, SD = 14) compared to the co-located environment (M = 3, SD = 4; Z = −4.469, p < 0.001) [F3a]. Examples of workspace awareness cues include asking the peer to wait or informing them about an ongoing task. As children have access to their partner's workspace in a co-located collaboration, the necessity for verbal awareness may decrease. Although visual cues were less accessible to children with visual impairments in co-located games, they leveraged other channels to access information [F3a.1]. For example, VI6 kept her hand in front of the magic box in a co-located game, so that she knew when her partner was using it, and even helped him by opening the lid of the box. Conversely, in a remote collaboration over an audio call, verbal exchange is crucial to establish coordination among peers, as it was equally accessible to both children through auditory-only feedback [F3a.2].
When dissecting the sub-types of awareness cues exchanged among peers regardless of the environment, 78% of the communicative acts refer to status requests/supplies (e.g., S3:“What about now?”, or VI6:“Done!”), suggesting that most of the communication served to acknowledge requested actions or to inform about completion [F3b]. The remaining 22% refer to information about past, future, or ongoing actions (e.g., VI3:“I have removed all blocks.”, VI5:“I will now give you three instructions.”, VI9:“The robot is not going straight.”). The average number of communicative acts per child related to past, future, or ongoing actions is extremely low (M = 1, SD = 2). This suggests that, in both environments, children generally neither provide nor request much information about their own or the other's actions. This lack of verbal awareness about the workspace has a higher impact in remote settings, which are characterized by lower workspace access. Although children could have verbally compensated for the lack of workspace access by describing what was going on more often, they did not do so. Therefore, the creation of a mutual mental model of the ongoing teamwork was hindered and, in turn, so was coordination in remote settings.
Task-related communicative acts were used more frequently by children in the remote environment (M = 20, SD = 12) compared to the co-located environment (M = 13, SD = 9; Z = −2.591, p < 0.01) [F3c]. These included giving more instructions to their peer (e.g., VI1:“Turn left.”), and questioning or repeating the instructions more often (e.g., S2:“How many move forward after the second turn right?”). A higher frequency of task-related communication can be associated with the fact that children either had to fix a higher number of previously wrong instructions or had to ask more often about unclear instructions. We also looked at the number of communicative acts related to workspace awareness grouped by role. In the block commander role, each child supplied or requested awareness information from their peer 12 times on average; in the map explorer role, only 5 times. Once again, these communicative acts were used to acknowledge received instructions or to set the pace for longer sequences of instructions. This difference might suggest that when children had the block commander role, they were also implicitly responsible for establishing coordination mechanisms with their partners [F3d]. A similar analysis grouped by the children's visual acuity is in Sec. 5.6.

5.4 Cooperation - Co-located environments enable more positive cooperation, but also leave room for more negative cooperation [F4]

We looked at cooperation among peers and classified it as positive, negative, or neutral. In positive cooperation, children were engaged in finding a solution together; in negative cooperation, children substituted or ignored their peers; in neutral situations, children waited for each other but did not help. Children engaged in a total of 16 acts of positive cooperation in the remote scenarios, compared to 84 acts in the co-located scenarios. Helping behaviors occurred more frequently in the co-located environment (Z = −3.708, p < 0.001) due to easier access to the peer's workspace and status [F4a]. Examples of positive cooperative acts include overcoming a previous misunderstanding or helping a partner. For example, S2 helped his partner by handing him the correct blocks to place on the tray and build the sequence.
Generally, negative acts of cooperation did not occur very often (N = 20), and we found no statistically significant difference between environments (Z = −1.042, p > 0.05) [F4b]. As an example of negative cooperation, when VI2 was block commander in the remote environment, she decided to put the tray in the magic box without communicating, while her peer was about to give another instruction. One extreme example of uncooperative behavior in a co-located setting, which happened in only one of the forty games, was S8 taking over both roles from his partner with visual impairments. This happened when S8 was the map explorer and took over the blocks and the board. Although this unbalanced situation did not occur often, it is important to note that taking over the role of a peer was only possible in co-located environments [F4b.1]. To some extent, remote environments facilitate and promote mutual respect for the other's role. Overall, co-located environments enable more positive cooperation, such as helping behaviors, but also leave room for more negative cooperative acts, such as children getting in each other's way.
We considered cooperation neutral whenever children waited for each other, which happened more frequently in the remote environment (N = 191) than in the co-located environment (N = 54; Z = −3.879, p < 0.001) [F4c]. On the one hand, waiting for the peer may reflect respect for the other's role, the time it takes to complete an action, or time to think about a strategy. On the other hand, waiting may affect engagement and mirror an uncoordinated collaboration due to the lack of awareness of the peer's status.

5.5 Engagement - The co-located experience was more engaging than the remote one [F5]

To assess children's engagement, we looked at positive and negative behavioral measures of enjoyment and boredom, respectively, as well as answers to the final questionnaire. For instance, laughter or excitement displayed by children during the sessions was more frequent in the co-located environment (N = 45) compared to the remote environment (N = 10; Z = −2.746, p < 0.01) [F5a] (e.g., VI8:“it's now (...) Go! Go! Go!”). While coding the video data, we could also observe several moments of disengagement and boredom, which we associate with waiting for the peer to finish an action, troubleshooting, and other system issues [F5b]. Some children created new tasks or challenges while waiting for their peer; for instance, VI6 started sorting and organizing the blocks.
In the final questionnaire, we asked children about their enjoyment of the co-located and remote environments. The difference in reported enjoyment between the two environments was statistically significant (F(1) = 5.586, p < 0.05), indicating that children preferred the co-located scenarios (M = 4.90, SD = 0.31) over the remote scenarios (M = 4.65, SD = 0.49) [F5c]. We also analyzed how they perceived both environments in terms of collaboration, inclusion, and their self-relevance to the task. We found no statistically significant effect of the environment on the perceived collaboration of the task (F(1) = 0.486, p > 0.05), the perceived inclusion of the task (F(1) = 1.306, p > 0.05), or the perceived self-relevance to the team's performance (F(1) = 1.306, p > 0.05). Children reported the task as highly collaborative (M = 4.750, SD = 0.493) [F5d] and highly inclusive (M = 4.625, SD = 0.628) [F5e] in both environments. They also perceived their contributions as similarly relevant to the team's performance in both environments (M = 4.400, SD = 0.778) [F5f]. These results support that children considered their participation and collaboration similarly balanced in both remote and co-located environments.
Children were asked how much they enjoyed having each of the two roles, block commander and map explorer, in each of the two environments. We found no statistically significant differences on how much children enjoyed the role of map explorer between environments (Z = −1.294, p > 0.05), nor on block commander (Z = −1.890, p > 0.05) [F5g].

5.6 Accessibility - Sighted children helped their peers more, but mostly in co-located environments [F6]

The accessibility of the workspace considers children's ability to identify and use the system's components. When children encountered issues, such as distinguishing the blocks by color/embossing or fitting them on the tray, we assumed the system was at fault. We observed that all children identified the essential locations on the map and the blocks necessary to solve the problem, albeit occasionally with help from their peers or the researchers [F6a]. In general, children faced four issues on average per session (M = 4, SD = 4), specifically fitting the blocks on the tray or placing them in the correct position inside the magic box [F6a.1]. VI2 showed the most difficulty placing the blocks on the tray throughout the session (N = 11); she would rotate blocks and try different spots until the block fitted the three raised dots. While being map explorers, VI1, VI7, and VI8 tried to follow the robot's movement on the map with their hands, presumably because of the lack of feedback it gave about its position and location [F6a.2]. These actions would disturb the robot's movement and orientation, leading it to deviate from the intended path.
We also analyzed the impact of children’s visual ability on the dimensions of communication, cooperation, and engagement, by comparing results between children with visual impairments and sighted children. We summarize those results here as they reflect how balanced the activity was for each child of the mixed-visual ability pair.
In terms of communication, children with visual impairments exchanged approximately the same total number of communicative acts related to workspace awareness and to the task (N = 346 and N = 639, respectively) as sighted children (N = 348 and N = 685; U = 749, Z = −0.491, p > 0.05 and U = 727.5, Z = −0.702, p > 0.05, respectively) [F6b]. In terms of cooperation, sighted children helped their peers more often (N = 45) than children with visual impairments (N = 14; U = 586, Z = −2.762, p < 0.01) [F6c]. When breaking down these frequencies by environment, around 85% of them occurred in the co-located environment. Once again, this result is in line with the previous one, suggesting that awareness propels coordination and, in turn, cooperation.
Finally, regarding engagement, we found no differences between children’s laughter during the sessions, which was annotated with a total of 14 occurrences for sighted children and 12 occurrences for children with visual impairments [F6d]. We also performed a statistical analysis of the questionnaire data comparing children’s answers by their level of visual acuity. No significant differences were found when comparing the overall activity, nor when comparing their experience in the two environments (p > 0.05 for all tests) [F6e].

6 Discussion

In the user study, children with mixed-visual abilities collaborated in remote and co-located environments while using a tangible robotic kit to perform a CT activity. We now discuss the lessons learned in each environment, considering the findings of the user study.
The Sokoban-inspired game allowed children with mixed-visual abilities to apply CT concepts by programming a solution together [F2]. Children's performance [F1a] reinforces its contribution to the current state of the art at the intersection of three topics: CT activities for children with visual impairments, collaboration between children with mixed abilities [57, 60, 66], and remote collaboration. Notably, our robotic kit extends tangible CT activities to allocentric perspectives and increasing difficulty levels [F2b], whether in a co-located or remote setting. Moreover, it created a rich environment for both children to become interdependent and reach the final goal together [F5d, F5f, F6b, F6d, F6e], regardless of their visual ability [24]. Generally, we consider that our tangible robotic kit enabled children with mixed-visual abilities to train CT concepts together in both remote and co-located environments.

6.1 The Co-located Environment

When analyzing collaborative tasks in co-located environments, we learned that children were prone to help their peers whenever needed, or even to offer support and assistance spontaneously [65]. Although the roles were designed to create interdependence toward the final goal, the individual actions of each role could be performed alone by each child. As a result, each task did not require active collaboration or mutual cooperation. However, the properties of our co-located environment – physical proximity, higher access to the peer's workspace, and the use of tangible objects – facilitated cooperative behaviors between children [F4a]. We observed that children in the co-located environment generally tended to follow a cooperative approach to perform the task.
The second lesson emerges from the first. The cooperative approach created more fluid and coordinated interactions between children, which helped them be more engaged with the activity [F5a, F5c]. Additionally, as children relied more on each other to perform the task, they became an autonomous team and required less help from the researchers or their teachers [F1c]. Such autonomy promotes higher task effectiveness, contributing to children's engagement. As a result, we highlight the effectiveness, autonomy, and engagement of children while using our tangible robotic kit in co-located environments.
During our analysis, we also identified a challenge to be considered when assessing the tradeoffs of deploying a coding activity in a co-located environment. When children with mixed-visual abilities collaborate side by side, the default access to each other's workspace is unbalanced. While children with visual impairments can use audition and touch (and partial vision, according to their level of visual acuity), sighted children can fully exploit their vision [F6c]. The unbalanced interaction favoring sighted children, combined with physical access to the peer's workspace, opens the way to taking over the peer's role or other similar behaviors [F4b.1]. Although this type of situation occurred only once during our user studies, we believe it mirrors a fragility of co-located environments. Therefore, in co-located settings, the extended access to the peer's workspace and the physical proximity cause a generally unbalanced interaction and allow sighted children to engage in dominating behaviors.
The last challenge is communication, particularly related to workspace awareness. Previous studies have already reported the importance of communicating and understanding the environment's status to reach inclusive collaboration [12, 42]. The results of our user study support and reinforce this known challenge, as we noticed that children, both sighted and visually impaired, did not often use communication to ask about or disclose their ongoing tasks [F3b]. The lack of workspace awareness communication between children is a major challenge that reinforces the naturally unbalanced workspace access. Describing current efforts or workspace status to a peer could have several benefits, such as propelling more helping behaviors by children with visual impairments, or even reducing the default unbalanced access to the workspace.

6.2 The Remote Environment

We now discuss the lessons learned about the remote environment from deploying collaborative tasks between children with mixed-visual abilities. First, our setup for remote work, in which children only communicated via audio call, gave both children equal opportunity to access each other's workspace. Although another typical setup for remote work includes video calls, which might increase awareness of the peer's status depending on children's visual acuity, the awareness of the workspace would likely still be compromised, especially considering that each child's workspace includes tangibles that lie on their table. For that reason, remote collaborations with tangible objects provide both children with balanced access to each other's workspace, regardless of their visual acuity.
Another learned lesson is related to how interdependent children behaved in the remote environment. Among the several properties that remote environments can have, the low access to each other’s workspace endorses the importance of verbal communication and coordination to achieve a common goal. The asymmetric roles in our robotic kit were designed precisely to be interdependent, similar to many other classroom activities in which children usually engage. Additionally, considering the previously mentioned balanced access for both children (sighted or with visual impairments), having interdependent roles also suggests the system has to be accessible and support an inclusive collaboration. The increased number of communication acts exchanged by children while playing the remote games [F3a] and the fact that almost all pairs finished the task [F1a] suggest they reached an acceptable level of coordination. Therefore, we argue that the role asymmetry in remote environments fosters both the interdependency between children and their inclusive collaboration.
While analyzing the remote games, we identified two main challenges in the way children used our tangible robotic coding kit and interacted with each other. The first challenge refers to the lack of workspace awareness and the consequent individualistic approach to achieving the common goal. Previous studies have already reported that verbal communication between users is essential to keep a shared mental model of the activity [62, 73]. Our results showed that children mainly exploited verbal communication to reach coordination [F3b, F3c, F3d], by asking for or reporting the current status of actions to their peers (e.g., “Are you done?”, “I made it!”). However, children did not use verbal communication to raise awareness about their workspace, specifying what exactly they were doing or whether they were struggling with something during their actions. One might have expected children to use verbal communication to bridge the reduced access to the peer's workspace in the remote setup, but this did not occur. As a result, children tended to follow a more individualistic approach while performing their actions in the remote environment. This approach did not compromise the team's success due to our design of the asymmetric roles, which did not require active collaboration between the children. However, further investigation must address its impact on other tasks or types of roles, and ways to mitigate this issue.
The last challenge identified in remote environments, which impacts engagement, autonomy, and effectiveness and is supported by previous work [29], is technical issues. While using the kit, the connection between the robot and the PC sometimes failed [F1b.1]. Additionally, the tool used for online audio calls (Zoom) increased network demand [F1b.3], creating delays in communication and even dropped calls. The robot-kit connection failure was more common in remote environments, where the setup was more complex in order to support remote collaboration. Our results suggest the network issues affected children's autonomy by requiring more help from the researchers [F1c.1] and affected their engagement through more frequent waiting periods and a lack of timely communication [F4c, F5b]. However, network issues were not the only reason for children's occasional disengagement. We observed that children relied more on the researchers for help, for instance when having orientation difficulties, instead of asking their peers. The combined effect of network issues, reduced workspace awareness, ill-timed communication, and children's individualistic performance compromised task effectiveness and reduced children's autonomy and engagement.
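To illustrate one possible mitigation (not something our kit implemented), a heartbeat watchdog could detect a dropped robot-PC connection and announce it immediately, replacing a silent wait with timely feedback. The minimal Python sketch below rests on assumptions: the heartbeat interval, the ping_robot probe, and the announcement wording are all hypothetical.

```python
# A minimal watchdog sketch for surfacing robot-PC connection drops promptly.
# The interval, threshold, probe, and announcement are illustrative assumptions,
# not a description of the study kit's actual networking code.
import time

HEARTBEAT_INTERVAL = 2.0   # seconds between liveness checks (assumed)
MAX_MISSED = 3             # consecutive failures before announcing a drop

def ping_robot() -> bool:
    """Placeholder: return True if the robot answered a liveness probe."""
    raise NotImplementedError  # depends on the robot's transport (BLE, Wi-Fi, ...)

def watch_connection(announce):
    missed = 0
    while True:
        try:
            ok = ping_robot()
        except Exception:
            ok = False
        missed = 0 if ok else missed + 1
        if missed == MAX_MISSED:
            # Tell the children immediately instead of leaving a silent wait.
            announce("The robot lost its connection. Trying to reconnect...")
        time.sleep(HEARTBEAT_INTERVAL)
```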

7 Design Opportunities

In this section, we suggest three design opportunities where audio feedback, remote tangibles, and simple tasks can improve the activity flow and create a more inclusive learning experience for children with mixed-visual abilities, in line with the findings described above. In a fourth design opportunity, we also reflect on how asymmetry can enable a more engaging activity and on the potential of complex games for CT training.

The potential of tangibles and audio for interdependent activities in remote settings.

In co-located environments, children can use vision, touch exploration, or verbal communication to build a mental model of the activity status [42]; in particular, to know when the robot is moving, its position, and the targets on the map [F6b]. Considering two examples from our study, a child with visual impairments kept her hand on the magic box to track the timing of actions [F3a.1], and sighted children tracked the activity's status by observing their partner's actions. In remote activities, by contrast, visual and haptic feedback is compromised, since children can only access each other's workspace through their partner's audio communication, which led the researchers to compensate with additional support [F1c.1, F1c.2].
We suggest using audio prompts based on children's actions to enable a shared understanding of the workspace in remote interdependent activities. These audio descriptions can be configurable, presented on request or by default. For example, systems can foster task-related audio communication between children [F3b] by prompting questions about each other's actions, what they are doing, or the game status during the activity (e.g., system to block commander: "What is the robot's status?"). Tangibles have the potential to promote collaboration between children with mixed-visual abilities by affording similar exploration and recognition. They can also report status and control the activity's flow, allowing the system to audio-describe children's actions (e.g., "left turn block selected"), the robot's and target's locations (e.g., "The robot is on cell A1", "The target is on cell B7"), foster interdependence (e.g., "You are adding a right turn block, what do you think you can do next?"), or announce action timing (e.g., "The robot will start moving in 15 seconds").
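As an illustration of this design opportunity, the following minimal Python sketch maps workspace events to spoken prompts, configurable to speak by default or only on request. The event names, prompt wording, and the use of the pyttsx3 text-to-speech library are our assumptions for this sketch, not a description of the study kit.

```python
# Event-driven audio prompts for a remote, interdependent coding activity.
# Event names, cell labels, and prompt wording are hypothetical examples.
import pyttsx3  # off-the-shelf text-to-speech; any TTS backend would do

PROMPTS = {
    "block_selected":  "{block} block selected.",
    "robot_position":  "The robot is on cell {cell}.",
    "target_position": "The target is on cell {cell}.",
    "countdown":       "The robot will start moving in {seconds} seconds.",
    "nudge":           "You are adding a {block} block. What can you do next?",
}

class AudioPrompter:
    """Speaks workspace events so both children share the same status."""

    def __init__(self, on_request_only=False):
        self.engine = pyttsx3.init()
        self.on_request_only = on_request_only  # default vs. on-demand prompts

    def announce(self, event, requested=False, **details):
        # Respect the configuration: speak every event, or only those
        # the child explicitly asks for.
        if self.on_request_only and not requested:
            return
        text = PROMPTS[event].format(**details)
        self.engine.say(text)
        self.engine.runAndWait()

prompter = AudioPrompter()
prompter.announce("block_selected", block="left turn")
prompter.announce("robot_position", cell="A1")
prompter.announce("countdown", seconds=15)
```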
Additionally, as expected, children with visual impairments tend to rely on touch to explore the map and the robot. This exploration can change the robot's position or orientation and disrupt the task at hand, creating challenges when manipulating the blocks [F1b.1] and orientation issues [F1c.2]. This presents a design opportunity: for example, using a more robust robot to minimize misalignments, or using feed-forward to announce the robot's location or misplacement, as in ACCembly [60].
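A minimal sketch of the feed-forward idea follows: the system compares the pose the program expects with the pose it senses and returns a spoken warning when touch exploration has displaced the robot. The Pose fields, cell labels, and tolerance value are illustrative assumptions, not the kit's real telemetry.

```python
# Feed-forward misplacement check: expected pose vs. sensed pose.
from dataclasses import dataclass

@dataclass
class Pose:
    cell: str      # grid cell label, e.g. "B3" (assumed encoding)
    heading: int   # degrees, 0 = facing "up" on the map

def check_alignment(expected: Pose, sensed: Pose, heading_tolerance=15):
    """Return a spoken warning if touch exploration displaced the robot."""
    if sensed.cell != expected.cell:
        return f"The robot was moved. It is now on cell {sensed.cell}."
    diff = abs(sensed.heading - expected.heading) % 360
    diff = min(diff, 360 - diff)          # shortest angular distance
    if diff > heading_tolerance:
        return "The robot was rotated. Please realign it before continuing."
    return None  # aligned: nothing to announce

warning = check_alignment(Pose("B3", 0), Pose("B4", 0))
if warning:
    print(warning)  # in a real kit this would be spoken aloud
```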

Simple tasks for a balanced and engaging inclusive activity in remote and co-located settings.

The flow of interdependent tasks has a significant impact on activity engagement and balance [22]. In our work, we defined sequential tasks to be performed in turns; each task was demanding and took long to complete (M = 3′10′′, SD = 3′00′′). We believe this design option negatively impacted engagement, as children waited for long periods before having an active role [F1c.3, F5b]. While waiting for their partner, children without a specific task to perform would generally disengage; on some occasions in co-located scenarios, children offered help or even did their peer's tasks [F4a]. For both remote and co-located settings, we suggest parallel and simple tasks to keep engagement and alternate control of the activity. In co-located environments, this can also reduce children's tendency to take over their partner's task [F4b.1]. Although reducing waiting time may also lead to less collaboration between children, this drawback can be mitigated by creating hybrid activities that combine autonomous and team tasks across remote and co-located settings.
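The sketch below illustrates this suggestion under stated assumptions: each round is split into two short subtasks that run in parallel, so neither child idles while waiting out a full turn. The subtask names and durations are hypothetical.

```python
# Parallel, simple subtasks per round: both children act at the same time,
# and the round ends when both are done. Tasks and timings are illustrative.
import threading
import time

def run_subtask(child, task, seconds):
    print(f"{child}: started '{task}'")
    time.sleep(seconds)               # stands in for the child's actual work
    print(f"{child}: finished '{task}'")

round_plan = [
    ("map explorer",    "report the robot's row", 3),
    ("block commander", "place the next block",   3),
]

threads = [threading.Thread(target=run_subtask, args=(c, t, s))
           for c, t, s in round_plan]
for th in threads:
    th.start()
for th in threads:
    th.join()
print("Round complete: continue with the next pair of subtasks.")
```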

Asymmetry opportunities in remote and co-located settings.

In this study, we explored opportunities raised by asymmetric roles with access to different information: the block commander and the map explorer. One child was the strategist, controlling the robot's position and orientation, while the other, the executor, was responsible for building the sequence of instructions to send to the robot based on her peer's information. The two roles led to a dynamic and balanced activity allowing for different levels of control, in which the map explorer had a higher status [F3d]. Children enjoyed playing both roles in both environments [F5c]. Asymmetric roles can be explored as a design opportunity to create different dynamics based on each child's preference or the activity's goal, reducing the risk of disengagement [F5b]. For example, different role types (e.g., executor vs. strategist, developer vs. tester, creative vs. operational) can be aligned with each child's age, ability, preference [F5c], or personality.
Another opportunity is asymmetric access to information, where children with different visual abilities perceive the activity differently [24]. Sighted children rely more on visual information, while children with visual impairments rely on haptics and sound [F3a.1]. Our findings [F3a] showed that sighted children were less dependent on their peers in co-located environments, as they used vision to enrich their perception and awareness of the entire environment. In remote settings, both children used only audio communication [F3a.2]. As a result, children were more dependent on each other, as each one had autonomous access to only part of the information. Our findings suggest that asymmetric access to information can enable more inclusive activities without compromising the communication flow, cooperation, and engagement between children [F6]. Future designs could explore this further, for example, by giving each child different information to create more interdependence.
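One way to operationalize this opportunity, sketched below under our own assumptions, is a role configuration that grants each role a different slice of the activity's information channels; the system then reveals a given piece of information only to the roles allowed to access it. The channel labels and the extra role variants are hypothetical.

```python
# Configurable role asymmetry: each role sees a different information slice.
ROLE_ACCESS = {
    # The study's two roles, expressed as assumed information channels:
    "map explorer":    {"robot_pose", "target_pose"},       # strategist
    "block commander": {"block_palette", "program_state"},  # executor
    # Hypothetical variants to match age, ability, or preference:
    "tester":          {"program_state", "robot_pose"},
    "developer":       {"block_palette"},
}

def visible_to(role: str, channel: str) -> bool:
    """Should this piece of information be revealed to this role?"""
    return channel in ROLE_ACCESS.get(role, set())

# The map explorer hears the robot's position; the block commander must ask.
assert visible_to("map explorer", "robot_pose")
assert not visible_to("block commander", "robot_pose")
```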

Complex games for CT.

CT training kits (using robots or digital avatars) mainly use the maze concept to explore the spatial challenge of going from point A to point B [57, 60], or audio to create new melodies or stories [35, 49, 61, 66]. In many of these kits, the solution to the presented problem uses only sequential commands; more advanced concepts, such as pattern recognition, loops, conditionals, and perspective-taking, are not usually explored due to their complexity. In our study, we took inspiration from Sokoban, a puzzle game that allows users to apply more complex CT concepts and orientation training, such as perspective-taking (e.g., one participant gave instructions assuming the robot was moving backward). We also offered different difficulty levels: by changing the map complexity, children quickly reapplied previously learned skills or applied new ones. Our findings [F2] suggest that the gaming metaphor, with levels of increasing complexity, is suitable for CT training activities, as participants were able to apply CT principles such as data collection [F2a], algorithms [F2b], debugging [F2c], and problem decomposition [F2d]. Moreover, the potential of applying gaming concepts, such as asymmetric roles, controlled action timing, and rewards and punishments, needs further exploration in inclusive CT training kits.
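To make the contrast with sequential-only kits concrete, here is a toy Sokoban-style executor (simplified to robot movement, without crates) whose command vocabulary includes a loop construct alongside sequential commands; comparing the final state to the target state supports debugging. The grid encoding and command names are illustrative, not our kit's actual language.

```python
# Toy executor: sequencing plus a ("repeat", n, [subprogram]) loop construct.
DIRS = {0: (0, -1), 90: (1, 0), 180: (0, 1), 270: (-1, 0)}  # heading -> (dx, dy)

def run(program, pos, heading):
    for cmd in program:
        if cmd == "forward":
            dx, dy = DIRS[heading]
            pos = (pos[0] + dx, pos[1] + dy)
        elif cmd == "left":
            heading = (heading - 90) % 360
        elif cmd == "right":
            heading = (heading + 90) % 360
        elif cmd[0] == "repeat":                 # ("repeat", n, [subprogram])
            for _ in range(cmd[1]):
                pos, heading = run(cmd[2], pos, heading)
    return pos, heading

# A loop replaces three sequential "forward" commands; debugging means
# checking whether the final pose matches the intended target pose.
final_pos, final_heading = run([("repeat", 3, ["forward"]), "right"],
                               pos=(0, 0), heading=90)
assert final_pos == (3, 0) and final_heading == 180
```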

8 Limitations and Future Work

We would like to acknowledge some limitations of our user study. First, the generalizability of our findings should take into account the demographics of our sample; for instance, most pairs were friends and had little to no previous coding experience. Additionally, two participants were considerably older than their grade colleagues because of a developmental delay or having repeated school years. Further studies should control for developmental differences between partners. Although we note that video access is common in remote settings, we decided to use verbal-only communication in our setup. This decision aimed to create more balanced workspace awareness for both children and to promote communication, regardless of visual acuity. In addition, a setup more complex than traditional video communication (showing the collaborators' faces or screens) would be required to minimize the discrepancy in workspace awareness between co-located and remote collaboration with tangible objects. We believe this presents an opportunity for future work on video-mediated remote collaboration with tangible objects, to clarify the specific impact of accessing visual cues such as the state of the peer or their workspace.

9 Conclusion

Collaborative coding environments foster CT training, social skills, and relationship development. In a learning context, coding kits can promote inclusive learning and collaboration between children. While there is a focus on co-located collaboration, remote settings remain largely unexplored, particularly for children with mixed-visual abilities. To explore the benefits and challenges of remote and co-located collaborative scenarios, we created a tangible robotic coding kit for children with mixed-visual abilities to play a Sokoban-inspired game. We contribute insights from a study with ten dyads of mixed-visual ability children collaborating in an interdependent and asymmetric game in remote and co-located scenarios. Our findings show that children enjoyed themselves and collaborated to apply CT concepts to finish the proposed activities in both scenarios. Although cooperation was higher in co-located environments, remote collaboration promoted more verbal communication between children. We reflect on design opportunities that can inform future designs aiming to foster inclusive collaborative coding environments, whether remote, co-located, or hybrid.

Acknowledgments

We thank all the children, their educators, and schools that agreed to participate in these sessions. We would also like to thank Diana Mendes (MSc student) for all her work and dedication. This work was supported by national funds through FCT, Fundação para a Ciência e a Tecnologia, under the projects UIDB/00408/2020, UIDP/00408/2020, UIDB/50009/2020, UIDB/50021/2020, and scholarships SFRH/BD/06589/2021 and SFRH/BD/06452/2021.


References

[1]
Khaled Albusays, Stephanie Ludi, and Matt Huenerfauth. 2017. Interviews and Observation of Blind Software Developers at Work to Understand Code Navigation Challenges. In Proceedings of the 19th International ACM SIGACCESS Conference on Computers and Accessibility (Baltimore, Maryland, USA) (ASSETS ’17). Association for Computing Machinery, New York, NY, USA, 91–100. https://doi.org/10.1145/3132525.3132550
[2]
Stefanie Angelia, Naohisa Ohta, and Kazunori Sugiura. 2015. Design and Evaluation of Educational Kinesthetic Game to Encourage Collaboration for Kindergarten Children. In Proceedings of the 12th International Conference on Advances in Computer Entertainment Technology (Iskandar, Malaysia) (ACE ’15). Association for Computing Machinery, New York, NY, USA, Article 19, 5 pages. https://doi.org/10.1145/2832932.2832967
[3]
Cristiana Antunes, Isabel Neto, Filipa Correia, Ana Paiva, and Hugo Nicolau. 2022. Inclusive’R’Stories: An Inclusive Storytelling Activity with an Emotional Robot. In Proceedings of the 2022 ACM/IEEE International Conference on Human-Robot Interaction. 90–100.
[4]
Lucia Gabriela Caguana Anzoategui, Maria Isabel Alves Rodrigues Pereira, and Monica del Carmen Solís Jarrín. 2017. Cubetto for preschoolers: Computer programming code to code. In 2017 International Symposium on Computers in Education (SIIE). IEEE, 1–5.
[5]
Cynthia L. Bennett, Erin Brady, and Stacy M. Branham. 2018. Interdependence as a Frame for Assistive Technology Research and Design. In Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility (Galway, Ireland) (ASSETS ’18). Association for Computing Machinery, New York, NY, USA, 161–173. https://doi.org/10.1145/3234695.3236348
[6]
Marina Umaschi Bers. 2017. Coding as a Playground. Routledge. https://doi.org/10.4324/9781315398945
[7]
Marina Umaschi Bers. 2018. Coding, playgrounds and literacy in early childhood education: The development of KIBO robotics and ScratchJr. In 2018 IEEE global engineering education conference (EDUCON). IEEE, 2094–2102.
[8]
Virginia Braun and Victoria Clarke. 2019. Reflecting on reflexive thematic analysis. Qualitative Research in Sport, Exercise and Health 11, 4 (2019), 589–597. https://doi.org/10.1080/2159676X.2019.1628806
[9]
Erik Brynjolfsson, John J Horton, Adam Ozimek, Daniel Rock, Garima Sharma, and Hong-Yi TuYe. 2020. COVID-19 and remote work: An early look at US data. Technical Report. National Bureau of Economic Research.
[10]
Logan B Caraco, Sebastian Deibel, Yufan Ma, and Lauren R Milne. 2019. Making the blockly library accessible via touchscreen. In The 21st International ACM SIGACCESS Conference on Computers and Accessibility. 648–650.
[11]
Gonçalo Cardoso, Ana Cristina Pires, Lúcia Verónica Abreu, Filipa Rocha, and Tiago Guerreiro. 2021. LEGOWorld: Repurposing Commodity Tools & Technologies to Create an Accessible and Customizable Programming Environment. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (Yokohama, Japan) (CHI EA ’21). Association for Computing Machinery, New York, NY, USA, Article 273, 6 pages. https://doi.org/10.1145/3411763.3451710
[12]
Quentin Chibaudel, Wafa Johal, Bernard Oriola, Marc J-M Macé, Pierre Dillenbourg, Valérie Tartas, and Christophe Jouffrais. 2020. "If You’ve Gone Straight, Now, You Must Turn Left" - Exploring the Use of a Tangible Interface in a Collaborative Treasure Hunt for People with Visual Impairments. In The 22nd International ACM SIGACCESS Conference on Computers and Accessibility (Virtual Event, Greece) (ASSETS ’20). Association for Computing Machinery, New York, NY, USA, Article 19, 10 pages. https://doi.org/10.1145/3373625.3417020
[13]
Clare Cullen and Oussama Metatla. 2019. Co-Designing Inclusive Multisensory Story Mapping with Children with Mixed Visual Abilities. In Proceedings of the 18th ACM International Conference on Interaction Design and Children (Boise, ID, USA) (IDC ’19). Association for Computing Machinery, New York, NY, USA, 361–373. https://doi.org/10.1145/3311927.3323146
[14]
Maitraye Das, Darren Gergle, and Anne Marie Piper. 2019. "It Doesn’t Win You Friends": Understanding Accessibility in Collaborative Writing for People with Vision Impairments. Proc. ACM Hum.-Comput. Interact. 3, CSCW, Article 191 (nov 2019), 26 pages. https://doi.org/10.1145/3359293
[15]
Maitraye Das, Anne Marie Piper, and Darren Gergle. 2022. Design and Evaluation of Accessible Collaborative Writing Techniques for People with Vision Impairments. ACM Trans. Comput.-Hum. Interact. 29, 2, Article 9 (jan 2022), 42 pages. https://doi.org/10.1145/3480169
[16]
Paul Dourish and Victoria Bellotti. 1992. Awareness and Coordination in Shared Workspaces. In Proceedings of the 1992 ACM Conference on Computer-Supported Cooperative Work (Toronto, Ontario, Canada) (CSCW ’92). Association for Computing Machinery, New York, NY, USA, 107–114. https://doi.org/10.1145/143457.143468
[17]
Caitlin Duncan, Tim Bell, and Steve Tanimoto. 2014. Should your 8-year-old learn coding?. In Proceedings of the 9th Workshop in Primary and Secondary Computing Education. 60–69.
[18]
Ralph J Erickson. 1985. Play contributes to the full emotional development of the child. Education 105, 3 (1985), 261.
[19]
Katherine G. Franceschi, Ronald M. Lee, and David Hinds. 2008. Engaging E-Learning in Virtual Worlds: Supporting Group Collaboration. In Proceedings of the 41st Annual Hawaii International Conference on System Sciences (HICSS 2008). 7–7. https://doi.org/10.1109/HICSS.2008.146
[20]
Neil Fraser. 2015. Ten things we’ve learned from Blockly. In 2015 IEEE Blocks and Beyond Workshop (Blocks and Beyond). IEEE, 49–50.
[21]
Vinitha Gadiraju, Annika Muehlbradt, and Shaun K. Kane. 2020. BrailleBlocks: Computational Braille Toys for Collaborative Learning. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA) (CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–12. https://doi.org/10.1145/3313831.3376295
[22]
Franca Garzotto. 2007. Investigating the Educational Effectiveness of Multiplayer Online Games for Children. In Proceedings of the 6th International Conference on Interaction Design and Children (Aalborg, Denmark) (IDC ’07). Association for Computing Machinery, New York, NY, USA, 29–36. https://doi.org/10.1145/1297277.1297284
[23]
Anuradha Gokhale. 1995. Collaborative learning enhances critical thinking. Journal of Technology education 7, 1 (1995).
[24]
David Gonçalves, André Rodrigues, Mike L Richardson, Alexandra A de Sousa, Michael J Proulx, and Tiago Guerreiro. 2021. Exploring asymmetric roles in mixed-ability gaming. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–14.
[25]
Google. [n.d.]. Google’s Blockly. https://github.com/google/blockly-experimental.
[26]
Monica Gori, Giulia Cappagli, Alessia Tonelli, Gabriel Baud-Bovy, and Sara Finocchietti. 2016. Devices for visually impaired people: High technological devices with low user acceptance and no adaptability for children. Neuroscience & Biobehavioral Reviews 69 (2016), 79–88. https://doi.org/10.1016/j.neubiorev.2016.06.043
[27]
Shuchi Grover. 2020. Designing an Assessment for Introductory Programming Concepts in Middle School Computer Science. In Proceedings of the 51st ACM Technical Symposium on Computer Science Education (Portland, OR, USA) (SIGCSE ’20). Association for Computing Machinery, New York, NY, USA, 678–684. https://doi.org/10.1145/3328778.3366896
[28]
Shuchi Grover and Roy Pea. 2013. Computational thinking in K–12: A review of the state of the field. Educational researcher 42, 1 (2013), 38–43.
[29]
Arzu Güneysu Özgür, Hala Khodr, Mehdi Akeddar, Michael Roust, and Pierre Dillenbourg. 2022. Designing Online Multiplayer Games with Haptically and Virtually Linked Tangible Robots to Enhance Social Interaction in Therapy. 31st IEEE International Conference on Robot & Human Interactive Communication (RO-MAN).
[30]
Alex Hadwen-Bennett, Sue Sentance, and Cecily Morrison. 2018. Making Programming Accessible to Learners with Visual Impairments: A Literature Review. International Journal of Computer Science Education in Schools 2 (05 2018). https://doi.org/10.21585/ijcses.v2i2.25
[31]
Kathy Hirsh-Pasek, Jennifer M. Zosh, Roberta Michnick Golinkoff, James H. Gray, Michael B. Robb, and Jordy Kaufman. 2015. Putting Education in “Educational” Apps: Lessons From the Science of Learning. Psychological Science in the Public Interest 16, 1 (2015), 3–34. https://doi.org/10.1177/1529100615569721 PMID: 25985468.
[32]
Felix Hu, Ariel Zekelman, Michael Horn, and Frances Judd. 2015. Strawbies: Explorations in Tangible Programming. In Proceedings of the 14th International Conference on Interaction Design and Children (Boston, Massachusetts) (IDC ’15). Association for Computing Machinery, New York, NY, USA, 410–413. https://doi.org/10.1145/2771839.2771866
[33]
Rie Kamikubo, Naoya Kato, Keita Higuchi, Ryo Yonetani, and Yoichi Sato. 2020. Support Strategies for Remote Guides in Assisting People with Visual Impairments for Effective Indoor Navigation. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA) (CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–12. https://doi.org/10.1145/3313831.3376823
[34]
Shaun K. Kane and Jeffrey P. Bigham. 2014. Tracking @stemxcomet: Teaching Programming to Blind Students via 3D Printing, Crisis Management, and Twitter. In Proceedings of the 45th ACM Technical Symposium on Computer Science Education (Atlanta, Georgia, USA) (SIGCSE ’14). Association for Computing Machinery, New York, NY, USA, 247–252. https://doi.org/10.1145/2538862.2538975
[35]
Varsha Koushik, Darren Guinness, and Shaun K. Kane. 2019. StoryBlocks: A Tangible Programming Game To Create Accessible Audio Stories. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (Glasgow, Scotland Uk) (CHI ’19). Association for Computing Machinery, New York, NY, USA, 1–12. https://doi.org/10.1145/3290605.3300722
[36]
Marjan Laal and Seyed Mohammad Ghodsi. 2012. Benefits of collaborative learning. Procedia-social and behavioral sciences 31 (2012), 486–490.
[37]
Bobby Law. 2016. Puzzle games: a metaphor for computational thinking. In Proceedings of the 10th European Conference on Games Based Learning, Thomas Connolly and Liz Boyle (Eds.). Academic Conferences and Publishing International Limited, 344–353.
[38]
Cheuk Yin Phipson Lee, Zhuohao Zhang, Jaylin Herskovitz, JooYoung Seo, and Anhong Guo. 2022. CollabAlly: Accessible Collaboration Awareness in Document Editing. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (New Orleans, LA, USA) (CHI ’22). Association for Computing Machinery, New York, NY, USA, Article 596, 17 pages. https://doi.org/10.1145/3491102.3517635
[39]
Jessica Lipnack and Jeffrey Stamps. 1999. Virtual teams: The new way to work. Strategy & Leadership (1999).
[40]
Muhanad S. Manshad, Enrico Pontelli, and Shakir J. Manshad. 2013. Exploring Tangible Collaborative Distance Learning Environments for the Blind and Visually Impaired. In CHI ’13 Extended Abstracts on Human Factors in Computing Systems (Paris, France) (CHI EA ’13). Association for Computing Machinery, New York, NY, USA, 55–60. https://doi.org/10.1145/2468356.2468367
[41]
Nancy L. McElwain and Brenda L. Volling. 2005. Preschool children’s interactions with friends and older siblings: relationship specificity and joint contributions to problem behavior. Journal of Family Psychology 19, 4 (2005), 486–496. https://doi.org/10.1037/0893-3200.19.4.486
[42]
Daniel Mendes, Sofia Reis, João Guerreiro, and Hugo Nicolau. 2020. Collaborative tabletops for blind people: The effect of auditory design on workspace awareness. Proceedings of the ACM on Human-Computer Interaction 4, ISS(2020), 1–19.
[43]
Oussama Metatla. 2016. Workspace awareness in collaborative audio-only interaction with diagrams. In Proceedings of the First African Conference on Human Computer Interaction. 165–169.
[44]
Oussama Metatla, Sandra Bardot, Clare Cullen, Marcos Serrano, and Christophe Jouffrais. 2020. Robots for Inclusive Play: Co-Designing an Educational Game With Visually Impaired and Sighted Children. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA) (CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–13. https://doi.org/10.1145/3313831.3376270
[45]
Oussama Metatla, Alison Oldfield, Taimur Ahmed, Antonis Vafeas, and Sunny Miglani. 2019. Voice User Interfaces in Schools: Co-Designing for Inclusion with Visually-Impaired and Sighted Pupils. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (Glasgow, Scotland Uk) (CHI ’19). Association for Computing Machinery, New York, NY, USA, 1–15. https://doi.org/10.1145/3290605.3300608
[46]
Lauren R Milne and Richard E Ladner. 2018. Blocks4All: Overcoming Accessibility Barriers to Blocks Programming for Children with Visual Impairments. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (2018), 69:1–69:10. https://doi.org/10.1145/3173574.3173643
[47]
Jonas Moll and Eva-Lotta Sallnäs Pysander. 2013. A Haptic Tool for Group Work on Geometrical Concepts Engaging Blind and Sighted Pupils. ACM Trans. Access. Comput. 4, 4, Article 14 (jul 2013), 37 pages. https://doi.org/10.1145/2493171.2493172
[48]
Francesco Mondada, Michael Bonani, Fanny Riedo, Manon Briod, Léa Pereyre, Philippe Rétornaz, and Stéphane Magnenat. 2017. Bringing robotics to formal education: The thymio open-source hardware robot. IEEE Robotics & Automation Magazine 24, 1 (2017), 77–85.
[49]
Cecily Morrison, Nicolas Villar, Anja Thieme, Zahra Ashktorab, Eloise Taysom, Oscar Salandin, Daniel Cletheroe, Greg Saul, Alan F Blackwell, Darren Edge, Martin Grayson, and Haiyan Zhang. 2018. Torino: A Tangible Programming Language Inclusive of Children with Visual Disabilities. Human–Computer Interaction 35, 3, 191–239. https://doi.org/10.1080/07370024.2018.1512413
[50]
Aboubakar Mountapmbeme and Stephanie Ludi. 2021. How Teachers of the Visually Impaired Compensate with the Absence of Accessible Block-Based Languages. The 23rd International ACM SIGACCESS Conference on Computers and Accessibility, 1–10. https://doi.org/10.1145/3441852.3471221
[51]
Aboubakar Mountapmbeme, Obianuju Okafor, and Stephanie Ludi. 2022. Addressing Accessibility Barriers in Programming for People with Visual Impairments: A Literature Review. ACM Transactions on Accessible Computing 15, 1 (March 2022), 1–26. https://doi.org/10.1145/3507469
[52]
Isabel Neto, Wafa Johal, Marta Couto, Hugo Nicolau, Ana Paiva, and Arzu Guneysu. 2020. Using Tabletop Robots to Promote Inclusive Classroom Experiences. In Proceedings of the Interaction Design and Children Conference (London, United Kingdom) (IDC ’20). Association for Computing Machinery, New York, NY, USA, 281–292. https://doi.org/10.1145/3392063.3394439
[53]
Isabel Neto, Hugo Nicolau, and Ana Paiva. 2021. Community Based Robot Design for Classrooms with Mixed Visual Abilities Children. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–12.
[54]
Osmo. [n.d.]. Coding Starter Kit. https://www.playosmo.com/en/shopping/kits/coding/.
[55]
Ozobot. [n.d.]. Ozobot Evo. https://shop.ozobot.com.
[56]
A. D. Pellegrini and Peter K. Smith. 1998. Physical Activity Play: The Nature and Function of a Neglected Aspect of Play. Child Development 69, 3 (June 1998), 577–598. https://doi.org/10.1111/j.1467-8624.1998.tb06226.x
[57]
Ana Cristina Pires, Filipa Rocha, Antonio José de Barros Neto, Hugo Simão, Hugo Nicolau, and Tiago Guerreiro. 2020. Exploring Accessible Programming with Educators and Visually Impaired Children. In Proceedings of the Interaction Design and Children Conference (London, United Kingdom) (IDC ’20). Association for Computing Machinery, New York, NY, USA, 148–160. https://doi.org/10.1145/3392063.3394437
[58]
Venkatesh Potluri, Maulishree Pandey, Andrew Begel, Michael Barnett, and Scott Reitherman. 2022. CodeWalk: Facilitating Shared Awareness in Mixed-Ability Collaborative Software Development. In Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility (Athens, Greece) (ASSETS ’22). Association for Computing Machinery, New York, NY, USA, Article 20, 16 pages. https://doi.org/10.1145/3517428.3544812
[59]
Mitchel Resnick, John Maloney, Andrés Monroy-Hernández, Natalie Rusk, Evelyn Eastmond, Karen Brennan, Amon Millner, Eric Rosenbaum, Jay Silver, Brian Silverman, 2009. Scratch: programming for all. Commun. ACM 52, 11 (2009), 60–67.
[60]
Filipa Rocha, Ana Cristina Pires, Isabel Neto, Hugo Nicolau, and Tiago Guerreiro. 2021. Accembly at Home: Accessible Spatial Programming for Children with Visual Impairments and Their Families. In Interaction Design and Children (Athens, Greece) (IDC ’21). Association for Computing Machinery, New York, NY, USA, 100–111. https://doi.org/10.1145/3459990.3460699
[61]
Zhiyi Rong, Ngo Fung Chan, Taizhou Chen, and Kening Zhu. 2020. CodeRhythm: A Tangible Programming Toolkit for Visually Impaired Students. In The eighth International Workshop of Chinese CHI. 57–60.
[62]
Eva-Lotta Sallnäs, Kajsa Bjerstedt-Blom, Fredrik Winberg, and Kerstin Severinson Eklundh. 2006. Navigation and control in haptic applications shared by blind and sighted users. In International Workshop on Haptic and Audio Interaction Design. Springer, 68–80.
[63]
Magy Seif El-Nasr, Bardia Aghabeigi, David Milam, Mona Erfani, Beth Lameman, Hamid Maygoli, and Sang Mah. 2010. Understanding and Evaluating Cooperative Games. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Atlanta, Georgia, USA) (CHI ’10). Association for Computing Machinery, New York, NY, USA, 253–262. https://doi.org/10.1145/1753326.1753363
[64]
Amanda Strawhacker and Marina Umaschi Bers. 2019. What they learn when they learn coding: investigating cognitive domains and computer programming knowledge in young children. Educational Technology Research and Development 67, 3 (2019), 541–575.
[65]
Judith E Terpstra and Ronald Tamura. 2008. Effective social interaction strategies for inclusive settings. Early Childhood Education Journal 35, 5 (2008), 405–411.
[66]
Anja Thieme, Cecily Morrison, Nicolas Villar, Martin Grayson, and Siân Lindley. 2017. Enabling Collaboration in Learning Computer Programing Inclusive of Children with Vision Impairments. In Proceedings of the 2017 Conference on Designing Interactive Systems (Edinburgh, United Kingdom) (DIS ’17). Association for Computing Machinery, New York, NY, USA, 739–752. https://doi.org/10.1145/3064663.3064689
[67]
Thingiverse. [n.d.]. 3D crate. https://www.thingiverse.com/thing:2975591.
[68]
Thingiverse. [n.d.]. 3D Ozobot pusher. https://www.thingiverse.com/thing:2218396.
[69]
Nicolas Villar, Cecily Morrison, Daniel Cletheroe, Tim Regan, Anja Thieme, and Greg Saul. 2019. Physical Programming for Blind and Low Vision Children at Scale. In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems (Glasgow, Scotland Uk) (CHI EA ’19). Association for Computing Machinery, New York, NY, USA, 1–4. https://doi.org/10.1145/3290607.3313241
[70]
Mirza Waqar, Muhammad Aslam, and Muhammad Farhan. 2019. An Intelligent and Interactive Interface to Support Symmetrical Collaborative Educational Writing among Visually Impaired and Sighted Users. Symmetry 11, 2 (Feb 2019), 238. https://doi.org/10.3390/sym11020238
[71]
David Weintrop and Uri Wilensky. 2015. To block or not to block, that is the question: students’ perceptions of blocks-based programming. In Proceedings of the 14th international conference on interaction design and children. 199–208.
[72]
Cameron Wilson, Leigh Ann Sudol, Chris Stephenson, and Mark Stehlik. 2010. Running on empty: The failure to teach k–12 computer science in the digital age. ACM.
[73]
Fredrik Winberg and John Bowers. 2004. Assembling the senses: towards the design of cooperative interfaces for visually impaired users. In Proceedings of the 2004 ACM conference on Computer supported cooperative work. 332–341.
[74]
Jeannette M. Wing. 2006. Computational Thinking. Commun. ACM 49, 3 (March 2006), 33–35. https://doi.org/10.1145/1118178.1118215
[75]
Jeannette M Wing. 2014. Computational thinking benefits society. 40th anniversary blog of social issues in computing 2014 (2014), 26.
[76]
Junnan Yu, Chenke Bai, and Ricarose Roque. 2020. Considering Parents in Coding Kit Design: Understanding Parents’ Perspectives and Roles. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA) (CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–14. https://doi.org/10.1145/3313831.3376130
[77]
Junnan Yu, Clement Zheng, Mariana Aki Tamashiro, Christopher Gonzalez-millan, and Ricarose Roque. 2020. CodeAttach: engaging children in computational thinking through physical play activities. In Proceedings of the fourteenth international conference on tangible, embedded, and embodied interaction. 453–459.
[78]
Zhuohao Zhang, Zhilin Zhang, Haolin Yuan, Natã M Barbosa, Sauvik Das, and Yang Wang. 2021. WebAlly: Making Visual Task-based CAPTCHAs Transferable for People with Visual Impairments. In Seventeenth Symposium on Usable Privacy and Security (SOUPS 2021). 281–298.
