DOI: 10.1145/3699538.3699566 · Koli Calling Conference Proceedings
Research article · Open access

The Effect (or Lack Thereof) of Spatial Skills Training in a Mid-Major Computing Course

Published: 13 November 2024

Abstract

Background and Context. Spatial skills have been shown to correlate with success in introductory computing courses. Fortunately, spatial skills can be trained to improve outcomes for all students, both in terms of greater spatial skills and in terms of achievement in the computing classroom. However, less is known about courses beyond the introductory computing sequence.
Objectives. Our work sought to understand the impact of spatial skills training on both spatial ability and computer science performance in a mid-major computer science course. We further explored what factors influence the outcomes at the end of a training intervention, including the effort students applied to the training.
Methods. This paper describes a quasi-experimental study of spatial skills training in a required software engineering course in the middle of an undergraduate computing degree program. One section of the course (n=31) received spatial skills training while another section of the course (n=46) received additional course-related activities to control for time-on-task. Each section took pre- and post-tests in spatial skills and computing knowledge, and additional demographic information was gathered. The spatial skills training artifacts were further graded and analyzed for effort and correctness.
Findings. Our statistical analyses show no significant differences in the pre- and post-scores for the treatment group. The primary predictor of post-test spatial skills score was pre-test spatial skills score, regardless of the condition. Our analyses do show that the more effort students put into the spatial skills training, the more likely they are to improve their spatial skills, though not necessarily for low spatial skills students. We also find evidence of spatial skills training mitigating gaps in spatial abilities between men and women students.
Implications. Our work indicates the importance of timing and motivation when implementing spatial skills training. These training activities will not be as effective at all points in a computing major, nor is it significantly beneficial to apply the training to all students in a course. Our findings highlight the need for further research on where and when spatial skills training can be the most effective for computer science undergraduate students and how it can be feasibly implemented within the constraints posed by a program of study.

1 Introduction

Spatial skills include the ability to imagine, remember, and mentally transform symbols or objects [18]. There is an abundance of evidence that spatial skills are strongly correlated with Science, Technology, Engineering, and Math (STEM) achievement [4, 32]. Spatial skills tend to differ along spectra of gender and socioeconomic status, with women and low socioeconomic status students lagging behind their counterparts in terms of spatial skills [29]. However, spatial skills can be trained to improve outcomes for all students, both in terms of greater spatial skills and also in terms of achievement in the classroom [33].
Spatial skills work is traditionally based in the math and sciences, which have more physical representations and thus more connections to spatial representations. The computer science (CS) education research community is actively replicating findings from STEM education and finding similar results [2, 3, 23], despite being a field that is based in the digital and virtual worlds.
Prior work has shown that spatial skills training in introductory CS courses is beneficial [22]. However, there is an abundance of computing topics and learning activities that have previously been discussed as important to include in these courses [1], both for success in the course and for success in the degree program as a whole. It is worth considering whether spatial skills training might also be suited elsewhere in a CS degree program, so as not to crowd a course that is already critical for many other reasons [16]. Yet, it is currently unclear if training spatial skills would still be useful outside of an introductory course. It is important to narrow down where spatial skills training is most effective in an undergraduate CS program. This understanding is not only useful on a practical level, for professors considering implementing training in their courses to help their students, but also on a research level. We do not fully understand the connections between CS and spatial skills [18], and understanding when it is useful to intervene can help provide clues as to the relationship between these two variables.
We designed a quasi-experimental study to take the work in spatial skills in CS beyond the introductory course sequence. We pose the following research questions to frame our study:
RQ1: How does spatial skills training impact (a) spatial ability and (b) computer science knowledge in a mid-major computer science course?
RQ2: What factors influence mid-major computer science students’ ability to improve their spatial skills?
Findings from this study can push the boundary of what is known about spatial skills in CS classrooms. Our study is situated in a non-programming, software engineering course that is required for CS majors at our institution. We implemented spatial skills training in one section of this course, while the other section received additional course-related activities to control for time-on-task. In this paper, we use the data we collected throughout the semester to investigate and better understand the spatial skills phenomenon in the middle, as opposed to the beginning, of a CS degree program.

2 Related Works

2.1 Spatial Skills

Spatial skills, which we define using Margulieux’s definition [18], are often conflated with other related terms. While we acknowledge that there are distinctions between spatial visualization, spatial ability, and spatial skills, we adopt the use of the term spatial skills to signify its malleability. In other words, spatial skills can be changed, and we choose to emphasize that with our word choice.
While we refer to spatial skills broadly in this work, it is important to understand that there are different types of spatial skills and different theories of how these types of spatial skills can be categorized. Linn and Petersen refer to three categories: spatial perception, mental rotation, and spatial visualization [14]. Uttal et al. define four types of spatial skills along the dimensions of intrinsic-extrinsic and static-dynamic [33], onto which the Linn and Petersen categories can be mapped. Each of the four categories could be interpreted as follows:
Intrinsic-static: about an object that does not move (e.g., recognizing a rake as a rake)
Intrinsic-dynamic: about an object that does move (e.g., recognizing what a rake would look like rotated)
Extrinsic-static: relation between objects in a group that do not move (e.g., recognizing where you are on a map)
Extrinsic-dynamic: relation between objects in a group that do move (e.g., recognizing how objects change in relation to each other as they are moved, such as how a building would look different from a different viewpoint)
These categorizations are, in part, where the definitions and word choice in spatial skills work diverge. While some work we mention here will refer to different terms or aspects of spatial skills (e.g., mental rotational ability), we see them all as connected to this core concept of a mental capacity that involves symbols or objects.

2.2 Spatial Skills Training

Spatial skills have consistently been shown to predict achievement in STEM courses [34]. Surprising as the connection between spatial skills and achievement in STEM settings may be, spatial skills can be trained to the point of improving both spatial skills and STEM outcomes [33]. Uttal et al. conducted a meta-analysis of studies that implemented various spatial training techniques [33]. Overall, they found training to have a moderate effect on spatial skills (g = 0.47), and they also broke results down by training type: courses, video games, and spatial tasks. Courses, which had an effect size of g = 0.41, include the one developed by Sorby and Baartmans to improve spatial skills among engineering students [30]. This course consists of ten weeks of activities across a range of 3-D spatial skills and is accompanied by a workbook in which students draw isometric, orthographic, and rotated shapes. This particular spatial skills training course has been shown to have a positive impact on spatial skills and STEM outcomes [29].
As there are different categories of spatial skills, so too are there different types of spatial skills training to cater to the different skills. The workbooks described above, for example, are focused on intrinsic-dynamic spatial skills training, as they define objects that participants are then asked to move. Some spatial skills categories may respond less to training, including the intrinsic-static category [33]. These skills are more about applying principles or rules to an object than moving the object, which may be harder to train.
There is evidence from the aforementioned meta-analysis of spatial skills training studies that students with lower spatial skills benefit more from training [33]. This makes logical sense, as these students have more room to grow in their spatial skills, so any efforts to improve their skill sets could yield more improvement than for their higher-spatial-skills peers. As a result, many studies in STEM education focus specifically on students with low spatial skills as determined by a preliminary spatial skills test at the start of the study [22, 29]. Studies that choose to give spatial skills training to all students still find a larger improvement for students who initially scored lower on a spatial skills test [17].

2.3 Spatial Skills in Computer Science

Over the past twenty years, there have been various efforts to explore the relationship between spatial skills and CS. One of the first of these works found that spatial visualization skills, as measured via a paper folding test, were correlated with success in an introductory computing course [8]. Other work similarly found correlations between mental rotation ability, as measured by a mental rotation test, and success in introductory computing for Master’s students [11]. Cooper et al. continued to confirm a connection between spatial skills and achievement in CS, this time in high school students [5]. In another study, spatial skills, measured at the start of the semester, were able to predict student computing knowledge by the end of the semester [2].
Despite this abundance of evidence that there exists a relationship between spatial skills and CS ability, it remains unclear why that connection exists. The prevailing theory, Margulieux’s Spatial Encoding Strategy (SPeS), hypothesizes that having better spatial skills leads to better strategies for storing information and orientation, which would aid computing students who are actively engaged in code comprehension and debugging tasks [18]. The SPeS theory has been supported by recent research which found, through a think-aloud study, that students with higher spatial skills were more able to demonstrate advanced chunking (information storing) and encoding (orientation) techniques; students with lower spatial skills were less likely to demonstrate these skills [26].
The connection between CS and spatial skills may lie in a particular aspect of computing. There have been fMRI studies that indicate that the same region of the brain is activated for spatial operations and for data structure tasks [10]. Another neuroimaging study found that students at the beginning of a programming course have certain neural activation patterns while completing coding and spatial reasoning tasks [7]. These patterns can predict performance on a programming assessment at the end of the term [7]. Consequently, not all aspects of a computing program may equally correspond to spatial skills. There was no correlation found between growth in spatial skills and courses with limited computational models, including human-computer interaction and ethics courses [25]. Meanwhile, when spatial skills was measured in a computer graphics course, students’ mental rotation and spatial visualization abilities correlated with their success in the course [15].
Beyond just a relationship between skills in spatial and computing domains, it has also consistently been shown that training spatial skills in an introductory computing course has positive effects on student outcomes [2, 22, 27]. However, there is some evidence that where in a CS curriculum this training is offered may matter: Parkinson and Cutts found a larger impact of spatial skills training in a CS0 course (typically geared towards students not majoring in CS) when compared to an introductory CS course, a finding that surprised the authors [24]. Further, spatial skills training in the second semester of a year-long introductory computing course was found to be less effective than in the first semester [27]. This is important to consider in conjunction with evidence that spatial skills training can have a larger impact on students with low spatial skills [33], providing evidence for why earlier spatial skills interventions might be more beneficial for students. It should also be noted that students tend to find spatial skills training challenging, occasionally guessing answers to complete the task [17]. However, to our knowledge, no further analyses have been conducted regarding students' engagement with the spatial skills training materials in this manner.
Spatial skills have also been shown to relate to other factors that may play a role in our CS classrooms and research. In one study, students with higher mental rotation ability were able to complete a code comprehension task quicker and more efficiently than students with lower mental rotation ability [12]. Other studies have found that spatial skills training may have a greater impact on students of different races and socioeconomic statuses [5] and on women students [17]. The connection between spatial skills and socioeconomic status has since been explored, and generally supported, in other studies [17, 21]. Parkinson and Cutts found that spatial skills increased with the level of academic achievement in CS; that is, master’s students in computing had higher spatial skills than honors undergraduate students, who had higher spatial skills than first-year students [23]. Further, spatial skills tend to increase over the course of study in a computing program [25].
Our present study seeks to build on this prior literature around spatial skills training in CS classrooms while also pushing the boundaries of what is known about this phenomenon in our field. While most of the above studies focus on introductory courses, we seek evidence as to whether spatial skills training can be effective later in a CS major. We also continue the efforts to understand what factors correspond to spatial skills so that we may know what could help, or hinder, students throughout this process. Given the related work on this topic, we present the following hypotheses in regard to our two research questions:
H1: Spatial skills training will have a measurable, positive impact on mid-major students’ spatial skills and computer science abilities (based on evidence from [2, 22, 24]), especially for low spatial skills students [29]. This impact will not be as large as seen in studies on introductory computing (building on evidence from [17, 25, 27]).
H2: Students’ enrollment in a data structures course [10], their gender [17], and the level of spatial skills they start the course with [17, 26] will have an impact on their outcomes from the spatial skills training. Additionally, how much effort students put into the training will correspond to the growth in their spatial skills (extending the analysis presented in [17]).

3 Methods

3.1 Context

Our study was based at a large public university in the United States. In particular, we focused on a course on software engineering, which is required for CS majors at the institution where this study was conducted. The course is the last core requirement in the major sequence, meaning all students majoring in CS take the course, generally before they enroll in elective courses but after they have completed the introductory course sequence. The course does not contain any programming, focusing instead on the other phases of the software life cycle (e.g., requirements, design, testing). Conveniently, this course also allows us to remove the direct influence of programming on spatial skills. However, students are enrolled in multiple courses in a term and are often simultaneously enrolled in a data structures course alongside this software engineering course. To control for the effect of these courses on the students in the study, we asked students to report which other courses they were enrolled in during the same semester.

3.2 Data Collection

Table 1:

                       Control Group                      Treatment Group
  Week 2               Pre-test spatial skills knowledge
  Week 3               Pre-test computer science knowledge
  Weeks 4-15*
    Outside of class   Readings                           Quizzes and videos
    Within class       Course-related active              Spatial skills training
                       learning activities                workbooks [30]
  Week 16              Post-test spatial skills knowledge
                       Post-test computer science knowledge
                       Demographics survey

Table 1: Overview of procedures for the control and treatment groups. *Weeks 11 and 12 had no activities due to Spring Break and an unplanned institutional event.
Our study consisted of a control group and a treatment group. The delineation between the two groups can be found in Table 1. Each group was a different section of the same course on software engineering, described in Section 3.1, and received the same lectures and assignments and completed the same final project. Both groups took the same pre- and post-tests of CS and spatial skills knowledge and filled out a demographics survey at the end of the course. In the intervening ten weeks, the treatment group engaged in spatial skills training activities, including a weekly video and quiz to be completed outside of class and workbook pages to be completed during class. The control group received additional material related to the course, including readings on software engineering to be completed as homework and activities based on those readings to complete during class. This material was explicitly designed to not overlap with the spatial skills training modules; rather, these materials provided more practice with course content above what is normally delivered to the students. The two conditions were designed to control for time-on-task, both during and outside of class time. All of the out-of-class activities, such as the quizzes, videos, and readings, as well as the pre- and post-tests and demographic surveys, were delivered through the learning management system that the course uses to post the schedule and assignments.
The CS assessment used for the pre- and post-CS test in this study was SCS1, a pseudocode-based assessment designed for use at the end of an introductory course in computing [20]. Given that students in the course in this study had all completed the first CS course, but no other course was guaranteed to be completed, the use of SCS1 was deemed appropriate. Students were given 60 minutes to complete the 27 questions on the assessment, as advised by the creators of SCS1 [20]. The pre- and post-spatial skills test was the Purdue Spatial Visualization Test on Rotations (PSVT:R) [9]. This test has become the standard for spatial skills research and has been used by many studies in CS education to measure spatial skills gains [22]. Students were given 20 minutes to complete the 30 questions on the assessment. The software engineering course in this study has an emphasis on diagrams (UML class diagrams, software architecture diagrams, use case diagrams, etc.). However, these diagramming skills would fall under the extrinsic-static category, as discussed in Section 2.1, as opposed to the intrinsic-dynamic skills tested by the PSVT:R. Thus, there was no expectation that the control group activities would lead to improved performance on the spatial skills measure.
The spatial skills training workbooks, videos, and quizzes were developed by Sheryl Sorby [31]. Similar to the PSVT:R, these training materials are widely used in computing education research [22]. These materials were chosen, in part, to allow for comparison between the present work and the past literature on spatial skills and CS. As discussed in Section 2.2, this workbook asks students to draw shapes from different angles, making this an intrinsic-dynamic focused training.
The instructor of the course section that served as the control group was the lead researcher of this study. Given their familiarity with the spatial skills training materials, they developed the class activities used to control for time-on-task. This ensured minimal overlap between the additional materials used in the two sections, and thus the two groups of this study; that is, the control group was assured not to receive incidental spatial skills training akin to what the treatment section was receiving. The instructor of the treatment group was not involved in the research and was not an expert in computing education research. The treatment group instructor was given a script to introduce the spatial skills activity, emphasizing the importance of these skills in the students' computing courses and the students' ability to improve them through the activities. The two instructors had weekly meetings to monitor the status of the project and address any questions or issues with the activities. No teaching assistants were involved with this work, either in the administration of activities or the grading of assessments.
This study was approved by the local ethics approval board. Students were unaware, prior to enrolling in the course, that the sections would differ, and they were not aware of the research study being conducted. While all students enrolled in each section participated in the respective activities, only students who broadly consented to the use of their class data in a research study were included in our analysis. Ultimately, 51 students in the control section and 31 students in the treatment section consented to be in this study. The completion of all of the activities and assessments was counted as participation points for the course; these elements were not graded for correctness. Not every student completed every activity in their section, so for statistical analyses that require paired samples, the sample size may be smaller than the number of consenting students in that section. The rate of students who receive a D or an F or withdraw from the course (the DFW rate) was similar for the two sections; given that the course is required for the CS major, no students dropped from either section during the term. The control group section had one student who received an F at the end of the course, while the treatment group section had two. Excluding these students, the average final course grade (out of 100) was 92 for the control group and 88 for the treatment group. A summary of our study population can be found in Table 2.
Table 2:

                    Control   Treatment
  Gender
    Women              8          5
    Men               28         20
    Other              3          0
  Race
    White              9          7
    Asian             13         10
    Black              1          0
    Latina/o          14          4
    Mixed              3          4
  Total Consented     44         31

Table 2: Summary demographics

3.3 Data Analysis

We provide a summary of our dataset in Table 3, both for the whole sample and for the students who had low scores on their pre-test of spatial skills (a threshold of 60% correct is typically applied in the literature, as seen in [3, 30]). Timing data is provided only on average and only for the entire study population. Due to constraints with the learning management system used to deliver the intervention, we do not have finer-grained timing data than per-section averages.
We analyzed our data using different statistical techniques depending on the research question being answered. To begin all analyses, we tested our data for normality using a Shapiro-Wilk test. We found that not all of our data were normally distributed, and thus chose to use non-parametric analyses when appropriate.
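The paper's analyses were run in R; purely as an illustrative sketch, the same normality check and the resulting fall-back to non-parametric tests can be expressed in Python with SciPy. The score vectors below are hypothetical stand-ins, not the study's data:

```python
from scipy import stats

# Hypothetical stand-in for one section's pre-test scores (proportion correct).
pre_spatial = [0.10, 0.17, 0.23, 0.30, 0.33, 0.40, 0.43, 0.47,
               0.50, 0.53, 0.57, 0.60, 0.63, 0.67, 0.73, 0.97]
# A strongly right-skewed sample, to exercise the non-normal branch.
skewed = [v ** 3 for v in range(1, 31)]

def is_plausibly_normal(sample, alpha=0.05):
    """Shapiro-Wilk test: treat the sample as non-normal when p < alpha."""
    _, p = stats.shapiro(sample)
    return p >= alpha

# If any variable fails the check, fall back to non-parametric tests
# (e.g., Wilcoxon signed-rank) rather than paired t-tests.
use_nonparametric = not all(is_plausibly_normal(s) for s in (pre_spatial, skewed))
```

The per-variable check mirrors the paper's decision rule: a single non-normal variable is enough to prefer the non-parametric analysis for that comparison.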
To answer our first research question, regarding the impact of spatial skills training on spatial skills and CS ability, we looked at differences within and between each study group. To analyze differences within each group, we performed Wilcoxon signed-rank tests to compare the paired pre- and post-test scores on the spatial and computing assessments. We did this for all students in each group, and then again for the students who had low spatial skills as determined by their score on the pre-test. Since we could not divide students into sections and only apply the treatment to low spatial skills students, we chose to conduct post-hoc analyses to understand the differential impact of the training on these students. Because we conducted two comparisons within each group, we applied a Bonferroni correction to our significance level (α = 0.05/2 = 0.025). To analyze the differences between the control and treatment groups, we used an Analysis of Covariance (ANCOVA) to understand the impact of the study condition on the post-test scores, controlling for the pre-test scores.
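As a hedged illustration of this within-group procedure (the actual analysis was done in R), a paired Wilcoxon signed-rank test with the Bonferroni-corrected threshold and a matched-pairs rank-biserial effect size, of the kind reported in Table 4, might look as follows in Python; the difference scores are hypothetical:

```python
import numpy as np
from scipy import stats

# Hypothetical post-minus-pre differences for one group's spatial-skills scores.
diffs = np.array([0.10, 0.03, 0.13, 0.03, 0.17, 0.30, 0.07, 0.20,
                  -0.03, 0.10, 0.17, 0.27, -0.07, 0.23, 0.30, 0.10])

# Paired Wilcoxon signed-rank test on the difference scores.
res = stats.wilcoxon(diffs)

# Matched-pairs rank-biserial correlation, one common formulation:
# (sum of positive ranks - sum of negative ranks) / total rank sum.
ranks = stats.rankdata(np.abs(diffs))
w_pos = ranks[diffs > 0].sum()
w_neg = ranks[diffs < 0].sum()
r = (w_pos - w_neg) / (w_pos + w_neg)

# Bonferroni correction for the two within-group comparisons
# (spatial skills and CS), as in the paper.
alpha = 0.05 / 2
significant = res.pvalue < alpha
```

The rank-biserial formulation shown is one standard definition for the signed-rank test; the paper does not spell out which variant it used.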
To answer our second research question, regarding the other factors that influence the impact of spatial skills training, we ran an ANCOVA to explore the effect of gender and concurrent enrollment in a data structures course. We also conducted a multiple regression to understand the impact of attempts on the spatial skills workbooks (discussed in Section 3.3.1) on the post-test spatial skills scores. All of our analyses were performed using R version 4.3.0.
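R's aov() reports sequential (Type I) sums of squares by default, which matches the shape of the ANCOVA tables in Section 4. A minimal Python sketch of that computation for post ~ condition + pre, on synthetic data rather than the study's, could look like this:

```python
import numpy as np
from scipy import stats

def rss(X, y):
    """Residual sum of squares from an ordinary least squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(resid @ resid)

def ancova_type1(condition, pre, post):
    """Sequential (Type I) ANCOVA for post ~ condition + pre."""
    n = len(post)
    ones = np.ones(n)
    rss0 = rss(np.column_stack([ones]), post)                  # intercept only
    rss1 = rss(np.column_stack([ones, condition]), post)       # + condition
    rss2 = rss(np.column_stack([ones, condition, pre]), post)  # + covariate
    df_resid = n - 3
    mse = rss2 / df_resid
    f_cond = (rss0 - rss1) / mse
    f_pre = (rss1 - rss2) / mse
    return {"SS": (rss0 - rss1, rss1 - rss2, rss2),
            "F": (f_cond, f_pre),
            "p": (stats.f.sf(f_cond, 1, df_resid),
                  stats.f.sf(f_pre, 1, df_resid))}

# Synthetic example: the post-test is driven by the pre-test, with no
# condition effect (a pattern similar to the one the paper reports).
i = np.arange(40)
condition = (i % 2).astype(float)        # alternating "section" assignment
pre = np.linspace(0.1, 0.9, 40)
post = 0.8 * pre + 0.05 * np.sin(i)      # deterministic noise stand-in
out = ancova_type1(condition, pre, post)
```

Fitting nested models and differencing their residual sums of squares is equivalent to the sequential ANOVA decomposition; each one-degree-of-freedom term is then tested against the full model's mean squared error.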

3.3.1 Spatial Skills Workbook Analysis.

To gain more depth of data regarding students' spatial skills training, the research team analyzed the artifacts students created while completing the spatial skills workbooks. Five researchers were each assigned two of the ten workbook modules. For their assigned modules, they created a rubric to score the correctness of the sketches, allowing for partial credit where appropriate. Each researcher then coded the same 20% of the workbooks across all ten modules using these rubrics. Inter-rater reliability was calculated using an intra-class correlation, which was 0.914 across the five raters, indicating excellent reliability [13]. After this, the researchers divided the remaining workbooks and scored them according to the rubrics. Each student in the treatment condition received a score for each module (the points they received out of the total possible points on the workbook), which was then averaged across the ten modules for an average workbook score.
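The paper does not state which form of intra-class correlation was used; as one plausible sketch, ICC(2,1) in the Shrout-Fleiss taxonomy (two-way random effects, absolute agreement, single rater) can be computed from a subjects-by-raters score matrix. The rating matrices below are synthetic:

```python
import numpy as np

def icc_2_1(Y):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    Y is an (n subjects) x (k raters) matrix of scores."""
    n, k = Y.shape
    grand = Y.mean()
    ss_total = ((Y - grand) ** 2).sum()
    ss_rows = k * ((Y.mean(axis=1) - grand) ** 2).sum()  # between subjects
    ss_cols = n * ((Y.mean(axis=0) - grand) ** 2).sum()  # between raters
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Synthetic rating matrices: ten "workbooks" scored by five "raters".
base = np.linspace(0, 10, 10)
perfect = np.tile(base, (5, 1)).T                   # identical raters
noisy = perfect + 0.5 * np.sin(np.arange(50).reshape(10, 5))
```

Identical raters yield an ICC of 1; rater disagreement pushes the coefficient down, which is the sense in which the paper's 0.914 signals excellent reliability.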
Figure 1: Different types of student responses to a question in a module on the rotation of objects about a single axis. For this question, students were asked to "sketch the object in the space provided after rotating it about the axis by the indicated amount."
On the initial coding pass, researchers noticed that some students seemingly tried to answer a question but were ultimately incorrect (e.g., they rotated a shape in the wrong direction). Other students did not seem to try on a question, drawing shapes that did not appear in the initial sketches. These differences can be seen in Figure 1, which shows examples of a correct, an incorrect, and a "not trying" sketch on a problem that asked students to rotate a shape about a single axis by 90°. Additionally, some students ran out of time and left questions blank. A blank question would be scored as 0 and would thus look, score-wise, the same as a question a student tried but got wrong or one where the student did not draw a reasonable sketch. The research team therefore defined "trying" on a sketch as producing a sketch whose lines were clearly defined and whose shapes were similar to the shapes in the problem.
Following this definition, the researchers revisited the 20% coding set to look at whether a student did not try on the questions where they previously received scores of 0. After this stage, the intra-class correlation score was 0.91, again signifying excellent reliability among the coders. Then, the researchers coded the remaining workbooks.
A student's effort on the workbook was then quantified as the ratio of the number of questions, across all ten workbook modules, on which the student tried to the total number of questions on which they drew anything at all; questions left blank were excluded from this ratio. A ratio of 1 means the student tried on every question for which they sketched something. A value closer to 0 indicates that, more often than not, the student did not draw a coherent sketch when they drew anything at all. This is in contrast to the overall workbook score, which did count blank questions as scores of 0. Between these two indicators, we can ascertain how much of the training a student genuinely attempted, as well as how well they performed on the training as a whole.
This ultimately extends the analysis by Ly et al., which asked students who received the same spatial skills training administered here whether they cheated or guessed on the questions [17]. Rather than only asking students how they responded to the difficult training, we chose to analyze the artifacts produced during the training to measure engagement with the material.

4 Results

Table 3:

All students
               Pre CS                      Post CS                     Delta CS
               Control      Treatment      Control      Treatment     Control       Treatment
  Mean         0.42         0.37           0.36         0.38          -0.07         0.00
  Std. Dev.    0.21         0.18           0.20         0.20          0.19          0.16
  Range        [0.04…0.85]  [0.04…0.81]    [0…0.78]     [0.15…0.74]   [-0.74…0.22]  [-0.37…0.48]
  Avg. Time    46:23        42:07          32:01        28:31

               Pre SpS                     Post SpS                    Delta SpS
               Control      Treatment      Control      Treatment     Control       Treatment
  Mean         0.67         0.59           0.53         0.58          -0.14         -0.03
  Std. Dev.    0.22         0.19           0.28         0.24          0.24          0.23
  Range        [0.10…0.97]  [0.17…0.93]    [0.13…1]     [0.1…1]       [-0.67…0.37]  [-0.63…0.47]
  Avg. Time    16:29        15:33          10:22        10:33

Low spatial skills students
               Pre CS                      Post CS                     Delta CS
               Control      Treatment      Control      Treatment     Control       Treatment
  Mean         0.28         0.31           0.26         0.32          0.00          -0.02
  Std. Dev.    0.17         0.17           0.14         0.16          0.13          0.15
  Range        [0.04…0.59]  [0.04…0.67]    [0.07…0.52]  [0.15…0.59]   [-0.22…0.22]  [-0.37…0.15]

               Pre SpS                     Post SpS                    Delta SpS
               Control      Treatment      Control      Treatment     Control       Treatment
  Mean         0.40         0.44           0.35         0.46          -0.04         0.01
  Std. Dev.    0.16         0.14           0.24         0.28          0.21          0.26
  Range        [0.10…0.60]  [0.17…0.57]    [0.13…0.9]   [0.10…1]      [-0.37…0.37]  [-0.40…0.47]

Table 3: Summary statistics of all participants across the two conditions, followed by summary statistics for the participants who scored below 60% on the initial pre-test of spatial skills. We also present the average time taken to complete each assessment; the spatial skills test had a maximum time of 20 minutes and the CS test a maximum time of 60 minutes. SpS = Spatial Skills, CS = Computer Science.

4.1 RQ1: The impact of spatial skills training

Table 4:

                      All students                     Low spatial skills students
                      n    Z       p       r          n    Z       p      r
Control Group
  Spatial Skills      39   -2.821  0.005*  0.452      12   -0.353  0.724  0.102
  Computer Science    38   -2.105  0.035   0.341      11   -0.239  0.811  0.072
Treatment Group
  Spatial Skills      28   -0.426  0.670   0.080      13   -0.044  0.965  0.012
  Computer Science    27   0.216   0.829   0.042      13   -0.312  0.755  0.087

Table 4: Paired Wilcoxon signed-rank test results within groups on pre- and post-test scores of spatial skills and computer science. We applied a Bonferroni-corrected significance level given the two comparisons, * p < 0.025. r is the rank-biserial correlation coefficient, a measure of the effect size of this test.
Table 5:
                Df    Sum Sq   Mean Sq   F        p
Post-Spatial
  Condition     1     0.128    0.128     2.460    0.122
  Pre SpS       1     1.136    1.136     21.760   <0.001*
  Residuals     63    3.289    0.052
Post-CS
  Condition     1     0.047    0.047     1.834    0.181
  Pre CS        1     0.872    0.872     33.811   <0.001*
  Residuals     62    1.599    0.026
Table 5: ANCOVA analysis comparing post-spatial skills scores between control and treatment groups (condition) with pre-spatial skills scores as a covariate. A similar table is also presented with pre- and post-CS scores. * p < 0.05.
We began to analyze the impact of spatial skills training by examining the differences in the pre- and post-tests of spatial skills and CS knowledge. We found a statistically significant decrease between the pre- and post-tests of spatial skills (Z = -2.821, p = 0.005) for the control group. We did not find a statistically significant difference on the CS test for the control group, as seen in Table 4. Additionally, we did not find statistically significant differences within the treatment group. Further, when looking only at the low spatial skills students in both groups, we do not find statistically significant changes in either spatial skills or CS knowledge for either group.
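The rank-biserial effect sizes reported in Table 4 can be derived directly from the signed ranks that the Wilcoxon test itself uses: r = (W+ − W−)/(W+ + W−), where W+ and W− are the sums of positive and negative ranks. A minimal sketch in pure Python (the data below are hypothetical, not the study's; scipy.stats.wilcoxon supplies the statistic and p-value):

```python
def _avg_ranks(values):
    """1-based ranks of `values`, averaging ranks across ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the tied 1-based positions
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks


def rank_biserial(pre, post):
    """Rank-biserial r for paired scores: (W+ - W-) / (W+ + W-).

    Zero differences are dropped, as in the standard Wilcoxon procedure.
    """
    diffs = [b - a for a, b in zip(pre, post) if b != a]
    if not diffs:
        return 0.0
    ranks = _avg_ranks([abs(d) for d in diffs])
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return (w_plus - w_minus) / (w_plus + w_minus)
```

If every post score exceeds its pre score, r = 1; if every score drops, r = −1, which is why r serves as a directional effect size alongside the Z statistic.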
We also ran an ANCOVA on the post-test spatial scores, controlling for pre-test spatial scores and considering the impact of the two conditions of the study. As shown in Table 5, we do not find that the condition (i.e., control versus treatment group) plays any significant role in the final scores. However, the pre-test spatial scores were a significant factor in the post-test spatial scores. Similar results were found for the CS scores.
Finding 1: Spatial skills training did not have a measurable impact on mid-major students’ spatial skills and computer science abilities, especially for low spatial skills students who should benefit from the training the most.

4.2 RQ2: Factors influencing spatial skills development

Table 6:
                   Df    Sum Sq   Mean Sq   F       p
  Condition        1     0.136    0.136     2.217   0.143
  Gender           1     0.107    0.107     1.746   0.192
  Data Structures  1     0.007    0.007     0.111   0.740
  Residuals        52    3.189    0.061
Table 6: ANCOVA analysis comparing the change in spatial skills scores between control and treatment groups (condition) with gender and enrollment in a data structures course as covariates.
Table 7:
                       Estimate   Std. Error   t        p
All students
  (Intercept)          -0.046     0.180        -0.254   0.801
  Pre SpS              0.143      0.296        0.482    0.634
  Workbook Attempted   0.627      0.276        2.274    0.032*
  R² = 0.334, Adj. R² = 0.280; F = 6.256, p = 0.006*
Low spatial skills students
  (Intercept)          0.039      0.282        0.139    0.892
  Pre SpS              -0.104     0.917        -0.114   0.912
  Workbook Attempted   0.623      0.464        1.341    0.209
  R² = 0.271, Adj. R² = 0.125; F = 1.858, p = 0.206
Table 7: Multiple regression with pre-test spatial skills scores and workbook attempts as independent variables and post-test spatial skills scores as the dependent, outcome variable. The analysis is shown for all students, as well as for low spatial skills students, in the treatment condition. * p < 0.05
To understand the effect on spatial skills and spatial skills training of factors identified in prior literature, we conducted an ANCOVA on our data across the two study groups, as shown in Table 6. We find that, when controlling for the condition of the study, the student’s gender and whether or not the student is in a data structures course do not statistically significantly influence the change in spatial skills. To further understand this finding, we also ran a Mann-Whitney U test to model the effects of gender on the pre-test spatial skills scores. We found that gender was a significant factor in the pre-test spatial scores across both conditions (U = 196, p = 0.0497), but not a significant factor for the post-test scores (U = 262.5, p = 0.8554). We did not further analyze the effect of data structures enrollment on the pre-test scores, given that students had not completed any portion of the data structures course at the time they completed the pre-test for this study.
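The U statistic itself has a simple pairwise reading: across all cross-group pairs, it counts how often a score from one group exceeds a score from the other, with ties counting one half. A sketch with made-up scores (the p-values above would come from, e.g., scipy.stats.mannwhitneyu):

```python
def mann_whitney_u(x, y):
    """U for group x: pairwise wins over group y, ties counted as 0.5."""
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u
```

U ranges from 0 to len(x) * len(y), and U / (len(x) * len(y)) is the probability that a randomly drawn score from x beats one from y — a useful common-language effect size alongside the p-value.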
Within the treatment group, we further analyzed the impact of the spatial skills training on their post-test scores. We ran multiple regression analyses on the impact of the ratio of student attempts in the spatial skills workbooks (as described in Section 3.3.1) on the post-test spatial skills scores when controlling for pre-test spatial skills scores. The results shown in Table 7 indicate a significant effect of effort on the workbooks when considering all students in the treatment group. However, when focusing on the students who scored low on the spatial skills pre-test, the impact of attempting the workbook is no longer significant.
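As a check on Table 7's fit statistics: adjusted R² penalizes R² for the number of predictors. Assuming two predictors (pre-test score and workbook attempt ratio) and group sizes of n = 28 (all treatment students) and n = 13 (low spatial skills treatment students, our reading of Table 4), the reported R² values reproduce the reported adjusted values up to rounding:

```python
def r_squared(y, y_hat):
    """Proportion of variance in y explained by predictions y_hat."""
    y_bar = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, y_hat))
    ss_tot = sum((a - y_bar) ** 2 for a in y)
    return 1.0 - ss_res / ss_tot


def adjusted_r_squared(r2, n, k):
    """Adjust R² for a model with k predictors fitted on n observations."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)
```

For example, adjusted_r_squared(0.271, 13, 2) gives 0.125, matching the low spatial skills row of Table 7, which is why the adjusted value drops so sharply for the small subgroup.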
We conducted further regression analyses exploring the relationship between the overall workbook scores, final course grades, and post-spatial skills scores. We also factored in a quantified variable representing how often the student self-reported playing with spatial toys (e.g., Legos, Minecraft, etc.). None of these variables, either individually or in conjunction with other variables, for the entire treatment condition or the low spatial skill student group were significant factors in our models.
Finding 2a: The spatial skills training mitigated gaps in spatial skills between men and women students.
Finding 2b: The students’ enrollment in a data structures course did not have a measurable impact on students’ change in spatial skills.
Finding 2c: The amount of spatial skills training that a student attempted impacted their spatial skills, but not for the low spatial skills students.
Finding 2d: No other factors, including the workbook correctness, course grades, and spatial toy usage, played a significant role in any change in students’ spatial skills.

5 Discussion

Overall, we did not observe a large impact of the spatial skills training on spatial skills for our CS students in a mid-major course. We designed our study to be as similar to prior work as institutional constraints would allow, in order to best replicate and extend prior findings. However, certain nuances within the data that we discuss below indicate that the training, while not an overwhelming success, provides further insights into the intersection of spatial skills training and CS education.
When considering the outcomes of our study within its context, we must consider different possible explanations of our findings. Below, we outline our conjectures as to why we found what we did in this study. These conjectures are speculative, and further research would be needed to confirm, or reject, each of them. We explore each as an exercise in understanding the possible mechanisms behind our results.

5.1 Spatial skills training prevented a decline in spatial skills

Our analysis did not find statistically significant differences in the post-test of spatial skills or CS knowledge for either the control or treatment group after controlling for pre-test scores. However, we did find that the control group saw a statistically significant decrease in spatial skills (ΔSpS = −0.14, p = 0.005). Meanwhile, the treatment group saw smaller, not statistically significant changes: ΔSpS = −0.027 (p = 0.670) and ΔCS = 0.003 (p = 0.829).
Based on prior work [22, 25], we had no reason to expect a decrease in spatial skills in either group. However, given what we saw in our data, while the spatial skills training did not improve spatial skills in the treatment group, it seems to have prevented a decline in spatial skills that was seen in the control group. However, this is an area that would benefit from further research with a larger dataset to untangle the impact of spatial skills training at this point in a computing major.
While it is possible that spatial skills truly declined in the control group, it is also possible that the decline is due to students not taking the post-test seriously. The post-test did not connect directly to anything they saw that semester except the pre-test, it was graded for completeness rather than correctness, and students took less time to complete the post-test than the pre-test (as seen in Table 3). However, this pattern also held for the treatment group: if the control group did not take the post-test seriously, the treatment group may not have either, meaning both groups took it under the same conditions. Yet we only see a decrease in spatial skills for the control group, not the treatment group.
Further, the only significant factor we found predicting students’ spatial skills post-test scores was their spatial skills pre-test score, regardless of the study condition. This finding is similar to, though distinct from, prior work showing that students’ prior spatial ability predicts their CS scores above and beyond other factors [2]. This replicated finding gives us further confidence that the students did not take the post-test any less seriously than they took the pre-test.

5.2 Spatial skills training beyond an introductory computing course will not be effective

While in some respects our findings are surprising given the prior work on the positive effect of spatial skills training in CS [3, 22, 27], all of the prior work focused on introductory computing courses. This study is centered outside of that context, in a course required for the major but beyond the introductory CS course sequence. This work provides preliminary evidence that spatial skills training interventions are most effective early in the major. Our findings are also similar to recent work that indicated that students’ spatial skills gains were not significant in the second semester of an introductory CS course sequence [27]. These findings could mean that waiting until the students are in their second or third year of study is too late to intervene.
This might be because novices need more support to grow spatial skills [18], perhaps because those who persist in CS programs and careers incidentally have greater spatial skills as they attain higher levels of academic achievement [23]. Additionally, lower spatial skills students have more to gain from spatial skills training [17, 33]. If students early in the computing major have lower spatial skills, need more support, and have more room for improvement, then it stands to reason that spatial skills training would most benefit introductory CS students.
Another possibility that would make spatial skills training less effective outside of introductory computing is the idea that everything is novel to students in an introductory course and less so as they progress in their degree program. In an introductory course, students may be more receptive to activities that are tangential to the course, primarily because they cannot recognize it as tangential. The more CS courses that a student takes, the more they may recognize what does, and does not, belong in a computing course. If a student has taken two or more CS courses, and no explicit spatial skills connections have been made, they may be less motivated to fully engage when a spatial skills training is presented to them. This, in turn, may inhibit their ability to grow their spatial skills effectively.

5.3 The spatial skills training context (including student motivation) matters

Prior research has offered spatial skills training as a type of remediation course for students who score below 60% on a spatial skills test at the start of a term. Past research demonstrates that low spatial skills students benefit the most from spatial skills training [17, 22, 33], but our research did not replicate this finding. However, when we begin to look further into the effort put into the training, as we do in the next section, we can start to understand where the boundaries are of the effectiveness of spatial skills training.
We did not have the ability to direct only low spatial skills students into the spatial skills training, but we did look at their specific performance through a post-hoc analysis. Our analyses do not indicate statistically significant changes in any knowledge set over the course of the semester within either group. Further, the changes for the low spatial skills students within the two groups were not statistically significant either, for spatial skills or CS knowledge.
Similarly to the previous discussion regarding the decline in spatial skills, one hypothesis for the spatial skills intervention not improving spatial skills is that the students did not take the activities seriously, since they were graded for completion, not correctness, for class credit. The best proxy we have for students’ engagement with the post-test is the amount that students engaged with the training materials in the treatment condition. When modeling the impact of workbook attempts on post-test spatial skills scores, while controlling for pre-test spatial skills scores, we see that how much a student tried to complete a sketch does statistically significantly predict their post-test scores. However, this effect disappears when only looking within the low spatial skills group. This is especially discouraging because it implies that no matter how much a student with low spatial skills tried on the workbooks, it did not impact their post-test spatial skills score. There are likely other factors at play that can support, or deter, low spatial skills students from improving their spatial skills, such as the type of training or the context of the training.
Prior work indicates that the instructor and teaching assistants can make a difference in the effectiveness of spatial skills training [22]. In our study, the condition is a proxy for the instructor. While the lead researcher taught the control group to ensure they did not bias the outcomes of the treatment group, the consequence was that the treatment group was taught by an instructor less familiar with spatial skills training. It is possible that an instructor more invested in the spatial skills training would have led to the training having a greater impact on outcomes.

5.4 The measurements inadequately assess spatial and computing skills

To gauge students’ CS knowledge over time, we used a validated measure of learning in CS called SCS1 [20]. However, this measure is only validated for the content of introductory CS courses, and we administered it outside of an introductory course. Our reasoning was that students in the mid-major course should know the content covered in SCS1. Additionally, we recognize that this assessment measures a certain subset of computing knowledge that may not be the same subset most malleable through spatial skills training [7]. In other words, spatial skills training may be affecting mid-major student performance in CS, but not their performance on that particular subset. The strongest evidence we currently have for where spatial skills interact with computing lies in data structures problems [10, 25], which SCS1 covers only in part, through its inclusion of arrays. It is possible that another assessment, such as the Basic Data Structures Inventory (BDSI) [28], would be more appropriate in studies of spatial skills impacts on computing outcomes.
Part of the appeal of situating this study in this course was the course focus on diagrams: UML class diagrams, architecture diagrams, use case flows, etc. While the treatment condition received spatial skills training, the control condition received more course-related activities. This inherently meant that the control group spent more time working with diagrams. Given the distinct, statistically significant drop in spatial skills among the control group, it seems that the control activities were worse for spatial skills, as measured by the PSVT:R, than the spatial skills training. Our findings can be explained by the differences in the spatial skills and processes that each condition practiced [33]. When completing the spatial skills workbooks, students were completing intrinsic, dynamic tasks, which typically involved the rotation of a singular object. The students in the control condition, however, were completing extrinsic, static tasks, which involve no rotation of an object but rather understanding the relationships between objects. These categories of spatial skills, as described in Section 2.1, are distinct and serve different purposes. Further, the PSVT:R specifically measures intrinsic, dynamic spatial skills and thus was better aligned with the treatment condition. Our research begins to probe these differences in the context of CS education, and we encourage future work to continue to disentangle the effects of these different spatial skills in our domain.

6 Conclusion

We conducted a quasi-experimental study with a spatial skills training intervention in a mid-major CS course. We did not find statistically significant differences in spatial skills or CS knowledge for the treatment group over the control group. Also, we did not find gender or concurrent enrollment in a data structures course to play a factor in the post-test spatial skills scores.

6.1 Contributions

Our study presents a novel approach to studying spatial skills training in a CS program. For one, we focused on a non-introductory course. Although some research has been done on individuals in CS outside of the first computing course (e.g., [23, 26]), the area is due for further exploration. Additionally, our study is based on a non-programming, software engineering course. We do not currently understand what aspect of CS leads to a connection in spatial skills, especially since CS is such a virtual domain. By expanding the research on spatial skills to other areas of CS, we can add to the existing SPeS theory [18]. For example, our results indicate that a non-programming, diagram-based course does not inherently improve students’ spatial skills, perhaps because they are not forced to use encoding strategies as much in the in-class activities they complete as they would if they did programming tasks instead. Further, SPeS offers an explanation that, as learners progress from novices to experts, they rely less on problem-solving skills, such as spatial skills. As our study is conducted mid-major, not with novice students, spatial skills training may be less relevant or effective given the students’ progression into more domain-specific courses.
To the best of our knowledge, our work is the first to create a rubric for scoring the spatial skills workbooks and incorporate those scores into the analysis. Further, we present our novel approach to codifying and quantifying what it means for students to put forth effort and try on the sketches in the workbook. While knowing whether spatial skills training works or not is valuable, it is also valuable to understand why it may be less effective when no impact is observed. Analyzing the student sketches further can provide deeper insight into how their spatial skills are evolving, or not, throughout the intervention.
Many CS education research studies on spatial skills use the same spatial skills test (PSVT:R), which we use as well. However, there is no standard measure of CS knowledge used in these studies. Oftentimes, a final exam or final course grade is used as a proxy for CS knowledge. Our study is the first to use the pseudocode-based SCS1 assessment in its entirety. Though other studies have used subsets of SCS1 [2, 3, 21], those uses are further outside of the context for which SCS1 was validated [19]. SCS1 may not be appropriate in all studies of spatial skills and CS, and its timing (one hour) and difficulty level may be deterrents to its use. Regardless of which assessment is used, we encourage other CS Education researchers to move towards standardized measures of CS learning. This will aid in comparing results across studies and better inform the community of the effects of spatial skills training.
Although we did not see gender as a significant factor in post-test spatial skills scores, we did find that gender was a significant factor in pre-test spatial skills scores in both study conditions. Given the significance of gender at the start of the term, combined with the lack of significance at the end of the term, we can hypothesize that the gaps in spatial skills between men and women students shrink over the semester. This is similar to prior findings in the literature [17, 33]. However, it should be noted that our data set is too small, with only eight women in the control condition and five in the treatment condition, to dig much deeper into the differential effects of the training across genders.

6.2 Limitations

Our divergence from prior findings on the effects of spatial skills training could be due to the new context of the training being beyond an introductory CS course. However, we should also consider the fidelity of the implementation of the intervention used. As the study was being carried out, the instructor of the treatment group observed that students did not talk to each other while completing the spatial skills workbooks. The students also did not make use of toy blocks, which were made available to them as an aid for visualizing the rotations of shapes. Increasing student interactions and use of visual aids may help in the training having a greater effect on spatial skills at the end of the intervention.
Our sample size, while large enough for the statistical analyses we used in this paper, is limiting. There is not enough data to cross-cut it any further, including looking more into the gender differences in spatial skills, especially among the low spatial skills students. A larger sample size is critical to dive deeper into the effects of the various factors that could be hindering students from developing their spatial skills.
Our analysis of sketches in the workbooks, including whether a student tried or not, does not fully capture a student’s understanding, intent, and attempt to understand a problem. To do this, we would need to conduct cognitive interviews as students think out loud through the workbook. We could also ascertain their effort level with a self-report survey, asking students after each workbook session how much they felt they tried on the activity, similar to Ly et al.’s survey of guessing and cheating [17]. However, given that the observations about trying on a sketch were gathered in a post-hoc analysis, we did not include any element of analyzing the sketches in this way. Further work should be done in this vein if we are to accurately ascertain when spatial skills training works, and when it does not, in a CS major.

6.3 Future Work

This paper primarily presents null results. However, that does not imply that the contributions of this work are also null. While these findings run askew from prior work on spatial skills in computing, our study provides promising directions for future research. Understanding when spatial skills training is the most effective is beneficial for departments and professors contemplating the inclusion of spatial skills in their curricula. Further, if we understand when spatial skills training has the most impact, then our students can receive optimal timing of this additional instruction, aiding in their future success in CS and STEM fields.
We previously discussed how our work builds on prior findings and, in turn, how our work can be built on in the future. Beyond what has already been mentioned, we also encourage future endeavors in spatial skills in the CS learning context to explore other spatial skills training methods that have been shown to be effective in other fields, as shown in [33]. While the standard method in our community has been to use a course, further research is needed to determine whether video games (e.g., Tetris) or spatial tasks (e.g., animating 3D media) are similarly, or more, effective with our students. We should do what we can to discover what works best to support all of our CS students to succeed.

References

[1] Brett A Becker and Keith Quille. 2019. 50 years of CS1 at SIGCSE: A review of the evolution of introductory programming education research. In Proceedings of the 50th ACM Technical Symposium on Computer Science Education. 338–344.
[2] Ryan Bockmon, Stephen Cooper, Jonathan Gratch, Jian Zhang, and Mohsen Dorodchi. 2020. Can Students’ Spatial Skills Predict Their Programming Abilities?. In Proceedings of the 2020 ACM Conference on Innovation and Technology in Computer Science Education. 446–451.
[3] Ryan Bockmon, Stephen Cooper, William Koperski, Jonathan Gratch, Sheryl Sorby, and Mohsen Dorodchi. 2020. A CS1 spatial skills intervention and the impact on introductory programming abilities. In Proceedings of the 51st ACM Technical Symposium on Computer Science Education. 766–772.
[4] Jeffrey Buckley, Niall Seery, and Donal Canty. 2018. A heuristic framework of spatial ability: A review and synthesis of spatial factor literature to support its translation into STEM education. Educational Psychology Review 30, 3 (2018), 947–972.
[5] Stephen Cooper, Karen Wang, Maya Israni, and Sheryl Sorby. 2015. Spatial skills training in introductory computing. In Proceedings of the Eleventh Annual International Conference on International Computing Education Research. 13–20.
[6] Madeline Endres, Madison Fansher, Priti Shah, and Westley Weimer. 2021. To read or to rotate? Comparing the effects of technical reading training and spatial skills training on novice programming ability. In Proceedings of the 29th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering. 754–766.
[7] Madeline Endres, Zachary Karas, Xiaosu Hu, Ioulia Kovelman, and Westley Weimer. 2021. Relating Reading, Visualization, and Coding for New Programmers: A Neuroimaging Study. In 2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE). IEEE, 600–612.
[8] Sally Fincher, Anthony Robins, Bob Baker, Ilona Box, Quintin Cutts, Michael de Raadt, Patricia Haden, John Hamer, Margaret Hamilton, Raymond Lister, et al. 2006. Predictors of success in a first programming course. In Proceedings of the 8th Australasian Computing Education Conference (ACE 2006), Vol. 52. 189–196.
[9] Roland Guay. 1976. Purdue spatial visualization test. Educational Testing Service.
[10] Yu Huang, Xinyu Liu, Ryan Krueger, Tyler Santander, Xiaosu Hu, Kevin Leach, and Westley Weimer. 2019. Distilling neural representations of data structure manipulation using fMRI and fNIRS. In 2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE). IEEE, 396–407.
[11] Sue Jones and Gary Burnett. 2008. Spatial ability and learning to program. Human Technology: An Interdisciplinary Journal on Humans in ICT Environments (2008).
[12] Sue Jane Jones and Gary E Burnett. 2007. Spatial skills and navigation of source code. ACM SIGCSE Bulletin 39, 3 (2007), 231–235.
[13] Terry K Koo and Mae Y Li. 2016. A guideline of selecting and reporting intraclass correlation coefficients for reliability research. Journal of Chiropractic Medicine 15, 2 (2016), 155–163.
[14] Marcia C Linn and Anne C Petersen. 1985. Emergence and characterization of sex differences in spatial ability: A meta-analysis. Child Development (1985), 1479–1498.
[15] Ken Liu, Burkhard C Wünsche, and Andrew Luxton-Reilly. 2022. Relationship between spatial skills and performance in introductory computer graphics. In Proceedings of the 27th ACM Conference on Innovation and Technology in Computer Science Education Vol. 1. 304–310.
[16] Andrew Luxton-Reilly, Simon, Ibrahim Albluwi, Brett A Becker, Michail Giannakos, Amruth N Kumar, Linda Ott, James Paterson, Michael James Scott, Judy Sheard, et al. 2018. Introductory programming: A systematic literature review. In Proceedings Companion of the 23rd Annual ACM Conference on Innovation and Technology in Computer Science Education. 55–106.
[17] Anna Ly, Jack Parkinson, Quintin Cutts, Michael Liut, and Andrew Petersen. 2021. Spatial Skills and Demographic Factors in CS1. In Proceedings of the 21st Koli Calling International Conference on Computing Education Research. 1–10.
[18] Lauren E Margulieux. 2020. Spatial encoding strategy theory: The relationship between spatial skill and STEM achievement. ACM Inroads 11, 1 (2020), 65–75.
[19] Miranda C Parker, Matt J Davidson, Yvonne S Kao, Lauren E Margulieux, Zachary R Tidler, and Jan Vahrenhold. 2023. Toward CS1 Content Subscales: A Mixed-Methods Analysis of an Introductory Computing Assessment. In Proceedings of the 23rd Koli Calling International Conference on Computing Education Research. 1–13.
[20] Miranda C Parker, Mark Guzdial, and Shelly Engleman. 2016. Replication, validation, and use of a language independent CS1 knowledge assessment. In Proceedings of the 2016 ACM Conference on International Computing Education Research. 93–101.
[21] Miranda C Parker, Amber Solomon, Brianna Pritchett, David A Illingworth, Lauren E Margulieux, and Mark Guzdial. 2018. Socioeconomic status and computer science achievement: Spatial ability as a mediating variable in a novel model of understanding. In Proceedings of the 2018 ACM Conference on International Computing Education Research. 97–105.
[22] Jack Parkinson, Ryan Bockmon, Quintin Cutts, Michael Liut, Andrew Petersen, and Sheryl Sorby. 2021. Practice report: Six studies of spatial skills training in introductory computer science. ACM Inroads 12, 4 (2021), 18–29.
[23] Jack Parkinson and Quintin Cutts. 2019. Chairs’ Award: Investigating the relationship between spatial skills and computer science. ACM Inroads 10, 1 (2019), 64–73.
[24] Jack Parkinson and Quintin Cutts. 2020. The effect of a spatial skills training course in introductory computing. In Proceedings of the 2020 ACM Conference on Innovation and Technology in Computer Science Education. 439–445.
[25] Jack Parkinson and Quintin Cutts. 2022. Relationships between an early-stage spatial skills test and final CS degree outcomes. In Proceedings of the 53rd ACM Technical Symposium on Computer Science Education, Volume 1. 293–299.
[26] Jack Parkinson and Quintin Cutts. 2023. Understanding Spatial Skills and Encoding Strategies in Student Problem Solving Activities. In Proceedings of the 2023 ACM Conference on International Computing Education Research.
[27] Jack Parkinson, Sebastian Dziallas, Fiona McNeill, and Jim Williams. 2023. Exploring Models and Theories of Spatial Skills in CS through a Multi-National Study. In Proceedings of the 2023 ACM Conference on International Computing Education Research.
[28] Leo Porter, Daniel Zingaro, Soohyun Nam Liao, Cynthia Taylor, Kevin C Webb, Cynthia Lee, and Michael Clancy. 2019. BDSI: A validated concept inventory for basic data structures. In Proceedings of the 2019 ACM Conference on International Computing Education Research. 111–119.
[29] Sheryl Sorby, Norma Veurink, and Scott Streiner. 2018. Does spatial skills instruction improve STEM outcomes? The answer is ‘yes’. Learning and Individual Differences 67 (2018), 209–222.
[30] Sheryl A Sorby and Beverly J Baartmans. 2000. The development and assessment of a course for enhancing the 3-D spatial visualization skills of first year engineering students. Journal of Engineering Education 89, 3 (2000), 301–307.
[31] Sheryl Ann Sorby and Anne Frances Wysocki. 2003. Introduction to 3D Spatial Visualization: An Active Approach. Cengage Learning.
[32] David H Uttal and Cheryl A Cohen. 2012. Spatial thinking and STEM education: When, why, and how? In Psychology of Learning and Motivation. Vol. 57. Elsevier, 147–181.
[33] David H Uttal, Nathaniel G Meadow, Elizabeth Tipton, Linda L Hand, Alison R Alden, Christopher Warren, and Nora S Newcombe. 2013. The malleability of spatial skills: A meta-analysis of training studies. Psychological Bulletin 139, 2 (2013), 352.
[34] Jonathan Wai, David Lubinski, and Camilla P Benbow. 2009. Spatial ability for STEM domains: Aligning over 50 years of cumulative psychological knowledge solidifies its importance. Journal of Educational Psychology 101, 4 (2009), 817.


Information

Published In
Koli Calling '24: Proceedings of the 24th Koli Calling International Conference on Computing Education Research. November 2024. 382 pages. ISBN: 9798400710384. DOI: 10.1145/3699538.

Publisher
Association for Computing Machinery, New York, NY, United States

Publication History
Published: 13 November 2024

Author Tags
1. spatial skills
2. training
3. computer science
4. undergraduate

Qualifiers
• Research-article

Conference
Koli Calling '24

Acceptance Rates
Overall Acceptance Rate: 80 of 182 submissions, 44%
