
Health Data Visualization Literacy Skills of Young Adults with Down Syndrome and the Barriers to Inference-making

Published: 28 March 2024

Abstract

As health management becomes more intertwined with data, an individual’s ability to read, interpret, and engage with personal health information in data visualizations is increasingly critical to one’s quality of care. People with Down Syndrome already experience greater health disparities than their typically developing peers. Inaccessible health information and technologies have the potential to magnify inequities further. Inaccessible health data can be an additional barrier to people with Down Syndrome’s ability to adopt and use health systems or devices, make informed decisions about their bodies, and advocate for themselves in health contexts. By examining their underlying data visualization literacy skills, our exploratory study involving ten young adults with Down Syndrome identifies several design opportunities to improve the accessibility of health data visualizations (HDVs) by addressing the cascade of negative effects caused by inference-making barriers in HDVs.

1 Introduction

Meaningful engagement with one's personal health information is critical for receiving high quality healthcare [7]. Unfortunately, many digital health systems present patient data in ways that can be challenging for many people to interpret, understand, and extract information from, and entirely inaccessible for others [37, 52, 63, 81, 116]. As health management becomes intertwined with data across an ecosystem of health systems and personal devices, an individual's ability to read, interpret, and understand data visualizations of their health information is increasingly integral to their quality of life.
Furthermore, inaccessible presentations of health data may contribute to an inequitable quality of care, particularly for people who have disabilities. People with Down Syndrome, like other individuals with intellectual and developmental disabilities (IDDs), already report receiving lower quality healthcare than their typically developing (TD) counterparts [4]. As technology has the power to magnify social inequalities, data visualization accessibility barriers may likewise hold the potential to exacerbate existing healthcare disparities for people with Down Syndrome [123].
Data visualizations, in particular, may play to the strengths of people with Down Syndrome as learners with strong visuo-spatial abilities [39, 41]. Unfortunately, there is currently no literature describing the data visualization literacy and graph reading skills (i.e., graphicacy) of people with Down Syndrome. To address this gap and provide guidance for more equitable presentations of health data for our population of interest, we investigated the following research questions:
(1)
What are the graphicacy skills of people with Down Syndrome as they read and make sense of the information presented in health data visualizations?
(2)
What accessible health data visualization design opportunities are there to better support people with Down Syndrome’s abilities as they interact with health information?
The first of its kind, our exploratory study investigated and reported upon the underlying data visualization literacy skills of people with Down Syndrome as they progressed through the three stages of reading a graph (i.e., first stage: locating and identifying health data visualization (HDV) elements; second stage: observing and comparing relationships between the HDV elements; and third stage: connecting the information across the HDV elements with outside information, see Section 2). We also provide background about our study population, the interrelated literacy-based skills necessary to read data visualizations, and summarize the limited related literature. Section 3 describes how we conducted the semi-structured interviews with ten young adults with Down Syndrome as they read six health data visualizations of varying complexity inspired by or borrowed from existing health applications or information sources (Figure 1).
Section 4 begins by detailing the initial saliency task for each of the six HDV types. Results indicate that study participants used a variety of strategies as they began to read and interact with the visualized information. The section then reports upon the various stage-specific tasks (i.e., six HDV element identification tasks, seven HDV relationship comparison tasks, and four HDV element connection tasks). For each stage, we identify stage-specific barriers that impacted performance during the various HDV reading tasks. Finally, we describe the observable downward cascade in participant performance as they progressed through the three stages.
In the discussion (Section 5), we describe our theory: several task- and design-based barriers adversely impacted participants' abilities to make inferences as they progressed through the three reading stages. We further describe how these barriers stemmed from commonly occurring data visualization design decisions, and how the increasing difficulty, complexity, and ambiguity of the reading tasks across the stages compounded the effects of any errors made. We close our discussion by noting our study's limitations and proposing future avenues for research.
This study makes two contributions:
(1)
A detailed account of people with Down Syndrome’s data visualization literacy skills in a health information context; and
(2)
Twelve suggestions with potential strategies to improve the overall design of HDVs (summarized in Tables 9 and 10).
These design considerations may not only improve the accessibility of HDVs, but also better support the inference-making abilities of individuals with Down Syndrome as they make sense of health data.

2 Background

2.1 People with Down Syndrome

Globally, around one in every 800 babies is born with Down Syndrome [13]. Down Syndrome, the most common chromosomal condition related to intellectual and developmental disability, is caused by a full or partial extra copy of the 21st chromosome [13, 32, 46]. The additional chromosomal copy can impact an individual's overall development, gross and fine motor skills, perception and sensory processing abilities, cognition, and intellect in unique ways [42]. People with Down Syndrome can also have complex medical histories with co-occurring conditions (e.g., Alzheimer's, apraxia, attention disorders, autism, celiac disease, dementia, heart defects, hypertension, hearing or vision impairments, leukemia, neurodevelopmental disorders, sleep apnea, etc.) that affect their health and functional abilities at a higher rate than the TD population [13, 18, 42]. As a result, the overall population is highly heterogeneous.
There is high inter-individual variability in the cognitive profile of people with Down Syndrome [96]. Due to its high occurrence within the population, Down Syndrome is most commonly associated with intellectual disability, which can impact an individual's intelligence (i.e., capacity to discover, justify, make choices, solve problems), adaptive behavior, and executive abilities [13, 51]. It is rare, however, for a person with Down Syndrome to have a severe or profound intellectual and learning disability [71]; most individuals with an intellectual disability (~85%), such as those with Down Syndrome, instead have a mild (i.e., a mental age equivalent of 9–12 years old) or moderate (i.e., a mental age equivalent of 6–9 years old) level [26, 55, 86]. The need for cognitive supports may increase as they enter middle-to-late adulthood, when brain volume loss can occur and the population's increased potential for developing Alzheimer's emerges, both of which can impact their ability to integrate and retain information as they age [51].
As logical thinkers, people with Down Syndrome are entirely capable of learning new skills and content [41]. However, skill acquisition may occur at a slower pace than in their typically developing counterparts (~2 years behind) [51, 114]. Learning new content can be facilitated with supports and adaptations. For example, playing to their strong social skills, learning through imitation (i.e., peer or visual modeling) may be a successful mode for new skill or knowledge acquisition [41, 54, 133]. They have also been found to be strong kinaesthetic learners [99, 104], so embodied interactions may play to that strength.
People with Down Syndrome are often better at processing visuo-spatial information (i.e., the locations of objects, images) than verbal information (i.e., holding speech-based, acoustic information in working memory) [41, 61]. Cognitive supports should play to their visuo-spatial and receptive language strengths [41]. They may also benefit from supports for verbal information processing and language comprehension, as difficulties in these areas can negatively impact their ability to learn and recall information when unaided [51]. Concrete and practical pictorial support materials, signs, and gestures can support expressive communication and vocabulary attainment [41, 99, 104]. Similarly, calculation tools can offset struggles people with Down Syndrome may experience with numeracy and allow them to perform more complex mathematical skills, such as solving algebraic problems [24, 83].
They may also struggle with distraction and impulsivity, so providing structure and routine can be helpful [26, 51, 114]. Attention-specific aids may improve their ability to adapt to circumstances, prioritize, switch between tasks, remain engaged with tasks, or multi-task [51]. People with Down Syndrome can become frustrated when their ability to integrate, generalize, retain, order, and use abstract reasoning is impacted by their differences in short-term and working memory [41]. When information processing becomes too complex, the need for working memory supports that compensate for storage capacity limits also becomes more pronounced (e.g., [68, 125]). Yet, once content is stored in long-term memory, the rate of forgetting is similar to that of TD individuals [41]. The increased activity found within their brain's limbic system (i.e., the region involved in behavioral and emotional responses), and the heightened emotional state it produces, can improve their ability to store information in long-term memory [136]. Emotionally-engaging content could therefore further facilitate storing new content in long-term memory.
While students with Down Syndrome are taught mathematics skills in school, they are often segregated from the able-bodied, typically developing population (i.e., placed in special education classes only). In the US, for example, only 17.9% of students with intellectual disabilities were educated in "mainstream" classrooms for most of the day [93]. This dropped to 15% when students had multiple disabilities, as many people with Down Syndrome do (e.g., intellectual plus: visual, hearing, communication, attention, learning-specific) [93]. As such, reading data visualizations may be a well-developed skill for some students with Down Syndrome while neglected for others.

2.2 Literacies and Skills in Health Data Visualizations

Health data may be inaccessible to people with Down Syndrome for many reasons. One reason is that reading health data visualizations requires an intersection of various literacies and skills. This section discusses crucial literacies that directly impact health data visualization: print literacy, numeracy, data literacy, graphicacy, and health literacy (see Figure 2).
Fig. 1. The saliency of six types of health data visualizations as reported by viewers with Down Syndrome. See Appendix for all saliency responses.
Fig. 2. Intersection of five literacies necessary for reading and understanding health data visualizations.

2.2.1 Print Literacy.

One foundational skill often used in health data visualizations is the ability to read. Reading involves constructing understanding of a shared meaning between a writing system (i.e., print) and its corresponding verbal or gestural language. Eighty-six percent of the world knows how to read and write [121]. However, between 4.9% and 27.7% (mean: 15.5%) of those literate adults (16–65 years old) have poor reading proficiency [33]. The total number of adults with poor reading proficiency could be higher, as individuals with "language difficulties, or learning or mental disabilities" were excluded from this research effort [33].
Print literacy comprises multiple, complex language skills that underpin a person's reading abilities. These underpinning skills and abilities are: phonology (i.e., speech sounds), orthography (i.e., spelling patterns), semantics (i.e., connecting symbols of written language to their meaning), and morphology (i.e., patterns found within word formation and language syntax) [88]. A further set of discrete decoding skills is necessary to move individuals from responding to visual and auditory stimuli in a written or spoken language towards competency (i.e., accurately and effectively constructing both meaning and understanding): (1) phonemic awareness is the ability to recognize and work with the individual sounds in spoken words; (2) phonics matches sounds to their corresponding letters or groups of letters; (3) fluency is the ability to read accurately and quickly; (4) vocabulary involves increasing the known words used in a particular language; and finally, (5) comprehension occurs when reading the words constructs understanding [88].
The difficulty or ease with which an individual can grow their print literacy skills is closely related not only to adequate instruction and regular practice, but also to the body. For example, an individual's hearing is involved during phonics activities in verbal languages; their vision is used to recognize orthographic elements, letters, letter groups, words, and phrases; and their working memory is used to connect semantic meanings, recall vocabulary, comprehend the content, and construct their understanding of what they just read [88]. However, vision, hearing, and working memory can be impacted by developmental differences in individuals with Down Syndrome [14]. As a result, reading abilities may vary across individuals.
Despite an increased potential for the kinds of impairments that may impact someone’s print literacy abilities, researchers have found that decoding words (i.e., figuring out a new word by sounding it out) and single word reading are relative strengths [14, 87]. However, people with Down Syndrome can struggle with phonological awareness [76]. Reading interventions that simultaneously develop their speech and phonological abilities whilst also growing their vocabulary and grammar skills may increase their abilities [58, 87]. Multi-modal interactions that reinforce the sound-word mental connections may be beneficial in digital health data visualizations. Given the unfamiliarity of many medical terms used in health systems and data visualizations, ensuring adequate vocabulary supports are incorporated may further facilitate the reading abilities of individuals with Down Syndrome.

2.2.2 Numeracy.

Roughly one-third of English-speaking Americans (~62.7 million) have low numeracy skills [80]. Numeracy is someone's ability to use, reason with, and understand math in their daily lives [1]. Numeracy skills include arithmetic and mathematical skills, such as counting, understanding number lines, numerical concepts (e.g., whole numbers, percentages), number sense (i.e., the cognitive flexibility to take numbers apart and put them back together), performing calculations (i.e., addition, subtraction, multiplication, and division), operation sense, measurement concepts and protocols, estimation, proportions, and more [44]. As such, numeracy is also essential to understanding data visualizations [1]. Health numeracy is "the degree to which individuals have the capacity to access, process, interpret, communicate, and act on numerical, quantitative, graphical, biostatistical, and probabilistic health information needed to make effective health decisions" [49]. Sufficient health numeracy skills are critical to understanding risk, probability, and scientific evidence (i.e., medical information) [111].
As data visualizations are often intended to support human comprehension, pattern finding, and inference-making of numerical information, an individual who struggles with health numeracy and graphicacy may encounter unnecessary barriers to their ability to make effective, informed decisions [15, 120]. Unfortunately, people with poor numeracy skills are also more likely to have poorer graphicacy skills [45]. Similarly, people who struggle with numeracy skills, such as those with Down Syndrome, may also experience difficulty with abstract reasoning and other essential skills [35, 40, 74, 83] that can impact their data visualization literacy skills.

2.2.3 Data Literacy.

Data literacy is the newest type of essential 21st century skill necessary for full participation in an increasingly data-driven society [28]. Yet despite being a critical skill, there is no widely accepted definition for data literacy [130]. Some definitions broadly describe data literacy in terms of using and understanding data (e.g., [34, 47, 65, 100, 103, 105, 108]) while others see it as an inquiry process where hypotheses are formed or questions are generated about the data to either solve a problem or support decision-making (e.g., [36, 53, 79, 95, 131]).
Data usage can include multiple sub-processes relating to reading and manipulating data, such as connecting, comparing, distinguishing between data, transforming data into information, analyzing, data processing, and communicating conclusions [66, 69, 78, 79, 100, 107, 112, 117, 129, 135]. Print literacy, numeracy, and other statistical knowledge are requisite underlying skills to be data literate [5, 28, 50, 135].
Critical thinking skills are also used throughout the process to support an individual’s analysis, inference-making, and ability to synthesize the data to reach a conclusion or to explore potential solutions [65, 106]. However, some critical analysis skills fall outside of this study’s scope of investigation, such as the critical evaluation of data accuracy via viewer assessment of the methodologies used to collect the data and potential biases of the individual’s personal interpretation [28, 59, 107].
Data literacy is often described as an ability [53, 78, 79, 95, 100, 103, 117, 122, 129]. Several scholars note that an essential skill of a data literate person is the ability to read data visualizations [5, 16, 23, 78, 95, 117]. Ability-oriented definitions thus suggest that when someone is hindered, such as by inaccessible data presentations, their potential to leverage their personal data is limited.

2.2.4 Graphicacy.

Roughly 29% of Americans have poor data visualization literacy (i.e., graphicacy) [94]. Graphicacy skills help viewers understand data visualizations. Graphicacy involves three stages of activities that work towards completing specific goals. The earliest, foundational phase has the goal of reading the visually represented data; the second phase builds upon that initial information by connecting and mapping the information together within the visualization; and the third phase uses the information from the earlier phases to support the viewer's ability to extract meaning and make inferences about the content presented, using information outside of the visualization [17, 73]. Curcio broadly described these phases as: reading the data, reading between the data, and reading beyond the data [29].
The first stage—reading the health data—involves various sense-making processes (i.e., taking in and interpreting visual information) to identify the various component parts that make up the visualization as a whole. These sense-making sub-activities are necessary to reach the first stage goal of reading the health data. Examples of sense-making activities include the viewer identifying the elements that make up the health data visualization’s anatomy (e.g., title, X- & Y axes, labels, values, numbers, special characters, symbols), the value encoded elements (e.g., shape, size, position), and the type of data visualization, which can influence how the viewer might approach the graph reading task. This initial visual encoding of information helps people to determine what is salient–and, thus, worth their attention and cognitive efforts to process [89].
The second stage–reading between the health data–involves comparing and connecting information to identify patterns, differences, and outliers. During this stage, people use various information foraging sub-activities [97] and other sense-making processes [17] to reach this phase’s goal of tying together how the graphical information is connected with each other and what these connections may mean. This stage involves an iterative process of searching through the data, visually encoding it, and then mapping the information presented to the viewer’s internal mental model of the visualization [77]. Reading between the data tasks can include things like identifying extremes, finding exact values, anomalies, or clusters, making estimates, noting ranges of values, connecting graph elements or other information to the data (e.g., relating the x- & y-axis information to the graph values, connecting meaning to stylistic elements, like color), and comparing any differences in the data that may occur over time (i.e., noticing trends and recurring behaviors).
In the final stage, viewers draw more heavily upon information held in their long-term memory (e.g., health knowledge and other related literacies or skills) to inform their understanding, perform inference-making, and update their mental model as they engage with the data visualization [113]. During this stage, interpretation is especially influenced by any domain-specific knowledge the viewer possesses, which is used to extract the information in an informed way to read beyond the data. It is during this stage that individuals leverage the information in the visualization to reflect upon their health behaviors and gain potential insights for where improvements to their health could be made. All three stages, but especially the final stage, are driven by knowledge. It is also important to note that these stages of reading a data visualization are not necessarily a linear process. Rather, viewers may move back and forth between stages, refining and updating their mental model as they take in more information and make new connections using information from both within the visualization and outside of it.

2.2.5 Health Literacy.

Health literacy involves multiple processes related to finding relevant information, understanding the information that is being accessed, and critically evaluating health information (i.e., interpret, filter, and judge the information) to make informed healthcare decisions and communicate those decisions with those involved in the management of one’s health [9, 19, 118]. Health literacy skills require print literacy, numeracy, and oral literacy when the information is shared with others (i.e., members of the medical care team and trusted adults involved in the shared or supported-decision making process).
Data visualization literacy-based skills within the medical context also require health literacy knowledge and skills to inform the viewer how to read, interpret, and engage with the graphs or charts. People must use various information-seeking skills, understand new health information, and synthesize it while interpreting their health data, as described by information foraging theory [48, 97, 124]. A viewer's ability to search, interpret, and make sense of any health information they find is hindered if they lack the requisite health literacy skills and domain knowledge [124, 127].
Nutbeam identified three levels of health literacy: functional (i.e., ability to understand basic health information within the healthcare context), interactive (i.e., engages, asks questions, applies new information to changing circumstances, and incorporates that knowledge into shared decision-making with healthcare professionals and/or family), and critical health literacy (i.e., critically analyze health information to make an informed decision that aligns with personal preferences and needs, including when to get a second opinion) [90]. Unfortunately, many people have poor health literacy [3, 35, 110]. More than one-third of Americans have insufficient health literacy skills; only 12% were proficient enough to effectively use health information [92].

2.3 Mental Models

The previously described five literacies are involved in various health data visualization reading activities when a viewer is constructing and updating their mental model of an HDV. A mental model is the internal, mental representation of an external system [89]. People use mental models to figure out how something works, plan their next actions (i.e., interaction, input), and imagine the expected results of their interaction (i.e., output) [89]. Mental models are also drawn upon as understanding of and interaction with a visualization deepens, such as when the viewer encodes both individual graph elements and the relationships between elements (i.e., when they begin to make inferences about an HDV) [89].
Mental models are constructed over time through an interplay between an external system (e.g., a digital health data visualization) and the viewer's interpretation of the visually represented information [75]. Mental models for information visualizations are constructed, developed, and refined over time. In Liu and Stasko's information visualization mental model, the viewer begins by taking in the information visualization in the external system [75]. They then internalize that information and process it. When viewers of an HDV draw upon their literacy-based knowledge and skills, they employ top-down information processing. Next, they use that initially processed information and view the visualization again, this time taking in more information that is interpreted and reasoned with using their initial frame for the HDV. As more information from the external environment is taken in, the individual augments their initial mental model to include any new observation or information. The interplay between external system and individual interpretation continues to update and refine the mental model until the viewer feels they have an understanding of what was being represented.
While Liu and Stasko present a robust top-down information processing approach for most mental model use cases in information visualizations, they do overlook novice users interacting with unfamiliar visualizations. When this occurs, a user's mental model for the data visualization can "flounder" [72]. Likewise, when the necessary skills are underdeveloped for whatever reason (e.g., inadequate education, difficulty recalling steps, unfamiliarity with a type of graph, insufficient opportunities to regularly use a skill that causes proficiency to degrade), viewers are unable to construct a useful mental model of the visualized information. Inaccurate mental models can lead to confusion and frustration stemming from the inability to effectively use the visualization [72].
Given the potentially varying underlying literacy-related skills and background knowledge that people with Down Syndrome may have, mental models of an HDV constructed by them may involve both bottom-up and top-down information processing. Bottom-up information processing occurs as a viewer takes in the initial visualized information and characteristics of the visual environment [113]. Top-down processing occurs when viewers draw upon the various literacy-based skills and background knowledge to make sense of the visualization [113]. The interaction between these two processes help them to refine their mental model and construct their understanding of the health data.

2.4 Past Work

Although researchers have investigated making data visualizations more accessible for people who have motor impairments, color vision deficiencies, or are blind or have low vision (e.g., [38, 64, 81]), research efforts have not been evenly distributed across populations with other disabilities. Wu et al. began the vital work of investigating data visualization accessibility with people who have IDDs [134]. However, their findings were generalized across multiple, highly heterogeneous IDD populations, each of which can have very different abilities, strengths, and accessibility requirements, both across and within populations.
As previously stated, there has yet to be a study investigating the underlying data visualization reading skills of individuals with Down Syndrome or the nuances surrounding accessible health information visualization. Instead, most research within the educational literature that investigates people with Down Syndrome's mathematical skills tends to orient around developing their numeracy and other functional skills, such as telling time or using money [12]. One study did explore aspects of numerical knowledge and skills within a health-related context: food management [71]. Another found that people with Down Syndrome had similar area comparison abilities to their TD peers when evaluating the size difference between two visualizations of quantities [2]. This study intends to complement existing research efforts [134] by investigating the underlying data visualization skills and abilities particular to people with Down Syndrome within the more specific health context.

3 Methods

Prior to the study, we consulted with four subject-matter experts to inform the study’s design and ensure participant-facing materials were accessible to our study sample. Subject-matter experts included: a self-advocate with Down Syndrome, the medical doctor and director of a Down Syndrome specialty clinic, an occupational therapist at a different adolescent and adult Down Syndrome-specific clinic, and an education professor, whose work focuses on improving mathematics education for students with Down Syndrome. All participant-facing documents, interview questions and study procedures were co-developed with, evaluated by, and revised based upon feedback from our subject-matter expert self-advocate with Down Syndrome. The partnership with the consultant with Down Syndrome resulted in more accessible presentations of all research materials (e.g., recruitment, screening, consent/assent documents, primary interview questions) to facilitate the independent participation and self-advocacy of our study participants (see example in Figure 5). The consulting self-advocate also helped to ensure study procedures were more aligned to the strengths of people with Down Syndrome. The resulting study methods and materials were approved by our university’s Institutional Review Board (ID: #1884885). This section describes our participants, how we chose and designed the health data visualizations, our study procedures, and our data analysis.
Fig. 3. Graph 1—body fat chart for men table (left); Graph 2—daily steps bar graph (center); Graph 3—this week’s macronutrients stacked bar graph (right).
Fig. 4. Graph 4—weekly walk distance history line graph (left); Graph 5—activity intensity line graph with 2 Y-axes (center); and graph 6—A scatterplot comparing the perception of a food’s healthiness by Americans vs. Nutritionists (right).
Fig. 5. Sample interview question presentation format.

3.1 Participants

We recruited participants from social media platforms, large national Down Syndrome organizations, local family groups, and email listservs. Ten participants, who met our study's inclusion criteria (i.e., be 16+ years old, self-report a diagnosis of Down Syndrome, use some form of technology in their everyday life, and be able to verbally communicate), took part in the study. Overall, our participants were young adults whose ages ranged between 16 and 29 years old (mean age: 22.3). Although we did not collect or measure IQ scores in this study, no study participants demonstrated behavior indicating either a severe or profound intellectual disability. Study participants were verbal, understood gestures and emotional cues, and used more complex language (i.e., responses were not limited to a single word and/or gestures to communicate [26]). These behavioral and communication skills indicated study participants may have had either a mild or moderate intellectual disability, which is in line with ID prevalence and severity within the Down Syndrome population [86].
All study participants were located in the continental United States except for one (Emery lives in Australia, see Table 1). Participants decided upon the length of the interview and any breaks they wanted. They also chose whether the interview would be remote or not (In-person: Shiloh, Darcy, Jordan, and Cameron; Remotely conducted: Emery, Harper, Skyler, Jesse, Morgan, and Sloane).
| Pseudonym | Age | Sex | Race | Occupation | Previous Graph Training | Study Partner Status* |
| --- | --- | --- | --- | --- | --- | --- |
| Shiloh | 29 | F | Black | Unemployed | Not Mentioned | F, A |
| Emery | 21 | M | White | Employed | Not Mentioned | F, A |
| Harper | 26 | M | White | Employed | Not Mentioned | I |
| Skyler | 24 | F | Native American & White | Employed | Yes | F, A |
| Darcy | 21 | M | South Asian | Student: University | Yes | I |
| Jesse | 22 | F | White | Volunteer | Not Mentioned | F, A |
| Morgan | 27 | F | East Asian & White | Unemployed | Not Mentioned | O, I |
| Jordan | 16 | M | Black | Student: High School | Yes | F, A |
| Cameron | 17 | M | White | Student: High School | Yes | O, I |
| Sloane | 20 | F | White | Student: High School | Not Mentioned | F, A |

*Study Partner Status: F = Partner FACILITATED communication; A = Partner ADDED to participant's answers; O = Partner ON-SITE only; I = INDEPENDENT participation
Table 1. Participant Information
All participants except for two (Darcy and Harper) had a study partner on-site with them during the interviews (see the last column in Table 1). Participants could choose whether the partner was in the same room during interviews. Study partners took part to varying degrees during the interviews. As they were more accustomed to the unique communication style of each participant, several study partners (i.e., those with the "F" designation in Table 1) would clarify wording or questions to facilitate the interview process if the participant was uncertain how to answer. Occasionally, study partners would comment upon behaviors they had observed (i.e., those with the "A" in Table 1). However, the data we report upon below are taken entirely from the 699 direct participant quotes that we analyzed.

3.2 Graph Selection and Design

All HDVs were either inspired by or borrowed from medical resources, news, or health apps (e.g., [43, 62, 85, 91, 102]). The HDV types (e.g., table, bar chart, line chart, scatter plot) were selected as they are commonly used to illustrate personal health data. The topics on all six charts are related to weight management. The topic was also suggested by the subject-matter expert, who is a leading Down Syndrome clinical researcher. Due to lower basal metabolic rates and the increased prevalence of hypothyroidism within the population, people with Down Syndrome are at an increased risk of being overweight or obese [98].
Participants viewed six data visualizations which had varying levels of complexity (Figures 3 and 4). As we were interested in the realistic experiences of people with Down Syndrome with HDVs, we selected imperfect yet representative visualizations. For example, some graphs included multiple unique differences between certain elements, such as graph #5, which had two y-axes, while other graphs had omissions, such as graph #6, which had no title.
All visualizations, except for the multi-colored Body Fat table, included various design elements (e.g., pictorial additions of icons, symbols, or images) that past work suggested may improve: comprehension and recall [57], memorability [11], working memory and information retrieval speed [56], and data accessibility for people with IDDs [134]. This choice was made to specifically understand how the Down Syndrome population responded to these design features within the HDV context. However, not all elements of graphs were accessible. For example, the Body Fat Chart for Men table had poor color contrast between the background color and the text. This table was selected because of its visual similarity to BMI charts, which are commonly used in healthcare.

3.3 Procedures

All participant-facing research materials were presented in a full-screen slide show with the associated question or text written to better support verbal short-term memory differences of participants [60] (see Figure 5).
In addition to the accessible information format, we followed inclusive research consent procedures involving people with Down Syndrome [132]. Specifically, consent forms were read aloud as the materials were presented to support reading comprehension. We assessed capacity to consent at the same time as we went through the consent form content. Questions about clarity of content were asked after each section (e.g., "Did I explain the section topic clearly?" "Do you have any questions about the section topic?"). We also asked section questions intended to assess whether the four conditions for demonstrating capacity to consent were satisfied: that the participant could (1) clearly and consistently state their desire to be involved, (2) demonstrate understanding, (3) have rational reasons for taking part, and (4) appreciate the risks and benefits of the research study. For example, after the section describing the risks to participation, we asked: "We just talked about a few risks, like having bad feelings. What do you think about risks?"
For individuals who did not demonstrate capacity to consent, a parent or guardian provided consent. Some participants and their study partner opted to complete the consent form independently rather than go through the guided process. When this occurred, the parent or guardian provided either consent or assent.
Interview questions were informed by the various literacy-based skills (described in the Background), which we mapped to the three reading stages (see Appendix A Tables). Questions were also guided by the Visualization Literacy Assessment Test evaluation points [73]. However, we opted not to use the exact questions, as the scale was not developed with people who have Down Syndrome in mind. Instead, we adapted the types of questions asked (e.g., identify extremes, observe trends, etc.) to be more accessible for our population.
During the interview, the HDVs were presented to participants following the same order as listed in Figures 3 and 4 (i.e., table, bar graph, stacked bar, line graph, dual y-axis line graph, scatterplot). As the goal of the study was to understand the reading process and the potential accessibility barriers during the process, the questions asked covered different perspectives of the process. Questions loosely followed a saliency, identification, connection, overall takeaways, and extremes format with the differences and overall trend questions being asked last. We used open-ended saliency questions when we wanted to capture more naturalistic graph reading behaviors of people with Down Syndrome (i.e., when participants could choose for themselves the best way to proceed). The questions aimed at comparing values to identify extremes had the added value of illustrating more authentic reading tasks (i.e., starting with a specific question to find an exact value). The exact question wording of both general and HDV-specific questions are available in Tables 14 and 15 in Appendix A.2. Please note that the middle headers in Tables 2 through 4 indicate the question topic that was asked (e.g., identify title, y-axis label, trend, etc.) except for the pseudonym and overall column headers.
| Pseudonym | Title | X-Axis Label | Y-Axis Label | X-Axis Value | Y-Axis Value | Icons / Images | Overall |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Shiloh | 4/5 (80%) | 1/2 (50%) | 3/5 (60%) | 3/5 (60%) | 4/7 (57.1%) | 5/5 (100%) | 20/29 (69.0%) |
| Emery | 5/5 (100%) | 1/2 (50%) | 2/5 (40%) | 4/5 (80%) | 3/7 (42.9%) | 4/5 (80%) | 19/29 (65.5%) |
| Harper | 5/5 (100%) | 0/2 (0%) | 2/5 (40%) | 5/5 (100%) | 7/7 (100%) | 2/5 (40%) | 21/29 (72.4%) |
| Skyler | 5/5 (100%) | 2/2 (100%) | 5/5 (100%) | 5/5 (100%) | 7/7 (100%) | 5/5 (100%) | 29/29 (100%) |
| Darcy | 5/5 (100%) | 2/2 (100%) | 3/5 (60%) | 5/5 (100%) | 4/7 (51.4%) | 4/5 (80%) | 23/29 (79.3%) |
| Jesse | 5/5 (100%) | 1/2 (50%) | 3/5 (60%) | 5/5 (100%) | 6/7 (85.7%) | 2/5 (40%) | 22/29 (75.9%) |
| Morgan | 5/5 (100%) | 1/2 (50%) | 4/5 (80%) | 5/5 (100%) | 6/7 (85.7%) | 5/5 (100%) | 26/29 (89.7%) |
| Jordan | 5/5 (100%) | 1/2 (50%) | 2/5 (40%) | 3/5 (60%) | 4/7 (51.4%) | 5/5 (100%) | 20/29 (69.0%) |
| Cameron | 5/5 (100%) | 1/2 (50%) | 5/5 (100%) | 5/5 (100%) | 5/7 (71.4%) | 5/5 (100%) | 26/29 (89.7%) |
| Sloane | 4/5 (80%) | 2/2 (100%) | 4/5 (80%) | 4/5 (80%) | 5/7 (71.4%) | 5/5 (100%) | 24/29 (82.8%) |
| Mean Score | 4.8/5 (96%) | 1.2/2 (60%) | 3.3/5 (66%) | 4.4/5 (88%) | 5.1/7 (72.8%) | 4.2/5 (84%) | 23/29 (79.3%) |
| N/A for Graph # | 6 | 2, 3, 4, 5 | 2, 3 | 1 | 1 |  |  |

Table 2. Reading the HDVs: Identification of Health Data Visualization Elements
| Pseudonym | Encode Color | Encode Icon | Extreme High | Extreme Low | Difference | Topic | Trend | Overall |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Shiloh | 2/3 (66.67%) | 4/5 (80%) | 4/7 (57.14%) | 4/7 (57.14%) | 4/6 (57.14%) | 1.5/6 (25%) | 1.5/5 (30%) | 21/39 (53.85%) |
| Emery | 3/3 (100%) | 4.5/5 (90%) | 4.5/7 (64.28%) | 6/7 (85.71%) | 6/6 (100%) | 4/6 (66.67%) | 0/5 (0%) | 28/39 (71.79%) |
| Harper | 2.5/3 (83.33%) | 4/5 (80%) | 3/7 (42.85%) | 4/7 (57.14%) | 2/6 (33.33%) | 3/6 (50%) | 0/5 (0%) | 18.5/39 (47.44%) |
| Skyler | 3/3 (100%) | 4/5 (80%) | 3/7 (42.85%) | 3.5/7 (50%) | 6/6 (100%) | 3.5/6 (58.33%) | 2/5 (40%) | 25/39 (64.1%) |
| Darcy | 2.5/3 (83.33%) | 4.5/5 (90%) | 5.5/7 (78.57%) | 6/7 (85.71%) | 5/6 (83.33%) | 4/6 (66.67%) | 2/5 (40%) | 29.5/39 (75.64%) |
| Jesse | 1/3 (33.33%) | 2/5 (40%) | 2/7 (28.57%) | 1/7 (14.28%) | 3/6 (50%) | 1/6 (16.67%) | 0/5 (0%) | 10/39 (25.64%) |
| Morgan | 1/3 (33.33%) | 3/5 (60%) | 1/7 (14.28%) | 2/7 (28.57%) | 4/6 (66.67%) | 1/6 (16.67%) | 0.5/5 (10%) | 12.5/39 (32.05%) |
| Jordan | 1.5/3 (50%) | 2.5/5 (50%) | 3.5/7 (50%) | 3.5/7 (50%) | 2/6 (33.33%) | 2.5/6 (41.67%) | 0/5 (0%) | 15.5/39 (39.74%) |
| Cameron | 2.5/3 (83.33%) | 4.5/5 (90%) | 5.5/7 (78.57%) | 6.5/7 (92.85%) | 6/6 (100%) | 4/6 (66.67%) | 1.5/5 (30%) | 30.5/39 (78.21%) |
| Sloane | 2/3 (66.67%) | 3/5 (60%) | 2.5/7 (35.71%) | 4/7 (57.14%) | 4/6 (66.67%) | 1/6 (16.67%) | 1.5/5 (40%) | 18/39 (46.15%) |
| Mean Score | 2.1/3 (70%) | 3.6/5 (72%) | 3.45/7 (49.29%) | 4.05/7 (57.86%) | 4.2/6 (70%) | 2.6/6 (43.33%) | 0.9/5 (19%) | 20.9/39 (53.59%) |
| N/A for Graph # | 2, 4, 6 | 1 | 1 |  |  |  |  |  |

Table 3. Reading between the HDVs: Comparing and Observing Relationships between Elements
| Pseudonym | Interaction Potential | Interactive Expectations | Information-Seeking | Changes to Behavior | Overall |
| --- | --- | --- | --- | --- | --- |
| Shiloh | 2/6 (33.3%) | 2/6 (33.3%) | 2.5/6 (41.67%) | 0/6 (0%) | 6.5/24 (27.08%) |
| Emery | 5/6 (83.33%) | 5/6 (83.33%) | 3/6 (50%) | 1/6 (16.67%) | 14/24 (58.33%) |
| Harper | 1/6 (16.67%) | 0/6 (0%) | 0.5/6 (8.33%) | 0/6 (0%) | 1.5/24 (6.25%) |
| Skyler | 6/6 (100%) | 6/6 (100%) | 4.5/6 (75%) | 3/6 (50%) | 19.5/24 (81.25%) |
| Darcy | 3/6 (50%) | 6/6 (100%) | 1.5/6 (25%) | 6/6 (100%) | 16.5/24 (68.75%) |
| Jesse | 1/6 (16.67%) | 5/6 (83.33%) | 2/6 (33.3%) | 2/6 (33.3%) | 10/24 (41.67%) |
| Morgan | 0/6 (0%) | 0/6 (0%) | 0/6 (0%) | 3/6 (50%) | 3/24 (12.5%) |
| Jordan | 1/6 (16.67%) | 0/6 (0%) | 0/6 (0%) | 0.5/6 (8.33%) | 1.5/24 (6.25%) |
| Cameron | 3/6 (50%) | 6/6 (100%) | 1.5/6 (25%) | 0/6 (0%) | 10.5/24 (43.75%) |
| Sloane | 3/6 (50%) | 4/6 (66.67%) | 0/6 (0%) | 1/6 (16.67%) | 8/24 (33.3%) |
| Mean Score | 2.5/6 (41.67%) | 3.4/6 (56.67%) | 1.55/6 (25.83%) | 1.65/6 (27.5%) | 9.1/24 (37.92%) |

Table 4. Reading beyond the Data: Connecting Data across the HDV with Outside Information
Although we did not perform member-checking, researchers asked follow-up questions when a participant’s speech or wording was unclear. We also repeated what participants said back to them to either clarify or confirm their responses as well as to ensure the data accurately recorded the intended meaning of participant statements during the interviews.

3.4 Data Analysis

As the goal of our research was to understand how people with Down Syndrome read, interpret, and, ultimately, construct meaning from HDVs, we used constructivist grounded theory to analyze the semi-structured interviews in order to identify future accessibility design requirements [21]. We concurrently collected, reviewed, and analyzed study data. All interviews were transcribed verbatim. We identified excerpts, and discussed initial codes and emergent themes collaboratively. Interview data was supplemented with researcher memos and interview session observations. We used comparative analysis to identify patterns, consistencies, and differences. As the study and our analysis progressed, concepts were refined into initial codes, which were then collapsed into categories during focused coding to generate more abstract concepts and theories regarding people with Down Syndrome’s graphicacy skills and abilities. We drew upon various three stage models for data visualization literacy to provide the underlying structure of our resulting health graphicacy theory (see Section 5 for discussion).
The grounded theory methodological practice of regular comparison during inductive data analysis is intended to reduce researcher bias [27]. However, there is still a chance that our personal and professional backgrounds may have affected our analysis of our data [22]. Although we collectively have nearly two decades of experience working with the Down Syndrome population and we regularly volunteer with Down Syndrome organizations, we are not members of this community. As a team of typically developing academic researchers that focus on technology accessibility, we viewed our data through a critical disability lens that uses the asset-based, social model for disability rather than the deficit-based, medical model of disability [6].

4 Results

This section details the experiences of ten young adults with Down Syndrome as they reviewed and completed seventeen tasks for each of the six realistic health data visualizations during our interview study. In total, we analyzed 11 hours and 15 minutes of interview data. The duration of individual interview sessions ranged from 30 minutes to 2 hours. We report upon participant performance during each stage: initial HDV saliency (4.1.1) and identifying HDV elements (4.1.2) in the next two subsections, observing and comparing the relationships between HDV elements (4.2), and connecting those elements and relationships with other information (4.3).
Although we employ the three graphicacy stages in both the results and discussion, this is done to provide structure, not to imply that constructing one's understanding of an HDV is a linear process. The three stages are used to describe the activities involved to accomplish each stage's specific goal and potential barriers to those activities. Rather, the process of constructing one's understanding of an HDV is a combination of bottom-up and top-down processes that occur during the interplay between the external HDV and the individual's interpretation of the information, as viewers construct and revise their mental model of the HDV over time.
Additionally, the authors would like to note that the downward cascade in participant performance throughout the various graph reading activities indicated the negative impact of HDVs that did not provide adequate support for people with Down Syndrome as they read and attempted to make sense of the information presented. In other words, a "low" performance does not necessarily indicate the limits of people with Down Syndrome's abilities; rather, it demonstrates how shortcomings in HDV design can introduce unnecessary barriers to effective inference-making, understanding, and engagement with health information.

4.1 Reading Health Data

As described in the background, reading health data visualizations requires sufficient competencies with data, information, numeracy, and print literacies. These literacy-related skills are requisite to notice what is salient, identify the various graph component elements, and recognize if there is any missing information that is necessary to read the HDV. Below we describe people with Down Syndrome’s saliency and sense-making abilities, their identification skills, and some accessibility barriers that can arise during the first stage of reading health data visualizations.

4.1.1 Saliency.

The foundation of reading any graph or chart begins with identifying the various elements that make up the health data visualization. We began our interviews by asking participants open-ended saliency questions, such as "What is the first thing you see?" During the second saliency question, the researcher physically covered their own eyes with their hands and said "I am closing my eyes now so I can't see." The researcher then asked participants to describe everything they were seeing in the graph. The benefit of taking a saliency-first approach in the interview was two-fold. First, the saliency questions showed us generally where the participant was looking and in what order they read the graph elements (i.e., how they naturally read graphs without any kind of structure). Second, the act of verbalizing provided insight into the saliency judgments they were making (e.g., an element's relevance, importance, noticeability, etc.). This combination demonstrated how people with Down Syndrome initially read health graphs without any supports or guidance.
Participants described the HDVs in four different ways (see Appendix A.3 for all saliency responses). First, they made generalizations based on their initial assessments of the information presented, compared different types of data or regions in the HDV, categorized types of data, and made judgments about the HDV. For example, upon seeing the different images of food in the scatterplot, most participants generalized that the data was split into two groups, healthy and unhealthy foods and beverages. Second, participants called out specific types of graph elements. These could include the type of visualization (i.e., bar or line graph), the use of icons or images, or the presence of words, numbers, or dates (e.g., "I see a lot of numbers" in the table). Third, participants vocalized elements' specific text content, such as the title verbatim or an actual number value (e.g., "2.0") in the HDV. The last type of observation that our participants made was noting the various descriptive qualities (e.g., the color, size, or position) of HDV elements.
All participants made multiple combinations of descriptions for every HDV. This demonstrated that every participant was capable of varying levels of abstract thinking upon seeing an HDV for the first time. The number of observations verbalized also differed between participants. The quantity and locations of observations similarly indicate different graph reading patterns across our participants. It also showed the visual path they were taking, where they visually focused on a region or if they returned to a region or element more than once. As such, the order of the observations likewise suggests varying levels of ability in effective scanning of information.
When initially reviewing the HDV, participants often leaned upon the skills they were strongest in. For example, most participants employed their print literacy skills first. Many participants demonstrated a tendency to read specific text elements first, usually from the largest to smallest text size. This was followed by participants verbalizing items that had sufficient color contrast, as this facilitated quick and easy recognition of information. After the written text, participants demonstrated a tendency to call out either familiar icons or images or specific colors when present in the visualization. Next, participants verbalized numerical information. The order of numerical elements verbalized similarly followed largest to smallest size and strongest to weakest color contrast. This suggests that familiarity and confidence in various skills may influence what is most salient and in what order this population reads HDVs. In other words, many individuals were immediately drawn to the HDV elements they were confident in their ability to make sense of (e.g., print, numbers). This familiarity-first behavior suggests some viewers with Down Syndrome may employ a combination of top-down (i.e., long-term knowledge and skills) and bottom-up (i.e., taking in stimuli without context) information processing as they interpret the information to construct their understanding of an HDV at the earliest stage of graph interaction.
As people with Down Syndrome can struggle to express themselves, it is worth noting that several participants frequently avoided saying words or numbers that they struggled to say audibly. For example, many participants struggled with saying the word "macronutrient" in the stacked bar chart. Instead, they talked around the word, were hesitant to say it, or omitted it (e.g., "Whatever that long word is called" [Harper], "It's a bit hard to say" [Morgan], "This week's–not–uh, I–I don't know" [Jordan]). However, participants struggled less when they broke down the syllables. When Emery got to graph #3, their study partner covered up the syllables as they read them, allowing them to gradually read the word. This may signal an accessible design opportunity for long words or jargon in HDVs with populations who may struggle to visually parse multisyllabic words (e.g., "This week's Mac-ro-nu-tri-ents").
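As a concrete illustration of this design opportunity, the sketch below shows one way an HDV label renderer could insert visible syllable breaks into long terms. It is a minimal sketch of our own, not an implementation from any existing health application; the syllable dictionary and function name are illustrative assumptions (a real system might instead use a hyphenation library or a clinician-curated word list).

```python
# Hypothetical sketch: render multisyllabic jargon with visible syllable breaks.
# The syllable dictionary is a stand-in for a hyphenation library or curated list.
SYLLABLES = {
    "macronutrients": ["Mac", "ro", "nu", "tri", "ents"],
}

def chunked_label(text: str) -> str:
    """Insert hyphen breaks into any word found in the syllable dictionary."""
    out = []
    for word in text.split():
        syllables = SYLLABLES.get(word.lower())
        out.append("-".join(syllables) if syllables else word)
    return " ".join(out)

print(chunked_label("This week's Macronutrients"))
# -> This week's Mac-ro-nu-tri-ents
```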
Similarly, verbalizing numerical information also highlighted the numeracy issues people with Down Syndrome can experience. There appeared to be an issue when reading long number place values. Past work has suggested that dyscalculia is often a part of the Down Syndrome behavioral phenotype [30]. In our study, number-reading errors occurred when participants encountered extra characters, such as decimals separating the whole and fractional numbers, commas indicating higher-level number place values (i.e., tens, hundreds, thousands), or dashes used in ranges. For example, Emery read the age range "31–35" as "135" in the first table. When Jesse said, "18 and 20, and 20 and 39, and 62, 85," they were actually reading the age range (18–20), the first two cells (2.0, 3.9) in the blue Lean region, and the first two cells (6.2, 8.5) in the green Ideal table region. Similarly, Cameron read the total value of 63,451 in the bar chart as "Six thousand–six, five, thirty-four, fifty-one." Skyler had to correct themselves when reading the average number of steps "9,600–9,064." These may be number articulation errors caused by special characters creating visual shifts as participants read the number and encoded each digit's place value.
Invisible number lines could also be a stumbling block for some participants. Jesse, for example, could not find their age: "I'm 22. but 22's not on here." According to the table, Jesse would fall into the 21–25 age bracket. In this example, the use of age ranges requires a viewer to have both sufficient working memory and numeracy skills to recognize the invisible number line within a range of numbers. Increments on the X- and Y-axes may also introduce invisible number line barriers. On the "Weekly Walk Distance History" line graph, Darcy observed: "I see that the graph skipped some numbers. It starts from 2.0 and then it is counting by five. Five, zero, five." Sloane struggled to articulate their frustration with the increments in the y-axis of the macronutrients stacked bar chart. They referred to these skipped numbers as categories: "I see 100 category. I see two–2,000 in categories and three cat–category and forty catty–category, ach!"
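One hedged sketch of how a digital HDV might lower this invisible-number-line barrier is to resolve an exact value to its bracket for the viewer rather than leaving the range inference to them. The bracket list below is partial and assumed from the age rows mentioned above; the function name is illustrative, not part of any existing system.

```python
# Illustrative only: map an exact age to a dash-separated age bracket so the
# viewer does not have to infer which range contains their age.
AGE_BRACKETS = [(18, 20), (21, 25), (31, 35)]  # partial; rows mentioned in the interviews

def bracket_for(age):
    for low, high in AGE_BRACKETS:
        if low <= age <= high:
            return f"{low} to {high}"
    return None

print(bracket_for(22))  # -> 21 to 25, the row Jesse falls into
```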

4.1.2 Identifying Health Data Visualization Elements.

An essential part of reading a health data visualization is the ability to identify the various graph component elements (i.e., title, axes labels and values, icons, images). HDV element identification requires sense-making, spatial awareness, and effective scanning abilities. Additionally, interaction with a data visualization is not a linear process [17]. Instead, someone's interaction with and understanding of an HDV is continually refined as more information is visually sensed and encoded and mental models for the data are iteratively revised and updated. Table 2 reports upon participant performance on each sub-task within this first stage of HDV reading. For each graph and each corresponding question, correct answers earned one point and incorrect answers earned zero points.
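To make the scoring explicit, the brief sketch below reproduces the arithmetic behind Table 2's Overall column using Shiloh's row; the code is only an illustration of how the reported percentages are derived, not an artifact of the study.

```python
# Scoring arithmetic behind Table 2, using Shiloh's row:
# each entry is (correct answers, questions asked) for that HDV element.
scores = {
    "title":        (4, 5),
    "x_axis_label": (1, 2),
    "y_axis_label": (3, 5),
    "x_axis_value": (3, 5),
    "y_axis_value": (4, 7),
    "icons_images": (5, 5),
}

correct = sum(c for c, _ in scores.values())  # 20
asked = sum(n for _, n in scores.values())    # 29
print(f"Overall: {correct}/{asked} = {correct / asked:.1%}")  # Overall: 20/29 = 69.0%
```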
Overall, the participants performed well when identifying graph elements such as the: title, X-axis labels and values, Y-axis labels and values, values within the visualization, and various symbols, icons, or images. They also were able to identify the various stylistic elements such as shape, color, and size. Six of the ten participants were able to identify more than 75% of the various HDV component parts. The remaining four were able to identify between 65.5% to 72.4% of the elements. This suggests that the graph perceptual, sense-making skills of people with Down Syndrome are relatively strong during early health data visualization reading identification activities. This is in line with past work that found people with Down Syndrome to be strong visual learners [41].
4.1.2.1  The X- and Y-Axes. The X- and Y-axes labels scored the lowest during graph element identification. This appeared to occur, in part, because of a behavioral reaction to pictures when participants were strongly influenced by existing health knowledge, particularly in the scatterplot (graph #6). The scatter plot’s use of strong, realistic pictures appeared to reinforce our participants’ existing nutritional understanding. As a result, the X- and Y-axes labels were often ignored. This visual disregard may have occurred because photographs of food were used. Viewers appeared to fixate upon the highly familiar, concrete data points rather than noticing what was actually being compared: the percentage of Americans versus nutritionists who said whether a food or drink was healthy or not.
Participants demonstrated an observable tendency to first notice the X-axis, followed by the Y-axis on the left, when reading graphs. However, many participants (60%) failed to notice that an additional Y-axis was included in graph 5. Only three participants [Harper, Skyler, and Sloane] noticed both the label and the percentage values when a dual Y-axis was present. This behavior may indicate that some participants were unfamiliar with the procedures for reading graphs whose axes differ in number and location.
4.1.2.2  Icons vs. Images. In the HDVs that used icons (graphs #2–5), participants verbalized more descriptive qualities (e.g., colors, or data types such as numbers or words) when generalizing what they were seeing, in addition to a greater number of specific HDV details as they took in the information. Conversely, in the scatter plot HDV, most participants (80%) used more generalized descriptions of large categories of information. It appeared that the use of images of various foods and drinks, combined with an unfamiliar graph type, caused them to rely more upon their understanding of nutrition. This resulted in broad generalizations about the data points and the visualization as a whole that were directly informed by their pre-existing nutritional knowledge. Participants also categorized and grouped the more familiar imagery (e.g., foods and drinks, healthy and unhealthy, “fat stuff” and “too much salty” [Morgan]). This kind of data categorization and grouping can support estimation abilities [119]. Participants, like Cameron, made other associations with what they were seeing as well: “I see different types of ... foods in the kitchen, oven and stuff and the cups.”
Most participants (80%) skipped much of the individual data point identification they had demonstrated in previous HDVs. Instead, many went straight to making judgments about what was being depicted. This suggests that mental models of HDVs could become less flexible when pictures are used on their own, without additional elements that support accurate inference-making, because assumptions are made based solely on the familiarity of the images and initial impressions of the data. The confidence produced by the combination of a familiar presentation and topic, which simultaneously reinforced their existing nutritional knowledge, appeared to make participants less visually critical of the remaining HDV elements and of what the relationships between data points could mean, especially if those relationships conflicted with their existing understanding of the topic being visualized.

4.2 Reading Between Health Data

Reading between health data in visualizations requires even more skills (e.g., health data literacy and numeracy skills, print literacy, abstract and spatial reasoning, and ratio-processing abilities) to effectively interpret visualized health information. These skills are critical for HDV viewers to: (1) encode and map the information and (2) compare values. This section details the inference-making skills of people with Down Syndrome as they encode, map, connect, and compare the visualized health data.
Table 3 reports participant performance on each sub-task within the second stage of HDV reading. During the various graph reading connection activities, performance declined by 32.42% relative to the identification activities. Only two individuals were able to satisfactorily answer more than 75% of the questions in this stage. Three fell into the third quarter (50–75%) and the remaining half of the participants scored 50% or less. These results point to potential accessibility and HDV design opportunities that better support people with Down Syndrome as they connect, compare, and interpret graphs. Please note: partial points (i.e., .5) were awarded when participant answers were close, but not entirely correct.

4.2.1 Encoding and Mapping Information.

Mapping information in HDVs generally consists of connecting the identified component elements of the graph’s anatomy to each other and encoding the meaning of visual attributes. Mapping information in this way supports viewers’ ability to assign meaning to each connected element and update their overall understanding of the visualization. In this subsection, we describe the two visual attributes these HDVs employed to support interpretation of categorical information encoding: color and images.
4.2.1.1  Color Encoding and Meaning Mapping. Color is typically used to distinguish categorical information by grouping elements so viewers can more easily identify similarities and differences. Many systems designed for people with Down Syndrome or other IDDs use color-coding to indicate more than just groups. Some nutrition-oriented health apps use the color metaphor of a stop light to indicate a food’s health status (e.g., [71, 101]). However, graphical properties, like color, are not equal in their ability to accurately communicate meaning. Other channels, such as spatial region, position, length, angle, and size, are more effective [25, 84].
Three graphs used color encoding: the table (#1), the stacked bar (#3), and the line graph with two y-axes (#5). The table used both colors and labels to associate each color with its body fat category (i.e., blue = lean, green = ideal, yellow = average, red = above average body fat percentage). In the line graph, color coding was used to indicate the intensity of a physical activity and to visually link the heart rate and effort y-axes together. Finally, the stacked bar used color to indicate the healthiness of a macronutrient using the stoplight visual metaphor, to distinguish between the three values, and to visually link the macronutrient label with the icon (i.e., lettuce = healthy carbs, leg of meat = protein, butter = fats).
When color-coding did occur, 30% of the participants associated no meaning with it. Instead, viewers, like Jesse, inferred that color was simply a stylistic choice: “it means the colors ... like different kinds of colors. It’s blue, green, yellow, red.” The proportion of participants who associated no meaning with color was higher during the first half of the interview than after they had spent more time engaging with the HDVs. Harper was the only one who explicitly stated that the graphs were “color-coded.”
In the table, many participants recalled a pre-existing color association, “Normally, the green, yellow, and red means ... Stop, Slow, Go. But [I’m] not sure about the blue” [Harper]. When other colors did not also map to the metaphor, like the blue, confusion occurred. For Jordan, green was “good,” yellow was “bad,” and red was “very bad.” However, they instead associated blue with the affective state of “sad,” which may indicate a color metaphor-mismatch occurred. This inaccurate encoding made interpreting the graph more difficult. Typically a high-performer, Cameron did not notice the labels at all. Instead they interpreted the color as corresponding with the size of the region. While Sloane did connect blue with its lean label, they said green was “healthy” and red was “really bad.” They described yellow in terms of foods that were both healthy and yellow-colored.
Graph #5 similarly indicated that more explicit mapping between color and meaning is necessary for HDVs to be accessible to people with Down Syndrome. High-performers were able to work out how color was used. Skyler observed that colors were the “different kind of colors of different beats in your heart.” Similarly, Cameron was able to connect the title and associate the color with its gradation: “The colors mean how–how deep is the intensity... [Green] means like not that–not that intense. Dark red means that it’s that extreme amount in the intensity.” However, imperfect color encoding and mapping still occurred 45% of the time. Jordan associated the color with the icons rather than with intensity levels. This suggests that, despite the presence of labels, color alone is not strong enough to ensure accurate mental connections when multiple implied meanings are possible.
The stacked bar chart had the highest level of correct color mapping at 95% accuracy. Even participants who consistently struggled [Shiloh, Emery, and Harper] were able to connect both the color and icon encodings when they were reinforced with familiar, distinct imagery that matched their understanding of health information and had a label to support mapping. However, Jordan’s concrete associations with the food icons overrode a fully accurate color interpretation: “[Red is] bad food. [Yellow is] good food. [And green is my] favorite food.” This may signal the strength of lived experience in informing HDV interpretation.
4.2.1.2  Image Encoding and Meaning Mapping. Five of the six HDVs used icons or pictures to support comprehension as suggested by previous accessible visualization recommendations [134]. While icons did support most participants’ connection between graph elements, some participants did not like the use of icons. Darcy, who self-reported that they were very familiar with reading graphs, preferred a “dot” instead of the shoe icon. As such, there are some caveats for how those icons could be incorporated into visualizations intended to be more accessible to people with Down Syndrome within the health context.
Appropriate image selection is critical. Sloane initially thought the vertically stacked shoe icons in the Daily Steps bar graph were a “shoe store,” while Cameron extrapolated that “those lo–logos represents [the] amount of di–distance” in the Weekly Walk Distance line graph. Others, like Jordan, associated the shoe icons not with steps but with physical activity (i.e., “walking”) in the bar graph. However, for Darcy, the shoes were simply shoes, and the caution sign meant nothing to Morgan in the dual Y-axis line graph. Sloane interpreted the lettuce icon in the stacked bar chart as a green brain, which, in turn, impacted their connection of the icon to its healthy carbohydrates label.
Graph complexity can also interfere with accurate icon encoding. The use of multiple icons in a single graph should be carefully considered, particularly if there is no label to provide a redundant encoding for bundling connections between elements. In the fifth graph, the different icons were intended to visually represent a change in activity intensity, reinforcing both position on the graph and the color encoding of the graph background, but the icons did not map clearly to their meanings. As a result, the complexity and use of multiple icons in the dual y-axis line graph contributed to the icons being accurately encoded only 45% of the time–the lowest of any graph.
Icon encoding can take time to process. Like several others, Morgan did not initially verbally associate the shoe icons with steps. However, after spending time answering questions about the bar graph, those participants persistently associated the shoe icons with steps across all three graphs that used the shoe icon. This was interesting as the meaning of the shoe icon changed with every graph that used it (i.e., bar graph #2 = steps; line graph #4 = distance; dual y-axis line graph #5 = mid-level intensity). Re-using an icon once an association had been made caused participants, like Morgan and Harper, to consistently carry that earlier meaning over during icon encoding. Other icons also had pre-existing associations, which similarly affected correct meaning-mapping. The use of hearts and a warning sign to indicate higher heart rate zones meant something different for Harper, Morgan, and Jordan, who interpreted the hearts as “love” or “falling in love” and the triangular warning symbol as “person” [Harper], “danger” [Shiloh and Sloane] or a “danger zone” [Darcy].
The results from this and the previous sections signal a design opportunity for accessible HDVs: more explicit inference-making features that support viewers in connecting graph elements to channel encodings, such as color or icons.
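As a sketch of what such a feature could look like, the snippet below bundles each category with a color, a text label, and an icon description so that no single channel carries the meaning alone. The specific color-to-macronutrient assignments are illustrative rather than drawn from the study’s stimuli:

```python
from dataclasses import dataclass

@dataclass
class CategoryEncoding:
    color: str     # visual channel
    label: str     # redundant text channel
    icon_alt: str  # redundant icon description (also usable as alt text)

# Illustrative assignments; the study's stacked bar linked lettuce, meat, and
# butter icons to healthy carbs, protein, and fats, respectively.
MACRONUTRIENTS = [
    CategoryEncoding("green", "Healthy carbs", "lettuce leaf"),
    CategoryEncoding("yellow", "Protein", "leg of meat"),
    CategoryEncoding("red", "Fats", "stick of butter"),
]

def legend_entry(enc: CategoryEncoding) -> str:
    """State the color-label-icon link in words instead of leaving it implied."""
    return f"{enc.label}: shown in {enc.color}, marked with a {enc.icon_alt} icon"

for enc in MACRONUTRIENTS:
    print(legend_entry(enc))
```

Spelling out each link in a legend or caption gives viewers an explicit bridge between a channel and its meaning instead of relying on an inferred metaphor.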

4.2.2 Comparing Information.

Comparing information in HDVs involves both visuo-spatial and abstract reasoning as well as ratio-processing abilities to recognize patterns, similarities, differences, extremes, anomalies, and ultimately connect the various elements together to infer the overall trend. This section describes how people with Down Syndrome compared high and low extreme values. It also reports upon the differences participants noticed when viewing HDVs. The number and kind of differences indicated varying levels of cognitive flexibility among our sample when recognizing patterns, generalizing, and grouping.
4.2.2.1  Comparing Extreme Values. The extreme value questions illustrated authentic reading tasks in which the viewer begins with a specific question and must find an exact value (see Tables 14 and 15 in Appendix A.2 for the exact HDV-specific questions asked). Overall, participants performed moderately well when comparing data to determine extreme values. They were able to accurately discern the high values roughly half of the time: Table (60%), Bar (50%), Stacked Bar general values (50%), Stacked Bar specific values (60%), Line (45%), Dual Y Line (45%), and scatter plot (45%). They performed somewhat better when judging low values: Table (85%), Bar (70%), Stacked Bar general values (50%), Stacked Bar specific values (60%), Line (50%), Dual Y Line (55%), and scatter plot (40%). One reason for the moderate performance may be that participants had to use different procedural skills to locate extreme values across the six HDVs.
For example, the table (HDV #1) required effective scanning of a large amount of numbers, some with poor color contrast. While Shiloh generalized to entire regions, some participants, such as Morgan and Sloane, instead visually fixated and answered within their field of vision when the question was asked. The table also required participants to recognize patterns across the overall numbers to find the location of extremes. The values increased from top-to-bottom and left-to-right with the lowest in the upper-left and the highest in the bottom-right. There were also table reading procedural issues for Jesse and Sloane, who answered with age values, which were literally the highest number visible to them.
HDVs #2-5 required participants to visually track between the graph’s axes and the individual values. During this visual back-and-forth, participants engaged their ratio-processing system to visually compare the differences between values. The brain’s ratio-processing system attends to ratios of difference between non-symbolic values (i.e., not number symbols, but shapes or areas) [82]. As HDVs are visualizations of non-symbolic values, comparing between them requires noticing fractional differences between the information represented. When the contrast between values is great, it requires less cognitive effort. When it is smaller, it can increase a viewer’s cognitive load (Figure 6). After this comparison has occurred, they then must keep track of each of the value judgments in their visuo-spatial working memory until they find their answer.
Fig. 6. Ratio-processing: equivalent values represented as symbolic numbers and as non-symbolic areas.
Comparing the lowest values in the bar graph required the least cognitive effort for ratio-processing, which is consistent with most participants answering this question correctly. However, more sensitive ratio-processing skills were necessary for the high values in the bar graph and for both the overall high and low extremes as well as the specific macronutrient type extremes in the stacked bar graph. Half of the participants struggled to correctly determine the highest value in the Daily Steps bar chart. Saturday (i.e., the correct answer), Tuesday, Wednesday, and Thursday all had very similar values. Because of this lower contrast ratio, four of those five participants (Emery, Harper, Darcy, and Jordan) answered with one of the visually similar, yet incorrect, values.
Ratio-processing skills were taxed in the stacked bar graph when participants were asked to determine the extremes of specific macronutrient types (i.e., highest protein and lowest fat). Unlike the bars in the bar graph, which all started at the same level on the x-axis, the fat and protein areas were more difficult to compare because they sat on top of differing healthy carbs values. Being stacked on an uneven base appeared to impact participants’ abilities to effectively process and differentiate between the lower contrast, fractional differences in the sizes of the protein and fat rectangles. This suggests an accessible visualization design opportunity to highlight when values have low ratio contrast and to reduce unnecessary impacts on the viewer’s cognitive load.
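A minimal sketch of how such a feature might work follows; the step counts and the 10% threshold are hypothetical and only meant to show the flagging logic:

```python
# Hypothetical daily step counts echoing the Daily Steps bar chart.
DAILY_STEPS = {
    "Sun": 7200, "Mon": 8100, "Tue": 9800, "Wed": 9700,
    "Thu": 9650, "Fri": 6900, "Sat": 10050,
}

def low_contrast_competitors(values: dict, threshold: float = 0.10) -> list:
    """Return the bars within `threshold` relative difference of the maximum bar."""
    top_day = max(values, key=values.get)
    top_val = values[top_day]
    return [
        day for day, val in values.items()
        if day != top_day and (top_val - val) / top_val < threshold
    ]

# Bars this close to the maximum are candidates for explicit annotation
# (e.g., printing their exact totals) rather than relying on area judgment alone.
print(low_contrast_competitors(DAILY_STEPS))  # -> ['Tue', 'Wed', 'Thu']
```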
When participants hit their ratio-processing limit, they reverted to a more familiar graph reading strategy of looking at the top of each shape to determine the highest or lowest values. All four of the participants (Emery, Harper, Darcy, and Cameron) who incorrectly answered Thursday as the day with the most fats did so because Thursday had the most overall grams; visually, its bar reached the highest overall position. Similarly, when Shiloh was uncertain how to judge the highest and lowest values in the Daily Steps bar graph, they reverted to their stronger number reading skills, as they seemed less confident in their ratio-processing abilities. In both instances, Shiloh answered with the smaller numerical metric underneath the title: “Yeah, so the highest and the lowest: 63,451. And the lowest is the nine thousand. No, no. Nine hundred, sixty-four ... so–the highest and the lowest.”
While the imagery was the most familiar, the procedural skills required to read a scatterplot were the most foreign to all participants. Several participants leveraged other graphicacy skills they felt more confident using as they interpreted the nutritional data. For example, two participants reported that kale [Darcy and Sloane], which was visually in the highest position at the top of the graph, was the most healthy. Other participants relied upon their health literacy and were instead influenced directly by their nutritional knowledge. In the scatter plot, participants’ broad generalizations were impacted by their existing understanding of nutrition: “the highest food is the healthy foods” [Skyler], “the veggies up top” [Emery] or “the fruit and vegetables” [Harper]. Shiloh said the “junk. It’s ice cream–the desserts” were the least healthy in the scatterplot. Harper echoed this assessment by generalizing with “all the junk.” How participants compared values therefore suggests that when individuals with Down Syndrome felt less confident in their interpretive procedural skills, they switched to more familiar skill sets and prior knowledge to interpret HDVs. In other words, they used the same tactics as typically developing populations do when they are unsure how to interpret a data visualization.
4.2.2.2  Comparing Differences. The ability to effectively compare values within an HDV is critical to notice patterns within the data, where values diverge from each other, and what anomalies or outliers there may be. Effective comparative skills support the viewer’s understanding of the data by examining the relationships between values. Comparing the relationships between and across the HDV is foundational to interpreting patterns on the micro (e.g., [in]consistency of a performance instance) and macro level (e.g., overall trend across multiple instances).
All of the participants were able to compare data and demonstrated varying levels of cognitive flexibility when answering this question. Skyler showed the highest level of flexibility when mentioning differences in HDVs–a total of 18 differences across the six graphs. Cameron was the second highest at twelve and Emery reported eight. The lowest was Harper, who noticed three differences.
Participants verbalized multiple kinds of differences. The types of differences mentioned were: the individual shapes, differences between regions, overall trend across the visualizations, colors, and images. The most commonly reported difference type was the various kinds of data (i.e., text, numbers, axis labels and values, specific graph values). This once again highlights our sample’s tendency to rely upon and leverage their strongest skills (i.e., reading) when doing an unfamiliar task like verbalizing differences of data representations.

4.2.3 Connecting Relationships across HDV Elements.

After an individual has encoded and mapped meaning between the elements of the HDV and has compared values to get the gist of the data, the viewer will then connect those component elements together. Connecting information allows viewers to make sense and begin to understand the overall nature of the HDV. Creating mental relationships between the information allows the viewer to understand: (1) the topic and (2) the overall trend.
Although most participants performed moderately well throughout the entire interview, performance dropped dramatically as many participants struggled to connect the various elements and aspects of an HDV to synthesize a coherent understanding of the information represented. These results support previous work which found that while people with Down Syndrome understand abstract information, differences in their working memory can make managing too much information with too many relationships at the same time a challenge [20, 60].
4.2.3.1  Identifying the Topic. Only a quarter of the participants were able to connect the various graph elements, aspects, and information to the overall topic of the table (HDV #1). A little less than half (45%) of the participants were able to get the overarching topic of the bar chart, the stacked bar, and the dual Y-axis line graph. The HDVs with the highest levels of connection between the data and the topic were the line graph and the scatterplot at 50%. It is worth noting that unfamiliarity with the scatterplot graph type led to everyone generalizing. Everyone got partial credit for verbalizing that the graph depicted the overall topic of healthy and unhealthy foods and drinks. However, no one was able to provide the more nuanced answer: the scatterplot was comparing the perceptions of healthiness of food and drink items judged by nutritionists versus the average American.
Cameron, Darcy, Emery, and Harper were the most consistent individuals to succinctly synthesize and summarize many of the HDVs. Harper described the table being: “about the ages and the percent of the fat.” Emery connected the HDV to their everyday life, which made connecting the data to the topic much easier: “It is called steps for–same as my watch! It tells you, like, activities in there.” Cameron recognized that the macronutrient stacked bar graph was about: “the grams of, like, the amount of food. And the food has different categories because carbs, protein, and fats.” Emery described the line graph as: “It looks like a snake. That is how many walks have you done. ... how much you’ve done it–of the walk distance history.” Cameron summarized the topic of the dual Y-axis line graph by saying: “it’s all about the intensity in the activity, um, it tells you ... the times at the bottom. Um, is telling you about different times of the levels of the activity intensity.”
A mixture of partially correct and incorrect answers indicated that participants were influenced by their personal understanding of health, exercise, and nutritional knowledge. For example, Skyler said the table was about: “how much you eat ... It has different kind of colors of what–what the–the healthiest things that you can eat. That’s it in my head.” Like Jesse, Darcy described the Body Fat table as: “different pounds of weight you have and also about your losing.” Their response was informed by the table’s visual similarity to the much more familiar BMI chart often seen in doctors’ offices. Emery recalled the food pyramid when they saw the stacked macronutrients bar graph: “it’s ... like, um, a food triangle one that is ... And then there’s a different one. Different one is, like, that equals healthy one, protein, and fat.”
When participants answered incorrectly, most responses consisted of describing and identifying elements and aspects of the HDV rather than connecting everything together. For example, Sloane described the stacked bar as: “combine as healthy. And it will really combine to protein and fat” and counted the number of data points in the dual y-axis line graph, which had “22. There’s 22 times. It’s something measuring from, uh, to 80. Uh, maybe 75. ... all about, uh, the line of, um, gray mark.”
The issue of multiple blocks of easier-to-read text came up during the topic question as well. For some participants, seeing these text blocks did cause some incorrect connections. “It’s graph about the weekend because it started on, uh, every month. Like, um, Jan–January, Feb–February, March, April, and May. Hmm... new year. I don’t know” [Sloane]. “It’s the steps been taken in the years—since the numbers” [Harper].
4.2.3.2  Identifying the Overall Trend. Trend identification was the most difficult question for our participants. Most participants struggled to connect how each of the individual data points worked together to describe the overall trend (i.e., how the parts describe the whole).
Participants answered correctly the most often when the trend was obvious. Participants performed the best in the Weekly Walking Distance History line graph at 40% correct identification of the upwards trend. Several participants actually described the overall trend as they were connecting the graph elements to the topic. “This graph is describing that the walk distance is increasing” [Darcy]. Cameron described the red line as: “a snake goes up. ... all the logos in the snake that–that goes up th–those lo–logos represents amount of d–distance.” When Sloane couldn’t find the words, they instead vocalized the change: “the highlighted [line] And it goes “whoooop.” ... the number [is] bigger.” They audibly changed pitch of the vowels from lower to higher as they said “whoooop” to express the changes in the increasing trend. Sloane again embodied their response to describe the positive trend in the scatterplot as well. Cameron drew upon their health and data literacy skills to demonstrate their understanding of how nutritional components of the foods depicted affected where they fall on the plot: “The nutrients number could change because of the sugar weight.”
Trend identification was particularly challenging in the table and the scatterplot. Trend identification in the table required observing changes using numerical information alone, and none of our participants could recall interacting with a scatterplot before, so unfamiliarity with the graph type made describing its trend difficult. Trend identification was also challenging when there was no discernible pattern (i.e., not clearly ascending, descending, or remaining around the same amount), as in the bar graph. Of those who were able to correctly identify a trend, only Cameron could describe the trend in the table and the scatterplot. Darcy was the only participant who partially described the bar graph’s trend, and two others were able to identify the stacked bar and dual y-axis line graph’s trends.
There appeared to be a relationship between participants who were detailed describers during the saliency questions and those who were able to describe the HDV’s trend. Many participants used more concrete language to describe how the visual changes in data points appeared across the entire visualization. Skyler, who excelled at descriptions throughout their interview, described the stacked bar’s trend as: “I know for a fact it’s wavy.” Oftentimes, relating the abstract patterns to more familiar, concrete imagery made trend interpretation easier to articulate. Shiloh similarly described the dual y-axis line graph’s trend: “it’s like a noodle.” In the same HDV, Skyler described the initial upward climb and subsequent dips as “very like up and down. It’s like a roller coaster. ... a pool. But it goes straight, but it has the little roller coaster baby pool to me.”

4.3 Reading Beyond the Data

Data visualization literacy activities associated with reading beyond the health data are critical for viewers to reach the interactive and critical health literacy levels. Much like the interactive level of Nutbeam’s health literacy model (see Section 2.2.4), reading beyond the data requires HDV viewers to engage, ask questions, and leverage and apply new health information to decision-making. These levels are then mapped to examples of how those skills are applied as viewers engage more deeply with the HDVs. This section reports on (1) how people with Down Syndrome would engage with their data, (2) their information-seeking inclinations during HDV engagement, and (3) what kinds of changes they thought they should make given the information presented in the graphs. Please note: partial points of .5 were given in Table 4 when participant answers were close, but not entirely correct.
Overall, our participants demonstrated low levels of the more advanced, engagement-stage-specific skills. There was a further 29.24% performance drop from the second stage to the final engagement stage. Participants demonstrated mostly lower levels of engagement with the presented health information. Skyler was the only participant whose behavior in the final stage indicated they were truly engaging with the HDVs. Only two participants, Emery and Darcy, indicated moderate engagement. Shiloh, Jesse, Cameron, and Sloane had low levels of engagement. Harper, Morgan, and Jordan demonstrated very little engagement.

4.3.1 Interaction Potential and Expectations.

A well-known data visualization design mantra in HCI suggests that users want an “overview first, zoom and filter, then details-on-demand” as they engage more deeply with whatever information is visualized [115]. However, these best practices may not be as intuitive for people with Down Syndrome, who often did not think HDVs were interactive. Sixty percent of participants thought that the HDVs were not something they could click on (table and bar: 75%, dual y-axis and scatterplot: 60%, line: 50%, and stacked bar: 30%). We then asked what they thought would happen if they did interact with one. More participants were able to think of potential ideas for interaction (bar, dual y-axis, and scatterplot: 50%, table and line: 40%, stacked: 30%).
Many said they could not mentally envision or had “no clue” [Harper and Shiloh] what would happen if they chose to click on anything. The unknown outcome made even the adventurous computer user, Shiloh, who will “Click click click click. [Gestures clicking across two computers.] ... all the time,” irresolute. Instead, unfamiliarity with the task domain appeared to increase their hesitation and reduced their desire to perform even exploratory clicks. Jordan confirmed that any click or interaction with an HDV would be a “surprise” to them as they could not imagine what would happen. When Jesse did finally click on the stacked bar graph, they said: “Uh I broke it.” They immediately blamed themselves–rather than the HDV or technology–for the lack of response to their action. This may indicate that those who were similarly hesitant or declined to interact may also share a strong internalized locus of control that could contribute to feelings of low self-efficacy during unfamiliar tasks [109] like engaging with HDVs.
Only Skyler interacted with every HDV with unprompted, adventurous exploratory interactions. Others just verbalized where they would click or tap. Participants said they expected interactivity on the axis values or labels [Emery, Shiloh], graph values, such as numbers, lines, or bar graphs [Cameron, Jesse, and Jordan], icons or images [Shiloh, Jordan], or “anywhere” in general [Harper, Emery]. While hesitant to click, Shiloh and Morgan did use Google to look up terms they were uncertain about, like “clickable.” Shiloh, in particular, regularly used the browser to find definitions throughout the entire interview.
Of the seven participants who did expect something to occur after they interacted with it, only two mentioned expecting details-on-demand for most of the visualizations [Darcy and Cameron]. They both expected to see the same, additional weight information on the body fat table cells, the total steps for each bar, the “carbs, proteins and fats” [Darcy] “by the amount of the grams” [Cameron] for each day’s macronutrients, both the number of miles and the distance in the line graph, and the nutritional information in the scatterplot. The two only diverged in the dual Y-axis. Darcy wanted each icon to reveal the “activity you’re doing” and Cameron instead expected to see “the number amount of the intensity.”
Some participants also expected the same type of outcome for each kind of interaction with the HDV. Emery wanted clicking on the HDV to either hide or close it except for the bar graph. For the Daily steps, Emery wanted it to function just like the same visualization they regularly interacted with on their Apple smartwatch. “It’s going to put it three ways. The first one is the activity thing. Second one is the serving thing. And then [the] third one is the rewarding, um, award thing.”
However, almost everyone had different expectations for what that interaction should be. Jesse wanted different types of interaction outcomes upon clicking the HDV, such as video content from YouTube in the table, “An app comes up ... Like social media” for both bar graphs and “my FitBit” for the stacked bar, and “a story or ... like a movie or something” in the line graph. Others expected an animation of more information to “... come right at you, ... get bigger” (Skyler) and “pop right up” (Shiloh). Sloane liked the idea of having videos or agents to provide additional information support. “Like a chat ... like, uh, people talking” about how fats can be “really unhealthy ... not healthy at all.” Like Sloane, Skyler also suggested gamified elements: “the shoes will come flying at you. ... I would typically duck under ... and you can eat the lollipops” [as a reward]. Sloane described how icons could use animation, which might provide additional layers of redundant meaning encoding to better support viewer interpretation and conceptually link the image to the activity being visualized. They suggested animated icons like the “walking of the shoe” and “a heart beating” to reinforce the connection with BPM.

4.3.2 Information-Seeking and Question Generation.

Even though participants didn’t generate questions or have information-seeking triggers when interacting with the HDVs around three-quarters of the time, the remaining 25% generated rich types of questions and info-seeking activities. These were oriented around: (1) wanting more explicitly connected information together, (2) wanting additional encoding supports for icon meaning and value, (3) step-by-step guides when they were uncertain how to progress, (4) wanting to limit visual messiness, and (5) clearer definitions for unfamiliar words or abstract concepts. This section also describes the visual aversion we observed in some participants when they encountered intimidating or unfamiliar content in HDVs.
Emery had difficulty connecting the relationships between various types of data and wanted these connections to be “better explained.” Shiloh was likewise uncertain how the age brackets and the regions were related and what those connections meant. Skyler also wanted support in connecting the relationships between various graph elements (i.e., how the walk distance line graph values related to the weeks on the x-axis). Emery confirmed that they wanted additional information and support interpreting values to better understand why the extremes happened and what other values meant in the stacked bar chart. Finally, Sloane wanted an explanation of the trend in the Weekly Walking line graph.
When the icons were used, Skyler wanted to know what the shoes represented numerically. “I wish it was explained better by how many shoes are on the measuring sticks.” If icons are meant to support comprehension, that needs to be explicitly explained in the design. However, if each icon equals a specific number of items, the total rather than an implied calculation needs to be included. In other words, a legend that says “1 shoe = 500 steps” would be less effective than having the total label say “On Monday, you walked 9,400 Steps.”
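A short sketch of this recommendation, using a hypothetical per-icon value: rather than asking the viewer to multiply icons by a legend ratio, the label performs the arithmetic and states the total outright.

```python
STEPS_PER_SHOE_ICON = 500  # hypothetical legend ratio, i.e., "1 shoe = 500 steps"

def implied_legend_label(icon_count: int) -> str:
    """The less effective option: the viewer must multiply icons by the ratio."""
    return f"{icon_count} shoe icons (1 shoe = {STEPS_PER_SHOE_ICON} steps)"

def explicit_total_label(day: str, icon_count: int) -> str:
    """The recommended option: state the computed total in plain language."""
    total = icon_count * STEPS_PER_SHOE_ICON
    return f"On {day}, you walked {total:,} steps."

print(implied_legend_label(19))            # -> "19 shoe icons (1 shoe = 500 steps)"
print(explicit_total_label("Monday", 19))  # -> "On Monday, you walked 9,500 steps."
```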
Several participants also wanted the HDVs to be more neatly ordered, as visual messiness hindered their ability to make connections. For example, Skyler wondered why the color blocks in the table were different sizes. As size is often used to indicate quantity, some participants were confused when they noticed that the red above average block was visually larger than the blue lean region, which was, in turn, smaller than the green ideal and yellow average classifications. Emery, Cameron, and Darcy wanted neater categorizations of healthy and unhealthy items as they were visually overwhelmed by the data points being “jammed together” [Cameron] in the scatterplot. Darcy suggested filtering content and drill-down functionality to make it easier to view: “I wish they would organize the graph better. Yeah, because like now it is like messy ... so someone would click on carb, and then all the carbs would go into a category, and then there would be protein it would go there and then fats would go there into different groups–3 groups.”
Quick to look up information in the browser, Shiloh immediately Googled the word “intensity” when they encountered it in the dual-y axis line graph. However, they grew frustrated at times with the unfamiliarity and difficulty of the task domain (i.e., reading, interpreting and engaging with HDVs) when search engines did not provide helpful results. When Jesse encountered the unfamiliar words and concepts of “healthy carbs. [and] mac-uh-nutrients,” they wanted additional information to better understand what these items meant. Jesse also wanted definitions for unfamiliar health metrics: “I don’t know what beats per minute B–BPM means.” Skyler instead preferred to ask people in their life what things meant.
Other participants, however, demonstrated a visual aversion to unfamiliar graph elements or aspects that felt either unnecessarily difficult or intimidating to parse: “I’m not clicking at the words” [Morgan]. Throughout the interview, many participants appeared to prioritize their cognitive effort and adjusted what they chose to attend to. As a result, more abstract or unknown elements became blind spots to be ignored as the viewer attended to the problems they felt confident enough in their skills to solve. Another example of this behavior was when many participants fell back on their print and number reading skills when they were uncertain of the steps required to interpret and understand the abstract visualizations.

4.3.3 Reflecting upon Personal Behavior Changes.

Participants varied in the depth of reflection they were able to engage in with the HDVs. For example, Jordan connected the Body Fat table to being more conscientious of their food intake by “eating salad.” Morgan also found the line graph to be a motivation for them: “It is telling me to walk a little bit far, and like ‘keep going, going, going’ and like ‘take a break to drink lots of water’ [and then] ‘keep walking, walking again.”’ An athlete, Darcy, who already tracked health information in various aspects of their life, often found inspiration for other ways they could leverage their personal information to become even more physically fit with every visualization: “The [table] is telling me I need to start, I need to weigh myself.” In the bar graph, Darcy reflected that “maybe I can start tracking–measuring my steps, and see how many total steps that I’ve taken Sunday until Saturday.” And the line graph inspired Darcy to take it a step further by also recording “how much walk distance I walked.” The stacked bar chart told them to “watch out for macronutrients ... like how much carb I’m eating. How much protein and fat I’m eating.” The visualizations also made Darcy want to tweak how they currently tracked their workouts: “this is telling to graph–to graph the time I’m exercising. When I’m tracking activities, I put the day that I’m exercising. I put the date and the type of the type of exercise I’m doing. But this graph is different, this graph shows you what time you are exercising ... I just put the date and then I put what I’m doing.”

4.4 A Broad Range of Abilities

Participants successfully completed more than half of the HDV reading activities (56.9%). Individual performances within each phase and across all three phases varied widely from person to person. While performance was generally better on the health data visualization identification and connection activities compared to the synthesizing skills used when reading beyond the data, some participants saw a smaller decrease in performance than others. Furthermore, the participants’ performance as they progressed through the data visualization tasks indicated different abilities, strengths, and potential accessibility requirements across our sample (see Table 5). This variation in performance, as in other IDD populations, indicates that the skills of people with Down Syndrome are highly heterogeneous when reading and making sense of HDVs.
Table 5.
Pseudonym | 1st Stage: Reading the Data | 1st to 2nd Stage Diff. | 2nd Stage: Reading BETWEEN | 2nd to 3rd Stage Diff. | 3rd Stage: Reading BEYOND | 1st to 3rd Stage Diff. | Overall Mean
Shiloh | 69.0% | -21.9% | 53.9% | -49.7% | 27.1% | -60.7% | 50.0%
Emery | 65.5% | +9.6% | 71.8% | -18.8% | 58.3% | -11.0% | 65.2%
Harper | 72.4% | -35.5% | 47.4% | -86.7% | 6.3% | -91.3% | 42.0%
Skyler | 100% | -35.9% | 64.1% | +26.8% | 81.3% | -18.7% | 81.8%
Darcy | 79.3% | -4.6% | 75.6% | -9.0% | 68.8% | -13.2% | 74.6%
Jesse | 75.9% | -66.3% | 25.6% | +62.9% | 41.7% | -45.0% | 47.7%
Morgan | 89.7% | -64.2% | 32.1% | -61.0% | 12.5% | -86.1% | 44.8%
Jordan | 69.0% | -42.4% | 39.7% | -84.1% | 6.3% | -90.9% | 38.3%
Cameron | 89.7% | -12.8% | 78.2% | -44.0% | 43.8% | -51.2% | 70.6%
Sloane | 82.8% | -44.2% | 46.2% | -27.9% | 33.3% | -59.8% | 54.1%
Mean | 79.3% | -32.42% | 53.6% | -29.24% | 37.9% | -52.18% | 56.9%
Table 5. Changes to Individual Success Rates across HDV Reading Stages
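For reference, the stage-to-stage differences in Table 5 are consistent with relative (rather than absolute) percentage change between stage success rates. A small sketch reproducing that arithmetic for two of the reported scores:

```python
def relative_change(earlier: float, later: float) -> float:
    """Percent change between two stage success rates, relative to the earlier one."""
    return (later - earlier) / earlier * 100

# Skyler: 100% -> 64.1% -> 81.3%
print(round(relative_change(100.0, 64.1), 1))  # -> -35.9 (1st to 2nd stage)
print(round(relative_change(64.1, 81.3), 1))   # -> 26.8  (2nd to 3rd stage)

# Emery: 65.5% -> 71.8%, the only increase between the 1st and 2nd stages
print(round(relative_change(65.5, 71.8), 1))   # -> 9.6
```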
Participants who were employed had a higher mean success rate. Skyler had the highest mean success rate across all three stages at 81.8%. Similarly, Emery, who uses a computer in their office job, was the most consistent, ranging between 58.3% and 71.8% success rates, and had the lowest overall performance decrease (11%). Emery was also the only participant to improve from the first stage’s identification tasks to the second, connection-oriented stage. Other factors beyond those collected or observed in this study may also have affected performance.
While education in how to read a graph is useful, it was not the sole indicator of success and showed mixed results. As current high schoolers, Jordan and Cameron received the most recent instruction about data visualizations. Another student in secondary school, Sloane, did not mention any recent graph education. While Cameron had the 3rd highest mean success rate across all three stages (70.6%), Sloane was in the middle at 5th (54.1%), and Jordan scored the lowest at 38.3%. Darcy, a current university student who is passionate about their health, had the most consistent high success rate ranging from 68.8% to 79.3% and had the second highest overall mean score of 74.6%.
Although education may be a factor, resilience and adaptability in response to uncertainty were better predictors of a higher mean success rate. Participants in the lower half were among those more likely to abandon tasks or respond that they did not know how to answer or proceed. This more consistent trend across the participant pool indicated a barrier when they were uncertain what to do next, what elements meant, or how to connect elements together. Interestingly, this included Harper, the only participant who lives on their own. While Harper regularly demonstrated resilience and adaptability in their everyday life and throughout the study, this did not extend to situations where they were unsure how to solve a task involving more abstract data in a health context.
Although people with Down Syndrome have strong visual and spatial reasoning skills, as evidenced by our participants’ high performance identifying and comparing areas of HDV elements (approximately 80% success rate), individuals like this study’s sample may still struggle to make inferences and construct an understanding of how the information is connected. This appeared to occur when they were uncertain how to proceed or when design elements forced them to make cognitive leaps during an HDV activity.

5 Discussion

As previously described, each stage’s activities aim to further that stage’s overall goal (e.g., identification of HDV elements to understand initial structure of data, comparing elements to observe relationships between them to notice patterns and differences across data, making connections about data observations with outside information to use the data as a potential decision-making aid). During the various stages’ activities, a viewer interprets the HDV using what they have learned so far and constructs their understanding (i.e., their mental model) as they interact with the HDV over time. Each stage-specific activity also builds upon–and potentially revises–what they learned in other stage activities. Meaning that, even though participants may have gone back and forth between stages as they constructed and revised their mental model for the HDV, an error that occurred that was foundational to their mental model, such as misinterpreting HDV elements during an identification task, can negatively impact both their overall interpretation and their inference-making abilities during later stage activities.
As participants gradually created and refined their mental model of the HDV, the HDV activities themselves became more difficult and complex. Activity difficulty and complexity increased because viewers had to hold more information in their working memory, which placed greater demands upon the cognitive effort necessary to complete a later stage activity. When cognitive effort became taxed, the potential for participants to make errors increased.
These errors could occur for several reasons (e.g., visually skimming information too quickly and missing a detail, difficulty recalling graph reading steps, unfamiliarity with a type of graph, or insufficient opportunities to regularly use graphicacy skills, causing proficiency to degrade). Errors may also occur when viewers do not have graph-specific background knowledge and skills to draw upon. Should an error occur–and remain uncorrected as the mental model is updated–the viewer’s mental model of the HDV becomes inaccurate. When an error occurs at the earliest interpretation and is not recognized and appropriately revised, the potential to successfully complete later activities may likewise be adversely impacted.
As a mental model is constructed from the interplay between an external system (i.e., the HDV) and the viewer’s interpreted internal representation of it, the decline in activity success rates in Table 5 suggests that a combination of HDV design-based and task-based barriers may be to blame. In the next sections, we reflect upon the two categories of barriers that contribute to the highly individual performances across the study participants: design-based barriers (5.1) and HDV reading task-based barriers (5.2). The combination of these co-occurring barriers further diminished participants’ inference-making abilities as they progressed through the HDV reading stage-specific tasks. In 5.3, we propose 12 potential design suggestions that may improve HDV accessibility for individuals similar to this study’s sample. We summarize these HDV design suggestions and provide examples of potential strategies to address such accessibility issues in Tables 9 and 10. We conclude the discussion by describing our study’s limitations and potential avenues for future work (5.4).
We posit that the observed downward cascade in participant performance across the three stages of reading health data is the result of a combination of task-based and design-based barriers. Table 6 is a simplified representation of our theory. It depicts how individuals with Down Syndrome similar to our study participants progressed across the three stages. As the inference-making barriers in HDVs accumulated over time, their negative impact caused a cascading decrease in an individual’s success at the various stage-specific HDV reading tasks. The arrows, which indicate participant progression through the three stages, represent how (1) the negative impacts of design-based barriers compound, and (2) the difficulty, complexity, and ambiguity of HDV reading tasks increase as viewers progress through the three stages. As these stages build upon one another, (3) overall performance proportionately diminishes.
Table 6.
Reading the Health Data → Reading BETWEEN the Health Data → Reading BEYOND the Health Data
Design-based Barriers increased the potential number of errors that could impact inference-making as they progressed across the three stages. (Table 7)
Task-based Barriers caused procedural uncertainty (i.e., tasks increased in difficulty, complexity, and ambiguity) impacting how they made inferences. (Table 8)
The combination of design and task-based barriers caused their Overall Performance to diminish over time. (Table 5)
Table 6. Barriers to Inference-making: Overview
Our theory is that the adverse effects caused by these co-occurring barriers resulted in increases to participants’ cognitive effort. This was reflected in the performance deterioration that occurred for 90% of the participants (Table 5). In other words, the errors that occurred in the early stages had compounding effects on the later, more complex stage-specific tasks. This happened because the observations made in the earlier stages are foundational to the later stages, where they are used during inference-making as the viewer constructs their understanding of the HDV. As a result, any misunderstanding at the beginning only became more pronounced as participants progressed and had a more significant negative impact upon their inference-making ability.

5.1 Design-Based Barriers to Effective Inference-Making

This section describes the various kinds of design-based barriers that can hinder people with Down Syndrome’s ability to effectively make inferences from information presented in HDVs. Table 7 summarizes all design-based barriers described in both the results and the discussion.

5.1.1 Abstract Language Barriers: Type-Token Distinction Preference.

The language used in HDVs may be generalized with more abstract terms (e.g., an exercise HDV titled “This Week’s Activity”) to cover the wide range of things the original design intended to describe with that one word. However, participants indicated that HDVs intended to be read by them could benefit from replacing generalized language with more specific, concrete descriptors to better support comprehension. As such, participants demonstrated a type-token distinction preference when discussing the increasingly abstract language that appeared in HDVs. Types are abstracted descriptive concepts (e.g., a sphere); tokens are specific, concrete instantiated objects (e.g., that orange leather basketball on the ground) [128]. This preference became evident when participants indicated wanting clarification about the types of abstract information in an HDV. For example, when researchers asked type-oriented questions that used generalized, plain language to describe abstract content or concepts, such as trends, participants wanted more information. Responses that clarified meaning instead used more specific token language. When token-oriented language was used, participants answered questions with greater ease, as the concrete specificity of the language enabled them to visually orient their attention to the specific content being discussed in the HDV.
In line with past research, we still found that people with Down Syndrome can “handle abstract things in a relatively easy way” [83], despite their preference for specific, concrete tokens over more abstracted types. Rather, concrete specificity that drew participants’ attention to tokens rather than types enabled them to better understand abstract content or the nature of type concepts. Future HDV design features that support this preference may be more beneficial and accessible to people with Down Syndrome.
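As one illustration of this implication (the wording and data shown are hypothetical, not drawn from the study’s stimuli), a title template can be filled from the visualized data itself so that a type-level phrase becomes a token-level one:

```python
def token_level_title(metric: str, start_day: str, end_day: str, total: int) -> str:
    """Swap a type-level title for a concrete, data-filled (token-level) one."""
    return f"{metric} you took from {start_day} to {end_day}: {total:,} in total"

# Type-level title:  "This Week's Activity"
# Token-level title:
print(token_level_title("Steps", "Sunday", "Saturday", 63451))
# -> "Steps you took from Sunday to Saturday: 63,451 in total"
```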

5.1.2 Fill-in-the-Blank Barriers: Missing or Implied Information.

After someone has identified the various elements of a health data visualization in the first stage, a viewer must also recognize whether any information necessary to effectively read the graph during later activities is missing. A fill-in-the-blank barrier is an umbrella term we use for when some aspect of the data visualization design is implied, left unstated, abbreviated, or missing from the HDV, forcing the viewer to make a cognitive leap when attempting to make sense of the information. The barrier occurs when the required cognitive leap exceeds the viewer’s ability to make it. To make a cognitive leap caused by a fill-in-the-blank barrier, a viewer must have sufficient graphicacy and other literacy-based skills to draw upon to know what is missing, deduce what should be there, and hold that implied, invisible information in their working memory as they continue to interact with the HDV.
For example, health data visualizations intended to be used by a wider audience, such as a BMI chart, often have missing personal information that the viewer is supposed to mentally fill-in-the-blank with their personal data to effectively read it. In this case, viewer-specific health information, like someone’s weight and height, are used to guide the viewer to their BMI number in the table.
One kind of fill-in-the-blank barrier is abbreviating content (i.e., Monday = Mon = M). Abbreviating content often requires the viewer to have prior knowledge of cultural or design norms to accurately interpret and effectively leverage the information presented. Using abbreviations to simplify the HDV’s design is a common design choice in data visualizations, especially those intended to be seen on small screens (e.g., a smartphone or wearable device). Single letter abbreviations caused fill-in-the-blank barriers for some people with Down Syndrome. In the “Daily Steps” bar graph, several participants struggled with the “k” abbreviation for kilometers. At first, Cameron thought the K meant one thousand; while Jesse thought it was a kilogram. Similarly, Jordan and Shiloh both struggled to recognize the days of the week when they were abbreviated to a single letter (i.e., SSMTWRF). In these examples, the abbreviation forced viewers to use additional context clues or have pre-existing knowledge to fill-in-the-blank.
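A simple mitigation, sketched below with an assumed abbreviation scheme, is to carry an expansion table alongside any abbreviated axis so the full form can always be surfaced as a label, tooltip, or read-aloud text:

```python
# Assumed single-letter scheme ("R" sometimes stands in for Thursday); a real HDV
# would use whatever abbreviations its axes actually display.
DAY_ABBREVIATIONS = {
    "S": "Sunday or Saturday",  # ambiguous on its own, which is part of the barrier
    "M": "Monday", "T": "Tuesday", "W": "Wednesday",
    "R": "Thursday", "F": "Friday",
}
UNIT_ABBREVIATIONS = {"k": "kilometers", "BPM": "beats per minute"}

def expand(token: str) -> str:
    """Return the unabbreviated form of an axis token, or the token unchanged."""
    return DAY_ABBREVIATIONS.get(token) or UNIT_ABBREVIATIONS.get(token, token)

print(expand("R"))   # -> "Thursday"
print(expand("k"))   # -> "kilometers"
print(expand("10"))  # -> "10" (tokens without an expansion pass through unchanged)
```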
Missing information is another kind of fill-in-the-blank barrier. When the missing information is related to a health topic that the individual is less familiar with, health data viewers with Down Syndrome may struggle to recognize that effective interaction with visualizations first requires information that may not be visible. When a fill-in-the-blank barrier occurs due to missing information, the viewer must first recognize the absence of crucial information before they can connect the components of the graph together. For example, someone viewing the “Body Fat for Men” table needs to know three key pieces of personal health information: their biological sex, age in years, and—crucially—their body fat percentage. If a viewer does not realize that additional information is needed to actually read a HDV, they will struggle to use it during later stage activities, like using the data as a health management decision-making tool.
Gaps in prior knowledge can also lead to fill-in-the-blank barriers. None of the participants recognized that they needed to know their specific body fat percentage to read the chart and find out where they fell across the four categories. This may have occurred because the participants were less familiar with “body fat” beyond an association with their weight or anatomy. Their unfamiliarity with the specific medical jargon appeared to make the numbers and topic more abstract and ambiguous. When this occurred, participants attempted to fill in the blank by drawing upon their personal health literacy and knowledge. For example, Cameron interpreted these numbers as being related to “the scale ... the different numbers of wei–of weight.” Darcy viewed the values as “the number of pounds of your body fat” rather than the percentage. In these examples, personal health knowledge influenced and revised their mental models about the data being presented, which caused conceptual mismatches in later data visualization reading phases.
Some fill-in-the-blank barriers also occurred due to known numeracy issues, as people with Down Syndrome can struggle to visualize a “mental number line” [39]. Two common numeracy-based design causes for fill-in-the-blank barriers appeared in this study’s HDVs: number ranges and intervals. The use of number ranges and numerical increments on the x- and y-axes caused confusion for multiple participants due to the invisible number lines implied by these commonly used graph design choices. Both number ranges and intervals require the HDV viewer to mentally imagine an invisible number line. For example, “18–25” is a number range that abbreviates the full number line of 18, 19, 20, 21, 22, 23, 24, 25. The convention of using a dash is intended to imply the invisible numbers between 18 and 25. In the Body Fat for Men table, both Jordan (16) and Cameron (17) recognized that their ages were not visible, as the age brackets began at the 18–20 range on the y-axis. Because this data visualization procedure was not explicitly stated, some readers may likewise struggle to find where they would fall within a number range. Second, number intervals similarly abbreviate the invisible number line along a graph’s x- and y-axes. An axis interval of 0, 5, 10, 15 indicates that the visual space between tick marks stands in for the omitted values (i.e., 1, 2, 3, and 4 are implied as existing between the first and second tick).
Invisible number lines may be particularly frustrating when first reading an HDV because of the interaction between numeracy skills and working memory. Both numeracy and working memory are integral to a person’s ability to: (1) mentally imagine the number line being described, (2) determine whether their age falls within that number line, (3) repeat these two steps until a match occurs, (4) mentally associate their personal, invisible age with the appropriate design element, and (5) continue to hold that now invisible association between numerical information and design element in their working memory to draw upon as they continue to interact and reason with the HDV. As the numeracy skills of people with Down Syndrome may lag an average of two years behind their print literacy skills and they may also experience attention or working memory issues in their daily lives [10, 14], these kinds of numeracy-based fill-in-the-blank barriers in health data visualizations can be especially challenging to overcome.
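One way a design could shoulder some of these five steps is to make the implied number line explicit. The sketch below is a minimal illustration in Python, with hypothetical age brackets (not the studied chart’s exact values), of how a system might expand a dashed range and report when a viewer’s value falls outside every bracket rather than leaving that deduction to the viewer.

```python
# Hypothetical age brackets for illustration only.
AGE_BRACKETS = [(18, 20), (21, 25), (26, 30), (31, 35)]

def expand_range(low: int, high: int) -> list[int]:
    """Spell out every whole number a dashed range like '18-25' implies."""
    return list(range(low, high + 1))

def find_bracket(age: int):
    """Return the bracket an age falls into, or None if it is off the chart."""
    for low, high in AGE_BRACKETS:
        if low <= age <= high:
            return (low, high)
    return None

print(expand_range(18, 25))  # [18, 19, 20, 21, 22, 23, 24, 25]
print(find_bracket(17))      # None: viewers like Jordan (16) or Cameron (17) could be
                             # told explicitly that their age is not on the chart
```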
When participants encountered unknown content (i.e., unfamiliar words) or were uncertain how to progress in their task, several reacted by ignoring the unfamiliar content entirely. One example of this was when participants did not know the word “macronutrient.” Rather than using their preferred information-seeking method or asking what it meant, multiple participants ignored this keyword altogether. The additional information-seeking task appeared to increase their mental effort while they were already progressing through their identification stage tasks. Unfortunately, this avoidant behavior led to later problems, as text or numbers that were ignored often remained unconnected with other information in the second and third stages.

5.1.3 Information Encoding Barriers when Interpreting Relationships between Elements.

Most participants encountered encoding barriers when making connections between the various HDV design elements and between relationships in the data. Appropriately layering encoding is critical to the successful interpretation of visual metaphors. The two most successful HDVs were the stacked bar graph (95%) and the scatterplot (100%). The icons in these visualizations appeared to be the most easily connected because the redundant encoding between graph elements produced bundled channels that better supported people with Down Syndrome’s graph reading abilities.
The scatterplot’s use of pictures of familiar foods and drinks with text labels next to each image made it easier to understand the abstract data through concrete examples. Furthermore, the data itself reinforced participants’ understanding of and familiarity with nutrition. This combination supported their ability to categorize, group, and generalize about the data more than any other HDV. The stacked bar chart similarly used familiar icons with labels in close proximity in a prominent location (i.e., directly underneath the large title). Even though the graph used layered color encoding (i.e., status and categorical), the colored icons were more clearly connected when they were superimposed atop the larger, colored rectangle (e.g., the green lettuce icon in the center of the green rectangle of a stacked bar).
Transposing content can also interfere with accurate encoding. Half of the participants verbalized a preference against vertical text, which was unnecessarily difficult to read; several had to physically turn their heads sideways. Both line graphs required participants to read dates and times where the text was vertically transposed. This was made more complex because both graphs had repeating values that resulted in blocks of text. For example, in the first line graph, the x-axis showed dates separated into weekly increments. In the dual y-axis line graph, the x-axis recorded timestamps of activity intensity at 30-second intervals. The transposed text at these intervals appeared to increase cognitive load because participants had to compare values in the graph area against the y-axis and then against the x-axis, which required fine-detail comparisons to notice the differences. Several participants instead opted to respond more generally by saying the month, which was the easiest to read while also scanning and comparing the information.
Transposing can also break visual metaphors when using icons or images. The vertical orientation of the bars in the second graph triggered an encoding error of the step metaphor for some participants: “I see that the steps from up here, you’re walking down the stairs and then from Monday, you’re going up the stairs, and Sunday you’re coming back down” (Darcy). In this example, we can see how design choices mimicked a real-life activity (i.e., walking up and down stairs). The different heights of the tops of individual bars were reinforced by both the title text “steps” and the shoe icons. As the word “steps” is also a synonym for stairs, we can see how the orientation of data can affect the understanding of visual metaphors. Because of this, a participant’s mental model of an HDV can shift from the total number of steps taken across a horizontal distance to the number of vertical steps up a staircase. In this way, we can see how design choices that force viewers with Down Syndrome to transpose data can affect their encoding of visual metaphors.
While we found that icons and images can support participant understanding, the selection of appropriate design elements that successfully reinforce the visual metaphor is critical. One reason the stacked bar chart was the highest performing HDV may be that it used multiple levels of redundant encoding to more tightly bundle graph elements during interpretation. Its success suggests that unused visual properties should be used to redundantly encode the main data dimensions, because bundling channels of the same information has been found to support faster, more accurate, and more efficient meaning acquisition ([126], pg. 179). Bundling data visualization channels with an appropriate graph type and information orientation can make or break visual metaphors. These findings suggest that layered encoding in HDVs must be designed with careful consideration. As such, future HDV design features should clearly and explicitly call attention to these design choices and define the intended meaning of the relationships between elements to ensure accurate understanding of the visual metaphor.
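As a concrete illustration of redundant encoding, the following sketch (our own, using matplotlib; the values and colors are invented, and in-segment text labels stand in for the study’s food icons) renders a stacked bar chart in which color, stacking position, and a label inside each segment all carry the same category information, approximating the bundled channels described above.

```python
import matplotlib.pyplot as plt

days = ["Mon", "Tue", "Wed"]                      # multi-letter day abbreviations
macros = {"Protein": [30, 25, 35],                # hypothetical grams per day
          "Carbs":   [50, 60, 45],
          "Fats":    [20, 15, 25]}
colors = {"Protein": "#d95f02", "Carbs": "#1b9e77", "Fats": "#7570b3"}

fig, ax = plt.subplots()
bottoms = [0, 0, 0]
for macro, values in macros.items():
    ax.bar(days, values, bottom=bottoms, color=colors[macro], label=macro)
    # Redundant encoding: repeat the category name inside its colored segment,
    # so color, position, and label all say the same thing.
    for x, (value, bottom) in enumerate(zip(values, bottoms)):
        ax.text(x, bottom + value / 2, macro, ha="center", va="center", color="white")
    bottoms = [b + v for b, v in zip(bottoms, values)]

ax.set_title("This Week's Macronutrients")
ax.legend()
plt.show()
```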

5.1.4 Connection Barriers while Constructing Meaning and Synthesizing Understanding.

Participants struggled the most during the final stage because of unsupported inference-making. While we saw that people with Down Syndrome are visually strong, synthesizing meaning from connections made about observed relationships both within and outside of the HDV can be difficult. For example, Skyler was confused when they encountered a bar in the Daily Steps HDV topped with a shoe icon cut in half. The shoe icon was cut off to indicate a partial value; however, there was no visual indication of what a partial value means. In other words, the appropriate way to interpret the half shoe was implied rather than made explicit. This impacted Skyler’s ability to accurately connect the icon with the intended meaning: “What does the shoes mean, though? One of the shoes was cut off, but I could still see it.” When participants were uncertain how everything was connected, they were unable to deduce the underlying takeaway of the data being visualized. Without interpretive supports integrated into the HDV, people like Skyler may encounter similar conceptual mismatches that can affect their ability to fully engage with their health information.
Feelings of uncertainty caused by barriers or misunderstandings in the two previous stages appeared to diminish participants’ self-efficacy in their graphicacy abilities, particularly in this third stage. When inference-making was not supported, participants often fell back on existing knowledge and skills they felt more confident employing: several drew upon what they already knew about health, nutrition, and exercise science when they were uncertain what something meant or which pieces of information should be connected together. As a result, greater confidence in familiar health literacy skills than in skills where they had poor self-efficacy (i.e., graphicacy) caused several people to inaccurately update their mental model and understanding of the HDV.
Although knowledge constantly informs people’s interpretations and the subsequent inferences that they make, pre-existing knowledge related to the HDV topic plays a more significant role when people attempt to read beyond a visualization. Drawing upon outside sources of information to synthesize and contextualize the graph’s data within the viewer’s understanding is at the heart of the third graph engagement stage. When they did not know how to make inferences, several participants tried to retrofit their current mental models of the HDVs to match their pre-existing health knowledge, regardless of how well it fit. Simply including the information needed to support inference-making in the HDV is not enough. The design of future HDVs needs to explicitly support the inference-making process of people with Down Syndrome. This could be done through features designed to reduce the procedural uncertainty and ambiguity about the relationships between HDV elements (e.g., a guided walk-through). Inference-making supports like this may also help viewers with Down Syndrome connect information with external knowledge to leverage greater insights into the data.
Another reason why inference-making may be difficult during this stage is that health data visualizations in particular are intended to be used as part of an informed decision-making process that is driven by questions. It is through question generation that graphicacy tasks are contextualized, HDV reading goals are created, and appropriate outside knowledge schemas can be drawn upon as viewers make inferences. When questions are not an integrated part of the HDV reading process, knowing what types of inferences are relevant to the task goals becomes less clear. While we did not directly investigate the kinds of questions participants generate when reading graphs, our findings suggest that future HDV features could better support question generation. For example, providing a bank of relevant questions to guide viewers throughout the reading process would tie interpretation and inferences to each question’s goal (e.g., Am I getting enough protein this week? Am I walking enough each day?); a sketch of such a bank follows this paragraph. Furthermore, question generation could allow for a more narrative interpretation of the data and provide relevant takeaways as the viewer is guided through each stage. As past work has suggested that storytelling can be beneficial to data visualization, question-driven storytelling may provide greater opportunities for insight for HDV viewers with Down Syndrome in the final stage of graph reading. Integrating step-by-step graphicacy educational supports, like guided walk-throughs, could benefit people who may not know, or may be unable to recall (e.g., due to inadequate education or lapsed graphicacy skills), what steps to take next when interpreting visualized information.
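The following sketch (a hypothetical Python structure, not a system evaluated in this study) illustrates what such a question bank might look like: selecting a goal question scopes which HDV elements matter and supplies a template for a goal-specific takeaway. The element names, takeaway templates, and the 10,000-step goal are our assumptions.

```python
# Hypothetical question bank: the chosen question ties interpretation and
# inference-making to a concrete reading goal.
QUESTION_BANK = {
    "Am I walking enough each day?": {
        "hdv": "Daily Steps bar graph",
        "elements_to_check": ["daily step bars", "step goal"],      # assumed elements
        "takeaway": "You took {steps} steps on {day}. Your goal is {goal} steps.",
    },
    "Am I getting enough protein this week?": {
        "hdv": "This Week's Macronutrients stacked bar graph",
        "elements_to_check": ["protein segments", "daily totals"],  # assumed elements
        "takeaway": "You ate {protein} grams of protein on {day}.",
    },
}

def goal_takeaway(question: str, **facts) -> str:
    """Fill the selected question's template with values read off the HDV."""
    return QUESTION_BANK[question]["takeaway"].format(**facts)

# Example: answer the viewer's chosen question in plain language.
print(goal_takeaway("Am I walking enough each day?", steps=8200, day="Monday", goal=10000))
```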
One way to address the myriad task-based barriers discussed in this section is to provide procedural supports to viewers with Down Syndrome. Future health systems could include a feature that visually models graph reading procedures in a sequential manner. Step-by-step walk-throughs could provide procedural support and structure to the graph reading process, which the variety in saliency across participants indicates they could benefit from. Paced by the viewer, this gradual approach to information acquisition may support the mental model construction process by offloading the cognitive effort required during unstructured graph reading. Furthermore, asset-driven guided walk-throughs could leverage people with Down Syndrome’s strengths as learners and provide support for known population weaknesses (see 2.1).
For example, highlighting content step by step and explaining what it means plays to their visuo-spatial strengths and preference for behavioral modeling, guides their attention to aid focus, and supports the encoding of meaning to ensure more accurate understanding. It also breaks information into smaller chunks that gradually build upon each other, which supports working memory differences. Explicitly mapping meaning between HDV elements and design choices, including providing additional health information like definitions or health literacy explanations, can reduce task-shifting from the HDV to information-seeking activities outside of it. Guided walk-through features may provide asset-driven supports for the short-term and working memory of individuals with Down Syndrome, as past work has noted population deficits in these areas [8]. With more short-term and working memory freed up, viewers with Down Syndrome may be able to engage more deeply with visualized health information. Although interactive guided walk-throughs may be supported by some of this study’s findings as well as various strengths and weaknesses of learners with Down Syndrome (see 2.1), any proposed HDV feature ultimately requires a future participatory co-design study with and critical evaluation by people with Down Syndrome, which we will explore in future work.
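With that caveat in mind, the sketch below makes the walk-through idea more tangible (our illustration; the highlight, show_text, and wait_for_next hooks are hypothetical functions a host application would supply). Each step pairs one highlighted HDV element with a plain-language explanation and only advances when the viewer chooses to continue. The title explanation reuses the wording suggested in Table 9, and the half-shoe explanation addresses the partial-icon confusion described above.

```python
from dataclasses import dataclass

@dataclass
class WalkthroughStep:
    element: str        # which HDV element to visually highlight
    explanation: str    # plain-language text shown (or read aloud) alongside it

# Hypothetical walk-through for the Daily Steps bar graph.
DAILY_STEPS_WALKTHROUGH = [
    WalkthroughStep("title", "The title for this graph is Daily Steps. "
                             "This graph shows how many steps you took every day."),
    WalkthroughStep("x-axis", "Each bar is one day of the week, like Monday or Tuesday."),
    WalkthroughStep("y-axis", "Taller bars mean you took more steps on that day."),
    WalkthroughStep("half shoe icon", "A shoe cut in half means part of a value, "
                                      "not a whole one."),
]

def run_walkthrough(steps, highlight, show_text, wait_for_next):
    """Advance one small chunk at a time, paced entirely by the viewer."""
    for step in steps:
        highlight(step.element)      # draw attention to one element at a time
        show_text(step.explanation)  # explain it in plain language
        wait_for_next()              # the viewer decides when to move on
```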

5.2 Task-Based Barriers to Effective Inference-Making

Increases in graphicacy task difficulty and complexity appeared to impact participants’ performance in the later stages. The progressive difficulty of key aspects of graphicacy tasks may explain why we observed performance drops at each stage, with overall mean success rates falling from 79.3% (reading) to 53.6% (reading between) to 37.9% (reading beyond).
Table 8 outlines how various aspects required to successfully make sense of an HDV increased in complexity, difficulty, and ambiguity over time. Furthermore, where the information was primarily located (i.e., in the HDV vs. stored in short-term memory) also shifted in later stages, which further taxed working memory capacity limits. These changes similarly demanded a greater level of cognitive flexibility (i.e., the ability to adapt one’s understanding and mental model of the HDV as new or unexpected information or events occur during reading tasks).
All of these elements increased the overall difficulty of graph reading tasks as they became more complex, unfamiliar, and ambiguous. As a result, our findings suggest that participant performance decreased as their cognitive load increased, which adversely impacted their ability to effectively make inferences and construct meaning from the visualized health information. Below we discuss how the combination of these HDV reading aspects adversely impacted people with Down Syndrome’s inference-making abilities.
In the earliest “reading the data” stage, participants first encountered familiar types of information (i.e., words, numbers, colors, shapes, dates, images). These were easy to recognize, read, and identify. As participants noticed the composition and identified the component parts of the HDV, the level of information abstraction was low, as most of the information did not have to be transformed in any way. By comparing and connecting familiar data, the acts of encoding meaning across and between data and HDV elements abstracted the initially familiar information into groups and higher-level types, associations, and meanings. For example, data visually grouped by a color that is encoded to mean warning or danger is more abstract than the color of an individual data point. As information was transformed (e.g., through visual metaphors, encoded meaning, or calculations) and synthesized (e.g., by drawing upon relationships between information and outside information sources) during the final stages of inference-making, the information that someone engages with to extract insights was no longer as familiar or concrete.
The number of steps during stage-specific tasks also increased. First stage tasks were to visually recognize elements and identify those elements as component parts (e.g., title, x-axis label, y-axis value). These task steps were minimal and simple compared to tasks like trend identification where individuals had to compare across the entire visualized data and make an informed judgment about the overall HDV. When making inferences in the final stage, the number of task steps increases further as the individual may have to perform additional information-seeking or question generation tasks in parallel to their overall graph reading tasks. As a result, the level of attention required can increase as well.
The uncertainty and ambiguity of such tasks made them more difficult for many participants. Several individuals were uncertain how they should go about making judgments and inferences about the presented data (i.e., what steps to take, what information is required, and when it should be used). This, in turn, made leveraging that data difficult in the last, engagement-focused stage, when they were not only reading the HDV but also relating it to their experiences or to any parallel information-seeking tasks they had to conduct (e.g., looking up an unfamiliar term or finding personal health information that was needed to make sense of the data).
The variety of problem-solving and abstract reasoning skills required similarly grew as the stages progressed. For example, in the first stage, one problem needing solving arose when data visualizations contained unfamiliar words or abbreviations; interpreting an abbreviated value, for instance, demands a specific level of number sense. As the stages progressed, viewers had to draw upon a wider range of strategies to tease apart information and construct meaning.
Visualization readers have to be able to recognize and shift between multiple types of information. As we described in Section 4.2.2.2, participants demonstrated a variety of abilities in recognizing differences across the types of information that make up an HDV. Comparing differences can also demonstrate an individual’s cognitive flexibility. Cognitive flexibility is necessary during attentional processes, such as set shifting and task switching [31]. The ability to shift one’s attention among various tasks and sets of value qualities is a critical skill, as individuals constantly shift their attention within each stage and across all three stages of HDV interaction.
The typical fill-in-the-blank design choices (detailed in 5.1.2) further taxed participants’ cognitive load by requiring them to hold implied and calculated information in their working memory. Another increase to graph reading difficulty and complexity was the shifting proportion of explicit (i.e., visually represented in the HDV; not stored in short-term memory) to implicit (e.g., calculated information or inferences they had made) information. In other words, the sources of information progressively moved from inside the data representation to being, at least in part, held in working memory (i.e., invisible).
When information was missing, implied, ignored, or misinterpreted, the mental effort and cognitive flexibility required during HDV tasks increased even more. If a misunderstanding occurred in the early stages, some participants struggled to recognize that a mistake was even made or did not revise their earlier understanding. Rather, our participants re-fitted their understanding to knowledge they were more confident with when they were uncertain how to appropriately use or engage with the information presented in the HDV. As a result, the information storage requirements likewise increased as individuals had to hold a greater number of pieces of information in their working memory as they moved through the latter stages. In other words, an increasing proportion of information items were stored solely in the viewer’s mind and had to be recalled throughout graphicacy tasks rather than distributed as part of the HDV to be recognized.
When all of these elements are considered together, it becomes evident that as each stage progresses there is an increase in overall mental effort (i.e., increased cognitive load and a greater demand for cognitive flexibility). Mental effort became taxing when participants had to hold too many pieces of increasingly abstract information, and relationships about that information, in mind at the same time. The simultaneous juggling of too much information exceeded their known working memory limits [20, 60]. Mental effort also increased when participants had to adapt to missing information and unfamiliar or ambiguous procedures, and had to apply more problem-solving skills to increasingly abstract representations of the visualized information during information synthesis tasks.

5.3 Improving Health Data Visualizations for Viewers with Down Syndrome: Implications for Design & Graphicacy Education

The myriad design-based barriers (5.1, summarized in Table 7) and task-based barriers (5.2, summarized in Table 8) have several implications for the design of future data visualizations in health systems. This section proposes 12 HDV design suggestions, along with possible accessibility strategies and potential features, that may better support the abilities of HDV viewers with Down Syndrome. Tables 9 and 10 synthesize our study’s key takeaways from the results and discussion sections into these 12 design suggestions, each with potential strategies that future HDV designers might consider employing. Using a combination of these strategies may improve the accessibility of HDVs for people with Down Syndrome.
Table 7.
Reading the Health Data:
• Invisible number lines (Numeric Ranges and axis Intervals) and Visual shifting in number place values (4.1.1)
• Unfamiliar Words, Jargon, or Abstract Concepts (4.1.1, 4.3.2)
• Undefined Image/Icon meaning overridden by more familiar associations (4.1.2.2)
• Visual Blocks of Repetitive Text (4.2.3.1)
• Visually Overwhelming (4.3.2)
• Token Type Distinction, Abstract Language Preference (5.1.1)
• Unclear Abbreviations (5.1.2)
• Transposed Presentation of Content (5.1.3)
Reading BETWEEN the Health Data:
• Unclear Relationship between X/Y Axes ID & Use (4.1.2.1)
• Unclear Meaning of Color Encoding (4.2.1.1, 4.3.2)
• Unclear Meaning of Image Encoding (4.2.1.2, 4.3.2)
• Difficulty Comparing Values (4.2.2.1)
• Unclear Meaning of Differences (4.2.2.2)
• Unclear Relationships between Elements (4.2.3, 4.3.2)
• Transposed Content Breaking Visual Metaphor (5.1.3)
• Unclear Meaning of Partial Image Values (5.1.4)
Reading BEYOND the Health Data:
• Unclear what is appropriate use of Outside Info (4.2.3.1)
• Unclear how to Synthesize Meaning from Connected Elements for overall message (4.2.3.1, 4.2.3.2, 4.3.3)
• Unclear Interaction–Potential & Outcome (4.3.1)
• Parallel Info-Seeking Task Req’d to Read (5.1.2)
• Outside Personal Info Req’d to Read (5.1.2)
Result: Adverse impact of barriers to inference-making ability may worsen over time IF earlier errors are not recognized & corrected as their mental model is updated.
Table 7. Design-Based Barriers to Inference-making
Table 8.
Inference Barriers Caused by Reading Task Aspect | Reading the Health Data | Reading BETWEEN the Health Data | Reading BEYOND the Health Data
Types of Information | More Familiar | Less Familiar | More Unfamiliar
Level of Information Abstraction | Lowest | Increasing | High
Number of Steps in HDV Tasks | Least | More | Most
Familiarity of Task Procedures | More Familiar | Less Familiar | More Unfamiliar
Ambiguity of Task Procedures | More Clear | Less Clear | More Unclear
Problem-Solving and Abstract Reasoning | Simple | Intermediate | Complex
Information Type Stored in Working Memory* | More Explicit than Implicit | Fewer Explicit than Implicit | More Implicit than Explicit
Working Memory Storage Capacity | Fewer | More | Most
Cognitive Flexibility | Least | More | Most
Overall Cognitive Load | Lowest | Increasing | Highest
Result: Inference-making ability increasingly hindered
*Explicit = Information Represented in the HDV; Implicit = Information Represented Solely in the Mind.
Table 8. Task-Based Barriers to Effective Inference-making: Difficulty, Complexity, & Ambiguity Increase over Time
Table 9.
Design Suggestion | Potential Strategies
Integrate training features | How to Read the HDV: For viewers who are uncertain how to read the HDV, easily accessible video content can demonstrate how to read the specific HDV for people who don’t want to do guided walk-throughs. Background Information: Brief videos that provide important health information.
Incorporate personalization features | Viewer Controlled Personalization: To address differences among individual comprehension needs and preferences, let viewers toggle the visibility and the type of HDV presentation features for information.
Include definitions and background knowledge | To reduce the need for viewers to perform parallel information-seeking activities when they don’t know something, consider explaining: Health Terms: Include definitions of health terms in plain language (e.g., “What’s this word mean?”) with images that support text comprehension. Long Words: Break up long words into syllables (e.g., Mac-ro-nu-tri-ent) with an audio link for pronunciation.
Be specific and concrete when describing abstract concepts | Abstract Language Preference: Use token rather than type-oriented language that is specific and concrete instead of abstract (e.g., replace the type language in “This Week’s Activity” with the more specific token language of “All the Steps You Ran or Walked This Week”).
Avoid design decisions that omit information | Abbreviations: If required, include more than one letter to make recognition of abbreviations easier (e.g., Saturday = “Sat” not “S”).
Avoid intervals that require viewers to perform calculations | Interval specifics can be viewed when a data point is selected or highlighted by guided walk-throughs (e.g., a flyout highlighting that 1 is the week of July 11th, 2023).
Support early graph element identification procedures | Step-by-step guided walk-through: Highlight relationships between elements and between data with brief explanations (i.e., while highlighting the title element: “The title for this graph is Daily Steps. This graph shows how many steps you took every day.”).
Table 9. Health Data Visualization Design Suggestions and Potential Strategies (1 of 2)
Table 10.
Design Suggestion | Potential Strategies
Carefully consider transposing content | Visual Metaphors: Consider how orientation can either reinforce or create confusion when other design features are working to create visual metaphors (i.e., using horizontal line graphs for step distance, vertical bars for total stairs). Vertical type may be unnecessarily difficult to read. Consider design alternatives like the example in “Avoid intervals” above.
Layer encoding to reinforce visual metaphors | Be explicit with encoded information and define the intended meaning and how the design elements work together so viewers know what everything means. Icons: After defining what the icon means and where it appears on the HDV, include additional information if partial icons represent partial values. Color: If color is used to indicate information, such as grouping or status, explicitly explain what the color means and highlight each block of color to draw attention to them.
Visually connect elements for more accurate mapping of info | Step-by-step guided walk-through: Features that highlight and provide brief takeaways to support the comparison between data points and the connection of HDV elements together (e.g., “You took 123 more steps during the week of July 25th than the previous week!”).
Support Inference-Making Abilities | Incorporate Inference-making into Walk-throughs: Integrate takeaways (informed by the graph reading task goal) into step-by-step walk-throughs to better support comprehension.
Support Question Generation Abilities | “What are you trying to do?” or “What do you want to know?” Include a bank of the top kinds of questions someone may ask about the HDV for viewers to choose from. These questions can guide the reading and interpretation process through a walk-through specific to the goal of the selected question.
Table 10. Health Data Visualization Design Suggestions and Potential Strategies (2 of 2)
As with any preliminary work, these suggestions should be carefully considered. For example, although the design suggestion of toggling features may help balance an HDV’s potentially distracting visual clutter, such a feature could impact a viewer with Down Syndrome’s understanding of how to interact with the HDV. Just like the guided walk-through, any initial design suggestions require future critical evaluation. For the best results, the accessibility and usability of any resulting HDV should be evaluated by individuals with Down Syndrome prior to implementation.
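With those caveats noted, the sketch below offers one concrete instance of the “Visually connect elements” strategy in Table 10 (our illustration; the weekly step totals are invented, chosen only so the difference matches Table 10’s example sentence). It computes a week-over-week difference and phrases it as a token-style takeaway instead of leaving the subtraction to the viewer.

```python
# Hypothetical weekly totals; only the 123-step difference is intentional.
weekly_steps = {"week of July 18th": 41_250, "week of July 25th": 41_373}

def weekly_takeaway(current_week: str, previous_week: str, data: dict) -> str:
    """Turn a subtraction the viewer would otherwise do mentally into plain language."""
    diff = data[current_week] - data[previous_week]
    if diff > 0:
        return f"You took {diff} more steps during the {current_week} than the previous week!"
    if diff < 0:
        return f"You took {-diff} fewer steps during the {current_week} than the previous week."
    return f"You took the same number of steps during the {current_week} as the previous week."

print(weekly_takeaway("week of July 25th", "week of July 18th", weekly_steps))
# -> You took 123 more steps during the week of July 25th than the previous week!
```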

5.3.1 Implications for Educators.

As most mathematics literature for this population centers on developing critical real-world numeracy skills, other skills that are becoming increasingly necessary in society, like reading data visualizations, have been overlooked. This study takes preliminary steps toward better understanding the health data visualization literacy skills of individuals with Down Syndrome. While this study may have implications for educators, these are suggested starting points that should be taken with a grain of salt given the qualitative nature of the study and its small sample size. We broadly describe some approaches educators could take when looking to develop the graphicacy skills of students with Down Syndrome.
During lessons, one way to engage students with Down Syndrome or other IDDs is to play to their strengths as learners. As people with Down Syndrome have strong visual awareness, benefit from behavior modeling, are strong kinesthetic learners, and have an active limbic system, graphicacy skill development could be improved by making data activities embodied, socially engaging, and emotionally evocative experiences to facilitate the transition of graphicacy skills from the classroom into long-term memory. It may be valuable to use some of the identified barriers (summarized in Table 7) during lesson planning so that students with Down Syndrome can co-develop personalized graphicacy strategies for when they encounter any of the data visualization barriers described.
The questions in Tables 14 and 15 in the appendix may be useful starting points when initially evaluating the graphicacy skills of students with Down Syndrome. For example, asking students to describe data visualizations can provide insights into what they are first drawn to, indicate which underlying literacy skills are stronger or weaker, reveal what they think is important and relevant within the data visualization, and show how they orient themselves among the various elements. Ideally, with graph-type-specific training, this method of describing graphs may become more effective and standardized, indicating the transition of graphicacy skills from classroom activities into long-term memory. Evaluation of graphicacy skill development could also include asking questions that look for specific values, like identifying extremes. Asking students what kinds of differences they see can indicate their underlying pattern-finding strengths and help elucidate their comparative abilities to notice when data are similar or not. This may be helpful if students need help mapping abstract mathematics “type” language to visual elements (e.g., mapping descriptive superlatives like “most” to the largest values in the data visualization). Using specific, concrete token-based language to describe abstract mathematics language and concepts may also support this mapping, given some students with Down Syndrome’s preference for tokens over types.

5.4 Limitations

Our study has several limitations. The study sample was relatively small. Historically, recruiting people with disabilities for human-computer interaction research is difficult [70]. Because of this, it is generally considered acceptable for studies involving people with disabilities to have smaller sample sizes only if the participant pool meets all the study’s inclusion criteria [70], which all our participants did. Our participants also skewed young (mean age 22.3). Furthermore, our participants resided in countries that are part of the Global North (i.e., the USA and Australia), meaning this study reports upon the experiences of people living in Western, Educated, Industrialized, Rich, and Democratic (WEIRD) nations. As such, this study’s findings may not be representative of the experiences of children or older adults with Down Syndrome or of those who live in countries in the Global South or nations that are not “WEIRD.”
Additionally, the quantitative numbers in the performance tables should be interpreted with caution. The performance metrics in Tables 2 through 5 are descriptive in nature. Given the qualitative nature of our study and the small sample size, the data do not support claims of statistical significance. The order of interview questions may also have impacted these metrics. However, the table content still carries valuable information about the level of difficulty of each reading task and may provide useful background for future studies of HDVs involving people with Down Syndrome.

5.5 Future Work

There are several additional avenues of future work that we intend to pursue. We will validate this study’s findings via a nation-wide survey with a substantially larger sample size that will support statistical inference. This survey will also allow us to investigate initial outliers, such as the sudden performance drop that occurred in trend identification during the second stage. We also will explore how to better support people with Down Syndrome as they generate questions during the three stages of HDV interaction. We will investigate the potential of the proposed future design strategies through a co-design study with participants who have Down Syndrome to better understand how to balance population-specific accessibility features with HDV readability. Future eye tracking usability studies will both complement these findings by investigating how participants read HDVs in authentic settings and validate (or refute) the effectiveness of any resulting prototype from the co-design study.
Although this paper has described how health data visualizations could be improved for individuals with Down Syndrome, other populations within the IDD umbrella who share similar cognitive profiles, abilities, learning strengths or weaknesses, or education in the requisite underlying literacies needed to effectively interact with health data visualizations may likewise benefit from some of this study’s findings. However, given the high variability across individuals within the equally diverse populations classified as IDDs (e.g., Fragile X, Autism, Fetal Alcohol Spectrum Disorder, Cerebral Palsy, brain injury, Phenylketonuria, or congenital hypothyroidism), future population-specific work is needed to better understand the potential generalizability of these findings to other such groups.

6 Conclusion

The first of its kind, this study reported on the underlying graph reading skills of people with Down Syndrome as they made sense of six health data visualizations. We investigated these skills to identify future accessible HDV design opportunities that may better support the ability of individuals with Down Syndrome as they construct their understanding of the visualized health information. Using accessible interview techniques, we were able to capture rich qualitative data in our exploratory study. We employed grounded theory to analyze nearly 700 direct participant quotes from our semi-structured interviews with ten young adults who have Down Syndrome. In doing so, we teased apart the various unique abilities, strengths, and Down Syndrome-specific opportunities for data visualization accessibility design considerations within a health context.
Our findings suggest that people with Down Syndrome have comparatively strong visuospatial graph reading skills and that they can understand abstract information and visualized data. This was demonstrated by their initial saliency tasks (i.e., the breadth of observations made in 4.1.1) and early reading stage identification tasks (4.1.2). However, performance appeared to diminish as the graph reading tasks increased in complexity and ambiguity when participants were reading between and beyond the health data. In other words, the various design-based barriers made extracting insights difficult. This appeared to occur when participants had to hold too much explicit, implied, and inferred information about both the relationships between the graph elements and the visualized data itself in their working memory, whose limits past work has noted. Future health data visualization design features that better support their inference-making abilities and offload cognitively intensive tasks would be beneficial in the later connection and knowledge synthesis phases as people with Down Syndrome interpret and engage more deeply with health data.
This work contributes several design considerations to better support people with Down Syndrome as they interact with health data visualizations. Furthermore, future accessibility features that minimize the cognitive effort required to interact with HDVs may be beneficial not only to people with Down Syndrome but potentially also to the millions of people who similarly struggle with the requisite numeracy, data, and health literacy-based skills and knowledge that are necessary to engage more deeply with their health data. It is our hope that these accessibility improvements to the design of health data visualizations may better support people with Down Syndrome’s ability to participate more actively in the decision-making process and to better advocate for themselves in healthcare settings.

A Appendix

A.1 Mapping Numeracy, Data, and Health Literacies to Skills to Data Visualization Reading Stages

The literacy-based skills described below are used when reading health data visualizations and when managing one's health effectively. Please note that the various literacies may occur simultaneously during the same task as individuals engage with their visualized health information. These tables also highlight how, as the graphicacy stages progress, the demands upon the reader's abilities and literacy-based skills likewise increase. Tables 11, 12, and 13 also informed our theory (depicted in Table 6 of the Discussion). Finally, the mapping of the various kinds of literacies involved when reading data visualizations is also a contribution unique to this study.
Table 11.
Reading the Health Data
Locate and Identify Component HDV Elements
Reading BETWEEN the Data
Observe and Compare Relationships between HDV Elements
Reading BEYOND the Data
Connect Information Across the HDV with Outside Information
Below Level 1:
• Count and sort values
• Perform basic arithmetic operations
• Recognize common spatial representations
Level 1:
• Perform simple, one-step processes
• Understanding simple percentages (ex: 50%)
• Locate and identify elements of simple or common spatial representations
Level 2:
• Perform calculations or processes with two steps involving whole numbers
• Perform simple measurements of spatial representations
• Estimates and interpret simple data and statistics in texts, tables and graphs
Level 3:
• Perform calculations or processes with three or more steps
• Apply number sense and spatial sense (i.e., more or less)
• Recognize mathematical relationships, patterns, and proportions expressed in verbal or numerical forms
Level 4:
• Evaluate spatial relationships, such as changes in data and proportions
Level 4:
• Analyze complex quantities, statistics, and probabilities
Level 5:
• Integrate multiple types of mathematical information where considerable translation or interpretation is required
• Draw inferences to construct meaning between numerical relationships
• Evaluate and critically reflect upon potential solutions or choices
Table 11. Numeracy-specific Skills used when Reading Health Data Visualizations (HDVs) [94]
Table 12.
Reading the Health Data
Locate and Identify HDV Elements
Reading BETWEEN the Data
Observe and Compare Relationships between HDV Elements
Reading BEYOND the Data
Connect Information Across and Outside the HDV
ODI Below Level 1 [59]:
• Able to recall a single piece of specific information as presented in a graph or chart
Cui Competencies [28]:
• Find, identify and read data
Ridsdale Competencies [107]:
• Reads charts, tables, and graphs
• Explores Data
ODI Level 1 [59]:
• Able to understand the meaning of information or data presented to you
• Able to explain what a simple graph means
Cui Competencies [28]:
• Analysis by connecting, distinguishing, and comparing between data
• Extract and transform data into information
Ridsdale Competencies [107]:
• Organizes data
• Identifies outliers, anomalies, and discrepancies in the data
• Identifies useful data
• Understands data and thinks critically when working with data
ODI Level 2 [59]:
• Considers where data came from and potential impacts it may have upon the message being presented
• Paraphrase and make low level inferences
• Interpret data to create a new fact or understand an existing fact in a new way
ODI Level 4 [59]:
• Use data from different sources to make informed decisions
ODI Level 5 [59]:
• Synthesize or create original ideas based on information from multiple sources
Cui Competencies [28]:
• Interpret data in context, make inferences and evaluate information
• Use information to inform and implement a decision (i.e., data-driven decision-making)
• Identify problems and frame questions
• Communicate Data and draw conclusions based on understanding
Ridsdale Competencies [107]:
• Identifies key takeaways and integrates data with other important information
• Uses data to identify problems
• Prioritizes and uses data to inform potential actions, solutions, or decisions
• Weights merits and impact of potential actions, solutions, or decisions
Table 12. Data Literacy-specific Skills used when Reading Health Data Visualizations (HDVs) [28, 59, 107]
Table 13.
Reading the Health Data
Locate and Identify Component HDV Elements
Reading BETWEEN the Data
Observe and Compare Relationships between HDV Elements
Reading BEYOND the Data
Connect Information Across the HDV with Outside Information
Functional [90]:
• Obtain relevant health information
Below Basic [67]:
• Able to locate straightforward pieces of information in short, simple texts
Functional [90]:
• Apply knowledge to a limited range of prescribed activities
Below Basic [67]:
• Find more complex information in somewhat longer and more complex documents
Proficient [67]:
• Compare and contrast multiple pieces within complex texts or documents
Interactive [90]:
• Extract information and derive meaning from communication
• Apply new information to changing circumstances
Critical [90]:
• Critically analyze information
• Use information to exert greater control over life events and situations
Intermediate [67]:
• Interpret information presented in complex graphs, tables, and more complex health texts or documents
Proficient [67]:
• Draw abstract inferences
• Apply abstract or complicated information from text or documents
Table 13. Health Literacy-specific Skills used when Reading Health Data Visualizations (HDVs) [67, 90]

A.2 Health Data Visualization Interview Questions

Table 14.
Question Asked | HDV
Saliency: “What is the first thing you see?” | Table, Bar, Stacked bar, Line, Dual Y-axis Line, Scatter plot
Saliency & Identification: “I’m going to cover my eyes so I can’t see.” [[Researcher covers their own eyes with their hands.]] “In your own words, how would you describe this to someone?” | Table, Bar, Stacked bar, Line, Dual Y-axis Line, Scatter plot
Internally Connecting Topic: “What kinds of information (or stuff) is this graph describing?” | Table, Bar, Stacked bar, Line, Dual Y-axis Line, Scatter plot
Synthesis, Changes to Behavior: “Looking at this graph, is the graph telling you something about your health?” [[IF YES]] Follow-up: “What is the graph telling you about your health?” | Table, Bar, Stacked bar, Line, Dual Y-axis Line, Scatter plot
Interaction Potential: “Do you think anything is clickable? Like if you clicked somewhere, something would happen.” | Table, Bar, Stacked bar, Line, Dual Y-axis Line, Scatter plot
Interaction Expectation: “If you clicked on that, what do you think would happen next?” | Table, Bar, Stacked bar, Line, Dual Y-axis Line, Scatter plot
Information-Seeking: “What do you wish was better explained (what they mean) in this graph?” | Table, Bar, Stacked bar, Line, Dual Y-axis Line, Scatter plot
Color Encoding: “What do you think the color means?” (one-color) –OR– “What do you think the colors mean?” (multiple colors) | Table, Bar, Stacked bar, Line, Dual Y-axis Line
Comparing Values to Identify Extremes: See Table 15 in Appendix A.2 for Graph-specific questions. | Table, Bar, Stacked bar, Line, Dual Y-axis Line, Scatter plot
Differences: “What are some differences you see?” | Table, Bar, Stacked bar, Line, Dual Y-axis Line, Scatter plot
Trend: “Is the graph changing over time?” [[IF YES]] Follow-up: “How would you describe the way that the graph is changing over time?” | Table, Bar, Stacked bar, Line, Dual Y-axis Line, Scatter plot
Table 14. General HDV Questions
Table 15.
Question Asked | HDV
“What is the highest (or biggest) number?” | Table
“What is the lowest (or smallest) number?” | Table
“What day has the most (or biggest number of) steps?” | Bar
“What day has the least (or littlest number of) steps?” | Bar
“What day was the most food eaten?” | Stacked Bar
“What day was the least food eaten?” | Stacked Bar
“What day had the highest (or most) amount of fats?” | Stacked Bar
“What day had the least (or the littlest) amount of protein?” | Stacked Bar
“What day did they walk the most?” | Line
“What day did they walk the least?” | Line
“When was the most intense part of the activity?” | Dual Y-axis Line
“When was the least intense part of the activity?” | Dual Y-axis Line
“What food is the highest food?” | Scatter plot
“What is the lowest food?” | Scatter plot
Table 15. HDV Specific Questions: Identifying Extremes to Find Exact Values

A.3 Saliency Graphs

How to read: the figures in Appendix A.3 show each individual’s saliency and sense-making skills per HDV. The table is the legend for these figures and shows which icon shapes are associated with which of the four types of verbalized observations (i.e., overall generalization: a square, graph element type: a hexagon, specific element: a circle, descriptive quality: a diamond). The numbers in the shapes indicate the order of the verbalization. The first and last verbalizations have a black background with a white number. The middle steps have a white background and black text. When broad generalizations about regions were made, these regions were highlighted with black, dashed lines.
Table 16.
Icon | Label | Definition
Square | Overall Generalization | An assessment, comparison, categorization, or judgment of a region. Usually indicates a comparison between more than one graph elements or aspects (e.g., a broad assessment of the overall topic, comparing data types, categorizing groups of data together, judging groups of information, etc.)
Hexagon | Graph Element | General elements using broad language indicating a type of information (e.g., chart type, icons, numbers, words, dates, etc.)
Circle | Specific Element | Specific verbalized elements mentioned verbatim as they appear in the visualization (e.g., actual title name, actual number value, actual label text)
Diamond | Descriptive Quality | Describing an aspect of a graph element (e.g., color, size, position, etc.)
When large areas are discussed, the region is highlighted by dashed lines. Black backgrounds with light text color indicate either a start or end-point. White with black numbers indicate intermediary steps.
Table 16. Verbalized Saliency Graph Icon Legend
Fig. 7.
Fig. 7. “Body fat chart for men %:” Participant verbalized saliency of graph #1 (A multi-colored table).
Fig. 8.
Fig. 8. “Daily steps:” Participant verbalized saliency of graph #2 (A single-colored bar graph with icons).
Fig. 9.
Fig. 9. “This week’s macronutrients:” Participant verbalized saliency of graph #3 (A multi-colored stacked bar graph with icons).
Fig. 10.
Fig. 10. “Weekly walk distance history:” Participant verbalized saliency of graph #4 (A single-colored line graph with icons).
Fig. 11.
Fig. 11. “Activity intensity:” Participant verbalized saliency of graph #5 (A dual Y-axes line graph with multiple icons).
Fig. 12.
Fig. 12. “Nutritionists vs. Americans’ Perception of Healthy Foods:” The first 6 of 10 participants’ verbalized saliency for graph #6 (A scatter plot with multiple images).
Fig. 13.
Fig. 13. “Nutritionists vs. Americans’ Perception of Healthy Foods:” The remaining 4 of 10 participants’ verbalized saliency of graph #6 (A scatter plot with multiple images).

References

[1]
[n. d.]. What is Numeracy. Accessed: 2023-7-9 https://www.nationalnumeracy.org.uk/what-numeracy
[2]
Roberto A. Abreu-Mendoza and Natalia Arias-Trejo. 2015. Numerical and area comparison abilities in Down syndrome. Research in Developmental Disabilities 41 (2015), 58–65.
[3]
Jordan M. Alpert, Linda Desens, Alex H. Krist, Rebecca A. Aycock, and Gary L. Kreps. 2017. Measuring health literacy levels of a patient portal using the CDC’s clear communication index. Health Promotion Practice 18, 1 (2017), 140–149.
[4]
Muneef Alshammari, Owen Doody, and Ita Richardson. 2018. Barriers to the access and use of health information by individuals with intellectual and developmental disability IDD: A review of the literature. In 2018 IEEE International Conference on Healthcare Informatics (ICHI’18). IEEE, 294–298.
[5]
Tricia Aung, Debora Niyeha, Shagihilu Shagihilu, Rose Mpembeni, Joyceline Kaganda, Ashley Sheffel, and Rebecca Heidkamp. 2019. Optimizing data visualization for reproductive, maternal, newborn, child health, and nutrition (RMNCH&N) policymaking: Data visualization preferences and interpretation capacity among decision-makers in Tanzania. Global Health Research and Policy 4 (2019), 1–14.
[6]
Tony Baldwinson. 2019. UPIAS-The Union of Physically Impaired Against Segregation (1972-1990): A Public Record from... Private Files. TBR Consulting.
[7]
Serena Barello, Guendalina Graffigna, Mariarosaria Savarese, and Albino Claudio Bosio. 2014. Engaging patients in health management: Towards a preliminary theoretical conceptualization. Psicologia della Salute (2014), 11–33.
[8]
Stephanie J. Bennett, Joni Holmes, and Sue Buckley. 2013. Computerized memory training leads to sustained improvement in visuospatial short-term memory skills in children with Down syndrome. American Journal on Intellectual and Developmental Disabilities 118, 3 (2013), 179–192.
[9]
Nancy D. Berkman, Terry C. Davis, and Lauren McCormack. 2010. Health literacy: What is it? Journal of Health Communication 15, S2 (2010), 9–19.
[10]
Gillian Bird and Sue Buckley. 2001. Number Skills for Individuals with Down Syndrome. Down Syndrome Educational Trust, Kirkby Lonsdale, England.
[11]
Michelle A. Borkin, Zoya Bylinskii, Nam Wook Kim, Constance May Bainbridge, Chelsea S. Yeh, Daniel Borkin, Hanspeter Pfister, and Aude Oliva. 2015. Beyond memorability: Visualization recognition and recall. IEEE Transactions on Visualization and Computer Graphics 22, 1 (2015), 519–528.
[12]
Sophie Brigstocke, Charles Hulme, and Joanna Nye. 2008. Number and arithmetic skills in children with Down syndrome. Down Syndrome Research and Practice (2008).
[13]
Marilyn J. Bull. 2020. Down syndrome. New England Journal of Medicine 382, 24 (2020), 2344–2352.
[14]
Angela Byrne, John MacDonald, and Sue Buckley. 2002. Reading, language and memory skills: A comparative longitudinal study of children with Down syndrome and their mainstream peers. British Journal of Educational Psychology 72, 4 (2002), 513–529.
[15]
Jesus J. Caban and David Gotz. 2015. Visual analytics in healthcare–opportunities and research challenges. Journal of the American Medical Informatics Association 22, 2 (2015), 260–262.
[16]
E. Cadman. 2020. Developing Pre-Service Teachers’ Graph Literacy Knowledge.
[17]
Patricia A. Carpenter and Priti Shah. 1998. A model of the perceptual and conceptual processes in graph comprehension. Journal of Experimental Psychology: Applied 4, 2 (1998), 75.
[18]
CDC. 2023. Facts about Down Syndrome. Accessed: 2023-7-9 https://www.cdc.gov/ncbddd/birthdefects/downsyndrome.html
[19]
CDC. 2023. What is Health Literacy? Accessed: 2023-7-9 https://www.cdc.gov/healthliteracy/learn/index.html
[20]
Robin Chapman and Linda Hesketh. 2001. Language, cognition, and short-term memory in individuals with Down syndrome. Down Syndrome Research and Practice 7, 1 (2001), 1–7.
[21]
Kathy Charmaz. 2006. Constructing Grounded Theory: A Practical Guide through Qualitative Analysis. Sage.
[22]
Kathy Charmaz. 2014. Constructing Grounded Theory. Sage.
[23]
Doris B. Chin, Kristen P. Blair, and Daniel L. Schwartz. 2016. Got game? A choice-based learning assessment of data literacy and visualization skills. Technology, Knowledge and Learning 21 (2016), 195–210.
[24]
Barbara Clarke and Rhonda Faragher. 2013. Developing early number concepts for children with Down syndrome. Educating Learners with Down Syndrome (2013), 146–162.
[25]
William S. Cleveland and Robert McGill. 1984. Graphical perception: Theory, experimentation, and application to the development of graphical methods. Journal of the American Statistical Association 79, 387 (1984), 531–554.
[26]
Cleveland Clinic. 2023. Intellectual Disability: Definition, Symptoms, & Treatment. (Accessed on 12/11/2023) https://my.clevelandclinic.org/health/diseases/25015-intellectual-disability-id
[27]
John W. Creswell and Cheryl N. Poth. 2016. Qualitative Inquiry and Research Design: Choosing among Five Approaches. Sage publications.
[28]
Ying Cui, Fu Chen, Alina Lutsyk, Jacqueline P. Leighton, and Maria Cutumisu. 2023. Data literacy assessments: A systematic literature review. Assessment in Education: Principles, Policy & Practice 30, 1 (2023), 76–96.
[29]
Frances R. Curcio. 1987. Comprehension of mathematical relationships expressed in graphs. Journal for Research in Mathematics Education 18, 5 (1987), 382–393.
[30]
Monica Cuskelly and Rhonda Faragher. 2019. Developmental dyscalculia and Down syndrome: Indicative evidence. International Journal of Disability, Development and Education 66, 2 (2019), 151–161.
[31]
Dina R. Dajani and Lucina Q. Uddin. 2015. Demystifying cognitive flexibility: Implications for clinical and developmental neuroscience. Trends in Neurosciences 38, 9 (2015), 571–578.
[32]
G. De Graaf, F. Buckley, and B. Skotko. 2019. People living with Down syndrome in the USA: Births and population. Down Syndrome Education International. https://dsuri.net/us-population-factsheet (2019).
[33]
Richard Desjardins, William Thorn, Andreas Schleicher, Glenda Quintini, Michele Pellizzari, Viktoria Kis, and Ji Eun Chung. 2013. OECD skills outlook 2013: First results from the survey of adult skills. Journal of Applied Econometrics 30, 7 (2013), 1144–1168.
[34]
Rosanna Di Gioia, Stéphane Chaudron, Monica Gemo, and Ignacio Sanchez. 2019. Cyber chronix, participatory research approach to develop and evaluate a storytelling game on personal data protection rights and privacy risks. In Lecture Notes in Computer Science. Springer International Publishing, Cham, 221–230.
[35]
Marie-Anne Durand, Renata W. Yen, James O’Malley, Glyn Elwyn, and Julien Mancini. 2020. Graph literacy matters: Examining the association between graph literacy, health literacy, and numeracy in a Medicaid eligible population. PLoS One 15, 11 (Nov. 2020), e0241844.
[36]
Johanna Ebbeler, Cindy L. Poortman, Kim Schildkamp, and Jules M. Pieters. 2017. The effects of a data use intervention on educators’ satisfaction and data literacy. Educ. Assess. Eval. Acc. 29, 1 (Feb. 2017), 83–105.
[37]
Sherine El-Toukhy, Alejandra Méndez, Shavonne Collins, and Eliseo J. Pérez-Stable. 2020. Barriers to patient portal access and use: Evidence from the Health Information National Trends Survey. J. Am. Board Fam. Med. 33, 6 (Nov. 2020), 953–968.
[38]
Danyang Fan, Alexa Fay Siu, Hrishikesh Rao, Gene Sung-Ho Kim, Xavier Vazquez, Lucy Greco, Sile O’Modhrain, and Sean Follmer. 2023. The accessibility of data visualizations on the web for screen reader users: Practices and experiences during Covid-19. ACM Transactions on Accessible Computing 16, 1 (2023), 1–29.
[39]
Rhonda Faragher. 2019. The new ‘functional mathematics’ for learners with Down syndrome: Numeracy for a digital world. Intl. J. Disabil. Dev. Educ. 66, 2 (Feb. 2019), 1–12.
[40]
R. Faragher and B. Clarke. 2013. Mathematics profile of the learner with Down syndrome. In Educating Learners with Down Syndrome. 119–145.
[41]
R. Faragher, P. Robertson, and G. Bird. 2020. International Guidelines for the Education of Learners with Down Syndrome. DSI, Teddington, UK.
[42]
J. Feng, J. Lazar, L. Kumin, and A. Ozok. 2010. Computer usage by children with Down syndrome: Challenges and future research. ACM Transactions on Accessible Computing (TACCESS) 2, 3 (2010), 1–44.
[43]
Fitbit. 2020. Fitbit Official Site for Activity Trackers & More. https://www.fitbit.com/global/us/home
[44]
Iddo Gal, Silvia Alatorre, Sean Close, Jeff Evans, Lene Johansen, Terry Maguire, Myrna Manly, and Dave Tout. 2009. PIAAC Numeracy: A Conceptual Framework. Vol. 1. OECD Publishing.
[45]
Mirta Galesic and Rocio Garcia-Retamero. 2011. Graph literacy: A cross-cultural comparison. Medical Decision Making 31, 3 (May 2011), 444–457.
[46]
GDSF. 2021. FAQ and Facts about Down Syndrome. Accessed: 2023-7-9 https://www.globaldownsyndrome
[47]
Engida H. Gebre. 2018. Young adults’ understanding and use of data: Insights for fostering secondary school students’ data literacy. Can. J. Sci. Math. Technol. Educ. 18, 4 (Dec. 2018), 330–341.
[48]
Amelia N. Gibson. 2016. Building a progressive-situational model of post-diagnosis information seeking for parents of individuals with Down syndrome. Glob. Qual. Nurs. Res. 3 (Jan. 2016), 2333393616680967.
[49]
Amanda L. Golbeck, Carolyn R. Ahlers-Schmidt, Angelia M. Paschal, and S. Edwards Dismuke. 2005. A definition and operational framework for health numeracy. American Journal of Preventive Medicine 29, 4 (2005), 375–376.
[50]
Robert Gould. 2017. Data literacy is statistical literacy. Statistics Education Research Journal 16, 1 (2017), 22–25.
[51]
Julie Grieco, Margaret Pulsifer, Karen Seligsohn, Brian Skotko, and Alison Schwartz. 2015. Down syndrome: Cognitive and behavioral functioning across the lifespan. In American Journal of Medical Genetics Part C: Seminars in Medical Genetics, Vol. 169. Wiley Online Library, 135–149.
[52]
Jessie Gruman, Margaret Holmes Rovner, Molly E. French, Dorothy Jeffress, Shoshanna Sofaer, Dale Shaller, and Denis J. Prager. 2010. From patient education to patient engagement: Implications for the field of patient education. Patient Education and Counseling 78, 3 (2010), 350–356.
[53]
Edith S. Gummer and Ellen B. Mandinach. 2015. Building a conceptual framework for data literacy. Teachers College Record 117, 4 (2015), 1–22.
[54]
Michael J. Guralnick, Robert T. Connor, and L. Clark Johnson. 2011. Peer-related social competence of young children with Down syndrome. American Journal of Intellectual and Developmental Disabilities 116, 1 (2011), 48–64.
[55]
James W. Hanson, Michele Lloyd-Puryear, Cynthia A. Moore, John Williams, H. Eugene Hoyme, Marilyn J. Bull, William I. Cohen, Franklin Desposito, Beth A. Pletcher, Nancy Roizen, Rebecca Wappner, and Lauri A. Hall. 2001. Health supervision for children with Down syndrome. Pediatrics 107, 2 (2001), 442–449.
[56]
Steve Haroz, Robert Kosara, and Steven L. Franconeri. 2015. Isotype visualization: Working memory, performance, and engagement with pictographs. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems. 1191–1200.
[57]
Jessica Hullman, Eytan Adar, and Priti Shah. 2011. Benefitting InfoVis with visual difficulties. IEEE Transactions on Visualization and Computer Graphics 17, 12 (2011), 2213–2222.
[58]
Charles Hulme, Kristina Goetz, Sophie Brigstocke, Hannah M. Nash, Arne Lervåg, and Margaret J. Snowling. 2012. The growth of reading skills in children with Down syndrome. Developmental Science 15, 3 (2012), 320–329.
[59]
Calum Inverarity, David Tarrant, Emilie Forrest, and Phil Greenwood. 2022. Towards benchmarking data literacy. In Companion Proceedings of the Web Conference 2022. 408–416.
[60]
Christopher Jarrold and Alan Baddeley. 2001. Short-term memory in Down syndrome: Applying the working memory model. Down Syndrome Research and Practice 7, 1 (2001), 17–23.
[61]
Christopher Jarrold, Alan D. Baddeley, and Caroline E. Phillips. 2002. Verbal short-term memory in down syndrome: A problem of memory, audition, or speech? Journal of Speech, Language, and Hearing Research 45, 6 (2002), 531–544.
[62]
S. N. Kar. 2023. Complete Guide to Apple Watch Heart Rate Zones - MyHealthyApple. https://www.myhealthyapple.com/complete-guide-to-apple-watch-heart-rate-zones/
[63]
Andreas Kerren, John Stasko, Jean-Daniel Fekete, and Chris North. 2008. Information Visualization: Human-centered Issues and Perspectives. Vol. 4950. Springer.
[64]
Nam Wook Kim, Shakila Cherise Joyner, Amalia Riegelhuth, and Y. Kim. 2021. Accessible visualization: Design space, opportunities, and challenges. In Computer Graphics Forum, Vol. 40. Wiley Online Library, 173–188.
[65]
Tibor Koltay. 2017. Data literacy for researchers and data librarians. Journal of Librarianship and Information Science 49, 1 (2017), 3–14.
[66]
Ragne Kõuts-Klemm. 2019. Data literacy among journalists: A skills-assessment based approach. Central European Journal of Communication 12, 24 (2019), 299–315.
[67]
Mark Kutner, Elizabeth Greenberg, Ying Jin, Christine Paulsen, and Sheida White. 2006. The Health Literacy of America’s Adults: Results From the 2003 National Assessment of Adult Literacy. https://nces.ed.gov/pubs2006/2006483.pdf, 76 pages. (Accessed on 07/10/2023).
[68]
Silvia Lanfranchi, Cesare Cornoldi, and Renzo Vianello. 2004. Verbal and visuospatial working memory deficits in children with Down syndrome. American Journal on Mental Retardation 109, 6 (2004), 456–466.
[69]
P. E. Larasati, S. Supahar, and D. R. A. Yunanta. 2020. Validity and reliability estimation of assessment ability instrument for data literacy on high school physics material. In Journal of Physics: Conference Series, Vol. 1440. IOP Publishing, 012020.
[70]
Jonathan Lazar, Jinjuan Feng, and Harry Hochheiser. 2017. Research Methods in Human-Computer Interaction (2nd ed.). Morgan Kaufmann, Oxford, England.
[71]
Jonathan Lazar, Caitlin Woglom, Jeanhee Chung, Alison Schwartz, Yichuan Grace Hsieh, Richard Moore, Drew Crowley, and Brian Skotko. 2018. Co-design process of a smart phone app to help people with Down syndrome manage their nutritional habits. Journal of Usability Studies 13, 2 (2018), 73–93.
[72]
Sukwon Lee, Sung-Hee Kim, Ya-Hsin Hung, Heidi Lam, Youn-ah Kang, and Ji Soo Yi. 2015. How do people make sense of unfamiliar visualizations?: A grounded model of Novice’s information visualization sensemaking. IEEE Transactions on Visualization and Computer Graphics 22, 1 (2015), 499–508.
[73]
Sukwon Lee, Sung-Hee Kim, and Bum Chul Kwon. 2016. VLAT: Development of a visualization literacy assessment test. IEEE Transactions on Visualization and Computer Graphics 23, 1 (2016), 551–560.
[74]
Helen Levy, Peter A. Ubel, Amanda J. Dillard, David R. Weir, and Angela Fagerlin. 2014. Health numeracy: The importance of domain in assessing numeracy. Medical Decision Making 34, 1 (2014), 107–115.
[75]
Zhicheng Liu and John Stasko. 2010. Mental models, visual reasoning and interaction in information visualization: A top-down perspective. IEEE Transactions on Visualization and Computer Graphics 16, 6 (2010), 999–1008.
[76]
Susan J. Loveall and Andrea Barton-Hulsey. 2021. Reading skills in Down syndrome: Implications for clinical practice. In Seminars in Speech and Language, Vol. 42. Thieme Medical Publishers, New York, NY, 330–344.
[77]
U. Ludewig. 2018. Understanding Graphs: Modeling Processes, Prerequisites and Influencing Factors of Graphicacy. Doctoral dissertation, Universität Tübingen.
[78]
A. Lusiyana, F. Festiyed, and Y. Yulkifli. 2020. Measuring the physics students’ data literacy skill in the era of industry 4.0 by using MIRECAL learning model. International Journal of Scientific and Technology Research 9, 1 (2020), 1203–1205.
[79]
Suryadi, I. K. Mahardika, Supeno, and Sudarti. 2021. Data literacy of high school students on physics learning. In Journal of Physics: Conference Series, Vol. 1839. IOP Publishing, 012025.
[80]
Saida Mamedova and Emily Pawlowski. 2020. Adult numeracy in the United States. Data point. NCES 2020-025. National Center for Education Statistics (2020).
[81]
Kim Marriott, Bongshin Lee, Matthew Butler, Ed Cutrell, Kirsten Ellis, Cagatay Goncu, Marti Hearst, Kathleen McCoy, and Danielle Albers Szafir. 2021. Inclusive data visualization for people with disabilities: A call to action. Interactions 28, 3 (2021), 47–51.
[82]
Percival G. Matthews, Mark Rose Lewis, and Edward M. Hubbard. 2016. Individual differences in nonsymbolic ratio processing predict symbolic math performance. Psychological Science 27, 2 (2016), 191–202.
[83]
Elisabetta Monari Martinez and Katia Pellegrini. 2010. Algebra and problem-solving in Down syndrome: A study with 15 teenagers. European Journal of Special Needs Education 25, 1 (2010), 13–29.
[84]
Tamara Munzner. 2014. Visualization Analysis and Design. CRC Press.
[85]
MyFitnessPal. [n. d.]. Free Calorie Counter, Diet & Exercise Journal | MyFitnessPal. https://www.myfitnesspal.com/en/
[86]
André Mégarbané, Florian Noguier, Samantha Stora, Laurent Manchon, Clotilde Mircher, Roman Bruno, Nathalie Dorison, Fabien Pierrat, Marie-Odile Rethoré, Bernadette Trentin, Aimé Ravel, Marine Morent, Gerard Lefranc, and David Piquemal. 2013. The intellectual disability of trisomy 21: Differences in gene expression in a case series of patients with lower and higher IQ. European Journal of Human Genetics 21, 11 (Nov. 2013), 1253–1259.
[87]
Kari-Anne B. Næss, Monica Melby-Lervåg, Charles Hulme, and Solveig-Alma Halaas Lyster. 2012. Reading skills in children with Down syndrome: A meta-analytic review. Research in Developmental Disabilities 33, 2 (2012), 737–747.
[88]
US National Reading Panel and US National Institute of Child Health and Human Development. 2000. Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction: Reports of the subgroups. National Institute of Child Health and Human Development, National Institutes of Health.
[89]
Kent L. Norman. 2017. Cyberpsychology: An Introduction to Human-Computer Interaction. Cambridge University Press.
[90]
Don Nutbeam. 2000. Health literacy as a public health goal: A challenge for contemporary health education and communication strategies into the 21st century. Health Promotion International 15, 3 (2000), 259–267.
[92]
US Dept. of Health and Human Services. 2009. America’s Health Literacy: Why We Need Accessible Health Information. US HHS.
[93]
Office of Special Education and Rehabilitative Services, U.S. Department of Education. 2023. 44th Annual Report to Congress on the Implementation of the Individuals with Disabilities Education Act, 2022. (Accessed on 12/13/2023) https://sites.ed.gov/idea/files/44th-arc-for-idea.pdf
[94]
National Center for Education Statistics. 2017. Highlights of the Program for the International Assessment of Adult Competencies, U.S. Results. Accessed: 2023-7-9 https://nces.ed.gov/surveys/piaac/national_results.asp
[95]
B. C. Oguguo, Fadip Audu Nannim, Agnes O. Okeke, Roseline I. Ezechukwu, Godwin Asanga Christopher, and Clifford O. Ugorji. 2020. Assessment of students’ data literacy skills in southern Nigerian universities. Universal Journal of Educational Research 8, 6 (2020), 2717–2726.
[96]
Sara Onnivello, Francesca Pulina, Chiara Locatelli, Chiara Marcolin, Giuseppe Ramacieri, Francesca Antonaros, Beatrice Vione, Maria Caracausi, and Silvia Lanfranchi. 2022. Cognitive profiles in children and adolescents with Down syndrome. Scientific Reports 12, 1 (2022), 1936.
[97]
Peter Pirolli and Stuart Card. 1999. Information foraging. Psychological Review 106, 4 (1999), 643.
[98]
Michele Polfuss, Kathleen J. Sawin, Paula E. Papanek, Linda Bandini, Bethany Forseth, Andrea Moosreiner, Kimberley Zvara, and Dale A. Schoeller. 2018. Total energy expenditure and body composition of children with developmental disabilities. Disability and Health Journal 11, 3 (2018), 442–446.
[99]
Vee Prasher and Cliff Cunningham. 2001. Down syndrome. Current Opinion in Psychiatry 14, 5 (2001), 431–436.
[100]
M. A. Pratama, Supahar, D. P. Lestari, W. K. Sari, T. S. Y. Putri, and V. A. K. Adiatmah. 2020. Data literacy assessment instrument for preparing 21 Cs literacy: Preliminary study. In Journal of Physics: Conference Series, Vol. 1440. IOP Publishing, 012085.
[101]
Lauren T. Ptomey, Debra K. Sullivan, Jaehoon Lee, Jeannine R. Goetz, Cheryl Gibson, and Joseph E. Donnelly. 2015. The use of technology for delivering a weight loss program for adolescents with intellectual and developmental disabilities. Journal of the Academy of Nutrition and Dietetics 115, 1 (2015), 112–118.
[102]
K. Quealy and M. Sanger-Katz. 2016. Is Sushi ‘Healthy’? What About Granola? Where Americans and Nutritionists Disagree - The New York Times. https://www.nytimes.com/interactive/2016/07/05/upshot/is-sushi-healthy-what-about-granola-where-americans-and-nutritionists-disagree.html
[103]
L. Rahmawati, I. Wilujeng, and A. Satriana. 2020. Application of STEM learning approach through simple technology to increase data literacy. In Journal of Physics: Conference Series, Vol. 1440. IOP Publishing, 012047.
[104]
Christoph Ratz. 2013. Do students with Down syndrome have a specific learning profile for reading? Research in Developmental Disabilities 34, 12 (2013), 4504–4514.
[105]
Todd D. Reeves and Jui-Ling Chiang. 2019. Effects of an asynchronous online data literacy intervention on pre-service and in-service educators’ beliefs, self-efficacy, and practices. Computers & Education 136 (2019), 13–33.
[106]
Todd D. Reeves and Sheryl L. Honig. 2015. A classroom data literacy intervention for pre-service teachers. Teaching and Teacher Education 50 (2015), 90–101.
[107]
Chantel Ridsdale, James Rothwell, Michael Smit, Hossam Ali-Hassan, Michael Bliemel, Dean Irvine, Daniel Kelley, Stan Matwin, and Bradley Wuetherick. 2015. Strategies and best practices for data literacy education: Knowledge synthesis report. (2015).
[108]
Michelle Antionette Rogers. 2015. A Developmental Study Examining the Value, Effectiveness, and Quality of a Data Literacy Intervention. The University of Iowa.
[109]
Julian B. Rotter. 1966. Generalized expectancies for internal versus external control of reinforcement. Psychological Monographs: General and Applied 80, 1 (1966), 1.
[110]
Urmimala Sarkar, Andrew J. Karter, Jennifer Y. Liu, Nancy E. Adler, Robert Nguyen, Andrea Lopez, and Dean Schillinger. 2010. The literacy divide: Health literacy and the use of an internet-based patient portal in an integrated health system–results from the Diabetes Study of Northern California (DISTANCE). Journal of Health Communication 15, S2 (2010), 183–196.
[111]
Marilyn M. Schapira, Kathlyn E. Fletcher, Mary Ann Gilligan, Toni K. King, Purushottam W. Laud, B Alexendra Matthews, Joan M. Neuner, and Elisabeth Hayes. 2008. A framework for health numeracy: How patients use quantitative skills in health care. Journal of Health Communication 13, 5 (2008), 501–517.
[112]
René Schneider. 2018. Training trainers for research data literacy: A content- and method-oriented approach. In Communications in Computer and Information Science. Springer International Publishing, Cham, 139–147.
[113]
Priti Shah and Eric G. Freedman. 2011. Bar and line graph comprehension: An interaction of top-down and bottom-up processes. Topics in Cognitive Science 3, 3 (2011), 560–578.
[114]
Numera M. I. Shahid, Effie Lai-Chong Law, and Nervo Verdezoto. 2022. Technology-enhanced support for children with Down syndrome: A systematic literature review. International Journal of Child-Computer Interaction 31 (2022), 100340.
[115]
Ben Shneiderman. 1996. The eyes have it: A task by data type taxonomy for information visualizations. In Proceedings 1996 IEEE Symposium on Visual Languages. IEEE, 336–343.
[116]
Ben Shneiderman and Catherine Plaisant. 2006. Strategies for evaluating information visualization tools: Multi-dimensional in-depth long-term case studies. In Proceedings of the 2006 AVI Workshop on Beyond Time and Errors: Novel Evaluation Methods for Information Visualization. 1–7.
[117]
Tamara L. Shreiner. 2019. Students’ use of data visualizations in historical reasoning: A think-aloud investigation with elementary, middle, and high school students. The Journal of Social Studies Research 43, 4 (2019), 389–404.
[118]
Kristine Sørensen. 2019. Defining health literacy: Exploring differences and commonalities. In International Handbook of Health Literacy. Policy Press, 5–20.
[119]
Gillian S. Starkey and Bruce D. McCandliss. 2014. The emergence of “groupitizing” in children’s numerical cognition. Journal of Experimental Child Psychology 126 (2014), 120–137.
[120]
Meghan Reading Turchioe, Annie Myers, Samuel Isaac, Dawon Baik, Lisa V. Grossman, Jessica S. Ancker, and Ruth Masterson Creber. 2019. A systematic review of patient-facing visualizations of personal health data. Applied Clinical Informatics 10, 04 (2019), 751–770.
[121]
UNESCO. 2023. What you Need to know about Literacy | UNESCO. (Accessed on 12/15/2023) https://www.unesco.org/en/literacy/need-know
[122]
Philip Vahey, Ken Rafanan, Charles Patton, Karen Swan, Mark van’t Hooft, Annette Kratcoski, and Tina Stanford. 2012. A cross-disciplinary approach to teaching data literacy and proportionality. Educational Studies in Mathematics 81 (2012), 179–205.
[123]
Alexander JAM van Deursen and Jan AGM van Dijk. 2014. The digital divide shifts to differences in usage. New Media & Society 16, 3 (2014), 507–526.
[124]
Kate van Dooren, Nick Lennox, and Madeline Stewart. 2013. Improving access to electronic health records for people with intellectual disability: A qualitative study. Australian Journal of Primary Health 19, 4 (2013), 336–342.
[125]
L. Visu-Petra, O. Benga, I. Tincaş, and M. Miclea. 2007. Visual-spatial processing in children and adolescents with Down’s syndrome: A computerized assessment of memory skills. Journal of Intellectual Disability Research 51, 12 (2007), 942–952.
[126]
Colin Ware. 2019. Information Visualization: Perception for Design. Morgan Kaufmann.
[127]
Karl E. Weick. 1995. Sensemaking in Organizations. Vol. 3. Sage.
[128]
Linda Wetzel. 2006. Types and tokens. The Stanford Encyclopedia of Philosophy (2006).
[129]
Zuzanna Wiorogórska, Jędrzej Leśniewski, and Ewa Rozkosz. 2018. Data literacy and research data management in two top universities in Poland. Raising awareness. In Information Literacy in the Workplace: 5th European Conference, ECIL 2017, Saint Malo, France, September 18–21, 2017, Revised Selected Papers 5. Springer, 205–214.
[130]
Annika Wolff, Daniel Gooch, Jose J. Cavero Montaner, Umar Rashid, and Gerd Kortuem. 2016. Creating an understanding of data literacy for a data-driven society. The Journal of Community Informatics 12, 3 (2016).
[131]
Annika Wolff, Michel Wermelinger, and Marian Petre. 2019. Exploring design principles for data literacy activities to support children’s inquiries from complex data. International Journal of Human-Computer Studies 129 (2019), 41–54.
[132]
R. E. Wood, J. Lazar, J. H. Feng, and A. Forsythe-Korzeniewicz. 2023. Creating inclusive materials and methods for co-designing health information technologies with people who have Down syndrome. In Cambridge Workshop on Universal Access and Assistive Technology. Springer, 178–187.
[133]
Ingram Wright, Vicky Lewis, and Glyn M. Collis. 2006. Imitation and representational development in young children with Down syndrome. British Journal of Developmental Psychology 24, 2 (2006), 429–450.
[134]
Keke Wu, Emma Petersen, Tahmina Ahmad, David Burlinson, Shea Tanis, and Danielle Albers Szafir. 2021. Understanding data accessibility for people with intellectual and developmental disabilities. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–16.
[135]
Xiaopeng Wu, Tianshu Xu, and Yi Zhang. 2021. Research on the data analysis knowledge assessment of pre-service teachers from China based on cognitive diagnostic assessment. Current Psychology (2021), 1–15.
[136]
André Frank Zimpel. 2016. Trisomy 21: What we can Learn from People with Down Syndrome. Vandenhoeck & Ruprecht.
