
Research Article

What Does the Chart Say? Grouping Cues Guide Viewer Comparisons and Conclusions in Bar Charts

Published: 01 August 2024

Abstract

Reading a visualization is like reading a paragraph. Each sentence is a comparison: the mean of these is higher than those; this difference is smaller than that. What determines which comparisons are made first? The viewer's goals and expertise matter, but the way that values are visually grouped together within the chart also shapes those comparisons. Research from psychology suggests that comparisons involve multiple steps. First, the viewer divides the visualization into a set of units. A unit might be a single bar or a grouped set of bars. Then the viewer selects and compares two of these units, perhaps noting that one pair of bars is longer than another. Viewers might take an additional third step and perform a second-order comparison, perhaps determining that the difference between one pair of bars is greater than the difference between another pair. We create a visual comparison taxonomy that allows us to develop and test a sequence of hypotheses about which comparisons people are more likely to make when reading a visualization. We find that people tend to compare two groups before comparing two individual bars and that second-order comparisons are rare. Visual cues like spatial proximity and color can influence which elements are grouped together and selected for comparison, with spatial proximity being the stronger grouping cue. Interestingly, once the viewer has grouped together and compared a set of bars, regardless of whether the group is formed by spatial proximity or color similarity, they no longer consider other possible groupings in their comparisons.
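The three-step account in the abstract (form units, compare two units, optionally compare the comparisons) can be sketched as a toy model. This is purely illustrative, not the authors' implementation: the `Bar` fields, the `gap` threshold, and the proximity-only grouping rule are all assumptions introduced here for the sketch.

```python
from dataclasses import dataclass
from itertools import combinations

@dataclass(frozen=True)
class Bar:
    label: str
    value: float
    x: float        # horizontal position (spatial-proximity cue)
    color: str      # color-similarity cue (unused in this sketch)

def form_units(bars, gap=1.5):
    """Step 1: divide the chart into units. Bars whose x-positions differ
    by less than `gap` are grouped by spatial proximity; the sketch uses
    proximity alone, mirroring the finding that it is the stronger cue."""
    units, current = [], [bars[0]]
    for prev, bar in zip(bars, bars[1:]):
        if bar.x - prev.x < gap:
            current.append(bar)
        else:
            units.append(tuple(current))
            current = [bar]
    units.append(tuple(current))
    return units

def first_order(units):
    """Step 2: first-order comparisons -- the mean of one unit vs. another."""
    def mean(u):
        return sum(b.value for b in u) / len(u)
    return [(a, b, mean(a) - mean(b)) for a, b in combinations(units, 2)]

def second_order(comparisons):
    """Step 3 (rare): second-order comparisons -- is one difference
    larger than another difference?"""
    return [(c1, c2, abs(c1[2]) - abs(c2[2]))
            for c1, c2 in combinations(comparisons, 2)]
```

A real cognitive model would also weigh color similarity when forming units and would predict *which* comparison is selected; the sketch only enumerates the candidates at each step of the taxonomy.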



Published In

IEEE Transactions on Visualization and Computer Graphics  Volume 30, Issue 8
Aug. 2024
1479 pages

Publisher

IEEE Educational Activities Department

United States
