Spreadsheets on Interactive Surfaces: Breaking through the Grid with the Pen

Published: 29 January 2024

Abstract

Spreadsheet programs for interactive surfaces have limited manipulation capabilities and are often frustrating to use. One key reason is that the spreadsheet grid creates a layer that intercepts most user input events, making it difficult to reach the cell values that lie underneath. We conduct an analysis of commercial spreadsheet programs and an elicitation study to understand what users can do, and what they would like to do, with spreadsheets on interactive surfaces. Informed by these, we design interaction techniques that leverage the precision of the pen to mitigate friction between the different layers. They enable more operations by direct manipulation on and through the grid, targeting not only cells and groups of cells, but also values and substrings within and across cells. We prototype these interaction techniques and conduct a qualitative study with information workers who perform a variety of spreadsheet operations on their own data.
Appendices

A Elicitation Study

A.1 List of Referents

Table 2 lists the questions for the 28 referents considered in the elicitation study. Some questions were slightly more detailed, to provide context (e.g., giving the names of the columns to merge for \(GM_{1}\)). The study material is available as supplemental material.
Table 2. Referents Considered in the Elicitation Study

Value-level / Selection
( \(VS_{1}\)) How would you select the first character of a string in a cell?
( \(VS_{2}\)) How would you select the comma (and only the comma character) in a cell?
( \(VS_{3}\)) How would you select the last character of a string in a cell?
( \(VS_{4}\)) How would you select the left part of a string in a cell?
( \(VS_{5}\)) How would you select the sequence of characters “, NY” (and only that sequence) in a cell?
( \(VS_{6}\)) How would you select the right part of a string in a cell?
( \(VS_{7}\)) How would you generalize a sub-cell selection to its parent column?

Value-level / Manipulation
( \(VM_{1}\)) How would you move a selection within a cell?
( \(VM_{2}\)) How would you delete part of the content of a cell?
( \(VM_{3}\)) How would you split a column into two columns?

Grid-level / Selection
( \(GS_{1}\)) How would you select a cell?
( \(GS_{2}\)) How would you select a range of cells?
( \(GS_{3}\)) How would you select a column?
( \(GS_{4}\)) How would you select a range of columns?
( \(GS_{5}\)) How would you select a set of columns?
( \(GS_{6}\)) How would you select a row?
( \(GS_{7}\)) How would you select a range of rows?
( \(GS_{8}\)) How would you select a set of rows?
( \(GS_{9}\)) How would you select the set of cells that have the same value in a column?
( \(GS_{10}\)) How would you select the set of rows that have the same value for a specific cell?

Grid-level / Manipulation
( \(GM_{1}\)) How would you merge two columns into one?
( \(GM_{2}\)) How would you move a column?
( \(GM_{3}\)) How would you move a row?
( \(GM_{4}\)) How would you clear a cell?
( \(GM_{5}\)) How would you delete a column?
( \(GM_{6}\)) How would you delete a row?
( \(GM_{7}\)) How would you sort a column?
( \(GM_{8}\)) How would you fill up a column following the pattern of selected values?

A.2 Definition of a Sign

We define a sign as a series of events that is described along the following dimensions:
The input modality, which can be Pen tip, Pen eraser, Single Touch, Multi-touch or Pen + Touch.
The start and end locations of input, which can be a Column header, a Row header, a Cell, somewhere Inside-a-Cell, the Select-All button, a Column separator, or the Background. We use Inside-a-Cell when the location within the cell itself carries information (e.g., the participant draws a line between two specific characters of the value string).
The input event type. We use four types of discrete events: Tap, Double Tap, Dwell, and Flick. For continuous events, if the trace’s trajectory does not carry meaningful information, we classify it as Drag. Other continuous events fall into one of five categories: Vertical Line, Horizontal Line, Diagonal Line, Enclose, or ZigZag. A few traces do not fall into any of these categories; they correspond instead to custom-shape gestures that we categorize into one of the following shapes: Circle, Arrow, Equal sign, Parallel sign, Less-than sign, V, or Loops.
An event is defined as a combination of these dimensions, and a sign is either a single event or a combination of atomic events. Our definition of a sign is quite specific not only regarding the description of an event but also regarding the transition between consecutive events. In particular, when a sign involves two consecutive events that have the same modality, we distinguish between the case where the input device remains in contact with the screen during the transition, and the case where it is lifted between the two events. For example, a Dwell immediately followed by a Drag without lifting the pen is different from a Dwell + Drag sequence where the user lifts the pen after the Dwell. For the coarser modality-based classification, a participant’s proposal is simply described as the combination of its events’ modalities.
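The dimensions above can be encoded as a small data structure. The sketch below uses hypothetical function and field names (they are illustrative, not the paper's actual coding scheme); it records, for each transition between consecutive events, whether the device was lifted, which is the distinction the Dwell + Drag example relies on.

```javascript
// Hypothetical encoding of the sign dimensions described above
// (names are illustrative, not the paper's actual coding scheme).
function makeEvent(modality, start, end, type) {
  // modality: "PenTip" | "PenEraser" | "SingleTouch" | "MultiTouch" | "PenPlusTouch"
  // start/end: "ColumnHeader" | "RowHeader" | "Cell" | "InsideACell"
  //            | "SelectAllButton" | "ColumnSeparator" | "Background"
  // type: a discrete event ("Tap", "DoubleTap", "Dwell", "Flick"),
  //       a continuous one ("Drag", "VerticalLine", ...), or a custom shape
  return { modality, start, end, type };
}

// A sign is a sequence of events plus, for each transition between
// consecutive events, whether the device was lifted in between.
function makeSign(events, liftedBetween) {
  return { events, liftedBetween };
}

// Coarser modality-based classification: describe a sign by the
// sequence of its events' modalities only.
function modalitySignature(sign) {
  return sign.events.map(e => e.modality).join(" + ");
}
```

Under this encoding, the two Dwell + Drag variants share the same event sequence and modality signature, and differ only in the `liftedBetween` flag.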

B Implementation Details

B.1 Prototype Implementation

The Web-based prototype depicted in Figure 15 and used for the semi-structured qualitative study implements all interaction techniques from Section 5. It is developed entirely in JavaScript and D3 [4], and runs on the client side. Spreadsheet elements and interface widgets are all rendered in SVG. User pen and touch input events are handled with the W3C Pointer Events API [6].
The prototype is made available as supplemental material, and has been tested extensively with the Chromium Web browser on a Windows 10 PC connected to a Wacom Cintiq Pro. It also runs, for instance, on a Microsoft Surface Studio 2+, although some interactions that involve two simultaneous contact points are not yet supported because of compatibility issues (the level of support for the Pointer Events API varies significantly across Web browsers and operating systems).
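Distinguishing pen tip, pen eraser, and touch with this API comes down to inspecting the `pointerType` and `buttons` fields of each event (both defined by the Pointer Events specification). The dispatch function below is an illustrative sketch, not the prototype's actual code:

```javascript
// Classify a pointer event into an input modality (sketch; the
// pointerType values and the eraser bit in `buttons` come from the
// W3C Pointer Events specification).
function classifyPointer(evt) {
  switch (evt.pointerType) {
    case "pen":
      // Bit 32 of `buttons` is set when the eraser end is in contact.
      return (evt.buttons & 32) ? "pen-eraser" : "pen-tip";
    case "touch":
      return "touch";
    default:
      return "mouse";
  }
}

// In the browser, such a classifier would be called from handlers
// attached to the SVG canvas, e.g.:
//   svg.addEventListener("pointerdown",
//     e => dispatch(classifyPointer(e), e));
```

Because pen and touch arrive through the same event stream, techniques that combine the two only need to track which classified pointers are currently down.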

B.2 Generalizing Subcell Selections

Algorithm 1 below details how generalization works for subcell selections that include the cell’s first character. Informally, priority is given to special characters such as dash, comma, and the like, falling back to different alphanumeric transitions (including juxtapositions of uppercase and lower case letters in either order) if no such character could be found. Other cases work similarly but are not detailed for the sake of conciseness: selections that include the last character use a mirror of the algorithm below; selections that include neither the first nor the last character use a combination of both algorithms; selections of the latter category consisting of a single character are generalized based on the transition from the previous character rather than the next one, consistent with the reading direction.
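The priority scheme can be illustrated with a simplified sketch (Algorithm 1 itself is not reproduced here): for a selection spanning a cell's prefix, the analogous prefix in another cell ends at the first separator character if one exists, and otherwise at the first transition between character classes. The separator set and function names below are assumptions made for illustration only:

```javascript
// Simplified sketch of prefix-selection generalization in the spirit
// of Algorithm 1 (not the paper's exact algorithm). Separator set is
// an assumption for illustration.
const SEPARATORS = /[-,;:_\/ ]/;

// True if the two characters belong to different classes
// (digit, lowercase, uppercase, other).
function isTransition(a, b) {
  const kind = c => /[0-9]/.test(c) ? "digit"
                  : /[a-z]/.test(c) ? "lower"
                  : /[A-Z]/.test(c) ? "upper" : "other";
  return kind(a) !== kind(b);
}

function generalizePrefix(value) {
  // Priority 1: cut before the first separator character.
  for (let i = 0; i < value.length; i++) {
    if (SEPARATORS.test(value[i])) return value.slice(0, i);
  }
  // Priority 2: cut at the first character-class transition.
  for (let i = 1; i < value.length; i++) {
    if (isTransition(value[i - 1], value[i])) return value.slice(0, i);
  }
  return value; // No boundary found: the whole value.
}
```

For instance, on a column of "City, State" strings the first comma or space bounds the generalized prefix, while on codes like "AB123" the uppercase-to-digit transition does.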

References

[1]
Caroline Appert and Shumin Zhai. 2009. Using strokes as command shortcuts: Cognitive benefits and toolkit support. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI’09). Association for Computing Machinery, New York, NY, USA, 2289–2298.
[2]
Lyn Bartram, Michael Correll, and Melanie Tory. 2022. Untidy data: The unreasonable effectiveness of tables. IEEE Transactions on Visualization and Computer Graphics 28, 1 (2022), 686–696.
[3]
Ann Blandford. 2013. Semi-structured qualitative studies. In The Encyclopedia of Human-Computer Interaction, 2nd edition, Mads Soegaard and Rikke Friis Dam (Eds.). Interaction Design Foundation.
[4]
Michael Bostock, Vadim Ogievetsky, and Jeffrey Heer. 2011. D3: Data-driven documents. IEEE Transactions on Visualization and Computer Graphics 17, 12 (2011), 2301–2309.
[5]
Peter Brandl, Clifton Forlines, Daniel Wigdor, Michael Haller, and Chia Shen. 2008. Combining and measuring the benefits of bimanual pen and direct-touch interaction on horizontal interfaces. In Proceedings of the Working Conference on Advanced Visual Interfaces (AVI’08). Association for Computing Machinery, New York, NY, USA, 154–161.
[6]
Matt Brubeck, Rick Byers, Patrick H. Lauke, and Navid Zolghadr. 2019. Pointer Events Level 2 - W3C Recommendation. https://www.w3.org/TR/pointerevents2/. (April 2019).
[7]
Drini Cami, Fabrice Matulic, Richard G. Calland, Brian Vogel, and Daniel Vogel. 2018. Unimanual pen+touch input using variations of precision grip postures. In Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology (UIST’18). Association for Computing Machinery, New York, NY, USA, 825–837.
[8]
George Chalhoub and Advait Sarkar. 2022. “It’s freedom to put things where my mind wants”: Understanding and improving the user experience of structuring data in spreadsheets. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI’22). Association for Computing Machinery, New York, NY, USA, Article 585, 24 pages.
[9]
Ran Chen, Di Weng, Yanwei Huang, Xinhuan Shu, Jiayi Zhou, Guodao Sun, and Yingcai Wu. 2023. Rigel: Transforming tabular data by declarative mapping. IEEE Transactions on Visualization and Computer Graphics 29, 1 (2023), 128–138.
[10]
Andy Cockburn, Amy Karlson, and Benjamin B. Bederson. 2009. A review of overview+detail, zooming, and focus+context interfaces. ACM Comput. Surv. 41, 1, Article 2 (Jan. 2009), 31 pages.
[11]
Andrew Crotty, Alex Galakatos, Emanuel Zgraggen, Carsten Binnig, and Tim Kraska. 2015. Vizdom: Interactive analytics through pen and touch. Proc. VLDB Endow. 8, 12 (Aug. 2015), 2024–2027.
[12]
Paul Dourish. 2017. Spreadsheets and spreadsheet events in organizational life. In The Stuff of Bits: An Essay on the Materialities of Information. The MIT Press.
[13]
Lisa A. Elkin, Matthew Kay, James J. Higgins, and Jacob O. Wobbrock. 2021. An aligned rank transform procedure for multifactor contrast tests. In Proceedings of the 34th Annual ACM Symposium on User Interface Software and Technology (UIST’21). Association for Computing Machinery, New York, NY, USA, 754–768.
[14]
Mathias Frisch, Jens Heydekorn, and Raimund Dachselt. 2010. Diagram editing on interactive displays using multi-touch and pen gestures. In Diagrammatic Representation and Inference. Springer Berlin, Berlin, 182–196.
[15]
Mathias Frisch, Ricardo Langner, and Raimund Dachselt. 2011. NEAT: A set of flexible tools and gestures for layout tasks on interactive displays. In Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces (ITS’11). Association for Computing Machinery, New York, NY, USA, 1–10.
[16]
Travis Gesslein, Verena Biener, Philipp Gagel, Daniel Schneider, Per Ola Kristensson, Eyal Ofek, Michel Pahud, and Jens Grubert. 2020. Pen-based interaction with spreadsheets in mobile virtual reality. In IEEE International Symposium on Mixed and Augmented Reality (ISMAR). IEEE, 361–373.
[17]
Yves Guiard. 1987. Asymmetric division of labor in human skilled bimanual action. Journal of Motor Behavior 19, 4 (Dec. 1987), 486–517.
[18]
Sumit Gulwani. 2011. Automating string processing in spreadsheets using input-output examples. In Proceedings of the 38th Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL’11). Association for Computing Machinery, New York, NY, USA, 317–330.
[19]
Sumit Gulwani, William R. Harris, and Rishabh Singh. 2012. Spreadsheet data manipulation using examples. Commun. ACM 55, 8 (Aug. 2012), 97–105.
[20]
Philip J. Guo, Sean Kandel, Joseph M. Hellerstein, and Jeffrey Heer. 2011. Proactive wrangling: Mixed-initiative end-user programming of data transformation scripts. In Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology (UIST’11). Association for Computing Machinery, New York, NY, USA, 65–74.
[21]
William Hamilton, Andruid Kerne, and Tom Robbins. 2012. High-performance pen + touch modality interactions: A real-time strategy game esports context. In Proceedings of the 25th Annual ACM Symposium on User Interface Software and Technology (UIST’12). Association for Computing Machinery, New York, NY, USA, 309–318.
[22]
William R. Harris and Sumit Gulwani. 2011. Spreadsheet table transformations from examples. SIGPLAN Not. 46, 6 (Jun. 2011), 317–328.
[23]
Ken Hinckley, Xiaojun Bi, Michel Pahud, and Bill Buxton. 2012. Informal information gathering techniques for active reading. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI’12). Association for Computing Machinery, New York, NY, USA, 1893–1896.
[24]
Ken Hinckley, Koji Yatani, Michel Pahud, Nicole Coddington, Jenny Rodenhouse, Andy Wilson, Hrvoje Benko, and Bill Buxton. 2010. Pen + touch = new tools. In Proceedings of the 23rd Annual ACM Symposium on User Interface Software and Technology (UIST’10). Association for Computing Machinery, New York, NY, USA, 27–36.
[25]
Jane Hoffswell and Zhicheng Liu. 2019. Interactive repair of tables extracted from PDF documents on mobile devices. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI’19). Association for Computing Machinery, New York, NY, USA, 1–13.
[26]
T. Igarashi, J. D. Mackinlay, Bay-Wei Chang, and P. T. Zellweger. 1998. Fluid visualization of spreadsheet structures. In Proceedings 1998 IEEE Symposium on Visual Languages. 118–125.
[27]
Will Cukierski and Jessica Li. 2012. Titanic - Machine Learning from Disaster. https://kaggle.com/competitions/titanic. Accessed 17 November 2023.
[28]
Zhongjun Jin, Michael R. Anderson, Michael Cafarella, and H. V. Jagadish. 2017. Foofah: Transforming data by example. In Proceedings of the 2017 ACM International Conference on Management of Data (SIGMOD’17). Association for Computing Machinery, New York, NY, USA, 683–698.
[29]
Jaemin Jo, Sehi L’Yi, Bongshin Lee, and Jinwook Seo. 2017. TouchPivot: Blending WIMP & Post-WIMP interfaces for data exploration on tablet devices. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI’17). Association for Computing Machinery, New York, NY, USA, 2660–2671.
[30]
Sean Kandel, Andreas Paepcke, Joseph Hellerstein, and Jeffrey Heer. 2011. Wrangler: Interactive visual specification of data transformation scripts. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI’11). Association for Computing Machinery, New York, NY, USA, 3363–3372.
[31]
Sam Lau, Sruti Srinivasa Srinivasa Ragavan, Ken Milne, Titus Barik, and Advait Sarkar. 2021. TweakIt: Supporting end-user programmers who transmogrify code. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI’21). Association for Computing Machinery, New York, NY, USA, Article 311, 12 pages.
[32]
Bongshin Lee, Greg Smith, Nathalie Henry Riche, Amy Karlson, and Sheelagh Carpendale. 2015. SketchInsight: Natural data exploration on interactive whiteboards leveraging pen and touch interaction. In IEEE Pacific Visualization Symposium (PacificVis). 199–206.
[33]
Bongshin Lee, Arjun Srinivasan, Petra Isenberg, and John Stasko. 2021. Post-WIMP interaction for information visualization. Foundations and Trends in Human–Computer Interaction 14, 1 (2021), 1–95.
[34]
Guy Lüthi, Andreas Rene Fender, and Christian Holz. 2022. DeltaPen: A device with integrated high-precision translation and rotation sensing on passive surfaces. In Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology (UIST’22). Association for Computing Machinery, New York, NY, USA, Article 57, 12 pages.
[35]
Fabrice Matulic, Riku Arakawa, Brian Vogel, and Daniel Vogel. 2020. PenSight: Enhanced interaction with a pen-top camera. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI’20). Association for Computing Machinery, New York, NY, USA, 1–14.
[36]
Fabrice Matulic and Moira C. Norrie. 2012. Supporting active reading on pen and touch-operated tabletops. In Proceedings of the International Working Conference on Advanced Visual Interfaces (AVI’12). Association for Computing Machinery, New York, NY, USA, 612–619.
[37]
Fabrice Matulic and Moira C. Norrie. 2013. Pen and touch gestural environment for document editing on interactive tabletops. In Proceedings of the 2013 ACM International Conference on Interactive Tabletops and Surfaces (ITS’13). Association for Computing Machinery, New York, NY, USA, 41–50.
[38]
Fabrice Matulic, Daniel Vogel, and Raimund Dachselt. 2017. Hand contact shape recognition for posture-based tabletop widgets and interaction. In Proceedings of the 2017 ACM International Conference on Interactive Surfaces and Spaces (ISS’17). Association for Computing Machinery, New York, NY, USA, 3–11.
[39]
MyScript. 2022. Cross-platform handwriting recognition and interactive ink APIs. https://developer.myscript.com. Accessed 28 September 2023.
[40]
Jakob Nielsen. 1994. Enhancing the explanatory power of usability heuristics. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI’94). Association for Computing Machinery, New York, NY, USA, 152–158.
[41]
Gary Perelman, Marcos Serrano, Christophe Bortolaso, Celia Picard, Mustapha Derras, and Emmanuel Dubois. 2019. Combining tablets with smartphones for data analytics. In Human-Computer Interaction – INTERACT 2019, David Lamas, Fernando Loizides, Lennart Nacke, Helen Petrie, Marco Winckler, and Panayiotis Zaphiris (Eds.). Springer International Publishing, Cham, 439–460.
[42]
Ken Pfeuffer, Jason Alexander, Ming Ki Chong, Yanxia Zhang, and Hans Gellersen. 2015. Gaze-shifting: Direct-indirect input with pen and touch modulated by gaze. In Proceedings of the 28th Annual ACM Symposium on User Interface Software and Technology (UIST’15). Association for Computing Machinery, New York, NY, USA, 373–383.
[43]
Ken Pfeuffer, Jason Alexander, and Hans Gellersen. 2016. Partially-indirect bimanual input with gaze, pen, and touch for pan, zoom, and ink interaction. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI’16). Association for Computing Machinery, New York, NY, USA, 2845–2856.
[44]
Ken Pfeuffer, Ken Hinckley, Michel Pahud, and Bill Buxton. 2017. Thumb + pen interaction on tablets. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI’17). Association for Computing Machinery, New York, NY, USA, 3254–3266.
[45]
Peter Pirolli and Ramana Rao. 1996. Table lens as a tool for making sense of data. In Proceedings of the Workshop on Advanced Visual Interfaces (AVI’96). Association for Computing Machinery, New York, NY, USA, 67–80.
[46]
Thammathip Piumsomboon, Adrian Clark, Mark Billinghurst, and Andy Cockburn. 2013. User-defined gestures for augmented reality. In CHI’13 Extended Abstracts on Human Factors in Computing Systems (CHI EA’13). Association for Computing Machinery, New York, NY, USA, 955–960.
[47]
Ramana Rao and Stuart K. Card. 1994. The table lens: Merging graphical and symbolic representations in an interactive focus + context visualization for tabular information. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI’94). Association for Computing Machinery, New York, NY, USA, 318–322.
[48]
Yann Riche, Nathalie Henry Riche, Ken Hinckley, Sheri Panabaker, Sarah Fuelling, and Sarah Williams. 2017. As we may ink? Learning from everyday analog pen use to improve digital ink experiences. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI’17). Association for Computing Machinery, New York, NY, USA, 3241–3253.
[49]
Hugo Romat, Caroline Appert, and Emmanuel Pietriga. 2021. Expressive authoring of node-link diagrams with graphies. IEEE Transactions on Visualization and Computer Graphics 27, 4 (2021), 2329–2340.
[50]
Hugo Romat, Christopher Collins, Nathalie Henry Riche, Michel Pahud, Christian Holz, Adam Riddle, Bill Buxton, and Ken Hinckley. 2020. Tilt-responsive techniques for digital drawing boards. In Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology (UIST’20). Association for Computing Machinery, New York, NY, USA, 500–515.
[51]
Hugo Romat, Nathalie Henry Riche, Ken Hinckley, Bongshin Lee, Caroline Appert, Emmanuel Pietriga, and Christopher Collins. 2019. ActiveInk: (Th)inking with data. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI’19). Association for Computing Machinery, New York, NY, USA, 1–13.
[52]
Hugo Romat, Emmanuel Pietriga, Nathalie Henry-Riche, Ken Hinckley, and Caroline Appert. 2019. SpaceInk: Making space for in-context annotations. In Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology (UIST’19). Association for Computing Machinery, New York, NY, USA, 871–882.
[53]
Vít Rusnák, Caroline Appert, Olivier Chapuis, and Emmanuel Pietriga. 2018. Designing coherent gesture sets for multi-scale navigation on tabletops. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI’18). Association for Computing Machinery, New York, NY, USA, 1–12.
[54]
Ramik Sadana and John Stasko. 2016. Expanding selection for information visualization systems on tablet devices. In Proceedings of the 2016 ACM International Conference on Interactive Surfaces and Spaces (ISS’16). Association for Computing Machinery, New York, NY, USA, 149–158.
[55]
Advait Sarkar, Andrew D. Gordon, Simon Peyton Jones, and Neil Toronto. 2018. Calculation view: Multiple-representation editing in spreadsheets. In IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC). IEEE, 85–93.
[56]
Rishabh Singh and Sumit Gulwani. 2012. Learning semantic string transformations from examples. Proc. VLDB Endow. 5, 8 (Apr. 2012), 740–751.
[57]
Awalin Sopan, Manuel Freier, Meirav Taieb-Maimon, Catherine Plaisant, Jennifer Golbeck, and Ben Shneiderman. 2013. Exploring data distributions: Visual design and evaluation. International Journal of Human–Computer Interaction 29, 2 (2013), 77–95.
[58]
Arjun Srinivasan, Bongshin Lee, Nathalie Henry Riche, Steven M. Drucker, and Ken Hinckley. 2020. InChorus: Designing consistent multimodal interactions for data visualization on tablet devices. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI’20). Association for Computing Machinery, New York, NY, USA, 1–13.
[59]
Yuta Takayama, Yuu Ichikawa, Buntarou Shizuki, Ikkaku Kawaguchi, and Shin Takahashi. 2021. A user-based mid-air hand gesture set for spreadsheets. In Asian CHI Symposium 2021 (Asian CHI Symposium 2021). Association for Computing Machinery, New York, NY, USA, 122–128.
[60]
Poorna Talkad Sukumar, Anqing Liu, and Ronald Metoyer. 2018. Replicating user-defined gestures for text editing. In Proceedings of the 2018 ACM International Conference on Interactive Surfaces and Spaces (ISS’18). Association for Computing Machinery, New York, NY, USA, 97–106.
[61]
Theophanis Tsandilas. 2018. Fallacies of agreement: A critical review of consensus assessment methods for gesture elicitation. ACM Trans. Comput.-Hum. Interact. 25, 3, Article 18 (Jun. 2018), 49 pages.
[62]
Wesley Willett, Jeffrey Heer, and Maneesh Agrawala. 2007. Scented widgets: Improving navigation cues with embedded visualizations. IEEE Transactions on Visualization and Computer Graphics 13, 6 (2007), 1129–1136.
[63]
Jacob O. Wobbrock, Leah Findlater, Darren Gergle, and James J. Higgins. 2011. The aligned rank transform for nonparametric factorial analyses using only ANOVA procedures. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI’11). Association for Computing Machinery, New York, NY, USA, 143–146.
[64]
Jacob O. Wobbrock, Meredith Ringel Morris, and Andrew D. Wilson. 2009. User-defined gestures for surface computing. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI’09). ACM, New York, NY, USA, 1083–1092.
[65]
Haijun Xia, Ken Hinckley, Michel Pahud, Xiao Tu, and Bill Buxton. 2017. WritLarge: Ink unleashed by unified scope, action, & zoom. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI’17). Association for Computing Machinery, New York, NY, USA, 3227–3240.
[66]
Ka-Ping Yee. 2004. Two-handed interaction on a tablet display. In CHI’04 Extended Abstracts on Human Factors in Computing Systems (CHI EA’04). Association for Computing Machinery, New York, NY, USA, 1493–1496.
[67]
Dongwook Yoon, Nicholas Chen, François Guimbretière, and Abigail Sellen. 2014. RichReview: Blending ink, speech, and gesture to support collaborative document review. In Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology (UIST’14). Association for Computing Machinery, New York, NY, USA, 481–490.
[68]
Robert Zeleznik, Andrew Bragdon, Ferdi Adeputra, and Hsu-Sheng Ko. 2010. Hands-on math: A page-based multi-touch and pen desktop for technical work and problem solving. In Proceedings of the 23rd Annual ACM Symposium on User Interface Software and Technology (UIST’10). Association for Computing Machinery, New York, NY, USA, 17–26.
[69]
Emanuel Zgraggen, Robert Zeleznik, and Steven M. Drucker. 2014. PanoramicData: Data analysis through pen and touch. IEEE Transactions on Visualization and Computer Graphics 20, 12 (2014), 2112–2121.
[70]
Emanuel Zgraggen, Robert Zeleznik, and Philipp Eichmann. 2016. Tableur: Handwritten spreadsheets. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA’16). Association for Computing Machinery, New York, NY, USA, 2362–2368.
[71]
Yang Zhang, Michel Pahud, Christian Holz, Haijun Xia, Gierad Laput, Michael McGuffin, Xiao Tu, Andrew Mittereder, Fei Su, William Buxton, and Ken Hinckley. 2019. Sensing posture-aware pen+touch interaction on tablets. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI’19). Association for Computing Machinery, New York, NY, USA, 1–14.

Cited By

  • (2024) Challenges of Music Score Writing and the Potentials of Interactive Surfaces. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1–16. https://doi.org/10.1145/3613904.3642079. Online publication date: 11 May 2024.


Published In

ACM Transactions on Computer-Human Interaction  Volume 31, Issue 2
April 2024
576 pages
EISSN: 1557-7325
DOI: 10.1145/3613620

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 29 January 2024
Online AM: 25 October 2023
Accepted: 26 September 2023
Revised: 18 August 2023
Received: 03 March 2023
Published in TOCHI Volume 31, Issue 2


Author Tags

  1. Digital pen
  2. multi-touch interaction
  3. ink
  4. spreadsheets

Qualifiers

  • Research-article
