DOI: 10.1145/3536221.3556620
Research article · Open access

Pull Gestures with Coordinated Graphics on Dual-Screen Devices

Published: 07 November 2022

Abstract

A new class of dual-touchscreen device is beginning to emerge, either constructed as two screens hinged together, or as a single display that can fold. The interactive experience on these devices is simply that of two 2D touchscreens, with little to no synergy between the interactive areas. In this work, we consider how this unique, emerging form factor creates an interesting 3D niche, in which out-of-plane interactions on one screen can be supported with coordinated graphics on the other, orthogonal screen. Following insights from an elicitation study, we focus on "pull gestures", a multimodal interaction combining on-screen touch input with in-air movement. These naturally complement traditional multitouch gestures such as tap and pinch, and are an intriguing and useful way to take advantage of the unique geometry of dual-screen devices.
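The interaction the abstract describes — an on-screen touch followed by an in-air lift, with the lift height driving coordinated graphics on the second, orthogonal screen — can be sketched as a small state machine. This is a hypothetical illustration, not the paper's implementation: the class name, state names, and the 15 mm lift threshold are all assumptions.

```python
# Hypothetical sketch of a "pull gesture" recognizer: touch down on one
# screen, lift off into the air, and report the in-air height so the
# second (orthogonal) screen can render a coordinated graphic.
# All names and thresholds here are illustrative, not from the paper.

from enum import Enum, auto

class State(Enum):
    IDLE = auto()      # no interaction in progress
    TOUCHING = auto()  # finger in contact with the touchscreen
    PULLING = auto()   # finger lifted, tracked in the air above the screen

class PullGestureDetector:
    def __init__(self, lift_threshold_mm=15.0):
        self.state = State.IDLE
        self.lift_threshold_mm = lift_threshold_mm
        self.anchor = None         # (x, y) of the initiating touch
        self.pull_height_mm = 0.0  # value that would drive the coordinated graphic

    def on_touch_down(self, x, y):
        # Touch on the horizontal screen anchors the gesture.
        self.state = State.TOUCHING
        self.anchor = (x, y)

    def on_touch_up(self):
        # Finger leaves the surface: transition into the in-air "pull" phase.
        if self.state == State.TOUCHING:
            self.state = State.PULLING
            self.pull_height_mm = 0.0

    def on_hand_height(self, height_mm):
        # Per-frame fingertip height from an (assumed) above-screen hand tracker.
        if self.state == State.PULLING:
            self.pull_height_mm = height_mm
            if height_mm < self.lift_threshold_mm:
                # Finger returned near the surface: end the pull.
                self.state = State.IDLE
        return self.state, self.pull_height_mm
```

In practice the `on_hand_height` values would come from a vision-based hand tracker (the references suggest fisheye-camera tracking), and the returned height would scale or animate the graphic shown on the vertical screen.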


Cited By

  • (2024) Engineering Touchscreen Input for 3-Way Displays: Taxonomy, Datasets, and Classification. Companion Proceedings of the 16th ACM SIGCHI Symposium on Engineering Interactive Computing Systems, 57–65. https://doi.org/10.1145/3660515.3661331. Online publication date: 24-Jun-2024.



Published In

ICMI '22: Proceedings of the 2022 International Conference on Multimodal Interaction
November 2022
830 pages
ISBN:9781450393904
DOI:10.1145/3536221
This work is licensed under a Creative Commons Attribution International 4.0 License.

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. Dual-Screen Devices
  2. Hand Gestures
  3. In-Air Interaction
  4. Interaction Techniques

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

ICMI '22

Acceptance Rates

Overall Acceptance Rate 453 of 1,080 submissions, 42%


Article Metrics

  • Downloads (last 12 months): 200
  • Downloads (last 6 weeks): 33
Reflects downloads up to 09 Nov 2024

