DOI: 10.1145/1753326.1753366

High-precision magnification lenses

Published: 10 April 2010

Abstract

Focus+context interfaces provide in-place magnification of a region of the display, smoothly integrating the focus of attention into its surroundings. Two representations of the data exist simultaneously at two different scales, providing an alternative to classical pan & zoom for navigating multi-scale interfaces. For many practical applications, however, the magnification range of focus+context techniques is too limited. This paper addresses this limitation by exploring the quantization problem: the mismatch between visual and motor precision in the magnified region. We introduce three new interaction techniques that solve this problem by integrating fast navigation and high-precision interaction in the magnified region. Speed couples precision to navigation speed. Key and Ring use a discrete switch between precision levels, the former through a keyboard modifier, the latter by decoupling the cursor from the lens' center. We report on three experiments showing that our techniques make interacting with lenses easier while increasing the range of practical magnification factors, and that performance can be further improved by integrating speed-dependent visual behaviors.
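
To make the quantization problem concrete: with a lens of magnification factor MM, moving the mouse by one pixel displaces the lens centre by one pixel in context space, which corresponds to a jump of MM pixels in the magnified region, so targets smaller than MM focus pixels cannot be acquired. The sketch below is a minimal Python illustration, not the authors' implementation, of coupling motor precision to navigation speed in the spirit of the Speed technique; the magnification MM, the threshold V_SWITCH, and the discrete switch (which may differ from the paper's actual transfer function) are all assumptions.

```python
# Minimal sketch, NOT the authors' implementation: couple the lens's motor
# precision to cursor speed. MM and V_SWITCH are assumed values, and the
# discrete threshold stands in for whatever transfer function the paper uses.

MM = 8.0          # lens magnification factor (assumed)
V_SWITCH = 300.0  # speed threshold, in context pixels per second (assumed)

def lens_motion(dx, dy, dt, mm=MM, v_switch=V_SWITCH):
    """Map a raw mouse displacement (dx, dy) over dt seconds to a displacement
    of the lens centre in context (unmagnified) space.

    Fast movements keep the usual 1:1 context mapping, giving coarse but quick
    navigation. Slow movements are divided by the magnification so that one
    mouse pixel moves the content by one *magnified* pixel, restoring motor
    precision inside the focus region.
    """
    speed = ((dx * dx + dy * dy) ** 0.5) / dt  # context pixels per second
    gain = 1.0 if speed >= v_switch else 1.0 / mm
    return dx * gain, dy * gain

# A slow 4-pixel drag over 100 ms moves the lens centre by half a context
# pixel, i.e. by 4 pixels in the magnified region at MM = 8.
print(lens_motion(4, 0, 0.1))   # -> (0.5, 0.0)
```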

Supplementary Material

JPG File (p273.jpg)
MOV File (p273.mov)




    Published In

    CHI '10: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
    April 2010
    2690 pages
    ISBN:9781605589299
    DOI:10.1145/1753326
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 10 April 2010

    Author Tags

    1. focus+context
    2. lenses
    3. navigation
    4. quantization
    5. selection

    Qualifiers

    • Research-article

    Conference

    CHI '10

    Acceptance Rates

    Overall Acceptance Rate 6,199 of 26,314 submissions, 24%

    Bibliometrics

    Article Metrics

    • Downloads (last 12 months): 24
    • Downloads (last 6 weeks): 12

    Reflects downloads up to 14 Nov 2024


    Cited By

    • (2024) Cone&Bubble: Evaluating Combinations of Gaze, Head and Hand Pointing for Target Selection in Dense 3D Environments. 2024 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), 642-649. DOI: 10.1109/VRW62533.2024.00126. Online publication date: 16-Mar-2024.
    • (2024) Visualizing the Influence of New Public Transport Infrastructure on Travel Times. KN - Journal of Cartography and Geographic Information, 74(2), 107-119. DOI: 10.1007/s42489-024-00167-9. Online publication date: 22-Apr-2024.
    • (2023) SpaceX Mag. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 7(2), 1-36. DOI: 10.1145/3596253. Online publication date: 12-Jun-2023.
    • (2022) The Pattern is in the Details: An Evaluation of Interaction Techniques for Locating, Searching, and Contextualizing Details in Multivariate Matrix Visualizations. Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, 1-15. DOI: 10.1145/3491102.3517673. Online publication date: 29-Apr-2022.
    • (2021) AR-enhanced Widgets for Smartphone-centric Interaction. Proceedings of the 23rd International Conference on Mobile Human-Computer Interaction, 1-12. DOI: 10.1145/3447526.3472019. Online publication date: 27-Sep-2021.
    • (2021) A novel approach for exploring annotated data with interactive lenses. Computer Graphics Forum, 40(3), 387-398. DOI: 10.1111/cgf.14315. Online publication date: 29-Jun-2021.
    • (2020) VUM: Understanding Requirements for a Virtual Ubiquitous Microscope. Proceedings of the 19th International Conference on Mobile and Ubiquitous Multimedia, 259-266. DOI: 10.1145/3428361.3428386. Online publication date: 22-Nov-2020.
    • (2018) Designing Coherent Gesture Sets for Multi-scale Navigation on Tabletops. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 1-12. DOI: 10.1145/3173574.3173716. Online publication date: 21-Apr-2018.
    • (2017) Panning and Zooming High-Resolution Panoramas in Virtual Reality Devices. Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology, 279-288. DOI: 10.1145/3126594.3126617. Online publication date: 20-Oct-2017.
    • (2017) Interactive Lenses for Visualization. Computer Graphics Forum, 36(6), 173-200. DOI: 10.1111/cgf.12871. Online publication date: 1-Sep-2017.
