19th Mobile HCI 2017: Vienna, Austria
- Matt Jones, Manfred Tscheligi, Yvonne Rogers, Roderick Murray-Smith:
Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services, MobileHCI 2017, Vienna, Austria, September 4-7, 2017. ACM 2017, ISBN 978-1-4503-5075-4
Main papers track
- Tim Duente, Max Pfeiffer, Michael Rohs:
Zap++: a 20-channel electrical muscle stimulation system for fine-grained wearable force feedback. 1:1-1:13
- Surjya Ghosh, Niloy Ganguly, Bivas Mitra, Pradipta De:
TapSense: combining self-report patterns and typing characteristics for smartphone based emotion detection. 2:1-2:12
- Thomas Stütz, Michael Domhardt, Gerlinde Emsenhuber, Daniela Huber, Martin Tiefengrabner, Nicholas Matis, Simon Ginzinger:
An interactive 3D health app with multimodal information representation for frozen shoulder. 3:1-3:11
- Michaël Ortega, Jérôme Maisonnasse, Laurence Nigay:
EXHI-bit: a mechanical structure for prototyping EXpandable handheld interfaces. 4:1-4:11
- Yosra Rekik, Eric Vezzoli, Laurent Grisoni:
Understanding users' perception of simultaneous tactile textures. 5:1-5:6
- Michael Kvist Svangren, Mikael B. Skov, Jesper Kjeldskov:
The connected car: an empirical study of electric cars as mobile digital devices. 6:1-6:12
- Nina Wenig, Dirk Wenig, Steffen Ernst, Rainer Malaka, Brent J. Hecht, Johannes Schöning:
Pharos: improving navigation instructions on smartwatches by including global landmarks. 7:1-7:13
- Limin Zeng, Markus Simros, Gerhard Weber:
Camera-based mobile electronic travel aids support for cognitive mapping of unknown spaces. 8:1-8:10
- Denys J. C. Matthies, Thijs Roumen, Arjan Kuijper, Bodo Urban:
CapSoles: who is walking on what kind of floor? 9:1-9:14
- Ahmed Sabbir Arif, Sunjun Kim, Geehyuk Lee:
Usability of different types of commercial selfie sticks. 10:1-10:8
- Martin Pielot, Luz Rello:
Productive, anxious, lonely: 24 hours without push notifications. 11:1-11:11
- Aku Visuri, Niels van Berkel, Chu Luo, Jorge Gonçalves, Denzil Ferreira, Vassilis Kostakos:
Predicting interruptibility for manual data collection: a cluster-based user model. 12:1-12:14
- Kostadin Kushlev, Bruno Cardoso, Martin Pielot:
Too tense for candy crush: affect influences user engagement with proactively suggested content. 13:1-13:6
- Jennifer Pearson, Simon Robinson, Matt Jones, Céline Coutrix:
Evaluating deformable devices with emergent users. 14:1-14:7
- Anas Bilal, Aimal Rextin, Ahmad Kakakhel, Mehwish Nasim:
Roman-txt: forms and functions of roman urdu texting. 15:1-15:9
- Jessica Conradi:
Influence of letter size on word reading performance during walking. 16:1-16:9
- Jonas F. Kraft, Jörn Hurtienne:
Transition animations support orientation in mobile interfaces without increased user effort. 17:1-17:6
- Mohammad Othman, Telmo Amaral, Roisin McNaney, Jan D. Smeddinck, John Vines, Patrick Olivier:
CrowdEyes: crowdsourcing for robust real-world mobile eye tracking. 18:1-18:13
- Samuel Navas Medrano, Max Pfeiffer, Christian Kray:
Enabling remote deictic communication with mobile devices: an elicitation study. 19:1-19:13
- Danielle M. Lottridge, Frank Bentley, Matt Wheeler, Jason Lee, Janet Cheung, Katherine Ong, Cristy Rowley:
Third-wave livestreaming: teens' long form selfie. 20:1-20:12
- Charlie Pinder, Jo Vermeulen, Benjamin R. Cowan, Russell Beale, Robert J. Hendley:
Exploring the feasibility of subliminal priming on smartphones. 21:1-21:15
- Liwei Chan, Kouta Minamizawa:
FrontFace: facilitating communication between HMD users and outsiders using front-facing-screen HMDs. 22:1-22:5
- Mark D. Dunlop, Marc Roper, Gennaro Imperatore:
Text entry tap accuracy and exploration of tilt controlled layered interaction on Smartwatches. 23:1-23:11
- Frederic Kerber, Tobias Kiefer, Markus Löchtefeld, Antonio Krüger:
Investigating current techniques for opposite-hand smartwatch interaction. 24:1-24:12
- Hui-Shyong Yeo, Juyoung Lee, Andrea Bianchi, David Harris-Birtill, Aaron Quigley:
SpeCam: sensing surface color and material with the front-facing camera of a mobile device. 25:1-25:9
- William Delamare, Teng Han, Pourang Irani:
Designing a gaze gesture guiding system. 26:1-26:13
- Takashi Kikuchi, Yuta Sugiura, Katsutoshi Masai, Maki Sugimoto, Bruce H. Thomas:
EarTouch: turning the ear into an input surface. 27:1-27:6
- Tilman Dingler, Dominik Weber, Martin Pielot, Jennifer Cooper, Chung-Cheng Chang, Niels Henze:
Language learning on-the-go: opportune moments and design of mobile microlearning sessions. 28:1-28:12
- Sebastian Marichal, Andrea Rosales, Fernando González Perilli, Ana Cristina Pires, Ewelina Bakala, Gustavo Sansone, Josep Blat:
CETA: designing mixed-reality tangible interaction to enhance mathematical learning. 29:1-29:13
- Putjorn Pruet, Panote Siriaraya, Chee Siang Ang, Farzin Deravi:
Designing a ubiquitous sensor-based platform to facilitate learning for young children in Thailand. 30:1-30:13
- Stephanie Wong, Lillian Yang, Bernhard E. Riecke, Emily S. Cramer, Carman Neustaedter:
Assessing the usability of smartwatches for academic cheating during exams. 31:1-31:11
- Sven Bertel, Thomas Dressel, Tom Kohlberg, Vanessa von Jan:
Spatial knowledge acquired from pedestrian urban navigation systems. 32:1-32:6
- Sven Mayer, Perihan Gad, Katrin Wolf, Pawel W. Wozniak, Niels Henze:
Understanding the ergonomic constraints in designing for touch surfaces. 33:1-33:9
- Carrie Demmans Epp, Cosmin Munteanu, Benett Axtell, Keerthika Ravinthiran, Yomna Aly, Elman Mansimov:
Finger tracking: facilitating non-commercial content production for mobile e-reading applications. 34:1-34:15
- Linda Di Geronimo, Marica Bertarini, Julia Badertscher, Maria Husmann, Moira C. Norrie:
Exploiting mid-air gestures to share data among devices. 35:1-35:11
- Frederic Kerber, Michael Puhl, Antonio Krüger:
User-independent real-time hand gesture recognition based on surface electromyography. 36:1-36:7
- Reuben Kirkham, Romeo Ebassa, Kyle Montague, Kellie Morrissey, Vasilis Vlachokyriakos, Sebastian Weise, Patrick Olivier:
WheelieMap: an exploratory system for qualitative reports of inaccessibility in the built environment. 38:1-38:12
- Xiying Wang, Susan R. Fussell:
EnergyHome: leveraging housemate dynamics to motivate energy conservation. 39:1-39:12
- Gianluca Schiavo, Chiara Leonardi, Mattia Pasolli, Silvia Sarti, Massimo Zancanaro:
Weigh it and share it!: crowdsourcing for pro-environmental data collection. 40:1-40:12
- Ismo Alakärppä, Elisa Jaakkola, Jani Väyrynen, Jonna Häkkilä:
Using nature elements in mobile AR for education with children. 41:1-41:13
- Fabio Paternò, Antonio Giovanni Schiavone, Antonio Conte:
Customizable automatic detection of bad usability smells in mobile accessed web applications. 42:1-42:11
- Benjamin R. Cowan, Nadia Pantidi, David Coyle, Kellie Morrissey, Peter Clarke, Sara Al-Shehri, David Earley, Natasha Bandeira:
"What can i help you with?": infrequent users' experiences of intelligent personal assistants. 43:1-43:12
- Simon Robinson, Jennifer Pearson, Matt Jones, Anirudha Joshi, Shashank Ahire:
Better together: disaggregating mobile services for emergent users. 44:1-44:13
- Kalpana Hundlani, Sonia Chiasson, Larry Hamid:
No passwords needed: the iterative design of a parent-child authentication mechanism. 45:1-45:11
Demos
- Radomir Dinic, Michael Domhardt, Simon Ginzinger, Thomas Stütz:
EatAR tango: portion estimation on mobile devices with a depth sensor. 46:1-46:7
- Sebastian Marichal, Andrea Rosales, Gustavo Sansone, Ana Cristina Pires, Ewelina Bakala, Fernando González Perilli, Josep Blat:
CETA: open, affordable and portable mixed-reality environment for low-cost tablets. 47:1-47:7
- Linda Di Geronimo, Marica Bertarini, Julia Badertscher, Maria Husmann, Moira C. Norrie:
MyoShare: sharing data among devices via mid-air gestures. 48:1-48:3
- Katsunori Tai, Yasuyuki Kono:
Walking motion recognition system by estimating position and pose of leg mounted camera device using visual SLAM. 49:1-49:6
- Benett Axtell, Cosmin Munteanu:
Using frame of mind: documenting reminiscence through unstructured digital picture interaction. 50:1-50:4
- Uwe Gruenefeld, Tim Claudius Stratmann, Wilko Heuten, Susanne Boll:
PeriMR: a prototyping tool for head-mounted peripheral light displays in mixed reality. 51:1-51:6
- Emilio Granell, Luis A. Leiva:
βTap: back-of-device tap input with built-in sensors. 52:1-52:6
- Jana Jost, Thomas Kirks, Benedikt Maettig:
Study on manual palletization of inhomogeneous boxes with the help of different interfaces to assess specific factors of ergonomic impact. 53:1-53:6
- Limin Zeng, Gerhard Weber, Markus Simros, Peter Conradie, Jelle Saldien, Ilse Ravyse, Jan B. F. van Erp, Tina Mioch:
Range-IT: detection and multimodal presentation of indoor objects for visually impaired people. 54:1-54:6
- Alexandre Almeida, Ana Alves:
Activity recognition for movement-based interaction in mobile games. 55:1-55:8
- Frederik Wiehr, Felix Kosmalla, Florian Daiber, Antonio Krüger:
FootStriker: an EMS-based assistance system for real-time running style correction. 56:1-56:6
Industrial perspectives
- Boban Blazevski, Jean D. Hallewell Haslwanter:
User-centered development of a system to support assembly line worker. 57:1-57:7
- Yash Bhavnani, Kerry Rodden, Laura Cuozzo Guarnotta, Margaret T. Lynn, Sara Chizari, Laura Granka:
Understanding mobile phone activities via retrospective review of visualizations of usage data. 58:1-58:10
- Iram Mirza, Joshua Tabak:
Designing for delight. 59:1-59:3
- Christian Sturm, Maha Aly, Birka von Schmidt, Tessa Flatten:
Entrepreneurial & UX mindsets: two perspectives - one objective. 60:1-60:11
- Sanjay Ghosh:
What users want in their mobile phones?: localization for low socio-economic emerging market. 61:1-61:10
Tutorials
- Sven Mayer, Huy Viet Le, Niels Henze:
Machine learning for intelligent mobile user interfaces using TensorFlow. 62:1-62:5
- Bastian Pfleging, Andrew L. Kun, Nora Broy:
The car as an environment for mobile devices. 63:1-63:5
- Denise Su, Megan K. Torkildson, Heidi Sales:
Speed dating, love letters, and couples interviews: how to get the spark back in user research methods. 64:1-64:5
- Florian Daiber, Felix Kosmalla:
Tutorial on wearable computing in sports. 65:1-65:4
- Cosmin Munteanu, Gerald Penn:
Speech and Hands-free interaction: myths, challenges, and opportunities. 66:1-66:4
- Tim Duente, Stefan Schneegass, Max Pfeiffer:
EMS in HCI: challenges and opportunities in actuating human bodies. 67:1-67:4
Doctoral consortium
- Dominik Weber:
Towards smart notification management in multi-device environments. 68:1-68:2
- Michael Kvist Svangren:
Understanding and designing for emerging digital eco-systems: the cases of private and shared cars. 69:1-69:4
- Cameron Steer:
Designing mobile deformable controls for creation of digital art. 70:1-70:4
- Mihai Bâce:
Augmenting human interaction capabilities with proximity, natural gestures, and eye gaze. 71:1-71:3
- Jared Duval:
A mobile game system for improving the speech therapy experience. 72:1-72:3
- Sean-Ryan Smith:
Mobile context-aware cognitive testing system. 73:1-73:4
- Maria Karyda:
Crafting collocated interactions: exploring physical representations of personal data. 74:1-74:4
Workshops
- Hui-Shyong Yeo, Gierad Laput, Nicholas Gillian, Aaron Quigley:
Workshop on object recognition for input and mobile interaction. 75:1-75:5
- Scott Jenson:
The UX of IoT: unpacking the internet of things. 76:1-76:2
- Jonna Häkkilä, Ashley Colley, Keith Cheverst, Simon Robinson, Johannes Schöning, Nicola J. Bidwell, Felix Kosmalla:
NatureCHI 2017: the 2nd workshop on unobtrusive user experiences with technology in nature. 77:1-77:4
- Alexander Meschtscherjakov, Manfred Tscheligi, Peter Fröhlich, Rod McCall, Andreas Riener, Philippe A. Palanque:
Mobile interaction with and in autonomous vehicles. 78:1-78:6
Late breaking results
- Michael Braun, Nora Broy, Bastian Pfleging, Florian Alt:
A design space for conversational in-vehicle information systems. 79:1-79:8
- Marion Koelle, Wilko Heuten, Susanne Boll:
Are you hiding it?: usage habits of lifelogging camera wearers. 80:1-80:8
- Uwe Gruenefeld, Abdallah El Ali, Wilko Heuten, Susanne Boll:
Visualizing out-of-view objects in head-mounted augmented reality. 81:1-81:7
- Sven Mayer, Michael Mayer, Niels Henze:
Feasibility analysis of detecting the finger orientation with depth cameras. 82:1-82:8
- Angélique Montuwy, Béatrice Cahour, Aurélie Dommes:
Visual, auditory and haptic navigation feedbacks among older pedestrians. 83:1-83:8
- Kerstin Blumenstein, Christina Niederer, Markus Wagner, Wilhelm Pfersmann, Markus Seidl, Wolfgang Aigner:
Visualizing spatial and time-oriented data in a second screen application. 84:1-84:8
- Ragavendra Lingamaneni, Thomas Kubitza, Jürgen Scheible:
DroneCAST: towards a programming toolkit for airborne multimedia display applications. 85:1-85:8
- Nan Yang, Gerbrand van Hout, Loe M. G. Feijs, Wei Chen, Jun Hu:
Eliciting values through wearable expression in weight loss. 86:1-86:6
- Simone Kriglstein, Mario Brandmüller, Margit Pohl, Christine Bauer:
A location-based educational game for understanding the traveling salesman problem: a case study. 87:1-87:8
- Tim Weißker, Erdan Genc, Andreas Berst, Frederik David Schreiber, Florian Echtler:
ShakeCast: using handshake detection for automated, setup-free exchange of contact data. 88:1-88:8
- Romina Kettner, Patrick Bader, Thomas Kosch, Stefan Schneegass, Albrecht Schmidt:
Towards pressure-based feedback for non-stressful tactile notifications. 89:1-89:8
- Susanne Koch Stigberg:
Simplifying the making of probes, prototypes and toolkits in mobile interaction research using tasker. 90:1-90:8
- Florian Güldenpfennig, Roman Ganhör, Geraldine Fitzpatrick:
How to look at two-sided photos?: exploring novel perspectives on digital images. 91:1-91:8
- Katta Spiel, Katharina Werner, Oliver Hödl, Lisa Ehrenstrasser, Geraldine Fitzpatrick:
Creating community fountains by (re-)designing the digital layer of way-finding pillars. 92:1-92:8
- Jacob M. Rigby, Duncan P. Brumby, Sandy J. J. Gould, Anna L. Cox:
Film, interrupted: investigating how mobile device notifications affect immersion during movies. 93:1-93:8
- Simran Chopra, Shruthi Chivukula:
My phone assistant should know I am an Indian: influencing factors for adoption of assistive agents. 94:1-94:8
- Susen Döbelt, Johann Schrammel, Manfred Tscheligi:
Which cloak dresses you best?: comparing location cloaking methods for mobile users. 95:1-95:8
- Cristina Maria Sylla, Ahmed Sabbir Arif, Elena Márquez Segura, Eva Irene Brooks:
Paper ladder: a rating scale to collect children's opinion in user studies. 96:1-96:8
- Mara Dionisio, Teresa Paulino, Trisha Suri, Nicolas Autzen, Johannes Schöning:
"In search of light": enhancing touristic recommender services with local weather data. 97:1-97:8
- Nassrin Hajinejad, Barbara Grüter, Licínio Roque:
Prototyping sonic interaction for walking. 98:1-98:8
- Aiman M. Ayyal Awwad, Christian Schindler, Kirshan Kumar Luhana, Zulfiqar Ali, Bernadette Spieler:
Improving pocket paint usability via material design compliance and internationalization & localization support on application level. 99:1-99:8
- Huy Viet Le, Sven Mayer, Patrick Bader, Niels Henze:
A smartphone prototype for touch interaction on the whole device surface. 100:1-100:8
- Shruti Grover, Simon Johnson:
Balance trees: a new visual representation for body balance. 101:1-101:7
- Ionut Andone, Konrad Blaszkiewicz, Matthias Böhmer, Alexander Markowetz:
Impact of location-based games on phone usage and movement: a case study on Pokémon GO. 102:1-102:8
- Sonya Cates, Daniel Barron, Patrick Ruddiman:
MobiLearn go: mobile microlearning as an active, location-aware game. 103:1-103:7
- Miriam Greis, Tilman Dingler, Albrecht Schmidt, Chris Schmandt:
Leveraging user-made predictions to help understand personal behavior patterns. 104:1-104:8
- Ioannis Giannopoulos, Andreas Komninos, John D. Garofalakis:
Interacting with large maps using HMDs in VR settings. 105:1-105:9
- Gwangrae Yeom, Garam Lee, Dayoung Jeong, Jeonghoon Rhee, Jundong Cho:
Fam-On: family shared time tracker to improve their emotional bond. 106:1-106:8
- Niels Henze, Sven Mayer, Huy Viet Le, Valentin Schwind:
Improving software-reduced touchscreen latency. 107:1-107:8
- Garam Lee, Luis Cavazos Quero, Jing Yang, Hyunhee Jung, Jooyoung Son, Jundong Cho:
Slate master: a tangible Braille slate tutor for mobile devices. 108:1-108:6