24. UIST 2011: Santa Barbara, CA, USA
- Jeffrey S. Pierce, Maneesh Agrawala, Scott R. Klemmer:
Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology, Santa Barbara, CA, USA, October 16-19, 2011 - Adjunct Volume. ACM 2011, ISBN 978-1-4503-1014-7
Demonstration
- Yuhang Zhao, Chao Xue, Xiang Cao, Yuanchun Shi:
PicoPet: "Real World" digital pet on a handheld projector. 1-2
- Cati N. Boulanger, Paul H. Dietz, Steven Bathiche:
Scopemate: a tracking inspection microscope. 3-4
- Robert LiKamWa, Lin Zhong:
SUAVE: sensor-based user-aware viewing enhancement for mobile device displays. 5-6
- Michel Amberg, Frédéric Giraud, Betty Lemaire-Semail, Paolo Olivo, Géry Casiez, Nicolas Roussel:
STIMTAC: a tactile input device with programmable friction. 7-8
- Taichi Murase, Atsunori Moteki, Noriaki Ozawa, Nobuyuki Hara, Takehiro Nakai, Katsuhito Fujimoto:
Gesture keyboard requiring only one camera. 9-10
- Nimesha Ranasinghe, Adrian David Cheok, Hideaki Nii, Owen Noel Newton Fernando, Gopalakrishnakone Ponnampalam:
Digital taste interface. 11-12
- Kenji Sugihara, Mai Otsuki, Asako Kimura, Fumihisa Shibata, Hideyuki Tamura:
MAI painting brush++: augmenting the feeling of painting with new visual and tactile feedback mechanisms. 13-14
- Sangwon Choi, Jaehyun Han, Sunjun Kim, Seongkook Heo, Geehyuk Lee:
ThickPad: a hover-tracking touchpad for a laptop. 15-16
- David Holman, Roel Vertegaal:
TactileTape: low-cost touch sensing on curved surfaces. 17-18
- Nektarios Paisios, Alex Rubinsteyn, Vrutti Vyas, Lakshminarayanan Subramanian:
Recognizing currency bills using a mobile phone: an assistive aid for the visually impaired. 19-20
- Anant P. Bhardwaj, Dave Luciano, Scott R. Klemmer:
Redprint: integrating API specific "instant example" and "instant documentation" display interface in IDEs. 21-22
Doctoral symposium
- Max Goldman:
Role-based interfaces for collaborative software development. 23-26
- Tsung-Hsiang Chang:
Using graphical representation of user interfaces as visual references. 27-30
- David R. Flatla:
Accessibility for individuals with color vision deficiency. 31-34
- Raphaël Hoarau:
Augmenting the SCOPE of interactions with implicit and explicit graphical structures. 35-38
- Jessica R. Cauchard:
Mobile multi-display environments. 39-42
- Markus Löchtefeld:
Advanced interaction with mobile projection interfaces. 43-46
- Saleema Amershi:
Designing for effective end-user interaction with machine learning. 47-50
Poster presentation
- Qian Qin, Michael Rohs, Sven G. Kratz:
Dynamic ambient lighting for mobile devices. 51-52
- Kentaro Takemura, Akihiro Ito, Jun Takamatsu, Tsukasa Ogasawara:
Active bone-conducted sound sensing for wearable interfaces. 53-54
- Tun-Hao You, Yi-Jui Wu, Yi-Jen Yeh:
TOPS: television object promoting system. 55-56
- Daniel S. Weld, Mausam, Peng Dai:
Execution control for crowdsourcing. 57-58
- Hosub Lee, Young Sang Choi:
Fit your hand: personalized user interface considering physical attributes of mobile device users. 59-60
- Youngwoo Park, Sungjae Hwang, Tek-Jin Nam:
Poke: emotional touch delivery through an inflatable surface over interpersonal mobile communications. 61-62
- Cati N. Boulanger, Paul H. Dietz, Steven Bathiche:
Scopemate: a robotic microscope. 63-64
- Charles Roberts, Tobias Höllerer:
Composition for conductor and audience: new uses for mobile devices in the concert hall. 65-66
- Dave Krebs, Alexander Conrad, Milos Hauskrecht, Jingtao Wang:
MARBLS: a visual environment for building clinical alert rules. 67-68
- Hubert Pham, Justin Mazzola Paluska, Robert C. Miller, Steve Ward:
Cloudtop: a workspace for the cloud. 69-70
- Leslie Wu, Jesse Cirimele, Stuart K. Card, Scott R. Klemmer, Larry F. Chu, Kyle Harrison:
Maintaining shared mental models in anesthesia crisis care with nurse tablet input and large-screen displays. 71-72
- Taku Hachisu, Michi Sato, Shogo Fukushima, Hiroyuki Kajimoto:
HaCHIStick: simulating haptic sensation on tablet pc for musical instruments application. 73-74
- Jiseong Gu, Geehyuk Lee:
TouchString: a flexible linear multi-touch sensor for prototyping a freeform multi-touch surface. 75-76
- Abhijit Karnik, Walterio W. Mayol-Cuevas, Sriram Subramanian:
MUST-D: multi-user see through display. 77-78
- Nimesha Ranasinghe, Adrian David Cheok, Hideaki Nii, Owen Noel Newton Fernando, Gopalakrishnakone Ponnampalam:
Digital taste for remote multisensory interactions. 79-80
- Atsushi Hiyama, Yusuke Doyama, Mariko Miyashita, Eikan Ebuchi, Masazumi Seki, Michitaka Hirose:
Artisanship training using wearable egocentric display. 81-82
- Nektarios Paisios, Alex Rubinsteyn, Lakshminarayanan Subramanian, Matt Tierney, Vrutti Vyas:
Tracking indoor location and motion for navigational assistance. 83-84
- Jared S. Bauer, Alex Jansen, Jesse Cirimele:
MoodMusic: a method for cooperative, generative music playlist creation. 85-86
- Neil Patel, Scott R. Klemmer, Tapan S. Parikh:
An asymmetric communications platform for knowledge sharing with low-end mobile phones. 87-88
- Benjamin J. Lafreniere, Andrea Bunt, Matthew Lount, Filip Krynicki, Michael A. Terry:
AdaptableGIMP: designing a socially-adaptable interface. 89-90
- James Simpson, Michael A. Terry:
Embedding interface sketches in code. 91-92