37th UIST 2024: Pittsburgh, PA, USA - Adjunct Volume
- Lining Yao, Mayank Goel, Alexandra Ion, Pedro Lopes:
Adjunct Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology, UIST Adjunct 2024, Pittsburgh, PA, USA, October 13-16, 2024. ACM 2024, ISBN 979-8-4007-0718-6
Student Innovation Contest
- Yuewen Luo, Siyi Ren, Zhaodong Jiang, Bingjian Huang, Daniel Wigdor:
TactileNet: Bringing Touch Closer in the Digital World. 1:1-1:3
- Chenyi Shen, Yiming Jiao, Rushil H. Sojitra:
MetaController: Sheet Material Based Flexible Game Controlling System. 2:1-2:3
- Lyumanshan Ye, Hao Jin, Qiao Jin:
SenseBot: Leveraging Embodied Asymmetric Interaction and Social Robotic to Enhance Intergenerational Communication. 3:1-3:3
- Hongni Ye, Xiangrong Zhu, Yongbo Yang:
CrAIzy MIDI: AI-powered Wearable Musical Instrumental for Novice Player. 4:1-4:3
- Shun Hanai, Kohei Miura:
IntelliCID: Intelligent Caustics Illumination Device. 5:1-5:2
- Jiwan Kim, Hohurn Jung, Ian Oakley:
VibraHand: In-Hand Superpower Enabling Spying, Precognition, and Telekinesis. 6:1-6:3
- Meng Ting Shih, Ming-Yun Hsu, Sheng Cian Lee:
Empathy-GPT: Leveraging Large Language Models to Enhance Emotional Empathy and User Engagement in Embodied Conversational Agents. 7:1-7:3
Sustainable Interfaces
- Yixuan Li, Zhaowen Deng, Yanying Zhu:
EmoPus: Providing Emotional and Tactile Comfort with a AI Desk Companion Octopus. 8:1-8:3
Demos
- Naoto Nishida, Hirotaka Hiraki, Jun Rekimoto, Yoshio Ishiguro:
Real-Time Word-Level Temporal Segmentation in Streaming Speech Recognition. 9:1-9:3
- Masatoshi Hamanaka:
Breaking Future Rhythm Visualizer. 10:1-10:3
- Lukas Teufelberger, Xintong Liu, Zhipeng Li, Max Möbus, Christian Holz:
Demonstrating LLM-for-X: Application-agnostic Integration of Large Language Models to Support Writing Workflows. 11:1-11:3
- Mark Richardson, Fadi Botros, Yangyang Shi, Bradford J. Snow, Pinhao Guo, Linguang Zhang, Jingming Dong, Keith Vertanen, Shugao Ma, Robert Wang:
StegoType: Surface Typing from Egocentric Cameras. 12:1-12:14
- Hongyu Mao, Alexander Kyu, Junyi Zhu, Mayank Goel, Karan Ahuja:
Demo of EITPose: Wearable and Practical Electrical Impedance Tomography for Continuous Hand Pose Estimation. 13:1-13:3
- Pragya Kallanagoudar, Chithra Anand, Rolando Garcia, Rebecca M. M. Hicke, Aditya G. Parameswaran, Eunice Jun, Sarah E. Chasins:
Quilt: Custom UIs for Linking Unstructured Documents to Structured Datasets. 14:1-14:4
- Tianyu Yu, Yang Liu, Yujia Liu, Qiuyu Lu, Teng Han, Haipeng Mi:
FlexEOP: Flexible Shape-changing Actuator using Embedded Electroosmotic Pumps. 15:1-15:5
- Md. Touhidul Islam, Noushad Sojib, Imran Kabir, Ashiqur Rahman Amit, Mohammad Ruhul Amin, Syed Masum Billah:
Demonstration of Wheeler: A Three-Wheeled Input Device for Usable, Efficient, and Versatile Non-Visual Interaction. 16:1-16:3
- Yijing Jiang, Julia Kleinau, Till Max Eckroth, Eve E. Hoggan, Stefanie Mueller, Michael Wessely:
Demonstration of MouthIO: Customizable Oral User Interfaces with Integrated Sensing and Actuation. 17:1-17:3
- Yunyi Zhu, Cedric Honnet, Yixiao Kang, Junyi Zhu, Angelina J. Zheng, Kyle Heinz, Grace Tang, Luca Musk, Michael Wessely, Stefanie Mueller:
Demo of PortaChrome: A Portable Contact Light Source for Integrated Re-Programmable Multi-Color Textures. 18:1-18:4
- Yudai Tanaka, Hunter G. Mathews, Jacob Serfaty, Pedro Lopes:
Demonstrating Haptic Source-Effector: Full-Body Haptics via Non-Invasive Brain Stimulation. 19:1-19:3
- Ryo Masuda, Yuta Noma, Koya Narumi:
Computational Design and Fabrication of 3D Printed Zippers Connecting 3D Textile Structures. 20:1-20:3
- Riki Takizawa, Shigeyuki Hirai:
PronounSE: SFX Synthesizer from Language-Independent Vocal Mimic Representation. 21:1-21:3
- John Joon Young Chung, Melissa Roemmele, Max Kreminski:
Toyteller: Toy-Playing with Character Symbols for AI-Powered Visual Storytelling. 22:1-22:5
- Erzhen Hu, Mingyi Li, Xun Qian, Alex Olwal, David Kim, Seongkook Heo, Ruofei Du:
Experiencing Thing2Reality: Transforming 2D Content into Conditioned Multiviews and 3D Gaussian Objects for XR Communication. 23:1-23:3
- Qiuyu Lu, Semina Yi, Lining Yao:
DeMorph: Morphing Devices Functioning via Sequential Degradation. 24:1-24:3
- Minhyeok Baek, Sunjun Kim:
Demonstration of Haptic Devices with Variable Volume Using Spiral Spring Structures. 25:1-25:3
- Zining Zhang, Jiasheng Li, Zeyu Yan, Jun Nishida, Huaishu Peng:
Demonstration of JetUnit: Rendering Diverse Force Feedback in Virtual Reality Using Water Jets. 26:1-26:3
- Soheil Kianzad, Hasti Seifi:
MagicDraw: Haptic-Assisted One-Line Drawing with Shared Control. 27:1-27:3
- Bob Tianqi Wei, Shm Garanganao Almeda, Ethan Tam, Dor Abrahamson:
Demonstration of Sympathetic Orchestra: An Interactive Conducting Education System for Responsive, Tacit Skill Development. 28:1-28:3
- Naoki Yoshioka, Hiroyuki Manabe:
SealingLid: FDM 3D Printing Technique that Bends Thin Walls to Work as a Lid. 29:1-29:3
- Fangzheng Liu, Don Derek Haddad, Joe A. Paradiso:
MindCube: an Interactive Device for Gauging Emotions. 30:1-30:2
- Yatharth Singhal, Haokun Wang, Jin Ryong Kim:
Demonstrating FIRE: Mid-Air Thermo-Tactile Display. 31:1-31:2
- Xia Su, Ruiqi Chen, Weiye Zhang, Jingwei Ma, Jon E. Froehlich:
A Demo of DIAM: Drone-based Indoor Accessibility Mapping. 32:1-32:3
- Nathaniel Steele Dennler, Evan Torrence, Uksang Yoo, Stefanos Nikolaidis, Maja J. Mataric:
PyLips: an Open-Source Python Package to Expand Participation in Embodied Interaction. 33:1-33:4
- Changsung Lim, Sangyoon Lee, Geehyuk Lee:
DualPad: Exploring Non-Dominant Hand Interaction on Dual-Screen Laptop Touchpads. 34:1-34:4
- Mizuki Ishida, Kaori Ikematsu, Yuki Igarashi:
ScreenConcealer: Privacy-protection System with Obfuscations for Screen Sharing. 35:1-35:3
- Simret Araya Gebreegziabher, Elena L. Glassman, Toby Jia-Jun Li:
MOCHA: Model Optimization through Collaborative Human-AI Alignment. 36:1-36:4
- Ayaka Ishii, Kentaro Yasu:
RelieFoam: Rapid Prototyping of 2.5D Texture using Laser Cutter. 37:1-37:3
- Anandghan Waghmare, Sanjay Varghese, Shwetak N. Patel:
Demonstrating Z-Band: Enabling Subtle Hand Interactions with Bio-impedance Sensing on the Wrist. 38:1-38:2
- Jessie Yuan, Janavi Gupta, Akhil Padmanabha, Zulekha Karachiwalla, Carmel Majidi, Henny Admoni, Zackory Erickson:
Towards an LLM-Based Speech Interface for Robot-Assisted Feeding. 39:1-39:4
- Jiexin Ding, Ishan Chatterjee, Alexander Ching, Anandghan Waghmare, Shwetak N. Patel:
Demo of FlowRing: Seamless Cross-Surface Interaction via Opto-Acoustic Ring. 40:1-40:3
- Mehmet Özdemir, Marwa Alalawi, Mustafa Doga Dogan, Jose Francisco Martinez Castro, Stefanie Mueller, Zjenja Doubrovski:
Demonstrating Speed-Modulated Ironing: High-Resolution Shade and Texture Gradients in Single-Material 3D Printing. 41:1-41:6
- Gabriel Lipkowitz:
Palimpsest: a spatial user interface toolkit for cohering tracked physical entities and interactive 3D content. 42:1-42:4
- Bingjian Huang, Hanfeng Cai, Siyi Ren, Yeqi Sang, Qilong Cheng, Paul H. Dietz, Daniel Wigdor:
Demonstrating VibraForge: An Open-source Vibrotactile Prototyping Toolkit with Scalable Modular Design. 43:1-43:5
- Eric J. Gonzalez, Ishan Chatterjee, Khushman Patel, Mar González-Franco, Andrea Colaço, Karan Ahuja:
Demonstrating XDTK: Prototyping Multi-Device Interaction and Arbitration in XR. 44:1-44:3
- Takegi Yoshimoto, Yoshiki Minato, Homei Miyashita:
Edible Lens Array: Dishes with lens-shaped jellies that change their appearance depending on the viewpoint. 45:1-45:3
- Cheng Xue, Yijie Guo, Ziyi Wang, Mona Shimizu, Jihong Jeung, Haipeng Mi:
DishAgent: Enhancing Dining Experiences through LLM-Based Smart Dishes. 46:1-46:4
- Xinyun Cao, Dhruv Jain:
SoundModVR: Sound Modifications in Virtual Reality for Sound Accessibility. 47:1-47:4
- Jingyue Zhang, Ian Arawjo:
ChainBuddy: An AI-assisted Agent System for Helping Users Set up LLM Pipelines. 48:1-48:3
- Keita Tsuyuguchi, Kosuke Shimizu, Kenji Suzuki:
Emotion Overflow: an Interactive System to Represent Emotion with Fluid. 49:1-49:2
- Kyzyl Monteiro, Yuchen Wu, Sauvik Das:
Manipulate to Obfuscate: A Privacy-Focused Intelligent Image Manipulation Tool for End-Users. 50:1-50:3
- Jas Brooks, Alex Mazursky, Janice Hixon, Pedro Lopes:
Demonstrating Augmented Breathing via Thermal Feedback in the Nose. 51:1-51:5
- Jaewook Lee, Sieun Kim, Minji Park, Catherine L. Rasgaitis, Jon E. Froehlich:
Embodied AR Language Learning Through Everyday Object Interactions: A Demonstration of EARLL. 52:1-52:3
- Jan Ulrich Bartels, Natalia Sanchez-Tamayo, Michael Sedlmair, Katherine J. Kuchenbecker:
Active Haptic Feedback for a Virtual Wrist-Anchored User Interface. 53:1-53:3
- Cathy Mengying Fang, Patrick Chwalek, Quincy Kuang, Pattie Maes:
WatchThis: A Wearable Point-and-Ask Interface powered by Vision-Language Models for Contextual Queries. 54:1-54:4
- Muhammad Abdullah, Laurenz Seidel, Ben Wernicke, Mehdi Gouasmi, Anton Friedrich Hackl, Thomas Kern, Conrad Lempert, Clara Lempert, David Bizer, Wieland Storch, Chiao Fang, Patrick Baudisch:
Demonstrating PopCore: Personal Fabrication of 3D Foamcore Models for Professional High-Quality Applications in Design and Architecture. 55:1-55:5
Doctoral Symposium (Not Public)
- Hirotaka Hiraki, Shusuke Kanazawa, Takahiro Miura, Manabu Yoshida, Masaaki Mochimaru, Jun Rekimoto:
Conductive Fabric Diaphragm for Noise-Suppressive Headset Microphone. 56:1-56:3
- Yasaman S. Sefidgar:
Supporting Control and Alignment in Personal Informatics Tools. 57:1-57:4
- Shwetha Rajaram:
Enabling Safer Augmented Reality Experiences: Usable Privacy Interventions for AR Creators and End-Users. 58:1-58:8
- Saelyne Yang:
Enhancing How People Learn Procedural Tasks Through How-to Videos. 59:1-59:5
- Yudai Tanaka:
Nervous System Interception: A New Paradigm for Haptics. 60:1-60:5
- Michelle S. Lam:
Granting Non-AI Experts Creative Control Over AI Systems. 61:1-61:5
- Anandghan Waghmare:
Extending the Senses of Ubiquitous Devices. 62:1-62:5
- Nathaniel Steele Dennler:
Physical and Social Adaptation for Assistive Robot Interactions. 63:1-63:6
Vision talks
- Zeyu Yan:
Sustainable in-house PCB prototyping. 64:1-64:5
- Arvind Satyanarayan:
Intelligence as Agency. 65:1-65:3
AI & Automation
- Wendy E. Mackay:
Parasitic or Symbiotic? Redefining our Relationship with Intelligent Systems. 66:1-66:2
Poster Session A
- May Yu, Afroza Sultana, Stacy Cernova, Megan Wang, Alexander Bakogeorge, Tudor Tibu, Aneesh P. Tarun, Ali Mazalek:
"SimSnap" Framework: Designing Interaction Methods for Cross-device Applications. 67:1-67:3
- Ling Qin:
Seent: Interfacing Gamified Olfactory Training. 68:1-68:3
- Likun Fang, Yunxiao Wang, Ercan Altinsoy:
Investigating the Design Space of Affective Touch on the Forearm Area. 69:1-69:3
- Panayu Keelawat, Ryo Suzuki:
Transforming Procedural Instructions into In-Situ Augmented Reality Guides with InstructAR. 70:1-70:3
- Kaori Ikematsu, Kunihiro Kato:
Enhancing Readability with a Target-Aware Zooming Technique for Touch Surfaces. 71:1-71:3
- Jiahao Nick Li, Zhuohao (Jerry) Zhang, Jiaju Ma:
OmniQuery: Enabling Question Answering on Personal Memory by Augmenting Multimodal Album Data. 72:1-72:3
- Benedict Leung, Mariana Shimabukuro, Christopher Collins:
NeuroSight: Combining Eye-Tracking and Brain-Computer Interfaces for Context-Aware Hand-Free Camera Interaction. 73:1-73:3
- Hsuanling Lee, Yujie Shan, Huachao Mao, Liang He:
Fluxable: A Tool for Making 3D Printable Sensors and Actuators. 74:1-74:3
- Chenfeng Gao, Wanli Qian, Richard Liu, Rana Hanocka, Ken Nakagaki:
Towards Multimodal Interaction with AI-Infused Shape-Changing Interfaces. 75:1-75:3
- Qiuyu Lu, Jiawei Fang, Zhihao Yao, Yue Yang, Shiqing Lyu, Haipeng Mi, Lining Yao:
Large Language Model Agents Enabled Generative Design of Fluidic Computation Interfaces. 76:1-76:3
- Minyung Kim, Kun Woo Song, Yohan Lim, Sang Ho Yoon:
Collision Prevention in Diminished Reality through the Use of Peripheral Vision. 77:1-77:3
- Hirotaka Hiraki, Jun Rekimoto:
Piezoelectric Sensing of Mask Surface Waves for Noise-Suppressive Speech Input. 78:1-78:3
- Jiayi Lu, Xinxin Qiu, Zihan Gao:
LOST STAR: An Interactive Stereoscopic Picture Book Installation for Children's Bedtime Rituals. 79:1-79:3
- Yu Liu, Qiao Jin, Zejun Zhang, Bo Han, Svetlana Yarosh, Feng Qian:
HoloClass: Enhancing VR Classroom with Live Volumetric Video Streaming. 80:1-80:3
- Lorraine Underwood, Thomas Ball, Steve Hodges, Elisa Rubegni, Peli de Halleux, Joe Finney:
MicroCode: live, portable programming for children via robotics. 81:1-81:3
- Alexander Lingler, Dinara Talypova, Philipp Wintersberger:
AITentive: A Toolkit to Develop RL-based Attention Management Systems. 82:1-82:3
- Billy Shi, Per Ola Kristensson:
Pay Attention! Human-Centric Improvements of LLM-based Interfaces for Assisting Software Test Case Development. 83:1-83:3
- Yujia Liu, Qihang Shan, Zhihao Yao, Qiuyu Lu:
KeyFlow: Acoustic Motion Sensing for Cursor Control on Any Keyboard. 84:1-84:3
- Jieyu Zhou, Christopher MacLellan:
Improving Interface Design in Interactive Task Learning for Hierarchical Tasks based on a Qualitative Study. 85:1-85:3
- David Chuan-En Lin, Hyeonsu B. Kang, Nikolas Martelaro, Aniket Kittur, Yan-Ying Chen, Matthew K. Hong:
Inkspire: Sketching Product Designs with AI. 86:1-86:6
- Hyelim Hwang, Seung-Jun Lee, Seok-Hyung Bae:
ValueSphere: A Portable Widget for Quick and Easy Shading in Digital Drawings. 87:1-87:2
- Cyrus Vachha, Yixiao Kang, Zach Dive, Ashwat Chidambaram, Anik Gupta, Eunice Jun, Bjoern Hartmann:
Dreamcrafter: Immersive Editing of 3D Radiance Fields Through Flexible, Generative Inputs and Outputs. 88:1-88:3
Workshop - Democratizing Intelligent Soft Wearables
- François Guimbretière, Amritansh Kwatra, Victor F. Guimbretiere, Scott E. Hudson:
A New Approach for Volumetric Knitting. 89:1-89:3
Workshop - Dynamic Abstractions: Building the Next Generation of Cognitive Tools and Interfaces
- Cedric Honnet, Tianhong Catherine Yu, Irmandy Wicaksono, Tingyu Cheng, Andreea Danielescu, Cheng Zhang, Stefanie Mueller, Joe A. Paradiso, Yiyue Luo:
Democratizing Intelligent Soft Wearables. 90:1-90:3
Workshop - Bridging disciplines for a new era in Physical AI
- Sangho Suh, Hai Dang, Ryan Yen, Josh M. Pollock, Ian Arawjo, Rubaiat Habib Kazi, Hariharan Subramonyam, Jingyi Li, Nazmus Saquib, Arvind Satyanarayan:
Dynamic Abstractions: Building the Next Generation of Cognitive Tools and Interfaces. 91:1-91:3
Poster Session B
- Alexandra Ion, Carmel Majidi, Lining Yao, Amir H. Alavi:
Bridging Disciplines for a New Era in Physical AI. 92:1-92:3
- Lauren Nigri, Hyo Kang:
Undercover Assistance: Designing a Disguised App to Navigate Sexual Harassment. 93:1-93:3
- Shoi To, Junichiro Kadomoto, Hidetsugu Irie, Shuichi Sakai:
DataPipettor: Touch-Based Information Transfer Interface Using Proximity Wireless Communication. 94:1-94:3
- Emilie Faracci, Aditya Retnanto, Anup Sathya, Ashlyn Sparrow, Ken Nakagaki:
Game Jam with CARDinality: A Case Study of Exploring Play-based Interactive Applications. 95:1-95:3
- Shogo Tomaru, Ken Takaki, Hiroaki Murakami, Damyon Kim, Koya Narumi, Mitsuhiro Kamezaki, Yoshihiro Kawahara:
Micro-Gesture Recognition of Tongue via Bone Conduction Sound. 96:1-96:3
- Chengbo Zheng, Zeyu Huang, Shuai Ma, Xiaojuan Ma:
SelfGauge: An Intelligent Tool to Support Student Self-assessment in GenAI-enhanced Project-based Learning. 97:1-97:3
- Tongyu Zhou, Gromit Yeuk-Yin Chan, Shunan Guo, Jane Hoffswell, Chang Xiao, Victor S. Bursztyn, Eunyee Koh:
Data Pictorial: Deconstructing Raster Images for Data-Aware Animated Vector Posters. 98:1-98:3
- Mia Huong Nguyen, Kian Peen Yeo, Yasith Samaradivakara, Suranga Nanayakkara:
Catch that butterfly: A Multimodal Approach for Detecting and Simulating Gut Feelings. 99:1-99:3
- Deval Panchal, Christopher Collins, Mariana Shimabukuro:
LingoComics: Co-Authoring Comic Style AI-Empowered Stories for Language Learning Immersion with Story Designer. 100:1-100:3
- Ryan Yen, Jian Zhao, Daniel Vogel:
Code Shaping: Iterative Code Editing with Free-form Sketching. 101:1-101:3
- Junjie Tang, Jakki O. Bailey:
Exploring the Effects of Fantasy Level of Avatars on User Perception and Behavior. 102:1-102:3
- Karin Ohara, Tsubasa Saito, Takashi Ijiri:
TeleHand: Hand-only Teleportation for Distant Object Pointing in Virtual Reality. 103:1-103:3
- Minhyeok Baek, Sunjun Kim:
Efficient Optimal Mouse Sensor Position Estimation using Simulated Cursor Trajectories. 104:1-104:3
- Harish Ram Nambiappan, Fillia Makedon:
Development and Evaluation of Collision Avoidance User Interface for Assistive Vision Impaired Navigation. 105:1-105:3
- Hiroya Miura:
Electrical Connected Orchestra: A New Baton System that can Interactively Control the Body Movements of Performers. 106:1-106:2
- Dailyn Despradel Despradel, Max Murphy, Luigi Borda, Nikhil Verma, Prakarsh Yadav, Jenn Shanahan, Najja Marshall, Emanuele Formento, Mario Bräcklein, Jun Ye, Peter Walkington, Rishi Rajalingham, David Sussillo, Stephanie Naufel, Diego Adrian Gutnisky, Jennifer L. Collinger, Douglas J. Weber:
Enabling Advanced Interactions through Closed-loop Control of Motor Unit Activity After Tetraplegia. 107:1-107:3
- Serene Cheon, Hyo Kang:
ChipQuest: Gamifying the Semiconductor Manufacturing Process to Inspire Future Workforce. 108:1-108:3
- Ami Takahashi, Yu Soma, Hiroki Sato, Karin Tomonaga:
Flexmock: Fast, easy, stockable smocking method using 3D printed self-shrinkable pattern sheet. 109:1-109:3
- Xin Wen, Eldy S. Lazaro Vasquez, Michael L. Rivera:
Exploring a Software Tool for Biofibers Design. 110:1-110:3
- Vikram Aikat, Pradeep Raj Krishnappa Babu, Kimberly L. H. Carpenter, J. Matías Di Martino, Steven Espinosa, Naomi Davis, Lauren Franz, Marina Spanos, Geraldine Dawson, Guillermo Sapiro:
Digital Phenotyping based on a Mobile App Identifies Distinct and Overlapping Features in Children Diagnosed with Autism versus ADHD. 111:1-111:4
- Josef Macera, Soma Narita, Lea Albaugh:
Stretchy Embroidered Circuits. 112:1-112:3
- Wanhui Li, Qing Zhang, Takuto Nakamura, Sinyu Lai, Jun Rekimoto:
Mapping Gaze and Head Movement via Salience Modulation and Hanger Reflex. 113:1-113:3
Workshop - HRI and UIST: Designing Socially Engaging Robot Interfaces
- De-Yuan Lu, Lung-Pan Cheng:
FisheyeVR: Extending the Field of View by Dynamic Zooming in Virtual Reality. 114:1-114:3
Banquet & Keynote (Yaser Sheikh)
- Pragathi Praveena, Arissa J. Sato, Amy Koike, Ran Zhou, Nathan Thomas White, Ken Nakagaki:
HRI and UIST: Designing Socially Engaging Robot Interfaces. 115:1-115:3