NIME 2013: Daejeon, Republic of Korea
- Woon Seung Yeo, Kyogu Lee, Alexander Sigman, Haru (Hyunkyung) Ji, Graham Wakefield (eds.): 13th International Conference on New Interfaces for Musical Expression, NIME 2013, Daejeon, Republic of Korea, May 27-30, 2013. nime.org 2013
- Jesse T. Allison, Yemin Oh, Benjamin Taylor: NEXUS: Collaborative Performance for the Masses, Handling Instrument Interface Distribution through the Web. 1-6
- Xiao Xiao, Anna Pereira, Hiroshi Ishii: Conjuring the Recorded Pianist: A New Medium to Experience Musical Performance. 7-12
- Lode Hoste, Beat Signer: Expressive Control of Indirect Augmented Reality During Live Music Performances. 13-18
- Rébecca Kleinberger: PAMDI Music Box: Primarily Analogico-Mechanical, Digitally Iterated Music Box. 19-20
- Matan Ben-Asher, Colby Leider: Toward an Emotionally Intelligent Piano: Real-Time Emotion Detection and Performer Feedback via Kinesthetic Sensing in Piano Performance. 21-24
- Simon Lui: A Compact Spectrum-Assisted Human Beatboxing Reinforcement Learning Tool On Smartphone. 25-28
- Colin Honigman, Andrew C. Walton, Ajay Kapur: The Third Room: A 3D Virtual Music Framework. 29-34
- Koray Tahiroglu, Nuno N. Correia, Miguel Espada: PESI Extended System: In Space, On Body, with 3 Musicians. 35-40
- Gregory Burlet, Ichiro Fujinaga: Stompboxes: Kicking the Habit. 41-44
- Ajay Kapur, Dae Hong Kim, Raakhi Kapur, Kisoon Eom: New Interfaces for Traditional Korean Music and Dance. 45-48
- Takayuki Hamano, Tomasz M. Rutkowski, Hiroko Terasawa, Kazuo Okanoya, Kiyoshi Furukawa: Generating an Integrated Musical Expression with a Brain-Computer Interface. 49-54
- Jan C. Schacher: Hybrid Musicianship - Teaching Gestural Interaction with Traditional and Digital Instruments. 55-60
- Alessandro Altavilla, Baptiste Caramiaux, Atau Tanaka: Towards Gestural Sonic Affordances. 61-64
- Gibeom Park, Kyogu Lee: Sound Spray - can-shaped sound effect device. 65-68
- Russell Eric Dobda: Applied and Proposed Installations with Silent Disco Headphones for Multi-Elemental Creative Expression. 69-72
- Brennon Bortz, Aki Ishida, Ivica Ico Bukvic, R. Benjamin Knapp: Lantern Field: Exploring Participatory Design of a Communal, Spatially Responsive Installation. 73-78
- Edmar Soria, Roberto Morales-Manzanares: Multidimensional sound spatialization by means of chaotic dynamical systems. 79-83
- Will W. W. Tang, Stephen C. F. Chan, Grace Ngai, Hong Va Leong: Computer Assisted Melo-rhythmic Generation of Traditional Chinese Music from Ink Brush Calligraphy. 84-89
- Laurel Pardue, William Sebastian: Hand-Controller for Combined Tactile Control and Motion Tracking. 90-93
- Reid Oda, Adam Finkelstein, Rebecca Fiebrink: Towards Note-Level Prediction for Networked Music Performance. 94-97
- Thomas Walther, Damir Ismailovic, Bernd Brügge: Rocking the Keys with a Multi-Touch Interface. 98-101
- Charles Roberts, Angus G. Forbes, Tobias Höllerer: Enabling Multimodal Mobile Interfaces for Musical Performance. 102-105
- Aristotelis Hadjakos, Tobias Grosshauser: Motion and Synchronization Analysis of Musical Ensembles with the Kinect. 106-110
- Saebyul Park, Seong-Hoon Ban, Dae Ryong Hong, Woon Seung Yeo: Sound Surfing Network (SSN): Mobile Phone-based Sound Spatialization with Audience Collaboration. 111-114
- Erfan Abdi Dezfouli, Edwin van der Heide: Notesaaz: a new controller and performance idiom. 115-117
- Tomohiro Tokunaga, Michael J. Lyons: Enactive Mandala: Audio-visualizing Brain Waves. 118-119
- Yoonchang Han, Sejun Kwon, Kibeom Lee, Kyogu Lee: A Musical Performance Evaluation System for Beginner Musician based on Real-time Score Following. 120-121
- Xin Fan, Georg Essl: Air Violin: A Body-centric Style Musical Instrument. 122-123
- Jaeseong You, Red Wierenga: Remix_Dance 3: Improvisatory Sound Displacing on Touch Screen-Based Interface. 124-127
- Marco Donnarumma, Baptiste Caramiaux, Atau Tanaka: Muscular Interactions. Combining EMG and MMG sensing for musical practice. 128-131
- Andrew Johnston: Fluid Simulation as Full Body Audio-Visual Instrument. 132-135
- Yoon Chung Han, Byeong-jun Han, Matthew Wright: Digiti Sonus: Advanced Interactive Fingerprint Sonification Using Visual Feature Analysis. 136-141
- Ståle Andreas Skogstad: Filtering Motion Capture Data for Real-Time Applications. 142-147
- Sangbong Nam: Musical Poi (mPoi). 148-151
- Andrew P. McPherson: Portable Measurement and Mapping of Continuous Piano Gesture. 152-157
- Dalia El-Shimy, Jeremy R. Cooperstock: Reactive Environment for Network Music Performance. 158-163
- Florent Berthaut, Mark T. Marshall, Sriram Subramanian, Martin Hachet: Rouages: Revealing the Mechanisms of Digital Musical Instruments to the Audience. 164-169
- Chi-Hsia Lai, Till Bovermann: Audience Experience in Sound Performance. 170-173
- Abram Hindle: SWARMED: Captive Portals, Mobile Devices, and Audience Participation in Multi-User Music Performance. 174-179
- Steven Gelineck, Dan Overholt, Morten Büchert, Jesper Andersen: Towards an Interface for Music Mixing based on Smart Tangibles and Multitouch. 180-185
- Olivier Perrotin, Christophe d'Alessandro: Adaptive mapping for improved pitch accuracy on touch user interfaces. 186-189
- Jieun Oh, Ge Wang: LOLOL: Laugh Out Loud On Laptop. 190-195
- Alexander Refsum Jensenius: Kinectofon: Performing with Shapes in Planes. 196-197
- Toshihiro Kita, Naotoshi Osaka: Providing a feeling of other remote learners' presence in an online learning environment via realtime sonification of Moodle access log. 198-199
- Stefano Baldan, Amalia de Götzen, Stefania Serafin: Sonic Tennis: a rhythmic interaction game for mobile devices. 200-201
- Shoken Kaneko: A Function-Oriented Interface for Music Education and Musical Expressions: "the Sound Wheel". 202-205
- Dimitri Diakopoulos, Ajay Kapur: Netpixl: Towards a New Paradigm for Networked Application Development. 206-209
- Thomas Resch: note~ for Max - An extension for Max/MSP for Media Arts & music. 210-212
- Bridget Johnson, Ajay Kapur: Multi-Touch Interfaces for Phantom Source Positioning in Live Sound Diffusion. 213-216
- Kenneth W. K. Lo, Chi Kin Lau, Michael Xuelin Huang, Wai Wa Tang, Grace Ngai, Stephen C. F. Chan: Mobile DJ: a Tangible, Mobile Platform for Active and Collaborative Music Listening. 217-222
- Mayank Sanganeria, Kurt Werner: GrainProc: a real-time granular synthesis interface for live performance. 223-226
- Parag Kumar Mital, Mick Grierson: Mining Unlabeled Electronic Music Databases through 3D Interactive Visualization of Latent Component Relationships. 227-232
- Dae Ryong Hong, Woon Seung Yeo: Laptap: Laptop Computer as a Musical Instrument using Audio Feedback. 233-236
- Danielle Bragg, Rebecca Fiebrink: Synchronous Data Flow Modeling for DMIs. 237-242
- Mark Cerqueira, Spencer Salazar, Ge Wang: SoundCraft: Transducing StarCraft 2. 243-247
- Yuan-Yi Fan, F. Myles Sciotto: BioSync: An Informed Participatory Interface for Audience Dynamics and Audiovisual Content Co-creation using Mobile PPG and EEG. 248-251
- Qi Yang, Georg Essl: Visual Associations in Augmented Keyboard Performance. 252-255
- Miles Thorogood, Philippe Pasquier: Impress: A Machine Learning Approach to Soundscape Affect Classification for a Music Performance Environment. 256-260
- Hayami Tobise, Yoshinari Takegawa, Tsutomu Terada, Masahiko Tsukamoto: Construction of a System for Recognizing Touch of Strings for Guitar. 261-266
- Kameron R. Christopher, Jingyin He, Raakhi Kapur, Ajay Kapur: Kontrol: Hand Gesture Recognition for Music and Dance Interaction. 267-270
- Fumitaka Kikukawa, Sojiro Ishihara, Masato Soga, Hirokazu Taki: Development of A Learning Environment for Playing Erhu by Diagnosis and Advice regarding Finger Position on Strings. 271-276
- Steve Everett: Sonifying Chemical Evolution. 277-278
- Avneesh Sarwate, Rebecca Fiebrink: Variator: A Creativity Support Tool for Music Composition. 279-282
- Kazuhiro Jo: cutting record - a record without (or with) prior acoustic information. 283-286
- Shawn Greenlee: Graphic Waveshaping. 287-290
- Tae Hong Park, Oriol Nieto: Fortissimo: Force-Feedback for Mobile Devices. 291-294
- Alyssa M. Batula, Manu Colacot, David Grunberg, Youngmoo E. Kim: Using Audio and Haptic Feedback to Improve Pitched Percussive Instrument Performance in Humanoids. 295-300
- David John: Updating the Classifications of Mobile Music Projects. 301-306
- Jordan Hochenbaum, Ajay Kapur: Toward The Future Practice Room: Empowering Musical Pedagogy through Hyperinstruments. 307-312
- Charles Roberts, Graham Wakefield, Matthew Wright: The Web Browser As Synthesizer And Interface. 313-318
- Brett Park, David Gerhard: Rainboard and Musix: Building dynamic isomorphic interfaces. 319-324
- Edgar Berdahl, Spencer Salazar, Myles Borins: Embedded Networking and Hardware-Accelerated Graphics with Satellite CCRMA. 325-330
- Lionel Feugère, Christophe d'Alessandro: Digitartic: bi-manual gestural control of articulation in performative singing synthesis. 331-336
- Jim Tørresen, Yngve Hafting, Kristian Nymoen: A New Wi-Fi based Platform for Wireless Sensor Data Collection. 337-340
- Wolfgang Fohl, Malte Nogalski: A Gesture Control Interface for a Wave Field Synthesis System. 341-346
- Adrian Freed, John MacCallum, David Wessel: Agile Interface Development using OSC Expressions and Process Migration. 347-351
- Leonardo Jenkins, Shawn Trail, George Tzanetakis, Peter F. Driessen, Wyatt Page: An Easily Removable, wireless Optical Sensing System (EROSS) for the Trumpet. 352-357
- Anton L. Fuhrmann, Johannes Kretz, Peter Burwik: Multi Sensor Tracking for Live Sound Transformation. 358-362
- Laurel Pardue, Andrew P. McPherson: Near-Field Optical Reflective Sensing for Bow Tracking. 363-368
- Tom Mudd: Feeling for Sound: Mapping Sonic Data to Haptic Perceptions. 369-372
- Yoshihito Nakanishi, Seiichiro Matsumura, Chuichi Arakawa: POWDER BOX: An Interactive Device with Sensor Based Replaceable Interface For Musical Session. 373-376
- Charles Martin: Performing with a Mobile Computer System for Vibraphone. 377-380
- Alex McLean, Eunjoo Shin, Kia Ng: Paralinguistic Microphone. 381-384
- Daniel Bisig, Sébastien Schiesser: Coral - a Physical and Haptic Extension of a Swarm Simulation. 385-388
- Jackie Chui, Yi Tang, Mubarak Marafa, Samson Young: SoloTouch: A Capacitive Touch Controller with Lick-based Note Selector. 389-393
- Ulysse Rosselet, Alain Renaud: Jam On: a new interface for web-based collective music performance. 394-399
- Chad McKinney, Nick Collins: An Interactive 3D Network Music Space. 400-405
- Anders-Petter Andersson, Birgitta Cappelen: Designing Empowering Vocal and Tangible Interaction. 406-412
- Mick Grierson, Chris Kiefer: NoiseBear: A Malleable Wireless Controller Designed In Participation with Disabled Children. 413-416
- Jeffrey J. Scott, Mickey Moorhead, Justin Chapman, Ryan Schwabe, Youngmoo E. Kim: Personalized Song Interaction Using a Multi Touch Interface. 417-420
- Sam Tarakajian, David Zicarelli, Joshua Clayton: Mira: Liveness in iPad Controllers for Max/MSP. 421-426
- Taehun Kim, Stefan Weinzierl: Modelling Gestures in Music Performance with Statistical Latent-State Models. 427-430
- Qian Liu, Yoon Chung Han, JoAnn Kuchera-Morin, Matthew Wright: Cloud Bridge: a Data-driven Immersive Audio-Visual Software Interface. 431-436
- Michael Everman, Colby Leider: Toward DMI Evaluation Using Crowd-Sourced Tagging Techniques. 437-440
- Adrian Freed, John MacCallum, Sam Mansfield: "Old" is the New "New": a Fingerboard Case Study in Recrudescence as a NIME Development Strategy. 441-445
- Robert Hamilton: Sonifying Game-Space Choreographies With UDKOSC. 446-449
- Sang Won Lee, Jason Freeman: echobo : Audience Participation Using The Mobile Music Instrument. 450-455
- Stefano Trento, Stefania Serafin: Flag beat: a novel interface for rhythmic musical expression for kids. 456-459
- Ryan McGee: VOSIS: a Multi-touch Image Sonification Interface. 460-463
- Romain Michon, Myles Borins, David Meisenholder: The Black Box. 464-465
- Johnty Wang, Nicolas D'Alessandro, Aura Pon, Sidney S. Fels: PENny: An Extremely Low-Cost Pressure-Sensitive Stylus for Existing Capacitive Touchscreens. 466-468
- Antonius Wiriadjaja: Gamelan Sampul: Laptop Sleeve Gamelan. 469-470
- Benjamin Taylor, Jesse T. Allison: Plum St: Live Digital Storytelling with Remote Browsers. 477-478
- Tobias Grosshauser, Gerhard Tröster: Finger Position and Pressure Sensing Techniques for String and Keyboard Instruments. 479-484
- Adam Place, Liam Lacey, Thomas Mitchell: AlphaSphere. 491-492
- Sang Won Lee, Georg Essl: Live Coding The Mobile Music Instrument. 493-498
- Hongchan Choi, Jonathan Berger: WAAX: Web Audio API eXtension. 499-502
- KatieAnna Wolf, Rebecca Fiebrink: SonNet: A Code Interface for Sonifying Computer Network Data. 503-506
- Stefano Fasciani, Lonce Wyse: A Self-Organizing Gesture Map for a Voice-Controlled Instrument Interface. 507-512
- Baptiste Caramiaux, Atau Tanaka: Machine Learning of Musical Gestures. 513-518
- Edward Zhang: KIB: Simplifying Gestural Instrument Creation Using Widgets. 519-524
- Niklas Klügel, Georg Groh: Towards Mapping Timbre to Emotional Affect. 525-530
- Ohad Fried, Rebecca Fiebrink: Cross-modal Sound Mapping Using Deep Learning. 531-534
- Jan C. Schacher: The Quarterstaff, a Gestural Sensor Instrument. 535-540
- Sam Ferguson, Aengus Martin, Andrew Johnston: A corpus-based method for controlling guitar feedback. 541-546
- Maria Astrinaki, Nicolas D'Alessandro, Loïc Reboursière, Alexis Moinet, Thierry Dutoit: MAGE 2.0: New Features and its Application in the Development of a Talking Guitar. 547-550
- Jerônimo Barbosa, Filipe Calegario, Veronica Teichrieb, Geber L. Ramalho, Giordano Cabral: A Drawing-Based Digital Music Instrument. 551-556
- Jim W. Murphy, James McVay, Ajay Kapur, Dale A. Carnegie: Designing and Building Expressive Robotic Guitars. 557-562