20th NIME 2020: Birmingham, UK
20th International Conference on New Interfaces for Musical Expression, NIME 2020, Birmingham, UK, July 25-27, 2020. nime.org 2020
- Ruolun Weng: Interactive Mobile Musical Application using faust2smartphone. 1-4
- John Sullivan, Julian Vanasse, Catherine Guastavino, Marcelo M. Wanderley: Reinventing the Noisebox: Designing Embedded Instruments for Active Musicians. 5-10
- Advait Sarkar, Henry Mattinson: Excello: exploring spreadsheets for music composition. 11-16
- Alberto Boem, Giovanni Maria Troiano, Giacomo Lepri, Victor Zappi: Non-Rigid Musical Interfaces: Exploring Practices, Takes, and Future Perspective. 17-22
- Tim Shaw, John Bowers: Ambulation: Exploring Listening Technologies for an Extended Sound Walking Practice. 23-28
- Michael J. Krzyzaniak: Words to Music Synthesis. 29-34
- Marcel Ehrhardt, Max Neupert, Clemens Wegener: Piezoelectric strings as a musical interface. 35-36
- Jean Chu, Jaewon Choi: Reinterpretation of Pottery as a Musical Interface. 37-38
- Charles Patrick Martin, Zeruo Liu, Yichen Wang, Wennan He, Henry Gardner: Sonic Sculpture: Activating Engagement with Head-Mounted Augmented Reality. 39-42
- Rohan Proctor, Charles Patrick Martin: A Laptop Ensemble Performance System using Recurrent Neural Networks. 43-48
- Darrell Gibson, Richard Polfreman: Star Interpolator - A Novel Visualization Paradigm for Graphical Interpolators. 49-54
- Anna Xambó, Gerard Roma: Performing Audiences: Composition Strategies for Network Music using Mobile Phones. 55-60
- Samuel J. Hunt, Tom Mitchell, Chris Nash: Composing computer generated music, an observational study using IGME: the Interactive Generative Music Environment. 61-66
- Olivier Capra, Florent Berthaut, Laurent Grisoni: All You Need Is LOD: Levels of Detail in Visual Augmentations for the Audience. 67-72
- Johnty Wang, Eduardo Meneses, Marcelo M. Wanderley: The Scalability of WiFi for Mobile Embedded Sensor Interfaces. 73-76
- Florent Berthaut, Luke Dahl: Adapting & Openness: Dynamics of Collaboration Interfaces for Heterogeneous Digital Orchestras. 77-82
- Andreas Förster, Christina Komesker, Norbert Schnell: SnoeSky and SonicDive - Design and Evaluation of Two Accessible Digital Musical Instruments for a SEN School. 83-88
- Robert Pritchard, Ian Lavery: Inexpensive Colour Tracking to Overcome Performer ID Loss. 89-92
- Kiyu Nishida, Kazuhiro Jo: Modules for analog synthesizers using Aloe vera biomemristor. 93-96
- Giulio Moro, Andrew P. McPherson: A platform for low-latency continuous keyboard sensing and sound generation. 97-102
- Andrea Guidi, Fabio Morreale, Andrew P. McPherson: Design for auditory imagery: altering instruments to explore performer fluency. 103-108
- Raul Masu, Paulo Bala, Muhammad Ahmad, Nuno N. Correia, Valentina Nisi, Nuno Nunes, Teresa Romão: VR Open Scores: Scores as Inspiration for VR Scenarios. 109-114
- Amble H. C. Skuse, Shelly Knotts: Creating an Online Ensemble for Home Based Disabled Musicians: Disabled Access and Universal Design - why disabled people must be at the heart of developing technology. 115-120
- Anil Çamci, Matias Vilaplana, Ruth Wang: Exploring the Affordances of VR for Musical Interaction Design with VIMEs. 121-126
- Anil Çamci, Aaron Willette, Nachiketa Gargi, Eugene Kim, Julia Xu, Tanya Lai: Cross-platform and Cross-reality Design of Immersive Sonic Environments. 127-130
- Marius Schebella, Gertrud Fischbacher, Matthew Mosher: Silver: A Textile Wireframe Interface for the Interactive Sound Installation Idiosynkrasia. 131-132
- Ning Yang, Richard Savery, Raghavasimhan Sankaranarayanan, Lisa Zahray, Gil Weinberg: Mechatronics-Driven Musical Expressivity for Robotic Percussionists. 133-138
- Paul Dunham: Click::RAND. A Minimalist Sound Sculpture. 139-142
- Enrique Tomás: A Playful Approach to Teaching NIME: Pedagogical Methods from a Practice-Based Perspective. 143-148
- Quinn Jarvis-Holland, Crystal Quartez, Francisco Botello, Nathan Gammill: EXPANDING ACCESS TO MUSIC TECHNOLOGY - Rapid Prototyping Accessible Instrument Solutions For Musicians With Intellectual Disabilities. 149-153
- Jack Atherton, Ge Wang: Curating Perspectives: Incorporating Virtual Reality into Laptop Orchestra Performance. 154-159
- Fabio Morreale, S. M. Astrid Bin, Andrew P. McPherson, Paul Stapleton, Marcelo M. Wanderley: A NIME Of The Times: Developing an Outward-Looking Political Agenda For This Community. 160-165
- Chantelle Ko, Lora Oehlberg: Touch Responsive Augmented Violin Interface System II: Integrating Sensors into a 3D Printed Fingerboard. 166-171
- Nicolas E. Gold, Chongyang Wang, Temitayo A. Olugbade, Nadia Berthouze, Amanda C. de C. Williams: P(l)aying Attention: Multi-modal, multi-temporal music control. 172-175
- Doga Cavdir, Ge Wang: Felt Sound: A Shared Musical Experience for the Deaf and Hard of Hearing. 176-181
- Sasha Leitman: Sound Based Sensors for NIMEs. 182-187
- Yuma Ikawa, Akihiro Matsuura: Playful Audio-Visual Interaction with Spheroids. 188-189
- Sihwa Park: Collaborative Mobile Instruments in a Shared AR Space: a Case of ARLooper. 190-195
- Diemo Schwarz, Abby Wanyu Liu, Frédéric Bevilacqua: A Survey on the Use of 2D Touch Interfaces for Musical Expression. 196-201
- Harri Renney, Tom Mitchell, Benedict R. Gaster: There and Back Again: The Practicality of GPU Accelerated Digital Audio. 202-207
- Gus Xia, Daniel Chin, Yian Zhang, Tianyu Zhang, Junbo Zhao: Interactive Rainbow Score: A Visual-centered Multimodal Flute Tutoring System. 208-213
- Nicola Davanzo, Federico Avanzini: A Dimension Space for the Evaluation of Accessible Digital Musical Instruments. 214-220
- Adam Pultz Melbye, Halldór Úlfarsson: Sculpting the behaviour of the Feedback-Actuated Augmented Bass: Design strategies for subtle manipulations of string feedback using simple adaptive algorithms. 221-226
- Gwendal Le Vaillant, Thierry Dutoit, Rudi Giot: Analytic vs. holistic approaches for the live search of sound presets using graphical interpolation. 227-232
- Chase Mitchusson: Indeterminate Sample Sequencing in Virtual Reality. 233-236
- Rebecca Fiebrink, Laetitia Sonami: Reflections on Eight Years of Instrument Creation with Machine Learning. 237-242
- Alex Michael Lucas, Miguel Ortiz, Franziska Schroeder: The Longevity of Bespoke, Accessible Music Technology: A Case for Community. 243-248
- Ivica Ico Bukvic, Disha Sardana, Woohun Joo: New Interfaces for Spatial Musical Expression. 249-254
- Mark Durham: Inhabiting the Instrument. 255-258
- Chris Nash: Crowd-driven Music: Interactive and Generative Approaches using Machine Vision and Manhattan. 259-264
- Alex McLean: Algorithmic Pattern. 265-270
- Louis McCallum, Mick Grierson: Supporting Interactive Machine Learning Approaches to Building Musical Instruments in the Browser. 271-272
- Mathias Kirkegaard, Mathias Bredholt, Christian Frisson, Marcelo M. Wanderley: TorqueTuner: A self contained module for designing rotary haptic force feedback for digital musical instruments. 273-278
- Corey J. Ford, Chris Nash: An Iterative Design 'by proxy' Method for Developing Educational Music Interfaces. 279-284
- Filipe Calegario, Marcelo M. Wanderley, João Tragtenberg, Eduardo Meneses, Johnty Wang, John Sullivan, Ivan Franco, Mathias Kirkegaard, Mathias Bredholt, Josh Rohs: Probatio 1.0: collaborative development of a toolkit for functional DMI prototypes. 285-290
- Travis J. West, Marcelo M. Wanderley, Baptiste Caramiaux: Making Mappings: Examining the Design Process. 291-296
- Michael Sidler, Matthew C. Bisson, Jordan Grotz, Scott Barton: Parthenope: A Robotic Musical Siren. 297-300
- Steven T. Kemper: Tremolo-Harp: A Vibration-Motor Actuated Robotic String Instrument. 301-304
- Atsuya Kobayashi, Reo Anzai, Nao Tokui: ExSampling: a system for the real-time ensemble performance of field-recorded environmental sounds. 305-308
- Juan Pablo Yepez Placencia, Jim W. Murphy, Dale A. Carnegie: Designing an Expressive Pitch Shifting Mechanism for Mechatronic Chordophones. 309-314
- Alon Ilsar, Matthew Hughes, Andrew Johnston: NIME or Mime: A Sound-First Approach to Developing an Audio-Visual Gestural Instrument. 315-320
- Matthew Hughes, Andrew Johnston: URack: Audio-visual Composition and Performance using Unity and VCV Rack. 321-322
- Irmandy Wicaksono, Joseph A. Paradiso: KnittedKeyboard: Digital Knitting of Electronic Textile Musical Controllers. 323-326
- Olivier Capra, Florent Berthaut, Laurent Grisoni: A Taxonomy of Spectator Experience Augmentation Techniques. 327-330
- Sourya Sen, Koray Tahiroglu, Julia Lohmann: Sounding Brush: A Tablet based Musical Instrument for Drawing and Mark Making. 331-336
- Koray Tahiroglu, Miranda Kastemaa, Oskar Koli: Al-terity: Non-Rigid Musical Instrument with Artificial Intelligence Applied to Real-Time Audio Synthesis. 337-342
- Chris Kiefer, Dan Overholt, Alice Eldridge: Shaping the behaviour of feedback instruments with complexity-controlled gain dynamics. 343-348
- Duncan A. H. Williams: MINDMIX: Mapping of brain activity to congruent audio mixing features. 349-352
- Marcel O. DeSmith, Andrew Piepenbrink, Ajay Kapur: SQUISHBOI: A Multidimensional Controller for Complex Musical Interactions using Machine Learning. 353-356
- Nick Bryan-Kinns, Zijin Li, Xiaohua Sun: On Digital Platforms and AI for Music in the UK and China. 357-360
- Anders Eskildsen, Mads Walther-Hansen: Force dynamics as a design framework for mid-air musical interfaces. 361-366
- Erik Nyström: Intra-Actions: Experiments with Velocity and Position in Continuous Controllers. 367-368
- James Leonard, Andrea Giomi: Towards an Interactive Model-Based Sonification of Hand Gesture for Dance Performance. 369-374
- Rômulo Vieira, Flávio Luiz Schiavoni: Fliperama: An affordable Arduino based MIDI Controller. 375-379
- Alex MacLean: Immersive Dreams: A Shared VR Experience. 380-381
- Nick Bryan-Kinns, Zijin Li: ReImagining: Cross-cultural Co-Creation of a Chinese Traditional Musical Instrument with Digital Technologies. 382-387
- Konstantinos Vasilakos, Scott Wilson, Thomas McCauley, Tsun Winston Yeung, Emma Margetson, Milad Khosravi Mardakheh: Sonification of High Energy Physics Data Using Live Coding and Web Based Interfaces. 388-393
- Haruya Takase, Shun Shiramatsu: Support System for Improvisational Ensemble Based on Long Short-Term Memory Using Smartphone Sensor. 394-398
- Augoustinos Tsiros, Alessandro Palladini: Towards a Human-Centric Design Framework for AI Assisted Music Production. 399-404
- Matthew Rodger, Paul Stapleton, Maarten van Walstijn, Miguel Ortiz, Laurel Pardue: What Makes a Good Musical Instrument? A Matter of Processes, Ecologies and Specificities. 405-410
- Giovanni Santini: Augmented Piano in Augmented Reality. 411-415
- Tom Davis, Laura Reid: Taking Back Control: Taming the Feral Cello. 416-421
- Thibault Jaccard, Robert Lieck, Martin Rohrmeier: AutoScale: Automatic and Dynamic Scale Selection for Live Jazz Improvisation. 422-427
- Lauren Hayes, Adnan Marquez-Borbon: Nuanced and Interrelated Mediations and Exigencies (NIME): Addressing the Prevailing Political and Epistemological Crises. 428-433
- Andrew P. McPherson, Giacomo Lepri: Beholden to our tools: negotiating with technology while sketching digital instruments. 434-439
- Andrea Martelloni, Andrew P. McPherson, Mathieu Barthet: Percussive Fingerstyle Guitar through the Lens of NIME: an Interview Study. 440-445
- Robert H. Jack, Jacob Harrison, Andrew P. McPherson: Digital Musical Instruments as Research Products. 446-451
- Amit D. Patel, John Richards: Pop-up for Collaborative Music-making. 452-457
- Courtney N. Reed, Andrew P. McPherson: Surface Electromyography for Direct Vocal Control. 458-463
- Henrik von Coler, Steffen Lepa, Stefan Weinzierl: User-Defined Mappings for Spatial Sound Synthesis. 464-469
- Tiago Brizolara, Sylvie Gibet, Caroline Larboulette: Elemental: a Gesturally Controlled System to Perform Meteorological Sounds. 470-476
- Çagri Erdem, Alexander Refsum Jensenius: RAW: Exploring Control Structures for Muscle-based Interaction in Collective Improvisation. 477-482
- Travis C. MacDonald, James Alexander Hughes, Barry MacKenzie: SmartDrone: An Aurally Interactive Harmonic Drone. 483-488
- Juan Pablo Martinez-Avila, Vasiliki Tsaknaki, Pavel Karpashevich, Charles Windlin, Niklas Valenti, Kristina Höök, Andrew P. McPherson, Steve Benford: Soma Design for NIME. 489-494
- Laddy P. Cadavid: Knotting the memory//Encoding the Khipu_: Reuse of an ancient Andean device as a NIME. 495-498
- Shelly Knotts, Nick Collins: A survey on the uptake of Music AI Software. 499-504
- Scott Barton: Circularity in Rhythmic Representation and Composition. 505-508
- Thor Magnusson: Instrumental Investigations at Emute Lab. 509-513
- Satvik Venkatesh, Edward Braund, Eduardo R. Miranda: Composing Popular Music with Physarum polycephalum-based Memristors. 514-519
- Fede Camara Halac, Shadrick Addy: PathoSonic: Performing Sound In Virtual Reality Feature Space. 520-522
- Laurel Pardue, Miguel Ortiz, Maarten van Walstijn, Paul Stapleton, Matthew Rodger: Vodhrán: collaborative design for evolving a physical model and interface into a proto-instrument. 523-524
- Satvik Venkatesh, Edward Braund, Eduardo R. Miranda: Designing Brain-computer Interfaces for Sonic Expression. 525-530
- Duncan A. H. Williams, Bruno Fazenda, Victoria Williamson, György Fazekas: Biophysiologically synchronous computer generated music improves performance and reduces perceived effort in trail runners. 531-536
- Gilberto Bernardes: Interfacing Sounds: Hierarchical Audio-Content Morphologies for Creative Re-purposing in earGram 2.0. 537-542
- Joung Min Han, Yasuaki Kakehi: ParaSampling: A Musical Instrument with Handheld Tapehead Interfaces for Impromptu Recording and Playing on a Magnetic Tape. 543-544
- Giorgos Filandrianos, Natalia Kotsani, Edmund G. Dervakos, Giorgos Stamou, Vaios Amprazis, Panagiotis Kiourtzoglou: Brainwaves-driven Effects Automation in Musical Performance. 545-546
- Graham Wakefield, Michael Palumbo, Alexander Zonta: Affordances and Constraints of Modular Synthesis in Virtual Reality. 547-550
- Emmanouil Moraitis: Symbiosis: a biological taxonomy for modes of interaction in dance-music collaborations. 551-556
- Antonella Nonnis, Nick Bryan-Kinns: Όλοι: music making to scaffold social playful activities and self-regulation. 557-558
- Sara Sithi-Amnuai: Exploring Identity Through Design: A Focus on the Cultural Body Via Nami. 559-563
- Joe Wright: The Appropriation and Utility of Constrained ADMIs. 564-569
- Lia Mice, Andrew P. McPherson: From miming to NIMEing: the development of idiomatic gestural language on large scale DMIs. 570-575
- William Christopher Payne, Ann Paradiso, Shaun K. Kane: Cyclops: Designing an eye-controlled instrument for accessibility and flexible use. 576-580
- Adnan Marquez-Borbon: Collaborative Learning with Interactive Music Systems. 581-586
- Jens Vetter: WELLE - a web-based music environment for the blind. 587-590
- Margarida Pessoa, Cláudio Parauta, Pedro Luís, Isabela Corintha Almeida, Gilberto Bernardes: Examining Temporal Trends and Design Goals of Digital Music Instruments for Education in NIME: A Proposed Taxonomy. 591-595
- Laurel Pardue, Kuljit Bhamra, Graham England, Phil Eddershaw, Duncan Menzies: Demystifying tabla through the development of an electronic drum. 596-599
- Juan Sierra: SpeakerDrum. 600-604
- Matthew Caren, Romain Michon, Matthew Wright: The KeyWI: An Expressive and Accessible Electronic Wind Instrument. 605-608
- Pelle Juul Christensen, Dan Overholt, Stefania Serafin: The Daïs: A Haptically Enabled New Interface for Musical Expression for Controlling Physical Models for Sound Synthesis. 609-612
- João Wilbert, Don Derek Haddad, Hiroshi Ishii, Joseph A. Paradiso: Patch-corde: an expressive patch-cable for the modular synthesizer. 613-616
- Jirí Suchánek: SOIL CHOIR v.1.3 - soil moisture sonification installation. 617-618
- Marinos Koutsomichalis: Rough-hewn Hertzian Multimedia Instruments. 619-624
- Taylor J. Olsen: Animation, Sonification, and Fluid-Time: A Visual-Audioizer Prototype. 625-630
- Virginia de las Pozas: Semi-Automated Mappings for Object-Manipulating Gestural Control of Electronic Music. 631-634
- Christodoulos Benetatos, Joseph VanderStel, Zhiyao Duan: BachDuet: A Deep Learning System for Human-Machine Counterpoint Improvisation. 635-640