
Open Sound Control

Open Sound Control (OSC) is a protocol for networking sound synthesizers, computers, and other multimedia devices for purposes such as musical performance or show control. OSC's advantages include interoperability, accuracy, flexibility and enhanced organization and documentation.[1] Its disadvantages include inefficient coding of information, increased load on embedded processors,[2] and lack of standardized messages/interoperability.[3][4][5] The first specification was released in March 2002.

Motivation

OSC is a content format developed at CNMAT by Adrian Freed and Matt Wright, comparable to XML, WDDX, or JSON.[6] It was originally intended for sharing music performance data (gestures, parameters and note sequences) between musical instruments (especially electronic musical instruments such as synthesizers), computers, and other multimedia devices. OSC is sometimes used as an alternative to the 1983 MIDI standard when higher resolution and a richer parameter space are desired. OSC messages are transported across the internet and within local subnets using UDP/IP and Ethernet. Between gestural controllers, OSC messages are usually transmitted over the serial endpoints of USB, wrapped in the SLIP protocol.[citation needed]
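
As an illustration of the transport layer, the following is a minimal sketch of sending a single, pre-encoded OSC packet over UDP in Python. The destination address and port are hypothetical, since OSC does not define a standard port; the byte layout of the packet is explained in the Design section below.

```python
import socket

# A pre-encoded OSC message (see the Design section for the byte layout):
# address "/oscillator/4/frequency", type tag ",f", one float32 argument (440.0).
packet = (
    b"/oscillator/4/frequency\x00"  # address pattern, null-padded to a 4-byte boundary
    b",f\x00\x00"                   # type tag string, null-padded to a 4-byte boundary
    b"\x43\xdc\x00\x00"             # 440.0 as a big-endian IEEE 754 float32
)

# OSC does not mandate a port number; 127.0.0.1:9000 is only an example destination.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(packet, ("127.0.0.1", 9000))
sock.close()
```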

Features

OSC's main features, compared to MIDI, include:[1]

  • Open-ended, dynamic, URI-style symbolic naming scheme
  • Symbolic and high-resolution numeric data
  • Pattern matching language to specify multiple recipients of a single message (sketched in the example after this list)
  • High resolution time tags
  • "Bundles" of messages whose effects must occur simultaneously

Applications

There are dozens of OSC applications, including real-time sound and media processing environments, web interactivity tools, software synthesizers, programming languages and hardware devices. OSC has achieved wide use in fields including musical expression, robotics, video performance interfaces, distributed music systems and inter-process communication.

The TUIO community standard for tangible interfaces such as multitouch is built on top of OSC. Similarly, the GDIF system for representing gestures integrates OSC.

OSC is used extensively in experimental musical controllers, and has been built into several open source and commercial products.

The Open Sound World (OSW) music programming language is designed around OSC messaging.[7]

OSC is at the heart of the DSSI plugin API, an evolution of the LADSPA API, where it allows a plugin's graphical interface to communicate with the plugin core by sending messages through the plugin host. LADSPA and DSSI are APIs dedicated to audio effects and synthesizers.

In 2007, a standardized namespace within OSC called SYN, for communication between controllers, synthesizers and hosts, was proposed.

OSC implementations are found in numerous notable software applications and hardware devices.

Design

OSC messages consist of an address pattern (such as /oscillator/4/frequency), a type tag string (such as ,fi for a float32 argument followed by an int32 argument), and the arguments themselves (which may include a time tag).[8] Address patterns form a hierarchical name space, reminiscent of a Unix filesystem path or a URL, and refer to "Methods" inside the server, which are invoked with the attached arguments. Type tag strings are a compact string representation of the argument types. Arguments are represented in binary form with four-byte alignment. The core types are 32-bit integers (int32), 32-bit IEEE 754 floating-point numbers (float32), null-terminated ASCII strings padded to a four-byte boundary (OSC-string), and arbitrary binary data preceded by its size (OSC-blob).

An example message given in the spec (with null padding bytes represented by ␀) is /oscillator/4/frequency␀,f␀␀, followed by the four-byte float32 representation of 440.0, 0x43dc0000.[9]
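
A minimal sketch of how such a message could be assembled in Python follows; the helper functions are illustrative and not part of any particular OSC library.

```python
import struct

def osc_string(s: str) -> bytes:
    """Null-terminate a string and pad it to a multiple of four bytes."""
    b = s.encode("ascii") + b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address: str, *args: float) -> bytes:
    """Encode an OSC message whose arguments are all float32 values."""
    type_tags = "," + "f" * len(args)
    payload = b"".join(struct.pack(">f", a) for a in args)  # big-endian float32
    return osc_string(address) + osc_string(type_tags) + payload

msg = osc_message("/oscillator/4/frequency", 440.0)
print(msg.hex())  # ends in 43dc0000, the float32 encoding of 440.0 from the example above
```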

Messages may be combined into bundles, which themselves may be combined into bundles, etc. Each bundle contains a timestamp, which determines whether the server should respond immediately or at some point in the future.[8]
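
A bundle could be encoded along the following lines, reusing the osc_string and osc_message helpers from the sketch above. The time tag is a 64-bit NTP-style value; by convention, the value 1 means "execute immediately".

```python
import struct

def osc_bundle(time_tag: int, *elements: bytes) -> bytes:
    """Encode an OSC bundle: the string '#bundle', a 64-bit time tag,
    then each element (message or nested bundle) prefixed with its int32 size."""
    out = osc_string("#bundle") + struct.pack(">Q", time_tag)
    for element in elements:
        out += struct.pack(">i", len(element)) + element
    return out

# Two messages whose effects should take place together, "immediately" (time tag 1).
bundle = osc_bundle(
    1,
    osc_message("/oscillator/4/frequency", 440.0),
    osc_message("/oscillator/4/amplitude", 0.5),
)
```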

Applications commonly employ extensions to this core set. More recently, some of these extensions, such as a compact Boolean type, were integrated into the required core types of OSC 1.1.

The advantages of OSC over MIDI are primarily internet connectivity, higher data type resolution, and the comparative ease of specifying a symbolic path, as opposed to specifying all connections as seven-bit numbers with seven-bit or fourteen-bit data types.[8] However, this human readability has the disadvantage of being less efficient to transmit and more difficult for embedded firmware to parse.[2]

The spec does not define any particular OSC Methods or OSC Containers. All messages are implementation-defined and vary from server to server.

References

  1. ^ a b "Introduction to OSC". opensoundcontrol.org. 7 April 2021. Retrieved 11 September 2021.
  2. ^ a b Fraietta, Angelo (2008). "Open Sound Control: Constraints and Limitations". doi:10.5281/zenodo.1179537. S2CID 5690441.
  3. ^ "Home · fabb/SynOSCopy Wiki". GitHub. Retrieved 2022-12-31. one of the reasons OSC has not replaced MIDI yet is that there is no connect-and-play … There is no standard namespace in OSC for interfacing e.g. a synth
  4. ^ Supper, Ben (October 24, 2012). "We hate MIDI. We love MIDI". Focusrite Development. Retrieved 2023-01-01. OSC suffers from a superset of this problem: it's anarchy, and deliberately so. The owners of the specification have been so eager to avoid imposing constraints upon it that it has become increasingly difficult for hardware to cope with it. … More severely, there is an interoperability problem. OSC lacks a defined namespace for even the most common musical exchanges, to the extent that one cannot use it to send Middle C from a sequencer to a synthesiser in a standardised manner
  5. ^ "OSC-Namespace and OSC-State: Schemata for Describing the Namespace and State of OSC-Enabled Systems" (PDF). OSC also introduces new obstacles. First, since there is no fixed set of messages, each participating server needs to know what messages it can send to the servers it intends to communicate with. Currently the OSC standard does not provide for a means of programmatically discovering all messages a server responds to
  6. ^ "OpenSoundControl | CNMAT". cnmat.berkeley.edu. Retrieved 22 December 2019.
  7. ^ "OSW Manual OpenSound Control (OSC)". osw.sourceforge.net. Retrieved 22 December 2019.
  8. ^ a b c Wright, Matt (March 26, 2002). "The Open Sound Control 1.0 Specification". opensoundcontrol.org. Retrieved 22 December 2019.
  9. ^ Wright, Matt (March 29, 2002). "Examples Supporting the OpenSoundControl 1.0 Spec". opensoundcontrol.stanford.edu. Retrieved 2023-01-01.