Combining event and signal processing in the MAX graphical programming environment

M. Puckette - Computer Music Journal, 1991 - JSTOR
MAX (Puckette 1988; Opcode 1990) is a graphical programming environment for developing real-time musical software applications. First written for the Apple Macintosh computer, it has been ported to the NeXT computer as part of the IRCAM Music Workstation (IMW) project (Lindemann et al. 1991). From its earliest conception, MAX was intended as a unified environment for describing both control and signal flow. Historically it has developed as a MIDI (i.e., control) program, primarily because the 4X (Favreau et al. 1986), IRCAM's earlier signal processing engine, could only communicate with the Macintosh over a serial MIDI connection. The IMW offers the opportunity to join MAX more intimately with a number-crunching engine capable of doing high-quality audio synthesis and processing in real time. The DSP coprocessor boards now available from various manufacturers for the Macintosh, while many times less powerful than the IMW, offer a similar possibility. This paper describes how MAX has been extended on the NeXT to perform signal as well as control computations. Since MAX as a control environment has been described elsewhere, we offer only an outline of its control aspect here as background for the description of its signal processing extensions.

The main purpose of making electronic music production run in real time is so that a musician can exercise live control over the music in performance. The problem of defining that control is a much harder one than that of defining the signal processing network that will ultimately generate the samples. The sample generation problem has historically been considered difficult simply because of its computational requirements. Today, a real-time programmable audio synthesis and processing engine can be bought at a price that researchers, and even some musicians, can afford to pay. It is therefore not surprising that many systems are now being proposed for graphical signal network editing; recent examples are described by Bate (1990), Minnick (1990), and Helmuth (1990). But the control problem, that of making the signal network respond in an "instrument-like" way to live human control, is not made appreciably easier by the availability of faster hardware. Today, the challenge for a signal processing network editor is to open itself up to a wide range of control possibilities.

To take complete control of all the possibilities of some kind of signal processing "patch," or network, it may be necessary to specify independently where the control is coming from: the basic pitch and tempo material, timbral changes, pitch articulation, whatever. These should be controllable physically, sequentially, or algorithmically; if they are to be controlled algorithmically, the inputs to the algorithm should themselves be controllable in any way. The more a given situation relies on unusual synthesis methods or input devices, the more acutely we need to be able to specify exactly what will control what, and how.
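To make the control/signal division concrete, the following is a minimal sketch in C, not taken from MAX itself: a block-based sine oscillator whose samples are computed at signal rate, while its frequency is written by control-rate events that could equally come from a physical device, a stored sequence, or an algorithm. All names (osc_t, osc_set_freq, osc_perform) and the block size are illustrative assumptions, not MAX's actual implementation or API.

    #include <math.h>
    #include <stdio.h>

    #define SR 44100.0   /* sample rate in Hz (illustrative) */
    #define BLOCK 64     /* signal vector size; DSP runs block by block */

    /* Signal-rate unit: a sine oscillator (hypothetical, not MAX's API). */
    typedef struct {
        double phase;    /* current phase, in cycles (0..1) */
        double freq;     /* frequency in Hz, written at control rate */
    } osc_t;

    /* Control-rate update: the "event" side of the patch. The new value
       could originate from a MIDI note, a sequencer, or an algorithm. */
    static void osc_set_freq(osc_t *o, double freq) {
        o->freq = freq;
    }

    /* Signal-rate computation: the "signal" side of the patch. Fills one
       block of samples; control parameters are only read here. */
    static void osc_perform(osc_t *o, float *out, int n) {
        double incr = o->freq / SR;
        for (int i = 0; i < n; i++) {
            out[i] = (float)sin(2.0 * 3.14159265358979323846 * o->phase);
            o->phase += incr;
            if (o->phase >= 1.0)
                o->phase -= 1.0;
        }
    }

    int main(void) {
        osc_t osc = { 0.0, 440.0 };
        float block[BLOCK];

        /* Here the control source is a fixed sequence; physically or
           algorithmically generated events would arrive the same way,
           between signal blocks. */
        const double events[] = { 440.0, 660.0, 880.0 };
        for (int e = 0; e < 3; e++) {
            osc_set_freq(&osc, events[e]);
            osc_perform(&osc, block, BLOCK);
            printf("block %d: freq %g Hz, first sample %.4f\n",
                   e, osc.freq, block[0]);
        }
        return 0;
    }

The point of the split is that osc_perform is indifferent to where its control values come from; any control source can drive the same signal network without changing it, which is the independence of control and signal specification argued for above.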