Journal of New Music Research, 2017

Vol. 46, No. 1, 54–73, http://dx.doi.org/10.1080/09298215.2016.1245345

A Visual Framework for Dynamic Mixed Music Notation

Grigore Burloiu1, Arshia Cont2 and Clement Poncelet2


1 University Politehnica Bucharest, Romania; 2 INRIA and Ircam (UMR STMS – CNRS/UPMC), France
(Received 27 April 2016; accepted 12 September 2016)

Correspondence: Grigore Burloiu, University Politehnica Bucharest, Romania. E-mail: gburloiu@gmail.com

© 2016 Informa UK Limited, trading as Taylor & Francis Group

Abstract

We present a visual notation framework for real-time, score-based computer music where human musicians play together with electronic processes, mediated by the Antescofo reactive software. This framework approaches the composition and performance of mixed music by displaying several perspectives on the score's contents. Our particular focus is on dynamic computer actions, whose parameters are calculated at runtime. For their visualisation, we introduce four models: an extended action view, a staff-based simulation trace, a tree-based hierarchical display of the score code and an out-of-time inspector panel. Each model is illustrated in code samples and case studies from actual scores. We argue the benefits of a multifaceted visual language for mixed music, and for the relevance of our proposed models towards reaching this goal.

Keywords: dynamic scores, visualisation, notation, mixed music, Antescofo

1. Introduction

We approach the issue of notation for the authoring and performance of mixed music, which consists of the pairing of human musicians with computer processes or electronic equipment, where each side influences (and potentially anticipates) the behaviour of the other. The term has been used throughout the history of electronic music, referring to different practices involving tape music, acousmatic music or live electronics (Collins, Kapralos, & Tessler, 2014).

In the 1990s, real-time sound processing gave birth to various communities and software environments like Pd (Puckette, 1997), Max (Cycling '74, 2016) and SuperCollider (McCartney, 1996), enabling interactive music situations between performers and computer processes on stage (Rowe, 1993). In parallel, computer-assisted composition tools have evolved to enrich data representation for composers interested in processing data offline to produce scores or orchestration, such as OpenMusic (Assayag, Rueda, Laurson, Agon, & Delerue, 1999) and Orchids (Nouno, Cont, Carpentier, & Harvey, 2009).

Composer Philippe Manoury has theorised a framework for mixed music (Manoury, 1990), introducing the concept of virtual scores as scenarios where the musical parameters are defined beforehand, but their sonic realisation is a function of live performance. One example is the authoring of music sequences in beat time, relative to the human performer's tempo; another is the employment of generative algorithms that depend on the analysis of an incoming signal (see Manoury (2013) for an analytical study).

Despite the wide acceptance of interactive music systems, several composers have provided insights into the insufficient musical considerations and the potential abuse of the term 'interactive' in such systems. Among these, we would like to cite the work of Marco Stroppa (Stroppa, 1999), Jean-Claude Risset (Risset, 1999) and Joel Chadabe (Chadabe, 1984). In his work, Stroppa asks the musical question of juxtaposing multiple scales of time or media during the composition phase, and their evaluation during performance. He further remarks on the poverty of musical expressivity in then state-of-the-art real-time computer music environments as opposed to computer-assisted composition systems or sound synthesis software. Risset takes this one step further, arguing that interactive music systems are less relevant for composition than for performance. Finally, Chadabe questions the very use of the term 'interaction' as opposed to 'reaction'. Effectively, many such systems can be seen as reactive systems (computers reacting to musicians' input), whereas interaction is a two-way process involving both specific computing and cognitive processes.

To address the above criticisms and to define the current state of the art in computational terms, we turn to the concept of dynamic mixed music, where the environment informs the computer actions during runtime. In this paradigm, computer music systems and their surrounding environments (including human performers) are integral parts of the same system, and there is a feedback loop between their behaviours.


A ubiquitous example of such dynamics is the act of collective music interpretation in all existing music cultures, where synchronisation strategies are at work between musicians to interpret the work in question. Computer music authorship extends this idea by defining processes for composed or improvised works, whose form and structure are deterministic on a large global scale but whose local values and behaviour depend mostly on the interaction between system components—including human performers and computers/electronics. The two-way interaction standard in Chadabe (1984) is always harder to certify when dealing with cognitive feedback between machine-generated action and the human musician, and many of the programming patterns might be strictly described as reactive computing. For this reason we argue that the computationally dynamic aspect of the electronics should be their defining trait.

A uniting factor in the diversity of approaches to mixed music can be found in the necessity for notation or transcription of the musical work itself. A survey on musical representations (Wiggins, Miranda, Smaill, & Harris, 1993) identified three major roles for notation: recording, analysis and generation. We would specifically add performance to this list. Despite advances in sound and music computing, there is as yet no fully integrated way for composers and musicians to describe their musical processes in notations that include both compositional and performative aspects of computer music, across the offline and real-time domains (Puckette, 2004), although steps in this direction can be seen in the OSSIA framework for the i-score system (Celerier et al., 2015).

This paper attempts to provide answers to the problem of musical notation for dynamic mixed music. The work presented here is the outcome of many years of musical practice, from composing to live performance of such pieces, in the context of the Antescofo (Cont, 2008) software used today in numerous new music creations and performances worldwide.1 We start with a brief survey of the current state of mixed music notation. In this context, we look at the Antescofo language and its basic visual components, before turning to the main focus of the paper: notational models for dynamic processes.

Note: throughout this paper, we use the word dynamic and its derivatives in the computer science sense, signifying variability from one realisation to another. We occasionally use dynamics as a shorthand for dynamic values or processes. Please do not confuse this with the musical sense of the word, which refers to the intensity of playing.

1.1 Mixed music notation at a glance

To facilitate the understanding of the field, we distinguish three categories of notational strategies for mixed music, before noting how composers might combine them to reach different goals.

1.1.1 Symbolic graphical notation

Early electroacoustic music scoring methods evolved in tandem with the expansion of notation in the mid-twentieth century towards alternative uses of text and graphics. Written instructions would specify, in varying degrees of detail, the performance conditions and the behaviours of the musicians. In addition, symbolic graphical representations beyond the traditional Western system could more suggestively describe shapes and contours of musical actions. While these methods can apply to purely acoustic music, the introduction of electronics came without a conventional performative or notational practice, and so opened up the space of possibilities for new symbolic graphical notation strategies.

Major works of this era include Karlheinz Stockhausen's Elektronische Studie I and II (1953–1954), the latter being the first published score of pure electronic music (Kurtz, 1992). These exemplified a workflow where the composer, usually aided by an assistant (such as G.M. Koenig at the WDR studio in Cologne), would transcode a manually notated symbolic score into electronic equipment manipulations or, later, computer instructions, to produce the sonic result. An instance of this paradigm, which continues today, is the collaboration of composer Pierre Boulez and computer music designer Andrew Gerzso for Anthemes II (1997), whose score is excerpted in Figure 1.

1.1.2 Graphics–computer integration

Before long, composers expressed their need to integrate the notation with the electronics, in order to reach a higher level of expressiveness and immediacy. Xenakis' UPIC (Xenakis, 1992) is the seminal example of such a bridging technology: his Mycenae-α (1978) was first notated on paper before being painstakingly traced into the UPIC. The impetus to enhance the visual feedback and control of integrated scoring led to systems such as the SSSP tools (Buxton, Patel, Reeves, & Baecker, 1979), which provided immediate access to several notation modes and material manipulation methods. Along this line, the advent of real-time audio processing brought about the integrated scores of today, underpinned by computer-based composition/performance environments. Leading examples in the field are the technical documentation databases at IRCAM's Sidney2 or the Pd Repertory Project,3 hosting self-contained software programs, or patches, which can be interpreted by anyone with access to the required equipment. These patches serve as both production interfaces and de facto notation, as knowledge of the programming environment enables one to 'read' them like a score. Since most real-time computer music environments lack a strong musical time-authoring component, sequencing is accomplished through tools such as the qlist object for Max and Pd (Winkler, 2001), the Bach notation library for Max (Agostini & Ghisi, 2012) and/or an electronics performer score such as the one for Anthemes II pictured in Figure 2.

1 An incomplete list is available at http://repmus.ircam.fr/antescofo/repertoire.
2 http://brahms.ircam.fr/sidney/.
3 http://msp.ucsd.edu/pdrp/latest/files/doc/.
Fig. 1. The first page of the Anthemes II score for violin and electronics, published by Universal Edition. The marked cues correspond to electronic action triggerings.

1.1.3 Dynamic notation

Finally, a third category is represented by dynamic scores—these are Manoury's virtual scores (Manoury, 1990), also known in the literature as interactive scores due to their connection to real-time performance conditions (Fober & Orlarey, 2012). Interpretations range from prescribed musical actions to 'guided improvisations', where the musicians are free to give the notation a personal reading (Clay & Freeman, 2010), or the electronic algorithms have a degree of nondeterminism. Here, composers are responsible for creating a dynamic roadmap whose acoustic realisation could be radically4 different from one performance to another.

A question arises: how are such dynamic interactions to be notated? While regular patches can already reach high levels of complexity, the problem of descriptiveness increases exponentially once time and decision-making are treated dynamically. The low-level approach is typified by the Pd software, which was designed to enable access to custom, non-prescriptive data structures for simple visual representation (Puckette, 2002). One step higher is an environment like INScore, which provides musically characteristic building blocks while retaining a high level of flexibility through its modular structure and OSC-based API (Fober & Orlarey, 2012). More structured solutions are provided by OSC5 sequencers with some dynamic attributes, such as IanniX (Coduys & Ferry, 2004) and i-score (Allombert, Desainte-Catherine, & Assayag, 2008). In particular, i-score uses a Hierarchical Time Stream Petri Nets (HTSPN)-based specification model (Desainte-Catherine & Allombert, 2005), enabling the visualisation of temporal relations and custom interaction points. The dynamic dimension is, however, fairly limited: an effort to enhance the model with conditional branching concluded that not all durations can be preserved in scores with conditionals or concurrent instances (Toro-Bermúdez, Desainte-Catherine, & Baltazar, 2010). By replacing the Petri Nets model with a synchronous reactive interpreter, Arias, Desainte-Catherine, Salvati, and Rueda (2014) achieved a more general dynamic behaviour, accompanied by a real-time i-score-like display of a dynamic score's performance. Still, this performance-oriented visualisation does not display the potential ramifications before the actual execution. This is generally the case with reactive, animated notation6: it does a good job of representing the current musical state, but does not offer a wider, out-of-time perspective of the piece. One significant development to the i-score framework enables conditional branching through a node-based formalism (Celerier et al., 2015). Meanwhile, more complex structures (loops, recursion, etc.) still remain out of reach.

Naturally, much contemporary music makes use of all three types of notation outlined above. Composers often mix notational strategies in an effort to reach a two-fold goal: a (fixed) blueprint of the piece, and a (dynamic) representation of the music's indeterminacies. On the one hand, a score should lend itself to analysis and archival; on the other, notation is a tool for composition and rehearsal, which in modern mixed music require a high degree of flexibility. But perhaps most importantly, the score serves as a guide to the musical performance. As such, the nature of the notation has a strong bearing on the relationship of the musician with the material, and on the sonic end result.

4 In terms of structure, duration, timbre, etc.
5 OpenSoundControl, a multimedia communication protocol: http://opensoundcontrol.org/.
6 In the literature, this kind of 'live' notation is sometimes called dynamic notation (Clay & Freeman, 2010), regardless of the underlying scenario being computationally dynamic or not. In this paper, we use the term 'dynamic' with regard to notation for the scoring of dynamic music in general. While the notation itself may be dynamic, this is not a necessary condition.

Fig. 2. Top: A portion of the Anthemes II Max/MSP patch from 2005. All electronic processes are triggered from a list of cues. Bottom: Excerpt from the first page of the Anthemes II computer music performer score. The marked cues correspond to interaction points with the dedicated patch.

1.2 Dynamic mixed music composition in Antescofo

In the realisation of Anthemes II shown in Figure 2, the temporal ordering of the audio processes is implicit in the stepwise evaluation of the score's data flow graph, based on the human operator's manual triggering of cues. But the audio processes' activation, their control and, most importantly, their interaction with respect to the physical world (the human violinist) are neither specified nor implemented.

The authoring of time and interaction of this type, and its handling and safety in real-time execution, is the goal of the Antescofo system and language (Cont, 2008; Echeveste, Cont, Giavitto, & Jacquemard, 2013), which couples a listening machine (Cont, 2010) and a reactive engine (Echeveste, Giavitto, & Cont, 2013) in order to dynamically perform the computer music part of a mixed score in time with live musicians. This highly expressive system is built with time safety in mind, supporting specific musical cases such as musician error handling and multiple tempi (Cont, Echeveste, Giavitto, & Jacquemard, 2012; Echeveste, Giavitto, & Cont, 2015). Actions can be triggered synchronously to an event e(t) detected by the listening machine, or scheduled relative to the detected musician's tempo or estimated speed ė(t). Finally, a real-time environment (Max/MSP, Pd or another OSC-enabled responsive program) receives the action commands and produces the desired output. The Antescofo runtime system's coordination of computing actions with real-time information obtained from physical events is outlined in Figure 3.
Fig. 3. Antescofo execution diagram.

Antescofo's timed reactive language (Cont, 2011) specifies both the expected events from the physical environment, such as polyphonic parts of human musicians (as a series of EVENT statements), and the computer processes that accompany them (as a series of ACTION statements). This paper touches on several aspects of the language; for a detailed specification please consult the Antescofo reference manual (Giavitto, Cont, Echeveste, & Members, 2015). The syntax is further described in Cont (2013), while a formal definition of the language is available in Echeveste et al. (2015).

To facilitate the authoring and performance of Antescofo scores, a dynamic visualisation system was conceived, with a view to realising a consistent workflow for the compositional and execution phases of mixed music. Ascograph (Coffy, Giavitto, & Cont, 2014) is the dedicated user interface that aims to bridge the three notational paradigms described in Section 1.1: symbolic, integrated and dynamic. We demonstrate the first two aspects with a brief study of Ascograph's basic notation model (Burloiu & Cont, 2015) in Section 1.3, before laying out different strategies to tackle the dynamic dimension.

1.3 The basic AscoGraph visualisation model

Since its inception, Ascograph's visual model has been centred around the actions view window, which is aligned to the instrumental piano roll by means of a common timeline. Electronic actions are either discrete (visualised by circles) or continuous (curves), but they are all strongly timed7 (Cont, 2010). Figure 4 displays the implementation of Anthèmes II (Section 1) from the Antescofo Composer Tutorial.8 This layout facilitates the authoring process by lining up all the elements according to musical time, which is independent of the physical (clock) time of performance. Thus, in cases where the score specifies a delay in terms of seconds, this is automatically converted to bars and beats (according to the local scored tempo) for visualisation.

7 Wang (2008) defines a strongly timed language as one in which there is a well-defined separation of synchronous logical time from real time. Similar to Wang's ChucK language, Antescofo also explicitly incorporates time as a fundamental element, allowing for the precise specification of synchronisation and anticipation of events and actions.
8 Available at http://forumnet.ircam.fr/products/antescofo/.

Generally, atomic actions in the Antescofo language have the following syntax: [<delay>] <receiver_name> <value>. An absent <delay> element is equivalent to zero delay, and the action is assumed to share the same logical instant as the preceding line.

Code listing 1 shows the starting note and action instructions of the score. After one beat of silence, the first violin note triggers the opening of the harmoniser process, by way of a nested group of commands. The resulting hierarchy is reflected in the green-highlighted section of Figure 4's action view: the first action group block on the left contains a white circle (representing two simultaneous messages) and a subgroup block, which in turn includes a circle (containing four messages to the harmoniser units). Note the absence of any time delay: all atomic actions mentioned above are launched in the same logical instant as the note detection.
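To make this syntax concrete, the following fragment sketches the shape of such an opening. It is our reconstruction, not a verbatim copy of code listing 1; durations, labels and receiver names are hypothetical:

    NOTE 0 1.0                ; one beat of silence (a rest event)
    NOTE D5 6.0               ; first violin note: its detection fires the actions below
        group harm_open
        {
            harm_out_db 0.0   ; two messages sharing the same logical
            harm_on 1         ; instant as the note detection (no delays)
            group harm_levels
            {
                hr1_p1 0.96   ; four messages to the harmoniser units,
                hr2_p1 0.93   ; again with zero delay
                hr3_p1 0.90
                hr4_p1 0.87
            }
        }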
Fig. 4. Ascograph visualisation for Anthèmes II (Section 1) from the Antescofo Composer Tutorial. Top: piano roll; Bottom: action view. The left-hand portion highlighted by a rectangle corresponds to Listing 1.

This visualisation model meets the first notational requirement we specified at the end of Section 1.1: it acts as a graphic map of the piece, which reveals itself through interaction (e.g. hovering the mouse over message circles lists their contents).

The visual framework presented in this paper is the outcome of a continuous workshopping cycle involving the mixed music composer community in and around IRCAM. While some features are active in the current public version of Ascograph,9 others are in the design, development or testing phases. This is apparent in the proof-of-concept images used throughout, which are a blend of the publicly available Ascograph visualisation and mock-ups of the features under construction. We can expect the final product to contain minimal differences from this paper's presentation.

9 At the time of writing, the newest release of Antescofo is v0.9, which includes Ascograph v0.2.

In the remainder of this paper, we lay out the major dynamic features of the Antescofo language (Section 2) and the visual models for their representation (timeline-based in Section 3 and out-of-time in Section 4), in our effort towards a comprehensive dynamic notation that supports both authorship and execution of augmented scores. Specifically, the four models are the extended action view (Section 3.1), the simulation trace staff view (3.2), the hierarchical tree display (4.1) and the inspector panel (4.2). We test our hypotheses on real use case scenarios (Section 5) and conclude the paper with a final discussion (Section 6).

2. Dynamic elements in the Antescofo language

Dynamic behaviour in computer music can be the result of both interactions during live performance, and algorithmic and dynamic compositional elements prescribed in the score itself. Accordingly, in an Antescofo program, dynamic behaviour is both produced by real-time (temporal) flexibility as a result of performing with a score follower, and through explicit reactive constructs of the action language.

In the former case, even though the temporal elements can all be statically defined in the score, they become dynamic during live performance due to the tempo fluctuations estimated by the listening machine. We alluded to this basic dynamic aspect in Section 1.3, where we noted the implicit interpretation of physical time as musical time.

The second case employs the expressive capabilities of the strongly timed action language of Antescofo. Such explicitly dynamic constructs form the topic of this section.

2.1 Runtime values

Variables in the Antescofo language can be runtime, meaning that their values are only determined during live performance (or a simulation thereof). The evaluation of a runtime variable can quantify anything as decided by the composer, from a discrete atomic action to breakpoints in a continuous curve, as shown in code listing 2.

In this basic sample, the value of the level output can be the result of an expression defined somewhere else in the code, whose members depend on the real-time environment. In the example on the right, the output level is defined by $y, which grows linearly over 2 beats from zero to the runtime-computed value of $y.
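A minimal sketch of the two situations described above might read as follows. The names are ours and the curve attribute spellings follow our reading of the reference manual; the published listing remains the authoritative version:

    NOTE C4 1.0
        level ($y)        ; atomic action: the value of $y is only known at runtime

    NOTE E4 1.0
        curve c1 @grain := 0.1, @action := { level $z }
        {
            $z
            {
                { 0.0 }   ; start from zero...
                2 { $y }  ; ...and reach the runtime-computed value of $y in 2 beats
            }
        }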
Thus, while for now the circle-shaped action message display from Section 1.3 is adequate for the dynamic atomic action, for the curve display we need to introduce a visual device that explicitly shows the target level as being dynamic. Our solution is described in Section 3.1. Additionally, we propose an alternative treatment of both atomic actions and curves, in the context of performance traces, in Section 3.2.

2.2 Durations

On Ascograph's linear timeline, we distinguish two kinds of dynamic durations. Firstly, the delay between two timed items, such as an EVENT and its corresponding ACTION, or between different breakpoints in a curve; and secondly, the number of iterations of a certain timed block of instructions.

The examples in code listing 3 show the two kinds of temporal dynamics. On the left, the runtime value of $x determines the delay interval (in beats, starting from the detected onset of NOTE C4) until level receives the value .7, and the duration of the (continuous) increase from 0 to .5. On the right, the duration of execution of the loop and forall structures depends, respectively, on the state of $x and the number of elements in $tab. In these cases the terminal conditions of loop and forall are reevaluated on demand.
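Reconstructed in the same spirit, the two sides of such an example can be sketched as below. The receiver names are ours, and keyword spellings should be checked against the reference manual (Giavitto et al., 2015):

    ; dynamic delay and duration:
    NOTE C4 1.0
        $x level 0.7                   ; level receives .7 after a delay of $x beats
        curve c2 @grain := 0.1, @action := { level $z }
        {
            $z { { 0.0 } $x { 0.5 } }  ; the continuous rise from 0 to .5 lasts $x beats
        }

    ; dynamic iteration counts:
        loop l1 1                      ; one-beat period...
        {
            pulse 1
        } until ($x > 10)              ; ...with a terminal condition reevaluated on demand
        forall $item in $tab
        {
            voice ($item)              ; one parallel branch per element of $tab
        }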
A particular extension of iterative constructs is recursive processes. A process is declared using the @proc_def command, and can contain calls to itself or other processes. Thus, the behaviour and activation interval (lifespan) of a process, once it has been called, can be highly dynamic. The example in code listing 4 produces the same result as the loop block in code listing 3. See Giavitto et al. (2015) for in-depth specifications of all iterative constructs.
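As an indication of what such a declaration looks like, here is a sketch of ours that mirrors the loop above through recursion; the actual listing may differ:

    @proc_def ::pulser()
    {
        pulse 1             ; same body as the loop iteration
        if ($x <= 10)
        {
            1 ::pulser()    ; call itself again after a one-beat delay
        }
    }
    ; a single call such as ::pulser() then behaves like
    ; loop l1 1 { pulse 1 } until ($x > 10)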
We introduce a graphic solution for representing dynamic durations in Section 3.1. Going further, the action view model is not well suited for dynamic, iteration-based repetition. We propose three alternatives: 'unpacking' such constructs into an execution trace (Section 3.2), detailing their structure in a tree-based graph (Section 4.1) and monitoring their status in an out-of-time auxiliary panel (Section 4.2).

2.3 Occurrence

The examples we have shown thus far, while pushing the limits of Ascograph's action view, can still be represented along a linear compositional timeline. There is a third category which could not be drawn alongside them without causing a breakdown in temporal coherence, as shown in code listing 5.
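The construct at stake is the whenever block; a minimal sketch of its shape (ours, with a placeholder body) is:

    whenever ($x)
    {
        level ($x)     ; the block fires on each update of $x...
    } during [2#]      ; ...but only for the first two occurrences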
Here, the action block is fired whenever the variable $x is updated. Moreover, this is set to occur only twice in the whole performance, hence the during [2#]. From here, it is easy to imagine more complications—recursive process calls, dynamic stop conditions, etc.—leading to a highly unpredictable runtime realisation.

In most cases, since such occurrence points are variable, nothing but the overall lifespan of the whenever (from entry point down to stop condition fulfilment) should be available for coherent representation; this might still be helpful for the composer, as we show in Section 3.1.

Since whenever constructs are out-of-time dynamic processes,10 being detached from the standard timeline grid, they require new methods of representation beyond the classic action view. We discuss the 'unpacking' of whenever blocks onto traces in Section 3.2.3, and their non-timeline-based representations in Section 4.

10 Our use of the term 'out-of-time' is derived from the sense coined by Xenakis (1992) to designate composed structures (and the methods used to generate them), as opposed to sonic form. In an analogous fashion, we distinguish 'in-time' electronic actions that are linked to specific acoustic events from 'out-of-time' constructs, which are not. Just like Xenakis' structures, during the performance, the out-of-time constructs are actuated (given sonic form) in time. The difference from Xenakis' approach is that Antescofo out-of-time actions do not reside on a separate plane of abstraction from in-time actions: only the nature of their activation is different.
2.4 Synchronisation strategies

In Antescofo, the tempo of each electronic action block can be dynamically computed relative to the global tempo detected by the listening machine. Through attributes attached to an action group, its synchronisation can be defined as @loose, @tight, or tied to a specific event @target; the latter enabling timing designs such as real-time tempo canons (Trapani & Echeveste, 2014). Since @target-based relationships define temporal alignment between a group and event(s), we can visualise this by connecting the piano roll to the action tracks (see Section 3.2.2), or by drawing the connections in an out-of-time model (see Section 4.1).

Additionally, the Antescofo language allows for dynamic targets, acting as a moving synchronisation horizon. In this case, the tempo is aligned to the anticipation of the @target event at a specific distance in the future, computed either by a number of beats or a number of events. We can indicate this synchronisation look-ahead in relation to the timeline; see Section 5.2 for an example.
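In outline, these strategies attach to an action group as attributes. The spellings below are indicative and should be checked against the reference manual (Giavitto et al., 2015):

    group g_loose @loose      ; follow the estimated tempo only
    {
        2 level 0.5
    }
    group g_tight @tight      ; re-align each pending action to the nearest scored event
    {
        2 level 0.5
    }
    group g_canon @target[2]  ; aim at a synchronisation horizon two beats ahead
    {
        2 level 0.5
    }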

2.5 Score jumps

The instrumental events in an Antescofo score are inherently a sequence of linear reference points, but they can be further extended to accommodate jumps using the @jump attribute on an event. Jumps were initially introduced to allow simple patterns in Western classical music such as free repetitions, or da capo repetitive patterns. However, they were soon extended to accommodate composers willing to create open form scores (Freeman, 2010).

For the purposes of visualisation, we distinguish two types of open-form scores: in the first, jump points are fixed in the score (static), while their activation is left to the discretion of the performer. This scheme is more or less like the da capo example in Figure 5. Its success in live performance depends highly on the performance of the score follower. Such scoring has been featured in concert situations such as pieces by composer Philippe Manoury realised using Antescofo (Manoury, 2016). Figure 5 shows the treatment of static jumps in the action view, which would be similarly handled in the staff view (Section 3.2).

The second type is where the score elements and their connections are dynamically generated, such as in the work of composer Jason Freeman (Freeman, 2010). In this case, similar to the dynamic @targets from Section 2.4, we are dealing with an attribute, this time that of an event. For now, we choose to print out the algorithm for jump connection creation in a mouse-over popup, since its runtime evaluation can be impossible to predict.

3. Timeline-based models

In the following, we put forward visualisation solutions for the dynamic constructs from Section 2, anchored to the linear timeline. Since so much compositional activity relates to timelines, be they on paper or in software, it makes sense to push the boundaries of this paradigm in the context of mixed music.

3.1 Representing dynamics in the action view

The main challenge in displaying dynamic delay segments alongside static ones is maintaining horizontal coherence. Dynamic sections must be clearly delimited and their consequences shown. To this end, we introduce relative timelines: once an action is behind a dynamic delay, it no longer synchronises with actions on the main timeline; rather, the action lives on a new timeframe, which originates at the end of the dynamic delay.
Fig. 5. Ascograph with static jumps: A classical music score (Haydn's Military Minuet) with Antescofo jumps simulating da capo repetitions during live performance.

Fig. 6. A group with a dynamic delay between its second and third atomic actions. The subsequent action and subgroup belong to a relative timeline, whose ruler is hidden.

Fig. 7. A curve with a dynamic delay between its second and third breakpoints. The sixth breakpoint has a dynamic value. The relative timeline ruler is drawn.

Fig. 8. A best-guess simulation of the previously shown curve. The dynamic delay and value have both been actuated.

Fig. 9. Definite lifespan (top). Dynamic finite lifespan (mid). Dynamic infinite lifespan (bottom).

To avoid clutter, a relative time ruler appears only upon focus on a dynamically delayed section. Also, we add a shaded time-offset to depict the delay, as seen in Figure 6. Since by definition their actual duration is unpredictable, all such shaded regions will have the same default width.

These concepts apply to the display of curves as well. As discussed in Section 2.1, dynamic breakpoint heights now come into play. Our solution is to randomly generate the vertical coordinate of such points, and to mark their position with a vertical shaded bar, as in Figure 7.

In our investigations with the core user group at IRCAM, a need was identified for the possibility of 'local execution' of a dynamic section, to be able to compare a potential realisation of the dynamics with their neighbouring actions and events. To this end, we are developing a simulate function for any dynamic block, which transforms it into a classic static group, eliminating its relative timeline. The process makes a best possible guess for each particular situation, in the context of an ideal execution of the score,11 and can be undone or regenerated. See Figure 8 for an example of such a local simulation result. The underlying mechanisms are part of the Antescofo offline engine, similar to the full simulation model in Section 3.2.

For the constructs involving a number of iterations over a set of actions, we propose a specific striped shading of the block background, as well as a model for depicting the group's lifespan along the timeline. We chose vertical background stripes for loops and horizontal ones for foralls, according to their sequential or simultaneous nature in standard usage, respectively.12 For the activation intervals of the constructs, we distinguish three situations with their respective models, depicted in Figure 9: (1) a definite lifespan, when the duration is statically known; (2) a dynamic, finite lifespan for dynamically determined endpoints; and (3) a dynamic, infinite lifespan for activities that carry on indefinitely. These graphic elements are all demonstrated in the examples in Section 5, Figures 18(a) and 19(a).

11 Details on how such ad hoc trace generation and execution are accomplished can be found in Poncelet & Jacquemard (2015).
12 Of course, a loop can be made simultaneous through a zero repeat period, while a forall can function sequentially by way of branch-dependent delays.
3.2 Tracing performance simulations

From its conception, Ascograph has included an experimental simulation mode that 'prints' the whole piece to a virtual execution trace (Coffy et al., 2014). Much like a traditional score, electronic action staves would mark the firing of messages or the evolution of continuous value streams along a common timeline. We now present a perfected simulation model, to be implemented into the next version of Ascograph, that more robustly handles the dynamic aspects of the score language and also supports the recently developed Antescofo test framework (Poncelet & Jacquemard, 2015).

The general aim is to produce the equivalent of a manually notated score, to be used as a reference for performance and analysis, as well as a tool for finding bugs and making decisions during the composition phase.

Our main design inspiration is a common type of notation of electroacoustic music (Xenakis, 1992), as exemplified in Figure 10. The standard acoustic score is complemented by electronic action staves, along which the development of computerised processes is traced.

The new display model accommodates all concepts introduced so far: atomic values, curves, action groups and their lifespans. Dynamic values and durations are still indicated specifically; this time we use dotted lines, as the examples in Section 3.2.1 show. Horizontally, distances still correspond to musical time, but, as was the case of the shaded areas in Section 3.1, the dotted lines representing dynamic durations produce disruptions from the main timeline.

Electronic action staves can be collapsed to a 'closed' state to save space, where all vertical information is hidden and all components are reduced to their lifespans.

3.2.1 Defining staves

Unlike the action view (see Sections 1.3 and 3.1), in the simulation mode the focus is on reflecting the score's output, not its code structure. While the action view is agnostic with regard to the content of the coded actions, the simulation mode is closely linked to electronic action semantics. Thus, it is likely that commands from disparate areas in the code will belong on the same horizontal staff.

In this simulation trace model, staff distribution is closely linked to the Antescofo tracks that are defined in the score, using the @track_def command. In the example in code listing 6, the track T contains all score groups or actions whose label or target start with the prefix level, and their children recursively.
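Such a definition might be sketched as follows; the filter notation is indicative only, and the exact form is given in the reference manual (Giavitto et al., 2015):

    @track_def T
    {
        level*    ; select groups/actions whose label or target begins
                  ; with "level", together with their children, recursively
    }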
The model will attempt to print all the corresponding actions to a single staff. Should this prove impossible without creating overlaps, the following actions will be taken, in order, until a 'clean' layout is obtained:

(1) Collapse overlapping action groups to their lifespan segments. These can then be expanded, creating a new sub-staff underneath or above the main one.
(2) Order the open action groups and curves by relative timeline (see Section 3.2.2), and move them to sub-staves as needed.
(3) Order the open action groups and curves by lifespan length, and move them to sub-staves as needed.

If track definitions are missing from the score, the staff configuration will simply mirror the action group hierarchy. Figure 11 shows a possible reflection of the group from Figure 6, whose corresponding score is Group g1 from code listing 7 below. The height of a point on a staff is proportional to the atomic action value, according to its receiver.13 It is possible to have several receivers on a single staff, each with its own height scale (as in Section 5.1), or with common scaling (as in the present section).

Since the same item can belong to several tracks, this will be reflected in its staff representation. By default, a primary staff for the item will be selected, and on the remaining staves the item will only be represented by its lifespan. The user can then expand/collapse any of the instances. The primary staff is selected by the following criteria:

(1) Least amount of overlapping items.
(2) Smallest distance to the staff's track definition: an identical label match will be a closer match than a partial match, which is closer than a parent–child group relationship.

In Figure 12 we show the same group from Figures 6 and 11, now with two tracks defined: T1 for the main group and T2 for the subgroup. The subgroup and its contents also fall under the track T1 definition, which is why the subgroup lifespan is represented on the T1 staff.

Note that, while the subgroup's timeline starts simultaneously with the m3 0.9 atomic action, its first triggering is m21 0.8, which comes after a .4-beat delay. Since, as we have explained, the simulation view focuses on reflecting the execution of the score, the subgroup lifespan on the T1 track is represented as starting with the m21 0.8 event.

13 Recall the general atomic action code syntax: [<delay>] <receiver_name> <value>. A receiver might get no numeric value, or a list of values. We use the first value if any, or else a default height of 1.
Fig. 10. Electroacoustic staff notation: Nachleben (excerpt) by Julia Blondeau.

Fig. 11. Staff representation of a group containing a dynamic delay and a subgroup.

Fig. 12. The elements of T2 are also part of T1 (in collapsed form).

Similar to the action view (Section 2.3), whenever blocks are represented by their lifespans. However, here the user can expand the contents of the whenever block on a new staff, marked as being out-of-time—much like populating the auxiliary inspector panel, as we describe in Section 4.2. We present a practical approach to visualising whenever constructs as part of the example in Section 5.1.

An alternative to the automatic layout generation function is adding tracks one by one using context menu commands. Naturally, users will be able to employ a mix of both strategies, adding and removing score elements or entire tracks or staves to create a desired layout.

While some layouts produced with this model might prove satisfactory for direct reproduction as a final score, we will also provide a vector graphics export function, where a composer can subsequently edit the notation in an external programme. Finally, the layout can be saved in XML format and included alongside the Antescofo score file of the piece.

3.2.2 Representing sync points

We showed in Figure 12 how the start and end of a subgroup's relative timeline (coinciding with the actions m3, valued at .9, and m23, valued at 0) are marked by vertical shaded dotted lines. Similarly, we signify a return to the main timeline, by synchronising to a certain event, as in Figure 13, where curve A is no longer part of the relative timeline before it; it synchronises to an event, depicted by the red rectangle on the piano roll. The corresponding Antescofo score is presented in code listing 7.

We take a similar approach for dynamic synchronisation targets, as exemplified by the case study in Section 5.2. Again the sync relationship will be represented by a dotted line, this time parallel to the timeline.

3.2.3 Visualising test traces

Beyond the ideal trace produced by executing a score with all events and actions occurring as scripted in the score, the simulation view extends to support the model-based testing workflow (Poncelet & Jacquemard, 2015), which builds a test case by receiving a timed input trace, executing the piece accordingly and outputting a verdict.
Such an input trace describes the behaviour of the listening machine, by way of the deviations of the detected musician activity from the ideal score. For each deviation, Antescofo computes a new local tempo, based on the last detected note duration.

We propose an example partial test scenario in Table 1,14 again corresponding to code listing 7. Since the last line of code is an atomic action @local-ly synced to NOTE E4, and in our example input trace (see code listing 8) the event is missed, the action will remain untriggered.

Timed input traces, as lists of event symbols and their corresponding timestamp and local tempo, can be loaded from text files and visualised on the piano roll, as in Figure 14. The time distance between the ideal event and its detected trace is highlighted, and missed events are greyed out. The user is able to edit the input trace on the piano roll and save the modifications into a new text file. The computed tempo curve τ connects all the local tempo values and spans the duration of the piece; it is displayed along the timeline.

Output traces are produced by computing physical time from musical time. For instance, the timestamp of event e2 from code listing 8 is the result of multiplying its input beat position by the corresponding relative tempo: t(e2) = 1.125 × (60/102) ≈ 0.661 s, matching the e2 entry in Table 1.

14 This table and the corresponding trace listings are an example of the automatically generated output of the Test framework.
Fig. 14. Partial test scenario: the event e2 (C note) is .125 beats late, the event e4 (D) is .75 beats early, the event e5 (E) is missed. The curve is quantised into five actions: c1–c5.

Fig. 13. Synchronising to an event: the piano roll is focused on the D4 note which triggers Curve A.

Once the input trace has been processed, the simulation view updates accordingly. The offline engine makes a best-effort attempt to produce a veridical realisation of the electronic score. For instance, any whenever blocks are fired just as they would be during a real performance, by monitoring their activation condition. This allows their contents to be displayed on the appropriate staves alongside the regularly timed actions—which would be impossible without a known input trace.

Effectively, a visualisation of the test's output trace is produced, with any missed actions being greyed out. Any staff layout previously configured by the user is preserved. The corresponding test verdict can be saved as a text file.

4. Models complementary to the timeline

We have shown so far how the introduction of dynamic processes makes linear timeline-based models partially or wholly inadequate for coherent representation. Along with addressing this issue, alternative models can provide the added benefit of improving focus and offering a fresh perspective on the score.

In this section, we propose two solutions: a tree-based display of the score's hierarchical structure and internal relationships, and an auxiliary panel that focuses on specific, possibly recurring actions or groups. We note that these two models are currently under development as part of the roadmap towards the next major version of Ascograph. The final implementations may vary slightly from the specifications presented hereon.

4.1 The Hierarchy view

There are significant precedents of graphic tree representations for grammars in the computer music literature, such as Curtis Roads' TREE specification language (Roads, 1977).
Table 1. Partial test scenario. In Antescofo's listening estimation, 'zzz' denotes the wait for a late event detection, and '!' is a surprise detection of an early event. The real tempo and duration of event e4 are irrelevant, since the following event is missed.

          Musician              Antescofo estimation       Event durations
    cue   timestamp   tempo    timestamp [s]    tempo    real        A. estimate   relative duration
    e1    .0s         90.7     .0               102      .661s       .588s         1.125 beats [long]
    e2    .661s       115.4    .58 zzz .66      90.7     1.819s      2.315s        2.75 beats [short]
    e4    2.481s      N/A      !2.481           115.4    irrelevant
    e5    [missed]    N/A

Fig. 15. Glyphs used in the hierarchy tree model.

Fig. 16. Example of a hierarchy tree. Group G synchronises to the second event.

In a similar way, we can interpret the Antescofo language as a Type 2 context-free grammar, and construct the hierarchical tree of a score as follows. The primary nodes are the instrumental EVENTs. Their siblings are the neighbouring events, vertically aligned. Should a jump point be scripted, then one event node can have several downstream siblings.

ACTIONs are secondary nodes, connected to their respective event nodes in a parent–child relationship. The branch structure of the action nodes mirrors the groupings in the score. We have designed a set of glyphs for all Antescofo score elements; see Figure 15.

Aside from the parent–child and sibling relationships defined so far, we also provide ways to indicate internal relationships. These include:

• common variables or macros (colour highlighting);
• common process calls (colour highlighting);
• synchronisation targets (dotted arrow).

The user can selectively activate them permanently, or they can appear upon mouse hover. Figure 16 shows an example of all three types of relationships between nodes.

To avoid cluttering the tree display, we have decided not to show lifespans in this model. However, whenever and @proc_def nodes are persistently displayed at the top of the frame, next to the score tree, for as long as the current zoom view intersects with their lifespan. A click on a whenever node expands its contents in place, and clicking on a @proc_def node expands its currently visible instances within the tree.

Fig. 17. Explicit linking in Ascograph: the three main views are linked to the code editor, which is linked to the inspector. The main views also allow the user to select items to highlight in the inspector.

4.2 The inspector panel

In this auxiliary visualisation mode, the user can observe the contents and/or monitor the state of groups, actions or variables, which shall be selected from the other views (text editor, action view, hierarchy view). Once inside the inspector, the item state will synchronise, via a local simulation estimate, with the current position in the score from the other views. This behaviour is consistent with the visualisation principle of explicit linking (Roberts, 2007), which is maintained in Ascograph along the diagram in Figure 17.
The inspector displays a combination of the timeline-based designs from Section 3. For action groups, we retain the block display (e.g. for showing whenever groups outside the timeline), and we use horizontal staves to visualise the values in variables and action receivers, along with their recent histories. The added value is two-fold: block display of out-of-time constructs, and persistent monitoring of values (even when they have lost focus in the other views).

The hierarchy view and the inspector panel are both depicted in a working situation in the following section; see Figure 18.

5. Case studies

We present two use case scenarios highlighting specific dynamic language constructs used in example Antescofo pieces. In each example, the instrumental score is minimal (a single event) and the focus is on the electronic actions produced by the machine in response to the instrumental event, and their visualisation using the four models we have described.

5.1 Reiteration and dynamic occurrence: loop, whenever

The first example is based on the basic rhythm and soundscape tutorials included in the Antescofo software package. Code listing 9 shows significant portions of an example algorithmic composition score,15 where a group contains two curve and two whenever blocks, whose states control the behaviour of the loop block. To all intents and purposes, the instrumental score is empty, except for 'dummy' events for the start and end of the piece: everything happens on the electronic side, where Antescofo acts as a sequencer. For an in-depth analysis of the score's operation, we refer the reader to the tutorial documentation; presently, we shall concentrate on the visualisation solutions, as displayed in Figures 18(a)–(d).

The action view follows the code structure, and includes the lifespans of the whenever and loop blocks, as introduced in Section 3.1. Note how these lifespans are all constrained by the duration of the parent group.

15 As found in JBLO_loop_ex-StepTWO.asco.txt.
The triggering frequency of the loop construct is dictated by the evolution of the tempo (curve tempoloop1) and the beat division rate ($cycleloop). Their interdependence is reflected in the simulation view staff display. In the loop, receiver names are drawn by string concatenation via the @command instruction. The layout has been configured to draw a staff for each of the three receivers, containing pan and amplitude pairs.

The hierarchy view uses EVENT objects one and three as primary nodes, and the layer1 group as the parent secondary node, from which the subgroups branch out. The existence of common variables between the final loop node and the previous constructs is pointed out through colour highlighting. The two whenever nodes are displayed next to the tree, while their parent node is in view.

Finally, in the inspector panel we track the variables $count, $t_loop1 and $cycleloop, assuming the current view position is after the end of the tempoloop1 curve, which ends on the value 90 for $t_loop1, thus fulfilling the loop's end condition. The user might choose to track a different configuration of receivers in the inspector, depending on their particular focus and the task at hand.

5.2 Score excerpt: Tesla

Our final case study is excerpted from the score of 'Tesla ou l'effet d'étrangeté' for viola and live electronics, by composer Julia Blondeau. Again we focus, in code listing 10, on the actions associated to a single event in the instrumental part. The visualisation models are proposed in Figure 19(a) and (b).
Fig. 18. Visualisations for code listing 9.

Fig. 19. Visualisations for code listing 10.

A synthesiser is controlled by the components of GROUP GravSynt, which has a dynamic synchronisation target16 of two beats. We indicate this lookahead interval as a shading of the group header in the action view, and a dotted line arrow in the simulation view.

The group contains the following: a static message to the SPAT1 receiver, a triggering of the ASCOtoCS_SYNTH_Ant process (previously defined through @proc_def) and, 5 ms later, the triggering of the SYNTH_Ant_curveBat process. After 5 ms, curve ampGr is launched, which lasts 14 beats.

Simultaneously with the curve above, a loop is triggered, controlling an oscillation in the $mfoGr variable. Each iteration creates a four-beat curve that starts where the previous one left off (after aborting the previous curve if necessary), and finally, when the loop is stopped, its @abort action ensures a smooth two-beat reset of $mfoGr to the value 5. Thus, its lifespan is dynamic and finite: it has no set end condition, but is stopped by an abort message elsewhere in the score.

16 As defined in Section 2.4.
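The shape of this construct, as we read it from the description above, can be sketched as follows. Grain sizes, the illustrative oscillation target and the exact @abort spelling are ours, not the published listing:

    loop mfo_loop 4                      ; one iteration every four beats
    {
        abort mfo_curve                  ; stop the previous iteration's curve, if any
        curve mfo_curve @grain := 0.05, @action := { let $mfoGr := $z }
        {
            $z { { $mfoGr } 4 { 7 } }    ; start where the previous curve left off;
        }                                ; the target value 7 is illustrative
    }
    @abort
    {
        curve mfo_reset @grain := 0.05, @action := { let $mfoGr := $z }
        {
            $z { { $mfoGr } 2 { 5 } }    ; smooth two-beat reset to the value 5
        }
    }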
The inconvenience of an unknown endpoint is removed alternatives to the traditional Western staff. The models we
in the simulation view, which by definition is based on an have proposed (action blocks, traced staff, code tree, inspec-
execution of the score. This model is also able to unfold the tor) capitalise on the state of the art in computer interface
loop construct in time along its computed duration, attaching design, maintaining a generality of applicability while en-
the @abort sequence at the end. abling specific perspectives into form and material. As the
The hierarchical and inspector views provide no new Antescofo reactive language evolves we intend to keep adapt-
insights in this case; we omit them for space considerations. ing our visual models to new abilities—for instance, patterns,
which help specify complex logical conditions for use in a
whenever, or objects, which extend the language with as-
6. Conclusions and future work pects of object-oriented programming. Similarly to existing
software coding environments, we are considering developing
We introduced four interlinked models for visualising debugger-oriented features to further facilitate the testing of
processes in dynamic mixed music, building towards a frame- Antescofo programs.
work that can support various compositional and performa- Finally, we intend to further open up the system to user
tive approaches to the music. Throughout the paper, we have customisation through a powerful API and a robust import/-
worked under the assumption that a standardised visual nota- export system that empowers musicians to integrate the vi-
tion would be beneficial for current and future practitioners sual notation into their personal authoring and performance
and theorists. Still, the question should be asked: is such a workflows.
lingua franca a necessary, or even desirable goal to pursue?
While this is not the time for a definitive assessment, we would
like to contribute a point of view based on our experience with Acknowledgements
Ascograph.
Visualisation models such as the ones we propose have proven useful in practice, at least for a subset of computer music. This includes, but is not limited to, works which employ real-time score following, algorithmic generation, digital sound processing and sampling, recurring electronic actions and/or dynamic feedback between machine and musician. One case where our models might not be as appropriate would be, for instance, spectralist music, where the impetus is in the evolution of timbre over time, as opposed to discrete pitched events. Even so, the task of extending our models in this direction might prove worthwhile: annotation fostering and being in turn shaped by musical creativity. While there will always be artists working outside of general norms (this is as true today as it's ever been, with bespoke visualisations, audio-visual art, live coding and other practices that integrate the visual with the musical piece), we aim to outline a framework for creating and interpreting mixed music which will apply to, and in turn generate, a large set of approaches.

Today's democratisation and diversity of musical practices mean we cannot hope to reach a new canonical notation standard. However, this fact also motivates us towards flexible alternatives to the traditional Western staff. The models we have proposed (action blocks, traced staff, code tree, inspector) capitalise on the state of the art in computer interface design, maintaining a generality of applicability while enabling specific perspectives into form and material. As the Antescofo reactive language evolves, we intend to keep adapting our visual models to new abilities: for instance, patterns, which help specify complex logical conditions for use in a whenever (as defined in Section 2.4), or objects, which extend the language with aspects of object-oriented programming. Similarly to existing software coding environments, we are considering developing debugger-oriented features to further facilitate the testing of Antescofo programs.
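As a reminder of the kind of construct such extensions build on, a whenever block fires each time an update to its watched variables makes its condition true; the threshold and action below are hypothetical, chosen only to echo the case study:

    // hypothetical example: react whenever $mfoGr exceeds a threshold
    whenever ($mfoGr > 8)
    {
        print "mfoGr peak: " $mfoGr
    }

Patterns would generalise such conditions to sequences of events, which is precisely where richer visual support becomes necessary.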
Finally, we intend to further open up the system to user customisation through a powerful API and a robust import/export system that empowers musicians to integrate the visual notation into their personal authoring and performance workflows.

Acknowledgements

The authors would like to thank Julia Blondeau for contributions to Section 5, and the Journal's editors for their highly constructive reviews.

Funding

Sectoral Operational Programme Human Resources Development 2007–2013 of the Romanian Ministry of European Funds; Financial Agreement [POSDRU/159/1.5/S/132397].
References

Agostini, A., & Ghisi, D. (2012). Bach: An environment for computer-aided composition in Max. International Computer Music Conference. Ann Arbor, MI: Michigan Publishing, University of Michigan Library.
Allombert, A., Desainte-Catherine, M., & Assayag, G. (2008). Iscore: A system for writing interaction. International Conference on Digital Interactive Media in Entertainment and Arts, DIMEA '08 (pp. 360–367). New York, NY: ACM.
Arias, J., Desainte-Catherine, M., Salvati, S., & Rueda, C. (2014). Executing hierarchical interactive scores in ReactiveML. Journées d'Informatique Musicale. Bourges, France: HAL.
Assayag, G., Rueda, C., Laurson, M., Agon, C., & Delerue, O. (1999). Computer assisted composition at Ircam: From PatchWork to OpenMusic. Computer Music Journal, 23, 59–72.
Burloiu, G., & Cont, A. (2015). Visualizing timed, hierarchical code structures in AscoGraph. International Conference on Information Visualisation. Barcelona, Spain: University of Barcelona.
Buxton, W., Patel, S., Reeves, W., & Baecker, R. (1979). The evolution of the SSSP score-editing tools. Computer Music Journal, 3, 14–25.
Celerier, J.-M., Baltazar, P., Bossut, C., Vuaille, N., Couturier, J.-M., & Desainte-Catherine, M. (2015). OSSIA: Towards a unified interface for scoring time and interaction. In M. Battier, J. Bresson, P. Couprie, C. Davy-Rigaux, D. Fober, Y. Geslin, H. Genevois, F. Picard, & A. Tacaille (Eds.), International Conference on Technologies for Music Notation and Representation (pp. 81–90). Paris, France: Institut de Recherche en Musicologie.
Chadabe, J. (1984). Interactive composing: An overview. Computer Music Journal, 8, 22–27.
Clay, A., & Freeman, J. (2010). Preface: Virtual scores and real-time playing. Contemporary Music Review, 29, 1.
Coduys, T., & Ferry, G. (2004). IanniX: Aesthetical/symbolic visualisations for hypermedia composition. Sound and Music Computing, Paris.
Coffy, T., Giavitto, J.-L., & Cont, A. (2014). AscoGraph: A user interface for sequencing and score following for interactive music. International Computer Music Conference. Athens, Greece: Michigan Publishing.
Collins, K., Kapralos, B., & Tessler, H. (2014). The Oxford handbook of interactive audio. Oxford: Oxford University Press.
Cont, A. (2008). Antescofo: Anticipatory synchronization and control of interactive parameters in computer music. International Computer Music Conference. Belfast: Michigan Publishing.
Cont, A. (2010). A coupled duration-focused architecture for real-time music to score alignment. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32, 974–987.
Cont, A. (2011). On the creative use of score following and its impact on research. Sound and Music Computing. Padova, Italy.
Cont, A. (2013). Real-time programming and processing of music signals (Habilitation à diriger des recherches). Université Pierre et Marie Curie – Paris VI.
Cont, A., Echeveste, J., Giavitto, J.-L., & Jacquemard, F. (2012). Correct automatic accompaniment despite machine listening or human errors in Antescofo. International Computer Music Conference. Ljubljana, Slovenia: IRZU – the Institute for Sonic Arts Research.
Cycling '74. (2016). Max is a visual programming language for media. Retrieved from https://cycling74.com/products/max/
Desainte-Catherine, M., & Allombert, A. (2005). Interactive scores: A model for specifying temporal relations between interactive and static events. Journal of New Music Research, 34, 361–374.
Echeveste, J., Cont, A., Giavitto, J.-L., & Jacquemard, F. (2013). Operational semantics of a domain specific language for real time musician-computer interaction. Discrete Event Dynamic Systems, 23, 343–383.
Echeveste, J., Giavitto, J.-L., & Cont, A. (2013). A dynamic timed-language for computer-human musical interaction (Research Report RR-8422). INRIA.
Echeveste, J., Giavitto, J.-L., & Cont, A. (2015). Programming with events and durations in multiple times: The Antescofo DSL. ACM Transactions on Programming Languages and Systems. Submitted for publication.
Fober, D., & Orlarey, Y. (2012). INScore – An environment for the design of live music scores. Proceedings of the Linux Audio Conference – LAC (pp. 47–54). California: CCRMA, Stanford University.
Freeman, J. (2010). Web-based collaboration, live musical performance and open-form scores. International Journal of Performance Arts and Digital Media, 6, 149–170.
Giavitto, J.-L., Cont, A., Echeveste, J., & MuTant Team members. (2015). Antescofo: A not-so-short introduction to version 0.x (MuTant Team-Project). Paris, France: IRCAM.
Kurtz, M. (1992). Stockhausen: A biography. London: Faber and Faber.
Manoury, P. (1990). La note et le son. Paris: L'Harmattan.
Manoury, P. (2013). Compositional procedures in Tensio. Contemporary Music Review, 32, 61–97.
Manoury, P. (2016). List of works. Retrieved from http://brahms.ircam.fr/philippe-manoury
McCartney, J. (1996). SuperCollider: A new real time synthesis language. International Computer Music Conference (pp. 257–258). Hong Kong University of Science and Technology, China.
Nouno, G., Cont, A., Carpentier, G., & Harvey, J. (2009). Making an orchestra speak. Sound and Music Computing. Porto, Portugal (SMC2009 Best Paper Award).
Poncelet, C., & Jacquemard, F. (2015). Model based testing of an interactive music system. ACM/SIGAPP Symposium on Applied Computing. Salamanca, Spain: ACM.
Puckette, M. (1997). Pure Data. International Computer Music Conference. Thessaloniki, Greece: Michigan Publishing.
Puckette, M. (2002). Using Pd as a score language. International Computer Music Conference. Gothenburg, Sweden: Michigan Publishing.
Puckette, M. (2004). A divide between 'compositional' and 'performative' aspects of Pd. First International Pd Convention. Graz, Austria.
Risset, J.-C. (1999). Composing in real-time? Contemporary Music Review, 18, 31–39.
Roads, C. (1977). Composing grammars (2nd ed. 1978). International Computer Music Conference. Ann Arbor, MI: Michigan Publishing.
Roberts, J. C. (2007). State of the art: Coordinated and multiple views in exploratory visualization. International Conference on Coordinated and Multiple Views in Exploratory Visualization (pp. 61–71). Zurich: IEEE.
Rowe, R. (1993). Interactive music systems: Machine listening and composing. Cambridge: MIT Press.
Stroppa, M. (1999). Live electronics or live music? Towards a critique of interaction. Contemporary Music Review, 18, 41–77.
Toro-Bermúdez, M., Desainte-Catherine, M., & Baltazar, P. (2010). A model for interactive scores with temporal constraints and conditional branching. In Proceedings of Journées d'Informatique Musicale, Paris.
Trapani, C., & Echeveste, J. (2014). Real time tempo canons with Antescofo. International Computer Music Conference (p. 207). Athens, Greece: Michigan Publishing.
Wang, G. (2008). The ChucK audio programming language: A strongly-timed and on-the-fly environ/mentality (Doctoral dissertation). Princeton: Princeton University.
Wiggins, G., Miranda, E., Smaill, A., & Harris, M. (1993). Surveying musical representation systems: A framework for evaluation. Computer Music Journal, 17, 31–42.
Winkler, T. (2001). Composing interactive music: Techniques and ideas using Max. Cambridge, MA: MIT Press.
Xenakis, I. (1992). Formalized music (2nd ed.). Hillsdale, NY: Pendragon Press.