GROUPTHINK
Telepresence and Agency During Live Performance
Proc. ACM Comput. Graph. Interact. Tech., Vol. 5, No. 4, Article 39. Publication date: August 2022.
DOI: https://doi.org/10.1145/3533610
Live performers often describe “playing to the audience”: shifting emphasis, timing and even content according to perceived audience reactions. Traditional staging allows the transmission of physiological signals through the audience's eyes, skin, odor, breathing, vocalizations and motions such as dancing, stamping and clapping, some of which are audible. The Internet and other mass media broaden access to live performance, but they efface traditional channels for “liveness,” which we specify as physiological feedback loops that bind performers and audience through shared agency. During online events, contemporary performers enjoy text and icon-based feedback, but current technology limits expression of physiological reactions by remote audiences. Looking to a future Internet of Neurons where humans and AI co-create via neurophysiological interfaces, this paper examines the possibility of reestablishing audience agency during live performance by using hemodynamic sensors while exploring the potential of AI as a creative collaborator.
ACM Reference Format:
Ali Hossaini, Oliver Gingrich, Shama Rahman, Mick Grierson, Joshua Murr, Alan Chamberlain and Alain Renaud. 2022. GROUPTHINK: Telepresence and Agency During Live Performance. Proc. ACM Comput. Graph. Interact. Tech. 5, 4, Article 39 (August 2022), 10 pages. https://doi.org/10.1145/3533610
1 INTRODUCTION
GROUPTHINK is a participatory artwork that anticipates the Internet of Neurons, an era when humans and computers interact through sensory prostheses [Sempreboni and Viganò, 2021]. The project aims to:
- Develop new methods of remote interaction between performers and live audiences
- Examine the psychological states associated with mass connectivity via human-machine interface (HMI)
- Recruit AI as a creative partner in an HMI-integrated network
- Cultivate a role for traditional museums in contemporary art production
As a thematic core, the artists chose to portray entanglement: the emergence of collective agency through a network of metabolic exchanges [Sheldrake, 2020]. While graphs of neurons, markets and ecologies often resemble one another, increasing evidence implies this is no coincidence [Lewontin and Levins, 1985]. Brains, economies and nature may coordinate their activities through similar principles, notably structural coupling, which facilitates shared agency among individual components.
GROUPTHINK provided an opportunity for an entangled network to emerge during a live performance. Staged at National Gallery X, a research studio run by the National Gallery (UK) and King's College London, and live-streamed to a remote audience during Ars Electronica Festival 2021, the performance included a sitarist, a guitarist, visual artists, AI-generated animations, and a visual score generated by the audience's collective heart rate. A triptych of monitors immersed the stage in a responsive video environment. Hence, the project's tagline, “Make art with your hearts.” The artwork's success was judged by the level of interdependence, or physiological entanglement, attained by the performers and audience. GROUPTHINK explores collaborative agency, AI-based creativity, and the growing possibility of an Internet of Neurons co-inhabited by AI and humans who interact via neurophysiological interfaces.
2 THE ARTIST IS PRESENT (REMOTELY)
Participatory art emphasizes audience agency, that is, active collaboration between artists and audiences, and biosensory art uses physiological interfaces as creative tools [Pearce et al., 2015]. A prime example of the former is Marina Abramović’s Measuring the Magic of Mutual Gaze (2011), which engaged visitors in direct, sustained mutual gaze. In 1965 Alvin Lucier pioneered biosensory art in Music for a Solo Performer [Lysen, 2019], and for decades Stelarc has explored shared agency through neural implants, notably in RE-WIRED / RE-MIXED: Event for Dismembered Body [Stelarc, 2015]. Biosensory artworks have historically been performed by artists, for instance, Janine Antoni's Slumber (1993) and Lisa Park's Eunoia (2013), or they have facilitated individual experiences such as Mariko Mori's Wave UFO (2007) and Oliver Gingrich's Aura (2015).
Improvements in technology and falling costs have enabled art that is both biosensory and participatory. Recent examples include Yui Kawaguchi's MatchAtria (2015) and predecessor installations by the authors: Shama Rahman and Jugular Production's Rhythms of the Heart (2015), Oliver Gingrich and Shama Rahman's Zeitgeist (2019) and Ali Hossaini's Kosmograf (2021). Zeitgeist invited audiences to co-create by providing real-time flow state classification. Using deep learning algorithms from NeuroCreate, the artwork generated video representations of Möbius strips which entwined as participants reached higher levels of togetherness [Rahman, 2022]. Rhythms of the Heart explored synchronization of heartbeats, also known as entrainment, based on research indicating that "Those who sing together, sync together" and "Synchrony is the mechanic of group empathy" [Weinstein et al., 2016]. Its musicians encouraged synchronicity and psychological entrainment by using the audience's pulse to set their tempo.
Improvements in telecommunications, AI and biosensing offer artists the capacity to collaborate with remote audiences. The convergence of these factors provided the practical context for GROUPTHINK's development. At the same time, thanks to steadfast pioneers, participatory and biosensory works are now accepted as art. Joining these trends is a new emphasis on giving everyone access to culture, including individuals who cannot physically visit venues. Finally, there is a growing sense that AI may become an artistic partner as well as a creative tool. In keeping with its era, GROUPTHINK sets the stage for creating accessible, participatory art with HMI, AI and telematics.
3 TECHNICAL OVERVIEW
GROUPTHINK was performed on a platform surrounded by three panels of video. Performers faced a single camera, and separate mixing boards managed the video and audio. GROUPTHINK incorporated the following visual elements:
- Custom applications for hemodynamic monitoring and autonomous video playback
- Digitized images of paintings
- AI-generated video
- An audience participation webpage
An opt-in button on the webpage launched a custom JavaScript application which detected the participant's heartbeat via the local webcam. Page responsiveness affected data quality, so content was optimized to keep the main runtime routine simple. The local client transmitted heart rate data via WebSockets to an Airtable database, which forwarded anonymized compilations to the venue. Data were processed in studio by a MaxMSP patch which filtered anomalies, e.g., the webcam's detection of random movement and light. The patch sent the mean and standard deviation of the data via OSC (Open Sound Control) to a visual generator built in TouchDesigner. The video generator composited four visual elements on the video panels:
- A visual score which displayed the audience's collective heart rate in real time.
- Animation sequences triggered by values of the mean audience heartbeat.
- Manually controlled effects such as pixel sorting, fades and dissolves.
- Paintings from the National Gallery (UK) collection.
The generator included manual controls for operators to smooth transitions and insert selected images. This functionality allowed the media artists to integrate themselves into the performance as desired while key workflows were determined autonomously by the system. (Figure 1)
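As an illustration of the data flow described above, the following minimal sketch reproduces the venue-side step in Python rather than MaxMSP: implausible readings are filtered out, and the mean and standard deviation of the remainder are forwarded over OSC. The python-osc package, the OSC addresses, the port and the plausibility thresholds are our assumptions, not the production values.

```python
# Minimal Python stand-in for the venue-side MaxMSP patch: filter out
# implausible readings, then forward the mean and standard deviation of
# the audience's heart rates over OSC to the visual generator.
# Assumes the python-osc package (pip install python-osc); addresses,
# port and thresholds are hypothetical, not GROUPTHINK's actual values.
import statistics

from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 7000)  # assumed TouchDesigner OSC In port

def forward_heart_rates(samples_bpm: list[float]) -> None:
    # Reject webcam artifacts (random movement and light) by dropping
    # readings outside a physiologically plausible band.
    plausible = [s for s in samples_bpm if 40.0 <= s <= 180.0]
    if len(plausible) < 2:
        return  # not enough clean data to summarize
    client.send_message("/audience/mean", statistics.mean(plausible))
    client.send_message("/audience/stdev", statistics.stdev(plausible))

# Example: one batch of anonymized readings relayed from the database.
forward_heart_rates([82.0, 97.5, 210.0, 88.3, 101.2])
```

In production, the equivalent filtering and dispatch ran inside the MaxMSP patch, with TouchDesigner listening for the two OSC values.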
4 REPRESENTING THE AUDIENCE
Audience members participated via remote heartbeat monitoring. During the performance, heart rates were averaged in real time and then represented as a visual score whose form evolved with the measurements. Performers responded to the visual score by changing tempo and mood. This feedback loop aimed to replicate the dynamics of a traditional performance by using heart rate as a proxy for excitement. GROUPTHINK's visual score offered participants direct representations of their input and introduced new visual themes. The score took the form of pulsing rhizomes which grew more elaborate over time. (Figure 2)
The visual score served practical, theoretical and aesthetic purposes. When composited in the video environment, it could be intuitively processed by performers and audience. Its quasi-organic structure provided a pleasing intervention within the plant-inspired animations described below. Although this proposition was not tested, studies demonstrate that the brain synchronizes with pulsing visual stimuli by producing steady-state visual evoked potentials (SSVEPs) [Davidson et al., 2020], and we speculate that this may reinforce musical entrainment. Although sui generis, GROUPTHINK's visual score developed in the atmosphere created by innovators such as John Cage, Anthony Braxton and Iannis Xenakis, and its double-duty as musical guide and visual art resonates with Craig Vear's description of digital scores which "bring to mind a map of the harmonic shape of ... [a] song and also a tempo, a feel, a groove of how to interpret it." [Vear, 2019]
Figure 3 shows the audience participation webpage. The hemodynamic monitoring application calculated heart rates by assessing color changes in the forehead, a process which required participants to hold their face inside a target region [McDuff et al., 2020]. (Lower left corner of Figure 3.) GROUPTHINK's software incorporated Eulerian Video Magnification [Wu et al., 2012]. It recorded a mean color value for each frame, and a fast Fourier transform (FFT) analyzed the most recent 256 frame values. The dominant frequency converts directly to heart rate: 1 Hz corresponds to 60 beats per minute.
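The following NumPy sketch illustrates the estimate. The production client was written in JavaScript; the 30 fps frame rate and the 0.7 to 3.0 Hz search band below are illustrative assumptions rather than figures from the deployed application.

```python
# Sketch of the heart-rate estimate described above: buffer the per-frame
# mean color of the forehead region, take an FFT over the most recent 256
# values, and convert the dominant frequency to bpm (1 Hz = 60 bpm).
import numpy as np

FPS = 30.0     # assumed webcam frame rate
WINDOW = 256   # frames analyzed per estimate, as in the paper

def estimate_bpm(mean_colors) -> float:
    """Estimate heart rate from a sequence of per-frame mean color values."""
    window = np.asarray(mean_colors[-WINDOW:], dtype=float)
    window -= window.mean()                    # remove the DC component
    spectrum = np.abs(np.fft.rfft(window))
    freqs = np.fft.rfftfreq(window.size, d=1.0 / FPS)
    # Only consider frequencies a resting-to-excited heart could produce.
    band = (freqs >= 0.7) & (freqs <= 3.0)     # 42-180 bpm
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return peak_freq * 60.0                    # 1 Hz = 60 beats per minute

# Self-test: a synthetic 1.5 Hz (90 bpm) pulse plus sensor noise.
t = np.arange(WINDOW) / FPS
pulse = np.sin(2 * np.pi * 1.5 * t) + 0.3 * np.random.randn(WINDOW)
print(f"estimated heart rate: {estimate_bpm(pulse):.1f} bpm")
```

With a 256-frame window at 30 fps, the frequency resolution is about 0.12 Hz, roughly 7 bpm, which is adequate for the coarse energy bands described next.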
Audience reactions were categorized into three energy levels: low, medium and high, keyed to 80–100 bpm (beats per minute), 101–120 bpm and above 120 bpm respectively. These bands served as a proxy for the audience's excitement. To encourage entrainment, the performers asked participants to hum with the music, and they played tempos which matched the visual score. Every attempt was made to engineer a system that functioned in real time. Although the lag between performance, measurements and score sometimes approached 2,000 milliseconds, the music's evolution through loops and slow variations created an impression of convergence between performers and audience.
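Expressed as code, the banding reduces to a threshold function. This is a sketch using the thresholds above; the function itself, and the choice to fold readings below 80 bpm into the low band, are our assumptions rather than the production TouchDesigner logic.

```python
# Hypothetical sketch of the three energy bands used to trigger animations.
def energy_level(mean_bpm: float) -> str:
    if mean_bpm <= 100.0:
        return "low"      # 80-100 bpm (readings below 80 also land here)
    if mean_bpm <= 120.0:
        return "medium"   # 101-120 bpm
    return "high"         # above 120 bpm
```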
5 REPRESENTING NATURE
The base video contained two distinct but developmentally related elements. The first was digitized landscape paintings from the National Gallery, which segmented the performance and were representative of the artistic corpus described below. Entanglement provided the conceptual bridge between the National Gallery's collection of organic imagery and the biosensory apparatus of GROUPTHINK. The second element was produced by three generative adversarial networks (StyleGAN2 on Runway ML and Google Colab). AI and machine learning (ML) translated the visual dynamics of landscape paintings into animated video: paintings of nature contain inherent forms of organic growth, and, when interpolated into video via latent vectors, they produce animations which evoke the spontaneous formations of entangled life. Initially, pre-trained botanical models were used to interpret 200 landscape paintings, but the results did not meet the project's ambition: animations that combine the texture of canonical painting with the sinuous motion of organic growth.
Desired results were achieved by using data-centric techniques championed by Andrew Ng [Ng, 2021]. The models were retrained on custom datasets: 3,000 images of plants excerpted from National Gallery paintings and 2,000 photographs of tree branches. The StyleGANs produced still images categorized into three energy levels: low, medium and high. (Figure 4) From these base images, latent space traversal sequences produced 200 animations. The retrained models created exciting sequences, visual poems, that stretched the artists' imaginations and creative capacity.
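For readers unfamiliar with the technique, the sketch below illustrates latent space traversal: frames are rendered along a spherical interpolation (slerp) path between the latents of two base images, a common choice for Gaussian latent spaces. The `generator` argument stands in for the trained StyleGAN2 synthesis network, which in GROUPTHINK ran on Runway ML and Google Colab; the vector size and step count are illustrative.

```python
# Illustrative latent-space traversal: interpolate between two latent
# vectors and render one frame per step to turn still outputs into motion.
import numpy as np

def slerp(z0: np.ndarray, z1: np.ndarray, t: float) -> np.ndarray:
    """Spherical interpolation between two latent vectors."""
    omega = np.arccos(np.clip(
        np.dot(z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1)), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return z0
    return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

def traverse(generator, z_start: np.ndarray, z_end: np.ndarray, steps: int = 60):
    """Render one animation segment between two base-image latents."""
    return [generator(slerp(z_start, z_end, i / (steps - 1))) for i in range(steps)]

# Example with a stub generator; a real run would call the trained model.
rng = np.random.default_rng(0)
frames = traverse(lambda z: z, rng.normal(size=512), rng.normal(size=512))
print(len(frames), "frames")
```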
6 A REAL-TIME, COLLABORATIVE PERFORMANCE BETWEEN AUDIENCE, ARTISTS, AND AI
GROUPTHINK was staged twice in September 2021. (Figure 5) The performers explained how viewers could participate via the hemodynamic monitoring webpage, then began playing slow, inviting melodies. As the visual score changed, the music ranged from ambient loops to bright, rhythmic grooves to frenzied percussion.
The video triptych presented a visual counterpoint to the music. During the performance, the visual generator selected animations, autonomous visual poetry, by matching their energy level to audience heart rates. An artist enhanced the environment by smoothing transitions, live-mixing AI-generated video, and segmenting the event with full-frame paintings.
7 ART FROM THE HEART
Performance 1 begins at 475 seconds. (Figure 6) Subsequently, there is a steady increase in average heart rate, which starts at 80 bpm and peaks at 120 bpm. Performance 2 begins at 800 seconds. Average heart rate increases from 100 bpm to 115 bpm but displays greater variance, so synchronicity is less pronounced than in the first session.
The music influenced heart rate data more strongly in Performance 1 than in Performance 2, but heart rates in both sessions reflected the music's amplitude and spectral intensity. The second session may have included participants from the first, and the difference in synchronization might be explained by foreknowledge. These results are consistent with similar experiments, including [Iwanaga et al., 2005], where "...excitative music decreased perceived tension and increased perceived relaxation as the number of sessions increased," though straightforward intensity continued to have an overall impact on tension. Recent studies in exercise, health and fitness indicate possibly similar relationships between tempo and heart rate, although potential confounds can complicate attempts to measure such relationships [Thakare et al., 2017].
Taking this analog social experiment to the digital realm, GROUPTHINK connected over 100 participants remotely and extended the findings of a paradigmatic experiment by varying tempo across three rather than two modalities [Vickhoff et al., 2013]. Inspired by entanglement, GROUPTHINK's process used the dynamics of entrainment to influence the audience's perception of masterpiece paintings while engaging them in a prosocial, telematic bonding experience.
8 CREATING A SENSE OF "LIVENESS"
Debates about the nature of liveness often gravitate to the poles of Peggy Phelan and Philip Auslander [Meyer-Dinkgräf, 2015]. While the ontology of media pertains to our artwork, we sidestep earlier conceptions of liveness by focusing on the phenomenology of performance. One of the limitations of media is embedded in the word: it mediates (and restricts) the richness of live interaction. Humans interact via multiple physiological channels, and events where performers and audiences share space facilitate psychophysiological feedback loops. Performers adapt to audience reactions such as movement, dilated eyes, sweat and non-verbal vocalizations; audiences encourage or discourage new directions in the performance. Changes in heart rate accompany shifts in emotional state, and, in GROUPTHINK, hemodynamic monitoring approximated direct physiological channels of co-adaptation. The experience of GROUPTHINK was jointly produced by a physiological collective. The project sought to measure sensations of shared agency between performers and audience, and, according to a post-performance survey, a majority of participants (57.1%) felt their hearts influenced the performance. If we venture into ontology, GROUPTHINK explores whether telematic performance could become a self-generating, or autopoietic, organization that generates unanticipated collective experiences.
An autopoietic definition of liveness may be useful in the design of emotionally satisfying telematics. GROUPTHINK was enabled by contemporary telecommunications, inexpensive cloud services and the growing acceptance of remote cultural events following Covid-19. While its assemblage is complex, GROUPTHINK's artistic objective was simple: to reprise, in mediated form, the spontaneous organic connections between performers and audiences. The artwork achieved its goal of conveying physiological reactions from the audience to performers. As the post-event survey and discussions with the performers revealed, both groups felt a sense of mutual agency during the event.
9 LIMITATIONS AND POSSIBILITIES
We should note the limitations of generating telematic liveness. Engineering provides one set of constraints: bandwidth, compression, computational processing and the speed of light all contribute to latency. Based on our experience, delays as short as 10 milliseconds can disrupt networked performances. To compensate, GROUPTHINK incorporated looping cycles of music and imagery which absorbed the relatively long processing delays. Most participants (55.6%) felt strongly connected to the performance, while the remainder felt some connection. The performers felt the audience reaction lagged their musical moods, but not to the point of inhibition. As with any medium, technics limits aesthetics while introducing new creative possibilities.
Other constraints derived from project necessities. Physiological data can be acquired through various means, notably personal fitness devices, but business, API and privacy issues made this approach impractical. We chose webcams because they are ubiquitous and we could control the entire workflow. However, our hemodynamic monitoring application only works if the user's face remains fixed within the webcam's field of view. The participant survey reported that 45.5% of participants found the web interface easy to use, while 54.5% rated it as average on a 5-point scale. People enjoy moving, and entrainment relies on motion as well as metabolism. We see this implementation as proof of concept that similar effects could be generated with sensors which allow more freedom of movement.
10 CONCLUSION: AI AND THE INTERNET OF NEURONS
The notion of AI as partner offers a useful metaphor for working with machine-based intelligence: autonomous systems are partners which may one day connect our bodies. Human-machine teaming is already part of military doctrine [MoD, 2022]. However, the metaphor is problematic if we start believing AI is sentient or creates speculative representations of the world. Though given a wide degree of autonomy and capable of surprising results, AI did not serve as a full collaborator in GROUPTHINK because it lacks agency.
As military planning reveals, questions remain about the stability and accountability of AI-reliant systems, especially in an unpredictable networked environment. Physiological or direct neural interfaces multiply potential hazards. GROUPTHINK provides a safe context for experimenting with autonomous, networked systems before they are placed into service. (Figure 7) One day we may coexist with AI on an Internet of Neurons. For now, AI / ML opens new horizons for art which parallel possibilities in disciplines ranging from banking to battle. Will our view of AI-based artwork, telematic collectives and autopoiesis become a social fixture, or will it continue evolving? Can art provide a safe space for experimenting with disruptive technologies? Consider Romanyshyn's discussion of Alberti's Window, an artistic technique which became a standard for documenting visual space [Romanyshyn, 1989]. By creating new artistic perspectives, we can explore technological spaces before they mature. We intend GROUPTHINK to foster discussions of authorship and individuality, the liminality of human-machine agency and the invasiveness of media. Subsequent iterations will offer opportunities for artists, engineers, and the public to contemplate the rewards and dangers of possible futures. As Lucian Freud stated, the artist's task is to make people uncomfortable. GROUPTHINK asks uncomfortable questions by blurring the boundaries between performance and participation, passive and active experience, virtual and physical presence, and, finally, between networked, machine-mediated collectives and organic individuals.
11 ETHICAL STATEMENT
The GROUPTHINK project complies with the ACM Code of Ethics and Professional Conduct and with the EU GDPR.
ACKNOWLEDGMENTS
This work was supported by the Engineering and Physical Sciences Research Council [grant number EP/V00784X/1, UKRI Trustworthy Autonomous Systems Hub; grant number EP/S035362/1, PETRAS 2] and the National Gallery.
REFERENCES
- Matthew Davidson, Will Mithen, Hinze Hogendoorn, Jeroen van Boxtel, Naotsugu Tsuchiya. 2020. The SSVEP tracks attention, not consciousness, during perceptual filling-in. eLife 9:e60031. https://doi.org/10.7554/eLife.60031
- Makoto Iwanaga, Asami Kobayashi, Chie Kawasaki. 2005. Heart rate variability with repetitive exposure to music. Biological Psychology. https://doi.org/10.1016/j.biopsycho.2004.11.015
- Richard Lewontin, Richard Levins. 1985. The Dialectical Biologist. Harvard University Press.
- Eiluned Pearce, Jacques Launay, Robin Dunbar. 2015. The ice-breaker effect: singing mediates fast social bonding. Royal Society Open Science, 2: 150221. https://doi.org/10.1098/rsos.150221
- Flora Lysen. 2019. The interface is the (art)work: EEG-feedback, circuited selves and the rise of real-time brainmedia. In Anton Nijholt, ed. Brain Art: Brain-Computer Interfaces for Artistic Expression. Springer.
- Daniel Meyer-Dinkgräf. 2015. Liveness: Phelan, Auslander, and After. Journal of Dramatic Theory and Criticism, Vol. 29, No. 2.
- MoD. 2022. Human-Machine Teaming. https://www.army.mod.uk/our-future/human-machine-teaming/
- Andrew Ng. 2021. Data-Centric AI. https://datacentricai.org/
- Daniel McDuff, Izumi Nishidate, Kazuya Nakano, Hideaki Haneishi, Yuta Aoki, Chihiro Tanabe, Kyuichi Niizeki, Yoshihisa Aizu. 2020. Non-contact imaging of peripheral hemodynamics during cognitive and psychological stressors. Scientific Reports 10, 10884. https://doi.org/10.1038/s41598-020-67647-6
- Robert Romanyshyn. 1989. Technology as Symptom and Dream. Routledge.
- Diego Sempreboni, Luca Viganò. 2021. Privacy, Security and Trust in the Internet of Neurons. In: Groß, T., Viganò, L. (eds) Socio-Technical Aspects in Security and Trust. Lecture Notes in Computer Science, Vol. 12812. Springer. https://doi.org/10.1007/978-3-030-79318-0_11
- Merlin Sheldrake. 2020. Entangled Life: How Fungi Make Our World, Change Our Minds and Shape the Future. Random House.
- Stelarc. 2015. RE-WIRED / RE-MIXED: Event for Dismembered Body. http://stelarc.org/?catID=20353
- Avinash Thakare, Ranjeeta Mehrotra, Ayushi Singh. 2017. Effect of music tempo on exercise performance and heart rate among young adults. International Journal of Physiology, Pathophysiology and Pharmacology, 9(2), 35–39.
- Craig Vear. 2019. The Digital Score: Musicianship, Creativity and Innovation. New York: Routledge.
- Björn Vickhoff, Helge Malmgren, Rickard Åström, Gunnar Nyberg, Seth-Reino Ekström, Mathias Engwall, Johan Snygg, Michael Nilsson, Rebecka Jörnsten. 2013. Music structure determines heart rate variability of singers. Frontiers in Psychology, Vol. 4.
- Daniel Weinstein, Jacques Launay, Eiluned Pearce, Robin Dunbar, Lauren Stewart. 2016. Singing and social bonding: changes in connectivity and pain threshold as a function of group size. Evolution and Human Behavior, Vol. 37, Issue 2.
- Hao-Yu Wu, Michael Rubinstein, Eugene Shih, John Guttag, Frédo Durand, William Freeman. 2012. Eulerian Video Magnification for Revealing Subtle Changes in the World. ACM Transactions on Graphics, Vol. 31, No. 4. http://people.csail.mit.edu/mrub/papers/vidmag.pdf
- Shama Rahman. 2022. NeuroCreate. https://www.neurocreate.co.uk/
Authors' addresses: Ali Hossaini, Department of Engineering, King's College London, United Kingdom, email: ali.hossaini@kcl.ac.uk; Oliver Gingrich, University of Roehampton, London, United Kingdom, email: olivergingrich@gmail.com; Shama Rahman, Hasso-Plattner-Institute, Germany, email: rahman.shama@gmail.com; Mick Grierson, Creative Computing Institute, University of the Arts London, United Kingdom, email: m.grierson@arts.ac.uk; Joshua Murr, Creative Computing Institute, University of the Arts London, United Kingdom, email: j.murr@arts.ac.uk; Alan Chamberlain, University of Nottingham, United Kingdom, email: Alan.Chamberlain@nottingham.ac.uk; Alain Renaud, MintLab, Switzerland, email: alain.renaud@mintlab.ch.
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).
© 2022 Copyright held by the owner/author(s).