Birk Weiberg
The Entangled Apparatus
Cameras as Non-distancing Devices
In her reading of the philosophy-physics of Niels Bohr, Karen Barad has proposed
a new ontology based on the post-representational concepts of diffraction
and material-discursive practices. In my paper I trace these concepts in the INTRA
SPACE project from the perspective of reading the experimental system as an
apparatus for the production of real-time technical images. I do so by comparing
it to recent developments in computational photography and by contextualizing
the project within post-photographic artistic practices. A central question herein
is whether photography can be understood as a non-distancing technique.
Image/Data
In a provisionally furnished room on the ground floor of the former post office
at Dominikanerbastei in Vienna, a large rear-projection screen structures the
space into dedicated areas. The area in front of the screen is an open space
surrounded by subordinated areas with seats for an audience, a table for technical
staff, and a backstage area for the projector itself. A performer—or in
the lingo of the INTRA SPACE project, a visitor—enters the void and stands still
facing the screen. After a few basic movements the figure depicted in the
projected CGI video adjusts its posture to that of the visitor. Visitor and
avatar are now linked in a similar way to puppeteer and puppet. However,
there are no strings attached and both figures are about the same size. This
apparently simple scheme then unfolds its very own idiosyncrasies, of which the following text surveys only those related to the roles that images and cameras play.
In the experimental system of the INTRA SPACE project we find two types of
images—both of which exist primarily in real time. The first kind of images are
found on the large screen. While they appear to be familiar in their mirror-like
function, they are notable for their origin. The second kind of images only
appear on the screens of a control computer and thus are not directly visible
for performers and spectators. These images originate from a dozen small
IP cameras distributed throughout the room and directed at the performance
space. They provide what Harun Farocki has called operative images, images
that are part of specific procedures, images that work, and that are recorded
for machines rather than for human perception.1 They belong to a motion
tracking system, which provides information about the posture of the visitor
1
Harun Farocki, “Phantom Images,” Public 29
(2004): 12–22; Volker Pantenburg,
“Working Images: Harun Farocki and the
Operational Image,” in Image Operations:
Visual Media and Political Conflict, ed.
Jens Eder and Charlotte Klonk (Manchester:
Manchester University Press, 2017); and
Aud Sissel Hoel, “Operative Images:
Inroads to a New Paradigm of Media
Theory,” in Image—Action—Space, ed.
Luisa Feiersinger, Kathrin Friedrich, and
Moritz Queisner (Berlin: De Gruyter,
2018), 11–27.
without the visual markers that are usually attached to the bodies of the performers. The extracted data then becomes an element of the rendered and projected images.
Owing to their function of collecting spatial data, the operative images differ from what photographic images usually do and, as I want to suggest, can be compared to computational photography as a recent development in the field of “technical images.”2 Computational photography marks a paradigm shift that goes beyond the much-discussed digitization of photographic images during the 1990s.3
Traditional photography can be described as a semi-automatic technique for
translating three-dimensional situations into two-dimensional visual representations thereof. Computational photography, then, is an umbrella term for various extensions of this technique by means of computations that are done within
the act of photographing and that strive to “improve” representational qualities. Such improved representations offer both a more faithful coverage of the original situation through the now digital apparatus and adjustments to conventional understandings of what “good photographs” are.
Looking at the computational photography discourse of software developers,
as it is shaped in scholarly articles and books, one finds an already canonized
catalogue of useful features to increase image quality. Essential applications
of computational photography include:
• High Dynamic Range (HDR) algorithms, which overcome limitations in reproducible contrast by combining several exposures with varying stops.
• The flash/no-flash method, which merges two images, one lit by ambient light and one by flash, to capture a wider range of illumination.
• Flutter shutter, a technique of collecting several images at random intervals with different exposure times in order to eliminate motion blur effects by understanding their causes.
• Panorama stitching, finally, which overcomes the limitations of a camera’s field of view by combining shots made in different directions.4
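The first of these techniques can be sketched in a few lines of code. The following is a deliberately minimal illustration of the HDR principle, not any camera's actual pipeline: it assumes a linear sensor response (a real pipeline would first recover the response curve) and merges frames by weighting each pixel toward whichever exposure captured it in mid-tones. All names and values are invented for the example.

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge differently exposed frames into one radiance map.

    Each pixel's radiance is estimated from the exposures in which it
    is neither under- nor over-exposed, weighted toward mid-tones.
    `images` are float arrays in [0, 1]; linear response is assumed.
    """
    images = [np.asarray(im, dtype=float) for im in images]
    numerator = np.zeros_like(images[0])
    denominator = np.zeros_like(images[0])
    for im, t in zip(images, exposure_times):
        # Hat-shaped weight: trust mid-tones, distrust clipped pixels.
        w = 1.0 - np.abs(2.0 * im - 1.0)
        numerator += w * (im / t)   # per-frame radiance estimate
        denominator += w
    return numerator / np.maximum(denominator, 1e-8)

# Two exposures of the same two-pixel scene, one stop apart:
dark = np.array([[0.1, 0.4]])    # short exposure (t = 1)
bright = np.array([[0.2, 0.8]])  # long exposure (t = 2)
radiance = merge_exposures([dark, bright], [1.0, 2.0])
```

The weighting is the whole trick: a pixel clipped to white in the long exposure contributes almost nothing, so the short exposure supplies its value instead.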
Most of these techniques automatically extract information from the images
as is also done in the case of operative images, but use that information to apply certain effects back to the images themselves. Contemporary
smartphone cameras, for example, use algorithms to identify the silhouettes of
persons in the foreground. This allows them to blur the image background
and to give the entire image the appearance of a photograph taken with a camera
with a larger image sensor and less depth of field. One camera pretends to
be another one through modification of aesthetic features of its photographs.
The first step of reading data from an image is the domain of computer vision
as it is also used in the INTRA SPACE setup. While computer vision seems to
mark a break in the operative ontologies of photographic images, the technique ties in with a tradition that is nearly as old as photography itself: photogrammetry, or what in German is called Messbilder (measurement images).
The digital status of photographic images makes it possible to automate this
practice; the results may either be reapplied to the images or used in other
ways. (In the case of operative images, reapplication often happens to make
the process transparent and controllable for human operators by adding
markers to the images.)
What makes the entire process of extracting spatial information from flat images
possible is the concept of central perspective as it is incorporated in the
cameras. With the digitization of technical images, the depicted space has
again become addressable as it has been in fine arts since the Renaissance.
“Perspective is not interesting because it provides realistic pictures […] it is
interesting because it creates complete hybrids: nature seen as fiction, and
fiction seen as nature, with all the elements made so homogeneous in space
that it is now possible to reshuffle them like a pack of cards.”5 Thus, perspective is perhaps less an instrument of depiction than one of remote control.
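The geometry that makes flat images computable can be stated in two short formulas: central projection maps a 3-D point onto the image plane, and, given two cameras a known baseline apart, the disparity between their projections yields depth. A minimal sketch of this photogrammetric principle, assuming an idealized, rectified camera pair with arbitrary units and focal length:

```python
import numpy as np

def project(point_3d, focal_length=1.0):
    """Central (pinhole) projection of a 3-D point onto the image plane."""
    x, y, z = point_3d
    return np.array([focal_length * x / z, focal_length * y / z])

def depth_from_disparity(u_left, u_right, baseline, focal_length=1.0):
    """Recover depth from the horizontal disparity between two rectified
    cameras a known baseline apart: photogrammetry in miniature."""
    return focal_length * baseline / (u_left - u_right)

point = np.array([1.0, 0.0, 4.0])   # 4 units in front of the cameras
baseline = 0.5                      # right camera shifted along x
u_left = project(point)[0]
u_right = project(point - [baseline, 0, 0])[0]
depth = depth_from_disparity(u_left, u_right, baseline)
```

The forward mapping flattens the world; the inverse mapping, possible only because central perspective is built into the camera, makes the depicted space addressable again.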
Probabilistic Realism
The algorithmic interpretation and modification of photographs within the
camera itself opens up a new field of agency that is of special interest for
practitioners and artists. In her e-flux essay “Proxy Politics,” Hito Steyerl refers
to an unfortunately unidentified software developer who revealed to her
what actually changes with computational photography as it is applied especially in smartphones. Their small and cheap lenses, which deliver essentially
noise, have propelled the development of techniques to render images based
on such input in combination with pre-existing images. “By comparing what
you and your network already photographed, the algorithm guesses what you
might have wanted to photograph now.” Computational photography for
Steyerl thus seems to be “a gamble with probabilities that bets on inertia.”6
2 Vilém Flusser, Into the Universe of Technical Images (1985), trans. Nancy Ann Roth (Minneapolis: University of Minnesota Press, 2011).
3 Hubertus von Amelunxen, Stefan Iglhaut, and Florian Rötzer, eds., Fotografie nach der Fotografie (Dresden: Verlag der Kunst, 1996); Geoffrey Batchen, “On Post-Photography,” Afterimage 20, no. 3 (1992); and William J. Mitchell, The Reconfigured Eye: Visual Truth in the Post-photographic Era (Cambridge, MA: MIT Press, 1992).
4 Brian Hayes, “Computational Photography,” American Scientist 96, no. 2 (2008): 94–99.
5 Bruno Latour, “Drawing Things Together,” in Representation in Scientific Practice, ed. Michael Lynch and Steve Woolgar (Cambridge, MA: MIT Press, 1990), 8.
6 Hito Steyerl, “Proxy Politics: Signal and Noise,” e-flux Journal 60 (December 2014): http://www.e-flux.com/journal/proxy-politics/.
The resulting images are neither immediate representations of reality nor simple inventions. The persistent, representational promise of the concept of indexicality in photography, in combination with issues of statistical likelihood, brings me to my question of whether it can be productive to assess computational photography as probabilistic realism, i.e., a condensation of miscellaneous, computable sources that become relevant through averaging.7
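The averaging invoked here can be given a toy statistical form: many noisy captures (or prior photographs resembling the same convention) converge, by the law of large numbers, toward one statistically likely image. A deliberately schematic sketch with a three-pixel “image,” all values invented:

```python
import numpy as np

rng = np.random.default_rng(0)
convention = np.array([0.2, 0.5, 0.9])   # the image 'everyone' takes
# Many cheap, noisy captures of (or prior photographs resembling) it:
captures = rng.normal(loc=convention, scale=0.3, size=(10000, 3))
likely_image = captures.mean(axis=0)     # condenses toward the convention
deviation = np.abs(likely_image - convention).max()
```

Each individual capture is dominated by noise, yet their mean reproduces the convention almost exactly; the gamble on inertia pays off precisely to the degree that people keep photographing the same things.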
Steyerl’s anecdote might be understood to mean that images have become mere reverberations of memories. But this is nothing new, as photographic culture has always featured a high degree of convention, with people tending to reproduce images rather than make new ones. The difference is that more and more of
these conventions are now black-boxed within the apparatus as proxies that
Steyerl wants to call into question. An early example here is the smile-detection camera, which triggers an exposure automatically
once it recognizes that the subject lifts the corners of her mouth.8 We can read
the resulting image as the representation of a smile or even of a happy person.
We can read it as the representation of an either social or aesthetic convention
which has found its way into software. Or we can understand it as an ever-evolving circle of causes with liminal modifications, where effective factors have
to be traced in between material and discursive domains. Such translations
between the domains of humans and machines have also been the subject of
science and technology studies. But as especially the actor-network theory
of Michel Callon (1986), Bruno Latour (1991 and 1999), and others has shown,
agency and thus responsibilities can no longer be attributed to humans alone.
What I consider more relevant than the leverage of specific actors is the disappearance of the original or primary image, not in the sense of an authentic representation but as something that is close to the act of exposure and that makes
all further reproductions derivatives. The before-and-after comparison of original
and modified photographs has been a key rhetorical figure for pointing to
human agency within the automatisms of photography. Without a real or even
imagined original image the possibility for such a critique vanishes. One artist
who has constantly questioned the idea of the original—whether in regard to
photographic images or fine arts in a broader sense—is Oliver Laric. This is
possibly best expressed in Versions, a video essay that he himself has altered
repeatedly over the years and that demonstrates how deeply embedded such
transformations are in contemporary visual culture.9 Therefore, one thing that
has changed since digital photographs emerged in the 1990s and raised the
question of whether and how they were still indexical or not, is that we are moving
away from calling on an original image as a reference when discussing matters
of visual representation.
The original image has effectively been replaced by raw data as the primary
trace left by reality once it has entered a camera. Raw data—as problematic as
the term itself may be—in its inaccessibility, however, has structural similarities to the photo negatives and latent images of analogue photography. So, when
Daniel Rubinstein and Katrina Sluis (2013) point out that digital images are
always just one out of many possible visual representations of the underlying
data, we can say the same with regard to the latent images of photochemical
exposures. The fragile connection between data and image was already a point
of interest in the discussion of the 1990s. Artist (and publisher) Andreas Müller-Pohle, for example, translated a digital scan of Nicéphore Niépce’s famous
first photograph into a variety of decorative data prints.10 What at first comes
across as being in awe of large amounts of data, even today, still articulates
our inability to establish a meaningful connection between the two ontological
domains. Müller-Pohle’s title Digital Scores is possibly more revealing than
the panels themselves as it suggests that data is both a trace or outcome and
something that needs to be performed or retranslated into an aesthetic form.
And again, all this likewise applies to the latent images and negatives of analogue
photography, which were widely ignored by traditional photo theories. The
technological change that we are witnessing might change our view of photography and the questions we ask more than it changes the medium of photography itself.
Computational photography makes us aware of a paradoxical situation: There
is an indexical (in the sense of causal) relationship between the photographed
subject and the raw data a camera collects. But this raw data—the noise
that Steyerl describes—is of limited to no value (significance) for the beholder.
Unlike the indexes that we find in Charles Sanders Peirce—the smoke, the
weathercock, etc.—camera data can no longer be read by a human interpretant, and thus its indexical character (as effect and sign) remains unfulfilled behind an opaque wall of numeric abstraction. The representational function
of photography only becomes possible with a subsequent step of interpretation,
combination, and other non-indexical procedures. This second step then
also becomes the subject of scholarly critique and artistic inquiry. Such is the
case with recent works by Trevor Paglen where he used machine learning
7
A computer that remixes our visual memories
to provide us with new ones that are likely
in a statistical sense; the sci-fi feeling that
Steyerl’s anecdote comes with possibly also
has to do with our inability to evaluate the
effectiveness of the algorithms she refers
to. Overall, it remains difficult for humanities scholars to assess how photography
actually is changing here. This is caused not
only by the technical nature of these
changes but also by the fact that a lot of
what is going on is hidden inside the black
boxes of proprietary soft- and hardware.
We are left with the resulting images and
the user activities that bring them forth but
both are only a part of the entire system.
8 J. Whitehill, G. Littlewort, I. Fasel, M. Bartlett,
and J. Movellan, “Toward Practical Smile
Detection,” IEEE Transactions on Pattern
Analysis and Machine Intelligence 31, no.
11 (2009): 2106–11; a demonstration of the
feature in Sony’s Alpha 6300 camera:
https://www.youtube.com/watch?v
=Godpu72R2c4.
9 One version can be found here: https://
anthology.rhizome.org/versions.
10 See the artist’s website, http://muellerpohle.net/projects/digital-scores/.
techniques to reveal how computers translate data into rendered photographs.
Paglen trained a neural network with images of the post-colonial philosopher
Frantz Fanon and then asked the computer to render a portrait based on the
features that the machine identified as distinguishing Fanon. In a similar
way, he trained his systems to classify images associated with terms such as
omens and portents, monsters, and dreams.11 The final synthetic images are
created by using actual digital noise as raw data and increasing the trained
model’s sensitivity until it sees something where there is nothing. Paglen
thus produces artifacts that unveil the usually invisible algorithms. He speaks
of invisible images here as they do not address anybody but represent a
closed circuit of images made by machines for machines.
Loss of Perspective
The translation of images into interpretable data is but one aspect of computational photography. Another, less discussed, aspect is the fact that many methods not only dissolve the concept of a primary image but also overcome the singularity of such an image. Image data usually derives not from a single but from
several exposures. HDR extends the dynamic range of luminosity by combining
several exposures with different stops. Panorama stitching requires the photographer to point her camera in different directions to capture a wider field
of view in a sequence of images. With single lens systems different exposures
necessarily represent different moments in time. This has changed with more
recent camera designs with multiple lenses that, owing to their different positions
and perspectives, make it possible to extract more precise spatial information, as is also the case in the INTRA SPACE setup. We can speak of a collected
or aggregated indexicality—but indexicality after all—that tries to overcome
shortcomings of cameras in comparison to human perception. The fact that
several images are combined into one does not yet distinguish current computational photography from the digital photography of the 1990s. But the notion
of digital photography, as conceived then, refers to procedures applied to
visible and identifiable images with image processing software such as
Photoshop. Computational photography, on the other hand, develops its own
dynamics as it is applied automatically by the apparatus itself—an apparatus
whose hard- and software is, of course, designed by humans.
A piece of software that possibly marks the threshold between both paradigms is Microsoft’s since-discontinued Photosynth. It was most notably used for
CNN’s online project “The Moment,” which depicts the inauguration of Barack
Obama as US president in 2009. In the wake of citizen journalism, CNN asked
people who attended the ceremony and took photos of it to contribute them
to a single, collective photomontage. The submitted images were then combined
and presented with the Photosynth software, which allowed website visitors to
navigate between different viewpoints. The result is a hybrid form of testimony which
at the same time affirms the documentary quality of photography in the accumulation of 628 witnessing photos and photographers but also creates glitches and
tensions between these photos simply because, and in contradiction to the project
title, it does not represent a single moment. William Uricchio, in his analysis of
the project, has found that “there is no correct or authorized viewing position, no
‘master shot’ within which everything else is a recomposition. Instead, there is
simply a three-dimensional space made up of many textures and granularities, and
the means to move within it.”12 “The Moment” thus is also symptomatic of the
loss of authority that single images in the context of traditional media have had.
Taking Photosynth as a forerunner of computational photography inside cameras,
we can say that one difference from earlier modes of photography is the dissolution
of temporal and spatial singularities that find their way into an image. An image
of computational photography no longer refers to a specific view of the camera,
it aggregates points in time and space and thus overcomes the central perspective of the Renaissance. This not only affects the anthropomorphic viewpoint
but also the virtual plane placed between the eye and the scene, as the raw data
often preserves three-dimensional information. This is the case with the Kinect
camera, which Microsoft introduced in 2010, and in Apple’s iPhone X, which uses
3D data for (among other things) post hoc lighting changes, where virtual illumination hits the spatial representation of a situation before it is rendered as an image.
Another technique is light field photography, where the light from a situation
is captured in a way that does not yet predetermine its rendering on an image
plane. Other camera designs foresee the replacement of the single lens with
multiple optics of lower quality, which in combination nonetheless can provide
images of higher quality once their raw data has been merged. In all of these
techniques, it is not primarily the image itself that becomes subject to interpretation but the situation and the point of view that finally transforms it into an
image. In a laboratory setup with an object, a camera, and a single light source
at the Max Planck Institute for Informatics in Saarbrücken it was possible to
use the data provided by the camera to render an image from the perspective
of the light source.13 “You can’t have a point of view in the Electronic Age,”
as Marshall McLuhan said.14 Perspective has turned into an option, a convention, and it is interesting to see how, for example, Paglen’s renderings try to bypass the question of perspective. While technically they use a virtual camera
for rendering, this camera however does not produce a situation that can be
seen as specific. The specificity of these images is that of a typology.
11 See Paglen’s exhibition, http://
metropictures.com/exhibitions
/trevor-paglen4.
12 William Uricchio, “The Algorithmic Turn:
Photosynth, Augmented Reality and the
Changing Implications of the Image,”
Visual Studies 26, no. 1 (2011): 30.
13 Hayes, “Computational Photography,” 98.
14 “Marshall McLuhan: The World is Show
Business,” YouTube video, 6:31, posted by
globalbeehive, April 27, 2010, https://
www.youtube.com/watch?v=9P8gUNAVSt8.
Coming back to the notion of a probabilistic realism, computational photography
in many ways works against an understanding of realism that has to be conceived as subjective in the sense that it requires a point of view that somebody
or something has to take and that can be called by name. A probabilistic realism, on the other hand, is the result of echoes and feedbacks in a distributed
network or, as Rubinstein writes, a “rhizomatic assemblage of interconnected
fragments.”15
Embracing Entanglements
For INTRA SPACE a plurality of images is provided by the small cameras dotted around the room. It is these cameras, rather than the elements of the physical architecture, that define a stage-like zone of computational visibility. Unlike regular video cameras, which would require a power cable to receive electricity and a video cable to send images, the IP cameras of the INTRA SPACE system are connected merely via Ethernet cables, which both provide electricity and transmit image data. The cameras are no longer connected apparatuses but extensions of a computer network.16 The multiplicity of cameras becomes necessary because of the insufficiency of the camera as a measuring device for representing comprehensive spatial information. Within the application of photography, the ability of the technique to make the world flat and portable is a vital feature. However, if one is no longer interested in the photographs themselves but in the data that can be extracted from them, this compression feature turns into a shortcoming, which has to be compensated for by adding to the now insufficient devices. What remains is the camera’s ability to capture/measure things from a distance.
A technical challenge of the setup lies in unifying the various measurements. This is also the starting point of Karen Barad’s exploration of the “philosophy-physics” of Niels Bohr and his writings on quantum physics. She is interested in how Bohr’s careful analysis of measurement in science, a practice that I want to compare to that of photography, leads him to reject representationalism.17 A central question of quantum physics derives from the fact that the usage of different experimental systems results in different and even conflicting measurement results. Bohr’s colleague Werner Heisenberg saw this as a problem of epistemology, an uncertainty that we have when it comes to recognizing the features of electrons in a specific situation. Bohr, on the other hand, drew a more radical conclusion, saying that there is an indeterminacy of such features, that electrons may not even have a position or a momentum until they are measured.18 Barad’s take on this is not to fall into the unproductive trap of social constructivism, where signs ultimately win over matter, but rather to understand Bohr, his instruments, and the subjects of his research as entities that constitute each other. In Barad’s terminology they do not interact as self-sufficient entities, they intra-act and thereby (re)define each other. And it is this assumption that subject and object have no stable identities that allows us to develop a different understanding of photographic practices. In INTRA SPACE, we can witness this in the merging of the distinct photographic measurements and their mapping onto a single ideal skeleton: when the avatar’s movements deviate from those of the visitor, when limbs are bent in unnatural ways. This is when the resulting CGI image no longer remains an image but becomes physical, as we tend to identify and feel with the twisted body.
Explaining Bohr’s position on the dynamism of matter, Barad writes: “Moving
away from the representationalist trap of geometrical optics, I shift the focus
to physical optics, to questions of diffraction rather than reflection.”19 The necessity of finding alternatives to geometrical optics as the basis of photography is evident in critical, apparatus-oriented photographic practices, which
likewise often deal with geometry as a contingent property of cameras. Such
practices that shift their focus from the image to the apparatus have existed
for a long time but have gained a new momentum since the digitization of photography in the 1990s. Well-known examples of such surveys of photographic
geometry are the camera obscura installations by Zoe Leonard. In 2011, the artist began a series of such installations that confront the geometry of optics
with the specific geometry of the different spaces she used. On the one hand,
she brings the visitor back to the very beginning of photography when images could not yet be preserved. On the other, these installations have a very
post-photographic character, being produced after Leonard herself had temporarily abandoned the production of photographic images.20 The images one
encounters inside her camera obscuras are ephemeral, fragile, and also
function as a light source for the room itself and thus question widespread photographic concepts.
More explicitly, the Israeli artist and theoretician Aïm Deüelle Lüski has constructed
cameras as a critique of visual representations in the context of the political
15 Daniel Rubinstein, “Posthuman Photography,”
in The Evolution of the Image: Political
Action and Digital Self, ed. Marco Bohr
and Basia Sliwinska (New York: Routledge,
2018).
16 In the science fiction movie Colossus:
The Forbin Project (1970) it is the network
itself (consisting of a US and a USSR
supercomputer) that calls for camera
extensions to accomplish total surveillance
of its operators and world domination.
17 Light in physics can be either understood
as continuous waves or as discrete particles.
Both models contradict each other and
require distinct methods of measurement.
18 Karen Barad, Meeting the Universe
Halfway: Quantum Physics and the
Entanglement of Matter and Meaning
(Durham, NC: Duke University Press,
2007), 115ff.
19 Barad, 135.
20 Courtney Fiske, “In-Camera: Q+A with Zoe
Leonard,” Art in America, November 2012,
http://www.artinamericamagazine.com
/news-features/interviews/zoe-leonard
-murray-guy/.
Fig. 54 (previous spread)
TheCaptury, screenshot motion-tracking software interface, 2016
situation in the Middle East. His viewfinder-less cameras document the convergence of various entities in a shared space while evading any purposeful and
thus hegemonic visual representation. With his somewhat kaleidoscopic images
Deüelle Lüski literally replaces reflections with diffractions as suggested not
only by Barad21 but also by Donna Haraway,22 from whom she adopts this notion. Deüelle Lüski describes his practice as “distracted concentration,”23 a
mode of perception that is still understood in relation to human consciousness, whereas for Haraway and Barad neither the origin nor the target of light is fixed.
What makes Deüelle Lüski, who works only with traditional, analog techniques,
interesting with regard to computational photography is that he conceives
the body of the camera as a threshold, a place where light turns into matter.
What he strives for is delaying, nearly preventing the materialization of an
image in what he calls “the ‘struggle’ inside the camera obscura and upon the
emulsion surface.”24 The camera itself has turned into a discursive device, a
phenomenon that also became more relevant with computational photography but remains difficult to grasp. The images of computational photography
are figurative but can only be regarded as representational with a very open understanding of what they represent—subjects, expectations, norms, the technology itself, or the threshold Deüelle Lüski addresses. Between reality and an image, we now find raw data that is as inaccessible or even undetermined as
the atoms of Niels Bohr.
This threshold cannot be understood with the simplified model of analogies
and brings Barad to her proposition of a shift from reflection to diffraction,
which she at first derives from specific devices used in scientific practices:
“In contrast to reflecting apparatuses, like mirrors, which produce images—
more or less faithful—of objects placed a distance from the mirror, diffraction
gratings are instruments that produce patterns that mark differences in
the relative characters (i.e., amplitude and phase) of individual waves as they
combine.”25 So while a reflection produces an analogon, a representation by
means of similarity, diffraction creates complex patterns that are jointly caused
by an instrument and its subject. This becomes evident when we look at the
flutter shutter technique to reduce motion blur caused by relative movement
between a camera and one or more of its subjects. In analogue photography
there are basically two options to avoid this usually unwanted effect: we can
reduce either the relative movement or the duration of exposure. Computational
photography, however, provides an option that seems counter-intuitive at first
sight: the camera collects several images at random intervals with different exposure times and then infers relatively sharp images from them.26 The fact that the images feature different degrees of motion blur means that those that are less sharp are so by design. The comparison between the images allows the software to ascertain the relative movement that caused the
problem. It can compensate for the shortcomings of the hardware because it
“knows” something about what the camera sees. Such a technical awareness
of a situation was originally conceived by the inventors of cybernetics in the
1940s to improve the ability of missiles to hit moving targets. In the case of INTRA SPACE the closed circuit starts with the visitor’s body and its capture
through a dozen IP cameras. It is then transformed into data and brought
back into the space as a CGI image of the avatar’s body, to be seen by the
visitor and an audience of spectators and technicians. Seen as a contemplation on representation by means of technical images, this structure is not even
necessarily computational but can also be traced back to the beginnings of
video art with the installations of Peter Campus and the TV Buddha of Nam June
Paik. In any case, such loops, just like cybernetic feedback structures, partially
suspend the distinction between machinic and human agency. They constantly oscillate between software and hardware, between signs
and matter, and thus circumvent any determination of primary agency on either
side. The programmed camera is a device that persistently measures but is
also measured in order to adjust its measurement values. Digitization was initially
understood as a translation of matter into signs, but we have since come to understand that the digital has its own material constraints and must be seen not as a purely semantic but also as a material domain. The camera itself
has lost its former stability with regard to its configurations and its position in the process of documenting the world, as could also be observed during the development of INTRA SPACE. There, the virtual CGI camera gradually abandoned the stable position that created a mirror-like image, for instance when it was attached to the body of the avatar to show how the hand would see the rest of the virtual body if it only had eyes.27
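The deblurring idea sketched above can be made concrete in a brief numerical illustration. The following one-dimensional Python sketch is a hypothetical simplification, not Agrawal's actual implementation: it shows why a pseudo-random ("fluttered") exposure pattern preserves the information that a plain box exposure destroys, so that the motion smear can be inverted computationally.

```python
import numpy as np

# 1-D sketch of coded-exposure ("flutter shutter") deblurring.
# A conventional shutter smears a moving subject with a flat box
# kernel whose frequency spectrum has near-zeros, making
# deconvolution ill-posed. Opening and closing the shutter in a
# pseudo-random binary pattern yields a kernel whose spectrum stays
# away from zero, so the smear remains invertible.

rng = np.random.default_rng(1)
n = 256

sharp = np.zeros(n)
sharp[100:130] = 1.0                      # toy "subject" signal

# Hypothetical 52-tap binary flutter code (open/closed intervals).
code = rng.integers(0, 2, size=52).astype(float)
code /= code.sum()                        # normalize total exposure

def smear(signal: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Model the motion smear as circular convolution with the kernel."""
    return np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel, n)))

blurred = smear(sharp, code)

# Frequency-domain deconvolution: stable here because the coded
# kernel's spectrum is comfortably non-zero everywhere.
K = np.fft.fft(code, n)
recovered = np.real(np.fft.ifft(np.fft.fft(blurred) / K))

print(float(np.max(np.abs(recovered - sharp))))   # tiny numerical error
```

Replacing `code` with a flat box kernel of the same length makes the division blow up at the spectral near-zeros, which is precisely the shortcoming the coded exposure is designed to avoid.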
21 Barad, Meeting the Universe Halfway, 29.
22 Donna Haraway, “The Promises of
Monsters: A Regenerative Politics for
Inappropriate/d Others” (1992), in The
Haraway Reader (New York: Routledge,
2004), 70.
23 Ariella Azoulay, Aïm Deüelle Lüski and
Horizontal Photography (Leuven: Leuven
University Press, 2014), 235.
24 Azoulay, 238.
25 Barad, Meeting the Universe Halfway, 81.
26 Amit Agrawal, “Motion Deblurring Using
Fluttered Shutter,” in Motion Deblurring:
Algorithms and Systems, ed. A. N.
Rajagopalan and Rama Chellappa,
(Cambridge: Cambridge University Press,
2014), 141–60.
27 Regarding identifications of the camera
with persons and objects, see Birk
Weiberg, “Maschinenbilder: Zur postsubjektiven Kamera,” in Archäologie der
Zukunft, ed. Heiko Schmid,
Frank-Thorsten Moll, Ursula Zeller, and
Mateusz Cwik (Friedrichshafen: Zeppelin
Museum, 2014), 23–44.
Steyerl, when writing about computational photography, has suggested viewing its intermediary processes as proxies. For Steyerl, these proxies are proper subject matter for critical inquiry because they might be informed by economic or political interests. Such a critical discourse, however, necessarily
perpetuates the very idea of representation and of a proper closure of the gap
between matter and sign. Barad and other critics of modernism, on the other
hand, simply claim that originally there is no distance to be bridged. Identities
are thus not recognized and represented but are the result of repetitions and
variations. “A performative understanding of scientific practices”—and as stated
before, I identify these with photographic ones—“takes account of the fact
that knowing does not come from standing at a distance and representing but
rather from a direct material engagement with the world.”28 As photography has
been a vital contributor to the construction of such distances, one question to be
answered is what practices and studies of cameras as non-distancing devices
might look like.
The conclusion I wish to propose is not limited to computational photography but rather takes this most recent development as a starting point for reading photography in a different way. From this perspective, photography has first been
chemical, then optical, and now computational. The changing identities of
photography herein are not simply ontological transformations by means of
technical progress but also different modes of perceiving the medium. The
optical has dominated our understanding of photography with metaphors such
as mirror or window borrowed from the fine arts. It is analogue not only in a
technical but also in a conceptual sense. The diffractive methodology that Barad
has suggested, “a way of attending to entanglements in reading important
insights and approaches through one another,”29 provides a different approach to photography if we consider it a practice that is itself a diffractive entanglement. Can we understand the camera as a diffractor, and what do we gain by doing so? Distortion would then be an integral part of photography and not a defect of an otherwise ideal mirror. Different results from different apparatuses would lead not to uncertainty but to complementarity. Any kind of translation, the proxies Steyerl writes about, does not estrange us from a situation but
brings all its relata closer together: “Images or representations are not
snapshots or depictions of what awaits us but rather condensations or traces
of multiple practices of engagement.”30
Fig. 55
INTRA SPACE, virtual figure in the Unity scene, with supporting guides placed by Christian Freude
to trace head movements in relation to virtual cameras, still from video, 2017
28 Barad, Meeting the Universe Halfway, 49.
29 Barad, 30.
30 Barad, 53.
Literature
Agrawal, Amit. “Motion Deblurring Using Fluttered Shutter.” In Motion Deblurring: Algorithms and Systems, edited by A. N. Rajagopalan and Rama Chellappa, 141–60. Cambridge: Cambridge University Press, 2014.
Amelunxen, Hubertus von, Stefan Iglhaut, and Florian Rötzer, eds. Fotografie nach der Fotografie. Dresden: Verlag der Kunst, 1996.
Azoulay, Ariella. Aïm Deüelle Lüski and Horizontal Photography. Leuven: Leuven University Press, 2014.
Barad, Karen. Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning. Durham, NC: Duke University Press, 2007. doi:10.1215/9780822388128.
Batchen, Geoffrey. “On Post-photography.” Afterimage 20, no. 3 (1992).
Callon, Michel. “Some Elements of a Sociology of Translation: Domestication of the Scallops and the Fishermen of St. Brieuc Bay.” In Power, Action, and Belief: A New Sociology of Knowledge?, edited by John Law, 196–233. London: Routledge & Kegan Paul, 1986.
Farocki, Harun. “Phantom Images.” Public 29 (2004): 12–22.
Fiske, Courtney. “In-Camera: Q+A with Zoe Leonard.” Art in America, November 2012. http://www.artinamericamagazine.com/news-features/interviews/zoe-leonard-murray-guy/.
Flusser, Vilém. Into the Universe of Technical Images. Translated by Nancy Ann Roth. Minneapolis: University of Minnesota Press, 2011. Originally published in German in 1985.
Haraway, Donna. “The Promises of Monsters: A Regenerative Politics for Inappropriate/d Others” (1992). In The Haraway Reader, 63–124. New York: Routledge, 2004.
Hayes, Brian. “Computational Photography.” American Scientist 96, no. 2 (2008): 94–99.
Hoel, Aud Sissel. “Operative Images: Inroads to a New Paradigm of Media Theory.” In Image—Action—Space, edited by Luisa Feiersinger, Kathrin Friedrich, and Moritz Queisner, 11–27. Berlin: De Gruyter, 2018.
Latour, Bruno. “Drawing Things Together.” In Representation in Scientific Practice, edited by Michael Lynch and Steve Woolgar, 19–68. Cambridge, MA: MIT Press, 1990.
———. Pandora’s Hope: Essays on the Reality of Science Studies. Cambridge, MA: Harvard University Press, 1999.
———. “Technology Is Society Made Durable.” In A Sociology of Monsters: Essays on Power, Technology and Domination, edited by John Law, 103–31. London: Routledge, 1991.
Mitchell, William J. The Reconfigured Eye: Visual Truth in the Post-photographic Era. Cambridge, MA: MIT Press, 1992.
Pantenburg, Volker. “Working Images: Harun Farocki and the Operational Image.” In Image Operations: Visual Media and Political Conflict, edited by Jens Eder and Charlotte Klonk. Manchester: Manchester University Press, 2017. doi:10.7228/manchester/9781526107213.003.0004.
Rubinstein, Daniel. “Posthuman Photography.” In The Evolution of the Image: Political Action and Digital Self, edited by Marco Bohr and Basia Sliwinska. New York: Routledge, 2018.
Rubinstein, Daniel, and Katrina Sluis. “The Digital Image in Photographic Culture: Algorithmic Photography and the Crisis of Representation.” In The Photographic Image in Digital Culture, edited by Martin Lister, 22–40. 2nd ed. London: Routledge, 2013.
Steyerl, Hito. “Proxy Politics: Signal and Noise.” e-flux Journal 60 (December 2014). http://www.e-flux.com/journal/proxy-politics/.
Uricchio, William. “The Algorithmic Turn: Photosynth, Augmented Reality and the Changing Implications of the Image.” Visual Studies 26, no. 1 (2011): 25–35. doi:10.1080/1472586x.2011.548486.
Weiberg, Birk. “Maschinenbilder: Zur postsubjektiven Kamera.” In Archäologie der Zukunft, edited by Heiko Schmid, Frank-Thorsten Moll, Ursula Zeller, and Mateusz Cwik, 23–44. Friedrichshafen: Zeppelin Museum, 2014. doi:10.17613/yegs-2625.
Whitehill, J., G. Littlewort, I. Fasel, M. Bartlett, and J. Movellan. “Toward Practical Smile Detection.” IEEE Transactions on Pattern Analysis and Machine Intelligence 31, no. 11 (2009): 2106–11. doi:10.1109/tpami.2009.42.
Image Credits
Towards an INTRA SPACE
Christina Jauernik and Wolfgang Tschapeller
In collaboration with Christina Ehrmann,
Drawings for INTRA SPACE, 2019.
© INTRA SPACE
Fig. 1
Cartography of Neighborhoods, INTRA
SPACE, 2019
Fig. 2
INTRA SPACE
Fig. 3
View of the project space with Projection
Screen C
Fig. 4
Biography of project space including all
technical, virtual, engineered, and human
contributors, their positions,
specifications, and collaborative
engagements
Fig. 5
Zoom into project space
Fig. 5a
Body model, skeleton, and spheres; drawn
based on the concept developed by Nils
Hasler, 2017
Fig. 6
Detail (zoom-in): Overlapping fields of view, 12 cameras J
Fig. 6a Industry camera installed in the
tracking area, 2017
Fig. 6b Virtual camera, INTRA SPACE
Fig. 6c Figure shown with virtually placed
camera positions, experiments, 2017
Fig. 7
Detail (zoom-in): Figures
Fig. 7a
Photograph, reenacting Venus Cupid Folly
Time, Esther and Christina. Photo:
Christian Freude
Fig. 7b
View from the tracking camera, screenshot
of software interface. Esther and Christina
with two skeletons (unknown-2;
snapPoseSkeleton-6). Screenshot:
Christian Freude
Fig. 8
Detail (zoom-in): Skeleton
Fig. 8a Four of the twelve cameras, motion
tracking screen (monitor H), 2017
Fig. 9
The Virtual Camera
Fig. 10
Jason, screenshot excerpt of programming
language for the virtual figures’ behavior
Fig. 10a
Jason
Figs. 11–12
Experiment 1: Orthogonal camera as mirror
with Carla and Esther, January 2016. Photo:
Wolfgang Tschapeller; image editing:
Markus Wörgötter
Fig. 13
Experiment 2: Perspective camera
attached to Christina’s right hand, working
with Carla, April 2016. Camera: INTRA
SPACE; image editing: Markus Wörgötter
Figs. 14–18
Experiment 2: Perspective camera
attached to Christina’s right hand, working
with Old Man, May 2017. Camera: INTRA
SPACE; image editing: Markus Wörgötter
Fig. 19
Experiment 2: Rehearsal
Esther, Christina, and two figures, Bob and
Bob, perspective camera attached to
Esther’s inner right wrist, May 2017
Camera: Ludwig Löckinger
Fig. 20
Experiment 2: Rehearsal Esther and Bob,
perspective camera attached to Esther’s
inner right wrist, May 2017
Camera: Ludwig Löckinger; image editing:
Markus Wörgötter
Vital Technologies: The Involvements of “the
Intra”
Vicky Kirby
Fig. 21
Jean-Martin Charcot, Autographic Skin,
1877
Dancing with Machines: On the Relationship
of Aesthetics and the Uncanny
Clemens Apprich
Fig. 22
Stuart Patience, Spin Round Wooden Doll –
Nathaniel dancing with Olympia at the ball.
2018. Courtesy of Heart Agency © Stuart
Patience/heartagency.com
Fig. 23
INTRA SPACE working situation,
Dominikanerbastei, Vienna, 2017. Photo:
Günter Richard Wett
Body of Landscape
Esther Balfe
Figs. 24–41
Practicing Virtual Conditions. Rehearsal:
Esther Balfe, Christina Jauernik. Video
stills, 2017. Camera: Ludwig Löckinger
INTIMACY LOSS SKINNING
Christina Jauernik
Figs. 42–46
Christina Jauernik, working with skeletons,
October 2018, Sitterwerk artist residency,
Switzerland
Skin Dreams
John Zissovici
Figs. 47, 48, 53
John Zissovici, head shapes, 2017, Vienna.
Video stills
Fig. 49
Still from recording, experiments with
hand camera. INTRA SPACE 2017
Fig. 50
Francesco Morone, Stimmate di San
Francesco, 14th century. Tempera on
canvas, 84.5 × 56.5 cm. Museo di
Castelvecchio, Verona. © Photographic
Archive, Museo di Castelvecchio, Verona
Fig. 51
Philippe Lapierre, Untitled #155, 2020.
Drawing
Fig. 52
John Zissovici, The Door, 2017. Photo: John
Zissovici
The Entangled Apparatus: Cameras as Nondistancing Devices
Birk Weiberg
Fig. 54
TheCaptury, screenshot motion-tracking
software interface, 2016
Fig. 55
INTRA SPACE, virtual figure in the Unity
scene, with supporting guides placed by
Christian Freude to trace head movements
in relation to virtual cameras. Video still,
2017