Pointed or Pointless? Recalibrating the Index, Part II, Potsdam, 2017
Probabilistic Realism
Reassessing the Index in Computational Photography
Birk Weiberg, PhD
Zurich University of the Arts, Institute of Contemporary Art Research
birk.weiberg@zhdk.ch
November 4, 2017
What is computational photography?
After digital photography and networked images, the discussion of computational photography is yet another approach to understanding how photography has changed after its
relatively stable photochemical century and a half. Artists and practitioners have had
a fair share in pointing out that the introduction of advanced computer technologies into the camera implies a relevant change to be discussed. Hito Steyerl, in
her e-flux essay “Proxy Politics,” makes a first step from issues of networked images
to computed ones. She refers to an unfortunately unidentified software developer who
revealed to her what actually changes with computational photography as it is applied
especially in smartphones. Their small and cheap lenses, which deliver essentially
noise, have propelled the development of techniques to render images based on such
input in combination with pre-existing images. “By comparing what you and your network already photographed, the algorithm guesses what you might have wanted to
photograph now.” Computational photography, according to Steyerl, thus seems to be
“a gamble with probabilities that bets on inertia” (2014). The persistent representational promise of the concept of indexicality in photography, in combination with issues
of statistical likelihood, leads me to ask whether it can be productive to assess
computational photography as probabilistic realism.
A computer that remixes our visual memories to provide us with new ones that are
statistically likely: the sci-fi feeling that Steyerl’s anecdote comes with possibly also has to do with our inability to evaluate the effectiveness of the algorithms she
refers to. It is still difficult for humanities scholars to assess how photography is
actually changing here. This is not only caused by the technical nature of these changes but
also by the fact that much of what is going on is hidden inside the black boxes of proprietary software and hardware. We are left with the resulting images and the user activities
that bring them forth, but both are only a part of the entire system.
Looking at the computational photography discourse of software developers, as it takes
shape in their scholarly articles and books, one finds an already canonized catalog of
useful features to improve image quality.
Essential and widespread applications are:
• High Dynamic Range (HDR) algorithms that overcome limitations in reproducible
contrast by combining several exposures with varying stops.
• The flash/no-flash method merges two images with ambient and flash light to capture a wider range of illumination.
• The so-called flutter shutter is a technique that opens and closes the shutter in
pseudo-random intervals over the course of an exposure in order to model and eliminate motion-blur effects.
• Panorama stitching, finally, overcomes limitations of the camera’s field of view by
combining shots made in different directions. (Hayes 2008)
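The first of these techniques can be suggested in a few lines. The following is a minimal, hypothetical sketch of exposure fusion in the spirit of HDR merging: each output pixel is a weighted average over several exposures, with weights favoring well-exposed, mid-gray values. Real HDR pipelines also align the images and recover the camera’s response curve, which this sketch omits.

```python
import numpy as np

def fuse_exposures(exposures):
    """Merge several differently exposed images (float arrays in [0, 1])
    into one, weighting each pixel by how well-exposed it is."""
    stack = np.stack(exposures)                    # shape: (n, ...)
    # Gaussian weight centered on mid-gray: clipped shadows and
    # blown highlights contribute less to the merged result.
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * 0.2 ** 2))
    weights /= weights.sum(axis=0, keepdims=True)  # normalize per pixel
    return (weights * stack).sum(axis=0)

# Simulated exposures of the same scene at different stops:
scene = np.linspace(0.0, 1.0, 101)      # a simple gradient "scene"
dark = np.clip(scene * 0.5, 0, 1)       # underexposed
mid = np.clip(scene * 1.0, 0, 1)
bright = np.clip(scene * 2.0, 0, 1)     # overexposed, highlights clipped
fused = fuse_exposures([dark, mid, bright])
```

The merged image stays within the displayable range while drawing shadow detail from the bright exposure and highlight detail from the dark one.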
What these methods have in common is that they replace a single exposure with several images. As these derive from the same optical system, they necessarily represent
different moments in time. As a first approximation, we can speak of a collected or aggregated indexicality – but indexicality after all – that tries to overcome shortcomings
of cameras in comparison to human perception.
What is the difference from the digital photos of the 1990s?
The fact that several images are combined into one does not yet distinguish current
computational photography from the digital photography of the 1990s. At the risk of
simplifying things, I would say that the notion of digital photography, as drafted two
decades ago, refers to procedures applied to visible and identifiable images with image processing software such as Photoshop. Computational photography, on the other
hand, develops its own dynamics as it is applied by the apparatus itself – an apparatus
whose hardware and software are, of course, designed by humans.
Some applications like “face beautification” by means of machine learning, as described
by Gideon Dror (2011), carry on and automate concepts of earlier photo manipulations
with new and more capable technologies. The process that Dror outlines still involves
an original image, which is then enhanced. But it is foreseeable that ‘improvements’ of
this kind will soon happen concurrently with the shooting of a photo. Such a delegation
is not only one from human to machine but also from photographer to manufacturer –
and this is where Steyerl’s program of a critique of proxy politics applies. It is a critique
of the apparatus that Vilém Flusser already demanded some thirty years ago when he
described photographic images as straight expressions of the apparatus itself ([1983]
2000). For Flusser, however, the then still stable apparatus was apparently something
given, not, as for Steyerl, something made.
Steyerl’s anecdote might be understood to mean that images have become mere reverberations of memories. But this is nothing new, as our photographic culture has always
featured a high degree of convention, with people tending to reproduce existing images rather
than make new ones. The difference is that more and more of these conventions are
now black-boxed in the apparatus. An early example of this is cameras with smile
detection, where the facial expression of the photographed person actually triggers the
exposure. Such translations between the domains of humans and machines have been
researched by science and technology studies and in particular with the Actor Network
Theory of Michel Callon (1986), Bruno Latour (1991, 1999), and others.1
What I consider more relevant here is the disappearance of what has been regarded
as an original or primary image not in the sense of an authentic representation but as
something material or at least accessible. The primary rhetorical figure in the discourse
of how new image technologies possibly trick us into misrepresentations of the world is
the before-and-after comparison, which goes back to the well-known Stalinist touch-ups,
in which people disappeared once they fell into disgrace. An artist who has persistently
questioned the idea of an original – whether in regard to photographic images or fine
arts in a broader sense – is Oliver Laric. This is possibly best expressed in Versions,
a video essay that he himself has altered repeatedly over the years.2 Therefore, one
thing that has changed since digital photographs emerged in the 1990s and raised the
question of whether and how they were still indexical or not, is that we are moving
away from calling on an ‘original image’ as a reference when discussing matters of
visual representation.
What kind of indexicality do we have here?
The original image has been replaced by raw data as the primary trace left by reality once it has entered a camera. Raw data – as problematic as the term itself may be
(Gitelman 2013) – in its apparent inaccessibility, however, has structural similarities
to photo negatives and latent images of analog photography. So, when Daniel Rubinstein and Katrina Sluis (2013) point out that digital images are always just one out of
1. In art history this goes along with a reassessment of the role of automatisms. In a special issue of
Texte zur Kunst on photography, Isabelle Graw and Benjamin Buchloh (2015) discuss the role of authors
in automatisms and Graw describes Rauschenberg’s Tire Print as the result of an authorship that includes
Rauschenberg’s car.
2. One version can be found here: https://vimeo.com/95946485.
many possible visual representations of the underlying data, we can say the same in
regard to the latent images of photochemical exposures. The fragile connection between data and image was already a point of interest in the discussions of the 1990s.
Artist (and publisher) Andreas Müller-Pohle, for example, translated a digital scan of
Nicéphore Niépce’s famous first photograph into a variety of decorative data prints.3
What at first comes across as awe at large amounts of data still articulates, even today,
our inability to establish a meaningful connection between the two ontological domains. Müller-Pohle’s title Digital Scores is possibly more revealing than the
panels themselves, as it suggests that data is both a trace, or result, and something that
needs to be performed or retranslated into an aesthetic form. And again, all this also
applies to the latent images and negatives of analog photography, which were widely
ignored by traditional photo theories. The technological change that we are witnessing
might change our view of photography and the questions we are asking to a higher
degree than the medium of photography itself.
Computational photography makes us aware of a paradoxical situation: There is an
indexical relationship between the photographed object and the raw data a camera
collects. But this raw data – the noise that Steyerl described – is of limited or no value
for the beholder. Unlike the indexes that we find in Peirce – the smoke, the weathercock, etc. – camera data can no longer be read by a human interpretant, and thus
its indexical character remains unfulfilled behind an opaque wall of numeric abstraction. The representational function of photography only becomes possible with
a subsequent step of interpretation, combination, and other non-indexical procedures.
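One elementary instance of this interpretive step can be sketched in code. The following is a simplified, hypothetical illustration of demosaicing, the procedure by which the single-channel raw sensor mosaic (here assumed to follow an RGGB Bayer pattern) is first turned into a viewable RGB image; actual raw converters layer white balance, denoising, and tone curves on top of this.

```python
import numpy as np

def demosaic_rggb(raw):
    """Turn a raw Bayer mosaic (2-D array, RGGB pattern) into an RGB image
    by averaging, per 2x2 cell, the samples of each color channel.
    A crude stand-in for the interpretation step real cameras perform."""
    h, w = raw.shape
    rgb = np.zeros((h // 2, w // 2, 3))
    rgb[..., 0] = raw[0::2, 0::2]                          # red samples
    rgb[..., 1] = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2  # two green samples
    rgb[..., 2] = raw[1::2, 1::2]                          # blue samples
    return rgb

# A tiny synthetic mosaic: uniform gray light hitting every photosite.
raw = np.full((4, 4), 0.5)
image = demosaic_rggb(raw)
```

Without this step the raw array remains, for the beholder, exactly the kind of unreadable numeric trace described above.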
This second step then also becomes the subject of scholarly critique and artistic inquiry.
Such is the case with recent works by Trevor Paglen where he used machine learning
techniques to reveal how computers connect data with rendered photographs. Paglen
trained a neural network with images of the post-colonial philosopher Frantz Fanon
and then asked the computer to render a portrait based on the features that the machine identified as distinctive of Fanon. In a similar way, he trained his systems to
recognize images associated with taxonomies such as omens and portents, monsters,
and dreams.4
Paglen produces errors or voids that unveil the usually invisible algorithms. He speaks
of invisible images here, as they do not address anybody but represent a closed circuit
of images made by machines for machines. We can see these as intentional and
critical glitches, similar to those that machines occasionally produce themselves,
as with so-called panorama fails, when they fail to combine several images into one.
With Madeleine Akrich we can see these as “De-Scriptions,” as decompilations of the
implemented programs. But unlike Paglen and Steyerl, who combine their aesthetic
3. http://muellerpohle.net/projects/digital-scores/
4. http://www.metropictures.com/exhibitions/trevor-paglen4/selected-works
inquiries with a political agenda that implies a concept of human agency, Akrich suggests keeping the field open as long as possible in order to move freely “between the technical
and the social [. . . ] between the inside and the outside of technical objects” (1992,
206).
How does raw data differ from latent images?
Looking at how raw data is interpreted is one way to discuss computational photography. Another approach is to ask how the collection of raw data itself changes photography. As mentioned above, computational photography often combines several exposures. William Uricchio has analyzed CNN’s online project “The Moment,” which
depicts the inauguration of Barack Obama as US President in 2009. In the wake of citizen journalism, CNN asked people who attended the ceremony and took photos of it
to contribute them to a single, collective photomontage. The submitted images were
then combined and presented with Microsoft’s Photosynth software, which allowed
website visitors to navigate between different viewpoints. The result is a hybrid form
of testimony, which at the same time affirms the documentary quality of photography in
the accumulation of 628 witnessing photos and photographers but also creates glitches
and tensions between these photos simply because, in contradiction to the project
title, it does not represent a single moment. In Uricchio’s view, “there is no correct or
authorised viewing position, no ‘master shot’ within which everything else is a recomposition. Instead, there is simply a three-dimensional space made up of many textures
and granularities, and the means to move within it.” (2011, 30)
Taking the recently discontinued Photosynth software as a forerunner of computational photography inside cameras, we can say that one difference from earlier modes
of photography is the dissolution of temporal and spatial singularities that find their
way into an image. An image of computational photography no longer refers to
a specific view of the camera; it aggregates points in time and space and thus evades
the central perspective of the Renaissance. This not only affects the anthropomorphic
viewpoint but also the virtual plane placed between the eye and the scene as the raw
data often preserves three-dimensional information. This is the case with the Kinect
camera, which Microsoft introduced in 2010, and most recently in Apple’s iPhone X,
which uses 3D data for (among other things) post hoc lighting changes, where virtual
illumination hits the spatial representation of a situation before it is rendered as an
image. Another technique is light field photography, where the light from a situation
is captured in a way that does not yet predetermine its rendering on an image plane.
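How a light field defers the choice of image plane can be suggested with a toy sketch. The following is a hypothetical shift-and-add refocus in the spirit of light field rendering: the captured data is a set of sub-aperture views, and shifting them against each other before summing selects, after the fact, which depth appears in focus. Real light field cameras work with far denser angular sampling and two-dimensional apertures.

```python
import numpy as np

def refocus(views, offsets, shift):
    """Shift-and-add refocusing over 1-D sub-aperture views.
    `views` is a list of 1-D intensity arrays, `offsets` the aperture
    position of each view; `shift` picks the focal depth after capture."""
    acc = np.zeros_like(views[0])
    for view, u in zip(views, offsets):
        acc += np.roll(view, -int(round(shift * u)))  # realign this view
    return acc / len(views)

# A bright point seen from three aperture positions: parallax displaces it.
base = np.zeros(11)
base[5] = 1.0
views = [np.roll(base, -1), base, np.roll(base, 1)]  # views at u = -1, 0, +1
offsets = [-1, 0, 1]

focused = refocus(views, offsets, shift=1.0)    # parallax compensated
defocused = refocus(views, offsets, shift=0.0)  # point smeared across pixels
```

Changing `shift` after the exposure is what it means that the rendering on an image plane is not yet predetermined by the capture.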
Other camera designs foresee the replacement of the single lens with multiple optics of
lower quality, which in combination nonetheless can provide images of higher quality
once their raw data has been merged. In all of these methods, it is not primarily the
image itself that becomes subject to interpretation but the situation and the point of
view that finally transform it into an image. In a laboratory setup with an object, a
camera, and a single light source at the Max Planck Institute for Informatics in Saarbrücken, it was possible to use the data provided by the camera to render an image
from the perspective of the light source (see Hayes 2008, 98). “You can’t have a point of
view in the Electronic Age,” as Marshall McLuhan said.5 Perspective has turned into an
option, a convention, and it is interesting to see how, for example, Paglen’s renderings
try to bypass the question of perspective. While technically they use a virtual camera
for rendering, this camera, however, does not produce a situation that can be seen as
specific. The specificity of these images is that of a typology.
A question that has to be discussed in regard to these applications is whether we locate
the index in the raw data, which is directly connected to the specific situation in which
it was collected, or in the multitude of perceivable signs that can be rendered from
that data. This, however, is maybe not even the most relevant question, as the critical point
where the concept of the index comes under pressure in computational photography is
not the fragile connection between an object and the sign that represents it but the interpretant who reads that sign. In a strict sense, the interpretant for Peirce is not a human
but it stands for the required effect a sign has on an intelligible addressee. “In the 3rd
place it is necessary for a sign to that mind which so considers and if it is not a sign to
any mind it is not a sign at all.” ([1873] 1986, 67) If the active role of an interpretant is
regarded as mandatory to speak of something such as raw data as a sign, and if the classification of such a sign as indexical requires a connection to what it represents, then the
algorithms that are required to read and interpret such data have to be seen as part of
the interpretant, which thus is no longer necessarily human.
Coming back to the title of my paper and the proposal of a ‘probabilistic realism’: computational photography in many ways works against a realism that has to be conceived
as subjective, in the sense that it requires a point of view that somebody or something
has to take. A probabilistic realism, on the other hand, is the result of echoes and feedback in a distributed network. To adapt Peirce’s index to emerging image-making
practices would mean localizing not a simple cause-and-effect relation but recursive
triadic structures in networks of image production.
This also applies to my final and most speculative application of computational photography, as it was drafted by a group of researchers in 2009, and it brings us back to
Roland Barthes’s introduction of the index into photo theory: “If the goal of photography is to capture the visual essence of an object in front of us, then perhaps the
ideal photography studio is not a room full of lights and boxlike cameras at all, but a
flexible cloth we can rub gently over the surface of the object itself. The cloth would
5. https://www.youtube.com/watch?v=9P8gUNAVSt8
hold microscopic, interleaved video projectors and video cameras. It would emit hyperspectrally colorful patterns of light in all possible directions from all possible points on
the cloth (a flexible 4D light source), while simultaneously making coordinated hyperspectral measurements in all possible directions from all possible points on the cloth (a
flexible 4D camera). Wiping the cloth over a surface would illuminate and photograph
inside even the tiniest crack or vent hole of the object, banishing occlusion from the
data set; a quick wipe would characterize any rigid object thoroughly.” (Raskar et al.
2009)
References
Akrich, Madeleine. 1992. “The De-Scription of Technical Objects.” In Shaping Technology/Building Society: Studies in Sociotechnical Change,
edited by Wiebe E. Bijker and John Law, 205–24. Cambridge, MA: MIT Press.
Callon, Michel. 1986. “Some Elements of a Sociology of Translation: Domestication of
the Scallops and the Fishermen of St. Brieuc Bay.” In Power, Action, and Belief: A New
Sociology of Knowledge?, edited by John Law, 196–233. London: Routledge & Kegan
Paul.
Dror, Gideon. 2011. “Machine Learning for Digital Face Beautification: Methods and
Applications.” In Computational Photography: Methods and Applications, 419–43. Boca
Raton, FL: CRC.
Flusser, Vilém. (1983) 2000. Towards a Philosophy of Photography. Translated by Anthony Mathews. London: Reaktion.
Gitelman, Lisa, ed. 2013. “Raw Data” Is an Oxymoron. Cambridge, MA: MIT Press.
Graw, Isabelle, and Benjamin H. D. Buchloh. 2015. “Lost Traces of Life.” Texte zur Kunst
25 (99): 42–55.
Hayes, Brian. 2008. “Computational Photography.” American Scientist 96 (2): 94–99.
Latour, Bruno. 1991. “Technology Is Society Made Durable.” In A Sociology of Monsters: Essays on Power, Technology and
Domination, edited by John Law, 103–31. London: Routledge.
———. 1999. Pandora’s Hope: Essays on the Reality of Science Studies. Cambridge, MA:
Harvard UP.
Peirce, Charles Sanders. (1873) 1986. “On the Nature of Signs.” In Writings of Charles S.
Peirce: A Chronological Edition, Volume 3: 1872–1878, edited by Christian J. W. Kloesel.
Bloomington: Indiana UP.
Raskar, Ramesh, Jack Tumblin, Ankit Mohan, Amit Agrawal, and Yuanzen Li. 2009.
“Computational Photography.” In Frontiers in Optics 2009/Laser Science XXV/Fall 2009
OSA Optics & Photonics Technical Digest. OSA. doi:10.1364/cosi.2009.ctua1.
Rubinstein, Daniel, and Katrina Sluis. 2013. “The Digital Image in Photographic Culture:
Algorithmic Photography and the Crisis of Representation.” In The Photographic Image
in Digital Culture, edited by Martin Lister, 2nd ed., 22–40. London: Routledge.
Steyerl, Hito. 2014. “Proxy Politics: Signal and Noise.” e-flux, December. http://www.
e-flux.com/journal/proxy-politics/.
Uricchio, William. 2011. “The Algorithmic Turn: Photosynth, Augmented Reality and
the Changing Implications of the Image.” Visual Studies 26 (1). Informa UK: 25–35.
doi:10.1080/1472586x.2011.548486.