Making Space
by Sean Cubitt

Senses of Cinema | http://www.sensesofcinema.com/2010/feature-articles/making-space/ [accessed 12/01/2011 11:51:42]
Sean Cubitt is Director of the Program in Media and Communications at the University of Melbourne. His
publications include Timeshift, Videography, Digital Aesthetics, Simulation and Social Theory, The Cinema Effect
and EcoMedia. He is series editor for Leonardo Books at MIT Press.
Bitmap, Colour, Codec
The meaning of the term cinema has changed. Today we experience film across large and small screens, only relatively rarely projected outside lecture halls and the theatrical circuits. We watch in aircraft, on the bus, on handhelds, with earphones, on TV and plasma screens, on tape, DVD and Blu-Ray, and in a dozen internet formats. The 35mm photographic film that gave the medium its name is no longer universal in filming, is passing out of post-production, and is scheduled for demolition in distribution. As a community of scholars, we are still trying to understand what it is that we are looking at now.
The experience of digital cinema starts with screens. The two major formats for digital projection, digital light projection (DLP) and liquid crystal on silicon (LCoS), are geometrically the same as the dominant formats for fixed fluorescent screens, liquid crystal displays (LCD) and plasma screens. Big architectural screens using light-emitting diodes (LEDs) share the same structure. The same geometry even dominates the formats of digital cameras, charge-coupled device (CCD) and complementary metal-oxide semiconductor (CMOS) chips. This geometry is the rectangular grid of pixels, ordered arithmetically from the top left. Because it is designed to handle any output from a computer, such screen displays are more deeply sealed into contemporary screen culture than, say, standard aspect ratios were in the cinema. However your image is produced, it has to be displayed on the raster grid, also known as the bitmap display.
There is an interesting history to this, stretching back to half-tone printing and the development of wire photos: the former essential to the mass dissemination of photography, the latter to its speedy transmission for journalism. The principle of synchronised scanning moved from experiments with pioneer fax technologies around 1900 into the cathode ray tube (CRT). The old CRTs were pretty blurry, so Sony introduced the Trinitron, which placed a shadow mask, a grid of fine lines, in front of the more or less random spray of phosphorescent molecules lining the inside of the tube, to give the impression both of richer blacks and of crisper definition. The shadow mask principle was adopted for computer displays, where each spot on the screen was given a numerical address: so many pixels to the right of, or below, the origin at top left. Those addresses are now hardwired into the computer's operating system. Redesigning the screen would mean redesigning pretty much everything from the operating system up.
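The addressing scheme described above can be sketched in a few lines. This is an illustrative toy, not any particular operating system's implementation: the function name `pixel_offset` and the 1280×1024 raster are assumptions chosen to match the display discussed later in the article.

```python
# Hypothetical sketch of raster addressing: every pixel is reached by
# counting so many columns right and so many rows down from the origin
# at top left, which collapses to a single offset into linear memory.

WIDTH, HEIGHT = 1280, 1024  # a typical fixed raster

def pixel_offset(x: int, y: int) -> int:
    """Linear memory offset of pixel (x, y), origin at top left."""
    if not (0 <= x < WIDTH and 0 <= y < HEIGHT):
        raise ValueError("address outside the bitmap")
    return y * WIDTH + x

print(pixel_offset(0, 0))        # origin: 0
print(pixel_offset(1279, 1023))  # bottom-right pixel: 1310719
```

The point of the sketch is that the grid is baked into arithmetic: change the width and every address in the system changes, which is why redesigning the screen means redesigning everything above it.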
FIRST PUBLISHED 20 December 2010 · filed under FEATURE ARTICLES in ISSUE 57

The other numerical factor in digital displays is the colour depth, defined by the number of bits allocated per pixel. Obviously the more data you attach to each pixel, the larger the number of colours
that can be shown. The problem is the second factor in colour display, the gamut, which is the range of colours that can actually be reproduced. In pretty much any of the devices I just listed, cameras included, and in the vast majority of printers, the gamut is only around 40% of the visible spectrum. Some colours are just hard to get with the standard phosphor colours: red, green and blue. The RGB gamut isn't very good at bright yellow, which in the human retina overlaps the red (actually, in our eyes, more orange) and green cones, and so appears the brightest of colours. (Sharp introduced a four-colour TV system, the AQUOS Quattron, including yellow pixels, in 2010. Unavailable as yet in Australia, it has not been possible for the author to assess its optical qualities: http://ces.cnet.com/8301-31045_1-10426897-269.html.) RGB phosphors don't quite match the breadth of wavebands of the respective cones in the human eye, and across much of the spectrum they can't achieve the same brightness. This happens especially with blues: we are highly sensitive to blue, which overlaps with our scotopic (colourless) night vision, and we can see a long way into the blue end of the spectrum. But getting that dim region to shine brightly on screens would take too much power (and generate too much heat). So software engineers bring in a third tool in colour management: colour difference. Various input devices, like cameras, record the fact that light of the visible wavelengths is arriving. They pass that data on to the software. The software then squeezes the incoming light into the available gamut. But rather than simply move the extreme blues and yellows, etcetera, into the nearest available spot, they calculate the difference between colours, say between a mauve and a magenta, and try to keep the relative difference between them. The whole convolution moves all the colours, not just the extreme ones, so the result looks as if it has as much colour contrast, even though there is a lesser total range than in ordinary visible light.
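The contrast-preserving squeeze can be illustrated with a deliberately crude one-dimensional sketch. Real gamut mapping works in multi-dimensional colour spaces with perceptual weighting; the linear `squeeze` below is an assumption made for clarity, showing only the principle that every value moves, not just the out-of-range extremes.

```python
# A toy sketch (not any real colour-management algorithm) of the idea that
# the whole range is rescaled, preserving relative differences, rather than
# clipping only the out-of-gamut extremes to the nearest displayable value.

def squeeze(values, src_lo, src_hi, dst_lo, dst_hi):
    """Map every value from the source range into the display gamut,
    keeping the ratios of differences between values intact."""
    scale = (dst_hi - dst_lo) / (src_hi - src_lo)
    return [dst_lo + (v - src_lo) * scale for v in values]

# Scene values spanning 0..200 must fit a display that only reaches 0..100.
scene = [0, 50, 120, 200]
shown = squeeze(scene, 0, 200, 0, 100)
print(shown)  # [0.0, 25.0, 60.0, 100.0] -- every value moves, not just 120 and 200
```

Clipping would leave 0 and 50 untouched and pile 120 and 200 together at 100; the squeeze instead keeps the relative spacing, which is why the result still looks as if it has full colour contrast.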
So the screen is a grid, and the colour is manipulated. The third major factor in digital display is the compression-decompression algorithms, or codecs. To get from visible light to a computer screen, the wavelengths of light are converted into numbers matching the colours, and addresses matching where they are to appear. That is a considerable number of numbers for each pixel. For a good domestic 1280×1024 high-definition display, that would be 1,310,720 pixels, and the pixels of the bottom right quadrant have to have a minimum of eight digits in their addresses (obviously more in binary!). Change these values once every 25th of a second over the two-hour runtime of a feature film and you see the problem: vast quantities of data (and we haven't mentioned sound yet).
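The scale of the problem is worth checking with back-of-envelope arithmetic. The figures below assume 24-bit colour (three bytes per pixel), an assumption not stated in the article, on top of the 1280×1024 raster, 25 frames per second and two-hour runtime that it does give.

```python
# Back-of-envelope check of the article's point: uncompressed video is vast.
# Assumes 24 bits (3 bytes) of colour per pixel -- an illustrative choice.

width, height = 1280, 1024
bytes_per_pixel = 3                 # 24-bit colour (assumed)
fps = 25
runtime_s = 2 * 60 * 60             # two-hour feature

frame_bytes = width * height * bytes_per_pixel   # 3,932,160 bytes per frame
total_bytes = frame_bytes * fps * runtime_s
print(total_bytes)  # 707788800000 -- roughly 708 GB before sound or compression
```

Even before addressing, metadata or audio, the raw picture runs to hundreds of gigabytes, which is why everything downstream of the sensor depends on codecs.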
A similar problem arises even in shooting with digital cameras. An individual frame is exposed: the light causes the release of electrons. That creates charge. The charge is guided, through the use of positively and negatively charged gutters in the chip, through a timer gate into storage. So far it is just charge, the equivalent of the latent image in traditional photography. It has to be converted into numbers in order to be taken over into digital storage. Typically this is handled by processing the collected data in batches called blocks or groups of blocks: four neighbouring pixels, or sixteen, checked to see whether they are pretty much the same colour. If so, the same colour number is applied to the lot, making it much quicker to get them out of the vulnerable charge state and into the digital state, so making room for the next frame one 25th of a second later.
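The block idea can be sketched as follows. This is a hedged illustration of the principle only, not any real camera's readout pipeline: the `tolerance` threshold and the tuple return format are assumptions made for the example.

```python
# Toy sketch of block processing: if the four pixels of a 2x2 block are
# close enough in value, one number stands in for all four; otherwise the
# block is kept raw. Not a real sensor codec -- an illustration.

def encode_block(block, tolerance=2):
    """block: four 8-bit values for a 2x2 group of neighbouring pixels."""
    if max(block) - min(block) <= tolerance:
        avg = round(sum(block) / 4)   # the same number applied to the lot
        return ("uniform", avg)
    return ("raw", block)             # too different: keep all four values

print(encode_block([118, 119, 119, 120]))  # ('uniform', 119)
print(encode_block([40, 200, 41, 39]))     # ('raw', [40, 200, 41, 39])
```

A nearly uniform block collapses to a single number, which is what makes it quick to clear the charge state; a block containing a real edge has to be kept whole.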
Getting data from the camera's storage into a computer, and then into editing software, may require other compression phases. Previewing on a monitor requires further compression. Then there's the question of delivering it to audiences. Terrestrial broadcasting uses pretty strenuous compression to get the data through crowded airwaves. Internet and mobile media likewise: even with broadband, time and the cost per byte of data transfer have to be balanced against the resolution, colour depth and accuracy that viewers will accept. Even DVD has to fit everything onto a finite space, and get it from the disc to the screen as rapidly as possible. This is the work of codecs. To cut a long story short, all codecs crush the image. YouTube crushes hard. A Blu-Ray movie or an FC Pro file crushes considerably less. But everything from the moment of digitisation on is crushed. I need only make one last point: the means for doing this include a process called vector prediction.
This is infuriating, only partly because vectors have a very different role elsewhere in digital imaging. Animators will recognise the principles: the codec seeks out keyframes where there is substantial change across the whole field of the image. Then it automates in-betweening to get from the first keyframe in a sequence to the last, sampling as it goes along to update and check for basic accuracy. The idea is that if you're watching the cricket, you aren't watching the grass, which can be relied on to stay the same colour. Big saving in information. Vector prediction then tells the software which areas of the image seem to change most, and which least, from frame to frame. Those that don't change come out as blocky patches, because the unit is those Groups of Blocks, fetchingly referred to as GoBs.
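The saving described above can be sketched with a simplified inter-frame delta. Real codecs predict motion vectors per block rather than just testing equality; the `delta_frame` function and the single-number "blocks" are illustrative assumptions.

```python
# A simplified illustration (not a real codec) of the saving described
# above: between keyframes, transmit only the blocks that changed; the
# blocks that stay the same -- the grass -- cost nothing.

def delta_frame(prev, curr):
    """prev, curr: lists of per-block values. Return {index: new_value}
    for blocks that differ; all others are predicted from the last frame."""
    return {i: c for i, (p, c) in enumerate(zip(prev, curr)) if p != c}

grass, sky, player = 30, 200, 91
frame1 = [grass, grass, sky, player]
frame2 = [grass, grass, sky, player + 4]   # only the player's block moves
print(delta_frame(frame1, frame2))  # {3: 95} -- one block sent instead of four
```

Only the changed block travels; the static regions are reconstructed from the previous frame, which is exactly why they emerge as blocky patches when the prediction is coarse.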
So what does all this mean? I'll concentrate on one device where these three features come together: the CCD chip standard in most digital video and still cameras. Light comes in as a rain of photons over the duration that the shutter is open: let's say slightly less than a 25th of a second for a video frame. The photons have different wavelengths, and even though they arrive over a very short period by human standards, recall that they are travelling at the speed of light. There are lots of them. When they arrive, they react with the CCD to produce a charge. This charge is an average of the wavelengths of all the different photons, and the average is applied across the width of the pixel square. This averaging involves effectively sampling from the whole spectrum falling in the square and arriving at a single figure to represent it. When the charge is deposited in digital form for storage, the average is rounded up or down to a whole number: there are no digital fractions. The frequency of the averaged waveform, which we see as a colour, is then managed, convoluted to match the available colour gamut. The question here is not proximity to human vision but the translation of flux into averaged unit steps. The process is geared towards a good-enough rendering of the scene, as measured against the visual perception of a standard observer.
The standard observer also has an interesting history. In 1931, arguments between physicists and psychologists over how to standardise colour descriptions came to a head. Both agreed that the Commission Internationale de l'Éclairage, one of the older international standards bodies, had as a remit the task of providing metrics for the notoriously unstable and tricky field of pigments, dyes and light-sources. How could you guarantee that the colour of your flag was the same from year to year, given the vagaries of bleaching, fading, rapidly changing industrial colour manufacture, different printing technologies and their inks, and different conditions of illumination? Unable to find a common language for the dispute over whether colour was a matter of the physics of frequency or the subjective impression of observers under different light conditions, the parties were deadlocked until a group of US psychologists, inspired by the social physics of Alphonse Quetelet, inventor of statistical techniques for sociology, (1) tested a group of students. Testing their responses to colour cards under different lighting conditions, they constructed, as Quetelet had constructed the "average man", a "standard observer". (2) It is, I believe, a triumphant moment in the establishment of what Michel Foucault calls biopolitics (3): the replacement of rule through panoptic discipline by the management of populations through probability and statistical goal-setting.
At the same time, the whole-number enumeration of things parallels another critical factor in the political economy of our times: the commodity form. The harsh reality is that once a colour value has been described in the whole-number language of hexadecimal, or the other coding systems (HSV, LAB, RGB) available in digital devices, it is exchangeable. Not only can the number be handled arithmetically, as we do with Photoshop filters; it can be picked up and moved. Colour values are no longer semantic, grounded in use, but arithmetic, based in exchange. Between the processes of statistical averaging and arithmetical description, biopolitics and information commodity, light is manufactured for digital cinema in technologies which are symptomatic of a very precise condition of contemporary social organisation: the database economy.
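The claim that a hexadecimal colour "can be handled arithmetically" is concrete enough to demonstrate. The `brighten` function below is a hypothetical example of my own, not a Photoshop filter: a crude brightness adjustment is literally addition, channel by channel, on the whole number that stands for the colour.

```python
# Illustration of the point above: once a colour is a whole number, it can
# be handled arithmetically. This toy "filter" adds a constant to each
# 8-bit channel of a 24-bit RGB value, clamped to the displayable range.

def brighten(hex_rgb: int, amount: int) -> int:
    r = min(255, ((hex_rgb >> 16) & 0xFF) + amount)
    g = min(255, ((hex_rgb >> 8) & 0xFF) + amount)
    b = min(255, (hex_rgb & 0xFF) + amount)
    return (r << 16) | (g << 8) | b

mauve = 0xE0B0FF
print(hex(brighten(mauve, 0x10)))  # 0xf0c0ff -- the colour shifted as a number
```

Nothing in the operation refers to what the colour means or where it came from; it is pure exchangeable arithmetic, which is the article's point.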
So now we have the basic building blocks of digital cinema: bitmaps, colour management and codecs.
The first traces its history back to printing, losing in the process much of the textural richness of older
techniques. The second can be traced back through the industrialisation and standardisation of
colour. The third, codecs, has an interesting relationship with the geometry of perspective, which is
where, at last, I want to begin.
Layers, Grading and Sprites
The Teatro Olimpico in Vicenza, designed by Palladio and completed by Scamozzi, is a triumph of Baroque stage design, with its forced perspective only clearly visible from the central area of the auditorium (the confusing side-on view being blanked out by a false wall of neo-classical ornament). The stage made sense only from the vantage of the best seat in the house: the monarch's. By the 19th century, a mass, popular theatre could not afford to permit its paying customers only the confused eye-lines that this monocular construction created from their vantage points in the pit or the gods.

In the same period, the advance of gas and later electric lighting, replacing the footlights of the previous century, illuminated the stage in far greater detail than had previously been the case, showing up the crude materiality of the set construction all too vividly. Abandoning such techniques as flanking the stage with painted perspectival walls, the new stage design of the 18th and 19th centuries had moved to the use of flats, which preserved the illusion of depth from multiple viewpoints. (4) The virtual space of the stage was democratised by the abandonment of monocular perspective, but at the expense of the perfection previously reserved for the monarch's best seat in the house.
It's easy to see how theatre confused and confounded moralists from Plato to the Taliban. Henri Lefebvre makes plain the stakes: "To the question of whether such a space is a representation of space or a representational space, the answer must be neither or both." (5) The conundrum haunts not only the forced perspective of the Baroque and the theatrical flats of the melodramatic stage, but the use of layers in digital imaging. Though there are other genealogies to trace (the multilayered images of Len Lye's abstract films, the artistic developments of silkscreen printing in the immediately pre-digital age), theatrical staging has a direct relevance to the development of photography: the backdrops and flats of studio photography and cinema, Georges Méliès' extensive use of flats. Unsurprisingly, then, layers have become key components of digital imaging.
And they were there, in a very specific usage (focal planes), throughout the history of cinematography. A practice prevalent at some stages in cinema history has been to use the focal planes of the camera lens to isolate only one plane of action for attention, leaving backgrounds and foregrounds out of focus and difficult to see. This was the characteristic which André Bazin attacked, promoting instead the use of deep focus, which allows the viewer the freedom to look anywhere on the screen and see things clearly. But as Bazin already knew, no technique is integrally and exclusively progressive. Observing that realism in art can only be achieved in one way, through artifice, Bazin offers to define as realist, then, all narrative means tending to bring an added measure of reality to the screen. But every realism is a selection, and necessarily an abstraction, which does not permit the original to subsist in its entirety. The resulting mix of abstractions, conventions and authentic reality produces a necessary illusion, allowing reality to become

identified in the mind of the spectator with its cinematographic representation. As for the film maker, the moment he has secured this unwitting complicity of the public, he is increasingly tempted to ignore reality . . . He is no longer in control of his art. He is its dupe. (6)
Even Bazin's beloved deep focus is a technique open to this necessary process, in which the technique becomes a goal rather than a medium. Today, deep focus and staging in depth are the norm for blockbuster event movies, most of which have only the most tangential relation to Bazinian realism. Anyone who has seen James Cameron's Avatar (2009) in 3D will have a clear idea of the significance of focal planes. Despite enormous technical achievements, the film has to train its audience against one of the pleasures of the modern blockbuster spectacle: the opportunity to let your eyes roam over the whole of the screen. Here we see the inverse of the monarchical seat in the baroque theatre: everyone occupies that perfect seat, but they must occupy it precisely as defined by the film's optical system, or suffer the consequent eyestrain. In Avatar, the layers, intended to stack up as a believable world, introduce schisms between foregrounds and backgrounds. Avatar handles this by training viewers to watch correctly. Other creative uses of layers, like the late Tim Mara's silkscreen works, (7) suggest that this schismatic quality can become a resource: in this instance, particularly, a spatial organisation of the multiple passes and repeated motifs, so that the gaps are no longer merely spatial but extensively temporal, much as, in a rather different way, the colour fields and photographic halftones in Andy Warhol's screenprints float in separate though conjoined spaces in the finished works. This disjuncture between layers is printmaking's equivalent to cinematic parallax, the effect that not only makes distant objects move more slowly relative to an observer than nearby ones, but which allows relative speed to stand in for relative distance, aligning layers in motion so that, rather than the nearest appearing to move fastest with respect to the camera, the fastest appears nearest.
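The parallax relation invoked here has a simple geometry behind it, sketched below under an assumed pinhole-camera model (the function name and unit focal length are illustrative, not drawn from the article): for a camera tracking sideways at constant speed, the projected speed of a stationary object falls off with its distance, which is what lets relative speed stand in for relative depth.

```python
# Minimal sketch of parallax under a pinhole model: image-plane speed of a
# stationary object is inversely proportional to its distance from a
# sideways-tracking camera. Illustrative assumption, not a cited formula.

def apparent_speed(camera_speed: float, distance: float, focal: float = 1.0) -> float:
    """Projected image-plane speed of a stationary object."""
    return focal * camera_speed / distance

for d in (2.0, 10.0, 50.0):
    print(d, apparent_speed(camera_speed=1.0, distance=d))
# the nearest layer (d = 2) sweeps fastest across the frame
```

Run in reverse, this is the trick the article describes: assign a layer a speed, and the viewer infers a depth, whether or not that depth was ever photographed.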
With this exception, layers only rarely function schismatically in digital cinema, where they are the central component in compositing different elements ("plates") into a single, coherent image. Coherence of the story-world is the central issue, and a serious technical challenge. Typically, plates are filmed in different locations, at different scales, often with different equipment, and inevitably under different lighting conditions. The old physical matte, a painted sheet of glass suspended between the camera and the actual location to supply, for example, the extended roofline of Tara in Gone with the Wind (1939), had the great virtue of being shot in the same light as the physical location. Physical action (likely to be shot against green-screen, so tinted with the ambient light of the studio), digital cyclorama backdrops (often of daylit locations), miniatures shot under illumination built to make them look their best, plus CGI (computer-generated imagery illuminated by virtual light) have to be brought together into a coherent world with, for choice, a coherent and cogent global light which will provide highlights and shadows aligned across the various elements in the finished sequence.
A critical tool here is grading, the process of correcting colour within and between shots. Long the province of laboratories, grading moved to the control of creative teams with the development of digital tools. The major innovator was a UK-based company, 5D, whose Colossus program was radically redeveloped during work on Peter Jackson's Lord of the Rings trilogy (2001, 2002, 2003), and later commercialised as Lustre, now part of the Autodesk suite. Monsters vs Aliens (2009) used the four channels available in current versions of the program to adjust and colour-correct characters, backgrounds, mattes and visual effects in an entirely CGI feature. Critically, the program also allows for the holistic treatment of whole scenes and sequences, seen as a major selling point. In this instance, where all elements are CGI-derived, there is no case to answer concerning the practice of applying a global light, since all light on the screen is digital. One reason connoisseurs of effects find this a more convincing 3D than Cameron's film is that when global light is applied to live actors, there is a sense of dimensionality being lost or, more properly, flattened. Colour correction (on the complementary Primatte software used to remove colour artefacts derived from blue- or green-screen shoots) changes the specificity of the light in which they were photographed (for example, translucent blonde, grey or white hair will show more of the reference chroma colour). But the artefacts of reference colours bleeding into the matted image of the actor are more than accidents: they are evidence of the actual time and conditions of filming. Stripping out those colours, replacing them with pixel-mapped corrected colours, and placing them in sequences where they are blended in with the illumination holding for the whole scene smoothes out the differences, and thereby reduces the possibility of a dialogue between layers, on the principle that dialogue can only occur between identical interlocutors.
The Battle of Pelennor Fields in The Return of the King (2003) is a case in point. The battle scenes
were filmed on two locations in the South and North Islands of New Zealand, and in the studio
backlot in Wellington. Digital cycloramas derived from mountain ranges in the North and South
Islands provided backdrops. Real horses shot on location were blended with digital horses and riders.
Built for the first time in Massive, until then only used for more distant creatures, digital doubles of
horse-and-rider pairs were constructed using emulations of horse and rider musculature, with
animated maps of the skins to provide the sheen of their pelts. To gather data for these elements,
Weta Digital arranged to film horses wearing reflective markers in a 52-camera motion-capture rig,
designed to abstract the points into wireframe models of horse motion for the animators. Teams of
rotoscope artists separated real riders from their location backgrounds in order to place the animated
riders in among them. Mark Tait Lewis, 2D effects supervisor, recalls two further elements in
constructing the digital horses:
Lighting matched the models as closely as possible to the horses in the [live action] plate but
black levels changed constantly because horses were kicking up dust. So we animated those
values, adding haze and contrast levels to match the plate. We also added real horsetails. Some
of the hair-sim[ulation]s were not matching the real thing up close, so [visual effects supervisor
and second unit director] Jim Rygiel and I shot a bunch of horses' asses bouncing up and down.
We stabilized them, removed the tails and pinned them to our CG horses. (8)
Animated hoofprints and shadows added to practical (live-action) mud and dust completed the scene.
Perhaps the most intriguing aspect of this process is the reverse move of two processes: abstracting motion from live horses, and applying real horse footage to the resulting animation. This second technique, called sprites, is widely used in effects. It involves applying photographic footage of practical elements, from grass and water to skin and fur, to the digital models, in place of the kind of digitally-produced surfaces in Monsters vs Aliens and other digital animations. It is here that the process of abstraction, in the sense intended by Bazin, really strikes home. Abstracting the geometry of motion from live action stretches back to the chronophotographic experiments of Étienne-Jules Marey and Eadweard Muybridge in the late 19th century, (9) and in many respects into the long history of anatomical drawing. But reapplying textures and shades excised from their actual instances in order to apply them to digital models is not the abstraction of science but of witchcraft.
Lustre and the other tools available to professionals working on digital intermediates can reset the colours, and the effective virtual illumination, not only of CGI but of practical elements. Not only divorced from their sources, they are welded into a single, comprehensive and enclosed world whose coherence as a fictional universe depends on the seamless blending of its elements. Defined by the frame of the screen, the world produced is a universe, and its principle of coherence is made universal within that world. One effect of this is to make the frame a far more significant element of the composition than in live action, where the unexpected can always come into frame from beyond its boundaries. Nothing unexpected can occur in such discrete universes as Cameron's Avatar. At best, practical dust and mud, or practical grass, provide a semblance of contingency, but even these can be subjected to flow-fields in film, or physics engines in games, to ensure they abide not only by the global light of the scene but by the virtual weather in the story world. In layered compositing, the only remnant of traditional constructions of pictorial space is fog: the use of diffusion to dim the light reaching the perceiver from more distant objects, a variant on the painterly technique of chiaroscuro.
Perspective, Projection and Vectors
Sprites are a kind of limit case of both compositing and the ubiquity of layers in digital imaging. I have argued here that they can serve to produce coherence. That coherence, like the positioning of the ideal spectator in Avatar's polarised 3D, places the onus of construction on the screen side, leaving the viewer to be either dupe or connoisseur, or to refuse the offered images. Yet as Norman Klein argues, concluding his historical and critical survey, "Special effects terrorize with surgical irony, precision-tooled like the old automaton itself. But they also rely on omissions, evasions, absence." (10) To understand the contradictions at the heart of layers, we have to look deeper into the construction of the third dimension on two-dimensional surfaces. We need to understand the many meanings and practices of projection.
Oddly, formal Renaissance geometric perspective of the Alberti and Brunelleschi style is relatively unimportant to digital cinema. Though apparatus theory of the 1970s drew heavily on Erwin Panofsky's analysis of perspective, cinema as a whole has not characteristically looked like Piero della Francesca's Città Ideale. (12) But there are some very intriguing parallels. One of Alberti's innovations was

a veil woven of very thin thread and loose texture, dyed with any colour, subdivided with thicker threads according to parallel positions, in as many squares as you like, and held stretched by a frame. Which indeed I place between the object to be represented and the eye, so that the visual pyramid penetrates through the thinness of the veil. (13)

This is the device picked up and popularised by Dürer in the Unterweysung (14) almost a century later. Equally intriguing is Alberti's contention that black and white are the most important colours, a belief commonly held at the time, but for an unusual reason: the distribution of black and white produces what came to be praised in the Athenian painter Nicias, or what an artist must greatly look for: "that his painted objects appear very much to protrude". (15)
Alberti's technique of the veil, with its tracery of fine lines, suggests a shared history with another visual technology which has had an even more important structuring role in digital visual culture: the map. Alberti inherited the general idea from scale drawing, the practice of designing mural-sized works on gridded panels for transfer to walls. This scalability is intrinsic to the geometric principle of projection, which renders something on a larger or smaller surface. The Mercator projection is just one of the many that have guided and shaped cartography, and now geographic information systems. Architects likewise have employed axonometric and other projections. Although they have a very different genealogy, both databases and spreadsheets, as spatial representations of temporal relationships, also use a kind of projection, and in turn are used to produce projections of future states of affairs. And of course this is the optical principle of projecting, in the case of DLP for example, from tiny square mirrors onto large screens.
The second Alberti quote, about protruding objects, goes against the idea of the picture as a window, so deeply associated with his name, suggesting instead a reversal: that it is not we who look into the picture, but the picture that penetrates our world. The importance of black and white in this is indicated by Samuel Edgerton (17) with an illustration from Christoph Jamnitzer's (18) book of engravings, a medium deeply associated with restriction of the palette to black and white. This protrusion based on extremes of light and shade is central to trompe-l'oeil painting, from Caravaggio to Hoogstraten, who also designed a projecting shadow theatre as well as the delightful trompe-l'oeil perspective box in the National Gallery, London. The Jamnitzer engravings are obviously geometrical in construction. This is the first significant aspect of the construction of 3D space that does not involve a specifically arithmetic relationship.
A clear expression of what is at stake can be derived from inspecting an important technique in
creating volume in digital media: shading. Of the many types of shading used in digital imaging,
Gouraud and Phong are the most widespread. The former was invented in 1971. Gouraud shading
uses samples of illumination effects on curved surfaces, extrapolating from them the likely effects of
lighting on objects constructed from polygons, geometric primitives used in wireframe CGI. Curves in
polygon construction are composed of smaller, usually regular, flat geometric shapes selected by
averaging the tangents to the local curvature of the object as a single plane. For each of these flat
surfaces, there is a surface normal, that is, a line perpendicular to the flat plane. Once a virtual light
source has been constructed, it will strike one polygon at the perpendicular, and neighbouring
polygons at gradually more oblique angles. Gouraud shading takes the average of surface normals for
coloured polygons meeting at a specific point or vertex, and interpolates a colour value based on the
average. The beauty of Gouraud shading is its efficiency. Interpolating an average is swift, and uses
less computing power than trying to trace every point on a surface. The drawback, however, is that if a
highlight occurs anywhere other than at a vertex, it may not be captured by the average, and is either
lost or cut off abruptly, giving a tessellated effect. One solution is to design objects with simple
geometric surfaces, as is the case in many computer games, and to restrict the number of light
sources involved. The results are generally felt to be acceptable where interaction and therefore speed
of computing is the major attraction, but for more sophisticated shading, the Phong system is
preferred.
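The per-vertex averaging at the heart of Gouraud's method can be sketched in a few lines of Python. This is an illustrative simplification, not the full rendering pipeline: the lighting term is a plain Lambertian diffuse calculation, and the barycentric weights stand in for the rasteriser's scan-line interpolation.

```python
import math

def normalize(v):
    """Scale a vector to unit length."""
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def lambert(normal, light_dir, base):
    """Diffuse (Lambertian) intensity for a surface normal and light direction."""
    return base * max(dot(normalize(normal), normalize(light_dir)), 0.0)

def gouraud(vertex_normals, light_dir, base, weights):
    """Gouraud shading: light each vertex once, then linearly interpolate
    the resulting intensities across the face (weights are barycentric
    coordinates summing to 1). A highlight falling between vertices is
    averaged away -- the tessellation artefact described above."""
    vertex_intensities = [lambert(n, light_dir, base) for n in vertex_normals]
    return sum(w * i for w, i in zip(weights, vertex_intensities))
```

The efficiency the article describes is visible here: lighting is computed only three times per triangle, however many pixels the triangle covers.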
Phong shading began with the observation that rough surfaces reflect less light than smooth ones: the
more mirror-like a surface, the more light it returns. Bui Tuong Phong established a complex
algorithm for calculating relations between the ambient tone of the object, diffuse reflections typical
of rough surfaces and highlights typical of shiny ones, and various aspects of reflection. This
algorithm is shared with Gouraud shading, but in Phong shading, reflectance is calculated at each
pixel, not just the vertices of adjoining polygons. The result is much smoother rendering, albeit at the
expense of much heavier use of computing power. The trick is to presume that the curvature of the
surface varies smoothly and constantly. Phong shading provides the characteristic billiard ball style of
computer graphics, although it allows for a wide range of effects, especially when combined with
other 3D graphics tools. Yet like Gouraud shading, it averages around a unit: the unit of the pixel, far
smaller than a polygon, but still a finite and enumerable quantity. The oddity is that the smoothly
changing gradients it presumes are based not on pixels but on vectors, which are not
intrinsically unit-based.
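The relation Phong established between ambient tone, diffuse reflection and specular highlight can be sketched as follows, evaluated once per pixel rather than per vertex. The coefficients (ka, kd, ks) and the shininess exponent are illustrative values, not canonical ones.

```python
import math

def _normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong(normal, light_dir, view_dir, ka=0.1, kd=0.6, ks=0.3, shininess=32):
    """Phong reflection model: ambient term + diffuse (rough-surface)
    term + specular (mirror-like) highlight, evaluated at one pixel."""
    n, l, v = _normalize(normal), _normalize(light_dir), _normalize(view_dir)
    diffuse = max(_dot(n, l), 0.0)
    # Mirror reflection of the light ray about the surface normal
    r = tuple(2 * _dot(n, l) * ni - li for ni, li in zip(n, l))
    specular = max(_dot(r, v), 0.0) ** shininess if diffuse > 0 else 0.0
    return ka + kd * diffuse + ks * specular
```

Raising the shininess exponent narrows the specular term, which is what produces the tight "billiard ball" highlight the article mentions.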
About here is where the nitty-gritty of non-uniform rational B-splines (NURBS) and ray-tracing
comes in. I'll spare you the details. The critical thing is that these are vector-based. They return to
Renaissance projective geometries (ray-tracing, for example, works by reversing the flow of photons,
tracing them from the virtual eye to objects rather than the natural world's way of sending photons to
the eye). They involve not pixels adding up to a shape but algebraic descriptions of curves.
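The difference is easy to see in a minimal example. A cubic Bézier curve (the basic primitive of vector graphics, and a building block of the splines mentioned above) is nothing but an algebraic rule: it can be sampled at any parameter value, at any resolution, with no intrinsic pixel grid.

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bézier curve at parameter t in [0, 1].
    p0 and p3 are endpoints; p1 and p2 are control points that
    shape the curve without lying on it. The curve is a pure
    algebraic description, not an enumeration of pixels."""
    u = 1.0 - t
    return tuple(
        u**3 * a + 3 * u**2 * t * b + 3 * u * t**2 * c + t**3 * d
        for a, b, c, d in zip(p0, p1, p2, p3)
    )

# Endpoints are interpolated exactly, whatever the sampling density:
start = cubic_bezier((0, 0), (1, 2), (3, 2), (4, 0), 0.0)  # → (0.0, 0.0)
end = cubic_bezier((0, 0), (1, 2), (3, 2), (4, 0), 1.0)    # → (4.0, 0.0)
```

To rasterise the curve one samples t at whatever density the output device requires; the description itself is unit-free, which is precisely the contrast with the bitmap grid drawn above.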
We have already encountered these vectors in the context of codecs. H.263, the basis of YouTube's
codec, shared by Flash, the leading provider of vector animations for the web, is used to minimise the
transport costs of media-rich pages. Flash produces true vector-based sequences, tracking change
through time as a set of geometrical instructions to draw a mutating line or field over time. YouTube
works by analysing the bitmap sequence to establish average colours, blocking them in across roughly
similar areas, and applying them along the timeline in the likely order that gets the blocks of colour
from one keyframe to the next. In the latter case especially, conversion to vector means losing
information and quality through the averaging process. Flash saves time: real time graphics can be
generated from a small stack of geometrical algorithms, which are all that need to be sent from host
to user. High-end vector graphics like NURBS and ray-tracing need a lot more computer power, so it
is feature films and TV that use them most, media which can be prerecorded. Games tend to use
Gouraud shading and other faster, less power- and memory-hungry techniques.
What is fascinating is that today, feature films
especially and a number of commercials and
shorts, use both the arithmetic structure of layers
and the geometrical tools of vector graphics. The
theatricality of layered compositions is apparent
in South African collective The Blackheart Gang's
work on The Tale of How
(http://theblackheartgang.com/the-household/the-tale-of-how/)
(http://vimeo.com/1516019), which seems to
evoke earlier hand-crafted evocations of the
theatre in Méliès as much as the theatre itself, an interesting phase in the remediation (19) of theatre
by the moving image. (20) Particularly interesting in The Tale of How is the interplay between
layered space, created mainly in Photoshop, and the vector space of After Effects, used for example to
produce the birds crossing the clearly demarcated planes of the scene at about the one-minute mark.
This creates an incoherent spatial orientation which neatly expresses the surreal world of the
Piranhas. These clashing spaces undo the numerical cartography of layers. But more importantly still,
the lines which compose the vast majority of the vectors in this work are unforeseeable, taking on
shapes which not only create a diegetic universe but constitute the events which it exists to give birth
to. These events are not keyframed but autonomous within the plane of the screen, and through the
incoherent construction of space, in the depth of the screen as well. The effect of the disruption of
space is to make possible a work on time which is free of conclusion, operating without the
determinations of the grid: an aesthetic presentation of an alternative to the dominance of the
database economy.
The grid of the raster display echoes those of the key instruments of our times: databases,
spreadsheets and geographical information systems. These are media which convert time (a year of
transactions, a student's career, the movements of populations) into a spatial representation. In
works like The Tale of How, and even in the more banal event movies like Avatar, the triumph of
spatial media over temporal is contested through the presence of vectors which, as geometrical
descriptions of trajectories, are fundamentally engaged in time. Like the category 'e' films of Jean-
Louis Comolli and Pierre Narboni (21), the contemporary blockbuster sets out in support of the
dominant ideology but contains, in this instance in its technological foundations, an intrinsic
contradiction. They contradict Wittgenstein (22): they are not reduplications of what is the case, but
statements of the non-identicality of the present, from which springs all possibility of future change.
More than contradictions, the vector's trajectory points into an unmapped future, suggesting that the
work of remaking media need not be limited to remaking technologies, but can extend to exploding the
forewarned and forearmed managerialist commodification that the grid expresses as the iron fist of
the market, and the perpetual status quo of risk management in a polity where elections seem to be
won exclusively on the basis of fear of change. The vector, and the contradiction between vector and
arithmetic forms, including the very screens that contain them, point, with all due trepidation,
towards a distant shore where things are no longer as they are.
This article has been peer reviewed
Author's note: This research was made possible by an Australian Research Council Discovery Grant.
I owe special thanks to my colleagues Daniel Palmer and Les Walkling for their input.
ENDNOTES
1. Ian Hacking, The Taming of Chance, Cambridge: Cambridge University Press, 1990.
2. Sean F. Johnston, A History of Light and Colour Measurement: Science in the Shadows, Institute of Physics,
Bristol and Philadelphia, 2001.
3. Michel Foucault, The Birth of Biopolitics: Lectures at the Collège de France 1978-1979, ed. Michel Senellart,
trans. Graham Burchell, Basingstoke: Palgrave Macmillan, 2004.
4. Adolphe Appia, Adolphe Appia: Essays, Scenarios, and Designs, ed. Richard C. Beacham, trans. Walther R.
Volbach, Ann Arbor: UMI Research Press, 1989.
5. Henri Lefebvre, The Production of Space, trans. Donald Nicholson-Smith, Oxford: Blackwell, 1991, p. 188.
6. André Bazin, "An Aesthetic of Reality", in What is Cinema?, Volume 2, trans. Hugh Gray, Berkeley: University of
California Press, 1971, pp. 26-7.
7. Tim Mara, The Thames and Hudson Manual of Screen Printing, London: Thames & Hudson, 1979.
8. Joe Fordham, "The Lord of the Rings: The Return of the King: Journey's End", Cinefex, n. 96, January 2004,
pp. 115-6.
9. Marta Braun, Picturing Time: The Work of Étienne-Jules Marey (1830-1904), Chicago: University of Chicago
Press, 1992; François Dagognet, Étienne-Jules Marey: A Passion for the Trace, trans. Robert Galeta with
Jeanine Herman, New York: Zone Books, 1992; Rebecca Solnit, River of Shadows: Eadweard Muybridge and
the Technological Wild West, New York: Viking, 2003.
10. Norman M. Klein, The Vatican to Vegas: A History of Special Effects, New York: The New Press, 2004.
11. Erwin Panofsky, Perspective as Symbolic Form, trans. Christopher S. Wood, New York: Zone Books, 1991
[1924-5].
12. Hubert Damisch, The Origin of Perspective, trans. John Goodman, Cambridge MA: MIT Press, 1994; originally
published as L'Origine de la perspective, Paris: Flammarion, 1987.
13. Leon Battista Alberti, Il nuovo De Pictura di Leon Battista Alberti / The New De Pictura of Leon Battista
Alberti, ed. and trans. Rocco Sinisgalli, Roma: Università di Roma La Sapienza, 2006.
14. Albrecht Dürer, The Painter's Manual, trans. Walter L. Strauss, New York: Abaris Books, 1977.
15. Alberti, p. 227.
16. Anne Friedberg, The Virtual Window: From Alberti to Microsoft, Cambridge MA: MIT Press, 2006.
17. Samuel Y. Edgerton, The Mirror, The Window and the Telescope: How Renaissance Linear Perspective
Changed Our Vision of the Universe, Ithaca: Cornell University Press, 2009, p. 136.
18. Christoph Jamnitzer, Neuw Grottessken Buch, introduction by Heinrich Gerhard Franz, Graz: Akademische
Druck-u. Verlagsanstalt, 1966.
19. Jay David Bolter and Richard Grusin, Remediation: Understanding New Media, Cambridge MA: MIT Press,
1999.
20. Nicholas A. Vardac, Stage to Screen: Theatrical Origins of Early Film: David Garrick to D.W. Griffith, New
York: DaCapo, 1949; Ben Brewster and Lea Jacobs, Theatre to Cinema: Stage Pictorialism and the Early
Feature Film, Oxford: Oxford University Press, 1997.
21. Jean-Louis Comolli and Pierre Narboni, "Cinema/Ideology/Criticism (1) & (2)", trans. Susan Bennett, in John
Ellis (ed.), Screen Reader 1: Cinema/Ideology/Politics, London: SEFT, 1977, pp. 2-11 and 36-46.
22. Ludwig Wittgenstein, Tractatus Logico-Philosophicus, trans. D.F. Pears and B.F. McGuinness, London:
Routledge & Kegan Paul, 1961.