Science, Technology, & Human Values
35(4) 444-473
© The Author(s) 2010
Reprints and permission: sagepub.com/journalsPermissions.nav
DOI: 10.1177/0162243909345836
http://sth.sagepub.com

Undone Science: Charting Social Movement and Civil Society Challenges to Research Agenda Setting

Scott Frickel,1 Sahra Gibbon,2 Jeff Howard,3 Joanna Kempner,4 Gwen Ottinger,5 and David J. Hess6

Abstract
''Undone science'' refers to areas of research that are left unfunded,
incomplete, or generally ignored but that social movements or civil society
organizations often identify as worthy of more research. This study mobi-
lizes four recent studies to further elaborate the concept of undone science
as it relates to the political construction of research agendas. Using these
cases, we develop the argument that undone science is part of a broader
politics of knowledge, wherein multiple and competing groups struggle over
the construction and implementation of alternative research agendas.
Overall, the study demonstrates the analytic potential of the concept of
undone science to deepen understanding of the systematic nonproduction
of knowledge in the institutional matrix of state, industry, and social move-
ments that is characteristic of recent calls for a ''new political sociology of
science.''

Keywords
undone science, research agendas, social movements, environmental health,
science policy

1 Department of Sociology, Washington State University, Pullman, Washington.
2 Anthropology Department, University College London, London.
3 University of Texas at Arlington, Arlington, Texas.
4 Rutgers University, Princeton, New Jersey.
5 Chemical Heritage Foundation, Philadelphia, Pennsylvania.
6 Rensselaer Polytechnic Institute, Troy, New York.

Corresponding Author:
Scott Frickel, Department of Sociology, P.O. Box 644020, Washington State University,
Pullman, WA 99164. E-mail: frickel@wsu.edu

Since the 1980s, the modern university has undergone a well-recognized
diversification from publicly funded research to an increasing emphasis
on private funding sources, technology transfer, and economic competitive-
ness (e.g., Kleinman and Vallas 2001; Slaughter and Rhoades 2004). A cor-
responding diversification in science and technology studies (STS) has led
to renewed attention to the role of extrainstitutional factors such as states,
industries, and social movements in the shaping of scientific research fields
and technological design choices (Klein and Kleinman 2002; Frickel and
Moore 2006a, 2006b). Among the changes that this ‘‘new political sociol-
ogy of science’’ brings to STS is a shift of attention from the microsociolo-
gical accounts of how knowledge and technologies are constructed to the
mesosociological and macrosociological political and institutional organi-
zation of scientific knowledge and science policy. Here, analytical concern
centers on distributional inequalities in technoscience and the ways that for-
mal and informal manifestations of power, access to resources, relations
among organizations, and procedures for rule making create losers as well
as winners and explain both institutional stasis and change. For example,
why does science pay dividends more often to some groups than to others?
What explains the selection of certain areas of scientific research and tech-
nological design choices and the neglect of others? This shift in focus to the
institutional politics of knowledge and innovation brings into sharper relief
the problem of ‘‘undone science,’’ that is, areas of research identified by
social movements and other civil society organizations as having poten-
tially broad social benefit that are left unfunded, incomplete, or generally
ignored.
This article brings together four recent studies to elaborate the concept of
undone science and move forward the more general project of a political
sociological approach to the problem of research priorities and scientific
ignorance. Three of the four studies cluster in the area of environmental science
and technology: the development of alternatives to chlorinated chemicals, bet-
ter understanding of toxic exposure to air pollution through alternative air
monitoring devices, and the environmental etiology of cancer. The fourth
study is based on interviews with scientists from a wide range of academic dis-
ciplines about forbidden knowledge. Taken together, the research demon-
strates the analytic potential of undone science to extend and deepen the
new political sociology of science by providing a political sociological per-
spective on the problem of research agendas and more general issues of the
construction of knowledge and ignorance. We begin with a brief review of the
existing literature. Our discussion highlights some of the basic contours that
the case studies reveal about undone science and that in turn can guide future
research.

1. Background
The concept of undone science locates the systematic nonproduction of knowl-
edge in the institutional matrix of governments, industries, and social move-
ments characteristic of the political sociology of science. Specifically, Hess
(2007) has been concerned with the absences of knowledge that could have
helped a social movement or other civil society organization to mobilize the
intellectual resources needed to confront an industrial and/or political elite that,
from the perspective of the challenging organization, is supporting policies that
are not broadly beneficial, either to the general society and environment or to
the historically disempowered groups (Woodhouse et al. 2002; Hess 2007).
Because elites set agendas for both public and private funding sources, and
because scientific research is increasingly complex, technology-laden, and
expensive, there is a systematic tendency for knowledge production to rest
on the cultural assumptions and material interests of privileged groups. How-
ever, it is only a tendency. Given the opportunities created by a diversity of
funding sources, divisions among elites, differences among social movement
organizations, the limited and partial autonomy of the scientific field (Bour-
dieu 2004), and the potential for some research projects to be completed with-
out extramural funding, there is some room for research that supports social
movement perspectives on research agendas, even when the research conflicts
with the interests of elites. Nevertheless, because research fields themselves
are constituted by agonistic relations between dominant and nondominant net-
works, even when ‘‘undone science’’ is completed, the knowledge may
become stigmatized and the credibility and standing of scientists who produce
it may suffer (Hess 2007).

Contemporary discussions of undone science have various precedents. In
some ways, Marx's critique of political economy and his effort to develop
an alternative research field of Marxist political economy was an early
exploration of undone science, in that Marx both critiqued the assumptions
of mainstream economics and developed a framework for alternatives
within the field (Marx 1967). In a similar vein, feminist research and multi-
cultural science studies have highlighted the systematic lack of attention
paid to gender, race, and related issues in science. Feminist research has
also described how gender-laden assumptions shape the development of
research programs and, like Marxist scholarship, has proposed alternative
research frameworks and programs (e.g., Haraway 1989; Harding 1998;
Forsythe 2001).
Historical research highlights the institutional constraints of completing
undone science. Of particular relevance to the new political sociology of
science is the study of how the contours of entire disciplines or research pro-
grams have been shaped by military and industrial funding priorities, and
consequently how some subfields have been left to wither on the vine while
others have been well tended by government and industrial funding sources
(e.g., Noble 1977; Forman 1987; Markowitz and Rosner 2002). Historians
and others have also offered detailed investigations of the dynamics of intel-
lectual suppression and purposeful policy decisions to avoid some areas of
research, usually research that would challenge powerful industrial interests
(MacKenzie and Spinardi 1995; Zavestoski et al. 2002; Martin 2007). In the
emerging literature on the social production of ignorance or what some his-
torians have called ‘‘agnotology’’ (Proctor and Schiebinger 2008), addi-
tional studies of particular relevance examine the industrial funding of
contrarian research to generate a public controversy and scientific dissensus
(Proctor 1995), the role of the government and industry in rendering knowl-
edge invisible by producing classified knowledge and trade secrets (Galison
2004), and problems of imperceptibility for chemically exposed groups
(Murphy 2006).
Functionalist and constructivist sociologies of science have also contrib-
uted indirectly to the understanding of undone science, primarily through
discussions of the epistemic status of ignorance and uncertainty. Merton
(1987) identified ‘‘specified ignorance’’ as knowledge that researchers have
about topics that deserve further inquiry. Zuckerman (1978) also noted that
theoretical commitments, or what Kuhnians would call ‘‘paradigms,’’ could
result in decisions by scientists to characterize some areas of specified
ignorance as not worth studying. The sociology of scientific knowledge also
examined the role of uncertainty and interpretive flexibility in the
generation and resolution of controversies, both within the scientific field
and in broader public fora (e.g., Collins 1985, 2002). In critical analyses
of risk assessment and statistical analysis, STS scholars have also brought
out the unanticipated consequences of broader forms of ignorance that are
not considered within the horizon of standard risk assessment practices
(Hoffmann-Riem and Wynne 2002; Levidow 2002). Sociologists have also
examined the production of the ‘‘unknowable,’’ as occurred when claims
were made that an accurate count of ballots for the 2000 U.S. presidential
election was impossible (Hilgartner 2001), and ‘‘regulatory knowledge
gaps,’’ which are among the unintended consequences of the U.S. Environ-
mental Protection Agency’s (EPA) environmental testing program in New
Orleans following Hurricane Katrina (Frickel 2008; Frickel and Vincent
2007). Gross (2007, 2009) has drawn on the general sociology of ignorance
to distinguish various forms of scientific ignorance, including nonknowledge,
or known unknowns that are considered worth pursuing; negative knowledge,
or knowledge deemed dangerous or not worth pursuing; and ‘‘nescience,’’ or
a lack of knowledge about the unknown, a form of ignorance that is a precon-
dition for a surprise because it is an unknown unknown.1 In Gross’s terms,
undone science is a type of nonknowledge when viewed from the perspective
of social movements, but from the perspective of some research communities
and elites, it may be viewed as negative knowledge.
In an effort to map in more detail the concept of undone science, this
study summarizes four research projects. The four studies are based primar-
ily on semistructured interviews and/or participant-observation, which are
appropriate methodological choices given the exploratory nature of the
research and the need, at this stage, to understand the dimensions and fea-
tures of undone science. The following sections summarize the aspects of
these four independently designed research projects that have encountered
the phenomenon of undone science. Because social movement and other
civil society organizations have frequently encountered a deficit of research
on health and environmental risks associated with exposure to industrial
pollutants, it is not surprising that three of the cases considered here focus
on the health and environmental sciences. The question of generalizability
across various scientific research fields cannot be resolved in this study; our
goal is the preliminary one of mapping and exploring undone science.

2. Regulatory Paradigms, Dyads, and the Undoable


Howard’s research on the ‘‘chlorine sunset’’ controversy is based on inter-
views and document analysis. He conducted twenty-seven semistructured
interviews, lasting an hour on average, with staff members of federal
regulatory agencies in the United States and Canada, staff members of the
International Joint Commission (IJC), members of the Great Lakes Science
Advisory Board, staff members or individuals otherwise associated with
nongovernmental organizations (NGOs), academic or governmental mem-
bers of the industrial ecology or green chemistry communities, and indus-
trial chemists in industry and academia. A number of transcripts were
supplemented with additional information from follow-up correspondence.
Documents analyzed included (1) reports, press releases, Web documents,
and other materials published by NGOs, the chemical industry, and federal
agencies; (2) articles and commentaries in newspapers and popular and
trade magazines; (3) research articles and commentaries in scholarly
anthologies and peer-reviewed scholarly journals; (4) books written by key
actors; and (5) transcripts of Congressional testimony.
A little-studied controversy involving one of the major branches of
industrial chemistry documents a striking example of undone science and
illustrates the role it can play in structuring conflict between competing reg-
ulatory paradigms. Much of the controversy has centered on the Great
Lakes region, where extensive chemical manufacturing and contamination
has occurred; where scientists have documented threats to wildlife and
humans from persistent, toxic, industrial chlorinated pollutants; where
extensive citizen activism has emerged around this threat; and where a qua-
sigovernmental advisory body has assumed a leadership role in addressing
this concern (Botts et al. 2001). A number of environmental and health
advocates have argued, based both on fundamental toxicology and on long
historical experience with chlorinated synthetic chemicals (e.g., DDT and
PCBs), that the entire class of thousands of such substances should be ten-
tatively presumed dangerous and that the chemical industry accordingly
should wean itself from most major uses of chlorine (Thornton 1991,
2000; International Joint Commission [IJC] 1992; see Howard 2004). The
analysis offered here briefly considers the character and function of undone
science in the debate provoked by proposals for a ‘‘chlorine sunset.’’
The chlorine sunset controversy revolves around conflict between two
sharply contrasting regulatory paradigms: risk and precaution (Thornton
2000; Howard 2004). The powerful chemical industry has coevolved with,
supports, and is supported by the dominant U.S. and Canadian environmen-
tal regulatory regime, which restricts chemical industry decision making
only to the extent that detailed calculation of risk indicts individual chem-
ical substances. Meanwhile, Greenpeace, a marginalized, reputedly radical
environmental NGO, and the IJC, a prominent but marginalized binational
advisory organization, argued for a regulatory regime based on the precau-
tionary principle (see Tickner 2003), which in their view justified govern-
mental action against an entire class of industrial chemicals. The dominant
paradigm assumes the unit of analysis to be the individual substance and
places the burden of proof on the public to prove harm; in contrast, the chal-
lenger paradigm allows, even requires, the primary unit of analysis to be the
entire class of substances and places the burden of proof on corporate offi-
cials. Within this matrix of political and epistemological conflict, the polit-
ical economy and political sociology of undone science can be seen to
revolve around a series of three dyads, each paradigm implying parallel for-
mulations of ''done science'' and undone science. The three dyads are sum-
marized in Table 1.

Table 1. Dyads of Done, Undone, Undoable Chlorine Science in Dominant and Challenger Paradigms

Risk paradigm (dominant)
  What is done or would be done: Ad hoc identification of unsafe chlorine chemicals (explicit role for government)
  What remains undone: Systematic identification of unsafe chlorine chemicals (implicit role for government)

  What is done or would be done: Systematic development of chlorine chemicals (explicit role for industry)
  What remains undone: Systematic development of nonchlorine alternatives (implicit role for government)

Precaution paradigm (challenger)
  What is done or would be done: Systematic development of nonchlorine alternatives (explicit role for industry)
  What remains undone: Ad hoc identification of essential and safe chlorine chemicals (explicit role for industry)
One dyad appears in the context of health impacts research. Industry and
federal officials operating in the risk paradigm hold that the legitimate goal
of health impacts research performed or mandated by government is ad hoc
identification of individual chlorinated chemicals that cannot be safely
manufactured and used. In this paradigm, chlorine chemistry itself is seen
as immune to fundamental interrogation; the role of public science is lim-
ited to documenting the odd substance that can be definitively proven harm-
ful and, on that basis, restricted. ‘‘We’ve made the point over and over again
that you have to look at each product’s physical and chemical characteris-
tics to draw conclusions about what it is going to do in the environment,’’
argued Brad Lienhart, of the Chlorine Chemistry Council. To do otherwise
would be to ‘‘[make] non-science—or nonsense—into science’’ (quoted in
Sheridan 1994, 50).
Beginning in the early 1990s, ''sunset'' proponents vigorously argued
that such research is incapable of interrupting a long series of chlorinated
‘‘Pandora’s poisons’’ from entering the environment and human tissues
long before their deleterious effects are documented. Inevitably remaining
undone, they argued, is science capable of systematically identifying unsafe
chemicals from among tens, perhaps hundreds, of thousands of chlorinated
industrial substances, by-products, and breakdown products, a scope of
research that the risk paradigm is sometimes assumed to provide but, owing
to the sheer enormity of the undertaking, cannot. The government’s effort to
identify unsafe chlorinated chemicals is ad hoc precisely because it cannot,
in any meaningful sense, be systematic; not only are available resources
insufficient, but the enterprise is technically infeasible. Viewed in this light,
the science is undoable. The IJC argued:

There is a growing body of evidence that [suggests that] these compounds are
at best foreign to maintaining ecosystem integrity and quite probably persis-
tent and toxic and harmful to health. They are produced in conjunction with
proven persistent toxic substances. In practice, the mix and exact nature of
these various compounds cannot be precisely predicted or controlled in pro-
duction processes. Thus, it is prudent, sensible and indeed necessary to treat
these substances as a class rather than as a series of isolated, individual che-
micals. (IJC 1992, 29)

A second dyad appears in the risk paradigm's stance on innovation. Industry
has systematically pursued the development of chlorine chemistry, develop-
ing chlorinated chemicals and expanding markets for them; meanwhile,
advocates of chlorine precaution have pointed to the need to systematically
develop nonchlorine alternatives. This is in part science that the risk para-
digm has long left undone—historical research and development trajec-
tories that could have led to a wider range of nonchlorine chemicals and
processes being available today. The implication of the historical analysis
offered by a leading sunset proponent (Thornton 2000; see also Stringer and
Johnston 2001) is that over the past century the technological, economic,
and political momentum of chlorine chemistry has to some extent bent the
overall industry research and development agenda toward chlorine and
away from nonchlorine alternatives. Here undone science consists of a body
of nonchlorine chemicals and processes that might now exist but for the
long dominance of research and development predicated on chlorine. It is
a point seemingly acknowledged by a confidential IJC informant who did
not support the commission’s sunset recommendation: ‘‘There’s no reason

451
452 Science, Technology, & Human Values 35(4)

why we couldn’t, as a global society, live a non-chlorine lifestyle. It’s just,


you know <laughs>, that ain’t gonna happen, because that is not our his-
tory! We’re kind of, in a way, captives of our past.’’
In the risk paradigm, with its laissez-faire orientation, such research and
development need not be undertaken by the industry but instead is tacitly
left to whichever agency or organization might care to undertake it. Viewed
from the vantage point of the industry, with its adamantine conception of
chlorine chemistry as technologically and economically inevitable, the only
conceivable motivation for conducting such research and development would
be some kind of ideological fetish (see, e.g., Chlorine Chemistry Council
n.d.). It would represent ‘‘a veiled attempt to return to a pre-industrial
Eden,’’ one industry supporter suggested (Amato 1993). Crucially, although
this agenda would have been and would now be technically feasible, such
research would be hobbled by the absence of a sizable cadre of technoscien-
tists devoted to the project and by a lack of financial resources to sustain the
effort.
A third dyad occurs within the challenger, precautionary paradigm and
directly counters the values and priorities of the dominant paradigm’s
dyads. Paired with precaution advocates’ assertion of the need for research
to systematically develop nonchlorine alternatives—here seen as industry’s
responsibility rather than the public’s—is an explicit assertion that industry
should assume the burden of making the case for any specific chlorinated
chemicals (or chemical processes) that can be demonstrated to be both
essential (i.e., nonsubstitutable) and capable of being manufactured and
used in ways that (to some as yet unstated standard) pose no significant
environmental hazard. Industry’s motivation for undertaking this latter effort
would, of course, be profit. And owing to the presumably quite limited num-
ber of substances to be evaluated, it would be both technically feasible and,
given the industry’s substantial financial and technical resources, affordable.
The chlorine sunset controversy is now effectively dormant. In the face
of bitter industry resistance and U.S. and Canadian governmental intransi-
gence, the IJC and Greenpeace ceased promoting their sunset recommenda-
tions in the mid-1990s (Howard 2004). Thornton’s book, which appeared in
2000, reawakened (and in significant ways deepened) the debate, but it did
so only briefly. The sunset proposals have not visibly shifted policy at any
level in North America. A major international treaty on persistent organic
pollutants signed in 2001 represented an important victory for activists, but
it also underscored the lingering, unresolved character of the chlorine
debate: all twelve of the ‘‘dirty dozen’’ substances it required to be phased
out are chlorinated compounds, and each was targeted on the basis of its
discrete, well-documented characteristics. Meanwhile, thousands of far less
extensively studied chlorinated chemicals—and chlorine chemistry as a
whole—remain unregulated.
This analysis of the chlorine sunset controversy illustrates how regula-
tory regimes influence the construction and articulation of research priori-
ties. In this case, advocates of the risk and precaution paradigms, on the
basis of competing understandings of the appropriate unit of regulatory
analysis and appropriate regulatory burden of proof, promote competing
conceptualizations of science both done and undone. More specifically, the
case suggests that done and undone science in such a controversy can be
understood as occurring in dyadic pairs and that a major role for challenger
discourses is making the implicit undone portion of dyads within the domi-
nant paradigm visible and explicit. This analysis also highlights an impor-
tant category of undone science in technoscience controversies—undoable
science—that improves understanding of how regulatory regimes constrain
the identification of undone science. Here, close examination of precaution-
ary advocates’ critique of the risk paradigm clarifies the process through
which conventional regulatory structures veil undoable science in the form
of systematic research for which insufficient resources and insufficient
technical means are available.

3. Standards as Solutions to and Sources of Undone Science
Ottinger’s research on community-based air monitoring as a strategy for
producing knowledge about environmental health hazards is based primar-
ily on participant-observation in two environmental justice NGOs: Commu-
nities for a Better Environment (CBE) in Oakland, California, and the
Louisiana Bucket Brigade in New Orleans, Louisiana (Ottinger 2005). As
part of her ethnographic fieldwork, she devoted ten hours per week as a
technical volunteer (Ottinger has a background in engineering) for each
organization during two consecutive years between 2001 and 2003. At both
organizations, her participation involved researching a variety of air moni-
toring strategies and developing tools for interpreting results from those
methods. Her study is also informed by semistructured interviews of one
to two hours each. She interviewed thirteen scientist-activists, community
organizers, and community residents in California and more than forty acti-
vists, regulators, and petrochemical industry representatives in Louisiana.
The interviews addressed organizing and community-industry relations,
broadly defined, and frequently touched on issues related to ambient air
monitoring techniques, with about one-third taking air monitoring as a pri-
mary theme.
The case of community-friendly air monitoring involves similar issues of
undone science and regulatory politics to those discussed for the chlorine
controversy, but at a grassroots, community level. In communities adjacent
to refineries, power plants, and other hazardous facilities, known as ‘‘fen-
celine communities,’’ residents suspect that facilities’ emissions of toxic
chemicals cause serious illnesses. However, there is a dearth of scientific
research that could illuminate, in ways credible to residents, the effects of
industrial emissions on community health (Tesh 2000; Allen 2003; Mayer
and Overdevest 2007). The use of air sampling devices known as ‘‘buckets’’
provides one avenue for addressing issues of undone environmental health
science. With the low-cost, easy-to-operate devices, fenceline community
residents and allied environmental justice organizers measure concentra-
tions of toxic chemicals in the ambient air, collecting data about residents’
exposures that is necessary (though not sufficient) to understanding chem-
ical health effects. Designed in 1994 by a California engineering firm and
adapted for widespread dissemination by Oakland-based non-profit CBE,
the buckets ‘‘grab’’ samples of air over a period of minutes. By taking short
samples, buckets can document chemical concentrations during periods
when air quality is apparently at its worst—when a facility is flaring or has
had an accident, for example—providing otherwise unavailable information
about residents’ exposures during pollution peaks.
Both activists’ strategies for air monitoring and experts’ responses to
activist monitoring are significantly shaped by agreed-upon procedures for
collecting and analyzing air samples and interpreting their results. When
measuring levels of toxic chemicals in the ambient air, regulatory agencies
and chemical facilities routinely use stainless steel Summa canisters to collect
samples, which are then analyzed using a method specified in the Federal
Register as Federal Reference Method (FRM) TO-15. Although the canis-
ters can be used to take short-term samples, when regulators want to repre-
sent air quality broadly, samples are taken over a twenty-four-hour period
every sixth day. Where they exist, regulatory standards for air quality form the
context for interpreting the results. Louisiana, one of only two U.S. states with
ambient air standards for the individual volatile organic chemicals measured
by FRM TO-15, specifies eight-hour or annual averages that ambient concen-
trations are not to exceed; monitoring data are compared to these standards to
determine whether air quality poses a potential threat to public health.2
Specifying how air toxics data are to be collected and interpreted, these
formal (e.g., FRM TO-15) and informal (e.g., the twenty-four-hour, sixth
day sampling protocol) standards shape how bucket data are received by
regulatory scientists and chemical industry officials. First, they act as a
boundary-bridging device; that is, the standards help to render activists’
scientific efforts recognizable in expert discourses about air quality and
monitoring.3 Although activists and experts collect their samples with
different devices—buckets for activists, Summa canisters for experts—both
strategies rely on air sampling to characterize air quality and both use FRM
TO-15 to analyze the samples. The shared analytical method makes the
results of individual bucket samples directly comparable to those of canister
samples. Moreover, because activists use the FRM, an EPA laboratory in Cali-
fornia was able to conduct quality assurance testing early in the bucket’s devel-
opment, allowing activists to refute charges that chemicals found in bucket
samples were somehow an artifact of the sampling device and to claim, more
generally, that the bucket was an ‘‘EPA-approved’’ monitoring method.
To the extent that the standards, particularly the FRM, serve a boundary-
bridging function, they help undone science get done: they allow data from
an alternate method of measuring air quality, bucket monitoring, to circu-
late with some credibility among experts and, consequently, to address
questions of pressing concern to community members but hitherto ignored
by experts. Activists’ monitoring with buckets has even prompted experts to
undertake additional monitoring of their own. For example, in Norco,
Louisiana, where resident-activists used buckets to document very high
concentrations of toxic compounds in their neighborhood, Shell Chemical
in 2002 began an extensive ambient air monitoring program (Swerczek
2000).4
Simultaneously, however, standards for air monitoring serve a boundary-
policing function: the same suite of regulatory standards and routinized
practices that give buckets a measure of credibility also give industrial facil-
ities and environmental agencies a ready-made way to dismiss bucket data.
Specifically, ambient air standards are typically expressed as averages over
a period of hours, days, or years.5 Bucket data, in contrast, characterize
average chemical concentrations over a period of minutes. Environmental
justice activists nonetheless compare results of individual samples to the
regulatory standard—asserting, for example, that a 2001 sample taken near
the Orion oil refinery in New Sarpy, Louisiana, showed that ‘‘the amount of
benzene in the air that day was 29 times the legal limit’’ (Louisiana Bucket
Brigade 2001)—but experts vehemently reject such claims. In a 2002 inter-
view, Jim Hazlett, part of the Air Quality Assessment division of the Louisi-
ana Department of Environmental Quality, complained about activists’
inaccurate use of bucket data:

You can’t really take that data and apply it to an ambient air standard . . . . So
we see a headline, the citizen group over here found a, took a sample and
found benzene that was 12 times the state standards. Well, it’s not true.
I’m sorry, but that’s not what it was.

In the view of Hazlett and other experts, only the average concentrations of
regulated chemicals can be meaningfully compared to the standards and
thus contribute to determining whether air pollution might pose a threat
to human health.
Ambient air standards, and the average-oriented air sampling protocols
that they require, thus prove to be a mechanism for policing the boundary
between activists’ and experts’ claims about air quality, marking experts’
data as relevant and activists’ data as irrelevant to the assessment of overall
air quality, to the determination of regulatory compliance, and to discus-
sions of chemical plants’ long-term health effects. As boundary-policing
devices, standards circumscribe activists’ contributions to doing undone
science. To the extent that bucket monitoring has resulted in increased
enforcement activity by regulators (O’Rourke and Macey 2003) or addi-
tional ambient air monitoring by industrial facilities, the additional monitor-
ing has been undertaken to confirm activists’ results, track the causes of the
chemical emissions, and fix what are assumed to be isolated malfunctions
but usually not to query the possibility that routine industrial operations
might pose systematic threats to community health. Even Shell’s program
in Norco, which collects rare data on chemical concentrations in a fenceline
community, is oriented to long-term averages and thus does not shed light
on the potential effects of the pollution spikes that occur with regularity as a
result of flaring and other unplanned releases.
As in the chlorine sunset controversy case, the example of bucket mon-
itoring demonstrates how regulatory systems shape conflicts over undone
science, even at the local level of community-based research and activism.
In this instance, efforts by neighborhood activists (and other outsiders to
science) to see undone science done in their own backyards illustrate the
asymmetrical operation of regulatory standards and standardized practices.
Air monitoring standards function as boundary-bridging devices that enable
activist use of an alternative, more cost-effective method and therefore
help address an aspect of environmental health science left undone by
experts. However, standards also serve as boundary-policing devices. These
reinforce experts’ authority to define how health risks in fenceline commu-
nities should be evaluated, shutting down debates over fundamental
research questions and associated methodological approaches—debates, for
example, over whether average or peak concentrations of air toxics are most
relevant to determining their health effects. Because it is exactly these
debates that activists would, and must, provoke to shift scientific research
priorities, the standards’ boundary-policing aspect tends to dominate most
locally organized attempts to counter undone science. However, this case
also illustrates the importance of standards’ boundary-bridging aspects that
enable community activists to actually and forcefully enact shifts in
research priorities, rather than merely advocate for alternative scientific
agendas.

4. Diversity Within Movements and Research Fields


Gibbon’s research is based on ethnographic fieldwork, ongoing since 1999,
that examines the social and cultural context of developments in breast can-
cer genetics in the United Kingdom. The larger study addresses how the
knowledge and technologies associated with breast cancer genetics are put
to work inside and outside clinical settings, at the interface with a culture of
breast cancer activism (see Gibbon 2007). The discussion presented here
draws on fieldwork conducted in a leading high-profile U.K. breast cancer
research charity between 1999 and 2001 and again in 2005–2006. The field-
work involved the analysis of promotional documents produced by the
organization, participant-observation of a range of events, and more than
forty-five in-depth semistructured interviews and five focus groups with the
organization’s fundraisers, advocates, scientists, and staff.
Given the exponential growth in lay/patient and public activism in rela-
tion to breast cancer in the last twenty to thirty years (Klawiter 2004;
Gibbon 2007), this would seem to be an arena where we might expect to see
challenges related to undone science. In one sense, the rapid expansion in
breast cancer activism has achieved much to reduce the space of undone
science in breast cancer. Like AIDS activism in the 1990s, so-called breast
cancer activism is often held up as an exemplary instance of successful
collective lay/public/patient mobilization that has helped to raise awareness
of the disease, promote a discourse of female rights, and redress gendered
inequities in scientific research and health provision (e.g., Anglin 1997;
Lerner 2003). It would from this perspective seem potentially to be a clear
example of epistemic modernization, where research agendas may be
opened up to the scrutiny of lay/patient/public communities (Hess 2007).
Yet paradoxes abound in an arena where growing collective awareness
of the disease also helps ensure that the management of risk and danger
is the burden of individual women (Kaufert 1998; Fosket 2004; Klawiter
2004). The situation reflects what Zavestoski et al. (2004) have referred to
as the ‘‘dominant epidemiological paradigm’’ of breast cancer, one that
strongly informs the parameters of scientific research and medical interven-
tion by focusing on lifestyle and/or the genetic factors of individuals and
that has engendered some resistance from civil society groups. In the United
States, for example, recent lobbying efforts to draw attention to alternative
strategies for breast cancer have involved collaborations between specific
cultures of breast cancer and broader environmental justice movements
(Di Chiro 2008) in pursuit of what Brown and colleagues term a ‘‘lab of
one’s own’’ (2006). Nevertheless, breast cancer activism is characterized
by diverse cultures, and consequently, the issue of undone science is also
disjunctured and differentiated within national and across international are-
nas. Despite the growth of health activism around breast cancer research,
environmental risk factors in breast cancer etiology remain one domain
of undone science that continues to be marginalized in mainstream
discourse.
The particular institutional parameters that serve to sustain the space of
undone science in breast cancer are illustrated by examining the predomi-
nant culture of patient and public activism in the United Kingdom. In this
context, understanding how breast cancer activism operates to preserve
undone science requires paying attention not only to the marginalization
of environment-focused breast cancer activism (Potts 2004) but also to an
institutionalized culture of cancer research, where breast cancer activism
can reference and symbolize quite different activities (Gibbon 2007). Since
the early part of the twentieth century, cancer research in the United King-
dom has been rooted in an institutional culture of first philanthropic dona-
tion and then charitable fundraising, helping to ensure a public mandate that
influences patterns of research in cancer science (see Austoker 1988). Like
earlier public mobilization around the so-called wars on tuberculosis and
polio, the ‘‘war’’ fought by the cancer charity establishment in the United
Kingdom has proved not only a resilient cultural metaphor (Sontag 1988)
but also a reflection of ongoing public support and investment in cancer
research. As a result, cancer research in the United Kingdom is mostly sus-
tained as a modernist project waged by a scientific community, focused on a
cure (Löwy 1997) and supported by cancer charities that are funded signif-
icantly by public resources in the form of voluntary donations.
The influences of this project on undone breast cancer science are visible
within a high-profile breast cancer research charity, where narratives of
involvement and identification reveal the scope of activism, the ways that
this institutional culture informs the parameters of civic engagement, and
how activists' engagement with research is limited to certain areas of activ-
ities. In one instance, for example, a group of women responded to the
meaning of ‘‘involvement’’ in ways that mixed the morality of fundraising
with campaigning work and also with moral sentiments such as ‘‘giving
something back,’’ ‘‘helping make a difference,’’ or somehow ‘‘being use-
ful,’’ as this excerpt illustrates:

I was in the middle of treatment, chemotherapy, and I just happened to read—it
was October—and I happened to read an article in a magazine, I think the
launch of their [the charity’s] £1,000 challenge. And at that point I was feeling
[a] sort of a wish, a need, to put something back . . . . And I got the certificate
and I got invited to the research center . . . there was something that drew me to
it . . . . So [it] was mainly fundraising, but I could feel something could develop
there. So at one point I said to one of the girls on the fundraising team, ‘‘Can I
help in a voluntary way? I’ve got skills I’m not using, particularly proofread-
ing, editing, language leaflets, making things clear.’’ And then it seemed to be
very useful, from a ‘‘Joe public’’ point of view. And it’s developed into almost
like a little job; it’s given me a whole new life . . . and I feel like I’m putting
something back. And my life has value . . . . So, it’s terrific. Really, it’s terrific.

Although it is often difficult to tease apart fundraising as a form of activism and
the highly successful marketing strategies of the charity, narratives such as
the one above suggest that lay/civic engagement in breast cancer research
does little to challenge a traditional expert/lay dynamic. Instead, women
became ‘‘involved’’ mostly in the pursuit of reproducing and sustaining
traditional parameters of scientific expertise.
Such activism has been constituted through ‘‘heroic’’ acts of fundraising,
which were in turn wedded to the pursuit of basic science genetic research,
collectively situated as a form of ''salvationary science'' (Gibbon 2007,
125). This continues to be a salient motif for engagement in the charity, with
very few women seeing their involvement in terms of influencing a research
agenda or affecting the research priorities of the charity. Although a number
of women interviewed spoke of being involved in a charity in terms of
‘‘campaigning’’ or being active around the ‘‘politics of health care,’’ their
narratives exhibited a general lack of interest in influencing scientific
research and a strong feeling about the inappropriateness of ‘‘stepping on
the toes of the scientists.’’ As two interviewees put it:

I don’t think any of us would push it in anyway, because we can’t appreciate


if you’re a nonscientist. I don’t . . . appreciate the process sufficiently to be

459
460 Science, Technology, & Human Values 35(4)

able to direct it in a particular direction and say, ‘‘Hey, why don’t you look at
this?’’
I don’t think laypeople can make a significant contribution to what we
should study. I know that a lot of people would agree with me on that.

While some interviewees observed that the whole point of being an advo-
cate for those with breast cancer is, as one woman explained, ‘‘You’re not
a scientist,’’ others noted that the research undertaken by the charity was
widely perceived in terms of a ‘‘gold standard.’’ Many, including those who
strongly identified more as ‘‘advocates’’ rather than ‘‘fundraisers,’’ also
believed that the standard of expertise might potentially be threatened or
undermined by training a wider community of people affected by breast
cancer to have a say in scientific research.6
Overall, interview data suggest that despite thirty years of growing acti-
vism around breast cancer and a much more open concern with implement-
ing, developing, and identifying with advocacy, a particular institutional
context continues to sustain, color, and influence the lay/patient and public
mobilization around the disease. The morality of fundraising and the faith in
the expertise of scientific research expressed by these women cannot be
abstracted from the institution of cancer charities in the United Kingdom.
The complex and diverse nature of breast cancer activism here and else-
where shows that what is required in understanding the dynamic space of
undone science in breast cancer is a careful mapping and analysis of the
nexus of interests that coalesce at particular disease/science/public inter-
faces (Epstein 2007; Gibbon and Novas 2007). The dense imbrication of
some segments of the breast cancer movement with various institutions
of scientific research in the United Kingdom means that undone science
appears only to a segment of the advocacy community that has itself been
historically marginalized within the larger breast cancer movement. Thus,
unlike the two previous cases, which examine industrial and government
elites in conflict with social movement actors, the case of
breast cancer research demonstrates conflicting notions of undone science
within movements.
Additionally, however, support for research into environmental etiolo-
gies of cancer may yet come from within institutional cultures of science.
Postgenomic researchers have increasingly begun to explore what is
described as ‘‘gene/environment interaction,’’ where the importance of a
seemingly broader context of molecular interaction is becoming important
(Shostak 2003). As such, researchers examining social movements must be
attentive to subtle shifts around the space of undone science of breast cancer
from within and outside mainstream science as different configurations of
health activism interface with seemingly novel targets of scientific inquiry
in contrasting national contexts. As this study shows, undone science
demarcates a highly dynamic cultural space characterized by interorgani-
zational and intraorganizational competition mediated by advances in
technoscientific research and clinical practice.

5. Movements as Sources of Undone Science


Kempner’s research is based on an interview study that examines ‘‘forbid-
den knowledge,’’ a term used to capture scientists’ decisions not to produce
research because they believe it to be taboo, too contentious, or politically
sensitive (a type of negative knowledge in the terminology introduced
above). In 2002–2003, she and colleagues conducted ten pilot and forty-
one in-depth, semistructured telephone interviews with a sample of
researchers drawn from prestigious U.S. universities and representing a
diverse range of disciplines, including neuroscience, microbiology, indus-
trial/organizational psychology, sociology, and drug and alcohol research
(Kempner, Perlis, and Merz 2005). Those fields were chosen to gauge the
range, rather than the prevalence, of experiences with forbidden knowledge.
Interviews lasted between thirty-five and forty-five minutes and were
audiotaped, transcribed, coded, and analyzed according to the principles
of grounded theory (Strauss and Corbin 1990).
While many social movements organize around the identification and com-
pletion of undone science, others devote themselves to making sure that some
kinds of knowledge are never produced. They are not alone. The idea that
some knowledge ought to be forbidden is deeply embedded in Western cul-
tures and appears in literature through the ages, from Adam and Eve’s expul-
sion in Genesis to Dr. Frankenstein’s struggle with a monster of his own
creation (Shattuck 1996). Mertonian rhetoric aside, most people agree that
some science poses unacceptable dangers to research subjects or to society
at large. The widely accepted Nuremberg Code, for example, places strict lim-
its on human experimentation, in an effort to ensure that some science—such
as Nazi human experimentation in World War II—is never done again.
Determining which knowledge ought to remain undone can often be
contentious, as illustrated by current high-profile public debates surrounding
the ethics and implications of stem cell research and cloning technologies.
Nevertheless, as in research agenda-setting arenas (Hess 2007), debates and
decisions about what knowledge should remain off limits to the scientific
community typically occur among elites: legislators and federal agencies
perennially issue guidelines and mandates regarding which research should
not be conducted, setting limits on everything from reproductive and thera-
peutic cloning to studies of the psychological effects of Schedule I drugs, like
heroin and marijuana. Scientists and the lay public both have limited oppor-
tunities to voice their opinion in these discussions. In dramatic cases, scien-
tists have attempted to preempt mandates via self-regulation, as was the case
in 1975 when scientists meeting at Asilomar called for a moratorium on cer-
tain kinds of recombinant DNA research (Holton and Morrison 1979).
According to the forty-one elite researchers interviewed for this case
study, these formal mechanisms account for only a portion of the limitations
that can produce undone science (Kempner, Perlis, and Merz 2005). More
often, researchers described how their research had been hamstrung by
informal constraints—the noncodified, tacit rules of what could not be
researched or written. Yet researchers were very clear about what consti-
tuted ‘‘forbidden knowledge’’ in their respective fields. The boundaries of
what could not be done had been made known to them when either their own
or a colleague’s work had been targeted for rebuke—in essence, their work
had breached an unwritten rule. The management of forbidden knowledge,
thus, worked much as Durkheim said it would: once someone’s research
had been identified as especially problematic by, for example, a group of
activists, their work became a ‘‘cautionary tale,’’ warning others ‘‘not to
go there’’ (Kempner, Bosk, and Merz 2008).
In this way, social movement organizations and activists are able to play
an important role in debates about what ought to remain undone, whether or
not they are invited to the table. Besides their influence on shaping research
agenda-setting arenas, social movements can and do influence individual
researchers’ decisions not to pursue particular types of studies. In recent
decades, for example, animal rights organizations have had an enormous
influence on the kinds of research that scientists choose not to produce.
We found that the researchers in our sample who work with animal models
took seriously the threat posed by those organizations. They spoke of
‘‘terrorist-type attacks’’ and told stories of colleagues who received ‘‘razor
blades in envelopes’’ and ‘‘threatening letters.’’ Others faced activists who
staked out their houses. Researchers learned from these cautionary tales
and, in many cases, said that they had self-censored as a result. One
researcher, for example, explained that he would not work with pri-
mates—only ‘‘lower order’’ animals like mice and drosophilia because:

I would like to lunatic-proof my life as much as possible . . . I, for one, do not
want to do work that would attract the particular attention of terrorists . . .

The paranoia was acute. One researcher refused to talk to the interviewer
until she proved her institutional affiliation: ‘‘For all I know, you are some-
body from an animal rights organization, and you’re trying to find out what-
ever you can before you come and storm the place.’’
Over time, the overt interventions of animal rights organizations in the
production of research have redefined the ethics of animal research, usher-
ing in legislation like the Animal Welfare Act of 1985, which requires
research institutions that receive federal funding to maintain ‘‘Institutional
Animal Care and Use Committees’’ (Jasper and Nelkin 1992). However, lay
groups do not need to use such directly confrontational tactics to influence
researchers’ decisions, especially if the groups are successful in their
attempts to reframe a particular social problem. For example, substance
abuse researchers argued that their research agendas were limited by the
success of the Alcoholics Anonymous’ campaign to define treatment for
alcoholism as lifelong abstinence from drink. Although these researchers
would like to conduct ‘‘controlled drinking’’ trials, in which alcoholics are
taught to drink in moderation, they argued that ‘‘There’s a strong political
segment of the population in the United States that without understanding
the issues just considers the goal of controlled alcohol abuse to be totally
taboo.’’ The mere threat of interference from the grassroots was enough
to keep many researchers from conducting certain studies. Several drug and
alcohol researchers described great unwillingness to conduct studies on the
health benefits of ‘‘harm reduction’’ programs, such as those that distribute
free condoms in schools or clean needles in neighborhoods, because they
might attract unwanted controversy from lay groups who oppose such pub-
lic health interventions.
Thus, in some contrast to the role that social movement organizations
and lay experts/citizen scientists play in exposing undone science and
encouraging knowledge creation in chemical, air monitoring, and breast
cancer research, this study shows that the same actors can also play a pow-
erful role in determining which knowledge is not produced. Moreover, con-
flict over the direction of funding streams, while critically important to the
politics of research agenda setting, does not solely determine what science is
left undone. Rather, social movements are also effective beyond research
agenda-setting processes that occur at the institutional level; this study pro-
vides evidence that they also shape the microlevel interactional cues and
decision-making process of individual scientists. Although more research
is needed to understand the circumstances under which researchers decide
to self-censor in response to pressure from outside groups, this case suggests
that social movements may have much greater potential to thwart research
than originally thought. The implications are intriguing and deserve greater
attention. On one hand, disempowered groups may leverage these tech-
niques to gain a voice in a system of knowledge from which they are typi-
cally excluded. On the other hand, it is troubling to learn that the subsequent
‘‘chilling effect’’ happens privately, often without public discussion and in
response to intimidation and fear.

6. Discussion
The diverse cases provide an empirical basis for moving forward the theo-
retical conceptualization of undone science in relation to a new political
sociology of science and that program’s concern with how research agendas
are established. Perhaps the most significant general observation is that the
identification of undone science is part of a broader politics of knowledge,
wherein multiple and competing groups—including academic scientists,
government funders, industry, and civil society organizations—struggle
over the construction and implementation of alternative research agendas.
To a large extent, our case studies focus on attempts by civil society or
quasi-governmental organizations to identify areas they feel should be
targeted for more research. However, the identification of undone sci-
ence can also involve claims about which lines of inquiry should warrant
less attention than they currently receive, either because there are decreas-
ing social returns on continued investments in heavily researched areas or
because the knowledge is deemed not worth exploring and possibly danger-
ous or socially harmful—what Gross (2007) calls ‘‘negative knowledge.’’
Examples of the latter include the research programs and methods targeted
by animal rights groups and research on chlorinated chemicals targeted
by Greenpeace. There are many other cases in which civil society organizations
have played this role, including calls for research moratoria on
weapons development, genetically modified food, nuclear energy, and
nanotechnology.
Five more specific insights follow from and add complexity to this gen-
eral observation. First, while we see undone science as unfolding through
conflict among actors positioned within a multiorganizational field, as
Gibbon’s case shows, definitions of undone science may also vary significantly
within different organizational actors, coalitions, or social movements.
Some portions of the movement may be captured by mainstream research,
and consequently advocacy is channeled into support for the experts’ prior-
itizations of research agendas. Thus, a research topic such as environmental
etiologies of breast cancer may represent undone science to a marginalized
segment of breast cancer advocates and their allies in the scientific commu-
nity, but it may represent negative knowledge to the majority of breast can-
cer advocates and the dominant cancer research networks. To further
complicate the picture, rapid developments and changes within the scien-
tific field, such as the development of genomic research to better pinpoint
environmental or epigenetic factors, may result in shifts in research priori-
ties that can open up opportunities for research in areas of undone science.
Here, one sees that internal changes and differences among both researchers
and civil society advocates interact to define shifting coalitions around research
priorities.
Second, the dynamic nature of coalitions and alliances that emerge
around undone science suggests that the articulation of research priorities
is often a relatively fluid process; even when civil society groups target
some areas of scientific research as deserving low or no priority, their views
may in turn lead to the identification of other areas of research deserving
higher priority. For example, the position of an animal rights group may
begin with opposition to some types of animal research but lead to support
for more ‘‘humane’’ forms of animal research that have been reviewed by
animal research committees. Likewise, the position of an organization such
as Greenpeace in opposition to chlorinated chemicals is linked to an articu-
lation of the need for research on green chemistry alternatives. As these
examples suggest, the identification of undone science can be viewed as a
multifaceted outcome of coalitions and conflict among diverse groups rep-
resenting various social categories, each promoting a mix of topics seen as
deserving more, less, or no attention from the scientific community.
Third, making sense of the complex processes that produce undone
science involves attending to the distributions of power, resources, and
opportunities that structure agenda setting within the scientific field. An
important element of field structure is the role of regulatory regimes in
shaping definitional conflicts over research priorities. Howard’s work sug-
gests that done and undone environmental science dyads can be a key
expression of the regulatory paradigm in which they occur and can be intimately
linked to the way expertise is conceptualized and deployed within that paradigm.
Furthermore, he proposes that until mainstream science faces a challenger,
important forms of undone science within the dominant paradigm can
remain implicit and unarticulated. In other words, undone science may take
the form of a latent scientific potential that is suppressed through ‘‘mobili-
zation of bias’’ (Lukes 2005; see also Frickel and Vincent 2007). Ottinger
(2005) also notes the important role of regulatory standards in defining
opportunities for activists who attempt to get undone science done largely
using their own resources. In the case of air monitoring devices, an alterna-
tive research protocol and data-gathering device operated by laypeople provides
a basis for challenging official assurances of air quality safety. Rather than
advocating for shifts in a research agenda, activists simply enact the shift. In
Howard’s terms, the lay research projects also dramatize the implicit
and unarticulated bias in the dominant method of air quality monitoring.
Ottinger’s (2005) focus on the double role of standards as enabling and
constraining factors in establishing both the conditions and limitations of
undone science is intriguing, and it remains for future research to examine
the efficacy of tactical dynamics in relation to structural constraints encoun-
tered across a range of regulatory and research contexts.
Fourth, while access to financial resources is an implicit focus of efforts
to identify undone science, Kempner’s research demonstrates that the inter-
action of civil society and research priorities is not restricted to the broad
issue of funding. Although civil society organizations can exert an effect
on research funding allocations, as we have seen especially in environmen-
tal and health research priorities, Kempner notes that there are other
mechanisms that can cause such shifts. Her work suggests that efforts to
study the problem of undone science should also consider the role that a
moral economy plays in shaping scientists’ decisions about what research
programs they will and will not pursue (Thompson 1971; on moral econ-
omy in science, see Kohler 1994). Furthermore, even if scientists do not
accept in principle the notion that certain knowledge should remain undone,
they may simply decide not to invest in some areas of research because of
intense direct pressures from civil society organizations such as animal
rights groups. As a result of individual decisions not to engage in an area
of research, changes in the research agendas of a field can occur even when
funding is not shifting dramatically.
Finally, sometimes structural constraints such as limited access to
resources coincide with practical constraints to produce ‘‘undoable sci-
ence.’’ In the case of the chlorine sunset provisions, precaution advocates
see governmental programs for screening individual chemicals as obscuring
a plain fact: the sheer number of chemicals and their complex interactions
with ecological and biological systems make it impossible to predict
whether a given concentration of a given chemical will in any meaningful
sense be ‘‘safe’’ or whether it will be a risk. As a result of this ‘‘wicked prob-
lem’’ (Rittel and Webber 1973), the articulation of undone science as a goal
for research prioritization and funding—in this case, the standard assump-
tion of a need for ever more research on the environmental, health, and
safety implications of new chemicals—turns against itself, because the call
for research into specific chemicals tacitly supports a regulatory framework
that systematically generates a policy failure (see Beck 1995).

7. Conclusion
This study demonstrates some of the ways in which the analysis of undone
science can enrich empirical understandings of research agenda-setting pro-
cesses. The considerable variation we find in just four cases suggests that
one promising avenue for future research lies in developing more systema-
tic comparisons across academic, government, industry, and community
settings. Doing so will further elaborate the ways in which the institutional
contexts of research—including different sets of political and economic
pressures, normative expectations, resource concentrations, and sizes and
configurations of research networks—shape the articulation of undone sci-
ence and the successful or failed implementation of alternative research
agendas.
Our broader aim in seeking to give undone science higher visibility
within STS is to strengthen the foundations for a new political sociology of
science. Much like feminist and antiracist science studies, the political
sociology of science situates questions relating to the uneven distribution
of power and resources in science at the center of the STS project while
remaining attentive to how knowledge and its inverse—ignorance—are
socially shaped, constructed, and contested. As we have argued here, one
of the crucial sites where questions of power, knowledge, and ignorance
come together is in the domain of research agenda setting, where coalitions
are forged and conflicts waged over access to the limited resources
that ultimately shape what science is done and what remains undone.

Notes
1. The term ‘‘negative knowledge’’ originally comes from Knorr-Cetina (1999), but
our usage follows Gross’s amplification (2007).
2. North Carolina also has ambient air standards for this class of pollutants. The fed-
eral government has not set such standards; only total levels of volatile organic
chemicals, in addition to five other ‘‘criteria pollutants,’’ are regulated by the
Clean Air Act.
3. A significant body of work in social studies of science demonstrates how stan-
dards and standardized practices help coordinate scientific work across hetero-
geneous communities and distant research sites (see for example Star and
Griesemer 1989; Fujimura 1996).
4. In presenting the program to Norco residents, one chemical engineer representing
Shell even acknowledged the legitimacy of activists’ data, reiterating the claim
that the buckets were EPA-approved.
5. Louisiana is not alone in this; the National Ambient Air Quality Standards, for
example, regulate one-hour, eight-hour, twenty-four-hour, or annual averages
of criteria pollutants.
6. A few women did acknowledge that they would want to have more training in the
field of scientific research to enable them to be, as they put it, more ‘‘credible’’
and ‘‘not be discounted.’’ They sought to become, as one woman said, ‘‘an
informed layperson as opposed to somebody who can’t be dismissed.’’ It was
clear that there were boundaries placed on what this might mean in relation to
informing or influencing scientific research.

References
Allen, B. L. 2003. Uneasy alchemy: Citizens and experts in Louisiana’s chemical
corridor disputes. Cambridge, MA: MIT Press.
Amato, I. 1993. The crusade against chlorine. Science 261 (5118):152-4.
Anglin, M. K. 1997. Working from the inside out: Implications of breast cancer
activism for bio-medical policies and practices. Social Science and Medicine
44 (9):1403-15.
Austoker, J. 1988. A history of the Imperial Cancer Research Fund 1902-1986. New
York: Oxford University Press.
Beck, U. 1995. Ecological politics in an age of risk. Trans. A. Weisz. Cambridge:
Polity.
Botts, L., P. Muldoon, P. Botts, and K. von Moltke. 2001. The Great Lakes water
quality agreement. In Knowledge, power, and participation in environmental
policy analysis, ed. M. Hisschemöller, R. Hoppe, W. Dunn, and J. Ravetz,
121-43. New Brunswick, NJ: Transaction.
Bourdieu, P. 2004. Science of science and reflexivity. Chicago: University of
Chicago Press.
Brown, P., S. McCormick, B. Mayer, S. Zavestoski, R. Morello-Frosch, R. Gasior
Altman, and L. Senier. 2006. ‘‘A lab of our own’’: Environmental causation of
breast cancer and challenges to the dominant epidemiological paradigm. Science,
Technology and Human Values 31 (5):499-536.
Chlorine Chemistry Council (n.d.). Pandora’s poison: Putting political ideologies
ahead of public health—and hope. http://www.pandoraspoison.org/industry_
views/ccc_statement.html (accessed November 11, 2000).
Collins, H. 1985. Changing order: Replication and induction in scientific practice.
Beverly Hills, CA: Sage.
Collins, H. 2002. The third wave of science studies: Studies of expertise and expe-
rience. Social Studies of Science 32 (2):235-96.
Di Chiro, G. 2008. Living environmentalisms: Coalition politics, social reproduc-
tion, and environmental justice. Environmental Politics 17 (2):276-98.
Epstein, S. 2007. Patient groups and health movements. In New handbook of science
and technology studies, ed. E. J. Hackett, O. Amsterdamska, M. Lynch, and J.
Wajcman, 499-539. Cambridge, MA: MIT Press.
Forman, P. 1987. Behind quantum electronics: National security as basis for phys-
ical research in the United States, 1940-1960. Historical Studies in the Physical
and Biological Sciences 18 (1):149-229.
Forsythe, D. 2001. Studying those who study us: An anthropologist in the world of
artificial intelligence. Stanford, CA: Stanford University Press.
Fosket, J. 2004. Constructing ‘high risk women’: The development and standardiza-
tion of a breast cancer risk assessment tool. Science, Technology & Human
Values 29 (3):291-313.
Frickel, S. 2008. On missing New Orleans: Lost knowledge and knowledge gaps in
an urban hazardscape. Environmental History 13 (4):634-50.
Frickel, S., and K. Moore, eds. 2006a. The new political sociology of science: Insti-
tutions, networks, and power. Madison, WI: University of Wisconsin Press.
Frickel, S., and K. Moore. 2006b. Prospects and challenges for a new political
sociology of science. In The new political sociology of science: Institutions, net-
works, and power, ed. S. Frickel, and K. Moore, 3-31. Madison, WI: University
of Wisconsin Press.
Frickel, S., and M. B. Vincent. 2007. Katrina, contamination, and the unintended
organization of ignorance. Technology in Society 29:181-8.
Fujimura, J. 1996. Crafting science: Standardized packages, boundary objects, and
‘translation.’ In Science as practice and culture, ed. A. Pickering, 168-211.
Chicago: University of Chicago Press.
Galison, P. 2004. Removing knowledge. Critical Inquiry 31 (autumn): 229-43.
Gibbon, S. 2007. Breast cancer genes and the gendering of knowledge: Science and
citizenship in the cultural context of the ‘new’ genetics. Basingstoke, UK:
Palgrave Macmillan.
Gibbon, S., and C. Novas, eds. 2007. Bio-socialities, genetics and the social
sciences. London: Routledge.
Gross, M. 2007. The unknown in process: Dynamic connections of ignorance,
non-knowledge, and related concepts. Current Sociology 55:742-59.
Gross, M. in press. Ignorance and surprise: Science, society, and ecological design.
Cambridge, MA: MIT Press.
Haraway, D. J. 1989. Primate visions: Gender, race, and nature in the world of
modern science. New York: Routledge.
Harding, S. 1998. Is science multicultural? Postcolonialisms, feminisms, epistemol-
ogies. Bloomington, IN: Indiana University Press.
Hess, D. 2007. Alternative pathways in science and industry: Activism, innovation,
and the environment in an era of globalization. Cambridge: MIT Press.
Hilgartner, S. 2001. Election 2000 and the production of the unknowable. Social
Studies of Science 31 (3):439-41.
Hoffmann-Riem, H., and B. Wynne. 2002. In risk assessment, one has to admit
ignorance. Nature 416 (March):123.
Holton, G., and R. S. Morrison, eds. 1979. Limits of scientific inquiry. New York:
W. W. Norton & Company.
Howard, J. 2004. Toward intelligent, democratic steering of chemical technologies:
Evaluating industrial chlorine chemistry as environmental trial and error. PhD
Diss., Rensselaer Polytechnic Institute, Troy, NY. Proquest no. 845710461.
International Joint Commission (IJC). 1992. Sixth biennial report on Great Lakes
water quality. Washington, DC: IJC.
Jasper, J. M., and D. Nelkin. 1992. The animal rights crusade: The growth of a
moral protest. New York: Free Press.
Kaufert, P. 1998. Women, resistance and the breast cancer movement. In Pragmatic
women and body politics, ed. M. Lock and P. Kaufert, 287-309. Cambridge:
Cambridge University Press.
Kempner, J., C. L. Bosk, and J. F. Merz. 2008. Forbidden knowledge: The phenom-
enology of scientific inaction. Unpublished manuscript.
Kempner, J., C. S. Perlis, and J. F. Merz. 2005. Forbidden knowledge. Science
307:854.
Klawiter, M. 2004. Breast cancer in two regimes: The impact of social movements on
illness experience. Sociology of Health and Illness 26 (6):845-74.
Klein, H. K., and D. L. Kleinman. 2002. The social construction of technology:
Structural considerations. Science, Technology, and Human Values 27 (1):
28-52.
Kleinman, D. L., and S. P. Vallas. 2001. Science, capitalism, and the rise of the
‘knowledge worker’: The changing structure of knowledge production in the
United States. Theory and Society 30:451-92.
Knorr-Cetina, K. 1999. Epistemic cultures: How the sciences make knowledge.
Cambridge, MA: Harvard University Press.
Kohler, R. E. 1994. Lords of the fly: Drosophila genetics and the experimental life.
Chicago: University of Chicago Press.
Lerner, B. 2003. The breast cancer wars: Hope, fear, and the pursuit of a cure in
twentieth-century America. Oxford: Oxford University Press.
Levidow, L. 2002. Ignorance-based risk assessment? Scientific controversy over
GM food safety. Science as Culture 11 (1):61-7.
Louisiana Bucket Brigade. 2001. Land sharks: Orion Refining’s predatory property
purchases. New Orleans: Inkworks Press.
Löwy, I. 1997. Between bench and bedside: Science, healing and interleukin-2 in a
cancer ward. Cambridge, MA: Harvard University Press.
Lukes, S. 2005. Power: A radical view. 2nd ed. New York: Palgrave Macmillan.
MacKenzie, D., and G. Spinardi. 1995. Tacit knowledge, weapons design, and
the uninvention of nuclear weapons. American Journal of Sociology
101:44-99.
Markowitz, G., and D. Rosner. 2002. Deceit and denial: The deadly politics of
industrial pollution. Berkeley: University of California Press.
Martin, B. 2007. Justice ignited: The dynamics of backfire. Lanham, MD: Rowman
& Littlefield.
Marx, K. 1967. Capital, Volume 1. New York: International Publishers.
Mayer, B., and C. Overdevest. 2007. Bucket brigades and community-based envi-
ronmental monitoring. Paper presented at the annual meeting of the Society for
Social Studies of Science, Montreal.
Merton, R. 1987. Three fragments from a sociologist’s notebook: Establishing the
phenomenon, specified ignorance, and strategic research materials. Annual
Review of Sociology 13:1-28.
Murphy, M. 2006. Sick building syndrome and the problem of uncertainty: Environ-
mental politics, technoscience, and women workers. Durham, NC and London:
Duke University Press.
Noble, D. 1977. America by design: Science, technology, and corporate capitalism.
New York: Alfred A. Knopf.
O’Rourke, D., and G. P. Macey. 2003. Community environmental policing: Asses-
sing new strategies of public participation in environmental regulation. Journal
of Policy Analysis and Management 22 (3):383-414.
Ottinger, G. 2005. Grounds for action: Community and science in environmental
controversy. PhD Diss., University of California, Berkeley.
Potts, L. 2004. An epidemiology of women’s lives: The environmental risk of breast
cancer. Critical Public Health 14 (2):133-47.
Proctor, R. N. 1995. Cancer wars: How politics shapes what we know and don’t
know about cancer. New York: Basic Books.
Proctor, R. N., and L. Schiebinger, eds. 2008. Agnotology: The making and unmak-
ing of ignorance. Stanford, CA: Stanford University Press.
Rittel, H., and M. Webber. 1973. Dilemmas in a general theory of planning. Policy
Sciences 4:155-69.
Shattuck, R. 1996. Forbidden knowledge. New York: Harcourt Brace and Company.
Sheridan, J. 1994. Chlorine chemistry: An endangered species? Industry Week,
January 3, 49-50.
Shostak, S. 2003. Locating gene-environment interaction: At the intersections of
genetics and public health. Social Science and Medicine 56:2327-42.
Slaughter, S., and G. Rhoades. 2004. Academic capitalism and the new economy:
Markets, states, and higher education. Baltimore, MD: The Johns Hopkins
University Press.
Sontag, S. 1988. Illness as metaphor and AIDS and its metaphors. New York:
Doubleday.
Star, S. L., and J. R. Griesemer. 1989. Institutional ecology, ‘translations’ and
boundary objects: Amateurs and professionals in Berkeley’s Museum of Verte-
brate Zoology, 1907-39. Social Studies of Science 19:387-420.
Strauss, A., and J. Corbin. 1990. Basics of qualitative research: Grounded theory
procedures and techniques. Newbury Park: Sage Publications.
Stringer, R., and P. Johnston. 2001. Chlorine and the environment: An overview of
the chlorine industry. Boston: Kluwer Academic.
Swerczek, M. 2000. Orion promises air samples. The Times-Picayune, New Orleans,
September 29, B1-2.
Tesh, S. N. 2000. Uncertain hazards: Environmental activists and scientific proof.
Ithaca: Cornell University Press.
Thompson, E. P. 1971. The moral economy of the English crowd in the eighteenth
century. Past and Present 50:76-136.
Thornton, J. 1991. The product is the poison: The case for a chlorine phase-out.
Washington: Greenpeace USA.
Thornton, J. 2000. Pandora’s poison: Organochlorines and health. Cambridge, MA:
MIT Press.
Tickner, J. A., ed. 2003. Precaution, environmental science, and preventive public
policy. Washington, DC: Island.
Woodhouse, E. J., D. Hess, S. Breyman, and B. Martin. 2002. Science studies and
activism: Possibilities and problems for reconstructivist agendas. Social Studies
of Science 32 (2):297-319.
Zavestoski, S., P. Brown, M. Linder, S. McCormick, and B. Mayer. 2002. Science,
policy, activism, and war: Defining the health of Gulf War veterans. Science,
Technology, & Human Values 27 (2):171-205.
Zavestoski, S., R. Morello-Frosch, P. Brown, B. Mayer, S. McCormick, and
R. Gasior. 2004. Embodied health movements and challenges to the dominant
epidemiological paradigm. Research in Social Movements, Conflict and Change
25:253-278.
Zuckerman, H. 1978. Theory choice and problem choice in science. Sociological
Inquiry 48 (3-4):65-95.
Bios
Scott Frickel is associate professor of sociology at Washington State University,
where he studies science, environment, and social movements. He is author of
Chemical Consequences: Environmental Mutagens, Scientist Activism, and the Rise
of Genetic Toxicology (Rutgers University Press, 2004) and coeditor with Kelly
Moore of The New Political Sociology of Science: Institutions, Networks, and
Power (University of Wisconsin Press, 2006).
Sahra Gibbon is a research fellow in the Anthropology Department at University
College London. She is author of Breast Cancer Genes and the Gendering of Knowl-
edge (Palgrave Macmillan, 2007) and coeditor with Carlos Novas of Biosocialities,
Genetics and the Social Sciences: Making Biologies and Identities (Routledge,
forthcoming).

Jeff Howard is assistant professor at the University of Texas at Arlington School of
Urban and Public Affairs. His research focuses, in part, on the problematic role of
experts and expert knowledge in environmental decision making—an interest rooted
in his experience as a Greenpeace staff member in the 1980s (prior to the case exam-
ined here).
Joanna Kempner is assistant professor of sociology at Rutgers University and
member of the Institute for Health, Health Care Policy and Aging Research. Her
research examines the intersection of medicine, science, politics, and gender.
Gwen Ottinger is a research fellow in the Environmental History and Policy Pro-
gram at the Chemical Heritage Foundation. Her work explores how expertise is con-
structed in the everyday interactions of engineers, scientists, residents, and activists
at an oil refinery’s fenceline.

David J. Hess is professor of Science and Technology Studies and director of the
Ecological Economics, Values, and Policy Program at Rensselaer Polytechnic Insti-
tute. His research focuses on the social studies of science, technology, health, the
environment, and social movements. His most recent books are Alternative Path-
ways in Science and Technology (MIT Press, 2007) and Localist Movements in a
Global Economy (MIT Press, 2009).

473

View publication stats

You might also like