Journal of Experimental Psychology: General
2009, Vol. 138, No. 4, 535–545
© 2009 American Psychological Association
0096-3445/09/$12.00 DOI: 10.1037/a0016796
Negativity Bias in Attribution of External Agency
Carey K. Morewedge
Carnegie Mellon University
This research investigated whether people are more likely to attribute events to external agents when
events are negative rather than neutral or positive. Participants more often believed that ultimatum game
partners were humans rather than computers when the partners offered unusually unfavorable divisions
than unusually favorable divisions (Experiment 1A), even when their human partners had no financial
stake in the game (Experiment 1B). In subsequent experiments, participants were most likely to infer that
gambles were influenced by an impartial participant when the outcomes of those gambles were losses
rather than wins (Experiments 2 and 3), despite their explicitly equal probability. The results suggest a
negative agency bias—negative events are more often attributed to the influence of external agents than
similarly positive and neutral events, independent of their subjective probability.
Keywords: attribution, mind perception, negativity bias, superstition
Prosperity is easily received as our due, and few questions are asked concerning its cause or author . . . . On the other hand, every disastrous accident alarms us, and sets us on enquiries concerning the principles whence it arose . . . . And the mind, sunk into diffidence, terror, and melancholy, has recourse to every method of appeasing those secret intelligent powers, on whom our fortune is supposed entirely to depend. (Hume, 1757/1956, p. 31)

People are apt to believe that computers are "trying" to antagonize them when important files are deleted, that referees conspired to cause their team's loss, and that a deity's wrath has been incurred when natural disaster strikes. Yet when files are easily found, one's team is winning, or the sky is clear and the sun is shining, the state of affairs is seldom attributed to the influence of external agents. This discrepancy reflects a general asymmetry in the way perceivers ascribe causality. Psychopathology, politics, religion, sports, and everyday experience are replete with attributions made to malicious external agents (Boyer, 2001; George & Neufeld, 1987; Gilbert, 1987; Guthrie, 1993; Shaughnessy, 2000). Although helpful external agents may be invoked to explain surprisingly pleasant events (D. T. Gilbert, Brown, Pinel, & Wilson, 2000), such attributions appear to be far less frequent.

The present research investigated whether negative events are especially likely to be attributed to the influence of external agents—entities other than the self that possess their own beliefs and desires (Dennett, 1987). Negative events may be attributed to the influence of external agents simply because negative events are unlikely to be anticipated. Alternatively, people may exhibit a negative agency bias, whereby negative events are more likely to be attributed to the intentions of external agents than events that are similarly positive or neutral, regardless of their subjective probability.
Asymmetric Expectations
When outcomes violate one’s expectations, they are less likely
to be attributed to internal causes and are more likely to be
attributed to external physical or intentional causes (Kelley, 1977;
Subbotsky, 2001). Overconfidence in the probability of one’s
success and other egocentric biases (e.g., Dunning, Griffin,
Milojkovic, & Ross, 1990; Kruger & Dunning, 1999; Weinstein,
1980) may lead one to more often expect positive outcomes to
occur than negative outcomes. Simply because negative events are
more frequently unexpected, they may be more frequently attributed to external causes than positive events (e.g., Gilovich, 1983;
Pyszczynski & Greenberg, 1981; Weiner, 1985).
Unexpected events are likely to be attributed to external agents
rather than to chance because intentional explanations are a useful
way to understand the behavior of unpredictable entities (e.g.,
Dennett, 1987; Epley, Waytz, & Cacioppo, 2007; Rakison &
Poulin-Dubois, 2001; Waytz et al., 2009). If an entity’s behavior is
entirely predictable, one does not need to understand its intentions
to be able to predict its behavior. Instead, one can simply recall
what stimuli it recently encountered or its past reaction to a
particular stimulus (Michotte, 1963). When one cannot easily
predict an entity’s behavior from its past behavior, one must
understand its beliefs and desires to infer its future behavior
(Heider, 1958). By definition, unexpected events are difficult to
predict. Because negative events are more likely to be unexpected,
they may thus be more often attributed to the intentions of external
agents than similarly positive and neutral events.
I gratefully acknowledge the support of a dissertation fellowship from
the Institute for Quantitative Research in the Social Sciences at Harvard
University and a John Parker Scholarship from the Graduate Society of
Fellows at Harvard University. I thank Andrew Boston, Dina Gohar, Tessa
Johung, Matthew Killingsworth, Amit Kumar, Kathy Lee, Rebecca Levine,
Gregory McBroom, Sara Rabinovitch, Dadjie Saintus, Jill Swencionis, and
Lisa Xu for their assistance in the execution of these experiments, and
Susan Carey, Eugene Caruso, Daniel Dennett, Daniel Gilbert, and Daniel
Wegner for their helpful suggestions.
Correspondence concerning this article should be addressed to Carey
K. Morewedge, Department of Social and Decision Sciences, Carnegie
Mellon University, 208 Porter Hall, Pittsburgh, PA 15213. E-mail:
morewedge@cmu.edu
Negative Agency Bias
People generally appear to suspect that the causal origin of
events is a prime mover—an intentional agent that initiates the first
event in a causal chain of events (Rosset, 2008; Vinokur & Ajzen,
1982). Infants expect an intentional agent to be the primary cause
of an inanimate object’s motion, even in the absence of any direct
perceptual evidence (Saxe, Tenenbaum, & Carey, 2005). Adults
seem to be no different. They initially assume that events occurred
as a result of another agent’s intentions, and subsequently correct
that assumption only when given sufficient reason and time to do
so (Rosset, 2008). Even unambiguously accidental events such as
sneezing and slipping on ice may be attributed to intentional
causes when observers are given limited time to infer the cause of
an actor’s behavior. Positive events and behaviors are likely to be
attributed to one’s intentions, whether enacted through skill or
personal luck (Langer & Roth, 1975; Miller & Ross, 1975; Wohl
& Enzle, 2002). As attributing negative events to the self is
undesirable (Taylor & Brown, 1988), people may be inclined to
attribute negative events to the intentions of external agents.
The findings of research examining moral judgment, intentionality, and attribution support this assertion. In contrast to attributions for their own behavior, people are more likely to attribute
negative behaviors performed by external agents to their dispositions and intentions rather than to their situation or to chance:
Others receive more blame for the negative side-effects of their
intentional behaviors than praise for the positive side-effects of
their intentional behaviors (Knobe, 2003, 2005). Intentions are
more often attributed to computers that malfunction frequently
than infrequently (Waytz et al., 2009). Memory is best for negative
behaviors that were attributed to the disposition of an external
agent and for positive behaviors that were attributed to their
situation (Ybarra & Stephan, 1996). Most relevant, people are
more likely to engage in extensive causal reasoning when experiencing events that are negative—whether those events were expected or unexpected (Bohner, Bless, Schwarz, & Strack, 1988).
Even negative psychological states that are incidental increase
the tendency to attribute intentions to external agents. The tendency to anthropomorphize ambiguous stimuli is exacerbated
when experiencing loneliness (Epley, Akalis, Waytz, & Cacioppo,
2008), and mortality salience increases belief in the existence of
supernatural agents and their ability to influence worldly events
(Norenzayan & Hansen, 2006).
A negativity bias exists in the domains of attention, categorization, person perception, memory, decision making, and causal
explanation (for reviews, see Baumeister, Bratslavsky, Finkenauer,
& Vohs, 2001; Kanouse & Hanson, 1971; Rozin & Royzman,
2001). These findings suggest a negativity bias may exist in the
perception of external agency. People may be more likely to
attribute negative events to the intentions of external agents than
similarly positive and neutral events, independent of the subjective
probability of those events.
The Present Research
Three sets of experiments tested whether people are more likely
to attribute negative events to the intentions of external agents than
similarly positive or neutral events, and whether this asymmetry is
due to differences in their subjective probability or to negative
agency bias. Participants in Experiments 1A and 1B played three ultimatum games with partners who either had or did not have a stake in the game. Each participant received an unusually favorable offer and an unusually unfavorable offer from different partners and guessed whether each of those partners was a human or a computer program. In Experiments 2 and 3, the probability of positive and negative events was
explicit and equal. Participants in Experiment 2 experienced monetary gains and losses and reported which gains and losses they
attributed to an impartial agent. Experiment 3 used implicit measures to test when the stated cause of positive and negative events
(i.e., an impartial agent or chance) corresponded with their spontaneous attributions. Across all three sets of experiments, I predicted that participants would be more likely to attribute negative
than positive events to external agents, regardless of the subjective or objective probability of those events.
Experiments 1A and 1B
Experiments 1A and 1B tested whether people are more likely to
attribute unusually negative events than unusually positive events
to external agents in modified ultimatum games. Each participant
in Experiments 1A and 1B played three ultimatum games with
three different anonymous partners (Güth, Schmittberger, &
Schwarze, 1982). In the three games, participants were offered an
unusually favorable, an even, and an unusually unfavorable split
by different partners. After each game, participants guessed
whether their partner was human or a computer program. (In fact,
all of their partners were computer programs.) If people more often
attribute negative events to external agents because they are more
often unexpected, participants should then be equally likely to
attribute unusually favorable and unusually unfavorable splits
to human partners. If people more often attribute negative events
to external agents because of a negative agency bias, then participants should be more likely to attribute unusually unfavorable
than unusually favorable splits to human partners.
Human partners had a financial stake in the games played
in Experiment 1A. Participants in Experiment 1B played ultimatum games with human partners who could not earn money in the
games to test whether they would attribute outcomes differently if
they believed that their human partners were not financially incentivized. In Experiment 1B, participants were also asked to
describe their partner. These descriptions were coded for the use of
intentional language and whether participants spontaneously identified their partner as a human being or a computer program.
Experiment 1A
Method
Participants. Thirty-five undergraduate and graduate students
at Harvard University (19 women; Mage = 22.2, SD = 7.7)
participated for money earned in the experiment ($5–$13).
Monetary splits. In each round of the ultimatum games, a participant and his or her partner split $3. The entity dividing the sum at
stake in each round (the divider) could choose one of three splits to
offer to his, her, or its partner (the receiver): an unusually unfavorable
split ($2.25 divider/$0.75 receiver), a usual even split ($1.50 divider/$1.50 receiver), or an unusually favorable split ($0.75 divider/$2.25 receiver). These amounts were chosen on the basis of previous
research examining average and modal offers in the United States
(Henrich, 2000). Participants in both experiments were aware that
the divider had to choose one of these three splits.
Procedure. Participants were run individually. They were informed that they would play three ultimatum games on a computer
with three different partners, each of whom would be randomly drawn
from a pool that included other human participants and computer
programs. Participants were not informed of any strategy the computer programs might use to select splits, so computer programs could
have been designed to maximize profits, select randomly, or maximize the number of splits accepted by participants.
Next, participants were informed that each game would consist
of two rounds. In the first round, their partner received $3 and
chose one of the three splits to offer the participant. Then participants were given the choice to accept or reject that split. In the
second round of each game, the partner and participants switched
positions. Participants’ splits were always accepted. Each of the 3
partners offered participants a different split (i.e., favorable, even,
or unfavorable). Offer order was random. After finishing each
two-round game, participants were informed of their earnings and
were asked whether they believed that their partner was a human
or a computer program. After finishing the last game, participants
were debriefed and compensated.
Results
There were no significant effects of order in this or further
experiments, so order is not further discussed.
Although participants were as likely to guess that their partner was a human being (50.5%) as they were to guess that their partner was a computer program (49.5%; t < 1), a nonparametric test revealed a significant effect of the generosity of their partner's split on inferences of the partner's identity, χ²(2, N = 35) = 11.51, p < .01. Participants were more likely to attribute an unfavorable split to a human partner than to a computer program, χ²(1, N = 35) = 4.83, p = .03, whereas participants were no more likely to attribute an even split to a human partner than to a computer program, χ²(1, N = 35) = 0.26, p = .61, and were less likely to attribute a favorable split to a human partner than to a computer program, χ²(1, N = 35) = 6.43, p = .01 (see Figure 1, left panel).
Experiment 1B
Method
Participants. Forty-four undergraduate and graduate students
at Harvard University (28 women; Mage = 20.8, SD = 2.7)
participated for money earned in the experiment ($5–$13).
Procedure. The procedure of Experiment 1B was identical to
the procedure of Experiment 1A, with two exceptions. First, participants were informed that their human partners would not receive payment for the games they played with participants. Specifically, participants were told that the human players earned
money from ultimatum games they played with other people but
that the human players would not earn money from the ultimatum
games they played with participants (see the Appendix).
Second, participants gave open-ended descriptions of what they
thought of their partner immediately after each game. Three
judges, blind to condition, rated the extent to which participants
used intentional language to describe their partner or their partner’s behavior on a 5-point scale ranging from 1 (did not use
intentional language) to 5 (definitely did use intentional language). Judges also rated the extent to which participants spontaneously identified their partner as human participant or as computer program on a 5-point scale ranging from 1 (definitely
believed the partner was a computer program) to 5 (definitely
believed that the partner was a human being).
Results
Open-ended responses. Ratings of intentional language and spontaneous partner identification were averaged across judges (Cronbach's α = .71 and α = .72, respectively) and submitted to a repeated measures analysis of variance (ANOVA) with three levels of split (unfavorable, even, favorable). That analysis yielded a main effect of split on the extent to which participants spontaneously used intentional language and spontaneously identified partners as human or as computer program, F(1, 42) = 10.44, p = .002, η²p = .20, and F(1, 42) = 4.65, p = .04, η²p = .10, respectively. Post hoc tests (Fisher's least significant difference [LSD] test) revealed that participants were more likely to spontaneously use intentional language to describe partners who offered them unfavorable splits (M = 3.47, SD = 1.07) than partners who offered them even (M = 2.94, SD = 1.10) or favorable splits (M = 2.49, SD = 1.05; p = .02 and p < .001, respectively) and that participants were more likely to use intentional language when offered an even split than when offered a favorable split (p = .03).

Figure 1. Participants were more likely to believe that an ultimatum game partner was a human rather than a computer program when offered an unusually unfavorable division than a usual even or unusually favorable division of the money at stake in a round, whether their human partners did (left panel) or did not (right panel) have a monetary incentive (Experiments 1A and 1B).
Participants were also more likely to spontaneously describe their partners as human rather than as computer program when offered unfavorable splits (M = 3.34, SD = 1.05) than when offered even (M = 2.92, SD = 1.02) or favorable splits (M = 2.71, SD = 1.15; p = .067 and p = .005, respectively), but they were equally likely to identify their partners as human or as computer program whether offered an even split or a favorable split (p = .28). Of interest, the extent to which participants used intentional language was positively correlated with the extent to which participants spontaneously identified their partner as human rather than as computer program, r(42) = .54, p < .001.
Agent perception. Although participants were as likely to guess that an unknown partner was a human (52.3%) as they were to guess that an unknown partner was a computer program (47.7%; t < 1), a nonparametric test revealed a significant effect of the partner's split on inferences made about the partner's identity, χ²(2, N = 44) = 13.85, p < .001. Participants were more likely to attribute an unfavorable split to a human partner than to a computer program, χ²(1, N = 44) = 4.46, p = .04, whereas participants were no more likely to attribute an even split to a human partner than to a computer program, χ²(1, N = 44) = 2.27, p = .13, and were less likely to attribute a favorable split to a human partner than to a computer program, χ²(1, N = 44) = 7.36, p = .007 (see Figure 1, right panel). These explicit inferences were positively correlated with spontaneous identifications of the partners as human or as computer program, r(42) = .40, p = .008.
Discussion
Whether their human partners had or did not have a financial
incentive to behave selfishly, participants in Experiments 1A and 1B
were most likely to infer that partners who offered them unfavorable
splits were humans and were most likely to infer that partners who
offered them favorable splits were computer programs. Given the
similarity in their procedures and populations, it may be informative to compare attributions made when human partners were self-interested (Experiment 1A) and when they were not (Experiment 1B). As suggested by Rosenthal and Rosnow (1991), attributions were compared with a 2 (self-interest: self-interest, no self-interest) × 3 (split: unfavorable, even, favorable) mixed ANOVA, with repeated measures on the last factor. That analysis revealed a significant main effect of split, F(1, 77) = 15.33, p < .001, η²p = .17, but no main effect of self-interest or interaction (Fs < 1). Post hoc tests (Fisher's
LSD) revealed that participants were more likely to believe their
partners were human when their partners offered them unfavorable or
even splits than when their partners offered them favorable splits
(ps < .001), but they were equally likely to believe their partner was human when their partners offered them unfavorable or even splits (p = .20) (see Figure 1). The most parsimonious interpretation of the similarity between the two experiments is that participants exhibited a negative agency bias. Whether external agents did or did not have a reason to act miserly or charitably toward participants, participants were more likely to attribute negative events than similarly positive events to the intentions of external agents.
There are, however, two concerns with this interpretation of the
results. The first concern is that it is not possible to determine
whether attributions in Experiment 1B are biased. When people are
given this choice of splits, total anonymity, and no payment, they
may elect to offer unfavorable splits to other participants out of
envy or because they derive pleasure from watching others suffer
(Smith et al., 1996). If this is the case, then attributions made in
Experiment 1B may have been accurate. To gain some understanding of the splits that dividers choose under these circumstances, a
posttest was conducted in which dividers did not earn any money
from the splits they offered to receivers.
Sixty-six dividers (34 women; Mage = 22.11, SD = 3.97) were
paid a flat rate to choose which of the three splits would be offered
to a receiver—another participant—at a later date. These flat-rate
dividers knew that they would not earn money as a result of the
split that they chose but that the receiver could earn money if he or
she accepted the offered split. The posttest was conducted in an
experimental economics lab with a no-deception policy, so dividers had no reason to suspect that the splits would not be offered at
a later time to another person.
In stark contrast to the inferences made by participants in
Experiment 1B, 71.2% of flat-rate dividers (47/66) elected to offer
their partner the unusually favorable split. Only 19.7% of the
dividers (13/66) offered their partner the even split, and 9.1% of
the dividers (6/66) offered their partner the unusually unfavorable
split. In other words, when dividers stood to earn nothing from the
split they chose, the majority of those dividers offered their partners an unusually favorable split, χ²(2, N = 66) = 43.73, p < .001.
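As a check on the arithmetic, the reported goodness-of-fit statistic can be recomputed from the counts above (a minimal sketch: the null hypothesis is uniform choice among the three splits).

```python
# Chi-square goodness-of-fit for the flat-rate dividers' choices,
# testing the observed counts against uniform choice over the three splits.
observed = {"favorable": 47, "even": 13, "unfavorable": 6}
n = sum(observed.values())   # 66 dividers in total
expected = n / 3             # 22 per split under the uniform null
chi_sq = sum((count - expected) ** 2 / expected for count in observed.values())
print(round(chi_sq, 2))  # 43.73, matching the reported value
```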
If one assumes that participants in Experiment 1B believed that there was an equal mix of humans and computers, P(human) = .5, and that computers used a random offer strategy [P(favorable split) = .33, P(even split) = .33, and P(unfavorable split) = .33], then one can compare their attributions with the conditional probability that a divider who offered them a favorable, an even, or an unfavorable split was a human rather than a computer program [P(human) = .68, .37, and .21, respectively]. Given these assumptions, binomial tests revealed that participants underestimated the probability that a divider who offered them a favorable split was a human, P(human) = .30, p < .001, correctly estimated the probability that a divider offering them an even split was a human, P(human) = .61, p = .43, and overestimated the probability that a divider offering them an unfavorable split was a human, P(human) = .66, p < .001. This suggests that participants in Experiment 1B exhibited a negativity bias in the attribution of external agency.
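The conditional probabilities cited above follow from Bayes' rule under the stated assumptions: an equal base rate of humans and computers, computers offering each split with probability 1/3, and humans offering splits at the flat-rate posttest rates. A minimal sketch of that computation:

```python
# P(human | offer) via Bayes' rule, under the assumptions stated in the text:
# equal base rates of humans and computers, computers choosing among the
# three splits uniformly, and humans choosing at the flat-rate posttest rates.
p_human = 0.5
p_offer_given_human = {"favorable": 47 / 66, "even": 13 / 66, "unfavorable": 6 / 66}
p_offer_given_computer = {"favorable": 1 / 3, "even": 1 / 3, "unfavorable": 1 / 3}

def posterior_human(offer):
    numerator = p_human * p_offer_given_human[offer]
    denominator = numerator + (1 - p_human) * p_offer_given_computer[offer]
    return numerator / denominator

for offer in ("favorable", "even", "unfavorable"):
    print(offer, round(posterior_human(offer), 2))
# favorable 0.68, even 0.37, unfavorable 0.21
```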
The second concern regarding the foregoing interpretation of the
results is that participants in Experiment 1B could have been
confused or skeptical about their human partners’ payment structure. The attributions made in Experiments 1A and 1B might both
have been made under the assumption that human dividers were
acting in accordance with their self-interest. Note that this interpretation does not rule out the possibility that participants exhibited a negative agency bias, as participants did not know the
strategy used by the computer programs. There is no reason why the computer programs would be less likely to select splits strategically than randomly. If participants assumed that the computer programs were strategically selecting splits, then their attributions would still indicate that they preferentially attributed negative outcomes to self-interested agents (i.e., humans) rather than to self-interested nonagents (i.e., computers).
Experiments 2 and 3 addressed both concerns. Regarding the first concern, participants in Experiments 2 and 3 experienced a series of monetary losses and gains that were random or influenced by an
impartial partner. By explicitly making the probability of positive and
negative outcomes equivalent, this design disambiguated whether the
tendency to attribute negative outcomes to external agents was a bias.
If participants more often attributed their losses than gains to the
influence of their partner, then they would exhibit a negativity bias in
their attributions of external agency. Regarding the second concern,
participants were probed before and after the experiments to make
sure that they understood (and believed) that their outcomes and their
partner’s outcomes were unrelated.
Experiment 2

Experiment 2 expanded on the previous experiments in three ways. First, the probabilities of positive and negative events were explicitly manipulated by having participants play a series of gambles. This allowed me to test whether event probability and valence independently influence attributions to external agents. Second, participants could attribute those events to an impartial confederate or to chance. This meant that the experiment tested only participants' inferences about the behavior of human agents; it did not also test their inferences about the behavior of computer programs. Third, participants could not begin the experiment until they demonstrated that they understood that their partner's outcomes were unrelated to their own outcomes. I predicted that independent of event probability, participants would be more likely to attribute negative events (i.e., monetary losses) than positive events (i.e., monetary gains) to the influence of the impartial confederate.

Method

Participants. Fifty-one undergraduate and graduate students at Harvard University (39 women; Mage = 20.0, SD = 3.6) received $12 for participating (a $7 show-up fee and an additional $5 that was wagered on gambles). The responses of six suspicious participants were not included in any of the analyses.¹

Procedure. An experimenter briefly introduced participants to a confederate and then escorted them to separate rooms so that they could not see or hear each other. Once seated, participants were informed that they and the confederate would win and lose money according to the outcomes of 40 rounds of a game of chance in which an arrow spun on a computerized color wheel (see Figure 2, left panel), and they received $5 to use in the game. The color wheel consisted of a circle divided into five slices, some of which were yellow and some of which were blue. If the arrow stopped on a slice that was the participant's winning color in that round, the participant would earn the amount of money displayed in the top right corner of the screen. If the arrow stopped on a slice that was the other color, the participant would lose the amount of money displayed in the top right corner of the screen. Before the game began, participants chose a card that assigned them a winning color for each round of the game. Winning colors varied randomly across rounds in four random orders.

It was made clear to participants that their winning colors and amounts at stake were unrelated to the winning colors and amounts at stake of the confederate and that there was no way for them or the confederate to know whether they shared a winning color or a stake on any round. Participants were then told that the confederate could decide the outcome of five rounds and that their task was to report the extent to which they thought that the outcome of each round was due to chance or to the confederate on a 5-point scale ranging from 1 (definitely random chance) to 5 (definitely the other participant). After playing all 40 rounds, participants reported the amount they believed that they had won or lost and were carefully debriefed and compensated.

Stimuli and conditions. In a within-subjects design, each participant experienced all possible wins and losses at five stakes ($0, $0.25, $0.50, $0.75, $1.00) and four levels of probability (20%, 40%, 60%, and 80%). In each round, the probability of winning was clearly indicated by the number of yellow and blue slices on the wheel (i.e., 1:4, 2:3, 3:2, or 4:1).
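Because the design fully crossed outcome, stake, and win probability within subjects, the round count and the zero net payout follow directly from the factorial structure. A minimal sketch (the condition labels are illustrative, not the experiment's own code):

```python
from itertools import product

# Enumerate the within-subjects design: every combination of outcome,
# stake, and win probability occurs exactly once (order was randomized).
outcomes = ("win", "loss")
stakes = (0.00, 0.25, 0.50, 0.75, 1.00)
win_probabilities = (0.20, 0.40, 0.60, 0.80)  # 1-4 of the wheel's 5 slices

rounds = list(product(outcomes, stakes, win_probabilities))
print(len(rounds))  # 2 x 5 x 4 = 40 rounds

# Wins and losses at each stake cancel, so the net change over the game is $0,
# which is why participants' reported losses reflect memory, not actual payout.
net = sum(stake if outcome == "win" else -stake
          for outcome, stake, _ in rounds)
print(net)  # 0.0
```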
Results
Agent perception. Attributions of responsibility were analyzed in a 2 (outcome: win, loss) × 5 (stake: $0, $0.25, $0.50, $0.75, $1.00) × 4 (probability: 20%, 40%, 60%, 80%) repeated measures ANOVA. Most important, the analysis revealed a main effect of outcome—participants were more likely to infer that the confederate controlled rounds when participants lost money (M = 2.64, SD = 0.56) than when participants won money (M = 2.44, SD = 0.48), F(1, 44) = 11.23, p = .002, η²p = .20. The analysis also revealed a main effect of stake—participants were more likely to believe that the confederate controlled outcomes when participants had money at stake than when they did not, F(1, 44) = 4.90, p = .03, η²p = .10. Post hoc tests (Fisher's LSD) confirmed that attributions differed between rounds in which the amount at stake was $0 and rounds in which the amount at stake was greater than $0 (all ps < .02); linear contrast, F(1, 41) = 8.34, p = .006, η²p = .17 (see Table 1).
These two main effects are best explained by an Outcome × Stake interaction, F(4, 176) = 3.46, p = .009, η²p = .07. As illustrated by Figure 3, participants were equally likely to allocate responsibility to the confederate for their small losses and their small wins, but they were more likely to allocate responsibility to the confederate for their large losses than for their large wins.

Unsurprisingly, a main effect of probability revealed that participants were more likely to allocate responsibility to the confederate for unlikely outcomes than for likely outcomes, F(1, 44) = 18.89, p < .001, η²p = .30; linear contrast, F(1, 44) = 28.48, p < .001, η²p = .39 (see Table 1). No other significant effects were found.
Monetary estimates. Although participants lost as much
money as they won in the 40 rounds of the game, participants
reported retaining a significantly smaller sum than the $5 they
were given before the first round (M = $2.89, SD = 2.72), t(44) = 5.20, p < .001, r = .62. Apparently, their losses were more memorable than their wins.

¹ During the debriefing, when participants were asked how they determined when the confederate controlled the wheel, 5 reported that they did not believe that the confederate controlled any spins of the wheel. One additional participant reported believing that the confederate was assisting the experimenter.

Figure 2. Examples of trials in Experiments 2 and 3.

Discussion

These results suggest that independent of subjective probability, people exhibit a negativity bias in attributions of external agency. Most important, participants were more likely to believe that an impartial confederate was responsible for their monetary losses than for their monetary gains. As there was no interaction between outcome valence and probability, negative agency bias appears to occur independently of perceived event probability. Furthermore, it appears that participants in Experiment 2 did not simply use the norm of self-interest (Miller, 1997) to infer the confederate's behavior. As illustrated by Figure 3, participants thought confederates were equally responsible whether participants won smaller or larger amounts of money, whereas participants thought confederates were more responsible when participants lost larger amounts of money than when they lost smaller amounts. If participants believed that their fortune was inversely related to the fortune of the confederate, participants should have been more likely to attribute their large wins to chance than their small wins, as when participants received large wins the confederate would have received equally large losses.

Experiment 3
The results of the previous experiments suggest that people are
more likely to explicitly attribute negative events to external agents
than events that are similarly positive or neutral. In Experiment 3, I
examined whether negative outcomes are spontaneously attributed to
external agents by measuring the length of time participants spent
processing information that attributed positive and negative outcomes
to external agents and to chance. Just as infants and adults look longer
at events that contradict their intuition (e.g., Baillargeon, Spelke, &
Wasserman, 1985; Risen & Gilovich, 2008), it was predicted that the
amount of time participants spent processing this information would
demonstrate a negative agency bias. In other words, participants
should spend more time processing information attributing their wins
than losses to an external agent because the former information should
contradict their spontaneous causal attributions.
Table 1
Attributions of Responsibility to an Impartial Confederate by Outcome Valence, Outcome Probability, and Outcome Severity in Experiment 2

                            Outcome probability
Outcome     80%          60%          40%          20%          M
Wins
  $0.00     2.27 (1.12)  2.11 (0.83)  2.47 (1.06)  2.31 (1.16)  2.29 (0.69)
  $0.25     2.27 (0.94)  2.27 (0.94)  2.60 (0.99)  3.16 (1.30)  2.57 (0.60)
  $0.50     1.96 (0.80)  2.29 (0.73)  2.58 (0.89)  2.91 (1.26)  2.43 (0.58)
  $0.75     2.18 (0.89)  2.13 (0.73)  2.64 (1.13)  2.89 (1.15)  2.46 (0.60)
  $1.00     1.96 (0.88)  2.18 (0.89)  2.73 (1.03)  2.80 (1.16)  2.42 (0.62)
  M         2.12 (0.57)  2.20 (0.49)  2.60 (0.69)  2.81 (0.87)  2.43 (0.48)
Losses
  $0.00     2.09 (0.87)  2.27 (1.05)  2.58 (1.10)  2.64 (1.19)  2.39 (0.71)
  $0.25     2.18 (0.78)  2.53 (1.04)  2.62 (1.05)  2.87 (1.34)  2.55 (0.63)
  $0.50     2.40 (0.94)  2.38 (0.89)  2.73 (1.05)  3.20 (1.31)  2.68 (0.63)
  $0.75     2.56 (1.08)  2.60 (1.01)  2.69 (1.08)  3.33 (1.22)  2.79 (0.74)
  $1.00     2.44 (1.14)  2.56 (0.99)  3.02 (1.36)  3.07 (1.16)  2.77 (0.77)
  M         2.33 (0.66)  2.47 (0.66)  2.73 (0.79)  3.02 (0.90)  2.64 (0.56)

Note. Standard deviations appear in parentheses. Probability figures refer to the probability of winning (top half) or losing (bottom half) in that trial. Responses were made on scales ranging from 1 (definitely random chance) to 5 (definitely the other participant).
Method
Participants. Fifty-three residents of Cambridge, Massachusetts (31 women, Mage = 24.5, SD = 10.8) received $15 for participating in the experiment (a $10 show-up fee and an additional $5 that was wagered on gambles).
Procedure. An experimenter briefly introduced participants to
their partner (another participant or a confederate) and then escorted them to separate rooms so that they could not see or hear
each other. Participants were informed that the experiment was
designed to assess how people feel about winning and losing
different amounts of money. Participants were informed that they
would win and lose money according to the outcomes of 48 rounds
of a game of chance that was similar to the game described in
Experiment 2 (see Figure 2, right panel), and they received $5 to
use in the game. The game differed slightly from the game described in the previous experiment: Participants were told that their
partner could control half of the outcomes (i.e., 24 rounds), and
there were three amounts that participants could win or lose on
each round (i.e., 25¢, 50¢, or 75¢). Each participant received each
outcome at four levels of probability (20%, 40%, 60%, and 80%),
in one of eight random orders.
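The design just described implies a fully crossed trial structure: 2 ostensible causes (partner vs. random) × 2 outcomes (win, loss) × 3 stakes × 4 probability levels = 48 rounds, half ostensibly controlled by the partner. A minimal sketch of such a trial list follows; the full crossing, the variable names, and the function are my assumptions for illustration, not the author's materials:

```python
import itertools
import random

# Assumed design factors from the procedure description (Experiment 3).
CAUSES = ["partner", "random"]        # ostensible cause of each outcome
OUTCOMES = ["win", "loss"]
STAKES_CENTS = [25, 50, 75]
PROBABILITIES = [0.20, 0.40, 0.60, 0.80]  # probability of the stated outcome

def build_trial_list(seed=None):
    """Return one randomly ordered list of the 48 unique rounds."""
    trials = [
        {"cause": c, "outcome": o, "stake": s, "probability": p}
        for c, o, s, p in itertools.product(
            CAUSES, OUTCOMES, STAKES_CENTS, PROBABILITIES)
    ]
    random.Random(seed).shuffle(trials)
    return trials

trials = build_trial_list(seed=1)
assert len(trials) == 48
# Half of the rounds are ostensibly controlled by the partner.
assert sum(t["cause"] == "partner") == 24 if False else \
    sum(t["cause"] == "partner" for t in trials) == 24
```

Eight fixed seeds would yield the eight random orders mentioned above, one per counterbalancing group.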
On each round, participants first saw the arrow spin on the
color wheel until it landed on a slice of the color wheel, with the
money at stake displayed in the upper right corner. Then participants were informed how much they won or lost. Next, text
appeared on the monitor that informed participants whether the
outcome was caused by their partner or by a random number
generator. The amount of time participants looked at this information before pressing the spacebar to continue to the next
question served as the critical dependent measure (see Figure
2). Finally, participants reported how happy they felt on a
5-point scale ranging from 1 (not at all happy) to 5 (very
happy). This was included to divert their attention away from
the dependent measure. As in Experiment 2, participants estimated the amount of money they won upon the conclusion of
the experiment.
Figure 3. Participants were more likely to attribute large monetary losses than gains (wins) to the influence of an impartial confederate, but they were equally likely to attribute small monetary losses and gains (wins) to the influence of an impartial confederate (Experiment 2). Bars reflect +1 SE.
Table 2
Milliseconds Spent Looking at Attributions of Wins and Losses to Another Participant or Chance in Experiment 3

Cause                Wins (SD)      Losses (SD)
Other participant    1426a (545)    1334b (520)
Chance               1271b (495)    1333b (528)

Note. Means that do not share a common subscript differ significantly at the p < .05 level when compared with a paired-sample t test. (Untransformed reaction times are presented for the purpose of clarity.)
Results
Agent perception. A square-root transformation was applied to reading time to correct for skewness. The predicted Cause × Outcome interaction was tested and reported by collapsing across outcome stake and outcome probability for the purpose of clarity. When looking time was analyzed with a 2 (cause: other participant, chance) × 2 (outcome: win, loss) repeated measures ANOVA, the analysis revealed a main effect of cause, F(1, 52) = 10.21, p = .002, ηp² = .16; no significant effect of outcome, F(1, 52) = 1.24, p = .27; and the predicted Cause × Outcome interaction, F(1, 52) = 4.52, p = .04, ηp² = .08. Participants looked longer at information when it revealed that the other participant was responsible for their wins than for their losses, t(52) = 2.62, p = .01, r = .34, but participants looked equally long at information suggesting that chance was responsible for their wins and losses (t < 1; see Table 2). (The full factorial analysis and a comparison between the results of that analysis and the results of Experiment 2 can be found in Footnote 2.)
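The analysis pipeline described above (square-root transform of skewed reading times, then a paired comparison between cells) can be sketched as follows. The data are simulated and all names are mine; this illustrates the general technique, not the study's actual code or numbers:

```python
import math
import random
import statistics

def paired_t(x, y):
    """Paired-samples t statistic for two equal-length lists of
    per-participant condition means."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    sd = statistics.stdev(diffs)
    return statistics.mean(diffs) / (sd / math.sqrt(n))

rng = random.Random(7)
# Simulated per-participant mean looking times (ms) for two cells:
# partner-caused wins vs. partner-caused losses (n = 53, as in Exp. 3).
partner_win = [rng.gauss(1400, 300) for _ in range(53)]
partner_loss = [rng.gauss(1300, 300) for _ in range(53)]

# Square-root transform to reduce the right skew typical of reading times,
# then compare the transformed cell means within participants.
t_stat = paired_t([math.sqrt(v) for v in partner_win],
                  [math.sqrt(v) for v in partner_loss])
print(round(t_stat, 2))
```

In the study itself the interaction was tested in a repeated measures ANOVA; the paired t here corresponds to the follow-up cell comparisons reported in the text.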
Monetary estimates. Although participants lost as much money as they won in the game, participants again reported retaining a significantly smaller sum than the $5 they were given before the first round (M = $3.58, SD = 3.85), t(52) = 2.79, p = .009, r = .35.
Discussion
Participants spent more time processing information suggesting
that another person was responsible for their financial gains than
for their financial losses, whereas participants spent an equal
amount of time processing information suggesting that their financial gains and losses were random. These findings suggest that
people are more likely to spontaneously assume that external
agents are responsible for negative events than for positive events
that occur randomly. Furthermore, the results suggest that negative
agency bias appears largely due to what perceivers infer about the
behavior of external agents rather than optimistically biased inferences about chance events—that agents are more likely to cause
negative events rather than that good events are more likely to
occur at random.
General Discussion
The findings of the present research demonstrate negativity bias in
the attribution of external agency. Across three experiments, negative
outcomes more often led perceivers to infer the presence and influence of external agents than did positive and neutral outcomes. In
ultimatum games, dividers who offered participants unfavorable splits
were more likely to be identified as humans than as computers,
whereas dividers who offered participants favorable splits were more
likely to be identified as computers than as humans. This bias occurred whether human dividers received some or none of the money
they split with the participants.
Attributions of losses and gains in Experiments 2 and 3 suggest
that negativity bias in the attribution of external agency is not due
to differences in the subjective probability of positive and negative
events. In Experiment 2, participants were more likely to infer that
outcomes were due to the intentions of impartial confederates
when outcomes were negative (losses) than when outcomes were
positive (wins). This effect was exacerbated by the magnitude of
negative outcomes, whereas the magnitude of positive outcomes
did not affect participants’ attributions. The findings of Experiment 3 suggest that people are more likely to spontaneously
attribute negative outcomes to external agents than positive outcomes, whereas people are equally likely to spontaneously attribute positive and negative outcomes to chance.
² Reading time was submitted to the full 2 (cause: other participant, chance) × 2 (outcome: win, loss) × 3 (stake: 25¢, 50¢, 75¢) × 4 (probability: 20%, 40%, 60%, 80%) repeated measures ANOVA. In Experiment 2, an Outcome × Stake interaction found that participants were more likely to attribute large losses to the confederate than small losses but were no more likely to attribute large gains to the confederate than small gains. The corresponding test in Experiment 3, an Outcome × Stake × Cause interaction, was significant, F(2, 104) = 13.61, p = .001, ηp² = .21. In line with the previous findings, the pattern of means suggests that participants looked longer at causal information attributing a win to their partner than to chance at all stakes. In contrast to the previous findings, participants looked longer at the lowest stake losses that were attributed to chance than to their partner and longer at higher stakes losses that were attributed to their partner than to chance. It is unclear why this pattern among losses did not replicate. Perhaps the different stakes used or the number of outcomes under the partner's control made participants surprised that the partner would attempt to control a round with a small stake.
The analysis also found significant interactions that were not predicted or found in Experiment 2; these are likely due to the different payoff structures, differences between spontaneous and deliberate attributions, and the different number of outcomes that were presumably controlled by the partner. For the purposes of clarity and space, I have attempted to interpret the means concisely by reporting only the results of significant tests that were appropriately conservative (e.g., if sphericity was violated). A document containing the full analysis and means is available upon request from the author.
The analysis yielded an Outcome × Probability interaction, F(3, 156) = 8.20, p < .001, ηp² = .14, whereby participants looked for less time at high probability (.8) losses than wins but looked equally long at losses and wins at other levels of probability; a Cause × Probability interaction, F(3, 156) = 27.26, p < .001, ηp² = .34, whereby participants looked longer when their partner caused improbable than probable outcomes but looked no longer when improbable or probable outcomes were random; a Cause × Stake interaction, F(2, 104) = 7.96, p < .01, ηp² = .13, whereby participants appeared to look equally long at outcomes attributed to partners at all levels of stake but looked for less time at random higher stake losses than random lower stake losses; and a Cause × Stake × Probability interaction, F(6, 312) = 11.79, p < .01, ηp² = .19, whereby participants looked longer at improbable outcomes attributed to their partner than probable outcomes, but there was no clear pattern of responses for random outcomes. Participants also exhibited an Outcome × Stake × Probability interaction, F(6, 312) = 10.46, p < .01, ηp² = .17, whereby participants looked longer at lower probability losses than higher probability losses but looked no longer at lower or higher probability wins, with no clear pattern differentiating lower and higher stake outcomes.
It is important to note that the negative agency bias demonstrated across the three experiments cannot simply be explained by
an assumption that the partners were behaving self-interestedly or
were motivated by envy or schadenfreude. Dividers gained nothing
in the games they played with participants in Experiment 1B and
thus benefited equally from all three divisions, so participants had
no reason to assume they chose divisions according to their self-interest. In Experiments 2 and 3, participant and partner outcomes
were ostensibly independent, so partners were just as likely to
share the same reward structure with participants as have a reward
structure that was inversely related to that of the participants.
Moreover, the attributions made by participants in Experiment 2 suggest that they did not perceive their payments to be inversely related to their partners'. If their reward structures were inversely
related, then participants should have been more likely to attribute
their larger wins to chance than their small wins, as their larger
wins would have implied that their partner would have received an
equally large (undesired) loss.
Other forms of misanthropy such as envy and schadenfreude
may sometimes contribute to negative agency bias but cannot
entirely account for the results of the experiments presented here.
Perhaps an assumption of envy could explain the attributions made
by participants in Experiment 1B. Participants may have assumed
that human dividers offered them the worst of all possible splits
because they could not profit from any of the splits they chose.
Envy and schadenfreude cannot explain the attributions made by
participants in Experiments 2 and 3 because neither they nor their
partners knew whether the other person gained or lost money on
any trial. Furthermore, the partners had greater power over outcomes, so they had no reason to envy the participants (Parrott &
Smith, 1993). Finally, there was no reason for participants to
assume their partners made choices to experience pleasure by
watching them suffer. To experience such schadenfreude, their
partners would have to have been able to watch participants lose,
and partners and the participants would have to have been in direct
competition. Schadenfreude should not occur when outcomes are
unknown and independent, as they were in Experiments 2 and 3
(Heider, 1958; Takahashi et al., 2009).
Although a negative agency bias did nothing to improve the
quality of judgments in the experiments presented here, there may
be motivational and evolutionary advantages to committing such a
Type I error. As suggested earlier in the introduction, people are
motivated to think well of themselves and defer responsibility for
negative events to causes other than themselves (Taylor & Brown,
1988). Given the general tendency to attribute events to an intentional first cause (Rosset, 2008; Saxe et al., 2005; Vinokur &
Ajzen, 1982), they may be motivated to attribute negative events to
agents other than themselves, which could lead to negative agency
bias. People may also be motivated to attribute negative events to
other agents rather than to chance to minimize their emotional
impact. Events evoke spontaneous explanation (Lombrozo, 2006),
and the uncertainty that accompanies unexplained events amplifies
their emotional impact (Bar-Anan, Wilson, & Gilbert, 2009). Attributing an event to the intentions of an external agent provides
perceivers with a satisfactory explanation (Jones & Davis, 1965).
As people are motivated to amplify the impact of positive experiences and diminish the impact of negative experiences (Taylor,
1991), they should be more motivated to attribute negative than
positive events to the intentions of external agents.
From an expected utility standpoint, the best strategy may be to
assume that negative outcomes were caused by external agents.
Intentional bads are more painful than accidental bads (Gray &
Wegner, 2008), and they are more likely to be repeated if proactive
measures are not taken (Axelrod & Hamilton, 1981). Within a
single context, harm caused by an antagonist (e.g., being cheated
by a casino) may be more likely to be repeated than a harm that
was accidental (e.g., losing a gamble by chance). Antagonists may
also harm a person in multiple contexts, whereas accidental threats
and dangers are likely to be specific to a single place or situation.
By assuming the presence of an antagonist, one may be better able
to avoid a quick repetition of the unpleasant event one has just
experienced. Repeated goods, whether intentional or accidental, do
not immediately threaten one’s survival and are less likely to
require one’s immediate attention. It may thus be rational and
advantageous to more often spontaneously attribute negative than
positive events to the intentions of external agents, even if that
inference is often wrong (Haselton & Buss, 2000).
It is not entirely clear whether a negative agency bias is a rational inference, however, as there may be steep costs to assuming that others intended to produce the negative events that one experiences. Participants in Experiment 1B exhibited markedly false
assumptions about human behavior. Had they the opportunity to
contact the humans who they believed were responsible for their
divisions, their acrimonious assumptions may have poisoned any
interpersonal interactions they had with (what would have apparently been well-intentioned) dividers.
Future research may find that negative agency bias contributes
to pathological thinking, superstition, and misanthropy. There is a
persecutory subtype of delusional disorder (i.e., paranoid personality disorder), for example, which is accompanied by self-persecutory delusions that lead people to believe other agents have
or are planning to intentionally cause them harm. Perhaps negative
agency bias is a milder and more common form of such disordered
thinking. People sometimes do attribute extremely unusual positive events to the work of external agents (D. T. Gilbert et al.,
2000), as in the case of attributing miracles to God, but there is no
delusional disorder that leads people to believe that external agents
have or are conspiring to bestow them with pleasures or successes
that are anticipated (American Psychiatric Association, 2000). As
in the case of self-serving attributions for success, however, some
people do make delusional attributions for their real and unreal
successes to themselves (i.e., megalomaniacs).
Widely held superstitious beliefs more often attribute negative
than positive events to the work of external agents. Bostonians
have long attributed the Boston Red Sox’s 86-year losing streak
and the Yankees’ good fortune to “The Curse of the Bambino,” a
curse put on the team when it traded Babe Ruth to the Yankees in
1918 so that the owner could finance a Broadway musical
(Shaughnessy, 2000). In contrast, the more recent successes of the
Red Sox have been attributed to the skill of the team (Damon &
Golenbock, 2005; Triumph Books & Boston Globe, 2007). Less
industrialized cultures attribute the failure of crops and sickness to the work of external agents such as witches (Boyer, 2001). Folklore and mythology are replete with negative gods, demons, and supernatural creatures. Benevolent agents such as fairy godmothers, angels, and purely benevolent gods are less frequently mentioned (Campbell, 1949; Hume, 1757/1956).
Imaginary and unseen external agents are not alone. People
exhibit a tendency to attribute their misfortune to more common
external agents as well. Politicians have attributed the misfortune
of their country to ethnic minorities (e.g., M. Gilbert, 1987).
People are more likely to consider other people to be responsible
for the side effects of their behaviors if those side effects are
negative than if they are positive (Knobe, 2003, 2005; Leslie,
Knobe, & Cohen, 2006). Fans are likely to see the infractions of
rival teams as evidence of unsportsmanlike behavior (Hastorf &
Cantril, 1954). And perceivers believe that others are more likely
to cheat and act according to their self-interest than they would in
the same situation (Miller, 1997; Miller, Visser, & Staub, 2005). In
short, psychopathology, superstition, and common inferences
about others’ behavior suggest that people are more likely to
believe that external agents are responsible for bad than for good.
If a negative agency bias plays some role in these forms of
psychopathology, superstition, and misanthropic perceptions, then
it is questionable whether the benefits it may confer outweigh its
cost to the perceiver and to society.
References
American Psychiatric Association. (2000). Diagnostic and statistical manual of mental disorders (4th ed., text rev.). Washington, DC: Author.
Axelrod, R., & Hamilton, W. D. (1981, March 27). The evolution of cooperation. Science, 211, 1390–1396.
Baillargeon, R., Spelke, E., & Wasserman, S. (1985). Object permanence
in five-month-old infants. Cognition, 20, 191–208.
Bar-Anan, Y., Wilson, T. D., & Gilbert, D. T. (2009). The feeling of
uncertainty intensifies affective reactions. Emotion, 9, 123–127.
Baumeister, R. F., Bratslavsky, E., Finkenauer, C., & Vohs, K. D. (2001).
Bad is stronger than good. Review of General Psychology, 5, 323–370.
Bohner, G., Bless, H., Schwarz, N., & Strack, F. (1988). What triggers
causal attributions? The impact of valence and subjective probability.
European Journal of Social Psychology, 18, 335–345.
Boyer, P. (2001). Religion explained: The evolutionary origins of religious
thought. New York: Basic Books.
Campbell, J. (1949). The hero with a thousand faces. New York: Pantheon
Press.
Damon, J., & Golenbock, P. (2005). Idiots: Beating “the curse” and
enjoying the game of life. Boston: Three Rivers Press.
Dennett, D. (1987). The intentional stance. Cambridge, MA: MIT Press.
Dunning, D., Griffin, D. W., Milojkovic, J., & Ross, L. (1990). The overconfidence effect in social prediction. Journal of Personality and Social Psychology, 58, 568–581.
Epley, N., Akalis, S., Waytz, A., & Cacioppo, J. T. (2008). Creating social connection through inferential reproduction: Loneliness and perceived agency in gadgets, gods, and greyhounds. Psychological Science, 19, 114–120.
Epley, N., Waytz, A., & Cacioppo, J. T. (2007). On seeing human: A three-factor theory of anthropomorphism. Psychological Review, 114, 864–886.
George, L., & Neufeld, R. W. J. (1987). Magical ideation and schizophrenia. Journal of Consulting and Clinical Psychology, 55, 778–779.
Gilbert, D. T., Brown, R. P., Pinel, E. C., & Wilson, T. D. (2000). The illusion of external agency. Journal of Personality and Social Psychology, 79, 690–700.
Gilbert, M. (1987). The Holocaust: A history of the Jews of Europe during the Second World War. New York: Henry Holt and Company.
Gilovich, T. (1983). Biased evaluation and persistence in gambling. Journal of Personality and Social Psychology, 44, 1110–1126.
Gray, K., & Wegner, D. M. (2008). The sting of intentional pain. Psychological Science, 19, 1260–1262.
Güth, W., Schmittberger, R., & Schwarze, B. (1982). An experimental
analysis of ultimatum bargaining. Journal of Economic Behavior and
Organization, 3, 367–388.
Guthrie, S. (1993). Faces in the clouds. New York: Oxford University
Press.
Haselton, M. G., & Buss, D. M. (2000). Error management theory: A new
perspective on biases in cross-sex mind reading. Journal of Personality
and Social Psychology, 78, 81–91.
Hastorf, A. H., & Cantril, H. (1954). They saw a game: A case study. Journal of Abnormal and Social Psychology, 49, 129–134.
Heider, F. (1958). The psychology of interpersonal relations. New York:
Wiley.
Henrich, J. (2000). Does culture matter in economic behavior? Ultimatum
game bargaining among the Machiguenga of the Peruvian Amazon.
American Economic Review, 72, 973–979.
Hume, D. (1956). The natural history of religion. Stanford, CA: Stanford
University Press. (Original published 1757)
Jones, E. E., & Davis, K. E. (1965). From acts to dispositions: The attribution process in person perception. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 2, pp. 219–266). New York: Academic Press.
Kanouse, D. E., & Hanson, L. R. (1971). Negativity in evaluations. In E. E. Jones, D. E. Kanouse, H. H. Kelley, R. E. Nisbett, S. Valins, & B. Weiner (Eds.), Attribution: Perceiving the causes of behavior (pp. 47–62). Morristown, NJ: General Learning Press.
Kelley, H. H. (1977, June). Magic tricks: The management of causal
attributions. Paper presented at the Arbeitstagung ‘Attribution’, University of Bielefeld, Germany.
Knobe, J. (2003). Intentional action in folk psychology: An experimental investigation. Philosophical Psychology, 16, 309–324.
Knobe, J. (2005). Theory of mind and moral cognition: Exploring the
connections. Trends in Cognitive Sciences, 9, 357–359.
Kruger, J. M., & Dunning, D. (1999). Unskilled and unaware of it: How
difficulties in recognizing one’s own incompetence lead to inflated
self-assessments. Journal of Personality and Social Psychology, 77,
1121–1134.
Langer, E. J., & Roth, J. (1975). Heads I win, tails it’s chance: The illusion
of control as a function of the sequence of outcomes in a purely chance
task. Journal of Personality and Social Psychology, 32, 951–955.
Leslie, A., Knobe, J., & Cohen, A. (2006). Acting intentionally and the side-effect effect: Theory of mind and moral judgment. Psychological Science, 17, 421–427.
Lombrozo, T. (2006). The structure and function of explanations. Trends in Cognitive Sciences, 10, 464–470.
Michotte, A. (1963). The perception of causality. New York: Basic Books.
Miller, D. T. (1997). The norm of self-interest. American Psychologist, 54,
1053–1060.
Miller, D. T., & Ross, L. (1975). Self-serving biases in the attribution of
causality: Fact or fiction? Psychological Bulletin, 82, 213–225.
Miller, D. T., Visser, P. S., & Staub, B. (2005). How surveillance begets
perceptions of dishonesty: The case of the counterfactual sinner. Journal
of Personality and Social Psychology, 89, 117–128.
Norenzayan, A., & Hansen, I. G. (2006). Belief in supernatural agents in the face of death. Personality and Social Psychology Bulletin, 32, 174–187.
Parrott, W. G., & Smith, R. H. (1993). Distinguishing the experiences of envy and jealousy. Journal of Personality and Social Psychology, 64, 906–920.
Pyszczynski, T. A., & Greenberg, J. (1981). Role of disconfirmed expectancies in the instigation of attributional processing. Journal of Personality and Social Psychology, 40, 31–38.
Rakison, D., & Poulin-Dubois, P. (2001). Developmental origin of the animate–inanimate distinction. Psychological Bulletin, 127, 209–228.
Risen, J. L., & Gilovich, T. (2008). Why people are reluctant to tempt fate.
Journal of Personality and Social Psychology, 95, 293–307.
Rosenthal, R., & Rosnow, R. L. (1991). Essentials of behavioral research:
Methods and data analysis (2nd ed.). Boston: McGraw-Hill.
Rosset, E. (2008). It’s no accident: Our bias for intentional explanations.
Cognition, 108, 771–780.
Rozin, P., & Royzman, E. B. (2001). Negativity bias, negativity dominance, and contagion. Personality and Social Psychology Review, 5, 296–320.
Saxe, R., Tenenbaum, J. B., & Carey, S. (2005). Secret agents: Inferences
about hidden causes by 10- and 12-month-old infants. Psychological
Science, 16, 995–1001.
Shaughnessy, D. (2000). The curse of the Bambino. New York: Penguin
Books.
Smith, R. H., Turner, T. J., Garonzik, R., Leach, C. W., Urch-Druskat, V., & Weston, C. (1996). Envy and schadenfreude. Personality and Social Psychology Bulletin, 22, 158–168.
Subbotsky, E. (2001). Causal explanations of events by children and adults: Can alternative causal modes coexist in one mind? British Journal of Developmental Psychology, 19, 23–46.
Takahashi, H., Kato, M., Matsuura, M., Mobbs, D., Suhara, T., & Okubo, Y. (2009, February 13). When your gain is my pain and your pain is my gain: Neural correlates of envy and schadenfreude. Science, 323, 937–939.
Taylor, S. E. (1991). Asymmetrical effects of positive and negative events: The mobilization–minimization hypothesis. Psychological Bulletin, 110, 67–85.
Taylor, S. E., & Brown, J. D. (1988). Illusion and well-being: A social
psychological perspective on mental health. Psychological Bulletin, 103,
193–210.
Triumph Books & Boston Globe. (2007). So good! The incredible championship season of the 2007 Red Sox. Boston: Triumph Books.
Vinokur, A., & Ajzen, I. (1982). Relative importance of prior and immediate events: A causal primacy effect. Journal of Personality and Social Psychology, 42, 820–829.
Waytz, A., Morewedge, C. K., Epley, N., Monteleone, G., Gao, J. H., & Cacioppo, J. T. (2009). Making sense by making sentient: Effectance motivation increases anthropomorphism. Manuscript under review.
Weiner, B. (1985). "Spontaneous" causal thinking. Psychological Bulletin, 97, 74–84.
Weinstein, N. D. (1980). Unrealistic optimism about future life events. Journal of Personality and Social Psychology, 39, 806–820.
Wohl, M. J. A., & Enzle, M. E. (2002). The deployment of personal luck: Illusory control in games of pure chance. Personality and Social Psychology Bulletin, 28, 1388–1397.
Ybarra, O., & Stephan, W. G. (1996). Misanthropic person memory.
Journal of Personality and Social Psychology, 70, 691–700.
Appendix
Ultimatum Game Instructions in Experiment 1B
“In this study, you’ll play six ultimatum games with three
different opponents currently on our server— computer programs
and people playing at other computers located in William James
Hall and at Harvard Business School.
In an ultimatum game, one player, the divider, gets to split a pot of
money between self and the other player, the receiver. If the receiver
accepts the split, both get to keep the amounts stipulated by the divider
for that round. If the receiver rejects the split, neither gets to keep any
of the money from that round. You will play six ultimatum games, each
for $3. With each opponent, you’ll get to be the divider once and the
receiver once. You are guaranteed to keep any money that you earn.
In today’s experiment, your human opponents will earn money
in games played with the other participants, but they will not be
paid in the games they play with you.
When you are ready to begin, please click CONTINUE at the
bottom right corner of the screen.”
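The payoff rule stated in these instructions (an accepted split pays out as stipulated; a rejected split pays neither player) can be summarized in a short sketch. The function and variable names are mine, for illustration only:

```python
def ultimatum_payoffs(pot_cents, offer_to_receiver_cents, accepted):
    """Return (divider, receiver) payoffs in cents under the rule above:
    if the receiver accepts, both keep the stipulated amounts; if the
    receiver rejects, neither keeps any money from that round."""
    if not 0 <= offer_to_receiver_cents <= pot_cents:
        raise ValueError("offer must lie between 0 and the pot")
    if accepted:
        return pot_cents - offer_to_receiver_cents, offer_to_receiver_cents
    return 0, 0

# A $3 pot with a $1 offer, accepted: divider keeps $2, receiver keeps $1.
assert ultimatum_payoffs(300, 100, True) == (200, 100)
# The same offer rejected: neither player keeps anything.
assert ultimatum_payoffs(300, 100, False) == (0, 0)
```

Note that in Experiment 1B the human dividers were unpaid, so their own payoff column was effectively zero regardless of the split they chose.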
Received March 16, 2009
Revision received May 18, 2009
Accepted May 19, 2009