1sperber EpistemicVigilance
Epistemic Vigilance
_________________________________________________________
Abstract: Humans depend massively on communication with others, but this leaves them open to the risk of being accidentally or intentionally misinformed. We claim that humans have a suite of cognitive mechanisms for epistemic vigilance to ensure that communication remains advantageous despite this risk. Here we outline this claim and consider some of the ways in which epistemic vigilance works in mental and social life by surveying issues, research and theories in different domains of study. We are grateful for suggestions and comments on an earlier version of this article, and to the Centre for the Study of Mind in Nature at the University of Oslo for supporting our work.
1. Introduction
We claim that humans have a suite of cognitive mechanisms for epistemic vigilance,
targeted at the risk of being misinformed by others. Here we present this claim and
consider some of the ways in which epistemic vigilance works in mental and social
life. Our aim is to integrate a wide range of relevant issues and findings into a coherent topic for further research.
Humans are exceptional among animals for both the richness and strength of
their cognitive abilities and the extent to which they rely on a wide variety of
information communicated by others. These two traits are linked. On the one hand, it is their cognitive abilities that make such rich communication possible. On the other hand, these individual abilities would not develop or function properly without the vast amount of information provided by others.
How reliable are others as sources of information? In general, they are mistaken
no more often than we are – after all, 'we' and 'they' refer to the same people – and they know things that we don't know. So it should be advantageous to rely even more than we do on communicated information. Could we not improve the benefits of this trust by exercising some degree of vigilance towards the competence of others? That would depend on the cost and reliability of such vigilance. But in any case, the major risk in communication has to do not with the competence of others, but with their interests and their honesty. While the interests of others often overlap with our own, they rarely coincide with ours exactly. In a variety of situations,
their interests are best served by misleading or deceiving us. It is because of the risk of deception that epistemic vigilance is needed. In overt intentional communication, a communicator performs an action by which she not only conveys some information but also conveys that she is doing so intentionally (Grice, 1975; Sperber and Wilson, 1995).1 For communication of this type to succeed, both communicator and addressee
must cooperate by investing some effort: in the communicator's case, the effort required to perform a communicative action, and in the addressee's case, the effort
required to attend to it and interpret it. Neither is likely to invest this effort without
expecting some benefit in return. For the addressee, the normally expected benefit is
to acquire some true and relevant information. For the communicator, it is to produce
some intended effect in the addressee. To fulfil the addressee's expectations, the communicator should do her best to communicate true information. To fulfil her own goals, she should communicate whatever information is likely to produce the intended effect in the addressee, regardless of whether it is true or false.2
1 Such human communication is very different from the many forms of animal
communication discussed by Dawkins and Krebs (1978) and Krebs and Dawkins
(1984). However, some similar evolutionary considerations about costs and benefits apply to both cases.
2 There are situations where communicators would not stand to benefit from misleading their audience, for instance when teaching their own children. However, people communicate in a wide variety of situations, with interlocutors whose interests quite often diverge from their own.
People stand to gain immensely from communication with others, but this leaves
them open to the risk of being accidentally or intentionally misinformed, which may
reduce, cancel, or even reverse these gains. The fact that communication is so
pervasive despite this risk suggests that people are able to calibrate their trust well
(Sperber, 2001; Bergstrom et al., 2006). For this to happen, the abilities for overt
intentional communication and epistemic vigilance must have evolved together, and
must also develop together and be put to use together. A disposition to be vigilant is
likely to have evolved biologically alongside the ability to communicate in the way
that humans do. Human social life (with some cultural variability) provides plenty of
generate not only psychological but also social vigilance mechanisms. Before
manage (see Paglieri and Woods In press, who develop this parsimony argument and
philosophical topic – see Origgi, 2004, 2008a). What is less obvious is the claim that
humans not only end up trusting one another much of the time, but are also trustful
and willing to believe one another to start with, and withdraw this basic trust only in special circumstances. Philosophers from Thomas Reid to Tyler Burge and Ruth Millikan, and more recently psychologists such as Daniel Gilbert, have made this stronger claim, arguing that humans are fundamentally trustful. Below, we review these philosophical and psychological claims and suggest that for trust to play the role it does, it must be buttressed by vigilance. On the traditional individualist view, beliefs acquired through testimony do not present the same warrants as clear and distinct ideas or sense impressions arrived at
by oneself. According to John Locke, for instance, 'The floating of other men's opinions in our brains makes us not one jot the more knowing, though they happen to be true' (Locke, 1690, book I, ch. 3, sect. 23). Historically, this individualistic stance led to a reductionist view on which beliefs based on testimony are justified
only if acceptance of the testimony is itself justified by other true beliefs acquired not
through testimony but through perception or inference (see Fricker, 1995; Adler, 2002). It contrasts with an anti-reductionist view which treats trust in testimony as intrinsically justified (Hardwig, 1985; Coady, 1992;
Foley, 1994). According to Thomas Reid, who provided an early and influential
articulation of this anti-reductionist view, humans not only trust what others tell
them, but are also entitled to do so. They have been endowed by God with a
disposition to speak the truth and a disposition to accept what other people tell them
as true. Reid talks of two principles 'that tally with each other': the Principle of Veracity and the Principle of Credulity. Later authors have pointed to the very existence of a public language as material proof that principles of credulity and veracity are indeed in
force. How could shared meanings in a public language ever have stabilised, were it
not for the fact that most statements in such a language are true testimonials?
According to Lewis (1969) and Davidson (1984) in particular, the very possibility of a
common language presupposes a generally truthful use of speech. This can be used to
provide an a priori justification for trust in testimony (see Coady, 1992). Thus, Tyler
Burge argues that linguistic communication has a 'purely preservative character': just as memory preserves beliefs across time, testimony preserves them across persons. On this view, we are entitled to 'accept as true something that is presented as true and that is intelligible [to us] unless there are stronger reasons not to do so' (Burge, 1993, pp. 457-88).
Approaching the issue from an evolutionary perspective, Ruth Millikan (1987) argues along similar lines. In this debate there are two distinct issues, one normative and the other descriptive. The normative issue has to do with the conditions under which beliefs based on testimony are justified or count as knowledge. The descriptive issue has to do with the cognitive and social practices
involved in the production and acceptance of testimony. The two issues are explicitly
linked in a 'third way' approach which assumes that our actual practices, which
involve some degree of vigilance, are likely to be reasonable, and therefore at least
indicative of what the norm should be (e.g. Adler, 2003, Fricker, 2006).
In particular, work by Daniel Gilbert and his colleagues seems to show that our mental systems start by automatically accepting communicated information, and only later, if at all, go on to examining it and possibly rejecting it (Gilbert et al., 1990; Gilbert et al., 1993). This
can be seen as weighing (from a descriptive rather than a normative point of view) in favour of the view that humans are fundamentally trustful. In one well-known experiment, participants were told that they would have to learn Hopi words. They
were then presented with sentences such as 'A Monishna is a star', followed shortly by
the signal TRUE or FALSE, to indicate the truth value of the preceding statement. In
3 Note that this anti-reductionist view, which treats testimony as a simple process
Bezuidenhout (1998) argues against Burge on this basis, and Origgi and Sperber
some cases, however, participants were distracted while processing the signal TRUE
or FALSE. Later, they were given a recognition task in which the same statements
about Hopi words were presented, and they had to judge whether they were true or false. If unexamined statements are accepted by default, then distracting participants from indications that the statement was false should lead them to misremember those statements as true, and this is what was found. On this view, rejection carries an extra processing cost which is likely to be kept to a bare minimum when the information is of no relevance anyhow. If you hear a comment on the radio about a competition in some sport you neither know nor
care about, you are unlikely to invest any extra energy in deciding whether or not to
believe what you hear. If forced to guess whether it is true or false, you might guess
that it is true. After all, it was not merely uttered but asserted. Guessing that it was
false would amount to questioning the legitimacy of the assertion, and why should you do that? The crucial factor here, we suggest, is relevance – or rather, irrelevance – in the materials used by Gilbert and his colleagues. Even if the
participants could muster some interest for statements about the meaning of Hopi words (and there is nothing in either the experimental situation or the participants' background knowledge which makes it likely that they would), the information that one of these statements (e.g. 'A Monishna is a star') is false would still be utterly
irrelevant to them. From the knowledge that such a statement is false, nothing
follows. With other statements, things may be different. If you had prior reasons for
thinking that a certain statement was true, or if it described a normal state of affairs,
it is easy to see how you might find it relevant to be told that it is false. For instance, it
is easy to see how being told that 'Patrick is a good father' is false might have a wide range of implications. In experiments by Hasson, Simmons, and Todorov (2005), which were otherwise similar to Gilbert's, when
participants were presented with statements whose falsity had this kind of potential
relevance, automatic acceptance was again no longer found. These results cast doubt
on the import of experimental evidence which has been claimed to show that acceptance of communicated information is automatic and unconditional.
As noted above, philosophers and psychologists who argue that humans are
fundamentally trustful do not deny that, when the circumstances seem to call for it,
people take a critical stance towards communicated information, and may end up
rejecting it. So defenders of this approach are not committed to denying that such a
critical stance might exploit dedicated cognitive mechanisms for epistemic vigilance.
Vigilance (unlike distrust) is not the opposite of trust; it is the opposite of blind trust
(see also Yamagishi, 2001). Still, the philosophers and psychologists whose claims we
have discussed in this section assume that even if people do not trust blindly, they at
least have their eyes closed most of the time to the possibility of being misinformed.
In Gilbert's terms, people are trustful 'by default' (Gilbert et al., 1990, p. 601) and are vigilant only when circumstances motivate them to do so. This leaves unanswered the question of how they might muster vigilance when it is needed.
Note too that the idea of default trust draws on an old-style Artificial Intelligence picture of the mind in which each mechanism stands idle until its turn comes to do its job, which it then does fully and uninterrupted. An alternative
possible view is that several mechanisms may work in parallel or in competition. For
instance, it could be that any piece of communicative behaviour activates two distinct processes: one geared to comprehending the message, the other to assessing its trustworthiness. Either process might abort for lack of adequate input,
or because one process inhibits the other, or as a result of distraction.
Here is an analogy which may help to clarify how epistemic trust can co-exist with
epistemic vigilance, and indeed be buttressed by it. When we walk down a street
through a crowd of people, many at very close quarters, there is a constant risk of
inadvertent or even intentional collision. Still, we trust people in the street, and have
no hesitation about walking among them. Nor is it just a matter of expecting others to
take care while we ourselves walk carelessly. We monitor the trajectories of others, automatically adjusting our level of vigilance to the surroundings. Most of the time, it
is low enough to be unconscious and not to detract, say, from the pleasure of a stroll,
but it rises when the situation requires. Our mutual trust in the street is largely based on this mutual vigilance. It is not that we assume others can simply be trusted and therefore need to be vigilant only in rare and special circumstances.
In communicating, communicators have two distinct goals: to be understood, and to make their audience think or act according to what is understood (or at least to move them some way towards acceptance).
Are comprehension and acceptance similarly dissociable in non-human communication? There is some limited suggestive evidence that they are. It has been experimentally
established that vervet and rhesus monkeys do not act on an alarm call from an
individual that has produced a series of false alarm calls in the past (Cheney and Seyfarth, 1990). These findings seem to suggest that the monkeys understand the message but do not accept it, given the past unreliability of its source. It could be, however, that alarm calls from this unreliable individual are not interpreted and then rejected by its conspecifics, but are simply treated as mere noise, and therefore ignored. More evidence would be needed to establish a genuine dissociation of comprehension from acceptance in non-human communication. (In any case, if it emerged that other
social animals do exert some form of epistemic vigilance, this would enrich our understanding of its evolution without undermining our account of the human case.)
Philosophers of language have long distinguished comprehension from acceptance and considered the relations between them. Austin, for instance, distinguished 'the securing of uptake' (that is, 'bringing about the understanding of the meaning and the force of the locution') from the further cognitive or behavioural effects an utterance may have (Austin, 1962, p. 116).
Grice, by contrast, took the intention to produce a cognitive or behavioural effect that goes beyond the mere securing of uptake as the starting point of his analysis of speaker's meaning.
This analysis, which went through a great many revisions and reformulations (e.g.
Strawson, 1964; Grice, 1969; Searle, 1969; Schiffer, 1972), treats a speaker's meaning
as a complex mental state made up of several layered intentions, of which the most
deeply embedded is the intention to make the addressee think or act in a certain way.
Beyond this basic intention are two higher-order intentions: that the addressee
should recognise the basic intention, and that the addressee's recognition of the basic intention should be at least part of his reason for fulfilling it. By recognising the basic intention (and thus fulfilling the speaker's higher-order intention to have that basic
intention recognised), the addressee will have understood the utterance, whether or
not he goes on to fulfil the basic intention by producing the desired response.
Sperber and Wilson (1995) reorganised this analysis of communication around the idea that speakers have both an informative intention and a higher-order communicative intention to make that informative intention mutually manifest. In normal circumstances, recognition of the informative intention will lead to its fulfilment, and hence to the success of the communicative act. Their account differs from Grice's analysis of speaker's meaning in two ways. They argue that only two hierarchically related intentions are needed: the informative and the communicative intention. They reject the idea that the communicator must have a third-level intention that the addressee's recognition of the informative intention should be part of his reason for fulfilling it.
For Grice, this third-level intention is essential for distinguishing 'meaning' from 'showing'. If I show you that I have a seashell in my pocket, your reason for believing
that I have a seashell in my pocket is that you have seen it there. If I tell you “I have a
seashell in my pocket,” your reason for believing that I have a seashell in my pocket is
that I have told you so. How does my telling you something give you a reason to
believe it? By the very act of making an assertion, the communicator indicates that
she is committing herself to providing the addressee with genuine information, and
she intends his recognition of this commitment to give him a motive for accepting a
content that he would not otherwise have sufficient reasons to accept. In other words, an assertion is a request for epistemic trust from the addressee. Similarly, making a request typically involves asking the addressee to treat the speaker's desire as a reason to comply.
But still, is the audience's recognition of the speaker's intentions always meant to function as a reason to accept what she says? Grice himself noted a case presenting a problem for his analysis, although he mentioned it only in passing and did not offer any solution. When the communicator is producing a logical argument,
she typically intends her audience to accept the conclusion of this argument not on trust, but on the strength of the premises. As Grice put it:
While U[tterer] intends that A[ddressee] should think that r, he does not
expect (and so intend) A to reach a belief that r on the basis of U's intention
that he should reach it. The premises, not trust in U, are supposed to do the work.
Sperber and Wilson, on the other hand, were analysing not 'meaning' but 'communication', and they argued that this involves a continuum of cases between 'meaning' and 'showing' which makes the search for a sharp demarcation otiose. In producing an explicit argument, for instance, the speaker both means and shows that her conclusion follows from her premises. Although brief, Grice's discussion of this problematic case
underscores the contrast between cases where a speaker intends the addressee to
accept what she says because she is saying it, and those where she expects him to
accept what she says because he recognises it as sound. We will shortly elaborate on this contrast. Comprehension of an utterance is clearly a precondition for its acceptance. However, it does not follow that the two processes are wholly independent. Do trust and vigilance play a crucial role in the comprehension process itself? We believe that they do,
although not in the way commonly envisaged. As noted above, many philosophers
have argued that for comprehension to be possible at all, most utterances must be treated as true. Davidson, in particular, holds that an interpreter must interpret an utterance 'in a way that optimizes agreement' and that reveals 'a set of beliefs largely consistent and true by our own standards' (Davidson, 1984, p. 137).
Such 'interpretive charity' implies an unwillingness to revise one's own beliefs in the light of what others say. It is an a priori policy of trusting others to mean something true – but this is a niggardly form of trust, since it is left up to the interpreter to decide what is true (however, see Davidson, 1986 for a more nuanced picture).
There is a difference between trusting a speaker because you interpret what she says as something you anyhow take to be true, and accepting what you understand a speaker to say – even if it is incompatible with your own beliefs, which you may then have to revise – because you trust her to start with. In this latter case, what you are told can genuinely extend your knowledge.
Even when an utterance is in your own language, decoding its linguistic sense falls well short of comprehending it. An inferential process takes this linguistic sense, together with contextual information, and aims for an interpretation consistent with the expectation of relevance that every utterance elicits merely by claiming the hearer's attention. The hearer does not have to accept this presumption: after all, the speaker
may not know what is relevant to him, or she may not really care. But whether or not
the hearer accepts the presumption of relevance, the very fact that it is conveyed is
enough to guide the interpretation process. It justifies the search for an interpretation
that the speaker had reason to think would seem relevant to the hearer. In many
cases, the output of such a relevance-based interpretation process differs from the linguistically encoded meaning. Suppose, for instance, that in answer to a question of Andy's, Barbara says: 'Joan has money.' Andy had previously assumed that Joan was just an underpaid junior academic. How should he interpret Barbara's reply? If his aim was to optimize agreement, he should
take Barbara to be asserting that Joan has some money, as opposed to no money at
all, which is true of most people and which he already believes is true of Joan. But
interpreted in this way, Barbara's reply is not relevant enough to be worth Andy's attention. A more relevant interpretation is that Joan has quite a lot of money; but this interpretation yields actual cognitive benefit for Andy only if he believes it. If he does not believe what he takes Barbara to say, then her utterance will
only provide him with information about Barbara herself (her beliefs, her intended
meaning) rather than about Joan, and this may not be relevant enough to him. How, then, can a hearer arrive at a relevant interpretation of an utterance when he is not willing to accept it? We claim that, whether he ends up accepting it or not,
the hearer interprets the speaker as asserting a proposition that would be relevant if it were true. A modicum of trust is thus built into the very process of interpretation (see Holton, 1994; Origgi, 2005, 2008b). The trust required is less than a readiness to revise one's own beliefs. On the other hand, it is tentative trust. We claim that interpreting
an utterance involves tentatively accepting the communicated information. If so, this might help to explain the results of Gilbert's experiments. Communicated information tends to be believed, we maintain, at least partly because the audience's vigilance limits the range of deception that communicators can hope to get away with. Comprehension involves adopting a tentative and labile stance of trust; this will lead to acceptance if vigilance, triggered by the very communicative acts that trigger comprehension, does not come up with reasons to doubt.
Communication brings vital benefits, but carries for the audience a major risk of being misinformed. There is no fail-safe way of calibrating one's trust in communicated information so as to weed out all and only the misinformation. Given that the stakes are so high, it is plausible that there has been selective pressure for mechanisms of epistemic vigilance that at least approximate such sorting. Since there are a variety of considerations relevant to
the granting or withholding of epistemic trust, we will explore the possibility that
different abilities for epistemic vigilance may have emerged in biological and cultural
evolution. Considerations relevant to the acceptance or rejection of communicated information may have to do either with the source of the information – who to believe – or with its content – what to believe. In this section and the next, we consider vigilance towards the source. You may trust (or mistrust) someone quite generally, both epistemically and morally, and therefore expect what Mary says to be true, what she does to be good, and so on. Or you may trust (or mistrust) someone on a particular topic in specific circumstances: 'You can generally trust Joan on Japanese prints, but less so when she is selling one herself.' Trust can be allocated in both these ways.
A reliable informant must meet two conditions: she must be competent, and she must be benevolent. That is, she must possess genuine information (as opposed to mistaken or unfounded beliefs), and she must intend to share this genuine information with her audience (as opposed to making assertions she does not regard as true, through either indifference or malevolence). Clearly, the same informant may
be competent on one topic but not on others, and benevolent towards one audience in some circumstances but not towards other audiences or in other circumstances. This suggests that trust should be allocated to informants depending on the topic, the
audience, and the circumstances. However, such precise calibration of trust is costly in cognitive terms, and, while people are often willing to pay the price, they also commonly rely on rougher judgments of a person's overall trustworthiness.
The speed of such judgments is illustrated in a study by Willis and Todorov (2006). Participants were shown pictures of faces, for either a mere 100 milliseconds or with no time limit, and rated them on various traits. Contrary to the authors' expectations, the correlation between judgments with and without
time limit was not greater for attractiveness (.69) – which is, after all, a property of a
person‟s appearance – than for trustworthiness (.73), while the correlations for
aggressiveness and competence were a relatively low .52. One might wonder if such
split-second judgments of trustworthiness have any basis at all, but what this
experiment strongly suggests is that looking for signs of trustworthiness is one of the
first things we do when we see a new face (see also Ybarra et al., 2001).
Social psychologists have stressed how much of behaviour is explained by the situation (Ross and Nisbett, 1991; Gilbert and Malone, 1995). If so, judging that some people are generally more trustworthy than others might be an instance of the 'fundamental attribution error' (Ross, 1977): that is, the tendency, in explaining or predicting someone's behaviour, to overestimate dispositional factors and underestimate situational ones. Still, there are grounds for the view that some people are more generally trustworthy than others, and are to be trusted accordingly. Deceiving others to our own immediate advantage may damage our reputation and end up being costly in
the long run. Conversely, doing our best to be systematically trustworthy may
sometimes be costly in the short run, but may be beneficial in the long run. The trade-
off between the short-term costs and long-term benefits of a policy of trustworthiness may differ from person to person, depending, for instance, on the way they discount time (Ainslie, 2001), and they may end up following different policies. If such policies are relatively stable, it pays an audience to find out which policy a given communicator follows. People who opt for a policy of systematic trustworthiness would stand to benefit from
a reputation for being highly trustworthy. This reputation would be fed by common
knowledge of their past actions, and might be further advertised by their everyday demeanour. But might not people secure the benefits of trust while defecting on occasion, for instance by lying without giving any detectable evidence of the fact? There is a substantial
literature on lie detection (see Ekman, 2001, for a review), and what it shows, in a
nutshell, is that detecting lies on the basis of non-verbal behavioural signs is hard
(Vrij, 2000; Malone and DePaulo, 2001; Bond and DePaulo, 2006), even for people
who are trained to do so (e.g. DePaulo and Pfeifer, 1986; Ekman and O'Sullivan, 1991;
Mann, Vrij and Bull, 2004; Vrij, 2004), and even when the liars are far from expert –
for instance, when they are three-year-old children (Lewis et al., 1989; Talwar and
Lee, 2002). The ability to lie can be quite advantageous, but only if the liars do not give themselves away. Whether through natural dispositions or acquired skills, liars seem able to keep the behavioural signs of dishonesty to a minimum.
In order to gain a better grasp of the mechanisms for epistemic vigilance towards
the source, what is most urgently needed is not more empirical work on lie detection
or general judgments of trustworthiness, but research on how trust and mistrust are
calibrated to the situation, the interlocutors and the topic of communication. Here,
two distinct types of consideration should be taken into account: the communicator's
competence on the topic of her assertions, and her motivation for communicating.
Both competence and honesty are conditions for believability. There is a considerable
literature with some indirect relevance to the study of epistemic vigilance in ordinary
communication, for instance in the history and sociology of science (e.g. Shapin,
1994), the anthropology of law (e.g. Hutchins, 1980; Rosen, 1989), the linguistic
study of evidentials (e.g. Chafe and Nichols, 1986; Ifantidou, 2001; Aikhenvald,
2004), or the social psychology of influence and persuasion (e.g. Chaiken, 1980; Petty
and Cacioppo, 1986). However, much more work needs to be done on epistemic vigilance itself.
In the next section, we turn to the development of vigilance towards the source in
childhood, which is not only interesting in its own right, but will also help us separate
out the various components of epistemic vigilance towards the source of information.
There is a rapidly growing body of developmental research relevant to epistemic vigilance (for reviews, see e.g. Koenig and Harris, 2007; Heyman, 2008; Clément, In
press; Corriveau and Harris, In press; Nurmsoo et al., In press). This shows that even
at a very early age, children do not treat all communicated information as equally reliable. By sixteen months, infants react to a speaker's mislabelling of familiar objects (Koenig and Echols, 2003). By the age of two, they often attempt to contradict and
correct assertions that they believe to be false (e.g. Pea, 1982). These studies
challenge the widespread assumption that young children are simply gullible.
Do young children have the cognitive resources to allocate trust on the basis of an informant's reliability? Three-year-olds seem to prefer informants who are both benevolent (Mascaro and Sperber, 2009) and competent. In preferring benevolent informants, they take into account not only their own observations but also what they have been told about the informant's moral character (Mascaro and Sperber, 2009),
and in preferring competent informants, they take past accuracy into account (e.g.
Clément et al., 2004; Birch et al., 2008; Scofield and Behrend, 2008). By the age of
four, they not only have appropriate preferences for reliable informants, but also
show some grasp of what this reliability involves. For instance, they can predict that a
dishonest informant will provide false information (Couillard and Woodward, 1999),
or that an incompetent informant will be less reliable (Call and Tomasello, 1999;
Lampinen and Smith, 1995; Clément et al., 2004). Moreover, they make such
predictions despite the fact that unreliable informants typically present themselves as reliable. This early epistemic vigilance is plausibly linked to moral development and vigilance towards cheating (e.g. Cosmides and Tooby, 2005; Harris and Núñez, 1996). Indeed, the exercise of epistemic vigilance not only relies on some of the same capacities as moral evaluation, but may itself provide a fundamental tool for selecting cooperative partners (e.g. Alexander, 1987; Nowak and Sigmund, 1998; Milinski et al., 2002 – but see below). For instance, as four- and five-
year-olds become more vigilant towards deception (Couillard and Woodward, 1999; Mascaro and Sperber, 2009), they also become more vigilant
towards hypocrisy in self-presentation (Peskin, 1996; Mills and Keil, 2005; Gee and
Heyman, 2007).
Young children are also sensitive to speakers' attitudes such as endorsement or doubt (Fusaro and Harris, 2008), and are also
aware that assertions can be stronger or weaker (Sabbagh and Baldwin, 2001; Birch
et al., 2008; Matsui et al., 2009). Children are able to make sense of comments on
the reliability of what is communicated (e.g. Fusaro and Harris, 2008, Clément et al.,
2004). As a result, they can take advantage of the epistemic judgments of others, and
enrich their own epistemological understanding and capacity for epistemic vigilance
in doing so.
Children also appear to have some capacity to compare the reliability of different informants, including themselves. In studies modelled on the classic work on conformity (Asch, 1956), for instance, a majority of three-year-olds trust their own perception rather than the unanimous contrary testimony of the experimenters (Corriveau and Harris, In press, although see Walker and Andrade, 1996), and they weigh their own prior information against what they are told (Robinson and Whitcombe, 2003; Robinson et al., 2008; Nurmsoo and Robinson, 2009). They also attribute to others lasting dispositions for greater or lesser reliability
(e.g. Koenig and Harris, 2007; Birch et al., 2009; Corriveau and Harris, 2009), and
may do this on the basis of an understanding that different people are more or less
knowledgeable – a component of the child's naïve psychology which has not been
investigated in depth (though see Lutz and Keil, 2002). Children‟s epistemic vigilance
thus draws on – and provides evidence for – distinct aspects of their naïve psychology. Vigilance towards deception, in particular, requires attributing to the communicator intentions, including intentions to induce false beliefs in her audience. This calls for metarepresentational abilities of some complexity: 'Mary believes that not-P but wants me to believe that P' combines a first-order attribution of belief with a second-order attribution of intention. Observational data and evidence from false belief tasks classically used to measure the development of theory of mind suggest that some such abilities are present quite early, arguably even in infancy. However, starting at around the age of four, there is a major improvement in children's handling of both deception (Mascaro and Sperber, 2009; see also Couillard and Woodward, 1999; Jaswal et al. In press) and incompetence (Povinelli and DeBlois, 1992; Call and Tomasello, 1999; Welch-Ross, 1999; Figueras-Costa and Harris, 2001). At four, children begin to show increased
attention to the epistemic quality of other people‟s beliefs and messages. They
become much more selective in their trust, and also much more willing and able to
This transition in epistemic vigilance occurs around the age at which children
succeed in passing standard false belief tasks (Wimmer and Perner, 1983; Baron-
Cohen et al., 1985). Until recently, this convergence might have been interpreted on
the following lines: At around four years of age, as a result of their emerging metarepresentational abilities, children become aware (at a first metarepresentational level) that others may hold false beliefs, and (at a higher metarepresentational level)
that others may want them to hold false beliefs. This awareness is the basis for more selective trust. Whatever the merits of this interpretation, it has become less plausible as a result of recent experiments with new
non-verbal versions of the false belief task adapted for use with infants. These
experiments suggest that by their second year, children already expect an agent‟s
behaviour to be guided by its beliefs, even when they are false (e.g. Onishi and
Baillargeon, 2005; Southgate et al., 2007; Surian et al., 2007). If so, the robust
results of work with the standard false belief task must be reinterpreted, and so must the four-year-olds' improvement in epistemic vigilance.
Two possible reinterpretations of this transition come readily to mind (and a full
picture might draw on both). First, the ability to pass standard false belief tasks and
the improved capacity for epistemic vigilance might have a common cause, for
instance, a major development in executive function abilities (e.g. Perner and Lang,
2000; Carlson and Moses, 2001). Second, as a result of their improved capacity for
epistemic vigilance, children may start paying attention to relevant aspects of false
belief tasks which are generally missed at an earlier age. As they become increasingly
aware that others may hold false beliefs (through either epistemic bad luck or
deception), they get better at taking these false beliefs into account when predicting
the behaviour of others. Their interest here is not so much that of an observer as that of a participant who might be the target of deception.
Current studies of epistemic vigilance thus offer some interesting insights into the
nature and development of theory of mind abilities. They show that epistemic vigilance develops in step with other aspects of children's naïve psychology and social cognition, including the moral sense involved in recognising potential partners for cooperation.
As we have seen in the last two sections, epistemic vigilance can be directed at the source of communicated information: who to believe. It can also be directed at the content of communication, which may be more or less believable independently of its source. In this section and the next, we consider vigilance towards the content. Some contents are self-evidencing: their truth is sufficiently evidenced by the act of communication itself (e.g. someone saying, 'Je suis capable de dire quelques mots en français', that is, 'I am able to say a few words in French'). To assess other contents, the hearer must rely on more than just their inherent logical properties and indisputable background knowledge; his wider encyclopaedic knowledge must be brought to bear.
To keep processing time and costs within manageable limits, only a very small subset
of the individual's encyclopaedic knowledge, closely related to the new piece of
information, can be brought to bear on its assessment. Indeed, the systematic
activation of even a limited subset of background beliefs for the sole purpose of
assessing communicated content would still be quite costly in processing terms. We
will argue that this assessment can instead be carried out as a by-product of
comprehension.
According to relevance theory (Sperber and Wilson, 1995, 2005; Carston, 2002;
Wilson and Sperber, 2004), the comprehension process itself involves the automatic
activation of a context of background beliefs in which the utterance is processed, at a
cost commensurate with the cognitive benefits derived. We claim that this same
background information which is used in the pursuit of relevance can also yield an
imperfect but cost-effective epistemic assessment. Moreover, as we will now show,
the search for relevance itself contributes to this assessment.
Sperber and Wilson (1995) distinguish three types of contextual effect through
which new information may be relevant in a context of existing beliefs. (i) When new
information and contextual beliefs are taken together as joint premises, they may
yield contextual implications (conclusions derivable from neither the context nor the
new information alone) which are accepted as new beliefs. (ii) The strength of
contextually activated beliefs may be raised or lowered in the light of new
information. (iii) New information may contradict contextually activated beliefs and
lead to their revision. All three types of contextual effect involve some adjustment of
the individual's knowledge.
What happens when the result of processing some new piece of information in a
context of existing beliefs is a contradiction? When the new information was acquired
through perception, it is quite generally sound to trust one's own perceptions more
than one's memory and to update one's beliefs accordingly. You believed Joan was in
the garden; you hear her talking in the living room. You automatically update your
belief about Joan's whereabouts. Presumably, such automatic updating is the only
realistic policy in the case of perception.
When the new information was communicated, on the other hand, there are three
possibilities to consider. (i) If the source is not regarded as reliable, the new
information can simply be rejected as untrue, and therefore irrelevant: for instance, a
drunk in the street tells you that there is a white elephant around the corner. (ii) If the
source is regarded as quite authoritative and the background beliefs which conflict
with what the source has told us are not held with much conviction, these beliefs can
be directly corrected: for instance, looking at Gil, you had thought he was in his early
twenties, but he tells you that he is 29 years old. You accept this as true and relevant –
relevant in the first place because it allows you to correct your mistaken beliefs. (iii) If
you are confident about both the source and your own beliefs, then some belief
revision is unavoidable. You must revise either your background beliefs or your belief
that the source is reliable, but it is not immediately clear which. For instance, it
seemed to you that Gil was in his early twenties; Lucy tells you that he must be in his
early thirties. Should you stick to your own estimate or trust Lucy's?
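The three possibilities just described amount to a simple decision procedure. As a rough illustration (our own toy sketch, not a model proposed in the text), it can be rendered as follows, with numeric confidence levels and a threshold standing in for the graded notions of trust and conviction:

```python
# Toy sketch of the three cases for reconciling a communicated claim
# with a contradicted background belief. The numeric scales and the
# threshold are illustrative assumptions, not part of the original account.

def reconcile(source_trust: float, belief_confidence: float,
              threshold: float = 0.5) -> str:
    """Decide how to handle a claim that contradicts an existing belief.

    source_trust      -- confidence that the source is reliable (0..1)
    belief_confidence -- confidence in the contradicted background belief (0..1)
    """
    if source_trust < threshold:
        # Case (i): unreliable source - reject the claim as untrue.
        return "reject claim"
    if belief_confidence < threshold:
        # Case (ii): authoritative source, weakly held belief - correct the belief.
        return "revise belief"
    # Case (iii): both are trusted - some revision is unavoidable, but
    # deciding what to revise requires further, more effortful assessment.
    return "deliberate"

print(reconcile(0.2, 0.9))  # drunk reporting a white elephant -> reject claim
print(reconcile(0.9, 0.3))  # Gil stating his own age         -> revise belief
print(reconcile(0.8, 0.8))  # Lucy vs. your own estimate      -> deliberate
```

The point of the sketch is simply that cases (i) and (ii) can be resolved cheaply and automatically, whereas case (iii) leaves the revision problem open.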
Coherence checking is also triggered by a weaker form of incoherence: that is, when
the new information is incompatible with some of our background beliefs, given
other more entrenched background beliefs. For instance,
you believed that Gil was a doctor; Lucy tells you that he is only 22 years old. You
have the well entrenched belief that becoming a doctor takes many years of study, so
that it is almost impossible to be a doctor by the age of 22. Hence, you should either
disbelieve Lucy or give up the belief that Gil is a doctor. In order to preserve
coherence, you must reduce either your confidence in the source or your confidence
in some of your background beliefs. Coherence and its role in belief revision have
been studied by philosophers such as Gilbert Harman (1986) and Paul Thagard
(2002). Here, though, we see coherence checking not as a general epistemic
procedure for belief revision, but as a mechanism of epistemic vigilance triggered by
communicated information.
If only for reasons of efficiency, one might expect the type of coherence checking
used in epistemic vigilance to involve no more than the minimal revisions needed to
restore coherence. In some cases this is best achieved by distrusting the source, and
in others by revisiting some of one's own background beliefs. Unless one option
dominates the competition to the point of inhibiting the alternatives, choosing
among them may call for more effortful reflection.
What we are suggesting is that the search for a relevant interpretation, which is
part and parcel of the comprehension process, automatically involves the making of
some epistemic assessment of communicated content, without any need for a
distinct procedure wholly dedicated to such assessment. Still, comprehension, the
search for relevance and epistemic assessment remain distinct aspects of the
processing of communicated information.
Now consider things from the communicator‟s point of view. Suppose she suspects
that her addressee is unlikely to accept what she says purely on trust, but will
probably exercise some epistemic vigilance and check how far her claim coheres with
his own beliefs. The addressee's active vigilance stands in the way of the
communicator's achieving her goal. Still, from the communicator's point of view, a
vigilant addressee is better than one who rejects her testimony outright. And indeed,
the addressee's very disposition to check her claim may offer the communicator an
opportunity to get past his defences by arguing for that claim.
We have suggested that coherence checking takes place against the narrow context
of beliefs used in the search for a relevant interpretation of the utterance. But the
addressee may have other less highly activated beliefs which would have weighed in
favour of the information he is reluctant to accept, if he had been able to take them
into account. In that case, it may be worth the communicator's while to remind the
addressee of these background beliefs, thus increasing the acceptability of her claim.
Or there may be other information that the addressee would accept on trust from the
communicator, which would cohere well with her claim and thus make it more
acceptable.
To illustrate, we will adapt a famous example from Grice (1975/1989, p. 32). Andy
and Barbara are discussing their friend Steve.
Barbara believes that Steve has a new girlfriend, but feels that if she were simply to
say so, Andy, who has just expressed doubt on the matter, would disagree. Still, she
has noticed that Steve has been paying a lot of visits to New York lately, and regards
this as evidence that Steve has a girlfriend there. Andy might also have noticed these
visits, and if not, he is likely to accept Barbara's word for it. Once he takes these visits
into account, the conclusion that Steve might have a girlfriend may become much
more plausible to him.
Barbara is making no secret of the fact that she wants Andy to accept this
conclusion. On the contrary, her assertion that Steve has been paying a lot of visits to
New York will only satisfy Andy's expectations of relevance (or be cooperative in
Grice's sense) to the extent that it is understood as implicating that Steve might have
a girlfriend despite Andy's doubts. Although Andy recognises this implicature as part
of Barbara's intended meaning, he may not accept it. What Barbara is relying on in
order to convince him is not his ability to understand her utterance but his ability to
grasp the force of the argument whose premises include her explicit statement,
together with other pieces of background knowledge (about Steve's likely reasons for
paying frequent visits to New York).
In another, slightly different scenario, Andy himself remarks that Steve has been
paying a lot of visits to New York lately; but failing to see the connection, he adds, 'He
doesn't seem to have a girlfriend these days.' In that case, Barbara may highlight the
connection in order to help him come to the intended conclusion, saying: 'If he goes
to New York, it may be to see a girlfriend', or 'If he had a girlfriend in New York, that
would explain his visits.' Or she might simply repeat Andy's comment, but with a
different emphasis: 'He doesn't seem to have a girlfriend these days, but he has been
paying a lot of visits to New York lately.' Logical connectives such as 'if' and discourse
connectives such as 'but', which suggest directions for inference, are used by the
communicator to guide the addressee's inferences (Blakemore, 1987, 2002).
In both scenarios, when Barbara expresses herself as she does and Andy sees the
force of her implicit argument, they are making use of an inferential mechanism
which handles relations among propositions, and which recognises, more specifically,
that some function as premises and others as conclusions. What Barbara conveys,
and what Andy is likely to recognise, is that it would be reasonable for him to accept
her conclusion given premises he already accepts.
Argumentation, in either the simple and largely implicit form illustrated in the
example above or in more complex and explicit forms, is a use of reasoning.4

4 We use 'reasoning' in its more frequent sense to refer to a form of inference which
involves attending to reasons for accepting a conclusion, and which therefore
involves reflection, and contrasts with intuitive forms of inference where we arrive at
a conclusion without attending to reasons for accepting it. A similar contrast between
intuitive and reflective forms of inference has been much discussed under the
heading of 'dual process' theories of reasoning (see, for instance, Evans and Frankish,
2009).

In a series of papers (Mercier and Sperber, 2009; Sperber and Mercier, In
press; Sperber, 2001; Mercier and Sperber, Forthcoming), Hugo Mercier and Dan
Sperber have argued that reasoning is a tool for epistemic vigilance, and for
producing arguments capable of getting past the vigilance of others. Reasoning is
commonly seen as a means to help people overcome the limits of intuition, acquire
better grounded beliefs, particularly in areas beyond the reach of perception and
spontaneous inference, and make good decisions (Evans and Over, 1996; Kahneman,
2003; Stanovich, 2004). This common view is not easy to square with the massive
evidence that human reasoning is not so good at fulfilling this alleged function.
Ordinary reasoning often fails to correct manifestly unsound intuitions (Denes-Raj
and Epstein, 1994). It often leads us towards bad decisions (Shafir et al., 1993;
Dijksterhuis et al., 2006) and poor epistemic outcomes (Kunda, 1990). By contrast,
intuition has a good track record for efficiently performing a great many everyday
cognitive tasks. Human reasoning, with its blatant shortcomings and relatively high
operating costs, is not properly explained by its alleged function as a tool for
individual cognition. It is better explained, they argue, by the hypothesis that its
function is argumentative: to produce arguments in order to convince others, and to
evaluate the arguments others produce (Mercier and Sperber, submitted). To give
just one example, the argumentative theory makes a
prediction which sets it apart from other approaches and which is of particular
interest here. If reasoning serves to produce arguments to convince others, then the
arguments it comes up with should support the reasoner's own position: reasoning
should display a confirmation bias. And indeed, this bias towards 'seeking or
interpreting of evidence in ways that are partial to existing beliefs' (Nickerson, 1998,
p. 175) has been evidenced in countless psychology experiments and observations in
natural settings (see Nickerson, 1998 for review). For classical views of reasoning, it
is hard to explain why such a bias should be so prevalent – or indeed why it should
exist at all – and attempts have been made to explain it away, for instance as an
artifact of weak motivation. There are several arguments against these attempted
explanations. In the first place, the bias persists even when participants are given
strong incentives to be accurate, which typically make little difference (Camerer and
Hogarth, 1999; Willingham, 2008). This suggests that the bias is a robust feature of
reasoning itself.
Interestingly, the confirmation bias need not lead to poor performance from a
logical normative point of view. When people with different viewpoints share a
genuine interest in reaching the right conclusion, the confirmation bias makes
possible an efficient division of cognitive labour: each participant need look
only for reasons to support their own position, while exercising vigilance towards the
arguments proposed by others and evaluating them carefully. This requires much less
work than having to search exhaustively for the pros and cons of every position
present in the group. By contrast, when the confirmation bias is not held in check by
the vigilance of interlocutors who disagree, it
may lead individuals to be over-confident of their own beliefs (Koriat et al., 1980), or
to adopt stronger versions of those beliefs (Tesser, 1978). In group discussions where
all the participants share the same viewpoint and are arguing not so much against
each other as against absent opponents, such polarization is common and can lead to
the adoption of extreme views.
We are not claiming that reasoning takes place only in a communicative context.
It clearly occurs in solitary thinking, and plays an important role in belief revision.
Even in solitary thinking, however, the reasoner may be anticipating communication:
considering claims she might be presented with, or that she might want to convince
others to accept, or engaging in a dialogue with herself where she alternates between
the roles of advocate and critic. This is more than idle
speculation: for instance, we predict that encouraging or inhibiting such mental
dialogue should affect the quality of solitary reasoning.
What we have considered so far is the filtering role that epistemic vigilance plays in
the flow of information in face to face interaction. In this section, we turn to the flow
of information on a population scale: the propagation of what is loosely called
'cultural information'. The very social success which is almost a defining feature of
cultural information might suggest that such information is uncritically
accepted. We will argue, however, that here too epistemic vigilance is at work, but
that it needs appropriate cultural and institutional development to meet some of the
challenges involved. No act of communication, however relevant only to the
interlocutors at the time, is ever totally disconnected from the flow of information in
the whole social group. Human communication always carries cultural features. A
communicator may, for instance, support a claim by
reminding her addressee of an assumption which is culturally shared in their milieu, and which
therefore need not be asserted or defended. Recall Andy's remark to
Barbara in another of our examples: 'Steve doesn't seem to have a girlfriend these
days.' Although Andy's remark has quite limited and local relevance, it implicates
culturally shared assumptions
without which his remark would not be relevant in the intended way. Many cultural
assumptions are distributed in this way, not so much – or, in some cases, not at all –
by being directly asserted, but by being used as implicit premises in a vast number of
utterances.
Is epistemic vigilance exercised towards such cultural assumptions, and
if so, how? When contents of this type are conveyed, either explicitly or implicitly,
communicators use their own individual authority not so much to endorse the
content as to vouch for its status as a commonly accepted cultural assumption. When
Andy says that champagne is expensive, or implies that it would be normal for Steve
to have a girlfriend, he conveys that these are accepted views in his and Barbara‟s
milieu. If Barbara disagrees, her disagreement is not just with Andy, but with this
accepted view.
If an idea is generally accepted by the people you interact with, isn't this a good
reason for you to accept it too? It may be a modest and prudent policy to go along
with the people one interacts with, and to accept the ideas they accept. Anything else
may compromise one's cultural competence and social acceptability. For all we know,
it may be quite common for members of a cultural group to accept what they take to
be 'accepted views' in this pragmatic sense, without making any strong or clear
epistemic commitment to their content (Sperber, 1975; Sperber, 1985; Boyer, 1992;
Bloch, 1998). From an epistemological point of view, the fact that an idea is widely
shared is not a good reason to accept it unless the people who share it have come to
hold it independently of one another. Only in those circumstances does every
individual who accepts the idea add to our own epistemological reasons for accepting
it too. Quite often, however, people who accept (in an epistemological sense)
culturally shared ideas do so just because others accept them, so that their
acceptances are anything but independent.
Often, information spreads through a group from a single source, and is accepted
by people along the chains of transmission because they trust the source rather than
because of any evidence or arguments for the content. If so, the crucial consideration
should be the trustworthiness of the original source. If each person who passes on the
information has good independent reasons for trusting the source, this should give
people further along the chain good reasons for also trusting the source, and thus for
accepting the content originally conveyed. However, people's reasons for trusting the
source are in general no more independent of one another than their reasons for
accepting the content itself.
Even if we make the strong assumption that each individual along the chain of
transmission from the source to ourselves had good reasons for trusting the previous
individual in the chain, these reasons can never be error-proof; hence, our own
confidence in the original source should diminish as the length of the chain increases.
Moreover, an idea may become detached from its original
source, coming to be accepted and transmitted purely on the ground that it is widely
accepted and transmitted – an obvious circularity. Add to all this the fact that when
an idea propagates through a population, its content tends to alter in the process
without the propagators being aware of these alterations (as with nearly all rumours
and traditions – see Sperber, 1996). In such cases, even if there were good reason to
regard the original source as reliable, this would provide no serious support for the
idea in its current form.
It might seem, then, that people are simply willing, or even eager, to accept
widely shared information without exercising any particular vigilance
towards it. Boyd, Richerson and Henrich have argued that there is an evolved
conformist bias in favour of adopting the behaviour and attitudes of the majority of
members of one's community (e.g. Boyd and Richerson, 1985; Henrich and Boyd,
1998). Csibra and Gergely (2009) have argued that people in general, and children in
particular, are eager to acquire cultural information, and that this may bias them
towards treating ostensively conveyed information as
having cultural relevance, and also towards accepting it. An alternative (or perhaps
complementary) possibility is that people do exercise
vigilance towards all communicated information, whether local or cultural, but that
this vigilance is aimed at sources encountered in face to face
interaction, and not at information propagated on a larger scale. For instance,
problems with a source's competence or honesty may be detected when, as
they often are, they occur in face to face interaction, but not otherwise. On a
population scale, these problems can remain unnoticed although, on reflection, they
are likely to be pervasive. All kinds of beliefs widely shared in the community may
rest on long chains of unchecked
testimonies. The trust is not blind, but the epistemic vigilance which should buttress
it is short-sighted.
Of particular relevance here are two kinds of belief which are typically cultural:
beliefs about reputations, and beliefs which are only partly understood, and whose
content is partly mysterious to the believers themselves. There has been much work
on the notion of reputation (e.g.
Morris, 1999) and on its relevance to cooperation and to social epistemology (e.g.
Nowak and Sigmund, 1998; Milinski et al., 2002). Consider, for
instance, the opinion that Lisa is generous or that John is a liar, which has become
widely shared in her or his milieu. If Lisa
belongs to a relatively small group in which many people have direct experience of
her qualities and shortcomings, and where they can express and compare their
opinions with some freedom (for instance by gossiping), then her actual behaviour
helps keep her reputation in check. Individual
gossips may themselves be incompetent or not quite honest, but ordinary epistemic
vigilance applies to them in the usual way. Reputations that circulate on a larger
scale, by contrast, are beyond most people's
direct assessment. When an addressee has to decide whether or not to believe an
unfamiliar source of information, she may have no other basis for her decision than
her knowledge of the source's reputation, which she is unable to assess herself, and
which she is likely to accept for want of a better choice. All too often, reputations are
themselves examples of ideas which are accepted and transmitted purely on the
ground that they are widely accepted and transmitted, and which are transformed in
the course of transmission. One of the ways in which reputations get transformed is
by becoming inflated well beyond the level found in typical opinions arrived at
first-hand. When some sources – a prophet, a guru, an intellectual celebrity
– achieve such inflated reputations, people who are then inclined to defer more to
them than to any source whose reliability they have directly assessed may find
themselves accepting quite extraordinary claims. If they checked the claims
of these sources (for instance, 'Mary was and remained a virgin when she gave birth'
or Lacan's 'There is no such thing as a sexual relationship') for coherence with their
existing beliefs, they would reject them. But this would in turn bring into question
the source's inflated authority. A common alternative is to accept such claims not as
plain factual beliefs but as 'semi-propositional' ideas (Sperber, 1985; 1997; In press).
Most religious beliefs are
typical examples of beliefs of this kind, whose content is in part mysterious to the
believers themselves. The resulting picture of large-scale cultural transmission may
seem somewhat grim. Mechanisms for epistemic vigilance are not geared to filtering
information transmitted on such a large scale. Even if we are right to claim that these
mechanisms exist, they do not prevent mistaken ideas, undeserved reputations and
empty creeds from invading whole populations. However, we did note that it is
important not to jump from the fact that people are seriously, even passionately,
committed to such ideas to the
conclusion that the commitment involved is clearly epistemic. It may be that the
content of the ideas matters less to you than who you share them with, since they may
help define group identities. When what matters is the sharing, ideas that would meet
objections within the relevant social group, or would be too easily shared beyond that
group, are at a disadvantage; ideas that can be comfortably shared by
just the relevant group may have a cultural success which is negatively correlated with
their epistemic quality. Vigilance may then be exercised against deviations from the
shared doctrine rather than against error.
Strictly speaking, such forms of hegemonic or dogmatic vigilance are not epistemic.
On the other hand, social and institutional mechanisms may in some cases provide
better epistemic filtering than the mere cumulative effect of spontaneous vigilance
exercised by individuals.
Some people, for instance, are socially recognised to
be experts in their field because they have shown strong evidence of their expertise to
experts who are even more qualified. Of course, these procedures may be inadequate
or corrupt, and the domain may itself be riddled with errors; but still, such socially
certified expertise provides a measure of filtering. It is as relevant here to study
social mechanisms for vigilance towards
the content as to study vigilance towards the source. We suggested above that
vigilance towards the content is typically exercised through debate and argument, and
may give rise to a kind of spontaneous division of cognitive labour. This division of
labour can itself be culturally organized and take various institutional forms.
Examples include judicial institutions, where a number of rules and procedures are
designed to establish the facts of the matter through examination of the evidence,
questioning of witnesses, and debates between the parties, for instance. The same is
true of the sciences, where observational or theoretical claims are critically assessed
via social mechanisms such as peer review and public debate
(Goldman, 1999).
Social mechanisms for vigilance towards the source and vigilance towards the
content interact in many ways. In judicial proceedings, for instance, the reputation of
witnesses and experts affects the weight given to their testimony, while
cross-examination bears on the content of what they say. In the
sciences, peer review is meant to be purely content-oriented, but is influenced all too
often by the reputations of authors and institutions (anonymous refereeing is meant to
suppress this influence), and the outcome of the reviewing process in turn affects
these reputations. The certification of expertise, for instance through academic
examination,
involves multiple complex assessments from teachers and examiners, who engage in
discussion with the candidate and among themselves; these assessments are
directed both at the candidate as a source and at the content of what she says.
Here we can do no more than point to a few of the issues raised by social
mechanisms for epistemic vigilance. Our main aim in doing so is to suggest that, to a
considerable extent, epistemic assessment is socially distributed.
The way in which people rely on distributed assessment systems poses a new
version of Reid's and Hume's problem of how to justify our trust in testimony. This is
particularly true in the case of the new assessment systems without which we would
be unable to use the Web at all. Google is a salient case in point. Google is not only a
tool for searching the Web; its results page also
represents, in the form of a ranked list, the relative epistemic values of Web
documents found in a search. The higher the rank of a document, the more likely it is
to contain relevant and reliable information. One way of producing such a ranking
involves calculating the number of links to a given Web document from other Web
sites, and weighting this number according to the relative importance of these sites
(which is itself calculated based on the number of links to them from still other sites).
The idea behind this process is that linking to a document is an implicit judgement of
its worth. The process compiles these judgements into an accessible indication: the
document's rank (Heintz, 2006).
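The ranking procedure described here can be sketched as a small power iteration in the style of PageRank. This is our own simplified illustration, not Google's actual algorithm; the damping factor and the toy link graph are assumptions made for the example:

```python
# Simplified PageRank-style sketch (our illustration, not Google's algorithm):
# a link confers worth, weighted by the linking site's own importance,
# which is itself computed recursively from the links it receives.

def rank(links: dict[str, list[str]], damping: float = 0.85,
         iterations: int = 50) -> dict[str, float]:
    """Iteratively score pages from the link graph {page: [pages it links to]}."""
    pages = list(links)
    n = len(pages)
    scores = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {}
        for p in pages:
            # Each page passes its score evenly to the pages it links to.
            incoming = sum(scores[q] / len(links[q])
                           for q in pages if p in links[q])
            new[p] = (1 - damping) / n + damping * incoming
        scores = new
    return scores

web = {"a": ["b"], "b": ["c"], "c": ["a"], "d": ["c"]}
scores = rank(web)
print(max(scores, key=scores.get))  # "c": linked to by more, better-linked pages
```

The recursion is the point: a document's rank aggregates the implicit judgements of those who link to it, weighted by how those judges are themselves judged.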
Why, then, do people rely on a search engine such as Google even though they
know little or nothing about how its results pages are produced? And how far are they
justified in doing so? Note, first, that our reliance is not entirely blind: This cognitive
technology, like any other technology, is adopted on the basis of its observed success.
Moreover, our reliance is tentative: We are willing to look first at highly ranked pages
and to assume that there are good reasons why they are so highly ranked. But don't
we then exercise some fairly standard epistemic vigilance towards the information we
are presented with? The work and ideas evoked in this article should be relevant to an
investigation of such questions.
9. Concluding remark
Our aim in this paper has been to give some substance to the claim that humans have
a suite of cognitive mechanisms for epistemic vigilance. To this end, we have surveyed
relevant issues, research and theories in
psychology and the social sciences. We do not expect our readers to have accepted all
of our arguments and speculations. What we
do hope is to have made a good case for the recognition of epistemic vigilance as an
important topic for further research. Individual mechanisms for epistemic vigilance
are articulated across individuals and populations into social mechanisms. Some of
these mechanisms are targeted at the source of information, others at its content.
Seeing these diverse mechanisms as all contributing to one and the same function of
epistemic vigilance may be a source of insight in the study of each one of them.
Dan Sperber
Institut Jean Nicod, Paris
and
Department of Philosophy
Central European University
Budapest, Hungary
Fabrice Clément
Laboratoire de Sociologie
Université de Lausanne
Switzerland
Christophe Heintz
Department of Philosophy
Central European University
Budapest, Hungary
Olivier Mascaro
Department of Philosophy
Central European University
Budapest, Hungary
Hugo Mercier
PPE Program
University of Pennsylvania
Philadelphia, USA
Gloria Origgi
Institut Jean Nicod, Paris
France
Deirdre Wilson
Department of Linguistics
University College London
and
CSMN
University of Oslo
Norway
References
Dijksterhuis, A., Bos, M. W., Nordgren, L. F. and Van Baaren, R. B. 2006: On making the
right choice: the deliberation-without-attention effect. Science, 311, 1005–7.
Ekman, P. 2001: Telling Lies. New York: Norton.
Ekman, P. and O'Sullivan, M. 1991: Who can catch a liar? American Psychologist, 46, 913–20.
Evans, J. S. B. T. 2002: Logic and human reasoning: an assessment of the deduction
paradigm. Psychological Bulletin, 128, 978–96.
Evans, J. S. B. T. and Frankish, K. (eds) 2009: In Two Minds. Oxford: Oxford University Press.
Evans, J. S. B. T. and Over, D. E. 1996: Rationality and Reasoning. Hove: Psychology Press.
Figueras-Costa, B. and Harris, P. 2001: Theory of mind development in deaf children: a
nonverbal test of false-belief understanding. Journal of Deaf Studies and Deaf
Education, 6, 92.
Foley, R. 1994: Egoism in epistemology, in F. Schmitt (ed.), Socializing Epistemology.
Lanham, MD: Rowman and Littlefield, Inc.: 53–73.
Fricker, E. 1995: Critical notice: telling and trusting: reductionism and anti-reductionism in
the epistemology of testimony. Mind, 104, 393–411.
Fricker, E. 2006: Testimony and epistemic autonomy. In J. Lackey and E. Sosa (eds), The
Epistemology of Testimony. Oxford: Oxford University Press.
Fusaro, M. and Harris, P. L. 2008: Children assess informant reliability using bystanders'
non-verbal cues. Developmental Science, 11, 771–7.
Gee, C. L. and Heyman, G. D. 2007: Children's evaluation of other people's self-descriptions.
Social Development, 16, 800–18.
Gilbert, D. T., Krull, D. S. and Malone, P. S. 1990: Unbelieving the unbelievable: some
problems in the rejection of false information. Journal of Personality and Social
Psychology, 59, 601–13.
Gilbert, D. T. and Malone, P. S. 1995: The correspondence bias. Psychological Bulletin, 117,
21–38.
Gilbert, D. T., Tafarodi, R. W. and Malone, P. S. 1993: You can't not believe everything you
read. Journal of Personality and Social Psychology, 65, 221–33.
Goldman, A. 1999: Knowledge in a Social World. Oxford: Oxford University Press.
Gouzoules, H., Gouzoules, S. and Miller, K. 1996: Skeptical responding in rhesus monkeys
(Macaca mulatta). International Journal of Primatology, 17, 549–68.
Grice, H. P. 1957: Meaning. Philosophical Review, 66, 377–88.
Grice, H. P. 1969: Utterer's meaning and intentions. Philosophical Review, 78, 147–77.
[Reprinted in Grice, 1989, pp. 86–116].
Grice, H. P. 1975: Logic and conversation. In P. Cole and J. L. Morgan (eds), Syntax and
Semantics, Vol. 3: Speech Acts. New York: Seminar Press.
Grice, H. P. 1989: Studies in the Way of Words. Cambridge, MA: Harvard University Press.
Hardwig, J. 1985: Epistemic dependence. The Journal of Philosophy, 82: 335–349.
Harman, G. 1986: Change in View: Principles of Reasoning. Cambridge, MA: MIT Press.
Harris, P. and Núñez, M. 1996: Understanding of permission rules by preschool children.
Child Development, 67, 1572–91.
Hasson, U., Simmons, J. P. and Todorov, A. 2005: Believe it or not: on the possibility of
suspending belief. Psychological Science, 16, 566–71.
Heintz, C. 2006: Web search engines and distributed assessment systems, Pragmatics &
Cognition, 14, 387–409.
Henrich, J. and Boyd, R. 1998: The evolution of conformist transmission and the emergence
of between-group differences. Evolution and Human Behavior, 19, 215–41.
Heyman, G. D. 2008: Children‟s critical thinking when learning from others. Current
Directions in Psychological Science, 17, 344–7.
Holton, R. 1994: Deciding to trust, coming to believe. Australasian Journal of Philosophy,
72: 63–76.
Hutchins, E. 1980: Culture and Inference: A Trobriand Case Study. Cambridge, MA:
Harvard University Press.
Hutchins, E. 1996: Cognition in the Wild. Cambridge, MA: MIT Press.
Ifantidou, E. 2001: Evidentials and Relevance. Amsterdam: John Benjamins.
Jaswal, V. K., Croft, A. C., Setia, A. R. and Cole, C. A. In press: Young children have a specific,
highly robust bias to trust testimony. Psychological Science.
Kahneman, D. 2003: A perspective on judgment and choice: mapping bounded rationality.
American Psychologist, 58, 697–720.
Koenig, M. A. and Echols, C. H. 2003: Infants' understanding of false labeling events: the
referential roles of words and the speakers who use them. Cognition, 87, 179–203.
Koenig, M. A. and Harris, P. L. 2007: The basis of epistemic trust: reliable testimony or
reliable sources? Episteme, 4, 264–84.
Koriat, A., Lichtenstein, S. and Fischhoff, B. 1980: Reasons for confidence. Journal of
Experimental Psychology: Human Learning and Memory, 6, 107–18.
Krebs, J. R. and Dawkins, R. 1984: Animal signals: mind-reading and manipulation? In J. R.
Krebs and N. B. Davies (eds), Behavioural Ecology: An Evolutionary Approach, 2nd
edn. Oxford: Basil Blackwell Scientific Publications.
Kunda, Z. 1990: The case for motivated reasoning. Psychological Bulletin, 108, 480–98.
Lampinen, J. M. and Smith, V. L. 1995: The incredible (and sometimes incredulous)
child witness: child eyewitnesses' sensitivity to source credibility cues. Journal of
Applied Psychology, 80, 621–7.
Lewis, D. K. 1969: Convention. Cambridge, MA: Harvard University Press.
Lewis, M., Stanger, C. and Sullivan, M. W. 1989: Deception in 3-year-olds. Developmental
Psychology, 25, 439–43.
Locke, J. 1690 [1975]: An Essay Concerning Human Understanding, P. Nidditch, ed. Oxford:
Oxford University Press.
Lutz, D. J. and Keil, F. C. 2002: Early understanding of the division of cognitive labor. Child
Development, 73, 1073–84.
Malone, B. E. and DePaulo, B. M. 2001: Measuring sensitivity to deception. In J. A. Hall and
F. Bernieri (eds), Interpersonal Sensitivity: Theory, Measurement, and Application.
Hillsdale, NJ: Erlbaum: 103–124.
Mann, S., Vrij, A. and Bull, R. 2004: Detecting true lies: police officers' ability to detect
suspects' lies. Journal of Applied Psychology, 89, 137–149.
Mascaro, O. and Sperber, D. 2009: The moral, epistemic, and mindreading components of
children's vigilance towards deception. Cognition, 112, 367–80.
Mathew, S. and Boyd R. 2009: When does optional participation allow the evolution of
cooperation? Proceedings of the Royal Society of London B, 276(1659), 1167–1174.
Matsui, T., Rakoczy, H., Miura, Y. and Tomasello, M. 2009: Understanding of speaker
certainty and false-belief reasoning: a comparison of Japanese and German
preschoolers. Developmental Science, 12, 602–13.
Mercier, H. submitted-a: Developmental evidence for the argumentative theory of reasoning.
Mercier, H. submitted-b: Looking for arguments.
Mercier, H. submitted-c: On the universality of argumentative reasoning.
Mercier, H. and Landemore, H. submitted: Reasoning is for arguing: consequences for
deliberative democracy.
Mercier, H. and Sperber, D. 2009: Intuitive and reflective inferences. In J. St B. T. Evans and
K. Frankish (eds), In Two Minds. New York: Oxford University Press.
Mercier, H. and Sperber, D. Forthcoming: Why do humans reason? Arguments for an
argumentative theory. Behavioral and Brain Sciences.
Milinski, M., Semmann, D. and Krambeck, H. J. 2002: Reputation helps solve the 'tragedy of
the commons'. Nature, 415, 424–6.
Millikan, R. G. 1987: Language, Thought and Other Biological Categories. Cambridge, MA: MIT Press.
Mills, C. M. and Keil, F. C. 2005: The development of cynicism. Psychological Science, 16,
385–90.
Morris, C. W. 1999: What is this thing called 'reputation'? Business Ethics Quarterly, 9, 87–102.
Nickerson, R. S. 1998: Confirmation bias: a ubiquitous phenomenon in many guises. Review of
General Psychology, 2, 175–220.
Nowak, M. A. and Sigmund, K. 1998: Evolution of indirect reciprocity by image scoring.
Nature, 393, 573–7.
Nurmsoo, E. and Robinson, E. J. 2009: Children's trust in previously inaccurate informants
who were well or poorly informed: when past errors can be excused. Child
Development, 80, 23–27.
Nurmsoo, E., Robinson, E. J. and Butterfill, S. A. In press: Are children gullible? Review of
Philosophy and Psychology.
Onishi, K. H. and Baillargeon, R. 2005: Do 15-month-old infants understand false beliefs?
Science, 308, 255–58.
Origgi, G. 2005: A stance of trust. Paper presented at the 9th International Pragmatics
Conference (IPRA), Riva del Garda, July 10–15th; to be published in T. Matsui (ed.),
Pragmatics and Theory of Mind. Amsterdam: John Benjamins (forthcoming).
Origgi, G. 2008: Trust, authority and epistemic responsibility. Theoria, 23: 35–44.
Origgi, G. and Sperber, D. 2000: Evolution, communication and the proper function of
language. In P. Carruthers and A. Chamberlain (eds), Evolution and the Human Mind:
Modularity, Language and Meta-Cognition. Cambridge: Cambridge University Press.
Paglieri, F. and Woods, J. In press: Enthymematic parsimony. Synthese.
Wimmer, H. and Perner, J. 1983: Beliefs about beliefs: representation and constraining
function of wrong beliefs in young children‟s understanding of deception. Cognition, 13,
41–68.
Yamagishi, T. 2001: Trust as a form of social intelligence. In K. Cook (ed.), Trust in Society.
New York: Russell Sage Foundation.
Ybarra, O., Chan, E. and Park, D. 2001: Young and old adults‟ concerns about morality and
competence. Motivation and Emotion, 25, 85–100.