
Combined Critical Thinking


Visual illusions are like cognitive illusions in that...

they illustrate how System 1 can be trained to become more accurate in automatic judgments

the illusions do not arise at all for people who are sufficiently careful to monitor their System 1

they show us that we can't know the truth about how the world really is

it is hard to shake the incorrect impression even after we are aware that it is incorrect

1.3 Guiding the mind

Although much of our everyday reasoning is extremely good, we are sometimes faced with reasoning
tasks that the human mind is not well-suited to perform. In such situations, we may feel we are
reasoning perfectly well, while actually being subject to systematic errors. Avoiding these pitfalls
requires an attentive rider who is ready to correct the elephant when necessary. And eventually, with
enough training, the elephant may get better at avoiding them all on its own.

The errors that I call cognitive pitfalls include not only mental glitches uncovered by cognitive
psychologists, but also errors in probabilistic reasoning, common mistakes in making decisions, and even
some logical fallacies. In short, any specific way in which we tend to mess up our reasoning in a
systematic way counts as a cognitive pitfall.

We will encounter many more cognitive pitfalls in subsequent chapters, but it is worth introducing a
few examples here in order to illustrate some of the main reasons why we run into them. We can break
these down into three sources of error:

we like to take shortcuts rather than reason effortfully
we hold onto beliefs without good reason
we have motivations for belief that conflict with accuracy
All of these errors tend to go unnoticed by our conscious thought processes, and being hurried and
impulsive makes us more susceptible to them.

Shortcuts

The elephant often knows where to go, but there are points along the path where the elephant's
inclinations are not reliable. In those cases, if the rider isn't careful, the elephant may decide to take a
"shortcut" that actually leads away from the path. One of the most important skills in reasoning is
learning when we can trust the elephant's impulses, and when we can't.

Too often, when we are faced with a question that we should answer using our effortful reasoning, we simply
allow System 1 to guess at the answer, and then we go with that. This is a cognitive shortcut we use to
avoid the difficult effort of System 2 thinking. But if it's not the sort of question that System 1 is
naturally good at, it'll hand us an answer using a quick process that's not appropriate to the situation.

For example, we answer extremely simple math problems using System 1. If I ask you what 2 + 2 is, you
don't have to perform any effortful calculation: the answer just pops into your head. Now consider the
following question:

A bat and a ball together cost $1.10. The bat costs a dollar more than the ball. How much does the ball
cost?

Take a moment to actually answer the question for yourself before reading on.

The answer might seem completely obvious, but what if I told you that most Ivy League students—and
80% of the general public—get it wrong? If the answer took virtually no effort on your part, that's a sign
that you just let your System 1 answer it, and you might want to go back and check your answer.

Nearly everyone's first impulse is to say that the ball costs 10 cents. This is the answer that jumps out at
us, because subtracting $1 from $1.10 is so easy that it's automatic. But some people do question that
initial reaction, at which point they notice that if the bat costs $1 and the ball costs 10 cents, the bat
does not actually cost a dollar more than the ball: it only costs 90 cents more.

We don't get the answer wrong because the math is difficult. Instead, the problem is that an answer
jumps out at us and seems plausible, and it is very tempting to spend no additional effort on the problem.
The tendency to override that temptation is what psychologists call cognitive reflection, and the bat-
and-ball question is one of a series of questions used to measure it. What makes the question so hard is
precisely that it seems so easy. When people are given versions of the bat-and-ball problem with slightly
more complex numbers, they tend to do better. Faced with a trickier math problem in which no answer
seems obvious at first, we have to actually engage System 2. And once we've done that, we're more likely
to notice that we can't solve the problem just by subtracting one number from the other. [12]
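
For readers who want to see the arithmetic spelled out, here is one way to write the algebra, using b for the price of the bat and c for the price of the ball (the letters are just labels added here; they are not part of the original problem):

```latex
% Worked algebra for the bat-and-ball problem
% (b = price of the bat, c = price of the ball)
\begin{align*}
  b + c &= 1.10 && \text{the bat and ball together cost \$1.10} \\
  b &= c + 1.00 && \text{the bat costs a dollar more than the ball} \\
  (c + 1.00) + c &= 1.10
    \;\Longrightarrow\; 2c = 0.10
    \;\Longrightarrow\; c = 0.05
\end{align*}
```

So the ball costs 5 cents and the bat costs $1.05, which really is a dollar more.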

Here is another example. There are two bags sitting on a table. One bag has two apples in it, and the
other bag has an apple and an orange in it. You have no idea which bag is which. You reach into one of
the bags, and grab something at random. It's an apple. What's the probability that the bag you reached
into contains the orange? Think about the question long enough to arrive at an answer before moving
on.

The answer that jumps out to most people is 50%, but it's easy to see why that can't be right. You were
more likely to pull out an apple if the bag had only apples in it. So pulling out an apple
must provide some evidence that the bag has only apples. At the outset, the probability that this was the
bag with only apples was 50%, so the probability should be higher now. (As we'll see, this is one of the
central principles of evidence: when you get evidence for a hypothesis, you should increase your
confidence in that hypothesis.)

If you are like most people, this explanation still won't dislodge your intuitive sense that the right
answer should be 50%. In Chapter 8, you'll learn the principles of probability that explain how to work
out the right answer. But for now, the point is that our brains are not very good at even very simple
assessments of probability. This is important to know about ourselves, because probability judgments are
unavoidable in everyday life as well as in many professional contexts.
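
If you would like a concrete check before we get to Chapter 8, here is a small simulation sketch in Python (the function name and trial count are illustrative choices, not part of the text). It simply counts outcomes rather than calculating anything, but it makes clear that the intuitive answer of 50% is too high.

```python
import random

def chance_bag_has_orange(trials=100_000):
    """Monte Carlo sketch of the bags-of-fruit question: pick a bag at random,
    draw one fruit at random, keep only the trials where the draw was an apple,
    and see how often the chosen bag was the one containing the orange."""
    bags = [["apple", "apple"], ["apple", "orange"]]
    drew_apple = 0
    apple_from_orange_bag = 0
    for _ in range(trials):
        bag = random.choice(bags)       # reach into one of the bags at random
        fruit = random.choice(bag)      # grab something at random
        if fruit == "apple":            # the situation described in the example
            drew_apple += 1
            if "orange" in bag:
                apple_from_orange_bag += 1
    return apple_from_orange_bag / drew_apple

print(chance_bag_has_orange())  # settles near 0.33, not 0.50
```

The simulated frequency settles near one in three: drawing an apple really is evidence that you reached into the bag with only apples.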

Both the bat-and-ball example and the bags-of-fruit example illustrate why we should be suspicious of
the answer handed to us by System 1. In some cases, we can start by second-guessing our initial reaction
and then simply calculate the correct answer using System 2. In other cases—such as the bags-of-fruit
example—we may not know how to calculate the correct answer. But at least we should think twice
before assuming we already know it! The first and most important step is noticing when we don't really
know the answer.

Here is a third kind of case where System 1 is not very reliable and tends to get thrown off by a tempting
answer. Suppose you're wondering how common some kind of event is—for example, how often do
people die from shark attacks, tornados, or accidents with furniture? An easy way to guess the answer is
by asking ourselves how easily we can bring to mind examples from our memory. The more difficult it is
to think of an example, the more uncommon the event—or so it might seem. As far as System 1 is
concerned, this seems like a pretty good shortcut to answering a statistical question. The name for any
cognitive shortcut that we commonly use to bypass effortful reasoning is a heuristic, and this
particular one is called the availability heuristic. But, of course, the ease with which we can recall things
is affected by irrelevant factors including the vividness and order of our memories. So how available an
example is in our memory is not always a good indication of how common it is in reality.

If the examples we remember come from what we've seen or heard from the media, things are even
worse. There's no relationship between how many media reports there are about a type of event and how
many events of that type actually occur. Shark attacks and tornados are rare and dramatic, while
drownings and accidents with furniture are rarely considered newsworthy. In a world of sensational news
media, System 1's availability heuristic is an extremely unreliable way to evaluate the prevalence of
events.

In contemporary life, it's crucial for us to be able to make accurate judgments in cases where our
automatic intuitions may not be very reliable. Even if System 1 is not well-suited to answer the kind of
question we are faced with, it might strongly suggest an answer to us. And it requires a concerted effort
to check the impulse. In the examples I've given, we are in the very sort of situation that we should be
suspicious about, because System 1 is not very good at calculating costs or probabilities. Better to set
aside the tempting answer, grit our teeth, and actually think through the problem.

Stubbornness

It takes effort to change our minds, so we tend not to do so. Once a belief has lodged itself in our heads,
it can be hard to shake, even if we no longer have any support for it. In our analogy, when the elephant is
marching happily along, it takes effort to get the elephant to change course. If the rider is not paying
attention or is just too lazy to make a correction, the elephant will just keep marching in the same
direction, even if there is no longer any good reason to.

This phenomenon is known as belief perseverance, and it has been well-established in cognitive


psychology. For example, people allow their beliefs to be affected by news and journal articles even after
they are aware that the articles have been debunked and retracted by the source. [13] The same effect has
been found for all kinds of cases where people discover that their original reasons for believing
something have been completely undercut. In a standard kind of experiment testing this phenomenon,
subjects first answer a series of questions that have right and wrong answers. Afterward, they receive
scores that are completely random and unrelated to their performance. Eventually, everyone in the
experiment is told the truth: the scores they received had nothing to do with how they performed. Still,
when they are later asked to assess themselves, the scores they received continue to have an impact on
their self-evaluations. [14]

Another type of study involves two groups being told opposite made-up "facts". For example, one group
of people is told that people who love risks make better firefighters, while another group is told that they
make worse firefighters. Later, everyone in the study is informed that the two groups were told opposite
things, and that the claims about firefighters had been made up. But even after the original claims are
debunked, when subjects are asked about their own personal view about riskiness and firefighting ability,
they continue to be influenced by whichever claim they had been randomly assigned. [15]

Once a piece of misinformation has been taken up into someone's belief system, it takes significant
cognitive effort to dislodge it! In the studies just discussed, the subjects did not have the chance to seek
out additional evidence that confirms their view. But an unsupported belief can gain even more traction
when people do have that chance. This is because we are subject to confirmation bias: we tend to notice
or focus on potential evidence for our pre-existing views, while neglecting or discounting contrary
evidence. [16]

Confirmation bias is perhaps the best known and most widely accepted notion of inferential error to
come out of the literature on human reasoning.
—Jonathan Evans, Bias in Human Reasoning
The effect of confirmation bias on our reasoning is hard to overstate. Again and again, and in a wide
range of situations, psychologists have found that people seem to find a way to confirm their pre-
existing beliefs. Sometimes this is because we are emotionally attached to those beliefs and so are
motivated to seek out only sources of evidence that will support them. But confirmation bias also occurs
even when the issue is something we don't care much about. This is because our expectations color how
we interpret ambiguous or neutral experiences, making those experiences seem to fit well with our pre-
existing views. [17] In other words, we tend to see what we'd expect to see if our views were true, and not
to notice things that would conflict with our views.

Confirmation bias leads to a related cognitive pitfall involving cases where we form beliefs by making a
series of observations over time. In such cases, we tend to develop opinions early on and then either
interpret later evidence in a way that confirms our opinions (due to confirmation bias), or simply pay less
attention to it out of sheer laziness. The result is known as the evidence primacy effect: earlier evidence
has more influence on our beliefs. [18]

Imagine you are asked to judge a murder case based on a series of facts about the case. You are given a
description of the case followed by about 20 relevant pieces of evidence, half supporting guilt and the
other half supporting innocence. (For example, the defendant was seen driving in a different direction
not long before the murder; on the other hand, the defendant and the victim had recently had a loud
argument.) After considering all the evidence, you are asked to assess the probability that the defendant
is guilty. Would it matter in what order you were presented with the evidence?

Most of us would like to think that the order of evidence would have little effect on us. As long as we see
all the evidence, we think we will be able to weigh it fairly. But in a study where subjects were put in
exactly this scenario, those who first saw the incriminating evidence assigned an average probability of
75% to the defendant's guilt, while for those who first saw the exonerating evidence, the average
probability was 45%. [19] This suggests that the order in which evidence is presented in court could make
the difference in whether someone spends the rest of their life in jail.

The importance of first impressions goes beyond the courtroom, of course. For example, imagine I
happen to see Bob being friendly to someone, and form the impression that he is friendly. If I
subsequently see Bob interacting normally with people, I am more likely to notice things about those
interactions that fit with my image of Bob as friendly. And if I then see Bob being unfriendly, I am more
likely to assume he had a good reason. But if I had witnessed all of those scenes in the reverse order, I
would probably have ended up with a very different impression of Bob. I would have interpreted all the
later experiences through the lens of the initial interaction, in which he was unfriendly. But this is a
mistake: the first time you meet someone is no more representative of what they are really like than any
other time. Perhaps less so, if they are trying especially hard to make a good impression!

To sum up: when we form beliefs, we often hold on to them even if our original reasons are debunked,
and then we go on to interpret subsequent evidence in a way that confirms them. Even when they
concern things we don't particularly care about, beliefs are hard to shake. So what happens when our
beliefs do concern things we care about? In that case, they are even harder to shake, as we're about to
find out!

Motivated reasoning

To some degree, especially at a conscious level, we want to have accurate beliefs. But other motivations
are involved as well, often beneath the level of awareness. In other words, even if the rider's goal is to
have accurate beliefs, the elephant may have goals of its own.

In forming and maintaining beliefs, we are often at some level motivated by how we would like things to be
rather than merely by how they actually are. This is called motivated reasoning. For example, we like to
have positive opinions of ourselves, and at least in cases where there is plenty of room for interpretation,
we are inclined to believe that we have personal traits that are above average. Studies have found that
93 percent of American drivers rated themselves better than the median driver; a full 25 percent of students
considered themselves to be in the top 1 percent in terms of the ability to "get along with others"; and 94
percent of college professors thought they did "above average work". [20] Many studies have found
people to consider themselves better than average on a wide range of difficult-to-measure personality
traits.

Of course, we can't just choose to believe something because we want it to be true. I can't just convince
myself, for example, that I am actually on a boat right now, however much I may want to be on a boat.
Motivated reasoning doesn't work like that: we can't just override obvious truths with motivated beliefs.
[21] So our motivated reasoning has to be subtle, tipping the scales here and there when the situation is
murky. We typically don't allow ourselves to notice that we are engaged in motivated reasoning, since
that would defeat the purpose. We want to feel like we've formed our beliefs in an impartial way, and
maintain the illusion that our beliefs and desires just happen to line up. (It's important to stress that
many of our beliefs are "motivated" in this sense even though we are not aware that we have the relevant
motivations: most of our motivated reasoning is entirely subconscious and non-transparent.)

For this reason, we are more susceptible to motivated reasoning the less clear the evidence really is. For
example, it is fairly easy to just believe that we are better than average when it comes to traits that are
difficult to measure, like leadership ability and getting along with others. This is much harder when it
comes to beliefs about our trigonometry skill or ability to run long distances. [22] We may not have
strong evidence that we are better leaders than other people, but then we don't have strong evidence
against it either. When we want to believe something, it's tempting to apply lower standards for how
much evidence is required. That goes a long way towards allowing us to adopt the beliefs we like, without
the process being so obvious to ourselves that it defeats the purpose. In fact, we tend not to notice the
process going on at all.

It is neither the case that people believe whatever they wish to believe nor that beliefs are untouched by
the hand of wishes and fears.
—Peter Ditto and David Lopez, 'Motivated Skepticism'

Motivated reasoning can also work behind the scenes to influence how we respond to evidence. For
example, since we have limited time and energy to think, we often focus only on certain bits of evidence
that we find interesting. But, without our realizing it, the "interesting" bits of evidence tend to be the
ones that happen to support our favored views.

This general tendency has been found in dozens of studies of human reasoning. Here are just a few:

When two groups of people with opposing opinions examine the same evidence (which provides
support for both sides), most people end up more confident in whatever view they started with. [23]
People who are motivated to disbelieve the conclusions of scientific studies look harder to find
problems with them. [24]
In a series of studies, subjects were "tested" for a made-up enzyme deficiency. Those who "tested
positive" considered the test far less accurate and the deficiency far less serious than those who
"tested negative", even though everyone had seen the same information about the deficiency and the
test. [25]

Together, motivated reasoning and confirmation bias make a powerful combination. When our opinions
are not only preconceived but also motivated, we are not only more likely to notice evidence that
confirms our motivated opinions, but also to apply selective standards when we evaluate the evidence.
This means we tend to uncritically accept evidence that supports our views while finding ways to
discredit evidence that conflicts with our views.

A closing caveat

Having considered all of these cognitive pitfalls involving System 1, it might be easy to get the
impression that System 1 is extremely unreliable in general, and that we should never "listen to our
gut". As the main character puts it in High Fidelity:

"I've been thinking with my guts since I was fourteen years old, and frankly speaking, between you and
me, I have come to the conclusion that my guts have shit for brains."
—Nick Hornby, High Fidelity

But this isn't the right way to think about the reliability of System 1 in general, for two reasons. First, we
have deliberately been focusing on situations that tend to lead to bad outcomes. Our automatic
processes are generally far better and faster at processing information than our deliberate ones. In fact,
the sheer volume of sensory data that System 1 handles would overwhelm our conscious minds. No one
could consciously do enough calculations in real time to approximate the amount of processing that
System 1 must do when, for example, we catch a baseball.

The second reason not to disparage System 1 is this. Even for situations where our automatic judgments
tend to get things wrong, they can sometimes become more reliable, while remaining much faster than
deliberate reasoning. This means that with enough training of the right sort, we can sometimes develop
our gut reactions into genuinely skilled intuition: the ability to make fast and accurate judgments about
a situation by recognizing learned patterns in it. But this only works under the right set of conditions,
and it is easy for people to think they have developed skilled intuition when they really haven't. It's
important to understand when the elephant can be trusted on its own, and when it needs to be guided by
an attentive rider.

Section Questions

The bat-and-ball example and the bags-of-fruit example both illustrate...

that in certain cases we should be wary of our immediate intuitions

that our System 1 is not very good at calculating probabilities

that we are "cognitive misers" when it comes to answering very difficult numerical problems

that under the right conditions, our System 1 can be trained to provide quick and reliable intuitions

The murder case was used to illustrate...


that motivated reasoning can color how we interpret ambiguous evidence

that our System 1 is not very good at estimating probabilities

that our beliefs are often affected by which pieces of evidence we get first

that we are more likely to judge a person as being guilty than as being innocent when we are given
evidence on both sides

When we interpret evidence in a biased way due to motivated reasoning, we tend to...

simply decide that we want to believe something and then figure out ways to convince ourselves that it is
true

knowingly apply selective standards in order to discredit conflicting evidence

deliberately ignore evidence on the other side so that we can bolster our own view

think we are actually being unbiased and fair

If System 1 is not naturally skilled at a certain kind of reasoning task, ...

it may still be possible, under the right conditions, to train it to improve

it is easy to tell that it is not skilled and avoid trusting its responses when faced with that kind of task.

then that task is not the sort of task that System 1 performs, because there is a clear division between
System 1 tasks and System 2 tasks

the only way that reasoning task can be performed reliably is with effortful and deliberate thought
processes

Key terms

Availability heuristic: judging the frequency or probability of an event or attribute by asking ourselves
how easily we can bring examples to mind from memory.

Belief perseverance: the tendency to continue holding a belief even if its original support has been
discredited, and in the face of contrary evidence.

Cognitive illusions: an involuntary error in our thinking or memory due to System 1, which continues
to seem correct even if we consciously realize it's not.

Cognitive pitfalls: a common, predictable error in human reasoning. Cognitive pitfalls include mental
glitches uncovered by cognitive psychologists, as well as logical fallacies.

Cognitive reflection: the habit of checking initial impressions supplied by System 1, and overriding
them when appropriate.

Confirmation bias: the tendency to notice or focus on potential evidence for our pre-existing views,
and to neglect or discount contrary evidence. Confirmation bias can be present with or without an
underlying motive to have the belief in the first place.

Evidence primacy effect: in a process where information is acquired over time, the tendency to give
early information more evidential weight than late information. This tendency arises when we develop
opinions early on, leading to confirmation bias when interpreting later information, or simply a failure to
pay as much attention to it.

Heuristic: a cognitive shortcut used to bypass the more effortful type of reasoning that would be
required to arrive at an accurate answer. Heuristics are susceptible to systematic and predictable errors.

Motivated reasoning: forming or maintaining a belief at least partly because, at some level, we want it
to be true. This manifests itself in selective standards for belief, seeking and accepting evidence that
confirms desired beliefs, and ignoring or discounting evidence that disconfirms them.

Skilled intuition: the ability to make fast and accurate judgments about a situation by recognizing
learned patterns in it. This requires training under specific kinds of conditions.

System 1: the collection of cognitive processes that feel automatic and effortless but are not transparent.
These include specialized processes that interpret sensory data and are the source of our impressions,
feelings, intuitions, and impulses. (The distinction between the two systems is one of degree, and the
two systems often overlap, but it is still useful to distinguish them.)

System 2: the collection of cognitive processes that are directly controlled, effortful, and transparent.
(The distinction between the two systems is one of degree, and the two systems often overlap, but it is
still useful to distinguish them.)

Transparency: the degree to which information processing itself (rather than just its output) is done
consciously, in such a way that one is aware of the steps being taken.

Footnotes
[11] Dobelli, R. (2014). The art of thinking clearly. Harper Paperbacks.

WHY WE PREFER A WRONG MAP TO NO MAP AT ALL


Availability Bias

‘Smoking can’t be that bad for you: my grandfather smoked three packs of
cigarettes a day and lived to be more than 100.’ Or: ‘Manhattan is really safe. I
know someone who lives in the middle of the Village and he never locks his door.
Not even when he goes on vacation, and his apartment has never been broken
into.’ We use statements like these to try to prove something, but they actually
prove nothing at all. When we speak like this, we succumb to the availability bias.
Are there more English words that start with a K or more words with K as their
third letter? Answer: more than twice as many English words have K in third
position as start with a K. Why do most people believe the opposite is true?
Because we can think of words beginning with a K more quickly. They are more
available to our memory.
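
One quick way to check a claim like this is to count for yourself. The sketch below assumes a newline-separated word list such as /usr/share/dict/words, which is a common default on Unix systems but not something mentioned in the text; swap in whatever word list you have.

```python
# A rough sketch for checking the claim above. The word-list path is an
# assumption (a common Unix default), not something from the text.
def count_k_positions(path="/usr/share/dict/words"):
    starts_with_k = 0
    k_in_third_place = 0
    with open(path) as f:
        for line in f:
            word = line.strip().lower()
            if word.startswith("k"):
                starts_with_k += 1
            if len(word) >= 3 and word[2] == "k":
                k_in_third_place += 1
    return starts_with_k, k_in_third_place

print(count_k_positions())  # typically many more words with k in third place
```
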
The availability bias says this: we create a picture of the world using the
examples that most easily come to mind. This is absurd, of course, because in
reality things don’t happen more frequently just because we can conceive of them
more easily.
Thanks to the availability bias, we travel through life with an incorrect risk map
in our heads. Thus, we systematically overestimate the risk of being the victim of
a plane crash, a car accident or a murder. And we underestimate the risk of dying
from less spectacular means, such as diabetes or stomach cancer. The chances
of a bomb attack are much lower than we think, and the chances of suffering from
depression are much higher. We attach too much likelihood to spectacular, flashy
or loud outcomes. Anything silent or invisible we downgrade in our minds. Our
brains imagine show-stopping outcomes more readily than mundane ones. We
think dramatically, not quantitatively.
Doctors often fall victim to the availability bias. They have their favourite
treatments, which they use for all possible cases. More appropriate treatments
may exist, but these are in the recesses of the doctors’ minds. Consequently they
practise what they know. Consultants are no better. If they come across an
entirely new case, they do not throw up their hands and sigh: ‘I really don’t know
what to tell you.’ Instead they turn to one of their more familiar methods, whether
or not it is ideal.
If something is repeated often enough, it gets stored at the forefront of our
minds. It doesn’t even have to be true. How often did the Nazi leaders have to
repeat the term ‘the Jewish question’ before the masses began to believe that it
was a serious problem? You simply have to utter the words ‘UFO’, ‘life energy’ or
‘karma’ enough times before people start to credit them.
The availability bias has an established seat at the corporate board’s table, too.
Board members discuss what management has submitted – usually quarterly
figures – instead of more important things, such as a clever move by the
competition, a slump in employee motivation or an unexpected change in
customer behaviour. They tend not to discuss what’s not on the agenda. In
addition, people prefer information that is easy to obtain, be it economic data or
recipes. They make decisions based on this information rather than on more
relevant but harder to obtain information – often with disastrous results. For
example, we have known for ten years that the so-called Black–Scholes formula
for the pricing of derivative financial products does not work. But we don’t have
another solution, so we carry on with an incorrect tool. It is as if you were in a
foreign city without a map, and then pulled out one for your home town and simply
used that. We prefer wrong information to no information. Thus, the availability
bias has presented the banks with billions in losses.
What was it that Frank Sinatra sang? ‘Oh, my heart is beating wildly/And it’s all
because you’re here/When I’m not near the girl I love/I love the girl I’m near.’ A
perfect example of the availability bias. Fend it off by spending time with people
who think differently than you think – people whose experiences and expertise
are different than yours. We require others’ input to overcome the availability bias.
See also Ambiguity Aversion (ch. 80); Illusion of Attention (ch. 88); Association Bias
(ch. 48); Feature-Positive Effect (ch. 95); Confirmation Bias (ch. 7–8); Contrast Effect (ch.
10); Neglect of Probability (ch. 26)
[73] Dobelli, R. (2014). The art of thinking clearly. Harper Paperbacks.

WHY FIRST IMPRESSIONS DECEIVE


Primacy and Recency Effects

Allow me to introduce you to two men, Alan and Ben. Without thinking about it too
long, decide who you prefer. Alan is smart, hard-working, impulsive, critical,
stubborn and jealous. Ben, however, is jealous, stubborn, critical, impulsive,
hard-working and smart. Who would you prefer to get stuck in an elevator with?
Most people choose Alan, even though the descriptions are exactly the same.
Your brain pays more attention to the first adjectives in the lists, causing you to
identify two different personalities. Alan is smart and hard-working. Ben is jealous
and stubborn. The first traits outshine the rest. This is called the primacy effect.
If it were not for the primacy effect, people would refrain from decking out their
headquarters with luxuriously appointed entrance halls. Your lawyer would feel
happy turning up to meet you in worn-out sneakers rather than beautifully
polished designer Oxfords.
The primacy effect triggers practical errors too. Nobel laureate Daniel
Kahneman describes how he used to grade examination papers at the beginning
of his professorship. He did it as most teachers do – in order: student 1 followed
by student 2 and so on. This meant that students who answered the first
questions flawlessly endeared themselves to him, thus affecting how he graded
the remaining parts of their exams. So, Kahneman switched methods and began
to grade the individual questions in batches – all the answers to question one,
then the answers to question two, and so forth. Thus, he cancelled out the
primacy effect.
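
Described as a procedure, the switch looks like this; the function and variable names below are hypothetical, added only to make the batch order concrete.

```python
# A sketch of the batch-grading order described above (names are hypothetical).
# Instead of grading student 1's whole exam, then student 2's, and so on, we
# grade every answer to question 1, then every answer to question 2, and so on.
def grade_in_batches(exams, grade_answer):
    """exams: dict mapping student -> list of answers, one per question
    grade_answer: function (question_index, answer) -> numeric score"""
    num_questions = len(next(iter(exams.values())))
    totals = {student: 0 for student in exams}
    for q in range(num_questions):              # one question at a time...
        for student, answers in exams.items():  # ...across all students
            totals[student] += grade_answer(q, answers[q])
    return totals
```
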
Unfortunately, this trick is not always replicable. When recruiting a new
employee, for example, you run the risk of hiring the person who makes the best
first impression. Ideally, you would set up all the candidates in order and let them
answer the same question one after the other.
Suppose you sit on the board of a company. A point of discussion is raised – a
topic on which you have not yet passed judgement. The first opinion you hear will
be crucial to your overall assessment. The same applies to the other participants,
a fact that you can exploit: if you have an opinion, don’t hesitate to air it first. This
way, you will influence your colleagues more and draw them over to your side. If,
however, you are chairing the committee, always ask members’ opinions in
random order so that no one has an unfair advantage.
The primacy effect is not always the culprit; the contrasting recency effect
matters as well. The more recent the information, the better we remember it. This
occurs because our short-term memory file drawer, as it were, contains very little
extra space. When a new piece of information gets filed, an older piece of
information is discarded to make room.
When does the primacy effect supersede the recency effect, or vice versa? If
you have to make an immediate decision based on a series of ‘impressions’
(such as characteristics, exam answers etc.), the primacy effect weighs heavier.
But if the series of impressions was formed some time ago, the recency effect
dominates. For instance, if you listened to a speech a few weeks ago, you will
remember the final point or punchline more clearly than your first impressions.
In conclusion: first and last impressions dominate, meaning that the content
sandwiched between has only a weak influence. Try to avoid evaluations based
on first impressions. They will deceive you, guaranteed, in one way or another.
Try to assess all aspects impartially. It’s not easy, but there are ways around it.
For example, in interviews, I jot down a score every five minutes and calculate the
average afterward. This way, I make sure that the ‘middle’ counts just as much as
hello and goodbye.
See also Illusion of Attention (ch. 88); Sleeper Effect (ch. 70); Salience Effect (ch. 83)
