
Dennett vs. Searle


In 1997, the Journal of Cognitive Neuroscience published an interview with me by its editor, Michael
Gazzaniga, in which he asked me about Searle and others. I replied:

Searle is not even in the same discussion [with the Churchlands]. He claims that organic brains are
required to “produce” consciousness--at one point he actually said brains “secrete” consciousness, as if it
were some sort of magical goo--but since this is for him just an article of faith with no details, no models,
no explanatory power, no predictions, it is hard to know what to say in response. Given the peculiar way
he divorces his favored “causal powers” of brains from their control powers--the powers that permit
them to accomplish discrimination and perception, underlie memory, guide behavior--his doctrine is
conveniently untestable, now and forever. He and his followers do not shrink from this implication--they
embrace it! To me, this is an unvarnished reductio ad absurdum of his position, and I marvel that
anybody takes it seriously. Some people just love an insoluble mystery, I guess.

This interview was later reprinted in a book edited by Gazzaniga, called Conversations in the Cognitive
Neurosciences. When Searle read the interview, he sent Gazzaniga a furious email, charging me with
lying about his position. I have not been able to find a copy of Searle’s original email, which was
forwarded to me by Gazzaniga, but here is my reply, Searle’s reply to me, my reply to Searle, and
Searle’s final reply. These have never been published.

Subject: Re: Searle’s complaint


Date: Wed, 7 May 1997 23:49:34 -0400 (EDT)
From: Daniel Dennett <ddennett@diamond.tufts.edu>
To: “Michael S. Gazzaniga” <Michael.S.Gazzaniga@Dartmouth.EDU>
CC: John Searle <searle@cogsci.berkeley.edu>

Dear Mike,

Thanks, I guess, for forwarding Searle’s complaint to me. Here’s my reply.

Searle says, in his original 1980 BBS article: “Whatever else intentionality is, it is a biological
phenomenon, and it is as likely to be as causally dependent on the specific biochemistry of its origins as
lactation, photosynthesis, or any other biological phenomenon.” (p.424)
Now this “likely” line leaves matters unclear, but many-- almost all--commentators on that
article took him to be asserting that intentionality (or consciousness, which Searle now insists is a
prerequisite for genuine intentionality) was necessarily a biological phenomenon (dependent on an
“organic brain”)--and hence could not be, say, a “silicon” phenomenon. It was hard, otherwise, to see
what his point was.
In any case, Searle apparently confirmed that interpretation in his exchange with me in 1982, in
the NEW YORK REVIEW. There he says he is opposing the “strong AI belief that the appropriately
programmed computer literally has a mind, and its antibiological claim that the specific neurophysiology
of the brain is irrelevant to the study of the mind.” (p57) If a program is not sufficient, as he insists, then
something else must be necessary. What? Well, Searle doesn’t ever quite say, but he gives us a clue: If
ignoring “specific neurophysiology” is a mistake--because it’s relevant--is that because something about
that neurophysiology is necessary? If not, why would it be relevant? Why, otherwise, would AIers be

wrong to ignore it? Searle doesn’t say. He speaks in his message to you of “certain functions of neural
tissues” as sufficient (not necessary), but he’s never said what these certain functions are, or how to
detect their presence. No sane person can disagree with this, as he says, because it is a claim with no
content yet. It won’t do to say that “brains do it”--many brains don’t “do it,” because they’re comatose
or dead or otherwise not functioning right, so until he says what these certain functions are, he’s
asserting only that some brains are sometimes sufficient for consciousness. Indeed a trivial claim, and
not anything contrary in any way to strong AI, since it might be that those brains that “do it” manage the
feat in virtue of the programs they implement at the time.
If I (and you, and almost everybody else) have been misinterpreting him to hold that
SOMETHING about the “specific neurophysiology of the brain” is necessary--something that is lacking in
any programmed computer--then this is only because we were adopting the utterly minimal principle of
charity: assuming that he was actually attempting to assert something nonvacuous.
As for secreting consciousness (or intentionality), I must say that John has a bad memory. I
remember the occasion as vividly as can be--though I can’t remember on which of several tellings of the
Chinese Room I’ve endured he made the remark, in response to some challenge or other. (It was back
about 1979-80, the year I was at Stanford.) It was certainly memorable, in any case. I can even recall the
context and the tone of voice. There was John, playing his no-nonsense, roll-up-ma-sleeves, jes’ plain
facts, populist tough guy to the hilt, saying (this is almost--ALMOST--verbatim, but hey, I didn’t have a
tape recorder): “Look [, Pilgrim], it’s a matter of BIOLOGY. The brain is a BioLOGical ORgan. The bile duct
secretes bile, the mammary glands secrete milk, and the brain secretes intentionality!” I remember
some of us in the audience had great fun discussing that remark later that day.
But WHO CARES? (As I said in my 1980 commentary, “He can’t really view intentionality as a
marvelous mental fluid, so what is he trying to get at? I think his concern with INTERNAL properties of
control systems is a misconceived attempt to capture the interior POINT OF VIEW of a conscious agent.”)
Searle, in his reply to me (and Zenon Pylyshyn, who also made much of his lactation and secretion
analogies), insisted that he didn’t think of intentionality as a fluid. No kidding--I’d already made that
point myself. But what WAS the point of his comparing it as a phenomenon to lactation? He passed up
the opportunity to explain further.
I find it telling that Searle gets so huffy about whether I’m “deliberately lying”--I’m telling the
truth, in fact--on this relatively trivial matter, while he remains utterly silent, for years on end, when it
comes to responding to my serious, detailed, and very, very harsh criticisms of his arguments--for
instance, in THE INTENTIONAL STANCE and in my review of his book in J. Phil. I’ll send you a copy. You’d
think that if I misrepresented him THERE, he’d be squawking like mad, but in fact he’s never said boo in
response to those detailed criticisms, so far as I know. I am tempted to conclude that he accepts those
representations of his views. For instance, in “Fast Thinking” in THE INTENTIONAL STANCE (published in
1987), I went to great lengths, in considerable detail and with quotes galore, to show what is
incoherent in Searle’s whole Chinese Room argument. Among other points in it, p333ff, I specifically
discuss the curious line he’s pushing today. That discussion begins: “Searle insists that he has never
claimed that an organic brain is essential for intentionality . . .” and goes on to demonstrate the bind this
leaves him in when he claims (as he says now) that this is “an open factual question.” So much for his
complaint that I misrepresent his views. Since he’s never discussed my discussion, the ball is in his court.
The whole business about “another of Dennett’s fabrications” and about my misquoting him
before is from hunger. In the aforementioned 1982 discussion, he made very heavy weather of the fact
that Doug Hofstadter and I, having CORRECTLY printed every word of his paper in THE MIND’S I, then
inadvertently misquoted our own quotation--it was Doug’s error, but I, as co-author, am ready to share
the responsibility for this misquotation--in the discussion. The fatal slip, let me remind you, was that
where Searle had said “slips of paper” Doug had quoted him as saying “bits of paper” (or maybe I’ve
reversed this in my memory--it didn’t make any difference then, so I may be confusing the two oh-so-
different words). His howl then was a transparent case of somebody trying desperately to direct
attention away from good criticisms by going on about a trivial and non-damaging slip (or bit, or
whatever). In any case, having been alerted by Searle’s truly bizarre performance over the slips vs.
bits, I have been hyper-scrupulous when quoting him ever since. He complains about my not citing any
of his work (in an interview about my own work!) but perhaps he would care, then, to substantiate his
charge of “fabrications” from me in, say, a tenth the detail with which I have demonstrated his errors.
Enough. I’ve already given Searle’s remarks much more time and attention than they deserve. When
Searle takes a similar amount of time and trouble to give my many detailed objections the attention
they deserve, he just may regain my respect.

All the best,

Dan

Subject: Re: Searle’s complaint


Date: Thu, 15 May 1997 13:36:55 MET
From: “John Searle” <filjs@hum.aau.dk>

Dear Dan,

Your email was forwarded to me here.


To get the record straight:

1. You say I claimed that the brain “secretes” consciousness. I have never claimed that. Not in
writing, not in lectures, not anywhere. Never.
2. Ditto for intentionality. I have never claimed it was a secretion, and when you charged me with
this in the original BBS article (1980) I explicitly denied it. I do not know how to make it clearer.
Consciousness is not a secretion nor is intentionality. I have never ever said anything to the
contrary. Perhaps you wish you had heard me say that in a lecture but you did not.
3. What I actually said at Stanford and have published and repeated over and over is this: We
should think of consciousness and intentionality as biological phenomena, on all fours with
growth, digestion, mitosis, meiosis, lactation, photosynthesis, the
secretion of bile, etc. In short, they are biological phenomena among others. You have to be
singularly inattentive (or worse) to hear that as saying that consciousness and intentionality are
fluid secretions. In any case when you charged me with this in BBS 1980 I explicitly denounced it.
4. I have never said our brain tissue was a necessary condition for intentionality or consciousness.
In the article you quote I said the following: “perhaps, for example, Martians also have
intentionality but their brains are made of different stuff. That is an empirical question.” Again,
next page “it might be possible to produce consciousness, intentionality, and all the rest of it
using chemical principles different from those human beings use. It is, as I said, an empirical
question.” My position has always been as follows. Certain processes in actual human and
animal brains are causally sufficient. But human and animal brain tissue is not *logically*
necessary. Perhaps they are empirically necessary but if so we do not know it and I have never
claimed it, as the passages above should make clear. These are empirical questions to be settled
by neurobiological research, not philosophical argument. The picture I have, which as far as I
can see should be uncontroversial, is this: Some processes, still imperfectly understood, in
human and animal brains do it. They do it causally. Call those processes X. Then the question is
do you have to have X to do it? Answer: maybe. We don’t yet know. Maybe you could do it with
Y. But a constraint on Y, which follows trivially from the fact that X does it causally is that Y has
to have equivalent causal powers to do it. The interest of this trivial point derives from the fact
that the program construed as an abstract syntactical process has no causal powers to cause
consciousness or anything else. All the causal powers are in the implementing medium. There are
various other points in your letter to Mike. I won’t try to deal with them here, but one or two I
can’t resist: The problem was never with bits vs. slips but with “a few”. Your (or Doug’s) idea
was to charge me with thinking that a program that passed the Turing test would be on just a
few bits/slips of paper, and then try to show that I stupidly had no idea of the complexity of the
program. About all these unanswered arguments. Huh? I never saw any I thought were worth
answering. Do you mean the one about the program on the shelf? I am not sure what you think
is the argument that is supposed to be important. Actually I do think you misrepresent me in
your J. Phil piece, but it is a genuine misinterpretation and not a misquotation. I try to write and
speak carefully so I am annoyed when misquoted, but I do understand that people sometimes
misinterpret my writings.

Anyway that is all for now. I am writing this on a strange machine using a strange program so I hope it is
not garbled.

Best wishes,

John Searle

Subject: Re: Searle’s complaint


Resent-Date: Thu, 15 May 1997 08:08:30 -0400 (EDT)
Resent-From: michael.s.gazzaniga@Dartmouth.EDU
Resent-To: ddennett@emerald.tufts.edu
Date: Thu, 15 May 1997 08:08:24 -0500
“John Searle” <filjs@hum.aau.dk>
CC: ddennett@PEARL.TUFTS.EDU

Dear Dan and John:

I feel like the guy that reads one legal brief and says, hey this is good. Then I read the other guy’s and say,
hey this is good too........So here I am a rookie with you high verbal IQ guys going at it......So why don’t I
just let you two continue this dialogue and if the product is interesting...we will publish it.....but keep the
whole thing to 5-6k or so.......“Searle Meets Dennett and vice-versa” could be great.

Michael S. Gazzaniga, Ph.D.
Program in Cognitive Neuroscience
6162 Silsby Hall
Dartmouth College
Hanover, New Hampshire 03755-3547
TEL (603) 646 1182
FAX (603) 646 1181

Subject: one last reply

Date: Wed, 21 May 1997 00:08:40 -0400 (EDT)
From: Daniel Dennett <ddennett@diamond.tufts.edu>
To: “Michael S. Gazzaniga” <michael.s.gazzaniga@Dartmouth.EDU>, John Searle
<searle@cogsci.berkeley.edu>

I may owe John Searle an apology, on two counts. First, as for the competition between my memory and
Searle’s, I’ll let him have it his way. It is indeed possible that we friends of AI thought we heard what we
wanted to hear. This is just as possible as that he has suppressed a memory for an unfortunate slip of
the tongue. Unless somebody comes up with a tape recording, his word should stand. I do regret that
this truly trivial war of memories should continue to deflect him from consideration of the deeper point.
Second, all these years I harbored the suspicion that he was deliberately ignoring the best arguments
against his position, but now he says “Huh? I never saw any I thought were worth answering,” and as
incredible as this might have seemed to me before today, he does give several striking demonstrations
of his inability to see arguments in this very response, so perhaps he does suffer from a sort of argument
agnosia. This is handy, since your readers will not have to consult other publications to see some of the
evidence for themselves. Consider:
(1) In my previous reply, I said: “As I said in my 1980 commentary, ‘He can’t really view
intentionality as a marvelous mental fluid. . .’” and went on to point out that on the occasion of
his 1980 reply, Searle missed this point, and issued a thundering denial of the point I had myself
denied. Quoth I, just a few days ago: “No kidding--I’d already made that point myself.” Got that?
I agreed that Searle couldn’t mean it, and noted that I had noted this from the outset. What is
Searle’s response to this? He CONTINUES to miss this point (see his nicely numbered points 1-2,
two more denials as unnecessary as they are thunderous.) THE POINT was, and is, and continues
to be, that Searle’s calling consciousness and intentionality biological--LIKE lactation, as he says--
counts for nothing until he spells out just what it is about biology that matters. Before the
Wright brothers, only biological things flew. Flying was a “biological phenomenon . . . on all
fours with growth, digestion, ... lactation.” Now it isn’t. The Wright brothers were not wrong to
ignore the biochemistry of flight; Searle owes us an argument about why AI is wrong to ignore
the biochemistry of thought. Or if it is an unsettled empirical question, as he now says, then
apparently we don’t know yet--he hasn’t shown us yet--that strong AI was wrong to ignore it. In
which case, as I said before, our error was in supposing that he was trying to advance something
nonvacuous. I could have sworn he thought he was offering a biology-based argument against
strong AI, but I am willing to stand corrected.
(2) But when he says it is an empirical question, he is also blind (apparently) to the arguments I and
others have presented for more than a decade, showing that he has cut himself off from all
avenues for answering such questions empirically by his insistence on the first person point of
view. Note that I mentioned this in my previous message, but Searle ignored it. Here’s the bare-
bones argument, one more once (but Searle really should respond to the detailed versions):
How are empirical scientists going to tell if Martians are conscious? Not by asking them, if we
live by Searle’s rules, or putting them in a version of the Turing test. Only they can know, it
seems, and they can’t tell us. (Robots can’t tell us, after all--why should Martians be any more
credible?) It is not just that he hasn’t yet given us any clue about what might count, empirically,
for or against robot or Martian consciousness; it is that he has already systematically eliminated
the only sorts of things that could count.
(3) He now says “the program considered as an abstract syntactical process has no causal powers to
cause consciousness or anything else.” This sentence is either manifestly false--for reasons I
have gone into at length--or irrelevant (the program on the shelf claim--just one of several
overlooked or misunderstood arguments). When Searle says “All the causal powers are in the
implementing medium” I wonder what he means. But I have wondered this before, and gone to
some lengths to find a suitable reading. He has not discussed my efforts, either to accept or
reject them.
(4) Doug Hofstadter and I weren’t claiming that Searle “stupidly thought” a program that could pass
the Turing test could be written on a few bits/slips of paper. We were charging something more
serious, which perhaps explains why he has resisted understanding what we said: namely, that
he deliberately MISLED his readers by speaking of the program as existing on “bits of paper,”
since it discouraged them from imagining the case correctly. Quoting from THE MIND’S I: “We
think Searle has committed a serious and fundamental misrepresentation by giving the
impression that it makes any sense that a human being could do this.” (p.373) Searle’s sentence,
in the original, reads: “The idea is that while a person might not understand Chinese, somehow
the conjunction of that person and bits of paper might understand Chinese. It is not easy for me
to imagine how someone who was not in the grip of an ideology would find the idea at all
plausible.” “It is easier to imagine,” we find, “if you imagine the case right.” That was Hofstadter’s
charge, and mine, too. Actually, we hedged our charge ever so slightly. Hofstadter went on:
“The illusion that Searle hopes to induce in his readers (naturally he doesn’t think of it as an
illusion!) depends on his managing to make readers overlook a tremendous difference in
complexity between two systems at different conceptual levels.” But now Searle insists that he
always understood this point (as I always suspected), so the charge stands: he tried to mislead
his readers. There is no doubt that he succeeded; over the years I have found many readers of
Searle’s essay who are under the impression that the program under discussion is some simple
conversion table taking Chinese symbols to Chinese symbols via a few (a few) rules.

Dan Dennett

Subject: reply
Date: Tue, 27 May 1997 14:55:31 MET
From: “John Searle” <filjs@hum.aau.dk>
To: michael.s.gazzaniga@dartmouth.edu
CC: ddennett@diamond.tufts.edu, hudin@cogsci.berkeley.edu

Dear Mike,

Just back from Oxford. Here is my reply to Dan. Any theory in this domain has to explain two facts. First,
consciousness is a real phenomenon. There really are inner qualitative, ontologically subjective states
such as pains, emotions, etc. Ditto for intentionality. There really are thoughts, perceptions, desires,
beliefs etc. and these are intrinsic and not just observer relative or as-if. Now, Dan denies both these
facts and that is the source of most of his errors. So, let us now turn to his criticisms. To begin, he
misunderstands the argument against Strong AI. He says, “I could have sworn that he thought he was
offering a biology-based argument against Strong AI.” No, the point about biology comes AFTER the
argument against strong AI. Here in summary form is the overall structure of the argument.
1. The refutation of Strong AI: I don’t understand Chinese and my just implementing a program,
any program, would not by itself be sufficient to guarantee that I would thereby understand
Chinese. What goes for me goes for any computer and what goes for Chinese goes for any
semantically loaded cognitive capacity. So implemented programs are not by themselves
enough to guarantee cognition. So Strong AI is false. There is a lot more to be said but that is the
bare bones. The argument has some other interesting consequences: The Turing test is refuted,
and behavior is shown to be irrelevant to the ontology of consciousness. (It is still relevant to the
epistemology but that is a separate point that I will come to.) So much for Strong AI. It has now
been refuted and I have not seen any serious new attempts to answer this argument that I did
not answer in 1980. That is what I mean when I say I haven’t seen any arguments worth
answering. Perhaps Dan knows of some but he has not presented them. He simply repeats old
arguments based on his behaviorist assumptions.
2. Ok now that we have refuted Strong AI, what is the interest of that? What is it that the
implemented program lacks in Chinese that I have in, for example, English? The difference
between passing the Turing Test for Chinese by following the steps in the program and passing
the Turing Test for English as a native English speaker is that in English I have more than the
implemented syntax of the program. I have a semantics or mental content. I know in English
that I am being asked a question, I understand what the question is, and on the basis of this
understanding I am able to answer appropriately. So the significance of the argument is that the
syntactical processes of the implemented program are not by themselves sufficient to guarantee
the mental contents of actual brains. In a word, Strong AI confuses the map with the territory.
Simulation is not duplication, and syntax is not sufficient for semantics.
3. Well why not? The brain is a machine and the computer is a machine so why can’t we do with
the one what we do with the other? What is wrong with conscious silicon, for example? As far as
LOGICAL possibility is concerned there is no reason why you could not make a conscious
machine out of silicon. I think this is preposterous on EMPIRICAL biological grounds as I have
said. That is an empirical hypothesis on my part and is subject to confirmation or refutation. But
this is irrelevant to Strong AI, which is not about the chemistry of silicon or any other material,
but about computation as defined by Turing, et al. Computation has no essential connection
whatever with silicon or any other material. Silicon is just one of many media that we can use to
implement the formal or abstract syntax of the computational algorithm. To repeat, the
argument about Strong AI is not about the CAUSAL powers of silicon but about the LOGICAL
properties of syntactical processes. And logically speaking, those processes are not sufficient to
guarantee the presence of mental contents. To repeat, syntax is not sufficient for semantics in
the sense of intrinsic mental contents.
4. Now here is where the biology comes in. We know that brains do it causally. Brain processes are
causally sufficient for consciousness and all the rest. As far as we know, the way it works is this.
Micro level processes (probably at the level of neurons and synapses) cause macro level
phenomena such as qualitative subjective states of consciousness and intrinsic intentionality. In
addition to the neuron firings and the external behavior there are conscious states caused by
the former which cause the latter. Dennett is forced to deny these facts. The fact that brains do
it causally has some logical consequences: It follows trivially that any other system that caused
consciousness, etc. would have to have causal powers equal to brains to do it. But we already
know that the implemented program by itself has no such causal powers. I often use the analogy
with flight, as Dan does, so let’s spell it out:
a. The computer simulation of flight is not flight any more than the computer simulation of
thought is thought.
b. From the fact that birds do it causally it follows trivially that any other system would
have to have equivalent threshold causal powers to do it. Thus airplanes do not have to
have feathers in order to fly but they do have to share with birds the causal capacity to
overcome the force of gravity in the earth’s atmosphere. They have to equal the bird’s
threshold capacity to do it, and a computer program is not enough to guarantee those
powers. Analogously, a conscious artificial machine might not have to have neurons but
it must share with biological brains the causal capacity to cause consciousness. An
implemented program by itself is not enough to guarantee those powers, because the
program consists of formal, syntactical processes, as we saw. The parallel with strong AI
is exact. Do I have to spell it out any more?

Now these four are the important points and I have not seen an answer to any of them. So let’s look at
the arguments Dan thinks are so terrific.
1. Behaviorism. Dan asks how empirical scientists would know that other systems, e.g. Martians,
were conscious if they could only use “the first person point of view”? But Dan has
misunderstood me once again. As far as methodology is concerned you can use any point of
view you like. In this case you can use third person epistemology to get at a first person
ontology. I describe in my books and articles, especially the Rediscovery, the strategies we
would, and do, use to identify consciousness in others. Consciousness in others is just one of
many unobservable but real phenomena. Consider quarks, electrons and black holes. The
important thing about consciousness is that its ontology is a first person ontology. That is just a
fancy way of saying that, e.g., conscious pains have to be FELT by somebody in order to exist.
Like all verificationists Dan confuses epistemology with ontology. This leads him to deny the
existence of first person mental phenomena and that denial constitutes a reductio ad absurdum
of his position.
2. “Programs are syntactical”. Dan thinks this means only the program on the shelf. But he is
mistaken. To put the point slightly more technically, the notion “same implemented program”
defines an equivalence class specified purely formally or syntactically, and independently of the
specific causal powers of the implementing medium. Here I am just repeating standard textbook
definitions. Any given implementation will have of course some specific physics, but that is
irrelevant to the identity of the program. Any physics will do if it is stable enough and rich
enough to carry out the steps in the formal program. (This by the way is not a fault of
computers. It is why they are so powerful.)
3. Dan thinks I am neglecting, indeed trying to conceal, the fact that programs are--or can be--very
complicated. He thinks this is a devastating argument and that I have been trying to mislead
people about it. That charge, to put it charitably, is ridiculous. Again, he can’t cite any passage
where I am trying to mislead anybody about this. In fact as far as the impossibility of actually
carrying out the thought experiment is concerned I have even compared the Chinese Room to
Einstein’s clock paradox. He can’t be unaware of this because it was in the course of a refutation
of his views. The point I make over and over is that complexity is irrelevant to the structure of
the argument. However complicated the program, its formal structure as a sequence of symbol
manipulations is insufficient to guarantee the intrinsic mental content of human thought.
Complexity is irrelevant to the fundamental distinction between syntax and semantics. The
conjunction of a person and a whole mountain of pieces of paper with Chinese symbols does not
guarantee the understanding of Chinese any more than does a small pile.
4. Dan complains that I don’t tell the details of how brains cause consciousness. I don’t know and
neither does anybody else. The person who figures it out will get the Nobel Prize. But all the
same we know that brains do it. Well, Mike, I have to say if this is the best that the defenders of
Strong AI can do then they are in very poor shape. Why does Dan think these arguments are so
terrific when I think they are just pathetic? The answer is that his behaviorism and
verificationism commit him to denying the existence of consciousness and intrinsic intentionality
from the start. By the way there is nothing in Dan’s letters that I have not answered before.
Dan’s strategy is always to say there is some really terrific argument that I have not answered
but when you call his bluff and ask him to specify the argument, it is the same old stuff--behaviorism,
complexity, etc. I have seen nothing new that I have not already answered many
years ago.

Best Wishes,

John Searle
