Papers by Paul Austin Murphy
Introduction
Lee Smolin: Doing Physics in a Quantum Box
i) Initial Conditions
ii) Isolated and Open Boxes
iii) Truncations of Nature
Nancy Cartwright: Doing Physics in a Classical Box
i) Introduction
ii) A Ball on an Inclined Plane
iii) Pure Observations?
iv) Manipulating Nature
v) Conclusion
Final Thought: Quantum Boxes aren't Classical Boxes
In Lee Smolin's book, *Time Reborn: From the Crisis in Physics to the Future of the Universe* (2013), there is a chapter called 'Doing Physics in a Box'. That chapter is the basis of this piece.
Doing physics in a box has a long history. According to the theoretical physicist Lee Smolin, it goes back to Kepler, Galileo, Descartes and Newton in the 16th and 17th centuries. These scientists “learned to [] isolate little pieces of the world, examine them, and record the changes to them”.
Ironically, some of the thought-experimental “boxes” of physics are *literally* boxes. For example, we have Erwin Schrödinger's famous cat-in-a-box thought experiment and Albert Einstein's “light box” experiment. Interestingly enough, both experiments show the impact of outside forces on the boxes (something which Smolin will stress later). In the cat-in-the-box case, outside disturbances can (or do) “collapse the wave function”.
Later in this piece, the philosopher Nancy Cartwright specifically confronts Galileo's “little piece[] of the world” - a ball on an inclined plane (i.e., a ramp). However, Smolin himself concentrates on quantum pieces of the world.
The words “quantum box” are used because Smolin focuses on such a thing. He says that the “application of quantum mechanics appears to be limited to isolated systems”. Yet “[i]t's [still] an extension of the Newtonian paradigm – of doing physics in a box”. Nonetheless, he mentions classical (or macro) boxes too.
Finally, Smolin has two important things to say about the experiments of physics:
i) That what he calls “subsystems” (or “isolated systems”) are inevitably part of the entire universe – therefore they're also affected by the whole universe.
ii) That experiments are extremely artificial constructs.
... With John Gribbin
i) Introduction
ii) What is Interpretation?
iii) Shut up and calculate!
iv) Limits to John Gribbin's Pluralism
v) Only Maths?
vi) Waves and Particles
vii) Conclusion
viii) Afterthought
This piece doesn’t claim to offer a conclusive case for the elimination of all the interpretations of quantum mechanics (QM). It simply raises the possibility of elimination and then offers a few arguments in that direction.
The prime motive here is that, at least at present, there’s no way of establishing which interpretation is the true/correct/etc. one. Secondly, the multiplicity of interpretations both confuses the issue and leads to scepticism towards many of them. Thirdly, some interpretations are so convoluted and wacky that laypersons and even physicists themselves must have only aesthetic reasons to believe in them.
This is particularly true of the Many Worlds Interpretation. I can see no reason whatsoever for the layperson to accept it other than it can be taken to “explain the phenomena” (a phrase which is being used in contrast to Bas van Fraassen’s “save the phenomena”). That is, in the way that panpsychism, idealism, Marxism, theism, etc. can all be taken to explain the phenomena. However, explaining phenomena is often very cheap and easy, even if it is neat and tidy. (Note Albert Einstein’s rejection of David Bohm’s theory.)
The first section introduces John Gribbin's position, as well as a potted history of positions (featuring Paul Dirac, Richard Feynman, etc.) which can be taken to lead towards a possible eliminativism. This section also deals with both the shut-up-and-calculate mantra and instrumentalism, and how they can be taken to lead to eliminativism. Then the simple question “What is Interpretation?” is asked. After that, Gribbin's pluralism is tackled and found to be not very convincing (at least not from a philosophical point of view). The section “Only Maths?” is self-explanatory. Finally, the issue of waves-or-particles is discussed in the context of the elimination of interpretations of quantum mechanics.
i) Introduction
ii) Why Model-dependent Realism?
iii) The Brain's Models
iv) True Reality and Models
v) Pluralism
vi) Hawking's Constructive Empiricism?
vii) The Aesthetics of Models
viii) Conclusion
The words "model-dependent realism" (MDR) were first used in Stephen Hawking and Leonard Mlodinow's book *The Grand Design*, which was published in 2010. Before that, Hawking had of course already talked about the importance of models in physics. (As a physicist, it would be hard not to stress their importance.)
It's very odd that Stephen Hawking should have said that “philosophy is dead” on more than one occasion. What did he think his model-dependent realism was? Above and beyond that, it's difficult to decipher why Hawking chose the term “realism” in the first place.
As for the philosophical position of model-dependent realism itself, it has a strong pragmatic - rather than a strictly empiricist - appeal. And, precisely because of that, metaphysical realists and scientific realists will have serious problems with Hawking's philosophical position.
One question which needs to be asked here is the following:
*Why did Stephen Hawking use the word “realism” to characterise his position?*
At an intuitive level, MDR appears to be an anti-realist position, rather than a realist one. After all, if one is stressing models, theories and “mental concepts”, then isn't that also to stress some kind of anti-realist position? Perhaps the wording doesn't matter.
As already hinted at, Hawking didn't have much time for philosophy and on more than one occasion he said that it was “dead”. Yet it's very odd that he didn't realise he was doing philosophy when he wrote these parts of *The Grand Design*. What's more, Hawking made the following incredible claim:
“Model-dependent realism short-circuits all this argument and discussion between the realist and anti-realist schools of thought.”
So perhaps it wasn't philosophy simpliciter that Hawking was against; but only philosophy “which has not kept up with modern developments in science”. (Many philosophers themselves have said the same about their fellow philosophers.) Thus one must now assume that model-dependent realists have kept up with modern science. However, even philosophy which has kept up with science still remains philosophy. And that's also true of Hawking's own model-dependent realism.
i) The Pleasing Theory of Panpsychism
ii) Geocentrism?
iii) Unobservable Consciousness
iv) Absence of Evidence
v) Are Panpsychists Closet Theists?
Philosopher Philip Goff and science writer/journalist John Horgan disagree about panpsychism.
In a blog post ('The New Copernican Revolution: A Response to John Horgan'), Philip Goff lays out the philosophical problems he has with John Horgan's stance on panpsychism. It mainly concerns what Horgan sees as panpsychism's “geocentrism”; though it does touch on the nature of evidence (i.e., when it comes to both metaphysical and scientific theories).
Firstly, John Horgan accuses panpsychists of “neo-geocentrism”. And then Philip Goff accuses John Horgan of geocentrism. According to both Goff and Horgan, geocentrism is
“the attempt to drag us back to the pre-Copernican view that reality revolves around us human beings”.
I note the distinction between the following:
i) Seeing consciousness as being “fundamental” to our own conceptions of the universe/reality.
and
ii) Seeing consciousness as being fundamental to the universe/reality simpliciter.
Goff also goes into greater detail about what he thinks “non-panpsychists” believe. He writes:
“For non-panpsychists, consciousness – the source of all that is of value in existence – is to be found on the planet alone, and only in its very recent history. In the immensity of the cosmos, we are uniquely special and privileged.”
Goff then puts the panpsychist position:
“Panpsychists, in contrast, propose a new Copernican revolution, according to which there’s nothing special about human consciousness...”
Even if someone accepts that phenomenal properties exist all the way down (i.e., to rocks and atoms), it may still be the case that human consciousness is indeed special. However, the word “special” is loaded because everything in the universe – from a type of rock to an ant - is special and unique in some (or even many) ways.
Michio Kaku is an American theoretical physicist and populariser of science. He is a professor of theoretical physics at the City College of New York and the CUNY Graduate Center.
*********************************
Much has been made of string theory being “unscientific”, lacking evidence and not offering (unique) predictions. This piece lays part of the blame on the Pythagorean nature of string theory (in which the mathematics always has a supreme position).
The first section tackles what's called “old-style physics” - that is, physics which places an importance on observation, experiment and prediction. Then there's a section in which Michio Kaku ties his own position to the science of Galileo and then Einstein. In both cases, it can be seen that they aren't great exemplars for string theory.
The central section deals with Pythagoreanism and how it ties in with string theory. The next section deals with the aesthetics of string theory and argues that aesthetic appeal can't be – and mustn't be - the end of the story in physics.
Finally, there's a section which cites a specific example of how string theory mathematics led to string theory physics and cosmology: i.e., the case of string solutions and the multiverse.
i) Introduction
ii) Panpsychism
iii) Substance or Intrinsic Essence?
iv) Relationalism
Lee Smolin is a theoretical physicist with many philosophical interests and inclinations. This interplay between science and philosophy is played out in Smolin's writings.
So let Smolin lay his cards on the table. He writes:
"[T]here are questions that science cannot answer now but that are so clearly meaningful that sometime in the future, it is hoped, science will evolve language, concepts, and experimental techniques to address them."
The question is:
Does Smolin believe that "science cannot address" what he calls "intrinsic essence" - and thereby consciousness and qualia?
It would seem that Smolin does believe that. One may speak for Smolin here and say that although science will progress in the future, the "hard problem" (to use David Chalmers' term) will remain beyond it. So this isn't even an issue of insolubilia versus incompletability (i.e., the position which states that even though science's problems are soluble in principle, science will never be complete). No; this is a position that embraces insolubility regardless of completability.
As it is, current science probably can't answer "what qualia are like" or "why we perceive them". Of course one trick here is to deny the existence of qualia and consciousness outright (as some philosophers and scientists have done). Alternatively, one can explain consciousness (though not qualia) in terms that are amenable to third-person (scientific) evaluation and tests (as Daniel Dennett does).
As for panpsychism.
It's quite a surprise that Smolin never mentions panpsychism. He doesn't even engage with the possibility that the intrinsic essence of the mind might be tied to the intrinsic essence of inanimate objects. That is, Smolin says that we have an "internal aspect", just as there is an "intrinsic essence" of rocks or atoms. And we also have consciousness; which he tells us "is an aspect of the intrinsic essence of brains". As just stated, Smolin doesn't tie the intrinsic essence of the brain to the intrinsic essence of a rock or atom - apart from saying that they all have intrinsic essences. The panpsychist, of course, goes one step further than Smolin. The intrinsic essence of the brain is one and the same thing as the intrinsic essence of a rock (often talked about in terms of “phenomenal properties”). That means, of course, that the rock is conscious (or has phenomenal properties) too - if to a markedly lesser degree than the human brain.
One scientist/academic has a theory that people will countenance - or even commit - cruelty if it's approved by some kind of authority figure. (From an example given by William Poundstone.) That theory intuitively has a lot of plausibility. However, that's not the point of the paradox.
The said scientist/academic carries out an experiment on ten people (i.e., subjects). The experiment is to see if the ten subjects will press a button which will deliver an electric shock if ordered to do so by an authority figure (given a suitably rational or "scientific" reason to do so). However, unbeknownst to him, it's the scientist who's the real subject of the experiment. That is, his ten subjects aren't being tested - he is! In fact the ten subjects know what's going on and are given fake electrical shocks. The scientist, on the other hand, doesn't know what's going on. He only knows about his own experiment; not the experiment upon him.
Again, this paradox isn't about the nature of meta-tests, or even about why - or if - people really do commit acts of cruelty when told to do so by authority figures. This paradox is actually about the "fudge factor" or "experimenter bias effect". In other words, the scientist is testing his subjects, and the meta-testers are testing the scientist who's testing those subjects.
The phrases "fudge factor" and "experimenter bias effect" actually refer to the fact that when a researcher or scientist expects to "discover" or find a certain result, he's very likely to get that result. Needless to say, when the research or science involves anything which is in any way political in nature, this is even more likely to be the case.
The American computer scientist Christopher Gale Langton was born in 1948. He was a founder of the field of artificial life, and coined the term “artificial life” in the late 1980s. Langton joined the Santa Fe Institute in its early days and left the institute in the late 1990s. Langton then gave up his work on artificial life and stopped publishing his research.
*************************************
When it came to Artificial Life (AL), Christopher G. Langton didn't hold back. In the following passage he puts the exciting case for AL:
“It's going to be hard for people to accept the idea that machines can be alive as people, and that there's nothing special about our life that's not achievable by any other kind of stuff out there, if that stuff is put together in the right way. It's going to be as hard for people to accept that as it was for Galileo's contemporaries to accept the fact that Earth was not at the center of the universe.”
The important and relevant part of the passage above is:
“[T]here's nothing special about our life that's not achievable by any other kind of stuff out there...”
Although the above isn't a definition of functionalism, it nonetheless has obvious and important functionalist implications. So when it comes to both Artificial Life and Artificial Intelligence, the computer scientist Christopher Langton seems to have explicitly stated that biology doesn't matter. Yes; it of course matters to biological life and biological intelligence; though not to life and intelligence generically interpreted.
The biologist and cognitive scientist Francisco Varela put the opposite position (as it were) to Langton's when he told us that he “disagree[s] with [Langton's] reading of artificial life as being functionalist”. Varela continues:
“By this I refer to his idea that the pattern is the thing. In contrast, there's the kind of biology in which there's an irreducible side to the situatedness of the organism and its history...”
We have specific biologies. Those specific biologies are situated in specific environments. And then we must also consider the specific histories of those specific biological organisms. So, if “early AI” was “all about” functions and nothing else, then that was surely to leave a lot out. (From a philosophical angle, we must also include externalist arguments, as well as embodiment and embeddedness – i.e., not only Varela's “situatedness”.)
Barry Stroud makes what appears to be a metaphysically-realist point against W.V.O. Quine’s naturalist position. He makes the simple realist distinction between
1) Beliefs which are “constructions or projections from [sensory] stimulations” (Quine’s position).
and
2) Beliefs about the world.
The question is: How do we get from 1) to 2)? Or:
How do we get from sensory stimulations - which are (indeed) caused by the world - to truths about the world (or representations which are true about the world)?
This is the ancient sceptical question. Indeed this was *the* question of epistemology (at least since Descartes).
Another way of putting this is in terms of what we *know*. On Quine’s account, all we know about are sensory stimulations and what we assert in response to them. We don't know anything about the world itself.
So are beliefs and assertions about our own and other people’s beliefs and assertions not directly (or even indirectly) about the world? These beliefs and assertions (or Quine’s ‘projections’)
“could not be seen as a source of independent information about the world against which their own truth or the truth of the earlier beliefs could be checked”.
This is as strong a statement of the possibility of scepticism in epistemology as you could hear from a sceptic himself.
The following piece doesn't tackle David Chalmers' well-discussed and well-known Hard Problem. That is, it doesn't attempt to find an answer to the question:
Why does the physical brain give rise to consciousness?
Instead, it asks us why we human beings - and other animals - needed consciousness in the first place. Thus we have this question:
From an evolutionary perspective, if the functions of the brain (which Chalmers often refers to) could have occurred “in the dark”, then why did we need (if we did need) - and why do we still need or have - consciousness?
The nature of the physical-consciousness link is only tangential to this issue.
The following is a critical account of the 'The Chinese Room' chapter in Daniel Dennett's book, *Intuition Pumps and Other Tools For Thinking*.
In this chapter Daniel Dennett doesn't really offer many of his own arguments against John Searle's position. What he does offer are a lot of blatant ad hominems and simple endorsements of other people's (i.e., Strong AI aficionados) positions on the Chinese Room argument. Indeed Dennett is explicit about his support (even if it's somewhat qualified) of the “Systems Reply”.
We can happily accept that Searle's thought experiment doesn't entirely (or even at all) succeed in what it claims to accomplish. However, Dennett's claims (or those he endorses) don't demonstrate the *possibility* of Strong AI either. In addition, it can also be said that Searle himself never claimed the “flat-out impossibility of Strong AI” in the first place.
i) Introduction
ii) Analysis and Argumentation
iii) Clarity
iv) Obscurity
v) Deconstruction
vi) Conclusion
Many analytic philosophers stress the point that analytic philosophy isn't about the sharing of views or positions: it's about the sharing of philosophical tools and a basic commitment to clarity. All this is regardless of what position a particular analytic philosopher may advance.
In his book, *What is Analytic Philosophy?*, Hans-Johann Glock elaborates on this position in the following:
“Philosophy is not about sharing doctrines, but about a rational and civilised debate even about one's own cherished assumptions.”
Of course it's also the case that many analytic philosophers do actually “share doctrines”. However, it's just that the sharing of philosophical tools and practices is deemed to be more important than sharing doctrines. It also follows from the sharing of philosophical tools and a commitment to clarity that there can be “rational and civilised debates even about one's cherished assumptions”. That is, the sharing of philosophical tools and a commitment to clarity enables (or allows) rational and civilised debate.
Of course certain questions arise here:
i) Do analytic philosophers really share many - or indeed any - philosophical tools?
ii) Is there genuine civilised debate between all analytic philosophers at all times?
Many philosophers have of course questioned this assumption that analytic philosophers share tools. Others may question the depth or genuineness of the “civilised debate” too. This basically means that there will be exceptions to i) and ii) above, and no one should expect otherwise. However, on the whole, it's easy to see that most analytic philosophers do indeed share many tools and practices.
As for civilised debate.
This runs parallel to an account of science as a whole which can be distinguished from any accounts of individual scientists. That is, individual scientists can be very unrepresentative individuals: they can falsify experimental data, stick dogmatically to their theories, be paid by big business, let their politics influence their science, etc. Nonetheless, unrepresentative scientists certainly aren't the norm in science. (All this will depend partly on which science we're talking about, etc.)
The same kinds of distinction can be made between individual analytic philosophers and analytic philosophy itself. There may indeed be unrepresentative analytic philosophers. It may even be the case that poor standards (however that's defined) are sometimes displayed within books or even papers. However, as with science, none of this is really true of analytic philosophy as a whole.
i) Introduction
ii) Implementation
iii) Causal Structure
iv) The Computer's Innards
v) Chalmers' Biocentrism?
vi) The Chinese Room
It's a good thing that the abstract and the concrete (or abstract objects in "mathematical space" and the "real world") are brought together in David Chalmers' account of Strong AI. Often it's almost (or literally) as if AI theorists believe that (as it were) disembodied computations can themselves bring about mind or even consciousness. (The same can be said, though less strongly, about functions or functionalism.) This, as John Searle once stated, is a kind of contemporary dualism in which abstract objects (computations/algorithms – the contemporary version of Descartes' mind-substance) bring about mind and consciousness on their own.
To capture the essence of what Chalmers is attempting to do we can quote his own words when he says that it's all about “relat[ing] the abstract and concrete domains”. And he gives a very concrete example of this.
Take a recipe for a meal. To Chalmers, this recipe is a “syntactic object[]”. However, the meal itself (as well as the cooking process) is an “implementation” that occurs in the “real world”.
So, with regard to Chalmers' own examples, we need to tie "Turing machines, Pascal programs, finite-state automata", etc. to "[c]ognitive systems in the real world" which are
"concrete objects, physically embodied and interacting causally with other objects in the physical world".
In the above passage, we also have what may be called an "externalist" as well as "embodiment" argument against AI abstractionism. That is, the creation of mind and consciousness is not only about the computer/robot itself: Chalmers' "physical world" is undoubtedly part of the picture too.
Of course most adherents of Strong AI would never deny that such abstract objects need to be implemented in the "physical world". It's just that the manner of that implementation (as well as the nature of the physical material which does that job) seems to be seen as almost – or literally – irrelevant.
The very idea of nothing (or nothingness) is hard - or even impossible - to conceive or imagine. This means that (at least for myself) it fails David Chalmers' idea of *conceivability*.
David Chalmers (the well-known Australian philosopher) claims that if something is conceivable, then it's also - metaphysically - possible. The problem with this is that we can distinguish conceivability from imaginability. That is, even if we can't construct mental images of nothing (or nothingness), we may still be able to conceive of nothing (or nothingness). I, for one, can't even conceive of nothing (or nothingness).
But can other people conceive of nothing? Do we even have intuitions about nothing or about the notion of nothingness?
So how can we even name or refer to nothing? (We shall see that Parmenides might have had something here.) There's nothing to hold onto. Yet, psychologically speaking, thoughts about nothing can fill people with dread. There's something psychologically (or emotionally) both propelling and appalling about it. And that's why existentialists and other philosophers – with their taste for the dramatic and poetic - found the subject of nothing (or at least nothingness) such a rich philosophical ground to mine.
The very idea of nothing also seems bizarre. It arises at the very beginning of philosophy and religion. After all, how did God create the world "out of nothing"? Did God Himself come from nothing? Indeed what is nothing (or nothingness)?
i) Introduction
ii) Ten Logical Possibilities Before Breakfast
iii) René Descartes
iv) Logical Possibilities as Tools
There isn't much technical argumentation in the first part of this piece about David Chalmers' “arguments from logical possibility”. Indeed, I suspect that many of his logical possibilities are indeed logical possibilities. I also suspect that many of his arguments are airtight.
In *The Conscious Mind*, Chalmers introduces a logical possibility on almost every page. He talks about “God”, “an angel world”, “flying telephones”, “ectoplasm” and a monkey who writes *Hamlet*. Admittedly, most of these logical possibilities are of little interest to Chalmers, either because they're hugely improbable or because they don't help him philosophically. It is zombies which he specialises in - and it is zombies which help him philosophically.
Indeed isn't the position of panpsychism (which Chalmers also endorses) itself very reliant on logical possibility? (For example, it's *logically possible* that an electron or stone has “phenomenal properties”.)
Not only is Chalmers' keenness on logical possibilities a philosophical position, logical possibilities also advance one of his positions within philosophy. Chalmers and other “anti-materialists” require logical possibilities in order to advance anti-materialism. In other words, their arguments only work if one countenances all sorts of strange logical possibilities.
So when a philosopher cites various logical possibilities, all sorts of philosophical positions become available... or *possible*. That's primarily because once a given philosopher sets a particular logical possibility among the pigeons, then it's surely the duty of other philosophers to tackle that logical possibility. But if they don't, then surely they can be classed as philosophical philistines... or “verificationists”. Indeed Chalmers lays his cards on the table when he states:
“In general, a certain burden of proof lies on those who claim that a given description is logically *impossible*.”
This means that there's a lot of weight attached to logical possibilities. In other words, logical possibilities (as well as conceivabilities) serve a larger philosophical purpose. This means that citing logical possibilities isn't just another tool – it is *the* tool to fight materialism (perhaps much more too).
Contents:
v. Logical Possibility and Natural Possibility
vi. Zombies
vii. Saul Kripke
viii. Chalmers & Goff: Conceivability to Possibility
ix. Two More Conceivings
One may think that only natural (or empirical) possibility is of interest to most people – both laypersons and experts. Indeed, David Chalmers (sort of) states that himself. Yet logical (i.e., not natural) possibility still permeates Chalmers' entire work. And, as we shall see, so too does conceivability, which he strongly ties to logical possibility. Indeed, Chalmers' references to natural possibility hardly make sense when taken out of the context of logical possibility. Thus both logical and natural possibility gain much of their purchase by being opposed to one another.
Chalmers himself sums up one major problem with logical possibility (this was touched upon in Part One) when he says that
“[t]here are a vast number of logically possible situations that are not naturally possible”.
That means that there must be mightily good philosophical (or scientific) reasons to spend time on a given logical possibility. (It's easy to believe that there are good reasons when, for example, consciousness is being discussed.) That is, surely it must pay philosophical dividends to do so. Having said that, there's a very large number of uninstantiated natural possibilities too.
So what, philosophically, can we draw out of natural (“although wildly improbable”) possibilities? Well, we can draw one thing out: that they're *naturally possible*. And that's enough for some philosophers. But what else? Well, at a superficial level, it shows us that all sorts of bizarre things are naturally possible. So if it's naturally possible that a monkey could type *Hamlet*, then it's also naturally possible that an ant could take over the world. Why? Because all it takes for something to be naturally possible is that it “conforms to the laws of our world”. So, as far as I can see, a Nietzschean super-ant does appear to conform to the laws of our world. That is, this ant and its actions don't “violate[] the laws of nature of our world”.
Many "cognitivists" believe that the brain is a computer. (Sometimes they say “a kind of computer”.) Thus, as a result of this belief, they attempt to discover the computational processes which enable such things as perception and learning. However, expressed in that manner (as it often is), things are a little unclear.
Such cognitivists believe that the brain is also a machine – a computing machine.
Computationalists (computationalism is a branch of cognitivism) claim that all thought is computation. But what does that mean? Are the words 'thought' and 'computation' virtual (or literal) synonyms?
This is more clearly the case because it seems that almost all conscious processes in the brain are deemed to be thoughts, and thus also deemed to be computations. That not only includes the thought that 1 plus 1 equals 2 or that snow is white, but also the rotation of a mental image in the mind, imagining the smell of a rose, and so on. Then again, if rotating a mental image is classed as a thought, then why can't it also be classed as a computation? Especially since, in computationalism, the two appear to be synonyms.
It all now depends on what we mean by the word 'computation'.
For a start, we can make the following claims:
i) All of a computer's processes are computations.
ii) Not all conscious human mental processes are computations.
Similarly, we can say:
iii) Many human mental processes are thoughts.
iv) No computer computations are thoughts (i.e. because thoughts have semantic content, intentionality, reference, etc.).
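The contrast drawn in i) to iv) can be illustrated with a deliberately mindless computation. The example below is my own, not one from the computationalist literature: it rotates a grid of symbols by 90 degrees, which is exactly the kind of process computationalists compare to rotating a mental image. Yet it is pure syntax; nothing in it has semantic content, intentionality or reference.

```python
# Rotate a small "image" (a grid of characters) 90 degrees clockwise.
# This is a computation in the textbook sense: a rule-governed
# manipulation of symbols. Nothing in it is "about" anything -
# the grid could encode an arrow, a letter, or nothing at all.

def rotate_clockwise(grid):
    # zip(*grid[::-1]) pairs up the columns of the reversed rows,
    # which is the standard 90-degree clockwise rotation of a matrix.
    return ["".join(row) for row in zip(*grid[::-1])]

arrow = [
    "..#..",
    ".###.",
    "..#..",
    "..#..",
]

for line in rotate_clockwise(arrow):
    print(line)
```

If a computation just is this sort of symbol-shuffling, then claim iv) above has real bite: the rotation happens, but nothing in the process thinks about arrows.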
Dualists and the so-called "mysterians" aren't the only people who believe that Daniel Dennett is a "scientistic philosopher" – Dennett thinks that about himself! That is, Dennett refers to his own overriding philosophical position as "third-person absolutism".
So what does a third-person absolutist believe? According to David Chalmers, Dennett believes that “what is not externally verifiable cannot be real”. To be more explicit: there's a fundamental connection between any x being real (or existing) and whether or not we can “externally verify” that x. Thus, if we can't externally verify x, it doesn't exist. It's not real.
Dennett explains his third-person absolutism when he states the following:
"I wouldn't know what I was thinking about if I couldn't identify them by their functional differentia."
It's not surprising, then, that Dennett has asked David Chalmers
“to provide 'independent' evidence (presumably behavioral or functional evidence) for the 'postulation' of experience”.
As for David Chalmers: he classes himself as a “dualist”; or, more accurately, his position is one of "naturalistic dualism". Why “naturalist”? Because Chalmers believes that mental states and consciousness itself are caused by physical systems. So why “dualist”? Because Chalmers believes that mental states - or consciousness generally - are ontologically distinct and also irreducible to the physical.
Murray Gell-Mann died on the 24th of May, 2019.
In 1964, Gell-Mann postulated the existence of quarks. (The name was coined by Gell-Mann himself and is a reference to James Joyce's novel *Finnegans Wake*.) Quarks, antiquarks and gluons were seen to be the underlying elementary constituents of neutrons and protons (as well as of other hadrons). Gell-Mann was awarded the Nobel Prize in Physics in 1969 for his contributions and discoveries concerning the classification of elementary particles and their interactions.
More relevantly to this piece: in 1984, Gell-Mann was one of several co-founders of the Santa Fe Institute, a research institute in New Mexico whose job is to study complex systems and to advance the cause of interdisciplinary studies of complexity theory.
Gell-Mann wrote a popular science book about physics and complexity science, *The Quark and the Jaguar: Adventures in the Simple and the Complex*, in 1994. Many of the quotes in this piece come from that book.
**************************************************************************
The following words of Lee Smolin (an American theoretical physicist) sum up both Murray Gell-Mann's work and the man himself. (At least as they are relevant to this piece.) Firstly he explains Gell-Mann's work:
“Physics needs a new direction, and the direction should have something to do with the study of complex systems rather than with the kind of physics [Murray Gell-Mann] did most of his life.”
Then Smolin continues with a few words on Gell-Mann himself:
“The fact that after spending a life focused on studying the most elementary things in nature Murray can turn around and say that now what's important is the study of complex systems is a great inspiration, and also a great tribute to him.”
Of course all the above is hardly a philosophical or scientific account of the need to move from the “elementary” to the “complex”. However, it does hint at the importance of gaining a broader picture of nature (or the universe). And that's what both Smolin himself and Gell-Mann realised. (In Smolin's own case, he moved from theoretical physics to adding cosmology and philosophy to his repertoire.)
Despite that, surely it can't be said that “what's important is the study of complex systems”. That's simply to reverse the “reductionist hierarchy”. Complex systems are simply part of the picture: not the most important part of the picture. Indeed it seems a little naïve to reverse that previous ostensible hierarchy with a new one.
Murray Gell-Mann himself did appear to offer us a middle-way between (strong) reductionism and the complete autonomy of the individual (“special”) sciences.
Gell-Mann believed that it's all about what he called the "staircases” between the sciences. As Gell-Mann put it (in the specific case of the relation between the levels of psychology and biology):
“Where work does proceed on both biology and psychology and on building staircases from both ends, the emphasis at the biological end is on the brain (as well as the rest of the nervous system, the endocrine system, etc), while at the psychological end the emphasis is on the mind—that is, the phenomenological manifestations of what the brain and related organs are doing. Each staircase is a brain-mind bridge.”
Most of the above can be summed up in this way:
i) Simply because a scientist (or philosopher) says that x can be reduced to y (not necessarily without remainder),
ii) that certainly doesn't also mean that this scientist (or philosopher) also believes that x is (to use Patricia Churchland's words) “disreputable, unscientific or otherwise unsavoury”.
Finally, Smolin has two important things to say about the experiments of physics:
i) That what he calls “subsystems” (or “isolated systems”) are inevitably part of the entire universe – therefore they're also affected by the whole universe.
ii) That experiments are extremely artificial constructs.
i) Introduction
ii) What is Interpretation?
iii) Shut up and calculate!
iv) Limits to John Gribbin's Pluralism
v) Only Maths?
vi) Waves and Particles
vii) Conclusion
viii) Afterthought
This piece doesn’t claim to offer a conclusive case for the elimination of all the interpretations of quantum mechanics (QM). It simply raises the possibility of elimination and then offers a few arguments in that direction.
The prime motive here is that, at least at present, there’s no way of establishing which interpretation is the true/correct/etc. one. Secondly, the multiplicity of interpretations both confuses the issue and leads to scepticism towards many of them. Thirdly, some interpretations are so convoluted and wacky that laypersons and even physicists themselves must have only aesthetic reasons to believe in them.
This is particularly true of the Many Worlds Interpretation. I can see no reason whatsoever for the layperson to accept it other than that it can be taken to “explain the phenomena” (a phrase which is being used here in contrast to Bas van Fraassen’s “save the phenomena”). That is, in the way that panpsychism, idealism, Marxism, theism, etc. can all be taken to explain the phenomena. However, explaining phenomena is often very cheap and easy, even if it is neat and tidy. (Note Albert Einstein’s rejection of David Bohm’s theory.)
The first section introduces John Gribbin's position, as well as a potted history of positions (featuring Paul Dirac, Richard Feynman, etc.) which can be taken to lead towards a possible eliminativism. This section also deals with both the shut-up-and-calculate mantra and instrumentalism, and with how they can be taken to lead to eliminativism. Then the simple question “What is interpretation?” is asked. After that, Gribbin's pluralism is tackled and found to be not very convincing (at least not from a philosophical point of view). The section “Only Maths?” is self-explanatory. Finally, the issue of waves-or-particles is discussed in the context of the elimination of interpretations of quantum mechanics.
ii) Why Model-dependent Realism?
iii) The Brain's Models
iv) True Reality and Models
v) Pluralism
vi) Hawking's Constructive Empiricism?
vii) The Aesthetics of Models
viii) Conclusion
The words "model-dependent realism" (MDR) were first used in Stephen Hawking and Leonard Mlodinow's book *The Grand Design*, which was published in 2010. Before that, Hawking had of course already talked about the importance of models in physics. (As a physicist, it would be hard not to stress their importance.)
It's very odd that Stephen Hawking should have said that “philosophy is dead” on more than one occasion. What did he think his model-dependent realism was? Above and beyond that, it's difficult to decipher why Hawking chose the term “realism” in the first place.
As for the philosophical position of model-dependent realism itself, it has a strong pragmatic - rather than a strictly empiricist - appeal. And, precisely because of that, metaphysical realists and scientific realists will have serious problems with Hawking's philosophical position.
One question which needs to be asked here is the following:
*Why did Stephen Hawking use the word “realism” to characterise
his position?*
At an intuitive level, MDR appears to be an anti-realist position, rather than a realist one. After all, if one is stressing models, theories and “mental concepts”, then isn't that also to stress some kind of anti-realist position? Perhaps the wording doesn't matter.
As already hinted at, Hawking didn't have much time for philosophy and on more than one occasion he said that it was “dead”. Yet it's very odd that he didn't realise he was doing philosophy when he wrote these parts of *The Grand Design*. What's more, Hawking made the following incredible claim:
“Model-dependent realism short-circuits all this argument and discussion between the realist and anti-realist schools of thought.”
So perhaps it wasn't philosophy simpliciter that Hawking was against; but only philosophy “which has not kept up with modern developments in science”. (Many philosophers themselves have said the same about their fellow philosophers.) Thus one must now assume that model-dependent realists have kept up with modern science. However, even philosophy which has kept up with science still remains philosophy. And that's also true of Hawking's own model-dependent realism.
ii) Geocentrism?
iii) Unobservable Consciousness
iv) Absence of Evidence
v) Are Panpsychists Closet Theists?
Philosopher Philip Goff and science writer/journalist John Horgan disagree about panpsychism.
In a blog post ('The New Copernican Revolution: A Response to John Horgan'), Philip Goff lays out the philosophical problems he has with John Horgan's stance on panpsychism. It mainly concerns what Horgan sees as panpsychism's “geocentrism”; though it does touch on the nature of evidence (i.e., when it comes to both metaphysical and scientific theories).
Firstly, John Horgan accuses panpsychists of “neo-geocentrism”. And then Philip Goff accuses John Horgan of geocentrism. According to both Goff and Horgan, geocentrism is
“the attempt to drag us back to the pre-Copernican view that reality revolves around us human beings”.
I note the distinction between the following:
i) Seeing consciousness as being “fundamental” to our own conceptions of the universe/reality.
and
ii) Seeing consciousness as being fundamental to the universe/reality simpliciter.
Goff also goes into greater detail about what he thinks “non-panpsychists” believe. He writes:
“For non-panpsychists, consciousness – the source of all that is of value in existence – is to be found on the planet alone, and only in its very recent history. In the immensity of the cosmos, we are uniquely special and privileged.”
Goff then puts the panpsychist position:
“Panpsychists, in contrast, propose a new Copernican revolution, according to which there’s nothing special about human consciousness...”
Even if someone accepts that phenomenal properties exist all the way down (i.e., to rocks and atoms), it may still be the case that human consciousness is indeed special. However, the word “special” is loaded because everything in the universe – from a type of rock to an ant - is special and unique in some (or even many) ways.
*********************************
Much has been made of string theory being “unscientific”, lacking evidence and not offering (unique) predictions. This piece lays part of the blame at the Pythagorean nature of string theory (in which the mathematics always has a supreme position).
The first section tackles what's called “old-style physics” - that is, physics which places an importance on observation, experiment and prediction. Then there's a section in which Michio Kaku ties his own position to the science of Galileo and then to that of Einstein. In both cases, it can be seen that they aren't great exemplars for string theory.
The central section deals with Pythagoreanism and how it ties in with string theory. The next section deals with the aesthetics of string theory and argues that aesthetic appeal can't be – and mustn't be - the end of the story in physics.
Finally, there's a section which cites a specific example of how string theory mathematics led to string theory physics and cosmology: i.e., the case of string solutions and the multiverse.
ii) Panpsychism
iii) Substance or Intrinsic Essence?
iv) Relationalism
Lee Smolin is a theoretical physicist with many philosophical interests and inclinations. This interplay between science and philosophy is played out in Smolin's writings.
So let Smolin lay his cards on the table. He writes:
"[T]here are questions that science cannot answer now but that are so clearly meaningful that sometime in the future, it is hoped, science will evolve language, concepts, and experimental techniques to address them."
The question is:
Does Smolin believe that "science cannot address" what he calls "intrinsic essence" - and thereby consciousness and qualia?
It would seem that Smolin does believe that. One may speak for Smolin here and say that although science will progress in the future, the "hard problem" (to use David Chalmers' term) will remain beyond it. So this isn't even an issue of insolubilia versus incompletability (i.e., the position which states that even though science's problems are soluble in principle, science will never be complete). No; this is a position that embraces insolubility regardless of completability.
As it is, current science probably can't answer "what qualia are like" or "why we perceive them". Of course one trick here is to deny the existence of qualia and consciousness outright (as some philosophers and scientists have done). Alternatively, one can explain consciousness (though not qualia) in terms that are amenable to third-person (scientific) evaluation and tests (as Daniel Dennett does).
As for panpsychism.
It's quite a surprise that Smolin never mentions panpsychism. He doesn't even engage with the possibility that the intrinsic essence of the mind could be tied to the intrinsic essence of inanimate objects. That is, Smolin says that rocks or atoms have an "internal aspect", which is their "intrinsic essence". And we also have consciousness, which he tells us "is an aspect of the intrinsic essence of brains". As just stated, Smolin doesn't tie the intrinsic essence of the brain to the intrinsic essence of a rock or atom - apart from saying that they all have intrinsic essences. The panpsychist, of course, goes one step further than Smolin. For the panpsychist, the intrinsic essence of the brain is one and the same thing as the intrinsic essence of a rock (often talked about in terms of “phenomenal properties”). That means, of course, that the rock is conscious (or has phenomenal properties) too - if to a markedly lesser degree than the human brain.
A scientist (or academic) carries out an experiment on ten people (i.e., subjects). The experiment is to see if the ten subjects will press a button which will deliver an electric shock if ordered to do so by an authority figure (given a suitably rational or "scientific" reason to do so). However, unbeknownst to him, it's the scientist who's the real subject of the experiment. That is, his ten subjects aren't being tested - he is! In fact the ten subjects know what's going on and are given fake electrical shocks. The scientist, on the other hand, doesn't know what's going on. He only knows about his own experiment; not the experiment upon him.
Again, this paradox isn't about the nature of meta-tests, or even about why - or if - people really do commit acts of cruelty when told to do so by authority figures. This paradox is actually about the "fudge factor" (or "experimenter bias effect"). In other words, the scientist is testing his subjects, while the meta-testers are testing the scientist who's doing the testing.
The phrases "fudge factor" and "experimenter bias effect" actually refer to the fact that when a researcher or scientist expects to "discover" or find a certain result, he's very likely to get that result. Needless to say, when the research or science involves anything which is in any way political in nature, this is even more likely to be the case.
*************************************
When it came to Artificial Life (AL), Christopher G. Langton didn't hold back. In the following passage he puts the exciting case for AL:
“It's going to be hard for people to accept the idea that machines can be as alive as people, and that there's nothing special about our life that's not achievable by any other kind of stuff out there, if that stuff is put together in the right way. It's going to be as hard for people to accept that as it was for Galileo's contemporaries to accept the fact that Earth was not at the center of the universe.”
The important and relevant part of the passage above is:
“[T]here's nothing special about our life that's not achievable by any other kind of stuff out there...”
Although the above isn't a definition of functionalism, it nonetheless has obvious and important functionalist implications. So when it comes to both Artificial Life and Artificial Intelligence, the computer scientist Christopher Langton seems to have explicitly stated that biology doesn't matter. Yes; it of course matters to biological life and biological intelligence; though not to life and intelligence generically interpreted.
The biologist and cognitive scientist Francisco Varela put the opposite position (as it were) to Langton's when he told us that he “disagree[s] with [Langton's] reading of artificial life as being functionalist”. Varela continues:
“By this I refer to his idea that the pattern is the thing. In contrast, there's the kind of biology in which there's an irreducible side to the situatedness of the organism and its history...”
We have specific biologies. Those specific biologies are situated in specific environments. And then we must also consider the specific histories of those specific biological organisms. So, if “early AI” was “all about” functions and nothing else, then that was surely to leave a lot out. (From a philosophical angle, we must also include externalist arguments, as well as embodiment and embeddedness – i.e., not only Varela's “situatedness”.)
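Langton's "the pattern is the thing" is, in effect, the multiple-realizability thesis: the same functional pattern can be realized in utterly different substrates. A toy sketch of my own (not Langton's), under the assumption that "functional pattern" just means input-output behaviour plus state transitions:

```python
# Two implementations of the same functional pattern: a counter that
# "responds" to stimuli by incrementing its state. One stores its state
# as an integer, the other as a pile of tokens - different "stuff",
# identical pattern.

class IntCounter:
    def __init__(self):
        self.n = 0
    def stimulate(self):
        self.n += 1
    def state(self):
        return self.n

class TokenCounter:
    def __init__(self):
        self.tokens = []
    def stimulate(self):
        self.tokens.append("*")
    def state(self):
        return len(self.tokens)

# From the outside, the two are functionally indistinguishable.
for counter in (IntCounter(), TokenCounter()):
    for _ in range(3):
        counter.stimulate()
    print(counter.state())  # both print 3
```

Varela's objection, on this picture, is that an organism is not exhaustively captured by any such behavioural table: its situatedness, embodiment and history are (he claims) irreducible remainders that the pattern leaves out.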
1) Beliefs which are “constructions or projections from [sensory] stimulations” (Quine’s position).
and
2) Beliefs about the world.
The question is: How do we get from 1) to 2)? Or:
How do we get from sensory stimulations - which are (indeed) caused by the world - to truths about the world (or representations which are true about the world)?
This is the ancient sceptical question. Indeed this was *the* question of epistemology (at least since Descartes).
Another way of putting this is in terms of what we *know*. On Quine’s account, all we know about are sensory stimulations and what we assert in response to them. We don't know anything about the world itself.
So are beliefs and assertions about our own and other people’s beliefs and assertions not directly (or even indirectly) about the world? These beliefs and assertions (or Quine’s ‘projections’)
“could not be seen as a source of independent information about the world against which their own truth or the truth of the earlier beliefs could be checked”.
This is as strong a statement of the possibility of scepticism in epistemology as you could hear from a sceptic himself.
This piece doesn't ask the familiar question: Why does the physical brain give rise to consciousness?
Instead, it asks why we human beings - and other animals - needed consciousness in the first place. Thus we have this question:
From an evolutionary perspective, if the functions of the brain (which Chalmers often refers to) could have occurred “in the dark”, then why did we need (if we did need) - and why do we still have - consciousness?
The nature of the physical-consciousness link is only tangential to this issue.
In this chapter Daniel Dennett doesn't really offer many of his own arguments against John Searle's position. What he does offer are a lot of blatant ad hominems and simple endorsements of other people's (i.e., Strong AI aficionados) positions on the Chinese Room argument. Indeed Dennett is explicit about his support (even if it's somewhat qualified) of the “Systems Reply”.
We can happily accept that Searle's thought experiment doesn't entirely (or even at all) succeed in what it claims to accomplish. However, Dennett's claims (or those he endorses) don't demonstrate the *possibility* of Strong AI either. In addition, it can also be said that Searle himself never claimed the “flat-out impossibility of Strong AI” in the first place.
ii) Analysis and Argumentation
iii) Clarity
iv) Obscurity
v) Deconstruction
vi) Conclusion
Many analytic philosophers stress the point that analytic philosophy isn't about the sharing of views or positions: it's about the sharing of philosophical tools and a basic commitment to clarity. All this is regardless of what position a particular analytic philosopher may advance.
In his book, *What is Analytic Philosophy?*, Hans-Johann Glock elaborates on this position in the following:
“Philosophy is not about sharing doctrines, but about a rational and civilised debate even about one's own cherished assumptions.”
Of course it's also the case that many analytic philosophers do actually “share doctrines”. However, it's just that the sharing of philosophical tools and practices is deemed to be more important than sharing doctrines. It also follows from the sharing of philosophical tools and a commitment to clarity that there can be “rational and civilised debates even about one's cherished assumptions”. That is, the sharing of philosophical tools and a commitment to clarity enables (or allows) rational and civilised debate.
Of course certain questions arise here:
i) Do analytic philosophers really share many - or indeed any - philosophical tools?
ii) Is there genuine civilised debate between all analytic philosophers at all times?
Many philosophers have of course questioned this assumption that analytic philosophers share tools. Others may question the deepness or genuineness of the “civilised debate” too. This basically means that there will be exceptions to i) and ii) above and no one should expect otherwise. However, on the whole, it's easy to see that most analytic philosophers do indeed share many tools and practices.
As for civilised debate.
This runs parallel to an account of science as a whole, which can be distinguished from any account of individual scientists. That is, individual scientists can be very unrepresentative individuals: they can falsify experimental data, stick dogmatically to their theories, be paid by big business, let their politics influence their science, etc. Nonetheless, unrepresentative scientists certainly aren't the norm in science. (All this will depend partly on which science we're talking about.)
The same kinds of distinction can be made between individual analytic philosophers and analytic philosophy itself. There may indeed be unrepresentative analytic philosophers. It may even be the case that poor standards (however that's defined) are sometimes displayed within books or even papers. However, as with science, none of this is really true of analytic philosophy as a whole.
ii) Implementation
iii) Causal Structure
iv) The Computer's Innards
v) Chalmers' Biocentrism?
vi) The Chinese Room
It's a good thing that the abstract and the concrete (or abstract objects in "mathematical space" and the "real world") are brought together in David Chalmers' account of Strong AI. Often it's almost (or literally) as if AI theorists believe that (as it were) disembodied computations can themselves bring about mind or even consciousness. (The same can be said, though less strongly, about functions or functionalism.) This, as John Searle once stated, is a kind of contemporary dualism in which abstract objects (computations/algorithms – the contemporary version of Descartes' mind-substance) bring about mind and consciousness on their own.
To capture the essence of what Chalmers is attempting to do we can quote his own words when he says that it's all about “relat[ing] the abstract and concrete domains”. And he gives a very concrete example of this.
Take a recipe for a meal. To Chalmers, this recipe is a “syntactic object[]”. However, the meal itself (as well as the cooking process) is an “implementation” that occurs in the “real world”.
So, with regard to Chalmers' own examples, we need to tie "Turing machines, Pascal programs, finite-state automata", etc. to "[c]ognitive systems in the real world" which are
"concrete objects, physically embodied and interacting causally with other objects in the physical world".
In the above passage, we also have what may be called an "externalist" as well as "embodiment" argument against AI abstractionism. That is, the creation of mind and consciousness is not only about the computer/robot itself: Chalmers' "physical world" is undoubtedly part of the picture too.
Of course most adherents of Strong AI would never deny that such abstract objects need to be implemented in the "physical world". It's just that the manner of that implementation (as well as the nature of the physical material which does that job) seems to be seen as almost – or literally – irrelevant.
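Chalmers' abstract/concrete distinction can be sketched in code. Below, the abstract object is a finite-state automaton given purely as a transition table; the "implementation" is any concrete causal process whose state changes mirror that table (here, a running program). The two-state parity automaton is a hypothetical example of mine, chosen only because it is small:

```python
# Abstract domain: a finite-state automaton specified as pure structure -
# states, a start state, and a transition table. Nothing physical here;
# this is the analogue of Chalmers' "syntactic object" (like the recipe).
fsa = {
    "start": "even",
    "transitions": {
        ("even", 1): "odd",
        ("odd", 1): "even",
        ("even", 0): "even",
        ("odd", 0): "odd",
    },
}

def run(automaton, inputs):
    """Concrete domain: a physical process (this running program) whose
    causal state-transitions mirror the abstract transition table."""
    state = automaton["start"]
    for symbol in inputs:
        state = automaton["transitions"][(state, symbol)]
    return state

print(run(fsa, [1, 1, 1, 0]))  # three 1s seen, so the automaton ends in "odd"
```

On Chalmers' account, what makes a system an implementation is that its causal structure maps onto this formal structure; the very same table could equally be "implemented" in dominoes or water pipes, which is why the adherents of Strong AI mentioned above treat the physical material as almost irrelevant.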
David Chalmers (the well-known Australian philosopher) claims that if something is conceivable, then it's also – metaphysically – possible. The problem with this is that we can distinguish conceivability from imaginability. That is, even if we can't construct mental images of nothing (or nothingness), perhaps we can still conceive of nothing (or nothingness). I, for one, can't even conceive of nothing (or nothingness).
But can other people conceive of nothing? Do we even have intuitions about nothing or about the notion of nothingness?
So how can we even name or refer to nothing? (We shall see that Parmenides might have had something here.) There's nothing to hold onto. Yet, psychologically speaking, thoughts about nothing can fill people with dread. There's something psychologically (or emotionally) both propelling and appalling about it. And that's why existentialists and other philosophers – with their taste for the dramatic and poetic - found the subject of nothing (or at least nothingness) such a rich philosophical ground to mine.
The very idea of nothing also seems bizarre. It arises at the very beginning of philosophy and religion. After all, how did God create the world "out of nothing"? Did God Himself come from nothing? Indeed what is nothing (or nothingness)?
ii) Ten Logical Possibilities Before Breakfast
iii) René Descartes
iv) Logical Possibilities as Tools
There isn't much technical argumentation in the first part of this piece about David Chalmers' “arguments from logical possibility”. Indeed, I suspect that many of his logical possibilities are indeed logical possibilities. I also suspect that many of his arguments are airtight.
In *The Conscious Mind*, Chalmers introduces a logical possibility on almost every page of that book. In it he talks about “God”, “an angel world”, “flying telephones”, “ectoplasm” and a monkey who writes *Hamlet*. Admittedly, most of these logical possibilities are of little interest to Chalmers either because they're hugely improbable or because they don't help him philosophically. It is zombies which he specialises in - and it is zombies which help him philosophically.
Indeed isn't the position of panpsychism (which Chalmers also endorses) itself very reliant on logical possibility? (For example, it's *logically possible* that an electron or stone has “phenomenal properties”.)
Chalmers' keenness on logical possibilities is not only a philosophical position in its own right: logical possibilities also advance his positions within philosophy. Chalmers and other “anti-materialists” require logical possibilities in order to advance anti-materialism. In other words, their arguments only work if one countenances all sorts of strange logical possibilities.
So when a philosopher cites various logical possibilities, all sorts of philosophical positions become available... or *possible*. That's primarily because once a given philosopher sets a particular logical possibility among the pigeons, then it's surely the duty of other philosophers to tackle that logical possibility. But if they don't, then surely they can be classed as philosophical philistines... or “verificationists”. Indeed Chalmers lays his cards on the table when he states:
“In general, a certain burden of proof lies on those who claim that a given description is logically *impossible*.”
This means that a lot of weight is attached to logical possibilities. In other words, logical possibilities (as well as conceivabilities) serve a larger philosophical purpose: citing them isn't just another tool – it is *the* tool to fight materialism (and perhaps much more besides).
v) Logical Possibility and Natural Possibility
vi) Zombies
vii) Saul Kripke
viii) Chalmers & Goff: Conceivability to Possibility
ix) Two More Conceivings
One may think that only natural (or empirical) possibility is of interest to most people – both laypersons and experts. Indeed, David Chalmers (sort of) states this himself. Yet logical (i.e., not natural) possibility still permeates Chalmers' entire work. And, as we shall see, so too does conceivability, which he strongly ties to logical possibility. Indeed Chalmers' references to natural possibility hardly make sense when taken out of the context of logical possibility. Thus both logical and natural possibility gain much of their purchase by being opposed to one another.
Chalmers himself sums up one major problem with logical possibility (this was touched upon in Part One) when he says that
“[t]here are a vast number of logically possible situations that are not naturally possible”.
That means that there must be mightily good philosophical (or scientific) reasons to spend time on a given logical possibility. (It's easy to believe that there are good reasons when, for example, consciousness is being discussed.) That is, surely it must pay philosophical dividends to do so. Having said that, there's a very large number of uninstantiated natural possibilities too.
So what, philosophically, can we draw out of natural (“although wildly improbable”) possibilities? Well, we can draw one thing out: that they're *naturally possible*. And that's enough for some philosophers. But what else? Well, at a superficial level, it shows us that all sorts of bizarre things are naturally possible. So if it's naturally possible that a monkey could type *Hamlet*, then it's also naturally possible that an ant could take over the world. Why? Because all it takes for something to be naturally possible is that it “conforms to the laws of our world”. So, as far as I can see, a Nietzschean super-ant does appear to conform to the laws of our world. That is, this ant and its actions don't “violate[] the laws of nature of our world”.
Such cognitivists believe that the brain is also a machine – a computing machine.
Computationalists (computationalism is a branch of cognitivism) claim that all thought is computation. But what does that mean? Are the words 'thought' and 'computation' virtual (or literal) synonyms?
This is more clearly the case because it seems that almost all conscious processes in the brain are deemed to be thoughts, and thus also deemed to be computations. That not only includes the thought that 1 plus 1 equals 4 or that snow is white, but also the rotation of a mental image in the mind, imagining the smell of a rose, and so on. Then again, if rotating a mental image is classed as a thought, then why can't it also be classed as a computation? Especially since, in computationalism, the two appear to be synonyms.
It all now depends on what we mean by the word 'computation'.
For a start, we can make the following claims:
i) All of a computer's processes are computations.
ii) Not all conscious human mental processes are computations.
Similarly, we can say:
iii) Many human mental processes are thoughts.
iv) No computer computations are thoughts (i.e. because thoughts have semantic content, intentionality, reference, etc.).
So what does a third-person absolutist believe? According to David Chalmers, Dennett believes that “what is not externally verifiable cannot be real”. To be more explicit: there's a fundamental connection between any x being real (or existing) and whether or not we can “externally verify” that x. Thus, if we can't externally verify x, it doesn't exist. It's not real.
Dennett explains his third-person absolutism when he states the following:
"I wouldn't know what I was thinking about if I couldn't identify them by their functional differentia."
It's not surprising, then, that Dennett has asked David Chalmers
“to provide 'independent' evidence (presumably behavioral or functional evidence) for the 'postulation' of experience”.
As for David Chalmers: he classes himself as a “dualist”; or, more accurately, his position is one of "naturalistic dualism". Why “naturalist”? Because Chalmers believes that mental states and consciousness itself are caused by physical systems. So why “dualist”? Because Chalmers believes that mental states - or consciousness generally - are ontologically distinct and also irreducible to the physical.
In 1964 Gell-Mann postulated the existence of quarks. (The name was coined by Gell-Mann himself, and it's a reference to the novel *Finnegans Wake*, by James Joyce.) Quarks, antiquarks and gluons were seen to be the underlying elementary constituents of neutrons and protons (as well as of other hadrons). Gell-Mann was then awarded the Nobel Prize in Physics in 1969 for his contributions and discoveries in the classification of elementary particles at the nuclear level.
More relevantly to this piece: in 1984 Gell-Mann was one of several co-founders of the Santa Fe Institute - a research institute in New Mexico. Its job is to study complex systems and to advance the cause of interdisciplinary studies of complexity theory.
In 1994 Gell-Mann wrote a popular science book about physics and complexity science, *The Quark and the Jaguar: Adventures in the Simple and the Complex*. Many of the quotes in this piece come from that book.
**************************************************************************
The following words of Lee Smolin (an American theoretical physicist) sum up both Murray Gell-Mann's work and the man himself. (At least insofar as they are relevant to this piece.) Firstly, he explains Gell-Mann's work:
“Physics needs a new direction, and the direction should have something to do with the study of complex systems rather than with the kind of physics [Murray Gell-Mann] did most of his life.”
Then Smolin continues with a few words on Gell-Mann himself:
“The fact that after spending a life focused on studying the most elementary things in nature Murray can turn around and say that now what's important is the study of complex systems is a great inspiration, and also a great tribute to him.”
Of course all the above is hardly a philosophical or scientific account of the need to move from the “elementary” to the “complex”. However, it does hint at the importance of gaining a broader picture of nature (or the universe). And that's what both Smolin and Gell-Mann realised. (In Smolin's own case, he added cosmology and philosophy to his theoretical-physics repertoire.)
Despite that, surely it can't be said that “what's important is the study of complex systems”. That's simply to reverse the “reductionist hierarchy”. Complex systems are simply part of the picture, not the most important part of it. Indeed it seems a little naïve to replace that previous ostensible hierarchy with a new one.
Murray Gell-Mann himself did appear to offer us a middle-way between (strong) reductionism and the complete autonomy of the individual (“special”) sciences.
Gell-Mann believed that it's all about what he called the "staircases” between the sciences. As Gell-Mann put it (in the specific case of the relation between the levels of psychology and biology):
“Where work does proceed on both biology and psychology and on building staircases from both ends, the emphasis at the biological end is on the brain (as well as the rest of the nervous system, the endocrine system, etc), while at the psychological end the emphasis is on the mind—that is, the phenomenological manifestations of what the brain and related organs are doing. Each staircase is a brain-mind bridge.”
Most of the above can be summed up in this way:
i) Simply because a scientist (or philosopher) says that x can be reduced to y (not necessarily without remainder),
ii) that certainly doesn't also mean that this scientist (or philosopher) also believes that x is (to use Patricia Churchland's words) “disreputable, unscientific or otherwise unsavoury”.
Lee Smolin: Doing Physics in a Quantum Box
i) Initial Conditions
ii) Isolated and Open Boxes
iii) Truncations of Nature
Nancy Cartwright: Doing Physics in a Classical Box
i) Introduction
ii) A Ball on an Inclined Plane
iii) Pure Observations?
iv) Manipulating Nature
v) Conclusion
Final Thought: Quantum Boxes aren't Classical Boxes
In Lee Smolin's book, *Time Reborn: From the Crisis in Physics to the Future of the Universe* (2013), there is a chapter called 'Doing Physics in a Box'. That chapter is the basis of this piece.
Doing physics in a box has a long history. According to the theoretical physicist Lee Smolin, it goes back to Kepler, Galileo, Descartes and Newton in the 16th and 17th centuries. These scientists “learned to [] isolate little pieces of the world, examine them, and record the changes to them”.
Ironically, some of the thought-experimental “boxes” of physics are *literally* boxes. For example, we have Erwin Schrödinger's famous cat-in-a-box thought experiment and Albert Einstein's “light box” experiment. Interestingly enough, both experiments show the impact of outside forces on the boxes (something which Smolin will stress later). In the cat-in-the-box case, outside disturbances can/do “collapse the wave function”.
Later in this piece, the philosopher Nancy Cartwright specifically confronts Galileo's “little piece[] of the world” - a ball on an inclined plane (i.e., a ramp). However, Smolin himself concentrates on quantum pieces of the world.
The words “quantum box” are used because Smolin focuses on such a thing. He says that the “application of quantum mechanics appears to be limited to isolated systems”. Yet “[i]t's [still] an extension of the Newtonian paradigm – of doing physics in a box”. Nonetheless, he mentions classical (or macro) boxes too.
Finally, Smolin has two important things to say about the experiments of physics:
i) That what he calls “subsystems” (or “isolated systems”) are inevitably part of the entire universe – therefore they're also affected by the whole universe.
ii) That experiments are extremely artificial constructs.
i) Introduction
ii) What is Interpretation?
iii) Shut up and calculate!
iv) Limits to John Gribbin's Pluralism
v) Only Maths?
vi) Waves and Particles
vii) Conclusion
viii) Afterthought
This piece doesn’t claim to offer a conclusive case for the elimination of all the interpretations of quantum mechanics (QM). It simply raises the possibility of elimination and then offers a few arguments in that direction.
The prime motive here is that, at least at present, there’s no way of establishing which interpretation is the true/correct/etc. one. Secondly, the multiplicity of interpretations both confuses the issue and leads to scepticism towards many of them. Thirdly, some interpretations are so convoluted and wacky that laypersons and even physicists themselves must have only aesthetic reasons to believe in them.
This is particularly true of the Many Worlds Interpretation. I can see no reason whatsoever for the layperson to accept it other than it can be taken to “explain the phenomena” (a phrase which is being used in contrast to Bas van Fraassen’s “save the phenomena”). That is, in the way that panpsychism, idealism, Marxism, theism, etc. can all be taken to explain the phenomena. However, explaining phenomena is often very cheap and easy, even if it is neat and tidy. (Note Albert Einstein’s rejection of David Bohm’s theory.)
The first section introduces John Gribbin's position, as well as a potted history (featuring Paul Dirac, Richard Feynman, etc.) of positions which can be taken to lead towards a possible eliminativism. This section also deals with both the shut-up-and-calculate mantra and instrumentalism, and how they can be taken to lead to eliminativism. Then the simple question “What is Interpretation?” is asked. After that, Gribbin's pluralism is tackled and seen to be not very convincing (at least not from a philosophical point of view). The section “Only Maths?” is self-explanatory. Finally, the issue of waves-or-particles is discussed in the context of the elimination of interpretations of quantum mechanics.
ii) Why Model-dependent Realism?
iii) The Brain's Models
iv) True Reality and Models
v) Pluralism
vi) Hawking's Constructive Empiricism?
vii) The Aesthetics of Models
viii) Conclusion
The words "model-dependent realism" (MDR) were first used in Stephen Hawking and Leonard Mlodinow's book *The Grand Design*, which was published in 2010. Before that, Hawking had of course already talked about the importance of models in physics. (As a physicist, it would be hard not to stress their importance.)
It's very odd that Stephen Hawking should have said that “philosophy is dead” on more than one occasion. What, after all, did he think his model-dependent realism was? Above and beyond that, it's difficult to decipher why Hawking chose the term “realism” in the first place.
As for the philosophical position of model-dependent realism itself, it has a strong pragmatic - rather than a strictly empiricist - appeal. And, precisely because of that, metaphysical realists and scientific realists will have serious problems with Hawking's philosophical position.
One question which needs to be asked here is the following:
*Why did Stephen Hawking use the word “realism” to characterise his position?*
At an intuitive level, MDR appears to be an anti-realist position, rather than a realist one. After all, if one is stressing models, theories and “mental concepts”, then isn't that also to stress some kind of anti-realist position? Perhaps the wording doesn't matter.
As already hinted at, Hawking didn't have much time for philosophy and on more than one occasion he said that it was “dead”. Yet it's very odd that he didn't realise he was doing philosophy when he wrote these parts of *The Grand Design*. What's more, Hawking made the following incredible claim:
“Model-dependent realism short-circuits all this argument and discussion between the realist and anti-realist schools of thought.”
So perhaps it wasn't philosophy simpliciter that Hawking was against; but only philosophy “which has not kept up with modern developments in science”. (Many philosophers themselves have said the same about their fellow philosophers.) Thus one must now assume that model-dependent realists have kept up with modern science. However, even philosophy which has kept up with science still remains philosophy. And that's also true of Hawking's own model-dependent realism.
ii) Geocentrism?
iii) Unobservable Consciousness
iv) Absence of Evidence
v) Are Panpsychists Closet Theists?
Philosopher Philip Goff and science writer/journalist John Horgan disagree about panpsychism.
In a blog post ('The New Copernican Revolution: A Response to John Horgan'), Philip Goff lays out the philosophical problems he has with John Horgan's stance on panpsychism. It mainly concerns what Horgan sees as panpsychism's “geocentrism”; though it does touch on the nature of evidence (i.e., when it comes to both metaphysical and scientific theories).
Firstly, John Horgan accuses panpsychists of “neo-geocentrism”. And then Philip Goff accuses John Horgan of geocentrism. According to both Goff and Horgan, geocentrism is
“the attempt to drag us back to the pre-Copernican view that reality revolves [around] us human beings”.
I note the distinction between the following:
i) Seeing consciousness as being “fundamental” to our own conceptions of the universe/reality.
and
ii) Seeing consciousness as being fundamental to the universe/reality simpliciter.
Goff also goes into greater detail about what he thinks “non-panpsychists” believe. He writes:
“For non-panpsychists, consciousness – the source of all that is of value in existence – is to be found on the planet alone, and only in its very recent history. In the immensity of the cosmos, we are uniquely special and privileged.”
Goff then puts the panpsychist position:
“Panpsychists, in contrast, propose a new Copernican revolution, according to which there’s nothing special about human consciousness...”
Even if someone accepts that phenomenal properties exist all the way down (i.e., to rocks and atoms), it may still be the case that human consciousness is indeed special. However, the word “special” is loaded because everything in the universe – from a type of rock to an ant - is special and unique in some (or even many) ways.
*********************************
Much has been made of string theory being “unscientific”, lacking evidence and not offering (unique) predictions. This piece lays part of the blame at the Pythagorean nature of string theory (in which the mathematics always has a supreme position).
The first section tackles what's called “old-style physics” - that is, physics which places importance on observation, experiment and prediction. Then there's a section in which Michio Kaku ties his own position to the science of Galileo and then Einstein. In both cases, it can be seen that they aren't great exemplars for string theory.
The central section deals with Pythagoreanism and how it ties in with string theory. The next section deals with the aesthetics of string theory and argues that aesthetic appeal can't be – and mustn't be - the end of the story in physics.
Finally, there's a section which cites a specific example of how string theory mathematics led to string theory physics and cosmology: i.e., the case of string solutions and the multiverse.
ii) Panpsychism
iii) Substance or Intrinsic Essence?
iv) Relationalism
Lee Smolin is a theoretical physicist with many philosophical interests and inclinations. This interplay between science and philosophy is played out in Smolin's writings.
So let Smolin lay his cards on the table. He writes:
"[T]here are questions that science cannot answer now but that are so clearly meaningful that sometime in the future, it is hoped, science will evolve language, concepts, and experimental techniques to address them."
The question is:
Does Smolin believe that "science cannot address" what he calls "intrinsic essence" - and thereby consciousness and qualia?
It would seem that Smolin does believe that. One may speak for Smolin here and say that although science will progress in the future, the "hard problem" (to use David Chalmers' term) will remain beyond it. So this isn't even an issue of insolubilia versus incompletability (i.e., the position which states that even though science's problems are soluble in principle, science will never be complete). No; this is a position that embraces insolubility regardless of completability.
As it is, current science probably can't answer "what qualia are like" or "why we perceive them". Of course one trick here is to deny the existence of qualia and consciousness outright (as some philosophers and scientists have done). Alternatively, one can explain consciousness (though not qualia) in terms that are amenable to third-person (scientific) evaluation and tests (as Daniel Dennett does).
As for panpsychism.
It's quite a surprise that Smolin never mentions panpsychism. He doesn't even engage with the possibility that the intrinsic essence of the mind could be tied to the intrinsic essence of inanimate objects. That is, Smolin says that rocks or atoms have an "internal aspect", which is their "intrinsic essence". And we also have consciousness, which he tells us "is an aspect of the intrinsic essence of brains". As just stated, Smolin doesn't tie the intrinsic essence of the brain to the intrinsic essence of a rock or atom - apart from saying that they all have intrinsic essences. The panpsychist, of course, goes one step further than Smolin: the intrinsic essence of the brain is one and the same thing as the intrinsic essence of a rock (often talked about in terms of “phenomenal properties”). That means, of course, that the rock is conscious (or has phenomenal properties) too - if to a markedly lesser degree than the human brain.
The said scientist/academic carries out an experiment on ten people (i.e., subjects). The experiment is to see if the ten subjects will press a button which will deliver an electric shock if ordered to do so by an authority figure (given a suitably rational or "scientific" reason to do so). However, unbeknownst to him, it's the scientist who's the real subject of the experiment. That is, his ten subjects aren't being tested - he is! In fact the ten subjects know what's going on and are given fake electrical shocks. The scientist, on the other hand, doesn't know what's going on. He only knows about his own experiment; not the experiment upon him.
Again, this paradox isn't about the nature of meta-tests or even about why - or if - people really do commit acts of cruelty when told to do so by authority figures. This paradox is actually about the "fudge factor" or "experimenter bias effect". In other words, the scientist is testing his research students and these meta-testers are testing the scientist who's testing research students.
The phrases "fudge factor" and "experimenter bias effect" actually refer to the fact that when a researcher or scientist expects to "discover" or find a certain result, he's very likely to get that result. Needless to say, when the research or science involves anything which is in any way political in nature, this is even more likely to be the case.
*************************************
When it came to Artificial Life (AL), Christopher G. Langton didn't hold back. In the following passage he puts the exciting case for AL:
“It's going to be hard for people to accept the idea that machines can be [as] alive as people, and that there's nothing special about our life that's not achievable by any other kind of stuff out there, if that stuff is put together in the right way. It's going to be as hard for people to accept that as it was for Galileo's contemporaries to accept the fact that Earth was not at the center of the universe.”
The important and relevant part of the passage above is:
“[T]here's nothing special about our life that's not achievable by any other kind of stuff out there...”
Although the above isn't a definition of functionalism, it nonetheless has obvious and important functionalist implications. So when it comes to both Artificial Life and Artificial Intelligence, the computer scientist Christopher Langton seems to have explicitly stated that biology doesn't matter. Yes; it of course matters to biological life and biological intelligence; though not to life and intelligence generically interpreted.
The biologist and cognitive scientist Francisco Varela put the opposite position (as it were) to Langton's when he told us that he “disagree[s] with [Langton's] reading of artificial life as being functionalist”. Varela continues:
“By this I refer to his idea that the pattern is the thing. In contrast, there's the kind of biology in which there's an irreducible side to the situatedness of the organism and its history...”
We have specific biologies. Those specific biologies are situated in specific environments. And then we must also consider the specific histories of those specific biological organisms. So, if “early AI” was “all about” functions and nothing else, then that was surely to leave a lot out. (From a philosophical angle, we must also include externalist arguments, as well as embodiment and embeddedness – i.e., not only Varela's “situatedness”.)
1) Beliefs which are “constructions or projections from [sensory] stimulations” (Quine’s position).
and
2) Beliefs about the world.
The question is: How do we get from 1) to 2)? Or:
How do we get from sensory stimulations - which are (indeed) caused by the world - to truths about the world (or representations which are true about the world)?
This is the ancient sceptical question. Indeed this was *the* question of epistemology (at least since Descartes).
Another way of putting this is in terms of what we *know*. On Quine’s account, all we know about are sensory stimulations and what we assert in response to them. We don't know anything about the world itself.
So are beliefs and assertions about our own and other people’s beliefs and assertions not directly (or even indirectly) about the world? These beliefs and assertions (or Quine’s ‘projections’)
“could not be seen as a source of independent information about the world against which their own truth or the truth of the earlier beliefs could be checked”.
This is as strong a statement of the possibility of scepticism in epistemology as you could hear from a sceptic himself.
Why does the physical brain give rise to consciousness?
Instead, it asks us why we human beings - and other animals - needed consciousness in the first place. Thus we have this question:
From an evolutionary perspective, if the functions of the brain (which Chalmers often refers to) might have occurred “in the dark”, then why did we need (if we did need) - and why do we still need/have - consciousness?
The nature of the physical-consciousness link is only tangential to this issue.
In this chapter Daniel Dennett doesn't really offer many of his own arguments against John Searle's position. What he does offer are a lot of blatant ad hominems and simple endorsements of the positions of other people (i.e., Strong AI aficionados) on the Chinese Room argument. Indeed Dennett is explicit about his support (even if it's somewhat qualified) of the “Systems Reply”.
We can happily accept that Searle's thought experiment doesn't entirely (or even at all) succeed in what it claims to accomplish. However, Dennett's claims (or those he endorses) don't demonstrate the *possibility* of Strong AI either. In addition, it can also be said that Searle himself never claimed the “flat-out impossibility of Strong AI” in the first place.
ii) Analysis and Argumentation
iii) Clarity
iv) Obscurity
v) Deconstruction
vi) Conclusion
Many analytic philosophers stress the point that analytic philosophy isn't about the sharing of views or positions: it's about the sharing of philosophical tools and a basic commitment to clarity. All this is regardless of what position a particular analytic philosopher may advance.
In his book *What is Analytic Philosophy?*, Hans-Johann Glock elaborates on this position in the following:
“Philosophy is not about sharing doctrines, but about a rational and civilised debate even about one's own cherished assumptions.”
Of course it's also the case that many analytic philosophers do actually “share doctrines”. However, it's just that the sharing of philosophical tools and practices is deemed to be more important than sharing doctrines. It also follows from the sharing of philosophical tools and a commitment to clarity that there can be “rational and civilised debates even about one's cherished assumptions”. That is, the sharing of philosophical tools and a commitment to clarity enables (or allows) rational and civilised debate.
Of course certain questions arise here:
i) Do analytic philosophers really share many - or indeed any - philosophical tools?
ii) Is there genuine civilised debate between all analytic philosophers at all times?
Many philosophers have of course questioned this assumption that analytic philosophers share tools. Others may question the deepness or genuineness of the “civilised debate” too. This basically means that there will be exceptions to i) and ii) above and no one should expect otherwise. However, on the whole, it's easy to see that most analytic philosophers do indeed share many tools and practices.
As for civilised debate.
This runs parallel to an account of science as a whole, which can be distinguished from any accounts of individual scientists. That is, individual scientists can be very unrepresentative: they can falsify experimental data, stick dogmatically to their theories, be paid by big business, let their politics influence their science, etc. Nonetheless, unrepresentative scientists certainly aren't the norm in science. (All this will depend partly on which science we're talking about, etc.)
The same kinds of distinction can be made between individual analytic philosophers and analytic philosophy itself. There may indeed be unrepresentative analytic philosophers. It may even be the case that poor standards (however that's defined) are sometimes displayed within books or even papers. However, as with science, none of this is really true of analytic philosophy as a whole.
ii) Implementation
iii) Causal Structure
iv) The Computer's Innards
v) Chalmers' Biocentrism?
vi) The Chinese Room
It's a good thing that the abstract and the concrete (or abstract objects in "mathematical space" and the "real world") are brought together in David Chalmers' account of Strong AI. Often it's almost (or literally) as if AI theorists believe that (as it were) disembodied computations can themselves bring about mind or even consciousness. (The same can be said, though less strongly, about functions or functionalism.) This, as John Searle once stated, is a kind of contemporary dualism in which abstract objects (computations/algorithms – the contemporary version of Descartes' mind-substance) bring about mind and consciousness on their own.
To capture the essence of what Chalmers is attempting to do, we can quote his own words: it's all about “relat[ing] the abstract and concrete domains”. And he gives a very concrete example of this.
Take a recipe for a meal. To Chalmers, this recipe is a “syntactic object[]”. However, the meal itself (as well as the cooking process) is an “implementation” that occurs in the “real world”.
So, with regard to Chalmers' own examples, we need to tie "Turing machines, Pascal programs, finite-state automata", etc. to "[c]ognitive systems in the real world" which are
"concrete objects, physically embodied and interacting causally with other objects in the physical world".
In the above passage, we also have what may be called an "externalist" as well as "embodiment" argument against AI abstractionism. That is, the creation of mind and consciousness is not only about the computer/robot itself: Chalmers' "physical world" is undoubtedly part of the picture too.
Of course most adherents of Strong AI would never deny that such abstract objects need to be implemented in the "physical world". It's just that the manner of that implementation (as well as the nature of the physical material which does that job) seems to be seen as almost – or literally – irrelevant.
David Chalmers (the well-known Australian philosopher) claims that if something is conceivable, then it's also metaphysically possible. The problem here is that we can distinguish conceivability from imaginability. That is, even if we can't construct mental images of nothing (or nothingness), we may still be able to conceive of nothing (or nothingness). I, for one, can't even conceive of nothing (or nothingness).
But can other people conceive of nothing? Do we even have intuitions about nothing or about the notion of nothingness?
So how can we even name or refer to nothing? (We shall see that Parmenides might have had something here.) There's nothing to hold onto. Yet, psychologically speaking, thoughts about nothing can fill people with dread. There's something psychologically (or emotionally) both propelling and appalling about it. And that's why existentialists and other philosophers – with their taste for the dramatic and poetic - found the subject of nothing (or at least nothingness) such a rich philosophical ground to mine.
The very idea of nothing also seems bizarre. It arises at the very beginning of philosophy and religion. After all, how did God create the world "out of nothing"? Did God Himself come from nothing? Indeed what is nothing (or nothingness)?
ii) Ten Logical Possibilities Before Breakfast
iii) René Descartes
iv) Logical Possibilities as Tools
There isn't much technical argumentation in the first part of this piece about David Chalmers' “arguments from logical possibility”. Indeed, I suspect that many of his logical possibilities really are logical possibilities. I also suspect that many of his arguments are airtight.
In *The Conscious Mind*, Chalmers introduces a logical possibility on almost every page. In it he talks about “God”, “an angel world”, “flying telephones”, “ectoplasm” and a monkey who writes *Hamlet*. Admittedly, most of these logical possibilities are of little interest to Chalmers, either because they're hugely improbable or because they don't help him philosophically. It is zombies which he specialises in - and it is zombies which help him philosophically.
Indeed isn't the position of panpsychism (which Chalmers also endorses) itself very reliant on logical possibility? (For example, it's *logically possible* that an electron or stone has “phenomenal properties”.)
Chalmers' keenness on logical possibilities is not only a philosophical position in itself; logical possibilities also advance his positions within philosophy. Chalmers and other “anti-materialists” require logical possibilities in order to advance anti-materialism. In other words, their arguments only work if one countenances all sorts of strange logical possibilities.
So when a philosopher cites various logical possibilities, all sorts of philosophical positions become available... or *possible*. That's primarily because once a given philosopher sets a particular logical possibility among the pigeons, then it's surely the duty of other philosophers to tackle that logical possibility. But if they don't, then surely they can be classed as philosophical philistines... or “verificationists”. Indeed Chalmers lays his cards on the table when he states:
“In general, a certain burden of proof lies on those who claim that a given description is logically *impossible*.”
This means that there's a lot of weight attached to logical possibilities. In other words, logical possibilities (as well as conceivabilities) serve a larger philosophical purpose. This means that citing logical possibilities isn't just another tool – it is *the* tool to fight materialism (perhaps much more too).
v) Logical Possibility and Natural Possibility
vi) Zombies
vii) Saul Kripke
viii) Chalmers & Goff: Conceivability to Possibility
ix) Two More Conceivings
One may think that only natural (or empirical) possibility is of interest to most people – both laypersons and experts. Indeed David Chalmers himself (sort of) states as much. Yet logical (i.e., not natural) possibility still permeates Chalmers' entire work. And, as we shall see, so too does conceivability; which he strongly ties to logical possibility. Indeed Chalmers' references to natural possibility hardly make sense when taken out of the context of logical possibility. Thus both logical and natural possibility gain much of their purchase by being opposed to one another.
Chalmers himself sums up one major problem with logical possibility (this was touched upon in Part One) when he says that
“[t]here are a vast number of logically possible situations that are not naturally possible”.
That means that there must be mightily good philosophical (or scientific) reasons to spend time on a given logical possibility. (It's easy to believe that there are good reasons when, for example, consciousness is being discussed.) That is, surely it must pay philosophical dividends to do so. Having said that, there's a very large number of uninstantiated natural possibilities too.
So what, philosophically, can we draw out of natural (“although wildly improbable”) possibilities? Well, we can draw one thing out: that they're *naturally possible*. And that's enough for some philosophers. But what else? Well, at a superficial level, it shows us that all sorts of bizarre things are naturally possible. So if it's naturally possible that a monkey could type *Hamlet*, then it's also naturally possible that an ant could take over the world. Why? Because all it takes for something to be naturally possible is that it “conforms to the laws of our world”. So, as far as I can see, a Nietzschean super-ant does appear to conform to the laws of our world. That is, this ant and its actions don't “violate[] the laws of nature of our world”.
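As an illustrative aside (the figures below are my own back-of-the-envelope arithmetic, not from Chalmers or any source quoted here), it's easy to put a number on just how wildly improbable the monkey-typing-*Hamlet* possibility is, even though it violates no law of nature:

```python
import math

# Assume a 27-key typewriter (26 letters plus a space bar) and
# uniformly random, independent key presses - a deliberately crude model.
KEYS = 27

def log10_probability(text_length: int) -> float:
    """Base-10 log of the probability of typing `text_length`
    characters correctly in a single attempt."""
    return -text_length * math.log10(KEYS)

# Even one famous 40-character line is already astronomically unlikely;
# the whole play runs to very roughly 130,000 characters.
line = log10_probability(40)
play = log10_probability(130_000)
print(f"One 40-character line: about 1 in 10^{-line:.0f}")
print(f"The whole play:        about 1 in 10^{-play:.0f}")
```

The point the arithmetic makes is the one in the text: nothing here breaks a law of nature, yet the event's probability is so small that its *natural possibility* tells us almost nothing of practical interest.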
Such cognitivists believe that the brain is also a machine – a computing machine.
Computationalists (computationalism is a branch of cognitivism) claim that all thought is computation. But what does that mean? Are the words 'thought' and 'computation' virtual (or literal) synonyms?
This is all the clearer because almost all conscious processes in the brain seem to be deemed thoughts, and thus also deemed computations. That includes not only the thought that 1 plus 1 equals 2 or that snow is white, but also the rotation of a mental image in the mind, imagining the smell of a rose, and so on. Then again, if rotating a mental image is classed as a thought, then why can't it also be classed as a computation? Especially since, in computationalism, the two terms appear to be synonyms.
It all now depends on what we mean by the word 'computation'.
For a start, we can make the following claims:
i) All of a computer's processes are computations.
ii) Not all conscious human mental processes are computations.
Similarly, we can say:
iii) Many human mental processes are thoughts.
iv) No computer computations are thoughts (i.e. because thoughts have semantic content, intentionality, reference, etc.).
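To make vivid what a computation is in the bare, syntactic sense at issue here (the code is my own toy illustration, not anyone's official account of computationalism), here is a 2-D rotation - the formal analogue of the “rotation of a mental image” mentioned above:

```python
import math

def rotate(point, degrees):
    """Rotate an (x, y) point about the origin by `degrees`.

    This is pure rule-governed symbol manipulation: the standard
    2-D rotation formulas applied to a pair of numbers.
    """
    theta = math.radians(degrees)
    x, y = point
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

# Nothing in this process refers to anything or means anything - which
# is precisely claim iv) above: computations as such lack semantic
# content and intentionality.
print(rotate((1.0, 0.0), 90))  # a quarter turn: roughly (0.0, 1.0)
```

Whether rotating a *mental image* is relevantly like running this procedure is, of course, exactly what's in dispute between computationalists and their critics.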
So what does a third-person absolutist believe? According to David Chalmers, Dennett believes that “what is not externally verifiable cannot be real”. To be more explicit: there's a fundamental connection between any x being real (or existing) and whether or not we can “externally verify” that x. Thus, if we can't externally verify x, it doesn't exist. It's not real.
Dennett explains his third-person absolutism when he states the following:
"I wouldn't know what I was thinking about if I couldn't identify them by their functional differentia."
It's not surprising, then, that Dennett has asked David Chalmers
“to provide 'independent' evidence (presumably behavioral or functional evidence) for the 'postulation' of experience”.
As for David Chalmers: he classes himself as a “dualist”; or, more accurately, his position is one of "naturalistic dualism". Why “naturalistic”? Because Chalmers believes that mental states and consciousness itself are caused by physical systems. So why “dualist”? Because Chalmers believes that mental states - or consciousness generally - are ontologically distinct from, and irreducible to, the physical.
In 1964 Gell-Mann postulated the existence of quarks. (The name was coined by Gell-Mann himself, and it's a reference to James Joyce's novel *Finnegans Wake*.) Quarks, antiquarks and gluons were seen to be the underlying elementary constituents of neutrons and protons (as well as of other hadrons). Gell-Mann was then awarded the Nobel Prize in Physics in 1969 for his contributions and discoveries in the classification of elementary particles.
More relevantly to this piece: in 1984 Gell-Mann was one of several co-founders of the Santa Fe Institute - a research institute in New Mexico whose mission is to study complex systems and to advance interdisciplinary work on complexity theory.
Gell-Mann wrote a popular-science book about physics and complexity science, *The Quark and the Jaguar: Adventures in the Simple and the Complex*, in 1994. Many of the quotes in this piece come from that book.
**************************************************************************
The following words of Lee Smolin (an American theoretical physicist) sum up both Murray Gell-Mann's work and the man himself (at least insofar as they're relevant to this piece). Firstly, Smolin comments on Gell-Mann's work:
“Physics needs a new direction, and the direction should have something to do with the study of complex systems rather than with the kind of physics [Murray Gell-Mann] did most of his life.”
Then Smolin continues with a few words on Gell-Mann himself:
“The fact that after spending a life focused on studying the most elementary things in nature Murray can turn around and say that now what's important is the study of complex systems is a great inspiration, and also a great tribute to him.”
Of course none of the above amounts to a philosophical or scientific argument for moving from the “elementary” to the “complex”. However, it does hint at the importance of gaining a broader picture of nature (or the universe). And that's what both Smolin and Gell-Mann realised. (In Smolin's own case, he added cosmology and philosophy to his theoretical-physics repertoire.)
Despite that, surely it can't be said that “what's important is the study of complex systems”. That's simply to reverse the “reductionist hierarchy”. Complex systems are simply part of the picture: not the most important part of the picture. Indeed it seems a little naïve to replace that previous ostensible hierarchy with a new one.
Murray Gell-Mann himself did appear to offer us a middle-way between (strong) reductionism and the complete autonomy of the individual (“special”) sciences.
Gell-Mann believed that it's all about what he called the "staircases” between the sciences. As Gell-Mann put it (in the specific case of the relation between the levels of psychology and biology):
“Where work does proceed on both biology and psychology and on building staircases from both ends, the emphasis at the biological end is on the brain (as well as the rest of the nervous system, the endocrine system, etc.), while at the psychological end the emphasis is on the mind—that is, the phenomenological manifestations of what the brain and related organs are doing. Each staircase is a brain-mind bridge.”
Most of the above can be summed up in this way:
i) Simply because a scientist (or philosopher) says that x can be reduced to y (not necessarily without remainder),
ii) that certainly doesn't also mean that this scientist (or philosopher) also believes that x is (to use Patricia Churchland's words) “disreputable, unscientific or otherwise unsavoury”.