Bad Medicine
Doctors Doing Harm Since Hippocrates
David Wootton
Table of Contents
Dedication and Praise
Acknowledgements
Note on Sources
List of Illustrations
List of Tables
Introduction: Bad Medicine/Better Medicine
Part 01. The Hippocratic Tradition
01. Hippocrates and Galen
02. Ancient Anatomy
03. The Canon
04. The Senses
Conclusion to Part 01: The Placebo Effect
Part 02. Revolution Postponed
05. Vesalius and Dissection
06. Harvey and Vivisection
07. The Invisible World
Conclusion to Part 02: Trust Not the Physician
Part 03. Modern Medicine
08. Counting
09. Birth of the Clinic
10. The Laboratory
11. John Snow and Cholera
12. Puerperal Fever
13. Joseph Lister and Antiseptic Surgery
14. Alexander Fleming and Penicillin
Conclusion to Part 03: Progress Delayed
Part 04. After Contagion
15. Doll, Bradford Hill, and Lung Cancer
16. Death Deferred
Conclusion
Further Reading
Acknowledgements
Alison Mark first suggested this project. Katharine Reeve commissioned
it. Luciana O'Flaherty adopted it. Students at Queen Mary,
University of London, and at the University of York explored the subject
with me. The University of York gave me a sabbatical in which to
write. Audiences at Birkbeck, University of London; the History of
Science Seminar in the University of Cambridge; the Department of
History in the University of York; and the National Humanities
Centre at Raleigh-Durham discussed chapters with me. Harold Cook,
Lauren Kassell, Stuart Reynolds, and Lisa Wootton read a draft, and I
am grateful for their comments. They are not responsible for my
errors, nor my failings. Nor, of course, is Alison Mark, who has kept
company with this project from beginning to end.
Note on Sources
This book is not burdened with numerous footnotes and a lengthy
bibliography, though I know it will be read by students and scholars as
well as by others with an interest in the subject. For those who wish to
pursue this further, at www.badmedicine.co.uk you will find detailed
bibliographies and notes, along with links to other web sites. You will
also find updates: corrections, clarifications, responses to critics, and
references to literature that has appeared since this book was written.
The very short bibliography you will find at the end is intended only
as an indication of the most important sources on which I have drawn
and the most significant works that have influenced my thinking.
List of Illustrations
1. James Ensor, The Bad Doctors, 1895. Etching.
2. Woodcut, reproduced from Guido Guidi, Opera Varia (Lyons, 1599).
3. Abraham Bosse, Bloodletting, c.1635.
4. Eighteenth-century caricature by Pier Leone Ghezzi, showing a Dr Romanelli.
5. A Greek vase from c.475 BC showing a doctor's surgery.
6. The tombstone of Jason, an Athenian doctor of the second century AD.
7. A doctor inspecting urine in a urine bottle, reproduced from Johannes de Ketham, Fasciculus Mediciniae (Venice, 1522).
8. Anatomy Lesson, from Johannes de Ketham, Fasciculus Mediciniae (Venice, 1522).
9. The title page of the first edition of Vesalius's De Humani Corporis Fabrica.
10 and 11. Two medieval illustrations of skeletons, one from the fourteenth century and one from the mid-fifteenth.
12. The lateral view of the skeleton from the De Fabrica of 1543.
13. The first illustration of the muscles from the 1543 De Fabrica.
14. The seventh illustration of the muscles from the 1543 De Fabrica.
15. The third illustration of the anatomy of the torso from the De Fabrica.
16. The initial letter L, which appears once in the 1543 edition of De Fabrica.
17. An illustration from Juan Valverde de Amusco's Anatomia del corpo humano (1560), in which an écorché or flayed figure holds up his own skin for inspection.
18. The illustration of the valves in the veins from Harvey's De Motu Cordis.
19. Large initial letter Q, showing the vivisection of a boar, from the 1555 edition of Vesalius's De Fabrica.
20. Vivisection of a dog, from J. Walaeus, Epistola Prima de Motu Chyli et Sanguinis (1647).
21. One of Leeuwenhoek's microscopes.
22. The compound microscope used by Hooke, as illustrated in his Micrographia (1665).
23. Seventeenth-century French woodcut of a skull and crossbones, believed to have been produced to be stuck up on the houses of people dying of plague.
24. The apparatus devised by Tyndall for carrying out spontaneous generation experiments.
25. Lithograph by Honoré Daumier, which appeared in 1883.
26. A set of Perkins tractors.
27. Drawing by George John Pinwell, entitled Death's Dispensary, published in an English magazine, 1866.
28. The map of the fatalities in the neighbourhood of the Broad Street pump, from the second edition of Snow's The Mode of Communication of Cholera.
29. A surgical operation performed in Aberdeen according to Lister's principles.
30. Etching by Charles Maurin, c.1896, showing the researchers from the Institut Pasteur, led by Pierre-Paul-Émile Roux, who had discovered serum therapy for diphtheria.
society, we turn above all to the medical profession for help, and the
doctors who treat us belong to a profession that dates back to
Hippocrates, the ancient Greek who, some 2,500 years ago, founded a
tradition of medical education that continues uninterrupted to the
present day. Yet the striking thing about the Hippocratic tradition of
medicine is that, for all but the last hundred years, the therapies it
relied on must have done (in so far as they acted on the body, not the
mind) more harm than good. For some two thousand years, from the
first century bc until the mid-nineteenth century, the main therapy
used by doctors was bloodletting (usually opening a vein in the arm
with a special knife called a lancet, a process called phlebotomy
or venesection; but also sometimes cupping and leeching), which
weakened and even killed patients.
Moreover medicine became more, not less, dangerous over time:
nineteenth-century hospitals killed mothers in childbirth because
doctors (trained to consider themselves scientists) unwittingly spread
infections from mother to mother on their hands. Mothers and
infants had been much safer in previous centuries when their care had
been entrusted to informally trained midwives. For 2,400 years
patients have believed that doctors were doing them good; for 2,300
years they were wrong.
I think it is fair to say that historians of medicine have had
difficulty facing up to this fact. Historians of medicine are a diverse
group, with widely differing views, but in general they no longer
write about progress, and so they no longer seek to distinguish good
medicine from bad. Indeed they try to avoid what they think of as
anachronistic evaluations: 'only the most dyed-in-the-wool Whig
history still polarizes the past in terms of confrontations between
saints and sinners, heroes and villains', wrote Roy Porter (1946–2002,
the greatest medical historian of his generation) in 1989. This book,
on the other hand, is directly concerned with progress in medicine:
what made it possible, and why it was so long postponed. To talk
about progress is to talk about discoveries and innovation, and about
obstacles and resistance: it is inevitably to talk about heroes and villains,
if not about saints and sinners. This book, therefore, is written
One should use these, while extension is going on, to make leverage . . .
just as if one would lever up violently a stone or log. This is a great help,
if the irons are suitable and the leverage used properly; for of all the
apparatus contrived by men these three are the most powerful in action:
the wheel and axle, the lever and the wedge. Without some one, indeed,
or all of these, men accomplish no work requiring great force. This
lever method, then, is not to be despised, for the bones will be reduced
thus or not at all. If, perchance, the upper bone over-riding the other
affords no suitable hold for the lever, but being pointed, slips past, one
should cut a notch in the bone to form a secure lodgment for the lever.
This was a perfectly effective technology, well-grounded in theory;
but Hippocratic doctors persisted in defending bloodletting and
cauterization as if they were just as reliable as the application of a lever
to a stone or a log.
I have deliberately introduced the term 'technology' because I
want to stress that medicine, at least since Hippocrates, has always
been a technology, a set of techniques used to act on the material
world, in this case the physical condition of the patients body. With
technologies it is perfectly legitimate, and not at all anachronistic, to
talk about progress. Thus a steam engine is a technology for turning
heat into propulsion. Progress in the design of steam engines means
either that greater propulsive force is obtained, or the same force is
obtained more efficiently. The definition of progress is internal to the
technology itself. In the case of medicine, progress means that pain is
alleviated, periods of sickness are shortened, and/or death is postponed.
Hippocrates would have recognized this to be progress, so
would Lister, so would Richard Doll, the man who discovered that
smoking causes lung cancer. To ask if there is progress in medicine is
not to ask an illegitimate question, as it might be, for example, to ask if
there is progress in philosophy or poetry.
Hippocrates thought that he could alleviate pain, shorten sickness,
and postpone death. We now know that (in so far as his techniques
acted on the body not the mind) he was wrong. Studies in the nineteenth
species, just as (to use his comparison) plants did. Later generations of
English doctors revered him as 'the English Hippocrates' because he
had refounded medicine as a study, not of patients and their disorders,
but of diseases and their regularities.
From this point it becomes easy, in principle, to compare therapies,
and decide if one is better than another at alleviating pain, shortening
illness, and postponing death. Sydenham claimed to have brought
about a great improvement in the treatment of smallpox (even though
he did not recognize it as a contagious disease). Other doctors had
bled smallpox victims, covered them with hot blankets, and given
them warming drinks, despite the fact that they were suffering from a
fever. Sydenham thought this could lead to boiling of the blood,
brain-fever, and death. He cooled his patients, gave them cool liquids,
and, naturally, bled them, though only moderately. His patients were
certainly more comfortable, and may well have got better faster. For
other conditions, however, his therapies were entirely orthodox. He
believed, for example, in treating a cough, of whatever sort (he was
well aware there were different sorts of cough), with bleeding and
(often repeated) purges (laxatives to induce diarrhoea).
In Sydenham's day, people were beginning the first systematic
study of life expectancies, based on the London bills of mortality,
which recorded the cause of death for everyone who died in London.
As we shall see, the new intellectual tools were being assembled
which would eventually make it possible to evaluate therapies and
measure progress in medicine. The more this was done, the more it
became apparent that traditional remedies were defective. Foucault
gives the example of an early nineteenth-century doctor who abandoned
all the traditional therapies. He was aware of 2,000 species of
disease, and treated each and every one of them with quinine. Now
quinine, a drug that was new in the seventeenth century, really does
work against malaria. Its great advantage in use against the other 1,999
conditions is that (unlike traditional Hippocratic remedies) it does
little harm.
Although Hippocrates had no way of knowing it, his technology
was defective. Hippocratic medicine was not a science, but a fantasy of
disease (bad air was thought to be the cause of epidemic disease in the
mid-nineteenth century just as in the days of Hippocrates), even
though theories of the body had undergone radical change, that I use
the terms 'Hippocratic medicine' and 'traditional medicine' to cover
not just the period when humoral theory was in the ascendant, but
the whole period through to the rise of the germ theory of disease.
Having recognized that therapies stood still even while knowledge
advanced, I had to face a deeply disturbing fact. Much of the new
knowledge was founded on vivisection. This did not greatly worry
me, I have to confess, for as long as I thought that all medical
knowledge was useful knowledge. But how could you justify the suffering
of Harvey's experimental animals when you realized that Harvey was
no better at treating the sick than any other seventeenth-century
doctor? As I worked on this book, I became more and more puzzled
at the way in which standard medical histories ignored vivisection,
which turned out to be absolutely central to the history of medicine.
Vivisection, and even dissection, I realized were difficult and
emotionally disturbing subjects, and one needed to face the fact that
modern medicine had been born out of a series of activities that were
both shocking and distressing. As long as I thought of medical history
in terms of a continuing progress in knowledge, I could assume that
dissection and vivisection were worth it; but once I realized that there
was virtually no progress in therapy before 1865, I was bound to ask
myself how one could justify mangling the dead and torturing the
living.
And then I slowly became aware of a third problem. Histories
of progress are written on the assumption that there is a logic of
discovery. Once you discover (say, germs), it is easy to discover
(say, antibiotics); without a theory of germs you will never discover
antibiotics. A good example is Newton's theory of gravity. As long as
the sun, the moon, the planets, and the stars were believed to circle
around the earth it seemed obvious that there were different laws of
movement on earth and in the heavens: here, natural movement was
to make sense of these delays we need to turn away from the inflexible
logic of discovery and look at other factors: the role of the emotions,
the limits of imagination, the conservatism of institutions, to
name just three. If you want to think about what progress really
means, then you need to imagine what it was like to have become so
accustomed to the screams of patients that they seemed perfectly
natural and normal; so accustomed to them that you could read with
interest about nitrous oxide, could go to a fairground and try it out,
and never even imagine that it might have a practical application. To
think about progress, you must first understand what stands in the
way of progress: in this case, the surgeon's pride in his work, his
professional training, his expertise, his sense of who he is.
Anaesthetics made the work of surgery easier. They were no threat
to surgeons' incomes. At first sight surgeons had everything to gain
and nothing to lose from the discovery of pain relief. And indeed,
from 1846, anaesthesia established itself with great speed. Yet it is clear
from the inexplicable delay, from the extraordinary hostility
expressed towards its inventors, from the use of the phrase 'Yankee
dodge', that there was something at stake, some obstacle to be
overcome. That obstacle was the surgeons' own image of themselves.
Since this book argues that real medicine begins with germ theory,
at its heart there is a most puzzling historical non-event: the long
delay that took place between the discovery of germs and the triumph
of germ theory. It's fairly easy to find names for things that happen:
the Scientific Revolution, the Great War. It's much harder to name a
non-event, but non-events can be every bit as important as events.
Historians regularly insist that to understand the past one must
approach it as if one did not know what was going to happen next.
But, despite this, they are very reluctant to take seriously the idea that
things might have happened differently. The standard view is that
when important things don't happen it is because they couldn't
possibly have happened. Thus the great biologist François Jacob, in
The Logic of Living Things, argues that eighteenth-century biologists
could not solve the intellectual problems presented by sexual
reproduction:
only a year before. One might think that Tyndall should have
changed his mind in the light of the new evidence. Instead he treated
the new evidence as an obstacle to be overcome. He refused to give
up, he refused to give in, he was determined not to be defeated. And
this, every scientist would now say, was the right choice. There can be
no impartial account of Tyndall's refusal to accept the result of his
own experiments, of his stubborn persistence in face of the evidence:
what one makes of it depends entirely on whether one is a proponent
of germ theory or of spontaneous generation.
So don't be misled by the title of this book. This is neither an
attack on the medical profession nor an indictment of modern medicine.
When I was young, doctors twice saved my life: I have the scars
to prove it. More recently, a plastic surgeon performed a wonderful
operation on my right hand, on which I'd lost the use of two fingers.
I'm all in favour of good medicine, but the subject of good medicine
is inseparable from the subject of bad medicine. To think about one,
you need to be able to think about the other, and of the two subjects,
bad medicine is both the less explored and by far the larger. Before
1865 all medicine was bad medicine, that is to say, it did far more
harm than good. But 1865 did not usher in a new era of good
medicine. For the three paradoxes of progress (ineffectual progress,
immoral progress, progress postponed) are still at work. They may
not work quite as powerfully now as they did before 1865, but they
work more powerfully than we are prepared to acknowledge. There
has been progress; but not nearly as much as most of us believe.
In the final chapter of the book I will try to measure the extent of
the progress that has taken place. I think most readers will be surprised
to discover just how limited the achievements of modern
medicine are. And, as we shall see, the paradoxes of progress do not
cover the full range of problems we encounter in modern medicine.
There is, for example, iatrogenesis, where medical intervention itself
creates conditions that need to be treated, a particular case of doing
harm when trying to do good. But other subjects, I want to stress
now, lie outside the scope of this book, important though they are.
This book is not concerned with plain malpractice. There have always
blood from the body. The followers of Hippocrates also had an interest
in cautery, the application of hot irons to parts of the body. These
four forms of treatment (emetics, purgatives, bloodletting, and
cautery) were to remain the fundamental therapies for almost two
thousand years; three of the four were to remain the standard therapies
for far longer than that. (Cautery was largely abandoned in the
Renaissance, but Laennec, the inventor of the stethoscope, offered it
to patients suffering from phthisis (tuberculosis), knowing that
conventional remedies were ineffectual. He made 12 to 15 burns on the
chest with an incandescent copper rod.) It seems likely that these
procedures all predated Hippocrates: the Scythians practised cautery,
and we have a Greek perfume bottle from c.475 BC which shows a
doctor engaged in venesection, with a cupping bowl hanging on the
wall behind him. What the Hippocratics provided was an account
of why these therapies worked: it never occurred to them that they
did not.
4. This eighteenth-century caricature, by Pier Leone Ghezzi, shows a Dr
Romanelli, who was employed by Cardinal Giovanni Francesco Albani.
He is holding an enema syringe.
All the Hippocratics shared a belief that the human body was an
integrated whole. In order to understand what was going on inside it
you had to study the fluids that came out of it (vomit, urine, blood,
phlegm, etc.), but it was assumed that a whole range of other indicators
might serve as signs indicating internal processes. Here is a passage
from a treatise called Epidemics I (c.410 BC) that is amongst those
preserved by later generations of doctors because they believed it to
be by Hippocrates himself. The author describes an outbreak of causus
(perhaps enteric fever) in the autumn. Those affected suffered from
fever, fits, insomnia, thirst, nausea, delirium, cold sweats, constipation,
5. A Greek vase from c.475 BC showing a doctor's surgery.
and they passed urine which was 'black and fine'; death often
occurred on the sixth day, or the eleventh, or the twentieth.
The disease was very widespread. Of those who contracted it death was
most common among youths, young men, men in the prime of life,
was allowed to stand and separate. Black bile, we may suspect, was
invented to bring medicine into harmony with the cosmology of
Empedocles. But it was also believed that all four humours were to be
found, in varying proportions, in the blood, and that they separated
out when blood was left to stand. It was thus easy for bloodletting to
come to be regarded as the sovereign remedy, far more important
than emetics and purgatives.
In Galen's view, 'Whatever sickens the body from internal evil
has a twofold explanation, either plethora or dyspepsia.' Dyspepsia
resulted from eating the wrong foods; plethora from consuming
more food than one burnt up or excreted. Why, in the case of one
patient, did a severe wound heal without becoming infected, while in
the case of another a tiny scratch became infected, red, swollen, and
potentially fatal? Because the second patient was already suffering
from a plethora, an excess in the blood. Without this the scratch
would have been insignificant. Bloodletting thus became a cure for
almost all conditions. Celsus, for example, in the first century AD,
recommended bloodletting for severe fever, paralysis, spasm, difficulty
in breathing or talking, pain, rupture of internal organs, all acute
(as opposed to chronic) diseases, trauma, vomiting of blood. It was
still being used as a nearly universal remedy in the middle of the
nineteenth century.
In the ancient world bloodletting had its opponents. The followers
of Erasistratus (c.330–255 BC) thought bloodletting was dangerous,
and preferred to get rid of excessive blood by fasting. But the main
disputes were over where to let the blood from, for some said it
should be from close to the affected organ, some from as far away as
possible, and over how much to let: the leading authorities were
prepared to let blood up to the point when the patient fainted. Disputes
over these matters were to continue as long as the tradition of
ancient medicine survived. In 1799 Benjamin Rush (one of the
signatories of the Declaration of Independence) was advocating 'heroic'
bloodletting; in the same year George Washington's death was
attributed by some to the copious bleeding his doctors, committed to
this practice, had prescribed. The dispute
was still not over whether to let blood, but rather over how much to
let and where from. A critic of excessive bloodletting still regarded
moderate bleeding as the pre-eminent medical remedy in 1839, and
around 1870 the naturalist Charles Waterton attributed to frequent
bloodletting his success in keeping himself 'in as perfect health as a
man can be'.
Through the centuries, many doctors recommended a regular
regime of prophylactic bloodletting, particularly in the spring. In
Philadelphia in the 1830s it was still the custom, as it would have been
in a medieval monastery, for people to go en masse to the doctors to
be bled each spring. Such bleeding was held to be essential for those
who did not vent their excess of this humour by natural means: in the
case of women, in their periods, and in men in nosebleeds, varicose
veins, and haemorrhoids. These last three were seen as examples of
natural self-therapy. It is obvious to us, in the twenty-first century,
that a nosebleed, a bleeding vein, or a bleeding bottom needs treatment;
for centuries, by contrast, these were welcomed as ways in
which the body healed itself. Women who had ceased to have periods
(an interruption in periods, without pregnancy, in someone of
childbearing age was regarded as extremely dangerous) and men who had
no haemorrhoids had to turn to doctors for an artificial substitute.
The goal of ancient medicine was a balance of humours. An early
text, Airs, Waters, Places, argued that different climates would tend to
produce a predominance of different humours, hence different
physiological types and national characters. This process was seen as
complex, even contradictory. Thus Galen held that the Germans and
Celts, because they lived in a cold, wet climate, had soft, white skin,
while the Ethiopians and Arabs had hard, dry, and black skin. But the
Germans and Celts bottled up their internal heat within themselves:
'Whatever internal heat they have has retreated, along with the blood,
into the internal organs; and there the blood churns about, confined
in a small space, and boils; and thus they become spirited, bold, and
quick-tempered.' To achieve a healthy body and disposition you thus
needed to counteract the effects of the climate and the season: in
Galen, and all doctors after Galen, thus advocated proper diet.
Galen recommended a diet designed to thin the humours, consisting
of fish, fowl, barley, beans, onions, and garlic for all chronic diseases.
They recommended sensible exercise. Galen abhorred gymnastics as
too violent (the claim that gymnastics was the science of health and
medicine the science of disease seemed to him to take no account of
sports injuries) but recommended instead exercise with the small
ball, a game of catch. They recommended the regular use of laxatives
and prophylactic bloodletting. But they also recommended control of
the passions, particularly anger.
Galen says:
In my youth . . . I once saw a man in a hurry to open a door. When he
could not get it to open, he began to bite the key, to kick the door, to
curse the gods; his eyes went wild like those of a madman, and he was
all but frothing at the mouth like a wild boar. The sight caused me to hate
anger so much that I would never appear thus disfigured by it.
He had particular contempt for those who struck out at their slaves.
His father
frequently berated friends who had bruised their hands in the act of
hitting servants in the teeth. He would say they deserved to suffer
convulsions and to die from the inflammations they had sustained.
Once I even saw a man lose his temper and strike his servant in the eye
with a pencil, causing him to lose the sight of one eye. And it is related
of the emperor Hadrian that he once struck one of his household staff
in the eye with a pencil, and the man lost the sight of that eye. When
Hadrian realized what had happened, he summoned the servant and
agreed to grant him a gift of his own request in exchange for the loss he
had suffered. But the injured party was silent. Hadrian repeated his offer:
that he should request anything he wished. At which the servant grew
bold and said that he wanted nothing but his eye back.
In Galen's eyes such behaviour unmanned those guilty of it. His
mother, temperamentally the opposite of his father (an architect), was
so bad tempered she would sometimes bite her maids.
This preoccupation with the passions might seem a purely
02. Ancient Anatomy
Hippocrates and his contemporaries knew remarkably little about
human anatomy, the structure of the bones aside. They made no
to say that someone made up their mind: he talks about the gods
deciding what they should do. By the time we get to Aristotle in the
fourth century BC soul and body are contrasted terms, and deliberation
is a capacity of the soul. Moreover, the soul acts through the body,
but some actions are deliberate, and some actions (e.g. breathing)
take place without thought. After Herophilus there are two systems
in the body. On the one hand the brain, the nerves, and the muscles
(the word now becomes of crucial importance) control voluntary
movement. On the other hand the heart, the arteries, and the veins
represent systems over which the mind has no control, systems of
involuntary action. This involved distinguishing terms that, for the
followers of Hippocrates, had been near-equivalents. For the first
Hippocratics the difference between pulses, palpitations, tremors, and
spasms was merely one of scale: such tremblings could be seen in any
part of the body. After Herophilus, the pulse (singular now instead of
plural) occurred simultaneously and involuntarily in the heart and
arteries; palpitations, tremors, and spasms were now afflictions of the
nervous system, involuntary twitches of a system that should be under
conscious control.
The first Hippocratics had never taken the pulse of their patients,
but now the pulse became a key source of information about the
involuntary system, as opposed to the voluntary one. We now begin
to get some sense of why the idea of self-control was so important to
Galen. To be human was to be in control of those bodily activities
that were voluntary; to lose control, to strike and bite, was to allow the
passions to seize control, and so to become an animal rather than a
human being. There was, however, a fundamental ambiguity in this
way of thinking. One could treat the voluntary and involuntary
systems as two aspects of the human body, as Galen did, or one could
see them as reflecting a fundamental distinction between the physical
and the mental, between body and soul, between the passions and
reason. If one went down this alternative route, dissection, which had
started as a study of function, could now be said to have confirmed a
fundamental claim of Socratic philosophy: that the mind was separate
from the body, that mental functions and bodily functions were different
heat in the body, so that the excretion becomes genuinely smoky; fair
hair comes about when this heating effect is less. In that case the
wedged-in substance is a sedimental product of yellow bile, not of
the black variety. White hair is a product of phlegm. Red hair,
whose colour is midway between fair and white, has an intermediate
position in terms of its origin too: a sediment which is half-way
between the phlegmatic and the bilious. Curly hair comes about either
through the dryness of the mixture or because of the pore in which it
is rooted . . .
Thus every physiological process was to be understood in terms of
the humours and the four basic qualities of hot and cold, dry and wet.
Although Galen saw that a hair could be compared to a blade of grass
with a root, he was only interested in it as part of a whole-body
system, not as a distinct organ.
This helps us to understand a puzzling feature of ancient medical
thinking. In De Medicina (c. AD 40) Celsus says that the Art of Medicine
'is divided into three parts: one being that which cures through
diet, another through medicaments, and the third by the hand'. The
idea that surgery is handwork is straightforward as the Latin word
chirurgia derives from the Greek words for hand and work. But Celsus'
description of cure through diet includes not only regimen or the
non-naturals, but also bloodletting and rocking (regarded as a form of
gentle exercise; even the act of thinking was for Hippocratic doctors a
form of exercise which warmed the soul), and the taking of purgatives
and emetics: in other words all of what we would think of as internal
medicine comes under the heading of dietary healing. Venesection
and purging were considered, at the time, as being means of
re-establishing the necessary balance that had been destroyed by
unhealthy diet, and thus as extensions of dietary therapy. When we
turn to Celsus on medicaments we discover that this section of his
work is not concerned at all with things like purgatives and emetics
(which we might well think of as drugs) but only with substances
applied topically to the body: oils, poultices, embrocations, liniments,
and so forth.
A hundred and thirty years later, Galen also assumes that a drug is
normally something applied to the exterior of the body. It works
by entering through the pores; the finer the material the further it
penetrates into the body. There, according to its nature, it warms or
cools, dries or dampens the body. For Galen drugs that are ingested
work in exactly the same way as those that are not ingested. But if
they are ingested they function both as drugs and as food. Take lettuce,
which cools and is a soporific. Applied to the body its function is
straightforward. Ingested, it initially functions as a drug; but when
digested it functions as a food, and all foods have a heating quality
and encourage wakefulness. Galen thus compares lettuce as a food to
putting green wood on a fireinitially it dampens the fire down, but
in the end it burns brightly.
Historians of medicine happily report that Galen describes 473
simples (uncompounded medicaments); that 1,000 had been listed
in the De Materia Medica of Dioscorides (c. AD 60); that in the early
eleventh century the Arabic scholar Ibn Sina or Avicenna lists 760
medicaments; and that in the thirteenth century Ibn
al-Baytar lists over 1,400 medicaments. But this information is
beside the point unless we remember that (with some exceptions,
such as theriac, used both as an antidote to vipers' bites and as a general
tonic, which was held to work by its specific form) medicaments
were understood simply to heat and cool, dry and dampen, and were
therefore almost infinitely substitutable one for another. Language
consistently tricks us into thinking that there are continuities where
none exist. Our word 'medicine', when it is used to mean a drug,
comes from the title of Dioscorides' De Materia Medica; our word
'pharmacy' comes from the ancient Greek, where, as in Latin and
English, the word for both the science of medicine and for a medicament
is the same; the word 'drug' itself is of Arab origin. Because we
owe our language to the Greeks, the Romans, and the Arabs it is easy
to think that our words mean the same as theirs did; but medicine is
no longer what it was for Dioscorides.
Hippocratic doctors did not treat diseases; in their view diseases
were themselves symptoms of an underlying imbalance of the
humours; it was therefore the patient, not the disease, that had to be
treated. To work through the list of Hippocratic medicaments, as
modern scholars do, looking for ones that we would take to be
effective (not lettuce, for sure) is to miss the point that drugs were rarely
specifics, directed at specific diseases (the case for such drugs was
first argued by Paracelsus in the sixteenth century, using the example
of mercury as a treatment for syphilis), and that our belief, for
example, that bathing a wound in a liquid containing alcohol might
have an anti-bacterial effect would have been incomprehensible in a
world where there was no concept of infection. One of Harvey's
claims for his theory of the circulation of the blood was that it at last
made it possible to understand how drugs could quickly take effect
throughout the body; without such a theory all drugs had to be
understood as being local in their impact. Prior to Harvey, materia
medica was first and foremost about ointments, not drugs, even when
they were ingested into the body. And what these ointments could
be understood to do was constrained by the humoral theory that
governed their use. Between Polybus and Galen that theory had been
developed and refined. For the next fourteen or fifteen hundred years
doctors were more concerned to preserve and transmit this intellectual
inheritance than to question or improve it.
03. The Canon
When Celsus wrote about medicine in AD 40 he identified three
main schools of medical practitioner. The dogmatists, the followers
of Herophilus, believed we must look for hidden causes in order to
explain biological processes, and therefore believed in vivisection and
dissection, even if they had no opportunity to practise either. The
methodists had a simple mechanical account of disease as the result
of particles travelling either too quickly or too slowly through the
body, and believed a doctor could be trained in six months. And the
empirics rejected all theories of disease, insisting that we must learn
from past experience which sorts of intervention are effective. A
century later, Galen (who was a dogmatist) was arguing with the same
groups, and if we look at early Arabic medicine, in the tenth and
eleventh centuries, we find several competing traditions, all claiming
to be descended from Greek antecedents.
Yet, a thousand years later, in both Arab and Christian lands, Galen
had established himself as the one reliable authority on medical
questions. This was perhaps because he was deeply interested in logic,
and so his texts fitted well into a programme of education grounded
in the Aristotelian syllogism. Moreover, Galens understanding of the
body was, unlike that of the methodists and empirics, entirely
compatible with an Aristotelian preoccupation with function. There may
have been important differences between Aristotle's account of
human biology and Galen's, in their understanding of the function
of the brain, for example, and, as we shall see, in their views on
reproduction, but these were marginal compared to their overall
compatibility.
So we can plausibly explain Galen's survival; but it is almost
impossible now to look back to the world in which Galenic medicine
faced competition, because what survive are the books of Galen, not
those of his competitors; the Hippocratic texts survive in large part
because Galen declared himself to be a follower of Hippocrates and
wrote commentaries on key Hippocratic texts. Galen certainly
intended to dominate the field of medicine, and wrote at enormous
length to achieve this effect: the modern edition of Galen's works in
Greek runs to ten thousand pages, and much has been lost; much too
survives only in Arabic translation. This copious production and trust
in the written word must certainly have helped ensure his future
dominance.
Knowledge of Greek medicine spread in a series of waves. The
first Greek doctor to be invited to Rome was Archagathus, in 219 BC,
and over the next five hundred years Greek doctors became more and
more frequent in the capital of the Roman empire: Galen himself
made the journey to Rome. By AD 500 there was general agreement
in Alexandria on the key texts by Hippocrates and Galen that should
(banned by a papal bull in 1299) may have prepared the way for a new
willingness to cut up the dead. The new activity of public dissection
was slow to spread: the first public dissection took place in Spain in
1391, in the German territories in 1404, and it did not become standard
until Vesalius established it as a central part of medical education
in the mid-sixteenth century. In the eighteenth century it became
normal for every student to have some experience of dissection,
which led to a severe shortage of bodies and a trade in the dead
known as body snatching.
Meanwhile, Greek medicine, as transmitted through Arabic, continued
to be the foundation of all medical education. The summary
of medical knowledge in the Canon of Ibn Sina (known in Latin as
Avicenna, d. 1037), translated into Latin in Muslim Spain by Gerard of
Cremona in the 1140s, continued to be used as a textbook at
Montpellier until 1650, and at some Italian universities until the
eighteenth century. So important was this text that an edition in
Arabic was published in the West in 1593. In 1701 the great Dutch
physician Boerhaave gave his inaugural lecture in praise of the
Hippocratic school, and, as we have seen, Sydenham was admired as
'the English Hippocrates'. It was the Sydenham Society that produced
the major translation of Hippocrates into English in 1849.
Doctors thus had a set of key texts in common from AD 500 to 1850.
Galen would consequently have had little difficulty making sense
of a university education in medicine at least until the mid-seventeenth
century, when new discoveries in anatomy began to
come thick and fast. He would have been interested to see that his
scattered comments on diagnosis from inspection of urine had been
assembled to form a new discipline, uroscopy, which had taken shape
in late fourth-century Alexandria: among the texts translated by
Constantinus Africanus was an Arabic text on the subject. And he would
have been dismayed to discover that the Arabs had wedded medicine
closely to astrology, and that this linkage had become part of university
education in Latin Europe, so that doctors routinely took
horoscopes to decide on treatment: in Valencia in 1332 it was decreed
that barber surgeons must consult qualified doctors, or physicians as
04. The Senses
Hot, cold, dry, wet seem to us simple terms, in principle susceptible to
measurement. Galen was much interested in the question of whether
the young were hotter or colder than the old. He believed he had
trained himself to so exactly remember heat that he could compare
the warmth of the same person several years apart, and he decided,
after extended observation, that, as we might say, their temperature
was the same. Or rather he decided something quite different, for
though the temperature was the same,
there are differences in the quality of the heat . . . that of children is more
vaporous, large in quantity, and sweet to the touch; while that of people
in their prime verges on the sharp, and is not gentle at all . . . [it is]
small, dry, and less gentle. Neither therefore is hotter in any simple sense, but
the former appears so . . .
Heat was for Galen a complex, not a simple, quality.
We should not be surprised, then, that Celsus, when he comes to
discuss fever, hastens to stress that a patient should not be regarded as
feverish just because he feels hothe may have been working hard,
sleeping, or he may be suffering from fear or anxiety. We can only
conclude he has a fever if he not only feels hot and has a rapid pulse,
but if also the surface of the skin is dry in patches, if both the forehead feels
hot, and it feels hot deep under the heart, if the breath streams out of the
nostrils with burning heat, if there is a change of colour whether to
unusual redness or to pallor, if the eyes are heavy and either very dry or
somewhat moist, if sweat, when there is any, comes in patches, if the
pulse is irregular . . .
Consequently it will not do just to touch the patient's skin. The
doctor must face him in a good light, so that he may note all the signs
from his face as he lies in bed. The thermometer was invented by
Sanctorius Sanctorius, a friend of Galileo's, in the seventeenth century;
but we can be sure that if Celsus had had a thermometer he
would not have felt that it alone could provide proof of a fever. Indeed
it is striking that the thermometer only became a standard clinical
tool with the death of Hippocratic medicine, spreading from Berlin
(where Ludwig Traube introduced it around 1850) to New York
(1865) and Leeds (1867). In 1791 Jean-Charles-Marguerite-Guillaume de Grimaud was still arguing that the patient's temperature
as measured by a thermometer was of little interest. Nothing
could substitute for the physicians hand. The doctor must apply
himself, above all, to 'distinguishing in feverish heat qualities that
may be perceived only by a highly practised touch, and which elude
whatever means physics may offer'. Thus feverish heat is acrid and
irritating; it gives the same impression as smoke in the eyes.
What would seem to us one of the simplest of all substances, water,
was almost infinitely complex for Hippocratic doctors, who recognized,
perfectly sensibly, that it varied immensely from place to place:
the title of Airs, Waters, Places identifies it as one of the three crucial
environmental variables. 'No two sorts of waters can be alike,' asserts
this author, but some will be sweet, some salt and astringent and
some from warm springs. Water from marshes and lakes will be
warm, thick and of an unpleasant smell in summer and productive
of biliousness. In winter it will be cold, icy and muddied by melting
snow and ice and productive of phlegm and hoarseness. There is
also water from rock springs (hard, heating in its effect, difficult to
the wrong remedies. There can be no doubt that the patients are likely
to be unable to obey and, by their disobedience, bring about their own
deaths.
Medicine defined itself as a science by transferring responsibility for
failure, firmly and remorselessly, from doctor to patient.
One last example may serve to illustrate what it was like to live in a
world, not of quantities, but of qualities. As we have seen, Praxagoras
was the first to understand the pulse as an involuntary movement of
heart and arteries, and his pupil Herophilus was the first to use it as a
diagnostic tool, classifying pulses by magnitude, strength, rate, and
rhythm. It was only for the third of these measures that his timepiece
would have been useful. Galen went further. He wrote at enormous
length on the pulse: a thousand printed pages survive. In The Pulse for
Beginners he explains that arteries have three dimensions: length,
depth, breadth. In other words, in order to understand the pulse he
immediately thinks of the anatomy of the body as exposed by dissection.
The pulse itself must be considered in terms of its strength,
hardness, speed, interval, regularity, and rhythm. Thus a pulse could in
theory be large, long, broad, deep, vigorous, soft, quick, frequent, even,
regular or, at the opposite extreme, small, short, narrow, shallow, faint,
hard, slow, sparse, uneven, and irregular. In anger, he tells us, the pulse
is deep, large, vigorous, quick, and frequent; in pleuritis quick, frequent,
hard, and, consequently, you can be deceived into thinking it is
vigorous (for remember, strength and hardness are different dimensions
of the pulse). Galen devised evocative terms to identify particular
types of pulse. Thus the 'ant-like' pulse is extremely faint, frequent,
and small. Such a pulse appears quick but is not: speed and frequency
again are different dimensions.
To train oneself to identify different types of waters by their taste,
or different types of pulse (and Galen thought the pulse the most
valuable of all diagnostic tools), was to acquire a level of
connoisseurship that in our society we would expect to find only in a wine
merchant or a restaurant critic. Galen himself says that his knowledge
of pulses is not something that can adequately be expressed in words.
like leather that characterizes certain lung diseases and was first
described by the early Hippocratics. They tasted ear wax: if it was
sweet death was imminent, if bitter recovery could be expected.
Galen rejected the claim that the heart was a muscle, not only on the
grounds that one could not control its beat, but also on the grounds
that if one cooked and ate a heart it tasted nothing like flesh. The four
humours (blood, phlegm, yellow and black bile) each had to be
examined with care. According to Avicenna, phlegm could be sweet, salty,
acid, watery, mucilaginous. According to Maurus of Salerno in the
twelfth century, blood could be viscous, hot or cold, slippery, foamy,
fast or slow to coagulate. You had to observe the layers into which it
separated, and once it had separated the solids should be washed and
their texture felt: slippery blood was a sign of leprosy. When Celsus
inspected urine he noted its colour, whether it was thick or thin, its
smell, and its texture (was it slimy?): black, thick, malodorous urine
was a harbinger of death.
Sight was particularly important. We have seen Celsus stress that
the doctor must have a good view of his patient. Every doctor was
trained to look out for the change in facial appearance that marked
the imminence of death: the nose becoming pointed, the temples
sunken, the eyes hollowed, the ears cold and flaccid with the tips
drooping slightly, the skin of the forehead hard and tight. You could
see death approaching. But touch was also fundamental. The first
Hippocratics always palpated the hypochondrium, literally 'the parts
under the cartilage', that is, the sides of the abdomen under the ribs. In
a memorial statue of a doctor from the second century AD we can see
him reaching out to his patient to touch him here, where he is evidently
swollen. The Hippocratic text Prognosis discusses at length
what you could expect to learn by feeling a patient here, and concludes:
'In brief, then, painful hard large swellings [of the hypochondrium]
mean danger of a speedy death; soft, painless swellings which
pit on pressure mean protracted illness.' A hypochondriac was originally
someone with something wrong with their hypochondrium; it
was only in the nineteenth century, when the hypochondrium ceased
in most instances he did not touch her for the purposes of examination.
Here too he acted on the basis of what the patient said and what he
could find out in further conversation. The importance of words and
the public nature of the complaint stand in sharp contrast to the
unimportance of a medical examination and what one can almost call a
taboo against touching . . .
When women occasionally show him (never at his request) parts of
their bodies (a lump on a breast, a hernia, a lump on her right side)
they do so 'with great embarrassment', 'with great modesty and
embarrassment', 'bashful[ly]'. Of one he records that, 'since she was
expected to die, she agreed to have her naked body looked at and
touched'. In this society (and many other early modern societies may
have been similar) only the dead were exposed to the hand and the
eye. Only in the nineteenth century did the living become once more
exposed to the doctors touch, and even then great caution had to be
used; as we have seen, the original function of Laennec's stethoscope
was to overcome the fact that he could not possibly put his ear to a
woman's chest. Family doctors, visiting patients in their homes in the
United States in the 1890s, usually contented themselves with feeling
the pulse and inspecting the tongue.
derived solely from the fact that doctors and patients believed it
would be beneficial, and consequently it was.
We can also be clear that the type of benefits that medicine was
capable of offering, until the last century, and leaving aside some
simple surgical procedures and a very few other treatments, was
effectively restricted to what the body is capable of doing for itself.
Thus if a patient takes a placebo believing it to be a pain-killer they
are likely to experience a reduction in pain, and this reduction is not
just in the mind: the body produces endorphins, which reduce the
pain. In this way the placebo can mimic the working of opiates. But
the body is incapable of producing a substance comparable to aspirin
(introduced in 1899), so that even if you take a placebo believing it
to be aspirin, the body will never successfully mimic the action
of aspirin; your pain relief will still come from the production of
endorphins. Before 1865, as after, doctors were able to marshal all the
resources of the placebo effect, and it is a safe general rule, to which
there were, as Hertzler acknowledged, very few and very limited
exceptions (setting bones, reducing dislocations, operating for bladder
stones and cataracts, and, in later periods, taking opium for pain relief,
quinine for malaria, digitalis for dropsy, mercury for syphilis, orange
and lemon juice for scurvy), that this is all that medicine could do. For
more than two thousand years medicine effectively stood still, despite
all the progress in human biology, and a doctor in ancient Rome
would have done you just about as much good as a doctor in early
nineteenth-century London, Paris, or New York.
But if modern medicine is effective and Hippocratic medicine was
not, it follows that the very idea that there is continuity between the
two is profoundly misleading. The same institutions may educate
doctors in the twenty-first century as in the thirteenth (you can still
get medical degrees at Bologna, Paris, Montpellier); many of the same
words may be used to describe diseases; but modern medicine is no
more a development of ancient medicine than modern astronomy is
a development of medieval astrology. The two are fundamentally
different. At the very beginning of the twentieth century, on the
other hand, the medical care that could be offered by doctors such
appear in the frontispiece) so that people could see for themselves the
source of Galens misconceptions, but such animals were probably
primarily to hand so that they could be vivisected. Anatomy thus
represented new knowledge in a world where the assumption had
long been that there could be no progress beyond the achievements
of the ancients.
Anatomy was seen as being of central importance. It was man's
knowledge of himself, through which the anatomist learnt about his
own body. But at the same time man was a microcosm, a little universe,
an epitome of the macrocosm or larger universe, so that all
knowledge was to be found reflected and summarized in him. And
man had, the Bible said, been made in the image of God, so the study
of anatomy was also the study of the divine. Moreover anatomy
gave onlookers the opportunity to meditate on death and the transience
of life, a theme both philosophical and religious. Finally, the
Renaissance did not see minds and bodies as distinct in the way that
we (since Descartes) do: hair colour, for example, reflected the
balance of the humours, and this determined the psychology of the
individual. To study someones body was also to study their mind. All
this served to give the messy and disturbing task of cutting up bodies
an extraordinary dignity.
Renaissance art had already trained people to look at the body in a
new way, and from the beginning the great artists of the Renaissance
had practised anatomy. Donatello (1386–1466) attended anatomical
dissections (he illustrates one in a bronze, The Heart of the Miser)
and made a bronze sculpture of the skeleton of a horse; Antonio
Pollaiuolo (1432–98) removed the skin from many corpses in order
to see the anatomy underneath. To portray weight, balance, movement,
tension, and strength the artist had to have a direct knowledge
of the structure of the bones and the shape of the muscles. In 1435
Alberti advised anyone painting a human figure to imagine the bones
beneath the skin and to build up from them the muscles and the
surface appearance, and there is a sketch by Raphael in which he can
be seen doing exactly that. The great Leonardo (1452–1519) was so
interested in the structure of the human body that he planned a book
on any topic, and could be confident that the texts at his disposal were
generally accurate. He could now claim to be sure of what Galen
thought and consequently to be in a position to judge whether he
was wrong or right. The printing press and the new scholarly editions
that it made possible were fundamental to Vesalius's enterprise of
surpassing Galen.
In addition, before the printing press medical books had had either
no illustrations or only very rudimentary ones. Manuscripts were
copied by hand and so only the crudest of illustrations could be
employed. With anything complicated the quality was bound to
degenerate as one copy was made from another. With the printing
press came a new emphasis on illustration; woodcuts and (even better)
copper plates could be employed to provide complex and detailed
information. Leonardo saw clearly the possibilities that this opened
up. Beside one of his anatomical drawings of a heart he wrote:
O writer, what words of yours could describe this whole organism as
perfectly as this drawing does? Because you have no true knowledge of
it you write confusedly, and convey little understanding of the true
form of things . . . How could you describe this heart in words without
filling a whole book? And the more minutely you try to write of it the
more you confuse the mind of the listener.
Vesalius invented the process of labelling parts of illustrations with
letters keyed to an accompanying text, so that readers could turn back
and forth from text to illustration using each as a commentary on the
other.
Vesalius was also able to draw on the great discovery of Renaissance
art, perspectival representation, to produce images that created
the illusion of being three-dimensional, without which it would have
been impossible to represent the interrelationship of the different
parts of the human body. Raised in Brussels and Louvain, educated in
Paris, by 1537 Vesalius was teaching in Padua, and to illustrate his
great work, the Fabric of the Human Body, he turned to the nearby city
of Venice, to the artists in Titian's studio. The first scientific drawings
employed the skills of the most highly trained artists of the day.
Vesalius was the first to bring together anatomy, art, and the printing
76
press. In principle, Leonardo could have beaten him to it; but the
enterprise would have been impossible before 1500, when a lavishly
illustrated book would have been hopelessly expensive (the first
anatomical drawing made from direct observation had appeared in print
as recently as 1493), and eccentric before 1531, for up until then the
task of catching up with the knowledge of the ancient Romans was
still incomplete. Before Vesalius, the most important work to pioneer
anatomical illustration was Berengario da Carpi's Commentaria of
1521; Vesalius published his first illustrated medical text, the Tabulae
Anatomicae in 1538, in collaboration with an artist, Joannes Stephanus
of Calcar: he must have begun work on the Tabulae almost immediately
on arrival in Padua. He was clearly determined to waste no
time.

10 and 11. These two medieval illustrations of skeletons, one from the
fourteenth century and one from the mid-fifteenth, give an indication of
the very varying quality of the illustrations accompanying medieval
medical manuscripts; but even the finer of the two, an exceptionally
detailed image for a medieval manuscript, falls far short of the standard
of accuracy established by Vesalius.
Illustration, of the quality pioneered in the Fabrica, enabled the
anatomist to make manifest exactly what it was that he thought he
had seen. His successors could compare both his words and his plates
with what they found on the dissecting table, and if there was a
discrepancy they could be certain that they had found something
new. Leonardo carried out a number of dissections, and in his drawings
we can trace the development of his anatomical understanding.
At first he held all sorts of mythical beliefs derived from ancient
authors: for example, that there was a duct connecting the penis to the
brain, so that semen contained not only matter from the testicles, but
spirit from the brain. The great English anatomist, Thomas Willis, was
still looking for such a duct in the 1660s. As time passed, Leonardo
There was nothing new, then, about the idea of a skeleton. Vesalius,
however, turned the articulated skeleton into a central pedagogical
aid: he had one hanging by the body being dissected as he lectured
and cut, and, in imitation of him, generations of anatomists furnished
every anatomy theatre with its skeleton.
Vesalius could use skeletons as pedagogic aids because he had a
new method for producing them. He implies he is following the
example of Galen, but the reference he gives to Galen is false, and
perhaps deliberately misleading. He tells us, in the opening pages of
the Fabrica, that his predecessors had put bodies in coffins, covered
them in quick lime, and then, after a few days, cut holes in the sides of
the coffins and put them in a stream. After a while, the coffins were
removed from the running water and opened; the flesh had washed
away, leaving the bones, still tied together by ligaments. But the dark
ligaments concealed much of what needed to be seen.
Vesalius's method was very different. In his kitchen, he boiled up a
large vat of water. He carved up a body, removing as much flesh as
possible, and carefully putting aside loose pieces of cartilage, including
the cartilage in the tip of the nose and the eyelids. He then boiled up
the body until it fell apart, pouring off the fat and straining the liquid
so that nothing was lost. He was left with beautiful clean bones that
could be wired together to create an almost perfect representation of
the living skeleton. Those little bits of cartilage which could not be
reattached (the tip of the nose, the stiffening to the eyelid, the ears) he
strung together on a necklace to decorate his teaching aid, which was
then made portable by being mounted on a folding stand and encased
in a box; one of Vesalius's skeletons survives to this day in Basle.

12. The lateral view of the skeleton from the De Fabrica of 1543.
There is something profoundly alarming about the story of how
to make a skeleton: Vesalius is boiling bones as if he was making beef
stock; he is chopping up bodies in his own kitchen as if he were about
to eat them. By beginning with bones, and with his recipe for producing
skeletons, Vesalius was inevitably reminding his readers that
there was something shocking about dissection. As already noted, a
one hand he employed the finest artists to turn his cadavers into
aesthetic objects. He carefully posed his dead bodies so that they
could be represented as though still alive. He had them illustrated in
landscapes, as if walking about. When he came to illustrate the viscera,
where it was clearly impossible to make a corpse look alive with its
guts hanging out, he created the illusion that an antique statue was
being opened up to discover flesh-and-blood organs within. But
then, he provides an illustration to show just how his bodies were
posed: a corpse held up by a rope, hanging from a pulley, bits of flesh
dangling from the bones. When he dissects the brain, he allows you to
see (after the idealized anonymity of the muscle men) the moustache
and facial characteristics of the corpse: his friends would be able
to recognize him. And he provides an illustration of the lower torso,
with legs splayed and dangling penis, which makes it look like a hunk
of meat on a butcher's slab. At one moment he is a magician, beautifying
death; at the next he is telling you it was only a trick, and showing
you how terrible the dead body can be.
We find the same contradiction in the text. At one moment
Vesalius is writing of anatomy as a divine calling, at another he is
boiling human bodies in a vat. It is Vesalius who tells us that he
obtained the first body he worked on by pulling it down off a gibbet
and carrying it home in pieces under cover of darkness; Vesalius who
tells us that his students stole the unburied body of a woman who had
recently died, and quickly flayed it so that those who knew her would
not recognize her; Vesalius who tells us that one of the bodies he
dissected was that of a recently buried prostitute famous for her
beauty, her body stolen from the cemetery; Vesalius who tells us that
his students had keys made so that they could get easy access to bones
and bodies in the cemeteries. (The large initial letter I which opens
book VII shows putti robbing a grave.) Vesalius repeatedly tells us, in
short, that he obtains bodies by stealing them and makes it absolutely
clear that much of this activity is criminal: in the case of the flayed
woman, her relatives went straight to a judge to protest the theft of
her corpse. In 1497, the anatomist Alessandro Benedetti had claimed
the law allowed the dissection of 'unknown and ignoble' bodies,
those of foreigners and criminals, who had no one to protect their
honour, but in Venice at least the law was tightened up in 1550 to put
an end to tomb robbing by anatomists, and many of the stories with
which Vesalius had regaled his readers disappeared from the revised
edition of the Fabrica in 1555.

13. The first illustration of the muscles from the 1543 De Fabrica.
14. The seventh illustration of the muscles from the 1543 De Fabrica.
15. Third illustration of the anatomy of the torso from the De Fabrica:
this is one of a series of images that turn the body into an antique statue.
A tiny detail in the text illustrates Vesalius's obsession with the
transgressive. Once an initial letter had been designed it was reused
whenever the same letter occurred: thus the large initial V that stands
at the head of the preface also stands at the head of book V. But there
are two small initial Ls: the standard initial L shows a body being
removed from the gallows; but at the point in his text where he
discusses the anatomy of the arse, Vesalius has an initial L which
shows a putto shitting. This is, quite straightforwardly, a dirty joke; but
what Vesalius is doing here is shitting on his own book.
Vesalius was not the only one to tell stories against himself: Leonardo
joked about having the quartered bodies of human beings lying
around his studio, as if it was a butcher's shop. It is worth remembering
that in Renaissance Europe, butchers, like executioners, were
always social pariahs, forced to live on the outskirts of town, and
unable to marry the daughters of other tradesmen. One artist, the
sculptor Silvio Cosini, as if in a Renaissance Silence of the Lambs, even
made a T-shirt for himself out of the skin of a dissected body: scolded
by a friar, he gave his shirt a decent burial. The main legitimate source
of bodies was the scaffold (remember the initial L with its body
being removed from the gallows), but bodies that had been executed
were inevitably badly damaged. Not surprisingly, anatomists were
eager to cut out the middleman; the great anatomist Fallopio (the
showed was that the blood could flow through the veins in only one
direction. Realdo Colombo (d. 1559) had already argued that there
was a unidirectional flow from right ventricle to left atrium, the
pulmonary transit, and Harvey was acquainted with Colombo's work,
although if he was indebted to it he never acknowledged the fact.
Second, close study of the exposed beating heart in animals convinced
Harvey that Galen had misinterpreted the heart's action:
when Galen thought it was contracting it was actually expanding, and
vice versa. Harvey watched the heart more closely than Galen had
been able to by studying its movement in slow motion in dying
mammals and in cold-blooded creatures. An important consequence
of this discovery was that Harvey could now see that the heart beats
in synchrony with the pulse in the arteries, rather than, as Galen had
believed, the two beats being out of step. This opened the way to
recognizing that the pulse was merely the heartbeat reflected in the
arteries.
Third, Harvey saw that if blood was forced out of the heart on
each beat, then large quantities of blood must flow through the arteries;
and if the valves in the heart, like those in the veins, were efficient,
then the flow here too must be unidirectional. The consequence was
that blood must flow through the system much more rapidly than it
could be produced. Where was it coming from and where was it
going? The only possible answer was that there was some mechanism
for recycling it, that it was going round in a circle. Galen had already
argued that arteries and veins were connected at their extremities, for
he knew that if one cut an artery all the blood would be drained
out of the veins as well as the arteries. Harvey had only to argue
that blood could flow through from arteries to veins by hidden
connections to have an account of the circulation of the blood. He
had a model in Aristotle's account of the circulation of water: water
evaporates, falls as rain, seeps through the ground, runs in rivers to the
sea, evaporates, and so on. Parts of this process (evaporation and
percolation) are invisible, but one can tell they are taking place, for
otherwise the rivers would eventually run dry.

18. The illustration of the valves in the veins from Harvey's De Motu
Cordis.
Fourth, Harvey appealed to a number of key experiments where
one could see this process at work. One had only to cut an artery to
see the blood shoot out, and to see that the power of the jet reflected
the rhythm of the heartbeat. Where Galenists had taken this copious
flow to be a sign of the body's response to injury, Harvey took the cut
in the artery wall as exposing what was going on anyway: the blood
flowing rapidly under pressure from the heartbeat. Moreover, the
blood always came from the side of the cut closest to the heart if one
cut an artery, and from the side furthest from the heart if one cut a
vein: Galen knew that this was the case, but he had no explanation for
it, and in fact it is impossible to explain until one recognizes that the
blood flows in only one direction. Again, when Harvey ligated the
vein in the back of a snake, he could see the vein between the ligature
and the heart being progressively emptied of blood as the blood
was pumped through the heart into the arteries: here was the
unidirectional flow of the blood made visible.
Harvey's arguments were simple and, he thought, conclusive.
But the first argument merely restated Fabricius's discoveries, and
the third was essentially hypothetical. It was his second and fourth
arguments that introduced new evidence, and both of these were
dependent on experiments on living animals. Harvey's book begins
with a couple of prefaces, an introduction, and a first chapter, subtitled
'the author's strong reasons for writing'. He gets down to serious
business in chapter 2: 'The nature of the heart's movement, gauged
from dissection of living animals'. Chapter 3 is entitled: 'The nature
of the movement of the arteries, gauged from dissection of living
animals'. Chapter 4: 'The nature of the movement of the heart and of
the auricles [i.e. atria], gauged from dissection of living animals'.
Without the dissection of living animals (the word 'vivisection' was
not invented until 1702, so that when Harvey discusses dissection
one always has to check whether he is dissecting the living or the
dead; translations, such as the one I use, which employ the word
'vivisection' are anachronistic) Harvey would have had no argument.
Harris's The Heart and the Vascular System in Ancient Greek Medicine,
has been written to answer this question. In 455 closely printed pages
Harris conclusively demonstrates (against the claims of some scholars)
that Galen had no knowledge of the circulation of the blood; but
he is still unable to provide any satisfactory account of why such
knowledge was unavailable to Galen.
In short, there was no major cultural or intellectual gap between
Harvey's world and Galen's. Where Vesalius's achievements
depended entirely on the new technology of the printing press,
Harvey's discovery could easily have taken place in a manuscript
culture. The experimental method, comparative anatomy, mechanical
models, quantification, and vivisection were as familiar to Galen as
they were to Harvey. What makes this story even more complicated is
that Galen was not always mistaken when Harvey claimed that he
was. From Vesalius on, anatomists had puzzled over Galen's claim that
there were fine tubes between the left and right ventricles through
which blood could seep. Harvey's argument in part depended on
denying the existence of such tubes, and yet they do exist: a fact that
most modern histories of science refuse to acknowledge. Again, in a
famous experiment copied from Erasistratus, Galen had attached a
tube to a hole cut into the wall of an artery; below the tube, he said,
rebutting Erasistratus, the pulse ceased to exist, which Galen interpreted
as evidence that the pulse was a contraction in the wall of the
artery. Harvey failed to discuss this experiment in De Motu Cordis, but
in a reply to his critics he said that Galen's experiment was almost
impossible to perform because of haemorrhaging. Nevertheless if one
did perform it one found that the pulse occurred below as well as
above the insertion point of the tube, evidence that it was caused by
the flow of blood, not by the artery wall. Attempts to repeat this
experiment, from the seventeenth century to the present day, have
produced equivocal and contrasting results: there is experimental
support for Galen as well as for Erasistratus and Harvey.
It is very hard to fault Galen's powers of observation, even if we
now prefer Harvey's explanations to Galen's. Galen, for example,
describes tying off the two carotid arteries in an animal in order to
prove that vital spirits reached the brain through the nose, and not just
through the blood. The animal survived and continued to function.
Surely Galen had botched his experiment? In the late seventeenth
century Thomas Willis discovered what is now called the circle of
Willis, a network of blood-vessels which ensures that even if a carotid
artery is blocked, blood still reaches the whole of the brain; both
carotid arteries can even be blocked, and a supply of blood can reach
the brain through the vertebral artery. Whether that supply could be
adequate, and whether Galen had botched his experiment, is a moot
point: one medical textbook expresses the view that obstruction of
both carotids in a human would probably not be compatible with
survival; the pathophysiology of death by strangulation continues to
be controversial, but obstruction to the veins may well be as important
as obstruction to the carotid arteries.
Perhaps if there had been a continuing tradition of vivisection after
Galen, an ancient Roman would have discovered the circulation of
the blood some fourteen hundred years before Harvey; certainly once
Vesalius had rediscovered Galen's techniques it took only a century to
establish first the pulmonary transit, then the valves in the veins, and
finally the circulation of the blood. The important thing to recognize
is that, even if an ancient Roman had discovered the circulation of the
blood, it would have made virtually no difference to the history of
medicine. For Harvey's revolutionary discovery had only limited
implications for medical therapy. The discovery that there was one
unified circulatory system meant that there was little point in worrying,
as doctors were busily doing in the time of Vesalius, about
where one should draw blood, as blood drawn from any vein would
affect the system as a whole (nevertheless, doctors continued to
debate this question in the nineteenth century). And Harvey's
account of circulation made it easier to understand how drugs and
poisons could work on the body as a whole, easier too to understand
how to use a tourniquet in phlebotomy. But Harvey had no intention
of questioning traditional medical therapies: he relied on the
worthless therapies of bloodletting, purges, and emetics just as much
as all the other disciples of Galen. He claimed to have a better
have taken place failed to occur. In the 1670s and 1680s the microscope
made possible a series of major discoveries. These discoveries
should have been followed by others; yet they were not. Instead
the microscope was abandoned as a significant tool for biological
research; not until the 1830s did serious research recommence. It used
to be thought that there was a straightforward explanation for this: the
microscopes of the 1830s were much better than those of the 1680s.
Now we know that this was not the case. So what went wrong?
The telescope and microscope were invented simultaneously, and
probably by the same person, in Holland between 1608 and 1610.
News of the new devices reached Galileo, whose Sidereal Messenger
(1610), based on his discoveries with the telescope, immediately
began the revolution in astronomy. Galileo himself constructed
microscopes (the word is first used of one of his instruments, in 1625);
but the first detailed account of the interior construction of a living
being based on the use of a microscope did not appear until 1644, in
Giambattista Odierna's L'occhio della mosca, or The Fly's Eye. Here is
the first mystery: where the relevance of the telescope to astronomy
was recognized immediately, the relevance of the microscope to
biology and medicine was not recognized for a generation or two.
It was not until the 1660s and 1670s that the microscope began
seriously to be put to use, in Italy, England, and Holland, with a host
of studies by the five great microscopists of the seventeenth century.
The first was Marcello Malpighi, who pioneered the use of the
microscope to explore minute biological structures, beginning with
De Pulmonibus, On the Lungs (1661) and more or less ending with De
Ovo Incubato, The Incubation of the Egg (1675), based on the
microscopic study of the incubation of chickens' eggs. Classical medicine
had assumed that liver, spleen, kidneys, lungs were essentially made
out of congealed blood and were therefore formless lumps of
material; Malpighi showed that these organs had complicated internal
structures. Next came Robert Hookes Micrographia (1665), which
had an enormous impact with the public because of its wonderful
illustrations, including a double-page pull-out image of a louse. Then
Swammerdam.
At once, one would have thought, efforts would have concentrated
on trying to actually see a mammalian egg. But after the death of de
Graaf (1673) and Swammerdam (1680) research stood still. In 1737
and 1738, when Swammerdam's unpublished manuscripts were
finally published, more than fifty years after his death, they still
represented entirely new knowledge, but did not contain any observation
of the ovum, which had to wait until von Baer's work in 1827. Where
in the late 1660s there had been a whole group of Dutch scientists,
de Graaf, Swammerdam, Horne (d. 1670), Nicolaus Steno (d. 1686,
who had discovered the ovaries of fish in 1667), competing to be the
first to make discoveries in this field, it took one hundred and fifty
years to bring their work to a successful conclusion. In the meantime,
in the world of biological research, time stood still.
Despite their failure to actually look for eggs, scientists between
the 1680s and the 1830s generally took the view that plants, fish,
and mammals all developed from eggs and that the new life existed
preformed within the egg: we now call this ovism. Where for two
thousand years, Aristotelians had held that new life comes into existence
at conception, the dominant view from the late seventeenth
century was that all new life implied new creation, and any act
of creation involved a miracle. Rather than accepting that God
was constantly working miracles, scientists preferred to argue that all
life had been created by God when he created the world. What
appeared to be new life was rather the development of existing life.
Swammerdam showed that butterflies, which appeared to be new
creatures born out of the pupa, in fact existed already in the caterpillar:
by dissection he could identify the presence of their organs within
the caterpillar's body. So too the parts of the full-grown plant could
be found within the seed, as Malpighi showed in 1675. The logical
conclusion of this line of argument was that Eve had within her body
the eggs which would develop into her children, and within those
eggs were already preformed the eggs which would develop into her
daughters children, and so on, as in Russian dolls, each with a smaller
doll nested within it. Eve thus contained nested within her ovaries
every future human being, already created, already formed, merely
awaiting development. Preformation was held to entail pre-existence.
There were obvious problems with this line of argument. It made
it very difficult to explain why offspring sometimes took after their
fathers: as early as 1683 Leeuwenhoek had noted that if you crossed
large white female domestic rabbits with small grey male wild rabbits
you always got small, grey rabbits; in 1752 Maupertuis showed that in
human beings polydactylism (having six fingers, not five) is always
inherited in the male line. It was hard to see how ovism could explain
such cases where characteristics were inherited from the male. It also
made it hard to understand how you could have hybrids, how a
donkey and a horse could mate, for example, and produce a mule.
Moreover, if all the parts of a creature existed in the egg, and merely
developed rather than being constructed from nothing, how was it
possible for some creatures to regenerate parts that had been lost?
In 1688 Claude Perrault wrote a dissertation on the regeneration
of lizards' tails. In 1712 René Antoine Ferchault de Réaumur published
an account of the regeneration of crayfish claws. Worst of all,
in 1741 Abraham Trembley showed that you could cut a polyp into a
dozen pieces and it would turn into a dozen polyps. Was this not the
creation of new living beings?
Leeuwenhoek used his rabbits to argue that the future creature
existed preformed not in the egg but in the sperm (which he claimed
to have been the first to observe through a microscope); thus offspring
took after their fathers, not their mothers. But fundamentally
spermaticism (as Leeuwenhoek's alternative to ovism is called) was
open to exactly the same objections as ovism. Throughout the eighteenth
century most biologists remained attached to ovism despite
all the obvious difficulties, and despite the fact that they had no
clear account of what sperm were for. (If anything: in 1785 Lazzaro
Spallanzani claimed to have definitively shown that one could
fertilize frog spawn with frog semen from which every spermatozoon
had been removed.) Ovism remained dominant until the birth of
cell theory in the 1830s, despite the fact that it could not explain
plague germinating in the human body. Why did Harvey not understand
Nardi on this subject? Why was his response so beside the
point? For certainly the point did elude him: in the theory of seeds
of plague a seed was something that could be transported from one
place to another, could be carried in clothes, could lie dormant, and
then prove fertile. The answer would seem to be that Harvey had
radically misunderstood Nardi's argument because of a fundamental
ambiguity in Latin. In Latin the word semen means either 'semen' or
'seed'. The same ambiguity was once present in English, where 'seed'
was used to mean semen, as in the biblical Onan spilling his seed
upon the ground. Even modern scholars can get confused by this
ambiguity: there's a striking example in an article published in Medical
History in 1977. Nardi had compared the mysterious plague agent to
invisible seeds, while Harvey had read him as comparing it to invisible
semen. Outside the body semen rapidly loses its efficacy. It cannot be
transported from one place to another or lie dormant. You don't get
pregnant by touching someone else's clothes. Harvey, preoccupied
with generation and so with semen, has completely missed the point
of Nardi's argument. We know Harvey never received a reply from
Nardi; we do not know if his letter ever reached him.
Nardi, as it happens, does not mention Fracastoro. His authority
(apart from Lucretius himself) is Felix Platter. Platter, in his De
Febribus (1597) and his Quaestiones (1625), had explicitly rejected
arguments for the spontaneous generation of plague and syphilis,
since these could not account for the fact that these were new diseases:
if they could be spontaneously generated, they would have
arisen over and over again. The only explanation, Platter argued, is
that these two diseases had come into existence once and once
only, with the creation of the world, but that their distribution was
erratic in time and space because they were spread by contagion. He
thus carefully propounds the theory that these diseases are spread by
seeds or germs; in the case of plague he notes that everyone agrees
that external causes (miasmas) are also relevant; and he is prepared
to accept, after a long debate with himself, that internal causes, the
balance of the humours, are also relevant to susceptibility, though he
recognizes that the fact that the plague strikes young and old alike
might be taken to mean that the internal state of the body is as
irrelevant as it is to someone being shot at by arrows. In any case,
the germs themselves are a sine qua non, an essential prerequisite, a
necessary condition for the spread of the disease. Platter is thus a
proper germ theorist, the earliest known to me: in the case of contagious
diseases, he denies that they can be spontaneously generated,
and all his language implies that he is thinking in terms of animate
contagion.
I never expected to come across Felix Platter. He does not appear
in the literature (at least the English-language literature) on germ
theory. I did not expect to find a proper germ theorist before the
invention of the microscope. Yet he is hardly well-concealed. Harvey
led me to Nardi, Nardi led me to Platter. If no one else has followed
this route, it can only be because the intellectual origins of modern
medicine remain a relatively unexplored field.

23. This seventeenth-century French woodcut of a skull and crossbones
is believed to have been produced to be stuck up on the houses of people
dying of plague. It reflects the belief that plague was contagious, and
that it was therefore essential to avoid contact with people suffering
from it.
We might think that Leeuwenhoek, having discovered the
infusoria, would immediately have decided that they were a possible
cause of disease. Towards the end of his life he was certainly aware that
some thought that plague is caused by little animals transported
through the air. In 1702 he recognized that living beings could function
like seeds: he discovered that the minute rotifer could be
reanimated after years of desiccation. But Leeuwenhoek never
showed any interest in theories of animate contagion. Amongst his
contemporaries, the only microscopist to develop such an argument
was Nicolas Andry de Boisregard, whose An Account of the Breeding of
Worms in Human Bodies (French 1700; English 1701) maintained that
almost all diseases were parasitic infections: 'If we consider . . . the
almost infinite number of little animals which microscopes discover
to us, we shall easily find that there is nothing in nature into which
the seed of insects may not insinuate itself, and that a great quantity of
them may enter into the body of a man, as well as into those of other
animals, by means of the air and aliments.'
The important point is not that Leeuwenhoek failed to develop
this line of argument; it is that it is a line of argument that anyone
could have developed who was familiar with Leeuwenhoek's work
and with the idea of animate contagion: Leeuwenhoek's friend Nicolaas
Hartsoeker, for example, who wreathed himself in tobacco smoke
to kill off the invisible wormlets that carried the plague. There was no
need to wait more than a hundred and fifty years for Lister, any more
than there was any need to wait a hundred and fifty years for von
Baer. From 1546 to the early 1720s there was a lively intellectual
tradition debating animate contagion. Yet, as Charles and Dorothea
Singer say in their study of this subject, from the year 1725 nothing
of real value appeared on the subject until the 1830s. Here too time
stood still. Bonomo's work on scabies was not followed up until well
into the nineteenth century. Agostino Bassi, a pupil of Spallanzani's,
was the first to identify a germ that was the cause of a disease: in
1835 he published work showing that the muscardine disease of
silkworms was caused by a fungal infection; the work had been
done years before, but he had lost time trying to find someone who
was prepared to buy information about silkworm disease as a valuable
commercial secret. His study of silkworms was followed by a series of
publications, beginning in 1844, arguing that human diseases were
caused by microscopic parasites. Bassi's research provided the model
for J. L. Schoenlein's demonstration in 1839 that ringworm in
humans was a fungal infection: the first occasion on which a human
disease had been shown to be caused by a germ.
Between Bonomo's work and that of Bassi and Schoenlein a
century and a half had been wasted because there was no interest in
using the microscope to study disease. During this period obvious
lines of enquiry were not followed up. Let me give one example,
which is to be found in a little book published in 1810, and frequently
reprinted thereafter. Charles Nicolas Appert's L'Art de conserver, pendant
of it could be poured off, and then casually recorked, has been preserved
almost without any alteration . . . I present it here in the same bottle, so
that you can convince yourself of a fact that I would have had difficulty
in believing, if it had been reported to me before I saw it with my own
eyes.
The italics are fully justified. According to all the established theories,
once fresh air entered the bottle, putrefaction should immediately
have followed. And yet sometimes it does not.
Why not? Fifty years later, Pasteur would show that everything
depends on where you open the bottle. If it is in a room where the air
is relatively free of germs, then putrefaction need not follow. But to
understand this, you need to have grasped that micro-organisms cause
putrefaction. Spallanzani claimed to be able to produce germ-free
environments in which micro-organisms did not develop. But he had
to break open his flasks in order to search inside them for
micro-organisms. He kept them for days, but not for months or years. And so
he never recognized what would seem to us an obvious fact: where
there are no micro-organisms there is no putrefaction.
The first person to properly understand this was Schwann in 1837.
He had shown that yeast was alive, and could be killed by heat, thus
halting fermentation. Putrefaction, he set out to show, was also caused
by micro-organisms, and could be halted by heating. What Spallanzani
had not understood was that micro-organisms transform the
world in which they live. Watching them through his microscope,
Spallanzani saw them moving about in a world he had created. It
never occurred to him that they had the power to transform their
own world. Schwann, who had measured the transformation of sugar
and starch into alcohol, understood this very clearly. The principle
was later to be stated by Lister: 'We know that it is one of the chief
peculiarities of living structures that they possess extraordinary
powers of effecting chemical changes in materials in their vicinity,
out of all proportion to their energy as mere chemical compounds.'
Spallanzani did not know this, which is why he did not make the link
between micro-organisms and putrefaction, and did not invent
pasteurization.
Anyone who understood what Appert was doing when he bottled
beef stew (killing germs) was in a position to invent antiseptic
surgery, which is another way of applying the principle that germs
cause putrefaction. Anyone familiar with Spallanzani's work should
have been able to understand that Appert was killing germs and
consequently they should have understood that, in the absence of
micro-organisms, putrefaction will not occur. The passage in italics
on p. xix of Appert's book was absolutely crucial evidence against the
established theory of putrefaction and in favour of a germ theory of
putrefaction, but nobody grasped its significance at the time. The fact
that nobody did understand the significance of Appert's method of
conservation does not mean that no one could have understood it. At
any time after 1810 the germ theory of putrefaction and antiseptic
surgery were real intellectual possibilities, even though the first
appeared only in 1837 and the second only in 1865.
We have thus looked at three areas where research in the 1830s
effectively took up from debates in the 1680s: reproduction, spontaneous
generation, and animate contagion. I have discussed the last
two topics as if they can be kept separate, but in reality they are
intimately related. The idea of animate contagion only becomes
powerful when it is combined with the denial of spontaneous
generation, for then if one can kill the creatures that cause contagion
one can eliminate disease. It is clear that a number of those who
advocated animate contagion (e.g. Kircher and Hauptmann) were
perfectly comfortable with the idea of spontaneous generation;
others, such as Cogrossi, took it for granted that micro-organisms
were never spontaneously generated.
One might think, then, that the question of spontaneous
generation had to be resolved before germ theory could triumph. It
comes as a surprise to discover that germ theory triumphed while the
issue of spontaneous generation was still subject to a lively debate.
The whole question was only finally resolved in 1877, when John
The light source shows that the air in the box contains no particles that
reflect light, and thus that it is sterile.
Tyndall was a Darwinist, but what his experiments seemed to show
was that Pasteur was right: you cannot make life, at least not in a test tube.
The debate on reproduction thus stood still for over a century. In
some cases, such as the identification of the mammalian egg, obvious
lines of enquiry were not followed up. In others, such as the inheritance
of characteristics from the male, obvious objections went
unanswered. In the parallel debate on spontaneous generation, similar
experiments seemed to produce very different results, yet no progress
was made towards explaining why this should be so. And arguments
about animate contagion made no progress between Cogrossi in
1714 and Bassi in 1835. Why did biological knowledge remain so
stable, why did it change so little between 1690 and 1830? It is no
coincidence that this is precisely the period in which the microscope
was out of fashion. Few scientists were working with microscopes,
and most of those who did used ineffective compound instruments.
But, as we have seen, the microscope had gone out of
fashion because there was no money in it. It would seem obvious that
Leeuwenhoek's work on the grain weevil and the grain moth, and
Malpighi's work on the silkworm, were intended to have practical
application. So why was the microscope increasingly dismissed as a
tool for serious research?
The answer is that the medical profession had set its face against
the new research, and others followed where they led. In England,
Willis and Lower abandoned their researches in 1669. Their one
successor was John Mayow, who continued to 1675. Lower went on
to become Londons most fashionable doctor, but not because of his
research record. In the nineteenth century no one could understand
why the research of Boyle, Hooke, Willis, Lower and Mayow had
been abandoned. In Holland the same thing happened to the work of
Swammerdam and Leeuwenhoek. Here too the medical profession
turned its back on the new research. Leeuwenhoek's hostility to doctors
was such that the first thing he wanted to know when he was
elected a Fellow of the Royal Society in 1680 was whether this meant
being cured.
Pomata constantly stresses that doctors and legal authorities
contested the right of patients to refunds where there was no cure, but
she argues that at the beginning of the seventeenth century this right
was recognized by the court of the protomedicato if there was a prior
agreement that payment should be by results, while by the eighteenth
century it was consistently being rejected. You have to read her book
with some care to discover that she does not identify a single example
of a patient successfully enforcing an agreement for a cure against
a licensed and qualified doctor. We are told that at the end of the
sixteenth century the protomedici endorsed the terms set by the
agreements for a cure, including the principle of payment for results,
but this turns out to be true only in the case of claims against
barber-surgeons; for qualified doctors it was already the case that for the
patients, a therapeutic transaction was fair if the healer respected the
terms of the cure agreement; for the protomedici, it was fair if the
practitioner medicated according to the official rules. As far as doctors
were concerned the agreement for a cure had already ceased to
exist when the protomedici were established in 1581. Doctors were
prepared to reduce the fees claimed by other doctors in practising
orthodox medicine when those fees were exorbitant or when
patients were impoverished, but they were not prepared to rule that
the failure of the therapy meant that they were not entitled to payment.
Not only were the sentences of the tribunal in malpractice
cases always favourable to the practitioners, but in disputes over
whether doctors were entitled to payment when they had failed to
cure the tribunal consistently ruled in favour of the doctors.
In Bologna doctors did not have a monopoly: they practised
alongside, and in competition with, other licensed healers, including
barber-surgeons and apothecaries. But they practised on terms that
were biased in their favour, for it was they who sat on the tribunal
which decided if healers were entitled to payment and if there had
been malpractice. Elsewhere, though, doctors did not have even this
degree of control over the marketplace for therapy. In England, effective
success, the placebo effect, the tendency to think of patients not diseases,
the pressure to conform, the resistance to statistics: this combination is
what explains, if anything can, the intolerable delay in testing the efficacy
of orthodox medicine.
At this point we need to ask what sort of historical explanation we
hope to find. Do we want to prove that there was never any possibility
of testing orthodox medicine before the 1820s? We would be wrong.
Do we want to prove that the obstacles to testing orthodox medicine
were great, but not insuperable, before the 1820s? Then we would
be right, but an argument of this sort cannot absolve all those doctors
who practised an ineffectual form of therapy from some responsibility.
If we provide too strong an explanation of why traditional
medicine was not put to the test until the nineteenth century, then
we will inevitably lose sight of the fact that plenty of people could see
that it didn't work. It is difficult to strike the right balance here, but
the resilience of orthodox medicine is far more significant than the
persistent criticism it encountered. The stress needs to fall on
medicine's appearance of success, and on the ease with which it saw
off competition from Paracelsians and Helmontians.
gunfire, so that some were treated only with a cold dressing of egg
yolk, turpentine, and oil of roses, which was normally applied only as
a second dressing, after the burning paste, to encourage healing. These
patients did far better than those who had received the orthodox
treatment, which Paré abandoned thereafter. Paré's publication in
1545 of a treatise on how to treat gunshot wounds served to kill off
the myth that gunpowder acted as a poison.
Paré had no doubt that he had learnt something important from
his two comparative trials, yet he never conducted any others.
Surgery was a relatively empirical and untheoretical discipline, and
surgeons were not doctors; they had not benefited from a university
education, which is why in England they are still titled 'Mr'. Surgeons
like Paré were therefore relatively open to innovation. It is also
important that gunpowder burns and gunshot wounds were new to
Western medical science. Guns had only been important on the
battlefield for a hundred years or so. Paré did not have to question
long-established authority in order to abandon theriac. No one was
being asked to deny the authority of Hippocrates or Galen.
Paré's trials were accidental and ad hoc. We have to wait more than
a hundred years for a proper clinical trial to be proposed. In Oriatrike,
Johannes Baptista van Helmont (d. 1644), not himself a doctor and, as
we have seen, a bitter opponent of conventional medicine, suggested
that five hundred sick people should be randomly divided into two
groups, one to be treated with bloodletting and other conventional
remedies, and the other by his alternative therapies. Helmont was not
nearly influential enough to bring such a trial about, but his followers
continued to propose such tests of the efficacy of his remedies. Thus
in 1675, Mary Trye claimed that if her medicine for curing smallpox
was tested against the conventional remedy of bloodletting, then it
would be found that the proportion of her patients that survived
would be twice that amongst those receiving conventional therapies.
In 1752 the philosopher and doctor of divinity (but again, no doctor)
George Berkeley suggested a similar experiment to test the value of
tar water in the treatment of smallpox. These are lonely examples that
show that there was nothing particularly difficult about conceiving of
a clinical trial. But none of these proposals came from doctors, and
actual trials never took place. Ironically, had such trials been performed
they would have shown that the therapies favoured by the
Helmontians and by Berkeley were, like the mercury therapy
favoured by MacLean in 1818, little better than conventional
treatments.
One doctor has become established in the literature as the
inventor of the modern clinical trial. James Lind was a Scot, and he
qualified first as a surgeon and then as a doctor. In 1753 he published
A Treatise of the Scurvy. Scurvy is a condition that we now know is
caused by vitamin C deficiency. The first symptoms are swollen gums
and tooth loss. The victim soon becomes incapable of work. Death
follows, though not swiftly. The standard medical therapies were (of
course) bleeding, and drinking salt water to induce vomiting. A
patent remedy was Ward's Drop and Pill, a purgative and diuretic.
In other words, scurvy was understood in humoral terms and the
remedies were the conventional ones: bleeding, purging, and
emetics.
Scurvy becomes a problem only if you have a diet that contains no
fresh vegetables. In the Middle Ages castle garrisons subjected to
prolonged sieges came down with it, but it became a major problem
only with the beginning of transoceanic voyages: if you are healthy to
begin with, you will only show symptoms of scurvy after you have
been deprived of vitamin C for some ten weeks. Ancient and medieval
ships stayed close to land and came ashore regularly to take on
water and fresh food; but once ships embarked on long ocean voyages
they needed to carry food supplies which would not perish, usually
salted beef and hard tack (dried biscuit, notoriously full of weevils).
Any fresh vegetables were hastily consumed before they could perish.
In 1740 George Anson commanded a fleet of six ships and two
thousand men against the Spanish in the Pacific Ocean. Only about
two hundred of the crew survived, and nearly all the rest died of
scurvy. During the Seven Years War, 184,899 sailors served in the
British fleet (many of them press-ganged into service); 133,708 died
from disease, mostly scurvy; 1,512 were killed in action. The normal
death rate from scurvy on long voyages was around 50 per cent. One
estimate is that two million sailors died of this dreadful disease
between Columbus's discovery of America and the replacement of
sailing ships by steam ships in the mid-nineteenth century. These
death rates are so horrific that one is bound to think that only people
who were incapable of statistical thinking would have volunteered
to sail on long voyages; and yet these same sailors must have been
perfectly capable of calculating the odds when betting at cards. It
takes further enquiry to establish an even more surprising fact: the
medical profession were responsible for almost all these deaths (for,
when good arguments are beaten from the field by bad ones, those
who do the driving must bear the responsibility).
In 1601 Sir James Lancaster had stocked his ship, sailing to the East
Indies, with lemon juice. The practice became standard on ships of
both the Dutch and English East India Companies in the early
seventeenth century. The power of lemons to prevent scurvy was known
to the Portuguese, the Spanish, and the first American colonists. By
the early seventeenth century the problem of scurvy had effectively
been solved. Yet this treatment made no sense to doctors with a
university education, who were convinced that this disease, like every
other, must be caused by bad air or an imbalance of the humours, and
it was under their influence (there can be no other explanation) that
ships stopped carrying lemons. This is a remarkable example of
something that ought never to occur, and is difficult to understand when it
does. Ships' captains had an effective way of preventing scurvy, but
the doctors and the ships' surgeons persuaded the captains that they
did not know what they were doing, and that the doctors and surgeons
(who were quite incapable of preventing scurvy) knew better.
Bad knowledge drove out good. We can actually see this happening.
There is no letter from a ship's surgeon to his captain telling him to
leave the lemons on the dock, but we do know that the Admiralty
formally asked the College of Physicians for advice on how to combat
scurvy. In 1740 they recommended vinegar, which is completely
obvious objections to them. Did sailors not sweat? If the problem was
an imbalance of the humours, why should traditional remedies not
work? The book was translated and reprinted, but it did not alter the
practice of ships surgeons.
In 1758 Lind was appointed chief medical officer at the Royal
Naval Hospital at Haslar, the largest hospital in England. There he was
responsible for the treatment of thousands of patients suffering from
scurvy. But he treated them with concentrated lemon juice (called
'rob'), and he concentrated the lemon juice by heating it to a temperature
close to boiling. He also recommended bottled gooseberries.
In both cases, the heat employed destroyed much of the vitamin C,
and Lind conducted no tests to compare his concentrates with fresh
fruits: he just assumed they were the same thing. As a result he seems
to have gradually lost faith in his own remedy, which had actually
become less effective, and he became increasingly reliant on
bloodletting.
When he published in 1753 he had not heard of the argument of
the Polish doctor Johan Bachstrom, who had maintained that scurvy
'is solely owing to a total abstinence from fresh vegetable food and
greens, which is alone the true primary cause of the disease'. By 1773,
when he published the 3rd edition of his book on scurvy, he rejected
Bachstrom's claim. It was impossible, Lind insisted, to reduce the
cause or cure of scurvy to a matter of diet.
Thus Lind himself had no clear understanding of exactly what it
was that he had discovered in 1747, and no grasp of the importance
of the clinical trial as a procedure. He conducted various trials of
therapies at Haslar, in the elementary sense that he tried them out,
but, despite having ample opportunity, he never gave an account of
any other trial in which therapies were compared directly against
each other. He does say in his Essay on Diseases Incidental to Europeans
in Hot Climates (1771) that he had conducted several comparative
trials, in similar cases of patients on drugs to alleviate fever, but what
he actually gives his readers are conventional case histories. Moreover
his therapeutic practice remained entirely conventional. We find him
on a single day bleeding ten patients with scurvy, a woman in labour
two hours before her delivery, a teenage lunatic, three people with
rheumatism, and someone with an obstruction of the liver. He was
cautious when bleeding people with major fevers, but only because
he preferred to use opium or to induce vomiting. If Lind had invented
the clinical trial, then he had done a profoundly unsatisfactory job
of it.
Why then has Lind become a major figure in the history of
medicine? The answer is that when the formalized clinical trial
for new drug therapies was introduced in the middle years of the
twentieth century there was a natural desire to look back and find its
origins. In 1951 A. Bradford Hill published his classic article 'The
Clinical Trial' in the British Medical Bulletin. It contains references to
nine scientific publications, all in the previous three years. As far as
Bradford Hill was concerned the clinical trial was a brand-new
invention, introduced to test drugs such as streptomycin. Streptomycin had
been discovered in 1944, and in 1946 the (British) Medical Research
Council began testing it on patients with tuberculosis; the results
were published in 1948. Because streptomycin was in short supply
it was decided that it was ethical to have a control group that did
not receive the drug. This was claimed at the time to be the first
randomized clinical trial reported in human subjects (Pasteur had
done clinical trials with silkworms and sheep). It was precisely at this
point that Lind was rediscovered to give the randomized clinical trial
an appropriate history, with an article by R. E. Hughes on Linds
experimental approach in 1951, and an article by A. P. Meiklejohn
in 1954 on The Curious Obscurity of Dr James Lind. Linds
importance is entirely the result of backward projection.
If Lind did not succeed in curing scurvy, who did? In 1769
Captain James Cook's Endeavour set sail on the first great voyage of
exploration in the Pacific. When it returned more than two and a half
years later, not one of the crew had died from scurvy. Cook had
primarily relied on sauerkraut to keep the scurvy at bay, and in fact it
does contain a little vitamin C. He had also taken on fresh vegetables
whenever he made landfall. On his next voyage, which lasted seven
had been virtually eliminated. It had taken fifty years for Lind's
discovery of the curative power of lemon juice to be generally adopted,
and no further controlled clinical trials had been conducted. It
was Gilbert Blane, not Lind, who had persuaded the navy to adopt
lemons, and the triumph of lemon juice over wort had done nothing
to further the idea that therapies should be subjected to systematic
comparative testing. Blane had never conducted a comparative test of
lemon juice against other therapies.
Lind's failure to press home the implications of his single trial, and
his failure to repeat and extend it, mean that he actually deserves to be
left in obscurity. If one wants to identify key figures in the invention
of the clinical trial it would be better to look elsewhere. In 1779, for
example, Edward Alanson published Practical Observations on Amputation.
He recommended new techniques: the cutting of a flap to close
the wound, and the laying together of wound edges to facilitate healing.
He compared his results before he adopted his new methods (10
fatalities out of 46 amputations) with his recent results (no fatalities
out of 35 amputations). This persuasive statistical argument had a
major impact on surgical technique in England and Europe (though
the French remained sceptical). Alanson's control group was historical,
and he did not randomly select who was to receive what treatment;
but he was using numbers to effect. In 1785 William Withering
published a careful account of his use of the foxglove (the active
ingredient being digitalis) to treat a total of 163 patients suffering
from dropsy (or what we might now diagnose as congestive heart
failure). But in practice digitalis, once it had established itself in the
pharmacopoeia, was soon being used to treat a whole host of diseases,
and was often not used to treat dropsy: bad knowledge had once
again driven out good.
Even more important than the work of Alanson and Withering is
John Haygarth's Of the Imagination as a Cause and as a Cure of Disorders
of the Body (1800). Haygarth wanted to debunk the claims made on
since ancient times that emotions could give rise to physical symptoms
and that these symptoms could be cured by a change in the
patient's emotional state. Edward Jorden in 1603, for example, had
discussed the case of a young man who had fallen out with his father
and then fallen victim to the falling sickness (epilepsy): he had been
cured by a kind letter from his father. Haygarth did not argue that
the fictitious tractors only worked to cure conditions that were
psychological in origin.
Haygarth believed that his experiments with fictitious tractors
explained why a famous doctor was often more successful in his
practice than someone without an established reputation, and why a
new medicine was often more successful when it was first introduced
than when it had been around for some time. One doctor or one
medicine might be more successful than another because they were
more effective in eliciting the cooperation of the patient's imagination.
For real success, he claimed, it was important that both the
doctor and the patient should be believers: 'Medical practitioners of
good understanding, but of various dispositions of mind, feel different
degrees of scepticism in the remedies they employ. One who possesses,
with discernment, the largest portion of medical faith, will be
undoubtedly of greatest benefit to his patients.'
Here Haygarth's conclusion was at odds with his own research. He
had successfully shown that sceptics, using fictitious tractors, could, by
pretending to believe, elicit results indistinguishable from those
achieved by true believers using genuine Perkins tractors. His doctors
had cynically used the patter employed by the advocates of the
Perkins tractor, without believing for a moment what they were saying.
Why pretend otherwise? One can only assume that Haygarth
wanted to protect himself against the charge of encouraging lying and
hypocrisy when he asserted, against the evidence of his own trials,
that if one wants to touch one's patient's heart one must speak what
one feels.
But what Haygarth had done was suggest that much standard
medicine relied entirely on the placebo effect. Within a few years the
arguments he had deployed to explain the apparent success of Perkins
they failed to recognize that what we would now call the placebo
effect is present in all medical treatment. They failed to direct their
scepticism towards orthodox medical practice.
Medical historians would seem to have a similarly blinkered vision.
They are interested in Haygarth because he discovered how to prevent
infections spreading from one patient to another, and in the
French royal commission because it pioneered the blind trial. Yet, as
we have seen, what is really important about Haygarth is that he was
the first properly to understand (if not to name) both iatrogenesis and
the placebo effect: he understood that hospitals spread infections
and that inoculation might spread smallpox; and he recognized that
conventional medicine relied in large part on the same power of
imagination as that evoked by the Perkins tractor.
The initial usage of the word placebo was to refer to a pill (made of
flour, sugar, or some other inert substance) given to reassure a patient
for whom no effective treatment was available. The first use of
the placebo in clinical trials was apparently in Russia in 1832
(coincidentally the year in which the word first appears in English).
There trials were being carried out to test the effectiveness of homeopathy. In these tests homeopathy was systematically compared with
placebo therapy (pills made of flour paste), and found to be no more
effective. This was the first occasion on which the test of effectiveness
in a therapy was defined as being more effective than a placebo, one
of the tests employed today (therapies can also be assessed against no
treatment, or against alternative treatments).
By the early nineteenth century there was thus nothing problematic
about the idea of a controlled trial of a medical therapy. In 1816
an Edinburgh military surgeon called Alexander Lesassier Hamilton
described, in a published thesis so obscure as to be little read (it was in Latin), a model trial that he had carried out on sick troops in 1809, during the Peninsular War. The troops were randomly divided
into three groups, one third receiving bloodletting and two-thirds
not. Of the 244 soldiers treated by alternative methods six died (a mortality of about 2.5 per cent), while of the 122 whose blood was let, 35 died (nearly 29 per cent). Unfortunately
Louis had set out to show that bloodletting was pointless. Yet he
clearly believes it to have some considerable merit and advocates it in
the treatment of all the diseases he studies. One commentator claims
Louis had shown bloodletting postponed recovery in cases of angina
tonsillaris. Louis believed he had shown the opposite. There was nothing
in Louis's book to persuade doctors to abandon venesection, and
it is clear that he did not abandon it himself. On closer inspection it
seems that Louis interpreted his data in a fashion that was strangely
biased in favour of venesection. His own figures suggested early venesection shortened the disease by 2.75 days, yet he claimed that other
things being equal it shortened it by four or five days. In fact, he had
shown that other things were not equal. Those bled early were on
average eight years and five months younger than those bled late,
which in itself is sufficient to explain their more rapid recovery.
Again, it is said that Louis was concerned to criticize the contemporary
use of leeches, which had been strongly advocated by François Broussais. It is true that he seems to have little time either for leeching or vesication (blistering or cupping). But it is quite clear that what he
is trying to study is the merits of venesection, and that he believes the
only way of establishing how far venesection helps is by comparing
cases statistically.
Since Louis's conclusion was that bloodletting, though it never
halted a disease in its tracks, was still good for patients, one is bound to
look closely at his statistics. After all, if he was right, why do we not
still let blood? Table 1 is a simplified version of his table detailing the
second group of patients with pneumonitis, twenty-five of whom
survived the disease. The top row shows the day on which blood was
first let. Each cell below records the number of days it took a particular
patient to recover, until the final row, which shows the average
recovery time for patients in that column.
I have already suggested that Louis's argument that those bled
early recover more quickly than those bled late is spurious, and that
the form in which he presents his figures conceals a correlation
between youth and rapid recovery. Table 2 gives the information
patient who had died had escaped autopsy. What the doctor now
sought to do was predict, on the basis of his inspection of the patient,
what would show up at autopsy. The doctors task was to read the
symptoms he could perceive in the living as indicators of a hidden
condition that would only be exposed to view in the dead. His
project was to move in the mind's eye from the surface of the body to the interior, and soon very simple new tools, such as the stethoscope, were to be devised to make this task easier.
This task was complicated by the fact that the interior of the body
was now being observed in a different way. In the past, autopsies had
primarily been concerned to locate the cause of death in a particular
organ, and each organ was seen as having a particular function within
the body. But now each organ was seen as being constructed out of a
number of different types of tissue (eventually twenty-one different
types were identified), and the same types of tissue were found
throughout the body, so that at autopsy it was seen that the same type
of lesion, affecting the same type of tissue, could be found in very
different organs. The founding text of this new anatomy was
François-Xavier Bichat's Traité des membranes, of 1799, followed by his Anatomie générale of 1801: Bichat had arrived in Paris in 1794, at the age of 22, but he had been quick to impress, probably because he had begun learning medicine at an early age from his father, a professor of medicine at Montpellier. In Bichat's work the body was no longer
described as a city made up of different buildings; instead it was seen
as a city made up of different buildings constructed from a narrow
range of materials, so that rotting wood might equally be found in a
town hall, a country cottage, or a castle; just so an inflamed membrane
might be found in lung, intestine, or eye-socket, or rather an inflamed
membrane of a particular sort, for Bichat distinguished membranes
into three distinct types.
This set of interlocking transformations (a new analysis of the body's components; a new observation of the patient's body; a new relationship between doctor and patient; a new medical education) was the subject of a famous book by Michel Foucault, The Birth of the
Clinic, first published in French in 1963 and in English translation in
within, two new developments were taking place, the one an extension
of the other, just as the identification of hospitalism was a natural
extension of therapeutic pessimism. First, a small group of doctors,
mainly in Paris, came to feel that medical knowledge would never be
complete if it relied entirely on the inspection of patients and the
dissection of cadavers. What was needed was the application to
medicine of the experimental method, an enterprise that would
make possible new developments in physiology. Since there were
limits to the experiments that could be performed on humans the
new science was to be based on animal experiments. Here again
Bichat was a key figure. One of Bichats concerns was to establish
what actually happened when people died. Death, he realized, was
not a uniform process. Sometimes the heart stopped first, and the
other organs of the body then failed; at other times, for example if
someone drowned, it was the lungs that first ceased to function,
followed by unconsciousness and the stopping of the heart; or again
an injury to the brain might be fatal, though the lungs and heart were
in good order.
Bichat, the founder of the new account of human anatomy in
terms of different types of tissue, devised experimental methods for
studying the progress of death through the body. For example, one
could model the failure of the lungs by passing venous blood rather
than arterial blood into the heart to be pumped around the body.
Bichat tried connecting the heart of one dog to the veins of another,
but it was hard to get the pressure right and the blood flowing in the
right direction. He had more success tying off the flow from the lungs
and injecting venous blood in its place: brain death followed, as if
from asphyxiation. These experiments were reported in Recherches
physiologiques sur la vie et la mort (1800).
Experimental physiology meant that for the first time doctors
needed a specialized space in which to conduct research. 'Every experimental science requires a laboratory,' wrote Claude Bernard in his Introduction to the Study of Experimental Medicine (1865). Medicine followed the path pioneered by physics and chemistry in becoming a
that the medical revolution represented by the birth of the clinic and
of the physiological laboratory was not a success but a failure. The
mortality amongst patients did not decrease; instead it increased. New
therapies were tried, but they failed. The old complicated pharmacopoeia
was abandoned in favour of new, chemically pure drugs which
could from the 1850s be injected by hypodermic syringe straight into
the bloodstream. Morphine was extracted from opium, quinine was
extracted from cinchona, but people went on dying, more or less as
before. You could only think that this was the foundation of modern
medicine if you thought that modern medicine was about certain
institutions (hospitals, laboratories), or certain ways of inspecting
patients (stethoscopes, thermometers), or certain ways of interpreting
the human body as a prospective cadaver for autopsy (lesions rather
than diseases, tissues rather than organs). But if you think that the key
feature of modern medicine is effective therapy and the capacity to
postpone death, then these institutions, these instruments, this way of
thinking about disease are beside the point, because none of them led
to effective therapy. The alternative view is that modern medicine
began long after the birth of the clinic, and that it is inseparable from
the germ theory of disease and the controlled clinical trial.
Why do historians prefer to focus on the birth of the clinic rather
than the germ theory of disease or the clinical trial? Part of the answer
is that many of them don't actually believe that science progresses. For
a relativist, the story of the birth of medical science in the first half of
the nineteenth century is a profoundly reassuring one, because the
unintended and adverse consequences of so-called progress are far
more striking than the intended consequences; doctors, trying hard to
save lives, went around killing people. But the story of the birth of the
clinic is also attractive to historians because it ties the history of
medicine firmly to other sorts of history: the purpose-built hospital
can be compared with the prisons and schools being built at the same
time (the subject of Foucault's Discipline and Punish); the new technically skilled medical professional can be compared with his fellow
professionals in law, in the universities, in engineering; experimental
10. The Laboratory
Experiments on animals have, we have seen, been central to the
development of modern medicine, but some have always found them
repugnant, and many have refused to engage in them. The more I
have thought about this subject the less sympathetic I have become to
all animal experiments conducted before 1877, when Pasteur began
work on anthrax. Samuel Johnson said in 1758:
Among the inferior professors of medical knowledge is a race of
wretches, whose lives are only varied by varieties of cruelty . . . What is
alleged in defence of these hateful practices everyone knows, but the
truth is that by knives, fire, and poison knowledge is not always sought
and is very seldom attained. The experiments that have been tried are
tried again . . . I know not that by living dissections any discovery
has been made by which a single malady is more easily cured. And if
knowledge of physiology has been somewhat increased, he surely buys
knowledge dear, who learns the use of the lacteals at the expense of his
humanity. It is time that universal resentment should arise against these
horrid operations . . .
But animal experimentation was absolutely central to the new
science of physiology as it developed in the nineteenth century.
According to Claude Bernard, the man generally acknowledged as the greatest of all the nineteenth-century physiologists, without vivisection 'neither physiology nor scientific medicine is possible'.
Moreover, Bernard was quite explicit in his determination to pay no
attention to the pain his animals suffered:
A physiologist is not a man of fashion, he is a man of science, absorbed
by the scientific idea which he pursues: he no longer hears the cry of
animals, he no longer sees the blood that flows, he sees only his idea and
perceives only organisms concealing problems which he intends to
solve. Similarly, no surgeon is stopped by the most moving cries and
sobs, because he sees only his idea and the purpose of his operation.
Similarly again, no anatomist feels himself in a horrible slaughter house;
under the influence of a scientific idea, he delightedly follows a nervous
filament through stinking livid flesh, which to any other man would be
an object of disgust and horror.
Bernard was quite right: surgery and anatomy require the overcoming
of normal human responses and the substitution of a professional
detachment. But Bernards argument was also a form of special
pleading. By the time he wrote this, in 1865, anaesthetics were commonplace. Surgeons no longer had to brace themselves against cries and sobs. As for Bernard, he was genuinely indifferent to the sufferings
of his animals. One of his research programmes was directed at understanding how curare worked as a poison. Having discovered that it
paralysed but did not anaesthetize, Bernard frequently used it to
instrument had reached the facial nerve. Turning the hook upward, and
without leaving the petrosal bone, he carefully withdrew the instrument,
thereby pulling at and sectioning the nerve. The completion of
this operation was signalled by the immediate and complete paralysis of
the left side of the face. Within six days the wound had healed and the
effects of the opium had dissipated. Bernard was able to confirm that,
apart from the facial paralysis, there was a considerable diminution of
the gustatory faculty in the left anterior half of the tongue, without any
corresponding alteration of movement or of the tactile sense in the
same region. When the animal was sacrificed after thirty-three days,
autopsy confirmed that the seventh pair, and only the seventh pair, of
cranial nerves had been sectioned. He obtained the same results in
experiments on two other dogs.
Bernard's study of dogs helped make sense of some cases of facial
paralysis in humans: it proved possible to find damage to the facial
nerve in an autopsy of a patient whose symptoms were similar to
those of the vivisected dogs.
But some operations involved slashing and gashing that caused
horrible damage to the dog. Thus Magendie's work on the nerves of
the spinal column was conducted on small young dogs. He was able
to lay bare the membranes of the posterior half of the spinal cord with
a single stroke of a very sharp scalpel. But the shock and blood loss
associated with operations such as this were so great that others had
difficulty repeating his findings: it turned out that one had to allow
the dogs time to recover before manipulating their nerves if one
wanted to find what passed for normal physiological responses.
And there is something grotesque about some of the experiments
conducted by the French physiologists. Magendie did a good deal of
work on the operation of poisons such as strychnine and prussic acid; Bernard was to extend this into studies of curare and carbon
monoxide. One question was how poisons such as strychnine, which
appeared to work directly on the central nervous system, were
absorbed into the body. Was it via the veins, the lacteals, or the nerves?
The great eighteenth-century Scottish surgeon John Hunter had
done experiments on absorption where he had cut out a small section
became inevitable, though what Klein had said was little different
from what Bernard had said in his Introduction.
The outcome, after much negotiation, was the Cruelty to Animals
Act of 1876, the first legislation anywhere in the world to restrict
vivisection. The Act provided that anyone experimenting upon living
vertebrate animals must have a licence from the Home Secretary;
in order to apply for such a licence they must have the support of a
president of a major scientific or medical society and of a professor
of medicine. The experiments must be performed at a registered
location and be open to inspection, and must have as their purpose
the acquisition of new knowledge. They must be performed under
anaesthesia, and the animal experimented upon must not be revived
afterwards. Special licences had to be sought by anyone who intended
to experiment without anaesthesia, to repeat experiments that had
already been performed, or to use vivisection to illustrate a lecture.
Cats, dogs, horses, and donkeys were singled out for particular
protection.
In its detail the Act was a commentary on the charges laid against
the Handbook and the French physiologists. Curare was explicitly
ruled not to be an anaesthetic. Obstacles were placed in the way of the
repetition of experiments. Many French physiologists had conducted
experiments at the Parisian veterinary school of Alfort. There horses
that had been condemned to be put down were handed over to
students so that they could practise their operative skills upon them,
and in Bernards laboratory animals that still had life in them when
the experiments were finished were presented to the students so that
they could practise on them. The Act specifically forbade such vivisection to develop manual dexterity. The use of animals in public
lectures had caused particular offence in England, and again particular
restrictions were placed upon this by the Act.
The Act was an initial success for a strong antivivisection
movement which had developed in Britain, and which continued to
campaign (indeed continues to campaign) for the complete abolition
of vivisection. It was clear from the beginning that the Act would
the third; the adoption of arguments similar to his limited the impact
of the fourth epidemic and prevented future epidemics: there were
only 135 deaths in England in 1893, when cholera was ravaging much
of Europe.
The conventional account of cholera in the 1840s and 1850s held
that it was spread as a poison through the air. Like all epidemic
diseases, cholera was held to be caused by a miasma, by bad air.
(Malaria means 'bad air'. Malaria was only shown to be caused by a
parasite transmitted by mosquitoes in 1897, so that it remained a
disease of bad air long after other diseases, such as cholera, had been
shown to be caused by germs.) The ultimate source of bad air was
rotting organic matter, and the best way of preventing it therefore
was to eliminate the sources of foul odours by improving sewers
and drains. Nineteenth-century English reformers such as Edwin
Chadwick thus recommended exactly the same improvements that
had been urged by their sixteenth-century Italian predecessors. The
key difference was that where Renaissance doctors believed that epidemics could under certain circumstances be transmitted directly
from one person to another, and so advocated the quarantining of
those infected, their nineteenth-century successors held that quarantine
measures were pointless, and so allowed the free movement of
people and goods even during epidemics: as Snow himself remarked,
there were great pecuniary interests that would have been damaged
by any recourse to the precautions that had been adopted against the
plague.
Snow's starting hypothesis was very different from those of his
contemporaries. Because he was a pioneer in the use of anaesthetics
(in 1853 he was chosen to administer chloroform to Queen Victoria
during the delivery of Prince Leopold), Snow had a great deal of
experience in what happened to people and animals when a poison
entered the bloodstream as a result of inhalation. (He frequently tried
out possible new anaesthetics, first on animals, and then, if the results
were promising, on himself.) This experience provided no analogues
to cholera. Moreover poisonous gases affected everyone exposed to
them, while cholera struck some and passed others by. And poisonous
gases obeyed the law of the diffusion of gases:
As the gases given off by putrefying substances become diffused in the air, the quantity in a given space is inversely as the square of the distance from their source. Thus a man working with his face one yard from offensive substances would breathe ten thousand times as much of the gas given off as a person living a hundred yards from the spot.

27. This drawing by George John Pinwell, entitled Death's Dispensary, published in an English magazine during the cholera epidemic of 1866, marks the belated triumph of John Snow's account of the mode of transmission of cholera.
Yet miasmatic theories claimed to be compatible with the fact that
people at a considerable distance from the source of a smell often fell
ill when those closer to the source were unaffected.
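Snow's 'ten thousand times' is simply the inverse-square law applied to the two distances in the quotation; as a quick check of the arithmetic (a sketch using only the figures given above):

\[
c(r) \propto \frac{1}{r^{2}} \quad\Longrightarrow\quad \frac{c(1\ \text{yard})}{c(100\ \text{yards})} = \left(\frac{100}{1}\right)^{2} = 10\,000.
\]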
Cholera attacks the gut, causing violent diarrhoea, so Snow
concluded it most likely had its origin in something ingested, and this
ingested substance probably came more or less directly from cholera
sufferers (as smallpox came from smallpox sufferers). Snow was a
vegetarian and a teetotaller, and this prepared him for his line of
thinking. He himself drank distilled water because he was well aware
that one could often find in tap water material that had passed
through the human gut. Tap water was not, strictly speaking,
vegetarian, since it commonly contained microscopic amounts of
half-digested meat. At the age of 17 Snow had read John Frank Newton's The Return to Nature: A Defence of the Vegetable Regime (1811), in which the water drunk by Londoners was shown to be full
of septic matter. As an anaesthetist, Snow could make no sense of the miasmatic theory's account of how poisonous gases operate; as
an admirer of Frank Newton, Snow was predisposed to think that
the water people drank made them ill, but he did not develop his
alternative theory until the last months of 1848.
In the first edition of The Mode of Communication of Cholera, Snow
recounts the case of John Barnes, a labourer in Moor Monkton, near York, a village untouched by cholera. Barnes's sister had died of
cholera in Leeds; two weeks later her clothes were bundled up and
sent to Moor Monkton by the carrier. The clothes had not been
washed; Barnes had opened the box in the evening; on the next day
he had fallen sick of the disease. From John Barnes, the communication
of the disease could be traced through twenty individuals (with
only one unexplained link), and thirteen deaths. Clearly the disease
had travelled in the box of clothes.
Snow had to hand other accounts of the propagation of cholera
from person to person. These cases were not compatible with the
modification of the miasmatic theory that some advocated, that cholera,
originally caused by rotting matter, was also caused by effluvia
given off from the patient into the surrounding air, and inhaled by
others into the lungs. Snows alternative hypothesis was that cholera
was spread through excreta. The first mode of transmission he identified
was from hand to mouth: John Barnes had touched his sister's
soiled clothes, had failed to wash his hands, and had conveyed the
source of the disease to his own gut. By similar means, his sickness had
been conveyed to those who visited him and cared for him. It was
easy to show that the disease had an incubation period of twenty-four
to forty-eight hours. From the beginning, Snow's argument implied that cholera was an animalcule, although he insisted that all his argument required was that it should be an organized particle, one
capable of multiplying in the human body. Snow was exploring a
germ theory of cholera at a time when germ theories were generally
rejected in favour of chemical theories which attributed diseases to
poisons (or, to use the Latin term, viruses). He did not want to dismiss
out of hand the possibility of some sort of chemical account of the
cholera poison, comparable to contemporary accounts of fermentation,
which maintained that it was an inorganic process, but his whole
argument implied a germ theory, and by 1853 he had committed
himself to the view that diseases are caused by living agents.
Snow identified a second mode of transmission, which did not
require direct contact with the patient or their personal effects. He
looked closely at two identical alleys of houses in Horsleydown, near
London, that stood next to each other. Cholera had struck one alleyway,
called Surrey Buildings, killing eleven people, and had almost
entirely spared the other, killing only one. In the alleyway where the
infection had spread overflow from the drains ran back into the well
from which the inhabitants drew their water. The soiled clothes of
the sick had been washed in water that their neighbours had then
drunk. Cholera had spread through the pollution of the water supply.
In another case, a London suburban development called Albion
Terrace, Snow had identified a row of seventeen houses where a
severe rainstorm had caused the cesspools to overflow into the water
supply. Here twenty-four people had died. In a neighbouring house,
supplied with the same water, a gentleman who had always refused to
drink the water was untouched. In 1849, Snow had five other
examples of disease hotspots that could only be explained by the
hypothesis that cholera was entering the drinking water. One was of a
landlord who had dismissed his tenants' complaints that their water
stank. Cholera was frequent amongst the tenants, but not in the distant
village where the landlord lived. One Wednesday he drank a glass
of his tenants' water to show there was nothing wrong with it; he died
the following Saturday.
The official account of the deaths at Albion Terrace blamed an
open sewer 400 feet away, which caused an unpleasant smell when
the wind was in the wrong direction, together with a disagreeable
smell from the sinks in the houses and some smelly rubbish in the
basement of one of the houses. In other words the orthodox explanation
was that the disease was airborne, and that it was caused (as all
epidemic diseases had been believed to have been caused for centuries)
by the smell of putrefaction. The solution was improved hygiene.
Snow pointed out that most of London was exposed to exactly the
same sorts of smells that were to be found at Albion Terrace, but on
other such streets nobody at all had died.
Snow had rejected the conventional view that cholera was transmitted
through the air and was primarily caused by putrefaction.
Instead he argued that it was transmitted through the water supply
and by direct contact, and was carried in the faeces of cholera sufferers.
He was thinking in terms of a germ theory of the disease. The
great advantage of this was that he could explain why cholera seemed
test the effect of water supply on the progress of cholera than this, which
circumstances placed ready made before the observer.
The experiment, too, was on the grandest scale. No fewer than three
hundred thousand people of both sexes, of every age and occupation,
and of every rank and station, from gentlefolks down to the very poor,
were divided into two groups without their choice, and, in most cases,
without their knowledge; one group being supplied with water containing the sewage of London and the other group having water quite
free from such impurity.
Snow and an associate set out to visit every house in which there
had been a cholera fatality and establish which company supplied its water; Snow cut back on his practice, effectively giving up his income, in order to pursue his enquiries. Between 8 July and 5 August
1854 there were 563 deaths from cholera in London, or 9 deaths per
10,000 houses. In 40,046 houses supplied by the polluted water of the
Southwark and Vauxhall Co. there were 286 fatalities, or 71 per
10,000 houses. In the identical houses intermingled amongst
them but supplied with the clean water of the Lambeth Co. there
were 14 fatalities, or 5 per 10,000. Note that these fourteen fatalities
did not present a problem for Snows argument: it was to be expected
that some customers of the Lambeth Co. would visit friends who
were customers of the Southwark and Vauxhall Co. and drink their
water; would purchase drinks made with Southwark and Vauxhall
water in pubs and cafés; and would visit and nurse those who had
fallen sick from drinking Southwark and Vauxhall water.
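Snow's published rates follow directly from the figures just given; a quick check of the arithmetic (the number of Lambeth houses is not stated in this passage, so only the Southwark and Vauxhall rate and the ratio of the two published rates are shown):

\[
\frac{286}{40\,046}\times 10\,000 \approx 71 \ \text{deaths per 10,000 houses}, \qquad \frac{71}{5}\approx 14,
\]

a roughly fourteen-fold difference in mortality between otherwise identical houses.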
Snow's use of the customers of the two water companies as a randomized trial of his hypothesis resulted in a brilliant vindication of his arguments. (Snow's survey was later repeated and extended with much less striking results. But it is Snow's results that are to be trusted.
He published a list running over twenty-five pages of every death and
every address in his study; and he pointed out two major difficulties:
there was often more than one house with the same street address, so
that one had to make careful enquiries to make sure one was at the
right address; and people often did not know the name of their water
supplier, since that was a matter for their landlord, a problem Snow had circumvented by devising a chemical test to distinguish the water
supplies of the two companies.) Snow also showed that an analysis of
the occupations of those who died from cholera was highly revealing.
Sailors and ballast-heavers were accustomed to drink water direct from the Thames: one in twenty-four of them died of cholera in the epidemic of 1848–9; those who worked in breweries were said never
to drink water at all, and indeed none of them died.
Contemporaries often implied that it really did not matter
whether Snow was right or not. Both the miasmatists and Snow
believed that human faeces helped spread cholera, so one could conclude
from both their arguments that improved sanitation was the
answer. Snow was impatient with this response, as he held that the
activities of those who advocated improved sanitation had had the
opposite effect from the one they had intended. In 1854, publishing
the preliminary results of his enquiry into the water supply, he wrote:
The persons who have been more instrumental in causing the increase
in cholera are precisely those who have made the greatest efforts to
check it, and who have been loudest in blaming the supineness of
others. In 1832 there were few water-closets in London. The privies
were chiefly emptied by night men, a race who have almost ceased to
exist; or a portion of the contents of the cesspool flowed slowly, and
after a time, into the sewers. By continued efforts to get rid of what
were called the removable causes of disease, the excrement of the community has been washed every year more rapidly into the river from
which two-thirds of the inhabitants, till lately, obtained their supply of
water. While the faeces lay in the cesspools or sewers, giving off a small
quantity of unpleasant gas having no power to produce specific diseases,
they were spoken of as dangerous and pestilential nuisances; but when
washed into the drinking-water of the community, they figured only in
Sanitary Reports as so many grains of organic matter per gallon.
Thus the difference between his own account of the transmission of
the disease and that of the miasmatists was fundamental. His priority
was clean drinking water; theirs was flush toilets.
diagram.) Nearly all the deaths fell within the dotted line.
The story of the pump handle and the map that illustrates it
have entered the folklore of epidemiology, though the often careless retelling of this story has opened the way to a fundamental misunderstanding of what Snow had accomplished. No map showing Broad
Street and its immediate area to be a hotspot, it is said, could prove
Snow's account of how cholera was transmitted; for other contemporaries produced similar maps, and they were convinced that
such maps showed that the disease was most likely disseminated
through the air. They hypothesized some unidentified source of
miasma at the centre of the circle within which the deaths fell. Consequently, it is argued, Snow had failed to prove his case, and his
opponents had at least as good an argument as he had. But this is to
make the elementary mistake of imagining that the map was a full presentation of Snow's material, ignoring not only the other evidence
we have so far surveyed, but the fact that Snow had crucial
additional evidence relating to the Broad Street epidemic itself.
28. The map of the fatalities in the neighbourhood of the Broad Street pump from the second edition of Snow's The Mode of Communication of Cholera.

The workhouse in Poland Street was near the epicentre of the hotspot. It had 535 inmates but only 5 fatalities. The inmates breathed
the air of Broad Street, but did not drink water from its pump. On
Broad Street itself a brewery employed 70 men; none of them died.
All drank beer, not water; in any case the brewery had its own well. At
a percussion cap factory a few yards away 18 of 200 workers died; they
were supplied with water from the pump. Both those who worked in
the brewery and those who worked in the percussion cap factory
breathed the same air, but they did not drink the same water. One
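Put as rough attack rates, using only the numbers above (a back-of-envelope comparison, not a figure given in the text):

\[
\text{workhouse: } \frac{5}{535}\approx 0.9\ \text{per cent}, \qquad \text{factory: } \frac{18}{200}= 9\ \text{per cent},
\]

a tenfold difference between groups breathing the same air but drinking different water.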
person, who came from Brighton, spent twenty minutes in the house
of someone who had died of cholera, drank a glass of brandy diluted
with water from the Broad Street pump, and died the next day. A
12. Puerperal Fever
Puerperal fever or childbed fever is (we now know) a bacterial
infection of the genital tract, often leading to peritonitis and a dreadfully
painful death. In the eighteenth and nineteenth centuries some 6 to 9 women in every 1,000 deliveries contracted puerperal fever, and just under half that number died. Epidemics sometimes occurred
for unknown reasons, especially in maternity hospitals, and then the
rate of infection soared and the proportion of those who died reached
80 per cent. In the General Lying-In Hospital, London, between 1835
and 1844 there were on average 63 deaths for every 1,000 deliveries.
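Taking the background death rate as just under half of 6 to 9 per 1,000 (that is, roughly 3 to 4.5 per 1,000), the hospital figure implies an excess of the order of

\[
\frac{63}{3\ \text{to}\ 4.5} \approx 14\ \text{to}\ 21
\]

times the ordinary mortality; this rough multiple is inferred from the passage's own figures, not stated in it.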
The most common treatment was, of course, bleeding, but some
doctors also recognized that no treatment was of any help. Thus in the
early nineteenth century William Hunter said, 'Of those attacked by this disease, treat them in any manner you will, at least three out of four will die.' Cause and cure were both equally mysterious.
them responsible for his decision to give up medical practice and turn
to writing poetry and delivering public lectures: except for the fact
that Holmes continued to lecture on physiology at Harvard until the
normal retirement age. If his relationship with the medical profession
was deeply uncomfortable, it was never entirely broken.
Holmes, like Gordon, had no idea how the disease was carried
from doctor to patient. All that was apparent was that it was. In 1850
James Simpson suggested a new way of drawing an analogy between
puerperal fever and the best understood of the infectious diseases,
smallpox. He suggested that:
patients during labour have been and may be locally inoculated with a
materies morbi [infectious matter] capable of exciting puerperal fever;
that this materies morbi is liable to be inoculated into the dilated and
abraded lining membrane of the maternal passages during delivery by
the fingers of the attendant; that thus in transferring it from one patient
to another, the fingers of the attendant act, as it were, like the ivory
points formerly used by some of the early vaccinators . . .
Thus puerperal fever was caused by the introduction of diseased
matter, from other cases of puerperal fever, or erysipelas, or gangrene,
into the abraded lining of the uterus. It was thus spread both in the
same way in which smallpox was spread by inoculators, and in the
same way that infections were spread during surgery. The title of
Simpson's paper was 'Some notes on the analogy between puerperal fever and surgical fever'.
When Simpsons paper appeared in 1850, Ignaz Semmelweis, a
young Hungarian appointed an assistant in the lying-in hospital in
Vienna, was propounding a rather different account of puerperal
fever. Unlike Gordon, Holmes, and Simpson, Semmelweis has
become a hero, at least in the eyes of some, the first proponent, it is
said, of a modern theory of infection.
Vienna had the largest lying-in hospital in the world, with about
7,000 deliveries a year. The hospital was opened in 1784; in 1833 two
admissions wards were established, admitting on alternate days; and
from 1839 one of these wards was used for teaching all the male
medical students, and the other for teaching all the female midwives.
197
From that point on the mortality rate from puerperal fever (which
had previously been identical in the two clinics) was much higher in
the first clinic, where the men were taught, than in the second clinic,
where the women were taught. Semmelweis, who established these
facts, also established that there had been a general increase in mortality
after 1823. This was the moment when it became routine to teach
medical students through the post-mortem dissection of cadavers.
Semmelweis could see only one explanation for the increase in mortality
after 1823 and the differential mortality between the two wards
after 1839: male medical students, who frequently handled cadavers,
were bringing the cause of the disease on their hands from the
morgue to the wombs of their patients. In particular he saw a striking
similarity between the symptoms of puerperal fever and the symptoms
exhibited by Professor Kolletschka, a forensic pathologist who
died of blood poisoning following an accidental injury while conducting
an autopsy. Mothers dying from puerperal fever were dying
for the same reason that Kolletschka had died: cadaveric material was entering their bloodstreams. This hypothesis also helped to
explain a number of puzzling features of the incidence of puerperal
fever. Puerperal fever was rare among mothers who delivered
prematurely or who were brought to the hospital immediately after
delivery, but these mothers were the only ones who were not
subjected to internal examinations by students.
In May 1847 Semmelweis required everyone to wash their hands
with chloride of lime (which eliminated the smell of the dissecting
room) before beginning to examine living patients. At once, the incidence of puerperal fever fell sharply. Semmelweis had recognized that
(since patients were divided randomly between the two wards) the
difference in mortality between the two wards must be due to some
difference in the treatment of patients on the two wards. The difference
evidently had something to do with the difference between
male medical and female midwifery students; and the only obvious
difference was that the first conducted autopsies and the second did
not. This observation had enabled him to eliminate the major cause
then surely his last professional act was to reassert his belief that doctors
were killing patients through ignorance and stupidity, and it is
reasonable to conclude that his breakdown was directly caused by his
sense of hopeless incapacity in the face of this situation.
It is difficult not to sympathize with Semmelweis, who certainly
did know how to reduce deaths from puerperal fever. But it is also
important to acknowledge that his arguments deserved to be met
with puzzlement and scepticism. The original 'cadaverous material' explanation provided a convincing account (more convincing than
contemporaries were prepared to recognize) of the excess deaths on
the ward where medical students were trained, but it clearly only
explained a proportion of all deaths from puerperal fever. Semmelweis's revised explanation, on the other hand, provoked questions for
which he had no answer.
Several critics said that if he was right, he was in effect claiming
that puerperal fever was identical to the fevers that frequently killed
patients who had undergone surgery. Semmelweis effectively
admitted they were right (and as we have seen, Simpson had already
made a connection of this sort), but he failed to follow through the
logic of his argument at this point. If antiseptic measures could halt
the spread of puerperal fever, then antiseptic measures could in principle
be deployed to halt the spread of surgical fever: had he made
this claim, Semmelweis would have pre-empted Lister. But he never
made such a claim; it was as if he recognized responsibility only for
obstetrical deaths. And it was natural for obstetricians to conclude
that they were under no obligation to take more extensive precautions
than surgeons took, which is to say, no precautions at all. It
was particularly important that surgery was a much higher status
discipline than the obstetrics practised in the great lying-in hospitals
of the major cities, where puerperal fever was rife. Most women
who came to such hospitals to deliver were unmarried, and their
infants went into the foundling hospitals. As we shall see, when the
surgeons discovered antiseptic principles, they were quickly adopted
by obstetricians; when Semmelweis advocated the same principles it
occurred neither to him nor to anyone else that the practices of the
falling everywhere, and the young Lister was clearly familiar with
this theory (even though he was later to write as if he had never heard
of germ theory before reading Pasteur), for as a young registrar in
London he had persuaded himself that he could find fungal spores in
gangrenous wounds. Earlier still, as a student, he had demonstrated his
familiarity with the debates about spontaneous generation by getting
into an argument with someone who claimed that cheese mites were
spontaneously generated.
So Lister reasoned that compound fractures were fatal because
germs landed on the exposed wound, and that the solution was to kill
off the germs and to cover the wound. He therefore bathed the
wound with carbolic acid (which had recently been introduced into
the sewer system of Carlisle where it had been shown to prevent the
smell of putrefaction); he cleansed the implements he used in carbolic
acid; he covered the wound with dressings soaked with carbolic acid;
and he placed a temporary metal plate over the whole injured area to
prevent germs falling on it. He also explored methods that would
make it unnecessary to reopen wounds, thus risking infection: the use of catgut (presoaked in carbolic acid, of course) for ligatures, as
this was reabsorbed into the body. Later he was to advocate conducting
operations while a mist of carbolic acid was being sprayed into the
air, killing the germs before they could touch down, and he was to
explore the use of alternative antiseptics. Although his first experiment
in antiseptic surgery was a failure, his second, in August 1865, on
a boy of 11 whose leg had been run over by a cart, was successful.
Soon he could lay claim to a whole series of cases where his methods
had saved lives, and he began publishing reports of these cases in 1867.
His post-amputation death rate fell from 45 per cent to 15 per cent.
Lister's innovation was deceptively simple, and most of his own
accounts of his discovery were misleading: Lister downplayed his own
originality in order to win support for his new practice. In a letter to
his father written in 1866, however, he said that he had made one
of the ten greatest discoveries in world history, and this is perhaps
to underestimate his achievement in founding medical science. The
biggest advance in practical medicine before the twentieth century
1876 John Tyndall had claimed that epidemic diseases would soon be swept from the face of the earth; but they were not entirely misplaced.
This is a story of good medicine, and it belongs in a different
book.
In the conventional story, the triumph of germ theory begins with Koch's discovery of the anthrax bacillus and ends with Wright's conquest of typhoid. This story is a story of technique: Koch's use of solid (gelatine, agar) rather than liquid mediums for the cultivation of pure samples of bacteria, and the invention of the Petri dish; the invention of methods of dyeing bacteria to make them visible under the microscope; Pasteur's triumph in learning how to attenuate or weaken
anthrax bacteria so that they could safely be injected into cows and
sheep. In this story, Lister disappears into germ theory's prehistory,
and is merely the English disciple of Pasteur, a role in which, it must
be said, he cast himself. As a result, the nature of the first crucial
meeting between science and medicine is scarcely explored and its
character is systematically misunderstood.
Modern history of science, including history of medicine, has
consistently sought to destroy the notion that there is a straightforward
logic of discovery: that one discovery leads almost automatically
to another, that one researcher picks up where another has left
off, as if passing a baton in a relay race. At stake is Pasteur's claim that scientific research pursues an inflexible logic. Instead recent histories insist that there are always conflicting views, uncertain outcomes, unpredictable developments. Lister said he was applying germ
theory to the most intractable problem in surgery; in a characteristic move, the latest book on the subject says that there were always numerous germ theories, not one germ theory, and indeed that Lister's own stated views on germs changed significantly over time.
And this is correct: Lister, who started out claiming that one needed
to accept the truth of germ theory in order to successfully implement
his practices, quickly retreated to saying that he did not care what
people believed as long as they did what he said.

30. This etching by Charles Maurin, c.1896, shows the researchers from the Institut Pasteur, led by Pierre-Paul-Émile Roux, who had discovered serum therapy for diphtheria. On the right is the horse from whose blood the serum comes. The poem, by Jean Richepin, is translated by Suzanne G. Lindsay as: 'Smiles of children cured, / Festive sparks in the mothers' eyes that weep no more, / Songs of all our birds saved from the birdcatchers, / Be the diamonds, the laurels, and the flowers of which his crown will be made.'

But in 1865 Pasteur was the only germ theorist of note, so that germ theory, when Lister first introduced antiseptic surgery, was nothing other than Pasteur's germ theory, and Lister always uses the term in the singular, as when he said in 1875 that 'the philosophical investigations of Pasteur long since made me a convert to the germ theory'. It is also said that at first Lister's germs were more like 'seeds of disease', highly plastic agents (not specific causal entities) whose pathogenic qualities depended on the local environment in which they developed.
Lister's germ theory is thus presented as primitive and unsophisticated. But while this may be true of some germ theories of the 1870s and 1880s, it is not true of Lister's. What Lister meant (at least in his early work) by germ theory was quite specifically the germ theory of putrefaction. He was quite clear that 'the character of the decomposition which occurs in a given fermentable substance is determined by the nature of the organism that develops in it', which is a way of saying that germs are specific causal entities. It would be wrong to
identify his public statements (which were aimed at winning support
for antiseptic surgery in the face of bitter opposition) with his private
commitments. And yet for all these (often misguided) attempts to
read Lister on the assumption that he was not a modern germ theorist,
every book reproduces as established fact a false impression carefully
conveyed by Lister's first publication: that Lister's success was
partially explains the long delay. It also helped that Lister was surrounded by people who were worrying about the spread of infections
in hospital wards: since Lister believed that diseases were spread not
by miasmas, but by germs floating through the air, the covering of
wounds with antiseptic dressings was, he was convinced, enough to
prevent infections spreading from patient to patient; as a result he
claimed to have eliminated pyemia, hospital gangrene, and erysipelas
from his wards, despite the fact that the air was as foul and smelly as
always (which was attributed to the fact that the Glasgow Infirmary
had been built over the graves of the cholera victims of 1849 and next
to the Cathedral churchyard). Indeed Lister even took pride in the
fact that his wards were cleaned less often than most hospital wards:
once he knew that it was germs that were the enemy, smells and even
ordinary dirt ceased to worry him.
But at the same time it is clear that for at least thirty years patients
had been dying unnecessarily. The key intellectual preconditions for
antiseptic surgery had been met by 1837; indeed Schwann was only
belatedly developing the work of Spallanzani, who was only belatedly
following up on the ideas of Leeuwenhoek. The key obstacle to
medical progress was not intellectual but cultural: the best doctors and
the best scientists failed to acknowledge the importance of the microscope, and they changed their minds only as it became apparent
(though it should have been apparent from the beginning) that neither
traditional views on disease nor the new chemical theories could
produce effective remedies.
The genius of Pasteur was to make an early commitment to the
idea that processes standardly thought to be chemical were in fact
biological, and then, in the light of this commitment, to tackle one
problem after another: first the fermentation of alcohol (1857–65), then silkworm diseases (1865–70), then anthrax (1877–81), and finally rabies (1880–4). Pasteur crept up on medicine because the obstacles
there were greater: in the case of silkworm diseases, all that was necessary
bacteria. Lysozyme, it had turned out, had little effect on those bacteria
that cause dangerous diseases, but Fleming's experience with it
meant he only needed a glance at his contaminated plate to recognize
that something important might be happening, for on this plate the
unknown mould was killing an organism which was a common
source of dangerous infections, a staphylococcus.
It was straightforward to establish that the mould was a member of
the Penicillium family, and that it was active against numerous dangerous bacteria. Fleming could easily show that it did no harm to white
blood cells: this was important because the laboratory he worked in,
headed by Almroth Wright, had long been committed to the idea
that the key to effective treatment was to mobilize the bodys own
capacity for defence. Fleming himself, during the First World War,
had studied infections in soldiers' wounds and had argued that conventional antiseptics both killed off white blood cells faster than they
killed bacteria, and failed to penetrate into the jagged interstices of
gunshot wounds: they were, he thought, positively fostering infection.
He could also straightforwardly show, by injecting the broth derived
from his mould into a very small number of mice and rabbits, that it
was not toxic. And he could also show that it quickly lost its antibacterial effect when mixed with digestive juices: there would be no
point in taking it as a pill.
Fleming was surely moving towards injecting penicillin (as he was
soon to call his mould broth filtrate) into infected animals to see if it
would cure them. He had long worked with salvarsan, which was the
first drug effective against syphilis, a disease that Fleming had extensive
experience of treating in private practice. But by April 1929 he
seems to have lost all interest in injecting penicillin into the bloodstream. Penicillin took around four hours to kill bacteria; but tests
showed that both in animals and in the test tube it ceased to be active
in blood after two hours. This seems to have persuaded him that it
was essential for the Oxford work, and was not available in
1929; but Ridley and Craddock's work shows that Fleming could
have managed without it.) If Fleming deserves the credit for recognizing
the action of penicillin on his contaminated dish, he also
carries the responsibility for this delay.
The situation would thus appear straightforward: Fleming
discovered penicillin; Florey and Chain first put it to effective use.
The question of the relative contribution of Fleming on the one
hand, and Florey and Chain on the other to the revolution represented
by modern drug therapy has however distracted attention
from an even more puzzling and difficult question. In what sense can
Fleming be said to have discovered penicillin?
Contamination of bacterial cultures by moulds takes place all the
time. In 1871 Sir John Burdon Sanderson reported that moulds of the
Penicillium group would prevent the development of bacteria in a
broth exposed to the air. In 1872 Joseph Lister established that the
growth of Penicillium glaucum would kill off bacteria in a liquid
culture. He at once saw the possible clinical application of the phenomenon. He wrote to his brother saying, 'Should a suitable case present, I shall endeavour to employ Penicillium glaucum and observe if the growth of the organisms be inhibited in the human tissues.' He
never published his results, so we do not know how far and how long
he pursued the question, but we do know that in 1884 a patient of
Lister's, a young nurse, was suffering from an infected wound. Various
chemical antiseptics were tried without success, and then a new substance was used. She was so astonished and so grateful at her seemingly
miraculous cure that she asked Lister's registrar to write the name of this substance in her scrap-book. It was penicillium. Why did
Lister keep this success to himself? There is, I think, only one possible
explanation. Throughout the 1870s and 1880s he was struggling to
win acceptance for the principle of antiseptic surgery. He lacked the
energy or the resources to embark on a new campaign while the
germ theory itself remained so widely contested.
lives of their own? All the evidence we have seen suggests there was
no need for institutions to act; or, where institutions did act, there
was no significant gap between those actions and the views of
individuals.
Did doctors know what they were doing when they obstructed
progress for a century and a half? I've already said that when bad
arguments drive out good, those who do the driving must bear the
responsibility, but one can be responsible for something one never
intended to do: losing one's temper, for example. Once doctors
decided that they need pay no attention to micro-organisms they
immediately ensured that they would never have to encounter evidence
suggesting they had made the wrong choice. There are many
decisions which have the peculiar characteristic of being selfconfirming
because you never know what would have happened if
you had made a different decision. It is perfectly sensible to say that
doctors had no idea what they were doing, but that they bear a
burden of responsibility for the consequences of their actions.
After 1830 the microscope came back into fashion, and progress,
effectively halted since the 1680s, recommenced. The new microscopes
were much easier to work with than Leeuwenhoek's had been,
and they had the air of being serious scientific instruments. Their
introduction coincided with a crisis in therapy provoked by the
beginning of serious counting. That crisis deepened over the next
few decades. In the 1860s Listerism came to the rescue of the hospitals
when they faced an extremely uncertain future. Without germ
theory the crisis in the hospitals would never have been resolved, and
the hospital as an institution would not have survived.
So the story appears to be one of successful adaptation. Is there any
sense in which we can say that individuals or institutions pursued a
strategy intended to rescue medicine from its crisis? Did the new
knowledge serve institutional purposes? I ask these questions because
the story I have been telling might be of the sort that is labelled 'functionalist': according to functionalist arguments, institutions and
social groups react to difficulties by moderating and displacing conflict,
allowing their own adaptation and survival. Few people want to
[…]
quickly stood out. Of the male lung cancer patients only 2 were non-smokers; in the control group 27 were non-smokers. Of the female
lung cancer patients 19 were non-smokers, while in the control group
32 were non-smokers. A smoker, it should be said, was defined very
broadly, as someone who had smoked at least one cigarette a day for
at least a year in the course of their life. Since we now know smoking causes a number of diseases, smokers will have been disproportionately present in the control group, which will, if anything, have understated the real difference between the two groups. A fair guess would be that 80 per
cent of adult men smoked, and 40 per cent of adult women; the men
averaging 15 cigarettes a day, and the women half as many. Doll and
Bradford Hill calculated that if there was no statistical connection
between smoking and lung cancer, and thus if the difference between
the two groups was only a matter of chance, then one would have to
conduct the trial more than a million times for a difference on this
scale to occur once. These rather small numbers (21 non-smokers amongst the lung cancer patients, 59 in the control group) amounted
to proof of a causal connection. Further evidence also showed that
the more you smoked the greater the risk: their initial estimate was
that heavy smokers were fifty times more likely to die of lung cancer
than non-smokers.
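The strength of that claim is easy to check with modern tools. What follows is my own sketch, not Doll and Bradford Hill's method (they used the significance tests of their day): a standard chi-squared test run on the male figures quoted above, taking as an assumption the group size reported in their 1950 paper, 649 men in each arm.

    # A rough check on the one-in-a-million claim, using the male figures
    # quoted in the text: 2 non-smokers among the lung cancer patients,
    # 27 among the controls. The group size of 649 men per arm is an
    # assumption taken from the 1950 paper, not from the passage above.
    from scipy.stats import chi2_contingency

    patients = [2, 649 - 2]    # [non-smokers, smokers], male patients
    controls = [27, 649 - 27]  # [non-smokers, smokers], male controls

    chi2, p, dof, expected = chi2_contingency([patients, controls])
    print(f"chi-squared = {chi2:.1f}, p = {p:.1e}")
    # p comes out at a few parts per million, broadly the scale of the
    # one-in-a-million figure quoted above; the test Doll and Bradford
    # Hill themselves used put the probability smaller still.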
One criticism made of this first study was that the results might
have been skewed in some way because all the patients in the study
came from London. Doll and Bradford Hill quickly conducted a
larger survey including patients in Bristol, Cambridge, Newcastle and
Leeds, publishing the results in December 1952. These two studies
mark the completion of the first phase of their research.
Doll's and Bradford Hill's early work had some critics, including R. A. Fisher, the first statistician to advocate randomized trials (Fisher's area of expertise was agriculture rather than medicine, so his were trials of seeds rather than drugs). Fisher pointed out that the fact that
there was a statistical correlation did not mean there was a causal link:
people with grey hair tend to have short life expectancies, but this is
not because grey hair causes death; it is because old age is one of the
causes of grey hair, and old age is a major cause of death. There is a
real correlation between grey hair and death, but not a causal link. So
there might be a genetic trait, for example, which made one both
disinclined to smoke and relatively immune to lung cancer. But the
crucial fact about Doll's and Bradford Hill's first two publications is that they met with widespread acceptance amongst medical experts. As early as the middle of 1951 the Secretary of the Medical Research Council (which had funded their work) was prepared to state that 'the case against smoking as such is proven' and that there was no need for further statistical work.
Within the new National Health Service, however, there existed a
body called the Standing Advisory Committee on Cancer whose job
it was to advise on government policy. This committee was reluctant
to accept Doll's and Bradford Hill's conclusions, and even more
reluctant to see any action based on them. They therefore called in an
independent committee of experts, chaired by the government actuary,
to reassess the evidence. In November 1953, this committee
unequivocally backed Doll and Bradford Hill. As a result, on 12
February 1954, the minister of health, Iain Macleod, announced to the House of Commons that there was a 'real' link (the word 'real' was slightly equivocal: he did not say there was a causal link) between smoking and lung cancer. He went over the same ground at a press conference the same day, smoking as he did so. At that conference the minister tried to walk a tightrope. He had been advised that 'It is desirable that young people should be warned of the risks apparently attendant on excessive smoking', but he insisted that 'the time has not yet come when the Ministry should offer public warnings against smoking'. In other words, the Ministry had accepted that the
case against smoking was proven, but they planned to do nothing
about it. In this they were following the general policy of the SAC on
Cancer, which held that public education about cancer would only
provoke anxiety without saving lives.
It is important to stress that the argument that smoking causes
cancer had been won by February 1954, because later in life Doll himself used to claim that their research was not taken seriously until later in 1954, with the publication of the first results of an entirely new project, and as a result the 1954 announcement is often presumed to have occurred later in the year, or is confused with an announcement made in 1957. In both 1954 and 1957 the minister announced that smoking was linked to lung cancer, and on both occasions he is supposed to have smoked through the press conference; this story may be true of both occasions, or the two events may have become hopelessly confused.
According to Doll's later account, he and Bradford Hill devised a new statistical study because nobody had taken their early work seriously. The very design of their study shows that this account is wrong. It was funded by a government-appointed body, the Medical Research Council. Its basis was a questionnaire sent in October 1951 to every doctor in the country, all 60,000 of them, by the British Medical Association; it was thus endorsed by the organization that represented all doctors. And finally, it required the active involvement of the Registrar General of Births, Marriages and Deaths, representing another government agency. The second phase of their research was thus only possible because their work had been taken extremely seriously.
Some two-thirds of the doctors approached by Doll and Bradford
Hill responded to the questionnaire. The Registrar General then sent
Doll and Bradford Hill the death certificate for every doctor who
died. By March 1954, 36 doctors who smoked had died of lung
cancer, and no non-smoking doctor had done so; since 12.7 per cent of the doctors in their study were non-smokers, nearly 4 should have done so. Their second study thus followed a population over time and showed that smokers died younger than non-smokers. By 1954, Doll and Bradford Hill were already able to point to eleven studies (starting with their own study of 1950) linking smoking and lung cancer. Fifty years later, when the study was finally concluded (the number of doctors alive in 1954 was rapidly diminishing by this point), Doll and Bradford Hill had shown that smoking reduced life expectancy by about ten years.
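A rough way to see how telling that early zero was, though the calculation is mine and not Doll and Bradford Hill's: if about 4 deaths were expected among the non-smoking doctors, and deaths occur independently of one another, the Poisson distribution gives the chance of observing none at all as

    P(0) = e^{-\lambda} = e^{-4} \approx 0.018,

that is, under 2 per cent: already an unlikely outcome within the first three years of the study.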
16. Death Deferred
In the end we all die: life is a condition with a mortality rate of 100
per cent. Doctors talk of saving lives, but what they really do is defer
death. This chapter is about the deaths that medicine has deferred.
Deferring death is the main test of medicine's success (not the only one, admittedly, since doctors also alleviate pain and suffering and cure non-fatal conditions). But it is far easier to measure deferred
deaths than improved qualities of life. Modern medicine, it turns out,
has been far less successful at deferring death than you would think.
The story so far has been straightforward: up until 1865 medicine
was almost completely ineffectual where it wasn't positively harmful.
Histories of medicine which treat medicine as if it was in some sense
scientific and capable of progress before the emergence of a practical
germ theory of disease have to keep drawing attention away
from this fact, even though it is one that almost no one would deny.
After 1865 doctors began to tackle diseases with some success. There
began to be some real progress in medicine, and this represents the
beginning of a new epoch. Recognizing this, it would be easy to
conclude that medicine was bad until 1865 (when antiseptic surgery
began), or 1885 (when the first modern vaccine was discovered), or
1910 (when salvarsan was introduced as the first effective chemical
therapy), or 1942 (when the first antibiotic was introduced), and that
thereafter it became, in fairly short order, good medicine, life-saving
medicine.
Certainly between 1865 and 1942 doctors began for the first time
to defer deaths in significant numbers, but not in numbers anywhere
near large enough to explain the astonishing increase in life expectancy that took place during the same period. Medicine has been
taking the credit for something that would have happened anyway.
And because there had been a real revolution in life expectancies the
impression was created that doctors were rather good at doing what
they do. In fact, when it comes to saving lives, doctors have been
surprisingly slow and inefficient. For every Semmelweis, horrified at
his failure to transform the practice of his contemporaries, there is a
Fleming, oblivious in the face of a missed opportunity to save lives.
In order to get the achievements of modern medicine in perspective
we have to start thinking about life expectancies. What matters is
the age at which we die, or (to look at it from another point of view) the
proportion of the population that dies each year. If 1 per cent of the
population die each year, and if deaths are randomly distributed across
ages, then the average life expectancy will be 100. But death does not
play fair. It singles out the very young and the very old. In pre-industrial economies something like half those born die by the age of
5; on the other hand a very large proportion of those who survive
infancy and early childhood die in their fifties, sixties, and seventies.
The result is a life expectancy at birth that rarely rises above 40.
The distribution of deaths across ages in early modern England
was such that a death rate of 2.5 per cent per annum corresponded
roughly with a life expectancy of 40 years. (The fact that 2.5 goes into
a hundred exactly forty times is a coincidence; the relationship between
death rate and life expectancy is an empirical one, determined by the
distribution of deaths across ages.) The rate first dropped significantly
below 2.5 per cent per annum (and life expectancy first rose above
40 years) around 1870, though death rates had intermittently been
lower and life expectancies higher in the late sixteenth and early
seventeenth centuries.
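The reciprocal relationship at work in these figures can be made explicit; the derivation below is my own sketch of the idealized case, not a formula from the demographic literature discussed here. If a constant fraction m of the population dies each year, regardless of age, then the chance of surviving to age x is roughly e^{-mx}, and life expectancy at birth is

    e_0 = \int_0^\infty e^{-mx}\,dx = \frac{1}{m},

so m = 0.01 gives e_0 = 100, and m = 0.025 gives e_0 = 40. Real populations deviate from this idealized case precisely because deaths cluster among the very young and the very old, which is why the fit between a death rate of 2.5 per cent and a life expectancy of 40 is an empirical matter rather than simple arithmetic.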
Medicine has always claimed to be able to postpone death, but
there is no evidence that it was able to do so for significant numbers
of people before 1942. Between 1900 and 2000, life expectancies in
Western countries increased from 45 to 75 years, and death rates fell
from 2 to 0.5 per cent. In the course of the last century, death had
been deferred by thirty years. This is known as the 'health transition' or the 'vital revolution'. Most people assume that this increase in life
expectancy is the result of improvements in medicine, but by 1942 life
expectancies had already risen by about twenty years. As a result the
extent of medicine's contribution to the health transition is hotly
debated. One estimate is that in America modern medicine has
increased life expectancy by five years; of those five years, two were
gained in the first half of the century, when life expectancy increased
by twenty-three years, and three in the second half, when life expectancy
increased by seven years. This study implies that Americans owe
less than 20 per cent of the increase in life expectancy over the past century to medicine (5 of the 30 years gained). Another study suggests a figure of 25 per cent
for the period 1930 to 1975. A study for the Netherlands proposes
that between 4.7 and 18.5 per cent of the increase in life expectancy
between 1875 and 1970 was due to direct medical intervention,
almost all of it since 1950: in other words, in the region of 12 per cent.
The same study estimates that between 1950 and 1980 medical intervention improved the life expectancy of Dutch males by two years
and of Dutch females by six years. Thus, according to this research,
medical intervention has been the key factor in gains in life expectancy
since 1950, but more than three-quarters of the gain in life
expectancy took place between 1875 and 1950.
I find these figures hard to believe in the light of my own history: a
compound fracture of the arm at the age of 8 would in all probability
have killed me before the antiseptic revolution, for I would have been
fortunate to survive amputation; and then peritonitis from a burst
appendix would certainly have killed me at the age of 13 had I been
born anywhere without access to modern surgery: the first appendectomy was performed in 1880. But apparently my own experience is
far from typical. The simple fact is that few of us owe our lives to
modern medicine.
In order to understand this puzzle we need to explore the changes
in health over the last two hundred years. Because evidence is particularly good for England, and because much of the debate over the
effectiveness of modern medicine has been concerned with the
interpretation of the English evidence, in what follows I am going to
concentrate on England, but nothing important would change if we
looked at any other modern industrialized country. England is peculiar
only in that industrialization and urbanization took place earlier
and more rapidly there than anywhere else. Death rates in cities were
higher than in the countryside in every Western country until around
1900, so England's exceptionally rapid population growth was
achieved despite the braking effect of urbanization. Between 1681
and 1831 the population of England and Wales nearly tripled, from 4.93 million to 13.28 million.
It would seem reasonable to assume that this population increase
was largely due to increased life expectancy; that is, to assume that
adults are normally sexually active, and that, without birth control,
fertility is largely determined by female life expectancy. In the 1680s
life expectancy was 32 years; in the 1820s (despite a century of falling
wages) it was 39 years, and it was still about the same fifty years later (a
fact partly explained by increased urbanization, which shortened
lives); it then started a steady climb to 70 in 1960. Thus the first thing
to note is that there was a small but significant gain in life expectancy
before the first of the modern revolutions in medicine, the victory of
germ theory in 1865, took place.
The classic argument that medicine has had almost nothing to do
with modern gains in life expectancy is Thomas McKeowns The
Modern Rise of Population (1976). McKeown's case depended on a
series of tables and graphs that showed the proportion of the population
killed by a number of key diseases and the way in which this
changed over time. Thus respiratory tuberculosis killed 40 people in
every 10,000 a year in 1840 and was responsible for 13 per cent of all
deaths; this had fallen to 5 deaths in 10,000 by 1945, and yet there was
no effective treatment in England until the introduction of streptomycin
in 1947. The BCG vaccine had been available from 1921, but
its general introduction was delayed because of doubts about its effectiveness, doubts that continue to the present day. Bronchitis,
[…]
and part from increased procreation outside marriage (in 1680 less than one-tenth of all first births were illegitimate; by 1820, 25 per cent were).
Earlier marriage and a higher proportion marrying could in
principle be the result of rising standards of living, which would make
it easier for people to afford to start a family, but in fact the fit
between rising fertility and rising standards of living is not very good.
It seems clear that in the early modern period fertility was kept in
check by deliberate abstinence on the part of the unmarried, and that
in the course of the eighteenth and nineteenth centuries abstinence
became much less popular. In short, people became more sexually
active. There is no adequate study of why this might be, but a reasonable
guess is that it reflects the decline in the church courts and of
other mechanisms, formal and informal, of policing sexual behaviour.
The history of population increase before 1870, in England at least,
turns out to have more to do with the history of sexual activity
(including sex within marriage) than with the history of life expectancy.
Smallpox inoculation and vaccination may have been responsible
for one-third of the increase in life expectancy, but they can only
explain one-ninth of the increase in population. The primary cause
of population increase, at least in England, was an increase in sexual
activity, a possibility which McKeown never suspected although his
subject was the modern rise in population.
Second, McKeown chose to concentrate his attention not just on
diseases caused by germs, but on diseases caused by airborne germs.
Of the increase in life expectancy between 1850 and 1970, 40 per cent
was due to the declining death rate from these diseasestuberculosis,
bronchitis, pneumonia, influenza, whooping cough, measles, diphtheria,
and smallpox. The most important single cause of improved
life expectancy was the decline in the death rate from tuberculosis.
The consequence of McKeown's concentration on airborne diseases
was that standard public health measures, which had conventionally
been assumed to be a major factor in rising life expectancy, suddenly
seemed irrelevant. Piped water, sewers, and water-closets are completely
irrelevant to the spread of TB.
255
What then were the obstacles to the spread of TB? Once the
bacillus had been identified by Koch in 1882 people knew that they
were dealing with an airborne germ, and public health campaigns
against spitting might well have had some impact on the spread of
infection. When I was a child there were still signs on buses in England
(and in France) telling passengers not to spit. Perhaps the isolation
of sufferers in sanatoria might also have served to protect the
uninfected population. Yet arguments like these break down in face
of a simple fact: 85 per cent of young people in 1946 had antibodies
that showed they had been exposed to TB. Thus the TB germ was
still clearly widespread. What seems to have changed is not the proportion of the population being exposed to TB, but the proportion dying as a result of exposure.
In fact this seems to be true of disease in general. Three surveys of
a friendly society, delightfully named the Odd Fellows, enable us to
assess the incidence of sickness in the working-class population in
1847, 1868, and 1895. What we discover is that there was no decline in
the rate at which people fell ill. What declined was the proportion of
illnesses that resulted in death. McKeown is right: the germs were still
there, but the people were better able to survive them.
Third, McKeown's preoccupation with airborne diseases meant
that he paid little attention to the history of sanitation. In the period
1850 to 1900, the reduction of death from water- and food-borne
diseases was almost as important as the reduction in death from airborne
diseases. London began to introduce sand filtration of the
water supply in 1828. Chemical treatment of sewage water was common
in the 1860s; it was in part this that gave Lister the idea of
antiseptic surgery. The construction of a modern sewage-treatment
system in London began in 1858. Generally across England, investment
in improvements to water and sewage was highest in the last
two decades of the century. Public health measures were clearly
crucial in eliminating cholera, which, between appearing in England for the first time in 1831 and for the last in 1866, caused in all some
113,000 deaths. The conquest of cholera has always been an exciting
[…]
bleach: the case for cleanliness was clear long before the triumph of
the germ theory of disease. By the beginning of the twentieth century
most new English houses had running water, flush toilets and
baths, but what is clear from the English evidence is that much of
this improvement in hygiene had little effect on life expectancies.
Children in particular continued to die in the same numbers as
before.
What had happened was that Heberden had misunderstood his
own statistics: deaths from 'griping in the guts' had just been reclassified by doctors as deaths from 'convulsions'. It took a systematic
application of the principles of the sanitary reformers to domestic life
to conquer infantile diarrhoea; there was no need to wait for germ
theory. What needed to be done had been clear since (at least) Edwin Chadwick's Report on the Sanitary Condition of the Labouring Population of Great Britain (1842), but it took almost a hundred years to transform childrearing practices.
If we look for deliberate interventions to reduce disease prior to
the 1930s there are some important examples: quarantine against
plague, obstetric forceps, vaccination against smallpox, macro- and
micro-sanitation. Forceps deliveries and smallpox vaccinations were
mainly administered by doctors, and quarantine and sanitation drew
extensively on medical theories. But none of these developments can
account for the extraordinary increase in life expectancies from the
1870s onwards. Here we have to return to McKeown's contentious claim that the explanation lies with improved nutrition.
At first it seems as though McKeown must be wrong on this
crucial question. We now know that England was the first country in
Europe to escape periods of high mortality caused by bad harvests.
From the early seventeenth century there was enough food, not only
enough to prevent people from starving, but enough to prevent
people from being so weakened by malnutrition that they succumbed
in significant numbers to infections in years of bad harvests and high
food prices. If malnutrition caused, as McKeown argues, high death
rates, then surely death rates should be higher in these years? Since
[…]
Conclusion
Primum non nocere.
First do no harm.
(Thomas Inman, 1860)
Three simple arguments run through this book. The first is that if we
define medicine as the ability to cure diseases, then there was very
little medicine before 1865. The long tradition that descended from
Hippocrates, symbolized by a reliance on bloodletting, purges, and
emetics, was almost totally ineffectual, indeed positively deleterious,
except in so far as it mobilized the placebo effect.
The second is that effective medicine could only begin when
doctors began to count and to compare. They had to count the
number of patients that lived and the number that died, and then
compare different treatments to see if they resulted in improved survival
[…]
Further Reading
I have organized this short guide to further reading according to the Parts into which the book is divided. Further bibliography, references, and links to other websites can be found at www.badmedicine.co.uk.
INTRODUCTION
The late Roy Porter is undoubtedly the most influential medical historian of the last few decades. See in particular his The Greatest Benefit to Mankind: A Medical History of Humanity from Antiquity to the Present (London, 1997). Another very useful standard history is Irvine Loudon (ed.), Western Medicine: An Illustrated History (Oxford, 1997). For a doctor's view of the history of medicine see Raymond Tallis, 'The Miracle of Scientific Medicine', in his Hippocratic Oaths: Medicine and its Discontents (London, 2004), 17–24. The key critique of modern medicine is Ivan Illich, Limits to Medicine (London, 1976).
I. THE HIPPOCRATIC TRADITION
The main primary sources are: Hippocratic Writings, ed. G. E. R. Lloyd (London, 1978); Galen, Selected Works, tr. P. N. Singer (Oxford, 1997); Charles Singer, Galen, On Anatomical Procedures (Oxford, 1956). Highly recommended is Jacques Jouanna, Hippocrates (Baltimore, 1999). Shigehisa Kuriyama, The Expressiveness of the Body and the Divergence of Greek and Chinese Medicine (New York, 1999) is exceptionally thought-provoking.
For a survey of the Middle Ages, Nancy G. Siraisi, Medieval and Early Renaissance Medicine (Chicago, 1990). A wonderful book on pre-scientific medicine is Barbara Duden, The Woman beneath the Skin: A Doctor's Patients in Eighteenth-Century Germany (Cambridge, Mass., 1991). To understand what doctors were really doing, read Daniel Moerman, Meaning, Medicine and the Placebo Effect (Cambridge, 2002), or, more briefly, chapter 2 of Harry Collins and Trevor Pinch, Dr Golem: How to Think about Medicine (Chicago, 2005).
II. REVOLUTION POSTPONED
For a good general survey of the early modern period, see Roger French, Medicine before Science: The Business of Medicine from the Middle Ages to the Enlightenment (Cambridge, 2003). There are a number of helpful books on Renaissance advances in anatomy: Bernard Schultz, Art and Anatomy in Renaissance Italy (Ann Arbor, 1985); Andrew Cunningham, The Anatomical Renaissance (Aldershot, 1997); Andrea Carlino, Books of the Body (Chicago, 1999); R. K. French, Dissection and Vivisection in the European Renaissance (Aldershot, 1999). Also on vivisection, see Anita Guerrini, 'The Ethics of Animal Experimentation in Seventeenth-Century England', Journal of the History of Ideas, 50 (1989), 391–407.
There is a fine digital replica of Vesalius's Fabrica available from www.octavo.com. The standard authority is C. D. O'Malley, Andreas Vesalius of Brussels, 1514–1564 (Berkeley, Calif., 1964). A valuable article is Katharine Park, 'The Criminal and the Saintly Body: Autopsy and Dissection in Renaissance Italy', Renaissance Quarterly, 47 (1994), 1–33.
Harvey can be read in William Harvey, The Circulation of the Blood and Other Writings (London, 1963). An excellent short introduction to the extensive literature on Harvey is Andrew Gregory, Harvey's Heart (Cambridge, 2001). C. R. S. Harris, The Heart and the Vascular System in Ancient Greek Medicine (Oxford, 1973) puzzles over why the ancient Greeks (including Galen) did not discover the circulation of the blood.
On theories of contagion, see Carlo M. Cipolla, Miasmas and Disease: Public Health and the Environment in the Pre-Industrial Age (New Haven, 1992); Vivian Nutton, 'The Seeds of Disease: An Explanation of Contagion and Infection from the Greeks to the Renaissance', Medical History, 27 (1983), 1–34; Vivian Nutton, 'The Reception of Fracastoro's Theory of Contagion: The Seed that Fell among Thorns?', Osiris, 6 (1990), 196–234; and Lise Wilkinson, 'Rinderpest and Mainstream Infectious Disease Concepts in the Eighteenth Century', Medical History, 28 (1984), 129–50.
Three postscripts to my discussion of theories of animate contagion. 1) I am not the first to turn from Nardi to Platter (above, p. 128). The anonymous annotator of the 1714 edition of Creech's translation of Lucretius (which advertises itself as 'a complete system of the Epicurean philosophy' and was reprinted in 1722) added (apparently as an afterthought) a note 'Of Contagion, the chief Cause of a Plague' (II, 776–81) in which he reports Platter's views with care. He stresses that Platter has an account of how resistance to disease may vary, and that he argues that there may be asymptomatic carriers of diseases; in other words, he fully recognizes what we would think of as distinctively modern aspects of Platter's theory.
2) I was too quick to accept the Singers' account of theories of animate contagion after 1725 (above, p. 129). See the excellent article by M. E. De Lacy and A. J. Cain, 'A Linnean Thesis Concerning Contagium Vivum: The Exanthemata viva of John Nyander and its Place in Contemporary Thought', Medical History, 39 (1995), 159–85.