
Theory and Methods in Political Science - 2 Version - p4


‘logocentric’ and male in conception.

As a result, the liberal vision may exclude the alternative perspectives and preoccupations of women and collude with the historical banishment of women’s experiences and interests to the private or domestic sphere (Pateman 1988; Young 1990).

In view of this, many feminists have adopted a perspective that is more postmodernist in orientation, placing a greater emphasis upon a politics of difference. This, it is argued, would permit a renegotiation of the distinction between the public and the private and would subvert dominant images of human nature, permitting the more effective articulation of the experiences of women and the identification of implicit forms of oppression.

Equally, debates in political theory have chimed strongly with contemporary issues surrounding multiculturalism. Liberals have tended to see the rational consensus they seek to establish as providing a suitable framework of agreement within which different cultural groups can coexist, harbouring their own particular ‘conceptions of the good’ in a context of mutual respect underwritten by fair procedures and equal rights (Kymlicka 1995). The agreement over justice that liberalism theorises is also, at the same time, an agreement to disagree over other moral, cultural or spiritual matters and therefore embodies a fundamental principle of tolerance. Others, however, have seen this approach as inadequate.

192 Normative Theory

Steve Buckler 193

Liberalism, it is argued, is complacent in assuming that this sort of rational consensus is an effective principle in a highly culturally diverse society and, further, fails to address the ways in which minority groups can be marginalised and caricatured in a way that brands them as a threatening ‘other’. Again, here, a more postmodern perspective has tended to emerge, affirming the need for a radically pluralist politics that would permit the constant renegotiation of the political issues raised by cultural diversity and the more effective articulation of complex cultural identities (Phillips 1993). In turn, some writers more inclined to a communitarian view have emphasised the fundamental importance of collective cultural identity, a factor which both deontological liberalism (with its emphasis upon personal choice) and postmodernism (with its emphasis upon fluid identities) may treat too lightly. Such a reemphasis is needed, it is suggested, if we are to appreciate fully what might be involved in protecting minority cultures (Taylor 1992).

The theoretical debates we have looked at in this section relate closely, then, to key social issues of our time. And they are debates that look set to develop and remain vigorous. It is worth emphasising also that none of the perspectives recognising value-pluralism that we have examined implies anything like a moral nihilism, whereby the possibility of asserting and arguing about authoritative normative claims is denied. They all, in one way or another, carry implications for how we might best and most justly arrange our common affairs. The differences concern the nature and scope of what can be said in this area and what kinds of knowledge claims underlie this. Even the more radically pluralist standpoints invoke some kinds of claims to knowledge about the human condition, if not about human nature as such. The possibility, therefore, of making compelling normative claims about our political and social arrangements remains central to the debate.

Conclusion

The various responses to the logical positivist challenge that we have surveyed in this chapter point to the continued health of normative theory. The debates surrounding deontological liberalism and value-pluralism provide the dominant contemporary themes, although it is worth noting that the preoccupations we examined in relation to interpretive and critical theory have had a lasting impact. The emphasis upon linguistically embedded norms and the constitutive nature of conceptual frameworks continues to be felt in terms of the universal recognition of the importance in politics of the discursive conditions under which norms arise (whatever their scope or consistency). The questioning of technical forms of rationality, and the quest in critical theory for standards of rationality above and beyond the purely instrumental, are echoed in modern deontological theory and in the general recognition of the value of non-instrumental political norms.

In general, then, normative theory has continued to maintain both its intellectual vigour and its social relevance. At the same time, the debates we have looked at do also suggest that one aspect of the logical positivist view (although not one confined to that view) has had an impact. The assertion of absolute metaphysical truths as a basis for normative claims is now regarded with greater suspicion. This is evident in the strong influence that anti-foundationalism has had upon recent political philosophy. And, as we have seen, even those who maintain a foundationalist approach tend to frame it in more modest terms, eschewing the grand moral vision and recognising value-pluralism. This indicates a greater reflexivity or self-consciousness with respect to what can be said in political philosophy, reflected in greater concern about how ambitious normative claims can be and about the basis and scope of their justification.

Equally, this sense of circumspection may lead us to suppose that the debates we have looked at, whilst they will evolve, are not likely to be resolved by arrival at some final, authoritative answer. But this might only confirm Isaiah Berlin’s contention that normative theory is an ongoing and ineradicable aspect of our continuing attempts to understand the world.

Further reading

• Some useful general works on themes in recent political philosophy are Kymlicka (1990), Plant (1991), Goodin and Pettit (1993) and Ashe et al. (1999).

• A notable historical and theoretical account of logical positivism is Kolakowski (1972).

• A series of re-assessments of Peter Winch’s work can be found in the special issue of History of the Human Sciences (2000), vol. 13, no. 1.

• For some considerations on the application of an interpretive approach, see Taylor (1971).

• On the general issue of relativism in social science, see Hollis and Lukes
(1982).

• Jay (1973) is a comprehensive history of the Institute for Social Research, and a useful introduction to critical theory is Held (1982). Introductions to the work of Habermas are McCarthy (1984), White (1988) and Outhwaite (1994). Collections of critical literature around Habermas’s work can be found in Thompson and Held (1982) and Dews (1999). See also Meehan (1995).

• Aside from Rawls, some important contributions to contemporary liberalism are Raz (1986) and Dworkin (1981a/b). Critical collections on the work of Rawls are Kukathas and Pettit (1990) and Daniels (1978). Nozick (1974) is a critique of the Rawlsian approach from a libertarian point of view.

• Notable communitarian works are Sandel (1982), Walzer (1983) and Taylor (1989). Important contributions to the liberal-communitarian debate are collected in Sandel (1984) and in Avineri and de Shalit (1992). See also Gutmann (1985). The debate is analysed by Mulhall and Swift (1996) and a feminist perspective is provided in Frazer (1993).

• Notable works from a postmodern perspective are Lyotard (1984) and Rorty (1980). More specifically on questions of identity and difference in relation to political theory see Connolly (1991), Mouffe (1993), Honig (1993) and Phillips (1995).

• A useful collection on liberalism and pluralism is Bellamy and Hollis (1999).

• Important feminist contributions to contemporary debate include Pateman (1988), Okin (1989) and Young (1990). A useful collection on multiculturalism is Gutmann (1994).

Part II

Methods


Chapter 9

Qualitative Methods

FIONA DEVINE

Qualitative methods is a generic term that refers to a range of techniques including observation, participant observation, intensive individual interviews and focus group interviews which seek to understand the experiences and practices of key informants and to locate them firmly in context. More often than not, researchers use two or more of these techniques in the field and research that draws on these techniques is usually referred to as ethnographic research or an ethnography (Lareau and Shultz 1996: 3). This chapter is divided into four parts. First, it looks at the role of qualitative methods in the social sciences in general and political science in particular. Second, it considers the ontological and epistemological underpinnings of qualitative research. Third, it evaluates criticisms that are often levelled against qualitative research. Fourth, it discusses a recent example of qualitative research on electoral volatility at the 1997 general election by the author and her colleagues (White et al. 1999). Overall, it will be argued that the use of qualitative methods in political science has made an important contribution to our understanding of political phenomena and explanations of them. Some political scientists are reluctant to acknowledge the importance of qualitative methods to the discipline. They doubt the value of qualitative techniques and the need to be reflexive about issues of method in the discipline. Fortunately, there are others who recognise the advantages of qualitative research and are more reflective about issues of method. Empirical research in political science is moving in this direction.

The role of qualitative methods in political science

Qualitative methods have played a major, albeit understated, role in political science, from the study of individuals and groups inside the formal political arena to the political attitudes and behaviour of people (be they voters or members of elites) outside it. It is no coincidence, however, that it is a sociologist (albeit one who retains a long-held interest in political science) who is the author of this chapter since the origins of different qualitative techniques lie in sociology and anthropology. Participant observation was first used in anthropology to study other cultures (Powdermaker 1966; Spradley 1980; Wax 1971). It involves researchers immersing themselves in the social setting in which they are interested, observing people in their usual milieu and participating in their activities. On this basis, the researcher writes extensive field notes. The participant observer depends upon relatively long-term relationships with informants, whose conversations are an integral part of field notes (Lofland and Lofland 1985: 12). They are the ‘raw data’ that are analysed, and the interpretation of the material forms the basis of a research report. More recently, participant observation has been used by sociologists, as in Roseneil’s (1995) chronicle of the experience of women involved in the Greenham peace camp in the UK and Eliasoph’s study (2000) of civic groups - recreation club members, volunteers and activists - and how they avoid talking politics in the USA.

However, it has been more common for sociologists (and, as we shall see, political scientists) to use intensive interviewing techniques rather than participant observation. In-depth interviewing is based on an interview guide, open-ended questions and informal probing to facilitate a discussion of issues in a semi-structured or unstructured manner. The interview guide is used as a checklist of topics to be covered, although the order in which they are discussed is not preordained (Bryman 1988: 66). Open-ended questions are used to allow the interviewee to talk at length on a topic. Finally, various forms of probing are used to ask the interviewee to elaborate on what they have said (Fielding 1993a: 140-1). Intensive interviews are, then, ‘guided conversations’ (Lofland and Lofland 1984: 9). Such lengthy interviews are usually conducted with only a small sample of informants. The transcriptions constitute the data that are analysed and interpreted. Interviewers also engage in observing the interviewee and the setting in which they are found and these observations facilitate the interpretation of the material. In contrast to the highly structured interview used in survey research, based on a tightly defined questionnaire and closed questions, intensive interviews are open and flexible, allowing the informants to elaborate on their values and attitudes and account for their actions (Mann 1985; Brenner et al. 1985). For example, McAdam’s (1988) in-depth interviews with volunteers who went to Mississippi to register black voters in 1964 - the Freedom Summer project - capture the voices and experiences of the American civil rights movement.
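The interview guide described above (a checklist of topics consulted in whatever order the conversation takes, with probes held in reserve) can be represented as a simple data structure. This is purely an illustrative sketch: the topic names and probes are invented for the example and do not come from any actual study.

```python
# Illustrative sketch of an interview guide as a checklist of topics,
# each with a few optional probes. Topics and probes are hypothetical.
interview_guide = {
    "party attachment": ["Can you say more about that?",
                         "Why did that matter to you?"],
    "views on leaders": ["How did you come to that view?"],
    "the local campaign": ["Could you give an example?"],
}

def topics_remaining(guide, covered):
    """Return guide topics not yet discussed.

    The checklist is consulted during the interview, but the order of
    discussion follows the conversation rather than the guide itself.
    """
    return [topic for topic in guide if topic not in covered]
```

In use, the interviewer would tick off topics as the conversation covers them and consult `topics_remaining` before closing the interview, rather than working through the guide in a fixed sequence.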

Finally, academic researchers are increasingly undertaking focus group interviews, although the technique is still most closely associated with opinion poll organisations and the politicians who use them (Barbour and Kitzinger 1998). Note, for example, the frequent (and often disparaging) reference in newspapers to Tony Blair’s use of focus group research findings to define new issues and devise new policies! The technique involves intensive discussion about a set of issues with a small group of people (say 10-12 participants). The main advantage of focus group interviews over individual interviews is that participants interact in a discussion on a particular topic, agree with other interviewees in some respects and disagree in others and raise new issues and concerns. It is the interaction between all the participants in a quasi-naturalistic setting - that is, not too far removed from everyday group conversations - that is unique to the method (Maynard 1998). The discussions are usually either tape-recorded or extensive notes are taken which are then subject to different forms of analysis (like those associated with individual interviews). The transcripts or notes may also be subject to conversation analysis which involves a very detailed examination of what people say, how they say it, how they respond to other people’s reactions and so forth. Focus group discussions have been used, for example, by Gamson (1992) in the USA to consider the process of opinion formation, how people deal with media information, and how they draw on their own experiences in life and also those of people they know in talking politics. He argues that people are able to conduct informed and reasoned discussions about political issues and have a political consciousness often dismissed by opinion pollsters.

From this brief description of qualitative methods, it should be clear that
they are most appropriately employed where the goal of research is to
explore people’s subjective experiences and the meanings they attach to
those experiences. Intensive interviewing, for example, allows people to
talk freely and offer their interpretation of events. It is their perspective
that is paramount (Harvey 1990). Qualitative methods are also good at
tapping into the thought processes or narratives that people construct. In-
depth interviews allow people to tell their own story in language with
which they are familiar. Where the discussion of issues flows naturally it is
possible to understand the logic of an interviewee’s argument and the
associative thinking that led them to particular conclusions. Finally,
qualitative methods draw particular attention to contextual issues, placing
an interviewee’s attitudes and behaviour in the context of their individual
biography and the wider social setting. This is sometimes referred to as a
holistic approach. Qualitative methods, therefore, are good at capturing
meaning, process and context (Bryman 1988: 62; Rose 1982). Inevitably,
research of this kind is very labour-intensive, especially when fieldwork is conducted over a long period of time, and it is not surprising that researchers
usually concentrate on a small group of people. It is the meaning of a
particular practice - say, not voting at an election - that is important to
qualitative researchers rather than the frequency of abstention that
concerns quantitative researchers.

Qualitative methods have been employed across a number of sub-fields of political science since participants in the world of politics have been willing to talk about their involvement in groups, their role in formal positions of power, their views about the political system and so on. Political scientists, for example, have frequently interviewed pressure group activists (Grant and Marsh 1977; Mills 1993). Members of political parties and party officials have been interviewed extensively about developments in party organisation, strategy and so forth (Seyd 1987; Whiteley 1983). While previous work in the UK focused on the Labour Party, recent research on party membership has extended to the Conservatives as well (Seyd and Whiteley 1992; Whiteley et al. 1994). The prize-winning book on the rise and fall of the Social Democratic Party by Crewe and King (1995) draws on many interviews with a wide variety of people. No doubt observations and so forth played a very important part in their analysis too since both authors were involved in the early days of the SDP and their involvement must have shaped their insights into the momentous events of the 1980s. Qualitative methods have been used extensively in the study of local politics in Britain (Gyford et al. 1984; Lowndes and Stoker 1992; Maloney et al. 2000). Until recently, qualitative methods were rarely used in research on central government because of limited access to the seemingly secretive world of high politics (the exception being Heclo and Wildavsky 1981) although the move to more open government has facilitated greater willingness among government officials to be interviewed (Smith 1999).

There are, then, a number of research techniques falling under the generic heading of qualitative research which have been widely used by sociologists and political scientists, who choose one or more of them to elicit people’s subjective experiences, opinions, beliefs, values and so forth. While academic researchers usually choose the research technique that is most appropriate for what they want to explore, the choice of methods is not merely a matter of technical superiority. As we shall now see, opting for one technique over another raises epistemological arguments about different ways of knowing the social world (Bryman 1998).

The epistemological underpinnings of qualitative methods

The use of methods is often associated with an epistemological position about the production of knowledge (May 1997; see also Chapter 1). Quantitative methods, for example, have been linked with a positivist stance that aligns itself with a particular view about the assumptions and mechanisms of the natural sciences (see Chapter 10). It is underpinned by a belief that only that which is grounded in the observable can count as valid knowledge (Halfpenny 1992; Halfpenny and McMylor 1994). A positivist notion of knowledge, therefore, is grounded in the objective and tangible and researchers working within this paradigm are preoccupied with creating the conditions in which objective data can be collected. As Sanders notes in his chapter (Chapter 2), early twentieth-century positivists were concerned with the precise operationalisation and measurement of theoretical concepts (Henwood and Pidgeon 1993: 15; Lee 1993: 13). The preference is for survey research with a standardised approach to interviewing based on a predetermined questionnaire and closed questions where there is limited interaction between the interviewer and the respondent to avoid bias. The interviews can be replicated easily and are, therefore, reliable in reproducing similar facts. The statistical analysis of the coded replies produces observed regularities that form the basis of explanation, generalisation and prediction. The major concern of survey researchers is with the predictive ability of their statistical findings (Bryman 1988: 34). Overall, the highly structured interview associated with survey research is a form of communication under controlled circumstances somewhat analogous to an experimental situation found in the natural sciences (Fielding 1993b: 144).

Qualitative methods have been aligned with an interpretive epistemology that stresses the dynamic, constructed and evolving nature of social reality. In this view, there is no objective science that can establish universal truths or can exist independently of the beliefs, values and concepts created to understand the world. These concerns are unique to the social sciences and account for the different methods used in the natural and social sciences. Researchers committed to this paradigm attach primary importance to the perspective of conscious actors who attach subjective meaning to their actions and interpret their own situation and that of others (Benton 1977; Keat and Urry 1975: 205). Thus, intensive interviews are appropriate when seeking to understand people’s motives and interpretations. Such guided conversations cannot be free of bias, although the influence of the researcher can be acknowledged. There is a strong emphasis on describing the context in which people live their lives, form opinions, act (or fail to act) and so on. Participant observers go to great lengths to watch people in their natural settings, especially since subjective meanings vary according to the context in which they are found. Consequently, the emphasis is on seeking to understand human experiences and practices rather than making predictions about behaviour (Henwood and Pidgeon 1993: 16). Explanation involves understanding and interpreting actions rather than drawing conclusions about relationships and regularities between statistical variables. Thus, the in-depth interview is about listening to people talking in order to gain some insight into their world-views and how they see things as they do (Fielding 1993b: 157).

It should be emphasised, however, that the distinction between the choice of methods and epistemological positions should not be overdrawn. It would be ridiculous, for example, to dismiss all quantitative researchers as positivists (Marsh 1982, 1984)! To adopt such a position would imply that different methods are mutually exclusive and cannot be employed in conjunction with each other (see Chapter 11). The choice of methods is usually made on the basis of whether it is a suitable way of answering particular research questions (Bryman 1988: 108-9). Quantitative and qualitative methods involve collecting data in different ways and the crucial question is whether the choice of method is appropriate for the theoretical and empirical questions that the researcher seeks to address. As it is, social scientists increasingly use a mix of methods rather than one method in isolation (Brannen 1992; Cohen and Manion 1985). This is not to suggest that methodological eclecticism is free of technical or epistemological problems (Miles and Huberman 1984; Reichardt and Cook 1979). There are a number of technical issues raised by the mixing of methods, such as how to deal with apparent inconsistencies between data sets and whether or not one data source should take priority over another (Devine and Heath 1999: 199-205). There are also important epistemological issues at stake for, as Mason (1996: 28) has argued, researchers should ensure that the integration of methods is legitimate and based on ‘similar, complementary or comparable assumptions about what can legitimately constitute knowledge or evidence’. Nevertheless, a combination of methods can lead to a more rounded and holistic perspective on the topic under investigation.

More recently, the epistemological underpinnings of the social sciences have been challenged by postmodernism, which regards the quest for reliable knowledge of the social world as misguided (see, for example, Denzin 1997). As Williams and May (1996) note, postmodernist thinking has big implications for questions of epistemology and method (see Chapters 1 and 6). In relation to the former: ‘postmodernism can be viewed as a critique of the values, goals and basis of analysis that, from the enlightenment onwards, have been assumed to be universally valid’ (Williams and May 1996: 158). With reference to the latter: ‘the alternative to the complacent foundationalism of modernism becomes the maxim ... that anything goes’ (ibid.). More specifically, the postmodern critique of research practice has confronted empirical researchers with a dual crisis: a crisis in representation and a crisis in legitimation.

Crisis of representation

The first crisis is based on questioning the expert status of the researcher, given that: ‘truth is contingent and nothing should be placed beyond the possibility of revision’ (Williams and May 1996). It is not possible to capture lived experience directly because the researcher is merely an interpreter whose own account has no greater claim to ‘truth’ than anyone else’s account. There can never be a final accurate representation of what was meant or said - only different textual representations of different experiences (Denzin 1997: 5). Representation and reality can no longer be said to correspond to each other, therefore, and what becomes significant is how researchers use textual devices in an attempt to create ‘authentic’ accounts (Stronach and MacLure 1997). Postmodernism demands that researchers think about the research process, and not just the research outcomes, in a more radical way than they have done to date.

Crisis of legitimation

The crisis in legitimation arises from a rethinking of concepts such as validity, reliability and generalisability. A claim to validity, based on rules concerning the production of knowledge and its relationship to ‘reality’, is the usual means by which an account is given legitimacy and by which ‘good’ research is distinguishable from ‘bad’ research. Denzin (1997: 6), however, argues that attempts to claim validity for a piece of research ‘cling to the conception of a “world out there” that is truthfully and accurately captured by the researcher’s method’. Consequently, the postmodernist rejects specific criteria for judging research and ‘doubts all criteria and privileges none’ (Denzin 1997: 8).

The postmodern critique, therefore, sees the researcher as intrinsically implicated in the production of knowledge (Williams and May 1996). Moreover, centrality is given to texts and to the questions of power and authority that are inscribed within them (a text can be anything from a literary text, an official document or an intensive interview transcript, through to a photograph, a movie or a building). This means that a pivotal concern of postmodern research is the deconstruction of texts and their embedded power relations. As Williams and May (1996: 169) note: ‘how the social world is represented becomes more important than the search for an independent “reality” described by such texts’. The advantage of this approach is that the socially constructed, interpretive and dynamic nature of reality is genuinely appreciated. A disadvantage, however, is that the accounts produced by the researcher in the process of deconstruction become as much the focus of the postmodern gaze as the initial texts upon which the analysis was based. As I have argued elsewhere (Devine and Heath 1999), the postmodern researcher is consequently caught in a hall of mirrors, with too much attention devoted to the preoccupations of the researcher rather than the research topic. This tension can be seen, for example, in Charlesworth’s (2000) study of the ‘political dispossessed’ working class of Rotherham, South Yorkshire, in the UK.


To date, there are few examples of empirical research - especially in political science - which have been informed by postmodern influences. Those who have contributed to the debate on postmodernism have not really discussed the issue of method head-on other than in a highly abstract way. Therefore, it is not entirely clear what form postmodern research strategies might take. The emphasis on uncertainty and disappointment is not especially helpful. To be sure, research is often messy and rarely proceeds in the neat and tidy way that researchers wish for (Devine and Heath 1999). Nevertheless, it is possible to distinguish between ‘good’ and ‘bad’ research according to certain criteria, and those criteria can vary from one methodology to another. This position implies that it is possible to cultivate knowledge of the social world and that research can make important substantive contributions to an understanding and explanation of the social world. This is not to say that the issues raised by the debates over the crisis in representation and legitimation should be ignored. On the contrary, the debate concerning the limits of validity and the competing claims of alternative accounts is welcome. Discussion on the degree to which any knowledge of the social world is highly dependent on the methodological devices employed by the researcher is also welcome. Still, the wholesale dismissal of conventional criteria for assessing social research can easily collapse into a rather hopeless relativism that gets nobody anywhere! (see Chapters 1, 9, 11).

Criticisms of qualitative research

While the sterile debate about quantitative versus qualitative research no longer preoccupies social scientists (Bryman 1988: 84-5), there are some who still dismiss qualitative research as impressionistic, piecemeal and even idiosyncratic (even if political correctness demands that such views are expressed in private conversations rather than publicly in print). Quantitative research is seen as representative and reliable. Systematic statistical analysis ensures that research findings and interpretations are robust. Overall, quantitative research is replicable and comparable and generalisations can be made with a high degree of certainty. Social surveys produce hard scientific data (Hellevik 1984; De Vaus 1991). In contrast, qualitative research is often dismissed as unrepresentative and atypical. Field relations raise problems about bias while the interpretation of the material can be highly subjective and not open to external validation. Finally, qualitative research is neither replicable nor comparable and, therefore, not the basis on which generalisations can be made. Qualitative research produces soft, unscientific results. On the face of it, these criticisms of qualitative research seem damning. Closer reflection, however, suggests that these criticisms are misplaced. That is to say, what counts as a valid method depends on the aims and objectives of a research project. For example, if the goal of qualitative research is to explore the meaning of voters’ attachment to a political party in depth, it is not concerned with the frequency of particular views and opinions. It would be nonsensical to employ methods more appropriate to capturing the latter rather than the former. Moreover, as we shall see, qualitative researchers are as systematic and rigorous in their methods of empirical investigation as quantitative researchers.

Representativeness and reliability


The issues of representativeness and reliability revolve around the question
of designing and generating a sample of ‘people, places or activities
suitable for study’ (Lee 1993: 60). It is often assumed that qualitative
researchers do not devote as much attention to generating a sample as
quantitative researchers because they are not concerned with
representativeness (see Miller 1995). This is far from the case, precisely because there
is often no sampling frame from which to draw a random list of names to
approach for interview. Snowball sampling is the usual way of generating a
sample. Interviewees are asked to nominate potential informants and the
request is made at each subsequent interview until the required number is
reached. Snowballing a sample continues throughout the period in the
field. However, there are problems in generating a sample from one
network of people with particular characteristics because interviewees can
nominate a set of interconnected people. Researchers have to be on their
guard against producing a restricted sample and find ways of generating as
wide a sample of interviewees as possible. It is not surprising that most
qualitative research reports devote a considerable amount of time to the
issues of how a sample was generated and the characteristics of the
informants included in the final sample. In sum, the choice of sampling
methods or the use of sampling frames is no less important in qualitative
research than quantitative research. A failure to justify one’s sampling
strategies in any research only undermines the strength of the claims that
can be made about the data (Devine and Heath 1999: 13-4).
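The snowball procedure described above can be sketched in a few lines of code. The sketch below is purely illustrative and not drawn from the chapter: the toy network, the function name and the nomination limit are all invented. It does, however, show the problem the text warns against: starting from one seed can confine a sample to a cluster of interconnected people unless the researcher deliberately widens the search.

```python
import random

def snowball_sample(network, seeds, target_size, nominations_per_interview=3, rng=None):
    """Illustrative snowball sampling: each interviewee nominates contacts,
    who are added to the sample until the target size is reached."""
    rng = rng or random.Random(0)
    sample = list(seeds)
    queue = list(seeds)           # people still to be interviewed
    while queue and len(sample) < target_size:
        person = queue.pop(0)
        # Contacts not yet in the sample are eligible for nomination
        contacts = [c for c in network.get(person, []) if c not in sample]
        nominated = rng.sample(contacts, min(nominations_per_interview, len(contacts)))
        for c in nominated:
            if len(sample) < target_size:
                sample.append(c)
                queue.append(c)
    return sample

# A toy (hypothetical) acquaintance network: A-D form a tight cluster,
# with E-G reachable only through D.
network = {
    "A": ["B", "C", "D"], "B": ["A", "C"], "C": ["A", "B", "D"],
    "D": ["A", "C", "E"], "E": ["D", "F"], "F": ["E", "G"], "G": ["F"],
}
print(snowball_sample(network, seeds=["A"], target_size=5))
```

Because the sample can only grow along existing ties, the outer members (F, G) are reached late or not at all, which is why qualitative researchers monitor the characteristics of informants as the snowball grows.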

Objectivity and bias

Qualitative research is often dismissed because of bias and the lack of
objectivity in the collection of empirical material. The relationship
between an interviewer and interviewee is not aloof, for example, since
the interviewer participates in the conversation (Bulmer 1984: 209; Newell
1993: 97). The relationship cannot be distant if confidential personal

206 Qualitative Methods

information is to be revealed or when sensitive topics are discussed (Lee
1993: 111). In such instances, a greater level of involvement is required so
that the researcher inspires trust (Bulmer 1984: 111). Thus, qualitative
researchers neither subscribe to the view that research can be objective, nor
do they seek objectivity in field relations. That is not to say, however, that
field relations are unproblematic and their impact on the collection of the
information can be ignored. Playing an active role in facilitating
conversations is not easy. Informants are often anxious to please and
offer responses that they perceive to be desirable. They may seek to
impress with shows of bravado and create the impression that they know
more than they do. They may ask the interviewer to offer their own
opinions on the topics under discussion (Finch 1984). All of these
considerations demand that the interviewer is reflexive about the conduct
of an interview or an episode while engaged in participant observation and
that they reflect on how the interaction shaped what was said, how it
was said and so forth. Thus, rather than attempt to control the effects of
bias in field relations, qualitative researchers prefer to acknowledge it in
the process of collecting empirical material and explicitly consider its
effects on substantive findings (Devine and Heath 1999: 9—10; Hobbs and
May 1993; Lee 1993).

Interpretation

Concern is frequently voiced about the interpretation of qualitative
material. Is the interpretation placed on the material merely a personal
reading? Of course, the analysis and interpretation of qualitative material
proceeds in a different manner to quantitative research that is concerned
about relationships between variables (Rose 1982; Silverman 1997).
Transcripts can be analysed manually, being subjected to numerous
readings until different themes emerge, or, with the aid of computer
packages for qualitative research, coded and analysed on this basis. All
empirical material, be it of a quantitative or qualitative kind, is subject to
different interpretations and there is no definitive interpretation that tells
the ‘truth’. Nevertheless, the qualitative researcher has to demonstrate the
plausibility of their interpretation like their quantitative counterpart.
Various ways of enhancing the validity of interpretations exist. The
interpretation of interview material can be discussed with a group of
researchers to obtain a consensus on the interpretation. It is possible to ask
the informant for their reaction to the interpretation of the interview
transcript and this may lead to a reinterpretation. The plausibility of an
ethnography can be enhanced by doing full justice to the context of the
participant observation or intensive interviewing (Atkinson 1990: 129).
Finally, the internal consistency of an account can be assessed to establish

Fiona Devine 207


whether an analysis is coherent with the themes that have been identified.
External validity can be considered by checking findings with other studies
(Fielding 1993b: 166). In sum, the onus is on the qualitative researcher to
make the interpretation of the data as explicit as possible in the
development of an argument using systematically gathered data (Mason
1996; Silverman 1997).
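The coding stage described above (reading transcripts until themes emerge, then tagging passages) can be mimicked crudely in code. The sketch below is purely illustrative: the theme labels and keyword lists are invented for this example, and dedicated qualitative analysis packages offer far richer coding and retrieval, but it conveys the basic idea of attaching thematic codes to interview extracts.

```python
# Hypothetical first-pass coding scheme: an extract is tagged with any
# theme whose keywords appear in it. Real coding is iterative and judged
# in context; this is only the mechanical core of the idea.
THEMES = {
    "leadership": ["leader", "strong", "weak", "control"],
    "party image": ["unions", "fat cats", "rich", "ordinary"],
    "issues": ["tax", "education", "health", "waiting lists"],
    "tactical voting": ["tactical", "wasted vote", "polls"],
}

def code_extract(text):
    """Return the sorted list of candidate themes found in an extract."""
    text = text.lower()
    return sorted(theme for theme, keywords in THEMES.items()
                  if any(k in text for k in keywords))

extract = ("Mr Ashdown said he would put up taxes, which I would agree with, "
           "provided he said he was going to use them for education, health "
           "and things like that.")
print(code_extract(extract))  # -> ['issues']
```

A researcher would then retrieve all extracts sharing a code and re-read them together, which is where the interpretive work of establishing themes actually happens.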

Generalisability

Finally, qualitative research is often dismissed because it is not possible to
generalise the findings from a study that confines itself to a small number
of people or a particular setting. Qualitative researchers have to be
tentative about making inferences from a small number of cases to the
population at large, yet qualitative researchers can design research that
facilitates an understanding of other situations (Rose 1982: 38). The
findings of one in-depth study can be corroborated with other research to
establish similarities and differences. Such a comparison would be a
limited test of confirmation (Marsh 1984: 91). As it is, it is rarely the case
that a sample of interviewees is so unrepresentative or the interpretations
so misleading that any suggestion about the wider incidence of certain
phenomena is wholly specious. Finally, qualitative research findings are
often the basis on which subsequent quantitative research is conducted
from which generalisations can be made. To date, however, there have
been few genuine attempts in political science to bring quantitative and
qualitative data together to address inconsistencies as well as consistencies
(an issue which will be considered further in the conclusion). Qualitative
research, therefore, can have wider significance beyond the time and place
in which it was conducted (Ward-Schofield 1993: 205). Qualitative
research methodology has its disadvantages like other methods and
techniques. Its advantages, however, are clear where the goal of a piece of
research is to explore people’s experiences, practices, values and attitudes
in depth and to establish their meaning for those concerned.

Illustration of qualitative research

Somewhat surprisingly, qualitative research has been largely absent in the
field of electoral behaviour. It may be that voting is particularly amenable
to quantitative research - along the lines of the British Election Surveys
(BES) — and that this has inhibited the use of other methods and
techniques. The over-reliance on the BES was the source of debate in the
early 1990s (Devine 1992; Dunleavy 1990). It is certainly the case that other
methodologies are now employed in the study of elections. That said, the
BES remains the dominant mode of enquiry even though, as one of the
principal authors of the 1997 BES publications (Evans and Norris 1999;
Norris et al. 1999) has readily acknowledged, the validity of the statistical
data remains open to some doubt (Norris 1997). That is to say, the BES
had been used to develop various models of voting behaviour but they are
essentially socio-psychological models of individual behaviour derived
from the analysis of aggregate patterns and trends of voting from the
electorate as a whole (Norris 1997). Indeed, Sanders (1999: 201) has
conceded that ‘aggregate patterns can often hide a great deal more than
they reveal about the electoral calculations that individual voters make’.
Against this background, a qualitative study of why people changed their
vote, or wavered but voted as before, was undertaken immediately after
the 1997 general election (White et al. 1999). The sample of 45 interviewees
(see Table 9.1) was drawn from the campaign panel of the BES and
interviewed in depth six weeks after the election on how and why they
voted as they did.

Why did these voters act differently in 1997 or consider doing so but
remain loyal to their political party on polling day? There was a
long-standing and deep-seated disillusionment with the Conservative
Government. The catalogue of disgruntlement with the Conservative Party was
informants focused particularly on the standing of the leaders and the

Table 9.1 The political profile of the sample, 1992-97

Alterations in 1997
Conservative to Liberal Democrat      7
Conservative to Labour                9
Labour to Liberal Democrat            3
Liberal to Labour                     5
Green Party to Labour                 1
Voting to non voting                  7
Non voting to voting                  2
                                     34

Waverers in 1997
1992 Conservative voters              7
1992 Liberal Democrat voters          3
1992 Labour voters                    1
                                     11

Total interviewed                    45

Source: Adapted from White et al. (1999: 10).

related imagery of the parties. John Major, for example, was widely
regarded as a weak and ineffectual leader who could not hold his
increasingly disunited party together. As a previous Conservative voter
explained:

Well, she [Margaret Thatcher] was strong. You know, she wasn’t scared
to get up and, you know, if they were slagging her off like, she slagged
’em back. I think they have to be a strong leader otherwise the party’s no
good because he needs to be, or she needs to be, whoever it may be, they
have to control. They have to have a head to tell the other ones, or sort
out the other ones. It’s no use letting everyone do as they want ’cos, to
get away with what they want, ’cos it just goes as you’ve seen the
Conservatives this last time. All they did leading up to the election was
fight with each other. That’s all they did. Or fight with the other ones.
They didn’t actually in my eyes, didn’t sort of get it together themselves.
(Male, 30s - Wirral West)

Tony Blair, in contrast, was credited with transforming the Labour
Party into a political party that could win an election. While many had
been unsure of Kinnock, Blair was seen as genuine and likely to keep the
promises he made. Most importantly, it was his perceived strength and
decisiveness in leading his party in opposition and preparing it for
government that impressed interviewees. His leadership appeared to
attract younger voters, unencumbered by Labour’s past, to switch directly
from Conservative to Labour (see also Crewe and Thomson 1999). An
evaluation of the leaders, therefore, was often intertwined with an
evaluation of the political parties they led and each shared the positive
and negative traits identified. Focusing on the leaders, therefore, appeared
to be a shorthand way of discussing the state of the political parties
especially amongst the least politically interested and informed
interviewees. These findings appear to confirm Crewe and King’s (1994) argument
that the leaders have an indirect influence - via their own relationship to
the party they lead - on the way in which people vote.

The image of the political parties was also very influential for the sample
of voters. Two aspects of party imagery were important. First, they clearly
associated the political parties with different classes although the
association had changed in recent years. That is to say, long-standing
Conservatives who had previously felt that the Conservative Party
represented all classes in its safe management of the economy no longer felt that way.
These voters expressed their unhappiness with the ‘fat cats’ - the senior
managers of various private and recently privatised utilities — who were the
main beneficiaries of privatisation. Many of the interviewees talked about
how the ‘rich had got richer and the poor had got poorer’ under the
Conservatives and that the Tories ‘only look after the rich’. While this
view was often expressed by Liberal Democrat or Labour voters, it was
increasingly a view shared, albeit reluctantly, by long-standing
Conservative voters. The mass appeal of the Conservatives under Thatcher in the
late 1970s and early 1980s had clearly disappeared. In contrast, Labour was
seen as representing the mass of ordinary working people including the
middle class and the working class. The party’s focus on the issues of
health and education tapped into concerns about welfare services on which
most people depended. Their policies for shorter waiting lists and smaller
class sizes were seen as reflecting the concerns of the mass, rather than the
few. The Labour Party’s appeal, therefore, was a broad-based and
inclusive appeal that focused on concerns shared by the working class
and the middle class. Thus, class voting may have been low in 1997 (Evans
et al. 1999: 94), but class imagery was an important part of the electoral
appeal of the parties on which the interviewees commented.

Second, the transformation of the Labour Party from Old to New
Labour was influential on how the interviewees voted. Its transformation
was especially important for those interviewees who had voted Liberal
Democrat in 1992. The perceived loosening of the relationship between
Labour and the unions opened the way for many of the interviewees to
vote Labour (Kellner 1997: 120-1). After all, the threat of union
domination and the implications for the economy - often used by the
Conservative Party against Labour - were no longer a consideration. In 1992, some
voters had misgivings about Labour even if they had wanted change, as a
mobile voter who voted Liberal Democrat in 1992 and then switched to
Labour in 1997 explained:

I wanted it, the Government changed from Tory ... [I voted] just to get
the Tories out but at the same time I didn’t really want Labour in then
because ... there were a lot of things, you know, it was still in my mind
about all this militant stuff ... miners striking, Arthur Scargill shaking
his fists, what’s his name in Liverpool doing dodgy deals and getting
loads of backhanders ... I didn’t know much about them [Liberal
Democrats] at all but maybe I liked Paddy Ashdown and thought he
seemed a real man ... All I can remember is that I wanted things to
change. (Female, 20s - Northampton North)

Moreover, the perceived convergence of the political parties was noted
in favourable terms. Labour’s move to the centre ground - its willingness
to forgo old dogmatic policies like nationalisation and adopt new
pragmatic policies such as jointly funded public and private ventures -
impressed many of the interviewees (Budge 1999; Sanders 1999). As a voter
who moved from Conservative to the Social Democrat Party to Labour
explained:

I was by now totally clear that the Conservative Party had to move ... If
they had another spell in power you were really starting to get a one-
party state ... But in developments in the year before ... The Labour
people had obviously changed a lot of what they were trying to do.
They’d modernised themselves, admitted they’d moved. They’d moved,
in fact, very much into the SDP area. When you looked at the way they
were doing [things] and what they were talking [about] and the people
they’d got, it was almost as if the SDP had risen again. They were very
similar. Also, the leader character seemed to be attractive and strong
enough to say what he thought, and what he thought was reasonable
and matched my own sort of thinking. (Male 60s - Northampton North)

In this instance, the move across the political spectrum had been gradual
and painless and, indeed, the voter quoted here emphasised that the parties
had moved to his way of thinking rather than vice versa. Labour’s past
image as being too closely associated with the unions, too left-wing and
too internally divided, which had impeded victory in 1992 (Heath et al.
1994), had been left behind. As we shall see, however, the Labour Party’s
transformation left traditional Labour voters unhappy with the electoral
choice before them in 1997.

Reference has already been made to the importance of issues to the
interviewees, although it is important to stress that which issues were
important, how important they were and how they were discussed were
closely tied to past political allegiances. The issue of Europe, for example,
was important to only a small group of Conservative interviewees who
invariably remained loyal in 1997. The state of the economy and the issue
of taxes were discussed in that some interviewees commented on how taxes
had increased under the Conservatives thereby reneging on earlier
promises. However, the dominant issues were education and health (Norris
1997). Conservative voters who shifted to the Liberal Democrats in 1997
mentioned these issues although they approved of the way that the Liberal
Democrats acknowledged the need to increase income tax to improve
services, while Labour remained vague about how it would finance
improvements. Thus, one switcher explained:

Mr Ashdown said he would put up taxes, which I would agree with, and
I would willingly pay the extra coppers and what not that he said he was
going to charge me, provided he said he was going to use them for ...
education, health and things like that. (Male, 60s - Oxford West and
Abingdon)

Somewhat ironically, a policy of explicitly stating taxes would be raised
to pay for better services also prompted disillusioned Labour Party
supporters to vote Liberal Democrat. The Liberal Democrats were
perceived as more radical than Labour, therefore, in stating explicitly that
taxes would go up rather than accepting the Conservative agenda of not
increasing them (Budge 1999; Holiday 1997). This was not a bone of
contention, however, for previous Conservative voters who switched to
Labour. As a young interviewee explained:

Well, you want the best for your children. You want your children to
grow up in a safer and like educational world and I just thought, like all
them things in the news you know, the last government wasn’t doing
enough and now, I’ve got to, had to show an interest ’cos my children
are going there [school]. So that’s why I started voting Labour ’cos they
said they’re going to change it and they’re going to change, like the
crime, cut down teenage crime. (Male, 20s - Northampton North)

Issues, therefore, were important to the interviewees (Sarlvik and Crewe
1983) although which issues were important to them, how they were
discussed and their salience relative to other considerations were heavily
influenced by their past partisan alignment.

Finally, tactical considerations and evaluations about local and national
outcomes influenced how the interviewees actually cast their vote because
how they acted varied even if they shared similar assessments of the
political parties (Curtice and Steed 1997: 310). There were former
Conservative voters, for example, who were seriously attracted to Labour
but local constituency factors intervened. As a voter explained:

Tactically, I voted to get them out but I wanted Labour in. If I’d been in
a seat where Labour had a chance of winning, I would have voted
Labour so I wanted them in but because I’m down here in a country area
with farming, hunting, shooting and fishing, [it was] an absolutely
wasted vote if you voted Labour. There’s 5,000 people voted Labour and
23,000 voted Conservative and 21,000 voted Lib Dem last time so we
thought, ‘right, vote Lib Dem and we’ll topple them’ which we did. It
was a tactical vote, but if I’d had a chance of voting Labour I would have
voted straight Labour. (Female, 40s - Devon West and Torridge)

Local factors also worked in the opposite direction, leading wavering
Conservative voters to remain loyal, for example, rather than vote Liberal
Democrat or Labour. Evaluations of the national outcome - namely, the
likelihood of a Conservative defeat and Labour victory - also influenced
how some interviewees voted (Miller et al. 1990). It compelled some voters,
for example, to remain loyal to the Conservatives to keep their vote up
rather than waste their vote on the Referendum Party. Disillusioned
Labour supporters who voted Liberal Democrat did so in the context of
a likely landslide victory for Labour. A former Labour voter who much
preferred the Liberal Democrat policy on education explained that:

When I’d seen the polls and they said, you know, Labour would
definitely get in and whatever, then I thought, well, I’d vote for the one I
feel is the best. Anyway, so I voted Liberal Democrat. I thought it would
be nice to get some Liberals in as well. If they [the polls] had said ‘oh, it’s
a bit dubious whether Conservative or Labour was going to win’, I think
I’d probably have gone Labour. (Female, 30s - Colne Valley)

In this context, previously loyal Labour supporters felt they had the
space to vote differently or abstain in the event of a landslide. Tactical
considerations, local constituency factors and evaluations of the national
result, therefore, played an important role in shaping the interviewees’
voting decisions.

Overall, the qualitative research highlighted the continuing influence of
family and class on early voting behaviour. Most importantly, it shapes
voters’ images of the political parties, including support for one particular
party and opposition to other political parties. The nature of early political
socialisation in the family and local community also influences the extent
to which party attachments are strong or weak. Indeed, early images of the
political parties can be very enduring and are often the starting point from
which voters evaluate leaders, parties, issues and so forth. It was found, for
example, that most of the sample had long histories of voting for one
political party prior to 1997. It should be stressed, however, that this
stability was not necessarily indicative of a strong commitment to a
political party; sometimes it was merely a product of routine or falling
in with family and friends. Be that as it may, images of the political parties
are not static but change as the issues and policies they stand for change
and the perceived unity and strength of the party change. Against the
background of eighteen years of power, long-standing Conservative voters,
for example, were dismayed with a party leader who they had been
ambivalent about in 1992. They were unhappy with the extent of disunity
and squabbling over Europe within the party. They were unconvinced by
the Conservative Party’s claim to run a sound economy, uninterested in
their agenda of keeping taxes down and increasingly convinced that the
Conservatives represented the rich rather than the whole electorate.

In contrast, the Labour Party was no longer burdened by its poor
imagery of the late 1970s and 1980s. The Conservatives’ attempts to ignite
fear and uncertainty about Labour’s ability to handle the economy and to
portray it as the party of big spenders fell on deaf ears, especially among
younger voters with little or no memory of events nearly two decades
earlier. Instead, voters were impressed by the Labour Party with its strong
leader and united party and were convinced by its agenda of improving
education and health services. The appeal to the whole of the electorate
also convinced many of the interviewees to support them. This is not to
suggest that they easily moved across the political spectrum. Their political
histories, past party alignment, early images of the parties and tactical
considerations greatly influenced voters’ decision-making processes. Thus,
some previous Conservative voters would never vote Labour and chose to
abstain, remain loyal or vote Liberal Democrat. Other Conservative
voters, unfettered by a strong alignment to the Conservatives or negative
images of the Labour Party in the past, could shift to Labour without too
much difficulty. Labour’s transformation was especially attractive to
Liberal Democrat supporters who now viewed the party’s agenda as less
dogmatic and more pragmatic and, thus, more in tune with their views and
opinions. This support, however, came at a price for Labour. Its
transformation had not found favour among its traditional constituency
of working-class supporters strongly committed to socialist ideals. These
voters, like their Conservative counterparts at the other end of the political
spectrum, either abstained, remained loyal to Labour or voted Liberal
Democrat.

In sum, the qualitative research explained why individual interviewees
voted while others did not, and why some changed their votes while others
remained loyal, in a way that quantitative data cannot. The material
suggests that the 1997 election was critical (Norris and Evans 1999) - in
some way different from past elections — in that some long-standing
Conservative voters were so disillusioned that they were prepared to place
their vote elsewhere. It also suggests that it was not a critical election - that
it shared similarities with the past — in the way that Labour faced the
problem, once again, of winning middle-class voters and losing working-
class voters. The analysis of the qualitative material was also suggestive in
highlighting patterns and regularities between groups of voters within the
sample in terms of how they responded to party appeals, which issues were
important to them and so forth. Young Conservative voters, for example,
appeared to find it easier to move directly to Labour, while some older
Conservative voters would never, in their wildest dreams, consider voting
Labour! This suggests that the concept of political generations and cohort
effects which Butler and Stokes (1974) spoke of many years ago should be
reconsidered in the study of elections. These comments suggest ways in
which further analysis of the BES could proceed. There could, for example,
be more disaggregation of the data to look at different groups of voters,
rather than just examining aggregate patterns and trends among the
electorate as a whole.

Some might argue that the qualitative material presented here does not
offer any revelations. Only those who remain hostile to qualitative
research demand that it demonstrate its worth by some new extraordinary
revelations. Arguably, listening to the way in which the voters of this study
described how they came to vote revealed much about the causal processes
by which final decisions were made. This contribution is as great as any
account of the predictive power of individual variables to the development
of explanatory theories of voting behaviour and may, indeed, lead to a
widening of the remit of election studies.

Conclusion

In this chapter, it has been argued that qualitative research has made a
significant contribution to political science. Be that as it may, there are
some political scientists who are hostile towards qualitative research albeit
in the privacy of conversation rather than the publicity of print. They
remain sceptical of what they see as a costly approach to the collection of
political data. They scoff at the small sample sizes of qualitative work that
they reject as atypical and worthless. They dismiss qualitative findings as
insubstantial and not worthy of note, since they are rarely new or
unfamiliar. They think it is the stuff of sociologists and not proper political
‘scientists’! Fortunately, there are other political scientists who are more
enlightened about qualitative research. The inclusion of this chapter in a
political science textbook for students is testimony to this fact. There are
signs that the advantages of qualitative research are being recognised as
more research of this kind is being undertaken in the discipline. Moreover,
there are encouraging indications that more research that combines
quantitative and qualitative methods is being undertaken. The ESRC-
funded Democracy and Participation programme is a case in point. These
developments are to be welcomed. For political science as a whole, they
herald an era in which epistemological questions about how we know the
political world and the process of producing knowledge about that world
are not taken for granted. Arguably, the discipline will be all the better
for it.

Further reading

There are numerous books that discuss different methods and techniques
in the social sciences.

• One of the most useful texts is Gilbert’s (1993) edited collection that
considers quantitative and qualitative methodologies.

• Recently published books focusing on qualitative research that have
enjoyed favourable reviews include Silverman (1997), Mason (1996) and
Devine and Heath (1999).

• Good qualitative research straddling sociology and politics includes
Roseneil’s (1995) study of political action at Greenham in the UK and
Eliasoph’s (2000) study of civic groups and avoiding politics in the USA.

Chapter 10

Quantitative Methods

PETER JOHN

This chapter examines four aspects of the quantitative approach: the
collection and management of data; the advantages associated with the
quantitative method; common objections to these methods; and the
different types of analysis that can be employed, including multivariate
methods. However, first it is important to contextualise the approach
within the broader discipline of political science.

The divide between quantitative and qualitative research remains highly
pronounced. The communities of political scientists who work with large
numbers of observations are often segregated into sub-topics. As a result
many academics assume that quantitative investigation only concerns
elections, voting systems, party manifestos and political attitudes rather
than having a more general application. The division becomes manifest in
the descriptors researchers apply to themselves and to others: quantitative
researchers are known as political scientists; the rest often have the labels
of students of politics, area specialists, biographers and public policy
specialists. Not only do different topics, skills and networks create the
divide; it is sustained by apparently clashing conceptions of the purpose
and practice of social science. Many qualitative researchers think that
quantitative work is underpinned by a crude version of positivism whilst
qualitative work describes complex realities, acknowledges that
researchers cannot separate their values from the political world, engages with and
seeks to understand the beliefs and aspirations of those who are being
researched and rejects the idea that there are universal rules of human
behaviour. In this context a review of quantitative methods cannot just be
a description of the different techniques on offer. Such an account would
reinforce the idea that quantitative researchers live in a different spirit
world to others. Instead this chapter aims to persuade sceptics of the depth
and subtlety of quantitative analysis. It argues, moreover, that the current
debate between quantitative and qualitative research is shallow and rests
on stereotypes of the research process.

Peter John 217

The argument presented here mirrors that of King, Keohane and Verba
in their book, Designing Social Inquiry (1994). Writing with the tools of
quantitative analysis in mind, they argue that both fields apply a ‘unified
logic of inference with differences mainly of style and specific technique’
(1994: 1). They recommend that qualitative inferences could be improved
by the adoption of a few straightforward techniques. Whilst the book
should be compulsory reading for every research student, experienced
researchers often feel uncomfortable with the clean and tidy nature of their
programme, which seems to squeeze out the messy problem-solving and
practical way in which most qualitative researchers actually do the job.
Often investigators respect their hunches; they discover bits of data by
accident, ‘play detective’ and follow up leads. Sometimes they start with
the wrong research question and, after many blind alleys, come to a
moment of revelation. It is often quite late in the project that the student or
even experienced academic knows how to frame the research problem and
is able to test alternative hypotheses. This chapter claims that quantitative
researchers also engage in messy and unpredictable data analysis; they
solve problems incrementally and follow their intuitions just like their
qualitative counterparts. They discuss their strategies with their colleagues
and seek the advice of others in the research community. The message is
that all researchers should design their projects to be capable of testing
hypotheses, but they should also use their practical knowledge to carry out
exciting and imaginative pieces of work.

Quantitative researchers sometimes help their critics because convention requires them not to report the interpretive aspects of their craft. They
report complex statistical analysis as though they had run their data
through a ‘black box’, making knowledge of the technique a necessary
prerequisite to understanding the article. This chapter aims to demystify
both the theory and presentation of quantitative research. The idea is not
to knock it down, but to show that much of its practice coheres with the
rich traditions of social science. The chapter also reports recent developments in the United States of America where conventions and rules of
scholarly journals encourage or require political scientists to present as
much information as possible about how they gather their data, choose
their models and ensure that others can replicate their results (King 1995).
Critics of the classic technique of political science, the ordinary least
squares (OLS) model, will be relieved to find that many methodologists
share their worries and that the new generation of non-parametric models
overcome some of the problems. Moreover, in spite of rapid advances in statistical techniques, programming and software, leading US scholars argue that researchers should make further efforts to present their
data more effectively (King et al. 2000).

218 Quantitative Methods

The collection and management of data

Quantitative work rests on the observation and measurement of repeated incidences of a political phenomenon, such as voting for a political party,
an allocation of resources by a government agency or citizen attitudes
towards taxation and public spending. By observing variables over a large
number of cases, it is possible to make inferences about a class of political
behaviour, such as who votes for a political party, who gets resources from
governments and what is the distribution of attitudes to public spending in
the adult population. With large numbers, social scientists can confidently
make generalisations about the empirical world. Statistical theory shows
that the larger the number of cases (or the greater number in proportion to
the whole population), the surer data analysts can be that what they
observe is not a random occurrence. Moreover, political scientists often
want to analyse whole populations, such as the voting record of all
Members of Parliament or all electoral systems in the world, which
involves large numbers.
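The logic of sampling error behind this claim can be sketched in a few lines of code. The figures below are illustrative only: they assume a simple random sample and use the textbook formula for the standard error of a proportion.

```python
import math

def standard_error(p: float, n: int) -> float:
    """Standard error of a sample proportion under simple random sampling."""
    return math.sqrt(p * (1 - p) / n)

# An imaginary party polling at 40 per cent: the margin of error
# (roughly two standard errors) narrows as the sample grows.
for n in (100, 1000, 10000):
    print(n, round(2 * standard_error(0.4, n), 3))
# → 100 0.098
#   1000 0.031
#   10000 0.01
```

The same observed proportion is thus far more trustworthy in a sample of 10,000 than in a sample of 100, which is why data analysts can be surer that what they observe in large samples is not a random occurrence.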

Qualitative researchers are often suspicious about the way in which their
colleagues generate observations. Particularly when variables are attitudinal or behavioural, like those drawn from large-scale surveys using
standardised questions, the measures appear to ignore social and political
contexts (Kirk and Miller 1986). Even the statistics that emerge from
government departments may reflect political decisions about how to
collect data. In the end, official information is what politicians and
bureaucrats wish to make public. Some techniques, such as content
analysis (the classification and counting of data drawn from the texts of
media or political debates), appear to strip out the context of the language
and render it meaningless or sterile. Quantitative researchers appear to be
blind to the relationship between the observer and observed which makes
each act of collecting data unique. Critics claim that quantitative researchers ignore the complexity of the world by their quest to turn politics into a
series of repeated and identical experiences or events (Ragin 2000).

Qualitative researchers recommend that investigators immerse themselves in their data and seek to understand the perspective of those who are being researched (Allan 1991). But the recommendations of effort and care do not overcome the fundamental problems that qualitative researchers highlight. If their point is pushed to its logical conclusion, no researcher can
understand the perspectives of the researched given the limitations of time
and the interference of social science methods. If pure understanding is the
goal, the long informal interview is as contaminated as the standardised
survey question. Qualitative and quantitative researchers can find a way
out of this conundrum by accepting that what they observe is partial and
limited by their research instruments. A successful and meaningful project does not need to find out about respondents’ constructions of reality. For
the purposes of the study, what matters is whether the information about
an individual or organisation indicates an underlying set of attitudes,
dispositions or behaviours. For example, researchers do not need to know
about voters’ social construction of the realm of economics when finding
out whether the electorate considers the state of the economy in their
voting decisions.

After taking into account the limited objectives of most of their studies,
quantitative researchers become aware that complex social realities may
not always be captured by repeated observations. In certain situations, quantification is not appropriate because the social construction of the data would render what is being measured either meaningless or biased. For
example, research that depends on standardised questions may not be
replicated across countries because of differences in culture and language.
Even in the appropriate contexts, researchers should attend to the validity
of the data to know whether they measure what the project intends them
to. For example, in the qualitative prelude to most surveys and in pilots,
questions are bandied about, interviewers evaluate interviews and respondents fill in an additional questionnaire about their experience of completing questions. Quantitative researchers pay a lot of attention to reliability (that the data produced do not depend on the particular act of measurement) and
seek to maximise it where possible. For example, survey researchers have
frequent discussions about the effect of question wording and order on the
responses to their questions. Content analysis researchers use inter-coder reliability scores to find out whether different coders have classified an item in the same way (Krippendorff 1980: 129-54). Such problems do not just occur in surveys
and the analysis of texts. Statisticians who use data from government
departments frequently investigate how the data are collected. They
consider the possible biases in the results and think of ways to correct
for them. There is even discussion about the extent to which research
instruments reflect biases within social science, such as in favour of class-
based explanations in voting behaviour (Catt 1996: 67-9, 92). Also Sanders
(1999) argues that British surveys of voting behaviour use a question on
party identification that measures voting preference rather than a determi¬
nant of it. This mistake biases the results of studies that use party
identification to predict voting behaviour. Debates also occur in footnotes
and appendices and in discussions and emails between colleagues; they
become part of the common stock of knowledge that members of the
research community acquire. These critical activities show that quantitative researchers do the same things as their qualitative colleagues: they seek
to find the best data to answer their research questions.
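Inter-coder checks of the kind mentioned above can be illustrated concretely. The sketch below computes Cohen's kappa, one common agreement score (not the measure Krippendorff himself proposes), for two hypothetical coders classifying invented newspaper items; it corrects raw agreement for the agreement two coders would reach by chance.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Agreement between two coders, corrected for chance agreement."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: the product of the coders' marginal proportions.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Two coders classify ten imaginary newspaper items.
a = ['pos', 'pos', 'neg', 'neg', 'neutral', 'pos', 'neg', 'pos', 'neutral', 'neg']
b = ['pos', 'pos', 'neg', 'pos', 'neutral', 'pos', 'neg', 'neg', 'neutral', 'neg']
print(round(cohens_kappa(a, b), 2))  # → 0.69
```

A kappa of 1 would indicate perfect agreement and 0 no better than chance; the coders here agree on eight of ten items, which the chance correction reduces to about 0.69.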

Quantitative researchers spend much time and effort thinking about their data. Choosing data or sampling appears an easy task but it contains many hidden pitfalls. The sample must allow the investigator to make
inferences, but often it is not clear what constitutes the population. If the
topic of study is about change over time, which years should the researcher
choose to analyse? Surveys contain many problems, such as how to define
a household. They may need to be re-weighted because of the stratification
of the sample (Skinner et al. 1989). There are also choices about how to
measure the variables. No perfect set of data exists: for example, response
rates to surveys may be low and archives may contain missing years.
Although the electronic storage of data gives the impression of perma¬
nence, disks sometimes decay and data get lost in large file stores.
Overcoming these problems requires attention to practical issues and to
theory about what are the best data for the study. No solution is ideal, but
researchers pick up practical knowledge from their colleagues and friends
about how to solve these problems and learn about the pitfalls of
particular choices.

The collection and manipulation of data invite errors. Interviewers, research assistants or survey companies sometimes input responses to questionnaires incorrectly; researchers accidentally delete cases and variables; the transfer of files between software packages and across the
Internet can create ‘dirt’ in the data; and researchers can even accidentally
work on the wrong or old data set because they did not label it correctly.
They may even forget how they created their variables because they did not
note what they did at each stage. One of the problems is that the speed and
efficiency of modern computers encourage researchers to think that their
data are ‘clean’. But most political scientists learn to be careful after
making silly errors when their concentration lapses. As mistakes are so
easy to make, researchers spend a large amount of their time carefully
collecting data, checking and rechecking how they or their research
assistant entered the information and correcting their errors. Even with
this culture of paranoia, mistakes still occur in published work, sometimes
in the best quality journals (for example, see the correction of Garrett
(1988) by King et al. 2000: 356).

The power of description

One of the advantages of descriptive measures is that they allow the observer to split the observations and to examine the proportions, such as
the percentage of a group who support a political party. Judgements about
these proportions form an essential part of the interpretation of data. In
journalism and other forms of commentary, there are debates about
whether a percentage is too big or too small, and descriptive political science is no exception. For example, consider an imaginary statistic
showing that 5 per cent of the electorate believe in repatriation.
Commentators can interpret it either as evidence of alarming racism or as evidence of the tolerance of the bulk of the population. To resolve this dilemma, social
scientists should place the statistic in its proper context, taking into
account arguments about what defines a liberal society and existing
empirical knowledge. The interpretation of the 5 per cent would differ
with the additional information that, for example, 10 per cent of the
population believed in it twenty years previously.

Summary statistics are useful in understanding the properties of the data, such as measures of central points so that researchers can know the
average or typical point in the data. The most common is the mean value
or average, but there is also the median (middle observation) and mode
(the most frequent value). As important are measures of dispersion.
Observers find it useful to know whether the observations converge on
the average value or are widely distributed. For example, if the interest is
in response times of fire brigades in different locations, researchers and
residents may be interested in finding out which area has the lowest
average response time. But they should also be interested in the dispersion
around the average as residents would like to know how likely the fire
engines will arrive close to the mean time. As with central points, there are
a number of measures, such as the inter-quartile range (the distance
between the upper and lower quartiles) and the standard deviation (the
square root of the variance). When deciding which measure to use,
researchers need to think carefully about their data and decide whether they are nominal (with categories that are simply different from each other, for example, male or female), ordinal (with measures that involve ranking) or ratio/interval (with values that have equal intervals between categories). Investigators may wish to look at the shape of the distribution, such as whether it is unimodal (clustered around a single peak) or bimodal or multimodal (having a number of peaks), which can reveal much about the data. Alternatively the data may be skewed or symmetrical, leptokurtic (bunched tightly around the mean) or normal. The normal is particularly interesting because it suggests that variation around the mean is random.
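These measures are simple to compute. The sketch below uses Python's standard statistics module and the fire-brigade example, with invented response times: both areas share the same average, but their dispersions differ sharply, which is exactly the information residents would want.

```python
import statistics

# Imaginary fire-brigade response times (minutes) for two areas.
area_a = [4, 5, 5, 6, 5, 4, 6, 5]
area_b = [1, 9, 2, 8, 5, 3, 7, 5]

for name, times in (('A', area_a), ('B', area_b)):
    print(name,
          statistics.mean(times),               # central points
          statistics.median(times),
          statistics.mode(times),
          round(statistics.stdev(times), 2))    # dispersion
# → A 5 5.0 5 0.76
#   B 5 5.0 5 2.88
```

Residents of area B face the same mean wait as those of area A but far less predictability, a difference the averages alone would conceal.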

When technical terms appear, qualitative researchers start to think that quantitative topics are not for them. But such terms merely formalise what
people do in everyday life. Imagine a person walking into a room full of
people. The person would immediately size up the gathering by asking
how many people there are, how many are of a certain type or how many
old people or young people there are. When coming to these judgements,
people make approximate proportions, averages and distributions. Descriptive statistics standardise these common-sense ideas (or common-sense ideas make sense of the statistics). Moreover, such statistics appear
regularly in newspapers and in qualitative research.

Paradoxically, quantitative researchers do not use descriptive statistics enough, only reporting them as the prelude to applying sophisticated tests.
But much can be gained by their careful and imaginative use. To obtain the
best results, quantitative researchers must immerse themselves in their data
and explore the myriad possible ways of cutting and representing them.
Familiarity with descriptive measures assists an understanding of the
complexity of the topic and can help researchers interpret the output from
more complex statistical models. In short, quantitative researchers should
be as intimate with their research materials as their qualitative colleagues.
As Jenkins writes, ‘The statistician should fall in love with his data’ (cited
by Franzosi 1994: 45). Finally, much can be gained by representing
descriptive data pictorially in the form of bar charts, pies and plots. Most
software packages easily provide these.

Tables and inferential statistics

Social scientists often infer or deduce models of causation that they wish to
test. Such models often hypothesise a strong relationship between two
variables (either positive or negative). Social scientists assume that the
values of one variable cause variation in another. The explaining terms are
called independent variables and the dependent variable is what is being
explained. For example, in a project about what causes people to
volunteer, which is an important topic in the burgeoning literature on
social capital (see for example Verba et al. 1995), theory - in the form of the socio-economic status (SES) model of political behaviour - may suggest that those from wealthy families are more likely to join
organisations. Logically it would not be possible for volunteering to affect
social background, so it is clear that wealth is independent and
volunteering is dependent. Such a project can only test whether social
background affects voluntary activity or not rather than the other way
round.

One of the simplest ways to find out if one variable determines or is associated with another is through tables or cross-tabulations. Tables show how
the values or categories of one variable are expressed as the categories of
another. Researchers frequently use tables in survey research. If the
volunteering project had been carried out in the days before computers,
researchers would have sorted all the cards containing the records of the
interviews into the piles of wealthy volunteers, non-wealthy volunteers,
wealthy non-volunteers and non-wealthy non-volunteers. Then they would have counted the numbers of cards of each category, worked out their
percentages as a proportion of each variable and represented the results in
a two-by-two table. It is conventional to place the independent variable
along the top of the table and have the dependent on the stem. The
researcher can examine the percentage of volunteers who are wealthy and
look across the table to compare with the percentage of non-volunteers.
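The card-sorting procedure translates directly into code. The sketch below is illustrative: the counts are invented, and the column percentages are what the analyst would compare across the top of the two-by-two table.

```python
from collections import Counter

# Imaginary survey records: (wealthy?, volunteers?) for each respondent.
records = ([(True, True)] * 30 + [(True, False)] * 20 +
           [(False, True)] * 15 + [(False, False)] * 35)

# Count the four 'piles', then take percentages within each column.
counts = Counter(records)
for wealthy in (True, False):
    total = counts[(wealthy, True)] + counts[(wealthy, False)]
    share = 100 * counts[(wealthy, True)] / total
    print('wealthy' if wealthy else 'non-wealthy', f'{share:.0f}% volunteer')
# → wealthy 60% volunteer
#   non-wealthy 30% volunteer
```

With these invented figures the wealthy volunteer at twice the rate of the non-wealthy, the kind of contrast the researcher would then test for statistical significance.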

Now that the records of surveys can be stored as data matrices in software packages, such as STATA (StataCorp 1999), researchers can
create such a table in seconds. But their construction is surprisingly tricky.
Often variables need to be recoded, such as by transforming the individual
ages of respondents into bands of age groups. Working out which
measures to use requires knowledge of the data and attention to theory
to select the appropriate units.

Researchers who use tables from surveys also need to run a battery of
tests to show that the associations could not have happened just by chance.
Because surveys are samples of a larger population, associations could
appear because of unusual selections of people. Statisticians conventionally argue that researchers should have 95 per cent confidence that the association is
not random. The humped shape of the normal distribution indicates that
the mean value of the variable in the sample is going to be close to the
population mean whereas the chance that it is far from the mean is much
less. The 95 per cent confidence level is convenient because it is just under
two standard deviations (typical deviations) from the mean or average
level and also is the point at which the normal distribution becomes flat.
Survey researchers calculate the probability and most computer packages
routinely produce a figure. If the figure were 0.04, for example, researchers would conclude that the association is unlikely to have occurred by chance. But the ease
with which computers run these tests makes researchers forget to examine
the strength of the associations, which show how much one variable affects
another. In large samples, such as those in excess of 4,000 respondents, it is
easy to find significant but meaningless relationships.
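The point about large samples can be made concrete. The sketch below computes the Pearson chi-square statistic for a two-by-two table by hand and compares it with 3.84, the conventional 5 per cent critical value for one degree of freedom; the tables are invented. The same trivial two-point difference in proportions is 'insignificant' in a sample of 200 but 'significant' in a sample of 10,000.

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

CRITICAL_5_PER_CENT = 3.84  # chi-square critical value, 1 degree of freedom

small = chi_square_2x2(51, 49, 49, 51)          # 51% vs 49%, n = 200
large = chi_square_2x2(2550, 2450, 2450, 2550)  # 51% vs 49%, n = 10,000
print(small > CRITICAL_5_PER_CENT, large > CRITICAL_5_PER_CENT)  # → False True
```

Significance tests alone therefore say nothing about the strength of an association; a difference of two percentage points may pass the test while remaining substantively meaningless.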

Qualitative researchers often question whether variables are genuinely independent or dependent as the structure of causation is usually complex.
For example, there may be no relationship between wealth and volunteering because wealthy volunteers tended to go to schools which encouraged voluntary activity. The causal relationship between schooling and volunteering makes the correlation between wealth and volunteering spurious
because wealthy people go to a certain type of school which also produces
volunteering as well as good examination results. But such relationships
are not a problem for quantitative researchers because they can use multiple regression to examine all the determinants of voluntary activity (see below). This technique allows them to test whether factors other than wealth affect voluntary activity.

Of course, the structure of causal relationships can become even more complex. For example, the existence of marginal Westminster seats causes
governments to direct public resources to them (Ward and John 1999), but
the receipt of those resources will affect which areas are going to be
marginal seats in the following election. Over time, how can a researcher
know what level of resources it takes to win marginal seats? To answer the
question there is a branch of statistics called structural equation models (SEM) (Schumacker and Lomax 1996; Maruyama 1998), and software packages, such as LISREL and AMOS, can estimate the causal relationships. Researchers solve the problem that causal relationships are complex
by using more sophisticated models and by applying advanced statistical
techniques. As always, theory specifies the direction of the causal arrows
rather than the computer or technique. Critics, however, would be right to
point out that research on political attitudes does not pay enough attention
to causal relationships and tends to present models with one dependent
variable and a batch of independent ones. For example, an edited volume
based on the 1997 British Election Survey (Evans and Norris 1999) does not
contain one causal model although it is likely that the bundle of attitudes,
reports of behaviour and personal variables have complex logical
sequences.

The other common objection to testing hypotheses using correlations presented in tables is that they do not establish causation but only show associations. The claim is that, unlike natural scientists, political scientists rarely carry out experiments, so they have no way of knowing whether the relationships they observe in their data are accidental, spurious or causal.
Theory comes to the aid of the social scientist because a relationship
between two variables needs to be logical and consistent as well as
following from existing empirical studies. The association between wealth
and volunteering is not a correlation found by ‘dredging’ the data, but
derives from sociological theory that argues that as some people have more
resources and advantages than others so they are more able to engage with
public life. The relationship is logical in the sense that background can
affect political participation; it is plausible because investigators compare
the SES with other models, such as the rational choice model of
participation or models that emphasise contextual factors, such as education or the neighbourhood.

When researchers appraise hypotheses they are not satisfied with observations of relationships in the data. To support their case they would
look for other relationships to make a set of plausible arguments. They
might be interested in change over time; they could run multivariate and
structural equation models as indicated above. Just like detectives on a
case, quantitative researchers gradually piece together the evidence. At all
times they are aware of academic communities of reviewers and conference participants who are likely to be sceptical about the results. They think of
the likely criticisms and devise strategies about how to convince the
sceptics. Rarely do quantitative researchers claim that an association in the data proves causation; rather, a correlation has importance only when interpreted through theory and used alongside other evidence.

Multivariate analysis

As the wealth and volunteering example shows, researchers need to move beyond cross-tabulations if they are to convince academic communities
that they have found new facts. In a complex world, there are many causes
of action, and politics is no exception. Researchers do not aim to show that
x causes y, but that x causes y alongside or controlling for z or w. Analysts
become more confident of testing hypotheses because they have allowed
for all the possible causes of behaviour or attitudes. They can run one
model against another and carry out robust tests of each one. However,
multivariate analysis carries more risks than descriptive statistics because
the regression models that social scientists commonly use make restrictive
assumptions about the data.

The most common multivariate model is ordinary least squares (OLS). The intuitive idea is that a plot of the points between two interval
variables, X and Y, may contain a relationship. If the points are not
randomly distributed, it may be possible to plot a line that minimises the
distance between it and the data points. This line would have a gradient or
slope that indicates the constant relationship between the two variables.
Rather than eyeballing the data, OLS uses a formula to estimate the slope
of the line from the mean or average value of the independent variable and
from the data points. In addition, OLS estimates the distances between the
regression line and the data points, what are called the residuals or errors.
OLS calculates the total explanation of the model as a statistic, the
r-square, which falls between 0 and 1. The same mathematics govern
models with more than one independent term. This neat extension allows
the estimation of the effects of each of the independent terms upon the
dependent variable. OLS allows researchers to test hypotheses in the
knowledge that they are controlling for all the hypothesised effects.
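The mechanics of the bivariate case can be written out in full. The sketch below implements the textbook formulas for the slope, intercept, residuals and r-square; the income and volunteering figures are invented, and multiple regression extends the same mathematics to several independent terms.

```python
def ols(x, y):
    """Bivariate OLS: slope, intercept and r-square from paired observations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # Slope that minimises the squared distances to the data points.
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    intercept = my - slope * mx
    residuals = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
    ss_res = sum(e ** 2 for e in residuals)      # unexplained variation
    ss_tot = sum((yi - my) ** 2 for yi in y)     # total variation
    return slope, intercept, 1 - ss_res / ss_tot

# Imaginary data: household income (£000s) and hours volunteered per month.
income = [10, 20, 30, 40, 50]
hours = [1, 2, 2, 4, 5]
slope, intercept, r2 = ols(income, hours)
print(round(slope, 2), round(intercept, 2), round(r2, 3))  # → 0.1 -0.2 0.926
```

The r-square of roughly 0.93 means the fitted line accounts for most of the variation in these invented data, with the residuals making up the rest.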

Because OLS assumes the data are a sample from the population of
possible data points, everything that the model does not explain is random
variation. For each variable there is a standard error or measure of spread
that indicates the probability that the relationship between the independent and dependent variable is there by chance or not. Political scientists
have been happy to run hypothesis tests based on the 95 per cent
confidence level. If the probability is equal to or greater than 95 per cent, researchers accept that an independent variable has an effect on the dependent one; if it is less, they reject the hypothesis that there is a
statistically significant relationship. The procedure easily tests models that
derive from social science theory. Political scientists specify their models,
which usually postulate a relationship between a dependent and independent variable; then they find data from the correct population, conceive of
all possible determinants of the dependent variable; and then run a model
with all the variables in which they are interested. This procedure appears
to correspond to the scientific method because investigators allow the
variable of interest to pass or fail a test and they report whether it
succeeded or not.

When non-specialists read quantitative articles they may come away with the impression that political scientists only test models that derive
from theory. But even with a small number of independent variables, there
are many choices about which ones to exclude or include in the final
model. These choices should be driven by theory, but sometimes theory
provides arguments and counter-arguments for a number of models. For
example, researchers could include all or some of the independent
variables in the final model irrespective of whether they reach the 95 per
cent confidence level or not. Alternatively, they could include only those
variables that reach the required significance level. Moreover, the number
of choices increases if researchers include interaction effects. These are
terms created by multiplying two variables to indicate a joint impact on the
dependent variable and they may be included along with the original
independent terms (Friedrich 1982). In many situations, it does not matter which model is run, as all of them show the same kinds of relationships and levels of probability. But competing models can sometimes show the hypothesised variables to be significant and sometimes not. Researchers may be tempted
to present the one that shows the hypothesised variable to be above the
95 per cent confidence level. With the speed of current computers and the
easy manipulation of software packages, modellers can engage in the
much-despised practice of ‘significance hunting’, which involves running
many hundreds of equations until the ‘right’ one emerges. Because journal
editors cannot require researchers to report every model they run, it is hard
to detect this practice.
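Two of the points above can be made concrete: an interaction term is nothing more than the product of two variables, and the space of candidate models grows exponentially with the number of terms. The variables below are invented for illustration.

```python
# Hypothetical 0/1 independent variables for four respondents.
wealth = [1, 0, 1, 0]
education = [1, 1, 0, 0]

# An interaction term is simply the product of the two variables:
wealth_x_education = [w * e for w, e in zip(wealth, education)]
print(wealth_x_education)  # → [1, 0, 0, 0]

# Every subset of candidate terms is a model the researcher could run,
# so ten candidate variables already allow 2**10 different specifications.
print(2 ** 10)  # → 1024
```

With over a thousand specifications available from just ten candidate terms, it is easy to see how 'significance hunting' can go undetected unless every model run is reported.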

The incentive to present the most favourable model exists because few
journals publish papers containing negative results. Most journal editors
and reviewers find these papers to be less interesting and less publishable
than those that reach positive conclusions; alternatively there is self-selection at work whereby researchers only send off papers to journals
when they have positive results. A further explanation is that political scientists choose to carry out, and research councils usually fund, research projects that are likely to yield new findings. In the natural sciences the bias has been studied and is called the ‘file drawer problem’ (Rosenthal 1979; Rotton et al. 1995; Csada et al. 1996; Bradley and Gupta 1997).

Qualitative researchers may become suspicious that advanced statistics creates a screen behind which the modeller ‘cooks’ the results. When
practice breaches the stereotype of the pure model of scientific investigation, the effect is something of a fall from grace. The quantitative
researcher becomes like Gabriel inhabiting the dark depths of malpractice.
Rather than a devious manipulation of data, the art of building models
involves the assessment of different possibilities or pathways, each of
which is trailed with theory. Researchers think about what is going on in
their models and go back and forth between theory and the results they
produce. Along the way is much dialogue - often internal, but also with
colleagues along the corridor and across the Internet. Such conversations
show that quantitative research is above all problem-centred. Problems
and solutions are continually traded amongst the research community to
overcome the many pitfalls. A folklore of practices emerges and complex
networks link together researchers. Researchers engage with their data.
They neither test pure models nor ‘dredge’ for significant results; rather, they carefully consider each equation and come up with plausible explanations of the routes they have chosen.

The dialogue continues in the more informal workshops, though it becomes hidden by the time investigators submit their articles to learned
journals. After the publication of research the discursive aspect of the
production of knowledge starts again. Back along the department corridor
researchers discuss results of papers with varying degrees of scepticism or
respect that draws upon their knowledge about their data and about the
people involved. Members of the research community often detect ‘cooked’
models because they cannot understand how researchers arrived at their
results. When researchers find they cannot replicate the results of a paper,
this knowledge gradually diffuses to affect the reputation of the investi¬
gator. The informal control may become formal when an academic
questions the findings of another by publishing a comment, to which
there are often replies and rejoinders.

Recent developments have assisted the transparency of data collection and analysis. King (1995) has campaigned for a standard of replication,
whereby any person may repeat another scholar’s work using the same
data set and coding of the variables. Such a standard has now been
adopted by the main US journals. The ability to replicate not only guards against the false presentation of data, which is anyway rare, but also ensures that researchers carefully check their data for mistakes. Replication
encourages researchers to consider the steps toward the presentations of
their final results and to check the ‘health’ of their models, such as for
breaches of the assumptions of OLS.

228 Quantitative Methods

As a result, the standards of reporting have improved
and most articles in good journals convey at least some of
the vast range of diagnostic statistics, rather than just r-squares and
probability values. This caution is wise, as King (1986) shows that the
r-square statistic can be misleading, making ‘macho’ comparisons of its
size rather meaningless. For example, the r-square can increase by
including more variables in the model rather than because of any real
improvement in explanation. Similarly, stepwise regression has now fallen
into disuse. Stepwise is a facility on some of the more popular software
programs, such as SPSS, which allows the researcher to ‘race’ their
variables by automatically discounting non-significant terms or including
significant ones in each equation.
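King’s point about the r-square can be illustrated with a small simulation. The sketch below (pure Python, fabricated data, not drawn from any study cited here) fits a toy OLS model twice, once with a genuine predictor alone and once with an irrelevant ‘junk’ regressor added, and shows that the r-square can only rise:

```python
import random

def ols_r_squared(y, X):
    """Fit y on the columns of X (plus an intercept) by solving the
    normal equations with Gauss-Jordan elimination; return the r-square."""
    n = len(y)
    rows = [[1.0] + list(x) for x in X]          # prepend intercept column
    k = len(rows[0])
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    for c in range(k):                            # Gauss-Jordan with pivoting
        p = max(range(c, k), key=lambda r: abs(xtx[r][c]))
        xtx[c], xtx[p] = xtx[p], xtx[c]
        xty[c], xty[p] = xty[p], xty[c]
        for r in range(k):
            if r != c:
                f = xtx[r][c] / xtx[c][c]
                xtx[r] = [a - f * b for a, b in zip(xtx[r], xtx[c])]
                xty[r] -= f * xty[c]
    beta = [xty[i] / xtx[i][i] for i in range(k)]
    fitted = [sum(b * v for b, v in zip(beta, r)) for r in rows]
    ybar = sum(y) / n
    sse = sum((yi - fi) ** 2 for yi, fi in zip(y, fitted))
    sst = sum((yi - ybar) ** 2 for yi in y)
    return 1 - sse / sst

random.seed(1)
x1 = [random.gauss(0, 1) for _ in range(50)]      # genuine predictor
junk = [random.gauss(0, 1) for _ in range(50)]    # irrelevant regressor
y = [2 * a + random.gauss(0, 1) for a in x1]

r2_one = ols_r_squared(y, [[a] for a in x1])
r2_two = ols_r_squared(y, [[a, b] for a, b in zip(x1, junk)])
# The larger model never fits worse, whatever the junk variable contains,
# so a bigger r-square is no evidence of a better explanation.
```

Because the least-squares fit of the larger model can always reproduce the smaller model (by setting the extra coefficient to zero), its residual sum of squares cannot be larger, which is precisely why ‘macho’ r-square comparisons mislead.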

The current wave of reforms could go further as there is a range of tests
that researchers can apply to the ‘interior’ of their regression models
(Franzosi 1994). For example, it is common that one case in a model can
cause a variable to be significant, and researchers need to find out why this
is (sometimes it is caused by a data entry error). There are tests of the
contribution each case makes to the final model, which help the researcher
to understand what is going on inside the ‘black box’. Moreover, political
scientists could consider abandoning some of the shibboleths of their art.
The most sacred are the 0.05 and 0.01 significance tests (or 95 and 99 per
cent confidence levels) that can lead researchers to reject or accept a
hypothesis because the probability exceeds or does not reach the required
level only by a small margin. But there is no theoretical reason why these
rules should exist. Tanenbaum and Scarbrough (1998: 15) suggest that they
derived from the period before computers automatically calculated the
probability values and researchers had to look up the values in printed
tables. Researchers should be forbidden from adding asterisks to the
variables in models they publish to indicate that a variable has ‘passed’
a significance test. They should only report the standard errors and the
probability levels and discuss them in the text. Such a practice would not
be so satisfying for the researcher, but it would lessen the file drawer
problem and lead to a more balanced and nuanced discussion of research
results. Psychologists have already conceived of life beyond significance
tests (Harlow and Mulaik 1997) and a discussion has begun in political
science (Gill 1999).
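The arbitrariness of these thresholds is easy to demonstrate. In the sketch below (hypothetical coefficients and standard errors, using a large-sample normal approximation) two estimates carry almost identical evidence, yet an asterisk convention would star one and not the other:

```python
import math

def two_sided_p(coef, se):
    """Two-sided p-value for coef/se under a large-sample normal (z) test."""
    z = abs(coef / se)
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

p_a = two_sided_p(0.50, 0.254)   # z ~ 1.97, p just under 0.05: starred
p_b = two_sided_p(0.50, 0.257)   # z ~ 1.95, p just over 0.05: unstarred
# Reporting the standard errors and exact p-values, as the text recommends,
# shows the two results are virtually indistinguishable as evidence.
```

Printed side by side, the two p-values differ by well under 0.01, yet a 0.05 rule would label one finding a ‘success’ and the other a ‘failure’.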

Beyond ordinary least squares

Peter John 229

For the bulk of the postwar period the OLS model held sway, particularly
as it is taught as the central component of most political science methods
courses. Most well-trained political scientists understand its output. In
spite of its ease of comprehension, OLS has disadvantages. It is worth
recalling that the model depends on ten assumptions that are frequently
breached in practice, one assumption being that the effects of the variables
are constant and linear over time and space (Wood 2000). For example, studies
of the effects of the economy on voting behaviour often assume there is a
particular level of unemployment that causes a particular amount of
unpopularity for a government, but in fact unemployment can sometimes
be damaging for a government and sometimes not. To cope with the
complexity of political phenomena, a variety of statistical techniques have
emerged that supersede OLS.

Radical alternatives to OLS respond to its questionable assumption that
everything that the model does not explain must be random. In many
research situations the residuals are not randomly distributed because the
number of cases is too small, for example, with studies that use the
developed countries as the units of analysis. Monte Carlo simulation is an
experimental technique that allows the investigator to estimate a quantity
of interest and to make inferences to the population. It needs vast amounts
of computer memory to generate data from an artificially created population
that resembles the process being investigated. Then the researcher
estimates a statistical model from this population and assesses its
performance. Political scientists use bootstrapping models that are similar to
Monte Carlo simulation and relax the restrictive assumptions of the OLS
model (Mooney and Duval 1993; Mooney 1996; Mooney and Krause
1997), arguing that the OLS model only developed because of the
limitations of computational capacity and now the microchip revolution
makes other forms of estimation possible. Bootstrapped estimators are
available on statistical packages, such as STATA, and articles now appear
with reports of both OLS and bootstrapped estimates.
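The logic of case-resampling bootstrapping can be sketched in a few lines. The example below uses pure Python and simulated data; it is an illustration of the general idea behind bootstrap estimators in packages such as STATA, not of any particular published study. A regression slope is re-estimated on many resamples of the cases and a confidence interval is read off the resulting distribution:

```python
import random

def slope(pairs):
    """OLS slope of y on x for a list of (x, y) pairs."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    sxy = sum((x - mx) * (y - my) for x, y in pairs)
    sxx = sum((x - mx) ** 2 for x, _ in pairs)
    return sxy / sxx

random.seed(2)
data = [(x, 1.5 * x + random.gauss(0, 1)) for x in range(20)]  # true slope 1.5

# Re-estimate the slope on 2,000 resamples of the cases (drawn with
# replacement) and take the middle 95 per cent of the distribution.
boot = sorted(
    slope([random.choice(data) for _ in data]) for _ in range(2000)
)
lo, hi = boot[50], boot[1949]        # 2.5th and 97.5th percentiles
```

Because the interval comes from the empirical distribution of resampled estimates, no assumption of normally distributed residuals is needed, which is the sense in which such estimators ‘relax’ the OLS assumptions.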

Conclusion

This chapter shows the complexity and subtlety of quantitative work. Far
from being mindless ‘number crunchers’ testing unrealistic models,
researchers who use large numbers of observations are acutely aware of
the context and character of their data and the assumptions that underlie
statistical models. Whether through descriptive statistics, tabulations, OLS
or non-parametric models, quantitative researchers immerse themselves as
much in their data as their qualitative counterparts. Imagination and
intuition have their rightful place in the craft of quantitative analysis.
Moreover, a highly critical research community exists to appraise and
scrutinise the methods that investigators deploy.

In the spirit of a subtle defence, this chapter criticises some of the
practice of quantitative work, such as the tendency to present results too
cleanly and to hide much of the messiness of data analysis. More
improvements still can be made. A culture shift would acknowledge the
importance of exploratory data analysis and accept that it is just as correct
to infer a model from the data as the other way round. As Tanenbaum and
Scarbrough (1998) argue, the revolution in the speed of computers and the
ease of using software packages could help researchers and students utilise
the benefits of exploratory data analysis as they can flexibly handle and
present data. However, the space in journals is a constraint on the
possibilities for elaboration. It is also tedious to read articles that recount
‘how I did the research’ with tales of blind alleys and mistakes. But much
has already been achieved through the campaign for a replication standard
and the new culture of resistance against ‘cookbook’ data analysis. At the
same time as political methodologists campaign for more transparency,
rapid advances in statistical techniques, made possible by the speed of
modern computers, have transformed the field. Quantitative researchers
now seek to be both more advanced in their methods and more
comprehensible to a non-technical audience.

Further reading

• The beginner should start with a textbook on statistics, such as
Wonnacott and Wonnacott (1990).

• More fun is the irreverent How to Lie With Statistics (Huff 1991).

• Then there are the introductions to quantitative methods in political
science: Miller’s (1995) chapter in the first edition of Theory and
Methods in Political Science; the classic book by Tufte (1974); and
recent introductions and reviews (for example, Pennings et al. (1999);
Champney (1995); Jackson (1996)).

• In addition, there are many general treatments for the social sciences
(for example, Skinner (1991)).

• More advanced readers should read the excellent volume edited by
Tanenbaum and Scarbrough (1998).

Chapter 11

Combining Quantitative and Qualitative Methods

MELVYN READ AND DAVID MARSH

In the past it was common for researchers to reject certain methods out of
hand, often because these methods did not fit with the researcher’s implicit
or explicit epistemological position. Happily, such a position is now much
less common. Most empirical researchers acknowledge that both
qualitative and quantitative methods have a role to play in social science
research and that, often, these methods can be combined to advantage. Of
course, individual researchers must decide which are the best - that is, the
most appropriate - methods to use to address the particular research in
which they are interested (see Devine and Heath 1999: 200). Overall, the
quality of any piece of research is most likely to be affected by the
appropriateness of the research design and the skill of the researcher;
slavish adherence to particular methods carries few rewards.

This chapter focuses upon some of the issues involved in integrating
quantitative and qualitative methods. It is divided into four substantive
sections. The first section briefly questions the distinction between
quantitative and qualitative methods. The second section begins by
identifying the move towards the integration of quantitative and
qualitative methods before briefly looking at ways in which these methods can be
combined. Subsequently, the third section outlines some of the problems
that need to be acknowledged by advocates of such an integrated
approach. In the final section we present two brief case studies which
bear on the use of such an integrated approach: the first deals with the
analysis of private members’ bills in the UK House of Commons; the
second examines research into changes in bureaucratic structure in Britain.

Quantitative and qualitative methods: a false dichotomy?

Traditionally, in political science, quantitative and qualitative methods
have been used by different researchers, to study different things and to
answer different questions. However, is there really a clear distinction
between qualitative and quantitative methods? Or has a damaging false
dichotomy developed between the two research methods?

In our view, while there are differences, these can easily be
overemphasised. One of the problems is that the distinction, whilst widely
used, is rarely discussed and it seems to be merely assumed by many
commentators. Following Hammersley (1992) it is possible to identify five
putative distinctions between quantitative and qualitative data research,
most of which have their origins in the links between epistemology and
methodology:

1. Quantitative methods are more often used by authors who are in
ontological terms foundationalists and in epistemological terms
positivists. In contrast, those who use qualitative methods are, most
often, ontologically anti-foundationalist and usually follow a
non-positivistic epistemology (see Marsh and Furlong, Chapter 1).

2. As such, most researchers who use quantitative methods view social
science as analogous to natural science; their aim is to produce causal
explanations, and preferably scientific laws, about the relationship
between the social phenomena being studied (see John, Chapter 10). In
contrast, those preferring to use qualitative methods see social science
as a distinct and differentiated discipline because it involves subjective
objects - reflexive human beings.

3. It follows that those who use quantitative analysis focus on describing
and explaining behaviour, while those utilising qualitative methods are
more concerned to understand the meaning of such behaviour to those
people being studied.

4. Quantitative researchers tend to adopt a deductive approach, using a
theory to generate hypotheses, which are then tested empirically. In
complete contrast, researchers who utilise qualitative analysis use their
inductive empirical analysis to generate interpretations or
understandings of the social world.

5. Quantitative analysis often deals with large amounts of data which
researchers analyse using statistical techniques. Qualitative research
differs as it generally makes use of fewer cases (or smaller amounts of
data).

However, Bryman (1988: 93) argues that, whilst these differences should
not be underestimated, academic discussion on these two research
traditions has tended to create a somewhat exaggerated picture of their
difference and theoretical irreconcilability. In his view, there is nothing
inherent in the properties of the different methodologies which prevents
their use by researchers who are operating from different epistemological
positions (see also May and Williams 1998; Abel 1970; Hammersley
1992). In the practical world of political research, the distinctions which
are deemed so integral in theory can become blurred or sidelined. So, in
practice, most positivists will not stick rigidly to their epistemological
position. They will recognise that there are some differences between
natural science and social science and that the interpretations that actors
have of their own actions are important in formulating a fuller explanation
(see Hammersley 1992: 46). Positivists will therefore disengage from their
methodological heritage and use qualitative research when quantitative
analysis is not possible or is inappropriate.

For example, a positivist political scientist designing a large survey
about people’s attitudes towards a particular issue, such as confidence in
the Prime Minister (UK) or President (USA), might initially interview a
small number of respondents in order to generate more appropriate
questions for the main questionnaire which is ultimately intended to
discover the views of a representative sample of the population.

Similarly, a quantitative study of voting in the USA Congress may reveal
patterns of voting that are difficult to explain in terms of the variables used
in the quantitative study (such as party, gender, constituency
characteristics, size of majority). In these circumstances, interviews with the
members of Congress would probably produce a much fuller explanation
by focusing upon their own explanations of why they had voted in a
particular way.

Non-positivists will also use quantitative data. At the very least, many of
them wish to make claims that have a quantitative basis, perhaps that a
particular discourse is dominant, or that a particular interpretation of
action is ‘typical’ or common (see Rhodes and Bevir, Chapter 6). More
obviously, many realists would argue that the pattern of structured
inequality which exists within a society like the USA, acts as a major
structural constraint/enabler (that is, it enables the privileged and
constrains the disadvantaged). So, political activity occurs within a pattern of
structured inequality that has a crucial (although not always directly
observable) effect on policy outcomes (on this see Marsh 2002). It is not
important here whether we accept this view, but it is clear that these
realists would utilise quantitative analysis to establish the extent of
structured inequality.

At the same time, while most relativists or social constructivists do not
use formal quantitative analysis, they can do so. So, those
anti-foundationalists who use discourse analysis as a key methodological tool (this
approach tends to emphasise the importance of texts, for example, official
documents, party programmes, texts of speeches, interview transcripts, or
even photographs or movies) in order to identify the nature and extent of
power relations can use computer programs to analyse texts and identify
patterns of linguistic use (perhaps to discover how often gendered language
is used), without compromising their ontological and epistemological
position.

Therefore, quantitative and qualitative methods are not in and of
themselves necessarily tied to a particular ontological and epistemological
position. Other factors affect researchers’ decisions as to which
methodology to use: ‘This is not to say that one’s paradigmatic stance is
unimportant in choosing a method; nor is it to deny that certain methods
are usually associated with specific paradigms. The major point is that
paradigms are not the sole determinant of the choice of methods’
(Reichardt and Cook 1979: 16).

In particular, the nature of the research problem plays a crucial role. So,
if we wish to study patterns of political participation in Britain, then this
interest inevitably suggests the collection of a sizeable data set, composed
of the questionnaire responses of a representative sample of the
population, and their analysis using statistical techniques. However, even here the
researcher may wish to follow up the quantitative research with interviews
with a particular subset of the sample, perhaps in order to discover how
young people understand politics and political activity.

At the same time, there are many practical issues that may affect a
researcher’s methodological decisions. So, cost considerations may
preclude large-scale quantitative work, while the funding organisation’s
preferences may push researchers towards quantitative research. Then
again, many researchers may have methodological preferences that are
driven as much by their areas of expertise as by methodological
considerations, although of course the two are related.

Equating positivism with a deductive approach and non-positivism with
an inductive approach also involves significant oversimplification. Put
simply, it is evident that all researchers use both inductive and deductive
approaches in constructing explanations or developing understanding. In
all research we move from ideas to data and from data to ideas (see
Hammersley 1992: 48-9). So, rational choice theory, which is perhaps the
classic example of a deductive model in social science, is based upon
assumptions about preferences that are derived from prior empirical
analysis. Similarly, most non-positivists are concerned to generate
conclusions from their data (few believe that data are self-explanatory, although
most believe that they are capable of a variety of interpretations), which
then may be used to generate future research questions.

Even the most obvious distinction between quantitative and qualitative
data, that quantitative analysis involves large data sets which are usually
analysed using statistical packages, while qualitative analysis involves a
small number of cases analysed in more depth, is questionable. There is no
doubt that when we think of quantitative data we think of large-scale
surveys, perhaps attempting to explain voting patterns or changing rates of
political participation, or the analysis of government statistics. Similarly,
when we think of qualitative data we think of interview material, content
analysis, participant observation, vignettes, life histories and so on.
However, it is not true that all quantitative data sets are large, or that all
qualitative data sets are small. Neither is it true that statistical analysis is
only performed on what are normally understood as quantitative data. So,
sophisticated regression analysis can be conducted on few cases (perhaps
as few as 25). At the same time, there is no reason why interview data
cannot be analysed using quantitative techniques; so we might undertake a
content analysis of interviews with different sections of the population to
compare the use of gendered or racially-inspired language. Similarly, we
might undertake a content analysis of various official documents or of
politicians’ speeches to establish how often globalisation is mentioned and
whether, in these documents/speeches, there is a dominant discourse that
presents globalisation as a constraint and argues that the nature of this
constraint means that the government has no alternative but to pursue
neo-liberal economic policies.
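The quantitative end of such an exercise can be very simple. The sketch below (with fabricated sentences, not drawn from any real document) counts occurrences of chosen terms in a text, the kind of tallying that dedicated text-analysis software automates:

```python
import re
from collections import Counter

def term_counts(text, terms):
    """Count how often each search term appears as a whole word in text."""
    words = re.findall(r"[a-z]+", text.lower())
    tally = Counter(words)
    return {t: tally[t] for t in terms}

# Fabricated example sentences for illustration only.
speech = ("Globalisation constrains national economic policy. "
          "Given globalisation, the government has no alternative.")
counts = term_counts(speech, ["globalisation", "constrains", "alternative"])
# counts records how often each term occurs across the text.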

Overall, in our view while the distinction between quantitative and
qualitative methods is useful heuristically, we must not overemphasise the
differences between them. In particular, the link between epistemology and
methodology is important, but far from determinant. We cannot say that
positivists never use qualitative data or that all non-positivists reject
quantitative analysis. Indeed, evidence from research practice suggests
that the traditional philosophical division between quantitative and
qualitative methods is increasingly becoming viewed as a false dichotomy.

Combining research methods

The move towards methodological pluralism

In the twentieth century, social science was dominated by quantitative
methods, particularly survey methods (Finch 1986: 1). It is not difficult to
understand this preference for quantitative research as it was associated
with the dominance of positivist-informed political science (such as the
behaviouralist or rational choice approaches: see Sanders, Chapter 2, and
Ward, Chapter 3). At the same time, however, those who funded research,
such as government agencies, regarded quantitative research as more
legitimate because of its claims to be ‘scientific’. Such research generated
‘hard’, ‘objective’, ‘statistically significant’ findings, in contrast to the
‘soft’, ‘subjective’ product of qualitative research. The fact that much
funding of social science research comes from the government, in one guise
or another, and that governments are usually interested in social science
research as a basis for ‘problem-solving’ legislation, merely reinforces this
preference for quantitative methods. The problem, of course, is that the
existence of such a dominant paradigm has conditioned both funders and
practitioners into a particular way of thinking about the process and
product of research.

Of course, the dominance of quantitative methods did not go
unquestioned and for most of the postwar period there have been two camps,
with researchers exclusively using either quantitative or qualitative
methods and, at the same time, denigrating the use of alternative methods. In
particular, at the height of the behavioural revolution in the 1960s and
1970s many social scientists questioned the utility of qualitative research,
although few went as far as Kerlinger who claimed that: ‘there’s no such
thing as qualitative data. Everything is either 1 or 0’ (cited in Miles and
Huberman 1994: 40). Even now, it is not unknown for behavioural
researchers to claim that qualitative methods have very limited utility
(for one recent example, see Dowding 2001). Similarly, more recently, the
postmodern turn in sociology (see Rhodes and Bevir, Chapter 6) has led
many to reject quantitative methods because, as we saw, it is argued that
they are based upon the foundationalist claim that there is a world ‘out
there’ which can be truthfully and accurately captured by utilising
appropriate quantitative methods. In contrast, as Denzin argues (1997: 8),
the anti-foundationalist ‘doubts all criteria and privileges none’. Such
researchers have little, if anything, to say to behaviouralists utilising
quantitative methods.

However, there was some interest in attempting to integrate quantitative
and qualitative methods even in the 1950s (see Trow 1957). This move has
grown apace in
the 1980s and 1990s and the traditional view that there is a gap, even a
gulf, between qualitative and quantitative research, which means that the
methods should be used independently of one another, is now regularly
challenged (see previous section). Hammersley’s view that each has a
crucial role to play in extending our understanding of the topic under
consideration is now becoming widely shared. Others endorse the view
that all available techniques should be used to add power and sensitivity to
individual judgement in understanding the environment being studied:
‘why throw away anything helpful?’ (see Miles and Huberman 1984).


Why and how to combine methods

It seems to us that there are two main reasons for combining methods:

• First, it may be that using one method does not allow the researcher to
address all aspects of the research question.

• Second, many researchers argue that combining methods increases the
validity of research, because using a variety of methods means that one
method serves as a check on another.

It is this second justification which dominates in the methods literature,
but we should not neglect the first reason.

Those researchers who focus on the search for increased integration talk
in terms of methodological triangulation; Denzin (1970) calls it a
‘triangulated perspective’. Actually, Denzin rejects the utility of a simple
distinction between quantitative and qualitative methods. Rather, he
identifies five separate methodologies: (1) surveys; (2) interviewing;
(3) documentary analysis; (4) direct observation; and (5) participant observation.
In his view, a completely triangulated investigation would make use of all
these methods, but its basic feature is the combination of two or more
different research methods in the study of the same empirical issue (Denzin
1970: 308).

Denzin also distinguishes between triangulation within methods and
triangulation between methods. However, it seems to us useful to
subdivide his second category. If we take the distinction between
quantitative and qualitative methods seriously, and recognise that this distinction
has resonance in epistemological terms, then we can distinguish between
combining different quantitative methods or different qualitative methods
and combining quantitative and qualitative methods.

Triangulation within methods


This form of methodological triangulation is very narrow and involves the
use of different tools to measure a particular variable. So, a particular
variable, perhaps political participation, will be measured using different
indicators, the assumption being that the different measures provide a
form of triangulation. As Denzin points out, this is no more than a
variation within the use of a single method and the inherent problems of
single-method research remain (1970: 307). Even so, such triangulation
may have its uses because, if different ways of measuring a variable, for
example participation, do not affect its relationship with another variable,
say education, then this adds validity to any conclusions about the
relationship between the two.
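The logic can be illustrated numerically. In the toy example below (all figures fabricated, not survey data), two different indicators of participation are each correlated with education; because the two correlations are close, the substantive conclusion about the education-participation relationship does not hinge on the choice of indicator:

```python
# Within-method triangulation, sketched: two indicators of the same
# concept (participation) should relate to education in broadly the same
# way if the measurement is sound. All numbers are fabricated.
def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

education = [10, 12, 12, 14, 16, 16, 18, 20]       # years of schooling
acts_per_year = [0, 1, 2, 2, 3, 4, 4, 6]           # indicator 1: political acts
engagement_scale = [1, 1, 2, 2, 3, 3, 4, 4]        # indicator 2: 4-point scale

r_acts = pearson_r(education, acts_per_year)
r_scale = pearson_r(education, engagement_scale)
# Similar correlations suggest the finding is robust to the measure chosen.
```

If the two correlations diverged sharply instead, that would be a warning that at least one indicator is not measuring participation well.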


The use of different quantitative or different qualitative methods

Here, the scope is broader and the justification of the use of a variety of
methods is often a practical one. As an example, a quantitative researcher
may wish to explain the changes in the support for a government in the
opinion polls over time. In this case the opinion polls would be the source
of data on: the party preference of a representative sample of voters; their
views of government performance (most likely their views as to which
party can be most trusted on health, education, the economy and so on);
and their expectations of their own future economic well-being (in the
literature this would be called their personal economic expectations).
However, the researcher will want to know which independent variables
combine to provide the best model of changing government popularity and
the opinion polls only provide data on some of those variables. So, the data
on variables that deal with actual government economic performance -
unemployment, inflation, bank rate, government expenditure levels and so
on - will be collected from various government publications reporting
these statistics.
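The mechanics of combining such sources are straightforward in principle. The sketch below (fabricated figures and hypothetical months, not real poll or government data) merges monthly poll readings with official economic series on a common month key, producing the kind of combined data set a popularity-function model would use:

```python
# Fabricated monthly figures for illustration only.
polls = {"2001-01": 48, "2001-02": 46, "2001-03": 44}            # % support
unemployment = {"2001-01": 5.2, "2001-02": 5.4, "2001-03": 5.9}  # % rate
inflation = {"2001-01": 2.1, "2001-02": 2.3, "2001-03": 2.2}     # % rate

# Merge the series on the month key into one record per observation.
merged = [
    {"month": m, "support": polls[m],
     "unemployment": unemployment[m], "inflation": inflation[m]}
    for m in sorted(polls)
]
```

Each record then supplies the dependent variable (support) alongside the independent variables drawn from the official statistics.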

At the same time, different qualitative methods may be used where one
method does not deal with all aspects of the research question or where
combining methods in this way increases the validity of findings. So, to
take an example already discussed, we might undertake a content analysis
of some government documents to establish how the government views the
problem of globalisation and how far they see it as constraining their
autonomy particularly in relation to the pursuit of economic policy. In
addition, we could interview ministers, civil servants or government
advisers about the issue, asking them how far they think economic
globalisation has progressed and how far they see it as a constraint on
their autonomy. The aim would be to discover and explore any
inconsistencies between the two data sources.

The triangulation of quantitative and qualitative methods

The issues here are similar. The aim is either to address aspects of the
research question that the exclusive use of either quantitative or qualitative
methods cannot cover or to add validity to results produced by one or
other method. So, if we return to our example of political participation
research, in-depth interviewing of a limited number of young people can be
used to gain more understanding of how people view politics and ‘the
political’. These responses could then be used to generate a questionnaire
to administer to a representative sample of young people. On the other
hand, the researcher may already have a questionnaire, perhaps because
they are following up prior research and looking at change in participation
over time, that is administered. Subsequently, there may be an intensive
study, using in-depth interviews of a sub-sample, to help the researcher
understand or clarify some of the responses to the questionnaire.

In a similar vein, a researcher may begin by undertaking qualitative
work, using interviews, participant observation or vignettes that reveal an
interesting result. They may then wish to see if that result is more
generalisable, either by interviewing a representative sample of the
population they are studying or by using a survey method. Of course, this
example is really little different from the first example raised in the last
paragraph. The only likely difference is an epistemological one. In the
prior example it is likely that the researcher is essentially a positivist using
qualitative methods as an ancillary method. In contrast, in the second
example the researcher is more probably a non-positivist who,
nevertheless, is interested in establishing the extent to which the understanding/
interpretation of their small sample is shared in the wider population.
In our view, whichever type of methodological triangulation is used,
provided it is used in a way that is sensitive to the epistemological issues,
it can add to our knowledge and understanding of the relationships between
social phenomena.

Modes of combination

Creswell (1994) argues that combining methods can take three basic forms:
a two-phase design; a dominant/less dominant design; and a mixed
methodology design. We have touched upon each of these designs in the
prior discussion, so a brief exposition will suffice here.

Creswell’s first model is a two-phase design approach in which the study
is conducted in separate qualitative and quantitative phases. In this
approach, in each separate phase the researcher can operate within the
appropriate epistemological paradigm and this goes some way to meeting
the criticisms of methodological triangulation by researchers who only
operate within one paradigm, using one method (1994: 177).

Padgett (1998), unlike Creswell, discusses the issue of the temporal
sequencing of this two-stage approach. She distinguishes between designs
in which the qualitative study comes first and designs where the researcher
starts from a quantitative analysis. As we saw earlier, in the first type of
design the qualitative analysis is ‘exploratory’ and is used to inform the
content of the quantitative study. The advantage to the researcher is that
the validity of the concepts, hypotheses and questions is enhanced because
they are devised following an intensive study of a sample of the population
being researched (Padgett 1998: 128-9). In the second type of design, the
findings of the first study provide the starting point for a qualitative study.
Here, the view is that the statistical analysis of survey data does not allow
us fully to explore the meanings that respondents attach to responses or
actions, whereas in-depth interviews with a sub-sample of respondents will
actions, whereas in-depth interviews with a sub-sample of respondents will
allow the researcher to construct a fuller explanation of those responses/
actions.

Creswell's second model is the dominant/less dominant design. Here, the researcher adopts an approach within a single dominant paradigm, with a small component of the overall study drawn from the alternative paradigm (1994: 177). The advantage of this method is that it retains a single consistent paradigm, but allows other data to be collected from a smaller or larger population, depending on which methodology is dominant. Again, Padgett develops this schema, introducing the two temporal sequences discussed above.

Creswell is most committed to the third model, the mixed-methodology design (1994: 177), which he argues can be used at any stage of the research process. In his view, such an approach not only adds complexity to the research design but also allows the researcher to take advantage of each research methodology and offers, perhaps, a better reflection of research practice because it uses both inductive and deductive approaches (1994: 178). Padgett concurs, arguing that: ‘the mixed-method approach is worth the effort. When both methods are given their due, a study can be enhanced greatly by their synergy’ (1998: 134). At the same time, however, she draws our attention to the difficulties that adopting this approach entails, requiring researchers to compromise in the way that they work.

Problems of combination

Although combining methods is increasingly the orthodoxy, it is not a process without problems. The key issue returns us to the question of ontological and epistemological differences. How important are these differences and how do they impact on methodology? In essence, there seem to be three answers to these questions. First, many authors ignore ontological and epistemological issues and their links to methodology, merely focusing on the empirical question in which they are interested. It is clear from our earlier discussion that we see that position as untenable (see also Marsh and Furlong, Chapter 1). In our view, ontological and epistemological positions are crucial in social science and any researcher must be aware of their position. Second, authors like Hammersley (1992), Creswell (1994) and Padgett (1998) recognise the differences, but play them down. Here, the view is that it is possible for both positivists and non-positivists to use both qualitative and quantitative methods in the same research project, in any combination and without privileging one or the other. In this view, the third mode of combination outlined by Creswell is both possible and preferred. A third position argues that ontological and epistemological positions serve as a skin, not a sweater, and that there is a clear link between ontology, epistemology and methodology. This view deserves more attention because it really takes us to the core problem of combining methods.

In this view a researcher cannot hold to two different positions at the same time or within the same research project (although, of course, as was argued by Marsh and Furlong in Chapter 1, at the margins there is considerable overlap between the positions). If we utilise both quantitative and qualitative methods we must ensure that we are doing so in a way that does not compromise our basic ontological and epistemological position. We cannot be a positivist when we are collecting and analysing quantitative data and an interpretist when we are analysing qualitative data. In this view, one's ontological and epistemological position affects all aspects of the research process.

Devine and Heath (1999) explore some of these issues. They argue that:

[A researcher’s epistemological position] will probably determine the research questions which that researcher is prepared to consider in the first place and in turn will influence the extent to which methodological triangulation will lean more towards one tradition than another. (1999: 204)

So, we would expect positivists to be more interested in research questions that lead to the production of generalisable results: questions which point to the use of quantitative methods. In addition, even when positivists use qualitative as well as quantitative methods and thus engage in what the literature terms ‘data triangulation’, the qualitative data will usually be ancillary to the quantitative data.

However, there is another important point that the Devine and Heath quote misses. Different researchers are likely to interpret their data, whether quantitative or qualitative, differently, depending on their epistemological positions. More specifically, they are likely to make different claims about their results. The positivist will see their results, if they involve a representative sample, as generalisable and, in a sense, as ‘true’. In contrast, a relativist would see their results as offering only one possible interpretation of the social phenomenon studied.
In this view, the first and third modes of combination outlined by Creswell are highly questionable, so the best possible mode is the second one. In effect, this view would argue that we are most likely either to be a positivist using qualitative methods as an ancillary to quantitative ones, or a non-positivist who uses quantitative methods as an ancillary to qualitative ones.


One other point is important here, before we move on to the more minor, logistical, problems of combination. If we use methodological triangulation to test the validity of results, what happens if the different methods produce different results? Of course, if money were no object, the first response of a researcher might be to replicate both the quantitative and the qualitative research. If this still produced differing results, then a good researcher might ask why. However, in most cases the positivist, quantitative, researcher would doubt the validity of the qualitative analysis, perhaps arguing that the numbers involved were too small, the sample not representative or the researchers incompetent. At the same time, the non-positivist, qualitative, researcher might argue that the quantitative analysis was unsophisticated and not sufficiently fine-grained. Such problems are probably impossible to resolve, because the positions taken reflect epistemological differences. This issue is clearly evident in the second case study discussed below.

At the same time, we cannot ignore the logistical problem involved in combining quantitative and qualitative analysis. First, there is often a resource problem. Large-scale surveys cost a great deal of money; large-scale in-depth interviewing costs even more. A combination of the two might be prohibitively expensive. Second, few researchers have the varied skills necessary to undertake sophisticated statistical analysis on large data sets and in-depth interviews. This may mean that a team with varied skills needs to be assembled, but that, again, is expensive, and any such team needs to avoid endless epistemological arguments.

Two case studies

The final section looks at two case studies of research that involve the
combination of quantitative and qualitative methods: the first deals with
voting on private members’ bills in the UK House of Commons; and the
second focuses on explanations of the changes which occurred in the
structure of the British civil service in the late 1980s and the 1990s.

The fate of UK private members' bills

The study of private members’ bills in the UK is of interest for two reasons.
First, legislative proposals for reform on key moral issues, such as
abortion, divorce and homosexuality, are normally introduced under this
procedure because governments have been reluctant to become identified
with a particular stance on what are usually regarded as vote-losing issues.
Second, unlike in the USA, party discipline is very strong in Britain. Almost
all the House of Commons divisions that are unwhipped (not dictated by a party line) occur on private members’ bills, so, if researchers wish to study what factors, other than party discipline, affect voting, then they have to focus mainly on private members’ bills.

Studies of private members’ bills tend to focus on one of two questions: why some bills pass while most do not; and what factors affect MPs’ voting on such issues. We shall look in turn at how quantitative and qualitative methods have been used by Marsh and Read (1988) to address both questions.

Marsh and Read (1988) provide the fullest study of private members’
bills to date, although there are other more recent studies of voting on
these unwhipped (non-partisan) issues (see especially Pattie et al. 1998).
Using Hansard, they examined the fate of all bills introduced in the
postwar period, classifying them according to: what topic they dealt with;
how far they progressed; whether the bill was voted on and, if so, what the
votes were; whether the bill received government support; and whether the
bill was given extra, that is, government, time. These quantitative data
were supported by interview data with most of the MPs who introduced
private members’ bills in two Sessions, 1979/1980 and 1980/1981. In these
interviews MPs were asked why they introduced a particular bill, what
contact they had with government and how they dealt with the constraints
that the private members’ bill procedure imposes on the fate of a bill.
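The coding exercise described here can be pictured as a small structured dataset. The sketch below is a minimal illustration in which the field names and example records are invented, not drawn from Marsh and Read's actual data:

```python
from dataclasses import dataclass
from collections import Counter

# Hypothetical coding scheme, loosely following the categories reported
# above: topic, stage reached, voting, government support and time.
@dataclass
class BillRecord:
    session: str
    topic: str
    stage_reached: str   # e.g. "Second Reading", "Royal Assent"
    voted_on: bool
    govt_support: bool
    govt_time: bool

# Invented example records for illustration only.
bills = [
    BillRecord("1979/80", "moral", "Second Reading", True, False, False),
    BillRecord("1979/80", "technical", "Royal Assent", False, True, False),
    BillRecord("1980/81", "technical", "Royal Assent", True, True, True),
]

# Cross-tabulate success (reaching Royal Assent) against government time.
table = Counter(
    (b.govt_time, b.stage_reached == "Royal Assent") for b in bills
)
for (time_given, passed), count in sorted(table.items()):
    print(f"govt time={time_given!s:5}  passed={passed!s:5}  n={count}")
```

Once bills are coded this way, the same records can be tabulated against any of the other coded fields (topic, government support and so on).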

The quantitative data were crucial because they showed that no controversial private members’ bill that has not received government time has passed since July 1977, when extra time was provided by the Labour Government to enable the Lords’ Amendment stage of three bills to be completed. As governments are now very reluctant to give time to private members’ bills, the bills that pass are minor and technical. Time is of the essence because the procedure does not prevent MPs who oppose a bill from filibustering it (that is, talking it out). There is no equivalent in private members’ business to the guillotine which the government can use to curtail debate on its bills.

However, the qualitative data also add to this explanation. It is usually possible to trace the government’s attitude to a particular bill by using Hansard, because, in the Second Reading debate on any bill, a government minister usually makes a speech in which they outline the government’s response. But this is not always so, and it may be necessary to interview particular MPs. At the same time, it is rarely possible to discover from Hansard whether a particular bill has in fact originated with the government. So, it was only through in-depth interviews with MPs that Marsh and Read discovered that many of the successful private members’ bills are government bills in all but name: minor, uncontroversial bills provided by a government department to an MP who draws a high position in the private members’ ballot. In addition, Marsh and Read undertook interviews with some MPs who filibustered particular bills. A reading of Erskine May, the bible of parliamentary procedure, sensitises the researcher to some of the intricacies of the private members’ bill process, but, without interviews with those MPs who piloted bills through the process and those who oppose bills, it would be impossible to understand how that procedure can be, and is, used.

If anything, a consideration of research on voting on private members’ bills reveals even more clearly the advantages of combining methods. Marsh and Read (1988) analysed the voting records of MPs on abortion, capital punishment and the decriminalisation of homosexuality between 1965 and 1980. Their aim was to discover which factors had most influence on MPs’ voting on unwhipped issues. They utilised a multivariate ordinary least squares regression analysis (1988: 86) to establish both which of a number of political and demographic variables (including party, size of majority, religion, gender, age, social class and so on) had the most explanatory power and how well an explanatory model could be produced by combining the effects of the independent variables.
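The logic of such a regression can be illustrated with a short ordinary least squares sketch. All the data below are simulated, and the variable names and effect sizes are invented assumptions for illustration, not Marsh and Read's actual results:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Simulated predictors: 1 = Labour, 0 = Conservative; 1 = Catholic.
party = rng.integers(0, 2, n)
catholic = rng.integers(0, 2, n)
age = rng.integers(30, 70, n)

# Simulated liberal/illiberal vote (1 = liberal), driven mainly by party,
# with a smaller religion effect and no age effect (invented coefficients).
vote = (0.2 + 0.6 * party - 0.3 * catholic
        + rng.normal(0, 0.2, n)) > 0.5
y = vote.astype(float)

# Design matrix with an intercept; fit by ordinary least squares.
X = np.column_stack([np.ones(n), party, catholic, (age - age.mean()) / 10])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

for name, b in zip(["intercept", "party", "catholic", "age"], beta):
    print(f"{name:9s} {b:+.3f}")
```

In a sketch like this, the fitted coefficient on party dominates the others, which mirrors the kind of pattern Marsh and Read report: party is by far the best predictor even without a whip.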

Their main finding was that party was by far the best predictor of vote, even though there was no declared party line on these issues. Generally, Labour MPs were liberal on these social issues (so voting pro-choice on abortion, against capital punishment and for the liberalisation of the homosexuality laws). The other key finding was also unsurprising. Religion had an influence on voting on abortion, but not on the other issues. So, while almost all Labour MPs took the liberal position on capital punishment and homosexuality reform, Catholic Labour MPs invariably voted for tougher abortion laws. Despite these results, however, Marsh and Read concluded that relatively few quantitative variables affect voting on these issues and that much of the variance on voting cannot be explained using these statistical models.

To explore the voting patterns more fully, Marsh and Read used qualitative methods, interviewing MPs and interest group activists. Their argument is that, in order to explain MPs’ votes, we need to understand much more about the political context of each vote. As just one example, 77 per cent of Conservative MPs voted for the Second Reading of the 1967 Abortion Bill, while 66 per cent of those MPs opposed it at the Third Reading. It is impossible to explain this difference on the basis of Marsh and Read’s quantitative analysis. However, interviews and the scrutiny of interest group documentation make it clear that the formation, and activities, of an anti-abortion interest group, the Society for the Protection of the Unborn Child (SPUC), after the Second Reading, together with increasingly visible opposition from sections of the medical profession to abortion on ‘social’ grounds, persuaded many Conservative MPs to change their votes. Marsh and Read conclude that: ‘while statistical analysis can provide an important basis from which to embark upon an explanation of voting, a fuller, more adequate, explanation involves a much more detailed consideration of the political background to, and context of, the votes’ (1988: 107).

In this case, then, qualitative research was used to explore aspects of the research question that the quantitative research could not address. Both
the quantitative and the qualitative data contribute to an explanation of
the fate of private members’ bills and the voting behaviour of MPs on
unwhipped issues. In our view, no positivist would have any problem with
combining methods in this way. Similarly, most non-positivists would have
few problems in using the quantitative analysis, although in explaining the
votes they would clearly focus more on the MPs’ understanding of the
issues involved and the strategic context within which they vote.

The changing structure of the British civil service

This case study is rather different. It does not report an example of the
successful combination of methods. Rather, it focuses on how two
different methods, essentially quantitative and qualitative, have been used
to examine the same problem: how to explain the changes that occurred in
the British civil service in the late 1980s and the 1990s. Of course, these
methods might both have been used in the same research project, but, as
we shall see, there are important epistemological issues involved with this
type of combination.

Dunleavy (1991) is critical of the mainstream public choice explanation of bureaucratic change, the budget maximisation model, which argues that the main interest of senior bureaucrats is in maximising their budget, because a larger budget will mean greater status and higher salary for the bureaucrats. In contrast, Dunleavy develops a bureau-shaping model that is based on the assumption that senior bureaucrats are most interested in maximising the status and quality of their work. For this reason, senior civil servants may be quite happy for their bureaux to shrink in size, as a result of the hiving off of troublesome and routine managerial work, if that means they can focus on more interesting work, especially providing policy advice to ministers.

Dunleavy uses aggregate data about the patterns of expenditure on different budgets and in different types of bureaux in order to test his bureau-shaping model of change in bureaucracies over time. These data are taken from government statistics. On the basis of this analysis Dunleavy argues:

Senior policy-level officials have generally agreed on the need to separate out their existing under-managed and under-prioritised executive roles, so as to allow them to concentrate on their key priorities of providing policy advice to ministers, managing relations with Parliament, organising legislation and regulations, and moving money around ... (1991: 463)
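The shape of an aggregate-budget test of this kind can be sketched as follows. The core/programme split loosely echoes Dunleavy's distinction between administrative and programme spending, but all the figures below are invented for illustration:

```python
# Hypothetical illustration of an aggregate-budget test: under the
# bureau-shaping model, a department's 'core' (administrative) budget
# should shrink relative to its total as executive work is hived off.
# All figures are invented, not government statistics.
budgets = {
    1985: {"core": 120, "programme": 880},
    1990: {"core": 90,  "programme": 910},
    1995: {"core": 60,  "programme": 940},
}

for year, b in sorted(budgets.items()):
    share = b["core"] / (b["core"] + b["programme"])
    print(f"{year}: core share = {share:.1%}")
```

A declining core share over time would be read, on this kind of test, as consistent with bureau-shaping; note that the aggregate pattern alone says nothing about who drove the change, which is precisely the gap the interview-based study below addresses.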

It is possible to test the bureau-shaping model using other methods. We could interview a sample of the bureaucrats in a given bureau, or a series of bureaux, and ask them about the changes that had occurred and why they acted in the way they did. The results of the two methods could then be compared, allowing us an additional test of the validity of Dunleavy’s model. If a representative sample of the civil servants in a particular bureau were interviewed, then these data could be treated as quantitative; if not, they would be more likely to be regarded as qualitative. Of course, this reflects back on our earlier argument that the distinction between quantitative and qualitative data is not a self-evident or easy one.

Marsh et al. (2000) take another approach. They focus on the major change in the UK bureaucracy which began in the 1980s: the creation of the Next Steps agencies. This change meant that the British civil service shrank significantly in size as most executive functions of British central government departments were transferred to separate agencies. As such, the departments were much smaller and their work focused on policy advice. Prima facie, this development would seem to confirm the Dunleavy model.

Actually, the bureau-shaping model would generate three hypotheses concerned with this development: first, that senior civil servants have less interest in the management of their department than in providing policy advice; second, that the development of agencies was encouraged by senior civil servants; and, third, that the outcome of these changes has been to allow senior civil servants to focus on the policy advice function. Marsh et al. (2000) test these hypotheses using extensive interviews with retired and serving senior civil servants who had been involved with the changes. Their sample is not representative and they treat their interviews as qualitative, rather than quantitative, data. So, they do not report the numbers of interviewees who gave particular responses and use quotes from the interviews to give a flavour of how these civil servants experienced and viewed the process.

Marsh et al.’s interview data do not confirm the bureau-shaping model’s explanation of that change. Rather, they suggest that: some senior bureaucrats value their management function as highly as their policy advice function; that the creation of the agencies was driven by politicians, not bureaucrats, and, indeed, opposed by some senior civil servants; and that, since these developments, very senior civil servants are less, rather than more, involved in giving policy advice.

Melvyn Read and David Marsh 247

As we said, neither Dunleavy nor Marsh et al. combine methods, but it would be perfectly possible to do so. Many would also argue that such an approach could have advantages. For example, it is clear that Marsh et al.’s qualitative study could be seen as providing a test of the validity of Dunleavy’s explanation. However, there is a major epistemological problem here.

Dunleavy is a positivist and for him this has strong methodological implications. As we saw, he relied almost exclusively on quantitative data about bureaux budgets. He did not interview civil servants or politicians about their preferences or strategic judgements. The implication is that such data are ‘soft’ because they may reflect a bureaucrat’s post hoc reconstruction of decisions: interviewees may be mistaken or lying about their preferences and reasons for particular behaviour. In contrast, Marsh et al. are realists who argue that we need to know how civil servants understood the situation and their choices if we are to develop a fuller interpretation of the changes.

The key point is that if both methods confirm the same result then two researchers, albeit operating from different epistemological positions and with different methodologies, may agree. However, if the two methodologies produce different results or suggest different conclusions, then the researcher will almost inevitably privilege one set of results, dependent upon their epistemological and methodological preferences.

Conclusion

In our view, the key characteristics necessary for high quality research are
a good research design and an excellent researcher/research team. As
Devine and Heath argue:

Good [social science] research requires integrity on the part of the researcher, a willingness to face difficult questions, an openness to new and different ideas, an acceptance that some strategies of research can go wrong, an ability to adapt to different ways of doing things and so on. The way in which practical, ethical and political issues arise in all research demands pragmatism, but it is a pragmatism which is systematic and vigorous. (1999: 18)

As such, researchers need to be flexible and adaptable, and this involves using quantitative and qualitative methods when appropriate. In addition, all must take ontological and epistemological issues seriously, and for many this means that combining methods can only occur within one epistemological position. However, perhaps the most important point, in our view, is that, while it is possible to distinguish ‘good’ research from ‘bad’, whichever method or combination of methods is used, the criteria of judgement will not be the same (see Devine and Heath 1999: 209). So, for example, the positivist undertaking quantitative work will make judgements based on the reproducibility and generalisability of research results; criteria which have little or no relevance to the relativist undertaking qualitative analysis and acknowledging that there can be competing interpretations of the same data. It is important not to argue that there is only one way to do social science and that a single set of criteria can be established to judge ‘good research’.

Further reading

• On qualitative research, see Silverman (1997), Mason (1996) and Devine and Heath (1999).

• For quantitative methods in political science, see Miller (1995); the classic book by Tufte (1974); and recent introductions and reviews such as Pennings et al. (1999), Champney (1995) or Jackson (1996).

Chapter 12

Comparative Methods

JONATHAN HOPKIN

Introduction

Despite the status of ‘comparative politics’ as a disciplinary sub-field, comparison and the comparative method are used implicitly or explicitly across political science and the social sciences in general. Comparison serves several purposes in political analysis. By making the researcher aware of unexpected differences, or even surprising similarities, between cases, comparison brings a sense of perspective to a familiar environment and discourages parochial responses to political issues. Observation of the ways in which political problems are addressed in different contexts provides valuable opportunities for policy learning and exposure to new ideas and perspectives. Comparison across several cases (usually countries) enables the researcher to assess whether a particular political phenomenon is simply a local issue or the professional academic’s Holy Grail, a previously unobserved ‘general trend’. But perhaps the principal function of comparison in political science is that of developing, testing and refining theory. This chapter will therefore focus on the relationship between theory and the comparative method and the problems involved in designing comparative research.

Theory and the comparative method

As many disgruntled comparativists have pointed out, ‘comparative politics’ is often taken to mean simply ‘the politics of foreign countries’ — this is certainly the meaning used in the book reviews section of the American Political Science Review and in the organisation of political science departments in the United States, and to some extent elsewhere. Ironically, much of the research carried out under this rubric is not comparative at all, consisting instead of narrow, ‘idiographic’ studies (studies which are limited to particular cases or events), often of individual countries. In fact, use of the comparative method is by no means constrained by such institutional conventions. Instead, as ‘one of the primary means for establishing social scientific generalizations’ (Ragin et al. 1996: 749), it can, and indeed should, be used in ‘nomothetic’ studies (studies which seek to demonstrate ‘law-like’ theoretical claims) of social and political phenomena.
