(Morgan, 2019) - Commentary in Journal of Mixed Methods Research
David L. Morgan¹
Abstract
This commentary agrees with the editors' recent decision to do away with triangulation as a term in mixed methods research, but before doing so, it argues for a review of its original popularity, and a careful consideration of what should replace it. Triangulation depends on the comparison of results from qualitative and quantitative studies that attempt to answer the same research question(s), so there are three possible outcomes: convergence, complementarity, and divergence. After reviewing each of these alternatives, I present an approach that cross-tabulates tests of hypotheses as quantitative results and themes as qualitative results, based on the extent to which those results are convergent, complementary, or divergent.
Keywords
triangulation, convergence, complementarity, divergence
Let me begin by agreeing with Fetters and Molina-Azorin's (2017a) proposal that we divest ourselves of triangulation as a term for research designs in mixed methods research (MMR). Even though they offer six reasons for eliminating triangulation from our terminology, I believe their fourth is sufficient: triangulation "has multiple meanings and lacks sufficient clarity and precision." This problem has been recognized since Greene, Caracelli, and Graham (1989) examined more than 50 studies to assess the match between the stated reasons for doing MMR versus what the studies did. They found that although triangulation was the most frequently stated reason, fewer than a third of such studies actually used triangulation as intended. So triangulation has a long history of multiple meanings and insufficient clarity.
Yet simply saying good-bye to triangulation is not enough. Instead, we need to understand why it was so popular, in terms of both its initial purpose and the various other purposes that were assigned to it. Here, Fetters and Molina-Azorin (2017a) are mistaken in saying that triangulation "developed within, and is virtually synonymous with the field of qualitative research" (p. 7), since the concept originated in the work of Donald Campbell and his coauthors (Campbell & Fiske, 1959; Webb, Campbell, Schwartz, & Sechrest, 1966). There, the term came from an analogy to navigation, with two separate lines of sight converging on a single point and forming the tip of a triangle. For Campbell and colleagues, comparing the results from multiple methods aimed to minimize the chance that the weaknesses of any single method might produce "invalid" conclusions.
¹Department of Sociology, Portland State University, OR, USA
Corresponding Author:
David L. Morgan, 2513 NE Skidmore St, Portland, OR 97211, USA.
Email: morgand@pdx.edu
Concerns about validity were at the center of much of Campbell's career. One such concern involved problems in measurement (Campbell & Fiske, 1959), where the research results actually derived from deficiencies in how the data were captured. A different concern with validity was highlighted in his work on threats to inference in experimental and quasi-experimental design (Campbell & Stanley, 1963; Cook & Campbell, 1979). A third concern was the limitations inherent in any given method, so that using only one method to do multiple studies of the same topic might produce similar results due to shared biases in the method itself. Seen in this light, his work on unobtrusive measures in Webb et al. (1966) was devoted to producing a new method that was not subject to the "reactivity" that occurred when participants knew they were being studied, as in methods such as interviews, self-reports, and participant observation.
This last approach to validity issues is important because Webb et al. (1966) relied on the
concept of triangulation to counteract the limitations of single methods. In particular, they noted
the importance of cross-validating results by using multiple methods: "Once a proposition has been confirmed by two or more independent measurement processes, the uncertainty of its interpretation is greatly reduced" (p. 3); and "When a hypothesis can survive the confrontation of a series of complementary methods of testing, it contains a degree of validity unattainable by one test within the more constricted framework of a single method" (p. 196). Denzin (1970) relied explicitly on Webb et al. (1966) to promote the goal of comparing the results of multiple methods. This in turn produced the version of triangulation that was so widely used as the justification for MMR in the 1970s and 1980s.
But if triangulation initially meant assessing the convergence of different methods, how did
the other interpretations arise? I believe the key insight here is that there are multiple possible
outcomes in the comparison of different methods. Beyond convergence, there is the possibility
that each method will target a different aspect of the underlying phenomenon, leading to results
that are complementary to each other. There is also the obvious possibility of divergence when
multiple methods produce distinctly different outcomes.
Convergence, complementarity, and divergence summarize the three possible alternatives
from comparing qualitative and quantitative results, thus leading to three different reasons for
doing MMR. Of course, these are not the only reasons to do MMR; at a minimum, they omit all of the "sequential" design formats. Still, the direct comparison of the results from multiple methods remains an important element of MMR, even if we abandon triangulation as a label for this
work. In the core section of this commentary, I will describe convergence, complementarity, and
divergence, along with an assessment of the strengths and limitations of each as a goal for doing
MMR. I will then offer a specific proposal for presenting the more complex results that can come
from combining two or more of these goals, followed by some brief concluding remarks.
Convergence increases the credibility of the results by reducing the chance that those results were due to the biases of any one method. Note that I have replaced the quantitatively oriented term validity with the broader criterion of credibility (literally, "believability"), as proposed by Lincoln and Guba (1985), who advocated triangulation as a way to
enhance such credibility. One of the strengths of this approach is its direct link to issues of integration in mixed methods (Fetters & Molina-Azorin, 2017b), because it proposes a direct comparison of the qualitative and quantitative results to determine their similarity.
In contrast to these strengths, major problems can arise if the actual results produce either
outright divergence or a muddled interpretation where each method targets different aspects of
the research goal. In either of those cases, studies that were exclusively aimed at convergence
may yield very little in the way of usable conclusions. Furthermore, even when there is clear
convergence, that still amounts to answering the same research question twice. This duplication of effort is worthwhile only when the need for additional credibility is important enough to justify the expense and effort of conducting separate qualitative and quantitative studies.
Divergence is the remaining alternative for comparing results, and as such it has not received as many competing labels. The other major option is "initiation," which was used by Greene et al. (1989). The reason for preferring divergence as a label is not only that initiation never caught on but also that divergence is a possible outcome from comparing the results of qualitative and quantitative studies, while initiating new research is a choice that might be made in the face of divergence.
The main advantage of divergence is not the differences that it generates but the opportunities that it provides for investigating those differences. This typically involves moving back and forth between the qualitative and quantitative results to produce a richer interpretation of the original contradictions. Maxwell and Loomis (2003) called this an "interactive model of design," and they provided a number of detailed examples to demonstrate how pursuing divergent results can produce insights that go well beyond the initial recognition of difference. In
this case, the point where integration occurs can be somewhat indeterminate. On the one hand,
the research may cease with the discovery of divergence, producing only hypotheses about the
sources of the different results. On the other hand, the divergent results may lead to further data
collection and analysis in an attempt to resolve the discrepancy.
Divergence has limitations because it requires differences that are both theoretically interesting and empirically addressable, but there currently are no protocols for producing such results. Interestingly, these problems are also demonstrated in the detailed examples provided by Maxwell and Loomis (2003), since a number of those studies began with failed attempts at convergence. In other words, much of the work that exemplifies research based on divergence also
indicates how hard it is to design a study around divergence as an explicit goal. In addition,
when further research is undertaken to resolve discrepancies, it is difficult to predict in advance
how much effort it will take to produce meaningful results.
Conclusions
In many ways, triangulation was a victim of its own success. From the 1970s into the 1990s, it
was by far the best-known reason for doing MMR. Hence, anyone doing MMR during that
period might have been tempted to use triangulation as a justification, if only because there
were few obvious alternatives. As a result, triangulation came to mean too many things. Yet
that does not imply that the original purposes of triangulation have disappeared; instead, those
purposes have been clarified and expanded.
Today, we still have the goal of comparing the results of qualitative and quantitative studies
on the same phenomena, but we have developed a better understanding of the alternative reasons for making such comparisons. Furthermore, as Table 1 indicates, we now realize that there may be multiple outcomes from comparing the results from qualitative and quantitative methods. Building on these advances creates greater clarity about the differences between convergence, complementarity, and divergence, and that provides a much better chance of laying
triangulation to rest.
Funding
The author received no financial support for the research, authorship, and/or publication of this article.
References
Campbell, D., & Fiske, D. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56, 81-105.
Campbell, D., & Stanley, J. (1963). Experimental and quasi-experimental designs for research. Belmont,
CA: Wadsworth.
Cook, T., & Campbell, D. (1979). Quasi-experimentation: Design and analysis issues for field settings.
Boston, MA: Houghton Mifflin.
Denzin, N. (1970). The research act. Chicago, IL: Aldine.
Fetters, D., & Molina-Azorin, J. (2017a). The Journal of Mixed Methods Research starts a new decade:
Principles for bringing in the new and divesting of the old language of the field. Journal of Mixed
Methods Research, 11(1), 3-10.
Fetters, D., & Molina-Azorin, J. (2017b). The Journal of Mixed Methods Research starts a new decade:
The mixed methods integration trilogy and its dimensions. Journal of Mixed Methods Research, 11(3),
291-307.
Fielding, N., & Fielding, J. (1986). Linking data. Thousand Oaks, CA: Sage.
Flick, U. (1992). Triangulation revisited: Strategy of validation or alternative? Journal for the Theory of
Social Behavior, 22, 175-197.
Greene, J., Caracelli, V., & Graham, W. (1989). Toward a conceptual framework for mixed methods
evaluation designs. Educational Evaluation and Policy Analysis, 11, 259-274.
Lincoln, Y., & Guba, E. (1985). Naturalistic inquiry. Thousand Oaks, CA: Sage.
Maxwell, J., & Loomis, D. (2003). Mixed methods design: An alternative approach. In A. Tashakkori & C.
Teddlie (Eds.), Handbook of mixed methods in social & behavioral research (pp. 241-271). Thousand
Oaks, CA: Sage.
Morgan, D. (2013). Integrating qualitative and quantitative methods: A pragmatic approach. Thousand
Oaks, CA: Sage.
Webb, E., Campbell, D., Schwartz, R., & Sechrest, L. (1966). Unobtrusive measures. New York, NY:
Guilford.