Generative Explanation and Individualism in Agent-Based Simulation
Philosophy of the Social Sciences 43(3): 323-340
© The Author(s) 2013
Reprints and permissions: sagepub.com/journalsPermissions.nav
DOI: 10.1177/0048393113488873
pos.sagepub.com
Abstract
Social scientists associate agent-based simulation (ABS) models with three
ideas about explanation: they provide generative explanations, they are
models of mechanisms, and they implement methodological individualism.
In light of a philosophical account of explanation, we show that these ideas
are not necessarily related and offer an account of the explanatory import of
ABS models. We also argue that their bottom-up research strategy should
be distinguished from methodological individualism.
Keywords
agent-based simulation, explanation, mechanism, methodological individualism
1. Introduction
Over the past two decades, agent-based simulations (ABSs) have been
increasingly employed throughout the social sciences. However, the methodology of ABS is not yet sufficiently understood, and its legitimacy and implications are still a subject of debate. For example, many economists are wary of simulations because they regard them as inferior vis-à-vis analytical models
Corresponding Author:
Caterina Marchionni, Finnish Centre of Excellence in the Philosophy of the Social Sciences,
Department of Political and Economic Studies, P.O. Box 24, 00014 University of Helsinki,
Finland.
Email: caterina.marchionni@helsinki.fi
which they react is constituted by the beliefs, goals, and behaviors of other agents. ABS methodology also permits the modeling of heterogeneous agents, namely, agents who differ in their beliefs, goals, and rules of behavior. Finally, agents can be embedded in networks. All this means that population dynamics are emergent outcomes of local interactions. ABS also allows agents to change their structural locations or break off relations with their neighbors and seek out new relations (Macy and Flache 2009; Miller and Page 2007).
Compared with the traditional modeling tools employed in the social sciences, ABS is claimed to have a number of advantages. First, as we have seen, simulations allow one to be flexible about the characteristics of agents and hence break away from the strictures of the assumption of optimizing behavior characteristic of rational choice models. Furthermore, ABS allows modelers to study equilibrium outcomes as well as the dynamics of systems. Finally, unlike standard economic models, where for reasons of tractability the modeler has to work with cases with either one, two, or an infinite number of agents, ABS can work with any number of agents and hence can provide more realistic models of social processes (Macy and Flache 2009; Miller and Page 2007; Ylikoski, forthcoming).
1. Mäki (2005) and Morgan (2003) compare the manipulation of modeling assumptions to experimental manipulation.
The model explains both stylized facts, price dispersion and high loyalty. In a
coevolutionary process, buyers learn to become loyal as sellers learn to offer
higher utility to loyal buyers, while these sellers, in turn, learn to offer higher
utility to loyal buyers as they happen to realize higher gross revenues from loyal
buyers.
In our view, the authors may be overstating the extent to which the model explains the stylized facts. This is not to understate the importance of this model and its result. As their simulation can generate high loyalty and price dispersion, it achieves an abstract proof of possibility. Such a proof shows what kind of assumptions could produce the outcome, but not how that occurs or whether those assumptions are the only way to generate the outcome of interest. The model does not provide much insight into the crucial how-questions, as the experimental component is almost completely absent (the authors only vary one condition, namely, the heterogeneity of buyers). Thus, insofar as the authors do not systematically explore how the outcome of interest depends on the details of the model, the full explanatory import of the model remains an open question.
On our account of explanation, to provide a proper explanation of the phenomenon in question, the simulator should show not only that the assumptions made about the agents bring about the observed macro-outcome but also how they do so. This is done by spelling out the mechanism implemented in the simulation and by showing how the differences in outcomes systematically depend on changes in the assumptions of the simulation. When simulation practice involves an experimental component, the assumptions of the simulation are systematically varied to learn the effects of these changes on the outcome of the simulation. This makes it possible to learn more about the network of counterfactual dependencies that characterizes the simulated system. This network of dependencies is precisely what the description of the explanatory mechanism implemented in the simulation amounts to.
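The kind of systematic variation we have in mind can be sketched with a toy simulation. The threshold-contagion model below, its ring-lattice topology, and all numeric values are our illustrative assumptions, not a model discussed in this article; the point is only that sweeping one assumption maps how the macro-outcome depends on it.

```python
import random

def cascade_size(threshold, n=200, k=4, seeds=5, rng_seed=0):
    """Toy threshold-contagion run on a ring lattice (illustrative).

    An inactive agent activates when the fraction of active neighbors
    reaches `threshold`. Returns the fraction of agents active once the
    cascade settles.
    """
    rng = random.Random(rng_seed)
    active = [False] * n
    for i in rng.sample(range(n), seeds):  # a few initial adopters
        active[i] = True
    changed = True
    while changed:  # iterate until no agent changes state
        changed = False
        for i in range(n):
            if active[i]:
                continue
            nbrs = [(i + d) % n for d in range(-k, k + 1) if d != 0]
            if sum(active[j] for j in nbrs) / len(nbrs) >= threshold:
                active[i] = True
                changed = True
    return sum(active) / n

# Systematic variation of a single assumption (the adoption threshold)
# reveals how the macro-outcome depends on it:
for t in (0.1, 0.3, 0.5):
    print(t, cascade_size(t))
```

Running the sweep shows the counterfactual dependence directly: low thresholds let the cascade engulf the population, while higher thresholds leave it confined to the initial adopters.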
Note that there are two reasons for the systematic variation of the assumptions of a simulation. The first is to learn about the systematic dependencies between the explanans and the explanandum within the specified mechanical configuration. The second is to learn which unrealistic assumptions matter for the model's results, a purpose that is typically achieved by robustness analysis (Kuorikoski, Lehtinen, and Marchionni 2010; Levins 1966; Weisberg 2006; Wimsatt 1981). Both purposes are equally important for the explanatory use of simulations, but the contribution of robustness analysis to explanation is only indirect. Usually robustness analysis is aimed at investigating the role of those assumptions made to facilitate the tractability of the underlying model or of the simulation itself (as, for instance, when a city is represented as a
[MI] Social phenomena can only be explained (or they are best explained) by
accounts that only refer to individuals, their properties and their interactions.
There are two aspects to note about this reading of methodological individualism. First, [MI] is a thesis about explanation, not about ontology. Therefore, arguments about the existence of social wholes, structures, and such entities vis-à-vis individuals do not directly bear on arguments about explanation of social phenomena. Second, [MI] qualifies as a strong version of methodological individualism in that it holds that explanation of social phenomena should appeal only to individuals, their properties, and interactions. The corollary of such a view is that nonindividual properties are denied nonderivative explanatory status.
Some individualists endorse weaker versions of the thesis, according to which nonindividual properties can play a (nonderivative) explanatory role via their effects on individuals. It is legitimate to ask, however, in what sense these more liberal positions are individualistic and whether they represent the same position that many anti-individualists are trying to defend (see Udehn 2001). As said above, we do not want to get entangled in debates about the proper definition of individualism. For our argument, it is sufficient that [MI] is sufficiently close to what is commonly understood as the content of the methodological individualist doctrine.
The model implements the emperor's dilemma familiar from Hans Christian Andersen's fable. In the model, agents must decide whether to comply with and enforce a norm that is supported by a few fanatics and opposed by the vast majority. The model is used to examine the population-level implications of the use of norm enforcement to falsely signal genuine conviction. The idea is to study whether a very small fraction of true believers can spark a cascade of conformity and false enforcement that quickly engulfs a vulnerable population. Thus, the norm does not become enforced because people are converted to new beliefs; rather, it is enforced because they feel the need to affirm the sincerity of their (false) conformity.
In the simulation, the population consists of agents who differ in their beliefs and convictions. A small group of true believers is assumed to have such strong convictions that they always comply with the norm. When dissatisfied with the level of compliance of others, they may enforce the norm. The remainder of the population consists of disbelievers who privately oppose the norm, but with less conviction than that of the true believers. The disbelievers may deviate from the norm or even pressure others to deviate as well. However, the disbelievers can also be pressured to support the norm and even to enforce it. At every iteration of the simulation, each agent observes how many of his or her neighbors are complying with the norm and how many are deviating. They also observe how many neighbors are enforcing compliance and how many are enforcing deviations from the norm. Based on this information, the agents decide whether to comply or deviate and whether to force others to behave similarly in the next round.
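The update rule just described can be rendered as a minimal sketch. The ring-lattice neighborhood, the conviction values, and all thresholds below are our illustrative assumptions, not the exact specification of Centola, Willer, and Macy's model:

```python
import random

def run_emperor_dilemma(n=100, k=4, true_believers=5, steps=50, seed=0):
    """Minimal sketch of a self-enforcing-norm cascade on a ring lattice.

    Positive conviction marks a true believer, who always complies and
    enforces when neighbors deviate; disbelievers privately oppose the
    norm but can be pressured into (false) compliance, which they then
    enforce to signal sincerity. All numeric values are illustrative.
    """
    rng = random.Random(seed)
    conviction = [1.0 if i < true_believers else -rng.uniform(0.1, 0.5)
                  for i in range(n)]
    rng.shuffle(conviction)
    comply = [c > 0 for c in conviction]
    enforce = [False] * n

    def neighbors(i):
        # Local neighborhoods: k nearest agents on each side of the ring.
        return [(i + d) % n for d in range(-k, k + 1) if d != 0]

    for _ in range(steps):
        new_comply, new_enforce = comply[:], enforce[:]
        for i in range(n):
            nbrs = neighbors(i)
            # Net enforcement pressure observed in the neighborhood.
            pressure = sum(1 for j in nbrs if enforce[j] and comply[j])
            pressure -= sum(1 for j in nbrs if enforce[j] and not comply[j])
            if conviction[i] > 0:
                # True believers always comply; they enforce when
                # dissatisfied with neighbors' compliance.
                new_comply[i] = True
                new_enforce[i] = sum(comply[j] for j in nbrs) < len(nbrs)
            else:
                # Disbelievers comply when pressure outweighs their
                # (weak) private conviction, and false compliers
                # enforce to affirm their sincerity.
                new_comply[i] = pressure / len(nbrs) > -conviction[i]
                new_enforce[i] = new_comply[i] and pressure > 0
        comply, enforce = new_comply, new_enforce
    return sum(comply) / n

# Fraction of agents (genuinely or falsely) complying after 50 rounds:
print(run_emperor_dilemma())
```

As in the model the sketch imitates, nothing happens without the initial enforcers: with `true_believers=0` no pressure ever arises and compliance stays at zero.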
In the article, Centola, Willer, and Macy (2005) report the manipulation of three kinds of variable: (1) access to information about the behavior of other agents, (2) the frequency distribution and clustering of true believers, and (3) the network topology. The results of these simulations are surprising: cascades are much easier to achieve than expected. A small group of true believers can bring about a cascade in populations where neighborhoods are local; however, they are unable to do so in fully connected populations. Moreover, the clustering of true believers turns out to be relevant: a very small cluster of believers can trigger a cascade, while a great number of randomly distributed believers cannot. Finally, when a small number of random ties reduce the overlap between local neighborhoods, cascades are prevented. On the basis of these observations, the authors conclude that "unpopular norms thrive on local misrepresentations of the underlying population distribution" (Centola, Willer, and Macy 2005, 1034). However, the most interesting result is that disbelievers are crucial for the emergence of cascades. Without them, cascades do not begin, and if the agents start to convert into true believers, the following of the norm might paradoxically collapse.
Now let us take a closer look at the manipulated variables. From the point of view of our argument, the crucial question is whether they are individual or structural properties. By structural properties, we mean properties that are attributed to larger-scale entities than individuals or that, if attributed to individuals, presuppose some larger-scale entities. These nonindividual properties constitute a rather heterogeneous class; what they have in common is the property of being nonindividual properties (Ylikoski 2012). The network topology is clearly a structural assumption about the macrostructure of the population. It is a structural property in the sense that there is no meaningful way to attribute it to an individual; it is always attributed to a larger-scale entity. Similarly, the frequency and degree of clustering of true believers (and other agents) is a population-level attribute that cannot be applied to individuals. Finally, while the access to information about other agents is attributed to an individual agent, it is more properly understood as a structural assumption about relations between agents. The relations between individual agents and the overall configuration of these relations in the population are population-level attributes that do not apply to individuals.
Thus, all three key variables are rather prototypical nonindividual structural macroproperties (Ylikoski 2012). Furthermore, all three satisfy our suggested criteria for explanatory variables. First, they make a difference to the outcome, as whether or not a cascade is triggered depends on them. Second, they also have a realistic sociological interpretation, as they capture the degree to which agents can obtain an accurate picture of how widespread genuine belief in a given norm is.
Explanatory structural variables like these are not unique to this particular case. It is quite common to find ABS models that focus on variables such as the composition of the population, contacts between agents, and agents' freedom of movement (e.g., Centola and Macy 2007; Flache and Macy 2011). This is the case even for the relatively simple segregation models inspired by Thomas Schelling's work (e.g., Benard and Willer 2007; Bruch and Mare 2006; Clark and Fossett 2008; Fossett 2006; also see Ylikoski, forthcoming).
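The interplay of structural and individual assumptions can be made concrete with a minimal Schelling-style sketch. This is a generic illustration assembled by us, not the specification of any of the cited models, and all parameter values are arbitrary: the grid size, fraction of empty cells, and neighborhood radius are structural assumptions, while the tolerance threshold is an individual-level one.

```python
import random

def run_schelling(size=20, frac_empty=0.1, tolerance=0.3,
                  radius=1, steps=2000, seed=0):
    """Minimal Schelling-style segregation sketch (illustrative only).

    Returns a segregation measure: the share of adjacent occupied
    pairs whose occupants are of the same type.
    """
    rng = random.Random(seed)
    n_empty = int(size * size * frac_empty)
    rest = size * size - n_empty
    # 0 marks an empty cell; 1 and 2 are the two agent types.
    cells = [0] * n_empty + [1] * (rest // 2) + [2] * (rest - rest // 2)
    rng.shuffle(cells)
    grid = [cells[i * size:(i + 1) * size] for i in range(size)]

    def unhappy(x, y):
        me = grid[x][y]
        if me == 0:
            return False
        same = other = 0
        for dx in range(-radius, radius + 1):  # toroidal neighborhood
            for dy in range(-radius, radius + 1):
                if dx == 0 and dy == 0:
                    continue
                v = grid[(x + dx) % size][(y + dy) % size]
                if v == me:
                    same += 1
                elif v != 0:
                    other += 1
        return (same + other) > 0 and same / (same + other) < tolerance

    for _ in range(steps):
        movers = [(x, y) for x in range(size) for y in range(size)
                  if unhappy(x, y)]
        if not movers:
            break  # everyone is satisfied
        x, y = rng.choice(movers)  # one unhappy agent moves...
        ex, ey = rng.choice([(a, b) for a in range(size)
                             for b in range(size) if grid[a][b] == 0])
        grid[ex][ey], grid[x][y] = grid[x][y], 0  # ...to an empty cell

    same = pairs = 0
    for x in range(size):
        for y in range(size):
            if grid[x][y] == 0:
                continue
            for dx, dy in ((1, 0), (0, 1)):
                v = grid[(x + dx) % size][(y + dy) % size]
                if v != 0:
                    pairs += 1
                    same += v == grid[x][y]
    return same / pairs
```

Here the structural assumptions (`radius`, `frac_empty`, `size`) and the individual assumption (`tolerance`) can be swept independently, which is exactly the kind of joint experimental manipulation discussed in the text.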
One of the attractions of ABS modeling is precisely that such assumptions can be systematically manipulated together with assumptions about individuals. The effects of changes in the size of the neighborhood with which agents interact, or in the connections that agents have beyond their immediate neighborhood, can be studied, for example, by a random rewiring of the links between agents (e.g., Centola and Macy 2007). The agents' freedom of movement brings in another set of structural assumptions, such as the relative size of the available empty spaces and the rules for movement across them. The experimental manipulation of structural and individual assumptions makes it possible for the simulator to investigate the network of
2. Ylikoski (2012) offers an account of macro-micro relations that dispenses with many of the problems that afflict the individualism-holism debate.
8. Conclusion
Philosophical reflections on social simulations have been scarce compared with general accounts of the epistemology of simulations and specific analyses of simulations in the natural sciences (see, however, B. Epstein 2012; Grüne-Yanoff and Weirich 2010). In this article, we have offered an account of the way in which ABS is used for explanatory purposes in social science. The bottom-up research strategy embodied in typical ABS delivers generative explanations: it generates the macrophenomenon to be explained by appeal to the actions and interactions of the agents. The bottom-up research strategy of ABS can also yield mechanism-based explanations: it tracks the (possible) network of dependencies behind the phenomenon to be explained. However, we have argued that the idea that ABS is an implementation of methodological individualism is misleading because structural assumptions often play an irreducible explanatory role. Finally, we have offered practical (rather than conceptual) arguments for resisting the association of ABS with methodological individualism, even when the latter is interpreted to accommodate the explanatory role of nonindividualistic assumptions.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Caterina Marchionni's work was conducted with funding from the Academy of Finland.
References
Benard, Stephen, and Robb Willer. 2007. "A Wealth and Status-Based Model of Residential Segregation." Journal of Mathematical Sociology 31 (2): 149-74.
Bruch, Elizabeth, and Robert Mare. 2006. "Neighborhood Choice and Neighborhood Change." American Journal of Sociology 112 (3): 667-709.
Centola, Damon, and Michael Macy. 2007. "Complex Contagions and the Weakness of Long Ties." American Journal of Sociology 113 (3): 702-34.
Centola, Damon, Robb Willer, and Michael Macy. 2005. "The Emperor's Dilemma: A Computational Model of Self-Enforcing Norms." American Journal of Sociology 110 (4): 1009-40.
Clark, William A. V., and Mark Fossett. 2008. "Understanding the Social Context of the Schelling Segregation Model." Proceedings of the National Academy of Sciences of the United States of America 105 (11): 4109-14.
Craver, Carl F. 2007. Explaining the Brain: Mechanisms and the Mosaic Unity of Neuroscience. Oxford: Clarendon Press.
Epstein, Brian. 2012. "Agent-Based Modeling and the Fallacies of Individualism." In Models, Simulations and Representations, edited by P. Humphreys and C. Imbert, 115-44. New York: Routledge.
Epstein, Joshua M. 2006. Generative Social Science: Studies in Agent-Based Computational Modeling. Princeton: Princeton University Press.
Flache, Andreas, and Michael W. Macy. 2011. "Small Worlds and Cultural Polarization." Journal of Mathematical Sociology 35 (1): 146-76.
Fossett, Mark. 2006. "Ethnic Preferences, Social Distance Dynamics, and Residential Segregation: Theoretical Explorations Using Simulation Analysis." Journal of Mathematical Sociology 30 (3-4): 185-273.
Gilbert, Nigel, and Petra Ahrweiler. 2009. "The Epistemologies of Social Simulation Research." In EPOS 2006, LNAI 5466, edited by F. Squazzoni, 12-28. Berlin: Springer-Verlag.
Grüne-Yanoff, Till, and Paul Weirich. 2010. "The Philosophy and Epistemology of Simulation: A Review." Simulation & Gaming 41 (1): 20-50.
Hedström, Peter. 2005. Dissecting the Social: On the Principles of Analytical Sociology. Cambridge: Cambridge University Press.
Hedström, Peter, and Petri Ylikoski. 2010. "Causal Mechanisms in the Social Sciences." Annual Review of Sociology 36:49-67.
Author Biographies
Caterina Marchionni is an academy research fellow at the Finnish Centre of
Excellence in Philosophy of the Social Sciences, University of Helsinki. Her research
interests are in the philosophy and methodology of economics and the philosophy of
the social sciences.
Petri Ylikoski is a professor of science and technology studies and the deputy director of the Finnish Centre of Excellence in Philosophy of the Social Sciences, University of Helsinki. His research interests range from theoretical issues in philosophy of science to empirical case studies in science and technology studies.