
©Copyright JASSS


Michael Agar (2005)

Agents in Living Color: Towards Emic Agent-Based Models

Journal of Artificial Societies and Social Simulation vol. 8, no. 1
<https://www.jasss.org/8/1/4.html>

To cite articles published in the Journal of Artificial Societies and Social Simulation, reference the above information and include paragraph numbers if necessary

Received: 14-Jul-2004    Accepted: 19-Sep-2004    Published: 31-Jan-2005


* Abstract

The link between agent-based models and social research is a foundational concern of this journal. In this article, the anthropological concept of 'emic' or 'insider's view' is used to foreground the value of learning what differences make a difference to actual human agents before building a model of those agents and their world. The author's Netlogo model of the epidemiology of illicit drug use provides the example case. In the end, the emic does powerfully inform and constrain the model, but etic or 'outsider' views are required as well. At the same time, the way the model motivates these etic frameworks offers a strong test of theoretical relevance and a potential avenue towards theory integration.

Keywords:
Agent-Based Models, Ethnography, Substance Use, Emic/etic, Validity, Netlogo

* Introduction

1.1
My journey into complex worlds was first inspired by tests of ethnographic conclusions using agent-based models. But as I listened to colleagues present agent-based models at conferences, I was struck by how some "Artificial Societies" were more artificial than others. At one extreme, presenters offered models whose characteristics resembled no world I'd ever been in. At the other extreme, presenters knew whereof they spoke, from personal or research experience or both. And of course there were plenty of models somewhere in between.

1.2
In this article, I'd like to do two things. First of all, I'd like to explore this scale of model/world correspondence. With my background in ethnography, I'll push for the importance of knowing a world well before one models it, though in the conclusion I'll complicate the argument considerably. The second thing I'd like to do is present a model I've developed in Netlogo as an example. It's the latest version of earlier work (Agar and Wilson 2002), and it will help evaluate some of the issues. The code is available on the Netlogo community models page (http://ccl.northwestern.edu/netlogo/models/community/Drugtalk) so interested readers can examine the model for themselves.

* The Importance of Knowing the World

2.1
Ethnographers work for a long time to learn what matters to people. Why do they bother? Shouldn't it be obvious what matters and what doesn't? No, not always, maybe even not usually. Here's an example from the world of illicit drugs. Think of a drug scene you don't know much about, inhabited by people you haven't had much to do with. Now imagine you want to figure out why those people do something. What is it that matters to them?

2.2
Say they sell crack cocaine. You might describe them as criminals driven by sociopathic personalities. Or you might say that they're just businessmen and women trying to make a little money. It would turn out, after a little time in such worlds with such people, that you'd understand that yes, they were criminals, and yes, they were interested in the money. But the story of why a crack dealer was doing what he/she was doing would actually be a bit different than you originally thought. You'd learn that being a criminal, to most of those who went into the business, wasn't a major issue, since most of them already were criminals in one way or another and at any rate expected to be treated like one no matter what they did. In fact, many probably celebrated an "outlaw" identity.

2.3
And making money? Why not just rob a bank or steal jewelry from a wealthy home and earn even more? You would learn that crack dealing was among the most available opportunities in the underground economy, because one can begin with little capital or skills and operate in comparative security, as far as law enforcement is concerned, at least for a while. And you'd also learn that legitimate job opportunities for those who consider crack dealing, jobs that pay a living wage, have diminished considerably over the last two decades.

2.4
In other words, if you imagined why people went into crack dealing, you'd come up with any number of ideas, most of them running from not quite right to flat wrong. If you actually spent a little time with crack dealers, you'd have a much better idea of "what matters" than if you just imagined what the answer might be.

2.5
The advantage of ethnography, if done properly, is that data from a different world constrains the imagination of a person with no experience of it. It also helps those with too much experience, since they may have lost the ability to see the obvious. In the end a few parameters emerge that are critical to explain the social process that you're interested in, parameters that you might never have imagined existed before you explored that world.

* The Emic/Etic Distinction

3.1
What does all this mean, besides the fact that a model built without a first-hand sense for the world being modeled should be viewed with suspicion? It means that models and ethnographic research should be mutually reinforcing. Ethnographers, if not blinded by prior theory and ideology, should come out with an informed sense of the differences that make a difference from the point of view of people being modeled. They should, in the end, be able to tell outsiders what matters to insiders.

3.2
"Insider" and "outsider" recapitulate the old and somewhat misused "emic/etic" distinction in anthropology, taken originally from the linguist's "phonetic" and "phonemic" (See Pike 1967 for the original statement and Headland, Pike and Harris 1990 for a more recent summary of the issues). Imagine all the possible sounds that a human can make. Note that imagination here is constrained by human physiology. Those possible sounds constitute phonetics. But when people actually speak a particular language, they don't hear all possible sounds. Not all sounds make a difference. The sounds that are locally significant, as modeled by a linguist, are the phonemics of that language.

3.3
In Spanish, a speaker may hear "vaca" and "baca" as the same word. An English speaker would never confuse "van" and "ban." In English, a speaker won't hear the difference between "pin" and "spin," the former having a puff of air after the "p," the latter not. But in other languages, like Hindi, that puff of air would signal a different word.

3.4
"Emic," then, is about differences that make a difference from an insider's point of view. This article argues for emic models, models grounded in what matters in the world of those being modeled. But many models are etic, in the sense that they are built on an outsider's view of the people and the world being modeled. Etic models represent how the modeler thinks the world works; emic, how people who live in such worlds think things are. Etic always plays some role, an issue we return to in the conclusion, but emic should be the point of it all.

3.5
Now I want to show an example of an emic model, in part to stimulate more discussion around this issue. I want to show that an emic model results in a different sensitivity to program details and what they mean, because program details have a phenomenological analogue in the world of real-life agents. I want to show that it also raises issues about the translation of ethnographic analysis into computational form. And I want to show that emic models sharpen the question of what an agent-based model should include to serve practical goals.

* Youth and Heroin

4.1
Let me give an overview of the study and the model before I describe the program that runs it. The study focused on heroin use among suburban youth in the late 1990s (Agar and Reisinger 2000). It was ethnographic in its logic, though it ranged across several kinds of data. One kind of data was interviews with some youth we'd met in drug education programs. Most youth had been court-referred — which meant "do it or else" — usually because of an arrest for alcohol or marijuana possession.

4.2
Youth emphasized the credibility of information about illicit drugs. Most credible were their own experiences, those of their friends as told to them, or those of other youth as witnessed by them. The extent of experimentation, they explained, depended on these "consumer evaluations" and the "buzz" or stories they generated. The stories we heard, in this study and in other drug epidemics we looked at, were fascinating in the range of positive and negative outcomes they described. In fact, this study and all the others convinced us that the best material for drug prevention is available in the stories of users. Of course, so is material to encourage use, which is why user stories have little chance in a "war on drugs" world.

4.3
Low on the credibility list were the sources of information the adult world usually thinks of when they think of drug education — courses in school, warnings of parents, educational material produced by various levels of government. And other sources were relevant as well, MTV for instance often being mentioned in a favorable light, as far as presentation of credible illicit drug information goes.

4.4
Youth also described a dynamic for an epidemic. A new drug would first be tried by the adventurous. This would begin the circulation of stories through youth networks. For an interesting drug, positive "buzz" would initially outweigh negative and lead to more use. With time, with a drug like heroin that can produce physical dependence, negative buzz would increase and experimentation would decline.

4.5
So, we wondered, in the end might this "buzz," this flow of narratives among youth, be enough to generate an illicit drug epidemic? Would stories that evaluated a new illicit drug accelerate and/or brake use? And might the stories alone in fact be adequate to produce epidemic incidence curves?

4.6
First I experimented with this idea in Starlogo (Agar 2001). I then collaborated with Swarm programmer Dwight Wilson and built something we called Drugmart (Agar and Wilson 2002). Finally I started working on my own in the Netlogo language (Wilensky 1999), making several changes to the Drugmart model that will be described in a moment.

4.7
Common to all three programs is a simple idea born of what the youth taught us. The model gives each agent a risk and an attitude. "Risk" is the tendency to try something new and unknown. "Attitude" is attitude towards illicit drug use — favorable or unfavorable. Details of how this assignment is made will come in a moment. For now, just note that risk varies among agents and does not change. This corresponds to youth descriptions of different "kinds of kids" who would be more or less likely to experiment. Attitude is set at the same value initially, a rough approximation of community norms. But individual agent attitudes change with experience. Whether or not an agent uses a drug simply depends on whether or not risk is greater than attitude.
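The core decision rule just described can be sketched in a few lines. This is an illustrative Python rendering, not the NetLogo source; the class and the numeric values are assumptions for demonstration only:

```python
# Sketch of the use decision: an agent tries the drug exactly when its
# fixed risk exceeds its current attitude. (Illustrative Python; the
# actual model is written in NetLogo.)

class Agent:
    def __init__(self, risk, attitude):
        self.risk = risk          # fixed for the life of the agent
        self.attitude = attitude  # starts at the community norm, then changes

    def will_use(self):
        return self.risk > self.attitude

# Two agents in a community with a mid-range norm of 50:
adventurous = Agent(risk=80, attitude=50)
cautious = Agent(risk=20, attitude=50)
```

Because risk is fixed and attitude moves, all of the model's dynamics come from pushing attitude up or down.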

4.8
If an agent does use the new drug, it evaluates the experience, communicates with its social network, and offers them the drug as well. And all agents, all the time, tune in to the attitudes of the agents that surround them as they move about in the simple patchwork quilt of their world. Attitudes move up and down, individually and collectively, but eventually they head in an anti-drug direction, a bias built into the model that will also be described in a moment.

* Drugtalk: An Agent-Based Model

5.1
In Netlogo there are two core procedures that appear as "buttons" on the user interface. The first is Setup, which does the usual housekeeping chores of defining and initializing things. In Drugtalk, as I'll call this program to distinguish it from the earlier Starlogo and Swarm versions, three important features of Setup need to be described.

5.2
Each agent must have a risk and an attitude value at the beginning, since the comparison between the two determines whether or not an agent will try a drug. The attitude value is a parameter set by the programmer on what Netlogo calls a "slider" on the interface. Attitude values can range from 0 to 100, the higher the value, the more anti-drug. In Setup all agents are assigned the same attitude value right from the slider setting. This represents a general orientation to use of a particular new illicit drug among agents in that world, a "norm," if you will.

5.3
Risk, on the other hand, will be different for each agent. But, in contrast to attitude, an agent's risk will not change during the simulation. The assumption is that risk is a fairly stable and pervasive characteristic of an agent, that there are risk-takers and risk-avoiders, and that those proclivities hold up across different situations.

5.4
Rogers' most recent edition of Diffusion of Innovations (Rogers 1995) reports numerous studies across many different domains. A robust result of those studies is that people's willingness to take a chance on an innovation is normally distributed. With this body of work as background, then, risk values are assigned to agents using the random-normal function packaged with Netlogo.
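As a rough sketch of this setup step (illustrative Python standing in for NetLogo's random-normal; the mean and standard deviation shown here are assumptions, since the article does not report the values used):

```python
import random

# Assign each agent a normally distributed risk value, clipped to the
# model's 0-100 scale. Mean 50 and s.d. 15 are illustrative assumptions.
def assign_risk(mean=50, sd=15):
    r = random.gauss(mean, sd)
    return min(100, max(0, r))

# One risk value per agent in a 500-agent world; risk never changes later.
risks = [assign_risk() for _ in range(500)]
```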

5.5
The third feature of the Setup procedure signals an important change from earlier models. Barabasi (2002), in his summary of the work of many others from a variety of fields, argues that social networks show an inverse power law distribution because of the complex adaptive systems that generate them. In the earlier Swarm program, the model of networks was limited. We could set a parameter so that agents had no connections, or we could specify a particular number of connections for each agent. But the number of connections per agent didn't vary.

5.6
After simple trial and error, an exponent of 1.5 produced an inverse power law distribution for the 500-agent world. So a procedure called connect-the-turtles was written inside of Setup. Each agent has a list of other agents to whom it is connected. Initially it is empty. At minimum an agent must have at least two links. Beginning with that number, then, the procedure calculates (500 × (1 / 2^1.5)) to get the number of agents that will have two links, in this case about 177 out of 500.

5.7
Assigning agents to a network-list works like this: An agent that still has an empty list is selected at random. Then two additional agents are selected at random and put into that agent's list. After the total number of agents calculated by the formula for two links is reached, the number of links is increased to three, so the total according to the formula becomes (500 × (1 / 3^1.5)), or about 98. The procedure then draws new agents with empty lists at random again, and it fills their lists with three additional agents drawn at random. The process continues, the number of links increasing, until there are no more agents with an empty network-list.

5.8
For 500 agents, the rest of the distribution runs about like this: 63 agents with 4 links, 48 agents with 5 links, 34 agents with 6 links, 27 agents with 7 links, 22 agents with 8 links, 19 agents with 9 links, and the last 12 agents have 10 links.
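The calculation behind these counts can be sketched as follows (an assumed reconstruction in Python, not the original connect-the-turtles code; the counts reported above differ slightly from the raw formula because the last group absorbs all remaining agents and rounding choices vary):

```python
# Number of agents receiving k links, for N agents and exponent 1.5:
# roughly N / k^1.5, starting at k = 2, with leftover agents assigned
# the maximum number of links so that every agent is covered.

def link_counts(n_agents=500, exponent=1.5, min_links=2, max_links=10):
    counts = {}
    assigned = 0
    for k in range(min_links, max_links):
        c = int(n_agents / k ** exponent + 0.5)  # round half up
        counts[k] = c
        assigned += c
    counts[max_links] = n_agents - assigned  # remainder gets the most links
    return counts

counts = link_counts()  # e.g. counts[2] == 177, counts[4] == 63
```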

5.9
Notice that there is no restriction on which agents might be assigned to a network list. That is, the same agent might be selected at random more than once, or even several times, or perhaps never. And it doesn't matter what the selected agent's network list looks like, either. The resulting network, if graphed with number of agents on the Y axis and size of network-list on the X axis, will look like an inverse power law distribution. But the overall network, expressed as a digraph, will look very different from time to time.

5.10
The patch at the center of the lattice is changed to a red color to indicate the location where a drug — heroin in this case — first appears, and the model is ready to run.

5.11
The Go button is the second major control on the Netlogo interface. It is usually defined as a "forever" button, meaning that procedures defined within it will repeat indefinitely until it is pressed again or until an internal test halts it. What I will describe now is one iteration of the model.

5.12
First of all, each agent moves randomly. Then it runs check-the-buzz. If it has become an addict (to be defined in a moment), it doesn't bother to check, because it doesn't matter what other people are "saying" about heroin any more. Addiction means one has to have the drug above all else.

5.13
Check-the-buzz corresponds to what youth often told us, that you hear stories about drugs wherever you go, from people other than those in your personal network. A party, a club, an event, school, a part-time job--drug stories are often "tellable" in these settings, since they can be dramatic, surprising, something out of the ordinary, especially for a new drug.

5.14
How does the buzz get checked? Each agent keeps a record of how many positive and negative experiences it has had with the drug. The relevant procedures will be described in a moment. To check the buzz, an agent just adds up the total number of positive and negative experiences among the agents on its own patch or within a "radius" of two patches. The agent who is doing the checking then adjusts its attitude by these numbers, subtracting the positive total from its attitude to make it more likely to use and adding the negative total to make it less likely.

5.15
Here for the first time a bias is introduced based on Tversky and Kahneman's prospect theory (Tversky, Kahneman, and Slovic 1982). People, say the hundreds of studies that have now been done, want to minimize loss more than they want to maximize gain. Therefore, an agent will put more emphasis on the negative total than it will on the positive total. So the negative total is multiplied by two to represent this effect. Why two? I can think of no empirical justification.

5.16
The impact of checking the buzz is low when compared to procedures to come. This is as it should be, since hearing things from strangers you just happen to run across probably has less effect than a story from a trusted and long-term friend. However, one buzz-checking experience can have a major effect on attitude. If an addicted agent is also in buzz range, the agent who is checking will raise its attitude by 20. (An addict is defined by a certain number of uses--a parameter--set to five in this case). Twenty is a substantial change, given that attitude is always kept within the range of zero to 100; it can go no higher or lower.
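Putting the pieces of check-the-buzz together, a minimal sketch looks like this (illustrative Python, not the NetLogo procedure; the dictionary representation of nearby agents is an assumption):

```python
# check-the-buzz, schematically: sum positive and negative experience
# counts among nearby agents, weight the negative total double (prospect
# theory), add 20 per nearby addict, and clamp attitude to 0-100.

def clamp(x):
    return min(100, max(0, x))

def check_the_buzz(attitude, nearby):
    positives = sum(a["positive"] for a in nearby)
    negatives = sum(a["negative"] for a in nearby)
    addicts = sum(1 for a in nearby if a["addict"])
    return clamp(attitude - positives + 2 * negatives + 20 * addicts)

nearby = [
    {"positive": 3, "negative": 1, "addict": False},
    {"positive": 0, "negative": 2, "addict": True},
]
# Starting from attitude 50: 50 - 3 + 2*3 + 20 = 73
```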

5.17
There is some justification for this number — not this exact number, but rather for a number that represents a "big" difference. For one thing, youth reported such reactions — "I was experimenting, or thinking of trying it out, and then I ran into so-and-so who'd turned into a junkie, and it really turned me off." Other evidence comes from Musto's generational theory (1999) — after an illicit drug epidemic impacts one generation, the next generation will not use, since they've seen use go from pleasant early on to devastating for addicts, families and communities later on. Recent observations in U.S. cities, for example, suggest that African-American youth, having witnessed the early crack epidemic, will have nothing to do with use of the drug, though some might sell it as a lucrative niche in the underground economy. For many such youth now, a "drug-related problem" means dealing, not using (Reisinger 2004).

5.18
Next in the program sequence comes the moment of truth. If an agent is on the red patch that represents the entry of the drug into the world, it compares its risk to its attitude, and if risk is higher, it uses the drug.

5.19
Right after it uses, an agent evaluates the experience with a procedure called how-was-it, unless the agent is already an addict, in which case it doesn't matter any more, since it has to use. To understand how this procedure works, we first look at two more parameters, represented by sliders on the interface, the first called goodstuff?, the second, badstuff? Each one can vary between zero and 100. This number is meant to be a quality evaluation. For the moment, I ignore problems of individual variation in both physiology and context and assume there's some kind of average that makes sense. Overall, does the drug produce a pretty good or a pretty bad experience? Notice that both things can be true — in other words a user might have an experience that he/she would describe as both good and bad.

5.20
If I seek empirical justification for such numbers, I raise more questions than can be discussed in an article, questions that don't have clear answers. For example, there is a biochemical basis for drug effects, both in terms of likelihood of positive and negative effects and in terms of variability among users. Is a drug, more often than not, "pretty good" when it is first tried on the basis of what it could do for anyone? Or only among those with some kind of neurotransmitter glitch? And is a drug usually "pretty good" only for people who find themselves in a particular set of historical conditions, as Reisinger and I argue elsewhere (Agar and Reisinger 2001)? And does "pretty good" mean just "nice," or does it mean the "best drug ever," as some experimenters say about heroin?

5.21
I have no answers to these questions. What I do have are hundreds of stories, stories I've heard over the years as a drug ethnographer, that suggest that "most" initial heroin use is called a good experience in terms of effects and "some" initial use results in a bad experience. So numbers in the model are set at 70 for good and 30 for bad as a rough approximation, at least for heroin. This is a drug about which a book was entitled It's So Good Don't Even Try It Once.

5.22
Given this background on the goodstuff?/badstuff? parameters, how-was-it is simple. After an agent has used, it generates a random number between zero and one hundred, and if goodstuff? is larger than that number, the agent records a positive experience. Then it changes its attitude in a favorable direction — i.e. it decreases it — by an amount equal to ((1 / positive) × 20). Notice how the effect of the evaluation diminishes with increased use. The first positive experience reduces attitude by 20; the second, by 10; the third, by 6.67; and so on.

5.23
And then, independently of how the goodstuff? evaluation went, the agent does the same evaluation using badstuff? The difference here, of course, is that if badstuff? is larger than a random number between zero and 100, the value of the agent's attitude increases to make it less likely to use. And another difference, corresponding to the prospect theory principle that people are risk-aversive, as described earlier: This time the value changes by 40 instead of by 20. The impact of the experience diminishes with time, just as it did with "goodstuff?" — 40 the first time, 20 the second time, 13.33 the third time, and so on.
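The two evaluations can be sketched together (illustrative Python, not the NetLogo source; the agent-as-dictionary representation is an assumption):

```python
import random

# how-was-it, schematically: independent good and bad rolls against the
# goodstuff?/badstuff? quality parameters. Each hit moves attitude by an
# amount that shrinks with repetition: 20, 10, 6.67, ... for good and
# 40, 20, 13.33, ... for bad (the doubled weight reflects loss aversion).

def how_was_it(agent, goodstuff=70, badstuff=30):
    if goodstuff > random.uniform(0, 100):
        agent["positive"] += 1
        agent["attitude"] -= (1 / agent["positive"]) * 20
    if badstuff > random.uniform(0, 100):
        agent["negative"] += 1
        agent["attitude"] += (1 / agent["negative"]) * 40
    agent["attitude"] = min(100, max(0, agent["attitude"]))

agent = {"attitude": 50, "positive": 0, "negative": 0}
how_was_it(agent)  # one use, evaluated at the default 70/30 settings
```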

5.24
The justification for the diminishing impact of experience lies in intuitions about habit, that the first experience of anything is the most significant, with subsequent experiences showing a weakening effect. There is a literature that supports this assumption going back to old-fashioned behaviorist psychology that I take for granted here.

5.25
The next thing that all agents do, right after they use, is let their network know about the experience with tell-the-network. Recall that the model was set up with an inverse power law social network, that is, a few agents will have large networks and many agents will have small networks. An agent who has just used checks its network members. If a network member is already an addict, the agent who used has no influence on its attitude. But if the network member isn't an addict, a couple of things might happen.

5.26
First of all, if the agent who has just used is itself an addict, it will "turn off" the members of its network by adding 20 to their attitude. Recall that the same thing happened if an agent found an addicted agent nearby when it checked the buzz around it.

5.27
If the agent who just used is not an addict, then something different happens. For each agent in its network, the agent who just used "pulls" them in the direction of its attitude, whatever it might be. It does this by the simple mechanism of assigning the agent in its network the average of its own and that agent's attitude. The agent who used will have an attitude that reflects its history of positive and negative experiences from checking the buzz and evaluating its own use. Tell-the-network will move the agent in its list towards its current attitude that reflects those experiences.

5.28
Whatever the outcome of all this influencing, or lack thereof, the agent who just used always offers heroin to all the agents in its network, no matter what. If the agents in the network have a risk greater than their attitude, they use the heroin and evaluate the experience, as the original agent did, with the same procedure, how-was-it. But at that point the network member stops. In other words, the agent in the network does not, in turn, offer heroin to other agents in its own network. Perhaps it should, not immediately, in that particular tick of the program, but with some time lag.
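A minimal sketch of tell-the-network as just described (illustrative Python; the dictionary layout, and skipping addicted members entirely since per the text they are beyond influence, are assumptions):

```python
# tell-the-network, schematically: an addicted user "turns off" each
# non-addicted network member (+20 attitude); a non-addicted user pulls
# each member to the average of the two attitudes. Either way the drug
# is offered, and a member uses if its risk exceeds its attitude.

def clamp(x):
    return min(100, max(0, x))

def tell_the_network(user, members):
    for m in members:
        if m["addict"]:
            continue  # an addict's attitude can no longer be influenced
        if user["addict"]:
            m["attitude"] = clamp(m["attitude"] + 20)
        else:
            m["attitude"] = clamp((user["attitude"] + m["attitude"]) / 2)
        if m["risk"] > m["attitude"]:  # the offer is accepted
            m["used"] = True

user = {"addict": False, "attitude": 30}
members = [{"addict": False, "attitude": 70, "risk": 60, "used": False}]
tell_the_network(user, members)  # member attitude -> 50, and it uses
```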

5.29
Earlier I wrote of how this program was different from the program Wilson and I collaborated on, DrugMart. Another major difference should now be mentioned. In DrugMart, influence of agents on each other was evaluated by a look-up matrix, with several different types of users and addicts. In Drugtalk, the program just described, the matrix has been eliminated and replaced by Netlogo procedures that do different things. Check-the-buzz is about influences wherever an agent happens to be located in the model. How-was-it is about the impact of the experience of using on attitude, both good and bad, an impact that decreases with the number of times an agent has used. Tell-the-network is about influencing a personal network, both in terms of attitude and possible use. And recall that this takes us back to another major difference between DrugMart and Drugtalk, described earlier in this article, namely that social networks are set up using an inverse power law distribution.

5.30
One final part of the program, still in development: As the number of users increases, I want to include a production/distribution system that can respond and provide a larger supply. In the current version, a supply system grows according to a procedure called market-watch. The procedure tells each patch to keep track of the number of agents who step on it who have used the drug at least once and who still have a risk value greater than attitude. On each tick of the program, the patches check themselves and the values of the patches around them. If the number reaches a certain level, set by a slider switch called demand-response, the patch changes color to red and becomes a place where the drug is available to any agent who lands on it. At the moment the limit is 125 patches, and nothing has been programmed so far to eliminate patches once use declines.
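The idea behind market-watch can be sketched on a toy grid (illustrative Python; the original is a NetLogo patch procedure, and the exact neighborhood rule here is an assumption):

```python
# market-watch, schematically: each patch accumulates a count of at-risk
# users who have stepped on it; when the count summed over a patch and
# its eight neighbors reaches the demand-response threshold, the patch
# "turns red" and becomes a supply point.

def market_watch(grid, demand_response=100):
    """grid maps (x, y) -> count of at-risk users who stepped there."""
    supply = set()
    for (x, y) in grid:
        neighborhood = sum(
            grid.get((x + dx, y + dy), 0)
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
        )
        if neighborhood >= demand_response:
            supply.add((x, y))
    return supply

grid = {(0, 0): 60, (0, 1): 50, (5, 5): 10}
```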

* Drugtalk and the Space of Possible Epidemics

6.1
A full analysis using Drugtalk awaits future work with programmers who can take the slow pace of Netlogo, speed it up, and enable parameter sweeps to explore the spaces it generates. In the meantime, I ran the model ten times for a preliminary sense of what it would do, and I would like to report something about those runs here.

6.2
The most striking first impression is that Drugtalk, in the end, shows less variation in final number of addicts than previous versions did. Even more interesting, the final number clusters around a value of 10% of the agent population of 500. That number is in the ballpark of the 10-15% figure reported by the few epidemiologic studies done in community settings to learn how many addicts an epidemic would produce among groups with easy availability who are open to illicit drug use (DuPont 1974; Robins and Murphy 1967). The attitude slider setting of 50, the mid-range, was meant to reflect just that sort of "openness."

6.3
But why should the number of addicts cluster in a narrower range now when compared to earlier Starlogo and Swarm programs? I can at least say that there are clear differences between earlier programs and this one. The major differences, compared to the Swarm program for Drugmart, were described earlier. But there are also differences with earlier versions written in Netlogo, more subtle but clearly of major influence. In talks based on those earlier models, I emphasized how different outcomes could be in terms of number of addicts. In fact, I'm a little embarrassed about it now. So what changed? Mostly what changed was this: I wasn't careful enough, in earlier work, about the way addicts were influencing or being influenced by others. After straightening out several details, Drugtalk produced outcomes that varied less than previous Netlogo models, or previous Starlogo and Swarm versions for that matter, in terms of final number of addicts.

6.4
But plenty of variation remains, in number of users for example. And among the graphs of the ten runs, the variation among the "story lines" of the numerical narratives is important as well. Let me illustrate with a few examples, graphs from four of the runs, exported from Netlogo as "csv" files, then reconstructed using Excel. All runs were made with parameter settings as described earlier in this article — Goodstuff? is set to 70, Badstuff? to 30, and Attitude to 50. Addiction occurs after five uses. The market is set to respond at 100, which means after that many agents have stepped on a patch and its neighbors, where the agents have used and are still at-risk, then the patch turns red and makes heroin available to any subsequent agent who lands on it.

6.5
In the graphs to come, three lines are shown. The green line, starting high and dropping low, represents the number of agents who are "at-risk," that is, agents who have risk values larger than their attitude values. The red line, starting low and usually ending up in the mid-range of the y-axis, shows the number of agents who have used at least once, what the epidemiologists call "lifetime prevalence." The blue line, starting low and usually ending up around the first horizontal line that signals the value of "50," represents the number of agents who are addicts. I stopped the run when the at-risk line approached the addict line. I also checked the "Attitude" graph, not shown here, to ensure that nearly all agents by then had strong anti-drug attitudes. Notice that all examples to come have the same Y-axis, 300 agents out of the total of 500, but their X-axes vary in the number of program ticks represented.

6.6
First let's look at two examples that produced fewer than 50 addicts. Call them "A" and "B."

Figure: Example A

6.7
Example A shows the kind of case our youth subjects and our study of epidemiologic data led us to expect. Perhaps the curves are more compressed than real data, though it is hard to specify the relationship between "ticks" and actual historical time in a plausible way. At first, there's an increase of at-risk agents because of initial positive reports of experimenters. Use increases dramatically, probably because some early experimenters were also network hubs. With some time lag, there is a sharp increase in addiction. By now the at-risk numbers are in a nosedive, which should happen given the extra weight on negative experience and the increasing presence of addicts. The story is dramatic and rapid, quickly producing a situation where the world settles into an "endemic" situation of about 40 addicts.

6.8
Example B, shown below, produces about the same number of addicts, but the story is different — less dramatic and spread over a longer time range. The number of agents who used at least once is quite different — about 180 in the first case as opposed to about 130 here. The use and addict lines rise more gradually, though there are a couple of early bumps in the use line and a slight bump in the addict line later, probably when additional red squares start to appear so that more heroin is available.

Example B

6.9
Now let's look at two examples where more than 50 addicts were produced, Example Y and Example Z.

Example Y

6.10
Example Y, like Example A, shows a rapid increase in use and addiction early on, this in spite of an early decline in the at-risk curve. With the rapid onset of the epidemic, the at-risk numbers plummet, as expected given the nature of the model. Example Y looks in many ways like a more explosive version of Example A.

6.11
In Example Z, below, the story also opens dramatically, including an initial increase in the at-risk agents. While this epidemic reaches similar numbers of users and addicts, it slows after the initial burst and then grows more slowly than Example Y, as the number of at-risk agents also declines much more slowly. Example Z produces the largest number of users, around 220.

Example Z

6.12
Drugtalk shows how similar results can occur in a variety of different ways. The problem with such models, given the centuries-old social research bias that prediction is the be-all and end-all, is that it's impossible to say why things happened in a certain way absent a "tick-by-tick" analysis of every agent in the program.

6.13
Variation might depend on whether or not early use is by a network hub, or it could depend on the contingencies of positive vs negative experience, or it could depend on the location of an agent vis-à-vis the kinds of experiences represented around it, or it could depend on when and where new drug patches appear, or it could depend on other contingencies, singly or in combination.

6.14
At the same time — this is the point of nonlinear dynamic systems — the model does let us look at the space of possible outcomes. Even with this small sample of what the space looks like, several implications for drug policy are apparent.

* Potential Applications of Drugtalk

7.1
The first implication has to do with what epidemiologists call "surveillance," a term whose connotations call up images of George Orwell's novel 1984, a reaction not much reduced if the more neutral term "monitoring" is used.

7.2
But the fact remains: if a society wants to prevent a serious illicit drug epidemic, it needs to know what it is trying to prevent. The problem is that current drug monitoring systems use indicators generated by institutions — emergency rooms, treatment centers, and the like. Such indicators are subject to vagaries in their collection and only appear in analyzed form after an epidemic is well underway. The media, another useful indicator, also react after an epidemic has already become widely noticed. The examples from Drugtalk show how out-of-date such information would be, certainly for the dramatic cases where use and addiction shoot up rapidly shortly after the new drug appears.

7.3
We need a monitoring system that is more tuned to immediate conditions and much broader in its coverage than current systems. We think such a system is possible, based on our own experiments in our project, but we are the first to admit that this is only a promissory note. Such a system could be based on ethnographic logic and multiple sources, such as front-line counselors and outreach workers. When we talk about such a system, it doesn't look like normal science, and it ignores official experts in favor of community practitioners without educational credentials. The likelihood of ever getting such a system funded is low, to put it mildly.

7.4
If we had such a system, and linked it with knowledge of the space that shows how an epidemic might develop, we could know in the short term what sort of path an epidemic was following and plan our responses accordingly.

7.5
Such knowledge would allow an efficient allocation of scarce intervention resources. Caulkins and his colleagues have already argued that interventions should shift depending on the stage of an epidemic (Behrens, Caulkins, Tragler, and Feichtinger 2000). Drugtalk allows a more fine-tuned argument: Interventions should shift depending on particular variations in the possible paths of an epidemic.

7.6
Education is a first kind of intervention. In the graphs shown earlier, there isn't much lead time between first appearance of a dangerous drug and use/addiction take-off, not surprising given the favorable settings of the parameters. In these cases, early warning would have to come from other sources, such as monitoring of drug production and distribution systems. Future runs of the model under different parameter values will probably show slower development for other kinds of parameter settings. In those cases, rapid education efforts would make sense.

7.7
But two of the examples, called "B" and "Z", show that a rapid education response might help. Recall that after an initial burst of use and addiction, the trend lines leveled off, with the at-risk agents declining slowly and use and addiction rising gradually. The implication here is that, even after an epidemic is signaled by a monitoring system, education aimed at the still substantial at-risk but non-using population would be useful.

7.8
I'm avoiding the question here of what kind of education. Recall the earlier discussion: a massive propaganda effort on the part of ignorant adults probably doesn't help. A peer-oriented effort using materials derived from and relevant to their worlds would probably do a better job.

7.9
What about early intervention? The term means working with users who already have experimented to keep them from becoming addicts. In our work with a youth program in Baltimore County, the "addiction" issue almost always turned into a lively topic of discussion. If you tell youth that if they ever experiment with heroin, even once, they're doomed, that message will be contradicted by their experiences and observations. If you tell youth what life is like on the other side of physical dependence, the message will correspond with their observations of addicts in their own worlds. This kind of "early intervention" simply amplifies and accelerates the braking effect described in ethnographic research and represented in Drugtalk.

7.10
If an epidemic resembles the examples where the addiction curve shoots skyward right away, early intervention would not help much. But if there is a rising use curve, and either a delayed addiction curve or a gradually rising one, early intervention might prevent many youth from crossing the line into physical dependence.

7.11
Finally, once addiction has occurred, treatment is a priority, the point made by Caulkins and his colleagues in their work. Drugtalk highlights some additional details. First of all, a treatment response should not imply that all the other interventions stop. In fact, the different epidemic trajectories in the examples suggest changing allocations over time. Second, the different trajectories call for different kinds of treatment response. At one extreme, a community can expect a massive and sudden appearance of addicts seeking treatment with some time lag after use begins. At the other extreme, the flow of people seeking treatment will be lower in volume, but longer in duration.

7.12
The moral of the story must be taken as tentative, but with any luck also as promising. Coupling a good rapid-feedback monitoring system with an understanding of the space of possible epidemics might yield a more humane and effective — not to mention efficient — harm reduction system to handle new and changing waves of dangerous drug use. The idea at least deserves a try.

* "Emic" and "Etic" After the Model

8.1
My goal in this article was to show how an agent-based model could grow out of ethnographic research and produce some useful results. But other issues raised in the introduction require some attention as well. Perhaps the best way to do that is to revisit the "emic" and "etic" concepts.

8.2
Much of what makes the model work is "emic" in two senses:
  1. Parameters and mechanisms derive in large part from features of experimentation with a new drug identified by those involved in the process; and,
  2. Characteristics of opiate addiction learned during ethnographic research over the years also played a role.

8.3
Recall that the usual use of the terms "emic" and "etic" signals a change between two different points of view, that of "insiders" and that of "outsiders." While useful at a general level, a closer look reveals many conceptual potholes. For example, at any given moment, any two people are to some extent outsiders with reference to each other (solipsism), and the same two people are to some extent also insiders with reference to each other (universals). That philosophical quandary is beyond the scope of this article, but it does show that the distinction between emic and etic isn't as straightforward as it seems.

8.4
What I would like to do is revisit some assumptions made to get from the ethnographic data to the Netlogo program. Were those assumptions all "emic," in the sense of reflecting what the youth told us? No they weren't, at least not directly. Two assumptions were based on scholarly traditions with elaborate pedigrees. Diffusion of innovation supported the random-normal assignment of risk values to agents. Prospect theory supported a stronger impact of negative experience when compared with the positive. Along the way, I smuggled in a third assumption that the "first time" for anything was the most powerful, the effects of subsequent "times" diminishing rapidly. Not very emic, are they, these assumptions?
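These three assumptions — random-normal risk from diffusion of innovations, heavier weight on negative experience from prospect theory, and the diminishing impact of later "times" — have a simple computational shape. The sketch below is a hypothetical Python rendering under those assumptions; the specific constants (mean, standard deviation, loss-aversion multiplier, decay rate) are illustrative choices, not values taken from the model:

```python
import random

def initial_risk(mean=50, sd=15):
    """Diffusion-of-innovations assumption: risk values are assigned
    random-normally across the agent population (constants illustrative)."""
    return random.gauss(mean, sd)

def experience_weight(n_prior_exposures, negative, loss_aversion=2.0, decay=0.5):
    """Prospect-theory assumption: a negative experience weighs more than a
    positive one, and each successive exposure counts less than the last,
    so the 'first time' is the most powerful."""
    base = loss_aversion if negative else 1.0
    return base * (decay ** n_prior_exposures)
```

Only the qualitative ordering matters for the argument: negatives outweigh positives at every exposure, and both kinds of impact shrink as exposures accumulate.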

8.5
Here's a more elaborate version of emic and etic that this article suggests. The simple "insider/outsider" version as used here still holds: If you're going to model what human agents do in some corner of the world, you can adopt theoretical frameworks that lay out dozens or even hundreds of variables that might make a difference. Or you can explore that corner of the world and listen and learn, from the point of view of living breathing agents, what it is that actually makes a difference from their point of view.

8.6
But a major qualification is necessary here, one that is well known by ethnographers but important to foreground for readers who have no such background. "Emic" never means "everyone told me that this is exactly the problem and this is how to solve it and they are all exactly right." "Emic" means a difference that makes a difference in those agents' world, even if they are not aware of it. Emic goes beyond the consciousness of any individual agent, including any individual ethnographic agent.

8.7
"Emic" does not mean that youth have to say, "Negative experience has a more powerful effect on me than positive experience," before a modeler can use that idea. It does mean that stories youth tell reflect the more powerful impact of negative experiences when compared to the "fun" positive stories. Another example: "Emic" does not mean that youth said, "Kids today represent a wide range of variation as far as risk goes." It does mean that their stories reflect some "crazy" kids who'll try anything and some "cautious" kids who won't try anything, and many other kinds of kids in between.

8.8
The important part of "emic" is that you, the outsider/modeler, learn some key differences from them, the living agents. The significance of those key differences may well not have been known to the outsider before the research. Possibly the outsider knew of them, but didn't really understand how they played the roles that they did.

8.9
But then modeling the differences means you need a computational translation. Some youth are more risk prone than others. Bad things repel more than good things attract. Is there a sane way to tell a computer what these propositions mean?

8.10
Enter the "etic" at this point. In the case of risk and the impact of experience, there are, in fact, etic research traditions that focus on exactly those concepts, namely diffusion of innovation and prospect theory. Etic they may be, but in this case they intersect with emic differences and let us model them. Notice several things:
  1. Selection of etic/theory was directed by an important emic difference rather than being selected a priori.
  2. The etic research tradition had an elaborate pedigree with numerous studies in different content and geographical areas.
  3. So elaborate was the etic tradition, and so robust were its key results, that one can argue that it plausibly represents universal aspects of the human situation.

8.11
Ethnography can be defined as making sense of human differences in terms of human similarities. The differences — the emic — are always in the foreground, since they are the primary focus of any ethnographic study. But similarities — the etic — are featured as well, at least enough to connect the differences in the agents' world with the audience's way of understanding how the world works. Differences are the problem; similarities are the solution. Similarities are where the etic helps out.

8.12
This is a version of "emic/etic" that makes sense for ethnographic research and for modeling its results. Emic differences direct a search for etic theory with characteristics like those listed in #1 through #3 above. The use of theory here is "eclectic," not a good word in traditional science. But notice that this "eclecticism" is actually about selection of well-supported theories with clear relevance to at least one particular emic case.

8.13
In this article I pulled in theories and used them piecemeal. But with development of an ethnography/agent-based modeling tradition, we might notice that particular theories are repeatedly useful and appreciate how those recurrently useful theories work together. We might begin to build an etic structure, a theory hybrid, backed by relevance to many different cases. We might, in short, move towards something previously available to the wise and the insane — an actual theory of how the social world works that helps explain across many kinds of differences, a theory shaped by robust emic relevance rather than proclaimed universality (often false) and predictive power (that often doesn't work).

* Conclusion

9.1
Drugtalk has its charms in elucidating the role of emic research in agent-based modeling, but it also has its limits. The time has come to round up the usual hedges.

9.2
First of all, the focus was only on how it is that experimentation with a new illegal drug might or might not take off into the classic S-curve of an epidemic. Clearly many other questions about an epidemic could be asked, researched, and modeled as well — for example, recovery and relapse of the addicted, changes as subsequent waves of new drugs appear, results of public health or police intervention, and so on. Those other questions, important as they are, are not asked here.

9.3
Second, Drugtalk needs further evaluation, and in fact that work is ongoing. Together with colleagues from The Redfish Group, we are conducting Design of Experiments analyses to evaluate model parameters (Agar et al. 2005). That evaluation foregrounds similarities between Drugtalk and models from marketing research, a parallel also noted in recent drug research (Agar and Reisinger 2004). The purpose here was only to demonstrate the potential of a phase-space analysis for policy.

9.4
Third, Drugtalk is certainly not the first attempt to model drug epidemics (see Epstein 1997 for an example). But the agent-based modeling framework differs from the usual systems model based on differential equations. It allows a focus on social interaction process rather than the mathematical properties of the curve, and it yields a space of possible system trajectories rather than the solution to an equation. Links between emic research and model characteristics are easier to forge with this kind of modeling, and the emphasis shifts from how the curve works to how the world works such that curves are or are not produced.

9.5
For all its shortcomings, Drugtalk shows how building models from emic features of the world being modeled is a powerful strategy. At the same time it adds validity to the model and enables tests of ethnographic explanations. And, in this case, the model lends itself to practical application, at least in principle. If nothing else, "emic" reminds us that models of the human world have to have clear connections with what the humans who live there are actually thinking and doing.

* References

AGAR, Michael. 2001. "Another Complex Step: A Model of Heroin Experimentation." Field Methods 13:353-369.

AGAR, Michael H. and Heather Schacht Reisinger. 2000. "Explaining Drug Use Trends: Suburban Heroin Use in Baltimore County." Pp. 143-165 in Illicit Drugs: Patterns of Use -- Patterns of Response, edited by A. Springer and A. Uhl. Innsbruck: STUDIENVerlag.

AGAR, Michael H. and Heather Schacht Reisinger. 2001. "Open Marginality: Heroin Epidemics in Different Groups." Journal of Drug Issues 31:729-746.

AGAR, Michael H. and Heather Schacht Reisinger. 2004. "Ecstasy: Commodity or Disease?" Journal of Psychoactive Drugs 36:253-264.

AGAR, Michael and Dwight Wilson. 2002. Drugmart: Heroin Epidemics as Complex Adaptive Systems. Complexity 7(5):44-52.

AGAR, Michael H., Stephen Guerin, Robert Holmes, and Dan Kunkel. 2005. "Epidemiology or Marketing? The Paradigm-Busting Use of Complexity and Ethnography." In Proceedings of the Agent 2004 Conference, Oct. 7-9, 2004, University of Chicago, forthcoming.

BARABASI, Albert-Laszlo. 2002. Linked: The New Science of Networks. Cambridge MA: Perseus Books.

BEHRENS, Doris A., Jonathan P. Caulkins, Gernot Tragler, and Gustav Feichtinger. 2000. "Optimal Control of Drug Epidemics: Prevent and Treat--But Not at the Same Time?" Management Science 46:333-347.

DUPONT, Robert L. 1974. "The Rise and Fall of Heroin Addiction." Natural History 83:66-71.

EPSTEIN, Joshua M. 1997. Nonlinear Dynamics, Mathematical Biology, and Social Science. Boston: Addison-Wesley.

HEADLAND, Thomas N., Kenneth L. Pike, and Marvin Harris. 1990. Emics and Etics: The Insider/Outsider Debate. Newbury Park: Sage.

MUSTO, David F. 1999. The American Disease: Origins of Narcotic Control. New York: Oxford University Press.

PIKE, Kenneth L. 1967. Language in Relation to a Unified Theory of Human Behavior. The Hague: Mouton.

REISINGER, Heather Schacht. 2004. "Young and Thuggin: The Unresolved Life of a Crack Dealer." Anthropology, American University, Washington DC.

ROBINS, Lee N. and George E. Murphy. 1967. "Drug Use in a Normal Population of Young Negro Men." American Journal of Public Health 57:1580-1596.

ROGERS, Everett M. 1995. Diffusion of Innovations. New York: The Free Press.

TVERSKY, Amos, Daniel Kahneman, and Paul Slovic. 1982. Judgment Under Uncertainty: Heuristics and Biases. Cambridge: Cambridge University Press.

WILENSKY, Uri. 1999. "NetLogo." Evanston IL: Center for Connected Learning and Computer Based Modeling, Northwestern University, http://ccl.northwestern.edu/netlogo/.

----

Return to Contents of this issue

© Copyright Journal of Artificial Societies and Social Simulation, [2005]