LevelSpace - A NetLogo Extension For ML-ABM
Abstract: Multi-Level Agent-Based Modeling (ML-ABM) has been receiving increasing attention in recent years.
In this paper we present LevelSpace, an extension that allows modelers to easily build ML-ABMs in the popu-
lar and widely used NetLogo language. We present the LevelSpace framework and its associated programming
primitives. Based on three common use-cases of ML-ABM — coupling of heterogeneous models, dynamic adaptation of detail, and cross-level interaction — we show how easy it is to build ML-ABMs with LevelSpace. We argue
that it is important to have a unified conceptual language for describing LevelSpace models, and present six di-
mensions along which models can differ, and discuss how these can be combined into a variety of ML-ABM
types in LevelSpace. Finally, we argue that future work should explore the relationships between these six di-
mensions, and how different configurations of them might be more or less appropriate for particular modeling
tasks.
Introduction
1.1 Agent-based models are typically conceptualized and coded as self-contained, separate units, each affording
an in-depth ‘thinking and analysis space’ of one particular phenomenon. They are modeled as the interactions
between two levels: a micro- (or agent-) and a macro- (aggregate-) level (e.g. Bar-Yam 1997; Epstein & Axtell 1996;
Mitchell 2009; Wilensky & Rand 2015). This approach has been useful across many disciplines and domains, but
has proven particularly useful in the social sciences, where the micro-level often represents the states and behaviors of
social organizations or individual people (e.g. Epstein 1999, 2006; Gilbert & Terna 2000). While there are many good reasons
for restricting models to two levels, doing so comes at a cost: by reducing a phenomenon to two levels,
we necessarily either exclude processes and entities that do not fit at these levels, or we are forced to abstract
them into proto-agents or other low-fidelity simulations.
1.2 For instance, in the classic Segregation (Schelling 1969) model, we are restricted to two levels: houses, repre-
sented by space, and individuals, who make decisions about whether to move or not. However, having only
these two levels of simulated processes cuts out things that lie outside of the model, like the impact that seg-
regation might have on neighborhood schools, or things that are inside the model but at a more detailed level
like the decision process “within” each agent of where to move, and why.
1.3 In this paper, we present LevelSpace, a NetLogo extension that provides a simple set of NetLogo primitives for
connecting models and building multi-level or multi-model models. We provide examples of how modelers
can use LevelSpace to elaborate on processes within existing models, or to connect phenomena, by letting
theoretically infinitely large systems of models communicate and interact with each other. Finally, we present
six dimensions for describing different kinds of inter-model interactions that LevelSpace facilitates and discuss
how these six dimensions help us better describe NetLogo models built with LevelSpace.
2.2 Languages and packages typically take one of three different approaches to addressing these problems. First,
some frameworks achieve multi-level capabilities by allowing agents to contain or be defined by lower-level
agents. For example, SWARM (Minar et al. 1996) was not only one of the first ABM frameworks, it also has (lim-
ited) multi-level capabilities. Agents in SWARM can be defined by swarms of lower-level agents. However, due
to the limitations of inter-level interactions in SWARM, Morvan (2012) does not regard it as a true ML-ABM tool.
Another example is ML-RULES (Maus et al. 2011), a rule-based, multi-level framework focused on cell biology. It
uses a chemical-reaction-equation-like language to describe interactions between “species” and allows species
to contain micro solutions of other species. ML-Rules can be used to address cross-level interaction, but not
heterogeneous model coupling or dynamic adaptation of detail. Finally, GAMA (Drogoul et al. 2013) is a sophis-
ticated and actively developed ABM framework that has multi-level capabilities. In GAMA, every agent has a
defined geometry that forms a space that other agents can reside in. Moreover, information and agents can
pass between levels. This allows GAMA to address cross-level interaction and, to some degree, dynamic adap-
tation of level of detail, but not heterogeneous model coupling.
2.3 Second, other frameworks allow multiple models, spaces, or environments, that each contain agents, to be
linked together. This approach offers significant flexibility (and is the one LevelSpace adopts). For example,
SPARK (Soloyev et al. 2010) is a multi-scale ABM framework designed for biomedical applications. The frame-
work consists of agents, links between agents, spaces which contain the agents, data layers which overlay
spaces with quantitative values, and an observer which is responsible for, e.g., scheduling. A model may have
multiple spaces of varying scales and topologies. Furthermore, agents and information can be passed between
levels. As such, SPARK can be used to address cross-level interaction in the biomedical domain. Scerri et al.
(2010) introduce an architecture for designing ABMs in a modular fashion such that each module handles a par-
ticular aspect of the system. The modules take the form of ABMs that are synchronized with a “Time Manager”,
with interactions between the models handled by a “Conflict Resolver” that ensures divergences in the mod-
ules are handled gracefully. As such, the architecture is particularly focused on handling heterogeneous model
coupling, though it can also handle cross-level interactions that fit within this paradigm. Finally, Morvan et al.
(2010) adapt the IRM4S meta-model (Michel 2007) to handle multiple levels. This adaptation, called IRM4MLS,
describes agents, levels that the agents inhabit, influences that the agents exhibit, and reactions the agents
have to influences. Morvan & Kubera (2017) have since refined IRM4MLS into the new SIMILAR meta-model.
Both IRM4MLS and SIMILAR are targeted at solving cross-level interaction and heterogeneous model coupling
problems.
2.4 Finally, some frameworks provide multi-level capabilities by imposing a strict, multi-level structure. For in-
stance, one of the earliest architectures, GEAMAS (Marcenac & Giroux 1998) broke models into three different
layers. The highest level defines how agents are organized in the model via a “network of acquaintances”, as
well as the input parameters and output measures of the model. The mid-level includes “high-level, cognitive
agents”, which act as groups of low-level agents that may be identified as wholes. The third level includes sim-
ple, reactive agents. Thus, GEAMAS was targeted specifically at the problem of cross-level interaction rather
than the other two problems.
LevelSpace Primitives
4.1 As LevelSpace is built on NetLogo’s Extension API and Controlling API, it deploys only a few powerful primitives
used for keeping track of models, and for running commands and reporters in them. As mentioned, a core
design principle of LevelSpace is to preserve as much of the existing NetLogo syntax and semantics as possible,
both in the core NetLogo, and in other extensions like the xw-extension 2 . These primitives therefore bear a
strong resemblance to existing NetLogo primitives that serve analogous modeling functions for simple agents.
4.2 In this section we will list and describe the most commonly used primitives that are part of the LevelSpace
extension. For an exhaustive list of primitives, please see the GitHub wiki page 3 . We will give a brief example
or two, discuss how and when to use each primitive, and discuss advantages and disadvantages of the design
of the primitive where relevant.
ls:models
4.3 In order to keep track of child models, the IDs of all child models contained within a model can be reported
with the ls:models primitive. It returns a list of all IDs in order of creation, and LevelSpace keeps
track of models and removes discarded ones, so that ls:models always returns only currently open models.
As mentioned, using lists helps integrate the execution and processing of LevelSpace code, because it allows
easy integration with existing list-related primitives, like map, reduce, sentence, and foreach. A model’s child
models can only be retrieved using ls:models from within that model itself. In other words, if Model A has a
collection of child models, and Model B has a collection of child models, then the returned value of ls:models
will depend on which model you run this reporter in. This is similar to how the returned value of reporting any
agent value in NetLogo depends on which agent you report it from. This also means that if a modeler wishes
to retrieve grandchild models, they would need to run ls:models in the context of their child model. (We will
show how to report values below.)
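As a quick illustration of this list integration, all currently open child models can be driven with foreach (a minimal sketch; it assumes each child model defines a go procedure):

```netlogo
;; ask every currently open child model, by ID, to run its go procedure
foreach ls:models [ id -> ls:ask id [ go ] ]
```

Of course, ls:ask can take the whole list directly; foreach becomes useful when each model needs individual treatment.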
ls:ask
4.4 Like the rest of the NetLogo language, LevelSpace is polite: we ask child models to do things. ls:ask takes
a model ID (a number), or a list of model IDs, and a block of code. So

ls:ask 0 [ create-turtles 100 ]

would ask the model with ID 0 to create 100 turtles, and

ls:ask [ 0 1 2 ] [ create-turtles 100 ]

would ask the models with IDs 0, 1 and 2 to do so. We can also pass arguments into ls:ask using NetLogo’s
standard lambda-syntax. This helps us dynamically change the values on which we base the LevelSpace model
interactions, e.g.
( ls:ask 0 [ [ n colors ] -> create-turtles n [ set color one-of colors ] ] 50 turtle-colors )

would ask model 0 to create 50 turtles and then set each turtle’s color to one of those provided in turtle-colors.
Importantly, ls:ask only asks models that are directly available to a model, i.e. its own child models. Consequently,
if a model wishes to ask a grandchild model to do something, we would need to explicitly pass the ask
request “down the family tree”, like this:
ls:ask 0 [
  ;; within the context of this child model, 0 refers to its own child model
  ;; with ID 0
  ls:ask 0 [
    do-something
  ]
]
ls:of / ls:report
4.5 There are two ways of reporting data from models. The first, ls:of, uses ‘infix’ syntax but does not take arguments.
The second, ls:report, uses ‘prefix’ syntax but allows for the use of arguments. Just like ls:ask, ls:of and
ls:report take either a single model ID, or a list of model IDs. So,
[ count turtles ] ls:of 0
will report the number of turtles in model 0. This primitive was designed to work like NetLogo’s built-in primitive
of and uses the same syntax. Similarly, if we wanted to report the number of turtles in three models, 0, 1, and
2, we could do:
[ count turtles ] ls:of [ 0 1 2 ]
4.6 This would return a list of numbers, representing the count of turtles in the models with IDs 0, 1, and 2, respec-
tively. The reported list will be ordered and aligned with the input list.
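Because the reported list is aligned with the input list, model IDs and results can be paired up directly (a minimal sketch; the actual counts depend on the open models):

```netlogo
;; pair each child model's ID with its turtle count, e.g. [[0 100] [1 42] [2 7]]
show ( map [ [ id n ] -> list id n ] ls:models ( [ count turtles ] ls:of ls:models ) )
```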
4.7 Another way of reporting values is to use the ls:report primitive. This works analogously to NetLogo’s built-in
primitive for running strings as code, called run-result, and allows for the use of arguments:
( ls:report 0 [ a-color -> count turtles with [ color = a-color ] ] blue )
4.8 Note that because arguments are optional, it is necessary to enclose uses of ls:report in parentheses when
providing arguments.
4.9 When passing in a list of model IDs, ls:of and ls:report return results in the order of the list, so that you can
have many models run a reporter in parallel and still be able to match up models and their respective results.
This might seem like a departure from the regular of-primitive in NetLogo, where
[ xcor ] of turtles
will return the x-coordinates of all turtles in a random order each time it is evaluated.
4.10 However, of actually returns values in the same order as it is given turtles. What causes the randomization of
order is the fact that turtles is an agentset, which always returns agents in a random order. In other words, the
difference in behavior between of and ls:of / ls:report is due not to a difference in the implementation
of the LevelSpace primitives, but to the fact that we store model IDs in lists, which are ordered.
4.11 ls:of and ls:report can be used to retrieve data from grandchildren by embedding them inside themselves.
Both put a reporter in the context of a particular child model, just like regular of puts a reporter in the context
of an agent or agentset. This means that any reporter will be evaluated for that child model.
;; this reports ‘count turtles’ from the top-level model’s child no. 0’s
;; child no. 4
[ [ count turtles ] ls:of 4 ] ls:of 0
ls:let
4.13 ls:let allows the parent model to store information in a variable that may then be accessed by a child model.
It is similar to the let primitive in NetLogo. For example:
ls:let number-of-turtles 100
ls:ask 0 [ create-turtles number-of-turtles ]
will first assign 100 to the LevelSpace temporary variable called number-of-turtles, and then pass that to the
child model. Consequently, the code above will create 100 turtles in the child model with ID 0.
4.14 ls:let variables can be used exactly like any other local variable in the child model’s context. However, the parent
model may not change (or even read) the value of an ls:let variable once it is set. This is intentional: ls:let
variables should be used only to pass information from parent to child, and not for any computation in
the parent. With ls:let, sharing information with child models becomes just as natural as using local variables
to pass information from one agent to another.
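Because an ls:let is evaluated in the asking agent’s context, each agent can pass its own state down to a child model. In this sketch, chase-model and setup-wolf are hypothetical names for illustration, not part of LevelSpace or the models discussed here:

```netlogo
ask wolves [
  ;; each wolf binds its own energy value and hands it to a
  ;; (hypothetical) child model via a (hypothetical) procedure
  ls:let wolf-energy energy
  ls:ask chase-model [ setup-wolf wolf-energy ]
]
```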
4.15 As a stylistic side note, ls:let can replace all uses of arguments, and often results in slightly more readable
code, whereas using arguments can be less verbose.
4.16 Importantly, an ls:let exists in the context of its parent. This means that, unlike a normal NetLogo let-variable, which
exists inside the scope of a full block, the value of an ls:let cannot be passed down to grandchildren
without using ls:let again. I.e.
ls:let num-turtles count turtles
ls:ask 0 [
  ls:ask 0 [
    create-turtles num-turtles ;; WILL ERROR
  ]
]
will fail because num-turtles only exists in the top-level model. Rather, the top-level model would need to ask
its child model to rebind the ls:let in its own context, and then pass that into its own child model — the
grandchild of the original model:
ls:let num-turtles count turtles
ls:ask 0 [
  ;; bind the value to a ls:let in the context of the child model
  ls:let child-num-turtles num-turtles
  ls:ask 0 [
    create-turtles child-num-turtles ;; this works
  ]
]
ls:with
4.17 Often, we want to filter models, similarly to how we filter agents in NetLogo with the built-in with-primitive. For
instance, we may want to ask only those of our models that satisfy a particular condition — e.g. only those with
more wolves than sheep — to do something. For example, suppose we wanted to call the GO-procedure in only
models where wolves outnumber sheep. The following code combines ls : ask and ls : models with ls : with to
achieve this:
ls:ask ls:models ls:with [ count wolves > count sheep ] [ go ]
ls:create-interactive-models / ls:create-models
4.18 These primitives open child models. Recall from above that LevelSpace allows for two different kinds of child
model: interactive models that present a full interface, and lightweight (or ‘headless’) models that, by default,
present only their view. Both primitives take the number of models to open and the path to a .nlogo file, and
optionally an anonymous command that is run with each new model’s ID; this command can, for instance, assign
the IDs to named variables. This not only helps keep track of models but can also be used as a device for writing
more easily readable code, using illuminating variable names for child models instead of model ID numbers.
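A minimal sketch of this naming pattern, following the callback form used later in this paper (the global variable and procedure names here are ours):

```netlogo
globals [ wolf-sheep-predation ]

to setup-children
  ;; open one headless child model and remember its ID under a readable name
  ( ls:create-models 1 "Wolf Sheep Predation.nlogo" [ id ->
    set wolf-sheep-predation id ] )
end
```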
ls:close
4.19 This primitive simply takes a number or a list of numbers and closes the models associated with those IDs. If
the closed models run LevelSpace and have child models of their own, these will also be closed recursively.
All associated objects will be garbage collected by the JVM when needed. Consequently, this primitive
is important for memory management. As mentioned, ls:close automatically removes a closed child model from its
parent’s ls:models.
ls:show/hide
4.20 These primitives also take a number or a list of numbers and show or hide the view of the corresponding models.
For lightweight/headless child models, this window contains only the view widget, while for interactive/GUI
child models it contains the entire interface. When hidden, the child models keep running in the background.
Hiding a model saves all drawing calls in NetLogo and automatically sets the model’s speed slider to maximum.
Consequently, ls:hide can be used to make a LevelSpace model run much faster in cases where viewing
the model during runtime is not necessary.
ls:reset
4.21 This primitive clears LevelSpace, closing down all existing child models (and, if necessary, any descendant mod-
els), and readies the LevelSpace extension for a new model run. This also resets the “serial number” of child
models, so the next child model created by the model will have ID 0. In our experience, this command will
typically be used in the setup procedure of the top-level parent model.
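A typical top-level setup procedure therefore looks something like the following sketch (the model file name and the number of child models are illustrative):

```netlogo
to setup
  ls:reset   ;; close child models left over from previous runs; IDs restart at 0
  clear-all
  ls:create-models 3 "Wolf Sheep Predation.nlogo"
  ls:ask ls:models [ setup ]
  reset-ticks
end
```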
ls:name-of / ls:path-of
4.22 When dealing with many different types of models, it is often useful to be able to identify a model on-the-fly.
ls:name-of and ls:path-of return, respectively, the name of the .nlogo file, and the full path of the .nlogo file
that was used when loading a particular model. So

ls:create-models 1 "Wolf Sheep Predation.nlogo"
show ls:name-of 0

will show the name of the Wolf Sheep Predation model file.
ls:uses-level-space?
4.23 Calling an ls:* primitive in a model that does not have the LevelSpace extension loaded will result in a runtime
error. It is therefore often useful to be able to check whether a child model has the extension loaded. This
primitive takes a model ID and returns true if the child model is running LevelSpace. This, too, can be combined
with filter to, e.g., only pass a LevelSpace-specific command or reporter to those child models that are able to
run it.
;; only report count turtles from child models with LevelSpace
[ count turtles ] ls:of filter [ id -> ls:uses-level-space? id ] ls:models
5.3 An important difference between LevelSpace and regular NetLogo is that the numbers reported from ls:models
do not refer to the child model object, but simply to the index of that child model in its own parent’s list of child
models. It would therefore not be possible to report the number up to a grandparent and run code directly from
the grandparent. I.e., this code

;; reports a list of lists of child id numbers relative to the top-level
;; model's children
let grand-children [ ls:models ] ls:of ls:models
ls:ask grand-children [ do-something ] ;; DOES NOT WORK

will not work because the numbers that are stored in grand-children refer to model IDs relative to their
parent, and not relative to the grandparent. One would therefore need to ask each child to ask its children to
do-something (as we will show later in Section 5.7).
Concurrency in LevelSpace
5.4 LevelSpace extends NetLogo’s approach to scheduling behavior by viewing models as agents. In NetLogo, if a
modeler wants a set of agents to walk forward and then show how many neighboring turtles they have, the
modeler will write
ask turtles [ forward 1 ]
ask turtles [ show count turtles-on neighbors ]
5.5 In this case, all turtles would first move forward, and then show how many neighbors they have. No turtle would
report their neighbor count until they have all finished moving. However, if we write:

ask turtles [
  forward 1
  show count turtles-on neighbors
]

each turtle would move forward and immediately show how many neighboring turtles it has. This would
create a situation in which turtles count each other as neighbors while being out of sync. This is not possible in
LevelSpace because a query about other models would need to go through the parent, and the child is not able
to ask the parent to run a reporter while it itself is running code. Instead, if the modeler needs child models to
react to each other’s information or states, the modeler would need to break down the code into chunks, i.e.
;; change the state of child models
ls:ask ls:models [ do-something ]
;; retrieve the information and bind it to 'foo'
ls:let foo [ bar ] ls:of ls:models
;; pass the information into child models to act upon
ls:ask ls:models [ do-something-with foo ]
5.6 While this may be an edge case, it is an example of something that is different between single-model NetLogo
and LevelSpace and deserves mention.
5.7 Just like with a regular ask, LevelSpace executes all commands within a given code block for each model before
moving on to the next model. This includes embedded ls:ask calls to child models. So
;; ask all child models, one at a time,
ls:ask ls:models [
  ;; to ask THEIR child models, one at a time,
  ls:ask ls:models [
    ;; to do something
    do-something
  ]
]
5.8 This means that the user has great control over, but also full responsibility for, scheduling commands correctly. This
requires thinking carefully about how to cascade code down through the family tree of models when using
ls:ask or ls:of to manipulate or retrieve information from grandchild models.
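For instance, a top-level go procedure that keeps child models in lockstep might be phased as in this sketch (report-state and react-to are hypothetical child-model procedures used for illustration):

```netlogo
to go
  ;; phase 1: every child model steps once before any model reacts
  ls:ask ls:models [ go ]
  ;; phase 2: collect the resulting states in one ordered list
  ls:let states [ report-state ] ls:of ls:models
  ;; phase 3: let every child model react to the collected states
  ls:ask ls:models [ react-to states ]
  tick
end
```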
Parallelism
5.9 LevelSpace’s ls:ask and ls:of are designed to run in parallel and will automatically create as many threads as it
can in order to execute the code as quickly as possible for each call.
ls:create-models 100 "amodel.nlogo"
ls:ask ls:models [ create-turtles 1000 ]
;; the following code will not run until all models are done
show [ count turtles ] ls:of ls:models
5.10 When working with embedded ls:ask requests like our example in Section 5.7,

;; We first ask all of our children
ls:ask ls:models [
  ;; to ask all their children
  ls:ask ls:models [
    do-something
  ]
  ;; any code here will only execute once all grandchildren of the top-level
  ;; model have completed their do-something procedure
]

each child will, in parallel, ask its children, in parallel, to do-something. Only when all the grandchildren are
done will the following code be executed.
5.11 For most uses, this will not make any difference to the user, other than running LevelSpace code faster.
6.3 The menu (Figure 2) has three items. The first one lets the modeler open the code of an existing NetLogo model.
This is often useful, either to make edits to a child model, or to be able to easily access the model to look up the
names of procedures, variables, etc. The second item contains a list of all models that have been opened by
LevelSpace since it was first loaded using the extensions keyword. The third allows a user to create a blank
model, which is often useful if a modeler is building a new LevelSpace model from scratch.
6.4 Any of these options will open a new tab that includes the code for this model (Figure 3). However, it is not
possible to make changes to the Interface of child-models. If a modeler needs to make changes to the interface,
they would need to open another NetLogo application instance.
6.6 It also makes it possible to disambiguate namespace collisions between parent and child. Unlike in single-
model NetLogo, where a turtle variable and a global or link variable cannot share names, it is possible in Lev-
elSpace to have namespace collisions between a parent model and its child models — in fact it is impossible not
to have namespace collisions because of built-in variables and reporters. For instance, all models in NetLogo
have a reporter called turtles, but the code block makes it clear which model’s turtles we refer to:
show turtles                    ; this is the parent's turtles
show [ turtles ] ls:of a-model  ; this is the child's turtles
6.7 However, because the code block is not evaluated until runtime, this also means that it is possible to refer to
variable names that do not exist in the child model, without getting a compilation error in the NetLogo IDE as
one would if coding with a single NetLogo model. As a consequence, if foo does not exist in the child model,
LevelSpace throws an ExtensionError but only during runtime (Figure 4).
Figure 4: A runtime Extension exception that would usually occur at compile time with regular NetLogo.
• Animals in WSP — both wolves and sheep — contribute greenhouse gases in the CC model via expiration
and flatulence when they metabolize food.
7.3 Setting up these relationships is easy with LevelSpace. The following commented code for our new parent
model shows how we import the LevelSpace extension (ls), define global variables wolf−sheep−predation and
climate−change, for the IDs of each of the two child models, and create a SETUP-procedure for the parent
model. Note that no changes are required to either of the WSP or CC models for them to interact as child models
in this new ML-ABM:
extensions [ ls ] ;; load the LevelSpace extension

;; define a variable for each of our models
globals [ climate-change wolf-sheep-predation ]

to setup
  ls:reset ;; close child models from any previous runs
  clear-all
  ;; open CC and assign it to climate-change
  ( ls:create-interactive-models 1 "Climate Change.nlogo" [ new-model ->
    set climate-change new-model ] )
  ;; open WSP and assign it to wolf-sheep-predation
  ( ls:create-interactive-models 1 "Wolf Sheep Predation.nlogo" [ new-model ->
    set wolf-sheep-predation new-model ] )
  ls:ask climate-change [
    no-display
    setup
    repeat 20 [ add-co2 ]
    repeat 3 [ add-cloud ]
    ;; the CC model requires a bit more initialization, as it doesn't reach an
    ;; equilibrium temperature until a fairly large number of ticks have passed,
    ;; so we run it for 10,000. We turn off display so that the model does not
    ;; have to draw the view for the state of its world at each of these 10,000
    ;; time-steps.
    no-display
    repeat 10000 [ go ]
    display
  ]
  ;; turn on 'grass' in the WSP model, and run setup
  ls:ask wolf-sheep-predation [ set model-version "sheep-wolves-grass" setup ]
end
7.4 In order to program these interactions between the two systems, we can write a GO-procedure in the parent
model that uses LevelSpace’s primitives. We
• make the global variable grass−regrowth−time in the WSP model vary as a function of the temperature
global variable of the CC model,
• ask the wolves and sheep with energy greater than 25 in the WSP model to call the add−co2 procedure
in the CC model (interpreting ‘co2’ in the CC model now to represent all greenhouse gases, including
methane),
• ask the grass in WSP to call the remove−co2 procedure in CC, and finally
• dynamically change the albedo global variable in the CC model as a function of the proportion of patches
in WSP that have grass.
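The four interactions above can be sketched as a GO-procedure in the parent model like the following. The LevelSpace plumbing mirrors the description above, but the specific scaling formulas (and the counting shortcuts used to drive add-co2/remove-co2 calls) are our illustrative assumptions, not the original models’ exact code:

```netlogo
to go
  ;; 1. grass regrowth time in WSP follows CC's temperature (assumed linear mapping)
  ls:let current-temp [ temperature ] ls:of climate-change
  ls:ask wolf-sheep-predation [ set grass-regrowth-time 10 + current-temp ]
  ;; 2. well-fed animals emit greenhouse gases into CC
  ls:let n-emitters [ count turtles with [ energy > 25 ] ] ls:of wolf-sheep-predation
  ls:ask climate-change [ repeat n-emitters [ add-co2 ] ]
  ;; 3. grass absorbs CO2 (assumed one remove-co2 call per hundred grassy patches)
  ls:let n-grass [ count patches with [ pcolor = green ] ] ls:of wolf-sheep-predation
  ls:ask climate-change [ repeat round ( n-grass / 100 ) [ remove-co2 ] ]
  ;; 4. CC's albedo tracks the proportion of grassy patches in WSP
  ls:let grass-fraction
    [ count patches with [ pcolor = green ] / count patches ] ls:of wolf-sheep-predation
  ls:ask climate-change [ set albedo grass-fraction ]
  ;; finally, step both child models
  ls:ask ( list climate-change wolf-sheep-predation ) [ go ]
  tick
end
```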
7.6 This syntax should look familiar to NetLogo users and modelers, and it should also be legible to agent-based
modelers who are not deeply familiar with NetLogo.
7.7 As mentioned, WSP and CC in their original form serve particular purposes, and they support inquiry into dif-
ferent questions on their own than when they are linked to form a ML-ABM. As such, these models can address
questions about how ecosystems and climate change mutually affect each other, and, in the tradition of but-
terfly effects, how individual sources or sinks of CO2 might ultimately affect the life of entire ecosystems.
7.8 Importantly, when we begin to connect individual agent-based models into a multi-model model, they raise
new questions and support us in exploring new collections of emergent phenomena. From a modeling-as-
methodology perspective, this is important because it potentially allows us to validate and verify individual
models before connecting them into a larger model-system, thus allowing us to add both breadth and depth to
our modeling endeavors.
7.9 Second, it does so without the disadvantage of rigidly bounding the modeled phenomena: if we (or anyone else)
should want to expand on our ML-ABM by adding new models or by changing the ways in which the models
interact, LevelSpace makes this easy. We believe these are important improvements to modeling both as a
scientific practice and as a reflective process. LevelSpace thus broadens the scope of the possible conversation
that the scientific community can have around a model (or model system), and it allows modelers to easily
expand on the otherwise more rigid boundaries that models draw around a particular phenomenon.
LevelSpace Example 2: (Dynamic) adaptation of the level of detail — Wolf Chases Sheep
7.10 When a wolf and sheep meet in the Wolf Sheep Predation model, the wolf simply eats the sheep. Of course,
this is not a realistic portrayal of the predation process. For some purposes we might want to model the chase
between a wolf and a sheep every time an encounter like this takes place. However, doing so within the original
WSP model would put the temporal and the spatial scales of the model at odds with each other: the chase
between two individuals takes place at a finer granularity of both time and space than the rest of the model.
7.11 With LevelSpace, one possible approach is to build a new model that explores the chase at an appropriate
temporal and spatial scale, and to invoke this model as a child model whenever a wolf-sheep interaction occurs
in the WSP model. To illustrate, we wrote a simple model that we call the Wolf-Chases-Sheep (WCS) model 6
(Figure 5). Whenever a wolf and a sheep meet each other, the Wolf-Chases-Sheep model is initialized with a
wolf and a sheep that have the respective headings and relative positions of the wolf and the sheep in the WSP
model. The Wolf-Chases-Sheep model simulates the chase process and runs until either the sheep has managed
to escape by reaching the edge of the WCS-model’s space, or the wolf has caught the sheep. Using LevelSpace,
we could resolve whether the wolf catches the sheep, or the sheep escapes in the child model like this 7 :
ask wolves [
  if any? sheep-here [
    ;; wolves find potential prey
    let prey one-of sheep-here
    ;; open the chase model
    ls:create-interactive-models 1 "Chase Model.nlogo"
    ;; get the latest model ID and assign it to named variable for better
Figure 6: Left: WSP in the middle of a run with a sheep highlighted. Right: The highlighted sheep’s neural net-
work from the same moment. Red links are excitatory, blue inhibitory. Thickness indicates weight. Opaque
links have been activated by current input. The nodes on the left are the 9 inputs corresponding to the pres-
ence of each agent type in each direction. The nodes on the right are the three outputs corresponding to turning
left, going straight, or turning right.
7.14 Specifically, animals sense nine conditions: the presence of any of three stimuli (grass, sheep, or wolves) in any
of three directions (to the left, to the right, or ahead of them). Given these stimuli, the animal's neural network
outputs the probability that each decision (turning left, turning right, or not turning) will be the best one.
Finally, as in the original model, the animal always moves forward and eats anything edible that it encounters.
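The nine-inputs-to-three-outputs mapping described above can be sketched as a single-layer network with a softmax over the three turning decisions. This is a minimal illustrative sketch only; the actual topology, weights, and training of the Brain child model are defined in the NetLogo model, and all names here are ours:

```python
import math
import random

def softmax(xs):
    # Subtract the max before exponentiating, for numerical stability.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def forward(inputs, weights, biases):
    # inputs: 9 binary senses (grass/sheep/wolves x left/ahead/right).
    # weights: three lists of nine weights, one list per output node.
    # Returns the probability that each action (turn left, go straight,
    # turn right) is the best decision.
    activations = [
        sum(w * x for w, x in zip(ws, inputs)) + b
        for ws, b in zip(weights, biases)
    ]
    return softmax(activations)

# Example: random small weights; only "grass ahead" is sensed.
random.seed(0)
weights = [[random.uniform(-1, 1) for _ in range(9)] for _ in range(3)]
biases = [0.0, 0.0, 0.0]
senses = [0, 1, 0, 0, 0, 0, 0, 0, 0]
probs = forward(senses, weights, biases)
assert len(probs) == 3 and abs(sum(probs) - 1.0) < 1e-9
```

Whatever the internal structure of the network, the contract with the parent model is the same: a list of nine 0/1 values in, a list of three probabilities out.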
7.15 To set this up, we first define what information each animal will send to its neural network, and how it will deal
with the output it gets back. We do this by defining, in our setup procedure, a list of anonymous reporters that
will be used as inputs for the neural network, and a list of anonymous commands specifying the behavior to
perform when the corresponding output is activated. Each reporter in the input list detects the presence of a
certain agent type in a certain direction from the agent. This could be done with a series of Booleans instead,
but this example shows how to combine map and anonymous reporters with LevelSpace primitives to write
succinct, though also more advanced and potentially less readable, code.
set inputs (list
  ;; Is there any grass to my left? (BINARY turns the boolean from the ANY?
  ;; to a 0 or 1 for use by the neural network)
  [-> binary any? (in-vision-at patches (- fov / 3)) with [pcolor = green]]
  ;; Is there any grass in front of me?
  [-> binary any? (in-vision-at patches 0) with [pcolor = green]]
  ;; Is there any grass to my right?
  [-> binary any? (in-vision-at patches (fov / 3)) with [pcolor = green]]
  ;; Are there any sheep to my left?
  [-> binary any? other (in-vision-at sheep (- fov / 3))]
  ;; Are there any sheep in front of me?
  [-> binary any? other (in-vision-at sheep 0)]
  ;; Are there any sheep to my right?
  [-> binary any? other (in-vision-at sheep (fov / 3))]
  ;; Are there any wolves to my left?
  [-> binary any? other (in-vision-at wolves (- fov / 3))]
  ;; Are there any wolves in front of me?
  [-> binary any? other (in-vision-at wolves 0)]
  ;; Are there any wolves to my right?
  [-> binary any? other (in-vision-at wolves (fov / 3))]
)
set outputs (list
  [-> lt 30]  ;; Turn left
  [->]        ;; Don't turn
  [-> rt 30]  ;; Turn right
)
7.16 Next, we define a reporter that captures the information defined by the inputs list, sends it to the neural net-
work with LevelSpace, and reports the results. The apply-reals reporter in the neural network model sets the
inputs of the neural network to a list of numbers, propagates those values through the network, and reports
the resulting output as a list of probabilities. brain is a turtles-own variable that contains a reference to that
agent's neural network model ID:
to-report sense
  report apply-brain (map runresult inputs)
end

to-report apply-brain [in]
  ls:let inputs in
  report [apply-reals inputs] ls:of brain
end
7.17 Finally, we tell the animals to act on the results. Actions are chosen randomly based on the probabilities re-
turned by the neural network:
to go-brain
  let r random-float 1
  let i -1
  ;; Get the probability outputs from the agent's brain model
  let probs sense
  ;; Randomly select an action based on those probabilities:
  while [r > 0 and i < length probs] [
    set i i + 1
    set r r - item i probs
  ]
  run item i outputs  ; Run the corresponding action
  fd 1
end
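The while-loop in go-brain is roulette-wheel selection over the probability list: it walks the cumulative distribution until the random draw is used up. The same logic in a hypothetical Python sketch (function names are ours; we also bound the index explicitly as a guard against floating-point rounding when the probabilities sum to 1):

```python
import random

def choose_action(probs, rng=random.random):
    # Draw r in [0, 1) and subtract probabilities until r crosses zero;
    # the index where it crosses is the chosen action. The i bound
    # guards against rounding error in the final bucket.
    r = rng()
    i = -1
    while r > 0 and i < len(probs) - 1:
        i += 1
        r -= probs[i]
    return i

# With probabilities [0.2, 0.5, 0.3], a draw of 0.6 lands in the second
# bucket (index 1): 0.6 - 0.2 = 0.4, then 0.4 - 0.5 <= 0.
assert choose_action([0.2, 0.5, 0.3], rng=lambda: 0.6) == 1
```

Because actions are sampled rather than taken greedily, an animal whose network is only mildly confident will still occasionally explore the other two turns.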
7.18 Thus, LevelSpace allows modelers to implement sophisticated cognitive models for their agents as indepen-
dent, modular, agent-based models. This example demonstrates how one can encode the perception of an
agent, send that information to the agent’s cognitive model, and use the result to guide the behavior of the
agent. Furthermore, it allows high levels of heterogeneity among agents, since each agent can be given its
own, unique cognitive model. As seen in the neural network example, LevelSpace allows researchers to con-
struct agent-based versions of AI techniques to drive the decision making of their agents. However, LevelSpace
also allows for the creation of novel methods of implementing agent cognition. We provide here a briefer exam-
ple without code (though it is available on GitHub 9 ), in which each wolf and sheep uses a child model to run
short, simplified simulations of its local environment to make decisions (Figure 7).
7.19 This example works as follows: each agent has a vision variable that determines its ability to sense its local en-
vironment. Each tick, every animal sends its knowledge of the state of its local environment to its own cognitive
model. This includes the locations of nearby grass, the locations and headings of the surrounding wolves and
sheep, and its own energy level. The animal then runs several short micro-simulations in which agents perform
random actions (as in the original Wolf-Sheep Predation), keeping track of how well it does in each run (based
on change in energy, and whether the animal died). The animal then selects the action that led to the best
outcomes, on average. The rules of the cognitive model are similar to those of the original WSP, but have been
modified to remove factors that the animals would not know about or would not base decisions on. For instance,
they do not know the energy levels of the other agents, nor do they know about grass regrowth, so these factors
are not included in the cognitive model. For full details, see Head & Wilensky (2018).
7.20 This method grants agents powerful and flexible decision making in a natural way. For instance, it will generally
result in a sheep reliably moving to the closest patch of grass. However, if a wolf that is likely to eat the sheep is
sitting on that closest patch, the method may result in the sheep running away instead. Then again, if the sheep
will starve unless it eats immediately, it may decide that the grass is worth the risk of being eaten by the wolf.
This sort of decision making emerges from
the simple method of agents running micro-simulations to evaluate the consequences of actions, based on the
locally available information, and the heterogeneous states of other agents. Furthermore, in addition to allowing
researchers to limit what agents know, the method also gives researchers two "intelligence" parameters for
controlling how well their agents behave: the number of micro-simulations, and the duration of each micro-
simulation, i.e. how many ticks into the future each agent tries to predict. The more micro-simulations the
agents run, the more likely they are to find the optimal decision. The longer they run the micro-simulations, the
more they will take the longer-term consequences of their actions into account.
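The decision procedure just described can be reduced, under heavy simplification, to best-average-outcome selection over random rollouts. In this hypothetical Python sketch, `simulate` stands in for one short run of the stripped-down child WSP model, and all names and the toy scoring function are ours, not part of LevelSpace or the published model:

```python
import random

def decide(state, actions, simulate, n_sims=10, horizon=5):
    # For each candidate first action, run several short micro-simulations
    # in which subsequent behavior is random, score each run (e.g. by
    # change in energy, with death heavily penalized), and pick the
    # action with the best average outcome. n_sims and horizon are the
    # two "intelligence" parameters discussed above.
    best_action, best_score = None, float("-inf")
    for action in actions:
        total = 0.0
        for _ in range(n_sims):
            total += simulate(state, action, horizon)
        avg = total / n_sims
        if avg > best_score:
            best_action, best_score = action, avg
    return best_action

# Toy stand-in for the child model: heading toward grass scores best.
def toy_simulate(state, action, horizon):
    base = {"left": 1.0, "straight": 3.0, "right": 2.0}[action]
    return base + random.uniform(-0.5, 0.5)

random.seed(1)
assert decide({}, ["left", "straight", "right"], toy_simulate) == "straight"
```

Raising `n_sims` reduces the noise in each action's average score; raising `horizon` lets delayed consequences (a lurking wolf, imminent starvation) show up in that score.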
7.21 This process is broadly similar to using Monte Carlo sampling to approximate solutions to partially observable
Markov decision processes (Lovejoy 1991). However, in this case, the Markov chain is defined by an agent-
based model. This offers a natural way to define what would otherwise be very complex processes. It also
allows researchers to directly inspect what is happening in an agent's cognitive model and understand why
the agent acts the way it does. This also demonstrates the method's advantage over, for instance, using neural
networks to guide agents' actions as in paragraphs 80-85. Neural networks are notoriously difficult to interpret.
In the micro-simulation approach, the situations that the agent considers when making its decisions can be
directly observed and meaningfully interpreted.
Persistence
8.6 In the Ecosystems and Climate Change example, both the WSP and the CC model are opened at the beginning
of the model run, and both remain open throughout the run. In contrast, in the Wolf Chases Sheep example,
a model is opened, used to resolve the sheep chase, and then discarded. Finally, in the Wolf Sheep Brain
example, models are opened and closed with the births and deaths of wolves and sheep. We see the dis-
tinction between these three kinds of persistence as valuable for describing and classifying LevelSpace
systems: system-level persistence, in which models are opened at the beginning of the model run and closed
at the end; agent-level persistence, in which each child model shares its lifespan with a particular agent; and
event-level persistence, in which child models are opened in response to a particular event or state of the world
and closed when that event has been resolved.
Time synchronicity
8.7 The fourth dimension describes how time is modeled between two model instances. In the Ecosystems and
Climate Change example, the WSP and the CC model run in tandem from the moment they are set up. Similarly,
in our Agent Cognition examples, each brain model runs in tandem with WSP. In contrast to these two, in the
Wolf Chases Sheep example, the WSP model pauses until the outcome of the wolf-sheep chase has been
evaluated. These differences point to an important distinction among model relationships: those that are
synchronous, and those that are asynchronous.
Hierarchy
8.9 In the Wolf Chases Sheep and Agent Cognition examples, the chase and brain models are sub-systems of the
larger phenomenon in the other model. For instance, the wolf and the sheep in the Wolf Chases Sheep model
also exist in the Wolf Sheep Predation model. Similarly, each of the Brain models "belongs" to a wolf or a
sheep in the WSP model. In contrast, in the Ecosystems and Climate Change example, neither model can be
said to be a sub-system of the other. The distinction between hierarchical and non-hierarchical model
relationships is important, and is resolved by asking whether the phenomenon in each of the models can be
seen as respectively a sub- or super-system of the other 11 .
Model-Ratios
8.10 In our first two examples (Ecosystems and Climate Change, and Wolf Chases Sheep) we need just one instance
of each template model at a time: respectively, one instance of WSP and one of CC, and one instance of WSP
and one of WCS 12 . However, in our third example, we have one instance of WSP and many instances of the
Brain model. Consequently, distinguishing the number of models in a relationship is an important part of a
typology of multi-level model systems. The model ratio is often implied by the hierarchy of which it is part: an
agent-level hierarchical model will most likely have a one-to-many model ratio. However, the one-to-many
ratio appears not just at the model level but at the agent level too: consider a LevelSpace system of an industry,
where agents represent organizations that open and close departments in an ad-hoc manner, and where each
department is represented by a child model. This ML-ABM system would contain a one-to-many model
relationship between an organization and its departments at the agent level. Conversely, consider a model
system in which we model hundreds of cities, each producing greenhouse gases. These could all contribute
greenhouse gases to one climate change model. In that case, there would be a many-to-one model ratio
between the city models and the climate change model.
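As an illustration, the dimensions discussed in this section can be captured in a small record type and used to classify the three running examples. This is a hypothetical sketch in Python; the field names and labels are ours, not part of LevelSpace:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelRelationship:
    persistence: str    # "system", "agent", or "event"
    synchronous: bool   # do the two models run in tandem?
    hierarchical: bool  # is one phenomenon a sub-system of the other?
    ratio: str          # e.g. "one-to-one", "one-to-many", "many-to-one"

# The three running examples, classified along these dimensions.
ecosystems_and_climate = ModelRelationship("system", True, False, "one-to-one")
wolf_chases_sheep      = ModelRelationship("event", False, True, "one-to-one")
wolf_sheep_brains      = ModelRelationship("agent", True, True, "one-to-many")
```

Writing the examples out this way makes the combinatorics of the typology concrete: each additional dimension multiplies the number of conceivable LevelSpace system types, only some of which have been explored here.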
9.2 Some combinations may be rare, and other combinations may even be conceptually contradictory; it is not
obvious to us which are which. When writing this paper, the authors discussed some combinations that we
thought would be unlikely. One of these was the homogeneous, agent-persistent dyadic relationship: a model
that opens a copy of itself whose persistence is determined by a single agent. At first this seemed unlikely, but
as the latter of the two Agent Cognition examples shows, we did in fact use an agent-persistent instance of the
WSP model itself to run micro-simulations. To us, this illustrated that our inability to envision this case in
advance was simply a shortcoming of our imagination and of our intuition about which of these combinations
are viable and useful.
Notes
1. Note that the following is not meant to be a comprehensive list of existing work, but rather to give a general idea of what approaches have been tried. See Morvan (2012) for a more comprehensive review of approaches, languages and frameworks.
2. https://github.com/CRESS-Surrey/eXtraWidgets.
3. https://github.com/netlogo/levelspace.
4. https://github.com/NetLogo/Population-Dynamics-and-Climate-Change.
5. Albedo is a measure of the amount of light that is reflected vs the amount of light that is absorbed by a particular surface.
6. https://github.com/NetLogo/Wolf-Chase-Sheep-JASSS-Version.
7. We open and close the model here only to illustrate where one could do it. In the GitHub version of this model, we open the model only once and simply "reuse" the same model. This makes the model run considerably faster.
8. https://github.com/NetLogo/Sheep-with-brains.
9. https://github.com/NetLogo/Wolf-Sheep-Predation-Micro-Sims.
10. We mean 'affordance' in the design sense of the word: that it allows, and even invites, a particular set of actions or manipulations while making other actions more difficult or even impossible.
11. To avoid confusion, we want to emphasize that considerations about hierarchy do not refer to the LevelSpace "family hierarchy" of models, i.e. parents, children, grand-children, etc. Rather, hierarchy here speaks conceptually to the modeled phenomena. For instance, one model could load instances of two different models without either of those child models being hierarchically related to their parent in a conceptually meaningful way. Similarly, those two child models can be in a sub-system/super-system relationship to each other, even though neither is the other's parent or child model in the LevelSpace hierarchy.
12. We emphasize 'need' for an important reason: there are many ways of designing a model system, and many of them will be behaviorally equivalent. In constructing these typological dimensions, the question we have to ask is what is needed in order to preserve the necessary information. In the case of the Wolf Chases Sheep model, it would be possible to load a model for each "meeting" between a wolf and a sheep at the same time, thus changing the model ratio to one WSP to many WCS models. However, because each event is resolved entirely before the next event begins, no information needs to be preserved in the state of the model, and we therefore only need one WCS model for the system to function.
Berland, M. & Wilensky, U. (2006). Constructionist collaborative engineering: Results from an implementation
of PVBOT. Annual Meeting of the American Educational Research Association 2006, San Francisco, CA, USA
Blikstein, P. & Wilensky, U. (2007). Bifocal modeling: A framework for combining computer modeling, robotics
and real-world sensing. Annual Meeting of the American Educational Research Association 2007, Chicago, IL,
USA
Blikstein, P. & Wilensky, U. (2008). Implementing multi-agent modeling in the classroom: Lessons from em-
pirical studies in undergraduate engineering education. Proceedings of the International Conference of the
Learning Sciences 2008, Utrecht, The Netherlands
Blikstein, P. & Wilensky, U. (2010). MaterialSim: A constructionist agent-based modeling approach to engineer-
ing education. In M. J. Jacobson & P. Reimann (Eds.), Designs for Learning Environments of the Future: Inter-
national Perspectives from the Learning Sciences, (pp. 17–60). Berlin/Heidelberg: Springer
Centola, D., Mckenzie, E. & Wilensky, U. (2000). Survival of the groupiest: Facilitating students’ understand-
ing of multi-level evolution through multi-agent modeling — The EACH Project. Proceedings of The Fourth
International Conference on Complex Systems, New Hampshire, NE, USA
Dickerson, M. (2015). Agent-based modeling and NetLogo in the introductory computer science curriculum:
Tutorial presentation. Journal of Computing Sciences in Colleges, 30(5), 174–177
Drogoul, A., Amouroux, E., Caillou, P., Gaudou, B., Grignard, A., Marilleau, N., Taillandier, P., Vavasseur, M., Vo,
D. A. & Zucker, J. D. (2013). GAMA: Multi-level and complex environment for agent-based models and simula-
tions. Proceedings of AAMAS ’13. International Foundation for Autonomous Agents and Multiagent Systems, St.
Paul, MN, USA
Epstein, J. M. (1999). Agent-based computational models and generative social science. Complexity, 4(5), 41–60
Epstein, J. M. (2006). Generative Social Science: Studies in Agent-Based Computational Modeling. Princeton, NJ:
Princeton University Press
Epstein, J. M. & Axtell, R. (1996). Growing Artificial Societies: Social Science from the Bottom Up. Washington, DC:
Brookings Institution Press
Gilbert, N. (2008). Agent-Based Models. London: Sage
Gilbert, N. & Terna, P. (2000). How to build and use agent-based models in social science. Mind & Society, 1(1),
57–72
Guo, B. & Wilensky, U. (2018). Mind the gap: Teaching high school students about wealth inequality through
agent-based participatory simulations. Proceedings of Constructionism 2018, Vilnius, Lithuania
Hauke, J., Lorscheid, I. & Meyer, M. (2017). Recent development of social simulation as reflected in JASSS be-
tween 2008 and 2014: A citation and co-citation analysis. Journal of Artificial Societies and Social Simulation,
20(1), 5
Head, B. & Wilensky, U. (2018). Agent cognition through micro-simulations: Adaptive and tunable intelligence
with NetLogo LevelSpace. Proceedings of Ninth International Conference on Complex Systems 2018 (pp. 71–81),
Cambridge, MA, USA
Hjorth, A., Head, B. & Wilensky, U. (2015). LevelSpace NetLogo extension. http://ccl.northwestern.edu/levelspace. Evanston, IL: Center for Connected Learning and Computer-Based Modeling
Hjorth, A. & Wilensky, U. (2014). Redesigning your city — A constructionist environment for urban planning
education. Informatics in Education, 13(2), 197–208
Wagh, A. & Wilensky, U. (2013). Leveling the playing field: Making multi-level evolutionary processes accessible
through participatory simulations. Proceedings of the Biannual Conference of Computer-Supported Collabo-
rative Learning (CSCL), Madison, WI, USA
Wagh, A. & Wilensky, U. (2014). Seeing patterns of change: Supporting student noticing in building models of
natural selection. Proceedings of Constructionism 2014, Vienna, Austria
Wilensky, U. & Jacobson, M. J. (2014). Complex systems and the learning sciences. In R. K. Sawyer (Ed.), The
Cambridge Handbook of the Learning Sciences, (pp. 319–338). Cambridge: Cambridge University Press
Wilensky, U. & Rand, W. (2015). An Introduction to Agent-Based Modeling: Modeling Natural, Social, and Engi-
neered Complex Systems with NetLogo. Cambridge, MA: MIT Press
Wilensky, U. & Reisman, K. (2006). Thinking like a wolf, a sheep, or a firefly: Learning biology through construct-
ing and testing computational theories — An embodied modeling approach. Cognition and Instruction, 24(2),
171–209