LevelSpace: A NetLogo Extension for Multi-Level Agent-Based Modeling

Multi-Level Agent-Based Modeling (ML-ABM) has been receiving increasing attention in recent years. In this paper we present LevelSpace, an extension that allows modelers to easily build ML-ABMs in the popular and widely used NetLogo language. We present the LevelSpace framework and its associated programming primitives. Based on three common use cases of ML-ABM - coupling of heterogeneous models, dynamic adaptation of detail, and cross-level interaction - we show how easy it is to build ML-ABMs with LevelSpace. We argue that it is important to have a unified conceptual language for describing LevelSpace models, present six dimensions along which models can differ, and discuss how these can be combined into a variety of ML-ABM types in LevelSpace. Finally, we argue that future work should explore the relationships between these six dimensions, and how different configurations of them might be more or less appropriate for particular modeling tasks.


Introduction
Agent-based models are typically conceptualized and coded as self-contained, separate units, each affording an in-depth 'thinking and analysis space' for one particular phenomenon. They are modeled as the interactions between two levels: a micro- (or agent-) and a macro- (aggregate-) level (e.g. Bar-Yam; Epstein & Axtell; Mitchell; Wilensky & Rand). This approach has been useful across many disciplines and domains, but has proven especially useful in the social sciences, where the micro-level often represents the states and behaviors of social organizations or individual people (e.g. Epstein; Gilbert & Terna). While the good reasons for restricting models to two levels are manifold, this comes at a cost: by reducing a phenomenon to two levels, we necessarily either exclude processes and entities that do not fit at these levels, or we are forced to abstract them into proto-agents or other low-fidelity simulations.
For instance, in the classic Segregation (Schelling) model, we are restricted to two levels: houses, represented by space, and individuals, who make decisions about whether to move or not. However, having only these two levels of simulated processes cuts out things that lie outside of the model, like the impact that segregation might have on neighborhood schools, as well as things that are inside the model but at a more detailed level, like the decision process "within" each agent of where to move, and why.
In this paper, we present LevelSpace, a NetLogo extension that provides a simple set of NetLogo primitives for connecting models and building multi-level or multi-model models.
We provide examples of how modelers can use LevelSpace to elaborate on processes within existing models, or to connect phenomena, by letting theoretically arbitrarily large systems of models communicate and interact with each other. Finally, we present six dimensions for describing different kinds of inter-model interactions that LevelSpace facilitates, and discuss how these six dimensions help us better describe NetLogo models built with LevelSpace.
Thus, while many meta-models, frameworks, and architectures have been developed, to date none of them address all three ML-ABM problems. Furthermore, many of them are domain specific and thus are not appropriate for more general ABM applications. Finally, a number of them employ highly specialized formalisms that severely limit the potential for wide adoption. We have designed LevelSpace not only to address all three ML-ABM problems, but also to preserve the flexibility of NetLogo and maintain its "low-threshold, high-ceiling" philosophy, which enables it to support the thinking and inquiry of a wide range of modelers, from young learners to experienced research scientists.
In the following, we first present and describe LevelSpace. We then demonstrate with three examples how LevelSpace can be used to create models that each address one of Morvan's ML-ABM problems. To keep these examples simple and avoid relying on specialized domain knowledge, all three examples extend the relatively well-known and simple Wolf Sheep Predation (WSP) model (Wilensky), which shows population dynamics in a three-tiered ecosystem consisting of predators, prey, and grass.
LevelSpace: A Language for Connecting, Expanding, and Exploring NetLogo Models

LevelSpace (Hjorth et al.) is an extension for NetLogo (Wilensky) that allows modelers to construct ML-ABMs within the NetLogo language and modeling environment. NetLogo is one of the most widely cited ABM languages in both the natural and social sciences (Hauke et al.; Railsback et al.; Thiele et al.), and its low-threshold, high-ceiling design commitment makes it a powerful tool for eliciting complex systems thinking in novice modelers as well (Tisue & Wilensky). In this capacity, NetLogo has been used for learning in domains as diverse as physics (Sengupta & Wilensky), biology and evolution (Stonedahl et al.), and increasingly in social sciences education as well (Guo & Wilensky; Hjorth & Wilensky).
LevelSpace extends the capabilities of NetLogo to enable ML-ABM across modeling and learning in all these domains. It allows modelers to dynamically and programmatically open, run, control, and close an arbitrary number of NetLogo models during runtime from inside an arbitrary number of other NetLogo models, using the NetLogo language itself to do so. LevelSpace allows modelers to think of models as agents by letting them manipulate models as they would any other type of agent in NetLogo. In other words, models are created inside other models in the same way as agents are created inside models. In the rest of the paper we will refer to a model that has been created by another model as a 'child model', and to the model that created it as its 'parent model'. A child model can be any NetLogo model, even one that uses LevelSpace. Thus, child models that use LevelSpace can open their own child models, allowing for an arbitrary number of children, grandchildren, great-grandchildren, etc. of the original "top-level" model.

LevelSpace supports two different kinds of child models: Interactive Models and Lightweight Models. The former presents the user with its entire Graphical User Interface (GUI), including all interface 'widgets' (plots, buttons, choosers, etc.). The latter, by default, runs in the background with no GUI; but even here, it is possible to show the view widget if a modeler is interested in seeing what goes on behind the scenes. However, Lightweight Models do not allow the user to see or manipulate interface widgets, inspect agents, etc. Each of these two child model types is useful in different contexts. Each of our three examples of LevelSpace ML-ABMs uses child models in a distinctive way, based on the design and modeling needs of that particular example, and to illustrate the flexibility and expressiveness of LevelSpace.

Design considerations about representations of models in LevelSpace
At the core of LevelSpace are representations of models inside other models that can manipulate or retrieve data from each other. At the core of our design decisions, therefore, was the question of how to represent models and collections of models.
Early in the design process, we considered the various options for representing models and collections of models. One such option, maybe the most obvious one, was to create a Model object and a Modelset object that would mirror the Agent and Agentset objects in NetLogo's Java and Scala code. We began implementing this approach on one branch of our development work to explore whether it would work. We quickly saw that it had certain advantages: for instance, storing all models as objects inside the extension allowed us to always ensure that "dead" models were removed on-the-fly from Modelsets, similarly to how NetLogo removes dead turtles. Additionally, having models as objects allowed us to provide more information about them in the Command Center, similarly to how NetLogo can provide information about a turtle if you "report" it. However, there was a serious disadvantage to this approach: NetLogo does not currently allow its built-in primitives to interact with objects that are defined in extensions. This meant that we would have had to reimplement every single primitive that interacts with Model or Modelset objects. More importantly, it meant that modelers and learners would have even more primitives to learn, and would need to learn when to use the extension-specific primitives and when to use the built-in ones.
We decided to make minimizing the number of new primitives, without sacrificing modeling power, our primary design objective. For this reason, we designed LevelSpace to use numbers to represent models (henceforth called "IDs" or "model IDs") and regular NetLogo lists to represent collections of models. These model representations, as the reader will see in the following description of the extension primitives, are designed to emulate the traditional representation of turtles and other NetLogo agents as closely as possible, while being able to utilize NetLogo's existing primitives for iterating over, sorting, filtering, and manipulating lists.

LevelSpace Primitives
As LevelSpace is built on NetLogo's Extension API and Controlling API, it deploys only a few powerful primitives used for keeping track of models, and for running commands and reporters in them. As mentioned, a core design principle of LevelSpace is to preserve as much of the existing NetLogo syntax and semantics as possible, both in core NetLogo and in other extensions like the xw extension. These primitives therefore bear a strong resemblance to existing NetLogo primitives that serve analogous modeling functions for simple agents.
In this section we list and describe the most commonly used primitives in the LevelSpace extension. For an exhaustive list of primitives, please see the GitHub wiki page. We give a brief example or two, discuss how and when to use each primitive, and discuss the advantages and disadvantages of its design where relevant.

ls:models
In order to keep track of child models, the IDs of all child models contained within a model can be reported with the ls:models primitive. It returns a list of all IDs in order of creation, and LevelSpace keeps track of models and removes discarded ones, so that ls:models always returns only currently open models. As mentioned, using lists helps integrate the execution and processing of LevelSpace code, because it allows easy integration with existing list-related primitives, like map, reduce, sentence, and foreach. A model's child models can only be retrieved by running ls:models from within that model itself. In other words, if Model A has a collection of child models, and Model B has a collection of child models, then the returned value of ls:models will depend on which model you run this reporter in. This is similar to how the returned value of reporting any agent value in NetLogo depends on which agent you report it from. This also means that if a modeler wishes to retrieve grandchild models, they need to run ls:models in the context of their child model.

ls:ask

ls:ask takes a model ID, or a list of model IDs, together with a command block, and asks the corresponding child models to run those commands. We can also pass arguments into ls:ask using NetLogo's standard lambda syntax. This helps us dynamically change the values that we base the LevelSpace model interactions on; for example, we can ask a model to create turtles and then set each turtle's color to one of those provided in a list called turtle-colors. Importantly, ls:ask only asks models that are directly available to a model, i.e. its own child models. Consequently, if a model wishes to ask a grandchild model to do something, it needs to explicitly pass the ask request "down the family tree" through its child model.

ls:of / ls:report

There are two ways of reporting data from models. The first, ls:of, uses 'infix' syntax but does not take arguments. The second, ls:report, uses 'prefix' syntax but allows for the use of arguments.
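Before turning to the reporter primitives, the ls:ask patterns just described might be sketched as follows (the model IDs, the turtle count, and the color list are illustrative):

```netlogo
;; ask the child models with IDs 0, 1, and 2 to run their GO procedure
ls:ask [0 1 2] [ go ]

;; pass an argument in with NetLogo's standard lambda syntax
let turtle-colors [ red green blue ]
(ls:ask 0 [ colors -> create-turtles 5 [ set color one-of colors ] ] turtle-colors)
```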
Just like ls:ask, ls:of and ls:report take either a single model ID or a list of model IDs. Given a list of IDs, they return a list of results, for instance the number of turtles in each of the corresponding models. The reported list is ordered and aligned with the input list.
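A minimal sketch of ls:of (assuming at least one child model is open):

```netlogo
;; report the number of turtles in every child model, in creation order
show [ count turtles ] ls:of ls:models
```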
Another way of reporting values is to use the ls:report primitive. This works analogously to NetLogo's built-in primitive for running strings as code, called run-result, and allows for the use of arguments.
Note that because arguments are optional, it is necessary to enclose uses of ls:report in parentheses when providing arguments.
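A hedged sketch of ls:report with an argument (the model ID 0 and the threshold 50 are illustrative, and the sketch assumes the child model's turtles own an energy variable):

```netlogo
;; report how many turtles in child model 0 have more than 50 energy
show (ls:report 0 [ threshold -> count turtles with [ energy > threshold ] ] 50)
```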
When passing in a list of model IDs, ls:of and ls:report return results in the order of the list, so that you can have many models run a reporter in parallel and still be able to match up models with their respective results. This could feel like a departure from the regular of primitive in NetLogo, where [xcor] of turtles returns the x-coordinates of all turtles in a random order each time it is evaluated.
However, of actually returns values in the same order as it is given turtles. What causes the randomization of order is the fact that turtles is an agentset, which always returns agents in a random order. In other words, the difference between the behavior of of and that of ls:of and ls:report is due not to a difference in the implementation of the LevelSpace primitives, but to the fact that we store model IDs in lists, which are ordered.
ls:of and ls:report can be used to retrieve data from grandchildren by embedding them inside themselves. Both put a reporter in the context of a particular child model, just like regular of puts a reporter in the context of an agent or agentset. This means that any reporter will be evaluated for that child model, so an ls:of nested inside another ls:of reports a value from a child model's own child model. As in regular NetLogo code, the reporters are evaluated beginning with the inner blocks.
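A hedged sketch of such nesting (the model IDs are illustrative, and the child model must itself load LevelSpace):

```netlogo
;; report the number of turtles in child model 0's own child model 0
show [ [ count turtles ] ls:of 0 ] ls:of 0
```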

ls:let
ls:let allows the parent model to store information in a variable that can then be accessed by a child model. It is similar to the let primitive in NetLogo. For example (the value 100 and the model ID 0 are illustrative):

ls:let number-of-turtles 100
ls:ask 0 [ create-turtles number-of-turtles ]

will first assign 100 to the LevelSpace temporary variable called number-of-turtles, and then pass that to the child model. Consequently, the code above will create 100 turtles in the child model with ID 0.
ls:let variables can be used exactly like any other local variable in the child model's context. However, the parent model may not change (or even read) the value of an ls:let variable once it is set. This is intentional: ls:let variables should only be used to pass information from parent to child, and not for any computation in the parent. With ls:let, sharing information with child models becomes just as natural as using local variables to pass information from one agent to another.
As a stylistic side note, ls:let can replace all uses of arguments, and often results in slightly more human-readable code, whereas using arguments can be less verbose.
Importantly, ls:let exists in the context of a parent. This means that, unlike a normal NetLogo let variable, which exists inside the scope of a full block, the value of an ls:let cannot be passed down to grandchildren without using ls:let again. I.e. (model IDs illustrative)

ls:let num-turtles count turtles
ls:ask 0 [
  ls:ask 0 [
    create-turtles num-turtles ;; WILL ERROR
  ]
]

will fail because num-turtles only exists in the top-level model. Rather, the top-level model would need to ask its child model to reassign the ls:let in its own context, and then pass that into its own child model - the grandchild of the original model.

ls:with

Often, we want to filter models, similarly to how we filter agents in NetLogo with the built-in with primitive. For instance, we may want to ask only those of our models that satisfy a particular condition - e.g. only those with more wolves than sheep - to do something. Suppose we wanted to call the GO procedure only in models where wolves outnumber sheep. This can be achieved by combining ls:ask and ls:models with ls:with.

ls:create-models / ls:create-interactive-models

These primitives open child models. Recall from above that LevelSpace allows for two different kinds of child models: interactive models that present a full interface, and lightweight (or 'headless') models that, by default, run in the background (but can present a model view as well, if needed). Both primitives take as arguments the number of models to open and a path to the .nlogo file that the modeler wishes to open - this allows them to work analogously to NetLogo's built-in user-file primitive. Additionally, both primitives can take an anonymous command as an argument. When used, this command receives the newly created model's ID as an input. Importantly, this command block is run in the context of the parent model, and not in the child model.
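The wolves-outnumber-sheep filtering just described might be sketched like this (assuming the child models define wolves, sheep, and a GO procedure, as in Wolf Sheep Predation):

```netlogo
;; run GO only in the child models where wolves currently outnumber sheep
ls:ask (ls:models ls:with [ count wolves > count sheep ]) [ go ]
```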
This differs from NetLogo's regular create-* primitives, where the optional command block is run in the context of the agent that was just created. We made this change to make it easy for the modeler to store a reference to the newly created model in a variable. So, assuming the existence of two global variables, one can load an instance of each of two models and assign them to their respective variables. This not only helps keep track of models, but can also be used as a device for writing more easily readable code, using illuminating variable names for child models instead of model ID numbers.
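A hedged sketch of this pattern (the global variable names are illustrative; the file names follow the models used later in this paper):

```netlogo
globals [ wolf-sheep-model climate-model ]

to setup-models
  ;; open one headless instance of each model and remember its ID
  ls:create-models 1 "Wolf Sheep Predation.nlogo" [ id -> set wolf-sheep-model id ]
  ls:create-models 1 "Climate Change.nlogo" [ id -> set climate-model id ]
end
```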

ls:close
This primitive simply takes a number or a list of numbers and closes the models associated with those IDs. If the closed models load LevelSpace and have child models of their own, these will also be closed recursively. All associated objects will be garbage collected by the JVM when needed. Consequently, this primitive is imperative for memory management. As mentioned, ls:close automatically removes a child model from its parent's ls:models.

ls:show / ls:hide
These primitives also take a number or a list of numbers and show or hide the windows of the corresponding models. For lightweight/headless child models, this window contains only the view widget, while for interactive/GUI child models it contains the entire interface. When hidden, the child models keep running in the background. Hiding a model saves all drawing calls in NetLogo and automatically sets the model's speed slider to maximum. Consequently, ls:hide can be used to make a LevelSpace model run much faster in those cases where viewing the model during runtime is not necessary.

ls:reset
This primitive clears LevelSpace, closing all existing child models (and, if necessary, any descendant models), and readies the LevelSpace extension for a new model run. This also resets the "serial number" of child models, so the numbering of subsequently created child models starts over. In our experience, this command will typically be used in the setup procedure of the top-level parent model.

ls:name-of / ls:path-of
When dealing with many different types of models, it is often useful to be able to identify a model on-the-fly. ls:name-of and ls:path-of return, respectively, the name of the .nlogo file and the full path of the .nlogo file that was used when loading a particular model. So (the model count and ID are illustrative)

ls:create-models 1 "Wolf Sheep Predation.nlogo"
show ls:name-of 0

will show "Wolf Sheep Predation.nlogo" in the Command Center. This primitive illustrates when it can be useful to combine NetLogo's built-in list primitives with LevelSpace primitives: imagine that a model has loaded some number of child models, including some Wolf Sheep Predation models, and that we for some reason only want to call GO in the Wolf Sheep Predation child models. This can be achieved by combining ls:ask, ls:models, and ls:name-of with NetLogo's built-in filter, like this:

ls:ask filter [ id -> ls:name-of id = "Wolf Sheep Predation" ] ls:models [ go ]

ls:uses-level-space?
Calling an ls:* primitive in a model that does not have the LevelSpace extension loaded will result in a runtime error. It is therefore often useful to be able to check whether a child model has the extension loaded. This primitive takes a model ID and returns true if the child model is running LevelSpace. This, too, can be combined with filter to, e.g., only pass a LevelSpace-specific command or reporter to those child models that are able to run it.
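One hedged sketch of such filtering, asking only the LevelSpace-capable children to report their own child models:

```netlogo
ls:ask filter [ id -> ls:uses-level-space? id ] ls:models [ show ls:models ]
```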

Managing Order Across Levels in LevelSpace
The multi-model, multi-level nature of LevelSpace means that there are some differences between the way code is executed, and the way synchronicity between models is ensured, in LevelSpace compared to normal NetLogo. Understanding how to ensure that code is executed in the desired order is important in modeling, and we therefore dedicate a section to it here.

LevelSpace's child models can, as mentioned, also load the LevelSpace extension and create their own child models. As mentioned in the previous section, in order for a grandparent model to ask a grandchild model to do something, we can embed an ls:ask inside the code block that we pass down the family line with ls:ask, like this (child-model and grandchild-model are illustrative variables holding model IDs):

ls:ask child-model [
  ls:ask grandchild-model [ do-something ]
]
An important difference between LevelSpace and regular NetLogo is that the numbers reported by ls:models do not refer to the child model objects themselves, but simply to the index of each child model in its own parent's list of child models. It is therefore not possible to report these numbers up to a grandparent and run code on grandchildren directly from the grandparent: the model IDs stored in a child are relative to that child's own children, not to the grandparent's. One therefore needs to ask each child to ask its own children to do-something (as we will show later).
LevelSpace extends NetLogo's approach to scheduling behavior by viewing models as agents. In NetLogo, if a modeler wants a set of agents to walk forward and then show how many neighboring turtles they have, the modeler will write

ask turtles [ forward 1 ]
ask turtles [ show count turtles-on neighbors ]
In this case, all turtles first move forward, and then show how many neighbors they have. No turtle reports its neighbor count until all of them have finished moving. However, if we write

ask turtles [
  forward 1
  show count turtles-on neighbors
]

each turtle moves forward and immediately shows how many neighboring turtles it has. This creates a situation in which turtles count each other as neighbors while being out of sync. This is not possible in LevelSpace, because a query about other models needs to go through the parent, and a child is not able to ask the parent to run a reporter while it is itself running code. Instead, if the modeler needs child models to react to each other's information or states, the modeler needs to break the code down into chunks.
While this may be an edge case, it is an example of something that differs between single-model NetLogo and LevelSpace, and it deserves mention.
Just like with a regular ask, LevelSpace executes all commands within a given code block for each model before moving on to the next model. This includes embedded ls:ask calls to child models.
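A hedged sketch of this ordering (go is assumed to be a procedure defined in the child and grandchild models):

```netlogo
ls:ask ls:models [
  go                       ;; runs in one child model...
  ls:ask ls:models [ go ]  ;; ...then in each of that child's children,
]                          ;; before moving on to the next child
```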
This means that the user has great control over, but also full responsibility for, scheduling commands correctly. This requires thinking carefully about how to cascade code down through the family tree of models when using ls:ask or ls:of to manipulate or retrieve information from grandchild models.

Parallelism
LevelSpace's ls:ask and ls:of are designed to run in parallel and will automatically create as many threads as they can in order to execute the code as quickly as possible for each call. In the nested example above, each child will, in parallel, ask its children, in parallel, to do-something. Only when all the grandchildren are done will the following code be executed.
For most uses, this will not make any difference to the user, other than running LevelSpace code faster.

Non-Language Features of LevelSpace
LevelSpace introduces some new features to NetLogo's IDE that do not relate specifically to the design of primitives. Further, it introduces a new data type called the 'code block'. These new features were designed to make it easier to deal with the particular challenges introduced by ML-ABM.

LevelSpace as a development platform
Developing a LevelSpace model breaks with a fundamental assumption in the NetLogo IDE: that there is always one, and only one, model. When building LevelSpace models, one often works in several different .nlogo files at a time. We have therefore made it easy to load, view, and/or edit several models at the same time in NetLogo. We did this by designing a few changes to the NetLogo IDE that happen seamlessly as soon as a modeler imports the LevelSpace extension. The first of these changes is that a "LevelSpace" drop-down menu is added (Figure).

Figure: As soon as LevelSpace is loaded, the LevelSpace menu appears.
The menu has three items. The first lets the modeler open the code of an existing NetLogo model. This is often useful, either to make edits to a child model, or to easily access the model to look up the names of procedures, variables, etc. The second item contains a list of all models that have been opened by LevelSpace since it was first loaded using the 'extensions' keyword. The third allows a user to create a blank model, which is often useful if a modeler is building a new LevelSpace model from scratch.

Figure: The LevelSpace menu, expanded.
Any of these options will open a new tab that includes the code for the model (Figure). However, it is not possible to make changes to the interface of child models. If a modeler needs to make changes to the interface, they need to open another NetLogo application instance.

Namespacing between models and the code block
Of course, when writing NetLogo code for different models, there are immediate namespacing issues. To address these, LevelSpace introduces a new datatype to NetLogo called the code block. NetLogo code is interpreted as a code block if it is surrounded by hard brackets and passed into any of the ls: primitives that accept it. The primary difference between a code block and the usual command block or reporter block is that a code block is not evaluated at compile time. This makes it possible for code belonging to the parent model to refer to variable names in the child model without causing compilation errors. In the following, assume that the child model has a global variable named foo.
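A hedged sketch of what this permits (the model ID 0 and the value 10 are illustrative; foo exists only in the child model):

```netlogo
;; foo is not defined in the parent, but because the bracketed code is a
;; code block, the parent model still compiles:
ls:ask 0 [ set foo 10 ]
show [ foo ] ls:of 0
```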
Examples of Multi-Level Agent-Based Models with LevelSpace

As described above, Morvan's survey shows that ML-ABM has been used to solve three general kinds of problems. In the sections that follow, we provide examples addressing each of these problem types by using LevelSpace to construct multi-level ABMs that extend the Wolf Sheep Predation NetLogo model (Wilensky).
LevelSpace Example 1: Coupling of heterogeneous models - Population dynamics and climate change
In our first example, which we call Population Dynamics and Climate Change, we expand on the Wolf Sheep Predation (WSP) model by linking it in an ML-ABM system with a model of Climate Change (Tinker & Wilensky). The Climate Change (CC) model shows how clouds and the presence of greenhouse gases in the atmosphere contribute to the containment of energy from the sun, and how this affects the global temperature. Each of the two models has a clear purpose in its own right: the WSP model allows modelers to think about the oscillation of population sizes in ecosystems consisting of sheep and wolves, and the CC model helps learners understand how the mechanics of photons, greenhouse gases, and infrared radiation contribute to global warming. But consider this non-exhaustive list of ways in which these systems affect each other:

• The temperature in the CC model affects how quickly grass grows back in the WSP model.
• The grass in the WSP model absorbs CO2 from the atmosphere.
• Animals in WSP - both wolves and sheep - contribute greenhouse gases to the CC model via expiration and flatulence when they metabolize food.
• The proportion of grass patches to dirt patches in WSP affects the albedo in the CC model, which in turn affects how much visible light from the sun is absorbed by Earth and how much is reflected.
Setting up these relationships is easy with LevelSpace. In our new parent model, we import the LevelSpace extension (ls), define global variables wolf-sheep-predation and climate-change for the IDs of each of the two child models, and create a SETUP procedure for the parent model. Note that no changes are required to either the WSP or the CC model for them to interact as child models. In order to program the interactions between the two systems, we can write a GO procedure in the parent model that uses LevelSpace's primitives. We

• make the grass-regrowth-time global variable in the WSP model vary as a function of the temperature global variable of the CC model,
• ask the wolves and sheep in the WSP model with energy above a given threshold to call the add-co2 procedure in the CC model (interpreting 'CO2' in the CC model now to represent all greenhouse gases, including methane),
• ask the grass in WSP to call the remove-co2 procedure in CC, and finally
• dynamically change the albedo global variable in the CC model as a function of the proportion of patches in WSP that have grass.

This syntax should look familiar to NetLogo users and modelers, and it should also be legible to agent-based modelers who are not deeply familiar with NetLogo.
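The GO procedure just described might be sketched as follows. This is a minimal sketch, not the paper's exact code: the numeric mappings (the regrowth formula, the energy threshold of 50, and the albedo range) are illustrative assumptions, while the variable and procedure names (grass-regrowth-time, temperature, albedo, add-co2, remove-co2) come from the two child models as described above:

```netlogo
to go
  ;; 1. couple CC temperature to WSP grass regrowth (mapping is illustrative)
  ls:let new-regrowth-time max (list 1 round ([ temperature ] ls:of climate-change))
  ls:ask wolf-sheep-predation [ set grass-regrowth-time new-regrowth-time ]

  ;; 2. well-fed animals emit greenhouse gases into the CC model
  ls:let n-emitters [ count turtles with [ energy > 50 ] ] ls:of wolf-sheep-predation
  ls:ask climate-change [ repeat n-emitters [ add-co2 ] ]

  ;; 3. grass absorbs CO2 from the CC model's atmosphere (per-tick rate is illustrative)
  ls:let n-grass [ count patches with [ pcolor = green ] ] ls:of wolf-sheep-predation
  ls:ask climate-change [ repeat n-grass [ remove-co2 ] ]

  ;; 4. albedo follows the proportion of grassy patches (0.2 to 0.6 is illustrative)
  ls:let grass-fraction [ count patches with [ pcolor = green ] / count patches ] ls:of wolf-sheep-predation
  ls:ask climate-change [ set albedo 0.2 + 0.4 * grass-fraction ]

  ;; 5. advance both child models one step
  ls:ask (list wolf-sheep-predation climate-change) [ go ]
  tick
end
```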
As mentioned, WSP and CC in their original form serve particular purposes, and on their own they support inquiry into different questions than when they are linked to form an ML-ABM. Linked, these models can address questions about how ecosystems and climate change mutually affect each other and, in the tradition of butterfly effects, how individual sources or sinks of CO2 might ultimately affect the life of entire ecosystems.
Importantly, when we begin to connect individual agent-based models in a multi-model model, they raise new questions and support us in exploring new collections of emergent phenomena. From a modeling-as-methodology perspective, this is important because it potentially allows us to validate and verify individual models before connecting them to a larger model system, thus allowing us to add both breadth and depth to our modeling endeavors.
Second, it does so without the disadvantage of rigidly bounding the modeled phenomena: if we (or anyone else) should want to expand on our ML-ABM by adding new models or by changing the ways in which the models interact, LevelSpace makes this easy. We believe these are important improvements to modeling both as a scientific practice and as a reflective process. LevelSpace thus broadens the scope of the possible conversation that the scientific community can have around a model (or model system), and it allows modelers to easily expand on the otherwise more rigid boundaries that models draw around a particular phenomenon.

LevelSpace Example 2: (Dynamic) adaptation of the level of detail - Wolf Chases Sheep
When a wolf and a sheep meet in the Wolf Sheep Predation model, the wolf simply eats the sheep. Of course, this is not a realistic portrayal of the predation process. For some purposes we might want to model the chase between a wolf and a sheep every time an encounter like this takes place. However, doing so within the original WSP model would put the temporal and spatial scales of the model at odds with themselves: the chase between two individuals takes place at a finer granularity of both time and space than the rest of the model.
By simulating the chase process in its own model, we are able to explore some parameters of that process that would otherwise be either difficult or impossible to explore because of the way in which the WSP model imposes particular levels of scale. By 'zooming in' on the process in a separate model with an appropriate set of scales, we are able to add to the original WSP model a representation of the chase process at a finer granularity. This representation is, like any model, a contestable site of inquiry, and it opens a context for exploring how variables like the speed or agility of wolves and sheep might affect their success in predation. This allows us to better understand how these variables affect survival rates at the level of the individual meeting between a wolf and a sheep, and how this might affect the population dynamics of the whole ecosystem. For instance, further elaborations on this model could include the energy cost of the chase to both wolf and sheep, and could lead to a more sophisticated cognitive model of wolves deciding whether or not it is worth chasing a particular sheep. This expansion of the model would also better align with more recent developments in our scientific understanding of the role of behavior and decisions in ecosystems.

Figure: A model in which a wolf tries to catch a sheep.
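The essential pattern here is that the child model exists only for the duration of the encounter. A minimal sketch of this pattern follows; the file name and the procedures `setup-chase`, `chase-over?`, and `wolf-won?`, as well as the speed and energy values, are hypothetical names we introduce for illustration, not the published model's code.

```netlogo
to resolve-encounter [ target ]  ;; wolf procedure; `target` is a sheep
  ;; open a chase model just for this one encounter
  ls:create-models 1 "Wolf Chases Sheep.nlogo"
  let chase-model last ls:models
  ls:let wolf-speed 1.0     ;; illustrative values
  ls:let sheep-speed 0.9
  ls:ask chase-model [ setup-chase wolf-speed sheep-speed ]
  ;; run the child model, at its own finer time scale, until the chase resolves
  while [ not [ chase-over? ] ls:of chase-model ] [
    ls:ask chase-model [ go ]
  ]
  ;; apply the outcome in the parent model
  if [ wolf-won? ] ls:of chase-model [
    ask target [ die ]
    set energy energy + 20  ;; energy gain value is an assumption
  ]
  ls:close chase-model      ;; event-level persistence: discard the model afterwards
end
```

Because the parent model waits inside the `while` loop, WSP is effectively paused while the chase plays out at its finer spatial and temporal granularity.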

LevelSpace Example: Cross-level interaction – Models as agent cognition
LevelSpace can also be used to supply wolves and sheep with a cognitive model that simulates how these agents reason and behave. Here, we have created a neural network model to fill this role (code available on GitHub).
In WSP, using LevelSpace, we ask each wolf and sheep to open its own instance of a neural network model, giving each animal the capacity for unique behavior (Figure ). At each tick, the animals in WSP send information about their surroundings to their neural network models. The neural network processes the information and sends back a response, which the animal then processes in order to decide how to act. One way to think of this neural network, then, is as a way of predicting the best course of action for a given agent. When an animal reproduces, it passes on a copy of its neural network to its child, with the weights between the nodes mutated slightly. Then, because the viability of patterns of responses affects survival, the animals can evolve the behaviors that provide the best chances of survival over generations.

Specifically, animals sense nine conditions: the presence of any of three stimuli (grass, sheep, or wolves) in any of three directions (to the left, to the right, or ahead of them). Processing these stimuli, the animal's neural network outputs, for each of the three possible decisions (turn left, turn right, or do not turn), the predicted probability that it will be the best decision. Finally, as in the original model, the animal always moves forward and eats anything edible that it encounters.
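The per-agent lifecycle of these networks might be set up as in the following sketch. The file name "ANN.nlogo" and the network procedures `setup-net`, `get-weights`, `set-weights`, and `mutate` are hypothetical names, not the published code; the pattern of storing each model's ID in a `turtles-own` variable is the LevelSpace idiom the text describes.

```netlogo
turtles-own [ brain ]  ;; the model ID of this animal's neural network model

to setup-brain  ;; wolf / sheep procedure
  ;; one network instance per animal, giving each its own unique cognition
  ls:create-models 1 "ANN.nlogo" [ id -> set brain id ]
  ls:ask brain [ setup-net 9 3 ]  ;; nine inputs, three outputs, per the text
end

to reproduce-with-brain  ;; hypothetical variant of WSP's reproduce procedure
  hatch 1 [
    let parent-brain brain        ;; the hatchling inherits the parent's model ID
    ls:create-models 1 "ANN.nlogo" [ id -> set brain id ]
    ;; copy the parent's weights into the child's network and mutate them slightly
    ls:let weights [ get-weights ] ls:of parent-brain
    ls:ask brain [ set-weights weights  mutate 0.05 ]
  ]
end

to die-with-brain  ;; close the network model when the animal dies
  ls:close brain
  die
end
```

This is agent-level persistence: each child model is opened and closed with the birth and death of its owner.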
To set this up, we first define what information each animal will send to its neural network, and how it will deal with the output it gets back. We do this by defining, in our setup procedure, a list of anonymous reporters that will be used as input for the neural networks, as well as a list of anonymous commands specifying the behavior to perform if the corresponding output is activated. Each reporter in the input list detects the presence of a certain agent type in a certain direction from the agent. This could be done with a series of Booleans instead, but this example shows how to combine map and anonymous reporters with LevelSpace primitives to write succinct – though also more advanced, and potentially less readable – code.
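A reconstruction of what such lists might look like follows. The angles, cone dimensions, and the helper reporters `grass-at?` and `animal-at?` are our assumptions for illustration, not the original listing; only the overall shape (nine anonymous reporters, three anonymous commands) comes from the text.

```netlogo
turtles-own [ inputs actions ]

to setup-senses  ;; turtle procedure, called from setup
  ;; one anonymous reporter per stimulus/direction pair
  set inputs (list
    [ -> grass-at? -45 ]         [ -> grass-at? 0 ]         [ -> grass-at? 45 ]
    [ -> animal-at? sheep -45 ]  [ -> animal-at? sheep 0 ]  [ -> animal-at? sheep 45 ]
    [ -> animal-at? wolves -45 ] [ -> animal-at? wolves 0 ] [ -> animal-at? wolves 45 ]
  )
  ;; one anonymous command per possible decision
  set actions (list
    [ -> lt 30 ]  ;; turn left
    [ -> ]        ;; don't turn
    [ -> rt 30 ]  ;; turn right
  )
end

to-report grass-at? [ angle ]  ;; hypothetical helper: grass in that direction?
  rt angle
  let result any? (patches in-cone 5 30) with [ pcolor = green ]
  lt angle
  report result
end

to-report animal-at? [ kind angle ]  ;; hypothetical helper: any such animal there?
  rt angle
  let result any? kind in-cone 5 30
  lt angle
  report result
end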
Next, we define a reporter that captures the information defined by the inputs list, sends it to the neural network with LevelSpace, and reports the results. The apply-reals reporter in the neural network model sets the inputs of the neural network to a list of numbers, propagates those values through the network, and reports the resulting output as a list of probabilities. brain is a turtles-own variable that contains the ID of that agent's neural network model. Thus, LevelSpace allows modelers to implement sophisticated cognitive models for their agents as independent, modular, agent-based models. This example demonstrates how one can encode the perception of an agent, send that information to the agent's cognitive model, and use the result to guide the behavior of the agent. Furthermore, it allows high levels of heterogeneity among agents, since each agent can be given its own, unique cognitive model. As seen in the neural network example, LevelSpace allows researchers to construct agent-based versions of AI techniques to drive the decision making of their agents.

However, LevelSpace also allows for the creation of novel methods of implementing agent cognition. We provide here a briefer example without code (though it is available on GitHub), in which each of the wolves and sheep uses a child model to run short, simplified simulations of its local environment to make decisions (Figure ). This example works as follows: each agent has a vision variable that determines its ability to sense its local environment. Each tick, every animal sends its knowledge of the state of its local environment to its own cognitive model. This includes the locations of nearby grass, the locations and headings of the surrounding wolves and sheep, and its own energy level. The animal then runs several short micro-simulations in which agents perform random actions, and uses the outcomes to choose its next action. This method grants agents powerful and flexible decision making in a natural way.
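The query reporter described at the start of this passage might look like the following sketch. `apply-reals` is the neural network model's reporter named in the text; the 0/1 encoding of the sensed Booleans and the `act` wrapper are our assumptions.

```netlogo
to-report choose-action  ;; turtle procedure
  ;; evaluate each input reporter, encode the results as 1s and 0s,
  ;; and send them to this agent's network via its `brain` model ID
  ls:let sensed map [ r -> ifelse-value (runresult r) [ 1 ] [ 0 ] ] inputs
  let output [ apply-reals sensed ] ls:of brain
  ;; report the index of the action the network rates most highly
  report position (max output) output
end

to act  ;; run the chosen anonymous command from the actions list
  run item choose-action actions
end
```

`ls:let` makes the parent-side value `sensed` visible inside the block that `ls:of` evaluates in the child model, which is how the perception data crosses the model boundary.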
For instance, this method will generally result in a sheep reliably moving to the closest patch of grass. However, if there is a wolf on the closest patch of grass that would likely eat the sheep, this method may result in the sheep running away instead. Then again, if the sheep is about to starve to death unless it eats immediately, it may decide that it is worth risking being eaten by the wolf in order to get the grass. This sort of decision making emerges from the simple method of agents running micro-simulations to evaluate the consequences of actions, based on the locally available information and the heterogeneous states of other agents. Furthermore, in addition to allowing researchers to limit what agents know, the method also gives researchers two "intelligence" parameters that they may use to control exactly how well their agents behave: the number of micro-simulations, and the duration of each micro-simulation, i.e. how many ticks into the future each agent is trying to predict. The more micro-simulations the agents run, the more likely they are to find the optimal decision. The longer they run the micro-simulations for, the more they will consider the further consequences of their actions.

This process is generally similar to using Monte-Carlo sampling to approximate solutions to partially observable Markov decision processes (Lovejoy ). However, in this case, the Markov chain is being defined by an agent-based model. This offers a natural way to define what would otherwise be very complex processes. It also allows researchers to directly inspect what is happening in the agent's cognitive model, and to understand why the agent acts the way it does. This also demonstrates the method's advantage over, for instance, using neural networks to guide agents' actions as in the previous example. Neural networks are infamously difficult to interpret.
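A sketch of how the two "intelligence" parameters might drive such rollouts follows. Here `mind` holds the agent's child-model ID, and `load-state`, `random-rollout`, and `my-score` are hypothetical procedures in the micro-simulation model; the candidate actions are written as strings so the child model can `run` them.

```netlogo
turtles-own [ mind vision ]                  ;; mind: model ID of this agent's micro-simulation model
globals [ num-simulations sim-length ]       ;; the two "intelligence" parameters

to-report best-action  ;; turtle procedure
  ;; candidate first moves for the agent to evaluate
  let candidates [ "lt 30" "rt 30" "fd 0" ]
  let scores map [ a -> score-action a ] candidates
  report item (position (max scores) scores) candidates
end

to-report score-action [ act ]  ;; average outcome of several short rollouts
  ;; encode what this agent can see within its vision radius
  let state [ (list (word breed) heading xcor ycor) ] of other turtles in-radius vision
  let total 0
  repeat num-simulations [
    ls:let s state
    ls:let first-move act
    ;; load the local snapshot, take the candidate move, then let the
    ;; micro-simulation play out sim-length ticks of random actions
    ls:ask mind [ load-state s  run first-move  random-rollout sim-length ]
    set total total + [ my-score ] ls:of mind
  ]
  report total / num-simulations
end
```

Raising `num-simulations` sharpens the estimate of each action's value; raising `sim-length` makes the agent weigh consequences further into the future, exactly the two knobs described above.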
In the micro-simulation approach, the situations which the agent considers when making its decisions may be directly observed and meaningfully interpreted.

Dimensions of LevelSpace Model Relationships
Multi-Level Agent-Based Modeling (ML-ABM) is an emerging approach that will be used by modelers both in the (social) sciences and in education. To create a common ground for talking about ML-ABM, modelers and researchers need a common vocabulary for describing these larger systems of models. In particular, having a descriptive language will help in communicating, designing, verifying, and validating ML-ABM systems.
One obstacle to a common vocabulary for ML-ABM is that 'agent-based modeling' is used to describe a variety of approaches that differ widely – conceptually, linguistically, and in terms of execution – in how they allow modelers to practically design and code models. These differences present modelers with specific constraints and affordances when building ML-ABM systems, and consequently also with different conceptual problems to solve and describe.
For this reason, a vocabulary for the specifics of ML-ABM will depend on the modeling framework and general approach to modeling. In the following, we propose six dimensions along which relationships between ML-ABM models created with LevelSpace might vary. While these may apply to other ML-ABM approaches or languages, we have developed this set of dimensions specifically for describing ML-ABMs built with LevelSpace.

Homogeneous vs. heterogeneous relationships
In LevelSpace systems, many models run at the same time. Sometimes these models are based on the same .nlogo file, and other times they are based on different .nlogo files. We therefore need to make a clear distinction between what we call a template model – the model file, not yet instantiated – and an instance of that model, containing agents, environments, etc. In describing the relationships that constitute LevelSpace systems, an important distinction is whether they take place between two models that are instances of the same template, or between models that are instances of different templates. We call the former homogeneous model relationships, and the latter heterogeneous model relationships. While we did not include any examples of homogeneous model relationships here, in other research we built an epidemiological ML-ABM in which instances of the same template model each represent different countries or other geographical regions (Vermeer et al. ). Such ML-ABM systems could represent geography by "tiling" copies of a single template model to represent contiguity, or by linking them in a network to represent connectivity through air travel or other transportation networks.
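A homogeneous system of this kind can be set up by instantiating one template several times. In the sketch below, the file name "Region.nlogo", the region count, and the `emigrants`/`immigrate` procedures are hypothetical; the point is only that all child models share a single template.

```netlogo
extensions [ ls ]

globals [ region-models ]

to setup
  clear-all
  ls:reset
  ;; many instances of the same template model, one per geographical region
  ls:create-models 10 "Region.nlogo"
  set region-models ls:models
  ls:ask region-models [ setup ]
  reset-ticks
end

to go
  ls:ask region-models [ go ]
  ;; link the regions: move some agents from each region to a random other one,
  ;; representing travel between regions (emigrants / immigrate are hypothetical)
  foreach region-models [ m ->
    ls:let movers [ emigrants ] ls:of m
    ls:ask one-of (remove m region-models) [ immigrate movers ]
  ]
  tick
end
```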
Directionality

Directionality refers to whether information in one model affects the behavior of agents in the other. In our examples, all participating models inform and influence each other bidirectionally, but one could imagine an ML-ABM system in which information is passed unidirectionally from one model to another. In fact, in our first prototype of the WSP-CC model system, the Climate Change model affected grass regrowth time but experienced no reciprocal effects. We therefore think it is important to distinguish the directionality – unidirectional or bidirectional – of information passing between a dyad of models.

Persistence
In the Ecosystems and Climate Change-example, both the WSP and the CC model are opened at the beginning of the model run, and both models remain open throughout the run. In contrast, in the Wolf Chases Sheep-example, a model is opened, used to resolve the sheep chase, and then discarded. Finally, in the Wolf Sheep Brain-example, models are opened and closed with the births and deaths of wolves and sheep. We see the distinction between these three different kinds of persistence as valuable for describing and classifying LevelSpace systems: system-level persistence, in which models are opened at the beginning of the model run and closed at the end; agent-level persistence, in which each child model shares its lifespan with a particular agent; and event-level persistence, in which child models are opened in response to a particular event or state of the world and closed when that event has been resolved.

Time synchronicity
The fourth dimension describes how time is modeled in one model-instance relative to another. In the Ecosystems and Climate Change-example, the WSP and the CC model run in tandem from the moment they are set up. Similarly, in our Agent Cognition examples, each brain model runs in tandem with WSP. In contrast to these two, in the Wolf Chases Sheep-example, the WSP model pauses until the outcome of the wolf-sheep chase has been evaluated. These differences point to an important distinction among model relationships: those that are synchronous, and those that are asynchronous.

One complication is that time may be modeled differently in different models due to differences of scale. For instance, in the Ecosystems and Climate Change-example, the CC model runs at a much slower rate. To compensate for this, we call its GO-procedure several times for each time we call the GO-procedure in WSP. However, we would still consider this a synchronous relationship because the ratio between their time steps remains constant. In other words, a model dyad is synchronous if the ratio between the models' time steps is constant, and asynchronous otherwise.
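In LevelSpace, such a constant tick ratio is simply a matter of how often the parent calls the child's GO procedure. In the sketch below, the ratio of 10, the global `cc-model` holding the child-model ID, and the `go-wolf-sheep` procedure are illustrative assumptions:

```netlogo
to go  ;; parent (WSP) go procedure
  ;; keep the ratio between the two models' time steps constant:
  ;; whatever the ratio is, the relationship remains synchronous
  ls:let cc-steps 10
  ls:ask cc-model [ repeat cc-steps [ go ] ]
  go-wolf-sheep  ;; hypothetical: one step of the WSP dynamics
  tick
end
```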

Hierarchy
In the Wolf Chases Sheep-example and the Agent Cognition-examples, the chase and brain models are subsystems of the larger phenomenon in the other model. For instance, the wolf and the sheep in the Wolf Chases Sheep model also exist in the Wolf Sheep Predation model. Similarly, each of the Brain models "belongs" to a wolf or a sheep in the WSP model. In contrast, in the Ecosystems and Climate Change-example, neither model can be said to be a sub-system of the other. The distinction between hierarchical and non-hierarchical model relationships is important, and is resolved by asking whether the phenomenon in each of the models can be seen as a sub- or super-system of the other.
Model ratio

In our first two examples – Ecosystems and Climate Change, and Wolf Chases Sheep – we need just one instance of each template model at the same time: one instance of WSP and one of CC, and one instance of WSP and one of WCS, respectively. However, in our third example, we have one instance of WSP and many instances of the Brain model. Consequently, distinguishing the number of models on each side of a relationship is an important part of a multi-level model systems typology. The model ratio is often implied by the hierarchy of which it is part: if you have an agent-level hierarchical model, you will most likely have a one-to-many model ratio. However, we can see the one-to-many ratio not just at the model-level, but at the agent-level too: consider a LevelSpace system of an industry, where agents represent organizations that open and close different departments in an ad-hoc manner, and where each department is represented by a child model. This ML-ABM system would contain a one-to-many model relationship between an organization and its departments at the agent-level. Conversely, consider a model system in which we model hundreds of cities, each producing greenhouse gases. These could all contribute greenhouse gases to one climate change model. In that case, there would be a many-to-one model ratio between city-models and the climate change model.

Towards a Typology of Multi-Model Dyadic Relationships
We have presented six dimensions that we believe address important aspects of the dyadic relationships and interactions between model templates. See Table for a full overview. Across the six dimensions, there are many possible combinations of dyadic relationship descriptors. This raises interesting questions about which kinds of combinations and dyadic relationships will be more or less common, and more or less generative, for modeling phenomena in different fields. Our future work will focus on understanding these combinatorics, with an aim to map how particular combinations of these six dyadic relationship dimensions can best be used to solve specific modeling problems.

Dimension            Meaning                                                        Potential values
Homogeneity          Whether the two models are instances of the same template      homogeneous, heterogeneous
Directionality       Whether information passes one way or both ways                unidirectional, bidirectional
Persistence          What determines when child models are opened and closed        system-level, agent-level, event-level
Time synchronicity   Whether the ratio between the models' time steps is constant   synchronous, asynchronous
Hierarchy            Whether one modeled phenomenon is a sub-system of the other    hierarchical, non-hierarchical
Model ratio          How many instances stand on each side of the relationship      1:1, 1:many, many:1

Table: The six dimensions of dyadic model template relationships.

Some combinations may be rare, and other combinations may even be conceptually contradictory. We want to mention that it is not obvious to us which are which. When writing this paper, the authors discussed some combinations that we thought would be unlikely. One of these was the homogeneous, agent-persistent dyadic relationship: a model that opens a copy of itself whose persistence is determined by one single agent. At first this seemed unlikely, but as we showed with the latter of the two Agent Cognition examples, we did in fact use agent-persistent instances of the WSP model itself to run micro-simulations. To us, this illustrated how our inability to envision this case in advance was simply a shortcoming of our imagination and intuition for which of these combinations are viable and useful.

Discussion and Conclusion
Multi-Level Agent-Based Modeling (ML-ABM) is a new approach that offers exciting opportunities for new ways of modeling connected systems. It also raises interesting questions, some of which we addressed in this paper. One focal question that we addressed is: 'what can ML-ABMs be used for?' Following Morvan's ( ) three use cases for ML-ABM, we demonstrated how LevelSpace can address each of them. Specifically, we showed how LevelSpace is versatile and allows modelers to (1) connect related phenomena through heterogeneous coupling; (2) "zoom in" on a particular event in a multi-level system; and finally, (3) simultaneously model processes at different scales. Further research is needed to more generally classify modeling use cases and the ML-ABM architectures that can address them.
An important affordance of the design of LevelSpace is how easy it is to connect existing models, owing to the fact that LevelSpace works by opening NetLogo models from inside other NetLogo models without necessarily making changes to the code of those original models. This provides an easy way to expand on, or contest assumptions in, existing models. As an illustration of this, all three of our model examples use either the NetLogo Models Library version, or a very slightly modified version, of the NetLogo Wolf Sheep Predation model (Wilensky ).

Our examples also showed how the LevelSpace programming primitives align with the existing NetLogo language, and illustrated how easily models can be connected with human-readable code. Given the large number of existing, peer-reviewed studies using NetLogo, LevelSpace provides opportunities for collaborative modeling by combining forces, and models, with relatively minor effort. This raises interesting questions – both practically, about how to push the modeling research agenda in new and more collaborative directions, and scientifically, about how we can use the process of combining models to critique and validate our existing models.
The last question that we addressed is how to conceptualize ML-ABM in order to create a shared vocabulary. We presented six dimensions by which relationships between models in LevelSpace can be described and distinguished in a fine-grained manner. In future work, we will explore the combinatorial space of these six dimensions in an effort to map out the viability of particular combinations, and to investigate how some may be particularly well suited for addressing particular modeling problems or patterns.
Finally, while we did not spend much time discussing it here, LevelSpace raises an interesting set of questions relating to the educational potential of ML-ABM. There is already a long tradition of improving students' causal reasoning through learning with ABMs (Wilensky & Jacobson ), particularly with NetLogo due to its "low threshold, high ceilings" approach. This work has leveraged ABM's ability to decompose complex phenomena into their constituent parts, and the cognitive benefits that this gestalt gives to understanding complex causality. Our own early work (Hjorth & Wilensky ) suggests that being able to model systems as consisting of smaller but connected systems helps learners more easily connect causality across many levels. In our future work, we will further explore the ways in which ML-ABM can help learners make sense of multi-level complex systems and phenomena.
LevelSpace joins a recent but rapidly growing effort to offer multi-level agent-based modeling platforms. NetLogo has been embraced by the modeling community for decades, and the addition of ML-ABM capabilities has exciting prospects. We offered three examples as a proof of concept of how to build ML-ABM models with NetLogo and LevelSpace, and showed how the problems that LevelSpace can solve lie within the established set of ML-ABM modeling problems. We are excited to invite the community to join us in using LevelSpace to build new, interesting models, to explore novel modeling techniques, and to expand on modeling concepts and theory in the process.

Notes
Note that the following is not meant to be a comprehensive list of existing work, but rather to give a general idea of which approaches have been tried. See Morvan ( ) for a more comprehensive review of approaches, languages, and frameworks.
Albedo is a measure of the amount of light that is reflected versus the amount that is absorbed by a particular surface.

We mean 'affordance' in the design sense of the word: that it allows – and even invites – a particular set of actions or manipulations, while making other actions more difficult or even impossible.
To avoid confusion, we want to emphasize that considerations about hierarchy do not refer to the LevelSpace "family hierarchy" of models, i.e. parents, children, grand-children, etc. Rather, it speaks conceptually to the modeled phenomena in the models. For instance, one model could load instances of two different models without either of those child models being hierarchically related to their parent in a conceptually meaningful way. Similarly, those two child models can be in a sub-system-super-system relationship with each other, even though neither is the other's parent or child model in the LevelSpace hierarchy.
We emphasize 'need' for an important reason: there are many ways of designing a model system, and many of them will be behaviorally equivalent. In constructing these typological dimensions, the question we have to ask is what is needed in order to preserve the necessary information. In the case of the Wolf Chases Sheep model, it would be possible to load a model for each "meeting" between a wolf and a sheep at the same time, thus changing the model ratio to one WSP model to many WCS models. However, because each event is resolved entirely before the next event is resolved, no information needs to be preserved in the state of the child model, and we therefore only need one WCS model for the system to function.