Simulation for Interpretation: A Methodology for Growing Virtual Cultures

Agent-based social simulation is well known for generative explanations. Following the theory of thick description, we extend the generative paradigm to interpretative research in cultural studies. Using the example of qualitative data about criminal culture, the paper describes a research process that facilitates interpretative research by growing virtual cultures. Relying on qualitative data for the development of agent rules, the research process combines several steps: qualitative data analysis following the Grounded Theory paradigm enables concept identification, resulting in the development of a conceptual model of the concept relations. The software tool CCD is used for conceptual modelling and assists a semi-automatic transformation into a simulation model developed in the simulation platform DRAMS. Both tools preserve traceability to the empirical evidence throughout the research process. Traceability enables the interpretation of simulations by generating a narrative storyline of the simulation. Thereby simulation enables a qualitative exploration of textual data. The whole process generates a thick description of the subject of study, in our example criminal culture. The simulation is characterized by a socio-cognitive coupling of agents' reasoning about the state of the mind of other agents. This reveals a thick description of how participants make sense of the phenomenology of a situation from the perspective of their world-view.


Introduction
Epstein's famous postulate "if you didn't grow it you didn't explain it" (Epstein ) of the programme of a generative social science captures in a nutshell the explanatory account of agent-based social simulation. Agent-based models enable the generation of macro-social patterns through the local interaction of individual agents (Squazzoni et al. ). Classical examples include the segregation of residential patterns in the Schelling model, the emergence of equilibrium prices (Epstein & Axtell ), or local conformity and global diversity of cultural patterns (Axelrod ). Agent-based modelling has long been used in archaeological research for investigating, for instance, spatial population dynamics (Mithen ; Kohler & Gumerman ; Burg et al. ). Likewise, there is a growing interest in including culture (Dean et al. ; Dignum & Dignum ) and qualitative and textual data in agent-based simulation (e.g., Edmonds b). Nevertheless, agent-based modelling is less used in ethnographic and cultural studies that attempt to uncover the hidden meaning of the phenomenology of action. Typically, ethnographic research is undertaken as interpretative research using qualitative methods such as thick description (Geertz ) or Grounded Theory (Glaser & Strauss ). The objective of this paper is to elaborate a framework for using simulation as a tool in a research process that facilitates interpretation. While applying the generative paradigm, the objective is growing virtual culture, i.e. an artificial perspective from within the subjective attribution of meaning to social situations. Thereby a methodology will be described for using simulation research as a means for exploring the horizon of a cultural space.
The paper describes a research process that we developed (Lotzmann et al. ) during the recent EU FP GLODERS project (www.gloders.eu). As the project involved stakeholders from the police in a participatory modelling process (Barreteau et al. ; Nguyen-Duc & Drogoul ; Möllenkamp et al. ; Le Page et al. ) for providing virtual experiences to police officers, the objective of the research process is the cross-fertilization of simulation and interpretation. Thus the example that we use throughout the paper is the investigation of criminal culture. While ethnographic research goes back to classical studies of e.g. Malinowski or Margaret Mead on tribes in the Pacific islands (Malinowski ; Mead ), it has expanded to studying cultures of various kinds such as youth (Bennett ) or business culture (Isabella ). Thus it is reasonable to use the field of the criminal world for interpretative investigations. The objective of the research, imposed by the stakeholder interests, was to investigate a specific element of criminal culture, namely modes of conflict regulation in the absence of a state monopoly of violence. For this purpose the norms and codes of conduct of the 'underworld' needed to be dissected. While individual offenders might pursue their own path of action, as soon as crime is undertaken collectively, i.e. in the domain of organized crime, the necessity for standards that regulate interactions emerges in the criminal world as it does in legal society. These can be described as social norms (Gibbs ; Interis ). Thus studying norms and codes of conduct in the criminal world provides a perfect example for studying the emergence of a specific element of culture, namely social norms. However, the focus of this paper is on methodology, namely the research process starting from unstructured textual data and ending up in simulation results which can recursively be traced back to the starting point of the research process.
Results concerning content are documented elsewhere (Neumann & Lotzmann ; Lotzmann & Neumann ). For the purpose of this article it is sufficient to emphasize that (specific elements of) criminal culture provide an adequate example to demonstrate the methodology of interpretative simulation. The research process entails two perspectives, an analysis and a modelling perspective, which are recursively related to each other:
• Analysis perspective: The process of recovering information within the empirical data from which certain simulation model elements are derived.
• Modelling perspective: The structured process of the development of a simulation model and the performing of simulation experiments based on the empirical data.
Thus the relation between data, research question and methodology has to be considered carefully. The development of the research process is closely oriented to the modelling process developed in the EU-funded OCOPOMO project (Scherer et al. , www.ocopomo.eu) but has been adapted and extended by an interpretative perspective. While for the analysis perspective the process of conceptual modelling, model transformation and implementation of declarative rule-based models has been extended by an initial qualitative data analysis (extending the scenario input proposed in OCOPOMO), for the modelling perspective the traceability concept (Scherer et al. , ) is utilized for developing virtual narratives that facilitate interpretative research (i.e. referring back to the initial qualitative analysis). This enables the use of simulation as a means for qualitative data exploration that reveals how actors make sense of the phenomenology of a situation, finally enabling the growing of criminal culture in the simulation lab.
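The traceability concept can be pictured as a chain of pointers: each simulation event refers back through a conceptual-model element to the annotated text passage it was derived from, and a run can then be rendered as an evidence-grounded storyline. The following Python sketch is purely illustrative; the actual CCD/DRAMS trace format differs, and all class and field names here are invented.

```python
# Hypothetical sketch: a trace entry links a simulation event back through
# the conceptual-model element to the original text passage, so that a
# simulation run can be rendered as a narrative storyline with evidence.

class TraceEntry:
    def __init__(self, tick, agent, action, ccd_element, source_passage):
        self.tick = tick                      # simulation time step
        self.agent = agent                    # acting agent
        self.action = action                  # fired rule / action name
        self.ccd_element = ccd_element        # conceptual-model element id
        self.source_passage = source_passage  # annotated evidence text

def narrative(trace):
    """Render a trace as a human-readable storyline with evidence pointers."""
    lines = []
    for e in sorted(trace, key=lambda e: e.tick):
        lines.append(f"t={e.tick}: {e.agent} performs '{e.action}' "
                     f"[CCD: {e.ccd_element}; evidence: \"{e.source_passage}\"]")
    return "\n".join(lines)

trace = [
    TraceEntry(2, "MemberX", "betray organization", "A17",
               "M. told the newspapers about my role in the network"),
    TraceEntry(1, "MemberV", "perform aggressive action", "A12",
               "An attack on the life of M."),
]
print(narrative(trace))
```

The point of the sketch is only that every narrative line remains recursively traceable to a concrete piece of empirical evidence, which is what distinguishes this use of simulation from conventional output analysis.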
The rest of the paper is structured as follows: Section provides a brief overview of the concept of thick description as methodology of interpretative research. Next, the research process is described in detail in Sections and . This consists of several steps. Section highlights the analysis perspective and Section highlights the modelling perspective. Finally Section discusses how the methodology can provide insights beyond the particular example of criminal culture.

Thick Description in Cultural Studies
The central element of an interpretative approach to the social world is the attempt to comprehend how participants in a social encounter perceive a particular concrete situation from within their worldview. Instead of observing from outside, interpretation attempts to comprehend social interaction from inside the social actors. How this can be achieved has long been the subject of many debates, ranging from philosophical speculation, such as Dilthey ( ), who introduced the term 'understanding' as a technical term in interpretative research, to various methods in qualitative empirical research. A particularly well-known approach to interpretative research is the concept of thick description. Originally coined by Clifford Geertz as a method for participant observation in ethnographic research, the concept quickly spread to various disciplines such as general sociology, psychology, education research or business science (Denzin ; Ponterotto ).

Geertz owes the term thick description to the philosopher Gilbert Ryle (Ryle ). Citing Ryle, Geertz considers "two boys rapidly contracting the eyelids of their right eyes. In one, this is an involuntary twitch; in the other, a conspiratorial signal to a friend. The two movements are, as movements, identical; from an I-am-a-camera, 'phenomenalistic' observation of them alone, one could not tell which was twitch and which was wink, or indeed whether both or either was twitch or wink. Yet the difference, however unphotographable, between a twitch and a wink is vast. . . " (Geertz , p. ). This example describes the step from a phenomenology of a situation to the meaning attributed to it. This move, which Geertz, following Ryle, denotes as a step from 'thin' to 'thick' description, has been influential for an interpretative theory of culture. Geertz provides an example of a drama in the highlands of Morocco in which different interpretations of a particular sequence of interactions by various ethnic groups (including Berbers, Jews and French imperial forces) generated a chaotic dissolution of the traditional social order: one man had been hijacked by a Berber clan. In compensation he stole the sheep of the clan. However, he subsequently negotiated that a certain number of sheep was a legitimate compensation for the raid. But when he came back to the town ruled by French forces, they arrested him for the raid. The example serves as a demonstration of an interpretative concept of culture. Resembling Wittgenstein's theory of language games, Geertz argues that culture is public symbolic action. In the example above, the reactions of the different cultural groups failed to be meaningful for the other groups because all parties (in particular the French forces) had a different view of the meaning of the actions of the other parties. In consequence, it comes to a "confusion of tongues" (Geertz , p. ). Winking or stealing sheep have a different meaning in different cultures.
Thus meaning is important for social order because, for instance, the reaction to what is perceived as a conspiratorial wink is different from the reaction to an involuntary eye movement, say blinking in strong sunlight. Meaning shapes the space of plausible (and implausible) follow-up actions.
The implication for investigating cultures is that an interpretative theory of culture is an interpretation of interpretations. The task of a cultural analysis is signifying and interpreting the subjects of study, whether criminals, Berbers or Frenchmen, i.e. interpreting how the subjects make sense of the world from the perspective of their world-view (Denzin ). This calls for a microscopic diagnosis of specific situations in a manner that enables a reader of a different cultural background to grasp the meaning that the subjects of the investigation attribute to it. Geertz claims that the role of theory in the interpretative account of thick description is to provide a vocabulary to establish conversation across cultures (Geertz ). Such a vocabulary should "produce for the readers the feeling that they have experienced, or could experience, the events being described in a study" (Creswell & Miller , p. ), such as a sense of verisimilitude (Ponterotto ). Thus the objective of a thick description is providing narrative storylines of the field (Corbin & Strauss ). In the following, we will outline a research process for developing artificial narratives that generate virtual experiences. This enables extending the generative paradigm for growing virtual cultures.

Methodological Process to Build Interpretive Simulations: Analysis Perspective
In fact, agent-based modelling has a number of properties that coincide with the account of a thick description: agent-based modelling studies the interaction of individual agents on a microscopic level (Squazzoni et al. ), including the relation between cognition and interaction (Nardin et al. ). Likewise, qualitative data is increasingly used for the development of agent rules (Fieldhouse et al. ; Edmonds b; Ghorbani et al. ; Dilaver ). Nevertheless, it is rarely the intention of agent-based research to enter into conversation with the agents. In the following, we describe a process of deriving agent rules from interpretative empirical research methods. Applying the rules in simulation experiments enables the investigation of socio-cognitive coupling: individuals reasoning about other individuals' minds, by taking into account a shared social context. That means: the agents attribute meaning to the observed behaviour of other agents.

This is undertaken using the example of the investigation of police files documenting intra-organizational processes within a criminal network that led to the internal collapse of a criminal organization. Studying intra-organizational norms faces the problem of cognitive complexity: detailed information about the motivation and subjective perceptions of the individuals involved in this process is necessary. This calls for tools which support not so much the handling of large quantities of data but rather detailed interpretative research methods. For this purpose, first, MAXQDA (www.maxqda.de) has been selected as a tool for computer-assisted qualitative data analysis (CAQDAS). This is a standard tool, and others (e.g. ATLAS.ti or NVivo) would have been equally valuable. Next, the interface between interpretative research and the development of agent rules in formal modelling draws on the tools that have been developed in the OCOPOMO project, more specifically CCD (Scherer et al. , ) and DRAMS (Lotzmann & Meyer ), which make it possible to preserve the traceability of agent rules to the empirical evidence base (Lotzmann & Wimmer ). These tools are part of the OCOPOMO toolbox developed in previous research to achieve empirically founded simulation results. Here also, other approaches and tools could have been used, e.g. flow charts (as inspired by Scheele & Groeben ) for the conceptual model and NetLogo for implementing the simulation model, at the price of not having tool support for semi-automated model-to-code transformation and for generating and visualizing traces. The important criterion for the conceptual modelling is that the (graphical) language used needs to be comprehensible to the stakeholders (i.e. the people involved in the participatory modelling), and the programming paradigm and language should fit the structure of the conceptual model, i.e. a logic-based programming approach such as that provided by a declarative rule engine can be considered beneficial. Finally, the results of the simulation are traced back to an interpretative framework for dissecting, in this particular case criminal, culture.

Figure: Overview of the data analysis and modelling process
In our case, the data basis is unstructured textual data from police interrogations of witnesses as well as suspects involved in the violent collapse of the criminal group. It has to be acknowledged that police interrogations are artificial situations and respondents might answer strategically or simply lie. Moreover, the interrogation is guided by certain interests of the police. In this case, for instance, the police investigations focused on persons related to money laundering and less on drug production. This uncertainty is typical for criminological data (Bley ). Nevertheless, the data is the best at hand, as in-depth interviews are usually problematic in the case of investigating criminals (however, see e.g. Bouchard & Ouellet, who interviewed prisoners). Police interrogations differ from court files not least because they are confidential and many of the respondents were witnesses, for instance as relatives of victims. This provides certain credibility. Often the interrogation protocols describe testimonials of persons in a stress situation such as being in fear for their life or having experienced the death of a friend. Therefore police interrogations can be described as approximations of situations of dialogical conversation, allowing for an in-depth analysis of the subjective meaning attributed to certain situations, which brings the empirical analysis very close to the subjective perception of the actors. The aim is to infer hypothetical, unobservable cognitive elements from observable actions and statements in order to analyse cognitive mechanisms that motivate action in very confused and opaque situations (Neumann & Lotzmann ). Modelling this cognitive complexity provides a challenge for the foundation of model assumptions. For this reason a procedure has been developed that describes a controlled process from qualitative evidence to agent rules.
The research draws methodologically partly upon a Grounded Theory approach (Glaser & Strauss ; Corbin & Strauss ) and partly upon the OCOPOMO process developed in the corresponding project, in order to arrive at a thick description of the field. The simulation model is data-driven and evidence-based. The overall research process is outlined in Figure .
The figure highlights three elements of the analysis process of the data, which provides the foundation for the development of a simulation model. The data analysis is an essentially qualitative process which makes it possible to derive detailed model assumptions. The core is the analysis itself, documented in the blue box. The analysis is embedded in a theory of normative agents, developed in prior research in the EMIL project (Conte et al. ), and constant stakeholder participation.
The analysis can be grouped into the four process phases depicted in Figure : data preparation, concept identification, concept relation identification and concept network analysis. These process steps include various activities which are supported by different software tools. First, data was provided by the stakeholder. This consists of police interrogations of witnesses and suspects. For an analysis of these documents, pre-processing is necessary: the data had to be checked to ensure the protection of privacy, the text was translated, and errors and flaws introduced by certain pre-processing steps (e.g., OCR, translation) needed to be corrected. Data preparation provides the basis for first identifying concepts in the data.
A detailed representation of the qualitative data analysis and conceptual modelling process is documented in Figure . In this flow chart the relationship identification and the concept network analysis are merged into a single phase, as these two phases are closely interrelated. Hence, this merged phase as well as the preceding concept identification phase are discussed and illustrated in the following two subsections.

Concept identification: Qualitative text analysis
In a first step of concept identification the data need to be loaded into a tool for qualitative text analysis such as MAXQDA (see Corbin & Strauss ). Concepts stand for classes of objects, events or actions which have some major properties in common. Relevant text passages had to be identified which reveal preliminary concepts, documented in a list of codes. These are then used to annotate further text passages which provide additional information about the concept. An example is provided in Figure .
Figure shows a screenshot of how codes are related to the text. On the right-hand side is the textual document, in our case the police file. The left-hand side shows the codes that are related to certain text phrases. If one code is selected, the corresponding text is highlighted (relation indicated by the red arrow). For the methodological purpose of the article it is not necessary for the reader to read the text: rather, to enable the reader to follow the research account, only the relation between codes and the textual evidence base shall be highlighted.
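The coding step can be pictured as a simple data structure in which concept labels annotate text spans and all passages for a selected code can be retrieved, mirroring what a CAQDAS tool such as MAXQDA provides in its interface. The class and method names below are illustrative only, not part of any tool's API.

```python
# Minimal sketch of the coding step: codes (concept labels) are attached to
# text spans, and all passages annotated with a selected code can be
# retrieved -- the core operation behind the screenshots described above.

from collections import defaultdict

class Codebook:
    def __init__(self):
        self._annotations = defaultdict(list)  # code -> list of (doc, span)

    def annotate(self, code, document, start, end):
        """Attach a code to the text span document[start:end]."""
        self._annotations[code].append((document, document[start:end]))

    def passages(self, code):
        """All text passages annotated with the given code."""
        return [text for _, text in self._annotations[code]]

doc = "An attack on the life of M. He survived and reasoned about the motive."
book = Codebook()
book.annotate("aggression", doc, 0, 27)
book.annotate("interpretation", doc, 28, 71)
print(book.passages("aggression"))
```

Iteratively revising such a codebook against further passages is precisely the recursive coding process the text describes.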
As indicated in Figure , coding is an iterative process: further text passages give rise to the development of new codes and the revision of prior preliminary codes until a stage is reached in which the relevant concepts are saturated. These results have been presented to the stakeholder in order to ensure the empirical and practical relevance of the concepts. In our case example, the subjects of the investigation (i.e. criminals) had not directly been involved in the research; rather, those actors that the research addresses, namely the police, had been part of the participatory research process.
Figure provides a full view of the coding. On the left is the list of codes. The selected code is highlighted in red. On the bottom of the right side are the text phrases which are annotated by the selected code, whereas on the top of the right side one can find where a particular coding (i.e. a text phrase) is located in the overall document. Again the emphasis is not on the content but on the relation between data and analysis.

Relationship identification: Conceptual modelling
The first step of the methodology follows Grounded Theory. Grounded Theory already emphasizes that concepts need to be related, typically undertaken in the research step of axial coding (Corbin & Strauss ). However, in order to derive a simulation model from the data, in this second step the research diverges from Grounded Theory accounts. The coding derived with CAQDAS software (here: MAXQDA) serves as the basis for concept relation identification with the CCD tool, a piece of software for creating a conceptual model of the processes that can be found by the analysis of the data (Scherer et al. , ). The dynamic perspective of conceptual modelling using the CCD approach is loosely related to the concept of flow process charts that have been suggested in qualitative research (e.g. Scheele & Groeben ), but with a specific grammar (an alternating appearance of conditions and actions), with a restricted syntax (e.g. there are no explicit conditions), and with more expressive syntax elements (e.g. attributes, following strict schemes, can be attached to conditions and actions). The rationale for this type of graphical language is twofold: on the one hand, the "simplistic" concept can help to make the conceptual model of dynamics intuitively more comprehensible; on the other hand, a semi-automatic transformation into program code (here: DRAMS rules) can be achieved. In this context, semi-automatic transformation means that so-called rule stubs with information on related data elements can be generated automatically, while the programmer has to implement the actual rule code.
Identifying relations between concepts is a central stage for the process view of a simulation approach. This research step departs from a Grounded Theory approach by making use of an abstract framework of (the above-mentioned) condition-action sequences (Scherer et al. , ; Lotzmann & Wimmer ). The web of interrelated sequences is denoted as an action diagram. The concept of condition-action sequences is an a priori methodological device for identifying social mechanisms on the micro level of individual (inter-)action. Broadly speaking, a mechanism is a relation that transforms an input X into an output Y. A further condition is a certain degree of abstraction, which becomes evident in a certain degree of regularity, i.e., that under similar circumstances a similar input X* yields similar outputs Y*. In the social world this is typically an action which relates X and Y (Hedström & Ylikoski ). This is assured by the concept of condition-action sequences. Every process is initiated by a certain condition which triggers a certain action. This action in turn generates a new state of the world which is again a condition for further action. Whereas the data describes individual instantiations, the condition-action sequences represent general event classes. For instance, in our case one condition is denoted as 'return of investment available'. This triggers an action class denoted as 'distribute return of investment'. Obviously this condition-action sequence describes classes of events: return of investment might be rental income as well as the purchase of companies. This methodology enables controlled generalization from the case. The case, however, provides a proof of existence of the inferred mechanisms. Note that the data basis of interrogations allows including cognitive conditions (such as 'fear for one's life') and actions (such as 'member X interprets aggressive action'). For understanding culture it is essential to retrieve the unobservable meaning attributed to particular situations which are observable at a phenomenological level (Neumann & Lotzmann ).

Figure: Screenshot of a CCD with annotations related to object "criminalNetwork"
In terms of the research process, the data need to be loaded into the CCD tool. An actor-network diagram needs to be compiled which contains the relevant actors and objects of the domain. These provide the basis for the development of an action diagram of the condition-action sequences describing the processes in the domain. The development of the action diagram needs to be undertaken in constant comparison with the concepts identified with the CAQDAS software (MAXQDA) in the first research step. CCD provides textual annotations for the identified elements such as actors, objects, actions and conditions, which ensure empirical traceability (cf. Figure ; again, not the empirical content is relevant but the relation between the data and the identified objects, actors and relations). These are imported from the annotations to the MAXQDA codes (cf. Figure with the list of annotations to a code). This feature provides a benchmark that all the codes and their relevant dimensions derived with MAXQDA are represented in the condition-action sequences (Neumann & Lotzmann ). Note that this is a recursive process: the action diagram needs to be constantly revised until theoretical saturation (Corbin & Strauss ) is reached. Again, validity needs to be ensured by member checking, i.e. consulting stakeholders.
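The condition-action mechanism described above can be illustrated with a minimal sketch: a condition triggers an action class, which in turn establishes a new condition. The condition and action names follow the paper's own examples; the toy trigger table itself is an invented simplification, not the CCD or DRAMS formalism.

```python
# Sketch of the condition-action mechanism: every action is triggered by a
# condition and, once performed, establishes a new world state (condition),
# forming the chains that make up an action diagram.

RULES = {
    "return of investment available": "distribute return of investment",
    "member disreputable": "perform aggressive action against member X",
}

# effect of each action: the follow-up condition it creates
EFFECTS = {
    "perform aggressive action against member X":
        "aggression recognized by victim",
}

def step(condition):
    """Fire the action triggered by a condition; return (action, new state)."""
    action = RULES.get(condition)
    if action is None:
        return None, condition          # no rule matches: state unchanged
    return action, EFFECTS.get(action, condition)

action, new_state = step("member disreputable")
print(action)      # the triggered action class
print(new_state)   # the follow-up condition it creates
```

Note that, as in the conceptual model, the entries denote event classes rather than individual instantiations: each key stands for any concrete situation of that kind found in the data.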

Example: Revealing interpretations
In the following, an example of a conceptual model will be provided (the example is taken from Neumann & Lotzmann and ) that shows how the people under study interpret phenomena, that is, how people attribute meaning to the phenomenology of situations. Following Geertz, we provide an interpretation of interpretations. Like the example of the sheep raid in the Moroccan highlands discussed by Geertz, the example of our research is in fact also an instance of the breakdown of social order. In terms of Geertz, it can be described as a "confusion of tongues" (Geertz , p. ). For the observer, meaning is most easily transparent when it becomes non-transparent for the participants. However, here we concentrate on the methodological issue of demonstrating how conceptual modelling facilitates the dissection of meaning as the core business of interpretative research, and abstain from presenting the full confusion (Neumann & Lotzmann , ).
As an example Figure describes a part of the CCD action diagram. This example, in particular the action "perform aggressive action against member X" will be used in the following to demonstrate the link between annotations as result of the empirical analysis outlined above, and the simulation modelling, experimentation and result analysis.
Figure shows an abstract event-action sequence derived from the data analysis. The box with a red flag represents an event; the action is represented by a box with a yellow flag. Moreover, in brackets we see the possible types of agents that can undertake the action. The arrow represents the relation between the event and the action. This is not a deterministic relation; however, the existence of the condition is necessary for triggering the action. Once an action is performed, a new situational condition is created which again triggers new actions. In the figure, the process starts with the event that someone becomes suspect (denoted as 'disreputable'), which triggers the action of performing an act of aggression against this person. When the victim recognizes the aggression, he or she needs to interpret the motivation. This process of interpretation is displayed in Figure . Two options are considered possible in the conceptual model. In fact, this is our (i.e. the researchers') interpretation of how the subjects of investigation interpret their experience. However, it is based on a number of instances that had been categorized in the concept identification phase, and its credibility has been checked by stakeholder consultation (in terms of validating qualitative research: member checking). Thus Figure shows a branching point in the interpretation: the perceived aggression can be interpreted either as norm enforcement, denoted as 'norm of trust demanded', or as norm deviation, denoted as 'norm of trust violated'. At this point the agents attribute meaning to the phenomenological experience of being a victim of aggression. As in Geertz's example of the sheep raid, different interpretations are possible. Depending on the interpretation, different action possibilities are triggered. Again, this is an abstract cognitive mechanism. However, we show one example of how these abstract mechanisms can be traced back to the data.
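The branching point just described can be sketched in code. The decision criterion used here (whether the victim believes in his own deviance) and the lists of follow-up actions are simplifying assumptions for illustration, not the model's actual rules.

```python
# Illustrative sketch of the interpretation branch: the victim of an
# aggression attributes meaning to it, either as norm enforcement or as a
# self-interested norm violation, and different follow-up actions become
# plausible depending on that attribution.

def interpret_aggression(believes_own_deviance: bool) -> str:
    """Attribute meaning to a perceived aggression (hypothetical criterion)."""
    if believes_own_deviance:
        return "norm of trust demanded"   # read as punishment / enforcement
    return "norm of trust violated"       # read as self-interested attack

# hypothetical follow-up action spaces opened by each interpretation
FOLLOW_UP = {
    "norm of trust demanded": ["comply", "accept sanction"],
    "norm of trust violated": ["betray the criminal organization", "retaliate"],
}

meaning = interpret_aggression(believes_own_deviance=False)
print(meaning, "->", FOLLOW_UP[meaning])
```

The sketch captures the methodological point: meaning attribution is itself a modelled step, and it shapes the space of plausible follow-up actions.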
The starting point is the event that, for some reason (outside the scope of the investigation), a member of the organization becomes distrusted (see Figure ). This initiated a severe aggression, as shown in the following annotation: Annotation (perform aggressive action against member X): "An attack on the life of M."
It remains unclear who commissioned the assassination and for what reason. It should be noted that an attack on the life could be the execution of a death penalty for deviant behaviour. In fact, some years later M. was killed because he had been accused of stealing drugs; it remains unclear whether this was true or whether the drugs were simply lost for other reasons. The murder shows, however, that the death penalty is a realistic option in the interpretation of the attack on his life. M. survived this first attack, which allowed him to reason about the motivation. No evidence can be found in the data to support this reasoning; however, data on how he reacted can be found. Annotation (member X decides to betray the criminal organization): statement of gang member V: "M. told the newspapers 'about my role in the network' because he thought that I wanted to kill him to get the money." This example allows a reconstruction of a possible reasoning, i.e. an interpretation of the field data. First, given the evidence that was available to the police, it is unlikely that this particular member of the organization (V) mandated the attack. However, the members of the criminal gang did not have the time and resources for a criminal investigation such as the police would have undertaken; they had to react quickly in complex situations. And in fact, the consideration is not completely implausible: M. was a drug dealer who invested money in the legal market by consulting a white-collar criminal with a good reputation in the legal world. V was such a white-collar criminal. Thus V possessed a considerable amount of drug money which he could have kept for himself if the investor (in this case M.) were dead. This might be a 'rational', self-interested incentive for an assassination. Second, it can be noted that M. interpreted the attack on his life not as a penalty (i.e. death penalty) for deviant behaviour on his side.
Instead he concluded that the cause of the attack was self-interest (the other criminal 'wanted his money'). Thus he interpreted the attack as norm deviation rather than norm enforcement (see Figure ). Next, he attributed the aggression to an individual person and started a counter-reaction against this particular person by betraying 'his role in the network'. This is an example of an interpretation of how participants in the field make sense of an action from the worldview of their culture: he interpreted the aggression as a violation of his trust in the gang and reacted by betraying the accused norm-violating member. In fact, this counter-reaction provoked further conflict escalation. However, for the methodological purpose of demonstrating the research process we stop at this point (see further empirical detail in Neumann & Lotzmann ) and move on to a documentation of how the conceptual model is transformed into a simulation model.

Simulation Modelling and Experimentation: Simulation Perspective
. So far, the presentation of the research process has concentrated on the analysis perspective. Now we turn to the simulation modelling perspective, as the process of empirical analysis provides the basis for the development of agent rules. This process encompasses the implementation and verification of the simulation model, followed by a validation phase (involving the stakeholders). With the validated model, productive simulation experiments can be performed in order to generate results that can be analysed and presented to the stakeholders. Figure shows this simulation modelling process as applied in our research in detail.

.
In the diagram two starting points are present: The first is the availability of requirements for the agent architecture, derived during (quite early stages of) the conceptual modelling phase, i.e. as soon as inter-coder reliability is agreed, according to Figure . At this point the implementation-oriented agent architecture can be modelled (using UML diagrams and/or flow charts), which also takes other requirements into account .

Simulation model implementation

.
If the conceptual modelling approach and language provide capabilities of model-driven architecture, as in this example, then the implementation starts with a transformation step from the conceptual model to program code. With the OCOPOMO tool-box used in our research (as mentioned in Section . ), this step is performed by the CCD DRAMS tool, for which the CCD provides an interface that supports a semi-automatic transformation of conceptual model constructs into declarative code for the distributed rule engine DRAMS and Java code for the simulation framework RepastJ . (North et al. ). Since DRAMS is implemented in Java, it lends itself to close integration with Java-based simulation frameworks to extend their functionality. .
The main reason for using these tools rather than general-purpose tools (e.g. Python, Matlab) or typical simulation software (e.g. NetLogo, Mason) is that they enable using simulation as a means for exploring qualitative textual data by providing traceability information, an untypical application of social simulation. This is in particular the case since the structure of DRAMS code follows a logical approach similar to the action diagram of the CCD. Conditions in the CCD become facts in DRAMS, and actions become rules. This makes it possible for the formal code to precisely reproduce the conceptual model.

.
Hence, the code of a simulation model with DRAMS as its technological basis consists mainly of declarative rules describing the agent behaviour. From a more technical perspective, these rules are evaluated and processed by a rule engine, which according to Lotzmann & Wimmer ( ) is a software system that basically consists of:

• Fact bases storing information about the state of the world in the form of facts.
• Rule bases storing rules that describe how to process the facts stored in fact bases. A rule consists of a condition part (called left-hand side, abbreviated LHS, with the function to retrieve and evaluate existing facts) and an action part (called right-hand side, abbreviated RHS, with the function to assert new facts). In a nutshell, the RHS 'fires' if the condition formulated in the LHS evaluates to the Boolean value 'true'.
• An inference engine, an expert system-like software component that is able to draw conclusions from given fact constellations. As a by-product of this inference process, the traces from rule outcomes (in effect, simulation results) back to elements of the conceptual model are generated. From there the step back to the empirical evidence can easily be taken, owing to the annotations included in the CCD.
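The interplay of these three components can be sketched in a few lines. The following is an illustrative toy implementation in Python, not DRAMS code; the fact and rule names are simplified stand-ins inspired by the example discussed below:

```python
from dataclasses import dataclass, field

@dataclass
class RuleEngine:
    """Minimal forward-chaining engine: the fact base is a set of tuples,
    and each rule pairs an LHS (condition over the fact base) with an
    RHS (producing new facts to assert)."""
    facts: set = field(default_factory=set)
    rules: list = field(default_factory=list)

    def add_rule(self, name, lhs, rhs):
        # lhs: callable(facts) -> bool; rhs: callable(facts) -> iterable of facts
        self.rules.append((name, lhs, rhs))

    def run(self):
        """Fire rules until no new facts are asserted; return the firing trace."""
        fired = []
        changed = True
        while changed:
            changed = False
            for name, lhs, rhs in self.rules:
                if lhs(self.facts):
                    new = set(rhs(self.facts)) - self.facts
                    if new:
                        self.facts |= new
                        fired.append(name)  # trace of rule firings
                        changed = True
        return fired

engine = RuleEngine(facts={("memberOf", "X", "criminalNetwork"),
                           ("exists", "public")})
engine.add_rule(
    "member X goes to public",
    lhs=lambda f: ("memberOf", "X", "criminalNetwork") in f
                  and ("exists", "public") in f,
    rhs=lambda f: [("knownBy", "criminalNetwork", "public")],
)
trace = engine.run()
```

The firing trace collected here is the hook for the traceability discussed next: each entry can be linked back to a conceptual-model element and, via its annotations, to the empirical evidence.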
. These features enable decomposing and rearranging the empirical data, which becomes relevant for the scenario analysis as a means of systematic data exploration (cf. Sections 'Simulation result analysis' and 'Interpreting simulation: Growing criminal culture'). The relations between facts and rules can be visualized as a more technical counterpart (i.e. one including implementation-specific details) to the CCD action diagram, in the case of DRAMS by automatically generated so-called data dependency graphs (DDGs). Figure shows an example: a rule (rectangular node) "member X goes to public" with the actual facts (oval nodes; green for initially existing facts, red for facts generated during simulation) for its pre- and post-conditions. The pre-conditions include the membership in the criminal network (initial fact "R_memberOf_criminalNetwork"), the knowledge that a 'public' exists (initial fact "public") and the actual reason for the envisaged betrayal (fact "externalBetrayalInformation", expressing the consequence of an act of betrayal performed by a White Collar criminal agent earlier in the simulation). The post-conditions of the rule are the information about the decision (fact "aggressionDecided"), a possible normative event, as the betrayal violated the norm of trust between the criminals (fact "requestForNormativeEvent"), and the actual fact that the public knows about the criminal network (fact "R_knownBy_public", which becomes effective one tick later, indicated by the edge annotation [ . ]; the annotation [ . ] for the other two signifies an immediate effect of the asserted facts).

Acts of external betrayal   Severity   Cases in evidence   Inferred probability
going to police             modest     .
going to public             high       .

Table : Cases from evidence informing the probability for external betrayal.

.
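The delayed effectiveness of asserted facts (effective one tick later, as indicated by the edge annotations of the DDG) can be mimicked by a small scheduling mechanism. The following Python sketch is illustrative and not part of DRAMS; the fact names are taken from the example above:

```python
from collections import defaultdict

class TickScheduler:
    """Facts asserted with delay d become visible d ticks later,
    mimicking the delay annotations on data dependency graph edges."""
    def __init__(self):
        self.facts = set()
        self.pending = defaultdict(list)  # tick -> facts becoming effective
        self.tick = 0

    def assert_fact(self, fact, delay=0):
        if delay == 0:
            self.facts.add(fact)          # immediate effect
        else:
            self.pending[self.tick + delay].append(fact)

    def advance(self):
        """Move to the next tick and activate any pending facts."""
        self.tick += 1
        for fact in self.pending.pop(self.tick, []):
            self.facts.add(fact)

s = TickScheduler()
s.assert_fact("aggressionDecided")            # immediate effect
s.assert_fact("R_knownBy_public", delay=1)    # effective one tick later
```

After one call to `s.advance()`, the delayed fact "R_knownBy_public" becomes visible to other rules, matching the one-tick delay in the example rule.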
The decision process that might lead to an act of betrayal includes stochastic elements. The calibration of the probabilities at such decision points of the agents refers back to the first phase of the qualitative analysis. In the analysis of the textual data, so-called 'in-vivo codes' had been created, i.e. annotations of characteristic brief text elements. These had then been subsumed under broader categories which provide the building blocks of the conceptual model. Their relative frequency is used for specifying probabilities. Certainly these have to be treated with caution: first, the categorization entails an element of subjective arbitrariness when subsuming a description of a concrete action under a category such as 'going to public'; second, the relative frequencies in the data might not be very reliable. As they are based on police interrogations, some events might be more likely to become the subject of an interrogation than others. It might well be the case that the respondents did not remember or that the interrogation simply did not approach the issue. Thus, given the problem of an unknown number of unreported cases inherent in any criminological research, the relative frequencies provide at least a hint at the empirical likelihood of the different courses of action. Below, in Table , an example is provided of how the likelihood of agents' decisions is informed by the evidence base.
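The calibration step just described can be sketched as follows. The category names and case counts below are hypothetical placeholders, not the actual tallies from the evidence base:

```python
import random
from collections import Counter

# Hypothetical tallies of in-vivo codes subsumed under each category;
# the real counts would come from the annotated police interrogations.
cases = Counter({"going to police": 3, "going to public": 7, "staying silent": 10})

total = sum(cases.values())
probabilities = {action: n / total for action, n in cases.items()}

def decide(rng=random):
    """Stochastic decision point of an agent: draw one course of action
    according to the empirically informed relative frequencies."""
    r = rng.random()
    cumulative = 0.0
    for action, p in probabilities.items():
        cumulative += p
        if r < cumulative:
            return action
    return action  # guard against floating-point rounding
```

With 20 hypothetical cases in total, 'going to public' would for instance receive probability 7/20 = 0.35; the caveats about subjective categorization and unreported cases apply to any such estimate.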
. The next step in the simulation modelling process according to Figure is the verification of the simulation model, i.e. testing whether the concrete implementation reflects the envisaged concept. If the test is negative, then either the bugs in the implementation can be fixed directly (i.e. the implementation is aligned with the specification in the conceptual model) or, in cases where gaps or imperfections in the conceptual model are revealed, a revision of the conceptual model might become necessary.
. The successfully verified model can then be validated. This step is mainly characterized by presentation and discussion of the model and the experimentation results gained with the stakeholders. Unsuccessfully validated aspects of the model can again lead to revisions of the conceptual model. Finally, with the validated model, experiments can be performed and outcomes generated for the simulation result analysis.

Simulation result analysis
.
The data dependency preserved during the modelling process ensures traceability of inferences, such as simulated facts, back to the data. This facilitates the analysis of simulation results as a means of data exploration. One of the available analysis tools is the Model Explorer tool, a part of the DRAMS software. Figure shows a screenshot of an actual triggering of the rule "plan a violent action - emotional", in order to demonstrate a prototypical user interface for analysing simulation results. On the right-hand side, the log files of the simulation run can be found. On the left-hand side, the empirical, textual data is displayed. Both elements, the simulation and the empirical basis, are related by a visualization of the rules triggered to transform data into simulation results in the log file and the corresponding CCD elements of the conceptual model.

Figure : Example of exploration of simulation results.
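How such traceability links might be represented is sketched below. The record structure, field names and text passage are hypothetical illustrations, not the actual Model Explorer log format:

```python
# Hypothetical trace records linking each fired rule to its conceptual-model
# element and to the annotated text passage (in-vivo code) behind it.
trace_log = [
    {"tick": 5,
     "rule": "plan a violent action - emotional",
     "ccd_element": "action:violent_response",
     "annotation": "the only way to gain reputation was to demonstrate "
                   "that he was a real man"},
]

def evidence_for(rule_name, log):
    """Return the empirical text passages behind every firing of a rule,
    i.e. the step from simulation result back to the evidence base."""
    return [entry["annotation"] for entry in log if entry["rule"] == rule_name]
```

A query such as `evidence_for("plan a violent action - emotional", trace_log)` then plays the role of the Model Explorer's side-by-side display: from a line in the simulation log back to the textual data.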

.
In the following, an example of an interpretation of a simulation run will be provided. Again the example is taken from our research to illustrate the process of developing interpretative simulations. Simulation models typically generate outputs such as time series or histograms. Here the output is different: based on the exploration of the simulation results as displayed in Figure , a simulation run generates a virtual narrative. This final stage of model exploration closes the cycle of qualitative simulation, beginning with a qualitative analysis of the data as a basis for the development of a simulation model and ending with an analysis of simulation results by means of an interpretative methodology, in the development of a narrative of the simulation results. First, screenshots of part of an example simulation run are provided (six screenshots captured between tick and tick of a simulation run) in the animated Figure . In the next section it is shown how the model explorer enables recourse to the in-vivo codes of textual data generated in the first step of the qualitative data analysis.
. Figure -('Performing an aggression') displays a scene briefly after the start of the simulation: one randomly selected agent (Criminal ) became a suspect (indicated by the circle) and had been punished (by Reputable Criminal ). However, when interpreting the aggression the agent Criminal does not find a norm violation in its event board, the memory of the agent which stores possible norm violations from the past. For this reason the agent reacts with counter-aggression, namely physically attacking the agent Reputable Criminal . This is shown in Figure -. Figure -('Reasoning of aggression') shows the reasoning of this agent on the aggression faced by the agent Criminal . He finds that the offender is not reputable and for this reason excludes the possibility that the aggression had been a punishment. Figure -('Failed assassination') shows the reaction resulting from the reasoning process: namely an attempted assassination of the agent Criminal .

. The screenshot shown in Figure also indicates that the assassination attempt has been unsuccessful: the agent Criminal is still visible on the visualization interface of the model. For this reason the other agent now reasons on the aggression, as shown in Figure -('Reasoning on aggression'). This screenshot shows that the agent does not interpret the aggression as a death penalty. Potentially the aggression could have been a candidate norm invocation, because the aggressor is a reputable agent. However, the agent finds no norm that might have been invoked in its event base. For this reason he reacts with a further aggression, as displayed in Figure -('Successful assassination'): again an attempted assassination. In this case the agent Criminal successfully kills the agent Reputable Criminal . This is visualized by the disappearance of this agent from the visualization interface. .
However, the fact that somebody has been killed can hardly be concealed. For this reason other agents are also affected by the event of the killing. The screenshot in Figure -('Panic due to assassination') illustrates the reaction of the other agents: some did not observe the event, but those that observed the death of the agent reacted in panic.

Interpreting simulation: Growing criminal culture

.
The simulation runs show how agents act according to rules derived from categories developed by the researchers. This already includes cognitive elements and thus cannot be compared with the 'I-am-a-camera' perspective described by Geertz ( ), i.e. a purely phenomenological description of the researcher's observation. Nevertheless, applying the researcher's categories does not suffice for making sense of a culture from the perspective of the worldview of the participants. In terms of the account of a thick description this still has to be qualified as a thin description. However, as shown in Figure , the rules that are triggered during the simulation run can be traced back to in-vivo codes of the original textual documents. Following the account of a thick description, the central criterion for sustaining the credibility of a cultural analysis is a sense of verisimilitude (Ponterotto ), insofar as the reader gets a feeling that he or she could have experienced the described events (Denzin ). In the same vein, Corbin & Strauss ( ) introduce the notion of a storyline that provides a coherent picture of a case as the theoretical insight of a qualitative analysis.
For this reason, in the description of the scenarios the rules are now traced back to the original annotations in order to develop narratives of the simulation runs, i.e. the scenarios are a kind of collage of the empirical basis of the agent rules. Thus, text passages of the police interrogations are decomposed and rearranged according to the rules triggered during the simulation run. These are tied together by a verbal description of the rules. In sum, this generates a kind of 'crime novel' for getting in conversation with a foreign culture (Geertz ). Getting in conversation means that the reader would be able to understand interpretations from the perspective of the insiders' worldview (Donmoyer ) and would thereby be empowered to react adequately (at least virtually). As the examples of blinking in the sunlight or secretly exchanging signs, stealing sheep, or, as in our example, making sense of a failed assassination indicate: being able to grasp the meaning of the phenomenology of action is essential for comprehending an adequate reaction, or for comprehending why the participants failed to react adequately. In our case of participatory research, the storyline of the simulations provides an archive of virtual experience for empowering the stakeholders (Creswell & Miller ; Cho & Trent ). Certainly we do not directly engage in the field to empower criminals, but rather engage with the police interacting in the field with the subjects under study. Therefore, we provide an example of how a storyline of the simulation described above might look. RC stands for reputable criminal, C for ordinary criminal and WC for white collar criminal, who is responsible for money laundering. Italics in the text indicate paraphrases of in-vivo codes (characteristic text passages of the original data) of the empirical evidence basis of the triggered rules. .
"The drama starts with an external event. For unknown reasons C , who never was very reputable, became a suspect. It might be due to an unspecified norm violation, but it may not be so, just some bad talk behind his back. Perhaps he stole drugs or they got lost. Then RC and RC decided to respond and agreed that C deserved to be severely threatened. The next day RC approached C and told him that he would be killed if he was not loyal to the group. C was really scared as he could not find a reason for this offence. He was convinced that the only way to gain reputation was to demonstrate that he was a real man. So he knocked the head of RC against a lamp post and then kicked him when he fell down to the ground. RC did not know what was happening to him, that such a freak as C was beating him up, RC , one of the most respectable men of the group. There could only be one answer: he pulled his gun and shot. However, shooting from the ground, the bullet missed the body of C . So he was an easy target for C . The latter had no other choice than pulling out his gun as well and shooting RC to death.
However, this gunfight decisively shaped the fate of the gang. When the news circulated in the group, hectic activities broke out: WC bought a bulletproof car and C thought about a new life on the other side of the world, in Australia. In panic, RC wanted to physically attack the offender. While no clear information could be obtained, he presumed that C must have been the assassin. So with brute force he assaulted C until he was fit for hospital. His head was completely disfigured, his eyes black and swollen. At the same time, RC and C agreed (wrongly) that it was C who killed RC . While C argued that they should kidnap him, the more rational RC convinced him that a more modest approach would be wiser. He went to the house of C and told him that his family would have a problem if he ever did something similar again. However, when he came back, RC was already waiting for him: with a gun in his hand he said that in the early morning he should come to the forest to hand over the money." . This brief 'crime novel' can be described as a 'virtual experience'. Tracing the simulation runs back to the in-vivo codes of the empirical evidence base enables developing the storyline of a virtual case that provides a coherent picture of a case (Corbin & Strauss ). The narrative developed out of the simulation results, including some novel-like dramaturgic elements, brings the simulation model back to the interpretative research which had been the starting point of the qualitative analysis in the first step of the research process. It enables checking whether the cognitive heuristics implemented in the agent rules reveal observable patterns of behaviour that can be meaningfully interpreted. For this reason, the narrative description presents itself as a story of human actors for exploring the plausibility of the simulated scenarios. The plausibility check consists of an investigation of whether the counterfactual composition of single pieces of empirical evidence remains plausible.
This means checking whether they tell a story that creates a sense of verisimilitude, first of all for the stakeholders but, as we hope, also for the reader.
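The mechanical core of composing such a storyline from the firing trace can be sketched as follows. The rule names and paraphrased passages below are illustrative stand-ins for the in-vivo codes behind the triggered rules:

```python
# Each fired rule carries the paraphrased in-vivo code of its evidence;
# ordering the passages by tick yields the raw material of the storyline.
firings = [
    (1, "threaten suspect",
     "he would be killed if he was not loyal to the group"),
    (2, "counter-aggression",
     "the only way to gain reputation was to demonstrate that he was a real man"),
    (3, "shoot back",
     "he pulled his gun and shot"),
]

def storyline(firings):
    """Rearrange the evidence passages in simulation order into a
    proto-narrative; the dramaturgic tying-together remains manual."""
    passages = [text for _, _, text in sorted(firings)]
    return " ".join(p.capitalize() + "." for p in passages)
```

The result is only the collage of decomposed and rearranged text passages; the verbal description that ties them into a coherent 'crime novel' is still an interpretative step taken by the researcher.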

Concluding Discussion
. So far, agent-based social simulation has not been used in interpretative research in cultural studies following the 'understanding' paradigm in the social sciences. By extending the generative paradigm to growing virtual cultures, the paper demonstrates that it can be a useful tool for interpretative research as well. A research process is described to simulate the subjective worldview of participants in the field. Starting from an analysis of qualitative data, via the development of a conceptual model, to the development of a simulated narrative, the research process fosters getting in conversation with foreign cultures. In the following, some considerations shall be provided on how the methodology can provide insights and might be useful for further research.

.
As the particular example involved stakeholder participation during the overall research process, first the potential impact for the stakeholders shall be addressed: criminal investigators look for evidence that suggests further directions of investigation; e.g. the police can only observe dead bodies on the street, but not the motivation of the assassins. The interpretative simulation provides a further source of (hypothetical) evidence beyond physical signs (such as fingerprints, etc.) on a cognitive level, which provides insights into possible motivations for actions. This can be described as virtual experience. Empirically, this research strategy might be useful for investigating all kinds of ambiguous situations: complex situations with a high degree of uncertainty in which many decisions, and therefore many different outcomes, are possible. Examples include, for instance, corruption, but certainly also problem fields outside the domain of criminology. .
In the field of social simulation, the proposed methodology provides a new contribution to the computational study of culture (Dean et al. ; Dignum & Dignum ), beyond reducing culture to agents' attributes (such as colour, e.g. Axelrod ) or applying certain theories in the agents' design, by allowing agents to grow culture (i.e. patterns to interpret, and react accordingly to, courses of action of other agents) in the course of a simulation run. Moreover, a growing interest in including qualitative data can be observed in the field of social simulation, as indicated for instance by a special issue of JASSS on using qualitative evidence to inform the specification of agent models in . The methodology presented here goes one step further by using qualitative (namely: interpretative) methods for the analysis of agent models. This coincides with current tendencies towards cognitively rich agent architectures for understanding social behaviour (e.g. Campennì ), as technically the interpretation is generated by a socio-cognitive coupling of an agent's reasoning on other agents' state of mind. This feature might provide a direction to explore in future research in developing context-sensitive simulation (Edmonds a). .
Concerning the impact of the proposed methodology on contemporary qualitative research, it has to be admitted that a practical limitation for its application lies in the rather demanding computational skills required. However, as the simulation is intimately interwoven with the data (as the 'fact base' of the simulation), the scenarios generated by the simulation enable a systematic exploration of the data with regard to what could be called the horizon of a cultural space. Obviously, simulation results depend on the prior coding in the data analysis phase, as during the simulation run only those 'facts' can be processed that have already been coded in the qualitative data analysis. In the case of our example, these concentrate on modes of conflict resolution and aggression; therefore, the reasoning of the agents about the state of mind of other agents is restricted to such cases. Nevertheless, simulation as a method of data exploration might be of interest for qualitative researchers, namely by rearranging the codings of the qualitative data analysis in a new composition and tracing these back to the in-vivo codes for composing a narrative of a simulated case. Thereby the simulation enables a systematic exploration of qualitative, textual data: the horizon of a cultural space, namely what courses of action can plausibly be undertaken in a certain cultural context. Note that one criterion for the credibility of qualitative research is the sense of verisimilitude (Ponterotto ). For this purpose the generation of narratives by the simulation model is helpful, in particular for the counterfactual scenarios. This provides a criterion for ensuring a sense of verisimilitude: checking whether the coding generates a storyline in different combinations (in the modelling phase) increases the credibility of the prior coding in the analysis phase. For this reason we hope to attract the interest of qualitative researchers as well.

Acknowledgements
The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP / -) under grant agreement n. ., GLODERS Project.

Notes
It has to be emphasized that criminal culture cannot be reduced to norms or modes of conflict resolution in particular criminal organizations (see also Bright et al. ). These are central to the example used here because of the particular interest of the stakeholders. A fully fledged review of the literature is beyond the scope of this article. Investigations of criminal norms go back at least to the work of Donald Cressey (e.g. Irwin & Cressey ; see also Cressey's investigation of the American Cosa Nostra, e.g. in Cressey ). Moreover, criminal or deviant culture has been the subject of historical studies (e.g. Wiener ) as well as of studies on the social construction of deviance (Foucault ). Often crime is associated with low self-control (Spahr & Alison ). A brief overview of the concept of organized crime can also be found in Neumann & Elsenbroich ( ). However, for the methodological purpose, one example of an element of culture (in this case: norm enforcement) shall be sufficient to illustrate the research process of growing virtual cultures.
Member checking denotes a method for ensuring the credibility of qualitative research in which the results are presented to the research subject to examine whether the interpretation of the researcher of how the subjects make sense of a situation from the perspective of their worldview is plausible for the research subjects. In the case of participatory research the subjects are the stakeholders (Cho & Trent ).
To preserve privacy of data, names have been replaced by notations such as M., V , etc.
It should be noted that the alternative interpretation can also be found, as illustrated in the following in-vivo code: "I paid but I'm alive."

For instance, in this example the Computational Normative Theory developed by GLODERS had been taken into account. The normative reasoning is put into action in the simulation by dedicated rules working on a specific part of the agent memory for storing norm-related information, based on the empirical facts of this scenario. Details on this theoretical integration can be found in Neumann & Elsenbroich ( ).
Note that in the following the description slightly deviates from the story developed in the simulation: in the simulation the agents reason about the aggression, whereas here there is an immediate shooting, which might be regarded as 'ad-hoc' reasoning. Similar events can be found in descriptions of other cases of fights between criminals, for instance in the Sicilian Cosa Nostra (Arlacchi ).