Computational Models that Matter During a Global Pandemic Outbreak: A Call to Action

Abstract: The COVID-19 pandemic is causing a dramatic loss of lives worldwide, challenging the sustainability of our health care systems, threatening economic meltdown, and putting pressure on the mental health of individuals (due to social distancing and lock-down measures). The pandemic is also posing severe challenges to the scientific community, with scholars under pressure to respond to policymakers' demands for advice despite the absence of adequate, trusted data. Understanding the pandemic requires fine-grained data representing specific local conditions and the social reactions of individuals. While experts have built simulation models to estimate disease trajectories that may be enough to guide decision-makers to formulate policy measures to limit the epidemic, these models do not cover the full behavioural and social complexity of societies under pandemic crisis. Modelling that has such a large potential impact upon people's lives is a great responsibility. This paper calls on the scientific community to improve the transparency, access, and rigour of their models. It also calls on stakeholders to improve the rapidity with which data from trusted sources are released to the community (in a fully responsible manner). Responding to the pandemic is a stress test of our collaborative capacity and of the social/economic value of research.

queue of army trucks transporting dozens of coffins out of town as 24-hour crematoriums in Bergamo were overwhelmed. By the time you read this article, the situation will be even worse and the virus will be expanding into new places and communities, perhaps hitting the crowded places of less developed countries with devastating impact.
In order to try to contain the contagion and avoid the collapse of their health care systems, governments are taking draconian measures that, only a few weeks ago, might have caused a revolution. Social distancing, intensive quarantine, lockdown, cancellation of mass gathering events, and strict traffic restrictions are enforced, sometimes even to the extent of using a pervasive system of police and drones. In many countries, industries, companies, small businesses, and shops have been shut if they are not essential. This will have long-term economic consequences, such as the failure of many small businesses and a decline in private investment: consequences related to the severity of policy measures which, at the moment, do not have a defined expiry date. Politicians are consulting epidemiologists, virologists, and public health experts in order to try and make informed decisions, adapting their responses to contingencies and sometimes reconsidering decisions announced only a few days before. Experts are increasingly featured in the public media and are then under pressure to predict when this disease will end in order to reduce panic. Massive public spending has been enacted, at levels that will substantially increase deficits; levels which, previously, might have caused a breakdown of inter-state relationships within the EU. Some observers have started to blame liberal democracies for their inefficiency, while praising the capacity of some authoritarian states (which they considered oppressive only a few weeks before) to respond effectively.
The current outbreak of COVID-19 is not only causing a dramatic loss of lives worldwide, challenging the sustainability of our health care systems, precipitating an economic meltdown, and putting pressure upon the mental health of individuals under quarantine and lock-down measures. This outbreak is also challenging the research community, pushing scientists beyond their 'comfort zone' for two sorts of reasons, which we now elaborate upon.
Firstly, the need for a rapid response is largely incompatible with the 'normal' path of scientific progress, which is based on complex and delicate practices of peer review, testing, and replicability. These practices have been built over time to ensure the validity of scientific claims and research findings. The systemic nature of the COVID-19 outbreak requires wide-ranging political decisions about prevention, testing, and anti-contagion measures. These decisions cannot be based solely on epidemiological knowledge, because the efficacy of implementation depends on people's reactions, pre-existing social norms, and structural societal constraints. For instance, the same lock-down policy aimed at reducing the number of infected elderly might have different effects when implemented in a country where several generations live together and in a country where the elderly live alone but are still very active in their communities, e.g. in religious or neighbourhood associations.
Tackling the challenge of responding with rigorous research to a complex global problem is a difficult endeavour even in normal times. In a crisis, the 'default' response is to convert/adapt existing models to the new context, ideally by fitting them to newly available data. While this could reasonably be seen as the best way forward given the speed with which the virus is spreading, the dependency on the quality of available data and the underestimation of the theoretical premises and original intentions of re-used models can make this of questionable rigour (Edmonds et al.).
Responding with rigorous research to a complex global problem is even more problematic in times of crisis. This is due to: public pressure for immediate responses, misplaced expectations about the role of science, misunderstanding about the certainty of scientific knowledge, and confusion concerning public responsibility. The same political leaders who endorsed, without any modesty, public statements similar to "we have had enough of experts" (a statement by Michael Gove, who was Minister for the Cabinet Office in the United Kingdom, and one he later qualified) are now turning to scientists for advice or recommendations on decisions that are politically controversial. Public discourse makes it difficult for politicians to alter course ('U-turn') in the light of new scientific evidence, even if it is to their credit that they do so. Indeed, public perceptions of science itself are not helped by disagreements among scientists concerning the reliability of findings. Moreover, although politicians are (or ought to be) aware that responsibility for decisions eventually lies solely with them as their society's elected representatives, they can seek to dodge this, blaming scientists if informed decisions turn out to be wrong or glorifying themselves in case decisions turn out to be good.
Secondly, it is rare for any crisis to lie comfortably within the domain of a single discipline. Even if we squarely consider COVID-19 an epidemiological problem, our responses to it have environmental, ecological, political, socio-psychological, and economic aspects, and its systemic cascading effects can be fully understood only if multi- and interdisciplinary perspectives are considered. Integrating knowledge from all of these disciplinary perspectives is sufficiently difficult (e.g. Voinov & Shugart) that integration itself should be a recognized specialism (Bammer). We must remember that the dangers of excessive specialisation are well documented even in less problematic conditions (Thompson Klein).
In this context, it is not surprising that agent-based modelling is under the spotlight. When policy decisions and people's reactions depend on perceptions of the future, and scenarios are probabilistic and largely unpredictable, computer simulation models are seen as a viable method to project future states of a system from past ones in a non-trivial manner. What we see today in many media are predictions of the exponential growth of the number of infected persons based on equations that capture stylized populations and the distributions of their different states. However, any social or behavioural scholar can spot that these projections do not consider relevant factors of social complexity, which are intrinsically crucial to the modelled dynamics and not a negligible exogenous force. Not recognizing social complexity can undermine the credibility of findings, and thus we call for urgent initiatives to: (1) improve the transparency and rigour of models so that their theoretical premises and details can be understood, and (2) promote data access to help contextualize and validate models across various levels of analysis (i.e., micro, meso, and macro). This call is even more urgent when simulation findings can rapidly affect public policy decisions (e.g. on possible consequences of certain policy scenarios) and/or motivate individual actions (e.g. impact upon decisions to stay at home to "flatten the curve").
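The kind of stylized, equation-based projection described above can be sketched in a few lines. The following is a minimal, homogeneous-mixing S-I-R model with purely illustrative parameter values (not calibrated to COVID-19); it is exactly the sort of model that abstracts away the social complexity at issue here, since every individual is an interchangeable member of a compartment.

```python
# Minimal S-I-R projection: a stylized, homogeneously mixing population
# with no behavioural or social structure. Parameters are illustrative.

def sir_projection(n, i0, beta, gamma, days, dt=0.1):
    """Integrate the classic S-I-R equations with forward Euler."""
    s, i, r = n - i0, i0, 0.0
    trajectory = [(s, i, r)]
    for _ in range(int(days / dt)):
        new_inf = beta * s * i / n * dt   # homogeneous-mixing assumption
        new_rec = gamma * i * dt
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        trajectory.append((s, i, r))
    return trajectory

# Illustrative run: one million people, ten initial cases.
traj = sir_projection(n=1_000_000, i0=10, beta=0.3, gamma=0.1, days=200)
peak_infected = max(i for _, i, _ in traj)
```

Everything that the paper argues matters (compliance, networks, norms, reactions to policy) is invisible here: changing behaviour can only be mimicked by manually changing the single parameter `beta`.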
To improve the quality, impact, and appropriate use of computer simulation models in this delicate situation, we will in this paper, first, briefly review recent agent-based models of COVID-19 to bring out their potential and emphasize any existing explanatory gaps. While the number of publications, preprints and simulation tools on immediate responses to the COVID-19 pandemic is rapidly increasing due to the attention of scholars and public pressure, it is important to discuss some important challenges involved and suggest counter-measures to avoid collective mistakes. Secondly, we will reflect on the problematic interface between modelling and policy in order to better understand problems related to excessive expectations about scientific knowledge arising from a misunderstanding of the nature of science. Finally, we will suggest measures to reduce these gaps and improve the relationship between science and public policy via a call for extensive collaboration between public stakeholders and academic scholars in terms of model and data sharing. The credibility of science has recently been under attack from various communities, including anti-vaxxers, climate change deniers, creationists, flat-earthers, fake news propagators, and conspiracy theorists, but also from some philosophers and critical sociologists. While it is desirable that academic experts have greater public visibility and take a lead in public debate by explicating the evidence, the unfolding of this pandemic carries the risk of undermining science if we do not take the necessary precautions, for example by clarifying the boundaries and limits of presented conclusions/recommendations. Science could be the scapegoat if the public is seeking someone to blame.

Potential of and Gaps in COVID-19 Agent-Based Models
Modelling in epidemiology has a venerable tradition dating back to when differential equations were first used to model the population distribution of disease spread, including susceptible, infected, and recovered/dead pools. While this approach has helped to understand the threshold nature of epidemics and herd immunity, such models could not examine important social and behavioural factors, such as the behavioural responses of individuals to policy measures, and the effect of heterogeneous social contacts on diffusion patterns (Epstein). Progress has since been made in modelling epidemiological diseases, especially through agent-based simulations that include some important sources of population heterogeneity and explore the structure and dynamics of transmission networks (e.g. Stroud et al.; Yoneyama et al.; Zhang et al.; Hunter et al.). However, whenever an outbreak suddenly occurs, such as the one into which we have all been thrust in the last month, several modelling problems emerge that require careful attention. These include: (1) predicting complex outcomes when crucial data is unreliable/unavailable and theories are underdetermined; (2) repurposing models outside their original purposes by confusing original illustrative/explanatory purposes with prediction (see Edmonds et al.); (3) ignoring good practices of model transparency and rigour, either due to the race for public/academic relevance or because of political pressures for immediate responses.
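To illustrate the point above about heterogeneous contacts and transmission networks, here is a toy sketch (both networks and all rates are invented) comparing how the same per-contact transmission probability spreads over a roughly homogeneous contact network and over one where most agents also meet one of a few highly connected "hub" agents:

```python
import random

def spread(contacts, p_transmit, seed_node, steps, rng):
    """Count how many agents end up infected after `steps` rounds."""
    infected = {seed_node}
    for _ in range(steps):
        newly = set()
        for agent in infected:
            for neighbour in contacts[agent]:
                if neighbour not in infected and rng.random() < p_transmit:
                    newly.add(neighbour)
        infected |= newly
    return len(infected)

rng = random.Random(42)
n = 200
# Homogeneous: everyone contacts 6 randomly chosen others.
homo = {a: rng.sample([b for b in range(n) if b != a], 6) for a in range(n)}
# Heterogeneous: 2 random contacts each, plus one of 5 shared hubs.
hubs = list(range(5))
hetero = {a: rng.sample([b for b in range(n) if b != a], 2) + [rng.choice(hubs)]
          for a in range(n)}

homo_total = spread(homo, 0.1, seed_node=50, steps=20, rng=random.Random(1))
hetero_total = spread(hetero, 0.1, seed_node=50, steps=20, rng=random.Random(1))
```

Differential-equation models of the kind described earlier cannot distinguish these two populations at all, since both have similar average contact rates.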
The case of the Imperial College COVID-19 model, which has contributed to reshaping the political agenda in many countries, illustrates these challenges. Based on an adaptation of an individual-based simulation model of H5N1 (Ferguson et al.) and influenza (Ferguson et al.), in mid-March a team from Imperial College published a report in which they predicted a huge number of people would die in Britain unless severe policy measures were taken. Their results also helped to assess the efficacy of isolation, household quarantine, and the closing of schools and universities. These results were quickly endorsed by the UK government (after some initial hesitation), influenced the US administration, and alerted the French administration in their attempts to minimize the mortality rate in their countries due to the transmission of the global pandemic.
The core of the Imperial College model consists of households, schools and workplaces that are geographically distributed to represent the country under study, travel distance patterns (within the country and international), workplace, school and household sizes, and other demographic data. However, the exact internals of the model are not described in any detail and no one has yet accessed the model code. Maybe this is because the model was written many years ago by Neil Ferguson, the Imperial College team leader, and includes thousands of lines of undocumented code, as admitted in a recent tweet.

Considering its impact and relevance, the Imperial College model has been criticized for various reasons: (a) it does not enable the consideration of other policy options, (b) it does not use sufficient data across different contexts, while claiming general findings, and (c) it does not help to understand the social conditions and consequences of measures. For instance, Shen et al. focused on (a) and explored the efficacy of strict contact tracing, pre-emptive testing on a large scale, and super-spreader events. They also focused on the model's inability to study the consequences of local dynamics at the micro level rather than the aggregated level of the data available, and its lack of attention to compliance dynamics, which depend on behavioural and social factors. They concluded that the Imperial College model was "several degrees of abstraction away from what is warranted by the situation". Considering the policy and predictive purposes of the Imperial College model, Colbourn called for more context-specific models of the social and economic effects of lockdown and other interventions, and of their knock-on effects on health, including mental health and interpersonal violence, via careful empirical evaluation.
Unfortunately, even the available examples of more specific and empirically calibrated microsimulation models neglect important behavioural and social factors. For instance, Chang et al. proposed a microsimulation model calibrated on empirical data to inform pandemic response planning in Australia (see the origins of AceMod, the Australian Census-based Epidemic Model, in Cliff et al.). The model captures average characteristics of the Australian population by simulating millions of individuals with heterogeneous attributes (e.g., age, occupation, individual health characteristics) and their social context (different contact rates and contexts such as households, neighbourhoods, schools, or workplaces), whose distributions are taken from the Australian census. In a similar vein, IndiaSIM from the Center for Disease Dynamics, Economics & Policy (see: https://cddep.org/wp-content/uploads/ / /covid .indiasim.March --eK.pdf) was calibrated with census data of the Indian population and available data from China and Italy to estimate the force of infection, age- and gender-specific infection rates, severe infection, and case-fatality rates. However, although better calibrated than previous models, these do not capture network effects nor people's reactive responses, as the population states simply change via stochastic (randomized) processes determined by parameters (although the parameters derive from the data).
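The limitation just described can be made concrete with a hypothetical miniature of this microsimulation style (the age bands and daily risks below are invented, not taken from any of the cited models): each agent's state changes purely through a stochastic draw against a parameter attached to its fixed attributes, with no contact network and no behavioural reaction to the epidemic itself.

```python
import random

# Invented per-day infection risks by age band, standing in for
# census-calibrated parameters. Purely illustrative values.
AGE_INFECTION_RISK = {"0-29": 0.010, "30-59": 0.015, "60+": 0.020}

def step(population, rng):
    """One day: each susceptible agent may become infected, with a
    probability that depends only on its fixed attributes."""
    for agent in population:
        if agent["state"] == "S" and rng.random() < AGE_INFECTION_RISK[agent["age"]]:
            agent["state"] = "I"

rng = random.Random(0)
population = [{"age": rng.choice(list(AGE_INFECTION_RISK)), "state": "S"}
              for _ in range(10_000)]
for _ in range(30):
    step(population, rng)
infected = sum(a["state"] == "I" for a in population)
```

Note that an agent's risk never depends on who else is infected, who it meets, or how it reacts to policy: exactly the missing "network effects" and "reactive responses" criticised above.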
Independently of their specific characteristics, all of these models are either weak in terms of empirical calibration or theoretically underdetermined, or both. Indeed, none of them is based on explicit, empirical/theoretical assumptions about individual behaviour, social transmission mechanisms, and social structure constraints. Not only does this imply a very abstract conceptualisation of populations and behaviours: it also misses the chance to understand people's responses to policy measures due to pre-existing behavioural attitudes, network effects, and social norms. While the lack of appropriate data on a country or region's socio-economic structure, e.g., household structures or geographical clustering of population, can make a model's parameter calibration problematic, this should not be an excuse to: (a) repurpose models that are purely illustrative, or intended to provide a theoretical exposition, for predicting complex social outcomes, or (b) suppress attention to behavioural and social factors, which are critical to estimating the efficacy of advocated measures, because data are not available.
This suggests that any model must be considered in relation to its purposes, and has associated values and risks for public use. Edmonds et al. listed seven modelling purposes: prediction, explanation, description, theoretical exposition, illustration, social learning, and analogy. In the Appendix, we consider each of these purposes with a view to evaluating the role the corresponding models might play in a crisis context, the potential usefulness they might have to decision-makers, and the risks associated with their use (many of which are general to all models, not just agent-based models). For instance, in response to the lack of behavioural realism in many of the models currently used in the public debate, there has been a proliferation of open source agent-based implementations, though their authors admitted that they are probably simply illustrations. While this says more about the academic effervescence and selective attention that typically characterise emergencies and outbreaks, the competitive advantage of the Imperial College model and similar models, which at the present stage cannot be seriously tested by the community, makes these efforts of uncertain value for immediate responses, whereas they could be relevant for understanding the long-term socio-economic consequences of policy measures.
In short, the modelling practices that we developed in 'normal' times need to be reconsidered during a global outbreak, as this global event poses key modelling challenges. The first one is a COVID-19 prediction challenge. Prediction of complex systems displaying all sorts of non-linearities, heterogeneities and sensitivities is very challenging independently of the scientific method used to tackle the problem, agent-based modelling included (Edmonds et al.). We are all experiencing in real life the concepts and analogies that have made the fortunes of complex systems theorists, such as the famous 'butterfly effect' in chaos theory (Mitchell). Complex systems researchers are aware that small, unnoticed events, possibly identified only in retrospect, may generate unforeseen, large-scale consequences (Vespignani). Awareness of significant limitations, such as the nature of complexity, but also the lack of data, ontological diversity, or the variety of approaches to simulating human behaviour algorithmically, can make prediction difficult, if not impossible, and often even undesirable (Polhill; Edmonds et al.).
Therefore, modelling experts strive to minimise these limitations: to make their model assumptions valid with respect to the new SARS-CoV-2 virus, and their calibration rooted in the most accurate available data (e.g. Wu et al.). However, during the current COVID-19 pandemic, accurate data suitable for complexity-friendly, agent-based models are not yet available, and this inhibits agent-based modellers' ability to produce a much-needed, rapid response. At the same time, other scientific communities make bolder claims, even though the same or similar limitations apply to their methods. In late November/early December 2019, when the SARS-CoV-2 virus probably emerged (Andersen et al.), it was impossible to precisely estimate the scale and global consequences of the COVID-19 disease. At the time of writing, four months later, even though we are aware of the rate of loss of human life minimally attributable to the pandemic, precise estimation of the death toll at the end of the crisis is still out of scientific reach. So is the estimation of its direct and indirect consequences worldwide. Nonetheless, developing probabilistic scenarios that can reliably inform policy decisions is an important goal.
This calls for a second important challenge: the COVID-19 human behaviour modelling challenge. The complex social dynamics related to transmission, response and compliance (mentioned above) arise from the behaviours of individuals. Research in psychology and decision making has recognized for years that humans do not follow predictable, optimal decision-making even in a laboratory and without deep uncertainty. In times of crisis, findings suggest that cognition is impaired, and that traumatic experiences can cause psychological distress and cognitive distortions (Agaibi & Wilson; Liu et al.). A recently published review reported psychological effects of quarantine, including post-traumatic stress symptoms, confusion, and anger (Brooks et al.): all factors that have long-lasting consequences on decision making and behaviour, including compliance. Modelling the complexity of human behaviour, social interaction and the diffusion of collective behaviours or opinions has been at the core of much agent-based modelling (Squazzoni et al.; Flache et al.). Although incorporating agents' heterogeneity in terms of cognitions and behaviours in epidemiological models is a difficult task and would require cross-disciplinary collaboration (Squazzoni), there are examples of models using socio-economic data to estimate behavioural heterogeneity in epidemiological diseases (e.g. Hunter et al.). Although constructing more complex models takes time and effort, requiring cross-disciplinary teams and perhaps lowering the rapidity of response to public emergencies, it is nonetheless necessary to build better models. Indeed, when trying to estimate the consequences of policy measures that depend on heterogeneous responses, it is often the case that population outcomes are contingent on specific sets of circumstances that arise from social interaction and its non-linear effects.
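A hedged sketch of what folding behavioural heterogeneity into a transmission model might look like: here each agent carries an individual compliance level that scales down its daily contact rate, so two populations with the same average compliance can be composed very differently. All distributions and rates below are assumptions for illustration only.

```python
import random

def run(n_agents, base_contacts, p_transmit, compliance, days, rng):
    """Simulate contagion where each agent's compliance reduces
    the number of daily contacts it makes; return final epidemic size."""
    states = ["S"] * n_agents
    states[0] = "I"
    for _ in range(days):
        for a in [x for x in range(n_agents) if states[x] == "I"]:
            # Compliance shrinks this agent's daily contacts.
            n_contacts = max(0, round(base_contacts * (1 - compliance[a])))
            for b in rng.choices(range(n_agents), k=n_contacts):
                if states[b] == "S" and rng.random() < p_transmit:
                    states[b] = "I"
    return states.count("I")

rng = random.Random(7)
n = 500
uniform = [0.5] * n                                  # everyone half-compliant
mixed = [rng.choice([0.1, 0.9]) for _ in range(n)]   # polarized, similar mean
size_uniform = run(n, 10, 0.05, uniform, 30, random.Random(3))
size_mixed = run(n, 10, 0.05, mixed, 30, random.Random(3))
```

The point is that a compartmental model parameterized only by the population's *average* compliance cannot distinguish the `uniform` scenario from the `mixed` one, whereas the agent-level mechanism can.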
Models that cannot examine the social dynamics of COVID-19 contagion are missing a crucial aspect that has serious implications for any possible estimation, scenario or prediction. We need to use better informed assumptions about the way in which individuals' and communities' behaviours will change as an effect of the epidemic. Agents are not simply virus carriers, and their preferences and actions have implications at multiple levels.
This calls for a third important challenge: the COVID-19 data calibration and validation challenge. Model validation is very challenging when the model has a predictive purpose. This is because sometimes data are unavailable and/or there are no parallel situations that can be drawn on to independently test predictive ability and hence build confidence in model findings. During emergencies, experimental samples or tests might be impossible or unethical (Hunter et al.). Without appropriate data, validation can be improved by empirically-grounded theoretical knowledge and domain competence, which can be the bases for a more adequate representation of the complexity of the system. This motivates our plea for cross-disciplinary collaboration when simulating epidemiological diseases, which intertwine behavioural, social and economic dimensions (An et al.). While the availability of data is crucial for valid model assumptions, retrospective validation of predictive models is also possible during the unfolding of an event. For instance, Ziff & Ziff, using WHO data (World Health Organization), analysed the number of deaths due to COVID-19 and challenged the assumption of a fixed reproduction rate of the virus, which determines the temporal exponential growth of the number of infected and deceased. However, fine-tuning the parameters of a predictive model via empirical validation tests during an event, when model predictions inform public decisions on the same event, can generate confusion.
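A retrospective check of this kind can be sketched as follows, on a synthetic series invented purely for illustration: estimate the exponential growth rate of cumulative deaths over two successive windows; under a fixed reproduction rate the two estimates should roughly agree, while a marked drop suggests the rate is changing.

```python
import math

def growth_rate(series):
    """Average daily growth rate of the log of a positive series."""
    logs = [math.log(x) for x in series]
    return (logs[-1] - logs[0]) / (len(series) - 1)

# Synthetic cumulative deaths: fast early growth that later slows down.
deaths = [10 * math.exp(0.25 * t) for t in range(10)]
deaths += [deaths[-1] * math.exp(0.10 * t) for t in range(1, 11)]

early = growth_rate(deaths[:10])    # ≈ 0.25 per day by construction
late = growth_rate(deaths[10:])     # ≈ 0.10 per day by construction
fixed_rate_plausible = abs(early - late) < 0.02
```

On this synthetic series the check correctly rejects a fixed rate; on real death counts the same comparison inherits all the data biases discussed below.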
The problem is that data with the fine-grained quality needed to calibrate and validate COVID-19 models are (where they exist at all) dispersed, fragmented and rarely available in a comparable time window, scale and format. All data are subject to biases, and COVID-19-related data are no exception. Even the simple task of estimating the number of COVID-19 cases requires that scientists, decision makers and the public refer to the number of persons who tested positive for the presence of the SARS-CoV-2 virus with sophisticated tests, such as RT-PCR (WHO, Laboratory testing for COVID-19 in suspected human cases). However, these numbers depend not only on the actual number of infected individuals, but also, for instance, on: the testing capabilities in a given geographical region, the sensitivity/specificity of a given test, the very definition of 'cases', and the willingness of the testing authority to make undistorted data available (Lai et al.). This is exemplified by the case of Italy, where the fatality rate is high and unequally distributed because regions have followed different testing approaches, while the health authorities have tested only persons who already had two of three symptoms, and have never performed random tests. In short, building predictive models based on the number of cases without considering how cases were defined and data collected can lead to biased estimations. Such obstacles raise questions about the comparability of the available official statistics between countries, which are used to estimate the number of potentially infected in each country. Even if collected and processed with the utmost scrutiny, publicly available data reporting the number of confirmed cases almost certainly greatly underestimate the number of infected and, consequently, the number of recovered individuals.
This fundamental deficiency can have dramatic calibration/validation consequences if a model aims to predict contagion trends and estimate the efficacy of policies at various time scales.

The Policy-Modelling Interface
The previous section discussed how the quality of a model depends on its purpose, its theoretical assumptions, the level of abstraction, and the quality of data. A further issue is that good COVID-19 pandemic models are not always good policy advice models. For example, a perfect strategy to prevent new infections (e.g. a total shutdown) might mean long-term harm to important societal system functions and survival mechanisms. A health-focussed model concentrating on the process of infection transmission will not automatically provide insights concerning long-term economic consequences or implications for social well-being, however hard it may seem to effectively trade off such ramifications against the immediate threat of mortality. In other words, a particular modelling focus can limit the arena for debate (Aodha & Edmonds).
Scientific policy advice to inform political debates and decisions about the pandemic should be based on the empirical monitoring and assessment of social contexts, including ex ante evaluation and appraisal of potential futures, policy options, and scenarios, as well as on epidemiological models (Weingart & Lentsch; Wrasai & Swank; Jasanoff; Weaver et al.). However, the complex characteristics of the social world generate many possibilities and options. The complexity of social reality refutes a "blueprint for social engineering on the grand scale" (Popper), although the social sciences can teach us much about empirical regularities in social actions and social systems.
Computational simulation models generating macro phenomena from micro dynamics have the potential to provide expert advice for public policy making (Gilbert et al.; Ahrweiler et al.), especially in areas where empirical data is scarce or of bad quality, such as in the current outbreak. However, the validity of scientific policy advice in this domain needs to be handled with care, honesty and responsibility. The limitations of models, and of the policy recommendations derived from them, have to be openly communicated and transparently addressed. This applies not only to recognising missing, insufficient, or bad-quality data for calibrating and validating models, but also to admitting the fundamental complexities and contingencies of social systems, which require a holistic approach to capture the effects of policy measures across the boundaries of sub-systems.
Under pressure to respond immediately, and given social expectations of expert judgement, the temptation is to turn to simple models with few variables, high predictive claims, and clear messages to policymakers. But, especially in cases of so-called X-events (i.e. human-caused rare, unexpected events that cause a system to shift abruptly from one state to another; see Casti), such as pandemic outbreaks, the need for complexity-appropriate and empirically validated models is higher than ever. Even then, merely creating a good model is no guarantee that the conclusions that modellers draw from it will be translated into policy.

One of the problems that arises when translating the conclusions of modelling into policy is managing the potentially fraught relationship between scientific expertise and democratic decision making. There are different functional logics for producing legitimacy in science and in policy: the former internally, by peer review, and the latter externally, by elections. Trying to bring these two closer together often ends with a loss of legitimacy for both, through the "politicising of science" and the "scientification of politics" (Weingart). This is complicated by the "expert dilemma" (Collins & Evans): it is usually rather easy for every political position to find an expert willing to provide scientific evidence to support it, leading to competing sets of expertise. This can lead to scientific advice being treated as merely symbolic or rhetorical, as can be observed in the current COVID-19 public and media discourse, where experts seem more often to be asked to legitimise political decisions.
Modelling and simulation can remedy some of these issues by providing support for "evidence-based policy" (a term from the mid-nineties, coined in the wake of the evidence-based medicine approach) (Pawson). As outlined in Gilbert et al., for policy evaluation we need data about the actual situation with the policy implemented, to compare with data about the situation if the policy were not implemented. To obtain the latter, randomized controlled trials (RCTs) have been seen as the "gold standard", as they are in evidence-based medicine. However, RCTs in policy contexts are often difficult and expensive to organise, and are sometimes not feasible (for example, when the policy is available to all and so there is no possible control group) or even ethical. Simulation models can be used to create virtual worlds, one with and one without the implementation of the policy, to obtain an evidence base to inform "knowledge-based policy" (Gilbert et al.). They can explicitly represent (and hence make available for critique) a "theory of change" that can tell us when the results of an RCT can be applied to a different context (Cartwright & Hardie).
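The "two virtual worlds" idea can be sketched minimally: run the same simulated population twice from the same random seed, once with and once without a hypothetical contact-reduction policy, and compare outcomes. Everything here is illustrative (a toy contagion process and an invented policy effect), not a calibrated policy model.

```python
import random

def virtual_world(policy_on, seed, n=300, days=30):
    """One toy world: return final number infected. The only difference
    between worlds is the (hypothetical) policy halving daily contacts."""
    rng = random.Random(seed)
    states = ["S"] * n
    states[0] = "I"
    contacts_per_day = 4 if policy_on else 8
    for _ in range(days):
        for a in [x for x in range(n) if states[x] == "I"]:
            for b in rng.choices(range(n), k=contacts_per_day):
                if states[b] == "S" and rng.random() < 0.03:
                    states[b] = "I"
    return states.count("I")

baseline = virtual_world(policy_on=False, seed=11)   # world without the policy
with_policy = virtual_world(policy_on=True, seed=11) # counterfactual world
averted = baseline - with_policy
```

The comparison of `baseline` and `with_policy` plays the role of the missing control group; in a serious exercise, many seeds would be run and the distributions of outcomes compared.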
A second area of difficulty in managing the modelling-policy interface is a still prevalent conceptualisation of the relationship between science and policy that marginalises the expertise of professionals and other stakeholders. There are many theories about the role of scientific policy advice: decisionistic, technocratic, pragmatic, or recursive (Lentsch & Weingart). However, all of these assume that scientific elites interact directly only with political elites.
The current situation around COVID-19 shows how many diverse stakeholder interests are involved in shaping the implementation of knowledge-based policy agendas. Problem solutions require behaviour change on a global scale, changes to societal routines and practices, and new approaches to economic organisation and social well-being. Policy agendas are thus big societal projects that, to be effective, have to be supported by all members of society, building on their knowledge, experience, and expertise. Agendas, and the underlying models supporting them, must be informed not only by the elites but by all relevant stakeholders and practitioners if they are to be successful and sustainable.
In our "experimental societies" (Gross & Krohn), where experimentation, manipulations, and policies are ubiquitous and generate reflexivity and performativity of outcomes, it is difficult to learn how to coordinate transformation processes around global challenges such as COVID-19 in a participative way. However, in complexity-based realistic modelling and simulation there is already an emerging awareness of how to integrate stakeholder interests and expertise. Involving stakeholders in policy and management modelling activities has been extensively applied in socio-ecological management (Jones et al.; Mendoza & Prabhu; Robles-Morua et al.). An example of doing so is the "Companion Modelling" framework (Barreteau et al.; Étienne).
Although policy modelling for policy advice shares the complexity of its target (Geyer & Cairney), it can also be seen as a straightforward service activity, with policymakers and policy analysts as clients who contract modellers under demanding budget and time restrictions (Ahrweiler et al.). Usually, high immediate utility of results and short deadlines are mandatory requirements of any policy advice project using computational models (e.g., Aodha & Edmonds). This requires a lean and systematic process between modellers and policymakers, with a set of resources that helps stakeholders and policy actors engage with, and benefit from, those with modelling and assessment expertise, and that helps both sides negotiate the relationship (e.g., Jager & Edmonds).
An important part of this effort will be to improve the interface between modellers and policymakers, recognising the requirements of each. From the perspective of a policymaker, advice needs to be specific, concise, relevant to their immediate concerns, and accompanied by a plausible narrative. From the point of view of a modeller, advice needs to recognise the inherent uncertainty of conclusions drawn from the model and avoid oversimplification. Both modellers and policymakers need to accept that 'evidence', no matter how strong, is just one ingredient in political decision-making, to be mixed in with others such as political expediency, public opinion, and practicality.
Involving policymakers at the very earliest stages of modelling (that is, 'co-design') can help, by giving modellers a tacit understanding of the policymaking context, and policymakers a feel for the uncertainties and assumptions that are unavoidable in policy modelling. An alternative is to encourage the use of "knowledge brokers": people who can bridge the gap between modelling and policy making, preferably drawing on experience of both. Independent 'think tanks' and analytic divisions within government can often perform such a role.

In addition, modellers can make their models more useful for informing policy, for example by ensuring that updating a model with new data can be done easily and quickly (as is done in weather forecasting; see Kalnay). For COVID-19 models, this would mean that their calibration would be updated, perhaps daily, to assimilate the latest infection and mortality data. If appropriate, models can also be adapted to become the basis for 'serious games' (Cannon-Bowers; Zhou), in which stakeholders can interact with the model and explore the implications of policy options.
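As a hedged illustration of what routine recalibration might look like in miniature: each day, the newest observed case count is appended and a transmission parameter is re-fitted before issuing projections. The data, growth model, and fitting method below are all assumptions for illustration, not the assimilation scheme of any real COVID-19 model:

```python
# Minimal sketch of daily recalibration: re-fit an epidemic growth rate
# to the latest case counts (illustrative data and method only).
import math

def fit_growth_rate(daily_cases):
    """Least-squares slope of log(cases) over time ~ epidemic growth rate."""
    logs = [math.log(c) for c in daily_cases]
    n = len(logs)
    t_mean = (n - 1) / 2
    y_mean = sum(logs) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(logs))
    den = sum((t - t_mean) ** 2 for t in range(n))
    return num / den

# Each day, append the newest count and re-fit before projecting forward.
observed = [120, 150, 190, 240, 300, 380]   # assumed daily case counts
r = fit_growth_rate(observed)
projection_tomorrow = observed[-1] * math.exp(r)
```

Real assimilation schemes (as in weather forecasting) are far more sophisticated, but the principle is the same: the model's parameters are continually pulled back towards the latest observations.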
Modellers also need to be prepared for their advice to fall on deaf ears if the policy issues are not salient at the time: policy advice is most influential during 'windows of opportunity' (Kingdon & Stano), moments when policy interest, possible solutions, and implementation opportunities coincide. They also need to understand that policy making is itself a political process, in which advocacy coalitions compete for influence and may set the 'framing' for an issue. Such framing can heavily influence the search for relevant evidence; for example, it can influence the design of a computational model and the choice of policy options to be examined. As noted above, modelling requires making decisions on matters such as the boundaries of the domain to be modelled, the degree of abstraction, the theory of change implied by the model, and the assumptions about unmeasurable parameters. These decisions are often made implicitly, based on the designer's or policymaker's framing of the policy issue. A danger is that a dominant coalition with an accepted model can use it to support their policy for long periods, simply updating the model over time while leaving its basic assumptions unchanged (Kolkman et al.).

A Call to Action
Our community has made considerable progress in improving the methodological rigour and transparency of agent-based modelling, with special attention to model documentation and sharing for replicability. The establishment of CoMSES (an online clearinghouse for model code and documentation), initiatives such as the Open Modelling Foundation, and journal policies that enforce the adoption of open science principles have provided the field with infrastructures that have improved assessment and replicability. Defending these principles and practices is necessary in normal times; it is even more vital in periods of public pressure and political uncertainty. Firstly, a lack of rigorous cross-checks by experts could have serious consequences if findings that inform public decisions turn out to be based on brittle assumptions or simply to contain mistakes. This could reverberate on the reputation of the whole community. Secondly, when multiple academic teams build their models from scratch, without incrementally developing previously assessed models, there is a risk of research waste and misallocation of resources.
Given that exceptional times require exceptional responses, we call here for: (A) the whole community of agent-based modellers and computational social scientists working on COVID-19 models to collaborate in maintaining high standards of model building and to commit to best practices of transparency and rigour for replicability; and (B) the institutional agencies that hold data which could help calibrate and inform COVID-19 models at various national, regional, and local levels of granularity to engage the most trusted scientific associations in setting up data infrastructures, in order to make data available to the academic community while protecting stakeholders' interests.
As regards (A), although competition and timely publication of results are essential to the advancement of science, the need for immediate responses in exceptional times and the availability of online platforms for the rapid sharing of results (e.g., preprint archives, social media) must not compromise the rigorous methodological standards that are essential for the long-term sustainability of the scientific enterprise.
There is widespread consensus in the community of agent-based modellers on three best practices: (1) using open source software and tools (e.g., NetLogo, MASON) to build models, to minimize obstacles to replication and effective access costs for reviewers and possible re-users; (2) adopting standard protocols to document models, which make it easier for reviewers and re-users to assess a model's properties and building blocks, while allowing model builders to reflect on the adequacy of their model's structure and features (Edmonds et al.; Grimm et al.); and (3) using permanent online repositories, such as CoMSES, to archive fully documented models before submission of a paper to a journal, to speed up assessment and replicability. We believe that these three practices are all the more important during exceptional times: what the community loses in immediate rapidity of response to public expectations by complying with these best practices, it gains in long-term credibility.
As regards (B), although the immediate sharing of institutional data to help scholars develop more empirically contextualized and customized models is a good idea, the kind of data necessary to calibrate model parameters and estimate outcomes may require appropriate infrastructures to ensure that sensitive information is properly treated. While relevant benefits are expected when institutions outsource research to the community on a large scale via data sharing, there are alternatives that could make this effort manageable in a more responsible way, so that the advantages of transparency and open data are balanced against the priority of protection and security. Benchmarks exist for building customized data-sharing protocols that could be adapted to the purpose of sharing institutional data for agent-based computational research on COVID-19 (e.g., Squazzoni et al.). However, this process needs a clear organisation and representative authorities capable of ensuring transparent rules of access and enforcing the public interest in data use. In this regard, the European Social Simulation Association (ESSA), the largest association for the advancement of social simulation research worldwide, and the Journal of Artificial Societies and Social Simulation (JASSS), the flagship journal of the community of agent-based modellers, have agreed to offer their expertise and facilities to manage this process for the benefit of institutional stakeholders and the community.
ESSA commits itself to setting up a protocol for data sharing that will be jointly developed with any institutional agency interested in sharing data for research. While the association has membership fees and priorities related to strengthening the European research area, its international dimension and public mandate will help to ensure that any interested team of scholars, independent of origin and status, will have the opportunity of data access through public calls. Public calls will target only interdisciplinary teams formed by, at least, epidemiologists, computer scientists, and social and behavioural scientists. The Association also commits itself to opening a campaign to leverage funds and donations to support this effort as soon as a first institutional agency agrees to collaborate. JASSS commits itself to enforcing its policy on transparency and model documentation by collaborating with CoMSES to streamline peer review of the model code of any manuscript on COVID-19 submitted to the journal. This will ensure that code reviewers and manuscript reviewers are mutually informed, so that competences and resources are optimized.
Exceptional times require exceptional decisions, and these may benefit from collective creativity. While attention is now focussed on immediate epidemiological challenges, the decisions of many governments to contain the pandemic are already having unpredictable consequences for social behaviour, social relationships, economic processes, political agendas, and the mental health of millions of individuals. Research will also be needed to understand these long-term consequences, which could turn out to be dramatic well beyond the immediate public health sphere. Our call for action is an attempt to organise a sustainable collaborative answer to these long-term socio-economic challenges. We praise current initiatives from prestigious institutions, such as the Royal Society and some funding agencies, to stimulate and support modelling research addressing important challenges. ESSA and JASSS are here to help.

The value in a crisis context, the usefulness to decision-makers, and the risks of each modelling purpose:

Prediction
- Value in a crisis context: Ability to anticipate and compare intervention scenarios (including the consequences of doing nothing); assessment of uncertainties, and development of 'robust' policies that minimize maximum regret; baseline numbers to use in planning.
- Usefulness to decision-makers: If answers to questions can be derived quickly enough, and interventions formalized accurately, it could make a valuable contribution to discussion over interventions.
- Risks: Over-reliance on the model as an 'oracle'; inappropriate political exposure of developers; inability of an effectiveness-focused model to forecast policy utility. Often the quality of data to calibrate important model parameters is questionable, especially during an event. It is also not necessarily the case that decision-makers will adopt the policy the model recommends.

Explanation
- Value in a crisis context: Explanation could address questions such as how we (might have) arrived at a particular outcome, but does not guarantee that the particular causal chain that really led us there is the one simulated.
- Usefulness to decision-makers: More likely of use in 'lessons learned' exercises, especially in conjunction with several other models with a similar purpose.
- Risks: Enacting measures that address possible causes rather than the actual causes risks unintended consequences in future.

Description
- Value in a crisis context: A descriptive model could be used to explore scenarios in a heavily constrained context.
- Usefulness to decision-makers: Unlikely to be of value at the national scale, in part because the generalizations needed to model at that scale would be inconsistent with this modelling purpose; could be used at local levels, however.
- Risks: Elements not simulated might later prove to be relevant. Overgeneralization from a model fitted to specific circumstances is also a risk, as is possible confusion with prediction.

Social Learning
- Value in a crisis context: Potentially valuable in resolving conflict; the main value is the process by which the model is constructed, rather than the model itself.
- Usefulness to decision-makers: Resolving arguments, encouraging people to see others' points of view, observing the logical consequences of beliefs.
- Risks: Modelling what people in a group believe does not guarantee relevance beyond the group, or to the empirical world. The usual risks of group work (e.g., groupthink, dominant voices) need to be carefully managed by facilitators.

Illustration
- Value in a crisis context: Useful for the communication and education of ideas to the general public.
- Usefulness to decision-makers: Provides a means of communicating the reasoning behind policies for dealing with the crisis that may be unpopular.
- Risks: Under certain conditions, the model may not behave consistently with the communicated ideas.

Theoretical Exposition
- Value in a crisis context: Unlikely to be of value.
- Usefulness to decision-makers: In a crisis context, decision-makers will have little time for comparing or exploring theories.
- Risks: The lack of any (necessary) connection with the real world in this purpose risks over-interpretation if an attempt is made to use it.

Analogy
- Value in a crisis context: Of little value, other than distracting the modellers themselves from the psychological consequences of the crisis.
- Usefulness to decision-makers: Not useful.
- Risks: Over-interpretation of findings that merit more rigorous study using, say, the theoretical exposition or explanation purposes.
At the time of publication the team have announced they will release the code, but the final date and form are not yet clear.

This is a tweet by Neil Ferguson, posted in March 2020: "I'm conscious that lots of people would like to see and run the pandemic simulation code we are using to model control measures against COVID-19. To explain the background: I wrote the code (thousands of lines of undocumented C) + years ago to model flu pandemics. . . ".
For instance, see Smaldino's social distancing model (http://smaldino.com/wp/covid--modeling-the-flattening-of-the-curve/), a NetLogo model which illustrates how social distancing flattens the infection curve. It is interesting to note that in mid-March a simulation model of a non-existent disease, "simulitis", in an imaginary population was published as an interactive online graphic by the Washington Post (see https://www.washingtonpost.com/graphics/ /world/corona-simulator/). The model sparked a broad societal debate on social distancing. It was timely and openly accessible to the public. Its purpose was to illustrate the possible consequences of an (unsuccessful) lockdown and of social distancing on flattening the curve of population contagion. The exact 'decision rules' were unclear in the article, but the build-up of the storyline, from individuals getting infected to limiting the movement of individuals, was extremely clear. In this model, people were susceptible, sick, or recovered. There was no explicit implementation of mortality, due to an editorial decision by the newspaper, which deemed death too cruel for its readers.
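A minimal sketch of the kind of three-state (susceptible/sick/recovered) model described in this footnote might look as follows; all rules and parameter values are illustrative assumptions, not those of the Washington Post or Smaldino models:

```python
# Toy agent-based sketch: restricting the fraction of agents that move
# ("social distancing") lowers the epidemic's peak. All parameters are
# hypothetical; recovery takes 14 steps, each mobile sick agent meets
# one other mobile agent per step.
import random

def simulate(n_agents=500, mobile_frac=1.0, steps=300, seed=42):
    """Random-mixing toy epidemic; returns the peak number of sick agents."""
    rng = random.Random(seed)
    state = ["S"] * n_agents
    for a in range(5):                # seed a handful of initial infections
        state[a] = "I"
    timer = [0] * n_agents            # steps since infection
    peak = 5
    for _ in range(steps):
        mobile = [a for a in range(n_agents) if rng.random() < mobile_frac]
        sick_and_mobile = [a for a in mobile if state[a] == "I"]
        for a in sick_and_mobile:     # each mobile sick agent meets one other
            b = rng.choice(mobile)
            if state[b] == "S":
                state[b] = "I"
        for a in range(n_agents):     # recover after 14 steps of sickness
            if state[a] == "I":
                timer[a] += 1
                if timer[a] >= 14:
                    state[a] = "R"
        peak = max(peak, state.count("I"))
    return peak

# "Flattening the curve": fewer agents moving should mean a lower peak.
peak_free = simulate(mobile_frac=1.0)
peak_distanced = simulate(mobile_frac=0.2)
print(f"Peak, free movement: {peak_free}; peak, distancing: {peak_distanced}")
```

The point of such a sketch, like the newspaper's graphic, is illustration rather than prediction: the mechanism (limiting movement reduces the infection peak) is visible even though every number is made up.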