©Copyright JASSS


Benjamin M. Eidelson and Ian Lustick (2004)

VIR-POX: An Agent-Based Analysis of Smallpox Preparedness and Response Policy

Journal of Artificial Societies and Social Simulation vol. 7, no. 3

To cite articles published in the Journal of Artificial Societies and Social Simulation, reference the above information and include paragraph numbers if necessary

Received: 01-Oct-2003    Accepted: 18-Apr-2004    Published: 30-Jun-2004

* Abstract

Because conjectural 'thought experiments' can be formalized, refined, and conducted systematically using computers, computational modeling is called for in situations that demand robust quantitative study of phenomena which occur only rarely, or may never occur at all. In light of mounting concerns regarding the threats of bioterrorism in general and smallpox in particular, we developed a stochastic agent-based model, VIR-POX, to explore the viability of available containment measures as defenses against the spread of this infectious disease. We found the various vaccination and containment programs to be highly interdependent, and ascertained that these policy options vary not only in their mean effects, but also in their susceptibility to factors of chance or otherwise uncontrollable interference, relationships that themselves fluctuate across ranges of the counterfactual distribution. Broadly speaking, ring vaccination rivaled mass vaccination if a very substantial proportion of smallpox cases could be detected and isolated almost immediately after infection, or if residual herd immunity in the population was relatively high. Pre-attack mass vaccination and post-attack mass vaccination were equivalent in their capacities to eliminate the virus from the population within five months, but the pre-attack strategy did so with significantly fewer deaths in the process. Our results suggest that the debate between ring and mass vaccination approaches may hinge on better understanding residual herd immunity and the feasibility of early detection measures.

Smallpox, Bioterrorism, Agent-Based Modeling, Stochastic Simulation, Vaccination Policy

* Introduction

Smallpox virus, called by one expert "the world's most dangerous prisoner," is among the deadliest biological agents to ever plague human society (Tucker 2001). As such, the successful smallpox eradication campaign of the 1970s is still widely considered the greatest public health achievement to date. In light of this accomplishment, and because the remaining American and Soviet laboratory samples were ostensibly maintained for innocuous purposes of research, public vaccination programs around the world ceased in the late 1970s and early 1980s.

Unfortunately, recent events have resurrected the awful spectre of a smallpox epidemic - one triggered by a purposeful use of the disease as a weapon of terror. Serious concerns revolve around the fate of the Russian smallpox sample since the end of the Cold War, accentuated by the knowledge, obtained through defection, that Soviet scientists developed and tested a clandestine smallpox weapon (Alibek and Handelman 1999). The director of the Vektor Institute - which maintains and studies the Russian smallpox sample - attracted substantial media attention when he announced that, because researchers at the facility earn just $100 per month, they might easily be enticed by rogue states or terrorist groups (Tanner 2001).

Above all, however, the events of September and October 2001 have moved the threat of bioterrorism to the forefront of our collective consciousness. The various means of preparation for and response to a possible biological attack are now the subject of sometimes heated debate in the public, scholarly, and policy spheres. Because empirical research in this domain is constrained by a prohibitively small n - many of the weapons in question have never been used, and smallpox in particular has not been observed in the wild since its eradication - epidemiologists have increasingly turned to more creative techniques, hence the development of several computational models of smallpox with direct policy implications (Bozzette et al. 2003; Halloran et al. 2002; Kaplan et al. 2002; Meltzer et al. 2001). But with the notable exception of the Halloran et al. model, this research is rather uniform in its approach to the problem. For the most part, complex systems of equations are used to mathematically model the exponential growth of a virus like smallpox, presupposing a host of parameter values (most notably the number of new next-generation cases produced by each previous case). These "top-down" models, which calculate an outcome prediction from macro-level input data, serve as important tools in smallpox response planning, but we contend that they alone are insufficient and individually less useful than the kind of "bottom-up" agent-based modeling approach suggested here.

In the present study we leverage the power of "bottom-up" rather than "top-down" modeling, not to predict the future with complex algebra, but rather to learn how thousands of micro-interactions in a population contribute to emergent patterns of infection, death, and survival at the aggregate level. We view the singular course of the future not just as unknown, but also as, in its details, unknowable until after the fact. Accordingly, we have adopted an approach that produces thousands of possible futures, each determined by the experiences and interactions of thousands of micro-actors. These actors are themselves affected by random sequences of events, accidents of virtual "pre-history," and deliberate experimental manipulations. By producing distributions of futures we are able to distinguish between instructive fundamental properties on the one hand, and mere accidents of history on the other, while recognizing that in complex systems even highly controlled experiments can produce outcomes that vary widely in both likelihood of occurrence and predisposition to catastrophe. Since we use available social and behavioral theories to guide the design of interacting agents, our approach is also a significant improvement over existing models which generate effects from pathogenic and host characteristics in isolation from sociological and psychological factors. In sum, we developed a manipulable simulation model that combines knowledge gleaned from previous smallpox outbreaks with mathematical modeling techniques in order to maximize the explanatory power of that information in the context of variable circumstances. By incorporating insights from social psychology, epidemiology, and public policy, agent-based models can advance our understanding of smallpox and the dynamics of possible outbreaks.

* Methodology

Agent-Based Modeling

Traditional mathematical modeling has tended to treat social science rather like Newtonian physics: a fully predictable world comprised of traceable vectors and governed by known laws operating at the macro level. Such "top-down" systems are currently the gold standard in computational modeling, and they excel at explaining the interactions among small numbers of actors, or systems that can be broken down into separate dyadic relationships. However, because the social sciences are, as Murray Gell-Mann famously observed, as complex as physics would be "if particles could think," such approaches cannot be directly applied to the social world without assumptions that drastically and artificially reduce the complexity of the problem (Johnson 1999). As such, traditional approaches are typically incapable of capturing fully, or even representatively, the dynamics that result when agency is vested in a great number of micro-actors. How, then, can they be relied upon to help us understand distributed processes like the spread of smallpox?

This need for new, decentralized methodologies has prompted a growing interest in "bottom-up" or agent-based models (ABMs), which produce emergent macro-effects from micro-rules. Agent-based modeling allows us to study the effects of interactions among multitudinous processes or mechanisms whose chaotic patterns of change produce levels of complexity that are mathematically intractable and highly resistant to standard analytic techniques. If smallpox propagated through an undifferentiated and continuous medium, there would be no need for agent-based modeling. But it does not - on the contrary, smallpox is spread almost exclusively by extended face-to-face contact in populations of humans who are fundamentally autonomous agents. Therefore the dynamics of smallpox outbreaks cannot be understood without some theory of social networks and selection processes governing human interactions. Such a theory must be combined with a modeling approach capable of simulating the bottom-up aggregation of micro-interactions into macro-patterns that themselves can then shape outcomes.[1]

Simply put, while systems-dynamics or top-down models implement governing dynamics on an aggregate level, and then measure their effects on the individual or aggregate levels, bottom-up or agent-based models implement micro-rules to govern individual behavior for the purpose of studying their effects at the macro-level. A commonly cited example of this methodology and its power is Craig Reynolds' agent-based model of a flock of geese (Reynolds 1987). Considered at the macro level, algorithms capable of capturing natural phenomena of this type are extraordinarily difficult to develop; though clearly patterned, the aggregate product is highly non-linear, extremely complex, and, like Gell-Mann's thinking particles, lacks a "central mind" or "leader" (Macy 2002). By focusing on the level of local interaction, however, the task is much more easily managed. Rather than modeling a flock with macro-rules, Reynolds modeled a goose, using only three micro-rules: separation (maintain a certain distance from other geese), alignment (attempt to match the speed and direction of adjacent geese), and cohesion (move towards the perceived local center of mass). By modeling directly neither a group of geese as a single collective entity, nor an individual goose, but rather the flock as a collection of interacting individuals, Reynolds' ability to explain the chevron formation demonstrates the power and typical reasoning of agent-based modeling.
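Reynolds' three micro-rules can be sketched in a few lines of code. The function below is only an illustration of the logic described above, not Reynolds' actual boids implementation; the data layout, neighbor radius, and rule weights are all assumptions chosen for readability.

```python
import math

def steer(goose, neighbors, sep_dist=2.0):
    """Combine the three micro-rules into one velocity adjustment.

    `goose` and each neighbor are dicts with positions 'x', 'y' and
    velocities 'vx', 'vy'.  Returns the (dvx, dvy) change for this step.
    The 0.05 / 0.1 / 0.3 weights are illustrative, not Reynolds' values.
    """
    if not neighbors:
        return (0.0, 0.0)
    n = len(neighbors)
    # Cohesion: move toward the perceived local center of mass.
    cx = sum(g['x'] for g in neighbors) / n - goose['x']
    cy = sum(g['y'] for g in neighbors) / n - goose['y']
    # Alignment: match the mean velocity of adjacent geese.
    ax = sum(g['vx'] for g in neighbors) / n - goose['vx']
    ay = sum(g['vy'] for g in neighbors) / n - goose['vy']
    # Separation: push away from any neighbor closer than sep_dist.
    sx = sy = 0.0
    for g in neighbors:
        dx, dy = goose['x'] - g['x'], goose['y'] - g['y']
        if math.hypot(dx, dy) < sep_dist:
            sx += dx
            sy += dy
    return (0.05 * cx + 0.1 * ax + 0.3 * sx,
            0.05 * cy + 0.1 * ay + 0.3 * sy)
```

Applying this update to every goose on every time-step, with no macro-level coordination whatsoever, is what produces the emergent flock.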

Operationalizing Identity: The ABIR Model

As noted above, any investigation of the smallpox threat needs to be guided by a theory of how humans select their contacts with others. Applying to social science the same logic Reynolds employs with regard to geese raises the question of what rules (like Reynolds' separation, alignment, and cohesion) can be considered to govern social actors on the individual level. Constructivist identity theory presents itself as a relevant and convenient guide to this modeling problem, an approach grounded in strong research traditions across sociology, psychology, anthropology, and political science.[2] Broadly speaking, constructivism imagines individuals as boundedly rational adapters who form and reform groups by drawing presentations of self (identities or behaviors) from a limited repertoire of possibilities. These repertoires themselves can change in accordance with conformity principles and varying signals regarding the relative attractiveness of, or payoffs associated with, different identities or behaviors.

The Agent-Based Identity Repertoire (ABIR) model, originally developed by Lustick and Dergachev at the University of Pennsylvania in 1998 (Lustick 2000), operationalizes constructivist notions in a virtual social world within which precisely controlled experiments can be performed. An ABIR world or "landscape" consists of a grid of randomly distributed agents, represented by colored squares. Each agent has a "repertoire" of identities to which it is "subscribed," only one of which is currently "activated" (other subscribed identities are "deactivated"). Agents undergo a constant process of activating and deactivating their identities, as well as substituting attractive identities for others in their repertoires. This process moves forward in accordance with rules derived from constructivist theories and in response to the particular conditions confronting individual agents. These conditions include the influence of local agents activated on different identities and the variable signals available to all agents regarding the relative advantage or disadvantage of activating a particular identity. The virtual society evolves over time because of an ongoing exchange of these identities between adjacent agents, and the rotation by individual agents of their identities into and out of "activated" status.
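The constructivist update cycle just described can be sketched as a minimal agent class. This is a simplified stand-in for the ABIR implementation, whose source is not reproduced in this article; the class name, the attractiveness weighting, and the substitution rule are assumptions intended only to make the activation/deactivation mechanism concrete.

```python
import random

class Agent:
    """Minimal ABIR-style agent: a repertoire of identities, one activated."""

    def __init__(self, repertoire, rng):
        self.repertoire = list(repertoire)         # subscribed identities
        self.active = rng.choice(self.repertoire)  # currently activated
        self.rng = rng

    def update(self, neighbor_identities, bias):
        """Activate the locally most attractive subscribed identity.

        Attractiveness here = prevalence among adjacent agents' activated
        identities, plus a global bias signal (positive or negative)
        attached to each identity - an illustrative weighting only.
        """
        def weight(i):
            return neighbor_identities.count(i) + bias.get(i, 0)
        best = max(self.repertoire, key=weight)
        if weight(best) > weight(self.active):
            self.active = best

    def substitute(self, identity):
        """Swap an attractive outside identity into the repertoire."""
        if identity not in self.repertoire:
            self.repertoire[self.rng.randrange(len(self.repertoire))] = identity
```

Iterating `update` and `substitute` across a grid of such agents yields the ongoing exchange of identities, and the rotation into and out of "activated" status, that drives the virtual society's evolution.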

ABIR thus provides a generalized framework for testing hypotheses in a wide variety of domains - indeed in any domain characterized by large numbers of interactions among entities with a relatively stable but not perfectly fluid repertoire of traits that can become relevant to behavior under particular circumstances. Here this framework is used to study key questions regarding the consequences of a smallpox attack - a topic as yet only rarely studied with ABM, but particularly well-suited to it.

The VIR-POX Model and Simulated Containment Measures

One of the crucial methodological questions facing any modeler, computer-based or otherwise, is the level of analysis on which to focus. Because of our interest in social networks and individual interaction, we sought to construct a model that could be understood as encompassing either a small community, or a relatively small random sample of a larger city-like population. In the latter case focusing on only a sub-sample of agents allows us to understand more fully the interactions on a local level, while still making inferences about the larger group (essentially a collection of these communities) from these data.

The VIR-POX model consists of a 42 × 42 square array of agents endowed with randomly-assigned identity repertoires at the beginning of the simulation, and arranged in terms of their social, rather than geographic, relationships and proximities. In VIR-POX each time-step is understood as corresponding to one day, and experiments are run to approximately five months from the time of initial infection (t=150).[3] After eight days of the simulation, during which the social networks in the landscape have a chance to stabilize from their initial random distribution, one agent becomes infected with smallpox.[4]

A fairly simple algorithm then controls the spread of the virus to adjacent agents, with a variable probability of contagion depending on the number of infected contacts, their stages of infection, and the stipulation that an agent is more likely to contract the virus from an infected neighbor that shares its activated identity than from one that does not (a more distant social contact).[5] Two typical runs at t=130, with no immunity in the population and no policy intervention implemented to control the spread of the virus, are shown in the top half of Figure 1. Black cells indicate agents that contracted the virus and subsequently became isolated from their normal social environment, either by death or by relocation to a medical facility, and cells marked with an "X" indicate infected and contagious agents that have not yet died or sought medical treatment.
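The contagion rule can be illustrated as a per-day infection probability. The sketch below captures the three dependencies named above (number of infected contacts, stage of infection, and shared activated identity); the base rate, stage weights, and identity bonus are illustrative values, not the calibrated VIR-POX parameters.

```python
def infection_prob(agent, neighbors, base=0.05,
                   stage_weight=(0.0, 0.5, 1.0), bonus=2.0):
    """Probability an agent contracts smallpox this time-step (one day).

    Each infected neighbor contributes risk according to its stage of
    infection (incubating, prodromal, fully symptomatic), with extra
    weight if it shares the agent's activated identity - i.e. is a
    close social contact.  All parameter values are illustrative.
    """
    p_escape = 1.0
    for nb in neighbors:
        if nb['stage'] is None:              # neighbor not infected
            continue
        risk = base * stage_weight[nb['stage']]
        if nb['identity'] == agent['identity']:
            risk *= bonus                    # close social tie transmits more easily
        p_escape *= (1.0 - risk)             # independent exposures
    return 1.0 - p_escape
```

Treating each exposure as independent and multiplying escape probabilities is the standard way to aggregate risk from multiple infectious contacts.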

Four possible policy interventions were operationalized and simulated at various levels of implementation: pre-attack mass vaccination, post-attack mass vaccination, ring (contact) vaccination, and early detection and isolation. In the first two conditions some percentage of the population is chosen randomly and vaccinated, either at t=0 in the pre-attack manipulation, or at t=25 (15 days after the first clear clinical presentation of smallpox) in the post-attack manipulation.[6] Ring vaccination, the third experimental condition, is a process by which known social contacts of an identified case undergo prophylactic or postexposure vaccination. In our model, vaccination neutralizes the virus in already-infected agents if performed within four days of infection. Typical visual examples of the effects of post-attack mass vaccination and ring vaccination measures are shown in the bottom half of Figure 1.

We presume that in absolute terms no containment measure could compete with 100% pre-attack mass vaccination, but because vaccination is expensive and itself a threat to public health, the "effectiveness" of the various protocols was measured by their success relative to the number of vaccinations employed (i.e., the protection they provide to those not actually vaccinated). As such, the first dependent measure used to judge the effectiveness of the containment measures in a particular virtual history was the total number of agents infected over the 150 simulated days divided by the number of unvaccinated agents in the community. The second dependent measure was the percentage of virtual histories in which a particular set of policy choices resulted in zero currently-infected agents at t=150. That is, how likely were these choices to eliminate the virus entirely by five months after the point of initial infection?
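In code, the two dependent measures reduce to simple aggregations over a set of virtual histories. The per-run field names below are hypothetical, chosen only to make the two definitions explicit.

```python
def dependent_measures(histories):
    """Compute the study's two outcome measures over a set of runs.

    Each history is a dict with 'total_infected' (infections over the
    150 simulated days), 'unvaccinated' (agents never vaccinated), and
    'infected_at_t150' (currently-infected count at day 150).
    Returns (mean infections per unvaccinated agent, eradication rate).
    """
    rel_infection = sum(h['total_infected'] / h['unvaccinated']
                        for h in histories) / len(histories)
    eradicated = sum(1 for h in histories if h['infected_at_t150'] == 0)
    return rel_infection, eradicated / len(histories)
```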

Figure 1
Figure 1. Four different snapshots of the VIR-POX model in operation
Black cells represent agents that contracted the virus and subsequently became isolated from their social environment, either by death or by relocation to a medical facility. Cells marked with an "X" indicate infected and contagious agents that have not yet died or sought medical treatment. An agent's color indicates its currently activated identity.

Thinking Stochastically: Counterfactual Analysis

By allowing us to introduce random but replicable variation into our simulations, agent-based modeling enables us to view the future as a distribution of possible outcomes, or counterfactuals, rather than as something determined with the particular singularity with which it will ultimately unfold. Stochastic simulation is powerful because it allows us to make nuanced statements about the character of that distribution. The replicable variation that makes this possible stems from an algorithmic feature known as "seeding." In any given experiment with the VIR-POX model, all the random calculations that determined the initial composition of the virtual society and the punctuations it experienced over time were derived mathematically from a single "seed" number. By reusing a fixed set of these seeds across experimental conditions - effectively rewinding and playing back history, replicating exactly the same events with different values for our independent variables only - we produced highly controlled experiments with "twin" simulations, enabling systematic pair-wise comparison.
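The matched-seed design can be demonstrated with a toy stand-in for a VIR-POX run. The `run_condition` function below is purely hypothetical; the point is only that when all randomness flows from a single seed, reusing that seed across conditions replays the same history with only the independent variable changed, producing paired "twin" futures.

```python
import random

def run_condition(seed, vaccination_level):
    """Hypothetical stand-in for one simulation run.  All random draws
    derive from the single seed, so reusing the seed replays the same
    stream of accidents under a different experimental manipulation."""
    rng = random.Random(seed)
    base_outbreak = rng.random()  # identical across conditions for a given seed
    return base_outbreak * (1.0 - vaccination_level)

# Matched-seed design: the same 100 seeds under every condition.
seeds = range(100)
baseline = [run_condition(s, 0.0) for s in seeds]
treated = [run_condition(s, 0.45) for s in seeds]
# Pair-wise comparison is exact: twin runs share their accidents of history.
assert all(t <= b for t, b in zip(treated, baseline))
```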

In these "matched seed" experiments we found that communities sharing all their defining properties (e.g., matrix size) varied substantially in their baseline susceptibility to the spread of smallpox. This suggests that the particular streams of perturbations associated with the different seeds used in each experimental condition vary by the degree to which they open or close opportunities for pathogenic spread. By noting the different infection rates yielded by these one hundred distinct (but randomly generated) streams before introducing any containment measures, we were able to distinguish relatively benign streams of particularly sequenced perturbations from relatively malignant ones. Treating the infection levels produced by the streams associated with different seeds under baseline conditions as a continuous measure of what we will call "seed vulnerability" allowed us to examine the differential effects of containment measures on those futures more or less predisposed to undesirable outcomes. In line with our intuition that a virus spread by close contact will spread more rapidly among close social networks, we found that seeds whose streams of accidents resulted in communities with more social heterogeneity tended to exhibit significantly less vulnerability to widespread infection, compared to other random seeds that produced communities featuring relatively little social diversity.

* Results

A total of 19,600 virtual histories were simulated with a fixed set of one hundred seeds and different combinations of the above interventions, at various levels of each manipulation. The independent main effects of each of the three containment measures are displayed in Figures 2 and 3. As expected, all interventions reduced the level of smallpox infection, though they varied significantly in terms of both the level of infection over the course of the simulation, and the likelihood of eliminating the virus by the five-month mark.

Pre- and Post-attack Mass Vaccination

Perhaps the most prominent debates regarding smallpox preparedness and response policy rage over the necessity of mass vaccination. Because the perceived risk of biological attack has increased significantly in recent years, many argue that reintroduction of the smallpox vaccine, which would raise the general level of immunity in the population and make a crisis situation more manageable, is both justified and necessary (e.g., Bicknell 2002; Millar 2001; Tanner 2001). Others endorse mass vaccination as an intervention after a confirmed smallpox attack (e.g., Kaplan et al. 2002). Still others - apparently the majority, including the Centers for Disease Control (CDC) - do not support mass vaccination even after an attack, suggesting that the adverse effects of the vaccine outweigh the benefits, and that rigorous programs of ring vaccination and early detection will be sufficient regardless (e.g., CDC 2002; Mack 2003). VIR-POX brings to these debates quantitative data regarding a range of simulated outbreaks. The dynamic effects of pre- and post-attack mass vaccination strategies will be contrasted here, and these mass vaccination approaches will be compared to targeted vaccination options (ring vaccination) in the following section.

Figure 2
Figure 2. Main effects of mass vaccination

Figure 2 demonstrates that pre-attack mass vaccination sometimes provided a significant additional benefit compared to the post-attack alternative. As we would expect, this differential between pre- and post-attack protocols increased with the level of mass vaccination in question - if relatively few individuals were vaccinated, it was unimportant whether this was performed before or after the attack, but if the same treatments were applied to a higher proportion of the community (i.e. 75% or more), pre-attack mass vaccination resulted in less than half as many infections as the same level of vaccination implemented post-attack. In addition, the effectiveness of post-attack vaccination was inhibited by an apparent floor at high levels of implementation, which did not exist in the pre-attack vaccination scenarios. This suggests that allowing the smallpox virus to spread unchecked for even a short period may significantly limit the potential efficacy of subsequent mass vaccination, even if a very high percentage of the population can eventually be treated. Finally, especially when mass vaccination is employed before an attack, the sharpest decline in the rate of infection of susceptible agents occurred between the 30% and 45% levels of implementation, and the vast majority of the total reduction was achieved by this point.

The rightmost part of Figure 2, which shows the proportion of virtual histories in which the virus had been entirely eliminated by the end of the simulation, suggests a more nuanced interpretation of the differences between pre- and post-attack mass vaccination approaches. At moderate levels of implementation, vaccinating before an attack was slightly more likely to eliminate the virus by t=150 than was the same intervention after an attack, though this difference approached statistical significance only at the 45% level (p = .058). When higher or lower percentages of the population were vaccinated, the two conditions yielded results that are essentially identical in this respect. In addition, neither approach eliminated the virus in more than half of potential futures unless 60% or more of the population was treated, at which point the likelihood of successful eradication rose from 36% for pre-attack and 23% for post-attack strategies at the 45% vaccination level, to 83% and 81% respectively (at the 60% level). Eradication became almost certain (> 97% likelihood) when 75% of the population was vaccinated, a finding that resonates with others' empirically-based suggestion that, as a general rule, 70-80% of a population must be vaccinated to eradicate smallpox (Anderson and May 1991).

Accordingly we infer that pre-attack and post-attack vaccination approaches vary little in their capacity to effectively eliminate smallpox from a community within our five-month time frame, when implemented with moderate or high coverage levels, but also that pre-attack techniques accomplish this with significantly fewer infections (and subsequent deaths) in the process. This difference, of course, must be balanced against the difference in deaths projected to result from the processes of pre- and post-attack mass vaccinations themselves.

Security is another key concern in public health policy. As such, we are interested not only in mean infection levels, but also in the susceptibility of various containment measures to complication by factors of chance or by the intrinsic vulnerability of a particular community. This can be measured by the strength of the correlation between the outcome a seed produces after a particular intervention, and that seed's overall vulnerability (as measured by the scale of the outbreak in the "twin" simulation run that employed the same seed without any vaccination programs).
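This correlation can be computed directly from the paired twin runs. The sketch below is a plain Pearson correlation over matched-seed outcomes; the variable names are illustrative.

```python
def pearson_r(xs, ys):
    """Pearson correlation between seed vulnerability (baseline outbreak
    size per seed) and the outcome the same seed yields under an
    intervention.  R-squared is simply this value squared."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sdx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sdy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sdx * sdy)
```

A value near zero means the intervention has decoupled outcomes from the community's intrinsic susceptibility; a value near one means historical accident still dominates.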

This method of comparison yields several important findings regarding the differential contours of pre- and post-attack mass vaccination approaches. Interestingly, only a 15% pre-attack mass vaccination rate was required to reduce the relevance of uncontrolled accidents and social composition by over 70% (R² = .29 rather than 1.00 without vaccination), and to reduce this relationship to the point of statistical non-significance in the less vulnerable half of the seed distribution. With the same level of vaccination post-attack, on the other hand, the correlation between seed vulnerability and infection was substantially stronger (R² = .50), and this relationship remained statistically significant throughout the distribution.

The inclusion of some level of immunity in these simulations at t=0 need not necessarily indicate a new vaccination effort undertaken in anticipation of a smallpox attack. Results obtained under these conditions can also be interpreted in terms of a society with that degree of "herd immunity" - residual protection against smallpox among individuals inoculated before public vaccination ended in the 1970s. Estimates for residual immunity in the current UK population are in the 10% to 18% range (Gani and Leach 2001; Henderson 1998). Our finding that even these relatively low levels of prior immunity might be important to outcomes underscores the importance of further investigation into the level of herd immunity remaining in various populations. Indeed, very recent research suggests that, contrary to the prior consensus, effective smallpox immunity may subsist in more than 90% of vaccine-recipients for as long as 75 years (Hammarlund et al. 2003). If this were the case, more than 90% of Americans over the age of 35 (approximately 140 million people) might already be fully or partially immune, and our 30% or 45% pre-attack vaccination scenarios could be considered as the new "baseline."

In our simulations, the importance of even slight prior immunity in the population is highlighted by the finding that a pre-attack vaccination rate of 60% did not significantly reduce the relevance of seed vulnerability as compared to the 15% level (r = .43 rather than r = .54). At the 60% level there remained a sharp increase in infection levels among the most vulnerable quarter of seeds, but at the 75% vaccination level this feature disappeared, and the strength of the correlation between a seed's vulnerability and the future it yields declined sharply (r = .14), such that this relationship became non-significant throughout the distribution. Post-attack mass vaccination, on the other hand, yielded distributions such that vulnerability of the seed employed was significantly correlated with the post-vaccination outcome, even with 75% of the population treated (r = .25 at 60% and r = .37 at 75%). In addition to yielding lower infection levels in absolute terms, then, pre-attack vaccination was also less subject to the whim of historical accident, or the intrinsic susceptibility of a particular community, than was post-attack vaccination.

In sum, pre-attack vaccination of only a slight or moderate portion of the population could usually control for the specifics of the particular circumstances confronted, but even a relatively high level of post-attack vaccination typically could not. Waiting until after an attack to vaccinate the public was thus a considerably riskier policy, resulting in more deaths on average and significantly increasing the relevance of uncontrollable factors like the intrinsic susceptibility of a particular community. Interestingly, post-attack vaccination incurred little additional risk over prior vaccination in terms of eradicating the virus by the end of the five-month period. Of course, final evaluation of the two measures must be based in part on variables not included in VIR-POX, including their respective costs in terms of finances, logistics, and vaccine-related deaths; ample information regarding these factors is available elsewhere.[7]

Ring Vaccination

Figure 3
Figure 3. Main effects of ring vaccination

Ring vaccination, a process by which known social contacts of identifiable smallpox cases are traced and vaccinated, is the current gold standard in smallpox response policy. It is credited with eradicating smallpox from the world population in the 1970s, and serves as the core of the current CDC response plan (Fauci 2002; CDC 2002). As seen in the leftmost part of Figure 3, in our simulations any amount of ring vaccination helped to protect the unvaccinated population from infection, and higher percentages of vaccination decreased infection in a linear fashion. Moreover, extremely thorough (i.e. 90%) ring vaccination was only slightly less effective at protecting those it did not directly vaccinate than was even a high level (i.e. 75%) of post-attack mass vaccination. This ring vaccination effect was achieved with an average of about 150 agents vaccinated, as compared to over 1,300 in the mass vaccination scenario.

Unfortunately, the efficacy of ring vaccination was also significantly less stable or predictable than that of mass vaccination, particularly in conditions of high seed vulnerability. The distribution of futures was wide and diffuse, and the vulnerability of the seed employed remained highly relevant to outcomes (r = .50). The variance increased substantially in the more vulnerable half of the distribution, such that, though the overall mean proportion of unvaccinated agents infected was only 6%, there was an 18% likelihood of instead obtaining values between 10% and 18%. Thus, though ring vaccination was often nearly as effective as post-attack mass vaccination, it did not consistently attain these values when faced with relatively unfortunate seeds.

Turning to the second dependent measure (elimination of smallpox within five months), we find that ring vaccination was glaringly ineffective, even at the highest coverage levels. With 90% of contacts vaccinated, for instance, there was only a 45% chance of eradicating the virus during the five months post-infection. Even this relatively weak return was highly contingent on the intrinsic susceptibility of the community, such that among the more vulnerable half of seeds, there was only a 26% chance of eradicating smallpox with 90% of contacts vaccinated. Implemented as an independent policy, then, ring vaccination often but unreliably yielded decreased levels of infection comparable to those achieved by post-attack mass vaccination, and did so without nearly as many actual vaccinations, but typically failed to eliminate smallpox from the community in its entirety (at least during the five month time frame studied with VIR-POX).

When coupled with 15% immunity distributed randomly in the population at t=0, simulating a low level of residual herd immunity, ring vaccination became somewhat more effective, but still did not rival mass vaccination approaches on either dependent measure. With 90% of contacts traced and vaccinated and 15% herd immunity, the virus was fully eradicated within five months in 59% of virtual futures, and the mean proportion of unvaccinated agents infected was 2.8%; without any herd immunity, these measures were 45% and 4.3%, respectively. With the substantially higher level of residual immunity suggested by Hammarlund et al. (2003), approximated here as 30% or 45% effective immunity distributed randomly at t=0, ring vaccination became a much more viable alternative to mass vaccination. As seen in Figure 4 below, 45% pre-existing immunity in the population allowed ring vaccination to successfully eliminate the virus within five months in more than 90% of simulated futures, even when the proportion of contacts vaccinated was relatively low (e.g. 60%). The scale of the infection itself was also significantly reduced - for instance, from 6.1% infection of unvaccinated agents with 75% ring vaccination and no herd immunity, to 1.3% infection with the same vaccination and 45% herd immunity.

Figure 4. Effect of herd immunity on ring vaccination


The mechanisms and duration of residual immunity are not well understood, and as such the recent suggestion that a relatively high proportion of the current population remains immune to smallpox, though important, must be considered optimistic. Accordingly, we conducted additional experiments to determine whether the apparent ceiling on the efficacy of ring vaccination could be raised without introducing prior immunity, instead increasing the proportion of cases detected and isolated during the early prodromal phase of the disease. As seen in Figure 5, even slightly increasing the prodromal detection rate greatly improved the viability of ring vaccination as a means to eliminate the virus, particularly at the highest coverage levels (i.e. 75% and 90% of contacts vaccinated). Without additional early detection measures, vaccinating 90% of contacts eliminated the virus in only 45% of virtual futures; but when infected agents had a 30% chance of being detected and isolated each day of prodrome, this increased to a 75% likelihood of eradication, and with 45% detection per day, ring vaccination eradicated the virus within the five-month timeframe in 95% of potential futures. Higher levels of prodromal detection also brought the effect of 75% ring vaccination progressively closer to that of the 90% level, such that the maximum or near-maximum return could be obtained with a somewhat less robust vaccination program. Increasing the prodromal detection rate affected the first dependent measure (infected agents relative to unvaccinated ones) as well, such that ring vaccination surpassed 75% post-attack mass vaccination when 75% of contacts were vaccinated and infected individuals had a 30% chance of detection each day of prodrome, and even surpassed the same level of pre-attack mass vaccination when 90% of contacts were vaccinated and the daily prodromal detection rate was raised to 45%.
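A per-day detection chance compounds over the prodromal phase, which is why even modest daily rates improve outcomes so markedly. The sketch below computes the cumulative probability of detection, 1 - (1 - d)^n; the three-day prodrome used for illustration is our assumption here, not a parameter taken from the VIR-POX specification.

```python
def cumulative_detection(daily_rate, days):
    """Probability that an infected agent is detected at least once
    during the prodromal phase, given an independent chance each day."""
    return 1.0 - (1.0 - daily_rate) ** days

# Assuming, purely for illustration, a three-day prodrome:
for d in (0.30, 0.45):
    print(f"{d:.0%}/day -> {cumulative_detection(d, 3):.1%} over 3 days")
```

Under this assumption, a 30% daily rate detects roughly two-thirds of cases before the rash phase, and a 45% daily rate detects more than four-fifths.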

Figure 5. Effect of early (prodromal) isolation on ring vaccination

These findings, which suggest that the effect of ring vaccination is highly dependent on external factors, also highlight an important thread in the ongoing debate over smallpox response policy. Bicknell (2002) typifies common concerns regarding ring vaccination techniques when he suggests that mainstream confidence in them is "predicated on certain assumptions that merit scrutiny." Most importantly, he argues that effective ring vaccination requires identification of infected persons within the narrow postexposure period when vaccination can still be effective (about four days). But "a person may be infective for several days before smallpox is clinically obvious," and accordingly he suggests that ring vaccination alone does not present a viable option for smallpox response policy.

A modified version of the model was developed in order to test the viability of ring vaccination when cases are not identifiable for several additional days after the end of the incubation period and to determine the extent to which ring vaccination can still be effective under these "exacerbated" circumstances.[8] The results of these experiments are displayed in Figure 6 and compared to the standard dataset.

Figure 6. Effectiveness of post-attack mass vaccination and ring vaccination under standard and exacerbated conditions

VIR-POX appears to corroborate Bicknell's argument. As we might expect, both exacerbated curves run above their standard counterparts. But whereas the mass vaccination curves maintain a comparatively small difference across the various levels, which holds fairly constant until the two rejoin at the highest levels of implementation, the ring vaccination curves diverge sharply as the percentage of contacts vaccinated increases. In fact, while three out of the four scenarios (standard and exacerbated mass vaccination, and standard ring vaccination) all converge at the 90% level, even this high level of ring vaccination decreased infection only slightly under the exacerbated condition. Thus whereas under the standard conditions 75% post-attack mass vaccination yielded 78% of the infection level of 90% ring vaccination, under these exacerbated conditions mass vaccination resulted in only 30% as much infection (of unvaccinated agents) as did ring vaccination.

* Discussion

Too often research programs with policy applications are pressured toward making "point predictions" and defining themselves as inquiries into what, in the singular, will actually occur. This is akin to demanding that a meteorologist predict the number of snowflakes that will fall on a lawn in a coming storm, rather than offering reliable but probabilistic statements about likely snowfall for the region. Even if these demands are narrowed by specifying conditions that must obtain for the prediction to be accurate - as previous smallpox models have done by differentiating between laboratory release, human vector, and building attack scenarios (Bozzette et al. 2003) - the fundamental assumption is the same: once key aspects of a situation are known, the exact contours of the future become predictable, down to the precise body count.

This approach is understandably attractive. But is there really such a thing as the future? Since no theory imaginable could be a theory of everything, some elements relevant to outcomes will lie outside the theoretical domain of even the best investigation. As such, only the shape of the distribution of possible outcomes can be predicted, never the actual outcome. The research presented here strives to operate within these constraints, hoping to improve the quality of policymaking based on judgments about the future while still recognizing that it is fundamentally unknowable until it becomes history. By using the computer not as an instrument of prophecy, but rather as an efficient tool for quickly conducting many controlled thought experiments, we are able to systematically analyze the relative and differential strengths of the policy options at our disposal.

Because it employs so many fewer vaccinations, ring vaccination would be the clear policy of choice if it were as effective as mass vaccination protocols. Our experiments with VIR-POX suggest that this may indeed be the case, but only if certain conditions are met. Specifically, we find that ring vaccination rivals mass vaccination when (1) a very high proportion of smallpox cases can be identified and isolated within days of the onset of prodrome, and nearly all the close social contacts of each infected case can be traced and vaccinated, or (2) herd immunity in the population is very high (i.e. > 30%). The first of these conditions suggests that some sort of media campaign to promote self-diagnosis might be a crucial component of a successful ring vaccination-based response plan. But if Bicknell is correct that smallpox can be contagious for several days before its victims become symptomatic, our work with VIR-POX suggests that, in the absence of substantial herd immunity, ring vaccination alone might never be adequate. Ring vaccination alone also resulted in the most diffuse and variable distribution of futures of any of the protocols examined with VIR-POX, making it the most susceptible to random complication when confronted with vulnerable seeds. Serious consideration should be given to all of these issues before the status quo policy embodied in the CDC response plan, of ring vaccination first or only, is accepted as the best alternative (CDC 2003).[9] Still, if herd immunity is determined to be high, or if a very robust program of early detection can be implemented, our work suggests that ring vaccination could be reliable and effective, without the costly overhead (in terms of money and vaccine-related deaths) that would result from mass vaccination.

Limitations and Context

The methodology employed here can provide a unique perspective on the problem of smallpox bioterrorism, generating new insights and highlighting important questions, but it is not without its own set of systemic limitations. First, the use of a square lattice in the model implicitly stipulates that each agent in the community will have eight and only eight social contacts to which it might communicate the disease (north, east, south, west, and the four diagonals). Because in our model there is almost an 80% cumulative likelihood of an infected agent having been removed from its social environment by five days after the beginning of prodrome, the eight-node system results in an epidemic with an R0 value (the mean number of secondary cases produced by a primary case) of about 2-3. This number corresponds well with the suggested range of empirical R0 values for smallpox. Meltzer et al. (2001), for instance, conclude from studying 20 previous outbreaks around the world that "2 persons infected per infectious person...is the most accurate representation of previous transmission rates," but note that because of the intervening decline in herd immunity, "the average transmission rate following a deliberate release of smallpox might be > 2." Still, controversy remains and others have suggested an R0 value of 4-6 (Gani and Leach 2001) or even 10 or more (Henderson 1999).
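The relationship between the eight-contact neighborhood, the removal schedule, and the resulting R0 can be made explicit with a back-of-the-envelope calculation. The sketch below is a simplification under assumed parameters (a flat 10% per-contact daily transmission probability, chosen for illustration, and a constant daily removal hazard calibrated to the ~80% five-day removal figure cited above); it is not the model's actual transmission algorithm.

```python
def expected_r0(contacts, p_transmit, p_remove, max_days=60):
    """Expected secondary infections from a single case: each day the
    case remains circulating it exposes each of its contacts once."""
    r0, circulating = 0.0, 1.0
    for _ in range(max_days):
        r0 += circulating * contacts * p_transmit
        circulating *= 1.0 - p_remove
    return r0

# Calibrate a constant daily removal hazard to the ~80% cumulative
# chance of removal within five days: 1 - (1 - p)**5 = 0.80.
p_remove = 1.0 - 0.20 ** (1.0 / 5.0)      # about 0.275 per day
r0 = expected_r0(contacts=8, p_transmit=0.10, p_remove=p_remove)
# With these assumed values r0 lands in the 2-3 range quoted in the text.
```

The calculation makes clear why the eight-node lattice, combined with rapid removal, yields an R0 near the empirical estimates of Meltzer et al. rather than the higher values suggested elsewhere.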

Additionally, the square matrix format mandates that two adjacent agents will share some of their other neighbors. For example, a given agent and its eastern neighbor both border the first agent's northern, southern, northeastern, and southeastern neighbors. Thus our model may be located towards the more closely-knit end of a larger social distribution, best representing communities in which contacts of contacts of an agent are often also the first agent's contacts in their own right. In more diffuse social environments, even scenarios without any vaccination protocols or residual immunity would probably not result in the striking "black hole" of infection observed in the top half of Figure 1.
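The neighbor-sharing property described above is easy to verify directly by enumerating Moore neighborhoods. The snippet below (our own illustration, not part of VIR-POX) confirms that two horizontally adjacent agents share exactly four contacts.

```python
def moore_neighbors(x, y):
    """The eight lattice contacts (Moore neighborhood) of agent (x, y)."""
    return {(x + dx, y + dy)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)}

agent = (0, 0)
east = (1, 0)
shared = moore_neighbors(*agent) & moore_neighbors(*east)
# shared holds the first agent's northern, southern, northeastern,
# and southeastern neighbors: four agents in common.
```

This overlap, one half of each agent's contact set, is what situates the model toward the closely-knit end of the social spectrum.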

The size of the matrix appears to raise questions of boundary conditions, but in fact none of the simulated epidemics with substantial policy intervention ever radiated outward far enough to come into contact with this limitation, and even in the unrealistic "zero vaccination" scenarios, including no herd immunity or policy intervention of any kind, the epidemic did not reach the edge of the lattice by the chronological endpoint of the simulation (five months from the time of initial infection). The fact that recent historic outbreaks rarely resulted in more than about 150 cases of smallpox also suggests that a 42 × 42 square is large enough for experimental purposes (Meltzer et al. 2001).

Finally, there is no actual mobility in our model: agents maintain their eight contacts for the duration of the simulation. This is of limited concern, however, first because dynamism is maintained by the constant activation and deactivation of identities, such that the subset of an agent's eight contacts with which it most closely associates changes from day to day, and second because smallpox is not generally believed to have significant transmission potential through casual contact (e.g., passing in the street), instead requiring the kind of prolonged face-to-face exposure one would likely only have with social contacts of some kind or another (CDC 2002). This latter point is still debated, however, and if smallpox turned out to be easily communicated through air in public venues, dynamics significantly different from those reported here would likely result. In addition, as described above, we studied the impact of smallpox on a community level only. If a bioterrorist attack resulted in many widespread nuclei of infection, our findings still apply to each individual nucleus, but there might be some ways in which the overall outbreak would be "more than the sum of its parts."

Our model and results make for several interesting contrasts with previous work in the field. Of the smallpox models identified in the literature, only Halloran et al. (2002) employed an agent-based approach, and it is with their findings that our own are most consonant. Their choice to use a single fixed social structure (four neighborhoods, one high school, one middle school, etc.), when contrasted with our own decision to employ constructivist processes in the abstract and implement the social network as a fully stochastic element of the simulation, presents an opportunity to cross-validate both our models. If both do indeed fulfill their respective purposes, the particular world created by Halloran et al. ought to be subsumed within our distribution of counterfactuals. Because they sought to design a relatively standard community, our findings should and do accord well. For instance, we both find that low levels of general immunity can provide disproportionately large returns, and Halloran et al. find, as we did in our "exacerbated parameters" condition, that "assuming that no one stays home during the prodromal period...the effectiveness of targeted vaccination decreas[ed] more than that of mass vaccination." We also both conclude that residual immunity in the population can be a very important factor in determining outcomes. These harmonies in the findings of two unrelated agent-based models of smallpox, each of which operationalized key variables quite differently, speak to the robustness of the properties uncovered and, probably, to the general power of the "bottom-up" method. Comparing our results with those of Bozzette et al. (2003) is more difficult, simply because the methodologies differ to the point not only of answering questions differently, but also of asking different questions. That being said, they found, as we did, that ring vaccination could be effective, but also recommended prior vaccination for health care workers. This suggestion agrees, in a very broad sense, with the current study's finding (and that of Halloran et al.) that a relatively low level of pre-event mass vaccination can yield a disproportionately large return in terms of containing the infection, though we did not distinguish between different classes of agents with different roles in the community.

On the other hand, the equation-based model developed by Kaplan et al. (2002), led them to take the minority position that mass vaccination ought to be preferred over ring vaccination. They plotted a breakeven curve for mass and targeted vaccinations, based on various estimates of the base reproductive rate of smallpox (R0) and initial attack size, and identified a significant region, characterized by higher R0 values, in which mass vaccination is superior to targeted (ring) vaccination. Increasing R0 is functionally similar to employing our "exacerbated parameters" condition, and we too find mass vaccination significantly more desirable, and more likely preferable to ring vaccination, under these circumstances.

Agent-based modeling and the systematic counterfactual thinking it enables have important roles to play in modern defense and security planning. Our application of these methods to the study of smallpox leads us to several general insights. First, we find that the effectiveness of the various containment measures is highly dependent upon the presence or absence of the others, and on the levels at which each is applied. Second, we ascertain that the different policy options vary not only in their mean effects, but also in their subordination to factors of chance or intrinsic susceptibility, and that these relationships themselves fluctuate across ranges of the counterfactual distribution. As such, planning for the worst 50%, 25%, or 10% of futures would entail measures significantly different from those we would employ if considering only a distilled "average future." Finally, we learn which issues may merit investment in further learning, namely the degree of residual "herd immunity" in today's population from pre-eradication vaccination programs, and the facility of detecting many smallpox cases very quickly after infection. As general principles, we conclude that (1) policymakers ought not to view any of the options at their disposal as all-or-nothing endeavors, as many of them are quite potent even at very moderate levels of implementation; (2) a policy option should be evaluated on the dual bases of its mean consequence and the relative amount of variability in its effect; and (3) planning only for the most likely outcomes may leave us woefully unprepared for real possibilities of catastrophe lurking in the malignant tail of the distribution.

* Acknowledgements

The authors appreciate helpful comments from Roy Eidelson, David Weiner, and three anonymous reviewers of this journal. We also wish to thank the National Science Foundation for its support of this research under Grant # 0218397.

* Notes

1 See Koopman 2002 for an extensive comparison of the strengths and weaknesses of homogeneous (top-down) and heterogeneous (bottom-up) methodologies in the context of smallpox modeling.

2 For representative work deploying constructivist models of human identity and behavior in various disciplines see: Eley and Suny 1996; Kowert and Legro 1996; Nagel 1994; Verdery 1991; Abrams and Hogg 1990; Hogg and McGarty 1990; Stryker and Serpe 1982; and Huddy 2001.

3 The five-month timeframe is based on a conservative approach to available data. Of the last pre-eradication smallpox outbreaks, one of the longest had a duration of 80 days from onset to resolution (Foege et al. 1975). We doubled this duration so as to gauge the effectiveness of response strategies that might take substantially longer than those employed earlier in the century, in part because of the probable lapse in herd immunity since smallpox was last confronted.

4 We considered including scenarios with multiple first-generation cases - i.e. in which more than one agent is infected at the outset. In this initial series of experiments, we chose not to explore the implications of such scenarios. In any event, when the matrix is understood as a social cross-section of a larger population, a single initial case does not necessarily indicate an attack that initially infects only one person; it merely suggests that only one of the 1,800 people monitored in the simulation was among those infected.

5 For the sake of transparency, this algorithm is fully explicated in the appendix.

6 For the purposes of our simulation model, "vaccination" refers to successful inoculation such that the recipient becomes fully immune to the smallpox virus. Accordingly, "failed takes" and the imperfection inherent in the vaccine might require that it be administered to some portion of the population greater than 50% in order to achieve the effect of our 50% vaccination scenario.
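The arithmetic behind this note is straightforward: to obtain a given fraction of successful immunizations, the vaccine must be administered to that fraction divided by the take rate. The 95% take rate used below is purely illustrative and is not drawn from the model or the vaccine literature.

```python
def administered_coverage(effective_target, take_rate):
    """Fraction of the population that must receive the vaccine to
    reach a target fraction of successful ('taken') immunizations."""
    return effective_target / take_rate

# Assuming, for illustration only, a 95% take rate, our 50% effective
# vaccination scenario would require administering the vaccine to a
# somewhat larger share of the population.
needed = administered_coverage(0.50, 0.95)
```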

7 For a general overview of all of these issues, see Veenema 2002. For more specific information regarding the logistical difficulties entailed by post-attack vaccination, see Bicknell 2002, and regarding the incidences of various vaccine-related complications see Henderson et al. 1999.

8 Specifically, the possibility of detection and isolation during prodrome was removed, and the cumulative likelihood of detection during each day of the early rash phase was halved (D1 and D2 in the appendix, respectively).

9 The strength of the relationship between ring vaccination and early detection might also suggest that ring vaccination could become more feasible if contacts were isolated after vaccination, such that even those who were infected but could no longer be effectively vaccinated would not be able to infect others. This scenario was not examined in our initial set of experiments, and might be impossible to implement in reality anyway, but could possibly improve the efficacy of ring vaccination.

* References

ABRAMS, Dominic and Michael A. Hogg (1990) "An introduction to the social identity approach," in Social Identity Theory: Constructive and Critical Advances, Dominic Abrams and Michael A. Hogg, eds. New York: Springer-Verlag, pp. 1-27.

ALIBEK, Ken with Stephen Handelman (1999) Biohazard. New York: Random House.

ANDERSON, Roy M. and Robert M. May (1991) Infectious Diseases of Humans: Dynamics and Control. New York: Oxford Science Publications.

BICKNELL, William J. (2002) "The Case for Voluntary Smallpox Vaccination," New England Journal of Medicine, Vol. 346, no. 17 (April).

BOZZETTE, Samuel A., Rob Boer, Vibha Bhatnagar, Jennifer L. Brower, Emmett B. Keeler, Sally C. Morton, Michael A. Stoto (2003) "A Model for Smallpox Vaccination Policy," New England Journal of Medicine, Vol. 348, no. 5 (January).

CENTERS FOR DISEASE CONTROL and Prevention (2002) "Smallpox Fact Sheet" Atlanta, Georgia: Department of Health and Human Services, 2002. http://www.bt.cdc.gov/agent/smallpox (Accessed June 26 2003).

CENTERS FOR DISEASE CONTROL and Prevention (2002) "Smallpox Response Plan and Guidelines, Draft 3.0" Atlanta, Georgia: Department of Health and Human Services, 2002. http://www.bt.cdc.gov/agent/smallpox/response-plan/index.asp (Accessed June 26 2003).

ELEY, Geoff and Ronald Grigor Suny (1996) "Introduction: From the Moment of Social History to the Work of Cultural Representation," in Becoming National: A Reader, Geoff Eley and Ronald Grigor Suny, eds. New York: Oxford University Press, pp. 3-37.

FOEGE, William H., J. Donald Millar, Donald A. Henderson (1975) "Smallpox eradication in West and Central Africa," Bulletin of the World Health Organization, Vol. 52, no. 2, pp. 209-222.

FAUCI, Anthony S. (2002) "Smallpox Vaccination Policy - The Need for Dialogue," New England Journal of Medicine, Vol. 346, no. 17 (April).

GANI, Raymond and Steve Leach (2001) "Transmission potential of smallpox in contemporary populations," Nature, Vol. 414, pp. 748-751 (December).

HALLORAN, M. Elizabeth, Ira M. Longini Jr., Azhar Nizam, Yang Yang (2002) "Containing Bioterrorist Smallpox," Science, Vol. 298, pp. 1428-1432 (November).

HAMMARLUND, Erika, Matthew W. Lewis, Scott G. Hansen, Lisa I. Strelow, Jay A. Nelson, Gary J. Sexton, Jon M. Hanifin, Mark K. Slifka (2003) "Duration of antiviral immunity after smallpox vaccination," Nature Medicine, Vol. 9, no. 9, pp. 1131-1137 (September).

HENDERSON, Donald A. (1998) "Bioterrorism as a public health threat," Emerging Infectious Diseases, Vol. 4, no. 3.

HENDERSON, Donald A. (1999) "The Looming Threat of Bioterrorism," Science, Vol. 283, no. 5406 (February).

HENDERSON, Donald A., Thomas V. Inglesby, John G. Bartlett, Michael S. Ascher, Edward Eitzen, Peter B. Jahrling, Jerome Hauer, Marcelle Layton, Joseph McDade, Michael T. Osterholm, Tara O'Toole, Gerald Parker, Trish Perl, Philip K. Russell, Kevin Tonat (1999) "Smallpox as a Biological Weapon: Medical and Public Health Management," Journal of the American Medical Association, Vol. 281, no. 22 (June).

HOGG, Michael A. and Craig McGarty (1990) "Self-categorization and social identity," in Social Identity Theory: Constructive and Critical Advances, Dominic Abrams and Michael A. Hogg, eds. New York: Springer-Verlag, pp. 1-27.

HUDDY, Leonie (2001) "From social to political identity: A critical examination of social identity theory," Political Psychology, Vol. 22, no. 1.

JOHNSON, Paul E. (1999) "Simulation Modeling in Political Science," American Behavioral Scientist, Vol. 24, no. 10.

KAPLAN, Edward H., David L. Craft, Lawrence M. Wein (2002) "Emergency response to a smallpox attack: The case for mass vaccination," Proceedings of the National Academy of Sciences, Vol. 99, no. 16 (August).

KOOPMAN, Jim (2002) "Controlling Smallpox," Science, Vol. 298, pp. 1342-1344 (November).

KOWERT, Paul, Jeffrey Legro (1996) "Norms, Identity, and Their Limits: A Theoretical Reprise," in The Culture of National Security: Norms and Identity in World Politics, Peter J. Katzenstein, ed. New York: Columbia University Press, pp. 451-97.

LUSTICK, Ian S. (2000) "Agent-Based Modelling of Collective Identity: Testing Constructivist Theory," Journal of Artificial Societies and Social Simulation, Vol. 3, no. 1 http://jasss.soc.surrey.ac.uk/3/1/1.html.

MACK, Thomas (2003) "A Different View on Smallpox and Vaccination," New England Journal of Medicine, Vol. 348, no. 5 (January).

MACY, Michael W. and Robert Willer (2002) "From factors to actors: computational sociology and agent-based modeling," Annual Review of Sociology, 28: 143-166.

MELTZER, Martin I., Inger Damon, James W. LeDuc, J. Donald Millar (2001) "Modeling Potential Responses to Smallpox as a Bioterrorist Weapon," Emerging Infectious Diseases, Vol. 7, no. 6 (November-December).

MILLAR, J. Donald (2001) "It's bad policy to hold back smallpox vaccine," Milwaukee Journal Sentinel, December 27.

NAGEL, Joane (1994) "Constructing Ethnicity: Creating and Recreating Ethnic Identity and Culture," Social Problems, Vol. 41, no. 1 (February).

REYNOLDS, Craig W. (1987) "Flocks, Herds, and Schools: A Distributed Behavioral Model," Computer Graphics, Vol. 21, no. 4 (July).

STRYKER, Sheldon and R. T. Serpe (1982) "Commitment, identity salience, and role behavior," in Personality, roles, and social behavior, W. Ickes and E. S. Knowles, eds. New York: Springer-Verlag, pp. 199-218.

TANNER, Adam (Reuters) (2001) "Russian Germ Warfare Experts Raise Alarm," Clinical Infectious Diseases, Vol. 33, no. 12 (December).

TUCKER, Jonathan B. (2001) Scourge: The Once and Future Threat of Smallpox. New York: Grove Press.

VEENEMA, Tener G. (2002) "The smallpox vaccine debate," American Journal of Nursing, Vol. 102, no. 9.

VERDERY, Katherine (1991) National Ideology Under Socialism: Identity and Cultural Politics in Ceausescu's Romania. Berkeley: University of California Press.


