Indirect inference through prediction

By recasting indirect inference estimation as a prediction problem rather than a minimization and by using regularized regressions, we can bypass the three major problems of estimation: selecting the summary statistics, defining the distance function and minimizing it numerically. By substituting regression with classification we can extend this approach to model selection as well. We present three examples: a statistical fit, the parametrization of a simple real business cycle model and heuristics selection in a fishery agent-based model. The outcome is a method that automatically chooses summary statistics, weighs them and uses them to parametrize models without running any direct minimization.


Introduction
We want to tune the parameters of a simulation model to match data. When the likelihood is intractable and the data high-dimensional, the common approach is the simulated minimum distance (Grazzini and Richiardi 2015). This involves summarising real and simulated data into a set of summary statistics and then tuning the model parameters to minimize their distance.
Minimization involves three tasks. First, choose the summary statistics. Second, weight their distance. Third, perform the numerical minimization keeping in mind that simulation may be computationally expensive.
Conveniently, "regression adjustment methods" from the approximate Bayesian computation (ABC) literature (Blum et al. 2013) can be adapted to side-step all three tasks. Moreover, we can use these techniques outside of the ABC super-structure.
We substitute the minimization with a regression. First, run the model "many" times and observe pairs of model parameters and generated summary statistics. Then, regress the parameters as the dependent variable against all the simulated summary statistics. Finally, plug in the real summary statistics in the regression to obtain the tuned parameters. Model selection can be similarly achieved by training a classifier instead.
While the regressions can in principle be non-linear and/or non-parametric, we limit ourselves here to regularized linear regressions (the regularization is what selects the summary statistics). In essence, we can feed the regression a long list of summary statistics we think relevant and have the regularization select only what matters for parametrization.
We summarize agent-based estimation in general and regression adjustment methods in particular in section 2. We describe how to parametrize models in section 3. We then provide three examples: a statistical line fit in section 4.1, a real business cycle macroeconomic model in section 4.2 and a fishery agent-based model in section 4.3. We conclude by pointing out weaknesses and possible extensions in section 5.

Literature review
The approach of condensing data and simulations into a set of summary statistics in order to match them is covered in detail in two review papers: Hartig et al. (2011) in ecology and Grazzini and Richiardi (2015) in economics. They differ only in their definition of summary statistics: Hartig et al. (2011) defines it as any kind of observation available in both model and reality while Grazzini and Richiardi (2015), with its focus on consistency, allows only for statistical moments or auxiliary parameters from ergodic simulations. Grazzini and Richiardi (2015) also introduces the catch-all term "simulated minimum distance" for a set of related estimation techniques, including indirect inference (Gourieroux, Monfort, and Renault 1993) and the simulated method of moments (McFadden 1989). Smith (1993) first introduced indirect inference to fit a 6-parameter real business cycle macroeconomic model. In agent-based models, indirect inference was first advocated in Richiardi et al. (2006; see also Shalizi 2017) and first implemented in Zhao (2010), who also established consistency under mild conditions.
Common to all is the need to accomplish three tasks: selecting the summary statistics, defining the distance function and choosing how to minimize it.
Defining the distance function may at first appear the simplest of these tasks. It is standard to use the Euclidean distance between summary statistics weighed by the inverse of their variance or covariance matrix. However, while asymptotically efficient, this was shown to underperform in Smith (1993) and Altonji and Segal (1996). The distance function itself is a compromise, since in all but trivial problems we face a multi-objective optimization (see the review by Marler and Arora 2004) where we should minimize the distance to each summary statistic independently. In theory, we could solve this by searching for an optimal Pareto front (see Badham et al. 2017 for an example of dominance analysis applied to agent-based models). In practice, however, even a small number of summary statistics makes this approach infeasible and unintelligible. Better then to focus on a single imperfect distance function (a weighted global criterion, in multi-objective optimization terms) where there is at least the hope of effective minimization. This is a practical solution, though it may obscure tradeoffs between individual summary statistic distances.
We can minimize the distance function with off-the-shelf methods such as genetic algorithms (Calvez and Hutzler 2006; Heppenstall, Evans, and Birkin 2007; Lee et al. 2015) or simulated annealing (Le et al. 2016; Zhao 2010). More recently two Bayesian frameworks have been prominent: BACCO (Bayesian Analysis of Computer Code Output) and ABC (Approximate Bayesian Computation).
BACCO involves running the model multiple times for random parameter inputs, building a statistical meta-model (sometimes called an emulator or surrogate) predicting distance as a function of parameters and then minimizing the meta-model as a short-cut to minimizing the real distance. Kennedy and O'Hagan (2001) introduced the method (see O'Hagan 2006 for a review). Ciampaglia (2013), Parry et al. (2013) and Salle and Yıldızoğlu (2014) are recent agent-based model examples.
A variant of this approach interleaves running simulations and training the meta-model in order to sample more promising areas and achieve better minima. The general framework is "optimization by model fitting" (see chapter 9 of Luke 2010 for a review; see Michalski 2000 for an early example). Lamperti, Roventini, and Sani (2018) provides an agent-based model application (using gradient trees as a meta-model). Bayesian optimization (see Shahriari et al. 2016 for a review) is the equivalent approach using Gaussian processes, and Bailey et al. (2018) first applied it to agent-based models.
As in BACCO, ABC methods also start by sampling the parameter space at random, but they only retain simulations whose distance is below a pre-specified threshold in order to build a posterior distribution of acceptable parameters (see Beaumont 2010 for a review). Drovandi, Pettitt, and Faddy (2011), Grazzini, Richiardi, and Tsionas (2017) and Zhang et al. (2017) apply the method to agent-based models. Zhang et al. (2017) notes that ABC matches the "pattern oriented modelling" framework of Grimm et al. (2005).
All these algorithms however require the user to select the summary statistics first. This is always an ad-hoc procedure. Beaumont (2010) describes it as "rather arbitrary" and notes that its effects "[have] not been systematically studied". The more summary statistics we use, the more informative the simulated distance is (see Bruins et al. 2018, which compares four auxiliary models of increasing complexity). However, the more summary statistics we use, the harder the minimization (and the more important the weighting of individual distances) becomes.
Because of their use of kernel smoothing, ABC methods are particularly vulnerable to the "curse of dimensionality" with respect to the number of summary statistics. As a consequence the ABC literature has developed many "dimension reduction" techniques (see Blum et al. 2013 for a review). Of particular interest here is the "regression adjustment" literature (Beaumont, Zhang, and Balding 2002; Blum and Francois 2010; Nott et al. 2011), which advocates various ways to build regressions that map summary statistics back to the parameters that generated them. Our paper simplifies this approach and applies it to agent-based models.
Model selection in agent-based models has received less attention. As in statistical learning, one can select models on the basis of their out-of-sample predictive power (see chapter 8 of Hastie, Tibshirani, and Friedman 2009). The challenge is then to develop a single measure to compare models of different complexity that make multivariate predictions. This has been fundamentally solved in Barde (2017) by using Markovian Information Criteria. Our approach in section 3.2 differs from standard statistical model selection by treating it as a pattern recognition problem (that is, a classification), rather than making the choice based on a ranking of out-of-sample prediction performance. Fagiolo, Moneta, and Windrum (2007) were the first to tackle the growing variety of calibration methods in agent-based models. In the ensuing years, this variety has grown enormously. In the next section we add to this problem by proposing yet another estimation technique.

Estimation
Take a simulation model that maps K parameters θ = [θ_1, θ_2, . . . , θ_K] into M simulated summary statistics S(θ) = [S_1(θ), S_2(θ), . . . , S_M(θ)]. Given real-world observed summary statistics S* we want to find the parameter vector θ* that generated them. Our method proceeds as follows:
1. Repeatedly run the model, each time supplying it a random parameter vector θ̂.
2. Collect each simulation's output as summary statistics S(θ̂).
3. Run K separate regularized regressions r_i, one for each parameter, with the parameter as the dependent variable and all the candidate summary statistics as independent variables: θ_i ≈ r_i(S_1, . . . , S_M).
4. Plug the "real" summary statistics S* into each regression to predict the "real" parameters θ*.
In steps 1 and 2 we use the model to generate a data-set: we repeatedly input random parameters θ̂ and collect the outputs Ŝ_1, . . . , Ŝ_M (see Table 1 for an example). In step 3 we take the data we generated and use it to train regressions (one for each model input) that predict each model input θ from the model outputs S_1, . . . , S_M alone. The regressions map observed outputs back to the inputs that generated them. Only in step 4 do we use the real summary statistics (which we have ignored so far): the prediction the regressions make when we feed in S* is the estimated parameter vector θ*.
The key implementation detail is to use regularized regressions. We used elastic nets (Friedman, Hastie, and Tibshirani 2010) because they are linear and they combine the penalties of LASSO and ridge regressions. LASSO penalties automatically drop summary statistics that are uninformative. Ridge penalties lower the weight of summary statistics that are strongly correlated to each other. We can then start with a large number of summary statistics (anything we think may matter and that the model can reproduce) and let the regression select only the useful ones. The regressions could also be made non-linear and non-parametric but this proved not necessary for our examples.
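The four steps above can be sketched end-to-end. The snippet below is our own minimal illustration, not the paper's implementation: it parametrizes a hypothetical one-parameter black-box model, and a plain ridge-penalized linear regression stands in for the elastic net; the toy model, the penalty λ and the sample sizes are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box_model(theta, n_stats=10):
    """Hypothetical simulation: returns summary statistics for parameter theta."""
    i = np.arange(n_stats)
    return theta * i + rng.normal(size=n_stats)

# Steps 1-2: run the model for random parameters, collect summary statistics
thetas = rng.uniform(0.0, 2.0, size=1000)
stats = np.array([black_box_model(t) for t in thetas])

# Step 3: regularized regression of the parameter on ALL summary statistics
# (closed-form ridge as a stand-in for the elastic net)
X = np.column_stack([np.ones(len(thetas)), stats])
lam = 1e-2
coef = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ thetas)

# Step 4: plug the "real" summary statistics into the regression
true_theta = 1.3
s_star = black_box_model(true_theta)
theta_hat = np.concatenate([[1.0], s_star]) @ coef
print(theta_hat)  # close to 1.3
```

In practice one would replace the ridge step with an actual elastic net (e.g. glmnet in R or scikit-learn's ElasticNetCV), which adds the LASSO penalty that drops uninformative statistics.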
Compare this to simulated minimum distance. There, the mapping between model outputs S(·) and inputs θ is defined by the minimization of the distance function: θ* = arg min_θ (S* − S(θ))′ W (S* − S(θ)), where W is a weighting matrix. When using a simulated minimum distance approach, both the summary statistics S(·) and the weights W need to be chosen in advance. One then estimates θ* by explicitly minimizing the distance: in effect, the arg min operator maps model output S(·) to model input θ. In our approach, by contrast, the regularized regression not only selects and weights summary statistics but, more importantly, also maps them back to model input. The regression thus bypasses (takes the place of) the usual minimization.

Model Selection
We observe summary statistics S * from the real world and we would like to know which candidate model {m 1 , . . . , m n } generated them. We can solve this problem by training a classifier against simulated data.
1. Repeatedly run each candidate model.
2. Collect each simulation's generated summary statistics S(m_i).
3. Build a classifier g(·) that predicts the model from the summary statistics and train it on the data-set just produced: m̂ = g(S_1, . . . , S_M).
4. Plug the "real" summary statistics S* into the classifier to predict which model generated them.
The classifier family g(·) can also be non-linear and non-parametric. We used a multinomial regression with elastic net regularization (Friedman, Hastie, and Tibshirani 2010); its advantage is that the regularization selects summary statistics and the output is a classic logit formula that is easily interpretable.
Model selection cannot be done on the same data-set used for estimation. If the parameters of a model need to be estimated, the estimation must be performed either on a separate training data-set or in the inner loop of a nested cross-validation (Hastie, Tibshirani, and Friedman 2009).

Regression-based methods fit straight lines as well as ABC or Simulated Minimum Distance
In this section we present a simplified one-dimensional parametrization problem and show that regression methods perform as well as ABC and simulated minimum distance.
We observe 10 summary statistics (in this case these are just direct observations) S* = (S*_0, . . . , S*_9) and we assume they were generated by the model S_i = βi + ε_i where ε_i ∼ N(0, 1). Assuming the model is correct, we want to estimate the β* that generated the observed S*. Because the model is a straight line, we could solve the estimation by least squares. Pretend however that this model is a computational black box whose connection to drawing straight lines is not apparent.
We run 1,000 simulations with β ∼ U[0, 2] (the range is arbitrary and could represent either a prior or a feasible interval), collecting the 10 simulated summary statistics Ŝ(β_i). We then train a linear elastic net regression of the form β = a + Σ_{i=0}^{9} b_i Ŝ_i (4). Table 2 shows the b coefficients of the regression. S_0 contains no information about β (since it is generated as S_0 = ε) and the regression correctly ignores it by dropping b_0. The elastic net weights the larger summary statistics more because the effect of β is larger there relative to the noise.
We test this regression by making it predict, out-of-sample, the β of 1000 more simulations looking only at their summary statistics. Figure 1 and table 3 compare the estimation quality with those achieved by the standard rejection ABC and MCMC ABC (following Marjoram et al. 2003;Wegmann, Leuenberger, and Excoffier 2009; both methods implemented in Jabot et al. 2015). All achieve similar error rates.
We can also compare the regression against the standard simulated minimum distance approach. Simulated minimum distance searches for the parameters that minimize the weighted distance d(β) = (S* − S(β))′ W (S* − S(β)). The steeper the curvature of this distance function, the easier it is to minimize numerically.
Figure 1: A side by side comparison between the 'real' and discovered β as generated by either the regression method or two standard ABC algorithms: rejection sampling or MCMC. The x-axis represents the real β, the y-axis its estimate. Each point represents the estimation against a different set of summary statistics. The red line is the 45 degree line: the closer the points are to it the better the estimation. All methods perform equally well.
In figure 2 we compare the curvature obtained with W = I and W = (diag(Σ))⁻¹ (where I is the identity matrix and Σ is the covariance matrix of the summary statistics) against the curvature implied by our regression r: |r(S*) − r(S)|, that is, the distance between the estimate made from the real summary statistics, r(S*), and the estimate made at other simulated summary statistics, r(S). This problem is simple enough that the usual weight matrices produce distance functions that are easy to minimize numerically.
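This kind of curvature comparison is easy to reproduce for the straight-line toy model. The sketch below is entirely our own (model, seed and weighting are assumptions): we compute the variance-weighted Euclidean distance from a 'real' run with β ≈ 1 to every other simulation and check that the distance bottoms out near the true parameter.

```python
import numpy as np

rng = np.random.default_rng(1)
i = np.arange(10)

# 1000 simulations of the straight-line model S_i = beta * i + noise
betas = rng.uniform(0.0, 2.0, size=1000)
stats = betas[:, None] * i + rng.normal(size=(1000, 10))

# Treat one mid-range run (beta near 1) as the "real" data
real_idx = np.argmin(np.abs(betas - 1.0))
s_star = stats[real_idx]

# Weighted Euclidean distance with W = (diag(Sigma))^-1
w = 1.0 / stats.var(axis=0)
dist = ((stats - s_star) ** 2 * w).sum(axis=1)

# The runs closest in summary-statistic distance should have beta near 1
best = betas[np.argsort(dist)[:20]]
print(best.mean())  # roughly 1
```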
We showed here that regression-based methods perform as well as ABC and simulated minimum distance in a simple one-dimensional problem.
Figure 2: Curvature of the fitness function as implied by the regression versus the weighted Euclidean distance between summary statistics (panels show W = I and W = (diag(Σ))⁻¹). We take the 1000 testing simulation runs, pick one in the middle (β ≈ 1) as the 'real' one, and compute the summary statistic distances from all the other simulations to it. The red line represents the real β; each dot is the distance to the real β for that simulation run and its associated β.

Regression-based methods fit broken lines better than ABC or simulated minimum distance
In this section we present another simplified one-dimensional parametrization but show that regression methods outperform both ABC and simulated minimum distance because of better selection of summary statistics.
We observe 10 summary statistics S = (S_0, . . . , S_9) but we assume they were generated by the "broken line" model: S_i = ε_i for i = 0, . . . , 4 and S_i = βi + ε_i for i = 5, . . . , 9, where ε_i ∼ N(0, 1). We want to find the β that generated the observed summary statistics. Notice that S_0, . . . , S_4 provide no information on β.
As before we run the model 1,000 times to train an elastic net of the form β = a + Σ_{i=0}^{9} b_i S_i. Table 4 shows the coefficients found. The regression correctly drops b_0, . . . , b_4 and weights the remaining summary statistics more the higher they are.
We run the model another 1,000 times to test the regression's ability to estimate their β. As shown in Figure 3 and Table 5, the regression outperforms both ABC alternatives. This is due to the elastic net's ability to prune unimportant summary statistics.
Figure 3: A side by side comparison between the 'real' and discovered β as generated by either the regression method or two standard ABC algorithms: rejection sampling or MCMC. The x-axis represents the real β, the y-axis its estimate. Each point represents the estimation against a different set of summary statistics. The red line is the 45 degree line: the closer the points are to it the better the estimation. Notice that rejection sampling (ABC) is inaccurate when β is very small (near 0) or very large (near 2), while MCMC has a larger average error than the regression-based method.
In simulated minimum distance methods, choosing the wrong W can compound the problem of poor summary statistics selection. As shown in Figure 4, the default choice W = (diag(Σ))⁻¹ detrimentally affects the curvature of the distance function by adding noise. This is because the summary statistics S_0, . . . , S_4, in spite of containing no useful information, are the values with the lowest variance and are therefore given the largest weights.
We showed here how, even in a simple one-dimensional problem, the ability to prune uninformative summary statistics results in better estimation (as shown by the RMSE in Table 5).
Figure 4: Curvature of the fitness function as implied by the regression versus the weighted Euclidean distance between summary statistics (panels show W = I and W = (diag(Σ))⁻¹). We take the 1000 training simulation runs, pick one in the middle (β ≈ 1) as the 'real' one, and compute the summary statistic distances from all the other simulations to it. The red line represents the real β; each dot is the distance to the real β for that simulation run and its associated β. For this model the default choice of weighting matrix (the inverse of the variance diagonal) makes numerical minimization harder.

Model selection through regression-based methods is informative and accurate
In this section we train a classifier to choose which of the two models we described above ('straight-' and 'broken-line') generated the observed data. We show that the classifier does a better job than choosing the model that minimizes prediction errors.
We observe 10 summary statistics S* = (S*_0, . . . , S*_9) and we would like to know if they were generated by the "straight-line" model S_i = βi + ε_i or the "broken-line" model S_i = ε_i for i = 0, . . . , 4 and S_i = βi + ε_i for i = 5, . . . , 9, where ε_i ∼ N(0, 1) and i ∈ [0, 9]. Assume we have already estimated both models and β = 1. We run each model 1000 times and then train a logit classifier: Pr(broken-line model) = logit⁻¹(a + Σ_{i=0}^{9} b_i S_i). Notice that S_0 and S_5, . . . , S_9 are generated by the same rule in both models: both draw a straight line through those points. Their values will therefore not be useful for model selection. We should focus instead exclusively on S_1, . . . , S_4 since that is the only area where one model draws a straight line and the other does not.
The regularized regression discovers this, dropping b_0 and b_5, . . . , b_9, as Table 6 shows. Figure 5 tabulates the success rate over 2000 out-of-sample simulations and compares the classifier with picking the model that minimizes the distance between summary statistics. The problem is simple enough that choosing W = I achieves an almost perfect success rate; using the more common variance-based weights does not.
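A minimal version of this model-selection classifier can be written directly. The sketch below is our own: plain logistic regression fitted by gradient descent stands in for the elastic-net multinomial regression, and the seed, learning rate and sample sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
i = np.arange(10)

def straight(beta=1.0):
    """Straight-line model: S_i = beta*i + noise for every i."""
    return beta * i + rng.normal(size=10)

def broken(beta=1.0):
    """Broken-line model: pure noise for S_0..S_4, straight line for S_5..S_9."""
    s = beta * i + rng.normal(size=10)
    s[:5] = rng.normal(size=5)
    return s

# Training set: 1000 runs per model; label 1 = broken line
X = np.array([straight() for _ in range(1000)] + [broken() for _ in range(1000)])
y = np.r_[np.zeros(1000), np.ones(1000)]
mu, sd = X.mean(0), X.std(0)
Xs = (X - mu) / sd  # standardize for stable gradient descent

# Logistic regression by gradient descent
w, b = np.zeros(10), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(Xs @ w + b)))
    g = p - y
    w -= 0.1 * (Xs.T @ g) / len(y)
    b -= 0.1 * g.mean()

# Out-of-sample success rate
Xt = np.array([straight() for _ in range(500)] + [broken() for _ in range(500)])
yt = np.r_[np.zeros(500), np.ones(500)]
acc = (((1 / (1 + np.exp(-(((Xt - mu) / sd) @ w + b)))) > 0.5) == yt).mean()
print(acc)
```

Because only S_1, . . . , S_4 differ between the two models, a regularized fit would additionally zero out the remaining coefficients; the unpenalized sketch merely learns to rely on them most.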

Fit against cross-correlation matrix
In this section we parametrize a simple macro-economic model by looking at the time series it simulates. We show that we can use a large number of summary statistics efficiently and that we can diagnose estimation strengths and weaknesses the same way we diagnose regression outputs.
We observe 150 steps of 5 quarterly time series: Y, r, I, C, L. Assuming we know they come from a standard real business cycle (RBC) model (the default RBC implementation of the gEcon package in R; see Klima, Podemski, and Retkiewicz-Wijtiwiak 2018 and Appendix A), we estimate the 6 parameters (β, δ, η, µ, φ, σ) that generated them.
We summarize the observed time series with (a) the t = −5, . . . , +5 cross-correlation vectors of Y against r, I, C, L, (b) the lower-triangular covariance matrix of Y, r, I, C, L, (c) the squares of all cross-correlations and covariances, and (d) the pair-wise products between all cross-correlations and covariances. This totals 2486 summary statistics: too many for most ABC approaches. The problem is however still solvable by regression. We train 6 separate regressions, one for each parameter, of the form θ = a_0 + Σ_i a_i S_i + Σ_i b_i S_i² + Σ_{i&lt;j} c_{ij} S_i S_j, where a, b, c are elastic net coefficients.
We train the regressions over 2000 RBC runs, then produce 2000 more to test the estimation. Figure 6 and Table 7 show that all the parameters are estimated correctly, but η and δ less precisely than the others. We judge the quality of the estimation through the predictivity coefficient (see Salle and Yıldızoğlu 2014; also known as modelling efficiency, as in Stow et al. 2009), which measures the improvement from using an estimate β̂ to discover the real β* compared to just using the hindsight average β̄: predictivity = 1 − Σ(β̂ − β*)² / Σ(β̄ − β*)². Intuitively, this is just an extension of the definition of R² to out-of-sample predictions. A predictivity of 1 implies perfect estimation; 0 implies that averaging would have performed as well as the estimate. We showed here that we can estimate the parameters of a simple RBC model by looking at their cross-correlations. However this method is better served by looking at multiple auxiliary models, as we show in the next section.
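The predictivity coefficient is straightforward to compute. The sketch below is our reading of the definition, reconstructed from the R² analogy in the text:

```python
import numpy as np

def predictivity(estimated, real):
    """1 - SSE(estimate) / SSE(hindsight mean): 1 = perfect, 0 = no better than the mean."""
    estimated = np.asarray(estimated, dtype=float)
    real = np.asarray(real, dtype=float)
    sse = ((estimated - real) ** 2).sum()
    sse_mean = ((real.mean() - real) ** 2).sum()
    return 1.0 - sse / sse_mean

real = np.array([0.1, 0.5, 0.9, 1.3])
print(predictivity(real, real))                     # 1.0 for a perfect estimate
print(predictivity(np.full(4, real.mean()), real))  # 0.0 when predicting the mean
```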

Fit against auxiliary fits
In this section we parametrize the same simple macro-economic model by looking at the same time series it simulates but using a more diverse array of summary statistics. We show how that improves the estimation of the model.
Again, we observe 150 steps of 5 quarterly time series: Y, r, I, C, L. Assuming we know they come from a standard RBC model, we want to estimate the 6 parameters (β, δ, η, µ, φ, σ) that generated them.
In this section we use a larger variety of summary statistics. Even though they derive from auxiliary models that convey much of the same information, we are able to use them efficiently for estimation. We summarize the output with statistics drawn from a set of auxiliary models; in total there are 48 summary statistics, to which we add all their squares and cross-products.
Again we train 6 regressions against 2000 RBC observations and test their estimation against 2000 more. Figure 7 and Table 8 show that all the parameters are estimated correctly and with a higher predictivity coefficient. In particular δ and η are better predicted than by looking at cross-correlations alone.
Figure 7: Estimated versus real parameters for 2000 RBC runs in the testing data-set (not used for training the regressions). Each dot is a pair of real-estimated parameters from a single RBC run. The red line is the 45 degree line: the closer the dots are to it the better estimated that parameter is. All parameters are well estimated.

Heuristic selection in the face of input uncertainty
In this section we select between 10 variations of an agent-based model where some inputs and parameters are unknowable. By training a classifier we are able to discover which summary statistics are robust to the unknown parameters and leverage this to outperform standard model selection by prediction error.
Assume all fishermen in the world can be described by one of 10 decision-making algorithms (heuristics). We observe 100 fishermen for five years and we want to identify which heuristic they are using. The heuristic selection is however complicated by not knowing basic biological facts about the environment the fishermen exploit. This is a common scenario in fisheries: we can monitor boats precisely through electronic logbooks and satellites but more than 80% of global catch comes from unmeasured fish stocks (Costello et al. 2012). Because heuristics are adaptive, the same heuristic will generate very different summary statistics when facing different biological constraints. We show here that we can still identify heuristics even without knowing how many fish there are in the sea.
We can treat each heuristic as a separate model and train a classifier that looks at summary statistics to predict which heuristic generated them. The critical step is to train the classifier on example runs where the uncertain inputs (the unknown biological constraints) are randomized. This way the classifier automatically learns which summary statistics are robust to biological changes and looks only at those when selecting the right heuristic.
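The idea can be demonstrated on a toy problem of our own construction (every name and number here is an assumption): two "heuristics" generate two summary statistics, one of which depends on a nuisance parameter that is randomized during training; a ridge-penalized logistic classifier then learns to rely on the robust statistic and keeps its accuracy when the nuisance moves outside the training range.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate(heuristic, nuisance):
    # Statistic 0 shifts with the unknown nuisance input (e.g. stock size);
    # statistic 1 separates the two heuristics regardless of the nuisance.
    s0 = nuisance + rng.normal()
    s1 = (3.0 if heuristic == 1 else 0.0) + rng.normal()
    return np.array([s0, s1])

# Training: randomize the nuisance so the classifier learns robust features
X = np.array([simulate(h, rng.uniform(0, 10)) for h in [0, 1] * 1000])
y = np.array([0, 1] * 1000)

# Logistic regression with a small ridge penalty (standing in for the elastic net)
w, b = np.zeros(2), 0.0
for _ in range(3000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    g = p - y
    w -= 0.1 * ((X.T @ g) / len(y) + 0.01 * w)
    b -= 0.1 * g.mean()

# Test on nuisance values never seen in training
Xt = np.array([simulate(h, rng.uniform(10, 15)) for h in [0, 1] * 500])
yt = np.array([0, 1] * 500)
acc = (((1 / (1 + np.exp(-(Xt @ w + b)))) > 0.5) == yt).mean()
print(acc)  # stays high despite the shifted nuisance
```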
We use POSEIDON (Bailey et al. 2018), a fishery agent-based model, to simulate the fishery. We observe the fishery for five years and collect 17 summary statistics: (i) aggregate five-year averages of landings, effort (hours spent fishing), distance travelled from port, number of trips per year and average hours at sea per trip, (ii) a discretized (3 by 3 matrix) heatmap of geographical effort over the simulated ocean, (iii) the coefficients and R² of a discrete choice model fitting area fished as a function of distance from port and habit (an integer describing how many times that particular fisher has visited that area in the past 365 days). These summary statistics represent a typical information set we might expect to obtain in a developed fishery.
Within each simulation, all fishers use the same heuristic. There are 10 possible candidates, all described more in depth in a separate paper:
• Random: each fisher picks a random cell to trawl each trip.
• Gravitational Search: agents copy one another as if they were planets attracted by gravity, where mass equals the profits made in the previous trip (see Rashedi, Nezamabadi-pour, and Saryazdi 2009).
• ε-greedy bandit: agents have a fixed 20% chance of exploring a new random area, otherwise they exploit (fish) the most profitable location discovered so far. Two parametrizations of this heuristic are available, depending on whether fishers discretize the map into a 3 by 3 matrix or a 9 by 9 one.
• "Perfect" agent: agents know perfectly the profitability Π(·) of each area and choose to fish area i with probability given by a softmax over profits.
• Social Annealing: agents always return to the same location unless they are making less than the fishery-wide average profits, in which case they try a new area at random.
• Explore-Exploit-Imitate: like ε-greedy agents, they have a fixed probability of exploring. When not exploring, each fisher checks the profits of two random "friends" and copies their location if either is doing better. Three parametrizations of this heuristic are possible: an exploration rate of 80% or 20% with an exploration range of 5 cells, or a 20% exploration rate with an exploration range of 20 cells.
• Nearest Neighbour: agents track the profitability of all the areas they have visited through a nearest-neighbour regression and always fish the peak area predicted by the regression.
All runs are simulations of the single-species "baseline" fishery described in the ESM of Bailey et al. (2018) (see also Appendix B). We run the model 10,000 times in total, 1,000 times for each heuristic. In each simulation we also randomize three biological parameters: (i) the carrying capacity of each map cell, (ii) the diffusion speed of fish, and (iii) the Malthusian growth parameter of the stock.
We train a multi-logit classifier on the simulation outputs as shown in section 3.2: Pr(heuristic h) = exp(a_h + b_h · S) / Σ_{k=1}^{10} exp(a_k + b_k · S), where b_1, . . . , b_10 are vectors of elastic net coefficients.
We build the testing data-set by running the model 10,000 more times. We compare the quality of model selection against selection by weighted prediction error. First, in Figure 8, the prediction error is computed by re-running simulations with the correct random biological parameters: in spite of this knowledge advantage over the classifier, selection by prediction error performs worse. In Figure 9 we no longer assume the biological parameters are known. Prediction error, which now tries to minimize the distance between simulations that differ in both heuristics and biological parameters, performs much worse.
Model selection by prediction error is hard if we are not certain about all model inputs. However, it is trivial to train a classifier on observations from different parameter configurations, so that only the summary statistics "robust" to such parameter noise are used for model selection.
Figure 9: Comparison of success rates in correctly identifying which heuristic generated a given set of summary statistics S. The classifier success rate has solid borders. In this set of simulations model selection by minimum prediction error has no additional knowledge about the biological parameters.

Advantages of a regression-based approach
Regressions are commonly used and well understood. We leverage that knowledge for parameter estimation. By looking at out-of-sample predictions we can immediately diagnose parameter identification problems: the lower the accuracy, the weaker the identification (compare this to the diagnosis of numerical minimization outputs as in Canova and Sala 2009). Moreover, if the regression is linear, its coefficients show which summary statistics inform which model parameters.
A secondary advantage of building a statistical model is the ease with which additional input uncertainty can be incorporated, as shown in section 4.3. If we are uncertain about an input and cannot estimate it directly (because of lack of data or under-identification), it is straightforward to simulate for multiple values of the input, train a statistical model and then check whether the other parameters can still be estimated reliably. In a simulated minimum distance setting we would have to commit to a specific guess for the uncertain input before running a minimization, or minimize repeatedly for each guess (without any assurance that the weighting matrix ought to be kept constant between guesses).

Weaknesses
This method has two main limitations. First, users cannot weight summary statistics by importance; as a consequence, the method cannot be directed to replicate a specific summary statistic. Imagine a financial model of the housing market which we judge primarily by its ability to match real house prices. When estimating its parameters, the regression may ignore house prices entirely and focus on another seemingly minor summary statistic (say, the total number of bathrooms). While that may be an interesting and useful way of parametrizing the model, it does not guarantee a good match with observed house prices. One option in this case would be to move to a weighted regression and give more weight to the observations whose generated summary statistic S_i is close to the value of S*_i we are primarily interested in replicating. The model would then be forced to include the components deemed important, even if they are not the most efficient in terms of explanatory power. Comparing this fit with the unweighted one may also be instructive.
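The weighted-regression option sketched above can be implemented directly: give each training simulation a weight that decays with the distance between its value of the prioritized statistic and the observed one. Everything below, including the Gaussian kernel and its bandwidth, is our own illustration on the straight-line toy model.

```python
import numpy as np

rng = np.random.default_rng(4)
i = np.arange(10)

# Training data from the straight-line toy model
betas = rng.uniform(0.0, 2.0, size=1000)
S = betas[:, None] * i + rng.normal(size=(1000, 10))

s_star = 1.0 * i  # observed "real" statistics
focus = 9         # index of the statistic we care most about matching

# Gaussian kernel weights: simulations that match S*_focus count more
wts = np.exp(-0.5 * (S[:, focus] - s_star[focus]) ** 2)

# Weighted ridge regression of beta on all statistics
X = np.column_stack([np.ones(1000), S])
W = np.diag(wts)
lam = 1e-2
coef = np.linalg.solve(X.T @ W @ X + lam * np.eye(11), X.T @ W @ betas)
beta_hat = np.concatenate([[1.0], s_star]) @ coef
print(beta_hat)  # close to 1
```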
The second limitation is that each parameter is estimated separately. Because each parameter is given a separate regression, this method may miss underlying connections (dependencies) between two or more parameters when such connections are not reflected in changes to the summary statistics. This issue is mentioned in the ABC literature by Beaumont (2010), who also claims that in practice it does not seem to matter. It may however be more appropriate to use a simultaneous system estimation routine (Henningsen and Hamann 2007) or a multi-task learning algorithm (Evgeniou and Pontil 2004).

Possible Extensions
Our method contains a fundamental inefficiency: we build a global statistical model to use only once when plugging in the real summary statistics S * . Given that the real summary statistics S * are known in advance, a direct search ought to be more efficient. Our method makes up for this by solving multiple problems at once as it implicitly selects summary statistics and weights them while estimating the model parameters.
In machine learning, transduction substitutes for regression when we are interested in a prediction for a known S* only, without the need to build a global statistical model (see chapter 10 of Cherkassky and Mulier 2007; Beaumont, Zhang, and Balding (2002) weight simulations by their distance to the known S*, which is essentially the same approach). Switching to transduction would however remove the familiarity advantage of using regressions, as well as the ease of testing their predictive power with randomly generated summary statistics.
Another avenue for improvement is to use non-linear, non-parametric regressions. Blum and Francois (2010) used neural networks, for example. Non-parametrics are more flexible but they may be more computationally demanding and their final output harder to interpret.
It may also be possible to use regressions as an intermediate step in a more traditional simulated minimum distance problem. We would still build a regression for each parameter, but then use them as the distance function to minimize with standard numerical methods.

Conclusion
We presented here a general method to fit agent-based and simulation models. We perform indirect inference using prediction rather than minimization, and, by using regularized regressions, we avoid three major problems of estimation: selecting the most valuable summary statistics, defining the distance function, and achieving reliable numerical minimization. By substituting regression with classification we can further extend this approach to model selection. This approach is relatively easy to understand and apply, and at its core uses statistical tools already familiar to social scientists. The method scales well with the complexity of the underlying problem, and as such may find broad application.

Appendix A: RBC Model
C_t + I_t = π_t + K^s_{t−1} r_t + L^s_t W_t (16)