- Agent-based models are more likely to generate accurate
outputs if they
incorporate valid representations of human agents than if they don't.
The present article outlines three research methodologies commonly used
for explicating the cognitive processes and motivational orientations
of human judgment and decision making: policy capturing,
information seeking, and social choice.
Examples are given to demonstrate how each methodology might be
employed to supplement more traditional qualitative methods such as
interviews and content analyses. Suggestions for encoding results of
the three methodologies in agent-based models are also given, as are
caveats about methodological practicalities.
- Research Methodology, Cognition, Motivation, Judgement, Decision Making
- Most agent-based models
include programming code to simulate how agents make judgments about
their situation and decisions about how to act on their judgments. It
seems reasonable to assume that the validity of agent-based models
depends in part on the realism of this code; code that closely mimics
how people make judgments and decisions is likely to produce more
realistic outputs than code that does not. This prompts two important
questions: What methodologies exist for explicating people's judgment
and decision processes? How can we employ these methodologies to
improve the realism of agent-based models? The present article
addresses these questions.
- Literature on
agent-based models rarely includes references to research on the
processes people employ to make judgments or decisions. Indeed, the
choice of a judgment or decision algorithm often seems rather casual,
relying more on intuition, tradition or programming ease than on
empirical research about people's cognitive and motivational processes.
This has two drawbacks. First, the algorithms are likely to be wrong.
Second, the simulations are likely to ignore the consequences of human
variability in the judgment and decision processes people employ. Both
errors can generate simulation outputs that wander far from what really
happens in the model's domain. Economic simulations based on
assumptions of economically rational agents, for example, frequently
produce outputs that wander far from relevant economic observations (Thorngate & Tavakoli 2005).
- How do people make
judgments and decisions? Sixty years of research and thousands of
studies in psychology, political science, marketing, behavioural
economics and elsewhere have given us increasingly detailed answers to
the question. Among the details is an important conclusion: People make
judgments and decisions in hundreds of different ways, and these ways
frequently lead to different choices (for reviews, see Gigerenzer & Gaissmaier 2011;
Griffin et al. 2012).
Most judgment and decision processes incorporate mental short-cuts
called heuristics, some of which have been documented in laboratory
studies under labels such as satisficing, recognition,
elimination by aspects, temporal
discounting, representativeness, and anchoring.
Dozens of additional heuristics await labels and laboratory scrutiny,
but they can still be seen in daily life. Here is a small sample:
- Choose what was successful in previous, similar situations;
- Ask for advice; follow the advice of the most convincing advisor;
- Watch what others are choosing; if they seem satisfied, mimic their choice;
- Wait; postpone decision in hopes of better, future alternatives;
- Flip a coin;
- Delegate the decision to someone else; if outcome is good, reward yourself; if outcome is bad, blame others;
- Do what tradition, the law or moral codes prescribe or proscribe;
- Pray for divine guidance;
- Choose the first alternative that meets minimal standards;
- Lower your standards;
- Change your desires to those satisfied by current alternatives;
- Use a spreadsheet or mathematical formula.
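Many of these heuristics translate directly into agent code. As a minimal sketch, here is the satisficing heuristic ("choose the first alternative that meets minimal standards") applied to hypothetical apartment alternatives; the names and thresholds are illustrative, not from the article:

```python
# Satisficing: take the first alternative whose attributes all meet
# minimal standards. The alternatives and standards below are
# hypothetical illustrations.

def satisfice(alternatives, standards):
    """Return the first alternative meeting every minimal standard,
    or None if no alternative qualifies."""
    for name, attributes in alternatives:
        if all(attributes[k] >= v for k, v in standards.items()):
            return name
    return None

apartments = [
    ("Apt 1", {"rooms": 2, "sunlight": 3}),
    ("Apt 2", {"rooms": 3, "sunlight": 5}),
    ("Apt 3", {"rooms": 4, "sunlight": 4}),
]
standards = {"rooms": 3, "sunlight": 4}

print(satisfice(apartments, standards))  # -> Apt 2
```

Note that satisficing returns the first acceptable alternative, not the best one, so the order in which an agent encounters alternatives matters.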
- The choice of a decision process likely depends on dozens
of variables (Gigerenzer & Gaissmaier 2011; Thorngate).
Some of the variables are psychological, including a person's judged
importance of the decision, decision making habits and training, memory
and attention requirements, social tradition and pressure (see, for
example, Knox & Inkster 1969).
Other variables are situational, including time available to make a decision,
the number of alternatives available, and opportunities for reversing a
decision made (see, for example, Chowdhury
& Thorngate 2013). Different decision processes can
often lead to the same decisions (Thorngate),
but no one has yet found simple rules for determining when this will or
will not occur. It is thus prudent to assume that the decision
processes employed at any time will depend on salient features of the
decision maker and the situation.
- This assumption
suggests that agent-based models can be improved by learning from the
people to be modeled how they perceive the situation to be modeled and
how they convert their perceptions into a choice of actions. To do so
requires collecting three kinds of data: (1) data about which features
of the situation people judge to be salient; (2) data about the
alternative actions they consider; and (3) data about people's criteria
for judging these alternatives.
- Words seem to be
the best medium for capturing the first two kinds of data. As other
articles in this issue document, verbal descriptions are highly
efficient for conveying useful information to researchers trying to
recreate the situations decision makers face when making their choices.
Words, for example, can efficiently convey information about the
time-frame, timing and history of a decision, the features of
alternatives considered, the number and roles of people involved in a
decision, and the sequences of events culminating in a choice. Skillful
use of interview and questionnaire techniques usually extracts enough
situational information to render an accurate synopsis for a simulation
to mimic in code.
- Alas, however,
people's abilities to verbalize their situation definition and choice
criteria are rarely equaled by their ability to verbalize what happens
in their head and heart when processing the information that leads to a
decision (see, for example, Kahneman 2011; Nisbett & Wilson 1977).
Introspective and retrospective accounts of the choice process tend to
rationalize as much as to describe what occurred. Reasons range from
selective attention to bad memory to saving face. The resulting
accounts tend to be biased towards greater consistency and
sophistication than a person's choices reveal. Often they do not
reproduce the choice made (Hoffman
et al. 1968; Nisbett
& Wilson 1977).
- What else can be done beyond introspection to capture the cognitive and motivational processes (often called policies) of decision makers without asking them to explicate these processes in words? Below I describe three methods for inducing characteristics of these processes, not by analysing what decision makers say but by analysing what they do. The methods have their own limitations, discussed towards the end of this article. They offer, however, what I believe are good supplements to the analyses of verbal protocols, and give useful information for writing the lines of computer code representing the heads or hearts of agents making choices.
Policy capturing and paramorphic representation
- Over 50 years ago, Ken Hammond and his colleagues (for
example, see Brehmer &
Joyce 1988; Hammond 1955;
Hammond et al. 1964; Hammond & Summers 1972)
began to use multiple regression equations to represent Egon Brunswik's
ideas about how brains use information. Labeled the paramorphic
representation of judgment (Hoffman
1960), the equations
summarized how variations in features, called cues,
of stimuli correlated with judgments made. The strength of each
cue-judgment relationship was measured by its beta weight in the
multiple regression equation. The larger the beta weight, the more
important a cue was in determining the judgments.
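The idea can be sketched with simulated data: regress a judge's ratings on the cue values and read the fitted weights as indicators of cue importance. The simulated judge below is an assumption for illustration; real weights would come from participants' judgments.

```python
import numpy as np

# A paramorphic representation: regress judgments on cue values and read
# the weights as indicators of cue importance. The "judge" here is
# simulated with known weights so recovery can be checked.
rng = np.random.default_rng(0)
n_cases = 100
cues = rng.uniform(0, 1, size=(n_cases, 3))      # three cues per case
true_weights = np.array([0.7, 0.3, 0.0])         # cue 3 is ignored by the judge
judgments = cues @ true_weights + rng.normal(0, 0.05, n_cases)  # noisy ratings

X = np.column_stack([cues, np.ones(n_cases)])    # add an intercept column
betas, *_ = np.linalg.lstsq(X, judgments, rcond=None)
print(np.round(betas[:3], 2))  # approximately [0.7, 0.3, 0.0]
```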
- Dozens of
studies were soon conducted to assess which cues were important to
individual judges, often experts, when performing tasks associated with
their profession (e.g., see Dawes
& Corrigan 1974; Goldberg
1959, 1968; Hoffman et al. 1968; Slovic 1966, 1969).
In a typical study, research participants are first asked to write down
the names of features of situations they face in their profession.
Researchers then construct 10-100 hypothetical cases described by
varying the presence or absence, or the values, of the named features.
The descriptions are then shown to each participant who is asked to
assess each description on a relevant numerical scale. A multiple
regression or an analysis of variance is then performed on the
collected data to determine how much of each participant's judgment
variance can be accounted for by variations in each feature and
combination of features. The procedure is known as policy capturing
(see, for example, Brehmer & Joyce 1988).
- Hoffman, Slovic and Rorer's (1968)
study of nine radiologists judging the chances of malignant gastric
ulcers from x-ray reports nicely illustrates use of the
policy-capturing procedure. Interviews with the radiologists revealed
that all had been taught to look for seven signs of malignancy – signs
with names such as filling defect.
Hoffman et al. constructed 96 tabular summaries of the presence (yes)
or absence (no) of these seven signs, each summary showing a different,
plausible yes-no combination. The radiologists independently examined
these 96 summaries twice, each time generating 96 ratings of the
chances of malignancy. The resulting fractionally-replicated, factorial
design allowed Hoffman, Slovic and Rorer (1968)
to conduct an Analysis of Variance on each radiologist's judgments. The
percentage of variance accounted for by each of the resulting main
effects and interactions served, like beta weights, as an indicator of
the salience of each sign and sign combination.
- Several surprising results emerged from Hoffman, Slovic and Rorer's (1968)
study. Only two of the nine radiologists, for example, showed
statistical evidence of using all cues in making their judgments. Three
of the radiologists showed evidence of using only three cues, and one
radiologist used only two. Variations in which cues were used likely
accounted for the modest correlations among radiologists; the median
correlation of their malignancy assessments was r = +0.44. In short,
most of the radiologists did not reflect in their judgments what they
reported they did. The paramorphic representation of their judgment
process was more accurate than their verbal description of it.
- The policy
capturing procedure can be easily adapted to explicate a paramorphic
representation of judgment or decision processes employed by people
modeled in agent-based simulations. Suppose, for example, that we are
interested in creating an agent-based model examining the consequences
of people's decisions about using private versus public transit. Here
is one way to explicate features of the cognitive processes people
would use to make the decision.
- Step 1.
Suppose we interview a sample of, say, 30 citizens about which factors
they would consider if they had to choose between commuting to work by
car versus bus. Analyses of their qualitative responses reveal 13
features, three of which are mentioned by more than 20 of the citizens,
and three more of which are mentioned by at least ten. To make the
policy capturing procedure more manageable, we choose these six
most-popular features to construct a policy-capturing task. Suppose
these six features are as follows.
- weekly cost of driving and parking car
- car commuting time
- weekly cost of riding bus
- bus commuting time
- minutes of walking to bus stop
- chances of standing room only
- Step 2. We
then create a range of plausible values to describe variations of the
six factors. For example, we might vary weekly costs of driving and
parking from $60 to $120 in $20 increments, and chances of standing
room only from 20% to 100% in 20% increments. It is also possible to
employ words to express feature values. We might, for example, add a
seventh feature called bus cleanliness with values
such as "very clean," "moderately clean," "moderately dirty," and "very
- Step 3.
Using a 10-to-1 exemplar-to-feature rule of thumb, we next create a set
of 60 exemplars of car-bus situations. Each situation is defined by a
random combination of values of the seven features. To reduce
confounds, each person is given a fresh, random set of 60 different
situations. Table 1 illustrates three such exemplars.
Table 1: Three examples of car-bus situations.

| Feature | Situation A | Situation B | Situation C |
|---|---|---|---|
| weekly cost of driving + parking | $80 | $120 | $100 |
| car commuting time | 20 minutes | 25 minutes | 40 minutes |
| weekly cost of riding bus | $20 | $25 | $15 |
| bus commuting time | 30 minutes | 30 minutes | 20 minutes |
| minutes walking to bus | 2 | 5 | 5 |
| chances of standing room only | 80% | 40% | 60% |
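Steps 2 and 3 can be sketched as follows, assuming hypothetical value ranges for the seven features; each exemplar is a random combination of feature values, and each participant can be given a fresh random set:

```python
import random

# Generate random car-vs-bus exemplars (Step 3). The value ranges below
# are hypothetical stand-ins for the ranges chosen in Step 2; the
# cleanliness labels illustrate a verbally-described feature.
FEATURES = {
    "weekly cost of driving + parking": [60, 80, 100, 120],
    "car commuting time (min)": [20, 25, 30, 35, 40],
    "weekly cost of riding bus": [15, 20, 25, 30],
    "bus commuting time (min)": [20, 25, 30, 35, 40],
    "minutes walking to bus stop": [2, 5, 10, 15],
    "chances of standing room only": ["20%", "40%", "60%", "80%", "100%"],
    "bus cleanliness": ["very clean", "moderately clean",
                        "moderately dirty", "very dirty"],
}

def make_situations(n=60, seed=None):
    """Return n exemplars, each a dict of randomly combined feature values."""
    rng = random.Random(seed)
    return [{f: rng.choice(vals) for f, vals in FEATURES.items()}
            for _ in range(n)]

situations = make_situations(60, seed=1)   # one fresh set per participant
print(len(situations))  # 60
```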
- Step 4.
Once the 60 situations are created, we present them to a sample of
perhaps 20-50 participants of our choosing and ask them to judge each
situation, one at a time, on some meaningful scale(s). Selection and
wording of the scales is, as always, as much a matter of art as
science. For our purposes, let us use this one:
definitely take my car 0 1 2 3 4 5 6 7 8 9 definitely take the bus
- Several computer programmes are available to generate the 60 random combinations of feature values defining the 60 situations, present these situations to research participants, and record their ratings. Examples include Excel®, LiveCode® and Fluid Survey®.
- Step 5.
Once we gather 60 ratings of 60 situations from each participant, we
can undertake statistical analyses of each participant's data to distill
her/his results into a paramorphic representation. There are several
ways to proceed. One way is to deploy multiple regression software to
estimate the best-fitting linear equation of each citizen's data,
extracting regression weights and function shapes (linear? parabolic?)
for each of the seven variables and for whatever combination of
variables, such as car-expense minus bus-expense, seem interesting. If
the software requires some error variance for its computations, we can
generate it by copying a person's data, adding a bit of random error to
each judgment in the copy, and using the original and copy as a sample
size of two. Alternatively, we follow the lead of Hoffman, Slovic and
Rorer (1968) and ask
participants to complete the rating task twice.
- The statistical
analyses of each participant's ratings should generate regression
weights reflecting how important he/she believes each of the seven
features of the situation to be. We can expect these weights to vary
from person to person; one participant, for example, might vary her
car/bus ratings mostly according to driving costs and bus commuting
time, while another might vary his ratings according to the difference
between car and bus driving costs and the chances of standing room only.
- Step 6. Once each person's policy has been captured, it can easily be implemented in an agent-based model; the best-fitting regression equation for each participant simply becomes a line of programming code. Outputs of the model related to the seven features of the situation (Table 1) at time t are given as inputs to the regression equations of people a researcher wishes to include in the model. The result is a probability of each person taking the bus or car at time t+1. From these probabilities, the expected number of people taking each form of transport can be calculated and fed into parts of the model related to costs, commuting times, crowding, etc. In short, rather than employing one policy equation with arbitrary weights of average or typical rules that agents might use to make transportation decisions, the model can employ several rules, grounded in empirical observations, to more accurately reflect the nature and variety of human judgment processes.
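A minimal sketch of Step 6, assuming two hypothetical captured policies over a reduced feature set (the intercepts and weights stand in for the per-participant estimates of Step 5):

```python
# Step 6 sketch: each participant's fitted regression becomes an agent's
# decision rule. The policies below are hypothetical stand-ins for the
# per-person estimates of Step 5; predicted 0-9 ratings are rescaled to
# a probability of taking the bus.
POLICIES = {
    "agent_1": {"intercept": 6.0, "car_cost": 0.04, "bus_time": -0.10},
    "agent_2": {"intercept": 3.0, "car_cost": 0.02, "bus_time": -0.05},
}

def bus_probability(policy, situation):
    """Predicted 0-9 rating from the captured policy, rescaled to [0, 1]."""
    rating = (policy["intercept"]
              + policy["car_cost"] * situation["car_cost"]
              + policy["bus_time"] * situation["bus_time"])
    return min(max(rating, 0.0), 9.0) / 9.0

# Model outputs at time t become inputs to each agent's equation:
situation = {"car_cost": 100, "bus_time": 30}
probs = {name: bus_probability(p, situation) for name, p in POLICIES.items()}
expected_bus_riders = sum(probs.values())
```

Summing the per-agent probabilities yields the expected number of bus riders at time t+1, which the model can feed back into the parts computing costs, commuting times and crowding.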
Information seeking and conditional processing
- The policy capturing procedure
described above is useful when a list of plausible criteria for making
judgments and decisions can be obtained, when plausible and realistic
alternatives can be constructed, and when people to be modeled have the
time and motivation to judge a large set of these alternatives. When
one or more of these conditions are not met, however, the policy
capturing procedure begins to lose its charm. Another means of
gathering data to explicate the judgment or decision processes might
then be employed, one that simply requires persons being modeled to ask questions.
- One alternative
method is founded on a simple premise: When people must seek
information before making a judgment or decision, the information they
seek, and the order in which they seek it, reflects their cognitive
processes. Consider, for example, a modified game of 20 Questions in
which two voters are allowed to ask a researcher up to 20 yes/no
questions about three candidates for mayor in a local election before
voting for the one they prefer. Suppose the dialogue with Voter 1
proceeds as follows:
- Q1: Will Candidate A lower property taxes?
- A1: No.
- Q2: Will Candidate B lower property taxes?
- A2: Yes.
- Q3: Will Candidate C lower property taxes?
- A3: No.
- Decision: I'll vote for B.
- It appears that
Voter 1 considered only one campaign issue in making a choice: lowering
taxes, rejecting A and C from further consideration because of their
position on taxes, leaving B as the only acceptable candidate. In the
jargon of decision heuristics, Voter 1's trajectory of information
seeking is associated with a simple, non-compensatory,
elimination-by-aspects, lexicographic choice heuristic in which a
minimum standard (lower taxes) for selection is set and the first
candidate meeting this standard is chosen. One way of coding the rule
in an agent-based model would be as follows:
- If candidate does not promise lower taxes, then reject;
- else if candidate promises lower taxes, then continue;
- If no more candidates, choose one from the set of those not rejected.
- Now suppose a dialogue with Voter 2 proceeds as follows:
- Q1: Will Candidate C give more money for schools?
- A1: No.
- Q2: Will Candidate B give more money for schools?
- A2: Yes.
- Q3: Will Candidate B build more social housing?
- A3: No.
- Q4: Will Candidate A build more social housing?
- A4: Yes.
- Q5: Will Candidate A give more money for schools?
- A5: Yes.
- Decision: I'll vote for A.
- In contrast to
Voter 1, Voter 2 seems to consider two campaign issues in making a
choice: schools and social housing, looking for a candidate who is
favourable to both. Voter 2's rule might be coded in an agent-based
model as follows:
- If candidate does not promise more money for schools, then reject;
- Else if candidate does not promise more money for social housing, then reject;
- Else accept.
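Both voters' rules are instances of screening candidates issue by issue, which can be coded once and parameterized by the ordered list of required issues; the candidate positions below are hypothetical:

```python
# Screening (elimination) rules for Voters 1 and 2. Candidates are
# eliminated issue by issue in the order the voter examines issues.
# The positions below are hypothetical, consistent with the dialogues.
candidates = {
    "A": {"lower_taxes": False, "school_money": True,  "social_housing": True},
    "B": {"lower_taxes": True,  "school_money": True,  "social_housing": False},
    "C": {"lower_taxes": False, "school_money": False, "social_housing": False},
}

def screen(candidates, required_issues):
    """Eliminate candidates failing any required issue, in order."""
    remaining = dict(candidates)
    for issue in required_issues:
        remaining = {n: pos for n, pos in remaining.items() if pos[issue]}
    return list(remaining)

print(screen(candidates, ["lower_taxes"]))                     # Voter 1 -> ['B']
print(screen(candidates, ["school_money", "social_housing"]))  # Voter 2 -> ['A']
```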
- The 20 Questions
procedure illustrated above is only one of several information seeking
methods that might be useful for explicating people's decision rules
and coding them in simulation algorithms. Another method is eye
tracking, recording the trajectory of people's eye movements as they
scan information on a screen before making a choice (see, for example, Orquin & Mueller Loose 2013).
Eye tracking is sophisticated but rather expensive and tricky to employ
(see Duchowski 2007).
- For researchers on
a budget, an inexpensive alternative to eye tracking is the information
board: a matrix of alternatives in rows and features in columns with
information available about each alternative-feature pair by clicking,
flipping or otherwise opening its matrix cell (for example, see Bettman & Kakkar 1977;
Hofacker 1984; Payne 1976; Thorngate 1974). Table 2 illustrates a typical information
board as it might appear on a computer screen.
Table 2: A partially-examined information board [order of examination in square brackets]

| Alternative | Property taxes | Schools | Public housing | Pollution |
|---|---|---|---|---|
| A | increase taxes by 10% | hire teachers | double amount | not a problem |
| B | lower taxes by 30% | more homework | click for quote | click for quote |
| C | click for quote | more computers | click for quote | click for quote |
| D | click for quote | no change needed | click for quote | more bike paths |
- Records of the
cell-by-cell examination order and the decision made provide useful
information for deducing major aspects of a person's decision process.
In one early experiment (Thorngate
& Maki 1976),
for example, students sat opposite the researcher who arranged,
face-down, a matrix of 3x5 cards, each card containing on its face side
a quote from 2, 4 or 8 candidates for city council about 2, 4 or 8
local issues chosen by a city newspaper. Research participants were
asked to choose their most preferred candidate by flipping over as many
cards as they wanted to read, in any order, until they decided on their
preferred candidate. Results showed a strong tendency for
participants first to examine all candidates on a favoured campaign
issue (such as Schools in Table 2),
then examine a subset of these candidates on a second issue (such as
taxes in Table 2),
examine a further subset on a third issue, etc. until only 2-3
finalists remained. One of these finalists was then chosen by comparing
their quotes about a remaining issue, often an issue of minor importance.
- A few caveats
about using an information-seeking method are warranted. To be useful,
methods such as the information board require the set of features
describing alternatives to be relevant to most people whose decision
processes will be simulated. So, as with the policy capturing method,
it is prudent to interview people in advance about which features are
relevant to their judgments. Expect that 10-30% of participants will
sometimes be inconsistent in their selection of information, diverting
from a predictable trajectory out of curiosity, boredom, or whimsy. As a
result, it is advisable when possible to ask each participant to engage
in more than one choice situation so the most common trajectories can
be detected. In the case of choosing candidates for political office
(Table 2), for example, it
would be worthwhile
for a participant to seek information about Candidates A-D, make a
choice among these four, then seek information about Candidates E-H,
make a choice among these new four, then seek information about
Candidates I-L, and so forth. Modal trajectories of this participant
could then be calculated to represent her/his decision process.
- The judgment and decision processes people employ are known to vary according to the amount and layout of information available prior to a decision (Thorngate & Maki 1976). Participants with inchoate decision processes are likely to alter their process depending on how many alternatives are available and how many features describe these alternatives, in part to meet constraints of their attention and working memory. The number of alternatives and features should thus be selected to approximate the situations represented in an agent-based model. Decision processes are also influenced by the order in which the features are displayed. There is, for example, a tendency for people to examine information presented at the top-left corner of information boards more often than the information presented at the bottom-right. Such tendencies can be mitigated by shuffling matrix rows and columns for each presentation of an information board.
Social choice and motivational inference
- Just as it is desirable to
employ accurate representations of people's judgment and decision
processes in the code of agent-based simulations, it is also desirable
to render their motives for making these judgments and decisions. Most
economists and some cynics believe that people are self-centred or individualistic,
motivated to maximize their own gain. However, social psychologists
long ago demonstrated that people frequently choose to pursue other
motives (see Grzelak 1982;
Messick & McClintock
1968). High on the list of alternatives are the motives of competition
(maximizing relative gain), cooperation (maximizing
group gain), altruism (maximizing others' gain) and aggression
(minimizing others' gain).
- These five motives
are often seen as points along two continua defined by the weights a
person gives to (1) her/his own gain and (2) the gain of one or more
others (see, for example, Griesinger
& Livingstone 1973; MacCrimmon
& Messick 1976).
To illustrate, suppose Mary is asked to make a choice between two pay
packages, A and B, each giving herself and John different amounts of
money. Choosing A would give Mary $4 and John $3. Choosing B would give
Mary $5 and John $6. To make her choice, Mary assigns an overall value
to A and an overall value to B, then chooses the one with the higher
value. The values depend on how much Mary weighs her own outcome and
John's outcome, and can be rendered in the following algorithm:
- value(A) = (Wown x $Aown) + (Wother x $Aother);
- value(B) = (Wown x $Bown) + (Wother x $Bother);
- if value(A) > value(B) then choose A;
- if value(B) > value(A) then choose B;
- if value(A) = value(B) then choose A with probability = 0.5.
- Different values of Wown and Wother define different
motives, for example:
- individualism: Wown = 1.0, Wother = 0.0.
- competition: Wown = 1.0, Wother = -1.0.
- cooperation: Wown = 1.0, Wother = 1.0.
- altruism: Wown = 0.0, Wother = 1.0.
- aggression: Wown = 0.0, Wother = -1.0.
- If Mary is competitive (Wown = 1.0, Wother = -1.0), for example, the two values become the following, and she would choose A:
- value(A) = (1.0 x $4) + (-1.0 x $3) = $4 - $3 = +1.0
- value(B) = (1.0 x $5) + (-1.0 x $6) = $5 - $6 = -1.0
- Other values of
Wown and Wother can be used to represent combinations of motives. A
combination of individualism and cooperation, for example, can be
represented as Wown = +1.0 and Wother = +0.5. Different
motives can lead to different choices. Table 3 shows a set of five
situations, each requiring Mary to choose between pay packages A and B.
Below each situation are dots (•) indicating which alternative would be
preferred given the five possible motives listed above.
Table 3: Different motives lead to different choice patterns.

| Situation # | 1 | 1 | 2 | 2 | 3 | 3 | 4 | 4 | 5 | 5 |
|---|---|---|---|---|---|---|---|---|---|---|
| If Mary chooses: | A | B | A | B | A | B | A | B | A | B |
| Mary gets | $6 | $5 | $4 | $3 | $4 | $0 | $5 | $4 | $1 | $7 |
| John gets | $2 | $5 | $5 | $0 | $3 | $5 | $9 | $2 | $0 | $2 |
| Individualism | • | | • | | • | | • | | | • |
| Competition | • | | | • | • | | | • | | • |
| Cooperation | | • | • | | • | | • | | | • |
| Altruism | | • | • | | | • | • | | | • |
| Aggression | • | | | • | | • | | • | • | |
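The correspondence between motive weights and choice patterns can be checked mechanically by applying the value(·) rule above to the five situations of Table 3; a sketch:

```python
# Apply each motive's (Wown, Wother) weights to the five situations of
# Table 3 and pick the higher-valued pay package.
MOTIVES = {
    "individualism": (1.0, 0.0),
    "competition":   (1.0, -1.0),
    "cooperation":   (1.0, 1.0),
    "altruism":      (0.0, 1.0),
    "aggression":    (0.0, -1.0),
}
# (Mary's payoff, John's payoff) for packages A and B in each situation
SITUATIONS = [
    {"A": (6, 2), "B": (5, 5)},
    {"A": (4, 5), "B": (3, 0)},
    {"A": (4, 3), "B": (0, 5)},
    {"A": (5, 9), "B": (4, 2)},
    {"A": (1, 0), "B": (7, 2)},
]

def choose(w_own, w_other, situation):
    """Return the package with the higher weighted value (ties -> A here)."""
    value = {pkg: w_own * own + w_other * other
             for pkg, (own, other) in situation.items()}
    return max(value, key=value.get)

for motive, (w_own, w_other) in MOTIVES.items():
    print(motive, [choose(w_own, w_other, s) for s in SITUATIONS])
```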
- In addition to
deducing choices from motivational orientations, we can also induce the
motivational orientations from the choices made. One of the easiest
induction procedures employs a simple choice task and logistic
regression. We begin by presenting a person with dozens of situations,
such as the five shown in Table 3, varying the
four payoffs randomly and independently. We then ask the person to make
choices in these situations, recording for each situation the four
payoffs and choice made. Finally, we calculate a logistic regression of
own gain and other's gain on the choices made to estimate the odds
ratios for own gain and for others' gain. The logistic regression
equation becomes a paramorphic representation of a person's
motivational orientation; the odds ratios become the best estimates of
Wown and Wother.
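A sketch of this induction step with simulated rather than human choices: a cooperative chooser (Wown = Wother = 1) is simulated over random payoff situations, and a bare-bones logistic regression (plain gradient ascent, standing in for any statistics package) recovers positive weights for both own and other's gain:

```python
import numpy as np

# Simulate a cooperative chooser over random two-person payoff
# situations, then fit a logistic regression of the (own, other)
# payoff differences on the choices made.
rng = np.random.default_rng(2)
n = 200
payoffs = rng.integers(0, 10, size=(n, 4))   # A_own, A_other, B_own, B_other
w_own, w_other = 1.0, 1.0                    # cooperative motive
value_diff = (w_own * (payoffs[:, 0] - payoffs[:, 2])
              + w_other * (payoffs[:, 1] - payoffs[:, 3]))
chose_A = (value_diff > 0).astype(float)

# Predictors: own and other payoff differences (A minus B)
X = np.column_stack([payoffs[:, 0] - payoffs[:, 2],
                     payoffs[:, 1] - payoffs[:, 3]])
beta = np.zeros(2)
for _ in range(2000):                        # simple gradient ascent
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    beta += 0.01 * X.T @ (chose_A - p) / n
print(np.round(beta, 1))  # both coefficients positive, roughly equal
```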
- Though Table 3
shows situations with two people, the situations need not be restricted
only to two. Situations can have two, three, or more others affected by
a person's choices, each other represented as a predictor variable in
the logistic regression equation. As the number of others increases, so
too should the number of situations shown to a person in order to
obtain stable estimates of his/her beta weights. It is traditional to
ask a research participant to make choices in 50-100 two-person
situations. The task goes quickly; it normally takes no more than 30
minutes for a person to make choices in 100 situations. Once a logistic
regression equation capturing a person's motivational orientation is in
hand, it is straightforward to insert the equation as code for a
relevant part of an agent-based model.
- As with
cognitive processes, it is safe to assume that motivational
orientations will vary from person to person, context to context, time
to time (see, for example, Brewer
& Kramer 1986; Messick
& Brewer 1983; van
Lange et al. 2007).
For example, people are more likely to be motivated to reduce the
difference between their own gain and other's gain when they receive
less than the other than when they receive more, demonstrating more
competition or aggression when "losing" (McClintock
et al. 1973).
Motivations are also likely to be influenced by a chooser's
relationship with the others. It is reasonable to assume, for example,
that people are more altruistic in situations where their choices
affect their children than in situations where their choices affect
their enemies. Including such motivational variations in the code of
agent-based models is likely to improve their accuracy.
- The policy-capturing and motive-assessment methodologies discussed above both employ statistical analyses of stimulus-response pairs to generate paramorphic representations of underlying cognitive and motivational processes. The information-seeking methodology can be adapted to explicate motivational processes as well. Research participants could, for example, be required to ask for payoff information before making choices in situations such as those shown in Table 3. A participant who repeatedly asked only "How much for me in Alternative A? How much for me in Alternative B?" then chose the higher of the two numbers might reasonably be assumed to be individualistic. Another participant who repeatedly asked only "How much would the other receive if I chose A? If I chose B?" and always chose the higher payoff, might reasonably be assumed to be altruistic. Such an information seeking methodology might explicate motives faster than one requiring choices across dozens of situations. The methodology, however, remains unproven.
- Suppose we are interested in
the consequences of employing new rules for choosing winners of a
research funding competition. The new rules prescribe that each funding
application must be assessed according to three criteria: (1) the merit
of its research ideas; (2) the merit of its research methodology; and
(3) the practical implications of its findings. The new rules also
prescribe that applications should be adjudicated by a committee of
three, and that final decisions should be reached by consensus.
Procedures for accomplishing these goals are left to the committee. How
might different procedures affect the outcomes of the adjudications?
- One sensible way
to construct an answer is to determine the cognitive rules members of
the committee are likely to use in assessing and combining the three
merit criteria. Assume the committee members are known, and they agree
to participate in relevant research about their own judgment and
decision processes. We may then construct a task using either or both
of the policy-capturing or information-seeking methods above to
estimate how each of the committee members maps the words of funding
proposals into assessments of merit. We might, for example, find or
construct dozens of short proposals that vary in their merit on each of
the three criteria, ask each committee member to give each proposal an
overall merit rating, then conduct regression analyses to determine the
beta weights each member gives each criterion in her/his assessment.
Alternatively, we might ask each committee member to examine a set of, say, 20
proposals and observe how he/she accomplishes the task. Does the member
always begin by reading the practical implications paragraphs,
continuing to the method paragraphs only if the practical implications
are judged good or better? Does the member stop reading the practical
implications paragraphs after proposal 3 and concentrate only on the
ideas paragraphs, commenting that the proposals' practical implications
are all speculative? Such information seeking trajectories and comments
can help explicate the committee member's judgment process.
- Suppose that,
after the three committee members undergo our research tasks, we
distill from their judgments the following representations of their judgment processes:
- Member A starts each proposal assessment by reading the practical implications paragraphs of the proposal. If A judges the practical implications to be lower than 4 on a 6-point scale (0 = horrible to 5 = wonderful), then A stops reading and rejects the proposal; otherwise, A reads and rates the ideas section and the method section on the same 6-point scale, and averages the two ratings for an overall assessment. After all proposals are thus assessed, A rank orders the average ratings to bring to the committee.
- Member B first reads the ideas section of each proposal and eliminates the proposals with ideas she rates lower than 3 on the 6-point scale. She then reads the method section of each remaining proposal and eliminates all those with unfathomable methods. Finally, she ranks the practical implications sections of her finalists.
- Member C creates a spreadsheet with one row for each proposal and three columns for the three criteria. She systematically reads each proposal and assigns a rating on the 6-point scale for each criterion. After rating all the applications on the three scales, member C assigns weights of 0.5 to ideas, 0.3 to methods, and 0.2 to practical implications, then calculates the weighted average of the ratings for each proposal. Finally, she rank orders these averages.
- We are now able to
begin writing a simulation incorporating algorithms that represent the
cognitive processes of the three committee members. Translation from
judgment processes to algorithms is usually straightforward. For
example, A's judgment process could be captured thusly:
- If practical implications rating < 4, then reject; else
º Rating = (ideas rating + method rating) / 2.
- C's judgment process reduces to a single formula:
º Judgment = 0.5*ideas + 0.3*methods + 0.2*practical.
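The three members' rules can also be rendered directly as agent code. A minimal sketch in Python, assuming each proposal arrives as pre-scored ratings (ideas, method, practical) on the 0-5 scale; the function names, the `None`-for-rejected convention, and B's `method_fathomable` flag are my own devices, not part of the article's description:

```python
# Sketch of the three committee members' judgment rules as agent code.

def member_a(ideas, method, practical):
    # A rejects outright if practical implications rate below 4,
    # otherwise averages the ideas and method ratings.
    if practical < 4:
        return None  # rejected
    return (ideas + method) / 2

def member_b(ideas, method, practical, method_fathomable=True):
    # B eliminates on weak ideas (< 3), then on unfathomable methods,
    # then ranks survivors by their practical implications.
    if ideas < 3 or not method_fathomable:
        return None  # eliminated
    return practical

def member_c(ideas, method, practical):
    # C always computes a fixed weighted average of all three ratings.
    return 0.5 * ideas + 0.3 * method + 0.2 * practical

print(member_a(5, 3, 4))  # 4.0
print(member_b(2, 5, 5))  # None (ideas below 3)
print(member_c(4, 3, 5))  # 3.9
```

Note that A and B are non-compensatory rules (a failed threshold cannot be offset by strengths elsewhere), while C's weighted average is compensatory; the code makes that structural difference explicit.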
- Thus constructed,
the simulation could be used to estimate how much agreement we might
expect among the three committee members when assessing any
hypothetical set of applications. Hundreds of hypothetical applications
– each described by a vector of three numbers representing one or all
committee members' ratings of its ideas, method, and practical
implications – could serve as input to the simulation. Analysis of
variations in the outputs of committee members' judgments would
indicate how susceptible the assessments of applications are to
variations in the committee members' judgment processes. If these
processes generated a high proportion of disagreements, adjudication
administrators might then seek either to change members of the
committee or train its members to use a common judgment process.
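The disagreement analysis might be sketched as follows. The condensed scoring rules, the batch size of 20, and the choice to rank eliminated proposals below everything else (negative infinity) are illustrative assumptions of mine, as is the focus on top-choice disagreement rather than full rank-order agreement:

```python
# Sketch: generate random rating triples, rank them under each member's
# rule, and count how often the members' top choices differ.
import random

random.seed(7)
REJECT = float("-inf")  # eliminated proposals rank below everything

def score_a(i, m, p):
    return (i + m) / 2 if p >= 4 else REJECT

def score_b(i, m, p):
    # fathomability is not representable in a rating triple, so omitted
    return p if i >= 3 else REJECT

def score_c(i, m, p):
    return 0.5 * i + 0.3 * m + 0.2 * p

def top_choice(scores):
    return max(range(len(scores)), key=scores.__getitem__)

disagreements = 0
TRIALS = 1000
for _ in range(TRIALS):
    batch = [tuple(random.randint(0, 5) for _ in range(3)) for _ in range(20)]
    tops = {top_choice([f(*app) for app in batch])
            for f in (score_a, score_b, score_c)}
    disagreements += len(tops) > 1  # members favour different proposals

print(f"top-choice disagreement in {disagreements / TRIALS:.0%} of batches")
```

The resulting disagreement rate is exactly the kind of output an adjudication administrator could use to decide whether intervention is warranted.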
- Short of training
or replacing committee members to increase their inter-judge
consistency, can anything else be done? Committee discussion remains an
obvious alternative; members meet in a spirit of cooperation, talk
about their differences, and modify their assessments in light of each
other's arguments. Consensus follows.
- Yet, what if one
or more committee members lacks a spirit of cooperation? How might
other motives influence the outcome of committee deliberations? We
might try to guesstimate an answer by assessing each committee member's
social motive using the motivational inference technique previously
presented. I am unaware of any research linking social motivations
inferred by this technique to the behavior of adjudication committee
members. Still, for illustrative purposes it is reasonable to speculate
that cooperators are more likely than competitors to reach consensus
quickly because, by definition, competitors are more likely to view
compromise as a personal loss ("Winning is the only thing!" and all that).
- If the social motives of committee members can predict aspects of the members' group dynamics, then a simulation of this relationship is possible. A simulation of three cooperative adjudicators, for example, might indicate rapid consensus in 90% of discussions to resolve their disagreements. A simulation of two cooperative adjudicators and one competitive adjudicator might show rapid consensus in only 10% of the discussions, and no consensus in another 25%. Such simulation results would be useful to administrators seeking rapid consensus, perhaps leading them to conclude that cooperativeness is just as important as expertise in selecting future adjudicators.
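One simple way to realize such a simulation is a per-round concession model: each round, every member independently moves toward the group position with a motive-dependent probability, consensus is reached when all members have converged, "rapid" means within two rounds, and deadlock means no consensus within ten. The concession probabilities (0.8 for cooperators, 0.1 for competitors) and round limits are invented parameters, chosen only to produce figures in the spirit of the hypothetical percentages above:

```python
# Toy model: committee composition -> consensus speed.
import random

random.seed(3)
CONCEDE = {"cooperator": 0.8, "competitor": 0.1}

def discuss(motives, max_rounds=10):
    unresolved = set(range(len(motives)))
    for rnd in range(1, max_rounds + 1):
        # members who concede this round drop out of the unresolved set
        unresolved = {i for i in unresolved
                      if random.random() > CONCEDE[motives[i]]}
        if not unresolved:
            return rnd  # consensus after this many rounds
    return None  # deadlock

def summarize(motives, trials=10_000):
    results = [discuss(motives) for _ in range(trials)]
    rapid = sum(r is not None and r <= 2 for r in results) / trials
    deadlock = sum(r is None for r in results) / trials
    return rapid, deadlock

for committee in (["cooperator"] * 3,
                  ["cooperator", "cooperator", "competitor"]):
    rapid, deadlock = summarize(committee)
    print(committee, f"rapid={rapid:.0%} deadlock={deadlock:.0%}")
```

Even this crude model reproduces the qualitative pattern: an all-cooperator committee reaches rapid consensus far more often, and only the mixed committee deadlocks with any frequency.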
- Policy capturing, information
seeking, and motivational inference are three of several methodologies
for explicating salient features of people's cognitive processes and
motivational orientations. The incorporation of their results in
agent-based models has the potential to increase the realism and
utility of these models. Still, all methodologies have limitations, and
the three outlined here are no exception. The three methodologies are
all reactive, requiring that people engage in tasks
knowing their judgments and decisions are being recorded. Even with
assurances of anonymity, some people might "fake good" during these
tasks, trying to conceal sloppy thinking or sinister motives. The
procedures themselves are somewhat artificial, so the resulting
explications might not generalize to procedural variations in the world
being simulated by an agent-based model.
- These and other
possible limitations, however, are no greater than those of alternative
methodologies. Interviews, for example, are equally reactive and
produce responses equally likely to change from time to time. Content
analyses of relevant documents or other archival data might not reveal
a judgment or decision process relevant to the situation being
simulated. Despite their limitations, most of these empirical
methodologies still appear more worthy of a try than the traditional
alternative: relying on a researcher's academic introspections or a
programmer's expediencies to portray a typical agent.
- Who should be
chosen to model? If a simulation can validly assume that an agent or a
group of agents all use the same judgment and decision processes, then
any one of the people modeled by the agent or agent group is fair game.
If, for example, we wish to model air traffic controllers in a
simulation of how increases in air traffic affect the chances of an
accident, and if we know these professionals do their job in pretty
much the same way, then explicating the process of one controller would
suffice. Alas, as Hoffman, Slovic and Rorer (1968)
discovered among radiologists, this interpersonal consistency is rare.
Final choices about the number and variety of cognitive and
motivational processes to be coded in an agent-based model are as much
art as science. But the choices are likely to be improved by trying one
or more of the methodologies noted here.
- Groups of people,
rather than individuals, often make the decisions being simulated by an
agent-based model. When they do, modeling becomes more complex. Group
judgment and decision processes are often quite different from the processes
of individual group members. Social interactions and group pressures,
for example, often lead people to abandon their preferred processes in
order to avoid interpersonal conflicts, to assert their dominance in a
group, to reduce the time taken in boring group discussions, etc. (for
example, see Esser 1998; Janis 1972, 1982).
As a result, it is worthwhile to consider modeling the group itself
rather than modeling its individual members. To my knowledge, no
research has yet been done to capture a group policy, trace a group
information-seeking trajectory, or estimate the motivational
orientation of a group. It might be possible to do so if a group is
small and its members cooperative. On the other hand, it is little more
than amusing to think of turning a House of Commons into a lab.
- Finally, people
are known to change their hearts and minds on occasion, causing them to
change the cognitive processes or motivational orientations they employ
in making judgments and decisions. Indeed, many of these changes form
the goals of higher education; consider, for example, management or
leadership programmes offering courses on how to be a better decision
maker by breaking old habits of thought and feeling. Yet people do not
need a diploma to make such changes. The outcomes of current decisions
often affect the ends or means of making future decisions. When
decisions are repeated and outcomes are known, learning can occur, with
its concomitant changes in cognition or motivation. When decisions
proliferate or become routine, stress, burnout or boredom can occur,
motivating cognitive shortcuts. Interpersonal conflicts can poison
group decision making routines. The same routines can also change as
group members come and go. These and other changes in judgment and
decision processes present even more challenges to researchers trying
to model real situations. Assessing such moving targets is likely to
require repeated measurements.
- Unfortunately, it takes time, effort and often money to design, construct, conduct and analyse the results of studies based on the three methodologies discussed above. They can be, and sometimes will be, a bother. It is thus worthwhile to learn in advance if the decision processes to be explicated by these methodologies will make much of a difference in the outcomes of subsequent simulations. Sensitivity analyses can help. By varying the lines of code representing agents' judgment or decision processes during trial runs of the simulation, it should be possible to determine how much variations in this code influence the simulation outcomes. If the influence is small, there is little reason to deploy the methodologies noted here. If the influence is large, the methodologies are worthy of deployment.
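Such a sensitivity analysis might be sketched as follows: run the same batches of random applications through two candidate decision rules and measure how much the funded set changes. The two rules, the batch size of 20, and the "top 5 get funded" cutoff are all invented for illustration:

```python
# Sensitivity-analysis sketch: does swapping the agents' decision rule
# change which applications get funded?
import random

random.seed(11)

def rule_weighted(i, m, p):
    # compensatory: fixed weighted average of the three criteria
    return 0.5 * i + 0.3 * m + 0.2 * p

def rule_cutoff(i, m, p):
    # non-compensatory: reject weak practical implications outright
    return (i + m) / 2 if p >= 4 else float("-inf")

def funded(batch, rule, k=5):
    ranked = sorted(range(len(batch)), key=lambda j: rule(*batch[j]),
                    reverse=True)
    return set(ranked[:k])

overlaps = []
for _ in range(1000):
    batch = [tuple(random.randint(0, 5) for _ in range(3)) for _ in range(20)]
    overlaps.append(len(funded(batch, rule_weighted) &
                        funded(batch, rule_cutoff)) / 5)

mean_overlap = sum(overlaps) / len(overlaps)
print(f"mean overlap of funded sets: {mean_overlap:.0%}")
```

A low mean overlap would signal that simulation outcomes are highly sensitive to the choice of decision rule, and hence that empirically explicating the real adjudicators' processes is worth the bother; a high overlap would suggest the cheaper expedient of any plausible rule will do.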
- BETTMAN, J., & Kakkar,
P. (1977). Effects of information presentation format on consumer
information acquisition strategies. Journal of Consumer
Research, 3(4), 233–240. [doi:10.1086/208672]
BREHMER, B., & Joyce, C. (1988) (Eds.). Human judgment: The Social Judgment Theory view. Amsterdam: Elsevier Science Publishers.
BREWER, M. B., & Kramer, R. M. (1986). Choice behavior in social dilemmas: Effects of social identity, group size and decision framing. Journal of Personality and Social Psychology, 50(3), 543–549. [doi:10.1037/0022-3514.50.3.543]
CHOWDHURY, W., & Thorngate, W. (2013). Take it or leave it: Cognitive rules for satisfying choices. In R. West & T. Stewart (eds.), Proceedings of the 12th International Conference on Cognitive Modeling, Ottawa: Carleton University.
DUCHOWSKI, A. (2007). Eye tracking methodology: Theory and practice (2nd Ed.). New York: Springer.
GOLDBERG, L. (1959). The effectiveness of clinicians' judgments: The diagnosis of organic brain damage from the Bender-Gestalt Test. Journal of Consulting Psychology, 23(1), 25–33. [doi:10.1037/h0048736]
GRIFFIN, D., Gonzalez, R., Koehler, D., & Gilovich, T. (2012). Judgmental heuristics: A historical overview. In K. Holyoak and R. Morrison (Eds.), Oxford Handbook of Thinking and Reasoning. Oxford: Oxford University Press, 322–345. [doi:10.1093/oxfordhb/9780199734689.013.0017]
GRZELAK, J. L. (1982). Preferences and cognitive processes in social interdependence situations. In V. Derlega & J. Grzelak (Eds), Cooperation and helping behavior: Theory and research. New York: Academic Press. [doi:10.1016/B978-0-12-210820-4.50010-1]
HOFACKER, T. (1984). Identifying consumer information processing strategies: New methods of analyzing information display board data. In T Kinnear (Ed.), Advances in Consumer Research Volume 11. Provo, UT : Association for Consumer Research, pp. 579–584.
HOFFMAN, P., Slovic, P. & Rorer, L. (1968). An analysis-of-variance model for the assessment of configural cue utilization in clinical judgment. Psychological Bulletin, 69(5), 338–349. [doi:10.1037/h0025665]
JANIS, I. (1972). Victims of groupthink. Boston: Houghton-Mifflin.
JANIS, I. (1982). Groupthink (2nd ed.). Boston: Houghton-Mifflin.
KAHNEMAN, D. (2011). Thinking, fast and slow. New York: Farrar, Strauss and Giroux.
MCCLINTOCK, C., Messick, D., Kuhlman, M., & Campos, F. (1973). Motivational bases of choice in three-choice decomposed games. Journal of Experimental Social Psychology, 9(6), 572–590. [doi:10.1016/0022-1031(73)90039-5]
MESSICK, D., & Brewer, M. (1983). Solving social dilemmas: A review. In L. Wheeler & P. Shaver (Eds.), Review of Personality and Social Psychology (Vol 4, pp 11–44). Beverly Hills: Sage.
PAYNE, J. (1976). Heuristic search processes in decision making. in B. Anderson (Ed.), Advances in Consumer Research, Volume 3. Cincinnati, OH: Association for Consumer Research, pp. 321–327.
THORNGATE, W. (1974). Information seeking: An alternative methodology for the investigation of social judgements. Report 74-6, Social Psychology Labs, University of Alberta, 14 pages.
THORNGATE, W. & Maki, J. Decision heuristics and the choice of political candidates. Report 76-1, Social Psychology Labs, University of Alberta, 37 pages.
VAN LANGE, P., de Cremer, D., van Dijk, E., & van Vugt, M. (2007). Self-interest and beyond: Basic principles of social interaction. In A. Kruglanski & E.T. Higgins (eds.), Social Psychology: Handbook of Basic Principles. New York: Guilford.