© Copyright JASSS

Neural Networks: An Introductory Guide for Social Scientists

G. David Garson
London: Sage Publications
Cloth: ISBN 0-7619-5730-8; Paper: ISBN 0-7619-5731-6

Reviewed by
Daniel John Zizzo
Bounded Rationality in Economic Behaviour Unit, Department of Economics, University of Oxford.

Social scientists may have two different goals in dealing with neural network modelling: neural networks may be used as statistical tools for data analysis, or alternatively they may be used as models of cognitive and decision-making processes.

Garson frames his introductory book on neural networks around the first goal: he perceives neural networks as models that, 'in general, ... may outperform traditional statistical procedures' (p. 1) in classification and completion problems, when the nature of the relationship is fuzzy, complex, non-linear and not easily specified in structural equations.

This is a compact book, both in page size and in number of pages (just 163 pages, plus notes, references and index). It is very detailed and with clear ambitions of completeness in the material being covered. Both features - compactness and detail - make Garson's book a very useful guide on a variety of material on neural networks.

Chapter 1 is presented as an introductory chapter. After an initial paragraph on neural networks as valuable statistical tools, it goes into a detailed historical overview that starts from McCulloch and Pitts' formal neurons and then dwells on Hebb, Rosenblatt, Minsky and Papert, Grossberg, Kohonen, Hopfield, Werbos, Hornik and collaborators, and many others. The section that follows expands on the initial paragraph, to present 'a case for neural network analysis' (p. 8). It lists seven points from Toborg and Hwang (1989) in which 'neural network models differ from conventional statistical computing' (p. 8): massive parallelism, high interconnectivity, simple processing, distributed representation, fault tolerance, collective computation and self-organisation. Garson then quickly refers to a few papers: for example, he mentions (again) the Hornik et al. (1989) result that neural networks are universal approximators, but without specifying that a hidden layer is required. He produces a couple of bulleted lists of references showing applications of neural network modelling, with brief discussions of each paper. He ends the section with a list of properties of neural network models, taken from Haykin (1994). The final two sections are on the limitations hampering the spread of neural network modelling in the social sciences, and on uses of neural network analysis in economics, management, sociology and psychology. Both sections are framed in terms of neural networks as statistical tools; the second contains endless bulleted lists of references.

Chapter 2 claims to be a 'terminological tour' of neural network analysis. It is actually more than that: it describes what a neural network is and what the basic ideas behind neural network processing are. However, this is embedded in a thick barrage of jargon and terminological variants on the jargon: completeness is what Garson is aiming at.

Chapter 3 focuses on the back propagation model, rightly described by the author as the most common form of neural network in use. It starts off with a description of the gradient descent learning rules that can be used with back propagation (and one, the competitive algorithm, which cannot). It continues with a good presentation of the back propagation process. It then presents an accessible introduction to the difficulties gradient descent algorithms have in finding global rather than merely local error minima, and to ways of making them more effective, such as the use of a momentum term. Finally, it discusses variants on the back propagation model.

Here, as in the following chapter on 'alternative network paradigms', the focus is again largely on completeness of the overview. So, at the start of chapter 4, the reader learns that Garson knows of '42 distinct classes of network topology' (p. 59). Garson then presents an overview, in rapid succession, of GRNN, PNN, RBF, GMDH, ATNN, ART, BAM, Kohonen, counter propagation, LVQ, CALM and hybrid models. It is a very heavy chapter to read all the way through, but whoever is interested in a reference guide to specific classes of networks will find compact treatments, comparisons with other classes of networks and references allowing them to learn more.
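The momentum term Garson discusses in chapter 3 is easy to illustrate outside a full network. The toy sketch below (my own illustrative example, not code from the book; the function name and parameter values are assumptions) runs gradient descent on a one-dimensional quadratic error surface, carrying forward a fraction of the previous update so that successive steps are damped rather than oscillating:

```python
def gradient_descent(grad, w0, lr=0.1, momentum=0.9, steps=200):
    """Update a weight w by gradient descent with a momentum term:
    each step mixes the fresh gradient with a fraction (momentum)
    of the previous step, smoothing the descent path."""
    w, velocity = w0, 0.0
    for _ in range(steps):
        velocity = momentum * velocity - lr * grad(w)
        w += velocity
    return w

# Toy error surface E(w) = (w - 3)^2, so grad E = 2*(w - 3);
# the error minimum is at w = 3.
w_star = gradient_descent(lambda w: 2.0 * (w - 3.0), w0=0.0)
```

With momentum set to zero the same routine reduces to plain gradient descent; on realistic, ravine-shaped error surfaces the momentum variant typically converges faster, which is the point of Garson's discussion.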

Chapter 5 is on key methodological issues, such as when neural networks are called for as statistical tools and how to devise neural network simulations. Garson provides valuable and accessible discussions of how best to choose the neural network architecture, the training set, the training duration and the learning parameters. He discusses generalisation, cross-validation and the all-important issue of interpreting neural network output.
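The generalisation issue at the heart of that chapter rests on one procedure: fit the network on one subset of the data and judge it on a held-out remainder. A minimal hold-out split might look like the following (an illustrative sketch under my own assumptions, not the book's procedure; the function name and split fraction are invented for the example):

```python
import random

def holdout_split(data, train_frac=0.7, seed=0):
    """Shuffle the data and split it into a training set, used to
    fit the network, and a held-out test set, used to measure
    generalisation rather than mere memorisation."""
    rng = random.Random(seed)  # fixed seed for a reproducible split
    shuffled = data[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

train, test = holdout_split(list(range(10)))  # 7 training, 3 test items
```

Cross-validation, which Garson also covers, simply repeats this split several times with different held-out portions and averages the test performance.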

Chapter 6 discusses neural network software, focusing on Neural Connection, a program that can be integrated with the popular SPSS statistical package, and on NeuroShell. There are numerous screen shots from Neural Connection but, in odd contrast, none of NeuroShell. Chapter 7 is simply an example of statistical analysis carried out using the Neural Connection software.

Chapter 8 concludes in just three pages. Garson discusses four examples of (as he puts it) the increased use of neural networks in social science research. One of them is a doctoral dissertation showing how useful neural networks are to model 'learning, recognizing and recognizing sequences of patterns' (p. 161) in relation to three-dimensional object recognition (Vogh 1997). This is apparently sufficient for Garson to claim the 'superior functionality' of neural networks to model 'human behaviour' (p. 162). The rest of the conclusion is all along the lines of neural networks as statistical tools, up to the last section where the future success of neural networks is phrased in terms of their ability to get into 'graduate method textbooks treating multivariate analysis' (p. 163).

There is merit in focusing, as Garson does, on networks as statistical tools and in producing what is virtually a pocket-size encyclopaedia of neural network modelling. However, there are also limitations in doing so. Garson says that one of the obstacles to the diffusion of neural network modelling is that there are many possible models and choosing among them is 'an art form' (p. 17). My sense is that trying to read Garson's book will only reinforce this impression in the beginner and the outsider, who will be confused by too many variants and too many possible models. Presenting all possible variants of everything is excellent for the experienced neural network researcher who wants a compact guide, but unhelpful for all but the most enthusiastic beginner. The beginner would be better motivated by a couple of example papers spelled out in full, illustrating how neural networks improve on the alternatives, than by endless lists of references. Similarly, the book would benefit from an introduction to the key concepts of neural modelling that focuses on the ideas rather than the terminology, such as the one provided by MacLeod et al. (1998).

There are also limitations in framing neural networks virtually exclusively as statistical tools. The one qualified exception to this framing is in the conclusions, but a cursory reference to a single study on physical object recognition is clearly insufficient to make researchers think of neural networks as psychological models potentially relevant for the social sciences.

As statistical tools, neural networks amount 'to nothing more than an extremely flexible functional form' coupled with a search algorithm (Kennedy 1998, p. 302): roughly speaking, the better the performance of the network on the training and especially the testing set, the better it is. As models of cognition and decision-making, neural networks can either represent individual agents facing a problem of some kind (Rumelhart and McClelland 1986), or a group of agents identified as network nodes interacting with each other by means of the network connections (Nowak and Vallacher 1998). If neural networks are used as models of decision-making, then their performance is measured not in terms of achieving perfect learning, but rather in terms of the extent to which they can work as explanatory and predictive models of real-world human behaviour. It follows that the fact that Anderson's (1998) networks may make mistakes or give approximate answers in dealing with arithmetic is not a problem but a virtue, since it mirrors what even adults do. Similarly, the performance of a network model for the learning of the English past tense gets worse before getting better, and this is a valuable finding because it agrees with what is observed in children (Rumelhart and McClelland 1986). In other cases, a correct answer simply does not exist: when Macy (1996) studies the effect of uncertainty on the co-operative behaviour of interactive neural networks playing the iterated Prisoner's Dilemma, he is simply interested in the predictions that follow from his theoretical model.
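Kennedy's 'extremely flexible functional form' can be made concrete with a minimal sketch of a one-hidden-layer feedforward network (hypothetical illustrative code, not drawn from any of the works cited): the output is just a weighted sum of sigmoid units, and varying the handful of weights traces out very different input-output shapes, which a search algorithm then tunes to the data.

```python
import math

def sigmoid(z):
    """Standard logistic activation, mapping any real z into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def net(x, params):
    """One-hidden-layer network output for scalar input x.
    params is a list of (w, b, v) triples, one per hidden unit:
    w scales the input, b shifts it, v weights the unit's output.
    With enough hidden units this family can approximate any
    well-behaved function, which is the sense of Kennedy's remark."""
    return sum(v * sigmoid(w * x + b) for (w, b, v) in params)

# A single hidden unit with w=1, b=0, v=1 reduces to sigmoid(x) itself.
y = net(0.0, [(1.0, 0.0, 1.0)])
```

On this view the only question is fit; the weights carry no behavioural interpretation, which is exactly where the statistical-tool and decision-making uses of the same mathematics part company.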

There are important methodological differences between using neural networks as statistical tools and as models of decision-making. If neural networks are a statistical tool, virtually anything goes, in terms of choice of algorithms and parameters, that yields a better performance on the testing set (with the one qualification that interpretability of the output might be desirable). If neural networks are models of decision-making, then one needs to face an extra set of methodological issues. For example, is the model robust to small changes, say in the random seed or the architecture? Is the learning dynamic of interest? Is the training empirically reasonable and meaningful in relation to what we would expect real-world agents to have? If the network is interpreted as a network of agents identified as nodes, is the kind of interactivity that is imposed by the model empirically plausible? If the network is interpreted as a single agent, does the researcher want to make some claim of biological plausibility?

A limitation I see in Garson's book is that not only does he provide very little motivation to social scientists to do network models of cognition and decision-making, but he also appears not to recognise the differences between the two possible goals of neural network modelling. Hence, he produces a list of studies mostly of networks as models of cognition and decision-making on pages 21 to 22, or quotes Vogh (1997) on page 161, seamlessly within a verbal treatment of neural networks as purely statistical tools. In the light of the extra methodological demands imposed by using neural networks to model cognitive and decision-making processes, this has to be considered as potentially confusing.

Still, what I have said should not detract from the significant merits of Garson's guide. I have learned from this book, and I shall keep it on my shelf as a good reference source because of its compactness, completeness and detail, as well as because of the very clear and useful chapter on methodology. I would not recommend this book to students as a primary introductory textbook on neural networks, but I think that it could suit them well as a complementary textbook. Social scientists interested in neural networks as models of cognitive and decision-making processes might find MacLeod et al. (1998), Read and Miller (1998), Smith (1996) and Zizzo and Sgroi (2000) useful reads.


ANDERSON J. A. 1998. Learning arithmetic with a neural network: seven times seven is about fifty. In D. Scarborough and S. Sternberg, editors, Methods, Models and Conceptual Issues: An Invitation to Cognitive Science, Volume 4. The M. I. T. Press, Cambridge, MA.

HAYKIN S. 1994. Neural Networks: A Comprehensive Foundation. Macmillan, New York, NY.

HORNIK K., M. Stinchcombe and H. White 1989. Multilayer feedforward networks are universal approximators. Neural Networks, 2:359-366.

KENNEDY P. 1998. A Guide to Econometrics. Blackwell, Oxford.

MACLEOD P., K. Plunkett and E. T. Rolls 1998. Introduction to Connectionist Modelling of Cognitive Processes. Oxford University Press, Oxford.

MACY M. 1996. Neural selection and social learning in the Prisoner's Dilemma: co-adaptation with genetic algorithms and neural networks. In W. B. G. Liebrand and D. M. Messick, editors, Frontiers in Social Dilemma Research. Springer-Verlag, Berlin.

NOWAK A. and R. P. Vallacher 1998. Toward computational social psychology: cellular automata and neural network models of interpersonal dynamics. In S. J. Read and L. C. Miller, editors, Connectionist Models of Social Reasoning and Social Behavior. Lawrence Erlbaum, Mahwah, NJ.

READ S. J. and L. Miller, editors, 1998. Connectionist Models of Social Reasoning and Social Behavior. Lawrence Erlbaum, Mahwah, NJ.

RUMELHART D. E. and J. L. McClelland 1986. On learning the past tenses of English verbs. In D. E. Rumelhart and J. L. McClelland, editors, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volume 2: Psychological and Biological Models. The M. I. T. Press, Cambridge, MA.

SMITH E. R. 1996. What do connectionism and social psychology offer each other? Journal of Personality and Social Psychology, 70:893-912.

TOBORG S. T. and K. Hwang 1989. Exploring neural network and optical computing technologies. In K. Hwang and D. Degroot, editors, Parallel Processing for Super-Computers and Artificial Intelligence. McGraw-Hill, New York, NY.

VOGH J. W. 1997. A sequential memory adaptive resonance theory neural network, with application to three-dimensional object recognition. Unpublished doctoral dissertation, Boston University.

ZIZZO D. J. and D. Sgroi 2000. Bounded rational behaviour by neural networks in normal form games. Nuffield College Oxford Economics Discussion Paper Number 2000-W30, <http://www.nuff.ox.ac.uk/Economics/papers/2000/index2000.htm>.


© Copyright Journal of Artificial Societies and Social Simulation, 2001