

André C. R. Martins (2005)

Deception and Convergence of Opinions

Journal of Artificial Societies and Social Simulation vol. 8, no. 2
<https://www.jasss.org/8/2/3.html>

To cite articles published in the Journal of Artificial Societies and Social Simulation, reference the above information and include paragraph numbers if necessary

Received: 10-Oct-2004    Accepted: 28-Jan-2005    Published: 31-Mar-2005


* Abstract

This article studies what happens when someone tries to decide between two competing ideas simply by reading descriptions of experiments done by others. The agent is modeled as a rational person who updates opinions by Bayesian rules, and the effect of the possibility that each article might be a deception is analyzed.

Keywords:
Opinion Dynamics, Deception, Confirmation Theory, Epistemology, Rational Agents

* Introduction

1.1
The problem of opinion dynamics, where one tries to obtain a correct answer about some problem by analyzing other people's actions or beliefs, has been studied in a number of papers, by different authors, using a number of different approaches, from spin models to Bayesian confirmation theory. This article will adopt the latter approach and study what happens when rational agents try to make up their minds about the truth of a proposition based solely on reports of results of experiments conducted by other people. In order to accomplish that, we will study the problems introduced by the recognition that deceptive behavior, or simply unintentional mistakes, on the part of the papers' authors might be happening.

1.2
There are a number of approaches in the literature to the problem of opinion dynamics. Weisbuch (2001) proposes a model where people update their beliefs after interacting with each other. If the difference in opinions is under a certain threshold, both parties change their opinions to an average value; if the difference is bigger than the threshold, nothing happens. That threshold can be interpreted as suspicion that there might be deception in the opinion of the other party. The Sznajd model (Sznajd-Weron and Sznajd 2000) approaches the same problem, using a lattice to represent individuals. Again, people can have one of two opinions and are only convinced to change their own point of view by a minimum number of neighbors sharing a different opinion. The minimum required number of neighbors can be interpreted as reflecting the fact that one should not trust the opinion of one single proponent of an idea, as he might simply be wrong. A number of other models dealing with opinion dynamics can be found in Hegselmann and Krause (2002) and Weidlich (2000), and we refer the interested reader to the references found there.

1.3
Lane (1997) studies the problem with methods similar to the ones used here, but with a different approach and application in mind, namely, the problem of consumers trying to choose the best product by observing other consumers' choices; he does not, however, include the existence of deception in his analysis. He investigates whether a Bayesian rule would lead to the adoption of the superior product by the totality of the population. He concludes that a much simpler rule, adopting what the majority is adopting, is actually socially better, as the Bayesian rule does not lead to 100% of the society choosing the superior product, while the max rule does. As we will see below, this is also true, in a sense, in the scenario we are studying, but it might actually lead to problems, as such a rule might be an incentive to deception.

1.4
This article will investigate the convergence (or divergence) of ideas among readers or participants of a debate when the possibility of deception, meaning both charlatanism and simple erroneous analysis, is included in the analysis. Here, a simple non-interacting model will be proposed, where the decisions taken by the reader do not influence other people. The agents will be modeled as rational beings, updating their opinions on which of two theories is correct by Bayesian rules. It has been proposed in Confirmation Theory (Maher 1993 and references therein) that scientists can be described as having subjective probabilities updated by Bayesian rules. Although it is clear from the many observed conflicts between full rationality and human behavior (Allais 1953, Ellsberg 1961, Plous 1993) that humans are not completely rational in their decision-making, recent results have shown that their heuristics can provide good, fast approximations to rational behavior (Gigerenzer and Goldstein 1996, Martignon 2001) and that many of our heuristics might actually be close to Bayesian inferences for complex problems (Martins 2005).

1.5
We will develop here an idea by Jaynes (2003), who points out that a scientist would never be convinced of a phenomenon she considers very unlikely to exist (like ESP, for example) based on what an article claims, since the prior probability assigned to the hypothesis that the article was actually a sham is high enough. We will analyze what happens when a reader tries to decide between two theories, A and B, after reading a number of articles, some supporting A and some supporting B, when he includes a probability that some of the articles supporting each theory are wrong (not necessarily the same probability for both). A complete analysis of such a problem would certainly involve the possibility of detecting problems while reading, and the different kinds of problems that might show up there. However, even the simpler case will already have problems with non-identifiability, as we will see, so it makes sense to study the simpler case first.

1.6
It should be noted that, despite the terminology used here, an article does not necessarily mean a scientific article; it could be any piece of information the reader has access to that will help her pick between two mutually exclusive alternatives. We will describe these alternatives as two opposing theories, but the same treatment and results are valid if one is trying to decide between two products, or anything else, based on data provided by others. The actions of the reader will be described as random sampling from an infinite population of articles, with an unchanging probability q that each read article will support A. One should notice that we are not taking into account the possibility that a reader will detect flaws in the reasoning of an article. Therefore, unless this possibility is actually non-existent in a given application, the analysis made here can also be understood as describing agents working with a heuristic, instead of fully rational beings. Despite this, we will still have to deal with problems regarding non-identifiable variables, so a more complex model would probably not add anything new to the conclusions.

* Reading Articles

The Non-Deception Case

2.1
A researcher reads a number of articles, trying to make up her mind on a controversial issue where there are two alternative competing theories, A and B, one of which is correct and the other wrong. Each article describes a test of which theory would be the correct one and, due to statistical errors, it is possible that an honest attempt might lead to an erroneous result by chance. If A is true, there is a probability a that an honest and correct experiment and subsequent article will support A; if B were true, there would be a probability b that B would be verified and published. a and b are assumed to be known a priori, and it is assumed that A is the correct theory, although that is certainly not known by the reader. The reader assigns a prior probability p to the hypothesis that A is actually true. If the writer of the article finds a result that supports A, he will write an article describing that result, and that is what the reader actually observes. An article supporting A is indicated by Ra and an article supporting B, likewise, by Rb.

2.2
If there is no deception involved, the probability q that an article will support A, indicated by Ra, will be a if A is the true theory (with probability p) or (1 - b) if B is the true one (probability 1 - p); therefore, q will be given by

q = f(Ra|p,a,b) = pa + (1 - p)(1 - b) (1)

2.3
We have a Bernoulli likelihood, and the problem of estimating p can be seen as obtaining inference on q. Therefore, for a number r of read articles, with i articles supporting A and r - i supporting B, we have a binomial likelihood where q is the chance of observing a success.
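For concreteness, the binomial likelihood can be written out explicitly (a standard identity, not an equation appearing in the original text; it is left unnumbered to avoid clashing with the article's equation numbers):

f(i|q,r) = C(r,i) q^i (1 - q)^(r-i)

where C(r,i) denotes the binomial coefficient "r choose i".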

2.4
In this problem, i is a sufficient statistic for the determination of q and, therefore, of p. If the utility of accepting each theory depends only on the probability of it being true, the average value of p will decide which theory is supported: A, if it is larger than 0.5; B, if it is smaller. If, for some reason, the parameters a and b were not known with certainty, the situation becomes different, as we only get real inference about q, while p, a and b become non-identifiable.
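A minimal sketch of this no-deception decision rule, written in Python for illustration (the article's own code is in FORTRAN; the function name posterior_prob_A and the numbers in the example are assumptions made only for this sketch):

```python
from math import comb

def posterior_prob_A(i, r, a, b, prior_p=0.5):
    """Posterior probability that A is true after reading r articles,
    i of which support A, with no deception and known a and b."""
    like_A = comb(r, i) * a ** i * (1 - a) ** (r - i)      # data generated under A
    like_B = comb(r, i) * (1 - b) ** i * b ** (r - i)      # data generated under B
    num = prior_p * like_A
    return num / (num + (1 - prior_p) * like_B)

# Example: a = b = 0.7 and 14 of 20 articles support A
print(posterior_prob_A(14, 20, 0.7, 0.7))   # well above 0.5, so A is accepted
```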

The Deception Case

2.5
We must now include the possibility that the article we are reading is not correct or, as we will be calling it, is actually a deception. We will do this by stating that there is a probability e that any given article is actually a deception. Also, among the articles that are actually deceptions, a fraction d favor A and the remaining 1 - d favor B. Therefore we have a new likelihood given by

Q = f(Ra|p,e,d,a,b) = ed + (1 - e)[pa + (1 - p)(1 - b)] (2)
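The likelihood above also makes the non-identifiability discussed in the next paragraph easy to see: quite different combinations of p, e and d can produce exactly the same Q and, therefore, the same distribution of observed articles. A small illustrative Python check (article_supports_A_prob is a hypothetical helper; the numbers were chosen only so that the two likelihoods coincide):

```python
def article_supports_A_prob(p, e, d, a, b):
    """Probability Q that a read article supports A (Equation 2): with
    probability e the article is a deception, favoring A with probability d;
    otherwise it reports an honest experiment."""
    return e * d + (1 - e) * (p * a + (1 - p) * (1 - b))

a = b = 0.7
# A is true, with moderate deception slanted towards A ...
q1 = article_supports_A_prob(p=1.0, e=0.2, d=0.75, a=a, b=b)
# ... versus B true, with heavy deception almost always favoring A:
q2 = article_supports_A_prob(p=0.0, e=0.8, d=0.8125, a=a, b=b)
print(q1, q2)   # both roughly 0.71: the data alone cannot tell these cases apart
```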

2.6
We still have the same binomial problem of the last section; however, the parameters we are interested in, p, e and d are non-identifiable from the data, regardless of whether a and b are known or not. Therefore, any information contained in the prior distribution about the relation between the parameters will be preserved, subject only to the conditions we may find for q.

2.7
In order to proceed with the analysis of the problem, one has to choose the prior and work from there, accepting that the data will not provide enough information to completely change views, no matter how many articles she reads. As all parameters are in the [0, 1] range, and as we are dealing with binomial likelihoods, Beta densities seem to be a convenient choice of priors. However, even if we choose independent priors for our parameters p, e and d, an analytical treatment of the problem becomes quite hard to perform, except in the infinite-articles limit. An easier approach is to solve the problem by numerical methods.

Conditioning on q and the Prior Distributions

2.8
For the sake of simplicity, we will assume that the prior distributions for p, e and d are independent. The prior probability for p is trivial, as p takes only two values, 0 or 1. For e and d, we will choose Beta functions as priors. Their exact shape, in the simulations described below, is determined by specifying an average and a standard deviation for each parameter. It should be noted that for this problem, with deception, Beta functions do not form a conjugate family for the likelihood.
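One way to turn a specified average and standard deviation into the parameters of a Beta prior, as done below for e and d, follows from the standard moment relations of the Beta distribution. The sketch below is an illustrative Python helper (beta_params_from_moments is not part of the original code) and is reused in a later sketch:

```python
def beta_params_from_moments(mean, sd):
    """Return (alpha, beta) of a Beta distribution with the given mean and
    standard deviation, using E[x] = a/(a+b) and
    Var[x] = ab / ((a+b)^2 (a+b+1))."""
    var = sd ** 2
    if not 0 < var < mean * (1 - mean):
        raise ValueError("standard deviation too large for a Beta distribution")
    common = mean * (1 - mean) / var - 1
    return mean * common, (1 - mean) * common

# The prior on e used for Figure 1: mean 0.2 and standard deviation 0.2
print(beta_params_from_moments(0.2, 0.2))   # (0.6, 2.4), approximately
```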

2.9
The behavior of the model in the limit where the number of read articles r tends to infinity is the first thing we will investigate. In an ideal case, the average value of p under the limit posterior distribution would converge to 1 as r tends to infinity, meaning one would be able to reach certainty that the correct theory is true. However, in the infinite-articles limit, we will have enough data to determine with certainty only the value of the parameter q. If infinite articles are read, the posterior can be obtained theoretically by conditioning the prior on the value of q, that is, by calculating f(p,e,d|q) when q is known.

2.10
If one wants to simulate the reading of articles, the limit of infinite articles is a tricky one, due to the necessary discretization of e and d. Equation 3 only allows for a finite, predetermined number of possible values for q and, if we are to perform a simulation, the above conditioning will only make sense if we choose only values of e and d that are compatible with the true value of q in our discretized version. However, this value will often be unique in the simulation lattice and, in the limit of infinite statistics, we might observe only one of the valid combinations of p, e and d that provide the true value of q, so any inference obtained will give these single values and not their true posterior distribution. It should be noticed that this is a flaw of the simulation, caused by taking the limit of infinite statistics (an infinite number of articles) before taking the limit where the parameters e and d become continuous. The problem will not show up for a small number of articles, when there is no certainty about the value of q, meaning that the simulation will still be useful in those cases. But, for the infinite-articles case, another way to calculate that limit directly, other than the simulation, has to be developed.

2.11
Fortunately, such a way exists. In the theoretical limit of infinite statistics, we can assume we will know the exact value of q without error, regardless of our prior information. We will compare the results calculated this way to those predicted by the simulation, to illustrate the problems with the latter approach.

Simulations

2.12
The code for simulating this problem is quite simple and consists of a series of straightforward subroutines annexed to this paper, written in FORTRAN. It was prepared so that r articles are read and the reader's opinion is updated after each one, so that one can describe the evolution of the belief with the accumulation of data. A version with a previously fixed value of r was written, as well as a version where r is as large as needed to achieve a certain amount of certainty about which theory is correct. The program file BAYES.F, as well as the subroutines, are all in the file Deception_code.zip. The priors are chosen as discrete approximations to Beta functions with chosen averages and standard deviations, favoring neither A nor B. This was implemented by choosing uniform priors for both p and d. At each iteration, one article is randomly generated, with the chance of it supporting A based on the chosen true values of the parameters, that is, p = 1 and an amount of deception chosen for each simulation, so that we have a total probability of observing an article supporting A given by

q = de + (1 - e)a (3)

2.13
A series of simulations for diverse amounts of deception were run and their results recorded. These simulations deal with a limited number of articles, which might reach the 100-200 range but not much more than that, representing the realistic case of a person who reads many articles.
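A minimal Python re-implementation sketch of this simulation loop is shown below. It is only an illustration, not the article's FORTRAN code: simulate_reader and its default parameter values are assumptions of this sketch, and it uses a uniform prior over e unless a discretized prior (such as the Beta prior used in the article) is passed in through prior_e:

```python
import numpy as np

def simulate_reader(r=200, n=100, a=0.6, b=0.6, e_true=0.2, d_true=0.5,
                    prior_e=None, seed=0):
    """Read r randomly generated articles and return the history of E[p],
    the posterior probability that theory A is true, after each article."""
    rng = np.random.default_rng(seed)
    e = (np.arange(n) + 0.5) / n        # discretized deception probability e
    d = (np.arange(n) + 0.5) / n        # fraction of deceptions that favor A
    E, D = np.meshgrid(e, d, indexing="ij")

    # Joint prior over (p, e, d): uniform over p in {0, 1} and over d;
    # optionally weighted over e by prior_e (uniform if None).
    w = np.ones(n) if prior_e is None else np.asarray(prior_e, dtype=float)
    base = np.outer(w, np.ones(n))
    base /= 2 * base.sum()
    post = {0: base.copy(), 1: base.copy()}

    # Probability that one article supports A at each grid point (Equation 2).
    Q = {p: E * D + (1 - E) * (p * a + (1 - p) * (1 - b)) for p in (0, 1)}

    q_true = e_true * d_true + (1 - e_true) * a   # A is true, so p = 1 (Equation 3)
    history = []
    for _ in range(r):
        supports_A = rng.random() < q_true
        for p in (0, 1):
            post[p] = post[p] * (Q[p] if supports_A else 1 - Q[p])
        total = post[0].sum() + post[1].sum()
        post[0], post[1] = post[0] / total, post[1] / total
        history.append(post[1].sum())             # current E[p]
    return history

print(simulate_reader()[-1])    # final E[p] for one simulated reader
```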

* Analyses of the Results

The Infinite Articles Regime: Limits to Knowledge

3.1
The infinite-articles regime, despite the fact that it is not feasible as a real problem, is an important one. It provides a limit to the amount of knowledge one can obtain from other people under the presence of deception and can give, in that sense, a measure of the size of the problem caused by the introduction of deception. We will solve it by conditioning on the value of q, that is, by integrating the prior distribution in order to get E[p] as a function of q. The calculation was performed by numerical integration and the results can be observed in Figure 1 for different values of a = b. The figure was obtained supposing a prior distribution for the amount of deception given by E[e] = 0.2 and a standard deviation of 0.2 as well. This choice makes the graph symmetric around q = 0.5, so only values above it are shown.

Figure 1. Theoretical results for E[p] as a function of q, for several different values of a = b, where it was taken that E[e] = 0.2

3.2
It is clear that, as a (and b) becomes smaller, there is no value of q that will lead to certainty (p = 0 or 1). For a = b = 0.65, it is not possible to observe any value of E[p] greater than 90%. Near the value q = a, we have a peak of confidence in the validity of theory A, as should be expected. That value corresponds exactly to the hypothesis that A is true and there is no deception, but we do not reach certainty because the same value can also be obtained in cases where there is deception. One can observe the existence of doubt about which theory is correct, especially strong in the range [0.4, 0.6]. In this range, one can be sure that deception must be happening; the only question is how much of it, as any combination of parameters leading to the same value of q would provide the same inference. Of course, these results are also a function of the choice of prior, where it was stated that we initially believed there is a 20% chance that each result is a deception. Had the prior been chosen with a very small value of E[e], the calculated results for E[p] could get as close to one as one might want.
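The numerical conditioning that produces the curves in Figure 1 can be sketched as follows (an illustrative Python approximation, not the code used for the article; expected_p_given_q is a hypothetical helper, the tolerance-based conditioning is only one simple way to approximate f(p,e,d|q), and Beta(0.6, 2.4) is the prior on e with mean and standard deviation 0.2):

```python
import numpy as np
from scipy.stats import beta as beta_dist

def expected_p_given_q(q, a, b, e_alpha=0.6, e_beta=2.4, n=400, tol=2e-3):
    """Approximate E[p | q] by keeping the prior mass of all (p, e, d)
    combinations whose article probability Q lies within tol of q."""
    e = (np.arange(n) + 0.5) / n
    d = (np.arange(n) + 0.5) / n
    E, D = np.meshgrid(e, d, indexing="ij")
    prior_ed = beta_dist.pdf(E, e_alpha, e_beta)   # Beta prior on e, uniform on d

    num = den = 0.0
    for p in (0, 1):                               # uniform prior over p in {0, 1}
        Q = E * D + (1 - E) * (p * a + (1 - p) * (1 - b))
        mass = (prior_ed * (np.abs(Q - q) < tol)).sum()
        num += p * mass
        den += mass
    return num / den if den > 0 else float("nan")

# One point of the a = b = 0.7 curve, with the E[e] = 0.2 prior of Figure 1
print(expected_p_given_q(0.7, 0.7, 0.7))
```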

3.3
However, there are situations where such a belief would be unrealistic. It should be clear that this sensitivity to the prior is not actually a flaw of the model, for these are actually the conclusions a rational agent should reach under these circumstances. The results show that those conclusions can and will depend on the agent's prior beliefs, even when there is an infinite amount of data, because the existence of deception makes p non-identifiable.

3.4
The results for different priors, for a = b = 0.7, can be seen in Figure 2. The figure shows E[p] as a function of q for different prior distributions for the amount of deception, that is, for different values of E[e]. In each case, except for the uniform prior distribution, the prior standard deviation of e was chosen to be equal to E[e], representing the fact that it is reasonable to assume that, if one believes more deception is happening, the real proportion of deception becomes less certain.

Figure 2. Theoretical results for E[p] as a function of q for several prior distributions for the amount of deception (a = b = 0.7)

It can be seen that, for a small amount of prior belief in deception, the reader decides which theory is correct, with certainty, based simply on the majority of articles supporting one opinion or the other. That is, the max rule studied by Lane (1997) is a consequence of Bayesian analysis when there is basically no deception.

3.5
As E[e] gets bigger, things get more interesting. An intermediate region of doubt appears, which increases in size with E[e]. For values of E[e] around 0.2, one can never be more than 90% sure and, for the uniform prior distribution for e, that number goes down to 70%, regardless of the evidence in favor of one theory or the other. Obviously, the problem becomes more serious when there is more doubt, that is, for smaller values of a and b. A certain amount of deception can make it impossible to always reach the right conclusion with certainty, even if one had an infinite number of articles to get information from and time to read them all, especially in the case of theories whose predicted experiments are not very conclusive.

The Small r Regime: Real Life

3.6
The small r regime is very important as it corresponds to real life, where readers face time limitations that do not allow them to read a huge number of articles. An important problem is the rate of convergence to the results observed in the previous section. To study it, the simulation described above was used. The results in Figure 3 were obtained for the r = 200 case, with n = 100 discrete intervals between 0 and 1.

Figure 3. E[p] as a function of q for a = b = 0.6 and r = 200

3.7
Some of the features observed in the infinite-articles regime are observed once again. There is a maximum of E[p] around 0.75 and, as we move towards values closer to q = 0.5, the confidence in A becomes weaker. It is interesting to notice that the same thing happens, in a much weaker version, for values of q much bigger than 0.7. This is due to the fact that the only way q can be greater than 0.7 is if there is deception favoring A; therefore, the reader gets not only evidence supporting A, but also conclusive evidence that some of the articles supporting A are actually deceptions. This ends up making a weaker case in favor of A.

3.8
Another important feature of Figure 3 is that, as q gets gradually closer to 0.5, the standard deviation associated with the observed values increases dramatically. In the simulations, even for values of q around 0.6, some readers were observed whose opinions were more favorable to B than to A. This problem becomes quite serious in the region below 0.6, where opinions are divided and some readers conclude that A is most likely true, while others favor B, even after reading as many as 200 articles. If a smaller number of articles is read, the problem becomes even more serious. In this region, not only does the existence of deception make it impossible to conclude with certainty that A is true in the limit where one can read as many articles as wanted, but it also makes opinions more likely to end up wrong (E[p] < 0.5), as the standard deviation of the final results becomes much bigger. Therefore, it also forces one to read far more articles if the limit values discussed in the previous section are to be reached. Doubt appears and convergence gets slower.
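The spread across readers described above can be examined by running the simulate_reader sketch introduced earlier many times with different random seeds (an illustrative usage example; the parameter values are arbitrary and merely place q near 0.57):

```python
import numpy as np

# q_true = 0.3 * 0.5 + 0.7 * 0.6 = 0.57: deception dilutes the evidence for A.
finals = [simulate_reader(r=200, n=100, a=0.6, b=0.6,
                          e_true=0.3, d_true=0.5, seed=s)[-1]
          for s in range(50)]
print(np.mean(finals), np.std(finals))   # wide spread; some readers may favor B
```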

3.9
Finally, if the reader establishes as a rule that he will keep reading until he reaches a certain amount of confidence in one of the results, it can be seen that, in some cases, such as the one shown in Figure 4, the number of articles needed increases both with the actual deception and with the prior belief in deception, and it is especially sensitive to the latter. Figure 4 shows the average number of articles needed to achieve E[p] = 0.99, assuming that there was a very small amount of deception actually present, but that there was an initial belief in the existence of deception, described by different prior values of E[e]. As can be seen, the number of needed articles explodes for increasing prior values of E[e], even when shown on a logarithmic scale (the simulations failed to converge for E[e] only a little higher than 0.2).

Figure 4. Average number of articles one must read to be 99% sure of which theory is correct
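The stopping rule of paragraph 3.9 can be sketched by combining the hypothetical helpers simulate_reader and beta_params_from_moments introduced earlier. The 0.99 confidence threshold and the choice of a prior standard deviation equal to E[e] follow the text; every other number below is illustrative:

```python
import numpy as np
from scipy.stats import beta as beta_dist

def articles_until_confident(prior_mean_e, threshold=0.99, max_articles=5000,
                             n=100, **kwargs):
    """Number of articles read before E[p] crosses the threshold for either
    theory (None if the cap is reached first), using a Beta prior on e whose
    mean and standard deviation both equal prior_mean_e."""
    alpha, beta_ = beta_params_from_moments(prior_mean_e, prior_mean_e)
    e_mid = (np.arange(n) + 0.5) / n
    prior_e = beta_dist.pdf(e_mid, alpha, beta_)
    history = simulate_reader(r=max_articles, n=n, prior_e=prior_e, **kwargs)
    for count, ep in enumerate(history, start=1):
        if ep >= threshold or ep <= 1 - threshold:
            return count
    return None

# Very little actual deception, but a prior that expects some: the number of
# articles required grows quickly with E[e], and the threshold may never be
# reached at all within the cap.
for prior_mean in (0.05, 0.10, 0.15, 0.20):
    print(prior_mean, articles_until_confident(prior_mean, a=0.7, b=0.7,
                                               e_true=0.02, d_true=0.5, seed=1))
```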

* Conclusions

4.1
The introduction of deception causes a number of problems in the process of deciding which theory (or product, or idea, or decision) should be considered the correct one. In the case where the theories are reasonably well testable, represented by a and b close to 1, only a large amount of belief in deception could actually bring serious problems. As theories predict results that are less easily testable, the problem becomes more serious and might lead one to incomplete or wrong conclusions. Theories not easily testable, therefore, must be dealt with extreme care, especially when one suspects some deception might be present but has no natural way to determine where. And it should be noticed that deception, as defined here, includes not only charlatanism but also erroneous analysis, as both can be described by the mathematics used here. In many cases where there is a heated debate between two different theories that might explain the same phenomena, the possibility that a fair amount of deception exists in the literature cannot be disregarded. This is even truer when the authors of the articles have great personal interest in proving their point of view to be the correct one. In these cases, it is possible and reasonable that a fair percentage of the reported results are due to deceptive behavior or error, and not only to the statistically expected error in conducting experiments. A reader, under these circumstances, will find it difficult to state with reasonable certainty which theory is the correct one and might even, in extreme cases, conclude that the wrong theory B is more likely to be true.

4.2
The results for small r also show that not only will the limiting answer not be known with certainty, but any conclusion becomes far more difficult and prone to statistical errors in the presence of deception. Therefore, to base one's opinion only on reports of other people's conclusions may not be the best idea in many problems. The author considers this was already an obvious statement in many areas, as, for example, in political discussions, where it is almost never clear what each idea's exact testable predictions are, and where the debaters have incentives to adopt deceptive behavior. When a wrong idea can have a few facts apparently supporting it and the correct idea sometimes makes wrong predictions, both due to statistical errors, we are in the region with low values of a and b, and the deception problem becomes important. Also, when one has more than enough reason to believe in the existence of deceitful behavior on the part of some of the debaters, in the sense of charlatanism, or in the existence of errors of analysis due to the debaters' lack of knowledge, the reasonable choices of priors will include a fair chance of deception and, again, certainty cannot be achieved by a rational reader.

4.3
A more complete study of some of the subjects left open here, such as the introduction of uncertainty in the prior evaluation of a and b, will be left for future work. It should also be noticed that the real case has some important differences from the simple model proposed here. Some of these differences are already under study and will be the subject of a future article, where the problem will be approached from an interactive point of view, in which the reader and each of the participants in the debate can actually influence all the others. It will include the possibility that the reader also conducts tests and does not rely solely on the opinion of others. For, as shown here, trying to reach conclusions solely from data provided by others is actually bad science whenever deception might be present. The only way around this problem is to actually check by oneself the results of some of those papers, which would make the variables of interest, p, e and d, no longer non-identifiable, solving the problem.

* Acknowledgements

The author would like to thank Prof. J. L. deLyra and Prof. C. E. I. Carneiro for many important suggestions and comments while this work was being prepared. Most of this work was done while the author was working at Faculdades Ibmec.

* References

ALLAIS, P. M. (1953) The behavior of rational man in risky situations - A critique of the axioms and postulates of the American School. Econometrica, 21, 503-546.

ELLSBERG, D. (1961) Risk, ambiguity and the Savage axioms. Quart. J. of Economics, 75, 643-669.

GIGERENZER, G. and Goldstein, D. G. (1996) Reasoning the fast and frugal way: Models of bounded rationality. Psych. Rev., 103, 650-669.

HEGSELMANN, R. and Krause, U. (2002) Opinion Dynamics and Bounded Confidence Models, Analysis and Simulation, Journal of Artificial Societies and Social Simulation, vol. 5, no. 3.

JAYNES, E. T. (2003) Probability Theory: The Logic of Science. Cambridge: Cambridge University Press.

LANE, D. (1997) Is What Is Good For Each Best For All? Learning From Others In The Information Contagion Model, in The Economy as an Evolving Complex System I (ed. by Arthur, W.B., Durlauf, S.N. and Lane, D.), pp. 105-127. Santa Fe: Perseus Books.

MAHER, P. (1993) Betting on Theories. Cambridge: Cambridge University Press.

MARTIGNON, L. (2001) Comparing fast and frugal heuristics and optimal models in G. Gigerenzer, R. Selten (eds.), Bounded rationality: The adaptive toolbox. Dahlem Workshop Report, 147-171. Cambridge, Mass, MIT Press.

MARTINS, A. C. R. (2005) Are Human Probabilistic Reasoning Biases Actually an Approximation to Complex Rational Behavior?, in preparation

PLOUS, S. (1993) The Psychology of Judgment and Decision Making. New York, NY: McGraw Hill, Inc.

SZNAJD-Weron, K. and Sznajd, J. (2000) Opinion Evolution in Closed Communities, Int. J. Mod. Phys. C 11, 1157.

WEIDLICH, W. (2000) Sociodynamics. Amsterdam: Harwood Academic Publishers.

WEISBUCH, G. et al. (2001) Interacting Agents and Continuous Opinion Dynamics, cond-mat/0111494.

----


© Copyright Journal of Artificial Societies and Social Simulation, [2005]