Evolutionary Biologist Robert Trivers calls self-deception an essential evolutionary skill #rachendolezal #triversrobert

This is probably too long and nerdy for most.  It is nonetheless considered by many the seminal work on this topic. It’s also one of my favorite scientific papers!

Trivers

Evolution and Self-Deception

Our approach of treating self-deception as information-processing biases that give priority to welcome over unwelcome information also differs from classic accounts that hold that the self-deceiving individual must have two separate representations of reality, with truth preferentially stored in the unconscious mind and falsehood in the conscious mind.
If (as Dawkins argues) deceit is fundamental in animal communication, then there must be strong selection to spot deception and this ought, in turn, select for a degree of self-deception, rendering some facts and motives unconscious so as to not betray – by the subtle signs of self-knowledge – the deception being practiced. Thus, the conventional view that natural selection favors nervous systems which produce ever more accurate images of the world must be a very naïve view of mental evolution. (Trivers 1976/2006, p. 20)
We propose that self-deception offers an important tool in this co-evolutionary struggle by allowing the deceiver the opportunity to deceive without cognitive load, conscious suppression, increased nervousness, or idiosyncratic indicators that a deception is being perpetrated. To the degree that people can convince themselves that a deception is true or that their motives are beyond reproach, they are no longer in a position in which they must knowingly deceive others. Thus, the central proposal of our evolutionary approach to self-deception is that by deceiving themselves, people can better deceive others, because they no longer emit the cues of consciously mediated deception that could reveal their deceptive intent.

The first corollary to our central proposal is that by deceiving themselves, people are able to avoid the cognitive costs of consciously mediated deception.

Thus, the second corollary to our central proposal is that by deceiving themselves, people can reduce retribution if their deception of others is discovered. 

People not only self-enhance the world over, but the average person appears to be convinced that he or she is better than average (Alicke & Sedikides 2009). Most of the research on self-enhancement does not allow one to assess whether these aggrandizing tales are self-deceptive or only intended to be other-deceptive, but some of the variables used in this research support the idea that people believe their own self-enhancing stories. For example, in a pair of clever experiments, Epley and Whitchurch (2008) photographed participants and then morphed these photographs to varying degrees with highly attractive or highly unattractive photos of same-sex individuals. Epley and Whitchurch then presented participants with these morphed or unaltered photos of themselves under different circumstances. In one experiment participants were asked to identify their actual photo in an array of actual and morphed photographs of themselves. Participants were more likely to choose their photo morphed 10% with the more attractive image than either their actual photo or their photo morphed with the unattractive image. This effect emerged to a similar degree with a photo of a close friend, but it did not emerge with a photo of a relative stranger. Because people often perceive their close friends in an overly positive light (Kenny & Kashy 1994), these findings suggest that people do not have a general bias to perceive people as more attractive than they really are, but rather a specific bias with regard to themselves and close others.

Our second proposal is that by deceiving themselves about their own positive qualities and the negative qualities of others, people are able to display greater confidence than they might otherwise feel, thereby enabling them to advance socially and materially.

There are a variety of dissociations between seemingly continuous mental processes that ensure that the mental processes that are the target of self-deception do not have access to the same information as the mental processes deceiving the self. For our purposes, these dissociations can be divided into three (overlapping) types: implicit versus explicit memory, implicit versus explicit attitudes, and automatic versus controlled processes. These mental dualisms do not themselves involve self-deception, but each of them plays an important role in enabling self-deception. By causing neurologically intact individuals to split some aspects of their self off from others, these dissociations ensure that people have limited conscious access to the contents of their own mind and to the motives that drive their behavior (cf. Nisbett & Wilson 1977). In this manner the mind circumvents the seeming paradox of being both deceiver and deceived.

Additionally, social sharing of information can lead to selective forgetting of information that is not discussed (Coman et al. 2009; Cuc et al. 2007) and social confirmation of inaccurate information can exacerbate the false memory effect (Zaragoza et al. 2001). These social effects raise the possibility that when people collaborate in their efforts to deceive others, they might also increase the likelihood that they deceive themselves. Thus, one consequence of retrieving, rehearsing, and telling lies is that people may eventually recollect those lies as if they actually happened, while still maintaining the accurate sequence of events in a less consciously accessible form in memory (Chrobak & Zaragoza 2008; Drivdahl et al. 2009; McCloskey & Zaragoza 1985).

Therefore, our third proposal is that the dissociation between conscious and unconscious memories combines with retrieval-induced forgetting and difficulties distinguishing false memories to enable self-deception by facilitating the presence of deceptive information in conscious memory while retaining accurate information in unconscious memory.

An important question that must be addressed with regard to all of these instances of biased processing is whether they reflect self-deception or some other source of bias. Two manipulations have proven useful in addressing this issue: self-affirmation (Sherman & Cohen 2006; Steele 1988) and cognitive load (e.g., Valdesolo & DeSteno 2008). When people are self-affirmed, they are typically reminded of their important values (e.g., their artistic, humanist, or scientific orientation) or prior positive behaviors (e.g., their kindness to others). By reflecting on their important values or past positive behaviors, people are reminded that they are moral and efficacious individuals, thereby affirming their self-worth. A cornerstone of self-affirmation theory is the idea that specific attacks on one’s abilities or morals – such as failure on a test – do not need to be dealt with directly, but rather can be addressed at a more general level by restoring or reaffirming a global sense of self-worth (Steele 1988). Thus, self-affirmation makes people less motivated to defend themselves against a specific attack, as their sense of self-worth is assured despite the threat posed by the attack.

… when self-deception is in service of social advancement via self-enhancement, self-affirmation should attenuate or eliminate the self-deception because the affirmation itself satisfies the enhancement goal. In contrast, when people self-deceive to facilitate their deception of others on a particular issue, self-affirmation should have no effect. Here the goal of the self-deception is not to make the self seem more efficacious or moral, but rather to convince another individual of a specific fiction that the self-deceiver wishes to promulgate. Self-affirmation is irrelevant to this goal.

5.1. Biased information search
5.1.1. Amount of searching.

There are many situations in daily life in which people avoid further information search because they may encounter information that is incompatible with their goals or desires. For example, on the trivial end of the continuum, some people avoid checking alternative products after they have made a purchase that cannot be undone (Olson & Zanna 1979). On the more important end of the continuum, some people avoid AIDS testing out of concern that they might get a result that they do not want to hear, particularly if they believe the disease is untreatable (Dawson et al. 2006; Lerman et al. 2002). This sort of self-deceptive information avoidance can be seen in the aphorism, “What I don’t know can’t hurt me.” Although a moment’s reflection reveals the fallacy of this statement, it is nonetheless psychologically compelling.

Similar sorts of biased information search can be seen in laboratory studies. Perhaps the clearest examples can be found in research by Ditto and colleagues (e.g., Ditto & Lopez 1992; Ditto et al. 2003), in which people are confronted with the possibility that they might have a proclivity for a pancreatic disorder. In these studies people expose a test strip to their saliva and are then led to believe that color change is an indicator of either a positive or negative health prognosis. Ditto and Lopez (1992) found that when people are led to believe that color change is a good thing, they wait more than 60% longer for the test strip to change color than when they believe color change is a bad thing. Studies such as these suggest that information search can be biased in the amount of information gathered even when people are unsure what they will encounter next (see also Josephs et al. 1992). Thus, it appears that people sometimes do not tell themselves the whole truth if a partial truth appears likely to be preferable. We are aware of no experiments that have examined the effect of self-affirmation or cognitive load on amount of information gathered, but we would predict that both manipulations would lead to an elimination or attenuation of the effect documented by Ditto and Lopez (1992).