
Your Brain, the Liar
Have you ever questioned the nature of your reality?
Thinking machines, like those portrayed in HBO’s Westworld, use new information from their environment to update their beliefs about the world and take action to further their goals. For all such machines, the success of that process of integrating new input is limited by their hardware, their software, and the ingenuity of their designers. Reality itself, on the other hand, can be as complex as it pleases; there’s no rule stating that reality must be comprehensible. Taken together, these restrictions mean that a machine’s internal model of the world cannot possibly represent reality with perfect fidelity. Instead it must compromise, approximate, and cut corners, hoping the representation is good enough for the tasks at hand. In other words, thinking machines cannot fully trust their own perceptions and knowledge.
You, human, are a thinking machine. Question the nature of your reality.
Hang on though, it gets worse. Engineers and scientists design artificial intelligences to accomplish certain goals, like driving cars, winning Go, or diagnosing medical conditions (Esteva et al., 2017). Part of that design process is a mathematically precise description of the task to be solved, an objective function. Indeed, defining such a function is a major challenge in human-level artificial intelligence (Pennachin & Goertzel, 2007). With that function in hand, designers can tailor their choices of hardware and software to the precise problems that need solving. We humans have no such luxury. We must meekly accept our standard-issue hardware (brains), software (minds), and designer (evolution by natural selection).
But wait! Shouldn’t evolution select for perceptual and cognitive systems that are good at representing true reality? Actually, no, it turns out. It selects for systems that lie in useful ways.
Life evolves as a result of those rare mutations that increase an organism’s odds of spreading its genes, whether by avoiding an early death, having lots of babies, or taking close care of only a few. The systems an organism uses to process information and make decisions — like the organism as a whole — have evolved in response to those same pressures. That is, the problem that our hardware and software were evolutionarily “designed” to solve is to survive and reproduce, not to faithfully represent reality. In fact, computer simulations and even mathematical proofs have shown that perceptual systems designed to tell the truth above all else cannot possibly outcompete systems designed to maximize reproductive fitness (Hoffman & Singh, 2012), and some have argued that our reasoning abilities evolved to win arguments, not to figure out truth (Mercier & Sperber, 2011).
To get a feel for why our brains shouldn’t prioritize truth, imagine you’re home alone at night. While watching Westworld in the living room, you hear a loud noise from the kitchen. Startled, you jump, and maybe you even hesitate to investigate for a fleeting, paranoid moment. You know your fear is almost certainly unwarranted (how often is there actually danger lurking in the kitchen?), but it’s tough to stop your danger-detector from going off. Why?

Danger. Hopefully not in my kitchen.
Stepping back from the specifics of this example, let’s break down the ways our danger-detector could be right or wrong. There are four possibilities: (1) there is no danger and we don’t detect any (right), (2) there is no danger, but our detector thinks there is (as with the loud noise above; wrong), (3) there is danger and we successfully detect it (right), and (4) there is danger but we fail to detect it (wrong).
What are the consequences of each scenario? Let’s put a score, from -100 to 100, on each outcome. I’ll make up some reasonable estimates, but feel free to think of your own. In the table below, the vertical axis (left) says whether there is actually any danger, and the horizontal axis (top) is whether our brain thinks there’s danger. Inside each cell is my estimate of the goodness (positive, blue) or badness (negative, red) of each possibility.
Notice that I care much less about whether my detector goes off if there isn’t actually any danger (bottom row). If I’m safe regardless, then while I might not like it very much if I detect danger (I don’t enjoy being afraid!), it doesn’t really affect me too much either way. On the other hand, if there is danger — say, there’s an intruder in my kitchen — then I care very much about whether I detect it or not. If I do, I can get to safety or defend myself, but if I don’t, my life may be at risk.
Even if there is very probably no danger, the consequences of being wrong are so dire that it’s better to freak out needlessly than to remain calm when there’s a small risk. This was even more true of the environment in which humans evolved. Individuals who didn’t freak out at signs of danger had an unfortunate tendency to get eaten by lions. The survivors — our ancestors — were those whose brains routinely lied to them.
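To see how that arithmetic shakes out, here is a minimal sketch of the payoff argument in code. The scores and the probability of real danger are my own made-up numbers on the -100 to 100 scale from above, not measurements, so feel free to swap in your own.

```python
# A minimal sketch of the danger-detector payoff argument. All numbers here
# are made-up estimates on the -100 to 100 scale from the text, not data.

# Scores for the four outcomes: (actual danger, detector fires) -> payoff
PAYOFFS = {
    (True,  True):   50,   # danger detected: scary, but you can react
    (True,  False): -100,  # danger missed: potentially fatal
    (False, True):   -1,   # false alarm: a moment of needless fear
    (False, False):   1,   # quiet evening, no alarm: pleasant
}

def expected_score(detector_fires: bool, p_danger: float) -> float:
    """Average payoff of a detector that always (or never) fires,
    weighted by the probability that the danger is real."""
    return (p_danger * PAYOFFS[(True, detector_fires)]
            + (1 - p_danger) * PAYOFFS[(False, detector_fires)])

p = 0.05  # assume only 1 loud noise in 20 signals real danger
print(f"jumpy detector: {expected_score(True, p):+.2f}")   # +1.55
print(f"calm detector:  {expected_score(False, p):+.2f}")  # -4.05

# With these payoffs, jumpiness wins whenever the chance of real danger
# exceeds about 1.3%:  150 * p > 2 * (1 - p)  =>  p > 2 / 152.
```

With these particular payoffs, the jumpy detector comes out ahead whenever real danger is even slightly more likely than about one percent. The exact break-even point depends on the numbers you pick, but as long as a missed threat is far costlier than a needless scare, jumpiness wins.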
It’s not only in dangerous situations that our brains misrepresent reality. If you do some math, you can show that the time it takes for a fastball to travel from the pitcher’s hand to the batter’s bat is actually less than the time the batter needs to see the ball, decide on a swing, and get the arms moving. Think about that. How can anyone ever hit a fastball if there’s not enough time even to tell the bat where to go?
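Here is the back-of-the-envelope version of that math. The pitch speed and the reaction-time figures are ballpark assumptions of my own, not measurements, but the conclusion doesn’t hinge on the exact values.

```python
# Rough timing for a 95 mph fastball. The speed, distance, and reaction-time
# figures below are ballpark assumptions, not measured values.

MPH_TO_FT_PER_S = 5280 / 3600        # 1 mph = ~1.47 ft/s
pitch_speed = 95 * MPH_TO_FT_PER_S   # ~139 ft/s
release_to_plate = 55                # ft from the release point to the plate
                                     # (the mound itself is 60.5 ft away)

flight_time = release_to_plate / pitch_speed  # ~0.39 s in the air

see_the_ball = 0.10        # visual signals reaching the brain (s, assumed)
decide_and_command = 0.15  # choosing a swing, sending the command (s, assumed)
swing_itself = 0.15        # the swing reaching the contact point (s, assumed)
needed = see_the_ball + decide_and_command + swing_itself  # ~0.40 s

print(f"ball in flight:      {flight_time:.2f} s")
print(f"see, decide, swing:  {needed:.2f} s")
# The see-decide-swing chain takes at least as long as the ball is in the air,
# so the batter must commit to a swing before the pitch can be fully watched.
```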
Prediction (Bubic, von Cramon, & Schubotz, 2010). The brain is constantly predicting the future, not just by extrapolating motion in order to hit balls or dodge obstacles, but more generally by recognizing patterns and transforming its representation of reality to make more sense of it, based on what it already knows. For example, did you know that each of your eyes has a blind spot not far from the center of its field of vision? There are no light detectors where the optic nerve meets the retina, so the brain has to guess and fill in what is likely to be in that space. Our brain’s ability to recognize and extrapolate patterns allows us to play sports, learn language, and take advantage of all kinds of regularities in the world around us.
But these abilities aren’t magic. Just like an artificially intelligent machine, our brains take shortcuts and make assumptions to save energy. So they sometimes get things wrong. We fall victim to optical illusions. We hear repetitive speech as music. We often find meaning in randomness by seeing stories in the stars, faces in everyday objects (including Jesus in our toast), and deep wisdom in pseudo-profound… nonsense (Pennycook et al., 2015). These kinds of mistakes are cool, because when they are pointed out to us, we can see them as just that, mistakes.

This house is dumbfounded.
There exist more insidious errors, though, and they’re more common than we’d like to believe. One of the shortcuts brains take in evaluating new information is to trust their preconceptions more than that new information, or to interpret the new information such that it supports their existing beliefs. We make predictions and form expectations, and then we latch on to them. The upshot is that we eagerly accept information that agrees with what we already believe, and dispute or reject information that contradicts our predispositions. Confirmation bias is everywhere (Nickerson, 1998).
Take climate change, for instance. If I am predisposed to believe that human-caused climate change is a real, pressing problem, then I read news reports of glaciers melting and sea levels rising and accept them at face value. When I read reports questioning the existence or severity of these changes, I engage my critical thinking and find reasons not to believe them. On the other hand, if I already believe that climate change is a hoax, those same two sets of news reports have opposite effects: I am skeptical of rising seas and eager to embrace evidence that our environment is stable, even going so far as to accuse scientists of a grand cover-up conspiracy. Without a concerted effort, it’s impossible not to fall into this trap, regardless of your prior beliefs.
If we are all inevitably susceptible to confirmation bias (not to mention the myriad other cognitive biases from which we suffer; Kahneman, 2011), what hope have we of coming to believe what is true and disbelieving what is false? The best approach humanity has come up with so far is to start with your best guess, then try hard to prove it wrong, and then use the results to update your guess. Rinse and repeat. In science, we call that the scientific method, but it applies to reasoning and argumentation across the board.
Unfortunately, we’re really bad at disproving our own guesses. But luckily, we’re really good at disproving others’ ideas (remember how reasoning may have evolved to win arguments?). In evolution, lions kill off those with under-active danger-detectors. In science, experiments kill off ideas that don’t hold up to scrutiny. In both cases, what remains has been optimized to solve a certain problem. We’ve seen that the evolutionary problem to be solved is not truth-detection, but survival. The scientific method, though, was designed to detect the truth. As a result, in any discipline where experts try to disprove and refine each other’s ideas, the consensus of that discipline represents the best possible approximation of the truth at a given moment in time. This is true despite the fact that any individual member of that discipline cannot be relied upon to have true beliefs, or indeed perhaps because of that fact.
As individuals, we can be experts in only a minuscule fraction of a minuscule fraction of all the knowledge that exists. So how should we form beliefs about all the rest, given that our intuitions and gut feelings are so unreliable? First, we need to appreciate just how wide the gap is between expert knowledge and basic knowledge. I really like Steve Novella’s illustration of this principle:
Think of the one area of knowledge in which you have the greatest expertise. This does not have to be your job, it can be just a hobby. Now, how accurate are news reports that deal with your area of extensive knowledge? How much does the average person know? Does anyone other than a fellow enthusiast or expert ever get it quite right?
For me, the answer to that last question is an emphatic ‘no’. And so it would be the height of arrogance for me to assume I know anything meaningful about climate science (for example), especially given that I haven’t read the original research myself. I have no choice but to adopt as my own opinion the results of the process of science. While those results may not always be conclusive, they’re still the closest approximation to the truth available.
To have any hope of arriving at true beliefs, we need to foster an attitude of epistemic humility. We need to recognize that our knowledge and beliefs are wildly imperfect and constructed by brains that actively deceive us. We need to trust experts (who incidentally are not conspiring to cover up the truth; Grimes, 2016). In short, we need to question the nature of our reality.
References
Bubic, A., von Cramon, D. Y., & Schubotz, R. I. (2010). Prediction, cognition and the brain. Frontiers in Human Neuroscience, 4, 25. https://doi.org/10.3389/fnhum.2010.00025
Esteva, A., Kuprel, B., Novoa, R. A., Ko, J., Swetter, S. M., Blau, H. M., & Thrun, S. (2017). Dermatologist-level classification of skin cancer with deep neural networks. Nature, 542(7639), 115–118.
Grimes, D. R. (2016). On the viability of conspiratorial beliefs. PLoS ONE, 11(1), e0147905. https://doi.org/10.1371/journal.pone.0147905
Hoffman, D. D., & Singh, M. (2012). Computational evolutionary perception. Perception, 41(9), 1073–1091. https://doi.org/10.1068/p7275
Kahneman, D. (2011). Thinking, fast and slow. New York: Farrar, Straus and Giroux.
Mercier, H., & Sperber, D. (2011). Why do humans reason? Arguments for an argumentative theory. Behavioral and Brain Sciences, 34(2), 57–74. https://doi.org/10.1017/S0140525X10000968
Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2(2), 175–220. https://doi.org/10.1037/1089-2680.2.2.175
Pennachin, C., & Goertzel, B. (2007). Contemporary approaches to artificial general intelligence. In B. Goertzel & C. Pennachin (Eds.), Artificial general intelligence. Berlin: Springer.
Pennycook, G., Cheyne, J. A., Barr, N., Koehler, D. J., & Fugelsang, J. A. (2015). On the reception and detection of pseudo-profound bullshit. Judgment and Decision Making, 10(6), 549–563.