what this book provides is a darker account and cautionary tale about how, for all the amazing and self-defining attributes of the human brain, many of its processes have the potential to wreak havoc in our daily lives and undermine our optimal social functioning. (Page 0)

Collectively, these quotations highlight the long-standing observation that the boundary between mental health and mental illness is often blurred, and, indeed, the idea that the distinction is more quantitative than qualitative is increasingly embraced in psychiatry. That doesn’t mean that there’s no meaningful distinction, or that the distinction is merely arbitrary, or that mental illness doesn’t exist, as some have claimed. It means that the difference between mental health and mental illness can be understood in a way that’s similar to how we understand colors within the visible light spectrum. Just as the distinction between “red” and “green” represents a quantifiable difference in wavelength, the difference between having a mental illness like schizophrenia or major depression and not having one can be understood as a matter of degree. But there’s also a meaningful qualitative difference between mental illness and mental health, just as acknowledging a categorical distinction between red and green is crucially important when deciding what to do at a traffic light. (Page 0)

Delusions, which psychiatrists consider to be prototypical symptoms of psychotic disorders like schizophrenia, lie at one end of that spectrum. At the other end are cognitive distortions that, while also treated by psychiatrists, offer an example of false belief so frequently encountered that virtually everyone has them. In keeping with the premise that they exist on a continuum, there are both qualitative and quantitative differences between these two phenomena. (Page 2)

delusions are often recognizable due to stereotypical themes like persecution (e.g., paranoia about being followed or otherwise in danger), grandiosity or grandeur (e.g., beliefs that one is God or has “special powers”), or health (e.g., concerns about being infested by a parasite or having a monitoring device implanted in one’s body). In addition, delusions found in disorders like schizophrenia are often accompanied by other symptoms, like hallucinations or “hearing voices.” And yet, in practice, assessing the falsity of delusions based on their subjective strangeness or perceived impossibility—what psychiatrists have historically called “bizarreness”—is complicated by how hard it is to agree about what is or is not possible in the universe. 2 This is especially true for certain types of beliefs, like those of a religious or metaphysical nature, that are unfalsifiable. Does God exist, and does He talk to people? Does the “soul” persist after death? Is it possible that we’re living in a computer simulation or a multiverse? Who can really say? (Page 3)

In my clinical experience, the unshareability of delusions hinges not on their apparent unfamiliarity or bizarreness, but on their self-referentiality. 4 In other words, delusions—like the stereotypical examples given earlier—are unshared because they’re typically beliefs about the believer rather than beliefs about the world. For example, while it would be easy for you to find those who share the beliefs that there will be a Second Coming of Christ, that telekinesis is possible, that a microchip could be implanted in one’s body for the purposes of tracking, or that alien abductions have occurred, it would be much harder to find those who agree that you are the Messiah, that you can move objects with your mind, that you have a microchip implanted in your brain, or that you were abducted by aliens. (Page 4)

Some psychologists and psychiatrists have therefore proposed that delusionality ought to be assessed by measuring “cognitive dimensions” that are quantifiable rather than categorical. 5 These include belief conviction (how much we believe something to be true), preoccupation (how much time we spend thinking about the belief), extension (the degree to which the belief pervades different aspects of our lives), and distress (how much emotional unrest the belief causes). Such dimensions have less to do with a belief’s content, or what is believed, and more to do with how the belief is held. (Page 4)

Since the pioneering psychiatrist Aaron Beck introduced the term in the 1960s, cognitive distortions have been a foundational concept of a popular form of psychotherapy called cognitive behavioral therapy (CBT). While various definitions exist, cognitive distortions can be simply thought of as “errors of belief and how we arrive at and maintain them.” 7 Though they fall short of delusions, Beck characterized them as “systematic deviations from realistic and logical thinking” representing “varying degrees of distortion of reality” that are “similar to [delusions] described in studies of schizophrenia.” 8 During CBT, a help-seeking patient’s cognitive distortions are identified and categorized as examples of all-or-none thinking, overgeneralization, jumping to conclusions, magnification and minimization, personalization, and the like. 9 The cognitive behavioral therapist then works with the patient to carefully examine the extent to which such beliefs are supported or refuted by objective evidence gathered by the patient during homework tasks completed outside of therapy sessions. (Page 5)

This illustrative example highlights three important aspects of CBT that reveal something meaningful about cognitive distortions as well as beliefs more generally. First, CBT is based on the premise that cognitive distortions and beliefs lead to feelings and behaviors (e.g., “depression”) rather than the other way around. Second, CBT demonstrates how beliefs can be modified by teaching patients to take a rational and more objective look at evidence that refutes them. And third, clinical improvement isn’t necessarily judged based on the presence or absence of cognitive distortions but on quantifiable reductions in the cognitive dimensions of belief, such as the degree to which patients like Edward believe them and the extent to which the beliefs consume their thoughts. (Page 6)

Present psychological distress as a miscalibrated representation.

citation: reel on CBT and bias

recent research suggests that delusional thinking in schizophrenia may be at least partially explained by circular inference, in which direct experience is overweighted as evidence while information from others is underweighted. (Page 8)

citation

Similar research using scales that measure quantitative cognitive dimensions of delusions has revealed that “delusion-proneness”—defined by subclinical or subthreshold levels of delusional thinking that fall short of frank psychosis—is common in the general population and is also associated with a greater degree of jumping to conclusions reasoning compared to those without delusion-proneness. 16 Moving even further toward the healthy end of the mental illness–mental health continuum, a recent meta-analytic study (a study using a type of statistical analysis that combines the results of many similar studies looking at the same thing) tells us that, even for “normal” people without delusions or delusion-proneness, anecdotal evidence and first-person narrative accounts are often more persuasive than statistical evidence, at least under certain conditions. 17 (Page 8)

many of us are prone to rush to judgment and to prioritize anecdotal evidence over objective evidence, especially when forming emotionally charged beliefs. People with delusional beliefs or delusion-proneness may do this more than the rest of us, but this is a quantitative difference. We’re all vulnerable to jumping to conclusions and anecdotal bias in varying degrees. (Page 8)

The scientific method was designed to get us closer to objective and universal truth by weeding out subjective misinterpretations. During his 2016 commencement address at Caltech, the physician and writer Atul Gawande described science as a “commitment to a systematic way of thinking, an allegiance to a way of building knowledge and explaining the universe through testing and factual observation.” 22 This way of thinking is based on an iterative process of gathering evidence from repeated observations while controlling for different explanatory factors in order to inform theories about the true nature of the world and how things work. In other words, when we talk about scientific research, we’re referring to “re-search”—a process of looking again and again to determine how much we can rely on what we observe. (Page 10)

citation: science

the conviction we have for our beliefs is often inversely correlated with the objective evidence to support them. (Page 11)

citation

I’ll take a more practical approach going forward, modeling all beliefs as probability judgments while noting that many of our beliefs are held with excessive levels of conviction at the expense of acknowledging more appropriate levels of uncertainty. Stated another way, people often tend to adopt an all-or-nothing “belief that” attitude toward matters that warrant more probabilistic and opinionated “belief in.” Such unwarranted conviction is the stuff of delusion and, as I’ll argue throughout this book, often lies at the root of our ideological conflicts. (Page 12)

citation

we should see faith for what it is: an active choice of believing in the face of uncertainty or lack of evidence. (Page 12)

never asked my friends why they felt so confident they’d have a boy on their fourth go around. Like many beliefs that we hold, it was probably a combination of things. Maybe it was wishful thinking. Maybe it was just a feeling or the old wives’ tale about “carrying low.” 1 Or perhaps it was the idea that they already had three girls and were therefore due for a boy based on the “law of averages.” If this last line of reasoning was the case, then my friends would have been guilty of a common cognitive error called the gambler’s fallacy. (Page 13)

Reel explaining this bias using the same example. Possible causes.

reel

In this chapter, I’ll take a deeper dive into naïve realism and the paradox of faith by exploring the natural tendency we all have to not only be confident, but overconfident, about not only statistical probabilities but also much else that we believe. As we’ll see, this tendency can be self-protective in healthy doses, but, as always, too much of a good thing can just as easily point us down a darker path. (Page 14)

Weight vs. strength of evidence.

note

Twenty years later, Tversky and his erstwhile graduate student Dale Griffin, now a professor at the University of British Columbia Sauder School of Business, performed experiments that further clarified that this kind of overconfidence when making faulty probability judgments sometimes occurs because people tend to overprioritize the “strength” of evidence at the expense of considering the “weight” of evidence. 4 To illustrate the difference between strength and weight, let’s say we wanted to find out if the Monte Carlo Casino roulette wheel was rigged to preferentially come up black and therefore decided to monitor it closely over the course of a night. The proportion of times that we observed the ball landing on black would represent the strength of our evidence, whereas the weight would depend on the number of observations or repeated spins of the wheel. So, if we only observed 30 consecutive spins of the wheel and saw that the ball landed on black 26 times, like it did back in 1913, the strength of evidence suggesting that the wheel was rigged would be strong. But the weight of the evidence would depend on the total number of spins, so that if we observed the roulette wheel all night long and found that the ball landed on black 4,997 out of 10,002 spins, we’d more accurately conclude that the wheel wasn’t rigged after all. We can see then that weight can put strength into better perspective. (Page 14)
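To make the strength-versus-weight distinction concrete, here is a minimal Python sketch (my own illustration, not taken from Griffin and Tversky’s experiments) that simulates a fair wheel for 10,002 spins and compares the most black-heavy 30-spin stretch with the overall proportion of black:

```python
import random

# A rough sketch (not from the book): "strength" is how lopsided a short run
# of spins looks; "weight" is how many spins the full record contains.
random.seed(1913)                                        # any seed works
spins = [random.randint(0, 1) for _ in range(10_002)]    # 1 = black, 0 = red

overall = sum(spins) / len(spins)
most_black_30 = max(sum(spins[i:i + 30]) for i in range(len(spins) - 29))

print(f"Most black-heavy 30-spin stretch: {most_black_30}/30")
print(f"Overall proportion of black over 10,002 spins: {overall:.3f}")
# Even on a fair wheel, some short stretch looks lopsided, yet the night's
# full record stays close to one half; giving the stretch more weight than
# the total is the strength-over-weight error described above.
```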

the gamblers’ mistake isn’t the bet itself—the probability of the ball landing on red or black on a single spin is always 50-50, so that even if it landed on black 26 times in a row, betting on either outcome on the 27th spin would represent equally good or bad decisions. Instead, the cognitive error relates to overconfidence in the expected outcome, the rationale underlying that confidence, and how that affected the amount of money placed on the bet. (Page 15)
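The independence underlying the fallacy is easy to check numerically. The sketch below (my own illustration, assuming a fair wheel and ignoring the green zero) examines every spin that follows a run of five blacks and counts how often black comes up anyway:

```python
import random

# A quick numerical check (my own sketch): after a streak of blacks on a
# fair wheel, the next spin is still 50-50.
random.seed(0)
spins = [random.randint(0, 1) for _ in range(1_000_000)]  # 1 = black, 0 = red

# Collect every spin that immediately follows five blacks in a row.
after_streak = [spins[i] for i in range(5, len(spins)) if all(spins[i - 5:i])]

print(f"Spins following five blacks in a row: {len(after_streak)}")
print(f"Share that came up black anyway: {sum(after_streak) / len(after_streak):.3f}")
# Prints a value near 0.5: the wheel has no memory of the streak.
```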

as a general principle, what we can take to the bank from behavioral economics is that, even when probabilities are known and even when people are well-versed in statistical mathematics, “people are often more confident in their judgments than is warranted by the facts.” (Page 15)

The more specific tendency to prioritize the strength of evidence over its weight, and selective personal observations over base rates, whether or not individuals are aware of the overarching probabilities, should sound familiar by now: it’s essentially the same phenomenon reflected in the jumping to conclusions reasoning style and the overreliance on narrow subjective experience associated with delusional thinking and naïve realism that I mentioned in Chapter (Page 16)

cognitive biases .p explaining what they are and why it’s important to take them into account.

note

Following the Nobel Prize-winning work of Tversky and Kahneman, behavioral economics has provided us with a useful model to understand these kinds of errors in probability judgment as a result of intuitive cognitive shortcuts that fall under the larger umbrella of heuristics. 7 In his best-selling 2011 book, Thinking, Fast and Slow, Kahneman proposed two different modes or systems of decisional thinking: an automatic, fast judgment based on instinct, intuition, and emotion and a slower, more rational, and deliberative process. 8 Heuristics represent fast-mode thinking that ideally works together with more deliberative reasoning within a balanced, “dual process” harmony. However, errors in judgment can arise when one mode of thinking wins out over the other, just as when we fail to account for both the strength and weight of evidence. (Page 16)

When fast-mode heuristics result in faulty cognitive representations of reality—that is, false beliefs—they’re referred to as “cognitive biases.” To date, it has been proposed that there are nearly 200 cognitive biases (including naïve realism, the gambler’s fallacy, and the base rate fallacy) that have evolved over time to make human decision-making more efficient while also making us prone to errors in accurately judging the risks and benefits of our actions. (Page 16)

Most people also report that they’re “better than the average person”—a mathematical impossibility if a trait is normally distributed along a bell curve—with self-appraisals that are inflated compared to how we’re regarded by others. This cognitive bias has come to be known as the “better than average effect,” the “superiority illusion,” or the “Lake Wobegon effect” after Garrison Keillor’s fictional radio show community in which “all the women are strong, all the men are good-looking, and all the children are above average.” (Page 17)
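A quick simulation makes the arithmetic plain (my own sketch, using an arbitrary IQ-like trait as the example): when a trait is normally distributed, the mean and median coincide, so only about half of the population can sit above the average, and a majority rating themselves “better than average” must be overestimating.

```python
import random
import statistics

# A tiny illustration (my own sketch, not from the book): in a normally
# distributed population the mean equals the median, so only about half
# of the people can score above the average.
random.seed(42)
trait = [random.gauss(100, 15) for _ in range(100_000)]   # an IQ-like trait

mean = statistics.fmean(trait)
share_above = sum(score > mean for score in trait) / len(trait)
print(f"Share of people above the average: {share_above:.3f}")  # ~0.5, never "most"
```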

Locus of control is a well-known construct in psychology that refers to the more generic belief in how much personal control we have over life events, whether or not that belief is accurate. For example, those with a high degree of internal locus of control would be more likely to attribute a good night at blackjack to their skill as a gambler than to merely being the beneficiary of “the luck of the draw.” Locus of control has been studied for well over 60 years, with research finding that belief in personal control as well as an exaggerated sense of personal control can have a variety of potential benefits. (Page 18)

presentation of the concept and its relationship to mental health.

reel

illusions of control can result in unwarranted self-appraisals for people with inherent financial advantage, leading them to be less empathic toward those who are disadvantaged. 24 Such people might, for example, be more likely to discount the ethical or practical benefits of real-world social programs like welfare or affirmative action. This conclusion suggests that while some illusions of control can result in higher self-ratings of individual happiness or mental health, they might also contribute to interpersonal disregard, with a harmful effect on society as a whole. (Page 20)

unrealistic optimism is thought to represent a form of denial—the antithesis of depressive realism—that can reduce stress and anxiety and allow us to devote energy to achieving goals. (Page 20)

University of California, Irvine, psychologist Elizabeth Loftus has demonstrated that even our memories of recent events are highly susceptible to error, especially when we’re cued to recall our memories in a specific way. 26 For example, in one of her first experiments from the 1970s, she showed films of a car accident to subjects and used different verbs to ask them how fast the car was going. When subjects were asked how fast the car was going when it “smashed” into the other car, they rated it as going 10 miles per hour faster than when the verb “contacted” was used in place of “smashed.” A week later, when asked whether they remembered seeing any broken glass in the film, subjects who’d been cued with the word “smashed” were more likely to say “yes.” (Page 21)

memory experiments

citation: reel

Decades of research by Loftus and other investigators have demonstrated robust evidence of this “misinformation effect,” whereby memories can be manipulated by the way we’re asked questions about past events, and not only in trivial ways. False memories involving various past events in one’s life—like getting lost, being attacked by an animal, nearly drowning, and being in an accident—can be suggested to people who come to endorse them as reality. (Page 22)

citation: memory

Because Loftus’s work—detailed in her 1994 book, The Myth of Repressed Memory—has shown that “repressed” or previously forgotten and “recovered” memories of experiences like sexual trauma can sometimes amount to “false memories,” 28 she has been called as an expert witness in the defense of accused rapists like Bill Cosby, Jerry Sandusky, and Harvey Weinstein. (Page 22)

note: reel

our memories are often imperfect accounts of past events that are frequently edited and rewritten based on new information—and sometimes misinformation—to serve our needs in the present. (Page 22)

While that may run counter to our subjective experience, the reality is that our memories are both fallible and malleable. (Page 23)

useful for explaining the concept of folk psychology.

citation: note

we all have islands of relative competence and incompetence so that the Dunning-Kruger effect applies to all of us within the areas where we lack expertise. Dunning puts it this way: “Poor performers—and we are all poor performers at some things—fail to see the flaws in their thinking or the answers they lack” 37 and “not knowing the scope of [our] own ignorance is part of the human condition.” 38 Put even more simply, we overestimate our abilities because we don’t know what we don’t know. (Page 24)