The genetics of Alzheimer’s are complex, involving many genes. Most of these genes, if inherited, increase the risk that an individual will develop the disease but do not ensure it. But if an individual inherits one of a few rare genes (accounting for <1% of all cases of the disease), there is an exceedingly high probability that they will develop Alzheimer’s. Conversely, if an afflicted parent does not pass the gene on to their child, that child’s own offspring are not at increased risk. Carol’s family carries that type of autosomal dominant Alzheimer’s gene. Because her father was afflicted, Carol herself had a 50% chance of developing the disease, and if she was a carrier, so did her two children. (Location 131)
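
A quick sketch of the arithmetic behind those probabilities (not from the book; it assumes a single autosomal dominant allele with essentially complete penetrance, one heterozygous carrier parent, and a non-carrier partner):

```python
# Toy arithmetic for an autosomal dominant disease gene (illustrative only).
# Assumes the affected parent carries one mutant and one normal copy, the other
# parent carries no mutant copy, and inheriting the mutant copy leads to disease.

def inheritance_risk(parent_is_carrier: bool) -> float:
    """Probability that a child inherits the mutant copy from this parent."""
    return 0.5 if parent_is_carrier else 0.0

p_carol = inheritance_risk(parent_is_carrier=True)                  # Carol's risk, given her father was affected: 0.5
p_child_if_carrier = inheritance_risk(parent_is_carrier=True)       # each child's risk if Carol carries it: 0.5
p_child_unconditional = p_carol * p_child_if_carrier                # before knowing Carol's status: 0.25
p_child_if_not_carrier = inheritance_risk(parent_is_carrier=False)  # if Carol does not carry it: 0.0

print(p_carol, p_child_if_carrier, p_child_unconditional, p_child_if_not_carrier)
```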

The gist of the amyloid hypothesis is that mutations in APP (and other factors) lead to the accumulation of amyloid into plaques, and because these are toxic, they trigger a cascade of events that leads to neurodegeneration; in turn, this leads to cognitive decline. This hypothesis quickly became the dominant focus of Alzheimer’s research. (Location 152)

So, what’s holding back a cure for Alzheimer’s? Tragically, we can ask the same question for so many other brain disorders, including Huntington’s, Parkinson’s, multiple sclerosis, epilepsy, depression, schizophrenia, and many more. Though we have ways of mitigating some of the effects of these disorders, we cannot cure any of them. (Location 180)

Humanity can do so many amazing things. We can fly to the moon; we can battle many types of cancers into remission. So, what’s holding us back from curing brain dysfunction? Or, in the absence of cures, what’s holding back more effective treatments? (Location 192)

The bench to bedside narrative is so deeply ingrained in brain research that it’s not typically questioned or even discussed. Brain researchers all have their own analogs of the statements I make in all my research grants: “The goal of this proposal is to understand how the brain stores memories. These results could inform new treatments for deficits in memory such as age-related dementia.” (Location 204)

when it comes to brain disorders, the orderly progression of discovery in brain research has not led to an orderly progression of new treatments and cures for brain dysfunction. (Location 208)

Around 2011 something alarming happened: six of the world’s largest pharmaceutical companies decided to pull back their brain drug development efforts following years of failed and expensive attempts to bring drugs through clinical trials. Together, they had invested several billion dollars in developing new drugs, including $18 billion in Alzheimer’s alone, and they had very little progress to show for the venture. What was going wrong? Reports suggest that, en masse, these companies concluded that we do not yet know enough about the brain to create new drug therapies for its dysfunction; it simply is not a good investment. (Location 212)

Progress in the nuts-and-bolts arm of brain research is exploding by any measure you choose: papers published (fig. 1a; a proxy for facts collected), or even the number of pages in neuroscience textbooks. And it has been for several decades. However, progress in developing new treatments has not followed as expected: the number of new brain drugs introduced annually has been steady for the past 30 years. (Location 220)

brexanolone was one of the first realizations of the bench to bedside narrative leading to a genuinely novel treatment for a psychiatric disorder. It didn’t come until 2019, and it was one success in a rich and varied history in which successes were few and far between. (Location 232)

You might wonder: How can this possibly be true? After all, we have plenty of drugs to treat depression, anxiety, psychosis, and many other psychiatric disorders. Well, while these existing drugs work for some individuals, they fail to work for many others. And further, we don’t have a complete understanding of how these drugs work. It turns out that many of these drugs were not developed based on understanding how the brain operates, but rather during a time when we did not understand much about the brain at all. In fact, when these drugs were developed, the field hadn’t even reached a consensus that neurons in the brain use neurotransmitters to communicate. Instead, many of these brain drugs were discovered either serendipitously or by a process I’ll call “try it and see what happens.” The first antidepressant (iproniazid) was created in 1952 while researchers were looking for a treatment for tuberculosis. The first drug to treat anxiety (meprobamate) was created in 1945 while they were looking for a treatment for penicillin-resistant bacteria. Ritalin, one of the first drugs for attention deficit hyperactivity disorder (ADHD), was created in 1944 via “try it and see.” A nontrivial fraction of the drugs that we have today are actually not new drugs, but rather refinements of old drugs created before 1960. And, while we now better understand how the old (and new) drugs work, many of the newer drugs operate in the same ways as the original ones. (Location 241)

the fact that what was once one of the biggest and most influential proponents of brain research is now questioning the efficacy of the enterprise should prompt us all to pause and reflect, to say the least. (Location 265)

Before we can think about this new path, we need to understand the Grand Plan as it has existed in neuroscience to date. In a nutshell, we’ve been oversimplifying our ideas about what type of thing the brain is and how it breaks. It’s not that simplification in and of itself is bad; in fact, simplification is inevitable. As the saying goes, “All models are wrong, but some are useful.” In that famous phrase, “wrong” refers to the fact that any model, by design, is a simplification that captures some aspects of reality while disregarding others. The problem in brain research is that we’ve oversimplified how we think about the brain in ways that aren’t just wrong but also aren’t useful for treating brain dysfunction because they fall short of what we need to know. The Grand Plan up to this point has rested on these oversimplifications. (Location 295)

Explain this critique of oversimplification in the study of the brain, drawing on the book’s argument.

Quote (reel)

most brain disorders cannot be linked to individual genes or even a handful of them. (Location 315)

In this paper, Kandel described the new, emerging way to think about psychiatric disorders as tied to physical changes in the brain. He was tapping into the same ethos that Insel was channeling during his tenure at NIMH when he sought to redefine “mental disorders” as “brain circuit disorders”; it was the same ethos that led to the description of molecular medicine that is reflected in today’s textbooks. Because it reflects the ideas of not just one individual but an entire era, I’ll refer to it throughout the book as the molecular neuroscience framework. This way of thinking was a notable shift away from how mental disorders were regarded in the 1960s, when only a few such disorders were regarded as “organic” (brain) disorders. The majority were regarded as “functional” (mental) disorders with unspecified causes that need not be tied to the brain in any way and were most often treated through psychodynamic therapy. (Location 320)

the fifth principle captured the idea that the long-term effects of psychotherapy and counseling operate via this same mechanism: by learning. Therapy changes gene expression and, in turn, reshapes neural circuits. In this framework, both psychotherapy and brain drugs operate via a common pathway. (Location 338)

it has implicitly been the predominant way of thinking about the brain in the past few decades of brain research and has, therefore, guided how we have thought about treatments for brain disorders. The framework has led us to several important discoveries, including some that are recognized by Nobel prizes, and it has also led to transformative new therapies to treat brain disorders. At the same time, we can now see how it’s problematic, and why it’s time to evolve beyond it. (Location 359)

the framework can be summarized by the phrase, “find the broken domino and fix it.” Why is this a problem? Because the brain is not a domino-like system. It is instead a system that is designed to adapt to changing conditions. A changing brain means that many dominos and their interactions might change over time, and we may not be able to reliably point to a single domino as a culprit. (Location 371)

we need to seriously consider whether the reason we still do not understand the causes of many brain disorders is that we’ve been primarily focused on pinpointing broken dominos, and this is just not how the brain works. (Location 380)

A fable written in 1963 titled “Chaos in the Brickyard” likens the scientific enterprise to building large, imposing edifices, and the facts that scientists acquire via their experiments to bricks. The tale begins by describing science as a type of thoughtful construction that relies on producing bespoke bricks, as needed, to build solid and sound structures while also avoiding waste. It then transitions into the misguided notion that “bricks are the goal,” with bricks collected into large piles with great pride—ultimately at the cost of actually building anything, because the builders can’t find the right bricks in the big mess. The fable ends with the warning, “Saddest of all, sometimes no effort was made to maintain the distinction between a pile of bricks and a true edifice.” (Location 408)

Hardy now advocates for modeling the brain systems associated with Alzheimer’s as a nonlinear dynamical system; in these systems, typically the whole (like cognitive function and dysfunction) is not easily predicted from the operation of its parts (like proteins). Calling this a shift in perspective doesn’t begin to describe it: “For non-linear systems, correlation does not imply causation, causation does not imply a correlation, and even the concept of cause-effect can be difficult to define when multiple interactions, feedbacks, and time delays are involved.” Indeed, these types of systems are far from intuitive. They can be understood; they just require different approaches, including mathematical models. (Location 435)
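
To make the quoted point concrete, here is a toy simulation (my own, not from the book) in which y is entirely caused by x through a nonlinear coupling, yet the linear correlation between them is essentially zero:

```python
import numpy as np

rng = np.random.default_rng(0)

# x causally and deterministically drives y through a nonlinear (quadratic) coupling.
x = rng.normal(size=100_000)
y = x**2

# The Pearson correlation is ~0 even though x fully causes y.
r = np.corrcoef(x, y)[0, 1]
print(f"correlation(x, y) = {r:.3f}")  # approximately 0.00
```

Real biological systems add feedback and time delays on top of this, which only widens the gap between correlation and causation.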

Brain research is shaped around the reasonable presumption that understanding the right things about the brain will lead us to new treatments for its dysfunction. Achieving this goal begins with a model or theory of a brain function, like mood, or a disorder, like depression. But how do we create an illuminating model or theory? (Location 486)

One strategy is methodological reductionism, the notion that you should begin by breaking the problem down into simpler ones. (Location 491)

Reductionism lies at the heart of the molecular neuroscience ethos. It has both fans and critics. (Location 495)

Reductionism is often pitted against emergence, the idea that the behavior of a complex system is hard to predict from its parts alone, because that behavior emerges in surprising ways from how those parts interact. (Location 496)

Quote

The real problem is that “in practice,” exhaustively studying those interactions is typically infeasible because the number of things that exist and interact is too massive. We will not figure out the brain by exhaustively studying all the possible ways in which its 86 billion neurons can interact, and, from those principles, infer our mental abilities. (Location 500)
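
A back-of-envelope calculation (mine, using only the 86-billion figure quoted above) of why exhaustive study of interactions is off the table; even counting only pairwise interactions, and ignoring higher-order combinations and dynamics over time, the number is astronomical:

```python
from math import comb

n_neurons = 86_000_000_000   # the ~86 billion neurons cited above

# Distinct neuron pairs alone, ignoring higher-order combinations, synapse-level
# detail, and how interactions change over time:
pairs = comb(n_neurons, 2)
print(f"{pairs:.2e} possible pairwise interactions")   # roughly 3.7e21
```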

Quote, note (reel)

The rationale behind bottom-up approaches is to begin by starting with the parts. The rationale behind top-down approaches is to focus on understanding the things that we most want to explain. (Location 509)

The struggle that psychiatrists face in understanding the causes of mental disorders translates into a battle about how to treat them: Do we treat the brain (with drugs or brain stimulation), or do we treat the mind (with behavioral and talk therapy)? (Location 521)

Today, the effectiveness of CBT and other approaches like it is underappreciated. That said, they don’t work for everyone. But neither do brain-based therapies (like antidepressants). The big question is: What type of treatments should we employ for an individual? Ones that target the brain? The mind? Or both, simultaneously? Likewise, where should researchers look for new treatments? It’s a centuries-old debate. (Location 541)

The bench to bedside narrative is pervasive not just in brain research, but in biomedical research of all types: cancer, cardiology, infectious disease—you name it. The narrative captures the straightforward notion that if we want to fix the brain, we need to understand it. (Location 555)

In brain research, the bench to bedside narrative leads us to the following puzzle: If new understanding leads to new treatments, why hasn’t the accelerating pace of discoveries about the brain led to an acceleration in the introduction of new therapies? The answer cannot be that there is a simple delay in the time required for discoveries to be transformed into treatments; something else is happening. (Location 557)

Instead of starting at the bench and ending at the bedside, new treatments typically have emerged serendipitously—scientists stumbling on a treatment for one disorder while looking for another—or via “let’s just try this therapy and see what happens.” (Location 563)

Historically, when bench to bedside research has worked, it’s worked well: once brain researchers have pinpointed causes, they’ve usually been pretty amazing at developing treatments. Unfortunately, we’ve been unable to pinpoint the causes of many types of brain dysfunction, and this is the central factor that led so many pharmaceutical companies to abandon the drug development effort around 2011: the absence of good leads to chase. (Location 566)

This last step is expensive and fraught: the average cost of clinical trials for a single brain drug is estimated to be around US$1 billion, and the trials take approximately 10 years. Once a brain drug enters clinical trials, the probability that it makes it through and to the market is less than 5%. (Location 583)
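
A rough worked calculation using the two figures quoted above (about US$1 billion per candidate and a success rate below 5%); the inputs are estimates, but the implied expected cost per approved drug is the point:

```python
# Figures quoted above (estimates): ~US$1 billion in clinical trials per brain-drug
# candidate, and a <5% chance that a candidate entering trials reaches the market.
cost_per_candidate = 1e9
p_success = 0.05   # treat the quoted "<5%" as an upper bound

candidates_per_approval = 1 / p_success                      # at least ~20 candidates per approved drug
expected_cost = cost_per_candidate * candidates_per_approval
print(f"expected trials cost per approved brain drug: >= ${expected_cost/1e9:.0f} billion")
```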

Maybe one of the modern antidepressants: paroxetine (Paxil), sertraline (Zoloft), or escitalopram (Lexapro)? Not exactly—these are all refinements of another drug. And while these refinements relied on understanding how the brain functions, they are not novel drugs that work in a new way. (Location 717)

To understand how brain drugs can be created in ways not accounted for by a bench to bedside narrative, let’s take a look at the class of drugs known as antipsychotics. Not one of the twenty-five drugs in this class currently in use has a bench to bedside development story. (Location 731)

The first antipsychotic, chlorpromazine (Thorazine), was, like Ritalin, discovered in the 1950s before we knew much about the brain at all. (Location 739)

Chlorpromazine was developed by a French pharmaceutical company focused on creating antihistamines to treat allergies. In that effort, researchers noticed that many of the antihistamines had sedative side effects, and this inspired them to investigate whether they might have a psychiatric application. In 1952 chlorpromazine was administered as part of a drug cocktail to one uncontrollable manic patient. The effect was immediate and calming—so much so that the patient was eventually discharged. Its use was soon extended around the world to other patients with similar symptoms. (Location 745)

At the time of this writing, there are twenty-five different antipsychotic drugs in use. How many of those operate by a novel mechanism? Only the first: chlorpromazine. The antipsychotic effects of all twenty-four other drugs, including clozapine, arise from binding to the same types of receptors. What differs between them is how effectively they work and their side-effect profiles; these differences are thought to follow from the strengths with which they bind to those receptors. Remember that chlorpromazine was first developed in the 1950s, based on behavioral observations alone. What this means is that the arc of bench to bedside discovery as we typically understand it has never contributed to an antipsychotic that works in a new way. (Location 779)

The first antidepressant, iproniazid (Marsilid), was stumbled on in 1952 during clinical trials of a treatment for tuberculosis, a bacterial lung infection, when researchers observed that patients who received the drug were “joyous.” In 1945 two Danish scientists created disulfiram (Antabuse, approved in 1951) to treat alcohol-use disorder while trying to develop a treatment for intestinal worms. In the process, they tested the drug on themselves and discovered that when they drank alcohol, it made them feel sick. While looking for an antibiotic to treat bacteria resistant to penicillin, researchers stumbled on a substance that inspired the first anti-anxiety drug, meprobamate (Miltown, approved in 1955). They observed that one of the compounds had a calming effect on mice, and they refined it into a drug with longer-acting effects. (Location 798)

Electroconvulsive therapy has a long and complex history that predates the earliest antipsychotics and antidepressants. Today, it is an option for individuals after antidepressant drugs have failed; in more than 50% of these cases, it is effective. The treatment involves inducing a brain seizure with electrical current while a patient is under anesthesia and given a muscle relaxant. It is typically given two or three times a week over the course of 2 to 4 weeks. (Location 811)

As we can see, while findings about the brain inspired the development of both of these therapies, we still don’t really understand how either of them works. (Location 845)

by far the biggest factor holding back the development of new treatments is simple: we do not know enough about the causes of brain dysfunction. In the case of many neurodegenerative disorders, we have a good sense of what degenerates but not what causes the degeneration (or how to prevent and stop it)—Alzheimer’s, Parkinson’s, and multiple sclerosis are all examples. In the case of most neuropsychiatric disorders, we have even less of an idea of their causes and no biological tests for them, including depression, anxiety, obsessive-compulsive disorder, schizophrenia, bipolar disorder, and anorexia. In some of these cases there are treatments, but not ones that followed an understanding of biological causes. Psychiatric disorders are among the most complex and mysterious disorders in all of medicine. (Location 871)

In the past few decades, the relationship between understanding and building models has changed in a fundamental way with the introduction of increasingly effective artificial intelligence and, specifically, machine learning. (Location 891)

The drawback of machine learning is that because these models are so complex, we often do not understand how they come up with the solutions they find. This means that the creation of models can no longer be held up as an illustration of understanding; for the first time in history, we can build models by gathering massive amounts of data but otherwise not understand much of anything. Yet we are using these models to help us find treatments for brain dysfunction. The program AlphaFold is one great example. (Location 897)

In the machine-learning approach, researchers train models based on the protein structures that have already been determined and then ask the model to predict the as yet unknown structure of other proteins without explicitly building principles into the model; instead, the model implicitly learns what it needs to know. (Location 915)
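
This is not AlphaFold, but a minimal toy sketch of the train-on-solved-examples, predict-the-unsolved pattern described above; the "features" and "structural property" here are invented stand-ins, and the model is a simple least-squares fit rather than a deep network:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for "sequence features -> structural property": the model never
# receives explicit physical principles, only examples it can learn from.
n_train, n_test, n_features = 500, 50, 20
true_weights = rng.normal(size=n_features)          # unknown "rules" to be learned implicitly

X_train = rng.normal(size=(n_train, n_features))    # "proteins" with known structure
y_train = X_train @ true_weights + 0.1 * rng.normal(size=n_train)

X_test = rng.normal(size=(n_test, n_features))      # "proteins" with unknown structure
y_test = X_test @ true_weights                      # ground truth we pretend not to know

# Fit on the solved examples, then predict the unsolved ones.
learned_weights, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
predictions = X_test @ learned_weights

print("mean absolute error on held-out examples:", np.mean(np.abs(predictions - y_test)))
```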

“All models are wrong but some are useful,” (Location 1009)

In Box’s own words: “There is no need to ask the question ‘Is the model true?’ If ‘truth’ is to be the ‘whole truth’ the answer must be ‘No.’ The only question of interest is ‘Is the model illuminating and useful?’ ” (Location 1012)

Behind the scenes, these different points of emphasis are driven by overarching concepts (often implicit) about what type of thing the brain is. The brain works via hydraulics; the brain is a computer—those best guesses (often metaphors) change with time and inform the questions that brain researchers ask and answer. If the brain is a computer, how does it compute what to do next? Those answers will most likely be reflected in brain activity. If the brain is created via a genetic blueprint, what genetic instructions assemble an eye? Those answers will most likely be reflected in genetic sequences and the proteins that follow from them. (Location 1019)

Neuroscience has led to some incredible breakthroughs, but we still do not understand what causes most types of brain dysfunction. It may be time for a change in perspective. (Location 1036)

Quote

The term schizophrenia was coined by the Swiss psychiatrist Eugen Bleuler in 1911, during an early era in which the idea that many types of brain and mental dysfunction might have anatomical causes, such as brain lesions, was gaining traction. Bleuler used the term to describe the splitting of the mind into different functions that each dominate an individual’s personality for some time. A few decades prior, a similar disorder had been labeled dementia praecox, but it was characterized as a disorder with progressive deterioration, and Bleuler did not believe that schizophrenia was always progressive. In his book describing the disorder, Bleuler pointed to evidence that it was hereditary, although he also admitted that some type of infectious cause could not be ruled out. (Location 1055)

Bleuler speculated that it was improbable that the disorder had psychic causes. He acknowledged, however, that psychic events could influence the symptoms, such as the specific delusions that an individual manifests. (Location 1063)

The impact of chlorpromazine in psychiatric wards during the 1950s was transformative, nearly eliminating the need for patient restraint. While that particular drug can lead to disfiguring involuntary movements known as tardive dyskinesia, other medications with fewer side effects followed later. (Location 1087)

One influential proposal that emerged in the 1950s–1960s was the idea of the schizophrenogenic mother. This was the notion that schizophrenia was caused by mothers who appeared to be loving but were actually tyrannical and bullying, often despite their generally good intentions. The gist was that mothers “gave birth to healthy children and then literally drove them mad” by creating a toxic home environment. The studies that supported this theory were later analyzed and deemed flawed, and in the 1970s and 1980s the idea of the schizophrenogenic mother was debunked entirely. But in the interim, the idea was put into clinical practice, shaming and blaming mothers for disorders they did not cause and subjecting both them and their children to therapies that, on the whole, did not help. (Location 1094)

Apropos of how psychotherapy models sometimes cause more harm than good.

Quote (reel)

Today, schizophrenia is primarily regarded as a brain disorder, but the definition of what that means has changed. Throughout the 1960s a popular view was that a mental disorder was only a brain disorder if 1) a biological cause could be pinpointed for it, and 2) it was responsive to brain-based treatment (through medication or surgery). In the 1980s the view began to shift toward a perspective that presumed that all mental disorders are brain disorders, even if their causes have not (yet) been pinpointed, and even if behavioral therapies were effective interventions. The implicit rationale is that in a feedforward pathway (brain to behavior), interventions at any point along it (including behavior) can be effective, at least in principle, even if the causes of dysfunction lie upstream (in the brain). This was the ethos that led to the molecular neuroscience framework. (Location 1123)

Quote

Together, these forces created a focus on looking inside the brain for the causes of schizophrenia (and other mental disorders). The official NIMH plan for schizophrenia research in 1987 focused squarely on the molecular neuroscience framework account of the brain as I described it earlier, where genes code for molecules that combine to form neurons; neurons combine into neural circuits; and the activity of neural circuits leads to all mental function and behavior (fig. 2). (Location 1156)

It was not until the mid-1600s that researchers generally agreed that the brain is the center of nervous system function. (Location 1178)

Quote

Elaborating long-held ideas that the brain controls the body via fluids, René Descartes (mid-1600s) proposed that the nervous system consists of fluid-filled tubes that operate via mechanical, fluid-based forces. (Location 1179)

Quote, note

In the mid-1700s the discovery of electricity inspired Luigi Galvani to propose another idea: that nerves operate via electrical signals. In one persuasive and influential demonstration, Galvani showed how the muscles of a dead frog’s leg could be contracted with an electrical pulse. (Location 1184)

Following Galvani, the role of electricity was clear, but still in question was whether the nervous system was one continuous reticulum (like the circulatory system) or was parsed into individual cells with gaps between them. In the late 1800s Camillo Golgi and Santiago Ramón y Cajal adopted fervent and opposing views on this topic, with Golgi advocating for a continuous reticulum. Ironically, Golgi developed one of the first stains that enabled the visualization of long axons and dendrites of nerve cells, and Ramón y Cajal used it to support his inferences that individual nerve cells exist. Scientists more broadly adopted the idea of the nerve cell in the early 1900s, although not universally until a bit later (Golgi himself was never convinced). (Location 1186)

These 1950s discoveries about the soup of the brain set the stage for an even more intense focus on brain molecules in the latter half of the twentieth century. In other words, thinking about the brain as something in which molecules matter triggered an intense focus on molecules. (Location 1218)

The symptoms of schizophrenia, however, cannot be tied to deficits in a single neurotransmitter pathway; it’s clearly more complex. (Location 1236)

having this information allowed researchers to do new things, such as compare human genes with the genes of other species. This is important because it’s easier to figure out what genes are doing in creatures like worms and flies (when analogous genes exist), and thus leveraging cross-species “homologies” can help researchers understand human gene function. (Location 1267)

only a handful of nervous system disorders have been linked to a single or a small number of gene mutations (like the variant of Alzheimer’s that Carol Jennings’s family carries). More often than not, the genetic variation that leads to brain disorders is highly complex, tied to variation in hundreds of genes. Today, proponents of the perspective that genes matter generally do not focus on individual genes. Instead, they advocate for developing ways to compute total genetic risk based on large and complex genetic combinations (while also advising that the results need to be interpreted cautiously). (Location 1281)
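
A minimal sketch (mine, with made-up effect sizes) of what "computing total genetic risk from large and complex genetic combinations" typically looks like in practice, namely a polygenic risk score computed as a weighted sum over risk-allele counts:

```python
import numpy as np

rng = np.random.default_rng(2)

n_variants = 200                                  # e.g., the ~200+ loci linked to schizophrenia (per the text)
effect_sizes = rng.normal(0, 0.05, n_variants)    # made-up per-variant weights (real ones come from GWAS)

# One individual's genotype: 0, 1, or 2 copies of each risk allele.
genotype = rng.integers(0, 3, n_variants)

# A polygenic risk score is just the weighted sum of risk-allele counts.
prs = float(genotype @ effect_sizes)
print(f"polygenic risk score: {prs:.3f}")
# As the text notes, such a score shifts disease probability only modestly and must
# be interpreted cautiously; it is not a diagnostic test.
```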

To answer the question of heritability, researchers in the mid-1970s sought the records of adopted children born to mothers with schizophrenia but raised by families without it. They found incontrovertible evidence supporting a genetic link, including that those children developed the disorder at the same rate as children raised by a mother with schizophrenia (implying that the disorder was tied to the genes, not the mother who raised them). Work done since the 1970s, however, also clearly demonstrates that it’s not linked to just a single gene—rather, using GWAS, it’s been linked to genetic variation in more than two hundred genes. (Location 1304)

Researchers also have determined that schizophrenia is not purely genetic—if one identical twin has it, the likelihood that the other will is only 50%. Following on the phrase epigenetics matter, work in the 1990s suggested the biology behind this epigenetic influence: individuals whose mothers were pregnant with them during the Dutch hunger winter famine of 1944–45 were more likely to develop certain diseases, including schizophrenia, and those individuals had different DNA methylation patterns than their siblings. Other factors associated with increased risk for schizophrenia include infections and injury in utero. Today, schizophrenia is suspected to be a neurodevelopmental disorder that follows from some type of disruption during fetal development, as a combination of genetic and environmental factors. However, what those genetic and environmental influences are is not well understood—we still do not have a test that can accurately diagnose the disorder based on genetics, DNA methylation patterns, and/or what happened in utero. (Location 1309)

The problem is that most brain disorders are vastly more complex than the examples I’ve described. Unlike cystic fibrosis or Huntington’s, which are strongly linked to variation in a single gene, most brain disorders are weakly linked to variation in many genes. The same is true for many types of cancers, including ovarian cancer, which is associated with variation in at least sixteen genes. Still, as long as the number of genes involved remains small, there is some hope that genetic combinations can inform treatments. Brain disorders like schizophrenia and autism, however, are linked not just to tens of genes but to hundreds. Researchers have not yet figured out good ways to interpret or deal with that degree of combinatorial complexity. (Location 1407)

As already mentioned, this is also true for schizophrenia, where if one identical twin has it, the chance of the other twin also having it is only 50%. This is because these disorders follow not from our genes alone but from complex interactions between our genes and our environment, and we often do not know what those environmental influences are. When a genetic mutation increases the probability that someone will develop a disorder but does not ensure it, this complicates the plan of using that genetic information to develop a treatment. Progress is no longer as straightforward as, say, engineering a mouse with the same mutation to study it. (Location 1419)

Simply put, brain disorders involve hundreds of genes, and their causes aren’t exclusively genetic. Once we acknowledge this, it becomes clear that the molecular medicine proposal will not be effective for treating most brain disorders. (Location 1425)

To investigate, a typical fMRI experiment compares fMRI activation across two different conditions—for example, to study vision, while looking at a picture versus a blank screen. The researcher then averages these differences in activation across many individuals. This approach is effective for understanding how the human brain is organized. (Location 1453)

To use fMRI for precision medicine, however, we need to use it in a third way that allows us to make sense of it for individuals, and this is where fMRI has hit a roadblock. Finding that fMRI activation in prefrontal cortex is lower on average when patients are depressed does not necessarily mean you can infer whether an individual patient is depressed based on an fMRI measure of prefrontal activation. The ability to do so depends on the amount of overlap in the depressed and nondepressed groups, and, as it turns out, there’s a lot of overlap. Consequently, you can’t use fMRI brain activation to cleanly diagnose whether an individual is depressed. This is not true just for depression and prefrontal cortex—it’s true for all mental disorders and all brain areas. (Location 1462)
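
A small simulation (mine, with invented numbers) of the overlap problem described above: the group difference is real and easy to detect with enough participants, yet classifying any one individual from the same measure stays close to chance:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000

# Hypothetical prefrontal fMRI activation (arbitrary units): the depressed group
# is lower on average, but the two distributions overlap heavily.
depressed    = rng.normal(loc=-0.1, scale=1.0, size=n)
nondepressed = rng.normal(loc= 0.1, scale=1.0, size=n)

group_difference = nondepressed.mean() - depressed.mean()   # ~0.2, reliably detected at this sample size

# Best single-threshold classifier for an individual scan:
threshold = 0.0
accuracy = 0.5 * (np.mean(depressed < threshold) + np.mean(nondepressed >= threshold))

print(f"group mean difference ~ {group_difference:.2f}")
print(f"individual classification accuracy ~ {accuracy:.2f}")   # only about 0.54
```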

Apropos of the absence of biomarkers in mental health, and the question that may arise about whether neuroimaging methods could provide them.

Note, quote (reel)

What is clear is that the naive form of molecular medicine, as it is presented in textbooks, is not the path forward for brain research. The challenge now is figuring out what to replace it with. (Location 1479)

Looking once more at schizophrenia, as we’ve discussed, today it is suspected to be a neurodevelopmental disorder that follows from some type of disruption during fetal development as a consequence of combined genetic and environmental factors. The gist is broadly consistent with the molecular neuroscience framework, where the environment determines the genes that are expressed as proteins during development, which combine to create neurons and shape neural circuits; in adulthood, the activation of neural circuits leads to schizophrenic symptoms. Beyond these somewhat hand-wavy statements about possibilities, we do not know much about the specifics—we cannot perform a genetic test or use a brain scan to diagnose the disorder. (Location 1488)

As described, the bulk of the science feeding into the molecular neuroscience framework was produced by the brain research community across a period of around 50 years. It was a shift away from the idea that most disorders affecting the brain had largely functional (mental) causes. In contrast, in this new framework, all mental disorders are organic brain disorders because all mental processes are biological. In the pursuit of understanding the brain, this era wholeheartedly embraced reductionism. While reductionism need not necessarily lead to oversimplification, a lot of this era of brain research did just that, as exemplified by the idea that a complex phenomenon like depression or psychosis can be entirely chalked up to the amount of certain brain chemicals, produced by the proteins expressed by genes. (Location 1493)

when we experience mood, we are tapping into brain activity distributed across many brain areas—but which ones and how does it work? We still lack good concepts for thinking about this problem. (Location 1517)

As George Box reminds us: “All models are wrong, but some are useful.” (Location 1524)

perhaps it’s time to pause for a moment in the intense effort to pile up bricks in the brickyard (as with the accumulation of more facts about, say, molecules and brain activity) and think a bit more about the edifices we are trying to build. (Location 1527)

One lesson we can apply to brain research is that metaphors are not the goal of research; the goal is models (the analogs of the algorithms in this computer science subfield). This lesson has two important consequences. First, metaphors, “word models,” and ways of thinking about things in broad strokes must be explicitly formulated as falsifiable models (and, in turn, hypotheses) so that researchers can test their validity. In the words of Wolfgang Pauli, a model or theory that cannot be assessed is “not even wrong.” A metaphor of the brain is precisely this. It ultimately needs to be formulated as a model, most often formalized as mathematical equations. If we fail to take this last step, we risk running in circles trying to test the validity of a metaphor that cannot be tested. (Location 1594)

Quote (reel)

While we need to keep our eye on the prize of models and theories, thinking about what the brain might be like is an essential first step we should not overlook. Shifts in how we think about it—Copernican insights—may be the spark that triggers the impactful progress we so desperately need. (Location 1603)

When I consider the state of brain research today, I suspect that some subfields, including anxiety and depression, are waiting for their Copernican moment. (Location 1627)

The idea of the brain as a computer can help us understand how the brain and mind are related. In this metaphor, the brain is something akin to the hardware, and the mind is produced from the software it runs. All mental function is thus mediated by the brain. That is, some might suspect that mental function emerges from brain function, but changes in the mind cannot happen without physical changes in the brain. (Location 1650)

even if we could pinpoint the specific neural circuits and the specific types of neurons involved in attention, we still would not have a satisfactory explanation of how attention works. Mapping the biological phenomenon is not enough. (Location 1662)

As Marr worked through these ideas, he converged on the proposal that a complete explanation of anything that the brain does requires descriptions at each of three levels that I will refer to as why, what, and how. At the top, descriptions of why focus on the thing that needs explaining, including the problem that an individual is trying to solve (such as seeing and identifying a face) and good ways to go about it. Descriptions of behavior fall at this level. One level down, descriptions of what typically focus on what the brain is doing—what each part is computing. In the case of vision, a what-level description would capture the computations performed by each part of the visual system to construct a neuron that responds selectively to a face. Finally, descriptions of how focus on the way in which those computations are implemented in the biological machinery of the brain, including the neural circuits, individual neurons, and molecular cascades involved. (Location 1674)

The year 1943 marked a turning point in thinking about one particularly pervasive model, artificial neural networks. Before then, computing (such as adding numbers) was considered one activity among many others that humans performed, such as cooking, walking, or talking. After 1943 computing shifted to the thing that the brain does to support all its functions. This transformation was the result of highly influential work by two neuroscientists, Warren McCulloch and Walter Pitts. In their scheme, networks are built from model neurons that flip between two states, active and silent, a property called all-or-none firing. These flips reflect the binary outcomes—true (active) versus false (silent)—of logical operations such as AND and OR (fig. 5a). McCulloch and Pitts demonstrated how simple model neural circuits can compute these logical operations, and simple operations can combine to perform more complex computations. In reality, neurons in the brain are extremely complex. The elegance of McCulloch and Pitts’s work is the illustration of how vastly simplified versions of these complex entities networked together can be powerful. (Location 1710)
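
A minimal sketch of a McCulloch-Pitts-style unit (my own simplification of their scheme): binary inputs, a threshold, and an all-or-none output, with simple AND/OR/NOT units combined into a slightly more complex computation (XOR):

```python
def mp_neuron(inputs, weights, threshold):
    """All-or-none unit: fires (1) iff the weighted input sum reaches the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def AND(a, b):  return mp_neuron([a, b], [1, 1], threshold=2)
def OR(a, b):   return mp_neuron([a, b], [1, 1], threshold=1)
def NOT(a):     return mp_neuron([a], [-1], threshold=0)

# Combining simple units yields a more complex computation: XOR(a, b) = (a OR b) AND NOT(a AND b).
def XOR(a, b):  return AND(OR(a, b), NOT(AND(a, b)))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", "AND:", AND(a, b), "OR:", OR(a, b), "XOR:", XOR(a, b))
```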

an algorithm is a procedure for solving a problem that can be carried out by an abstract computational device that manipulates symbols according to a set of rules, a Turing machine. (Location 1749)

even if we were to adopt the notion that the brain is the type of thing that can in principle compute anything, this idea is so broad that it’s not very helpful for moving brain research forward. (Location 1754)

To illustrate how the general notion of the brain as a computer can be helpful, let’s return to what we know about how pharmaceutical drugs work. We often know something about their effects at the molecular level, but we need to understand how those effects lead to changes in brain function and behavior. For instance, many antidepressants elevate levels of neurotransmitters like serotonin and norepinephrine in the synapse, which presumably affects brain activity in some way, but how exactly, and how does that change mood? (Location 1762)

we don’t understand how many of the drugs that we have operate at the what-level—how targeting molecules leads to changes in symptoms. (Location 1770)

the link we are most hungry for—a description of how what’s happening in the brain leads to the mind and behavior. (Location 1837)

When Ritalin is administered to monkeys, they, like humans, perform better on the covert visual attention task. Moreover, day by day, the degree of their improvement is predictable by the degree of decorrelation measured in the visual system, demonstrating the link between them. (Location 1852)
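
One toy way (mine, not the study's analysis) to see why decorrelation can matter: noise that is shared across neurons does not average away when their responses are pooled, whereas independent noise does:

```python
import numpy as np

rng = np.random.default_rng(4)
n_neurons, n_trials, signal = 100, 20_000, 1.0

def pooled_error(shared_noise_sd, private_noise_sd):
    """SD of the population-averaged response around the true signal."""
    shared  = rng.normal(0, shared_noise_sd, size=(n_trials, 1))          # noise common to all neurons
    private = rng.normal(0, private_noise_sd, size=(n_trials, n_neurons)) # noise independent per neuron
    responses = signal + shared + private
    return responses.mean(axis=1).std()

print("correlated noise :", round(pooled_error(shared_noise_sd=0.5, private_noise_sd=1.0), 3))  # ~0.51
print("decorrelated     :", round(pooled_error(shared_noise_sd=0.0, private_noise_sd=1.0), 3))  # ~0.10
```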

As described earlier, antipsychotic drugs block dopamine receptors. While all existing antipsychotics block a specific type of dopamine receptor (D2), it’s unclear how that alleviates psychotic episodes; there’s a big gap in explanation there. And that gap lies right at the heart of the big mysteries about the brain that we are eager to understand: the precipice between sanity and psychosis. (Location 1875)

Many of the arguments on the “mental disorders are not brain disorders” side of this debate boil down to an extension of this straightforward idea: once mental variables start interacting in complex ways to “cause” mental disorders, it can be more helpful to think about causal interactions at the mental level (as opposed to the brain level)—like in the insomnia causes fatigue example, where both insomnia and fatigue exist at the mental level. When stated that way, the idea is not so profound. We call the neurodegenerative disorder Huntington’s disease a “brain disorder” as opposed to a “subatomic particle disorder” after all (even though it does ultimately involve a reconfiguration of subatomic particles corresponding to the genetic mutation that causes it). (Location 1984)

When Marr originally proposed these ideas, he did so in collaboration with his colleague Tomaso Poggio, and at that time there were four levels (the what level was split into two). Today, Poggio agrees with a singular what level, but he proposes that two additional levels should be added above why (to capture evolutionary forces and learning/development). Many researchers (including myself) who find Marr levels beneficial liken them to conceptual guidelines that are helpful in particular scenarios as opposed to a specific prescription about what all possible explanations should look like. (Location 2002)

Marr was right in noting that for any question that takes the form of “How does X work?” (where X might be vision or memory or mood or sanity), there will not be a singular level of explanation that captures the answer. (Location 2019)

describing that a brain area is involved in a behavior is distinct from understanding how a brain area contributes, and this is true even if the description employs a causal perturbation, like brain stimulation. (Location 2032)

Despite many repetitions of the same chorus across decades, the need to link what’s happening in the brain to the mind and to behavior—Marr’s what level—often has not happened. (Location 2036)

I suspect that by regarding the brain as a (sequential) computer, we’ve set up the problem in the wrong way insofar as our goal is to understand and treat brain dysfunction. The brain as a computer is an extension of the domino chain, and that’s neither the type of thing that the brain is nor why it breaks. (Location 2061)

The computationalist metaphor of the mind.

Quote, note

While a powerful concept, it turns out that homeostasis is not a great description of what the body always needs to do, even in the case of blood pressure. As we shift from lying down to standing up, we rely on anticipatory increases in blood pressure just before we stand to ensure that we do not faint as a consequence of blood rushing away from our heads. The notion of allostasis is an extension of homeostasis intended to capture the fact that for many systems (including blood pressure), the goal is not to maintain perfect stability but to anticipate when changes will happen and meet that demand. More broadly, allostasis is the notion that the brain does not just respond to change but anticipates future needs and proactively adjusts the body to maintain efficiency. Another example of allostasis is the release of insulin triggered in anticipation of eating (by the sight and smell of food and habitual mealtimes), preventing a large rise in glucose when we eat. (Location 2092)
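
A toy discrete-time sketch (mine, with arbitrary units and gains) contrasting purely reactive, homeostatic correction with an allostatic, anticipatory adjustment made one step before a predictable disturbance such as standing up:

```python
def simulate(anticipate: bool, steps: int = 40, disturbance_at: int = 20) -> float:
    """Track a regulated variable (think blood pressure) around a set point of 0; return the worst dip."""
    x, gain, worst = 0.0, 0.3, 0.0
    for t in range(steps):
        if anticipate and t == disturbance_at - 1:
            x += 2.0                     # allostasis: pre-emptive increase just before "standing"
        if t == disturbance_at:
            x -= 2.0                     # standing up: sudden drop in the regulated variable
        x -= gain * x                    # homeostasis: feedback correction back toward the set point
        worst = min(worst, x)
    return worst

print("worst dip, reactive only     :", round(simulate(anticipate=False), 2))   # about -1.4
print("worst dip, with anticipation :", round(simulate(anticipate=True), 2))    # much smaller, about -0.4
```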

While the purpose of an adaptive system is to maintain robustness in the face of changing conditions, it turns out that endowing a system with adaptivity makes it fragile to catastrophic failures that are hard to prevent entirely. In April 1986 an accidental power surge ruptured the coolant system that supplied the Chernobyl nuclear reactor, leading to its meltdown, an explosion, and the spread of radioactive contaminants across the Soviet Union and Europe. In August 2003 a single power line in Ohio short-circuited after coming into contact with a tree, triggering a chain reaction that left over 50 million people serviced by the North American power grid (from New York to Ontario) without electricity for several hours to days. These are but a few examples of how fragile complex adaptive systems are and how they can fail (despite best efforts). (Location 2143)

That’s fragility. It happens when a system’s components are recurrently connected such that they depend on one another. While the fragility of an adaptive system can be minimized to some extent, there’s ultimately no way to eliminate it. It is a fact that as systems become more responsive, they also tend to become more fragile. Likewise, as more elements are included in an adaptive system, its fragility tends to increase. (Location 2158)
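
A toy illustration (mine) of the responsiveness/fragility trade-off: a feedback loop that corrects a variable based on a slightly delayed reading; raising the gain speeds up correction until, past a point, the same loop overcorrects and becomes unstable:

```python
def settle(gain: float, delay: int = 2, steps: int = 60) -> float:
    """Regulate x toward 0 using feedback on a delayed reading; return the final deviation |x|."""
    history = [1.0] * (delay + 1)                  # start displaced from the set point
    for _ in range(steps):
        x = history[-1]
        correction = -gain * history[-1 - delay]   # the controller only "sees" an old reading
        history.append(x + correction)
    return abs(history[-1])

for gain in (0.1, 0.3, 0.6, 1.0):
    print(f"gain {gain}: final deviation ~ {settle(gain):.2e}")
# Low gain: sluggish but stable. Moderate gain: faster and still stable.
# High gain: the delayed feedback overcorrects, and the system oscillates or diverges.
```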

One idea that has proven powerful for thinking about how this might work is the notion of a developmental landscape, first proposed by Conrad Waddington in the late 1950s. Development is envisioned as a ball that starts at the top (at the point of fertilization) and rolls down a landscape full of hills and valleys to the bottom (reflecting a mature individual; fig. 8a). In this scheme, a developmental disorder—let’s say schizophrenia—is envisioned as happening when the ball ends up in one valley. Not-schizophrenia happens when the ball ends up somewhere else. The landscape is configured such that the valleys are deep at the bottom of the hill, and once the ball ends up in the schizophrenia valley, there’s no way of escaping it. Roll the ball several times, and it will take different trajectories down the landscape, landing in the schizophrenia valley only some of the time (as a consequence of the initial conditions and tiny bumps along the way that push it in one direction or another). The landscape metaphor provides some intuition for how it is that variation during development that is so small it manifests like chance can lead to big effects (like schizophrenia) in a more mature individual. (Location 2168)

There are many things that a network might do, but its activity patterns determine what among those things it actually does, and attractor basins draw it into doing a specific thing. The dynamics of these complex systems are most often visualized in the same way: as plots of an energy landscape in which the system’s state (reflected as a ball) always rolls toward lower energies unless perturbed by some outside force. The landscape shape can have valleys (attractors) and hills (repellers), determined by the network configuration. This means that where the network will settle depends on where it starts. A simple network might have a single attractor basin; a more complex network might have many. (Location 2191)

Both developmental and adult plasticity change a network’s parameters and so can change these landscapes. Less developed attractors tend to be shallower, and it’s easier for some change in the network to cause it to jump out of that state—this might reflect what happens early in development or learning when a new state is being acquired. The attractor basin is shallow, and therefore the system might easily avoid settling there. When the attractors become more entrenched, it takes a much larger input or perturbation to move the network to a new state once it’s settled in an attractor basin. This parallels the idea that the brain becomes less able to change across development, which Waddington called canalization. More often than not, development leads the brain to “get stuck” somewhere healthy, but in the case of neurodevelopmental disorders like schizophrenia, the brain gets stuck in a maladaptive state. (Location 2197)
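
A toy simulation (mine) of the landscape picture: noisy downhill steps on a double-well energy landscape, where the parameter `depth` plays the role of canalization, so a shallow basin is escapable while a deep one traps the system:

```python
import numpy as np

rng = np.random.default_rng(5)

def escapes(depth: float, steps: int = 20_000, dt: float = 0.01, noise: float = 0.5) -> bool:
    """Noisy downhill steps on the double-well energy E(x) = depth * (x**2 - 1)**2.
    The two attractor basins sit at x = -1 and x = +1; `depth` sets how entrenched they are."""
    x = -1.0                                    # start settled in the left basin
    for _ in range(steps):
        grad = 4 * depth * x * (x**2 - 1)       # dE/dx
        x += -dt * grad + noise * np.sqrt(dt) * rng.normal()
        if x > 0:                               # crossed the barrier toward the other basin
            return True
    return False

print("escapes a shallow basin:", escapes(depth=0.2))   # usually True
print("escapes a deep basin   :", escapes(depth=2.0))   # usually False
```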

Some forms of mental illness (like depression and anxiety) are thought to be acquired in a conceptually similar way but during adulthood, through learning. In these “systems” models, some researchers find it helpful to think about recurrent interactions between psychological variables such as stress and different types of emotions. A normal amount of anxiety is thought to be a good, adaptive thing: it motivates us to go to the dentist and to save for retirement. Anxiety can also get trapped, however, in a cycle in which worrying leads to feeling down, which can lead to more worrying, which can lead to less social interaction, which can lead to more feeling down, and so on. This downward spiral can lead us to “get stuck” in a maladaptive attractor state; to get out, we need to break this feedback cycle. In this way, these models can be used both as metaphorical descriptions of how mental disorders happen and as explicit descriptions formulated as mathematical models. One advantage of writing down these dynamical systems mathematically is that they can be explored for emergent properties in scenarios that are harder to think through, such as when larger numbers of variables are involved. (Location 2205)
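
A minimal two-variable sketch (mine, with invented coefficients) of the kind of feedback loop described above: worry feeds low mood and low mood feeds worry, and once the coupling is strong enough the system spirals into a stuck, maladaptive state rather than settling near a healthy baseline:

```python
def run(coupling: float, steps: int = 200):
    """Discrete-time toy: worry and low mood each decay on their own but reinforce each other."""
    worry, low_mood = 0.1, 0.0                                   # start with a little everyday worry
    for _ in range(steps):
        worry    = 0.7 * worry + coupling * low_mood + 0.05      # 0.05 = background daily stress
        low_mood = 0.7 * low_mood + coupling * worry
        worry, low_mood = min(worry, 10.0), min(low_mood, 10.0)  # saturate so values stay bounded
    return round(worry, 2), round(low_mood, 2)

print("weak feedback loop  :", run(coupling=0.2))   # settles near a low, healthy baseline
print("strong feedback loop:", run(coupling=0.5))   # spirals upward and gets stuck at the ceiling
```

Breaking the loop in this toy corresponds to lowering the coupling, which is one way to picture what a successful behavioral intervention aims to do.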

In other words, the brain is not just a complex adaptive system; it’s a complex adaptive system that computes. (Location 2224)

The brain breaks because it is a complex system, and complex systems must be able to adapt to changing conditions—but this adaptability makes them inherently fragile. It’s thus hard to overstate the impact of shifting away from domino chains and toward thinking about the brain as a complex adaptive system; it changes everything. (Location 2233)

The umbrella term that researchers use for the group of mental functions that includes feeling, emotion, and mood is “affect.” Researchers who study affect discuss the experience of emotions like happiness, sadness, and calmness as “affective states” and the outward expression of those emotions (like smiling or crying) as “affect” itself. Open and contentious questions in this area of research include: What is the purpose of affect? Do other animals experience affective states in the same ways that we do? Did affect evolve to fulfill some function, and if so, what is it? (Location 2243)

In other words, “emotions are not just about what is, but also about what matters.” The brain is not just processing information but also trying to anticipate what will be good versus bad, so that it can maximize well-being. (Location 2254)

As Styron powerfully explains, “Most physical distress yields to some analgesia—not so depression. Psychotherapy is of little use to the profoundly depressed, and antidepressants are, to put it generously, unreliable. Even the soothing balm of sleep usually disappears.” Individuals don’t just experience depression; they suffer from it. While we have some ways to help them, those ways aren’t effective for everyone. We must find better ways. (Location 2274)

Apropos of how far we still are from effective treatments for depression.

Quote

As with other psychiatric disorders, biological tests can rule out possible confounding causes (like an underactive thyroid), but there are otherwise no biological tests for it—its diagnosis is based on an individual’s symptoms alone. (Location 2282)

Surprisingly, even though we know that antidepressants are capable of modulating mood and alleviating depression (at least in a subset of individuals), we still do not know what in the brain causes mood, and we do not have a great theory of what depression is. (Location 2284)

If we don’t know what depression is, how can we treat it? (Location 2296)

The first dimension traverses from happy to sad and is associated with approaching by virtue of rewards. The second dimension traverses from anxious to calm and is associated with avoiding by virtue of punishments. Other emotions, like aroused, are created as combinations of these two dimensions. (Location 2334)

We should anticipate that creating a model of mood will be trickier than it is for other brain functions, like memory. For memory, there is an objective ground truth (something happened or didn’t), and you can create a test to measure how well someone remembers. In comparison, mood is a subjective experience for which there’s no right answer for what your mood should depend on; that makes it more difficult to design a good test. Likewise, we expect that different types of experiences will affect the moods of different people to different degrees; we can’t expect that there will be a fixed equation that will describe how everyone’s moods will fluctuate. This implies that the goal is an equation where mood is allowed to depend on different things (like rewards and punishments), and we allow the degree to which those things matter for mood to be different for different individuals—the mood of some people might be less sensitive to negative experiences (or sensitive for a shorter period). (Location 2338)
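
One way to make that concrete (a sketch in the spirit of published computational models of momentary mood, not taken from the book): mood as a decaying, weighted sum of recent rewards and punishments, with sensitivity weights that can differ across individuals:

```python
def mood_trace(rewards, punishments, w_reward, w_punish, decay=0.7):
    """Mood as a running, exponentially decaying sum of weighted good and bad events.
    w_reward and w_punish capture how sensitive this individual is to each kind of event;
    decay captures how quickly an event's influence on mood fades."""
    mood, trace = 0.0, []
    for r, p in zip(rewards, punishments):
        mood = decay * mood + w_reward * r - w_punish * p
        trace.append(round(mood, 2))
    return trace

good_events = [1, 0, 0, 1, 0, 0, 0]   # rewards experienced at each time step (made-up)
bad_events  = [0, 0, 1, 0, 1, 1, 0]   # punishments experienced at each time step (made-up)

# Two hypothetical individuals living through the same sequence of events:
print(mood_trace(good_events, bad_events, w_reward=1.0, w_punish=1.0))   # balanced sensitivity
print(mood_trace(good_events, bad_events, w_reward=0.5, w_punish=2.0))   # punishment-sensitive: mood sinks lower
```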

Quote (reel), note