Highlights

id852439270

Episode AI notes

  1. The Free Energy Principle is a significant concept at the intersection of neuroscience and philosophy, providing insights into brain processes.
  2. The brain functions differently than previously thought, with the latest theory suggesting an inverted view of sensory information processing.
  3. Bayesian inference is essential for the brain to construct a representation of reality from sensory data and navigate the environment effectively.
  4. Approximate Bayesian inference is used to interpret sensory data efficiently due to the complexity of the formal solution.
  5. Generative models help the brain simulate sensory data based on prior assumptions to understand the environment effectively.
  6. The brain minimizes prediction error and informational distance in simulation to create a perception of reality based on the limited sensory data.
  7. Understanding Bayesian statistics and Bayes’ Theorem can enhance the comprehension of how evidence influences probability estimates.
  8. Neuro-anatomical plausibility of visual perception involves complex processes in the brain from light hitting the retina to encoding in different cortical areas.
  9. Neuroplasticity and learning in the visual cortex allow individuals to perceive the world differently through controlled hallucinations influenced by visual inputs.
  10. Hierarchical predictive processing in the brain involves low-level models detecting basic features feeding into higher layers processing higher-level features.
  11. The brain operates as a generative model: prediction errors flow upward through the hierarchy, and the brain generates what it thinks it is seeing, correcting it with those errors.
  12. Sensory data plays a crucial role in determining the contents of generative models, with a constant interplay between the likelihood of data and prior expectations.

→ Readwise


id682471704

The Free Energy Principle as the most validated model of how the brain works. Transcript: Shamil Chandaria At the same time, I was also a research fellow at the Institute of Philosophy at London University and looking at this kind of intersection between neuroscience and philosophy. And at the time, I think, you know, back in 2013, 14, you know, they asked me, since I was the kind of mathematical guy there, you know, there’s this thing called the free energy principle coming out of Karl Friston’s lab. And, you know, can you explain how this really works? You know, you know about entropy and stuff like that. So I started really getting into it and it was very interesting because of course it’s deeply connected with information theory and machine learning. And to some extent, I would say, I now take the position, and I think many neuroscientists do, that it’s the closest thing we have to a kind of general algorithm of what might be going on in the brain from a big picture perspective. And

brain prediction theory

→ Readwise


id682471962

Models of how the brain worked before the Free Energy Principle Transcript: Shamil Chandaria The thing is that, you know, maybe 20 years ago, the consensus of, you know, what the brain was doing was that it was kind of taking bottom-up sensory data, sensory information, and kind of processing it up a stack. And then eventually the brain would figure out what was going on. And that view of what the brain is doing is in fact upside down according to the latest theory of how the brain works. And

brain cognition prediction theory

→ Readwise


id682474268

The brain’s perceptual challenges and how it solves them through predictive inference. Transcript: Shamil Chandaria In fact, I don’t know how deep you want to go with this, but actually you can even start before that, which is from the philosophical problem, which is, you know, what Plato and Immanuel Kant kind of pointed to, which is that we only know our appearances, our experience. We have no contact with reality. Most people’s common sense view is that, oh, look, we’re looking out at the world through little windows in the front of our skulls, and we’re seeing trees as they really are. Now, of course, that cannot be true for precisely the reasons you said. We’re just receiving some noisy, random electrical signals coming in. And the brain has never seen reality as it is. I was going to say, the tree as it is in itself, if that makes any sense. Now, what the brain has to do is figure out the causes of its sensory data. In other words, it’s trying to figure out what is causing its sensory data so it can get some grip on the environment. And that, of course, is important from an evolutionary perspective, because if we don’t know what’s going on in the environment, we won’t know where the food is and we won’t know where the tiger is. So we need to find out the causes of our sensory data. You know, and this is ultimately, formally, exactly the statistical inference problem, the Bayesian inference problem. And Bayesian inference is trying to figure out the probability that, given my sensory data, I’m seeing a tree. Okay.

brain perception prediction reality

→ Readwise


id851815676

Perception of Reality and Bayesian Inference The common view that we perceive the world as it truly is through our senses is challenged: the brain only receives electrical signals and constructs a representation of reality by inferring the causes of sensory data. This process is crucial for survival, to identify threats and resources in the environment. The brain engages in Bayesian inference to make sense of sensory inputs and navigate the environment effectively. Transcript: Speaker 1 I mean, in fact, I don’t know how deep you want to go with this, but actually you can even start before that, which is from the philosophical problem, which is, you know, what Plato and Immanuel Kant kind of pointed to, which is that we only know our appearances, our experience; we have no contact with reality. Most people’s common sense view is that, oh, look, we’re looking out at the world through little windows in, you know, the front of our skulls, and we’re seeing trees as they really are. Now, of course, that cannot be true for precisely the reasons that you said; we’re just receiving some noisy, random electrical signals coming in. And the brain has never seen reality as it is. I was going to say, you know, the tree as it is in itself, if that makes any sense. Now, what the brain has to do is figure out the causes of its sensory data. In other words, it’s trying to figure out what is causing its sensory data so it can get some grip on the environment. And that, of course, is important from an evolutionary perspective, because if we don’t know what’s going on in the environment, we won’t know where the food is, we won’t know where the tiger is. So we need to find out the causes of our sensory data. You know, and this is ultimately, formally, exactly the statistical inference problem, the Bayesian inference problem.

→ Readwise


id851815738

Approximate Bayesian Inference for Understanding Sensory Data Understanding sensory data is crucial to comprehend the environment, locate food, and identify threats like tigers. The brain tackles this challenge by engaging in Bayesian inference, seeking to determine the probability of perceiving specific objects based on sensory input. However, the formal solution to Bayesian inference is highly complex and computationally intensive, leading to the use of approximate Bayesian inference. This approach involves learning to interpret sensory data by processing incoming information and attempting to discern the perceived objects effectively. Transcript: Speaker 1 In other words, it’s trying to figure out what is causing its sensory data so it can get some grip on the environment. And that, of course, is important from an evolutionary perspective, because if we don’t know what’s going on in the environment, we won’t know where the food is, we won’t know where the tiger is. So we need to find out the causes of our sensory data. You know, and this is ultimately, formally, exactly the statistical inference problem, the Bayesian inference problem. And Bayesian inference is trying to figure out the probability that, given my sensory data, I’m seeing a tree. Okay. So as we said, it turns out that the brain can’t solve this problem because actually formally solving, you know, the Bayesian inference problem turns out for technical reasons to be computationally explosive. So what evolution has to do, and what we have to do in artificial intelligence, is use another algorithm. It’s called approximate Bayesian inference. And the way you solve it, because Bayesian inference is so difficult, the way you actually solve it is going at it backwards. And what you have to do is you essentially have to have all this data come in and try to learn what you think you’re seeing.

→ Readwise


id851815677

Approximate Bayesian Inference and Generative Models The brain cannot formally solve Bayesian inference problems due to computational complexity. To address this, evolution and artificial intelligence use approximate Bayesian inference, which involves simulating sensory data based on what is believed to be observed. This process, known as generative modeling, helps the brain interpret and understand sensory input by simulating the expected data based on initial assumptions. Transcript: Speaker 1 So as we said, it turns out that the brain can’t solve this problem because actually formally solving, you know, the Bayesian inference problem turns out for technical reasons to be computationally explosive. So what evolution has to do, and what we have to do in artificial intelligence, is use another algorithm. It’s called approximate Bayesian inference. And the way you solve it, because Bayesian inference is so difficult, the way you actually solve it is going at it backwards. And what you have to do is you essentially have to have all this data come in and try to learn what you think you’re seeing. And from what you think you are seeing, you then simulate the pixels that you would be seeing if your guess is correct. So if I think I’m seeing a tree, what your brain then has to do is go through something called a generative model and actually simulate the sensory data that it would be seeing if this was indeed a tree. Now that is incredible, because what it means is that, well, you know, the upshot of that, just to cut to the chase, is what’s called a neurophenomenological hypothesis, which is that in fact, what we experience, if we’re
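That "going at it backwards" loop can be sketched in a few lines (a hypothetical one-dimensional toy, not anything from the episode): guess a cause, use a generative model to simulate the sensory data that guess would produce, and nudge the guess to shrink the prediction error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hidden cause out in the world (hypothetical); the brain never sees it directly.
true_cause = 3.0
# Noisy sensory data the world generates from that cause.
data = true_cause + rng.normal(0.0, 0.5, size=20)

def generative_model(cause):
    """Simulate the sensory data we would expect if `cause` were correct."""
    return cause  # identity mapping keeps the toy simple

# Start from a poor guess and iteratively reduce the prediction error.
guess = 0.0
for _ in range(100):
    prediction = generative_model(guess)
    prediction_error = data.mean() - prediction
    guess += 0.1 * prediction_error  # nudge the guess toward the data

# `guess` now sits near the hidden cause, recovered only via prediction errors.
```

With the identity generative model this collapses to estimating a mean, but the loop structure, simulate then compare then adjust, is the point.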

→ Readwise


id851815739

The Brain’s Internal Simulation and the Free Energy Principle The brain uses a generative model to simulate sensory data based on what it thinks it is seeing. This internal simulation is what we experience as our perception of reality. The Free Energy Principle states that the brain minimizes prediction error by simulating what it believes is happening in order to make sense of the limited sensory data it receives. Transcript: Speaker 1 And from what you think you are seeing, you then simulate the pixels that you would be seeing if your guess is correct. So if I think I’m seeing a tree, what your brain then has to do is go through something called a generative model and actually simulate the sensory data that it would be seeing if this was indeed a tree. Now that is incredible, because what it means is that, well, you know, the upshot of that, just to cut to the chase, is what’s called a neurophenomenological hypothesis, which is that in fact, what we experience, if we’re aware of it, is precisely that internal generative model, our internal simulation. Now you might just then conclude, well, we’re just hallucinating. We’re just simulating; how do we have any grip on reality? And this is where the free energy principle comes in. It says that, you know, what we have to do is we have to simulate what we think is going on. But it’s not any simulation. It’s a simulation that minimizes the prediction error between the output of your simulation and the few bits of sensory data that we get.

→ Readwise


id851815678

Minimizing Prediction Error and Informational Distance in Simulation A simulation should minimize prediction error with the sensory data while also reducing the informational distance between the output of the generative model (the simulation) and the priors. By achieving this, the simulation closely resembles what was expected before observing the sensory data. This is the basis of free energy, which involves two components: prediction error and informational distance to the prior expectations. By performing approximate Bayesian inference and creating a simulation that minimizes prediction error with sensory data and deviation from prior probabilities, one effectively minimizes free energy. Transcript: Speaker 1 Now, what you want to do is you want to have a simulation which is minimizing the prediction error with the sensory data, but also minimizing the informational distance between the output of your generative model, the simulation, and your priors. In other words, you want a simulation that is as close to what you would normally expect before seeing the sensory data. So this is really what the free energy is. The free energy has two terms. The first is roughly kind of a prediction error. And the second is an informational distance to the prior of what you’d be expecting. So it turns out that we can actually do approximate Bayesian inference, which is the mathematically optimal thing to do, if we simulate the world and create the simulation in such a way that minimizes the prediction error with the sensory data that we get and also minimizes the divergence from our prior probability distribution, prior probabilities. So that’s kind of the free energy in a nutshell. And it’s kind of, as I said, it’s very interesting because it helps us think about phenomenology, which is what I’m interested in, because if
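For Gaussian beliefs, the two terms described here have a standard closed form, sketched below with made-up numbers (the function names and values are illustrative, not from the episode): an accuracy term, the prediction error with the data, plus a complexity term, the KL divergence from the prior. This sketch uses a point-estimate approximation of the accuracy term rather than a full expectation under the posterior.

```python
import math

def gaussian_kl(mu_q, var_q, mu_p, var_p):
    """Informational distance KL(N(mu_q, var_q) || N(mu_p, var_p))."""
    return 0.5 * (math.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

def prediction_error(obs, pred, var_obs):
    """Gaussian surprise: -log p(obs | pred)."""
    return 0.5 * (math.log(2 * math.pi * var_obs) + (obs - pred) ** 2 / var_obs)

def free_energy(obs, mu_q, var_q, mu_p, var_p, var_obs):
    # Term 1: prediction error with the sensory data (accuracy).
    # Term 2: informational distance to the prior (complexity).
    return prediction_error(obs, mu_q, var_obs) + gaussian_kl(mu_q, var_q, mu_p, var_p)

# A simulation close to both the data (obs = 1.0) and the prior (mean 0.0)
# has lower free energy than one that strays far from both.
good = free_energy(obs=1.0, mu_q=0.8, var_q=0.5, mu_p=0.0, var_p=1.0, var_obs=1.0)
bad = free_energy(obs=1.0, mu_q=3.0, var_q=0.5, mu_p=0.0, var_p=1.0, var_obs=1.0)
```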

→ Readwise


id852439212

Understanding Bayesian Statistics and Bayes’ Theorem Bayesian statistics and Bayes’ theorem offer a straightforward mathematical approach to revising probability estimates based on evidence. The theorem gives a formula for the probability of an event given certain data, and, in full, for the entire probability distribution. By grasping the fundamental structure and principles of Bayesian statistics, individuals can better comprehend how evidence influences probability estimates. Transcript: Speaker 2 Yeah, it was a great overview. Maybe I’ll track back through some of that just to give people a few hand holds here and also give them areas where they may do some further research if they’re interested. Many people will have heard of Bayesian statistics or Bayes’ theorem. And it’s actually a pretty simple piece of mathematics that is worth looking up because, unlike many equations, once you track through the terms, it does repay one’s intuitive sense of how things should be here. This is a mathematical description of how we revise our probability estimates based on evidence. And so when you look at this equation, I just pulled it up to remind myself of its actual structure. Speaker 1 If you want, I can just do a little very simple example. Speaker 2 Sure. Yeah. I mean, I was imagining something like, you know, what’s the probability that it is raining given that the street is wet, you know? Yeah. So I mean, I’ll stick to the brain and the tree and the data. Sure. Speaker 1 Yeah. So, to think about our tree and brain example, what Bayes’ theorem gives you is a formula for calculating the probability of there being a tree given your sensory data. Okay. In fact, Bayesian inference, the way we’re doing it in the free energy framework, is calculating the whole probability distribution.
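The tree example can be worked through with made-up numbers (the prior and likelihoods below are illustrative, not anything stated in the episode):

```python
# Bayes' theorem: P(tree | data) = P(data | tree) * P(tree) / P(data)
p_tree = 0.2                # prior: how likely a tree is before any sensory data
p_data_given_tree = 0.9     # likelihood of this sensory data if there is a tree
p_data_given_no_tree = 0.1  # likelihood of the same data if there is no tree

# Marginal probability of the data, summed over both hypotheses.
p_data = p_data_given_tree * p_tree + p_data_given_no_tree * (1 - p_tree)

# Posterior: the revised probability estimate after seeing the evidence.
p_tree_given_data = p_data_given_tree * p_tree / p_data  # ≈ 0.692
```

Even with a modest prior of 0.2, evidence that is nine times likelier under the tree hypothesis pushes the posterior to roughly 0.69, which is exactly the "revising probability estimates based on evidence" described above.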

→ Readwise


id852439213

Neuro-Anatomical Plausibility of Visual Perception Visual perception involves light hitting the retina, being transduced into electrochemical energy in the brain, and passing through different brain areas for detection and encoding. Neurons respond to various features like straight lines; cortical columns build up complex images; some cells respond to faces, including specific ones like Bill Clinton’s. This suggests a one-way feed-forward mapping of the world, yet there are ample top-down connections from the frontal lobes to the visual cortex, showing a significant influence of higher brain areas on visual processing. Transcript: Speaker 2 Okay, so just to give some neuro-anatomical plausibility to this picture: again, the common sense view of the science here is that we have a world. Let’s stick to vision because I think it’s the most intuitive. We have a world which we see with our open eyes. And the way we see it is that the light hits the retina and then it gets transduced into electrochemical energy in the brain and transits through various brain areas. And along the way, various features of the visual scene are detected and encoded. So there are neurons that respond to straight lines. There are cortical columns in the visual cortex that build up a more complex and abstract image. And eventually you get to some cell in the cortex that responds to faces rather than anything else. And you’ll even get cells that respond to specific faces, like the fabled grandmother cell. And there was an experiment about 25, 30 years ago that showed that there were cells that were responding to the face of Bill Clinton and not any other. And so you have this kind of one-way feed-forward picture of a mapping of the world. And yet in your description here, you’re seeming to reverse the causality.
One interesting piece of neuro-anatomical trivia is that we have something like 10 times the number of connections going top down rather than bottom up, returning to visual cortex from the frontal lobes.

→ Readwise


id851820559

Neuroplasticity and Learning in the Visual Cortex Learning can modify the activity and structure of the visual cortex, allowing individuals to perceive the world differently. This learning involves areas of the cortex beyond vision that connect back to the visual cortex. Neurologically, the physical changes required for learning are inscribed in the brain, propagating down to the visual cortex. Deeper layers of the cortex develop a predictive model of the world through learning, described as a controlled hallucination resembling the dreaming brain’s activity, but constrained by visual inputs in waking life. Transcript: Speaker 2 That has always been somewhat inscrutable: we know that you can modify the activity and even structure of visual cortex by learning. So you can learn to see the world differently, and that learning largely takes place frontally, or in areas of cortex that are not strictly limited to vision. And yet they connect back to visual cortex. And so you imagine what is required neurologically to learn to recognize. Let’s say you become a radiologist and you learn to read CAT scans. That learning has to be physically inscribed somewhere. And we find that the changes propagate all the way down to visual cortex. So there’s a picture of layers here: some of these deeper layers that are above vision are now encoding a model of the world, on your account, that is predictive, that is making guesses, that is a kind of, I believe, what Anil Seth, when he was on this podcast, described as a controlled hallucination. It’s very much like what the dreaming brain is doing, except in waking life it is constrained by visual inputs to the system of the sort that you just described.

→ Readwise


id851820560

Feed Forward in the Generative Model The generative model generates a simulation based on priors to predict the incoming data; predictions flow down from the higher cortical areas to the low-level inputs, and only the prediction errors are sent up the feed-forward pathway. This helps explain why there are so many more feedback (top-down) connections than feed-forward ones in these systems. Transcript: Speaker 1 Right. Great. So exactly as you say, it’s kind of always been a bit of a mystery why there are 10 times as many feedback connections as there are feed-forward ones in some of these systems. And the picture that we just talked about is one where the generative model, this simulation model, actually points down from the higher cortical areas towards the low-level inputs where the sense data is coming in. Now, in fact, you know, one way to think about this model is that we’ve got this kind of generative model, which starts with our priors, what we think is going on, and makes a simulation. And what flows up the feed-forward part is just the prediction errors.

→ Readwise


id852439214

Hierarchical Predictive Processing in the Brain The brain functions through a hierarchical predictive processing system in which low-level models near the data detect basic features like edges and corners, which then feed into higher layers processing higher-level features. These models are driven by priors generating expected outcomes; information flows up only in the form of prediction errors, while the brain generates what it thinks it is seeing, and the raw data itself never flows upward. Transcript: Speaker 1 Now, it’s not just one huge model going all the way from top to bottom. As you intimated, the scheme that is now thought to arise is something called hierarchical predictive processing. So it’s essentially that you have a whole series of low-level models near the data. You know, the first layers of the visual cortex might be, you know, having, you know, models that are detecting edges and corners, and then, you know, you build up from there exactly like you do in a neural network, where higher layers in the network are essentially processing higher-level features, except that these are all being driven down by these priors that are generating what we would expect to see. And the funny thing is that the data actually never flows up the brain. All that’s flowing up is the prediction errors, up this feed-forward network. What’s coming down is the output of the generative model. So the brain is only generating what it thinks it’s seeing. What we’re seeing is just the prediction errors flowing up and saying, can you please adjust it, there’s a large prediction error here.
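A minimal sketch of that message passing (hypothetical identity-mapping layers, nothing like real cortical circuitry): each layer predicts the level below from its current estimate, only prediction errors travel upward, and estimates are nudged to cancel them.

```python
import numpy as np

# Three stacked estimates; index 0 is nearest the sensory data.
estimates = np.zeros(3)
data = 2.0   # the raw input; it never travels up the hierarchy itself
lr = 0.1     # how strongly each prediction error adjusts an estimate

for _ in range(300):
    # Top-down pass: each layer's prediction of the level below is just
    # its own estimate (identity generative model for simplicity).
    predictions = estimates.copy()
    # Bottom-up pass: only mismatches (prediction errors) flow upward.
    errors = np.array([
        data - predictions[0],          # sensory layer's error
        estimates[0] - predictions[1],  # error reported to layer 1
        estimates[1] - predictions[2],  # error reported to layer 2
    ])
    estimates += lr * errors  # each layer adjusts to quiet its error

# All layers settle so that predictions match and the errors vanish.
```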

→ Readwise


id851820561

Hierarchy of the Generative Model in the Brain Data does not flow up to the brain; instead, prediction errors flow upward through the hierarchy of models. The brain generates what it thinks it is seeing, adjusted by the prediction errors that flow up. There are multiple layers in this hierarchy, with each layer providing priors to the layer below, and concepts, multi-sensory integration, reasoning, and language sitting at the top levels. Transcript: Speaker 1 And the funny thing is that the data actually never flows up the brain. All that’s flowing up is the prediction errors, up this feed-forward network. What’s coming down is the output of the generative model. So the brain is only generating what it thinks it’s seeing. What we’re seeing is just the prediction errors flowing up and saying, can you please adjust it, there’s a large prediction error here. So what we think is going on is that we have these kind of models that sit one on top of another, and the higher-level model is where the priors come from. And now you might ask, well, where do the priors of that higher-level model come from? Well, they come from the priors a layer above. And, you know, we don’t know how many layers in this hierarchy there are, but there might be something like half a dozen layers in the hierarchy. And right at the top of the hierarchy, you know, we get things like concepts and, you know, multi-sensory integration, and higher up, reasoning and language.

→ Readwise


id852439215

Weight of sensory data in generative models When attending to something, the sensory data is given more weight in determining the contents of the generative model. There is a constant interplay between the likelihood of the data and the priors, in which sensory information can override prior expectations. Despite only a small portion of the visual scene being seen in color and in detail at any given time, the brain creates a coherent, colorful representation of the entire scene. Transcript: Speaker 1 Essentially what happens is that when you attend to something, you give more weight to parts of the predictive processing hierarchy stack, and specifically you give more precision weighting to the sensory data, the likelihood of the data. And so you would say there’s a very large prediction error here, and instead of your priors dominating the posterior, what you actually see, the sensory data, would have a greater weight in determining the contents of the generative model. So, you know, this is a kind of two-way street that’s going on constantly between the likelihood of the data and the priors, your expectations. And, you know, it’s interesting just to take a step back. You’re seeing this relatively constant scene in front of you, presumably in these beautiful colors, in cartoonish definition. And yet if you look at what’s coming through your eyes, you can only see a very small portion of the visual scene at any one time, because that’s where your macula is. The only part that’s seen in color and accurately is like a tiny portion of the visual field. And yet you’re seeing everything clearly, in color.
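In the Gaussian case, that precision weighting has a simple closed form (a standard textbook identity; the numbers here are illustrative): the posterior mean is an average of the prior mean and the observation, weighted by their precisions (inverse variances), so boosting sensory precision, as attention is proposed to do here, lets the data override the prior.

```python
def posterior_mean(prior_mean, prior_precision, obs, obs_precision):
    """Precision-weighted fusion of a prior expectation and an observation."""
    total = prior_precision + obs_precision
    return (prior_precision * prior_mean + obs_precision * obs) / total

# Prior expects 0.0; the senses report 1.0.
inattentive = posterior_mean(0.0, 4.0, 1.0, 1.0)   # low sensory precision
attentive = posterior_mean(0.0, 4.0, 1.0, 16.0)    # attention boosts precision

# inattentive = 0.2 (prior dominates); attentive = 0.8 (data dominates).
```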

→ Readwise