Machine Learning Street Talk (MLST)

Why Every Brain Metaphor in History Has Been Wrong [SPECIAL EDITION]

42 min
Jan 23, 2026
Summary

This episode explores how scientific metaphors for the brain, from hydraulics to computers, reveal more about our current technology than about actual neurobiology. The host and guests, including philosopher Mazviita Chirimuta, argue that simplifications are necessary but dangerous when mistaken for truth, and that understanding requires recognizing the limits of our models and the perspectival nature of knowledge.

Insights
  • Brain metaphors throughout history reflect the most advanced technology of their era, not discoveries about how brains actually work—suggesting our current 'brain as computer' model may be equally misleading
  • The fallacy of misplaced concreteness: scientists often forget their elegant mathematical models are simplifications, not reality, leading to overconfident claims about AI inevitability and consciousness
  • Prediction and understanding are fundamentally different goals that pull against each other; modern AI excels at prediction but offers no guarantee of genuine understanding or insight
  • Knowledge is inherently perspectival and embodied—it emerges from finite communities with particular tools and blind spots, not from abstract, universal repositories like the internet or LLMs
  • Cognitive horizons exist for all organisms; humans likely have structural limits to what we can comprehend, similar to how rats cannot learn prime number mazes no matter the training
Trends
  • Shift from mechanistic to relational ontologies in science: moving away from 'what is it really?' toward 'what is this useful for, and from whose perspective?'
  • Growing skepticism about AI inevitability among philosophers and neuroscientists, driven by recognition that mechanistic assumptions about cognition may be historically contingent rather than fundamental
  • Distinction between prediction (what will happen), control (making it happen), and understanding (compact, communicable knowledge) becoming critical as LLMs dominate prediction without enabling understanding
  • Re-ontologization of knowledge in the digital age: recognition that computational models are one valid ontology among many, not the metaphysically correct one
  • Haptic realism in philosophy of science gaining traction: knowledge emerges through engagement and intervention, not detached observation, challenging the 'view from nowhere' ideal
  • Increased focus on cognitive horizons and bounded rationality as explanatory frameworks for why certain problems remain intractable despite computational advances
  • Critique of universal knowledge repositories (internet, LLMs) as culturally naive, with emphasis on how knowledge is always situated, perspectival, and dependent on community interpretation
Topics
  • Free Energy Principle and neuroscience reductionism
  • Brain metaphors and technological determinism in science
  • Fallacy of misplaced concreteness in AI and neuroscience
  • Prediction vs. understanding in machine learning
  • Functionalism and substrate independence in AI philosophy
  • Embodied cognition and haptic realism
  • Perspectival nature of knowledge and scientific ontology
  • Cognitive horizons and limits of human understanding
  • LLMs as prediction engines vs. understanding systems
  • History of brain models (hydraulics, electricity, computation)
  • Simplification and abstraction in scientific modeling
  • Collective vs. individual knowledge and epistemic communities
  • Re-ontologization in the digital age
  • Causal invariance and software as abstract patterns
  • Temperature as a non-physical property analogy for information
Companies
Google DeepMind
John Jumper, Nobel Prize winner, interviewed about prediction vs. understanding in protein folding research
Neuralink
Mentioned as example of companies promoting brain-as-computer metaphor through brain-software interface framing
Anthropic
Claude Code mentioned as example of recent AI automation advances in software development
GitHub
Referenced as example of how information/knowledge must be stored in physical substrates despite seeming non-physical
People
Carl Friston
Neuroscientist who developed the free energy principle; a childhood observation of wood lice inspired decades of work on it
Mazviita Chirimuta
Philosopher at Edinburgh University; author of 'The Brain Abstracted'; major influence on the host's thinking about scientific simplification
François Chollet
AI researcher and friend of the channel; proposed the kaleidoscope hypothesis: that the universe is made of code with intrinsic underlying structure
Luciano Floridi
Professor interviewed on ontology vs. metaphysics; discusses how the digital evolution changed our ontological understanding of the world
John Jumper
Nobel Prize winner at Google DeepMind; distinguished prediction, control, and understanding as three separate scientific goals
Noam Chomsky
Linguist and cognitive scientist; critiqued theories that accommodate all observations as meaningless; discussed cognitive limits
Joscha Bach
AI researcher; discussed software as abstract causal patterns and spirits as disembodied causal mechanisms in nature
Nicholas of Cusa
Medieval philosopher; coined 'docta ignorantia' (learned ignorance)—studying hard to learn what you don't know
James Joule
Physicist who discovered temperature is not a physical thing but a property of matter through cannon-boring observations
René Descartes
Philosopher who modeled nervous system as hydraulic automata, reflecting high-tech metaphor of his era
Albert Einstein
Physicist cited as believing universe is fundamentally orderly; 'God doesn't play dice' reflects faith in legible nature
Plato
Ancient philosopher; Chirimuta notes Chollet's kaleidoscope hypothesis mirrors the Platonic belief in hidden mathematical order
Anna Ciaunica
Philosopher; used mountain-climbing metaphor to argue that the path and substrate matter as much as the destination i...
Mike Levin
Researcher; debated with host about functionalism and whether substrate matters for intelligence
Jeff Beck
Neuroscientist; stated brain explanations will always be analogies to most sophisticated current technology
Terence Tao
Mathematician; GPT 5.2 reportedly solved one of his unsolved problems, raising questions about AI understanding
Quotes
"The free energy principle is not meant to be complicated or difficult to understand. It's actually almost tautologically simple."
Carl Friston
"Science is a humanistic endeavor. The purpose of science in the universe is to make the universe intelligible to us, not to control it, not to predict it, and not to exploit it."
Mazviita Chirimuta
"The sameness is something that we impose. It exists in our description, not in nature."
Mazviita Chirimuta (on software running on different substrates)
"Predict means you say what will appear on my computer screen in the future. Control is: I want it to come out 17. Understand is: I have such a small collection of facts that it fits on an index card."
John Jumper
"If you have a theory, there are two kinds of questions you have to ask: Why are things this way? Why are things not that way? If you don't get the second question, you've done nothing."
Noam Chomsky
"Knowledge is a much more collective phenomenon. You cannot throw engineering manuals and cement into a gorge and expect to get a bridge because the books don't have knowledge. Teams have knowledge."
Mazviita Chirimuta
"Knowledge is inherently of a place, of a community. We acquire knowledge not by being completely open-minded but by narrowing our view, discounting possibilities."
Mazviita Chirimuta
Full Transcript
Let me tell you a little story. 1960s, in the summer, a little kid named Carl was playing around in the back of his garden, and he noticed all of these wood lice crawling around, you know, the little insects that can curl up into a ball. And what he noticed was that depending on whether they were in the sun or in the shade, they would move faster or slower. They behaved differently. And that's it. Carl grew up to be Professor Carl Friston, one of the most cited neuroscientists alive. He's been on this channel before, more times than I can count, and that childhood observation about wood lice never left him. He spent decades developing what he calls the free energy principle, which tries to explain all of behavior with one equation. Perception, action, learning, why you scratch your nose: all of it, Friston claims, comes down to minimizing a single mathematical quantity. There's an old physics joke: assume that we can model a spherical cow in a vacuum. The joke is about how scientists grotesquely simplify messy reality to tame it. The free energy principle might be the ultimate spherical cow. It promises to explain self-organization, this bewilderingly complicated phenomenon, with something so emaciated we might as well call it tautological. Even Friston himself agrees with this, by the way. This is what he said to us last time we spoke with him. The free energy principle is not meant to be complicated or difficult to understand. It's actually almost tautologically simple. So the whole free energy principle is just basically a principle of least action pertaining to density dynamics, the dynamics or the evolution of, not densities, but conditional densities. That's just it. This is before thermodynamics, it's before quantum mechanics, it's just about conditional probability distributions. So what do we do with this? Has Friston actually found some deep truth about how minds work?
Or is he doing what many scientists do, which is mistaking the simplification for the actual thing? Well, it turns out there's a philosopher who has spent an incredible amount of time thinking about this exact problem. Professor Mazviita Chirimuta teaches at Edinburgh University. Her book, The Brain Abstracted, is basically about what happens when neuroscientists simplify brains to study them. What gets captured? What gets lost? One of the answers that might seem obvious to people is that we pursue science because we're curious. We just want to know how the world works. We want to reveal, discover, the underlying principles of the universe, which apply in all cases. Switching off the idea that you're just interested in nature for its own sake, out of curiosity, and saying, OK, how can we engineer these systems to actually do things that we want? Getting them to behave in artificial ways. If those simplifications allow you to achieve your technological goals, there's no in-principle problem with oversimplification, if you're going to say, I'm not just interested in nature for its own sake, I just want applied science. I should say, by the way, that The Brain Abstracted probably influenced my thinking more in 2025 than anything else. She's an inspirational lady; I look up to her very much, and certainly, thinking back on many of the episodes we've done in 2025, I can see her influence in the questions I ask and how I think about things. So here's her starting point. Scientists have to simplify. We're limited creatures trying to wrap our heads around systems way more complex than we can actually comprehend. Our working memory holds maybe seven items. Our attention is more scattered than a group of toddlers with iPads. We die after 80 years if we're lucky. So we build models, right? We leave stuff out on purpose. We tell ourselves stories about how the world works. But the question is, why does any of this even work at all?
Science is a humanistic endeavor, right? The purpose of science in the universe is to make the universe intelligible to us, not to control it, not to predict it, and not to exploit it. Now, you can do all those wonderful things, if you like, but in the end, as far as I'm concerned, science is no different from poetry, in that we're trying to make sense of the world, trying to give it meaning in relation to our own existence. If you'll allow the indulgence, I want to tell a little story. It's a boxing match. In the red corner, Simplicius. He thinks science works because the universe is actually simple underneath. Find an elegant equation and you've hit the real thing. Simplicity tells you that you're on the right track. And in the blue corner, Ignorantio. He thinks we simplify because we're too dumb to do otherwise. Our models work well enough for our purposes, but they're approximations, just useful fictions, if you like. The map, not the territory. Now, both of them agree that scientists need to simplify, but where they disagree is what that means about reality. Simplicius had history on his side, or at least a certain type of history. Galileo, Newton, Einstein, they all believed pretty explicitly that nature was fundamentally orderly, and that finding simple laws meant you'd found something true. Einstein famously said, God doesn't play dice. And no, he didn't actually think God had anything to do with it. But he was expressing faith that the universe is, at the very bottom, legible. Now, Chirimuta has gone all in on Ignorantio's position. She thinks successful science tells us we've become good at building useful simplifications, and that doesn't prove that nature is simple. The philosopher Nicholas of Cusa had a phrase for this attitude, docta ignorantia, basically learned ignorance. You study hard, you learn a lot, and what you learn includes what you don't know. Now, when we interviewed Chirimuta, she'd been following François Chollet's videos.
And for those of you who don't know, François is a friend of the channel. He's our mascot. He's one of my heroes. And he's got this idea called the kaleidoscope hypothesis, which is basically that the universe is made out of code, and underneath all of the apparent gnarly mess that we see, there is intrinsic underlying structure. Everyone knows what a kaleidoscope is, right? It's like this cardboard tube with a few bits of colored glass in it. Just a few bits of original information get mirrored and repeated and transformed, and they create this tremendous richness of complex patterns. You know, it's beautiful. The kaleidoscope hypothesis is this idea that the world in general, and any domain in particular, follows the same structure: that it appears on the surface to be extremely rich and complex and infinitely novel with every passing moment, but in reality, it is made from the repetition and composition of just a few atoms of meaning. A big part of intelligence is the process of mining your experience of the world to identify bits that are repeated and to extract them, extract these unique atoms of meaning. When we extract them, we call them abstractions. Now, she's not saying that Chollet is wrong. She's saying that he's making a philosophical bet. Might be right, might be wrong. It's the same bet that Plato made. Seeing that as a philosopher, I thought, that's Plato. Because François precisely says, we have the world of appearance. It's complicated. It looks intractable. It's messy. But underlying that, real reality is neat, mathematical, decomposable. Now, I feel like I should defend Chollet a little bit here, you know, because obviously we love Chollet. He's not making any weird metaphysical claims. At least I don't think he is. If scientific theories actually explained reality the way it is, you would expect fewer U-turns.
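Chollet's idea that intelligence mines experience for repeated "atoms of meaning" has a crude computational analogue in compression: a signal built from a few repeated motifs can be squeezed far more than structureless noise. A toy sketch of my own (not from the episode; the motif names are invented for illustration):

```python
import random
import zlib

def compressed_ratio(data: bytes) -> float:
    """Compressed size over original size: lower means more regularity found."""
    return len(zlib.compress(data, 9)) / len(data)

rng = random.Random(0)  # seeded so the sketch is reproducible

# A 'kaleidoscope' signal: huge apparent variety built from three repeated motifs.
motifs = [b"red-glass|", b"mirror|", b"blue-bead|"]
kaleidoscope = b"".join(rng.choice(motifs) for _ in range(2000))

# Structureless noise of the same length: no motifs to extract.
noise = bytes(rng.randrange(256) for _ in range(len(kaleidoscope)))

print(compressed_ratio(kaleidoscope))  # far below 1: the motifs are discoverable
print(compressed_ratio(noise))         # close to 1: nothing to abstract
```

The gap between the two ratios is a rough measure of how much hidden regularity a mechanical "abstraction-miner" like zlib can find, which is roughly the bet the kaleidoscope hypothesis makes about the world at large.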
Now the biggest simplification in the 21st century, the final boss of simplifications, is this idea that the mind is a computer, or that the mind is running a software program. So we have inputs, we have processing, we have an output. This metaphor has become so established in the collective zeitgeist that no one even questions it anymore. It barely even registers in our brains as a metaphor. So is it or isn't it? Isn't it a little bit weird that computation is this abstract formalism, like, you know, an automaton that makes state transitions, something completely non-physical, and we're describing the mind as if it is that abstract thing? That sounds a little bit weird. There are many movies made about this, people talking about uploading their minds into the matrix. Neuralink talks about interfacing with your brain's software. Joscha Bach thinks that consciousness is a software program running on your brain. That this is the universal, that you have these invariances in nature, that you can have patterns that have causal power, that have the ability to reproduce themselves, that have the ability to shape reality, that are invariances that you cannot simply explain more simply by looking at what atoms are doing in space. But you have to look at these abstract patterns to make sense of them. Every other explanation is going to be more complicated, in the same way as money is going to be impossibly complicated if you try to reduce it to atoms. So you have to look at these causal invariances. And spirits are actually such causal invariances. They are actually disembodied. They're not bodies. They're not stuff in space. They're not mechanisms in the same way, but they are causal mechanisms, abstract mechanisms. And so we put the spirit back into nature using the concept of software. A lot of people think that's metaphorical, but I don't think it's metaphorical at all. It's the literal truth.
Software is spirit. We're all just talking about this stuff without even batting an eyelid. Like, where's the skepticism, man? It just sounds so plausible to us, so we assume that it just kind of has to be the case. There is something super interesting about computers. What a computer ultimately is, is a causal insulator. The computer is a layer on which you can produce an arbitrary reality, for instance, the world of Minecraft. You can walk around in the world of Minecraft, and it's running very well on a Mac, and it's running very well on a PC, and if you are inside of the world, you don't know what you're running on. It's not going to have any information about the nature of the CPU that it's on, the color of the casing of the computer, the voltage that the computer is running on, the place that the computer is standing in in the parent universe, right, our universe. So the computer is insulating this world of Minecraft from our world. It makes it possible that an arbitrary world is happening inside of this box. And our brain is also such a causal insulator. It's possible for us to have thoughts that are independent of what happens around us. We can envision a future that is not much tainted by the present. We can remember a past that is independent from the present in which we are. And it's necessary for us. Our brain has evolved as such a causal insulator as well, to allow us to give rise to universes that are different from this one, for instance, future worlds, so we can plan for being in them. Bach says that money is an example of a causal pattern. It's not the ink on a banknote. It's not the electrons in your bank's server. It persists across its various physical instantiations: paper, coins, gold, digital ledgers. And yet, he says, money causally affects the world. It gets you fed. It starts wars. It builds cities. He says that software is the same. A program is an abstract pattern that can run on many types of chips, maybe even neurons.
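The claim that a program is an abstract pattern that can run on many substrates can be made concrete with a toy sketch of my own (not from the episode): two implementations of sorting whose concrete operations are completely different, yet whose abstract behaviour is identical. Whether that sameness lives in nature or only in our description is exactly the point in dispute.

```python
def bubble_sort(xs):
    """One 'substrate': repeated in-place adjacent swaps on a mutable list."""
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

def counting_sort(xs):
    """A very different 'substrate': tally occurrences, never swap anything."""
    counts = {}
    for x in xs:
        counts[x] = counts.get(x, 0) + 1
    return [x for x in sorted(counts) for _ in range(counts[x])]

data = [3, 1, 4, 1, 5, 9, 2, 6]
# The invariance: identical abstract behaviour on this input...
assert bubble_sort(data) == counting_sort(data) == [1, 1, 2, 3, 4, 5, 6, 9]
# ...even though, step by step, completely different operations happened.
```

Nothing inside either function "knows" it is the same algorithm as the other; the equivalence is asserted at the level of our description of inputs and outputs.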
And that pattern has causal power, because it controls whatever substrate it's running on. The same algorithm produces the same effects regardless of what physical stuff implements it. So the invariance, that sameness across substrates, is the causal mechanism, the pattern itself, at least according to Joscha. He even accepts that physics is causally closed. He says that the abstract description and the physical description are two ways of looking at the same causal structure. Neither is reducible to the other. Both are real. But I'm pretty sure Chirimuta would ask, who identifies that invariance? When we say the same algorithm runs on different chips, completely different things are actually physically happening, right? Different voltages, different electrons doing different things. The sameness is something that we impose. It exists in our description, not in nature. And as for the money example, money only works because of human interpretive practices, right? If you take away the humans and their agreements, it's just paper. Money is just paper. And the causal power is actually in the social substrate that participates in it. Now, I think Joscha has taken a useful way of talking about complex systems and promoted it to metaphysics. And that's Simplicius all over again, right? Mistaking the elegance of our descriptions for the structure of reality itself. I mean, maybe information really is more fundamental than matter. But that's another philosophical wager. And we've made these bets many, many times before. Just look at the history of all of this. Descartes thought that the nervous system worked like the hydraulic automata in the French royal gardens. Fluids pumping through tubes, pushing levers. That was the high-tech metaphor of his day. Later, when scientists figured out that nerves carry electrical signals, the brain became a telegraph network. Then it was a telephone switchboard. Signals travelling down wires. Operators routing calls.
And now, in our era, the brain is a computer. We should be precise about what we mean by physical. Everything has to be physical, because even GitHub, you know, has to store its data on some sort of hard drive or magnetic field or whatever technology; it's not storing it in nothingness. So knowledge, information, always has this form of physical embodiment. Now, I think we tend to think about it as non-physical because it is a thing that is not a thing, which is the same as temperature. You wake up, you look at your phone, and you see the temperature, and you decide how you're going to dress, and nobody has any doubt that temperature is something that can be measured. But it took about 2,000 years for us as a species to figure out what temperature was, and the fact that it could be measured. And there were two fundamental difficulties that, I would say, made it difficult for us to understand temperature. The first one is that, at first, people thought that hot and cold were two separate things, okay? So temperature was like a mixture of the two. It's like when you make green out of blue and yellow, okay? And it took a while for people to understand that cold was the absence of heat, and not that cold and heat were two different quantities that were tempered together, that were mixed. So 'temperature' actually means mixture, not, you know, what we now mean by temperature. The other thing that was very difficult to understand is that people thought that temperature was a thing, some sort of fluid that grabbed onto things. So let's say you had a steel rod that is hot: it's as if that steel rod has this sort of invisible fluid that is heat. And they had good reasons to believe that it was an invisible fluid, because it could flow. Let's say you could connect that rod to something that was cold, and that cold thing was going to warm up, because that fluid was going to be flowing in that direction, and so forth. So they thought that it had a physicality as a thing.
A brilliant Englishman, Joule, basically figures out that that is not the case, that temperature is not a thing. And the way he does it is through this observation, which, I don't know if you know how cannons used to be built. If you just grab a piece of sheet metal and you make it into a cylinder and you try to make a cannon out of that, the moment that you shoot the cannon, it's going to open up like a flower in a cartoon, like a Looney Tunes type of situation. So what they would do is make these solid cylinders of metal, and they would bore a hole in them, you know, to create the cannons. And boring those holes released an enormous amount of heat. So Joule thought, well, how come all of that heat is there? It's like an infinite amount of heat. If I can continue to bore a hole in a piece of metal for an infinite amount of time and keep releasing heat, then it cannot be a thing. And that, you know, leads him to realize that temperature is actually something that has to live in things, but it's not a thing itself. It's related to the kinetic energy of the particles in the thing, but it's not a thing itself. It doesn't have its own particle. There isn't, like, a temperature particle. Temperature is a property that matter has, and that holds on to things. Knowledge is similar, you know, in that it holds on to you and to me, you know, and to the collective, to exist. But it doesn't have a physicality in itself.
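What the speaker is gesturing at is the standard kinetic-theory result, not spelled out in the episode: for an ideal monatomic gas, temperature measures the mean kinetic energy of the particles rather than naming a substance of its own.

```latex
% Temperature as a statistical property of matter, not a substance:
% mean translational kinetic energy per particle in an ideal gas
\left\langle \tfrac{1}{2} m v^{2} \right\rangle \;=\; \tfrac{3}{2}\, k_{B} T
```

On this view, boring the cannon keeps doing mechanical work on the metal, so it keeps raising the particles' kinetic energy; no caloric fluid is being depleted, which is why the heat never runs out.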
But it always exists in some sort of physical medium or substrate, so in that sense it's always going to be physical, no matter how virtual it gets. It has maybe a different type of physicality, but even the electromagnetic waves that are transmitting data from your Wi-Fi router to your laptop are technically a physical embodiment. Now, I spoke with Professor Luciano Floridi a few years ago, and it was actually one of my favorite ever episodes of MLST. I think very highly of him, which is why we're going to show some clips of him in this show, because it's very apropos. This is what he had to say about it. Ontology, on the other hand, is how we structure the world, in the sense that we think that that's the way it is: with the kind of eyes we have and the kind of light around the world, those are the colors we perceive. But certainly a world full of colors is the world which I take it to be; that's my ontology. Re-ontologizing means changing some of that particular nature. Allow me a distinction; I hope it's not too confusing. Reality in itself: call it the system. A description of reality as we perceive it, enjoy it, conceptualize it, live through: call it the model of the system. Ontology, to me, is the ontology of the model; it is not the metaphysics of the system. I hope I haven't made a complete mess here. Okay, so metaphysics: the system, whatever the source of the data that we get. Fantastic. The data don't speak about the source. The music on the radio is not about the radio, but there is a radio. Of course, the music is what we perceive; the music has its own ontology, structure, etc. That is the model. The model is, at that point, what we enjoy. Why has the digital evolution changed the nature of the world around us? Not metaphysically, but ontologically: the re-ontologizing. Because some of the things that we inherit from modernity, a sense of the world, are now being restructured, and a certain understanding of that world too, so re-epistemologizing as well.
We go back to this temptation of talking about reality as if it were something that we need to grasp, catch, portray, hook, spear, when in fact the way I prefer to understand it is as malleable, understandable in a variety of ways, something that provides constraints. It doesn't mean that you can interpret it in any possible way, but it leaves room for different kinds of interpretations. So the flow of data that comes from whatever is out there, and again, I'd rather be agnostic about it, can be modelled in a variety of ways. One way, especially in the 21st century, given the technology we have, et cetera, is to interpret that as an enormous computational kind of environment. It's perfectly fine, as long as we don't think that this is the right metaphysics. It's the correct ontology for the 21st century. Now, this is not relativism, because on the other hand, different models of the same system are comparable, depending on why you're developing that particular model. Let me give you a completely trivial example. Suppose you ask me whether that building is the same building. That question has no real answer, because it depends on why you're asking. If you're asking because you want directions, I'm going to say, oh yeah, that's the same building. The same building? Yeah, absolutely, go there, turn left at the traffic lights. But if your question is, does it have the same function? Ah no, it's a completely different building: it was a school, now it's a hospital. Next question. So, is it or is it not the same? That question is the mistake: an absolute question that provides no interface, no, as computer scientists call it, level of abstraction, chosen for one particular purpose so that I can compare whether an answer is better than another. Let me crack a joke for the philosophers who might be listening to this. Theseus' ship: is it the same or is it not the same? Who is asking? Why? Because if it is the tax man, you're doomed, man.
I mean, there is no way you can play any "oh, I changed every plank". You're going to pay the tax. It's the same ship, I don't care. But if it is a collector? That ship is worth zero. You changed all the planks? You must be joking, it's worthless. So, is it or is it not the same? It depends on why you're asking that particular question. Tell me why, and I can give you the answer. The "why", in other words, is the frame within which we have chosen the interface that provides the model of the system, and hence the potential answer. So the question, "is the universe a gigantic computational system, yes or no?", is meaningless. Is it worth modeling the universe as a gigantic computational system for the purpose of making sense of our digital life? Oh, yes, definitely, because we are informational organisms. Aha, so metaphysics? No, I meant that in the 21st century, the best way of understanding human beings today is as informational organisms. Last century, we thought that biology made much more sense: a lot of water and a sprinkle of something extra, and so on; mechanism, and so forth. Not absolute answers, not relativistic answers, but relational answers: the relation between the question, the purpose, and the actual answer. It takes three, not two. So the computational model isn't literally true, but it's useful. The mistake is forgetting that it's a model. The early cybernetics guys, Wiener and McCulloch and Pitts, knew that they were working with analogies. McCulloch and Pitts wrote their famous paper showing that neurons could theoretically work like logic gates. They weren't claiming neurons actually were logic gates; they were using it as a kind of functional description. But somewhere along the way, the metaphor hardened. A lot of neuroscientists today don't say that the brain is like a computer. They say it is one, and the metaphor became the thing itself. Now, Chirimuta, borrowing from Whitehead, by the way, said that this is the fallacy of misplaced concreteness.
This is another one of those leaky abstractions I was talking about. By the way, there's a great book called The Brain Abstracted by Mazviita Chirimuta. I interviewed her recently. And she said that one of the most pervasive myths in neuroscience is that we use these leaky abstractions and idealizations to talk about cognition. And usually it's using the most recent technology at the time. So, you know, a few hundred years ago, we were describing the brain in terms of pulleys and levers. Yes, that's right. And then it was, you know, a prediction machine, a computer, and all this kind of stuff. Yeah. These are grounded things that we understand. They're really good models because we can both talk about computers. We both know what computers are. But the brain doesn't work like that in any sense. Jeff Beck put it even more bluntly when we spoke. It will always be the case that our explanation for how the brain works will be by analogy to the most sophisticated technology that we have. How's that for a non-answer, right? So, you know, a couple thousand years ago, right, how did the brain work? It was like levers and pulleys, man. I mean, duh, don't be ridiculous. At some point in the Middle Ages, it became humors, right? Because fluid dynamics, the kind of technology that took advantage of water power, was the most advanced technology that we had. Now the most advanced technology is computers. So, duh, that's exactly how the brain works. Now, here's something that kind of bugs me, right? You go into any AI conference, or you drink from the well of San Francisco by spending too much time on Twitter, and you develop this mindset that AGI is inevitable.
You start feeling the AGI, and you'd be forgiven for thinking this, because I've been using Claude Code and, my God, I feel there's been more interesting stuff happening in the world of software development in the last six months than in the previous 20 years. This technology is genuinely amazing, but it is automation technology. It's not really intelligence, which means it's only as good as your ability to specify, supervise, and delegate to the system. But it is absolutely amazing. So why do we have this view? It's not an argument that AI is impossible so much as: why does it seem so possible, so inevitable, to people? What I'm arguing is that if you look at the history of the development of the life sciences and of psychology, there are certain shifts towards a much more mechanistic understanding of both what life is and what the mind is, which are very congenial to thinking that whatever is going on in animals like us, the processes which lead to cognition, they're just mechanisms anyway. So why couldn't you put them into an actual machine and have that machine do what we do? With all of that mechanistic history in the background, AI could seem very inevitable. But if that mechanistic hypothesis is actually wrong, then these claims for the inevitability of a biological-like AI would not be well-founded; we could be subject to a kind of cultural-historical illusion that this is just going to happen. Cultural-historical illusion. I've been thinking about that phrase. Maybe our confidence says more about what we've inherited intellectually than about how minds actually work. Now, another thing that Mazviita has inspired me to think about a lot is the difference between prediction and understanding.
Indeed, when I interviewed the Nobel Prize winner John Jumper at Google DeepMind a couple of months ago, this was the question I asked, and he had quite an interesting way of distinguishing those two things. It's almost like, at every point, it's learning how to refine and optimize the structure. Okay, so I think we should distinguish three things: predict, control, understand. Predict means that you say, I'm going to do a thing: what will be the value on my machine, what will appear on my computer screen in the future? That is predict. Control is: I want to measure this thing in the future, and I want it to come out 17. That's control. Understand is a lot like predict, except there's a human in the loop. Understand means that I have such a small collection of facts that you will predict, and you will do it with facts that I can communicate to another human, in this compact form that fits on an index card. That's almost understand. And so I think these machines let us predict. They let us control. We have to derive our own understanding at this moment, right? We can experiment now on the artifact. We can look at the 200 million predicted structures, not just the 200,000 experimental structures, to help us understand, but it doesn't do the act of understanding for us. It does the act of predict and maybe control. The problem is that these two goals actually pull against each other. I think we're at this moment in science now because we have these tools, like LLMs for language and convnets in visual neuroscience, being used as predictive models of neuronal responses which don't have the mathematical legibility that people aspired to when I was trained in the field. And so you have this possible conflict. You can either pursue the goal of understanding or you can pursue the goal of prediction, but it seems like you can't have both at the same time.
Now, on the one hand, people go into neuroscience because they want to understand the mind. They want that feeling where something clicks and you suddenly get how it works. That's what drew Chirimuta to the field in the first place. That's what keeps people up late at night reading papers. But on the other hand, there's just prediction: building tools that work. If your model forecasts data accurately, maybe you don't care whether it's true in some deeper sense. So LLMs are getting unreasonably good. They are winning math olympiads. As of last week, actually, GPT 5.2 apparently discovered something new: it solved one of the problems that Terence Tao had on his website. This is insane. But does it actually understand anything? And does it matter whether it does, as long as it works? Chomsky had an amazing commentary on this a few years ago when we spoke, and I think it's still as relevant today as it was then. Suppose that I submitted an article to a physics journal saying, I've got a fantastic new theory: it accommodates all the laws of nature, the ones that are known and the ones that have yet to be discovered. And it's such an elegant theory that I can say it in two words: anything goes. Okay. That includes all the laws of nature. The ones we know. The ones we do not know yet. Everything. What's the problem? The problem is they're not going to accept the paper, because when you have a theory, there are two kinds of questions you have to ask. Why are things this way? Why are things not that way? If you don't get the second question, you've done nothing. GPT has done nothing. Classic Chomsky. So maybe theories are overrated; maybe prediction is enough. But Chirimuta worries about that trade-off. When you give up on understanding, you don't know when your tools will break. You're stuck with black boxes; they work until they don't, and you won't see it coming when they don't.
I spoke with philosopher Anna Ciaunica about this recently, and she had a beautiful way of describing it. Suppose you want to climb a mountain and you arrive at the top. What's the argument for saying that it's only when you're on the top of the mountain that you know what climbing the mountain is? You cannot really arrive at the top of the mountain if you don't take the first step. Every single step matters; the first step is as important as the last one. Actually, we are more conscious when we take the first steps in climbing the mountain than when we are on top with all these full-blown capacities, and sometimes we shoot ourselves in the foot. And of course, I brought this up when I debated Mike Israetel. The biggest misconception in all of AI, what all of the folks in San Francisco believe in, is this philosophical idea called functionalism: that we're walking up the mountain, and when we get to the top we have all of these abstract capabilities, like being able to reason or play chess. But that disregards that the path you took walking up the mountain is very important. And not only the path: the physical instantiation, the stuff the mountain is made out of. So Mike's view is that if something produces intelligent outputs, why does the substrate matter? Silicon, neurons, it doesn't make any difference; it's all information processing. Needless to say, he pushed back hard. You can climb mountains, you can touch stuff, but you never truly, embodied, experience anything if you push on that philosophical button hard enough, because you can always abstract out: these are just neural network pings from groups of neurons. And so you don't truly, deeply know anything, in some kind of weird philosophical way, because it's just neural network calculus all the way down. You know, you climb the mountain. That's cool. A helicopter can climb a mountain much better than you.
But it does not have the ability to reason abstractly and plan and predict things at all. So it's possible that what you can do, how you can function, isn't the whole story. Or maybe, if that's wrong, we should just start using helicopters. So individual minds are limited. But what about collective minds? What about humanity as a whole? We've built this incredible thing over centuries, right? Libraries, universities, Wikipedia: an expanding store of knowledge that no single person could ever hold. Doesn't that escape our individual limitations? There's this dream of universal knowledge, accessible anywhere, perspective-free. There is a tacit and implicit idea there that knowledge is something that a thing can have, while my view is that knowledge is a much more collective phenomenon. It's also not something you can put in something like a book. In my opinion, the book doesn't have knowledge. The book is an archival record of some ideas that I was able to put together in a nice structure, but you cannot have a conversation with the book. Knowledge can only go to work when it's embodied. You cannot throw a bunch of engineering manuals and cement into a gorge and expect to get a bridge, because the books don't have knowledge. Teams have knowledge. Organizations have knowledge. Yes, knowledge is social. Communities accomplish what individuals can't. But collective knowledge is still knowledge from somewhere. This matters: it's shaped by particular questions, particular tools, and particular blind spots. I think one of the interesting things about this phenomenon, not only of LLMs but of the internet as the supposed repository of all human knowledge, is that it goes along with this idea that knowledge doesn't have to be perspectival. It doesn't have to be of a place, of a community. It can float free of the situation in which the knowledge was acquired.
That's kind of the aspiration of these ideas of a universal repository of knowledge. But what this perspectivalist position points us to is that knowledge is inherently of a place, of a community. We acquire knowledge not by being completely open-minded to everything that's possible to know, but by narrowing our view. Discounting possibilities is actually what allows you to pursue a line of inquiry and pin down some information about, say, the natural world, which is humanly achievable. So the contrast I'm trying to make is between a view which says that knowledge is perspectival, inherently from a human point of view, which means that it's inherently finite; we cannot aspire to this universal, free-floating knowledge, because as finite human beings we can only achieve knowledge of the world through recognizing our limitations. And then this notion that you can have non-perspectival knowledge, like everything on the internet, all of the different possible perspectives blended together, and that this somehow gives us a god's-eye view. LLMs aspire to be this every-person voice, but it's precisely because they don't have a particular socialization into a finite community that they're not reliable, that we can't pin them down to what would be a kind of honest, trustworthy perspective. So Chirimuta has this idea that she calls haptic realism. Most of the philosophy of science treats knowledge like vision: you stand back and you observe reality from a distance. She thinks it's more like touch. On the vision picture, we just look around and absorb how things are. Our knowledge is entirely objective, almost a God's-eye view on reality.
But if you think that scientific knowledge in particular is more touch-like, you can't ignore the fact that we run into things; we have to pick things up, engage with them, ultimately change them in order to acquire knowledge of them. So you cannot discount the fact that we're meddling with things in the process of bringing about our knowledge. Neuroscientists are more than passive observers of brains. They poke them, they prod them, they stimulate them, they model them. And in doing that, they change what they find. The patterns that emerge are real, but they're also partially created by the process of investigation itself. Take the free energy principle. It takes all the messiness of biological cognition and reduces it to one imperative: minimize free energy. Everything else supposedly follows from that. Now, Simplicius loves this. Finally, the simple truth, the one principle to explain it all. But Ignorantio says, wait a minute. The math is elegant. The framework is unified. But does that mean it's captured what brains actually are? Or did we just build another beautiful simplification and forget that it was a simplification? So Chirimuta said to me that we should ask different questions. Not: is this true? But: what does this help us do? What does this light up? What does it leave in the darkness? And the other thing, of course, is that we are finite biological creatures. There are limits to our cognition. Chomsky spoke about this fascinating concept of a cognitive horizon when we chatted with him. If we are organic creatures, we're going to be like other organic creatures in that there are bounds to our cognitive capacities. So, for example, a rat can be trained to run pretty complicated mazes, but it can't be trained to learn a prime number maze: turn right at every prime number. It just doesn't have the concept, and no matter how much training you do, you're not going to get anywhere.
Well, I suspect there are reasons to suppose we're like rats. We have capacities. We have a nature. We have a structure. They yield an extensive range of things that we can do, but they probably impose limits. And I think we could even make some guesses about what these limits are. So maybe our best theories bump up against the walls of our cognition, our cognitive horizon. And maybe that's fine; maybe even knowledge of where the walls are is useful in and of itself. Science makes things simple, and that's not a flaw, right? Without simplification, we'd have nothing. You can't study everything at once. But simplification has risks. You forget your model is a model, you mistake elegance for truth, and you think you've found solid ground when really you're just building another floor. So look at Opus 4.5, right? Foundation models today are artifacts of staggering complexity. We've trained them on everything humans have ever written. We treat their outputs like they came from somewhere authoritative, somewhere outside of us, somewhere that knows. But the knowing was ours all along, just compressed, refracted, reflected back to us from the silicon. Whether that reflection captures the actual thing is a question we're barely starting to ask. You can use powerful frameworks like the free energy principle, but remember they're frameworks, right? They're tools for building. They're not the final word. So the brain is not a hydraulic pump. It's not a computer. It's not a telephone network. It's probably not a free energy minimizer either, at least not in some literal way. What the brain actually is, we will only ever catch glimpses of, through our limited instruments and theories. And that's okay, because that's what it means to be finite. So Chirimuta had this amazing example from Greek mythology: Proteus. If you could pin him down, he'd have to answer your question correctly.
But if you let go and let him get away, he would shapeshift and shapeshift. Nature is like that. You can pin it down, you can ask questions, but it's always perspectival. As soon as you let go, there's always a myriad of other perspectives from which reality can be interpreted. Carl Friston's woodlice were doing something very similar, right? Slow down in the sun, move faster in the shade. But Friston isn't a woodlouse, and neither are you.
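The woodlouse behaviour the episode ends on is a kinesis, and it's simple enough to simulate. A toy sketch, with a made-up one-dimensional light field and hypothetical names of my own, not anything from Friston's work: each animal modulates only its speed by local light, its direction stays random, and aggregation falls out with no prediction or representation anywhere.

```python
import random

def light(x):
    """Hypothetical 1-D light field: brightest at x = 0, full shade beyond |x| = 10."""
    return max(0.0, 1.0 - abs(x) / 10.0)

def step(x, rng):
    """Kinesis as described in the episode: slow down in the sun,
    speed up in the shade. Direction is random; only speed depends on light."""
    speed = 1.0 - light(x)              # bright -> slow, shade -> fast
    x = x + rng.choice([-1.0, 1.0]) * speed
    return max(-15.0, min(15.0, x))     # keep the walk in a bounded world

rng = random.Random(0)
woodlice = [rng.uniform(-15, 15) for _ in range(200)]
for _ in range(500):
    woodlice = [step(x, rng) for x in woodlice]

# Dawdling where it's bright tends to make the animals pile up in the light,
# even though no individual represents, predicts, or intends anything.
in_sun = sum(1 for x in woodlice if light(x) > 0.5) / len(woodlice)
print(f"fraction in bright light after 500 steps: {in_sun:.2f}")
```

The design point is that the behaviour looks goal-directed from the outside while the rule itself is purely local, which is exactly why Friston's caveat, that he isn't a woodlouse and neither are you, matters.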