Abstraction & Idealization: AI's Plato Problem [Mazviita Chirimuuta]
54 min
Jan 23, 2026
Summary
Philosopher Mazviita Chirimuuta discusses how abstraction and idealization in science, particularly in neuroscience and AI, can lead researchers astray by oversimplifying complex biological systems. She argues against the Platonic assumption that mathematical models reveal deeper truths, proposing instead a 'haptic realism' grounded in human finitude and embodied interaction with the world.
Insights
- Abstraction in science is often justified by Platonic assumptions that mathematical representations access deeper reality, but this overlooks how human cognitive limitations drive simplification choices
- The brain-as-computer metaphor, while useful for focused research, has been ontologized into a claim about what brains fundamentally are, obscuring the importance of biological implementation and embodied interaction
- Scientific knowledge emerges through active engagement with nature (haptic realism), not passive observation, meaning researchers inevitably shape what they discover through their investigative choices
- Historical case studies like reflex theory show how oversimplification can dominate science for decades before alternative frameworks emerge, suggesting current computational approaches may face similar limitations
- AI systems lack the embodied finitude, distal responsiveness, and meaningful engagement with environmental challenges that characterize biological cognition and understanding
Trends
- Philosophical critique of mechanistic reductionism gaining traction as AI systems reveal limitations of purely computational models
- Growing recognition that biological implementation details (biochemistry, embodiment, energy constraints) cannot be dismissed as irrelevant to cognition
- Shift toward pluralistic views of science acknowledging contingency in research paths and opportunity costs of narrow theoretical commitments
- Increased scrutiny of Platonic assumptions underlying AI research (geometric deep learning, kaleidoscope hypothesis) as foundational but unexamined premises
- Emerging concern about technology infrastructure's immaterial presentation masking real physical and environmental costs
- Developmental psychology findings raising ethical questions about reduced face-to-face interaction in digitally-mediated childhoods
- Constructivist epistemology gaining philosophical ground as alternative to scientific realism in explaining knowledge production
- Heidegger's critique of technology as culmination of Western metaphysical tradition becoming more relevant to AI ethics discussions
Topics
- Abstraction and Idealization in Science
- Philosophy of Neuroscience
- Computational Theory of Mind
- Brain-as-Computer Metaphor
- Platonic Forms and Mathematical Realism
- Haptic Realism and Embodied Cognition
- Reflex Theory as Historical Case Study
- Cybernetics and Mechanistic Biology
- McCulloch-Pitts Neural Networks
- Biological Implementation and Embodiment
- AI Understanding and Semantics
- Intentionality and Agency
- Heidegger's Critique of Technology
- Human Finitude and Knowledge Limits
- Digital Divide and Information Society
Companies
Facebook
Mentioned as example of diffuse digital entity under GDPR regulations and social media's controlling influence on hum...
People
Mazviita Chirimuuta
Philosopher of neuroscience and author of 'The Brain Abstracted' discussing abstraction, idealization, and AI's Plato...
François Chollet
AI researcher proposing the 'kaleidoscope hypothesis' that the universe is written in code, exemplifying Platonic thi...
René Descartes
17th-century philosopher whose mechanistic view of biological systems influenced centuries of reductionist science
Charles Sherrington
Physiologist who developed reflex theory while acknowledging it was an idealization that didn't exist in real life
Ivan Pavlov
Developer of reflex conditioning theory that dominated neuroscience for decades before being superseded
Warren McCulloch
Co-author of 1943 landmark paper interpreting neurons as logic gates, founding neural networks as computational concept
Walter Pitts
Co-author of 1943 landmark paper interpreting neurons as logic gates, founding neural networks as computational concept
John Searle
Philosopher who argued computation lacks causal powers and challenged strong AI through the Chinese Room argument
Daniel Dennett
Philosopher whose three stances framework (physical, design, intentional) influenced discussion of explanatory levels
Immanuel Kant
Philosopher whose transcendental idealism and emphasis on human finitude informed Chirimuuta's haptic realism
Martin Heidegger
Philosopher whose critique of technology as culmination of Western metaphysics frames AI as instantiation of transcen...
Hilary Putnam
Philosopher whose 'rock argument' challenges computational theory of mind by showing any physical system can implemen...
Hasok Chang
Cambridge philosopher of science proposing 'realism for realistic people' and pluralist approach to scientific inquiry
Luciano Floridi
Philosopher cited on digital entities and GDPR regulations treating individuals as diffuse information entities
David Chalmers
Philosopher interviewed about virtual worlds and whether digital experiences constitute real reality
Quotes
"The results you get in the lab can be well established and fine. There's nothing wrong with those data, but there's more of a problem of generalizing from what you learn in the lab to outside of the lab with neuroscience."
Mazviita Chirimuuta
"It's not an argument that AI is impossible so much as why does it seem so possible, so inevitable to people?"
Mazviita Chirimuuta
"When you're saying that some of the apparent irregularity in the data is irrelevant, that's your decision as a scientist. It's not relevant to you at the moment, but it could be relevant to someone else."
Mazviita Chirimuuta
"Nature is sort of inexhaustibly complex. There's all kinds of patterns and things going on there. It can be pinned down and we can get true answers. But when we sort of release our grip it will carry on shape-shifting."
Mazviita Chirimuuta
"We cannot claim to have that God's eye neutral view on things; it would be a mistake about knowledge and about ourselves as knowers to think that that is an aspiration that makes sense for us."
Mazviita Chirimuuta
Full Transcript
What should we say as philosophers about the relationship between neuroscience and philosophy of mind? So how much of our ideas about how the mind works can we read off from the results that neuroscience is telling us? The results you get in the lab can be well established and fine. There's nothing wrong with those data, but there's more of a problem of generalizing from what you learn in the lab to outside of the lab with neuroscience. For cognition in the real world, it's precisely all of that complexity and all of that interactivity that is really important to how, for example, animals are able to negotiate their environment. It's not an argument that AI is impossible so much as why does it seem so possible, so inevitable to people? If you look at the history of the development of the life sciences and of psychology, there are certain shifts towards a much more mechanistic understanding of both what life is and what the mind is, which are very congenial to thinking that whatever is going on in animals like us, in terms of the processes which lead to cognition, they're just mechanisms anyway. So why couldn't you put them into an actual machine and have that actual machine do what we do? Yes. But anyway, Mazviita, welcome to MLST. It's amazing to have you here. Thanks so much for having me along. So you wrote this book, The Brain Abstracted. It's an amazing book. Folks at home should definitely buy this book. It's really, really good. Tell me about this book. It was quite a few years in the making. I think officially I started writing it maybe in 2018, and it came out in 2024, but it was really based on ideas that I'd been working on since maybe 2014, when I started publishing some philosophy of science papers about computational explanation in neuroscience. And then, going back beyond that, some of my own experiences when I was doing training in neuroscience on the visual system.
And I was using computational models of the era before there was deep learning or anything that fancy, and thinking about what it really means to understand the brain through this lens of computation: by saying that we have models which not only simulate the brain, the way we use computers for biological simulation or weather simulation and so forth, but actually allegedly duplicate the function of cells in the brain. That is the additional claim made about computational modelling when it's applied to the brain as this unique structure, which is not only a biological organ, but also a kind of computer itself. The arc of your book is that we have this problem with simplification, because as scientists, we want to build legible theories about how the world works. A lot of philosophy of science in recent years has picked up this topic of abstraction and idealisation. So abstraction is quite a general word which can just mean ignoring details which are there in concrete real-life situations. It would be familiar to you from doing Newtonian problems in physics, where your teacher tells you, well, there's always friction in real life, but we'll pretend that the friction isn't there. So you're leaving out a detail which is known to be there in the concrete system. Idealization means attributing properties to the system that you're modeling in science which are known to be false. So for example, in genetics modeling, the assumption is made of infinite populations. These kinds of idealizations often make the calculations more tractable, but of course there's no such thing as an infinite population in real life.
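The frictionless-projectile abstraction mentioned above can be made concrete with a short sketch. All numbers here are illustrative choices, not from the discussion: the idealized closed-form range simply leaves drag out, while a crude simulation that keeps a linear drag term gives a noticeably different answer.

```python
import math

g, v0, angle = 9.81, 30.0, math.radians(45)

# Idealized model: closed-form projectile range with friction abstracted away.
ideal_range = v0 ** 2 * math.sin(2 * angle) / g

def range_with_drag(k, dt=1e-4):
    """'Concrete' model: Euler integration with a linear drag force k*v.
    k is an invented drag coefficient for illustration."""
    x, y = 0.0, 0.0
    vx, vy = v0 * math.cos(angle), v0 * math.sin(angle)
    while y >= 0.0:  # integrate until the projectile returns to the ground
        vx -= k * vx * dt
        vy -= (g + k * vy) * dt
        x += vx * dt
        y += vy * dt
    return x

print(f"no drag: {ideal_range:.1f} m, with drag: {range_with_drag(0.1):.1f} m")
```

The idealization buys a one-line formula; the cost is that the number it produces is knowably wrong for any real, air-filled world.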
In some ways an abstraction is also always a false representation, always an idealization, so sometimes the difference between the two can be subtle. How I put this in the book is that an idealization points us to the thought that when we have a scientific representation, we're presenting something which is cleaner and better than the thing in real life. When we talk about someone being idealistic, it's like they have a view of how things should be, and unfortunately reality does not live up to that. So idealization in science is often to do with representing things mathematically in a way which is cleaner and neater than could be possible in real life. And on abstraction, you said in your book that there's the lofty philosophical version of abstraction, which is up in the heavens of Plato, I think you said, or even Galileo: this idea that these natural forms exist which are disconnected entirely from the spatial and temporal realms. And then there's the more deflationary view of abstraction, which is simply that we just ignore details. Now, I'm speaking with my good friend François Chollet again tomorrow. He's releasing the new version of the ARC challenge. And I think he does have this, and many AI researchers do: they have this Platonistic idea. He calls it the kaleidoscope hypothesis, which is that the universe basically is written in code, and what we see is like a kaleidoscope where all of the rules of the universe just get composed together in different ways, and all we need to do as AI researchers is kind of decompose back into the rules. What could possibly go wrong?
So I watched some of the videos with François. I found it really fascinating, precisely this kaleidoscope hypothesis, because seeing that as a philosopher, I thought: that's Plato. Because François precisely says, we have the world of appearance, it's complicated, it looks intractable, it's messy, but underlying that, real reality is neat, mathematical, decomposable. This is precisely the contrast between the world of forms, the world of being, eternal stable truth, and the world of becoming, of appearance: messy, flowing, complicated reality. And so it goes back thousands of years in philosophy. It's really interesting that this is an assumption not only that AI researchers often make, but that runs through science as a kind of justification for the pursuit of mathematical representations even when they depart from known facts about the concrete physical systems in reality: the idea that the mathematical representation is getting you more to the truth, the underlying truth of how things are. As opposed to what I call the down-to-earth view of what abstraction and mathematical representation are: that they're something we do because of our cognitive limitations. So instead of thinking that the abstraction gets you to the higher level of reality, just saying that we do abstraction because we're finite knowers. There's a limit to how much complexity any individual person or group of people can actually encompass in their modelling strategies or representations, and actually it's only by pretending things are more simple than they actually are that we get some traction. So that's the down-to-earth, mundane explanation of why abstraction is so much used in science. Yeah, it's so pervasive in the deep learning world. I mean, I also interviewed the folks who pioneered this geometric deep learning blueprint. And that's the same idea, basically: that the world is described with geometry.
And all we need to do is imbue these geometrical inductive priors into deep learning models. And then, essentially by reducing the degrees of freedom to ones which are aligned with how the universe works, we get where we want to go. I think the notion of patterns, real patterns, to invoke Dennett's term, is a helpful one. So one thing that you could say is going on here is that, yes, there's lots of complexity there in the natural world. It's apparent in the data. But if you just denoise the data a bit, underlying there, there's a real pattern, and we don't have to be Platonist and weird about it. There's just regularity that is sometimes masked by noise. That doesn't seem too metaphysically problematic. But one of the questions that I pose as a challenge to that very moderate view, and I say this frequently in the book, is: when you're saying that some of the apparent irregularity in the data is irrelevant, that's your decision as a scientist. It's not relevant to you at the moment, but it could be relevant to someone else. It could be really important to how that system works in the natural world for reasons that you're not aware of. So when we classify the signal versus noise in our data sets, we shouldn't ignore the fact that those are decisions that we're bringing to bear on our investigation. We shouldn't assume that we're just reading off the signal, the real pattern that is there in reality, and that there aren't very many other significant real patterns there.
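The point that the signal/noise split is the scientist's decision can be sketched numerically. The data and window sizes below are invented for illustration: the same noisy series, "denoised" under two different judgments about which timescales count as noise, yields two different "underlying patterns".

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 200)
# A slow trend, plus a faster oscillation, plus measurement noise.
y = (np.sin(2 * np.pi * x)
     + 0.2 * np.sin(12 * np.pi * x)
     + 0.05 * rng.normal(size=x.size))

def smooth(series, window):
    """Moving-average 'denoising'; the window size encodes a judgment
    about which timescales are signal and which are noise."""
    kernel = np.ones(window) / window
    return np.convolve(series, kernel, mode="same")

pattern_a = smooth(y, 41)  # treats the fast oscillation as noise
pattern_b = smooth(y, 5)   # keeps the fast oscillation as signal

# The two recovered 'real patterns' disagree substantially.
print("max disagreement:", float(np.max(np.abs(pattern_a - pattern_b))))
```

Neither curve is "the" pattern read off from nature; each is an artifact of a researcher's choice of what to discard.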
And to the extent that we're probably also kind of creating pattern through the very denoising process that we bring about. Interesting. I mean, physicists aren't under any illusion; they know that Newton is an idealization. And just to contrast: you cited reflex theory, and of course Pavlov and the dogs, folks at home will know about that. Newton is still around, we still use that, but we don't use reflex theory anymore. Yeah. So this is a chapter that I present in the book as a case study of how oversimplification can get scientists on the wrong track. With the history of science, hindsight is 20/20. We're looking at a theory about how the brain worked which was really dominant for a few decades at the end of the 19th century and the beginning of the 20th century. It's familiar to us through Pavlov, with this idea that we can explain behaviour in terms of reflexes which get conditioned, and there's obviously learning involved with that. The most ambitious version of the theory said that all of the functions in the brain are basically versions of reflex arcs, so sensory-motor loops. A very prestigious and well-regarded physiologist like Charles Sherrington was heavily invested in the reflex theory. But he admitted in his book, The Integrative Action of the Nervous System, that this notion of a simple reflex is an idealization. It probably doesn't exist in real life. And yet this is the key that's going to unlock neurophysiology. It's going to help us decompose and make sense of all of these different interactions that could be observed experimentally. So what seemed to be going on there is that scientists were taking that age-old method, which is that it's a good heuristic to seek parsimonious explanations, to use Occam's razor.
And the obvious thing to do was: let's assume there's this thing, a simple reflex. And then they ran with it way too far, never actually being able to explain the amount of data that they had initially thought they would with it. And it's not clear how long the reflex theory could have gone on for if it hadn't been for the computational theory coming in during the Second World War era and basically providing an alternative explanatory framework, which was also quite neat and, I would say, provides its own kind of idealization toolbox. So a very popular thing in cognitive science is to say, well, if something behaves the same way as a cognizing human, for example, then maybe we might draw inferences that it has consciousness and many other cognitive faculties. But there's always this kind of almost ignorance of the actual mechanism of the object of study? I think behaviourism has a bad name, but it's not that discontinuous with a lot of thinking which is normal and still acceptable in science, which is to treat things as black boxes. This is precisely what the behaviourists said: the mind is opaque. It's hidden within the walls of someone else's individual subjectivity. As scientists, all we know are the inputs and the outputs, and we'll just track those. And a version of what you just said is: well, if the inputs and the outputs, the behavior of this system, are looking like what we know to be a conscious system elsewhere, let's just treat them all as the same class of objects, given that the only available information is the inputs and outputs. I think that kind of reasoning can be fine in certain contexts, but it's a philosophical leap to say that the access that we have to our own thoughts, and the presence or absence of subjectivity that we're aware of with other people, is irrelevant to making these decisions or judgments about what other kinds of systems can have consciousness.
So I think it's much too quick to just go behaviorist and say, well, there's no relevant difference between X and Y, even if one is a person and one is a machine, just because we can say that there are some similarities in inputs and outputs. I think, if I remember correctly, at one point you drew an imaginary kind of graph where you said on one axis we have scientific realism, which is where our scientific theories actually represent things in the world. And then we have empiricism, which is the idea that the facts we receive tell us something about the world. And then there's this more interesting axis, which I think you're very inspired by, which is this kind of constructivist idea. Can you explain that? Yeah. So the constructivist path, which is different from the scientific realist and empiricist ones, really runs with the idea that we are active makers of knowledge. It shouldn't be confused with the kind of constructivism that we have in some more extreme branches of sociology of knowledge, which say that all scientific theories are social constructs and not constrained by phenomena that have been observed in nature. So I'm not saying that scientific theories are merely constructed in the way that poems are works of imagination and so forth. But the idea is that there's this interactivity between humans, groups of scientists, their plans as epistemic agents going out into the world with an agenda to find stuff out about certain phenomena in order to achieve certain goals, often technological, applied-science goals. And there's some pushback from the things in nature themselves that they're investigating. But the idea is that knowledge is always the product of this interactivity. So we cannot discount that there is a human framing side to this.
We can't go along with this idea that a scientific theory is just reading off the source code of the universe as if the human way of conceptualizing those phenomena had no bearing on the theory as it ultimately turns out. But we also can't discount that the theory that arises is constrained by how things happen to be that is worked out through that process of experimental interaction. You said, I think you were inspired by Immanuel Kant. So he had this transcendental idealism, and please bring that in. But that somewhat informed your own view, which is this haptic realism. Can you introduce that? Yeah, so that's saying that knowledge comes about through this process of interaction. So this notion of haptic realism is emphasizing that it's through engagement. So haptics being like the sense of touch. The contrast here is with an ideal of knowledge, which is based on this idea that we can know things in a disengaged way. If you think of vision as the archetype of knowledge, what happens when we look around our surroundings and use sight as a source of knowledge, we can get into this mindset where it seems like we do not have to interact with things in order to know them. We can just kind of absorb information passively and then because we're not bringing about our representations in a kind of active way, it would seem to us, and I'm not saying this is how vision works, but it's a kind of conceit that often comes about if you use this very visual model for knowing. 
John Dewey called it the spectator theory of knowledge, so this is a clear predecessor for what I'm saying here: the idea that we just look around, we absorb how things are, and our knowledge is entirely objective, almost like a god's-eye view on reality. But if you think that scientific knowledge in particular is more touch-like, you can't ignore the fact that we run into things; we have to pick things up, engage with them, ultimately change them, in order for us to acquire knowledge of them. So you cannot discount the fact that we're meddling with things in the process of bringing about our knowledge. And another dimension of this haptic metaphor is that our hands are not only a sensory organ, but also the means by which we manipulate things. So manipulation means precisely working with the hands. And I think that really captures, if you like, the double face of scientific models. They're both means of acquisition of knowledge, in the way that hands are also sensory organs: we find things out about the world through the sense of touch. But they're also means for changing things, for doing things. We speak about this in evolution: what would happen if you could just rerun evolution? What would happen if we could have a parallel universe and the entire enterprise of science just ran again? And what you're alluding to is that it wouldn't be completely different. Maybe there are some guardrails, but it is actually quite divergent. Yeah. There's certainly contingency in the history of science: where people start out, cultural factors which prompt people to ask certain kinds of questions and not others.
So a view quite similar to what I say about haptic realism in the book is by Hasok Chang, who's a professor of philosophy of science at Cambridge. He has a view which he calls realism for realistic people; that's the title of his new book. And he is an out-and-out pluralist about science. He says that because there is contingency in the history of science, there are paths not taken, but we could maximise the acquisition of knowledge if we explored as many of those different paths as possible. That isn't something I say in the book myself, because I think there are also reasons why it makes sense to narrow views and paths of inquiry, and we don't have unlimited resources. But yeah, sure, there are opportunity costs that come along with taking a certain path, and there are others not pursued. In the enterprise of science, there might be a trope or an idealization that we're getting closer to the truth. Yeah. And do you think that's the case? Do you think that as the enterprise of science progresses we're getting closer to the truth, or could we be in cul-de-sacs, basins of attraction and so on? That's very much associated with scientific realism. So there's this view that there is one way nature is, and science succeeds insofar as scientific representations conform to this one way that nature is. My view takes very seriously the idea that nature could just be inexhaustibly complex. So if you ever pin it down in one representation, there are also inexhaustibly many different ways it could be represented, many varieties of ways that you can investigate it, and ways that any one representation is lacking. So there's a kind of inherent lack of convergence that that picture brings about. One of the ways of expressing this is to say that nature is Protean.
There's this mythological character called Proteus, who was a shapeshifter, this mythological being that lived in the sea, and he would keep changing his shape. But if you could pin him down, he would answer you a question and tell you the truth. The thing was, you had to pin him down. And I think this is a really nice illustration of what's going on with our interactions with nature as scientists. Nature is inexhaustibly complex. There's all kinds of patterns and things going on there. It can be pinned down, and we can get true answers. But when we release our grip, it will carry on shape-shifting, and there's lots of other ways that it could be. So yeah, one final theory? I'm not so convinced by that. This is very much a view that I think makes sense if the basis for your theory as a philosopher of science is really the biological sciences, which is where I'm coming from. If you're a physicist, it seems much more natural to think that there is one fundamental set of laws of the universe which is going to be nailed down once and for all and could explain everything. Biology just throws up lots and lots of examples, particularities. It tends to be considered less intellectually satisfying in comparison with physics: oh, you can just spend all your time in biology doing stamp collecting, because there's this thing and this thing and this thing; how do you tie it all together theoretically? But on the other hand, I think that if you take that particularity and that shifting quality of biological phenomena seriously, then actually it just forces you to think about knowledge differently. In your book, you spoke about a trajectory, I suppose, of possible failures of simplification. We just spoke about reflex theory. But one of the big things is this metaphor of cognition, or the brain, as being a kind of computer. And you spoke about the early roots of this, from reflex theory to cybernetics and computationalism.
Can you sketch that out? So the connecting thread is really this idea that cognition is something machine-like. Going back to the 17th century, this is a view associated with the philosopher, physicist and physiologist René Descartes, who said we need to go along with this idea that everything that happens in the body is explicable in terms of quite simple mechanistic forces. And this idea that biological systems are machine-like has obviously been hugely influential in the different branches of science. The reflex theory was one instance of that. People often spoke of machine-like reflexes and made comparisons with Newtonian decomposition. With the computational framework, you have an actual machine, a digital or analog computer, which could be compared with brain processes. Cybernetics is an interesting stage along the way, because they were building little devices which had some degree of autonomy, made up of, and supposed to be emulating, versions of negative and positive feedback as hypothesized to occur in the body. But I would say at the core of this research idea is that if what's going on in the body is ultimately a mechanistic process, then by redoing the engineering with a non-living system which captures some of the core operating principles that we find in biology, we can use that device as a map, as a resource, to then reinterpret what's going on in the biological system. You saw that, for example, with McCulloch and Pitts in their 1943 landmark paper, interpreting neuronal cells as logic gates and then saying, yes, you could build a computer out of neural nets. This is the origin of neural nets as we know them today. This is the birth of the idea. But then they used that notion that neurons are logic gates to interpret what's going on in physiology.
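The McCulloch-Pitts idea of the neuron as a logic gate is easy to state in code. A minimal sketch follows; the weights and thresholds are illustrative choices, not values from the 1943 paper.

```python
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts unit: fire (1) iff the weighted sum of binary
    inputs reaches the threshold, otherwise stay silent (0)."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def AND(a, b):
    return mp_neuron([a, b], weights=[1, 1], threshold=2)

def OR(a, b):
    return mp_neuron([a, b], weights=[1, 1], threshold=1)

def NOT(a):
    # Inhibition modeled as a negative weight with threshold 0.
    return mp_neuron([a], weights=[-1], threshold=0)

def XOR(a, b):
    # Composing threshold units yields any Boolean function.
    return AND(OR(a, b), NOT(AND(a, b)))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", XOR(a, b))
```

Note what the abstraction discards: everything about the cell except its all-or-nothing firing, which is exactly the "license to ignore" being described.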
So what I describe, and it's in chapter four of the book, is a back-and-forth process of making devices which are somewhat inspired by biology and then using those as the lens through which to review biology again. And I say that the advantage and the appeal of this process is that it gives you a kind of license to ignore so many things that are happening in the brain and nervous system which are just not shared with non-living machines: all of the biochemistry, all of the ways that neural tissue is shaped by vasculature and interacts with the immune system, and all of that sort of background stuff. If you're a theoretical computational neuroscientist, you can say: I'm only interested in the computational properties of the brain; I don't need to care about all of that messy biological detail. So it gives you a kind of tunnel vision, and a scientist can be fine having tunnel vision. You can't take in everything at once all of the time. But what I take issue with is the ontologization of that: saying that because computational neuroscience is this successful field of inquiry, we know now that the brain is a computer. I think that is not an inference we should make. Yeah, I mean, I don't think connectionists typically argue that; they would say it's a different mechanism. Yeah. But they think that there's some kind of functional equivalence. And that's the thing, because so many folks in AI at the moment are interested in biologically plausible architectures. So what if, like the cyberneticists did, we have more autonomy, diversity, agency and so on? They fundamentally think, I guess they make the assumption, that the world is a machine, and if we replicate it with sufficient fidelity, then we can reproduce the behavior. To what extent are the mechanisms of the brain inherently bound up with the fact that the implementation here is in living tissue?
So I think there's really tantalizing evidence about the extent to which brain processes and signaling between neurons, not just the specialized electrical signaling that neurons do, but biochemical signaling, are outgrowths of signaling that's happening elsewhere in the body all of the time. So we shouldn't think of neuronal cells as distinctively cognitive as opposed to the other cells in the body; they're extensions of the ways that cells signal anyway. And if neuronal function is so much just a manifestation of what's happening with metabolizing cells anyway, that makes it more of a stretch to say that a machine that's not living could have the same functionality. Yeah, I mean, no one's trying to build artificial neural networks with living cells. No, no. But there is an analogy in neural networks: there's this thing called the lottery ticket hypothesis, which spoke about pruning. What the research found is that you train this big, dense neural network, because you need the density for stochastic gradient descent, for training tractability, and after it's trained you can prune away 90% of the connections and it still works the same way. And maybe evolution and our biological instantiation are the same thing. We've been through this billion-plus-year training process, and all of these things that we think are important, like the instantiation, the autopoiesis, the agency and so on, maybe those are vestigial. Yeah. And we can now just kind of snip, snip, snip and create this abstract version. It seems reasonable. What do you think?
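The pruning idea behind the lottery ticket hypothesis can be illustrated with a toy sketch. This is not the original experiment: a tiny closed-form least-squares fit stands in for a trained network, and all numbers are invented. The point carried over is the same, though: fit densely, then zero out the 90% of weights with smallest magnitude and check that performance survives.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
true_w = np.zeros(50)
true_w[:5] = [3.0, -2.0, 1.5, 4.0, -1.0]  # sparse ground truth
y = X @ true_w + 0.01 * rng.normal(size=200)

# "Train" a dense linear model (closed-form least squares stands in for SGD).
w_dense = np.linalg.lstsq(X, y, rcond=None)[0]

# Prune: keep only the largest 10% of weights by magnitude.
k = int(0.1 * w_dense.size)
keep = np.argsort(np.abs(w_dense))[-k:]
mask = np.zeros_like(w_dense)
mask[keep] = 1.0
w_sparse = w_dense * mask

dense_err = np.mean((X @ w_dense - y) ** 2)
sparse_err = np.mean((X @ w_sparse - y) ** 2)
print(f"dense MSE {dense_err:.5f} vs 90%-pruned MSE {sparse_err:.5f}")
```

When the underlying structure really is sparse, as here by construction, the dense fit was a scaffold: most of its parameters can be snipped away with almost no loss.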
That it's just vestigial? I mean, I think we really need to take seriously the economy that is there in biological information processing. We do a lot more with a very limited energy budget running our brains every day, whereas artificial neural networks are really, really expensive to run. It doesn't strike me that biological cognition could get away with being that wasteful; surely, to keep things from blowing up in terms of the energy being consumed for information processing biologically, there must have been a fair amount of pruning on the way. I kind of think of agency as being a bit of a spectrum. You can think of it in a deflationary sense as being this autonomous thing that's the cause of its own actions, and then in the deep philosophical sense there's this intentionality and you can control the future and whatnot. And it rather speaks to the physicist's view: you have this light cone and you have all these micro-interactions. Yeah. And of course, that's beyond our cognitive horizons. And so we develop ideas of representations, where we can have these distal relationships between things that are in our mind and things that are far away in time and space. And I suppose, you know, you think of that as being another form of idealization. But the fascinating thing, though, when I think about agency, I think about it in terms of apparent causal disconnectedness. So we are agents because you have consistent beliefs and ideas and you're not just an impulse-response machine; your actions aren't determined entirely by the situation. You're a person. And I kind of perceive that as a kind of causal disconnectedness. Yeah, I agree. I mean, in that chapter of the book, I set out, and I say this, to be very metaphysically neutral about what representation is, what intentionality is. But at the same time, that's not exactly what I directly wrote in the book.
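The energy-budget contrast above can be made concrete with a rough back-of-envelope calculation. Both numbers are widely cited outside estimates, not figures from this conversation: roughly 20 W for the human brain's continuous power draw, and roughly 1,287 MWh for GPT-3's training run (the Patterson et al., 2021 estimate):

```python
# Back-of-envelope comparison; both constants are approximate
# published estimates, assumed for illustration only.
BRAIN_WATTS = 20               # common estimate of the brain's power draw
GPT3_TRAINING_KWH = 1_287_000  # ~1,287 MWh estimate for GPT-3 training

# A brain running continuously for a year uses about 175 kWh.
brain_kwh_per_year = BRAIN_WATTS * 24 * 365 / 1000

# How many "brain-years" of energy does one training run cost?
brain_years = GPT3_TRAINING_KWH / brain_kwh_per_year
print(f"GPT-3's training run ~= {brain_years:,.0f} brain-years of energy")
```

On these assumptions a single training run consumes on the order of several thousand brain-years of energy, which is the sense in which biological cognition "could not get away with being that wasteful".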
I think I agree with you that there is something very important about connecting the notion of agency and intelligence with this thing of being responsive to what is actually very distal. It could be distal in time and space; it could be distal because it happened a long time ago. But this is what biological memory is: things that happened to you when you were a baby affect how you are now. Physical systems, non-living physical systems, are much more constrained in what they do, and I don't mean that in the sense of action, but just what they do, what happens to them, by what's proximal to them. The distal is always screened off by the proximal, if that makes sense. Whereas for you, all of these things that happened in the past could be as relevant as anything that happens in the room right now, or your ideas about the future would be relevant to what you're saying right now. So yeah, this notion of being sensitive to what's not immediately driving you in your surroundings, I think that's a really important thing to latch onto in delineating at least the class of systems that we want to call cognitive from ones that we would say are merely physical, not intelligent in any important sense of the word. Very cool. So we're trying to, I suppose, partition the world into logical units that we can understand, and agents are a great version of that. And Daniel Dennett, of course, had the three stances: the physical stance, the design stance and the intentional stance, as a way of building useful explanations of how things behave. But you said that you didn't quite agree with that, because for Dennett it's a hierarchy, which means the intentional stance perhaps has precedence over the other ones? Oh, so actually it's kind of the reverse: it's as if the physical stance has an ontological priority, like that's what's really there, but it's
useful to use the design and the intentional stance. Very interesting. But you said that for you, you don't really have that prioritization; you're open-minded. Yeah, so that's part of the metaphysical neutrality that I set out with in the chapter: to say, okay, let's not go in with the assumption that low-level physical causes are the primary causes of everything. It's a way of, if you like, taking intentional phenomena at face value, intentional in the sense of bearing representations. And one of my criticisms in that chapter is of this agenda which is there in philosophy of mind, to say that if representation is real, we need to be able to tell a physical story about how it comes about. And I'm just saying, why? Why go along with that project? And this is actually a Dennettian view, you might say: if talking about representations and intentionality is useful within the sciences, why not just take that at face value, and not say that it needs to be established by making it coherent with some causal story about what's going on in terms of non-intentional physical interactions. So that was the position there. With Putnam's rock, right, you know, he said that you can take any open physical system and you can configure it in such a way as to have the same types of information processing. And then why wouldn't that have all of the cognitive properties? When you're making a claim that the brain is a computer and that that explains cognition, what grounds have you got for saying that any arbitrary physical system is actually implementing a computation just from looking at its physical dynamics? If it's purely a question of mapping the physical dynamics to a computational formalism, then any physical system can afford a mapping of that sort, whether it's a rock, whether it's the sofa, whether it's my stomach as opposed to my brain.
And so that's a challenge to the computational theory of mind: it assumes that brains implement computations just because we can model them computationally, but we can model all kinds of things computationally, so what makes brains special? So what about this idea of whether computation itself has causal powers? I don't think it does. Computation itself is mathematical formalism; it is a mathematical structure. Things that have causal powers are concrete physical systems. So I just think they're different kinds of things. So Searle famously argued that the reason why we can't build strong AI is that computation doesn't have causal powers; what does have causal powers are the machines that actually implement the computation in silico. But couldn't you sort of say, well, there is still a causal graph? Perhaps you would argue that computation isn't a node in that causal graph, it's just some kind of an aspect of it. Yeah, I mean, I think it just goes back to this issue that computation in and of itself is not the kind of thing that could have causal powers. I think Searle's point, and this is in The Rediscovery of the Mind, was an interesting one. It was maybe kind of subtle, and it gets lost in the wash of the AI back-and-forth and the Searle-bashing, which happens a lot.
But it was about the kinds of ways that we form explanations in the sciences. His point was that cognition, if it's anything, is part of the physical realm, the realm of causation. The assumption of the computational theory of mind, and he argues that this is very dominant within cognitive science, is that you can explain this phenomenon, which is a phenomenon of the concrete physical world, through this non-causal thing which is computation, and suddenly there's no gap that needs to be closed. And I think that's a fair point: something there needs further justification, of why, of all the things that happen in the concrete physical world that demand explanation, we reach outside the concrete realm of physical causation into computation in order to explain this one thing, cognition. Another argument he was making was, you know, that machines couldn't understand. Yeah. And of course, he was talking about things like semantics. Yeah. Do you feel that they could understand? What do I think? I think certainly there's more to human understanding than that. A thing about human cognition, and animal cognition in general, on my view, is that it's not a set of discrete modules that work separately from one another.
I think language is bound up with sensory-motor engagement, and likewise how we perceive the world is shaped by linguistic concept formation and everything like that. So the idea that you could just detach off a language faculty, have it replicated in an LLM that doesn't have the other bits of our cognition, that doesn't have embodiment, doesn't have the capacity to engage with the world, and that it could have understanding in the same way that we do, I find that implausible. Interesting. But again, you know, if we do the galaxy-brain thing and we say we can embed robots in the physical world, give them sensory-motor affordances and all the rest of it, and there are many replies to the Chinese room argument about this, like the robot reply. Yeah, yeah. You know, would they have a little bit more understanding? Yeah, I mean, that's getting more along the lines of the kind of thing that could have understanding. I think there's also more relevant stuff in the background of what it is to be a biological thing, in that some things are inherently meaningful to you and relevant to you because they're connected with the demands that the challenges of your environment place on you. Like I was saying before, being alive is a way of being always in a situation that is problematic to you. So saliency and meaning, I think, are connected to that. It's not to say that you couldn't have robots in more and more precarious, lifelike situations, and maybe, I don't know, it's not beyond the realms of possibility that then you start seeing understanding in ways that are more like that as well. Let's talk a little bit about Heidegger. He spoke a lot about our relationship with technology, and I know this is something you've been thinking about. Can you tell us about that?
So one of the things that Heidegger said, and in many ways he was a grandiose and unpleasant person, but his grandiosity kind of manifested in how he read the importance of philosophy for the modern world today. One of the things he said was that technology, and cybernetics for him, we could say AI for us today, was the culmination of a metaphysical tradition. So it's because the history of philosophy set out on the certain path that it did that we are here today with these technologies, which seem to be having such a transformative role in our lives, to the point that we feel not in control of them. Now, I'm not a philosophical determinist like that. I don't think that just because some philosophers in ancient Greece said certain things, this is why we have AI today. But I think there are some features of the contrast he draws between the philosophical tradition and his own account of what it is to be a human person that give us an interesting perspective on what is being assumed in that path towards AI. So one of the things he really insists on is human finitude. We are inherently finite, bounded knowers, and we're communities of knowers. But the philosophical tradition has encouraged a kind of leap, encouraged the idea that something about us as knowers crosses beyond the boundaries of finitude into a universal, boundless realm of knowledge. And I do think that the very idea that a non-situated, non-embodied absorber of facts, like an LLM, that kind of just sits there and sucks in all the information in the world, that that could somehow be a counterpart to how we know things as human beings.
I think that's an instantiation of this lack of acknowledgement of human finitude. Because human finitude is coming from a culture which is expert at some things but not others, and whose knowledge is grounded in a discrete set of sensory experiences that are not accessible to others: that boundedness of knowledge. To say that that is not inherent to what it is to be a knower, that a purely disembodied absorber of facts is what we are, I think that's revealing of the lack of acknowledgement of finitude which is there in the tradition that he criticised. We're kind of moving into this technologically embedded world, and it's changing our nature in some way. This tradition that Heidegger criticizes, that path from philosophy to technology, involves this aspiration to transcend embodiment, transcend materiality, to create for ourselves a leap into an almost spiritual world of pure information. And it's interesting how technology infrastructure is presented to consumers, consumers being people like me, as behind the scenes, immaterial, not really connected with real-world constraints: it's the cloud, it floats above us, there's almost no cost to it, it's weightless. That's not how technology infrastructure works, but it seems like that's what we'd like it to be. We'd like this idea that all of this information age around us today is not connected with real-world stuff, like actually building computers and shortages of resources and consumption of energy and all of those things. We like to think of it as disconnected from that in ways that it's obviously not. I agree that it's not, because clearly we live in the physical world, but there's an apparent disconnection. There was the digital divide, of course, in the 1980s, and it was very difficult even to work and get a job if you didn't know about computers.
Now even to get a driving licence, you know, it's all done online; the legal landscape is not physical anymore, it's virtual. And even with the European GDPR regulations, as Floridi says, you're a digital entity and it's diffused and so on. So, you know, with Facebook and whatnot, it certainly feels like we live our lives in the information world. Yeah, and it's becoming more and more confusing in that sense. So by living in the information world, do you mean that the information we get through devices, through this diffuse spread of technologically connected things, is more salient to us than our experience of our concrete here-and-now situation? You know, there was that Jon Ronson book, So You've Been Publicly Shamed, basically saying that there are almost more controlling pressures in the social media world than there are in our physical world. Sure. Yeah. I mean, I think all of that is possible because human beings are imaginative creatures. We live in imaginative worlds, and it's not so much that social media is always in our face.
Well, maybe it is, people looking at their screens. But our way of thinking through our own life history, and how we project and everything else, there's a whole world that we're also constructing around that. So I think we're co-creating this digital world, and it doesn't work unless we're imaginatively and emotionally invested in it. So my view goes back to this Kantian idea, which also goes back to this idea of human finitude: that there's something wrong with thinking that what you are as a knower is the kind of being that can float free of your environment and just regard it from above, taking in all the information that's there as it is by itself, without your impact on the world. And my point is that we're not those kinds of beings. We only acquire knowledge through this arduous process of interaction, which means that we cannot claim to have that God's-eye neutral view on things. It's a mistake about knowledge, and about ourselves as knowers, to think that that is an aspiration that makes sense for us. Is it possible for both things to be true at the same time?
I'm not making any weird claims that we're not actually physically situated, but it's almost like we're increasingly disconnected, in many aspects of our mental life, from our direct physical experiences. Yeah, so I think that's true in terms of what we pay attention to. I mean, people look at their phones instead of looking out the window when they're on a train, not looking at the people around them. So where our attention goes can be wherever the internet wants to place us. But the phone is still a concrete physical device with little light-emitting diodes that produce photons in our retinas, right? So again, I go back to saying it's about the imaginary around that: in our minds this isn't just a device that is showing images right now, but a portal to this other place where we can be disconnected or dissociated from where we are right now. But I think that need to live in an imaginary, to be disconnected, is probably always there, and manifests in different ways in human culture, through fantasy, mythology, all kinds of things. And we can debate whether it is imaginary or not. I interviewed Chalmers about his Reality+ book, and he was talking about whether virtual worlds are real and stuff like that. I mean, certainly with social media it is still real, and, you know, these people on Instagram are having real experiences and I'm vicariously experiencing them through my phone. So the question is whether that's a sort of truncation of our mental life or whether it's expansive. Yeah, I mean, this is something that I think becomes a question of ethics and phenomenology, the part of philosophy that really studies experience and what we draw from it for its own sake. So there's always an opportunity cost: if you're looking at phones, absorbed into one thing, then you're not absorbed in
other things. And ethically, is it right to do that? I think that's really important. We're running a big experiment on the next generation, because young children nowadays spend a lot less time looking at people's faces than they used to. Will they be socialised in a way that will allow them to lead happy lives later on? We don't know. Is that something you worry about? I do, I do, yeah. Tell me more. Well, just because it's a massive experiment. I mean, it's obvious from developmental psychology that young children are predisposed to pay attention to social interactions with the people around them, the gaze and faces and features and all of that. If they have less opportunity to make those connections, to have those experiences at a very young age, it seems like that's bound to have an effect on how they relate to people later on. I don't see how it couldn't. I mean, there's a thought experiment: I had a debate with an AI doomer. And because I'm a big externalist, I think we're embedded in these cognitive ecologies, as you do. And I gave the example: imagine that some child was brought up in a hermetically sealed chamber and you gave them a computer and the internet, and they could still learn to do lots of clever things and so on. But we're becoming a bit more like that. It's not quite a hermetically sealed chamber, but we're mediating everything through the internet and so on. And that could go one of two ways, right? As you say, it could dramatically truncate our mental life and cause lots of problems, or maybe it just won't be a big problem. I mean, there were already experiments done on monkeys in the 1950s, depriving them of maternal contact. And it didn't work out well for those monkeys.
I mean, should we be doing that experiment on children, depriving them of what is instinctively the kind of social engagement that young humans seem to need to develop normally? Amazing. Mazviita, thank you so much for joining us today. It's been a great conversation. Thank you very much, Tim. I've enjoyed it.