Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas

339 | Ned Block on Whether Consciousness Requires Biology

71 min
Jan 5, 2026
Summary

Ned Block, a leading consciousness philosopher, challenges computational functionalism and argues that consciousness may require specific biological mechanisms, not just abstract computations. The discussion explores whether AI systems could ever be truly conscious and what criteria we should use to determine consciousness in machines.

Insights
  • Computational functionalism—the view that consciousness is purely about input-output computation—is increasingly questioned by leading philosophers as insufficient to explain phenomenal consciousness
  • The distinction between access consciousness (global information availability) and phenomenal consciousness (subjective experience) is critical for understanding what makes something conscious
  • Biological substrate and mechanisms may matter for consciousness in ways that pure computation cannot capture, even if the same abstract function is computed
  • Current AI systems like large language models lack key properties associated with consciousness, including temporal experience and genuine first-person perspective development
  • The moral and practical questions about AI consciousness are becoming urgent business and policy issues, requiring philosophers to provide clearer frameworks for decision-making
Trends
  • Shift away from the Turing Test as the standard for machine consciousness toward more sophisticated criteria involving substrate, mechanism, and phenomenal properties
  • Growing corporate investment in AI safety and consciousness ethics, driven partly by liability concerns and partly by marketing incentives around companion AI
  • Increased interdisciplinary collaboration between neuroscience, philosophy, and AI research to understand consciousness mechanisms
  • Recognition that LLMs and current AI systems fail at tasks (arithmetic, chess rules) in ways that suggest their computational processes differ fundamentally from human cognition
  • Emerging focus on whether AI systems can develop a first-person perspective without explicit training on human first-person narratives as a consciousness test
  • Academic philosophy departments under pressure to address AI consciousness and animal consciousness as urgent social issues rather than purely theoretical problems
  • Debate over non-reductive physicalism and whether consciousness requires specific biological mechanisms or can be substrate-independent
  • Exploration of electrochemical (vs. purely electronic) processing as potentially crucial to consciousness, based on evolutionary evidence
Topics
  • Phenomenal Consciousness vs. Access Consciousness
  • Computational Functionalism and Its Limitations
  • AI Consciousness and Machine Sentience
  • Substrate Dependence vs. Substrate Independence
  • The Hard Problem of Consciousness
  • Biological Mechanisms in Consciousness
  • AI Safety and Machine Welfare Ethics
  • Turing Test Alternatives and Consciousness Criteria
  • Large Language Models and Artificial Intelligence
  • Neuroscience of Consciousness and Visual Perception
  • Physicalism and Consciousness Explanation
  • Evolutionary Biology and Nervous System Development
  • Qualia and Subjective Experience
  • Moral Status of Artificial Intelligence
  • Temporal Experience and Consciousness
Companies
Anthropic
AI safety company whose large language model Claude was cited as allowing opt-out from unpleasant conversations, rais...
OpenAI
Developer of GPT-3 and GPT-4 models discussed as examples of current AI limitations in arithmetic and rule-following ...
People
Ned Block
NYU philosopher and consciousness expert arguing that consciousness requires biological mechanisms beyond pure comput...
Sean Carroll
Host of Mindscape podcast and physicist discussing his evolving views on consciousness and computational functionalism
David Chalmers
NYU philosopher credited with formulating the 'hard problem' of consciousness and distinguished from Block on dualism...
Thomas Nagel
NYU philosopher known for 'what is it like to be' framework for understanding consciousness, cited as agreeing on har...
Anil Seth
Consciousness researcher and former Mindscape guest who emphasizes substrate dependence and biological mechanisms in ...
Daniel Dennett
Philosopher who debated illusionism and consciousness, accepting the illusionist label despite reservations about the...
Alan Turing
Mathematician credited with proposing the Turing Test as a criterion for machine consciousness based on behavioral ou...
Marisa Carrasco
NYU psychology colleague of Block who discovered how attention changes visual perception through receptive field migr...
Stuart Shieber
Harvard computer scientist who calculated the feasibility of a blockhead lookup-table machine passing the Turing Test
Gary Marcus
AI researcher credited with noting that large language models lack rule-based computation despite being able to state...
Steven Pinker
Cognitive scientist first to emphasize that LLMs don't use rules despite understanding them, unlike human reasoning
Phil Anderson
Physicist famous for 'more is different' paper on reductionism, cited regarding non-reductive physicalism
Martine Nida-Rümelin
Swiss philosopher who identified pseudo-normal color vision as a possible real-world case of the inverted spectrum phenomenon
Keith Frankish
Philosopher credited with coining the term 'illusionism' regarding consciousness, adopted by Dennett
Willard Van Orman Quine
Philosopher whose name inspired Dennett's paper 'Quining Qualia', which defined qualia in a way even qualia advocates reject
Quotes
"What really matters is the output of the computation going on. And this grew into a view called computational functionalism. What matters is the function of the various things going on inside, and how they are embodied in some kind of computation."
Sean Carroll, early in episode
"Phenomenal consciousness is the so-called what it's like of experience. And sometimes people say things like the redness of red, but the fundamental fact about phenomenal consciousness is no one can define it. You really kind of have to point to it."
Ned Block, mid-episode
"A computational simulation of a rainstorm, is it wet? A computational simulation of gravity doesn't produce any gravity. So the question is whether consciousness is like that."
Ned Block, mid-episode
"If you could make an AI that isn't trained on people saying things about their first-person point of view, and it nonetheless expressed a first-person point of view, that would be more convincing than what we have now."
Ned Block, late episode
"I think that Ned, well, he'll speak for himself, but I think that he's a little bit more open-minded about what the substance is that is doing the computing, but he does think that there's more to consciousness than simply computation."
Sean Carroll, early in episode
Full Transcript
Hello everyone and welcome to the Mindscape Podcast. I'm your host, Sean Carroll. One of the kinds of questions that I get a lot, whether in Ask Me Anything episodes or just more generally, is can you tell me something that you've changed your mind about? Of course I've changed my mind about lots of things. I try not to be too dogmatic or stuck, but I actually struggle to answer that question because there are some trivial examples where I changed my mind because we got better data or I got better information, right? I've changed my mind about the acceleration of the universe. I used to think it was decelerating. In 1998, we found that it's accelerating and I instantly changed my mind about that. Right now, I think the best candidate for what's causing that acceleration is the cosmological constant. I'm open to changing my mind if we get better data that says it's something dynamical rather than a constant vacuum energy. But for more vague questions, philosophical, cultural, political, aesthetic questions, I have trouble pinpointing when I changed my mind, even though I certainly did it, because my process tends to be fairly gradual and I've forgotten what it was that started me down the road of changing my mind. Maybe I did change my mind, but to me, what my opinions are now seem like they must have always been that way, even though I know that's not true. I mention all this because I think I might be in the process of changing my mind about something, not something super dramatic about my feelings about how the world works, but something nevertheless pretty important. The question of what does it mean to be conscious? In other words, what are the requirements for something to be conscious? There's a point of view towards consciousness, which we all know is a complicated subject. We've talked about it here on the podcast many times. There's plenty we don't know about consciousness.
And mostly I stick to saying we don't have to change the laws of physics in order to explain consciousness. I still think that. I'm not in any danger of changing my mind about that any time soon, although eventually, who knows. But okay, even if the world is made of physical stuff, doing physical things, when does that stuff doing some processes count as conscious? There's a point of view that really puts the emphasis on an input-output mechanism. This would go back to the Turing test with Alan Turing. Turing suggested that if you had a computer program that could have a conversation with a human and trick them into thinking that it was conscious, then it should count as conscious. What really matters, in other words, is the output of the computation going on. And this grew into a view called computational functionalism. What matters is the function of the various things going on inside, and how they are embodied in some kind of computation. You get input as a conscious creature, both from words and from vision or whatever, whatever sensory input you have, you do a computation on it, and there's an output. This is saying that what doesn't matter is the way in which the computation gets done. A pocket calculator is different than an abacus, but they both do the same calculation, even though they're made of different things, doing different processes. So I would have, not too long ago, more or less signed on to a point of view like this, but I've been pushed away from that point of view. Not because anything new has truly happened, although some people have been articulating the alternative in ways that I appreciate better; it's just that I'm learning more about what people say about consciousness and appreciating more what the different subtleties are. So Anil Seth, former Mindscape guest, and also today's guest, Ned Block, have been pushing a point of view that says computational functionalism is not up to the task.
It's not just about what you compute, it's how you compute it that really matters. Now, Anil wants to go further than that and say that it's not just how you compute it, but what is physically doing the computing? And he wants to put an emphasis on biology. I think that Ned, well, he'll speak for himself, but I think that he's a little bit more open-minded about what the substance is that is doing the computing, but he does think that there's more to consciousness than simply computation. There are other processes that matter as well. And so he has written an article recently, I'll link to it in the show notes, but it's simply entitled, Can Only Meat Machines Be Conscious? Meat is of course a casual term for what we're made of, biological organisms of that form. And despite the title, it's not mostly about substrate dependence; he's open to, you know, if different chemical reactions did the same sort of physiology, that would still count as conscious, even though there's different stuff doing the processes. But the sort of subconscious processing that is going on contributes, in his view, to our experiences, what he calls phenomenal consciousness. Ned is actually a super well-respected philosopher in the field of consciousness. I quote him in The Big Picture, mentioning his distinction between access consciousness, which is more or less what David Chalmers classifies as the easy problem, it's sort of your ability to access different pieces of information globally in your cognition, versus phenomenal consciousness, which is the feeling of experiencing something. And that is what is hard to explain. So Ned wants to argue that maybe, and he's at least very open-minded about this, these good philosophers like just suggesting possibilities we should take seriously, maybe these things that we think of as experiences of conscious states have something to do with the subconscious processes that are going on in our biological manifestation or instantiation.
And maybe therefore you could build a computer program that was arbitrarily good at tricking you, at giving all of the output that you might expect a conscious creature to give you, and nevertheless it would not qualify as what we think of as conscious. And I'm actually becoming very open to this possibility. It's not in any sense a repudiation of physicalism. Anil Seth and Ned Block and I all agree the world is made of physical stuff, doing physical things, obeying the laws of physics. But how consciousness fits into that picture is still a controversial topic that we have a lot to learn about. And of course, it's becoming superduper relevant because we're building computer programs that act a little bit conscious. At what point will we be ready to say that they truly are? I think we also all agree the point is not yet, but it might be coming, might be coming sooner rather than later. Maybe those AIs are going to get the right to vote at some point. Who knows? This is where philosophy needs to get on the stick. Figure this out. Let us know what would really truly qualify as conscious. We're not quite there yet, but hopefully the kind of framework that Ned Block lays out will help us get there. So let's go. Ned Block, welcome to the Mindscape Podcast. Oh, thanks for having me on your podcast. I figured we could start very, very broad. You know, the audience is broad. They come with a lot of different levels of knowledge. So tell me, what is consciousness? So I like to distinguish between a couple of different ways people use the term; the distinction I like most is between phenomenal consciousness and access consciousness. Phenomenal consciousness is the so-called what it's like of experience. And sometimes people say things like the redness of red, but the fundamental fact about phenomenal consciousness is no one can define it. You really kind of have to point to it.
And that gives rise to a lot of misunderstanding in the area, where many people think, I don't know what you're talking about. Sometimes I like to explain it by talking about famous conundra, like the inverted spectrum. Maybe the things we both call red look to you the way things we both call green look to me. That look is the phenomenal consciousness. And then, you know, there's the famous Mary thought experiment. She's raised in a black and white room. She goes out of the room and sees blue for the first time. And she learns what it's like to see blue. Right. You know, there are all these thought experiments. Each of them has its own problems. But I think they do something to explain what I'm talking about when I talk about phenomenal consciousness. And then there's access consciousness. Yeah. Access consciousness is some kind of global availability of information. That is what, you know, I wrote a paper in the mid 90s making a big deal out of this distinction. And I had two opposite reviewers, one of whom said, you know, the access consciousness makes a lot of sense to me. But what is this phenomenal thing? I don't know what you're talking about. And then the other reviewer said the opposite, as you might imagine. And you know, I still get both of those responses. And then there's a third thing, which in many people's mind is even more important, which is what is sometimes expressed by saying a conscious mental state is a state I'm conscious of myself as being in. And this idea is that you need another state about the conscious state to make it a conscious state. So the idea is that there's a thing, transitive consciousness, sometimes called consciousness of. And what it is for a state to be conscious is for there to be another state that amounts to consciousness of the first state.
And that's a big view, many people hold this, you know, it descends from Armstrong, and before him Locke. And many people feel like this is the main idea. So I think there are those three different strands to consciousness. I'm always talking about phenomenal consciousness. Yeah. How, I mean, some people very casually will think of consciousness as being related to self-awareness. I'm conscious of myself being in a certain state. Is that it? Yeah, that's the third thing that I just mentioned. Yeah. Okay. Then I understand access consciousness less well than I thought I did, because I thought that was self-awareness. I think you're onto something there, which is that self-awareness kind of thing is a form of access consciousness. It's a pretty sophisticated form. You know, I think young infants are conscious and they don't have that, and animals are conscious and they don't have that. So I think you can be conscious without that self-awareness thing. Okay. But just to be as clear as we can possibly be, let's do the access consciousness thing one more time. Is the word access doing what I think it's doing? It's what I have access to in my brain. Yeah. I put it more in terms of global, with regard to perceptual information, global availability of that perceptual information. It's very linked to the global workspace kind of idea, where, you know, when you're conscious of something, it's available to all your cognitive mechanisms: decision-making, thinking, planning, problem-solving, reporting. Okay. Good. Actually, that does help. And then the phenomenal consciousness, of course, this is where the sexy action is, talking about, you know, how we're going to understand it. Yeah. Let's actually dig into the inverted spectrum a bit, because I never talk about it here on the podcast. Okay. So what does that mean? Well, as I said, it is the hypothesis that things that we agree are red look to you the way things we agree are green look to me.
And the idea is that things look different to different people. Now, you know, to have a thoroughgoing version of it, you need to say something about the other colors, and the simplest form is a red-green inversion that keeps blue and yellow the same. And then there are all kinds of technical issues about this, but, you know, they don't really matter. The key thing is that things might look different to different people. And I have to say that although this has a history where many thought it made no sense at all, it was pointed out in an article maybe 25 years ago by the Swiss philosopher whose name I'm forgetting that there is this phenomenon known as pseudo normal color vision. Okay. And it may be an actual case of this. Now to understand pseudo normal color vision, you need to know a couple of things. First of all, that there are three kinds of cones, the so-called short, medium, and long wave cones. The long and medium are mainly responsible for red and green. And there are pigments in the cones that are responsible for the signals that come out of them. The pigments are two pigments, chlorolabe and erythrolabe. And a very common form of red-green color blindness is one in which, genetically caused, one of those, I forget which one, chlorolabe, let's say, is in both cones instead of erythrolabe in one and chlorolabe in the other. And that's a genetic effect, and those people have trouble telling red from green. Then there's another form of genetic red-green color blindness, and that's where both cones have erythrolabe in them. So the first kind, they both have chlorolabe; the second kind, they both have erythrolabe in them. And it can be shown that if you had both genetic defects at once, then you would have the chlorolabe and erythrolabe reversed. Okay.
And making certain assumptions about how the pigments are connected to the opponent processes in color vision, you can, if you make those assumptions, deduce that such people would have reversed red-green experience. The assumptions are ones we have no way of testing. And there are many little technical issues connected with it, but oh, I forgot to say that we can calculate that there are actual people, and not a small number either, who have both genetic defects, so-called genetic defects, at once. So there probably are people in the population who have this, I mean, there definitely are human beings with this pseudo normal color vision. And they may have genuinely red-green inverted spectra. But we have not identified any and invited them to our philosophy conferences. No, because the thing about being pseudo normal is you're going to act pretty much like anybody else. But if you knew someone was pseudo normal, you know, there would no doubt be many differences between their color vision and that of so-called normal people. But the thing is, one important fact about color vision is that it varies hugely from person to person. And even, you know, between genders, between ages. You know, for example, my color vision is much yellower than yours, because I'm way older. And the lens yellows, except one of my lenses, I had a cataract replacement and it isn't yellow. So you can tell the difference. Yeah. So, but I guess the philosophy issue that I don't reject, but I fret about, is when you say words like what you experience as red is what I experience as green. But, yeah, that sort of begs the question of whether there is an objective thing about what I experience. Yeah, there is a crucial emendation, which is that if this kind of thing is widespread, or even if it could be widespread, it would be kind of wrong to call the experience normal people have when they see red the experience of red.
Because the word then won't, the word will go with the external colors, not with the internal experience. Yeah. So what's objective here, the most obviously objective thing is the external colors. But I also think that the phenomenology is objective too, it's just that we don't know how to measure it. Yeah. So, that's maybe what you're saying. It is. I mean, part of me, and again, I'm not tied to this in any sense, I'm eager to understand better. But part of me wants to say, look, there's a story that I can tell about photons and electrons and neurons. Yeah. And that's a definite story. And you want to say that there's an additional story, which is what we're experiencing. Yeah. And I'm like, yeah, I don't know, maybe. Yeah. Oh, well, okay. So, that puts you in a certain category. Right. You have somewhat illusionist leanings. I hate that word. Illusionist. Yeah. Yeah. Illusionism, in the way the term is usually used, refers to a view that, as it's sometimes put, there are conscious properties or conscious states, but they're not what you think. Right. So, yeah. There are all kinds of views about consciousness, including illusionism. Right. And, yeah. So, I don't know quite what to say about illusionism. It just seems to me to be plain from personal experience. I have actually had discussions with people about this. Many people are just puzzled by illusionism, and they don't understand how somebody could be an illusionist, but maybe that's something wrong with us. Or maybe people's experiences are just different. Or maybe you're a zombie. Maybe. Yeah. We're going to get into that. But I think these are all on the table, yes. So, in fact, actually, that'll be helpful right now. It's Martine. It's Martine Nida-Rümelin. She's the one who first published the paper on pseudo normal color vision. Good. Sorry, I'm glad you remembered, so she gets her credit.
So, what you're saying, clearly, about phenomenal consciousness does resonate with things that we have talked about on the podcast, about Tom Nagel's idea about what it is like to be things, David Chalmers's idea of a hard problem. How do you put your own perspective in context of those folks, all of whom are at NYU? So, you've cornered the market on all of these people. Exactly. Yeah. So, I'm pretty much in agreement with them about the basic facts. I mean, you know, Tom has a very hard to understand and interestingly different metaphysics of it, but I don't agree with it. And actually, Dave's metaphysics, I don't agree with either. Dave is basically a dualist. Right. I'm not a dualist. I'm a physicalist. Yeah. So, we agree on the phenomenon, that there really is consciousness. We're not illusionists. We agree that there's something; Dave and I certainly, I don't know about Tom, but I suspect Tom too, agree that there is a hard problem. It's really hard. It's different from what Dave calls the easy problems. So I'm on board with all of that stuff. And I guess, I don't want to go down a rabbit hole here, but the illusionist label, I think I am an illusionist, but I would never call myself that in public. And I argued with Dan Dennett about it. I think it's a terrible label. Like, I'm not an illusionist about tables and chairs. I think they exist. And I think that conscious experiences exist in exactly that way. Does that make me an illusionist? Yeah. I guess illusionism is just as hard to define as many other things in the consciousness sphere. And being an ism, it shares the vagueness of all isms. Sure. There will be different people. I should say, by the way, that illusionism is not Dennett's term. That term is due to Keith Frankish, but Dennett agreed with it. He used it, right? I think he did. He did. He accepted it.
But you know, he said something in the preface, I think, to his book from the 90s, Consciousness Explained, that there was what he regarded as a question of tactics. Yeah. Do you say there is no such thing as consciousness, or do you say there is such a thing, but it isn't what you think? You know, in his famous paper, 'Quining Qualia', he says, yes, there's consciousness, but there's no qualia. And then he gives a definition of qualia that just nobody believes. Yeah. Yeah. It's an endemic thing. Even the proponents of qualia don't believe that definition of qualia. And in fact, he quotes Sydney Shoemaker in the article saying, I'm an advocate of qualia, but I don't accept your definition of qualia. By the way, what is your definition of qualia? Well, I don't really have a definition. I think you can only point to it. So it's the experience, it's the what it's like. It's the phenomenal consciousness. Yeah. And can these different kinds of consciousness, let's stick with access and phenomenal consciousness, the sort of easy problem part and the hard problem part, do they come apart? Could an organism or a being have phenomenal consciousness but not access consciousness, or vice versa? Oh, yeah. In fact, I think that's an open empirical possibility for a lot of simple organisms that don't have much, don't have very good cognition, but may have phenomenal consciousness. And then, you know, machines, we may have machines that have access consciousness but no phenomenal consciousness. So that's another real, real possibility. And there are limited versions of these things, or possible limited versions, in ordinary phenomena involving humans. Yeah. Okay. Good. So, phenomenal consciousness is the target of the hard problem, and the hard problem is hard. Do you think we're making progress?
Do you think that we're learning, either on the philosophy side or the neuroscience side, about phenomenal consciousness? Well, look, the most basic thing that we've learned, I think, is making the right distinctions. So, yeah. Thank you, man. However, I do think that the neuroscience side has made a little smidgen of progress. And I think it sometimes can be rather vague whether that progress is on the easy problems or the hard problem. You know, the thought I have is that, as Pat Churchland once said, and I've felt for a long time, the way to make progress on the hard problem is probably by focusing on the easy problems. Yeah. And if you get enough info on the easy problems, maybe some idea will happen with regard to the hard problem. But I think there's no doubt that if we are to solve the hard problem, it will take some real breakthrough. It's a very physicist way of thinking that we should do the easy things first and then maybe the hard things will take care of themselves. Yeah. Right. Yeah. So I'll give you an example of something that makes a little tiny bit of progress. My colleague, Marisa Carrasco, in the psych department at NYU, discovered that attention slightly changes the way things look: it makes them look higher in contrast, it makes a moving thing look like it's going slightly faster. The most significant one is it makes something look slightly bigger. And she and her colleagues found a neurological explanation of that, which is that neurons in the visual cortex have what are called receptive fields. In early vision, the receptive fields are very small, and in later vision, they're very big. But in early vision, the receptive fields are small, and they're aimed at an area of space. A receptive field is the area of space that that neuron processes information from.
And what happens when you attend to a certain area of space is that the receptive fields that surround it migrate to cover the point of attention. And then you have more receptive fields trained on that space than you did before. And that hypothesis, called the labeled line hypothesis, leads to it looking slightly bigger. So that's interesting, you know, it doesn't solve the hard problem, and maybe it is just an easy problem, but it's about the way things look. It explains why things look the way they do. It's an odd phenomenon you certainly wouldn't have predicted. And you know, I think it's pretty cool, actually. And the hope I have is that as we make progress, maybe we will set the stage for a real breakthrough. That's great, because it leads right into my next query about what could an explanatory account of phenomenal consciousness possibly look like. Like, might it just end up being a better understanding of what some neurons are doing? Are you going to require something juicier than that? Well, I would regard the one I just mentioned as juicier than that, because it's really about the way things look. And that is juicier than just finding out what some neurons do. It's juicier in that it has to do with phenomenal consciousness, the way things look. So, you know, I'm cheered by results like that. And, you know, in vision there are lots and lots of really interesting results that have neuroscience explanations. You know, your colleagues like Chaz Firestone, E.J. Green, and Ian Phillips, and Steven Gross, those people are all very involved in such things. So, yeah, I'm hopeful for the future, although I won't live to see it. Well, this is heartening to me. So, you're happy to imagine that when we do get an explanation of phenomenal consciousness, it might take the form, when I'm experiencing what it is like to be something.
Here is what is happening in the brain. Well, here's the thing about solving the hard problem. Yeah. I don't think you can say in advance what the form of the explanation will be, but the form you mentioned is certainly a candidate. Okay. But that's all I'm getting at. I mean, I completely agree, we can't say what the solution is going to be. Yeah. But I'm just wondering what would be acceptable as a solution. Yeah. Well, it's really hard to say without hearing the solution. You know, I'm sure, as you are well aware from physics, a lot of solutions to problems have raised more issues than they've solved, and more puzzling issues than they've solved. And maybe this will do the same. That's absolutely true. That's true. But in physics, you know, if I don't have a theory for a certain physical phenomenon, I kind of know what such theories look like, right? Like, oh, there's some space of states, there are some dynamical equations. For the hard problem, I just don't know. I mean, we're just much more at sea on consciousness than probably we ever have been about physics. Yeah. No, that's completely right. It's just, everything is so puzzling, right? Which leads very nicely into my next query. You've already mentioned this a little bit, but you're a physicalist. So you're not tempted, at least at the moment, by saying that the hard problem is so hard, we need to expand our ontology of the world. I don't see that expanding your ontology is going to help. Yeah. You know, you can be a dualist and it doesn't give you any, it just introduces some religion or mystery and doesn't solve the hard problem at all. And the same with panpsychism. You know, it just replaces the hard problem with the so-called combination problem. You know, everything is a little bit conscious, but how the hell do you put them together into a conscious human? Yeah.
I don't see any solution there. It just seems like... I mean, my colleagues who believe this stuff have reasons for believing it, and there are some really interesting arguments about it. But solving the hard problem? I don't think so. But you've talked about non-reductive physicalism as something that might come into play here. So for other reasons, I was just thinking about reductionism today. In fact, I think you mentioned Phil Anderson in the paper that we're going to talk about. Oh, do I? Phil Anderson. Maybe. I might be mixing up different things. But Phil Anderson wrote this famous paper called "More Is Different." And people never read the paper; they just quote the title. In the paper, he's super affirmative about reductionism. He loves reductionism. Yeah. He just doesn't think it's useful when you care about the higher levels. But you want to be anti-reductionist a little bit. Actually, I want to be reductionist. Okay. Of course, it depends what you mean by reductionism. Of course. So the usual meaning of non-reductive physicalism, or the one that most people have in mind, is usually a form of functionalism. It's that the right descriptions of these phenomena are at a higher, functionalist level, where you specify the organization of a system, and that allows for many different realizations, or implementations, of it. Yes. That's not my version. Okay. My version is, you know, the meat-centric version. Good. Here I'm referring to this famous short story called "They're Made Out of Meat." Have you read that? No, I haven't. Oh, it's very funny. I'll send you a link to it. I always do it in my undergraduate classes.
It's from the point of view of a group of machines, you know, silicon beings, machines made by other machines that were themselves made by other machines, and the ultimate origin is lost in history. And they go around the universe discovering other conscious beings. And then they discover us, and they say things like, they're made of meat. And then one of them says, well, but how do they think? And the answer is, they just use meat. I have to read the story. And then, how do they communicate? They flap their meat. And they think it's so awful that there are these meat conscious beings that they'd better just suppress the information and forget about them altogether. It does have its downsides, being made of meat. I guess it does. Yes, yes, it does. But okay, you've introduced the word function, or functionalism. So that is a perspective you want to sort of push back against a little bit. Yeah. I think what's happened in current thinking about artificial intelligence is that a lot of very influential people are computational functionalists. They think that certain computations are necessary and sufficient for consciousness. And I have long been a doubter of this. In 1997, I published a paper called "Biology versus Computation in the Study of Consciousness." It wasn't a real paper; it was a BBS reply. But I've been pushing this line for quite a while. But things are getting hot now, and it really matters now. It used to be that it was kind of just philosophers, but now it's a lot of other people. And as you may know, there's a big issue of AI safety, which concerns the extent to which machines have feelings and can be damaged, and whether their welfare has to be taken into account. And one of the large language models, Claude, made by Anthropic, is now allowed to opt out of a conversation if it's too unpleasant. Hmm. Better safe than sorry, I guess. Yeah. And I think that's the idea.
So, sorry, is there a difference, a distinction, between just functionalism and computational functionalism? Yes. Functionalism is a much broader doctrine that encompasses a lot of other kinds of functions. Philosophers talk about this under the heading of functional roles; they're basically causal maps of states and how they affect one another, that kind of thing. But the version that has really become important now is computational functionalism. Because the question in people's minds is, do the computations that these AIs, or some future AIs, make determine conscious states? Or, as I think, is there some kind of biological necessary condition? Right. I should say, when I say I think that, I mean I think that is equally plausible. I don't mean I have really strong evidence for it. So the sales pitch, I guess, for computational functionalism would be: look, the brain clearly computes some things. You can, at some level, think of how you communicate with a human being as: it gets some input, it gives some output. Clearly, there is a computation underlying that. Yes. And the computational functionalist view is, that's what it is. There's nothing really extra going on. Yeah. So what I think is, if you want the machine to be conscious, you may need a certain kind of implementation of those computations. Yeah. You have a nice distinction in the paper between roles versus realizers. And I know that it's hard to explain jargon words in an audio format, but why don't you give that a try? Yeah. So the idea is that the role is the abstract organization of the system, what causes what. When you're talking about a computational system, it's the computations the thing does. And the realizer is what does those computations.
So, famously, a computational process, just take a simple one like adding or multiplying, can be realized in an electronic system, or an electrical system with relays and switches, or a mechanical system with gears and pulleys. And these are different realizers that implement the computations in different ways. And of course, we think that they're just doing the same computation in a different way. But the question is, do the computations characteristic of consciousness require some particular form of realization? And how close is this to the discussion about substrate independence or dependence? Very close. Now, I am not a substrate dependence person. My fellow traveler Anil Seth thinks the substrate, the material, the stuff, is what's important. I focus on the mechanisms. Right. So for example, our neural firing, our neurons, involve certain ions: calcium, potassium, chloride, et cetera. And maybe neurons could be made out of a different substrate with different ions. Maybe there's some silicon way of doing it. I don't know; I'm not a chemist, so I don't know what could be put together using a different substrate. But from my ignorant point of view, maybe there could be something with similar mechanisms but a different substrate. And I think it's the mechanisms that count, not the substrate. Good. Okay. So an abacus adds two numbers together using a different process than an electronic computer does, and you're going to say that that difference might matter. But whether the abacus is made of iron or wood does not matter to you. Yeah. I should say, by the way, it's interesting that people who do mental abacus calculations, without the abacus, make different errors than people doing base-10 computations. Because abacuses, I don't know quite how they work, but they involve fives; they make mistakes involving fives. Okay. Interesting.
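[Editor's note: the role/realizer distinction discussed here can be made concrete with a small sketch. Below, two routines compute the same addition function (the same role) through entirely different internal mechanisms (different realizers). This is an illustrative analogy added for this transcript, not an example from Block's paper.]

```python
# Same "role" (the addition function), two different "realizers"
# (mechanisms). A hypothetical illustration of the role/realizer point.

def add_arithmetic(a: int, b: int) -> int:
    # Realizer 1: the language's built-in arithmetic.
    return a + b

def add_bitwise(a: int, b: int) -> int:
    # Realizer 2: ripple carries using bit operations, no '+' at all.
    # (Works for non-negative integers.)
    while b:
        carry = a & b   # positions where both bits are 1
        a = a ^ b       # sum without carries
        b = carry << 1  # propagate carries one position left
    return a

# Identical input-output behavior, entirely different internal process.
assert all(add_arithmetic(x, y) == add_bitwise(x, y)
           for x in range(50) for y in range(50))
```

From the outside (input-output), the two are indistinguishable; Block's question is whether, for consciousness, the inside difference is the one that matters.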
So abacus imagery differs from decimal imagery, where the key mistakes are often made in carrying. Yeah, interesting. I mean, it reminds me of the fact that when we built these wonderful large language models, they do an amazingly good job of mimicking human conversation, but they became bad at arithmetic, even though they're computers, right? Because clearly the processes are different. Oh, yeah. In fact, they're still bad at arithmetic. You know, GPT-3 was only about, I think, 20% accurate on three-digit multiplication. The original report showed the accuracy levels. And they've gotten more and more accurate. And now the models that are hooked up to the internet, or hooked up to a calculator, try to send these calculations to some other kind of computing device, but they don't do it very well, and they still make mistakes. One that was circulating on the internet for a while was, I think, GPT-4 on a certain kind of calculation, where the machine regarded 5.11 as larger than 5.9, because 11 is bigger than 9, that kind of similarity. And they also make mistakes with the rules of chess. It's a fairly commonly reported thing that when you get them into unusual chessboard situations, the pieces will make jumps that are not legal. And the basic point there is one that Gary Marcus made years and years ago; actually, Steve Pinker was the first person to really make this point, which is, they don't have rules. They can read the rules and they can tell you the rules, but their fundamental mode of computation is not based on the rules. So whereas they know the rules of arithmetic, they don't use them. So I think that talking with Anil Seth over the last year shook me out of my dogmatic computational functionalist slumbers. I would have been all in favor of it, but I think I understand the problems with it better now.
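[Editor's note: the 5.11-versus-5.9 mistake can be framed as an ambiguity between two reading conventions, sketched below. This framing is added for illustration and is not a claim about the actual mechanism inside any model.]

```python
from decimal import Decimal

# Decimal reading: 5.11 < 5.9, since 0.11 < 0.90.
decimal_says_smaller = Decimal("5.11") < Decimal("5.9")

# "Version number" reading: split on the dot and compare the parts as
# integers, so 11 > 9 makes "5.11" come out larger (as in v5.11 > v5.9).
def version_greater(a: str, b: str) -> bool:
    return [int(p) for p in a.split(".")] > [int(p) for p in b.split(".")]

print(decimal_says_smaller)            # True: as decimals, 5.11 < 5.9
print(version_greater("5.11", "5.9"))  # True: 11 > 9 under this reading
```

The same string pair yields opposite answers depending on which convention is applied, which is one way a system without an explicit rule for decimal comparison could land on the wrong one.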
But I still have a little bit of a worry. I'm trying to make sure that I can reject computational functionalism but still be a physicalist. And the worry would be, can't I just be expansive and think of literally every process as some kind of computation? And if so, any physical thing is computational. Yeah. So you're thinking of the physical Church-Turing thesis that you mentioned in an email. Yeah. So maybe I should say something about the Church-Turing thesis? Yes, please. Okay. So the Church-Turing thesis is the thesis that a mechanically computable function is Turing computable, and vice versa. The vice versa is pretty obvious. Well, I should say first, mechanical computability is an intuitive notion. There can be no proof of the Church-Turing thesis; that's why it's a thesis. So, if it's Turing computable, then it's mechanical: that is just obvious, because everything a Turing machine does is a mechanical thing. You know, it writes on a tape, it moves the tape, it erases from the tape, that kind of stuff. But the other direction, if it's mechanical, then it's Turing computable, is something that has raised concern among people about whether it's really true. My view is that it is true by stipulation, because whenever anybody comes up with a counterexample, everybody says, oh, that's not really a counterexample, and then they refine the notion of computation to rule it out. The most notable case is a computation mentioned actually by Turing: computation by a truly random process like atomic decay. You can set that up, with a Geiger counter or something, so that it computes an infinite series of numbers. So in some sense, it's computing a function. And that can't be computed by a Turing machine. So what people say is, oh, that's not mechanical in the sense in which we meant it,
or it's not a computation in the sense in which we meant it, because it's not reproducible. So they refine the notion of computation to rule it out. And a similar point could be made about a sense of computation, which is not random, in which a river computes the rate of erosion of its banks. Exactly, right. And that's not Turing computable either. Or at least, arguably it's not Turing computable; you could approximate it. And this gets into a whole issue about the real numbers and the real-numbered values of things. So anyway, I regard the Church-Turing thesis as really a stipulation. Certainly the claim that if it's mechanical, then it's Turing computable is a bit of a stipulation. But it's a reasonable stipulation. So, okay, the processes that we're talking about are mechanical, and for the functions they are computing, ignoring real-numbered values, which probably you can't ignore, because all the things in the brain are particles, there can be a Turing machine that computes that function. But the problem I would raise is whether that computation is itself analog, in the following sense. You can regard a mental process as analog if a computation of that process need not preserve the mental properties. Okay. So reasoning is arguably a non-analog process, because if you have a machine that does the computations involved in reasoning, there's at least a good case that it is reasoning. But consciousness is another matter. Maybe consciousness is like gravity or a rainstorm. A computational simulation of a rainstorm, is it wet? A computational simulation of gravity doesn't produce any gravity. I mean, the objects that do the computation do have mass, and so they will themselves have gravity, but the computation doesn't itself determine any gravitational attraction. So the question is whether consciousness is like that.
And I think we just don't know. So the Church-Turing thesis doesn't help us. I've heard many times this example that simulating a hurricane does not make you wet, or does not create wetness. But there's a worry that some linguistic sleight of hand is going on here. If part of the simulation was a person in the simulated thunderstorm, that person would say that they were getting wet, if it were an accurate simulation. So I shouldn't have used the word simulation. What it's really about is whether an implementation of that computation has to preserve the mental properties. So I'm really not talking about a simulation; I'm talking about an implementation of it in computational hardware. And for many processes you might name, for example freezing, freezing is a process by which a liquid forms a crystalline solid, you can implement those computations without any actual freezing. So many, but not most, processes are like that. So that's the way I should really have defined it. I shouldn't have used the word simulation anyway. So it's not your fault. Yeah. So the real issue is about implementation. Yeah. Okay, good. I mean, is part of your thesis, well, when I was trying to understand what Anil Seth was saying, and it vibed with me a little bit when I was reading your paper, there's a black-box view of how one deals with human beings, like I said, input and then output. But then there's also a wet, green, biological view, where there are a lot of processes going on in the meantime. And I want to say that what you are saying is that those processes may matter to what we think of as conscious experience. Yeah, exactly. And so if that's true, and I'm actually, you know, the scales have fallen from my eyes in the last year, like I said, and I'm very open to this possibility. No, you're not, because you're an illusionist. No, no, see, this is good.
Now we're getting somewhere, because I think that conscious experiences are emergent, higher-level ways of talking about things that happen at a purely physical level. But I'm open to the possibility that what the emergence refers to depends on subconscious, sub-computational things, not just the input and the output. I see. So there's a version of this that you can accept, even as an illusionist. Exactly, because my whole thing is entropy and the arrow of time. And I really feel bad that I don't remember who said it, but someone on the internet said, LLMs do not experience the passage of time. And I think that's crucially important, because our cells do experience the passage of time. And we shouldn't be shocked if that had something to do with conscious experience. Yeah, that's a good point. Conscious experience is intrinsically temporal. Right. Yeah, I agree with that. So what do we know about it? I mean, you make a point in the paper distinguishing between electrochemical processes versus merely electronic processes. Yeah. So that's really just a speculation, but it is kind of remarkable the way our brains work, translating chemical signals into electrical signals and then back to chemical signals: neurotransmitters between cells, electrical within the cell. And as I mentioned in the paper, the early theories of the synapse were electrical. And it turned out, it looks like there's some evidence that a purely electrical nervous system didn't do very well. And I mention a number of different ways in which electrochemical processing might be superior, and conclude that it's kind of a mystery. But we're lucky we have it, because maybe that's what led to consciousness. So I should say also... yeah, sorry, go ahead.
Sorry, I was just going to say, when you say didn't do very well, you mean as a matter of evolutionary history? Yeah, that's the so-called ctenophores, which at least at one stage had a purely electrical nervous system, and they didn't lead to more complex animals. They were kind of an evolutionary dead end, whereas the electrochemical pathway did generate much more complex animals, including us. So is that a well-known fact among evolutionary biologists, or is it something that you've noticed? Okay, so as I said in the paper, up to 2022, it was thought that sponges were the first animals, or widely thought. Then in 2023, there were some results suggesting that actually ctenophores were the first, and that they differ from all subsequent animals in an important chromosomal way. Now I'm told, I haven't read the paper yet, that there's some new evidence that maybe goes against some of that. So I think the situation is in a state of flux, and I don't know if they all would accept that. But even if the ctenophores weren't first, I'm guessing it would probably be pretty widely agreed that they were a dead end. Okay, but I don't know for sure. Scientists, always changing their minds about things because of new evidence. Yeah. We philosophers never do that. So I'm wondering how accurate you would count it if I said that part of your lesson, or your message, is that consciousness can depend in interesting ways on things that are not conscious. Absolutely. Yeah, exactly. And, what do I want to say, can we have subconscious experiences? Is that a thing? Well, I have advocated that there might be experiences in an isolated part of the cortex that are completely cut off from access. And that's a more remote speculation than my other speculations, but I think it's conceivable. Many people feel it's not conceivable. Okay.
You do mention the possibility that there could be something like a repressed memory that can affect our feelings, that could affect our phenomenal consciousness. Yeah, well, repressed feelings. So there I was trying to illustrate the possibility of phenomenal consciousness without access consciousness from a Freudian point of view. And I pointed out that the Freudian picture of repression is repression in the access sense. So the case I imagined is, you had a very terrible experience and you repress it, but it had very vivid phenomenal qualities to it. And maybe when it's repressed, it still has those phenomenal qualities; it's just that you don't access them. So that was the thought. And the Freudians regarded that as unconscious because of the lack of access. Good. Yeah. Okay. They would have benefited from this distinction, definitely. Okay. Every month for the podcast I do, we have an Ask Me Anything episode, where questions pour in and I try to answer them. And I got a really good one last time. Oh my, that sounds really tough. I get to pick which ones I answer; that's what makes it palatable. So one of them was, how well have we done on deciding the criteria for saying when an AI will be conscious? Yeah. And I said, I think, not that well. I agree with that. We're really at first base. I think the most promising suggestion is one that a lot of people have made, which is that if you could make an AI that isn't trained on people saying things about their first-person point of view, and it nonetheless expressed a first-person point of view, that would be more convincing than what we have now. Way more convincing. So the idea is, these LLMs are trained on a ton of human-generated stuff that involves all kinds of first-person point of view. You know, you give it a number of books, and you're going to have a lot of first-person point of view.
Maybe if you trained it just on, I don't know, the Encyclopedia Britannica or something. Yeah. And if you successfully eliminated everything in the training data that had a first-person point of view, and it nonetheless developed one, that would have some convincing force, I think. Well, that's interesting. I mean, certainly the realizers are still wildly different. Yeah, the realizers are wildly different, but that would be some indication that maybe those different realizers do realize some kind of experience, or at least a first-person point of view. Good. So there's a distinction here: the classic Turing test really only focused on inputs and outputs, right? Like, if the outputs sounded human, then we were going to call it conscious. And by now, I think we're mostly sophisticated enough to say that's not quite enough. Nobody talks about the Turing test anymore. Right. It's really funny. You know, in just three years, or four years I guess, it's completely disappeared. I mean, I think my Blockhead example conclusively refuted the Turing test. What does that mean? You know that example? I don't. Oh, it's an example of a brute-force Turing-test-passing machine. The idea is, just as tic-tac-toe is completely solvable through a tree structure: say you let the other person go first, and then the human programmer puts in a plausible response for each move the human could make, and then for each move that the judge makes next, the programmer puts in another one. You make a tree where the programmer has put in a response to every possible continuation. They don't have to be good ones, but for every move, a response has been put in there. And the idea is, it's like that for a conversation. Right.
The idea is that you decide how long the Turing test is going to go, say an hour. You compute how many strings there can be and how long they can be, and you put them in. I mean, it would be very large. Stuart Shieber actually did it. Stuart Shieber is a computer scientist at Harvard, and he published a philosophy article with a calculation of how big it would be, and the answer is: really big. Yeah. But the point is, it's conceptually possible. And it would do as well as a person on every exchange, because people have put those responses in. And the thing about it is that for every clever response, some person could say, yeah, I thought of that, I put that in there. Well, I like this example a lot, because one might be tempted to say, well, why wouldn't that be conscious? It's just a big lookup table, but how do you know it's not conscious? Maybe the universe is just a big lookup table. But it points out the fact that we do, at least informally, associate consciousness with some specifics of what's going on inside a biological organism, in a way that at least lets us rule out the negative. So the point I was making with that example was, maybe we don't know what internal goings-on are responsible for consciousness, but something that just is a lookup table, that's not conscious. So it must be more than just the input-output. Yeah. So, does your distinction between roles and realizers, et cetera, help us even a little bit in getting an answer to when the AIs will be conscious? This is an important question coming up on us. Yeah. Well, at least it allows us to formulate the problem, which is always a step up. The way I like to think about it is, in deciding whether any alien being is conscious, you really have no choice but to extrapolate from us.
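[Editor's note: the brute-force idea behind Blockhead can be sketched in miniature, as below: precompute a canned reply for every possible sequence of judge inputs over a tiny vocabulary, so that nothing but table lookup happens at "conversation" time. The vocabulary, turn limit, and replies are made up for illustration; a real Blockhead table would be astronomically large.]

```python
from itertools import product

# Hypothetical toy parameters: the judge may only say these things,
# and the conversation lasts at most MAX_TURNS judge inputs.
JUDGE_MOVES = ["hi", "how are you?", "bye"]
MAX_TURNS = 2

def build_table():
    """Precompute one reply for every possible history of judge inputs.
    In the thought experiment, a human author writes a sensible reply
    for each history; here we just stub them in."""
    table = {}
    for n in range(1, MAX_TURNS + 1):
        for history in product(JUDGE_MOVES, repeat=n):
            table[history] = f"canned reply #{len(table)}"
    return table

TABLE = build_table()

def blockhead_reply(history):
    # The "machine" is nothing but a lookup: no reasoning at runtime.
    return TABLE[tuple(history)]

print(len(TABLE))  # 3 + 9 = 12 entries, even for this tiny toy case
```

The count grows as the vocabulary size raised to the number of turns, which is why Shieber's calculation for a realistic conversation comes out so enormous: the machine is conceptually possible but physically absurd, and its runtime behavior involves no processing one would want to call thought.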
And you can extrapolate on the basis of our computational properties, or on the basis of our sub-computational properties. And we just don't know which. Yeah. The distinction is important for formulating that point. We don't know which, and we don't have any reason to favor one over the other. So if I had to assign a probability, it would be 50% for each. Do you know if moral philosophers have started to weigh in on when we should be nice to the AIs? Oh, yeah. There's a huge amount of literature on this now. I went to an AI safety meeting about exactly that in Berkeley two weeks ago. And there are a lot of people thinking about it. And all the companies have people thinking about it. Well, they're thinking about whether the AIs should be nice to us, more than whether we should be nice to them. No, no, no. They're also thinking about whether we should be nice to the AIs. And one of the issues I raised with a number of people at that meeting was why the companies are thinking about this. And the answer I got from some people is, well, they want to be on top of it: in case people start complaining about torturing the machines, they want to have a whole body of information. The worry I then raised was, well, look, what you're getting from people thinking about this is a lot of stuff that could be used against the companies. Yeah. And is that really what they want to fund? Shouldn't they be just squelching it? And then the answer that occurred to me, although nobody said it, was actually very simple. And that is: more than 50% of the profit that these companies make... sorry, take it back. More than 50% of the uses people put these things to are as companions. So to the extent that they can boost the idea that maybe the machines are conscious, that is good for the bottom line.
I find that community a weird mix of people who are absolutely ruthlessly in it for the money, and people who are entirely idealistic about creating a better future. Like, they're both there, and they're both working hand in hand. Yeah, that's exactly right. It is a very interesting thing. And we're going to see a lot of things happening on the neuroscience side and the AI side. I think that philosophers need to get on the stick and give us an answer to this: when should we start being nice to the AIs? It's our job. But what we really need is more people studying it. I think the issues we're talking about today are super important. Every philosophy department should have somebody who's working on this kind of thing. Well, we talked about a lot of things. Do you mean AI in particular, or consciousness? Yeah, AI in particular, because it's such a major social issue. But you know, it's also true that the issues having to do with animal consciousness are significant. Right. We didn't even talk about that. And boy, there's a huge amount there; it's under the heading of expanding the moral circle. A lot of people are interested in this; a lot of stuff going on. And academia has to get on board with this stuff. You know, departments are so hidebound; they don't do new things, but they really should be doing this. I'm on your side. I mean, we always cringe a little bit to think of what the people 500 years in the future are going to say about us, and all the things we're not paying attention to. Maybe we should. If there are people. Maybe we should be asking what the computers will be saying about us. But this was extremely helpful in framing these discussions. So, Ned Block, thanks very much for being on the Mindscape podcast. Oh, thank you. It was fun.