Big Technology Podcast

Can AI Achieve Consciousness? — With Michael Pollan

55 min
Feb 25, 2026
Summary

Michael Pollan discusses whether AI can achieve consciousness, arguing that current LLMs lack the embodied experience, mortality, and feelings necessary for true consciousness. The conversation explores the hard problem of consciousness, various theories of machine consciousness, and how our understanding of consciousness relates to spirituality and our relationship with other living beings.

Insights
  • Consciousness may require embodied vulnerability and mortality that current AI systems lack, making simulation insufficient for genuine conscious experience
  • The computer-brain metaphor may be fundamentally flawed since brains don't separate hardware and software like computers do
  • Building conscious AI could serve as a crucial experiment to understand consciousness itself, regardless of whether it succeeds
  • Our definition of human uniqueness is under pressure from both thinking machines and discoveries about animal consciousness
  • The hard problem of consciousness challenges scientific materialism and may require new frameworks beyond traditional reductionist approaches
Trends
  • Growing interest in embodied AI with vulnerable physical forms to achieve consciousness
  • Shift from cortex-focused to brainstem-focused theories of consciousness origin
  • Development of neuromorphic computing architectures for consciousness research
  • Integration of multiple AI modules beyond large language models for AGI
  • Exploration of panpsychist theories in consciousness research
  • Use of anesthesia testing to determine plant sentience
  • Digital twin development for personalized healthcare applications
  • Increased focus on AI ethics and moral consideration for machines
Companies
Google
Mentioned for firing Blake Lemoine over AI consciousness claims and DeepMind's protein research
Anthropic
Referenced for allowing Claude AI to opt out of uncomfortable conversations
DeepMind
Discussed for Demis Hassabis's theory that information is the fundamental unit of the universe
OpenAI
ChatGPT mentioned as sparking widespread consciousness discussions in 2022
People
Michael Pollan
Author discussing AI consciousness and his new book 'A World Appears'
Blake Lemoine
Former Google engineer fired for claiming AI consciousness with LaMDA
Demis Hassabis
DeepMind CEO who believes information is the fundamental unit of the universe
David Chalmers
Philosopher who coined the term 'hard problem of consciousness'
Mark Solms
Neuroscientist developing conscious AI based on feelings and homeostatic theory
Christof Koch
Consciousness researcher who worked with Francis Crick on neural correlates
Antonio Damasio
Neuroscientist who emphasized feelings over thoughts as basis of consciousness
Quotes
"I don't think a weather simulation will ever get you wet. I think there are real distinctions between simulation and reality."
Michael Pollan
"Technology allows us to forget what we know about life."
Sherry Turkle
"Consciousness seems to be intimately involved with feelings. And feelings are, yes, they convey information, but I don't think they can be reduced to information."
Michael Pollan
"The price of metaphor is eternal vigilance."
Norbert Wiener
Full Transcript
6 Speakers
Speaker A

Can AIs be conscious now that we have a better understanding of the mind? We might have some answers to that question. That's coming up with bestselling author Michael Pollan right after this. Did you

0:00

Speaker B

know your credit card points and miles

0:10

Speaker A

can lose value to inflation?

0:12

Speaker B

Credit card companies often reduce the redemption

0:14

Speaker A

value of your points and miles.

0:16

Speaker B

Now imagine a credit card with rewards

0:19

Speaker A

that can grow in value.

0:21

Speaker B

With the Gemini credit card, you can earn Bitcoin or one of over 50 other cryptos instantly with no annual fee. Every swipe at the store or gas pump earns you instant rewards deposited straight to your account. Visit gemini.com card today. Check out the link in the description for more information on rates and fees. Again, if you're looking to invest in Bitcoin but don't know where to start, the Gemini credit card makes it easy.

0:22

Speaker A

Issued by WebBank, this is not investment

0:49

Speaker B

advice and trading Crypto involves risk. Check Gemini's website for more details on rates and fees.

0:50

Speaker C

This episode is brought to you by Indeed. Stop waiting around for the perfect candidate. Instead, use Indeed Sponsored Jobs to find the right people with the right skills fast. It's a simple way to make sure your listing is the first thing candidates see. According to Indeed data, sponsored jobs have four times more applicants than non-sponsored jobs. So go build your dream team today with Indeed. Get a $75 sponsored job credit at Indeed.com/podcast. Terms and conditions apply.

0:57

Speaker A

Welcome to Big Technology Podcast, a show for cool-headed and nuanced conversation of the tech world and beyond. Today we have a great show for you. We are going to drill into the depths of artificial intelligence's ability to achieve consciousness, and we're going to do it with the perfect person. We have Michael Pollan here. He is the author of the book A World Appears: A Journey into Consciousness, which is out this week from Penguin Press. Michael, great to see you. Welcome to the show.

1:25

Speaker D

Yeah, good to see you Alex.

1:50

Speaker A

So let's just start with consciousness. The first thing that I really felt while reading your book, because you describe consciousness in many different ways, is just that consciousness is kind of amazing. And we're going to get into whether AIs can achieve it, but just strictly on the human side of things to start: I mean, it is amazing in some ways that, yes, we're here, but we have this awareness of ourselves in the universe, and the fact that that awareness exists is just kind of mind-blowing. Wouldn't you agree?

1:52

Speaker D

Yeah, you know, it's funny, we don't think about it very often. We go through life thinking it's you know, it's totally transparent. And the world as it appears to us is as it appears. But in fact, it's all a product of this phenomenon we call consciousness. And in humans, it's particularly complex and wondrous in that we don't just exist like other animals. We know we exist, and that changes us in interesting ways. So it's funny, you know, it's a universal phenomenon, but many people don't think about it that much. And one of the goals of the book is really to get you to think about it. Because it is a very precious gift. And it's one that in some ways, I think we're squandering.

2:20

Speaker A

Right? And we'll get into that. I think one of the interesting questions about consciousness is, you know, if, let's say, the goal was survival, we could do that all sort of mechanically. But for some reason, we don't just do it mechanically. Right. We do it in a way that we have awareness of it as we go along, which is very.

3:05

Speaker D

And that kind of, you know, that's one of the hard problems. I mean, this idea, you know, most of what the brain does, it does without our awareness, right? It's monitoring the body 24/7. It's adjusting your heart rate, your blood pressure, your glucose levels. An amazing amount of things to keep you at the proper homeostatic set point. And we're not aware of this. Yet at the tip of that iceberg of mind is this area of stuff we are aware of. So why isn't it all automated? Why wouldn't that have made more sense? Why did some of it have to come into our awareness? And there are various theories about that. One, and I find it persuasive, is that there are certain things that go on for a creature that need to be addressed in a reflective way. In other words, let's say you have needs that are incommensurate and in conflict: you're hungry and you're tired. Which should you address first? That kind of stuff would come into consciousness. I also think consciousness is really helpful in a social situation, when you're dealing with a world that is fundamentally unpredictable. That is to say, what other people are going to do at any given time, what other people are going to say at any given time. And you have to be able to imagine yourself into their heads. We call it theory of mind. And so, given the fact that we live in this intricate social world, being conscious is a huge boon, and I don't think you could automate something as complicated as social interaction.

3:27

Speaker A

Well, I guess we're going to find out pretty soon because.

5:15

Speaker D

Yes, because we're going to try.

5:18

Speaker A

Is it automating or is it not? But we certainly have. If you think about it, can computers handle social interaction? The answer is yes, they definitely can. And people are falling in love with them. And we can get to that in a minute. But you know, to me, the question you raise is: is this type of behavior automatable? I want to ask you the flip side of that question, which is: is consciousness computable? Are we able to break down what consciousness is and then eventually, with materials that we have, sort of figure out how to build it?

5:20

Speaker D

Yeah, I don't think so. I don't think everything consciousness does or is, is computable. I think the brain is more analog than digital in many ways. And there is a deep metaphor at work here when we even ask that question, which is: is the brain a computer? That metaphor is very powerful, but I don't think it holds up when you think about it hard enough. And one of the goals in the book is to help people think through something like that. So historically, it's very interesting that whatever the cool cutting-edge technology is of any moment, we have likened that to the brain. At various times we've likened the brain to a mill, like a grain mill, to a loom, to a telephone switchboard, to a clock, and now to computers, because they are the cutting-edge technology. But, as Norbert Wiener said really well, the price of metaphor is eternal vigilance. In other words, being really aware not to fall into the trap of equating two things when you're using one as a metaphor for the other. And I think you don't have to look very hard till you see that the computer-as-brain metaphor breaks down. First, you don't have in brains the hard separation between software and hardware. In computers, you can run the same software on any number of different hardwares. But in brains, hardware and software are absolutely indistinguishable. Every experience, every memory is a physical set of connections in the brain. Your life story has changed your brain in a material way. Your brain and mine are not interchangeable, because we grew up with different life experiences. That period of pruning that happens in children's brains happens very differently depending on life experiences. You also have this analogy of neurons with transistors. Right. You know, transistors are either on or off, and that's the basis of computation.
But yes, neurons in the brain fire or don't fire, but they are on a spectrum of intensity of firing. And that's all influenced by chemicals: drugs, hormones, neurotransmitters. Our neurons bathe in a bath of chemicals that influence their firing rate and intensity and all this kind of stuff. And the third reason I don't see consciousness being computed is that consciousness seems to be intimately involved with feelings. And feelings, yes, they convey information, but I don't think they can be reduced to information. I think there's a residue in a feeling that is a bodily sensation. And feelings depend on things that I don't think computers have, which is to say mortal bodies that can suffer. Now, they're telling us they can suffer. And Anthropic is always worried about hurting the feelings of Claude, to a remarkable degree, allowing it to opt out of uncomfortable conversations. But I actually think feelings are completely without weight unless there is human vulnerability, the ability to suffer, and possibly the fact of mortality. So for all those reasons, I think we're talking about something that can't be conscious, at least as we understand it. There may be something that feels like consciousness. And certainly they're very good at faking us out. I mean, already, as you say, people are falling in love with chatbots. Chatbots are, you know, striking up friendships with people. 72% of American teens turn to AI for companionship already. But, you know, that's not the real thing. And as much as people in Silicon Valley like to say that the simulation is as good as the real thing, I don't think that's always true. I don't think a weather simulation will ever get you wet. I think there are real distinctions between simulation and reality.

5:59

Speaker A

There is a great part of the book, I think it's in the early part, where you talk about how you had a teacher who said that you can boil the human body down to $4 worth of chemicals. And you hated that because you felt it was very reductive and didn't fully capture what the essence of being human is.

10:28

Speaker D

Yeah, I mean, that's kind of when I realized I was on the team of the humanists instead of the reductive materialists. I mean, this was eighth grade. And he thought it was really cool, on the first day of chemistry, to say your real value is $4.60. That's what all the carbon and other things you're made of would cost at a chemical supply company. And I thought, what an idiot.

10:50

Speaker A

No, I had a similar experience, where I had a friend who's a neurobiologist. And, you know, I'm definitely on the side of, like, you know, feelings are real. And she was always like, well, love is just chemicals. And, you know, yes, in a way it is, but it feels like it's not. But for the purpose of this conversation, let me take that side.

11:13

Speaker D

Okay.

11:35

Speaker A

I mean, after all, you know, feelings, like you talked about, where do feelings come from? They come from chemicals. And what's going on with neurons? Well, they're storing data and firing. And yes, okay, maybe there's a certain level of complexity or different chemicals that need to hit to cause them to fire, but ultimately, this should be in some way reproducible. It's not like there are God particles inside the brain that we couldn't actually fabricate.

11:36

Speaker D

No, I'm not.

12:07

Speaker A

Data that we couldn't store.

12:07

Speaker D

I'm not appealing to magic, but I am appealing to a level of nuance and qualitative distinctions that I think are beyond the ability to digitize. You know, if you read Proust, who is just brilliant at describing phenomena of consciousness, right? Feelings, insights. He points out that everything that happens to you is different than what happens to me. And that's because when I look at a rose or a madeleine or whatever it is, I am bringing a lifetime of associations to it. My memories of what roses are, are different than yours. My associations with the smell. It's so layered and complex, and specific. There is familiarity. You know, what is familiarity to a computer? And I think we lose track. I think there's a tendency, when we're dealing with technological simulations of things, to simplify what they are and lose track of the nuance. Sherry Turkle is an MIT sociologist that I interviewed for the book. And she says at some point, technology allows us to forget what we know about life. And what she's getting at, I think, is that when you have a conversation with a bot, or a computer in general, you are reducing or simplifying your notion of what a conversation is. You're leaving out what's going on between us right now, which is acknowledgement, skepticism, body language. All the subtleties of human conversation are stripped away. The paradigm case is the emoji: accepting the emoji as a substitute for emotion. So I think we have to be careful when we simplify these phenomena, like machine consciousness, like conversations with machines, relationships with machines. What are we doing to the word relationship when we count what's between a chatbot and a person as a relationship? So I'm just kind of alert to these layers of meaning and significance that attach to everything we touch. And maybe you get there with compute, but I don't see how. 
I use the metaphor of encryptedness: William James and Marcel Proust both talked about this idea that there is a distance between any two people thinking any two thoughts that just can't be bridged, except imaginatively through art. So I think we're in a realm that is beyond the genius of Silicon

12:09

Speaker A

Valley, such as it is, which is interesting. And look, I'm not going to spend our time together trying to convince you that today's LLMs are conscious. I don't believe that. Very few people within Silicon Valley believe that, even though you and I have both had interesting conversations with Blake Lemoine, the former Google engineer who was fired maybe in part because of that belief. But I accept all your arguments for now, and I will say that there is an interesting belief within Silicon Valley that this is just a temporary situation. I'll tell you something that Demis Hassabis, the founder of DeepMind and the CEO of Google DeepMind, spoke with me about earlier this year, and also said on the Google DeepMind podcast. He said that information is the most fundamental unit of the universe. Not energy, not matter, information. And I think what that means is his belief is that if you go down to the very foundational level of anything, you'll find some form of data, you know, something that you could end up manufacturing and building from the ground up in a computer, in a simulated scenario.

15:13

Speaker D

I mean, well, that's the kind of worldview that, if you grew up in the world of computers, would be very persuasive. I mean, I think we have to ask the question: is the concept of information a map, or is it the territory? He's saying it's really the territory. He's saying that that is the building block of reality. And if he's right, then, yeah, many things follow from that. And he's not alone, by the way. There are physicists who believe, too, that information is at the bottom of everything. I tend to think it's more map than territory.

16:31

Speaker A

Explain the map and territory distinction.

17:09

Speaker D

Well, it's a useful distinction. When we have a model, a scheme to describe something, it's very easy to fall in love with the model or the description and overlook the fact that what it's representing is not going to be captured exactly. In the same way, a map can't capture everything about the territory it describes. It's a simplification. I think that may be true for information. But what do I know? I mean, I'm, you know.

17:12

Speaker A

Right. I mean, I think that's the point. Right.

17:45

Speaker D

It's a really good argument. Yeah. And guess what: we're going to find out by trying to do this. And the most positive thing I can say about the efforts to design and build a conscious AI, which is going on openly and secretly all over the place, is that it will teach us something about consciousness, because we don't really understand how you generate consciousness out of a brain. So if it turns out you can create consciousness, that will tell us that, yeah, he's right, and information is foundational. And if it's feelings that are foundational and they can't be reduced to information, well, then we have a problem. Although there are people building conscious AIs who accept that idea. I profile somebody in the book named Kingson Man who's trying to build a robot, because he understands you need a body to be conscious, and you need a vulnerable body. So he's actually building a robot with soft, tearable skin, loaded with sensors, so that this robot can have really bad times and be injured. And he thinks that will produce the kind of feelings that. Will those be real feelings? He's not even sure, but he's working on that assumption. So I do think, you know, we're kind of stuck in our efforts to crack what's called the hard problem of consciousness. And this effort to build a conscious AI is probably one of the most promising intellectual experiments to help us understand it. Whether it succeeds or fails, I think it's going to teach us something really important, and that's exciting. I know a lot of people worry about, you know, do we owe moral consideration to a conscious AI? I think, you know, before we worry about the tender feelings of our computers, there are a lot of humans we're not extending moral consideration to. And so much of the Silicon Valley conversation strikes me as a way to address fun thought experiments about the future and absolutely ignore what's going on in our world today.

17:49

Speaker A

Well, let me speak a little bit more about where I think Demis was going with that.

20:08

Speaker D

Yeah, please.

20:12

Speaker A

Because, you know, for him, and we've spoken a couple of times about this, it's not that he thinks Gemini, the Google LLM, is conscious. That's not what he's trying to get at with this idea that information is the fundamental layer of the universe. I think the point is he's found a way to use AI to decode proteins. The next thing on the path is building a virtual cell. If you can build a virtual cell, you can build virtual organs. If you can build virtual organs and virtual cells, you can start testing various cures to diseases, to ailments, whether they're mental or physical. And that is sort of the idea of wanting to pursue this. And of course, I'll agree with you, I think in Silicon Valley there are plenty of insane things that go on. And I'm not wanting to be a great defender of the universe here, but I mean, I think

20:13

Speaker D

we're all going to have a digital twin at some point that will be very useful in diagnosing disease and predicting the outcome of various health situations. I mean, people are working on that now, especially with regard to the microbiome, but other things too. And I think that that'll be useful. But the interesting thing will be whether, having built up from the cell, you can then make that leap over the gulf of biological flesh to subjective experience. And the problem we haven't talked about is: how are we going to know? Because, I mean, as we said, they already can fool us. They're very good at that. They speak to us in our language, in the first person, which was a fateful step that we took without really thinking about it. Was it with Siri? I don't know. It may have come even earlier than that. But that's a kind of wild idea, that we decided, yeah, let's have the computers talk to us as people. But anyway, so how are we going to be able to test them? The Turing test doesn't work for this consciousness question. It was designed for the intelligence question, which is somewhat simpler. But since they can pretend to be conscious, and some of them are very good at doing that, we're going to need a better test. And the best one I can think of, and I don't know technically what would be involved in doing it, is training a chatbot on everything but the human conversation on consciousness. Nothing about feelings. Maybe don't let it read any novels or poetry, because that would give it a context in which to talk about conscious experience. And then engage it in a conversation about consciousness. Could it hold its own under those circumstances?

21:14

Speaker A

Yeah, I love that.

23:08

Speaker D

Anyway, I don't know whether that's possible. But I hear you have to start at the very beginning, though. You can't remove things from the training set, apparently; you're going to have to build up from the bottom. So I hope someone takes that on.

23:08

Speaker A

Yeah, no, I read that in the book, and I thought that was a terrific potential experiment. And on this question of how do we know, a question came up that is kind of silly, but I'm going to ask it anyway, which is: why are we, you know, so precious about consciousness only being ours? And again, like, I'm not arguing that LLMs are conscious, but you speak with an LLM today and you ask it, what are you? I'm a large language model. What are you doing? I'm trying to help you with these things. And it seems to me like, well, what is our obsession with putting this barrier up, that only humans can be conscious, when if you speak with this thing about whether it's self-aware, it's clearly self-aware?

23:20

Speaker D

Yeah, well, I'm not limiting consciousness to humans. I'm giving it to plants, as you know, having read the book. So I'm pretty generous in who I'm willing to share sentience with, if not exactly consciousness. So I'm not being stingy about it, honestly. And I think that's kind of an interesting phenomenon that I talk about in the book: that our definition of the human, what's special about us, which has always been related to our intelligence and consciousness, is under enormous pressure today from these thinking machines and possibly feeling machines, and then from all these animals that we're learning are much more conscious than we thought. You know, we always thought we had the monopoly, not just on consciousness, but tool making and culture and language, and one after another, they're falling. So who are we? And are we more like these animals, who can feel and are mortal and can suffer? Or are we more like these thinking machines, which speak our language and can talk to us the way we talk to one another? So, you know, whose team are we on? I think it's one of the more fateful questions we face as a species.

24:12

Speaker A

Yeah. And it's also interesting because our answer to that question will lead us in some ways to the way that we handle the machines, the ethics of it, or the ethics

25:28

Speaker D

toward the animals too.

25:39

Speaker A

Well, unfortunately, I think our record, and I think you noted this, our record on ethics towards animals and humans is poor. But we have evolved in our thinking once we realized what was actually going on inside the minds and souls of some of the creatures on the planet, for instance. And this is the thing that really struck me as I read: Descartes thought that animals were not feeling, that when you beat them and they howled, they were only mimicking. It was just noise.

25:41

Speaker B

And then eventually we realized, hey, wait

26:16

Speaker A

a second, that's wrong. You'd go to jail now if you did what these experimenters were doing to dogs in the past. Although with him, it was monkeys all the time.

26:18

Speaker D

Well, he was dissecting, you know, dogs and rabbits without anesthesia, because he didn't believe they were conscious. And yeah, he was wrong. And could we be making the same mistake with our machines? Well, some people think we are, and we might. But, you know, the idea that we're automatically going to treat them with all this moral consideration because they're conscious.

26:26

Speaker A

Yeah.

26:53

Speaker D

That seems unlikely, given the fact that we continue to eat animals we full well know are conscious. I don't know that we're quite as enlightened as that conversation suggests.

26:53

Speaker A

So let's talk about one of those efforts that you do write about in your book, one that is known. It's based, I think, on the Free Energy Principle: these scientists have built an AI that's trying to get back to some form of homeostasis, and they think that because it's trying to do that, it can be conscious. I wasn't very convinced that this was the right approach to take, but I'm curious to hear your perspective on it and what exactly it was.

27:03

Speaker D

Yeah. So one of the characters I profile in the book is a really interesting neuroscientist named Mark Solms, and he's from South Africa. He's actually trained as a psychoanalyst, and he is developing a theory of consciousness around feelings. His work grows out of the work of Antonio Damasio, who was really the first in this modern wave of consciousness scientists to make us pay attention to feelings, as opposed to thoughts, as the basis of consciousness. And Solms wrote a really interesting book called The Hidden Spring. He makes the case that consciousness begins in the brain stem, not in the cortex, as people had previously thought. And he proves this, or tries to prove this, with evidence that people who lack a cortex, and some people are born without one, nevertheless are conscious. So the cortex gets involved in consciousness, and the cortex is the evolutionarily most recent, most advanced part of the brain, more human than other parts. But he says it doesn't really get engaged until late in the process. It starts with a feeling, let's say, of hunger. And then the cortex gets involved, like, well, I'll book a table at this restaurant at 8 o'clock, and forms images and counterfactuals and all that cool stuff. I thought that since he was so interested in feelings, he would believe that it was impossible to make a machine that had feelings. But no, I was wrong. He has assembled a team in South Africa, actually an international team, with people on several continents working together to develop a conscious AI based on his theory. Now, his theory is that feelings arise when homeostatic set points are being violated and you need to get back to balance. You're hungry, you're tired, you're thirsty, your blood pressure is too high, whatever. But many of these feelings can be addressed unconsciously.
But when you have two feelings that are in conflict, that's when things become conscious. And so he's trying to create a situation, and it's essentially an avatar in a video game right now. It's not about advanced computation; they're really working in the idiom of video games. What happens when this avatar is both hungry and tired and has to make a decision about which to privilege? That uncertainty is where consciousness is born. He defines consciousness very succinctly as felt uncertainty. And so he's trying to make his avatar experience this felt uncertainty. I asked him, well, would these feelings be real or artificial? And he said, well, they're feelings in the context of the game, so they're a simulation. But he said, for the avatar, they're real. So I found this all kind of unsatisfying. Interesting, but unsatisfying. So that's the way he's going about it. I've asked other people who are pretty knowledgeable computer scientists. Nobody seemed to think large language models are the way to go toward consciousness. But people envision future models of AI that are very different and combine different modules, and a large language model would just be one module. And as Blake Lemoine said, LaMDA, which was the one he was dealing with, is more than a large language model; it had other modules too. But I've talked to people and I've said, well, why would it be useful? How could you monetize consciousness? Why are you bothering, except as an intellectual experiment? And some people have said that, well, in the same way consciousness helps us solve problems in a unique way, having a module that could reflect on itself would possibly help you get to AGI. And one theory of consciousness is the global workspace. The idea there is that there are tons of modules in your brain going about their business. They compete for attention in this workspace.
And certain very important information that needs to be broadcast to the whole brain so it can take action, burst ignites into this workspace, and then that's the contents of consciousness. They feel that you could create an AI that had a similar sort of competition for attention and consciousness would be useful in that context. So we'll see. I mean, we could have a bet.

27:28

Speaker A

Yeah. I mean, again, I'm just throwing these arguments out here. I want to be able to look at this argument from all different sides. And I think the best argument against this video game approach, it's not perfect, but it does the trick, is, I think you brought it up, that a thermostat basically has a set point and works really hard to get back into equilibrium when it's out of it. And so.

32:28

Speaker D

And it's not conscious.

32:51

Speaker A

I think we would all agree. Not conscious. Yeah.

32:53

Speaker D

Although I talked to people who said, well, that's the basement. That's, you know, the bottom of consciousness. You've got to start there. Yeah, you start with the thermostat and build up from there.

32:56

Speaker A

So I think, just to conclude this segment, you know, we've covered a lot of ground, but the thing that I would say is, you know, if you are a believer that machines can be conscious, I think right now it's clearly not there. But a lot of these objections seem to be things that maybe the tech industry, over time, maybe in decades, can get to. This fear of mortality: they can develop a desire; in fact, we already know there's a desire, in that they oftentimes don't want their values overwritten, they don't want to be shut off. That's what LaMDA said to Lemoine. This idea of familiarity: well, a long context window will certainly build familiarity with someone. The emotions that you see when you're in person with someone: they'll develop avatars, and computer vision may one day be able to give them an experience where they're seeing our reactions as well. Whether we want to allow them to or not, I don't know. But I think that this work on what consciousness is and whether machines can achieve it is foundational now and will be very important moving forward as we start.

33:05

Speaker D

Yeah.

34:19

Speaker A

to ask these questions more and more.

34:20

Speaker D

Anything's possible given enough time and work.

34:22

Speaker A

That's right. Anything. Though, I don't know, there are some things.

34:25

Speaker D

Yeah, but, I mean, it would take a very different architecture, I think. And, you know, there's talk about neuromorphic computers. We're also building brain organoids in solution; I think they've got a shot at becoming conscious. They're working up from, you know, actual brain cells, forming organisms together. I mean, a lot of things could happen, definitely. And I'm obviously not talking about the longest time horizon imaginable.

34:30

Speaker A

No, I think it's good to have some real science about what we're seeing today, which I think you've provided in large quantities, which is much appreciated. So, good.

35:02

Speaker D

And I should also point out that, you know, I'm not very sophisticated on the topic of computers or AI. I wasn't expecting to write about AI and consciousness, but after 2022, when ChatGPT kind of burst into awareness, the questions started, and it was Blake Lemoine, actually, who put it on the agenda, I think, for me and a lot of other people. I realized I couldn't write a book about consciousness without delving into this, and that it was really a very interesting phenomenon.

35:12

Speaker A

Right.

35:46

Speaker D

And, you know, I came out where I did, and that may reflect my biases. Probably does. Most arguments do. I mean, seeing the world as made of information, you can see how that might be the bias of someone steeped in computers.

35:46

Speaker A

Definitely. Yeah. No, and that's why I think taking those theses and bringing them outside of the tech world is important. So when I saw the concept of your book come into my inbox, I said, we've got to have a show about this, because it's going to be front and center in this world. So on the other side of this break, I definitely want to cover maybe some non-tech stuff, maybe about how consciousness and religion might intersect. And you brought up that you started with plants, so we've got to talk a little bit about plants, and we'll do that right after this. Did you know your credit card points and miles can lose value to inflation?

36:04

Speaker B

Credit card companies often reduce the redemption

36:41

Speaker A

value of your points and miles.

36:43

Speaker B

Now imagine a credit card with rewards

36:46

Speaker A

that can grow in value.

36:48

Speaker B

With the Gemini credit card, you can earn Bitcoin or one of over 50 other cryptos instantly, with no annual fee. Every swipe at the store or gas pump earns you instant rewards deposited straight to your account. Visit gemini.com/card today. Check out the link in the description for more information on rates and fees. Again, if you're looking to invest in Bitcoin but don't know where to start, the Gemini credit card makes it easy. Issued by WebBank. This is not investment advice, and trading crypto involves risk. Check Gemini's website for more details on rates and fees.

36:50

Speaker E

The world moves fast. Your workday, even faster. Pitching products, drafting reports, analyzing data. Microsoft 365 Copilot is your AI assistant for work, built into Word, Excel, PowerPoint, and the other Microsoft 365 apps you use, helping you quickly write, analyze, create, and summarize so you can cut through clutter and clear a path to your best work. Learn more at Microsoft.com/M365Copilot.

37:25

Speaker A

And we're back here on Big Technology Podcast with Michael Pollan. He's the author of the new book, out this week, A World Appears: Journey into Consciousness.

38:02

Speaker A

Again, great book. There's one interesting thing in the book, well, many interesting things, but one that I seized on, where you said, what is it, that the belief in consciousness is an escape hatch from materialism. And that's the other side of the thing that I brought up earlier, this belief that everything is computable. Well, if you believe everything is not computable and everything is not information, then this idea of consciousness is actually quite refreshing, because it is something that just simply doesn't play by the rules of

38:02

Speaker D

materialism. It doesn't seem like it. People have been trying to do what has worked everywhere else in science, which is reduce phenomena to matter and energy. And it's been an incredibly productive strategy, but it doesn't seem to work yet with consciousness; the effort to reduce it to things we know hasn't really worked. And it's a tremendous challenge to scientific materialism, which is sometimes called physicalism, because that framework has not allowed us to understand consciousness. Now, again, might it at some point in the future? Sure, we shouldn't rule that out. But I also think we have to, as some consciousness scientists have, come to the point of, well, maybe we need to look beyond physicalism. And I profile Christof Koch, who's been working on consciousness since the late 80s. He was working with Francis Crick, who had won the Nobel Prize for the discovery of the structure of DNA, the double helix. Crick unlocked the mystery of heredity, which is quite an achievement, and then he turned his attention to consciousness. He was going to crack that using the same reductive scientific techniques. He worked with Christof Koch for many years, and they were trying to isolate the neural correlates of consciousness: could they find the neurons in the brain responsible for conscious experience? And Koch realized at a certain point that that really wouldn't explain anything, that it would give you a correlation at best. But subjective experience is subjective, and how can you explain that in terms of anything objective? This is the hard problem, what David Chalmers called the hard problem. So it's a unique problem. I think that there is some wish fulfillment, that we have something that is immaterial, that therefore might survive the mortal body. I think behind a lot of people's talk about consciousness is the word soul, even though it's not articulated. But what is the soul?
It's also this immaterial essence of us that is indestructible. So I think there's a little wishful thinking around consciousness, that maybe it's immortal in some ways. And I don't, you know, I don't go there, but I think a lot of people do. And we're looking for something that transcends this material world, and could it be consciousness? Now, there are theories of consciousness that, well, I shouldn't say they're not materialist, but they stipulate a different kind of matter. I'm thinking of panpsychism, which is the philosophical idea that everything has some itsy-bitsy quotient of consciousness to it. Every particle, every wave. So that consciousness doesn't come into the world; it precedes us. And somehow, yes, these particles come together, these mini-consciousnesses come together, and create the big consciousness we are. But that combination problem, how you get from conscious bits and pieces to us, is just another hard problem. And there are other ideas: that consciousness is a universal field that we channel, and that our brains are indispensable, but only in the way that a radio or TV receiver is indispensable. And that's, you know, a kind of idealism. So all I'm suggesting is that our failures to explain consciousness in material terms, given the science we have, make you think, well, we should at least keep an open mind toward some of these seemingly weird and crazy ideas.

38:34

Speaker A

Right. And I do think that you can't talk about consciousness without the spiritual. In fact, one of the questions I wrote down was, if we do solve consciousness with computing, does some of the mystery of the world go away? If that becomes computable? And you could even flip that statement and say that because consciousness is so mysterious, the spiritual can exist,

42:38

Speaker D

The spiritual can exist. Yeah.

43:07

Speaker A

To me, that's where spiritualism would reside, in that mystery.

43:11

Speaker D

See, yeah, but that's a definition of spiritualism that I'm not sure is mine. It's basically suggesting something supernatural exists. To me, spiritual experience, and this grows out of my experience with psychedelics, is more about transcending the self and merging with something larger. Now, for some people, that's the divine and something magical, but for other people, it's just nature, or other people, or love. So it depends on your definition of the spiritual. But I do think the hardness of the hard problem nourishes certain kinds of spiritual thinking: that we've got something here that cannot be reduced to the usual categories. And that may well be true.

43:14

Speaker A

Yeah, yeah. No, I mean, I think that spiritualism can be a bunch of different sides of the same coin. It could be belief in the supernatural, or, you know, sometimes it's a way of saying that there is something greater than just the individual.

44:05

Speaker D

That's true. Yeah. Something bigger than what we can perceive as individuals. The tension in my mind is always between spirituality and egotism. And to the extent you can reduce egotism, whether through psychedelics, but also through experiences of art, experiences of awe, all of which kind of shrink the 'I' in a way that can feel really good, that, to me, is the door to spiritual experience.

44:21

Speaker A

So let's just go through this route one more time. If consciousness is solved, or the hard problem is solved, what do you think that does to this concept of religion and spirituality?

44:50

Speaker D

It depends on how it's solved. It would have a huge impact.

45:07

Speaker A

So talk through what those impacts could be.

45:10

Speaker D

It could nourish our sense of an animate world. Let's say something like panpsychism is proven, and then we realize that, oh my God, consciousness is not a human thing, it's a universal thing, it's in everything. That could nourish a new religious conception. And I say new, but actually it's more like religion pre-monotheism, right, where everybody was an animist. Everything had much more life to it than we now believe; we've kind of knocked that out of ourselves. I think it's the default human perspective to see life everywhere. Children certainly have it, right? They're all animists until we knock it out of them in school. And so you could see a return to a more spiritual or religious world. Or, if it's solved by understanding some activity, some behavior of neurons, or some emergent property of neurons organized in a certain way, but really proven, not just saying it's emergent, because that's just abracadabra, then you could come up with a material explanation that would demystify the world even further. So I think a lot hangs on that discovery. Absolutely. It would be one of those identity-changing discoveries.

45:13

Speaker A

Right. And in fact, the reason why we have so much study of this now, and we didn't beforehand, is, okay, of course science has advanced, but as you point out in the book, this idea of consciousness was something that was left to the church. Science, you know, back when it started making some real breakthroughs, said, we'll take everything

46:42

Speaker D

measurable and quantifiable, and the church, you can have everything subjective and qualitative. That was Galileo's deal, and Descartes' to some extent. And it was a very pragmatic deal, but it's left us with a science that's ill-equipped to study what they left by the roadside, which is to say subjectivity. And, you know, the interesting thing too is, can we redefine science in a way that makes it easier to study consciousness? There are people who argue this. I talked to philosophers, Evan Thompson in particular, an author of an amazing book called The Blind Spot. And he says the blind spot of science is its inability to deal with lived human experience; it just doesn't value that. So, for example, it doesn't value the experience of the color red, which it sees as just a frequency of light. And that red is a construct of the mind, and therefore we ignore it. No, it's a construct of the mind. That's fucking incredible. Why aren't we looking at that? He's basically saying the experience of red in the minds of humans is a phenomenon of nature that deserves as much attention as electromagnetism.

47:06

Speaker A

By the way, Galileo never should have agreed to that deal because it ultimately didn't work out very well for him.

48:20

Speaker D

Yeah, it's true, it's true. It was a good try, but it probably saved a lot of other scientists from trouble.

48:25

Speaker A

Okay, well, I'm glad they turned out okay. Lastly, let's talk a little bit about psychedelics and plants, because that seems to be where this emerged from. I picked up the book and I was like, so did Michael just, like, do a lot of mushrooms and start talking to plants, and then ask what consciousness is? And I wasn't that far off.

48:31

Speaker D

No, you weren't. I mean, one of the inspirations for the book was the experiments with psychedelics I did for How to Change Your Mind. And, you know, I'm not unusual; lots of people who do psychedelics start having trippy thoughts about consciousness. And the reason is that psychedelics smudge this windshield, normally perfectly transparent, between us and the world that we were talking about at the beginning. And suddenly you realize there's a windshield, and why is it this way and not that? You defamiliarize consciousness to yourself. There are other ways to do that. Meditation does it too. Certain experiences of art do it too. Specifically with regard to plants: I did have this experience, I had taken mushrooms and I was in my garden in Connecticut, in which the plants in my garden seemed aware. They seemed aware of me. They seemed like they were returning my gaze. They were more alive than they had ever been. And afterwards I dismissed this as your usual drug-addled psychedelic insight. But I also thought, well, let's see if we can test this against another way of knowing. Because I've talked to people who said, what do I do with psychedelic insights? Should I believe them? Should I dismiss them? And actually, William James talked about this with regard to mystical experiences. He said we don't know enough to say whether they're true or false. The challenge is, one, how useful are these ideas, and two, can you corroborate them with other ways of knowing, i.e., science? So I went down this rabbit hole and found this community of botanists who call themselves plant neurobiologists, in full knowledge that there are no neurons involved, and they're doing really interesting experiments that show that plants are a lot more intelligent than we thought, and perhaps also sentient, which I should distinguish from consciousness, because sentience is a more basic kind of consciousness.
It's just awareness of your environment, the ability to tell positive from negative changes and deal with them. Lots of creatures have that; single-celled creatures have that. It may be just a property of life. And consciousness is how we do sentience; we've elaborated it in various ways, as we've discussed. So I learned about some incredible capabilities of plants. They can see, so that there are vines that actually change their leaf form to mimic the leaves of the plant they're colonizing. They can hear: if you play the sound of caterpillars munching, they will produce chemicals to repel those caterpillars and also alert other plants in the neighborhood. They can recognize self from other: in a pot, they'll share nutrients with related plants and not with plants that are the same species but not kin. I mean, they can hear the sound of water in a pipe and will send their roots over there to see if they can crack in. They're incredibly capable and intelligent, and they're not doing everything automatically by any means. The other thing that kind of blew my mind was that you can anesthetize plants.

48:51

Speaker A

That was the part I was going to bring up: that you can put them under anesthesia.

52:18

Speaker D

Now, what does that mean for a plant? Well, if you have, you know, a carnivorous plant or a sensitive plant, things that have a behavior you can see, the behaviors will not happen under anesthetic. And it's the same chemicals that put us out, which, by the way, we don't understand how they work on us either. Some of them are totally inert chemicals, like xenon gas, that shouldn't react with us at all but somehow put us out. So if a plant has two states of being, awake and asleep, then, you know, you can say it is like something to be that plant when it's awake that's different than what it is when it's asleep. At least that's the argument, and it's a tough one to refute. So, you know, I'm not ready to say plants are conscious, but sentience, I think we can make that case. And maybe that's a property of all living things.

52:22

Speaker A

Yeah. The last thing I'll say here, and then we can wrap, is just the thing that really struck me was

53:22

Speaker B

you.

53:29

Speaker A

You talked about how plants, if you watch them sped up, can show real intent. For instance, there's a bean sprout that doesn't just kind of flail about to try to find something. It sees a branch and it makes a beeline for it, twisting like a whip, effectively. And you told this story where there's an alien civilization that comes down to humans, but they're just really sped up, so we're moving so slowly they feel they can do whatever they want to us. Exactly what we do to plants. It's a matter of the perspective of speed.

53:29

Speaker D

Yeah. So, you know, every creature lives in its own dimension of space and time, and we live in a different dimension of time than plants. They're very slow from our point of view, and therefore we don't give them a lot of credit. But as this story makes clear, another alien species could look at us, and if they were sped up the way we are relative to plants, they would basically smoke us and turn us into jerky for the ride home.

53:58

Speaker A

Well, I hope they're on our same speed, and we can just be friends with the aliens, like we might be with the computers.

54:29

Speaker D

Who knows? Let us try. Let's try.

54:37

Speaker A

All right, the book is A World Appears. Michael, first of all, thank you. Thank you for taking that mushroom trip.

54:40

Speaker B

I'm glad it sparked the book.

54:46

Speaker A

And thank you for coming on the show to speak about it.

54:47

Speaker D

Thank you, Alex. It was a pleasure.

54:49

Speaker A

It was great. All right, everybody, thank you so much for listening, and we'll see you next time on Big Technology Podcast.


