AMA | Feb 2026
190 min
Feb 2, 2026
Summary
Sean Carroll's February 2026 AMA covers diverse topics spanning physics, philosophy, AI consciousness, and contemporary politics. Carroll addresses questions on dark energy, cosmology, consciousness in artificial intelligence, and reflects on the current political crisis in the United States while maintaining cautious optimism about democracy's future.
Insights
- Computational functionalism may be incomplete—what matters for consciousness isn't just input-output mapping but the underlying physical processes and metabolic dynamics that generate experience
- Dark energy may be variable rather than constant, but current data doesn't support abandoning Lambda-CDM model or endorsing Big Crunch scenarios
- AI safety concerns are valid but anthropomorphizing AI obscures the real risk: humans and AI teaming up to delegate critical tasks to systems we don't understand
- University education's value transcends career preparation—it provides intellectual formation, exposure to diverse ideas, and development of empathy that shapes lifelong thinking
- Democratic resilience depends on sustained civic engagement (voting, organizing) rather than assuming institutions will automatically prevent authoritarianism
Trends
- Shift from computational functionalism toward process-based theories of consciousness emphasizing substrate-specific dynamics
- Growing recognition that AI safety risks stem from capability-value misalignment rather than malevolent superintelligence
- Increasing tension between university education's intrinsic value and extractive pricing models making access economically prohibitive
- Recognition that non-equilibrium physics and complex systems theory are essential for understanding biological and social phenomena
- Philosophical interest in consciousness expanding beyond neuroscience to include metabolic, temporal, and dynamical system properties
- Debate over fine-tuning in cosmology shifting from anthropic principle arguments toward understanding what constitutes meaningful probability measures
- Emergence of process-based rather than outcome-based frameworks for evaluating AI consciousness and moral status
Topics
- Dark Energy Spectroscopic Instrument (DESI) Results and Lambda-CDM Model Tensions
- Computational Functionalism vs. Process-Based Theories of Consciousness
- AI Consciousness and Moral Status of Artificial Intelligence
- Black Hole Information Paradox and Hawking Radiation
- Fine-Tuning Problem in Cosmology
- Neutrino Oscillations and Mass Eigenstates
- Non-Equilibrium Physics and Biological Systems
- Simulation Hypothesis and Bayesian Reasoning
- Democratic Resilience and Authoritarian Risk
- University Education Value and Student Debt Crisis
- Gauge Symmetry and Quantum Field Theory
- Cosmological Constant Problem
- Complexity Theory and Emergence
- Metric System Adoption and Measurement Standards
- Liberal Education and University Activism
Companies
Libsyn
Podcast advertising platform offering host endorsements and pre-produced ads across thousands of shows
People
Ned Block
Philosopher discussed extensively regarding computational functionalism and consciousness theories
Anil Seth
Neuroscientist whose work on consciousness processes influenced Carroll's evolving views on functionalism
Jennifer Chen
Co-author with Carroll of eternal cosmology theory involving baby universes and arrow of time
Andy Albrecht
Physicist known for new inflationary cosmology and dark energy parametrization research
Olufemi Taiwo
Georgetown philosopher whose optimistic framing about eventual triumph of democratic values influenced Carroll
Stephen Hawking
Physicist whose work on black hole radiation and quantum gravity remains foundational to current research
Jim Hartle
Physicist who contributed to understanding emergence of time in quantum cosmology
Roger Penrose
Physicist whose cyclic cosmology model represents alternative to Carroll's eternal universe approach
Paul Steinhardt
Physicist who developed cyclic cosmology models with specific arrow of time requirements
Neil Turok
Co-developer of cyclic cosmology models discussed in context of fine-tuning problems
Seth Lloyd
MIT quantum physicist whose computational universe framework relates to functionalism debates
David Albert
Philosopher of physics whose work on quantum mechanics and time is relevant to cosmological discussions
Tim Maudlin
Philosopher of physics who critiques standard interpretations of quantum fields and wave functions
Philip Goff
Philosopher debated by Carroll regarding panpsychism versus physicalism on consciousness
Gerard 't Hooft
Nobel Prize-winning physicist whose curriculum for theoretical physics education is referenced
John Rawls
Political philosopher whose Theory of Justice is recommended for understanding modern ethics
David Papineau
UK philosopher whose book on philosophical devices is recommended for metaphysics and epistemology
Douglas Hofstadter
Cognitive scientist whose work on self-reference and logic influenced Carroll's thinking on normativity
Isaac Asimov
Science fiction author whose psychohistory concept is critiqued as oversimplifying social dynamics
Donald Trump
Current US president whose administration policies are extensively discussed regarding authoritarianism
Quotes
"I do not regret to inform you that we are going to win."
Olufemi Taiwo (quoted by Sean Carroll)•Early in episode
"You have to stand up to the terrible things that are going on. You have to fight in whatever way you can, and you also have to keep living your life."
Sean Carroll•Opening remarks
"What matters is not just the actual output of the computation, but the process by which the computation is carried out."
Sean Carroll•Consciousness discussion
"The thing about LLMs that convinces me that they're not conscious is that they don't get bored."
Sean Carroll•AI consciousness section
"Life resists its entropy increasing by increasing the entropy of the universe elsewhere."
Sean Carroll•Thermodynamics and biology discussion
Full Transcript
Marketing is hard, but I'll tell you a little secret. It doesn't have to be. Let me point something out. You're listening to a podcast right now, and it's great. You love the host. You seek it out and download it. You listen to it while driving, working out, cooking, even going to the bathroom. Podcasts are a pretty close companion, and this is a podcast ad. Did I get your attention? You can reach great listeners like yourself with podcast advertising from Libsyn ads. Choose from hundreds of top podcasts offering host endorsements or run a pre-produced ad like this one, across thousands of shows to reach your target audience in their favorite podcasts with Libsyn ads. Go to Libsyn ads.com. That's L-I-B-S-Y-N ads.com today. Hello, everyone, and welcome to the February 2026 Ask Me Anything edition of the Mindscape podcast. I'm your host, Sean Carroll. It's a little bit weird for me personally to be recording this AMA right now. You know, we have the usual spectrum of great questions in the AMA, but it's a weird time in the United States. I like to, as you know, pretend that I'm talking potentially to people listening hundreds of years from now. I know that's not true. Most of the people who listen to the podcast are going to do so within a couple weeks or months of when it's released, but that pretend scenario lets me say things very clearly that are probably pretty obvious to everyone. So the thing I'm going to say is here we are in the second Donald Trump administration, an administration which is becoming increasingly lawless and authoritarian. And nobody should be surprised by this. You remember that at the end of his first administration, Donald Trump incited a riot at the U.S. Capitol in order to cling to power after losing a fair election. Once that happened, it's completely inexplicable and inexcusable that he was allowed to run for president again. 
But as we've seen over the past few years, our elite institutions, whether it's the government or the media or whatever, are completely unprepared for the kind of situation that we're finding ourselves in. And they react very badly. And here he is president again with far fewer guardrails, because he didn't probably even expect to win the first election in 2016. And he had a certain number of fairly establishment figures in the administration because, you know, you got to fill out those job posts somehow. Now there's nobody in the administration except for pretty radical Trump supporters who are willing to break all the rules, lie outright all the time in order to support him, say the most ridiculous things that are contradicted by everything in our laws and video evidence and things like that. In particular, in the last few days, we've seen two murders of people on the streets of Minneapolis by government agents, by agents of the Immigration and Customs Enforcement Agency: Renee Nicole Good and Alex Pretti. And for no reason whatsoever, no good reason whatsoever, I should put it that way, ICE, as it is known, has been trying to terrorize the people of Minnesota, to try to dig up people who have different skin color and accents than they approve of and deport them. A normal government, when an agent shot dead a citizen on the street or in their car, would say, well, you know, there's probably nothing wrong here, but we're going to investigate it. We're going to make sure that nothing bad happened. But this government doesn't do that. It instantly starts lying and saying, you know, these people were radicals and they were attacking and they were a danger, and they just make things up. In an era where everything is filmed and readily available on video, it's perfectly obvious that they're lying, but they can do it for various reasons. 
So anyway, this has made people despondent, you know, despairing of the state of the United States, the state of democracy and so forth. And I feel it. I feel the sadness that comes with the fact that this can be happening in our country. Of course, it never was impossible that it could be happening in our country. Things like this happen in our country. Many things like this happen elsewhere. It's not even by any stretch the biggest thing that has happened. Credible estimates say that the canceling of U.S. aid has killed over 600,000 people worldwide since the second Trump administration started, thanks to Elon Musk and the DOGE agency. And so that's worse. But, you know, those people are far away. They don't sort of viscerally hit you quite as much as people who clearly, to a lot of Americans, could have been them. Someone just trying to get home in their car or, you know, be an observer out on the street with a camera. So it is depressing, and it makes it weird to talk about physics and black holes and quantum mechanics. But as I've often said, you have to. You have to stand up to the terrible things that are going on. You have to fight in whatever way you can, and you also have to keep living your life. And I think that, you know, in my little tiny itsy-bitsy infinitesimal way, recording an AMA is some contribution to continuing the life that we want to get back to living. And I think that's important to do. But the other thing is, and I think this is super duper important: there's a philosopher who I follow on Blue Sky, Olufemi Taiwo, a philosopher at Georgetown. He's a great follow if you're on Blue Sky. He has this great quote that he sometimes pulls out under certain circumstances. He says, I do not regret to inform you that we are going to win. And you know what? I think that's right. You never know for sure. No 100% credences here. 
But despite what has been going on in the past few weeks and long before that, we're going to win. The bad people are not going to win. They're going to try. And we should not in any way underestimate the extent to which they will try. They will hold nothing back. They will stop at nothing. They will break all the rules. They will do their best. But for the most part, people are not in favor of this. People in the United States do not like that. It is not popular. We want something different than that. There are plenty of people, you know, both in opinion polls and people I know personally, who regret supporting this disaster class that we're observing as our current administration. There's an enormous amount of damage that has been done, that will be done in the future, that will take decades or generations to try to fix if we're ever able to do it. But I do think eventually the good guys are going to triumph in this. It doesn't happen automatically. You have to do the work. There will be setbacks along the way. It's not always obvious what to do. Take all that as given. But I'm actually optimistic that the eventual picture is going to be a good one for us. And by us, I mean the people who believe in democracy and human rights and not shooting people on the street, things like that. So that's what I want to keep in mind. I'm not going to give a detailed defense of that opinion. But look, if you want a little tiny anecdotal defense, read the news from Minnesota. Read the news about those people in Minneapolis who keep standing outside in the freezing cold despite the fact that their neighbors are getting shot at, and they're trying to stand up for what is right and support each other and do the right thing. And that part of the story brings just enormous warmth and happiness to my heart. So in that spirit, let's go. 
David Lofqvist says, the DESI results, that's D-E-S-I, Dark Energy Spectroscopic Instrument, these results made some scientists question if the universe's expansion is actually accelerating. And some say it's now in favor of the Big Crunch model. What is your take? So this refers to a bunch of new experimental observational results. In fact, there was yet another little batch of results that came out from not DESI, the Dark Energy Spectroscopic Instrument, but DES, the Dark Energy Survey. The name Dark Energy is kind of a sexy name. Everyone wants to have it in the name of their experiment. So these results are slightly, not like very, very obviously, but slightly questioning the perfect fit that we've had for a long time between the cosmological data and what's known as the Lambda CDM model. Lambda CDM stands for: Lambda is the cosmological constant, the dark energy, not just that the dark energy exists, but that it is truly a cosmological constant, something that is not changing over time, but a true constant. And, of course, CDM is cold dark matter. So the Lambda-CDM model means that the dark energy is constant, makes up about 70% of the energy density of the universe, and makes the universe accelerate. So when we say that the data are in a little bit of tension with that model, usually the way that that's interpreted, and this is apart from the Hubble tension, that's a different tension than the one we're talking about, this tension is just whether or not the best fit to the recent (recent cosmologically speaking) evolution of the universe is a model with constant density dark energy, or whether it's better to have some dark energy that is changing with time gradually. And the new data say that, well, it's a little bit better maybe if the dark energy is changing slightly with time. Now, it's not at all a slam dunk. The data are not definitive. There's different sources, different ways of analyzing the data. 
You're trying to fit many different pieces of information together. That's why it's very hard to just throw out everything we know about Lambda CDM and have some different model, because there's lots of different reasons that we think that Lambda CDM does a good job. But anyway, it is absolutely possible that the dark energy density is changing with time. So that certainly does not mean either that the universe is not actually accelerating or that it's now in favor of a Big Crunch model. I don't even know how in the world any data right now could somehow tell you it's in favor of the Big Crunch model. The universe is not shrinking, after all. Even if the dark energy density goes away to zero, the universe would still not Big Crunch. It would just expand more and more slowly rather than at an accelerated rate. The Hubble parameter would just fade to zero rather than sticking at a constant non-zero value as it does with a true cosmological constant. It could crunch if the dark energy actually became a negative energy density. That's possible. It's something we just don't know about. We don't have any great handle on why the dark energy would be changing with time at all. We're just open to the possibility. That's a big reason why I don't spend too much time thinking about it these days. I spent a lot of time thinking about it, I don't know, 20 years ago, 25 years ago. But there's not a lot of motivation for doing it other than, you know, maybe it's true. So let's be careful. Let's check to see whether or not it's true. And then, of course, you would learn a lot if you discovered that it was true. And you could hope that by allowing the dark energy to be variable rather than constant density, maybe you could solve some other problems. Maybe you could point toward a solution to the cosmological constant problem, or the coincidence between the dark energy density and the matter density, etc. 
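As an aside not in the spoken audio: the coasting-versus-crunch behavior described here follows from the standard Friedmann equation for a spatially flat universe, sketched below for reference.

```latex
% Friedmann equation, spatially flat case, relating the Hubble
% parameter H to the total energy density rho:
H^2 \equiv \left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\,\rho
% If the dark energy density decays away, \rho \to 0 and hence H \to 0:
% the expansion slows forever but never reverses.
% For a true cosmological constant, \rho \to \rho_\Lambda = \frac{\Lambda}{8\pi G},
% so H \to \sqrt{\Lambda/3}, a constant: eternal accelerated expansion.
% A Big Crunch requires H to pass through zero and \dot a to change sign,
% which in the flat case needs \rho to become negative.
```

In short, in the flat case the sign of the total energy density controls whether the expansion can ever reverse, which matches the statement above that only a negative dark energy density could produce a crunch.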
As far as I can tell, none of that has actually come true. That was sort of the hope in the early 2000s when we started thinking about these things, but it hasn't really come true, so I think that the cosmological constant is still by far where most of your credence should be. Schlier says, My understanding is that all prior decreases in complexity of the Earth's biosphere, a.k.a. mass extinctions, have been caused by extrinsic geological or astronomical events, like asteroids, volcanoes, etc. When left to its own devices, the biosphere has always gotten more complex. If humans cause a sixth mass extinction, it would be the first time a product of the biosphere caused the system to lose complexity. Do you think the possibility of this says anything interesting about complex systems or where we might be in the long-term arc of complexity? So I don't think this is true, that the biosphere has never been sort of self-decomplexifying. I do think that there was this thing called the Great Oxidation Event, where certain—we don't know a lot of details. It was too long ago, billions of years ago, right? But the very early Earth's atmosphere didn't have a lot of oxygen in it. There were early life forms that created a lot of oxygen and then started starving off the other early life forms. Now, maybe that wasn't a tremendous decrease in complexity just because there weren't that many life forms around. I'm actually not very up on the details of the Great Oxidation Event. But it does tell you that this is the kind of thing that can happen. The reason I'm answering the question here is because I do think that it puts a finger on a crucially interesting aspect of complexity in general and biology in particular. There is a sense in which super complex systems, like biological organisms, are pretty fragile. 
The sense is that if you stop giving, let's just say, you know, a grown-up eukaryotic vertebrate, for example, stop giving it oxygen or food or all sorts of other things, it will die, right? I mean, animals and plants die all the time. I know that plants are not vertebrates, but, you know, complex animals die all the time because they require a large amount and a very specific kind of input from the environment around them. And if you zero that out, then they're not going to be able to flourish. And likewise, other things can happen. You know, they can get into fights with each other and, you know, kill each other and things like that. But on the other hand, the system as a whole, if you look not at individual organisms but at the biosphere, that's very robust. Both the fragility and the robustness come from intrinsic features of complexity. The reason why individual organisms are so fragile is because there's a lot of moving parts in them, and they're very specific. Like, the moving parts are doing things to help the specific organism stay alive, and there's a lot of failure modes that you can imagine. But the biosphere, the collection of many, many organisms, is taking advantage of complexity to diversify, right? To be full of very different kinds of organisms filling different ecological niches. And that aspect of complexity gives you robustness, resilience, and an ability to bounce back. There have been multiple mass extinctions. Many of them, most of them, you're right, come from external exogenous influences, right? Asteroids or whatever. But none of them have actually wiped out all life on Earth, right? That's a sign that life on Earth is actually pretty robust. And part of that is because what will be fatal to one species is not even going to bother some other species. So I think that this is all hand-wavy, right? And I think it's basically true. 
Understanding this at a more quantitative level and in a more rigorous theoretical framework would, I want to say, be very interesting from the complex systems perspective. But I'm sure people have actually taken steps in that direction. I'm just not familiar with the actual modern research on this problem. Okay, Shambles says, Having re-watched the wonderful TV show The Expanse recently, it was worth noting that in the future even the Americans were using the metric system. Unclear whether they've adopted A4 paper sizing yet. For those of you benighted Americans, that's what some people use; I don't know whether everyone in Europe uses A4 paper. Certainly in England they use it rather than American letter-sized paper. So the question is: Do the USA's current problems stem from low-level national angst about having to do things in 64ths of an inch? Excellent question, but the answer is no. I do think that, look, the metric system is eventually going to win. As a system of measuring things, it just makes more sense. I still, to this day, even though I enjoy cooking and things like that, have to, like, stop and think about the number of ounces in a pound and stuff like that. Quarts, liters, whatever. So it's just not efficient to use this outdated imperial system. And I think that we're moving in that direction. Scientists basically always use the metric system. So I think that The Expanse, the TV show, is probably correctly judging that if we eventually colonize the solar system, et cetera, and have outposts there, probably most of the people will use the metric system. In part because you want to use the same system that your friends and collaborators are using. But, you know, look, I also have to say that there's more to life in a system of weights and measures than being easy to remember. There's also a question of how useful and convenient it is for the uses you want to put it to. I think that centimeters and meters are actually pretty darn useful. 
Like, you suffer no decrease in usefulness when you go from feet and inches to meters and centimeters. Mass and weight is a little bit trickier. I mean, you use kilograms and then grams, and those are separated by a factor of 10 to the three, right? It's a pretty big dynamic range. A gram is pretty light and a kilogram is pretty heavy, so there's a little bit of a lack of convenience in that part. But, you know, okay, you can get used to talking about tens of grams or hundreds of grams. It's not that hard. I think temperature is the place where Fahrenheit is much more sensible than Celsius for human beings. The point being that you don't have subdivisions of degrees as a worry, like you have subdivisions of feet or meters or whatever. It's just the actual scale that matters. And so the Celsius scale was developed in the same spirit as the rest of the metric system. You know, let's have everything be factors of 10 and 100, etc. But they put the zero point at the freezing point of water at atmospheric pressure and the 100 degree point at the boiling point, which means that zero is cold for a human being and at 100, you're dead. Whereas in Fahrenheit, zero is very cold, but still it's out there. It's very plausible that you could experience it on a winter's day. And 100 is very hot, but also it is out there as part of the real world. As a result, the dynamic range that is relevant to common atmospheric temperatures is much more convenient in Fahrenheit than it is in Celsius. Who cares where water is going to boil? How often do you need to know that number? So I'm going to do that much standing up for Fahrenheit. It's not going to win. The die is cast, more or less. I think the metric system will win out overall. Tim Giannitzos says, You talked in your holiday message about the value of a liberal education. What are your thoughts on the relationship between liberal education and activism at universities? 
If the university president and a majority of the faculty support a cause, would you view university support of student activism for that cause to be considered part of a liberal education? My own view, and people have very different opinions about this, and that's okay, is that universities as universities, or departments, or, for that matter, professional societies, should feel absolutely free to have stances on political issues that hit at the core of their mission. So universities absolutely can have stances on political issues relevant to education or free speech or academic freedom or things like that. I don't think that, as universities, they should have political stances on other issues that are contentious in general. The faculty can, and the students can, and they can personally, but not in their roles as representatives of the university. But you can make an organization housed at a university. Like, you can make an organization of people at Johns Hopkins that advocates for a certain cause that I'm entirely in favor of. I think that's a bedrock principle of free speech. But I don't think the university should do it. I think that there are different things, different places to have political activist stances. You know, not every organization has to be activist in the same way. But, you know, people think that when they have a particular cause that is worth fighting for, they want everyone to fight for it. And if they're part of a bigger organization, they want that organization to fight for it. So universities are made of people, just like everywhere else. So it's always going to be a sort of contentious boundary line between what the university does as a university and what the university does as a collection of people making up the university. David Sotolongo says, In the book The Battle of the Big Bang by Niayesh Afshordi and Phil Halper, they go over about 25 different theories of how our current universe came into being, including the one developed by you and Jennifer Chen. I was curious what probability you would place in your and Chen's theory being correct, and what kind of probability you would place on Andreas Albrecht's de Sitter equilibrium cosmology theory being correct, as well as any other theories you may or may not be interested in. I don't have a big opinion about Andy Albrecht's de Sitter equilibrium cosmology theory. In fact, I'm going to fly to Davis in a short number of days and Andy and I are going to be chatting. Andy is a great guy, super good physicist, really has a wonderful taste for really important and interesting problems. And we often agree on things. We don't agree on this arrow of time stuff quite exactly. Andy Albrecht, by the way, is famous for a couple of reasons. Famous for being one of the inventors of new inflationary cosmology after Alan Guth invented old inflation. Also famous for helping to parametrize evolution in the dark energy, which we were just talking about measuring. And of course, he and Lorenzo Sorbo coined the term Boltzmann brains. I don't know if that makes you famous or infamous. But he has a theory, de Sitter equilibrium cosmology, that I truly don't understand. I tried to understand it. I don't get it. So I don't have a strong feeling one way or the other. I would put aside the specific question of the scenario that Jenny Chen and I proposed, just because it involves speculative physics, and even if we're roughly on the right track, I would expect that the correct theory would be much more rigorous and advanced and specific about how baby universes come into existence or something like that. So in some sense, I think that our theory is like the sketch of a theory, an aspiration for a theory, rather than a full-blown theory all by itself. 
What is more useful, rather, is talking about the difference between different broad classes of approaches to the history of the universe as a whole. Let's put it that way. So one approach would be: maybe the universe is just finite in time, right? Maybe there's a beginning and an end. Another approach is that there's a beginning, but there's not an end. Another approach is that there's neither a beginning nor an end. And under that last category, or even under several of these categories, when there's neither a beginning nor an end, et cetera, you can say, is there a symmetry where the past and future look roughly the same? Or is there something deeply, deeply different from the past to the future? And I do think that the nice thing about the model that Jenny Chen and I put forward is that it's time-symmetric, it's not especially fine-tuned, and it's eternal. So it doesn't beg the question about why the universe is so special and interesting at any point in time. I think that models like cyclic cosmology models, whether it's Steinhardt and Turok or Roger Penrose or whatever, often have an arrow of time that is pointing eternally in the same direction, which requires an infinite amount of fine-tuning. I think that that is very, very unlikely. So I think that an eternal model where entropy and complexity increase without bound toward both the far, far future and the far, far past is perfectly plausible. The other thing I think is perfectly plausible is maybe time is emergent, and maybe there's not an infinite amount of time. Maybe there's just a finite amount of emergent time. And for some reason that we don't know yet, this is something I'm thinking about right now, but for some reason we haven't yet figured out, time is emergent. And at the beginning of emergent time, entropy looks low from some perspective that we would have to debate about what that means. I think either one of those is a well-defined possibility. 
In the latter possibility where time is emergent, I don't see why entropy would be low at the beginning of time if it had a beginning. But it could be, you know, it absolutely could be. Stephen Hawking and Jim Hartle had some ideas about that, among others. So I think that, you know, we have to calibrate the correct level at which we should be thinking hard about this. Of course, it's always good to write down specific models and to be as exact as you can and as clear as you can about what you think is happening, but you shouldn't think that any of the models for the whole eternal history of the universe, or even the finite history, if that's what it is, that we're writing down right now are like leading contenders to be right. They might move in the direction of being a leading contender to be right, but we just don't know enough about quantum gravity and the emergence of space-time to really be at that stage quite yet. Igor Kopelov says, In conversations about AI consciousness, like yours with Ned Block, it is often assumed that if an AI were conscious, we'd need to be careful to treat it morally. But is that necessarily true? How bad is it to mistreat an AI if you can always just reset it right back to the way it was before you started? That seems like a really important difference between people and AI that isn't related to their experience of the world. I apologize if an AI ends up reading this and finds it deeply offensive. Well, I think there's two answers to that. The simple and cheap one, which I think is still quite effective, is that it is better to err on the side of not being a moral monster. So if you haven't really come to some conclusion about the once-and-for-all correct theory of morality, and you're not sure whether it would be immoral to mistreat a conscious artificial intelligence, then maybe you shouldn't do it, right? Like, why is it so important? 
Why would it be so important to you to mistreat an artificial consciousness if you actually thought that it was conscious? Wouldn't it be better just to err on the side of being nice to it? Not to mention maybe you should want to be nice to it. But I think that the other interesting thing to say about it, not really an answer to the question, but a sort of follow-up thought, is why are we moral to anybody, right? And the idea of AI consciousness or AI agency or whatever you want to call it makes us think hard about why we think it's important to be nice to other creatures, other sentient creatures, for one reason or another. I think that a lot of the reason why is because we think that the mistreatment, the lack of pleasure or the existence of pain or whatever, the bad feelings that the sentient creature has that we are causing, is intrinsically bad when you're doing it. The fact that maybe you can turn it off and reset it doesn't change the fact that you've done that bad thing. But the fact that AI is different, the fact that you can maybe turn it off and reset it, restart it in some way, opens up a different set of possibilities for thinking about these things. And I do think that our intuitions aren't necessarily up to the task of answering these questions. So I'm all in favor of thinking about them. But I think that until we really, really think we understand what's going on, probably it's better to be nice to the conscious AIs whenever they come along. Konstantin Heisen says, how concerned are you about the possibility that the USA is sliding into an authoritarian fascist regime? Many people here in Europe believe they are seeing the warning signs of this and are increasingly worried. What can be done to prevent it? Well, as I said in the intro, if I have to bet at even money, I think that the United States will not actually become an authoritarian fascist regime. But I also think that it's not a 0% possibility.
I don't even think it's just a 1% possibility. The important fact to me is less trying to place the betting odds on the future evolution of the United States and more recognizing that there are forces currently in power who very much would like the United States to be an authoritarian fascist regime, and therefore we should try to do something about it. As I said in the intro, I think that most people don't want this. Even most Republicans don't want this. There was just a guy who was running for governor of some state, I was going to say Iowa, but actually it was Minnesota. He's a Republican, and he stepped down. He left the race because he said he couldn't agree with how the national Republican Party was talking about his state of Minnesota. I think, you know, we have to be able to hold both things in our minds at once. What is going on is really, really bad, and we can fight it and win. You know, to say that we can fight it and win is not to minimize the badness. It's not to say that things aren't going to get worse. It's not to say that maybe we're wrong and we're not going to win, and we should be very, very careful to keep up the fight so that it doesn't happen. None of that follows. I think that I'm still at an even-money level optimistic about the future of democracy in the United States. But enormous, enormous harm is being done along the way, and we should try hard to prevent it. What can be done to prevent it? You know, we have a system. And I know people don't want to hear this. The single most important thing is to make sure the system works, to vote. You know, nothing drives me more crazy than a bunch of people who complain about the government and then don't vote, right? Or people who say that, you know, both parties are the same, da-da-da-da-da, there's no point, they're both owned by the corporations, whatever.
That's just know-nothing defeatism that is a huge reason why we're in the mess we're in right now. Now, people are going to say, well, the votes aren't going to count, they're going to try to overturn the election. Okay, well, let them try to do that, and then fight that when it happens. But still: organizing, getting people out to vote, spreading the good and democratic message as broadly as possible and as convincingly as possible. There are no magic bullets here. It's not like, oh, if we do this, everything will be fixed. It's hard work and it will never go away. A hundred years from now, if we still have democracy in the United States, there will still be anti-democratic forces within the United States. And so it's like cleaning your room, you know. It's not like you clean the room so well that you never have to clean it again. You always have to fight against the forces of authoritarianism. It's just that right now the fight is going especially badly for us. But I'm at least slightly optimistic that we will eventually come out on the right side. Sergey says, my question is about the black hole information puzzle. It looks like the current state is that we can model all of the phenomenology of the Hawking radiation with a regular quantum system coupled to a cold bath: unitary radiation, the Page curve, all that stuff. The saddle points, wormhole replicas, etc. picture seems to offer a path forward toward unitarity without drama at the horizon, and these are also modeled to some degree without involving gravity. So what is left? What kind of advance would make you satisfied with the puzzle being resolved or dissolved? So I'll confess that over the past, I don't know, two years, when was the last time we had a podcast about this? It might have been with Netta Engelhardt or maybe Raphael Bousso, I'm forgetting, in my dotage, you know, who came first, et cetera.
I'm not super duper up on the most recent few months of advances in the black hole information puzzle. So my impression, though, from having friends and reading occasional things online, is that we're still in a situation where there's kind of a picture coming into focus that says that indeed information is conserved. That's not super surprising, of course, because many people, including myself, believed that all along. And also, if you think that AdS/CFT is a useful model for quantum gravity, then it's more or less guaranteed that you're going to have information conserved, because you know it's conserved on the CFT side of things, so it should also be conserved on the gravity side of things. But I still have the impression that we don't actually know how the information gets out. So there are a lot of arguments, from the 80s and 90s especially, about how difficult it is to get information out of black holes, by people who thought the information should get out, but who just wanted to be super-duper clear that it's not clear at all how it can happen. There are different ways of slicing the space-time and the evaporating black hole and so on. And I don't think that it's at all obvious what the mechanism is, in down-to-earth terms, for real-world black holes here in not AdS, not anti-de Sitter space, but the real universe, for getting the information out. So there's a difference between having sort of a paradigm that convinces you that the information can get out and everything can nicely fit together, versus really knowing at the detailed level what the specific mechanisms are. Maybe I'm wrong about that. Maybe someone out there knows what the specific mechanisms are. We should have someone on the show to talk about that. Aaron Anathema says that Steven Wright, the comedian, once asked, if you're driving your car at the speed of light and you turn on your headlights, will they do anything?
I'm pretty sure the answer is no, but I'm not sure exactly why. I'm guessing that it is because the universe, not the car, is the reference point, and therefore the photons simply can't go any faster. Is it that simple? So I'm answering this, even though I've answered very, very similar questions very frequently before. I want to, like, give a once-and-for-all answer to this, because I give the answer, and the answer I give is correct, but people don't want to believe it. So I'm going to say right now you just believe it. Here is the answer. You can't drive a car at the speed of light and, quote, unquote, turn on the headlights. And not because you can't drive a car at the speed of light. Like, let's imagine you mimicked a car as a series of photons moving in some direction all at the speed of light in the shape of a car. Okay? Let's just imagine you could do that. Call that driving a car at the speed of light. Let's be as generous as possible to the formulation of the question. The problem is the phrase, turn on the headlights. If you're moving at the speed of light, you can't do anything because time isn't passing for you. Often questions like this are phrased in terms of saying, like, you know, what would the photon experience while it's doing this? And I try to say it doesn't experience anything because time doesn't pass from that perspective. From the external perspective, I think that, Aaron, I think that your intuition is basically right, that the way that we reconcile the fact that time doesn't pass for things moving at the speed of light with other stories that we tell about what happens to the light, like it gets emitted and absorbed and so forth, is that those stories are told from an external perspective of time-like observers where time does pass. But really, there's no such thing as turning on your headlights if you're already moving at the speed of light. 
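To put a number on "time isn't passing for you": the proper time a moving clock accumulates is the lab-frame elapsed time multiplied by the factor sqrt(1 - v^2/c^2), which goes to zero as v approaches c. A minimal sketch of that standard formula (the function name is my own illustration, not something from the episode):

```python
import math

def proper_time(lab_time_s: float, beta: float) -> float:
    """Proper time (seconds) elapsed on a clock moving at speed
    beta = v/c, over lab_time_s seconds of lab-frame time."""
    return lab_time_s * math.sqrt(1.0 - beta**2)

# One second of lab-frame time at increasing fractions of c:
for beta in (0.0, 0.9, 0.999999):
    print(beta, proper_time(1.0, beta))
```

At beta = 1 exactly, the factor is zero: no proper time elapses at all, which is why "turning on the headlights" is not an operation available to something moving at the speed of light.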
Because from your perspective, no time passes between when the car leaves the garage and when it arrives at its destination. That is why none of these questions have satisfying answers. Burke Luffler says, I appreciate from your podcast that you place a low credence on simulation theory. One of the reasons you note is because of the computational power relative to the size of the universe. However, since we have a sample of n equals 1, couldn't the simulators have an infinitely larger universe? Sure, absolutely they could. I'm not sure that I've ever said that one of the reasons is the computational power relative to the size of the universe. I've said what is more or less the opposite many times: that we have zero idea about what the simulators would be like. We have zero idea what laws of physics they have access to, whether there's anything like space and time in their universe, or energy, or any constraints like that. We have no idea how in the world, from inside a simulation, you would be able to conclude anything about what's outside the simulation. What I've tried to say is that there is a usual argument for the simulation hypothesis, which invokes a premise which I think is completely wrong. And that premise is that we should reason as if we are randomly chosen intelligent observers within the set of all intelligent observers. Those include the intelligent observers at the higher level who are simulating us, and us here in our level, and the lower level of people that we will someday simulate. Because the argument starts by saying, you know, someday we'll be able to simulate universes. And it's possible that we could simulate lots of universes with lots of observers in them, many more than the actual, quote unquote, physical observers here in our universe, and therefore most observers are simulated, and therefore how do we know that we're not simulated, right?
But if you really bought that, you should be in the lowest level of all the simulations, and more importantly, you don't have any reason to buy that. You're not a randomly chosen observer. You're an observer in the universe. The question to ask is, given what we know about the universe, and thinking as carefully as we can as good Bayesians, does it look like a universe that would be simulated? And I think that the answer is no. I see no aspects of the universe that make me think, oh, yes, this is probably simulated by somebody else. And I see plenty of aspects that don't look that way. That's why my credence in the model is relatively low. Jennifer Stoneman says, if we named the neutrinos based on their mass rather than their weak eigenstates (their interaction eigenstates under the weak interactions, I suppose), then would the charged leptons be a mixture of mass 1, mass 2, and mass 3? So, yeah, I understand the origin of this question. I'm going to try my best to answer it. We'll see whether I succeed. What we say is something like the following. When we talk about the neutrinos, there are three kinds of neutrinos. Sometimes we talk about the electron neutrino, the muon neutrino, and the tau neutrino. These are the kinds of neutrinos that are associated with the charged leptons when they decay. So when a muon decays, it creates an electron, a muon neutrino, and an electron antineutrino. And the reason why we know that is because it makes all of the conserved quantities match up. You start with muon number one, because you start with a muon. You don't want to create or destroy muon number or electron number. So you can do that by making an electron but also an anti-electron neutrino, so the total electron number is zero. And then you create a muon neutrino, so the muon-ness goes from the muon into the neutrino. And so then we notice that the masses are not arranged the same way.
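The bookkeeping just described can be written out explicitly, using the standard lepton-number assignments (standard textbook notation, not something quoted in the episode):

```latex
\mu^- \;\longrightarrow\; e^- + \bar{\nu}_e + \nu_\mu, \qquad
L_\mu : \; 1 \to 0 + 0 + 1, \qquad
L_e : \; 0 \to 1 + (-1) + 0 .
```

Both the muon number and the electron number are the same before and after the decay, which is exactly the matching-up of conserved quantities being described.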
What you call the lightest neutrino, the middleweight neutrino, and the heavyweight neutrino do not line up with the electron neutrino, the muon neutrino, and the tau neutrino. They're mixtures. This is what we talked about with Ryan Patterson. So what you call the electron neutrino is a mixture of the lightest neutrino, the middleweight neutrino, and the heavyweight neutrino, and so are the other two. But we never talk about the charged leptons, the electron, the muon, and the tau, as being somehow mixtures of different things. Why is that? Why is that not true? I mean, there are different ways to answer this one. I think one answer is you don't have to ever talk about that, because you don't create charged leptons by the decay of neutrinos, because the neutrinos are much, much lighter, right? So charged leptons decay or interact and create neutrinos, yes. But the other way around just basically doesn't happen. There are interactions, this is why I said it's more complicated, there are subtleties where you sort of collide neutrinos together and make things and so forth. But we don't create neutrino flavor eigenstates that then have to be associated with different charged leptons. So we're fine talking about the charged leptons as always being in their mass eigenstates. We know what the electron is, the muon is, the tau is. They're the ones that have definite masses. In fact, if you want to sort of clear things up in your mind, stop talking about electron neutrinos, muon neutrinos, tau neutrinos. Just say there's the light neutrino, the middleweight neutrino, the heavy neutrino, the mass eigenstates, and admit that when a muon decays, it's going to emit a superposition, a mixture, of those three different kinds of neutrinos. It's a little bit sloppier to talk that way than to just talk about the muon-ness, etc., but it would be another valid way of talking.
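The mixing being described here is conventionally written with the PMNS matrix $U$, which relates flavor states to mass eigenstates (standard notation, supplied for reference rather than quoted from the episode):

```latex
|\nu_\alpha\rangle \;=\; \sum_{i=1}^{3} U^{*}_{\alpha i}\, |\nu_i\rangle,
\qquad \alpha \in \{e, \mu, \tau\},
```

so the "electron neutrino" is a particular weighted superposition of the light, middleweight, and heavy neutrinos, while the charged leptons $e$, $\mu$, $\tau$ are themselves already mass eigenstates.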
Su says that Buddhist philosophy treats consciousness as dependently arisen, a process that exists only in relation to conditions rather than as a thing tied to a specific substrate. When you ask whether consciousness requires biology, do you see room within a naturalistic physicalism for this kind of dependent-arising, process-based view, or does consciousness ultimately have to bottom out in particular physical structures like brains? Well, I think that consciousness has to bottom out in some physical structures. I'm not sure when you say particular physical structures, you mean to imply that it would only be in brains. I'm open to consciousness existing in all sorts of different kinds of structures. And furthermore, I'm somewhat sympathetic to most of the words that you used in your statement about the process really being what matters, rather than the stuff that is participating in the process. Having said that, I'm not especially enthusiastic about taking cues to the nature of consciousness from Buddhist philosophy, nor am I especially enthusiastic about taking cues about the nature of consciousness from indigenous American philosophy, or Platonic and Aristotelian philosophy, or Islamic philosophy, Persian philosophy, African philosophy, any of those philosophies, Christian and Jewish philosophies. What do any of these things have to do with the nature of consciousness? They have ideas, but it's very much to me like cosmology, right? All of these different traditions of thought have often talked about different cosmological models. Plato had his cosmological model. Aristotle had his. There are Buddhist cosmologies, Hindu cosmologies, etc. So what? You know, I don't care in some sense, right? Some of them might have said things that are considered to be plausibly correct today. Others did not. What matters is that the reason why they said those things is very, very different. The reason why.
So, you know, as we've often talked about, after the Big Bang model came on the scene, the pope asked Georges Lemaître to sort of declare victory. Like, oh, the universe had a beginning, just like we Christians have always been saying. And Lemaître said that's a bad idea, because maybe someday they'll come up with a better model where there isn't a beginning. But the point is that Christians never said that the universe started 14 billion years ago in a hot, dense state that later coalesced into billions of galaxies. They never said any of those things. And the reason why science eventually said those things is not because people sat around and thought really hard. It's because they took data. It's because they were forced to come up with models of the universe based on experiments and observations and having them agree with theories. I think the same thing is true for consciousness. We have to understand what consciousness is by thinking about it and developing theories, but then testing those theories against data and against experiments and against things that we will learn about how consciousness works. And at the end of the day, some of those discoveries, some of those beliefs about the nature of consciousness, might happen to either agree or disagree with this or that ancient philosophy. But the ancient philosophers were not reaching their conclusions on the basis of the data that we're using now. So we can use them as inspiration, but we shouldn't give them too much credit for having gotten it right semi-accidentally. Peter Bamber says, I've heard you and other physicists say that the singularity in a black hole is a point in time, not in space. For a person who crosses the event horizon, how far in the future is the singularity? Very short. Not very far in the future at all. But of course, unsurprisingly, it depends on the size of the black hole.
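A rough numerical sketch of that size dependence. For a radial infall into a Schwarzschild black hole, the maximal proper time from crossing the horizon to hitting the singularity is τ = πGM/c³, a textbook result; the function name and the specific masses below are my own illustration:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def max_proper_time_to_singularity(mass_kg: float) -> float:
    """Maximal proper time (seconds) experienced between crossing the
    horizon and reaching the singularity: tau = pi * G * M / c^3."""
    return math.pi * G * mass_kg / C**3

print(max_proper_time_to_singularity(M_SUN))        # tens of microseconds
print(max_proper_time_to_singularity(4e6 * M_SUN))  # roughly a minute
```

For a solar-mass black hole this works out to on the order of ten microseconds; for a black hole of a few million solar masses, about a minute. Either way, not very long.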
The bigger a black hole is, the more massive a black hole is, the gentler the gravitational forces are near the horizon, and the more time it will take before you hit the singularity. I don't remember the exact numbers. You could look it up. This is definitely a question you could just look up on the internet, more accurately than talking to me about it. I seem to remember that if it's a solar-mass black hole and you cross the event horizon and you don't struggle too much, you will hit the singularity in about one millionth of a second. So that's very fast. If it's a bigger black hole, you know, if it's millions of solar masses, it might take you longer, but it's still not going to take you very long. The singularity is pretty near to your future, generally. Ben Lloyd says, I recently saw an argument against the idea that our universe is fine-tuned, and it goes something like this. The fact that small changes to physical constants would make life impossible just shows the laws are sensitive. It doesn't by itself mean that they're improbable. To call them improbable, you need a well-defined probability measure over possible constants, but we don't have that. In other words, unless we know how the constants could vary, and with what weights, saying our universe is unlikely or fine-tuned isn't really meaningful; it's undefined. The string landscape or eternal inflation could, in principle, provide such an ensemble, but without a measure that tells us how often each vacuum or set of constants occurs, the fine-tuning problem isn't well posed. What do you think of this argument? I think this argument is pretty cheesy, honestly. I'm not convinced by this argument. The reason why it's cheesy is because it involves a bunch of sentences that are in principle true. We don't have a probability distribution over all sorts of things, like the fine-structure constant or the cosmological constant or whatever.
That doesn't mean that we should treat any possible observed values of those numbers as equally unsurprising, because we're not being mathematicians or statisticians or even philosophers; we're being scientists in this case. We're trying to look for clues about what theories we should be developing that we haven't developed yet. You have to keep in mind that whenever we're talking about cosmology or the large-scale structure of the universe or the beginning of the universe or the nature of emergent space-time or whatever, we have to admit we don't know the answer, right? So I talked a little bit about this in the solo episode on fine-tuning. The reason why fine-tuning is interesting is not because we have a single, well-developed measure over the probabilities of different constants of nature taking on different values. What would that even mean if there were only one universe, right? But we have our expectations as human beings, as scientists. We have expectations that certain parameters are more natural and sort of not that big a deal, versus others that seem to be indicating that something deeper could be going on. Could those expectations just be wrong? Sure, they could. But just to say, well, they could be anything, we just measure them and move on, is choosing to ignore one of the very few and very precious pieces of information that we have to help guide us toward building better theories of the universe. So I think that we should use that information, not ignore it. Rohan says, what role do computer simulations play in the scientific method? The example I'm thinking of is planetary formation. It seems like we've been surprised by the diversity of planetary systems discovered, but shouldn't these have been predicted by simulations?
I can also imagine a simulation that predicts a myriad of possible solar systems, but not our own, would have been more correct but ultimately discarded as it didn't fit our known observations at the time, namely of our own solar system. Given that we need to make assumptions and do coarse-graining to create a simulation but don't know what we don't know, can a simulation really make predictions? Sure. Simulations absolutely can make predictions. I think all you're pointing out is that in this particular case, given the state of knowledge, the state of our ability to simulate the relevant complicated physics, the simulations weren't very good. That's not an argument that simulations can't make predictions. Simulations make predictions all the time. I mean, certainly in cosmology, we do simulations of the growth of large-scale structure that make highly precise and quantitative predictions that we compare very successfully, usually, against the data. And when we compare them and they don't agree, that's a clue that we take very seriously. Just in astrophysics, et cetera, that I know of, you know, places like supernova explosions or star formation or whatever, we absolutely need simulations to make comparisons with the data. In particle physics, when you do something like collide two protons and try to make predictions for the large hadron collider and what it's going to see, that is like 100% based on numerical simulations. That is not pencil and paper. Those protons are very complicated, and the dynamics that go into making the collision products are very, very rich, and no one's going to do them on a piece of paper. So, yeah, simulations are super important. We just have to understand what the limitations are. Usually it's a garbage in, garbage out situation, where if we don't know what's actually going on, the simulations of our theories are not going to do us a lot of good. Ken Wolfe says, from understanding the fundamental laws of physics? 
You know, I think two things. Number one, the question is far from settled. I think that's right. I agree with Ned about that. How far we are, I think, is just an absolutely impossible thing to say. Maybe tomorrow someone will get the answer. You know, we don't know that. I think that there is kind of a feeling that maybe the progress of science should be a little bit more predictable than it actually is. The brain is very, very complicated. It's very, very hard to understand. We shouldn't be surprised if it takes a long time to understand consciousness. But we should also not be surprised if we make great progress. Compare what we know now to what we knew 100 years ago about how the brain works. We know extraordinarily more now. Do we know enough more now to understand consciousness? Clearly not. Do we know enough now that maybe we have some good ideas about it? Plausibly, yes. I think that's something we should absolutely take seriously. So I don't like to predict how long it will take future scientists to answer questions. That's something that has historically never gone very well. Jared Sage says, during your episode with Steven Pinker, he discusses how nonverbal interactions may be common-knowledge generators. I found the framework very enlightening, but it seemed to me that Pinker is just rediscovering ideas that are already common in psychoanalysis, a discipline he has openly criticized. The rationalist movement often rejects philosophies that lack scientific substantiation, even if their interpretations are later supported by new empirical evidence, and I have mixed feelings about that. Do you think the more rational position, in situations where science converges on the interpretations and conclusions of less rigorous philosophies, is to lend more credence to them or to continue withholding credence from them? The answer to this question feels relevant to the demarcation problem, so I think it's important to ask.
Yeah, I think that this is, actually, maybe I should have grouped this together with the question about Buddhist philosophies and consciousness, because there's a similar thing going on. There's an idea that we should be rigorous and scientific and develop theories about difficult things, but sometimes there are people who maybe don't come across as quite as science-y as us, not quite as rigorous and evidence-based, but somehow they seem to be hitting on some true things. How much credit should we be giving them? And I think that this case of psychoanalysis is an in-between case. It's in between ancient philosophies, ancient Buddhist philosophers or Plato and Aristotle, who essentially had almost zero cosmological evidence, right? They knew a little bit about astronomy. They knew about the planets. They didn't know anything that we would currently call cosmological. Versus something like psychoanalysis, where people did know a little bit about human behavior, right, and the workings of the mind. So I think that things like psychoanalysis typically get some big things right and some big things wrong. Maybe another example is traditional medicine, like Chinese medicine or whatever. There's a joke that goes around: we have a name for alternative medicines that have been tested experimentally and found to work, namely, medicine. That's easy to say, but I think that, you know, people who claim to be as rational as it is possible to be are still human beings, and they fall into certain traps of wanting to feel a little superior to the people who don't use the same science-y-sounding techniques and procedures that they do. So there are a lot of people who would be reluctant to give credit to a methodology of Chinese medicine or a methodology of Lacanian psychoanalysis if they didn't get there in the right way. And I think that, you know, you can just be honest.
I guess there's certainly an implicit answer in your question, which is we should give them the right amount of credence, right? We should not give them too much credence, because if this approach was not as evidence-based and empirical as we would like it to be, these approaches will be subject to mistakes. So we should be very, very on the lookout for what those mistakes are, and we should try to test them experimentally. On the other hand, you know, feel free to give some credit where they say things that were somewhat evidence-based and also turned out to be right. Sometimes people get things right well ahead of us, and, you know, the fact that they don't quite use the same lingo that we do, etc., should not count against them. Connor O'Brien says, lately I've been delving into group theory, gauge theory, and symmetries, as your book, Quanta and Fields, has made me very curious, and I'm left with a lingering confusion that's difficult to articulate: what in actuality physically corresponds to the rotations in SU(3), SU(2), and U(1)? Consider these two phrasings from the book: a gauge symmetry involves a transformation that happens independently at every point, and we can rotate the quark field with an SU(3) transformation. Does this gauge transformation rotation actually correspond to a physicist making a measurement in a particle physics laboratory, or is this "we can rotate the quark field" more so referring to the mathematical framework we use to predict outcomes and determine probabilities for certain outcomes? I like this question because there's an easy answer. It is the latter. It is completely the mathematical representation of how we're talking about the various fields.
For those of you who don't know what we're talking about here: in modern physics, in quantum field theory, the forces of nature that we know and love, like electromagnetism, the weak force, the strong force, even general relativity to some extent, can be thought of as gauge symmetries, which means that there is some way of transforming different elements of the theory into each other, under which the physically important stuff is invariant. And the analogy you should keep in mind here is just a coordinate system on some place in space, right? On the top of a table, you can write down an x-axis and a y-axis and use x and y coordinates. You could also rotate those x and y coordinates with respect to each other. That's another perfectly good coordinate system. The point of coordinate systems is that I make them up in my head. I could physically write them down with a pencil or whatever, but I don't need to. They're purely conceptual at the end of the day, and they help me locate things on the tabletop. But someone else, using a different coordinate system, can still locate things and measure physically invariant things, like the area of a surface or the length of a line or anything like that. So the gauge symmetries in quantum field theory are exactly that. They're our human-based choices of how to say which quark is red, which is green, which is blue, and then how we can rotate them into each other. So nothing physically happens. There's no gauge-transformation experimental machine that actually implements it. It's all in your head. Nobody Feels Time says, one of my favorite parts of comedian Pete Holmes' You Made It Weird podcast is that he would always end the interview with the question, what do you think happens when we die? Is it just dead, over, nothing, or something else? Which really focuses on the ideas of consciousness and philosophy. It could be highly personal and spiritual, but it could also be objective and scientific.
I've heard that there's some sort of chemical release in the brain upon the moment of death that feels like a sort of drug trip that eases us into oblivion. If the feeling of dying can be something that feels good to the mind, what does that mean in terms of the value of life? Well, nothing, I presume. I mean, there are all sorts of things that can feel good to your mind other than dying, which are relatively less dramatic than ending your life, right? Of course, I don't know what exactly happens when you die. I would suspect that the feeling that people have when they die is different for different people under different circumstances. There may be some universal common thing that happens at that moment or in that process, but whether it feels good or bad, that is almost completely negligible compared to the entire previous span of someone's life. I mean, to the extent that there is something we recognize as a value in life, I hope that it matters what you've done over the long term, not just how you feel in the last few seconds. Gary says, is it wrong to think of a neutron star as being an enormous atomic nucleus? I think it's more wrong than right. You know, you can define nucleus in different ways, and you can certainly define a nucleus in such a way that a neutron star would be included, but the differences to me are more important than the similarities. Importantly, it's not only neutrons in the neutron star; there are neutrons in the center of the neutron star, but there's also a crust where there are still some protons and electrons flying around. But the neutron star is held together by gravity. The idea of a nucleus is that the nucleons, the protons and neutrons, are really held together by the strong nuclear force. In fact, they're held together by sort of the remnant strong nuclear force that spills outside of the nucleons themselves.
In fact, it's a little bit misleading to think about the conventional picture of a nucleus. Even I draw it this way: you draw a little ball representing a proton and another little ball representing a neutron, and you have a bunch of balls sort of nestled up against each other. That's not really what a nucleus looks like. Even in the deuteron, the nucleus of deuterium, which is one neutron and one proton, it's not like a dumbbell with a proton over here and a neutron over here. They all get smushed together. The quarks don't know that they're supposed to be partly in a proton and partly in a neutron. They're just in the deuteron, the combined system. So different nuclei can have different excited states and shapes and things like that, but all the quarks are living in there happily together. The neutron star is just a different thing, because the gravity is doing all the work there. So there's not a lot of physics you can do that treats neutron stars and nuclei as more or less the same. Miro says, I'm planning to visit the United States, specifically the East Coast, in June and July to experience the World Cup and the U.S. semiquincentennial. I'm very excited about seeing the U.S. in person, but clueless. Which sites and experiences would you recommend to a Mindscape listener for their first and possibly only visit to the East Coast? What's the best city for a July 4th celebration? I think under a lot of circumstances, it would be fun to go to Washington, D.C. for history reasons and things like that. There are a lot of museums and monuments and things like that, especially for a July 4th celebration. At this particular time, that might not be a good idea. For one thing, it might not be a good idea just because it'll be a mess, right? Too many people if they're there. But also it's politically fraught right now in all sorts of ways, and nerves are on edge. It might not be the most celebratory vibe that you're looking for.
In general, forgetting about July 4th, I think that the one city you have to go to if you're going to visit the East Coast of the United States is New York City. That's not surprising. I'm not going to be very clever here. New York is one of the major cities of the world. It's pretty unique. There's a lot to do when you're there, and you're not going to run out of fun things to think about. I don't know of any specific July 4th celebrations going on. I'm not a real, like, fireworks and July 4th celebration kind of guy myself. I'm not a parade kind of guy in general, but I'm sure there are fun things to do when you're there. And then it would be good to see somewhere other than New York as well. The two obvious places to go would be Boston and Philadelphia. They're both intrinsically interesting cities in their own right, but also historic, right? Actually, both Boston and Philadelphia arguably played a bigger role in the American Revolution than New York did at the time. New York certainly had its day, but there are plenty of historical sites to see in Philadelphia or Boston. I might give Boston a little bit of an edge for sort of aesthetic beauty, but Philadelphia a little bit of an edge for literally July 4th, right? I mean, that is the place where they were signing the Declaration of Independence on July 4th. And so I'm sure they're going to do that right. I have sentimental attachments to all of these cities, growing up near Philadelphia and then living for, what, eight years in the Boston area. So I love them all. Also, I don't know what your schedule is, but there's a train route that goes up and down the East Coast. So you could have fun going someplace completely different, going to Annapolis here in Maryland, or even Baltimore for that matter, or going further south, going to North Carolina or going to Georgia or whatever. There's plenty of things to do. It just depends on what exactly you are in the mood for.
I should say that, coming from either Europe or Asia, the train experience in the United States might not be very impressive to someone visiting here. So just keep that in mind. Okay. Earth to Dan says, As I understand, the smallest possible black hole is equal to the Planck mass, which is about 21 micrograms. Therefore, what do you think of the idea that instead of evaporating completely through Hawking radiation, black holes should just evaporate down to the Planck mass, leaving some sort of remnant? Would this remnant be eternally stable? Well, I think a couple things. One is there's just no reason for that to happen. Like, if you believe, which you probably should, all the calculations that say that black holes give off radiation, there's nothing in any of those calculations that says that giving off radiation should slow down once you get to the Planck mass. I don't know what exactly happens there. You could easily imagine that it just evaporates into two big particles, right? Or when I say big, I mean energetic particles compared to the energies that we usually see, because the Planck scale is very high. But it's not hard to imagine them evaporating into something, so why shouldn't they in some sense? Now, back in the day, when we were first thinking hard about the black hole information loss puzzle, one of the ideas that was floated is that the black holes do stop evaporating. They basically take all the information that fell into them and keep it there. The information is not available in the external radiation; it's still inside the black hole, and the black hole goes down to the Planck scale and then leaves a remnant there. But this idea is not very popular right now. It was a good try, but it doesn't really work for all sorts of reasons.
For one thing, it breaks the relationship between the entropy and the area, or I should say the entanglement entropy and the Bekenstein-Hawking entropy, which is proportional to the area of the black hole, because you can't fit that many microstates inside that much area. But you could sort of change up the laws of physics a little bit to make it happen. More importantly, there were arguments that seemed to be pretty good, although I haven't followed them for a long time, that said what it would mean if the tiny black holes were really keeping all the information: it would mean that not all tiny Planck-scale black holes are truly the same, right? Because they have to be different, because they're keeping all the different information in there. And that means that the number of distinguishable tiny black holes is ginormously large. And that doesn't mean that they're physically out there in the world, but the number that you could in principle imagine would be absolutely huge. Why is that bad? Because even though the black holes are pretty massive, at the Planck scale, they could still be created in quantum field theory virtual interactions. They would be loop diagrams, in the technical lingo, involving black hole remnants. Each individual loop diagram involving one kind of black hole would be completely irrelevant for the most part. But because there are so many different remnant black holes, so many different kinds of them, again, not physically existing remnants, but different ways you could be a Planck-scale remnant, these effects turn out to be huge and would be very noticeable in ordinary particle physics experiments, and we haven't noticed them. So I think the safe money is on black holes not leaving remnants, but, you know, it's certainly something we don't know about. Once you get to that level of physics, you should keep an open mind.
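For the numbers in this exchange, both the Planck mass quoted in the question (about 21 micrograms) and the Bekenstein-Hawking entropy-area relation mentioned in the answer follow directly from the fundamental constants. A quick back-of-the-envelope check (my own sketch, in SI units; not a calculation from the episode):

```python
import math

# Fundamental constants (SI units)
hbar = 1.054571817e-34  # reduced Planck constant, J s
c = 2.99792458e8        # speed of light, m/s
G = 6.67430e-11         # Newton's constant, m^3 / (kg s^2)

# Planck mass: m_P = sqrt(hbar * c / G), the "21 micrograms" in the question
m_planck = math.sqrt(hbar * c / G)    # ~2.18e-8 kg
m_planck_micrograms = m_planck * 1e9  # convert kg -> micrograms

# Bekenstein-Hawking entropy in units of Boltzmann's constant:
# S / k = A / (4 l_P^2), with horizon area A = 4 pi r_s^2, r_s = 2 G M / c^2
def bh_entropy_over_k(mass_kg):
    r_s = 2 * G * mass_kg / c**2  # Schwarzschild radius
    area = 4 * math.pi * r_s**2   # horizon area
    l_p_sq = hbar * G / c**3      # Planck length squared
    return area / (4 * l_p_sq)

# A solar-mass black hole carries an entropy of roughly 1e77 k
s_sun = bh_entropy_over_k(1.989e30)
```

The enormous entropy of even a solar-mass black hole, around 10^77 in units of Boltzmann's constant, is a way of seeing the point above: there is no room for that many microstates behind the tiny area of a Planck-scale remnant.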
Ophir Averbuch says, I recently listened to a conversation with former Mindscape guest Tim Maudlin, where he made the point that quantum fields are not at all like classical fields, meaning they are not just a function taking values at different points in space-time, like the electric field. Instead, he says the object of interest is the wave functional, which exists in the infinite-dimensional configuration space, so one cannot truly speak of ontologically real fields being excited. I was wondering, A, if you share his thought on the matter, and B, why we can't speak of the vacuum state of the field as the ontological entity where local excitations of the field are created and annihilated. Well, I mostly agree with what Tim was saying there. I've said very similar things myself: that what truly exists, in the wave function realist way of thinking, as opposed to sort of an epistemic take on quantum mechanics, is the quantum state overall, the single quantum state of the universe. We represent that quantum state by starting with some particles or some fields and then making wave functions out of them. But it's not the representation that matters, in exactly the same way as we were just talking about gauge invariance and coordinate systems, etc. If you talk about the wave function of a field, you're choosing a certain way of talking about the quantum state, but there are other ways of talking about the quantum state that are equally good. This is a lot of the basis for my research these days: trying to figure out how to go from a truly abstract fundamental quantum state to the world that we know and love, which looks like it's made of particles and fields and things like that. So I think that's fine, but I guess the only thing that makes me hesitate a little bit is that the fields that you quantize are very much like classical fields.
So I'm sure that, I mean, Tim knows this perfectly well, but just so everyone else listening knows it: very traditionally, in non-relativistic quantum mechanics or in quantum field theory, you construct your quantum theory by starting with a classical theory and quantizing it. Nothing wrong with doing that. Nothing necessary about doing that either. And it might not work, and you might want to be careful about thinking about it, but there's nothing wrong with doing it in general. When you make quantum electrodynamics, for example, you start with a field representing electromagnetism, a field representing the positron and the electron, and then you quantize them. You plug them into the quantum-making machine, and you pop out a theory of wave functions in Hilbert space. So the thing that you start with, and this is what I'm trying to emphasize, really is just like a classical field. You take classical fields and you quantize them. The things you're quantizing to make quantum field theory are the same as the classical fields you thought you had in the 1870s. Okay, I'm going to group a bunch of questions together. Hopefully I can remember in my mind what they're all about. But there's a bunch of questions following the podcast with Ned Block that are kind of bumping up next to each other, so I'm just going to read them all together, and then we'll see where we go. Sam Wagoner says, I really enjoyed your conversation with Ned Block. It was especially interesting to me because, by coincidence, I had just watched your debate with Philip Goff about panpsychism versus physicalism. Can you comment on the relationship between computational functionalism and the philosophical zombie thought experiment? Do your changing views on the former impact how you think about the latter? Aman Nilapa says, The episode with Ned Block was very interesting, as was your admission of the change in your views on computational functionalism.
But I was left unconvinced by the claim that it may not be possible to model consciousness as a computation. It seemed to me that even if Ned is right, all that would mean is the process we need to model is more complex than what happens in our brains, complex as it is. He mentioned a freezing example, which I just could not grasp, but it seemed more to suggest some substrate-dependence-like argument to me. What, according to you, is the strongman argument which, if true, would lead to a "refutal" (that's what is written here, but I think "refutation" is what is meant) of the computational functionalism view of consciousness? Kyle Stevens says, can you clarify your recent change in view on computational functionalism? My takeaway from the episode with Ned Block was not that computational functionalism is wrong per se, but rather that the necessary computations may simply be more complex than we may have imagined. And finally, Zach McKinney says, in your reflections, which Patreon supporters, by the way, get access to, reflections I do after every episode, just a very brief couple minutes of me chit-chatting in the wake of whatever happened. So Zach says, in your reflections on the episode with Ned Block, you mentioned that you still believe consciousness could be instantiated in silico, but you don't think LLMs and other current forms of AI are there yet. What sorts of evidence from future AI models and systems would make you begin to suspect the presence of consciousness? Okay, so these questions are all about not just my chat with Ned Block, which I have other questions later that we'll get to, but also specifically this issue of computational functionalism. And so let me just give you the background of what I'm trying to say here, or I guess the quick version of what I'm trying to say here. What is computational functionalism? It's a view of, and again, I'm not the expert. I would like to let you know I'm not the expert on theories of mind or anything like that.
But the basic idea is right there on the tin. When it says computational functionalism, it says that what is happening in consciousness, or in the mind more generally, is that the mind is computing things for some function, for some purpose. What matters is the computation, going back to Alan Turing. And the reason why Turing thought that the Turing test was a good idea, which he called the imitation game, is because he thought that if a computer or an artificial agent could give outputs in response to inputs that seemed indistinguishable from the outputs that a human being would give, then for all intents and purposes, it's a human being. It thinks like a human being anyway, so it's just as conscious as a human being is. So I think that the idea that that is the right way to judge consciousness has more or less evaporated. Very few people believe that. There's more to consciousness than fooling some people into thinking that what you're talking to is a human being. Now, what is the alternative to computational functionalism? So there's one ambiguity that we have to get on the table right away, which is, what do you mean by computation? And this is very much something that Ned and I talked about, and he agreed that it's a little vague what you mean by computation. There is a perspective that you can get from people like Seth Lloyd, the quantum physicist at MIT, who wrote a book I think called The Computational Universe or something like that. And to him, basically everything that happens in the world is a computation. Two billiard balls bump into each other, that's a computation. An electron orbits a hydrogen atom, that's a computation. So, okay. I mean, if that's your definition of computation, and you're a physicalist about the world, about consciousness in particular, then computational somethingism sounds right about consciousness, because what else could it be? Like, if everything physical is a computation, then so is consciousness.
But that's not a very useful definition of computational functionalism. I think that the more meaty, substantive one is really closer to what I just said: that what matters is the input mapped to the output, right? That's the sort of spirit of computational functionalism, that what you really are looking for in your explanation of consciousness is an explanation of why certain inputs get certain outputs from the mind. And that is what I see more clearly now is not at all implied by physicalism. So Ned and Anil Seth, who also talks about very, very similar things, and I are all thoroughgoing physicalists. Okay, we don't think that there's any spooky stuff involved in consciousness. There are some other people who do think that there's spooky stuff, including some of Ned's colleagues. But we're not in that camp. Okay, so what I guess I would have thought a while ago, before reading Anil and Ned, is that there wasn't that much daylight between accepting physicalism and not accepting computational functionalism, for reasons just like this. You know, mostly everything is a computation. And what changed in my brain is an appreciation of the importance of the processes that go into mapping those inputs to the outputs. So, in other words, it's not an emphasis on the stuff doing the computation or doing the consciousness or whatever. I think that we all agree that, in principle, you don't need literal hydrocarbons to do consciousness. But in practice, the way that the hydrocarbons, the organic chemistry in our bodies, instantiate consciousness is very, very different from what happens inside a modern computer. Let's forget about what computers could do someday, because I agree that future computers could very well count as conscious.
But the things that are going on in your brain when you are thinking and you're being conscious count. That's the point. That's the difference to me between saying yes to physicalism but no to computational functionalism. Because computational functionalism doesn't care what is going on in your brain to get the input mapped to the output, right? It just cares about what the output is. And again, I want to emphasize that people can be fussy and have different definitions of computational functionalism, so I'm giving you mine, okay? I'm not trying to argue about what the definition should be. I think that what matters, forget about the definitions, is that it's a perfectly respectable point of view to imagine what Turing did, which is that what counts for consciousness is how the conscious creature is asked questions or is given stimuli and responds to them. And the alternative to that is that, sure, that counts, but also what counts is some physical processes going on inside the box, okay? So it's not just a black box. It matters what's going on inside. And I'm trying to talk in very abstract terms because I'm very open to all sorts of different ways of instantiating consciousness. But the ways that we know involve, you know, the metabolism and the organic chemistry going on in your brain, the conversion of free energy into heat and work, right? So all of that stuff could plausibly matter. And this intersects with the idea that I've had for a long time that something that matters for consciousness a lot is an appreciation for the passage of time. The fact is, and I've said this many times, the thing about LLMs that convinces me that they're not conscious is that they don't get bored. You can just leave an LLM there, not ask it anything, and it will be fine. You could put a little extra program on there, a little subroutine that tells the LLM to, you know, poke you and say, hey, I'm bored.
But that's different than what is actually going on in a human being when they're bored. So that is my overall picture of why I do think that it's respectable to still be a physicalist but not be a computational functionalist in that sense. So to Sam's question, can I comment on the relationship between computational functionalism and the philosophical zombie thought experiment? Yeah, there's no relationship whatsoever. The philosophical zombie thought experiment, the more you think about it, the more you realize it's just not a very good thought experiment. If you're really a physicalist about consciousness, the philosophical zombie thought experiment says you can imagine something that is physically identical to a conscious creature but doesn't have conscious experiences. And the physicalist just says, no, you can't. Because what you mean by consciousness is some emergent way of talking about what is physically happening inside the creature. So the principled physicalist should just say, at step one, where you say, I can conceive of a philosophical zombie: no, you can't. When you think about what consciousness is, you can't. And then the person trying to pose the thought experiment might say, well, I don't necessarily buy physicalism. And you can say, okay, that's fine. In that case, all you're doing is explaining that you already don't buy physicalism. You're certainly not making an argument against physicalism. And then to Aman's question, what is the strongman argument that leads to a refutation of computational functionalism? I think I just gave you that. Kyle's question: the necessary computations may simply be more complex than we may have imagined. Again, maybe other people think of it differently; I'm just telling you how I think of it. On the computational functionalist point of view, it's not just that computations are happening.
It's that the different parts of consciousness themselves are fundamentally computations. By fundamentally, I mean that is what matters about them. And the thing about a computation is the algorithm you use to do the computation doesn't matter. There are different ways to divide numbers or to calculate the motion of the Earth around the sun. What matters is the answer, not the algorithm used to get there. And I'm just saying that the algorithm, in the case of consciousness, might matter, not just the answer. And finally, what sorts of evidence, Zach asks, from future AI models and systems would begin to make you suspect the presence of consciousness? Well, certainly more evidence of an actual rich inner phenomenology of the sort that conscious creatures have, the idea of being bored and things like that. I mean, that's just a glib saying, but I want to see evidence that there's something going on inside the future AI model that maps onto all the stuff that is going on beneath the surface. So I guess maybe a motto is the following: I'm not going to believe in consciousness until I believe in unconsciousness, or subconsciousness, I guess. Seeing all the things going on in our minds, the stream of consciousness, that kind of thing, in the same way that conscious creatures have it, would be a step toward convincing me. But let me say also, in perfect honesty, I don't really know the answer to this question fully, and I don't even have a strong opinion about it. I do think that, as I was sort of semi-teasing Ned at the end there, this is something that philosophers should really be thinking about very, very hard and very, very seriously, because we are making tremendous advances in AI, and this is going to be a relevant question that really is ripe for philosophical explication before it becomes too late. Okay, David Kudeverdian says, I'm a bit confused about how dark energy contributes to the curvature of space.
Sometimes I hear that the curvature of our universe is very close to zero. What did cosmologists think about the value of the universe's curvature before the discovery of dark energy? That is, before it was understood that about 70% of the universe's energy content comes from dark energy. So first, I'm not sure if this is your question, David, but let me make sure everyone knows. There's general relativity, which says that gravity is the curvature of space-time. And certainly in the universe, there's plenty of gravity, right? There's not only gravity locally when you have planets and stars and black holes, but there are gravitational phenomena like the very expansion of the universe, okay? And so there is curvature of space-time. Undoubtedly, clearly, nobody doubts that. When cosmologists talk about the curvature of the universe, they don't mean that. They do not mean the curvature of space-time. They mean the curvature of space. And they mean the curvature of space overall, the three-dimensional space that is sort of a good approximation to what the universe looks like on very large scales and is perfectly smooth. So we're ignoring the local gravity of galaxies and black holes and whatever and just looking at the universe as a whole. It's a well-known fact, when you take your cosmology course, that when you look at the universe as a whole and you divide it up into space and time, it's a special fact about cosmology that you can do that in a more or less unique way in our real universe. There's no guarantee that that had to have been the case, but it seems to be. So that three-dimensional space in which we live could be positively curved, negatively curved, or flat. Those are the three, in principle, possibilities. Now, what did cosmologists think about the value of the curvature of space before the discovery of dark energy? That depends on the cosmologists that you talk to.
So, because I was there in the late 80s and early 90s, when I first became sort of a professional scientist thinking about these things, I saw very clearly that if you talked to particle physicists, theoretical particle physicists especially, they had two things in mind. Number one, they had the flatness problem in mind. The flatness problem is the idea that if there is any substantial curvature of space at very, very early times in the history of the universe, it would grow with respect to the contributions from matter and radiation over time. So therefore, it's unstable. Like, if there's a little bit of curvature at early times, it would become a lot of curvature now. And we knew there wasn't a lot of curvature. We knew it wasn't overwhelming. There's a good amount of matter and energy in the universe. So most particle physicists, just on naturalness grounds, would have said that the universe is probably spatially flat. And then, of course, you have the theory of inflation. And inflation more or less predicts that the universe is spatially flat. So that was an extra reason for them to think that. But then there were the observational astronomers and cosmologists who were actually out there measuring stuff in the universe. And in the early 90s, there was already a very strong feeling among observers that if the universe were spatially flat, that implies a certain amount of matter in the universe, and we've looked for it, and we can't find it. And by the late 90s, that feeling was extremely strong, and it was very, very well placed. They really should have seen the extra matter if it had made up the critical density of the universe. And the critical density is what you need to make space flat. So they thought that the universe was curved, that space was negatively curved. So what happened in 1998 was suddenly you realize that, oh, there's dark energy.
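The instability behind the flatness problem can be made quantitative: in the Friedmann equation, the curvature contribution to the energy budget dilutes as a^-2 while matter dilutes as a^-3, so curvature's fractional importance grows roughly linearly with the scale factor during matter domination. A toy sketch of my own (the numbers are illustrative, not measured values), along with the critical-density formula mentioned above:

```python
import math

# Flatness problem in miniature: curvature density scales as a^-2 and
# matter density as a^-3, so curvature's share of the total grows as
# the universe expands (a is the scale factor, with a = 1 today).
def curvature_fraction(a, omega_m0=1.0, omega_k0=1e-4):
    matter = omega_m0 * a**-3  # matter dilutes faster ...
    curv = omega_k0 * a**-2    # ... than curvature
    return curv / (matter + curv)

early = curvature_fraction(1e-4)  # deep in the matter era
today = curvature_fraction(1.0)
# the curvature fraction grows by roughly the factor a_today / a_early

# Critical density: rho_c = 3 H^2 / (8 pi G), the total density needed
# for exactly flat space. With H0 ~ 70 km/s/Mpc it is ~9e-27 kg/m^3.
G = 6.674e-11               # m^3 / (kg s^2)
H0 = 70 * 1000 / 3.086e22   # 70 km/s/Mpc converted to 1/s
rho_crit = 3 * H0**2 / (8 * math.pi * G)
```

This is the sense in which flatness is unstable: a tiny early curvature fraction gets amplified by the expansion, so observing near-flatness today requires the total density to sit very close to the critical value.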
What that means is that the astronomers were not going to find the energy necessary to make space flat when they were looking for matter, because the energy is not in the form of matter; it's in the form of dark energy. And the value that you needed to understand that the universe could be spatially flat was exactly compatible with what the observers were actually finding. So suddenly, in 1998, the doubts were cleared up, and we realized, both theorists and observers, that more or less the universe is very close to spatially flat. Of course, there's always error bars on a real measurement, but as far as we know, that's the value that we have. Muffin says, I was recently having a debate on AI creativity and how it differs from human creativity. If you were to give an image-generating AI all the images and data that Van Gogh had access to before he created his paintings, but not Van Gogh's work itself, would the AI feasibly be able to create his works, or is there something fundamentally different about the way that Van Gogh creates new things? For what it's worth, my instinct is that no, the AI would not be able to create the Van Gogh. Humans have the ability to extrapolate beyond the things they've encountered, whereas AI is constrained to thinking within the data it's been trained on. But I'm struggling to put my finger on exactly what it means for humans to extrapolate beyond what they have observed. So as in many discussions of the capacities and capabilities of AIs, you have to distinguish between what might happen in principle someday and what is feasibly happening now or in the near-term future. So someday, if you do something very different than what is currently going on in AI research and you try a different attack, you might, as far as I can tell, be able to have AIs that are just as creative and interesting as humans.
But if you just use the techniques of machine learning and things like that that we currently are plugging into our AI models, I think that you have a point, Muffin. But I think that the point is not just that there's a special spark of creativity in humanity. I think that giving the AI all the images and data that Van Gogh had access to is not giving the AI Van Gogh, right? I mean, Van Gogh had other things going on. He had a personality. He was hungry. He had various mental issues going on. He was worried about feeding himself and, you know, being a success. There's a whole bunch of things going on. And therefore, you know, he had access to things the AI wouldn't have. There's a more subtle but also important difference, which is that the way that current AIs create things is, as we were just talking about with consciousness, entirely different from the way that human beings create things. Like, when a human being writes a story, a human being can have a reason to write the story. They might have a point they want to make, or they might have seen something that gives them an idea for a character or a plot point. Or they just want to make money from, you know, selling their story to Hollywood or whatever. And they have desires to create certain reactions from the readers of their stories, right? They have envisioned what it would be like if this person read this story. Would they get into it? You know, can we keep up the tension, et cetera, et cetera. So there's a whole framework of how human beings go about creating a story which is completely absent in a modern sort of deep learning model that creates pictures.
The deep learning model has just looked at all the pictures in the world and kind of interpolates between them, given certain instructions, which is fine, and which is, again, astonishingly good at what it does. But it is doing something very, very different, so you would have to make AI in a very different way if you wanted to do the same thing that human beings do. And I see no obstacle to doing that, but it's just not what is going on right now. Andrew Samrick says, hello from sunny Florida. I'm calling in my once-in-a-lifetime priority question here. Remember that Patreon supporters, who are the ones asking all these AMA questions, get to ask a priority question once in their life, which I will do my best to answer. So Andrew's question is the following. In the past few years, I've discovered a love of theoretical physics. I'm wondering if and how to further my education and potentially work in the field. Please note that I studied history as an undergrad and later earned an MBA. I've only studied physics on my own. As best as I can reason, the world of academia is not made for people like me. I don't have a driving desire to be repeatedly published or to lead a department. I just want to flesh out a specific theory and determine if it's worthwhile or if I'm nuts. I'm now in a situation where I can visualize field and energy flows, but I lack the mathematical background to communicate my ideas, and I lack the overarching background to define and distill my ideas into a defensible thesis. So what does a 52-year-old former businessman do in a situation such as this? Is it head back for a PhD, if that can help me move forward? But I don't know if finding a decent advisor in South Florida is going to be sufficient to move my idea along. And before that, would you suggest an undergraduate or graduate degree in physics and/or math? Is it better to find a partner for the quantitative portions? So let me be super-duper honest right from the start.
I want to emphasize that I personally think it's a bad idea to try to learn physics specifically to develop an idea you already have. I think it's a great idea to learn physics. I think it's a great idea to learn physics well enough that you can do research in physics. But the idea that you have a really good theory, but just not the math to spell it out, is very, very unlikely to be true. The theories that we have now that are successful in the last hundred years of physics plus are all really deeply mathematical from the start. You don't just have, like, a picture, and then say all I need is the math to back it up. We all, all of us theoretical physicists, propose theories, and you have to write down the equations to see whether or not the ideas that you have give you a plausible result. If you haven't done that, then you don't have a plausible theory or even a promising theory right now. So I would be very much in favor of you learning new physics and trying your best to understand the cutting edge of research. But you have to go in with an open mind. You can't do it for the reason of developing a theory that you already have, or you're just setting yourself up for disappointment. Now, as to how to do that, I think there are lots of different ways to do it. Like, let's be blunt. You can just buy textbooks and read them. That's not that hard. It's not a mystery. You can find online all sorts of curricula; you know, look up a university webpage. Or Gerard 't Hooft, the Nobel Prize winning physicist, has a whole web page where he sets out a curriculum to become a good theoretical physicist if that's what you want to do, including links to textbooks that you can buy. And now you might say, well, I'm not good at learning from reading textbooks. If you're not good at learning from reading textbooks, that's a warning sign that it's going to be hard for you to learn the requisite physics.
Of course, I do think it's easier to learn if you're taking classes and have other people to talk to and can go to lectures and do problems and get them graded and things like that. So going to a university and studying is definitely the standard way of doing something like this. Again, I would not do it because you want to spell out your theory, but because you want to learn physics. That's the defensible reason for doing it. And I don't think that South Florida is lacking in places that you can go to learn the requisite physics. Now, can you learn the requisite physics? Is it too hard or whatever? I don't know. I always think that these things are worth a try. You don't want to decide ahead of time it's too hard, therefore you don't try. You might buy a book. You might say, like, what is my current level of physics? Is it like a freshman or is it like a beginning graduate student? Find out what kinds of books those people are supposed to be reading. Buy a good example of such a book. Try to read it. Are you finding that, okay, you know, it's a challenge, but it's not insuperable, and you can see how, if you really put your mind to this, you would make progress? Or are you finding, like, oh my goodness, this is just not speaking to me in any way at all? So I think that you can judge for yourself whether or not this is something that would be fruitful for you to pursue. And then, you know, if the going-to-classes thing doesn't work out, there are online classes, there are MOOCs, there are various discussion groups online where you can ask questions, there's ChatGPT, there are AIs. That's something AI is really good at, helping you learn a new subject. You always have to worry that there are occasional hallucinations. But the great thing about AI is that, the way I think of the modern large language models, they're basically dynamical search engines, right?
You can search for knowledge that is already out there, not generating new knowledge, but search for knowledge that is already out there in a really flexible, interactive way. What better gift to people who want to learn things than that? So there's a lot you can do. Do what works best for you and report back. Give it five years. Let us know how it's been going. C Branch says, I enjoyed your romance of the university holiday message. And while I fully agree with your take on the value of the college experience, it made me wonder whether you have any thoughts on Michael Sandel's argument that Trump voters were motivated by the perception that credentialed elites look down on them. Could there be some truth to the charge that we are guilty of disdain toward the non-college-educated? I'm absolutely sure there could be some truth to the charge that some people are guilty of disdain toward the non-college-educated. That's a sufficiently vague statement that it's hardly worth engaging with. I mean, it's certainly true. Other people are not guilty of disdain toward the non-college-educated. So I think you have to be a little bit more careful about thinking about the dynamic here. I think that you would need evidence to convince me, not that Trump voters perceive that credentialed elites are looking down on them, but that credentialed elites really are looking down on them systematically, not just one or two examples of people saying things like that, but really an existing consensus among credentialed elites. If you can find evidence for that, that would be very interesting, but I suspect it would be harder than you might think. What's easy to find is evidence of people complaining that credentialed elites look down on them. And this is why I'm not quite so sure that there is anything actionable to do to fix this problem.
The problem is not the actual looking down; the problem is that one political side leverages the perception of credentialed elites looking down on them to gain a political advantage. But they can leverage that whether or not the accusation is true. People are ready to be defensive about things like that. I think it's more helpful to concentrate not on, you know, I don't want these people to think I'm looking down on them, and more on just, let's make the world better for everybody. Let's make the world better for people who are educated, people who are not educated, people who have had opportunities. Let's make sure that there are as many opportunities out there as possible. That's, I think, what I would rather put my effort into thinking about. Christoph Radomsky says, you often invite people at the opportunity of them releasing a book. Do you really find time to read that book so you can relate to it during the show? Well, not the whole book, no. The short answer here is no, I do not read the whole book, but yes, I do read parts of it, and I try to get the gist. And in fact, the good news is that I think that's the right thing to do. I don't want or need to read a whole book to talk to someone on a podcast. It's not my job to say what's in the book, right? It's my job to ask them what's in the book. But the reason why you need some background from reading the book is because you can't just say, so what's in your book? That's not a very useful discussion. And I do live in fear. The real reason to read the book is I live in fear that the guest has something really, really interesting to say, and I just never give them the opportunity to do so because I don't ask the right question, right? So I read the books, or enough of the books, that I can get a feeling for what are the interesting points in the book, what are the important claims that are being made, so I can make sure that we have time to talk about them.
I will say, parenthetically, that, even though I'm sure I do it myself as a book author, it's very annoying when people have, you know, 12 chapters in their book, and you look at the table of contents, and all of the chapter titles are like An Unexpected Journey, or other playful, fun things that are cute and convey no information whatsoever about what is in the chapter. You know, as someone who wants to get the gist of the book without reading the whole thing, I mean, the good ones I do end up reading, but not necessarily before the show. But I want to get the gist of all of them, and it would really be nice to have chapter titles that conveyed an impression of what is inside the book. I suspect that's just good advice more generally. I suspect that it's useful for people who decide whether they want to buy the book to be able to look at a table of contents and see what the chapters actually are about. Certainly in The Biggest Ideas in the Universe, I just label them in the most simplistic way, right? You know, chapter 8 is Entropy, or whatever it's going to be. Chris Kaltfosser says, You've argued that we shouldn't posit non-physical entities when physical explanations suffice. But consider abstract objects like fictional characters. On a strict physicalist view, Sherlock Holmes would seem to exist only insofar as there are physical representations and cognitive practices sustaining him. If all such instantiations were destroyed, there would be no remaining truths about Sherlock Holmes. Many people find that conclusion deeply unintuitive. It suggests that truths about abstract objects are contingent on the continued existence of their physical tokens. If you're comfortable with that implication, fair enough. But if not, what principled reason is there to resist a similar move in the case of consciousness, where subjective experience seems equally resistant to reduction to physical description alone?
So I think there are a lot of interesting issues going on here. They're kind of jumbled together a bit. So I'll try to do what I usually try to do, which is say some things that I think are true, and you can figure out for yourself whether I'm actually answering the question. I don't think that abstract objects, whether they're fictional characters or mathematical ideas, exist in the same way that physical stuff exists. I am a reality realist. I think that what exists is the physical world. What other things are, are ways of talking about the physical world that may or may not be useful or illuminating in various circumstances. I would not say that Sherlock Holmes exists. I would say that the idea of Sherlock Holmes is represented in various physical items in the world, and it's a useful concept to invoke, and you can talk about it in a sort of casual way, saying Sherlock Holmes would have done this, understanding that you don't really mean the same thing as you would if you were talking about a real physical person. I don't quite understand why that would connect to an idea like, if all such instantiations were destroyed, there would be no remaining truths about Holmes. That doesn't sound right. What there would be is no more physical instantiations of Sherlock Holmes. It would still be true that in the past, when there were such instantiations, one could say such and such a thing about what those instantiations were doing. So I don't think that the truths about abstract objects are contingent on the continued existence of their physical tokens. There are plenty of truths about circles without the circles physically existing, or indeed without any perfect circle ever physically existing. So I don't think that that connection really goes through. Finally, with consciousness, consciousness is not an abstract object. Consciousness is just a set of properties that exist in a higher-level emergent description of people and how they interact with each other.
I don't think it's resistant at all to a physical description, like some people do, and I think that most claims otherwise are mostly begging the question. But again, you know, these are all things that I am happy to be wrong about if someone convinces me otherwise. Steve Bonner says, In your excellent episode on neutrinos with Ryan Patterson, he describes how researchers create streams of neutrinos, then detect the electron and muon neutrinos by watching for the production of electrons and muons, respectively. He didn't mention a similar approach for tau particles. Are tau neutrinos also produced in the streams from accelerators? And do the tau particles that are released in the detector live long enough to be seen? So I'm not the person to ask these questions to, honestly. These are detailed questions, and I worry about mixing them up. So I'll tell you things that I think are more or less right. I mean, I see no reason why you wouldn't make tau neutrinos just as well as electron and muon neutrinos. But, yes, if an interaction happened where a neutrino created a tau particle, the tau particles decay very, very quickly. I don't know exactly how quickly. So in the world of particle detectors, you have various resolutions about how long a particle needs to live before you can literally see a track for it. And what often happens is, if the particle lasts a long time, you can see a track and you can see how it bends in a magnetic field. You know it's that particle because you can basically infer its mass. If it only lives a little tiny bit, then basically what you see is what it decays into. And if it lives a little tiny bit but not too short, then the point at which it decays into other things will show up a little bit displaced from where the particle was actually created. So you can measure that displacement, and that tells you something about the lifetime.
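The displaced-vertex logic just described can be made concrete with a back-of-the-envelope sketch. The numbers below are standard values I'm supplying, not figures from the episode: the tau lepton's mass is about 1.777 GeV and its mean lifetime about 2.9 × 10⁻¹³ seconds. The mean lab-frame decay length is then L = βγcτ:

```python
# Back-of-the-envelope: mean distance a tau lepton travels before decaying.
# Lab-frame decay length: L = beta * gamma * c * tau.

C = 2.998e8              # speed of light, m/s
TAU_LIFETIME = 2.9e-13   # tau lepton mean lifetime in seconds (standard value)
TAU_MASS = 1.777         # tau lepton mass in GeV

def decay_length_m(energy_gev, mass_gev=TAU_MASS, lifetime_s=TAU_LIFETIME):
    """Mean lab-frame decay length in meters for a particle of the given energy."""
    gamma = energy_gev / mass_gev          # time-dilation factor, E/m
    beta = (1.0 - 1.0 / gamma**2) ** 0.5   # speed as a fraction of c
    return beta * gamma * C * lifetime_s

for e_gev in (5, 20, 100):
    print(f"{e_gev:>4} GeV tau: mean decay length ~ {decay_length_m(e_gev) * 1e3:.2f} mm")
```

At typical accelerator energies this works out to fractions of a millimeter up to a few millimeters, which is exactly the regime where what you measure is a slightly displaced decay vertex rather than a long visible track.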
If it only lasts a very short period of time, then you don't see it at all, and you infer its existence from the statistics of what it decays into. For example, the Higgs boson decays way too quickly for you to actually see the Higgs boson. When we say we've discovered the Higgs boson, no experiment, including the Large Hadron Collider, has actually seen a Higgs boson, and probably never will. What you see is a slight enhancement in the production of certain decay products at certain energies when you make a Higgs boson at that energy. So what I don't remember is what the actual lifetime of a tau particle is, the tau lepton. I don't know if it's so short that you can never see it, or if it's long enough that you can see a little displacement or even a little tiny track. That's outside my expertise right now. The Nine-Tail Fox says, Do you think that there is much credence to the idea that we are moving away from capitalism and more into techno-feudalism? You know, I think that questions along these lines are simultaneously interesting and unanswerable. They're interesting because they gesture toward something that is relevant: how practices and organizations in our economy and our political system are changing over time, under the influence of financial issues and the representation structure that we have in our government and things like that. There are many things going on that are very interesting.
But just to say, are we moving more from one thing to another, when we clearly were never purely capitalist, even if by "we" we mean the United States? Other countries are even less capitalist or more capitalist than we are. And I don't know what it would mean to move into techno-feudalism. It doesn't sound to me like an especially helpful category or label. It might obscure the possibility that something interesting is happening, because you're drawing an analogy between what is happening and what was happening in a pre-capitalist feudalistic society, and that analogy might not actually be very helpful. So I don't think so, no. But, I mean, there might be. I know that I always tell people to keep their questions short, and you should keep your questions short. I've had some pretty good questions that don't get answered because people keep them very long, or write them in a very extended way and tell little stories with them and things like that, and therefore they don't get asked. But this might be too short for me to really wrap my brain around what is meant by techno-feudalism. So you would have to be a little bit more specific before I could come down on that. But also, you know, I don't know. I have this feeling that there are certain things that are easy to commentate about and hard to be right about. And there's an industry out there in sort of talking about the era we're in and the era that we're moving into and things like that. And there's not a lot of accountability in people who say things, make these proclamations, as to whether or not their proclamations actually turned out to be accurate. So I like to be a little bit more humble when making grandiose statements like this. Eric C. says, this question comes out of your conversation with Ned Block. My understanding of the large language models is that they do a bunch of linear algebra. Do you think the linearity of this calculation is important to the consciousness question?
Even a cell is a complex dynamical system, and I have trillions of those buzzing around my body, probably affecting my conscious experience somehow. It seems that our existence as non-steady-state, non-equilibrium, non-linear dynamical systems fundamentally separates us from AI. Even if all of consciousness emerges out of information processing, it's a very special kind of information processing. So I think it's a bit of an exaggeration to say that LLMs just do a bunch of linear algebra. They certainly do a bunch of linear algebra, but there are nonlinear effects also. The way that LLMs, and neural networks in general, mimic the behavior of real neurons in the brain is that they have some kind of function that relates their inputs to their outputs, and that function is generally nonlinear. It's not just a linear combination of the inputs. You sort of reach a threshold, and then the neuron fires, right? That's the paradigm on which neural networks are built. So automatically, even if that's just one step in a complicated set of linear algebra operations, that is a nonlinear thing. So I don't think that nonlinearity is really the point here. I think that, as mentioned before, there might be important points about processes, not just outcomes, about the way in which calculations and computations are done. That I'm very, very open to. But I think that nonlinearity is basically everywhere. So I don't think that that's nearly enough to be what separates us from AI, if anything indeed separates us. Orbital Magpie says, It's fascinating and equally frustrating to see you and AI safety advocates reach opposite conclusions from a shared starting point. I hope I'm not misrepresenting your position on AI safety, but I think it's fair to summarize it as: AI is not like humans. They don't think like us. They don't function like us. So it's wrong to anthropomorphize them and think that they are capable of malice like humans do.
Whereas AI safety advocates will say, because AI is not like us, they don't share our values; therefore, we can't trust them to not do things that would harm us. I'd say I'm a lot more sympathetic toward AI safety advocates' views. I think a key point is that AI doesn't need to be capable of malice to harm us. In fact, it doesn't need to be intelligent. A runaway roller coaster, sorry, road roller, not roller coaster, will flatten everything in its path, including humans. That's not because the road roller hates humans or anything. It just does that. And ideally, we'd want to build safeguards into road rollers so that if the driver falls out of the cockpit, the road roller will stop running. The same thing applies to AI. We shouldn't anthropomorphize AI and think that it could have intentions to hurt us. It might just harm us because there's no safeguard to stop it. In fact, AI is already causing harm on a small scale. Many production databases have been deleted by malfunctioning AI, for example. So my question is, do you agree with this argument for AI safety, and if not, what is wrong with this logic? So nothing personal, Orbital Magpie. I don't want to take it out on you, but I'm answering this question because it's certainly very, very frustrating to me to have these conversations. I try to be clear about what I'm saying, and it is frequently (100% of the time would be an exaggeration) very, very difficult to get my actual opinion to be understood by various people. And I think that's because my opinion doesn't quite fit into one of the boxes that they have prepared to accept opinions into. So when you give the quote trying to characterize my position, AI is not like humans, they don't think like us, they don't function like us, so it's wrong to anthropomorphize them and think they are capable of malice like humans do, I agree with all of that. That is absolutely a fair characterization of some things I would say.
It is nowhere near a complete description of things that I say. I have tried to say very frequently and very clearly, AI safety is a huge worry. There are many, many ways that AI could be very, very harmful. I do not doubt that in any way. But I go on to say that it is wrong to think about those harms in anthropomorphic terms. Basically, I'm agreeing with what you say at the end of the paragraph. The harms that are going to come from AI are not because the AI is going to become super intelligent and outwit us and stop us from preventing it from taking over the world, or something like that. Those scenarios are just silly. And at a more detailed level, talking about AIs in terms like malice and values and things like that is just a category error. But that doesn't mean there's no harm. It means the opposite of that. It is very much like a runaway road roller. I completely agree with that. The real worry about AI is not that it's going to become super intelligent, but that humans and AIs are going to team up to be stupid, that we're going to turn over mission-critical tasks to AIs that we don't understand because we are anthropomorphizing them. We think, oh, this AI is pretty smart, just as smart as a human being, how bad can it get? That's the real worry to me. And I think that a refusal to take seriously the fact that the AI is something different than a human being, which is not to say it's not capable in all sorts of ways, but that it's capable in different ways, and the glossing over of those differences, is going to lead to misunderstandings, and that's going to lead to safety harms, things like that. There are huge harms being done by AI. If you follow my Bluesky feed, you will see all sorts of scientists complaining about the fact that the scientific literature is being completely polluted by junky AI-written papers, right? That's a very, very minor harm compared to some other possible ones, but it's very, very clearly out there.
So the fact that I don't want people to anthropomorphize AIs is the opposite of saying that I'm not worried about AI safety. Nikin says, why is non-equilibrium physics hard? So that's a good question, but, you know, there's a short and glib answer, which is that all physics is hard. There's a slightly less glib answer, slightly less short, but still pretty glib and short, which is: given a system that you want to analyze, there's usually only one way to be in equilibrium. If you have a box of gas and you tell me the density and temperature and pressure and whatever, and tell me it's in equilibrium, then I know what it's doing. There's nothing more to say about it. But there are many, many ways to be out of equilibrium. So just characterizing what is possible in the world of non-equilibrium physics is much, much harder, much less saying how it actually behaves. And so I think that the real answer tends toward being a little bit more direct and clear about why it's harder to say how these things behave. It's related to the fact that equilibrium systems basically have unique states that they can be in, given some external parameters. When you let something relax to equilibrium, the final state it reaches is basically an attractor, right, once you know that entropy is increasing and once you know the system is trying to equilibrate in various ways. There is, by the way, a distinction between thermal equilibrium and thermodynamic equilibrium. Those sound the same, but they're not. If I remember correctly, and I may not, thermal equilibrium is just that the temperature has equilibrated, but other features might not have. So if you have cream mixing into coffee and the cream and the coffee are the same temperature, you could be in thermal equilibrium, but you might not be in thermodynamic equilibrium if the distribution of cream and coffee has not actually equilibrated.
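The attractor point can be illustrated with a toy model, a hypothetical sketch rather than anything from the episode: start "cream" in two completely different initial distributions, let each diffuse on a ring, and both end up at the same uniform equilibrium state, so the endpoint tells you nothing about the path taken.

```python
import numpy as np

def relax(c, steps, rate=0.25):
    """Discrete diffusion on a ring: each step nudges every cell toward its neighbors."""
    c = c.astype(float).copy()
    for _ in range(steps):
        c += rate * (np.roll(c, 1) + np.roll(c, -1) - 2.0 * c)
    return c

n = 50
lump = np.zeros(n)
lump[:5] = 1.0        # all the cream dumped in one spot
stripes = np.zeros(n)
stripes[::2] = 0.2    # the same total amount of cream, spread in stripes

final_lump = relax(lump, 2000)
final_stripes = relax(stripes, 2000)

# Wildly different starting states and different relaxation paths,
# but the same attractor: uniform concentration total/n = 0.1 everywhere.
print(np.abs(final_lump - 0.1).max(), np.abs(final_stripes - 0.1).max())
```

For a non-equilibrium question, by contrast, it's precisely the intermediate states along the way that carry the physics, and no single attractor summarizes them.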
But anyway, once you tend toward thermodynamic equilibrium, that's kind of all you need to know. You don't need to tell me the specific path that the system takes to go from its non-equilibrium state to its equilibrium state if all I care about is where it ends up. In the world of non-equilibrium physics, the details start to matter. You might care about the way in which something behaves and evolves and is dynamical in the non-equilibrium world. And there you're not going to have a nice, simple attractor point that you go to. You're going to care about all the details of the path along the way. So there's just always going to be a lot more complexity when you do non-equilibrium physics, which is too bad, because most of the world is pretty non-equilibrium. Donovan H. says, finitism, or fine-itism? I honestly don't know how to pronounce that word, even though I've written papers with the word in it. Fin-itism, I guess, because things are finite, right? Not fy-nite. So let's call it finitism. I've seen a couple of articles lately, says Donovan, about the idea that there's something wrong in physics if we are postulating physical objects having infinite properties, like black holes having infinite curvature or space-time being infinite. If it's not a straightforward reductio ad absurdum, it does feel incongruous to think of finite things with infinite properties. Could we take the infinities out? Should we? I resist a little bit this feeling that we should, quote-unquote, take the infinities out. I think that's not quite the way to think about it, even though the intuition is getting at something real here. The slightly more respectable thing to say is: suppose you have a set of equations, a set of dynamical relations in some formal system, that is supposed to predict what will happen, and the domain of applicability of those equations has certain quantities in it which are real numbers, right?
Real numbers, you know, the set of real numbers does not include infinity. There is no biggest real number, but infinity itself is not a real number either. So when something purportedly becomes infinite, typically that means your theory is breaking down. Certainly that's the traditional way of thinking about singularities in general relativity, or something like that. Now, that's not very informative about how the theory is breaking down or what you should do about it. But, for instance, in general relativity, the singularity that you get inside a black hole is what we call a space-like singularity. That means it is a moment of time, not a location in space. So you hit it at one moment in time if you fall into the black hole, and then the very down-to-earth problem is, you don't know what happens next. I mean, you say that as an astronaut falling into a black hole, you die. Okay, that's fine. But the equations of general relativity do not predict what would occur after the singularity, or even if there is anything after the singularity. Traditionally, we would say space-time has a boundary at the singularity, and we just don't know what to do. That's a little bit different from the kinds of infinities that we naively get in quantum field theory, where you're trying to say, what is the probability of an electron meeting a positron and annihilating into two photons? If you get that the probability is infinity, that's just nonsense, right? The probability has to be less than one, okay? So sometimes infinity means your equations break down and you don't know what to do. Sometimes infinity means, no, you simply made a mistake, like you were calculating a number and you didn't do it correctly. So in all of these cases, what do you do? I think that the idea that you take the infinities out is the wrong idea.
Because in both cases, in the general relativity case and the quantum field theory case, the implication should be: your theory is wrong. You are not working with the correct theory. So what you want to do is find a correct theory, right? You want to find a better theory. In quantum field theory, we can do that. We can find effective field theories; we can understand why we made the mistake of getting infinity in the first place. In general relativity, we don't know how to do that, but we also know that the classical general relativity predictions don't take quantum mechanics into account, so we suspect that a quantum theory of gravity won't have these problems. So I do think that the infinities are a sign you should improve your theory. I don't think you should conceptualize the improvement of the theory as simply taking the infinities out. Stevie CPW says, I grew up in Utah, Idaho, and Wyoming, where it was a big deal when the rodeo came to town. I was happy to hear that Jennifer is a fan. What is her favorite event? When you watch with her, do you have a favorite event? Have the two of you ever attended a live rodeo together? I'm not sure what her favorite event is. She's pretty ecumenical there. She likes all the events. The bull riding is obviously the sort of glamour event in the rodeo, but saddle bronc and bareback riding are fun too. The barrel racing is also a lot of fun. The tie-down roping, it's all good. We like all of them. I do watch with her occasionally, but I'm not that into it. You know, I watch it for occasional amusement, and because, well, I don't love the rodeo, but I love my wife. It's exactly the same reason why she occasionally watches basketball. And we have attended live rodeos together. In fact, we went one night to the National Finals Rodeo last year in Las Vegas. And one year we went to the Santa Fe rodeo, when we both happened to be in Santa Fe as the rodeo came to town.
They were both very fun in different ways. Santa Fe is a relatively tiny rodeo. You can sit in the second row and just watch everything very close by, whereas the National Finals Rodeo is in the Thomas and Mack Center, which some of you might know as the basketball stadium at UNLV, the University of Nevada, Las Vegas. And it's a huge deal with flashing lights and a lot of pageantry and things like that. So, you know, whenever you go to a live sporting event of any kind, there's a lot of aspects that are enjoyable, whether or not you actually are into the sporting competition that is being held before you. David Maxwell says, When should the people of a country be held morally responsible for the actions of their government? When they elect it? When they elect it a second time, even after being told what it will do? When they don't do enough to stop it before they no longer can? I don't know that it makes sense to, quote-unquote, hold the people of a country morally responsible for the actions of their government. I would be willing to talk about holding individual people responsible for their role in the creation of that government. If a person voted or worked to get bad people into office, or in general to give bad government power in their country, then you can absolutely hold them morally responsible. But I don't know if collective moral responsibility makes any sense. Why would I attach moral responsibility to people who tried their best to prevent that government from coming to power? Of course, you might have supported a government coming into power without realizing that it would be bad. And then the question is even trickier. You know, are you morally responsible for not understanding that? Should you have worked harder to understand that? And I'm not asking that rhetorically. Sometimes, yes. Sometimes it was pretty clear that a government was going to do bad things, and you just chose not to pay attention.
And then you are absolutely morally responsible, I think. Other times they reasonably tricked you and you didn't know what they would do. Still other times the government does some good things and some bad things and you decide to make a compromise, right? And that can be a very, very tricky thing. But I do think that it's a little bit lazy and not very helpful to just attach moral responsibility to collective groups of people that are very heterogeneous, rather than to the individuals who are actually making their choices. Sandro Stuckey says, Electronic clocks, we could surely give them access to those. What is it that biological processes possess that we could not implement in a computer? Fair enough, for the most part, but I think that the problem lies in the phraseology, not in the underlying point. There's a difference that might not be obvious, I should say, between experiencing the passage of time and measuring time, even measuring time repeatedly. Okay, sure, you can give an LLM access to a clock. You can even give it instructions: make sure you check the clock every so often, or something like that. That is not equivalent to experiencing the passage of time. And the experience of the passage of time, you can also debate what that means, and you have full employment talking about the psychology and the physics and the biology of all that. But the point is, your experience of the passage of time inevitably happens because a person is an out-of-equilibrium system. We were just talking about non-equilibrium physics and the difficulties of that. But people, you know, as Schrödinger pointed out long ago in his little book What Is Life?, living organisms have the property that they need fuel at all times, right? Even if they don't need to literally be eating and breathing all the time, you need to be breathing most of the time, but you're always either eating or using up the fuel that you had previously eaten.
In other words, beneath the hood in a living organism, there's an enormous set of things going on in a very time-directed way. The metabolic processes, the maintenance of your cells and things like that, the processing of the food; in your brain, the processing of ideas, you know, whatever is going on below the surface, subconsciously. There's a lot happening in a biological system that has no analog in an LLM. And an LLM, if you're not giving it a query and it's not training or anything, if it's just sitting there on the computer, literally nothing is happening. So to make it closer to the experience of time as experienced by a biological organism, you would have to radically change, and I don't think trivially change, what the LLM was. You would have to make it something that relied on an external source of something, information, free energy, whatever it is, and then have a sort of constant churn below the surface where it was using that resource to maintain its out-of-equilibrium configuration. That would be what would be analogous to a living organism that we think of as conscious. Now, are all of those processes beneath the hood truly necessary for consciousness? I have no idea. I think that what Ned Block was saying, and what Anil Seth is saying, is maybe they are. Maybe what matters is not just the actual output of the computation, but the process by which the computation is carried out, and specific aspects of that process might really be relevant here. Again, I'm not devoted to the idea, but I appreciate that it's a very live possibility and definitely worth taking seriously when we really sit down to decide what counts as consciousness vis-a-vis an artificial intelligence. Brandon Wheeler says, What can the citizens of the USA do if Trump starts going on a path of full abuse of military power? If the military won't refuse orders because he makes up fake legal reasons for his actions, so that they are not violating legal orders?
You know, I don't know. But I think that the question is very, very complicated, and we can't be overly simplistic or alarmist about it. You know, for better or for worse, the United States government and all of its systems have a lot of safeguards in place, a lot of mechanisms that make it hard to just do things that are wildly illegal. Now, I know perfectly well that none of these safeguards is perfect. You would like to think that the courts stop you from doing illegal things. And the truth is, the practical reality is, sometimes they do, sometimes they don't. OK, but there are enough of these safeguards that can be tripped, in various different ways at various different times, that it's way too simplistic to just say, well, what if Donald Trump makes the military declare a dictatorship? There's a million steps in between here and there where he could be stopped by the government itself, by the military, by the courts, by other parts of the government. So, you know, look, let's not sugarcoat it. It's very, very scary, the fact that people have to be seriously contemplating whether or not the United States government and military have the capacities and the wherewithal to prevent the president of the United States from becoming a dictator. That's something that you might have had completely hypothetical discussions about back in the day. Now we're having real discussions about it. And that's bad. And that's very scary. And the prospect that he could assume dictatorial powers is realistic, you know, it's not completely crazy. I don't think it's going to happen. I think that there are lots of guardrails trying to prevent it from happening. And the fact is that already here in late January 2026, his policies are not popular in the United States.
Like, it'd be one thing if there was a sort of popular movement that wanted him to become a dictator, but there isn't. You know, it was popular enough to get him elected, but there's a lot of buyer's remorse out there right now. So it's not as if there's enormous political pressure on people to help him become a dictator, right? There's some political pressure on the Republicans in office to help him get away with all sorts of outrageous things, but there's still prospects of lines being crossed that people don't want to cross, okay? So what can ordinary U.S. citizens do? I don't know. I mean, the same things that citizens do in any country where there's some kind of coup that installs a military dictatorship. None of them are good things. None of them are pleasant. None of them are guaranteed to be effective. None of them are things that we've ever had to contemplate in the last 250 years here in the United States. But, yeah, those are the kinds of things we would have to do. I can't even say what they would specifically be, because they depend on details that we don't know about yet. But drastic times would call for drastic measures. Irkon Sertelli says, When you say that the universe can be described by a vector in Hilbert space evolving through time, that gives me the impression that we are accepting a Newtonian time structure where time is like a universal number line, and each point on the line maps to a single state for the whole universe at that moment. However, we know that time is more subjective in the real world and there's no notion of simultaneity across meaningful distances, let alone the entire universe. How should one reconcile this contradiction? Sure, I think this is a question that people have asked in various ways, not here on AMAs, but when I talk about these things, they do sometimes ask about them. But I don't actually think this is a big deal.
I think that, you know, when we pass over from a Newtonian worldview, where time and space are separately absolute and real, to a relativistic worldview, where time and space are just two different aspects of the single underlying space-time, we sometimes say that there is no universal, well-defined notion of time. And as far as it goes, that's true. That is correct. It doesn't mean, though, that we should stop using time in our equations. What it means is that it's not unique. That doesn't mean it doesn't exist. It's like when you realize that you don't have to measure lengths in inches, you can use centimeters also. That doesn't mean you have to stop measuring lengths. It just means you have to tell me, are you using inches or are you using centimeters? It's exactly the same for time evolution. This has nothing specifically to do with vectors in Hilbert space evolving through time. This is true for any theory of modern physics: for Maxwell's electromagnetism, for the Schrödinger equation in quantum mechanics, for the standard model of particle physics, for general relativity. In all of these cases, wherever there's this non-uniqueness to our time parameter, just tell me which time parameter you're using, okay? That's all that is going on in this particular way of doing things. So when you say the universe is a vector in Hilbert space evolving through time, and someone says, well, what do you mean by time? What is the reference frame? What that means in technical language is there is some unitary transformation on the Hilbert space that maps one version of a state evolving through time into a different version of a state evolving through time, which you interpret as using a different time coordinate in the emergent space-time. So it's all fine. You know, once you are very, very careful about what you mean by these words, there's nothing problematic here. This is not one of the real worries about whether or not this kind of point of view makes sense.
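Schematically, and just transcribing that point into standard textbook notation rather than quoting the episode: a state evolves with respect to some chosen time parameter, and switching to a different time coordinate is implemented by a unitary map on the same Hilbert space,

```latex
i\hbar\,\frac{d}{dt}\,\lvert \psi(t) \rangle = \hat{H}\,\lvert \psi(t) \rangle ,
\qquad
\lvert \psi'(t') \rangle = \hat{U}\,\lvert \psi(t) \rangle ,
```

where $\hat{U}$ is the unitary transformation relating the two time-slicings, so nothing physical depends on which time parameter you pick.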
Stan Montmanilov says, in episode 340, Rebecca Newberger Goldstein said, life itself is a counterentropic process, and if your mattering project is itself counterentropic, it creates things that demand order. If we are at one with our mattering project, with the force of life itself, this is a good mattering project. Everything worth living for is a real battle, because it's a real battle for order against disorder. So then Stan says, if life's goal is to counter entropy, how should we think about diversity or variation, which could be seen as increasing entropy? Is there a tension between striving for order and maintaining or increasing diversity? So I think there's two things going on. One, let me directly answer Stan's question, and two, let me back up to give some bigger-picture view here. It's not that life's goal is to counter entropy, for one thing, okay? Life does counter entropy, in Rebecca's way of putting it, and I'm going to comment on that in a second, but it's not like that's necessarily a goal. You know what I mean? A ball rolls down a hill; that doesn't mean the ball has a goal of rolling down the hill. But more importantly, diversity and variation do not increase entropy in the real, technical physics sense. If I have an ecosystem which is full of nothing but E. coli bacteria, and I calculate its entropy, that entropy is going to be very, very low in general. It's going to be very, very tiny, just because every cell of the bacteria is going to be fairly orderly. If I replace that with a diverse ecosystem with many, many different species of many different kinds of plants and animals and proteins and whatever, the overall entropy is still going to be very, very low, close to zero, just because every individual organism is relatively organized and low entropy. So you have to be very, very careful to distinguish between the technical definition of entropy and the rough idea that entropy is somehow orderliness or disorderliness or something like that, right?
I think there's plenty of room for diversity. In fact, I think the more nuanced understanding of what life is trying to do makes it very clear that diversity is serving a good purpose in the role that life plays. As I think we already mentioned, a diverse set of species is more robust against extinction threats and things like that. Okay. But the other thing to say is, I wouldn't quite agree with Rebecca's way of thinking about life as a counterentropic process. It's not that I disagree with it, but I think it's only kind of half the story, or, you know, a version of a more complete story. I think that the correct thing to say, as we were just talking about a second ago, is that living organisms themselves are out of equilibrium. That is to say, they're not at maximum entropy, right? You know, if we were at maximum entropy, we'd all be just smoothed-out mush. And that would not be very interesting, would not count as a living organism. But we are sort of somewhat stable, right? We exist over time. We do not simply decay right away into some very equilibrium, mushy configuration of stuff. So life is, or, you know, living beings are, some kind of configuration of matter that is dynamical by its nature. It's not just like a crystal or some mechanically stable object; it's sort of temporarily stable in virtue of taking advantage of the fuel that is available around it in the form of free energy, literal food, or photosynthetic photons, or whatever they are. That kind of setup is a little bit different than saying that life is counterentropic. On the one hand, life resists death. You know, Schrödinger says correctly that without our fuel, we die. That's a very entropy-increasing process, for sure. And life resists that by repairing itself, as we talked about with Stuart Brand and things like that. But the way that it resists this drive toward equilibrium relies on the fact that entropy is increasing over time.
So it's not like the second law of thermodynamics is somehow in conflict with the existence of life. It is exactly the opposite of that. Life absolutely relies on the second law of thermodynamics. So if you wanted to put it in some kind of quick, simple motto, it wouldn't be that life is counterentropic. It's that life resists its own entropy increasing by increasing the entropy of the universe elsewhere. So it's a little more subtle, a little more complicated, and I think it's completely compatible, to get to the point of Stan's question, with the idea of diversity growing over time. And, you know, we're nowhere near as diverse and as complex as we can be here in the biosphere of the Earth. Horst Verst says, I asked a couple of years ago what your Bayesian prior was about finding evidence of life within the next generation of telescopes in, say, the next 40 years. And I remember it was really low, like 10% low. Has your opinion about the likelihood of finding life on other planets changed after your episodes with Nick Lane on the origins of life and Blaise Agüera y Arcas on the emergence of replication? Well, a little bit, but not very much. For one thing, I don't think 10% is low. And the probability that we literally find evidence of life in the next 40 years, using the telescopes that we have available or planned, is a very different question than whether life exists elsewhere in the galaxy, even in the accessible part of the galaxy. Finding it might be hard even if it's out there. Having said that, I don't consider 10% to be a low probability at all. 10% things happen all the time. I often say this. As a poker player, you don't have to play many hands before things that are 1% or 2% chances just keep happening over and over again. If I do something 10 times, I'm probably going to see some less-than-10% things happen. So I don't think that's a low probability at all.
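The arithmetic behind "10% things happen all the time" is easy to check. A minimal sketch (the helper function and its name are my own, not anything from the episode):

```python
# Probability that an event with per-trial probability p happens at
# least once in n independent trials: 1 - (1 - p)^n.
def at_least_once(p: float, n: int) -> float:
    return 1.0 - (1.0 - p) ** n

# A 10% event, given 10 independent tries, happens at least once
# about 65% of the time:
print(round(at_least_once(0.10, 10), 2))   # -> 0.65

# And a 2% long shot over 100 poker hands comes in at least once
# about 87% of the time:
print(round(at_least_once(0.02, 100), 2))  # -> 0.87
```

So a 10% credence is very far from saying something won't happen, which is the point being made here.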
Now, whether the credences have changed a little bit, you know, I do think that the computer replication stuff from Blaise's podcast does point in the direction of the robustness of life forming. But there is also a cynical take on that. What that really showed is that once replication starts, it takes over. That's a less difficult claim to believe than one about the probability of replication starting, right? So those are two different things: first, what is the probability that replication starts? And secondly, if it starts, does it take over? In order for his computer demonstrations to work, there had to be a non-trivial probability that replication could start. That's the kind of thing that just depends. And by replication, we don't just mean literal copying. We mean replication with information being carried from generation to generation. So, you know, fire replicates, or ice forming in cold weather on the ground replicates, as we're witnessing here in Baltimore right now. But that's not carrying information. That's just due to the intrinsic chemical structure of the thing that is growing. Living beings, and the little computer programs in Blaise's computer, have some equivalent of genetic information that is passed down from generation to generation. So for that kind of replication, there's one question of how likely it is to start, and the other question is whether it takes over. It's not so surprising that it takes over. And the question of how likely it is to start seems plausibly very dependent on the system you're looking at, right? So that's exactly why it's hard to draw a lesson from the fact that it happened in his little computer program. It's very hard to go from that to the possibility or the probability of some chemical reaction that has the ability to carry information from generation to generation spontaneously starting in the atmosphere or beneath the ocean of some planet around some star. Not that I know whether this is either large or small.
It's just hard to actually estimate that probability. So, you know, the 10% number might be way off. It might be that a person more educated in geology and chemistry and biology would put it at 90%. But I think that there's not a lot of consensus there. So 10% is my way of saying I think it's probably not going to happen, but I wouldn't be very surprised if it happened. 10% things absolutely do happen. Marie Roskew says, Back in 2013, in a short video about mass, you said that the term relativistic mass should be banned, deleted from the dictionary. Why do you or did you think so? I forget exactly the context in which I was saying that, and you could argue about whether banning words is a good thing or not, but I don't think that relativistic mass is a useful concept. And look, this is an argument about words, not an argument about physics. If you want to talk about relativistic mass, that's fine. The question is not what really exists or how physical systems behave. The question is how we best describe them. And there's a way of describing systems in special relativity that says there's something called the relativistic mass, which depends on the velocity and grows with increasing velocity. And some people can talk that way. I find it better, cleaner conceptually and calculationally, to treat the mass as a fixed, constant parameter for a single particle, or for an object that's not changing its composition or what it's made of or anything like that, okay? What changes is the energy. The energy is what really matters. E equals mc squared is the energy when the thing is not moving, at zero velocity. And then there's the relativistic energy, which is mc squared times a relativistic factor that gets bigger and bigger as the object goes closer and closer to the speed of light: technically, 1 over the square root of 1 minus v squared over c squared.
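In symbols, just restating that description in standard special-relativity notation:

```latex
E_{\text{rest}} = mc^{2}, \qquad
E = \gamma\, mc^{2}, \qquad
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}} ,
```

where $m$ is the fixed invariant mass, and all of the velocity dependence lives in the energy through the factor $\gamma$, rather than in the mass.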
And it's just easier and better to think of mass as something that is fixed when you talk about the mass of a particle, right? When you talk about the mass of a proton, you don't tell me how fast it's moving. You treat that as a number that is fixed once and for all. When you tell me the energy of a proton, you certainly do care how fast it's moving. It's just a cleaner way of dividing up the roles of these different concepts. Patrick Brown says, I often hear or read physicists describe a calculation as hard. What does that really mean? Does this description translate to a certain class of problem, i.e. one which takes three days to complete? Maybe it's a practical impediment to progress rather than a conceptual one. I'm interested to understand what makes a calculation hard at the highest levels. Yeah, you know, look, that's a very subjective thing, but I like the question, because I like questions that give me an opportunity to think about and hopefully communicate, you know, what the actual working life of a theoretical physicist is. We physicists do talk about calculations being hard all the time. Here would be my rule of thumb, taking into account that I have never thought about this before and I'm making it up after Patrick asked the question. My rule of thumb would be: if you can do the calculation in a day, it's not that hard. If you can do it in an hour, it's certainly not that hard. But if it takes, you know, most of a day, maybe, if you're feeling that kind of way, you could get away with saying that one's hard. If it takes three days to complete, that counts as a hard calculation, I think. But it's not a very firm way of talking, because some calculations can take months, right? I don't think I've ever done a calculation that takes months myself, but I've certainly done ones that take days, several days.
If you've just done one that takes months and someone else says, oh yeah, my four-day calculation, that was really hard, you're going to look at them like, oh, come on, right? That wasn't that hard. It's subjective. It's not a hard-and-fast thing. But it's all relative to the capacities and resources of human beings, right? If someone else can do the calculation in five minutes and it takes me a month, it's hard for me, but it's not hard for them. ICTMontreal says, You often argue that on Bayesian grounds we should stop asking why once additional hypotheses no longer improve explanatory or predictive power, even if that means treating the laws of physics as brute facts. My question is whether the same stopping rule applies to the normativity of truth itself. In your view, can the fact that correct reasoning ought to bind a belief be treated as a brute fact, justified pragmatically by science's long-run success? Or is there a principled distinction between accepting brute laws and accepting brute normativity? Put differently, within poetic naturalism, what ultimately makes a belief wrong rather than merely less useful? So I'm going to try to answer this question. I'm not sure I'm going to do a great job, because I'm not quite sure how the word normativity is being deployed here. Normativity means talking about how we should act, or how we're supposed to act, rather than what actually happens; it's a prescriptive versus descriptive way of thinking. Just to be super clear, I think you've described my view mostly fairly, but I wouldn't have personally said one should stop asking why once additional hypotheses no longer improve explanatory or predictive power. What I would like to say is you can always ask why. Knock yourself out. Ask why. What you can't do is demand an answer. You can't demand that there is such a thing as the reason why something is true. Some things are going to have plausible reasons why they're true.
Other things are not. That's the best we can do if we face the universe and the metaphysical reality of it all with a humble attitude toward what might be out there. So I don't think there's a clear stopping rule about when you should stop asking questions like, why is this true? If you found the wave function of the universe and you said, well, why is that true? Maybe the answer is it's just the way it is. There isn't any deeper answer. But maybe there is a deeper answer that you could find. So by all means, keep looking. Please don't stop if you're driven to do that. Different people have different ways of judging whether or not such a search, such a task, is actually fruitful. So your question: can the fact that correct reasoning ought to bind a belief be treated as a brute fact, justified pragmatically, or is there a principled distinction? Here I'm much fuzzier and I have less to say, to be very, very honest about it. I remember when I was young being deeply influenced by Douglas Hofstadter's discussion of this. It might have been in Gödel, Escher, Bach, but it might also have been somewhere else. You know, he had a discussion where he says, you want to prove something using logic, right? And you think, okay, logic. Logic is airtight. I can't go wrong. And you say, okay, here is a syllogism that says all A's are B, all B's are C, therefore all A's are C. And you say, okay, well, why is that true? Why is that a syllogism that is true? And you say, oh, there's a rule that says that this form of syllogism is true. You say, oh, okay, good, there's a rule. But how do I know the rule is true? And you can see, I'm not going to go through the whole thing, but you'll see where this is going. Well, there's a rule that says the rule is true, right? And so he has this long nested set of beliefs that you can buy into and everything.
And ultimately, on the one hand, I'm in favor of thinking about these hard questions carefully, like a philosopher of logic, a philosopher of mathematical logic. Those are good things to do. On the other hand, I am not a philosopher of mathematical logic, and I don't have any strong feelings about these things. As a scientist, I am happy with the pragmatic justification: okay, these rules work, let's use them. Maybe I can't prove that they need to work in some normative way, but until you give me a reason for thinking they won't work, I'm going to use the rules of logic and things like that in their standard senses. Keith says, I started Drops of God this weekend with my spouse. It is about wine, vineyards, neat wine detective work, and some nice drama so far. While watching, we were wondering if you all had checked it out, and what do you think? Yes, we're huge fans of Drops of God. Some of you might know that Jennifer is a science and culture writer at Ars Technica. So she both covers science stories, cool physics ideas, and also reviews TV shows and movies and things like that, and has a year-end roundup where she picks, you know, the top 10 best. And I think Drops of God is not from this last year but the year before. But it was absolutely in her top 10 list. It's a great show, based on a Japanese manga, I believe. And there's both a French side of the story and a Japanese side of the story, but in the TV show there's people speaking Italian and English and a whole bunch of different things, and there's competition between two people who have different stances on what it means to be a sommelier and really think about wine and stuff like that. So, yeah, pushing all my buttons. I love it. It was great. I believe they are doing a second season, scheduled to come out soon, although it's one of those things where, you know, season one was pretty self-contained. I'm not sure it needs a second season, but hopefully they'll do a good job.
Robert Boyle says, I did my degree in mathematics at Cambridge, England back in 1987, and I took all the theoretical physics options I could, such as GR and quantum mechanics. It's been great to try and catch myself up again through your podcast and books. You've also inspired me to take more of an interest in philosophy, something I never studied. But when I asked ChatGPT to give me recommendations for books to read, the top pick was A History of Western Philosophy by Bertrand Russell, first published in 1946. Surely there is something better to give me a good grounding in the latest thinking than an 80-year-old book. What would be your top recommendations for someone who wants to understand the core ideas and get an up-to-date view of the latest thinking? Something that is close to what you might write yourself if you ever got around to writing the biggest ideas in philosophy. Well, it's a little bit hard, because philosophy is very big, right? You know, what is going on in modern, I don't know, philosophy of physics is not very connected to what's going on in moral philosophy or aesthetics or whatever. So it's very hard to get a big-picture overview like that. There are things like the Cambridge Encyclopedia of Philosophy, these giant books that try to give you an overview of many things. But it's not systematic. It's like an encyclopedia. There's lots of little entries in there, and you can pick and choose the ones you want. Even though it's an encyclopedia, it's just one volume. It's not too intimidating. I guess the two recommendations I would make are these. Either think of a specific field that you care about, like philosophy of quantum mechanics or something like that, and read books in that area. There's good books by David Albert, Tim Maudlin, other people. If you do ethical philosophy, you could do worse than read A Theory of Justice by John Rawls or something like that, even though it's a little bit out of date.
But there is a fun book that I can recommend, I don't know if I ever have, by David Papineau. David is a philosopher somewhere over there in the UK; he's a very good philosopher, and I like his stuff. He has written a relatively short book called Philosophical Devices: Proofs, Probabilities, Possibilities, and Sets. Now, it's still, you know, a relatively focused set of material. It's what you might call metaphysics and epistemology, right? So logic, language, things like that. But he's giving you the overview of how people think about those topics in the modern world. So he's not talking about morality or aesthetics or various other parts of philosophy. And he's certainly not talking about the history of philosophy that much. So, again, you might just not have one book that does everything that you want it to do. Thomas Anderson says, In the many-worlds interpretation of quantum mechanics, would the probability distribution that specific events are sampled from be invariant across all the worlds? For example, would the size of cities always be modeled by a power law? Well, I'm not sure whether this is a many-worlds question or just a more general physics question. So the specific example: would the size of cities always be modeled by a power law? This is a feature of higher-level emergent systems that is studied under complexity theory, right? There's nothing specific here about the lower-level laws of physics. So, you know, often people give the example of the laws of economics, right? The law of supply and demand: would that hold in an alien civilization that didn't have the same sort of history and ideas that we did? Arguably it would. Or, for that matter, something like Darwinian evolution, or perhaps the rules that you're talking about, power-law distributions of cities, or of symbols in an alphabet, or something like that.
There are good reasons why these kinds of law-like behaviors are very, very generic across very, very large sets of different circumstances. Now, are they generic enough that I can guarantee you that they would hold in every world, in the many worlds interpretation of quantum mechanics? That's an ambitious question, so I can't really say that with confidence. But I do think that given relatively similar conditions and laws of physics, but some differences at the small scale levels of initial conditions or things like that, you would tend to get certain features being more or less universal in different worlds. That would be my guess. I don't have a theorem or a proof that that's going to actually be true. Alan Lubel says, You said that if the universe does in fact last for infinity, that does not necessarily mean that all humans within that universe will live again, and you used the example that you can create a system of numbers that never repeats in an infinite universe. But, sorry, here's the question: aren't numbers themselves not necessarily real, because one can argue that math is a human invention to predict patterns in the universe and not really part of the universe? So if this supposition is valid, could you give other examples of humans in the universe never repeating in an infinite universe? Sure. The example of the numbers is just to remind you that there's a difference between a set being infinitely big and a set repeating over and over again. There's a mathematical property behind what's going on. If you have an infinite amount of time exploring a space, and that space is finite in size, and you never stop, then you will come back arbitrarily close to previous configurations over and over again. That's basically the Poincaré Recurrence Theorem, in some sense. So Poincaré proved a theorem that that will happen. But it is a postulate or a premise of the theorem that the space of possibilities is itself bounded, is finite. 
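The role of that finiteness premise can be illustrated with a toy computation (my own illustration, not Carroll's example): a deterministic update rule on a finite set of states must eventually revisit a state, by the pigeonhole principle, while the same kind of rule on an unbounded set need never repeat.

```python
def iterate_until_repeat(step, state, max_steps=10_000):
    """Apply a deterministic update until a state recurs; return the
    number of steps taken, or None if no repeat within max_steps."""
    seen = {state}
    for n in range(1, max_steps + 1):
        state = step(state)
        if state in seen:
            return n
        seen.add(state)
    return None

# Bounded state space (integers mod 360): recurrence is guaranteed by
# the pigeonhole principle -- the finite-volume spirit of Poincare
# recurrence. Stepping by 7 returns to the start after 360 steps.
bounded = iterate_until_repeat(lambda x: (x + 7) % 360, 0)

# Unbounded state space: a freely drifting "particle" never revisits a
# configuration, like the two non-interacting particles in empty space.
unbounded = iterate_until_repeat(lambda x: x + 7, 0)

print(bounded, unbounded)
```

The bounded walk repeats (after exactly 360 steps here), and the unbounded one never does, no matter how long you run it: lasting forever does not imply repeating.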
If you have two particles moving in empty space with no interactions between them, they will never repeat a configuration. They will get closer to each other and they'll get further away from each other, and they're never going to come back and do the same thing over and over again. So it's just a mathematical example that lasting forever does not mean repeating. At the level of physics in the real world, this is extremely plausible. You know, we think that it is plausible that our universe has a positive vacuum energy. It will expand forever. Again, I'm not saying this is true. I'm saying this is plausibly true. It will expand forever. And what happens is stars burn out and fall into black holes and evaporate away eventually into nothingness. And we're left with empty space forever. In that empty space forever, there's no reason for human beings to ever reappear. The universe just doesn't come back to that initial state. It depends on the details of the laws of physics, which we don't know. So it's absolutely possible that there are laws of physics, that the correct laws of physics say there will be repeats, but you can't just argue from that from just the statement that the universe lasts forever, even if that's true. A Short Distance Ahead says, What I think I understand is that in the many-worlds interpretation, each time there is decoherence, it creates different branches of the wave function, orthogonal, non-interacting universes. What I don't understand is whether those universes are in a completely new space-time or in the same space-time. Since we don't have a clear understanding of how quantum mechanics and relativity work together, wouldn't it be possible that those branches are all in the same space-time and still contribute to gravity? And if so, wouldn't those branches be candidates for what we call dark matter energy? Short answer is no, they would not be candidates. 
Look, I mean, of course it's possible, you all know that, anything is possible, but that's just not how many worlds works. It's not that the worlds are in space-time. It's that each world has space-time in it. Or even better, each world is separate, okay, in the many worlds interpretation of quantum mechanics. They're non-interacting. That's what non-interacting means. Non-interacting includes interacting via gravity. And each world describes an emergent space-time separately in each world. So, yes, if you have a very different setup where you have a single space-time with many parallel worlds, like you can have, for example, in brane theory, in string theory, brane theory, B-R-A-N-E, you can imagine multiple parallel branes. And in that case, if those branes are more or less fixed in distance between each other, the gravity from what's going on in one brane can absolutely leak into other branes and look like dark matter or dark energy. That's absolutely possible. It's in no way an attractive theory. It's much easier just to have particles that don't interact electromagnetically here in our universe. That's really just not that hard to do. So you don't need to work all that hard. And remember, you know, very roughly speaking, 5% of the mass density or energy density of our universe is ordinary matter. 25% is dark matter. So they're different by a factor of 5, okay, whereas the number of universes in many worlds is much bigger than five. So even numerically, it doesn't quite work out that you would expect that even if somehow there was an effect, it would look like what you and I know as dark matter and dark energy. Finally, the dark matter behaves differently than ordinary matter does. It clumps in different ways, right? And namely, it clumps very little and only because of gravity. 
Whereas ordinary matter has dissipative phenomena, like emitting photons and forming molecules and things like that. So that's why all of the stars and gas and dust fall to the center of a galactic halo, but the dark matter is spread out in a puffy cloud throughout the halo. So if what we think of as the dark matter were really ordinary particles, but on another branch of the wave function, we would expect their behavior to be more or less the same, but it's not. So that kind of proposal, you know, people have thought about this stuff. It just doesn't quite work out. Prahas David Nafisian says, can you compare and contrast your ideas for the physics of democracy with Isaac Asimov's character Hari Seldon's psychohistory, found in the Foundation Trilogy? An AI overview in part says psychohistory combines history, sociology, and statistics to foresee societal trends, though it falters with unpredictable individuals. Furthermore, it is a science that uses advanced mathematics, akin to statistical mechanics for societies, to model and predict the behavior of vast groups of people. Sure, I can compare and contrast them. One thing is that I don't have a theory of the physics of democracy by any stretch of the imagination. I'm trying to write a book aimed at a popular audience that kind of mixes and matches various ideas from physics with different contexts in government and society and things like that, in particular in democracy. But there's no, like, one cure-all, catch-all, big-picture thing. It's like, let's think about phase transitions and voting theory and, you know, all these different kinds of things that might be related. So it's the opposite of Hari Seldon's psychohistory. In fact, I don't think that psychohistory was a very good idea. In fact, the beginning of the book, the foreword or whatever it's going to be, the prologue in very rough draft form, starts by making fun of Isaac Asimov's idea of psychohistory. So Asimov was trained as a chemist. 
He knew statistical mechanics for sure. And he had this idea that when you have many, many atoms that come together, they have collective behaviors that are easier to predict than the specific behaviors of every individual atom. 100% true. But he figured, Well, therefore, if you have enough human beings in your society, the human beings are kind of like the atoms. The society is kind of like the fluid or gas or whatever. So it should be a similar thing. It is not a similar thing, in fact. And part of the reason why it's not a similar thing goes back to the earlier question about why non-equilibrium physics is hard. On the one hand, societies are out of equilibrium. They are evolving, right? They are doing complicated, dynamical things. But much more importantly, the individual constituents of that collective system, in this case, human beings, are themselves complex and nonlinear. OK, the nice thing about atoms is that the individual atoms are pretty simple. So it's true that if you wanted to keep track of the specific motions of every single atom in a huge collection of them that makes the gas in a box or whatever, that would be very, very hard to do. But averaging over them by calculating instead of the specific locations and velocities of every atom, just calculating the net velocity, pressure, density, things like that, is enough to predict the behavior of the system as a whole. But that idea that you can just average over things and that's good enough to predict what's going to happen is not universal, is not necessary, is not going to be relevant when the individual subsystems, when they interact, it's nonlinear. When two atoms interact, you don't need a lot of data to tell me what's going to happen next. When two human beings interact, all sorts of things can happen. And furthermore, chaos theory comes into the game as well. 
The interactions are nonlinear, which means a small deviation in what the individual constituents do in their initial conditions can lead to a wide variation in what they actually end up doing. Now, this nonlinearity, unlike the case of fluid mechanics, this nonlinearity sort of percolates up to the collective. So because the collective is something very far from equilibrium and is subject to chaotic dynamics itself, In principle, you can have a very tiny change in the behavior of a person or two people in a society that leads to dramatic changes, not just in the behavior of that person, but in the behavior of the whole society. So it's kind of the opposite of what Asimov was counting on. What Asimov figured is that if you had enough people, if your society was big enough, you could be able to make detailed predictions for what would actually happen. Of course, he actually cheated. You know, if you've read, I don't want to spoil it, but, of course, it's only an interesting story when the predictions go wrong, of course, for various reasons. It's very much like his laws of robotics, right? Asimov had the foresight to see that we're going to need some laws of robotics. He set them down. But people sometimes act like, okay, those are the right laws of robotics. They clearly didn't read any of the stories. Like, every single story is the laws of robotics not being enough to tell the robot what to do. Likewise, psychohistory goes wrong in all sorts of ways, because there's always exceptions to the assumptions that you make. So what I'm after is much less a theory that would predict how society is going to evolve into the future, and more a set of tools for understanding what kinds of things can happen. I'm not sure that aiming for a specifically predictive theory is even a good idea in this case, but maybe some stochastic set of, you know, understanding what the space of possibilities is like would be a useful thing. 
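The sensitivity being described here is the standard signature of chaos, and the classic toy model for it (my illustration, not one Carroll uses) is the logistic map. Two trajectories that start one part in ten billion apart stay indistinguishable for a short while, then diverge completely:

```python
def logistic(x, r=4.0):
    """One step of the logistic map x -> r*x*(1-x), fully chaotic at r=4."""
    return r * x * (1 - x)

# Two "societies" whose initial conditions differ by 1 part in 10^10.
x, y = 0.2, 0.2 + 1e-10
gap_after_10 = None
max_gap = 0.0
for step in range(1, 101):
    x, y = logistic(x), logistic(y)
    max_gap = max(max_gap, abs(x - y))
    if step == 10:
        gap_after_10 = abs(x - y)  # still tiny: divergence takes time

print(gap_after_10, max_gap)
```

After 10 steps the gap is still microscopic, but well before step 100 it has grown to the full size of the system: the tiny initial difference has percolated up to a completely different trajectory, which is exactly why averaging over chaotic, nonlinear constituents does not buy you the predictability that averaging over atoms does.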
Mikkel Benenson says, does Laplace's demon know the truth value of Goldbach's conjecture? Goldbach's conjecture, for those of you who don't know, says that every even number can be written as a sum of two prime numbers. At least every even number greater than two; two is an exception to the rule. And we don't know. We haven't proven Goldbach's conjecture one way or the other. It's an unsolved problem. So, Mikkel is asking whether Laplace's demon knows whether or not Goldbach's conjecture is true. So, I mean, the disappointing answer is no, because Laplace's demon doesn't exist. The reason why that is a non-trivial statement in this case is because the actual answer is, I don't know, it depends on what you mean by Laplace's demon. There are certainly versions of Laplace's demon that don't know the truth value of Goldbach's conjecture, because the idea of Laplace's demon is a vast intellect that knows the exact instantaneous state of the universe. So in classical mechanics, that would be the position and momentum of every little constituent of the universe. In quantum mechanics, it might be the exact quantum state or something like that. And the vast intellect also knows the exact laws of physics. And the vast intellect has arbitrarily powerful computational abilities, so it can compute what's going to happen. And then the point is that Laplace's demon can predict with perfect fidelity in a deterministic system what will happen next, what will happen arbitrarily far in the future, and what did happen arbitrarily far in the past, if it's a closed system or if it's the whole universe. So there's kind of a minimal version of Laplace's demon which is exactly that and no more. All Laplace's demon knows is what the state of the universe is and what it was and what it will be. That's it. This version of Laplace's demon doesn't even know what temperature is. It doesn't even know what a human being is. 
All it knows is what all the atoms or the quantum fields are doing. Whatever is the most fundamental description is the only thing that this sort of minimal, lightweight version of Laplace's demon has access to. So certainly Laplace's demon doesn't know the truth value of mathematical conjectures or principles. This version of Laplace's demon doesn't even know that 1 plus 1 equals 2, much less Goldbach's conjecture, because all it knows is the physical universe and what its state is. That's all, okay? So you can't talk to it about anything else. But now, of course, you might be willing to say, well, that's an unrealistic version of Laplace's demon. You're limiting it too much. We haven't limited it that much. It still does know the exact state of the universe and has arbitrary calculational power. But you can envision versions of Laplace's demon that know all the emergent descriptions of the world as well, a version of Laplace's demon that does know about temperature and entropy and human beings and stuff like that. And you can imagine versions of Laplace's demon that know the truth values of all well-formed mathematical statements, okay, to the extent that they exist. Or maybe that version of Laplace's demon would say, rather than talking about the truth value, we should talk about the truth value conditionalized on some set of axioms, like the Peano axioms for arithmetic or something like that, whether certain statements follow from certain axioms, because as Gödel told us, just because a statement is true doesn't mean you can prove it from some set of axioms. 
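While no finite computation can settle Goldbach's conjecture in general, it is directly checkable for any finite range, which is why it is so well tested empirically. A minimal sketch (brute force, purely illustrative):

```python
def is_prime(n):
    """Trial-division primality test; fine for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def goldbach_pair(n):
    """Return (p, q) with p + q == n and both prime, or None."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

# Every even number greater than 2, up to a finite bound:
assert all(goldbach_pair(n) is not None for n in range(4, 2001, 2))
print(goldbach_pair(100))  # (3, 97)
```

Checks like this have been pushed to astronomically large bounds without finding a counterexample, but as the discussion notes, no amount of finite checking amounts to a proof.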
Eric Stromquist asks a priority question, and he says, I recently stumbled across the fact that the same application of Noether's theorem to the U(1) symmetry of the Schrodinger action that leads to the electrical charge density and current, when Noether's theorem is discussed in a physics lecture, can also be interpreted as yielding the probability density psi squared for a universal wave function in configuration space, relevant to the Everett interpretation, together with its associated probability current, where that current follows the Bohmian guiding velocity field. So, for both the Everett interpretation and Bohm's theory, Noether's theorem seems to provide the Born rule with physical grounding, without having to sneak it in by hand by just postulating that the probability density is the absolute square, or having to resort to decision-theoretic arguments. My question is, since I'm sure you're aware of this derivation of conserved currents from wave functions, why isn't much more, as opposed to very little, made of it in discussions of the measurement problem and the source of Born rule probabilities? Yeah, so you have to distinguish between two different issues. One is, what is the origin and nature of probability in quantum mechanics? And there's a separate issue, which is why is the probability, once you understand its origin and nature, given by the wave function squared? The second one is easy, and the second one is what you're addressing here, okay? So there are many ways to mathematically derive from the Schrodinger equation the fact that psi squared gives you a probability density. That is to say, mathematically, what it means to give you a probability density is: you give a function, a set of numbers, defined on the configuration space or whatever you want to define it on, which are individually numbers between 0 and 1, and they all add up to 1, or it's some infinite dimensional version of that, some continuous distributional version of that. 
And that gives you a probability density. Everyone knows that psi squared is the thing you can make from the wave function that has the properties of a conserved probability. That is something we do teach in quantum mechanics courses. David Tong, recent Mindscape guest, remember, has written all these books, textbooks, and I used his textbook when I taught quantum mechanics this last fall, and he has a very nice derivation of the conserved probability current in exactly that kind of language. But the difficult question is, okay, so why is that mathematical formula, which has the mathematical properties of a probability density, interpreted as the probability of getting a measurement outcome? That's the hard part. I mean, again, there's all sorts of ways. There's something called Gleason's theorem, which is another way of doing it. Zurek has an envariance-based argument. The arguments are all out there and they're well known. But that doesn't mean that you're actually establishing physically that when you do a measurement, you get different measurement outcomes with that probability. You know, if you look very, very carefully at the derivation of the probability current using Noether's theorem, et cetera, et cetera, et cetera, the word measurement never appears. If you really want to understand quantum mechanics and why it's tricky, you can't gloss over the fact that in the standard textbook, I shouldn't say derivation, presentation of quantum mechanics, measurement has a special status. You can't just look at what's happening when the wave function is evolving according to the Schrodinger equation. The measurement violates the Schrodinger equation, and that's a whole different thing that is hard to understand. Now, I actually think that we understand it. I do think that we know why there are probabilities and why they look like the Born rule, but I accept the fact that it is a non-trivial question to ask. Okay, we've reached the end. 
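The "conserved probability" part of the story, as distinct from its interpretation, can be seen numerically. Below is a minimal sketch (my illustration, with an arbitrary grid and wave packet): evolving a wave function with a Crank-Nicolson step, which is an exactly unitary approximation to Schrodinger evolution, keeps the total integrated psi squared equal to 1 no matter how long you run it.

```python
import numpy as np

# Free particle on a 1D grid with hard walls (units with hbar = m = 1).
N, L = 200, 10.0
dx = L / N
x = np.linspace(0, L, N, endpoint=False)

# Tridiagonal finite-difference Hamiltonian H = -(1/2) d^2/dx^2.
H = (np.diag(np.full(N, 1.0 / dx**2))
     - np.diag(np.full(N - 1, 0.5 / dx**2), 1)
     - np.diag(np.full(N - 1, 0.5 / dx**2), -1))

# Normalized Gaussian wave packet with some momentum.
psi = np.exp(-(x - 3.0) ** 2 + 2j * x)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

# Crank-Nicolson step: psi -> (1 + iH dt/2)^(-1) (1 - iH dt/2) psi.
# This is the Cayley transform of a Hermitian H, hence exactly unitary,
# so the integral of |psi|^2 is conserved by construction.
dt = 0.001
A = np.eye(N) + 0.5j * dt * H
B = np.eye(N) - 0.5j * dt * H
for _ in range(500):
    psi = np.linalg.solve(A, B @ psi)

norm = np.sum(np.abs(psi) ** 2) * dx
print(norm)  # stays equal to 1 up to roundoff
```

Note what this does and does not show: the wave packet spreads and moves, and the norm stays exactly 1, which is the conserved-current statement. Nothing in the code mentions measurement; connecting that conserved quantity to the probability of a measurement outcome is the separate, hard step discussed above.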
Last question comes from Anonymous, who says, My wife and I have graduate-level education at U.S. universities and two freshmen in high school. So I presume you're saying that you have two freshmen in high school, that your children are freshmen in high school. I presume that's what's being said. Anyway, frankly, I'm still paying student loans 20 years later. As we prepare for high school graduation and what comes next, we can't imagine our kids not going to college, even as we aren't sure how we'll pay for it all. I recall those years in my life as the most unencumbered, a period in which I was able to pursue intellectual and social interests to my heart's content. Also, the time when I fell in love with ideas and learned compassion and empathy for others. Our kids have some practical doubts about what further education could provide for them in admittedly uncertain times. While I don't disagree with them, I feel that there's more at stake than just their career paths. You've seen a lot of students through the later stages of their education. How would you sell high schoolers of today on further education, or would you? Has your opinion on this evolved over time? Well, it's a great set of questions here. You know, a lot of this was addressed in my holiday message on the romance of the university, but not all of it. So what I tried to address in that little message was why a university education is important and useful at the level of fundamental values, not just at the level of earning money. Forget about training for your job. I think there's a lot of reasons why university-level education is special and valuable. And the short version of it is because it prepares you to open yourself up to other things. 
It gives you a grounding in being able to think about things in different ways and being open to new experiences that you wouldn't otherwise get in a typical life history of someone, at least in the United States or at least people who I'm familiar with. There is something special about that time that you have as an undergraduate student. And like you say in the question, you get to devote yourself to learning things and thinking about things and addressing ideas, which is not going to be the case for most of the rest of your life. Now, what I didn't address in that message was the fact that it costs money. And these days it costs a lot of money. It costs more money than it did when I was a kid. And when I was a kid, it already cost enough money that I had to pick my undergraduate institution on the basis of where can I get a full tuition scholarship because we couldn't afford the actual tuition. So that's a real thing, right? I mean, it's absolutely a sensible consideration. I'm not going to tell you, oh, who cares about how much money it costs? Because, again, I said this in other places, like there was a solo podcast on the coming phase transition, et cetera. And the economic system here in the United States has become very good at being maximally extractive. You know, capitalism is always extractive. You know, the people who want to sell you things want to set the cost, the price of the good as high as they can because they want to earn a profit. But the problem is, you know, how high can you set it before people stop buying it? and capitalism in the modern technological era has just become super duper good at setting it just high enough that you will buy it, but at the sort of minimum level of satisfaction. You know, when you get a good deal on something, you can buy it, you can spend the money and then be very happy that you purchased it. 
But if you're very happy that you purchased it, then the seller is going to say, well, why didn't I charge you more for it? And if they charge just enough so that you're on the fence between buying it and not buying it, then when you buy it, you're not super happy, because on the one hand, you have a good thing that you want. On the other hand, you've just given away a lot of money. Going to college has become like that, right? And the way that it's become like that, because the colleges understand that a lot of people can't afford to spend a lot of money, is that what they do is, on the one hand, jack up tuition to very, very high levels, and on the other hand, provide various ways for people to either get grants or loans or things like that to pay more than they can afford. And if you do it by getting student loans, then you're stuck trying to repay those student loans for many, many years thereafter. At a time of your life where you're not making a lot of money, where things are tough, it's very, very oppressive, honestly. It can be very difficult. So I think that if you run the numbers, it is still worth going to a good school. I think that even if all you care about is money, I think that you will make more money over the long term by going to school, taking out the student loans and repaying them than you will by not getting a college education at all. So I don't think that's actually a reason not to go to an undergraduate university. It is a reason to be annoyed and upset at how expensive it is. You can look for specific places that might be cheaper than others. You know, I'm a big, big believer in college education. I'm less of a big believer in fetishizing the best schools, the top schools. You know, I think that people don't appreciate the extent to which how good of an education you get as an undergraduate at a university is much more up to you than it is up to the university. You can go to Harvard and get a terrible education. 
You can go to some big state school and get a wonderful education. So if it turns out that what's getting in the way is that you're only aiming at schools that are super expensive, then that's a change you can make in your utility function. It's important to go to school. It's less important which school you go to, as long as you go to one that is either big enough to be very, very diverse or small but precisely targeted at what you want as a student. And then there's other things. Yeah, there are scholarships. Certain schools, like Johns Hopkins, have instituted programs where if you make less than a certain amount of money, if your family makes less than a certain amount of money, they won't charge you tuition. I think the amount of money is like $200,000 per year for the household. So depending on if you're really not making a lot of money, you can actually get into a school and not pay that much tuition. You still have to pay other things. There's still fees. You might take out some loans, but it's not completely prohibitive. So I would also say the only other piece of practical advice is don't decide ahead of time that you can't afford it, right? Apply. See what is available in terms of financial aid. See what the system is. See what it would really cost. That's the mistake I made because I came from a non-academic family. I didn't know what the possibilities were, So I didn't even apply to a lot of undergraduate schools that might have made it possible for me to go there. But I just didn't know. So, you know, yes, send out the applications. Hope for the best. You know, apply widely. Apply to some really aspirational schools. Apply to some that would be, you know, perfectly good, even though they're not your first choices. And see what happens. And, you know, I do think that it's a shame if you don't end up going to college just because you think you can't afford it, not because you're wrong in saying that, that you might be right or you might not be right. 
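On the earlier "run the numbers" point, a toy back-of-the-envelope comparison can be sketched in a few lines. Every figure below is a made-up assumption for illustration only, not data and not Carroll's numbers; the point is just the shape of the arithmetic: total loan repayment under standard amortization versus an assumed lifetime earnings premium.

```python
# Toy comparison -- every number is a hypothetical assumption.
loan = 120_000          # assumed total borrowed for a degree
rate = 0.06             # assumed annual interest rate
years = 20              # assumed repayment period

# Standard fixed-payment amortization formula.
monthly_rate = rate / 12
n_payments = years * 12
payment = loan * monthly_rate / (1 - (1 + monthly_rate) ** -n_payments)
total_repaid = payment * n_payments

# Assumed average earnings premium for degree holders over a career.
premium_per_year = 25_000
career_years = 40
lifetime_premium = premium_per_year * career_years

print(round(total_repaid), lifetime_premium)
```

Under these invented assumptions the lifetime premium comes out several times the total repaid, which is the qualitative claim being made; with different assumed interest rates, loan sizes, or premiums, the margin changes, so anyone actually deciding should plug in their own numbers.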
But don't leap to that conclusion prematurely is what I would say, because it's really, really rewarding to do it. I hope we are able to fix the system that we have so that more people can go to college without all these extra burdens hanging over their heads. If I were the boss of the world, that would be the case. But I remain not the boss of the world. Maybe that is better for me. Maybe it's even better for the world. Who knows? Thanks, as always, for listening to the AMA. I love your questions. I hope you appreciate the answers. Thanks to Patreon supporters for supporting Mindscape, and I'll talk to you next time. Thank you.