Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas

AMA | September 2025

210 min
Sep 8, 2025
Summary

Sean Carroll's September 2025 AMA covers his new teaching roles at Johns Hopkins, foundational questions in quantum mechanics and cosmology, the stability of political systems, and practical challenges in physics research. Carroll discusses quantum mechanics pedagogy, the nature of complexity and entropy, interpretations of quantum theory, and contemporary issues in academia.

Insights
  • Teaching foundational physics requires balancing rigorous mathematical formalism with intuitive understanding; different students benefit from different pedagogical approaches
  • The Born Rule in many-worlds interpretation emerges naturally from rational decision-making under self-locating uncertainty rather than requiring complex proofs
  • Complexity in physical systems depends on dissipative processes and energy flow, not just entropy increase; the coffee-cream example requires long-range coherence to manifest observable complexity
  • Political systems lack stable equilibria due to constant environmental perturbations; democracy and authoritarianism are equally unstable, making stability-focused design crucial
  • Quantum gravity conceptual problems persist in weak-field regimes only when treating spacetime curvature itself; perturbative gravity on flat spacetime avoids these issues
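The Born-rule insight above can be made concrete with a small sketch (my own illustration, not from the episode): under self-locating uncertainty in many-worlds, the "obvious" credence to assign to each branch is its squared amplitude, normalized over all branches.

```python
# Toy sketch (illustrative only): the Born rule as the natural credence
# assignment over branches under self-locating uncertainty.

def born_credences(amplitudes):
    """Born-rule credences: |a_i|^2 normalized over all branches."""
    weights = [abs(a) ** 2 for a in amplitudes]
    total = sum(weights)
    return [w / total for w in weights]

# Equal-amplitude branching gives equal credences...
print(born_credences([1, 1]))        # [0.5, 0.5]

# ...while unequal amplitudes weight the branches quadratically:
# amplitudes 1 and 2 give credences 1/5 and 4/5, not 1/3 and 2/3.
print(born_credences([1, 2]))        # [0.2, 0.8]

# Complex amplitudes work too; only the magnitude matters.
print(born_credences([1j, 1 + 1j]))  # approximately [1/3, 2/3]
```

Note the quadratic weighting: doubling an amplitude quadruples the credence, which is exactly what distinguishes the Born rule from naive branch counting.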
Trends
  • Increasing recognition that AI-generated scientific papers threaten literature quality; citation metrics become unreliable as filtering mechanisms
  • Growing emphasis on stability analysis in social system design rather than seeking optimal equilibrium states
  • Shift toward understanding emergence through coarse-graining and effective field theory rather than reductionist approaches
  • Recognition that foundational physics insights (quantum mechanics, thermodynamics) can improve higher-level theories but aren't always necessary
  • Tension between defending academic freedom and managing real-world consequences of institutional resistance to government pressure
  • Renewed interest in decoherent histories formalism for understanding quantum mechanics in cosmological contexts
  • Appreciation for time-domain astronomy surveys enabling discovery of unexpected phenomena rather than testing predetermined hypotheses
Topics
  • Quantum Mechanics Pedagogy and Formalism
  • Many-Worlds Interpretation and Born Rule
  • Quantum Decoherence and Measurement Problem
  • Complexity and Entropy in Physical Systems
  • Quantum Gravity and Spacetime Discreteness
  • Political System Stability and Democracy
  • Cosmological Inflation and Anisotropy
  • Black Hole Thermodynamics and Information
  • Effective Field Theory and Emergence
  • Entropic Gravity and Holographic Principles
  • Academic Publishing and AI-Generated Content
  • Parton Distribution Functions in Particle Physics
  • Decoherence in Quantum Measurement
  • Warp Drives and General Relativity
  • Moral Constructivism and Ethics
Companies
Amazon Music
Podcast distribution platform sponsoring the episode with ads for podcast listening
Figs
Medical apparel company sponsoring episode with nurse testimonial about scrubs quality
Johns Hopkins University
Carroll's current employer; discussed new teaching positions in Philosophy of Cosmology and quantum mechanics
Space Telescope Science Institute
NASA facility across from Johns Hopkins; mentioned removing DEI signage under government pressure
Melwood
Disability employment nonprofit sponsoring episode with vehicle donation program
People
Sean Carroll
Host; theoretical physicist at Johns Hopkins teaching quantum mechanics and philosophy of cosmology
David Wallace
Philosopher of physics; pioneered decision-theoretic approach to deriving Born Rule in many-worlds
David Deutsch
Quantum computing pioneer; co-developed decision-theoretic proof of Born Rule with Wallace
Jacob Barandes
Physicist questioning many-worlds assumptions; discussed in context of quantum foundations
Ted Jacobson
Physicist; pioneered entropic gravity ideas and Einstein equation of state formalism
Eric Verlinde
Theoretical physicist; proposed entropic gravity theory connecting entropy to gravitational force
Hugh Everett
Quantum physicist; original proponent of many-worlds interpretation discussed in context
John Wheeler
Physicist; developed 'it from bit' concept and measurement-focused interpretation of quantum mechanics
Karl Popper
Philosopher of science; criticized Copenhagen interpretation's classical-quantum boundary
Jenann Ismael
Philosopher; discussed Laplace's demon and self-modeling systems creating emergent underdetermination
Joshua Greene
Effective altruism researcher; founded Giving Multiplier for balanced charitable giving
Miguel Alcubierre
Mexican physicist; proposed warp drive metric solution to Einstein's equations
Lotty Ackerman
Caltech student; co-authored paper on anisotropic inflation with Carroll and Mark Wise
Mark Wise
Caltech particle theorist; co-authored anisotropic inflation paper with Carroll and Ackerman
Phil Anderson
Physicist; famous for 'more is different' principle about emergence and reductionism
David Hilbert
Mathematician; derived Einstein's equations from action principle; anecdote about hyperinflation
Vera Rubin
Astronomer; namesake of new observatory conducting time-domain sky survey
Zach Weinersmith
Cartoonist and science communicator; shared Hilbert hyperinflation story on social media
Quotes
"If you're a musician, you learn to play scales...not because it's especially musical, but because it's sort of ingraining some intuition that turns out later to be very useful."
Sean Carroll, on teaching quantum mechanics problem-solving
"The Born Rule is just, you know, there for the taking. And so I just think you should take it."
Sean Carroll, on the many-worlds interpretation
"Laplace's demon was never meant to live in the world...you would have to be as big as the universe."
Sean Carroll, on determinism and prediction
"What happens when you make a moral choice is not that you are or are not adhering to some abstract code; it is that you are revealing or constructing who you are."
Sean Carroll, on moral particularism
"There's nothing discrete about space necessarily in quantum gravity. Again, maybe it would be. Now, it might be a worry...but it's certainly not like a theorem either."
Sean Carroll, on spacetime discreteness
Full Transcript
Whether you're into unsolved mysteries, solved mysteries, or creating your own mysteries, Amazon Music's got millions of podcast episodes waiting. Just download the Amazon Music app and start listening to your favorite podcasts ad-free, included with Prime. Hi, I'm Katie Duke, and I've been a nurse for over 20 years. Listen, I used to think that I was my most stylish in my 20s, but honestly, style and confidence only get better with age. And that is why I love figs. These scrubs are beautiful, comfortable, and they are built to last. They're not those boxy, scratchy uniforms that we all started out in. No, no, no. These fit perfectly. They feel amazing, and the quality is just wow! My favorite color? Burgundy. It's chic, it's timeless, and it's even the same color as my apartment, because I'm kind of obsessed with it. I love adding custom embroidery to make my scrubs as personal as my style. And since I work in telehealth, my embroidered figs even double as my ID badge. It's never too late to reinvent yourself or your scrubs. Get 15% off your first order at wearfigs.com with the code FIGSRX. That's wearfigs.com, code FIGSRX, for 15% off your first order. Hello everyone, and welcome to the September 2025 Ask Me Anything Edition of the Mindscape podcast. I'm your host, Sean Carroll. The big news here at Mindscape World International Headquarters is, of course, the teaching has started. It's the school year again. It's September. I think as I've already mentioned, I'm teaching two courses this year. One is the Philosophy of Cosmology, which is a descendant of a course I taught three years ago, just called, at that point, Topics in Philosophy of Physics. But we have a good number of philosophers of physics at Hopkins, people who are able to teach those things. So we're trying to rationalize a division of labor between what we're teaching, and obviously, cosmology is a good fit for me.
I'm interpreting cosmology very broadly to include basic philosophical questions about how do you treat epistemology, and for that matter, metaphysics, when you're living in a world that is very, very big. The way that the world might be very big is maybe because there's a multiverse, or maybe because there's a quantum mechanical, many-worlds situation going on, where you enter into these situations of self-locating uncertainty, or anthropic reasoning, things like that. So we'll talk about the arrow of time. We'll talk about inflation and the cosmological multiverse, and fine-tuning. And then we will talk about quantum mechanics and many-worlds. It's a great, fun course to teach because Hopkins is going through a transitional period with its course requirements, and they have this new system. I'm not quite sure how well the system is doing, but the system is that rather than saying you have to take so many science courses, so many humanities courses, or whatever, they have different, I forget what they're called, foundational abilities, or something like that. And different courses can satisfy the requirement for a different kind of foundational ability. But the actual list of foundational abilities is not really a perfect match to what we actually teach. So, for example, there's a foundational ability called, I think, ethics and foundations, and every philosophy course counts as an ethics and foundations foundational ability credit. But also, the other course I'm teaching, quantum mechanics for undergraduates, counts as a foundational ability. And likewise, there is a sort of science and data foundational ability, where the philosophy course counts for that too. So anyway, the point is that I got a bunch of people. Hopkins is a very STEM-y school. There's a lot of engineers and chemistry majors and pre-meds and things like that.
And so there's a lot of people taking my philosophy of cosmology course, because they think it's a good way to get a humanities credit. There's also a good number of humanities people taking it because it's a good way to get a science credit, roughly speaking, which makes for great conversations, because we have a very broad set of people taking the course, coming with very different backgrounds. And these kinds of big picture ideas are perfectly made for that. On the quantum mechanics side of things, it's supposed to be super straightforward, right? This is the required undergraduate quantum course. It's two semesters. I'm teaching the first semester. Chris Overstreet will be teaching the second semester. Chris is an atomic physicist, an experimental atomic physicist. And so I get to leave to him all the stuff about angular momentum and the helium atom, for that matter, the hydrogen atom, all of those things. And I can talk about really the concepts of quantum mechanics. You can't just make it a philosophy course. It's got to be a physics course. You have to teach the students to do the problems, to be able to solve the problems on the homework set. So there's absolutely a certain amount of solving differential equations involved. Last week, we did Fourier transforms and the relationship between position space and momentum space. And at some point, we'll be doing the square well and the harmonic oscillator and all those things, all the usual quantum things. But also, I will teach them about qubits and entanglement and density matrices and even a little bit of quantum information. So they get a little bit more of a feeling for really the deep essence of what quantum mechanics says. So far, so good with that. The only drawback of the quantum mechanics course is that because, well, basically they snookered me here. They asked if I wanted to teach it. I said, sure.
And only after I said, sure, did they reveal that it is required, for various purposes of arranging various courses, that this course be taught Monday, Wednesday and Friday at 9 a.m. This is not my favorite arrangement. I'm much more a let's teach for an hour and a half twice a week rather than an hour three times a week kind of guy. And I'm certainly not a Monday morning at 9 a.m. kind of guy. So this means I teach every day of the week. As I've said many, many, many times, I don't know how the high school teachers do it, much less the elementary school teachers. So good on them for being able to teach all day long. I teach one class a day and it's exhausting to me. But it is fun and it's fun to remember all this stuff. Right? Like I last took a quantum mechanics class in the 1980s. So that was a long time ago. And I have not taught it since then. This is the first time I've taught it. I've written papers about quantum mechanics. I've thought about quantum mechanics quite a bit, but it's different than, you know, making sure you know where the h-bars go in your convention for the Fourier transform, which I was struggling with the other day. But, you know, it's good. It's good for you. It's good training. I actually got a question in class like, why are we doing this? Why do we have to solve all these particular examples? And the response that I came up with in real time was, you know, if you're a musician, you learn to play scales, right? You learn to just go in the major scale, pentatonic scale, whatever, up and down the keyboard or the fretboard or what have you. Not because it's especially musical, but because it's sort of ingraining some intuition, some subconscious way of dealing with your instrument that turns out later to be very useful. If you are an athlete, you will do warm up exercises and drills and things like that.
And to be a professional working physicist, solving simple quantum mechanics problems in basic circumstances is kind of that thing. It gives you not only familiarity with doing it, but also a toolbox, right? A set of things to refer to, to go like, well, I don't know, when we did the harmonic oscillator, it was like this. And that turns out to be very useful. So useful for everyone all around. Of course, I say that now, because I haven't given out any tests yet. It's always when the grading happens that, I've noticed, students in the class go from being all happy with everything that's going on to some fraction of them being less happy. That's life. That's okay. That's how we're going to do it. I hope that everyone else is having as good a beginning to fall as the students in my classes are. And we can dive into the AMA. Remember that the AMAs are brought to you by Patreon supporters of Mindscape. You can be a Patreon supporter if you so choose. Go to patreon.com slash Sean M. Carroll. And you not only get to have the good feeling of supporting the Mindscape podcast, but you get to ask these questions that are being answered here on the AMAs. You get ad-free versions of the podcast. You get access to the little reflection recordings that I do after every podcast. It's a good deal all around, I think, for a rather minimal investment. Thanks very, very much to everyone who does support Mindscape. I appreciate it very much. Let's go. George asks, how is it possible that my actions now, writing a reasonably coherent question, were encoded in the position and momentum of some particles billions of years ago, which have been influenced only by the four forces plus the quantum randomness? Surely any unpredictable pattern in the universe, such as this sentence, would be far more likely to descend into random gibberish unless there was another force at work at the level of fundamental particles that was preventing them from moving randomly.
So, I have mixed feelings about this question. I think you are getting at something very, very deep, but I'm not going to quite let you get away with just saying, surely, this sentence would be far more likely to descend into random gibberish. I don't know. Does the earth have a high likelihood of just fundamentally disintegrating at any moment? No, because it's kept together by gravity, right? It's certainly possible that structures in the universe maintain some level of coherence rather than just randomly bouncing around. I think that, actually, when you think about questions like this, which I'm all in favor of doing, you really have to think carefully about all the physics that goes into what is happening. I mean, in fact, your question is very closely related to things I touched on in the solo episode on complexogenesis. What's happening as you evolve from the early universe to today is indeed you're just following the laws of physics. At very, very early times, there was almost no information in the universe, but the branching of the wave function caused by quantum mechanical decoherence, etc. imprinted some initial information that then was amplified over the course of time. I'm using information here in a sort of macroscopic sense; the microscopic information is supposed to be conserved throughout the wave function of the universe. But the information in any one particular branch of the wave function is affected by that, right? It changes a little bit, but there's still a certain amount of it macroscopically that you can talk about. And that goes up when you create those cosmological fluctuations. Anyway, none of this is what you're talking about. You wonder about the evolution of those initial perturbations into things like sentences and stuff like that. And I think that the very crucial role is played by, guess what, the arrow of time and increasing entropy.
The thing about sentences or the earth or other coherent structures in the universe is that you're observing some macroscopic matter configuration, right? Atoms put into certain arrangements and so forth. You're not observing the dissipation of heat and increase of entropy that goes into making those structures. So if you think about, you know, a certain configuration of matter, well, let's put it this way. Think of a certain configuration of matter: a certain number of atoms of different kinds, etc., with a certain energy, okay? Then if you let all of that just sort of bounce around in a box, energy is conserved. And therefore the other configurations it can possibly find itself in have exactly the same energy because of conservation of energy. Whereas if you keep track of just atoms, but don't keep track of the photons and things like that, then the total energy in the arrangement of atoms is not conserved because the atoms can bump into each other and emit photons. They can also absorb photons, etc. Right. So basically the photons act as a sink or a bath or an environment into which energy can flow or from which it can be extracted. And as a result, the configurations of matter that you're allowed to explore are much greater in number because you're not limited to those configurations with exactly the same energy that you had when you started. And so there's a persistence or at least quasi-stability that matter is able to find just by dissipating energy and settling into a local minimum of energy. So it might be that there are other ways of arranging the fundamental constituents of that system that have higher entropy or so forth. But you can't get there because it would cost energy to do it, right? Think of being in a metastable vacuum. That's just a fancy physics way of saying in some landscape, you're in a valley, but you're not in the lowest valley.
There's a lower valley somewhere else, but there's ridges of mountains in between you and the lower valley. So you get stuck there. I think that most configurations of matter in our current universe are like that. So I just don't think you can be very glib about saying it would be far more likely to dot, dot, dot, unless you really think through the fundamental physics of how these dynamics are supposed to work. Marcin Chadi, I'm thinking I didn't pronounce that correctly, but asks, do you think that democracy or rule of law is an aberration in the sense that it's an unstable equilibrium in the fitness space of political systems? I should have grouped these two questions together because they're both really about fitness landscapes in some sense. But now we're thinking about governments or nations or societies and their organization. No, I don't think that democracy or rule of law is an aberration. It pops up too often. For those of you who've been around for a while now: I did a podcast with David Stasavage quite a while ago on the origin of democracy. And, you know, he wrote a whole book about it and made a very good point that in history, democracy pops up a lot more than we're taught. You know, it was not invented by the ancient Athenians or anything like that. In fact, lots of primitive cultures work on essentially democratic lines. You know, if you think about the broad scope of history, you could easily convince yourself that democracies are not stable, but you can also easily convince yourself that dictatorships and authoritarian systems and oligarchies are not stable either. And I think that maybe that's a feature. I think that that is actually probably a good way of thinking about it. You know, we're not in an open, sorry, we're not in a closed system. Thinking about this in physics terms, you might say that there might be some configuration of human beings interacting with each other that is sort of a stable minimum, right?
And you might imagine that there's some revolutions and wars and societal changes and transformations, but eventually you settle down. OK, but there's no reason to expect that if you're in a system that is constantly being buffeted by changing environmental circumstances. And, you know, a society is buffeted by outside influences, literally from other countries or societies or whatever. Also, you know, geographical or environmental changes: if society gets hit by a famine or a drought or a plague, that could definitely have an effect on the political system. Not to mention changing technologies, right, which changes what kinds of systems work and what don't work and how they do so. So I think that you can't really hope for a truly stable equilibrium in the fitness space of political systems. There just might not be any. Now, having said that, it might be very interesting to ponder the question of whether or not almost all political systems are fundamentally unstable. In the sense that there might be variables that, you know, you and I don't keep track of when we're looking at the stability of a political system that are gradually changing over time. Maybe in the first hundred years of a working democracy, everyone is really psyched about the fact that they're in a democracy. Everyone is willing to make the sacrifices to be a good citizen and to uphold democratic norms and so forth. And then maybe a couple hundred years later, they're no longer that into it. They take for granted the success. You know, they're thinking about all the shortcomings of this system. They look around. They see a potential strongman who promises to fix everything. And they think, yeah, maybe that's not so bad. Maybe there's naturally a back and forth kind of thing where, you know, a single system wears out its welcome and is prone or at least possibly vulnerable to changing into something else.
So I think that democracy and the rule of law happen sufficiently frequently throughout the course of history that it would be wrong to call it an aberration. But it's also not going to be something where you can say it's the correct end-of-history stable equilibrium either. Owa says, a devilishly handsome friend of mine and I got into an argument recently. Well, it's good for you that you have these devilishly handsome friends. Congratulations. He says he thinks that the lack of a foundational understanding of what wave function collapse is, is the reason why we've continued struggling to build effective quantum computers. I instead posited that it's mostly an engineering challenge trying to get qubits entangled for the computation without any environmental interference. Insofar as I don't think there are even theoretical experiments in which we could differentiate between, for example, spontaneous collapse, many worlds or Bohmian mechanics. It's hard to see how a well-defined problem like building a quantum computer could be affected. Do you think that improvements to our ontological understanding of wave function collapse would meaningfully change the course of quantum computing? No, I think I'm on your side, actually, on this one. As much as I am a fan of foundational studies in quantum mechanics, I don't think that that's what's holding back quantum computation. It might very well be what held back the idea of quantum computation. The failure to consider wave functions in their own right as describing quantum states rather than just tools you use to calculate things might have prevented people from taking advantage of them for a long time to think of new algorithms and so forth. But probably not in a very practical, efficacious way in the sense that we didn't have the technology to build them anyway, right?
So I do think that your impression is right, that whatever slowness there is in building quantum computers is mostly because it's hard to build quantum computers: once you have more than a couple qubits entangled with each other, it's very hard to prevent them from bumping into the outside world in the broadest possible sense and therefore decohering. You have to not only get a lot of qubits, you have to not only get them entangled, but you have to sort of keep them entangled and manipulate them in the process of an algorithmic computation long enough to get everything to come out. So to me, it's not surprising that it's hard. It's actually surprising we've done as well as we have. Whether you're solving murders during breakfast, cracking cold cases on your commute or playing amateur detective at bedtime, Amazon Music's got millions of podcast episodes waiting. Just download the Amazon Music app and start listening to your favorite true crime podcasts ad-free, included with Prime. Help Melwood expand opportunities for people with disabilities to find work. Donate your unwanted vehicle to support our workforce development programs in your community. Donating is fast, free and easy. Call today and Melwood will pick up your vehicle for free as early as tomorrow. To get started, call 1-877-MELWOOD or visit Melwood.org. That's 1-877-MELWOOD or Melwood.org. Call and expand opportunities today. Bill Cork says your guest Jacob Barandes referred to complicated proofs by David Wallace and others as being necessary to prove that many worlds is a valid theory and he found them wanting. How necessary do you find these approaches to your high credence in many worlds? Well, I don't find them necessary at all. You know, I think that I stick by the statement that what many worlds is, is the statement that the physical world is represented by a vector in Hilbert space that evolves according to the Schrodinger equation.
And then it is our job to figure out what those statements would imply about the observed world of our experience. And that can be a very, very hard job. It's not the theory's fault that it's a hard job. It's our fault that we're not smart enough to do it. The theory remains very, very simple. In terms of the difficulty in proving the Born rule or whatever, you know, I would say two things. Number one, I'm not quite sure that it matters how difficult it is. Unless you're saying, like, well, you think you know the answer and you're cheating and you're hiding your cheats inside the difficulty. Like if that proof were airtight, it wouldn't matter how difficult it was. In fact, it would be kind of impressive that you got it in such a difficult way, even though, you know, there are so many steps involved or whatever. But also, look, I've become a little jaded or a little grumpy about the Born rule in many worlds. You know, one thing I should say, of course, for those of you who don't know: I have a different way of proving the Born rule that I wrote in a paper with Chip Sebens, and it's much more direct. And it's pretty straightforward. And I think it's probably more physically illuminating than the decision-theory kind of approach that David Deutsch and David Wallace have pioneered. Chip and I say very explicitly in our papers that we don't think that those proofs are wrong. We're just saying that, you know, more ways of shedding light on the problem are generally going to be useful. And so we're offering one. But, you know, it's not a surprise. I think that the reason why I'm getting grumpy about it is because I've decided, and I think this is more or less defensible, that people's attitude toward deriving the Born rule in many worlds mostly comes down to people's intuitions, people's personalities, much more so than proofs or logical deductions or anything like that. Think of it this way.
Suppose you knew that the world did in fact run by the rules of many worlds, and you thought that wave functions really did branch into separate classical worlds, and you knew that there would be some self-locating uncertainty about which branch you're in. There's two attitudes you can take. One attitude would be, well, I don't know which branch I'm in, but I'm going to try my best to come up with a way of assigning probabilities or credences to which branch I am in. And I'm going to try to do it in the most sensible way I can think of. Okay. If you have that attitude, there's a 100% chance you land on the Born rule. It's just the overwhelmingly obvious thing to do. It's simple. It's what the theory is trying to tell you to do. And it works. But if you go into that situation with an attitude of, you know, I'm in this situation of self-locating uncertainty. I don't want to put any credences on where I am. I don't want to unless you force me to do so, and you can't force me to do so. Well, you're also right. I can't force you to do that either. But that's basically the objection, right? It's kind of like an "I don't want to" objection. No one sensible, I don't think, has ever said that if you take many worlds seriously, you would end up with a unique set of credences other than the Born rule. It's almost impossible to even make sense of that statement. The Born rule is just, you know, there for the taking. And so I just think you should take it. And so to me, people's worries about it are very much beside the point. I do get the fact that the metaphysics involved, thinking about personal identity and the nature of probability and the nature of self-locating uncertainty, that's all really good. That's all perfectly legitimate stuff to worry about. But once you worry about it, you're going to get the Born rule. I'm pretty sure. Mark Kumeri says, have you thought about where you want your assets to go after you pass away?
I'm curious what institutions or causes are meaningful to you? I don't have a specific great answer for you here. You know, I do think that I'm in the position where I actually have assets, but not in the position where I have enough assets that they're going to make a difference to the world. So the default is just sharing them with family members when I pass away. And I think that that's mostly what we will do. But, you know, one can imagine, one can still ask the question, where would you like your money to go? Like, you know, maybe the next book I write sells a billion copies. You never know. Maybe a lot of people join Patreon and make me rich. Who knows? You just never know. So I don't have a specific thought-out plan for that. It's a very good question; I'm at the age where you should start having well-thought-out plans for this kind of thing. All I will say is that you're referencing something I said in last month's AMA about morality and the fact that I think that, you know, it's okay to care about things that are near and dear to you a little bit more than things that are distant and unknown to you. But also that it's good to care about things that are distant and unknown to you. I will actually give a plug for the strategy offered by former Mindscape guest, Joshua Greene, in his organization, or the organization he's involved with anyway, called Giving Multiplier. So this is sort of an effective altruism kind of organization where they do look at different charities and they ask the question, which charity does the most good for the world per dollar, or something like that. And typically it's curing diseases in poor countries, right? That gets you a lot of bang for the buck in terms of charity. But the great thing about Giving Multiplier is they recognize that real people in the real world also want to give money to their local cat shelter or to their alma mater or whatever, right?
You know, things that have some personal meaning to them as well. So they provide a way that you can do both: you can give a certain amount of money and split it, and you can tell them exactly what the percentage split should be. I'm not going to remember the URL off the top of my head, but if you go back to the web page on preposterousuniverse.com for the episode with Joshua Greene, there is a link to Giving Multiplier specifically for Mindscape listeners, where you get a special boost in the matching that they do if you are a Mindscape listener. So I don't know exactly what I will give money away to, but I think that kind of strategy, spending some of it on saving the world and some of it on causes that I know personally and am very much invested in, is probably the kind of thing that I would be tempted to do. Michael Benidson says, in the solo podcast about complexity in the universe, you explained how in a cup of coffee mixing with cream, you go from low entropy (unmixed) to high entropy (mixed), while the complexity goes from low (unmixed) to high (mixing) to low (mixed). This made me wonder: can complexity, in some particular cases, be related to, or even defined as, the rate of change, the time derivative, of the entropy? That's a very good question. I think that the answer is probably no, if I just take the question at face value, literally, because remember, there are other choices of dynamics where the entropy still goes up but complexity does not develop, right? So for exactly the same curve of entropy over time, I can get different curves for complexity over time. So I think it's not possible that complexity is literally the rate of change of entropy, because there are some extra choices to be made there. But I think you could maybe develop an argument that complexity is abetted by entropy changing, right?
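A minimal numerical sketch of the coffee-and-cream point (my toy construction, not from the episode): one-dimensional diffusion of cream into coffee, where the mixing entropy rises monotonically while a deliberately crude complexity proxy, the number of distinct coarse-grained concentration levels, goes from low to high to low.

```python
import math

def mix_entropy(c):
    # per-site binary mixing entropy, with the convention 0*log(0) = 0
    def xlx(x):
        return x * math.log(x) if x > 0 else 0.0
    return -sum(xlx(ci) + xlx(1 - ci) for ci in c)

def diffuse(c, steps, alpha=0.2):
    # explicit 1D diffusion with no-flux (mirror) boundaries; alpha <= 0.5 is stable
    c = list(c)
    for _ in range(steps):
        left = [c[0]] + c[:-1]
        right = c[1:] + [c[-1]]
        c = [ci + alpha * (l - 2 * ci + r) for ci, l, r in zip(c, left, right)]
    return c

def crude_complexity(c, levels=10):
    # crude structure proxy: how many distinct coarse-grained concentration
    # levels appear in the cup
    return len({min(levels - 1, int(ci * levels)) for ci in c})

N = 100
state = [1.0] * (N // 2) + [0.0] * (N // 2)  # unmixed: cream sitting on coffee
snapshots = [state, diffuse(state, 200), diffuse(state, 10000)]
S = [mix_entropy(s) for s in snapshots]
C = [crude_complexity(s) for s in snapshots]
print(S)  # entropy rises monotonically: unmixed -> mixing -> mixed
print(C)  # complexity proxy: low -> high -> low
```

The proxy is crude on purpose; the point is only that the entropy curve by itself does not fix the complexity curve, which is exactly why complexity can't literally be the entropy's time derivative.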
So I think this is sort of what is going on in dissipative systems, going back to Prigogine and his friends, and probably also the dynamic kinetic stability ideas that Addy Pross investigates, and so on: in addition to the kind of mechanical stability where you have atoms piled on top of each other in a stable configuration, there's this other kind of kinetic stability that relies on free energy coming in from the environment. And that's what a hurricane does, or a living organism does. Left to themselves, a living organism or a hurricane, if they didn't get any external input, would just collapse relatively quickly and stop doing what they had been doing. But instead, they get fed free energy from the environment, and that keeps them going for a long time by increasing the entropy of the universe, right? So entropy increasing can absolutely play a role in the persistence of complex structures, but I don't think it's quite as simple as the entropy rate actually being the complexity, or even being proportional to complexity, or anything like that. Jonathan Bird says, I appreciated the space you gave Alvy Ray Smith to tell his story in your latest podcast. Near the end, he explained how pixels are not little squares, which made intuitive sense to me as a musician in the digital age. Neither speaker cones nor eardrums move in tiny steps, so in practice we never reproduce or hear the pixels of digitized music. I'd love to hear your thoughts on how the same concept might apply to quantum mechanics, i.e., the math obviously works fine, but might we experience a poor representation of reality when we analytically translate quanta to something we can more easily experience and manipulate? I like this question, but I don't think I'm going to give an especially satisfactory answer to it, because I've never quite thought in these terms. I mean, honestly, the sampling theorem that Alvy talked about is really interesting. I don't think I'd ever heard of it before.
For those of you who didn't listen to that podcast, the sampling theorem is a theorem about capturing the information in a smooth signal in a finite number of pixels, as it were. But again, pixels are not just values of the signal at different locations; they are sort of smeared-out versions of that in this way of thinking about it. The obstacle to believing the sampling theorem is that a continuous signal in principle could have an infinite amount of information in it, and a discretized signal with a finite number of bits can only have a finite amount of information. How is that possible? The reason why it's possible is that there's an assumption in the theorem that there are no arbitrarily high-frequency modes involved, that there's a shortest wavelength in the decomposition of the signal into a sum over different wavelengths. So in some sense that is happening in quantum mechanics; in quantum field theory, for sure, the whole idea of effective field theory is kind of exactly that, right? It's not that there aren't fluctuations going on at arbitrarily small length scales; it's that you don't need to pay attention to them, that you can summarize their effects in what is going on at longer wavelengths. That's not exactly the sampling theorem, but it's a similar kind of thing. More generally, you know, emergence comes out of coarse-graining, and coarse-graining generally (not always, which is why I'm hesitating) involves ignoring things that go on over short distances. And the specific way in which what happens at short distances, or the important part of it, is summarized in what happens at long distances is tricky and matters, and a lot of that goes into doing an emergence kind of description correctly. So yes, yes to things like this.
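Carroll's description of the theorem can be made concrete with a toy example (my construction, not from the episode): a periodic signal whose highest frequency is below half the sampling rate is recovered exactly, everywhere, from its samples.

```python
import cmath
import math

# Band-limited test signal: highest frequency present is 3 Hz.
def signal(t):
    return math.sin(2 * math.pi * t) + 0.5 * math.cos(2 * math.pi * 3 * t)

N = 8  # 8 samples per second, above the Nyquist rate of 2 * 3 = 6 Hz
samples = [signal(n / N) for n in range(N)]

# Discrete Fourier transform of the samples.
X = [sum(samples[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
     for k in range(N)]

def reconstruct(t):
    # Trigonometric interpolation: DFT bins mapped to frequencies in [-N/2, N/2).
    total = 0j
    for k in range(N):
        freq = k if k < N // 2 else k - N
        total += X[k] * cmath.exp(2j * math.pi * freq * t)
    return (total / N).real

t = 0.1234  # an arbitrary time between sample points
print(abs(reconstruct(t) - signal(t)) < 1e-9)  # True: reconstruction is exact
```

The band-limit assumption is doing all the work here: add a component at or above 4 Hz and the reconstruction fails, which is exactly the "shortest wavelength" caveat Carroll mentions.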
I'm not sure about the general program of that, but it's an interesting thing to think about. Nicholas Weiberg says, are the Copenhagen interpretation of, and the shut-up-and-calculate attitude... I presume "quantum mechanics" was left out here, so: are the Copenhagen interpretation of quantum mechanics and the shut-up-and-calculate attitude essentially aspects of the same thing? Well, they're very close, but they're not exactly the same thing. So shut up and calculate is supposed to be, well, number one, it's supposed to be a joke, okay? David Mermin, who is a well-known physicist, was writing a column in Physics Today where he was talking about the foundations of quantum mechanics, and he caricatured a certain perspective as saying, just shut up and calculate, that is to say, ignore the foundations of quantum mechanics. He wasn't advocating that perspective, nor saying that any other particular person held that perspective; he was just saying it's a perspective that is sort of out there. And that's a different kind of thing than the Copenhagen interpretation, especially because neither Bohr nor Heisenberg, the founders of the Copenhagen interpretation, were especially fond of shutting up. They would never shut up. They would certainly calculate, they were very, very good at calculating, but many physicists are good at calculating; very few physicists are good at shutting up. And if you get into it, the problem with the Copenhagen interpretation is that Bohr and Heisenberg, number one, didn't agree with each other, number two, didn't agree with themselves at different points in time, and number three, weren't even very clear about what they meant. So it's very, very hard to agree on what the Copenhagen interpretation actually says.
Part of what it actually says... well, one way of thinking about it was actually very explicitly stated by John Wheeler, who was a follower of Bohr, in a famous paper whose name I'm going to forget now, but it's the paper from which we get the phrase "it from bit." People interpret the phrase it from bit as saying, you know, information is at the basis of reality, but that is not what Wheeler meant. What he meant was that when you make a measurement of a quantum system, you get quantized answers, you get bits, right? And that's what reality is made of. It is the measurement outcomes of quantum mechanical systems. That was his point. And that was pretty much what Heisenberg believed, and that's sort of the impetus for him inventing matrix mechanics way back in the day. His idea was to ignore not just what the electron is doing, but the idea that the electron is doing anything when you're not looking at it. You can have a way of describing the state of the electron, but all that ever matters is what you're going to observe or measure. And that's the Copenhagen interpretation. That's beyond simply saying shut up and calculate. It's an attitude toward what is real, namely, what is real are measurement outcomes. And to make a distinction between the measurement outcomes and the quantum mechanical underpinnings of them, you kind of have to act as if the classical world is real. You know, the macroscopic world of human beings and measuring apparatuses is truly classical. It's not just classical to a good approximation. It is classical. And this is what drove people like Karl Popper or Hugh Everett completely bananas. Like, who draws the line between the classical part and the quantum part? Again, none of that is part of just saying shut up and calculate. So I would say that what I call the textbook interpretation of quantum mechanics is sort of a stripped-down Copenhagen interpretation without all the philosophical ramifications.
And shut up and calculate is an even more stripped-down version of that, where you don't even care what the answers are, much less propose certain answers. Zach McKinney says, is it conceptually possible for naturalism to be demonstrated false? Or would naturalism continue to postulate that however magical or inexplicable a given phenomenon may seem, there must be some underlying explanation at the level of physical laws, either known or unknown? I would say that naturalism is exactly like any other hypothesis about the world. It can never be demonstrated false, because that's not how a hypothesis about the world works. You know, Newtonian gravity can't be demonstrated false. What you do is gather evidence that makes it less and less likely, that lowers your credence that it's the right theory. You can always come up with some cockamamie excuse for why your experiment didn't fit the prediction that you thought Newtonian gravity was making. Likewise for naturalism: there could be all sorts of things happening that look miraculous and spiritual and, you know, evidence of life after death and a million other things. And yeah, you could invent post hoc naturalistic explanations for them. It might very well be the case that that stops being an interesting thing to do; if there are so many things going on that are better explained by non-natural explanations, then naturalism will go away. In the real world, there's no such thing, right? All of the things that claim to be evidence for non-naturalist phenomena turn out to be really on the boundary of even being observable or credible or whatever. So I think there's not a lot of danger for naturalism right now. But in principle, I could imagine giving up on it if we got enough new evidence in.
Gensin says, from the Wikipedia article on the Copenhagen interpretation, and then he quotes some things about the Copenhagen interpretation from the Wikipedia article and then says: my question is, Robert Soare proposed in 1996 to rename the field of mathematical logic dealing with computability and its generalizations from recursion theory to computability theory. His proposal was adopted and subsequently did indeed reduce unnecessary confusion. Could we propose a similar change regarding the Copenhagen interpretation? So as I just said, people don't agree on what the Copenhagen interpretation says. But that's, you know, both a bug and a feature, okay? The thing about computability theory is that people do kind of agree on what it is, right? It might be that the phrase Copenhagen interpretation is not the most descriptive of what it is, but the problem is not just the label. The problem is actually agreeing on what the substance of the proposal is supposed to be. So, honestly, I don't see much point in worrying about the name. It's not the name that is the problem. You can propose a name change. Sometimes those work. They generally don't work when a field is well established already and people have been using a certain phrase for 100 years. So I wouldn't put a lot of effort into it. You can try, but I don't think it's going to be a very popular move. Shane Jones said, I listened to a talk from former Mindscape guest Jenann Ismael in which she discusses totality and Laplace's demon, and she makes the argument that a Laplace's demon that is embedded in the universe couldn't predict the behavior of an anti-predictor, who knows about the demon's predictions and deliberately acts to confound them. Prediction and self-modeling create emergent underdetermination, where the very attempt to achieve complete predictive closure creates feedback loops between higher-order patterns.
This seems to suggest that underdetermination is a genuine feature of reality when there are sufficiently complex systems that can represent and respond to information about themselves. Even if the scientific image provides complete microphysical descriptions, the manifest image retains genuine causal efficacy and unpredictability that isn't merely an epistemic limitation. Do you see this interventionist account as complementing your compatibilist views and reconciling physics with meaningful human choice? Well, I think that it is part of a compatibilist view. I can just give my own attitudes about Laplace's demon, which is that I completely agree with what Jenann says. You can absolutely imagine building a thing in the universe that waits until it hears what the prediction is, and then it does the opposite. It's a NOT gate. It takes in the number zero and turns it into one, and vice versa. But to me, that's just entirely unsurprising, because Laplace's demon was never meant to live in the world. You all laugh because I'm constantly saying none of us is Laplace's demon, but it's true. No one else is Laplace's demon either, and the reason why that's perfectly obvious is that in order to simulate the universe, you would have to be as big as the universe. You can't be smaller than it. You can't have less information-carrying capacity than the universe; otherwise, you don't have the ability to simulate what the universe is going to do. You can simulate parts of it, but Laplace's demon, his whole thing is he's able to simulate the whole thing. So I've always thought of Laplace's demon as, number one, a thought experiment about someone who lives outside of our actual physical reality, but also, number two, just a vivid illustration of what it means to be deterministic. And what it means to be deterministic does not mean that anyone in the universe can know what the predictions are. That's just not part of determinism.
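The NOT-gate anti-predictor is a three-line diagonalization (a toy sketch, my construction, not from Ismael's talk):

```python
# The anti-predictor: a device that hears the prediction and does the opposite.
def anti_predictor(prediction: int) -> int:
    # literally a NOT gate on the announced bit
    return 1 - prediction

# Any predictor embedded in the same system must announce some bit.
def embedded_predictor() -> int:
    return 0  # the argument works identically if it returns 1

prediction = embedded_predictor()
actual = anti_predictor(prediction)
print(prediction != actual)  # True: the prediction is wrong by construction
```

Whatever bit the embedded predictor outputs, the device outputs the other one, so no predictor that must announce its prediction inside the system can be right about it.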
So I'm basically just completely agreeing with what Jenann is pointing out; she's using different arguments to get there. But it's very much what I've been saying about compatibilism: since you do not know, even if you thought that the underlying laws of physics were deterministic, what the prediction actually is, the higher-level emergent way of thinking is one in which you attach agency and the ability to make choices to human beings. And if you want to call that free will, knock yourself out. Frank Russell says, let's pretend that the measurement problem didn't exist, that electrons behave like particles, not like waves. In what way would the world be different? How far would we have gotten with theoretical physics? What theories would go away? Well, it's impossible to completely answer this question, because when you say electrons behave like particles, you kind of have to tell me what everything behaves like. It's not quite sufficient. But if you naively, straightforwardly just say, what if electrons were particles: the whole impetus for inventing quantum mechanics, one of the big impetuses, comes from the fact that atoms would be dramatically unstable if electrons were classical particles. Electrons would not orbit around the nucleus. They would just fall into the nucleus and sit there forever. And therefore you would not have atoms. So you would not have chemistry, molecules, materials, substances, anything like that. Therefore, life would be impossible, the universe would be completely different, and none of us would live there. So classical mechanics is not a close call. It's not like it could almost work. You really need something very, very different, given the ingredients that we have as far as particle physics is concerned.
Daron Vilioti says, when we try to connect fundamental physics with human meaning, linking neurons to consciousness or quarks to purpose, what do you see as the most productive way to frame that relationship? As layers we should keep separate, i.e., different ways of describing the same things in the poetic-naturalist sense, or as parts of a larger unified picture? Or maybe I'm just confused and those are essentially two ways of saying the same thing. Well, meaning and purpose are not the same thing. Part of poetic naturalism, as I talk about it in The Big Picture, is that there are many ways of talking about the world, but not every way has equal status or an equal kind of description of what it is trying to do. There are multiple scientific ways of describing the world, and those work at different levels of resolution, if you like. There's a comprehensive way that, as far as we know right now, comes down to quantum mechanics and quantum field theory, and there are higher-level ways where you coarse-grain and you have materials and fluids and living beings and what have you. But then there are ways like evaluative, normative judgments, morality, purpose, meaning, aesthetics, all those kinds of things; those are not fixed by physics. So those exist simultaneously with the underlying physical reality, and I would strongly argue that a successful version of any one of those attempts needs to be compatible with the underlying physical reality. But they're not really unified in that sense, because two people can have different evaluative schemes that are incompatible with each other but both compatible with the same underlying natural world. And, you know, I think that's okay. I think that's perfectly fine. That's built into how they're going to go; I think we've got to get used to that.
Abazin says, it seems that quantum gravity and general relativity are in conflict because quantum gravity would involve some sort of granularity of space and possibly time, but the granules could be understood as a medium much like the non-existent ether. Is this a reasonable way to understand part of the tension between these fields? Nope, it is not a reasonable way to understand that. There's nothing that says that there's any granularity to space. There might be granularity to space. That's a possible thing, a thing that we can take seriously as an option, but nothing in quantum gravity says that. You know, one of the things I got to teach, or at least mention to my quantum class, is that there's nothing granular about quantum mechanics in any sense, right? If you think about why you get discrete energy levels of electrons in atoms, it's not because the electron's wave function is discrete in any way. It's a smooth function, and it's solving a smooth differential equation. It's just that the solution set of that differential equation comes in a discrete set of functions, just like the ways a violin string can vibrate come in a discrete set of functions. It's a feature of solutions to differential equations. And likewise, nothing about gravity being quantized suddenly makes space discrete. So I guess that's the first thing to say: there's nothing necessarily discrete about space in quantum gravity. Again, maybe there would be. Now, it might be a worry, if you thought that space was discrete, that those granules would be ether-like in some sense, because you might think that, well, if I have some lattice or some structure like that, that would violate Lorentz invariance. It would give you a preferred reference frame. You could have some speed relative to the underlying granules. Maybe. I think that that's absolutely possible. People have looked into that. But it's certainly not a theorem either. It's not necessary.
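The violin-string point can be checked numerically (a toy sketch, my example, not from the episode): solve the smooth equation u'' = -E u with u(0) = u(1) = 0 by a shooting method, and solutions exist only for the discrete set E_n = (n pi)^2, even though nothing in the equation is granular.

```python
import math

def boundary_value(E, steps=2000):
    # integrate u'' = -E u from x = 0 with u(0) = 0, u'(0) = 1; return u(1)
    h = 1.0 / steps
    u, v = 0.0, 1.0
    for _ in range(steps):
        u = u + h * v
        v = v - h * E * u  # uses the updated u: semi-implicit (symplectic) Euler
    return u

def eigenvalue_between(lo, hi):
    # bisect on E until the boundary condition u(1) = 0 is met:
    # that E is a quantized level
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if boundary_value(lo) * boundary_value(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# The discrete spectrum E_n = (n * pi)^2 emerges from a smooth equation:
for n, (lo, hi) in enumerate([(5, 15), (35, 45), (85, 95)], start=1):
    E = eigenvalue_between(lo, hi)
    print(round(E, 2), round((n * math.pi) ** 2, 2))  # the two numbers agree
```

The discreteness comes from the boundary conditions selecting a discrete subset of the smooth solutions, exactly the mechanism Carroll describes for atomic energy levels and violin strings.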
You have to think harder about these things. A lot of classical intuition goes away in quantum mechanics. And this is secretly a kind of classical intuition, even though you're talking about quantum gravity: not you personally, but one gets the temptation to say, well, gravity, but I'm going to quantize it, so I'm going to replace this smooth continuum by a discrete lattice or something like that. That's really just not what quantum gravity says. Again, it might be true, but nothing that we know about quantum gravity insists that we move in that direction. Philip Rutherland says, in your last AMA, you said as a human being, it's okay to care more about people who are close to us, and our moral philosophy should admit that feature of human nature. I'm curious how you think about individual differences here. Some people care a lot for those far away, others hardly at all. Are you talking about an average tendency, an evolved baseline, or something else? If we take these differences seriously, does that imply a sort of individual ethics where people with less capacity owe less and those with more capacity owe more? Kind of, I think, is the short answer to that. I don't think it's about owing less, and I don't think it's about capacity. I think the last phrase of your question I would not quite agree with, but the rest of it, the tendency of what you're saying, I think I would agree with. The basic idea is that as a moral constructivist, if you want to say that, I don't think there's a right answer out there in the world, objectively true, for what it means to be moral. I think that different people will have different moral systems, and all we can do is try to, number one, get along, and number two, talk to each other and maybe persuade each other to change our minds if we have a strong argument that being moral means something different. But it's not like science. It's not like math.
It's not like there's a right answer in the back of the book. And this is something that I think is very, very hard for people to accept. And I get that. But nevertheless, I think that it is true. I'm learning, you know; I'm in the philosophy department now, and I'm trying to learn these things. There is a point of view called moral particularism, which is kind of like this, moving in that direction. And one person put it in the following way: what happens when you make a moral choice one way or the other is not that you are or are not adhering to some abstract code; it's that you are revealing or constructing who you are. So it's not about being good or being bad. It's about being yourself, and other people, or your own conscience, might find you good or bad. And you have to learn to live with that and decide what you want that to be. And I think that this is at the heart of why so many moral theories that try to come up with the once-and-for-all right answer lead you quickly to conclusions that are abhorrent, because that's just not how morality really works. I'll also say, and I'm going to continue to say, that I have very low levels of certainty about any of this. I'm an expert in the sense that I've read a lot, I've thought a lot, I've taken a lot of courses, talked to a lot of people. I have not come to a conclusion about the once-and-for-all right way of thinking about morality. So ask me again next year; I might change my mind. The next question asks about the Alcubierre warp drive: it seems to annoy many physicists; what problems would such a drive realistically cause if the negative energy required to make it work could be made? So for those of you who don't know, Miguel Alcubierre, who is a Mexican physicist, back in the 1990s I think, pointed out the following idea. You want to say, like, the Andromeda galaxy is a million light-years away, therefore it will take at least a million years to get there. But you also have general relativity.
General relativity says spacetime is curved. The fact that spacetime is curved means that I could imagine a metric of spacetime in which there's a little tube stretching from here to the Andromeda galaxy, inside of which I've changed the metric so that it's actually quite a short distance from me to the Andromeda galaxy. And this is without a wormhole or tearing spacetime apart or anything like that, just by stretching, or in fact in this case contracting, spacetime in the right way. And being a respectable general relativist, he then proved that in order to do this, you need to do all sorts of naughty things from a general relativity point of view. Roughly speaking, you need negative energy densities or exotic matter, or, technically speaking, to violate the weak energy condition. In quantum field theory, you might be able to get a little bit of exotic energy that counts as negative energy. There's certainly not any known way of getting as much, and as stable, a configuration of negative energy density as you would need to make a warp drive. And also there are, you know, deep conceptual issues here. The thing about Einstein's equation, or equations, is that you can write them as an initial value problem. You can say, if I take a slice of spacetime and I tell you what the metric of space is and its momentum and all the fields and what they're doing, I can solve the equations and tell you what will happen in the future. But it's not naturally phrased that way, right? The way Einstein naturally phrases his equations is four-dimensional, not three-plus-one-dimensional. He doesn't actually distinguish space and time. That's something that we human beings find convenient to do.
And that's why you can invent metrics, geometries of the universe, that obey Einstein's equations but have closed timelike curves or topological weirdness or faster-than-light travel or all of these things. It is very unclear whether any of those things, wormholes, weird topologies for the universe, warp drives, and so on, could sensibly arise out of any configuration of physically realistic matter. Not to mention the fact that the amount of energy you would need is astronomically humongous, because, you know, you're creating a gravitational field here, right? You're warping spacetime, and you're warping spacetime over, presumably, a distance of light-years at the very least. And you're warping it in such a way that inside the warp drive it's not very warped; otherwise it tears your body apart with tidal forces. So it needs to be a large region of spacetime, quite large, so that it's smooth inside. All of this just makes it sort of hilariously unrealistic. It's one of those very important results in physics in the sense of proving what is conceivable, but it is not meant to be, or supposed to be, or should be taken as, anything realistic now or in the conceivable future, honestly. Gabe Ayala says, in regards to math, can non-base-10 mathematics explain the universe in a better way than base-10 mathematics? So I have two answers to this question. One is: no, it cannot. The base that you use in your mathematics is kind of like the set of units you use when you're measuring distances. That's like asking, are there distances that can be measured in inches that cannot be measured in centimeters? No. You can just convert right back and forth. If you did mathematics in any base other than 10, you could convert it into base-10 mathematics and you would get the same answer. So that's the real answer to your question: no.
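The inches-versus-centimeters analogy can be demonstrated in a few lines (my toy example): the same integer round-trips through any base with no information lost.

```python
def to_base(n: int, b: int) -> str:
    # write a non-negative integer n in base b (b up to 16 here)
    digits = "0123456789abcdef"
    out = ""
    while n:
        n, r = divmod(n, b)
        out = digits[r] + out
    return out or "0"

n = 299_792_458  # the speed of light in m/s, as an arbitrary test number
for base in (2, 7, 12, 16):
    rep = to_base(n, base)
    assert int(rep, base) == n  # converting back and forth is lossless
print(to_base(n, 16))  # prints "11de784a"
```

Nothing expressible in one base is inexpressible in another; the representation changes, the number does not.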
However, it gives me an excuse to tell an amusing story that I read on Bluesky from Zach Weinersmith, former Mindscape guest, where I'm going to get the details wrong because I just read it on social media; I didn't do any research or anything. David Hilbert was one of the world's most famous mathematicians, an early-20th-century giant of the field, and he was living in Germany at a time when Germany was going through hyperinflation. We're not talking 10% inflation. We're talking about money becoming essentially worthless overnight. And so the German government, among many of the strategies that it chose, basically replaced the mark, which was their unit of currency, with a new version of the mark, which was worth one million of the old ones. And it had some name, right? I don't remember the name, but the new mark is a million old marks. And David Hilbert, the mathematician, said, you know, that won't solve anything. You're just renaming it. You cannot solve an equation just by changing the name of a variable. And the reason I'm telling you the story is because it worked. It did help solve the hyperinflation problem, because mathematicians, as good as they are at math, sometimes neglect the human factor. And human beings suddenly started being able to pay for things, you know, at the level of five or ten marks rather than five million or ten million marks. And that made them feel better. And their attitudes helped stabilize the inflation. So whenever you have a situation where something can be done one way or the other in a logically, rigorously equivalent way, you have to keep in mind that it might still be better or easier or more productive to do things one way rather than the other. And I don't know; I don't think that doing things in a different base is going to be an example, but it could be, who knows.
Matthew Hall says, some time ago I watched a lecture by Leonard Susskind on black holes. He said that if you lower a thermometer close to the event horizon of a black hole, it will record a very high temperature; the temperature would easily be enough to ionize an atom. But nothing would happen to the same atom falling through the event horizon. This apparent contradiction can be explained because any attempt to observe whether the atom gets ionized would necessarily involve hitting the atom with enough radiation to ionize it. Recently, I heard a podcast with Tim Maudlin where he explicitly called out this explanation from Susskind as the end of logic: a person can't burn up at the event horizon and not burn up at the event horizon. What is your opinion about this? So there are two aspects to this problem. One is what happens, and the other is what you observe happening. So I think everyone agrees on what happens here. There's a physical difference between being held up near the event horizon and falling through the event horizon. The physical difference is that you can fall through the event horizon in free fall. You can just fall, and you don't even notice that there's an event horizon there. To be held up, to be lowered down near the event horizon but kept at the end of a string so you can't fall through, means that you are being accelerated by an enormous amount so that you're not falling into the black hole. Just like I'm being accelerated right now sitting in my chair: the Earth's gravitational field is trying to pull me toward the center, and the chair is accelerating me away from it. That's a very mild acceleration. Near the event horizon of a black hole, you would be subject to a huge amount of acceleration, so there's absolutely no surprise that the physical situation is very, very different.
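The hovering-versus-free-fall difference is quantitative. Using the standard Schwarzschild result (my numerical example, in units with G = c = 1), the proper acceleration needed to hover at radius r is a(r) = (M / r^2) / sqrt(1 - r_s / r), which blows up at the horizon, while a freely falling observer feels no acceleration at all:

```python
import math

M = 1.0          # black hole mass in geometric units (G = c = 1)
r_s = 2.0 * M    # Schwarzschild radius

def hover_acceleration(r):
    # proper acceleration felt by a static observer at radius r > r_s
    return (M / r**2) / math.sqrt(1.0 - r_s / r)

# The acceleration grows without bound as you approach the horizon:
for r in (10.0 * r_s, 2.0 * r_s, 1.01 * r_s, 1.0001 * r_s):
    print(r / r_s, hover_acceleration(r))
```

Far from the hole this reduces to the familiar Newtonian M/r^2; the diverging square-root factor near r_s is the physical difference between the thermometer on a string and the atom in free fall.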
In fact, I wrote a paper, I think I mentioned, with Christopher Shallue recently about what observers measure when they fall into black holes, in terms of Hawking radiation and so forth, and we did it all very, very carefully. We dotted all the i's, crossed all the t's, etc. So this is a solvable problem. I don't quite understand what your paraphrase of Susskind's example is supposed to be about, because you say this apparent contradiction. But the apparent contradiction is supposed to be between lowering a thermometer close to the event horizon of a black hole versus falling through. Those are two different things. So there's not a contradiction that they behave differently. I think that what is going on is it's supposed to be an apparent contradiction between what is observed by a faraway observer and what is observed by a person falling in. And there, I guess I'm not exactly sure what the situation is that is being described here. My very strong opinion is that what Lenny said, if it were completely translated into rigorous words, is completely correct, and that when Tim Maudlin heard about it, he heard a garbled version of it and chose to interpret it uncharitably. That's always something that we're prone to do sometimes. Sean, not me, but another Sean, asks, comparisons between yourself and Keanu Reeves are presumably few and far between. In his role as Neo in The Matrix, his character finds the ability to see past the higher level emergent simulated world and perceives the base code that underlies it. Do you ever find yourself looking at everyday objects and being struck by the deeper physical truths beneath them? Do you ever find yourself staring at a coffee cup and saying, Dear Lord, this is incredible? You know, yeah, kind of, I think. I mean, I think that it does seep in at a subconscious level, an understanding of how things work, and that flavors how you approach them.
I mean, one of the features, like it or not, about emergence is that a successful higher level emergent theory doesn't really depend on the lower level microscopic goings-on. So you don't need to know what's going on microscopically, and indeed it often doesn't help you. You can imagine situations in which there's a pretty good higher level description, but you would be helped by knowing more specific things at the lower level. But that's not the generic case. You can easily get the converse of that. So it gives you a nice feeling to know, you know, about energy and momentum and entropy and dissipation and things like that. But it doesn't help you fix the car when it breaks down. Miran Mizrahi says, so now that you've settled into Hopkins, have you gotten into lacrosse yet? No, not really. I can't really say that I have. Maybe I will. I'm open to the possibility. For those of you who don't know, Johns Hopkins' sport is lacrosse. I mean, like most universities, they have intercollegiate athletics in many different sports, but they've been historically really, really good at lacrosse: multiple national championships and the whole bit. This first became known to me when I was in junior high school. I was in Pennsylvania, which is a neighboring state to Maryland. And I participated in a study run by Johns Hopkins called the Study of Mathematically Precocious Youth, run by Julian Stanley. They had a bunch of kids who did well on standardized tests take the PSATs, the preliminary SATs, you know, these tests you can take to sort of practice for taking the college boards. And if you did well enough, you got some sort of recognition. And if you did really well, you got followed by the study to see how you progressed through time. I did well enough to be invited to Johns Hopkins for the little ceremony, but not well enough to be followed up. So I can't tell you how I compared to everyone else who was followed up by the study.
But anyway, the award ceremony at Johns Hopkins was formative for me. It was the first time I had ever really been on a real college campus, and it was amazing to me and I loved it. But there was a speech by some professor or administrator, I don't know, at Johns Hopkins, who was talking, you know, to all these kids who were basically in junior high school or early high school about going to college and things like that, something that most of them would have been interested in doing. And he mentioned how, you know, at Hopkins, they mostly were interested in academically strong students. But as a self-deprecating joke, he said, like, unless you're really good at lacrosse, and then you'll definitely have an in. I had no idea that lacrosse was the sport of choice at Johns Hopkins. But I literally walk by the lacrosse field every day going to work. So I definitely know about it now. And I'm open to, you know, catching a game at some point. I don't know if they call it a game or a match. I really don't know. But yeah, I should do that as part of my identity as a Blue Jay now.
Rad Antonov says, can you share some anecdotes about how you or other faculty are using AI in the classroom? Has it moved the bar for academic achievement in any perceptible way yet? Well, no, not really. I mean, what do you mean by use AI in the classroom? Most professors deal with AI in two ways. Number one, they use AI themselves to do their research, to learn something, to look something up, to get suggestions for whatever, writing grant proposals or whatever it may be. Hopefully with a good amount of skepticism. I like to compare LLM outputs to the early days of Wikipedia, where there was a lot of knowledge there, but it was certainly very unreliable. And when I do quiz the LLMs on things that I know about, sometimes you're really impressed at how right they are. Sometimes you really shake your head at how wrong they are. And I think that most experts, in whatever field they're in, know this. The other is, of course, preventing the students from using AI in the ways that they shouldn't, namely having the AI write your paper for you or do your homework for you or something like that. I have not had that issue yet. This is one of the reasons why in my quantum mechanics class and in my philosophy of cosmology class, I'm doing a lot of grading based on in-class exams, which I am generally not a fan of. I like to do either take-home exams or problem sets or papers. But yeah, now AI is making that harder, because it's just an enormous temptation for the students to get help that way. I do have a final paper assigned in the philosophy class, and what I told my students is that they should treat AI as a person. They're allowed to talk to people about the paper they're writing, right? They're allowed to ask for help. That's fine. But then you put that person's name in the acknowledgments.
You admit what you got out of that person. And of course, the final product has to be yours. If you cut and paste from an AI into your paper, that is called plagiarism, and that has severe academic consequences. I think, you know, I don't know, we'll have to see, but my impression is that Hopkins students are mostly aware of AI and will use it, but don't rely on it too much. I don't know. Maybe that changes rapidly over time. So maybe my experience from a year ago or two years ago is no longer relevant here. As far as actually using AI in some clever way to improve your pedagogy, I have no idea about anyone doing that. I'm very, very old school myself when it comes to pedagogy. I like either standing up in front of a room with lots of people in it and lecturing at the blackboard, or sitting down at a table with a small group of people and discussing things about a text or an idea, or whatever. And this semester, both classes are lecture classes, so it's just me up there at the blackboard. I did ask my class the other day, you know, what fraction of their other professors use PowerPoint slides or some other kind of slides in their classes, and they told me it was about one third. I have never done that, could never imagine doing that. I can see why it might be useful in, like, a Physics 101 kind of thing, where you show some animations and stuff like that. And you can just go online and buy a whole PowerPoint course now. Like, they're there. It does make your preparation quite a bit easier. I can't imagine doing that for my own courses, but one can imagine being in desperate straits and going there. And it's not that different. I mean, I'm happy to ask AI, like, you know, what are the topics that should be covered in the first semester of quantum mechanics? I just wouldn't listen to it.
I wouldn't do what it says just because it says it; I would use my own judgment about what to do. But I might read something that it says and go, oh, yeah, you know what, I should do that. That's my use level for AI right now. Bits plus Adams says, you've said there are both technical problems and conceptual problems with quantizing gravity, and also that we understand quantum gravity in the weak field regime pretty well. These appear to be at odds. It seems as though the conceptual problems, as I understand them, would persist in the weak field regime. How should I think about this? Yeah, I mean, you see that they're not at odds because of the phrase "the weak field regime," right? So they're technically not at odds. One is a statement about quantum gravity: we have both technical and conceptual problems. One is a statement about a certain regime of quantum gravity: we understand quantum gravity in that regime. But I get your question. You're asking, you know, OK, you can imagine having technical problems that are relevant in the full theory but not the weak field theory. But the conceptual problems seem like they should be equally conceptual problems in either strong fields or weak fields. But it turns out that's not the case. So what we mean by weak fields for gravity is that space time is not very curved, right? When there's no gravity, you have Minkowski space, you have special relativity, you have flat space time, and you generally have fields propagating within flat space time. And the whole thing about classical gravity, general relativity, is that it's just changing the background on which everything is moving, right? By letting space time itself have curvature. But as long as that space time curvature is small, we can treat it as a perturbation of flat space time.
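In symbols, the weak-field setup being described here is the standard linearized-gravity expansion:

```latex
% Split the metric into the flat Minkowski background plus a small perturbation
g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}, \qquad |h_{\mu\nu}| \ll 1
% h_{\mu\nu} is then treated as just another field propagating on flat
% spacetime, and can be quantized the way any other field is.
```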
In other words, rather than just saying there is a metric that tells me the curvature of space time, we can say there's flat space time and there is a field propagating within flat space time. And I recognize that field to be the difference between the real metric and the flat space time metric, right? So if you want to think about equations, g mu nu is the metric, eta mu nu is the flat Minkowski metric, and you write g mu nu as eta mu nu plus h mu nu, where h mu nu is this tiny perturbation. And then you can just treat h mu nu as a field. You treat the perturbation to the metric of space time as a field propagating on flat space time. So the conceptual problems come from the fact that you're quantizing the structure of space time itself, not just a field propagating on it. And in the weak field limit, you can think of gravity as describing a field on flat space time and its quantization. And that goes through perfectly well. So there is an answer to that question, but it's a good, it's an insightful question. Anonymous says, it seems tautological that you can't derive ought from is, but if oughts exist at all, they emerge from physics; they supervene on the physical state of the universe. Do you think oughts will be like vitriol and life, in that they were confusing concepts in the 1800s, but we later got an objective handle and better ontology on what we meant? Yeah, I think this is a very good question. You know, I think I said this in The Big Picture, but certainly I've said it in blog posts and things like that. The problem with saying that you can't derive ought from is, and I do agree that you cannot derive ought from is, is that if you're a naturalist, what is is all there is. And so where are the oughts going to come from? And the answer is you can't derive oughts from anywhere. That doesn't mean they don't exist, but you can't derive them from fundamental laws.
You can figure out on the basis of psychology and anthropology and sociology and whatever, maybe, what certain kinds of people will develop as their own moral codes. But that's different than saying what are the objectively true oughts out there in the universe. Those are not derivable from what is in the universe, because they don't exist. That doesn't mean that other oughts don't exist: subjective oughts. Just like individual people might have different opinions about flavors of ice cream or different credences about what the dark matter is, different people can have different moral principles that they want to live by. Those exist, and we can talk about which ones are more sort of sensible, which ones sort of cohere with other beliefs and so forth, without thinking that they are derived from the underlying physical reality. David Kuda Verdean says, the Einstein equation is written in terms of the energy momentum tensor, which is expressed using thermodynamic values. Is there a version of the Einstein equation written in microscopic terms? Imagine a universe with only several classical particles in it. Yeah, absolutely. So what David is referring to is the fact that the Einstein equation, which you can read about, for example, in Space, Time, and Motion, and there I will say that on the left hand side you have the Einstein tensor, which is a feature, a characteristic of the curvature of space time. On the right hand side you have the energy momentum tensor, which is all of the stuff in the universe: the matter, the energy, the heat, the momentum, all that stuff.
And as David says, usually for practical purposes, like if you're doing cosmology or stellar structure, you know, you're studying the density profile of a neutron star or something like that, where general relativity might be relevant, you characterize the energy momentum tensor in terms of things like pressure and density and temperature and stress and strain and other things like that: thermodynamic quantities. And it does seem a little weird to have this pristine, fundamental, geometric thing on the left hand side of Einstein's equation and this sort of higher level, emergent, thermodynamic stuff on the right hand side. Einstein himself was bothered by this. It all goes away. All of this worry completely disappears if you define the energy momentum tensor in terms of the principle of least action. So for those of you who know, you can define all the laws of physics, classical physics, by starting with an action, which is an integral over all of space time, or some region of space time, of certain quantities: in fact, the kinetic energy minus the potential energy. And then you minimize, or at least extremize, that quantity among the set of all possible paths that all the fields and all of the particles in your physical theory could take. And that works with gravity just as well as everything else. So David Hilbert, who was already mentioned once in this podcast, showed how to derive Einstein's equation from what is called an action principle, the principle of least action. And indeed, arguably he did it first, but probably not. And certainly he did it on the basis of the insights that Einstein shared with him. So Einstein rightly gets credit for the equation, but Hilbert gets credit for the action formalism of it, the way to get it. And in that formalism, you tell me what the action is for matter, right, for stuff. Is it scalar fields? Is it fermions? Is it photons or whatever?
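Written out, the action-principle route Carroll describes looks like this (in one common sign convention; conventions differ between textbooks):

```latex
% Total action: Einstein-Hilbert term plus the matter action
S = \frac{1}{16\pi G}\int d^4x\,\sqrt{-g}\,R \;+\; S_{\mathrm{matter}}
% Extremizing over the metric yields Einstein's equation, with the
% energy-momentum tensor defined directly from the matter action:
T_{\mu\nu} = -\frac{2}{\sqrt{-g}}\,\frac{\delta S_{\mathrm{matter}}}{\delta g^{\mu\nu}}
```

Nothing in this definition refers to pressure or density; it applies to whatever fields or particles appear in the matter action.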
And there's a well known equation for finding the energy momentum tensor corresponding to that theory of matter. And that equation has nothing to do with pressure or density or anything like that. It's just in terms of the fields and their values. It could be point particles, it could be whatever. For that, you're probably not going to get it in Space, Time, and Motion, but it 100% is in my textbook, Spacetime and Geometry. So you might be at the level where you need to start checking that out. T's Spenrenen says, what does the production process of new or improved theoretical physics look like? How do theoretical physicists devise, develop, and actually test new theories? Is there a defined process, such as, e.g., in engineering? I didn't know there was a defined process in engineering. That's news to me. I'm used to the theoretical physics process, which is very far from being defined. Different people have different methods. I guess, you know, one thing that is absolutely true is that it's very much not the stereotype of the lone genius thinking by themselves about the nature of the universe. Physics, like many other scientific disciplines, is intensely interactive and collaborative. And the vast majority of big insights come from interacting with either literal other scientists, or at least reading their papers, listening to their talks, talking to people over lunch, talking to your students, getting asked questions you don't know the answers to, thinking about things that you've been worried about for decades and maybe have a new way of tackling, or something like that. There's just a lot of ways that can happen. I once wrote a series of blog posts, if you're interested, basically on how a paper gets written. So there's a particular paper that I wrote with Lottie Ackerman and Mark Wise on what you would predict for the anisotropies of the cosmic microwave background if inflation had been anisotropic.
For those of you who know about inflation in the early universe, part of the usual picture of it is that it's a form of ultra dense, temporary dark energy that smooths out the universe and makes it flat and wrinkle free and isotropic. It just pushes it, accelerates it, in every direction equally. And Lottie Ackerman, who at the time was a student at Caltech, I don't know how, but she was listening to some physics colloquium and she asked herself, well, what if it wasn't isotropic? What if inflation happened in a certain direction faster than it happened in the perpendicular directions? How would we know? And so I knew a little bit about cosmology. Mark Wise is a very well known particle theorist who knows a lot about cosmology also. So the three of us wrote a paper about it, and I wrote a series of blog posts about the process from asking that question to getting the paper done. And this is one very possible, very plausible, very common way that papers get written. Someone asks a question and you go, like, oh, yeah, I don't know the answer to that one. That's interesting. It's just never out of the blue. Other times it's like a long standing thing and you're just working away at it. Like someone says, oh, if there are perturbations in the early universe, forget about anisotropies, what's the general way of calculating what the cosmic microwave background anisotropies would look like? And that's something that takes many different papers to write, because there are many different effects going on, and you just do your best at figuring it out. So it's very far from systematic is all that I'm really trying to say. You get good ideas when you can. A big part of the process is deciding what level of idea is hard enough to be interesting but doable enough to be tractable for you and the skills that you have.
The Great Deceiver says, I had a drum teacher once who told me that the stronger you build the foundation, the taller you can build the building, which is metaphorically and literally true. It is a crawl before you walk before you run sort of thing. A perfect algorithm for success, adeptness, virtuosity, whether in music or theoretical physics or baking. I was reminded of all that while watching your recent Big Think talk, as someone who is not just thinking and speaking about the foundations of quantum mechanics and physics but actively working on them here and now. Does that idea make sense to you, that understanding the universe at its most fundamental level isn't just important but critically necessary to help build our collective knowledge higher, to a point maybe thousands of years in the future where we are running along in our understanding? Well, yes and no. I do think that, as I just said, and as Phil Anderson memorably said years ago, more is different. Emergent higher level theories work fine without knowing the underlying foundations. And as I said in a different question just a few minutes ago, you don't need to know the perfect ontology of quantum mechanics to build a better quantum computer. It's a different level of question. On the other hand, you never know when some insight from understanding the lower level better might leak into understanding of the higher levels. And now we're talking about levels that are not that much different from each other, right? You know, the foundations of quantum mechanics to quantum field theory, for example: those two levels, such as they are, are not that separate. So I do think that when it comes to quantum gravity and theories of everything kinds of things, those are a challenge for us, and in part they're a challenge because that is an area where we do need a better understanding of the foundations of quantum mechanics and so forth.
A better understanding of the foundations of statistical mechanics leads you to ask certain questions about information or cosmology or whatever. So I think both; I don't want to make it too strong a statement here. Better understanding of the foundations is just intrinsically good. And it can help you understand something at a higher level, but I don't want to neglect the fact that you can also just understand the higher level; that's perfectly possible. As Phil Anderson made very vividly clear, you don't need to understand the top quark or the Higgs boson to try to think about superconductivity. Miloš Vizor, and I hope that came somewhat close on that one, says: I remember the term epiphenomenalism being thrown around a lot in our philosophy discussion group a decade or so ago. I would naively think it's the most straightforwardly physicalist philosophy of mind, but you seem to disagree by pointing out that we do talk about having a conscious experience of the world, and hence consciousness has a semblance of causal power. Do you think it's mostly a semantic disagreement over which aspects of the first person experience count as consciousness? Or is there a deeper distinction that you could flesh out? Well, I worry, I mean, I'm glad you're asking the question, I worry that I don't understand what is being talked about well enough. Because to me, this doesn't sound like a hard question, if I understand it correctly, and maybe I don't. The idea of epiphenomenalism is that there's physical stuff going on in the brain, whether you want to talk about it at the level of atoms and molecules, or you want to talk about it at the level of neurons and electrochemical signals. But one way or the other, there's some physical stuff going on. And then the physical stuff does what it does and is unaffected by consciousness. But then the consciousness is just a description of what is happening in the physical thing. It is epiphenomenal. It is going along for the ride.
And that sounds superficially plausible, but then you realize that there are higher levels of emergent description, right? And there are other levels in which consciousness is 100% causal. Whether I'm conscious of something, I mean, I have trouble expressing this because it's just so obviously true. Does one really think that whether or not I am conscious of a certain thing doesn't affect my behavior in any way at the level of being a human being? The question of whether consciousness is epiphenomenal is kind of like the question of whether the table in front of me is epiphenomenal. It is true that at the level of atoms and molecules, there are no tables. And I can describe the table in terms of atoms and molecules without referring to the concept of the table. The table is entirely an epiphenomenon, but it certainly has causal power here in the macroscopic world. And I think that consciousness is just 100% straightforwardly the same situation, but maybe I'm missing something. Carol E. Cantor says, in one of your recent podcasts you mentioned The Wire. I was very late to the series and only watched it recently. Now that you live in Baltimore, how does the description of the city in this show relate to your personal experience? Yeah, interesting you asked, because after I mentioned that in the podcast, I mentioned hearing Idris Elba talk to Amy Poehler on her podcast. I'm happy to go on your podcast, Amy, by the way, if you ever need someone with very little celebrity juice, but just a talkative theoretical physicist. But anyway, it was after that that Jennifer and I decided that we should watch The Wire again. We had watched it once together in full, and we've seen a lot of it again sort of randomly, but we figured we should just watch the whole series now that we live in Baltimore. And so we've been doing that, and we're in the middle of it.
And it's great watching it, because of course, just like watching all sorts of TV shows when we lived in LA, you can say, like, oh yeah, I know that place, right? I know where that is. Of course, also, the parts of Baltimore that I hang out in are not prominently featured on The Wire. It's not a show about Johns Hopkins or the academic environment. They do appear sometimes. In fact, I don't know if you remember, for all The Wire fans out there, but Bunny Colvin, who I guess was a major in the police force, was about to retire and legalized drugs in season three. And he got caught at the last minute. But part of the plotline in season three was that he had applied for a job at Johns Hopkins as a sort of security guard guy, after his retirement from the police force was about to take effect, and then they pulled the job offer. So Johns Hopkins doesn't come off very well. You know, the academics don't always come off very well. They come off as well meaning, but not always very streetwise, let's put it that way, in The Wire. Nevertheless, still the best TV show of all time. I love it. Tejas Damania says, do you have any unpublished Mindscape episodes? For instance, you had a guest and recorded the talk, but either were not happy with the discussion, or for some other reason you decided not to publish that episode. A related hypothetical question, assuming you never did this already: in case a guest says something that you do not agree with or feel is inappropriate, personally or for the audience, would you edit it out? So I don't have any unpublished episodes. I say this all the time, but I don't have time to record more episodes than I publish. That's not likely to happen. Twice, maybe three times, yeah, let's say three times, something related to that happened. Twice I recorded an episode with someone who was across the ocean and the audio quality was just not good enough.
And I asked them if we could record it again, and they said yes. And I won't reveal who those people were, because I don't want to make people either feel bad or good about being guests on the show. I love all of my guests exactly equally. But they were kind enough to record it again. And so we have, you know, bad audio versions of the same podcast guests that did appear later. And then one time I forgot to hit the record button on the podcast and we had to record that again, also, even though the audio quality had been fine. But in all of these cases, whenever I started interviewing somebody, that person did in fact get an episode that appeared on the internet. In terms of whether I would edit something out, you know, I'm 100% willing to edit things out. Usually it would be because there's an obvious audio glitch, right, like someone's coughing or an alarm goes off or whatever. Or someone thinks they can do a better job, right? Like someone can say, actually, I didn't like that thing I said, could you edit that out? That rarely happens, but it does happen. Sometimes people are trying to be jocular and humorous and realize it might be taken the wrong way by some of their friends or enemies or whatever. And so they ask me to edit it out, and that's fine. I don't know what would happen if someone said something that I just thought was offensive right there in the episode. I probably would leave it in if I thought that that was their intent. You know, if I thought that they just sort of said something that could be misconstrued as offensive, then I might edit it out, if I thought that it wasn't important to them, right, that they were just, you know, talking and they said something in an awkward way. But if I thought that a person was intentionally making a statement that could be interpreted as offensive, I would sort of let them make that statement, I think.
I don't recall an episode where that actually happened, but I think that would be my attitude. Dan Butler says, you've said some things to the effect that you can't mix levels of description. Understandably, you're wary of using everyday concepts to describe microscopic phenomena better described by equations. But practically speaking, aren't we almost always mixing levels of description? Say you're doing an experiment and notice that a particle is moving too quickly for your purposes. Maybe you need it to take a long time to travel from point A to point B. Isn't it perfectly reasonable for you to decrease the temperature of your apparatus, on the thinking that the particle was moving too fast because my apparatus was too hot? The apparatus and the particle are two totally different levels of description. I don't see how you could avoid needing a model that describes both the particle and the apparatus, since you need to deal with both. Well, I think this is a very good question, actually. I will stick by my statement that you can't mix levels of description, but I will very readily concede that it's an extremely subtle issue and you should think about it very, very carefully. I'm not sure I understand the example you give. You notice a particle is moving too quickly for your purposes. Isn't it reasonable to decrease the temperature of your apparatus? Do you mean like a microscopic particle? If I'm thinking in terms of temperature, I shouldn't even be thinking in terms of individual microscopic particles. I'm not quite sure what experimental setup we have in mind here, but if I'm tracking one particle in particular and treating all the other particles as a thermal bath in the background, that's okay. That's one level of description in which I am including one particle and a thermal bath in the background. I think that's a theory that is relevant to this particular experiment.
I think a better example comes from things like diseases, where we have macroscopic human beings and they get sick, and it took us a long time to figure out, as we discussed in an episode with Tom Levenson, for example, that these are caused by germs, by little microbes. All the time, we mix up our description of killing the microbes with trying to cure the disease, right? Or various chemical levels in your brain or something like that, and the mood that you have. These are things that we mix up. The more careful thing that I would probably try to say, if I sat down and worked it out very carefully and wrote a philosophy paper about it, which I have not done, is that whatever description you have, it has to be, on the one hand, complete, and on the other hand, non-redundant. What I want to get away from is a description where I have a box of gas and I act as if both the position and velocity of every molecule in the box of gas and the temperature of the gas are independent relevant variables. Because I can derive one from the other; they are not independent of each other, right? So that's the kind of thing that I want to avoid by saying that you can't mix levels of description. But if you have a situation where you're trying to track something microscopic while treating the rest of the world as macroscopic and high-level, then I would just call that a single theory, and you would need to put in as much information as you need to have a complete description. By complete, I just mean there's enough information there to make the prediction or the statement, or to derive the property, that you want to be able to talk about. And that might be decided on a case-by-case basis; it might be very complicated. I mean, certainly, once your higher-level descriptions get up to nations or corporations or something like that, on the one hand, many people are involved and you're not going to keep track of each individual person.
On the other hand, some individuals might be really important, right? Like the dictator of the country or whatever. So you might need variables to keep track of all those things, but I would still call that a single level of description. Once you get past elementary physics, levels of description become a little tailor-made, a little bespoke for the situation you're trying to describe. Elias asks an applied-physics-of-democracy question: is there an algorithm to draw fair, non-gerrymandered congressional districts in the United States? This is another question that has sort of two levels of answer. One is that it's actually very easy to draw non-gerrymandered congressional districts in the United States or anywhere else; it's hard to draw gerrymandered districts. The idea of gerrymandering is to delicately carve out districts so that all of the bad guys, all the ones who you don't want to win, are concentrated with really high density in some regions, and therefore you get, you know, a 55 or 60% majority for the party you do want to win in as many other districts as possible. That's tricky to do. What is an interesting mathematical problem is, given some particular map, can you show that it looks gerrymandered versus not gerrymandered? And in fact, we had a whole episode of Mindscape about that with Jordan Ellenberg some time back, who is a mathematician. But the second layer of answering this question is that there are times when you want gerrymandering. Maybe in some ideal sense you don't ever want gerrymandering, because if you had a good way of assigning representation, you wouldn't need gerrymandering. But the problem with the United States system is that you have geographic districts that are winner-take-all. So it's very easy to imagine a situation where 40% of the state is party A, 60% is party B, and you have perfectly non-gerrymandered districts where party B gets 60% in every district.
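The winner-take-all arithmetic in the 40/60 example is easy to check numerically. Here's a toy sketch; the district count, function names, and the assumption of perfectly uniform support are all illustrative, not from the transcript:

```python
# Toy model of winner-take-all districts versus proportional representation.
# Party A has 40% of the vote in every district (perfectly uniform support,
# perfectly "fair" non-gerrymandered map) and still wins zero seats.

def seats_winner_take_all(district_shares):
    """Seats won by party A when each district goes to the majority winner."""
    return sum(1 for share in district_shares if share > 0.5)

def seats_proportional(statewide_share, n_seats):
    """Seats party A would get under simple proportional representation."""
    return round(statewide_share * n_seats)

# 10 districts, party A at 40% in each one.
districts = [0.40] * 10

print(seats_winner_take_all(districts))  # 0 seats despite 40% of the vote
print(seats_proportional(0.40, 10))      # 4 seats under proportionality
```

The point of the sketch is Carroll's: no districting algorithm fixes this, because the distortion comes from the winner-take-all rule itself, not from the map.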
So even though 40% of the population is in party A, they get 0% of the representation. So sometimes you want to gerrymander in the interests of being fair. This is especially relevant for racial representation, for example. There are plenty of places in the country where there's a racial minority that is, you know, 20, 30, 40, even 49% of the vote and gets almost no representation in Congress. So sometimes we intentionally carve out shapes of congressional districts to make it so that those people get a say in Congress. So, you know, not all the time, but I've gotten emails from well-meaning scientists or mathematicians who've said, here, I have a way to never gerrymander. And the problem is, that's not exactly what we want. What we really want is not a system where you just have geographic districts with winner-take-all, right? We want proportional representation or something like that. But that's a step very far away from anything that's realistic in the US right now. Shatlik Matkulov says, as an aspiring academic in physics, I'm an undergraduate physics student, I'm sometimes worried about the financial realities this path imposes. My undergraduate studies are covered by a 100% tuition waiver from the university, which helps a lot. However, considering I'll likely be studying and training for roughly the next nine years before earning a real salary, and given my family's relatively limited finances, I'm hesitant to fully commit to physics even though I'm very passionate about it. I'm considering developing a side project or skill to generate extra income. However, would doing that reduce my chances of a successful career in physics, since it would require investing time that might otherwise be spent on physics? What's your take on this? Did you face a similar dilemma on your journey? Additionally, how would you describe the financial situation of current academics, including yourself?
Are they generally well off? To get to the last question first, I think that a successful mid-career academic is comfortable; well off would be an exaggeration. I think that there are certain academics in certain fields, law or medicine or computer science these days, perhaps, or economics, where you can make a lot of money. And often, the lot of money that you make is some combination of the actual salary the university gives you and consulting gigs or whatever you have on the side, or maybe you're in an engineering school and you invent things and you have a startup company. That's perfectly plausible. But for the median academic, I would say, and I'm just making this up, so I apologize if I'm wildly wrong, I think that a full professor at an average university should be making roughly $100,000 a year. That's full professor, a rank beyond just tenured professor, and some full professors, the superstars of the university, etc., make a lot more. So that's just a guess, a guesstimate; I haven't looked up the numbers. But that's a perfectly good salary to live on in most college towns in the United States. If you live in a super expensive place, if you live in Manhattan, places like Columbia or NYU will help their faculty with finding housing or even have faculty housing. In Baltimore, you've just got to buy it for yourself, but that's okay; it's a lot easier to do so here. What you care about, and I think rightly so, is the journey to get there. I think the final destination is fine, but the major overriding thing that you should be realistic about is that most undergraduate physics majors who want to become physics professors do not succeed. It's a small fraction, even of people who get PhDs, who succeed in becoming full professors someday.
Some of them do, and also the ones who don't often end up earning more money, because the training that you get as an undergraduate physics major, or even as a PhD in physics, is really useful for a whole bunch of other careers you could have, whether it's some kind of semi-quantitative thing in finance or something like that, or just a job out there in the world that requires people who are smart and used to working very hard and can come up with creative ideas. All of these are things that a physics education teaches you to be good at. So keep that in mind. That's the major thing. You can aim, and you should aim if you're passionate about it, to be a full-time professor of physics, but it's a hard row to hoe. I don't think that the financial worries are the big ones, especially if you're already an undergraduate who has tuition taken care of. I was an undergraduate who had tuition taken care of, with a family who was able to give me nothing in terms of support for college, so I took out loans to pay for room and board and miscellaneous things like that, and I had to pay those loans back. But two things happen. One is that when you go to graduate school in physics, at least at almost all places here in the United States that I know of, there will be tuition, but you don't pay it. You don't pay the tuition for graduate school. Either the university pays for that because you take a job as a teaching assistant or something like that, or you have an advisor with a grant who hires you as a research assistant, and then the grant pays for your tuition. So in fact, once you get to the point of being a professor, you will realize that having graduate students is really expensive for physics professors, not personally, not out of their pocket, but out of their grant money. This is the stuff that's being cut, by the way, right now by the Trump administration.
They think that they're cutting waste and fraud and stuff like that, but they're just cutting the salaries of graduate students and therefore decreasing the number of graduate students who can be admitted into the program. But anyway, under ordinary conventional circumstances, graduate school not only doesn't cost you anything, but you get paid: you get a salary for being that research assistant or teaching assistant or whatever, or some kind of stipend if you get a fellowship, which is always a possibility. And as long as you're in graduate school, you typically don't have to pay off your student loans. Eventually you will have to, but they're deferred as long as you're still getting an education. So you don't need to start paying your student loans until you're actually earning a salary, and a salary beyond what you would get in grad school. So I would say that overall, there are plenty of things to worry about in a career in physics or academia more broadly, but the financial side of things isn't really the big one. I mean, you'll spend years, undergraduate, graduate, postdoctoral years, not making a lot of money, right, living in cheap apartments with roommates and stuff like that. But it's not like you're unable to support yourself or you need to get a second job. As far as getting some kind of side hustle or whatever, I would not recommend that unless it were secretly the thing you actually cared about. It's perfectly okay to do a physics degree and do something else. But you are not helping your chances of eventually becoming a physicist; you're helping your chances of eventually striking gold with the other thing that you're doing. If it's really, truly physics that you care about, these are the years when it makes sense to devote yourself to learning and doing physics as much as you can.
Peter Betcher says, on your Mindscape episode with Niayesh Afshordi and Phil Halper, one of them mentioned that you used to work on ether models. What got you into ether models, what kind of model was it, and what made you give up on the ether? You know, you never really give up on ideas; you put them aside and you think about them again. When we're talking about ether, spelled A-E-T-H-E-R, this is a very specific idea, and the name is probably giving you the wrong impression. I don't know where the name came from; it might be from Ted Jacobson. But ether models are ones in which a vector field through the universe has a nonzero expectation value. Ordinarily in physics, you have scalar fields; you have spinor fields representing fermions like the electron and neutrino and the quarks; you have vector fields representing gauge bosons, like photons; and you have tensor fields representing the graviton. But the only one that gets a nonzero value in empty space, as far as we know, is a scalar field, the Higgs field. And the thing about a scalar field that gets an expectation value in empty space is that it's just a number at every point in space; it doesn't pick out a direction in space. But you can imagine a vector field, actually it's not as easy as you might think, but you can imagine a vector field similar to the photon field of electromagnetism, though not exactly the same, getting a nonzero value in the vacuum, which means in the lowest-energy state. Of course, literally the electric field could get a value in space, or the magnetic field could, but that would carry energy; that would not be the minimum-energy state. So an ether model is one where a vector field, or a collection of vector fields, is nonzero even in empty space.
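Schematically, the setup Carroll describes can be written as a vector field $u^\mu$ forced to take a nonzero value in the vacuum. This is only a sketch; the precise kinetic terms and coefficients vary between the papers he mentions (Jacobson's "Einstein-aether" work, Carroll-Lim, etc.), and the notation here is my own:

```latex
% Fixed-norm "aether" vector field, schematic form. A Lagrange
% multiplier \lambda forces u^\mu to a nonzero norm even in vacuum:
\mathcal{L} \;=\; -\tfrac{1}{4} F_{\mu\nu} F^{\mu\nu}
    \;+\; \lambda \left( u_\mu u^\mu + m^2 \right),
\qquad
F_{\mu\nu} \equiv \nabla_\mu u_\nu - \nabla_\nu u_\mu .
% The constraint u_\mu u^\mu = -m^2 makes u^\mu timelike in the vacuum,
% so it picks out a preferred rest frame, breaking Lorentz invariance.
```

The Lagrange-multiplier constraint is what distinguishes this from ordinary electromagnetism, where the lowest-energy state has a vanishing field.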
And by having a direction that they're pointing in, they have not only a magnitude but a direction, and they pick out a preferred reference frame: the frame with respect to which that vector field is pointing in a time-like direction, the frame that is at rest with respect to that vector field. But if you're doing it respectably, you let that vector field have dynamics. You let it move, you let it wiggle, and then you ask what the dynamics can be, what effects it has on the universe, etc. My first ever published paper was on violating Lorentz invariance with a vector field, but we didn't take it super seriously, in the sense that we didn't worry about the vector field's own dynamics. We just put it in as a feature of the background and then asked what the experimental consequences of this would be. This turns out to be a really useful question to ask; it's always useful to ask what the experimental consequences are. Later, other people, not me, thought about the dynamics that the vector field should have; Jacobson in particular, along with others, wrote a bunch of papers. With Eugene Lim, who was a student of mine at the University of Chicago, I wrote a couple of papers that became popular, one that I really quite liked, just called "Lorentz-Violating Vector Fields Slow the Universe Down." And the reason why that paper was fun was that we asked a good question, namely, what is the effect on cosmological expansion of having a vector field like this? And the answer we got was just so beautiful and simple that it was really quite elegant, which is why we were able to give the paper such a simple title. And the point is that when you examine something like this, well, there's a failure mode here; you can get into trouble. What we found was that if you put the vector field in a cosmological background, it's completely homogeneous, pointing along the direction of the cosmological expansion, right?
So there's no preferred, well, the preferred rest frame that it picks out is the one that is already picked out by the cosmic microwave background, right? That was the easiest guess for a cosmological vector field. And what we found is that it has an energy density that scales exactly in parallel with whatever other energy density is dominating the universe, whether it's matter or radiation or vacuum or whatever. And what that means is that secretly it's just changing the value of Newton's constant of gravity, because that's the overall constant of proportionality in the expansion equation, the Friedmann equation, between the expansion rate and the energy density. So it doesn't act like energy all by itself; it just changes the relationship between the energy density and the expansion rate. And you might think, oh, okay, well, that's something that I can test experimentally. But the problem is, if that's all it ever does, then it's changing the gravitational constant with respect to what it might have been, and you don't know what it might have been. So you're not actually testing anything experimentally. What we therefore did was to separately ask what the effect of this vector field would be in the solar system, where we measure Newton's constant from things like apples falling from trees or the earth going around the sun. And we found, and this is just amazing, that again all it did was change the value of Newton's constant, but it changed it by a different amount. So the overall effect is that the effective value of Newton's constant that appears in cosmology is a little bit less than you would infer it should be from testing gravity in the solar system, and that is experimentally testable. Then I wrote some other papers, once I was at Caltech, with a bunch of students there, on the dynamics of vector fields and instabilities in the ether and things like that.
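The Carroll-Lim result described above can be summarized schematically. Here $\beta$ is just a placeholder for the model-dependent combination of couplings and the field's norm; the symbol and this packaging of the result are mine, not the paper's notation:

```latex
% Ordinary Friedmann equation relating expansion rate to energy density:
H^2 = \frac{8\pi G}{3}\,\rho .
% If the vector field's energy density tracks the dominant component,
% \rho_u = \beta\,\rho, the only effect is a rescaled effective constant:
H^2 = \frac{8\pi G}{3}\,(1+\beta)\,\rho \;\equiv\; \frac{8\pi G_{\rm cos}}{3}\,\rho .
```

Solar-system measurements pick up a differently rescaled $G$, so in the scenario Carroll describes $G_{\rm cos}$ comes out slightly smaller than the solar-system value, and comparing the two is the experimental test.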
But what you want to know is, sometimes you ask a question, like the question I was talking about earlier about the anisotropy of inflation, and you find an answer, and the answer might be like, OK, that's the answer, or the answer might be like, ooh, that's really, really interesting. And I think in the case of ether fields, what we found was, oh yeah, OK, that's good to know, but it didn't lead anywhere else. It didn't improve our understanding of anything. So I didn't see a lot of reason to keep thinking about it. Maybe someone else will come up with such a reason. DMI says, what will you do if the military takes over Baltimore? Those of you 500 years from now who are listening to archival editions of the Mindscape podcast might want to know that here in 2025, the federal government has taken it upon itself to call out the National Guard, and perhaps other elements of the government's military apparatus, to crack down on apparent crime in cities such as Washington, D.C. and Los Angeles, without any interest whatsoever on the part of those cities in having this crackdown happen. It's of very questionable legality; let's just put it that way. And it's more or less meant as an intimidation tactic more than an actual crime-fighting tactic. And so the question is, what will I do if it happens in Baltimore, which has been threatened? Baltimore has been mentioned as a possible target. There is a very strong correlation between whether a city has been targeted by the government and whether it has a black mayor. You can draw whatever conclusions from that you like. Of course, the real answer to the question is, I will not be able to answer that until it happens. You know, there are too many details, too many things going on. To say "the military takes over" is, if we're sober and careful about it, an exaggeration. In the cases that we've seen so far, it's not really been a military takeover.
Indeed, in Washington, D.C., even though there's been some extralegal rousting of people who were, I don't know, deemed suspicious by the National Guardsmen or whatever, to a large extent the National Guard has been put to work doing things that the park service should be doing, like putting mulch around trees, because the park service employees were fired when Elon Musk took over the government for a little while. Of course, it's much more expensive to have National Guard soldiers, I guess from out of state, come in to put mulch around the trees, but that's where we are in this world right now. So it's going to depend a lot on details. There's absolutely part of me that says, you know, resist in whatever way you can. What does that actually mean? I'm not going to try to speculate about what that means. You know, I think there's always a tension in these kinds of situations between acting in a way that sort of makes you feel righteous and acting in a way that actually makes the world a better place. I am 100% on the side of acting in such a way that it makes the world a better place. I'm not quite sure what that would be in this situation, so we'll have to wait and see. It's a terrible thing that we even have to contemplate this kind of thing, but there you go. I was just reading an op-ed, I guess, in The New York Times the other day, and it was about what Chicago could do, and the fact that it was written by a person who had studied Chicagoans and knew that they were not averse to causing a ruckus, causing violence and things like that, and, you know, saying that Chicago could be a powder keg. But the point is that there was, just sort of casually in the middle of the article, the phrase, quote, a de facto military occupation of Chicago. And this is something that would have been just inconceivable to us here in the United States a short while ago, that we would be talking about a military occupation of an American city for no good reason.
It's amazing to me that people have not really caught on to the seriousness of the situation that we're in right now, but I will not go on a long rant about that. So we can just take it as something that is terrible and move on to more pleasant topics, like Brendan Barry asking, why is a proton's parton distribution function (PDF) dependent on the energy of your probe, Q? I've been told that it's because at higher energy you're probing finer structure in the proton; however, the parton distribution function gives you the partons' momentum fractions of the overall proton. Why is the distribution of the partons' momenta dependent on the energy of the probe? This is a great question. I know it's a little technical for those of you who are not physicists out there, but as I have often said in various contexts, the proton is a quantum mechanical object. It is not a classical bag of quarks and anti-quarks and gluons. You can't ask questions like, how many quarks are in the proton? That sounds like a question you should be able to ask. In fact, sometimes people will say the answer is three, three quarks in a proton, but then someone else will come along and say, well, actually, there's a cloud of quarks and anti-quarks as well as gluons. And then, at a higher level of sophistication, you come across the parton distribution function, the PDF, where both quarks and gluons count as partons. And the parton distribution function is supposed to answer the question: how many do you see if you actually probe it, if you shoot, let's say, a high-energy electron or photon into the proton, and it bounces off in a certain way, and you want to interpret that scattering process in terms of a certain number of partons in there? And as Brendan says, it's not a constant number. It's not a fixed number; it depends on how you probe it. And the answer is: because there's no such thing as the number of partons in the proton. There's a quantum state.
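As an aside, the standard quantitative statement of this Q-dependence is the DGLAP evolution equation. A schematic version for a quark distribution $q(x, Q^2)$, showing only the quark-to-quark piece (the full equations couple quark and gluon distributions), looks like:

```latex
% DGLAP evolution, schematic non-singlet form:
\frac{\partial q(x, Q^2)}{\partial \ln Q^2}
  \;=\; \frac{\alpha_s(Q^2)}{2\pi}
    \int_x^1 \frac{dz}{z}\; P_{qq}(z)\; q\!\left(\frac{x}{z},\, Q^2\right),
% where P_{qq}(z) is a splitting function: the probability density for a
% quark to emit a gluon and retain a fraction z of its momentum.
```

Probing at higher $Q^2$ resolves more of these quark-gluon splittings, which is the formal version of Carroll's point that different-energy probes are simply different measurements on the same quantum state.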
And what you're doing by measuring the proton using different-energy probes is making different kinds of measurements, just like, at a more extreme level, measuring position and measuring momentum are two different things. So what you're doing with these different experiments is measuring the observational outcome of counting partons with probes of different energies, and I can't say much more than that there's just no reason for that outcome to be the same at different energies, right? You're just doing fundamentally different measurements. You have a high-energy probe bumping into the proton, or a low-energy probe; they're going to interact with the quantum state that is the proton in different ways. And then we interpret that, ex post facto, as: oh, it bumped into a certain number of partons. But that's only classical language after the fact; it's meant to make us feel warm and fuzzy inside. It's not what is actually going on inside the proton. Chris Derubio says, I'd like to hear whether you agree that American political polarization may be partially explained or understood by considering epistemological polarization. On one side of the spectrum, a large fraction of citizens seem committed to evidence-based approaches to formulating political positions. They're making a good-faith effort to use critical thinking and to set a high evidentiary standard. On the other end of the spectrum, we have alternative facts, a rejection of expertise and expert analysis, conspiracy theories, an appeal to ancient texts for knowledge, etc. Any ideas on how to bridge this epistemological divide? Well, I don't think that this is a good diagnosis of what's going on. This is an incredibly self-flattering diagnosis from one side of the spectrum. I think that if you want to understand why people are acting and talking in a certain way, you have to do it in a way that they would accept.
You have to diagnose them in a way that, even if you don't agree with their self-diagnosis, captures what that self-diagnosis is, in order to really understand why they are acting in a certain way. You have to be able to repeat what they would say if they were asked, why do you believe these things? They would certainly not say, well, we don't believe in truth; we believe we can make up our own facts, right? Maybe it comes down to that at the end of the day, but simply saying that is just not going to give you an understanding of what's going on. I do think that right now in the United States, if you divided people up along the left-right political spectrum, then on beliefs about how the world works, things like medicine and climate change and democracy, the opinions of people on the left are more aligned with the truth than the opinions of people on the right. I don't think there's any necessary connection there; this is just what happens to be going on right now. It's very complicated, and I think that the search for overly simplistic diagnoses gets in the way. So lots of things are going on. We have to take them all seriously. I'm not going to say what they all are now; I don't even have a good theory of what they all are, but I appreciate the fact that there are many of them and it's complicated. I'm going to group two questions together. Shambles says, having read recently about a proposed experiment to test the theory of entropic gravity, I'd appreciate your thoughts on the idea of entropic gravity more generally. Are you a fan? And Andrew Goldstein says, can you explain how gravity would be emergent from information and entropy? I've read a bit about it, but I'm still having difficulty understanding the reasoning behind the theory. So, entropic gravity became popular after a couple of papers by Erik Verlinde, although similar ideas had been investigated by other people, including Ted Jacobson, who's already been mentioned once in this podcast.
And the basic idea, the very compelling toy-model metaphor that is used, is that of an entropic force. I actually kind of love this metaphor, and I think it's very interesting and compelling. If I have, you know, a block of wood suspended from the ceiling on a spring, then there is of course the force of gravity pulling the block down, but there's also a mechanical force from the spring pulling the block up or pushing it down, depending on how stretched the spring is. So if you try to compress the spring, it will push against you; if you pull it down, it will pull you back up. So there's an equilibrium point that it wants to reach, and it's all very understandable in terms of ordinary Newtonian forces. Now imagine that you suspend the block from the ceiling not with a spring but with a very, very low-mass chain. We're making it very low mass so that it can wiggle around in a high-temperature situation. In that case, the chain by itself, just hanging there, has an equilibrium where the chain is completely straight, right, where it's all stretched out and the block is as far from the ceiling as it can get. But if we put the whole system at a certain temperature, so we put it in a room with an atmosphere that is at a high temperature, then the chain starts wiggling around. And what happens is there is another thing going on, other than gravity pulling the block down, which is the entropy of the chain. You can think of it this way, you don't have to think of it this way, but it's a good way to think about it: if the chain is perfectly straight and long, that's a low-entropy situation, because there's only one way to be perfectly straight and long. If you squeeze the chain too much, that's also a too-low-entropy situation, because there are fewer ways for it to wiggle.
So there's sort of a favored length for the chain to have, which maximizes the entropy over all the different configurations of the different links in the chain moving back and forth. And that will balance against the pull of gravity, and you can interpret that as an entropic force acting on the block. So it's a different kind of force than a mechanical force, but it comes to the same thing. And the idea behind Verlinde's paper is that gravity is like that. Gravity, which we commonly think of as a mechanical force, might be the result of an entropic force from a whole bunch of microscopic degrees of freedom that you and I don't know about, right? So, without going into details about what the links of the chain are, there might be some little invisible degrees of freedom that make up spacetime and that are trying their best to maximize their entropy. There are other things going on too, just as the block pulling down on the chain is affected by gravity as well as by the entropy of the chain; in real gravity there are other things going on, like the momentum of a planet or whatever. But basically, you can try to recover your ordinary expectations for how gravity works. The way that Jacobson did it, he referred to it as the Einstein equation of state. He wrote a couple of very important papers talking about how, rather than positing Einstein's equation for gravity and deriving Hawking's law that entropy is proportional to area for a black hole, you can go the other way. It's a subtle thing, and I'm not going to get it exactly right here because there are a lot of technical details that matter, but roughly speaking, you can posit that the flux of entropy across a surface is proportional to its area, and from that you can derive Einstein's equation. So this is called the Einstein equation of state. And that flux of entropy across the area is sort of similar in spirit to the entropic gravity idea. I think it's all great.
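The chain picture can be made quantitative with the textbook entropic-spring result for an ideal (freely jointed) polymer of $N$ links of length $b$. The symbols are standard statistical mechanics, but this particular model is my illustration, not one Carroll names:

```latex
% Free energy F = E - TS. For an ideal chain the internal energy is
% independent of extension x, so the tension is purely entropic:
f \;=\; \frac{\partial F}{\partial x}
  \;=\; -\,T \frac{\partial S}{\partial x}
  \;=\; \frac{3\, k_B T}{N b^2}\; x .
```

This is a Hooke's-law force whose "spring constant" is proportional to temperature: heat the system and the entropic pull gets stronger, just as in the hanging-chain picture. Jacobson's derivation plays a similar thermodynamic game, applying the Clausius relation $\delta Q = T\,\delta S$ with an entropy proportional to area across local horizons.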
I mean, I'm not sure whether this is the kind of thing where you're going to say, that is the correct way to think about it; or, this is an interesting alternative way to think about it; or, we don't know how to think about it, and maybe that's not the right way. But I definitely think it's a provocative and interesting proposal. There's a similar thing; I've written a couple of papers in areas very close to this that you can look up if you're really interested. And so I do think it is promising. But the question is, how do you make progress on this? What do you do with it once you say that? Verlinde had some ideas about cosmology and dark matter and things like that, which I don't think were very promising at all. Jacobson's proposal has been taken up by some people like Tom Banks, as well as myself and others. So maybe we can get someplace with it, but that's going to be the question: can you actually use this kind of insight to help you get a good theory of quantum gravity, or understand black hole information, or something like that? Charlie says, what evidence is there for or against the possibility that space is discrete rather than continuous? I should have grouped this with a question from before, but there's no evidence that space is discrete rather than continuous. There's not really evidence against it either, other than the casual observation that if you just make the most straightforward attempt to make space discrete, then you will violate Lorentz invariance: you're picking out some preferred frame in which space is at rest. That doesn't seem to comport well with what we know about theoretical physics. It might be true, right? It might be that there is some experimental prediction from that way of thinking that will eventually come true, and that's something very, very important and interesting to look for.
But as I said earlier, quantum gravity doesn't mean that space is discrete, and in fact, I think that it's almost certainly not right. I give the possibility that space is discrete very, very low credence. It might be that there is some discreteness somewhere in the correct description of quantum gravity. But the thing about quantum mechanics, just as we were talking about the proton a minute ago, is that it's not just a simple collection of classical objects, right? I think that the problem with thinking of space as discrete is that you begin to think of just a bunch of points making a lattice and calling that space. And quantum mechanics is way more subtle than that. It's just never going to be quite that simple. Like, what if it's discrete in momentum space rather than position space, or something like that? So whatever it's going to be, I think we should be open to different possibilities, but it won't be something as straightforward and simple as turning space into a discrete lattice. Magnus J says, I enjoyed your article called Why Boltzmann Brains Are Bad, but I do have some questions after reading it. Entropy-wise, a brain is more likely than the universe we are observing, but how does one really compare these probabilities when considering field theory? The universe event will include, among other things, a Higgs field value of 246 GeV that will allow for matter as we know it, and thus a brain, to exist. I'm having some trouble understanding how a brain-sized fluctuation would also provide its own Higgs field in a global state where the universe, or the Higgs field, isn't active. This still leaves the possibility for BBs in de Sitter space, where you discuss vacuum decay as a way to limit Boltzmann Brains. But my question pertains to the more original brain versus universe argument and the universe's initial properties. So just to be clear here, you don't compare brains to the universe, because a brain is inside a universe, right?
What you're comparing is a brain in an otherwise empty universe to a universe with a lot of stuff in it. Our universe has 10 to the 88th particles in the observable part of the universe, and at very early times, all of those particles were in an incredibly low entropy state, essentially zero entropy compared to what the maximum entropy could be. So the difference between the maximum entropy and the actual entropy of the actual universe in which we live is enormously big. Whereas the difference between the entropy of empty space and empty space with a brain in it is relatively very, very, very tiny. So in either case, in the whole universe case or the brain case, you're imagining that you already have space and time and the Higgs field and all those things, right? Those are not what you're fluctuating into existence. You're fluctuating into existence different matter configurations in a specific way, within a specific background. And the idea that I like, the baby universe idea, is that you're trying to fluctuate just a little thing, what amounts to a tiny black hole on the other side of which appears a baby universe, and that's actually much smaller and easier to fluctuate into than a brain. But that wasn't really part of the Why Boltzmann Brains Are Bad discussion. The idea there is that, you know, let's put it this way: I have a brain, and I think that other people also have brains. For all of those different brains to fluctuate into existence is just necessarily less likely than for just one brain to fluctuate into existence. That's just a counting argument. There's really no way to wiggle out of it. Warp 90 says, why is Hawking's result about black hole radiation widely accepted even though a theory of quantum gravity is still out of reach? Well, a couple reasons. One, it's really not about quantum gravity at all. Hawking did not attempt to quantize gravity, nor did he claim to be doing so.
He was studying the behavior of quantum fields in a fixed curved space-time background. And that should be pretty well understandable, right? We understand quantum field theory pretty well in a flat space-time background, and putting it in a curved space-time background isn't that much of a leap. And then the other leap that you have to make is simply to imagine that there is energy in the quantum fields and that energy is going to affect the space-time that they live in, right? So if energy is leaving the black holes, the black holes are going to have to shrink. So no details about quantum gravity are at all necessary to understand Hawking's result about black hole radiation. Now, the simplistic conclusion from Hawking's calculation that information is lost is one that comes out exactly because you have not tried to include all of quantum gravity, right? And so one of the things that people have been doing, trying to do for decades now, is try to put the quantum gravity back in and help understand how that might get the information out of the evaporating black hole. They haven't completely succeeded in a way that everyone agrees with, and it turns out to be a really, really hard problem, but that's no reason not to believe Hawking's original result. The thing about the result is it's very robust. There are analogues to Hawking radiation that you can make in a condensed matter system. The equations are the same, and solving the equations is not that hard. You could do it. So it would be much more surprising if Hawking's result about black hole radiation was wrong than if it turned out to be right, even though we have no direct experimental evidence. Matt Haberlund says, studying dynamics as an engineer, it was easy to invent systems for which the equations of motion didn't have neat or enlightening analytical solutions. 
Studying classical mechanics from a physics perspective, it seemed miraculous that such useful general results, like the Euler-Lagrange equations, could be derived from simple principles. How often does inability to carry out mathematical calculations get in the way of your research? I'm thinking about this question, and I'm not quite sure that I'm going to give the most informative answer to it, because in the work that I mostly do, carrying out mathematical calculations is either doable by hand, you know, write down the equations and solve them, or you can do a little numerical solution to the equations, right? Just as a very simple example, if you have an expanding universe with different kinds of matter, let's say you have a scalar field rolling down a potential, then for most potentials that you can choose for the scalar field to roll down, you're not going to find an exact analytic solution for that particular expansion history of the universe. But you can very easily put it on a computer and solve it, and there's no reason not to trust that answer. So for most of what I do, mathematical difficulty does not really get in the way. Now, in complex systems, you have an example where things are much different. Because of positive feedback and, you know, complicated networks and hierarchies and chaos and all of these things, you have a situation where a simple approximation, a spherical cow, might not be good enough. And it's not that you can't do some mathematical model of the system, but you might have to use different techniques. If you go back to the interview we did with Doyne Farmer, who was mostly talking about economics and agent-based modeling: agent-based modeling is an example of a technique that you use when you don't think that you can invent some simple model that you can just plug into a computer and solve. So instead, you conjure up in your model a bunch of hypothetical agents and let them bump into each other and see what happens.
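As a concrete version of the scalar-field example, here is a minimal numerical sketch, not anything from the episode: a homogeneous field rolling down an assumed toy potential V(φ) = ½m²φ² in a flat expanding universe, integrated with a simple Euler step in units where 8πG/3 = 1. All the parameter values are made up for illustration.

```python
import math

# Toy model: homogeneous scalar field phi rolling down V(phi) = 0.5*m^2*phi^2
# in a flat expanding universe, in units where 8*pi*G/3 = 1.
# Friedmann equation:  H^2 = 0.5*phidot^2 + V(phi)
# Field equation:      phiddot = -3*H*phidot - dV/dphi
# There is no exact analytic solution for generic potentials, but a simple
# forward-Euler integration gives a perfectly trustworthy expansion history.

def evolve(phi0=3.0, m=1.0, dt=1e-4, t_max=20.0):
    phi, phidot, t = phi0, 0.0, 0.0
    history = []
    while t < t_max:
        V = 0.5 * m**2 * phi**2
        H = math.sqrt(0.5 * phidot**2 + V)        # Hubble rate from Friedmann eq.
        phiddot = -3.0 * H * phidot - m**2 * phi  # Hubble friction + potential slope
        phi += phidot * dt
        phidot += phiddot * dt
        t += dt
        history.append((t, phi, H))
    return history

hist = evolve()
print(hist[0][2], hist[-1][2])  # H decreases as the field rolls and then oscillates
```

The field slow-rolls while Hubble friction is large, then oscillates around the minimum while the Hubble rate steadily decays, which is the expected qualitative behavior.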
So in some sense, you're still doing a computer simulation, but in another sense, you're not trying to use your brain power to pick out what the relevant variables are. You're letting the simulation tell you what the relevant variables are. So I think that, you know, things are not necessarily always simple, but mathematical calculations are generally doable one way or the other. Gao Shang Wei says, in your 2014 paper about vacuum fluctuations, you argue that an empty de Sitter space encoded in an infinite-dimensional Hilbert space will lead to a static quantum state. However, in the following years, when talking about the arrow of time in lectures and on Reddit, you make references back to your 2004 paper with Jennifer Chen, which uses the idea of quantum fluctuations that you appear to have argued against. What are your views now about your 2004 paper, and whether the situation you analyzed in the paper is still possible? Well, this is just what happens when you think about two different theories, right? Two different theories might give you two different answers. In the case of the 2014 paper about Boltzmann brains and vacuum fluctuations, we were not including in the space of possibilities fluctuations into baby universes. Instead, we were just thinking about quantum fields in a background expanding universe, which is a much better understood theory. It might not be the right theory, right? Because maybe the right theory has baby universes. That would make me very happy. But we were addressing the Boltzmann brain problem in a context where you just were worried about the simplest possible extrapolation of real-world cosmology. In the real world, the simplest possible extrapolation is: we have a cosmological constant, we are becoming more and more dominated by the cosmological constant, that can last forever, and the future will be quantum fields fluctuating in empty de Sitter space, like you said.
And as we said in that paper in 2014, and I still agree, in that scenario the fields will just settle down and stop fluctuating before too long. And you will not actually fluctuate into Boltzmann brains. The idea that you will fluctuate into a baby universe that splits off, pinches off, becomes separate from the background cosmology, that's a quantum gravity idea that is on much less firm ground. We don't know whether that's true or not. So for the purposes of the 2004 paper, we assumed it was, and we derived a scenario from it, some conclusions from it. But it might not be right. That's how it is. We just don't know. So I would say that we're okay either way. In the case where there aren't any fluctuations at all, then we're not going to fluctuate into Boltzmann brains. In the case where there are these fluctuations into baby universes, then perhaps there can be a fluctuation into brains as well. But you may have made a whole universe even more easily, as we just talked about. You've created a new little baby universe that pinches off and goes its separate way, and it can grow into a huge number of brains. And so hopefully, and again, this is not something that's perfectly well understood, but hopefully it's those ordinary brains, the ones that arise in universes after they have been born as baby universes, that will eventually dominate the number of observers in the universe. Wes Payne says, in your July 2024 AMA, you gave a great explanation of tensors as multi-linear maps on vectors and co-vectors. In quantum mechanics, though, tensors appear very differently, as tensor products of Hilbert spaces describing entanglement. How do you plan to introduce tensors or bridge those perspectives in your upcoming quantum mechanics book?
For what it's worth, as a physics undergrad who got a mostly confused half-introduction to tensors and the dual space, the abstract universal property view was what finally made things click for me, though I imagine that might be too abstract for the usual physics path. So yeah, for people who have no idea what is going on here in this question: the idea of a tensor appears in lots of different places, but most obviously, and in your face, it appears in two different contexts in undergraduate or beginning graduate physics education. It appears in quantum mechanics and it appears in general relativity. And the idea of a tensor is exactly the same. It's not different in those two contexts, but the use to which tensors are put, the notation that is used to describe them, and the description that is given of them is completely different. So you wouldn't even recognize them as the same thing if you didn't know that the word was the same. As Wes says, I gave the most abstract way of thinking about a tensor back in an AMA a year or more ago, but there's a more down-to-earth way of thinking about it. So say I have two vector spaces, right? There's something called a direct sum that you can do with these two vector spaces. It just says you combine them, you smush them together. So instead of having two vector spaces of three dimensions each, you have one vector space of six dimensions. It's really quite simple. But in the tensor product construction, you keep the possibility of vector space number one and vector space number two both doing their own thing. And then you consider the combinations of every possible way they can do their own thing. So this, you can kind of see it sounding like quantum mechanics already.
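The dimension counting here can be checked with a toy sketch (nothing from the episode; the basis labels are made up): a direct sum adds dimensions, a tensor product multiplies them.

```python
# Direct sum vs. tensor product of two 3-dimensional vector spaces.
# We only track basis vectors, since dimension is all we want to count.

def direct_sum_basis(basis_a, basis_b):
    # Smush the two spaces together side by side: dimensions add.
    return [("A", e) for e in basis_a] + [("B", f) for f in basis_b]

def tensor_product_basis(basis_a, basis_b):
    # Keep every combination of "what space A is doing" with "what space B
    # is doing": dimensions multiply. This is how composite quantum systems
    # are built out of subsystems.
    return [(e, f) for e in basis_a for f in basis_b]

A = ["a1", "a2", "a3"]
B = ["b1", "b2", "b3"]

print(len(direct_sum_basis(A, B)))      # 6 = 3 + 3
print(len(tensor_product_basis(A, B)))  # 9 = 3 * 3
```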
If I have system A and system B, there's not some fact about what system A is doing and some fact about what system B is doing. There's a superposition of possible measurement outcomes. So that's why tensor products are how you construct composite systems in quantum mechanics out of underlying systems. So in that case, if you have a three-dimensional vector space and a three-dimensional vector space, the combined tensor product of the two is nine-dimensional, not six-dimensional: three times three, not three plus three. And that's how it's usually presented in quantum mechanics. The general relativity version of it, which again is exactly the same thing, is just presented in a different way and put to different purposes. You're generally not making a composite system out of two different systems. You're generally making a tensor field by combining two different tensor fields. So you're multiplying vector fields together or something like that. Or you're taking derivatives of vectors and making the Riemann tensor, etc., etc. So pedagogically, it's a very good question. This is always a question when you're teaching quantum mechanics or relativity or anything like that: how much should you give the most bird's-eye abstract view of this mathematical construction versus how much should you just put it to work in this particular situation and not worry about the overall view? I'm struggling with this in the quantum mechanics course right now, because if you're honest about things right away, well, let's put it this way. Many quantum mechanics books lie to you all the time about all sorts of things. And I'm not talking about weird things about interpretations of quantum mechanics or the measurement problem. I'm talking about the down-to-earth mathematical formalism. You know, they'll say that position eigenstates are a basis for Hilbert space, which is just not true. It's kind of close to true, and it's close enough that you can say it and get away with it.
But mathematically, it's not really correct. So you have to be more careful, and tensors and things like that are one of the places where I'm bumping into that. So I think you have to compromise. I think that some people really, really benefit from getting the high-level abstract mathematical description. Some people do not. They want the down-and-dirty, just-tell-me-how-it-works kind of thing, the how-can-I-push-around-the-symbols kind of thing. So you have to try to compromise. How I will actually do that in the course or in the textbook, we're going to have to wait and see. Alexi Kostibis says, you mentioned that Johns Hopkins, your employer, took down a DEI statement in a previous episode. I'm curious if you have felt any other effects of the administration's attacks on universities, any chilling effects on your speech, changes in how you teach, etc. If you can't answer freely, just wink twice. So, I think that you might have slightly misparaphrased. It was not Johns Hopkins that I was talking about. It was the Space Telescope Science Institute, which is across the street from Johns Hopkins and administered by Johns Hopkins, but it's funded and run by NASA. So it is a NASA facility, not a Johns Hopkins facility. And it wasn't a DEI statement. They had a sign on the wall saying, like, you know, we respect all kinds of people and natures and things like that. And they took down the sign. Okay. And it's very depressing that they took down the sign, and it's also very unsurprising. Let me just say this. I do think that universities, more so than government agencies, because universities are supposed to be, at least in principle, independent, but I do think that universities, law firms, newspapers, all of these things should absolutely resist encroachments on their freedoms from the government. And they should certainly resist the temptation to give in prematurely, before they're even forced to give in. They should fight.
At the same time, I completely understand that resisting and fighting back might mean that they get their funding cut, and getting their funding cut doesn't mean, you know, oh, we can't buy a yacht this year. It means we fire people. We fire administrators. We fire students, or we don't hire students in the first place. We fire postdocs. We fire janitors and computer people. You know, we just can't afford to pay all these people. So real people's real lives get affected by this kind of thing that you're tempted to do. So it's exactly as I was just saying: you have to weigh the symbolic benefit of acting in a righteous manner against the real-world effect of hurting people who are not necessarily in a position where being hurt is something they can easily handle. Again, not to say that you shouldn't do it. I think that universities should resist, but you can't just act glibly about it. You have to understand the human cost of doing so. So I don't know what the situation was at the Space Telescope Science Institute. I don't know if they were forced to take down that sign or if they took it down just because they thought it might worry somebody. But I recognize that I don't know. So I'm not going to judge without knowing all the facts behind it. There's been no effects whatsoever on me personally, other than the fact that, you know, the overall financial situation of the university, also maybe the universe, has certainly been adversely affected. So there's a hiring freeze. We can't hire new faculty or postdocs or anything like that unless the money is already, you know, set. So I'm trying to hire a postdoc this year because I have funds that the university promised me, so I should be able to do it. But I have to jump through some administrative hoops, and it's not as certain that I will be able to do it as it would have been in other years. There are faculty searches that we had to cancel.
There's an overall, you know, pay cut, basically, for everyone at the university, including myself. But there's no one who's been saying, you know, that I can't say Donald Trump is a moron or anything like that. I don't ever say something like that in my classes. But no one has told me that I shouldn't. Brent Meeker says, you've written a paper with Jackie Lodman about violation of conservation laws in quantum measurements. You said that energy was only conserved on average, but not in a single measurement. If I understood it correctly, it would also apply to the measurement of any conserved quantity, not just energy. Can this provide any test of many-worlds versus, say, QBism? Do all interpretations of QM imply non-conservation in measurements? So two things. Number one, I don't think there's any difference, as far as I can tell, between different interpretations or formulations of quantum mechanics vis-à-vis this problem. I think that all interpretations of QM would, as far as I know, predict basically the same thing, except for those that explicitly violate the Schrodinger equation, like objective collapse models, in which the violation of conserved quantities is even worse, even more obvious. That was the inspiration for our paper. In objective collapse models, everyone knows that energy is not conserved, and they're using that fact as a way of experimentally constraining the theories. So Jackie and I are just pointing out that it also happens, not quite as noticeably, but it can also happen, in every other interpretation of quantum mechanics. It wasn't just a many-worlds kind of thing. But the other thing is, applying the idea to other conserved quantities: there is in fact a huge difference between energy and every other conserved quantity, namely that for other conserved quantities, you can be in a state of a single value. You can be in an eigenstate.
Typically, for example, for electric charge, the electric charge of the whole universe is expected to be zero, right? And it's not going to change no matter what measurement you make. Energy is the one thing for which that can't be true, because energy is the thing that appears in the Schrodinger equation and tells you how the wave function, the quantum state, evolves with time. It's the one thing that you can't just have a definite value of; otherwise the quantum state doesn't change over time. Now, there are subtleties there that one could get into. There are versions of quantum theory, versions of quantum mechanical models, where the wave function doesn't evolve with time and you have to say that time is emergent, okay? But that's okay. If time is emergent, then there is basically an effective Hamiltonian, which gives an effective energy, and you're not in an eigenstate of that effective energy, and then everything that Jackie and I said applies to that effective energy. So the same discussion is still valid even in those cases. But energy is special in that particular way. Peter Krausp says, I enjoyed your solo episode on complexity. You said, the universe essentially observes itself. I was wondering how this had bootstrapped itself if we go back in time. My naive assumption is that shortly after the Big Bang, the universe must have had at least two states that weren't entangled with each other, a mixed state as I've been able to look up. Whether these could be labeled as system and environment seems a bit strange. What do you think about it? Well, there's two things. You're right, there was a provocative little statement, a saucy little statement, that the universe, not the university, I've been in universities for too long in my life, that the universe essentially observes itself. The university certainly observes itself too; no one's surprised about that. So what does that statement mean, the universe essentially observes itself?
So the sort of casual way that I think about it, which is the way that I've written about it, is that, as implicit in the question, you divide up the universe into a system part and an environment part. And then the environment part becomes entangled with the system part. That's decoherence. That's what I mean by the universe essentially observing itself. In cosmology, in the early universe, this division into system and environment is not at all obvious. What is playing the role of the environment? Maybe you're saying it's short-wavelength fluctuations. Maybe it's long-wavelength fluctuations, because there are fluctuations outside your observable part of the universe, etc. But these are addressable questions. I actually did write one paper about this, with Jason Pollack and Kim Boddy, on eternal inflation. Because in eternal inflation, you say things like, you have an inflaton field and it rolls down a hill, rolls down a potential, but there are quantum fluctuations. And there's a probability that the field fluctuates up the hill versus down the hill, etc., etc. And this is all done in an extraordinarily naive way in terms of actual quantum mechanics. What do you mean by a fluctuation? Who's observing this fluctuation? Is it collapsing by itself? Is it sort of magically being measured? And so we decided that we would try to do it correctly, with decoherence and all that stuff. And we did a lot of work and a lot of equations. And at the end, we got a very, very slightly different answer than the usual conventional hand-wavy way of doing it. So that's why no one writes these papers: you do a lot of work, and you get basically the same answer, and no one is really interested. There is a more sophisticated way of doing it, which is with decoherent histories. The whole idea of the decoherent histories program, basically, metaphysically or ontologically, is the same as Everett.
It's just saying that there's a quantum state evolving with time, but it's giving you a formalism for picking out the quasi-classical histories within that quantum mechanical state. And it relies on the existence of some choice of measurements that could have been done, measurements that you don't actually do, but you could have done them. And this helps you separate out what counts as a classical history. So that would be a way of doing it in cosmology that wouldn't rely on a distinction between system and environment. As far as I know, I mean, maybe people have done it; I'm really just not familiar. I bet if you did it, you would just get the conventional answer out once again. Nikola Ivanov says, in your solo podcast about time, you described a bounded universe in which time is fundamental and which eventually reaches all possible states in an infinite loop. And this situation was going to inevitably create Boltzmann brains as the lowest entropy configuration with observers in it. It seems that this conclusion assumes that Boltzmann brains are one of the allowed states in this configuration space. For example, when a mechanical system explores a configuration space, it visits all states but is subject to constraints like energy conservation. Why are we assuming in this thought experiment no constraints of any kind on the formation of Boltzmann brains? Maybe the conservation of energy or some other constraint doesn't permit the formation of Boltzmann brains as the lowest entropy state with observers in it. Yeah, maybe. I don't know. I encourage you to write a paper if you have a calculation that shows that that is true. The reason why I don't think it's very likely to be true is because there's nothing special about brains, right? There's nothing special physically about a brain. After all, we want to consider situations where there could be real observers in the system, right?
Like, if you had the zero energy state, then you could make an argument that Boltzmann brains don't fluctuate into existence because of energy conservation. But you want a state that could have 10 to the, whatever it is, 12 galaxies in the universe as part of the quantum state. So I don't think that energy conservation is going to prevent you from having a single brain or two here or there. And I can't think of any other conservation laws that would get in the way of that either. It is true that if you delicately arrange things so that you live only on some subspace of all the possible states of your system that you might have wanted to explore, then there are places you can't get to under the laws of physics. But you have to work to make that happen. If you just sort of pick a random state, it's going to go to all sorts of different places. And I think that's the generic assumption to make in these circumstances. Jonathan Jertsen says, I can't wrap my head around the concept of degeneracy pressure. I understand that when a fermion wave function is anti-symmetric with respect to particle exchange, this causes a two-particle wave function to become zero if the particles have identical states. But how can particle exchange, which is rather abstract and discrete, give rise to a pressure which is concrete and continuous, one that can even be overcome by gravity in some cases? So this is a great question. If I remember correctly, I talk about it a little bit in Quanta and Fields. And for those of you again who have no idea what's going on, you may have heard of the Pauli exclusion principle. Two fermions, like two electrons or two quarks, cannot be in exactly the same quantum state. They can be in almost the same quantum state, because electrons have a spin that can be either up or down. So you can put two electrons with opposite spins in the same spatial quantum state.
But then beyond that, you've used up all your extra freedom, so that's why in a helium atom, you can have two electrons that are more or less in the same orbital. But as soon as you go to lithium, et cetera, you need extra orbitals, because you are excluded from those original orbitals. And it is absolutely true, as Jonathan says, that this sounds like a yes-or-no question. Are you in the same state or are you not? But in fact, people use this idea of Pauli exclusion to arrive at a pressure in neutron stars and white dwarfs and the like, when these fermions are squeezed very close together. So how can that happen? The answer is, you know, it's very nice and simple and fun to do the simple calculation about whether two electrons can be in exactly the same quantum state and get the answer no. But in the real world, what do you mean by exactly? Well, you know what you mean, but think carefully about what you mean by exactly the same quantum state. If I have two wave functions for an electron that are really, really almost exactly the same, but slightly different, is that okay? Like, is it only when they're exactly the same that it's excluded? If you go through the math, the answer is no, that is not true. If you try very hard to put two electrons in states that are very, very close to each other, there is a force that pushes them apart, that prevents that from happening. So in fact, you can't have a substantial overlap between two electrons in two different quantum states. They have to be basically orthogonal to each other. And that's in fact exactly what happens as you go through the atoms beyond helium. All of those different orbitals that you studied in chemistry are, as quantum wave functions, perpendicular to each other in Hilbert space. So it is that effective force pushing the electron wave functions apart that gives rise to degeneracy pressure.
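The continuous version of that yes-or-no exclusion rule can be seen in a small sketch (a hypothetical toy model, not anything from the episode): antisymmetrize a two-particle wave function built from two made-up Gaussian "orbitals" and watch what happens as the orbitals become more and more similar.

```python
import math

# Antisymmetrized two-fermion wave function (a 2x2 Slater determinant):
#   psi(x1, x2) = phi_a(x1)*phi_b(x2) - phi_b(x1)*phi_a(x2)
# When the two single-particle states coincide, psi vanishes identically
# (the Pauli exclusion principle). When they merely overlap a lot, psi is
# suppressed but nonzero, which is the continuous behavior behind
# degeneracy pressure.

def gaussian(center):
    # A 1-D Gaussian "orbital" centered at `center` (unnormalized toy model).
    return lambda x: math.exp(-(x - center) ** 2)

def antisymmetrized(phi_a, phi_b, x1, x2):
    return phi_a(x1) * phi_b(x2) - phi_b(x1) * phi_a(x2)

phi0 = gaussian(0.0)

# Identical states: the amplitude is exactly zero everywhere.
print(antisymmetrized(phi0, phi0, 0.3, 1.1))  # 0.0

# Nearly identical vs. well-separated states: the amplitude is heavily
# suppressed when the orbitals overlap strongly.
near = antisymmetrized(gaussian(0.0), gaussian(0.1), 0.3, 1.1)
far = antisymmetrized(gaussian(0.0), gaussian(2.0), 0.3, 1.1)
print(abs(near) < abs(far))  # True
```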
Carajayu says, Richard Dawkins has said that high school math and science education often fails to capture students' interest because it tries to do so by showcasing its practicality, whereas he believes that it should focus on its beauty. I'm aware that as a professor, you're mainly teaching those who are already interested, but what are your thoughts on this issue in STEM, and maybe education as a whole? That is, between practicality and beauty, which do you think should be emphasized more in math and science education? You know, as I sort of hinted at before, different people will respond to different things. I'm not completely sure that education emphasizes practicality rather than beauty, although, you know, I'm somewhat sympathetic to that view. But also, I'm pretty darn sure that there are students for whom the practicality is much more important than the beauty. Those who eventually become professional scientists might be seduced by the beauty of science. And I do think the beauty of science is important and should be mentioned, but I always think it's a mistake to say, like, here's the right way to talk about science, to present science, to educate people about science, because people are different. I think you've got to mention the beauty and mention the practicality. And if that makes it harder to fit everything into your course, so be it. You've got to do it that way. And not only should you try different techniques, but while you're using the techniques, you should see what's working. You should get feedback from your students. You should, like, see where their heads nod and where their eyes light up and where they answer the questions correctly, you know, what techniques are working. I'm very, very down to earth and empirical about these kinds of questions.
Yousef says, how can we know that the irregularities in galactic motion is due to dark matter rather than Newton's or Einstein's laws breaking down on large scales? Is it because dark matter is a better candidate or is there more to it? This is the kind of thing, I talked about this a lot before, sorry, Yousef, but it's out there on the internet, but I'll give you the basic short version. Dark matter is a hypothesis. Changing Einstein's equation or general relativity is a hypothesis. Or changing Newton's laws or whatever. These are all different scientific possibilities. You should not be at all surprised to hear that scientists have given a lot of thought to all of these possibilities. And once you have the hypothesis, you have to compare it against the data. Not just a little bit of data, but all the data out there. And the very short version of the story is once you go beyond galactic motion, by which I think you mean the actual motion of stars and gas in galaxies, there's a lot of other phenomena out there. There are clusters of galaxies. There's statistics of large scale structure. There's the cosmic microwave background. All of these, there's weak gravitational lensing and strong gravitational lensing and blah, blah, blah, blah. All of these are really good fits to the idea that the universe is full of dark matter. They're really bad fits to the idea that there is a change of the law of gravity. Maybe there is a change to the law of gravity, but there's also dark matter if you really want to fit the data and fitting the data is what I'm all about. Nick B says in the constellation, sorry, in the TV show Constellation, Mike, who is the hitman slash fixer from Breaking Bad, turns up as a morally questionable physicist seeking a new state of matter. He is prepared to risk astronauts lives to get the results from his experiment. The experiment can only take place on the International Space Station because it requires zero gravity. 
Is it accurate to describe the conditions on the ISS as zero gravity? Or is it just a location where gravitational forces balance or cancel each other out? And do you know of any physicists who would feel that a few lives are a fair trade, they don't have to kill them directly, for a radical scientific breakthrough that would benefit all of humanity? Well, these are two very different questions. You've sneaked two questions in there, but they're both okay, so I will grant you this one. I did, by the way, cancel some questions that people asked because they were trying to ask more than one question. Those are not the rules of the AMA. You get one question per month. Anyway, the ISS is usually described as a microgravity environment. There are tidal gravitational fields, in the sense that two objects that are not at exactly the same point but slightly separated from each other, floating in the ISS, in principle feel a gravitational field that either pulls them apart or pushes them together over time. But those forces are really, really, really tiny. So for all intents and purposes, it's zero gravity. Now, there is a nomenclature problem. What do you mean by gravity? The curvature of spacetime is not zero in the ISS, because you're in the gravitational field of the Earth, not to mention the Sun, the galaxy, and so forth. So all that means is that you travel along a certain path, right, a geodesic, around that gravitational field. And the principle of equivalence says that as long as you're freely falling, it's essentially as if there is no gravitational pull. That's what's going on in the ISS. So it's no gravity in the sense that you're freely falling. It's not no gravity in the sense that spacetime is flat, but that's not an incompatibility, as long as you're in relatively small regions of spacetime. As far as the willingness to kill people for scientific breakthroughs, no, I think that's generally a bad idea.
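To put a number on the tidal forces mentioned in the ISS answer, here is a back-of-the-envelope estimate (my own sketch, not from the episode; the altitude is an assumed typical value, and the formula is the standard leading-order tidal acceleration):

```python
# Rough scale of the tidal forces inside the ISS.
# For two objects separated radially by r in a circular orbit of radius R,
# the relative tidal acceleration is approximately 2*G*M*r / R^3.
GM_EARTH = 3.986004418e14   # gravitational parameter of Earth, m^3/s^2
R_EARTH = 6.371e6           # mean Earth radius, m
altitude = 420e3            # typical ISS altitude, m (assumed value)
R = R_EARTH + altitude

separation = 1.0            # two objects floating 1 m apart
tidal = 2 * GM_EARTH * separation / R**3
g_local = GM_EARTH / R**2   # ambient gravitational acceleration at that radius

print(f"tidal acceleration over 1 m: {tidal:.2e} m/s^2")
print(f"ambient g at ISS altitude:  {g_local:.2f} m/s^2")
print(f"ratio: {tidal / g_local:.1e}")
```

The tidal acceleration across a meter comes out millions of times smaller than the ambient gravitational acceleration at that radius, which is why "microgravity" is the standard description.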
But you know, again, if I'm super duper careful about saying what is true here, you know, when you build the ISS, it's possible that people would be killed in an accident, in a construction accident or something like that. People have been killed flying to space, right? And that's always another possibility that you have to consider. So it's not that science is so special that we should kill people to get scientific discoveries. It's that the ordinary workings of human life involve some degree of risk, which I think is perfectly appropriate to accept, including in searching for new scientific breakthroughs. Ken Wolf says, a while ago, you had given a very extensive and delightful answer to my question about the value of comfort food in the most general sense of the term. I guess this is something of a follow-up question, but I was wondering if there's any particular comfort food you've become enticed by or addicted to lately. It could be actual food, books, television, games, podcasts, music, relaxation routines, or anything really. This is a good question. I'm all in favor of comfort food as a thing. You know, as you might guess from various other things I say in other contexts, I believe in variety. I believe that some of the time you're expending your mental energies, working or thinking about something really hard. Sometimes you're out in the world experiencing things. Other times you're sitting at home, vegging out with some comfort food, literally or metaphorically. Metaphorically, actually: in the past week or so, a couple weeks maybe, I think that the comfort food that I've temporarily been indulging in is just archive dives on my favorite web comics. Some of you may be familiar with this idea: you know, a web comic has been going for years. Of course, you might want to follow it every day if you really like it, but also you could just start from the beginning and take a couple days to go through the whole archive.
So I've been doing that, I mean, more than days, because I'm not going to spend 24 hours doing it, right? But like here and there, it's a good way to just use up a few minutes of time to relax without stretching your brain too much. Girl Genius is definitely my favorite web comic, but Questionable Content is another one that I've been reading recently. Very, very different in spirit, both of those. More literally in terms of comfort food, you know, I have my comfort foods, and that's always how it's going to be. We do have a good friend here whose one weakness, maybe not only weakness, but one of his weaknesses, is cheap Chinese takeout food. Nothing very elegant, you know, advanced Sichuan or Peking duck or anything like that. Just, you know, chow mein and fried rice and egg rolls. And I got to say, those egg rolls, you know, the classic, I don't even know if they have them elsewhere in the world, but in the northeast United States, where I grew up, the big thick egg rolls, deep fried, with duck sauce, make me think of the days at the Chinese restaurants when they would serve you a lot of food. And I would also like to say, you know, I'm a big fan of the bowl of fried wontons with hot mustard and duck sauce. I don't know where to get those, but these egg rolls have been hitting the spot, and it makes me think I should try to learn to make these egg rolls. How hard could it be? So I actually looked up on the internet how hard it could be, and it turns out it's very hard. You know, it's not like the skills are beyond my ken, but the amount of work you need to do to make an egg roll is quite large. And so it kind of makes sense if you're going to make 100 egg rolls and sell them at a restaurant. I'm not quite sure if it makes sense for me to go through all the effort to make myself a single egg roll, but I might be tempted. You know, you don't know. I'm just predicting that this is something that could happen in the future.
Julian Voidl says, a basic question: is there an electron in the orbital when it's not interacting with the environment, at a certain probability? Or is there just the wave function? This is not a basic question. It's a very deep, important, profound question. Different people will give different answers to it. In my way of thinking, the answer is: it's just the wave function. Julian was smart enough to put "just" in parentheses here, indicating that we could choose to use that word there or not. It's the wave function. That's what it is. I think the wave function is a direct representation of all of physical reality. There are other approaches. In Bohmian mechanics, you would have both the wave function and a single particle, the electron. In something like Jacob Barandes's point of view, you just have the electron. The wave function is a probabilistic description of where it might be, but the electron is what actually exists. So we don't know. This is part of the great embarrassment of modern quantum mechanics, that we don't know the answer to these questions. Michael Bright says, I very much enjoyed your conversation with Professor Barandes. What I found most interesting was that he seemed to be questioning what is happening here in quantum mechanics. I naively thought that the debate amongst physicists was much more about, we know what's happening, what does it mean or imply about reality, but he seemed to be questioning what exactly was happening. So my question is, is that a fair distinction? And if so, how much of academic quantum theory is about answering, what is happening here, versus the question, what does this imply? So this is very closely related to the previous question. The answer is already clear: physicists do not know what is happening there in the wave function in quantum mechanics. Again, when I say we don't know, it's not that we have no idea. We have different ideas, and we don't know which one is right.
We don't agree on which one is right. I think it's very hard for most people to wrap their brains around the idea, including professional physicists, that the wave function really is the only thing that is happening, for the simple reason that it is not what you see when you look at things. That's the puzzle about quantum mechanics: what you see is not how you describe the system when you're not looking at it. And some people make their peace with that. Other people want to attach some reality, like John Wheeler, as we just talked about, to the observational outcomes specifically. So we don't really know. But just so you know, most physicists don't spend much time worrying about this. Physicists don't spend a lot of time thinking about the deep foundational issues about the ontology of quantum mechanics. They shut up and calculate, for the most part. So they use the formalism of wave functions and things like that to make predictions about what they're going to see in their detector. And that's what they spend the overwhelming majority of their time doing, the experimental physicists especially. For theoretical physicists, it's a bit of a puzzle, because we manipulate wave functions. That's what we do if we're quantum mechanically inclined. We solve for them, and we invent different systems that are represented by wave functions and so forth. But then sometimes you can't help but butt heads with an interesting question about what it means to observe the wave function. What gets observed? Right now there's a bit of a dialogue going on in the literature about quantum gravity in de Sitter space, de Sitter space being the solution to Einstein's equation with nothing but a positive cosmological constant. And it's kind of ambiguous. You get some puzzling results there, and a lot of it comes down to: what do you mean by making an observation in this situation?
And I do think that things would be clarified if people had better ideas about the foundations of quantum theory. Vinay Kumar says, the Vera Rubin Observatory recently came online. If I understand it correctly, the observatory will take hundreds of images of the southern hemisphere sky every night for 10 years for a survey called the Legacy Survey of Space and Time, LSST. Are you aware of certain research groups that will be using this data for cosmological research? Which research are you most excited about? And what questions do you hope the data from this observatory can answer? Yeah, I wanted to answer this question in part because just to let people know the cleverness of the astronomers involved, like many astronomical big projects, when it is first proposed it is given a boring acronym. And then when it is getting closer to getting funded, it finally gets its final name. So the Vera Rubin Observatory was not called the Vera Rubin Observatory. When it was first thought of, it was called the Large Synoptic Survey Telescope, LSST. So there's a lot of papers out there written about like, what will LSST teach us? And then it was renamed the Vera Rubin Observatory when it came closer to being completed. And now the survey that it's doing, that the Vera Rubin Observatory is doing is called the Legacy Survey of Space and Time also acronymized as LSST. So I think that was a clever move on their part. And yeah, I think it's going to be, well, like many good astronomical observatories, what I'm most excited about is what we don't expect, what we don't anticipate, right? That's almost always the case when you have a really good new way of looking at the cosmos, you generally discover things you didn't expect to see there. And that's what I'm most excited about. The great thing about LSST is the time domain, as we say. It's just not easy to take pictures of the sky in high resolution, right? You need to collect a lot of photons. 
And therefore, traditionally, most people have concentrated on taking basically a photograph, or maybe a spectrum, either way, an image or a collection of data at one moment of time of some particular astronomical object. Now, that's obviously not completely true for things like motions of the planets and comets and things; we take images over time. For studying fluctuations in variable stars or supernovae or whatever, we also take images over time. But the idea of doing a survey over the sky systematically over time is something that is very, very difficult to do. And this is really the first in-depth effort at that. And so we're going to discover a bunch of things. We're certainly going to discover a bunch of asteroids, right? Things that, you know, would show up as little dots on the sky in a single photograph, but you see them as moving over time. That's a big thing. You'll discover a bunch of supernovae, a bunch of who knows what. I don't really know. It will be very, very useful for gravitational lensing surveys, for looking for MACHOs, massive compact halo objects, and also sort of unanticipated ways in which distant galaxies or nebulae within our galaxy slightly change with time. You know, nobody thinks that the cosmic microwave background changes with time in any noticeable way, but we haven't really tested it, right? The LSST is not looking at the microwave background, because it's an optical telescope, not a radio telescope, but it would be nice to have the resources to actually measure whether that's true or not. No one's going to spend money looking for time variations in the microwave background, because they're so unlikely to be there that it's probably a waste of money. But it's interesting to think about. Kristof Redomsky says, in an excellent article on David Hume in Aeon magazine, it was said that most scientists have little respect towards philosophy. From your perspective, is it true? Yeah, I think it's basically true.
I mean, most scientists have little respect for history or economics or literature or most other areas of human endeavor. They have a little bit of respect for math, okay? But still, they'll make fun of the mathematicians for being overly concerned with formalism and proving things with rigor, and not interested enough in getting the right answer. But you know, scientists are selected, by the process of making scientists, to have respect for science. There's no necessary correlation with having respect for other fields. Philosophy in particular, weirdly, is close enough to science to get less respect than average, because it's close enough to science without being science, right? Without using the methodology of science. Philosophers very obviously tend to care about different questions than physicists do, than scientists do, I should say. And therefore, there's a mutual sort of disdain for what these people care about. Of course, within that group, there's all sorts of variation. There's plenty of philosophers who have enormous respect for science, plenty of scientists who have enormous respect for philosophy. These are all just very vague generalizations, but there is some truth to them. C.P. says, in your August AMA response to a question about democracy, you said some people in their political or social theorizing imagine versions of an ideal society, and argue those societies would be ideal, without putting enough effort into understanding the stability of those societies or social structures under perturbations. I also think people do not think about stability enough. Engineers use control theory and stability theory to design for stability. For society, I see stability as something not only to be understood, as you said, but also to be engineered and designed for.
As an example, I see the progressive nature of the US tax code as a very useful tool that acts to make the income distribution more stable. Do you think there is something to the idea that we should not just understand stability, but in cases where we want it, we should design society for stability? Short answer: yes, absolutely, no question. This is related, of course, to the place we started at in this AMA, the stability of a democratic setup. And I think it reflects an interesting feature of thinking about physics, or science more generally, as a way of describing society or democracy or government or whatever, which is this kind of reflexivity or agency that we have in society that we don't generally have for physical systems. So for a box of gas that you're going to describe thermodynamically, you can have some variables for pressure and temperature and so forth and find out what it does. But your choice of what variables to use is more or less dictated by the physical system that you're studying. Okay, there are certain ways of making coarse-grained descriptions that work and certain ones that don't. The difference in something like a democracy, or a social setup more broadly, is that we both choose the system that we are trying to implement and then we live within it. And there's this feedback loop, right? So we talked a little bit before about gerrymandering and so forth. And the choice of how to represent the preferences of people in the society is a highly non-trivial one. And that choice sort of comes out two ways. Number one, in the voting system. So do you do just winner-takes-all, or do you have some ranked-choice voting or something like that? Secondly, in the representation system. So like we said, in the United States, we have geographical districts. There's no necessity to having geographical districts; you could have broader districts.
Say the whole state of Maryland could be one district, but we get more than one congressional representative, and instead of voting for one representative within each district, we vote for the top N candidates, however many representatives Maryland has. Maybe that will give you better representation of what people in the state overall want. And so, without answering the question of what is the best way to do democracy, when you're thinking about making it good, let's say, aiming for stability is one of the things you should absolutely try to have in mind. And I mean, despite, you know, Thomas Jefferson wondering whether it'd be fun to have a revolution every 20 years, there's a lot to be said for stability and reliability in government, both in the system itself and in things like agreements that the government makes with other countries and with its own citizens and so forth. And one of the many ways in which the current regime is a disaster is that it has completely destroyed any reliability that the United States has to be a good international partner, to respect the agreements that it has been party to, and so forth. And that can't be fixed just by this regime going away and being replaced with a new one, you know, because maybe another one comes back four years later; that's what just happened. I do think that designing for stability, and therefore thinking about what it means to be stable, what the features of a democratic system are that ensure its stability, is a super duper important thing to do. Kent Linkletter says, when people talk about the expansion of the universe, I often hear them talk about it as if the expansion is a momentum imparted by the Big Bang, and that without dark energy the momentum would respond to gravity and the expansion would slow.
Is this really the right way to view expansion, and how it might slow without dark energy? I don't see why objects pulling on each other would make space smaller, rather than just pulling objects closer to each other into clumps without changing the size of space. Well, this is an interesting question in the sense that it's one of the times when I have to be clear on the distinction between something that is a puzzle at a truly scientific level and something that is a puzzle because we're trying to squeeze the scientific theory into our common ordinary language to talk about it. And this is very much number two. Okay, the equations of general relativity, of cosmology, are completely, 100% unambiguous. We know what's going to happen, right? We know exactly what the metric of spacetime is doing. We can predict whatever experimental observations you want to make. There's no problem, there's no real ambiguity, but explaining it in words generates ambiguity. So this thing that happens: should you talk about it as momentum and things moving apart? Should you talk about it as things pulling on each other? Or should you be more strict in talking about the geometry of space and the behavior of the scale factor over time and the energy-momentum tensor and whatever it is? Part of the answer to that is: whatever makes you feel good, whatever gives you an intuition that you can use to understand what general relativity is predicting. That's the actual thing that matters. It's worrisome because, you know, your intuition can be led astray.
Well, what's the right thing to say? Intuition is something that enables you to understand, or get a feeling for, what the theory would predict if you sat down and solved the equations, without actually sitting down and solving the equations. That's when you know you have good intuition about the theory. It's not intuition in the sense that it's baked into your brain; you build it up over time as you think about it. So the question is, in different circumstances, how would you think about what is going to happen in the universe, different circumstances than the ones we know about? So for example, once you're taught that space is expanding, and I just came across this issue in my philosophy of cosmology class, once you're taught that space is expanding, that provides a very nice explanation for things like not only the Hubble law but even the specific stretching of wavelengths of photons, cooling down the universe as time goes on. Then you begin to resist the idea that space is not expanding within the galaxy, okay? And the explanations for why space is not expanding within the galaxy, which it's not, by the way, usually talk about the fact that, well, the particles that were making up the galaxy were moving apart from each other, but then under the mutual pull of their gravitational field they began to come back together, and now everything is equilibrated, nothing's expanding anymore. And that's a perfectly legitimate explanation, but it doesn't seem to quite comport with the original explanation of things being pulled apart because space was expanding. And what can I tell you? All of these are slightly imperfect translations of equations into words, and at some point you just got to understand and believe what the equations are telling you.
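For anyone who wants the equations being gestured at here (my addition, not quoted from the episode), the standard Friedmann equations for a homogeneous universe with scale factor \(a(t)\), energy density \(\rho\), and pressure \(p\) are:

```latex
\left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\,\rho - \frac{k}{a^2},
\qquad
\frac{\ddot a}{a} = -\frac{4\pi G}{3}\,(\rho + 3p).
```

For ordinary matter (\(p = 0\), \(\rho > 0\)), the second equation gives \(\ddot a < 0\): the expansion decelerates, which is what the "momentum fighting gravity" language is gesturing at. For a cosmological constant (\(p = -\rho\)), it gives \(\ddot a > 0\): accelerated expansion. Nothing in the equations talks about space "shrinking" or forces "pulling on space"; it's all just the behavior of the scale factor.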
Anonymous says, I understand that magnetic fields are frame-shifted electric fields, so why does anybody think that there should or even could be magnetic monopoles? How would that even work when you try to frame-shift the magnetic monopole back into an electric charge? Well, you have to remember what is true in relativity when you have an electric field or a magnetic field or both and you change your reference frame. When you move from a certain reference frame, in which things are moving in a certain way, to what is called a boosted reference frame, where you have some net velocity with respect to the original one, then the electric field and the magnetic field transform into each other. But you want to be careful about exactly interpreting those words; it's not that electric fields suddenly become magnetic and vice versa. It's a little bit of an admixture of electric field into magnetic field. In fact, since you can't go faster than the speed of light, you cannot take a magnetic field and boost into a frame where it's 100% electric field, or vice versa. If you have a pure electric field or a pure magnetic field, you will always get a little bit of the other one, but not 100% of it. So it's not as if you could turn an electric charge into a magnetic charge just by changing your reference frame; you don't have enough freedom to boost yourself to do that. And besides which, you know, the question of magnetic monopoles is just a question you're welcome to think about. People thought about it a long time ago, just from looking at Maxwell's equations of electromagnetism and noticing an asymmetry between electric charges and magnetic charges. That's the kind of thing that provokes physicists to think about things without giving them any really strong reason to expect that the thing that they're thinking about will be there.
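The claim that no boost turns a pure magnetic field into a pure electric field can be checked numerically. This is my own sketch, not from the episode, using the standard field transformation in units with c = 1; the key point is that E·B and E² − B² are Lorentz invariants, so a pure B field (E² − B² < 0) can never be boosted into a pure E field (E² − B² > 0).

```python
import numpy as np

# Boost of E and B fields along the x axis, in units with c = 1
# (standard special-relativity transformation of the field components).
def boost_fields(E, B, v):
    g = 1.0 / np.sqrt(1.0 - v * v)
    Ex, Ey, Ez = E
    Bx, By, Bz = B
    Ep = np.array([Ex, g * (Ey - v * Bz), g * (Ez + v * By)])
    Bp = np.array([Bx, g * (By + v * Ez), g * (Bz - v * Ey)])
    return Ep, Bp

def invariants(E, B):
    # E.B and E^2 - B^2 take the same values in every inertial frame.
    return np.dot(E, B), np.dot(E, E) - np.dot(B, B)

E0 = np.array([0.0, 0.0, 0.0])   # start with a pure magnetic field
B0 = np.array([0.0, 0.0, 1.0])

for v in [0.5, 0.9, 0.999]:
    E1, B1 = boost_fields(E0, B0, v)
    print(f"v = {v}: E' = {np.round(E1, 3)}, invariants = "
          f"{np.round(invariants(E1, B1), 6)}")
# E^2 - B^2 stays pinned at -1: an electric field appears, but it can
# never fully replace the magnetic field, no matter how hard you boost.
```

Each boost mixes some electric field into the frame, but the invariant E² − B² stays negative, which is the quantitative version of "you don't have enough freedom to boost yourself to do that."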
But then later, in the 1970s, it was realized that if you have grand unified theories, theories that tend to unify the strong nuclear force with the weak nuclear force and the electromagnetic force, such theories generically predict the existence of magnetic monopoles. So it's not just that we can imagine them; they're predicted by very reasonable theories. We haven't found any, so somehow you have to get rid of them. In fact, the first real reason why people thought about inflationary cosmology was to get rid of the monopoles; the prediction from early universe cosmology was that there'd be way too many magnetic monopoles rather than too few, and inflation helps dilute them away. We don't know whether magnetic monopoles actually exist or whether inflation actually happened, but that is a consistent story that we can tell. So right now it's an empirical question; we have to keep looking and see what happens. Sean Acklog says, I understand that decoherence occurs when a quantum system becomes entangled with an environment that has a large number of degrees of freedom, resulting in negligible off-diagonal interference terms and separate, effectively independent branches of the wave function. While there is no strict threshold, decoherence is typically described at a macroscopic scale, involving cats or friends of famous physicists. Therefore, I was intrigued to learn that a single particle in a spatial or electric superposition can decohere at the macroscopic scale. Experiments apparently show that, if carefully prepared, even a single scattered photon entangled with an electron's spatial superposition can suppress interference enough to produce decoherence. Do the superposed spatial states truly evolve independently in perpetuity? How would many-worlds describe this phenomenon?
Yeah, you put your finger on something that is sort of a, I don't know, a shortcut that people often give; I know that I give it when talking about decoherence and branching and so forth. In many cases of interest, like Schrodinger's cat, et cetera, the thing that you're talking about is a big macroscopic thing that instantly interacts with its environment, and by interacting with the environment we mean really interacting with many, many, many different particles in the environment. Okay, so two things happen, and we don't tend to distinguish between these things, so you're correctly putting a finger on it. One is that the different parts of the initial system, the cat being awake and the cat being asleep, become entangled with different states of the environment, such that the states in the environment are perpendicular to each other: the state entangled with the awake cat is perpendicular to the state entangled with the asleep cat. And therefore you have decoherence. That's what decoherence means: the system you're talking about becomes entangled with another system in such a way that the other system's entangled states are orthogonal to each other, and then you get no more interference in the original system. Okay, you destroyed quantum coherence; you have decohered. But the other crucial thing in the cat case is that there are so many particles in the environment that you become decoherent with that you can't practically imagine undecohering. You're never going to undo that process. You are stuck in that situation where you basically have two environment states that are orthogonal to each other that you have become entangled with, and therefore not only do you have two states that are not going to interfere anymore, but they're going to go their own way forever. That's an extra statement that you can make. So what if you have a single particle, like a single spin?
Let's say you imagine doing the double-slit experiment. You're sending an electron through two slits, and you do the version where you observe which slit it goes through, okay? And then that destroys the interference pattern on the other side, because you have decohered. Ordinarily, what we mean when we say you have observed the particle going through one slit or the other is that you have some macroscopic measuring device, or a human being with a brain, and there's lots of moving parts to it, and that becomes entangled with the electron going through the left slit or the right slit. But you could imagine a version where you just entangle that little one bit of information, did the electron go through the left slit or the right slit, with a single spin. So the new extra spin that is sort of, quote unquote, observing the electron gets entangled so that it's spin up if the electron goes through the left slit, spin down if the electron goes through the right slit. That is 100% enough to destroy the interference pattern. That is decoherence. The thing is, you could undo it, and this is the sort of gimmick behind the whole delayed-choice double-slit experiment, the quantum eraser experiment. So you can decohere with just one degree of freedom; you don't need many, many degrees of freedom. But if you want that decoherence to be irreversible, then in a practical case it just happens, in fact very robustly, that you're going to become entangled with many, many degrees of freedom. Chris Kultfosser says, the second law of thermodynamics is a universal, objective law, but it's based on the concept of entropy, which relies on our human-defined macrostates. I understand that my intuition tells me I will never see billiard balls spontaneously reassemble, but my question is this: how does this simple intuition amount to a universal law that determines this thing? How can a law of the universe be based on what seems to be a subjective, artificial notion of order?
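Before the entropy question: the single-spin, which-path decoherence just described fits in a few lines of linear algebra (my own toy sketch, not from the episode). The interference-producing off-diagonal element of the electron's reduced density matrix vanishes as soon as one spin records the path.

```python
import numpy as np

# One "which-path" qubit is enough to decohere a double-slit electron.
# Path basis |L>, |R>; recording-spin basis |up>, |down>.
L = np.array([1.0, 0.0]); R = np.array([0.0, 1.0])
up = np.array([1.0, 0.0]); down = np.array([0.0, 1.0])

def reduced_path_density(state):
    # state is a 4-vector on path (x) spin; trace out the spin qubit.
    psi = state.reshape(2, 2)
    return psi @ psi.conj().T

# No measurement: the spin stays |up>, the two paths remain coherent.
coherent = np.kron((L + R) / np.sqrt(2), up)
# Which-path recording: the spin flips if the electron takes the right slit.
recorded = (np.kron(L, up) + np.kron(R, down)) / np.sqrt(2)

for name, state in [("no record", coherent), ("spin record", recorded)]:
    rho = reduced_path_density(state)
    print(f"{name}: off-diagonal |rho_LR| = {abs(rho[0, 1]):.3f}")
# The off-diagonal term (which produces the interference fringes) drops
# from 0.5 to 0.0 once a single spin carries the which-path information.
```

The entanglement with one qubit already kills the fringes; what the many environmental degrees of freedom add, as the answer above says, is only the practical irreversibility.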
Well, it is subjective and artificial, the coarse-graining of microstates into macrostates, but it's not arbitrary. There are good ways and bad ways of coarse-graining into macrostates. I'm not going to go into great detail here, because I've talked about this elsewhere, but the simple answer is: there is real structure in the way that you coarse-grain into macrostates. You could just wildly pick completely bizarre-sounding macrostates, like Dan Dennett in his paper on real patterns. I forget exactly the example he used, but it's something like: you could make a composite macroscopic object out of the color blue and my left sock. But who cares? Why would you do that? There's no coherence there, no sensibility. That doesn't give you any handle on the universe; that doesn't let you predict anything. So the actual macrostates we choose, we choose for reasons. It is true that we coarse-grain things in certain ways rather than other ways, but doing so helps us understand the actual, real physical dynamics of the world.

Nate Nomeus says: when sci-fi shows like Fringe, which I think you consulted on, dive into parallel universes, are they echoing real theories like the many-worlds interpretation, or brane worlds and M-theory, or is it really just a sci-fi invention? I love the trope and love how it was done in Fringe, and always have this burning question when I see it. You know, usually shows like that are not heavily driven by scientific accuracy. They might be inspired by scientific ideas, but something like Fringe doesn't have a full-time science consultant. I helped out a friend of mine who was a writer on the show, and I got a little mini shout-out in the show, but I didn't have day-to-day input on most of the world-building that was involved. They basically read The Elegant Universe, or hear something on a podcast or something like that, and run with it.
In fact, I once had the very fun experience of being on the jury panel for the Sloan Prize at the Sundance Film Festival. The Sloan Foundation, of course, supports a lot of science, and one thing they do kind of just for fun, but I think it's a worthwhile thing, is sponsor a prize at the Sundance Film Festival for the film that has the best scientific aspect to it, one way or the other. It's very loosely interpreted, so it doesn't have to be a documentary or anything like that. It could just be some slightly science-fictiony movie that engages with science in a good way, very broadly construed. There were some people I knew on the panel, it was a lot of fun, and we gave the prize to this movie called Another Earth, which some of you may have seen, about the idea that there was another copy of Earth on the opposite side of the Sun that we had never noticed before, because we couldn't see it. But then a catastrophe brings the two Earths into contact, and wacky hijinks ensue. It starred Brit Marling, who was also in The OA and a bunch of other things. Anyway, we gave it the prize mostly because, of the movies entered in the competition that year, none of them were great. I thought Another Earth was actually a good movie, and the science was fine; it clearly wasn't scientific. There's no real theory that there's another copy of the Earth on the other side of the Sun. And in addition to the other Earth being there, the idea was that it was an exact copy of the Earth, down to the same people, so you had a twin doppelgänger living on the other Earth and all that stuff. So there's no real science there, but it's kind of inspired, you might imagine, by ideas of the many worlds of quantum mechanics or something like that.

And we asked the filmmakers afterward, and they said that indeed they were inspired by hearing a radio interview with Brian Greene, former Mindscape guest. And that's fine; I'm all in favor of that. And that's typically what happens. There are occasionally films or TV shows that try to do it better. The Good Place had a lot of input, not scientific but philosophical, if you're into that, and movies like Europa Report or The Martian do try to do things relatively carefully, not to mention something like Interstellar, of course.

Patricia Paulson says: I was wondering if all the very short-lived unstable particles that pretty much only exist during collisions are really necessary. Would the universe be any different if a particular particle wasn't produced once in a great while? I hope that makes sense; I'm just a layperson fascinated by fundamental physics since I had to write a paper in college. Actually, interestingly, we're not sure. If you mean necessary in the sense of, could you imagine a world without the various unstable, short-lived particles in the Standard Model of particle physics, then for the most part, yes, you can imagine such a world. Let's particularly focus on the three generations of matter particles, of fermions. We have the lightest generation, with the electron and its neutrino, the up quark and the down quark. Then that pattern is repeated twice: the muon and its neutrino, the charm quark and the strange quark; and then the tau and its neutrino, the top quark and the bottom quark. So three generations, or three families, and they just seem to be heavier copies of the lightest family. This is, of course, the situation that inspired the wonderful quote from I. I. Rabi, who said "Who ordered that?" when the muon, which is just a heavier copy of the electron, was first discovered. You could absolutely imagine a world in which those particles didn't exist.
The question is, would it be importantly different from our world, since after all these particles are short-lived and they're not part of the matter in the universe in any important way, because when they're produced they disappear pretty quickly. Now, they do contribute to interactions through quantum corrections and things like that, so they have a measurable effect. And indeed, that's one of the ways of looking for particles we haven't discovered yet: to look indirectly for their effects, through quantum fluctuations, on the particles that we have discovered. We haven't found anything that way yet, but that's what you're doing when, for example, you measure the magnetic moment of the muon or something like that; you're looking for influences of new particles. So the details would certainly be different if those particles were not there. Now, it's possible that these particles play a very important role, because you need three generations to allow for what is called CP violation, the violation of the discrete symmetry charge-parity, and that might play an important role in creating the asymmetry of matter and antimatter in the universe. I say "might" because it's certainly true that we need CP violation in order to get the matter-antimatter asymmetry; it is not clear whether the CP violation that we have in the Standard Model of particle physics is enough to do it. So it's completely possible that you could still get a perfectly good matter-antimatter asymmetry without three generations, by some other mechanism. In fact, that's absolutely on the table.

Sandro Stuckey says: in your July complexity solo, you talk about the coffee-and-cream example to illustrate how entropy steadily increases while complexity comes and goes. You say that you think this is quasi-robust behavior you expect to see in many closed systems.
But then toward the end of the episode, you reveal that in your coffee-automaton paper you don't see this behavior in the simplest setup, with nearest-neighbor interactions, where coffee and cream just slowly diffuse into each other. Indeed, you apparently need long-range coherence or forces, such as a spoon stirring the coffee and cream, to observe the effect. You said this is provocative, and I agree. Don't we see turbulence and convection appear in fluids even though their dynamics are governed by nearest-neighbor interactions? Is the coffee automaton maybe too simplistic to capture this kind of emerging complexity?

Well, there's no doubt that the coffee automaton is too simplistic to capture all sorts of things. That's why we ran it with different versions of the dynamics. In particular, the kinds of nearest-neighbor interactions that we looked at were kind of Markovian, if you want to remember the concept that we introduced talking to Jacob Barandes: the idea that you forget the previous state of the system and just look at the current state of the system. The nearest-neighbor interactions in that model had one particle either moving randomly or interacting with literally its nearest neighbor. Whereas in turbulence you get coherent interactions just because you have momentum: one particle can drag along another particle, so that simply knowing the positions of individual particles is not enough from moment to moment. So when I say long-range interactions, I don't necessarily mean a force that by itself stretches over long distances. A sound wave can be a long-range interaction, even though it's just particles bumping into each other. We don't know exactly the right way of specifying what it is that allows complexity to evolve in these circumstances. That's why we're doing these very, very simple experiments: not because they're supposed to be realistic, but because if you gather a list of places where it does happen and where it doesn't happen, then you can hope to home in on the necessary conditions. But we're not there yet; I don't claim to have that answer.

Peter Lloyd says: is there any theoretical reason to believe that, apart from the probability distribution that the Born rule gives us, the outcome of a quantum measurement is purely random? I know that empirically it is reliably random, but does it have to be? Is there anything in quantum mechanics that actually precludes the possibility that a non-physical conscious mind could reach into our world and mess with the measurements, provided of course it maintains the Born rule in the long run? I know you don't believe in non-physical consciousness, but do the equations of physics actually forbid it?

Well, the equations of physics forbid it in the sense that the equations of physics say what happens, and they don't include that. You could very easily imagine, just as you said, that a non-physical conscious mind does reach in and pick out one way for things to go rather than another. That's just not part of the equations of physics. It's a different kind of theory, which you're welcome to explore. It's a very strange kind of theory, because you're saying you want to overall maintain the ordinary Born-rule distribution of weights of measurement outcomes, which means that if the non-physical consciousness says you're going to get spin-up the next three times in a row, it has to sort of compensate for that later on by letting you get more spin-downs, or something like that. But as I very, very often say, if you want to ask what is possible given everything you know about the universe, many, many things are possible.
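The nearest-neighbor coffee-automaton setup described a couple of answers back can be sketched as a small toy simulation. This is a hypothetical illustration, not the code from the actual coffee-automaton paper: the grid size, the coarse-graining block size, and the use of compressed size as a proxy for apparent complexity are all assumptions made for the sake of the sketch. The update rule is Markovian in exactly the sense discussed, random nearest-neighbor swaps that depend only on the current configuration, with no momentum.

```python
import random
import zlib

def make_grid(n):
    # Top half "cream" (1), bottom half "coffee" (0).
    return [[1 if r < n // 2 else 0 for _ in range(n)] for r in range(n)]

def step(grid, n, rng):
    # One sweep of random nearest-neighbor swaps on a torus.
    # Markovian: the update depends only on the current state.
    for _ in range(n * n):
        r, c = rng.randrange(n), rng.randrange(n)
        dr, dc = rng.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
        r2, c2 = (r + dr) % n, (c + dc) % n
        grid[r][c], grid[r2][c2] = grid[r2][c2], grid[r][c]

def coarse_grain(grid, n, b):
    # Average over b x b blocks and threshold, giving a small "macro" image.
    macro = []
    for big_r in range(0, n, b):
        row = []
        for big_c in range(0, n, b):
            s = sum(grid[big_r + i][big_c + j]
                    for i in range(b) for j in range(b))
            row.append(1 if 2 * s >= b * b else 0)
        macro.append(row)
    return macro

def apparent_complexity(macro):
    # Proxy: compressed size of the coarse-grained picture.
    data = bytes(cell for row in macro for cell in row)
    return len(zlib.compress(data))

rng = random.Random(0)
n, b = 32, 4
grid = make_grid(n)
sizes = []
for t in range(200):
    step(grid, n, rng)
    sizes.append(apparent_complexity(coarse_grain(grid, n, b)))
```

With pure diffusion like this, the expectation from the discussion above is that the coarse-grained picture goes fairly directly from striped to uniform gray; adding a "stirring" rule or momentum-carrying dynamics is what would be needed to see complexity rise and then fall.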
Connor Schaffrin says: a friend of mine published her first scientific paper a few years ago and has gotten a fair number of citations; within its niche, it's considered the second most cited paper. At first glance this seems pretty cool, but after looking into many of these citations, she discovered that a disturbing number of them either clearly misrepresent the paper, over-exaggerate its findings, or were just obviously written by an AI. This took her down a rabbit hole exposing the world of fake science publications and AI science slop, and how it manages to leak into mainstream media to be presented as real science. This has led her to lose faith in the scientific publishing system. I know you've done some episodes on the topic before, but what advice would you give to my friend, who's thinking of walking away from her career in science?

Well, I certainly don't think that walking away from her career in science is the right thing. The thing that is being criticized here is not science, or even the scientific publication system; it's the idea that you can rank papers by how many citations they get. That was never a great idea. It was a decent proxy, in the sense that it's not unrelated: if you're getting a lot of citations from real scientific papers, that means you made an impact on the field. It doesn't necessarily mean your paper is genius or anything like that, and indeed getting no citations doesn't mean your paper is not genius. It's not a very strong correlation, but it's something, something more than just counting how many papers you've written. At the end of the day, science is going to be fine, because we care not just about how many papers people write or how many citations they get, but about the actual content of the papers. If you want to hire somebody, you should read the actual papers they write. You should not just look at how many papers they have, what journals they're in, and how many citations they have. You should figure out whether the papers are good. And if the paper is good, it doesn't matter how many AI publications cite it or not.

That's not to downplay the very real threat of the scientific literature just being flooded with junk because of AI papers. That's a problem. I've noticed some AI papers on the arXiv already myself. The arXiv is not set up to read every paper very carefully and decide whether it's legitimate or not. So this is a real threat to real scientific communication among scientists, because if there are just too many papers to read, all of which are trash, then it's going to be harder to find the good ones. It'll actually, ironically, drive us toward a more elitist kind of setup, because you're only going to read papers by people whose names you know, the ones you trust, and therefore the likelihood of someone whose name you don't know writing a really good paper and getting it noticed will go down, because you just don't believe papers that are not from reputable sources anymore. I don't know what to do about this, but it doesn't get in the way of the integrity of science itself. It's just one more little obstacle that we have to look out for while we're moving forward.

Rue Phillips says: in your recent podcast with Jacob Barandes, he said that he was once an Everettian, but as he learned more and more about its many underlying assumptions, he grew cool to the idea. He only mentioned a few assumptions, but made it sound like there are many more than I would have thought, based on the way you typically portray many worlds. Can you talk about what the many-worlds assumptions are, and how you would have addressed the main ones with Jacob if your conversation had gone in that direction? I guess I kind of already addressed this in a different question earlier in the AMA. The many-worlds assumptions are very minimal; there are not that many of them.
The world is a vector in Hilbert space, or represented by one, and it evolves according to the Schrödinger equation. But then you have to interpret the theory, and interpreting a theory like many worlds is a much harder task than interpreting, say, classical mechanics. That's not at all surprising, because classical mechanics was developed on the basis of experiences of the world that are very close to our everyday intuitions, whereas technology, things like discovering radioactivity, measuring the spectrum of black-body radiation, and discovering the nuclear model of the atom, pushed us to invent quantum mechanics. Therefore it shouldn't be surprising that quantum mechanics is further from our intuition about the world than Newtonian mechanics is. So we have to do the work; that's what I've been trying to encourage people to do for a long time now. What Jacob was specifically referring to was a very particular way of proving the Born rule, that probabilities are given by the wave function squared, in many worlds. It's an approach that was pioneered by David Deutsch and David Wallace, two former Mindscape guests, even though neither one of them talked about this particular bit of work when they were on; we talked about other things with them. It's a very technical kind of proof, based on ideas from decision theory and so on. It's exactly one of those things where Deutsch's original paper is very simple and very easy to understand. I wrote a blog post about it once; it's very possible to figure out what's going on, and it's kind of a clever idea. And then you sit down, and you're a careful philosopher and scientist, and you realize, okay, we should be more explicit here in this step, and be very, very careful about spelling out our assumptions, dot dot dot. And you end up elaborating on the details, and it grows into a longer thing. As I said before, if it's a longer thing, it's a longer thing, as long as it's right; I don't care how long it is.
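The Born rule under discussion, probabilities given by the wave function squared, can be illustrated numerically. This is a minimal sketch of the rule itself (the part every formulation agrees on), not of the Deutsch-Wallace decision-theoretic derivation; the particular amplitudes and sample size are arbitrary choices for the illustration.

```python
import math
import random

# A toy qubit state |psi> = a|0> + b|1>, with complex amplitudes.
# The Born rule: P(outcome i) = |amplitude_i|^2.
a = complex(1 / math.sqrt(3), 0)
b = complex(1, 1) / math.sqrt(3)      # |a|^2 = 1/3, |b|^2 = 2/3

probs = [abs(a) ** 2, abs(b) ** 2]
assert math.isclose(sum(probs), 1.0)  # the state is normalized

# Simulate repeated measurements; empirical frequencies should
# approach the Born-rule probabilities.
rng = random.Random(42)
n = 100_000
counts = [0, 0]
for _ in range(n):
    outcome = 0 if rng.random() < probs[0] else 1
    counts[outcome] += 1

freqs = [c / n for c in counts]
```

The point of contention between the various derivations is not this calculation, but why squared amplitudes, rather than some other function of them, deserve to be treated as probabilities in the first place.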
But also, more importantly, there are other ways to prove the Born rule and get these probabilities out that I find much more direct and intuitive, and so I just depend on them. It's not that hard. There are a lot of personality things going on here. I think that what's going to matter is what works. If it continues to be the case that different formulations of the foundations of quantum mechanics lead to the same predictions, then what people will care about most is which ones push you in a better direction to make more improvements on physics going forward. I think that's clearly going to come from many worlds, but I'm glad that other people are working on other things.

Colin Johnson says: have you considered that there may be a point where the outcome for humanity is preferable under Chinese leadership than under American? We should obviously advocate for the virtues laid out in the American Constitution, but it seems to me we may be diverging so greatly from them under current leadership that regimes such as China's may actually be less fascist, corrupted, and anti-science, even if not as democratic as we would hope. So, roughly speaking, no, but I also sort of deny the premise of the question. I don't really think that going forward we're going to have one country that is, quote-unquote, the leader of the world. I'm not even so sure that the United States was ever the leader of the world. It was the most powerful country for a long time, and it was an example in some places, a counterexample in other places, a bad example in other places. One of my Hopkins colleagues was just on Bluesky pointing out that in Latin America, no one ever thought that the United States was a beacon of democracy, because it kept overthrowing democracies in Latin America. And finally the rest of the world has caught on to this. So I think that it's just more complicated than the simple-minded idea that one country is the leader.
Having said that, I think the Chinese system is abhorrently bad; let me be very, very clear about that. It's autocratic. It's full of human-rights abuses, in ways that are even worse than in the United States; the United States has those problems too. Now, it's absolutely possible that the United States continues down a very bad path and becomes worse than China. That's completely possible. But in that case, what I would say is that neither one of them is any good. I'm not going to start saying that one is good because the other one got even worse; that doesn't seem like quite the right judgment to pass.

Cole Kale asks a priority question. Remember priority questions? This is the only one we got this month. Priority questions are your one chance to ask a question that I will absolutely try my best to answer. You get one priority question per lifetime; if you are killed and resurrected, then I will give you another priority question. So Cole says: in your most recent AMA, you mentioned that no one really knows what determines the rest frame of the cosmic microwave background, and that it's a foundational question cosmologists often overlook. That resonated, because I've recently completed a paper that directly addresses this. The idea is that the CMB rest frame emerges from the causal structure of a universe-scale computation driven by informational constraints, the mechanism being that what we observe as progress through time is a step-by-step resolution of limited causal possibilities, enforced by a universe-scale Bohm-esque pilot wave. From that, the CMB represents the first fully rendered frame of the universe's unfolding, which is why it defines a preferred frame. So my question is: if causal computation would impose a hard limit, in that no system can compute the universe's future faster than the universe itself unfolds, would you consider this a new kind of physical limit of the universe, one that not only blocks Laplacian determinism but places a lawful ceiling on prediction generally, potentially empirically undermining Descartes' demon?

So, I know this is a priority question, but I'm not going to give you a very helpful answer at all. First, because I don't really understand the proposal. Maybe there is a good proposal here; I'm not saying that I disagree with it. I'm saying that there are just not enough details for me to say what it is. When you say "the causal structure of a universe-scale computation driven by informational constraints," that could mean lots of things to me. That is not nearly enough information for me to say, oh yes, therefore there is a rest frame for the universe. In some sense, those words could be attached to absolutely conventional physics: you just think of the laws of physics as being a computation. There is causal structure there; there are informational constraints; there you go. You haven't changed the laws of physics at all, you've just attached new words to them. So that's a completely plausible model, but it's just a new interpretation of what's going on, and as far as I know it doesn't help explain why there would be a rest frame for the cosmic microwave background. On the other hand, maybe what you mean is some computation that lives in some kind of structure that picks out a rest frame. Most computational theories don't worry about Lorentz invariance or anything like that, so you can cook in a preferred rest frame by having fundamental laws of physics that have a rest frame in them. That's something you're absolutely welcome to think about, but then of course the challenge is that you had better show that your theory doesn't already get ruled out by the data, because there's a lot of good data that doesn't find a preferred rest frame even though it looks for one. And then, in terms of the actual question, once again I just don't know how to interpret the words.
If causal computation would impose a hard limit, would I consider this to be a new kind of physical limit on the universe? I think that it is not a new kind of physical limit; it already exists. As we said before, in talking about Jenann Ismael's work on the impossibility of embodying Laplace's demon in the universe, you need a universe-sized thing to compute the universe. That's really just counting degrees of freedom. It does not block Laplacian determinism; it blocks the existence of Laplace's demon in the universe, which was never a plausible construction anyway. So I guess I just don't have enough details here to really give you a useful answer. Sorry about that.

Jameson says: you've said in the past that at the end of the day there are just brute facts, primary explanations for phenomena that themselves have no explanation. I am inclined to agree with you on this; however, it is still very hard to wrap my head around it. As an alternative, say that at the bottom of everything there is instead an infinite regress of explanations. So rather than a brute fact underlying everything, you instead have explanation after explanation, forever. Why is the brute-fact option more palatable than the infinite-regress option?

Well, for one thing, I think there are a lot of things going on here. These are the kinds of questions, explanations versus brute facts, that are easy to ask in everyday common language but become different when you apply careful, rigorous philosophical analysis to them. What counts as an explanation for certain things? That's a question begged by the question you're asking. I'm not sure what it would mean, honestly, to have an infinite regress of explanations. The reason why I think that brute facts are inevitably going to be part of the final theory of the universe has nothing to do with finite versus infinite chains of explanation. It's just that I can think of different possible worlds.
I can think of a possible world running on the rules of classical mechanics. I can think of a possible world running on the rules of quantum mechanics. I can think of different possible worlds running on exactly the same rules, just with different constants of nature. And if you can think of these possible worlds, and they're perfectly conceivable and well behaved by themselves, then there is a fact, a brute fact, about which possible world we actually live in. I don't see any way of getting around that. Even if in one world you could come up with some infinite chain of explanations for why things were the way they were, you would still be stuck asking yourself: why is that the world we live in? There's no necessity about it, as long as there's any other possibility.

Rob Adkerson says: if you had to pick, what is your all-time favorite space probe or satellite? I can't decide between Voyager 2, New Horizons, or Cassini, but don't get me started on Spirit or Curiosity. That's a good question; I don't really have a favorite, and I've never really sat down and thought about it. Probably, if I were forced to pick, it would be the original Viking landers on Mars back in the 1970s. It's kind of just mind-boggling in retrospect what they pulled off, right? They built a science lab and sent it to another planet, and it worked. Of course, since then, with Spirit and Curiosity and Pathfinder and many others, we've done it better, but the first time you do it is absolutely special. Not just sending a satellite, but having it land and do science experiments on a whole other planet. It's hard enough to set up a science lab in your basement, much less on another planet. So that's very impressive to me. But I will give props to Cassini, because Cassini was a probe that visited Saturn and was mostly about taking pictures of Saturn and its moons. But it was also used as a super-high-precision test of general relativity. By timing radio waves going back and forth to Cassini at different points in its journey through the solar system, you could measure the gravitational time delay, which turns out to be an enormously precise test of Einstein's equations of general relativity. So that kind of bonus spin-off puts Cassini way up there in my estimation.

And finally, this is the last question of this month's AMA, from Joshua Hiller-Up: can you briefly explain how, under poetic naturalism, different formulations of quantum mechanics that give the same correct predictions for any possible experiment aren't all equally real? That's a very good question, but I think that there are two things going on. One is: is it really true that truly different formulations of quantum mechanics will give exactly the same predictions in every possible circumstance? I think that's less likely than people take for granted. I think that we need to think harder, and this is easy for me to say because I haven't done any of this hard thinking, but I do think that we need more work on whether the predictions of these theories really are the same. It would seem strange to me that you could have a truly new and different ontology and dynamics and get exactly the same experimental predictions, even though you don't have some theorem that says the two ontologies are actually mathematically equivalent to each other. They seem like different theories, but they're not making different experimental predictions? I'm a little skeptical about that. But the other thing is: okay, what if you did? What if Bohmian mechanics and many worlds and Jacob Barandes's idea are kind of like Hamiltonian mechanics and Lagrangian mechanics, two or three different formulations that will always give you exactly the same answer? In that case, they are all true.
In that case, you're welcome to use those different languages to talk about the universe. I don't think that's the likely thing we're going to be pushed into, but it's okay. You're then perfectly welcome to say, I like this formalism because I don't have to think about multiple worlds, or, I like this formalism because it has the simplest set of axioms to write down. If they were truly identical in every conceivable case, then I wouldn't care which one you liked. I just don't think that's going to end up being the case; we'll see about that. But as I've said before, I think I know the right version of quantum mechanics, so I'm not going to spend my time throwing stones at the other ones. I'm going to spend my time figuring out the existing puzzles in the one that I think is probably right. We'll see whether I turn out to be wise about that or foolish. And with that thought, thanks very much for supporting Mindscape. Thanks for listening to this month's AMA. Very big props to all of the Patreon supporters who make this possible. I'll talk to you next time.