Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas

323 | Jacob Barandes on Indivisible Stochastic Quantum Mechanics

178 min
Jul 28, 2025
Summary

Jacob Barandes proposes indivisible stochastic quantum mechanics, a radical alternative to wave function-based interpretations that replaces the Schrödinger equation with non-Markovian stochastic laws acting on classical particle configurations. The theory maintains empirical equivalence with standard quantum mechanics while offering a fundamentally different ontology grounded in classical probability and definite particle positions, though with non-continuous trajectories.

Insights
  • Non-Markovian dynamics (where future behavior depends on past history, not just current state) can generate quantum mechanical predictions without invoking wave functions or many-worlds branching
  • Division events—moments when systems become entangled with environments—naturally correspond to decoherence in standard quantum theory, providing a classical probability interpretation of measurement
  • The theory avoids the 'stone soup problem' of many-worlds interpretations, which require accumulating speculative metaphysical hypotheses to derive the Born rule
  • Indivisible stochastic processes may extend beyond quantum mechanics to model complex non-Markovian systems in biology, neuroscience, and climate science
  • The framework suggests quantum gravity may require first developing a fully probabilistic generalization of general relativity, not just quantizing it
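The indivisibility idea in the first insight can be made concrete with a toy numerical sketch. This is not Barandes' full formalism; the unistochastic construction (transition probabilities as squared magnitudes of a unitary's entries) reflects the episode's topic, but the specific Pauli-X generator and time value are illustrative assumptions:

```python
import numpy as np

# Toy unistochastic process: transition probabilities are squared magnitudes
# of a unitary's entries. The Pauli-X generator is an illustrative choice.

def U(t):
    # exp(-i X t) for the Pauli-X generator, written in closed form
    return np.array([[np.cos(t), -1j * np.sin(t)],
                     [-1j * np.sin(t), np.cos(t)]])

def gamma(t):
    # Transition matrix from time 0 to t: Gamma_ij = |U_ij|^2
    return np.abs(U(t)) ** 2

t = 0.4
G1, G2 = gamma(t), gamma(2 * t)

# Each column sums to 1, so Gamma(t) is a genuine stochastic matrix...
print(np.allclose(G1.sum(axis=0), 1.0))   # True
# ...but the process is indivisible: Gamma(2t) != Gamma(t) @ Gamma(t),
# so the law cannot be chained through the intermediate time the way a
# Markov process would allow.
print(np.allclose(G2, G1 @ G1))           # False
```

The failure of the composition rule is exactly what "the future depends on more than the current state" means here: the transition law is anchored to the initial time, not to each intermediate moment.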
Trends
  • Renewed interest in stochastic approaches to quantum foundations after decades of neglect, enabled by modern non-Markovian process theory unavailable in the 1920s
  • Shift toward ontologically modest interpretations that avoid claiming knowledge of fundamental reality while maintaining empirical adequacy
  • Growing recognition that decoherence alone does not solve the measurement problem, requiring deeper foundational work
  • Interdisciplinary application of quantum mathematical tools to classical complex systems via non-Markovian modeling
  • Philosophical physics gaining legitimacy as a rigorous scientific discipline with measurable contributions to physics progress
  • Skepticism toward universal wave function assumptions in cosmology and quantum gravity research
  • Emergence of effective field theory thinking in foundations: accepting approximate, context-dependent laws rather than seeking perfect universal laws
Topics
Indivisible Stochastic Quantum Mechanics, Non-Markovian Stochastic Processes, Wave Function Interpretation Alternatives, Quantum Measurement Problem, Decoherence and Division Events, Many-Worlds Interpretation Critique, Bohmian Mechanics Limitations, Quantum Field Theory Ontology, Born Rule Derivation Problem, Quantum Foundations Philosophy, Effective Field Theory Methods, Quantum Gravity Approaches, Classical vs Quantum Probability, Emergence of Macroscopic Behavior, Causation in Quantum Mechanics
People
Jacob Barandes
Developed indivisible stochastic quantum mechanics as alternative to wave function interpretations
Sean Carroll
Podcast host and author of 'Something Deeply Hidden' on quantum mechanics interpretations
Hugh Everett
Developed many-worlds interpretation; Barandes critiques probability derivation in Everett approach
David Bohm
Developed pilot-wave theory and did early work on decoherence; influenced Barandes' early thinking
David Wallace
Developed decision-theoretic derivations of Born rule; editor of Philosophy of Physics journal
John Bell
Proved Bell's theorem on non-locality; work relevant to Barandes' local but non-Markovian approach
Edward Nelson
Developed Nelsonian stochastic mechanics in 1960s-1980s, precursor to Barandes' approach
Max Born
Introduced Born rule for quantum probability; foundational to quantum measurement theory
Erwin Schrödinger
Developed wave mechanics and Schrödinger equation; central to quantum theory Barandes critiques
Werner Heisenberg
Developed matrix mechanics; advocated abandoning classical ontology in 1925 paradigm shift
Albert Einstein
Influenced Bohm's pivot to hidden variables; debated quantum foundations with Schrödinger
Paul Dirac
Formalized quantum mechanics in 1930 textbook; established standard interpretation framework
John von Neumann
Provided mathematical foundations of quantum mechanics; unified Heisenberg and Schrödinger
Simon Milz
Co-authored review on stochastic processes; independently identified indivisible processes concept
Kavan Modi
Co-authored review on stochastic processes; identified indivisible processes in classical context
Judea Pearl
Developed causal inference framework; discussed with Carroll on causation fundamentals
Ned Hall
Challenged Barandes' Everettian views; invited him to teach quantum foundations seminar
Nima Arkani-Hamed
Taught quantum mechanics and spacetime; presented arguments on quantum mechanics interpretation
Quotes
"Maybe we just need to be shaken out of our dogmatic slumbers, and then we'll find the right answer."
Sean Carroll (Introduction)
"The basic idea is that you don't have a wave function. That's the biggest thing about this particular approach, which is known as indivisible stochastic quantum mechanics."
Sean Carroll (Early explanation)
"You need to know something about the past history of what the particles were doing. This is where it gets a little vague to me, and you can judge for yourself from the discussion, whether it's compelling or not."
Sean Carroll (Non-Markovian dynamics explanation)
"I don't care if you make all the same predictions, because by proposing a different underlying ontology, you open up new ways to move forward into better understandings of the world."
Sean Carroll (Ontological significance)
"Something had to give. Now, you can think of a physical theory in very broad terms as having three components. One component is the stuff, the moving parts, the more or less physical ingredients."
Jacob Barandes (Theory structure explanation)
"I just felt like maybe there was just something that would work. I tried a variety of proposals for a while. I worked on the modal interpretations... But to be honest, I wasn't very satisfied with any of these approaches."
Jacob Barandes (Journey to new theory)
"What I happened upon in 2022 was a different way to formulate the laws of a non-Markovian system. It turns out that you can in fact specify just a few simple rules."
Jacob Barandes (Discovery of indivisible processes)
Full Transcript
Hello everyone, welcome to the Mindscape Podcast. I'm your host Sean Carroll. Quantum mechanics, one of our favorite topics here at Mindscape, continues to be in this weird situation where it's a wonderful theory that fits all the data. We can do spectacularly good calculations, compare them against experiment, and achieve agreement to many significant figures, and yet it is very, very easy to ask questions about quantum mechanics that we don't know the answer to, not just what-would-happen questions, but what-does-the-theory-say questions. And so we have the whole sub-discipline of foundations of quantum mechanics, trying to figure out what the true theory is behind the successful quantum mechanical predictions. And as many of you know, there are different approaches here. Ordinarily in quantum mechanics, you have a wave function. And then the first question you ask is, does the wave function represent reality, or is it merely epistemic? Is it merely something about our ability to make predictions about things? But then if you do think that the wave function represents reality, then you still have choices. Is it the sole representative of reality, or are there hidden variables or something like that? And if it is the sole representative of reality, does it always obey the famous Schrodinger equation, in which case you get the many-worlds theory? Or does it sometimes change stochastically at different moments? In that case, depending on what model you have, you get objective collapse models of various sorts. And all of these theories are truly different theories. They're not different interpretations of quantum mechanics. They potentially have different experimental consequences. In some cases, we know clearly what those different experimental predictions are. In other cases, we're less sure. But still, we don't have a consensus that one of these approaches is on the right track.
And therefore, it's useful, very, very important, in fact, I would say, to develop entirely new alternatives to these famous models of quantum mechanics. Because who knows? Maybe we just need to be shaken out of our dogmatic slumbers, and then we'll find the right answer. Today's guest, Jacob Barandes, is a physicist and philosopher at Harvard University, and he has a proposal for a brand new way of thinking about quantum mechanics. When I say brand new, it's, you know, brand new is always a questionable thing in science or academia, because everyone always has predecessors. So Jacob's idea of quantum mechanics lives in a tradition of what are called stochastic models of quantum mechanics. There have been some stochastic models of quantum mechanics before, but he has a different one. Okay, so his particular approach is distinct from anything that's been tried before. And I'll tell you the basic idea of it now, because, spoiler alert, this podcast is very long. It takes us a while to get to what Jacob is actually proposing in his new theory, because we spent a lot of time preparing groundwork for understanding the issues in quantum mechanics more generally. So I'm going to tell you the basic idea, and then you'll see it fleshed out in the discussion. The basic idea is that you don't have a wave function. That's the biggest thing about this particular approach, which is known as indivisible stochastic quantum mechanics. You really just do have, for electrons, for example, you have point particles and they have locations. But rather than following some deterministic trajectories, those point particles move stochastically according to some rules. Okay. And so all you can do is predict the probability of seeing them. The rules are complicated. I don't think we really told you exactly what the rules were, or even I don't know if it's known what the rules are supposed to be in the most general cases.
But one crucial feature of the rules is that they don't depend just on the state of the particle at any one time. This is in contradiction to the entire Laplacian paradigm that has ruled all of physics since the time of Isaac Newton, the idea that if you know the state of the world at one moment in time, you can predict what happens next. In quantum mechanics, you can predict what happens next with a certain probability, if what you're predicting are measurement outcomes. But still, the state at one moment in time is enough to predict what that probability is. Here in Jacob's approach, it is not enough. You need to know something about the past history of what the particles were doing. This is where it gets a little vague to me, and you can judge for yourself from the discussion, whether it's compelling or not. But I'm all in favor of trying new wild things, because that's where you make a breakthrough. Even if every individual new wild thing is very, very unlikely to be true, if you try enough of them, you really might make a breakthrough, and that will be very important. Despite the length of the podcast, we didn't get to talk about everything. One thing, after finishing, Jacob emailed me and said, you know, we really should mention Bell's theorem, Bell inequalities. John Bell proved these famous inequalities that are purported to show that a local theory cannot successfully reproduce the predictions of quantum mechanics. Therefore, in a hidden variable approach, you would need something like Bohm theory, De Broglie-Bohm theory, which has non-local interactions between the particles and therefore can completely recover the ordinary predictions of quantum mechanics. Jacob's theory is local in space, but this non-Markovian property, the idea that the past is required to tell you what is going to happen in the future, is a kind of non-locality in time. Anyway, long story short, Bell's theorem is fine.
Bell's theorem is completely, if you believe the setup at all, completely satisfied by this approach. Indeed, as far as Jacob can tell, there is no prediction of conventional quantum mechanics that is different in his approach. It's just a different underlying ontology. Again, which I'm all in favor of. I don't care if you make all the same predictions, because by proposing a different underlying ontology, you open up new ways to move forward into better understandings of the world, because we have a lot of open questions. Now, you know, again, full disclosure, I am entirely unconvinced by this approach. I think that the many-worlds approach is fine. So one of the things we're going to do in the conversation is Jacob will have a chance to explain why he doesn't think many worlds is fine. And as usual, in the best Mindscape tradition, you, the audience members, get to decide what you think is fine and not. With that, let's go. Jacob Barandes, welcome to the Mindscape podcast. It's wonderful to be here. Real privilege. You say that now, but we disagree about quantum mechanics. This is a big deal. But I think that, you know, I've had people on here before that I've disagreed with about quantum mechanics. I think I've had more people talking about quantum mechanics who I've disagreed with than people I've agreed with. So you're walking a very well-trodden path. So, but we're going to pretend for the moment that the listeners have not listened to me and David Albert and Tim Maudlin and Jenann Ismael talk about quantum mechanics at great length. What is the big deal about quantum mechanics? Why is it that physicists and philosophers don't just understand it? Hasn't it been around for a hundred years? Well, that's a really good question. So I guess before I start, I want to gush a little bit about you in particular. So I think, you know, your listeners, by this point, you know, they've listened to your amazing interviews with brilliant, interesting, creative people.
And they've, I think also, you know, listened to your Ask Me Anything episodes. They've listened to your solo podcast episodes. So they know that in addition to being an incredible communicator of science, that you're also a fantastic scientist. And I can say that, just a couple of anecdotes before we get started with the rest of the discussion. So you know this, but I use your textbook on general relativity for my graduate-level general relativity class. And it is a masterwork of textbooks. There are a lot of textbooks now on the market for general relativity. But I still think it's the best. And also, I had the incredible good fortune of working with you recently on a thesis committee for an astronaut and PhD student, the brilliant and amazing Chris Chilu. And just getting a chance to see you up close in action doing science was amazing. So I just think it's important that everybody who listens to this podcast knows that when they listen to you, they're not just listening to somebody who is fantastic at communicating science, but they're listening to a fantastic scientist. Thank you very much, Jacob. That's very nice. And the listeners do get tickled when I have guests who are fans of the podcast. They're like, oh, so we're all in the same club. We're all just thinking about these big ideas. So that's great to hear. Yeah. So now to get back to quantum mechanics. Here's how I would answer your question. The origin of quantum mechanics is a little bit hard to pin down. A lot of people pick the year 1900, when Planck introduced his quantum hypothesis. At the time, Planck was trying to understand how to combine two theories: electromagnetism, our classical theory of electric fields and charges and electric forces and magnetic fields, bar magnets, you know, that whole story, electromagnets, with thermodynamics as it stood at the time.
This is a theory of work and energy, useful work, entropy, heat engines, refrigerators, basically the science of how we take heat around us and convert it into useful energy and the limitations we run into when we try to do that. By this point, you know, various important people like Boltzmann and Maxwell had begun to construct what we call an intertheoretic reduction, an explanation of thermodynamics in terms of more basic ingredients, the statistical behavior of more fundamental things. Sorry, when you say we call it that, you mean philosophers call it that, because philosophers... Yes, I'm sorry. Yes, in both camps, I know. That's fine. I think if you... Sean, as a scientist and a philosopher, absolutely. Yes, I do. You and I are two of the very few people who kind of literally have appointments in both physics and philosophy departments. That's right. Yeah. Yeah. So, but, you know, Planck ran into some problems. He had trouble getting his theoretical predictions to fit with experimental data. There was experimental data of a certain kind: a plot, a graph that showed how bright or how intense the radiation was coming out of these heated systems as a function of the color or frequency of the radiation coming out of them. And there were experiments, people had experimentally determined what this looked like, and he was trying to explain it theoretically from first principles, and he was having a lot of trouble. And to make a long story short, he was able to get a theoretical prediction that matched the experimental results by hypothesizing that you could only activate or excite a given frequency of radiation in certain discrete chunks, quantized chunks. And these became the quanta of his quantum hypothesis. And this is where quantum theory gets its name and initiates the story of quantum theory. This is 1900.
And then over a period of time from 1900 into the 1920s, physicists and philosophers, and honestly, physicists who were also philosophers, because at that time, people were thoroughly trained in both. I encourage listeners who are interested in learning more about that to read an article by Don Howard called Einstein's Philosophy, I believe that's what it's called, about how Einstein was thoroughly trained in philosophy. He read Kant's three critiques by the time he was 16. There was mandatory coursework at the university where he became a physicist. Everybody had to learn philosophy. He ran a philosophy study group, a reading club. And he was deeply versed in many of the philosophical thinkers, both before his time and of his time, as were many of his colleagues; many of the people who gave us quantum theory and general relativity were thoroughly trained in philosophy alongside physics. And you can see this in their dialogues. But so in this period, physicists and philosophers were trying to put together a picture of the world and a set of laws that would be empirically adequate, meaning capable of making the right experimental predictions. And they ran into a lot of trouble. This period from 1900 to the early 1920s is often called by historians of physics the time of the Old Quantum Theory, all capitals: capital O, capital Q, capital T. And in this period, people had this idea that the world picture was basically classical looking. You had particles, you had fields, and they were trying to come up with rules, with laws, that you could combine with this roughly classical-looking picture to get the right predictions. And so, for example, in the teens, the 1910s, you had the Bohr model of the atom.
And this is the model that people often still think of when they think of the atom, with particles, electrons, charged particles going around the center of the atom, around the positively charged nucleus, in something like circular orbits. But of course, we know that when electrons transition from one orbit to another, they emit radiation in these discrete amounts. Again, in quantized amounts, they emit quantized radiation. And so the picture that he had, that people worked with, and I think some people still think of, is that you have these nice orbits. But there's some kind of rule that particles can only jump between these orbits in discrete amounts. That picture worked reasonably well for making predictions about the pattern of radiation we get from hydrogen atoms, but it didn't work so well for other kinds of atoms. And people did try to improve the model. They tried to introduce other kinds of orbit shapes. But this just didn't work very well. There were a lot of other confounding examples. So by the beginning of the 20s, you have this collection of heuristic formulas, approximate rules, and no real consistent, coherent theory. By the early 1920s, this is like 1923, 24, people like Wolfgang Pauli, Pauli of the Pauli exclusion principle, arrive at this famous idea that certain kinds of particles, fermions, of which electrons are an example, cannot be in the same quantum state. This is responsible for the stability of matter, for the solidity of matter, very important. And of course, he did many other important theoretical things. Bohr was already beginning to toy with similar thoughts, as famously was Heisenberg. They began to question whether you should begin from a classical picture of the world, or really any picture of the world at all. Maybe we should abandon classical physical pictures. In philosophy, sometimes we refer to the physical picture of what the world really is as the ontology, right? This is the ontology of the theory.
Basically, abandon the ontology. And if you read one of the most famous papers in the history of physics, Heisenberg's 1925 paper, introducing what shortly thereafter was called matrix mechanics. This paper was a breakthrough paper, presenting in some ways the first modern version of quantum theory. And the paper begins with a set of statements that any philosopher of science would regard as initiating a philosophical shift in paradigm, a paradigm shift, where he just says explicitly, we should abandon trying to think of these pictures, and we should develop a mathematical formalism that just connects things we observe with other things we observe. So this is a very deliberate move that he makes. And it opens the door to our modern version of quantum theory, a theory of extremely abstract mathematics. The mathematics in general terms is not extremely advanced. A college student, you know, doing a math major, certainly well before the graduate level, would be able to understand the mathematics that we use in quantum theory. It's just that the mathematics is very abstract, and the connection to things that are physical, the connection to ontology, is murky or not there at all. So this was an important shift. The idea was to give up classical ontology and embrace a purely new, radical paradigm for how to think about laws and pictures. Heisenberg didn't really present an ontology in this paper, but shortly after, Schrodinger came along and introduced another way to get the same predictions. Originally, Schrodinger called this undulatory mechanics, like undulations, like waves going up and down. And I think it's a fantastic name. But eventually it became known as wave mechanics. Schrodinger introduced his wave function, which was not present in Heisenberg's work. And the wave function satisfied differential equations, which physicists really like as a way to formulate laws.
And then what developed after that was a picture, an ontology, in which you have a wave function, an abstract mathematical thing that undulates or oscillates. And it's got laws, dynamical rules that it obeys. And all of these look distinctly nonclassical. Shortly after that, Max Born came along and introduced his famous Born rule, which was a formula for taking wave functions and computing probabilities of getting certain measurement outcomes. And this rule does not look like it comes out of ordinary classical probability theory. We have to take these abstract mathematical ingredients, and then we push them through this very abstract formula, and out comes a probability of getting a certain kind of a measurement outcome. When chocolatiers make chocolate, sometimes they'll say that they make it from beans to bars, right? From getting them off the tree. This is not classical probability from beans to bars. This is some other, distinct kind of way of getting probability, one that seems to be nonclassical, and then probability comes out at the end. I'm sorry, let me just poke at that, because the phrase classical probability, I'm never quite sure what that means. I mean, we have Professor Kolmogorov, who gave us some rules for what probabilities are: they're numbers between zero and one, they add up to one, etc., etc. Are those not obeyed by quantum probabilities? Good question. Just as an interesting historical note, Kolmogorov published his treatise axiomatizing probability in 1933, which is after the textbooks by Dirac and von Neumann came out and established quantum mechanics. Interestingly, probability theory is less classical than quantum mechanics in the sense that it was formalized afterward. What I mean by classical probability is ordinary probability in the sense that we attach probabilities to events, to propositions, and then we manipulate probabilities using a set of rules.
These rules, it depends on exactly what formalism of probability you're using, if you're a frequentist, if you're a Bayesian. But roughly speaking, if you want to take a Bayesian point of view, we have a formalism for taking probabilities we assign to propositions and plugging them into formulas and then getting new probabilities for propositions. This is what I mean when I talk about the ordinary classical probability formalism. In quantum theory, we can't do that. At some point, we have to introduce these abstract mathematical ingredients in between, or at various places, that don't look like ordinary probability. I can give an even more concrete example of this. In ordinary classical probability, we imagine we have some collection of propositions, which is just a set of possible statements. When we assign probabilities to them, we're generally assuming that those statements are mutually exclusive, one of them is true, the others are not, and collectively, they exhaust all the possibilities. Once you assign probabilities to all of them and add up all the probabilities, you get 100%. Then we take that set of possibilities that's called a sample space, and then all of the propositions, the more general coarse grained, higher level, approximate things we might want to say about it, are all rooted in this set of more fine grained, mutually exclusive, exhaustive possibilities. All that stuff is going to give you, in fancy language, we say, a commutative algebra of random variables. We have a bunch of quantities that you can add together, you can multiply them, they obey the usual rules of addition and multiplication. In quantum theory, we just don't have that. Famously, in quantum theory, if you ask a certain question about a system, like, is the system spin up or spin down? Then you interact with the system in such a way that you get an answer, we call that a measurement. Then you ask a different question about a system, like, is it spinning left or right? 
You interact with the system and get that answer and measure it. You might think, okay, I now have two answers, but if you go back and try to measure the first property again, you might get a totally different answer. There's a certain non-commutativity of certain kinds of observable questions. I know in that example, you might say, well, of course, I mean, if the thing is up and down, it couldn't be left and right, and maybe what I did was I messed it up in some way. But there are certain observables that you might think you can figure out one of them and then figure out the other, and it shouldn't disturb the first. But for every quantum system we know of, there are always observables, things we might want to know about it, that fail to have this property, that fail to allow asking the questions in either order, and that non-commutativity is an inherently non-classical property. So I don't want to dwell on this too much, but it's important to me, and I know that we're going to get into the weeds later, but can't you just equally well say that these observable properties don't exist? They're not the things that you assign classical probabilities to. What has changed is not the use of classical probability, but what you assign probabilities to. I agree with you. In principle, that's a way out of this. And when you start looking at Everettian approaches to quantum theory, you begin to make moves like this, where instead of attaching probabilities to physical processes happening out in the world, we attach probabilities to what we will see, what our experience will be, where we will be in some branching universe, in some branching universal wave function. So there is the possibility you could shift the story and bring back something that looks more like ordinary probability theory.
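The non-commutativity just described can be checked directly with spin observables. A minimal sketch using the standard Pauli matrices; the states and numbers here are the textbook spin-1/2 example, not anything specific to the episode:

```python
import numpy as np

# Spin observables along z and x (standard Pauli matrices)
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

# The two observables do not commute, so answers depend on question order
print(np.allclose(Z @ X, X @ Z))        # False

# Sequential measurements: start spin-up along z, then ask the x question
up = np.array([1.0, 0.0])                   # z "up" eigenstate
right = np.array([1.0, 1.0]) / np.sqrt(2)   # x "+1" eigenstate

p_right = abs(right @ up) ** 2          # Born rule: probability 0.5
# If the x outcome was "+1", the state is now `right`; re-asking the
# z question no longer returns "up" with certainty:
p_up_again = abs(up @ right) ** 2       # probability 0.5
print(round(p_right, 3), round(p_up_again, 3))  # 0.5 0.5
```

Asking z, then x, then z again destroys the certainty of the first answer, which is exactly the disturbance the dialogue is pointing at.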
In some ways, the kind of probability that's used in many-worlds or Everettian-type quantum theory kind of brings back the old way of thinking about probability, but just attaches it to different kinds of things. But this was not how people were thinking about quantum theory in the 1920s and 30s. They just saw a complete break from the way we usually do probability theory in some fundamental ways. So maybe the right thing to say, again, I want to get back to, you still haven't answered the very first question I asked, because I haven't given you a chance to, but there's a whole bunch of things that sort of come together to have a package of how we think about physics. And what you're saying is something has to go. And one way of saying what has to go is your notion of probability theory, and maybe other things as well. But what we all agree on is something has to radically shift from the classical view to the quantum view. That's right. That's right. In the 1900s up until the early 20s, as I said, physicists could not find a way to get the right experimental predictions with the kinds of thinking that they were used to from classical physics. Something had to give. Now, you can think of a physical theory in very broad terms as having three components. One component is the stuff, the moving parts, the more or less physical ingredients. In philosophy speak, we might roughly call this the ontology: what we're saying is there according to the theory. In physics speak, we call that the kinematics. Kinematics is the description of the moving parts, the basic ingredients. The next component is the rules for those ingredients, how those ingredients are supposed to change, what can happen to them, and how to make predictions of what they'll do. In philosophy, that's the nomological part, the nomology. But in physics, we call it the dynamics or the laws, the dynamical laws, the basic rules for how these ingredients move around.
And then the third component is how we assign probabilities to things in the theory, the probabilistic side. Now, in a deterministic theory, where you know exactly how things begin, you might not need probabilities, but in a way you kind of do, because probabilities of zero and one are like saying things definitely happen or definitely don't happen. So you can think of those as subsumed into this larger idea of thinking about probability theory. And the probability side you could think of as, broadly speaking, what we know about the system, our credence, our belief about whether certain things are true, and maybe also a part that's the chancey side, things just happening in some chancey way. Philosophers like to call these aleatory probabilities, as compared with credences or subjective probabilities. But in some way, we have probabilities also in our theory. And we have a classical way we think about those three ingredients. Classically, the kinematics, the moving parts, are going to be arrangements of things in something like physical space. They'll be particle arrangements, or they'll be patterns of fields in space, or something like that. On the dynamical side, in classical physics, usually we imagine we have something like an equation that takes the arrangement or configuration or state, suitably defined, and then we can use this to project forward to figure out what the system is going to do. Or in broader terms, we can put constraints in some larger sense on how the system can behave. And often those rules take the form of equations that take us from any one moment to the next infinitesimally next moment. And in calculus speak, we call that a differential equation. And then the third component, probability, is ordinary probability. As I've explained, we have a set of possibilities. We assign probability numbers to them. And then we use probability formulas to update our probabilities. So those are the three ingredients.
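The "ordinary probability" ingredient just described, a sample space of mutually exclusive possibilities, probabilities summing to one, and update formulas, can be sketched in a few lines. The hypotheses and numbers below are made up purely for illustration:

```python
# Ordinary (classical) Bayesian probability: a sample space of mutually
# exclusive hypotheses, a prior summing to 1, and Bayes' rule for updating.
# The numbers are invented for illustration only.
prior = {"fair coin": 0.5, "biased coin": 0.5}
likelihood = {"fair coin": 0.5, "biased coin": 0.9}  # P(observe heads | hypothesis)

# Bayes' rule: posterior proportional to prior times likelihood
unnormalized = {h: prior[h] * likelihood[h] for h in prior}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}

print(round(posterior["biased coin"], 3))  # 0.643
```

Everything here lives in one commutative sample space, which is precisely the structure the dialogue says quantum theory breaks.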
And by the beginning of the 1920s, and certainly after 1925, 1926, I think there was just a consensus that we had to give up all three of those things. All three of those things had to be thrown out. You went from the classical world to the quantum world. And that's it. So the idea of those three things didn't get thrown out, but what we were using in classical physics for each of them had to be replaced by a quantum thing. Correct. More precisely said, excellent. Yes. We didn't get rid of the idea that we had these three things, but we replaced them with quantum versions of those things. We couldn't think about those three things using the classical paradigm anymore. And I already mentioned these innovations, right? So on the ontology side, to the extent we had one, it was wave functions. It was some kind of mathematical thing, a wave function, a quantum state in some broad sense, in some abstract kind of mathematical space. Today, we usually call that space Hilbert space. On the dynamical side, we replaced the equations that we were familiar with from classical physics, Newton's second law, force equals mass times acceleration, the Maxwell equations that describe how electric and magnetic fields change with time. We replaced those with a dynamical rule called unitary time evolution, which in the simplest cases is an equation named after Schrodinger, called the Schrodinger equation, which just tells us: if you know the quantum state at one time, this equation will tell you, moment to moment, what it will look like at later times. That's the Schrodinger equation. And then the probability side is given by the measurement axioms and the Born rule and these complicated rules for how we take these abstract ingredients and generate probabilities. So this just seemed like a radical departure from how we thought about kinematics, dynamics, and probability before the beginning of modern quantum theory.
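The three quantum replacements he lists, a state in Hilbert space, unitary (Schrodinger) evolution, and Born-rule probabilities, can be seen in miniature in a two-level system. This is a toy sketch of my own, with a Hadamard-style unitary chosen purely for illustration:

```python
import math

# The quantum recipe in miniature: a state vector in Hilbert space,
# unitary time evolution, and the Born rule turning amplitudes into
# probabilities. Toy illustration, not a calculation from the episode.

# A qubit: the state |0> in a 2-dimensional complex Hilbert space.
psi = [1 + 0j, 0 + 0j]

# One "time step" of unitary evolution. Here it's a Hadamard-like
# rotation; in general U would come from solving the Schrodinger
# equation, U = exp(-i H t / hbar) for some Hamiltonian H.
h = 1 / math.sqrt(2)
U = [[h, h], [h, -h]]

# Apply the unitary: psi' = U psi.
psi = [sum(U[i][j] * psi[j] for j in range(2)) for i in range(2)]

# Born rule: the probability of measurement outcome i is |psi_i|^2.
probs = [abs(a) ** 2 for a in psi]
print(probs)  # roughly [0.5, 0.5]: equal chances for the two outcomes
```

The point of the sketch is just the division of labor: the state is the kinematics, the unitary is the dynamics, and the squared amplitudes are where probability enters.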
This was formalized by 1930 in Paul Dirac's book, The Principles of Quantum Mechanics. I won't tell you how old he was when he wrote that book because it would make everybody really depressed. He was very, very, very young. And then John von Neumann followed up two years later with, in the English translation, Mathematical Foundations of Quantum Mechanics. They basically told the same story; John von Neumann told it in a much more mathematically rigorous way. They unified the matrix mechanics of Heisenberg with the wave mechanics of Schrodinger. Both of those were seen to be manifestations of a deeper structure. There is an abstract kind of space. It's not like physical space. It's called Hilbert space. Quantum states are mathematical things that live in this abstract Hilbert space. These mathematical quantum states change with time according to unitary evolution, or the Schrodinger equation. And then there's a prescription, a set of rules, the measurement axioms, for taking those mathematical ingredients and generating predictions of what we will see when we do measurements. In particular, they tell us probabilities for measurement outcomes. And so by 1932 you're basically done. And again, remarkably, this is a year before Kolmogorov publishes his axiomatization of classical probability theory. So it's kind of an amazing circumstance. But this picture was so radically different from what people had thought about before. And not only was the picture radically different at the level of ontology. Are we saying that what physically exists is a quantum state in an abstract Hilbert space? But also the measurement axioms were difficult to... I mean, you just took them on as axioms. They worked really well, extremely well. Those axioms basically underlie how we calculate things in the Standard Model of particle physics today, our best theory of the interactions of elementary particles and fields. And they give predictions that are correct to...
Gosh, Sean, what's the latest? What's the longest number of decimal places that we have predictions to? It's double digits, right? It's more than 10 decimal places. So it's very good. In the high-energy theory area here at Harvard, there's a glass door that has, I believe, if I remember correctly, the gyromagnetic ratio of the electron: the theoretical prediction and the experimental measurement. And they agree to many, many decimal places. So you look at that and you just say, what else could we possibly need? We have this theory. It makes all the predictions. This is fantastic. There are other historical reasons why people shifted toward a more pragmatic, more practical-minded attitude toward quantum theory around the end of World War II. The center of gravity of physics moves from Europe to the United States. There's a lot more money that goes into physics; the amount of funding increases dramatically. There's a real emphasis on getting concrete results. And there's also just rapid progress. I mean, the amount of progress that was made in all areas of physics, including what you might call high-energy physics, short-distance physics. I'm a little nervous to call it fundamental physics because I don't want to offend anybody who thinks that other kinds of physics are also fundamental. But a certain kind of physics makes a tremendous amount of progress. We're discovering particles left and right. And Nobel Prizes are being handed out all the time. And I just think there was this attitude that spending time trying to go back and reevaluate the foundations of this theory was a waste of time. It was a waste of time. It was misguided. And that evolved from thinking it was a waste of time to actually becoming a serious threat to one's career.
The history of 20th-century physics tells a pretty sad story of people making very important contributions to physics, to applications of physics, often motivated by trying to probe the foundations of quantum theory, and suffering significant career ramifications. So people talk about EPR; people have heard of Einstein, Podolsky, and Rosen and entanglement and all that stuff. Einstein certainly did fine career-wise. We're not worried about Einstein's career. His career was great. But other people began to revisit some of those questions in the years that followed. The first really significant example, I mean, there are a lot of small examples, but the first really significant example after the days of entanglement and the debates between Einstein and Schrodinger is David Bohm. So David Bohm was at Princeton, and he was a theoretical physicist. He wrote a book called Quantum Theory. It's a book that's available now; Dover produces an edition of it. He published it in 1951. And most of it is a very nice but relatively conventional treatment of quantum theory as it was understood at that time. But near the end of the book, he decides to do something that you're not supposed to do. He decides he wants to present a better, more complete, more transparent analysis of the measurement process, rather than just treat it as a bunch of axioms: when you do a measurement (and I don't know what a measurement is), then this magical thing happens, probabilities come out, the quantum state suddenly collapses to lock in the results. What's really going on there? He wanted to present a pedagogical, more foundational picture of what was going on. And again, this was frowned upon, but he did it. And he laid out the process. He put in a quantum system, an object that was to be measured. He put in a device that was to measure it.
And then he studied how they interacted according to the other rules of quantum theory, not the measurement axioms but the more fundamental rules, the Hilbert-space rules, Schrodinger evolution, to see what would happen. And he got these weird, entangled quantum states. And then he had this amazing section. He called it, this is Section 22.8, the destruction of interference in the process of measurement. And he showed that you got these branches that came out. At the end of this interaction between a measuring device and the system, you went from a weird superposition of the system to be measured and a measuring device before the measurement; they combined, they evolved, and now you got these branches. In one branch, the measuring device obtained one result, and the system, in some sense, had a quantum state that corresponded to just that result. And then there was another branch, and on this next branch, the measuring device had a different result, and the system it was studying was in a quantum state corresponding to that result. And then there was a third branch and a fourth branch, and on each branch you had what looked like a definite, almost classical-looking story for how the measurement should have unfolded. But you had all these branches. And what he was able to show was that the branches stopped talking to each other. Once you get a big measuring device involved, there isn't any interference between the branches. They begin behaving a lot like distinct classical realities. And this is in that book. I recommend, if viewers are interested, that you buy a copy of it. It's a beautiful exposition. And today we call this destruction of interference and this emergence of branches, when a big enough system gets involved in the story, decoherence. And decoherence was later developed by a number of other people. Dieter Zeh did further work on it in the 1970s. And more people worked on it going into the 70s, 80s, and 90s.
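The "branches stop talking to each other" claim has a compact mathematical core: the interference term between branches is weighted by the overlap of the measuring device's states. This toy sketch is my own illustration of that idea, not Bohm's actual calculation:

```python
# Toy sketch of decoherence: once a system entangles with a measuring
# device, interference between branches is suppressed by the overlap
# of the device's "pointer" states. Illustration only.

def inner(u, v):
    # Inner product <u|v> of two state vectors.
    return sum(a.conjugate() * b for a, b in zip(u, v))

# Device pointer states after recording each outcome.
A0 = [1 + 0j, 0 + 0j]  # device reads "0"
A1 = [0 + 0j, 1 + 0j]  # device reads "1" (orthogonal to A0)

# The system starts in (|0> + |1>)/sqrt(2). Before measurement the
# device is uncorrelated with it, so the interference (off-diagonal)
# term of the system's reduced density matrix is (1/2) * <device|device>.
interference_before = 0.5 * inner(A0, A0)

# After the measurement interaction the joint state is
# (|0>|A0> + |1>|A1>)/sqrt(2), and the interference term becomes
# (1/2) * <A1|A0>, which vanishes for orthogonal pointer states.
interference_after = 0.5 * inner(A1, A0)

print(abs(interference_before))  # 0.5: branches can still interfere
print(abs(interference_after))   # 0.0: branches have stopped talking
```

A realistic device has enormously many degrees of freedom, so the pointer-state overlap is not exactly zero but astronomically small; that is why decoherence is so hard to undo in practice.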
But far from getting accolades for introducing this pivotal idea, Bohm's career took some rather unfortunate turns. He found himself exiled to Brazil and unable to return to the United States. Now, there were other reasons for this. There were also political reasons that I might get into. But certainly, he wasn't handed a Nobel Prize for laying out the story of decoherence. And this is a huge shame. Decoherence is a pivotal idea in how we make sense of quantum theory today. Not just philosophically or foundationally: you can't pick up a research article from an atomic physics laboratory or a quantum computing research group or quantum information or high-energy theory or condensed matter, really any area of physics that deals extensively with quantum theory, without reading about decoherence, decoherence timescales, shielding our systems from decoherence. People worry, why can't we make a quantum computer today? One of the problems is that it's very difficult to protect your systems from this decoherence process. I think in some sense, this means that all of these practicing contemporary physicists should be paying royalties to poor Bohm, or at least to the field of foundations and philosophy of physics. Now, I think some people would look at Bohm and say, well, no, this counts as physics. He was a physicist. But he was doing physics in a mode that is instantly recognizable to people who work in philosophy of physics. He was taking theories, he was probing their structure, he was trying to improve the way we made sense of them. And in so doing, he made a major contribution to how we think about physics. Do you know if that part of Bohm's book was any influence on Hugh Everett? To my knowledge, no. I'm not sure that it was. Who was also at Princeton at approximately the same time. He was. The book was '51. Everett was at Princeton in the mid-'50s. '55, I think, is when he started there. I don't know if Bohm was still there at the time. But yeah.
Because he published his dissertation in '57. He was certainly working on it by '56. Yeah, okay. There's some overlap. You're probably right. Yeah. I'm not a historian and I don't know the detailed history here. I don't know to what degree they overlapped. But that's an excellent question and one that I'll definitely go and look at. We should look at it. Because certainly the story does sound familiar to an Everettian, right? Of course. But then Bohm ends up going in a very different direction because Einstein nudged him. Correct. Yeah. So Bohm was very pleased with this, and he explained it to Einstein, who was obviously still in Princeton at the time. Einstein lived until 1955, so Einstein was there. And Bohm came away from that conversation with Einstein no longer convinced he had solved the problem. And the next year, Bohm introduced his hidden-variables approach to quantum theory, where the quantum state was a pilot wave that guided the particles around. But I should say that decoherence didn't go away. Sure. Bohm published two papers, the first of which laid out the outline of this new approach, and the second of which went deep into the measurement process. And to get the measurement process to work in Bohmian mechanics, you still need decoherence. Now, the argument I would make to anyone who says all we need to explain quantum theory and solve the measurement problem is decoherence is: just look at Bohm. Arguably, Bohm understood decoherence; I think that's safe to say, he being the person who came up with the idea and laid it out so beautifully. You can, again, read his book. Sure, not everyone who discovers something fully understands it; historically, we have examples where that doesn't happen. But in this case, he seemed to have a very good understanding of what was going on. And he didn't think it was enough to resolve the measurement problem. And I think that's telling.
So I do worry that we're not going to get to the juicy stuff. So I want to accelerate our velocity here. Could you just give us a brief overview, because we've already mentioned Bohm and Everett, and these are two of the leading contenders for a true theory of the foundations of quantum mechanics. Could you explain why they're different? Right, right. So in Bohm's approach, we have the quantum state; it does this funny branching. But in addition to the wave function, there are particles. There are localized particles moving around in space. And they're guided by the wave function. And they're guided so that they arrive at destinations according to the rules and predictions and probabilities of quantum theory. This sounds very gerrymandered, like he just carefully engineered his pilot waves to do this. But it actually comes out rather beautifully when you're working with systems of finitely many non-relativistic particles. You take the Schrodinger equation, you sort of decompose it into parts, and the parts tell a very natural story in a language that's very similar to what's called the Hamilton-Jacobi formulation of classical physics, which he knew. And in that story, you see what looks like a pilot wave guiding things around. So it's actually, at least for this limited class of systems, a really elegant picture. Essentially, what Everett did was drop the particles, just drop them. And actually, although you asked about whether Everett knew about Bohm's work on decoherence, which I'm not sure about, Everett certainly knew about Bohm's work, because in Everett's unpublished long-form dissertation, which he finished in 1956, and which you can find online, just Google "1956 Everett long-form dissertation," which by the way is 137 pages, 137, a very important number in physics.
When I teach my philosophy of quantum mechanics classes, the course number is 137; that's an homage both to Everett and to the inverse fine-structure constant. Everett talks about Bohm's theory in detail in that thesis, comparing and contrasting it with his approach. I don't think he talks about Bohm's analysis of decoherence, though; that's why I'm not 100% sure. But he certainly knew of Bohm's work. That's true. So Everett drops the particles in an attempt to simplify the theory, simplify the axioms, and when I say simplify, I mean simplify the axioms, make the assumptions simpler, and just try to build a sensible formulation of quantum theory that is empirically adequate, that agrees with the experimental predictions, but doesn't have the extra ingredients in the axioms that Bohm's theory did. And you don't believe either one of them? That's right, at least not now. Not now. Yeah. Sure. I went through a Bohm phase. When I first read Bohm's papers, I found them very compelling and very beautiful. You can see why Bohm thought he had discovered something so important. I think the papers are a little bit difficult to appreciate for people who haven't yet learned about the Hamilton-Jacobi formulation of classical physics, which goes back to the 1800s. It's something people knew. Schrodinger used it to build the Schrodinger equation. It was well known to physicists back then. We don't teach it at the elementary level so much anymore, although some physics students will learn it if they take an advanced course in classical mechanics. So when you read that paper and you're familiar with this formalism, it seems incredibly natural. It just looks like someone took the Hamilton-Jacobi formulation and added an extra force to it. And now you've got quantum mechanics.
And it just seems like, oh my gosh, this is such a beautiful picture. And so of course, my next question was: great, let's see how this applies to more general kinds of quantum systems. Let's see if it applies to discrete quantum systems, systems where the moving parts are discrete things. We use these kinds of quantum systems to model what goes on inside a quantum computer or a ferromagnet, you know, like a permanent magnet, a bar magnet. How does it apply to quantum fields? So field theory, this is where we imagine moving parts that are spread out over all of space, where there's, roughly speaking, a different moving part at every point in space. And you also have to contend in those theories with the rules of special relativity from Einstein, and that makes things much more complicated. And so I just jumped in and said, great, let's see how this works. And I could not see how to make it work. I couldn't see how to make it work. And it's not just me. And so I began reading the literature on this. And let me just say that although I don't agree with some of the other approaches to quantum foundations that exist, you can't read the work of these people and not be blown away by their creativity, their intellect, their ingenuity. I am in awe of the work that they've done. And I think you just have to read their work to see. And even though I don't necessarily agree with the conclusions they draw, there's a lot that you can take from their analyses. And this is really the beauty of what I call philosophical physics, which is doing physics using the tools of analytic philosophy. Sometimes these tools let us make progress in a very direct way on physical problems. And sometimes you get physical results as spin-offs. And the spin-offs of a lot of this stuff, I think, are really remarkable.
But so people worked on trying to make versions of Bohmian mechanics that would give a good, elegant description of systems where you had to follow the rules of relativity, systems with fields in them. And here I'm thinking of people like Ward Struyve, Shelly Goldstein, and a collection of people who've worked on Bohmian theories since then. There are a lot of people, and I don't want to miss names; I'm afraid if I don't mention names, people are going to feel like I'm leaving them out. But once you look up those names, you'll learn about people like Nino Zanghì and Roderich Tumulka and Detlef Dürr. So I'm going to leave the names there, but I assure you it's not because of any disrespect. So I read their papers, because I couldn't figure out how to do it. And I read their papers, and the models they constructed were really, really, really complicated. They were really complicated. I guess Bell also worked on this as well. "Beables for quantum field theory," I think, was the paper where he tried to contend with this. This is John Stewart Bell, the Bell of Bell's theorem, who played a very important role in quantum foundations as well. And you could get some things that looked like they worked for very simple quantum field theories in some situations, but I couldn't see how to get them to work for interacting quantum field theories, quantum field theories that had fermions in them. These are the quantum fields that underlie electrons. They're very weird, these quantum fields. And in the end, you got these sort of complicated admixtures of pilot-wave dynamics and stochastic dynamics, which we'll come back to. So the beautiful deterministic picture of the Bohm approach gave way to a picture where the laws were chancy. And the rules seemed kind of gerrymandered, kind of ad hoc. Like, we knew the predictions we needed, and now we're just going to reverse-engineer the rules of these Bohmian generalizations of quantum theory to agree with those predictions.
I keep saying Bohm, by the way, but de Broglie, Louis de Broglie, introduced a primordial version of this pilot-wave approach. Bohm didn't know it initially, but eventually, after Bohm published his papers, they connected. And so some people call this the de Broglie-Bohm formulation instead of Bohm's theory or Bohmian mechanics. So these pilot-wave theories, these hidden-variables theories, they work very well if you have a bunch of spinless particles moving non-relativistically. Once you try to include modern physics, relativistic quantum field theory, they get a lot uglier, and it's not even clear that they work at all. So one recognizes why you might be skeptical, but why in the world would you be skeptical of Everett, which is just so obviously the truth? I should also just mention, on that point of whether these models can work at all, David Wallace, who is a philosopher of physics at Pittsburgh. Yeah, yeah. And when I talk to physicists about what philosophers of physics can do, and the expertise they bring, I often point them to David Wallace, because he's just incredible. He wrote a paper and gave a bunch of talks called The Sky Is Blue, about why the interpretation of quantum theory is not underdetermined, in which he argues basically your point: it's not merely that the Bohmian models, the de Broglie-Bohm models, haven't worked very well or are inelegant, but that they don't succeed at important but relatively elementary tasks, like explaining why the sky is blue, which comes from Rayleigh scattering. It's a relativistic interaction between light and charged particles in the atmosphere. And his attitude is that it's been, you could argue, 100 years since de Broglie first introduced the hidden-variables pilot-wave approach, more than 70 years since Bohm did. And the fact that this approach has not yet been able to accommodate this very important kind of problem is a serious issue. So yeah, I think it's more than just a question of elegance.
I think there's a practical problem here. Now, I'm not going to rule out that it's possible at all. People are ingenious, and maybe someone, whether they listen to this or not, will come along and come up with a way to get Bohmian mechanics to work. And if that happens, that would be very exciting. But let me just say that by the end of my foray into Bohmian mechanics, I just felt like it was too hard and it wasn't working. And I wanted to try something else. And then I became an Everettian. Did you? Okay, good. Yes. I spent a period of time as an Everettian, and I had the exact same experience. You're very ecumenical. Yes. Well, it's an aspiration; whether any of us has achieved it is a different question. So I encountered Everettian quantum theory. I read Everett's paper. I mean, I was reading all these papers, Bohm's papers, while I was in grad school, pretending to be a physicist. But really, I was interested in philosophy and in the foundations of physics, the foundations of quantum mechanics. So I read Everett's original published version of his dissertation. I didn't encounter the long-form unpublished version until many years later. And I feel very bad about that, because it is one of the most brilliant pieces of work in physics and in philosophy of science. It is an extraordinary philosophical work filled with amazing ideas and arguments. And I wish I'd seen it earlier. So I encountered Everett's approach. Of course, like anyone interested in science, I had heard of the many-worlds interpretation. There's a cartoon version of the many-worlds interpretation. Oh, that's the interpretation where every time you make a decision, the world splits into a world where it happens this way and a world where it happens that way. The cartoon version is not very much like the actual version. But I had this cartoon version in my head. I'd just seen people mention it a few times as I was coming up in physics.
But I began to engage with the literature. And I found Everett's thesis, the published version, very compelling. He tells this anecdote that shows up a lot in treatments of Everett's work, an anecdote about Copernicus and about whether the sun goes around the Earth or whether it's the Earth turning. When this is brought up, it's to get across the idea: yes, I know it looks like the sun's going around the Earth, but what would it look like if the Earth were in fact turning? Wouldn't it look the same? And so if you have a problem with worlds splitting, if you think that somehow that's not how things look, his response is, well, what would it look like if worlds were splitting? Would it really look any different? And so this is in that published version. It's also in correspondence with Bryce DeWitt, a theoretical physicist who was quite skeptical of the Everett approach. This correspondence is from 1957. Bryce DeWitt wrote him a letter saying, sir, I simply do not split. And Everett responded, well, how would you know if you did? Right? What would the world look like if you did? And eventually, Bryce DeWitt went from being a skeptic to being one of the strongest proponents of the interpretation. And by 1970, he wrote an article in Physics Today, the trade magazine for physics, in which he presented this interpretation to the wider physics community. So Everett certainly won a very important convert to the cause. Yes, so I found the whole thing very compelling for a variety of reasons. And I think I was an Everettian longer than I was a Bohmian. And I got into arguments with people about why Everettian quantum theory was correct and why all you needed was decoherence. And, you know, I got pushback from people. I had arguments with people. I was still pretty new to thinking philosophically about science.
And I was learning how to construct rigorous philosophical arguments, which, by the way, they don't teach you in physics grad school. That's a very important skill that takes a lot of time to hone, and it's definitely still a work in progress for me. But it's one thing you have to work on if you want to do philosophy. So I began to try to hone this picture. And I had a couple of pivotal moments. Ned Hall, another guest I know was on your podcast, is a professor here in philosophy at Harvard. And we had some arguments about the foundations of quantum theory. And he invited me to give a guest lecture in his philosophy of quantum theory class. And so I was forced to sit down and do the thing that we philosophers of physics are always asking physicists to do. Oh, quantum theory is fine, it's got no problems? Well, sit down and write out a detailed argument, filling in all the gaps, to explain exactly how this works. And so I was forced to do this exercise. And I found it very difficult. And when I went into his class, the students had very tough questions for me that I had difficulty answering. So there's that. There were other interactions. Nima Arkani-Hamed taught a class at Harvard, I guess this was my second year in grad school, on quantum mechanics and spacetime. And he presented a certain kind of argument, motivated by Sidney Coleman's lecture Quantum Mechanics in Your Face, about how to get quantum mechanics to work with a sort of Everettian picture. And, you know, I spent months trying to make that fit together, and I couldn't make it work. I spoke to Nima about it later, and he doesn't remember this. And maybe I misinterpreted what he was saying. It's entirely possible. But at the time, for whatever reason, I thought this was his view of how quantum theory worked. I couldn't make it work. And so I had to grapple with the Everett approach. And I began reading more papers.
I went from Everett's papers to more modern papers on Everettian quantum theory. I read the work of people like David Wallace. I read papers, books, arguments for and against the Everett approach. And I started to have doubts. I remember I had some difficulty with David Deutsch's paper from 1999 on decision-theoretic derivations of the Born rule in quantum theory. These sorts of arguments were expanded later by David Wallace in some papers and eventually in his book, The Emergent Multiverse, which was published in 2012. And, you know, by this point, I was getting a little worried, because the arguments were getting longer and longer and more and more technical. Everett's argument for probability in many worlds is basically one page. David Deutsch's argument is 15 pages, and the mathematics is much more intricate. And David Wallace's is 83 pages. Although, I mean, it depends exactly where you put the initial point and the final point; it's arguable that it actually starts earlier in the book. I'm counting from where he actually says he's going to start proving the theorem. And the mathematics is very, very complicated. For anyone who's curious about what the mathematics of that proof looks like, go out and buy his book. You should buy it because, like Everett's long-form dissertation, it is also one of the most incredible works of philosophy of science and physics that I think has ever been written. When you read the book, you are immediately struck by how brilliant David Wallace is. I mean, it's just an incredible book. Anytime a student comes to me and says they want to work on quantum foundations, I always recommend that they buy his book and read it. And so far, they buy it. So I'm sending revenue David's way, I guess, but for very good reason. But I recommend that people actually buy the book and try to get through the argument. You'll see it is very, very, very complicated.
And this makes me extremely nervous. The concern that people expressed about trying to get probability out of the many-worlds picture is that it runs into logical circularity: you're trying to derive that we should see probabilities come out, without assuming the thing to be derived at the beginning. That would be cheating. Now, I know that one could argue that what one person calls logical circularity, another person calls beautiful self-consistency. Consistency, of course, right. Exactly, exactly. But at least for me, I was very nervous, because if you make a proof very long and very complicated, the odds increase that somewhere you're going to slip in an assumption that you're trying to prove at the end. So what these proofs are trying to get is the Born rule, a specific formula for how you take the quantum state and certain mathematical ingredients called operators, you put them together, and then you apply a certain mathematical operation to them. And what comes out is the probability that a measurement will have a certain kind of value. That's the Born rule. You don't want to assume anything. You want to be very careful about your assumptions as you go, right? You don't want to assume the Born rule when you begin. You don't want to make certain assumptions that are difficult to justify. You want to start with assumptions, we call these the premises of your argument, that are reasonable and that ultimately produce this result. That's how a theorem works. You begin with certain premises and you get some conclusion. And I just got really nervous reading that proof. And I went through it, and what also struck me was that although one of the selling points of the Everett approach, going back to Everett's original dissertation, is that it is axiomatically so simple, what he says in his dissertation is that he just wants to keep the quantum state and Schrodinger evolution and then stop, and not add anything else.
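For reference, the Born rule that these derivations aim to recover can be stated compactly in standard textbook notation (this is the conventional formulation, not a formula quoted in the episode):

```latex
% Born rule: the probability of obtaining outcome a when measuring an
% observable on the state |psi>, with P_a the projector onto the
% eigenspace for a; the second equality holds in the nondegenerate case.
p(a) = \langle \psi \,|\, \hat{P}_a \,|\, \psi \rangle
     = \bigl| \langle a \,|\, \psi \rangle \bigr|^2
```

The debate he describes is over whether this formula can be derived from the rest of the theory plus reasonable premises, or whether it must simply be postulated.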
What struck me is that you do have to add a lot of other axioms and premises to get the theorem to work. And I made a list of these. I went through the book, and this is another task I give to students who are interested in quantum foundations: read any of the proofs, the Deutsch proof, David Wallace's proof, and keep a running list of every extra assumption, every premise, the richness axioms, the rationality axioms, assumptions about the connection that an individual should have to many copies of themselves, assumptions about what free-willed agents are allowed to do, about applying operations in an agential way, certain assumptions about the structure of the Hilbert space. There's a whole list of these things. I'm not even listing all of them. This is one that Tim Maudlin has brought up: the idea that distinct possibilities should correspond to orthogonal or perpendicular directions in Hilbert space to begin with. Tim Maudlin is like, well, why can't you consider a blend of those two things as a distinct possibility? Tim Maudlin in his review of, I think it was, David Wallace's book asked: if someone says, do you want to go the left path or the right path, and if you were to say, well, I could choose either of them, could I choose to take both paths? If the person says, sure, you could do that too, you might consider that a distinct possibility. If someone said, you have to pay $5 to take the left path or $7 to take the right path, and you said, well, how much would it cost me to take both, and the person said, $20, that seems to be a distinct thing you could choose to do. There are further assumptions that, in the years since, I've realized are sort of implicit behind this. And I began to develop the sense that it falls prey to what I now call the stone soup problem. I can make a fantastic soup, the story goes. All I need is stones and water. This is based on an old folktale. Really, you can? Yeah, yeah, I can do it. I'll start making the soup. I'll make the soup.
Oh, it already tastes delicious. But if I had just a little bit of seasoning, it would be even better. Okay, great. Actually, if I had some carrots, just a few carrots, oh, even better. It's practically done. But if I had some meat, it would be even better. And so on and so on and so on. And by the end, people are like, wow, can you believe it, this amazing soup, and only from water and stones? You just get this sense by the end that this is what you have here. And the extra assumptions you need, it's not that I have any problem in principle with making assumptions or having premises, or even assumptions that are metaphysical in nature. I mean, we need metaphysics to get out of bed in the morning. I mean, you need something. It's just a problem when you have to make a lot of them, and a lot of them are difficult to justify if you don't already believe them. For example, you have to believe that if I don't have a unique future version of myself, but instead, in the future, there are an uncountable number of copies of myself, then I should value their experiences the way I would value the experiences of one copy of myself. You know, David Wallace says this explicitly in the book. It's one of the assumptions he takes. I don't know how to justify that. You either believe it or you don't. And you have to stack so many of these, I call them speculative metaphysical hypotheses, SMHs, on top of each other. And each one reduces my Bayesian credence, my belief in the theory, so that by the end, I just felt like this was a problem. And I should say this was initially what got me worried about the Everett approach. There were other things too. And since then, I now have, I think, not just negative arguments, but more positive arguments for why I think that there are issues there. And the final thing I'll just say is that I developed a different approach to quantum theory. Oh, very good segue there. That's what I was... The best thing to do, yeah, exactly, is to do something new.
Put your money where your mouth is, absolutely. And so I wanted you to explain your new approach. My impression is that it's not just a tweak. It's a pretty radical, different starting point. Like, you don't even believe in wave functions. Right. That's right. That's right. Oh my goodness. Oh my goodness. Yes, heresy. Well, I'm a renegade. Call me a renegade if you want. Yeah. I'm a rebel. Yeah. So... Why don't we do the very short version of what the theory says, and then we'll back up and sort of say, okay, why is it that way, et cetera? Exactly. When we look around the world, we see objects, and the objects do things. They behave in various ways. And that behavior sometimes looks very predictable. We can say exactly where things end up. And sometimes the behavior looks less predictable, looks more chancey. We see coins flipping in a way that looks kind of chancey, weather looks chancey. We see objects doing things out in the world separate from us. And the natural question is, can we extend, in broad outlines, that picture down to the micro scale? Is there a sensible way to extend it down to the micro scale and still have an empirically adequate description of quantum theory? That's one way to think about this. Another way to think about this is to go back to 1923-24 and suggest that the people working on the development of quantum theory were maybe thinking too narrowly about what laws could be like. Maybe if they imagined a more general kind of law, they could retain the classical kinematics, the classical-looking picture of reality, of the stuff, the physical objects. Maybe particles, maybe fields, maybe something else, but something that's arrangements of things. And classical notions of probability. They just needed a new set of laws, a new kind of law that was unknown at the time. And so one way to view this project is, well, there's been a lot of time since then. People have gone in many different directions about what laws could be like.
There are now available to us the kinds of laws that, when married to classical kinematics and classical probability, appear to give you an empirically adequate description. So maybe we were hasty in throwing away the classical-looking ontology. So the conventional quantum person would say, well, a realist about quantum theory, we're not even going to bother with the epistemic people, but they would say, okay, what exists is the wave function of the electron, but when you measure it, you see just an electron. And that's the fundamental mystery here. And you're saying, no, what exists is the electron, but it behaves differently than you think. That's exactly right. Yeah. So notice in the story, I've gone back to 1923, 1924. This is before Schrödinger introduced the wave function. So the argument here is that you could have bypassed the introduction of the wave function and developed an empirically adequate description. Here, there is no wave function in any fundamental sense. There's just stuff, the moving parts, whatever they may be, we don't know what they are. Maybe we have to wait to find a fundamental theory of nature or a unified theory or something like that. I'm going to be very modest here and not claim I know what the true fundamental ontology is, but we have some moving parts, and the laws, these more general laws, act directly on those moving parts and carry them to the places where experiments say they're supposed to go, with the right probabilities. That's it, with no fundamental role played by the wave function as a middle manager. So what are these laws? How do they do all this magic that convinced us to introduce wave functions back in the day? Yeah. So by the '20s, there was already a theory of stochastic processes. So the word stochastic, stochastic is a subcategory of probability. We use probability for many things. Stochastic is a more narrow notion of probability.
We use the word stochastic when we're talking about, I mentioned aleatory probabilities, probabilities of chancey things happening, rolling a die, throwing a coin. We apply them to dynamical things that change, but change in a way that we assign probabilities to: throwing darts at a dartboard, for example, or shooting arrows. And shooting arrows is where the word stochastic comes from. Stochos, I think, is aim or target in Greek. So we call these stochastic things, and there were already some theories where the laws were stochastic. Famously, one of Einstein's greatest discoveries, one of his greatest breakthroughs, was using Brownian motion to give strong empirical evidence for the existence and nature of atoms. And Brownian motion is an example of a stochastic process. It's a particular kind of stochastic process, a very simple kind of stochastic process. It's a stochastic process where the probabilities you assign to a chancey thing happening depend only on the state or configuration of the system in the present. You know the state in the present, and then you can predict what is going to happen. In this case, not deterministically, not definitively, but probabilistically. And this mirrors the way that our fundamental laws of physics up until that point had been phrased. Newton's second law, F equals ma: you know the position and velocity of your particle, you can predict where it's going to go. It doesn't matter where it was. As long as you know the position and velocity at one time, you can predict the future. The Maxwell equations, the equations that describe electric and magnetic fields, have a similar property. General relativity is actually an interesting case. It's not 100% clear that in all circumstances, for all kinds of spacetimes, we have a theory quite like that. For some very nice spacetimes, we can phrase Einstein's theory of general relativity, the basic equations, as describing that kind of evolution.
It's actually not entirely clear that that is always the way to think about Einstein's theory. I'll come back to that, because that actually may be important. But for many of the theories we had up until then, and certainly we didn't have general relativity in 1905 when Einstein was doing Brownian motion, the idea was this is how laws worked. You specify what's happening in the present, and the laws tell you what happens next. This is called the Markov assumption, Markov, M-A-R-K-O-V, named after the Russian mathematician Markov. And it's baked into how we think about laws of physics. It gives very simple laws, usually. And we've been very spoiled, because the Markov assumption has served us incredibly well for centuries. It's telling that whether you think of Brownian motion or you think about the rules of textbook, orthodox, standard quantum mechanics we got from Paul Dirac and John von Neumann, those rules are also Markovian. The Schrödinger equation is Markovian. You give me the quantum state at one time, and it tells you what it'll be in the future. The measurement predictions, the Born rule, are Markovian. You tell me information about the present, and then I can generate the probability predictions from that. Now, it's not that we have never encountered physical systems whose behavior is not Markovian. We have, although a detailed analysis of that came much, much later. It's just that fundamental laws appeared to be Markov. That was the paradigm we were working in. And so by the time Everett is writing his long-form dissertation, this is in 1956, he's already aware of efforts at the time by Fritz Bopp and Imre Fenyes. He doesn't mention Fenyes. He mentions Bopp. There's apparently also a Fritz Bopp who's a physicist today who's not the Fritz Bopp from that time. So if you look up Fritz Bopp and you're like, wow, he's still alive after 100 years, it's a different Fritz Bopp.
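The Markov assumption described here can be sketched numerically. The following is a toy illustration only, with a made-up two-state chain: a Markovian law composes cleanly across intermediate times, so evolving step by step agrees with applying a composed two-step law, and nothing about the past is ever needed.

```python
import numpy as np

# Toy two-state Markov chain (the numbers are invented for illustration).
# Column j holds the probabilities of jumping from state j to each state,
# so every column sums to 1.
T = np.array([[0.9, 0.5],
              [0.1, 0.5]])

# The Markov/divisibility property: the two-step law is just the one-step
# law composed with itself. Knowing the present distribution is enough.
p0 = np.array([1.0, 0.0])          # start definitely in state 0
two_single_steps = T @ (T @ p0)    # evolve one step at a time
one_double_step = (T @ T) @ p0     # or apply the composed two-step matrix
print(np.allclose(two_single_steps, one_double_step))  # True
```

The agreement here is automatic for any Markov chain; the interesting point later in the conversation is that indivisible processes break exactly this composition property.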
But the Bopp of the 1940s, as Everett describes, was already working on trying to marry a more or less classical-looking picture of particles with stochastic, probabilistic laws directly acting on them. And Everett actually says this is an interesting program. He says he wonders where it's going to go. He says he thinks it's still more complicated than his approach, the theory of the universal wave function, what we now call the Everett, or many-worlds, approach. He says it's more complicated, but he says, you know, there's no fundamental reason to prefer a deterministic picture like Everett's, Everett's saying this, over a probabilistic picture. He says his objection is to the standard picture, where sometimes you have a deterministic picture when no one's looking at the system, and sometimes you have this observer-dependent probabilistic picture. He says that is untenable to him. You should find a way to do it one way or the other. Everett shows the deterministic way. And he just pointed out that Bopp was trying this other way. Everett called it the stochastic process interpretation of quantum theory in his thesis. So Everett was aware of this approach. People worked on these kinds of approaches for decades. From Bopp and then Fenyes, you then have the most famous version of this approach by Edward Nelson, from the 1960s to the 1980s. And so this approach is now broadly called Nelsonian stochastic mechanics. So sorry, the approach combines the idea that there is no wave function, there's an electron and it moves around, but it moves around in an intrinsically stochastic way. Correct. With the idea that the way it moves around is not simply determined by its current state, but by, in principle, its entire past history. Oh, so to make it clear, all of these approaches assumed the Markov assumption. Okay, that's what I was trying to get at. All of them assumed the Markov assumption.
They all talked about the moving parts, the particles, moving around according to laws that just take the present state of the system and then tell you probabilistically what it's going to do. So that was the structure of these rules. They were all built on things like Brownian motion. Physicists are very familiar with Brownian motion. If you'd talked to a physicist like me when I was first learning about stochastic processes, and you'd mentioned a stochastic process, I would immediately go to, oh, you mean something like Brownian motion, or a Markov chain, as we call a discrete version of Markov evolution, a Markov system, to the point at which I almost regarded stochastic and Markov as synonymous, as I think some people maybe also do. So these approaches that had existed up through the 1980s were based on trying to get the moving parts to behave in the empirically correct way by using laws that were Markovian. And when you say that there was no wave function, this is actually a little bit of a subtle question. You have to actually solve the Schrödinger equation for these things, and then take the solution and plug it into a second set of stochastic differential equations, these very complicated equations. And these equations have forward-in-time evolving parts and backward-in-time evolving parts. They're very complicated and difficult to justify from first principles. It kind of looks like someone took the predictions of standard quantum mechanics and, again, reverse-engineered some set of differential equations that would give you the same results. So it has the same kind of feeling that I ran into when I was thinking about Bohmian mechanics, trying to apply Bohmian mechanics to more general kinds of systems: it just felt like you were starting with the answer and working backward, and ending up with very, very complicated rules that seem difficult to justify on their own merits.
Now, I should be clear, I didn't know about any of this. I didn't know about Bopp or Fenyes or Nelson. I was not familiar with stochastic approaches to quantum theory in any detail. I think I'd heard of Nelson's name. He was the one person I'd heard of, but I had not carefully assessed his work before. And certainly, I didn't think that there was any possibility that an approach like that could be viable. Certainly, by the time I was an undergrad and in grad school, when I began thinking about quantum foundations, I'd basically arrived at a place where I thought maybe there was just no way to find a picture that worked in quantum theory. So we talk about how, given any inevitably finite amount of experimental data, there are always infinitely many theories that will fit it. This is called the underdetermination of theory by data. And so you need to appeal to some other kinds of criteria, like which theory is the simplest, which one is the most predictive, and so forth. You have to appeal to other things in order to pin it down. And then once you specify the theory, arguably you could have an underdetermination of interpretation by theory. So one theory may have many ways to interpret it. My worry was that we had here a case where the theory overdetermined the interpretation. The theory, textbook quantum theory, was so complicated and intricate and so difficult that there was no interpretation that would fully work at all. And I found this distressing as someone who has realist inclinations. I believe there's a real world out there. I mean, at some basic level, if measuring A and then B and then A again can change the result you got for A, and wouldn't have changed the result if you had not measured B, that really makes you feel like there's something out there you're pushing on. There's some world out there somehow. But I basically felt like maybe there was just nothing that would work.
I tried a variety of proposals for a while. I worked on the modal interpretations, which are not widely known, although in some sense they're being rediscovered, I think, in the high-energy theory community. I got interested in C*-algebraic approaches to quantum theory, and maybe they could provide a window into how to think about the foundations of quantum theory. But to be honest, I wasn't very satisfied with any of these approaches. I regarded them more as exploratory exercises, and not something I would put credence in as, okay, this I think is the direction we really need to go. So the story is, fast forward to 2022. I'm trying to prepare for a class. And by the way, I should say that, in my opinion, teaching is to doing research what stand-up is to being a comedian. You really need to be reformulating your ideas and presenting them to new eyes over and over again in order to keep fresh and to keep connected and to be creative. So I think teaching is incredibly valuable. Many of my research ideas have their roots in classes I was trying to teach. I think David Hilbert, the famous mathematician, always taught a new class every semester on a new topic, and this was how he kept fresh. I mean, you can do that if you're David Hilbert, I don't know how the rest of us can do that, but this was his trick to mastering so many different fields. So I was trying to teach a class, and I was trying to find a better way to explain quantum theory to students. And I ended up stumbling into a stochastic approach to quantum theory. That's the short story. I was trying to formulate a way to represent stochastic processes, classical stochastic processes, in a way that would make them look more like quantum theory. And my plan was to just bring the two theories close enough together so that I could more or less justify most of the ingredients of quantum theory, and then pinpoint more precisely for the students what assumption we need to give up or modify to get quantum theory.
So it wouldn't seem quite as ad hoc as just listing the austere Dirac-von Neumann axioms. And then, remarkably, what happened was I just got quantum theory. There was no gap, and I didn't understand what had happened. How could quantum theory be a stochastic process, especially one that was as simple as the one I was working with? I didn't need to gerrymander all the laws. And after some effort to understand what had happened, I discovered that I had inadvertently given up the Markov assumption. And then I immediately plumbed the research literature. Surely someone else tried to do this. Someone else said, let's try non-Markovian laws, see if we can get quantum theory out of them. And it was basically, metaphorically speaking at least, crickets. There was basically no serious effort to try to formulate quantum theory with a more or less classical notion of probability and a classical notion of the physical ingredients, the ontology, the kinematics, but with laws that were not Markovian. To be very clear, that's a huge thing to give up, right? I mean, you're saying that what the particle does next is not predictable, even probabilistically, from what its current state is. You need to know everything in its past for maybe 14 billion years. So that's a good question. How much about the past do you have to know? These non-Markovian processes are somewhat new. So one view about a non-Markovian process, I'm going to simplify things as much as I can here, is you want to predict what the system is going to do, so you need to know something about its past. But if you want to make a better prediction, you have to know more details about its past at more past moments. And if you want to make an even better prediction, you need to know even more details about its past. And in principle, to make an idealized prediction, you need to know every detail about its entire past, from the present all the way to the arbitrarily distant past.
And you could look at something like this and say, this is going to be non-predictive. In quantum field theory, we have models we call non-renormalizable theories, where for a long time people thought, well, these are theories where you have to know an infinite amount of information about the laws to make any predictions, so they're just not useful. But there was a really important development. People realized that you actually didn't need to know all of those details. You could make systematically better and better predictions without knowing all the details. This is part of what gave rise to what we now call effective field theory, treating quantum field theories as tools for making predictions that don't have to be perfect. That's not quite what happened here. But you could look at a non-Markovian physical theory, non-Markovian laws, and say, these are not predictive. I need to know an infinite amount of information about everything. The laws are infinitely complicated. How could I predict anything? I should either assume the laws are fundamentally Markov and I don't need to know the past at all, or maybe they're a little bit non-Markovian, and I need to know maybe just the infinitesimally previous moment. In some sense, Newtonian mechanics is like this. Knowing position and velocity is equivalent to knowing the position now and the position an infinitesimal moment in the past. Then with those two data points, you can figure out the future. But that's not so bad. As I said, we can reformulate that as what basically looks like a Markov process by just taking the velocity and including it in what we call the state of the system. You could think of all of these as ways of getting around this problem. The laws are fundamentally Markovian, or they're a little bit non-Markovian, or maybe they're just not Markovian, but then let's approximate them as Markovian, because we can't possibly write down laws so complicated.
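The point that knowing more past moments can sharpen predictions, and that Newtonian mechanics is "a little bit non-Markovian" in this sense, can be illustrated with a deliberately simple toy model, entirely invented here: constant-velocity trajectories where a position at one time underdetermines the future, but positions at two times fix it exactly.

```python
# Toy illustration (invented for this example): particles start at some
# position x0 between 0 and 9 with a fixed velocity v of +1 or -1, then
# move at that constant velocity. Enumerate every such trajectory, equally
# weighted, recording positions at times t = 0..5.
trajectories = [
    [x0 + t * v for t in range(6)]
    for x0 in range(10)
    for v in (-1, +1)
]

# Condition only on the present: which trajectories pass through x = 5 at
# t = 3? Two do, one moving up and one moving down, so the next position
# is 6 or 4 with equal weight. The present alone does not decide.
through_5 = [tr for tr in trajectories if tr[3] == 5]
next_positions = sorted(tr[4] for tr in through_5)

# Condition on two past moments: x = 4 at t = 2 AND x = 5 at t = 3. That
# extra bit of history fixes the velocity, so the future is determined.
through_45 = [tr for tr in trajectories if tr[2] == 4 and tr[3] == 5]
print(next_positions, [tr[4] for tr in through_45])  # [4, 6] [6]
```

Folding the velocity into the state, as the conversation suggests, is exactly the move of conditioning on two moments at once, which restores a Markov-looking description.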
When you deal with non-fundamental systems, I'm nervous to use the word fundamental again, but let's just call them systems where there's some notion of emergence going on, systems in biology, systems in neuroscience, psychology, social sciences, environmental systems, we imagine that these systems are non-Markovian. Effectively, there are details in the present that we can't capture, because we can't fine-grain or zoom into the present accurately enough to figure out everything going on. There are some details written into the fine details of the present moment that are recording the past. Because we can't see those, we have to treat the past like it matters. In principle, if we can't know exactly what the present is, we should be keeping track of the past in some way to make predictions. When people use these models, they often make approximations. They say, let's pretend it's Markov, let's pretend we can ignore the past. This doesn't always work. I cannot treat your brain as a Markov process. Your memory does matter, although I guess we're all becoming more Markovian as time goes by, sadly. In the sense that we're aging? Not in the sense that we have less memory. Well, that's what I meant. We do have more memories when we get older, but eventually, eventually, we start to lose our memory. So those were basically the approaches that were available. What I happened upon in 2022 was a different way to formulate the laws of a non-Markovian system. It turns out that you can in fact specify just a few simple rules. And that doesn't pin down a fully realized, exact, complete history-based non-Markovian process. It defines a collection of distinct such non-Markovian processes, each of which, to borrow language from my colleague Alex Meehan, in philosophy at the University of Wisconsin, is called a realizer.
Given a single one of these new processes that I stumbled upon in 2022, there are many ways to realize it, to fill in all the details, to assign detailed laws to all the detailed histories. Any such way to assign laws to all the detailed histories is called one non-Markovian realizer. And the model I introduced didn't single out one of them. It's what we would call an equivalence class of realizers. Each realizer is infinitely complicated, but the laws that define one of my models, those laws are very simple. And I guess there's a spirit of this idea that one finds throughout physics. When we talk about a thermodynamic system, when we assign it a macrostate, a coarse-grained state, we assign it a few simple properties. And we know that we're really describing a very large number of different microstates, different fundamental possibilities. In some spiritual way, this is a little bit like that. You specify a couple of simple rules, and then you have all these different ways it could in principle be realized. A model where you're specifying just a few of these simple rules turns out to fail to have a certain kind of property that we take for granted in other physical theories. This is the property that if I want to describe how a system will evolve in time from an initial time to a final time, my laws are sufficiently rich that I can describe how the system will evolve from the initial time to any choice of intermediate time in between. And then I can ask, my model says the state will be such and such, and then I can take that state, and my laws are rich enough that they can tell me how the system will evolve from that intermediate state to the final state. In other words, I can take any interval of time from an initial to a final time, and I can divide it up into smaller intervals of time, and my laws will be rich enough that they'll tell me how to get from any one moment to any other.
That property is lost in these models, so these models are indivisible, and that is what I call them. And then I looked in the literature, not just for non-Markovian approaches to quantum theory, but also to see whether anyone had played around with indivisible stochastic models, and it turned out that the term did show up in the literature, in a preprint article by Simon Milz and Kavan Modi, who work in quantum information. They have a beautiful review article. It's open access, it's in PRX, Physical Review X, or PRX Quantum, I don't remember which it is, but you can look it up: Simon Milz, M-I-L-Z, and Kavan Modi, M-O-D-I. And they have a wonderful review paper on stochastic processes in classical physics and stochastic processes in quantum physics. They mean something a little different by stochastic processes in quantum physics. They mean: assume the whole Hilbert space, all the mathematics, and then just use analogies with the classical case, but applied to the Hilbert-space ingredients. That is not what I was doing. I was trying to get quantum theory out of applying stochastic processes to classical ingredients. But in the first part of their review, where they're going over how classical stochastic processes work, in, like, figure six, in a throwaway remark, they say: you could imagine having an indivisible stochastic process. It would fail to have this divisibility property. And they used the capital Greek letter gamma, which is also the symbol I had been using. So my immediate impulse was, oh my gosh, I've been scooped. They're using the same symbol, they're using the same name, for exactly the same kind of property. And so I eventually reached out to both of them, after I had probed this work and done some work to show that you are able to get an empirically adequate description of quantum theory from these systems. I contacted them, and it was lovely speaking to them. They're lovely people.
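A minimal numerical sketch of this indivisibility, in the spirit of (though far simpler than) the construction discussed here: build candidate transition probabilities from the squared magnitudes of a rotating qubit's unitary matrix entries. The specific unitary is chosen purely for illustration. At every time the result is a perfectly good stochastic matrix, yet the full-interval law does not factor through the intermediate time.

```python
import numpy as np

# Candidate transition probabilities Gamma(t)_ji = |U(t)_ji|^2 for a qubit
# rotating under U(t) = [[cos t, -i sin t], [-i sin t, cos t]].
def gamma(t):
    return np.array([[np.cos(t)**2, np.sin(t)**2],
                     [np.sin(t)**2, np.cos(t)**2]])

half = gamma(np.pi / 4)   # [[0.5, 0.5], [0.5, 0.5]]
full = gamma(np.pi / 2)   # [[0, 1], [1, 0]]

# Each column is a genuine probability distribution at each time...
print(np.allclose(half.sum(axis=0), 1.0), np.allclose(full.sum(axis=0), 1.0))

# ...but the full-interval law is NOT the composition of the two
# half-interval laws, so the process cannot be divided at pi/4.
print(np.allclose(full, half @ half))  # False
```

In fact no stochastic matrix at all can bridge the gap: any stochastic matrix multiplied into the all-one-half matrix yields another matrix with constant rows, which can never equal the flip matrix. So the failure is genuine indivisibility, not mere ignorance of an intermediate law.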
They hadn't thought about using a process like this, applied to classical probability, classical ingredients, and getting quantum theory out. They just sort of mentioned it. They didn't really push it to see how far it would go. But that's what I mean when I say that these laws weren't available in the 1920s. Sure. Right. I did independently develop the idea, but I was also thinking about stochastic processes a lot. I'd read a lot of papers on them by this point. And so clearly the ideas were now available in a way that they just weren't in the 1920s. We didn't really have a comprehensive theory of classical stochastic processes until, I think, the 1960s, something like that. This is decades after quantum theory had calcified. So there's a good reason to think that this couldn't have been done back then. Although it would be interesting to wonder, if Simon Milz and Kavan Modi had lived in 1910, how the history of quantum theory would have gone. So I'm trying to understand the ontology of your theory. Is there an actual definite, although impossible to deterministically predict, trajectory of the electron over time, at every moment of time? Well, there's not a trajectory at a moment, but across moments, yes. At every moment of time. In other words, yes. If you ask, is the electron at some specific place at every moment in time, the answer is yes. And the totality of all those locations is a trajectory. Yes. And it's configurations we're talking about, not momenta. Correct. So we make a distinction in physics. I know you know this, Sean, but for everyone listening: by configurations, we mean the most rudimentary way to talk about arrangements of a system. So for particles, that's just where they're located. For fields, it's just how intense they are, which way they're pointing. When you include velocities as well, or momenta as well, you talk about both the position and the velocity of a particle. Now you're not talking about configurations anymore.
You're talking about, we use the word state for them. State is a little bit ambiguous. Sometimes we use the words phase space point, or something like that. But here I mean the most rudimentary thing, just the locations of the particles. But I want to make clear, I don't know that particles are fundamental. I don't know what the fundamental ontology is. So if you're modeling particles, yes. On the theory where you're modeling particles, the theory says that each particle has a location at every moment. Are the trajectories smooth? No. In general, they are not smooth. Are they continuous? No. In general, they're nowhere continuous. Okay. So literally, the electron does not necessarily travel in between points A and B, even if those events are as close together in time as you want. That's correct. Yes. Now, it's not that we have no control over what's going on. There are laws for an indivisible stochastic process. They're very, very simple. What you get out of the theory is an ability to say what the probability is, at any moment in time, of your system being in a given configuration. Not the probability of what measurement result you'll get. That's later on, that's empirical, that's a prediction of the theory. And that's what textbook quantum theory gives you. This new approach, this indivisible approach to quantum theory, the indivisible theory, indivisible quantum theory, whatever you want to call it, says you can talk about the probability that the system is in some configuration. If you're talking about particles, the probability that they're in a particular arrangement at every moment in time, and they are in only one of those arrangements at any given moment in time. You can also talk about the probability for the particles to be in other arrangements, given what arrangement they were in at particular moments in time. These are called conditional probabilities, and the theory gives rudimentary conditional probabilities.
You do have some laws, but that's basically all you have. That's not enough to give you trajectories that are guaranteed to be everywhere continuous. For particles that is a little weird; for fields it's a little bit less weird. The idea that a particle can jump from one place to another is a little weird, but the idea that fields, isolated intensities at various points, what we call degrees of freedom, could be fluctuating in some way doesn't quite seem as weird to me. And maybe this picture is a little more amenable to a field view, but only because of our intuitions. I don't think there's any fundamental reason you couldn't think of particles this way. There is some structure, not everything is possible, but yes, for elementary particles you would expect to see that sort of jumping behavior. Importantly, as systems get bigger and bigger, in the sense of putting together more and more and more particles, the weird discontinuous behavior of the particles begins to look on larger scales like continuous motion. Eventually you get into the regime of mesoscopic, middle scale, and eventually macroscopic, more or less human scale physics, in which you have what we call collective degrees of freedom. You've taken all the individual moving parts and you're thinking of them now as sort of wholes, as big systems: people, rocks, mountains, coins, dice, stars, planets. At that scale, those objects will move along more or less continuous trajectories, in some cases probabilistic. You can show that when they're probabilistic, they'll now essentially be Markovian, apart from the usual approximations. If you're doing the brain, it's not going to look Markovian, but at some level, you begin to get back the kinds of stochastic processes we're familiar with in classical physics, and in the simplest cases, you get deterministic behavior like Newton's laws.
So if I know the state of the particle as a point in configuration space, a point in ordinary three-dimensional space right now, but my dynamics are non-Markovian, is it true that in principle, I would need to know its entire past trajectory to predict what it's going to do next? That's a very good question. If you know the location of the particle, that's an interesting... So when you say "if you know," what do you mean? Do you mean that you have done some kind of measurement to determine where it is, or that you have a God's eye view of the world and you can just know where the particle is? Let's be God for the moment. Okay, yeah. Well, it's a bit blasphemous, but okay, let's play the game. I haven't been struck down yet. Exactly. Yeah. So if you have a God's eye view, then yes, you would have to know something about the past of the particle. The question is how much would you have to know about the past in order to predict... Okay, and the question is, what are you predicting? If you want to... So I need to step back for a second. In this picture, you can presume that the particle, in fact, has some trajectory that we don't know, but maybe God knows. From a God's eye view, there's some particular trajectory. Now, from God's eye view, there's no need for laws at all. God just looks down at the fabric of space and time and all the particles or fields or whatever, and you just see, okay, as God, I can just see the whole trajectory. I don't need quantum theory, I don't need laws of physics, I don't need probabilities, I don't need any of that stuff. So this is why it's tricky to talk about the God's eye view, because now we're assuming that we have like a demigod who doesn't know the whole trajectory, but nonetheless can know where it is at one moment, but no more, in a way that we regular humans can't.
I guess if you're a demigod in this sense, you have some access to this secret information, but not the whole trajectory, then in order to predict what it's going to do, you have to incorporate information about the last so-called division event. So what is a division event? When you model an indivisible process, you begin with some time, you call it the initial time, and you get a bunch of conditional probabilities of the form given that the system is in this configuration at the initial time, here is the probability it will be in that configuration at some other time that you can choose. And the time you choose, the target time is completely adjustable, there's no discreteness of time, time is continuous. And if you want to think in cosmic scales, you could take the initial time, I guess, to be the beginning of the universe or something, I don't know. I'm not sure if the universe had to begin with one of these things, who knows. As systems interact with other systems though, as measuring devices come in, but you don't need measuring devices, even relatively simple systems could come in and interact with your system or an environment or whatever. As systems interact with each other, what you find is that when you treat the probabilistic behavior of these systems using ordinary probability theory, and you allow the system in question, the particle in this case, to interact with some other system, call it the environment or it doesn't have to be a big environment, it could be even just an individual particle, it's fine. In such a way that the other system gets a reading, develops a dependence on the configuration of the system it was looking at. And then you classically marginalize, classical marginalization is just ordinary probability speak for summing over the possibilities of the other system. 
What you find is the system you had, the subject system, the system you were looking at, the system you were studying, develops a new conditioning event, a new point at which you can condition. At these moments, the indivisible evolution, the laws that cannot be divided over time, at moments like those, they can be divided. You can evolve the system to such a moment, a division event, and you could stop the evolution there. The system has revealed its configuration in some way to the rest of the universe, even to one other small system in the universe. And then the laws of your system are rich enough to tell you how to condition from that moment to the future. Is it basically a measurement as we think of it casually? It's decoherence. Decoherence, okay. You can show that what looks like decoherence in the usual Hilbert space picture is just the generation of a division event in this picture. But of course, the metaphysics is very different. We're not making branches, but you can literally map the mathematics of decoherence directly to the mathematics of the generation of a division event. But the answer to the question, what do I need to know, given the current state of the particle, to be able to predict what its future will be, is the entire trajectory between now and the last division event? You need to know its configuration at the last division event. You don't need to know the trajectory in between. You don't need to know the detailed trajectory. Yes. If you want to predict, okay, let me back up. If your goal is to predict what you will see in an experiment, if you were to measure the location of the particle in the future, then you only need to know what it was doing at its last division event. But actually, even then, you can actually get away with knowing less. And I can tell you a little bit what that means.
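In symbols, the conditioning structure just described might be summarized like this (my paraphrase of the discussion, with Γ denoting the theory's conditional probabilities):

```latex
% Divisible (Markov) process: the law factorizes at ANY intermediate time t':
\[
  \Gamma(t \mid t_0) \;=\; \Gamma(t \mid t')\,\Gamma(t' \mid t_0)
  \qquad \text{for all } t_0 < t' < t .
\]
% Indivisible process: this factorization fails for generic t', and becomes
% available only at a division event t_d, where the system has revealed its
% configuration to another system:
\[
  \Gamma(t \mid t_0) \;=\; \Gamma(t \mid t_d)\,\Gamma(t_d \mid t_0)
  \qquad \text{only at division events } t_d .
\]
```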
But certainly, one view is, certainly if you knew what its configuration was at a previous division event or at the initial event, then you would have the resources, the theory would have laws that would tell you what you would see, what probabilities you assign to experiments done in the future. Now, if your question is, what information do we need to be able to assign a probability to it simply being in a particular configuration, actually, that's still the same. The way the theory works is, if you know its configuration at the last division event, and you know the laws, which are given to you by the theory, then you can say what probability it will have of being in any particular configuration at any other time, at any later time. What you don't have is probabilities for the detailed trajectory. For that, you would need to pick one of the infinitely many different realizers. In other words, there are some things that this theory will predict. If you're a demigod and you know the configuration of the particle, I'm sorry, if you've done a measurement and you know there's a division event and the configuration of the system is specified at that last division event, then the laws of the theory are rich enough to tell you the probabilities with which it will be in whatever configuration you ask about at other times. If you want to know the detailed path it will take, or you want to assign detailed probabilities to the detailed paths it could take, then you're asking for more laws than the indivisible process gives you. Those additional laws would correspond to some particular non-Markovian realizer of the process. How objectively well-defined are division events? Division events are as objectively well-defined as decoherent branches. So not very. Well, it depends on the system.
For a small system in contact with a pretty big environment, you're going to get really sharply defined division events, but there will be some error, and this means that all the laws, the effective laws you would use to describe the system, are going to have error terms in them. But you have to pick out a separation or factorization of system and environment, for example. That's true, yeah. So the idea is that you want to model, let's say, five particles. You've got five particles of apparatus. You would pick, obviously, one system is the five particles. If the particles are interacting with the atoms in the room, you take your environment to be the atoms in the room. If you wanted to pick only half the atoms in the room, you could do that. That would give you basically the same answer. There are going to be tiny, tiny discrepancies between those choices. That means that all the laws you use will have extremely small discrepancies in them. But given how incredibly sharp decoherence is, the degree to which, so one way to characterize how sharp a division event is, and now this is going to get a little bit in the weeds, but when we write down a quantum state, and the quantum state evolves into some blend or superposition, there's not a unique way to represent that superposition. But if you decided to single out some particular feature of the system, like its position, let's say, you want to focus on position, then once you've fixed position as the thing you want to single out, you can talk about what that superposition looks like as a superposition of different possible positions, roughly speaking. Then you can talk about the degree to which the terms in that superposition interfere with each other. These are called coherences. There's also a way to formulate this in terms of off-diagonal entries of density matrices, but I'm not going to talk about that. We have a precise way to calculate how significant the sizes of those interference terms, those coherences, are.
When a system undergoes decoherence by an appreciably big environment, and it doesn't need to be that big, the cosmic microwave background is significant enough for particles of dust floating in nearly empty space, when there's a significant enough environment, going back to the early days of decoherence, those coherences reduce to an extremely small number. They become exponentially small in time and also exponentially small in the number of moving parts, of degrees of freedom, of the environment. Within tiny fractions of a second, I mean, for a dust particle floating in the universe, it's like 10 to the negative 40 seconds, some incredibly small amount of time, I may be somewhat off, but the numbers are of that scale, those coherences are suppressed almost to zero. They're basically undetectable. That's how precisely defined a division event is. So if I have a single spin that is in a superposition of spin up and spin down, if I entangle it with one more spin, is that a division event? Spin is a little bit of a tricky thing. Replace it with the position of a particle, I mean, a two-state system. I could come back to why spin is tricky a little bit later if you'd like, because this gets into a distinction between beables and emergibles, which we can talk about, but yeah, we can pick something else. You said position, so one particle interacting with another particle's position. One particle that is in a superposition of two different positions. Right, and interacts with another particle. It becomes entangled with another two-state system. Yeah, yeah. So no, I mean, I haven't done every conceivable calculation one could do, but in this particular case, I'm pretty sure that you don't get a robust division event in this circumstance. You need more than just one particle. What if there were 12 particles? The more particles you have, the better you get. Well, actually, no, let me take that back. I did this calculation. I forgot.
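The exponential suppression in the number of environmental degrees of freedom can be seen in a deliberately crude toy model. This is my own illustration, not a calculation from the episode; the per-qubit angle `theta` is an arbitrary "measurement strength":

```python
# Toy decoherence model: a system in a superposition of two branches entangles
# with N environment qubits.  Conditioned on the branch, each environment qubit
# ends in |e0> = |0> or |e1> = cos(theta)|0> + sin(theta)|1>.  The surviving
# coherence (off-diagonal of the system's reduced density matrix) is the
# overlap |<E0|E1>| = |cos(theta)|^N, exponentially small in N.
import numpy as np

theta = 0.3                      # how strongly each qubit "reads" the system
overlap = np.cos(theta)          # single-qubit conditioned-state overlap

for N in [1, 10, 100, 1000]:
    coherence = overlap ** N     # product over independent environment qubits
    print(N, coherence)
```

Even though each qubit barely reads the system (cos 0.3 ≈ 0.955), a thousand of them suppress the coherence to roughly 10^-20, which is the sense in which division events are "as objectively defined as decoherent branches": extremely sharp, but never exactly perfect.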
I actually did this calculation. So here's a concrete realization of your question. Let's do the double slit experiment. This is a concrete realization of exactly this question. Suppose that we send a particle toward a wall with two small holes in it. We imagine some of the time the particle will not succeed in getting to the holes, but every once in a while, maybe it goes through the holes. And then far away on the other side of this wall, there is a detection screen where particles can land. And when they land, they stick, or they set off a light, or they get registered in some memory system or something like that. They leave a mark in some sense. And we imagine sending particles in one at a time. One particle after another, many, many, many particles, always one at a time. And over time, we're going to get some distribution or pattern of dots, a histogram of dots, on the detection screen. Now, the question is, can we predict what that distribution of dots will look like if we're sending small pebbles through the holes? The prediction is that we'll get a distribution that's centered on the part of the detection screen that's in the middle. And then as you go sort of farther out from the middle of the detection screen, the dots get more and more sparse. There's some sense in which that distribution is a blend of two distributions. It's the distribution you would have got if you only had one hole, blended with the distribution you would have got if you'd only had the other hole. And I have your book, by the way, next to me, Something Deeply Hidden. I recommend it to everybody. If you haven't read Sean's book, it's absolutely amazing. Something Deeply Hidden. I strongly endorse it. Sean, you should get your book. And you cover that. I re-read it before we had this discussion. So it's that good. Yes, I actually read it. It's that good. So you cover the double slit experiment in the book.
If you send quantum mechanical particles, photons, or actual electrons, in one at a time, and when this was actually done with electrons, I think it wasn't even experimentally done until like the 60s or 70s, you do not get that classical distribution. You get a very funny looking distribution with kind of peaks and valleys, certain places where there are lots of dots and certain places where there are very few or no dots. And when you do this experiment like 10,000 times, you get this band structure. You get these bands, and depending on how you've set up the experiment, it may look like a line of bands. It may look like concentric circles. But now to be clear, you don't see the bands on any one run of the experiment. On every run of the experiment, you see one dot. It's just that over many, many runs of the experiment, the dots build up this particular pattern, which is very, very weird. Although this particular experiment was not done with electrons until sometime after, it was understood well before what the theoretical prediction from quantum mechanics should be. For example, Heisenberg talks about this in his book, Physics and Philosophy. He talks specifically about this experiment. And this experiment is very weird, because if you wonder, gosh, I wonder which hole the electron is going through. Maybe every time the electron goes through this experiment, I will simply look with my eyes and see which hole it's going through. Well, what does it mean to look with your eyes? You have to scatter some particle off the electron and do something in order to get something into your eyes. Or if you have some device, you have to somehow have the device interact with the electron, but whatever. You try to interact with the electron in the slightest, most sensitive, least disturbing way you could possibly imagine. Just enough so you can figure out definitively which hole it goes through in every run of the experiment. And when you do this, the interference pattern markedly changes.
It almost completely goes away. It turns out there's a little bit of a residual effect if the screen is very far away, because after you've measured the electron, it will go back to being a little bit quantum mechanical on its way to the screen. And so you'll get a little bit of weirdness, but not nearly as much as you did before. And so you might go, okay, well, let me imagine doing this with the most rudimentary system I could imagine. Let me imagine not putting a person or a complicated measuring device, but just the simplest quantum mechanical system you could imagine, a single quantum bit, or qubit, right near the holes. And the qubit can be on or off. It's a binary switch, a quantum binary switch. Quantum mechanically it can be sort of a blend of those, but classically, it's like on or off. And you program it so that it's definitely off, and it stays off if the particle goes down through the lower slit, but it switches to on if it goes through the upper slit. So you might think this is the most rudimentary measurement you could possibly make, because then what you do is, after the particle lands, you go and you look at the qubit. And if it was off, you know that the particle must have gone through the lower hole. And if it's on, it must have gone through the upper hole, on each run of the experiment. But this is enough to actually ruin the interference pattern. So decoherence has happened. And it's happened through an interaction of the particle with the simplest possible kind of qubit system. Now, I think a natural question to ask is, what could possibly be going on here? One view is that, well, this kind of looks like waves. I mean, you don't see a wave in the experiment. What you see is dots, but the pattern of dots kind of looks like the pattern you might imagine if waves were going through the slits.
There was an experiment done centuries ago by Thomas Young, the Young double slit experiment, that was supposed to establish that light was really a wave. And there you're sending real waves of light, or light rays or whatever, to the slits. I guess they didn't know that they were waves yet. The point was to show that they were waves. But you shine a light at these slits. And on the other side, you just see bands, right? You're not seeing dots, you see bands. And so you think that light really is a wave, a wave moving in 3D space, really moving through the holes. That sort of picture is a little weird in the quantum case. And the reason why you know it's weird immediately is because if you decided to send two particles into the experiment instead of one, the pattern you get would look very odd. Because the pattern you get actually looks like the projection down to 3D space of a wave pattern in six dimensional space. And the reason is that when you send two particles in at a time, to the extent that we use something like Schrodinger's waves to describe the particles, the waves live not in physical space. The waves live in configuration space. This is distinct from Hilbert space. It's the space of possible classical arrangements of the particles. And for two particles, each of which has three position coordinates, this configuration space has six coordinates. So it's a six dimensional space. And now this does not look like the Young double slit experiment anymore. And if you send three particles or five or eight particles through the experiment, you really need to treat the wave as living in, I guess, in the eight particle case, 24 dimensional space. And then I don't really know where the holes are. So one view is that when you send one particle in, the particle's wave, the Schrodinger wave, is like going through both holes.
But once you've got more than one particle in the system, and the waves live in 3n-dimensional space if you've got n particles, I don't even know that you can talk about the waves going through holes. I don't even know what that means anymore. So we tell this picture to intro students, it's in the first chapter of volume three of the Feynman Lectures on Physics, but I think this picture is actually very misleading. You get an intuitive picture only in very simple cases. And beyond that, it's not intuitive at all, at least to me. But coming back to your question, we do get decoherence even though we had only one qubit involved. And so one view of this, the many worlds view, is that there is a Schrodinger wave evolving in some sense, and the qubit has produced decoherence. And then there's a world in which the particle goes through one slit, and a world in which it goes through the other slit. In a Bohmian view, the particle only goes through one slit, but the pilot wave guiding the particle kind of goes through two. But this actually depends a little bit on whether you're the sort of Bohmian who thinks pilot waves are physical objects, or whether you're the kind who thinks that they're just expressions of laws. So it's a little bit murky. Textbook quantum theory says that the qubit, well, it's still complicated. Textbook quantum theory says there's decoherence, but no measurement yet, because the qubit doesn't count as an observer. The measurement only happens once the particle hits the detection screen. And the decoherence has changed what kind of measurement will happen when it hits the detection screen. What does this look like in the indivisible stochastic approach? When you send your electrons in one at a time into this apparatus, you know the initial configuration of the electron as it was being sent. I mean, to do a good experiment, you need to set up careful initial conditions.
That's how we set up an experiment. And then the electron does not get read by anything. It doesn't interact in any way that mutually exchanges information. It doesn't reveal its configuration to any other systems through the experiment until it reaches the detection screen, when it does. And what that means is that this is not a Markov process. So if the idea is, well, let's just ask how the electron will evolve as it goes from its initial emission point to the walls. And then once we get to the walls, we can ask, was it in the upper hole or was it in the lower hole? Suppose it's in the upper hole. Then we have laws that will tell us how it will evolve after that. Or suppose it was in the lower hole. We have laws that evolve it after that. If we do that, we get more or less the pattern you see in the classical case. And no waves, no obvious interference. But notice this entails a division. The laws, you're assuming, divide at the walls. It entails something like a Markov assumption. This Markov assumption is not baked into the fundamental laws of probability theory. It's a physical dynamical assumption that the dynamical behavior of this process, the laws that govern how it will behave, have this feature that I can evolve the system from the initial time to an intermediate moment, any intermediate moment of my choosing, ask, okay, well, given it could be in this configuration or that one, and then given either of those two configurations, the laws will then tell me how it will evolve after that. That's a divisible process, at least divisible at the walls. If you do not assume the process is divisible, if you simply drop that assumption, you will generically get an interference pattern on the other side. The particle is always in one location. It's never in both holes. But nonetheless, without making that division assumption, the generic prediction is there will be an interference pattern on the other side.
And the simplest interference pattern looks a lot like the interference pattern we get in quantum mechanics. If you put a qubit near the holes, and the qubit interacts with the electron as it's going through the holes, then you can just show that what looked like decoherence in the standard quantum formalism looks, in this picture with just classical probabilities, like a division event. And the laws describing the electron now take you from the walls to the screen. Now you have a divisible process of the kind that would agree with the classical prediction. It sounds like the division events play a crucial role in the formulation of your theory. Yes, they play a crucial role. That's right. And I should say, by the way, that if the qubit is not perfect, if the qubit kind of doesn't always respond deterministically, because here we're assuming the qubit very deterministically goes one way or the other, if we assume that the qubit is not perfect, you won't get a good division event. And the worse the qubit does, the less reliable it is, eventually, if it's really not reliable at all, you just don't get a division event at all. Yeah. So that worries me. Division events are very important for the formulation of your theory, but there can be a spectrum of possibilities in between having one and not having one. Yeah, I agree completely with that. Now the question is, what are the ramifications of this imprecision? One of the things that worried me about the Everett approach, separate from the problem of probability, was that the worlds in the Everett approach, the branches, had a similar kind of fuzziness to them.
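The two predictions being contrasted, dividing the law at the wall versus not dividing it, can be sketched numerically. This uses the standard two-path wave calculation as a stand-in for the indivisible theory's generic prediction (my own toy, not the episode's math; the wavelength, slit separation, and screen distance are arbitrary):

```python
# Two-slit toy: exact path lengths from each slit to screen position x.
# "Divided at the wall" (Markov assumption): add probabilities -> no fringes.
# Indivisible (no conditioning at the wall): add amplitudes -> fringes.
import numpy as np

lam, d, L = 0.5, 50.0, 1000.0            # wavelength, slit separation, distance
x = np.linspace(-60, 60, 2001)           # positions along the detection screen

r1 = np.sqrt(L**2 + (x - d / 2) ** 2)    # path length from slit 1
r2 = np.sqrt(L**2 + (x + d / 2) ** 2)    # path length from slit 2
psi1 = np.exp(2j * np.pi * r1 / lam) / r1
psi2 = np.exp(2j * np.pi * r2 / lam) / r2

p_divided = np.abs(psi1) ** 2 + np.abs(psi2) ** 2  # smooth, classical-looking
p_indivisible = np.abs(psi1 + psi2) ** 2           # banded interference pattern

# The undivided pattern has deep near-zero valleys and peaks roughly twice as
# high as the divided pattern's maximum:
print(p_indivisible.max() / p_divided.max())
print(p_indivisible.min() / p_divided.min())
```

Conditioning at the wall (the Markov assumption) erases the cross term between the two paths, which is exactly the difference between the pebble-like histogram and the banded one described above.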
The fuzziness arguably becomes negligibly small once the branches are macroscopic enough, once the interference effects are small enough, but there's always some fuzziness, and this really, really worried me. In this picture, the fuzziness is only a fuzziness in the laws. On a model where you're taking electrons to be fundamental, which of course may not be how nature is, but if your model is one in which the electrons are fundamental particles, what is not fuzzy is that the electron is always in one and only one place. That's not fuzzy. What's a little bit fuzzy is the laws that we use to describe the electron, but having effective laws that are not exactly correct, that's as old as dirt in physics. Every effective field theory we use in physics is a theory where the laws are only so accurate. We don't assume the laws are exactly perfect. I'm willing to bite the bullet on that and say that the laws for a quantum system in this picture, like an electron, are never going to be exactly precise, but they will be extremely sharp in many real-world scenarios, and we'll be able to make definitive predictions about its behavior. There does seem to be a difference between saying that laws are only approximately correct, like Newtonian gravity in the solar system, versus saying laws are not precisely defined, which Newtonian gravity is. It's just not correct. That's a bit tricky, because if you say, for example, that I want to describe the behavior of the planets in the solar system on Newtonian gravity, let's not worry about general relativity, that's actually really hard to do, because in principle I have to worry about all the other objects in the universe, realistically. Now, they give tiny effects. Again, we're assuming a very flat-footed Newtonian gravitational scheme in which there's no time delay of gravitational effects. There's instantaneous action at a distance.
The star Betelgeuse is instantly affecting us, but this is just for the sake of argument. In principle, you need to know where Betelgeuse is. You need to know where the stars in the farthest part of the galaxy are to get an exact set of laws for the behavior of Earth's motion around the Sun. Sorry, that's just not right: I have the exact laws, but to get the exact prediction from those laws, I need to know the exact configuration of an enormous amount of stuff. I see. You're making a principled distinction between what the law is. The law is of the form: given two masses, Newtonian gravity specifies a law of the form Gm1m2 over r squared, and I just don't know all of the things contributing to that. I completely bite that bullet. My attitude about this is, you could think of it in two ways. One way is to think about the law as a principle, and the other is to think about the law as a specific set of dynamics for a certain system. If by a law what you mean is the principle that Newtonian gravity is Newton's constant times m1m2 over r squared, I agree that's an exact law. That's an exact law, and it's just a question of applying it. But if you consider Earth as just some physical system in the world, and you want to model that physical system, Earth alone, you need to write down a differential equation for Earth's behavior, and that differential equation is inherently going to be imprecise, because we can't take into account all of the effects. What I'm describing is somewhat like that. I would say what I'm describing is a little bit like that, but a little bit not like that. For maybe the whole universe, there is some exact indivisible process that's unfolding, and that's some exact law. Maybe there are some truly isolated systems floating in the universe, not dark matter, but dark, dark matter that really is not interacting with anything.
Maybe those systems are evolving according to some exact indivisible stochastic process that we don't know about. But realistic subsystems of the universe don't have precisely defined laws at literally 100% precision, although their laws may be defined to some very, very high degree of precision. What I will simply say to you, though, is I agree this is a very interesting property of laws. What about an Everettian quantum theory? The Schrodinger equation is the only law. That's as definite as I can get. But whose Schrodinger equation? The Schrodinger equation with the Hamiltonian of the universe. Of the universe. Good. So at the scale of the whole universe, I agree. The claim here is that in place of saying we have the Schrodinger equation for the whole universe evolving unitarily in some quantum state, some universal state vector, some universal wave function, instead, the entire universe is some indivisible stochastic process. The universe is going through some sequence of configurations. And those configurations are exactly described by one of these indivisible stochastic processes. Once we start looking at subsystems of the universe and trying to figure out what their behavior is, what their laws are, we're going to start incurring approximations. And the same is true if we start trying to write down quantum mechanical laws for subsystems of an Everettian universe. I don't quite understand that, because if the system is the whole universe, there can't be any division events. It can't become entangled with anything. That's true. At the level of the whole universe, there will not be division events. That's right. Just like at the level of the whole universe in the Everett approach, you're not getting environmental decoherence either. So what is the data I need to predict what the universe is going to do? So in Everettian quantum theory, we need to begin with some initial quantum state. And that initial quantum state evolves according to the Schrodinger equation.
In the indivisible approach, we begin with some initial conditioning event. And then from that initial conditioning event, the universe evolves exactly according to an indivisible process. Now, that indivisible process is not going to be particularly perspicuous. It's not going to be particularly informative, any more than the universal evolution of the universal wave function is, until you start asking what the subsystems of the universe are and what their effective laws are. I'm usually not interested in the unitary evolution of the entire universe in the Everett approach. I'm still interested in understanding what a particular object on my desk is going to do, what will happen in a particular tabletop experiment, what a planet will do, what a quantum mechanical particle will do. And for that, I'm using effective versions; I've done some kind of work to go from the exact Schrödinger evolution of the whole universe down to these individual systems. I'll also push back a little bit on the notion that we can necessarily be sure we have a universe described by unitary time evolution. Our observable universe, as we all know, is not a closed system. We can only see so far into the universe, but we think that there's no boundary. There are things we think are outside of our observable universe, and maybe things are entering or leaving, I don't know. It's possible that there's some appropriately defined megaverse, or maybe the whole universe, not the observable universe but everything, is some unitarily evolving quantum system. That's possible. It's also possible it's not true. I mean, there are a couple of reasons to be nervous about this. One is it would require a pretty wild extrapolation of the Schrödinger equation from where we know that it works.
We have strong evidence of the Schrödinger equation for basically microscopic systems and some very carefully controlled mesoscopic systems at low temperatures, but certainly systems that are not bigger than a person. We're extrapolating that equation, not just to the observable universe, but to whatever larger total universe there is. Importantly, we're also neglecting general relativity. We're neglecting gravity, which we think is an important consideration when you're talking about cosmic scales. I'm not 100% sure that there is a meaningful, well-defined universal wave function. This brings back, this was another concern I had about the Everett approach. I worry that the Everett approach depends sensitively on something that we can't possibly ever know, or at least there's no good reason to think we'll ever know. I have, on one of my desks, a Russian doll, a nested doll. I think this is a nice metaphor. A nested doll is one of these dolls where the doll opens up and then inside there's a smaller doll, and that opens up and inside there's a smaller doll, and so on. It's kind of a reverse nested doll for our universe. We begin with tiny quantum mechanical systems that are in some cases easy to isolate from environmental effects, and they evolve pretty much unitarily, although never perfectly, but pretty much unitarily. As the systems get bigger and bigger and warmer and warmer, they deviate ever more from unitary evolution because of environmental interactions. They become what we call open systems. We have to use open quantum system dynamics to describe them, and these do not look like the Schrödinger equation. There are effective equations we use for these kinds of things. I won't go into the technical details, but if people are interested, you can look up the GKLS equation or Lindbladian evolution, which themselves entail lots of approximations.
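For the curious, here is a minimal sketch of the kind of open-system evolution being referred to, assuming the standard GKLS/Lindblad form with a single amplitude-damping channel. The qubit, the decay rate, and the crude Euler integration are illustrative choices of mine, not anything specified in the episode.

```python
import numpy as np

# Minimal GKLS/Lindblad sketch for one qubit with amplitude damping.
gamma = 1.0                                      # illustrative decay rate
L = np.array([[0, 1], [0, 0]], dtype=complex)    # lowering operator
H = np.zeros((2, 2), dtype=complex)              # no Hamiltonian part, for clarity

def lindblad_rhs(rho):
    """Right-hand side of the GKLS master equation for one jump operator."""
    comm = -1j * (H @ rho - rho @ H)
    diss = gamma * (L @ rho @ L.conj().T
                    - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L))
    return comm + diss

# Start in an equal superposition, so the state has nonzero coherences.
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

dt, steps = 0.001, 2000    # crude Euler integration out to t = 2
for _ in range(steps):
    rho = rho + dt * lindblad_rhs(rho)

print(np.isclose(np.trace(rho).real, 1.0, atol=1e-3))  # trace is preserved
print(abs(rho[0, 1]) < 0.5)  # the coherence has decayed from its initial 0.5
```

Unlike Schrödinger evolution, this dynamics is not unitary: the off-diagonal coherences decay away, which is exactly the open-system behavior being described.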
As you consider bigger and bigger and bigger systems, the systems get farther and farther from unitary evolution, and evolve, well, classically. I mean, classical physics is the world that we first encountered before quantum theory because that's what the behavior of big systems looks like. Systems get more and more classical as you make them bigger because they're just exposed to more environmental degrees of freedom. This, as best I can tell, just proceeds as you get to bigger and bigger scales. Once you start talking about regions of space the size of a galaxy, this thing has hopelessly many interactions with the environment and is hopelessly far away from anything like unitary evolution. You imagine you just have this concentric set of Russian dolls. They just keep getting bigger and bigger. Your scope increases in layers and layers as you consider more and more of the universe. There's just this assumption that at some point this stops and suddenly everything is unitary again. That would be when you reach the whole universe, in some sense appropriately defined. I'm just skeptical that we have good reason to think that that is true. Certainly, if you want to use naive scientific induction, you would say, yeah, something like that. You would say, the bigger I take my scope to be, the farther and farther away from unitary evolution it is. My prediction is that in the limit in which the scope becomes infinitely big, I'm just infinitely far away from unitary evolution. To say that at the end of this suddenly things snap back into unitary evolution, that makes me nervous. To the extent that you need that assumption to get the Everett approach to work, yeah, I'm a little nervous about that. It could be that you can formulate the Everett approach when you don't need that assumption. I want to make it very clear.
I don't necessarily think you need it, although I do think it would complicate what the Everett approach looks like. Let me ask two questions to follow up, both of which are in the general area of making this approach safe for quantum field theory. One is, and I've had this mild disagreement with Bohmians, that they have this idea going back to de Broglie, that there are waves and particles. That's why quantum mechanics sometimes looks like waves, sometimes looks like particles. In the case of quantum electrodynamics, in standard quantum theory, which is not Bohmian, you just take the electromagnetic field, you quantize it, and the particles pop out. You don't put extra particles in. You get Fock space, you get photons, the whole story. The ontology is fields, but there's a well-defined set of things that say, here's why it looks like particles in certain circumstances. What's your ontology going to be? Is it going to be fields, or is it going to be particles? Because for electromagnetism, neither one of those seems to be quite the whole story. That's a very good question. I think one view that one gets out of the Everett approach is, we've done it. We've found the fundamental ontology of nature: it's quantum states and Hilbert spaces. Now, we just need to figure out perspicuous, meaning conceptually transparent, ways of figuring out how to think about or describe that fundamental ontology. My view is, I was going to say more modest, but I don't want to make a moral judgment about this, but in some sense philosophically modest. I don't know what the fundamental ontology is yet. I don't think we have a right to claim we know what it is at this point. The ontology you plug into your indivisible stochastic model depends on what kind of a system you're trying to model, just like we would do it classically. If classically you want to model particles, then you use the kinematics of particles and you find laws to describe the particles.
If you want to model a system of scalar fields or the Maxwell electromagnetic fields, you would take the model to have the kinematics appropriate to fields and you need laws appropriate to fields. There's not one particular ontology, an uber-ontology that is the ontology of everything, because we don't know what it is yet. You have a model or a theory, and that theory comes with some particular theory-dependent choice of ontology, some particular theory-dependent choice of how to make sense of probability, and some theory-dependent choice of how to make sense of laws. There's more, too; in a little while hopefully we'll talk about causation, which I've been thinking a lot about, and I think there are some interesting things to say about that too. So if you ask what I think the ontology is, is it particles or fields or something else, I don't know. We pick a model, we pick a theory, we want the theory to be empirically adequate, meaning it makes the right predictions, and we pick a theory that comes with a particular ontology, so that together with that ontology and the laws and the probabilities we get as good a set of predictions as we can. If you want to make predictions of the kind we make with the Standard Model, predictions that are accurate to 12 decimal places, the model you're going to want to use, if you do this with an indivisible stochastic process, would be a model in which your ontology is fields, and particles in that picture would be emergent features that show up. So we didn't talk about emergeables, but there are features of an indivisible stochastic process that you can go out and measure, and you're measuring something that's really there. If particles really are what you're taking your ontology to be, on the given model you're working with, then when you measure where a particle is, you're revealing where it actually is.
If you're taking for your model the ontology to be field configurations and you measure how intense a field is at a certain spot, you're really revealing what that intensity is, or at least there's some direct connection between what the field is doing and what your experiment is. And then there are other features you might want to measure. There are other properties you might want to measure about the system, and those properties might not be directly connected to those fundamental ingredients. There are theorems, famously a theorem by Bell and by Kochen and Specker in the 1960s, now called the Kochen-Specker theorem or the Bell-Kochen-Specker theorem, that says it cannot be the case that every observable feature you want to measure is fundamentally there waiting to be revealed. At least some of them have to emerge in some sense through some interaction between the system being measured and the measuring device. And so, if on a model the ontology is fields, particles are emergent features. If you want to measure how many particles there are, you will do a particular kind of measurement and you'll get a particular number, and that number will give you some quantized value. But that value is not revealing how many particles were there. According to that model, it's just a probabilistic prediction that is ultimately rooted in the behavior of the fields. So I guess in the conventional view, I have an ontology of fields, and I can be very specific about saying under certain circumstances, I will observe discrete packets of energy which I interpret as particles. And are you saying that the same thing happens in your theory if you start with a field ontology? Repeat your question again. In the ordinary way of doing things in quantum field theory, I start with a field ontology, I quantize it.
There are certain very clearly definable experimental circumstances under which what I detect are individual packets of energy that I interpret as particles. Are you saying that in your theory, if you start with a field ontology, you will also detect particle-like packets of energy under the right circumstances? Or is this a difference between your approach and the conventional one? Oh, no, no, no, no. The former statement you made is correct. It makes the same empirical predictions as standard quantum field theory. In fact, it's hard to come up with examples of how this picture makes any different predictions from standard quantum mechanics. I guess I just don't see where the particles are coming from if I start with a field ontology and I don't quantize. So that point about quantizing is an interesting one. In standard quantum theory, to quantize is to take some classical-looking ontology and to replace the classical variables, in some sense, with abstract mathematical things called operators on a Hilbert space, obeying certain kinds of rules called commutation relations. There's a whole system of how we do this quantization recipe. And the quantization recipe is not a fundamental statement about what nature is doing. Nature isn't classical and then it just decides sometimes to quantize and become quantum mechanical. Presumably, nature is quantum mechanical the whole way through, and we're just using quantization as a heuristic, as a trial-and-error method for starting with a classical description of some system, which we know is only approximate, and guessing the truer or more fundamental theory for the system. That guessing process of guessing the more fundamental version, that's what you mean when you say quantization. Quantization in this indivisible picture is quite different. We start with a classical ontology and we don't make the ontology non-classical. Instead, we just pick different dynamics.
The dynamics are going to be of this new, more non-Markovian, sort of indivisible form. One advantage here is this makes it a little easier to imagine describing hybrid classical-quantum systems in a similar formalism. So it may make it easier, for example, to have some big system that we're pretending is classical interacting with some quantum system, because now we can do it all in one formalism. We don't need the operator picture. The particle question, though, is a question of what we see when we do an experiment. If I take a quantum field and I model some kind of measuring device, and I model the two of them as one giant quantum system, and by quantum system here, what I mean is now I model the whole thing with some set of indivisible stochastic laws, and I allow this overall system to evolve according to those laws, what I will find is that the measuring device, if it was tuned to measure this thing we call particle number, will arrive at some integer value at the end. And if I repeat this experiment many times with many identically prepared setups, I'll find that the probability distribution of those integer values the measuring device finds is in accord with the same Born-rule-type predictions I would have gotten from the quantum field theory. So in this picture, there aren't really particles. The particles are emergent things that show up on measuring devices. And I think that's not such a striking thing. I mean, when I first learned quantum field theory, I was told that particles were emergent things. Now that we take quantum fields to be fundamental, you're not supposed to think of particles as really their own independent level of reality. They're emergent. That's kind of what I'm saying also, although I think that this is a somewhat more transparent way of saying it, arguably. I mean, it does seem like a little bit of a miracle to me that you get exactly the same predictions as conventional quantum theory. It is a miracle. It's amazing.
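The standard quantization recipe mentioned above, replacing classical variables with operators obeying commutation relations, and the associated fact that particle-number measurements return integers, can be sketched in a few lines. This is a textbook truncated-oscillator construction of my own choosing, offered only as an illustration of the recipe.

```python
import numpy as np

# The quantization recipe in miniature: replace a classical mode amplitude
# with operators a, a-dagger obeying [a, a†] = 1 (truncated to N levels).
N = 8
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation (lowering) operator
adag = a.conj().T                             # creation (raising) operator

# The commutator equals the identity, except in the truncated corner.
comm = a @ adag - adag @ a
print(np.allclose(np.diag(comm)[:-1], 1.0))  # True

# The number operator a†a has integer eigenvalues 0, 1, 2, ...: the
# "particles" of a quantized field mode are these discrete excitations.
n_op = adag @ a
print(np.allclose(np.linalg.eigvalsh(n_op), np.arange(N)))  # True
```

The integer spectrum of the number operator is the operator-picture version of the claim in the conversation: particle-number measurements come out quantized even when the underlying ontology is a field.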
Some people call it a miracle and some people call it beautiful self-consistency. I'm not going to say whether that's a compliment or a hidden critique. So the one last question then is, I don't understand how you get the stability of matter in this theory. If I'm doing chemistry, I take wave functions really seriously. I have an electron. It's not located at a point, and its electric field is not concentrated at a point either; it's spread around. But I have a wave function. I solve for the value of the electric field I would measure. Everything works great. I can calculate energy levels. And you have particles. And how in the world are you going to get the orbitals to look just like ordinary non-relativistic quantum mechanics if you really don't have waves? You have particles. I mean, the answer is that once systems start to get very complicated. Very complicated. I mean, a hydrogen molecule. A single hydrogen molecule. Sure. Sure. Sure. So well, then, you know, for models that are as simple as a hydrogen molecule, what happens is you put the particles in, you write down the appropriate indivisible stochastic laws. They're relatively simple to define. I can, I mean, if you want to get into the technical weeds again, I'm not 100% sure how technical you'd like me to be here. Fairly, but not very. Okay. Well, let me just say that classically we have a procedure for how we do this. Classically, we propose what's called a Hamiltonian. The Hamiltonian is a particular kind of mathematical thing that contains the laws in it, or we use a Lagrangian, but usually we use a Hamiltonian. And then there's this set of rules where we go from this Hamiltonian function to the laws, to the dynamical equations that say how the system is going to work. This is known as the Hamiltonian formulation of classical physics; the laws are called Hamilton's equations of motion. You can look them up, they fit on a t-shirt, it's not so complicated. There's a similar procedure.
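The t-shirt-sized classical recipe just mentioned, Hamilton's equations dq/dt = dH/dp and dp/dt = -dH/dq, looks like this in code for a toy harmonic oscillator. The unit-mass oscillator and the symplectic integrator are my own illustrative choices.

```python
import numpy as np

# Hamilton's equations for a unit-mass harmonic oscillator, H = p^2/2 + q^2/2:
#   dq/dt = dH/dp = p,   dp/dt = -dH/dq = -q.
def step(q, p, dt):
    # Symplectic Euler: update p first, then q, which respects the
    # phase-space structure and keeps the energy error bounded.
    p = p - q * dt
    q = q + p * dt
    return q, p

q, p = 1.0, 0.0
dt = 0.001
for _ in range(int(2 * np.pi / dt)):   # integrate for one full period
    q, p = step(q, p, dt)

# After one period the oscillator returns (approximately) to its starting
# point, and the energy stays near its initial value of 0.5.
print(round(q, 3), round((p**2 + q**2) / 2, 3))
```

The Hamiltonian function is the input; the dynamical equations, and hence the trajectory, come out by a fixed rule. That is the structure the conversation says the indivisible approach mirrors, with a different rule taking the Hamiltonian to stochastic laws instead of differential equations.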
If you want to specify the laws of a quantum system in the traditional way, you specify a Hamiltonian, you evolve the system through Schrödinger evolution, and then you have to use the measurement axioms, or if you're an Everettian, you make some argument about the system branching or something like that, and then you try to get probabilities out or whatever. In this approach, you specify a Hamiltonian, and what the Hamiltonian does is it tells you what all of the indivisible stochastic laws are. And the Hamiltonian you use is the same Hamiltonian you use in standard quantum mechanics. You just literally take the Hamiltonian you would have used, you plug it into this different algorithm, and out comes a set of indivisible stochastic laws, and then when you let those indivisible stochastic laws evolve the system, the system will in every experimental way show the same behavior. You just get an empirically adequate picture of what happens. Now, what's going on under the hood? If you're modeling the hydrogen atom as a fixed non-dynamical center, a proton, and you treat it as if it doesn't move, as we usually do in undergraduate textbooks, and you model the electron as a point particle, then what you've got is a point particle moving around. And the precise way it moves around is going to be complicated, and the theory is not going to tell you definitively exactly which path it will take as it moves around. But the indivisible stochastic laws will give you a set of laws that describe its behavior in somewhat sparse terms, and those laws will be sufficient so that if you interact with the hydrogen atom, or the hydrogen atom is interacting with other atoms, or it's part of some bigger solid or something like that, you'll just get the same observable behavior you would have gotten otherwise. But remember, I'm not asking about the hydrogen atom, I'm asking about the hydrogen molecule. Oh, the molecule. I'm sorry. Let me explain why I care.
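Here is a sketch of what "plug the same Hamiltonian into a different algorithm" could look like, based on my reading of the published stochastic-quantum correspondence; the particular Hamiltonian and times are arbitrary illustrative choices. The squared magnitudes of the unitary's matrix elements form a valid stochastic matrix, and the resulting process is indivisible: the two-step matrix is not the product of one-step matrices.

```python
import numpy as np

# A hypothetical 2-level Hamiltonian (illustrative numbers, hbar = 1).
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

def unitary(H, t):
    """U(t) = exp(-iHt), computed by eigendecomposition since H is Hermitian."""
    E, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T

def transition_matrix(H, t):
    """Sketch of the dictionary: take the conditional probabilities to be
    the squared magnitudes of the unitary's matrix elements."""
    return np.abs(unitary(H, t)) ** 2

Gamma = transition_matrix(H, 0.7)
# Unitarity guarantees each column sums to 1: a valid stochastic matrix.
print(np.allclose(Gamma.sum(axis=0), 1.0))   # True

# Indivisibility: the two-step transition matrix is not the product of
# one-step matrices, so the process cannot be chopped into Markovian steps.
Gamma2 = transition_matrix(H, 1.4)
print(np.allclose(Gamma2, Gamma @ Gamma))    # False
```

The second check is the non-Markovian punchline: you can only condition the transition probabilities on the initial event, not chain them step by step.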
Sorry. The hydrogen molecule, I can do exactly what you said. I can treat, you know, the protons as very heavy. I solve the Schrödinger equation for two electrons moving in this background, and I get an answer, and the answer is stationary. It doesn't evolve over time, right? Because it's a wave that has settled down into its minimum-energy configuration. You're telling me I actually have two electrons that are moving around, and their electric fields are moving around, but real-world hydrogen molecules don't jiggle. Yeah. So you've got to be a little careful here. There's a bit of a mixture going on between quantum and classical physics. So you can't pretend the electrons are quantum mechanical and then treat their electric fields classically and say, well, the electrons are moving around, their electric fields are moving around. The whole thing is some giant quantum process. Sure. The question is how will it behave? We can't just say, well, the electron is jiggling around. I'm trying to talk your language. Yeah, of course. Of course. You have particles and electric fields. That's right. Yeah. So what you end up having is, you know, from this picture: in standard quantum mechanics, you have the wave function. The wave function is some kind of stationary state, which means that it has a definite energy and it doesn't, at the level of the quantum state, appear to be changing, except for trivial phases or whatever that don't matter. In this picture, the wave function is not a physical object. The physical objects, if you're modeling particles, are the particles. The particles are zipping around, and they're zipping around in dynamic equilibrium. And their dynamic equilibrium, as an equilibrium, looks effectively stationary. And I should make it very clear here, I probably should have said this at the beginning: I'm not saying don't use Hilbert spaces and wave functions. Right?
If you want to model a classical Newtonian system, you could use forces and write down all the masses and accelerations and velocities. But for a very complicated Newtonian system, you're probably going to use one of these analytical mechanics methods like Hamiltonian mechanics or maybe Hamilton-Jacobi theory. In Hamilton-Jacobi theory, we have all these weird mathematical constructs evolving and moving around. In Hamilton-Jacobi theory, there's this function that obeys a partial differential equation, which inspired both Bohm and Schrödinger to develop their particular formulations of quantum theory. But we don't think that the Hamilton-Jacobi function is like a physical object, even though sometimes it's very simple, simpler than the moving-around particles that we're trying to describe. I'm saying that the wave functions and Hilbert-space appurtenances, the mathematical ingredients that we usually use in quantum theory, play a similar kind of role. They're often very convenient to use. If you want to make predictions as quickly and easily as possible, you probably want to formulate things using wave functions. You probably want to treat observers as axiomatic primitives, not worry about the details of the measurement process, just treat measurements as these perfect events, use basically the tools of standard quantum theory, and make all your predictions, and you can make them really fast that way. Trying to predict the behavior of a polyatomic molecule directly using indivisible stochastic laws is going to be a painful exercise. The hydrogen molecule, really? Well, no, for two particles, no. I don't think it's going to be complicated for two particles. You'll be able to get dynamic equilibrium, you'll be able to predict what the wave function would look like if you sat right down and worked it out.
But once you start talking about very complicated quantum systems, I'm talking about tabletop condensed matter experiments where you've got arrays of large numbers of particles and you want to talk about the appropriate way to describe them. At some point, you're going to want to just use standard quantum mechanics for stuff. That's fine. But I want to talk about the hydrogen molecule. Why doesn't it jiggle? So the particles are jiggling. Yeah. But what is that? I mean, they're jiggling in a way that leads to a stationary probability distribution over time. But physically, they're jiggling. That sounds different than being stationary. Why isn't it radiating? Okay. So the distribution of their behavior is stationary. Their precise individual motion is not knowable, not empirically accessible. It exists. Why is it not radiating? Well, why would it radiate, Sean? These are accelerated charges. I've taught that they radiate. According to what theory? You tell me. I'm asking about your theory. Why don't they radiate? Right. So in classical electrodynamics, you could imagine you've got charged point-like particles following determinable trajectories, well-defined, going in some path, and you can write down what the path is. And if the path has acceleration, if the path involves speeding up or slowing down or moving in curved paths or going in circles or whatever, then the classical theory of Maxwell electrodynamics is going to predict that these things will emit electromagnetic radiation. This is a prediction of the theory. You run into some subtleties with self-forces and stuff in classical electromagnetism, all these famous problems, but more or less that's the picture that you get from classical electrodynamics. If you want to start treating particles quantum mechanically, you cannot help yourself to classical electrodynamics. So the prediction that jiggling charged particles are going to radiate is simply not available.
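The "jiggling but stationary" idea can be illustrated with a generic toy model. This is emphatically not Barandes' actual dynamics, just a standard Metropolis random walk of my own construction whose equilibrium distribution is a Gaussian ground-state-like density: the particle never stops moving, yet its distribution is stationary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Metropolis steps whose equilibrium is the density proportional to
# exp(-x^2), a stand-in for a ground-state probability distribution.
def metropolis_step(x):
    prop = x + rng.normal(0, 0.5)
    if rng.random() < np.exp(x**2 - prop**2):
        return prop
    return x

x, samples = 0.0, []
for _ in range(200_000):
    x = metropolis_step(x)
    samples.append(x)

samples = np.array(samples[50_000:])       # discard burn-in
print(abs(samples.mean()) < 0.05)          # stationary mean near 0
print(abs(samples.var() - 0.5) < 0.1)      # variance of exp(-x^2)/Z is 1/2
print(np.std(np.diff(samples)) > 0.1)      # yet the particle keeps jiggling
```

The histogram of positions settles down and stops changing, which is the sense in which a dynamic equilibrium "looks effectively stationary" even though the individual trajectory is in constant motion.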
If you want to ask whether they'll radiate, now what you need to do is combine your two-electron system, your hydrogen molecule with two electrons, with the quantized electromagnetic field. You're going to combine it with QED. You're going to imagine that in addition to these two particles, there is a huge set of more or less localized variables, degrees of freedom representing the intensity and direction of the electromagnetic field at all points in space. Now you're going to want to ask, okay, once we've done this and we model the whole thing with the appropriate Hamiltonian or Lagrangian for charged source particles coupled to the electromagnetic field, treated as a field, will this combined model predict that distant measurements of energy are going to find that energy is being propagated out of the system? And the answer is, well, we know what the answer is. If you try to model this whole thing, including QED, including the electromagnetic field, you just have this equivalence between this model and the predictions you would get from standard quantum theory. Standard quantum theory says that you do not get electromagnetic waves in these sorts of pictures. So you're just going to get the same prediction out of this model. If you're careful to model the entire system and include all the ingredients, there's just an isomorphism; you're going to get the same predictions that you would have gotten from standard quantum mechanics. But in the conventional case, I have an intuition. There is a wave, I solve an equation for it, it settles into its minimum energy state. In your view, I'm missing the intuition, and I'm hoping that you can provide it for us. You have a bunch of things jiggling around. Why does it look like it's not jiggling? So, Sean, I'm going to reveal something to you that I've revealed before.
I came into this world without a lot of intuition. And you can certainly ask my elementary school teachers or my parents about this, or my siblings, or really anyone who knew me. And this was a hindrance for a significant portion of my life. But once I started getting into physics, and in particular foundations and philosophy of physics, I found that this lack of intuition is in some cases an asset. I think, you know, when we're trying to navigate around the social world of human beings, the social, economic, political world, having really good intuition is incredibly helpful. Take it from me, as someone who's had to learn the hard way. But when you're trying to contend with the behavior of elementary particles and quantum fields far removed from human experience, I don't know that our intuitions are the best guide for how to think about these things. Sean, it's absolutely true that if I could come up with some picture of reality that agreed with the rules of quantum theory, that addressed the previously mentioned potential overdetermination problem, that managed to be a world picture consistent with the rules of quantum theory, I'd love it if that picture were intuitive in every way. I don't think we're going to get that intuitive picture. It's possible someone will come along and find one. It's been 100 years since matrix mechanics. By the way, do you know where the word matrix comes from? I don't. Okay, interesting. So I didn't know for a long time, but I was curious. The root word, the root of matrix, what's the root without the x? Mother, mater? Mother, right. What do matrices have to do with mothers? So when James Joseph Sylvester introduced the term matrix in 1850, 1851, he was trying to find a convenient way to generate determinants. Determinants, for the listener, are used for many things. In his particular case, he was using determinants to determine which kinds of equations could be solved uniquely.
Had solutions, didn't have solutions, had unique solutions, didn't have unique solutions. And he realized that if you arranged a bunch of numbers that show up in those equations in an array, then there was a very simple procedure for getting the determinants out. And he says in the paper, it's like we're pulling the determinants out like we're pulling offspring out of a womb. And so he used the word matrix, which is apparently Latin for womb. And huge spoiler alert for anyone who has still not seen The Matrix in 26 years. But part of me wonders if the Wachowskis happened upon this etymology. I'm happy to take it that they did not. Because it's a rather remarkable coincidence, if you think about what happens in the movie, which I won't reveal in any more detail. Fair enough. So the point is, we haven't developed what I would regard as an intuitive picture. Bohmian mechanics is very unintuitive. The many-worlds picture, whatever its strengths, is not very intuitive. And this picture is also not very intuitive. But given the many difficulties I've had trying to make sense of the other approaches thus far, difficulties that might be improved, I don't know, but thus far haven't been, this approach met my minimal list of requirements. The first was empirical adequacy. That is absolutely a requirement. It must get the experimental predictions correct. The second is it shouldn't deliver ambiguous predictions in certain in-principle situations. And Wigner's friend is a good example. Viewers, listeners can look up what Wigner's friend is. But it's an example where the textbook axioms kind of don't know what to do, because you've got two observers, and you're not sure whether one of them does a measurement or not. So you shouldn't get ambiguous predictions.
The third is that it should, at least in schematic terms, maybe not in all the gory details, but in schematic terms, be able to let us tell a kind of picture of how the macroscopic world is emergent from a micro world. When we talk about fluid water being emergent from water molecules, this is kind of the paradigm for how this is supposed to work. At one level you see little water molecules, if you want, modeled more or less classically as little marbles or small rocks or something bouncing around, and then you zoom out and zoom out and zoom out, and their behavior begins to look more and more like a fluid. And you can see the emergence as you zoom out to bigger and bigger scales. You can find YouTube videos where they show you this. There may be situations in which emergence is much harder to model, but at least in broad schematic terms, we can say that the behavior of a human being generated by our brains is emergent from the interactions of our neurons, the behaviors of our neurons. Some story where there's some kind of physical substrate, the microscopic or underlying or whatever physical stuff there is, and some broad outline for how macroscopic behavior or objects are supposed to arise from it. So the third requirement is that we at least in schematic terms be able to talk that way. And then the final requirement is that we don't have an endless list of extra-empirical, speculative metaphysical hypotheses, extra-empirical assumptions, a stone soup problem basically, an epicyclical list of additional things we keep having to add in order to get the thing to work. Empirical adequacy, though, is the most important thing. And this approach looks like, so far, it's hitting the empirical adequacy requirement, which I think could be a good thing. Bohmian mechanics at this point doesn't meet it, because it can't describe the kinds of real-world systems that we use.
Some of the problems with the Everett approach, including the probability problem, make me nervous about whether the Everett approach is ultimately empirically adequate, although I know there's work continuing on that. The ambiguity question is resolved on some approaches: the Everett approach, the Bohm approach, and basically approaches that do not propose a fundamental collapse of the wave function. They will tell you what will happen, and my approach does that as well. So those all meet that mark, whereas traditional textbook approaches or the Copenhagen approach are a little bit ambiguous about those things. Being able to tell a story, at least schematically, about how the macroscopic world emerges is very important. The Bohm theory in principle sort of lets you do that, at least in the models where it works. The Copenhagen interpretation does not, and Hugh Everett was one of the most articulate explainers of why. He said the Copenhagen interpretation assumes the classical world at the beginning and rules out, on principle, the idea that we could derive what the classical world is like from the quantum world, and he regarded that as a serious problem. I agree with Everett on that. And then the final thing is avoiding just a long list of speculative metaphysical hypotheses, which I'm trying to do. I think my approach does it. The Everett approach, this is one other area where I get very worried about it. Now, there are other aesthetic considerations one could have, but I take these four to be deal-breaker requirements. And if someone comes along with a more intuitive approach that hits those four benchmarks, I would be the most thrilled person in the world, especially if it's me. But even if it's not, I would just like to know the answer. For audience members who want to dive even deeper, is there something they should read? Not yet. Okay, we're getting there. This work is showing up in journals.
I mean, we just got a publication not long ago, in a really beautiful journal currently run by David Wallace, called Philosophy of Physics. It's an open-access journal that was created by the philosophy of physics community. And it's really just a remarkable project. I mean, so many people in the community had their hands in building up this journal. It's only been in existence a few years, and it's really just spectacular. And I give credit to a lot of people involved, but especially David Wallace, who, you know, it wasn't enough that he had to be a great physicist and philosopher of physics, he also had to be a really great leader and a great journal editor. I'm sure he's bad at something. I'll figure out what it is. Maybe you know. But anyway, huge, huge credit to the people who were responsible for that. So this just showed up in publication in that journal. And there are more publications on the way. There is not yet a popular treatment, because this is all still pretty new, and this project is just a couple of years old. There's still a lot of work to be done, which is also good. I mean, it's not often that in a subject as old as quantum foundations, you open up a completely new leaf in your notebook, right, a brand-new sheet of paper waiting to be filled in. This is a different approach from all the approaches that have existed before. What can we do with non-Markovianity and indivisible processes in quantum physics, but also beyond? Because now you can read the approach the other way. You can say, well, if there's this correspondence between quantum systems and a large class of non-Markovian systems, maybe this means that if I'm trying to handle some non-Markovian system, some real-world system, not microscopic but macroscopic, something from ordinary everyday life, I'm trying to model some complicated system, there are no h-bars, this is not a quantum system in any practical sense.
But if I can represent it using the tools of quantum theory, then we can use all the mathematics we've developed over the past 100-plus years for quantum theory and apply it to these new kinds of systems. We can apply the renormalization group, we can apply effective field theory, we can apply stationary-phase methods, all kinds of clever things, to new kinds of systems. So there's a lot of work to be done, but this also means it's too new for there to be a readable, accessible, popular-level description of this approach. You can go look up some of the other conversations I've had that are online. And I give a big shout-out to some people I've spoken to before. I did a whole bunch of interviews with Curt Jaimungal. I know, I think you were on Curt Jaimungal's podcast also, right? I remember that episode. So anyway, just Google and you'll find all these things, but no, there's not an accessible, readable reference yet. One of my long-term projects is to follow in your footsteps, Sean, and write a popular book. But you make it look much easier than it is. Yeah, don't be fooled. Right now I'm trying to finish one. So yes, I'm not feeling that it's easy. Yeah. So there are a couple of other items. I don't know if there's any way to get them into this conversation, but in addition to hopefully making more sense of quantum theory and hopefully generating new applications for things outside of quantum theory, I wanted to come back to this question about general relativity again. So general relativity in certain kinds of situations behaves like a Markov process. If you have a very nice kind of universe, where space and time are joined together into a certain kind of manifold, we call it globally hyperbolic, then it's a manifold in which we can really think of the universe as sliceable into moments, and the moments, moment to moment, can be connected together to make the whole universe.
Then in this situation, we can formulate the Einstein field equation as something like the kinds of differential equations we've seen before, where we can take initial information and predict what's going to happen. But there's no guarantee, no fundamental reason we know of at this point why spacetimes have to be nice in this way. And in the general case, the Einstein field equation doesn't really operate like those other equations. And my suspicion is that this is connected with the difficulties in developing a theory of quantum gravity. Now, there are other difficulties as well, and I don't have time to go into them all now. But my suspicion is that the reason we don't yet have a fully probabilistic version of general relativity, so not general relativity with small, noisy, you know, tiny corrections, but a fully probabilistic generalization where we replace the Einstein field equation with genuinely probabilistic laws in a fully general way, the reason we haven't done that is because of this non-Markovianity. The kinds of probabilistic systems that we have a lot of experience with are Markovian, and if general relativity is not like that, it's going to resist that kind of effort. My suspicion is that if we could develop something like a fully realized probabilistic generalization of general relativity, then although it might not be quantum gravity, but who knows, we don't have it yet, it might at least be a very important stepping stone to developing a theory of quantum gravity. And there are some people trying to generalize general relativity in a fully probabilistic way, but those efforts are at this point very rudimentary. And so I would just say this project has sensitized me to the importance of taking problems one at a time. Before we jump to quantum gravity, maybe we should start with probabilistic gravity and try to see if we can make gravity a probabilistic theory.
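[Editor's note: a rough sketch of the initial-value structure being described, in the standard 3+1 language; the lapse N and shift N_i notation is the usual convention, not something from the conversation.]

```latex
% Einstein field equation, in units with c = 1:
%   G_{\mu\nu} = R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} = 8\pi G\, T_{\mu\nu}
%
% On a globally hyperbolic spacetime, M \cong \mathbb{R} \times \Sigma,
% these ten equations split into four constraints that the slice data
% (the spatial metric h_{ij} and extrinsic curvature K_{ij}) must satisfy
% at each moment, plus evolution equations such as
%   \partial_t h_{ij} = -2 N K_{ij} + \nabla_i N_j + \nabla_j N_i ,
% which carry initial data forward moment to moment, like the familiar
% deterministic equations of classical physics. Without global
% hyperbolicity, this slicing-and-evolving picture is not guaranteed.
```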
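[Editor's note: the indivisibility discussed throughout this conversation can be illustrated with a small toy model, not from the episode. It assumes a two-level system with Hamiltonian H = sigma_x (hbar = 1): the transition probabilities from time 0 to any time t form a perfectly valid stochastic matrix, yet in general no stochastic matrix connects two intermediate times, which is what distinguishes an indivisible process from a Markov chain.]

```python
import numpy as np

# Two-level quantum system with H = sigma_x (hbar = 1), so
# U(t) = exp(-i H t) = cos(t) I - i sin(t) sigma_x, and the matrix of
# transition probabilities from time 0 to time t is
#   Gamma(t)[j, i] = |U(t)[j, i]|^2.
def gamma(t):
    c, s = np.cos(t) ** 2, np.sin(t) ** 2
    return np.array([[c, s], [s, c]])

t1, t2 = 0.6, 1.2

# Each Gamma(t) is a valid stochastic matrix: entries are non-negative
# and each column sums to 1, so probabilities from time 0 are well defined.
for t in (t1, t2):
    G = gamma(t)
    assert np.all(G >= 0) and np.allclose(G.sum(axis=0), 1.0)

# Divisibility test: is there a stochastic matrix M with
# Gamma(t2) = M @ Gamma(t1)?  Since Gamma(t1) is invertible here, the
# only candidate is M = Gamma(t2) @ inv(Gamma(t1)).  If M has negative
# entries, no classical transition matrix links t1 to t2: the process
# is indivisible, unlike a Markov chain, whose steps always compose.
M = gamma(t2) @ np.linalg.inv(gamma(t1))
print("candidate intermediate matrix:\n", M)
print("has negative entries:", bool(np.any(M < 0)))  # True: indivisible
```

For these parameters the candidate matrix picks up negative entries, so the single-time statistics look like an ordinary stochastic process while the process as a whole resists being chopped into Markovian steps.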
So these are all potential directions and future things I'm thinking about. And the final thing I'll say is there are philosophical consequences to all of this as well. This is a theory in which the laws take a new kind of form. And when you've got physical theories with new kinds of laws, you can go back and revisit old questions. One of the deepest questions in philosophy of science and metaphysics is: what is causation? What are causal influences? I know you had Judea Pearl on your podcast a little while ago, and we've developed, we humans, scientists, not me but scientists, the royal we. Scientists and statistical modelers have developed a really amazing framework for characterizing and using statistical models with causal ingredients to describe real-world situations and scenarios. We use causal modeling in medical testing. We use causal modeling all over the place. But when Pearl was on your show and you asked him whether this picture that has been developed tells us what causation is fundamentally, like what really is it, his answer was that he didn't think there was causation fundamentally. He thought it was just an emergent artifact. And I think the example he gave was that in F = ma there's no directionality, right? Force equals mass times acceleration. The equal sign just goes in both directions. There's no sense in which one thing causes another so much as that things are connected by laws in a non-causal way. Well, that may be fine as far as it goes if your theory is Newtonian mechanics. But if you have a new theory, a theory in which the laws take a qualitatively different form, you can go back and revisit those kinds of questions. Now, one way that we cash out causal relationships in causal modeling is through conditional probabilities. Well, the laws in this indivisible approach to quantum theory are conditional probabilities. So now you might ask, maybe we can. Is it possible?
Could we try to ground or develop a theory of microphysical causation rooted in the laws of a theory like this? That's one of the more philosophical or metaphysical directions I'm also thinking a lot about these days. It's good to know that we're not running out of questions to ask or things to worry about. So, Jacob Barandes, thanks so much for being on the Mindscape Podcast. You've given us a lot to think about. It's really an absolute delight. And I guess the last thing, Sean, I want to say one more thing about you. It's possible that I've already said this and I'm forgetting, but since we're at the end of the interview: having someone like you, who is a phenomenal scientist and also, by the way, a very good philosopher and an amazing communicator, out there making the case for the work that we're doing in foundations and philosophy of physics, the work that you're doing and the rest of us are doing. You're writing articles in the New York Times. By the way, I have my students in my philosophy of quantum theory class read your New York Times pieces. The fact that you're out there making this case to the public, to people with money who could in principle provide funding for this work, which gets almost no funding compared to anything else, and doing it so well. You know I've told you this many times before. Every time I see you, I practically tell you this, but my level of gratitude towards you, the gratitude that we should all have in this field towards you, is bottomless. So, thank you so much for all that you do, and I hope you keep doing it. Thank you very much for those nice words. I mean, not everyone agrees that my interventions in these fields are actually good things. So I'm glad that some of us think there should be more focused attention on these things, and hopefully this podcast will be part of it. So, thanks very much for appearing. You're very welcome. Lovely to talk to you.