Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas

320 | Solo: Complexity and the Universe

135 min
Jun 30, 2025
Summary

Sean Carroll explores how complexity emerges in the universe from simple initial conditions, examining the relationship between entropy, information, and increasingly sophisticated physical systems. He argues that complexity arises through stages as subsystems learn to exploit available information resources, from simple mixing patterns to life and consciousness.

Insights
  • Complexity is not inevitable from entropy increase alone—it requires specific dynamical features like long-range forces and dissipation mechanisms to emerge and persist
  • Information should be understood as a physical resource (the gap between maximum and actual entropy) that complex systems exploit to maintain structure and adapt
  • The analog-to-digital transition enabled by massless particles like photons is crucial for chemistry and information storage, making gauge symmetries anthropically necessary
  • Living systems represent a qualitatively different stage of complexity by maintaining internal models of their environment and formulating future goals
  • Humans remain far from exhausting the space of possible configurations—natural selection explores only a tiny fraction of genetic possibility space
Trends
  • Complexity science increasingly integrating information theory with statistical mechanics to quantify emergence
  • Recognition that dissipation and energy flow are fundamental to structure formation, not violations of thermodynamics
  • Shift toward understanding life and intelligence as sophisticated information-processing systems within physical law
  • Growing interest in quantifying complexity via algorithmic compressibility rather than single universal definitions
  • Anthropic principle arguments for fundamental physics features (gauge symmetries, photons) based on observer requirements
  • Interdisciplinary convergence between cosmology, biology, and computer science around complexogenesis mechanisms
Topics
  • Entropy and the Second Law of Thermodynamics
  • Algorithmic Complexity and Kolmogorov Complexity
  • Information Theory and Shannon Entropy
  • Cosmological Structure Formation and Galaxy Evolution
  • Big Bang Nucleosynthesis and Early Universe Physics
  • Inflation Theory and Quantum Fluctuations
  • Black Hole Entropy and Hawking Radiation
  • Chemical Bonding and Dissipation Mechanisms
  • Biological Evolution and Natural Selection
  • Chemotaxis and Bacterial Information Processing
  • DNA and Genetic Information Storage
  • Cellular Automata and Computational Complexity
  • Gauge Symmetries and Fundamental Forces
  • Anthropic Principle in Physics
  • Goal-Directed Behavior in Living Systems
Companies
Santa Fe Institute
Carroll is a fractal faculty member and recorded the episode during a regular visit there
Bell Labs
Historical reference to Claude Shannon's work on information theory and signal transmission
People
Sean Carroll
Host exploring complexogenesis, entropy, and information in the universe from cosmological perspective
David Krakauer
Former Mindscape guest; advocates for teleonomic definition of complexity requiring goals and information
Scott Aaronson
Co-author with Carroll on unpublished paper about apparent complexity and algorithmic compressibility
Lauren Ouellette
Co-author with Carroll on unpublished paper about apparent complexity in dynamical systems
Stephen Hawking
Pioneering work on black hole entropy and thermodynamics in gravitational systems
Ludwig Boltzmann
Foundational work on entropy definition relating microstates to macroscopic observations
Claude Shannon
Founder of information theory; developed formulas showing relationship between entropy and information
Stephen Wolfram
Mindscape guest; advocates complexity arising from simple rules rather than initial conditions
Michael Wong
Collaborator with Bob Hazen on provisional law of increasing complexity in chemical systems
Bob Hazen
Collaborator with Michael Wong on complexity in chemical evolution
Erwin Schrödinger
Author of 'What is Life'; predicted aperiodic crystals (DNA) as information storage mechanism
Christopher Adami
Mindscape guest; emphasized mutual information between genome and environment in life
Malcolm MacIver
Early Mindscape guest; discussed information processing transition when fish moved to land
Addy Pross
Mindscape guest; discussed kinetic stability and non-equilibrium dynamics in life
John Wheeler
Coined 'radically conservative' approach to physics—stick with known laws while pushing boundaries
Eörs Szathmáry
Co-author of 'The Major Transitions in Evolution' identifying information transmission in evolution
John Maynard Smith
Co-author of 'The Major Transitions in Evolution' framework for understanding evolutionary stages
David Albert
Coined 'Past Hypothesis' explaining low entropy initial conditions of the universe
Katie Mack
Mindscape guest; discussed possible future scenarios for the universe
Gary Gibbons
1970s work with Hawking on black hole thermodynamics and entropy
Quotes
"Complexity is a feature of macroscopic collections of matter under one of the various ways of defining entropy."
Sean Carroll~20:00
"The early universe was a simple place. The current universe is a very complex place, at least in parts of it. How did that happen?"
Sean Carroll~5:00
"Complexity can happen at medium entropy configurations of stuff. In this case, it actually does."
Sean Carroll~45:00
"Living systems can envision where they want to be and work toward getting there in a way that non-living systems don't."
Sean Carroll~180:00
"The existence of intelligent observers relies on the existence of gauge symmetries and massless particles that can dissipate energy."
Sean Carroll~165:00
Full Transcript
Hello, everyone, and welcome to the Mindscape Podcast. I'm your host, Sean Carroll. Podcasting, like the subject of today's episode, is a complex system. Many things happen. You cannot always know what is going on. Sometimes the schedule kind of gets away from you and you decide that this would be the right time for a solo episode. This is a fancy way of saying that I'm behind on actually recording episodes because of various things that happened. So why not just do it myself? That's always a strategy that is available to us. And I'm recording this from Santa Fe, New Mexico, where I'm doing one of my regular visits to the Santa Fe Institute as part of being a fractal faculty there. So complexity is on my mind. We had a very nice meeting just last week on science and history, both of things which involve complexity in different ways. And it was stimulating to hear historians and scientists come together. But what I've been thinking about for a long time is complexity and the universe. And I know that in bits and pieces, I've talked about this in AMAs and other solo episodes, even in books and things I've written. But I've even given talks on it. You can find talks online on YouTube that are pretty close, similar at least to what this solo episode is going to be like. But I thought it would be good to take a step back, not really talk about specific individual research level ideas. I have some of those, but they're very vague and they're not very far along right now. So rather than that, talk about the big picture of this question of how complexity comes to be in the universe. Some of you may know, I already wrote a paper on that topic with Scott Aaronson and Lauren Ouellette quite a while ago. And we still haven't published the paper, but we're still working on that. It's like 10 years later. Don't worry, we'll get there. Science doesn't care when you actually publish it. It cares about the truth, right? But there's many, many places to go beyond what we did in that very simple paper, which I'll describe later in the episode. So there's a lot of fronts on which one can attack the problem of how complexity comes into existence in the universe over time. And I'm saying this as someone who, you know, knows a lot about this subject in some ways, but not nearly everything in other ways. I've not been doing complexity research all my life. I have been doing universe research all of my professional life. So I know more about cosmology than the average complexity person, more about fundamental physics, less about non-equilibrium dynamics and computer science theory and statistical mechanics and complexity theory and all those things. So we're trying to put them together in a novel way and we'll see what happens. So I thought that it would be fun to just sort of lay out the general picture as I see it and also places we're hoping to go. Questions that are still open, things we're trying to learn about, calculations maybe we would like to do, ideas to keep in mind as we're doing all of these things. So that's what this episode will be about. Complexogenesis, I sometimes call it. We talk about baryogenesis, the origin of baryons in the universe. Complexogenesis is the origin of complexity in the universe. The very, very early universe was a simple place. The current universe is a very complex place, at least in parts of it. How did that happen? Is there any scientific, quantitative, rigorous understanding we can put to that? I'll give you my take on that. Other people will have their own takes.
Let me also take this opportunity to quickly say that you could be a Patreon supporter of the Mindscape Podcast. Just go to patreon.com slash Sean M. Carroll and join up to support the podcast, get a lot of benefits, including being able to ask the Ask Me Anything questions that we do once a month. And also, there's other ways to support Mindscape that are completely free, like leaving reviews at iTunes or Spotify or, I don't know, wherever one leaves reviews of podcasts. Spread the word. Let other people know that this podcast is worth listening to. We've been doing it for a long time now. I enormously appreciate the support that the Mindscape audience has given to the podcast. And so with that idea, let's go. Of course, if you want to talk about the origin of complexity, the very first thing you have to do or one of the first things is tell me what you mean by complexity. Give me the definitions of complexity. And people argue over that. So of course, there's multiple definitions out there. I kind of don't care. One of my goals in this presentation is not to tell you what the right definition of complexity is. It's to take the fact that complexity has all sorts of different aspects to it. Some we know when we see them, others we perceive by thinking about it more carefully, and include all of them. Right. There's this picture, as I'll say, where the universe starts without any complexity at all in a very real sense and all sorts of different kinds of complexity developed over time. And we're going to see how that happens as a set of stages, you know, bit by bit. You get certain kinds of complexity develop and then maybe other more sophisticated kinds happen later on. Not everyone sees it that way, of course. David Krakauer, who is the president here at Santa Fe Institute and former Mindscape guest, made the point in our conversation that he considers complexity to be sort of real complexity only when a system can be considered to be teleonomic, that is to say, to have some goals of its own. The picture being that at some point in the history of the universe, physical systems develop the capacity for having information content gathered and thinking about the future and moving towards some goal. And those are all characteristic of complexity. And he's worried a little bit that if you just include everything in the definition of complexity, even things like spin glasses that Giorgio Parisi recently won the Nobel Prize for, then it all just becomes a subset of physics and you're missing important things. Well, you know, so my attitude, which I don't think is substantively in disagreement, but we put our emphasis in different places, is that everything is a physical system. There are no non-physical systems. There are different ways of talking about physical systems, and some of those ways might be biological or mental or whatever. And there should be a unified picture. I'm very interested in both the fundamental levels of reality and the higher emergent levels and in particular understanding how they're compatible with each other, how the higher levels are constrained by the fact that they supervene on the lower levels in some very real way. Furthermore, I think that this idea of teleonomic matter or advanced sort of complex systems that can adapt to circumstances and things like that is great and important. You need to get there, but you're not just going to leap into it right away. Right?
You're not just going to have some random collection of molecules, or at least the best way to make collections of matter like that is not to have random collections of molecules simply spontaneously organize into that. It's going to happen by stages. And so we'd like to understand what those stages are, even if the earliest stages don't look teleonomic or information processing at all. OK, with that little throat clearing out of the way, and again, without even defining quite yet what we mean by complexity, let's think about the evolution of complexity over time. We think that some parts of the universe do grow more complex, right? The biosphere grows more complex over the past 4 billion years. The universe grows more complex over the past 14 billion years. Again, not because there is some goal of doing it, not because the laws of physics direct it to happen, but through some features that are importantly dependent on both the laws of physics and the initial conditions. This is why cosmology is relevant to talking about complexogenesis. There's an initial value problem here. Those of you who listen to me or to Mindscape many times will know that it's a very parallel discussion to the discussion of entropy and the arrow of time. Entropy is a feature of macroscopic collections of matter under one of the various ways of defining entropy. There's many ways to define the entropy, just like there are many ways of talking about complexity. Boltzmann's definition, the definition that is on Ludwig Boltzmann's tombstone, is to say that there are certain things you can observe about a physical system, and there are many, many different configurations of the microscopic constituents of that system that are compatible with those macroscopic observations. And so let's chunk up all of the possible microscopic states of the system into macro states. Macro states are sets of possible microscopic configurations that look the same to us macroscopically. Then it is a feature of the world, which we can try to explain, but that's a job for a different podcast, that at early times, soon after the big bang, by the standard way of thinking about macro states and micro states in the universe, the universe had a very, very low entropy. The status of why that's true is certainly very interesting, but that's something we're not going to go into. We're going to just take it as given for this particular discussion. And then simply because there are more ways to be high entropy than to be low entropy, it's the most natural thing in the world if entropy increases over time from that initial low entropy past to the future. We are nowhere near done in that process. Roughly speaking, we don't know about the future of the universe because there's things we don't know about physics and conditions and things like that. But we have a standard picture of what the universe might look like. And roughly speaking, about 10 to the 100 years from now, we will reach maximum entropy, a thermal equilibrium state of the universe. And right now we're only 10 to the 10 years after the big bang. So 10 to the 100 years is very far in the future. But the rate at which interesting things happen slows down. So even though we're only a tiny fraction of the beginning of the history of the universe, a lot of interesting things have happened in those 14 billion years. To cosmologists, 14 billion and 10 billion are the same number, right? So 1.4 times 10 to the 10 and 10 to the 10 are not numbers we need to worry about distinguishing, cosmologically speaking.
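(For reference, the tombstone formula just mentioned can be written down explicitly. This is standard textbook material rather than a quote from the episode; W counts the microstates compatible with the observed macrostate and k_B is Boltzmann's constant.)

```latex
S = k_B \log W
```

On this definition, saying the early universe had low entropy is saying that, once gravity is accounted for, its smooth macrostate corresponds to a comparatively tiny number of microstates W.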
So sometimes I'll just say the universe is 10 to the 10 billion years old. So the reason why I'm retelling that story is both because it's important to the complexity story, but also it sort of parallels the kind of discussion we have because we're not introducing new laws of physics. We're asking what are the features of the laws of physics that give rise to this behavior? And certainly the initial conditions of the universe are playing a very important role here. Now, in the biological case, just a pre figure or foreshadow or a little bit. Again, there's nothing in Darwinian evolution that says you're supposed to go towards higher levels of complexity. We had Michael Wong on the podcast not too long ago. And he and Bob Hazen and their collaborators suggested a sort of law of increasing complexity, but it's only a provisional law that only supposed to work under certain circumstances. And it's not even proven as a law. It's sort of a conjecture, if you like. And we're not going to get into that because we're thinking even bigger picture than they are right now. But the point is that it's well known that individual biological species sometimes lose complexity, right? Biological species are trying to adapt themselves to their environment. If a species that was living in the sunlight changes its environment, so now it's living underground and there is no sunlight, it might lose the ability to see it might lose vision because it's using up resources on maintaining that little bit of complex structure that doesn't need anymore. So there's nothing in biological evolution that says we need more complexity. Rather, in biological evolution, you're exploring a space of states, a space of genomes of biological creatures. Where we haven't explored nearly all of it. And we never will. It's too big, okay? So there's plenty of room for discovering new biological innovations. And those can happen in part of the biosphere and not other parts. So just like it's very natural for entropy to increase over time, it's also very natural for biological complexity to increase over time until you reach some sort of saturation point. And just like right now in the history of the universe, we're nowhere near that saturation point. Right now in the history of the biosphere, we're nowhere near that saturation point. The difference is there can be events that are maybe not completely improbable that dramatically decrease biological complexity, catastrophes, whether self-imposed or imposed from somewhere else in the universe. It's not a law of nature. It's just a tendency. And sometimes that tendency can be reversed. So let's compare those two sort of stories, the story of increasing entropy and the story of increasing complexity. And again, I'm very honest about not having to find complexity yet. It's right now in the eye of the beholder. We can get to different definitions later. The example I use and I love using, I'm going to complete, keep using it. And many of you already heard me use it is cream mixing into coffee. Okay. So imagine in your head, for those of the lucky few out there who haven't heard me give this example before, a cup of coffee with coffee at the bottom and cream on the top. Okay. That is a low entropy configuration of cream and coffee because it's a very specific kind of arrangement. You can rearrange the cream molecules within each other. And you wouldn't notice it macroscopically. You can rearrange the coffee molecules within each other. And you wouldn't notice it macroscopically. 
But you can't mix cream with coffee without noticing. Okay. So that's why it's a low entropy state. You can then mix it. You put a spoon in there or you just let it mix itself over time. Maybe, you know, it's in a mixer or something. And it will become all mixed up. And now it's in a high entropy state. And the high entropy state is everything is mixed together. Everything looks perfectly uniform. It's very conventionally true of very high entropy states that they are featureless. Right. Because if they were features in the high entropy state, you could sort of mix them together and increase the entropy. So the completely mixed cream and coffee situation is high entropy. And voila, we have the second law of thermodynamics that entropy tends to increase in closed systems over time from completely unmixed to completely mixed is a journey of increasing entropy. Whereas if we think about the complexity of the system, and again, without defining it, we're just going to follow our noses and say, look, when the cream and the coffee are completely separate, that's a pretty simple configuration. Because intuitively, it was easy for me to precisely describe it to you. Namely, all the creams on the top and all the coffees in the bottom. Macroscopically, there's nothing more interesting going on. Microscopically, maybe I need to tell you the position, velocity of every molecule in there. OK, but it's so already we've learned something. There's something about complexity that is a coarse grain of macroscopic phenomenon, right at the level of the micro states, at the level of the position of velocity of every molecule or atom or elementary particle in that cup of coffee. There's nothing that distinguishes the amount of information you need to convey the state of the system from one moment to the other, whether it's mixed or unmixed, it doesn't matter. This is very much like saying that Laplace's demon doesn't know about entropy, because entropy is a coarse grained phenomenon. Entropy is an example of something I can say about a system given wildly incomplete information. And likewise, Laplace's demon has complete information, so it doesn't need to talk that language. Complexity is a similar thing. The reason why the cream and the coffee completely separate or is simple because there exists a highly compressed description that tells you everything about the macroscopic configuration. Likewise, when you've mixed everything together, another cream and coffee is all mixed in, it's a high entropy configuration. It is still very simple, because again, I've given you the complete macroscopic description. If you think about complexity, so here's one version of complexity is first coarse grain the system, ignore all the microscopic specificities that you don't really care about from your macroscopic point of view. And then ask me how much information do I need to give you to completely specify the state of the system? That is one version of complexity. And in this particular example, even though it is low entropy at the beginning and high entropy at the end, it is simple. That is to say low complexity at the beginning and also low complexity at the end. The punchline, of course, is that in the middle, where the coffee and cream have begun to mix into each other. 
And maybe you see like some tendrils of cream and coffee or some swirls, turbulence in there, something like that, there to precisely tell you where exactly all of the cream and coffee, the different layers of darkness and brightness and so forth would appear to you in an image would require a lot more information. That's when it looks complex. And this behavior is, I would say, quasi robust by this behavior. I mean, the idea that in a closed system, you start with low entropy and entropy simply increases, but you start simple and complexity can grow and then decrease. OK, that is a quasi robust in the sense that it doesn't have to happen. But it's a very natural thing for that to happen. Complexity can happen at medium entropy configurations of stuff. And in this case, it actually does. OK. So that's interesting. The idea very roughly is that entropy increases, but complexity comes and goes. OK. Now, number one, that's a certainly not a very sophisticated version of complexity. There's no teleonomy there. There's no substructure. There's no power laws. There's no hierarchical network or anything like all the various things that conventionally go along with discussions of complexity. None of that is there. What we're talking about is literally an amount of information that needed to specify a macroscopic configuration. And this is quite literal. You could actually do this experiment with the cream and the coffee, take a picture of it on your iPhone and save the images of the cream and coffee separate halfway mixed together and completely mixed together. If you all do it right and you can do this, it doesn't need to be cream and coffee, whatever fluids you like, as long as they're distinguishable. The image that you save on your phone of the medium entropy configuration where they're half mixed together will have a larger file size than the files of the simple configurations where the cream and coffee are either all distinct or all mixed together because there is a more efficient compression algorithm when the cream and coffee are completely separate or completely mixed together because there's big parts of the picture that look the same macroscopically. And your compression algorithm that is JPEG or JIF or whatever is taking advantage of exactly that. So this sort of very simple minded version of complexity is literally tracked by how much we can compress the macroscopic information. Now. There is a tension between the fact that entropy increases over time and the fact that complexity comes into existence in the biosphere. Right. This is a well known tension that has been exploited by people who want to teach creationism in schools. There's an argument that biological evolution is incompatible with the second law of thermodynamics. This argument is complete bullshit. It's very, very wrong. But there's still something that remains to be explained. So we'll be very, very careful and explicit about this. The fact that complex structures like you and me, like other animals and plants and so forth come into existence in the biosphere is completely 100% compatible with the second law of thermodynamics, even though the sort of intuitive everyday language gloss on the second law would say disorderliness increases over time. 
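(Editor's aside: as a concrete, minimal sketch of the compression experiment described above, and not the code from the unpublished paper Carroll mentions, one can simulate two distinguishable fluids mixing by random neighbor swaps, coarse-grain each snapshot, and use the zlib-compressed size of the coarse-grained image as a stand-in for apparent complexity. The grid size, number of sweeps, quantization levels, and block size below are arbitrary choices.)

```python
import zlib
import numpy as np

def coarse_grain(grid, block=8):
    """Average over block x block cells, then quantize to three gray levels."""
    n = grid.shape[0] // block
    means = grid.reshape(n, block, n, block).mean(axis=(1, 3))
    return np.digitize(means, bins=[1 / 3, 2 / 3]).astype(np.uint8)

def apparent_complexity(grid):
    """Compressed size (in bytes) of the coarse-grained snapshot."""
    return len(zlib.compress(coarse_grain(grid).tobytes(), level=9))

rng = np.random.default_rng(0)
N = 64
cup = np.zeros((N, N), dtype=np.uint8)
cup[: N // 2, :] = 1                      # "cream" on top, "coffee" on the bottom
moves = np.array([(-1, 0), (1, 0), (0, -1), (0, 1)])

for sweep in range(1501):                 # stochastic mixing: one swap per site per sweep
    if sweep % 150 == 0:
        print(sweep, apparent_complexity(cup))
    sites = rng.integers(0, N, size=(N * N, 2))
    steps = moves[rng.integers(0, 4, size=N * N)]
    for (i, j), (di, dj) in zip(sites, steps):
        i2, j2 = (i + di) % N, (j + dj) % N
        cup[i, j], cup[i2, j2] = cup[i2, j2], cup[i, j]
```

If this toy behaves the way the photo experiment does, the printed size should start small, rise while tendrils and partial mixing are visible, and fall again once the grid is uniformly mixed.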
The tension is if the whole universe is going through a process by which disorderliness is increasing over time, how can it ever come to be that things like you and me, which are exquisitely organized biological machines would pop up in the mechanistic, non teleological, not goal directed evolution of ordinary physical stuff. It doesn't seem like the origin of life or the later evolution of life from simple single cell, simple single celled things into complex multicellular things is an example of entropy increasing, right? Now the answer there is very well known to anyone who knows anything about this, which is that the earth and the biosphere are not closed systems. Okay. The I even said it when I quoted the second law in a closed system, entropy increases over time. The earth is not a closed system. The earth gets light from the sun. And it's very, very important that the sun is a hot spot in a cold sky that provides a source of energy, but it's a source of low entropy energy. The earth gets light from the sun. It does things with it. And then it gives back the energy to the universe. Okay. And it gives back the same amount of energy, roughly speaking. These days it gives back a little less because of global climate change. We are keeping a little bit more energy than we give back to the universe, but that's a tiny perturbation on the overall flux of energy. The important thing is that we give that energy back to the universe in a much higher entropy form for every one photon we get from the sun, which is typically a visible light wavelength photon. We give back 20 photons to the universe, 20 infrared wavelength light photons. And that's 20 times the entropy, roughly speaking. So even though it is true that if you ignored the flux of radiation from the sun and then back to the universe, the biosphere coming into existence represents a decrease of entropy. It's not a net decrease of entropy in any sense whatsoever. It's parasitic upon the fact that the whole picture, including the light we get from the sun, is absolutely increasing entropy over time. It's exactly like, say, the second law of thermodynamics does not prevent you from cleaning up your room. Cleaning up your room lowers the entropy of the configuration of stuff in your room, but it doesn't lower the entropy of the universe because you are doing work, you are sweating and cursing and whatever it takes. And if you were very, very careful about accounting for all the entropy, you would see that it's going up. Okay, so when I say there's a tension between the existence of complex biological structures and increasing entropy, it's only an apparent tension. If you really understand what's going on with the entropy budget, there's no conflict at all. Nevertheless, if you're, if you've gone beyond the sort of culture war political battles about teaching evolution in schools and are just asking the science question, even though there's no contradiction with the second law to say that entropy is increasing and biological complexity is also developing here in the biosphere. It's also not obvious why it happens. Okay, it's allowed to happen. But that doesn't mean it will happen. It's a little bit trickier than that. There's, you know, the moon gets light from the sun and radiates it back to the universe, but it doesn't develop life in any obvious way. So this raises the questions of complexogenesis. Where exactly does all that complexity come from? 
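(A rough back-of-the-envelope for the "one photon in, twenty photons out" bookkeeping above, assuming roughly thermal radiation at the Sun's surface temperature of about 6000 K and Earth's emission temperature of about 300 K: a typical thermal photon carries energy of order k_B T, so re-radiating the same total energy at the lower temperature takes roughly the ratio of temperatures in photon number, and the entropy of thermal radiation scales roughly with the number of photons.)

```latex
\frac{N_{\rm out}}{N_{\rm in}} \sim \frac{T_{\rm Sun}}{T_{\rm Earth}}
  \approx \frac{6000\ \mathrm{K}}{300\ \mathrm{K}} = 20,
\qquad
\frac{S_{\rm out}}{S_{\rm in}} \sim \frac{N_{\rm out}}{N_{\rm in}} \approx 20 .
```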
What are the necessary and sufficient conditions for these kinds of complexity to develop? Okay, that's all warm up. That's all inspirational pep talk. And now we can start thinking about the universe more specifically, more seriously. 14 billion years ago, there was something called the Big Bang. There's a whole nother discussion to be had about what you mean by the Big Bang. We're not going to talk about singularities in the beginning of the universe. We can start talking a few seconds after the actual Big Bang event. If you want, we can talk about the part of the universe where we actually know something about it. We do know something about the universe just a few seconds after the Big Bang because of Big Bang nucleosynthesis. The early universe was a nuclear reactor and a fusion reactor that was turning hydrogen, sorry, yeah, hydrogen and neutrons into helium and other light elements. And you can see the effects of that. And you can predict exactly the relative ratios of protons to helium, nuclei and so forth. And you can see in the current universe, in parts of the universe, which have been relatively undisturbed by the appearance of stars and things like that, that the abundance of helium and other light elements today matches what we predict from general relativity and from our knowledge of the contents of the universe from those nuclear fusion reactions a few seconds or minutes after the Big Bang. So we know that there might have been before that something like inflation or something like that, a period at a much shorter period of time, 10 to the minus 30 seconds or whatever, when the universe didn't have particles at all, it was dominated by some inflaton field. Okay, and that is more or less smooth and featureless. And then it reheats, as we say, and turns into this gas of hot particles. But we don't know that for sure. So we'll think about that. We'll keep that as an option. But I'm just letting you know that that's the part of the history of the universe, which we don't have 100% control over. If inflation did happen, so let's talk as if it did for a while. If you take the universe as it is today, and you take sort of the volume that we can see, right, you know, maybe 20 billion light years in every direction. And then you shrink that down under our extrapolation of the expansion of the universe given general relativity and its matter content. So you shrink it down to what it was in some tiny fraction of a second. And you say you claim that inflation happened. Okay, let's imagine that it happened. You don't need to claim it's not going to really matter for anything that we're going to say here, except for one thing, which I'll be specific about in a second. But mostly it's just something that we can ask about right now. The thing about inflation is there's not a lot of particles moving around, right? It's just one big scalar field. And there's essentially no specificity to the configuration of that scalar field. It's very boring. So a lot going on. It corresponds to a low entropy configuration. In fact, roughly speaking, all of the entropy comes from gravity, comes from space time. Okay, this is one of the reasons why this whole discussion of gravity and cosmology is slightly complicated because cosmology, where you have the whole universe as your subjective interest, is a case where gravity matters and gravity matters for entropy in particular. And entropy and gravity are two subjects which we don't have 100% confidence talking about. 
We have some knowledge of, given what Stephen Hawking did and Gary Gibbons did in the 1970s. So we're going to wave our hands a little bit. But all of this is to say, it is perfectly adequate to our current purposes to say that the entropy of the universe during the inflationary era was maybe something like 10 to the power 10. Okay. It could have been 10 to the power one, could have been 10 to the power 20. We've none of this really matters for our current discussion. It's just what will be a very low number compared to what the entropy is a little bit later on. The reason for the uncertainty is that we don't know much about the specifics of inflation. Okay, we have lots of different possibilities for how inflation could have happened and so forth. But let's just keep that number 10 to the 10 as a number out there for the entropy at the very, very early times 10 to the minus 30 seconds after the big bang. Then once you reheat the inflationary energy into ordinary matter and radiation, then in our observable patch of universe, I'm calling the universe. I'm going to be a little bit sloppy about this. I can't help it. Sorry. When I say the entropy in the universe, I mean what I said before, which is the region of space that corresponds to our currently observable universe, perhaps extrapolated backward or forward in time. Okay, so I don't have any idea what the entropy is outside our observable universe. So I'm not talking about that. And even though I don't have complete observational evidence over what our universe was like in the very far past or the very far future, I can use our standard picture of cosmology to talk about what the understanding of entropy would be under that picture. So if cosmology turns out to be different because of some future discovery, then we can rehab the conversation. But anyway, within what we call the co-moving volume, the volume of space that is corresponds to the volume of space we can observe today, there are about 10 to the power 88 particles in the universe today. Almost all those particles are either photons or neutrinos. How do we know this? Sometimes it's because of direct observation, like with the photons, they're mostly in the cosmic microwave background, and we can actually just detect them and count them. With the neutrinos, it's harder, but we can make a prediction once again, based on known physics, and we can test that prediction against the data. The number of neutrinos. So you might say, look, neutrinos, you might have heard the neutrinos come in different forms. There are electron neutrinos, and there are muon neutrinos, and there are town neutrinos. How do we know how many of them there are? And how do we know there aren't other kinds of neutrinos that aren't included in our current knowledge? These are excellent questions, but cosmologists are not idiots. They thought of these questions. They have a theory that predicts how many neutrinos there should be if there are only three different kinds. And that theory says, look, at very early times, there were roughly equal numbers of photons and each kind of neutrino because they're created equal. Right. These are all essentially massless particles. Neutrinos have tiny masses, but compared to the energies in the very, very early universe, the mass of neutrinos essentially zero. It's negligible. What happens is there's various events in the history of the universe, like electrons and positrons coming together and annihilating. 
We know that they annihilate into photons, so you create more photons in the universe, but you don't create more neutrinos. So even though there are three kinds of neutrinos and only one kind of photon, we actually think that there are more photons in the universe than neutrinos. None of this matters. Okay, I'm just I'm just telling you that I'm not cheating you. I'm just trying to give you reason to believe that I'm not lying to you. We thought about all of these issues. The point is there are roughly 10 to the 88th particles in the universe. And mostly photons and neutrinos. Why? Because they're light and they don't annihilate with each other and go away. Okay. So they're easy to make. They're hard to kill. That's why there's mostly photons and neutrinos. Things like electrons and protons and neutrons. Well, you know, neutrons are unstable. They just go away. Unless you capture a neutron in a nucleus, it's not going to last very long. So of the heavy particles, we mostly have protons and electrons. And there are roughly 10 to the 80th of them compared to the 10 to the 88th of the photons and the neutrinos. So 100 million times as many photons as there are protons or electrons. Now, there could be dark matter in the universe. That's absolutely possible. How much of it is there? We know the density of dark matter in terms of grams per cubic centimeter. But we don't know the mass of individual dark matter particles. If the mass is larger than that of a proton, which in most dark matter models it is, then the number of particles in dark matter is much smaller than the number of either photons or protons. So we don't need to worry about it. If the mass is much lighter, then it's a trickier story. But you might expect that you get approximately the same order of magnitude of light dark matter particles as you do photons or neutrinos. All of which is to say the entropy of the universe in our co-moving volume, as far as our best cosmological models predict right now, is about 10 to the 88th. And that is true today. The entropy of the photons and neutrinos is about that. It was also true soon after Big Bang Nucleosynthesis, when we made all those light nuclei and we actually have some observational data about what was going on. So all of that is to say if we're tracking the entropy of the universe over time, the entropy of our co-moving volume of universe, it starts at maybe 10 to the 10. It eventually, not too long after, grows to something like 10 to the 88 because the entropy of a gas of particles is to within an order of magnitude the number of particles that are in there, if it's a thermal distribution, which it is in this case. So the entropy goes up, right? It goes from 10 to the 10 to 10 to the 88. That's good. That's what the second law of thermodynamics says should happen. And there's only two more events in the history of entropy in the universe that really matter. One is that you make black holes. Stephen Hawking told us that black holes have entropy. There's a simple formula. If you have a million solar mass black hole, the entropy of it is approximately 10 to the power 90. The entropy goes like the area of the event horizon, which goes like the Schwarzschild radius squared; entropy goes like distance squared. And the Schwarzschild radius goes like the mass. So the entropy is roughly proportional to the mass squared of the black hole.
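(A quick numerical check of the scaling just described, entropy proportional to horizon area and hence to mass squared, using the standard Bekenstein-Hawking formula; the particular masses below are just illustrative.)

```python
import math

# Physical constants in SI units
G     = 6.674e-11    # Newton's constant, m^3 kg^-1 s^-2
c     = 2.998e8      # speed of light, m/s
hbar  = 1.055e-34    # reduced Planck constant, J s
M_sun = 1.989e30     # solar mass, kg

def bh_entropy_in_kB(M):
    """Bekenstein-Hawking entropy of a Schwarzschild black hole, in units of Boltzmann's constant.

    S/k_B = A c^3 / (4 G hbar) with A = 4 pi (2 G M / c^2)^2, which simplifies to 4 pi G M^2 / (hbar c).
    """
    return 4 * math.pi * G * M**2 / (hbar * c)

for solar_masses in (1.0, 1e6, 1e9):
    S = bh_entropy_in_kB(solar_masses * M_sun)
    print(f"{solar_masses:10.0e} M_sun  ->  S/k_B ~ 10^{math.log10(S):.0f}")

# Roughly 10^77 for one solar mass, ~10^89-90 for a million solar masses
# (consistent with the order of magnitude quoted above), and ~10^95 for a billion.
```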
We think that big galaxies like the Milky Way, like other big spiral galaxies in the universe, each one of them has at the center a supermassive black hole. Supermassive means a million solar masses or more. The biggest supermassive black holes have masses of something like a billion times the mass of the sun. So if a single million solar mass black hole has an entropy of 10 to the 90 and the whole universe back before there were any black holes had an entropy of 10 to the 88, then entropy has certainly gone up because there's a bunch of black holes in the universe. The little black holes don't matter. They're sub dominant as far as entropy is concerned because the entropy goes like the mass squared. And so we can basically do an inventory of all the big black holes in the universe. Again, within cosmological precision to an order of magnitude or two, the total entropy of the universe today, the co-moving universe, the co-moving volume in which we can observe, is about 10 to the 103. Okay, all of which is just to convince you the entropy is still going up: 10 to the 10, to 10 to the 88, to 10 to the 103. Now, eventually, if you keep going forward in time, those black holes will evaporate. In fact, all the matter in the universe will fall into black holes. Then the black holes are going to evaporate. This is what happens 10 to the 100 years from now. The last supermassive black hole, according to our best estimates today, will evaporate and go away. And then you might think the entropy is zero or maybe you think it's big because of all the particles that were made from the black holes and stuff like that. Here's where we're in uncharted territory. We truly don't know, because quantum gravity matters in these circumstances. The thing that I like to do, the estimate I like to put on the amount of entropy in the observable universe comes from quantum gravity, comes from at least semi-classical quantum gravity, exactly as Hawking proved that a black hole has entropy proportional to the area of its event horizon. If you live in a cosmological universe with a positive vacuum energy, a positive cosmological constant, like we think we probably do, even though we don't know for sure, but we think we probably do, then there's a horizon around us. And that volume of universe with a horizon around it has an entropy proportional to the area of that horizon. And that horizon is big. It gives us an entropy of something like 10 to the 122, something like that. OK, so again, it's higher. The entropy goes up over time: 10 to the 10, 10 to the 88, 10 to the 103, 10 to the 122. So it's a story of increasing entropy. And then past that, you just have an empty universe. Nothing in it. De Sitter space, technically, because of a positive cosmological constant. And maybe that lasts forever. Maybe there's some future cosmological weirdness. Remember, we had Katie Mack on the show a while back talking about possible future scenarios for the universe. Doesn't matter. We're perfectly content with thinking about only the first 10 to the 100 years of the history of the universe. And we have a pretty good handle on what could happen there. OK. So that's just a reminder of what we know about cosmology. And as you know, from what I just said, entropy goes up over time. And we even have an understanding of why, as I said, Boltzmann's definition of entropy says it's the number of microstates that fit into a macro state. And I've told you what those numbers are.
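(The 10^122 figure can likewise be checked with the same area law applied to the cosmological horizon, taking the measured cosmological constant to be roughly Λ ≈ 1.1 × 10^-52 m^-2. This is a back-of-the-envelope estimate, not a derivation from the episode.)

```latex
r_H = \sqrt{\frac{3}{\Lambda}} \approx 1.6\times 10^{26}\ \mathrm{m},
\qquad
\frac{S_{\rm dS}}{k_B} = \frac{\pi r_H^2 c^3}{\hbar G} \approx 3\times 10^{122}.
```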
OK, so entropy increasing in time is something that makes perfect sense to us. So now we can start thinking about the complexity. And for complexity, we're using the "I know it when I see it" kind of attitude. And I think that it's pretty clear what actually does happen at those early times. It's exactly like a cup of coffee. I'm trying to figure out the right way to say this. The early universe, as we currently understand it, was essentially featureless. It looked the same everywhere. It looks exactly like a high entropy state if you didn't know about gravity. This causes a lot of people confusion. Even professional cosmologists, they say, look, the early universe looks like it's in a thermal equilibrium state. That's a high entropy state. But of course, it evolves into a much higher entropy state. How can that happen? And they confuse themselves by thinking about the expansion of the universe, giving it more room to grow. But it's all wrong. The thing is that that smoothness of the early universe is actually low entropy because gravity was really strong. And there's much more room to make black holes and inhomogeneities in the configuration of matter when gravity is strong than when gravity is weak. OK, the early universe really did have low entropy. But it was also simple is the point. It's a low entropy and simple configuration. If I say, you know, a second after the Big Bang, the universe is hot, dense and smooth and rapidly expanding, then as long as I attach numbers to how hot it was, how dense it was, how rapidly expanding it is, you're done. I've completely described the universe to you. If we imagine that we're allowed for our present purposes to think of complexity as how much information it takes to completely specify the macroscopic configuration of the system, the answer is a very, very small amount of information for the early universe. Now, skip ahead to the future of the universe, right? We said what's going to happen is all the matter in the galaxy is going to fall into black holes. The black holes are going to evaporate into a thin gruel of particles. The universe continues to expand. So even that thin gruel of particles sort of dilutes away to nothingness and we're left with literally nothing but empty space. There it is. That's the description of the universe in the far future. It's empty space with a certain vacuum energy, with a certain cosmological constant. It's a very, very simple description. So again, simple at the beginning, simple at the end. Now, right now in the history of the universe is when the universe is complex. Because if you wanted to describe what was happening in the universe today, you would have to tell me all sorts of details about galaxies and stars and maybe even individual planets and life on them and internets and books and podcasts and all these things, right? Those are all part of the macrostate of the universe. And it's enormously complex. Even if I cannot give you a number, sorry about that. I would love to be able to attach quantitatively a number to that. But that is beyond my pay grade right now. That's something that is very worth thinking about trying to do. I can do it for the cream and the coffee. I can't really do it for the universe. It's too complicated. But clearly the phenomena are so dramatic that we don't need to be utterly quantitative about them to say that the universe has the same complexity growth curve that the cup of coffee does.
It starts low, it goes high today, and then it's going to diminish into the future. Everyone always wants to know when I say this, how close are we to the peak right now? But I can't answer that really precisely because we don't have this quantitative way of measuring it. It depends on your coarse-graining. All right, it depends. Like for the cream and the coffee, you can kind of just pick some scale at which you observe what's going on and then coarse-grain within that scale. It's much harder to do with the universe. The dynamic range of interesting things going on in the universe is much larger than that. So I am not able to tell you how close we are to peak complexity. Also, presumably, because it depends on questions like how much life is there elsewhere in the universe? How much technological advancement? How much technological advancement could there be? Like, you and I are in a civilization that is really just in the beginning of technological advancement. So if technology and interconnectedness of a culture contributes a lot to the quantitative complexity of the universe, then I have no way of measuring that right now. Sorry, I can tell you one tidbit for what it's worth. You can forget about life and technology and things like that. Just look at stars. Okay, the creation of stars is in some sense an example of a structure coming into existence that requires more information to precisely tell you how many stars are there. Where are they? Things like that. So it's a version of complexity. And you might think, well, maybe we're just at the beginning of the star forming era in the history of the universe. And the answer to that one is no, we are not at the beginning of the star forming era in the history of the universe. The peak star formation rate was about four billion years after the Big Bang. Most of the stars that will ever be formed in the history of our universe have already been formed. We are in a slow down era in terms of the formation of stars in our universe. Stars shine for billions of years. So the ones that were made almost 10 billion years ago, many of them are still shining, especially the low mass ones. So we're in a star rich universe, but we're not making a lot more stars now. Most of the star formation has already happened. Does that mean that we're past peak complexity? Again, that depends on details I don't know the answer to, but it's something to keep in mind. OK, so the star formation thing is like one factoid to keep in mind. Remember, we're not giving you, I'm not presuming to give you the once and for all comprehensive picture of complexogenesis or the evolution of complexity in the universe. We're groping toward that. This is how science gets done. This is science in progress. We're taking some facts, some data, some observations, some factoids. And we're asking what is the bigger picture into which they all fit. So peak star formation. OK, that's one factoid. Another is think a little bit more in detail about this process of structure formation in the universe. The universe starts very smooth, simple. Galaxies and stars and planets form over time. Eventually they will sweep away. In the early days of my era of cosmology, one would do simulations of large scale structure which only included dark matter in them. OK, and the nice thing about dark matter is that number one, it's most of the matter in the universe, but also number two, it's simple. It's just gravity pushing it around.
You don't you don't get dark matter stars, supernovae, interstellar materials, magnetic fields, any of those complicated things. These days. A really good cosmological simulation is going to include more than just dark matter, but back in the day, you would just look at dark matter. And what happens is you start with a box of dark matter in your computer, not real dark matter, simulated dark matter, which is more or less smooth, but not perfectly. And what happens is as the universe expands and grows, gravity pulls together these slightly over dense regions into very over dense regions and it evacuates the regions which are slightly under dense. So as I like to say, it turns up the contrast knob in the universe. From our perspective of apparent complexity, the complexity that you get just by looking at the system, complexity goes up under that process. But of course, it's a much richer story if you start including all the details because you don't just increase the contrast, you start making new things that didn't exist before, not just galaxies, but also stars, planets, etc. And there it's not just gravity, right? It's a balance of forces. And this is where things get really interesting in terms of complexity coming into existence. Gravity is kind of a dumb force. It's long range, which makes it important in astrophysics and cosmology. It just accumulates the more matter you get. It doesn't cancel out like electromagnetism does with electromagnetism. You have positive charges and negative charges. So the earth has no net electric field around it, but as a net gravitational field. And because there's only mass, there's not positive and negative charges. All gravity does is pull things together. If you have something like dark energy, you can push things apart. But in terms of particles or bodies, celestial objects, it just pulls them together. So there's not a lot of room conceptually for gravity all by itself to create truly complex structures. You get a little bit of an increase in apparent complexity, much like in the coffee cup. But you're not going to make a living being just out of the force of gravity. What you have in practice is that gravity does the initial work of pulling things together. But eventually other forces kick in. And this is I mean forces in a broad sense. I know that sometimes in particle physics, we talk about the four forces of nature, right? We talk about gravity, electromagnetism, the weak nuclear force, the strong nuclear force. That's always been a fake. That's always been a sort of shorthand for saying that there are four different kinds of gauge bosons in the universe. But you also have, you know, the Higgs boson. Does that count as a force? Yeah, kind of it does. Kind of it doesn't. What about the Pauley exclusion principle that says that two electrons or two fermions more generally cannot be in precisely the same quantum state? That leads to a force that leads to the Pauley force, the electron degeneracy force, if you want to call it that, a Fermi pressure in some sense. That's in some sense the most important force in our everyday lives. That's what keeps solid matter solid. The fact that electrons cannot be in the same quantum state. Otherwise, atoms could just be exactly on top of each other. Atoms take up space because of the Pauley exclusion principle. That's a really important force in the scheme of things, even though it doesn't count as one of the four elementary particle physics forces. 
The reason for that is that the word force is not actually fundamental in modern quantum field theory. There are quantum fields and they obey the equations that they obey. Okay. And we human beings later on in our macroscopic lives find it convenient to refer to certain things as forces and certain things as not forces. But who cares? That's not fundamental. That's not deep. In the broader conception of the word force, the fact that matter takes up space is crucially important. Planets congeal coalesce out of the, I don't know, intergalactic, the primordial soup, in some sense. And they only stop coalescing because they have pressure inside because they're solid. Stars coalesce, they stop coalescing for a different reason, not because they're solid, but because nuclear reactions start going on in the center of the stars. Those nuclear reactions give rise to heat, which gives rise to pressure. And you solve some equations. If you were an astronomy graduate student like I was, you would solve equations for hydrostatic equilibrium to understand stellar structure when you were in graduate school in a simple model. But, you know, in some sense, planets and stars look similar, but in some sense, they're very different. They're supported by very different kinds of things, thermal forces versus simply material solid forces. But I'm pointing this out just to emphasize that there's a transition of some sort, because, you know, we're taking clues from what we see in the universe. There's a transition of some sort from simply gravity pulling things together to an interplay between a simply attractive force like gravity and a repulsive force, like the pressure that you get inside a planet or a star. And it's this interplay, this competition between two forces that allows complexity to really become interesting, right? If all you had in the world was gravity, you wouldn't make very, very complex, interesting structures, but we have a richer world than that. Easy to say, easy to point to that feature of the world. What we want to do is understand in more detail what is precisely the features of these competing forces that allow complexity to come to existence. I don't know the answer to that, so I'm not going to give you the answer, but that's the kind of thing we're thinking about. Okay, so that was fact number two. Fact number three, first fact was star formation slowing down. Second fact was competition of forces. Third fact is the way that we're talking about complexogenesis is a particular way, and certainly my favorite way, but not the only way. What do I mean by that? There, the way that I'm talking about it is to imagine that in the early universe, we can debate exactly about what the word early means there, but in some sense of the word early universe, there was a configuration that was pretty darn smooth, but not exactly smooth. Okay, it was not exactly smooth, and therefore over cosmological time, regions that were just a little bit more dense than average could coalesce under the force of gravity and become denser, whereas other regions emptied out, and that leads to an increase in the apparent complexity of the configuration. But the later complexity is sort of inherent in the earlier configuration. In the approximation where all of the relevant physics is classical, that's a pretty good approximation on astrophysical scales, but there will always be exceptions. You could be Laplace's demon, right? You can imagine that the underlying physics is deterministic. 
If you knew the state exactly—and this is how you can simulate it: when you simulate large scale structure, it's a classical simulation, because you're not really doing quantum mechanics there. You have a bunch of point particles, they have gravity, and maybe you can somehow simulate stars forming and supernovae exploding and things like that, but it's all mostly classical. And so whatever complexity you get at late times was kind of inherent there at early times, because the laws of physics are deterministic. So what we're saying is that the initial conditions have all of the capacity for the complexity to eventually come into existence, and all that's going on is that the ordinary laws of physics are bringing that potential complexity to life. This is a very different picture than someone like Stephen Wolfram would advocate. Wolfram was another Mindscape guest; we didn't talk too much about this aspect of his work, but one of his famous claims is that complexity in the universe can be thought of as arising in a way analogous to—it's a little vague, but that's okay, you can be vague in the early days of constructing a bold new physical model—analogous to cellular automata. You know the famous pictures that Wolfram always has in his books and talks: these two-dimensional grids where you start at the top with some initial condition and then evolve downward in time, because he's a computer scientist, and you get black and white pixels lighting up according to some rule. Different cellular automata will have different rules. And what Wolfram was able to show is that you can start a cellular automaton from extremely simple initial conditions—basically a row of squares, each either white or black, maybe all of them white except for one black one. That's a very, very simple initial condition. And then you just apply the update rule of the cellular automaton to that, and you get completely bizarre-looking, chaotic, complex behavior later on. That is super interesting in its own right. It's showing how complexity can arise out of simplicity.
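To make the kind of rule Wolfram has in mind concrete, here is a minimal Python sketch of an elementary cellular automaton started from a single black cell. The choice of Rule 30, the grid width, and the number of steps are illustrative assumptions of mine, not details from the episode.

```python
# Minimal elementary cellular automaton: one row of cells, evolved downward in time.
# Rule 30 is used here purely as an example of a simple rule producing complex output.
def step(row, rule=30):
    n = len(row)
    new_row = []
    for i in range(n):
        # read the three-cell neighborhood (periodic boundary conditions)
        left, center, right = row[(i - 1) % n], row[i], row[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right
        # the rule number's bits encode the update table
        new_row.append((rule >> neighborhood) & 1)
    return new_row

row = [0] * 81
row[40] = 1  # "all white except one black square"
for _ in range(40):
    print("".join("#" if cell else "." for cell in row))
    row = step(row)
```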
But there, all of the work is being done by the dynamical law, by the rule that takes you from one configuration at one moment in time to the next configuration at the next moment in time. There the complexity was not inherent in the initial condition; it's inherent in the rules. It's absolutely allowed to contemplate that the complexity in the real world comes from something like that. It's just completely incompatible with everything we know about the fundamental laws of physics today. The laws of physics as we know them are deterministic—except for wave function collapse, of course, which is a whole other thing—but they're mostly deterministic, and so, cosmologically speaking, if you evolve the universe from a few years after the Big Bang to a few billion years after the Big Bang, it's very ordinary deterministic laws of physics that are giving rise to the increased complexity, because it's all there in the initial condition. Maybe a Wolfram-esque attitude is eventually going to be the right one, but I'm always in the mode of being what John Wheeler called radically conservative: you start conservative, in the sense that you stick with everything you know about the laws of physics, and then you're radical in that you push them as far as you can. So we're sticking with the known laws of physics, mostly. In that picture, complexity does not arise from the laws; it arises from the initial conditions. Now, I keep hesitating and stumbling because the world is fundamentally quantum mechanical. The things I just said are only true classically, and quantum mechanically things are a little bit different. Wave functions do collapse—or, if like me you're an Everettian, decoherence happens and the wave function of the universe branches. And it turns out that this branching of the wave function is not just something we can say happens occasionally but isn't important. In very interesting cosmological models, branching of the wave function is 100% necessary for the story of complexity that we're telling right now. In fact, this is kind of a cool thing that I don't know if anyone has ever really emphasized when talking about cosmology, if you believe in inflation—if you think that the universe underwent this period at very early times when it was dominated by some almost constant, super-high-energy dark energy, whether it's an inflaton or something else. We have a story about how that initial configuration evolves into the present universe, and you may have heard the claim that galaxy formation ultimately comes from quantum fluctuations in the early universe. What does that mean? It means that the quantum state during inflation is incredibly simple. And like we already said, it's incredibly low entropy. It's not just approximately smooth, it's exactly smooth. That's the difference. The quantum state of the universe during inflation is basically the vacuum state; it's basically as simple as it could possibly be. So where does all this initial-condition data, from which the later galaxies and planets arise, come from? The answer is branching and decoherence. The universe essentially observes itself. If you want to split the universe into an environment part and a system part, maybe the system part is the large scale fluctuations in density, and the environment part is the small scale configuration of individual photons and things like that. The initial wave function of the universe during inflation is kind of like a simple harmonic oscillator: it's a vacuum state, it's perfectly featureless, it's simple and so forth. And then after reheating, when you turn all that inflaton energy into hot dense matter, and part of that matter acts like an environment and part acts like the system that has density fluctuations, you branch the wave function of the universe. What you're doing is branching an initially simple overall wave function of the universe into a combination of branches, and in each branch things look complex and specific. This sounds a little bit wild and new-agey, but if you hang around cosmologists and they ever show you the picture of the cosmic microwave background—the image you get from the Planck satellite or the WMAP satellite of the density fluctuations in the early universe—this is data. This is really what the universe looked like a few hundred thousand years after the Big Bang: very tiny fluctuations in temperature, one part in 10 to the 5 from point to point. And it looks kind of random, right? A little hot spot here, a cold spot there, etc. And it is random, statistically. We can be very, very specific about the power spectrum, the probability distribution of fluctuations.
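As a toy illustration of what drawing one realization of such a random process means, the sketch below generates a map of temperature fluctuations with an rms amplitude of one part in 10^5. Treating the pixels as uncorrelated Gaussian noise is my simplification; a real CMB calculation draws the modes from the predicted power spectrum rather than using white noise.

```python
import numpy as np

rng = np.random.default_rng(42)

# A toy "map" of delta T / T: Gaussian fluctuations with rms amplitude 1e-5.
# (Real CMB maps are drawn from an angular power spectrum C_ell, not white noise.)
n = 64
delta_T_over_T = rng.normal(loc=0.0, scale=1e-5, size=(n, n))

print(delta_T_over_T.std())  # ~1e-5, "one part in ten to the five"

# Drawing again gives a different specific map -- a different realization, a
# different "branch" -- even though the statistical description is identical.
another_realization = rng.normal(loc=0.0, scale=1e-5, size=(n, n))
print(np.allclose(delta_T_over_T, another_realization))  # False
```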
So that map you're seeing of density fluctuations, of temperature fluctuations in the cosmic microwave background, we think is one particular realization of a random process. And in this language of wave functions and branching, what that means is that it's one particular observation, one measurement, of the initially featureless wave function of the universe. So the early universe was truly simple in this picture, and we are only living on one branch of it, which looks potentially complex. So quantum mechanics, and the splitting of the universe into branches via decoherence, plays a crucially important role in this story of complexogenesis. We're still thinking about complexity in this very lowbrow sense, this very simplistic way—just, how complex does the system look? We're not worried about functions, or interdependent motions of subsystems, much less goals or adaptations or anything like that. But we're not done thinking about this sort of dumb version of complexity yet. So we said that it seems sensible and true that a closed system will start in low entropy and the entropy will go up until it hits thermal equilibrium. It starts simple, then complexity can develop, and then the complexity will eventually go away; it will look simple again at the end of the day. This is true whether you are a cup of coffee with cream or whether you are the whole universe. So there's some robustness, but it's not inevitable. Think about the cup of coffee again. Think about not stirring the cream into the coffee—just letting it sit there. You're not at zero temperature; your cup of coffee isn't frozen. Individual coffee molecules and cream molecules will move around; they have their random thermal motions. If you waited long enough—and it might be a long time, but we have time—the cream and the coffee will gradually diffuse into each other. Unlike when you stir it with a spoon, there could be a very gradual, kind of simple transition from all the cream on top and all the coffee on the bottom to everything being mixed together. If you then plotted on a graph some quantification of the complexity of that configuration—which you can do in this case; this is what Scott Aaronson, Lauren Ouellette and I did in our paper—we defined a quantity called the apparent complexity of an image, which just says: coarse grain the image into cells, and then compress it. What is its algorithmic compressibility? There's this famous notion from Kolmogorov and Chaitin called algorithmic complexity. It says: given some string—and an image really is a string, you can take an image and just list the value of every pixel—what is the shortest possible computer program that would output that string? If you have a simulation, and this is what we actually did in our paper, you have what we call the coffee automaton. You have a grid, n by n, with some white pixels and some black pixels, and they're going to mix together. Just like Laplace's demon, if you kept track of every white pixel and every black pixel—there are equal numbers of them—you would just have to tell me where every pixel is. So the total amount of information you would give me to specify the microstate of the theory never changes over time. You just have to tell me, for every pixel, what its value is.
What we did is look at coarse-grained versions of that. We said, okay, we have an n by n grid, but we're going to take some chunk of it—10 by 10, a little part of it—and chunk the big n by n grid into 10 by 10 subgrids, and then average what's going on in each one. This is exactly what your eyeballs do when you look at the cream and the coffee: you don't see every atom or every molecule, you see a coarse-grained version—oh, that's pretty dark, that's pretty light, there must be a certain amount of coffee, a certain amount of cream, etc. And what this captures is the difference between a random number and an ordered number. So let's pause to think about the concept of apparent complexity, which is what Scott and Lauren and I defined. Say you have a billion digit number, and the number is a billion zeros all in a row. Is that simple or complex? Well, that's pretty simple. I just gave you the whole number: zero, zero, zero, right? So by ordinary standards, by the standards of Kolmogorov complexity, I could output that number by writing a little computer program that says print, quote, zero, and then does that a billion times, or a quadrillion times, whatever. However long the number is, the computer program to print it out is pretty short. Whereas what if you had a random number, a random billion digit number—a billion digits of the decimal representation of the number? So the number is three, five, eight, zero, one, nine, nine, etc., for a billion different digits. The shortest program that outputs that looks like print, quote, and then the number. You can't get shorter than that, so it's at least a billion characters long. And if it's a quadrillion digit number, then it's a quadrillion characters long, etc. So that's why the algorithmic complexity of Kolmogorov and Chaitin is interesting: it's asking the question, is there a computer-sciencey way of compressing your description of how to output that number? If it's just a random number, then no, there's no way to compress it; you have to just tell me the whole number. But a random number doesn't have any structure to it, right? It doesn't feel to us like complexity. So that's why we defined apparent complexity. In the case of the digits of the number, say you have a billion digit number, but you don't tell me every digit; rather, you chunk it into sub-numbers, say 100 digits long, and then you take the average of what's going on in those 100 digits. If it's truly a random number, then the average over each 100-digit chunk is going to be around four and a half, maybe fluctuating up and down a little, but there's much less variation from place to place in the averages of a random number. Whereas if it's a structured number—say zero ten times in a row, then one ten times in a row, then two ten times in a row—where there's a little bit of compressibility, then even after coarse-graining there is still structure in the output. Sorry, I should have been clearer about this; let me complete the thought about the averaging. If you average over every 100 digits and you just get the same average again and again, then the Kolmogorov complexity of that coarse-grained description is small, even though the Kolmogorov complexity of the original billion digit number is big—for the original you have to tell me what every one of the digits is.
But if the average of every 100-digit substring is the same, then it's easy to output that coarse-grained description: you can just output the same number again and again, just as if the whole number were zero again and again and again. So the apparent complexity of a random number is low. The apparent complexity of a string of a billion zeros is low. The apparent complexity of an intermediate number, one that has some structure but is not completely random or completely ordered, will be high. So it's just like the cup of coffee; it's capturing what we want. Anyway, that was a slightly technical detour from what I'm trying to talk about here. The point is, we do have, in the case of the coffee automaton, a very quantitative way of telling you what the value of the complexity is. There's one little footnote there—and this is the kind of thing you can do in a solo podcast that I wouldn't have time for in a seminar or a public lecture; in a seminar I'd better do this. The footnote to Kolmogorov complexity is that it is uncomputable, which is a problem. Kolmogorov complexity is uncomputable for a cool computer science reason that goes back to the halting problem of Alan Turing and his friends. You might think that for some string of numbers, you could calculate its Kolmogorov complexity by asking, what is the shortest program that will output it? Well, I don't know, but if I have a well-defined programming language, I could simply cycle through all the computer programs: I could take the shortest program, then the next shortest, and try all of them until I get to one that outputs this string. That would be my algorithm for finding the shortest program. The problem is that this algorithm is doomed to fail because of the halting problem. Famously, in computer science there is no general-purpose way of looking at a computer program and telling whether it will ever halt. So if I try to cycle through all the computer programs, starting with the shortest and letting them get longer, I might hit ones that seem to run for a long time without stopping. And I might want to say, oh, is this just an endless loop that I'm stuck in, in which case I could abort the process and start the next program? But in general I never know; maybe it's just taking a really long time. One of the features of Kolmogorov complexity is that it depends on the length of the computer program, but not on the amount of time it runs. So from a formal, strictly mathematical perspective, the Kolmogorov complexity is uncomputable. It can, however, be estimated in a large fraction of interesting cases. That's why in practice you can have efficient, useful compression algorithms. And so rather than actually calculating the Kolmogorov complexity of a coarse-grained image to define apparent complexity, we just compressed it. We used gzip, and then we tested that the specific compression algorithm we used didn't really matter; you get the same answer in the different cases. Okay, good.
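Here is a small sketch of that idea in Python, using zlib (the same DEFLATE compression that gzip uses) as a stand-in for the uncomputable Kolmogorov complexity. The string lengths and the three example strings are my own illustrative choices.

```python
import random
import zlib

def compressed_length(s: str) -> int:
    """Length of the zlib-compressed string: a rough, computable proxy for
    Kolmogorov complexity (an upper bound, really)."""
    return len(zlib.compress(s.encode()))

random.seed(0)
N = 1_000_000

ordered = "0" * N                                                # "a billion zeros," in miniature
random_digits = "".join(random.choice("0123456789") for _ in range(N))
blocky = "".join(random.choice("0123456789") * 100 for _ in range(N // 100))  # structured in runs

for name, s in [("ordered", ordered), ("random", random_digits), ("blocky", blocky)]:
    print(f"{name:8s} raw={len(s)}  compressed={compressed_length(s)}")
# The ordered string compresses to almost nothing, the random string barely
# compresses at all, and the blocky string sits in between -- structure is
# exactly what a compressor can exploit.
```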
All of which is to say: in the simple-minded case of a grid of squares that are each zero or one, white or black, given a coarse-graining and given a compression algorithm, we can calculate the apparent complexity. And given a dynamical scheme for letting the system run over time, with the coffee molecules and the cream molecules mixing into each other, we can plot the growth and eventual decay of complexity. The prediction is that the complexity starts low, goes up, and then comes back down again. But like I said, you can imagine dynamics for the cream and the coffee where it doesn't go up. If you just have cream and coffee molecules mixing with each other—diffusing independently, with no spoon stirring or anything like that—then maybe it never looks very complex. And this is the reason we have to revise the paper we initially submitted: there was a bug in our way of measuring the complexity of these images. We fixed it, but we haven't revised the paper yet, so that's upcoming, I promise. One of the reasons I keep talking about it in public is to guilt myself into actually doing the work of revising the paper and putting the revised version online. Here's the trick. In the coffee automaton, where it's just a bunch of white squares and black squares, you can come up with different update rules for how the white and black squares move through the grid. One rule might be that for every pair of nearest neighbors, there's some percentage chance per time step that they interchange. If it's two black squares, the interchange makes no difference; but if they're white and black and they interchange, it does make a difference to the overall image. You can just run that algorithm, and guess what? You never get any complexity with that particular rule, which is kind of like the particles in real cream and coffee diffusing into each other. It stays smooth and featureless, and it never looks complex. We were eventually able—and this will appear in the revised version—to come up with a different algorithm, which we call the tectonic model. Rather than looking at two nearest-neighbor squares and asking whether they interchange, you look at a finite-sized block of the grid, you randomly choose both the size of the block and its orientation, horizontal or vertical, and then you randomly slide it to the left or to the right. There are rules so that you keep the total number of white and black squares constant—basically you put it on a torus—and trust me, we did all those things correctly; we finally got it right. And the difference between the individual nearest-neighbor interactions and the tectonic model, where you have large scale coherent interactions, turns out to be completely important. The upshot is that the nearest-neighbor interaction model never becomes complex; the complexity always stays low as the entropy goes from zero to maximum.
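Here is a minimal sketch of that comparison: a tiny two-dimensional coffee automaton with both kinds of update rule, measuring apparent complexity as the zlib-compressed size of the coarse-grained grid. The specific rules are my paraphrase of the ones described here (single nearest-neighbor swaps, horizontal strip shifts), so the details differ from the actual Aaronson–Carroll–Ouellette code; it's meant to convey the structure of the experiment, not reproduce the paper.

```python
import zlib
import numpy as np

rng = np.random.default_rng(0)
N, BLOCK = 100, 10

def apparent_complexity(grid):
    """Coarse-grain into BLOCK x BLOCK averages, quantize, then measure the
    zlib-compressed size -- a proxy for the Kolmogorov complexity of the
    coarse-grained image."""
    coarse = grid.reshape(N // BLOCK, BLOCK, N // BLOCK, BLOCK).mean(axis=(1, 3))
    return len(zlib.compress(np.round(coarse * 255).astype(np.uint8).tobytes()))

def fresh_grid():
    grid = np.zeros((N, N), dtype=np.uint8)
    grid[: N // 2, :] = 1          # "cream" on top, "coffee" on the bottom
    return grid

def diffusive_step(grid):
    # swap one random pair of vertical nearest neighbors (independent diffusion)
    i, j = rng.integers(0, N - 1), rng.integers(0, N)
    grid[i, j], grid[i + 1, j] = grid[i + 1, j], grid[i, j]

def tectonic_step(grid):
    # slide a randomly chosen horizontal strip sideways (large-scale coherent motion);
    # periodic boundaries keep the numbers of white and black squares fixed
    top, height = rng.integers(0, N), rng.integers(1, BLOCK + 1)
    rows = np.arange(top, top + height) % N
    grid[rows, :] = np.roll(grid[rows, :], rng.integers(-5, 6), axis=1)

for name, update in [("diffusive", diffusive_step), ("tectonic", tectonic_step)]:
    grid = fresh_grid()
    history = []
    for t in range(200_001):
        update(grid)
        if t % 50_000 == 0:
            history.append(apparent_complexity(grid))
    # in this toy, the tectonic runs tend to show a rise in apparent complexity,
    # while the purely diffusive runs stay low
    print(name, history)
```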
The tectonic model, where there are these coherent motions, this large scale agreement between different pixels about what they're doing—that's when you get complexity. That's much more analogous to putting the spoon in there and stirring, even though there's no external spoon coming in; it's still dynamics internal to the coffee cup, but it's coherent dynamics, with large scale effects. To us, that result is extremely provocative. It's saying that some kinds of dynamics are going to make complexity come into existence and some kinds are not; the existence of long range forces, or long range coherence, seems to be playing a role. Again, we don't have the once-and-for-all list of the qualities you need in your system, the properties required for complexity to develop. But this is a clue, a hint; it's pushing us in a certain direction. I suspect that there's some mathematical result that says complexity must be low at low entropy, because there just aren't a lot of different things the system can do, and complexity must be low at high entropy, because everything is in equilibrium and pretty smooth, and complexity is allowed to be high at medium entropy. But whether it actually achieves that large complexity along the way from low entropy to high entropy depends on the details. And that's what we would like to understand better. Okay, so clearly the universe does this, right? The universe has large scale coherent motions. That's what gravity does: gravity is long range and coherent, like the tectonic model in the coffee cup. So we're seeing a little bit of a hint of some answers to the question of what properties you need in the laws of physics for complexity to arise, even in the very simplest system with the very simplest definition of complexity you might have. So that's the result we have in hand. What I've been trying to do for a while, and am still working on, is to push beyond that result to be more realistic—to think really about statistical mechanics and different forces and entropy and quantifiable formulas. We have a bunch of ideas, and none of them is quite ready for prime time yet, so I'm not going to lay any of those ideas on you. But I'm going to give you some hints, in the remaining time, about how I see this big picture fitting together. Basically, here's what I see: I think it's a story of information. I haven't really used the word information much yet—I've used it in the sense of how much information you need to give me to specify some configuration, in the definition of complexity. But information is another one of those words, like entropy or like complexity, that has different definitions and appears in different guises, and we have to be very clear about what we mean. In particular, let me explain something that sadly rarely gets explained and is very confusing: what is the relationship between entropy and information? The reason I put it as explicitly as that is that a computer scientist and a physicist will give you opposite answers to this question. They both mean true things, but slightly different things. Computer scientists—or maybe I should say engineers, communications people—harken back to Claude Shannon. Claude Shannon is the founder of information theory.
He had a very specific question in mind. He was working at Bell Labs, and he was interested in knowing the best way to send signals across the transatlantic cable so as to convey information while being robust against noise. If you try to convey a signal across large distances, you might get noise; the signal might degrade because of random fluctuations you have no control over. So Shannon invented formulas for the information content of a message. And he realized—there are stories about this, about him talking to John von Neumann and so on—that the formulas look mathematically just like the formulas for entropy from statistical mechanics, in the following sense and for the following reason. What Shannon realized is: if I'm sending a signal with a certain number of bits and I want to maximize the information content of that signal, what do I have to do? Let's say the signal is just zeros and ones. What I'm interested in is learning something I didn't know before. That's what information really means. If you tell me a fact I already knew, I don't really gain a lot of information. The example I like to use: if someone says the sun rose in the east this morning, and they are very reliable and you believe what they say, then they've conveyed that sentence to you, but you didn't really learn a lot of information; you already knew that the sun rises in the east basically every day, that was already your expectation. But if that same reliable person tells you the sun rose in the west this morning, and you think they're reliable, and they're not joking with you, and they didn't make a mistake, suddenly you've learned a lot. You've learned a lot because, even though the message was just as long—the word east and the word west are exactly the same length—it conveys much more information because it is so surprising. So Shannon worked out that if you want to convey the most information in a message, you want to make every bit, every word, every part of the message as surprising as it can be. They can't all be perfectly surprising—you're going to get something—so what he said is: imagine you have a frequency of getting different signals, or if you like, a probability distribution. If you're getting zeros and ones, what's the probability the next digit is a zero, or a one? And what he realized is that to maximize the information content, you want that probability distribution to be uniform, maximally spread out. In the English language, if you're getting letters one by one in a telegram message, when you get the letter Q, usually the letter U is going to follow it. When you get a word beginning with t-h, maybe it will be "the"—it need not be "the," there are plenty of words beginning with t-h, but that's the most common one—and therefore you're learning less from that signal than you would if every word or symbol in your message were equally probable. He quantified this, and quantifying it led him to a formula, which was exactly the formula for entropy that Boltzmann and Gibbs and their friends came up with.
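A minimal sketch of Shannon's formula in Python; the specific probabilities assigned to the "east" and "west" messages are illustrative assumptions, just to put numbers on the surprise.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H = -sum p log2 p, in bits per symbol."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def surprisal(p):
    """Information gained from receiving a single message that had probability p."""
    return -math.log2(p)

# A uniform (maximally spread out) distribution over symbols carries the most
# information per symbol; a lopsided one carries much less.
print(shannon_entropy([0.5, 0.5]))       # 1.0 bit per symbol
print(shannon_entropy([0.999, 0.001]))   # ~0.011 bits per symbol

# "The sun rose in the east" (expected) vs. "the sun rose in the west" (shocking):
print(surprisal(0.999))   # ~0.0014 bits -- you basically already knew it
print(surprisal(0.001))   # ~10 bits    -- same length of message, far more information
```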
And the way it works is that high entropy is high information: high entropy in the statistical-mechanical sense means everything is spread out, everything is equally likely, a uniform probability distribution. If you're at low entropy, your probability distribution is localized, peaked on some particular set of configurations, and if you take your messages from a highly localized, highly peaked probability distribution, none of them are surprising; you're not learning a lot. So in information theory, as Claude Shannon thinks about it, high entropy means high information, because high entropy means a uniform probability distribution: every bit of your message is meaningful, it contains some nontrivial information, it's not just redundant or predictable from the start. Communication theory: high entropy equals high information. Physics has a very different point of view. In fact, consider the version of entropy we've been talking about, the one Boltzmann had—remember, there are different versions of entropy, just as there are different versions of information or complexity. Boltzmann's definition of entropy is the logarithm of how many microstates are in your macrostate, where your macrostate is the set of microstates that look the same to you macroscopically. High entropy means there are many, many microstates in your macrostate; low entropy means there are very few. So let's ask the question: if I tell you what macrostate you're in, how much did I just tell you about the microstate? How much information about the specific microstate is there in the macrostate information? Well, if you're in a high entropy macrostate and I tell you so, you know very little about the microstate, because by the definition of entropy there are many, many microstates that could be in that high entropy macrostate. Whereas if I tell you that you're in some specific low entropy macrostate, there aren't that many microstates in that macrostate, so you have learned a lot. You have gained a lot of information by being told that your physical system is in a low entropy macrostate. So to physicists—at least physicists in this Boltzmannian mode—low entropy means high information, and high entropy means low information. The communication theorists, the information theorists, think that information and entropy go in the same direction; physicists, statistical mechanics physicists thinking in Boltzmann's way, think that information and entropy are opposite to each other. I'm giving you this because, you know, why not—I might as well help clarify something you might get confused about. But we're being physicists now, Boltzmannian statistical mechanics physicists, so we're using a language where a low entropy state contains a lot of information. To be slightly more specific: to tell you that the system—whether it's the universe or the coffee cup or a box of gas—is in some specific low entropy state is to convey a great amount of information about its possible microstates. It's very constraining, very specific; you've learned a lot. And the reason we want to talk that way is that we know the early universe was in a very, very specific low entropy state. So that fact, telling you the state of the early universe, conveys an enormous amount of information.
And we know that as the universe evolves and time goes on, entropy will go up, so we head toward a state where just knowing the macrostate of the universe gives you almost no information. In other words, let's define the available information as the difference between the maximum entropy the state could have, in this Boltzmannian sense, and the actual entropy it has right now. If it's at maximum entropy, the available information is zero. If it's in a low entropy state, very low compared to the maximum, the available information is basically equal to the maximum entropy. And in that conception, what is happening in the evolution of the universe is that there is a resource we're able to use: the resource of available information. I'm saying it this way because usually in physics we would talk about free energy. Free energy is sort of the ordered part of the energy. This is what Schrödinger talked about when he wrote What Is Life?; he talked about negentropy, a kind of silly word that I don't like to use. Since entropy is never negative, what he really meant was the difference between the maximum entropy and the actual entropy—the amount by which you sit below the maximum—which is what I'm calling the available information; he was not thinking about information theory at the time. If you multiply that entropy by the temperature, you get an energy. Basically, the actual entropy times the temperature is the useless energy, and the difference between T times S and the total energy is the useful energy, the free energy, the energy we can use to do work. And Schrödinger's point, when he wrote What Is Life?, was that a living creature uses free energy—from the sun or from wherever—to metabolize and to self-repair and to learn and to do things, and that is characteristic of what life is. Life is not the only system that does that: a fire does it too. You have some wood, you light it on fire, and it converts free energy into useless, high entropy, dissipated energy. So it's easy to do, but a living creature does that kind of thing in an orderly, constructive way. This is all very vague, because the definition of life is all very vague, but you get the point. As you burn something, or metabolize your food, or mix cream into coffee, you're using up a resource, and that resource is the available information provided by the difference between the thermal equilibrium entropy—the maximum entropy—and the actual entropy your system is in. The more you burn and dissipate, the more you increase the entropy, the less available information you have. If you're in a thermal system with a temperature, you can convert that to a free energy way of thinking, but I think the information theory way of thinking is actually more general, more robust. Sometimes there's no heat bath you're sitting in; sometimes you want to think more generally than that. So forget about temperature; just think about entropy and information, and define the available information to be that difference, maximum entropy minus actual entropy. We use that up over time.
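In symbols—this is just a transcription of the definitions above, not a formula from the episode—the available information, and its rough relation to free energy when the system does sit at a temperature T, look like this:

```latex
I_{\text{available}} \;\equiv\; S_{\text{max}} - S_{\text{actual}},
\qquad
F \;=\; E - T\,S_{\text{actual}},
\qquad
\text{useful (extractable) energy} \;\sim\; T\,\bigl(S_{\text{max}} - S_{\text{actual}}\bigr) \;=\; T\,I_{\text{available}} .
```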
Why am I dwelling on this? Because I think that as we go past the simple-minded apparent complexity of the coffee cup or the large scale structure of the universe, to think about more sophisticated versions of complexity, what's really going on is subsystems of the universe coming up with better ways to take advantage of that resource, the available information. We are using it up, right? We're chewing our cud and we're sweating and we're doing global warming; we're basically increasing the entropy of the universe, mixing cream into our coffee. Sometimes we're using that power for good: we're using the resource to do something interesting and complex. So my hypothesis is that the story of complexogenesis is going to be a story of stages, in which things happen in more and more sophisticated ways that we recognize as more and more complex. Let me put a little more meat on those bones—but honestly, not that much meat; we're going to stay a little vague here, because I think the understanding actually is vague. The story we told about the coffee and the cream, the apparent complexity, is a very lowbrow version of complexity. There is information, and you're using it up as you increase the entropy of your coffee-and-cream system, but you're using it up in a dumb, simple-minded way. The next level, where you use that information a little more cleverly, is maybe what we could call metastable complexity. Here, I think the typical physicist wouldn't even be thinking about information, but I think it's part of a bigger picture in which information is the right way to think about it. I'm thinking about the difference we already mentioned between a planet and a star. Planets and stars are both more or less stable configurations of matter over billion-year timescales. Eventually stars die; they don't last forever. But over a very long time, a star just sits there looking more or less the same from moment to moment, and a planet just sits there looking more or less the same from moment to moment, but for very different reasons. The planet is not using up any resource; it's just mechanically stable. It's a simple-minded, brute way of staying more or less the same from moment to moment. The star is using a resource. It's stable because it has fuel inside, in the form of protons or some other light nuclei that can undergo nuclear fusion into heavier nuclei through the various processes astronomers love to talk about. But there's only a finite amount of fuel. If you turn all your protons into helium, you've used up your protons—eventually you'll turn them into iron or something like that, but that's much later on—and then you stop being stable, and the star either collapses or goes supernova or whatever. So this is again not a very sophisticated use of information, but it is an example of maintaining stability by using up some fuel. Stars do that; planets do not. As an aside, galaxies are an interesting special case, an intermediate case. Galaxies are not exactly stable, not like a star; galaxies do evolve over time, and they will not last forever. Galaxies are not maximum entropy configurations, and the reason is that you can always increase the entropy of a galaxy through the various gravitational interactions of its stars.
Let's forget about stars evolving and exploding and just think of stars as point masses that last forever—a gravitating system with a bunch of point masses in it. You might think you'd find some stable configuration. If you only had two stars, and you were completely Newtonian in your gravity, then you could be completely stable: they would just orbit each other in ellipses forever. There are even certain very special cases of three-body systems in Newtonian gravity that can last forever. But in the generic case, when you have many objects in Newtonian gravity that are just point masses, what can happen is that interactions between the different masses fling one of them out of the system. It reaches escape velocity just by gravitationally bumping into other stars, it flings outward, and the rest of the system contracts a little bit. So the overall total energy of the system stays constant, but the entropy goes up through the contraction of most of the stars. This is literally what happens when you form a galaxy. Galaxies will continue to contract over time by spitting out stars, so they're not perfectly stable, and the entropy is increasing. But when you run the numbers, that process is very slow. So galaxies can look stable for a long time without really increasing in entropy, even though it's not mechanical stability like the Earth's—it's not that the stars are pressing up against each other. It's just that gravity is a weak force and the time it takes for the entropy to increase is very long. This is just a reminder that physics is complicated and the universe is very complicated. Anyway, the point is that the existence of this resource, this decrement between the maximum entropy we could have and the actual entropy we do have, opens a way for certain systems to be metastable like stars—namely, to use up fuel, and to use that fuel to maintain some steady state, some non-equilibrium steady state configuration. We mentioned this idea when talking to Addy Pross in the context of life; he has the notion of dynamic kinetic stability, which is a slightly different thing that a chemist would care about, but this is like the physics version of that. Okay, still, stars are not very complex. It's not very exciting. If that were as complex as you could get, it would not be worth writing home about. There are other systems—think about the atmosphere of Jupiter. I'm not sure it's the best example, but it's an example. If you look at pictures of Jupiter, it's gorgeous; there's all this stuff going on, these different colors, the Great Red Spot is there, other spots come and go, the Great Red Spot survives for a long time. That seems more complex than a star, right? It's still not clever, it's still not goal-oriented or adaptive or anything like that, but there's clearly substructure that persists for some reason. And I can guarantee you, even though I haven't gone through the calculations, that it wouldn't persist if there were not an input of low entropy energy in there. There's some dynamics going on. The atmosphere of Jupiter is not exactly the same from moment to moment; it's not frozen like the topography of the moon. It's dynamic. But it's clearly also not just at maximum entropy all by itself.
There are subsystems, right? There are components that are interacting with each other. We're beginning to see—even though it's nothing like a living organism—something we would more viscerally recognize as complexity, this sort of modular kind of thing, where things break into subsystems, each of which plays a different role. And again, it's dependent on this resource that we're using up. Jupiter gets low entropy energy from the sun, and it has radioactive elements inside that heat it up; if all those low entropy energy production mechanisms ceased, you would lose all the structure in the atmosphere of Jupiter. So it's a temporary, somewhat steady-state, persistent-but-not-forever kind of configuration, dependent on a resource that's being used up. So clearly the big leap is from this sort of structured but dumb complexity that you see in the atmosphere of Jupiter to something more like a living being—the origin of life. I'm not here to tell you how life began. That's an open question; we don't know all the details. But it's very intuitive that living beings are more complex than non-living beings, and how can we think about that? Again, I think it's a matter of a living being being able to take advantage of its information-rich environment, its low entropy environment, to persist. Schrödinger's definition of a living organism—somewhat tongue in cheek, but he had something real in mind—was something that keeps on moving long after it should have stopped. What he had in mind is that if you put a rock into a bowl of water, the rock just falls to the bottom and then sits there; it won't move. If you put a goldfish in a bowl of water, it will move around for quite a while, as long as it has food—as long as it has that information resource it can take advantage of—it can maintain its structural integrity and keep moving around. And that's what makes it living: it's using that resource to maintain its stability. If the food supply gets cut off, the goldfish dies, and it becomes more like the rock. I'm sure there are many people out there with a much more sophisticated view of this than I have, because I don't know a lot about biology. But if we talked to Chris Kempes, or Addy Pross, or Michael Wong, or any of those people we've recently talked to about this stuff, they would have a picture of how living beings take in information and use it to survive. The one example I know of is kind of amusing, because it's sort of an example of making a prediction—a prediction in the sense that I didn't know what the answer was; the rest of the world knew the answer. There's this very well known phenomenon called chemotaxis. If you put a bacterium in a petri dish with more nutrients on one side than the other, so there's a gradient—lots of nutrients on one side, very few on the other—the bacterium, even though from moment to moment it looks like it's thrashing around, will in general move toward the direction of more nutrients. Somehow it's smart enough to know that it will be happier, it will live longer, in the more nutrient-rich environment. Now, that by itself doesn't seem that complex, right?
After all, if I put a ball on a hill, the ball rolls down the hill, and that doesn't require a lot of information or knowledge or anything like that; the ball is just instantaneously responding to the forces being exerted on it. How is that any different from what the bacterium is doing? So I thought about it, and I thought: if this picture of information as a resource is right, and this picture of life as a more sophisticated complex system than a rock is right, then maybe the bacterium is not simply mindlessly responding to the nutrient gradient and moving in that direction. Maybe there is something interior to the bacterium that is keeping track. This is the next level of complexity, over and above mere modularity: literally keeping a record that is informationally rich. In information theory terms, you would have mutual information between the state of the bacterium's interior and the state of the nutrient gradient in the outside world. It turns out this is right. I thought about this, and I looked it up, and indeed there are proteins inside the bacterium that basically keep track of the direction in which the nutrient concentration is higher. So yes, the bacterium really is directly and manifestly taking advantage of the informational resource it has available. If the interior of the bacterium were in thermal equilibrium, at maximum entropy, it would not be able to keep track of what the exterior environment is doing. If you remember the podcast with Christopher Adami, he made a big deal about information theory and life, and the way he puts it is that the whole genome of a living being has a very high mutual information with the environment it's in, in the sense that the genome is selected to survive in that environment. There's a tight relationship between the two: a genome that makes you able to survive underwater is very different from a genome that makes you able to survive on land. So that can be quantified using information theory, and I think it all fits very nicely into this picture.
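To make the mutual-information framing of the chemotaxis example concrete, here is a small sketch that estimates the mutual information between a hypothetical internal record and the direction of the nutrient gradient; the 90% tracking accuracy is an invented number for illustration, not a measured property of any real bacterium.

```python
import math
import random
from collections import Counter

def mutual_information(pairs):
    """I(X;Y) = sum over (x,y) of p(x,y) * log2[ p(x,y) / (p(x) p(y)) ],
    estimated from a list of (x, y) samples."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

random.seed(0)
samples = []
for _ in range(100_000):
    gradient = random.choice(["left", "right"])          # which side has more nutrients
    # hypothetical internal record that tracks the gradient correctly 90% of the time
    record = gradient if random.random() < 0.9 else ("left" if gradient == "right" else "right")
    samples.append((record, gradient))

print(mutual_information(samples))   # ~0.53 bits, out of a possible 1 bit
# An interior at thermal equilibrium would track nothing: the record would be
# independent of the gradient and the mutual information would be ~0 bits.
```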
Of course, beyond just being a living being, there's the next stage you'd like to talk about, which is a thinking living being. And look, you could go on about this in great detail, but now we're into the realm of biology—not that biology isn't interesting, but there are a bunch of people studying it already, and the physicists have very little to offer here compared to the real biologists who know what they're doing. There is a famous book that had a lot of impact—and impact can be controversial as well as praiseworthy—called The Major Transitions in Evolution, by Eörs Szathmáry (I have trouble pronouncing the name; it's spelled S-z-a-t-h-m-a-r-y, presumably a Hungarian name) and John Maynard Smith, both working evolutionary biologists. What they tried to do was pinpoint moments in the history of evolution where you get not just a new species but a new kind of mode of living. And interestingly, what they ended up concluding is that the common thread in these evolutionary transitions—from prokaryotes to eukaryotes, or from single-celled organisms to multicellular organisms, all the way up to language use and things like that—was the use of information. Really, the transmission of information, which is a little bit different, but it's along the same general lines. I've learned recently, actually, from talking to David Krakauer here at SFI, that there are real biologists who are skeptical of or pooh-pooh the book, because a lot of the pinpointing of major evolutionary transitions by Szathmáry and Maynard Smith was based on feelings; it was not as rigorous and quantitative as you might like. But okay—to me that's not a criticism, that just means they were making a hypothesis, and now we've got to think about it. So the common idea is that once you get into life, there's an enormous number of things you can do; the space of possibilities is much bigger, and a lot of that space consists of how to use the information resource you have to survive in this difficult world you're in. One such thing—I'm mentioning a lot of former Mindscape guests, but Malcolm MacIver was a Mindscape guest in the early days, and he talked about the transition when fish climbed onto land. When you're a fish, when you live under the water, you're swimming around, and the attenuation length of light in water is rather short—it's meters, roughly speaking—and you're swimming at meters per second. So you can't see very far in front of you, and when you see something, you don't have much time to react to it; you'd better decide right away whether it's friend or foe or food. Whereas when you climb onto land, now you can see essentially forever; you can see things that are far away, and therefore a new mode of information use, of information processing, opens up to you. When you're a fish, the only evolutionarily useful mode of information processing is: you see something and you react to it. When you're on land, you can see something and you can think about it. You can plan, you can imagine different hypothetical scenarios and sit and contemplate which one of them will be the best. Should I run up that tree? Should I hide behind the rock? Should I attack this thing? And that costs resources: thinking, in the brain, is an energetically costly thing. But if it gives you a survival advantage, evolution will eventually find it. That is an even more sophisticated version of using the information resource we have, one that gives rise to a level of complexity that the little bacterium could not even imagine, because it doesn't have the capacity for imagining. So all of this is to say: I think you can see the vague outline of a picture of stages of increasing complexity in the physical world, characterized by increasingly sophisticated ways of using the information resource we have around us. What you would like to do is turn that vague picture into a more quantitative one by coming up with what a physicist would call an order parameter. An order parameter is just a number you can compute that characterizes a phase transition—like, for liquid water transforming into steam or into ice, the speed of sound changes, the equation of state changes dramatically at those transitions.
So you can have an order parameter that tells you, yes, you have had a phase transition. What we're trying to do—my friends and I—is use information theory to come up with order parameters that characterize these different stages of complexity. And there's no guarantee that it's straightforward. Maybe some ways of using information happen earlier in some physical systems than in others: stage one happens before stage two in one system, and stage two happens before stage one in another system. It could be a mess, right? That's the whole joy of complexity: it's not necessarily simple; it could actually be complex. Okay, so I could stop there. That's my picture of the way we should think about complexogenesis in the universe and its relationship to entropy—though I did tie it to information, defined as the difference between maximum entropy and actual entropy. But there's one other kind of thing that I think is really interesting, going back to the more basic question of what features of the laws of physics allow this to happen in the first place. We can kind of sketch out how it does happen; if the laws of physics were different, would it still be able to happen? Is there some feature of the known laws of physics that we can pinpoint and say: even if I didn't know what the universe was like, if I knew these were the laws of physics, then it would happen? This is the kind of question that only physicists and maybe philosophers would care about. Biologists don't care about this; they think they know what the laws of physics are, so they can just plug them in. But that's okay, we can be physicists; we can ask what if the laws of physics had been different—what is important here? I mentioned, from our investigation of the coffee cup, that the existence of long range forces seems to be an important feature. But there's another feature that I'm becoming increasingly convinced is super important, which is the existence of photons, or something very much like photons. What do I mean by that? Well, think of a box of gas. I love thinking of boxes of gas with different molecules in them. Think about a box of gas that has two different kinds of molecules—we're abstracting away from real physics here to do thought experiments—red molecules and blue molecules, and they're bouncing around in the box, because it's a gas, so they're moving around. The space of all possible configurations for that system is big: there are a lot of different configurations the molecules could be in, even at a fixed energy or whatever. If it's in a high entropy state, there's not a lot of interesting information usage in that context. You might need a lot of information to tell me where every molecule is, but a second later it will be in a different configuration; there's nothing stable or interesting there, no information processing going on in any interesting way. But imagine that you didn't only have these particles bouncing around—you also had the ability to have chemistry. In other words, maybe a blue particle and a red particle, or two blues, or two reds, could bump into each other and stick together to make a molecule. I should have called the individual things atoms in the first place, because now I'm going to call the stuck-together things molecules.
If I were a chemist by trade, I would call the individual pieces monomers and the big things they make polymers. So I'm imagining that my little monomers, my little individual particles, have the ability to stick together. And if they come in two forms—red and blue, or zero and one, whatever you like—then something interesting happens: not to be too provocative about it, but there is an analog-to-digital phase transition. In the phase where all of the monomers, the particles, are just bouncing around, that's analog: they can be anywhere they want, there's an infinite number of places they could be. Once they stick together, a bit of digital information comes to life, namely the order: is it red, blue, blue, blue, red, blue, or is it red, blue, red, red, blue? There's some storage of information. And of course that ability to store information in a relatively reliable way is crucial to what we've been babbling about as higher-level ways of using information. It doesn't help if the information is out there but you can't store it and use it. This ability to have a digital version of your configuration of particles is crucial to being able to do that. Now, if you're a physicist by training, and you're thinking about the space of all possible laws of physics, as we are right now, there's a crucial thing about this idea that two particles, two atoms, two monomers will stick together: namely, generically, that does not conserve kinetic energy. It's an inelastic collision. If you think back to your early physics courses, the total momentum of the system will be the same before and after they stick, but the total kinetic energy of the two things will be different—that's what happens when things stick together. But of course chemistry happens all the time in the real world; atoms do stick together. What's going on? What's going on is that the state of the atoms when they're stuck together is slightly lower energy than the state where they're freely moving, and that energy doesn't just disappear; it's transferred into photons. The atoms give off a photon that carries away the difference between the energy they had before and the energy they have after. And the thing about photons is that they're really flexible. A photon is a massless particle. Einstein told us E equals mc squared, and what he means by that is that a massive particle has a minimum amount of energy it can have: when the particle is just at rest, it has E equals mc squared of energy; if it's moving, it has more than that, kinetic energy on top. But for a massless particle, the minimum energy it can have is zero—mc squared is just zero—and it can have any higher amount of energy, because it can have kinetic energy. So a photon, which is a truly massless particle, can have any energy at all, as measured in some rest frame. That feature, which we sort of take for granted, is crucially important to allowing chemistry to happen. Different kinds of atoms, with different initial energies, will stick together, and they need to be able to give off arbitrarily different amounts of energy in order to conserve energy while sticking together: they have a very definite amount of energy while stuck together, and because they could have had any velocities beforehand, they can have any energy beforehand.
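Schematically—in the center-of-mass frame and ignoring the small recoil of the resulting molecule, which is my simplification—the energy bookkeeping for two particles sticking together and radiating a photon looks like this:

```latex
A + B \;\longrightarrow\; AB + \gamma,
\qquad
E_{\gamma} \;=\; \underbrace{K_{A} + K_{B}}_{\text{incoming kinetic energy}}
\;+\; \underbrace{E_{\text{binding}}}_{\text{depth of the bound state}},
\qquad
E_{\gamma} \in (0, \infty) \ \text{since } m_{\gamma} = 0 .
```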
So photons, which are particles that can carry away any amount of energy you want, play an absolutely crucial role in this analog-to-digital transition; they allow it to happen in a very real way. And this digitalization of information is clearly crucial in examples we know about. Schrödinger, again, in his book What Is Life?, famously predicted the existence of something like DNA. His argument was the following. He's a physicist, clearly, and statistical mechanics was one of the things he was very good at. He's thinking about molecules, or atoms, bumping into each other, and he knows that they can't convey a lot of information in a stable way because they're randomly moving around. If they cooled down and became a solid, a crystal, say, as a specific example of a solid, then you've cooled down, but you're still not containing a lot of information. Think about a crystal of salt, or diamond, or whatever: it's just atom after atom after atom, and there's no new information contained. If you know you're in a diamond, you know that if there's a carbon atom here, next to it there's going to be another carbon atom. No new information has been conveyed. So Schrödinger said: for a living being to have something like genetics, something like the ability to pass its genome down to subsequent generations, it must have a configuration of atoms inside that contains information in a relatively stable form. It can't be a real crystal, because real crystals are predictable and carry no information, but it also can't be a gas or a fluid. It has to be what Schrödinger called an aperiodic crystal. That is to say, an arrangement of atoms in the form of a molecule where you don't know what the next grouping is just by knowing what your current grouping is: basically an alphabet, a way of conveying different bits of information at different sites of this aperiodic crystal. And of course, now we know it's DNA that does this. RNA also does it to some extent, but RNA is less stable; there's a whole long story there, and I'm going to avoid the temptation to start talking about RNA world and the origin of life and things like that. But it's precisely this analog-to-digital transition. DNA is very much analogous to the red and blue balls sticking together that we were just talking about. You don't know, from knowing the identity of one nucleotide, what the next one is going to be, and therefore, in Claude Shannon's sense, the next one carries information in an interesting way. So you can kind of see the importance of photons here.
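To make Schrödinger's contrast concrete, here is a tiny sketch comparing the per-symbol Shannon entropy of a perfectly periodic "crystal" string with an aperiodic four-letter string. This is an illustration I'm adding, using the crudest single-symbol frequency measure of information; the sequences and the function are invented for the example.

```python
import random
from collections import Counter
from math import log2

def entropy_per_symbol(seq):
    """Shannon entropy in bits per symbol, estimated from single-symbol frequencies."""
    counts = Counter(seq)
    n = len(seq)
    return sum((c / n) * log2(n / c) for c in counts.values())

crystal = "C" * 1000  # a periodic crystal: the same atom at every site, perfectly predictable

random.seed(0)
aperiodic = "".join(random.choice("GCTA") for _ in range(1000))  # stand-in for an "aperiodic crystal"

print(entropy_per_symbol(crystal))    # 0.0 bits/symbol: knowing one site tells you every other site
print(entropy_per_symbol(aperiodic))  # close to 2.0 bits/symbol: each site is genuinely informative
```

Real genomes sit somewhere in between, of course, since they are neither perfectly repetitive nor perfectly random, but the contrast is the point Schrödinger was making.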
And now, almost just for fun, I can imagine an anthropic-principle argument for gauge symmetry. This is even wackier; we're getting late in the podcast. But the existence of a massless particle that interacts non-trivially with ordinary matter should not be taken for granted. So consider the following argument. Intelligent observers are complex information-processing systems, right? That's what we think we are. And the anthropic principle is supposed to say that the one thing we know about the conditions in the universe, and the laws of physics underlying them, is that they have to be compatible with the existence of intelligent observers; otherwise, there wouldn't be any intelligent observers around to talk about the laws of physics. Okay. So at this level of analysis, what we mean by an intelligent observer is some complex information-processing system. Now, a complex information-processing system is a configuration of matter which, as we just discussed, won't be created unless you can dissipate extra energy away. If you think about the space of all possible configurations of atoms that could make a complex system, at any one energy, if you start with the atoms all moving, it's very unlikely that you will find a configuration where the atoms are stuck together with exactly the same energy. That's almost a set of measure zero; it's very difficult to find. You can only find these interestingly complex structures by going down in energy, by decreasing the energy of the atoms, by giving away energy to some other part of the universe. That's dissipation; that's exactly what it is. When can dissipation happen? Well, you need low-mass particles, and massless particles would be the best, because they can have any energy at all and carry it away. But neutrinos, or something like that, are low-mass particles that are of no help whatsoever, because they don't interact very much; it's very, very hard to get a neutrino to carry away energy. So you want a low-mass particle, but one that interacts noticeably, one that really has a non-zero chance of carrying away some energy. Now, there's a separate particle-physics argument, but basically, the only low-mass particles that interact noticeably with other particles are gauge bosons, something that we've talked about, because volume two of The Biggest Ideas in the Universe talks a lot about gauge symmetries and the fact that the existence of a symmetry, in the quantum field theory context, leads directly to massless particles. Ordinarily, if a particle is massless, it's only because it doesn't interact with anything; gauge symmetries allow for interactions while keeping particles massless. And therefore, the existence of intelligent observers relies on the existence of gauge symmetries, so that you can make photons. So there you go, I've explained why gauge invariance exists: it's because of the anthropic principle. I wouldn't take this very seriously. But I think it's amusing, because when we know the laws of physics as well as we do, it's sometimes tempting to take for granted the features that they have. If you imagine different laws of physics, things might have been very different, up to and including life might have been impossible.
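For readers who want the textbook version of the claim that gauge symmetry is what keeps the photon massless while still letting it interact, here is the standard electromagnetic case, added purely as background; it isn't spelled out in the episode.

```latex
% A U(1) gauge transformation shifts the photon field but leaves the field strength
% (and hence the interactions built from it) invariant:
A_\mu \;\to\; A_\mu + \partial_\mu \lambda(x), \qquad
F_{\mu\nu} \equiv \partial_\mu A_\nu - \partial_\nu A_\mu \;\to\; F_{\mu\nu}.

% A photon mass term is not invariant under that shift, so an unbroken gauge symmetry
% forbids it and the photon stays exactly massless:
\tfrac{1}{2} m^2 A_\mu A^\mu \;\to\;
\tfrac{1}{2} m^2 \left(A_\mu + \partial_\mu\lambda\right)\left(A^\mu + \partial^\mu\lambda\right)
\;\neq\; \tfrac{1}{2} m^2 A_\mu A^\mu .
```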
Okay, let me wind up with yet another amusing observation. We don't have the full picture here, I think that's pretty obvious. We're groping toward this idea of complexity increasing because we have this information resource, and therefore there are a lot of different ways we can put it to use. The homework assignment is to quantify all the ways that a physical configuration of matter can use that information resource to survive, and to argue that there will be more and more complex ways of doing that. So, like I said, I could stop there, but the final observation is the following. Once you do this analog-to-digital transition, once you have dissipation so that a configuration of matter can cool down into a complex, information-rich configuration of stuff, the space of possibilities that are relevantly different from each other becomes enormously big. This is the crucial thing that I'm still trying to wrap my head around, to figure out how to quantify it and say it exactly. Even though in the box of gas there's a huge number of different positions and velocities the molecules can have, they're all kind of the same in some intuitive sense that you would like to make quantitative. Once the molecules start sticking together, they're not the same anymore; the different orderings of the monomers in the polymer matter. So how big is the space of possibilities? There are many different ways to put those atoms together to make something like DNA. How efficiently could we explore them? Think of a DNA molecule as carrying the information that turns into the blueprint for a big macroscopic organism, and think of natural selection as sort of searching through the space of genomes for things that climb up to peaks in a fitness landscape and can survive in a harsh environment. How good a job can we do at exploring that space of possibilities? Let's make it explicitly the space of genomes. So imagine we have nucleotides, and we're putting nucleotides into DNA. We have an alphabet with four letters in it, G, C, T, A; those are the nucleotides that go into making DNA, and we're going to put them in different orders. The human genome has approximately three billion base pairs in it, so three billion nucleotides in some specific order. Is that because the universe has searched through all possible DNA strands up to three billion in length and found the perfect one? No. Just to quantify how big the space of possibilities is, imagine that we took all of the protons and electrons in the observable universe, 10 to the 80th of them, as you now know. Imagine that we, or God, let's say, because we don't have the capability to do this, put them all into the form of DNA, indeed into the form of base pairs for DNA, and then assembled them into strands with n base pairs in each strand. And imagine that we do this at one billion strands per second. So we're going to keep taking all the matter in the universe, putting it into different combinations of G, C, T, and A, shuffling through all those combinations a billion times a second, and somehow magically perceiving which of those would make good living organisms and which would fail. And how much time do we have? The age of the universe, roughly 10 billion years. How long a strand of DNA could we search through comprehensively, so that we really checked every single possibility? The answer is about 180. About 180 base pairs in a DNA molecule is what we could check, even if we had the entire universe devoted to this program of putting all the different base pairs in different orders and asking whether they were good or bad, and even if we could do each one in a billionth of a second. All of which is entirely unrealistic, of course; we're just trying to make a point here.
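Here is a rough back-of-the-envelope version of that counting argument, just to show where a number like 180 comes from. The inputs are the round numbers from the discussion above, and the exact answer shifts a bit depending on the accounting (for example, whether you count nucleotides or base pairs), so treat this as an order-of-magnitude sketch rather than the calculation actually used in the episode.

```python
# Round-number assumptions taken from the discussion above.
PARTICLES = 1e80              # roughly the particles in the observable universe, used as nucleotides
RATE = 1e9                    # strands assembled and checked per second, per strand's worth of matter
AGE_SECONDS = 1e10 * 3.15e7   # ~10 billion years, in seconds

def longest_exhaustively_searchable():
    """Largest strand length n for which all 4**n sequences could be checked in the available time."""
    n = 1
    while True:
        strands_at_once = PARTICLES / n                    # how many length-n strands the matter supplies
        total_trials = strands_at_once * RATE * AGE_SECONDS
        if 4 ** n > total_trials:                          # 4**n possible sequences of length n
            return n - 1
        n += 1

print(longest_exhaustively_searchable())  # about 173 with these inputs, i.e. "about 180"
```

Compare that to three billion base pairs: the gap between what could be searched exhaustively and the length of an actual genome is exactly the point of what follows.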
The point is that there's not enough matter or time in the universe to search through all the possible configurations of nucleotides in DNA. Even doing it in this completely unrealistic way, we wouldn't get even to length 200 for our hypothetical genomes, and the human genome is three billion base pairs long. So we search through the space of possibilities in an enormously incomplete way, right? We can't possibly be comprehensive about it. That's why natural selection uses randomness and culling: you randomly mutate your DNA, and if you're unsuccessful you die out, and if you're successful you reproduce. That's how natural selection works. You're not going to do anything like a careful examination of all the possible things you could do with a DNA strand that is three billion base pairs long; you're going to explore an enormously tiny fraction of that available landscape of possibilities. What that implies to me is that in this process of increasing complexity, of increasingly sophisticated ways to use the information resource we have in the universe, we're nowhere near done. We could easily imagine configurations of matter that are, in some hopefully definable sense, way better at surviving than we are, because who knows what you could do by organizing those DNA base pairs into better, more efficient strands to make better organisms. Not that I'm advocating doing this. That's the thing about natural selection: it's mindless, it's not teleological. But somehow, and I guess this is a good place to end, one of the things that happens in this progress, progress is of course the wrong word, in this progression, this evolution from simplicity to complexity, as these subsystems of the universe become increasingly sophisticated at using the information resource around them, is that we can do something called imagining the future, as we mentioned with Malcolm MacIver and others. The reason why that's interesting is that, physics-wise, the universe is governed by two things: number one, the laws of physics, and number two, a past boundary condition, the low entropy of the universe in the past, the past hypothesis, as David Albert has dubbed it, the fact that the early universe starts with very low entropy. There is no future hypothesis, right? There's no future boundary condition. For whatever cosmological reason, our universe has a boundary condition at one end of time but not the other. And so all of the coarse-grained questions about the evolution of the universe can be addressed by the underlying laws of physics plus an initial condition, not a final condition. There's nothing you need to know about the future in order to predict how systems will evolve from the past to the future. There is something you need to know about the past, namely its low entropy, but nothing you need to know about the future; the only condition is in the past. But there are also systems in the universe that have goals. A goal is a future state that you would like to reach, right? These systems are, of course, living systems; living systems have goals. As far as we know, non-living systems don't have goals. They might have places they end up going to, but they're not trying to go there because they have a purpose in the way that living systems do. Living systems can envision where they want to be and work toward getting there. If you drop a ball, it falls toward the floor.
If you are Aristotle, you might casually say the ball wants to be at the floor, that it is in its nature to be down there. But I could just catch the ball, put my hand there and stop it from reaching the floor. Whereas if there's a cat that wants to get a mouse, and the cat moves toward the mouse, I could try to step in the way of the cat, and the cat would move around me. The cat would change its behavior in order to keep pursuing its goal, in a way that the falling ball doesn't, because the cat is a living organism that can imagine things in the future. So somehow, somewhere along the progression from very simple systems that are just increasing their entropy and evolving along with the universe, to more complex information-utilizing systems, the boundary condition that is initially only in the past gets a counterpart: these individual subsystems of the universe invent a future boundary condition for themselves, some state in the future that they would like to reach. That is nowhere to be found in the microscopic laws of physics or anything like that. Laplace's demon just works moment by moment; there are no future boundary conditions. But this appearance of complexity, this complexogenesis, this increasingly sophisticated use of information, allows us to have future goals, and I think that's really interesting. I would like to be able to quantify the moment at which that happens. I don't think that the bacterium doing chemotaxis really has a future goal; I think it's just responding to the moment in some way. But I have future goals, right? I want to finish this podcast, I want to publish it, et cetera, in very down-to-earth ways; no one argues with that. So somewhere along the evolutionary tree between the bacterium and me, the idea of a goal appears. That has to be one of the stages in complexogenesis, right? The ability to formulate goals, future configurations that you will try to get to, even if you don't know exactly how you're going to get there. I don't know what time of day I'm going to write the show notes for the episode or whatever, but I do know that on Monday morning I'm going to publish the episode, right? That's a fascinating thing that can happen with this information-utilizing capacity that we develop as complex creatures in the aftermath of the Big Bang. Of course, eventually it will all go away; it doesn't last forever. The cream and the coffee do mix together; the complexity will eventually disappear. But even though the stars are mostly formed already, the human lifespan is of order 100 years, and 100 years is nothing compared to the timescales over which entropy is increasing in the universe and complexity is developing in the universe. So I don't know about the universe as a whole, but I think that unless we do something dumb and kill ourselves, there's a lot of room for increased complexity and increased sophistication in our use of information here on Earth. If that's something that we value, then maybe that's a goal we can have: to keep that going, to keep that surviving, and not do dumb things and destroy ourselves here on Earth. That's a good place to stop. So thanks for listening to Mindscape. I will talk to you next time.