From First Principles

FFP EP. 25 | Plants, Quantum Sensors, and Predicting Cancer Evolution

109 min
Feb 10, 2026
Summary

This episode covers three major scientific breakthroughs: a 50-year mystery solved about how plants synthesize alkaloid compounds through a single-step enzyme process, quantum entanglement enabling unprecedented measurement precision beyond classical limits, and Alpha K, a computational tool that maps cancer evolution as a traversable fitness landscape to predict tumor behavior under treatment.

Insights
  • Ancient bacterial enzymes repurposed by plants through endosymbiosis demonstrate convergent evolution across independent plant lineages solving the same biochemical problem
  • Quantum entanglement can be operationalized for practical metrology by measuring correlated combinations of states rather than independent measurements, achieving 3x precision improvements
  • Cancer evolution follows predictable fitness landscapes similar to classical evolutionary theory, enabling computational prediction of tumor adaptation to therapy
  • Single-point mutations (tyrosine to phenylalanine) can fundamentally alter enzyme function and create evolutionary advantages in pharmaceutical production
  • Modern scientific breakthroughs increasingly depend on integrating data from previous studies and applying AI/computational tools to operationalize theoretical concepts
Trends
  • Synthetic biology applications enabling pharmaceutical production through engineered microorganisms rather than plant cultivation
  • Quantum sensing and metrology moving from theoretical physics to practical multi-parameter measurement systems
  • Physics-based computational approaches being applied to oncology for personalized cancer treatment prediction
  • AI tools like AlphaFold accelerating protein structure determination and enabling enzyme engineering
  • Convergent evolution patterns suggesting fundamental optimization principles in biological systems
  • Precision medicine shifting from empirical treatment to computational prediction of disease evolution
  • Quantum entanglement applications expanding beyond fundamental physics into practical sensing and measurement
  • Integration of multi-disciplinary data (genomics, physics, computation) solving long-standing biological mysteries
Companies
Google DeepMind
Developed AlphaFold AI tool used to predict 3D protein structures for enzyme analysis in alkaloid synthesis research
H. Lee Moffitt Cancer Center and Research Institute
Developed Alpha K tool for predicting cancer evolution through fitness landscape mapping published in Nature Communic...
Brookhaven National Laboratory
Operates the Relativistic Heavy Ion Collider (RHIC) used in STAR collaboration research on hadron formation from quan...
CERN
Referenced as location where Higgs boson was discovered using similar decay product analysis techniques
Fermilab
Mentioned as alternative U.S. particle physics facility in discussion of relativistic heavy ion collider capabilities
JPL (Jet Propulsion Laboratory)
NASA Deep Space Network facility that received data from Voyager and Pioneer missions measuring Jupiter's size
Netflix
Humorously referenced in context of Artemis 2 launch delays and Hollywood studio preparation
People
Catherine Wood
First author of University of York alkaloid biosynthesis paper solving 50-year enzyme mystery
Sewall Wright
Evolutionary biologist whose 1932 fitness landscape analogy is foundational to the Alpha K cancer evolution model
Werner Heisenberg
Physicist who formulated 1927 uncertainty principle, fundamental limit being approached in quantum metrology research
Leistner and Spenser
1973 researchers who hypothesized single-step enzyme mechanism for alkaloid synthesis, confirmed 50 years later
David Friedberg
CEO of Ohalo, company using polyploidy genetics to increase agricultural crop yields through chromosome doubling
Quotes
"We've been looking for this gene for like 50 years, and now these guys have finally found it."
Krishna Chowdhury (early in episode)
"Plants invented version control before GitHub, okay? So this, you heard it here first."
Lester Nare (plant evolution discussion)
"It's like a little mini factory. Yeah, yeah, yeah. It's very cool. And it's doing both processes at once."
Krishna Chowdhury (alkaloid enzyme discussion)
"The only way that this can happen is if there's a single process that goes from amino acid, strips the COOH, and then makes the ring."
Krishna Chowdhury (enzyme mechanism explanation)
"We created our Google Maps of cancer, and we took one cancer cell. We split it into two. We generated the map based off one, and then the real-world other one traversed what the map said it would traverse."
Krishna Chowdhury (Alpha K validation discussion)
Full Transcript
So we've been looking for this gene for like 50 years, and now these guys have finally found it. They literally took a single quantum state, and then they split it into two, but then they maintained entanglement. It looks like it turns out that an extra chromosome isn't always a bad thing. Yeah, if you're a Cancer. If you're a Cancer. If you're a Cancer, extra chromosomes can be really good. But not a Gemini. No. Or a Sagittarius. Hello, internet, this is your captain speaking, Lester Nare, joined as always by my co-host and our resident PhD, Krishna Chowdhury. We are back in studio this week with three great main stories lined up for you, along with the rundown. This week we're going to touch on a story about an overlooked plant and how it could transform medicine production. We're going to get into some quantum entanglement, which is giving us superpowers to measure the impossible. And we're going to end with this new breakthrough tool called Alpha K, which is helping scientists predict how cancer cells will evolve before they actually do. We're going to learn about the science from the ground up, because this is From First Principles. For our first story, we have a new research paper out of the University of York in the UK that was published in New Phytologist, with the byline: scientists at the University of York have discovered how plants produce chemical compounds that might assist in developing new environmentally friendly medicines. We always like our plant stories. Yes. So let's break down what exactly is going on here in this new study from the University of York, which was published on January 13th. That's right. And, you know, plants are not as sexy, let's say, as, you know, animal cells or like bacteria, fungi. But they're incredibly versatile. They've been around for 450 million years. And they've had to deal with a lot over those 450 million years, right? Because they don't really move around.
And because they don't move around, they can't do traditional defenses of just, like, running away from something that's trying to eat you, right? They're fixed in place. And so they still need to evolve defenses against herbivores, pathogens, insects that are trying to eat them. And their solution is effectively chemical warfare, okay? And they're really, really good at chemical warfare. And the core chemicals that we're talking about in this study are something called alkaloids. They're nitrogen-containing secondary metabolites. By secondary metabolites, we mean, like, you know, they're not actively involved in photosynthesis and stuff like that. They're derivatives of those compounds. And we're aware of these. Morphine, nicotine, caffeine, quinine, these are all alkaloids that are made by plants as defense mechanisms that they figured out through their evolution. Right. And this particular study out of the University of York solved a 50-year-old mystery about how exactly certain plants make these compounds, okay? This idea of alkaloid synthesis. They found the gene that does it for a particular pathway, and that could lead to cheaper pharmaceuticals, a lot of really cool things. And I think the chemistry in this particular study is what I found really cool, because they've figured out the nitty-gritty mechanism by which this particular enzyme does what it's supposed to do. So we're going to understand and learn how plants generate and synthesize these alkaloids, which they use as a defense mechanism, but which have some potential commercial applications for humans. Exactly, yeah. Pharmaceutical applications, all of that. So first, let's just try and appreciate the target molecules, right? These alkaloids. They are heterocyclic rings with a nitrogen. By heterocyclic, that means there's a cycle of carbon atoms, but one of the carbon atoms has been replaced by nitrogen. They all come mostly from amino acids.
There's a few that come from the nucleic acids, but most of them are derived from amino acids. There's two different types that we can think about. There's the lysine-derived, which are from the amino acid lysine, and then there's ornithine-derived compounds. Nicotine, probably one that's very ubiquitous, that one comes from ornithine. The idea is you take ornithine, which is this amino acid. It's a linear chain. And then you remove some stuff, you add some stuff, and you put it into a ring. And that ring form then becomes nicotine. Okay? So why are plants even doing this in the first place, right? Nicotine, for example, very specifically, is something that mimics neurotransmitters in animals. Specifically in insects, it paralyzes them. So if there's an insect that's trying to eat this plant, you can't do it anymore. But because of animal evolution, that particular neurotransmitter in insects is also related to stuff that we use in our brain, specifically acetylcholine. Here you can see the nicotine molecule and the acetylcholine molecule. The shapes of the two are kind of similar, which means that when nicotine goes inside our brain, the acetylcholine receptors, which are receptors on our neurons that recognize acetylcholine and then open up ion channels to turn a neuron on or off, could sometimes be triggered by nicotine. This is what leads to the cognitive effects of nicotine, you know, the high that you get, or the addiction. All of that is just because the Lego block of nicotine is closely mimicking the Lego block that is acetylcholine. Got it. Right? Got it. Yep. And so that's why plants sort of have an advantage when they evolve these types of molecules, right? And a lot of these types of molecules can be used in pharmaceutical research. Okay. Okay? Now, the question is, how exactly do plants make this happen? Because remember what I said. There's an amino acid, which is a linear chain.
You usually have an amine group, a carboxyl group, a carbon in the middle, and then some stuff attached to that carbon. It's a linear thing. You've got to now turn this into a ring. Okay? There's several ways to do this at a molecular level. But it's been kind of annoying to figure out how exactly that happens. We understand conceptually how to do it, just not how plants naturally do it. We can get it conceptually, but we've not yet been able to identify how plants do it in a non-manufactured way. Yes, yeah, yeah, exactly. And so one of the tools that you can use is something that goes back at least 50 years. It's something called radioisotope tracing. What you do is you replace the carbon atoms that you grow this plant with with carbon-13 instead of carbon-14. Or sorry, carbon-12. Carbon-12 is six protons, six neutrons in the nucleus. Carbon-13 is the same number of protons, six, but you have an extra neutron. Okay. Okay? And when you, you know, let's say we supply the plant pathway with a bunch of glucose that is only carbon-13, right?
Now we can trace how that plant is going to break up this glucose and incorporate it into different types of molecules, for example pyruvate, or the citric acid cycle. How much of it is going to go into fatty acids, right? And we can figure out, by weighing the fatty acids and the citric acid and things like that, we can be like, well, the citric acid is only half as heavy as it would be if all of the carbon-13 went in there, right? Because it's got three extra neutrons instead of six extra neutrons, things like that. Yes. So that's the logic behind this idea of radioisotope tracing. It's been used to figure out that DNA is the genetic code of life, if we go back to our Watson and Crick episode of last year, right? And so the question now is, we'd like to apply this technique to how plants make these alkaloids. Okay, now let's start with the substrate lysine, which is the amino acid. Okay. And this lysine is going to go on to make, like, anabasine or securinine, some alkaloid that we're interested in. Okay. The problem is the following. When lysine goes through and becomes this alkaloid, there's an intermediate compound called cadaverine. Okay, here's the idea. Lysine is not symmetric, because you've got an amino group on one end, that's the H2N, the nitrogen, and then you have an amino group on the other end, but that one's also connected to a carboxyl group, which is the COOH: carbon, oxygen, oxygen, hydrogen. Okay, so it's not symmetric. There's going to be a process called decarboxylation, where you take that carboxyl group out, and now it becomes a symmetric molecule, right? There's an amino group, H2N, on one end, and then there's an NH2 on the other end. The switching of the letters is just to show that it's completely symmetric. If I were to look at this molecule, cadaverine, this way, and then I were to switch it around, it would look the same to me. Okay. Okay. Now, if that is an intermediate, then what should happen? Another enzyme would take that intermediate, and then it could grab on one
end, or it could grab the other end, right? Right, because it doesn't know, right, which way it came out of the enzyme. Which part, which end of the molecule, the left or right, had this COOH attached? It doesn't know, because it's just freely floating around. So this next-step enzyme is going to grab that, and if I were to do radioisotope tagging of this molecule, and I were to make a ring out of it with the nitrogen on one end, and let's say that nitrogen is our index of reference, the radiocarbon could be either on the left or the right. Right, because whatever molecule turned it into a ring grabbed that cadaverine from either the left end or the right end. Okay. And that's what we see in a lot of our stepwise symmetric pathways. These are called stepwise symmetric, okay, in the sense that it doesn't know which end it grabbed. And so, on that top line, you get this heterocarbon ring with the nitrogen, and that tagged carbon is either to the left of the nitrogen or to the right of the nitrogen. Now that happens in some plants, but in other plants, that tagged carbon is always going to be on the left of the nitrogen. Interesting. Okay. So what is making it move from being a chance, a 50-50, to 100% always being on a singular side, which creates consistency? Exactly. And there's actually a lot of these compounds that are like this. Okay?
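The 50-50 versus 100% logic described above is easy to sanity-check with a toy Monte Carlo model. This is a hypothetical sketch, not anything from the paper: in the stepwise case the symmetric intermediate tumbles freely, so the ring-forming enzyme grabs either end at random, while in the concerted case the substrate never leaves the enzyme, so the tagged carbon always lands on the same side of the nitrogen.

```python
import random

def stepwise_label_fraction(n_molecules, seed=0):
    """Stepwise pathway: the symmetric intermediate (cadaverine) tumbles
    freely, so the ring-forming enzyme grabs either end with probability
    1/2. Returns the fraction of rings with the tag left of the nitrogen."""
    rng = random.Random(seed)
    left = sum(rng.random() < 0.5 for _ in range(n_molecules))
    return left / n_molecules

def concerted_label_fraction(n_molecules):
    """Concerted pathway: the substrate never leaves the active site,
    so every product ring carries the tag on the same side."""
    return 1.0

print(f"stepwise : {stepwise_label_fraction(100_000):.1%} tagged left of N")
print(f"concerted: {concerted_label_fraction(100_000):.1%} tagged left of N")
```

Seeing roughly 50% says the intermediate was free to tumble; seeing 100% is the fingerprint of a single end-to-end process, which is exactly the distinction the labeling experiments exploit.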
From both lysine and ornithine, you've got plants like Nicotiana, Flueggea. These guys make these asymmetric compounds. Sorry, Nicotiana sounds like a Cardi B song. It totally does. And she would be remiss to say that, you know, it is an asymmetric alkaloid, you know. But this is pretty common, okay, is the idea. Yep. Okay. Okay. So it's not a unique thing. We see it across a variety of these compounds. Yeah, yeah, it's across a variety of these compounds. And so there's something very interesting going on. And this might seem kind of just like a boring trick, okay? So, you know, your tagged carbon is only on one side, but now, who cares? Think of, yeah, who cares? But now, before we even talk about applications and stuff, let's actually think about, okay, what could be happening at the chemical level such that my enzyme is always grabbing one end, right, of a symmetric molecule? It's a rod that looks exactly the same whether I were to hold it this way or turn it around. And yet the enzyme is always grabbing one end. And this is where we come to a hypothesis that was given in 1973 by Leistner and Spenser in the Journal of the American Chemical Society. 1973, so this is 50 years ago.
They said the only way that this can happen is if there's a single process that goes from amino acid, strips the COOH, and then makes the ring. If that intermediate cadaverine were to go and freely float around, there's no way that the next enzyme would grab one end. This only happens if, in some analogy, let's say I'm the enzyme: with my right hand I grab the amino acid, I strip the COOH, and then I put it to my left hand and I make a ring, right? Yeah. I'm doing it in one smooth process. Yes. It's not: I'm grabbing it, I'm letting it go, and there's some other guy who's grabbing it. Because if the other guy grabs it, it's had time to tumble in the 300-kelvin environment of the cell, right? And so it's a very nice kind of hypothesis, right? When you think about it at the molecular level, this is the only way that it could have happened. There can't be a chain of independent processes generating this outcome, because the second link in the chain would not have sufficient information to consistently select for the same side. Yeah. So it has to be an end-to-end process that is self-contained from start to finish. Yes. And this was in 1973. They had this hypothesis. Okay. And people have been looking for this enzyme ever since. So we have the hypothesis, which defines this idea that it has to be a singular process. But we've not seen it experimentally. Yeah. We've been trying to look for it. Okay. These guys found it. Okay. That is the paper. Okay. Okay. This phantom enzyme, for over 50 years. It's now been found.
The study is out in New Phytologist, first author Wood. Very nice paper, Catherine Wood. And the idea: this is a 50-year-old treasure hunt to try and find this thing, okay? And there are applications that we'll get to later, but I think it's just incredible that there's been this science question for 50 years, based on a very simple experiment, which is that this pathway is not symmetric, and the only way it can't be symmetric is, as you said, it's got to be an end-to-end process. And one thing that's always so interesting in the modern conversations, in the contemporary conversations about science, is this idea that we've already solved all of the problems, and/or that solving old problems is, you know, sufficiently trivial. Yeah. That's obviously not true. It's obviously not true. But this is a perfect example of that. Exactly. And this paper is also really nice because it uses very modern techniques to answer this problem. We've got AlphaFold with AI, which we're going to get into later, okay? New techniques that are only, you know, sort of available now in the modern day, right? So the target organism that they used in this study is Flueggea suffruticosa. It's actually used in Chinese medicine, yi yi qi, I believe, is how you pronounce it. I got one of my friends who's Chinese to send me a voice recording, so if that's wrong, that's his fault. Maybe he's setting me up. But it's been used in Chinese medicine, and we know consistently that it forms the non-symmetric alkaloid securinine. Okay. So this is now the model organism that we're going to use to try and find what the enzyme in this plant is that is doing this. They do transcriptome analysis. So they extract the RNA from 15 different tissues in the plant, and they do de novo transcriptome analysis of the whole thing to figure out what the mRNA is that is being transcribed from the DNA.
From that, they get a bunch of candidates, and they look for the enzymes that could possibly be creating this. They find two enzymes. There's gene 4984. That's your standard one: it produces your cadaverine, which is the intermediate product. But then they've got this one gene, 1864, and that produced your 1-piperideine, which is a precursor of that securinine alkaloid. It's producing that end product in one step. You're not getting any cadaverine. You're not getting anything in the middle. And this is that OLADO enzyme that we're going to get into later on. But that's the enzyme that they found. And they found that this enzyme could take you from lysine all the way to the end in one go. So the idea is, gene 4984 was just getting us to, in this chart that we're looking at, the middle step, where the cadaverine is created. But it did not bring us through the oxidation and then all the way down into the sort of final product. That's right. But that gene 1864, this was the smoking gun, because it was that end-to-end process. Exactly. Yeah. And it's the smoking gun, but that's not going to make your paper. I see. Okay. Because if you just submitted a paper being like, hey, we found this enzyme that does the whole thing, well, it could be doing all sorts of other stuff. Reviewer 2 is going to be like, well, you didn't really show that it's non-symmetric, and all this other stuff, right?
So Reviewer 2 is going to get you. You've got to go even deeper. You've got to really convince your reviewers that this is the thing that is doing that non-symmetric grabbing, and how it's doing it, all this other stuff, right? And so they didn't stop there. Next, what they did was they purified that enzyme, they incubated it, and then they wanted to see how exactly it is doing this. So they got a lysine, and they tagged the nitrogen this time. Okay. Okay. The nitrogen this time is only going to be on one end, and if it is stepwise, meaning there's an intermediate, then we're going to see a nitrogen getting stripped. And so you'll see the final product not have that isotope of nitrogen, or you're going to see it, right? It's going to be 50-50. The nitrogen that you put in, that you tagged, is either going to be there or it's not, because the enzyme is either grabbing the left end or the right end. But if it is a concerted, single-step reaction, then the nitrogen is always going to be there. The nitrogen that you put in on one end of this lysine is always going to be there. And that is what they found. Again, this is that isotope measurement that they did. The other thing they did was nuclear magnetic resonance, NMR, which is the same technology from when we get our MRIs. Yes. What they're using is that the nitrogen that they tagged has an odd number of nucleons in it, which means it's going to have a spin. And so you can measure that nuclear spin to figure out if your nitrogen is currently there. Okay. And they found that, yep, that nitrogen is definitely still there in our product, which means that this is the asymmetric reaction. Because your nitrogen in the beginning was only on the left end. Yes. Right? And now the fact that it's 100% there all the time means this enzyme is always grabbing that left end. Right?
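The "odd number of nucleons means it has a spin" rule mentioned above can be written as a tiny helper. This is an illustrative sketch of the rule of thumb only; real spin assignments come from nuclear data tables. Even-even nuclei such as carbon-12 pair off all their spins and are invisible to NMR, while nuclei with an odd nucleon, like the nitrogen-15 tag used here, have nonzero spin and show up.

```python
def nmr_detectable(protons: int, neutrons: int) -> bool:
    """Rule of thumb: even-even nuclei pair off all nucleon spins and end
    up with total spin 0 (NMR-silent); any nucleus with an odd proton
    and/or odd neutron count has nonzero ground-state spin and is
    NMR-detectable. (Check real assignments against nuclear data tables.)"""
    return not (protons % 2 == 0 and neutrons % 2 == 0)

print(nmr_detectable(6, 6))   # carbon-12: silent
print(nmr_detectable(6, 7))   # carbon-13: detectable
print(nmr_detectable(7, 8))   # nitrogen-15: detectable, the tag in this study
```

This is why the team could tag with a heavier nitrogen isotope and then simply ask the NMR spectrometer whether that spin was still present in the final ring.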
And so we've used a combination of observational tools and tracing tools to actually watch the evolution of the process, to both show the stepwise and concerted versions and what happens. And because we're tagging the nitrogen, which is supposed to always be in the same spot each time, it's very obvious when you look at the tracing that in one of these pathways, it's always there. Yeah. And so it's a combination of these different levels of tooling that have allowed them to make this observation. Exactly. Yeah. And then they want to get even deeper into this enzyme. How does the enzyme actually work? Still not good enough. Still not good enough. Why is it different from the normal ones? Yes. Okay. So this class of enzymes, they're calling it ornithine/lysine/arginine decarboxylase-oxidases. The decarboxylase-oxidase part is key, because it's both doing the decarboxylation and the oxidation in a single step. These are called OLADOs. Yeah, OLADOs. OLADO is the acronym. Yeah. And they're from a class of proteins called PLPs, which use vitamin B6 as a cofactor. So the vitamin B6 is helping along the enzyme even though it's not part of the active substrate. It's, like, in the back, sort of making sure that the enzyme shape is correct. Okay. Next, they used AlphaFold. Okay, this is an AI tool developed by Google's DeepMind to predict the 3D structure. Before, let me just take a step back. Before we had AlphaFold, you had to do all sorts of protein crystallography, maybe like cryo-EM, all this other stuff. Now you can just plug it into AlphaFold, and you'll get a really nice 3D structure. Plug it in, plug it in. Right? And it's super cheap to do. Once you've got that, now we can figure out: what is the difference between the ancestral OLADO and this particular OLADO? The difference is a tiny amino acid difference, between tyrosine and phenylalanine. Look at the structures of these two. They're exactly the same, except for a hydroxyl group, an OH.
The tyrosine just has an OH attached, and that OH is the entire difference. That's so funny. Such a minuscule... It's tiny, yeah. All it is, is one of the carbons is attached to an OH. Otherwise the entire amino acid is exactly the same. But that one-site difference in the active site is what causes this entire change downstream. It's what basically changes all of the downstream. Because if you don't have the OH, effectively what that means is, the cadaverine comes in, or you make the cadaverine, and then... if you have the OH, it stops another hydrogen from coming in and sort of freeing this guy. Got it. Got it. Okay. Before, there was nothing to stop another H+ ion from coming in and just, like, freeing this thing: it goes away, and then, you know, some other enzyme hooks up with it. Here, because I've got the OH, I'm not messing with this cadaverine. The cadaverine stays inside my enzyme for longer, and then the process of oxidation can actually happen within the same spot. That makes sense. It's sort of like, because of its presence, it dictates whether it will stay or whether it will go. Yeah, like, the time that it stays inside the enzyme is determined by whether that oxygen and hydrogen are there or not. Okay? What's, again, very cool about this particular enzyme is also that it is very promiscuous. It's not loyal to lysine or ornithine. It does lysine, or ornithine, or arginine. So it's incredibly versatile if we want to use it for industrial purposes. This goes back to the idea we keep coming up with with some of these things: is it a single-point discovery, or is it, in my analogy of it, more of a platform that has multiple use-case possibilities, because it's a fundamental unlock that has more than a singular point solution? Exactly, yeah. So it's very cool in that way, right? Because it's very versatile. It can use all sorts of amino acids to create these compounds of many different types.
It's like a little mini factory. Yeah, yeah, yeah. It's very cool. And it's doing both processes at once. It's like a single machine that goes end to end. Yes. Is it, like, an efficiency play? Less likelihood for issues transitioning from a step one to a step two? Exactly. All these kinds of things. All these, yeah. And you're already getting to sort of what I'm going to talk about at the end, which is the applications. But before we do that, the last bit that they studied was the evolution of this particular enzyme. Okay. How did plants figure out how to make this? Yeah. Okay. You would think that the plants... you know, usually you'd naively think the plants have some precursor protein. Yeah. And then that precursor protein gets mutated in some way, and then I have this new use case. This particular enzyme is actually more related to bacterial enzymes, not plant ones. Interesting. Okay. That's kind of weird, because bacteria are not plants. Right. Unless we remember that the chloroplasts that are inside bacteria... I mean, sorry, the chloroplasts that are inside plants now were once cyanobacteria. They were once bacteria that, through endosymbiosis, became integrated inside the plant cell. And they've just stuck around for, like, a billion years, because the plant cell has had a good deal: the plant cell gets glucose and ATP, so energy and food, out of the cyanobacteria. And the cyanobacteria is in a nice, like, all-inclusive resort. It doesn't have to worry about trying to get food for itself, because the plant is just supplying it, as long as it does the work. Right. So this particular enzyme is more related to the decarboxylases in bacteria than it is to plants. And if we look through the evolution, there's independent lineages across all of these different plants. Because remember, we've got different plants, right? We've got rosids, Nicotiana, Artemisia. All of these are different plants that are finding ways to make this happen.
They've independently found this bacterial enzyme and they've repurposed it. So it's not like one guy did it and then he let everyone know. All of these different lineages did it at different times, but they found the same use case. Which is pretty interesting, right? It's almost like a fundamental thing: each of these lineages was like, this is the most efficient way to do this process. Yeah. And they all sort of ended up centering on the same implementation. Yeah, they reinvented the wheel, like, four or five times in evolution. You know, it's kind of interesting to think about, because one thing you could ask is, like, how is this even possible? Right. How do you do this kind of evolution? Well, the cyanobacteria genome gets integrated. The bacterial genome gets integrated into the plant genome. And then what you have is these things called tandem arrays, which is effectively, when you replicate a gene, sometimes you get multiple copies of that gene. Okay, and that becomes something called a tandem array. Now, your main copy, you don't want to mess with. Right. Okay. It's kind of like in GitHub, when you, like, push commits. You have a branch separately for yourself where you're doing all sorts of heinous crap. Don't push it to main. Yeah, you never push it to main unless you're really sure that it's working. Right? And so this tandem array is basically that. It's effectively like multiple GitHub branches. You're messing around here, and your main is still going at it. But with all these copies, you can start doing some mutations. You can figure it out. And if something works, hey, let's keep it, you know, because evolution wants to keep it. Plants invented version control before GitHub, okay? So this, you heard it here first. Yeah, yeah. Plant-based version control is the true OG. Yeah, yeah, exactly. And we're just repurposing it.
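The tandem-array-as-branches analogy can be sketched as a toy simulation. This is purely illustrative; the sequences, copy counts, and mutation scheme are made up for the example. The conserved "main" copy is never touched, while spare copies accumulate random point mutations, and a copy that stumbles onto a beneficial variant gets kept.

```python
import random

def explore_tandem_array(main_gene, beneficial, n_copies=8,
                         n_generations=20_000, seed=1):
    """Toy model: duplicate copies of a gene mutate freely ('branches')
    while the original ('main') stays conserved. Returns the first copy
    that matches the hypothetical beneficial variant, or None."""
    rng = random.Random(seed)
    copies = [list(main_gene) for _ in range(n_copies)]
    for _ in range(n_generations):
        for copy in copies:
            pos = rng.randrange(len(copy))     # one random point mutation
            copy[pos] = rng.choice("ACGT")     # may be silent (same base)
            if "".join(copy) == beneficial:
                return "".join(copy)
    return None

main = "ACGT"                                  # conserved copy, never mutated
variant = explore_tandem_array(main, beneficial="ACGA")
print("main still intact:", main)
print("variant discovered:", variant)
```

With on the order of 160,000 mutation events exploring a space of only 256 four-letter sequences, the beneficial variant turns up with near certainty while `main` stays pristine, which is exactly the redundancy the duplicated copies provide.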
So now let's look into applications. What could this be used for? Well, these alkaloids are very important in medicine, right? And what you can do, for example, if you want to make securinine, which is this particular compound that we're studying: in cancer, this can be used in leukemia therapy, and it can be used as a kind of neuroprotection type thing. Before, what we used to have to do is grow this plant and get the compound out. You can imagine that's not scalable. No. And it's very expensive. Yeah. Now we know the gene that does it. Furthermore, the gene encodes a one-step process, so we can just CRISPR this gene into yeast and then grow yeast. The idea being, we know the factory that produces the outcome we're looking for at this molecular level inside of plants. But because we understand the genes, the needed gene expression to replicate that factory, we have the blueprints. Yeah, yeah. So we can just take any yeast cell and give it the instructions to build the small factory, which now removes the need to grow the plant or any of that life-cycle process.
We literally can just have a single-function yeast cell that focuses on producing this outcome, period. Yeah, exactly. And then we can just scale up the number of yeast cells. Yeah, and it's one step, no life cycle, you can do it in the lab, and this is how you get industrial-grade, industrial-volume production. Exactly. So it's making this very cheap, because all you have to do is grow yeast, which we know how to do; that's like bread and beer, right? So this synthetic biology approach can completely revolutionize how we produce this class of compounds. Right. And it's a whole class, it's not just that particular one. This is the first enzyme that we found that does this, so now we've got a blueprint on how to even approach this problem. Right, right. We can scale up, we can do all sorts of really cool stuff. So securinine is the one that's a specific candidate that would help as it relates to leukemia therapy. Yeah. But what you're saying is securinine is only one of the outputs of this mini factory. Yeah, yeah, there are so many alkaloids out there, right, that could be using this one-step process. And a one-step process is just really good because it's much more manageable. It's a single factory; you don't need a second one to find the intermediary and so on. Very, very interesting. So I thought this was a very cool one. You know, botany doesn't get a lot of love, but this was a very cool one. All the botanists out there, you're always going to get some love on this channel. Yeah. And again, there are levels to this, right? It's not only the observation, it's showing how that observation is functionally happening, the mechanisms by which that process arises, and then, with that understanding, now it's let's see how we can apply this to other areas of research. Yeah. Again, it feels very much like a platform, because you can generate any number of different alkaloids
with this factory. It's funny, you know, the old town that I'm from in North Carolina, that area used to be big on tobacco; tobacco used to be the big agriculture there. So even in Durham, they call it the Tobacco District. They converted these old massive tobacco factories into, you know, co-working spaces and all this and that. I will have everybody know that the collapse of the tobacco industry in the RTP was not due to these mini factories. We weren't there yet. However, it seems like if nicotine is one of these alkaloids that could come out of this factory, that seems like a place that some people who are trying to line their pockets are going to run to. I don't know what the economics are around nicotine production, so that might be totally off base. But there's a million different use cases you can see for this. A great story number one, starting off with botany, which we don't always cover. No. But we want to give some love. We're going to jump into one of the most popular parts of the show, the rundown. We are not able to cover every breaking science research story week to week because there's just so much science happening globally. So we take the rundown as an opportunity to give you a little taste of what else is happening in the world of science without going into a deep dive. But before we get into the rundown, just a couple of quick housekeeping notes, such as: how are you, my friend? Pretty good. We had our Super Bowl festivities yesterday. Yes. One of the most entertaining Super Bowls in recent memory. It was riveting. Dude, it was riveting. As the footballer on the pod, football as in not American football but the true football, I've only recently learned the rules. Okay, nice. And so I don't know. It was not great. It was not great, yeah. There was a lot of not running. I think that's the technical term. They did not run. The sports ball term. It will be the Super Bowl, if I'm not mistaken.
It sounds like it's going to be in Los Angeles next year. Oh, nice. So if anyone at the NFL wants to do a segment on the science of football, we are here and local. It's a quick, short drive over to SoFi. Yeah, we could do Deflategate. We could do a breakdown of Deflategate, which is why Belichick apparently didn't get into the Hall of Fame as a first-ballot Hall of Famer, which is a whole thing. It's a bit much. It is. Isn't he like one of the greatest coaches ever? You could take away everything after Deflategate and he's still talking to you. So it's like, okay, guys, come on. All right, enough of the sports ball and banter. We will now jump into our rundown. We have four stories. We're starting off with our first story, which was in CBS News, about the delay for the Artemis 2 launch due to technical challenges. It faces delays after a critical test revealed that there were hydrogen leaks, pushing the much-anticipated launch back. So what's going on here with this story? Yeah, basically hydrogen is really small, right? The hydrogen atom is the smallest because it's a single proton and a single electron. So it's going to leak through when they're trying to pump up the rocket. Better to catch these things now than later. I always figured the first launch attempt was probably going to be scrapped, because the rocket hasn't really been tested, so they really need to make sure. So, you know, Artemis, the SLS, the Space Launch System, is still sitting on the launch pad waiting for its March debut, but they're still going to the moon. I heard that they couldn't get the Hollywood studio prepared for the launch in time, and so, you know, they had to get Netflix ready. It was raining and they couldn't mimic the rain. It just wasn't accurate. Yeah, that's what it was. And so for those who are listening and not watching, my face is full of sarcasm.
But we did mention this in our Artemis deep dive that, you know, it's a launch window, not a launch date. And we explained why this launch in particular, which was our episode 24, has a lot more complications, because unlike the Gemini and Apollo missions, where we tested different aspects of the workflow across multiple launches, we're basically trying to condense all of that into a much smaller number of launches. And so there are a lot of things that can go wrong there. So we'll keep an eye out for the actual launch date for Artemis 2, which, again, is built in Lego. Not plural, not Legos. Lego, which we got in our comments. Yes, Lego is already plural. It's behind Krishna's left shoulder in all camera views if you're watching on social or on the pod. Our story number two is about the king of the jungle, which is sometimes debated, but lions having a second roar that scientists have only just discovered, which could potentially help with conservation efforts. Yeah. And so this was a weird story. What's instrumental about this discovery here in animal behavior? Yeah, I mean, everyone is aware of the lion's roar, right? The lion is the king of the jungle.
It has a roar, and everyone kind of knows what it sounds like. Exactly. And now what they've done is they've taken thousands of hours of recordings of lions and run that through an AI classifier, and what the AI has figured out is there are actually two subtypes of roars that lions make. There's the first one, the loud one that everyone knows, and then there's a second one that is a little bit lower, takes a little bit longer, and is a lot more unique to each lion. So when we say this can be used for population studies: you could have a recording of the jungle, and you can have an AI go through it, and each lion is going to have a different voice profile, right? And so you can identify how many individuals there are in a certain territory, where they're going. You can identify a lion here, and then a month later, if he went to the neighboring national park, you can say, oh wow, he traveled all the way there. The recordings were taken from Tanzania and Zimbabwe. A lot of these lions have collars that they've been fitted with to track them, and they also have little microphones, and so that's where the data set comes from. I think it could be huge. One thing that I learned when I was reading about this story is that, you know, the MGM roar, yeah, the Metro-Goldwyn-Mayer lion's roar that comes at the beginning of all these different films, now owned by Amazon, yes, that's actually a tiger roaring. Yes, it's not even a lion. For those who don't know, the best roar of the big cats, 100 percent, is the tiger. 100 percent, yeah. The lion's is not that great, and so they used a tiger but they put a lion there. It's like dubbing. It's very low. Lion roars are kind of low energy, Jeb. They're a little apathetic, exactly. The tiger, again, the mascot of the greatest university institution on the planet. That's right. It's chosen for a reason. Yeah. Also a great Zim shout-out. Well done. Yeah. It's not always
that we can get to plug my motherland in a story. That's a good one. I thought that was a cool one. And the research comes out of the University of Exeter and the University of Oxford, and it's currently out in the journal Ecology and Evolution. Fantastic. We're going to move to our third rundown story, related to NASA's Juno mission revealing that Jupiter is much smaller than we previously thought. What's going on here? Yeah, this one was weird, because I was like, how do you get the size of a planet wrong, especially one that's quite big? Like, how do you do that, right? And then it's actually an interesting question: how do you measure the size of a planet? Yes. Right? So the way they measured it before was when Voyager 1 and 2 and Pioneer 10 and 11 went to Jupiter. You're beaming data to JPL at all times, right, over the NASA Deep Space Network, which currently runs out of JPL. I don't know where it was back then, so I shouldn't say that they were beaming to JPL, but now they do.
So you're beaming data to JPL. At some point you're going to go behind the planet, so you're going to get blocked, and then at another point you're going to come back out and start beaming again. That occlusion tells you how big Jupiter is, the slice of Jupiter at the time, right? Now, with Voyager 1 and 2 and Pioneer 10 and 11, you only really get two data points per flyby, one going behind and one coming out, so with four spacecraft that's only eight data points, because you're only going past once. They didn't revolve around Jupiter; they just visited and then they were on their way to the other planets. Yep. Juno has been going around Jupiter for quite a while, right? So we get more measurements, which means that our error bar can get smaller. Now our error bar is less than half a kilometer, which is quite good, quite nice. And since it arrived in 2016, it's collected a lot more data. What these guys did with that additional data is actually calculate the size of Jupiter not only at the equator but also at the poles, because it's traversing latitudes on Jupiter as well as it moves around. What ends up happening is you send radio waves. Now, imagine when there's no Jupiter in between: the radio wave just goes through fine. As you start approaching the disk of the gas giant, the radio waves are going to go through the top parts of the atmosphere, which have a bunch of ions, the ionosphere, which we also have on Earth. And those ions are going to start changing the frequency of the radio waves, and the atmosphere is also going to start bending the radio waves, right, just because of refraction. And so you're going to get that data too. The frequency is going to change. And then at some point it's going to get completely cut off. Right. And you're doing it at multiple latitudes.
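The flyby-versus-orbiter point above is really just error-bar statistics: averaging more independent occultation chords shrinks the uncertainty like one over the square root of the number of measurements. Here's a minimal sketch; the 5 km per-measurement noise is made up for illustration, and 71,492 km is just Jupiter's nominal equatorial radius used as a stand-in.

```python
import math
import random

def estimate_radius(true_radius_km, n_occultations, noise_km=5.0, seed=1):
    """Toy occultation experiment: each pass yields one noisy radius
    measurement; averaging n of them shrinks the standard error of the
    mean like noise / sqrt(n)."""
    rng = random.Random(seed)
    samples = [true_radius_km + rng.gauss(0, noise_km) for _ in range(n_occultations)]
    mean = sum(samples) / n_occultations
    stderr = noise_km / math.sqrt(n_occultations)  # analytic error bar
    return mean, stderr

# A handful of chords from the Voyager/Pioneer flybys
# vs. many repeated occultations from an orbiter like Juno
_, err_flybys = estimate_radius(71492, 8)
_, err_juno   = estimate_radius(71492, 800)
```

With 100 times the data, the error bar drops by a factor of 10, which is the qualitative story behind Juno's sub-half-kilometer precision.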
What ends up happening is, we found out that the poles are 12 kilometers smaller than previous measurements. Okay. And the equator is only a little bit smaller, by 2.5 miles. So it's flatter and the whole thing is smaller. Okay. And that's what we found. This is important because if we want to make models of Jupiter, like the weather on Jupiter, what kind of weather phenomena we'd see, the climate on Jupiter, things like that, even a number this small is quite a big deal. Significant for any derivative calculations or simulation creation. Yeah. I thought that was cool. Very cool. That one's out of NASA. Yes. Jupiter: smaller, flatter, but still very big. Yes. Very, very big. Yeah, very, very big. And it was in Nature Astronomy. So our last story is about physicists getting a peek at how matter is born from nothing. Yes. Okay. Out of the vacuum. I'm going to need an explanation for this one. Out of the quantum vacuum. Right. Okay. So we really want to understand how particles like protons and neutrons get created, especially in the very, very early universe.
Right after the Big Bang. This mechanism of creation of these particles, called hadrons. Hadrons are this collection of quarks; quarks are the fundamental particles of the nucleus, not the protons themselves but what's inside the protons. And it's a little unclear how that formation happens. There was something called the quark-gluon plasma. Quarks are the particles; gluons are the force carriers of the strong nuclear force. That's the thing that binds a nucleus together, because if you think about it, the nucleus is a bunch of positive charge, right? Protons on top of protons on top of protons. All that positive charge wants to get away from the other positive charges, because electromagnetism wants it to leave, but what keeps them together is the strong nuclear force, which overpowers the electromagnetic force. The strong nuclear force is very, very weird, because if you think about gravity and electromagnetism, the farther you get, the weaker the force. With the strong nuclear force, up to a certain length scale, the farther you get, the stronger the force. It's very weird. It leads to something called quark confinement, which is this idea that if I have two quarks that are right next to each other and I start spreading them apart, the force between them is going to get bigger. What does that really mean? It means the energy density in between them is going to get really big. At some point, the energy density is going to be enough that two new quarks pop into existence out of just the gluon energy that's there, right? And so you're never going to see quarks on their own. So it's been really hard to study how protons form, because you never see the individual constituents on their own. Does that make sense? Yes, yes. Okay. This new paper is about a technique that lets us study that formation process without really relying on trying to see a single quark on its own. Okay? This is from the STAR collaboration at Brookhaven National Lab.
They have something called the Relativistic Heavy Ion Collider. This is the largest particle collider that we have in the U.S., although people at Fermilab might disagree with that. Depends on what you're trying to talk about. If you're from Fermilab and you want to complain about that factoid, please put it in the comments. Yeah, go ahead. And so what happens is, in this Relativistic Heavy Ion Collider, there's a ton of energy, right? And sometimes what you get is a strange and an anti-strange quark that pop out of the vacuum just from vacuum fluctuations. Now, vacuum fluctuations meaning: the quantum vacuum is the lowest energy state, but because of the Heisenberg uncertainty principle, that lowest energy state is still going to have a tiny bit of jiggle, right? And that jiggle means there really are particles popping in and out of existence. Sometimes you get a strange and an anti-strange particle that pop in and out of existence. They can borrow energy from the high-energy collision that we're getting inside this collider, and when they borrow energy, they turn into a cousin of the proton called the lambda hyperon. So it's still three quarks. The proton is up, up, down, and the neutron is up, down, down.
Those are the three quarks that make up the proton and the neutron. Here you've got an up, a down, and a strange quark; that's this lambda hyperon, so it's like a cousin of the proton. You're going to get the lambda particle and the anti-lambda particle, because you always have to conserve charge and all this other stuff, but the spins of these guys are going to be correlated, because the strange quarks that came out of the vacuum have correlated spins, right? And what they could do is measure the spin correlations of these particles. Okay. When you do that, that's something I can actually measure, right? I can wait for these lambda particles to decay into different products, measure the angular momentum of all these different products, figure out what the original spin was, and then I can see if these two are correlated, because they, you know, formed at the same time, things like that. What's cool is this gives a way to think about how hadrons form without having to worry about trying to observe individual quarks. Now we can actually peer into this process of how that three-constituent particle forms out of the vacuum, how those spins correlate. If they're closer together, there's higher correlation; if they're farther apart, there's less correlation. And, you know, it's kind of cool, because the Relativistic Heavy Ion Collider is retiring now, and it's going to become part of something much bigger called the Electron-Ion Collider. It's part of the DOE. What they're doing is building, with the existing infrastructure at Brookhaven, this larger collider. So this is one of the last things that it did, but it's very fundamental. It was in Nature, out of the STAR collaboration.
It's kind of, you know, the swan song of that collider, and it's quite fundamental. People are very excited. Just a quick point of order for those who don't know: the DOE is the Department of Energy. They are the federal agency in charge of all of our nuclear weapons, for example, and anything that has a nexus to atomic or nuclear energy, weapons, and fundamental research in those categories. And what's interesting about this: it sort of sounds like we've basically found this whole tracing mechanism. Similarly to our past story, it's like we found a way to look at the derivative products or outcomes and then reverse engineer with math where they came from. Yeah. Which allows us to not have to worry about the ability to observe directly. Yeah, because we have enough data to get to the original state. Exactly, derivative observations. Yeah, yeah. And this is a common technique in particle physics. Like when we found the Higgs boson, for example, at CERN, right, again, it's always these decay products that you want to catch, and then from those reconstitute what the Higgs was, what the mass of the Higgs is, so on and so forth. So this is doing that, but now it's actually looking at spin, looking at the angular momentum of these tiny little objects, and trying to make it happen, right? They're trying to trace back what's happening with the quantum vacuum. You know why I love this story, of course: because if we're talking about the quantum vacuum, or vacuum energy, obviously we have to talk about zero-point energy. Yeah. And that's how the aliens are getting here, guys. Exactly. Obviously they figured out how to manifest and utilize vacuum energy in a very similar way to maybe how we're describing. Yeah, dude. Not proven. I'm sort of just putting it out there, getting it into your minds, letting it ruminate a little bit. Maybe we'll have a future research story that actually points to it, but we're not quite there yet. But speaking of the quantum, we're going to end the rundown here.
And we are going to go into our story number two, which is a quantum sensing story. The question here, or the idea here: we have this new multi-parameter estimation with an array of entangled atomic sensors. This was published in Science in January, from the University of Basel as well as the Sorbonne in France, the Laboratoire Kastler Brossel. Nice. One of us can kind of pronounce French stuff. And the idea here is: how can quantum entanglement revolutionize measurement precision? And that appears to be what we have going on here. Yes, measurement precision is very big in physics. Physicists love to measure things extremely precisely. Now, classical measurement is limited by something called the standard quantum limit. The idea is the quantum world is discrete, which means, for example, if I want to measure photons that are coming into my photodetector, there's going to be fluctuating power on that photodetector, because photons are going to be arriving one by one. This is the idea of shot noise. Okay? And because the quantum world is what it is, it's quantum, there's all this discrete stuff, you're going to get an error on whatever measurement because of the discreteness of the world that you are measuring. And that error goes like one over the square root of N, where N is the number of particles that you've observed. We want to go even beyond that, because the fundamental limit when it comes to actually recording something is really the Heisenberg uncertainty limit. And what this particular paper has done is successfully split a cloud of atoms that they've made into a Bose-Einstein condensate.
They've split it into three, and what they're doing is using the entanglement between those clouds of atoms to up their game on how sensitively they can measure something. They can do it with multiple parameters, so you can measure something here on the left, you can measure another thing on the right, so on and so forth, and you can do it across space. And they're very clever about how they're able to use this entanglement to get beyond that standard quantum limit. They're starting to probe the real limit, which is the Heisenberg uncertainty principle, right? I will just note, as someone who comes from the software world, SQL, meaning standard quantum limit, is a little bit... maybe I'd have gone with a different naming convention there, because SQL is a popular database. Oh, really? Oh, yeah, SQL. Yeah, yeah. For sure. I don't think physicists care, though. Zero percent. Zero percent. I'm sure they use SQL all the time. I mean, I use it all the time in my work. But yeah, for us, at least for the AMO guys, SQL means standard quantum limit. It's okay. We can always learn multiple acronyms. Yeah. Context switching, right? Exactly. So let's get into uncertainty and information, right? When it comes to the real limit, it is the uncertainty principle, formulated by Werner Heisenberg in 1927. He came out with the uncertainty principle, and people have been chasing this limit ever since. This is the limit of the universe; you can't go beyond it, just because of the nature of quantum mechanics and the nature of our reality, right? The classical limit is the standard quantum limit, 1 over square root of N. You've got shot noise, because there's an uncertainty on how much you can measure based on a discrete amount of stuff coming in. Now, here's the deal, though. The Heisenberg uncertainty limit is a limit on the product of two observables, the product of the noise on two observables. For example, the standard one that you think about is momentum and position, right?
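The one-over-root-N shot-noise scaling mentioned above is easy to see in a quick Monte Carlo. This is a toy stand-in, not the experiment: we estimate a detection probability from N discrete "clicks" and watch the spread of the estimate shrink like 1/sqrt(N).

```python
import math
import random

def shot_noise_error(p, n_particles, trials=500, seed=2):
    """Monte Carlo of the standard quantum limit: estimate a probability p
    from n discrete detection events ('clicks'). The standard deviation of
    the estimate scales like 1/sqrt(n) -- the shot-noise limit you get
    when the particles are independent."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(trials):
        clicks = sum(1 for _ in range(n_particles) if rng.random() < p)
        estimates.append(clicks / n_particles)
    mean = sum(estimates) / trials
    var = sum((e - mean) ** 2 for e in estimates) / trials
    return math.sqrt(var)

err_small = shot_noise_error(0.5, 100)    # ~ sqrt(0.25/100)  = 0.05
err_big   = shot_noise_error(0.5, 10000)  # ~ sqrt(0.25/10000) = 0.005
```

A hundred times more particles only buys a factor of ten in precision; entanglement-based schemes aim to beat that scaling, approaching 1/N at the Heisenberg limit.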
If I know my position really well, then I don't know my momentum that well, and so on and so forth, because of the product of these two numbers: if one number is small, the other has to be big, such that the product remains about the same. But what you could do is exactly what I said. If I really want to know my position very, very well, I could not care about my momentum. I could squeeze my observation such that my delta on one axis is very big and my delta on another axis is really small. So instead of a circle, where the error in my, let's say, x and y, these are two different observables, is the same, my error on x could be really small, that could be my position, and my error on the y-axis, which is my momentum, could be very big, because maybe I don't care to actually measure that. And when I do this, with this quantum metrology, I could get a hundred times better than my standard quantum limit and still stay above the Heisenberg uncertainty limit. Okay. These are called squeezed states, because you're squeezing in one direction and stretching in the other direction, squeezing in the direction that you care about. Right. We basically want to increase the level of precision in one of these two observed states or dimensions. Yeah. And, well, I'm trying to understand why, but we'll get there. But the first idea is: instead of having an even distribution of error across both observed states, we're trying to maximize precision on one while giving up precision on the other. Yes. And to think about why you would want to do this, let's go into an example like LIGO, the gravitational wave observatory, right, where we want to measure the position of our mirrors really well. So I want to measure how far apart one leg is versus another, right? I want to do that. Now, the way I do that is through interferometry, where what I'm really trying to measure is the amplitude of the light that's coming, when
it goes down one arm versus the other arm, and it comes back, it interferes. If both arms are exactly the same length, then the light is going to cancel, and I'm going to get zero brightness. But if there's a slight offset, then I'm going to get a tiny amount of brightness in my detector, because the light is not exactly canceling out. The only thing I care about is the amplitude. Right? Yes. And I couldn't care less about, let's say, the frequency of the light, because I know the laser's frequency, right? So this is just a back-of-the-envelope; obviously there are other things, but that's the idea. Yes. What if you only care about one thing? In LIGO, the only thing I care about is the amplitude of the light, because then I can make this kind of gravitational wave drawing. That makes sense. Okay. Yes. So the challenge is there's incompatibility between different observations, for example, what I was saying with position and momentum: Heisenberg says you can't measure both at the same time. And the other thing is, what if I want to measure at two different spots in space? So what ends up happening is you've got two different spots in space, and what you can do is instead use entanglement. This is your quantum action at a distance, spooky action at a distance, right? This won the Nobel Prize in Physics a few years back. We always use Alice and Bob for these experiments, I don't know why; there's even a quantum company called Alice and Bob, because that is what we use to show entanglement and teach entanglement. You get a pair of particles, you send one particle to Alice, you send the other particle to Bob, and then you ask one of them to measure their particle, and that's immediately going to affect, in some sense, the measurement the other one gets, because these two particles, because they're entangled, are connected to each other across space and time through some weird
mechanism that's, like, kind of faster than the speed of light, but not really, there are nuances. But what ends up happening is, if I measure the spin here, that's going to affect the spin over there. At least that's what it looks like: the two are correlated even across these distances. And what these guys figured is, well, actually, maybe we could use this to up our game in terms of measuring different things at different locations. You see what I'm saying? They used something we'll get into: a Bose-Einstein condensate. The first Bose-Einstein condensate was made in 1995 in Boulder. This is sometimes colloquially called the fifth state of matter. What you end up doing is cooling atoms down to just above absolute zero, such that all of these individual atoms behave like a single entity. They behave like a single quantum wave packet. Yes. Okay? They're just riding on top of each other. What these guys did, Lee and others, in 2026, in this particular paper, is they created a Bose-Einstein condensate, and then they spatially split that macroscopic entangled state and made a bunch of different atomic sensors that were entangled with each other, made an array, and then they could estimate whatever they were trying to estimate across these different sensors while still maintaining entanglement. Very cool. You guys, this is... I just cannot get over how creative some people are at the edges of our understanding, and how clever. Because I can already see where this is going, but let's continue, because I'm curious about a couple of details, though I think you're going to get to them, around how, because ultimately what we're trying to be able to do is, before we talk about increasing precision in one dimension: because of the Heisenberg uncertainty principle, we can't know two observable states with high precision.
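The LIGO amplitude readout described a moment ago can be sketched numerically. This is a heavily simplified Michelson picture, ignoring the real instrument's cavities and recycling; the 1.064-micron wavelength is LIGO's actual laser, but the formula here is just the textbook dark-fringe response.

```python
import math

def dark_port_intensity(delta_l, wavelength=1.064e-6, i0=1.0):
    """Michelson-style interferometer held at the dark fringe: with equal
    arms the returning beams cancel exactly; an arm-length difference
    delta_l (meters) leaks a small amount of light, and that leaked
    amplitude is the only quantity the detector needs to read out.
    The round-trip path difference is 2 * delta_l, hence the factor of 2."""
    phase = 2 * math.pi * (2 * delta_l) / wavelength
    return i0 * math.sin(phase / 2) ** 2

dark_port_intensity(0.0)   # equal arms: perfect cancellation, zero light
dark_port_intensity(1e-9)  # a one-nanometer offset leaks a tiny signal
```

The takeaway matches the conversation: all the information about the mirror positions shows up in one quantity, the brightness at the dark port, so you only need precision on that single observable.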
And this has sort of been the boogeyman that everyone's been trying to work around. And this appears to be a very clever way to hijack the limitations using quantum entanglement. Yes, and the multi-parameter part is what's interesting, right? Right. Because what you could do is say, hey, what if I just measure the position over here and the momentum over there? Yeah, exactly. Because those two, you never said... You know what I mean? Yes. That's what's happening. Yes. Okay, got it. So how did they do it? They've got this thing called the atom chip. It's microfabricated gold wires that create these very steep magnetic field gradients. Okay. And with those very steep magnetic field gradients, you can trap rubidium-87 atoms, and you can use those rubidium atoms as a two-state qubit system, because the spin of the outer electron of that rubidium atom can be either aligned with the nucleus or opposite the nucleus, the same way we talk about the hydrogen hyperfine splitting. This is the rubidium hyperfine ground state, and that's a two-level system.
You can tightly confine that thing, and you can have the atom transfer between the electron spinning this way or spinning that way by sending in a radio frequency pulse. This is our zero or one. Yeah. So you've got a kind of qubit here, right? And you can tightly confine this cloud of atoms near that chip surface, and when you tightly confine it at really low temperatures, you get a Bose-Einstein condensate. Okay, great. That's no longer that cutting edge anymore, which is kind of crazy to think about, right? In like 20 years it went to, oh yeah, okay, fine, you made a Bose-Einstein condensate. Great, cool. Next, what you do is you split them apart. So now, using your radio frequency, you can toggle that atom and very slowly deform the single potential well, which is where the atoms are sitting; you start making a little hill in the middle, and the atoms are going to split into two wells. Okay, you've got to do this very, very slowly, extremely slowly, adiabatically is what we call it. And when we get there, we're going to create these radio-frequency-dressed potentials, where you have multiple little wave packets of atoms. So a single cloud of atoms is now being split up into multiple different clouds that are still entangled, because we did it slowly enough that the entanglement of that original Bose-Einstein condensate survives the splitting. And just as a quick note, for my recollection about Bose-Einstein condensates, the idea there was, you know, you create one sort of macro object that maintains all the parameters as one holistic thing. Yeah, it's one wave function. It's one wave function, which has its own inherent value. Yeah, in and of itself. Yes. And so now we're building on top of that concept to take that macro object, this one wave function, across multiple component parts, and we're now splitting it while they maintain entanglement, which is then going to be our next...
But I'm correctly understanding the idea behind the Bose-Einstein condensate. Yes, exactly. It's one wave function that we're now splitting into, I guess, two wave functions, but because the two wave functions were derived from the first one and we're doing it slowly enough, the entanglement is still very strong. It's sustained. Okay. Okay. Right? And now what we can do with those multiple clouds of atoms is we can measure one quantity here, another quantity here, another quantity here, and so on and so forth. But because they are entangled, these are not independent variables. And so the noise is not independent. This is what's crucial. Okay. If they were independent, what would happen? If they were independent, let's say I've got atom cloud A and atom cloud B. I measure some parameter on atom cloud A. That's going to be, let's say, A plus or minus some number, because that plus or minus number is an error. With B, I'm going to measure B plus or minus some number. If I wanted to calculate, let's say, a difference between what is A and what is B and things like that, those errors would be independent, and when I calculated that sum or that difference, the error would only go down by 1 over root 2. Okay. Okay? Because they're independent. Yes. Now, when I measure A plus or minus something and B plus or minus something, that plus or minus is not independent. They're related. So what I could do is measure something like A plus B and A minus B. Or if I have three things, I can measure A plus B plus C, A minus B plus C, A minus B minus C. I could measure all of these different combinations, and the noise in that total measurement is going to be smaller than if I would have measured them independently. That is the key. Oh, my goodness. Does that make sense? Yes. Yes. We're doing, like, number magic here. Yeah, yeah, yeah.
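A quick Monte Carlo sketch of the point above (our own illustration with made-up numbers, not data from the paper): if the noise on the two clouds is correlated, the error on a combination like A minus B shrinks below what two independent measurements would give, while the error on A plus B grows, so Heisenberg stays happy.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0        # per-cloud measurement noise (arbitrary units)
rho = 0.8          # noise correlation between clouds (toy "entangled" case)
n = 200_000        # Monte Carlo samples

# Independent clouds: errors on A and B are uncorrelated.
a_ind = rng.normal(0, sigma, n)
b_ind = rng.normal(0, sigma, n)

# Correlated clouds: errors share a common component (stand-in for squeezing).
cov = sigma**2 * np.array([[1.0, rho], [rho, 1.0]])
a_ent, b_ent = rng.multivariate_normal([0, 0], cov, n).T

# Error on the difference A - B:
std_diff_ind = np.std(a_ind - b_ind)   # ~ sigma * sqrt(2)          ≈ 1.41
std_diff_ent = np.std(a_ent - b_ent)   # ~ sigma * sqrt(2*(1-rho))  ≈ 0.63

# The price: the error on the sum A + B grows instead.
std_sum_ent = np.std(a_ent + b_ent)    # ~ sigma * sqrt(2*(1+rho))  ≈ 1.90
print(std_diff_ind, std_diff_ent, std_sum_ent)
```

The squeezed quadrature (here, the difference) gets quieter than the independent-measurement baseline precisely because the noise is shared between the two clouds.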
We're, like, measuring combinations of states rather than individual states, and because we do that, because the noise is correlated, because the stuff is entangled, yes, the total noise is smaller. It's smaller on the aggregate computation, on the aggregate measurement. Yes, as opposed to the individual measurements. Fascinating. Which goes back to, now I'm thinking about that original chart we looked at with the circle and the oval, and the implications for the level of precision, because the noise is now, yeah, smaller, because we have this aggregate measurement that allows us to decrease it. It kind of makes sense. It does, and this feels very cool because in theory... Let me pause. Let me let you continue. Yeah, so let's see how they actually did it, right? So as I said, they've got a single wave function with the Bose-Einstein condensate. Now let's see what they do with just two clouds. So when they do two clouds, they've got this local microwave that splits it into two clouds.
There you can actually see the sort of two clouds right next to each other, about a few tens of microns to hundreds of microns apart. Yes. Right. And what we're going to do is measure some parameter, which effectively means we're measuring, like, a phase of the wave function. It's like an angle difference between the two things. Like, we're measuring an angle at all times. Okay. On something called a Bloch sphere, but we don't have to get into that. We're measuring some parameter. Okay. The two parameters are correlated, and that's why, if we were to plot phase two versus phase one, the distributions are not complete circles. They're spin-squeezed. They're these ellipses, right? One direction is the sum, like phase one plus phase two; the other direction is the difference, phase one minus phase two. What you can measure is, let's say, on one end, we measure the sum, which is the axis perpendicular to the long axis of the ellipse. So that's the squeezed axis, the part that's, like, shorter. Yes, on that ellipse. Yes. We measure that. And then what I can do is rotate the second one so that now the squeezed axis is on the difference part, and now I measure the difference. Okay. That's another key thing: they could rotate and manipulate these things such that whatever axis they wanted, that's the one that would be squeezed. Okay. So you've got this ellipse and you're like, I want to measure here; now let's rotate; now I want to measure here. So they're able to do both. It's not at the same time, so you're still not cheating. Heisenberg is still happy. But at different times, you're getting these measurements. You've measured the sum in one. You've measured the difference in the other. So now, with the sum, I can find the average value of this parameter. With the difference, I can find the gradient. I can find what is the difference between here and here. And I'm still keeping Heisenberg happy.
But because of this entanglement and all of the fancy tricks that I've done, each measurement now of the sum and the difference has beaten the standard quantum limit. Right. Because I'm not measuring independently. I'm not measuring phase one and phase two and then taking the difference. I'm actually just measuring the difference. Yes. Right. Through this mechanism of the atomic sensors in the Bose-Einstein condensate, which gives you a way to measure where the aggregate measurement has less noise than the independent measurements. Yeah. And because you can now also rotate in the way that you described, we can get a level of precision across multiple observable states. Yes, yes. Instead of having to choose one or the other. So in the LIGO example you talked about earlier, we just care about one thing, which is the mirrors. With this, conceptually, you can now expand that to say, well, we care about two things, and we can get a level of precision above the SQL. Yeah, closer to the Heisenberg limit. Yeah, which is theoretically the actual limit of observational precision. Yeah, we can't get better than that, but we'd like to get as close to that as possible. And here, what we're doing is measuring the actual, let's say, difference, right? We're measuring four minus three equals one. Before, we were measuring four, then three, and then we'd have to do the math to be like, okay, four minus three is one. Here, we're just measuring one, and the plus or minus on that one is much smaller than it would have been otherwise, because the four and the three would have had their own individual errors, and so the noise, by the time you got to the conclusion, would have been much higher. Here we're actually observing the conclusion directly. Okay. Very, very, very fascinating. And now, with three, we can do all the combinations, right? With three, I can do the plus plus plus, the plus minus plus, the plus minus minus, so on and so forth. And the diagonals that you see are what they're actually doing.
Right. Like, what are you actually measuring? This needs to be a proof of concept, right? It's not like they're measuring some magnetic field that's changing, because if you were to do that as a first experiment, right, you'd be like, well, how do you know you even measured it correctly? Correct. Yeah, that makes sense. So what do you do? You actually encode the parameter. You engineer it in. You go encode the parameter, and then you say, can I measure what I encoded? So you ultimately know what the result is. Yeah, you measure such that it's controlled. Yeah, because this is a controlled experiment. This is a proof of concept, right? You need to show that it works on stuff that you already know the ground truth for, and that's what shows in the diagonal. For the off-diagonal stuff, it's like, you encode something but you're trying to measure it in some other state. Like, you encoded plus plus minus, but then you try to measure plus plus plus. You're not able to do that, right? Because that wasn't the point. Right. And that's kind of the point of that diagonal. Along the diagonal, you get this boost in sensitivity. Okay? That's the negative decibels. The negative decibels are how much boost you have over the standard quantum limit. You're getting like negative 5 decibels, which is like a 3x improvement. Improvement. Right? Right. And that makes sense. The point is, you know, in order to sufficiently prove and convince that this is true, you need to show that it measures something we have a known value for. Yeah, yeah, yeah. It's something you've engineered that you know the ground truth for. And then you're like, okay, I was able to do this. Now I can go and use this to measure other stuff. Yeah. Right? Yeah, that makes sense. And the applications for this are very cool. Okay? So for one, you can measure magnetic fields really, really precisely.
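As a quick sanity check on that decibel figure (our own arithmetic, not from the paper): a metrological gain in dB converts to a linear noise-power (variance) factor as 10^(dB/10), so 5 dB below the standard quantum limit is indeed roughly the "3x" quoted above.

```python
def db_to_factor(db_gain: float) -> float:
    """Convert a metrological gain in dB (noise power / variance)
    to a linear improvement factor."""
    return 10 ** (db_gain / 10)

# A -5 dB result relative to the standard quantum limit means the
# measurement variance shrinks by roughly this factor:
factor = db_to_factor(5)   # magnitude of the 5 dB gain
print(round(factor, 2))    # ≈ 3.16, the "about 3x" in the episode
```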
You can create kind of like a vector camera that images the full magnetic vector field. So you have the X component, the Y component, and the Z component of some material, let's say. You can create this sensor now and measure inside that material the magnetic fields in each direction. You can do this trick where it's like, okay, now I care about X, let's measure X; now I care about Y, let's measure Y. I mean, again, you're not going to do it simultaneously. But if you do it a thousand times and you have reasonable assumptions about how constant the magnetic field is, you can get to pretty nice precision, right? You can just repeat the experiment over and over again. Second, you can have a quantum internet of clocks. Okay. So, you know, by entangling atoms at different sites in a kind of lattice, what you can do is have a distributed clock network, where there's a clock here, there's a clock here, extremely precise, based on this two-level splitting. And then you can measure gravitational redshifts at the millimeter scale, meaning I have my Bose-Einstein condensate here and I raise it by one millimeter. If I raise it by one millimeter, it's going to feel the Earth a little bit less. Yeah. If it feels the Earth a little bit less, time is going to be sped up a little bit more, and you can actually measure that by having the clock resolve where it is at the millimeter scale. Right. And finally, if you're interested in dark matter, there's this one, you know, theory of dark matter, and also, if you want to measure gravitational waves: when a gravitational wave comes through, each different part of the sensor is going to feel the gravitational wave at different time points. Yeah. So you can measure it that way. If you've got a dark matter particle or a dark matter quantum wave that goes through, you're going to get a jitter. And so the more sensitive we can get, the better we can measure these really, really tiny things. Because the point is that at each point of the
measurement apparatus, you're going to have up to a 3x increase in the level of precision, which, when we're talking about gravitational wave detection, is extremely valuable. Yes, it's extremely valuable. And, I mean, for their proof of concept, you know, when they went from two, at the level of two, I think they got about a 3x precision improvement. When they got to three, they didn't have a 3x; it was only, like, maybe 50 percent, 10 percent, something like that. Which is fair, right? Which is fair. But it's a proof of concept, right? We need a large number of atoms. Right now, they've only got about 5,000 atoms in this Bose-Einstein condensate. If you had something like 10 to the 6, a million atoms, then you could start competing with classical sensors, right? And then, if you have even more, now you're really going gangbusters. Yes, yes. With your improvement, you're really approaching that Heisenberg uncertainty limit, right? This is quite nice. So it's a proof of concept. We're transitioning to this multi-parameter quantum metrology, right? And it's going from theory to experimental reality. And I was reading about it; it's like even the theory of this was not really well founded, but they're running with it, and they're showing that the experiment kind of works. It's very, very cool. This, again, is out of the physics department at the University of Basel, as well as the Laboratoire Kastler Brossel at Sorbonne Université in France. The Europeans doing well, doing well. The French doing well. We have had a lot of French stories in the last couple of episodes. Very fascinating, especially because, like we talk about all the time, the Heisenberg uncertainty principle is one of those things to me that's still so weird. The analogy I always kind of bring up when we talk about it, and this is probably the common analogy, is in video games, you reach the edge of the level, the map, the edge of where the
developers built the map. Yeah. And you can't go beyond the edge. Like, that's the limit. Yeah. And it's just the limit. Yeah. And that's it. And that's it. And you can never know what's outside the limit. And it's a crude analogy, but it's very weird. It's just very weird. It's a central tenet of quantum mechanics. It's what makes it so different from classical mechanics. Right, right, right. Beautiful. We always love a good, solid physics story. We are going to end with our final story of the day, which is about Alpha K. Yeah. Which is this local adaptive mapping that's specific for cancer research. This was in Nature Communications, from the H. Lee Moffitt Cancer Center and Research Institute's Integrated Mathematical Oncology department. This one, there's a lot of concepts we've talked about that have helped me kind of grok this concept, I think, a little bit better. Because, you know, I didn't understand what gradients and gradient descent were before, and things like that. And there seems to be conceptually some overlap here. And so now I have a mental model for us to work with. But what do we have going on here in this sort of new cancer story? Yeah, this is a very cool cancer story, I thought. Again, it uses a lot of physics, which is my favorite kind of biology. They're introducing this new tool, Alpha K. This is out of the Moffitt hospital in Tampa, Florida. What they're effectively doing is trying to predict how cancer evolves. It's a paradigm shift in the way we think about cancer evolution, because before, we used to think there are just no rules. It's chaotic. This cancer thing is just trying its hardest to live. These tumors are trying their hardest to beat whatever we throw at them. And the way they do it is in a very chaotic environment. The genome is changing very chaotically. And so it's very hard to predict something that is chaotic, right? It's kind of like weather. But at the end of the day, even weather is predictable, right?
We have weather models, and they're pretty good most of the time. Yes. Right? Yes. Maybe not for Mammoth, because Mammoth has mountains, and mountains can get in the way of predictions and things like that. But, you know, for Los Angeles, it's pretty good. It's quite nice. So can we do the same thing with cancer genomes? Okay. Particularly, what they're trying to tackle is something called aneuploidy, okay? There's a paradox in aneuploidy. We've heard about aneuploidy when it comes to disorders like trisomy 21. That's how you get Down syndrome, right? You get three copies of chromosome 21, and that leads to Down syndrome. Okay. We should only have two copies for sort of typical organisms. Two copies of each chromosome. We've got 23 pairs of chromosomes, so 46 chromosomes total. If we've got two copies of each, we're good to go. But aneuploidy is when we have more or fewer copies of a specific chromosome. It is very bad and catastrophic for normal cells, but cancer cells love it. 90% of solid tumors are aneuploid tumors. Here you can look at a cancer cell.
There's four of chromosome 2, there's three of chromosome 1... 3, I guess, is still two, so 3 is still chilling. There's four copies of 4, three copies of 5, three copies of 12, three copies of 11. 10 is still two. You see what I'm saying? If somebody looked at this karyotype, it's like, there's no way this is a normal cell, right? Because, what is going on? Yeah, it's just all over the place. It's all over the place. There's three of some, and some are completely deleted and just not even there, right? So what's going on? Cancer loves doing this. Okay. Okay. And there's a clear evolutionary advantage to doing this for a cancer cell. There's a clear disadvantage for a normal cell, but somehow cancer cells love it, right? There's some kind of benefit that you get from that genetic variation. And it kind of makes sense, because the more copies of a chromosome that you have, the more you can mess around with mutations, right? It's that same concept of the GitHub main versus your branch. If you're a cancer cell, you've got a bunch of your own branches where you're just trying all sorts of stuff to survive, right? Because whoever the patient is, and the doctors that are treating the patient, they're throwing radiation, chemotherapy, everything at you. And from a cancer cell's perspective, it's like, I've got to change as fast as I can and evolve out of my current environment to get through to the next stage of whatever therapy they're going to put in, right? The cancer cell is trying to evolve and survive. And if we wanted to look at the possible number of karyotypes (karyotypes are this set of chromosomes, right?), if we were to look at all of the sets of all the different chromosomes that we would have to model in order to try and figure out some kind of simulation of cancer evolution, it would be something like 10 to the 10 to the 20. So 10 to the 20 zeros, which is a hundred billion billion zeros. Not that many combinations; that many zeros,
and a one. Okay? It's a lot. It's never going to happen. Even with quantum computing, it's never going to happen. Okay? Stop trying to make it happen. So, instead, these researchers came up with Alpha K, which infers what's called a local fitness landscape, and it's trying to figure out how the tumor is moving through this local fitness landscape. So the big initial problem is that the scale of trying to just map all the possibilities is impossibly large. Yeah. And so it's not worth trying to create the map of everything. Yeah, like a brute-force strategy. Right, right, right. So even with computation, the scale is just too large. So what they're trying to do is basically say, can we isolate certain aspects of the landscape, yeah, that are meaningful, to basically deal with the scale problem? Yes, exactly. Exactly. Specifically when it comes to this aneuploidy. And before we move off aneuploidy, I just wanted to talk about Ohalo, by David Friedberg, David Friedberg of the other podcast fame. Yes. He's CEO of this company called Ohalo. They are doing aneuploidy, effectively.
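Backing up to that karyotype-space number for a second: even a crude back-of-the-envelope count shows why brute force is hopeless. A toy calculation (our own assumptions, not the paper's model: 0 to 8 copies allowed for each of the 22 autosome types) already gives on the order of 10^21 karyotypes per cell, before you even consider populations of cells.

```python
# Toy illustration of the combinatorial explosion (illustrative
# assumptions: each of 22 chromosome types can be present in 0-8 copies).
copy_states = 9          # possible copy numbers per chromosome: 0..8
chromosome_types = 22    # human autosome types

karyotypes = copy_states ** chromosome_types
print(f"{karyotypes:.2e}")   # ≈ 9.85e+20 possible single-cell karyotypes

# A tumor is a *population* over this space, so the number of population
# states grows double-exponentially on top of this, which is why Alpha K
# infers only a *local* fitness landscape rather than mapping everything.
```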
They have something called, what is it called, Boosted Breeding, effectively. You know, when we were born, we got half of our genes from our mom, half of our genes from our dad. So you get one set of chromosomes from your mom, one set of chromosomes from your dad, and that makes you. What he figured was, in plants, what if we just kept the entire set? So we'd have tetraploids, meaning two sets of chromosomes from dad, two sets of chromosomes from mom, right, in the nucleus. And I was really surprised that it works like that, that the plant doesn't just die, right? To me, trivially, it's like, you should just die. But these plants, they're making bigger potatoes. The potatoes are lasting longer on the shelf. All sorts of stuff. It's, like, incredible genetic technology, and they must have done some crazy engineering to make sure that these cells are actually lasting as long as they do. I think the main idea behind Ohalo is, you know, if we look at our agricultural supply chains globally, yeah, they are very susceptible to any number of both environmental and geopolitical crises. If folks remember back to the beginning of the Russia-Ukraine conflict, one of the things that was brought up is that Ukraine is one of the breadbaskets of Europe, and if the agriculture goes down, you're going to have these cascading supply chain issues across not only Europe but other countries as well. And, you know, we obviously have anthropogenic climate change issues and all of this stuff. And so, finding ways to make the yield on agriculture and crops higher for the same amount of used space. We don't need to expand to larger areas being used for crops, but can we just actually genetically modify them to be better? Yeah. More rice, bigger potatoes, bigger tomatoes. Yeah. So that's the fundamental concept, and it's incredible. I mean, apparently it's working. But the way that they
went about that genetics, I just thought it would never work, and it's fascinating. Yeah, it's a good thing, you know. Yeah, it is a good thing. And I'd love to know... one day, when we get big enough, I'd love to have the chief scientist of Ohalo on and just ask him, how is this even possible? If you are connected to the chief scientist at Ohalo and you are a fan of the pod and would like to send them a DM, we would love to dig into this, because I think it's a really impactful concept. Yeah, I think it's very cool, and it's very cool technically. Yeah. So we'd love to connect. However, for this story, that is not the focus. That is not the focus. I just brought it up because the aneuploidy reminded me of it, totally, you know, this multiple-copies idea. So now let's get back into the genealogy of cancer. Let's talk about something called the fitness landscape. This is something that we've been alluding to; you alluded to it with the gradient descent and things like that. In 1932, there's this guy, Sewall Wright. He came up with this analogy for how evolution happens. Okay. What you can think about is a 2D landscape that has hills and valleys. Okay. The axes, in 2D: the direction north-south could be how much of gene A do I have; east-west could be how much of gene B do I have. Okay. So every point is a location in genetic space, where it's like, I have this much of gene A, this much of gene B. There would be hills, which is where you're more fit, so it's an advantage to be there, and there are valleys, where you're less fit, right? And so you don't want to be in the valley; you want to be up toward the hill. This is the fitness landscape analogy of evolution, and physicists love this, because everything is a potential energy landscape, and we're just trying to get to the highest point. Makes sense. And I just want to make sure I'm understanding this graph. So we're looking at basically sort of an XYZ, like, plane, and we see gene A and gene B in the way
that you just described. Yeah. And then, I guess, the height. Yeah. The height is this population fitness. Yeah. And so the idea is the hills are where it's highly fit and the valleys are where it's lowly fit. So it looks like, if you've ever played, like, create-a-level in Fortnite or any video game, and you want to generate hills or mountains, it kind of looks like that sort of imagery, in terms of what we're visualizing here. If you want to be a sniper, you want to get to the highest elevation, right? So here, if you're an organism, you want to get to the highest elevation on this fitness landscape. Now, this is a nice metaphor. And for the longest time, it was just a metaphor. Okay. What these guys have done with the Alpha K concept is operationalize that metaphor into something that's quantifiable. Okay. Okay. Because what you can do is now make empirical fitness estimates based on data. Okay. That is what this paper is doing. It's taking that metaphor and saying, actually, this is a real thing that we can apply very quantitatively, at least to the problem of aneuploidy. Because with aneuploidy, your axes don't have to be genes. They can be the number of copies of a certain chromosome. You can have this axis be: I have one copy of chromosome 1, two copies, three, four, five copies, six copies, right? That'll be one axis. The other axis will be how many copies of chromosome 2 do I have. How many of chromosome 3: the third axis, and so on. And so you have 22 dimensions, one per chromosome. And the point where you are is how many copies of each chromosome you have, right? So if you only had three chromosomes, let's say you were some organism, and you had three copies of number one, two copies of number two, and four copies of number three, then my point on this axis would be three this way, two this way, four up. That's where I am. That's where this particular cell is. Yes. Right?
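The hill-climbing picture above can be sketched numerically (a toy of our own, with made-up peaks, not the paper's landscape): define a 2D fitness surface with hills, and let a population drift uphill by following the local slope, the same "get to the highest elevation" dynamic the metaphor describes.

```python
import numpy as np

# Toy Sewall-Wright-style fitness landscape over a 2D genotype space
# (gene A dose, gene B dose): two Gaussian "hills" (illustrative only).
def fitness(x, y):
    sharp = 1.0 * np.exp(-((x - 2) ** 2 + (y - 2) ** 2) / 0.5)   # tall, thin peak
    flat  = 0.7 * np.exp(-((x - 7) ** 2 + (y - 7) ** 2) / 8.0)   # lower, broad peak
    return sharp + flat

def climb(x, y, steps=500, lr=0.05, h=1e-4):
    """Naive gradient ascent: the population drifts uphill in fitness."""
    for _ in range(steps):
        gx = (fitness(x + h, y) - fitness(x - h, y)) / (2 * h)
        gy = (fitness(x, y + h) - fitness(x, y - h)) / (2 * h)
        x, y = x + lr * gx, y + lr * gy
    return x, y

# A population starting near (6, 6) climbs the broad hill centered at (7, 7).
print(climb(6.0, 6.0))
```

Alpha K's contribution, as described in the episode, is inferring the local slopes of such a landscape from real single-cell data rather than assuming a functional form like this one.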
So now we can create an actual quantitative landscape, and we can measure at each point what the slope of this landscape is. Because we can track a cell at a particular point and ask which way it went, right? And we can reconstruct the fitness landscape that way. That's what they're doing. Very quantitatively, they're reconstructing this in a very real sense. It's no longer a metaphor. Initially, this concept was strictly meant as a mental model, as a way to grok, or to kind of imagine, how this happens. What we're saying is, now scientists are actually taking real-world data and creating a literal fitness landscape, particularly in oncological studies. Yes, yes. And that's why it's called adaptive local fitness landscapes of aneuploid karyotypes. They're literally making that fitness landscape. And then, from that construction, we can tell which way it's going to go. Is it going to go down into the valley? Is it going to go up this way, to this peak? Or is there another peak that's nearby? These are the questions that they can answer. This is not a trivial problem, right? And it really comes from technology. Before, we used to have standard single-cell sequencing, where we do something called MDA, multiple displacement amplification. What we would do is effectively have these polymerases, these DNA replicators; they would exponentially amplify certain parts of the DNA. So if we wanted to sequence the DNA of a single cell, there's not a lot of DNA, right? So in order to sequence, you need to amplify the amount of DNA so that we can then sequence it. When you amplify things, you can get an exponential head start on certain chromosomes, and then, you know, the protein that's doing the DNA replication maybe got to another chromosome a little bit later. So there's not a lot of chromosome 5, because it started on 5 later, but on 1 it started right away. And so for 1 there are, like, a thousand copies, but for 2 there are two copies.
That's not going to help us, especially for aneuploidy, right? If we've got, like, a thousand copies of gene one, which is on chromosome 1, well, I don't know if that's because there are actually a thousand copies, or if the polymerase just started there first, right? Yeah. Instead, these guys used the DLP+ solution. There was already a paper out that had direct library preparation of single cells. With DLP+, you don't care about the sequences of the chromosomes. You only care about the number, right? Because that's what you want for this particular study. Yes. And so what you do here is you dilute your tissue. So you've got the tissue with a bunch of cells. You dilute it so that each drop that comes out of your pipette has a single cell in it. You put each drop into a microfluidic array. It's like a chip. And you can now, in effect, weigh each droplet to figure out how much of each chromosome is there, because each combination is going to give you a unique weight. If I have, like, four 1-kilogram weights and two 5-kilogram weights, and so on and so forth, I can figure out and back-construct how much of each is there. Right. And that's what they used. That's the data set that they used. It was already there. And they're actually using this data set to now create this 22-dimensional space. Because what it does is remove the problem of how to distinguish; everything is sort of uniform. It's like a clean base. Yeah, it's a clean base. It's flat. There's uniform coverage. It's not like, oh, chromosome 1 got lucky. It's kind of like, you know, when you want to figure out how many pages there are in a book, right? The MDA analogy would be: you have a noisy photocopier that sometimes makes a thousand copies of page one, two copies of page two, 5,000 copies of page three. Fine, you'll get to read the book.
But I don't know if the original book had a thousand; I don't know, maybe the author was crazy. But with DLP+, all you're doing is weighing the book and, from that, inferring how many pages it has. Because the only thing you care about is the pages. You don't want to read the book for this particular use case. Understood. You only care about the number of chromosomes that you've got. And so from that discrete data point, now we've constructed this 22-dimensional space. This is the original paper from 2021 in Nature that actually put out that data library. Yes. And this is the data library that they're using. So again, this goes back to what I always like to talk about: science has this compounding effect. Yes. Yeah. And, you know, new discoveries can enable others. So basically what we're saying is there is a library, this DLP+ library, that the Alpha K team was able to use to deal with this problem of amplification bias, the volume of expression. That was not what they were measuring for. They just needed a clean base in order to be able to look at this specific issue as it relates to how cancer is making these selections about the number of chromosome copies as a means to traverse this fitness landscape. Exactly. And now we can finally get into reconstructing the fitness landscape. Right. Because now we can follow single cells. Right. And we can ask, what are they doing? Right. Right. Right. How are they moving about in this fitness landscape? You can look at frequency change to infer individual fitness, and you can look at what the slope is for a particular cell at a certain point, right? And from that, infer, okay, this is actually a really steep slope, because the cell is reproducing way faster. That means it's going up in fitness, right? And so on and so forth. So it's actually very, very cool to think about. The other really cool technique they used was something called Kriging. Have you heard of Kriging? No.
Yeah, I hadn't heard of it either. It's called Gaussian process regression. Effectively, you've got a bunch of points, and you ask, okay, how do I smoothly fit a Gaussian process such that I can interpolate between these different points, okay? So it's not just standard interpolation, where you just add lines between points. You want there to be some kind of flow. This was originally developed because they were looking to infer where the gold was in a South African mine. Of course. Okay? Of course. They drilled boreholes everywhere, and they found the amount of gold in each borehole. And then they figured, okay, a gold vein is going to be kind of a Gaussian, sort of smooth thing. So if I have data here, how do I infer where the gold is in between? Right? And that's what it was originally used for, but now people are using it on all sorts of stuff. It's used in finance a lot, too. But in this case, for cancer research, which I thought was very cool. That is interesting. So now we've got to prove that this map is real, right? We've constructed this map. We understand how they were able to generate this fitness landscape. And now, okay, great, that's cool. Nice diagram. Yeah, nice diagram. Nice, like, 22-dimensional space. Can you actually predict stuff? Yes. Okay. They did in silico validation. So they made agent-based models where they simulate cancer, and they saw that it worked great. But again, that's simulation. Yeah. That's not going to get you a paper in Nature Communications. Okay. Okay. You've got to have some experimental data. Yes. To back it up. So what they did was something called sister passages. What you do is you've got a cancer tissue. Yeah. You split that up into two. You sequence one of them, and you train your model, or you parametrize your model, that way. And then you ask the model, okay, now try and predict where the cancer is going to go. And you follow those second sister cells to see how they would move. And it's tracking. I'm so mad.
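The Kriging idea mentioned above can be sketched in a few lines (a minimal toy of our own, not the paper's implementation): given sparse "borehole" samples of a smooth signal, a Gaussian process with an RBF kernel interpolates smoothly between them rather than drawing straight lines.

```python
import numpy as np

def rbf(a, b, length=1.0):
    """Squared-exponential (RBF) kernel between two sets of 1D points."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)

def krige(x_train, y_train, x_query, length=1.0, noise=1e-6):
    """Simple Kriging (zero-mean Gaussian process regression):
    posterior mean at the query points."""
    K = rbf(x_train, x_train, length) + noise * np.eye(len(x_train))
    k_star = rbf(x_query, x_train, length)
    return k_star @ np.linalg.solve(K, y_train)

# "Boreholes": sparse samples of a smooth underlying signal.
x_obs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y_obs = np.sin(x_obs)

# Infer the signal at unobserved locations in between.
x_new = np.array([0.5, 1.5, 2.5, 3.5])
y_new = krige(x_obs, y_obs, x_new)
print(np.round(y_new, 3))  # close to sin(x_new), with a smooth "flow"
```

The same machinery, in many more dimensions, is what lets the paper fill in the fitness landscape between the karyotypes it actually observed.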
Right, that's kind of nice. This is really good. The point is, we created a map, a 22-dimensional map of a fitness landscape, using that library that was created in that previous Nature paper. And I think it was 2021. Yeah. And the idea is that this used to be a theoretical concept, and it's actually literal. Yeah. It translates literally. We created our Google Maps of cancer, and we took one cancer sample, split it into two, generated the map based off one, and then in the real world the other one traversed what the map said it would traverse. Yep. Meaning that that fitness landscape is literal. Yeah, it's literal. Yeah. Like, there is a fitness landscape, and these cells are following it. Right. In chemotherapy, this works out really well. So cisplatin, which is a type of chemotherapy: one could ask, well, you've made a fitness landscape, now suppose I introduce a stress like cisplatin. That's going to completely change the fitness landscape, right? Right. It's going to be dynamic now, because certain hills are going to become valleys, and so on and so forth. Yes. Can you predict there? Yes, they could. They could show that there was an increased variance, and this actually supports something called punctuated equilibrium, which is a model of evolution that is not gradualism. In gradualism you get changes very gradually over time, right? In punctuated equilibrium, there is a stress, and that stress leads to bifurcations in your tree of life. So there's a certain stress, and then all of a sudden you get different species and cell lines. So basically almost instantaneously? Not instantaneously in a literal sense, but on a short time scale. Short time scale. And this actually shows that cancer is very much a punctuated equilibrium process, right? The therapy actually sharpens the selection gradients. So certain fitness hills become even more steep.
Some totally get squashed, and so on and so forth. Which tracks conceptually. If you were to think about why cancer is so hard to defeat, how fast it moves, it would make sense that this punctuated equilibrium is what's happening. And it's cool to see that you can actually get measurements that point to that being the case. Exactly. And one thing one can think about is: why would cancer cells do this, where you sometimes have this whole-genome doubling, where all at once all of your chromosomes have four copies instead of two? Why would you do this? Well, what this does is create a flat fitness landscape. And so you have something called survival of the flattest. Imagine that before, if there's a really thin peak, your organisms will want to stay near that really thin peak, because that thin peak is really, really tall. But with cancer, because you're throwing so much at them, you kind of want to be on a flat hill rather than a thin peak. Because on a thin peak, if you go off by a little bit, it's like El Cap in Yosemite: you're just going to fall. But if you've got a nice Kilimanjaro-like hill, then even if you stray a lot in your landscape, you're still going to be doing just fine in terms of fitness. And you can literally see that trade-off: the tumor shifts from a sharp peak (high fitness, low tolerance) to flat peaks. And the clinical relevance is that if you've got instability, and that instability pushes the population past some kind of error threshold, then the population is going to collapse onto flatter peaks and not stay on that thinner peak. So then you're going to have a harder time, maybe, because at the flatter peak there's a lot of different stuff that cancer can explore and still be just fine.
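The survival-of-the-flattest trade-off can be made concrete with a toy one-dimensional landscape: a tall thin peak versus a shorter broad one, with mutation modeled as a Gaussian spread around each peak. All numbers here are illustrative, not from the study:

```python
import numpy as np

# toy 1-D genotype axis with a tall thin peak and a shorter broad peak
g = np.linspace(-5, 5, 201)
sharp = 2.0 * np.exp(-(g + 2) ** 2 / 0.02)  # tall, thin peak at g = -2
flat = 1.2 * np.exp(-(g - 2) ** 2 / 2.0)    # shorter, broad peak at g = +2
fitness = sharp + flat

def mean_fitness(center, mut_sd):
    # population spread around a peak by mutation; return its average fitness
    p = np.exp(-(g - center) ** 2 / (2 * mut_sd ** 2))
    p /= p.sum()
    return float(p @ fitness)

# low mutational spread: the thin peak wins on sheer height
low_sharp, low_flat = mean_fitness(-2, 0.05), mean_fitness(2, 0.05)
# high mutational spread: the broad peak wins ("survival of the flattest")
hi_sharp, hi_flat = mean_fitness(-2, 0.8), mean_fitness(2, 0.8)
```

With a small mutational spread the thin peak's height dominates; crank the spread up past the thin peak's width and the Kilimanjaro-like hill yields higher average fitness, which is the sense in which an unstable tumor population collapses onto flatter peaks.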
The point being, you know, you don't want to unnecessarily give the cancer an evolutionary advantage by how you're targeting your therapies and so on. Yeah, exactly. And so, if you can say, hey, we know that if we attack this in a particular way, it's actually going to spread the evolutionary optionality that the cancer cells have across a wider surface area, which is then going to make it subsequently more difficult to deal with. It's an interesting way to think about it: how you try to treat does have an impact on how it evolves, and there are actually right and wrong choices to minimize the surface area of risk in how you treat. Exactly. And the other thing you can do with that kind of rugged landscape is, suppose you've got a single peak, and now there's a neighboring peak nearby. Traditional theory says that the population can't get to the neighboring peak, because it would have to go through the valley. But now with this Alpha K, we can actually... It's kind of like providing a navigational chart to steer and predict how it goes from one to the next. How it could go from one peak to the other without going through the valley. Right, because it's basically this map. Yeah, yeah. And so we know all the routes between two different points on said map. That's so good. And it's really cool. I mean, it just shows this very quantitative, physics-based approach: now I'm going up a fitness landscape. I can sequence, well, not sequence, but I can tell how much of a particular chromosome I have in certain cells, and from that create this map of karyotypes, right? And ask, okay, how much of cell type one do I need? It can depend on my past; it can depend on all the other context that I have, because this model can build that in. It's very cool. And the computational cost is, you know, right now it's 22 dimensions.
So it is pretty expensive, but it's better than doing the sister-passage experiment, which is what we used to do, where you take a little bit of the cancer tissue and you see how it evolves in a test tube. I mean, here you can just plug it into a computer. Right, right. Which, you know, one, while it may be expensive now, this is the first formulation of the concept; you can in theory create other flavors that trade off different levels of dimensional precision. But two, it's all simulation, so you don't have the physical lab-related costs associated with it. This is very, very good stuff. Yeah, I thought it was very cool. And this one is doing it for aneuploidy, which is about the number of chromosomes. But now you can think about creating landscapes for what's within a chromosome: how much of gene one do I have? How much of gene two? In a transcriptome, how much mRNA for this particular gene do I have? It's really taking that fitness landscape and saying, no, we can actually just treat it as real. Right. And then you can apply that to more than just aneuploidy for cancer. I mean, it can be applied across the board. Right, and I can now forecast what the cancer would do if I give it this treatment. This is very, very good. I mean, obviously, with cancer treatment, we've talked about a lot of different types of cancer-related research stories, and there are so many challenges involved. It's like a multi-combinatorial mass of stuff.
Yeah. But one of the things that's been so interesting is that our tools to look at, measure, and understand what is literally happening have been accelerating. Yeah. Because right now we kind of have more of a shotgun approach for therapeutics rather than a sniper approach. Obviously there are some cases that are getting closer to a sniper approach, but chemo, for example, is very destructive to all the cells, which we actually discussed in a previous episode, why that's the case. Yep. So stuff like this is really, really impactful for oncology generally, which is fascinating. I mean, my mom works in clinical trials, and historically they've looked at a whole variety of things. Obviously a lot of big pharmaceutical companies are trying to go after big-ticket things that have huge market value, and oncology is a huge one. Huge one. So, really great stories, and a nice spectrum that we covered. We started with botany, one of our first plant stories, about alkaloid biosynthesis, and that small factory we're now able to replicate in yeast. That was in New Phytologist, from the University of York in the UK. We had a great rundown with a variety of different pieces. The quantum entanglement story was fantastic.
That one was at the University of Basel and the Sorbonne. Very, very interesting. Again, measurement precision, talking about my favorite, the Heisenberg uncertainty principle. We now can sort of... not break it. No, we're not breaking anything; it's not magic. Yeah, but it's clever creativity in how to work around those limitations. And we ended with this Alpha K local adaptive mapping, which, again, I now have a way to grok, because we've talked about concepts like gradient descent in the past, like with a lot of the foundation and frontier models. That's how those things work: they're traversing these high-dimensional spaces. So, all very, very good stuff. The only thing that I think we forgot this episode was what people should comment. Oh, yeah. We didn't come up with a good one yet, so we're going to do this on the fly. How about alternative full forms of the acronym Alpha K? Which is not alpha like the alpha particle or the Greek letter; it's A-L-F-A-K. Yes. Yes. That's a good one. So let's see what people come up with, because we do have some comedians. We did see a lot of good responses to LBF from last episode. The freedom one was the best. Pound-force. Yeah, pound-force is the correct one, but "pounds per freedom," I think, was good. And yes, the imperial system versus the metric system has its issues. My name is Lester Nare, joined as always by my co-host and our resident PhD and all-around science genius, Krishna Chowdhury. This is From First Principles.