Relentless

The Quest to Cure Alzheimer's | Sacha Schermerhorn, Babylon Bio

84 min
Feb 14, 2026
Summary

Sacha Schermerhorn, founder and CEO of Babylon Bio, discusses his company's mission to cure Alzheimer's disease through novel approaches targeting tau pathology and neuroinflammation. The episode explores the challenges of Alzheimer's drug development, the failures of the amyloid hypothesis, and how Babylon is using AI and unconventional strategies to tackle this complex disease.

Insights
  • The amyloid cascade hypothesis may be targeting the wrong phase - amyloid deposits 20-30 years before symptoms, making it the cause but not necessarily the best therapeutic target
  • Building a portfolio approach is essential for moonshot problems like Alzheimer's - single asset companies face existential risk with each program failure
  • Hiring missionaries over mercenaries is critical for long-term difficult problems - the best talent is motivated by the challenge, not compensation
  • AI and LLMs can help connect disparate scientific findings across fields to identify new therapeutic targets and drug repurposing opportunities
  • Maintaining direct patient contact through volunteering at memory clinics helps teams internalize urgency and stay connected to the human impact of their work
Trends
  • Shift from amyloid-focused to tau-targeted Alzheimer's therapies
  • AI-driven drug discovery and repurposing using large language models
  • Portfolio approach to biotech investing and drug development
  • Increased focus on neuroinflammation as a therapeutic target
  • Move toward subcutaneous drug delivery to improve patient compliance
  • Integration of epidemiological data mining for drug repurposing
  • Emphasis on hiring for mission alignment over compensation in biotech
  • Use of biomarkers like p-tau217 for earlier Alzheimer's detection
  • Commercial intermediates strategy to self-fund moonshot research
  • Fine-tuning AI models on clinical trial data for drug development
Companies
Babylon Bio
Schermerhorn's company developing novel approaches to cure Alzheimer's disease
OpenAI
Partnering with Babylon Bio to develop AI models for scientific hypothesis generation
Biogen
Developed Aducanumab, the controversial Alzheimer's drug that cleared amyloid but showed no cognitive benefit
Eli Lilly
Developing Donanemab (Kisunla) and tau-targeting therapies for Alzheimer's treatment
Pfizer
Acquired Biohaven for $11.6 billion for their migraine drug Nurtec
Biohaven
Biotech company that developed migraine drug before being acquired by Pfizer
Ionis
Partnering with Biogen on BIIB080, an intrathecal tau antisense oligonucleotide
Long Journey
First investor in Babylon Bio who appreciated Schermerhorn's personal financial commitment
New Limit
Company where Jacob Kimmel works, influenced Schermerhorn's thinking on AI applications
SpaceX
Example of how individual contributors may have limited impact at hyperscale companies
People
Sacha Schermerhorn
Founder and CEO of Babylon Bio, leading efforts to cure Alzheimer's disease
Dr. Alois Alzheimer
Historical figure who first identified Alzheimer's pathology in 1906 but was initially ignored
Michael Merzenich
Godfather of cortical plasticity who mentored young Schermerhorn in neuroscience research
John Moore
Lead inventor of migraine drug Nurtec, recruited by Schermerhorn to join Babylon Bio
Auguste Deter
First Alzheimer's patient studied by Dr. Alzheimer in the early 1900s
Jacob Kimmel
New Limit researcher who influenced Schermerhorn's thinking on AI drug discovery applications
Don R. Swanson
Information scientist who developed 'Swanson Linking' for discovering drug connections
Simon LeVay
Researcher who found neurological differences related to sexuality but faced scientific ostracism
Vlad Coric
CEO and founder of Biohaven, successful biotech entrepreneur in neuroscience
Laura
Babylon Bio's founding scientist and first hire who set high standards for company culture
Quotes
"There are a lot of clues about Alzheimer's. I think there's enough clues for us to understand what is actually causing the cognitive impairment."
Sacha Schermerhorn
"If you're developing a drug, you should probably not go after the cause unless you want to run a 30 year trial which like, it already costs quite a pretty penny anyway."
Sacha Schermerhorn
"My goal is to literally get like the avengers of drug hunting, throw them in a room and just throw them at this incredibly meaty problem."
Sacha Schermerhorn
"The best people in the world really just love what they do so much that they fail retirement multiple times."
Sacha Schermerhorn
"You just need to be wrong long enough to one day be right."
Sacha Schermerhorn
Full Transcript
2 Speakers
Speaker A

There are a lot of clues about Alzheimer's. I think there's enough clues for us to understand what is actually causing the cognitive impairment. Often these clues come from very disparate fields. The shingles vaccine. There's a group in Wales where if you were born after this cutoff date, you were eligible for the vaccine, and if you were born just before, you were not. They basically stratified those patients and followed them over time. And they found that people that had the shingles vaccine were 20% less likely to develop Alzheimer's or all-cause dementia within seven years. Why? We have no idea. And that's because, again, these are very distinct fields of science that don't interface with each other directly. So with these LLMs kind of taking off, can we basically get very smart scientists to train these models to approximate some heuristic that they use, and then just deploy these on the knowledge graph of science to try and connect the dots? That seems really tractable. And we have more than enough clues to figure out the perfect target for something like Alzheimer's or other diseases.

0:00
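The Welsh study the speaker describes is a natural experiment: a birth-date eligibility cutoff splits otherwise-similar people into vaccinated and unvaccinated groups, and you compare dementia incidence across the cutoff. A minimal sketch of that comparison, with all numbers invented for illustration (the ~20% risk reduction is simply built into the toy model):

```python
import random

random.seed(0)

CUTOFF = 1933  # hypothetical eligibility cutoff: born on/after this year qualifies

def simulate_person(birth_year):
    """Toy model: 20% baseline 7-year dementia risk, reduced ~20% if vaccine-eligible."""
    eligible = birth_year >= CUTOFF
    risk = 0.20 * (0.8 if eligible else 1.0)
    return eligible, random.random() < risk

# Only look at people born immediately around the cutoff, as the study did.
cohort = [simulate_person(random.choice([CUTOFF - 1, CUTOFF])) for _ in range(200_000)]

def incidence(group):
    return sum(dementia for _, dementia in group) / len(group)

vaccinated = [p for p in cohort if p[0]]
unvaccinated = [p for p in cohort if not p[0]]

rr = incidence(vaccinated) / incidence(unvaccinated)
print(f"relative risk near the cutoff: {rr:.2f}")  # ~0.80 in this toy model
```

The point of the cutoff design is that birth date, unlike choosing to get vaccinated, is as good as random near the boundary, which is why the stratification is credible.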

Speaker B

Today I have the pleasure of sitting down with Sacha Schermerhorn, and he is the founder and CEO of Babylon Bio. They are currently trying to find a cure for Alzheimer's. I think over the last many decades, tens of thousands of people have been basically working on a cure for Alzheimer's. Billions and billions of dollars have been spent and almost no progress has been made. What about that got you excited?

0:52

Speaker A

So I would actually disagree with the premise that no progress has been made. I think a lot of progress has been made, but by showing us things that don't work more than things that do. So there's no existence proof that this is attainable, which to me is extremely exciting. And I think, yeah, the past several decades of failure have given the perception of intractability, and if you were to take the second-order implication of that intractability, I think it's super interesting, because it actually makes it a more tractable problem to go after if everyone else thinks it's a graveyard. That to me, at a meta level, is obviously fertile ground for actual innovation. Yeah, I mean, I think we're kind of still looking at this disease through the prism of Dr. Alzheimer from 1906, which is definitely a good thread we should talk about because he had a crazy story. But yeah, we are kind of focused on two pathologies. We have no idea how they're related. But I think we've had enough clues in the clinic and on the diagnostic and prognostic side to understand a little more of the picture that, you know, maybe the field hasn't updated their OS on.

1:13

Speaker B

Do you want to just go through some of the approaches that have been tried and why they didn't work?

2:21

Speaker A

Yeah. All right, let me actually start with the Dr. Alzheimer thing. It's just crazy, and it's a very good backdrop. So Dr. Alzheimer was this dude from way back when. He was born in the late 19th century and got super, super interested. Well, first of all, his medical degree and his research during it were on earwax. So, you know, he started in the world of earwax and was kind of lost, kind of a roaming intellectual. He hadn't set his sights on anything yet. But when he was doing, I think it was his residency, he got introduced to Auguste Deter, who was a patient who was in her late 50s, I believe, and had this disorientation, discombobulation, general cognitive impairment. People thought she was crazy. People didn't know what was going on. Maybe she had a mind virus. And anyway, he spent a long time with her, characterized her very well clinically. When she eventually passed, he ended up doing histopathology on her brain. He used basically some variant of a silver stain, which was a way to visualize these proteins in her brain. And he was punched in the face by these two extremely large structures that shouldn't have been there: what he called plaques at the time, and neurofibrillary tangles inside of certain neurons. And obviously the right conclusion from that was, if this is not in a healthy brain and is present in the brain of a patient with what was then to be called Alzheimer's, these are clearly causing the symptoms that she had. Anyway, he ended up giving a talk, I think it was in Frankfurt, and it was a big seminar in front of all these people. And he presented the first case of what was then to become Alzheimer's. He was expecting a standing ovation. Absolute crickets. People walked out in the middle of it. They just took their bathroom break.
They didn't care at all. They came back to the next presentation, to a completely packed house, which was on chronic masturbation. So that about summarizes the level of respect that he had. He kind of died, you know, a bit of a, I don't want to call it a scientific pariah, but he didn't get the credit.

2:26

Speaker B

He deserved. Like any good scientist, he did some really good research, and then no one cared in his time.

4:35

Speaker A

Yes, basically. It's just such an insane story. They kicked him off stage, basically no follow-on questions, to hear this fabled talk on chronic masturbation. So anyway, fast forward, and I think the field really, rightfully, was saying, well, these things are not present in a healthy brain, let's get rid of them. And that was many decades of research. And yeah, the craziest thing is we eventually had these drugs that were able to reduce amyloid in the brain, the amyloid plaques. The plaques are made up of a thing called beta amyloid, and that was supposed to be this incredibly toxic thing. And yet you can basically deplete this in the brain of an Alzheimer's patient and have relatively de minimis efficacy. Aducanumab famously reduced amyloid PET, I can't remember, I think it was 76%, and had no impact on cognition whatsoever.

4:41

Speaker B

Isn't the amyloid cascade basically what people have spent, I don't know, roughly half of Alzheimer's research and dollars on?

5:35

Speaker A

That seems like a fair approximation, 100%. I mean, this has been the dogma over many decades. And, you know, my kind of hot take within the Alzheimer's space is this: people have basically seen that, hey, we reduce amyloid and it has relatively trivial efficacy in patients. Therefore it seems reasonable to conclude that we've actually debunked the hypothesis. But I think that's wrong. A lot of the biomarker development over the past 15 years even has revealed that amyloid starts depositing in the brain 20 to possibly 30 years before you develop symptoms. And so in that preclinical phase, clearly amyloid is causative, and it's necessary, but it may not be sufficient. And eventually, after 20 to 30 years, you transition into the clinical phase. And so if you were to go into a burning house, it doesn't matter what started the fire, right? It doesn't matter if it was a match or a toaster or whatever. It's on fire, so you should put it out, right? And you should figure out clever ways of doing so. And I think that's where the field maybe has lagged a little bit behind: not updating their priors that the cause is actually not the thing that you need to target. If you're developing a drug, you should probably not go after the cause, unless you want to run a 30-year trial, which, like, already costs quite a pretty penny anyway. So you want to keep them as small as possible.

5:42

Speaker B

So if you're not going after the cause, what are you going after?

7:01

Speaker A

So I think there are many ways to tackle that problem. The thing that we spent a very long time thinking about, and that I personally have thought about quite a bit, was this transition period. How do you basically have this dormant phase for 30 years, and all of a sudden you wake up and you start forgetting where your keys are, and that leads to this cascade? It's just become abundantly clear that phosphorylated tau is definitely a linchpin in that process. And so the things that Dr. Alzheimer conceived of as these neurofibrillary tangles actually turn out to be the number one predictor of the onset of cognitive impairment. Now, they're not the predictor of whether you'll develop Alzheimer's, period. Amyloid is a very good marker to suggest early development. But if you look at the AUROC of all the biomarkers you could imagine, there's a thing called p-tau217, which is a fragment of these tau tangles, that proves to be the most predictive of when you'll develop Alzheimer's, I should say the cognitive impairment phase.

7:05

Speaker B

Yeah. For you yourself, like, why did you kind of make this the thing that you wanted to work on for the next like 25 plus years?

8:09

Speaker A

I think it was a parallel track of these two things. I was very excited on the scientific side. I was a very big reader when I was growing up, and I was kind of a music and art kid, not really into science. And then when I was about 13, I was also a big troublemaker, and I stole a book from my middle school library on neuroplasticity. It was called The Brain That Changes Itself, by Norman Doidge. And I was like, horrible kid. Horrible kid. Yeah. And I was just absolutely engrossed by this thing. I mean, I devoured the entire book, and the top guy that they talked about was this guy, Michael Merzenich, who was like the godfather of cortical plasticity. He was the first one to really show that your brain can rewire itself in these fundamental ways. And so I emailed him. I found out he was in San Francisco, where I grew up. I emailed him, no response, and just bombarded him, basically, until he finally met with me. We stayed in close touch, and then when I was about 15, I started interning with him, working, as it happened, on Alzheimer's. And around the same time, my grandmother had gotten diagnosed with Alzheimer's. And I think, as I progressed on the research side and went to college, was doing a lot of research there, and seeing what I conceived of as these breakthroughs in the lab, and seeing the incongruity between that and my grandmother having not a single drug. This was pre-lecanemab, pre-aducanumab, all these other drugs that came out. And yeah, that just crystallized my desire to eventually do something. I felt angry, to say the least. Let down.

8:17

Speaker B

What gave you the conviction that this was a solvable problem? If people have been working on it for decades and it hasn't been solved, that's, you know, a very big hurdle.

9:47

Speaker A

I spent a lot of time asking pretty fundamental questions of these very, very top people, who I respect tremendously, by the way. They set the stage for a lot of really important research. I mean, even in the first seven months of Babylon, I probably met with 500 people. I was flying all over the world, cold emailing every single person. I was reading maybe 10, 15 papers a day, emailing all the authors of the papers I thought were good, and just going down that rabbit hole. And I would just ask these very basic questions, you know, why do you think tau fibrils are toxic? No answer. You know, no answer. And again, I don't blame them. There was not enough information for us to be able to have these answers. But it just felt like there were enough of these axioms of this disease that we just had not yet characterized. And that gave me a lot of confidence. That was part of the reason I got into neuroscience to begin with: the fundamental pillars of this entire field aren't even there. Yeah, it's fertile ground for innovation or discovery. That's super exciting.

9:57

Speaker B

And on that, like, when you were kind of coming up to speed on how everything worked, what was your process for doing that?

10:51

Speaker A

Yeah, I mean, look, I think a part of it is just constantly challenging your priors. And one of the benefits, I suppose, a luxury that I was afforded, was that I had not been in the field for 20, 30, 40, whatever years. I was this young, very, very hungry-to-learn kind of guy. And I was always asking questions, always willing to update my priors. I'd say there were certain points where I was almost myopically focused on very specific biology that I thought was relevant. And then, you know, I'd go deep enough down the rabbit hole, realize it's not it, and zoom back out. And there was no ego in that process. It was kind of like, obviously I'm going to be wrong, definitionally. Even the term Alzheimer's expert, for me, is almost oxymoronic. If there were real experts in the Alzheimer's space, I think we'd have a lot more answers. So yeah, just realizing that no one was an expert, and that I was in a unique position to rewrite how I think about this disease from the ground up, was a pretty compelling prospect.

10:58

Speaker B

You've talked a number of times about updating your priors. What kind of paths have you gone down, just over the past couple years, where you thought that there might be some gold at the end of the tunnel, and then realized that's just the wrong direction and turned around?

12:06

Speaker A

Okay, so I got really obsessed with axon degeneration for a while, thinking that that was the cause of Alzheimer's, specifically the cognitive impairment due to Alzheimer's. And that was a very deep rabbit hole. I met all the people who published the seminal papers there, and I still think it's incredibly compelling. But sometimes you'll hit a dead end insofar as it relates to the druggability of a pathway. The work itself was very compelling: a lot of evidence showing that the rate of cognitive decline, meaning the slope of your decline once you actually become symptomatic, varies significantly as a function of what are called white matter hyperintensities in these long-range tracts in the brain. And I went very, very deep, and I just realized the druggability of that pathway was really hard. There were no drugs that people had developed to actually intervene in that pathway, and doing it de novo would have been like a 10-year academic project, let alone then transitioning into something that could approximate a drug. So yeah, there were a lot of these. I mean, honestly, our first program for Alzheimer's, I still think the biology is really compelling. It was a completely novel target that no one had ever tried drugging, and we got really excited about some of the biology there. In the end, we knew the biological risk was like the highest percentile you could imagine, but the druggability ended up being in the first percentile. And that was just a really bad quadrant for us to be in. And so we pulled the plug. But I still think that's a super exciting protein, just a bad target for Alzheimer's.

12:23

Speaker B

There's been a huge amount of capital deployed into trying to solve Alzheimer's and it hasn't happened. How did you kind of structure the way that Babylon operates so that you can actually get to a point where you're able to solve it?

13:57

Speaker A

I think the first program gave me scar tissue: Alzheimer's is such a hard thing that any individual program cannot be existential for the company. If you're a single-asset company trying to go after Alzheimer's, well, definitionally a moonshot is something where you're almost certainly going to fail, right? And it had me thinking a lot about ways to hedge against that, because you don't want to scale down the ambitions of your goal, obviously, just in favor of increasing the PoS, the probability of success. But at the same time, you need to be realistic that no one's going to just hand you a blank check. I mean, I'm not Elon Musk. I can't go raise tens of billions of dollars tomorrow. So you have to get creative around that problem. And it had me going very deep down the rabbit hole of portfolio theory and ways to hedge against this at the portfolio level. And yeah, more updates to come in the future there. But I think just financializing the process of self-financing Alzheimer's moonshots, that's something we've tried to be super thoughtful about. And yeah, the goal is just to amortize the risk of those moonshots, basically, and come up with clever ways to do so.

14:10
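The portfolio logic here is easy to make concrete: if each moonshot program independently succeeds with probability p, running n of them in parallel changes the odds of at least one win dramatically. A minimal sketch, with hypothetical probabilities (these are not Babylon's numbers):

```python
def p_at_least_one_success(p_single: float, n_programs: int) -> float:
    """Probability that at least one of n independent programs succeeds."""
    return 1 - (1 - p_single) ** n_programs

# Hypothetical: each program has a 10% chance of working.
for n in (1, 3, 5, 10):
    print(f"{n} programs -> {p_at_least_one_success(0.10, n):.0%} chance of a win")
# 1 -> 10%, 3 -> 27%, 5 -> 41%, 10 -> 65%
```

This is why a single-asset company lives or dies on one coin flip, while a portfolio amortizes that risk, at the cost of having to fund all n programs.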

Speaker B

Yeah. Are you able to go into any of those?

15:19

Speaker A

All right, I mean, I think at the highest level, there are a lot of opportunities out there that are not, per se, venture-exit size. And when I say opportunities, in the context of pharma land, I'm talking about drugs. There are a lot of drugs sitting on the shelves. Roivant obviously famously spearheaded this, where they were like: there are these multi-billion-dollar blockbusters, but you have to de-risk them significantly through several successive stages to get them to the inflection point that warrants that acquisition size. But I think there's actually an even larger number of these assets where it's not going to be a multi-billion-dollar exit, but low nine figures for sure. Is that something that I think people should dedicate their lives to doing? Absolutely not. Or maybe, if that's your objective function. But for us, the thought experiment was basically: a lot of those drugs, if the proceeds could go straight back onto the balance sheet instead of getting circulated back up to the investors, sorry, to the Babylon investors, then that's an amazing way for us to, again, financialize this process and self-fund, so that by the time we have a drug on the market for Alzheimer's, we were able to take it all the way through, end to end, without needing to partner up at the very last stage. Right? Like, how shitty would it be to run a marathon, and you're three feet away from the finish line, and then you have to hold hands with the second-place runner who was 10 minutes behind you? That would just feel super defeating. And so we don't want to give away 50% of our drug or whatever at the very finish line just because we ran out of money or weren't able to take it through or what have you.

15:21

Speaker B

The way I think about it is, it's almost like a complex way of creating your own Google Search, where you have this huge cash cow and you're able to siphon all the cash from it into the research angle.

16:52

Speaker A

Sure, yeah. Commercial intermediates are things that I think people, founders specifically, who are thinking on a long time horizon should be very thoughtful about.

17:01

Speaker B

I remember Viagra at one point was like a heart medication, and then someone realized that there was another application for it, and so they repurposed it. And now it's worth, you know, billions of dollars a year, I think. How many drugs like that are out there, where the research has already been done and you can just repurpose them for another use?

17:08

Speaker A

For another use. You know, it's not as simple as the mechanism of action carrying over. Most molecules have a single target, and that target is hopefully the thing that elicits the salubrious effect if you hit it. And so the idea is that that same target, for instance the one sildenafil, which is Viagra, hits, I think it's a phosphodiesterase 5 inhibitor, that target alone is actually relevant for Alzheimer's, which is super interesting. Totally makes sense, because it's proliferative in nature. It actually leads to cell outgrowth and neurite outgrowth. And these are very good things in the context of Alzheimer's, where you're getting this degeneration and you actually want to stimulate regrowth. And so there was actually a study that, well, okay, to close that point real quick: it does not mean that the PK, the pharmacology of the drug, carries over. That may be totally different. Maybe it doesn't even get into the brain. So whereas it could conceivably be efficacious by targeting the same thing, it's not going to actually get into the brain. So I guess that's my non-answer, my avoidance of your question. I don't know what the exact number really is, or what my estimate of it would be. But I have to imagine there are probably high hundreds of those opportunities out there where it would actually be a very good drug for the other thing. But probably the supermajority of those are just off-patent, and there's no market there for you to actually advance them. But I will say,
so in the context of sildenafil, which is Viagra, I think there's a very interesting story. There was a study that basically did look through electronic health records, and the goal was: look through the prism of epidemiology, see if you can see these trends, and then use that as a repurposing angle. And they found that people who took sildenafil were 69% less likely to develop Alzheimer's, it was all-cause dementia, than people who did not. And, I mean, the number alone is hilarious, that Viagra reduces your risk by that number. But the overall conclusion was that, okay, well, Viagra should be repurposed for Alzheimer's. Follow-on studies were not able to reproduce it. There's probably some bias, in terms of people who take Viagra needing to have healthy hearts. Healthy hearts, probably better. You know, anti-hypertensives generally are also good for Alzheimer's. So long term, that's probably what happened there. But still, I just think there are a lot of these provocative stories out there, and there's an AI element to that as well we can get into if you're interested.

17:25

Speaker B

Yeah, I'm, I'm down.

20:02

Speaker A

Okay. Yeah. So I mean, there was an information scientist called Don R. Swanson who did this thing called Swanson Linking. His whole thing was basically that there are a lot of medicines out there that can be discovered just based on information we already have, not new information that we need. And I'm totally in on that concept. In the graph theory formalism of it, you don't need to add a new node to the network to unlock new biology that you can drug. There are probably a lot of medicines out there that can come from just drawing edges between preexisting nodes. And so he took that to the max. And this was at a time when information gathering, scraping, all these things were super analog, so kudos to him. There were a few examples, like magnesium and migraine. But the big one that he did was for a thing called Raynaud's syndrome, where he basically saw that Raynaud's syndrome was linked to blood viscosity, and that blood viscosity was also linked to fish oil. And so he therefore posited that fish oil would be a good intervention to intercept or remediate the Raynaud's syndrome blood viscosity issue. And that was a big paper that I believe did ultimately prove to be efficacious. So I think there are a lot of these, and I think LLMs have unlocked a new ability to actually do that at high throughput. It's definitely something we've been exploring. We'll see if it bears any fruit. But I'm very bullish on that general concept of: A is connected to B, B is connected to C, therefore A is probably connected to C. Yeah.

20:04
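The A-B-C pattern described above can be sketched as a two-hop search over a term co-occurrence graph: find terms C that share an intermediate B with A but never co-occur with A directly. A toy sketch on an invented mini-corpus (the paper term sets are made up to mirror the Raynaud's example):

```python
from collections import defaultdict

# Toy "literature": each entry is the set of terms one paper mentions.
papers = [
    {"raynauds", "blood viscosity"},
    {"blood viscosity", "fish oil"},
    {"migraine", "magnesium"},
    {"raynauds", "vasoconstriction"},
    {"vasoconstriction", "fish oil"},
]

# Build co-occurrence edges between terms.
neighbors = defaultdict(set)
for terms in papers:
    for t in terms:
        neighbors[t] |= terms - {t}

def swanson_candidates(a: str) -> set[str]:
    """Terms two hops from `a` that never co-occur with `a` directly."""
    two_hop = set()
    for b in neighbors[a]:
        two_hop |= neighbors[b]
    return two_hop - neighbors[a] - {a}

print(swanson_candidates("raynauds"))  # {'fish oil'}
```

Real literature-based discovery systems add ranking and filtering on top of this, but the core move is exactly this: propose an A-C edge because A and C share intermediate nodes, without any new experiment.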

Speaker B

Are there any other neurodevelopmental or neurodegenerative diseases that could be cured in the process of trying to solve Alzheimer's?

21:44

Speaker A

Oh, interesting. If you believe that phospho-tau and tau fibrillization are causative for the cognitive impairment in Alzheimer's, which I certainly do, then absolutely. There are other tauopathies where that's the predominant pathology. Because part of the problem with Alzheimer's is, yes, it's tau pathology, but it's also amyloid pathology, it's also neuroinflammation. There's a litany of different things going wrong in the brain, as you could imagine. And so it's much harder to isolate the thing that is very much causative. On the other hand, you have frontotemporal dementia, you have Pick's disease, PSP, these other things which are called tauopathies, where the predominant pathology in all of them is tau fibrillization. So yes, it's definitely conceivable that, depending on how you target tau, it could be efficacious in other diseases that are also rare and can get accelerated approval and things like this. And there are definitely companies out there doing that. But yeah, compared to something like Alzheimer's, if you have a rare disease where you have a hundred percent chance of developing the pathology and going into this neurodegeneration, you have a much higher tolerance for toxicity. And a lot of these drugs are intrathecal, so they have to be injected into your spine, basically.

21:52

Speaker B

Which I imagine is a high barrier to doing it, for sure.

23:10

Speaker A

And so. But people are trying this directly in Alzheimer's. There's a drug called BIIB080 from Biogen and Ionis, and it's a tau ASO, an antisense oligonucleotide, and that's intrathecally delivered. And so, I mean, I don't know. Imagine giving your grandmother a spinal injection. It just...

23:14

Speaker B

You have to do this like multiple times.

23:30

Speaker A

Multiple times, yeah. Dosing will vary. Eli Lilly has a tau siRNA as well; similar thing, just intrathecal. And I think these things will be really important proofs of concept, an existence proof that you can actually really stop progression of Alzheimer's. Because I do think we'll see the best efficacy so far. My prediction for this year is that BIIB080 is going to be the most efficacious drug for Alzheimer's yet. The readout is slated to be in May of this year, but I have a strong suspicion it'll be more like Q4, just because recruitment's really hard for things like that. People drop out because, again, they don't want to keep coming back and getting a spinal injection. But I think it'll be, maybe not an amazing drug, but it'll definitely be an amazing proof of concept.

23:31

Speaker B

And for the drugs that have shown to be somewhat efficacious for Alzheimer's, what's been the process for actually developing those?

24:16

Speaker A

Okay, so rightfully so, people, as kind of mentioned previously, were going after this beta amyloid, because that's the most present or salient pathology in the brain of an Alzheimer's patient. And monoclonal antibodies are a good modality because they're highly specific, and if they're humanized, then there's no foreign agent that you're introducing into your body, so there's minimal likelihood of getting an immune reaction, things like this. And the benefit of these antibodies is that they get to where they need to go very efficiently. Well, more on that in a second. But they are highly specific and highly efficacious once they engage the protein. The problem is that only about 0.1% of the antibody you put into your body will actually get into your brain, because these are massive entities that have to be shuttled across into the brain. And the first generation of these... well, the Alzheimer's field had memantine in 2003, which was basically a symptomatic treatment. Nineteen years later you had aducanumab. So it was nothing for 19 years, and then aducanumab, which was considered a breakthrough. Biogen famously, and later infamously, put this on the market despite it not showing any cognitive improvement in these patients, and the FDA reversed its own advisory committee: the ADCOM had, I think it was 14 people on the panel, zero of them approved it, all of them rejected it, but it was approved anyway. There's a whole other conspiracy around that; we'll save that, people can Google it if they're interested. But yeah, it did an amazing job at removing these plaques, and absolutely no efficacy. And the later generations got better and better: lecanemab, which is now marketed as Leqembi, and then donanemab, which is now marketed as Kisunla.
And the latest one that I'm most excited about is a thing called trontinemab, which actually hijacks a shuttle, the transferrin receptor, in your brain. So you have the blood-brain barrier, which is protecting things from coming in, which makes sense: you don't really want anything that goes into your body to go straight into your brain, especially pathogens. But this basically hijacks the shuttle that allows for active transport into the brain, and it significantly increases the amount of antibody that can get in there. And so trontinemab has the best efficacy in terms of amyloid clearance ever, and I'm super excited about that one, especially a subcutaneous formulation. Because, again, going in every other week or every month to get an IV... I hate needles generally, but an IV, going into an infusion clinic, like, when you're 70 years old... I don't know, it just...

24:22

Speaker B

I remember my sister getting shots as a kid, and it was like the worst experience. She was just horribly crying and stuff, and she really, really hated it. She had, like, PTSD from getting any shots.

26:52

Speaker A

Yeah.

27:03

Speaker B

For the first like 18 years of her life. And. And that was. That was a very big barrier.

27:04

Speaker A

I had to get my blood drawn yesterday. I almost passed out. Really, it's my biggest fear in life.

27:09

Speaker B

Yeah, I had, I had a blood draw where they took like 28 vials of blood in. In one go.

27:13

Speaker A

Was this, like, one of these health-and-wellness things?

27:18

Speaker B

It was like a full blood panel. And that was. That was pretty rough. I definitely almost passed out and threw.

27:21

Speaker A

Up, I believe, from that.

27:27

Speaker B

So when you started this company, and you've been at it for about three years now, what were the first few months or the first year like? What did you decide to...

27:29

Speaker A

Excruciating.

27:38

Speaker B

Sure.

27:39

Speaker A

I mean, you know, to be fair to the investors, I was going out pitching, basically telling people: we're going to go cure this undruggable disease, I don't have a Ph.D., and I know there's all these failures, but, like, trust me, bro. And by the way, it's just me in my bedroom in New York, so line up. And it was so hard, so hard to raise money. I mean, literally no one took me seriously. People were laughing at me. People fell asleep during calls; I had three people fall asleep in the middle of calls. People would hang up after five minutes. I mean, it was really, really brutal. I'm glad I can laugh about it now, because it was just hilarious. I just kept chewing glass, and it just became a game at a certain point. And I had so much conviction in what I was doing. I was self-financing all the experiments; it wasn't like I was going to stop any of them, so I was paying for everything personally. You know, I think similar to you, credit score in the toilet. But yeah, it just made a lot of sense, and I wouldn't do it any other way. And I'm super grateful for that, because it probably crystallized the general... I'm incredibly strict on the finances, and we're very lean by most comparisons, I would say. And a lot of that was birthed from the pain of paying for everything with my personal credit card. Like, we'd do a high-throughput screen and they'd be like, hey, please have your CFO send a PO for the $59,000 wire. And I was like, yeah, do you take credit card? Yeah.

27:41

Speaker B

So, did you put a $59,000 wire on a credit card?

29:18

Speaker A

I ended up wiring it directly from my savings. But yeah, it was...

29:19

Speaker B

Did you have a lot of money to start this or. Not really.

29:22

Speaker A

Not really. There was a brief period of software ventures that I did, and one of them became lucrative enough that it gave me the liquidity to do it. But I've mostly been paycheck to paycheck, literally since starting the company, because I put all of that in. That was the pre-seed money of the company.

29:25

Speaker B

What was different about the story that you were telling back then? Because I know you went through a whole bunch of no's and people falling asleep. I imagine you realized that if you wanted to get that money, you'd probably have to tell a slightly different story than the one you were telling. What changed? What shifted?

29:43

Speaker A

It's funny. As someone who likes to pride myself on updating my priors constantly, weirdly, at the time I was relatively stringent about this. I think there were maybe local fluctuations, like, I'd meet with the Boston investors and they'd be like, hey, your science is really cool, but who the hell are you, and why should we give you a penny? And so maybe I spoke a little more formally, or tried to be a little more present in the way I thought was necessary. I'm very glad that I equilibrated and just went back to being me. But yeah, I was pretty strict about not succumbing to that pressure, because I really wanted people who were going to join the mission for the right reasons, and if they were joining because they thought it was a very high-probability thing or whatever, that was not the right partner I wanted to have for the rest of my life. And so thankfully Long Journey came in. They were the first ones to observe the same set of facts and be attracted to them versus repulsed by them. Maybe I won't name names, but one of the people at Long Journey, when I told him about the life-savings thing, was like, that is awesome. He was the first person who was not like, you're an absolute idiot, what are you doing, you know that's going to go to zero. He was someone who actually saw the dedication and appreciated the fact that I was so committed. And so thankfully they came in, and since then, thanks to the efforts of the team, our fundraising has gone much more smoothly. That was the hardest round to ever raise, but I'm super grateful for it, obviously, in retrospect.

30:01

Speaker B

So most people that you meet, at least in the early days, are basically immediately turned off by the idea that this is a super low-probability bet that will require a ridiculous amount of capital to even have a reasonable shot, over a huge amount of time, maybe 10-plus years.

31:35

Speaker A

Yeah.

31:50

Speaker B

How did that kind of enable you to find and attract the right types of individuals to be supportive of the mission?

31:50

Speaker A

That's a very good question. Because a lot of the advice I was getting from the more, you know, Boston-biotech archetypes was: you need to work backwards from compensation, you need to pay as high a percentile as your burn allows, and so on. They gave me all this advice about winning great talent over with the amount of equity and whatever else, and it just did not sit well with me. My goal, again, is to assemble a team of best-in-class people who are missionaries, not mercenaries. And that was also something where I felt constant external pressure to hire the right people. Thankfully, I was uncompromising; I was really insistent on only hiring people who are going to work for the right reasons. I probably won't reveal my proxies for what ends up being very predictive of that, but I've certainly got a very long list of things I'm looking for when I meet someone. And I don't do conventional hiring processes. It's very much: get to know the person over many, many months, see how they fare, and be very sober with them. Like, you're going to work your ass off. You're going to age much faster, probably, than you would otherwise. I'm not going to be able to pay you more than anyone else.

31:56

Speaker B

And I think that's actually a really good test because a lot of, like, especially here in Silicon Valley, a lot of the people that are getting hired that are really, really smart today are basically just getting ridiculously massive compensation packages. And I think it kind of sets the wrong tone for why are you even working here, like working on this problem? Is it for $20 million or is it because you care about the thing?

33:12

Speaker A

I think the fact that you have seen a massive exodus of a lot of really exceptional talent, or this constant velocity of people moving back and forth, is indicative enough that that model is actually not the right way to go about this problem. And so to whatever extent you can filter a priori for those kinds of people, you're in a much better position, at least if you're working on something like this. If we were a pure hedge fund or whatever, maybe it's a different story. But for me, thank God I was very insistent on this, because it weirdly turns out that the best people on the planet just want to work on the hardest problems. There was an early process of me trying to recruit these drug hunters, and when I'd go meet with very talented people and say, hey, I'm going after Alzheimer's, they'd be like, sorry, Alzheimer's? That's impossible. Good luck to you. Or they'd be like, hey, if you want my advice on anything, you need to start paying me, fifteen minutes into the meeting. And I was like, no worries, let's hang up here. So I had to filter through a lot of people, and it took me a very long time. And then one thing kind of flipped for me in the hiring process, where I basically started looking for the secret geniuses behind a lot of the success stories that I respected the most. So one example of that: Biohaven was a company based in New Haven that licensed a drug from BMS called rimegepant, a CGRP antagonist for migraine. And anyway, they got it for, well, I probably shouldn't disclose a number, but a very low amount of money. And then, after a successful phase three and going on the market, they were bought by Pfizer for $11.6 billion.
It's considered one of the great successes in recent history in the neuro space. And so everyone was celebrating the executives who took that forward, and to be fair, Vlad Coric, the CEO and founder of Biohaven, is exceptional, world class. But I was more interested in the geniuses who actually invented that molecule, the medicinal chemists. So I ended up reaching out to everyone on that patent, the composition-of-matter patent for rimegepant. I was most impressed with the lead inventor, John Macor. And again, when I met John for the first time, I basically told him: we're taking the biggest swing you can possibly take. Any individual program has a very low likelihood of success, but my goal is to literally get the Avengers of drug hunting, throw them in a room, and throw them at this incredibly meaty problem. And John was like, sign me up. And again, one of the proxies I look for is the extent to which they'll negotiate, right? John could get paid way more than what he's getting, elsewhere. And we've been lucky enough to have him come out of retirement to be with us full time. And that was a direct byproduct, not of the money, not of the equity, not of anything, but of the passion and the nucleation of talent that we've been lucky enough to have.

33:36

Speaker B

Money is definitely one factor for rapidly figuring out whether someone is more missionary or more mercenary. But what have been the other biggest indicators or signals you've developed over the past three years for figuring out whether someone's a missionary?

36:24

Speaker A

Okay, I'll give one version of this, which is: it's been very predictive if someone is willing to just jam with me on science. It's not a formal hiring process; it's literally just, hey, two people who are super passionate about this space, let's just talk for hours, unscripted. Are they willing to do that? And not only once, but many, many, many times over many months? For me, the right people, at least for Babylon, are the ones who are so passionate about what they do that they'd do this for free. And the thought of getting paid is just like, sure. It's a secondary kind of thing, a byproduct of the...

36:38

Speaker B

Thing. But it's not at all front of mind.

37:19

Speaker A

Absolutely. I mean, it's so funny, because it is almost like the midwit meme, but I've just found that the best people in the world really are just so down to chat science. The money is not what drives them. And there's probably some skew in there, in terms of, like, they have to have some financial stability, etc. But the best people in the world really just love what they do so much that they fail retirement multiple times. They're in their 70s and they still just want to chat science instead of retiring. Yeah, totally. I mean, John, I'm pretty sure, has failed retirement two or three times, or maybe this is only his first time. But Bill, at least, has failed retirement a few times. And these guys are indefatigable. I've been super impressed with the stamina. It inspires me to work harder, seeing how they do the 12-hour days, sending me emails at 4am on a Saturday. These are the kinds of things that, for me, command the ultimate respect. Late 60s, early 70s, still working that hard? They must really love what they do. And those are the kinds of signals I look for, at least in this sort of process.

37:22

Speaker B

Where you're really trying to optimize for these people who are working on the problem purely because they care about the problem: did you have any other approaches at the very beginning where you went in another direction, because you initially thought that was the right mode of operation, and then worked backwards and said, no, this was a mistake, I'm going to go in a different direction?

38:31

Speaker A

I think there is a really tough balancing act, because you may have acute needs that you need to immediately reconcile, and part of that ends up skewing you towards compromising your standards in favor of moving very quickly and plugging the wound, so to speak. And I'd be lying to say there haven't been examples of that. But overall, I think our hit rate is really high. And part of it honestly started with Laura, who was our first hire. She's the founding scientist of the team, an incredible workhorse, and was just so, so dedicated from day zero. And to be fair, I think I'm audacious enough that I'll just have people do the work and see what happens. So the first time I met her, I was still in New York at the time, and I was super impressed. And the next day I said, hey, I'm stuck in New York; I need you to start going to all these different labs that I'm looking at in San Francisco. And by the way, some of them are like two hours outside of San Francisco. And by the way, I don't really have money to pay you. I definitely didn't have money to pay her generally, but I definitely couldn't even pay for the Ubers, because all of my money was going straight to these experiments I was running. And she was like, okay, I guess, sure. And she helped out as a friend for a long time, and then things kind of worked out, and when she finished her postdoc at UCSF, we pulled her in full time. But yeah, that kind of dedication set the stage for everyone else, and definitely held my bar very high insofar as it related to future hires.

38:48

Speaker B

Keeping the DNA right.

40:23

Speaker A

Yeah, keeping the DNA right. And it almost feels platitudinous at this point when people say the first 10 hires will define the rest of your company. We're still in the super early days of Babylon, this isn't even chapter one in my opinion, but it couldn't be more true. And I think culture can only be defined by what you're willing to fire for, in my opinion. People are very scared to do that, but in not doing so they create an adverse selection for future hires, because if people feel like the bar is kind of all over the place, that's not going to attract the best people. So being super, super strict about that, and uncompromising even if you really like the person, has been really important for us, at least.

40:24

Speaker B

So Alzheimer's is something with very long feedback loops, and I think, going into it, you recognized that this is a multi-decade journey where there's not really a very clear end date, like, this is when we're going to solve it. How do you both maintain that very long-term view while also keeping your foot on the gas as much as possible?

41:07

Speaker A

I think a huge part of why the team works so hard is because, again, I've just filtered for that up front. And maybe this is ignorant, but I'm very much of the belief, and have come to the realization, that the supermajority of management problems are actually just hiring problems masquerading as something else. So I don't have to push the team to be working from early in the morning to late at night; that's a byproduct of the people I've happened to have filtered for. And the burnout thing... again, we're three years in, but I personally have been working this way for pretty much my whole life. I think when you're just incredibly passionate about something, there's an infinite resource that you can tap into, and the burnout comes as a byproduct of not having impact. So that definitely matters, and that's been said before. But like, how are you able to get a 67-year-old flying red-eyes every other week from New Jersey and working 12 hours a day? I don't know, I think it's just hiring for someone who is built like that. Right? Like, Dario on our team is built different. He's a beast.

41:27

Speaker B

Sam Altman has talked about the idea of burnout in the past and I think he basically said like work does not actually cause burnout. What causes burnout is losing.

42:37

Speaker A

Absolutely.

42:44

Speaker B

And if you're always winning and you feel like there's momentum, you never burn out.

42:45

Speaker A

Yeah, I'd probably fork that idea and put a corollary to it. My revision would be that it's a lack of agency that breeds burnout, not actually losing. Because if you feel like you had agency over the decision and it blows up in your face, you internalize that differently than if something blows up in your face and you had no control over the situation. So continuing to make sure that the team feels that agency, that they're working super hard but are also able to influence things, that probably really matters. And I think that's also just a byproduct of keeping the team super small. The numerator will always be one; you're always just one human being, hopefully. But keeping the denominator small means your contribution to the total pool of activity is much larger than it would be with a large denominator. And I think that's why a lot of really talented, high-agency people are leaving a lot of hyperscalers: irrespective of how talented they are, the denominator has gotten so large that their individual contributions, no matter how good they are, are de minimis compared to what they would be on a very small Delta Force team of super talented people. I've talked to a couple of people.

42:48

Speaker B

From SpaceX, and it's the same thing, where a single person at SpaceX may not have a huge impact on actually moving the needle at that company, whereas they could take the same amount of action and have a much bigger impact working on their own product or problem.

44:06

Speaker A

For sure.

44:20

Speaker B

Yep.

44:21

Speaker A

Yeah, I definitely subscribe to that.

44:21

Speaker B

Yeah, I think Martin Shkreli talked about this, going back to the idea that there are a bunch of drugs out there that have already been synthesized or made, and they just haven't been properly productized for the right disease. How have you been able to figure out how to go through that search space and decide which ones are worth going after versus not?

44:23

Speaker A

So my kind of crazy vision for the future at this nexus... this is a conversation I've had with Jacob Kimmel from NewLimit, and he really influenced the way I think about this specific problem. So John Macor, for instance, on our team, is a bona fide drug-hunting genius. His track record speaks for itself, but you spend 20 minutes with that guy and you're just like, okay, he is almost extraterrestrial. And John is, let's say, bandwidth-constrained by his own biological compute. He's one entity, one person. So could you get a million John Macors in a server room trying to find you medicines? I think that's a really exciting prospect. Obviously that's a great North Star; how you get there is a separate story. But yeah, that's something I've tried to be thoughtful about. And part of constraining the search space is just baking in really good priors, baking in really good heuristics. So, to whatever extent we've been able to do that with our very small team... I guess I'll just say that's something we think about a lot and something we're building towards. And again, this is one of these card flips that hasn't yet occurred, but I'm pretty sure it'll be a really good investment.

44:44

Speaker B

The first time that I talked with Laura, she mentioned that she didn't have a phone, and it really confused me. And then I realized that you also don't have a phone, and I think you're the person she got that from. And it's this idea of eliminating distraction, I believe, and eliminating the things that would let you leave the actual office. How do you think about eliminating distraction and just focusing on the thing you're actually doing?

46:02

Speaker A

My lens here is pretty simple: I look at life through the prism of life minutes. Life minutes, yeah. And if you're throwing away a third of your waking life minutes into the abyss, that's probably not net productive. So it's something I think about a lot: it's the one resource or currency that we're given in life, and you have to deploy it meaningfully. So where do I want to deploy my life minutes? For me, the thing that I believe is most important for my life trajectory is Babylon, and I want to increase my surface area to that as much as possible. Part of that is distraction minimization; another part is just not feeling good existentially about this idea of throwing things into the abyss. So yeah, it's been about three years now, no cell phone. People are like, oh my God, how do you travel? If you're stuck somewhere, how do you get directions? I'm like, speak to a human, the same way we've been doing for millennia prior to this. It's a very recent phenomenon that's distributed itself so quickly that now it feels impossible to imagine life without a phone. But it's cool. And honestly, at the least, when I tell people at the airport and have them print out my ticket instead of scanning a QR code, it's actually nice, because you can see some switch flipping somewhere in their brain of, oh, this person...

46:26

Speaker B

Thinks a little bit differently.

47:58

Speaker A

Yeah, well. But also, like, oh, maybe it's not as necessary as I may have thought. And yeah, my life has been significantly enriched as a result of it.

47:59

Speaker B

How did you make the decision in the first place?

48:09

Speaker A

I had a very complicated relationship with my phone, always. In college, everyone made fun of me because I had the oldest version, an iPhone 4, until 2019, maybe 2020. That was my whole thing, because I saw the iteration rate, people basically conflating change with progress as it related to Apple products, being like, oh my God, now I have a new icon for my YouTube app. And there was actually a really sobering moment with the iPhone, where you used to be able to watch YouTube videos, and I'd play a song, close my iPhone, and it would still keep playing the music. It was actually also a good hack to get around the ads. And then one day they pushed an update, and now I had the red YouTube icon instead of the old TV thing they had, and I could no longer do that. They got rid of that feature very intentionally. And I was just like, we've passed the inflection point where they need to appease us as customers, because the retention is so amazing that they can basically get away with whatever they want. And I think that just crystallized it for me: these phones are not trending in the right direction. So I started using black-and-white everything; this was probably 2016 at this point. And then, yeah, in college I hated my phone, and eventually it just became a super easy decision when I started Babylon.

48:11

Speaker B

So you just mentioned off camera when we were talking that you think the information to cure Alzheimer's is already out there. Yeah. What do you mean by that?

49:31

Speaker A

Okay, so, I mean, look, it's a crazy thing to say, but I think the amount of scientific literature that comes out every week that's worth reading, relevant to whatever field, has far outpaced our ability to read it, period. Even if you had a Nobel laureate trying to ingest papers every single week, it was intractable decades ago to even catch up on all the latest. So, taking that to the extreme: if there is way more information than any individual can ingest and synthesize, and there's an emergence of the first kind of model that actually allows us to reliably ingest information in text space, or language space, then those two things together could probably be super powerful. So there are a lot of clues about Alzheimer's. I think there are enough clues for us, at the epidemiological level, molecular level, whatever, to understand what is actually causing the cognitive impairment. And often these clues come from very disparate fields. So, in the world of vaccine biology: the shingles vaccine. There was a group in Wales, people born around a certain time, where there was a mandate such that if you were born after a certain cutoff date you were eligible for the vaccine, and if you were born just before, you were not. And so this basically stratified the entire Welsh population into two camps.
And so for people that were born within a week of each other, there's nothing different about them, except that half of them got the vaccine and the other half did not. They basically stratified those patients and followed them over time. And they found that people that had the shingles vaccine were 20% less likely to develop Alzheimer's, or actually all-cause dementia, within seven years.
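The natural-experiment logic described above (stratify a population by an eligibility cutoff, then compare dementia incidence between the two arms) can be sketched with simulated data. Everything here is a hypothetical illustration: the cutoff year, the baseline risk, and the assumed ~20% relative reduction are made-up stand-ins, not the actual Welsh study numbers.

```python
import random

random.seed(0)

CUTOFF = 1933  # hypothetical birth-year eligibility cutoff (illustrative only)

def simulate_person(birth_year):
    """Return (eligible_for_vaccine, developed_dementia) for one person."""
    eligible = birth_year >= CUTOFF          # born on/after cutoff -> vaccinated
    base_risk = 0.10                         # made-up 7-year dementia risk
    risk = base_risk * (0.8 if eligible else 1.0)  # assume ~20% relative reduction
    return eligible, random.random() < risk

# Everyone is born within a narrow window around the cutoff, so the two
# groups differ only in vaccine eligibility.
cohort = [simulate_person(random.choice([CUTOFF - 1, CUTOFF]))
          for _ in range(200_000)]

def incidence(group_eligible):
    cases = sum(1 for elig, dem in cohort if elig == group_eligible and dem)
    n = sum(1 for elig, _ in cohort if elig == group_eligible)
    return cases / n

vaccinated, unvaccinated = incidence(True), incidence(False)
relative_reduction = 1 - vaccinated / unvaccinated
print(f"vaccinated: {vaccinated:.3f}, unvaccinated: {unvaccinated:.3f}, "
      f"relative reduction: {relative_reduction:.1%}")
```

The point of the design is that birth date within a week of the cutoff is effectively random, so the comparison approximates a randomized trial without ever assigning anyone a treatment.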

49:37

Speaker B

Why?

51:35

Speaker A

We have no idea. And that's because, again, these are very distinct fields of science that don't interface with each other directly. So all we know is, in the latent space of those observations, clearly there's something there. We don't have the common language to be able to interface with them. And so I've gotten super excited about this idea; it's something I've been toying with for years now. With these LLMs taking off: can we fine-tune these LLMs, with our best-in-class team or whoever, get very smart scientists to train these models to approximate some heuristic that they use, and then deploy them on the knowledge graph of science to try and connect those dots? That seems really tractable. And we have more than enough clues to figure out the perfect target for something like Alzheimer's or other diseases.

51:35

Speaker B

You started working, I think, with OpenAI to basically try to design models that are super good at this. So how did that happen, and what's the goal of that?

52:22

Speaker A

I mean, it was birthed from a conversation I had with our leadership around this topic. And it sounded like OpenAI had been thinking very similarly along those lines. And so from there it became a question of how you fine-tune models to be better at particular tasks. The classic thing I always say is: to calculate an objective function, you need to compute a delta. To compute a delta in number space, that's A minus B, a subtraction, very simple. In string space, that's Levenshtein distance. In reasoning space, that's an unsolved problem; no one's really cracked that yet. And so the question was: how do we compare scientific hypothesis A with B if we want to train the model to get better and better at generating good scientific hypotheses? And the calculus was pretty simple. The closest thing we have to binaries in the world of biology is clinical trial readouts. It's not perfect, but it's the closest you can get, I think. And so we used that as a way to compute deltas: it either hit the primary endpoint or did not, and then that becomes a yes/no kind of thing. From there you can compute a loss function, finally, in the world of biology, and start trying to train a model to understand how you map from the world of biology to the world of clinical trial outcomes. And my dream for that project was that one day, if we had a model that could reliably predict these outcomes, going from the world of biology to "tell us what's going to work," we could one day do the opposite and say: there was a successful phase three for Alzheimer's on this endpoint; what was the biology that we drugged? And I think that was a bit idealistic.
Let's just say that, in reality, that wasn't a tractable path, but it's still something I think a lot about.
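The "delta in each space" progression above can be made concrete: subtraction in number space, edit distance in string space, and a loss against a yes/no trial readout as the closest thing to a label in biology. This is a minimal sketch of those three deltas, not Babylon's or OpenAI's actual training setup.

```python
import math

def levenshtein(a: str, b: str) -> int:
    """Edit distance: the 'delta' in string space."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def binary_cross_entropy(predicted: float, hit_endpoint: bool) -> float:
    """Loss against a binary clinical readout: the closest thing to a
    delta in 'biology space' once a trial is reduced to hit/miss."""
    y = 1.0 if hit_endpoint else 0.0
    eps = 1e-12  # guard against log(0)
    return -(y * math.log(predicted + eps) + (1 - y) * math.log(1 - predicted + eps))

# Number space: the delta is just subtraction.
print(7 - 4)
# String space: the delta is edit distance.
print(levenshtein("kitten", "sitting"))
# Trial space: a confident-but-wrong prediction incurs a large loss.
print(binary_cross_entropy(0.9, False), binary_cross_entropy(0.9, True))
```

The design choice this illustrates: once a messy domain is collapsed to a binary (endpoint hit or not), a standard loss function becomes available, which is exactly what makes the fine-tuning framing tractable.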

52:29

Speaker B

But what else, with the explosion of AI, has been unlocked or made possible over the past couple of years that would have been impossible for the past hundred?

54:18

Speaker A

I'm not going to say anything new here, but I just think, generally, you can constrain the search space much more readily. And not being bandwidth-constrained matters: as humans, it's very hard for us to hold many, many different things in context, and models, LLMs, are having that problem as well. But if you can extract the quantum of information that's relevant to you from hundreds of papers, it reduces the complexity, let's just say, required to actually get information very quickly. So if I have clue A in my mind, and I'm deploying these bots to retrieve relevant information, it's much easier for me to lay out on the table all the clues that I need to synthesize together in one basket, instead of the usual process, which is: okay, I'm going to go read a paper. Oh, that's interesting. Let me now go read these 10 other papers. By the time you're done with a week of that investigation, you've forgotten the first paper you read. So your iteration loop is just much faster, and that's probably a good thing.

54:28

Speaker B

You talk about controversial science. Go into that.

55:37

Speaker A

All right. I just think it's super interesting when there's a collision between society and science, and what we as humans, or as a society, choose to do with that. I think the proverbial example of this is a researcher in the early 90s named Simon LeVay; I think it was 1991. He came up with what he dubbed the neurological substrate of homosexuality. This was a thing called the third interstitial nucleus of the anterior hypothalamus. It was published in Science, which is the top journal. It was featured on Oprah and, maybe, 60 Minutes, all these things. And he was expecting the Nobel Prize. And, by the way, just specifically on the science: what he actually found was that there's a small nucleus of cells that is 2.8 times larger in heterosexual males than it is in homosexual males and in women. So there was no delta between the latter two groups, but a 2.8x multiple in size, on average. I think the n was 40-plus people. And so there were a lot of critiques about the science, like: well, a lot of the homosexual men enrolled in this had died of HIV/AIDS, and maybe that was a contributing factor, or whatever. But for the most part, it was just an observation. He was not saying what to do with that. And he's expecting the Nobel Prize, basically, and instead he gets excommunicated from Salk. He joins some random institute, he takes a leave of absence, and one year later he's become the scientific pariah of the century. The left hated him because he gave a therapeutic target for sexuality. The right hated him because it validated homosexuality.
And so you had this convergent pincer attack that just skewered him, basically. Since then, he's been a kind of roaming intellectual, and he's an amazing researcher and all these things. But it was just so interesting, because there's an incongruity: basic science should, in theory, be agnostic to these kinds of things, within the bounds of some ethics. Someone should not be penalized for a discovery. He didn't say what to do with it or anything; he just literally showed data. Anyway, he's since more or less gone into the winds of scientific discovery. But the irony that I find so sobering is: he's gay. He just wanted to understand his own sexuality. And, again, without commenting on the science itself of that biology, I don't even know what the right answer is there, because giving a therapeutic target, or a filter, for things like sexuality, that is controversial. Same with gene editing, right? Should we put a moratorium on all CRISPR research just because you can genetically engineer babies, and some people think that's a bad thing? I don't know. These are maybe not questions for the basic researchers who make the discoveries to answer.

55:39

Speaker B

Yeah, there's this idea that science and discovery kind of progresses one death at a time. How do you think about that in the sense that, if we're living longer, how do we keep pushing the envelope on science?

58:44

Speaker A

I think it definitely culls branches of scientific discovery. If you imagine a tree that bifurcates constantly, a knowledge tree, you have these different branches. I don't think the right way to increase the number of scientific discoveries is to prune branches and reallocate resources into the ones that are currently bearing fruit. There's a long history of amazing discoveries coming from surprising places. And this is part of, by the way, the argument around funding for the NIH and academic institutions: should we even still be investing in basic research, or do we just need to concentrate capital in these other places? But no, I would definitely reject the premise that deaths, deaths of scientific inquiry or whatever, are the right way to go about things. Discovery obviously breeds further discovery. And so we should be, to whatever extent possible, pouring gasoline on the fire when we see that there's a tiny, tiny little something there.

58:57

Speaker B

I noticed on your website, the first thing that you see is not "we're going to cure Alzheimer's"; it's actually that you want to give people more time back with their loved ones.

59:57

Speaker A

In effect.

1:00:06

Speaker B

I'm wondering what other things you can target in the process of trying to cure this specific disease. Let's talk about neurodegeneration. If you don't necessarily cure Alzheimer's, but you're curing other things that relate to it and to neurodegeneration, you're able to give people more time with the people that they love. Is this something that you're thinking about, where you're trying to target this specific thing, but over the course of doing that, you're going to find other cures and remedies for things?

1:00:07

Speaker A

I think it'd almost be hubristic for me to answer yes. The process of anyone pursuing a scientific path, going back to the bifurcation of different branches, is that there will be spin-off things that hopefully bear fruit on their own. And the perfect instantiation of Babylon is one where the mission proves to be true, or we're able to achieve that. Maybe the secondary one is that we're able to plant several flags that allow the field to advance further. So from that perspective, I think giving families more time with their loved ones is a super important prism through which to look at this problem, because you want more quality time. My grandmother had a really horrible final few years. It was not right for someone to go through that. The longevity field generally has people who are focused on healthspan and people who just want to increase the number of total life-years lived. And I'm definitely not in the latter camp. Like, just suffering for...

1:00:35

Speaker B

...the last 20 years, but you live for longer.

1:01:48

Speaker A

That is exactly it. That is just obviously not good. Yeah. I think there's a human element to that statement as well. We think in the world of molecules and proteins, and my team is always thinking through that lens. But an interesting decision I made early on was for us to also get exposure to the other side of that: go meet real human beings that are living with this disease. We had a patient come in here the other day, and that was, again, such a sobering thing, to see what's on the other side. And volunteering at memory clinics, these are the kinds of things where, look, we're going at this through the lens of a pharmaceutical that will help these people. At the same time, there's so much work you can do in the meantime. Right? I often wonder, for all these companies that are spending billions of dollars on this, and that may ultimately fail to have the drug, the cure for whatever disease they're going after: would the AUC of overall benefit to humanity have just been higher if they had put those hundreds of people to work volunteering in community centers or whatever? I don't know. That's a provocative question. But yeah, it's something where at least we're trying to do both.

1:01:49

Speaker B

Yeah. And on the volunteering-at-memory-clinics side: basically, I think every single week you bring your entire team to go work and help at those clinics, to keep yourselves close to the problem. How did you come up with that?

1:03:09

Speaker A

Yeah, it's not that exact cadence per se; it's whenever they let us. But yeah, the calculus was pretty simple. We're working super hard. I'm always pushing my team; the big joke in the company is, what's Sacha's favorite timeline? It's yesterday. So I'm always pushing these timelines, always cranking hard and really trying to get the team to move with urgency. And I think you internalize that urgency so differently when you see these people and you're like, man, they deserve to live. You want to work harder to let them benefit from the eventual work, if everything works out. And the thought experiment that I definitely try to have the team do is: if you had a parent that was suffering from this, or that you knew had a hundred percent chance of getting this in 10 years, would you be able to live with yourself if you didn't work as hard as you possibly could to prevent that eventuality? So yeah, I think it's incredibly important. I think more companies should do this, and for us it has been rewarding on all fronts.

1:03:25

Speaker B

On the distraction side, you're basically trying to systematically eliminate every distraction in your life that doesn't have to do with Babylon. How do you think about staying focused on the long-term mission while also doing all these other things in the meantime in order to get there? You have to go make a bunch of money. If you had a massive checkbook of $10 billion, maybe you could take a different set of steps. But because you don't have that, you have to go find those drugs and do whatever you do with them.

1:04:28

Speaker A

Yeah, I think this applies to so many things, just as a meta-level concept, but a hundred percent. The need for a commercial intermediate is almost something I quietly resent. In a perfect world, someone would hand us carte blanche to just work on this problem, and I mean a true carte blanche. We see a lot of companies that are incepted with billions of dollars, and they'll remain unnamed, but there are a lot of these mega-rounds happening now in the bio space, and you're seeing a concentration of capital into certain companies. But you do not see that balance sheet put to work. Let's call it: the impact per dollar is inversely proportional to the number of people on the team. With a super small, dedicated team of extremely talented people, I'm pretty sure it's better to give them the same amount of money than a team that will immediately have hundreds of people, because their balance sheet now allows them to hire all those people. So yeah, time to bureaucracy is much shorter if you're incepted with a billion dollars versus having to fight your way up to the top. And you even see that with a lot of companies that were started in downturns, macro-level downturns. Some of the most successful companies ever were started in economic downturns.

1:04:56

Speaker B

Do you think that model of getting a bunch of money right at the start, like opening a lab and then just having, you know, $3 billion or something, does that model even work? Or does it basically create the wrong muscle memory for commercializing a drug and making that work?

1:06:13

Speaker A

I can't speak from experience, but I can definitely say that being very, very, very scrappy early on has been imbued into the DNA of the company. And I hope and pray that we will never not be as scrappy as we have been. It's been really integral to how we do things at Babylon. And that was a pure byproduct of the early days, when I had zero money and I was feeling the pain of wiring that money from my life savings. That didn't feel great.

1:06:28

Speaker B

What net results do you think are going to come from having that scrappy DNA versus having a massive checkbook?

1:06:57

Speaker A

I think it's a forcing function to be a lot more thoughtful about the decisions you make. And, by the way, I've done this to the extreme, where we've said no to a lot of things we should have said yes to, because I was just so, "no, it's too much money, it's too much money." And then six months later I'm like, I really regret not doing that. Thankfully, none of it has actually been massively influential overall. But yeah, I think it forces you to be a lot more thoughtful, and then, generally, it attracts the right people. This is part of the reason we haven't even announced how much money we've raised or anything like that: I just think it puts a big neon dollar sign above your head and could be a kind of bat signal for the wrong talent. And again, I don't want to say that's definitely the case, because I've never been on the other side of it. But at least for us, it's been very good to not signal those kinds of things, because then it doesn't become an inclusion criterion for the people that we hire.

1:07:03

Speaker B

After having those situations where you say no and later realize that you made the wrong call, how do you update your decision-making going forward?

1:07:55

Speaker A

I definitely feel the pain on a very emotional level when I make a bad decision, and I internalize that pain. There are a lot of things where maybe our first default is to say: well, I feel pain, that's a bad thing, therefore I shouldn't feel that. And then there's the second-order thinking around that, which is: hey, the pain is a good forcing function to get better, and you need it as part of your gradient descent. You need to feel the pain of making the wrong decision. So yeah, I feel it on a visceral level when I've made a bad decision, and I sit with that. I don't try to reject it. I'm like, good, I'm glad that I feel the pain, because it informs the next decision I'll make. And I think it's just a process of iterating on that.

1:08:04

Speaker B

What's your process for experiencing that pain and dealing with it?

1:08:48

Speaker A

Probably not one that's optimal. I think generally I sit with it. I write a lot. I have a Notion file which is just stream of consciousness. Every now and then, if there's something where I got a real gut punch, I will just dump it out on the screen so I can read it back to myself honestly in the future. And I've found that to be incredibly fruitful. I've had a rolling document since day zero of the company, and it's just my stream-of-consciousness thoughts at all these different points in our company.

1:08:53

Speaker B

And do you feel like, after you have one of those setbacks and then you write it out, you're able to work through it in a better way?

1:09:26

Speaker A

Yeah, that's for sure. And I would say the other thing is having internal proxies. It's really hard to know where you are in terms of your growth rate, so I think you need to look for signals that can give you a sense that you're in a high-first-derivative kind of situation. My proxy for this is time to last embarrassment. If you need to look back two years in your records to find something you're really embarrassed about, that you thought this way, or wrote that thing, or spoke that way, that's probably way too long. And if you're really embarrassed by even the way you were thinking about a problem, or speaking, generally, two months ago, then you're in a high-first-derivative kind of environment. So that, to me, is definitely my proxy. I'm constantly looking back and being like, oh God, I can't believe I was so juvenile in my way of thinking about this. I'm sure I'll say the same thing about this interview in a few months, and I hope that's the case. I kind of want to constantly be checking myself on these things and updating accordingly.

1:09:32

Speaker B

I would almost say that how often you're embarrassed by the way you were thinking before, or the actions that you took, is kind of a proxy for the amount of risk that you're taking at any given moment. If you're taking no risk and you're not changing anything anywhere, then you're probably not embarrassed at all. How do you think about risk-taking and deciding what risks are worth taking?

1:10:32

Speaker A

That's a good question. I put it in the context of the rest of the stuff that we're doing. At the pipeline level, when we look at the assets that we're working on, nothing can be taken in isolation. These things are not Poisson in their distribution; they are very much beholden to each other. So I know when we're stretched very far in the risk dimension, because I'm internalizing it constantly, and then we have to hedge against that by having another thing. It's not a very mathematical formalism of portfolio theory, but it is how I think about the portfolio of different decisions I'm making at any given time. If we've overextended ourselves financially in the past week, let's call it, I'll be very conscious of future deployment if I don't see an ROI within a month or whatever. So yeah, it's spending a lot of time internalizing these things, feeling them at a visceral level. I think that's an underrated thing: if you're deleting all inputs except your company, you internalize things almost somatically, differently. And so your intuition will hopefully give you a gut check, haha, a gut check, on, hey, we've overextended ourselves.

1:10:56

Speaker B

How has your intuition updated over time as you've gathered this new information?

1:12:05

Speaker A

I don't know. I'd like to think that I take stock of the times where I was really strongly convinced that something was going to happen and I was wrong. Those large deltas between confidence and outcome are really rich with information. And it just comes back to logging it, right? Having those moments and literally dumping them into Notion: man, I was so sure this was going to work, and it didn't. Or the other way around. How often does that happen? Hopefully less frequently over time. I'd say in the early days it was once every few months, and now I don't remember the last time; I'd have to dig up my notes.

1:12:10

Speaker B

Yeah, let's just talk about what causes cognitive impairment over time in Alzheimer's.

1:12:53

Speaker A

I mean, I think the answer is we don't know. It's obviously not unknowable, but we don't yet know, at a biological or molecular level. What I think is going on: my favorite question to ask Alzheimer's people is, why are tau fibrils toxic? We know that amyloid starts depositing 20, 30 years before symptoms. Eventually that starts to lead to the hyperphosphorylation of tau, and that hyperphosphorylation ends up leading to the detachment of tau from microtubules. Then tau fibrillizes, and that causes neurodegeneration. And that final link is a huge question mark. So my crazy theory, which was really the birth of Babylon, was basically this: tau is a relatively disordered protein that forms a very specific structure when it fibrillizes, which just means it binds to itself. And, a note to the editor: you should look up 6HRE in the Protein Data Bank and display that on the screen. When tau fibrillizes, the structure, it's a cryo-EM of the tau fibril, looks almost like a celery stalk that's slightly elongated and stacked back to back against another one, two celery stalks, if you're looking at a bird's-eye-view cross section. And they stack into these long celery stalks. My strong belief is that they are basically sequestering essential proteins, really essential proteins that are almost like nutrients for your neurons. They're getting sequestered by this celery stalk that's suddenly sitting in the middle of your neuron, acting like a sponge, or a black hole, sequestering all these essential proteins.
And so it's the loss of the soluble protein that is like causing the cognitive impairment or neurodegeneration, not the like actual just presence of these things. So anyway, that's like one kind of thing and then the other that I think is really interesting is neuroinflammation. And like, you know, I don't know if you've ever had a concussion, but like yeah, they, they suck. And I like phenocopies, like when you have two things that are phenotypically like very similar, like clinic clinical presentation is like relatively similar. I mean this is like pseudo scientific, but it's like a way to grok these things is like a concussion. You know, you have like discombobulation, like cognitive impairments, like short term memory Loss, inability to form new memories. Like, it's kind of like, you know, sounds Alzheimer's, like. So I. So I think it's wrong to like, therefore conclude that the underlying path of physiology is the same. But I also don't think it's crazy to like, start there.

1:12:55

Speaker B

So it's a little bit of A-equals-B-equals-C kind of reasoning.

1:15:38

Speaker A

Right. It's just observing basically the same thing in two completely different diseases and then asking: are there clues? Is there an underlying thing between them? A concussion, symptomatically, emerges within seconds. You get hit with the concussive blow and you immediately feel the discombobulation. And I think that time horizon, the fact that the temporal resolution is on the order of seconds, is a clue. What's the only thing that can mobilize that quickly and immediately cause issues at the cognitive level? It's probably inflammation. So you have this immediate inflammatory response, happening on the order of seconds. I'd contend that 100% of the variance is explained by neuroinflammation in the concussion case, and it presents similarly. Does that port over to Alzheimer's? That's a huge question mark.

1:15:41

Speaker B

Inflammation basically a massive indicator of future onset Alzheimer's?

1:16:38

Speaker A

Well, inflammation seems to be a long-term one, which is where a lot of the, you know, vaccine biology is probably linked, like systemic inflammation. But neuroinflammation specifically, for sure. And a lot of people have tried drugging this. This is a known mechanism. But what percent of the cognitive impairment is explained by specifically the neuroinflammation? That's a huge question mark. But my pseudoscientific take here is, maybe it's 100%, really. And maybe the tau is just a trigger for that.

1:16:42

Speaker B

So you mentioned that concussions and Alzheimer's are similar. Are you able to take anything from concussion research and port it over to Alzheimer's?

1:17:13

Speaker A

Yeah, that's in a sense what I'm suggesting. I think the clinical presentation being similar is enough to at least warrant investigation beyond that. And so, yeah, there are things around resilience and recovery rates of concussions that I think are interesting, and definitely targets that I've seen in the Alzheimer's omics data sets. So again, it's a bit of a poor man's estimate, or inquiry, but I think it's kind of fun to play with those intellectually.

1:17:22

Speaker B

How many different little bits of information or indicators from all these different fields are you thinking you want to bring into basically solving this one mega problem?

1:17:50

Speaker A

I think the way that I think about trying to cure Alzheimer's is, you need a fortress balance sheet to take as many orthogonal shots on goal as possible. You can have a very strong prior about the biology, and you're very likely to be wrong. And so to whatever extent you can absorb the blows of being wrong in the clinic, that's what matters. Yeah, you just need to be wrong long enough to one day be right.

1:18:00

Speaker B

So your success vector is effectively just creating a company that can take a bunch of hits.

1:18:21

Speaker A

Yeah, for sure. That's right. And then being super, super first-principles about how you think about this disease, and again, trying to be as orthogonal in biological space as possible, but hopefully synergistic in execution space, where the day-to-day looks very similar from program A to program B, but in biological space you're hedging some of the risk.

1:18:25

Speaker B

Other than actually writing down how you're feeling and what decisions you've made, what other things do you do to prime yourself to be able to take hits as a person? Because you're basically the company yourself. You're like the soul of the company. So how do you prime yourself to take hits?

1:18:49

Speaker A

I don't have an answer for that. I don't think I've done that well. I think I just take them on the chin and just sit with them and I'm like, ouch.

1:19:08

Speaker B

Do you have days where you just lock out and then recover and then go back in?

1:19:17

Speaker A

I mean, I think, you know, the process of being a founder is obviously just getting punched in the face a thousand times a week and just smiling and asking for more.

1:19:22

Speaker B

But most companies are not designed with the understanding that you're going to have a whole bunch of failures on the road.

1:19:31

Speaker A

That's true. And the scale of the failure is so grand. Right? If we have a failure in phase two or even phase three, that could be hundreds, maybe thousands of patients, many years of work, and hundreds of millions of dollars. The scale of these failures is pretty catastrophic. So yeah, it's very likely that I will at some point know what that feeling looks like. I haven't had one on that scale. I think generally we just calibrate to whatever distribution we're presented with. So for me, emotionally, the responses I've had so far may be small in absolute space, in terms of how bad the blast radius of that thing was, but I'm sure the feeling will be the same when we start to get comfortable with larger scales. But yeah, there are definitely some days where you just get gut punched. It's mainly when there are successive gut punches. And for me it's never really lasted more than 24 hours. But, you know, you just kind of go home that night and you're like, instead of doing emails, I'm just going to watch some stupid Netflix thing or something like that. There's been a few times like that since starting the company of just, you go home, you reset, you sleep, and you wake up and.

1:19:37

Speaker B

It's another day. I can feel terrible for 12 hours, like during a certain day, you know, something happens. In the rare case, with the biggest setbacks, maybe it might be a couple days.

1:20:54

Speaker A

Yeah.

1:21:08

Speaker B

But then eventually what I would actually do is, I had these moments where I would just play video games for like three days straight. Completely delete everything from my brain. Just play a video game. And then by the third day I get bored and I'm like, gotta get back at it.

1:21:08

Speaker A

100%. Yeah. Deleting inputs is so critical. I did it before Babylon. That was my first brush with no cell phone whatsoever, cold turkey. I went two weeks in a very remote part of Scotland, sleeping on a shitty mattress on the floor in a creaky old cabin, chopping wood to make fire, things like this. And it was totally life changing, because, yeah, I think when you delete your inputs, you realize that the majority of these things that we misconstrue as thoughts are just reactions in thought space. And that process changed my worldview, where I was like, wow, we're just being subliminally primed all the time in ways that we don't even realize. And when you drown out all the noise, you come back with a new perspective you've never had. And that gave me actually the clarity to start Babylon.

1:21:22

Speaker B

So, yeah, what are the things that keep you up at night?

1:22:09

Speaker A

I think one thing that I've been thinking about lately that's definitely been interesting to sit with is the question I think everyone's asking at some level of AI adjacency: are the secular trends of all these hyperscalers, and the rate of improvement of these models, just always going to exceed our ability to fine-tune on top of them? If you're sitting at the application layer of any of these, is there even room to actually be ahead of the curve? I'd like to think yes. But it's been very interesting watching these trend lines, constantly paddling to stay just ahead of what everyone else is doing. And it probably will become a thing of diminishing returns in the future, where the rate at which you have to paddle to stay just ahead of everyone else is going to have to increase so much that it becomes intractable. So that's one thing I think about a lot, observing and trying to understand.

1:22:13

Speaker B

Have you actually experienced this acceleration in the past three years even?

1:23:11

Speaker A

For sure, absolutely. I feel like everyone has felt that at some level, emotionally. And I'm totally convinced that fine-tuning will define the next decade of AI. But I think you're just going to see a lot fewer people doing that, because the infrastructure to support that on top of these LLMs is also non-obvious. And I think other people are looking around being like, well, if I spend all these months to fine-tune a model and then three weeks later it's obviated by whatever new model comes out from OpenAI or whoever, then what's the point? And I think that's probably a good existential question to be asking.

1:23:17