Inside View: The AI Personas Needed to Diagnose Disease
This episode explores Microsoft's groundbreaking AI study showing that multi-agent AI systems can diagnose diseases four times better than human doctors. Dr. Matthew Lungren from Microsoft Health and Life Sciences discusses how multiple AI models working together as specialized agents can simulate expert medical panels, potentially democratizing access to high-quality healthcare diagnosis.
- Multi-agent AI systems outperform single AI models and human doctors by having specialized agents handle different aspects of diagnosis (cost analysis, devil's advocate questioning, quality control)
- AI diagnostic systems could democratize access to expert-level medical care by simulating the multidisciplinary panels available only at top medical centers
- The integration challenge for AI in healthcare isn't just technical but regulatory, requiring new frameworks for non-deterministic systems that have moments of both brilliance and failure
- Medical information doubling every 90 days is driving hyper-specialization among doctors, which AI could help reverse by augmenting general practitioners' capabilities
- The future of medical practice may shift from memorizing facts to managing AI agent teams, requiring new metacognitive skills rather than traditional medical knowledge
"Microsoft's AI is better than Doctors at Diagnosing Disease"
"I believe that the human expert, plus these expert systems together will ultimately deliver better care, no matter what profession you're in"
"Medical information is doubling roughly every 90 days"
"In fact, 10 to 20% of AI interactions with these common chatbots like GPT are around a medical use case"
"I want to democratize that experience for everyone, increase the access. So no matter where you live or what you do for a living, you should have that same level of precision when it comes to your healthcare"
This is an iHeart podcast.
0:00
Guaranteed Human.
0:02
Run a business and not thinking about podcasting? Think again. More Americans listen to podcasts than ad-supported streaming music from Spotify and Pandora. And as the number one podcaster, iHeart's twice as large as the next two combined. Learn how podcasting can help your business. Call 844-844-iHeart.
0:04
Kaleidoscope.
0:27
Welcome to Tech Stuff. This is the Inside View. I'm Oz Woloshyn, here with Karah Preiss. Hello.
0:33
So, Oz, I'm very curious to know more about the story you've brought me this week, since it's a topic we discuss a lot on this podcast.
0:40
Yeah, so today I've got a story about AI in healthcare, specifically AI and diagnosis. I spoke with Dr. Matthew Lungren, who is the chief scientific officer for Microsoft Health and Life Sciences, about this blog post that Microsoft recently published with the title "The Path to Medical Superintelligence."
0:47
Do I want to know what medical superintelligence is? It's more big than just regular intelligence. But I actually heard about this study. It was everywhere. And if I remember correctly, it was that the AI were better at diagnosing than doctors. Right?
1:07
Yeah, that's right. In fact, four times better. There was a headline in Time magazine which really says it all: "Microsoft's AI is better than Doctors at Diagnosing Disease." Special shout-out here to Elliot Fishman, who's our old friend. He's a professor of radiology at Johns Hopkins, and he runs this fascinating email group that discusses new developments in AI. Matthew Lungren and I are both members of this group, and Matthew is also one of the authors of the study.
1:20
What kind of doctor is Dr. Lungren?
1:47
Like Elliot Fishman, our friend, he's a radiologist by training and has a public health background. He was hired at Stanford, where he started using machine learning to analyze large data sets. Here's Matthew.
1:49
Eventually, my lab grew into a very large AI center at Stanford, which bridged the computer science department and the medical school, and kind of saw the translation of the newest techniques into healthcare applications accelerate. Taking that work a step further, I went on sabbatical to Microsoft Research and realized that a very similar opportunity was there in big tech, if you could start to connect the latest technology to problems in healthcare. And so that's how I came to be here, and that's kind of what I still do all day.
2:03
And Matthew is also one of the authors of the Microsoft study.
2:34
I believe that the human expert plus these expert systems together will ultimately deliver better care. No matter what profession you're in, there's always a gray-haired person that has, you know, in some sense seen it all, and kind of compressed that into their brain, and they're pattern matching in a way that is just faster than folks that don't have as much experience. And that's true anywhere, but certainly in medicine. Right. I think the assistance, or ability, of AI to now sort of connect dots in ways that maybe can achieve that wisdom or that experience, and bring that to the surface... it's kind of an unprecedented time.
2:38
Beyond the exceptional performance (that is, four times better than human doctors), one of the things I found most interesting about the study was that it wasn't just one single AI model doing a diagnosis. It was a whole team of AI models that were able to talk to each other in order to come up with hypotheses, order tests, and ultimately come up with a diagnosis.
3:15
So multiple AI models seems a little bit unfair.
3:36
Yes, and in fact, we talked about this. The doctors in the study were not allowed to call specialists to help them with their diagnosis, but the AIs were allowed to talk to each other. So doctors are not gonna be made obsolete anytime soon.
3:39
Well, good, because I have a physical coming up and I don't need four AI models being like, well, this girl got real big this year.
3:52
Now, as you and I already know, people are already using AI regularly to diagnose themselves. In fact, I think more than 10% of the overall ChatGPT traffic is around medical stuff. This is not always music to the ears of doctors. So it was interesting to look at an example where this is actually an AI built for doctors, to work with doctors, rather than patient-facing. And the other interesting thing for me, which we talk about with Lungren, which we'll get to, is how this idea of multiple AIs talking to each other can simulate the experience of the best hospital systems in the US for people who otherwise might not have access to these panels of experts.
4:00
I can't wait to hear what you learn from him.
4:41
Well, here's the rest of my conversation with Dr. Matthew Lungren. So you're a trained doctor, and I want to start with the basics, which is diagnosis. I'm not sure when the last time you made a diagnosis on a patient was, but I'd love to hear from you as a doctor: what is the process of diagnosis?
4:44
Yeah, I mean, it depends quite a bit on the specialty. But as most people know, the classic image of a physician. Right. Is to speak with the patient, kind of do a Sherlock Holmes kind of thing. Everyone's seen the shows like House and things that kind of sensationalize sort of the approach, but really there's quite a lot of unknowns that you have to tease out. Right. You have to interview the patient, you have to obviously interpret labs and other information, and you have to start to narrow things down and order appropriate tests. Try not to chase too many, what we call zebras, but keep those in mind in case you're dealing with one.
5:03
And the zebra would be the classic House episode, right?
5:34
Yeah.
5:37
Right.
5:38
Well, every House episode is a zebra, which actually has some relationship to the study we're going to talk about today. But in general, it's more common to have an uncommon presentation of a common disease than a common presentation of an uncommon disease, if that makes sense.
5:38
Right, right, right. And this kind of relationship between AI and doctors has been going on for a few years. I remember reading a great piece in the New Yorker about how one of the challenges for AI was that the best doctors can't actually tell you in words why they're good at making diagnoses.
5:53
That's right. It's interesting. I think humans have many cognitive biases that are well understood, and keeping those in check, while also trying to leverage the information in front of you and not be affected by the case you just saw, or something you just heard at a conference, or an error that you experienced years ago that's still impacting the way you think about diagnoses, is hard. I think those biases have been well published and discussed ad nauseam in healthcare. But we're kind of dealing with this new human-plus-AI dance.
6:14
That's fascinating. Yeah. I mean, I actually slipped and fell down a few stairs at the weekend and bashed my head slightly on one of the stairs and then didn't feel very well. And I was like, I wonder if I could be concussed. So I did a selfie and sent it to ChatGPT and it said, my eyes look fine. So I actually, if I'd been more worried, I would have gone to the doctor. But there's a kind of a dark side to that as well.
6:48
Yeah, I mean, I think it sounds like you did okay.
7:12
But I would say, there's an old saying in healthcare from, particularly, the rise of the Internet, right, which is kind of the other similar technological advancement that impacted healthcare. We used to say to our patients, you know, your Google search does not replace our medical degree. Right. And that wasn't meant to be condescending, but it was just sort of like we had to pull them back from the abyss of going down a rabbit hole, where every ache and pain was immediately terminal cancer.
7:13
Right?
7:36
That kind of thing. But today it's different. It sort of references the experience you just mentioned, but that's happening everywhere. In fact, at the recent OpenAI launch of GPT-5, they spent 15 minutes talking with a patient who went through a very difficult battle with cancer and worked with the model herself, and it was able to explain very complex medical jargon to her in plain English and help her with questions to ask the physician. And as someone who still practices and sees patients today, I have to say my patients are better informed than maybe ever. And it's kind of changing the bar with this classic information asymmetry problem, where the patient has to keep up with the technical speak and all the information that we spend decades learning. It feels like there's almost a better playing field. So I can have this conversation with my patient almost at a peer level, and then we can go through the care journey together. I'm extremely excited about that prospect.
7:36
Taking a couple of steps back, I mean, you mentioned you've been in and around this since 2012, 2013. Why do people want to use AI in medicine?
8:35
Well, it's an incredibly challenging discipline, and it has only become more so in maybe the last 10 or 15 years. One of the things that is going on is that medical information is doubling roughly every 90 days, and that trend has been going on for a really long time. And what does this mean? Publication of papers, new therapies, new guidelines. These things keep stacking up, right? And so, just because you've been through medical school and training, right, we have lots of systems in place to help us continue our education, but really the reaction to that has been to sub-specialize, in some cases sub-sub-specialize. So to give you an example: I am a diagnostic radiologist, so that's the broader specialty. And then I specialize in interventional radiology, which is image-guided procedures, basically. And then I am further specialized in the pediatric version of that. So that's like a Russian nesting doll of specialties, and you see that throughout healthcare. And that is partly due to the complexity of care that's required for some patients, but it's also due to the information tidal wave and being able to hold all that in a human mind, right, with all of our limitations. And so AI, at least the work that we've been doing here, is starting to provide a counter-narrative to needing to be sub-sub-specialized in order to be able to manage information and take really good care of your patients across a wide variety of complex diagnoses. And I think that's really where the excitement is right now: can I use this system to augment my ability to care for patients?
8:44
And why isn't AI more ubiquitous in medicine? What has been the integration challenge up until now?
10:20
Well, there's a whole podcast just on that, Oz, I would say, but the short version is that we have been an incredibly skeptical discipline. It's skeptical of new technology and at the same time extraordinarily risk averse, for good reason. Right. We require significant evidence to change the way we practice. As you know, clinical trials take years and years, and some still fail. Actually, many fail. And we accept that as the system that keeps our patients safe and keeps us on the cutting edge. In terms of just the technical mechanics of adoption, we have a very rigid system in the software IT world, and that is changing. Again, what's so exciting about this is that any physician can pull out their cell phone and interact with this cutting-edge AI without needing to have, you know, three- or four-year-long cycles of integration with software. Right. And it's just the early days, but those are the trends that we're seeing.
10:28
Just to take a step back, I guess the classic model of measuring AI performance versus doctor performance was to present a hard problem, or a hard diagnostic conundrum, ask for an answer, and measure answer versus answer. How is that different from what you've done?
11:24
Yeah, well, it's even less precise than that. Up until now, at least for large language models, when people talked about their medical capabilities, they were actually using medical examination questions. So there's a question stem and then there's a multiple-choice answer. That's not medicine, but it is how we qualify our human physicians, right, to be granted a medical license. So we kind of used that for a long time as a surrogate or a bellwether. But it wasn't really...
11:42
Could it pass a test to be a doctor, rather than could it actually be effective at acting as a doctor? That's interesting.
12:12
Right. And we were able to show very early on with GPT-4 that these models outperform physicians on these multiple-choice tests. But there are all kinds of caveats there. Is that really medicine? Has it seen some of that data in its training? Assuredly, yes. Right. And is that useful? I think those questions came up. Now, in practice, it's estimated that 10 to 20% of AI interactions with these common chatbots like GPT are around a medical use case. So we know that there's someone who's getting value out of that somewhere, right? And we see it with our own eyes. So how do we bridge the gap to something slightly more realistic, in terms of not giving the model all the information upfront, just like in real healthcare? One of the principal thoughts around the study was: is there a way to take advantage of the incredible capabilities that these models have in medical diagnosis and knowledge, but also push it a bit further and not have it just be a question-answering machine? And so we thought, can we have several versions of the model act as different humans (this is that idea of an agent) and give them jobs? One job is to look at the economics of the test that you're trying to order. One is to question your next decision point. So the information isn't just in and out with one model; it's in and out through a system of models. And we show that no matter what model you use, whether it's Google's model, whether it's OpenAI's model, whether it's an open-source model, it improves that diagnostic capability on these extraordinarily challenging diagnostic tasks.
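To make the "several versions of the model with different jobs" idea concrete, here is a very loose sketch in Python. This is not Microsoft's actual orchestrator: every name, role prompt, and data structure below is invented for illustration, and the model call is stubbed out where a real chat-model API request would go.

```python
# Toy sketch of the role-agent idea: several "personas" share one
# underlying model (stubbed here), each prompted with a different job,
# and an orchestrator collects their opinions each round.
from dataclasses import dataclass, field

# Hypothetical role prompts, loosely modeled on the jobs described above.
ROLES = {
    "hypothesist":  "List the most likely diagnoses given the findings so far.",
    "test_chooser": "Propose the single most informative next test.",
    "challenger":   "Argue against the current leading hypothesis.",
    "steward":      "Flag any proposed test whose cost exceeds the budget.",
    "checker":      "Verify the panel's reasoning is internally consistent.",
}

@dataclass
class Case:
    findings: list = field(default_factory=list)
    budget: float = 1000.0
    spent: float = 0.0

def ask_model(role_prompt: str, case: Case) -> str:
    # Stub standing in for a real LLM call; a real system would send the
    # role prompt plus the case findings to a chat model here.
    return f"[{role_prompt.split()[0]}] considered {len(case.findings)} findings"

def panel_round(case: Case) -> dict:
    """One debate round: every persona comments on the current case state."""
    return {role: ask_model(prompt, case) for role, prompt in ROLES.items()}

case = Case(findings=["easy bruising", "gum bleeding"])
opinions = panel_round(case)
for role, opinion in opinions.items():
    print(role, "->", opinion)
```

In a real system the orchestrator would then parse the panel's replies into one concrete next action per round: ask a question, order a test, or commit to a diagnosis.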
12:17
So you had ten co-authors on this study, and, as we talked about, when it was released it took the world by storm. So, I mean, how did you go about designing the study, what was the hypothesis, and what did you find?
13:52
So this was a cross-Microsoft collaboration. But Harsha Nori, who is the lead on this, really wanted to say: we have a lot of evidence that these models perform well on these standardized tests, and then we see this real-world situation where that's not how people present. They don't show up with, hey, these are all my tests, these are all my problems, and these are the four choices of what I may have, right? So we took what are essentially some of the most difficult cases out of the New England Journal of Medicine and structured them in a way that requires a model to ask for more information or order tests, just like a physician would. The hypothesis was that that would be interesting in and of itself. But then, what if we also put humans through that same system? In other words: here's the first step, headache. Okay, what do you do next? Do I need to ask more questions, do I need to order a test, et cetera, et cetera. One of the really brilliant outcomes here was that having that system of agents, as opposed to just a single model, allowed us to have a more realistic understanding of the capabilities. In other words, if I wanted to know the answer and I'm a chatbot, my answer could be: let's order every single test that there is, and that would probably get you the right answer. Is that feasible? No.
14:06
Right.
15:18
So forcing it to think about resources and the cost of the care actually revealed something very interesting: what we would call the Pareto frontier of capability under constrained resources. So they were actually getting to incredible diagnoses very, very accurately, but also cost-efficiently. And that was really one of the biggest takeaways from this work.
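The Pareto frontier idea here, the best achievable accuracy at each cost level, can be illustrated in a few lines of Python. The numbers below are hypothetical, not results from the study: each pair is an imagined (average cost per case, diagnostic accuracy) for one configuration, and a configuration stays on the frontier only if no other one is both cheaper and at least as accurate.

```python
# Illustrative only: a Pareto frontier over hypothetical (cost, accuracy)
# results, the cost-versus-accuracy trade-off described above.
def pareto_frontier(points):
    """Keep points not dominated by another point that costs no more and
    is at least as accurate."""
    frontier = []
    for cost, acc in points:
        dominated = any(
            (c <= cost and a >= acc) and (c, a) != (cost, acc)
            for c, a in points
        )
        if not dominated:
            frontier.append((cost, acc))
    return sorted(frontier)

# Hypothetical configurations: (average cost per case in $, accuracy).
runs = [(7200, 0.20), (2400, 0.55), (4700, 0.80), (5000, 0.78), (2900, 0.62)]
print(pareto_frontier(runs))  # the non-dominated trade-off points
```

Here the $7,200 and $5,000 configurations fall off the frontier because cheaper configurations match or beat their accuracy; what remains is the best accuracy available at each spending level.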
15:19
Can you, just to make it more concrete for our listeners, kind of set up one of these cases as though it were an episode of House, dare I say, and then what the human doctors did and what the AI agents did, and then how you compared their performance?
15:58
Let's just say it was someone that had easy bleeding that was unexpected. They were brushing their teeth and they started bleeding, and it was kind of unusual. And they noticed that they were getting a lot of bruising, and there's just a certain battery of tests. I think that was pretty comparable on both sides in terms of what they ordered. But continuing that...
16:15
That would be what the AI ordered and what the human doctors ordered?
16:15
Human and AI, pretty much, right. So for the first few steps, I think there was a lot of similarity, which is expected. Where we started to see early divergence was that, because of that agent setup, humans did kind of jump to more advanced, more expensive tests more quickly. And that was interesting, because the models were able to get to the next step with a battery of less expensive tests. So we thought that was interesting, starting to see some divergence. And then, to be fair to the humans, they're still kind of handcuffed. In other words, they're just getting text feedback as they're interacting with the system, whereas when I'm with a patient, I'm seeing them, I'm able to take some cues, I'm able to examine them. So there were some limitations there. But nonetheless, once it got to the stage where you had a differential diagnosis, so a list of likely things, more often than not the model was ranking them in a much more data-driven order that ultimately led to the correct diagnosis much more quickly. Whereas, as you would with humans with these limitations, you're kind of going down some rabbit holes, you're maybe not ordering them in the best order, and so you're going down other paths that end up increasing the time or expense, or potentially leading to the wrong diagnosis.
16:17
After the break: how the multi-agent system, the Diagnostic Orchestrator, actually works. Stay with us.
17:35
Run a business and not thinking about podcasting? Think again. More Americans listen to podcasts than ad-supported streaming music from Spotify and Pandora. And as the number one podcaster, iHeart's twice as large as the next two combined. So whatever your customers listen to, they'll hear your message. Plus, only iHeart can extend your message to audiences across broadcast radio. Think podcasting can help your business? Think iHeart: streaming, radio, and podcasting. Let us show you at iheartadvertising.com. That's iheartadvertising.com.
17:56
Hi, Kyle, could you draw up a quick document with the basic business plan? Just one page, as a Google Doc, and send me the link. Thanks.
18:26
Hey, just finished drawing up that quick one page business plan for you. Here's the link.
18:33
But there was no link. There was no business plan. It's not his fault; I hadn't programmed Kyle to be able to do that yet. My name is Evan Ratliff. I decided to create Kyle, my AI co-founder, after hearing a lot of stuff like this from OpenAI CEO Sam Altman.
18:38
There's this betting pool for the first year that there's a one-person billion-dollar company, which would have been like unimaginable without AI, and now will happen.
18:59
I got to thinking, could I be that one person? I'd made AI agents before for my award winning podcast Shell Game. This season on Shell Game, I'm trying to build a real company with a real product run by fake people.
19:00
Oh, hey Evan, good to have you join us. I found some really interesting data on adoption rates for AI agents in small to medium businesses.
19:20
Listen to Shell game on the iHeartRadio app or wherever you get your podcasts.
19:22
I put the study through ChatGPT, which described the diagnostic orchestrator as being like a virtual team of five doctors, each with a different role. One lists possible illnesses, one chooses the best tests, one plays devil's advocate, one watches the budget, and one checks the quality of everything. The team talks it out step by step and decides what to do next. Is that a fair summary?
19:29
That's exactly right. And you can have infinite numbers of those agents. I think these five are just kind of scratching the surface of what's possible. I will say, just quickly, that I was incredibly happy to see that the curmudgeon agent, we called it, or the devil's advocate agent, was helpful, because you get into these groupthink situations, and it's kind of fun to watch a model argue with other models about some of the decisions being made, questioning the steps. So, where the models fall short today is outside of the text domain. And what I mean by that is, models are incredibly good at understanding medical concepts as they're communicated in text form. But when you get into the images and genomics and waveforms and all the other types of ways that we take care of our patients, the models are vastly underperforming humans. A good example of that: if I needed to look at a chest X-ray in one of these diagnostic steps, and the model had to interpret the chest X-ray, couldn't read the report, actually had to look at the image, it would fall short and fail nine times out of ten. So we know that that's a significant gap. But on the other hand, most healthcare, 80% of patients' interactions with their healthcare systems, involves some kind of other information, like an ECG or a biopsy pathology slide, right, or an MRI, for example. So I'm hoping to see agents that have those competencies included in the mix, where we can start to get to a place where the diagnostic environment meets how we're testing the systems.
19:50
There was a study last year which I was fascinated by, which found that AI diagnosis alone was better than human plus AI. In other words, you would assume, or you would hope, that a doctor using AI would be better than just an AI diagnosis alone. But in fact, the human-plus-AI model was worse than a pure AI model. And one of the conclusions from this was maybe that the doctors didn't want to listen to what the AI was telling them. I mean, did you see that study, and did it give you pause?
21:27
For more than a decade, we've been kind of dealing with this unexpected result. This goes all the way back to the earliest days of applying at least some of the powerful deep learning systems in healthcare. We have consistently seen that, in whatever setup, the AI, if you just leave it alone, typically does better than the human plus the AI, or the human alone. Now, is that an indictment of human ability, or is it more that we've set this up in a way that doesn't favor the real world? Have we not figured out the ideal human-computer interaction, or which tasks we should be offloading to the system versus the tasks we should be collaborating with the system on? I think that's really where the exploration is that I'm interested in, because I still hold out hope, and sort of some sense of self-preservation, that there is a future where the two together are better. Just how to offload which job, and in what sort of system: maybe it's five agents, maybe it's ten, maybe it's a thousand. You know, we don't know the answer yet; we're just barely scratching the surface. But in three years' time I expect this to be fairly common, that clinicians of all types will be working alongside, and/or even consulting with, some of these systems in the care of their patients.
21:58
And what is the adoption rate today? I mean, what would need to happen for this paper that you've written, and the system that you've developed, to be widely deployed in US or global healthcare?
23:21
In a very practical sense, there is a lot of regulation around this, and regulation requires very rigorous study and evidence and real-world deployment, all the things that you would expect if your, you know, care team is using some of these things to take care of you and your health problems. Generating that evidence, working with policymakers, trying to figure out exactly what evidence would get us to the point where we can say definitively, this is at standard of care or beyond, and it should be used, and here's how you use it: those things are very mechanical, but they're very important. It may also require a change in how we approach the regulation of medical software, because these kinds of systems are challenging the traditional software that we have used for decades in healthcare. Right. They're very different: they're non-deterministic, they have moments of brilliance and moments of, you know, stupidity, I should say. Right. You've seen these kinds of things. And so how do we actually design a system where it's safe, effective, and actually improving outcomes? That's ultimately the evidence we have to generate.
23:33
Yeah, I mean, beyond stupid mistakes, how do you see the risks here? I mean, we're seeing this research around the problems of cognitive offloading with AI, some suggestions that if you use AI too much you become dumber and deskill yourself. I mean, is there a risk of deskilling doctors? What are some of the maybe intangible but nonetheless medium-term risks that we should be considering here?
24:35
What they refer to as skill atrophy is real, and we've seen this in various other disciplines too. I think it will also require a shift in how we think about and perform our knowledge-work jobs. And one way this has been looked at is via the idea of metacognition. So rather than you having to be the central source of decision-making, are there things that you can manage? So imagine you managing a team of these agents. You have a goal, but you're offloading some of the cognitive tasks to those agents. Those are some of the early discussions around it. But I fundamentally believe that everyone that's in a knowledge-work industry or role will have to rethink how that role evolves in the future. And this is kind of that first step, at least for us in the healthcare space: do you need to memorize all these facts, or do you just need to have the right judgment, know where the models are good and not good, and be able to fill those gaps and manage like you would a team, like any manager would? Management 101: this person's good at this, they have some weaknesses here, right, I'm going to assign this task to them versus this. You know, that's that metacognition world where I think we're rapidly heading, and healthcare is going to have to figure out a way to do that as well.
25:01
Yeah, I mean, the other question around deployment is conflict of interest, right? The previous research I've seen is all around AI versus human doctors. But this element you've added around cost, it's not just outperforming, it's outperforming with less spent on tests, is a really interesting element, but it adds the potential for major conflict of interest on both sides. So, for example, I'm British and grew up with the NHS, and one of the consistent themes around the NHS was death panels: are there bureaucrats deciding when people should die, and what is the appropriate level of care to give people to prevent them from dying, given that it's a drain on a public budget? That's in the UK. Here in the US, we have this for-profit healthcare model, where there is an incentive, if you're insured, to worry that your physician or healthcare system is pushing you through numerous medical procedures because ultimately it's a profit center, and you may not actually need them. So how do you begin to grapple with those problems when you think about a system like this?
26:23
These are problems that have existed, as you know, even before AI. And I think that the responsibility for those of us generating the evidence and the capabilities, and displaying the rationale behind how these things work or don't work, and where they work, is a separate conversation entirely from both the economics and the cultural, societal aspects of just how we deliver care. I think it wouldn't be a controversial statement to say, at least from the US perspective, that our healthcare system is not ideal. Right. And that's true whether you're in a capitated system, a fee-for-service system, or a government-based system like the VA. I have to hope that the better angels prevail here. But I agree with you, and share your concerns, that in the wrong hands, or with the various misalignments that happen in these systems at all different levels, we could end up causing, you know, some disruption in a way that we aren't hoping to see in the end.
27:32
There was an interesting blog post that you wrote around cancer care, and it really struck me, because, I mean, I don't want to put words into your mouth, but as I read it: if you are one of the lucky few who gets to go to one of the great cancer centers like MD Anderson when you're sick with cancer, and you have access to this cross-field panel of experts who, as you mentioned earlier, are sub-, sub-sub-, or even sub-sub-sub-specialized, you have a measurably better outcome. Whereas in fact most people in the US, and certainly almost all people practically speaking globally, don't have access to these cancer centers. Talk about that, and about this idea of the multi-agentic AI, and how it sort of reflects or refracts what we've been talking about with the diagnosis piece.
28:31
Yeah, this was very important, so thanks for bringing this up. I think a lot of people don't know some of the inner workings of healthcare, where some of the really big bottlenecks are in terms of getting the best possible outcome. And one of them is in cancer care, as you're pointing out, where some of the leading centers, in particular in larger cities, have the ability to bring specialists from all different disciplines together to discuss a patient's care. And that's called a tumor board, or a multidisciplinary tumor board. The reason that not everyone can do that is not just because they maybe don't have that specialist in house, but also because of the massive amount of prep time it takes to gather all the information. It's not just the patient's data that you have to gather. You have to gather what clinical trials are new and available, and whether this patient is eligible for them. And what does the latest literature say? Someone has to go through all that information, prepare it, and then present it to a group. And what we found from ASCO, which is the large society in cancer care in the US, was that it takes between two and a half and three and a half hours of preparation time per patient. And some centers run thousands of these tumor boards a year, and those are the ones that have the most resources and are certainly the most accessible. The idea of AI, fundamentally for me, and the reason I'm in this field, is that I want to democratize that experience for everyone, increase the access. So no matter where you live or what you do for a living, you should have that same level of precision when it comes to your healthcare. And so this was that first step toward that. I do think this is going to continue to evolve, back to our conversation around managing a team of experts.
As your primary physician, could I call on a team of expert agents to help walk through some of the things we might not be considering in the 15 minutes we have together once every six months, or whatever that looks like? I'm very hopeful that given the right circumstances and the way the technology is progressing, we're going to get to a place, in a perfect world at least, where the access for every patient is equivalent to that of those who have access to the best resources.
29:19
You mentioned that 20% of all AI searches, or up to 20% of all AI searches are around medicine, which is fascinating. I didn't know that. But there are of course other people who don't want AI in their healthcare settings or who are worried about their human doctor or their primary care physician being replaced by an unfeeling machine. What do you say to them?
31:32
It's interesting. Going all the way back to the earliest days of search, that stat was about the same: up to 20% of Internet searches were healthcare related. And we're seeing two interesting trends. One, from The Economist, showed that among the searches going on today in the typical search engines, the category going down the fastest is healthcare. Isn't that interesting? Because where are people going instead? Well, they're probably going to the models. So I actually push back on that. I think most people want to be educated about their medical condition, and they want to feel safe and free to ask questions about their own healthcare in an essentially infinitely patient, knowledgeable, oracle-like environment. And again, we're not there yet, so I don't want to make that claim. But even me, I put my data into these models and ask questions about it, and I walk away sometimes learning something, or at least knowing what I should be asking my physician. So would I rather do that than see a healthcare provider? Not me personally. I do want to have that relationship with my physicians, but I also want to walk in much more knowledgeable, so I feel like we're on a peer level when we're speaking about my care decisions.
31:55
Matt, thank you.
33:07
Thank you so much.
33:08
That's it for this week for tech stuff. I'm Cara Price.
33:32
And I'm Oz Woloshyn. This episode was produced by Eliza Dennis, Tyler Hill, and Melissa Slaughter. It was executive produced by me, Cara Price, and Kate Osborne for Kaleidoscope, and Katrina Norvell for iHeart Podcasts. The engineer is Behead Fraser, and Jack Inslee mixed this episode. Kyle Murdoch wrote our theme song. Please do rate and review, and reach out to us@techstuffpodcastmail.com. We love hearing from you.