Google's Liz Reid on Who Will Own Search in a World of AI
51 min
• Apr 23, 2026

Summary
Liz Reid, VP of Search at Google, discusses how AI is transforming search through features like AI Overviews while maintaining Google's core advertising business. The conversation explores the tension between AI-generated summaries that keep users on Google versus traditional click-through models, and how Google measures success through user engagement and query expansion rather than just monetization.
Insights
- AI Overviews are expansionary for search—they lower the barrier for users to ask questions they previously wouldn't bother searching for, increasing overall query volume and platform engagement
- Google's monetization strategy is shifting from click-through dependency to query expansion and improved ad targeting through more detailed user intent signals in conversational queries
- Multiple product entry points (Search, AI Mode, Gemini app, Maps, YouTube) will likely coexist rather than converge, as different use cases and user preferences require distinct experiences
- AI quality and spam detection remain critical competitive advantages; the volume of AI-generated content matters less than Google's ability to surface trusted, high-quality information
- User behavior is evolving toward natural language queries that express full intent rather than keyword optimization, fundamentally changing how search ranking and ad targeting work
Trends
- Shift from keyword-based to conversational, intent-rich queries as users adopt AI-assisted search
- Expansion of search query volume through lower friction: users asking questions they previously deemed not worth the effort
- Multilingual and accessibility improvements through LLM-based search, particularly benefiting non-English speakers
- AI-generated content proliferation creating ongoing spam/quality challenges requiring sophisticated ranking and filtering
- Diversification of search entry points across devices and form factors (phone, desktop, watch, glasses) rather than convergence
- New advertising formats and targeting opportunities emerging from richer, more detailed user intent signals
- Cross-platform AI product usage patterns showing distinct user behaviors between Search, AI Mode, and the Gemini app
- Recruitment and skill assessment evolving to evaluate AI tool fluency alongside traditional technical competencies
- Privacy and data handling becoming more complex as conversational queries reveal more personal context
- Fact-checking behavior emerging where users verify LLM outputs against Google Search results
Topics
- AI Overviews and their impact on click-through rates
- Search monetization in the age of generative AI
- Query expansion and user engagement metrics
- AI-generated content quality and spam detection
- Conversational vs. keyword-based search behavior
- Multi-product strategy (Search, Gemini, AI Mode, Maps)
- Privacy considerations in conversational search
- Multilingual and accessibility improvements through AI
- Advertising formats and targeting in AI-assisted search
- Recruiting and evaluating AI tool fluency
- User intent disambiguation in short queries
- Ecosystem health and publisher relationships
- Form factor diversification (mobile, desktop, wearables, AR/VR)
- Fact-checking and verification behaviors
- Competitive positioning against OpenAI and other LLM providers
Companies
Google
Primary subject; Liz Reid is VP of Search; discusses AI Overviews, Gemini, search strategy, and competitive positioning
OpenAI
Mentioned as competitor developing independent LLMs; discussed regarding advertising business potential and search di...
Anthropic
Referenced as major independent AI lab alongside OpenAI developing generative AI technology
Microsoft
Listed as legacy tech company/hyperscaler attempting to keep pace with AI developments
Amazon
Listed as legacy tech company/hyperscaler competing in AI and search space
Meta
Mentioned as competitor with token-maxing culture and AI experimentation; example of aggressive AI adoption
Alphabet
Google's parent company; noted as having Gemini model that people discuss despite disruption risk to search
Vanguard
Episode sponsor offering bond funds and fixed income products for financial advisors
Bloomberg
Podcast network and media company producing Odd Lots and other shows
YouTube
Google subsidiary used for search queries, particularly in India; example of multiple search entry points
Instagram
Referenced as example of successful ad format innovation in feed-based products
People
Liz Reid
Guest discussing Google's AI search strategy, AI Overviews, Gemini, and search monetization in AI era
Joe Weisenthal
Co-host of Odd Lots podcast conducting interview with Liz Reid about AI and search
Tracy Alloway
Co-host of Odd Lots podcast; participates in discussion about AI search and user behavior
Jason Calacanis
Founder of Mahalo; discussed as example of pre-AI content spam and human-generated slop on the internet
Nick Carlson
Wrote article exposing Mahalo's low-quality content strategy; example of spam detection challenges
Quotes
"I think what's interesting about it is that the space of search is very big and what people are trying to do is very big. And sometimes people really want quick answers and they want it right in front of them. And sometimes they want to go deep or they want to hear from particular individuals, right? I think there's this sort of myth that people want AI or the web, that I actually think what we see is that people want AI and the web together."
Liz Reid•~20:00
"The answer doesn't buy the pair of shoes. You actually have to buy the shoes right so you still have to go pick a merchant for that people care often to hear people's perspectives right like you'll talk about okay well we want a bunch of answers and yet this is like a golden age for podcasts right so clearly people sometimes want to spend a couple of seconds and other times they'll spend a whole hour listening to things"
Liz Reid•~22:00
"One of the things that's very surprising to people at times is they think they come somewhere for all the questions they have already today, right? Like maybe they think they come to Google all the time, or they I think they go to Google plus LLMs or Google plus LLMs plus TikTok plus whatever. They think they ask all the questions they have. But that's not true. You actually make a calculation when questions go through your mind of, is it worth spending any time to figure out the answer to this question?"
Liz Reid•~55:00
"If you use them blindly, you're going to run yourself into trouble. Okay. If you as a leader sort of don't use judgment on them, then you get the example of like, I will just create a job that runs in the background and does dumb things to do the tokens, right?"
Liz Reid•~48:00
"I don't think the space is so huge and it's changing so quickly right now that to sort of be able to know for sure whether or not you can create one sufficiently dynamic personalized experience that one app, one entry point can truly do it all. I don't think we know yet on that."
Liz Reid•~68:00
Full Transcript
Thanks for listening to Odd Lots. Follow the show on Amazon Music for more future episodes, or just ask Alexa: Play the podcast Odd Lots on Amazon Music. Today's show is brought to you by Vanguard. To all the financial advisors listening, let's talk bonds for a minute. Capturing value in fixed income is not easy. Bond markets are massive, murky, and let's be real, lots of firms throw a couple flashy funds your way and call it a day. But not Vanguard. At Vanguard, institutional quality isn't a tagline. It's a commitment to your clients. We're talking top-grade products across the board of over 80 bond funds, actively managed by a 200-person global squad of sector specialists, analysts, and traders. These folks live and breathe fixed income. So if you're looking to give your clients consistent results year in and year out, go see the record for yourself at Vanguard.com slash audio. That's Vanguard.com slash audio. All investing is subject to risk. Vanguard Marketing Corporation, Distributor. Bloomberg Audio Studios. Podcasts. Radio. News. Hello and welcome to another episode of the Odd Lots podcast. I'm Joe Weisenthal. And I'm Tracy Alloway. Tracy, when we talk about AI, we talk a lot about the big labs, the big independent labs, particularly OpenAI and Anthropic. But of course, we know that the legacy tech companies, the so-called hyperscalers, et cetera, they're not just going to give up this groundbreaking technology without a fight. No, and we've certainly seen various efforts to, I don't want to say catch up, but to keep pace, I guess. Keep pace. Yeah. So of the major legacy companies, which I would say are like Microsoft, Amazon, Meta, and Google, or sorry, Alphabet, I would say clearly Alphabet is the one that has the model that people are talking about, Gemini, right? 
And in a way, that's kind of surprising because one common intuition that people have is that companies aren't very good at developing the thing that disrupts their own legacy business, right? This is just a famous sort of B-school thing that people talk about all the time. Yes. So the issue, one would think, with AI and Google in particular is that Google, you know, famous for its search. Yeah. No one believes me, by the way. The first time I ever used Google in like, I guess it was the either late 1990s or early 2000s, I told my parents to invest in the company and they didn't listen. Oh, really? Yeah. They don't believe me either, but I swear I did. The first time I used it, I was like, oh, my God, this is so much better. So I feel so dumb because I remember the Google IPO very well. And I thought I was very smart because it was like I was 20. I think it was 2004. I thought I was very smart back in those days. I don't think now that I'm older, I realize I don't know anything. But I thought I was very smart. I was like, oh, these sheep, they're just buying. It's overvalued. Should I short this IPO? It's like a bubble, et cetera. Thank God I didn't put on some short trade and go bankrupt. But it never occurred to me to go long because I don't. And I think it's the journalist temperament. We're all very cynical. That's right. You have to be an optimist to be an investor. Yeah, you have to be an optimist. You have to be willing to be part of the crowd. You have to ride the wave. We don't really like riding waves, et cetera. And so I was like, oh, should I short this? Anyway, I should have bought it. Okay. I should have held it. And you know what? I should have bought it. Even though we don't trade, like, should have bought it in, like, 2023 when everyone was saying that ChatGPT was going to eat its lunch. Should have bought it a year later when people were saying, oh, Gemini is too woke, et cetera. All these turns, there are opportunities. 
But anyway, I'm happy to just talk about it. All right. Now that we've gone on a very long segue, what we are getting at- It's an important segue. What we are getting at is that in theory, AI would seem to pose a threat to Google's core business, which is search. So if you type in a query in Google now, I think I have one open from the last time we recorded a podcast. I don't know why I was looking this up. It says, can you see tankers physically from the Strait of Malacca? Oh, yeah. In the Strait of Malacca from Singapore. So if you type that into Google, it used to be you would just get a bunch of search results. That's right. Now you get an AI overview, which basically pulls in a bunch of results from other pages and gives you a sort of summary. And so the question is, if people are just going to be looking at these summaries instead of actually going to the pages that Google search used to turn out as links at the top of the page, what does that mean for traffic via Google search? Well, and then the other big element here, which is that, you know, people stop like clicking as much on links or even seeing links. What does that mean for the advertising business? If the expectation is that you see the answer right there, et cetera. And one of the questions we have about OpenAI in particular is, are they ever going to be able to launch an advertising business? Can these two things combine, et cetera? We know that to this day that Google search ads are the greatest money printer that's ever been invented, basically, in the history of the world. And so how this is going to interact and how Google is thinking about these questions, and just say, it's a little unclear to me because, yeah, it's nice. I could just put on the same question. Can you see tankers in the Strait of Malacca? I don't know if these are right. I don't know if the AI overview is right, but it's there. It's there. So lots of interesting questions here. Also, AI slop, right? 
Are the search results that are being turned up actually of any certain quality, right? It's a great point. And one of the reasons, perhaps, that a lot of people, and I would include myself in this, use AI more and more: it's because I have, like, you know, I have some issues, let's say, with search. So anyway. I'm going to clip that. I have some issues, Joe Weisenthal. Anyway, we really do have the perfect guest to talk about this. Someone who is really, literally right in the middle of all of this and can answer all of our questions. We're going to be speaking with Liz Reid. She is the VP of Search at Google; she has been at the company for over 20 years and has been in her current role on the search team for a few years. And so we're going to hopefully get answers to all these questions. So, Liz, thank you so much for coming on the Odd Lots podcast. Thank you for having me. Delighted to be here today. Absolutely. Why don't you just start by telling us, like, what's your role at Google? Okay, you're the VP of Search at a company that people know for being a search engine company. But what does your title actually entail? I lead the search team, which you can think of as covering the product, the engineering, our UX designers and data science, sort of the team that fundamentally builds the search product that you use. So how much of your day-to-day is taken up thinking about AI nowadays versus, like, let's say, two years ago? Well, two years ago, I would say it was also still a fairly large amount. I think AI is a deeply transformative technology in what it opens up. And I think AI has been in search for many years in different forms. It's much more in the forefront these days with things like AI Overviews and AI Mode. But if you go back several years, it was how we transformed a bunch of ranking with efforts like BERT and MUM that were built on some of the early transformer breakthroughs. 
But at the end of the day, you know, if you go back once upon a time, AI didn't just refer to generative AI; it referred to general machine learning and other things like that. And you know, in the early 2000s, Google had spell correction, which felt sort of revolutionary at the time, that used AI. Right. No one calls spellcheck AI anymore. Yeah. Nobody calls spellcheck AI, but it was at the time, right? It just shows you how far the world has come. But the opportunity to really transform search and realize Google's mission at a new level is really exciting and an amazing opportunity, and very humbling to do this for a product that so many people use. By the way, you mentioned BERT. My little software hobby project of training a machine learning model to tell whether something is more indicative of the written or spoken word is based on BERT. So thank you for developing that. And thank you for open sourcing it so that someone like myself can now train it. But talk to us about just how you're thinking about this core tension: for years, Google had a business where you put in a term in a search bar and then people click out. And some of the clicks were to organic results and some of the clicks were to paid results. And now we're entering this world in which people expect to get whatever they want right there from the query, without that impulse for a click. And you have a business that is still dominated by that click out, one way or another. So, you know, and I want to drive into the details, but big picture: is that a real tension? I think what's interesting about it is that the space of search is very big and what people are trying to do is very big. And sometimes people really want quick answers and they want it right in front of them. And sometimes they want to go deep or they want to hear from particular individuals, right? 
I think there's this sort of myth that people want AI or the web, when I actually think what we see is that people want AI and the web together. Okay, there are certainly questions for which you just want the quick answer, and then you're done. And that's been true in many ways for years, right? We'll talk about this with AI, but I bet most of the time you look up the weather, you just want to know what the temperature is and you're done, right? But then you're going to go on a trip and you're going to actually dig in more because you're going surfing, or other pieces. I think if you think about this, people are like, oh, I have an answer, so why would I do an ad? Well, the answer doesn't buy the pair of shoes. You actually have to buy the shoes, right? So you still have to go pick a merchant for that. People often care to hear people's perspectives, right? Like, you'll talk about, okay, well, we want a bunch of answers, and yet this is like a golden age for podcasts, right? So clearly people sometimes want to spend a couple of seconds, and other times they'll spend a whole hour listening to things. And so one of the things we see with the shift to AI Overviews is that this becomes more pronounced: what's your goal, okay? If all you were going to do was go to the webpage, see the fact, and immediately click back, you're going to spend like half a second on the page, okay? You see those things shift. But if what you were going to go and do is read an article for five minutes, you're still interested in reading that article for five minutes, right? The AI Overview might help point you to the right page. So we see fewer bounce clicks, where a user would sort of go and immediately come back because they weren't happy. You see people go, though, and they want to hear from other people. They want to hear their expertise, their perspective, their unique take. I take fashion as an interesting example sometimes. 
If you hate fashion, then you love using chatbots to replace the need, right? You didn't really want to spend any time; that's fine. But if you were someone who was spending a lot of time reading influencers and what their interests are in fashion, you have not decided to replace that with a chatbot, right? You're still going to want to hear from those fashion tastemakers. And so there's an opportunity with AI Overviews to help you get started and then make it easy for you to dig in and connect. And I think people's interest in connecting with other people is just as strong these days in many ways. So just at a simplistic level, can you tell me, like, how does Google determine whether it shows the AI Overview or not? So if I type in Corgi into Google search, I'm biased because I have two Corgis, but it just gives me a bunch of links, to like the American Kennel Club and like the Corgi subreddit and things like that. If I type in what is a Corgi, question mark, it gives me the AI Overview. Is it just everything with a question mark returns an AI result, or how are you actually deciding what to present to users? No. What we try, and an important premise of this, is that we shouldn't give you AI for the sake of giving AI, right? The point is to use it when we think it adds value to people. And so it's not really associated with question marks. Question marks are often when people are looking for more of a description; they have a harder question, which maybe a single web page doesn't answer, or whatever else. But what we're really doing is looking at signals from users to say, does the AI Overview provide additional value or not? And so, most people, I haven't studied the Corgi query in detail, but for a query like that, probably what we're seeing is that most people aren't just trying to figure out what is a Corgi. Maybe they want to see pictures of it. They want to click on knowing more about the dog breed because they're trying to engage with it. 
And so we basically learn over time, based on user signals, the same way we learn about when we should show the weather onebox, when we should show local results, and when we should show sports. If the AI Overview provides additional value, great, we show it. If it doesn't, then we don't; we want to get out of the way, right? Take your search for Wikipedia. You go and you type in Wikipedia, which a lot of people do. They want to get to Wikipedia. They don't want us to go and say, let me give you the history of Wikipedia. That's not why they searched for that, right? If they search for Odd Lots, right, they probably want to quickly get to your podcast. And so we have a variety of signals that try and help us understand when it is adding value and not. And we get smarter over time as people change how they ask questions and as the models get smarter, right? Like, we don't want to put up an AI Overview if we think it's not going to be high quality. So as the models have gotten more powerful, we can cover more cases and just continue to develop, really with the focus being: what is the best response to give a user for the question they've asked? 
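Reid's description of the trigger logic can be caricatured in a few lines of code: learn from aggregate user signals whether an overview adds value for a query, and otherwise get out of the way. The following is a purely hypothetical sketch; the signal names, thresholds, and the decision rule itself are invented for illustration and are not Google's actual system.

```python
# Hypothetical sketch of signal-based AI Overview triggering.
# All fields and thresholds below are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class QuerySignals:
    overview_engagement: float  # fraction of past sessions engaging with the overview
    bounce_rate: float          # fraction of clicks that bounce back within seconds
    is_navigational: bool       # query just wants to reach a page ("wikipedia", "odd lots")
    model_confidence: float     # model's confidence it can answer at high quality


def should_show_overview(s: QuerySignals) -> bool:
    """Show an overview only when it is judged to add value."""
    if s.is_navigational:         # user wants the destination, not a summary
        return False
    if s.model_confidence < 0.8:  # avoid low-quality generations
        return False
    # Overviews help most when users engage with them, or when plain
    # results produce many unhappy bounce clicks.
    return s.overview_engagement > 0.3 or s.bounce_rate > 0.5


print(should_show_overview(QuerySignals(0.45, 0.2, False, 0.9)))  # True
print(should_show_overview(QuerySignals(0.45, 0.2, True, 0.9)))   # False
```

In a real system the thresholds would themselves be learned from the logged signals rather than hand-set, but the shape of the decision matches what the interview describes: default to plain results unless the overview demonstrably helps.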
I have a question about what you see among user behavior. And my question is, do you see the same user, or a cohort of people, who use both google.com and gemini.google.com? And do you see distinct patterns of queries from the same user, but different types of searches? Or do people just sort of throw the question mostly in the Google search box and start from there? Or do you see people who just use Gemini and do everything there? 
Like, what do you see in terms of emergent patterns about how an individual chooses which of the sites to enter into first, and how they do different queries in each one? Yeah. So maybe just, so we're all talking about the same thing: there's sort of your main search page. There's AI Mode; that's part of search. And then there's the Gemini app, right? And I would say, like, there's a lot of users, so their behavior varies across all of them, but there are some patterns. Okay. There's plenty of people who co-use across them. There's plenty of people that are actually using several AI products right now, just in general, right? Not even just within Google. Across Gemini and search, the more informational ones: if it's an informational query, then the probability that they're using search or AI Mode is going to be higher. If it's a creative query, it's more of a productivity question, like, please rewrite this to make it sound more formal, right? Those types of questions are going to be more Gemini-oriented. Between AI Mode and search, the main search page, some people use AI Mode mostly via AI Overviews. They start on AI Overviews and they transition. For those who go direct to AI Mode, they tend to do that for queries that they consider sort of more complex, longer questions, questions where they expect that they're going to do more follow-ups, versus if you're doing a very browsy query, you might prefer all of the SERP. If you know that your goal is to just get to a particular web page, you're more likely to start with the search result page. But, you know, there's obviously overlap in the use cases. Between search and AI Mode, it tends to be sort of longer, more complex, more conversational queries versus more traditional queries. And between Gemini and search, there's more of a productivity-and-creativity versus information slant to them. 
So since we're talking about user behavior, one of the things that seems to be happening now is people will use an LLM, it doesn't matter which one, they'll ask a question, and then they will go and fact-check the answer that they get on Google. And I'm really curious if that's something that you're aware of as a sort of user behavior, and if the idea is that maybe Google becomes, maybe not an AI overviewer per se, but maybe the sort of fact-checker of last resort for other LLMs? I think we're definitely aware that people use Google as a fact-checker for some of their LLM use cases. I think people have used Google as a place to fact-check information pre-LLMs for a number of things. A friend tells them something, you know, and so on. But I think people use search for a lot more than just fact-checking, right? Or even just looking up facts. They want to go browse what they're going to buy; they want to go check the sports score of the latest team. And we do see, with the presence of AI Overviews, that people are asking longer questions, they're asking more conversational questions. And so some of these questions they started bringing to an LLM; as we brought AI Overviews in, they took those same types of questions and brought them to search. One of the complaints about search, and I would say I've complained about this, is how many of the results in the SERP are, like, almost too timely. Let's say someone enters the news; there's a headline about so-and-so. I'm like, I'm really curious about this person. And so I, like, search their name and I get 10 results, or however many results, all from the last day since they made the news. And it actually is difficult for me to find information that didn't have the context of the news, so it was unbiased in some way. Is that recognized as an issue? Because I certainly feel it as a user. 
But I'm curious, from your perspective at Google, whether this is something that you think about as a way in which search becomes less than ideal. I think one of the things that's generally very challenging about search is that people enter the same query, often a very short one, with a lot of different intents in mind, right? And so in a bunch of the examples like yours, probably our data says that most people just want to see the recent articles, and those are the ones that get all of the clicks, but you didn't, right? And so how do we figure out all of the different intents and match across them? And so I think there's this question about both how you get the different facets of a question and how you can personalize the results more effectively for people. I do see more, and this is anecdotal as opposed to complete data, but to your example, people using AI Mode and AI Overviews for some of the people queries, to understand more about the person independent of the rest of the news articles. Because most of the people searching for it know who the person is, but some of the people searching for it don't. And so you can see behavior like that. But I do think one of the interesting things about the evolution of AI is that people stop talking just in keyword-ese as much, and they start expressing more of what they want. And then that becomes much easier for us to give an answer. So if you say, tell me about someone, versus, what's new with someone, that's actually much easier for us to figure out how to give better results for than if all you type is the person's name, right? An example query we used to talk about, in a different way, is falafel. What do you want to know with falafel? Some people don't know what falafel is; they want a definition. Some people want recipes. Some people want to find where to eat. Some people want nutritional information. They all just use the word falafel. 
And that's just harder, to figure out how, across all of them, we do that. We also see that with tensions on things like, well, do you want video results or do you want more text-based results? People are very opinionated about what the right answer is, but they are not very opinionated in the same direction. And so we try and meet, you know, multiple billions of people's needs at once. Just to be clear, the falafel example is great. Would you say that today you see a greater diversity of falafel-related queries, whereas maybe five or ten years ago you'd just get falafel, and now people know that they could type in, where does falafel come from? What is a falafel recipe? Et cetera. Have people gotten more sophisticated over time in their query specifications about falafel? I don't know falafel very specifically, but in general, yes. We have seen, with AI Overviews, meaningfully longer queries. We see more natural language queries. But it's also not even something as basic as that; it can also be, like, you were searching for restaurants. We used to laugh about this: before I worked on search, I worked on maps and local, some of the intersection with search, and people would just type, restaurants, New York. And you're like, what do you want me to do with that query? Right? Okay, the best restaurants in New York are going to take three months to get into, and 99.9% of the population can't afford to go to them. Okay. But, like, are you picking 10 random ones, et cetera? But part of why people would do that is they had a much more complex need: I want a restaurant in this location, for five people, it can't be too pricey, I have a vegan member of the group, I also have kids. That was the question they had in their mind. And in the old world of keyword-ese, none of that information would be kind of spread throughout the web, and so you wouldn't feel confident you could just put in the question. 
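The falafel problem can be made concrete with a toy model: a short keyword query carries a spread of possible intents, while a longer natural-language query pins one intent down. The intent labels and probabilities below are invented for illustration; this is a hypothetical sketch, not how Google actually models intent.

```python
# Toy model of intent ambiguity in short vs. natural-language queries.
# INTENT_DIST is a made-up lookup table standing in for learned estimates.
INTENT_DIST = {
    # Short keyword query: intent spread across several possibilities.
    "falafel": {"definition": 0.20, "recipe": 0.35,
                "restaurants": 0.30, "nutrition": 0.15},
    # Natural-language query: intent nearly unambiguous.
    "what is falafel": {"definition": 0.90, "recipe": 0.05,
                        "restaurants": 0.03, "nutrition": 0.02},
}


def dominant_intent(query: str) -> tuple:
    """Return the most likely intent and its estimated probability."""
    dist = INTENT_DIST[query]
    intent = max(dist, key=dist.get)
    return intent, dist[intent]


def is_ambiguous(query: str, threshold: float = 0.5) -> bool:
    """A query is ambiguous if no single intent reaches the threshold."""
    return max(INTENT_DIST[query].values()) < threshold


print(dominant_intent("falafel"))        # ('recipe', 0.35): a plurality, not a majority
print(is_ambiguous("falafel"))           # True
print(is_ambiguous("what is falafel"))   # False
```

Under a model like this, an ambiguous query forces the result page to hedge across facets (definitions, recipes, places to eat), while an unambiguous one can be answered directly, which is the shift Reid describes as people move from keyword-ese to stating their actual need.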
And now, with AI Overviews and AI Mode, you can start to actually, and you see people do this, they tell you the real problem, right? They don't take their need and translate it to what the computer understands. They try to give the computer their actual need and expect us to do the translation. And I think that's really exciting to see, because then we can be more helpful, but also those are real problems people had. If you go back to the mission, it was organize the world's information and make it universally accessible and useful. That useful part, right? It's not just that it's organized. Is it useful to you? And I think one of the most exciting things about AI, the transformation going on right now, is that you can actually make information much more useful to people. And that really opens things up. It makes it so that people just ask more questions, because we can actually do a better job meeting their needs. Does that come with any complications in terms of privacy or competition for Google? If people aren't using keywords as much anymore, if they're doing basically query brain dumps into the prompt and saying, you know, I am so and so, I have a kid, I live here, I want to do the following things, this is my issue. Is that an added layer of complexity that you have to deal with as a large search company? I mean, from a privacy perspective, we give people a range of different things. They can be sort of incognito. They can be signed out. They can be signed in across. And I think Google has a long tradition of really treating people's data with a great deal of care and having cutting-edge security and privacy by design. So I think people are seeing the value and they have continued their trust in Google. I think it means it's a harder job on quality, right? You have to take this question that's many parts, and you have to figure out how you break it apart.
And you have to do work to think about things like latency, because if everyone uses the same keyword and it's not personalized, then you can cache it all. If all of a sudden the queries get much more diverse, it has consequences there. But I think we just see that it's very empowering for people, right? That it takes some of the work out of searching. Sometimes people think, like a few years ago they said, oh, what more can you do with Google search? But if you actually ask them, okay, when was the last time you spent 20 minutes searching when you would have preferred to spend two? It's actually not that hard for them to answer. Oh, the last time I was trying to find a service provider. The last time I was trying to go do these bigger tasks in life. And so it's been kind of exciting to just make people's lives easier by helping them address their real need. I have so many different theoretical questions I can ask. Here's one. Actually, you know, this is something that was inspired by our producer Dash, a conversation that I had with him 10 minutes ago prior to this and related to something else. I imagine in your career at Google, spanning over two decades at this point, you've been involved in quite a bit of recruiting, and recruiting software engineers in particular. And I imagine that's a particularly important aspect in some way or another for the VP of Search. Given what we've seen with AI coding and so forth, when you're doing one of these LeetCode software developer interviews, et cetera, is it different today than five years ago? Do you have to think really differently about the battery of technical questions that you would propose to a software engineer today, given the restructuring of the nature of the job in the world of AI-generated code? I think the process is definitely evolving. I wouldn't say that we have perfected the science yet. But there's two angles in which you're thinking about it, right?
One is, you don't want to ask questions for which they just go and type in the answer in the chatbot and recite it back to you, okay? So you need to make sure. To the extent that your goal is to understand, are they critically thinking? Are they able to think through a problem? You want to make sure that that's actually what you're assessing, right? And so, is it in person? How are you doing that, on some basic level? But there's the other thing, which is that these tools are powerful. You can use them in ways that make you more effective, and you can use them in ways that make you less productive. As the fluency is changing with AI, the way a software engineer might approach the problem now is different than they might have approached the problem five years ago without some of these tools. And so I think we're all learning how to change how we ask that question, right? Are you building up that expertise? Are you building up that fluency? And the fluency isn't fixed. What was possible with the tools six months ago, let alone two years ago, is different than what's possible now. And in six months, it will be different. So you have to start thinking about how part of your interview is assessing fluency with the use of tools, in the same way that when IDEs became important, or when people stopped using assembly language and started doing Java, right? You had to sort of evolve the interviews. It's just that it's happening very fast. Okay? And so we all have to be on our toes, but it's exciting. Like, you play with this tool and it doesn't work for something, and then it's not play with it two years later. It's play with it three months later. Maybe the tool will now work for these things. Okay, well, speaking of tools, I mean, one of the things, Joe, I think you've said this.
You've been playing around with Claude Code, this idea that actually when you start vibe coding everything and telling your agent to do everything, it feels like you don't even necessarily need a computer, much less a search engine, presumably. So I'm just curious, if you gaze five or 10 years into the future, what do you think the default entry point for interacting with the web is actually going to be? Is it going to be a search engine like Google? Is it going to be a specific LLM? Is it going to be my personal agent that I've vibe coded for all of my preferences? Yeah, can I just add on to this question? This is something I think about. Like, if I want to send an email to Tracy today, then what I have to do is find the tab in my browser, I scroll over there. Okay, that's my Gmail tab, et cetera, whatever. I would like to just be in my terminal. It'd be so much easier to say, here, send an email to Tracy saying this. There's all these steps that I currently do because of the nature of graphical user interfaces that, now that I've gotten into Claude Code or whatever, feel a little clunky. It feels a little yesterday. And so, yeah, I'm extremely curious about: will the web, with these series of boxes that we drag and drop, et cetera, is that the future, or will it just be someone talking in English to their computer? I don't know. Like, 10 years is a long time right now in where the tech is. That's fair. In one year. Three months from now. Will we still have browsers three months from now? People are like, okay, we believe in 10 years. Will it be AGI? Will anyone be doing anything the same? Okay, so with that aside, I think there are some things I believe in and some things I think we don't know. I think you already see, if you go back 10, 20 years, that the way you interact with the tech has evolved a bunch, right? It used to be it was just the laptop. Well, now it's the phone. Well, now it's also the watch, okay? In some cases, it's the glasses.
This sense that it should feel like the information is sort of at your fingertips in whatever medium is useful, right? But I don't know that this becomes a... We haven't so far replaced all of the old ones, right? Like, you use the phone a lot more, but my guess is you're not doing all your Claude Code work on your phone; you're doing some of it on the desktop. That's true. Right. The introduction of the watch has supplemented, but it hasn't eliminated the desktop, right? So what's been interesting actually is that it hasn't gone in the direction of converging to one answer. It's actually increased the form factors. And so you want to be able to access this information wherever you are, in whatever form factor makes sense. And so will it be glasses? Will it be something else? Quite possibly. But let's even say glasses become a big deal. Glasses are very small screens. Even there, you're probably still going to do your big productivity thing on desktop. So I think what you'll see is that the access point is not confined to one thing, but that the key is to eliminate the friction, right? And the toil. To your point, you had to do six steps. You didn't want to do the six steps. Why should you do the six steps? Okay. I think you see that some things are much easier to do with a chat interface. And then some things, actually, a chat interface is a super slow way to go do, right? Like, if you have a list and you have to go say, please remove this long title for the 10th item, that's actually much harder to do with chat than with an interface that does that, right? So I don't think it necessarily converges on a single thing. I do think it should feel much more adaptive, to your point about, well, if this is the way you prefer to interact, not just where you are, but how you interact.
And can you customize, and can you create? To what extent do the user interfaces look designed for you versus designed for everyone, and can you have influence on them? I think you'll see that. I do think we sometimes, like, we're very aware of what doesn't work well; we're not necessarily aware of what does work well. Like, companies spend huge efforts working on how they do shopping carts really well. Yeah. Okay. This belief that sort of the chatbot will have a more optimized shopping cart for every shopping cart place in the world than the one you go to every day? I don't know. Not clear, right, for those things. But I do think it should feel much more personal. It should feel much more dynamic. It should feel much more ambient and available to you. And I don't think it will be one size fits all, either per person or per form factor.
I've been reading some articles. I think I saw one, there was a big one in The Information recently, talking about one of your competitors, Meta. Yeah, it's a competitor. And it was like, everyone's token maxing there. And there's a token leaderboard, and people are competing to show that they're using AI more than others. And from my perspective, that boggles my mind, because compute is a cost, and just using compute per se does not strike me as a particularly good way of measuring who is productively contributing to the company. I mean, I could certainly find an easy, quick, recursive way to burn tokens. And generate a bunch of AI images of corgis. Yeah, generate it and then create one that just keeps telling it to improve itself, et cetera. The flip side, which some people say, is like, look, it doesn't matter at this point, because everyone has to figure out how they're going to use AI productively in their work. So, you know what? Don't even worry about metering AI. Tell everyone to go pedal to the metal on AI use. And if someone is maxing out on tokens, it means they're experimenting with something, and then they'll find something that really is a productivity enhancer. I'm curious if, from your perspective, it makes sense to essentially see token consumption or compute use as a proxy for someone who's doing their job aggressively well.
I think the thing with all of these proxy metrics is, if you use them blindly, you're going to run yourself into trouble. Okay? If you as a leader don't use judgment on them, then you get the example of, like, I will just create a job that runs in the background and does dumb things to burn the tokens, right? So as a leader, your job is to use good judgment and not just think about the incentives. So if somebody isn't playing around at all with the tools, when we know that they can improve productivity, then we need to figure out why, and how we help support them. Maybe there's some issue with the part of the system they're working on and we should go fix it. Or maybe we just need to help upskill them, or whatever else the case is. I do think there is a level of experimentation required. So I don't think it works if your answer is, you need to ensure that all your token use is completely optimized. It's not going to work. People have to learn what's possible. They're doing different jobs. The tech is changing. But it can't either be, like, don't use the tools, or just max your tokens blindly. It's a noisy signal, but it's a signal, right? So go look at it and understand it as a place of where to look; don't use it as a final judgment. So speaking of measures and not oversimplifying them, I want to go back to the sort of core tension that we started the conversation out with, which is the AI results versus people actually clicking through to results and generating traffic. And I know you were talking about AI being expansionary or complementary for Google search, but I'm very curious how you actually measure that. And the more granular you can get on this, the better. Like, what are you specifically looking at to say that actually this is something that's good for our business versus something that's detracting from the core? So I guess I would say Google's guidance and North Star has always been: focus on the user.
That's our biggest question at the heart: how do we make a great experience for users? And then you want to be thoughtful, obviously, about other concerns. Okay, like if you don't have a healthy ecosystem, you can't build a service ongoing. So you need to make sure you're nurturing a healthy ecosystem. If you make no money, then you can't fund this wonderful service. So you have to be thoughtful about those. But the place you start with is: try and build something amazing for users. One of the things we've seen again and again with Google search is, if you're doing a really great job, people will not just do another query, they will come back to you more often. They will take their phone out of their pocket an extra time. That's a high bar. It's one thing to go and say, I've showed you something, can you do one more thing while I'm showing it to you? It's another thing to get you to decide you're going to bother to unlock your phone. You're going to boot up your desktop. You're going to navigate in the browser. And so one of the things we really look for is, when we're doing these changes, does it cause people to come to search more often? Not just use search more often, but come more often. We also do various UX research studies and try and understand what are people happy about or not? What are the things they find frustrating? Are more users adopting it, not just how much are they using it? So we look at a bunch of different metrics. But one of the biggest is really, do you choose to come and ask Google? Do you essentially hire Google more often, right, for things you need? And one of the things that's very surprising to people at times is they think they come somewhere for all the questions they have already today, right? Like maybe they think they come to Google all the time, or they think they go to Google plus LLMs, or Google plus LLMs plus TikTok plus whatever. They think they ask all the questions they have. But that's not true.
You actually make a calculation when questions go through your mind of, is it worth spending any time to figure out the answer to this question? And if the answer is no, then you just don't ask the question. And so when we talk about AI as an expansionary moment, what we really mean is there's a whole bunch of questions people have. There's a whole bunch of curiosity that people are not exploring. And they're not exploring because they view it as too difficult, or too much time, or they're not sure that it will be worth it. And AI lowers that barrier. And it can lower that barrier in ways that are sometimes, for US English speakers, surprising, which is, like, actually, in a bunch of countries, not all the content on the web is in the language you speak. Okay, LLMs can help unlock that content. AI Overviews, because it's using an LLM, can be more multilingual than the web corpus is by default, right? Okay, so suddenly information that wasn't available to you as a Hindi speaker is now available. It can be visual. You had a question about this flower. You had a question about that cool purse you saw, like where can you buy it, but you didn't know how to describe it. Now it's possible. It can also just be, my kid has a question. Do I say, I don't know, or do I go ask the question, right? You see with young kids, they ask questions all the time, right? They go, why, why, why, why, why, why, why. And at some point parents are like, because. Stop bothering me. Go ask the LLM. They go, okay. Let me Gemini that for you. But they do that because, from a kid's perspective, they assume adults know everything, right? And it is no cost to them, right? They're not worried about their time and other things. As an adult, it's not that you're not curious. You just don't think everything is known, and you don't have the time, right?
And so if you lower that barrier, it allows you to be that kid again that just explores all of these things, or get started on those projects that felt daunting, or it enables you to save, or learn a new skill, or whatever else. And that's really exciting. I don't want to dismiss the wonder of being a kid and learning about the world, but I'm going to sound very callous in a second. How do you make money off of that? Like, how do you make money off of the AI Overviews? Is it just customer retention? Is that what we're basically boiling it down to? And then if it is customer retention, then you could have a strong argument for saying that everything can just go through Gemini instead of search. I would say a couple of things. Search only shows ads on a subset of queries, right? It's like less than a quarter of queries. So there's a whole bunch of queries, pre-AI Overviews, that you don't make money on, because many of them are not of commercial need, right? And so, you asked a question about tankers earlier. Probably pre-AIO, that query wouldn't have shown ads anyway. It doesn't show ads now, by the way. Yeah, I checked this too; there's no ads on that. It didn't show ads before. There are no ads on that query, right? Nobody's trying to advertise something on that, okay? So for those queries, it doesn't disrupt, right? Then there's a bunch of classes of queries where, like, you're shopping. The presence of an AI Overview or Gemini answer doesn't preclude the need to still buy the item. So there's still this huge opportunity with ads, because there's all of this choice that's going on. I think you also see that there's an expansion of queries, right, to this point. So you get more queries. And some of those queries are more commercial. Some of them are not, but some of them are more commercial. And so those become new opportunities for ads.
And there can also be things like, when the query is underspecified, or it's a single query, you actually don't know as much, so you can't target the ads as well. If people start expressing more of their need, if it's more of a conversation, they're going more down funnel, you can actually create better ads, right? And so you can think about new opportunities for ad formats, right? Some number of years ago, people would have said, how can you make money from a feed? Okay, well, Instagram ads are very popular. Yeah. Right. And so there's new ad formats as you recognize new technology and new opportunities. But the commercial needs are still often there, and the desire for user choice is still often there. And so there's still a lot of possibility going forward. And so it's worked out very well right now with us in the balance. So I realized that I am a user of both the Gemini app, or Gemini.google.com, and just Google.com. And I have a sort of real intuition of which one I go to for which purpose. So if I want to look up the capital of Moldova, I'll just search capital of Moldova, which I just did. Do you know what it is, Tracy? Sorry, I'm not trying to stump you. No, I feel like you're going to say it. No, it's Chișinău. I'd never heard of it. Oh, no, I didn't know. Yeah, I didn't know that either. But anyway, if I want to understand what are some academic papers that have been written about why it is that high-frequency trading firms tend to not have outside capital, et cetera, and what was the theory for this, I think at this point I would use Gemini for that. I'd use Gemini or some other ones, but within the context of this conversation, that's more of a Gemini query for me than a Google one at this point. Will there always be two boxes, or do you foresee eventually there is just one box and it will just know? Like, why do we need two boxes? I don't know what life will be like in five years.
I think it's very... sometimes people want a certain experience; although the information need seems similar, they actually want different experiences. And so if you take a pre-LLM example, people use YouTube for search some. In the U.S. they use it some. In India they use it a huge amount. They bring a bunch of queries that you would bring to Google search in the U.S., right? You could say, okay, well, why haven't we collapsed the YouTube search box and the Search search box into one search box? And it hasn't necessarily been the case. We have the Google app and we have Chrome. They both allow you to search and they both allow you to browse the web. You have a set of people that love the Google app, and you have a set of people that love Chrome, and you have a set of people that use both on a phone, but you can't necessarily convince either population that they want to stop using one app and just switch to the other app. The space is so huge, and it's changing so quickly right now, that to be able to know for sure whether or not you can create one sufficiently dynamic, personalized experience, that one app, one entry point, can truly do it all? I don't think we know yet on that. People come for restaurant searches. They come to Maps and they come to Google Search. We have not collapsed the Maps app and the Search app. At some point it becomes big, you know; you're putting all this directions code into the Google search app. Is that actually useful, even if you had a full maps view? So I think we're just going to have to learn over time about what's good. But the space is really giant, and they do have different emphases right now on what they try to excel at. And you want to make sure that, in the attempt to bring things together, you don't become only okay at everything. And you want to make sure that you can shine at all the use cases people need. And that may mean two products, or that may not.
Or it may be a third product, right? Like, I don't know, in five years there may be a third product that replaces all the products. You have your personal agent. You don't talk to any products. I don't know. So, I realized we kind of promised to talk about AI slop a little bit in this conversation. So one of the things that's happening with AI is not just that I can ask a bunch of questions I might not otherwise have had time or the inclination to ask, but also AI is being used to generate vast amounts of content that are aimed at potentially answering any silly question I or anyone else on Earth might have. Creating spam, yeah. Yeah, just churning it out. And I'm very curious how search is weighing, I guess, the quality of its results in the new slop era of the internet. I think there's a tendency at times to think about AI slop as if it's new. Before AI slop, there was slop. Yes. Right? Human-generated slop. There was human-generated slop. Now there's AI-generated slop. So there has always been slop on the web. And so what really matters at some level is not how much slop is on the web, so much as: is there great content on the web, and can you surface it, right? And this is Google's bread and butter in ranking, and it has a long history of looking for spam and trying to drop it and make sure it doesn't show. And we crawl many, many more pages than we even put in our index. There's pages we put in the index that we never surface, right? So that we can keep that rate of spam and slop very low. And it is a constant effort, right? It's not a problem you solve, because for some of the people generating the spam, there's a lot of financial incentives associated with it. But that is what people have come to trust Google for: that it will show great information. And it's a thing that we will continue to put a huge amount of effort into. And so that's the way I would think about it.
It's not how much AI slop, or human-generated slop, or whatever automated slop there is; it's making sure that the information you do see is trusted. Liz Reid, thank you so much for coming on Odd Lots. That was a fascinating conversation. I have like a billion more questions. But we'll have you on in three months when the entire world has changed. And we'll get an update from you. Thank you very much. It was a pleasure to be on with you. I like the point about human-generated stuff. Do you remember? Yeah, but the difference is the volume. No, I know. I get it. But do you remember Jason Calacanis' startup Mahalo? No. So Jason Calacanis, who's been on the podcast before, he had this startup for a while. It was like the biggest piece of garbage in the world. No, for real. People need to go look at it. It was called Mahalo. And the idea was they were just going to hire a lot of people to write articles that were not very good, to appear in Google. Oh, I see. To swamp the SERPs. Yeah, there was a famous one that my old colleague at, I think, Business Insider, Nick Carlson, discovered. And it was like, if you search how to play the xylophone, there was a Mahalo article for that. And it was, I swear to God, okay: step one, decide if you want to play a xylophone. Well, that's an important step. Step two, get a xylophone. Step three, learn to read sheet music. Step four, practice reading sheet music and play. It was actually like this. I'm just remembering, people have been trying to stuff complete garbage into the search results for a very long time. And I always get a chuckle thinking about that example. And you should go look for it. I'm looking for it now. I'm very distracted. It was so bad. Wait, we're just going to laugh about this article for a while. Look at this. Just search it. Mahalo, how to play the xylophone. Yeah. I see something from Mahalo.com on YouTube. That can't be it. Yeah, yeah.
Just search. So yeah, Business Insider, February 21: Hilariously Useless Mahalo Guide to Playing the Xylophone. Did you write that? No, Nick Carlson wrote it. Oh, I see it. He wrote it up. Yeah, yeah. Unfortunately, now it's behind a paywall and is itself covered in sloppy ads. So I guess. But anyway. Sorry. Decide whether you want to buy a used or new xylophone. Metal xylophones are less expensive than wooden ones. Please. That's useful. That's a useful tip, I'll tell you. I know, but when you see that, it's like, please, AI, save us from this human-generated garbage that was clogging search results before. All right. On a serious note. Yeah. I did think the point about not customer retention, but expanding the volume of user queries on the platform, made a lot of sense. Totally. Which I hadn't really considered that much before. So even if you do get a bunch of no-click users, they are more inclined to come back to the platform in the future. And maybe some of that eventually lands in clicks. The other thing I hadn't thought of, and I thought it was a great point, which is that Google currently runs multiple search boxes. Are you looking at the Mahalo article and cracking up right now? I'm sorry, I am. Step four is experiment with different mallets. It's so good. I just really want to play the xylophone for you. It's so good. It's so good. Step five is practice regularly. It's crazy. I think AI is much better. It's so good, isn't it? It's like the biggest steaming pile of garbage I've ever seen on the internet, 15 years or 10 years before anyone was talking about AI slop. Can we have Calacanis back on the podcast just to talk about this? We should have Jason back on the podcast just to grill him about what exactly he was thinking, and his sins against the internet for having put this on there. Well, he was an early adopter of non-AI slop. But anyway, that point about there being multiple search boxes already, right?
There's the YouTube search box. You're still laughing. I'm sorry. I was trying to make eye contact with you and not look at my computer. I'm just thinking about the xylophones. Should we just leave it? Yeah. Okay. All right. Yes. Shall we leave it there? Let's leave it there. This has been another episode of the Odd Lots podcast. I'm Tracy Alloway. You can follow me at Tracy Alloway. And I'm Joe Weisenthal. You can follow me at The Stalwart. Follow our producers, Carmen Rodriguez at Carmen Armand, Dash O'Bennett at Dashbot, and Kale Brooks at Kale Brooks. And for more Odd Lots content, go to Bloomberg.com slash Odd Lots. We have a daily newsletter and all of our episodes. And you can chat about all of these topics 24-7 in our Discord, discord.gg slash Odd Lots. And if you enjoy Odd Lots, if you like it when we talk about AI and human-generated slop, then please leave us a positive review on your favorite podcast platform. And remember, if you are a Bloomberg subscriber, you can listen to all of our episodes absolutely ad-free. All you need to do is find the Bloomberg channel on Apple Podcasts and follow the instructions there. Thanks for listening.