Terms of Service with Clare Duffy

The Best Way to Search for Info Online in the AI Era

26 min
Jan 6, 2026
Summary

This episode explores how Google Search is evolving in the AI era, with VP of Product Robbie Stein discussing new features like AI Overviews, AI Mode, and visual search capabilities. The conversation covers how Google decides when to deploy AI summaries, accuracy concerns, and how users should approach different search tools versus chatbots like Gemini.

Insights
  • Google is strategically deploying AI only where it adds value rather than everywhere possible, reflecting the view that 'AI for the sake of AI' is counterproductive
  • Visual search (Google Lens) is experiencing explosive growth at 70% year-over-year with 1 billion users, representing a major shift in how people search
  • User feedback mechanisms like thumbs down buttons are critical for improving AI accuracy at scale, making users active participants in model refinement
  • Conversational query length increased 2-3x in the AI Mode beta, and queries became far more specific, indicating users are asking fundamentally different questions than traditional keyword searches
  • Google views itself as an information company first, not a general-purpose AI company, positioning search as distinct from chatbot competitors like ChatGPT and Perplexity
Trends
  • Shift from keyword-based to conversational, specific queries in search behavior
  • Visual search becoming a primary discovery method for product and plant identification
  • AI-generated summaries with source attribution becoming a standard search result format
  • Personalization of search results based on individual user engagement patterns
  • Integration of shopping, comparison, and educational use cases directly into search results
  • Emergence of multi-modal search combining text, image, and conversational elements
  • User demand for transparency in sponsored content within AI-generated results
  • Competition from specialized AI chatbots and browsers (ChatGPT, Perplexity) expanding the search market
  • Increasing sophistication of reasoning models enabling verification and multi-step problem solving
  • Growing user expectation for AI to provide links and encourage deeper research rather than single-source answers
Companies
Google
Primary subject; discussed AI Overviews, AI Mode, visual search, Gemini, and competitive positioning in search market
OpenAI
Mentioned as competitor with ChatGPT, representing challenge to Google's search dominance
Perplexity
Identified as competitor introducing AI chatbot and browser to keep users within their ecosystem
People
Robbie Stein
Vice President of Product at Google Search; discussed AI features, accuracy, training, and competitive strategy
Clare Duffy
Host of Terms of Service podcast; conducted interview and provided user perspective on AI search accuracy
Quotes
"You can ask really hard questions in there now and type in a sentence like, how do I get a ketchup stain out of this carpet if it happened two days ago and I don't have vinegar in the house."
Robbie Stein, early in conversation
"The system will learn that if it tried to do an AI overview, no one really clicked on it or engaged with it or valued it. And we have lots of metrics we look at that, and then it won't show up."
Robbie Stein, mid-conversation
"We're not here to build another general purpose chatbot. And so we're really optimizing for those kinds of informational questions and building what we think will be hopefully the most knowledgeable AI set of experiences that connect you to information."
Robbie Stein, competition discussion
"I think what you're seeing is it's very quickly getting higher and higher quality very fast. And I think one thing we try to do is we've integrated all of the core search kind of signals into these AI experiences."
Robbie Stein, accuracy discussion
"I think people just limit themselves with the kinds of things they ask. So it's really that level of specificity that I think we'll get the most out of search now."
Robbie Stein, query recommendations
Full Transcript
When you type a question into Google, you can be sure you'll receive some kind of answer. But these days, there are many different forms it can take. Maybe the first thing you see is a link. Maybe it's an ad. And lately, it might be a paragraph of information written by artificial intelligence, known as an AI overview. And that's not to mention Google's AI chatbot Gemini, or all of the other various chatbots you can now separately turn to for information. It feels like a real moment of transition in terms of how we find information online. And with that in mind, I wanted to know, what is the best way right now to get answers from the internet? And can we be sure that the newest tools on the market are accurate? To walk me through how Google approaches these questions, I have Robbie Stein with me today in the studio. He is the vice president of product at Google Search, where he's responsible for the core search experience as well as new AI offerings. My conversation with Robbie after this short break. Robbie, thank you for being here. Thanks for having me. So I know obviously you are a longtime Terms of Service listener, but if you had not been and you wanted to find out more information about the show, how would you have gone about searching for that information today? Yeah, I'd probably just put it into Google. But unlike what I may have done many, many years ago where I just typed in a couple of keywords, I probably would have typed in more of a sentence. Okay. Tell me more about the show and maybe the last few guests that have been on it. Yeah. So you sort of touched on this there, but the Google search bar, the sort of traditional thing that people think of is no longer the only Google platform where people can go to find information. Give us an overview of the sort of world of ways that people can now ask Google for information. Yeah. I mean, there's a few that we see that people are using the most. The core search experience is obviously the largest. 
So there's a text box, google.com, you put information in it. But I think what's neat is you can ask really hard questions in there now and type in a sentence like, how do I get a ketchup stain out of this carpet if it happened two days ago and I don't have vinegar in the house? You could try that. And you probably get an AI response. And if you clicked on it and learned more and then wanted to go even deeper in AI mode, you can actually have a conversation about your issue. And so that's one kind of key way people get information. The other which we're seeing is immensely popular is through the camera. And so there's this really unique moment where you can ask about the things you're seeing. So maybe you see, like I had an experience where I saw some weird stuff in our garden, like little white stuff that seemed to be hurting our plants. I took a picture. I was like, what's going on with this? And it's like, oh, it told me what the issue was, recommended that I get ladybugs. Oh. And I actually went to the local store and got a little ladybug kit, and that took care of it. I love it. So, you know, it's pretty incredible. Like, those are the kinds of questions that would be pretty hard to ask about maybe five, ten years ago. And now we're seeing that happen every day. Yeah, the photo search thing I use all the time, also often for plants. I'm like, what's going on here? Or what is this plant that I don't know? So let's talk a little bit about the AI summaries that are now showing up at the top of some Google search results pages. How do those work? How is Google AI coming up with what is going to be in that summary? Yeah, well, right now there's a model that's designed to help find information. And then it's able to produce this really helpful overview with links to dive in and click and learn more yourself. But it gives you this really helpful context. That's called AI overviews. And if you ask a really specific question, you're pretty likely to get one of those. 
You don't see them everywhere because the system actually learns where they're helpful and will only show them if users have engaged with that and find them useful. Oh, interesting. So for many questions, people just ask a short question or they're looking for a very specific website. They won't show up because they're not actually helpful in many, many cases. But where you ask the kind of question I asked about ketchup, it's obviously a pretty complicated question. You may have follow-up questions. AI might be uniquely useful there. Google learns that, finds information, and in many cases actually issues additional Google queries under the hood to expand your search and then brings you the most relevant information for a given question. And so if you ask a question, for instance, about something inspirational, which is something that we see all the time with image search, it'll return a gallery of images. Or if you ask about shopping, it's integrated into the shopping products on Google search. And so it's really designed for search users and search needs and optimized for those questions that people most commonly ask. Go a little bit deeper, if you will, about when you decide to put up an AI overview versus not. I think a lot of people will have noticed this. Like sometimes you do, sometimes you don't. How does Google decide when to add that overview? Yeah, well, what happens is the system will learn, so it'll try it, and then see if people engage with it for certain kinds of questions. So maybe you just search for an athlete that, you know, starts as like a new quarterback, like who is this person? And you Google the person. And it turns out that an AI overview is not particularly helpful. It's much more interesting to maybe see photos of the person, a quick description, like links to their bios or their social media pages, articles about them that have been written, profiles. That's more likely going to show up. 
And what happens is the system will learn that if it tried to do an AI overview, no one really clicked on it or engaged with it or valued it. And we have lots of metrics we look at that, and then it won't show up. And then the system kind of generalizes that over time. And what you see at Google is a reflection of our best understanding of what's most helpful for a user for a given question. And does that apply on like an individual basis? Like if there's somebody who just regularly doesn't want to click and look at the Google summaries or the AI overviews, will they see fewer of those? Yeah, we are personalizing some of these experiences. So if you're the kind of person that would always click a video, you might see video results higher. But right now, that's probably a smaller adjustment to the experience because we want to keep it as consistent as possible overall. But I think over time, our goal is to create something that's great for you. Google users may have also noticed something called AI mode. It sits at the top of the page with the other Google search tabs like news, videos, and images. Click on AI mode and you'll get a longer written answer to your question, more like a response you'd get from a chatbot. And you can ask follow-up questions about it in a chat format. I wanted to get Robbie's take on when someone should use AI mode versus regular Google search. Yeah, we really designed AI mode to really help you go deeper and deal with a pretty complicated question. You're trying to, you know, figure out what kind of car you might want to buy or like backup power ideas for your apartment or your house. And there's like little power stations you can buy. These are kind of complicated things, and you want to maybe compare them. You want to ask follow-up questions. And so AI mode is really awesome for those kinds of informational tasks. But we don't really think that you should have to know where to ask a question. 
So the way we've thought about it is you just put your question into Google, and you'll get an AI overview if it's useful. And if you click on going deeper, you're in a more back and forth AI mode experience that's conversational. And you can take that next step. So you don't really need to think about where you put the question. But I think for people that are maybe more well-versed in how these different AI tools and chatbots work, we are seeing people just go sometimes directly to the AI mode and just ask these really hard questions because they kind of know they want AI for a given question. And that actually was one of the reasons we built it, because we actually saw in the search bar people were typing a question and then actually adding AI at the end. Oh, interesting. And so they were trying to get the AI to show up because they felt like this was such a complicated question. It's unlikely there's any single place that has information about this. And so now there's a place you can go directly to ask. And AI mode was in beta test mode for a long time, right? Like what did you learn from the test and how people were using that differently from traditional Google search? I mean, I think the two biggest things is people were asking longer and harder questions. And so we saw like a two to three fold increase in the query length. And they're asking very specific questions. They're adding more context to it. So they're adding the specificity. So instead of a keyword like, you know, things to do in Nashville, it's like restaurants to go to in Nashville if one friend has an allergy and we have dogs and we want to sit outside. Like that's the kind of question you might see, which was a very different kind of question for Google. And the other thing is people were following up. And we noticed that people were asking these follow-up questions because when you saw part of the response, you wanted to learn even more. 
And so it was a conversational form of search, and that was new as well. Let's talk a little bit about visual search. Walk people through how that works. Like if I've never used visual search, just explain how the process works. Well, the easiest thing to do is to get the Google app and open it and then tap on the camera button. And what will happen is it will open a camera. It's called Google Lens. And you just snap a picture. And what you'll probably see is an immediate result that starts to explain with AI what you're seeing. And then you can add questions to it. So you could say, like, tell me about where to buy this. You could say, what's going on with this plant? How do I treat it? And you can have a back and forth conversation that way. This is actually something that's one of the fastest growing ways people are searching. It's up 70% year over year, which is really remarkable. And this is like a billion users that are using this kind of a way of searching now. And if you have an Android phone, you can do something called circle to search, where you can kind of circle anything you're looking at. And people really love that as well because the visual search for media allows you to, say, find an outfit someone's wearing and say, where could I buy this? Or what product is this and how can I get it? And that's also been really popular. I feel like I hear frustration from users about ads. Like when they go to the traditional Google search page, sometimes you have to scroll through three or four links at the top that are ads. Will people see advertising content in AI summaries as well as the traditional links? Yeah. So, I mean, first of all, our approach to ads is that it's shown when helpful. It's just like what I mentioned with the AI overviews. And actually, the vast majority of Google searches do not have ads. And so if you're trying to do something where an ad is really useful in that context, you'll see ads. 
And I think we see something similar that is possible in the AI set of products that we're building as well. And if there's an opportunity, like I was mentioning buying, like maybe one issue was having some backup power. If your house or your apartment loses power and maybe you want a power station that can charge your two computers and a couple phones for your family. Like there might be a great offer, a great deal at that time that you might want to learn about. So I think there's lots of ways that can be done where this is really helpful for people, and that's really our focus. But it's an experiment on how we can have that same set of principles for the AI world. And will people know, like, if they're seeing an ad in overviews or in AI mode, will it be clear that that business has paid to have that information there? Absolutely, yeah. Transparency and clarity that this is a sponsored item is really an important principle. Do you worry that if people just sort of have AI give them an answer, that they'll do less of the, let me go to multiple different sources and read and think critically about what I'm seeing, because I'm just having this very convenient answer given to me up front? Well, one thing is, first of all, a very critical and key design principle for all of our AI experiences is to connect you to the web. That's the key part of Google, what people come to Google for. And what we hear from our users is that they don't want to take any single source's word for things. And so one thing we've done in our models pretty uniquely is we've actually designed them to recommend further reading and opportunities to go deeper. So you'll see a content tray that will show up with links to dive in, like read more articles or read the review from an expert about a mattress you're about to buy, for instance. And that's because people want that. They're asking to look up reviews and see what people are saying about a restaurant or a product. 
It's a core need for users and a core design principle. When we come back from the break, Robbie and I talk about whether we can always trust the accuracy of these AI answers and about the role that users play in Google's training process. I also ask him how Google is trying to stay ahead of competition from other AI chatbots. We'll be right back. I want to ask you about accuracy in the AI summaries and the AI mode answers. Obviously, I think Google has made a lot of progress on this since the days we saw recommendations to put glue on pizza. But I had this experience just recently, like three weeks ago. I was at the dentist, and I was asking her about these two different types of toothpaste. And she's like, well, let's look. So she Googles it, compare this brand to this brand, and she opens the AI summary, and she's looking at it, and I'm kind of looking over her shoulder. And what had happened was that the summary was talking about one brand and linking to the other brand. Like, it had mixed them up. And I was like, oh, I think that's what happened here. Like, it's flip-flopped the information. But I just wonder, like, how do you make sure, if that is the place? Obviously, hopefully people click the links and go deeper. But as we're seeing people rely more on the information that AI is providing, how do you make sure that that is as accurate as possible? 
Yeah, I mean, accuracy and creating high-quality information for people is absolutely the most important thing we do. And so one thing is AI is a new technology, and AI can make mistakes. And I think everyone's learning that across every way that AI works. But I think what you're seeing is it's very quickly getting higher and higher quality very fast. And I think one thing we try to do is we've integrated all of the core search kind of signals into these AI experiences. And so when it's kind of recommending or giving you overviews, it's using information and linking to information that others have found very helpful for that question in the past. And it's possible that there's all kinds of reasons why it can make small mistakes. That's one of the reasons why also I think we want to be making sure people can go and click in and see information themselves as well. Yeah, talk a little bit more about what that training looks like on the back end. Is it just giving it more data to work with? Is it the sort of like reinforcement training where you're saying, no, this was a bad answer, you should have done this instead? Like how does that work? It's all of those things. I think we would consider what you're saying a loss. And so we would say like there's a loss pattern. And so we take a look at the link. We'd say, why was it linking to that? And then we would understand. And usually what happens is there's a few reasons why simple things like that could happen. One could be maybe there was a link that referenced the other brand on the page, right? And so maybe that's one thing. You know, there could be an instance where there's a review that someone mentions that's highly used, and then the actual site itself has conflicting information. Like a user who uses the product says one thing, and maybe the product itself, like the company that makes the product, says another thing. I see. So what do you do? 
Information is very complex, but Google's been working on this for 25 years. And so I think that we do some of the best work in this space and have studied what information is trustworthy, what's helpful, what links are useful for people, what's not, what's spam information and problematic versus not. And so I think in the large, vast, vast majority, people are getting exceptionally helpful and high-quality responses overall. But of course it can make mistakes. And how are you catching if and when those mistakes show up? How does Google know? How do you work on fixing that? Yeah. I mean we have deep evaluation metrics where we will run numerous types of questions through the system and check them all. It's this relentless focus on improving them and fixing them and making it better every single day. And then when you fix one of those things, you fix a whole part of the system. And then what happens is a month later, a few months later, it just gets better and better and better. And obviously, the models are also getting more sophisticated. And now very advanced reasoning models are introduced. So this is a very different generation of model than when we first launched the very first AI overview. And this is a model that's increasingly capable of reasoning, thinking through things, verifying its work, checking. And you're seeing much more of that in our AI systems now. And are you looking for user feedback too? Like if I ask a question and something's wrong, like is there a button? Yes, there's a thumbs up and thumbs down. You can report issues. We look at them religiously. Got it. So do a thumbs down. If you see an issue, report it. My team will look at it. We will look at that. Every single one of those is taken seriously. And we have systems that look at user feedback at scale. So Google has become synonymous with searching for information online. 
And the company is now facing real challenges from rivals like ChatGPT and Perplexity, who not only have their own chatbots, but now are introducing their own browsers, like trying to keep people within their ecosystem. What is Google doing to stay ahead of the competition, and how much does that worry you? I mean, look, the pie is expanding enormously. I mean, people can ask questions like they could never ask before. You can take a picture of your homework and ask, I'm really confused by the third question in here and how step four works. Please explain it. I mean, it's just, this is an enormously expansionary moment. And so that's exciting. The way I think about it is billions of people are going to Google all the time to ask these questions. And we see people trying to get more from Google search. And so for us, we're focusing on information only. That's what we're here to do. We're not here to build another general purpose chatbot. And so we're really optimizing for those kinds of informational questions and building what we think will be hopefully the most knowledgeable AI set of experiences that connect you to information. Does Google still see itself as a search company or an AI company? Or like how do you think about – is that the same thing? I don't know. I mean, I think from a Google search perspective, it's an informational product. And I think that traverses all technology. It's like, what's the best technology that helps people to get information that's trusted, that's helpful, that helps them in their life, helps them make key decisions? And it turns out that right now, AI is one of the ways that you can really unlock that. Because like this homework example I keep coming back to, it's very unlikely anyone's ever written anything about that homework question. But you can get help on Google now with that question. But at the end of the day, AI is not helpful for a lot of things. And it's actually slower, potentially. 
It's not going to give you that rich and full-page experience people expect to browse the web, necessarily, so like the core search experience. But I think we kind of view it much more broadly. It's like, where could AI be uniquely helpful? We're going to help you with that, and we're going to bring AI to you. But if AI is not going to be helpful, we shouldn't force that. Yeah, that makes a lot of sense. I feel like a lot of what we see just sort of broadly in the space with AI right now is sort of like AI for the sake of AI, because now we have AI and look at these things it can do. So I appreciate what you're saying about, no, we want to integrate it when it makes sense, when it's actually helpful, and not necessarily really just because we can put it everywhere. How should people think about when to use Google Search versus AI mode versus asking Gemini a question? What are those things best for? AI mode is really integrating into core Google experiences. And in fact, we're testing now ways that they're really seamless to be able to move from AI overviews right into AI mode to have further follow-up questions. So I just think about it as just asking Google whatever's on your mind. Okay. And I think what we're seeing is people who come to Google, they use Google all the time. They have an informational question. And these are things that people come to Google for today. It's about shopping and sports. And you have questions about a stock or whatever. Those are things that are awesome in Google search. And I think people can now ask their really specific and hard questions right in Google. And that would be my recommendation. And then if people are using chatbots and they're using Gemini, that's great too. And I think those particularly have been incredible for these really kind of productive use cases where you kind of are building something together, you know, with Gemini, and it's helping create for you. And there's all these really incredible ways you can generate imagery. 
And so I think everyone's kind of learning. It's early about, like, what products are going to most help for their needs. And I think that we just view AI as a way to make it easier to get the things you're trying to get out of Google. So Google as the place where you go if you're sort of, like, more straightforwardly looking for information, Gemini is your, like, thought partner or, like, a collaborator. That also is going to give you information, but that's maybe the distinction? Yeah, I think that's right. I think it's more of this assistant that can help you with all these things you're trying to do in your life. You can upload spreadsheets with it. You can generate images for your homework or for your class. Whereas really, search has always been about information predominantly. But we want to innovate and meet people where they are ultimately. How should people be formatting what they ask to Google search to get the best information? Yeah, I mean, I think the main thing that is interesting is people just limit themselves with the kinds of things they ask. So it's really that level of specificity that I think we'll get the most out of search now. Whereas before, I feel like you almost wanted the opposite. Like if you asked a really long question in Google, you would potentially see no results found, right? And so my recommendation is just to ask. Like if you're trying to understand a couple products, put in the serial numbers or like the product names. You can ask it and instruct it to do things. You can say, can you make me a table to do it? You can say with Gemini 3 now for subscribers, you can actually say create a visualization that helps me understand something and it will code up a whole little simulation for you. Like put that all into Google, into AI mode and see what happens. Do you have examples like are there really fun or creative ways that you're seeing people use Google search now that have surprised you? I mean all the time. 
For Gemini 3, it's been really incredible, Gemini 3 Pro launch. I mean people are using it to explain educational and scientific concepts in really exciting ways. Someone asked about how to understand the Doppler effect and it generates the simulation of how things move and how sound works. And so that was really surprising and pretty awesome. I think people can now visualize and graph information. So if you ask AI mode to compare stocks, for instance, over time, it can graph arbitrary information and create an interactive module where you can hover over it and see its differences over time. And so I think it's always exciting to see people discover those kind of new things. And those have been really fun. Awesome. Well, Robbie, thank you so much. This was really interesting. Thank you. Likewise. Appreciate you having me. Going into this conversation with Robbie, I was feeling a little overwhelmed by all of the available search options. Is it best to ask a question to Google Search or to Gemini or to AI mode? It's helpful to know that the regular old search bar is still a solid starting point. It was also nice to hear that Google is thinking more about personalizing what shows up at the top of our search results pages, because I, for one, would still almost always rather click through links and make up my own mind about the answer to a question than read through an AI-generated summary. Maybe that's just me. Here are some other takeaways from my conversation with Robbie. First, don't be afraid to be detailed and specific in your questions to Google. The technology can now handle more complex queries instead of just searching for keywords. If you get an AI overview answer and have follow-up questions, you can hit the Go Deeper button to start a chatbot conversation in Google's AI mode. But remember to fact-check the information you're getting from the AI. As we've talked about a lot on this show, these new technologies can still make mistakes. 
If you get an AI overview answer that's inaccurate or unhelpful, hit the thumbs down button to report it to Robbie's team. Even in a small way, we can all contribute to the tools we use. That's it for this week's episode of Terms of Service. I'm Clare Duffy. Talk to you next week.