Big Technology Podcast

How Google DeepMind Operates & Experiments — With Lila Ibrahim and James Manyika

50 min
Feb 18, 2026
Summary

Google DeepMind COO Lila Ibrahim and Google's James Manyika discuss how Google DeepMind operates as a modern Bell Labs, focusing on ambitious research agendas while building interdisciplinary teams. They explore Google's experimental culture through Labs projects like NotebookLM and Flow, the company's shift toward more aggressive shipping of AI products, and breakthrough applications in education, quantum computing, materials science, and weather prediction.

Insights
  • Google DeepMind combines top-down mission-driven research with bottom-up exploration, allowing researchers freedom within broad ambitious goals
  • Google has shifted from being cautious about shipping to 'relentless shipping' with 6-month Gemini iteration cycles to stay competitive
  • AI in education shows promise for personalized learning and teacher productivity, but requires reimagining workflows rather than just adding technology to existing processes
  • The company maintains experimental culture through Labs projects and encourages 20% time contributions from across the organization
  • Breakthrough AI research is rapidly translating to real-world impact, from flood prediction covering 2 billion people to materials discovery expanding known stable crystals 10x
Trends
  • Centralized AI development with distributed product integration across tech companies
  • Rapid iteration cycles becoming standard for AI model development and deployment
  • AI-first experimental products emerging from research labs to test new paradigms
  • Educational institutions grappling with AI integration and policy frameworks
  • Quantum computing progress accelerating toward practical applications within 5 years
  • AI-driven scientific discovery expanding from proteins to materials and weather systems
  • Space-based computing infrastructure being explored for future AI training needs
Companies
Google
Primary focus as parent company of DeepMind, discussing AI strategy and organizational changes
Google DeepMind
Main subject discussing operations, research approach, and breakthrough AI applications
OpenAI
Competitor mentioned regarding ChatGPT's success with transformer technology and competitive dynamics
Bell Labs
Historical comparison for DeepMind's research approach and organizational inspiration
Pixar
Referenced as inspiration for creating collaborative creative environments at DeepMind
McKinsey
Mentioned regarding CEO Sundar Pichai's background and potential influence on organizational structure
Waymo
Example of Google division receiving specialized AI models from DeepMind for autonomous driving
Microsoft
Referenced in sponsor content for Microsoft 365 Copilot AI assistant features
People
Lila Ibrahim
Chief Operating Officer of Google DeepMind, main guest discussing operations and AI applications
James Manyika
SVP of Research Labs, Technology and Society at Google, discussing Labs projects and research
Demis Hassabis
CEO of Google DeepMind, referenced for leadership approach and timing decisions
Sundar Pichai
Google CEO mentioned regarding McKinsey background and organizational restructuring approach
Sam Altman
OpenAI CEO quoted about Google's competitive response and market positioning
Steven Johnson
Writer who influenced NotebookLM development with his research tool needs and feedback
Jeff Dean
Google executive mentioned as originator of AI audio overview idea for research papers
Quotes
"Our mission is to build AI responsibly to benefit humanity. The first thing we do is take really ambitious research agendas. We structure it in a way where we're looking at what are the big problems, but not telling people how to do it."
Lila Ibrahim
"If Google took us seriously early on they would have smashed us and now they're a formidable competitor."
Sam Altman
"We're now kind of on the cycle with the Gemini models where every five, six months there's the latest generation. I think that's part of what you're seeing going on."
James Manyika
"Imagine that every student could have a personalized tutor and if every teacher could have a teaching assistant, where AI is a productivity tool, that really could change the dynamic of how teachers and students interact."
Lila Ibrahim
"100 years from now. Of course we're doing it in space because the sun has 100 trillion times more energy. It's available 24/7. Imagine if that's probably how we're going to be doing it in the future."
James Manyika
Full Transcript
3 Speakers
Speaker A

How does Google DeepMind operate and make bets? And what's making Google more experimental? Let's talk about it with two Google leaders right after this. Did you know your credit card points and miles can lose value to inflation? Credit card companies often reduce the redemption value of your points and miles. Now imagine a credit card with rewards that can grow in value. With the Gemini credit card, you can earn Bitcoin or one of over 50 other cryptos instantly with no annual fee. Every swipe at the store or gas pump earns you instant rewards deposited straight to your account. Visit gemini.com/card today. Check out the link in the description for more information on rates and fees. Again, if you're looking to invest in Bitcoin but don't know where to start, the Gemini credit card makes it easy. Issued by WebBank. This is not investment advice, and trading crypto involves risk. Check Gemini's website for more details on rates and fees.

0:00

Speaker B

The world moves fast.

0:57

Speaker C

Your workday even faster.

0:59

Speaker A

Pitching products, drafting reports, analyzing data. Microsoft 365 Copilot is your AI assistant for

1:00

Speaker C

work built into Word, Excel, PowerPoint and

1:08

Speaker A

other Microsoft 365 apps you use, helping

1:11

Speaker C

you quickly write, analyze, create and summarize so you can cut through clutter and clear a path to your best work. Learn more at Microsoft.com/M365Copilot. Welcome to Big Technology

1:14

Speaker A

Podcast, a show for cool-headed and nuanced conversation about the tech world and beyond. We have a great show for you today, because we're going to go deep inside the way Google's AI and technology research operations work. We have two great guests with us today. Lila Ibrahim is here. She is the Chief Operating Officer of Google DeepMind. Lila, welcome.

1:26

Speaker B

Thank you.

1:47

Speaker A

And we're also joined by James Manyika. James is the SVP of Research, Labs, Technology and Society at Google. James, welcome.

1:47

Speaker C

Well, thanks for having me.

1:55

Speaker A

And of course this is our concluding conversation here in our series at Davos, and we do have a live audience. Live audience, make some noise. Let them know you're here. All right, so much to get to, not a lot of time. Let's just start with the way that Google DeepMind operates. Demis Hassabis, the CEO of Google DeepMind, who was recently on the show, has described DeepMind as sort of a modern day Bell Labs. But what does that mean? Lila, can you tell us a little bit about how the research works. Is it a lab, an operation, a company? How does it operate?

1:56

Speaker B

Maybe I should start with our mission, because I think everything is based off of that, which is to build AI responsibly to benefit humanity. The first thing we do is take really ambitious research agendas. We structure it in a way where we're looking at what are the big problems, but not telling people how to do it. When you think about how we first approached that, it's really about taking inspiration from the golden era of Bell Labs, but also government programs like the Apollo program, and even more recently Pixar. So it's all focused around bringing in really great talent and creating an environment for them to succeed and to explore. So the first thing is that big research agenda, telling people kind of the area to focus on, but not how to do their job. The second thing is, because it's such a broad agenda, we want to build interdisciplinary teams. How do you create a culture where you can have a bioethicist next to a computer scientist and a neuroscientist? Because we think that's really where the magic happens and unlocks the work. And this type of approach has resulted in such extraordinary efforts. And we're also not afraid to explore and then say, when is it time? I think Demis has a remarkable way of measuring time, like time to explore. Are we setting the really ambitious goals? How are we making progress towards that? And also not being shy to say, okay, now's the time to take a step back and pause it, or double down. A great example of that is over the past few years we've been doing a lot of work around one science area, learning science. How do people learn, and can we improve it?

2:29

Speaker A

Right.

4:08

Speaker B

And then this year was really, Demis was like, okay, Gemini is good enough. It's time to infuse everything we've done with the industry around learning science into Gemini. And that was one of our focus areas to really advance how Gemini could be provided for learners. So there's something I think quite magical within Google DeepMind about timing.

4:08

Speaker A

Okay, GDM, I guess we're going to go with. Everybody in the tech industry uses acronyms.

4:29

Speaker B

I almost caught myself before saying it,

4:32

Speaker A

but let's talk about that. I just want to talk through process a little bit. The way that you just described it, Demis said that Gemini was ready for learning and then Google DeepMind started to pursue it. How much of what Google DeepMind works on is top down versus bottom up? One way that I've heard OpenAI describe the way that it works is like a bunch of different startups within a larger company. Is that a similar way that Google operates, or does it come more from the top?

4:35

Speaker B

Well, because our mission is so ambitious, we're really trying to understand what are the big challenges where AI can help us unlock our understanding of the universe around us and solve some of humanity's biggest challenges. It's broad enough that we can do things like: how do we do weather exploration and try to predict weather forecasts? How do we do AlphaFold and protein structure prediction to help us better understand diseases so we can come up with better therapeutics? Generative AI, how can we continue to improve that to make people's lives better? Again, we take a very broad portfolio perspective, but we allow the space for researchers to explore. And that's really what I meant in the beginning of, like, we've got to find the right talent. So mission driven culture, and values aligned people who want to have this type of exploration and the big impact and scale that we can have being part of Google. So I would say some of this is Demis, who is quite remarkable in terms of his thinking in this space, because he's been doing it for so long.

5:05

Speaker C

Right.

6:14

Speaker B

DeepMind was founded 16 years ago. It's been kind of a lifelong mission of his. And yet we have an organization full of people who are creative, who like to work in an interdisciplinary environment, who want to have impact in this world. So they also come up with their own approach to things, setting goals.

6:14

Speaker A

It's a little bit of both.

6:33

Speaker B

Pardon me? Yeah, a little bit of both top

6:34

Speaker A

down from Demis and then some bottom up. Okay.

6:35

Speaker B

And which makes managing part of that organization structure.

6:38

Speaker A

I'm definitely going to talk to you about talent. I will talk with you about talent for sure. And you know, on that note, how have things changed? Because I'm just going to talk about the tech industry more broadly. It seems like there used to be a moment where a lot of tech companies gave these talented people broad leeway to explore things that might not have immediate results. Then all of a sudden we got into this AI race, and many companies brought their researchers who were working on these long term projects much closer to the product. And all of a sudden there was almost an imperative for long term research to make immediate product impact. So has that changed as well over time? Is that something that's going on within DeepMind as well?

6:41

Speaker B

Yeah, I joined about eight years ago and we've definitely been on a journey. But what I think is so exciting about Google DeepMind and I think why so many of our employees stay so long is because we have that breadth of portfolio. So there are some people that want to continue the deep research frontier AI research that they do, or a scientific more focused on the science. And we have the space to do that exploration while also delivering on the advancements around generative AI, such as all the progress we made last year with Gemini.

7:32

Speaker A

Okay, let me take that a step further. The way that the transformation within Google has been described is that instead of having every product area or product group chart its own direction on AI, there's now this central engine room within the company, which is, I think, the AI division that creates the AI and then farms it out to these product areas. So can you talk a little bit about that process and how that works?

8:04

Speaker B

Yeah, and actually I think that's been one of the exciting things over the past few years with a combination of Google Brain and DeepMind of bringing the best of Google's AI teams and research together under one roof where we could have, we could explore such a broad portfolio. And so we've really been focused on, as you mentioned, becoming the AI innovation engine. And then I wouldn't say we farm things out to other Google teams. We collaborate very closely with the product areas and their customers to understand what the needs are so that we can build the models better from the start and do so in a very collaborative and responsible way such that by the time it goes to different Google products, it's already been through a lot of that testing and can be refined through for that specific use case.

8:35

Speaker A

Okay, one last question.

9:21

Speaker B

And that's actually helped us. I think what that's resulted in, for example, is Gemini 3: we launched it, and then it was available to a broad group of developers and users right away.

9:22

Speaker A

All right, one last question on this and then we're going to go to James. And James, thanks again for being here. So let me just ask you this. On our show we have this hypothesis that Sundar spent time at McKinsey and this is sort of like a McKinsey style approach to like really reorg, centralize and then work with the groups. Is there a truth to that?

9:33

Speaker B

Well, you have a former McKinsey person here who might be able to address the structure.

9:51

Speaker A

James.

9:57

Speaker C

No, I think what you've got going on is an extraordinary thing, because on the one hand you've got the Gemini program, which underlies all of this, building the kind of large scale models: Gemini itself, Gemini 2.5, 3, and all of that. And this kind of came about three years ago when we put together the Google Brain team and the DeepMind team to create the Gemini program. So that program now underlies all the things across the company. So you see Gemini show up in Search, in Google Workspace, it shows up in all our products, in NotebookLM, all of these things. So it's kind of the foundation. And that's why, as Lila said, Google DeepMind and the Gemini program has become the engine room. But in addition to that, you've got all these other things going on. There's deep science going on in the company. I mean, this idea of foundational work, tackle the biggest root node problems that open up research and innovation in so many areas. So you've got all of that going on too. And then you've got all these other special kind of ambitious projects working on things like Genie, to build world models. You've got work going on to build special things for Waymo, to enhance the models that lead to the Waymo Driver. So you've got a lot of these things going on. So I don't think there's a top down as much as: let's take advantage of the foundation called the Gemini program. Make sure that every time we do these rapid iterations, and you've seen we're now in a cycle where every six-ish months there's a new generation of Gemini, make sure it shows up immediately. As Lila described, there is no, you know, shipping delay. So the minute the latest version of Gemini comes out, you're going to see it in Search, you're going to see it in the Gemini app itself. You're going to see it everywhere. So that's kind of the incredible thing that's happened over the last three years.

9:59

Speaker A

All right, I want to talk about Labs. So Google Labs. A lot of us who used Google products in the early days, you know, we saw this era of experimentation within Google and then Labs went away for a bit. Not that Labs was the only bit of experimentation within the company, but then Labs was revived and it seems like we're starting to see many more experimental projects come out of Google proper in a way that we hadn't seen in a long time. So how responsible is Labs for that? And why is Labs back?

11:55

Speaker C

Oh, Labs is so much fun. So what actually happened was, three years ago, in a kind of inspired Sundar moment, he said, let's reboot Labs. We're in this AI moment; how do we explore and experiment and build these products that are totally AI first? So the idea with Labs is: let's take the most amazing research coming out of Google DeepMind and Google Research, and any other place, quite frankly, in the company where there's incredible research, and focus primarily on how we build experimental AI first products. I think what most people probably know the most is what's now NotebookLM. The way that started, by the way, is incredible, because I remember when I first encountered it.

12:25

Speaker A

So what is NotebookLM? Tell the story.

13:11

Speaker C

So Notebook is fun. It started as a product called Tailwind. There were four, five people working on it. And the idea was, can we build a very AI native research tool that is grounded on what you put into it? In other words, your sources: you might have books, you might have papers, you might have drafts, whatever content you want to ground it on. Put it in Notebook and be able to engage with it. So that was the conception of the idea. And in fact, in some ways, it got additional impetus from Steven Johnson, who's a writer. And Steven Johnson is one of these people who kind of keeps everything. So he has notes from the 90s and drafts of books and all kinds of things. He said, I'd love a product where I can dump all that stuff in and engage with it. What was I thinking in 1997? What was that draft I did? So what NotebookLM has become is this incredible research tool grounded on what you've put in. And when you engage with it and it summarizes or drafts something, it gives you these citations. And that, in some ways, is a key feature of it. So if it says, Alex, you said this, or your source says this, and summarizes it in some way, it'll give you citations. If you want, you can click on the citations, and they take you all the way back to the original content, right? Which is incredibly useful. Then a fun thing happened. It was already a very useful tool, but we said, well, actually, you know what? Sometimes I want to hear my sources as opposed to just engage with them. And the technology was ready enough that we could actually add AI audio overviews,

13:12

Speaker A

which is like, effectively a podcast. You can have it with, like two hosts.

14:53

Speaker C

You can have it, yes. Actually, the origin of the idea wasn't even to do that. So initially the idea was, a few of us, you know, Jeff Dean, the legendary Jeff Dean, said, well, actually, we're reading all these papers that are coming out at this incredible pace in the computer science field. It'd be nice to be able to hear a summary of them verbally while I'm driving into work or something, so then I can figure out which papers I'm going to read. So that was the original idea. Then we said, actually, you know what, it's easier to learn stuff when you hear people talking about it, engaging. That's why seminars are interesting, right? As a learning mechanism. So that's where the idea came from. So we did these audio overviews, in the form of a podcast or a discussion with two hosts discussing it. And now it's evolving. That's when the product just kind of took off.

14:57

Speaker A

Yeah, whenever I give a presentation about AI, that's the party trick, where I build one of these notebooks in front of the audience and then I hit play on the podcast. And for people who haven't seen it before, it's like a jaw drop moment. In fact, we've had multiple people on our YouTube feed and coming from the podcast ask, Alex, did they train on your voice? Because it sounds a lot like me. And I say no. Listen, they always say 'let's unpack this' at the beginning, and you have to understand, that's every podcast. So it's not me.

15:42

Speaker C

You know what? One of the most fun use cases of a notebook, by the way, is, because now you can put in things in all kinds of formats. There can be papers, there can be YouTube videos, there can be whatever's on your hard drive. One of my fun use cases was actually when I had to do this thing where I was seeing all these papers from literally over 100 countries in different languages. So I put them all in and just engaged with the content in multiple other languages, because NotebookLM can handle multiple languages. And now you can do video overviews.

16:12

Speaker A

But I think you can make like an animated. Not an animated video, but a video with like graphics.

16:42

Speaker C

With graphics and slides. But I think this is an example of the kind of thing that happens in labs where we try to take this incredible research that Lila and colleagues and others are doing at Google DeepMind and Google Research to say, how do we build amazing AI first products? Flow is another example.

16:46

Speaker A

So I'll tell you a story about Flow, then I'll let you talk a little bit more about it. I just did my first and last mountain climb, Cotopaxi in Ecuador, and I wanted to make a video sort of capturing the moment. But there were a couple things that happened that I didn't videotape, because I decided to spend the climb actually climbing, as opposed to YouTubing, which is apparently, from what I hear, rare these days. And there was a moment where my water bottle fell out of my backpack and rolled down the glacier and kind of disappeared into the darkness. And I wanted to illustrate that. So I went to Flow, the Google video generator, and I said, I want to make an animation, documentary style, to show this, and slotted that into the video. Before, I would have had to hire an animator. Now you can just do it.

17:02

Speaker C

Yeah, no, it's incredible. But, well, Flow's an example of the magic that happens in Labs. So I remember a bunch of us got together, so Josh, who runs Labs, and, you know, Demis, and a few of us said, what if we put all these tools we now have together into something that's actually useful? And in fact, the initial version of it we had was in some ways clunky. Then we said, well, actually, let's just talk to some actual filmmakers and get their input. So one of the things that happens in Labs, by the way, is we try to engage a lot with creatives and others to help us think about how we build these tools. So, anyway, that's how Flow came about.

17:50

Speaker A

Yeah, you can build scene by scene, prompting into video, and you can have continuation. I think that's probably where the name Flow comes from.

18:26

Speaker C

And what you just said was an insight that came from filmmakers. In fact, with the initial version, they said, no, no, what you've got is actually not very useful. I'd like to be able to build things scene by scene and be able to stitch them together, be able to do this. So that's why it's been helpful. So if you say, what is Labs? It's a place where we try to experiment with all these things. At any one time, we probably have about 30 experiments cooking. So if you go to the Google Labs site, you'll probably see about 30 different things.

18:33

Speaker A

But I have a request for you. Broaden the access, because there are a lot of projects in there that seem really interesting to use. But every time I'm there, it's a waitlist.

18:59

Speaker C

We'll work on that. We'll work on that. I mean, for example, one of the other ones, and sometimes we're surprised by what people find useful, I'll give you an example: Pomelli, which is a tool for SMBs. Imagine, this is not your typical kind of techie startup SMB, but a more traditional SMB that wants to build a web presence. And so you can literally engage with Pomelli as an SMB and be able to build a web presence in incredibly imaginative ways. So we always have all these things cooking in Labs. AI Studio is another example of the kinds of things; this one is for developers. So we're thinking of all these incredible creators, whether they're developers, artists, filmmakers, musicians, and creating these incredible AI tools for them.

19:10

Speaker A

Yeah, there are two that I really want to get access to that I think are potentially going to be big, maybe the next NotebookLM. There's CC, which is an experimental productivity agent within Google, which looks great. And then Disco.

20:00

Speaker C

Oh, disco.

20:12

Speaker A

You can build a web app basically based on links. So if you're thinking about doing something for the weekend, you can just open a bunch of tabs and then it will figure out what type of app to make for you. So maybe a custom map with dots for each potential event, and then you can pick the dates that you want to actually be in the place that you're thinking about, and then it will sort of highlight what's going to be available then. So this is to both of you. Back in the day, Google had this concept called 20% time, where Google employees were basically empowered to spend 20% of their time on something that wasn't core to their job description. And that's where a lot of big Google products came out of. I think Gmail was a 20% project. So I want to ask you both about these experimental projects. Who builds them, and is a version of 20% time back? Obviously there are a lot of cool experiments. How is it happening inside the company?

20:13

Speaker C

Well, I'm happy to start. So I think effectively that's still alive. So I go back to Labs. So if you think about the things that are in labs, I would say something like maybe 80% of them came out of people actually in the Labs team. The other 20% came from 20% stuff. I'll give you a good example on

21:06

Speaker A

a topic that. Wait, 20% time still lives within Google?

21:27

Speaker C

We encourage people to come up with those things. So I'll give you a good example in an area that Lila and I care deeply about, which is education and learning. So somebody in Google Research came up with the idea. They were working on something else, but they came up with the idea: what if we created a way for somebody to learn something their way, however they want to learn? Because it's now possible to get these tools to help you learn in any number of ways. So that eventually became Learn Your Way, which is an experimental product you'll find in Google Labs that was not done by somebody in Labs. Somebody else in another part of the company had come up with the idea. So we are constantly getting all these ideas from across Google about these incredible things. Another example that actually came out of Google DeepMind and Google Research is co-scientist, which those teams worked on, which is a tool for scientists to do actual scientific discovery. Now you're going to see that show up in Labs as a way to test it, get other people to work on it. But it wasn't, as it were, built inside Labs. So the notion of people generating ideas from across the company is very much alive, and you get some exciting innovations from that.

21:30

Speaker A

Lila, do DeepMind researchers have the ability, if they want, to build an experimental product?

22:39

Speaker B

Yeah, I think this is actually just part of our culture, and that's really about giving people the chance to explore and also taking a very interdisciplinary approach. So it's actually not just limited to researchers, which has been quite exciting. It's actually being able to pull together different perspectives and trying to solve real challenges. And sometimes that's even actually AI tools to help us accelerate how we're working. How does our legal team make the review of research papers faster and be able to provide feedback? How do we do more automated red teaming for our responsibility team? Or how do we understand ancient texts? We have a project that one of our researchers decided he wanted to explore. It's not just about intelligence today, but what is it about knowledge from the past that we might not know about? So he worked to come up with a project that was not just able to date a tablet, but also to fill in missing gaps and translate it. And so we now have Project Aeneas, which is all about ancient texts. So, to James's point, one of the things that we have at Google is really smart, curious people and a culture that supports that exploration.

22:45

Speaker A

Yeah. As we close this segment, let me talk a little bit about why I'm so interested in it. Last century, I think the average company, once it reached the S&P 500, stayed on it for 67 years. Now it's like 15 years. And as this AI moment happens, I mean, Google's seen this firsthand, right? Things will be moving even faster. And where ideas come from, the imperative to experiment and create new projects, I think that's key to any company's long term sustainability. So it's very interesting to hear how it operates within Google.

24:11

Speaker B

I was going to comment. I spent some of my career in venture capital. I used to say that was the most remarkable place to be, because you'd have these entrepreneurs with audacious ideas who wanted to build. And I think what's crazy about my experience at Google is this is just part of everyday culture, and it happens in all parts of the organization. How it comes to life might be quite different in Google DeepMind than in other parts of Google, but the fact is that it's supported across the entire organization.

24:46

Speaker C

Yeah. If I could add one other piece on this, Alex. I think one of the things that is really quite unique about the research culture at Google, and I'm including, back to your original Bell Labs question, this happens in Google DeepMind and Google Research, is this idea that we've got to go from research to reality. And I think what you see is a lot of these kind of research-originated breakthrough ideas then very quickly transition into real world impact. I mean, AlphaFold is a good example, right? Which is an incredible breakthrough, Nobel Prize worthy and all of that. But look at what's happened since then, right? You now have three and a half million researchers accessing it in over 190 countries. You take some of the breakthroughs in weather forecasting and prediction, they're now actually being used in the real world. We now do flood forecasting, which is a very incredible kind of research question, but now it's covering 150 countries with 2 billion people. So I think this idea of going from breakthrough scientific research to translating that into societal impact is a very unique aspect of what we do.

25:18

Speaker A

There's a natural follow-up here that I have to ask, because if I don't ask it, the audience is going to be like, why didn't you ask that? For many years, Google seemed like it was, or at least the perception was that it was, afraid to ship. Case in point: you created the transformer model, and ChatGPT was the first mainstream application built off of it. In fact, I spoke with Sam Altman at the end of the year, and one of the notable things he said in that interview was that if Google had taken us seriously early on, they would have smashed us, and now they're a formidable competitor. So has the imperative to ship become more important within Google, and has there been more ambition to bring these experiments out into the public?

26:21

Speaker C

I think there is, but there's been a natural evolution to it. One of the things that's important is there's an incredible amount of research breakthrough going on, and there always will be at Google. There's this productive tension between: is it ready, is it not? And we don't always get that right. But I actually think it's a great tension, because part of being bold and responsible is that we have to live with that tension. So you've got that going on. But what you also see is a realization that for many of these experiments and innovations, there's actually a lot to learn, back to the scientific method, by having people use it and experience it. There's only so much red teaming you can do of a product, and we do a lot of that, but there's also a lot you can learn when people use it, either usefully or even adversarially. So that's been a bit of the evolution: shipping useful products, and learning from that shipping, is very helpful. So you're seeing us, we like to talk about this idea of relentless shipping, we're now on a cycle with the Gemini models where every five, six months there's a latest generation. I think that's part of what you're seeing going on.

27:04

Speaker A

Okay. I definitely want to make time to talk about AI and education, which I know you've both really worked on, but which has been a particular passion of yours, Lila. Let's take a break and we'll come back right after this. Here is the problem: your data is exposed everywhere. Personal data is scattered across hundreds of websites, often without your consent. And that means that data brokers buy and sell your information: your address, phone number, email, Social Security number. And that exposure leads to real risks, things like identity theft, scams, harassment, higher insurance rates. Incogni tracks down and removes your personal data from data brokers, directories, people search sites, and commercial databases. Here's how it works. First, you create your account and share the minimal information needed to locate your profiles. Second, you authorize Incogni to contact data brokers on your behalf. Third, Incogni will remove your data, both automatically with hundreds of brokers and via custom removals. There's also a 30-day money-back guarantee. Take back your personal data with Incogni. Go to incogni.com/bigtechpod and use code bigtechpod at checkout. Our code will get you 60% off an annual plan. Go check it out. Starting something new isn't just hard, it's terrifying. So much work goes into this thing that you're not entirely sure will work out, and it can be hard to make that leap of faith. When I started this podcast, I wasn't sure if anyone would listen. Now I know it was the right choice. It also helps when you have a partner like Shopify on your side. Shopify is the commerce platform behind millions of businesses around the world and 10% of all e-commerce in the US, from household names like Allbirds and Cotopaxi to brands just getting started. With hundreds of ready-to-use templates, Shopify helps you build a beautiful online store that matches your brand's style.
You can also get the word out like you have a marketing team behind you: easily create email and social media campaigns wherever your customers are scrolling or strolling. It's time to turn those what-ifs into why-nots with Shopify today. Sign up for your $1-per-month trial at shopify.com/bigtech. That's shopify.com/bigtech. And we're back here on Big Technology Podcast with Lila Ibrahim, the COO of Google DeepMind, and James Manyika, SVP of Research, Labs, Technology and Society at Google. It's great to have you both. AI and education has been something that you're both passionate about and have done a lot of work on. A recent study that you did found that 85% of students 18 and over are using AI. Probably the other 15% aren't telling you. And 81% of teachers report using AI, which far surpasses the global average of 66% of the public using AI. So this is making a real impact in education. Let's just start with your perspective on whether this is a net positive for education, because the criticisms are out there: that kids are using it to cheat, and teachers are using it to grade those cheated papers. What's happening in practicality?

28:19

Speaker B

Well, first of all, I think this is a really important area that, as James mentioned earlier, we're approaching as we approach everything: how do we be bold in thinking about how AI might actually transform how people learn and really unlock human potential, while also being responsible, thinking about what the risks are and making sure that we're investing in mitigating them. One of the things that we found in that survey is that about 80% of the 18-plus learners are actually finding it helpful for their education and their learning. It's giving them the information they need in the format they might need it. One of the areas we've really been focused on is making sure that it's not just providing an answer, but that it will actually take you through the steps. This is grounded in everything we do, which is a scientific approach. Back up three years ago, we said, let's treat learning like a first-class science problem. How do people learn? We have some of that experience and expertise within Google. We also know that the world is full of people who are studying this. So we took a very deliberate approach to collaborate with pedagogy experts and educators worldwide, and a lot of that became what we called LearnLM. This was the year that we infused that into Gemini and then developed features like guided learning in the Gemini app, where it helps you actually break down the problem. So it's teaching you how to learn and how to break down the problem. And for someone like me, who also happens to be a parent of teenagers, I think about this a lot.

31:35

Speaker A

And

33:15

Speaker B

I have twin daughters, so I'm constantly running A/B tests.

33:16

Speaker A

Yeah, you should let one use AI and make sure the other doesn't, and then see who turns out better.

33:20

Speaker B

You know what's interesting? Well, I'll take that as input for my next experiment. But one of my daughters is dyslexic, and the way the education system has been built is not for someone like her. And yet what I have found is that when she can integrate AI into her learning process, whether it's breaking down a math problem or helping her take her words that are sometimes scrambled and put them into something more coherent, it's actually giving her a confidence I have never seen in her before. And I think back a lot to, I also have a sister with a physical disability. The tools were not there. The education system was not made for her. Think about the entire world and how many students have been left behind because they just didn't have access to this technology. So our idea is: imagine if every student could have a personalized tutor, and if every teacher could have a teaching assistant, where AI is a productivity tool. That really could change the dynamic of how teachers and students interact. We're not saying that the AI is the magic. The teacher is still the magic. But it frees up the teacher to actually do that human-to-human interaction. And we've seen some really great progress in a lot of the work that we're doing with productivity tools for teachers. I was just in Northern Ireland, and the teachers there worked with the government and ran a pilot, and the teachers had these little post-it notes. What they found was that, on average, they were saving 10 hours per week per teacher, and their post-it notes showed how they were using that time: I'm getting time back with my family; I can now do lesson plans for the different types of learners within my 30-plus-student classroom. It was so encouraging. But there's still a lot to learn. We're still in the early stages, and we have to go into this knowing that it is high stakes.
We're talking about people's lives and their longevity. Helping them learn, opening up the opportunities, and then being able to learn from that and integrate it into our research is critically important.

33:24

Speaker C

Yeah. One thing I would add: one of the things we're learning is that learning is no different from other areas of society, which is that when a new technology comes in, you don't just bolt it onto an existing process and an existing workflow. You have to almost reimagine the workflow. Let me give you an example in learning. We know that there's this issue and concern around cheating. So in a world in which you have tools like this, I'm not quite sure you want to do tests and assessment the old way. For example, it's actually quite interesting: we worked with some school districts, and Lila described guided learning. It turns out that when students actually use guided learning, they do learn, and their mastery of the subject improves. But this school district found that actually, you know what, maybe we should have more tests. Because we know that when students are getting ready for a test, they actually do use guided learning, whereas when they're just trying to hand in homework at 11pm the night before, they don't.

35:30

Speaker A

And somebody watching is going to have a heart attack here. Yeah, more tests.

36:33

Speaker C

So what they realized is: well, let's do an experiment. What if we actually have a weekly test? In other words, let's expand the window when students are motivated to turn on guided learning and actually master the thing, because they're going to have a test. They actually found that students were learning more. So that's an example of how maybe we need to reimagine even what the workflow and the learning process is, as opposed to just trying to bolt a technology onto an existing structure and an existing workflow. There are a lot of interesting experiments and innovations that we're learning a lot from by talking to teachers and some schools and school districts. So I think we're at the very early stages of this. But the concerns that people have around cognitive offloading and so forth, those are real concerns, and we have to work on that.

36:37

Speaker A

I do want to talk about that because, like with many things with technology, and especially AI, I think the concern is about these uses we're talking about. It's, by the way, amazing that LearnLM will go step by step and, instead of spitting out an answer, actually work with the person using it to help them make progress. But the issue is that some of the most ambitious people will use this, and their performance will just go through the roof, and then it will create this dichotomy between the people that use it the right way and those that use it the wrong way. There was a great article in the New York Times recently about how it's not just students, it's teachers. The headline is that the professors are using ChatGPT, and some students are unhappy about it. And there's this student at Northeastern who was reading her professor's slides and seeing the slides filled with spelling mistakes and extraneous body parts in the images, which are telltale signs of AI. So what do you think about the fact that this could create an even broader divergence in society? Lila?

37:25

Speaker B

Actually, it reminds me a lot of when we introduced computers into classrooms and into universities. There are actually quite a few lessons from those days that we're trying to explore and do research on. So one is what we can do about that. But one thing we are also separately trying to do is convene leaders to talk about how to approach this from a system perspective, bringing together administrators to say: what is the framework they want to use within their organizations for responsible usage of the technology? One of the challenges we have right now is that it's a little bit of everything happening, rather than taking an exploratory approach to say: listen, AI isn't going away. Equitable access and literacy are important. Some students might be using it because they want to get ahead; others are afraid they're going to be perceived as cheating, so they're not going to use it. And to your point, that creates a separation, and sometimes we see that based on gender too, by the way.

38:33

Speaker A

Oh.

39:37

Speaker B

So I think what we can do is bring together leaders to explore how we enter this next chapter. How do we start to set the guardrails in a way that maximizes the benefits while mitigating the risks? James and I and a few other colleagues co-hosted an event late last year to start exploring and sharing best practices: what are people experimenting with, what is working, what's not? And we had our researchers there as well. We also did some hands-on training so that teachers can actually learn how to use the tools responsibly. Again, I think this is more about unlocking productivity and potential versus replacement. So we have to work on making sure the incentive models are in place

39:39

Speaker A

as well, that's for sure. Okay, we have 10 minutes left, and there's so much experimental technology that I want to talk about. So can we use our remaining time to go through four of your cutting-edge technology approaches or disciplines, maybe two minutes each, where we'll just talk about the state of them? It's definitely too much to cover in a short amount of time, but I don't want to leave here without touching on them. So first to you, James: the state of quantum. It seems like it's moving faster than a lot of people anticipated.

40:24

Speaker C

Yeah, quantum. We have an incredible Quantum AI team that's doing extraordinary, path-breaking work. And the headline on this is that quantum computing is actually making more progress than people realize. Keep in mind that what everybody's aiming for in quantum is how to build a fully error-corrected quantum computer, and there have been lots of different approaches to this. The dominant approach that most people are taking is the superconducting qubits approach. That's what our team is doing, and there are other teams in the world doing that too. It's a very complex way of doing it, but people think it's the best shot at it. There are other mechanisms, though: there are neutral atom approaches, a whole range of approaches. The progress that's happened is as follows. The underlying chips are making incredible progress. Our Willow chip, for example, hit a big milestone about a year and a half ago: it was able to do a benchmark computation called RCS, which would take a classical frontier supercomputer 10 septillion years, that's a one followed by 25 zeros, it's a big number, in under five minutes. And it was able to correct errors in a fundamentally breakthrough way. One of the things that's always been an issue with error correction, which is the other big barrier in quantum computing, is: how can you reduce the error rate as you scale up and add qubits?
So the real breakthrough, despite the fun, spectacular number I told you about, the real breakthrough, which is what got us the Breakthrough of the Year award, was that for the first time we were able to show that you can do what's called below-threshold error correction: as you scale up the system, the error rates actually go down, which is exactly what you'd want, as opposed to going up. So that was a big deal. The other big deal came late last year, because all these benchmarks, including the one I just told you about, are computations that are fun and great for benchmarking but are actually not useful for anything. Last year we were able to show probably the first useful computation. This is our Quantum Echoes result. It was a big enough deal that it made the cover of Nature, which is great; our teams were excited about that. What it showed was an actual useful computation for figuring out the spin dynamics of molecules, which could not have been done any other way. And we were able to validate the result with colleagues at Berkeley, who validated it in a lab with NMR data. So that was the first example of a useful computation. You put all that together, and you realize that the progress people had pegged as decades away is actually happening much faster. So I actually think we're going to start to see useful applications from quantum computing in the next five or so years. And that's pretty exciting.

40:56

Speaker A

Definitely. We're going to spend much more time on this show thinking about that. Materials science, I think, is one of the more overlooked areas of AI research, where you can actually find new materials through AI predictive techniques. So, Lila, talk a little bit about where that stands today.

43:55

Speaker B

It goes back to: what are some of the root-node problems where, if AI can help us unlock a basic understanding of the universe around us, it can open an entire field for ourselves and other researchers to build upon? AlphaFold being one of those, AlphaGenome another, and the one you've just mentioned, our materials science work, was really exciting because we basically went from 40,000 known stable crystals to 400,000-plus that are now being tested in research and in labs. What that really means is, if you think about things like how we build better batteries for electric vehicles or superconductors for supercomputers, one way we can do that is by thinking of new materials. We're still, I think, quite early at this stage, but we believe this is something promising that could really change how we work and live.

44:14

Speaker A

And what do we get if there are new materials discovered? Is it like something that's maybe T-shirt thinness but winter-coat warmth?

45:09

Speaker B

Yeah, yeah.

45:18

Speaker A

I mean, looking at the background behind you, that's all I can.

45:19

Speaker B

Yeah. I think, when you look at everything around us, and like I said, if you think about even batteries, right, and electric vehicles: how do you improve the range of a vehicle or its charging capacity? Being able to have better batteries and not be limited by some of today's physics, I think things like that are going to be possible with some of these basic materials.

45:23

Speaker A

Okay. Now weather, weather prediction with AI is actually something that Google's working on pretty

45:48

Speaker B

diligently in many different ways.

45:55

Speaker C

Yeah, we actually have a very broad program around weather, and that's work across Google DeepMind and Google Research. Look, there are so many things you want to predict with weather. One is just forecasts: what's the weather going to be like tomorrow, next week? There's that kind of work. GraphCast, which came out of Google DeepMind, is an incredible state-of-the-art model for that. You're also trying to predict other things in weather: monsoons, cyclones, when floods are going to happen, all these extreme weather events. So we have a very broad program where we try to use the latest AI innovations to make predictions. I'll give you an example of one that actually. Two quick examples.

45:57

Speaker A

No, no, do one quick one, because I have to ask you about Suncatcher. You want to talk about Suncatcher? Unless your team gives me more time, let's just do one example.

46:38

Speaker C

Well, let me do one example, because this actually affects people and saves lives. It has always been known that if you could predict floods with more than six days' advance notice, you can actually save lives. In fact, the UN estimates you can prevent probably half the damage that happens. So this has always been one of those challenges: can you do that? Our team, starting about two and a half years ago, built a model to predict these so-called riverine floods. We tried it in Bangladesh, and it worked. Now fast-forward to today: we're making these riverine flood predictions covering 150 countries and places where more than 2 billion people live. I think that's extraordinary. So that's an example of breakthrough innovation leading all the way to useful societal impact.

46:46

Speaker B

We're working with the National Hurricane Center as well, where we can predict, 15 days in advance, 50 different routes for hurricanes, and we actually tracked Hurricane Melissa. So you start to think about what this type of insight might mean for crisis preparedness.

47:32

Speaker A

Yeah. And then more mundane things like airplane schedules: you know that a storm is coming, so you can take care of that in advance. Okay, last thing. Suncatcher. What is Suncatcher?

47:46

Speaker C

So this is in classic Google moonshot fashion, where you say: okay, imagine how we train AI systems today, and then imagine how we'll be doing it 100 years from now, given the compute and energy requirements needed to train models. A hundred years from now, of course, we're doing it in space, because the sun has 100 trillion times more energy, and it's available 24/7. If that's probably how we're going to be doing it in the future, why don't we try to build towards that future? So Project Suncatcher is a moonshot in classic Google fashion where we said, let's start to build towards that. We've already hit a few of the first key milestones. We're going to try to put TPUs, our special-purpose AI chips, in space and do training runs.

47:58

Speaker A

You're sending chips to space?

48:50

Speaker C

Chips to space.

48:51

Speaker A

This is actually happening.

48:52

Speaker C

Yeah. So the first milestone is, we're hoping that in 2027 we'll have done a couple of training runs in space. This is Project Suncatcher, with the idea of building towards this future where this is probably how we're going to be doing it. People imagine Dyson spheres and all these things: of course you want to harness the energy capacity in your system, in our case in our solar system first, and then eventually, ultimately, in the galaxy. You're going to do things in space

48:53

Speaker A

there's this idea that former Googler Ilya had that if we're going to get to AGI, maybe the world is going to have to be papered with data centers. But if you put them in space, maybe we can keep the rest of the Earth for us.

49:23

Speaker C

So stay tuned. Our next milestone will be in 2027. Hopefully we'll have done some training runs.

49:37

Speaker A

Would either of you go to space? If you get the opportunity. You trust the current spaceships?

49:44

Speaker C

Yeah, they're pretty good. I grew up wanting to be an astronaut. I failed, obviously. Really?

49:51

Speaker A

So did I. I did not, and I will not be going to space. All right.

49:55

Speaker B

I'm more interested right now in how do we make Earth better. And I think that's where AI can really make a difference.

50:01

Speaker A

Yeah. Imagine focusing on this planet. That's an idea. All right, Lila, James, thank you so much for coming on the show. Really.

50:06

Speaker C

Thanks for having us, Alex.

50:12

Speaker A

All right, everybody, thank you for listening and watching, and thank you again to Qualcomm for having us at your space here in Davos. This concludes our series of episodes from Davos. It's been a great four, five episodes, actually, if you include the one we did with Demis. And we'll see you next time on Big Technology Podcast. Thank you, thank you, thank you. Did you know your credit card points and miles can lose value to inflation? Credit card companies often reduce the redemption value of your points and miles. Now imagine a credit card with rewards that can grow in value. With the Gemini credit card, you can earn Bitcoin or one of over 50 other cryptos instantly, with no annual fee. Every swipe at the store or gas pump earns you instant rewards deposited straight to your account. Visit gemini.com/card today. Check out the link in the description for more information on rates and fees. Again, if you're looking to invest in Bitcoin but don't know where to start, the Gemini credit card makes it easy. Issued by WebBank. This is not investment advice, and trading crypto involves risk. Check Gemini's website for more details on rates and fees.

50:13

Speaker B

Well, the holidays have come and gone once again.

51:35

Speaker A

But if you've forgotten to get that special someone in your life a gift, well, Mint Mobile is extending their holiday offer of half off unlimited wireless. So here's the idea. You get it now. You call it an early present for next year. What do you have to lose? Give it a try at mintmobile.com/switch.

51:37

Speaker B

Limited time. 50% off regular price for new customers. Upfront payment required: $45 for three months, $90 for six months, or $180 for a 12-month plan. Taxes and fees extra. Speeds may slow after 50 gigabytes per month when the network is busy. See terms.

51:54