The Future of Intelligence with Demis Hassabis (Co-founder and CEO of DeepMind)
50 min • Dec 16, 2025
Summary
Demis Hassabis discusses the transformative progress in AI over the past year, covering advances in multimodal models, world models, and agentic systems. He explores the path toward AGI, the importance of scientific rigor alongside commercial development, and the societal implications of advanced AI including economic restructuring and international collaboration needs.
Insights
- AI progress is not hitting scaling walls but experiencing diminishing returns—still significant improvements possible through innovation in reasoning, consistency, and reliability rather than just data scaling
- World models and simulations represent a critical missing capability in current AI systems, enabling spatial understanding, physics comprehension, and embodied learning that language models alone cannot achieve
- The shift from passive AI systems (user-directed) to autonomous agentic systems in the next 2-3 years will dramatically increase both capabilities and risks, requiring new cybersecurity and safety frameworks
- Current AI systems exhibit 'jagged intelligence'—excelling at PhD-level tasks while failing at high school problems—indicating consistency and reasoning gaps that must be solved before AGI
- Post-AGI society will require fundamental economic restructuring beyond current models, potentially including new systems like participatory budgeting and universal basic income, not just add-ons to existing capitalism
Trends
- Shift from large language models to multimodal and agentic AI systems as the primary research frontier
- Integration of world models and physics understanding into AI systems for robotics and embodied intelligence applications
- Emergence of 'thinking' systems that spend inference time reasoning through problems rather than generating immediate responses
- Growing focus on AI safety, consistency, and confidence calibration as prerequisites for AGI rather than afterthoughts
- Convergence of multiple AI research streams (language, vision, world models, agents) toward unified proto-AGI systems
- Increased investment in scientific applications of AI (fusion, materials science, drug discovery) as proof points for transformative impact
- Recognition that international collaboration and governance frameworks are urgently needed but currently fragmented and underdeveloped
- Debate over AI's short-term hype versus long-term underappreciation of transformative potential, with concerns about startup valuations in bubble territory
- Emphasis on responsible AI development through commercial incentives (enterprises demanding guardrails) rather than regulation alone
- Growing philosophical and economic questions about human purpose, creativity, and meaning in a post-scarcity, post-AGI world
Topics
- Agentic AI Systems and Autonomous Agents
- World Models and Simulation Technology
- Multimodal AI Capabilities (Text, Image, Video)
- AI Reasoning and Consistency Problems
- Hallucination Reduction and Confidence Calibration
- Physics Understanding in AI Systems
- Fusion Energy and AI Applications
- Materials Science and AI Discovery
- AlphaFold and Protein Folding
- AI Safety and Responsible Development
- AGI Timeline and Path to Artificial General Intelligence
- Economic Restructuring Post-AGI
- International AI Governance and Collaboration
- Quantum Computing and Classical Computation Limits
- AI in Robotics and Embodied Intelligence
Companies
Google DeepMind
Demis Hassabis is CEO and co-founder; organization conducting research on Gemini, world models, and path to AGI
Commonwealth Fusion Systems
Startup working on Tokamak fusion reactors; Google DeepMind announced deepened partnership to help with plasma containment
Google
Parent company of DeepMind; integrating Gemini AI into search, workspace, email, YouTube, and Chrome products
Alphabet
Parent holding company of Google and DeepMind; positioned to benefit from AI integration across product ecosystem
OpenAI
Competitor in large language models and AGI race; mentioned in context of competitive landscape and AI development
People
Demis Hassabis
CEO and co-founder of Google DeepMind; primary speaker discussing AI progress, AGI timeline, and societal implications
Hannah Fry
Podcast host and interviewer; Professor conducting the conversation with Demis Hassabis
Shane Legg
Co-founder of DeepMind; leading efforts on post-AGI economic and societal planning; discussed AGI timelines
Roger Penrose
Physicist and mathematician; referenced for theory that quantum effects in brain may relate to consciousness
Alan Turing
Mathematician and computer scientist; Turing machines are central to Hassabis's core research philosophy
Quotes
"I'm basically doing everything I ever dreamed of. And we're at the absolute frontier of science in so many ways, applied science as well as machine learning."
Demis Hassabis • Opening
"It feels like we packed in 10 years in one year"
Demis Hassabis • Early discussion
"There's something missing still from these systems in terms of their consistency... they're really good at certain things, maybe even like PhD level, but then other things, they're like not even high school level. So it's very uneven still."
Demis Hassabis • Discussion of AI limitations
"If we could have modular fusion reactors, you know, this promise of almost unlimited, renewable, clean energy would obviously transform everything."
Demis Hassabis • Fusion discussion
"The difference this time is that it's probably going to be 10 times bigger than the industrial revolution and it'll probably happen 10 times faster. So it'll unfold over more like a decade than a century."
Demis Hassabis • Societal impact discussion
Full Transcript
I'm basically doing everything I ever dreamed of. And we're at the absolute frontier of science in so many ways, applied science as well as machine learning. And that's exhilarating, that feeling of being at the frontier and discovering something for the first time. Welcome to Google DeepMind, the podcast with me, Professor Hannah Fry. It has been an extraordinary year for AI. We have seen the centre of gravity shift from large language models to agentic AI. We've seen AI accelerate drug discovery and multimodal models integrated into robotics and driverless cars. Now, these are all topics that we've explored in detail on this podcast. But for the final episode of this year, we wanted to take a broad review, something beyond the headlines and product launches, to consider a much bigger question. Where is all this heading really? What are the scientific and technological questions that will define the next phase? And someone who spends quite a lot of their time thinking about that is Demis Hassabis, CEO and co-founder of Google DeepMind. Welcome back to the podcast, Demis. Lovely to see you again. Great to be back. I mean, quite a lot's happened in the last year. What's sort of the biggest shift, do you think?
Oh wow, I mean, um, it's just so much has happened. As you said, it feels like we packed in 10 years in one year. I think a lot's happened. Certainly for us, the progress of the models. We've just released Gemini 3, which we're really happy with; multimodal capabilities, all of those things have just advanced really well. And then probably the thing, I guess over the summer, that I'm very excited about is world models being advanced. I'm sure we're going to talk about that. Yeah, absolutely, we will get onto all of that stuff in a bit more detail in a moment. I remember the very first time I interviewed you for this podcast and you were talking about the root node problems, about this idea that you can use AI to kind of unlock these downstream benefits. And you've made pretty good on your promise, I have to say. Do you want to give us an update on where we are with those? What are the things that are just around the corner, and the things that you've sort of solved or near solved? Yeah, well, of course, obviously the big proof point was AlphaFold. And it's sort of crazy to think we're coming up to the five-year anniversary of AlphaFold being announced to the world, AlphaFold 2 at least. So that was the proof, I guess, that it was possible to do these root node type of problems. And we're exploring all the other ones now. I think materials science, I'd love to do a room temperature superconductor and better batteries, these kinds of things. I think that's on the cards, better materials of all sorts. We're also working on fusion. Is this a new partnership that's been announced? Yeah, we've just announced a deepened partnership with Commonwealth Fusion. We were already collaborating with them, but it's a much deeper one now. They, I think, are probably the best startup working on at least traditional Tokamak reactors. So they're probably closest to having something viable.
And we want to help accelerate that, helping them contain the plasma in the magnets and maybe even some material design there as well. So that's exciting. And then we're also collaborating with our quantum colleagues, who are doing amazing work at the Quantum AI team at Google. And we're helping them with error correction codes, where we're using our machine learning to help them. And then maybe one day they'll help us. Yes, exactly. The fusion one is particularly, I mean, the difference that would make to the world, what would be unlocked by that, is gigantic. Yeah. I mean, fusion has always been the holy grail. Of course, I think solar is very promising too, right? Effectively using the fusion reactor in the sky. But I think if we could have modular fusion reactors, you know, this promise of almost unlimited, renewable, clean energy would obviously transform everything. And that's the holy grail. And of course, that's one of the ways we could help with climate. It does make a lot of our existing problems sort of disappear if we can... Definitely. I mean, it opens up many, this is why we think of it as a root node. Of course, it helps directly with energy and pollution and helps with the climate crisis. But also, if energy really was renewable and clean and super cheap, almost free, then many other things would become viable, like, you know, water access, because we could have desalination plants pretty much everywhere, even making rocket fuel. You know, there's lots of seawater, which contains hydrogen and oxygen. That's basically rocket fuel, but it just takes a lot of energy to split it out into hydrogen and oxygen. But if energy is cheap and renewable and clean, then why not do that? You know, you could have that producing 24-7. We're also seeing a lot of change in the AI that is applying itself to mathematics, right? You know, winning medals in the International Maths Olympiad.
And yet at the same time, these models can make quite basic mistakes in high school math. Why is there that paradox? Yeah, I think it's fascinating, actually, one of the most fascinating things, and fixing it is probably one of the key things, one of the reasons why we're not at AGI yet. As you said, we, and other groups, have had a lot of success getting gold medals at the International Maths Olympiad. You look at those questions and they're super hard questions that only the top students in the world can do. And on the other hand, if you pose a question in a certain way, and we've all seen this experimenting with chatbots ourselves in our daily lives, they can make some fairly trivial mistakes on logic problems. They can't really play decent games of chess yet, which is surprising. So there's something missing still from these systems in terms of their consistency. And I think that's one of the things you would expect from a general intelligence, you know, an AGI system: that it would be consistent across the board. And so sometimes people call it jagged intelligence. So they're really good at certain things, maybe even like PhD level, but then other things, they're like not even high school level. So it's very uneven still, the performance of these systems. They're very, very impressive in certain dimensions, but they're still pretty basic in others. And we've got to close those gaps. And, you know, there are theories as to why, and depending on the situation, it could even be the way that an image is perceived and tokenized.
So sometimes it actually doesn't even get all the letters. You know, when you count letters in words, it sometimes gets that wrong, because it may not be seeing each individual letter. So there are different reasons for some of these things, and each one of those can be fixed, and then you can see what's left. But I think consistency is one thing. Another is reasoning and thinking. So we have thinking systems now that at inference time spend more time thinking, and they're better at outputting their answers. But it's not super consistent yet in terms of whether it's using that thinking time in a useful way to actually double check, and use tools to double check, what it's outputting. I think we're on the way, but maybe we're only 50% of the way there. I also wonder about that story of AlphaGo and then AlphaZero, where you sort of took away all of the human experience and found that the model actually improved. Is there a scientific or a maths version of that in the models that you're creating? I think maybe. I think with what we're trying to build today, it's more like AlphaGo. So effectively, these large language models, these foundation models, they're starting with all of human knowledge, you know, what we put on the internet, which is pretty much everything these days, and compressing that into some useful artifact which they can look up and generalize from. But I do think we're still in the early days of having this search or thinking on top, like AlphaGo had, to kind of use that model to find useful reasoning traces, useful planning ideas, and then come up with the best, you know, solution to whatever the problem is at that point in time. So I don't feel like we're constrained at the moment by the kind of limit of human knowledge, like the internet. I think the main issue at the moment is we don't know how to use those systems in a fully reliable way yet, in the way we did with AlphaGo.
But of course, that was a lot easier because it was a game. I think once you have AlphaGo there, you could go back, just like we did with the Alpha series, and do an AlphaZero, where it starts discovering knowledge for itself. I think that would be the next step. That's obviously harder. And so I think it's good to try and create the first step first, with some kind of AlphaGo-like system, and then we can think about an AlphaZero-like system. But that is also one of the things missing from today's systems: the ability to learn online and continually learn. So, you know, we train these systems, we balance them, we post-train them, and then they're out in the world, but they don't continue to learn out in the world like we would. And I think that's another critical missing piece from these systems that will be needed for AGI. In terms of all of those missing pieces, I mean, I know that there's this big race at the moment to release commercial products, but I also know that Google DeepMind's roots really lie in the idea of scientific research. And I found a quote from you where you recently said, if I'd had my way, we would have left AI in the lab for longer and done more things like AlphaFold, maybe cured cancer or something like that. Do you think that we lost something by not taking that slower route? I think we lost and gained something. So I feel like that would have been the more pure scientific approach. At least that was my original plan, say 15, 20 years ago, when almost no one was working on AI and we were just about to start DeepMind. People thought it was a crazy thing to work on, but we believed in it. And I think the idea was that if we made progress, we would continue to incrementally build towards AGI, be very careful about what each step was and the safety aspects of it, analyse what the system was doing, and so on. But in the meantime, you wouldn't have to wait till AGI arrived before it was useful.
You could branch off that technology and use it in really beneficial ways for society, namely advancing science and medicine. So exactly what we did with AlphaFold, actually, which is not a foundation model itself, a general model, but it uses the same techniques, you know, transformers and other things, and then blends them with things more specific to that domain. So I imagined a whole bunch of those things getting done, which would be hugely beneficial, you know, you'd release them to the world just like we do with AlphaFold, and indeed do things like cure cancer and so on, whilst we were working on the AGI track in the lab. Now, it's turned out that chatbots were possible at scale, and people find them useful. And they've now morphed into these foundation models that can do more than chat and text, obviously, including Gemini. They can do images and video and all sorts of things. And that's also been very successful commercially in terms of a product. And I love that too. I've always dreamed of having the ultimate assistant that would help you in everyday life, make it more productive, maybe even protect your brain space a bit as well, attention-wise, so that you can focus and be in flow and so on. Because, you know, today with social media, it's just noise, noise, noise. And I think AI that actually works for you could help us with that. So I think that's good. But it has created this pretty crazy race condition, where there are many commercial organizations and even nation states rushing to improve and overtake each other. And that makes it hard to do rigorous science at the same time. We try to do both, and I think we're getting that balance right. But it makes it harder. On the other hand, there are lots of pros to the way it's happened, which is, of course, there's a lot more resources coming into the area. So that's definitely accelerated progress.
And also, I think the general public are actually, interestingly, only a couple of months behind the absolute frontier in terms of what they can use. So everyone gets a chance to feel for themselves what AI is going to be like. And I think that's a good thing. And then governments sort of understand this better. The thing that's strange is that, I mean, this time last year, I think there was a lot of talk about scaling eventually hitting a wall, about us running out of data. And yet, as we're recording now, Gemini 3 has just been released and it's leading on this whole range of different benchmarks. How has that been possible? Like, wasn't there supposed to be a problem with scaling hitting a wall? I think a lot of people thought that, especially as other companies have had slower progress, shall we say. But I think we've never really seen any wall as such. What I would say is maybe there are diminishing returns. When I say that, people only think, oh, so there's no returns. Like it's zero or one: it's either exponential or it's asymptotic. No, actually, there's a lot of room between those two regimes, and I think we're in between them. So it's not like you're going to double the performance on all the benchmarks every time you release a new iteration. Maybe that's what was happening in the very early days, you know, three, four years ago. But you are getting significant improvements, like we've seen with Gemini 3, that are well worth the investment and the return on that investment. So we haven't seen any slowdown there. There are issues like, are we running out of available data? But there are ways to get around that, you know, synthetic data. These systems are good enough that they can start generating their own data, especially in certain domains like coding and math where you can verify the answer. In some sense you could produce unlimited data. So, you know, all of these things though are research questions.
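The generate-and-verify idea for synthetic data in verifiable domains can be sketched as a toy loop. This is a hypothetical illustration, not DeepMind's actual pipeline: `make_problem`, `model_answer`, and the arithmetic domain are all stand-ins, with mechanical verification doing the filtering.

```python
import random

def make_problem(rng):
    # Build a simple arithmetic problem with a known ground-truth answer.
    a, b = rng.randint(1, 999), rng.randint(1, 999)
    return f"{a} + {b}", a + b

def model_answer(prompt):
    # Stand-in for a real model call; a real system would sample a solution
    # and rely on the verification step below to filter out wrong ones.
    a, b = prompt.split(" + ")
    return int(a) + int(b)

def generate_verified_dataset(n, seed=0):
    # Keep only (problem, answer) pairs whose answer checks out, so the
    # synthetic data is grounded by verification rather than by trust.
    rng = random.Random(seed)
    dataset = []
    while len(dataset) < n:
        prompt, truth = make_problem(rng)
        answer = model_answer(prompt)
        if answer == truth:
            dataset.append((prompt, answer))
    return dataset

print(generate_verified_dataset(3))
```

In coding, the verifier would be a test suite or execution sandbox rather than an arithmetic check; the point is that verification makes the supply of clean training data effectively unlimited.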
And I think the advantage we've always had is that we've always been research first. I think we have the broadest and deepest research bench, and always have done. And if you look back at the last decade of advances, whether that's Transformers or AlphaGo, AlphaZero, any of the things we just discussed, they all came out of Google or DeepMind. So I've always said, if more innovations are needed, scientific ones, then I would back us to be the place to do it, just like we were in the previous 15 years for a lot of the big breakthroughs. So I think that's just what's transpiring. And I actually really like it when the terrain gets harder, because then it's not just world class engineering you need, which is already hard enough; you have to ally that with world class research and science, which is what we specialize in. And on top of that, we also have the advantage of world class infrastructure, with our TPUs and other things that we've invested in a lot for a long time. And so that combination, I think, allows us to be at the frontier of the innovations as well as the scaling part. And effectively, you can think of it as 50% of our efforts on scaling, 50% on innovation. And my bet is you're going to need both to get to AGI. I mean, one thing that we are still seeing, even in Gemini 3.0, which is an exceptional model, is this idea of hallucination. So I think there was one metric that said it can still give an answer when actually it should decline. I mean, could you build a system where Gemini gives a confidence score in the same way that AlphaFold does? Yeah, I think so. And I think we need that, actually. I think that's one of the missing things. I think we're getting close. I think the better the models get, the more they know about what they know, if that makes sense.
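An aside on why such a confidence score is non-trivial: models do expose per-token probabilities, but aggregating them measures how likely the text is, not whether the claim is true. A hypothetical toy in Python, with made-up log-probabilities:

```python
import math

def sequence_score(token_logprobs):
    # Geometric-mean token probability: exp of the average log-probability.
    # High values mean "fluent, likely text", not "factually correct".
    avg = sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg)

# A confidently worded but false statement can still score near 1.0,
# which is why token-level likelihood is not an answer-level confidence.
fluent_but_wrong = [-0.05, -0.10, -0.02, -0.08]
print(round(sequence_score(fluent_but_wrong), 3))
```

Getting an AlphaFold-style confidence would mean training the model to estimate correctness of the whole statement, which is a separate signal from this likelihood.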
And the more reliable they get, you could sort of rely on them to actually introspect in some way, or do more thinking, and realize for themselves that they're uncertain, that there's uncertainty over this answer. And then we've got to work out how to train it so that it can output that as a reasonable answer. We're getting better at it, but it still sometimes forces itself to answer when it probably shouldn't, and then that can lead to a hallucination. A lot of the hallucinations are of that type currently. So there's a missing piece there that has to be solved, and you're right, we did solve it with AlphaFold, but obviously in a much more limited way. Because presumably behind the scenes there is some sort of measure of probability of whatever the next token might be. Yes, there is, of the next token. That's how it works, but that doesn't tell you the overarching piece: how confident are you about this entire fact or this entire statement? And I think that's why we'll need to use the thinking steps and the planning steps to go back over what you've just output. At the moment, it's a little bit like talking to a person who, on a bad day, is just literally telling you the first thing that comes to their mind. Most of the time that'll be okay, but sometimes, with a very difficult thing, you'd want to stop, pause for a moment, and maybe go over and adjust what you were about to say. Perhaps that's happening less and less in the world these days, but that's still the better way of having a discourse. So, you know, I think you can think of it like that. These models need to do that better. Okay. I also really want to talk to you about the simulated worlds and putting agents in them, because we got to talk to your Genie team earlier this year. Tell me why you care about simulation.
What can a world model do that a language model can't? Well, look, world models and simulations are probably my longest standing passion, in addition to AI. And of course, it's all coming together in our most recent work like Genie. And I think language models are able to understand a lot about the world, actually more than we expected, more than I expected, because language is probably richer than we thought. It contains more about the world than maybe even linguists imagined. And that's proven now with these new systems. But there's still a lot about the spatial dynamics of the world, spatial awareness, and the physical context we're in, and how that works mechanically, that is hard to describe in words, and isn't generally described in corpuses of words. And a lot of this is allied to learning from experience, online experience. There are a lot of things which you can't really describe; you have to just experience them. Maybe the senses and so on are very hard to put into words, you know, whether that's motor angles or smell, these kinds of senses, it's very difficult to describe in any kind of language. So I think there's a whole set of things around that. And I think if we want robotics to work, or a universal assistant that maybe comes along with you in your daily life, maybe on glasses or, you know, on your phone, and helps you in your everyday life, not just on your computer, you're going to need this kind of world understanding. And world models are at the core of that. What we mean by a world model is a model that understands the cause and effect of the mechanics of the world, intuitive physics, how things move, how things behave. Now, we're seeing a lot of that in our video models, actually. And one way to test that you have that kind of understanding: well, can you generate realistic worlds?
Because if you can generate it, then in a sense the system must have encapsulated a lot of the mechanics of the world. So that's why Genie and Veo, our video models and our interactive world models, are really impressive but also important steps towards showing we have generalized world models. And then hopefully at some point we can apply it to robotics and universal assistants. And then of course one of my favorite things, which I'm definitely going to have to do at some point, is reapplying it back to games and, you know, game simulations, and creating the ultimate games. Which of course was maybe always my subconscious plan, all of this. Yeah, all of the time. Exactly. What about science too, though, because you use it in that domain? Yes, you could. So, building models of scientifically complex domains, whether that's materials at the atomic level, and biology, but also some physical things as well, like weather. One way to understand those systems is to learn simulations of those systems from the raw data. So you have a bunch of raw data, let's say it's about the weather, and obviously we have some amazing weather projects going on. And then you have a model that kind of learns those dynamics and can recreate those dynamics more efficiently than doing it by brute force. So I think there's huge potential for simulations and kind of world models, maybe specialised ones for aspects of science and mathematics. But then also, I mean, you can drop an agent into that simulated world too. Yes. Your Genie team gave us this really amazing quote. They said, almost no prerequisite to any major invention was made with that invention in mind. And they were talking about dropping agents into these simulated environments and allowing them to explore with curiosity being their main motivator. Right. And so that's another really exciting use of these world models: we have another project called SIMA.
We just released SIMA 2: simulated agents where you have an avatar, an agent, and you put it down into a virtual world. It can be an actual commercial game or something like that, a very complex one like No Man's Sky, a kind of open world space game. And then you can instruct it, because it's got Gemini under the hood. You can just talk to the agent and give it tasks. But then we thought, well, wouldn't it be fun if we plugged Genie into SIMA and sort of dropped a SIMA agent into another AI that was creating the world on the fly? So now the two AIs are kind of interacting in the minds of each other. The SIMA agent's trying to navigate this world, and as far as Genie's concerned, that's just a player, an avatar; it doesn't care that it's another AI, so it's just generating the world around whatever SIMA is trying to do. It's kind of amazing to see them both interacting together. And I think this could be the beginning of an interesting training loop, where you almost have infinite training examples, because whatever the SIMA agent is trying to learn, Genie can basically create that world on the fly. So you could imagine a whole world of setting and solving tasks, just millions of tasks automatically, and they're just getting increasingly more difficult. So we might try to set up a kind of loop like that. As well as, obviously, those SIMA agents could be great as game companions, and some of the things that they learn could be useful also for robotics. Yeah, the end of boring NPCs. Exactly, it's going to be amazing for these games. Yeah. Those worlds that you're creating, though, how do you make sure that they really are realistic? I mean, how do you ensure that you don't end up with physics that looks plausible but is actually wrong? Yeah, that's a great question, and it can be an issue. This is basically hallucinations again. So some hallucinations are good, because it also means you might create something interesting and new.
So in fact, sometimes if you're trying to do creative things, trying to get your system to create new, novel things, a bit of hallucination might be good, but you want it to be intentional. So you'd kind of switch on the hallucinations then, right? Or the creative exploration. But yes, when you're trying to train a SIMA agent, you don't want Genie hallucinating physics that are wrong. So actually what we're doing now is we're almost creating a physics benchmark, where we can use game engines, which are very accurate with physics, to create lots of fairly simple setups, the sorts of things you would do in your physics A-level lab lessons, right? Like rolling little balls down different tracks and seeing how fast they go. So really teasing apart, on a very basic level, Newton's three laws of motion: have these models, whether that's Veo or Genie, encapsulated the physics of that 100% accurately? And right now they haven't; they're kind of approximations. And they look realistic when you just casually look at them, but they're not accurate enough yet to rely on for, say, robotics. So that's the next step. Now we've got these really interesting models, and one of the things, just like we're trying with all of our models, is to reduce the hallucinations and make them even more grounded.
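A benchmark check of the kind described, balls on tracks scored against Newtonian ground truth, might look like this hypothetical sketch: compare a model's predicted positions to the analytic constant-acceleration solution s = 0.5·a·t².

```python
def analytic_positions(a, times):
    # Ground truth for constant acceleration from rest: s = 0.5 * a * t^2.
    return [0.5 * a * t * t for t in times]

def max_trajectory_error(predicted, a, times):
    # Worst-case deviation of a model's rollout from the analytic truth.
    truth = analytic_positions(a, times)
    return max(abs(p - s) for p, s in zip(predicted, truth))

times = [0.1 * i for i in range(11)]       # 0.0 .. 1.0 s
a = 4.9                                     # roughly a frictionless 30-degree incline, g = 9.8
truth = analytic_positions(a, times)
model_rollout = [s * 1.02 for s in truth]   # a model that is consistently 2% off
print(max_trajectory_error(model_rollout, a, times))
```

A rollout that is 2% off looks fine to the naked eye but fails a quantitative check like this, which is exactly the gap between "looks realistic" and "reliable enough for robotics".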
And with physics, I think that's probably going to involve generating loads and loads of ground truth: simple videos of pendulums, you know, what happens when two pendulums go around each other. But then very quickly you get to things like three-body problems, which aren't solvable analytically anyway. So I think it's going to be interesting. But what's amazing already is when you look at the video models like Veo, just the way it treats reflections and liquids, it's pretty unbelievably accurate already, at least to the naked eye. So the next step is actually going beyond what a human amateur can perceive: would it really hold up to a proper physics-grade experiment? I know you've been thinking about these simulated worlds for a really long time. And I went back to the transcript of our first interview. In it, you said that you really liked the theory that consciousness was a consequence of evolution, that at some point in our evolutionary past, there was an advantage to understanding the internal state of another, and then we sort of turned it in on ourselves. Does that make you curious about running sort of an agent in evolution inside of a simulation? I mean, I'd love to run that experiment. And at some point we'll run evolution, we'll run almost social dynamics as well. The Santa Fe Institute used to run lots of cool experiments on little grid worlds. I used to love some of these; they were mostly economists, and they were trying to, you know, run little artificial societies. And they found that all sorts of interesting things got invented like that, if you let agents run around for long enough with the right incentive structures: markets and banks and all sorts of crazy things. So I think it would be really cool, and also just to understand the origin of life and the origin of consciousness.
And I think that is one of the big passions I had for working on AI from the beginning: I think you're going to need these kinds of tools to really understand where we came from and what these phenomena are. And I think simulation is one of the most powerful tools to do that, because you can then do it statistically. You can run the simulation many times with slightly different initial starting conditions, maybe run it millions of times, and then understand what the slight differences are, in a very controlled, experimental sort of way, which of course is very difficult to do in the real world for any of the really interesting questions we want to answer. So I think accurate simulations will be an unbelievable boon to science. Given what we've discovered about emergent properties of these models, having conceptual understanding that we weren't expecting, do you also have to be quite careful about running those sorts of simulations? I think you would have to be, yes, but that's the other nice thing about simulations. You can run them in pretty safe sandboxes, and maybe eventually you want to air-gap them. And you can of course monitor what's happening in the simulation 24/7, and you have access to all the data. So we may need AI tools to help us monitor the simulations, because they'll be so complex and there'll be so much going on in them. If you imagine loads of AIs running around in a simulation, it will be hard for any human scientist to keep up with it. But we could probably use other AI systems to help us analyse and flag anything interesting or worrying in those simulations automatically. I mean, I guess we're still talking medium to long term in terms of this stuff. So just going back to the trajectory that we're on at the moment, I also want to talk to you about the impact that AI and AGI are going to have on wider society. And last time we spoke, you said that you thought AI was overhyped in the short term, but underhyped in the long term.
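The statistical approach described here, re-running a simulation many times under slightly perturbed initial conditions and summarising the outcomes, can be illustrated with a toy chaotic system. The logistic map is just a stand-in for any simulation; nothing here reflects DeepMind's actual tooling:

```python
import random
import statistics

def logistic_rollout(x0, r=3.9, steps=100):
    """Iterate the logistic map x -> r*x*(1-x). For r near 3.9 the
    dynamics are chaotic, so tiny changes in x0 give very different
    trajectories, which is exactly why single runs are uninformative."""
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

def ensemble(x0=0.5, eps=1e-6, runs=10_000, seed=0):
    """Re-run the simulation many times with slightly perturbed
    initial conditions and summarise the outcomes statistically."""
    rng = random.Random(seed)
    outcomes = [logistic_rollout(x0 + rng.uniform(-eps, eps)) for _ in range(runs)]
    return statistics.mean(outcomes), statistics.stdev(outcomes)

mean, spread = ensemble()
print(f"mean final state: {mean:.3f}, spread: {spread:.3f}")
```

The controlled-experiment point is that the distribution over runs, not any individual trajectory, is the reproducible object of study, which is what's hard to obtain from the real world.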
And I know that this year there's been a lot of chatter about an AI bubble. I mean, what happens if there is a bubble and it bursts? What happens? Well, look, I still subscribe to the view that it's overhyped in the short term, and still underappreciated in the medium to long term, in terms of how transformative it's going to be. Yeah, there is a lot of talk right now about AI bubbles. In my view, it's not one binary thing, are we or aren't we? I think there are parts of the AI ecosystem that are probably in bubbles. One example would be seed rounds for startups that basically haven't even got going yet, and they're raising at tens-of-billions-of-dollars valuations just out of the gate. It's interesting to see, can that be sustainable? My guess is probably not, at least not in general. So there's that area. Then people are worrying about, obviously, the big tech valuations and other things. I think there's a lot of real business underlying that, but it remains to be seen. I mean, maybe for any new, unbelievably transformative and profound technology, of which of course AI is probably the most profound, you're going to get this overcorrection in a way. When we started DeepMind, no one believed in it, no one thought it was possible; people were wondering, what's AI for anyway? Now fast forward 10, 15 years, and obviously it seems to be the only thing people talk about in business. It's almost an overreaction to the underreaction. So I think that's natural. I think we saw that with the internet, I think we saw it with mobile, and I think we're seeing, or going to see, it again with AI. I don't worry too much about whether we're in a bubble or not, because from my perspective, leading Google DeepMind, and obviously with Google and Alphabet as a whole, our job and my job is to make sure either way we come out of it very strong. And we're very well positioned.
And I think we are tremendously well positioned either way. So if it continues going like it is now, fantastic. We'll carry on with all of these great things that we're doing, and experiments, and progress towards AGI. If there's a retrenchment, fine. Then also I think we're in a great position, because we have our own stack with TPUs. We also have all these incredible Google products, and the profits they all make, to plug our AI into. And we're doing that: search is totally revolutionized by AI Overviews and AI Mode with Gemini under the hood. We're looking at Workspace, at email, at YouTube; there are all these amazing things in Chrome. There are a lot of these amazing things that we can see already are low-hanging fruit to apply Gemini to, as well of course as the Gemini app, which is doing really well now, and the idea of a universal assistant. So there are new products, and I think they will, in the fullness of time, be super valuable. But we don't have to rely on that; we can just power up our existing ecosystem. I think that's what's happened over the last year. We've got that really efficient now. In terms of the AI that people have access to at the moment, I know you said recently how important it is not to build AI to maximise user engagement, just so we don't repeat the mistakes of social media. But I also wonder whether we are already seeing this in a way. I mean, people are spending so much time talking to their chatbots that they end up kind of spiralling into self-radicalising. How do you stop that?
How do you build AI that puts users at the centre of their own universe, which is sort of the point of this in a lot of ways, but without creating echo chambers of one? Yeah, it's a very careful balance that I think is one of the most important things that we as an industry have got to get right. We've seen what happens with some systems that were overly sycophantic; you get these sort of echo-chamber reinforcements that are really bad for the person. So part of it, and this is what we want to build with Gemini, and I'm really pleased with the Gemini 3 persona, which we had a great team working on and which I helped with personally too, is this almost scientific personality: it's warm, it's helpful, it's light, but it's succinct, to the point, and it will push back, in a friendly way, on things that don't make sense, rather than trying to reinforce you. You know, the idea that the Earth's flat, and you said it, and it goes, "Wonderful idea!" I don't think that's good for society in general if that were to happen. But you've got to balance it with what people want, because people want these systems to be supportive and helpful with their ideas and their brainstorming. So you've got to get that balance right. And I think we're developing a science of personality and persona: how to measure what it's doing, and where we want it to be on authenticity, on humour, these sorts of things. And then you can imagine there's a kind of base personality that it ships with, and then everyone has their own preferences. Do you want it to be more humorous or less humorous, more succinct or more verbose? People like different things. So you add that additional personalization layer on top as well, but there's still the core base personality that everyone gets, which is trying to adhere to the scientific method, which is the whole point of these. And we want people to use these for
science and for medicine and health issues and so on. So I think it's part of the science of getting these large language models right. And I'm quite happy with the direction we're going in currently. We got to talk to Shane Legg a couple of weeks ago about AGI in particular. Across everything that's happening in AI at the moment, the language models, the world models, and so on, what's closest to your vision of AGI? I think actually the combination. Obviously there's Gemini 3, which I think is very capable. But there's also the Nano Banana Pro system we launched last week, which is an advanced version of our image-creation tool. What's really amazing about it is that it also has Gemini under the hood. So it doesn't just process images; it sort of understands what's going on semantically in those images. And people have only been playing with it for a week now, but I've seen so much cool stuff on social media about what people are using it for. So, for example, you can give it a picture of a complex plane or something like that, and it can label all the diagrams of all the different parts of the plane, and even visualize it with all the different parts exposed. So it has some kind of deep understanding of mechanics, of what makes up parts of objects, of what materials are. And it can render text really, really accurately now. So I think that's getting towards a kind of AGI for imaging. I think it's a general-purpose system that can do anything across images. So I think that's very exciting. And then there are the advances in world models, Genie and SIMA and what we're doing there. And then eventually we've got to converge all of those. They're kind of different projects at the moment, and they're intertwined, but we need to converge them all into one big model. And then that might start becoming a candidate for proto-AGI.
I know you've been reading quite a lot about the Industrial Revolution recently. Are there things that we can learn from what happened there, to try and mitigate some of the disruption that we can expect? Yeah, I think there's a lot we can learn. It's something you sort of study in school, at least in Britain, but on a very superficial level. It was really interesting for me to look into how it all happened, what it started with, and the economic reasons behind it, which was the textile industry. And the first computers were really the sewing machines, and then they became punch cards for the early computers, the mainframes. And for a while it was very successful; Britain became the centre of the textile world, because they could make these amazingly high-quality things very cheaply, thanks to the automated systems. And then obviously the steam engines and all of those things came in. I think there are a lot of incredible advances that came out of the Industrial Revolution. Child mortality went down; all of modern medicine and sanitary conditions, even the kind of work-life split and how that all worked, were worked out during the Industrial Revolution. But it also came with a lot of challenges. It took quite a long time, roughly a century, and different parts of the labor force were dislocated at certain times. And then new organizations, like unions and other things, had to be created in order to rebalance that. So it was fascinating to see how the whole of society had to adapt over time. And then you've got the modern world now. So obviously there were lots of pros and cons of the Industrial Revolution. But no one would want to undo it, if you think about what it's done in total: abundance of food in the Western world, modern medicine, modern transport, all these things. That was all because of the Industrial Revolution.
So we wouldn't want to go back to the pre-Industrial Revolution world, but maybe we can figure out ahead of time, by learning from it, what those dislocations were, and mitigate them earlier or more effectively this time. And we're probably going to have to, because the difference this time is that it's probably going to be 10 times bigger than the Industrial Revolution, and it'll probably happen 10 times faster. So it will unfold over more like a decade than a century. One of the things that Shane told us was that the current economic system, where you exchange your labour for resources, effectively, just won't function the same way in a post-AGI society. Do you have a vision of how society should be reconfigured, or might be reconfigured, in a way that works? Yeah, I'm spending more time thinking about this now. And Shane's actually leading an effort here on that, to think about what a post-AGI world might look like and what we need to prepare for. But I think society in general needs to spend more time thinking about that: economists and social scientists and governments. As with the Industrial Revolution, the whole working world and working week and everything changed from the pre-Industrial Revolution world, which was more agricultural. And I think at least that level of change is going to happen again. So I would not be surprised if we needed new economic systems, new economic models, to help with that transformation and make sure, for example, the benefits are widely distributed. And maybe things like universal basic income are part of the solution, but I don't think that's the complete answer; I think that's just what we can model out now, because that would be almost an add-on to what we have today. But I think there might be way better systems, more like direct-democracy-type systems, where you can vote with a certain amount of credits, or something, for what you want to see.
It happens actually at the local community level. Here's a bunch of money: do you want a playground, or a tennis court, or an extra classroom at the school? And then you let the community vote for it. And then maybe you could even measure the outcomes, and the people that consistently vote for things that end up being well received have proportionally more influence in the next vote. So there are a lot of interesting ideas out there; economist friends of mine are brainstorming this. And I think it would be great if we had a lot more work on that. And then there's the philosophical side of it: OK, so jobs will change and other things like that, but maybe fusion will have been solved, and so we have this abundant free energy, so we're post-scarcity. So what happens to money? Maybe everyone's better off, but then what happens to purpose? Because a lot of people get their purpose from their jobs, and from providing for their families, which is a very noble purpose. I think some of these questions blend from economic questions into almost philosophical questions. Do you worry that people don't seem to be paying attention, or moving as quickly as you'd like to see? What would it take for people to recognise that we need international collaboration on this kind of topic? I am worried about that. And again, in an ideal world, there would have been a lot more collaboration already, international specifically, and a lot more research and exploration and discussion going on about these topics. I'm actually pretty surprised there isn't more of that. Even our timelines: there were some very short timelines out there, but even ours are five to 10 years, which is not long for institutions or things like that to be built to handle this.
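The outcome-weighted voting mechanism sketched in this passage is purely hypothetical, and so is everything in the code below (the function names, the weight-update rule, the example ballots). It is just a minimal illustration of the idea that voters whose past choices turn out to be well received gain proportionally more influence next time:

```python
def weighted_vote(ballots, weights):
    """Tally votes where each voter's ballot counts with their current
    influence weight (defaulting to 1.0). Returns the winning option."""
    tally = {}
    for voter, choice in ballots.items():
        tally[choice] = tally.get(choice, 0.0) + weights.get(voter, 1.0)
    return max(tally, key=tally.get)

def update_weights(ballots, weights, outcome_scores, rate=0.5):
    """After outcomes are measured, nudge each voter's influence up or
    down in proportion to how well their chosen option was received.
    outcome_scores maps option -> score in [0, 1], with 0.5 neutral."""
    new = dict(weights)
    for voter, choice in ballots.items():
        adjusted = new.get(voter, 1.0) * (1 + rate * (outcome_scores[choice] - 0.5))
        new[voter] = max(0.1, adjusted)  # floor so no one loses their voice entirely
    return new

ballots = {"ana": "playground", "bo": "tennis court", "cy": "playground"}
weights = {"ana": 1.0, "bo": 1.0, "cy": 1.0}
winner = weighted_vote(ballots, weights)  # "playground" wins, 2.0 vs 1.0
weights = update_weights(ballots, weights, {"playground": 0.9, "tennis court": 0.4})
print(winner, weights)
```

One design question such a mechanism raises, which the floor on weights only partly addresses, is how to reward good judgment without entrenching a permanent voting elite.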
And one of the worries I have is that the institutions that do exist seem to be very fragmented, and not influential to the level that you would need. So it may be that there aren't the right institutions to deal with this currently. And then of course, if you add in the geopolitical tensions that are going on at the moment around the world, it seems like collaboration and cooperation are harder than ever. Just look at climate change and how hard it is to get any agreement on anything to do with that. So we'll see. I think as the stakes get higher and these systems get more powerful, and maybe this is one of the benefits of them being in products, the everyday person that's not working on this technology will get to feel the increase in the power and capability of these things. And that will then reach government, and maybe they'll see sense as we get closer to AGI. Do you think it will take a moment, an incident, for everyone to sit up and pay attention? I don't know. I mean, I hope not. Most of the main labs are pretty responsible. We try to be as responsible as possible. As you know, if you've followed us over the years, that's been at the heart of everything we do. That doesn't mean we'll get everything right, but we try to be as thoughtful and as scientific in our approach as possible. I think most of the major labs are trying to be responsible. Also, there's actually good commercial pressure to be responsible. If you think about agents, and you're renting an agent to another company, say, to do something, that other company is going to want to know what the limits and the boundaries and the guardrails are on those agents, in terms of what they might do, and not just mess up the data and all of this stuff. So I think that's good, because the more cowboy operations won't get the business, because the enterprises won't choose them.
So I think the capitalist system will actually be useful here to reinforce responsible behavior, which is good. But then there'll be rogue actors: maybe rogue nations, maybe rogue organizations, maybe people building on top of open source. Obviously, it's very difficult to stop that. Then something may go wrong, and hopefully it's just medium-sized, and that will be a kind of warning shot across the bow for humanity. And then that might be the moment to advocate for international standards, or international cooperation or collaboration, at least at some high, basic level: what are the basic standards we would want and agree to? I'm hopeful that that will be possible. In the long term, so beyond AGI and towards ASI, artificial superintelligence, do you think that there are some things that humans can do that machines will never be able to manage? Well, I think it's the big question. And I feel like this is related, as you know, to one of my favorite topics, Turing machines. I've always felt that if we build AGI, and then, almost harking back to our simulation discussion, use it as a simulation of the mind, and compare that to the real mind, we will then see what the differences are, and potentially what's special and remaining about the human mind. Maybe that's creativity, maybe it's emotions, maybe it's dreaming, or consciousness. There are a lot of hypotheses out there about what may or may not be computable. And this comes down to the Turing machine question: what is the limit of a Turing machine? And I think that's the central question of my life, really. Ever since I found out about Turing and Turing machines, I fell in love with that. That's my core passion.
And I think everything we've been doing has been pushing the notion of what a Turing machine can do to the limit, including folding proteins. And it turns out I'm not sure what the limit is; maybe there isn't one. And of course my quantum computing friends would say there are limits, and you need quantum computers to do quantum systems, but I'm really not so sure, and I've actually discussed that with some of the quantum folks. It may be that we need data from these quantum systems in order to create a classical simulation. And then that comes back to the mind: is it all classical computation, or is there something else going on? Roger Penrose, for example, believes there are quantum effects in the brain. If there are, and that's what consciousness has to do with, then machines will never have that; at least, classical machines will have to wait for quantum computers. But if there isn't, then there may not be any limit. Maybe everything in the universe is computationally tractable, if you look at it in the right way, and therefore Turing machines might be able to model everything in the universe. If you were to make me guess, I would currently guess that. And I'm working on that basis until physics shows me otherwise. So there's nothing that cannot be done within these sorts of computational systems? Well, put it this way: nobody's found anything in the universe that's non-computable. So far. So far. Right.
And I think we've already shown you can go way beyond the usual complexity theorist's P-versus-NP view of what a classical computer could do today, with things like protein folding and Go and so on. So I don't think anyone knows what that limit is. And really, if you boiled down what we're doing at DeepMind and Google, what I'm trying to do is find that limit. But in the limit of that idea, right, we're sitting here, there's the warmth of the lights on our faces, we can hear the whir of the machine in the background, there's the feel of the desk under our hands. Yes. All of that could be replicable? Yes. By a classical computer? Yes. Well, I think, in the end, my view on this is why I love Kant as well. Kant and Spinoza are two of my favorites, for different reasons. But Kant: reality is a construct of the mind. I think that's true. And so yes, all of those things you mentioned, they're coming into our sensory apparatus, and they feel different, right, the light, the warmth of the light, the touch of the table. But in the end it's all information, and we're information-processing systems. And I think that's what biology is. That's what we're trying to do with Isomorphic; that's how I think we'll end up curing all diseases, by thinking about biology as an information-processing system. And I'm working, in my spare time, my two minutes of spare time, on physics theories about things like information being the most fundamental unit, shall we say, of the universe: not energy, not matter, but information. So it may be that these are all interchangeable in the end, but we just sense it, we feel it, in a different way. But as far as we know, all these amazing sensors that we have are still computable by a Turing machine. But this is why your simulated worlds are so important. Yes, exactly. Because that would be one of the ways to get to it.
What's the limit of what we can simulate? Because if you can simulate it, then in some sense you've understood it. I wanted to finish with some personal reflections on what it's like to be at the forefront of this. I mean, does the emotional weight of this ever wear you down? Does it ever feel quite isolating? Yes. Look, I don't sleep very much, partly because there's too much work, but also I have trouble sleeping. There are very complex emotions to deal with, because it's unbelievably exciting. I'm basically doing everything I ever dreamed of, and we're at the absolute frontier of science in so many ways, applied science as well as machine learning, and that's exhilarating. All scientists know that feeling of being at the frontier and discovering something for the first time, and that's happening almost on a monthly basis for us, which is amazing. But then of course, Shane and I and others who've been doing this for a long time, we understand better than anybody the enormity of what's coming, and the fact that it's actually still underappreciated what's going to happen on more of a 10-year timescale, including on the philosophical side: what does it mean to be human? What's important about that? All of these questions are going to come up. And so it's a big responsibility. But we have an amazing team thinking about these things. And also, it's something that, at least for myself, I've trained for my whole life. Ever since my early days playing chess, and then working on computers and games and simulations and neuroscience, it's all been for this kind of moment. And it's roughly what I imagined it was going to be. So that's partly how I cope with it: just training. Are there parts of it that have hit you harder than you expected, though?
Yes, for sure, along the way. I mean, even the AlphaGo match: just seeing how we managed to crack Go. Go was this beautiful mystery, and we changed it, and so that was interesting and kind of bittersweet. I think even the more recent things, like language and then imaging, and what that means for creativity. I have huge respect and passion for the creative arts, having done game design myself, and I talk to film directors, and it's an interesting dual moment for them too. On one hand, they've got these amazing tools that speed up prototyping ideas by 10x; on the other hand, is it replacing certain creative skills? So I think there are these trade-offs going on all over the place, which is inevitable with a technology as powerful and as transformative as AI is, as electricity and the internet were in the past. And we've seen that that is the story of humanity: we are tool-making animals. That's what we love to do. And for some reason, we also have brains that can understand science and do science, which is amazing, but which are also insatiably curious. I think that's at the heart of what it means to be human. And I've just had that bug from the beginning, and my expression of trying to answer it is to build AI. When you and the other AI leaders are in a room together, is there a sort of sense of solidarity between you? That this is a group of people who all know the stakes, who all really understand the things? Or does the competition keep you apart from one another? Well, we all know each other. I get on with pretty much all of them. Some of the others don't get on with each other. And it's hard, because we're also in the most ferocious capitalist competition there's ever been, probably. Investor friends of mine, VC friends of mine, who were around in the dot-com era say this is like 10x more ferocious and intense than that was. In many ways, I love that.
I mean, I live for competition; I've always loved that since my chess days. But stepping back, I understand, and I hope everyone understands, that there's a much bigger thing at stake than just company successes and that type of thing. When it comes to the next decade, when you think about it, are there big moments coming up that you're personally most apprehensive about? I think right now the systems are what I call passive systems. You put the energy in as the user: the question, or the task. And then these systems provide you with some summary or some answer. So very much it's human-directed, with human energy and human ideas going in. The next stage is agent-based systems, which I think we're going to start seeing. We're seeing them now, but they're pretty primitive. In the next couple of years, I think we'll start seeing some really impressive, reliable ones. And I think those will be incredibly useful and capable, if you think about them as an assistant or something like that. But they'll also be more autonomous. So I think the risks go up as well with those types of systems. So I'm quite worried about what those sorts of systems will be able to do in maybe two or three years' time. So we're working on cyber defense in preparation for a world like that, where maybe there are millions of agents roaming around on the internet. And what about what you're most looking forward to? I mean, is there a day when you'll be able to retire knowing that your work is done? Or is there more than a lifetime's worth of work left to do? Yeah, well, I could definitely do with a sabbatical, and I would spend it doing science. Just a week off. Yeah, a week off; even a day would be good. But look, I think my mission has always been to help the world steward AGI safely over the line for all of humanity.
So I think when we get to that point, of course, then there's superintelligence, and there's post-AGI, and there's all the economic and societal stuff we were discussing. And maybe I can help in some way there. But I think that core part of my life's mission will be done. I mean, it's only a small job, you know: just get that over the line, or help the world get that over the line. And I think it's going to require collaboration, like we talked about earlier. And I'm quite a collaborative person, so I hope I can help with that from the position that I have. And then you get to have a holiday. And then I'll have the, yeah, exactly, well-earned sabbatical. Demis, thank you so much. Thanks for having me. Well, that is it for this season of Google DeepMind: The Podcast with me, Professor Hannah Fry. But be sure to subscribe, so you will be among the first to hear about our return in 2026. And in the meantime, why not revisit our vast episode library, because we have covered so much this year, from driverless cars to robotics, world models to drug discovery; plenty to keep you occupied. See you soon.