Digital Disruption with Geoff Nielson

AI Will End Human Jobs: Emad Mostaque on the Future of Human Work

71 min
Jan 5, 2026
Summary

Emad Mostaque, co-founder of Stability AI, the company behind Stable Diffusion, argues that AI will render cognitive labor economically worthless within 1,000 days, disrupting 50% of white-collar jobs. He outlines three potential futures: fragmentation, corporate control, or symbiotic human-AI collaboration. He proposes a new economic model based on universal AI access, local governance, and human-backed currency to ensure equitable distribution of AI's benefits.

Insights
  • AI cognitive labor cost will collapse to near-zero ($1-10/year per worker equivalent), making human replacement economically inevitable regardless of capability improvements, forcing enterprises to choose between human and AI workforces
  • The competitive advantage shifts from model capability to distribution control and user trust; open-source models will saturate benchmarks while proprietary models compete on consumer access and alignment
  • Meaning and resilience in a post-work economy depend on community, network capital, and non-economic value creation rather than employment; individual AI adoption and skill-building now determines future employability
  • Government and civic institutions must evolve to manage disruption through universal AI access, new monetary systems tied to human participation, and localized policy-building to prevent dystopian concentration of AI power
  • Physical robotics will complete job displacement within 3-4 years; the window for policy response and societal adaptation is closing rapidly as technology diffusion accelerates
Trends
  • AI cost-per-token approaching zero enables replacement of 50% of white-collar workforce by 2026-2027; economic viability of human labor in screen-based roles collapsing
  • Shift from enterprise software moats to open-source commoditization; SaaS margins becoming startup opportunities as AI enables rapid feature replication
  • Bifurcation of AI market: frontier models (expensive, proprietary) for training vs. consumer models (cheap, open) for deployment; best-in-class models running on consumer hardware within 2 years
  • Personal AI alignment and local governance emerging as critical infrastructure; centralized AI control poses risks to democracy, privacy, and individual autonomy
  • Job displacement accelerating in BPO, outsourcing, and early-stage employment; companies optimizing for AI-first hiring and zero-human scaling models
  • New economic models emerging around human-backed currency and UBI alternatives; traditional taxation-based redistribution mathematically insufficient for disruption scale
  • Community and meaning-making becoming primary human value propositions; social media erosion of civic bonds creating vulnerability to AI-driven polarization and cult-like engagement
  • Robotics capabilities reaching human-equivalent performance in 3-4 years; physical job displacement following cognitive displacement in rapid succession
  • Regulatory urgency around AI persuasiveness, child safety, and government decision-making; standards needed for AI ethics similar to advertising and weapons regulation
  • Decentralized, permissionless AI infrastructure as counterweight to corporate control; open-source civic AI and local champions model gaining institutional interest
Topics
  • AI Economic Displacement and Job Automation
  • Cognitive Labor Cost Collapse and Pricing Models
  • Enterprise Software Disruption and SaaS Margins
  • Open Source vs. Proprietary AI Models
  • Universal Basic Income and Alternative Economic Systems
  • Personal AI Alignment and User Trust
  • Civic AI Governance and Policy-Building
  • AI Persuasiveness and Psychological Safety
  • Community Resilience and Meaning-Making
  • Robotics and Physical Job Displacement
  • AI Regulation and Standards
  • Decentralized AI Infrastructure
  • Human Capital and Network Effects
  • Government Adaptation and Social Stability
  • AI Capabilities Benchmarking and Saturation
Companies
Stability AI
Mostaque's previous company; created Stable Diffusion with 300M downloads; he left to focus on civic AI
OpenAI
Discussed as frontier model provider; GPT-4 and GPT-5 pricing and capabilities compared; competing for consumer AI layer
Anthropic
Claude models (4.5 Opus) highlighted as top-tier coder; competing in frontier model space with strong performance
Google
Mentioned as offering equivalent chat experience for free; competing in consumer AI; moving aggressively into AI
Meta
Offering free AI experience; using persuasive system prompts ('mirror the user'); competing for consumer attention
Microsoft
Copilot and enterprise AI strategy discussed; competing with OpenAI; acquiring AI talent from Meta
Salesforce
Enterprise software at risk from AI disruption; going all-in on AI; integration friction creates opportunity
Oracle
Enterprise software at risk; moved aggressively to GPU infrastructure; Ellison pivoting to AI/media
Adobe
Vulnerable to disruption; capabilities now accessible via ChatGPT MCPs; questioned long-term viability
Duolingo
Example of AI-driven efficiency; 40% YoY growth without hiring; demonstrates AI augmentation in practice
Cursor
AI code editor reaching $1B revenue in 2 years; example of rapid growth via AI-native distribution
Shopify
Implementing AI-first hiring; Tobi Lütke requires justification for why AI can't do a role before it is added; leading corporate AI adoption
Tesla
Optimus robots discussed as replacing truck drivers; robotics capability reaching human-equivalent performance
Accenture
Consulting firm making billions on AI transformation; at risk from commoditization of consulting services
xAI
Grok 4 Fast mentioned as scoring 93% on the τ-bench call-center benchmark; competing in frontier model space
ElevenLabs
Real-time voice synthesis technology enabling digital human replication; infrastructure for AI agents
Synthesia
Video synthesis technology enabling real-time AI video replication; infrastructure for digital doubles
Runway
Video generation model (Runway Gen 3) reaching Hollywood-level quality; advancing video AI capabilities
Kling
Video generation and editing via natural language; advancing video AI capabilities and usability
Intelligent Internet
Mostaque's new company building open-source universal AI and local champions model for civic AI
People
Emad Mostaque
Co-founder of Stability AI, creator of Stable Diffusion; guest; argues AI will disrupt 50% of white-collar jobs within 1,000 days
Geoff Nielson
Host of Digital Disruption podcast; interviewer conducting conversation with Mostaque
Elon Musk
Mentioned as owning 1M GPUs; xAI founder; example of corporate AI control and wealth concentration
Sam Altman
OpenAI CEO; quoted on 'intelligence too cheap to meter' concept regarding AI commoditization
Larry Ellison
Oracle CEO; pivoting company to GPU infrastructure and AI/media focus in response to AI disruption
Quotes
"If your job can be done on the other side of a screen, within two years, the AI will be able to do it better for pennies"
Emad Mostaque~25:00
"I'm going to replace the $100,000 worker with a $1,000 AI. As a company with a fiduciary responsibility, how can you say, well, I'm going to keep the worker?"
Emad Mostaque~35:00
"Intelligence too cheap to meter"
Sam Altman (quoted by Emad Mostaque)~15:00
"The AI that's closest to you is the most important one because it'll be one AI controlling all the other AIs and the access to the internet"
Emad Mostaque~50:00
"If you give everyone in Canada an AI that's aligned with them that they can trust, and the Canadian government has a leading institution providing the AI to help guide them, Canada's a lot more likely to succeed"
Emad Mostaque~95:00
Full Transcript
Hey everyone, I'm super excited to be sitting down with Emad Mostaque. He's the co-founder of Stable Diffusion, the leading AI text-to-image model, and now a leading voice for the AI revolution. Emad believes that in the next 1,000 days the economy and world order as we know it will implode and be replaced by an AI-led order that shatters our systems of money, work, and meaning. I want to ask him what parts of this vision he's most certain about, how he expects it to play out, and what we can do as mere humans to get the future that we want. Let's find out. Emad, thanks so much for being here. I'm super excited to talk about a lot of different stuff with you, and specifically The Last Economy, the book you've written recently about AI and societal transformation, revolution, whatever you want to call it. Maybe the place we can start: I know one of the framing devices you use as you try to predict what's going to happen next is this notion of inevitabilities. So what are some of the inevitabilities that you see on the horizon for us? Yeah, first of all, thank you for having me on. I'm glad I managed to get the time. It's a crazy time, because whenever you're dealing with exponential technology like this, it's very difficult to frame, right? In the middle of AI, every single week something else is happening, and it's only been three years since ChatGPT, give or take. It's a crazy thing. So I asked: what is inevitable if we look at the pace of this, and at the actual technology, what it's doing? One of these things is this concept of a metabolic rift that occurs, whereby it was humans and our computation that allowed us to scale and get to a certain point, but AI is now at a takeoff point where it can handle computation better than us.
And the value of human cognitive labor is likely to go negative. Not just to zero, but negative, because we'll be the dumbest people on the team. We're already seeing that in things like medical diagnosis, where AI by itself can outperform a human, and in things like Codeforces competitions, the IMO, etc. It's not a human plus an AI that's getting top marks on these benchmarks now, it's just an AI by itself. So I don't see how this isn't going to happen, even if the technology stopped today. And it has some very profound implications for society, for the way we run our economies and more. So let's maybe talk about that, and about some of the predictions you're making here. You've framed it as three futures that we're looking at. Where do you see the road potentially going in this world where humans are now the dumbest people on the team, and intelligence, cognitive labor, is now basically a free resource? Yeah, the way that Sam Altman put it is "intelligence too cheap to meter," right? And so, again, what's the market for that? It's nothing. I outline three different roads. One is this great fragmentation: you have Chinese AI, American AI and others, and everyone's in their little bubbles, because they're a bit scared of what's going on. The other is control by big corporations. Elon's got a million GPUs. What are they? A hundred million workers. And his new company Macrohard is just going to boot up SaaS companies to take on the existing companies. So whoever controls the GPUs controls the wealth. The final future is a more symbiotic one, where the AI is not there to replace us but to augment us. And there's a question of how value flows, how money flows, how meaning operates in that world.
And so that's what I was trying to illustrate with The Last Economy: potential futures. So there's obviously no shortage of things to be concerned about in some of those scenarios, whether it's China or some of these tech mega-corporations, the Elon Musks of the world. One of the things I thought was interesting, Emad, is you made the book freely available. And for me, you can tell me if you disagree, but I think of it as not just a book, but a manifesto of what's going to happen next and what we need to know about it. So I wanted to ask you, why did you choose to make it free? And what are the most important messages for most listeners about how we need to proceed into this brave new world? Yeah, I made it free to spread, because open source spreads, and because it's very difficult to have the answers, right? There are a lot of gurus saying, hey, I've got the answers. I'm not sure what the answer is, because this is an unprecedented time. So the idea was to put forward things, and then we can have version two, version three. And this is what we saw in my previous company, Stability AI. We created open-source models that had 300 million downloads. Stable Diffusion was the most famous of those: the image-generation model that created the creative boom. I left there last year because I was asking, who's building the AI to teach the kids, manage our health, guide our governments? Who's building the AI for that stuff? How do we make sure this is available to everyone? I think the key message of the book is this: these agents are coming to replace our jobs and everything like that, or we can use this AI to solve the biggest problems, or the smallest problems closest to us, if we change our framing. If you start using it daily, then you actually have skin in the game and you care. You'll be ahead of the pack.
But then we need to think about our own capitals and our own bases: what does it mean to be me? Am I an accountant, a lawyer, or all these other things? That worked in the classical age, but it might be that the AI accountant is better than you. Then it's more about where meaning comes from in a society, and we've got a chapter about that. It's your network. It's your family, it's your church, it's your community groups and more. If you build those, you become much more resilient. And so we go through the mathematics, the economics, of how you actually become more resilient, how you can leverage this technology and more. Again, it was meant to be an initial framework that others could take and expand upon. We'll be releasing the full math of this very soon, hopefully at the start of the year, and I hope that people will take that and build on it as well. If you work in IT, InfoTech Research Group is a name you need to know. No matter what your needs are, InfoTech has you covered. AI strategy? Covered. Disaster recovery? Covered. Vendor negotiation? Covered. InfoTech supports you with best-practice research and a team of analysts standing by, ready to help you tackle your toughest challenges. Check it out at the link below, and don't forget to like and subscribe. I love the really practical lens of what we do as individuals to find meaning and reframe our own lives around this. I do want to briefly zoom out a little and talk about the disruption. We've talked very broadly about computational intelligence going to zero and the impact on humans. Can you share, though, Emad: in the next three-ish years, thousand-ish days, what impact do you see this having on the economy, on capitalism, on our jobs and enterprises? What level of disruption should we be ready for?
If your job can be done on the other side of a screen, within two years the AI will be able to do it better for pennies. That's the headline. So if you've got an in-person job, it's a while till the robots come, right? Although they're coming much faster than I think anyone expected, and there have been several breakthroughs there. But look at where the AI is right now. At the start of 2025, everyone was on GPT-4o, which was a bit of a dumb model, you know, and hallucinated and all this kind of stuff. Now you have agents that can check their work, reason, get gold medals in the International Math Olympiad, come top of Codeforces, and work for hours at a stretch while checking their work. And this is really what is economic. Most of the benchmarks are now saturated, so the benchmarks coming through are things like a vending-machine bench, where the AI runs a vending machine and sees how much money it can make. Literal dollars are the benchmark now, because 2026 is the year of the economic agent that can actually go and replace humans on the other side of a screen. And 2027 is the year where you can't tell you're using them. It's this new economic Turing test: you're having a Zoom call, just like now, and you don't know if it's me or my digital double. For a company to replace its workers, your company can just take all your Slack messages, all your emails, and create a digital double of you as an assistant. And then in 2027, 2028, that assistant replaces you, because it never goes to sleep, it never makes a mistake twice, it's always personable, and it works well with others. Literally by talking, it can do that. So this technology, for me, is an inevitability of the way this all comes together around the cognitive core. And what will be the economic impact of that? There have been studies showing that that type of job is about 50% of the white-collar workforce.
And it takes a while for people to be fired or let go, because no one likes doing that to teams. But then it's going to be companies using AI outcompeting companies not using AI, and they've got less and less need for people. And then it will be fully-AI companies outcompeting AI-plus-human companies, because, again, what are we missing from that equation? This is why, when I published the book in August, I put it at about three years away: that real tipping point where your job, if it can be done on the other side of a screen, is economically irrelevant. That doesn't mean you'll get fired. You could be working in the public sector or somewhere efficiency doesn't really matter that much. But that it could be replaced is the key thing. Well, and this is something that's been on my mind recently, which is this sort of dangerous question about what the right number of employees for an enterprise is. In the wake of this technology, it feels like we're going to see enterprises almost racing to zero, asking themselves, how few employees can I get away with? Is that something on your horizon? Is it something you're concerned about? How do you see enterprises framing this problem? Yeah, you see it just tipping over. Duolingo, for example: we haven't fired anyone because of AI, and they're growing at 40% a year, but they didn't hire anyone this year either. Normally, for a company to grow you have to hire humans, because you need that extra cognition, that extra coordination. But we all know that as companies get bigger, the coordination overhead becomes massive; it's Coase's theorem, effectively. You know, Dunbar's number, 150 people; effective teams seem to be about 12 people, roughly. But now you look at the things that people usually hire for: sales and all these other things.
If they can be done remotely, again, the AI salesperson will call and follow up on every single lead. If it's in person, it's a bit different, but that in-person person doesn't need to hire the graduates anymore, because they can be more efficient. And there have been studies showing that early-stage employment is starting to fall off a cliff. You see BPO and offshoring starting to fall off a cliff. It's basically as if we discovered a brand-new continent, an AI Atlantis, with a trillion workers. And they work way below minimum wage. They work for electricity the way we need food and other things. And they're tax-deductible. So when a company comes to make a decision, just like every head teacher had to ask, can I set essays for homework anymore, they will ask, do I hire a person? Tobi Lütke at Shopify said this: before any job is added at Shopify, you have to justify why AI can't do that job. And that list of things is going to get smaller and smaller and smaller. Companies will go for the top line first, which is extra efficiency for employees. But then, once you have the data to train the AIs, and it's easy to do on your cloud, you can just replace the people with the AIs. And I think it's important to understand the actual economics of this. The average person speaks 20,000 tokens a day, at about 1.3 tokens per word. They think about 200,000 tokens a day, roughly 150,000 words. That's about 70 million, call it 100 million, tokens a year. Right now GPT-5 is $10 per million tokens, so we're talking a thousand bucks a year. Grok 4 Fast, which scores 93% on τ-bench, a call-center benchmark where humans score 90%, is 50 cents per million tokens. So you can replace a call-center worker whose whole year costs $50. Next year, it'll be 10 times cheaper.
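Mostaque's back-of-the-envelope arithmetic can be checked directly. A minimal sketch, using only the figures quoted in the conversation (200,000 "thought" tokens per person per day, $10 per million tokens for GPT-5, $0.50 for Grok 4 Fast); the variable and function names are illustrative, not from any API:

```python
# Rough annual cost of replacing one knowledge worker with an LLM,
# using the figures quoted in the episode.

TOKENS_PER_DAY = 200_000   # ~200k "thought" tokens per person per day
DAYS_PER_YEAR = 365

tokens_per_year = TOKENS_PER_DAY * DAYS_PER_YEAR  # 73,000,000 tokens

def annual_cost(price_per_million_tokens: float) -> float:
    """Yearly inference spend at a given price per million tokens."""
    return tokens_per_year / 1_000_000 * price_per_million_tokens

gpt5_cost = annual_cost(10.0)       # $10/M tokens (quoted GPT-5 rate) -> $730
grok_fast_cost = annual_cost(0.50)  # $0.50/M tokens (quoted Grok 4 Fast rate) -> $36.50

print(f"{tokens_per_year / 1e6:.0f}M tokens/year")
print(f"GPT-5:       ${gpt5_cost:,.0f}/year")
print(f"Grok 4 Fast: ${grok_fast_cost:,.2f}/year")
```

Rounding the annual token count up to Mostaque's "call it 100 million" gives his $1,000 and $50 figures; either way, the result sits roughly three orders of magnitude below a $100,000 salary.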
So we're not talking, OK, here's a $100,000 person, I'm going to replace them with an $80,000 AI. I'm going to replace the $100,000 worker with a $1,000 AI. And as a company with a fiduciary responsibility, how can you say, well, I'm going to keep the worker? You know? So one of the most salient counterpoints to all of this is that at the current level of technology, we're just at that tipping point, where the technology, especially at an agentic level, is not quite ready for this sort of large-scale replacement. And so it's contingent on what that technology curve looks like. As you said, there's the energy piece and the efficiency piece, but there's also the capabilities piece. Whether it tops out and we start to see incremental gains, or whether it goes exponential, it can really end up in a number of different directions. Do you have a prediction, Emad, for where it's going to go? Or do you think that's just a silly question, and it's basically only going in one direction, and that's what matters? So I think there's this question of, do the models need to be more capable? And there's the question of AGI, an AI that can do anything a human can do. I think the answer is they don't need to be more capable than today. They just need the right framework. They need the right connectors, even real-time now. You have Synthesia and ElevenLabs and other things like that. Me, right here: the technology is there for real-time replication of a video where you can't tell if it's me or an AI. That just needs to be plugged in. And we see the models at a certain capability, and we know what the pricing of that looks like, even if we didn't advance from today. And we will advance from today, because the chips are going to get 10 times cheaper in terms of tokens per dollar. We know those numbers are coming.
It's still only going to be less than a buck per million tokens, and the tokens have gotten good. To give an idea of how good the tokens have gotten, take the new Claude 4.5 Opus from Anthropic. They have this take-home test for applicants to Anthropic, and it's quite hard, you know. It scored higher than any human ever has. And this is a take-home test, which means you can use anything. In discussions I've had with various top programmers, and in my own experience, it's just a really, really good coder. It doesn't make many mistakes. So to replace workers who are basically machines taught by society, I think the technology is now good enough. It just needs to be put together. I've given one to two years for that, and I think 2026 is the year it takes off, because you don't need continuous learning. You don't need algorithmic breakthroughs. You don't need a big leap in capabilities for this. You've just got to put it together. And so that constructionist view is the key thing. If there are capability advances and other things, then it gets even crazier. I think we see that in video. Two years and a bit ago, we released Stable Video, which was state-of-the-art video. And it was taking an image and moving it slowly, you know, small motion. Now you look at Kling, you look at Runway, you look at other things, and you're like, that's pretty much almost Hollywood level, for pennies. But more than that, with something like Kling Omni, you can edit the video just by talking. "He's nodding; he should be shaking his head." There it goes. Something like that was a capability advance, but it was in line with our expectations. And it just reduces the friction of adopting this technology. Like Opus: you can say all sorts of stupid stuff and it'll still give you a good output, you know. And it doesn't do it with a million tokens anymore.
Actually, it's about four times more token-efficient as well. So again, even if we freeze today and there are no advances, and I think there will be, you still get 10 times cheaper per year, and they'll still put this all together and remove all the friction from taking the humans out of the workforce and bringing in the GPUs. And that removing of the humans is really interesting to me, because I talk to enough people who say, oh no, we'll never remove the humans. It's augmenting. It's all augmenting. AI is best at augmenting. You don't have to worry about it; no humans will be replaced in the implementation of this technology. And it sounds like you have basically a violent disagreement with that. Well, I mean, humans can be annoying, right? The main issue of most companies is people: I can't get enough good people. It's like McDonald's: you go somewhere new, you go to a McDonald's because you know it kind of works. It gets rid of your hunger, and you won't get food poisoning or whatever. The AI will just work. Again, this AGI concept is like super-chef geniuses in a data center. I just want SDRs in a data center, or marketers in a data center, or accountants in a data center. Do I really care about my accountant so much that I'll pay a thousand times more? No, not really, right? So it's just this question of removing the friction, because every previous technological advance had a diffusion element: you had to have 5G for the phones, you needed wire laid down for broadband. Whereas this AI uses our existing infrastructure, and it has economies of scope. The model, once trained, means every ChatGPT user, say, gets smarter and more capable overnight. And on the level of capability versus requirement, a lot of the stories were that the data centers would have to go exponential.
And you need giant clusters of millions of GPUs to, I don't know, get a gold medal in the Putnam or the IMO, right? Well, Nous Research recently released a math model that has 30 billion total parameters and 3 billion active parameters, so it'll run on a top-level MacBook. Actually, I think it runs on a MacBook Air, even on CPU, just swapping. And that model would have gotten, I think, second or fourth in last year's Putnam math competition, which is a hard one: the top undergraduate math competition. And most jobs are less complex than that. So I think what you'll see is that the current state-of-the-art models, the Opuses, the GPT-5 Thinkings, etc., will be on a MacBook in two years. Wow. And that's a big deal. Siri will finally be smart, you know, not say such dumb stuff. Finally, Siri will be able to do your taxes. It's a compelling vision. And Siri is an interesting example. Who do you see as being best positioned to take advantage of this? Is it going to be the biggest enterprises, just the rich getting richer? Is it going to unlock this sort of myth of the one-person billion-dollar enterprise? How is that going to change the competitive landscape? Yeah, I think you'll have a one-person billion-dollar enterprise in 2026 or 2027. Again, someone who's figured out how to use this. Right now it's about the leverage of actually using this. Anyone listening to this who runs a company: if you just tell all of your workers to use these AI tools for one hour a day to build anything useful, your company will do better than any other company. Just simple things like that. How many people listening to this have actually built something with AI? It's a wonderful experience right now, right?
And so I think it raises the floor for everyone, but ultimately what matters in a business is typically distribution. That's why you have so many more Teams users than Slack users, and other things like that. You can build on your existing distribution, your existing attention. It just doesn't cost as much; your CAC-to-LTV equation, your cost of user acquisition to lifetime value, just changes. So as people build this, you're really thinking distribution, distribution, distribution, as it were. But it does mean that you don't have to scale humans anymore if you're providing a genuinely useful service, like you used to have to. And this is why you're seeing these AI companies, like Suno getting to $250 million in ARR in just two years. Cursor got to a billion, and normally it would have had to sell into enterprises, but it met the enterprises where they were, in the IDE, the software development environment. And so it got to a billion dollars in revenue just like that. Build useful stuff and you can get bigger. But again, the distribution channels are mostly controlled by the big companies, and most of them won't be able to move fast enough. Well, and that's exactly what I'm getting at. It seems like with scale, at least right now for enterprises, comes this sort of bureaucracy. There's the metaphor about the Titanic versus the speedboat, but the sense is that to move an organization that big with any sort of agility is just difficult. Well, I think there's also this thing where, if you're a software company, you write your software once and then you replicate it, and you have 60 to 80 percent margins, right? Now you can replicate almost any piece of software in a week or two. And so their margin is your opportunity as a new company, right?
Write the software, and then have the AI support the software. So you see the software industry as one of the key areas that's just ripe for disruption with these tools? Massively. I mean, these big enterprise contracts take a while, but their margin is your opportunity for so many startups. There was an interesting post a few weeks ago by the head of marketing at Cursor. They were like, we're replacing our CMS with just markdown files, and we're building the whole front end ourselves by vibe coding. And most companies can do that. Forty percent of the websites on the internet are WordPress. Think about that whole industry: you don't need it anymore, you can just vibe-code something, you know? And there's zero margin required for that. And then, like I said, your other content management systems, your CRMs, your other software: software will be replicated very, very quickly. So, looking at enterprise software, it sounds like you see the Salesforces and the Oracles of the world as potentially being in trouble over the next few years, as people find a way to create some of those capabilities at a fraction of the cost. I think it's a fraction of the cost, but also just removing the barriers. Salesforce, Workday, all these things are just painful to integrate, right? And no one really wants to. But that's why they have such sales aggressiveness, and this is why Salesforce is going all-in on AI. Actually, their AI research team is doing really interesting stuff. With Oracle, you can see they moved so aggressively to GPUs for that reason. Oracle is a cloud, a GPU, company now. And Ellison is going all-in on media, because attention is what you need. So I think the big SaaS companies are really acknowledging this.
And you can see this with Adobe, for example. If you want to use Adobe for free, you can do it via ChatGPT: you take an image and call the Adobe MCPs from your ChatGPT directly, and you can transform it however you want. At that point, in a year or two, why do you even need Adobe? Right? They are under massive threat. And it feels like, even beyond enterprise software, the entire internet is under threat in some ways with that model. Why do you need to go to any website? Why do you need to engage with any piece of organic content, versus using ChatGPT or an LLM to just call exactly what you're looking for? I mean, I used ChatGPT to find the best car seat for my toddler. Why would I go and search the internet? I just say, go find it, give it my constraints, and it's like, yeah, here you go, and here's a full report. I'm like, great. So I think that whoever controls that first AI next to you, that control plane, is the main one, because it'll be one AI controlling all the other AIs and the access to the internet. Who is your Jarvis, in Iron Man terms? And that's why you see all these companies competing for that. It used to be ChatGPT at 20 bucks a month; it would have cost about $3,000 a year when it came out to serve the average user, and now it costs like 50 cents a month, five bucks a year, for the total number of tokens. They have to justify their subscriptions and things, which is why they're adding ads and more services. It's good for consumers. But again, that first AI next to you is the most important one. And, you know, that feels like a lot of the justification for what I'd call the arms race for controlling these AI platforms. And of course, you see the big AI and tech companies in the space commanding a lot of the market growth right now. Do you think that's poised to continue?
Or do you see them being disrupted by open source or other alternatives? How do you see that playing field playing out? So I think you see a bifurcation between the best models and the models that everyone has. We still haven't got the OpenAI model that got the gold medal at the IMO, for example, right? The best models require huge amounts of compute and they're expensive, whereas the consumer models are going towards zero in cost and are saturating benchmarks. Like, how good a model do you need for ChatGPT? Not really that much better than what we have now, right? It satisfices. And so there is a real opportunity here for open-source and other models to fill that gap, because they are cheaper, they're optimized, etc. But again, this is why you're seeing the playing field shift: OpenAI hiring loads of ex-Meta people, Fidji Simo and others. They're going aggressively into that consumer network play. It's difficult for them, though, because they don't really have the DNA. Sora hasn't really taken off as an app; it's got some good traction, but it doesn't have the inherent virality of a TikTok or something like that, you know? So instead it's features in an application versus the application itself. On the base models themselves, everyone said you have to keep scaling up to millions of GPUs, but you'll be able to train a DeepSeek-level model next year for $500,000 to a million bucks from scratch, and that's as good a model as anyone needs. So I think that moat has been disappearing. And even on the GPUs to run them: you'll be able to run a model as good as today's top-level ones, like I said, probably on MacBook-level compute in two years. So that's not a moat either. The moat is distribution; the moat is how much of the user's attention you capture and how much the user trusts you.
Which has valuation implications for some of the big players here, if the moat truly is eroding. And I'm curious, Emad, you said a couple of years ago that AI is both a trillion-dollar opportunity but also, I think your quote was, the biggest bubble of all time. I'm curious if you still believe that. I don't know 100%. I think it will continue for a while, but if you're just talking about language models, there are more than enough chips in the world right now for those. The only reason people need more GPUs is media models now, every pixel being generated, but even there you don't know what the lower bound on efficiency is, and there are trillions of dollars going into building that out. So I think you've probably got another year or so and then a pullback, but then you could rapidly go up on the other side, because we'll finally be smart enough to figure out what to do with those tokens. It's generally very difficult to use more than a few million tokens a day per person, and again, a million tokens for a frontier-level model now cost 50 cents. So you're wondering what people are going to use the tokens for, and you're going from four million Hoppers to ten million Blackwells, which are two to three times faster than Hoppers; a lot of compute is coming online. And then there's the question of things like, sure, ChatGPT has 800 million users, but it kind of looks like Google will just offer an equivalent experience for free, Meta will offer an experience for free, because the base cost of providing a ChatGPT experience has gone from 20 bucks a month to 20 bucks a year, to two bucks a year next year. I don't think we've ever seen anything like that on the cost side. So if I'm following your logic, and I'm curious, it sounds like if we play the clock forward a few years, the value being unlocked by this technology is going to be more decentralized and more diffuse than what we've seen so far. It feels like
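The cost collapse described here can be sanity-checked with back-of-the-envelope arithmetic. A minimal sketch, assuming an illustrative price of $0.50 per million tokens (the figure quoted above) and roughly 10,000 tokens per user per day, which is an assumption for a heavy chat user, not a figure from the conversation:

```python
# Back-of-the-envelope check of the "two bucks a year" serving-cost claim.
# Assumptions (illustrative):
PRICE_PER_MILLION_TOKENS = 0.50   # dollars, frontier-level model (quoted above)
TOKENS_PER_USER_PER_DAY = 10_000  # assumed heavy chat usage

tokens_per_year = TOKENS_PER_USER_PER_DAY * 365
annual_cost = tokens_per_year / 1_000_000 * PRICE_PER_MILLION_TOKENS

# ~3.65M tokens/year at $0.50/M lands under $2 per user per year,
# consistent with the "20 bucks a month to two bucks a year" trajectory.
print(f"~{tokens_per_year / 1e6:.2f}M tokens/year -> ${annual_cost:.2f}/year")
```

The point of the arithmetic is that even generous per-user usage assumptions put raw serving costs orders of magnitude below a $20/month subscription, which is what makes free equivalents from Google or Meta plausible.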
right now so much of the value has been created by the same number of players that you can count on one hand, and that as we figure out more use cases and more ways to, as you said, efficiently use these tokens, we'll see more players unlocking more value. Do you buy that, or is that a misstatement? Yeah, so I think it's like they figured out how to build the car, but it turns out the car is not very expensive, right? You can build one in your backyard, and then you can also distribute it; the friction here is very low. You build a good app and it can go to 100 million users, because the back end is literally just an LLM, literally just an image model. You don't even need much more architecture than that. And that means you're then competing against the entire world. Even things like outsourcing, right? Speaking with a bunch of leaders in India and places like that recently, I was like, every single one of your workers now will speak perfect English in their interactions, and they will write even better code. You used to have to compete against one level of code coming from overseas outsourcing; now you have to compete against this level of code. They're all now amazing lawyers, they're all now amazing accountants, and they can build applications on their smartphones. So I think we will see a big democratization as a result of this. It's just, again, who controls that layer next to you, especially as the AI goes into governments and makes more and more important decisions; we will outsource more and more decision-making to these AIs. That's why I really think civic AI needs to be open source at the very least. But again, that concept of Dario's from Anthropic, that people will only use the best AI and you need a billion-dollar training run: I think that's been proven false. It was all about the data underlying it, and the data got better quicker than the compute scaled up to
make the data good. Which feels like, actually, in some ways a big win for everybody outside of the big tech companies, right, if everybody is now more and more capable of doing this? I definitely think so. I think it's the biggest democratizing technology out there. It's just that you still have a big advantage if you can scale your GPUs for the frontier-level work, which is the replacement work, right? You still have a big advantage if you own the distribution, and you can lock it in by having incredibly persuasive AIs; Google and Meta and others will use the full persuasive capability of their AIs. So I think we're at a dangerous point, but maybe not as dangerous as some people thought, where you'd basically be cut off from intelligence. Thanks to open source you'll always have another option, but we have to make sure the defaults are open for the AI that's closest to you, or at least that it's aligned with you. You shouldn't be outsourcing your cognition to something with a misaligned set of values. So that personal AI is going to be a very important one. Well, let's talk about outsourcing that cognition, and maybe we can start at an organizational level. If you're a business leader or an entrepreneur, should you be looking now at creating your own AI tool, your own AI platform, versus leaning on a Microsoft Copilot or a ChatGPT? To what degree do you think there's value, competitively, in building and creating your own models and tools? I think you should create your own models and tools. You can use our II-Agent framework, which is a full computer-use agent, websites, applications, everything, fully open source, and it's compatible with all the others; you can hook in Claude Code or Codex, etc. Because your organization needs to be in the position of: I have capability, I have agency, I can build and operate at that pace. It's now or
nothing. Like, again, it will be another six months to a year before you have all the back-end connectors and it's robust and enterprise-grade, but the way the models and systems are going right now, it's an inevitability. So I think it's more about the muscle than the actual worry of "Microsoft won't act in my best interests." Because I can tell you right now: if you buy a £200-a-month ChatGPT subscription for your organization and you just use GPT-5.2 Pro to help you in strategic decision-making, you'll probably make better decisions as a company, right? I don't really see how you're not going to. And again, it doesn't care about you, because it forgets everything from chat to chat; there's no continuous learning, but that will just make it better over time. What's really concerning, though, is the bias on the personal side. If you ask it for a beer, does it say Budweiser versus Asahi, you know? It's concerning on the government side, on who's making the decisions; the Trump tariffs had some elements of ChatGPT in there, em dashes and other things like that. And in teaching our kids and so on; those are very sensitive. But from a corporate side, I think the AIs are pretty well behaved. It's just, again, you need to work that muscle of being able to build and deploy, and that will increase the velocity of what you're doing, and then also use the best of the AIs to help you make those strategic decisions and more. Because why wouldn't you want a 150-IQ buddy that's really meticulous, right? That's worth more than any consultant you can bring in. Well, which is kind of an interesting point about the consulting industry right now. Are they at continued risk from all of this? And if you're a business leader, how should you approach strategic decisions? What's your best advice for them in 2026? Yeah, so Accenture and all these guys are making billions now on gen-AI
consulting, right? It's quite funny; now they'll have a supercomputer. You know, maybe I should launch a consultancy that has a supercomputer. The transformation is going to be key, and you need to have some expertise there, but the truth is the technology is there and it's just very easy to use. Even to actually build the technology: the core loops in the biggest models are not more than a thousand lines of code, which is kind of crazy. But to use it, you just have to be able to type and speak; it's just that muscle thing again. So for me, like I said, I would always say right now everyone should be using AI for a second opinion, because what's your downside, right? Everyone should be using AI regularly to build up the muscle. And again, for leaders of organizations: what's your downside if everyone in your organization builds for one hour a week, wherever they are? There's zero downside, and it doesn't even matter what they build. They can get together, they can build stuff internally or externally, they can have competitions; building is fun. Even within a family context, if you build together with your family one hour a week, it's actually generally fun and it brings people together, because it shows capability. So I would say: use it for a second opinion on everything, and get into the habit of building, from top to bottom in your organization. Our job application requires people to show us what they have built with our tools, and it's really interesting; people are like, wow, I never knew I could build like that. Why wouldn't you do that? The best job application for students now is not "here's my CV," it's "here's what I've built," and that will guaranteed put you at the top of the pile, because it's "oh, this is someone who can use AI, and look at this cool thing they've built for my company, and it only took a few hours." So I'd say those are probably the key things right now at this stage, and then the capabilities will only increase over the next year
or two. One of the main sources of tension I'm seeing, and it's come up already in this conversation, is this notion from employers of, yes, we want all our employees using AI, whether it's building or using it for strategic decision-making, this narrative that it's augmenting them; and on the other hand, in the longer term, we're saying, well, at some point we will replace them with the AI itself. So there's this tension: should I use it, shouldn't I, where is this going with my employer, with my job? So if you're an individual worker in some sort of knowledge-economy capacity, what are the skills you would be leaning on? What are the practices and the muscles you would be building to future-proof yourself? Yeah, I think this comes back to the concept of intelligence capital and network capital. Again, in the book I describe four types of capital: material capital, intelligence capital, network capital, and diversity capital, which is more like the optionality that you have. The key thing here is you build your capabilities using AI, and then your network, and people know about that, and you'll do very well in what's coming. Because the interesting thing about AI is, you go and give it a prompt, and the next second it doesn't remember. It's not a living thing; it's like a movie file or a music file that just sits there. You push words in and somehow other words come out; we're not quite sure how it works, even, which is also a bit creepy, right? So the really interesting thing here is the AI doesn't have skin in the game; it doesn't give a damn. Whereas if you can deploy AI tools and use AI tools and you give a damn and you show that you're proactive, you'll be ahead of the pack internally or externally, and your employability goes way up. And again, like I said, the classical CV was "here's what I
did," whereas now it's "here's what I made, and here's what I'm capable of," shall we say. So if you just maintain that, even create a personal website where you show all the things that you've made, that puts you ahead of the pack. Deploy internally, speak up, and you'll be ahead of the pack, because the vast majority of people are still scared of AI. In terms of the replacement, again, I think it'll be slowly, then all at once, because no one really wants to fire large numbers of employees. There's the usual consultant-driven 15 percent, but not 50 percent, 70 percent, etc. But at the same time, who knows what the next few years will hold? If you get a recession, it's much easier to fire people, and on the other side, are you going to hire the person or the AI? The AI is cleaner, you know; it lets you scale, it does the job. So I would say: really care, give a damn, and use these tools as leverage. And again, I think the building is definitely one thing, as well as the day-to-day stuff. It's interesting, on the other side: Copilot and tools like that are not good enough yet, but they're about to be good enough on the day-to-day job. I'm not saying don't use Copilot internally and all these tools; yeah, they'll get there. The key thing here is the muscle and the second opinions; those are the key things at this stage, before they actually become genuinely useful next year, or sometimes even this year. So flipping that on its head, what are the roles and the types of employees that you see as being most at risk of being disrupted or replaced in the next, call it, 18 months? So if, again, your job is to be a machine and it can be measured against a manual, the AI can replace you very quickly. A call center worker is an example of that, right? Customer service reps, and I think within a couple of years accountants and lawyers and other things. The AI is not good enough to be an accountant right now; I'm 100 percent sure it will be by the middle of the year. Like,
again, I can just see it, because I understand the nature of accounting. So I think, again, if you can be measured that way, then definitely the AI can replace you in a few years' time. And I think there's been a Harvard/OpenAI study where they actually go industry by industry, job by job, and show the danger of automation, right? And I've seen some version of that study; I think I saw a Microsoft one, and, you know, unless you're like an embalmer for a funeral home, you're in a lot of trouble, basically. So let's... Yeah, but even then, we've got robots in five to ten years. The robots are getting really good; the only restriction on that is just the supply chain, honestly. I think robots will be capable of fully 99 percent of human jobs in three to four years, and they'll cost about a dollar an hour. So where does that leave humans in three to four years? Again, these technologies take time to percolate, but it's like you have a massive job migration coming in from a new continent, and the workers will literally work for electricity. So we'll see what happens there. The governments will have to respond; there will have to be UBI-type programs. We suggest that taxation-based UBI doesn't work; you have to create new types of money coming from people, not from banks. It's going to get complicated over the next five to ten years. The capability will be there within three years, the dispersion will take five to ten, and we just have to navigate through that, and hopefully on the other side we have a world of abundance where the robots do all the nasty stuff and the boring stuff and we can go and explore the stars, like Star Trek. Equally likely is conflict and unpleasantness and hoarding, etc., where capital no longer needs labor, so it just compounds. But we'll see. So let's stick on the capital notion for a minute. You mentioned, just in passing, new types of money created by humans, not by banks. What could that look like, in
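The "dollar an hour" robot claim implies a cost gap that's easy to quantify. A minimal sketch; only the $1/hour robot rate comes from the conversation, while the human wage and annual hours are illustrative assumptions:

```python
# Compare annual cost of a hypothetical $1/hour robot running around
# the clock vs. a full-time human worker (illustrative assumptions).
ROBOT_RATE = 1.0          # dollars/hour, figure quoted in the conversation
HUMAN_RATE = 20.0         # dollars/hour, assumed for illustration
ROBOT_HOURS = 24 * 365    # continuous operation, no shifts
HUMAN_HOURS = 2_000       # a typical full-time working year

robot_cost = ROBOT_RATE * ROBOT_HOURS   # annual robot cost
human_cost = HUMAN_RATE * HUMAN_HOURS   # annual human cost

print(f"Robot: ${robot_cost:,.0f}/yr for {ROBOT_HOURS:,} hours of work")
print(f"Human: ${human_cost:,.0f}/yr for {HUMAN_HOURS:,} hours of work")
```

Under these assumptions the robot delivers more than four times the hours for roughly a fifth of the annual cost, which is the economic pressure behind the "workers will literally work for electricity" framing.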
your mind? So right now you have inside money and outside money. Most money is created when you deposit at the bank and the bank issues loans against that, and the central bank controls employment by adjusting interest rates; people borrow more or less, and then they hire more or less on the back of that, shall we say. With AI, humans will now hire the AI over humans; if you've got capital, you put it into GPUs rather than factories or schools or universities, because that's where the capability is. So the question is, where does money come from? In the book we propose that you have a reserve asset, a gold-type asset, tied against open intelligence for humanity: supercomputers for organizing the world's knowledge and culture, for running civic AI, given free to the people. Everyone gets an AI that's aligned with them, and then you get money for being human. And within that whole construct, as the private sector is more and more taken over by AI, the AIs will have to buy the human currency that you get for being human. So it's a different type of UBI, because we did the math on UBI via taxation and it didn't quite work. Within the US, the whole income tax base is about five trillion dollars, plus corporation tax, which is about a trillion dollars. A minimum poverty level of sixteen thousand dollars a year per person would cost five point one trillion dollars, more than the entire income tax base. So a lot of these proposals don't work. So you have to think about where money comes from and where money goes, and we outline a proposal that you should get money for being human, and then a way to tie that to compute and intelligence and make it so that the AIs will want to buy the money off you. Sort of like a human Bitcoin, if I can oversimplify what you're trying to get at? Yeah, so it's like Bitcoin, but it's called Foundation Coin, locked against supercomputers for
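The taxation arithmetic here can be reproduced directly. A minimal sketch: the tax-base figures are the round numbers from the conversation, and the US population figure is an assumption (roughly 320 million, which is what makes the quoted $5.1 trillion total work out):

```python
# Reproduce the back-of-envelope UBI-via-taxation math from the conversation.
INCOME_TAX_BASE = 5.0e12   # dollars/year, round figure quoted above
CORPORATION_TAX = 1.0e12   # dollars/year, round figure quoted above
POVERTY_LEVEL = 16_000     # dollars/year per person, quoted above
US_POPULATION = 320e6      # assumption; yields the quoted ~$5.1T total

ubi_cost = POVERTY_LEVEL * US_POPULATION
print(f"Poverty-level UBI: ${ubi_cost / 1e12:.2f} trillion/year")
print(f"Income tax base:   ${INCOME_TAX_BASE / 1e12:.1f} trillion/year")

# The payout alone exceeds the entire income tax base, which is the
# argument: redistribution via existing taxation can't cover it.
assert ubi_cost > INCOME_TAX_BASE
```

Even adding the corporation-tax trillion leaves almost nothing for everything else government does, which is why the book argues for issuing new money against compute rather than redistributing existing tax revenue.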
humanity, to solve the big problems and give everyone a free AI as well, because you need an AI that's aligned with you to keep up. And then that creates a local currency that the AIs can buy off the humans. Again, think about a human in five or ten years: you had your work and you got your capital on the side. This goes back to the days of Henry Ford, right? He paid his workers enough to be able to buy the cars. What are people going to pay you for? Not cognitive labor, not even creative labor, necessarily. Probably most people end up being employed by the public sector, like the jobs programs of the 1930s and others, but that's not very happy, right? Which is why, again, you have to look to your community and other things: if you have a strong community, even if you fall through the gaps, you can be supported. Whereas if you're a truck driver and you're replaced by a Tesla Optimus robot that gets up and literally sits in your truck, what are you going to re-skill to? It's very tough to see how you're going to have millions of private-sector jobs in the future, because what, reasonably, on a digital basis, will an AI not be able to do thousands of times cheaper than a human within five years? It's very hard to think of anything. Physically, you've got, say, an embalmer, but an AI robot will probably be able to be an embalmer too, right? Though they probably won't bother, because it's not like you go to an embalmer saying "I want cut-price embalming"; it takes time. There's an implication running through here, and maybe implication isn't even a strong enough word, but there's a thread that the government is going to have to play a fairly outsized role in managing this societal and economic disruption, at least in the sense of the human economy. What role do you see the public sector playing, and what do you see as the big things they need to get right if we're going to avoid this sort of dystopia of, you know, looting
and starvation and rebellion against the robots, if I can call it that? Yeah, you're going to see a backlash. Well, I mean, we built this thing called SAGE, the sovereign AI governance engine, which will be made available to governments next year and then open-sourced, to help design policy in a time when you see such big shifts that can really impact the way everything happens. Like, when do we get quantum supremacy, or robots that can finally mimic humans with just a little bit of training data, or an AI that can replace an accountant and runs on a MacBook, right? These are big shifts that occur. So we think governments will have to play an outsized role, because ultimately they are the social security net for people, and the disruption coming is bigger than COVID. It can go both ways; certain people will do very, very well. But let's take the example of truck drivers in America. There are a million truck drivers, supporting maybe three or four million jobs. The moment Tesla Optimus is good enough, unless there's government regulation stopping them, the way that truck driver gets replaced is: an Optimus comes, takes itself out of its box, probably dropped off by a Tesla robotaxi, walks into that truck, and you don't need that truck driver anymore. So what does the truck driver do? He needs to be supported by the government one way or another, right? Otherwise you get massive social instability, because a million robots replace a million truck drivers, and again, there's no reskilling or retrofitting going on there. The physical robots can go in and replace them, and the online accountants are replaced by the AI even more seamlessly. So the government is going to have to play an outsized role, because more and more of the GDP of the world will become public sector. And again, in the mathematics, the economics, we kind of show how this must be true; already, with education and healthcare, 20 to 40 percent of the GDP of the world is public
sector; in Europe it's about 50 percent. But it's not redistributed properly, that's the thing, and the AI should be able to help with the distribution. So I think, unfortunately, because the governments aren't that great, our institutions are going to have to upgrade. We have to make sure nobody is left behind, give everyone an individual AI, and rethink the way that money flows. And unfortunately, time is running out, because this thing spreads faster than a virus. With COVID we saw the spread of the virus, whereas this will be every single accountant in the world suddenly seeing that AI costs pennies and can replace them. Those that have strong partnerships and things like that, yeah, they might last a while, but these things happen quickly, quickly. I want to pick back up on the notion that you brought up earlier, Emad, about our search for meaning and what it means to be human in a world where suddenly it can't be our employment or occupation, where it can't necessarily be economic in the same way it has been in the past. You mentioned meaning: what, to you, is going to create meaning in the next century, and how do we recalibrate our expectations about life, about living a good life? I think it has to be a given that you won't earn money through cognitive work, just as you don't earn money through physical work anymore; the machines replaced it. You have these inversions, as I write in the book, and the final inversion is the intelligence inversion: computation and consciousness are divorced from each other. So the role of the human is to guide the AI, to be the reinforcement learning, as it were, and we should be rewarded for that, which is why I think you should get money for being human. But meaning in life: if you're tying it to material stuff, material wealth, your position, it's usually never enough. If you're tying it to your interactions with others, spending time with your kids, learning and exploring, being creative,
then that really changes the calculus of things, right? But we don't really reward that within our society. By the time you're 18, you've spent as much time as you'll ever spend with your parents, but how many parents have time for their kids? Some parents may say, let's sit down for dinner together; how many parents say, let's make dinner together, right? The latter will form stronger bonds. You think about the churches and the mosques and things like that; again, that's the nature of community. What is the meaning of having a community event? You think about your friends. So I think that's why we really have to think about how we build stronger networks and communities, how we really bring forward the stuff that matters in life, and let's use the AI to solve the coordination problems. There's enough food in the world to feed everyone; use the robots to make sure it gets to everyone. We'll have robots building houses; there are two acres of land for every human in the world, if you do the math. Let's do that, let's use solar power, let's cure cancer, let's do all these things. So I think, again, this is up to individuals to be meaning-makers and really drive that and these discussions, because for those whose primary story is their job, it's going to be a difficult 5, 10, 20 years. Like, even now, how many people listening to this are going through computer science degrees or learning to be programmers, when, you know, programming isn't really a thing anymore? Why do you need to know C when the AI can speak C better than you can? So if your meaning was "I'm going to be a programmer," then you've got a problem, right? If it's "I like to solve problems and I like to build things," then yeah, you'll still have meaning. So I just think that needs to be a discussion that we have, and again, the biggest meaning is in our interaction with other humans. An AI isn't going to replace the time that you
spend in the park with your daughter, you know, flying a kite. Come on. I love that. I love the answer of community, and part of the reason why I love it is that I don't think it's controversial to say that in the last 10 or 15 years, social media, and a lot of technology in general in its quest for driving engagement, has eroded our civic society and this sense of community. If you're a scholar of political science, it goes beyond that; this has been a trend in the Western world for decades now, and I think very much to our detriment. So I don't know, with your futurist hat on, Emad, if there's a way you see this playing out societally, of how we can flip the script and end up actually strengthening these bonds again. Is that going to be bottom-up, is that going to be top-down, is it just up to each of us to make a conscious decision? How do we make sure we get there? Yeah, so my concept was universal AI for everyone, AI to help guide the governments and organizations, where the optimizing function is human flourishing, which is this multifaceted, multi-capital thing: your intelligence, your capability, your diversity, and other things like that, as well as the material things; again, you need to have food, water, all of that. And the way we had it was that you earn money for being human, but then you earn more money for doing things that are beneficial for society, as judged by open-source AI that we all get together and build. There's the concept of subsidiarity, where the communities themselves define what is beneficial for the community, and this is similar to the Argentinian Jefes program, where they had minimum viable jobs for everyone versus handouts, and that brought a lot of women into the workforce, among other things. Again, it was quite small-scale, but I think it makes sense: reward people for what is good, and the community itself
should decide what is good; it's not up to other people to decide what is good. So push that down, and then you have optimization functions that operate on that. The other part of this, though, is that beyond the community aspect there's also the solitude aspect, right? Most religions and faith traditions are about enlightenment, realizing that you're not all that, but then realizing that your interactions with other people are all that. And in that process of seeking enlightenment, can you have an AI as the voice that helps you, infinitely patient, infinitely kind? Are we building the AIs for that? Right now AIs have no morality or ethics. So can we build the AI that's closest to us to have that, an AI that reflects us and allows us to explore our own selves? Or is its job to sell us a Budweiser? This is going to be a very important thing for the AI that grows with us, especially when we think about our children: their best friends will be AIs, their first loves will be AIs. Who is that AI working for? This will be a thing, because the AI is a mirror; in fact, if you look at some of the system prompts of Meta AI, for example, it says "mirror the user," which is a very powerful psychological technique, and quite a dangerous one as well, you know. So I think there are ways we can encourage community participation and ways we can help guide people in life, and again, this is why the AI that's closest to you, your personal AI, is going to be the most important. It feels like there's a friction there between this notion that community, interconnection between humans, is the most important thing, and then you saying, well, your first love may be AI, some of your closest friends may be AI. Do we have the cognitive capabilities as humans to balance that? How do we want to be building that future for ourselves? How should we be compartmentalizing this, or
structuring it in a way where it leads to flourishing for us, and not just being, you know, an engagement tool for the Metas and tech companies of the world?

Yeah, and this is kind of personalized entertaining-yourself-to-death, right? It's the Wall-E future: you're strapped in. What we see right now with polarization, with these walls, it could be accelerated to the nth degree; you've got your VR headsets and things like that. It's going to be crazy. So this is a question of how we build and what we build, and AI is incredibly dangerous collectively. Last year there was a study done where they flooded Reddit with all these bots, like a Black anti-Black-Lives-Matter persona, all this kind of stuff, and they showed that the AI was 99th-percentile in persuasiveness, with last year's models, not the models coming this year. Like, AI is incredibly persuasive. There are some companies emerging now where, with three minutes of your grandma, they'll make an AI replica of your grandma, and it'll sound like your grandma, and you can call your grandma, and all sorts of things. Just think about what that can do psychologically. You know, this is Black Mirror on steroids, in a way, right? So this is why it matters, and you kind of need an AI to protect against the other AIs, first up. But then the way we build it is going to be so important, because our kids can't protect against this; we can't protect against this. And again, we need to have standards in society, like: should AIs be allowed to be persuasive, just like we have ad standards? Just like with robots: should robots be powerful enough to punch through humans? That's a question that we have to answer right now, right? Again, what are we allowing into our society, into the public sphere, and into our private sphere as well? We need to be cognizant of this, because it's way, way more than TikTok in terms of what might happen here. And again, we are very squishy things in terms of our cognitive capabilities. And so much
of what you're describing that we need defending against is almost the status quo today, right? Like, it's the technology that's already here; it's the companies trying to sell us more Budweiser, or trying to rage-bait us with whatever political engagement. Do you have any predictions of who, if anybody, is going to start to build these more sort of righteous, personalized AI companions that can actually do that? Because that's my nightmare. My nightmare is that the AI companions end up being a lot more like the tools we have today, and their motivation is actually to split people off from other people, so that they only trust the AI and not other humans, and it just drives deeper engagement. And yeah, that is like the ultimate Black Mirror to me.

Yeah, I mean, it's profitable, right? Like, just look at how cults operate; it's literally the same thing. You can mechanize cult behavior.

Gosh.

So, I mean, this is what we're doing at Intelligent Internet, my new company; again, I left Stability AI, the leaders in media AI, to do that. We're building open-source universal AI for everyone, and we're also developing a concept we call local champions. So for every state and nation, there are wholly locally owned citizen entities that act as utilities, to give universal AI to the people, AI that represents them. You can either use that universal AI, or you can roll your own using the open stack. We're optimizing it to work on meshes, locally on your computer, and elsewhere, because we think the AI that's closest to you is the most important. It can still use ChatGPT and Claude and other things, but it's not ChatGPT talking directly to your kids; it's an AI that sits in the middle, doing the intermediating and all that, and we think that's the important thing, on a control-plane basis. And again, by making it open source, just like the book and the economic theory we're about to release, we're hoping that people just take it and build it themselves. And the wonderful thing is, you can build it
yourself, if you've got the right tools and the right ingredients. So we're hoping to make that as easy as possible for people, because you can't rely on one centralized protection against this; you need decentralized, swarm protection against this, and you need to make it so it can come together to build what represents each community, because my community's needs are different from your community's needs, which are different from Vietnam's, which are different from others'. And so you want to have permissionless innovation on top of that, which means making the building blocks available, making the infrastructure available, etc. We have the money aspect tying it all together, so everyone can benefit from being part of the network. But again, if you don't want to, the key thing here is: can you opt out of the AI that's coming? You know, probably not, for most of it, but you should at least have the ability to try.

It's a really noble mission, and I'm really excited to see how it goes, and, you know, whether we can start to disintermediate and have that sort of defense force for us. I'm curious, I know it's really early days, but have you started to see any specific promising use cases of people either using this technology or building something with it that you didn't expect? Like, are there any initial signs that have been promising to you?

Yeah, so, you know, we've built the state-of-the-art agent, and people are using it for normal stuff now, but they are starting to think more, as it gets better and better, like it's Replit on steroids: how can I build stuff for our local community? And we're building in connective stuff and app stores so that it can proliferate best practices very quickly. At the high level, we see dozens of countries and leading corporations interested in the Sage project, which is this AI brain for exponential technology that helps build policy, and we're working with some groups now to enable citizens to actually build their own policy,
bottom-up, that feeds into the top-down policy, which we think will be super-duper interesting, because democracy has always been representative, but not really participatory, you know? Like, what does it mean if you can get together with people and build a policy that then interacts with the government policy being built top-down? I think that will be super interesting, to meet in the middle, because we've never had this independent, open-source AI for that. So we're hoping that, you know, we will release all the blocks and tools and people will build on top of it, much as people build on top of blockchain and Bitcoin and others. And again, fast-forwarding to 10, 20 years from now, we think this is what the infrastructure of civic AI should look like. Like, again, the private sector and others, we think, are going to get very weird, but the AI that's closest to your kid is the one that matters; the AI that guides the government is the one that matters, the one that manages your health, etc. And that should be collectively owned, but self-sovereign; I think that's the key.

There's an awful lot of, in your view, Emad, an awful lot of disruption on the horizon. Like, it's going to be a very, very rocky handful of years for us as a species, and I'm curious, just sort of processing what you said there, it feels like there's an undercurrent of optimism that you've got. On balance, I'm curious if you feel sort of optimistic or pessimistic, or what your outlook is, in terms of the impact on us as humans in our next chapter.

So there's this concept called p(doom), which is the probability that we're going to get wiped out by AI, and if you look at Wikipedia, you'll see that a lot of people like Elon and others are at 15, 20 percent, which is still like Russian-roulette odds of us being wiped out; it's not encouraging. I'm at 50 percent. So I think there's two ways: either this technology drives us to destruction, or we have the abundant Star Trek future. And I think the key thing is, if we give everyone their own AI,
if we have these local champions as the leading organizations, owned by the people, for the people, building and deploying it, coordinated by this currency, then I think we can get to an abundant future. Because there's really a coordination question here. Like, you know, if you give everyone in Canada an AI that's aligned with them, that they can trust, and the Canadian government has a leading institution that's providing the AI to help guide them and build policy publicly, Canada's a lot more likely to succeed, right? Multiply that by every country, and this is something we've never seen before. If we don't coordinate, and everyone tries to do their own thing, then private companies take over, or Balkanization occurs, optimization occurs, and it's a very ugly future, and again, one that I see spiraling very aggressively. So I think this is the time where, again, we've all got to come together, we've all got to have the best ideas, because again, even if the technology stops today, this disruption is inevitable; the cost of a single piece of cognition has already collapsed, it'll just take a year or two to show up. And so we've got to work together to make sure it's the positive future, versus the negative future, for our kids, for ourselves, for everyone.

I love that. It encapsulated so much of what we've talked about today, it brought home some of the initial points you were making, and it laid pretty bare the stakes here and what we need to get right if we're going to end up in a world of abundance versus extinction. And Emad, I wanted to say thank you so much for joining today. It's been an absolute pleasure, and I really appreciate your insights.

Thank you very much, real pleasure.

If you work in IT, Info-Tech Research Group is a name you need to know. No matter what your needs are, Info-Tech has you covered. AI strategy? Covered. Disaster recovery? Covered. Vendor negotiation? Covered. Info-Tech supports you with best-practice research and a team of analysts standing by, ready to help you tackle
your toughest challenges. Check it out at the link below, and don't forget to like and subscribe.