"The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis

Confronting the Intelligence Curse, w/ Luke Drago of Workshop Labs, from the FLI Podcast

76 min
Jan 1, 2026
Summary

Luke Drago of Workshop Labs discusses the 'Intelligence Curse' - the risk that as AI systems become dominant economic actors, societies may stop investing in human development, similar to how resource-rich countries can become dependent on oil rather than their people. The conversation explores how this could lead to reduced economic mobility, weakened democratic institutions, and concentration of power among AI-controlling entities.

Trends
- Bottom-up automation starting with entry-level positions
- Declining job postings in AI-automatable fields like software engineering
- Shift from human augmentation to human replacement as AI goal
- Increasing importance of controlling proprietary data and tacit knowledge
- Growing tension between AI safety through centralization vs. democratization
- Rise of B2B AI economy potentially excluding consumer markets
- Need for privacy-preserving AI tools that work for individuals rather than corporations
Full Transcript
Hello and welcome back to the Cognitive Revolution. Today I'm sharing a special cross-post episode from the Future of Life Institute podcast, hosted by Gus Docker and featuring Luke Drago, co-author of the Intelligence Curse and co-founder of Workshop Labs. I wanted to bring this conversation to your feed because it highlights a critical question that I think society should be grappling with much more than we currently are: is it wise to design AI systems to compete directly with, and potentially replace, humans as economic actors? Personally, I'm relatively optimistic about humanity's ability to adapt to the social and economic changes that will come with AI, and I tend to worry much more about catastrophic scenarios where we lose control of AI systems entirely. But this conversation did force me to confront the possibility that things might still go seriously wrong even if we do manage to solve the alignment problem. Luke focuses on a particular failure mode which he calls the Intelligence Curse. This concept echoes the resource curse phenomenon that we see in some resource-rich but underdeveloped countries today, where an extractive elite manages to maintain power without democratic legitimacy, or even much in the way of cultivating the productivity of its own population, simply because they control key resources. By analogy, in a future where AI systems power the economy and human labor is no longer much of a bargaining chip, whoever controls the AI could have a dangerous level of power. I have to say, for as much as I'm hopeful that the AI revolution can finally free people from doing work they don't enjoy, this dystopian vision is a pretty natural extrapolation from what happens in today's world when human workers are rendered economically uncompetitive for whatever reason. And as we've seen even in many parts of the United States, the results are not pretty, nor without consequences for the rest of the country and the world. 
Listen to this episode and I think you'll have to agree that at a minimum, if AI is going to get anywhere near as powerful as I and all the frontier lab leaders seem to think it will, we are going to face a massive challenge. Luke, to his credit, does have some very interesting ideas about what we can and should do to solve this problem. At the societal level, he recommends investments in open source AI to commoditize the intelligence layer and prevent excessive economic and political rents from flowing to model owners. For companies, he emphasizes the need to design AI systems that empower individual users while allowing them to retain control over their economically valuable data. And for individuals, he suggests guarding your valuable know-how carefully, developing N-of-1 career paths, and chasing moonshot projects sooner rather than later. I'm excited to see how Workshop Labs delivers on this vision, and I hope to do a full episode with them when they launch to the public in the coming months. For now, I hope you enjoy this conversation about the Intelligence Curse and how we might break it, from the Future of Life Institute podcast with Gus Docker and Luke Drago. Welcome to the Future of Life Institute podcast. My name is Gus Docker and I'm here with Luke Drago. Luke, welcome to the podcast. It's great to be here. Thanks for having me. Great. So you have this essay series on the Intelligence Curse, and maybe we should just start at the very core of that and ask, what is the Intelligence Curse? Yeah, so I'd summarize the Intelligence Curse pretty simply. The idea is that if you have non-human factors of production and they become your dominant source of production, your incentives aren't to invest in your people. And this sounds very abstract. What does it mean to have a non-human factor of production? And what does it mean that we can build things that actually replace us? And why doesn't this just result in, like, AGI utopia? 
But I think we have some concrete examples, and one of the ones that we point to in the essay, and what we actually named the effect after, is the resource curse, where there are states that rely primarily on, or derive a significant amount of their income from, oil revenues as opposed to investment in their people. And what you end up seeing is that because investments in oil produce a greater return than investments in their people, those states oftentimes funnel money towards the oil investment as opposed to their people. The result of this is a worse quality of life for their people, who have much less power, because at the core, your ability to produce value is a core part of your bargaining chip in society. Yeah. So the worry here is that as we get more and more advanced AI systems, governments and companies will be incentivized to invest more in building out even more advanced AI systems as opposed to empowering workers and citizens. Exactly. Yeah. I guess one objection here that I hear from economists is just that if we look at previous technologies, we see that they basically increase wages and increase living standards, unevenly and with setbacks, but over time we see increased wages and living standards. Why isn't the same just going to happen with advanced AI? Well, I think there's a category distinction in what we're trying to do. The last thousand years of technology has been technologies that have been extremely adaptive for humans, that have helped humans do new things, and they haven't encroached upon our core fundamental advantage, which is our ability to think and then do things in the real world. Obviously, during the Industrial Revolution, there were lots of concerns that replacing and automating large parts of physical labor would result in a world in which people didn't matter. 
But I think the actual outcome was a bit different because, of course, there isn't a machine that was produced in the Industrial Revolution that completely automated human thinking, the ability that's kept us at the top of the food chain. And if you look at the goal of a whole lot of companies in the field, you'll find that they stake their claim, their reason for existence, on creating technologies that can do everything that any human can do, better, faster, and cheaper. And of course, the question then is, if it is the case that this allows capital to convert directly into results without the need for other people in the middle, why wouldn't companies just invest more and more money into this? I don't think it's Machiavellian. I don't think it's an evil plot by them. What I think instead is that if you have the opportunity to save 50% on your wage bill while also getting better, faster, more reliable results, most people are going to take that option. And so my concern here is that as we continue to build technology that is designed to replace rather than to augment, we move closer and closer towards a world where people just don't matter. And then, of course, you're reliant on other forces. You're reliant on government to make sure that you still have a high quality of life when you can't produce it for yourself. I think that's a very precarious situation to be in. If we think about pensioners today, for example, they don't produce much for society. In fact, they are, in a sense, a draw on society's resources, but they're still protected. Why couldn't we imagine an expansion of that system? This is kind of the obvious solution that comes to mind for people. We will have universal basic income, and we will have protection of individual rights, and so we will maintain agency and relevance in an age of advanced AI. So I end up arguing something like: the core proposition is that your economic value is an important part of your political value. 
We've seen in the history of democracies that oftentimes they start at the moment where there are diffuse actors who have varying amounts of capital, who need to find ways to settle disputes without violence. The emergence, for example, of British democracy and the Magna Carta came because there were, you know, lords that had power that wasn't equivalent to a king's necessarily, but sure had a lot of influence, and that came from the material possessions that they controlled. This necessitated free courts and some sort of a way to solve disputes in Parliament. And the evolution kept moving backwards and backwards and backwards. And we continue to see that economic liberalization is oftentimes a precondition for the democracies we really care about. Now, there are non-democracies that are fine places to live, that don't wildly trample on human rights. But of course we know that there's an extremely strong correlation between governments that respect your rights and enable you to be prosperous and governments that are democratic. These things aren't one to one, but they're pretty damn close. And so the concern that I have here is that as we level the underlying economic structure that creates these bargaining chips that put us in power, we end up reducing those. Pensioners are a fantastic example here because, of course, a pensioner isn't someone who appears and never works for the rest of their life. Pensioners have 40 years of working extremely hard, paying into a system, and being active members of society, who then have a bargaining chip so that in the last, you know, 10, 20, 30 years of their life, they get this exemption. It's because of the system that we have built that this is stable. 
And I would also add that, you know, of course, in the history of the United States, for example, we treat our retired folks way better today than we did before things like the New Deal, which involved, you know, mass amounts of unrest and workers trying to use their bargaining chip. So I'm very concerned about the world in which we are all pensioners forever, with no way to actually bargain, at the mercy of the next election for what happens in our subsequent years. Which economic metrics should we be looking at if we want to try to confirm whether the intelligence curse is actually happening, or disconfirm the hypothesis? There are a couple of things that I take a look at. Income inequality seems quite important. We talk about sudden takeoff in AI, where there's suddenly a foom and all of a sudden AIs are way, way smarter than us. I think you might want to also look for this in economics. Is there a sudden moment in which capital immediately begins compounding, because every dollar you put into a system produces some sort of an outbound return? And if you see this kind of rapid accumulation, where, you know, you remove talent from the equation and suddenly capital begets more capital, then the actors who already have lots of capital can really rapidly accumulate. Now, it's already the case that having capital makes it easier to get more capital. But there are a bunch of boundaries, a bunch of restrictions, and outside players can still win. So outside of, you know, mass income inequality, I'd also take a look at things like economic mobility. Is it the case that people who aren't rich can move upwards in society? The United States, of course, is a very famous society for having this as a marker of its success: that you can come from anywhere, start from nothing, and win. It doesn't mean you're guaranteed to win, but there's always a pathway. And I think if those pathways start to close, that would be a very alarming signal here. 
Now, I presume we'll get into pyramid replacement here. And I think some things that we really want to look at as well include rising unemployment rates, especially among, like, your earliest age brackets, the people that are just entering the workforce. But those are a couple of the metrics that I'm taking a look at here and that I've advised others to look through. Yeah. Actually, explain that concept for us if you would. Pyramid replacement, what does that look like? Yeah. So at the beginning of the paper, or rather the beginning of the series of essays, we say that it's pretty likely that if the technological trend continues, you're going to lose your job. And we try to tell a story of how we think that's going to happen. And we start with the example of the multinational white collar firm. These are very large companies, oftentimes, that do a whole lot of work. Every year they hire a new class of analysts or a new class of entry level employees whose goal is to work their way up the pyramid. And they hire a lot of them. They spend a whole lot of time recruiting from the top universities. They show up on campus, and their goal is to create this pipeline of talent, because the company has a lot of people at the bottom and a few people at the top. But as people at the top leave, because they retire or because they find other opportunities, you need a funnel of leadership. And our claim is that AI first makes it very easy to replace the people at the bottom. Now, there's actually a paper that came out, I believe yesterday, starting to show some empirical evidence for this. In some fields AI is augmenting, but in others it's quite clearly replacing. And what we've seen in these targeted fields, I can't recall each one off the top of my head, but obviously software engineering is one of them. 
We've seen a shrinking in the number of job postings and in the number of job offers and overall employment in, like, the 22-to-25-year-old bracket in these fields. That's exactly what you would expect if it is easiest to automate the entry level work first. Our claim then is that AI is likely to move up the pyramid as it gets better and as it gets more and more agentic and capable of doing more tasks with long horizon planning, and as companies are able to capture more and more of that knowledge for themselves. What they're able to do is move up the pyramid, replacing people bottom up, as opposed to a kind of middle-out or top-down replacement. One day you wake up to find that all of your colleagues are AI, and the next knock at the door is booting you out too. We think this could happen at every level of a white collar firm. Now, there are a bunch of exceptions here. Obviously it'll work differently in some industries. Some sectors within a company are going to be easier to automate than other ones. And I think this is not exactly how it works in blue collar work. I think blue collar work, speculatively, might look more 0 to 1, as in, there aren't the robots required to do lots of blue collar work, and then there are. And I'm less familiar, and I've spent less time in the literature, on the structure of blue collar companies, but my understanding is there are a lot more people who work doing, like, a similar job. It's a bit less pyramid shaped, a bit more flat with, like, a small pyramid at the top. That's a pretty disastrous situation if robotics is able to rapidly automate those jobs. Yeah, you might even imagine that the managers of a bunch of physical workers or blue collar workers might be replaced before the workers themselves. So you could imagine systems that can automate invoicing and scheduling and so on being replaced by AI before we have fully functional robotics to actually do the blue collar labor. 
I do wonder, if we're talking about the trend already happening, I mean, this is a quite complex question, but how do we know that it's happening because of AI? Say there are fewer job postings related to programming. Could that be because of a general market trend or interest rates or something different than AI? So I'll flag, the paper that I'm talking about is one that I've looked at but not spent a ton of time with yet, so I don't want to speak as an expert on that paper. I'd love to link that in the description as well, and I'll spend some time on that myself. But that particular paper, if I understand it correctly, works to isolate that, to try to understand what the mechanism was here. And my best guess here is you want to look at a couple of different factors. One, you're going to want to see what industries are being affected. We have a pretty good sense as to what tasks are automatable right now and what tasks aren't. We know, for example, that software engineering is extremely automatable at, like, its base level. And so you would expect to see, if it's AI, that the tasks that we know were easier to automate are the ones that are falling, while other ones are being augmented or much less affected. And my understanding, again, haven't read the entirety of the paper, have just skimmed the initial findings there, my understanding is that is roughly what you're seeing. And if that's not the case, that is what I'd be looking for here: based on existing and projected AI capabilities, which sectors are seeing changes in employment, and does that match expectations? Hey, we'll continue our interview in a moment after a word from our sponsors. If you're listening to this podcast, you're probably thinking seriously about where AI is headed, and maybe about how you can actually contribute to making it go well. 
I want to tell you about an opportunity that could become a pivot point in your career and a springboard for you to make a positive difference, a program that I've been so impressed by that I've supported it with a personal donation. I'm talking about MATS, a 12-week research program that connects talented researchers with top mentors working on AI alignment, interpretability, security, and governance. These are researchers at Anthropic, Google DeepMind, OpenAI, the AI Security Institute, Redwood Research, METR, the AI Futures Project, Apollo Research, GovAI, RAND, and other leading organizations. The track record here is remarkable. MATS has accelerated over 450 researchers, with 80% of alumni now working in AI safety and security. 10% have co-founded AI safety initiatives, including Apollo Research, whose co-founder and CEO made the 2025 Time 100 AI list. MATS fellows have co-authored over 120 publications with more than 7,000 citations and helped develop major research agendas like activation engineering, developmental interpretability, and evaluating situational awareness. The program is fully funded: a $15,000 stipend, $12,000 compute budget, housing, catered meals, travel, and office space in Berkeley or London. Everything you need to focus entirely on research for three months, with the chance to extend up to a year. Applications open December 16th and close January 18th. If reducing risks from advanced AI is something you care about, you should apply. For more information, check out matsprogram.org/TCR. That's matsprogram.org/TCR, or see the link in our show notes. The worst thing about automation is how often it breaks. You build a structured workflow, carefully map every field from step to step, and it works in testing. But when real data hits or something unexpected happens, the whole thing fails. What started as a time saver is now a fire you have to put out. Tasklet is different. It's an AI agent that runs 24/7. 
Just describe what you want in plain English: send a daily briefing, triage support emails, or update your CRM. Whatever it is, Tasklet figures out how to make it happen. Tasklet connects to more than 3,000 business tools out of the box, plus any API or MCP server. It can even use a computer to handle anything that can't be done programmatically. Unlike ChatGPT, Tasklet actually does the work for you. And unlike traditional automation software, it just works. No flowcharts, no tedious setup, no knowledge silos where only one person understands how it works. Listen to my full interview with Tasklet founder and CEO Andrew Lee. Try Tasklet for free at Tasklet.AI and use code COGREV to get 50% off your first month of any paid plan. That's code COGREV at Tasklet.AI. Yeah, yeah. Actually, let's dig into that a bit more and think about which sectors or which jobs or tasks would be protected from automation. And I've suggested some mechanisms of protection that we can talk about. Where, for example, if you're a lawyer, there might be kind of legal restrictions on replacing you. I don't think we're going to see an AI judge employed by the government very soon. Or at least I don't think we're going to see that until that's basically, probably, the last job to be automated. So how do you think about legal restrictions to automation, and could those become more important as we face this increased market pressure to automate? Derek Chang, who's at the Windfall Trust now but was at Convergence Analysis, or Convergence Research, one of those. I think there are a lot of things with similar names in this space. Derek has a really good piece on what jobs are likely to be more and less resilient to automation. And there are some of the ones that you expect: obviously things like physical labor, as it's more resistant right now. And I think there was a story for, you know, 50 years that automation hits physical labor first and mental labor second. 
And actually we're seeing the exact opposite, given the way we're making progress in capabilities. I think your judge point is actually quite interesting to me, and I think it's correct. The jobs that have strong legal protections are going to be harder to automate. Now, of course, that doesn't mean the people who are in those jobs aren't going to automate their own work. And this is, you know, both an example of opportunity here and also an example of some sort of gradual disempowerment, where, like, you just automate away to a generic model that, you know, makes decisions on your behalf. I think it'd be a bad world, potentially, if every judge was using the same AI model to make the same decisions. And great, there is a human judge, but it's the same prompt, same output. At the very least, you'd want some more diversity that represents the actual beliefs, feelings, and understandings that the judge holds. Other roles that I think make sense to talk about here: lawyers, kind of. I think the lawyers who are at the partner level are going to be very easy to not automate. Paralegals are a different story. And I think, like, entry level law work is an interesting one here, because of course your first year lawyers who've just been hired, their job is mostly grunt work. And if a firm can hire half as many of them, it might be the case that on paper it's hard to automate lawyers, but the law firms who have lawyers working there automate their own work to such a degree that either A, you get an abundance of new law firms arising, or B, larger ones continue to accumulate capital without hiring new people. And I think an important question for what happens next is, at that moment of initial automation, where a whole lot of entry level jobs get cut and hiring starts to be reduced, what happens next? Is it A, that large firms continue to grow and, like, monopolize their industries, or B, that we get an abundance of smaller firms that allow for more diverse economic output? 
We are very, you know, Rudolf and I are much more excited about that second world than that first one, the one where this creates a bunch of opportunity. But I don't think it happens by default. I think we have a lot of work to do to get there. Yeah, yeah. It's actually an interesting point that you could see a job such as being a judge staying and not being automated, but in practice being automated, because the judge is using an AI model to make educated guesses about cases. And so that would be a way for society to maintain the formal structures we have today without actually thinking about which functions in society we're interested in automating. And so I think that would be quite a bad situation to end up in, because then we haven't actually grappled with the question about whether we want to outsource, kind of, the profession of being a legal judge to AI. Yeah, exactly. And I think one of my real concerns there is, again, that same model. If everyone's using GPT-7 and they're calling that thing in to do all of their judge work, then whatever flaw exists in GPT-7, that's now your judge. And I think my concern isn't just have we automated the task, but with what information are we automating it? Yeah. We also have perhaps another barrier to automation: judgment in a broader sense, and taste. So, for example, you can have hundreds of AI models generate whatever you want, whatever piece of writing or imagery you want, but judging what is actually interesting to people is something that's perhaps more difficult to automate. Do you think we might remain kind of employed because we have human judgment and because we have taste, or do you think that's ultimately also automatable? So it really depends on the pace and progress of capabilities and exactly what we aim for. I am much more excited about a world where that is a strong, durable human advantage, that diversity of taste. One example here. Are you familiar with Nomads and Vagabonds? 
He's an artist on Twitter. He actually did the art for the Intelligence Curse and the art for Workshop Labs. My understanding, after working with him a bunch, is he takes a Stable Diffusion model and fine-tunes it on his own work and the kind of work that he's aiming for. He's gotten very, very good at prompting it, and he produces these absolutely brilliant results. I just cannot get that kind of result out of a model. I don't have the taste for it. I don't know what kind of data should be going in in the first place. I don't know how to write my prompts like he does. And I've worked with him before, because obviously we worked on the Intelligence Curse together, and I know he gets hundreds of outputs and yet he releases a very select few. I think that's a fantastic example of how you could use AI to be an exceptional tastemaker. I think his judgment is really exceptional there. It's still his work going in and his work going out. And because of this new medium that he's using, it's been one of the best examples I've seen of an artist fully embracing new technology while still maintaining their own distinct style and taste. I don't think anyone could look at the art that he's outputting and say it's anyone's but his own. That is one of the things that I'm really excited about moving the technology towards. But I don't think that's the goal of the major companies. Again, you know, this definition that OpenAI uses of AGI is predicated on doing most economically valuable human work. That is a very different game than, oh, we're going to do some economically valuable work, but it's all going to be tools in your hand that are going to allow you to change and shape the world. It's a different ballgame to do all of it versus to do some of it. And the target right now is total automation. It's a very, very different outcome. Yeah. One barrier to automation that you mentioned in the essay series is local and tacit knowledge. 
And this would be knowledge that's spread out, that's difficult to formalize in a way that you can train models on it. And it's knowledge that's perhaps shifting constantly, and so it intersects with taste and judgment in a sense. Yeah. Is this local and tacit knowledge a way for us to remain relevant? So this is part of our belief at Workshop Labs. If I summarized our thesis in two sentences, it's that we believe that the bottleneck to long term AI progress runs through high quality data, specifically data on tacit knowledge and local information. That's both the skills that you accrue throughout doing the things that you do, which are really hard to digitize, not because it's impossible to digitize them, but because it's hard to know where to get them, because you have them. And second, local information: the kind of things that you see around you, the opportunities that you can spot because you are an embodied person with access to real time information about everything in your sphere right now. The labs really want this data. It's why there's, you know, a rush to integrate into your browser. It's why there's a rush to build these bespoke RL environments where an expert gets involved in helping to create a model that's really good at this one task. But you have a distinct advantage, which is that right now you have that data. The kind of data that is valuable to AI progress is in your pocket and on your laptop, and it's in your day to day life. So our proposition is, why don't we take that data and put it to use for you, entirely privately, so that you don't have to trust us? We simply can't train a model on it and sell the data to your boss. We can't train a larger model to automate you. But we can take an existing model and dramatically tune it towards your work, lock it down so that only you can use it, and let you put it to work. 
I think you should have control over the tools that augment you, and you should reap the benefits of the data that already exists in your world. And that's what we're aiming to do here at Workshop. Yeah. I actually think you could see a future in which there's a tension between leadership at a company and the workers at a company, where the workers are unwilling to give up their tacit and local knowledge to a model for that model to train on. And company leadership might be quite interested in gathering that data and training on it so that they can reduce labor costs. So is that perhaps the new tension in the economy? So I think that's one of the tensions. But one thing that I think people oftentimes forget is that 50% of Americans work at a small or medium sized business. These are not the kinds of companies that have hundreds of people from which they can mine surface level data. These are the kinds of companies where most people on the team are doing something that actually matters, as in, if they didn't show up for work, something wouldn't work. And because of that, they have lots of specific information about their processes that is really important. I think the outcome I'm excited by is one where AI shifts the direction away from extremely large companies. Because look, candidly, a lot of those tasks are automatable today, but humans retain this advantage, or are able to put to use their existing advantages with that embodied experience, and are able to train models that can help them compete much faster and better, creating an explosion of small companies and small enterprises that really understand what's going on locally and ultimately help break that efficiency gap we usually see, where large companies are more efficient because of their scale, because we can put so much intelligence to work for the average person. 
But I think this really means that those important things that make you competitive just shouldn't be given away. I'm a strong believer that data is kind of the new Social Security number, and I wrote a piece about this a while back. The thing that you got for caring about privacy in 2015, candidly, was worse ads. There are some exceptions, right? Dissidents obviously need to care about privacy. People in authoritarian countries who are talking bad about the government need to care about this. But for the vast majority of people in the vast majority of cases, you got worse ads. I think in the next 10 years, if you aren't careful with that proprietary info, if you say, all right, Lab A, I'm going to give you everything in my life to get moderately better ChatGPT results, and they don't lock this down for you and they don't take extreme care to make sure they aren't going to train on it, you are one button push away from having someone hoover up that data, sell it to the highest bidder, and use it to automate you out of the economy. That is a much different situation for the value of your data. And I think people would do a whole lot better if they'd start caring about that soon. I don't think we're there quite yet, but part of the reason that we care so much about privacy at Workshop is because we are aiming at creating a solution that is able to guarantee these things, so that we can't use that data to automate you. On a societal level, what you might get from handing over your tacit knowledge is a slightly better AI model. But on a personal level, if you're a maths PhD student on a low salary, and you might get offered hundreds of dollars per proof that you provide with a step by step solution to train a model on, that is quite an economic incentive. Do you think we as a society will be able to overcome this incentive to give up our data when the individual incentive is so strong?
This is part of the arms race, and it's why we are laser focused on delivering models that aren't just kind of okay in private, but are better at your existing work than an off the shelf model, because of the data that they have, and because of this, your work improves. I don't think it's the case that you can win this game by walking in and saying, look, we have worse tools and we can't pay you, but don't worry, it's private. People don't make decisions like that. The answer has got to be that the default tool that you want to use cares about what's going on here. And I think Apple is a fantastic example here, where Apple at its bones is what I would call a privacy second company. For very few people is the selling point of Apple that, oh, this thing is entirely private. But Apple understands that, especially in the United States, they are the infrastructure with which almost all modern communication happens. And so they understand they have a responsibility to protect user privacy. And so, unlike many other companies, they have locked everything down to ensure that your messages are private, that your phone calls are private, that your interactions are private, that your device doesn't get a virus. And they've gone through painstaking efforts so that you know that device is always reliable and always works for you. Anthony Aguirre at FLI has a paper on loyal AI assistants, and I know he talks about it in Keep the Future Human as well. But you have got to know that the model that is helping organize and orchestrate your life works for you, not for someone else. And that means it has to be good at working for you, and it has to be verifiably working for you. And I think that's how we plan on overcoming some of these incentives. I don't think the labs are going to pay every single human on earth a couple hundred dollars to gather up all their data.
And I think that might be kind of the scale of what they'd need to do to actually beat this kind of incentive. So I think by delivering an actually better experience for users, and then secondly layering on extraordinary protections here, we can both serve customers well and fulfill our impact. How would we guarantee that the data that I'm providing remains private? Is there a way to do that without just trusting Workshop Labs? So I'll have more to preview on this soon, once we launch here in September and October, with a couple of blog posts that I think will walk through what we're working on here. What I can say for now is that, as an industry, there are now increasingly ways to do this. You could do things like encrypting all information in transit, decrypting it only within what we call a trusted execution environment, using Nvidia secure enclaves, and then attesting to the code that is running so that you can see that nothing is being extracted from it. You could store the weights of a model, for example, also encrypted. Got it, got it. Hey, we'll continue our interview in a moment after a word from our sponsors. If you're finding value in the Cognitive Revolution, I think you'd also enjoy Agents of Scale, a new podcast about AI transformation hosted by Zapier CEO Wade Foster. Each episode features a candid conversation with a C suite leader from companies including Intercom, Replit, Superhuman, Airtable, and Box who's leading AI across their organization, turning early experiments into lasting change. We recently cross posted an episode that Wade did with 1mind founder and CEO Amanda Kahlow about AI led sales, and I also particularly enjoyed his conversation with Jon Noronha, Chief Product Officer of AI product pioneer and recently minted double unicorn Gamma. From mindset shifts to automation breakthroughs, Agents of Scale tells the stories behind the enterprise AI wave.
Subscribe to Agents of Scale wherever you get your podcasts. Being an entrepreneur, I can say from personal experience, can be an intimidating and at times lonely experience. There are so many jobs to be done and often nobody to turn to when things go wrong. That's just one of many reasons that founders absolutely must choose their technology platforms carefully. Pick the right one, and the technology can play important roles for you. Pick the wrong one, and you might find yourself fighting fires alone. In the e-commerce space, of course, there's never been a better platform than Shopify. Shopify is the commerce platform behind millions of businesses around the world and 10% of all e-commerce in the United States, from household names like Mattel and Gymshark to brands just getting started. With hundreds of ready to use templates, Shopify helps you build a beautiful online store to match your brand's style, just as if you had your own design studio, with helpful AI tools that write product descriptions, page headlines, and even enhance your product photography. It's like you have your own content team. And with the ability to easily create email and social media campaigns, you can reach your customers wherever they're scrolling or strolling, just as if you had a full marketing department behind you. Best yet, Shopify is your commerce expert, with world class expertise in everything from managing inventory to international shipping to processing returns and beyond. If you're ready to sell, you're ready for Shopify. Turn your big business idea into cha-ching with Shopify on your side. Sign up for your $1 per month trial and start selling today at shopify.com/cognitive. Visit shopify.com/cognitive once more. That's shopify.com/cognitive. If we move back to the intelligence curse for a bit here: you mentioned decreasing social mobility as an indicator of the intelligence curse happening.
Perhaps you could sketch out what a bad scenario looks like here. What does it look like if we have a more static society with lower social mobility, where capital is the main driver of progress, but that progress is not made by a set of diverse actors, it's made by companies that are larger and larger? Yeah, what does that kind of society look like? So I think there are a couple of examples here, but I'll just kind of tell the story from the perspective of one guy. Let's say I'm a college graduate in the year 2030. You know, I've graduated from college, I'm struggling to get a job. I for some reason studied CS. I'm not sure why I did that, but, you know, in 2026 it wasn't obvious what was going to happen. So I've woken up in 2030 and I cannot find an entry level job. I also couldn't find internships, maybe like one or two companies here and there. But on the whole, it's just way cheaper not to get me involved. Okay, so I can't get a job. I'm relying on unemployment, which is increasingly strained, because I'm not the only undergraduate who can't get a job. A whole lot of undergraduates can't get a job. Meanwhile, you know, Microsoft has published record earnings because they've been able to halve their expenditure on employees and double their output. This is exciting for a lot of reasons. But remember that in the US, corporate taxes are a very small amount of the federal budget. 50% of federal tax revenue comes from income tax. So we have a smaller and shrinking income tax base, because fewer people are making that income, while companies are posting record profits. And of course, they have the kind of money it takes to evade those taxes as well. So our social safety nets are increasingly strained. Unrest is increasingly popular. People are very upset. They have a lot of time on their hands. The thing they do is they protest, or they get very upset.
And the result of this is our social safety nets just stop working. They're not able to keep up with the strain. We have to reduce payments, we have to make fiscal cuts, in the name of tightening our belts and pulling ourselves up by our bootstraps. And in 2040, a whole lot of people just aren't employed. And there was a battle, a political debate, over what we would do, and we passed some sort of UBI for a while, but that UBI wasn't sufficient for the kind of standard of life that you would expect, and it's increasingly unstable. And of course, now we have a couple of companies who are really, really powerful. And those couple of companies are increasingly realizing that they'd be better off if governments weren't getting in the way all the time asking for things. And so if you look at the Tom Davidson paper about how an AI, or an individual armed with AIs, could take power in a coup, you've got increasing social unrest, instability in institutions. This is a ripe environment for someone to come in and disrupt an existing order. Maybe that happens democratically, maybe it happens non democratically, but the result is that suddenly not only are you less economically safe, but you're also in a situation where the rights you took for granted, which would let you restore your economic stability, are now out of grasp. They're harder for you to get. That doesn't sound so great. Isn't it the case, with companies, say Microsoft and Google and Nvidia and perhaps OpenAI and so on, that there will be fierce competition in providing products for consumers at the very top? So even if you have the main drivers of the economy being capital deployed by massive companies, you would see innovation from competition, and you would see better products and services. Yeah. Potentially, one of the ways that you can break the intelligence curse, or one of the necessary components, is commodifying the intelligence layer.
If it is the case that one or two or three players have total access, a monopoly on intelligence, it's then the case that they can continue to raise the rents. I saw a tweet recently that said something like: if you are a wrapper around a commodity, you're a landlord, and if you are a wrapper around a monopoly, you are a renter, and you are totally at the mercy of the monopoly to continue to set your rates. And so a world in which there's prolific cheap intelligence, and then your job is to specialize it into the thing that you do, that's a better world to be in. But I think, you know, the goal of the labs is to get this recursive self improvement and just take off here. And in that kind of scenario, that's a very different game. That's one player that's won, or a couple of players that have won. Now, I don't think commoditizing intelligence fixes the problem entirely, but I do think it's a necessary precondition to breaking this intelligence curse. Yeah, you mentioned Microsoft posting record profits and so on. Perhaps a naive question here is to ask: who are they selling to in this world? If the college graduate doesn't have a job, who are they actually selling to? Which services and products are they providing? So I feel bad that I'm picking on poor Microsoft here. I don't know if they're the right people to pick on. I don't mean it, Microsoft, it's not you specifically. I just picked the first tech company that came to mind. But let's go a bit broader. Who are the companies selling to? I think we talk about this in the piece, but the core answer here is probably: to each other. The B2B environment is quite large, and it is not necessarily true that there has to be what we now call the consumer level in a technology space. A whole lot of companies get by just fine selling to each other.
I think you can expect that to continue to occur across a variety of areas, especially as the core fundamentals become more important. These are primarily land, compute, energy, intelligence. And the more important those get, the more important the businesses that can provide them get. Of course, governments are among their possible clients. But I think, you know, it is not the case that you have to have this vibrant consumer style economy that we have today. I think this world has way fewer Starbucks, sorry to pick on them, but I think it's got way fewer cafes and way fewer phone cases. But it's probably got a whole lot more data centers, and you can see labs trading with each other, AIs trading with each other, providers trading with each other, in this increasingly closed loop. Yeah, the intelligence curse is kind of a riff on the resource curse. Are there any lessons we can take from how countries have dealt with the resource curse in trying to deal with the intelligence curse? Yeah, so the resource curse is not guaranteed doom. It's a curse, but it's breakable. And there are of course great examples of countries that did break it. The obvious one here is Norway. Norway is a state that has a sovereign wealth fund. It is fueled by oil revenues. It does have a real economy on top of that. And I think one of the things to be careful about in this comparison is that, of course, oil is not a one to one replacement for all human labor. It's a very tempting investment target if you already have a lot of it, but you still need humans somewhere in the chain, and you can get a more diverse economy. More diverse economies tend to win against these oil states in direct comparisons. But it's a very tempting curse. So what happened in Norway? Norway is, by many, many metrics, one of the best countries in the world to live in. Excellent education, excellent social services, really stable government, really democratic government. How does this happen?
Well, we used some of the quotes from officials at the time, and we looked at some of the case studies in the paper. But a core thing here is Norway had extremely resilient institutions before the resource curse was possible, before they discovered oil. They had an excellent civil service that was really good at understanding what to do when this happened, and a very low corruption society. The question for me is: do we think we currently live in a world with excellent institutions and exceptionally low corruption? I don't think so. I think basically every American that has looked at our government has said something here is fundamentally broken, and it's been that way for decades. And it seems like every time we think we get a reformer in, what we get is increasing brokenness. I don't think we're at the point right now where we have selfless members of Congress and extremely resilient institutions. And I think what it's going to take to withstand the pressures, if you actually get total automation, is stronger. It's more resilience than you would need to withstand the kind of oil pressures here. Of course, another thing going for Norway is that there is still room for a dynamic human economy on top of oil, and so you can reinvest that money. Saudi Arabia is a great example of this, where, as Saudi Arabian officials have become increasingly concerned that we are near peak oil and that renewable energy is going to be increasingly the way of the future, they are trying to invest their petrodollars into creating a more sustainable, and not uppercase S environmentally sustainable, just a more dynamic economy that attracts large businesses. Dubai has done this as well. Now, of course, there's an important question here.
While the economics are now starting to move towards democratizing, you'll notice that these states I'm mentioning here are sometimes cited for high quality of life for some people. Saudi and the UAE have high quality of life for certain kinds of people, for people that are economically important to the state. But of course, they also rely on an underclass. And in Saudi's case, you know, I wouldn't say it's the beacon of gender equality in the world. For half the population, I wouldn't say those freedoms are well afforded. Now, as Saudi Arabia in particular, I want to zone in on them, has moved towards this more diverse economy, they've also concurrently started liberalizing their gender relations. I mean, under MBS, there's been, I'm not going to call it heaven or anything, but there's been a real effort to somewhat liberalize this relationship in an otherwise pretty conservative society. It is not an accident that these things are happening concurrently. And I think one of the things you should be wary of is arguments that, well, we're going to centralize all power in the hands of a couple of actors, we're going to automate the entire economy, but the incentives are going to exist for the state to really care about you. The example we have of a state where this is true is Norway. In other states, if you're not economically useful, it's a bit harder of a sell. It's not always true. There are exceptions. We talk about a case study in Oman where there was a credible threat of revolution, and this helped force the state to dole out its rents. The argument is that the rentiers would like to have all of the rents, but they also really want to remain in power and continue to get some rent. And so if it's cheaper for them to capitulate than to lose, well, then that's an easy out for them.
But of course, when we're talking about AI that can automate every job, we're also talking about the automation of repression and increasing surveillance. As we make things more legible, it's easier for governments to trend toward this despotic realm where they can also put down dissent and prevent the kinds of forces that would otherwise force states to capitulate. So by increasing the state's abilities to such a dramatic degree, you have this moment where states are very weak, and then, once they're able to automate repression, they're suddenly very strong. In both outcomes, you risk losing the ability for democratic processes to work. Do you think we'll be able to shape the future economy using our culture, using our values? Or do you think that what matters most in the end is the underlying features of AI as a technology and the economic incentives that it causes? Yeah, incentives are a powerful thing, but they are not predetermined. One, they're not predetermined, and two, they're not ironclad. We have so many examples in history of great people defying incentives. I mean, I can just rattle them off. Washington deciding to step down, becoming the great Cincinnatus and not making himself king, is one obvious example here, where a leader looked at the incentives, looked at the ability to gain power, and said no. And oftentimes, I think one of the ways to reconcile structural views of history and great man views of history is that these structural forces set up the incentives, but individuals can then defy or alter those incentives and make different choices. Incentives aren't law, but they are really powerful. And you want to align your incentives so that you're not hoping, every time a bad thing could happen, that the person in power ignores every incentive in front of them, totally reliant on their character. We talk about this in the paper.
We said that economic forces are a predominant force here, a very powerful force, and that societies are extremely exposed to these incentives. But there are other things that shape their values as well. Cultural forces are very powerful, and oftentimes countries, or societies, make decisions in favor of their culture, decisions that are culturally good for them even if they're economically bad. The existing power dynamics that we have also enable this. One example here is Brexit, an obvious example of a country's population choosing a thing that is probably against their economic interest for a different value set. And I'm not commenting on the merits of that debate. I'm simply saying that there was a strong economic argument on one side and an argument on sovereignty on the other, and that sovereignty argument won the public even if it failed to persuade their elites. I'm not saying that every outcome should be like Brexit, but I'm saying that this is the kind of thing where you actually can make different trade offs. But of course, you know, there's that very famous quote, I think it's Charlie Munger, that says: show me the incentive and I'll show you the outcome. And I think, you know, if you have the opportunity to move those incentives in a positive direction for humanity, you really should. One way to do this is to think about which technologies we want to develop first and which technologies we want our most talented people to work on. We can talk about differential technological development. So if you look at the landscape as it is now, which technologies are currently undervalued? Where should we be pushing, such that we can change the incentives that the technologies create? So I'm biased, but my company seems to be doing a pretty good thing here. And obviously, you know, we're not in stealth. We've announced that we exist, we've had a one pager on what we're doing, but no one's seen the thing we're working on yet.
You know, this fall we're very excited to roll that out and really show people what we're working on here. But I think there are a couple of categories. We walk through three in the piece. One, and this is kind of counterintuitive: we talk a lot about these defensive acceleration technologies, the idea that you actually have to mitigate AI's catastrophic risks in order to get over this barrier. And the reason for that is because AI's catastrophic risks provide a very good reason to centralize AI in the hands of a couple of people. It is true that by default, AI could be extremely dangerous. It could be extremely powerful and extremely dangerous. It could make it easier for actors to develop bioweapons. It can make it easier for random people to do bad things. And governments and companies are going to use those as credible arguments, real arguments, to centralize intelligence and decommoditize it, to have a couple of actors who have dominant control over it. And of course, the downside of that is we know that the more we centralize this into the hands of a couple of people, the more it looks like a monopoly instead of a commodity, the worse off regular people are likely to be in the long run. So what we want to do instead here is de risk the technology fundamentally. If we're going to build it, and I'm not saying that we should, but if we're going to build it, you should make sure that it's safe. And I think there's been this long running argument in the AI safety space that doing this is not possible or a waste of time. And we're increasingly seeing interesting results here that indicate maybe actually there's something to be done.
Kyle O'Brien had a paper out a couple of days ago talking about how if you just remove, you know, biological materials information from the training data when you do pre training, you end up with models that are somewhat tamper resistant, even when you try to reintroduce that information later in fine tuning. That is the kind of research you want to be seeing a whole lot more of right now. You want to find the kind of research that means that if we develop it, it doesn't have to be in the hands of one actor forever, that one guy is not declared the total controller of intelligence. And then, of course, you really want to work on technology that helps democratize this tech with humans still in control. Again, part of what we're working on here is trying to find uses for these last mile automation tasks, taking advantage of an individual's data, finding ways to make that even more competitive for them, even as there are larger models. This sometimes looks like modifying an existing model. It might look like doing something entirely different. But it means finding ways to put existing human data to use, so that the tools that you control are the ones that are helping you do better, and so that they don't disempower you. You also want to work on the kinds of tech that could help strengthen democracies. I think Audrey Tang's vision here is quite inspiring. So I think those are kind of the three buckets: tech that makes it possible so that if we build it, it's going to be diffuse as opposed to a monopoly; tech that keeps humans firmly in charge; and technology that is able to help strengthen our democracies, such that if we can't prevent AI from being a monopoly, we have fallback options. One way to close this loop here is social media. I think there are two problems in social media, or two approaches, and I think you should take them both concurrently.
One approach, the common one, is to say that social media is super addictive and so the government should regulate it in some way. The government should restrict certain kinds of features, or age-gate it, or something like this. An approach that is oftentimes less appreciated, and is absolutely necessary because you can only regulate things so much, is to also introduce technological alternatives. There has been a massive rise of screen time apps, for example. Opal is one of them, where you download a thing and it helps you reclaim your focus, because a whole lot of algorithms are pointed at you and now you need something pointed outwards. We're trying to build the thing that's pointed outwards. So many people are trying to take your job or take you out of the economy, and we think we can build tools to keep you in it. And I think if we're right, that could be one of the largest markets in history. Because if you are building the tools that help keep people involved, people are going to want to stay involved in the future. And I think that's a pretty powerful tool to be building, both from an impact perspective and from a market perspective. We're facing this tension between trying to control the downsides of AI by centralizing it, and then spreading the upside by giving as many people as possible access to the models. So one answer to this tension is just to say that we need to open source AI fully. What do you think about that vision, and how does it interface with what you're talking about? So I am probably more pro open source than, I think, the average person on the podcast. And I think part of this is because of this real fear of monopolization. I think it is the case that if open weights models are not a core part of the future, you can increasingly charge these wild rents.
I think there are a couple of people who have strong incentives to build them. So I don't think it's the case that, you know, they're going to fall behind in some near future. And I also think there's this very pervasive argument, especially within the AI safety community, that open weights models are always going to be behind. It is absolutely true that in a hard takeoff scenario, where you just foom and go straight to superintelligence, that's going to be the case. Someone's going to win that race. That's game over. In basically every other scenario, what we have seen is the exact opposite. I remember hearing a couple of years ago that there's no way that open weights models could catch up, they're too far behind, and especially that there's no way that China could catch up, it's just impossible. Chinese open weights models right now are like six months behind the frontier, and some of them I think are maybe even further ahead. Kimi K2, for example, is a really excellent English writing model. I would wager it's probably the state of the art at that. This does not look like a world where open weights models are slowing down. The gap continues to close, even for providers that have less access to high quality compute. There's something going on in both the way in which we train them and the data that we're using that still provides advantages, such that compute isn't everything. And so the argument that I oftentimes hear, that open weights can't catch up, that it's not a core part of the story, I just don't think is true. I think if you're taking AI safety seriously, you're going to have to focus on making open weights models safe, because open weights models are going to be a reality and they're going to be quite powerful. How do we do that, though?
I guess that's the main worry with open weights models: if we put something out there that's open weights, we can't then take it back. So we don't have this feedback loop of testing something, pulling it back, and then perhaps putting a more limited version of that model out there. So how do we deal with a technology where, if we release it, that capability suite is now out there indefinitely? Yeah, this is where, again, I'll cite Kyle O'Brien's work as quite important: the kinds of work you want to do here to create tamper resistant open weights models, such that reintroducing the information by trying to tune them in a certain way breaks them or doesn't work. I don't want to say a lot. I've talked with Kyle a bunch, and I know some of his work is forthcoming, so I don't want to jump the gun on anything here. But as a separate note, the kind of holy grail here is a model that, when you try to reintroduce this information, just stops working or breaks because of something that they've done. I don't want to preempt any announcements. I know there are people working on this across a broad variety of sectors, but those are the kinds of safety innovations that I think are extremely important and that expand our option space. If you are someone who thinks doom is really likely, the best thing to do is not to just continue evaluating the model to see if we're getting closer. Because if we're getting closer, we're going to actually have to do something about it. And I think, from a technical safety perspective, right now you're betting on this catastrophic warning shot, which I'm not convinced actually slows anything down. We have a footnote, like a seven paragraph footnote, in the Intelligence Curse that we couldn't fit in the main text.
I footnoted it, talking about how in a whole lot of scenarios a warning shot actually just increases the speed at which AI progress happens, because somebody gets spooked and the response is: we need better defenses, faster. So if you're counting on us evaluating the thing, seeing that it's dangerous, and then stopping building it, best of luck. I don't think that is an extremely tractable approach. I think more investment is better spent by a whole lot of extremely talented technical experts on actually building out the capabilities that are required to make even open weights models tamper resistant and safe. And I think this is genuinely achievable. I don't think this is an intractable agenda. We have seen more progress on it than I expected to see, and as people have chipped away at it, as papers have made it clear that this could be possible, more and more people are starting to get excited about it. That's more of the direction I want to go here. If we don't have the option of controlling AI through a central authority, it seems to me that we are somewhat at the mercy of how the technology just turns out to be. So if it is the case that we can limit what models can output, and perhaps have models stop if you try to use them to create biological threats, say, that's great. But what about the next possible danger, and the one after that? If we don't have a way to control AI as at least a backup option, are we just kind of at the mercy of how the technology turns out to work? Yeah, this is one of the concerns. We are at the mercy of how fast we can rush our defenses. But that means that rushing our defenses is perhaps one of the most important things that we could be doing. And in other domains we recognize this. On pandemic preparedness, for example, we can't ban pandemics. It's not possible. Pandemics are always a background risk throughout the world. 
And yet this means that our response can't be to do nothing. Our response has to be: we know this is a possibility, this is on our threat map, so what's everything we can do to build the kind of Swiss cheese model of defense for pandemics? And I think that approach is extremely relevant with AI dangers. One other thing that I'd say here is that the kinds of proposals I'm talking about, the ones that I'm explicitly opposing, are those that try to do this controlled superintelligence explosion, the kinds where we say: all right, 12 people running after AGI is too many, one guy is going to do it, and we're going to monitor him every step of the way. What that policy results in is one person, one body, one entity having a unilateral advantage over everyone else forever, if they actually achieve this kind of hard takeoff. And then you are just at the mercy of the people who control the weights. Aligned superintelligence in the hands of one person makes that person a de facto dictator unless they choose not to be. And that is not a good outcome. Now there's a separate category of policies which, you know, I'm not necessarily supporting, this is not me endorsing these, but I don't think they unlock the kind of intelligence curse style risks. And that's if we just don't build it. I think you can very consistently say the intelligence curse is real, and therefore I'm going to advocate for never building systems that can replace humans. I don't know how tractable that policy is, and I'm not sure it's the right approach, but I don't think "no one gets it" unlocks the risks. The concern that I have is that a whole lot of well meaning people are going after "one guy gets it." And I think the much more likely outcome is not between zero and one on extremely powerful AI, it's between one and many. And if those are my two options, man, I'm definitely for the latter over the former. 
And I think the latter is a world that you can move towards by spreading AI capabilities. When I read the founding essays of OpenAI, that seems to be the vision that they had. They wanted to make sure that Google didn't have a monopoly on AI technology, and they wanted to empower everyone with AI models. That vision seems to have kind of degraded over time. How do you make sure that doesn't happen to the vision you have for Workshop Labs? It is one of the things I think about the most, because the road to hell is paved with good intentions. It is paved with people who are working on things that ultimately end up working against their cause. Now, there are a couple of things here. There's the basic legal stuff: we're a public benefit corp with a fiduciary mission not to automate people. In lawyer speak it's "enhancing economic opportunity," but that is explicitly our goal. This is instead of the generic thing of saying we want to make sure AI benefits people, where it's like, okay, but what does that mean? Does that mean we're going to put it in charge and then we think it's going to benefit people, or does that mean we are going to try to do a certain thing? In our case, it's this economic empowerment argument. It is our mission to make sure that AI actually meaningfully increases your power in the economy rather than decreasing it. I'm also a believer that personnel is policy, and so the kinds of people that you bring onto the team will push you in certain directions. Our hiring process is laser focused on mission alignment. And it helps that we have been incredibly public. We kind of stumbled on this company by accident. We had worked on a bunch of research in the area quite publicly, and then realized that we had to propose a technical agenda and wanted to go after parts of it ourselves. 
But of course, there's also the broader question of what you do technically. This is why we are so committed to launching on day one with extremely strong privacy guarantees. Because you shouldn't trust me that if you hand all of your data to me, I'm going to be a good steward of it. What you should instead know is that there's literally nothing I can do to use it in a nefarious way. That's a much more powerful guarantee. It's not this trust-but-verify thing; I can demonstrate to you that we have taken every measure humanly possible to prevent ourselves from training a larger model on your data, so that every piece of data we get from you is used for your benefit, and we can't use it against you or sell it to your boss. And I think that's different than a promise. We're trying to give an actual guarantee here, such that we can't use the data in this way. That presents lots of novel challenges for our team, but I think it also presents some novel opportunities, both in how we position ourselves and in the kinds of things that we can do to make your experience better as opposed to worse. We want these models to genuinely be aligned to you and loyal to you alone. And we're going to keep that vision centered as we continue to work on this. It is really one of the big technical, perhaps even political, questions of our time. We have AI models that are aligned to certain interests. There's a whole separate question of whether we can even align them to certain interests, and that, in my opinion, is an unsolved problem. But they happen to have certain goals, certain preferences. And those preferences are a mix of what the companies are interested in, what governments are interested in, and what end users are interested in, and the balance between which preferences should be strongest in the model. 
That is a very interesting question, and something where I think there's a lot of work to be done. For example, before too long I expect us to have personal agents that can do our email and our calendar for us. Is that agent working on my behalf when I ask it to book a hotel for me, or is there perhaps a corporate preference to book a certain hotel that OpenAI might have an agreement with, something like that? You could quite easily see the incentives of the model, or the preferences of the model, becoming muddled between what the end user wants and what the companies are interested in. Do you see a principled way to solve this? Or is this just like any other product, where the company selling the product is interested in something and the consumer is interested in somewhat the same thing, but the preference sets do not perfectly overlap? I think if you talk to a model and you ask it for something, it should do one of two things: it should either answer in your interest, or tell you when it's not. If we're going to go down the rabbit hole of LLM monetization via advertisements, it should be exceptionally clear what is an advertisement and what isn't. I think it started this way in search, and it's less so now. But even still, if you're searching something on Google and you type something in, you can see which things are ads. This should be really obvious. Of course, I happen to believe, and it's what we're building, that these things should be loyal to your interests. That OpenAI or, you know, Anthropic or us shouldn't sign some sort of a deal and then disguise it or nefariously slip in a "hey, by the way, here's this hotel you should be looking at." It's a really bad situation to be in if your model doesn't work for you. And I think this is just true as a consumer. 
Like, you want to know that when you are asking something for advice, you are getting the kinds of advice, the kinds of information, the kinds of truths that you would give to a friend because you genuinely care about them. What makes these tools useful is that they work on your behalf. There's a Black Mirror episode recently that really stuck with me. It was in the new season, where a woman has a brain transplant and they upload half of her brain to the cloud. And this is great because she's still alive, but every couple of hours she turns off and gives an advertising pitch about something. She has no recollection of giving the advertising pitch; she wakes back up and doesn't even know what's happened. She only finds out because other people tell her: hey, why did you just bring up this travel site in the middle of your lecture? And, you know, it's great that the technology has enabled her to do this really cool thing. She's still alive, she's able to live her life. Except, of course, she suddenly needs to give a sponsored ad on something, or she goes out of the coverage area. And they have this monopoly control over her, because, you know, you don't have competing vendors for your brain upload. You've got half your brain here, half of the processing power in the cloud, and only one guy's got that chip. And so what ends up happening is they start her on a very cheap plan, it's only a few hundred bucks a month, and you still get to be alive. And then they say, oh, we have this deluxe plan now, and you can go outside the coverage area if you buy the deluxe plan. And it's like, oh, you're now on our freemium tier, and if you just upgrade a little bit more, you can get rid of the advertisements. 
And suddenly the thing that made your life so much better is now a massive hindrance to your quality of life, because one guy has total control and gets to jack up the rents as they see fit. That is the kind of scenario that we're trying to avoid. Part of this comes through democratization of the technology. Part of this comes through ensuring that these things are actually loyal to you. And my expectation here is that in the future, if we get to the good future, everyone has got an agent that's aligned to them, that advocates for their interests, that they know is working for them. One thing I'll add here, to kind of close the loop, is that one of the places I really agree with Sam Altman is this concept of AI privilege: the idea that if you're giving this much information to a system, it probably shouldn't be used against you. And this is different than other technologies, so I'm probably someone who'd advocate for more privilege in technologies rather than less, even the status quo ones. But if you are constantly interacting with this thing and it's helping organize your life, that's a powerful tool in the hands of someone who wants to be nefarious to you, who wants to understand your life, who wants to interrogate it instead of you. And because it's a chatbot, it's not going to know when it should reserve its rights. Maybe it could, but it may not know when it should, you know, invoke its Fifth Amendment rights. It's not clear it has a Fifth Amendment right right now; it probably doesn't have a right against incriminating you. And if it has that much access to your life, it probably should. That's one of the more genuinely value aligned things OpenAI has called for recently, some sort of concept like that, and I endorse it wholeheartedly. Yep, both on clearly stating when advertising is happening in model outputs, and on the privacy or AI privilege point. 
I do fear that consumer preferences are just not up for these things. If we look at social media, if we look at digital services in general, it seems to me that consumers are interested in free products that are ad supported, and companies are interested in hiding to the maximal extent what is an ad and what is not, just because it's more effective if you can't tell the difference between an ad and generic information. It's more effective when an influencer personally endorses. Yeah, when an influencer endorses a product. But that is kind of happening because they're getting paid, not because they actually like the product. Yeah, like sponsored content, things like this. Yeah, exactly, exactly. So you have those two things that we see now. Doesn't this point in the direction of the default AI future being ad supported, and being a future in which it's difficult to tell what is advertising and what is not? Yeah, no, I think that is the default future. It's why we exist. If I thought the market was going to correct itself here on its own, without requiring an insurgent actor to work on this, I wouldn't exist. If we didn't think it was required for someone to build the technology to make the future better, we would do something else. But part of this is aligning your incentives with your customers. I can't talk enough about Apple. I think this is a fantastic case study of aligning your incentives so that you're serving the right people. Where does Apple make its money? On the device they sell to you. You as a consumer have a very strong preference for that device working. And one of the places where we haven't seen this trend of injected advertisements really work is in actual personal devices, the one device you have that's your gateway to everything. 
Sure, lots of content on that device has this injected information, but you know your device works for you. And actors tried this. Amazon's Kindle had, and I think might still have, ads on the front black and white e-ink page. I'm not sure that's ever worked for anyone; it's never worked on me at the very least. But even with strong incentives, the vast majority of mobile devices don't serve you ads natively. The apps on top of them do. And I think this speaks to a very important point: sometimes you need the thing to work for you, and you need to know that it works for you. This, I think, is again a really massive market opportunity. And I think it's especially true when you're building things that have a lot of data on the user, data the user proactively hands over to help them do their job. In that kind of setting, I think users, at least in our initial conversations, are more skeptical of handing over all this data unless they know it works for them. And being the provider of the thing that people know works for them, that also delivers value to them, is a really powerful position to be in. I think a lot about companies like Apple. Yeah. Perhaps as a final topic here, we can talk about a great essay you had on how to respond to the special time we're living in. It is a time in which AI progress is moving incredibly fast, and you call for moonshots, starting a startup, say. What is it that especially young people should be looking at in these times? The default paths are closing. And this is true no matter what. You know, I wouldn't bet the house on any one intervention, right? My company could win everything, we could do everything we set out to do, and the consulting jobs are still going away. I have no interest in changing parts of this pattern; it's not our job. 
Our job is to ensure that the next iteration of the economy works for you, so that when this change is said and done, you're in a better position than ever before to achieve, as opposed to a worse one. But the economy is still going to change. Even if technology creates new jobs, if that is how we move the pendulum, being a new job creator instead of a job replacer, even that changes the nature of the economy. I think that's going to happen basically no matter what, and you're already starting to see it. The Fortune 500 company that your parents told you you've got to join when you graduate from this prestigious college, because, come on, man, we didn't pay for all that tutoring for you to do a startup or join a think tank or go to the small company no one's ever heard of? Those riskier paths are now the least risky options, because those are still opportunities for you to win. They require you to think on your feet and be bright and do well and really understand the environment around you. Those safer jobs are the first target for automation, because companies with 500,000 people on their payroll are going to want to cut some of that payroll. If you are an n-equals-one person at a company, if you do an important job that nobody can replace by virtue of your being there, you are much safer than if you do a job that a thousand other people at your company also do, because you are extremely automatable in that role. And I think that's what we're going to see. The automation of rote tasks has the opportunity to do one of two things. It can be the start of a total pyramid replacement, where we as a society decide that our goal is to replace all work and hope the next thing works out. Or it can be an opportunity for us to build an economy that is more local, that is more individual, that allows you as an outside person to have more opportunity than ever at moving in and becoming somebody. 
But that's not going to happen if you don't change your path. I think this is especially true for the classic prestige paths, people who got all straight A's and nailed their SATs and went to the right college and have only ever done the right thing according to the status quo. No matter what happens in the next 10 years, I think now is the time for these moonshots, because we know the window is still open, it's become easier than ever, and everything else looks more risky. So if you are someone who's hesitated on doing the risky thing, and Jane Street has knocked on your door and McKinsey's come calling with, look, here's this massive paycheck, come do this for a year or two, know that you are going to be on the last chopper out of Saigon. If you manage to get yourself through that, you are the last breed of consultants. That industry is dying. You are the last breed of entry-level whatever. If we can win, we are moving towards a more specialized economy, and if that's the winning play, I think you should take it no matter what happens. So a strong urge for people to take more risks during this time. I think it's more important now than ever. Luke, thanks for chatting with me. It's been really interesting. Yeah, Gus, this has been great. If you're finding value in the show, we'd appreciate it if you'd take a moment to share it with friends, post online, write a review on Apple Podcasts or Spotify, or just leave us a comment on YouTube. Of course, we always welcome your feedback, guest and topic suggestions, and sponsorship inquiries, either via our website, cognitiverevolution.ai, or by DMing me on your favorite social network. The Cognitive Revolution is part of the Turpentine Network, a network of podcasts, now part of a16z, where experts talk technology, business, economics, geopolitics, culture, and more. We're produced by AI Podcasting. 
If you're looking for podcast production help for everything from the moment you stop recording to the moment your audience starts listening, check them out and see my endorsement at aipodcasting. And thank you to everyone who listens for being part of the Cognitive Revolution.