Does Work Still Matter in the Age of AI?
This episode explores how AI will transform work and jobs, examining arguments about wealth inequality in an AI-dominated future and how roles like software engineering and product management are already evolving. The discussion covers perspectives from economists predicting extreme inequality to optimistic views that humans will create new types of valuable work, with current examples showing how AI is compressing traditional workflows and enabling more people to build custom tools.
- AI is compressing traditional product development cycles from weeks to hours, shifting bottlenecks from implementation to problem definition and knowing what to build
- The future may see everyone becoming more like product managers, using AI to build custom tools for specific problems rather than relying on generic software
- Human desire for human-created content and experiences may persist even when AI can produce objectively better alternatives
- The transition to AI-powered work feels like gaming, where people craft tools to solve specific challenges rather than waiting for perfect solutions to exist
- Wealth inequality concerns in an AI future may be overstated if abundance makes material needs irrelevant, though relative status comparisons will likely persist
"Once AI renders capital a true substitute for labor, approximately everything will eventually belong to those who are wealthiest when the transition occurs, or their heirs."
"I get the argument that this is the worst that AI will ever be, but it will also never be human, which is what humans want most of all."
"The job of a PM used to be translation. You talked to customers, synthesized their problems, wrote specs and handed them to engineers. You were the bridge between what people need and what gets built. The value was in that translation layer. That layer is compressing."
"The real change isn't that everyone becomes a programmer. It's that everyone gains the ability to shape their environment, extend their capabilities and move forward under their own control."
"In a few years we'll shift from thinking what can I buy to help me? To what can I build to help me?"
Today we are discussing one of the most unknowable but much thought about questions in and around AI, which is of course how it will change our jobs and the work that we all do. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. All right friends, quick announcements before we dive in. First of all, thank you to today's sponsors: Robots and Pencils, Landfall IP, Zencoder, and Superintelligent. To get an ad-free version of the show, go to patreon.com/aidailybrief, or you can subscribe on Apple Podcasts. If you are interested in sponsoring the show, send us a note at sponsors@aidailybrief.ai. And very briefly before we dive in: I've mentioned this a couple of times, but I have some big announcements coming up soon with the AIDB Intelligence product. If you want to learn more about that, go to aidbintel.com and you can sign up for updates. Now, this is a weekend episode, which means, of course, a long read slash big think episode. And as I mentioned last week, we are still working our way through the spate of big think essays that ended last year and began this year. For today's show, we're actually going to string excerpts of about five together, with the first being from Dwarkesh Patel and Philip Trammell, called "Capital in the 22nd Century." Now, this is an extremely long-form and dense essay, and there has been a ton of debate around it. It's brought up questions of redistribution and wealth policy and tax policy, but that's not exactly the line that I'm going to thread. In fact, we're going to focus on the parts that pick up and set the story for this post from Ben Thompson at Stratechery called "AI and the Human Condition." So let's read the first excerpt from "Capital in the 22nd Century."
Dwarkesh and Philip write: In his 2013 Capital in the 21st Century, the socialist economist Thomas Piketty argued that absent strong redistribution, economic inequality tends to increase indefinitely through the generations, at least until shocks like large wars or prodigal sons reset the clock. This is because the rich tend to save more than the poor, and because they can get higher returns on their investments. As many noted at the time, this is probably an incorrect account of the past: labor and capital complement each other. Wealthy people can keep accumulating capital, but hammers grow less valuable when there aren't enough hands to use all of them, and hands grow more valuable when hammers are plentiful. Capital accumulation thus lowers interest rates (income per unit of capital) and raises wages (income per unit of labor). This effect has tended to be strong enough that though inequality may have grown for other reasons, inequality from capital accumulation alone has been self-correcting. But in a world of advanced robotics and AI, this correction mechanism will break. That is, though Piketty was wrong about the past, he will probably be right about the future. Indeed, in some ways he may well be more right than he knew. A lot of AI wealth is being generated in private markets which only large and sophisticated investors have access to. You can't get direct exposure to xAI from your 401(k), but the Sultan of Oman can. This trend towards the privatization of returns, already ongoing and especially pronounced in the AI startup world, could well continue indefinitely. Furthermore, with full automation, the main source of catch-up growth for developing countries goes away, namely that by importing capital and know-how they rapidly make their underutilized labor more productive.
If AI is used to lock in a more stable world, or at least one in which ancestors can more fully control the wealth they leave to their descendants, let alone one in which they never die, the clock-resetting shocks could disappear. Assuming the rich do not become unprecedentedly philanthropic, a global and highly progressive tax on capital, or at least capital income, will then indeed be essentially the only way to prevent inequality from growing extreme. Without one, once AI renders capital a true substitute for labor, approximately everything will eventually belong to those who are wealthiest when the transition occurs, or their heirs. And so, here's NLW cutting in again. You can see where this would start to generate lots of debate. The argument that they pursue throughout is this idea that if AI proceeds apace as they think it might, this global and highly progressive tax will become necessary. As I said, I'm not going to read all of this piece, nor am I going to rehash all of the arguments, although it is very good fodder for a very long kind of big think policy type of discussion, and much of it played out on Twitter/X. If you go check out Dwarkesh's account, you can find a lot of that there. Where I'm interested in picking up the story is with the Ben Thompson Stratechery essay which he published earlier this week, called "AI and the Human Condition." Where Ben starts his piece is with a reflection on his own job. "Pity the paradox," he writes, "of the content producer in the age of AI." On the one hand, AI is one of the greatest gifts ever in terms of topics to cover. On the other hand, LLMs in particular are quite literally content producers. What's the point of writing analysis when ChatGPT or Gemini or Claude will deliver analysis on demand about any topic you want?
Is this one of those situations, like the early web, where the possibility of reaching everyone seemed like a boon but was actually a ticking time bomb for the viability of the traditional publishing model? Now, where Ben concludes for himself is that he's actually pretty optimistic about his own prospects. But where he concludes, and where we jump off, is this: maybe I'm doomed, but if I'm doomed, probably everyone else is too, particularly when you think about the very long run. And that's where Ben picks up the "Capital in the 22nd Century" essay. Now, one thing Ben points out is that the conversation is, admittedly, very far out there and makes a huge number of assumptions about how things are going to play out. It's an argument of the dorm room discussion variety, he says, and I don't necessarily think Dwarkesh or Philip would disagree all that much. But Ben gets crisp about the assumptions underlying their argument. He writes: part of the thinking is that once AI can create AI, it can rapidly accelerate the development of robotics as well, until robots are making robots, each generation more capable than the last, until everything humans do today, both in the digital but also the physical world, can be done better by AI. This is the world where capital drives all value and labor none, in stark contrast to the approximately 33% share of GDP that has traditionally gone to capital, with a 66% share of GDP going to labor. After all, you don't pay robots for marginal labor. You build them once. Check that: they build themselves, from materials they harvested not just here on Earth but across the galaxy, and do everything at zero marginal cost, a rate at which no human can compete. From there, however, Ben gets into his skepticism.
I get the logic of the argument, he writes, but I, perhaps once again overoptimistically, am skeptical about this being a problem, particularly one that needs to be addressed right here, right now, before the AI takeoff occurs, especially given the acute need for more capital investment at this moment in time. First, the world Patel and Trammell envision sounds like it would be pretty incredible for everyone. If AI can do everything, then it follows that everyone can have everything, from food and clothing to every service you can imagine. Remember, the AI is so good that there are zero jobs for humans, which implies that all of the jobs can be done by robots for everyone. Does it matter if you don't personally own the robots if every material desire is already met? Second, on the flip side, this world also sounds implausible. It seems odd that AI would acquire such fantastic capabilities and yet still be controlled by humans and governed by property laws as commonly understood in 2025. Third, it's worth noting that we have seen dramatic shifts in labor in human history. Consider both agricultural revolutions. In the pre-Neolithic era, 0% of humans worked in agriculture. Fast forward to 1810, and 81% of the US population worked in agriculture. Then came the second agricultural revolution, such that 200 years later only 1% of the US population works in agriculture. It's the decline that is particularly interesting to me. Humans were replaced by machines even as food became abundant and dramatically cheaper. No one is measuring their purchases based on how much food cost in 1700, just as they won't measure their future purchases on the cost of material goods in a pre-robotics world. That's because humans didn't sit on their hands. Rather, entirely new kinds of work were created which were valued dramatically higher. Much of this was in factories, and then over the last century there was a rise of office work. All of that could very well be replaced by AI.
But the point is that the history of humans is the continual creation of new jobs to be done, jobs that couldn't have been conceived of before they were obvious, and which pay dramatically more than whatever baseline existed before technological changes. Like, if I might be cheeky, professional podcaster. Podcasts didn't even exist 30 years ago, and yet here are Patel and me, accumulating capital simply by speaking into a mic and taking advantage of the Internet's zero marginal cost of distribution, a concept that itself was unthinkable 50 years ago. Now, in the next section, Ben argues that much of human consumption experience is not about choosing the objective best, but about choosing quirky, worse versions of things: podcasters, for example, that say "like" and "sort of." He also argues that, for his money, despite the availability of sex robots, he believes that humans will still want to have sex with other humans. Concluding, he writes: I get the argument that this is the worst that AI will ever be, but it will also never be human, which is what humans want most of all. Thompson's last section is called "The Problem with Inequality." He writes: this gets at what I found the most frustrating about Patel and Trammell's point of view. The core assumption undergirding their argument was also about the human condition. It just happened to be negative. He then references a famous Louis CK appearance on Conan O'Brien, where Louis CK argues that everything is amazing right now and nobody's happy. Ben writes: if anything, you can make the case that technological innovations, by virtue of conferring their benefits on everybody, have actually had the perverse effect of making everyone feel worse off. When I was a child growing up in small-town Wisconsin, I had some sort of vague sense that there were rich people in the world. But from my perspective, taking my first airplane flight around the age of 10 was a source of wonder and even provided a sense of status.
After all, many of my friends had never flown at all. That was the comparison set that mattered to me. Social media, or more accurately, user-generated content feeds, which are increasingly not social at all, has completely changed this dynamic. All I or anyone else needs to do is open Instagram to see beautiful people on private jets or on beaches or at fancy restaurants, living a life that seems dramatically better than one's dull existence in the suburbs or a cramped apartment. Never mind that the means of achieving that insight is a level of technological wealth that would have been incomprehensible to the richest person in the world 50 years ago. To put it another way, what Louis CK identified in this clip was the extent to which human happiness is a relative versus absolute phenomenon. What we care about is not how much we have, but how we compare. That, by extension, is what drives the technological paradox I noted. More capabilities, more broadly distributed, have tremendously enriched the world on an absolute basis. The end result, however, has been the dramatic expansion of our comparison set, making us feel more immiserated than ever. This, writ large, is what Patel and Trammell seem to be worried about. Sure, everyone may have all of their material needs met, but that won't be good enough if the price of that abundance is the knowledge that someone else has more. This might not be rational, but it certainly is human. If you assume that the negative parts of humanity will persist in this world of abundance, however, then you must leave room for the positive parts as well. Even if AI does all of the jobs, humans will still want humans, creating an economy for labor precisely because it is labor. You can't make the case that the potential for jealousy ought to drive authoritarian capital controls while completely dismissing the possibility that the prospect of desirability gives everyone jobs to do, even if we can't possibly imagine what those jobs might be. Beyond podcasting, of course.
Today's episode is brought to you by Robots and Pencils, a company that is growing fast. Their work as a high-growth AWS and Databricks partner means that they're looking for elite talent ready to create real impact at velocity. Their teams are made up of AI-native engineers, strategists, and designers who love solving hard problems and pushing how AI shows up in real products. They move quickly using Roboworks, their agentic acceleration platform, so teams can deliver meaningful outcomes in weeks, not months. They don't build big teams, they build high-impact, nimble ones. The people there are wicked smart, with patents, published research, and work that's helped shape entire categories. They work in velocity pods and studios that stay focused and move with intent. If you're ready for career-defining work with peers who challenge you and have your back, Robots and Pencils is the place. Explore open roles at robotsandpencils.com/careers. That's robotsandpencils.com/careers. If you're listening to this, you already know how fast AI is writing the rules for innovation, disruption, and value creation. And this new era demands a new kind of patent law firm. Landfall IP was built from the ground up to operate differently, orchestrating how human expertise and AI work together for better patents at founder speed. Created by world-class patent attorneys who saw a better way, Landfall IP lets AI execute the repeatable while attorneys elevate to create the exceptional. Landfall isn't adapting to AI; they were built for it. Have a new idea? Try the Discovery Agent for free. It's a confidential tool that helps innovators synthesize their inventions and instantly see patentable insights. Visit landfallip.com to learn more.
That's landfallip.com. If you're using AI to code, ask yourself: are you building software, or are you just playing prompt roulette? We know that unstructured prompting works at first, but eventually it leads to AI slop and technical debt. Enter Zenflow. Zenflow takes you from vibe coding to AI-first engineering. It's the first AI orchestration layer that brings discipline to the chaos. It transforms freeform prompting into spec-driven workflows and multi-agent verification, where agents actually cross-check each other to prevent drift. You can even command a fleet of parallel agents to implement features and fix bugs simultaneously. We've seen teams accelerate delivery 2x to 10x. Stop gambling with prompts. Start orchestrating your AI. Turn raw speed into reliable, production-grade output. Try Zenflow free. Today's episode is brought to you by Superintelligent. Superintelligent is a platform that, very simply put, is all about helping your company figure out how to use AI better. We deploy voice agents to interview people across your company, combine that with proprietary intelligence about what's working for other companies, and give you a set of recommendations around use cases and change management initiatives that add up to an AI roadmap that can help you get value out of AI for your company. But now we want to empower the folks inside your team who are responsible for that transformation with an even more direct platform. Our forthcoming AI Strategy Compass tool is ready to start being tested. This is a power tool for anyone who is responsible for AI adoption or AI transformation inside their company. It's going to allow you to do a lot of the things that we do at Superintelligent, but in a much more automated, self-managed way and with a totally different cost structure. If you are interested in checking it out, go to aidailybrief.ai/compass, fill out the form, and we will be in touch soon.
So the point here of Ben's piece, in a lot of ways, is just that we literally can't know what's coming next. And what's so challenging about the moment that we're living through, when it comes to our jobs and our work, is that it's easier in many ways to see how we'll have fewer things to do than to catch the glimpses of the new things that we might spend our time on in the future. There is of course no place that this conversation is happening more saliently and more in public than around software engineering. We talked about this a couple different times this week in different contexts, and recently devs have been writing a lot about this. Gergely Orosz wrote a piece in his Pragmatic Engineer newsletter called "When AI Writes Almost All Code, What Happens to Software Engineering?" In it, he identifies the same magical feeling that so many other devs had while they were doing their side projects and personal projects over the holiday, using the newest models, Opus 4.5 and GPT-5.2, through interfaces like Claude Code. And what he spends the rest of the essay exploring is what's good, what's bad, and what's just changing. The bad he sums up, for example, as the declining value of expertise: prototyping, he writes, being a language polyglot, or being a specialist in a stack are likely to be a lot less valuable. The good, he writes, though, is that software engineers are more valuable than before. Tech lead traits are in more demand, being more product-minded will be a baseline at startups, and being a solid software engineer, and not just a coder, will be more sought after than before. The ugly, he writes, is uncomfortable outcomes: more code generated will lead to more problems, weak software engineering practices start to hurt sooner, and perhaps a tougher work-life balance for devs. And certain roles, he estimates, are going to change in pretty fundamental ways. The one that he's interested in is product management versus software engineering.
Product managers can now generate software more easily, he writes, needing fewer engineers to realize their goals. But software engineers also need less product management. Both professions are set to overlap with one another more than before. Now, what's interesting and valuable is not just that people are starting to have the conversation about how these roles will change. They're starting to try to lean into it and create new blueprints that people can actually put into practice. Google senior AI product manager Shubham Saboo wrote a piece on X called "The Modern AI PM in the Age of Agents." It is an exploration of exactly this: how the role of product manager is changing. And I think as you listen to parts of this, you'll find that there is probably a lot here that's not just relevant for product managers. He writes: the job of a PM used to be translation. You talked to customers, synthesized their problems, wrote specs, and handed them to engineers. You were the bridge between what people need and what gets built. The value was in that translation layer. That layer is compressing. When agents can take a well-formed problem and produce working code, the PM's job shifts. You're no longer translating for engineers. You're forming intent clearly enough that agents can act on it directly. The spec is becoming the product. He continues: I've watched this happen with myself and dozens of other PMs. Previously a PM would write a detailed spec, hand it off, wait for questions, clarify, wait for implementation, review, give feedback, iterate. The cycle took weeks. Now they write a clear problem statement with constraints, point an agent at it, and review working code in an hour. The time between "I don't know what we should build" and "here it is" collapsed. But the work of knowing what to build didn't get easier. It got more important. You don't need to write the code yourself. You need to know what you want clearly enough that an agent can build it.
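To make "the spec becoming the product" a little more concrete, here is a minimal sketch in Python of what a spec-as-structured-artifact might look like. To be clear, the `ProblemSpec` shape and its field names are entirely hypothetical, not any particular tool's format; the point is only that a clear problem statement with constraints can be rendered directly into a prompt an agent could act on.

```python
from dataclasses import dataclass, field

@dataclass
class ProblemSpec:
    """A hypothetical 'spec as the product': the problem statement a PM
    hands directly to a coding agent instead of to an engineering team."""
    goal: str
    constraints: list[str] = field(default_factory=list)
    success_criteria: list[str] = field(default_factory=list)

    def to_prompt(self) -> str:
        # Render the spec as a plain-text prompt an agent could consume.
        lines = [f"Goal: {self.goal}", "Constraints:"]
        lines += [f"- {c}" for c in self.constraints]
        lines.append("Success criteria:")
        lines += [f"- {s}" for s in self.success_criteria]
        return "\n".join(lines)

spec = ProblemSpec(
    goal="Let users export their dashboard as CSV",
    constraints=["No new dependencies", "Must handle 100k rows"],
    success_criteria=["Export completes in under 5 seconds"],
)
print(spec.to_prompt())
```

Whatever interface the agent lives behind, the workflow described above (write the problem statement, point an agent at it, review working code) starts from an artifact like this rather than from a handoff document.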
The spec and the prototype are becoming the same thing. You just describe what you want, watch it take shape, course correct, and iterate. The bottleneck isn't implementation anymore, and the speed of shipping is only accelerating. I've been at Google for around three to four months now, and it feels like we've shipped years' worth of AI progress. Every big and small AI company is shipping at this pace, thanks to AI coding agents. The cycle times that used to define product development, from quarterly planning and monthly sprints to weekly releases, are compressing into something closer to continuous deployment of ideas. When the implementation barrier drops this fast, the bottleneck shifts upstream. The scarce resource isn't engineering capacity. It's knowing what's actually worth building. That leads Shubham to write about a new PM skill set. It's things like problem shaping and context curation, and not just for technical problems, but for taste. The mental model shift, he says, is from hands off to hands on. The AI PM, he writes, isn't just handing off requirements anymore. They're vibing the first iteration themselves and getting real feedback on working software, not slide decks or Figma mocks. Engineers then become collaborators on making the product better and production-ready, rather than translators of your intent. This changes your relationship with the product. You're not describing what you want and hoping it comes back right. You're shaping it directly, in real time. Now, I would argue that almost everyone is going to be, at least a little bit more than they are now, a product manager in the way that Shubham is talking about in this post. That doesn't necessarily mean that all of us will contribute to the air-quotes product that is the output of our company, but all of us will, again at least a little bit more than we do now, have a more product management type of mindset about our problems and solutions.
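As a purely illustrative example of that mindset, a one-off, discardable tool might be nothing more than a few lines of Python solving one person's specific problem. The "tag: minutes" time-log format here is invented for this sketch, exactly the kind of personal convention that generic software never quite fits.

```python
from collections import defaultdict

def tally_time(log_lines: list[str]) -> dict[str, int]:
    """Sum minutes per tag from a tiny personal time log.
    Each line looks like 'writing: 45'; the format is made up
    for this example, not any real tool's."""
    totals: dict[str, int] = defaultdict(int)
    for line in log_lines:
        if ":" not in line:
            continue  # skip blank or malformed lines
        tag, minutes = line.split(":", 1)
        totals[tag.strip()] += int(minutes.strip())
    return dict(totals)

log = ["writing: 45", "email: 20", "writing: 30"]
print(tally_time(log))  # {'writing': 75, 'email': 20}
```

Nobody would ship this, and that is the point: it is crafted for one problem, used, and thrown away.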
We will start to look for ways in which we can build things: intermediate things, one-time things, discardable things, ephemeral things that can solve our problems or open up new opportunities. This mindset shift won't come overnight, and it won't even necessarily feel like work. In the last short essay that we'll reference in this episode, LinkedIn founder Reid Hoffman wrote a reflection on a recent conversation he had with Replit CEO Amjad Masad. He writes: we're all becoming gamers. We're quickly moving towards a world where, with AI, we'll all be able to craft tools to help us better play the game of life. For those who grew up playing video games, you understand what I mean. It should help you turn ideas into real things instantly, get unstuck on hard problems, and operate beyond what one person could normally do alone. Nowhere is this more true than in AI development platforms like Replit. At scale, these platforms will make life start to feel like you're progressing through a game. Each new challenge is a level, and AI is how you craft a way forward. For centuries, humans have built tools to get ahead, sometimes individually, sometimes together. But as economies matured, most of us stopped building tools and started relying on the ones already available to work faster, live better, and scale what we were doing. Software took this trend to its extreme. Most people don't use software that's designed for them. They use general-purpose tools built for the median user, tools that improve generic workflows but rarely map cleanly onto the specific problems any one person is actually trying to solve. That tradeoff made sense, as generalized software could scale to help more people and generate more revenue for the maker, though it created a paradigm where a specific tool to solve a specific problem was hard to find. So you either had to patch a bunch of consumer software together (annoying), learn to code (time consuming), or convince someone else to do it for you.
That last option was often expensive. With Replit, that paradigm has been shattered. Now building software is easy, and it almost feels like you're playing a game, trying to craft the perfect tool to beat the level that's been stumping you for weeks. A useful analogy here is Minecraft. Minecraft doesn't give you a finished solution or a prescribed path. It gives you a world, a set of primitives, and fast feedback. If you need a tool, you build it. If the tool isn't right, you can try another way. You don't wait for a perfect object to exist. You craft what you need from what's available. Replit increasingly feels like that kind of environment for software. Reid concludes: in a few years, we'll shift from thinking "what can I buy to help me?" to "what can I build to help me?" Work and life will feel like progressing through levels, where each new challenge is met not by waiting for the right software to exist, but by creating it. The real change isn't that everyone becomes a programmer. It's that everyone gains the ability to shape their environment, extend their capabilities, and move forward under their own control. The real change is that everyone becomes a gamer, building for the most important game they'll play. I don't know ultimately what the future of AI is. I don't know how it's ultimately going to change software engineering jobs, to say nothing of the rest of knowledge work. But what I know is that, sitting here at the beginning of 2026, there has been a shift that many of us have felt, where the possibility of building tools that let us navigate the world, in a personal and work context, has actually started to take root as a default behavior. And as that happens, we're starting to get glimpses of what the next generation of all of our jobs might be. Now, it is going to take a lot of stumbling around in the dark for it all to come together. And frankly, that's the exciting thing about this moment. For now, though, I think we will close there.
That is certainly enough to chew on for one Sunday. Appreciate you guys listening or watching, as always. Until next time, peace.