Imagine building a machine so powerful, like so capable of doing human work, that you actually march into Washington and ask the government to tax your own invention. Right. Just to keep the economy from collapsing, you know. And that is exactly what OpenAI, a tech company valued at over $800 billion, is officially doing right now. Yeah. They are asking the government to tax their own automated labor and then use that money to pay for a four-day work week and a public wealth fund for every single citizen. Hearing a private corporation propose a total restructuring of the national economy, I mean, it really stops you in your tracks. It really does. And when you place that proposal right next to the recent warnings from Bank of England Governor Andrew Bailey, a very distinct picture starts to form, because he pointed out that artificial intelligence is about to displace workers in a way that looks a lot like the Industrial Revolution. So you have this incredibly optimistic utopian proposal coming directly from the tech giant building the tools, sitting right across from these stark, unvarnished economic warnings from a central banker who is looking at the actual labor data. We are looking at a fundamental tension between the creators of this technology and the traditional financial institutions trying to brace for impact. Exactly. Because both sides, despite their entirely different vantage points, are essentially staring at the exact same impending reality. They see the exact same labor trends happening right now. So if these machines are about to become smarter than us and do the bulk of the work, how exactly does the money keep flowing so the whole economic system doesn't collapse? Well, to understand the mechanics of that, we have to start with Andrew Bailey's primary concern at the Bank of England. He believes artificial intelligence will not cause mass unemployment across the entire economy, but it will absolutely displace a narrow set of highly automatable roles. Like specific desk jobs. Right. Specifically hitting young entry-level professionals. The impact is intensely targeted at the bottom. And the data supports that entirely. It introduces this concept economists are calling the broken rung. So a recent Stanford study looked into this, and they found a 13% relative employment decline for early career workers aged 22 to 25 in exposed occupations. 13% is a big drop. It is. And in the UK, the data shows entry-level roles have dropped significantly too. Companies are simply not hiring for those junior positions at the same rate. Think about your own career for a second, or like the first real job you ever had. Oh, absolutely. You probably spent the first few years doing the grunt work. Yeah. If you were a junior lawyer, you were trapped in a room doing document review. Yeah. Or if you were a junior developer, you were writing basic code or hunting for syntax errors all night. Right. The boring stuff. Exactly. Yeah. But that rote work is how you learn the business. Today, though, the software can do the initial drafting, the basic coding, and the routine data processing instantly. So the partners at the law firm or the senior engineers, they just use the software instead of hiring a 22-year-old. They do. Wait, back up. If the machines do all the junior-level work, like document review or basic code generation, how do human workers ever get the experience to become senior lawyers or senior developers? That is exactly the crisis. 
If you remove the bottom rung of the career ladder, the pipeline of future talent breaks completely. I mean, how do you train a general when there are no foot soldiers? You can't. Right. The consequence of this is what the firm ICS.ai calls the human firewall. The human firewall. Yeah. The fundamental rule moving forward is that artificial intelligence proposes, humans dispose, and humans own the legal outcomes. Okay. This changes the entire career pipeline. It ends the traditional apprenticeship model where you learned by doing the grunt work. We are now facing a totally new requirement. Entry-level workers must start their careers as managers of machine output. Let's make that concrete, because it sounds like asking someone to be an executive chef without ever letting them chop an onion or work the line. That's a great way to look at it. They have to taste and approve the dish, and they are legally responsible if the food poisons a customer, but they never learn the physical muscle memory of cooking. That is the perfect analogy. Yeah. You're asking a 22-year-old to evaluate the quality of a legal brief or a string of code that a machine generated in three seconds. Right. But that 22-year-old has never spent 40 hours writing one from scratch. Yeah. They lack the foundational context. They just don't have the reps. Exactly. They don't have the neural pathways built through years of trial and error, yet they're functioning as the human firewall protecting the company from a machine hallucination or, you know, huge legal liability. So they are entirely responsible for an output they do not fully comprehend. Yes. And the machine output they are managing is only getting more complex. OpenAI has officially declared that we are in a transition towards superintelligence, meaning systems capable of outperforming the smartest humans, even when those humans are assisted by artificial intelligence. We can see exactly how fast this is happening through their internal metrics, too. They use an evaluation called GDPval to measure performance on economically valuable tasks. GDPval. Right. And this is a critical distinction. They are not just testing if the system can write a fun poem or pass a standardized test. They're testing real work. Yes. Measuring if the system can perform tasks that a business would actually pay a human being a salary to do. Systems operating at a GPT-5 level, which internally were codenamed SPUD, now match or exceed human professionals on about 50% of real-world tasks. 50% of real-world tasks. And they are completing them in minutes instead of hours. Minutes, sometimes seconds. Yeah. And furthermore, when they analyzed one and a half million consumer conversations, the interactions were heavily skewed toward information seeking, practical guidance, and writing. Meaning people are using it to do their jobs. Exactly. People are using these systems to make complex decisions and streamline highly administrative chores. It is essentially more ask than task. More ask than task. Right. The user is not doing the work. They're directing a highly capable digital entity to do the work for them. So the machines are getting too fast for the normal 40-hour work week to make sense anymore. That speed fundamentally undermines the traditional exchange of time for money. Our entire economic system is built on the premise that you trade a certain number of hours of your labor for a certain amount of currency. Right. If you are an architect, you might bill your client for 40 hours of drafting. 
But if a project that traditionally took an entire team of humans months to complete now takes a machine a few minutes, paying people by the hour completely breaks down. Because what do you bill for? I mean, if you charge $200 an hour and the machine finishes the task in four seconds, do you send the client an invoice for 22 cents? No, you can't. The business model just vaporizes. Exactly. You cannot bill for time when the time required drops to near zero. And this consequence forces the need for OpenAI's radical industrial policy for the intelligence age. They recognize that the foundational math of the labor market is evaporating. Because if human beings cannot sell their time, they cannot earn a paycheck. Right. And if they cannot earn a paycheck, they cannot participate in the consumer economy. Which brings us directly to OpenAI's specific fiscal proposals. They are advocating for shifting the tax base away from payroll and toward capital. This includes implementing what they call taxes related to automated labor, often referred to as robot taxes, and creating a public wealth fund. The mechanics of this are crucial to understand. Consider your own paycheck. Okay. Every time you get paid, you see deductions for essential services like Social Security, Medicare, and various state and federal programs. Our entire public safety net is funded primarily through wage and payroll taxes. Right. It relies entirely on human beings working and paying into the system. Yes. So if artificial intelligence displaces a significant percentage of human workers, the wage and payroll taxes that fund those essential services will just plummet. The government's primary source of revenue dries up. Exactly. The safety net will run out of money because there are fewer humans earning wages to tax. Precisely. To fix this structural deficit, OpenAI proposes that tech companies and the government jointly seed a national fund. The public wealth fund. Right. This public wealth fund would pay a direct dividend to every single citizen, regardless of their personal wealth or investments. If a server farm is doing the work of 10,000 accountants, the government taxes the output of that server farm and distributes the money to the citizens. As we look at these proposals, we have to remain completely neutral on the political and economic ideologies involved. But I have to point out the irony here. Oh, for sure. You have a company valued at over $800 billion, aggressively expanding its commercial footprint and charging for access to its tools, suddenly asking the government to tax them heavily and distribute their profits to the public. It does seem entirely contradictory. A private entity racing to capture global market share is simultaneously drafting the blueprint for how the government should take a portion of its revenue. But consider their position. If their product is so successful that it eliminates the earning power of the middle class, who is going to buy the goods and services that their artificial intelligence helps produce? Nobody. Right. A perfectly efficient economy is useless if there are no consumers with money to spend. The consequence of this proposal changes the very definition of wealth generation in society. It limits the accumulation of wealth in just a few tech giants and introduces a system where artificial intelligence access and its resulting economic benefits are treated as fundamental human rights, operating on the exact same level as literacy or access to electricity. 
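To make those fiscal mechanics concrete, here is a back-of-the-envelope sketch in Python. Every figure in it is an invented placeholder, not a number from OpenAI's proposal or any government forecast; the point is only to show how a shrinking payroll-tax base and a levy on automated output interact, and how a per-citizen dividend falls out of the arithmetic.

```python
# Toy arithmetic for shifting the tax base from payroll to automated labor.
# Every number below is a placeholder assumption, not a figure from the proposal.

workers          = 100_000_000   # employed humans before displacement (assumed)
avg_wage         = 60_000        # average annual wage in dollars (assumed)
payroll_tax_rate = 0.15          # combined payroll tax rate (assumed)

displaced_share  = 0.20          # share of those jobs automated away (assumed)
robot_tax_rate   = 0.15          # hypothetical levy on the value of automated output
citizens         = 330_000_000   # people receiving the dividend (assumed)

# Value that used to be produced by the displaced workers, now produced by machines.
automated_output = displaced_share * workers * avg_wage

payroll_before = workers * avg_wage * payroll_tax_rate
payroll_after  = workers * (1 - displaced_share) * avg_wage * payroll_tax_rate
fund_revenue   = automated_output * robot_tax_rate
dividend       = fund_revenue / citizens

print(f"Payroll tax revenue before displacement: ${payroll_before / 1e9:,.0f}B")
print(f"Payroll tax revenue after displacement:  ${payroll_after / 1e9:,.0f}B")
print(f"Automation levy raises:                  ${fund_revenue / 1e9:,.0f}B")
print(f"Annual dividend per citizen:             ${dividend:,.0f}")
```

In this toy setup the automation levy happens to backfill exactly what the payroll base loses, because the rates were chosen to match; in reality the levy rate, the displacement share, and the value of automated output are all open questions.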
They are arguing that intelligence is becoming an abundant, taxable utility, and the profits derived from that utility must be socialized to maintain societal stability. And they carry this idea of socializing the benefits directly into the workplace, too. How so? By proposing mandated pilot programs for a 32-hour, four-day work week with absolutely no loss in pay. I really want to focus on this because it challenges everything we know about corporate behavior. They call this the efficiency dividend. Okay. The core concept is that when a company implements artificial intelligence and experiences a huge surge in productivity, those gains should be returned to the workforce in the form of time. Rather than just being captured by the executives and shareholders as corporate profit. Exactly. If the work gets done faster, the worker gets their life back. Hold on. Yeah. A company is supposed to pay for the artificial intelligence subscription that does the work, pay the human the exact same salary as before, but only have them work four days a week. That is the proposal. Wow. It severely limits the traditional corporate profit margin on automation. Historically, if a machine made a factory twice as efficient, the factory owner fired half the staff and kept the extra money. Right. That's just basic capitalism. But this proposal actively prevents that. Alongside this, they propose a societal shift toward the care economy. The care economy. Let's spend some time on this because this feels like a fundamental rewiring of what society values. Well, displaced workers would be actively encouraged to move into roles involving childcare, elder care, and community services. Okay. Think about it. For centuries, society has compensated you based on either your physical brawn or your cognitive brain power, right? If robotics handles the physical labor and artificial intelligence handles the cognitive labor, what is left for the human being to offer? Empathy, human connection. Precisely. To make this viable, they propose a family benefit that treats care work as economically valuable. So they're essentially proposing paying people to provide the human connection that machines cannot replicate. Yes. The argument is that we should let the machines do the cognitive heavy lifting and processing, freeing up human beings to care for other human beings. And we should structure the economy to reward that care financially. But when you pivot from these grand utopian policy documents to the immediate reality for developers and businesses actually building this technology, the picture is incredibly different. Well, completely different. While OpenAI talks about a four-day work week and wealth funds, API developers in the trenches are facing strict new compliance burdens, rigorous auditing regimes, and complex model containment playbooks. The actual implementation is intensely bureaucratic right now. Right. Developers must now adhere to guidelines like those from the AICC. 
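For a rough sense of what that kind of compliance burden can look like in practice, here is a minimal sketch of audit-ready logging wrapped around a model call. The call_model function, the log fields, and the file format are all assumptions made for illustration; the AICC guidelines mentioned here are not spelled out in this conversation, so this is not a literal implementation of them.

```python
# Minimal sketch of "audit-ready" logging around a model call.
# call_model() is a stand-in for whatever inference API is actually used,
# and the log fields are guesses at what an auditor might ask for.
import hashlib
import json
import time
import uuid
from pathlib import Path

AUDIT_LOG = Path("audit_log.jsonl")  # append-only trail, one JSON record per call


def call_model(prompt: str) -> str:
    # Placeholder for a real inference call.
    return "DRAFT OUTPUT (placeholder)"


def audited_call(prompt: str, user_id: str, purpose: str) -> str:
    started = time.time()
    output = call_model(prompt)
    record = {
        "request_id": str(uuid.uuid4()),
        "timestamp": started,
        "latency_s": round(time.time() - started, 4),
        "user_id": user_id,
        "purpose": purpose,  # why the call was made, in plain language
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "human_reviewed": False,  # flipped to True once the "human firewall" signs off
    }
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps(record) + "\n")
    return output


# Example: a junior associate requesting a first-pass contract summary.
draft = audited_call("Summarize the indemnification clause.", user_id="assoc-42", purpose="contract review")
```

The design choice that matters is the append-only trail: every request gets an ID, a timestamp, hashed copies of the prompt and output, and a flag recording whether a human has signed off.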
If you are building software using these models, you must implement audit-ready logging, track specific automation metrics, and prepare for mandatory incident reporting. So you can't just throw things against the wall anymore. No, you can no longer just plug into an API and launch a product. You have to be able to prove exactly how the model is making decisions and what safeguards are in place. And what does an audit-ready log even look like for a neural network? It is not like reading a traditional line of code where you can see exactly where an error occurred. Right. It's a black box. You are dealing with statistical probabilities. Proving why a model made a specific decision to a government regulator is a massive technical headache. It absolutely is. Furthermore, there is a huge physical infrastructure problem. The data centers required to run these superintelligent models are draining local energy grids. Right. We're not talking about a few extra computers in a back room. These facilities require specialized cooling systems and draw megawatts of power. And OpenAI has a policy for that too. They do. Their own policy demands that these data centers must pay their own way for energy, investing in local generation so that everyday household utility bills do not spike just because a server farm moved in next door. This consequence completely changes the entire software development culture. For the last 20 years, the mantra in Silicon Valley was move fast and break things. Yeah, exactly. This new reality limits that ethos entirely. It demands an era where artificial intelligence development is treated like highly regulated public infrastructure. You cannot move fast and break a nuclear power plant, and you will not be allowed to move fast and break a superintelligent model that touches the global financial system. The developers are the ones bearing the weight of this friction between the theoretical capabilities of the technology and the safety requirements of society. And that friction brings us to a very compelling counterargument regarding the timeline of all this. Economist Tyler Cowen argues that the economic takeoff driven by artificial intelligence will actually be relatively slow. Yes, directly contradicting the idea of overnight disruption. I am extremely skeptical of this slow adoption theory, but let's go through his points. Cowen bases this on a few key economic principles. First is Baumol's cost disease. Baumol's cost disease highlights that highly regulated, inefficient sectors of the economy like government agencies, healthcare systems, or university administrations will adopt this technology very slowly. Because they're naturally resistant. Right. They are structurally resistant to rapid change. Even if the technology exists to automate an entire department, the administrative bureaucracy will fight every step of the way to preserve its own existence. And his second point is the O-ring model. Yes. So this theory gets its name from the Challenger space shuttle disaster, where a single inexpensive O-ring failed and destroyed an incredibly complex multi-billion dollar spacecraft. Wow. The theory suggests that the worst performer in a complex system dictates the overall productivity of that system. Okay. How does that apply here? Well, think about our earlier example of the human firewall. 
If an artificial intelligence system writes a perfectly accurate 100-page legal contract in three seconds, but a human paralegal still has to physically read, review, and manually stamp every single page before it can be filed with a court, the system is only as fast as the human stamper. That makes perfect sense. If humans remain the worst performers in a collaborative loop with artificial intelligence, the machines can only speed us up so much. The human bottleneck will always throttle the output. I understand the logic of the O-ring model, but I just don't buy the overarching theory that this will be a slow transition. We can look at the OpenAI exposure numbers, which show that 80% of the US workforce could have at least 10% of their daily tasks affected by these models. Those are big numbers. Huge. And when you see exposure numbers that high, it guarantees a rapid, violent economic shock that forces immediate restructuring. You cannot just hide behind university bureaucracy when private sector competitors are suddenly moving a thousand times faster than you. But the historical view leans heavily toward Cowen's perspective. Think about the adoption of electricity. The technology existed, it was demonstrably superior to steam or gas, but it took decades to fully diffuse through the economy and actually show up in the productivity statistics. Factories had to be physically redesigned, entirely new electrical grids had to be built, and human habits had to change. We are dealing with human institutions, and human institutions move at the speed of human trust. I reject the electricity comparison. You didn't have a magic button on your desk in the past that instantly wired your house for power. You had to lay physical copper wire across a continent. Fair point. Today, an API update can be pushed to a billion smartphones globally overnight. The friction of distribution just isn't the same. The physical distribution is fast, but the regulatory distribution is not. Today, even if an artificial intelligence system analyzes a massive data set and invents a breakthrough pharmaceutical drug in 10 seconds, the FDA is still going to take years to run clinical trials, review the data, and approve it for public use. The machine speed is completely irrelevant to the regulatory requirement. This institutional friction limits the immediate economic catastrophe. We are unlikely to wake up tomorrow to find 50 million people permanently unemployed. Yeah, that's not happening overnight. However, it creates a prolonged, incredibly awkward transitional phase. We are entering a period where the technology is undeniably capable of absolute magic, but human bureaucracy, risk aversion, and institutional inertia refuse to use it to its full potential. The tension is going to be palpable. Exactly. You will have tools that can do the work of 100 people instantly, trapped inside legal and corporate frameworks that mandate slow, manual review. You are going to see a massive divide between what individuals can do in their private lives using these tools and what they are legally permitted to do with their jobs. The dissonance will be exhausting for the workforce. We are entering a phase where intelligence itself is becoming an abundant, taxable utility. The challenge we face is no longer about how to do the work or generate the ideas, but how to distribute the rewards of that work without breaking the fundamental social contract that holds society together. 
If a public wealth fund does become the primary source of income for millions of people, and that fund is financed entirely by the profits of a handful of tech companies holding a monopoly on intelligence, who is actually running the country: the elected government, or the people who own the servers? If you're not subscribed yet, take a second and hit follow on whatever app you're using. It helps us keep making this. We appreciate you being here.