Possible

Humans secretly prefer AI writing

24 min
Mar 18, 2026
Summary

Reid Hoffman discusses Jensen Huang's "five-layer cake" AI framework, examining whether AI dominance will come from infrastructure control or applications. The conversation covers human vs. AI writing preferences, economic implications across the AI stack, and debates around potential tech nationalization.

Insights
  • AI value creation may concentrate in applications and models rather than foundational infrastructure layers, similar to how Google profits from AdWords rather than just computational power
  • Human preference for AI writing in blind tests reflects both the generic nature of much existing content and AI's strength in short-form, decontextualized passages
  • Nationalization threats against tech companies could backfire by stifling innovation when speed and iteration are most critical for maintaining competitive advantage
  • AI adoption will likely follow customer service patterns where users eventually prefer AI interactions over human ones due to superior performance
  • Geopolitical AI control requires balancing national security interests with preserving the innovation ecosystem that creates technological leadership
Trends
  • AI infrastructure becoming critical to geopolitical power and digital sovereignty
  • Shift from viewing AI as software to recognizing it as foundational infrastructure
  • Growing tension between tech companies and government over national security alignment
  • Customer preference evolution toward AI-powered services over human alternatives
  • Economic disruption of white-collar jobs through AI automation
  • Increasing government scrutiny of AI companies as strategic assets
  • AI writing capabilities approaching human-level quality in specific contexts
  • Capital efficiency advantages driving investment toward AI applications over infrastructure
  • Integration of AI tools becoming standard practice in professional settings like healthcare
  • Debate over autonomous weapons systems and ethical AI deployment boundaries
Topics
AI Infrastructure Stack, Geopolitical AI Control, Human vs. AI Writing, Tech Nationalization, AI Investment Strategy, Customer Service Automation, Digital Sovereignty, AI Job Displacement, National Security Technology, AI Economic Value Distribution, Autonomous Weapons Systems, AI Healthcare Applications, Tech Company Government Relations, AI Capital Efficiency, Technology Innovation Policy
Companies
Nvidia
Jensen Huang's five-layer AI cake framework and chip infrastructure importance discussed
Google
Example of value creation through applications (AdWords) rather than just infrastructure
Anthropic
Conflict with State Department over AI deployment restrictions and autonomous weapons
Palantir
Alex Karp's warnings about potential tech industry nationalization
Microsoft
Praised for collaboration with US and Western democracies on global stability
Sierra
Customer service AI company showing good results in business implementations
Parloa
Customer service AI company mentioned alongside Sierra for business results
New York Times
Published blind quiz comparing human vs AI writing that sparked online debate
Twitter
Example of short-form content platform where complexity differences are minimized
People
Jensen Huang
Nvidia CEO who described AI as a five-layer cake requiring full-stack infrastructure
Alex Karp
Palantir CEO who warned tech industry about potential nationalization threats
Reid Hoffman
Podcast host discussing AI industry dynamics and investment perspectives
Quotes
"What looks like a software boom may really be an infrastructure and maybe even geopolitical build out in disguise"
Host, opening
"Nationalizing an industry is a sure way to say, stop innovation, don't build anything more here"
Reid Hoffman, mid-episode
"What you really want to get to is where the customer says, please put the AI on"
Reid Hoffman, mid-episode
"If you're talking to your doctor and the two of you aren't using frontier models to second opinion what you're doing, it's bad for both of you"
Reid Hoffman, mid-episode
"American companies do not have to do whatever the Department of Defense tells them to do, especially when we're not in a time of war"
Reid Hoffman, late episode
Full Transcript
2 Speakers
Speaker A

Reid, delighted to be here with you today. Let's jump right into some AI questions. Jensen Huang recently said that AI is a five-layer cake. The idea is that people talk about AI like it's just a chatbot or a model, when it's really a full stack: energy, chips, infrastructure, models, and applications. His argument is that every flashy AI application at the top pulls on everything beneath it, all the way down to the power plant. It's not surprising that Jensen of all people would be saying this, but his comments suggest that the AI race may not ultimately be won just by whoever has the best app, or even the best foundation model. It may be won by whoever controls the deepest layers of the stack: compute power, data centers, and the industrial base required to support all of it. In other words, what looks like a software boom may really be an infrastructure and maybe even geopolitical build-out in disguise. So I would love to hear from you: do you agree with this framing? And when you hear AI described as a five-layer cake, does that change how you think about where the real power in the industry sits?

0:00

Speaker B

Well, obviously Jensen and Nvidia have been doing amazing work, and one of the things Jensen is very good at is arguing his position very strongly. So it's: no, no, what's most relevant is the people who are producing the chips. Right? Let me tell you why. And by the way, the chips are super important. So I agree with the five-layer cake; there's actually even some additional complexity around data and all the rest. And the fact is, when you think about geopolitical power, compute capabilities and compute infrastructure are probably now highly relevant to it. People think, oh, do I have the supercomputer to train a model? Digital sovereignty is one part of that, and I think that's potentially navigable. It doesn't have to be that each significant country has its own $100 billion computer for training its own model, but you will still need compute for inference. You'll need some kind of digital sovereignty in not having models be able to be rug-pulled from your nation's industries, national security apparatus, et cetera. So I think it's absolutely right that there's a geopolitical dimension. Now, the thought that that's where the real economics are: it's a place where real economics are. But it's a little bit like arguing, well, the Internet's going to be a geopolitical thing, and so the ISPs are the real point of control. And you look around and go, no, not really, in terms of how this operates. I'm not suggesting these are exactly the same, because among other things the ISPs are much more commodities and you can hook up a whole bunch of them. But it's the reason why the most foundational part of the stack is not necessarily where the most value or power accrues.
I do think that all of the five layers he describes are in fact pretty important. But historically, when you get to where the most economic value ends up accruing, the area with the most economic power tends to be closer to the top of the stack. So, for example, Google makes its money from AdWords, and yes, it powers the whole thing with a very deep computational stack; it's probably the closest thing to a complete five-layer cake in the search realm and is trying to do the same in the AI realm. But it's not because, oh, we have all these lower layers of chips, that this particularly plays out. So there's real power at every level, much more so than with the ISPs, and I think it's important for countries and industries to think about all these layers, plus additional things like data. Data is one of the areas where there are a lot of vague claims, and it's actually going to play out in particularly interesting ways.

1:11

Speaker A

Well, obviously this is not an investment advice podcast, but if you had to choose only one layer of the cake to invest in, what would you choose?

4:34

Speaker B

Well, that's a complicated answer, in part because I'm a software guy, so for me that's the natural place. But also, by the way, it depends what kind of investing game you're playing. If you're investing in startups, then startups that have to do with compute, with power, with data centers are all hazardous investments from the viewpoint of capital intensity, ease of failure, et cetera. It can be done, and good things do happen there, but with tons and tons of very high, costly failures. One of the benefits of software is that it tends to be more capital efficient. It gets a little tricky when you get to AI model construction, which is closer to the high-capital way of doing things. But that's the reason why I tend to be in applications and models and other places with high capital efficiency in how you play them.

4:43

Speaker A

So let's move from the economic to the more philosophical. Many of our listeners probably saw that the New York Times published a blind quiz asking readers to compare human writing versus AI writing, and the result definitely hit a nerve online. The quiz was taken by more than 86,000 people, and readers slightly preferred the AI passages overall. What was interesting is that the reaction went in two directions. One camp said, this proves that most writing people encounter every day is already generic enough that AI can beat it. The other camp said the quiz misses the point: short, decontextualized passages are exactly where AI performs best, and the real value of human writing is voice, investigative reporting, structure, taste over long arcs, long novels, writing style. So what do you think people are actually reading into these human-versus-AI writing debates? Why does this strike such a nerve?

5:47

Speaker B

Well, I'll answer your direct question first and then go to the rest. Look, it strikes a nerve because people are fearful of replacement in multiple ways. They're fearful of replacement in terms of economic job safety and security, and they're fearful of replacement in terms of purpose: what makes me, as a writer, as a human being, unique? All of a sudden you go, well, wait a minute, what's going on here? That whole range of things is why people have such intensity in this discussion. And by the way, it's one of the reasons this is important, because I kind of think of myself as a techno-humanist. I think that what is human is super important. But I also think of us as homo techne: we evolve through our technology, everything from fire and agriculture, which allowed us to aggregate in cities, all the way up to the creation of machinery and power and books and all the rest. That's how we evolve. It changes the way we think of ourselves, the way we think of the world, the way we have epistemology. The whole world changes through the lens of a microscope, as one instance. And AI is, I think, another one, which of course poses the greatest challenge so far, because we think of ourselves as the only things that have agency: humans, with some asterisks around animals and maybe some blindnesses around corporations as collections of humans.
The reason for Superagency is that this new thing challenges our agency in a way that's more fundamental than the other technologies we've had this discussion about, because maybe it's somewhat autonomous, and maybe it does things we'd previously said were only human, e.g. writing, as an instance of this. That's why the navigation of this is really important, and it's part of the reason we do this podcast, wrote Superagency, and everything else: to say, here is how we navigate getting stronger agency through this path. That doesn't mean nothing transforms; it doesn't mean that things you previously really liked aren't now different in the age of AI. And I think the Times did a clever thing with this blind quiz, because I think both camps are right. The camp that says, look, a lot of writing is already pretty generic, and by the way, AI is already good enough to do it: that's pretty straightforward. Everyone who uses these AI agents to, for example, produce a custom equivalent of a Wikipedia page, an answer, or a report goes, this is perfectly good for the kind of thing I'm searching for. And by the way, short form makes it even easier. This is one of the things I missed when Twitter was created. I thought, this is dumb. But no, actually people want short form, because in short form it's very hard for anyone to be particularly smart, and everyone looks sufficiently banal. That was part of the reason it was something in addition to blogging, where with blogging you had to write something substantive.
So there is a lot of short form, and AI does that. And by the way, I think AI does some long form perfectly well too, for different contexts. But that doesn't mean there isn't more. I can still tell: there's a whole bunch of writing tasks where humans, yes, it's more expensive, it's more challenging, but humans do a much better job. Now the challenge will be which areas will be economically viable for the humans doing a much better job than the automation of the AI. That's the brass tacks of it. You say, well, I used to hire human writers to write the manual for the product; you're probably not going to have to do that anymore. That doesn't mean there won't be a human iterating with the AI to get the writing of the manual done the right way, or done in new or better ways, because it's now more efficient. But I no longer have to pay a writer at whatever the going market rate for writers is in order to do that. And that's the last underlying thing in this issue. Now, as per Superagency, my hope and expectation, and part of what we should try to shape it toward, is that there's still a lot of role for writers. Not just because there's a bunch of stuff where AI falls flat today; just try to get it to write good dialogue, as one example. But there are a lot of other things, including the question of reporting, lived experience, believability: do I want to be hearing from a human on this particular topic versus the canned synthesis of AI, et cetera. But it's also the question of which of these areas are going to be the areas where we go.
This economic model works for production and consumption, and I think people will begin to realize that it's actually useful and even good in some ways. Here's the thing: the canary I've been tracking for where job replacement will really happen is customer service, as you know. And I think we're still in the early days of that here in '26. From what I see, businesses are engaging companies like Sierra, Parloa, and others in customer service and getting good results, and I think they're expanding their footprint with it. They're still working out a lot of different things, and the businesses are doing quite well as part of it. But what you really want to get to is where the customer says, please put the AI on. Right? And they'll get there through experience, because they'll go, well, wait a minute: the AI directly interfaces with all this stuff, as opposed to a human who doesn't really understand all of it, who is stumbling over themselves trying to follow a database script, probably outsourced to the Philippines or India because it's much cheaper, and not sharing in context. The AI is so much better; well, that's better. And once people begin to get to "oh, the AI's better here," then they have areas where they prefer it. This is part of where I'm trying to get people to own their own agency. Obviously, if you have a doctor, you should always talk to your doctor for these things.
But by the way, if you're talking to your doctor and the two of you aren't using frontier models to second-opinion what you're doing, on the spot, it's bad for both of you. You should want the AI to be doing that, because it helps you in really critical ways. And if you don't have access to a doctor, if the doctor is at a clinic that's a four-hour drive away, or you don't have one whatsoever, well, start with the AI to know whether or not you should get in the car and bring your kid to the clinic.

6:48

Speaker A

All right, so we've talked about economics and, sort of, humanity and philosophy. Now we're going to end with, still AI, but politics. Alex Karp recently warned the tech industry that it may be headed toward nationalization. His argument was basically that tech companies are simultaneously saying AI is going to wipe out huge numbers of white-collar jobs, while also refusing to align with US national security interests. And if that's true, they shouldn't be surprised if the government ends up moving toward some kind of nationalization of this technology, because it's sort of a direct threat to our way of life. When Alex Karp said this, he was obviously poking a bit at Anthropic; we all know about their recent fight with the State Department and the Department of War. But people took the idea further than that, because AI is starting to look less like a normal consumer product and more like critical infrastructure. It has implications for war, intelligence, labor markets, industrial policy, all of those things at the same time. And with this technology becoming foundational enough, maybe it makes sense for governments not to treat it like a normal private-market company. So when someone like Alex Karp talks about the possible nationalization of technology, how much do you think this is just rhetoric, he wants clicks, he wants headlines, and how much do you think we should actually pay attention to what he's saying?

15:00

Speaker B

In this age of disruption, with everything going on, people might do a lot of stupid, foolish things. So you should always pay attention, especially when people with some technological knowledge and position are making what I think are incorrect and clumsy arguments. Now, part of it gets to the old parable about the wise people and the elephant: this one has a trunk, that one a tail, these two have different legs, that one a tusk, and so forth. Part of what you have to do is look at AI as the whole elephant. Look, there's a national security thing; one of the legs really, really matters, of course. And so the argument goes, well, if you guys don't conform to what's best for national security, then you could just get nationalized. Kind of a simple argument. The question is, how do you look at the whole elephant? Because you don't get the national security without all of the economic power and everything else. And nationalizing an industry is a sure way to say, stop innovation, don't build anything more here, et cetera. When the things that matter most are speed, iteration, and compounding going into the future, that's the definite way to kill the golden goose. Or, you know, to dump radioactive acid on the golden goose, sure, as a way of doing it. So I think it's an unwise statement. Now, that being said, I didn't say it isn't important for countries to be thinking about national security: okay, we've got this fundamental technology; how does this play into our national security interests? And companies need to think about it too. They're located in countries, countries that provide their national security.
Fortunately, so far the AI leads have come from countries that have an interest in global stability, although in the recent year and a half the current administration seems less interested in global stability than former American administrations. But so far: China, the US, and bits of Europe, countries with an interest in global stability. It's not Russia; it's folks who are playing for global stability and everything else, and I think that's an important thing. And every tech company I talk to has actually bought into being an American company, participating in global stability and so forth. One of the things that was way under-commented in the Anthropic discussion, from what I can tell, is that Anthropic's point was: no mass surveillance of American citizens, which is, by the way, just saying, stay legal. So you can ignore that, because that's just saying we're operating legally. You could put "obey American law" into any contract in America, and you'd say, that's fine, I can ignore that sentence; it doesn't need to not be there. And then the other one is autonomous lethal weapons, where their point is: our technology is not ready for it yet. So the provider is saying, our technology is not ready for that yet, so we don't want to be doing that. And by the way, in a country that's free, we can provide services or not provide services, and if you want to take services from someone else who says their technology is ready for AI autonomous weapons, you can do that.
We think that's super dangerous, but we're not standing in your way of doing it; we're not saying you can't. And then of course this all gets reframed entirely by those people as value-destructive, as anti-American, as against serving the American interest. So it's like, look, the whole threatening posture here is, I think, frankly, against America's interests, against what we should be doing. We should be saying: great, for example, Anthropic with Claude Code has the best coding agent, the one we're all using. Great, that's awesome for us as Americans; now, how do we navigate this together? Because, by the way, American companies do not have to do whatever the Department of Defense tells them to do, especially when we're not in a time of war. And they say, well, we're not in a time of war. It's like, yeah, because Congress hasn't declared war yet.

16:25

Speaker A

Right.

21:16

Speaker B

So the issue of being in a time of war is that when Congress declares war, Congress can tell companies, you're now behaving as if you're in a time of war. Right? That's part of why we have Congress and the declaration of war. And I do think the underlying thing that's really important here is that this technology matters geopolitically, for power, for national security, and it's really important that we handle that. But even in this recent brouhaha, the Anthropic people think that too. They're asking, how do we help America in this position? So it's only the framing of "you have to do exactly what I tell you to do" that's the problem, which is not land of the free, and, I suspect, not even home of the brave. So I think it's important for the tech industry to say, hey, look: yes, we are providing, we are partnering. I think Microsoft has always done a really good job of this: we collaborate with the US and Western democracies for a well-ordered global society. That's what we do.

21:17

Speaker A

Well said, Reid. Appreciate it. Thanks so much.

22:41

Speaker B

My pleasure. Possible is produced by Palette Media. It's hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Shaun Young. Possible is produced by Thanasi Dilos, Katie Sanders, Spencer Strasmore, Emo Zhu, Trent Barboza, and Tafadzwa Nimarundwe.

22:44

Speaker A

Special thanks to Surya Yalamanchili, Saida Sabirova, Ian Ellis, Greg Beato, Parth Patil, and Ben Relles.

23:01