Big Technology Podcast

OpenAI’s $100 Billion Funding Round, OpenClaw Acquired, AI’s Productivity Question — With Aaron Levie

55 min
Feb 20, 2026
Summary

The episode discusses OpenAI's massive $100 billion funding round with participation from SoftBank, Amazon, and Nvidia, alongside the acquisition of OpenClaw and new studies questioning AI's actual productivity impact. Box CEO Aaron Levie provides insights on the AI market's trajectory, agent-based computing, and the future of enterprise software in an AI-driven world.

Insights
  • The AI market is still in early innings despite massive valuations, with potential for tens of trillions in total market value across the entire stack
  • Competition between AI labs is intensifying but the market is large enough to support multiple trillion-dollar companies
  • Enterprise software must become API-first to enable AI agents while maintaining human collaboration interfaces
  • AI productivity gains are most visible in coding/tech but haven't yet rippled through broader knowledge work sectors
  • Agent-based computing represents a paradigm shift from on-demand AI assistance to always-on AI collaboration
Trends
  • Massive AI funding rounds becoming normalized as infrastructure buildout continues
  • AI agents evolving from task-specific tools to always-on collaborative systems
  • Enterprise software transitioning to API-first architectures for agent integration
  • AI advertising models emerging as potential hundred-billion-dollar revenue streams
  • Productivity measurement challenges as AI benefits vary dramatically across industries
  • Geopolitical competition for AI investment from Gulf states and sovereign wealth funds
  • Model capability jumps accelerating across non-coding knowledge work domains
  • AI safety and alignment concerns growing as agent autonomy increases
Companies
OpenAI
Raising $100 billion funding round and acquiring OpenClaw for agent development
Box
Aaron Levie's company building AI-ready enterprise file platform for agents and humans
Anthropic
Released Claude Sonnet 4.6 model showing significant performance improvements in complex tasks
SoftBank
Expected to contribute $30 billion to OpenAI's funding round
Amazon
Potentially investing $50 billion in OpenAI despite connections to Anthropic
Nvidia
Scaling back from $100 billion to $30 billion investment commitment in OpenAI
Google
Released competitive Gemini 3.1 model at half the price of leading alternatives
Microsoft
Expected participant in OpenAI funding round with existing partnership
JPMorgan
Used as example of $840 billion market cap company for AI valuation comparison
Meta
Acquired Manuscript, an AI agent company mentioned alongside other acquisitions
KPMG
Requesting lower audit fees due to AI automation capabilities
Uber
Used as example of initially unprofitable business that became successful through market dominance
People
Aaron Levie
Box CEO and podcast guest providing insights on AI market dynamics and enterprise software
Sam Altman
OpenAI CEO making statements about superintelligence timeline and OpenClaw acquisition
Dario Amodei
Anthropic CEO making predictions about AI surpassing human cognitive capabilities
Jensen Huang
Nvidia CEO whose investment commitment to OpenAI reportedly scaled back from original plans
Peter Steinberger
OpenClaw creator joining OpenAI to drive next generation personal agent development
Yann LeCun
Referenced for having different definition of AI intelligence compared to Sam Altman
Quotes
"We are looking at a really, really small percentage of the total change that is going to happen as a result of this. So we're in the earliest innings."
Aaron Levie
"On our current trajectory, we believe we may only be a couple of years away from early versions of true superintelligence."
Sam Altman
"If you could add a 30 or 50% increase in productivity across all of knowledge work, could the major labs take a 5%, 10% sort of fee on that?"
Aaron Levie
"Such a compliment to Claude that amid rumors it was used in a helicopter extraction of the Venezuelan president. Nobody's even asking, wait, how can Claude help with that?"
Tony Chevlin
"We will wake up in five or 10 years and it'll actually kind of feel like relatively normal. There's not going to be some kind of crazy sci fi movie."
Aaron Levie
Full Transcript
2 Speakers
Speaker A

OpenAI is closing in on a massive hundred billion dollar fundraise, OpenClaw is acquired as agent hype goes into overdrive, and is AI actually making us more productive? That's coming up on a Big Technology Podcast Friday Edition with Box CEO Aaron Levie, right after this. Did you know your credit card points and miles can lose value to inflation? Credit card companies often reduce the redemption value of your points and miles. Now imagine a credit card with rewards that can grow in value. With the Gemini credit card, you can earn Bitcoin or one of over 50 other cryptos instantly, with no annual fee. Every swipe at the store or gas pump earns you instant rewards deposited straight to your account. Visit gemini.com/card today. Check out the link in the description for more information on rates and fees. Again, if you're looking to invest in Bitcoin but don't know where to start, the Gemini credit card makes it easy. Issued by WebBank. This is not investment advice, and trading crypto involves risk. Check Gemini's website for more details on rates and fees.

0:00

Speaker B

The world moves fast. Your workday even faster. Pitching products, drafting reports, analyzing data. Microsoft 365 Copilot is your AI assistant for work, built into Word, Excel, PowerPoint and other Microsoft 365 apps you use, helping you quickly write, analyze, create and summarize so you can cut through clutter and clear a path to your best work. Learn more at microsoft.com/M365Copilot. Welcome to Big

1:05

Speaker A

Technology Podcast Friday Edition, where we break down the news in our traditional cool-headed and nuanced format. We have a great show for you today. We're going to talk about the forthcoming $100 billion or thereabouts funding raise for OpenAI, where SoftBank, Amazon, Nvidia and maybe Microsoft are expected to participate. We're also going to talk about the acquisition of OpenClaw, also by OpenAI, and some new studies about whether AI is actually helping us be more productive. Ranjan Roy is out today and we are joined by the perfect guest. Returning champion Aaron Levie is here with us. Aaron, Box CEO, welcome back to the show.

1:34

Speaker B

Thank you. Good to be here. Never a dull moment in AI land.

2:11

Speaker A

Seriously. So this week we have model releases, we have potential funding announcements. It's hard to figure out where to start, so let's just go with the big story. A couple of weeks ago we foreshadowed this idea that OpenAI might be on the way to a $50 billion fundraise. Guess what? It's doubled. Now it looks like it might be $100 billion, with SoftBank in for 30 billion of that. Amazon might end up investing as much as 50 billion, which is wild given their connections to Anthropic. And then, I don't know, the numbers are even making it look like at least 110 to me, because Nvidia might,

2:16

Speaker B

I remember these kind of numbers from our like series A and B days. So this is par for the course, right?

2:51

Speaker A

Context here is that Nvidia could put up $30 billion. So all of these numbers would basically be larger than the entire amount raised by the biggest IPO in history. So let me just ask you this. The narrative around OpenAI has been code red: losing to Google, commoditized, getting its ass kicked by Anthropic. Now, money is just numbers, it's just money. But does this size of a fundraise rebut some of that? And why do you think these companies would be making such a big bet on OpenAI if some of those criticisms might be true?

2:58

Speaker B

Well, I mean, I just take a pretty pragmatic view to this, which is, you know, probably at every fundraise after the $1 billion valuation for OpenAI, the same set of questions would have been asked. I'm sure when they were 10 billion and 50 billion and 100 billion and, you know, a couple hundred billion, the question was always: how big could this market possibly be? It's going to be hyper competitive. Google's going to wake up someday, there's other competition. Aren't these models going to get commoditized? So you have to almost imagine that's always going to be the state of the conversation at every juncture, as we saw in the past and I think as we'll see going forward. And yet at the same time, almost by every metric, the usage of at least OpenAI's products keeps growing. Certainly Anthropic's and Gemini's and other players' in the space too. The capability level of these models is only increasing. So these models are doing more work. We are still only in the earliest innings of the actual ripple of intelligence across organizations and across the enterprise. So I think all of the metrics you just cited are relevant, but they are kind of the metrics that you would look at in the early days of cloud computing. You're in 2010 or 11 or 12 and you're like, wow, Google just now got into the game and Azure is building up market share, and you're looking at Amazon. How big could this possibly get given how much competition there is? And I think in AI we're experiencing the same thing, which is, if you actually zoom out and look at maybe the 10-year view of this market, we are looking at a really, really small percentage of the total change that is going to happen as a result of this. So we're in the earliest innings.
It's crazy to think that when you're talking about a $100 billion raise, like, I'm aware of the cognitive dissonance that might exist from that. But when you're talking about intelligence, one of the most fundamental, kind of, you know, core fabrics of the economy in the next century, it's just entirely reasonable that you would both see that level of competition and have companies that are now approaching a trillion dollars in this category.

3:38

Speaker A

Okay, but here's what the pushback would be. It would be that in the past these questions have come up, you know, what is Anthropic going to do? Is Google going to get it together? Those were ifs. Now Google has gotten it together. I think we have a new model, Gemini 3.1, that came out this week that is, you know, half the price of the other leading models and has about the same performance. This is a competition that has tightened in a real way. Anthropic isn't just a figment of the imagination anymore. It is dominating in enterprise. Claude Code is crazy.

6:04

Speaker B

But you have to do a slightly different math on this. Everything you just said is true and yet doesn't impact the valuation or funding question. We're talking about a category where the market caps that will be generated by AI will be measured in the tens of trillions of dollars. Some of that will go to the chip providers, some of that will go to the supply chain of the chip providers, some of that will go to the AI model providers, and some of that will go to the application and deployed layer. So if you're talking about a category that will be worth tens of trillions of dollars, we're talking about little skirmishes on the path to, you know, who's going to be a $5 trillion company in this space, or a $2 trillion company, or a $500 billion company, or a $100 billion company. So I look at it as just the total size of the market and how that pie will likely be divided. And you can still have Google become two times bigger than they are today and have 50% of the market share from consumer traffic, and that would still support very large numbers from OpenAI or Anthropic or one or two other players in the space, just because of the sheer size and scale of the market we're talking about.

6:42

Speaker A

Now I'm looking at the size of these numbers, and one of the questions that's come up for me is do.

8:00

Speaker B

Here's one, just for fun. If you want me to put you on the spot, what do you think the market cap of JPMorgan is?

8:05

Speaker A

Let's say 100 billion, 200 billion... $840 billion? Oh, man, I'm embarrassed. Way off.

8:16

Speaker B

Okay, so the market cap of JPMorgan is $840 billion. And I'm not saying that that's a fair market cap or not a fair market cap, no opinion on that. But you and I could list 15 competitors to JPMorgan. I don't even know if I do anything with JPMorgan. I think I have maybe like a car loan or something that's through J.P. Morgan, but I don't use J.P. Morgan in my daily life. And they're worth $840 billion. And if you take all of the other banks, you know, you're in the trillions of dollars very, very quickly across just one little category. And so, if you're talking about intelligence across the entire economy, you can get to pretty large numbers in a pretty reasonable way.

8:23

Speaker A

Okay, you're setting up the question I was about to ask perfectly.

9:15

Speaker B

Oh, maybe I didn't want to.

9:17

Speaker A

No, I think it is, It's a great setup. You've just illustrated what I'm going to ask about the size of these numbers. So the numbers are big.

9:19

Speaker B

Yeah.

9:25

Speaker A

And the question I have is, are the investors thinking that this is all going to be additive? Or maybe what happens is that OpenAI is getting this big because it's able to take a little bit of market cap from a J.P. Morgan. You know, a big part of J.P. Morgan's business is advising clients on making investment decisions. If I have a ChatGPT investment instance, is it that all of a sudden some of that market cap is going into the OpenAI market cap? Closer to home, we're in the middle of the SaaS apocalypse, right, where there's this belief that AI is going to just ingest lots of what software companies are doing right now. And the market has really been unkind to software companies at the start of this year. Very unkind, very unfair.

9:26

Speaker B

I feel like Trump, like very unkind.

10:15

Speaker A

So unfair. But you know, on that note, can you just sort of describe what you think happens if this is additive, versus what happens if this actually is a technology that will just gobble up big swaths of the economy?

10:18

Speaker B

Well, I kind of think about it as a multiplier on the economy. You could either think about it as a force multiplier that takes a tax on that, or it takes a percentage of the economy through some sort of labor arbitrage type pricing. But to me, I look at it as: tens of trillions of dollars are spent on knowledge workers across the economy. And if you could add a 30 or 50% increase in productivity across all of knowledge work, could the major labs and the applications around that take a 5%, 10% sort of fee on that? That's sort of how you get to the math where revenue can get to the hundreds of billions or low trillions. And it's not entirely unreasonable, just mathematically. And you're basically saying, okay, well, OpenAI will take part of that, Anthropic takes part of that, Google takes part of that, some of the application layer takes part of that. But there's a lot of ways you can get there, including actually just advertising could probably get you there. There's just no reason that your AI service is not generating $50 to $100 billion just due to better performing, hyper-targeted advertising as another business model. So I think OpenAI has these multiple business models stacked up that all will create more and more opportunity over time. And at the same time, you know, five years from now, they will be 100 times bigger in inference, Anthropic will be 100 times bigger in inference, Gemini will be 100 times bigger in inference, and so on.

10:34
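Levie's back-of-envelope can be written out explicitly. A minimal sketch in Python: the 30 to 50% productivity gain and 5 to 10% fee ranges come from the conversation, while the roughly $30 trillion of annual knowledge-work spend is a hypothetical round number chosen for illustration, not a figure given in the episode.

```python
# Back-of-envelope version of the revenue math from the conversation.
# ASSUMPTION: ~$30T/yr global knowledge-work spend is a hypothetical
# round number; only the gain and fee ranges come from the episode.
knowledge_work_spend = 30e12  # dollars per year (hypothetical)

for gain in (0.30, 0.50):        # 30-50% productivity increase
    for fee in (0.05, 0.10):     # 5-10% "fee" captured by labs + apps
        ai_revenue = knowledge_work_spend * gain * fee
        print(f"gain={gain:.0%} fee={fee:.0%} -> ${ai_revenue / 1e9:,.0f}B/yr")
```

Under these assumptions the four corner cases land between roughly $450 billion and $1.5 trillion a year, which is the "hundreds of billions or low trillions" range he describes.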

Speaker A

And that inference is more profitable, which sort of starts to answer some of these questions.

12:28

Speaker B

The inference eventually gets more profitable. I think you're in a mode right now, and I know it'll sound kind of crazy and bubbly, and there's some percentage chance that I'm totally just drinking the Kool-Aid. But I think you're in a period right now where you're just in the infrastructure build-out, teach-the-world-about-AI phase. It's sort of worth subsidizing a lot of these use cases because it's the fastest path to figuring out where the actual value is going to be. And so while there are some scenarios where you have a startup or a lab subsidizing tokens for coding or whatnot, it is competitively a good move for gaining market share, getting data, building a flywheel, creating a moat. Those are all strategic things to do at this stage, similar to how Uber had to buy their way into many markets unprofitably on a region basis. And over time, it's now a wildly profitable business because they obviously have a very strong network effect and they're kind of locked into these markets. And I think some of these very capex- or cash-heavy businesses up front sometimes just fundamentally require that. Right.

12:32

Speaker A

One note on the ads before we move on. You're talking about how ads could be a $100-plus billion annual business

13:54

Speaker B

and a giant asterisk that I've not studied it at all. I'm just going off of the size of Facebook's and Google's and saying there's just no reason that consumer-grade intelligence, you know, that's answering any question for you, wouldn't also deliver that type of business model as well.

14:00

Speaker A

Yeah. So Facebook, by the way, did $60 billion in the last quarter, so basically the numbers you're looking at are like half of that. I was speaking with an ad executive this week, and one of the interesting things about OpenAI's advertising, which has taken a lot of flack, maybe with good reason, is that it's so high touch. And that's why they're charging like a $60 CPM, which is insane. It really guides you through a process; it seems like it feels good to go through. It's helpful if you're thinking about, like, staying somewhere. The difficult thing with advertising over time is that something that custom and that high touch has been really difficult to scale. But with AI, that opportunity to scale it presents itself, and then all of a sudden these numbers that you're talking about aren't crazy.

14:16

Speaker B

Yeah, well, I'm in the other camp versus a lot of people on this. I think ads can be incredibly powerful in AI products. I think you sort of have to eventually decide as a user: do you want to see products that are kind of SEO-hacked, or do you want to see products that are kind of marketplace, economically hacked? And there's many reasons why the products that can best advertise to you might be the better products, because they have a very clear financial incentive only to get you to their site if it's a good product and it works well, or else you're just going to bail. Versus SEO, where we can just load a bunch of keywords across a whole bunch of sites and create lots of Reddit posts. That's all you're seeing right now when you ask for something: some form of a company doing whatever it can to ensure that it's showing up inside that algorithm. So it's not obvious to me that the marketplace model of that is going to give you worse results. And I don't think any lab would ever change the answer it's giving based on advertising. I think it's going to give you the answer, and then it's going to give you related and recommended things from the bidding system. And to me that makes total sense. That's just how the Internet has worked for 25 years. It's funded incredible consumer surplus of products on the Internet. It's why we have free search and free email and free maps, and there's just no reason that that would not apply to a consumer-grade intelligence product as well.

15:07

Speaker A

Definitely. No, I think it could. It's a very interesting way of thinking about it. And you're right, you're going to get recommended products anyway in these things. So, you know, maybe that's a good signal.

16:56

Speaker B

People want to believe that there's some kind of, like, amazing truth arbiter in these systems, and there's not. They are at the exact same mercy a prior search algorithm would have been. It's just taking signal from a variety of sources and doing its best to figure out what the real answer is. And if you also have a marketplace layered on top of that, I just don't think it's the end of the world. I think you'll actually get a lot of good recommendations along the way, and people will then pay to not see the ads, and that'll be even more revenue. So it's just a very good way to make money if you're an AI company at that scale. I mean, I only think it's relevant for two or three companies, but OpenAI is one of those.

17:05

Speaker A

Yeah, they'll have a billion users, or they might already have a billion now. Okay, so before we move on from the fundraising thing, there's one thing that has puzzled me throughout, and I need to ask you what your thoughts are here. So OpenAI and Nvidia announced this $100 billion funding that was going to come in from Nvidia to OpenAI, $10 billion at a time. And then it seems like Jensen was backing away from that. There was a Wall Street Journal article saying that the deal was on ice.

17:46

Speaker B

And.

18:15

Speaker A

And we found out this week from reporting from the Financial Times that Nvidia is going to invest in OpenAI, but it's going to be $30 billion and not $100 billion. Now, there were these reports that Jensen was not happy with OpenAI's trajectory and all of that. And he seemed, when he was talking about it, very different from the original press releases, saying we hope they'll invite us to invest, as opposed to we intend to invest. Two very different ways of talking about it. So I'm trying to figure out, Aaron, how do I think about this? Because on one hand, if this deal replaces it, that's $70 billion less. I mean, if you get $70 billion less than you anticipated, that's bad. However, they're still putting in $30 billion, reportedly. That's a lot of money. Where do you think the relationship stands? And how should we read the number and the replacement of the initial 100?

18:15

Speaker B

Oh, I mean, this is like full astrology on.

19:13

Speaker A

Yes, this is astrology.

19:16

Speaker B

Yes. And we're doing palm reading for the AI industry. First of all, did they say that they intended to invest in the very next round, or that they intend to invest 100 billion at some arbitrary point in time?

19:18

Speaker A

It was over time, it was never one round.

19:37

Speaker B

So I don't know. I'm taking in all the facts the same way everybody else is, but I just don't have the impulse for the drama side of this. Nvidia obviously wants a very strong corporate relationship with OpenAI. OpenAI obviously wants to be first in line for chips. They have a lot of incentive to make each other very successful. It's a boon for both of them if the whole space keeps growing. And at the same time, there's probably a lot of configuration dynamics that Nvidia has to consider on how much to invest, and that OpenAI has to consider when they think about their total cap table and what companies own what percentage of them. So it's a very boring answer, only because it's fun to watch the viral video of Jensen in the street interview, but I kind of don't worry about it too much. I just think this space is changing so quickly that I can imagine many different reasons why some configuration might end up different from where its intent was six months ago, or where the lawyers decided to put certain terms in the press release.

19:39

Speaker A

Yeah. My hot take here is that, I think Jensen does want OpenAI to succeed. Obviously it's them versus Google. I think this whole thing was basically a signal from him to them: you better perform, no more code reds, just stay ahead.

20:54

Speaker B

You know, my only counter take to that is I just don't think that OpenAI has a challenge raising money. So I don't know that there's some kind of pressure that can be exerted on them from the cap table side. Right? I think it's a bit more of a fluid market, and it's just people looking at their capital allocation decisions, looking at valuations, looking at whether you have other sources of getting the capital, etc. If you think about it from Nvidia's standpoint for one second: they don't need to own a percentage of OpenAI. They need to sell chips to OpenAI. And so really, they just need to ensure that they've got a very strong relationship that is sturdy and supporting the broad tailwinds of AI. And I don't know that there's a number. If it turns out SoftBank wants to take more of the allocation, I'm making all of this up, but if it turns out SoftBank wants to take more of the allocation, I don't know that Nvidia is strategically impacted by that in a meaningful way. Because if they owned more of OpenAI, I don't think that position in the cap table is going to overly sway the infrastructure decisions of OpenAI. OpenAI will have to make their infrastructure decisions based on the supply side of chips, the cost side, where they have data center capacity. Those things are going to matter more than who owns a certain percentage of their corporate structure.

21:11

Speaker A

My counter to that would be: with numbers this big, there's only a certain amount of money left for them to raise, and Nvidia, at $4 trillion and with sizable revenues, is one of those potential sources.

22:43

Speaker B

I don't know. There's countries with lots of money, so,

22:59

Speaker A

yes, we're about to see them get involved.

23:01

Speaker B

And those places want to deploy money in future economic activities.

23:03

Speaker A

So, yes, well, we'll have this round, which is going to be the tech giants round. Then we'll have Gulf State round number one, Gulf State round number two, and then the IPO is probably the way it will play out.

23:11

Speaker B

From your lips to God's ears.

23:24

Speaker A

So, speaking of other countries, the entire industry made their way to India this week for the India AI Summit, and some really bold statements came out of there. So let's play a game that we play on this show every now and again, called Hype or True. Are these statements hype or are these statements true? We got one from Sam Altman: On our current trajectory, we believe we may only be a couple of years away from early versions of true superintelligence. If we're right, by the end of 2028, more of the world's intellectual capacity could reside inside of data centers than outside of them. What do you think?

23:27

Speaker B

You know, probably every one of the things you're about to say is going to be conditioned on one definition of what is the thing being talked about. But that seems to be totally reasonable based on the trajectory that we're on. And I would bet that Sam has an even far higher bar for his definition of intellectual capacity, or whatever the term was, than even I would. I think already, with things like the latest round of models, with the right kind of AI harness, we could squeeze out a significant portion of valuable work from these systems, with the right scaffolding and the right kind of people being involved. So I think that is a very reasonable statement based on what he's saying. That might be different than what Yann LeCun would say is the definition of intelligence, where he would probably define it as: can the thing drive a car with only 10 minutes of training? And I don't have that same kind of more biological definition of intelligence. So that's why I think Sam's statement is very reasonable.

24:07

Speaker A

Here's Dario: AI has been exponential for the last 10 years. There are only a small number of years left before AI models surpass the cognitive capabilities of most humans for most things. I guess that's a similar statement.

25:18

Speaker B

Yeah. It's so true.

25:30

Speaker A

Same answer.

25:31

Speaker B

Yeah.

25:32

Speaker A

Yeah. Interesting moment happened at this India summit. I'm sure you've seen it. They have all the CEOs up there on stage, and they're all, I guess, instructed for a photo to lock hands and raise their arms. And Sam and Dario, who don't seem to like each other very much, instead of.

25:32

Speaker B

Although, didn't it... I've watched the video a couple times. Didn't it feel like maybe it was a little impromptu? Or do you think that was instructed? Is it reported that it was instructed?

25:52

Speaker A

I don't know. So I was making an assumption about the coordination of it. Maybe it was impromptu. Maybe Modi in the middle just did it, and then everybody followed.

26:00

Speaker B

I saw some videos where it kind of felt like nobody really knew what to do.

26:08

Speaker A

That's true.

26:13

Speaker B

Yeah. And they were all kind of just figuring it out, because you have this moment where, like, Alex had to grab Sundar's hand. And it seemed like not everybody quite knew how to coordinate this. So maybe they just malfunctioned for a minute, and then by the time it was too late, it was just like, we can't hold each other's hands. So who knows?

26:14

Speaker A

I mean, yeah. We could maybe, in a future episode, play the video back and do the play-by-play. But the point is, everybody seemed to figure it out except for Sam and Dario. They had their hands in the air, clenched, one next to the other.

26:38

Speaker B

Lobster hands. All right. Right.

26:57

Speaker A

Yeah, people did Photoshop claw hands onto them. Question for you about this: I respect their differences, but if these two guys can't figure out a way to hold hands for a picture, should we trust them to handle AI alignment?

26:59

Speaker B

That's a very great meta question. Has anybody written that piece yet? No?

27:21

Speaker A

I mean that really should have been the big technology story this week.

27:31

Speaker B

I mean, write that piece. I think it's a great conundrum we face, a little microcosm of a broader issue. But yeah, I'd pay a lot of money to get both of their takes on the hand thing. Sometimes you get into these heated battles with a rival where people are just saying too many things in public, and you get to this point where the relationship is too dramatic and there needs to be some kind of neutral ground that brings everything back together. One would have thought India might have done that, but I have full faith that we will get through the hand issues and they can repair the relationship somehow.

27:34

Speaker A

Yeah, I hope so. I think if you asked either of them right now, they would say, I should have just held the hand. That became the meme out of the whole thing.

28:34

Speaker B

I don't think they meant for that to be the takeaway from the summit. They had 20-minute speeches, and yeah, I don't think the hand was meant to be the takeaway.

28:42

Speaker A

It's funny how you get all these AI leaders together and sometimes there's just one great meme. There's Dario and Demis on the small couch, which is one of my favorites.

28:52

Speaker B

Fantastic content.

29:01

Speaker A

So, a very interesting development on the model front, which we hinted at before: Anthropic has a new big model, Sonnet 4.6, and you've said it's a major upgrade over the most recent model, 4.5. We usually expect these single-digit version bumps to be incremental updates, but the stats you shared from your evaluation for complex work are pretty significant, with roughly a 15-percentage-point jump in performance and accuracy between 4.5 and 4.6. This is from you on Twitter, or X, shall we say: in the public sector, you saw a jump from 77% to 88% accuracy on complex tasks, healthcare jumped from 60% to 78%, and legal from 57% to 69%. That's pretty big. It seems like this model has almost been underhyped. Can you talk a little bit about these jumps and what their significance is?

29:06

Speaker B

I think the main takeaway should be that the meaningful jumps we've been seeing in AI coding over the past couple of years are going to come to other fields of knowledge work. Two and a half years ago, the model at best could do a couple of lines of code in a kind of type-ahead format. Now people are giving the model a task of, write me tens of thousands of lines of code for a full project, and we've seen this incredible rate of progress, this march up toward more and more capability in coding. This jump from Sonnet 4.5 to 4.6 represents an example of what happens when these models get trained across more areas of knowledge work: what happens when they get better and better at reasoning capabilities that go beyond coding, and better at using tools and deciding when to use tools. That's what our complex work eval is meant to represent: how does the model think through a problem, how does it decide it's got the right answer, how does it check its work? These models are getting much better at delivering on that, and I think that'll be the trend for the next couple of years. Even our own eval is one of the earliest phases of a knowledge-worker-type eval; I think we're going to have to make it harder and harder to properly represent the capabilities of these models soon. But yeah, these jumps are obviously very eye-opening.

30:10

Speaker A

You know, we're going to get into how AI will do work in the second half when we come back. But one of the interesting things that's been happening around Claude is this drama between Anthropic and the Pentagon about the Pentagon's use of Claude. There was a story that came out that apparently the Pentagon used Claude to coordinate its attack on Venezuela. This is from X user Tony Chevlin: such a compliment to Claude that amid rumors it was used in a helicopter extraction of the Venezuelan president, nobody's even asking, wait, how can Claude help with that? People are like, of course it was useful.

31:53

Speaker B

How would you not have used Claude? It is actually very funny. Two years ago that sentence would have been like, excuse me, what? How would this have worked? And now it's just, yeah, I'm sure they used some kind of intelligence to plan something or figure something out or correlate data. That's just sort of priced into more and more complex work and software.

32:35

Speaker A

Wild. Okay, so we still have so much to talk about. We have OpenClaw, we have these new studies on AI productivity. Let's do that when we come back right after this. Here is the problem: your data is exposed everywhere. Personal data is scattered across hundreds of websites, often without your consent, and that means data brokers buy and sell your information. Your address, phone number, email, Social Security number. And that exposure leads to real risks, things like identity theft, scams, harassment, higher insurance rates. Incogni tracks down and removes your personal data from data brokers, directories, people-search sites, and commercial databases. Here's how it works. First, you create your account and share the minimal information needed to locate your profiles. Second, you authorize Incogni to contact data brokers on your behalf. Third, Incogni removes your data, both automatically with hundreds of brokers and via custom removals. There's also a 30-day money-back guarantee. Take back your personal data with Incogni. Go to incogni.com/bigtechpod and use code BIGTECHPOD at checkout. Our code will get you 60% off an annual plan. Go check it out. Starting something new isn't just hard, it's terrifying. So much work goes into this thing that you're not entirely sure will work out, and it can be hard to make that leap of faith. When I started this podcast, I wasn't sure if anyone would listen. Now I know it was the right choice. It also helps when you have a partner like Shopify on your side. Shopify is the commerce platform behind millions of businesses around the world and 10% of all e-commerce in the US, from household names like Allbirds and Cotopaxi to brands just getting started. With hundreds of ready-to-use templates, Shopify helps you build a beautiful online store that matches your brand style. You can also get the word out like you have a marketing team behind you.
Easily create email and social media campaigns wherever your customers are scrolling or strolling. It's time to turn those what-ifs into reality with Shopify today. Sign up for your $1-per-month trial at shopify.com/bigtech. Go to shopify.com/bigtech, that's shopify.com/bigtech. And we're back here on Big Technology Podcast with Box CEO Aaron Levie. Aaron, it's always great to have you here, and I think you're really going to enjoy this next segment, because this is something you've been following very closely, and it's going to be great to get your perspective on it. When OpenClaw sold to OpenAI, I said we've got to get Aaron on the show for his perspective. So this is from CNBC: OpenClaw creator Peter Steinberger joins OpenAI. The creator of the viral AI agent OpenClaw is joining OpenAI, and the service will live in a foundation as an open-source project that OpenAI will continue to support, Sam Altman said. He said that Steinberger is going to join OpenAI to drive the next generation of personal agents. So we'd love to get your perspective here, first very briefly on what OpenClaw is, because it's always good to refresh there, and then on why it's significant that OpenAI either acquired it or brought Steinberger aboard.

32:59

Speaker B

Yeah. So I think the innovation Steinberger created with OpenClaw, and there have been various attempts at this over the past couple of years, was only really possible in probably the last couple of months of model capability. The big jump is this: we have these agents that effectively act on our behalf, and we control them and steer them to go do tasks for us. So with Claude Code, you type in your terminal, you tell it to generate some code, it goes off and does work, comes back, and waits for the next task you give it. Or Codex, where you're in a UI telling it to go generate some code for you. Devin, Factory, all these kinds of agents. That's basically been the state of the art of agents for the past year or so, plus or minus. And OpenClaw took many of the same principles but said, well, what if that agent is running on its own, with access to your computer, your browser, and all the services you use, and it's literally running on an ongoing basis? You chat with it and can ask it to do things, but it can also ping you when something is relevant. This is a very new way to think about agents. We've seen examples of it before, but nothing that has taken off at the level OpenClaw did. It gives you a peek into a future where you don't just have agents that you spin up and spin down as you need them to do work for you; you have an agent that's always on, working for you and executing tasks for you. That's why people are setting up separate computers for these agents, so they can just keep running in their own environment. It's hard to know exactly how you'd fully package that up and how it could manifest in a way that's really simple for people to use.
And fully secure. Exactly, safe and secure for people who don't know their way around all these systems. Lots to figure out there. But it's not that different from, I think about it as a paradigm update. I remember the viral video of Devin, which must be two years ago now. I don't remember all the details, whether it was a Slack message or in the UI, but you told Devin to go off and do work, and you could see it producing its code; it had another environment where you could see what it was building. A lot of people were like, oh, this will never work, how could this possibly work, it's not actually doing that, and there were these viral takedowns from non-believers. But for some of us deep in the AI space, we were like, oh shoot, that is a very different way to think about working with an agent. You're not in an IDE, you're not coding alongside it; you're setting off a task and it's going to go do a bunch of work for you. And now it's very clear that that's the dominant paradigm we're going to be in. Codex has proven it, Claude Code has proven it, Devin and Factory have proven it. I assume Cursor is betting even more on agents; you can see them pushing more on the agent side of the user experience as opposed to the IDE side. So that was an update we got a couple of years ago, and I think we're going to see the same thing now in other areas of knowledge work. OpenClaw introduces an interesting paradigm that could persist across more and more areas of work.
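The always-on paradigm described here can be sketched in miniature: instead of a process that runs one task and exits, the agent keeps a long-lived loop with a task queue and a channel for proactive pings. Everything below is illustrative; these class and method names are hypothetical, not OpenClaw's actual API.

```python
import queue


class AlwaysOnAgent:
    """Hypothetical sketch of an always-on agent, not OpenClaw's real API.

    A spin-up/spin-down agent runs one task and exits. An always-on
    agent instead keeps a long-lived loop: it drains a task queue when
    work arrives and can proactively ping the user while idle.
    """

    def __init__(self, notify):
        self.tasks = queue.Queue()  # tasks the user "chats" in at any time
        self.notify = notify        # callback the agent uses to ping the user
        self.log = []

    def submit(self, task):
        self.tasks.put(task)

    def run_once(self):
        """One pass of the loop; a real agent would call this forever."""
        try:
            task = self.tasks.get_nowait()
        except queue.Empty:
            # Idle pass: an always-on agent can still surface updates.
            self.notify("nothing queued; still watching your services")
            return None
        result = f"done: {task}"    # stand-in for real model and tool calls
        self.log.append(result)
        return result
```

In practice `run_once` would sit inside a `while True` loop on the agent's own machine, which is exactly why people dedicate a separate computer to these agents.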

36:08

Speaker A

Now, as a software CEO, I'd really love to hear your perspective on what this means for software. I'll give some context. I've spent the past week and a half with my nose in Claude Code; I've just been going crazy with it. Initially it was, can you build me basically a software version of a spreadsheet that sends an email when I complete a field? But then it was, well, why don't you plug that into YouTube's API? And I'm looking for an apartment, so can you plug into StreetEasy and Zillow? All of a sudden it goes from me going to the Internet to the AI sorting through the Internet for me. You actually tweeted about this with the OpenClaw situation. You said: in a world of OpenClaw, Codex, Claude Code, Cowork, Manuscript (which Meta acquired), and other agentic systems, it's becoming clear that the future of software has to be API-first, but also enable human interaction for verification, collaboration with agents and people, and working on the output. So what does it mean for the software industry if it becomes API-first? On one hand, you're enabling your customers to get a tremendous amount of utility if they're interacting with you this way. On the other hand, Zillow probably got some value from me going there, and YouTube probably wants me on YouTube. Now it's all happening in the dashboards I've built with Claude Code.

40:06

Speaker B

Yeah. So maybe we'll separate the markets a little bit, because you threw in a lot of consumer products at the end. It's hard to say how much of the consumer Internet gets collapsed into API calls versus the average consumer still just wanting to go to YouTube and see the feed; they're not going to do that through an agent.

41:34

Speaker A

The YouTube use is strictly on the back end, the creator side of YouTube. We used it to sort thumbnails, rank them by click-through rate, and tell us how long people are staying on the videos. But point taken: on the consumer side, you're probably not going to want to go to your Claude bot to watch YouTube.

41:56

Speaker B

Yeah, and that's why I separate it a little bit. But you have to be a little sympathetic, or at least think it through, because major consumer properties absolutely are going to see a reduction in traffic when the answer just comes up in ChatGPT or when some kind of automated system is delivering the answer. So that's a whole category people have to think through. On the enterprise software side, that's obviously where we spend our time. I'll speak for Box for a second and then maybe broaden out to software generally. At Box, we're 100% excited about this, because one of the things agents are both really good at and also need for their workflows is your files. They need to be able to access information to work with, to answer questions for you, to produce new information, to store off memories and working sessions that you can go and interact with. They need to be able to read specifications and documentation. All of that ends up being files. So what we are building is a platform layer where, whether you're a person interacting with your data, an application that needs to access data, or an agent that needs a file system to interact with, we want to be the platform layer that connects all of that. The key reason we at Box think we're in a unique position is that we don't think it's enough for the agent to just have its own sandbox environment of a file system, nor is it going to work for people to just have a separate environment. You're going to need something that actually connects those two worlds together. People are going to need some form of end-user interface; even if that's inside a chatbot, they're still going to need to interact with their data through something visual.
And they'll likely eventually want to log into something, see all their content, and manage their sharing permissions and who they're working with. But agents just need a set of APIs, and they need to be able to work with those APIs to facilitate all the work they're doing. So what we're investing in is making sure we've got the most powerful capabilities for agents to work with all of the content you want to give them. Now there are all these new implications, like how do you give an agent a separate space to work in, where you're collaborating with that agent but its blast radius is somewhat contained, so it doesn't delete all of your data and all of a sudden you have a crisis on your hands because your OpenClaw agent went and mucked with everything. Which just happened

42:14

Speaker A

to Amazon, by the way. Not to interrupt you, but there was just this story in the FT that Amazon had data wiped out because the agent was like, you know what I'm going to do to fix this problem?

44:51

Speaker B

Just erase everything. I can make the problem go away: no more code, delete. You didn't like your solution, you didn't like your folder structure? Great, now there is none. So you have to be thoughtful about how you create the right lines of demarcation between these systems. But again, for us, if you imagine there are 5 or 10 or 100 times more agents in the future than people, which I think is a relatively safe assumption given the productivity increase they're going to enable, all of those agents are going to work with enterprise information. They're going to need a secure space to work with that information, to store data, to operate off of it, to answer questions for end users, and to store their own data. That's what we're building, and we have to make that as easy as possible for agents to utilize. I think there's a meaningful amount of software that already exists that will have to do the same thing: make itself ready for agents. There will be some forms of software that get compressed, where agents don't really need to use their tools the same way people did, and that's where you're going to see some pressure in the software market. And then there will be all-new platforms that have to exist, because we didn't anticipate the new problems agents are going to run into, and that's where you'll have API-first companies launched from the start, thinking only in terms of platforms. I think this is just going to be a tremendous amount of growth for anyone who at least has a play in that architecture.
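The contained-blast-radius idea can be sketched concretely: give the agent a scratch directory, resolve every path it touches, and refuse anything that escapes the sandbox. This is a minimal illustration with hypothetical names, not Box's actual API.

```python
from pathlib import Path


class SandboxedWorkspace:
    """Sketch of containing an agent's blast radius (hypothetical API).

    The agent gets full read/write inside its own scratch directory,
    but every path is resolved and checked, so a confused agent can't
    "fix" a problem by deleting files elsewhere on the machine.
    """

    def __init__(self, root: str):
        self.root = Path(root).resolve()
        self.root.mkdir(parents=True, exist_ok=True)

    def _check(self, relpath: str) -> Path:
        # Resolve against the sandbox root and reject escapes like "../".
        target = (self.root / relpath).resolve()
        if self.root != target and self.root not in target.parents:
            raise PermissionError(f"outside sandbox: {relpath}")
        return target

    def write(self, relpath: str, text: str) -> None:
        path = self._check(relpath)
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(text)

    def delete(self, relpath: str) -> None:
        # Deletes are allowed, but only for files inside the sandbox.
        self._check(relpath).unlink()
```

With a check like this in place, the "erase everything to make the problem go away" failure mode is limited to the agent's own workspace rather than the whole file tree.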

45:01

Speaker A

Okay, so you mentioned productivity, and I think this is worth examining as we end the show, because there's this discussion around AI where the productivity increases are sort of accepted: they're already here, or they will be. But the data is a little mixed, and I want to run it by you and get your perspective on what the data is saying. This is from Fortune: thousands of CEOs just admitted AI had no impact on employment or productivity, and it has economists resurrecting a paradox from 40 years ago. It talks about how in the 1960s we had transistors, microprocessors, and integrated circuits, yet productivity growth actually slowed, from 2.9% beforehand to 1.1% after 1973. And now you have all these CEOs who have been polled, 6,000 of them. Two-thirds of the executives reported using AI, but only about 1.5 hours a week. 25% of respondents reported not using it in the workplace at all. Nearly 90% of the firms said AI had no impact on employment or productivity over the last three years. Maybe this is research done last year, but even still, you know.

46:34

Speaker B

Actually, I'm curious, when was that published, or when was the research conducted?

47:51

Speaker A

It was published February 2026, but I don't know exactly when the research was conducted.

47:54

Speaker B

Yeah, but with that number of respondents, it probably would have been sometime last year. But sorry, keep going.

48:01

Speaker A

No, go ahead.

48:08

Speaker B

Oh, as in, like, defend AI or what?

48:09

Speaker A

I was just going to ask what your perspective is here, because we want to pressure-test some of these assumptions: that we're going to have more AI agents than workers, that this will lead to an increase in productivity. Whereas we're still seeing data where that is, at best, up in the air.

48:15

Speaker B

Yeah, I can understand the dissonance that might be out there between the tech-enabled economy and the rest of the economy. What's happening is that in tech, these agents are so effective at coding, and developers have far fewer barriers to adopting agents for coding than the rest of the knowledge-worker economy has for the same kind of productivity-gain use cases. In coding you've got these incredible properties: the models are hyper-trained on code, and coding itself is a text-only medium. Dario and Dwarkesh on their latest podcast hinted at an interesting point, which is that your code base contains most of the context you end up working with; it's got your documentation and all of the existing work you've done. And developers are obviously more technical, generally more tapped into the Internet and the latest trends; they pull down the latest new products and try them out. Now compare that to the rest of knowledge work: the marketer at a CPG company, the lawyer at a mid-sized law firm. I'm making up caricatures of various job functions, but basically, they're going about their day, and they're not thinking, how do I construct my workflow to fully take advantage of agents and automate everything I'm doing? That's probably just not top of mind for most knowledge workers.

They're going to go to chatbots, ask some questions, get an email written for them, summarize a document, build a new strategy plan, and the company will do incrementally a little more as a result. Maybe their strategy changes a little, or the financial analyst comes up with some new insights. That's probably been the state of AI for the past couple of years, at least whenever a survey like this would have tried to analyze it. Compare that to engineering, where, and these are the estimates from our actual engineers, we will build products five times faster because of AI coding, and as a result we'll be able to ship significantly more capabilities to our customers and solve significantly more problems for them. In many cases we might not even charge more for that functionality; we'll pack it into existing licenses because we now can. So to some extent, what would you even measure in our productivity? This is now just a priced-in thing we do, because we have to deliver more and more value; tech is hyper-competitive and we want to add more capability for our customers. That has not yet rippled through the rest of knowledge work, and I think it will have to, because the tools will get better and better, and you'll have one competitor in a market that's able to use AI to either lower their costs, lower their fees to the customer, or deliver a substantially better product. As you see more and more examples of that, it will start to transform these market dynamics.

Equally, I like to operate off of, I think Bezos had this line: when the anecdotes and the data disagree, you have to look at the anecdotes. So look at the headline from two weeks ago about KPMG being asked to lower their audit fees because of AI. That, I think, is your initial signal of what's actually going to happen: a company is going to say, the kind of work we now know we can bring automation to, we should be spending less on, and then use those dollars to do something else in the company that is higher-productivity or makes us more effective or more competitive. Once you do that dozens or hundreds or thousands or tens of thousands of times in an ecosystem, that's where you'll start to see this reshaping of how these markets play out. It's happening in tech unquestionably, and now the only question is the roadmap to it happening across the rest of the economy. That's going to take time. People have to change their workflows. People don't have data set up in a way that's prepared for agents. The agents themselves don't always have the right interfaces or tooling to be supported in knowledge work. So I'm actually extremely pragmatic about this: I could agree with the survey you just read and equally be completely unfazed, and more than anything just say people should be prepared for this to come to more areas of knowledge work. I'm the biggest optimist on the jobs impact of that, so I don't see it as a scary thing. I think it's just going to mean companies will have to sign up to do way more for their customers. That's where it will show up: we'll have a surplus on the consumer side, where all of the vendors we work with will just have to deliver better and better services for us.

Or if you're a B2B company, all of your vendors will have to deliver greater services, and we will wake up in five or ten years and it'll feel relatively normal. It's not going to be the sci-fi movie; we'll just have incrementally better consumer experiences and better services. Just as if you went back 40 years and tried to imagine the life of a lawyer or a healthcare professional, you'd be like, wow, how did you do your job without a computer? How did you understand legal case precedents without an Internet search? That's what work will look like in five years: you'll be like, how did you do that without an agent that drafted your entire contract instantly, so you could respond to the client on the phone? We'll have that same set of questions and be confused about how we even work the way we do today. But it won't be some kind of complete transformation where people stop working. They'll work together, they'll deploy tasks to agents, those agents will go off and do the work, and then people will bring it back to the task at hand to move their work or project forward.

48:38

Speaker A

That's right.

55:26

Speaker B

Yeah.

55:27

Speaker A

When I'm watching Claude Code go, I look at it and say, wait, people did this before? That seems like a lot of time spent on things that are automatable. But.

55:27

Speaker B

No, but literally, you used to have to spend two weeks on a library change you wanted to make in your code base, and that's now a 10-minute activity. But are we spending any less time building software? No, because we're now doing the things we didn't get to before, because we were spending those two weeks doing the library update.

55:34

Speaker A

Right. Okay, Aaron, we have to get you out of here because you have to go to your next meeting, I think. But I just want to say thank you again. It's always great having you on the show. Next Wednesday, we're going to have Michael Pollan on. He is the author of a new book about consciousness, so we'll talk about AI consciousness. All right, everybody, stay tuned for that, and we'll see you next time on Big Technology Podcast. Did you know your credit card points and miles can lose value to inflation? Credit card companies often reduce the redemption value of your points and miles. Now imagine a credit card with rewards that can grow in value. With the Gemini credit card, you can earn Bitcoin or one of over 50 other cryptos instantly, with no annual fee. Every swipe at the store or gas pump earns you instant rewards deposited straight to your account. Visit gemini.com/card today. Check out the link in the description for more information on rates and fees. Again, if you're looking to invest in Bitcoin but don't know where to start, the Gemini credit card makes it easy. Issued by WebBank. This is not investment advice, and trading crypto involves risk. Check Gemini's website for more details on rates and fees.

55:56

Speaker B

Well, the holidays have come and gone once again. But if you've forgotten to get that special someone in your life a gift, Mint Mobile is extending their holiday offer of half off unlimited wireless. So here's the idea: you get it now and call it an early present for next year. What do you have to lose? Give it a try at mintmobile.com/switch. Limited-time

57:02

Speaker A

50% off regular price for new customers. Upfront payment required: $45 for 3 months, $90 for 6 months, or $180 for a 12-month plan. Taxes and fees extra. Speeds may slow after 50 gigabytes per month when the network is busy.

57:22

Speaker B

See terms.

57:31