The Information's TITV

Nvidia’s GPU Crunch Hits Microsoft, GPT-5.5 Review, Meta’s AWS Chip Deal

34 min
Apr 24, 2026
Summary

The Information's TITV covers the GPU supply crunch hitting startups and major cloud providers, reviews OpenAI's new GPT-5.5 model showing strong coding capabilities, discusses Meta's 10% workforce reduction and AWS chip partnership, and breaks down the exclusive details behind Cursor's potential $60 billion acquisition by SpaceX.

Insights
  • GPU scarcity has worsened since 2023 as mega-customers like OpenAI and Anthropic commit hundreds of billions to compute, creating a bidding war that locks out smaller startups from affordable access
  • Cloud providers now enforce strict minimum commitments (1,000+ cores) and aggressive underutilization policies, forcing startups to consider alternative infrastructure or direct GPU purchases
  • Tech layoffs marketed as cost-cutting are often followed by workforce rehiring with different talent, suggesting the real goal is workforce composition change rather than permanent headcount reduction
  • Compute access has become the primary bottleneck for AI startups, driving strategic decisions like Cursor's acquisition by SpaceX to gain infrastructure access rather than remaining independent
  • Model capability leadership oscillates between labs, with product/application announcements correlating inversely with benchmark performance—suggesting companies emphasize applications when not leading on performance
Trends
  • GPU hoarding by hyperscalers and mega-customers is creating a two-tier market with Fortune 500 companies getting priority while startups face weeks-long queues and use-it-or-lose-it policies
  • Vertical integration becoming critical strategy as AI companies acquire or partner for compute, models, and infrastructure to reduce dependency on competitors and cloud providers
  • Efficiency metrics (tokens per dollar, latency, cost) are replacing pure quality optimization as enterprises mature their AI spending and cost consciousness increases
  • NeoClouds (CoreWeave, Lambda, etc.) failing to serve startup segment as originally intended, instead prioritizing large customers and creating new opportunity for inference-focused providers
  • AI coding model competition intensifying with Anthropic's Claude Code growing faster than Cursor, creating investor wariness about startup durability in crowded vertical AI markets
  • Consolidation accelerating in AI application layer as startups seek acquisition or partnership to solve compute constraints and compete against well-capitalized incumbents
  • Tech workforce restructuring pattern: layoffs followed by selective rehiring of specialized talent (AI engineers) at potentially higher costs, masking true employment impact
  • AWS gaining momentum in AI narrative through strategic partnerships and chip deals despite being perceived as laggard 18 months ago, though actual market share gains remain unproven
Companies
Microsoft
Azure cloud division facing GPU allocation challenges; implementing tiered customer access and strict underutilization policies
OpenAI
Major GPU consumer spending hundreds of billions on compute; receiving priority allocation from cloud providers; released GPT-5.5
Anthropic
Competing mega-customer hoarding GPUs; Claude Opus 4.7 leading coding benchmarks; expanding into vertical applications like legal and finance
Meta
Announcing 10% workforce reduction (8,000 employees); investing heavily in AI compute and custom chips; partnering with AWS on Graviton chips
Amazon Web Services
Positioning itself as an AI enterprise offering through partnerships; promoting internally designed Graviton chips; gaining momentum in the AI narrative
Cursor
AI coding startup with potential $60 billion SpaceX acquisition; achieved $2.7B annualized revenue but faced negative gross margins
SpaceX
Acquiring coding startup Cursor for potential $60 billion with $10 billion breakup fee; providing compute infrastructure access
Google
Mentioned for previous layoff patterns where workforce rebounded to historical highs; competing in AI model and cloud markets
NVIDIA
GPU manufacturer whose chips are central to the AI compute bottleneck; Blackwell chips subject to minimum 1,000-core commitments on Azure
General Catalyst
Venture capital firm whose portfolio companies are struggling to access GPUs; organizing collective GPU procurement to improve pricing leverage
Founders Fund
Venture capital firm whose portfolio companies facing GPU access challenges alongside other major VC firms
Sequoia
Venture capital firm whose portfolio companies struggling to secure GPU allocations in current supply crunch
CoreWeave
NeoCloud provider that was positioned to serve startups during the 2023 GPU crunch but has since prioritized large customers
Harvey
AI legal startup using GPT-5.5 and other models; achieving strong benchmarks on legal work; competing against Anthropic's push into legal
Intel
Stock surged 25% on quarterly earnings; CPU market becoming more relevant to AI; company showing signs of recovery after being written off
Lovable
AI coding startup competing with Cursor and Anthropic's Claude Code in crowded vertical market
Andreessen Horowitz
Early investor in Cursor since Series A; positioned to make billions in returns from potential $60B SpaceX acquisition
Thrive Capital
Early Cursor investor since Series A; positioned for substantial returns from potential SpaceX acquisition
Excel Venture Partners
Later-stage Cursor investor positioned to benefit from potential $60 billion SpaceX acquisition
Benchmark
Later-stage Cursor investor positioned to benefit from potential $60 billion SpaceX acquisition
People
Akash Pasricha
Host of TITV episode covering GPU crunch, GPT-5.5 review, Meta layoffs, and Cursor-SpaceX deal
Aaron Holmes
Reported on Microsoft's GPU allocation strategy, tiered customer access, and Azure underutilization policies
Anissa Gardizi
Reported on GPU supply crunch, startup access challenges, and NeoCloud provider strategies
Stephanie Palazzolo
Co-authored deep dive on GPU crunch and cloud provider dynamics
Nico Gruppen
Reviewed GPT-5.5 model performance on legal benchmarks; discussed coding capabilities and model efficiency improvements
Martin Peers
Analyzed Meta's 10% layoffs, AWS chip deal, and Intel earnings; discussed tech workforce restructuring patterns
Julia Hornstein
Reported on Cursor's financial profile, competition, and SpaceX acquisition deal details and cap table winners
Corey Weinberg
Co-authored deep dive on Cursor-SpaceX deal and backstory
Aaron Wu
Co-authored deep dive on Cursor-SpaceX deal
Katie Roof
Co-authored deep dive on Cursor-SpaceX deal
Quotes
"It's not that great out there for startups who are looking for GPUs. The people that we talk to said that this is their biggest bottleneck that they're facing this year."
Anissa Gardizi (Early segment)
"You have to commit to reserving, in most cases, a thousand cores or more of the NVIDIA Blackwell chips. And that costs tens of millions of dollars over the time period that you have to reserve it, which is like one to three years."
Aaron Holmes (GPU crunch discussion)
"The gap is starting to close. And with GPT 5.5, I think, you know, it may be back in pole position."
Nico Gruppen (GPT-5.5 review)
"The history is that things don't always work out the way that they appear at the time of the announcement. So I mean, if we just look back at previous rounds of cuts, I mean, the idea that they do these cuts and then the people kind of trickle back over time."
Martin Peers (Meta layoffs discussion)
"Compute access is a huge bottleneck for startups and is super important. I mean, as I mentioned, you can see that in another story that my colleagues published today about startups facing a compute crunch."
Julia Hornstein (Cursor deal analysis)
Full Transcript
Welcome, everyone, to The Information's TITV. My name is Akash Pasricha. It is Friday, April 24th. First up today, The Information published exclusive reporting about the current state of the GPU crunch, with some inside reporting on how Microsoft and General Catalyst are responding. We'll then dig into GPT 5.5 with a top researcher at Harvey who has used the model. We're also digging into Meta's 10% layoffs and why our co-executive editor Martin Peers thinks this time could be different than previous rounds of cuts that big tech companies have made. We'll also get into Meta's new chip deal with Amazon, and we'll talk briefly about Intel earnings. We'll then wrap the show with an exclusive look behind the scenes of Cursor's deal with SpaceX. It's going to be a fun show, so let's get right on into it. GPUs are still hard to come by, and the supply crunch has caused reverberations throughout Silicon Valley. My colleagues Aaron Holmes, Anissa Gardizi, and Stephanie Palazzolo published a deep dive on that dynamic this morning with inside reporting on Microsoft, General Catalyst, and fast-growing startups. I want to bring on Aaron and Anissa to share more. Welcome to the both of you. Anissa, I want to start with you. What is going on in chip land? In chip land, things are very reminiscent of 2023, when we were constantly hearing from startups and their investors that it was extremely difficult for companies to find access to NVIDIA GPUs. And the reason, much like 2023, is that some of the large cloud providers are hoarding some of these GPUs for their own internal teams and their largest customers, such as Anthropic and OpenAI. So it's not that great out there for startups who are looking for GPUs. The people that we talk to said that this is their biggest bottleneck that they're facing this year. Oh, this year. Okay, so it's not quite as bad as 2023, but it's as bad as it's been lately. 
I mean, I'd love to hear what Aaron thinks, but from what I can tell, it sounds a little bit worse than 2023 because the business model around AI is a little bit more clear, and OpenAI and Anthropic are spending even more money than they were spending back in 2023. So it seems worse to me. Aaron, what do you think? Yeah, I think to me what stands out is, you know, 2023 was this moment right after ChatGPT came out where every company wanted to experiment with AI and see where it would fit in to, you know, their business. And I think what's different now is that there's a lot of companies that know they need to use GPUs, especially the mega customers like Anthropic and OpenAI who are seeing this huge AI coding demand boom. But as a result, that is leading to this almost bidding war where prices of GPUs are going up. And I'm hearing that at Microsoft's Azure Cloud, for example, you actually can't even reserve a small number of newer GPUs. You have to commit to reserving, in most cases, a thousand cores or more of the NVIDIA Blackwell chips. And that costs tens of millions of dollars over the time period that you have to reserve it, which is like one to three years. So that's kind of a bolder sell from the cloud providers than I think we've ever seen before when it comes to GPUs. So Anissa, Aaron is talking about prices going up. If you're a startup, do you have any bargaining power here? Do you just accept that the price of renting these chips is higher than it would have been? How are you coping? If you're looking for maybe a thousand to a couple thousand GPUs, you really don't have any bargaining power right now. Maybe you did a year ago, but these days it's going to be really tough for you to try to get a good price from your cloud provider. We talked to one startup in particular who was able to get a good contract last fall for six months, renting GPUs for just under $3 per GPU per hour. 
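Aaron's numbers can be sanity-checked with some rough arithmetic. A minimal sketch, assuming round-the-clock billing and the just-under-$3-per-GPU-hour rate the startup above got last fall (actual Blackwell reservation rates are likely higher and vary by contract):

```python
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def reservation_cost(num_gpus: int, price_per_gpu_hour: float, years: int) -> float:
    """Total committed spend for a block of reserved GPUs, billed around the clock."""
    return num_gpus * price_per_gpu_hour * HOURS_PER_YEAR * years

# A 1,000-GPU minimum commitment at $3/GPU/hour:
print(reservation_cost(1_000, 3.0, 1))  # 26,280,000.0 -> roughly $26M for one year
print(reservation_cost(1_000, 3.0, 3))  # 78,840,000.0 -> roughly $79M over three years
```

Even at last fall's discounted rate, a one-to-three-year minimum commitment lands squarely in the "tens of millions of dollars" Aaron describes.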
And earlier this year when they went back on the market looking for GPUs, some of the same people who were competing for their business weren't even giving them a call back. So that's just a really good example of how quickly things have changed. And we also learned that portfolio companies of some of the largest VC firms, General Catalyst, Founders Fund, Sequoia, their startups are struggling to get GPUs. And so recently, General Catalyst sent out a note to all of their founders asking them, you know, what is your ability to get GPUs? Sort of in the vein of saying we might end up helping you guys all get GPUs and negotiate on your behalf so that you guys can be basically a larger customer trying to get a better price. So if you're looking for a couple thousand GPUs, right now it's pretty tough, and that's different from just a couple months ago. Aaron, let's go back to what you guys were talking about earlier in terms of the cloud providers, quote unquote, hoarding these GPUs. You cover Microsoft. Tell me a little bit tactically about how Microsoft has been dealing with the supply crunch on the ground and how it's balancing its own compute demands with the compute demands of its customers. Yeah, so I mean, this is something that Microsoft is looking at really closely. And the company has actually, you know, been pretty open about the fact that their Azure revenue growth is being constrained essentially by how many GPUs they need to reserve for their own, you know, serving and developing co-pilot. And then on top of that, what I learned in reporting for this story is that there's sort of different tiers of customers that can get GPUs on Azure. I think at the top, there's the OpenAIs and Anthropics, which Azure is actively building massive new GPU clusters for. And then beneath that, there's sort of tier one, which is like the Fortune 500 companies that have already a huge spending footprint on Azure and can kind of get first dibs on GPUs. 
Below that, you might have some smaller firms that don't have quite as much revenue to throw around, but that still have good relationships with Azure. And then everyone else is kind of in the Wild West where if you want GPUs, you have to essentially wait weeks or months to get capacity. And then once you get capacity, Azure is, from what I'm hearing, being very kind of stringent about if you aren't using the GPUs and running them around the clock, and even if they're down for just a couple hours on a Friday night, then you will essentially be kicked off your GPU cluster for underutilization and kicked back to the back of the month's long queue. So that's kind of leading to a situation where smaller companies feel like they can't rely on having steady GPU access at Azure, which is leading to them making decisions like maybe they should just buy GPUs and run them somewhere else or look for other deals with NeoClouds. It's definitely this rapidly shifting landscape that is leaving these AI startups feeling a little bit insecure about their ability to get GPUs right now. So, Anissa, Aaron's told us about Microsoft. Do you have any sense if other cloud providers are playing the same game here in terms of this use it or lose it policy? And, I mean, are the NeoClouds, for that matter, are they adopting sort of similar approaches at all? The NeoClouds are really interesting when it comes to this topic, because Aaron and I actually reported this together in 2023. But at that time, a lot of the startups were going to the NeoClouds. And this was seen, this entire problem was seen as a reason that NeoClouds should exist. You know, Amazon, Microsoft, Google might not think that you're important enough to give you an allocation of a large amount of GPUs. but companies like CoreWeave were really marketing themselves as the place to go if you couldn't get GPUs and they saw themselves as this was one of the big reasons that they should exist. 
But as we've seen over the past three years, a lot of the NeoClouds have actually just prioritized serving the largest customers out there. So they're not really waiting in the wings helping these startups. And so it's a much different time. And, you know, maybe we'll see new NeoClouds. I think some of the inference providers are hoping to tackle the startup space. But it's a lot different than 2023, when some of these startups thought that they could go to the NeoClouds in this situation. Anissa, last question for you. You know, I'm thinking back to 2023, and I'm recalling that part of that supply crunch was not just demand. And it was also sort of, I mean, the supply chain itself, you know, it was still coming out of that big environment where, you know, it was tough to get these things in people's hands. Not clear to me if that is as much of an issue today. But my question is, is there any signal that this is going to get better or that things might be improving? There's not much signal that this will improve in the short term for startups because they're waking up every day just like us and seeing these massive announcements from OpenAI, Anthropic, about spending hundreds of billions of dollars on compute from the same people they're trying to get compute from. So I think unless the biggest companies that have reserved the chips end up not needing them, and the cloud providers end up reclaiming them and redirecting them to smaller companies, there's not really a good end in sight from my perspective. Great. Well, Aaron and Anissa, I want to thank you for coming on. That is Aaron Holmes, our Microsoft reporter, and Anissa Gardizi, our cloud and compute reporter here at The Information. 
OpenAI released its GPT 5.5 model in the latest addition to the AI model rivalry. I want to bring on someone who has been using it extensively for an early take on how the model is performing. Nico Gruppen is the head of applied research at AI legal startup Harvey. Nico, good morning to you. It's great to have you back. You have been using 5.5, is that right? That's correct, yeah. And what's the review? Give us the lowdown. What a difference a week makes. New models have entered the arena. Akash, last week it was Opus 4.7; this week, it's GPT 5.5. So the spud has officially landed. There's a lot of buzz in the ether on social media about this model. And from everything we've seen thus far, that buzz is warranted, right? So on our internal benchmarks, you know, Big Law Bench, it posted one of the all-time best scores at 91.7, did really well across both transactional and litigation-focused legal work. And it posted great scores on other external benchmarks, importantly, not only those focused on code generation. So quality seems to be there. I was going to ask you specifically about code generation, just given how important code generation seems to be as a capability. I mean, they're calling it, well, they said GPT 5.5 is our strongest agentic coding model to date. Strongest, certainly in terms of the OpenAI family, but how does it rank against other companies' models in terms of coding capabilities? Yeah, I think, you know, for the better part of three months now, and I think starting from the beginning of the year, there was a tangible gap between these models, right? I think everyone recognized that Opus 4.5, 4.6, and then through to 4.7 was driving Claude Code in a way that created a real gap in distance. I think, from my perspective, that gap is starting to close. And with GPT 5.5, I think, you know, it may be back in pole position, right? The reason that I say maybe here is that there's a couple of things to watch out for, right? 
First, it's a research preview model. It's primarily available in Codex. As it becomes more generally available via API, and we get more widespread access, that's where you see these differences really start to show publicly. And then secondly, we still have Mythos sort of lurking in the background, right? And so you can make point-in-time observations, but you have to make decisions based on projections of where things are going. You say Mythos lurking in the background. I mean, are you as a researcher expecting to ever be able to use Mythos? Because my understanding of it, and let's put leaks aside for the moment, because we know that, you know, that's not the official way people are supposed to be able to use Mythos, right? So, I mean, there's a decision to make whether or not they're going to make it available to the public. I think the consensus has been it might not be the smartest idea to make it available to the public. So when you say it's lurking in the background, I mean, are you expecting at some point to be able to use it? My expectation is that the labs will continue to push on performance and on efficiency. And the incentives are there to push both of those, right? So my hope is that they'll continue to release these things and make them generally available. And, you know, whether they're not available because of security issues or compute constraints or for whatever reason, I think the incentives will be there, right, to use them, right? If there's another equally performant or higher-performance model on the market, the incentives for the consumers of models will be to switch traffic over to those. And so I expect the labs, OpenAI, Anthropic, and DeepMind, to continue to push and release the frontier of model intelligence. Can I ask, are you guys concerned at all about the costs of these models, the hypothesis being, hey, a lot of this technology is subsidized right now? 
Once they start to charge what they need to in order to turn a profit, it's going to be really hard for application-layer companies to afford all this. Is that something you think about on the ground? Yeah, absolutely. And this is actually one of the high points for the GPT 5.5 model release, exactly that efficiency. So I think we've been living in a world for the last two years where quality has taken precedence over things like latency and cost. The reason for that is adoption, right? If quality is lower, adoption is lower. We're in a nascent industry. Adoption and traction are key. But I think enterprises and companies like ourselves are wising up, developing a more mature stance with respect to token spend. And so the idea that a model like GPT 5.5 is a higher-quality model than GPT 5.4, but in some experiments has shown a 50% reduction in reasoning tokens, much greater efficiency, those things become really important. Right. So we're starting to think about things less through the lens of what is the maximum quality that we can get from a model and more from the perspective of what is the quality per dollar spent or the quality per token used. Last question for you, Nico. So Anthropic came out this week with some news that they are pushing deeper into the legal space as well. And I think people probably saw this coming, the big labs starting to move into applications. Obviously, Harvey is a big name in the legal space. How are you thinking about that release and what your differentiation is against a giant company like Anthropic? Yeah, they've certainly been busy, haven't they? I think it was design, codegen, finance, cybersecurity, and legal, all verticals they've touched in the last 10 days. So a lot that's going on there. Look, our focus will always be on ensuring that Harvey is the best bet for legal teams, that the tools and services that lawyers need to be first class will be made first class. 
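Nico's "quality per dollar" framing can be sketched as a simple metric. The figures below are hypothetical, purely to show how a 50% cut in reasoning tokens moves the metric even when the benchmark score barely changes:

```python
def quality_per_dollar(quality: float, tokens: int, price_per_million: float) -> float:
    """Benchmark score divided by the dollar cost of the tokens consumed."""
    cost = tokens / 1_000_000 * price_per_million
    return quality / cost

# Hypothetical: same $10/M-token price, newer model uses half the reasoning tokens.
old = quality_per_dollar(quality=90.0, tokens=2_000_000, price_per_million=10.0)
new = quality_per_dollar(quality=91.7, tokens=1_000_000, price_per_million=10.0)
print(old, new)  # 4.5 9.17 -> quality per dollar roughly doubles
```

The point of the metric is that halving token usage dominates a small quality gain once enterprises start optimizing spend rather than raw scores.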
And I think there's an important pattern to pay attention to here, which is when are the labs talking about model capabilities and when are they talking about product differentiation, right? If you're ahead on model capability, or in your upswing about to release a new model, you're going to be emphasizing benchmarks, evaluations, model performance. And when you're in a downswing, or when you're in between launches, let's say, you're talking about product capabilities, right? And so no surprise here. Anthropic has done a great job, actually, of marketing themselves to the enterprise, to be the AI enterprise sort of offering. And so it's no surprise that they're marketing their product to the highest-TAM verticals, financial services, legal services, and codegen being amongst three of the biggest. So this is kind of interesting, because I never thought about this, but are you maybe suggesting, I'm not putting words in your mouth, but I guess if you were to study where the models rank on the benchmarks, you know, and the oscillations between who's on top, and if you were to look for a correlation between that and press releases for applications, I mean, what we're suggesting here is there could be a correlation there, right? Or an inverse correlation: when you're on the bottom, that's when you're talking more about your applications. Yeah, I think about this from our side too, right? There is a certain state of performance that's reached by any given model release. And then there's a lot that you need to do between model releases, right? And the same is true for the labs, right? You have this long kind of pre-training clock that's running. You have a much shorter post-training clock that's running. But there are still six to eight weeks of time to fill between model launches. So you've got to do something. Great. Okay, well, Nico, I want to thank you for coming on. As always, that is Nico Gruppen, head of applied research at Harvey, here on TITV. 
Meta is cutting 10 percent of its headcount, which amounts to around 8,000 people. Here to break down the news in this week's edition of The Editor's Cut is our co-executive editor, Martin Peers. Martin, welcome back to the show. It's great to have you here. You wrote this column last night in the briefing about Meta's cuts, and you suggested that maybe this round of layoffs could be different than past rounds of layoffs. Well, the point I was making was that the past rounds by all of the companies, you know, Google as well as various other companies, were always sort of reported as, and the companies made this point, major cost-cutting exercises. But in fact, if you actually track their employee numbers in the years after those rounds, and I'm talking about the rounds of cuts that started in 2022 and went through 2024, in many cases the workforces went back to at least the point where they had been before the cuts. Or not always right back up, but in some cases, Google, for instance, entered last year with more people than they had had at any point in their history. So I'm not sure that will happen again this time. I mean, one of the reasons that was happening was that these companies were replacing the people that they had and bringing in more specialized talent, possibly for AI. Now, I think they are replacing people in order to free up, you know, the money they spend on engineers so they can spend more on AI. It's not clear. I'm just saying that you don't really know at the time they make these announcements what is actually going on. And you have to track things over time to see the real impact. And when you say spend more on AI, this is like, I mean, these are all those investments in data centers, compute, I mean, all the physicals. There is that part of it, but there's also the spending on the AI tools, which is very expensive. 
And as these companies adopt the AI tools, they have to find a way to reduce spending elsewhere. And the obvious way is to cut the people. And obviously the AI tools are aimed partly at replacing what some humans do. So that's what everyone is thinking is happening. I'm just saying that, well, the history is that things don't always work out the way that they appear at the time of the announcement. So, I mean, if we just look back at previous rounds of cuts, I mean, the idea is that they do these cuts and then the people kind of trickle back over time, or in some cases. Different people. Different people. Different people. Oh, yeah. Sometimes different people. I mean, we've heard the stories, though. Sometimes people just got hired back. My guess is that they're not hiring back the same people. I mean, maybe every now and then, but I think it's mostly different people. But I mean, then the cuts are, because I was looking at Meta's share price this morning and yesterday, I mean, the share price didn't really move. And oftentimes you see some sort of a pop as investors get excited, right? Well, a couple of things about that. One is that these cuts have been reported for a while as they were actually on the way. So no one is surprised. The other thing is that everyone knows Meta is spending a fortune on AI, both in capex and operating expenses in terms of hiring people. And when they're hiring people, they're spending a fortune on hiring them. So I think, you know, Wall Street probably is a little bit like, who cares? Now, in other Meta news, I wanted to get your read on the Meta-AWS deal that was announced this morning, I believe. So Meta is going to be using some of AWS's chips, the Graviton chip. My sense on this was this is actually more of a story that would impact AWS than Meta, but I wondered what you thought of it. I think that's right. I think we're in a period now of every company in this area making announcements about deals that they're doing. 
Most of these announcements sort of fit into what they've already announced. So Meta has already announced that they're going to spend an absolute, you know, huge amount of money on CapEx, on chips. They are developing their own chips. They're also buying those from others. This is all just part of that. So I don't think it's that meaningful for them. I think it is meaningful for Amazon, which is trying to prove to the world that there is demand for its AWS business, including its own internally designed chips. So yeah, I think you are right. Well, and if that's the point, showing that there is demand, I mean, it does feel like AWS has been another come-from-behind story, in the sense that it feels like, what, 18 months ago, we were sort of talking about them as a bit of a laggard in the AI game. But I mean, between the OpenAI investment and then all these chip deals, it feels like AWS is finding their way into the conversation more and more. I mean, that's what I mean about the announcements. They're finding their way into that, but are they actually coming back? We don't know. I think it's a bit early to say. I think this is a very big market. There's lots of opportunities for everyone. I mean, obviously AWS is a very big firm, and they will have a role to play. But are they catching up to the others? I just don't think that we know yet, and I wouldn't be overly influenced by these announcements, which are just PR. I mean, you know, who knows. Okay, okay. Now, let's talk about Intel quickly, in other chip news. So Intel reported its quarterly results. Shares are up 25 percent. 25 percent? It's 22 this morning. Oh, it's already calmed down, is it? Okay, well, it was up 25 earlier. So far this year, Intel stock is up 125 percent or so. And that's because Intel, which was left for dead, is now showing it's alive. Intel is, you know, I think the CPU, which is where Intel is really still, they still have a big presence in that market. 
That, I'm not sure what you call it, the category of chip, is becoming more important in AI. And so Intel is finding its way into the discussion, as you might say. I think, again, it's a bit early to say whether it will ever reclaim the position it was in. Probably not. But Intel is alive, and that's good for America and it's good for Intel. Well, yeah, I mean, literally good for America as a shareholder in the company. It's also good for America's ability to make its own chips, which is important. Right, right. All right, well, Martin, I want to thank you for coming on. That is Martin Peers, our co-executive editor here at The Information. Cursor's deal with SpaceX took Silicon Valley by surprise in many ways. SpaceX could acquire the coding startup for $60 billion, and equally as surprising is the backstory behind Cursor's road to that deal. My colleagues Corey Weinberg, Julia Hornstein, Aaron Wu, and Katie Roof wrote a deep dive with inside details about that. And I want to bring on Julia to tell us more about what she learned. Julia, welcome back to the show. It's great to have you here. Great to be here. So you see this announcement that Cursor and SpaceX could be doing a potential deal. You start working the phones. What are the questions that you want to get answers to as a reporter? Some of the questions that we've been wondering about for a while are, you know, can Cursor sustain its explosive growth, or is it peaking? I mean, we're also kind of curious about what its margins and cost structure looked like. And after SpaceX earlier this week said that it has the right to acquire Cursor for $60 billion or pay a $10 billion breakup fee, we really wanted to dig into what pushed Cursor toward the SpaceX acquisition instead of, you know, staying the course. 
We were also wondering why investors, up until, you know, late last week, before the SpaceX deal was announced, were still willing to fund Cursor at a $50 billion valuation, because that fundraise would have, you know, nearly doubled its valuation from the previous round in November 2025. And we were also thinking about, you know, increasing competition from not only Anthropic's Claude Code and OpenAI's Codex, but also other vibe-coding startups like Lovable. So we really sought to figure out why investors were, you know, wary of Cursor, and whether they thought the startup had durability and a moat in light of this increased competition. Okay, so there's a lot of great questions there. So let's go through them part by part here. Let's start with Cursor's financial profile. What did you find there? Yeah. So, I mean, we found that while Cursor's revenue is still on the up, I mean, the company hit $2.7 billion in annualized revenue last month and generated about $770 million last fiscal year, which represented a 24-times increase from the previous fiscal year, the company still had really significant costs, and its gross margins were negative, you know, negative 23% as of the quarter ended in January. We've heard that they've since turned positive, but those negative gross margins may have been a concern for potential investors who were floated the most recent, now-called-off fundraise. And we also found that, you know, top investors who had passed on the deal were kind of wary of competition and capital constraints. You know, these are things that the SpaceX acquisition could solve. It would solve the funding issues. It would also solve Cursor's compute hurdles, since they'd get access to SpaceX's vast servers. But those gross margins were really a sticking point. And so I don't want to gloss over this point. So the company was trying to complete a funding round and, you know, what, was not having much luck there?
What were the issues with that funding round that maybe prompted it toward this SpaceX deal? So the funding round, I mean, it was still on for all intents and purposes until the SpaceX deal came in. I think that the SpaceX deal just gave Cursor an opportunity to access a ton of compute, which, as my colleagues reported in another story today, has been a huge issue for startups recently. So I think that that was a really big driver for Cursor toward the acquisition. So now, the other question that you said you were interested in is how Cursor is stacking up against competition. Namely, I mean, Claude Code and Anthropic is the big competitor in the space. How do customers see it as a competitor there? I mean, investors are seeing exactly what you and I are seeing. Claude Code is growing really fast. It took it only six months to reach $1 billion in annualized revenue, and it took Cursor roughly a year to reach that same number. And, you know, scale matters too. Anthropic can outspend Cursor on things like compute and R&D. And Cursor's coding tools are in part powered by Anthropic's models, though the company has recently developed its own coding model. But regardless, this dynamic could create real hangups for investors. And not to mention all the other coding startups like Lovable that are in the mix here as well. Right. And I mean, you make a good point, which is that if Cursor is relying on Anthropic's models to begin with, I mean, here's a scenario where Anthropic, I'm sure, could very easily just, you know, out-innovate Cursor in some way. You know, I'm just guessing here, right? If Anthropic wanted to, they could go a lot harder on the competition in some ways. I mean, well, we've kind of been seeing that already. Claude Code has had such explosive growth since people really started talking about it late last year. So I think that that is something that's definitely in the back of investors' minds as they're thinking about the company.
And, you know, the company was thinking about it as well when they were weighing their future prospects, whether to, you know, stay the course, raise more money, and still operate independently, or to grant this, you know, right to be acquired by SpaceX sometime in the future. Right. So now let's talk once again about that right to be acquired. If Cursor does get bought for that $60 billion price tag, who are the big winners here in the Cursor cap table? Andreessen Horowitz and Thrive Capital stand to make pretty substantial returns. I mean, both have been investors since the Series A, which was priced at a roughly $400 million valuation. So those windfalls could be in the billions. And then there are others as well, like Accel and Benchmark, that got in a little bit later but stand to benefit too from a $60 billion acquisition price. And last question for you, Julia. I mean, as you went out reporting the story, I wonder what you think this deal says about the AI coding story at large, the M&A landscape at large. I mean, just walk me through some of the broader insights that you had from reporting this story. I think one of our big takeaways is, you know, compute access is a huge bottleneck for startups and is super important. I mean, as I mentioned, you can see that in another story that my colleagues published today about startups facing a compute crunch. I think that investors are also thinking about, you know, the app layer versus vertical integration. So owning models, infrastructure, and the apps may become increasingly important to investors, especially those wary of competition from players like Anthropic. And I think that we can expect to see even more consolidation across the AI market, especially at the app layer, you know, as things continue panning out. Great. Well, Julia, I want to thank you for coming on. That is Julia Hornstein, our venture capital reporter here at The Information. That does it for today's show.
A reminder, we are on this stream Monday through Friday at 10 a.m. Pacific, 1 p.m. Eastern. If you can't make it then, episodes are available on theinformation.com, our YouTube channel, or wherever you get your podcasts. Make sure to follow us on social media on X, Instagram, and TikTok. I'm already excited for our next show on Monday, when we will be broadcasting to you from the New York Stock Exchange. It is also the day of our Financing the AI Revolution event, our flagship finance and AI conference. We will be there with a great panel of speakers, and we'll have more details and more coverage for you to come. Have a great rest of your Friday and have a great weekend. Bye-bye for now. Thank you.