Tech Brew Ride Home

Elon Buys Cursor?

21 min
Apr 22, 2026
Summary

SpaceX struck a deal that could see it acquire AI coding startup Cursor for $60 billion, giving Cursor access to XAI's computing infrastructure as it competes with OpenAI and Anthropic. Google unveiled new TPU chips and an enterprise AI agent platform, while OpenAI released ChatGPT Images 2.0 with advanced capabilities. Anthropic's powerful Mythos cybersecurity model was accessed by unauthorized users, and Meta began tracking employee computer activity to train AI agents.

Insights
  • Elon Musk is consolidating AI coding capabilities across his companies (XAI/SpaceX) to compete with established players, signaling that coding AI is now a critical strategic battleground
  • Major tech companies are shifting from model development to inference optimization and cost reduction, with Google and NVIDIA both prioritizing latency and efficiency over raw performance
  • Enterprise AI adoption is accelerating with structured offerings (Google's agent platform, Meta's internal agents), moving AI from experimental to operational workflows
  • Security risks around powerful AI models are materializing in real-time, with unauthorized access to Mythos demonstrating vulnerabilities in limited-access model distribution
  • Data collection for AI training is becoming normalized across enterprises, with Meta's employee monitoring exemplifying how companies are treating all user interactions as training data
Trends
  • AI coding tools consolidation: Smaller players like Cursor facing pressure from OpenAI/Anthropic, driving acquisition activity and strategic partnerships
  • Inference-focused chip design: NVIDIA and Google shifting emphasis from training to inference optimization as cost and latency become primary constraints
  • Enterprise AI agent platforms: Shift from chatbots to autonomous agents that manage workflows, with structured lifecycle management tools becoming standard
  • Compute as strategic moat: Access to computing infrastructure (TPUs, supercomputers) becoming as important as model quality for AI startups
  • AI-generated code normalization: Google reporting 75% of internal code now AI-generated, signaling mainstream adoption in enterprise development
  • Unauthorized AI model access: Security incidents around powerful models increasing, suggesting governance frameworks for limited-access AI are inadequate
  • Employee data harvesting for AI: Companies treating workplace activity as training data, raising privacy and consent questions at scale
  • Multi-modal AI capabilities expansion: Image generation, text-in-image, and web-integrated reasoning becoming standard features across platforms
Companies
SpaceX
Struck a deal giving it the option to acquire Cursor for $60B, gaining AI coding capabilities; Cursor gets access to XAI's computing infrastructure
Cursor
AI coding startup that agreed to a deal giving SpaceX the option to acquire it for $60B or pay $10B for a compute partnership, as it competes with OpenAI and Anthropic
XAI
Elon Musk's AI company providing computing infrastructure and Grok models to support Cursor acquisition strategy
OpenAI
Released ChatGPT Images 2.0 with advanced image generation and web search capabilities; competes with Cursor on coding tools
Google
Unveiled TPU-8T and TPU-8I chips optimized for training and inference, plus Gemini Enterprise Agent Platform
Anthropic
Mythos cybersecurity AI model accessed by unauthorized users; competes with OpenAI/Google on coding and general AI
Meta
Installing employee computer tracking software (MCI) to collect interaction data for training AI agents
NVIDIA
Paid $20B for a license to Grok's technology and hired most of its engineering team; focusing on inference chips; competing with Google's TPU strategy
Amazon Web Services
Among limited companies with official access to Anthropic's Mythos model through Project Glasswing
Apple
Among limited companies with official access to Anthropic's Mythos model through Project Glasswing
Microsoft
Among limited companies with official access to Anthropic's Mythos model; competes on image generation tools
Alphabet
Google's parent company; stock gained 1.7% on TPU and agent platform announcements
People
Brian McCullough
Podcast host narrating and analyzing the day's tech news
Elon Musk
Orchestrating Cursor acquisition and XAI strategy to compete in AI coding space
Mark Lohmeyer
Quoted on Google's TPU strategy focusing on latency and cost efficiency
Andrew Bosworth
Announced Meta's Agent Transformation Accelerator and increased internal data collection for AI training
Jensen Huang
Stated that 20% of AI workloads may be best served by inference-specific chips
Andrew Millich
Poached from Cursor to XAI in March to strengthen AI coding capabilities
Jason Ginsburg
Poached from Cursor to XAI in March to strengthen AI coding capabilities
M.G. Siegler
Provided analysis on SpaceX-Cursor deal structure and competitive implications
Quotes
"XAI was not built right the first time around, so is being rebuilt from the foundations up"
Elon Musk (early in episode)
"Cursor's lack of access to computing power for training its AI models had bottlenecked its growth"
Cursor, via blog post (mid-episode)
"It's about how you deliver the lowest possible latency of the response at the lowest possible cost per transaction"
Mark Lohmeyer, Google (Google TPU segment)
"75% of new code created inside of Google is now generated by AI and reviewed by human engineers"
Google (Google segment)
"The vision we are building towards is one where our agents primarily do the work and our role is to direct, review, and help them improve"
Andrew Bosworth, Meta CTO (Meta segment)
Full Transcript
Welcome to the Tech Brew Ride Home for Wednesday, April 22, 2026. I'm Brian McCullough. Today, SpaceX struck a deal giving it the option to acquire Cursor for $60 billion, or maybe pay $10 billion, as XAI scrambles to catch up in AI coding. Google unveiled new TPUs and an agent platform. OpenAI shipped ChatGPT Images 2.0. And what Anthropic was afraid of happening happened: Mythos got accessed by unauthorized users. Here's what you missed today in the world of tech. Today's episode is brought to you by Doppel. Disguises are getting pretty good these days, and I'm not just talking about when you throw on a pair of glasses and a hoodie and hope you won't get recognized. We're talking about the kind of disguises that end up in your inbox, on your phone, or on the web, blending in as your everyday internal email, casual text message, or normal website. Doppel strengthens teams' resilience by giving employees the tools and defenses they need to protect themselves from increasingly sophisticated social engineering threats. Their digital risk protection takes it one step further by keeping an eye on every channel to connect patterns and shut them down fast. From deepfakes to bad links to impersonation attempts, Doppel helps you stay ahead of these threats with their AI-native social engineering defense platform. Learn more at d-o-p-p-e-l dot com. That's d-o-p-p-e-l dot com. So, I think SpaceX has just acquired Cursor, kind of, sort of. Quoting the Times: SpaceX, Elon Musk's rocket and satellite company, said on Tuesday that it had struck a deal with the artificial intelligence startup Cursor that could result in its acquiring the young company for $60 billion. In a social media post, the rocket maker said the combination with Cursor, which makes code-writing software, would allow us to build the world's most useful AI models. SpaceX added that the agreement gave it the option, quote, to acquire Cursor later this year for $60 billion or pay $10 billion for our work together.
SpaceX is making the deal just as it prepares to go public in what is likely to be one of the largest initial public offerings ever. It is unclear if it plans to consummate a transaction with Cursor before or after its IPO, which could happen as early as June. A code-writing startup has seemingly little to do with rocket launches and a satellite internet service, which are SpaceX's main businesses, but Mr. Musk has been increasingly interested in AI. The tech mogul helped found OpenAI, the company behind ChatGPT, and in recent years established XAI, which created the Grok chatbot. After the hires were announced, Mr. Musk posted that XAI was not built right the first time around, so is being rebuilt from the foundations up. As for Cursor, the startup had been in talks to raise new funding in recent weeks, a person with knowledge of the matter said, but the rival coding tools from Anthropic and OpenAI created competitive pressure for the much smaller startup. Under its agreement with SpaceX, Cursor could obtain either a $10 billion injection of new capital or the $60 billion payday if the rocket company buys it. Cursor said in a blog post on Tuesday that its lack of access to computing power for training its AI models had bottlenecked its growth. The deal with SpaceX will give it access to XAI's infrastructure, which includes a supercomputer capable of training AI models. That will help Cursor dramatically scale up the intelligence of our models, the startup said, end quote. And quoting the decoder: For Elon Musk, the deal fills a hole XAI hasn't been able to patch on its own. XAI lags behind OpenAI's Codex and Anthropic's Claude Code on coding performance and tooling, and it's been losing talent. Back in March, XAI poached two former Cursor execs, Andrew Millich and Jason Ginsburg. The company is currently training several new Grok models, end quote. And quoting M.G. Siegler:
In noting a compute deal between the two last week, I wrote, would be very curious how this deal came together slash was structured, because while the high-level notion made some level of sense, it sure felt like there would need to be a lot of structure around it for both sides. As it turned out, that structure is a $60 billion call option for SpaceX to buy the entire company. And if they don't, they'll pay a, quote, mere $10 billion, sort of a de facto breakup fee, albeit tied to compute deals. Price aside, this actually makes more sense to me. Cursor is under immense pressure from the foundation labs, who have decided their space is the most important one to own at the moment. They clearly had an option on the table to keep going, which seems to say a lot, though also perhaps bullishness around SpaceX pre-IPO shares, obviously. And SpaceX, meaning the XAI subsidiary, is under immense pressure because they can't compete in that space yet. And why buy two udders when you can get the whole cow? Next question: will Anthropic and/or OpenAI now fully pull their models from Cursor? Sure, that means foregoing money, but Anthropic in particular could undoubtedly use the capacity back at the moment. The real loser here may be Meta, which has no viable coding option yet. Forget space Twitter and space data centers, now we have space vibe coding, end quote. Google has unveiled a new TPU lineup consisting of the TPU-8T for AI training and the TPU-8I for inference, with general availability scheduled for later in 2026. They also announced the Gemini Enterprise Agent Platform, a revamped developer tool built on Vertex AI that manages the full lifecycle of AI agent fleets. They also unveiled Workspace Intelligence, which understands complex semantic relationships between data across Google Workspace apps to provide personalized context when working among them. But back to the headline, quoting Bloomberg.
Alphabet's Google Cloud division unveiled the latest generation of its Tensor Processing Unit, or TPU, a homegrown chip that's designed to make AI computing services faster and more efficient. The new lineup will come in two versions, the company said at its Google Cloud Next event on Wednesday, where it also announced a $750 million fund to help boost corporate AI adoption and showed off tools for building AI agents. The TPU-8T is tailored for creating artificial intelligence software, while the TPU-8i is designed to run AI services after they've been created, a stage known as inference. Shares of Alphabet gained 1.7% before markets opened in New York on Wednesday. Google has emerged as one of the most successful makers of in-house AI chips in an industry dominated by NVIDIA. TPUs have become a hot commodity in Silicon Valley in recent months, and the company is looking to build on that momentum with the latest versions. The effort is part of a broader push to make it cheaper and less energy-intensive to roll out AI software. The company also is working to make services more responsive. The new TPUs store more information on the chip, helping provide the rapid responses that users crave. But demands on increasingly complex layers of software are only growing. It's about how you deliver the lowest possible latency of the response at the lowest possible cost per transaction, said Mark Lohmeyer, Google's vice president of compute and AI infrastructure. The number of transactions is going way up and the cost per transaction needs to go way down for it to scale. Creating AI services and software is done by using systems that can sift through massive amounts of data very quickly to make connections and establish patterns that can be represented mathematically. Inference, running the software and services, benefits from processors that have huge amounts of memory integrated into them. 
This approach helps make AI responses more instantaneous because the component doesn't have to go seek information stored elsewhere. It's particularly useful when computers reason through problems, taking multiple steps and learning from their own actions. The training chip, the 8T, can be combined into groups of 9,600 semiconductors. Google said that when deploying such massive systems, power is increasingly the major constraint in data centers. Owners therefore need systems that are more efficient to get the best out of the limited availability of electricity. TPU-8T delivers 124% more performance per watt than the preceding generation, with TPU-8I providing a gain of 117%. That step up is helped by improved in-house networking that increases the chips' ability to communicate with one another efficiently. AI systems built on the chips will be generally available later this year, Google said in a statement. The company will continue to offer services based on NVIDIA chips to customers who want to use the systems that currently dominate AI computing, it said. Google intends to be among the first to deploy gear based on a new design from NVIDIA coming in the second half of this year, Lohmeyer said. Like Google, NVIDIA is focusing more on the inference stage of AI. Its forthcoming lineup will include technology from its Grok deal, tailored specifically for providing ultra-responsiveness. NVIDIA Chief Executive Officer Jensen Huang has said that more than 20% of AI workloads might be best served by that type of chip. Grok was founded in 2016 by a group of Google engineers. Last December, NVIDIA paid $20 billion for a license to use its technology and hired most of its engineering team. Separately, on Wednesday, Google's cloud computing unit showcased a set of tools that can create AI agents and track their work within companies, including a dedicated inbox for the virtual bots to post information and progress reports.
Google also introduced updates across its Workspace productivity suite and offered up a vision in which AI agents dramatically overhaul the day-to-day routines of the average worker, end quote. Google also said this: 75% of new code created inside of Google is now generated by AI and reviewed by human engineers. That's up from 50% last fall. When I say micro, what comes to mind? Scopes? Bangs? Well, consider reframing that to micro wins, micro habits, and yes, even micro doses of GLP-1s. That's the foundation of Noom's micro program. The Noom Microdose GLP-1 program is the easy way to start GLP-1 medication. That's because Noom starts you on a smaller dose of medication and then gradually scales you up depending on how your body reacts. Noom has found users lose on average eight pounds in 30 days on their microdose protocol. The Noom GLP-1 Microdose program starts at $79 and is delivered to your door in seven days. Start your microdose GLP-1 journey today at Noom.com. That's N-O-O-M.com. Noom. Micro changes, big results. See podcast description for full disclaimers. Even I'm getting to the point where I can't keep up with all of the stuff happening. Quoting The Verge: OpenAI is rolling out the latest version of its AI-powered image generator with new thinking capabilities, allowing it to search the web to help it create multiple images from a single prompt. On Tuesday, OpenAI announced that ChatGPT Images 2.0 can now create more sophisticated images with improvements to its ability to follow instructions, preserve details of your choosing, and generate text. It's powered by OpenAI's new GPT Image 2 model with new thinking capabilities available to ChatGPT Plus, Pro, Business, and Enterprise subscribers. When a thinking model is selected, the chatbot image generator can pull information from the web, create visual explainers based on files you upload, and, quote, reason through the structure of the image before generating.
ChatGPT Images 2.0 can also create up to eight images at once with thinking enabled, all while maintaining the same characters, objects, and styles in each scene. OpenAI says this should make it easier to generate things like manga pages, a series of social graphics, or design plans for every room in a house. All ChatGPT users can take advantage of updates that let ChatGPT Images 2.0, quote, better capture the defining characteristics of photos, in addition to pixel art, manga, cinematic stills, and other types of images. It can now generate images with a resolution of up to 2K and in more aspect ratios, ranging from wider formats such as 3 to 1 to taller ones like 1 to 3. And it's not only better at generating English and other Latin-script languages; OpenAI says Images 2.0 makes significant gains in creating images containing text in Japanese, Korean, Chinese, Hindi, and Bengali. OpenAI first released ChatGPT Images last year and launched its last big update in December, adding faster image generation and better photo editing capabilities. Since then, competition has only been getting stronger with the arrival of tools like Google's Nano Banana Pro and Microsoft's My Image 2. ChatGPT Images 2.0 is available to all ChatGPT and Codex users starting today, end quote. This doesn't sound good to me, though. Quoting The Verge again: Anthropic's Mythos AI model, a powerful cybersecurity tool that the company said could be dangerous in the wrong hands, has been accessed by a, quote, small group of unauthorized users, Bloomberg reports. An unnamed member of the group, identified only as a third-party contractor for Anthropic, told the publication that members of a private online forum got into Mythos via a mix of tactics, utilizing the contractor's access and, quote, commonly used internet sleuthing tools.
The Claude Mythos preview is a new general-purpose model that's capable of identifying and exploiting vulnerabilities, quote, in every major operating system and every major web browser when directed by a user to do so, according to Anthropic. Official access to the model is limited to a handful of companies through the Project Glasswing initiative, including NVIDIA, Google, Amazon Web Services, Apple, and Microsoft. Governments are also eyeing the technology. Anthropic currently has no plans to release the model publicly due to concerns that it could be weaponized. We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments, an Anthropic spokesperson said in a statement to Bloomberg. Anthropic currently has no evidence that the unauthorized access is impacting the company's systems or goes beyond the third-party vendor's environment. The model was reportedly accessed illicitly on April 7th, the same day that Anthropic announced it was releasing Mythos to a limited number of companies for testing. The group that gained the unauthorized access has not been publicly identified, though Bloomberg reports that its members are part of a Discord channel that seeks out information about unreleased AI models. The group accessed Mythos by using knowledge of Anthropic's other model formats, obtained from a recent MerCore data breach, to make an educated guess about its online location. Members have been using Mythos regularly since gaining access, providing screenshots and a live demonstration of the model as evidence to Bloomberg, though reportedly not for cybersecurity purposes, in an attempt to avoid detection by Anthropic. Other unreleased Anthropic AI models have also been accessed by the group, according to Bloomberg, end quote. And finally today, Meta is installing tracking software on U.S.
staffers' computers to capture mouse movements, clicks, and keystrokes in work-related apps for, well, what do you think? Again, everything is data for AI now. Quoting Reuters: The tool, called the Model Capability Initiative, or MCI, will run on work-related apps and websites and will also take occasional snapshots of the content on employees' screens, according to one of the memos posted by a staff AI research scientist on Tuesday in a channel for the company's model-building Meta Superintelligence Labs team. The purpose, according to the memo, was to improve the company's AI models in areas where they struggle to replicate how humans interact with computers, like choosing from drop-down menus and using keyboard shortcuts. This is where all Meta employees can help our models get better simply by doing their daily work, it said. The Facebook and Instagram owner has been moving aggressively to integrate AI into its workflows and reshape its workforce around the technology, arguing it will make the company operate more efficiently. Meta CTO Andrew Bosworth told employees in a separate memo shared on Monday that the company would step up internal data collection as part of those AI-for-work efforts, now rebranded as the Agent Transformation Accelerator, or ATA. The vision we are building towards is one where our agents primarily do the work and our role is to direct, review, and help them improve, Bosworth said. The aim, he added, was for agents to automatically see where we felt the need to intervene so they can be better next time. Bosworth did not explicitly spell out how those agents would be trained, but said Meta would be rigorous about building up data and evals for all the types of interactions we have as we go about our work.
Stone said the data gathered via MCI would not be used for performance assessments or any other purpose besides model training and that safeguards were in place to protect sensitive content without elaborating on which types of data would be excluded from collection. If we're building agents to help people complete everyday tasks using computers, our models need real examples of how people actually use them. Things like mouse movements, clicking buttons, and navigating drop-down menus, said Stone. Meta is planning to lay off 10% of its workforce globally starting on May 20th and is eyeing additional large cuts later this year, end quote. Sometimes the implications of things write themselves, and I don't even have to say anything to underline that. Talk to you tomorrow.