The AI Daily Brief: Artificial Intelligence News and Analysis

Google to Officially Power Apple AI Siri

26 min
Jan 13, 2026
Summary

This episode covers major strategic positioning moves in the AI industry, headlined by Apple's official partnership with Google to power Siri using Gemini models. The episode also discusses Anthropic's launch of Claude for Healthcare, Google's withdrawal from health-related AI overviews, and Meta's massive expansion into nuclear power and compute infrastructure.

Insights
  • The AI foundation model market has reached incredible parity, making strategic partnerships and positioning more critical than pure model performance
  • Healthcare represents a major battleground for AI companies, with both opportunities for orchestrating complex systems and risks from providing inaccurate medical information
  • Energy constraints are becoming the new bottleneck for AI development, shifting focus from compute-constrained to energy-constrained environments
  • OpenAI's product ambitions may have cost them the Apple partnership, as companies are reluctant to empower direct competitors
  • The concentration of AI power among a few major players is raising significant antitrust concerns
Trends
  • Foundation model labs jockeying for strategic positioning as performance parity increases
  • Healthcare AI moving from diagnosis/treatment to orchestration and navigation of complex systems
  • Shift from compute-constrained to energy-constrained AI development
  • Big tech companies securing nuclear power deals for long-term AI infrastructure
  • AI companies implementing usage controls and cracking down on unauthorized access
  • Agentic shopping and commerce expected to become ubiquitous by 2026
  • Movement toward embodied AI creating new competitive dynamics
  • Increasing antitrust scrutiny of AI market concentration
  • Companies building provider independence to avoid single-vendor lock-in
  • Voice AI becoming a critical modality for next-generation interfaces
Quotes
"when navigating through health systems and health situations, you often have this feeling that you're sort of alone and that you're tying together all this data from these different sources, stuff about your health and your medical records, and you're on the phone all the time. I'm really excited about getting to the world where Claude can just take care of that."
Eric Kauderer-Abrams
"This is both good and bad news. We will get a hit on productivity, but it really pushes us to develop our own coding product and models."
Tony Wu
"things are moving at a pace that if you're not already deep into AI agents, you're probably creating a competitive barrier or disadvantage."
Yael Cosset
"AI agents will be a big part of how we shop in the not so distant future."
Sundar Pichai
"We're shifting from compute constrained to energy constrained. If you don't control the power, you don't control the model."
Abu
Full Transcript

Today on the AI Daily Brief, we're talking about all of the big moves in jockeying for positioning between the foundation model labs, with the headliner being that Apple has made it official and Google will power Apple's AI models. Before that, in the headlines: well, more jockeying for positioning, but on a slightly different level. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. Alright friends, quick announcements before we dive in. First of all, thank you to today's sponsors, KPMG, Zenflow, Assembly AI and Superintelligent. To get an ad-free version of the show, go to patreon.com/aidailybrief, or you can subscribe on Apple Podcasts. If you are interested in sponsoring the show, send us a note at sponsors@aidailybrief.ai. Now, speaking of aidailybrief.ai, you can also navigate there to find out all sorts of information about other things going on in the world. You can get access to the results of our ROI survey, join our AI New Year's resolution, sign up to join the beta of the Superintelligent AI Strategy Compass, or, and this is the one that I'm thinking about right now, sign up for more information about AIDB Intelligence. This is a new forthcoming research, information and benchmarking service that I literally could not be more excited about. If you want to go straight to that, go to aidbintel.com. And with that out of the way, let's dive into today's episode. Welcome back to the AI Daily Brief Headlines edition, all the daily AI news you need in around five minutes, although today it is a little jam-packed to kick off this second full work week in January. As I said in the intro, you can feel all of the different labs right now really jostling for position. At this point, with the possible exception of people's affinity for Opus 4.5 and Claude as a coding partner, there is incredible parity across the major foundation labs, and a lot of why people are using different models really comes down to personal choice.
It makes sense then, as the different labs add new product and interface layers around specific use cases, that other labs are thinking in similar terms. Last week we got a number of announcements from OpenAI around their strategy for health and healthcare, and Anthropic is following them into that space. In a blog post on Sunday, Anthropic announced the launch of Claude for Healthcare. They describe it as a set of tools and resources that allow healthcare providers, payers and consumers to use Claude for medical purposes through HIPAA-ready products. Now, they're actually positioning it as an expansion of the Claude for Life Sciences product suite that they announced back in October. Claude for Life Sciences was designed as a research partner product for scientists and clinicians, so this is a sort of industry and process complement to that, one that's a little bit more focused on the consumers of healthcare as well as the providers of healthcare. Similar to OpenAI's product, Anthropic will allow users to share medical records and data from fitness apps to inform health-related conversations. And in addition to data connectivity, Anthropic is launching a range of new connectors for industry-standard databases in the healthcare industry. Connectors, for those unfamiliar, are Claude's way of getting access to external information that can inform how the chatbot interacts with certain queries. The new connectors cover a wide range of functions, including insurance, diagnosis and research, with Anthropic hoping to speed up numerous healthcare workflows. Eric Kauderer-Abrams, the head of life sciences at Anthropic, said: when navigating through health systems and health situations, you often have this feeling that you're sort of alone and that you're tying together all this data from these different sources, stuff about your health and your medical records, and you're on the phone all the time. I'm really excited about getting to the world where Claude can just take care of that.
He added that the goal is to use Claude as the, quote, orchestrator, and to be able to navigate the whole thing and simplify it for you. Now, while people have a sense, especially in the AI space, of Anthropic and Claude being super focused on the enterprise context and on things like coding, Koji Kubota argues that this move actually feels on-brand for them. Koji writes: Anthropic isn't talking about diagnosis or treatment. What it is going after instead is the complexity around health care, scattered medical records, insurance rules and systems that are hard for patients to navigate. Seen this way, this is not an AI trying to practice medicine. It looks more like an attempt to become an organizing layer underneath it. Now, as you'll see in our main episode today, this idea of using Claude as, as Eric put it, the orchestrator, and for not just coding purposes, is going to be a theme that I think we're seeing more of. Ultimately, there is absolutely no denying that everything in and around health is (a) a major consumer of everyone's time, (b) something that basically no one enjoys as it's currently organized, and (c) something where having access to platforms that are incredibly good at absorbing and interacting with huge amounts of information at once is likely to be very valuable. One really interesting story of someone showing the power of AI, and specifically Claude, when it comes to health comes from Shopify CEO Tobi Lütke. He wrote: my annual MRI scan gives me a USB stick with the data, but you need this commercial Windows software to open it. Ran Claude on the stick and asked it to make me an HTML-based viewer tool and it looks way better. One more prompt and it annotates everything with the findings. Now, Tobi from here articulated something which I am increasingly finding as well, when he writes: by the way, this is a good example of what I meant with reflexivity. Reaching for AI. You tinker with AI for a while and you just reach for this.
This was an obvious thing to try when I saw I needed to use Windows and was on my Mac. You want to train your brain on this intuition. Now, staying on the healthcare theme, Google, on the other hand, is winding down their support of health-related AI queries. Specifically, Google AI Overviews will no longer offer AI-generated summaries for certain health-related searches. The decision comes shortly after an investigative piece in the Guardian found that AI Overviews were presenting incorrect health advice. The article highlighted advice that people with pancreatic cancer should avoid high-fat foods, which experts said was the opposite of what should be recommended. In another example, AI Overviews made an error in listing the normal range of liver function tests, which could lead people with severe liver failure to think they were perfectly healthy. Google said that they don't comment on individual search results, but that they have taken steps to make broad improvements in the area. However, they noted their internal team of clinicians reviewed the searches highlighted by the Guardian and found that, quote, in many instances the information was not inaccurate and was also supported by high-quality websites. Vanessa Hebditch, the director of communications and policy at the British Liver Trust, told the Guardian that the removal was good news, but that, quote, our bigger concern with all this is that it is nitpicking a single search result and Google can just shut off the AI Overviews for that, but it's not tackling the bigger issues of AI Overviews for health. Now, I think that this actually shows something interesting about consumer expectations and different forms of information dissemination. In the past, for Google, sure, people wanted Google search to index the best, most accurate results, but if someone searched for something on Google and then they ended up on a website with inaccurate results, Google didn't get the primary blame for that.
It was, of course, the website that had the faulty information that was the primary culprit. Now, however, because Google's AI is in charge of curating, aggregating and then reprinting that, it becomes Google's problem even if the root cause is the same garbage information that informs the AI overview. There's also another interesting phenomenon, which is a little bit beyond the scope of this particular episode to deal with, which is the differentiated expectation of AI information to always be accurate, as opposed to some base understanding that information on the Internet is not always going to be accurate. And I think, finally, there's a whole additional issue of the difference between what Google is doing to serve accurate information when it's in the context of Gemini, as opposed to their AI Overviews, which I think are going to have very different expectations and very different challenges. Now, staying on the theme of Google and all the labs figuring out their strategies in a handful of fundamental areas, Google has launched their new agentic shopping standard. The standard, which is called the Universal Commerce Protocol, or UCP, was developed in collaboration with retail partners including Shopify, Etsy, Walmart and Target. Mastercard and Visa also had input on the payment side of the protocol. The general idea is to have a standardized way for shopping agents to gather information about products and navigate the checkout process, ensuring everything is completely interoperable. The protocol is completely open source and non-proprietary, so it can be used by everyone that's building in the space. Alongside the protocol, Google released a new set of tools to help merchants integrate with UCP. Grocery chain Kroger is already building with the tools, with Chief Digital Officer Yael Cosset commenting: things are moving at a pace that if you're not already deep into AI agents, you're probably creating a competitive barrier or disadvantage.
Finally, Google announced plans to experiment with advertising within their AI experience. Advertisers will be able to present exclusive offers to users shopping with Google's AI Mode. Now, these are not sponsored ad placements. Instead, they allow retailers to give AI shoppers a unique deal. Vidhya Srinivasan, Google's VP of Ads and Commerce, said: it's a new concept that moves beyond our traditional search ads model. It essentially gives retailers the flexibility to deliver value to people shopping in AI Mode, whether that's a lower price, a special bundle or free shipping, in the moment it matters most to close the sale. Google CEO Sundar Pichai wrote: AI agents will be a big part of how we shop in the not so distant future. To help lay the groundwork, we partnered with Shopify, Etsy, Wayfair, Target and Walmart to create the Universal Commerce Protocol, a new open standard for agents and systems to talk to each other across every step of the shopping journey. And coming soon, UCP will power native checkout so you can buy directly in AI Mode and the Gemini app. I continue to think that agentic shopping and commerce is going to be one of the most ubiquitous AI developments of 2026. Now, moving over to a different aspect of foundation lab competition, Anthropic has banned xAI from accessing their models as part of a broader crackdown on unauthorized use of Claude Code. Kylie Robison of Core Memory reported that xAI researchers were abruptly cut off from using Anthropic models within Cursor last week. xAI co-founder Tony Wu told staff that he'd been told this is a new policy Anthropic is enforcing for all of their competitors. In a Slack message, Wu wrote: this is both good and bad news. We will get a hit on productivity, but it really pushes us to develop our own coding product and models. We're at a time in which AI is now a critical technology for our own productivity. This coming year is going to be really wildly exciting for all of us.
The team is rapidly developing our own models and product. We will have something to share with everyone soon. In the meantime, you may still try all different kinds of models in Grok Build. Elon Musk had, by the way, nodded to progress earlier in the week, posting: major upgrade to Grok Code coming next month. It will one-shot many complex coding tasks simultaneously. Anthropic implemented a new set of technical controls to prevent third-party applications from spoofing Claude Code to gain access to more favorable usage limits. The change affected multiple services, most notably the popular open source coding agent opencode. The lengthy discussion on Hacker News used the analogy of a buffet, noting that a la carte API pricing could be as much as $1,000 a month for heavy users. One commentator wrote: everything about this is ridiculous and it's all Anthropic's fault. Anthropic shouldn't have an all-you-can-eat plan for $200 when their pay-as-you-go plan would cost more than $1,000 for comparable usage. Their subscription plan should just sell you API credits at like 20% off. Others thought this crackdown was inevitable, with programmer Andrew Remnick writing: Anthropic just reminded us that they are, in fact, a corporation. They are now actively blocking OSS harnesses from using Claude subscriptions. This is why I keep harping on the importance of building for independence from a single provider. Let's not forget that their incentives are not fully aligned with customers, and we need to actively build for provider independence and open source. Going back to the buffet analogy, however, some found the move pretty justified. Berkeley student Ayesh posted: I understand where Anthropic was coming from for the opencode stuff. It's like bringing Tupperware containers to the all-you-can-eat buffet.
Ultimately, whatever you think of the situation, it is clear that Opus 4.5 tokens are just about the hottest AI commodity right now, which gives Anthropic a lot of power in the space. Honestly, we have even more than we could get into in this headlines episode, including some news about forthcoming models, but for now, that is where we're going to wrap the headlines. Next up, the main episode. Sure, there's hype about AI, but KPMG is turning AI potential into business value. They've embedded AI and agents across their entire enterprise to boost efficiency, improve quality, and create better experiences for clients and employees. KPMG has done it themselves. Now they can help you do the same. Discover how their journey can accelerate yours at www.kpmg.us/agents. That's www.kpmg.us/agents. If you're using AI to code, ask yourself: are you building software, or are you just playing prompt roulette? We know that unstructured prompting works at first, but eventually it leads to AI slop and technical debt. Enter Zenflow. Zenflow takes you from vibe coding to AI-first engineering. It's the first AI orchestration layer that brings discipline to the chaos. It transforms freeform prompting into spec-driven workflows and multi-agent verification, where agents actually cross-check each other to prevent drift. You can even command a fleet of parallel agents to implement features and fix bugs simultaneously. We've seen teams accelerate delivery 2x to 10x. Stop gambling with prompts. Start orchestrating your AI. Turn raw speed into reliable, production-grade output with Zenflow, for free. If you're building anything with voice AI, you need to know about Assembly AI. They've built the best speech-to-text and speech understanding models in the industry, the quiet infrastructure behind products like Granola, Dovetail, Ashby and Cluely. Now, as I've said before, voice is one of the most important modalities of AI. It's the most natural human interface.
And I think it's a key part of where the next wave of innovation is going to happen. Assembly AI's models lead the field in accuracy and quality, so you can actually trust the data your product is built on. And their speech understanding models help you go beyond transcription, uncovering insights, identifying speakers and surfacing key moments automatically. It's developer-first: no contracts, you pay only for what you use, and it scales effortlessly. Go to assemblyai.com/brief, grab $50 in free credits, and start building your voice AI product today. Today's episode is brought to you by Superintelligent. Superintelligent is a platform that, very simply put, is all about helping your company figure out how to use AI better. We deploy voice agents to interview people across your company, combine that with proprietary intelligence about what's working for other companies, and give you a set of recommendations around use cases and change management initiatives that add up to an AI roadmap that can help you get value out of AI for your company. But now we want to empower the folks inside your team who are responsible for that transformation with an even more direct platform. Our forthcoming AI Strategy Compass tool is ready to start being tested. This is a power tool for anyone who is responsible for AI adoption or AI transformation inside their companies. It's going to allow you to do a lot of the things that we do at Superintelligent, but in a much more automated, self-managed way and with a totally different cost structure. If you're interested in checking it out, go to aidailybrief.ai/compass and fill out the form, and we will be in touch soon. Welcome back to the AI Daily Brief. One of the things to know about this show is that, given how fast-moving this industry is, it is very often the case that I make last-minute pivots to change what I'm covering on any given show.
Today I was fully planning on covering Anthropic's new Claude Cowork, basically Claude Code but for everything that's not code, given how much Claude Code has been on people's minds for non-code use cases. After I got a little advance notice that this was coming, it seemed like a very obvious focus. As it turns out, I just wanted to do it a little bit more in depth than I would have been able to this afternoon after it was announced. And so instead we're pivoting, and we're going to stay on the same theme that we started in the headlines, which is the competition and jockeying for position among the big players. Now, if the jockeying and competition that we heard about in the first part of the episode, in the headlines, was little skirmishes, the news that we're talking about in the main episode is the big stuff. And the main story is that after a decade of complaining and Internet memes, and even Larry David swearing at Siri and smashing it against his car in utter frustration in one of the most relatable moments of Curb Your Enthusiasm, Apple is finally going to fix Siri and, more broadly, it seems, maybe take AI seriously. And they're going to do so with Google as their partner. Now, this has been rumored for a while, but it became official today. The companies released a short two-paragraph joint statement in which they said: Apple and Google have entered into a multi-year collaboration under which the next generation of Apple foundation models will be based on Google's Gemini models and cloud technology. These models will help power future Apple Intelligence features, including a more personalized Siri coming this year. After careful evaluation, Apple determined that Google's AI technology provides the most capable foundation for Apple foundation models and is excited about the innovative experiences it will unlock for Apple users.
Apple Intelligence will continue to run on Apple devices and Private Cloud Compute while maintaining Apple's industry-leading privacy standards. Reuters called it a major win for Alphabet, writing that the deal marks a major vote of confidence for them. As they point out, while Google's technology already drives much of Samsung's Galaxy AI, the Siri deal unlocks a large market with Apple's install base of more than 2 billion active devices. The saga of Siri has been a long one. As the Verge wrote, Apple spent most of the past year working on an AI-upgraded version of Siri but just couldn't get it there. Now, as part of those efforts, Bloomberg reported that Apple also considered using a custom version of Gemini for AI-powered features in Siri. And of course, along the way we got a big shakeup in Apple's AI team. Specifically, their head of AI, John Giannandrea, stepped down last month, paving the way for this big news to kick off the year. Now, for many, the focus on the news was as much about what it said negatively about OpenAI as what it said positively about Google. Reuters quoted Parth Telsenia, the CEO of Equisites Research, who said Apple's decision to use Google's Gemini models for Siri shifts OpenAI into a more supporting role, with ChatGPT remaining positioned for complex opt-in queries rather than the default intelligence layer. The Verge reports that Apple had also explored working with Anthropic and Perplexity, and Apple has always said that they plan to launch more integrations with more AI companies over time. Yuchen Jin thought the deal made sense, writing: Gemini leads in multimodality, and OpenAI's device and personalized ChatGPT are direct competitors of Apple. Benjamin De Kraker says: OpenAI does seem very Apple-coded. Google obviously does not. Polar opposites, really. So how did OpenAI blow the Apple deal so badly?
Yet others think it's the fact that OpenAI is a little bit too close to Apple for comfort that made this deal happen the way that it did. Robert Scoble writes, this makes sense because OpenAI is trying to become a products company. In other words, OpenAI is going after Apple and it would make no sense for Apple to help a new competitor. In a longer post, he expanded on those thoughts. He wrote, OpenAI is making a variety of new products and going after Apple. Apple didn't want to give OpenAI any more data to help a potential new competitor. The real problem for this OpenAI effort is that we're about to move to glasses. People don't believe me that we're about to move to glasses, but you should, because I just got back from CES and there was a ton of glasses there. For OpenAI to really get somewhere, they need to add a camera to an earphone. While I don't see that in this latest report, I wouldn't be shocked to see a camera show up somewhere. Eventually it's cameras that add understanding of the real world, which can lead to many new features that Apple's current AirPods can't match. I believe Apple is developing such a product to go with their glasses, which makes a lot of sense. Also, Google's AI models are better at multimodality. This means they can use cameras in a much better way than even OpenAI's models can. This is why in Silicon Valley robotics companies, a lot of them use Google Gemini because robots need multimodality. Apple's glasses, which are expected in 2027, have some significant advantages over the others. First, they have eye sensors in them, so it knows where the user is looking. It can also tell what the user is touching, holding or gesturing towards. This new capability will give Siri a significant parlor trick. It will let Siri answer questions that no other search engine has been able to answer before. 
And he goes on from that, and basically the whole argument amounts to the fact that as AI becomes embodied, OpenAI and Apple are just on an absolute collision course that makes a collaboration extremely difficult. Certainly the fact that OpenAI went and tapped a huge number of Apple staffers, most notably Jony Ive, shows that there is some similar DNA in the companies that may well account for why they were too close to make it work. Now, another dimension of the conversation is about privacy. Max Weinbach writes: remember, this is Gemini technology, but as they said, it still runs on Private Cloud Compute. That means Apple's silicon and Apple's hardware, not Google's. Google is just licensing the technology to Apple. Yuchen Jin also notes, however, that he doesn't think Apple sticking with Gemini is a foregone conclusion. He writes: I don't think Apple is giving up building its own foundation models. They've spent billions on GPUs, hired many AI researchers and have the iPhone as a massive distribution channel. Once they've collected enough data via the new Gemini-powered Siri, switching to Apple's own models is always an option. Now, for others, outside the privacy question there was a competitiveness question. Rasser X writes: this is deeply concerning. Google has long pushed towards monopoly power, and in search they effectively already are a monopoly. With Android, Chrome and now Apple's foundation models, this concentration of power demands serious antitrust scrutiny. Elon Musk agreed, responding to the news from Google's post on X, where he wrote: this seems like an unreasonable concentration of power for Google, given that it also has Android and Chrome. Lee Hepner writes: how is this different than Google wrapping search data and underlying models in Apple's Safari browser? Didn't the Justice Department just win a historic antitrust trial about this? Still, mostly people are just looking at this as a huge win for Google.
Prinze writes: this seems to imply that Apple has agreed not to use its own foundation models in its ecosystem for the next few years. What a disaster if true. Although, as Dan McAteer writes: loving this as an iPhone owner, can't wait to get an actually good Siri. Tim Halderson writes: Apple giving up in the AI race and giving Google a massive distribution advantage. Jim Cramer summed up the market's feelings when he wrote: the Apple Google partnership is very strong. Google pays little and Siri gets better. Both stocks should be higher. Jim is frequently known as something of a counter-signal, but in this case I think he's right, and I think it is just one more piece of evidence supporting my and many other people's prediction that at some point in 2026, before the year is done, Alphabet will be the biggest company by market cap on the planet. Josh Woodward of Google did also note that Nano Banana Pro has now crossed a billion images created in the Gemini app, just 53 days after the model came out. That's pretty cool, but given that about 900 million of those were me, I was a little bit less impressed. Now, if the Google Apple news was the biggest, in some ways there was also still some very big news out of Meta as well. First of all, they announced that they were expanding their nuclear strategy with three new power deals. Vistra has contracted to supply capacity from their existing power plants, while small nuclear reactor startups Oklo and TerraPower have signed agreements to build multiple reactors. The Vistra deal is expected to deliver 2.1 gigawatts of power from a pair of plants in Ohio, where Meta is building their Prometheus supercluster. And overall, the three deals are expected to deliver 6.6 gigawatts for Meta's data centers by 2035. At the moment, most of the large data center projects are 1-gigawatt scale, so these deals set Meta up to significantly expand their footprint.
The new deals add to Meta's 2025 agreement with Constellation Energy to extend the life of an Illinois power plant. Meta's Chief Global Affairs Officer Joel Kaplan said the deals make Meta one of the most significant corporate purchasers of nuclear energy in American history. He added: state-of-the-art data centers and AI infrastructure are essential to securing America's position as a global leader in AI. Meta was also careful to address the growing negative sentiment around data centers driving up the cost of electricity, saying that they will, quote, pay the full costs for energy used by our data centers so consumers don't bear these expenses. Patient Investor writes: my opinion? This is big tech telling the market they want firm power for the next decade, not just more chips. Abu writes: we're shifting from compute constrained to energy constrained. If you don't control the power, you don't control the model. And that gets us to our second announcement from Meta, earlier on Monday. Zuck: Today we're establishing a new top-level initiative called Meta Compute. Meta is planning to build tens of gigawatts this decade and hundreds of gigawatts or more over time. How we engineer, invest and partner to build this infrastructure will become a strategic advantage. The effort will be led by Santosh Janardhan and Daniel Gross. Santosh will continue to lead our technical architecture, software stack, silicon program, developer productivity, and building and operating our global data center fleet and network. Daniel will lead a new group responsible for long-term capacity strategy, supplier partnerships, industry analysis, planning and business modeling. They will work closely with Dina Powell McCormick, who just joined Meta as President and Vice Chairman, to work on partnering with governments and sovereigns to build, deploy, invest in and finance Meta's infrastructure.
In its Why it matters section, Axios wrote that the announcement, coming shortly after the firm named prominent banking executive and former Republican official Dina Powell McCormick as President, suggests Zuckerberg sees Meta's ability to build out AI infrastructure as a strategic long-term advantage over its big tech peers. One commenter writes: so Zuck's going after GCP, AWS and Azure now. Makes sense. All this extra compute, they can easily monetize it with the seemingly unrelenting demand. Meta becomes a cloud play. We'll have to see if that's exactly how it plays out, but it certainly seems like the big cloud providers just got a new player in the space. So, like I said, quite a bit of strategic repositioning on this Monday. For now, that is going to do it for today's AI Daily Brief. Like I said, tomorrow we will dive deep into the new Claude Cowork product, which again is Claude Code for everything that's not code, and I am super excited for that. For now, I appreciate you listening or watching, as always. And until next time.
