AI at CES is Not Just Cheesy Gadgets Anymore
This episode analyzes how CES 2026 represents a shift from gimmicky AI gadgets to serious product launches by major tech companies, while also examining AI-related economic risks including inflation and market bubbles. The discussion covers major announcements from Nvidia, AMD, Samsung, Amazon, and Google, highlighting how AI competition is intensifying in both infrastructure and consumer devices.
- CES has evolved from showcasing novelty AI gadgets to becoming a platform for major tech companies to announce serious AI product roadmaps
- AI-driven inflation from data center construction and chip costs is emerging as an underappreciated economic risk according to Wall Street analysts
- The AI adoption narrative is being misrepresented by official statistics, with actual enterprise adoption likely much higher than the reported 10% figure
- Google is positioning itself advantageously in mobile AI by securing partnerships with both Samsung and Apple, potentially reaching billions of devices
- Amazon is attempting to differentiate in AI by focusing on ambient, contextual assistance rather than competing directly with ChatGPT-style interfaces
"Everything is AI now, so nothing is AI. It has reached such a point of saturation that simply stating AI doesn't really do anything."
"Under pressure to generate revenue and unconstrained by guardrails, a number of leading AI companies will adopt business models in 2026 that threaten social and political stability."
"The costs are going up, not down, in our forecast because there's inflation in chip costs and inflation in power costs."
"Amazon didn't launch a ChatGPT competitor. They activated a network of 600 million devices that people already talk to like a person. The behavioral shift is already done now. The AI just got smarter."
"You no longer program the software, you train the software. You don't run it on CPUs, you run it on GPUs."
Today on the AI Daily Brief, why CES is telling an extremely different story about AI this year than it has in the past. Before that in the headlines, what investors and analysts think are the most important AI risks in the short term, which frankly might surprise you. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. Alright friends, quick announcements before we dive in. First of all, thank you to today's sponsors, Zencoder, Assembly, Robots and Pencils, and Superintelligent. To get an ad-free version of the show, go to patreon.com/AIDailyBrief, or you can subscribe on Apple Podcasts. Subscriptions start at just $3 a month. If you are interested in sponsoring the show, or frankly finding out anything else about the show, whether it's speaking opportunities, et cetera, check out aidailybrief.ai, or you can shoot us a note at sponsors@aidailybrief.ai. To learn about our forthcoming intelligence products, check out aidbintel.com; you can also get the highlights of our AI ROI benchmarking survey there. And lastly, you absolute animals: aidbnewyear.com, our 10-week self-guided resolution, is up to over 100 teams and nearly 2,500 people participating. Look for more operator content focused on that coming soon, but with that out of the way, let's dig into today's episode. Welcome back to the AI Daily Brief Headlines edition, all the daily AI news you need in around five minutes. We are in the first full week of business being back in session for 2026, and as such everyone is kind of trying to get a vibe for what this year is going to be like. Now, of course, we're not going to learn all that much in the first days of the year, but they do kind of set a tone, and this year certainly has not been without drama.
We have a whole new situation in terms of the geopolitics of North and South America, all sorts of interesting questions around politics this year, which is of course an election year in the US, and some big market questions around AI, which has been the most important theme in markets for the past several years, all contributing to what is a really interesting environment. Now, in that context, I've recently seen a couple of different assessments around what some of the big risks in general, but also with AI, are, and they're not necessarily the things that you normally think of when you think of AI risk. According to Wall Street analysts, for example, one of the most overlooked risks for this coming year is AI-driven inflation. Morgan Stanley strategist Andrew Sheets wrote: "The costs are going up, not down, in our forecast because there's inflation in chip costs and inflation in power costs." Sheets and Morgan Stanley are forecasting that inflation will remain above the Federal Reserve's 2% target until the end of next year, in part due to heavy capex spending on AI infrastructure. Now, one of the economic forces we saw in the latter part of last year was a lack of cost sensitivity when it came to data center construction. Input costs for these facilities are concentrated around the price of chips, making premiums on labor and electricity frankly kind of inconsequential. This sent construction worker wages spiraling higher, with some now commanding $200,000 a year to work on data center projects. Some analysts are wondering if that sort of data center spending could flow through to generalized inflation, through both elevated wages as well as price-insensitive energy consumption. Carmignac portfolio manager Kevin Thozet wrote that AI inflation risk "remains very underappreciated. Inflation is what could start to scare investors and cause markets to show more cracks."
George Chen, a consultant at The Asia Group, believes we'll see price pressure on chips start to curtail the AI buildout. He wrote: "Memory chip cost inflation will push up prices for AI groups, lower investors' returns, and then the flow of money into this sector will reduce." Now, I personally tend to think that the risk around things like price-insensitive electricity consumption is more about its flow-through to politics than inflation. But it is interesting that this is part of a conversation that's happening on Wall Street, even if it's not, I believe, the mainstream consensus. A second interesting risk discussion came from the Eurasia Group. Each year the Eurasia Group publishes their list of the top 10 global risks, and they call 2026 a tipping-point year. Interestingly, while they do point out that it's a time of, as they put it, great geopolitical uncertainty, for them the biggest risks are not the standard ones you hear, such as rising conflict between the United States and China, but mostly about how the US decides to reposition its role in a new global order. Interestingly, however, risk number eight they called "AI eats its own users." It's short enough at three paragraphs that I think I'll just read it in whole. They write: Under pressure to generate revenue and unconstrained by guardrails, a number of leading AI companies will adopt business models in 2026 that threaten social and political stability, following social media's destructive playbook, only faster and at greater scale. We remain bullish on AI's revolutionary potential. Today's frontier models reason through complex problems, show their work, and are embedded in coding, research and knowledge workflows. The hyperscalers are offloading large chunks of software development to AI, accelerating their own R&D cycles. In biotech and materials science, AI is opening new research pathways, though commercial breakthroughs remain mostly ahead of us.
Hundreds of millions of people now use chatbots daily for everything from drafting emails to debugging code and learning new skills. This is real and it's just beginning. But AI can't live up to investors' expectations in the short term. Even after hundreds of billions of dollars of investment, the most advanced models still hallucinate. Their capabilities are jagged, dazzling at some tasks, unreliable at others, and often unpredictably so. That inconsistency makes them hard to deploy in high-stakes applications where errors are costly. Business adoption has been uneven, with only about 10% of US firms using AI to produce goods and services, according to the Census Bureau. Many companies report significant productivity gains, but surveys suggest most have yet to see meaningful bottom-line impact. Real productivity increases will arrive through wide diffusion of the technology across the economy, but that takes time. Yet markets have priced in revolution, not evolution. Now frankly, this is kind of a mess of a prediction. It starts talking about one thing and then talks about totally different things. Now, I know they're not just being cynical and skeptical for the sake of it, and as they say, they remain bullish on its potential, blah blah blah blah blah. But given that this is a group that people listen to, I think it's worth being a little bit more specific. And frankly, I'm much, much more interested in where the assessment starts than where it ends up. When it comes to this whole third paragraph about AI not being able to live up to investors' expectations in the short run, I just think that this is a fundamental misunderstanding of where AI is right now. A common misunderstanding, but a misunderstanding nonetheless. This idea that the capabilities are too jagged for AI to be valuable in a workplace context right now is simply not true anymore.
Anyone who is slowing down their deployments of AI because of some generic concern about hallucination is not paying attention. Does that mean that there aren't specific use cases where hallucination creates too high a cost barrier for AI to be a viable solution in the short term? No, that is a real thing. But the idea that hallucination is creating some major headwind slowing down adoption overall, I think, is just incorrect. Secondly, I will go on record and be very clear that the Census Bureau numbers suggesting that adoption in firms is around 10% are straight up wrong. I owe it to myself and to you guys to dig more into exactly how those numbers are sourced, but they are completely different from anything else that we're seeing. The idea that more than 40 or even 50% of American adults are using AI, but only 10% of companies are, does not hold water. And so ultimately I think that is just an incorrect statistic that leads people to a misunderstanding of what's actually going on. Lastly, there's this idea that most firms have yet to see meaningful bottom-line impact. If you've listened to any amount of the results of our AI ROI benchmarking survey, I think that conventional wisdom is once again growing a little long in the tooth. Now, it still may be that even with all that, the conclusion that markets have priced in revolution, not evolution, is true, but I think the basis for the argument they're sharing is dramatically oversimplified and based on frankly incorrect information. Like I said, though, what's much more interesting to me is the core idea that AI eats its users, that under pressure to generate revenue, a number of leading AI companies will adopt business models that threaten social and political stability, following social media's playbook. This is super interesting to me because it is something that I can see happening, and I think that we do have some early evidence for this.
This was frankly why there was such acrimony around the launch of Sora, not about the second version of the model, of course, but about the application. To some it felt like OpenAI, despite saying that they weren't about just consuming as much of our attention as possible, was running back the same playbook that social media companies had used to consume as much of our attention as possible. To the extent that that second part of the prediction is right, that markets have priced in revolution, not evolution, and that starts to create pressure on the big hyperscalers to get performance at any cost, this risk that they travel down pathways that are not the ones that are actually ultimately that good for society, even if they make more money in the short term, is a real and very interesting risk. Still, overall, a super fascinating conversation, and I'm glad that the Eurasia Group and these others are having it. Now, as you might imagine, there is a big overlap in the commentary overall between what markets expect from Fed policy and how they see AI playing out. Speaking with CNBC's Squawk Box on Monday, Minneapolis Federal Reserve President Neel Kashkari said that AI is starting to impact hiring plans at the companies he speaks with. He believes, however, that these effects are stratified across the economy, arguing that AI is really a big-company story: smaller companies have yet to see AI drive down hiring, but they're also not participating as strongly in AI-derived productivity gains. Now, if that assertion makes you decide to stop paying attention to anything Kashkari has to say, I can't say that I particularly blame you. However, speaking to the idea of an AI bubble, Kashkari believes we're over the horizon and beginning to see tangible benefits.
He said: "There's no question that there's some misinvestment or malinvestment that's going on, but there are too many anecdotes of businesses using this and actually seeing real productivity gains. Businesses that I talked to two years ago that were skeptical are saying, no, we're actually using it now." Overall, the takeaway is that AI is absolutely on the Fed's radar as a real force that's reshaping the economy. We're not yet at the point where Fed officials are talking about rate cuts to address AI-related layoffs, but the issue is starting to factor into monetary policy discussions. Legendary investor Ray Dalio also commented on AI recently. In a recap of the year in markets on X, he wrote: "Obviously the AI boom that is now in the early stages of a bubble had a big effect on everything." Dalio said that he would soon publish an explanation of his bubble indicators, so he didn't get too deep on the topic. Now, Dalio has been known as a bit of a doomsayer in macroeconomics over recent years; however, he's not of the view that we're living through a repeat of the dot-com boom that will inevitably collapse under its own weight. Instead, he's most concerned about inflation and the rising national debt, viewing a booming stock market as a symptom of the dollar losing value. Indeed, while many are wary of an AI bubble about to pop, Dalio seems to be calling for AI stocks to continue their strong performance in 2026, not even necessarily because of fundamentals, but because of structural macroeconomic forces that encourage financial bubbles to grow. And yet, for all of this, the market is still hungry for AI debt despite some wobbles at the end of last year. You might remember in my predictions episode that I said that this is the key thing to watch when it comes to AI bubble conversations, and it's much more important, frankly, than any random commentary along the way.
Throughout 2025, we saw the data center buildout transition from free-cash-flow funding to a debt funding model. Oracle was emblematic of this trend and began functioning as the canary in the coal mine towards the end of the year. Bloomberg's Matt Levine, however, highlighted that although this debt funding is coming from private credit firms, it's categorically different from the traditional role that private credit has played. Usually, private credit would fund buyouts and roll-ups, relatively small deals that don't get a lot of attention. Instead, we now have deals like Meta's $27 billion in debt issuance to fund their Louisiana data center. Levine wrote: "When AI firms borrow infinity zillion dollars to build data centers, they will sometimes do so in the bond market, but for structuring flexibility reasons, they will often do it from private lenders. And when they do that, everyone in the financial industry and possibly everyone in the world will think a thought like 'ooh, AI, I would like to get in on that deal' or else 'oh, that's stupid, I would like to short that deal.' Everyone knows about this stuff and there is a diversity and intensity of opinions." The upshot is that these very loud Wall Street opinions are ensuring that debt funding is plentiful for the time being, if only as a trading vehicle. Rohan Latif, global head of credit trading at Morgan Stanley, said: "I view it as very much the biggest single opportunity coming into 2026. Every single time a new market is created, there's a little bit of a lag before the secondary market kicks off. The reality is this is the right time for it to happen." It's something we will keep an eye on throughout the year. For now, however, that is going to do it for today's headlines. Next up, the main episode. If you're using AI to code, ask yourself: are you building software or are you just playing prompt roulette? We know that unstructured prompting works at first, but eventually it leads to AI slop and technical debt.
Enter Zenflow. Zenflow takes you from vibe coding to AI-first engineering. It's the first AI orchestration layer that brings discipline to the chaos. It transforms freeform prompting into spec-driven workflows and multi-agent verification, where agents actually cross-check each other to prevent drift. You can even command a fleet of parallel agents to implement features and fix bugs simultaneously. We've seen teams accelerate delivery 2x to 10x. Stop gambling with prompts. Start orchestrating your AI. Turn raw speed into reliable, production-grade output. Try Zenflow free. Most companies don't struggle with ideas, they struggle with turning them into real AI systems that deliver value. Robots and Pencils is a company built to close that gap. They design and deliver intelligent, cloud-native systems powered by generative and agentic AI with focus, speed and clear outcomes. Robots and Pencils works in small, high-impact pods: engineers, strategists, designers and applied AI specialists working together to move from idea to production without unnecessary friction. Powered by RoboWorks, their agentic acceleration platform, teams deliver meaningful results, including initial launches in as little as 45 days, depending on scope. If your organization is ready to move faster, reduce complexity and turn AI ambition into real results, Robots and Pencils is built for that moment. Start the conversation at robotsandpencils.com/aidailybrief. That's robotsandpencils.com/aidailybrief. Robots and Pencils: impact at velocity. Today's episode is brought to you by my company, Superintelligent. In 2026, one of the key themes in enterprise AI, if not the key theme, is going to be how good is the infrastructure into which you are putting AI and agents?
Superintelligent's agent readiness audits are specifically designed to help you figure out, one, where and how AI and agents can maximize business impact for you, and two, what you need to do to set up your organization to be best able to leverage those new gains. If you want to truly take advantage of how AI and agents can not only enhance productivity, but actually fundamentally change outcomes in measurable ways in your business this year, go to besuper.ai. Welcome back to the AI Daily Brief. Of all of the big annual conferences, the Consumer Electronics Show, better known as CES, which happens every January in Vegas, is kind of the weirdest. On the one hand, it is chock-a-block full of just absolutely ridiculous, almost definitely going-to-go-nowhere gadgets that tend to shove whatever the hot new technology is in places where it completely doesn't belong. At the same time, it's in some weird ways the most creative, because it is really raw and future-looking. It harkens back in some ways to the old science expos of the 50s and 60s, where the point wasn't to sell things that were available right now, but to get a glimpse of a possible future. Now, for anyone who's been to Vegas for this thing in January, that might be highly over-romanticizing it, but even if you think that's true, it is undeniably different than other events. And so it's really interesting to me that this year there seems to be a very clear tonal shift in CES that I think has a lot to say about where we are with AI. This is basically the third year in a row that CES has been filled with AI gadgetry, but the tone is very different. Last year's show was largely in that spirit that I was just describing of weird and wacky AI-empowered gadgets, a true shoving of AI everywhere, regardless of whether it belonged there or not. Especially last year, a lot of the core of the show was smaller companies showing off their ambitious uses of AI. This year's CES is very different.
It seems much more firmly to be about the biggest players in the industry rolling out their product lineup for 2026. We got a big keynote from Nvidia, who played a very minor role at last year's show. Amazon, AMD and Google also used the first couple of days to unveil major new products and projects. And overall, the wacky gadgets took a backseat to more grounded and definitive products from tech's biggest players. Capturing the mood, Anshel Sag from Moor Insights & Strategy said: "Everything is AI now, so nothing is AI. It has reached such a point of saturation that simply stating AI doesn't really do anything." As Wired puts it, the rush of companies stuffing chatbots, computer vision and intelligent sensors into their products has led to a sort of evening out; when products all offer similar features and use cases, they become harder to differentiate. Put more crassly, no one really cares about the AI TVs and fridges that clogged the halls last year. Same goes for robot vacuums and basic wearables. The novelty is gone and the utility isn't clear. This CES is much more about products that are going to define categories. And more than that, it also shows that the iteration cycles for the big players are speeding up, so they basically now have to use every conference as a roadshow. In other words, Nvidia can't wait until their GTC conference in March to talk about Vera Rubin. They need to get the press cycles in now. Which is of course not to say that there weren't some really cool things beyond the biggest names in AI. For example, people are super stoked on the new Lego smart brick system that has tiny ASICs and RFID to do lights and sounds in certain configurations. But by and large, this CES is all about the big companies putting their foot down about where AI goes next. And with that in mind, let's kick off with Nvidia and Jensen Huang's keynote. How you develop the software fundamentally changed.
The entire five-layer stack of the computer industry is being reinvented. You no longer program the software, you train the software. You don't run it on CPUs, you run it on GPUs. And whereas applications were pre-recorded, pre-compiled and run on your device, now applications understand the context and generate every single pixel, every single token, completely from scratch every single time. Computing has been fundamentally reshaped as a result of accelerated computing, as a result of artificial intelligence. Every single layer of that five-layer cake is now being reinvented. Now, in terms of the substance, a big part of the keynote was Nvidia unveiling their next generation of AI chips. The Vera Rubin chips will become the next flagship to follow up from the Blackwell architecture. Huang said Vera Rubin is designed to address "the fundamental challenge that we have. The amount of computing necessary for AI is skyrocketing. The race is on for AI. Everyone is trying to get to the next frontier." Now, surprisingly, Jensen announced that Vera Rubin is already in full production despite ongoing Blackwell deliveries and installations. The new architecture is an upgrade of both the GPU and CPU within Nvidia's rack-based AI systems, with Vera being the name of the CPU while Rubin refers to the GPU. Nvidia claims the Rubin architecture will be three and a half times faster than Blackwell on model training tasks and five times faster on inference tasks. The chip also represents a significant efficiency gain, producing eight times as much inference compute per watt of energy. Huang said that these improvements will combine to a 90% decrease in token cost for models running on Vera Rubin chips. The chipset has been designed assuming that AI researchers will soon be training 10-trillion-parameter models, and Nvidia expects that new ultra-large models will take around a month to train on the new architecture.
They also estimate that training runs will require a quarter as many chips as they would using Blackwell, which could be hugely significant. At the moment, training clusters are pushing the limits in terms of how many chips they can physically fit in a data center, as well as networking and energy constraints. Alongside the upgrade to raw power, Vera Rubin systems will also feature redesigned memory capacity designed for long-horizon tasks. Nvidia's Senior Director of AI Infrastructure Solutions, Dion Harris, told the press: "As you start to enable new types of workflows like agentic AI or long-term tasks, that puts a lot of stress and requirements on your KV cache. So we've introduced a new tier of storage that connects externally to the compute device, which allows you to scale your storage pool much more efficiently." Daniel Newman, the CEO of Futurum Group, described Rubin as an incredible generational leap. He noted that after the shaky rollout of Blackwell chips, this presentation reinforces that the ramp-up in production for Vera Rubin is on track and that it should be fully available later this year. He commented: "The pace of innovation continues to impress." Now, also at CES, Nvidia unveiled a range of embodied AI models, simulation tools and edge hardware in an effort to become, as TechCrunch put it, the Android of embodied AI. Nvidia has been working in this area for years, but is doubling down on robotics in an attempt to deliver a full-stack ecosystem as embodied and physical AI move towards real-world rollouts. The company released a pair of new world models, Cosmos Transfer 2.5 and Cosmos Predict 2.5, designed for simulated robotic training and evaluation. They also released Cosmos Reason 2, a vision language model that allows embodied AI to see and reason about the world. Finally, they rounded out their model lineup with Isaac GR00T N1.6, a new vision-language-action model that allows humanoids to use AI to act in their physical environment.
But the play is much more than a range of AI models designed for robotics. Nvidia is also integrating everything into a new ecosystem called Nvidia Osmo. And finally, Nvidia unveiled their new Blackwell-powered Jetson T4000 GPU, which provides cost-effective and energy-efficient AI compute for use in robots. Through a deepening partnership with Hugging Face, Nvidia is making their entire stack compatible and ready to go with Hugging Face's range of open source robots. Writes TechCrunch: "The bigger picture here is that Nvidia is trying to make robotics development more accessible, and it wants to be the underlying hardware and software vendor powering it, much like Android is the default for smartphone makers." There are early signs, they continue, that Nvidia's strategy is working. Robotics is the fastest growing category on Hugging Face, with Nvidia's models leading downloads. Meanwhile, robotics companies from Boston Dynamics and Caterpillar to Franka Robotics and Neura Robotics are already using Nvidia's tech. Nvidia was of course not the only chipmaker showing off their new lines at CES. AMD's presentation featured a new range of AI chips aimed at capturing market share in basically every segment. The flagship product is the MI455 GPU, their latest server-scale chip designed for AI data centers. AMD CEO Lisa Su claimed a 10x performance boost over the previous generation of MI chips. OpenAI President Greg Brockman appeared on stage with Su to discuss the new chip. OpenAI signed a deal to purchase tens of billions of dollars worth of AMD chips back in October. Brockman said they chose AMD as a high-bandwidth, high-memory-footprint chip for inference optimization. Brockman added that the AI buildout isn't going to be about choosing one chip over another, commenting that the idea of everyone on the planet using AI agents will require billions of GPUs. Said Brockman: "No one has a plan to build that scale."
The presentation also previewed the next generation of MI chip coming in 2027, with Su promising a 1,000x performance jump in the four years since the 2023 release of the MI300X. AMD also unveiled new Ryzen CPUs for AI-enhanced consumer PCs. Said Rahul Tikoo, the senior VP of AMD's client business: "In the years ahead, AI is going to be a multi-layered fabric that gets woven into every level of computing. At the personal layer, our AI PCs and devices will transform how we work, how we play, how we create and how we connect with each other. No matter who you are and how you use technology on a daily basis, AI is reshaping everyday computing." One thread of the commentary was the return of bullishness among market investors. Daniel Newman again said: "Every GPU and XPU that can be built between now and the end of the decade will be sold. AMD will be a massive beneficiary, as will its investors." He also said: "Nvidia and AMD both announcing next generation and pushing production and delivering huge generational performance gains here at CES, and every one that is built will be sold. Bubble bears must really hate to see it." Now, moving over to the world of devices. On Monday, Samsung's co-CEO TM Roh told Reuters that they plan to double the number of handsets with their Gemini-powered Galaxy AI assistant installed, reaching 800 million in 2026. In addition, Samsung is installing Galaxy AI on a range of smart appliances, with Roh commenting, "We will apply AI to all products, all functions and all services as quickly as possible." Now, for those who are watching the AI competition closely, it's hard not to compare the 800 million devices that Samsung is putting Google's AI on to the total active user base of OpenAI, which is now up to about 900 million but still in that range. And aside from being a core partner for Samsung, Google has also secured the contract to drive the AI-enhanced Siri on iPhones later this year.
Apple and Samsung are number one and two in handset sales, commanding around 40% of the global market. And some speculate that Google's stranglehold on mobile AI could be part of the reason that OpenAI and Meta are looking to create new categories of AI devices. Writes The Information: "Both companies are hoping to supplant phones. They might succeed, but don't bet on it. In the meantime, Google's Gemini models will be powering AI features on many different outlets. That should mean Google is able to improve how the models function on a variety of tasks simply because of the data it gets from interacting with so many customers. And that could make Google's models even more attractive to potential business partners." Now, whether you agree with that analysis or not, I think it is useful to note, especially as we wrap our heads around where the narratives are starting in 2026, that Google is, as we know, coming into this year in an extremely advantageous narrative position. Now, a company that was relatively quiet last year when it came to AI, but which has felt very much like it's been waiting in the wings, is, of course, Amazon. Amazon made a couple of major announcements at CES. First, they used the event to reintroduce their AI wearable Bee, highlighting a host of updated features. Bee was acquired by Amazon last year, and they've spent the intervening time improving the product. The new features include Actions, which connects Bee to the user's email and calendar, allowing users to convert spoken commands into action items. Next up is Daily Insights, which is designed to recognize patterns in the user's schedule over weeks and months. Bee says the feature is intended to recognize shifts in user relationships and recommend personalized goals related to these shifts. They said the concept is something akin to a life coach. Voice Notes allows the user to press a button to record a quick thought, which can then be integrated into a to-do list.
And as we discussed yesterday, this sort of voice note and conversation capture seems to be a major focus for AI wearables this year. Finally, Templates allow the user to organize and summarize large amounts of information into a more digestible form. Now, all of this is pretty standard fare for current-gen AI wearables, but Bee co-founder Maria de Lorenzo said that the team had learned some interesting things by iterating on the product. She wrote, "We began seeing something unexpected. Customers were relying on Bee outside their professional lives, and it unlocked something they didn't know they needed. They started asking questions they'd never been able to ask before: How can I be a more effective communicator? What commitments have I made that I've lost track of? How am I actually spending my time? Bee surfaces insights across months of conversations, emails, calendar data, and health metrics from HealthKit, things that would otherwise go unnoticed." Now, I still think that there are major questions around the entire category, but it is very clear that this is a category that Amazon is going to compete fiercely for. Certainly the bigger news from Amazon, though, was the announcement of a revamped Alexa with its very own new web app. Users will now be able to access Alexa through a newly launched website, appropriately enough, Alexa.com. The goal is to give users device-agnostic access to the AI chatbot, allowing Alexa to appear on desktops and mobile handsets as well as Alexa-specific hardware. The bet is that Alexa users will migrate a lot of their everyday chatbot usage to the platform if Amazon offers a more familiar UX. And indeed, when you go to Alexa.com, it presents a normal text-based chatbot experience that is extremely familiar at this point for anyone who's been using ChatGPT or Gemini or Claude, but which was otherwise difficult to access on Alexa devices.
Amazon is also offering to integrate calendar and email access, allowing Alexa to be used as an AI command center for family life. Daniel Rausch, Amazon's VP of Alexa, said, "76% of what customers are using Alexa for, no other AI can do." He gave the example of using Alexa as an interactive audio recipe book, guiding the user to make substitutions based on the groceries they've ordered. Certainly the idea of integrating personal context into ambient AI seems to be one of Amazon's big ideas and a thing that they believe will allow them to differentiate from the competition. Will the Bee and Alexa ecosystems coming together to deliver a personalized AI assistant allow them to carve out a new space in the AI race? It remains to be seen, but so far commentators are fairly optimistic. Bank of America, for example, reiterated their buy position on Amazon, pointing to the Alexa.com launch as a differentiator. NYU Stern's Conor Grennan writes, "Amazon already sold 600 million Alexa devices worldwide. Most people use Alexa as a fancy kitchen timer, but Amazon is waking up that entire network. Alexa+ is free for 200 million Prime members globally." In other words, he writes, Amazon just gave a ChatGPT competitor to 200 million people who didn't have to do anything. No new app, no new account. It's already in their living room, kitchen, bedroom, and car. As Conor puts it, Amazon isn't trying to be a better ChatGPT; they're going after the family and the home: calendar updates, recipes, family coordination, pet care reminders. "Here's the real insight. I talk a lot about how people treat AI like a search engine because the interface looks like a search bar. Alexa doesn't have that problem. You talk to Alexa; you've been doing it for years. The mental model is already there. Amazon didn't launch a ChatGPT competitor. They activated a network of 600 million devices that people already talk to like a person. The behavioral shift is already done; now the AI just got smarter."
So like I said at the beginning, as you can see, this CES is not just about AI being shoved into random gadgets; it is very clearly about some of the biggest companies in the world planting their flags heading into the new year. We've got infrastructure, we've got devices, and it's very clear that the competition is getting more and more serious for consumers. I believe that this focus on actual products and experiences that matter is going to be hugely to our benefit, and I am excited to see how all these products come together this year. For now, though, that is going to do it for today's AI Daily Brief. Appreciate you listening as always, and until next time, peace.