The 3x Payoff of Deep AI Integration
This episode analyzes new studies showing a widening gap between AI leaders and laggards in enterprise adoption. While mainstream media focuses on AI underperformance, the data reveals that companies with deep AI integration are seeing 2-3x better outcomes, even as only 12% of CEOs report both cost and revenue benefits from AI.
- Companies that deeply integrate AI into core processes are 2.6 times more likely to see meaningful financial returns
- There's a massive perception gap between C-suite executives and employees regarding AI strategy, training, and tool access
- Only 3% of employees are using AI proficiently, with most stuck in basic use cases due to lack of proper tools and training
- Leadership expectation is the strongest catalyst for AI proficiency, making employees 2.6 times more proficient than baseline
- The problem is not AI capability but enterprise environment and investment in proper foundations
"Apple developing a dedicated AI wearable is an admission of failure. They already own the two best wearables on earth, the Watch and AirPods. If they need a new plastic bauble to make AI useful, it means they can't make Siri work on the devices we already own."
"If we were just talking about war games on Xbox, then Jensen Huang could sell as many chips as he wants to anybody that he wants. But this is not about kids playing Halo on their television. This is about the future of military warfare."
"CEOs whose organizations have established strong AI foundations such as responsible AI frameworks and technology environments that enable enterprise wide integration are three times more likely to report meaningful financial returns."
"For every 10 hours of efficiency gained through AI, nearly four hours are lost to fixing its output."
"The story that these surveys tell is that companies who are investing deeply in putting proper AI foundations into place are seeing two to three times the benefit of everyone else."
Today on the AI Daily Brief, a set of new studies that show the widening gap between enterprise AI leaders and enterprise AI laggards. And before that, in the headlines, Apple is reportedly developing an AI wearable pin. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. Alright friends, quick announcements before we dive in. First of all, thank you to today's sponsors: Optimizely, Zencoder, AssemblyAI, and Superintelligent. To get an ad-free version of the show, go to patreon.com/aidailybrief, or you can of course subscribe on Apple Podcasts. In either case, ad-free is just going to be $3 a month, and if you are interested in sponsoring the show, send us a note at sponsors@aidailybrief.ai. Welcome back to the AI Daily Brief Headlines Edition, all the daily AI news you need in around five minutes. A couple of years ago, when Humane announced their AI Pin, no one could mistake the self-conscious references to Apple all over that company. Some of the founders were ex-Apple, the aesthetic was very Jobsian, and the design of the device was clearly striving to hit some of that simplicity. Now we all know how that story ended: with a bang, not a whimper, as YouTube reviewer Marques Brownlee called it the worst product he'd ever reviewed. Apparently Apple has now decided that they want a bite at the apple, as it were. The Information reports that Apple's new AI wearable pin will contain a pair of cameras and three microphones. The design is described as a thin, flat, circular device with an aluminum and glass shell, around the same size as an AirTag, only slightly thicker. The Information noted that it isn't clear whether this is an individual device or something designed to be bundled with smart glasses or other devices. The report states that Apple may attempt to accelerate development of the product to compete with the OpenAI device. 
According to the Information, the pin could be released next year with a production run of 20 million units at launch. The takes weren't great, showing the skepticism that has brewed around Apple's AI strategy over the last couple of years. Naveen on X writes: Apple developing a dedicated AI wearable is an admission of failure. They already own the two best wearables on earth, the Watch and AirPods. If they need a new plastic bauble to make AI useful, it means they can't make Siri work on the devices we already own. It will be a $300 accessory that still requires an iPhone to function. Akash Gupta compared them to Meta and said: Apple just told you they're two years behind the one form factor that actually works. Meta shipped 4 million AI glasses in 2025 and owns 80% of the market. Sales tripled year over year. The Ray-Ban Display version sold out in 48 hours. Meanwhile, Apple is prototyping a pin. The last company that tried this was Humane. They raised $240 million, launched at $700, got called the worst product I've ever reviewed, and sold to HP for $116 million less than a year later. Apple watched all of this happen and decided to build the same thing. Now, Akash argues, the form factor war is over and glasses won. People already wear glasses; no behavior change required. But I think Naveen is also right when he says that the two best wearables on earth are the Watch and AirPods. As I've said numerous times on this show before, I think AirPods in particular have a potentially unique role to play. But then again, I'm also a boomer who isn't fully on board with everyone's seeming critiques of the terrors of the phone. So who knows? Now, speaking of Siri, according to Bloomberg's Apple insider Mark Gurman, the company is planning to turn Siri into a ChatGPT-style chatbot. The chatbot, codenamed Campos, writes Gurman, will be embedded deeply into the iPhone, iPad, and Mac operating systems and replace the current Siri interface. 
Users will be able to summon the new service the same way they open Siri now, by speaking the Siri command or holding down the side button of their iPhone or iPad. Siri will accept both speech and text inputs, mimicking the user experience of rival chatbots. Now, on the one hand, this feels completely inevitable, and yet at the same time, it is something of a notable pivot. One of the outgoing AI leaders, Craig Federighi, had been adamant that he didn't want Siri to be a chatbot. Gurman also reports that Siri will be driven by a custom version of Gemini under Apple's new partnership with Google. His sources said that the custom build will allow Siri to, quote, significantly surpass its personalization features. That includes integration with Apple's core apps and the ability to use open windows and on-screen data as inputs. Apple intends for Siri to have the ability to control the device, including accessing the file system, placing phone calls, and using the camera. The headline suggested this was a move to fend off OpenAI. All of which brings up, to me, the sort of obvious point that perhaps, rather than thinking of this new superpowered Siri solely as a competitor to ChatGPT, the better reference point, given the deep integration with the operating system, might be Claude Code. Still, whatever it ends up being, there are so many people who just want Siri to actually be able to do what it seems like it should have been able to do for the last five years that I think when it comes, people will suspend their skepticism for a little while just to be able to keep using the devices they already own. Now, moving over to another company that's got a lot to prove in 2026: new model training has apparently been achieved internally at Meta, as their new AI team ships a preview. At a press briefing in Davos, CTO Andrew Bosworth said the superintelligence team delivered their first AI models earlier this month for internal testing. 
Bosworth said they're basically six months into the work and that the models are very good now. Back in December, we learned that Meta was developing two models: Avocado, a language model that reportedly excels at coding tasks, and Mango, a visual model with image and video capabilities. Bosworth didn't confirm whether these were the models delivered, but did comment on how much work had happened since the summer. He said there's a tremendous amount of work to do post-training to actually deliver the model in a way that's usable internally and by consumers. Overall, Bosworth said that Meta felt like they were seeing returns from the big moves that were made in 2025. Bosworth acknowledged that it was a tremendously chaotic year, but remember, Google had one of those in 2024, and we've seen how that worked out. Consumer AI certainly seems to be Bosworth's North Star for Meta's product. In a discussion of the AI bubble at Davos, he said: I think consumers and societies are ultimately the beneficiaries of this tremendous land grab of power, data centers, and GPU capacity. Speaking of GPUs, one of my predictions for this year was Congress potentially trying to wrest control back from the White House when it came to chip export policy, and that certainly seems to be happening. The House Oversight Committee has advanced a bill to seize power on chip export controls. On Wednesday, the committee voted overwhelmingly in favor of the AI Overwatch Act. The bill would grant Congress the power to review and block chip export licenses granted by the Commerce Department. That power would be vested in both the House Foreign Affairs Committee and the Senate Banking Committee, giving both chambers a veto over advanced chip exports. Essentially, the power mimics congressional oversight for arms deals. The bill also includes a two-year ban on the export of Nvidia's top-of-the-line Blackwell chips, which of course haven't been considered as part of recent changes. 
In a bipartisan vote, the bill gathered 42 votes in favor with only two opposed. It will still need approval in the Senate Banking Committee and a full vote across both chambers, but it seems to have strong momentum from both parties. Whether the president will sign a bill that limits his own power is another question. It's beyond the scope of this show, but there is a lot of interesting intrigue when it comes to divisions in the GOP around this. Republican Brian Mast, the chief sponsor of the bill, commented: if we were just talking about war games on Xbox, then Jensen Huang could sell as many chips as he wants to anybody that he wants. But this is not about kids playing Halo on their television. This is about the future of military warfare. I believe that we all agree that we are in an AI arms race, so why wouldn't we want to know what the AI arms dealers want to sell to our adversaries? Lastly today, a quick update on a personnel story that we've been following for the past week or so. OpenAI has announced a leadership shakeup around returning staffer Barret Zoph. According to the Information, CEO of Applications Fidji Simo has announced that Zoph will now lead the Enterprise division. COO Brad Lightcap will hand over responsibility for product and engineering for the enterprise to focus on what they call commercial functions. In a separate move, CTO of Applications and former Meta engineering lead Vijaye Raji will lead OpenAI's advertising push. Simo said the moves were designed to bring research, product, and engineering teams into better alignment. Zoph was, of course, the locus of controversy earlier this month after his shock departure from Thinking Machines Lab, where he was listed as one of the co-founders. The story devolved into he-said-she-said coverage as sources speculated on the true reason for his departure. 
Ultimately, though, more interesting than that is the fact that OpenAI has decided to name him the Head of Enterprise when that is a very important and contested area, and indeed the area of focus for our main episode. Most marketing teams aren't short on ideas, but what they are short on is time. And that's exactly what Optimizely Opal gives you back, with AI agents that handle real marketing workflows. You know, like creating content and checking compliance, generating experiment variations, personalizing user experiences, analyzing pages for GEO, even tasks like approvals and reporting. It's your AI agent orchestration platform for marketing and digital teams, plugging seamlessly into the tools you already use, handling the boring busywork, and keeping everything on brand. That leaves marketers with more time to do their actual jobs. So see what Opal can automate for your team by signing up for a free enterprise agentic AI workshop with Optimizely. Learn more at optimizely.com/theaidailybrief. If you're using AI to code, ask yourself: are you building software, or are you just playing prompt roulette? We know that unstructured prompting works at first, but eventually it leads to AI slop and technical debt. Enter Zenflow. Zenflow takes you from vibe coding to AI-first engineering. It's the first AI orchestration layer that brings discipline to the chaos. It transforms freeform prompting into spec-driven workflows and multi-agent verification, where agents actually cross-check each other to prevent drift. You can even command a fleet of parallel agents to implement features and fix bugs simultaneously. We've seen teams accelerate delivery 2x to 10x. Stop gambling with prompts, start orchestrating your AI, and turn raw speed into reliable, production-grade output with Zenflow, free. If you're building anything with voice AI, you need to know about AssemblyAI. 
They've built the best speech-to-text and speech understanding models in the industry, the quiet infrastructure behind products like Granola, Dovetail, Ashby, and Cluely. Now, as I've said before, voice is one of the most important modalities of AI. It's the most natural human interface, and I think it's a key part of where the next wave of innovation is going to happen. AssemblyAI's models lead the field in accuracy and quality, so you can actually trust the data your product is built on. And their speech understanding models help you go beyond transcription, uncovering insights, identifying speakers, and surfacing key moments automatically. It's developer-first, with no contracts; you pay only for what you use, and it scales effortlessly. Go to assemblyai.com/brief, grab $50 in free credits, and start building your voice AI product today. Today's episode is brought to you by Superintelligent. Superintelligent is a platform that, very simply put, is all about helping your company figure out how to use AI better. We deploy voice agents to interview people across your company, combine that with proprietary intelligence about what's working for other companies, and give you a set of recommendations around use cases and change management initiatives that add up to an AI roadmap that can help you get value out of AI for your company. But now we want to empower the folks inside your team who are responsible for that transformation with an even more direct platform. Our forthcoming AI Strategy Compass tool is ready to start being tested. This is a power tool for anyone who is responsible for AI adoption or AI transformation inside their companies. It's going to allow you to do a lot of the things that we do at Superintelligent, but in a much more automated, self-managed way and with a totally different cost structure. If you are interested in checking it out, go to aidailybrief.ai/compass, fill out the form, and we will be in touch soon. Welcome back to the AI Daily Brief. 
Today we are talking about a trio of new surveys that help tell the contemporary story of enterprise AI as it is deployed. The surveys come from PwC, Workday, and the AI support and training consultancy Section, and part of the reason that I wanted to do this episode is not only that there is rich, interesting information in that data, but that the initial reporting around it has a very distinct slant, which, while I don't think it's wrong, I do believe is misleading in a way that could be dangerous. In short, the mainstream reporting around these surveys leads to a suggestion of AI underperformance. It contributes to a sensibility that AI is overhyped. The story that I think the data is actually telling is yet more evidence of the widening gap between leaders and laggards when it comes to AI adoption. And the implications of those two stories are very, very different. So let's talk about how the media, specifically the Wall Street Journal, summed up this story with their piece CEOs Say AI Is Making Work More Efficient, Employees Tell a Different Story. And again, to be clear, none of the data that the Wall Street Journal is focusing on here is incorrect or even insignificant. Their first graph shows a statistic from the Section survey, who, for full disclosure, are a sponsor of this show right now but who actually didn't even share the survey with me; I only found it when I saw this Wall Street Journal article. In any case, that survey was conducted around the same time as our AI ROI survey for AIDB, and it polled 5,000 white-collar workers from companies with a thousand people or more in the US, UK, and Canada, concentrated around October of last year. When asked how much time they think they personally are saving each week by using AI, the C-suite was saving a ton of time: 33% were saving four to eight hours, a quarter were saving eight to 12 hours, and almost a fifth were saving more than 12 hours a week. 
Meanwhile, among workers, only 2% were saving more than 12 hours per week, more than a quarter were saving less than two hours, and the largest category by far, 40%, said they were saving no time. A representative quote comes from North Carolina user experience designer Steve McGarvey, who said: executives automatically assume AI is going to be the savior. I can't count the number of times that I've sought a solution for a problem, asked an LLM, and it gave me a solution to an accessibility problem that was completely wrong. Which brings us to the Workday research. The headliner stat from that survey was that 37% of the time saved through AI is being offset by rework. In the executive summary, they write: employees report spending significant time correcting, clarifying, or rewriting low-quality AI-generated content, essentially creating an AI tax on productivity. For every 10 hours of efficiency gained through AI, nearly four hours are lost to fixing its output. In other words, they say, one and a half weeks a year are being lost to fixing AI outputs per highly engaged employee. Another area of divergence between workers and executives had to do with anxiety around AI. For workers, the percentage who said they were anxious or overwhelmed versus excited was nearly a 70/30 split to the side of anxious. In the C-suite, the split went the other way, with more than 70% excited and less than 30% anxious or overwhelmed. All of this leads to what can only be described as underperformance in terms of actual financial impact. That was highlighted in a new PwC survey of CEOs released to coincide with the WEF in Davos this week. Just 12% of CEOs said AI had delivered both cost savings and revenue gains, while 56%, more than half of the nearly 4,500 CEOs polled, said they had seen no significant financial benefit so far. So like I said at the beginning, none of this is incorrect information, and all of it is interesting and important signal. 
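For those who want to sanity-check the Workday "AI tax" arithmetic, here is a minimal sketch in Python. The 37% rework rate is the survey figure as reported above; the 10-hour baseline is just the illustrative number from the quote, not an additional data point.

```python
# Net productivity after the "AI tax": per the Workday survey as reported,
# roughly 37% of time saved through AI is lost again to fixing AI output.
REWORK_RATE = 0.37

def net_savings(gross_hours: float, rework_rate: float = REWORK_RATE) -> float:
    """Hours actually kept after subtracting time spent reworking AI output."""
    return gross_hours * (1 - rework_rate)

# For every 10 hours of gross efficiency gained...
hours_lost = 10 * REWORK_RATE   # ~3.7 hours go back into fixing output
hours_kept = net_savings(10)    # ~6.3 hours of real savings remain
```

The same multiplication explains the "one and a half weeks a year" framing: whatever an engaged employee's gross annual savings are, a bit over a third of it comes straight back out as rework.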
My concern is that the way it's being presented contributes to a sensibility that AI itself is underperforming and that AI itself is overhyped. The reason that matters is that it has the potential to change the way that individuals and companies think about AI adoption. Put simply, some number of people are going to see this and feel like it perhaps takes them off the hook a bit; in other words, that they were right to be skeptical, and that maybe they don't have to figure out where to carve out the time to learn how to use these new tools because they're not all that good anyway. On an individual level, to say nothing of a company level, this is not a winning strategy for adapting to the new world that has in fact already arrived. As I said at the beginning, my interpretation is a little bit different. I think that all of these studies are actually adding up to a story of a widening gap between leaders and laggards. Let's home in and focus on the vanguard of companies, those 12% who are seeing both an increase in revenue and a decrease in cost from AI, rather than focusing on the 56% of CEOs who haven't seen a change in either revenue or cost. What makes these 12% different? Are they doing things differently? And the short answer is absolutely yes. Those vanguard companies, the top 12% who are actually seeing double financial gains in terms of increased revenue and reduced cost, are 2.6 times more likely to have embedded AI into their core processes. 44% of those companies say that they are deploying AI to a large extent, versus just 17% of everyone else. PwC writes: foundations matter as much as scale. CEOs whose organizations have established strong AI foundations, such as responsible AI frameworks and technology environments that enable enterprise-wide integration, are three times more likely to report meaningful financial returns. In other words, deeply integrating AI triples your likelihood of positive outcomes. 
This might be old hat for many of you who are here, but the glaring point that stands out from that is that this is a story of enterprise environment more than a story of AI capability. Now let's take a look at the Section report, which is explicitly about AI proficiency. And once again, the headline stat here is not encouraging. By Section's metrics, just 3% of employees are using AI proficiently, as opposed to 97% who are either AI novices or AI experimenters. 40% said that they'd be fine never using AI again. But the story that actually emerges is that employees aren't being given the tools to succeed. 85% of the knowledge workers from the Section survey had either no work-related AI use cases or only beginner-level ones. 59% of the reported AI use cases were basic task assistance, things like replacing Google search and drafting, editing, and summarizing documents. Only 2% of respondents have built any sort of automation, and only 3% of respondents said their most valuable use case was data analysis or code generation. Indeed, only 2% of use cases overall were judged to be advanced. In short, companies are not giving their employees tools to go beyond the most basic of use cases. They are instead dropping LLMs on top of their heads, in many cases, based on enterprise deployments, LLMs that are a generation or two behind, and telling them to make it work. And we do see organizations that are providing their employees tools having more proficient employees compared to a baseline employee across the whole study. Employees that have access to tools have 1.5 times the proficiency of the baseline, which is nearly the same as the multiplier for employees who report having a coherent company AI strategy, where the multiplier is 1.6. And by far the biggest multiplier comes from employees whose managers explicitly expect AI usage, who are 2.6 times more AI proficient than the baseline employee in the study. Leadership expectation is the strongest catalyst, signaling that AI is now core work. 
And although it's not captured in the study, my strong guess is that providing time for people to actually learn and experiment with the tools outside the bounds of their normal work, rather than expecting them to just figure out when to go do that exploration for themselves, would be a similarly powerful catalyst, on the order of the nearly 3x proficiency gain over baseline that leadership expectation delivers. Now, one of the things that shows up in the Section report is also just this catastrophic divergence in perception between the C-suite and individual contributors. This reminds me of a study from Writer back in December of 2024 that found, among other things, a 30-point gap between the percentage of C-suite executives who said that their company had a coherent AI strategy and the percentage of employees who thought so. That gap is actually even larger in this study: 81% of C-suite officers surveyed said that their company had a clear AI policy, compared to just 28% of individual contributors, a 53-point gap. The encouragement to experiment was 51% for the C-suite versus just 20% for employees. Tool access was 80% for the C-suite versus 32% for employees. Training received was the widest divergence, with 81% of the C-suite reporting that they had been trained and only 27% of individual contributors. What this means is that these challenges will not self-correct. The C-suite, by and large, at least from these studies, is not recognizing the problem. And on the same theme of the problem not fixing itself, we see reinvestment failure becoming a major bottleneck. This comes from that Workday study, which found that nearly 40% of AI time savings are lost to fixing AI output. When asked about how they reinvest their savings from AI into the organization, 39% goes into tech infrastructure versus just 30% into workforce development. And importantly, the numbers are even more dramatic for time savings allocation: 53% of the reinvested time saved goes into systems, versus just 29% for people and workforce development. 
And to be clear, this doesn't appear to be a strategic determination that investment into systems is better than investment into people. 59% of leaders say that skills development is their priority, while just 30% of employees are experiencing that, a 29 percentage point gap. Zooming out, the Workday study starts to dramatize the leader-laggard gap when it comes to individual employees. They divide employee personas into four groups: the observers, who stand on the sidelines, not wasting time fixing but not generating value either; the misaligned middle, who are struggling to make tools work and find that the effort required to clean up output outweighs the benefits; the low-return optimists, who have high AI activity but also high rework; and the augmented strategists, who are seeing the highest net productivity gains. There are huge differences in the profiles of the augmented strategists compared to everyone else. 93% of the augmented strategists treat AI as a radar to spot patterns rather than a crutch. 71% are experienced professionals aged 35 to 44. 57% report that their organizations have increased investment in team connection, which is way higher than the other categories. And the augmented strategists are two times as likely to have received substantial skills training. By contrast, the low-return optimists, who have high enthusiasm but also a high rework burden, report only 37% increased access to skills training, which is actually the lowest of any group. All of this reflects things that we found in our AI ROI benchmarking study as well. We found there to be statistically significant correlations between the diversity of AI use cases and how much ROI benefit companies were reporting. In other words, companies saw more benefit when they had AI use cases that were not only for time or cost savings, but also for increasing output, increasing the quality of strategic decision making, and unlocking new capabilities. 
Basically, the larger the number of categories of ROI that use cases led to, the more overall benefit companies were seeing. We also saw that use cases that focused on strategic outcomes rather than just efficiency outcomes had higher net reports of ROI benefit. Ultimately, the story that these surveys tell is that companies who are investing deeply in putting proper AI foundations into place are seeing two to three times the benefit of everyone else. And because of the nature of these tools, those benefits are compounding: the more proficient with AI you get, the more likely you are to continue to get further ahead. The lens, then, to read these studies through is not that AI is overhyped, but instead the very real burden of infrastructure for AI adoption and the significant gains that come from it. Now, I know that many of you listeners are the folks inside your companies who are responsible for AI strategy and who are advocating for the sort of policies that we're very clearly seeing lead to better outcomes. Hopefully some of these studies can provide fodder for you to win more internal arguments. That is what I wish for you. But for now, that's going to do it for today's AI Daily Brief. Appreciate you listening or watching, as always. And until next time, peace.