BG2Pod with Brad Gerstner and Bill Gurley

AI Enterprise - Databricks & Glean | BG2 Guest Interview

45 min
Dec 23, 2025
Summary

Brad Gerstner and Bill Gurley interview Ali Ghodsi (Databricks) and Arvind Jain (Glean) about the current state of enterprise AI adoption. They discuss why 95% of AI projects fail (it's actually healthy experimentation), why LLMs are commodities, and where real value accrues in the AI stack—arguing that proprietary data and application layers will capture most value, not foundation models.

Insights
  • The 95% AI project failure rate reflects healthy experimentation, not a sign of technology failure; successful companies focus on leveraging proprietary data and business processes rather than chasing generic AI applications
  • LLMs have become commodities—interchangeable and price-comparable like gasoline—meaning competitive advantage comes from data, domain expertise, and application-layer products, not from model selection
  • Enterprise AI success requires treating it as an engineering discipline with proper evaluation, productization, and team investment, not as a quick demo or magic solution to business problems
  • The AI stack will likely distribute value across three layers (data, intelligence, applications), but applications and data governance will capture disproportionate value compared to foundation models
  • Speech interfaces and proactive AI products (bringing AI to users rather than users coming to AI) represent underestimated opportunities, while coding automation and customer service chatbots may be overhyped
Trends
  • Enterprise AI adoption shifting from experimentation to productization phase, requiring proper governance, security, and change management
  • LLM commoditization accelerating, forcing AI companies to compete on data access, domain expertise, and application-layer differentiation rather than model quality
  • Agentic AI systems replacing rule-based automation (RPA), enabling continuous learning and pattern recognition versus brittle, static rule sets
  • Data strategy becoming prerequisite for AI strategy; companies investing heavily in data governance, security, and access control for AI systems
  • Organizational change management emerging as critical bottleneck; technical AI capabilities outpacing human adoption and process redesign
  • Proactive AI (systems that anticipate user needs) gaining traction over reactive AI (systems users must query), improving adoption across casual users
  • Meeting recordings and conversation data becoming primary data source for enterprise knowledge capture and system-of-record updates
  • Three distinct AI camps emerging: super-intelligence quest (frontier labs), foundational research (academics), and pragmatic enterprise value creation
  • Speech interface maturation approaching critical threshold; keyboard elimination seen as near-term possibility despite current limitations
  • Consolidation pressure building in enterprise software as AI enables workflow automation across previously siloed applications
Companies
Databricks
Co-founder Ali Ghodsi discusses enterprise AI adoption, customer use cases, and internal AI automation across 6,000-p...
Glean
Co-founder Arvind Jain presents enterprise search and AI assistant platform, recently crossed $200M revenue run rate,...
Royal Bank of Canada
Customer example: built agents with Databricks to automate equity research analysis, reducing report generation from ...
Merck
Customer example: created TEDDY (Transformer Enabled Drug Discovery) model for gene regulatory network analysis and d...
7-Eleven
Customer example: deployed agents to automate marketing stack including audience segmentation, content creation, and ...
OpenAI
Discussed as leading frontier lab in super-intelligence quest camp; ChatGPT cited as consumer AI success with billion...
Anthropic
Mentioned as frontier lab pursuing super-intelligence; predicted to be up in 12 months with continued growth in codin...
Google
Gemini model mentioned as competing with ChatGPT; Google Sheets versus Excel comparison used to illustrate software s...
Salesforce
Enterprise software example discussed regarding future of CRUD apps and AI-driven workflow automation in CRM systems
ServiceNow
Enterprise software platform mentioned as example of traditional software layer potentially disrupted by AI-driven au...
Zoom
Positioned as ideal data entry application for capturing meeting information and feeding it into enterprise systems o...
TSMC
Referenced as analogy for how LLM companies will function as commodity fab-like businesses, valuable but interchangeable
Cisco
Historical example of infrastructure company that remained valuable ($300B) despite not becoming dominant application...
Amazon
Historical reference: existed in 1998 alongside early internet infrastructure, illustrating that infrastructure and a...
Airbnb
Historical example of killer app that emerged from internet infrastructure layer, contrasting with predicted portal-b...
Uber
Historical example of unexpected killer app that emerged from internet infrastructure, illustrating unpredictability ...
Facebook
Historical example of killer app that emerged from internet infrastructure, contrasting with predicted portal-based f...
Twitter
Historical example of killer app that emerged from internet infrastructure, illustrating unpredictability of future w...
Cursor
AI coding assistant mentioned as example of SMB/developer-side AI adoption with hundreds of millions of users
Perplexity
AI search/chat product mentioned as consumer AI tool alongside ChatGPT with billions of users
People
Ali Ghodsi
Co-founder and CEO of Databricks; discusses enterprise AI adoption, customer use cases, and internal AI automation st...
Arvind Jain
Co-founder and CEO of Glean; presents enterprise AI assistant platform vision and personal AI companion strategy
Brad Gerstner
Host and founder of Altimeter Capital; frames discussion around AI bubble, value distribution, and enterprise adoptio...
Bill Gurley
Co-host; asks probing questions about AI economics, physics of CapEx justification, and future software layer disruption
Yann LeCun
AI researcher and founding father of deep learning; cited as skeptic of super-intelligence scaling approach, advocate...
Rich Sutton
AI researcher who created reinforcement learning; cited as skeptic of current super-intelligence approach, advocates ...
Quotes
"I think we have AGI. I think we have artificial general intelligence. We really have it. You hear these 95% of projects fail, but that's actually what you want."
Ali Ghodsi (early in discussion)
"I think the LLM is a commodity. People are not saying that, but it is a commodity. Like, you can get gas from this gas station, you can get gas from that gas station. It doesn't matter. Just compare price."
Ali Ghodsi (mid-discussion on competitive differentiation)
"Your AI strategy starts as your data strategy. So you've got to get the data house in order first."
Brad Gerstner (discussion of enterprise AI implementation)
"It's an engineering art. Like, if you're going to have a company that's going to be really differentiated, like my company or your company or anyone's company, and you want to beat the competition, you can't just quickly put something together and think that your competition is not going to do the same thing."
Arvind Jain (on AI implementation requirements)
"We're not in a bubble in a sense that we're not spending huge amounts of capital on what we are doing. We're just trying to get actual economic value inside of these organizations."
Ali Ghodsi (on AI bubble discussion)
Full Transcript
I think we have AGI. I think we have artificial general intelligence. We really have it. You hear these 95% of projects fail, but that's actually what you want. I think the LLM is a commodity. People are not saying that, but it is a commodity. Like, you can get gas from this gas station, you can get gas from that gas station. It doesn't matter. Just compare price. Is AI in a bubble? There is an AI bubble. Okay, so then Glean is also in the bubble. Everybody's in the bubble. No, I would say there is a bubble. I would say there are three camps. There is a super intelligence quest camp. I would be very worried there. There's a second, the researchers doing the, you know, that's definitely not in a bubble. They're like the- They're sober. Yeah, they're super sober and nobody cares about them. And there's, right? And they're probably the ones that are right, unfortunately. And then there's the third camp, which is us trying to make this valuable. We're not in a bubble in the sense that we're not spending huge amounts of capital on what we are doing. We're just trying to get actual economic value inside of these organizations. Two legendary builders, Ali and Arvind. I'm so thrilled to get into this with you because both of you have seen every super cycle I've lived through. Internet, mobile, cloud, data and AI. Not just through the super cycles, but also through the hype, the trough of disillusionment, and this time it's different. Today, we're going to chop it up on the state of AI. You know, let's start with a 20,000-foot view. Take stock of where we are. In AI, we've seen consumer AI, billions of users. The guns went off three years ago with ChatGPT. Claude, Perplexity, ChatGPT, people use them in the room. On the SMB and developer side, you've got hundreds of millions of users with Cursor and Codex and Claude Code and so on. Enterprise, on the other hand, there's a lot of divide. It's hard to see, a lot of fog of war.
On one side you've got models that are acing math benchmarks and science benchmarks and engineering benchmarks. But on the other side you've got the MIT report that's saying 95% of AI deployments don't work. What's the reality? Bridge that gap for us. Lay it out as you see it. View from the top. So I think first of all, I think we should know that people use AI in their personal and work lives both. So there's not so much of a divide. Everybody in your company is probably using ChatGPT and Claude and other tools on a daily basis. The thing that I feel is happening in enterprise is you hear these 95% of projects fail, but that's actually what you want. When you are actually experimenting with new technology, if none of your projects are failing, that means you're just not trying enough at the moment. So I think when I read the study, it was not a surprise for me. We might actually see, hopefully, similar stats next year, too, because we want everybody in the industry to be really eager and experiment and actually figure out how to actually get benefits from this technology. This would make you guys, by default, the 5% of AI that is working, which is 1 in 20. Maybe let's go to the 5%. What is the use case that is working? And not just working like it's saving me time, but like it's working and it's transforming my company. Something that you can take to the bank, to the CFO, where the CFO will notice it but legal won't shut it down. I mean, look, we're seeing a lot of use cases that are working. It's just that, you know, it's not like you can just unleash the agents and it just works. It's an engineering art. Like, if you're going to have a company that's going to be really differentiated, like my company or your company or anyone's company, and you want to beat the competition, you can't just quickly put something together and think that your competition is not going to do the same thing. So that's going to be something that needs evaluations.
It needs something that you're going to productionize. It's going to take effort. You need a great team around it. But we're seeing a lot of them. I'll give you some examples. Royal Bank of Canada built agents with us that basically kick in as soon as an earnings report comes out. So equity research analysts, their job is to put together these reports that say like, you know, this is a buy, this is a hold and so on. The agent goes, gets the earnings report, gets all the previous earnings reports, gets all the competitors' earnings reports, gets everything that's going on in the market, does the full analysis, the news, everything, puts it all together, and it can get the equity report out in 15 minutes from the earnings call. Industry standard is two hours. Of course, it's going to get commoditized and others are going to do that as well. But that's an actual really important use case that we're seeing in finance. So that's like finance, an example in finance, right? And there's lots of examples like this, sifting through hundreds of thousands of documents, SEC reports, and so on. That's finance. Let's switch gears. Let's go to healthcare. Healthcare is completely different. In healthcare, we have a customer, Merck, that in the life science space created a model called TEDDY. TEDDY stands for Transformer Enabled Drug Discovery. And this is a transformer model, kind of just like large language models that can predict the next word, but it instead can figure out which gene is missing if you remove a gene. So it really understands the gene regulatory network and can really start telling you what's happening with gene expression and so on. So this is really important for drug discovery. It's the beginnings, but this is going to actually help us do things that we couldn't do before. Let's pick retail also. So I'm picking different industries. Healthcare is one. I gave you finance, right? The RBC one. Let's go to retail, 7-Eleven: agents that completely automate the marketing stack.
Actually, the marketing stack is going to get disrupted pretty heavily. So these agents can basically prepare. They can segment the audience. Like, this segment wants to hear this. And they can prepare all the marketing material that's directly targeting you guys. And they can put the campaigns together and do that. 7-Eleven was doing this before as well. But, you know, this, and we're seeing this at Databricks as well, more and more is being done by agents and being automated. So you can just do it faster, and you can segment more fine-grained. Because before, you had to create the content for the groups, and that heavy content creation was something that was human manual labor. Now you can actually do that much, much more. You can have all your web materials completely customized for a target group. So these are examples where it is working. There are also lots of examples where it's not working, even with Databricks. We're not just the 5%. We have some of that 95% too, but those are some examples where we're seeing success. Ali, follow up on that. These are great examples. Thank you. Maybe if you were to take it a layer up, what is common across these use cases or these organizations or these CIOs that's making these use cases work? Is there something that we can pattern match? Yeah, look, I think the LLM is a commodity. People are not saying that, but it is a commodity. Like, and, you know, when I took econ classes, a commodity was when it's interchangeable. Like, you can get gas from this gas station or you can get gas from that gas station. It doesn't matter. Just compare price. LLMs have become that way. Like, it doesn't really matter. This one is better right now. Next week, that one is better. You can't even keep up anymore, right? What's happening? So they're a commodity. So it's not about that. It really comes down to your company. What data does your company have that's special that your competitors don't have? Can you leverage that?
And can you build AI that really understands that data? Because that's not a commodity. There's not an AI out there that understands all your business processes and your company, your secret sauce and your data. That's not a commodity. In fact, that's closer to the 95%. It really comes down to that. Or if you have a complicated process that just your company has, this is how you deliver your product and services in your company. And that portion can be disrupted with AI somehow. If you can do that, now you can get ahead of your competition. But it comes back to what makes your company special. Unfortunately, a lot of companies are just building commodity stuff. You should not be building that because it's a thing that every company can do. It's not special to your company. That's, I think, the problem in a lot of the industry. Another problem in the industry is a lot of demoware. It's really easy to make cool demos with GenAI, and therefore we're seeing a lot of cool demos, but that's all they are. Yeah, well, something we say around Altimeter quite a bit is your AI strategy starts as your data strategy. So you've got to get the data house in order first. And there are a lot of reasons why use cases that we're trying are not working. Maybe give us an example of the 95%, of an AI bet that either of you had at Databricks, at Glean, that did not work out, and why it didn't work out? It's actually an interesting thing with engineering today: you build systems, and never before have you been in this mode where you start with a great idea and it doesn't seem like a good idea anymore, like within two weeks, because we see a new development that happens. So we have numerous failures in engineering on that front. For example, some of our fine-tuning work, building models for a specific use case within our product, like, you know, didn't really pan out for us.
And ultimately, the choice was that, you know, we can go with already built models, whether they are small open source models hosted on Databricks, or one of the large, you know, foundation models. But internally, like, you know, from a corporate, you know, use cases perspective, actually, like, you know, we are also, like, in many ways in this mode where a lot of our work actually, like, I would not say, like, fails, but it actually takes much longer than expected, you know, to actually generate success. You know, we're actually trying to automate a lot of our business processes internally. And like, for example, like, you know, one thing that I want is in our company, I want everybody to actually know exactly what their top priority for the week is, what they want to work on. And maybe, you know, we want an AI agent to actually first tell them what their priority should be. And we want it all to be documented. And we want a system which actually then, you know, rolls it all up and I get a view every week where I can actually quickly see, you know, what are all the different people working on in the company and is that aligned with, you know, what I want them to work on. And it's a simple thing. Like, you know, companies have always tried to actually have this. You know, as CEOs, you always want it and it's always hard to make it happen. And we thought that AI would simply just, like, you know, magically do all of this work because, you know, it has all the context, it has all the context inside the company to make it happen, but I still don't have it. So things do take time. To Ali's point, like, you know, AI is just one more tool that you have in the toolkit. It does not suddenly make building complex enterprise systems so easy that you can, you know, build it up like in one day. Yeah. You know, the last time enterprises got this excited about a tool was called RPA.
And we know how that ended. It unfortunately fizzled out. And somebody in the audience yesterday is like, hey, how is this time different from RPA? It seems like the same movie: bigger budgets, better actors. What's different this time? How is the nature or the architecture of the technology different from the previous automation cycle? Either of you. Yeah, well, I mean, first of all, RPA, like, it didn't take, you know, it didn't capture my attention at all. So I actually can't, you know. What is RPA? So I think, like, I would not compare these two technologies at all. Like, you know, what we're seeing now with AI is so fundamental. You know, when we saw it first, it was basically magic. And we couldn't believe that this is a machine that is doing this work. Machines just simply cannot do these kinds of things that we saw them do, like writing on their own, having emotion, understanding emotion. So it's, you know, it's fundamental, it's different. And that's why I don't think this technology is going to fizzle out. And you don't have to be a financial expert or a deep thinker on business. This is obvious stuff. All of us know, all of us feel it. All of us can see the capability of this technology and we know it's special and it's going to be around. Yeah. You want to hear my RPA take? Please. You know, I mean, it was rule-based. And the problem with it, especially if you, you know, want something that automates what's going on on your desktop and automates the work that's happening, is that there are too many unexpected things that happen and it's just hard and brittle to set it up. It wasn't learning, ever. So there was like zero learning. It was like you tell it exactly, here are the rules. And if it got something wrong, you need to go back and expand the rules. Here, you have something that's learning, right? So it can improve and it can generalize and it can understand the patterns and do pattern recognition.
So that's the fundamental difference between these two. Now, there have been many startups that have failed in generative AI that said, we're going to replace RPA with generative AI models. There are many startups that have failed, actually, that I know of, like some pretty high-profile ones. It's because the paradigm we live in today with AI is there are still problems. The biggest problem is that you bake a model, and that's where it's learned everything it needs to learn, and then you freeze it, and then you launch it. And then maybe you give it some context, but that's it. It's frozen. So therein lies the problem: you know, we need an AI that really can sort of continue learning while it's using the desktop and clicking around. So I do think this problem is hard to nail, but I think Arvind is right that there's no comparison at all. It's like brittle rule-based stuff versus a learning agentic system. I think it's going to nail it eventually, but we haven't really nailed computer use yet. Yeah, working on it. The number one shift is this move from if-then-else statements to a more generative solution that figures out the solution. And so you're trading breadth for maybe determinism. That seems to be the difference. And, you know, there are a lot of CIOs in the room we've got here, and they've got budgets coming up to plan. If you were giving advice to them, like, hey, based on everything I know from my customer base, here are one or two things that you've got to figure out and align incentives on, or it could be a reliability problem or org design, what advice would you have for CIOs who are thinking about their AI budgets right now? Well, spend more. Put it on Glean. Spend more, yeah, put it on Glean. But I think the one thing, you know, which is important in the AI market today is that it's very new and there are many players. In fact, every software company is also an AI company now. You can go and check their websites.
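The rule-based-versus-learning contrast the guests draw can be caricatured in a few lines. This is a toy sketch, not either product's code: `rpa_route` and `learned_route` are hypothetical names, and the "learning" here is just word overlap against past examples, standing in for the pattern recognition Ali describes.

```python
# Toy contrast (illustrative only): brittle rules vs. a system that
# generalizes from examples, as discussed for RPA vs. agentic AI.

def rpa_route(subject):
    # Rule-based: exact keyword patterns only; anything unexpected
    # falls through, so new phrasings require expanding the rules.
    if "invoice" in subject:
        return "billing"
    if "password" in subject:
        return "it-support"
    return "unhandled"

def learned_route(subject, examples):
    # Caricature of learning: pick the label of the most
    # word-overlapping example seen so far, so a phrasing never
    # written into any rule can still be routed sensibly.
    words = set(subject.lower().split())
    best = max(examples, key=lambda s: len(words & set(s.lower().split())))
    return examples[best]

examples = {
    "please pay this invoice": "billing",
    "reset my password": "it-support",
}

print(rpa_route("payment overdue"))                       # unhandled
print(learned_route("pay overdue bill please", examples))  # billing
```

The point is not the matching heuristic but the failure mode: the rules return "unhandled" the moment wording drifts, while the example-driven version degrades gracefully.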
So I think it's just hard to actually figure out where to allocate those budgets. And what we tell people is that I think the winners are yet to be identified. And so experiment with more vendors, do shorter term contracts. And while that's easy to say, it's hard to actually implement, because every product that you try has, you know, a cost that you have to pay to even test it. So you have to also pick products that are easy to test. I mean, those are the ones that don't require you to, you know, spend the next six months trying to implement something and you have no idea what's going to come out after that. Like, you know, the products of today, the products that are built with the right AI, they should work, you know, very, very quickly for you. Crawl, walk, run. We're going to take a peek into the future. Shifting gears, you know, one of the things that keeps investors like me up at night is a quarter trillion being spent on NVIDIA on the semi side of things. Assuming that is just 50% of the CapEx, you're spending about half a trillion on CapEx, and then you've got to earn about a trillion dollars of AI revenue for all of this CapEx to be worth it. And just to put this in context, the entirety of the software industry earns about $400 billion of revenue. This seems like a physics problem at this point. How do you think this plays out? You know, you've got to make about a trillion dollars of revenue to justify this present spend that's already happening. How do you think this shakes out? Maybe we start with you, Arvind. Wrong person to start with. But, you know, I'm an engineer and I actually don't really, you know, think too much about who's spending what money. Like, you know, we're here to build our product and add value. So in some sense, you know, I've not really thought too much about this problem. But if you think about AI, you know, AI is not actually, you know, extending software in a marginal way. It's a different product.
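Brad's back-of-the-envelope math above can be spelled out. The figures are his assumptions from the conversation, not verified numbers, and the 2x revenue-to-CapEx multiplier is implicit in his "half a trillion of CapEx needs a trillion of revenue" framing:

```python
# Brad's CapEx arithmetic (his stated assumptions, not verified data):
# NVIDIA spend is ~50% of total AI CapEx, and CapEx needs roughly
# 2x its value in AI revenue to be "worth it."
nvidia_spend = 0.25e12               # ~$250B/year on NVIDIA
total_capex = nvidia_spend / 0.5     # chips assumed to be half of CapEx
required_revenue = total_capex * 2   # implied revenue to justify spend

print(f"Total CapEx:     ${total_capex / 1e12:.2f}T")      # $0.50T
print(f"Revenue needed:  ${required_revenue / 1e12:.2f}T")  # $1.00T
# For scale: the whole software industry earns ~$0.4T today,
# per the figure cited in the discussion.
```

That gap, required AI revenue roughly 2.5x all current software revenue, is the "physics problem" he is pressing the guests on.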
And in fact, you know, it's actually going to grab a lot of revenue that today is in the services industry, which is 25 times larger than the software industry. So there's a lot of spend that is going to move. I mean, the spend that you see happen on AI is actually sort of those service dollars that are converting into AI or software dollars. But with that said, you know, maybe you have a more informed view on this. By the way, just to build on that, you said, you know, I'm an engineer, I want to just build something that's cool. I do think it's not binary, right? It's not like, okay, so the physics doesn't work out, so the whole thing will collapse. No, there are going to be things that work. And so it is a good idea to continue focusing on the stuff that is obviously already working, continue expanding on that. But I think if you zoom out, I think there are like three paradigms or three kinds of camps. And I put Arvind in the third camp. I actually put myself also in the third camp. But let's start with the first camp. I think the first camp is this quest for super intelligence camp. And, you know, I think all the frontier labs are doing this, like all three, four, or five of them, however you want to count them. And I think a lot of it still comes from the scaling laws mentality, which is whoever has the most GPUs and the most data is going to win the quest for super intelligence, which is a kind of intelligence that's almost godlike. It leads to recursive self-improvement of the AI, and once you have that, it can cure cancer and solve all economic problems. And we can probably 10x GDP over a few years' period of time. So what the hell are you talking about that there's a physics problem? Like, any of your cost equations are going to pale in comparison to the economic value that this thing is going to provide. So that's like one camp.
And the way they're developing it is bigger and bigger clusters, more and more energy. And that's how they're going about it. And that's where most of the capital is going. Right. Right. That's not the kind of capital you're spending or I'm spending. But that's that camp. And then how do they know that they're succeeding? They're not just like, oh, just trust us. They're, you know, very smart people working on this. So the way they're approaching it, they're saying, you know, we'll throw the hardest questions we have at whatever AI we have now. And if it nails them, and we're making really rapid progress, so what's your problem? Like, look at the math Olympiad. We're, like, nailing these math Olympiad problems and physics Olympiad problems. And in programming contests, it's, like, better than any human being. So, like, that's what they're doing. They're throwing all the most intellectually challenging problems at it. There's a second camp, which are the people that created the original technology, the scientists who created the technology and got the computer science Nobel Prize for it, which is called the Turing Award. And that's, you know, Rich Sutton, who created reinforcement learning, which, you know, a lot of this stuff is built on. You have Yann LeCun, who was one of the three founding fathers, and many others. They have, for many years, actually, I've asked them for years, they've been saying that that first camp is not going to, that's like not even the right approach, is their view. They're like, no, that's just autoregressive next token prediction, just probabilistically predicting the next token. And usually they will say, that's not how humans learn. That's not how animals learn. You know, we operate in a different way. Your brain is not that way.
And one example is that, you know, even a child learns very quickly to walk and talk and do things with very little data. Certainly no child is reading all of the internet's data four times over before they learn to speak. So that's like camp number two. Those guys, by the way, they say it's 20 years out. So they're saying, hey, it's a physics problem, and it's going to take 20 years to get there. Which to me is like, I don't know. Like, leave me alone. Let me do research. The third camp, which is, I think, what we are in, is I don't think we need super intelligence. Like, you know, I don't think we need that super intelligence right now. Maybe they'll get there. That's awesome if they do. But I think we have AGI. I think we have artificial general intelligence. We really have it. We absolutely have it. Anyone who says we need to get to AGI, that's like, it's a false premise to start with. We already have AGI. I came to the United States in 2009 at UC Berkeley, not far away from here, and I was in an AI lab. It was called AMPLab: algorithms, machines, and people. And these are all AI people. And back then, the definition of AGI we had, we already have satisfied that. Like, I know the discussions we had. And I actually went back to some of those folks to see, like, is it just me? Or what was the sentiment back in 2009? And everybody that I talked to said, yeah, by those standards, we had AGI, but we've changed the definition now. We keep adding to those definitions, you know. So for 30, 40 years, we had a definition of AGI. We've already hit that. Now we're changing it and moving the goalposts. But very obviously, we already have AGI. Just use any of these LLMs and have it do some reasoning. And certainly, it's smarter than a lot of friends that you have, right? Like, you know, let's not name them, or coworkers or whatever, right? So you already have AGI. Now we're haggling over exactly how smart it is.
You know, do you have a friend that's smarter or not? So if we already have AGI, we just need to make it useful inside the enterprise. We need to just expand that 5% to be 10%, 20%, 30%. So that's why I think Arvind's answer is actually a good answer. Like, we have the AGI we need. Let us just focus on solving the actual problems inside the organizations. And I think that's already enough to automate a lot of the tasks and get huge economic value out of it. We don't actually need super intelligence for that. That's a good thing. If the super intelligence guys nail it, amazing. Then we've cured cancer. If they don't, hopefully the second camp comes up with a new thing in the next 20 years. That's also awesome. We already have whatever we need. So, yeah, let us just do our engineering. Right. Yeah. That's really good framing. And the way this manifests in the world is there's a data layer. There's the intelligence layer, which is where camp one is presumably producing a lot of great models. And then there's the software layer, where the users engage. Where do you think value will accrue? If you were to design 100 units of value across these three layers, the data layer, the intelligence layer, and the software or the application layer, where do you think value accrues in the next five years? This is a tough question. I mean, I think all those three layers actually are very fundamental.
I thought you were going to add a few more, which you didn't. Because, as Ali was saying, the models are going to be available to all of us. They are going to be a commodity, and it's going to be hard to see more spend going to them versus these layers on top. It's hard to come up with where the most value will be. And I also don't know if it actually changes from today's technology architecture, where, again, in a pre-AI world, for any enterprise application and data system, you have data systems, you don't have enough of that intelligence layer today, and then you have the application layer. So I guess some dollars will shift into it. We do think that the intelligence layer is actually going to be a pretty thick one. Maybe it will capture half of the enterprise value. Anything to add, Ali? Yeah, I think, as I said, the LLMs are a commodity, as Arvind said. By the way, that doesn't mean those companies are not going to be valuable. They can be very valuable; I mean, TSMC is very valuable. But I'm just saying, they're going to be kind of like these fab-like companies. They're interchangeable. And we've never seen something like that ever. People just switch LLMs in one day. That's not the case with your iPhone versus Android, or your Windows versus your Mac, or your anything versus anything. Google Sheets versus Excel is, like, a huge religious battle inside our company. But with LLMs, because it's a commodity, as I said, it just speaks English or any language you like, and it gives you different answers every time.
Might as well just try the cheaper commodity, or fly to the smarter commodity. You can't even really tell the difference, can you? So then what is special is the data that you have. Again, if your company has data that it has actually collected that your competitors do not have. Glean is amazing, but if you remove all the data from Glean, there's no use to it, right? So it's all about the data that you have. And can you secure that data also? Because if we're going to have agents running around accessing this data: oh, that's his HR data; oh, here is Apoorva's salary information, oops, I blurted it out to all of you. So how do you lock it down? How do you make sure there's governance? There's also a lot of worry around, what if it's using a Chinese model? What if it's accessing this information? What if it's sharing this information with a competitor? So the governance and security layer is going to be super, super important. But I do think most of the value will accrue to the apps. And I think that's common sense. I just don't know which apps. I do think Glean is amazing. Do you think of it as an app? I don't know. We see it as both an app and a platform, so I think you'd call it an app platform. I do think it's amazing because it has the potential to automate so much of the overhead inside of an organization. If you think about why organizations have hundreds of thousands of employees, some organizations, or 50,000, 20,000, a lot of it is the coordination overhead: so many people have to communicate with each other. Hey, what happened? What did you exactly mean by this? Let's do a meeting where you explain to me and I ask some questions. Or let's invite these other guys also, and then write it down. The coordination overhead of organizations is massive, right?
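(Editor's aside: the quadratic blow-up in coordination that Ali is describing here can be made concrete with a small illustrative sketch; the function name and numbers are ours, not from the conversation.)

```python
def pairwise_channels(n: int) -> int:
    """Number of distinct person-to-person communication channels
    in an organization of n people: n choose 2 = n(n-1)/2."""
    return n * (n - 1) // 2

# Headcount grows linearly, but the number of possible channels grows
# quadratically, which is why coordination overhead dominates at scale.
for headcount in (10, 1_000, 50_000):
    print(f"{headcount:>6} people -> {pairwise_channels(headcount):,} channels")
```

A 10-person team has 45 possible channels; a 50,000-person company has over a billion, which is why communication flows through the silos of an org chart rather than point to point.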
It's like this N-squared problem: everybody needs to communicate with everybody, and they're communicating inside their siloed org chart, but how do we get it across? So docs and Excel sheets and PowerPoints and meetings are how we move companies and organizations forward. So much of that can be augmented and made more efficient with Glean. That's why I think Glean is amazing. But this is kind of like 2000, and you asked, what are the killer apps on the internet? Back then we thought it was Cisco, routers, portals, maybe, with thousands of links on them. Actually, I had just started college. And we knew that the future of the internet would be portals, these webpages with 100 links on them, and you just click on the right link. This was before Google search. But the future of the internet actually didn't look that way. It ended up being things like Facebook for friends, Airbnb for rentals, Uber for the cab industry, Twitter, and so on. Those became great companies. So I don't know what those are for the future. They will pop up, and they will be extremely valuable. But, okay, so does that mean that Databricks and Glean will basically die and there will be a new set of companies? No. Back then there was actually already an Amazon.com in '98. There was already a Google in '98, and so on. Cisco, by the way, is still around, and it's only a $300 billion company or something like that. So it's not binary. We'll see what happens. But I do think a lot of value will go to the future apps that will emerge. Let's double-click into that. The $300 billion companies of today at that layer, software, apps, Salesforce, ServiceNow: there's a lot of talk about software being dead. Satya calls them the CRUD apps.
What is the future of this layer that today is called software, that seems to be heading towards becoming a database? And where do you see value accruing in this part of the stack? Maybe start with you, Arvind. Yeah, I think that's an oversimplification. For example, to say that Salesforce is just a database: it's a full ecosystem of workflows and other applications that have been built on top of that infrastructure. So I haven't really understood this concept that you have this database where all your enterprise data is, and then people can just go create dynamic UI experiences on their own, on top of that data. Every business can, for example, just create all the UI by themselves on top of it. I don't think it's going to happen like that. Yes, AI makes it easy for you to build. You can have a database, and you can just talk to AI and create a UI, an experience that is exactly what you want it to be. But most times you actually won't know what you want. A lot of the good in software companies is that they actually think about how to take that data and present it in a way, make people interact with it or modify it in a way, that is natural and drives more productivity from a human. So I think ultimately software is an end-to-end stack, in my opinion, and all of these companies, I don't think they're going away. I don't think they're going to be relegated to becoming a database. Humans, you know, over the last 20 years, we got addicted to these screens. We scrunched over these screens, and we would input this information with our keys, with a dropdown: hey, I met Arvind today, and this is what I learned. It should really be: hey, chat, met with Arvind. This is what I learned. Remind me in two days to catch up with him. That will happen. That, I think, is going to happen in the next couple of years. And even with Glean, you won't be able to type.
You'll want to talk to it. But I think the big thing is data entry. How does the data appear in that database? That's not completely automated today. So a company that would be well positioned to do that would actually be Zoom. A lot of people don't think about it that way, but Zoom really should be the perfect data entry application, right? Because that's where you're having all the conversations, and that's where all the information is coming out. And if it could work with Glean and extract the most important information and store it, not in a structured table, but store that information in the system of record. If you had that, that would be the full disruption of the SaaS stack. We have that. That's actually one of the most common agents these days with Glean: you take these meeting recordings, you figure out what you talked to the customer about, what the action items were, and then the agent goes and updates the notes in Salesforce with that. These kinds of things are happening already. Meetings, yeah. At Glean, we have this policy where we record every single meeting, internal meetings, and external meetings if our customers allow, because there's so much information in there. I joined a meeting last week. It was four humans and six AI note-takers. Yeah. I heard, I think yesterday, we were talking about 17 note-takers in one of the discussions. It felt like the first scene of a movie where the AI takes over. Clearly, there's a lot of sprawl. There are almost too many tools, and consolidation is coming at some point. But maybe your guys' personal workflow: you guys are CEOs in the age of AI. A lot of the CIOs in the room have more jobs than time on their hands. How are you using AI for your personal selves, and how are you driving your organizations, both large organizations, to adopt AI and benefit from it?
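(Editor's aside: the meeting-recording agent Arvind just described, transcript in, action items out, CRM notes updated, can be sketched as below. Every name here is hypothetical; a real agent would use an LLM for extraction and the Salesforce API for the write, neither of which is shown.)

```python
from dataclasses import dataclass, field

@dataclass
class CrmUpdate:
    """Hypothetical payload an agent might push to a system of record."""
    account: str
    summary: str
    action_items: list[str] = field(default_factory=list)

def extract_action_items(transcript: str) -> list[str]:
    # Stand-in for an LLM extraction step: here we just collect lines
    # that start with an agreed "ACTION:" marker.
    return [line.removeprefix("ACTION:").strip()
            for line in transcript.splitlines()
            if line.startswith("ACTION:")]

def build_crm_update(account: str, transcript: str) -> CrmUpdate:
    items = extract_action_items(transcript)
    summary = f"Meeting with {account}: {len(items)} action item(s) captured."
    return CrmUpdate(account=account, summary=summary, action_items=items)

transcript = """Discussed rollout timeline.
ACTION: Send pricing proposal by Friday.
ACTION: Schedule security review with IT."""

update = build_crm_update("Acme Corp", transcript)
print(update.summary)
```

The point of the sketch is the shape of the pipeline, conversation data flowing into the system of record without a human typing into dropdowns, not any particular extraction technique.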
Maybe give us a glimpse of your leadership in the age of AI. Maybe I'll start with you this time. Yeah, I mean, we have agents for all kinds of stuff that we use. Everything from, we have agents that are really good at understanding our customers. We have an agent, Rafi is the name, where if I want to understand anything, like, tell me the best customer story on this. You know, I told you about RBC, Royal Bank of Canada, but I can just ask it: I need a use case. I'm going to get on stage. I need to talk about the finance sector. Give me a use case that has these properties. It'll just find you all the information, collected. So it's really, really helpful for me for these kinds of things, like when I get on stage like this. But also customers: if you go into a customer meeting, I want to tell customer X about their biggest competitor Y and how they're using Databricks. Now maybe Y is not using Databricks, so then I shouldn't use it. I should use Z, which actually is using Databricks, maybe that's the number two competitor. How do I get this information super quickly? All of those are automated at Databricks. So on the go-to-market side, a lot of this is being completely automated, and we're using this. The marketing stack, I already mentioned, is heavily automated already; a lot of the tasks happening in marketing. So we're seeing that happening in that stack. Then there's engineering. That's a whole big thing. And I think there's a whole question of change management and how to do it right. Initial attempts to automate a lot of the software engineering at Databricks kind of failed. There's nothing wrong with the AI. The problem is the humans and how we were organized. So those are the two big orgs. Databricks has a big 6,000-person go-to-market org and a 3,000- to 4,000-person R&D org, and then there's some back-office stuff.
In those two, we're already seeing heavy automation using agents for all kinds of tasks. Then there's the back office, so there's finance and those functions. Finance is all on Databricks, and all the forecasting has moved to a machine learning base. But it took them a long time, because they had their Excel models, and they're very proud of them, and they didn't want to, you know. So again, there's change management there. We actually had an external data science team build the AI models, and eventually they became good enough, and now finance has taken those over. Finance has largely moved from Excel to Python at Databricks. But it was a journey, because most of us speak Excel. A similar thing is now happening to HR and other departments as well. But I think, in general, HR departments are not the closest to doing this kind of analytical work with Excel and so on, so maybe that's not quite as far along. But yes, we're seeing it everywhere. Yeah. Anything to add, Arvind? Same for us. And I can share some of my own personal use of it. One of our agents is a daily prep agent, which I really love, because every morning it tells me what my day is going to be, what I need to read, what I need to prepare. For most of the meetings, I will not have context; it actually brings me the plan for those meetings. So that's one of my favorite agents. It helps me feel more confident about how I'm going to do my meetings in the day. The other one, which I shared yesterday also: I've changed my instinct. And I think changing instincts takes a long time.
And when you're the CEO, you're the boss, and everybody listens to you, and whenever you have a small question, you can just go and ask somebody, and they're going to put 30 people on the task to actually get that answer for me. And this is... They're going to have a prep meeting before the prep meeting. Yeah, it's all of that. For me, it was easy; I just got to ask somebody. And I changed that, because I knew it was causing a lot of cost; that was very expensive. So today, my instinct is, whenever I have curiosity, whenever I have questions, when I need to do data analysis, when I need to write something, my letter to the company every month, all of those things, fundamentally I use AI, Glean in this case, to actually help me do my tasks. I think you have to have that belief. A lot of people won't do it. You have to have the belief that AI is a good collaborator. It's not going to do the work for you, but if you use it, you're going to actually produce better output eventually, even if you don't save time for the first few months. You're going to improve the quality of your output. Fascinating. Well, this brings me to my favorite part of this conversation, which is rapid fire. Short answers are fine. Long answers are welcome. Start with 12 months from now: are the big AI companies that we know of today up or down? We'll start with OpenAI. 12 months from now, stock up or down? Ali and then Arvind. Up. And I'll say revenue will be up. I don't really understand how stocks work. Anthropic. Ali, Arvind. Up. Same. Okay, let me give you a little bit more. Of course. Because ChatGPT is going to continue growing, and it's on fire, and it's what everybody uses. So is Gemini, by the way. And then Anthropic, because more and more, you know, coding: we've only eaten into a small portion of that market. It's just started. Yeah. Is AI in a bubble? Yes or no?
There is an AI bubble. But saying that is like saying, okay, then Glean is also in the bubble; everybody's in the bubble. No, I would say there is a bubble, but think of those three camps. There is the super intelligence quest camp. I would be very worried there. There's the second camp, the researchers doing the foundational work. That's definitely not in a bubble. They're sober. Yeah, they're super sober, and nobody cares about them. And they're probably the ones that are right, unfortunately. And then there's the third camp, which is us, trying to make this valuable. We're not in a bubble, in the sense that we're not spending huge amounts of capital on what we are doing. We're just trying to get actual economic value inside of these organizations. So I don't think it's binary, but there is a bubble. I mean, there are startups with zero revenue worth 10, 20, 30 billion. That's a bubble. Yeah. Same. I mean, I think there are quite a few companies where there's optimism in valuations well ahead of the business that those companies have. And I guess you can say that, compared to non-AI companies, of course AI companies do have higher multiples. But there's a good reason for it, because these AI companies are going to grow more than non-AI companies, for sure. Yeah. My favorite game at Altimeter that we ask our CEOs is a long-short game: if you were to pick a company, a product, an idea that you're long, that you think is going to be a bigger deal than it is today, what is that? And then a short, where there's more sizzle than steak, more hype than reality. Pick a long, something that you're very optimistic on. Same order, Ali and then Arvind. I am very long on agents. I think I'm very long on speech as an interaction. I think keyboards are basically going to disappear completely. We haven't actually nailed speech. I know it feels like we have, but we haven't, because you're still using your keyboard.
So as long as you're using a keyboard, we haven't nailed speech. But I think we're this close to completely eliminating keyboards. So I think that's a big one. What would I short? I do think coding is a little bit overhyped. I don't know if I would short it; I mean, I think it's still the future. But I think that's one of them. I think automating customer service and support is a little bit overhyped. So basically, I think the things that the industry thinks are amazing, where we've made great progress, we probably haven't made as much progress. And then in a lot of the other things that are being ignored, we're going to have breakthroughs. Fascinating. Yeah. Yeah, and for me, I think it's the products that are going to change the paradigm: instead of you building a product and expecting people to come to you, you understand your user, your customer, very deeply and actually bring AI to them. That's the category that I'm excited about. I want to see more proactive AI products coming into the market next year. That is what is going to take it from 5% of the users being power users to 100%. Yeah. Your favorite AI tool that you use in your lives? I think Glean is awesome, if that was not clear. Let's go. So you use it all the time. Actually, a lot of the questions I would ask the team, the thing you said you changed: I first ask Glean and see if it nails it or not. If it doesn't, then I'll spin up a 30-person team to go spend a week and have three meetings and all that, to get the explanation of some simple concept for me. But usually Glean nails it. For me, I'm excited about note-takers. I've used Granola myself, and Fathom, and a few others. But note-taking is actually fascinating.
I mean, I feel like if you take those notes and then utilize them the right way, like, for example, what Ali was saying, that becomes the source of what then actually creates knowledge and saves data in your systems. That's going to change how companies work. Yeah. In closing, I'd love to get your vision for your companies. We'll start with Ali's favorite tool, Glean. Congrats, you just announced crossing a big milestone, $200 million in revenue run rate. You're signing big deals, $10 million deals. You've got super users, and you're seeing casual users. Paint us a vision for Glean from here to a billion in revenue. I think we're still doing annual planning, which some AI companies are telling me is so old school. But we're doing it regardless. That's just because we're an early startup. Did you do annual planning when you started Glean? No. No. But I think for us, the thing that I'm most excited about, again, is we think a lot about AI literacy and how you get everybody along on this journey. And we're not seeing it right now. Glean is a heavily used product, but there's still a big variance between the top users and the ones at the bottom. And that's what we want to change. So the future for us is, we want Glean to be this very personal companion for every person in every company in the world. And you have a very confidential relationship with this companion, in the sense that whatever you ask it, whatever communication you have with it, is fully privileged. Nobody else gets to see it. But this companion knows everything about you and your work life. It knows your day. It knows your week. It knows who you're going to meet. It knows your weekly goals. It knows what things you're not good at and what your career ambitions are. And with all of that, this personal companion is helping you with your work.
You know, it hopefully takes the majority of your tasks automatically and works on them before you ask it to. That's the vision we are taking our product to. We have most of the foundation for this in place already. Today you have to come to Glean to get most of that work done; in the future, we want Glean to actually come to you and do that work. Fascinating. Well, we could keep going for a bit, but I'm being called on time. Thank you so much for chopping it up with us. We got a lot of help, a lot of insights here. Really appreciate it. Thank you. Thank you. All right, Jandah, thank you so much.