Welcome, everyone, to the AI in Business podcast. I'm Matthew DeMello, Editorial Director here at Emerge AI Research. Today's guest is Prakalpa Shankar, co-founder and CEO at Atlin. Atlin is a software provider whose metadata platform provides enterprises with a context layer that stitches together their data and AI infrastructure. With these tools, teams of human and AI agents enrich business context with nuance and security measures that make data easier to find, trust, and govern across their organizations. Prakalpa joins us on today's show to unpack why 75 to 95% of enterprise AI pilots fail in production, at least according to MIT, and why solving for context rather than just data volume or model complexity is becoming the decisive factor. Our conversation also highlights how leading enterprises are closing this context gap, from reshaping operating models so business teams can co-own AI readiness to using dynamic context layers that boost accuracy and enable reliable production deployments. Leaders will hear what's driving real ROI in the enterprise and what context readiness means for any organization aiming to operationalize AI in 2026. Today's episode is sponsored by Atlin, but first, are you driving AI transformation at your organization? Or maybe you're guiding critical decisions on AI investments, strategy, or deployment? If so, the AI in Business podcast wants to hear from you. Each year, Emerge AI Research features hundreds of executive thought leaders, everyone from the CIO of Goldman Sachs to the head of AI at Raytheon and AI pioneers like Yoshua Bengio. With nearly a million annual listeners, AI in Business is the go-to destination for enterprise leaders navigating real-world AI adoption. You don't need to be an engineer or a technical expert to be on the program. If you're involved in AI implementation, decision-making, or strategy within your company, this is your opportunity to share your insights with a global audience of your peers. 
If you believe you can help other leaders move the needle on AI ROI, visit Emerge.com and fill out our Thought Leader submission form. That's Emerge.com and click on Be An Expert. You can also click the link in the description of today's show on your preferred podcast platform. That's Emerge.com slash ExpertOne. Again, that's Emerge.com slash ExpertOne. And without further ado, here's our conversation with Prakalpa Shankar. Prakalpa, thank you so much for joining us on today's show. I am so excited to be here. Thank you for having me. Absolutely. We've seen years of investment in data platforms, yet most enterprises are struggling to operationalize data at the speed and reliability demanded by today's environment. Longstanding issues in this space include fragmented metadata, static catalogs, and unclear lineage. And these problems block AI analysts and agentic systems, or at least emerging agentic systems, from succeeding in production, much less providing that business value and ROI that we're all looking for. We have organizations like General Motors and MasterCard proving that the problem isn't the data itself. What's missing is a very, very big word that escapes some definition, but it is context. And without machine-readable meaning connecting data to business logic, even the most advanced AI systems can't reason reliably, which led to that recent MIT study citing that 95% of AI pilots stall in the production phase. I was actually at a manufacturing conference very recently, and that was all the buzz. That's all anybody really could talk about. But being able to engineer that context directly into data operations is beginning to show promise. Prakalpa, you've been in this space of enterprise data for the better part of 15 years. What are the reasons you're seeing most AI pilots fail in the enterprise? And what does this say about traditional governance models? 
Yeah, it's kind of interesting, because I feel like we're on the brink of history repeating itself, but just at a larger scale. You know, in the world of data and analytics over the last decade, over $250 billion was spent on data and analytics. And 75% of those projects used to fail. I remember actually when we saw that, it used to be like our first slide: 75% of these projects fail. There's a gap between, you know, where your data is and your business context. And that's what we used to talk about. And what's become really interesting is that it's getting worse. We're spending a lot more money, hundreds of billions of dollars, on AI. And now, you know, 95% of those projects are failing. And, you know, whichever survey you study says something slightly different, but it's somewhere between 75 percent and 95 percent. Right. We're not talking about every AI pilot. Right, right. Earlier this year at Atlin, we ran this massive survey and we interviewed a bunch of data and AI leaders trying to figure out what was going on. And we realized that there's kind of this chasm between AI in pilots and AI in production; we call it the AI value chasm. And as you start crossing this chasm, you start hearing these things, right? For example, we call it this context gap between the left and the right. The first lack of context is data. I had a CIO the other day tell me, I have 1,000 AI projects in my roadmap and I don't even know where to find my data. That's my first problem. The first thing that an AI agent does is it runs a search. What is it searching? Right? And that's problem one. Second problem is business meaning. The beautiful thing that AI has done, actually, that LLMs have done, is that LLMs understand language. But who's going to teach LLMs business meaning? I had a CIO the other day telling me, TAM on the open internet can mean 100 things. 
But in my company, TAM means total addressable market. How do we take that enterprise context and feed it into AI? And the final step is governance. And this is governance of the data, and now governance of the AI too, right? I had a CIO the other day say, we are deploying an HR chatbot that can use my payroll data. Nothing else is allowed to use my payroll data. I don't want someone asking a question and it spits out the salary of my CEO. Now, that sounds like an easy thing to do. It's really hard to do in practice. And now you have this whole new sprawl of AI: AI agents, AI applications, AI lineage, what data is feeding it, AI governance regulation, the EU AI Act, what policies do I manage my AI around? And that's kind of the third and last step. This is what we call the context gap: what's preventing us from being able to take AI from pilots into production. Yes, yes. And context can be a very big word, but I think even trying to, you know, anchor what we mean here with business meaning goes a long way. How can enterprises close that context gap and build AI systems that truly understand business meaning? What does this look like in practice? Yeah, it's interesting. I think there are many different layers of how people are starting to talk about this. In the data world, we've talked about this idea of the semantic layer. Palantir talks about this idea of an ontology. We've talked about ideas of knowledge graphs. We at Atlin are beginning to believe that the answer is in some of this, but is kind of fundamentally different. We think the answer is in this broad context layer. And the context layer for AI looks very different than the context layer for data. Let me give you some examples. 
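The payroll guardrail described here boils down to a deny-by-default access policy checked before an agent ever touches a dataset. Here's a minimal sketch of that idea; all agent and dataset names are hypothetical, and this is not Atlin's actual API:

```python
# Deny-by-default data access policy for AI agents: an agent may only
# read datasets it has been explicitly granted. Names are illustrative.
AGENT_POLICIES = {
    "hr_chatbot": {"payroll", "employee_directory"},
    "sales_analyst": {"crm_accounts", "orders"},
}

def can_access(agent, dataset):
    """Return True only if the agent is explicitly granted this dataset."""
    return dataset in AGENT_POLICIES.get(agent, set())

def fetch(agent, dataset):
    """Gate every data fetch through the policy check."""
    if not can_access(agent, dataset):
        raise PermissionError(f"{agent} may not read {dataset}")
    return f"rows from {dataset}"  # stand-in for a real query
```

With a gate like this, the HR chatbot can read payroll, but any other agent asking for it is refused at the data layer rather than relying on the model's own judgment.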
We had a customer that was trying to roll out an AI data analyst, an agent that was meant to look at their data and answer questions for the business. Now they roll it out, and they have a business user ask a question: tell me about my top 10 customers. Now, if you were looking at a traditional semantic layer, you would basically say to your end users, we don't know how to define customers. We have four different definitions. But actually, in this case, the problem was that they didn't know how to define top 10, because if the sales team asks for top 10, it means top 10 customers by revenue. If the success team asks for top 10, it means top 10 customers by ratings or NPS. And so there's layers of enterprise context, which we think of as: there's your data context, there's your business meaning context, there's your domain context, there's your process context. All of this needs to come together. It ideally is a dynamic layer, because enterprises are changing every day. You don't just build this context layer one-off. 
You have to build this context layer in a way that is dynamic, so it changes alongside your business, and then it keeps getting feedback loops. The worst mistake we can make in this world is to do what governance programs did in the old days, where you'd start a governance program and then you'd say, hey, it'll take me three years to get this live. AI is like, we need to get it live, you know, tomorrow. And so how do you think about the context feedback loop? So, for example, if a business user is asking this question, what are my top 10 customers, you can have AI respond and say, in your company, top 10 customers is defined by adoption and by revenue; which of the two do you think is important in this case? The user says adoption. Then you say, hey, that makes sense. Do you think this is how your entire team would define it? Should we take that and feed it back to the context store? Yes, it goes back and feeds into the context. So how can you take every live interaction that your humans are doing and codify it into this context layer that helps AI start to act autonomously? I think that's what we have to find a way to do as enterprises. Absolutely. And you talked about those differing layers, the business meaning layer, the domain layers, et cetera, within this context layer. I just want to get a sense of the silos here, because the promise with AI is always that, you know, the silos are coming down. To what extent are these layers distinct, and why put them in these categories to build up this concept of context as a layer in the data? Yeah, I mean, it's interesting. I think it's not siloed, but it's different. And it's important. Differentiated, right. You can't have just one kind of context. You can't just have technical context. You can't just have business context. You need all these layers to operate together for AI to be able to operate effectively. 
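The feedback loop Prakalpa walks through — surface the candidate definitions, let the user pick one, then write the choice back into the context store so the next question needs no clarification — can be sketched roughly like this (the store and function names are illustrative, not Atlin's product API):

```python
# In-memory context store mapping (team, term) -> the agreed definition.
context_store = {}

# Candidate definitions an AI analyst could offer when a term is ambiguous.
CANDIDATE_DEFINITIONS = {
    "top 10 customers": ["by revenue", "by adoption", "by NPS"],
}

def resolve(team, term):
    """Return the team's stored definition, or the candidates to ask the user about."""
    if (team, term) in context_store:
        return context_store[(team, term)]
    return CANDIDATE_DEFINITIONS.get(term, [])

def record_feedback(team, term, definition):
    """The user confirmed a definition: codify it for the whole team."""
    context_store[(team, term)] = definition
```

The first call to `resolve` returns the list of candidates, prompting a clarifying question; after `record_feedback`, the same question from that team resolves directly, which is the "live interaction codified into the context layer" idea in miniature.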
And it takes an effort from the enterprise to bring your business and technical teams together to build the context that AI needs to work. I think the most important thing about making AI work, unlike any other IT project in the old world, where you could take an IT project to production with just business sponsorship: now, you cannot take AI to production without the business being involved. They have to be context engineering with you, because the domain expertise is with the business. And I think that fundamentally changes the operating model that we need to operate on, and the roles and responsibilities that organizations take in making sure that we're actually able to get a living brain, they call it a digital twin sometimes, of the organization. Absolutely. And what's at stake here, especially if organizations, enterprises, are going the more traditional route, the semantic layers, as you were defining before? What stands to be lost if they don't pursue a context layer, as you're describing? I mean, these projects are going to fail. We had a customer that was trying to get AI into production for their CEO for about 15 months. And it's existential at this point. Here's the challenge, right? There's this big hype cycle, this insane hype cycle. Everyone wants AI. Everyone looks at ChatGPT, and ChatGPT is the moment of democratization. And then your business users are like, I want that. I want that on my enterprise data. I want that to work on my business. Right. And it's going to take foundational investments to get us ready. The bar is higher. The challenge is, I like to say this, people hold AI to a bar that's higher than the bar they have for humans. Right. And so the accuracy bar is also high. This is a change management exercise. And so how do you actually find a way to get this AI into production? And I think the real challenge is, why does every board care about AI? 
I truly believe that most of our foundational business models are under existential threat. 45 years ago, the average company on the S&P 500 used to stay there for roughly 75 years. In the last decade, the average company in the S&P 500 stays there for 15 years. Today, in the AI world, I don't know, maybe you'd say that's three years. So the cycle of rapid innovation is existential. How customers are interacting with technology, with your website, with your digital storefront, all of this is changing dramatically. And I think that's the existential threat that every business has today. This is an opportunity. Great companies will use this as an opportunity. It's a super cycle. And then there will be a bunch of companies that get left behind. I think that's the existential part. And then for the leaders, I mean, one of two things is going to happen: they're going to either figure this out and be the heroes of this transformation, or they're going to, quite frankly, not have a job in three or four years. And what I'm most excited about is the leaders who are the change agents, who are taking this on and having a business-first approach to getting there. And there's a lot of education here, because quite honestly, the business doesn't understand context. They're just like, solve my business problem. And that's a great opportunity for techno-business leaders to take a much larger role in this transformation that businesses are going to go through in the next decade. Absolutely. What lessons are we seeing from organizations that are already building context-driven systems, in terms of impact on the ground and that ROI we were talking about before? Yeah, I mean, we just got off our customer conference, and we have some phenomenal customers, customers like Workday and MasterCard and Virgin Media O2, all talking about how they're building context layers to help their AI future. 
We have found that customers like Workday, for example, their chief data officer talked about how their perfectly governed BI data wasn't consumable by AI. And when they started moving towards this context layer approach, they were able to improve accuracy by over five times, which is the difference between AI in pilot and AI in production. Of course. We had VMO2 talk about how they onboarded over 6,000 employees and achieved over a million platform uses, in terms of the business starting to use data for their end-user use cases, to this idea of a data product and a context product inside their organization. So what we're seeing across the board goes right from organizations that are using this for the business to start making better business decisions, to, we had the chief data officer of Mastercard recently talk about how they're building an agentic governance framework, because over one-third of their products are AI-first products today. And he started by saying, you would think we're a finance company, but we're actually a data company. And I think we're seeing that massive transformation that is driving usability and adoption and ROI for these end use cases. Absolutely. And I think you're bringing up a lot more than just short-term wins to really sell that board conversation, as you were mentioning before. But thinking a bit more long term, how can business and data leaders use a context layer to drive sustained business value? How are we seeing this play out as plans for 2026 come together? Yeah, it's super interesting. The chief data officer of DGT said something really interesting at our conference. He said, I talk about context readiness before AI readiness. And it's interesting that context is actually something that can bind data and business together. Like, that's actually what's missing between the two, right? Data is foundationally technical. Context makes it business ready. And business is the one that has context. 
Like, that's the context that's always been missing in the AI world. So, for the best operating models that we see in this, you know, we have something we call a blueprint around the operating model of how to make this work inside organizations. But we always say: start with a use case. Get a few AI use cases to production. Let's not sit in a room and say we are going to build this large, complex graph and it will go into production in four years. Let's pick a few use cases, and those should be business use cases that drive measurable ROI. On these use cases, let's now build your context foundation, and let's keep context engineering on these use cases. Let's build your foundational technology stack in a way that's reusable and reproducible, so that, ideally, the context engineering you're doing for this one use case can be leveraged by other use cases. And in this process, bring the data leader in on the development of the context. They need the business to be involved as context engineers, so that they can get the domain context in, and they're ultimately shipping towards production. And it's a beautiful moment, because context now can be embedded in the process of getting an AI use case live, and not as an afterthought. Like, you know, we're not going back and trying to drive documentation of the past. We're basically just getting all our context in for the use case that's driving business value. But we're doing it in a way that's reusable and reproducible and part of this complex living graph that can be reused in the future. Yeah, absolutely. 
And just to really articulate the difference between the status quo we're seeing, you know, the semantic layer approach, and the context layer that you're describing: for businesses that are just beginning this process of building the context layer, getting that context readiness prepared before AI readiness, as the folks from GKP were describing at your conference, how will they know that they're getting the business results, or that the systems are working appropriately with that context layer, in those short-term wins? Yeah. So what we typically recommend, we have a framework for this, but we actually recommend starting by creating a great testing and evals framework, so that we know what it takes for this AI to succeed in production, right? And the eval framework is a really important process. What we do is we start with the eval work. So, for example, if you're shipping an AI analyst for this use case, this domain: what are the top questions that your business users are already asking about this? We can actually bootstrap this, because we can pull metadata from your business intelligence tools. We can pull metadata from your Slack, where people ask these questions. We can see these are the typical questions. Against this, we can also create your golden verified SQL queries. So you can say, hey, the verified way of answering these questions is like this, because we already know from your BI that this is how you measure ARR, for example, right? Like, we can create that. Then, after we create that foundation, we call it bootstrapping, we have humans start to get involved in adding more context. Ideally, you're presenting them with options. So you say, this is what the LLM did when we just put the LLM on it, this is the verified answer, and you keep verifying it. 
So there's a human-in-the-loop process that we run there, and you keep improving the benchmark of the accuracy. And so we've seen with customers, from the time we start with them, within a month their accuracy has gone up 10 times on that test set. And then we say, okay, now let's ship it to business users. And this is where my biggest realization has been: this is not a one-off process. It is an always-on enrichment process. You're never going to have all the context. Like, that's never going to happen, also because your context keeps changing; your business context also keeps changing. And I think that's the biggest change, I would say, in the context layer versus the traditional semantic layer approach. But a lot of people have tried to build this. I mean, there was a lot of hype about this in the industry, you know, in 2020. People tried to build this golden approach, and I think golden approaches fail. It's just too hard to get off the ground. You don't get the business behind it, you don't get business value, you lose funding. And so I think the approach of embedding this in the way you get a use case live is what we see as the biggest differentiator in the operating model. And then, you know, we can talk about the tech: the metadata architecture needs to be Iceberg-native, it needs to be open and interoperable, and it needs to be ready for real-time workloads. There's a lot about the tech, but more than just the tech, I think the operating model here is important. Absolutely. 
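The bootstrapped eval loop described above — a set of typical business questions, each paired with a golden verified SQL query, scored against whatever the AI analyst generates — can be sketched in miniature like this (the question set, queries, and normalization are illustrative, not Atlin's framework):

```python
# Eval set: business questions paired with golden, verified SQL.
EVAL_SET = [
    {"question": "What is our ARR?",
     "golden_sql": "SELECT SUM(amount) FROM contracts WHERE status = 'active'"},
    {"question": "Who are our top 10 customers by revenue?",
     "golden_sql": "SELECT customer, SUM(amount) AS rev FROM orders "
                   "GROUP BY customer ORDER BY rev DESC LIMIT 10"},
]

def normalize(sql):
    """Crude normalization so cosmetic differences don't count as misses."""
    return " ".join(sql.lower().split())

def accuracy(generate_sql, eval_set=EVAL_SET):
    """Fraction of questions where the generated SQL matches the golden query."""
    hits = sum(
        normalize(generate_sql(case["question"])) == normalize(case["golden_sql"])
        for case in eval_set
    )
    return hits / len(eval_set)
```

In practice you would compare query results rather than query text, and keep re-running this benchmark as humans verify more answers, which is how the "accuracy went up 10 times on that test set" claim would be measured.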
And I think just really differentiating the benefits of a context layer first, and then kind of working backwards towards, you know, what is different within the systems, is going to bring a lot of clarity to what we're seeing, especially in that MIT report we were talking about at the beginning of the show, which seems to have sent some waves through the entire industry, of course. Yes, as I was saying before. But, Prakalpa, thank you so much for being with us on today's show. It's been very insightful. Thank you, Matthew. Really appreciate it. Wrapping up today's episode, I think there were at least three critical takeaways for enterprise data, analytics, and AI leaders from across industries to take from our conversation today with Prakalpa Shankar, co-founder and CEO at Atlin. First, AI fails in production when systems lack business context. Models can't deliver reliable outputs without shared, machine-readable meaning embedded into the data they act on. Second, context isn't a static governance artifact. It requires an ongoing operating model where business and technical teams continually refine definitions, logic, and feedback loops. Finally, real ROI comes from starting with high-value use cases, building context around them, and reusing that foundation across the enterprise to accelerate accuracy, adoption, and deployment. Interested in putting your AI product in front of household names in the Fortune 500? Connect directly with enterprise leaders at market-leading companies. Emerge can position your brand where enterprise decision makers turn for insight, research, and guidance. Visit Emerge.com slash sponsor for more information. Again, that's E-M-E-R-J.com slash S-P-O-N-S-O-R. I'm your host, at least for today, Matthew DeMello, Editorial Director here at Emerge AI Research. 
On behalf of Daniel Fagella, our CEO and Head of Research, as well as the rest of the team here at Emerge, thanks so much for joining us today and we'll catch you next time on the AI in Business podcast. Thank you.