A16Z partner Joel de La Garza interviews Keycard CEO Ian Livingston about the evolution from AI copilots to autonomous agents and the critical security challenges this creates. They discuss how 2026 will be the year enterprises rush to deploy agents in production, requiring new identity and access management solutions to handle the complex, contextual authorization needs of agentic systems.
- Enterprises will adopt AI agents faster than consumers due to clear ROI on operational efficiency and top-level business pressure to remain competitive
- Traditional identity and access management systems are inadequate for agents, which require dynamic, contextual, and ephemeral permissions rather than static role-based access
- The security model for agents must shift from static user permissions to task-based, intent-driven policies that can be enforced across federated systems
- Agent security incidents are already occurring in production, with companies experiencing data leakage due to improper authentication and authorization controls
- The future of agent control will require hybrid deterministic and non-deterministic systems with humans maintaining ultimate oversight and intervention capabilities
"In 2025, we saw the first glimpses of true AI agents. In 2026, every company will be rushing to get them into production and they'll need companies like Keycard to manage fleets of agents."
"We're moving from a world where, if I wanted a piece of software to be able to do something new, a software developer had to write it, to a world where, if I want a task to be done and I give the model the right context and right access to tools, it can create a plan, execute on that plan, and complete that task dynamically based on the data I give it at runtime."
"The cloud made all the 'no' CISOs roadkill on the Internet. The cloud was the end of the empire of no. And now every CISO you talk to is just like, how can I enable this safely without blowing up the firm?"
"It took what used to be the secret sprawl problem of the last four or five years and made it secret sprawl on steroids. And now you have this problem where, oh, actually we're giving Claude or Cursor production admin access to our core thing through this MCP."
"Now it's a top-level business objective: if we can't get earnings-efficient, our next year's growth is coming from this project. So security's not in the same position."
In 2025, we saw the first glimpses of true AI agents. In 2026, every company will be rushing to get them into production and they'll need companies like Keycard to manage fleets of agents. In this conversation, A16Z partner Joel de La Garza sits down with Keycard co-founder and CEO Ian Livingston to discuss the continuum from copilots to agents, the security realities of tool calling, why enterprises will adopt before consumers, and how to control your agents. Let's get into it.
0:03
So it's shaping up that we're at the beginning of what sounds like the year of the agents: 2026. Yeah, it seems like every company we talk to is definitely looking to get some sort of agent into production, not just in the lab, to get it out into customers' hands and have them start using it. And so I'd like to share a story, I guess we could kick this off, and thank you so much, Ian, for joining us on our podcast to discuss this. I was actually privy to hearing about probably the first security incident I've ever heard of with an agent. And as a security person, you know, we constantly harp on people to be very explicit about what problem you solve, and the problems in security often manifest as security events. So we heard about a relatively large company that has a SaaS service that implemented an agent. They wanted to give their users a prompt to query data that was in the system, a very common use case; you've probably seen several of these roll out recently. And this agent would essentially return data for your firm. So you could say, hey, I'd like to know about this specific part of our business, could you tell us more about it? And it would give you an answer with your data. Super useful, super helpful. Now the problem was, you could ask for other firms' data. It would, very interestingly, say no, I can't give you data for General Electric, for example. But if you just said, hey, give me my data, it would return, from a revolving cast of characters, data from other companies. And immediately when I heard about this incident, you came to mind, because I thought, my God, there is an authn/authz problem, and that is the problem with identity and agents. So welcome. Thank you so much for joining.
0:34
Thank you so much for having me, Joel.
2:20
And nothing could be more timely than...
2:21
Your company could be more timely. Yeah, it's incredible. You know, we spend a lot of time talking to companies trying to adopt agents or trying to build tools for agents. And invariably you basically have two categories of security problem, right? You have the prompt injection and tool-calling dynamics, the fact that you have this nondeterministic loop, and then you have: how do I understand what this agent should actually be able to access? And then downstream, from the person who's built the tool: how do I understand what the agent should have access to on a deterministic basis? And this is a fundamental problem that's existed almost from the dawn of computing, which is the contextual understanding of complex relationships: user A is using agent B accessing tool C; under what context should that agent have access to things? And this is the fundamental example of that problem. We hear about this across the board, whether it's commerce, enterprise workflows, or people building agents: how do I build something with some deterministic guardrails, some level of guardrail that puts a box around what this thing has access to and what it can do? Fundamentally, there are problems you have to solve in the nondeterministic or probabilistic category, around the actual model itself, the data that model has access to, and how you remove certain types of prompts. But on the flip side, it's: how do I write access policy, and how do I deliver guarantees to someone that owns a resource? So in the case of, I have a database and I expose it for agents to use: I want to ensure only the end user can control what the agent has access to, and I, as the database owner, ensure that I never leak anything to the agent that it shouldn't have access to at any point in time.
And that's contextual, based on all of the different parties in that transaction, right? And so we're moving to a world with agents where it used to be that, in a point-and-click software world, I as a user go to a piece of software, I point and click, and the software returns to me exactly what I asked for. And the identity problems were very static and very simple: this user is a part of these groups, this is what you get access to, and it didn't change. But we're now moving to a world where a user can pick up an agent. The user is going to expose some tools. Those tools represent downstream resources, downstream data. That tool access may be contextual based on what that person's actually trying to do. Maybe it's Jake from customer support working on customer B's data through an agent that's made available via MCP. And you want to be able to scope exactly what that agent has access to from that customer, based on what Jake is explicitly trying to do, so the agent never has access to things that Jake wouldn't have access to, and more importantly, the agent never has access to things that Jake doesn't want the agent to have access to. And security, and the owners of all the different resources involved, ultimately need to have a voice in a way that they haven't before.
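The scoping rule Ian describes, that an agent's reach is the intersection of what the user is entitled to and what the user delegated for this specific task, can be sketched in a few lines. All names here (TaskContext, USER_ENTITLEMENTS, the Jake/customer example) are hypothetical illustrations, not Keycard's actual API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TaskContext:
    user: str      # the human delegating the task
    agent: str     # the agent acting on the user's behalf
    customer: str  # the customer whose data this task concerns
    purpose: str   # what the user explicitly asked for

# Hypothetical static entitlements: what each user may access at all.
USER_ENTITLEMENTS = {"jake": {"customer_b", "customer_c"}}

def agent_may_read(ctx: TaskContext, resource_owner: str) -> bool:
    """The agent may read only what the user is entitled to AND
    what the user delegated for this specific task."""
    user_allowed = resource_owner in USER_ENTITLEMENTS.get(ctx.user, set())
    delegated = resource_owner == ctx.customer  # scoped to this task only
    return user_allowed and delegated

ctx = TaskContext(user="jake", agent="support-bot", customer="customer_b",
                  purpose="summarize open tickets")
agent_may_read(ctx, "customer_b")  # True: both entitled and delegated
agent_may_read(ctx, "customer_c")  # False: Jake could, but didn't delegate it
```

The key design point is that the grant is per task: even though Jake is entitled to customer C's data, the agent is refused it because this task only delegated customer B.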
2:22
Yeah. And that's a really wonderful overview of where the conversation will go. I think it's great. Maybe let's start from the beginning, and not to retread ground that's been beaten to death.
5:16
Absolutely.
5:27
We don't have to go into a fundamental discussion of what an agent is, but it might be helpful to start with a brief, up-to-the-minute update: what are agents now?
5:28
Right.
5:38
Because I think we saw the first wave of this technology. It was basically just some form of model.
5:39
Yeah.
5:44
People were like, look, this is my agent, and it's a large language model: you just throw stuff into it and get an output. And now it seems like they've evolved. So maybe just briefly touch on what we're considering agents at present.
5:45
Absolutely. And I always think of this as a continuum; there's this spectrum of agentic behavior. And in truth, many of the times I'm talking with customers and people in industry, I get into this long diatribe of trying to define agents. I think the way to think about this problem space is the same way we think about autonomous levels of driving. Right. You have a level-zero agent; well, that's probably software we've already built. It's rule-based. There's no piece of nondeterminism in the loop.
5:58
Yeah.
6:22
And it's not making decisions on its own; someone else is making the decisions. And as you progress from level zero to level one, it's still human-driven, but there's AI assistance helping make some part of the decisioning.
6:22
Like a copilot.
6:36
Like a copilot, exactly right. Some people say a copilot is an advanced autocomplete. Well, that's true. But to be an advanced autocomplete it has to make underlying assumptions and decisions, and part of that is going to be many tool calls and a lot of different things under the hood to help automate part of that workflow.
6:37
And so we're well through copilot.
6:52
We're well through copilots. Exactly. And we're now getting to the point of: how do I, as a human, get to walk away? So I say, hey agent, please go do a task on my behalf. I love to use shopping because it's something we all do: hey agent, I'd love to hire you, can you go find me the best pair of jeans in my size? Here are the details about my jeans, can you make sure it's under $50, and then, you know, maybe place a bid? And what you want in that situation is for the human to be able to walk away. And when the agent's ready to purchase, the agent either has to come back to the human for approval, because it's over some purchase limit, or the agent can just do it.
6:54
It's like the old days, where you would set a compile job, walk away to get a pizza, and come back.
7:31
Exactly.
7:36
That's stage three.
7:37
It's stage three. And you can think of the transition as going from agents being sort of our best friends whispering in our ear, telling us, hey, you could do this, to a world where agents are now in the middle. And increasingly over time, as you get more autonomous, up to level five, the equivalent of a Waymo, these agents are off doing long-running tasks, operating within some decisioning model that we've given them. So they're human-controlled, and they operate within those bounds, but as a human I don't have to watch what they're doing. They can just go off and do those things. You know: every year, file my taxes. Yeah, but make sure they've been approved by my accountant.
7:38
So we're starting at the stage of the Waymo with the driver helping the car, making sure it does the right thing.
8:11
Exactly. That's the next stage of agency. And in truth, many companies are actually struggling to make copilots successful. Like, Cursor starts as this beautiful little tab completion, and that was awesome. And the next stage is: okay, now how do I involve context and data and actionability for this agent, where I'm still maybe semi in the loop but more work is being done? So it is a continuum over time. But I would also say that anywhere a human is abstracted from the core decision-making that involves access to data or any actionability is the moment you're entering the realm of an agentic workflow.
8:16
Absolutely. Yeah. So these things can make decisions essentially on their own, although they are micro decisions within the context of a larger process.
8:53
Exactly.
9:00
But they do have the ability to insert their indeterminism into a lot of these processes. And so that's where, I guess, the problems of identity and authorization and authentication come in.
9:01
Exactly, exactly. Because you basically come to this position where there are all of these wonderful tool-poisoning types of attacks; the Trail of Bits blog is a great website you can find, and they have things like pajamas, which is really interesting. But you dig in, and you basically find that the minute the model at the core of the agent starts to do more than one tool call with a human not in the loop...
9:14
Yeah.
9:37
Right. Under the hood, that's the point where you can have a lot of these attacks, and these problems become an actual issue for gaining use-case adoption in the enterprise. Because you get to this position where an agent may go access a production database, take that production data, and then make a tool call with a web browser.
9:37
Totally.
9:55
And what happens? There's no write or update or delete that's occurred; it's very benign. But then it uses the web browser, takes some of that production data, which might include customer data, and sends it in the query in the web browser, because it's trying to use the context it has from the prod database to help it solve some problem the user gave it. And this is where you start to come into: okay, now we have an identity and access problem. Should that agent actually be able to access that production data? The developer probably wants to be able to access the production data, but do we want the agent to have access to it? And do we then want the agent to be able to use a web browser or do something else with it after the fact? And you get into this complex world of identity and access that's hyper-contextual.
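The failure walked through here is a tool-chaining leak: a read that is individually benign, followed by an outbound call that carries the data out. One deterministic guardrail is a taint-style session guard. This is a minimal sketch under assumed names (ToolChainGuard, prod_db, web_browser), not any specific product's mechanism:

```python
class ToolChainGuard:
    """Hypothetical runtime guard: once an agent has touched a sensitive
    source, outbound-capable tools (like a web browser) are refused for
    the rest of the session, so data can't leak out via a query string."""

    SENSITIVE_SOURCES = {"prod_db"}
    OUTBOUND_TOOLS = {"web_browser", "http_request"}

    def __init__(self):
        self.tainted = False

    def authorize(self, tool: str, source: str = "") -> bool:
        if source in self.SENSITIVE_SOURCES:
            self.tainted = True   # reading prod data taints the session
        if tool in self.OUTBOUND_TOOLS and self.tainted:
            return False          # block the exfiltration path
        return True

guard = ToolChainGuard()
guard.authorize("sql_query", source="prod_db")  # True: the read is allowed
guard.authorize("web_browser")                  # False: session is tainted
```

A real enforcement point would sit inside the tool-calling runtime, but the core idea is the same: the authorization decision depends on what the agent has already touched, not just on who the user is.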
9:55
Absolutely. And it reminds me of early networking in the cloud, right? We had a lot of these same problems at the beginning of that journey: hey, we built this really cool service and it's a single-factor login, or it's open to anyone, anonymous access. And you end up with these issues where data gets over-accessed. But this time it feels very different, because these agents have the ability to synthesize understanding across large sets of data that previously would require a human. So the edges are actually a lot sharper. You used to have to search for specific terms across a large data set, and that's what hackers would do: look for key pairs, look for the string "password", look for Social Security numbers. But now you can just ask it a question, like: did the CEO cheat on their taxes? And so this creates a lot of really interesting problems.
10:38
It creates tons of interesting problems. I think the thing that really changes this from a pure data security problem to an identity and access problem, one that is deep and requires a complete reinvention, or at least a rethink, of the problem space, is that it's entirely contextual. It used to be, in the firewall world: okay, if you're inside the perimeter, you can read, write, update, delete, whatever you want.
11:35
Transitive trust, baby.
11:58
Right, right. And then we moved to the cloud, we put in the VPC, we adopted IAM, and we kind of re-established a perimeter inside our little box. And then what occurred is we started to unbundle it, and some of these problems became prevalent. In 2022, CircleCI got popped, and that gave a lot of people access to production data. That shouldn't have happened; it was painful. But these are all similar issues. What's new about agents is, one, in order for them to create a lot of value, they need a lot of access to high-value resources. And so the value creation of an agent is not the model itself. The models create opportunity, but it's the context at runtime, the things agents have access to at runtime, and the actions they can perform at runtime that enable agents to actually create value, versus just being a dumb thing answering a question based on an old data set.
11:59
And we've got SAML, we've got OAuth, we have all sorts of standards out there for a lot of this stuff, and they don't seem to be working, right? And this seems like the classically difficult problem to solve, because you have a blending of multiple different technologies in the enterprise, and you have this new use case that's radically different from anything we've seen before, as we've established. How are you thinking about this from a product perspective? How do you actually solve it? This is incredibly hard.
12:47
Yeah, I mean, I think you made a couple of points on some key protocols that actually were very successful in helping us solve user federation and the adoption of SaaS, enterprise SaaS, and parts of the infrastructure-as-a-service market. It was amazing; it's why we have a multi-trillion-dollar cloud market in the first place. The fundamental challenge is that when we went and solved user federation, we never had to solve the problem that's fundamentally under the hood here, which is that now we have a piece of compute that we need to be able to federate across clouds and across networks and companies. So we're basically saying, all right, we've already solved the user thing; we can understand who a user is. But how do we understand what an agent is, and how do we identify that agent? Because in order for us to even start cracking open this product problem, we have to first be able to establish the concept of an agent, so we then can understand and control, contextually, what this agent should be able
13:15
to do. And where do you land with that? Is an agent just like a Joel V2, or is it some other subset of that category?
14:08
I think, broadly, what we're seeing and what we're thinking about, our view of what an agent is, is that an agent is going to be a thing used by multiple users. There will be situations where I, as Ian, go build an agent just for myself, like I'd go build a to-do app for my specific thing. But when we're talking in the enterprise context, or even a consumer context, say ChatGPT: it's not Ian's ChatGPT, it's ChatGPT, and ChatGPT is increasingly gaining capabilities to be agentic and optimize my workflows. And so in this context ChatGPT is an agent, and Joel uses ChatGPT, and many of the companies, I'm sure.
14:14
At Andreessen, but so does Bob, and so does Sally, and so does...
14:54
Yeah. So agents are inherently multi-tenant, right? And so we have all of the complexities of the multi-tenant world that we had in SaaS.
14:56
Totally.
15:02
And then we have the added complexity that these things are now taking on increasing actionability. How do we understand and manage that across worlds, and how does that communicate between different compute boundaries as well?
15:03
So we're essentially going beyond the classic access rights. It's no longer just read, write, and delete, right? We're talking about step-up authentication; we're talking about step-up authorization.
15:15
All sorts of crazy things, dynamically at runtime, based on the task or intent of the user. Because ultimately, at the end of the day, what is access control about? It's really about removing the worst-case scenarios and ensuring that the happy path is the right path. So if you're taking an agent and thinking about the context window it has and the tools available to it: how do I ensure that the data in that context window, and the actions those tools can take, are bounded by something that comes from an end user and is deterministic in nature? And that's our view of where this is going: we're going to need task-based, intent-based policy that's enforced downstream.
15:28
Gotcha. So your rights model essentially becomes a matrix, as opposed to something linear.
16:09
It's not. Yeah, it's not linear and it's not static. It's incredibly dynamic, and I think the other component is that because it's dynamic, it's actually hyper-ephemeral, in the sense that no one task will probably look the same. And in fact, if we step back and think about the ultimate value agents give to an organization, and the fundamental delta here: we're moving from a world where, if I wanted a piece of software to be able to do something new, a software developer had to write it, to a world where, if I want a task to be done and I give the model the right context and right access to tools, it can create a plan, execute on that plan, and complete that task dynamically based on the data I give it at runtime. So it's a completely different, hyper-ephemeral world where you have this long tail of potential tasks. And the net value of adopting agents is the fact that there's this long tail of tasks that are capable of being done dynamically. And we need to change our trust equation from one that's static (hey, Joel is a partner at Andreessen, so he has access to these companies' financials) to one where Joel can say to an agent: hey, can you go analyze the financials of these two companies and tell me the delta, the difference? And that agent only gets access to the financials for those companies, based on the task. You as an end user have some control over that, and the enforcer on the downstream, the company or the place that holds the data, can also enforce that policy. So across the board, not only does Joel know, hey, I did in fact have control over what this thing is doing on my behalf. And I think that's a really important thing: we have to establish who ultimately controls and takes accountability for this agent.
And this is increasingly important in transactional contexts like payments. The other side is: how do I know Joel did in fact tell this thing it can do this action, so that I can say, yeah, you can do it, I approve of it? And how do we deal with that liability?
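The "analyze these two companies" example implies a grant that is minted per task, scoped to exactly the resources the task names, and ephemeral. A minimal sketch of such a grant; the names, resource paths, and the five-minute TTL are all invented for illustration:

```python
import secrets
import time

def mint_grant(user, task, resources, ttl_seconds=300):
    """Mint a hypothetical ephemeral grant: scoped to exactly the
    resources the task names, and expiring on its own."""
    return {
        "id": secrets.token_hex(8),
        "user": user,
        "task": task,
        "resources": frozenset(resources),
        "expires_at": time.time() + ttl_seconds,
    }

def grant_allows(grant, resource, now=None):
    now = time.time() if now is None else now
    return resource in grant["resources"] and now < grant["expires_at"]

g = mint_grant("joel", "compare Q3 financials",
               {"financials/acme", "financials/globex"})
grant_allows(g, "financials/acme")     # True: named by the task
grant_allows(g, "financials/initech")  # False: outside the task's scope
grant_allows(g, "financials/acme", now=g["expires_at"] + 1)  # False: expired
```

In practice the grant would be a signed token verified by the downstream resource owner, so enforcement survives federation; the shape of the decision is the same.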
16:15
Absolutely. Do you think, eventually... I mean, it sounds like you're almost evolving towards a model where there is going to be some sort of reasoning model.
18:10
Yeah.
18:16
That's making these determinations. Is that kind of where you think the end of this journey lies?
18:17
I think, yeah, we do. And I think there's going to be this sort of pairing, because the only way you'll get the scale is some formulation of a hybrid deterministic and non-deterministic system.
18:20
My next question was how do you scale that?
18:30
How do you scale it?
18:31
Exactly. How many tokens per second is that going to be?
18:32
Many tokens per second. And I think you're going to have two sides, right? On the user side, when you are using an agent, part of writing a prompt or interacting with an agent is going to be a level of access grant that's bounded to the interaction, and you as a user are going to have some ability to understand and control that. And I think that may be baked into the actual agent interface. Over time the agent interface is going to decide: hey, this is different, this is scary. Is Joel okay with this? Hey Joel, are you sure you want this to happen? And by the way, this is exactly what your agent's doing, and here's the button that lets you, Joel, stop the action right now, revoke it, do whatever you want. And depending on the sophistication of the action the agent is going to do on your behalf, not in every case: in the financial case, maybe this is a very common action, a very common pattern, and there's no point prompting or asking Joel whether they want to give conditional consent. But under the hood what's happening is it's always conditional consent, and that's being done at the agent UI layer. Because you're basically, as a user, saying: I'm granting this agent the ability to do this thing on my behalf at runtime. And the runtime part of that is really important for understanding liability.
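The "always conditional consent under the hood" idea can be modeled as a simple step-up decision: routine patterns pass silently, while novel or above-limit actions are paused for the human. The action names and limit below are invented for illustration:

```python
# Hypothetical conditional-consent check: routine actions within the grant
# pass silently; anything novel or above a limit is surfaced to the human.
ROUTINE_ACTIONS = {"search_listings", "read_reviews"}

def decide(action: str, amount: float, approval_limit: float,
           human_approved: bool = False) -> str:
    if action in ROUTINE_ACTIONS:
        return "allow"       # common pattern, no prompt needed
    if amount > approval_limit and not human_approved:
        return "ask_human"   # step-up: pause and confirm with the user
    return "allow"

decide("search_listings", 0, 50)                 # 'allow'
decide("purchase", 80, 50)                       # 'ask_human'
decide("purchase", 80, 50, human_approved=True)  # 'allow'
```

Note that "allow" is still a conditional outcome: it means the action fell inside the grant the user already gave, not that no consent was involved.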
18:35
You can obviously integrate telemetry at that point, where it's like, hey, this agent looks like it's doing some kind of scammer thing, right?
19:41
Exactly. And then on the downstream side, the party that's enforcing the authorization policy, which could be an MCP server, it could be a credit card company, it could be all of them because of the federated concern, is going to have its own adaptive policy about what it requires on top of your individual grant, and about what it allows agents to do. And self-driving cars are a really great analogy here. Across the board there is a continuous adaptive system on both sides that is collecting and proving information, but at all times it's very clear who has ultimate control. In the case of Waymo, someone is still in ultimate control; they're not in the car, but they're still there. And in the case of a Tesla, I as a human am still sitting in front of the wheel, even if I'm not the one driving.
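Because enforcement is federated, no single party's approval is sufficient: the user's grant, the agent platform, and the downstream resource owner each get a veto. A toy combinator, with hypothetical party names, makes that shape explicit:

```python
def federated_decision(user_grant_ok: bool, platform_ok: bool,
                       resource_owner_ok: bool):
    """Hypothetical federated check: every party in the chain gets a veto.
    Returns the decision plus the list of parties that denied it."""
    checks = {
        "user_grant": user_grant_ok,        # what the user delegated
        "platform": platform_ok,            # the agent platform's policy
        "resource_owner": resource_owner_ok # the downstream adaptive policy
    }
    denied = [party for party, ok in checks.items() if not ok]
    return ("allow", []) if not denied else ("deny", denied)

federated_decision(True, True, True)   # ('allow', [])
federated_decision(True, True, False)  # ('deny', ['resource_owner'])
```

Returning who denied the action matters for the audit and liability questions raised earlier: each party can prove it had, and exercised, its own control.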
19:46
Yeah, and you can take over, you can take over.
20:30
And that's the world we need, and the world this is going to. But you also have two sides of that: at any point, Tesla can push a new version of self-driving, or even come in and revoke it, say, hey, we can't do self-driving anymore, it's broken, and it can roll back and degrade gracefully.
20:32
Totally. Yeah. I mean, I think that's absolutely right. I know people would love to believe that we're at a level of sophistication where we don't need humans, but for the foreseeable future there are humans in the loop. And when you task an agent to go out there and book your vacation to Hawaii, you're going to want to make sure it confirms with you before it actually buys the tickets. Right?
20:47
And you're going to want the ability to roll up and understand: hey, what are agents doing on my behalf, and where? And I think the future, whether from an end user's or an enterprise's perspective, is that you're going to have a determined level of control and a robust ability to understand what these things are doing, in the same way that when I go to my bank, I can look at all the transactions I've made.
21:06
Totally. I'm really curious; there are a couple of questions I think stem out of this. The first is: do you think it's going to be consumers adopting agents at scale first, or enterprises?
21:27
You know, if you had asked me this a year ago, I would have said 100% consumers; it's going to take years for the enterprise. And I actually think this wave is different for many reasons. One is that the net benefit in operating efficiency from internal workflow optimization in the enterprise is absolutely massive. It's so clear at a board and executive level how this is the next step for the company in terms of gaining the next level of earnings efficiency. And the tools are available today; we're at a point where employees, in their day-to-day lives, are actually using the tools, and they can transfer that knowledge of using Sora or ChatGPT or Claude and immediately take it to work: okay, here's how I can do this. We've never had that opportunity before; the enterprise was a very late adopter of the cloud. But now the enterprise is on the cloud, so this wave is fundamentally different on that level: users already understand it, the data's already there, the access is already there, it's already on the cloud. And on the second level, we used to be in a position with cloud adoption where security could say: hold up a minute, this cloud's not mature enough, we don't have the controls, we need to build out all these things, we don't have the pieces to actually make our enterprise successful. And security could do that because the cloud wasn't a top-level business driver on the balance sheet; it was more attached to, hey, how are we going to keep getting developer efficiency? That drove a lot of the movement to the cloud.
Now it's a top-level business objective: if we can't get earnings-efficient, our next year's growth is coming from this project. So security's not in the same position it was in the last generation, where it could say, hey, we should hold up. It's now in an "oh, we actually have to do something" position. And that's going to drive adoption much faster. In fact, what we see in most organizations is like shadow IT on steroids, and the ability for security to say no isn't there, because it's really the CEO and company saying, well, we have to adopt these things.
21:39
The cloud made all the "no" CISOs roadkill on the Internet. The cloud was the end of the empire of no.
23:38
Yeah.
23:45
And now, I mean, every CISO you talk to is just like: how can I enable this safely without blowing up the firm? Right.
23:45
And how do I enable it? And for the business, setting aside their independent roles, it's not just about: how do I gain earnings efficiency
23:51
Yeah.
23:58
inside how we run the company. It's: how does my company become agentic? How does my company become an agent? How do my interactions interact with agents? How do we be agentic?
23:59
Totally.
24:09
It's a transformation top to bottom of like every business one way or the other.
24:10
And you can see business leaders getting a taste of this with the coding stuff, right? It's like the first little hit of, wow, okay, I can freeze headcount and get more productivity out of people. Right?
24:14
Exactly.
24:22
And so I think that's exactly right. There's just such a direct translation between adopting this stuff and driving better profitability that it's insane.
24:23
And they can immediately see where their company is going to fit into the new world, because it's touch-and-feel, it's immediately actionable. If you're not using an iPhone, or ChatGPT, or Google, you're not in business. So they immediately start thinking: how do we maintain our moat? There's a business-defensibility component, which is, how do we ensure we don't get disintermediated at the product level? Right. Maybe we're a commerce platform; the future of shopping is probably through an agent, so how do we make sure agents can interact with us as a commerce platform? Or if you're building SaaS software, it's, well, how do we actually become an agent, so that instead of someone displacing us, we are the agent they use? Yeah.
24:31
And I mean, it's interesting. Like I said, we're still pretty early on this journey.
25:13
Yes.
25:17
And there are two sort-of standards in the agent world that have emerged. MCP, obviously, which didn't really solve any of the problems it set out to solve. Probably the single biggest source of late-night worries for most security professionals at the moment.
25:18
Absolutely.
25:32
And then A2A, which hasn't really taken off yet, or it's sort of getting there, kind of. How are you thinking about this?
25:33
Absolutely. You know, with MCP, I think they both come from two different organizations looking at the problem differently. A2A is just classic Google: oh, we've got to get to scale. How do we scale and manage this thing, and how do we scale and manage it across networks?
25:40
Super elegant, really well thought out. Like it's like a PhD thesis.
25:55
Exactly. And it's very focused on, well, what is an agent? Right. And MCP is the other side, which came out of the idea that today, Claude really can't do much. It doesn't have access to other stuff. So how do we gain scale of access? And how do we present that access and actionability, that set of tools, to the model in a way that it can reason about and act on?
25:58
It's sort of, you know, the Google side is ask for permission; the other side is beg for forgiveness.
26:23
Right, exactly. And from that perspective, you kind of have a framework for identifying agents, and a framework for something to call these tools. But the core of what's missing on both sides is: okay, cool, I can kind of understand what an agent says its task basis is, and I can understand what these tools are. But how do I connect those two things? How do I identify those agents, cryptographically enable users to access those agents, and control what those agents can do? And then as a tool provider, how do I actually enable the tools we provide while keeping the ability to control who can use them in what context, and then have auditing? So MCP is definitely here to stay; A2A, let's find out. It's solving some very interesting problems that we're all going to have to figure out, which is: in a federated world of agents, how do I know what this agent can do, who uses it, how it's owned, and what its core identity is? And then on the tool side, how can I use all of that context to enforce this? So there's a missing bridge. MCP definitely has the most adoption, and it's beginning to hit some of that trough of disillusionment as people find out, hey, it's not perfect, right? They've got a lot of fun ahead.
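To make the "missing bridge" concrete: connecting an agent's declared task to the tools it may call amounts to a contextual, task-based policy check rather than a static role lookup. This is a minimal sketch with all names hypothetical; it is not Keycard's or MCP's actual API, just an illustration of intent-driven authorization.

```python
from dataclasses import dataclass

@dataclass
class AgentIdentity:
    agent_id: str   # a real system would anchor this in a cryptographic identity
    owner: str      # the user the agent acts on behalf of
    task: str       # the agent's declared task for this run

@dataclass
class Policy:
    task: str                 # task the grant applies to
    allowed_tools: frozenset  # tools permitted while performing that task

def authorize(agent: AgentIdentity, tool: str, policies: list) -> bool:
    """Permit a tool call only if some policy grants that tool for the
    agent's declared task: intent-based rather than role-based access."""
    return any(p.task == agent.task and tool in p.allowed_tools for p in policies)

policies = [Policy(task="summarize-tickets", allowed_tools=frozenset({"crm.read"}))]
agent = AgentIdentity(agent_id="agent-42", owner="ian", task="summarize-tickets")

print(authorize(agent, "crm.read", policies))    # True: granted for this task
print(authorize(agent, "crm.delete", policies))  # False: outside the declared task
```

The point of the sketch is that the grant hangs off the task, not the agent: the same agent re-run with a different declared task gets a different (possibly empty) tool set.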
26:28
They realize that everybody's got a bunch of production credentials on their local machines running MCP.
27:35
Exactly, running MCP. And they have no control over it. It took what used to be, you know, the secret-sprawl problem of the last four or five years, and it's just secret sprawl on steroids. And now you have this problem where, oh, actually, we're giving Claude or Cursor, you know, production admin access to our core systems through this MCP server. And I have no ability to tell whether that's actually, you know, Ian, or Ian's agent. And that is a fundamental issue in any form of adoption. We consistently hear it from the people we're working with: my core challenge is that I can't differentiate between these two things, and this is unseen risk. So either I continue to let that risk propagate and then we have really bad consequences, like agents going and dumping the database, or taking the data and dumping it into a web browser.
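One standards-based way to express the "Ian versus Ian's agent" distinction is the delegation pattern from OAuth 2.0 Token Exchange (RFC 8693), whose "act" (actor) claim records who is actually making a call on the subject's behalf. Only the "sub" and "act" claim names below follow that RFC; everything else is an illustrative sketch, not any vendor's implementation.

```python
def describe_caller(claims: dict) -> str:
    """Label a call as coming from the user directly or from a delegated agent,
    based on the presence of the RFC 8693-style "act" claim."""
    subject = claims["sub"]        # the principal the call is made for
    actor = claims.get("act")      # present only on delegated calls
    if actor is None:
        return f"user:{subject}"   # Ian, calling directly
    return f"agent:{actor['sub']} for user:{subject}"  # Ian's agent

direct = {"sub": "ian"}
delegated = {"sub": "ian", "act": {"sub": "claude-agent-7"}}

print(describe_caller(direct))     # user:ian
print(describe_caller(delegated))  # agent:claude-agent-7 for user:ian
```

With that distinction in the credential itself, a tool provider can apply different policy (or at least different audit trails) to the two cases instead of seeing one undifferentiated production credential.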
27:39
Hard drive.
28:28
The hard drive, or, you know, ransomware: encrypting someone else's stuff, because multi-tenancy is really hard to reason about. And then on the flip side of it is, how do I do it, and how do I adopt it easily? Fundamentally, you know, this is very different from the last generation of how we solved this problem, because you're dealing, most importantly, with interactions not between users and some omnipotent service you bought, but between users, agents you've purchased, agents you've built, many of those agents interacting among themselves, and then a tool-calling layer that represents both your external things (your SaaS products, your Salesforce, your CRMs, and then your databases, your data lakes, Snowflake) and also your internal world. Because ultimately, what you want to do in order to gain these operating efficiencies, or for your product to be agentic, is to move a bunch of things that used to be behind the firewall up to the application layer, so your agents can actually interact with them, use them, and gain utility from them.
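The alternative to the secret sprawl described above is usually sketched as short-lived, task-scoped credentials in place of long-lived static ones sitting in local configs. All names here are hypothetical; this illustrates the general pattern, not any specific product.

```python
import time
import secrets

def mint_credential(agent_id: str, tools: frozenset, ttl_seconds: int = 300) -> dict:
    """Mint an ephemeral credential bound to one agent and a narrow tool set."""
    return {
        "secret": secrets.token_hex(16),   # opaque bearer value
        "agent": agent_id,
        "tools": tools,
        "expires_at": time.time() + ttl_seconds,
    }

def check(cred: dict, tool: str) -> bool:
    """Allow a tool call only while the credential is unexpired and in scope."""
    return time.time() < cred["expires_at"] and tool in cred["tools"]

cred = mint_credential("agent-42", frozenset({"db.read"}))
print(check(cred, "db.read"))  # True: unexpired and in scope
print(check(cred, "db.drop"))  # False: outside the granted scope
```

Because the credential expires in minutes and names its own scope, a leaked copy is far less damaging than a standing production admin key, and every grant leaves an auditable record of which agent held which access, for which task, and for how long.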
28:29
Gotcha. Awesome. Yeah. So Ian, thanks so much for coming by. You know, we're super excited to be with you on the journey with Keycard. We think this is a transformational company, an important building block of the future of this agentic world that's going to dominate everything. And in the few minutes we have left, we'd love to hear a little bit about Keycard and what you guys are doing there.
29:26
Absolutely. I'm super excited to have Andreessen on the journey with us as well. This is something we've been thinking about for the last ten years, and we really saw this machine-agent revolution we're going through coming: how do we actually take advantage of this incredible new technology that deep learning and large language models have brought us? So the company today is really focused on helping our customers get agents into production: off the laptop, out of the lab, and into production where they're actually providing utility. What we're helping customers with today is, hey, we're going to help you identify what agents you have, which users are using those agents, which users can use those agents, and what those agents are actually enabled to access, and let you put a bounding box around those things. We're going to give you a set of tools you can use to build tools for your agents, whether those are agents you built for your internal workflows, agents that are interoperable with your product, or a set of SDKs that allow you to build agents as well. And then we give you the enablement software so you can say: hey, organization, here are all the agents you can use, here are all the tools you can use, and here's how you can bring those tools into different agents and give them access. And then as end-user security, you get the ability to govern it all, have complete auditability, understand the access profile of these things, and really start to put a bounding box on what they can do.
29:48
Awesome. And honestly, based on the number of security incidents we're hearing about popping up in this space, and the sore need for some sort of scalable way to manage identity in this agentic world.
31:13
Exactly.
31:24
I think the world is going to be beating a path to your door.
31:26
Any moment now, and we're ready for it. One thing I'll add is that we're completely standards-interoperable, right? We're not out implementing a bunch of off-base things that are standalone, Keycard-only. We're building things that interoperate with all existing standards, and we're working to drive those standards forward. So we're really a federated solution, not tied to any specific vendor, and that allows us to be a sort of central pillar in your agent strategy moving forward.
31:28
All the great identity companies have been based on some sort of open standard.
31:51
Exactly.
31:54
And I'm glad to hear that that tradition continues. Thank you so much for coming by. This has been incredibly awesome.
31:54
Thank you so much for having me.
31:58
Awesome.
31:59
Thanks for listening to the A16Z podcast. If you enjoyed the episode, let us know by leaving a review at ratethispodcast.com/a16z. We've got more great conversations coming your way. See you next time. As a reminder, the content here is for informational purposes only, should not be taken as legal, business, tax, or investment advice, or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any A16Z fund. Please note that A16Z and its affiliates may also maintain investments in the companies discussed in this podcast. For more details, including a link to our investments, please see a16z.com/disclosures.
32:01