Alex Blania on Proof of Human and Building World's Identity Network
42 min
Apr 2, 2026

Summary
Alex Blania, CEO of World, discusses proof-of-human technology as a critical infrastructure layer for the AI era. The episode explores how biometric verification using iris recognition can establish human uniqueness while preserving privacy through multi-party computation and zero-knowledge proofs, addressing the escalating problem of AI-generated bots across social media, dating apps, video conferencing, and democratic systems.
Insights
- Proof-of-human is shifting from theoretical future problem to urgent market need post-Claude and multimodal AI breakthroughs, with major platforms now actively integrating the technology
- Iris biometrics combined with privacy-preserving cryptography (MPC and zero-knowledge proofs) can verify human uniqueness at scale without creating centralized surveillance infrastructure
- AI agents will soon outperform humans at persuasion, impersonation, and social engineering, making behavioral/digital-only verification methods (web of trust, GitHub history) obsolete within months
- Proof-of-human infrastructure is foundational to democracy, fraud prevention in social programs, and preventing election manipulation in an age of scalable deepfakes and synthetic content
- Distribution and normalization of verification devices (orbs) is now the primary execution challenge, requiring 50,000+ devices across the US and partnerships with major platforms simultaneously
Trends
- AI-generated content and bot accounts becoming indistinguishable from human activity at scale, requiring cryptographic proof of humanity rather than behavioral detection
- Shift from platform-specific moderation to infrastructure-layer human verification as the only viable long-term solution to bot proliferation
- Privacy-preserving biometrics (iris + MPC) becoming normalized through consumer devices (Apple Vision Pro) and mainstream adoption pathways
- Proof-of-human becoming prerequisite for creator economy platforms, dating apps, and financial services to maintain user trust and advertiser value
- Government ID systems and centralized identity infrastructure proving inadequate; decentralized, privacy-preserving alternatives gaining urgency
- Deepfake video and real-time synthetic media commoditization accelerating demand for continuous re-authentication mechanisms
- Regulatory clarity on biometric privacy and identity infrastructure becoming critical competitive advantage for platforms and identity providers
- Multi-sided network effects emerging around human identity layer, with early movers gaining significant defensibility through scale and platform integrations
Topics
- Proof-of-Human Technology and Infrastructure
- Iris Biometrics and Uniqueness Verification
- Multi-Party Computation and Zero-Knowledge Proofs
- AI Bot Detection and Mitigation
- Deepfake and Synthetic Media Detection
- Privacy-Preserving Identity Systems
- Biometric Authentication vs. Authorization
- Government ID Systems and Limitations
- Creator Economy and Platform Trust
- Dating App Verification and Catfishing
- Video Conferencing Security and Impersonation
- Election Integrity and Voter Verification
- Social Security Fraud and Benefit Fraud Prevention
- Orb Device Distribution and Deployment
- Regulatory Clarity on Biometric Data
Companies
World
Alex Blania's company building proof-of-human infrastructure using iris biometrics and privacy-preserving cryptography
OpenAI
Referenced for ChatGPT as inflection point when AI capabilities became real to mainstream audiences and platforms
Anthropic
Claude and multimodal AI breakthroughs cited as moment when proof-of-human became urgent market priority for platforms
Meta
Global platform with 3 billion users discussed as example of scale challenge for proof-of-human infrastructure
X (Twitter)
Platform struggling with bot proliferation and mentioned as potential integrator of proof-of-human verification
Reddit
Platform discussed as potential user of proof-of-human to combat bots and verify authentic human interaction
Tinder
Dating app already using World's proof-of-human verification in Japan to prevent catfishing and bot accounts
YouTube
Video platform facing challenges from AI-generated content farms and fake engagement metrics
TikTok
Social platform discussed as example where human connection and authenticity drive user engagement
Spotify
Creator economy platform where artist-fan relationships depend on knowing you're supporting actual humans
Patreon
Creator support platform where personal relationships between creators and supporters require human verification
Substack
Creator platform where audience relationships depend on knowing content comes from actual human creators
Apple
Vision Pro includes iris recognition (Optic ID), normalizing iris biometrics as a consumer-facing modality
Google
Gemini AI discussed as capable of converting scientific papers into synthetic podcast content
UnitedHealthcare
Referenced for CEO assassination as example of public frustration with broken healthcare fraud/claims system
Walmart
Mentioned as potential large-scale distribution partner for orb device deployment
Starbucks
Mentioned as potential location for orb device distribution to normalize proof-of-human verification
People
Alex Blania
Discusses proof-of-human technology, iris biometrics, privacy architecture, and go-to-market strategy for US expansion
Ben Horowitz
Investor and interviewer discussing proof-of-human market timing, competitive dynamics, and policy implications
Elon Musk
Referenced for backing out of Twitter acquisition due to bot statistics, highlighting platform's bot problem
Alan Turing
1950 Turing Test referenced as foundational concept for distinguishing humans from machines
Quotes
"How do you prove somebody is human? It is a surprisingly hard problem."
Alex Blania•Opening
"What we currently see is less than 1% of what it will look like in probably a year or two."
Alex Blania•Mid-episode
"AIs are really good at programming humans. Much better than humans are at programming AIs."
Alex Blania•Mid-episode
"If you don't take it serious now, then I think you should get a different job or something."
Alex Blania•Late-episode
"In a world of AI, having a human network is going to be this incredibly important thing."
Ben Horowitz•Late-episode
Full Transcript
How do you prove somebody is human? It is a surprisingly hard problem. I think that people are going to start getting accused of being bots. What we currently see is less than 1% of what it will look like in probably a year. The idea that AGI will lead to some very fundamental shift seems obvious. The AIs are really good at programming humans. Much better than humans are at programming AIs. Absolutely. And AI will be able to have a GitHub account and will be able to post and also attest to five other AIs that these are in fact humans. And even though they're not, honestly, if you don't take it serious now, then I think you should get a different job or something. And all these agents are very, very clever. How do you prove you're real? In 1950, Alan Turing proposed a test. If a machine could fool a human into thinking it was also human, it had achieved intelligence. For decades, that remained theoretical. Today, AI agents run thousands of social media accounts at once, outperform humans in controlled persuasion tests, and generate hundreds of videos a day that audiences believe are real. The Turing test didn't just get passed. It got commoditized. Every platform built on the assumption that its users are human now faces a problem no one has solved. Facial recognition fails at scale. Government IDs weren't designed for a global internet. I speak with Alex Blania, co-founder and CEO at World, which is building the largest real human network, a proof-of-human layer for the AI era, alongside a16z co-founder and general partner Ben Horowitz. Alex, welcome to the podcast. Great to have you. Thanks for having me. So, proof-of-human is having a moment right now. Why don't you first give a background for people who are unfamiliar: what is the moment that's happening, how did we get here, and what is proof-of-human? Proof-of-human, as the name suggests, is do you know if you interact with a human or something else on the internet? 
And actually, I think the kinds of questions that we're now asking is, are you interacting with a human, an agent on behalf of a human, or just an agent? I think these are roughly the three areas that we want to split apart. And describe a little bit the difference between just an agent and an agent acting on behalf of a human. How do you see that distinction? So, quickly explaining just the term proof-of-human, and I think what is hard about it, and then I'll explain how it fits into an agent on behalf of a human. So, what proof-of-human really means is that every individual that interacts on a platform has only one, ideally one account or a limited number of accounts, and stays the owner of that account. That's kind of the property that you're looking for. So, you're looking for initial verification, that ideally should be something like anonymous or extremely privacy-preserving, and then ongoing authentication, that the same person remains in control of that account. And then there's some secondary properties that I think are important to have. But that actually tells you that the really hard thing is uniqueness. Like, what is happening on a platform like Twitter right now is that there's all these accounts, all these bots in the replies, where there's probably one human sitting somewhere and sending out hundreds of thousands of AIs. And there's this catch-up game where Twitter and X are trying to just find them and block probably millions a day of these. Which is what, a 100th of the bots? That's right. That's what it feels like. And then agent on behalf of a human, I think how it will look is, I think all of us will have agents. It's unclear how it will look, whether it's going to be one or multiple ones, maybe with different tasks and even different types of characters. And I think it will then come down to, I approve a certain action of my agent. I give it certain rights to act on my behalf. Okay. Post to my X account, post to my Instagram. For example. 
But it's my Instagram and I'm a unique human that owns it. That's right. You know, that X or Instagram could decide if that's actually something they want as a platform. Right. But that's how you could do it. That makes sense. And so, how do you prove somebody is human? It is a surprisingly hard problem. Yeah. Those agents are very, very clever. It's funny. We started this company now a couple of years ago, before ChatGPT, before all of that. But we kind of took that as an assumption that eventually we will have AIs that pass the Turing test, so they can just claim to be a human and you will not be able to tell them apart anymore on the internet. And also that they would be highly agentic and just run around on their own. And so, that makes it really, really hard because back then, when we started the company, there were like roughly three big ideas that people were interested in. One was this idea of web of trust or like related ideas. So, this idea that you look how someone behaves on the internet or did behave in the past. So, like usually a combination of you have a certain number of accounts that you've owned for a couple of years and then you post regularly or you commit regularly on GitHub. These were the kinds of things that people were using. And then, let's say all three of us have them. And then I attest also that I know you in the real world and I attest to you that I know you in the real world and that's how you would build a certain graph. And that was like a very hot idea back then for this. But we disregarded it basically immediately because we assumed that eventually everything that is just digital an AI will be able to do as well. So, like we're there. Yeah, exactly. So, an AI will be able to have a GitHub account and will be able to post and own an account and also attest to five other AIs that these are in fact humans, even though they're not. So, that was area number one. 
Area number two was to just use government IDs for everything, which we also basically disregarded for a couple of reasons. One is that I think, you know, it's strictly better if the government would not control such an infrastructure, in terms of free speech and actually breaking that apart. But then also... Right, you lose anonymity instantly, right? You could hypothetically set up a system that maybe preserves it, but it's very hard to do. And then the second thing is also the government identity system is just not built for that. And what is so hard about this problem is it's going to be a global problem. And so, it doesn't really matter if one government maybe has the perfect infrastructure. For example, Singapore is like an example of a government that has perfect infrastructure all around. But that barely matters because, for example, I don't know, Meta is a global product with 3 billion users. And there's a lot of other countries. Yeah, Singapore is a few million people. Yeah, exactly. So, do you want to lock everyone else out? So, and then there's a long list of other things why we disregarded that basically immediately. And then the last one is biometrics, which actually immediately gives us this big reaction. It's like... And it even went further because what is so hard about this problem, as I mentioned in the beginning, is uniqueness. And so, just like in very simple words, how you can describe the problem is, well, first of all, for example, what does Face ID do? Face ID checks that I'm the same person who keeps using my phone. And so, it's a one-to-one authentication. So, there's an embedding stored on my phone. It takes a picture of my face, creates a new embedding, compares it to the previous one. And if that is close enough, I can use my phone. So, that's a one-to-one. One embedding to one new embedding. For the proof-of-human problem, you will need to distinguish one new individual from all previous individuals. 
You need to make sure that Ben is trying to sign up, and Ben did not sign up before. And then suddenly it goes from one-to-one to one-to-n. And it's the size of your network, essentially, that you're trying to prove that against. And then you can just do the math, and you can calculate how much entropy, like how much information, just information-theoretically, you need to prove that. And it turns out that's a pretty high number, because it's an exponential problem. And so, then you can just do the math, and you find out that things like a face, or even fingerprints or something, don't work. Like, then you would basically hit a wall after tens of millions of users. And so, then you end up with something like iris, which is the muscle of your eye. That actually has enough entropy. And that it's unique. That is unique. That is unique enough. And how do you also then solve the one thing that biometrics have been subject to historically, which is just replay attacks? Where, okay, I may not have your eyeball, but I've got enough information that I can run a replay attack on you. So, again, it is important, I think, to split up the problem into verification, which essentially, in analog terms, is like you're getting your passport. And then authentication, which is you showing your passport, constantly, for certain kinds of things. And on the verification piece, we went down, if you know World, you know that we have built this thing called an orb. So, it's doing a lot of things to prevent these kinds of attacks. So, for example, it has multiple sensors in the electromagnetic spectrum to just make sure that you cannot show a display to it; it would recognize that. So, I think on that side, we've got it handled. On the consumer side, to then re-authenticate, it turns out to be much harder, because you would need to trust the phone in some sense. 
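The one-to-one versus one-to-n argument can be made concrete with a back-of-envelope calculation. The entropy figures below are illustrative assumptions, not World's actual numbers (iris codes are often credited with on the order of 250 bits of discriminating information; face embeddings far less):

```python
import math

def expected_false_matches(entropy_bits: float, n_users: int) -> float:
    """Expected number of spurious matches when one new enrollee is
    compared against n_users existing users, assuming each independent
    comparison collides with probability 2**-entropy_bits."""
    return n_users * 2.0 ** -entropy_bits

# Illustrative (assumed) entropy figures, not measured values:
FACE_BITS = 25    # rough order of magnitude for a face embedding
IRIS_BITS = 250   # often cited for iris codes

for n in (10**6, 10**8, 10**10):
    face = expected_false_matches(FACE_BITS, n)
    iris = expected_false_matches(IRIS_BITS, n)
    print(f"n={n:>14,}  face ≈ {face:.3g}  iris ≈ {iris:.3g}")
```

Under these assumed numbers, a face-like modality starts producing expected false matches above 1 somewhere in the tens of millions of users, which is the "wall" described above, while an iris-like modality stays negligible even at billions of users.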
Because what we actually do in that moment is when you verify with an orb, not only do we check your uniqueness in a fully anonymous and privacy-preserving way, and we should talk about that, but also we send to your phone a signed face image that you then can later use to re-authenticate against. Right. And with a new iPhone, you can have a meaningful amount of trust in that, but with old Android phones, basically not. Oh, yeah. Because you can just show a deep fake, essentially either through a display or just directly injected in the camera stream. So, that's the problem. And so, it's going to be a mix. If you have a new enough, let's say, iPhone or a general phone, then you can just re-authenticate against that picture that you took on verification. Otherwise, you would probably have to even go back to an orb somewhat frequently. Let's say a couple of times a year, if you just... Right, to re-authenticate. Interesting. And then one of the kind of incorrect criticisms of the approach early was, oh my God, they've got my eyeball. You know, now they somehow have access to my privacy and they're going to do all these things to me and access my accounts. And then they... World's client can impersonate me and all these kinds of things, but that's not the case. That was also like a non-trivial engineering problem. That was very much non-trivial. So actually, I think one point on iris that I think people don't appreciate enough, and that's a bet we took back then, but it was essentially that iris will turn out to be super normal as a modality just because I think we will all wear AR and VR systems. You know, Apple already has it. Yep. Already has Optic ID in the Vision Pro. So I think it's... I think that's a general point. I think it's going to become something that we will use across many different devices and will normalize in that sense. 
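The signed-image re-authentication flow described above can be sketched in a few lines. Everything here is a stand-in assumption: an HMAC stands in for the orb's real signature scheme, a tiny float list stands in for a face embedding, and the similarity threshold is arbitrary. This is the shape of the idea, not World's implementation:

```python
import hashlib
import hmac
import json

ORB_KEY = b"demo-orb-signing-key"  # stand-in; a real orb would use asymmetric keys

def orb_sign(embedding: list) -> dict:
    """At verification time: the orb signs the enrollment face embedding."""
    payload = json.dumps(embedding).encode()
    return {"embedding": embedding,
            "sig": hmac.new(ORB_KEY, payload, hashlib.sha256).hexdigest()}

def reauthenticate(record: dict, fresh: list, threshold: float = 0.9) -> bool:
    """Later: check the record is orb-signed, then check a fresh capture
    matches the signed embedding via cosine similarity."""
    payload = json.dumps(record["embedding"]).encode()
    expected = hmac.new(ORB_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, record["sig"]):
        return False  # tampered or unsigned record
    dot = sum(a * b for a, b in zip(record["embedding"], fresh))
    na = sum(a * a for a in record["embedding"]) ** 0.5
    nb = sum(b * b for b in fresh) ** 0.5
    return dot / (na * nb) >= threshold

record = orb_sign([0.1, 0.9, 0.3])
print(reauthenticate(record, [0.12, 0.88, 0.31]))  # similar capture → True
```

The trust problem Alex raises lives outside this sketch: on an old phone, nothing stops an attacker from injecting a deepfake as the "fresh" capture, which is why a trusted camera pipeline (or a periodic return to an orb) is still needed.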
But I think on the privacy piece, that took us a lot of time, because when we decided back then, you know, with our assumptions, which was six years ago, that we will need a custom hardware device for biometrics, it was actually quite scary, you know, to come to that conclusion. Yeah, that's an expensive conclusion. It's like very expensive, and then just having this idea that you would need to distribute them all over the world, like that just assumes that you would be able to, like somehow bring up billions of dollars and do like a massive effort to distribute all over the world. But then also the privacy challenge of like, how could you build such a system that has all the requirements that we care about? And the two main high-level, you know, ideas on how to solve it were multi-party computation and zero-knowledge proofs. And so, again, what is different to Face ID: Face ID actually, you know, can be very private just because the embedding is stored on the phone. It doesn't have to leave the phone ever, just because it's just you against you in the past. But to check uniqueness, you need to check against all previous people. So something needs to leave. Yeah. You know, something needs to leave and be compared to someone else. And that's a much harder challenge. And how we approach that is we have multi-party computation. And so that essentially means that, you know, in our case, when you verify with an orb, you know, we take all these pictures, they get computed on the device, and then they actually get split up in multiple pieces. So for example, we take a picture of your iris, we calculate an iris code, then we break that iris code in multiple pieces and send it to multiple computers, such that there is no central database of any sort. So no one actually has the information about you. 
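The splitting Alex describes can be illustrated with simple additive secret sharing, a toy sketch of the MPC idea rather than World's actual protocol: each server's share is uniformly random on its own, so any subset short of all parties learns nothing, while together the shares reconstruct the value:

```python
import secrets

PRIME = 2**61 - 1  # field modulus for this toy scheme

def split(secret: int, n_parties: int) -> list:
    """Additively share `secret` among n_parties: each share is uniformly
    random individually, but all shares sum to the secret mod PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    last = (secret - sum(shares)) % PRIME
    return shares + [last]

def reconstruct(shares: list) -> int:
    return sum(shares) % PRIME

iris_code_chunk = 0xDEADBEEF          # stand-in for one piece of an iris code
shares = split(iris_code_chunk, 3)    # sent to three independent servers
assert reconstruct(shares) == iris_code_chunk
# Any single share (or any two of the three) is statistically independent
# of the secret, so no one server holds usable biometric data.
```

Real MPC systems go further than this sketch: the parties can run comparisons (like the uniqueness check) directly on the shares, so the iris code is never reassembled anywhere, even during the computation, which is the "clever interactions" point in the next exchange.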
And then you do some clever tricks of how these different parties need to come together to do a computation that still leaves the pieces apart in such a way that... But nobody has the whole thing. Yeah. So no one has the whole thing. And also during the computation, no one has the whole thing. But they do some, you know, some clever interactions to come to the conclusion. A little like a zero-knowledge proof kind of technique. I mean, it's very different, but I think in terms of the properties that it achieves, it's somewhat similar. Where like you... No one knows anything about you, but you can actually together make a statement about you. Right. And so, you know, you send it to this multi-party computation and what comes back is, yes, that individual is unique. And then the second thing we do is we separate all of this from you with a zero-knowledge proof. So meaning you have the secret on your phone, but no one else has it, no server has it, we don't have it. And then you can later go back to this multi-party computation and say like, hey, I have a secret that is part of that computation and I am in fact unique. And you can prove that to a platform. You could go to the social network and prove that you're a unique user to the social platform without us knowing anything about you or the social network knowing anything about you. And so it's this like very counter-intuitive property that you... There is like, even though it uses biometrics, you preserve anonymity and extreme level of privacy, which I think is super cool. Social media is one kind of vector of things that were annoying and are now becoming overwhelming in terms of just bots, particularly with PSYOps, propaganda, all these kinds of things. What are some of the other uses of bots that are going to be kind of impossible to live with if we don't get to prove for human in the future? 
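The "secret on your phone" flow can be sketched with plain hash commitments. This only illustrates the interface (commit at enrollment, later derive a per-platform value so duplicate accounts are blocked but actions are unlinkable across platforms); a real system uses actual zero-knowledge proofs, which a bare hash does not provide, and all names here are illustrative:

```python
import hashlib
import secrets

def h(*parts: bytes) -> bytes:
    """Toy hash combiner standing in for the cryptographic primitives."""
    return hashlib.sha256(b"|".join(parts)).digest()

# Enrollment: the phone generates a secret; only a commitment is registered.
user_secret = secrets.token_bytes(32)
commitment = h(b"commit", user_secret)
registered_set = {commitment}          # held by the verification network

def nullifier(secret: bytes, app_id: bytes) -> bytes:
    """Same user + same app -> same value (blocks duplicate accounts);
    different apps -> unlinkable values."""
    return h(b"nullify", secret, app_id)

n_reddit = nullifier(user_secret, b"reddit")
n_tinder = nullifier(user_secret, b"tinder")
assert n_reddit != n_tinder                           # unlinkable across platforms
assert nullifier(user_secret, b"reddit") == n_reddit  # stable per platform
# A real deployment pairs this with a zero-knowledge proof that the
# nullifier was derived from a secret whose commitment is somewhere in
# registered_set, without revealing which commitment, which is the
# "counter-intuitive property" described above.
```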
Yeah, actually, I think the simple model I have for it is every moment on the internet that is primarily about humans interacting with each other, or even indirectly interacting with each other. So you can start with simple ones like dating. Yeah. It really matters whether the other side is in fact a person. Yeah, well... Got bad news for listeners. And the person who you expect it to be. Yeah, yeah. Yeah, exactly. We've heard that before. The whole catfish thing. Yeah, exactly. Yeah, yeah. So that's an obvious one. And so for example, Tinder is already using it for that reason. I think... And what's the Tinder use case? So we started in Japan as a test market. And essentially exactly what we just discussed: if you verified with an orb, you get a little badge that signals to other people that you are in fact a human, with a high level of verification. And then also, I don't think that's live yet, but what will come next is that you're actually the person you claim to be. Meaning you have a World ID that is associated to the kind of profile pictures that you use. So you just run a quick check that this is all correct. And so you then know you're not interacting with a bot, but also you interact with a fully authentic profile. Another fun one, because I think it's somewhat counterintuitive, but I think it will be video conferencing. Because you already have deep fakes. I don't feel like going to this video conference. I just put my deep fake up. Yeah, and actually you raised that to me first and that's why we started building a product for it. Because it will actually start with very high value users. Like for example, people like yourself that maybe manage a fund, and sometimes calls actually could be very high value if it's about borrowing money or... Oh yeah, yeah, well, so somebody can be me and say, Eric, can you please wire this Nigerian prince $400 million? Exactly. It would be good to know. 
Yeah, like, you know, that's still slightly hypothetical because these things are not fully real-time and you can somehow... We're very close. But we're very close. And so I think, you know, in a year from now it's just going to be a full commodity and it's going to be super photorealistic and absolutely real-time and you will just not know anything anymore on these video calls. And so I think that's another one. Another one, and I just think it's fun, is going to be gaming. Because gamers really care. Oh yeah, they're playing AIs. Oh, that's frustrating. Especially if they bet money. Exactly. And you lose money, you train multiple hours a day to get really good at this thing, and then suddenly you get destroyed by an AI that is just superhuman in every dimension. Funny enough, I wonder what you think about this, because I don't have a good mental model about it, but even the whole model for video platforms I think is about to break. Because there's a couple dimensions to their problem. But one, the creation of content is becoming super scalable. Like for example, I heard about this one guy that created, I think it was like on the order of 100 videos a day on YouTube and made tens of thousands of dollars a month. All of them are fully AI generated. And people just fell for it. So another question is, is that actually something that YouTube wants to monetize that way? Yeah, I guess that. Yeah, well it's interesting, right, they fell for it. But maybe they liked it. I don't know, that could be. But it would sure be nice to know, like, okay, this is a human video or this is an AI video. Actually my thesis about this is something along the lines of: I think there's categories of content that are clearly just fictional. Like movies are that. It's like you don't care that there's any connection to reality. It's just a fully fictional story. 
But now if you think about something like TikTok or all these kinds of things, like people actually really care about them mostly because there is some connection to reality. Yeah, well there's reality and there's connection to a human, right? So you can create a pretty good... like you can take a scientific paper and give it to Gemini and say make this into a podcast. And you know, it'll be like a pretty entertaining podcast. And it will be reality in that it came from, you know, some real thing. But you would like to know that. You would like to know that. Yeah, I would like to know that. And then it continues: as an advertiser, you would like to know, did a human watch it or did an AI watch it? Yes, right, right. Well, right, that's the other thing is I created 100 AI videos. I had a million AIs watch it. And then I made a lot of money off of YouTube. Exactly. And actually I saw a video today of a YouTube farm. Like it's like thousands of phones that just watch videos all day for a reason. Yeah, yeah. And then like that's got zero value to the YouTube advertisers. And so that's actually a real problem for them. Right. Well, the whole sort of creator economy platforms of the last decade, you know, Substacks, Spotify, and all the people who support artists or, you know, Patreon, is that creators, YouTubers, they have a personal relationship with these people. It's not just they like the art. And so if they all of a sudden found out that they were, you know, bots, they might not want to support them in the same way. Yeah, you might want to give them a big YouTube tip. Yeah, I think there's a certain subset of people who support, you know, want to support actual people and feel like they're having a real relationship. And the thing that I think like people don't really get is that, you know, it should be obvious, but I don't think people really understand the consequence of that. I think two things. 
One is that what we currently experience is like a super, super tiny thing of what is about to happen. You know, just because, like... Yeah, right. It's a glimpse. It's a glimpse. It's a glimpse of the challenge. The cost is just dropping almost exponentially. Agentic capabilities are increasing, you know, in some superlinear form. Like, yeah, what we currently see is less than 1% of what it will look like in probably a year or two. And so, and then second, these things will be actually, they will be superhuman in many ways. They will be like perfectly able to understand you and like talk in the right way to you. For example, there's this one paper, I think we could link it after, but it was the Change My Mind subreddit, where a university ran this experiment where they had AIs actually interact with Change My Mind. Yeah. And they were like superhuman in their ability to change minds, because they were going back to the profiles of the people posting and were like understanding their political motivation, the way they talk, and then they're just interacting in the perfect way, you know, and just like hit all the buttons. AIs are really good at programming humans. Much better than humans are at programming AIs. Absolutely. There's no question. And so I think that's going to get quite scary also. Yeah. But I think at least if you know you're being a victim of a PSYOP, or a very advanced one done by an AI, that would be extremely useful to understand. Totally. Talk a little bit more about the state of the product and the business today. How many IDs are out there? Why don't you give us a little bit of an update? Maybe talk about the evolution as well. Well, first of all, it's a multi-sided problem. And I think there's like roughly three sides that you have to consider. One is, well, you need platforms to use the technology. You know, like things like Reddit or, you know, X or, you know, things like that. 
Secondly, you need distribution of these devices. And I think the right mental model to have for it is how many minutes does it take a person to reach such a device on average? And, you know, currently, if you would take the global average, it would be a terrible number. It would be like, you know, days or something because many people would need to fly. But, you know, how do we get that down to below 15 minutes across the US? And so that's probably roughly around 50,000 devices that you need to deploy. That's like, it's not crazy, but it's also not nothing. It's, you know, it's hard to do. And then the last one is, how does all of that come together to something that a lot of people really want to use? And that's a combination of, you know, the utility of all the sub-platforms, essentially. But all of that layers on top. Like maybe you can use it with a Reddit account. Maybe you get like, you know, a certain amount of a ChatGPT subscription for free. So I think it's going to be a combination of things, but you need to land all three at some point at the same time, which is hard to do. We are now at 18 million users that are verified, 40 million in total in the app. But the biggest thing is, because of the past administration, because we use crypto, we did not really invest in the US for a long time. And that's now the main shift that we're going through. Like for all of this, the main thing that matters is the US. And hopefully we get the CLARITY Act passed shortly. Yeah, exactly. That would be really great. So to get clarity on that. Yeah. So the big focus that we are going through right now is to kind of go all in on the US. So I think over the next year, 90% of the effort of the company is just going to go toward the US. And how do you get, for example, device distribution up? How do you eventually have this in every Starbucks? So it becomes just super normal and people just use it every day. So that's kind of the... 
And on the platform side, actually we went through a... It was a very interesting experience to go through personally, because I think... Like a couple of years ago, universally people just made fun of us. It was like the universal reaction. Well, minus us and a couple of other people who believed in it, but... Yeah, like in the press, like the amount of fun-making of something that... It just shows how short-sighted people are. That's right. It's like, you don't think the bots are coming? What did you think when we first pitched actually? Because even you must have thought this is crazy. Well, because you had the orb. Like the orb was so wild. Okay, we're going to scan people's retinas and that's how we're going to know they're human and so forth. And this was... I mean, you pitched us... Six years ago? Six years ago? Yeah, it was before COVID because you were there with the orb. Right. And AI just hadn't happened yet. And you could kind of see there's bots, but they were kind of very crude compared to what there are now. But it seemed inevitable, at least at the time. The thing was, it was so out of... It was so from the future that we always worry about, okay, like what's the timing of this and this and that and the other and so forth. But you were impressive enough and it was going to happen eventually. And it was an exciting enough idea that I think all those things kind of got us to go, okay, we're... But it was not... It wasn't obvious that it was going to work in that timeframe. It seemed very non-obvious for a long time. And how different was that pitch from what it ended up being? It was actually pretty much exactly the same. I think it's the same thing. The device changed. You know, they've made it much more economical and convenient, but... That's right. But the initial instinct was right. It was basically everybody's going to have to prove their... You're either going to have to have some proof that you're human in cyberspace or like... 
It's going to be a very bad world. I mean, the robots are going to get us. We're done. Right. And actually, the second piece was... The first thing was that it's going to be... That itself is going to be a big deal. But second of all, when it becomes a big deal, we will be able to build one of the most valuable networks as a result of that. Because in a world of AI, having a human network is going to be this incredibly important thing. So actually, two things: one, you will need to prove you're human, and second, it will have very strong network effects. And even as you got into the platforms, their largest problem has been bots. I mean, you remember Elon, and, you know, he tried to back out of buying Twitter because all the stats were based on bots. They still... Even knowing that, it was hard for them to get all the way to the future in their thinking and go, yeah, we need proof of human. Like, it's kind of obvious. Yeah, because people were like, what does it even mean? You know, what does proof of human even mean? We can just, you know... And did you have the... All the detection tools... When did you come up with the language, proof of human? We had proof of personhood for the longest time. It's even here on this... Yeah. Yeah. But then at some point, we were like, shit, at some point, AIs will have personhood too. So that's not gonna fly. But they're not gonna have retinas for a long time. Yeah. All of that's coming eventually. It was actually really funny. Some of the OpenAI people that I met were like, man, Alex, this is gonna be so dark. People will hate you for not giving personhood to AIs. I was like, Jesus. Let's call it proof of human then. That's fine. So, that's how it changed. But then actually, I would say, like, last year, so post...
Then there was a big shift post-ChatGPT. That was when AI suddenly got real to people. And that's when people started talking to us, but still they were like, you know, it's a future problem, it's probably a couple of years out, we don't really care about it, let's stay in touch. That was the common response. Well, but you also had a couple of CEOs that really believed it and were willing to take the long-term bet, to give them credit. But I think the second big shift was actually Claude bots and Moltbook recently. Yeah. Just because... Yeah. That kind of means the cow is way out of the barn. Yeah. And so, honestly, if you don't take it seriously now, then I think you should just get a different job or something. Yeah. They're just not thinking about problems in the right way. And so that was the moment when many, many people started reaching out. And now it feels like much more of an executional problem. Not a market risk or a thesis problem anymore. Which is still a big fucking problem. How do you get 50,000 devices out there? How do you make it cheap enough? How do you make it economic? How you meet all three of these things at the same time is still the very hard part. How do you normalize the behavior, et cetera, so people aren't weirded out in a Starbucks or something. Although I think that's now going to be easier to get used to. Just because I think people will hate the alternatives so much. And I think people are going to, by the way, take a lot more pride in being human, particularly online, because people are going to start getting accused of being bots. I mean, it's going to get really weird. And without clear delineation, it's going to be a mess.
Like, I don't understand how somebody can think they're going to have a social media platform that doesn't distinguish between humans and bots. That seems absurd to me. My guess is, over the next couple of months, we will see these platforms trying to use things like face biometrics on the phone, which, you know, we know will break, so it's fine. But I think we will go through that cycle now. And so we just need to get to scale fast enough to meet the market with what comes after, which I think is something like the orb being the only solution. I think currently there's no real competition. I think we will also see that. I have not seen a competitor yet. Because it's so ridiculous. It's so ridiculous, and it is so hard to get to in terms of building it. The fixed cost is so high. And then there's a massive network effect, and people are starting six years behind you on that. But yeah, I'm sure they'll come, because it's just such an obvious problem now. What do you actually think, as AI continues, what are, in your mind, the economic policies that we will need to implement, or directionally? I think governments do have to figure out how to send citizens money. They're good at taking money from citizens, but not the reverse. I mean, just go back to COVID and the stimulus program; I think $400 billion was stolen. You would have liked to know that you were sending the money to unique humans. I mean, even if not citizens, as long as they were unique humans, that would have been good. Yeah. I mean, the social security system, for example, is a mess in the US. It's a total disaster. We're going to have to get to some kind of cryptographically strong way to identify who's the citizen of what country. That's going to be a really bad problem, I think. Otherwise, there's no way to even have a democracy.
I mean, it's pretty crude what they're trying to do with the SAVE Act, but it's not completely insane, which is: how do you even know the people who are voting are actual people, or living people, or anything? And we really don't know now. We genuinely don't know. And then if you go to the whole mail-in ballot thing, it's built for a very different world, right? That's right. So I don't think that in an AI world, where you can have very high-scale impersonation, and with a broken social security system, you're going to have the will of the people anymore. I think that's going to be gone pretty fast. So I think we're going to need some kind of cryptographically strong infrastructure on who's who. And then, similarly, I think we're going to have to be able to get people money much more efficiently than through these crazy apparatuses of social programs that we have. Just because of how lossy and fraudulent Social Security or Medicare or any of these things are. I mean, Medicare is so frustrating for people that they shot the CEO of UnitedHealthcare. And people are happy about that, like, really happy. So think about how bad a system is when the government spends a lot of money sending you money for your healthcare, but does it in a super inefficient way. And we have the technology to do that now. So I think that AI is going to make that problem so bad, because of the ability to file fraudulent claims and create fake... I mean, you can buy social security numbers on the black market. For those of you who don't know, that's an easy thing. That's a real thing. Everybody's social security number is for sale. And so AI is just a way of making that kind of loose, black-market, underground fraud thing massive and extremely scalable. I agree with that. Yeah.
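The "send money only to unique humans" idea can be made concrete with a rough sketch. This is not World's actual protocol: in a real system, the per-program value would come out of a zero-knowledge proof over a verified-human credential; here a salted hash stands in for that nullifier, and all names and secrets are hypothetical.

```python
import hashlib

# Illustrative sketch of one-claim-per-human disbursement (not World's
# actual protocol). A benefits program tracks a per-program "nullifier"
# instead of any identity data: the same person always produces the same
# value for one program, so duplicates are caught, but values across
# different programs are unlinkable without the person's secret.

def nullifier(human_secret: bytes, program_id: str) -> str:
    # Stand-in for a ZK-derived nullifier: deterministic per (human, program).
    return hashlib.sha256(human_secret + program_id.encode()).hexdigest()

class Program:
    def __init__(self, program_id: str):
        self.program_id = program_id
        self.spent: set[str] = set()   # nullifiers already paid out

    def claim(self, human_secret: bytes) -> bool:
        n = nullifier(human_secret, self.program_id)
        if n in self.spent:
            return False               # duplicate claim by the same human
        self.spent.add(n)
        return True                    # first claim: send the money

stimulus = Program("stimulus-2026")
alice, bob = b"alice-device-secret", b"bob-device-secret"
assert stimulus.claim(alice) is True    # first claim pays out
assert stimulus.claim(alice) is False   # duplicate is rejected
assert stimulus.claim(bob) is True      # a different human still gets paid
```

The point of the design is that the payer never learns who Alice is, only that this particular human has not been paid by this particular program before.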
So I think, you know, proof of human is a piece of a very important puzzle, where we have to upgrade that entire infrastructure or we're not going to be a democracy anymore. That would just be my guess. I agree with that. Say more: you said, okay, next year the go-to-market is focused on the US. How are you thinking about that? Is the incentive for people to do it that they get to use a set of services? Is there some other economic incentive, or how do you envision it? Basically, a month ago we entered a very different phase as a project, where I do believe many of the platforms that we are integrating with will really bring a lot of users to our platform. And that changes how you think about it entirely. If you have a platform of a billion users sending users to you, then it's really just all about how you meet that demand. And that's what we're now entering. So the response is: first, I think you will see, and we're already working on it, a lot of really large platforms that integrate in the near-term future. Just to set expectations, I think that will be slow initially, because it also should be, just to get to understand the product. It will be focused on certain geographies, like what we did with Tinder, where we started in Japan, just to test the product and also to normalize the concept. But that will happen. And then secondly, which is now becoming one of the main priorities for me: how do you get this orb distribution up? Broadly speaking, there are a couple of different dimensions to that. First of all, the product needs to work at scale without supervision, which turns out to be much harder than you would think. Every engineering problem at scale turns out to be much more complicated than you would think.
Because, you know, fighting for 1% of improvement in quality is this clusterfuck of all these dependencies coming together. So I think that's one of the biggest engineering focuses right now. But then second, you need to find places to deploy them. And the way to think about it is: there are large-scale distribution partnerships, which could be something like Walmart if you're very ambitious, it could be something like Starbucks, or you can just go to one of the hip coffee shops and put it there. Or, you know, you could eventually even go to the DMV and just put it right there. So that's the problem we're currently trying to puzzle together. And it's going to be some combination of all of that. I think there are going to be some large-scale distribution partnerships, many one-off coffee shops. Well, actually, one thing that we will launch soon, and the team is going to hear that I'm saying this now, but it's going to be Orb on Demand. So, yeah, it's on the way. Just because it's such a gnarly problem to get an orb to truly everyone. To get that, the CapEx is insane. So it's actually much cheaper and easier to just put an orb on a motorbike and drive it to you, as crazy as it sounds. So in places like the Bay Area or New York, you will just be able to say, yeah, I want to verify now, and 50 minutes later an orb comes to you and you can verify. And did you ever think about, this is probably a terrible idea, but having kind of different levels? Like, we know you're a unique human, or this guy may be a unique human because he's done it on his iPhone. It's not quite the same, but... Yeah, yeah, we have that. Generally, we have the principle of: whatever could be useful for this problem, we just build it. And so we have something called FaceCheck that does that.
So it uses your face from the phone camera. It still uses multi-party computation, which we've built for the entire system, so you're still anonymous. And it of course achieves way less accuracy. So, as a system, you will know something along the lines of: well, at least one person cannot create a hundred accounts; maybe it's just 10 or 20. So at least it's some measure of rate limiting. And I do think, just to set a disclaimer, with deepfakes and all this stuff, that will fundamentally break. So it's a temporary solution that I think can get us to scale. That's how I think about it. We also actually use government IDs similarly, where we use just the ones that have an NFC ID chip, and we use multi-party computation, so you remain anonymous. And platforms can choose to use that as well. But no one really did. Somehow they have this very negative stigma, which I think makes sense. But basically, whatever could do it. By any means necessary. That's right. Well, thanks so much for coming to the podcast. It's been great. Thank you. Thanks for having me. Thanks for listening to this episode of the a16z podcast. If you like this episode, be sure to like, comment, subscribe, leave us a rating or review, and share it with your friends and family. For more episodes, go to YouTube, Apple Podcasts, and Spotify. Follow us on X at a16z and subscribe to our Substack at a16z.substack.com. Thanks again for listening, and I'll see you in the next episode. As a reminder, the content here is for informational purposes only, should not be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any a16z fund. Please note that a16z and its affiliates may also maintain investments in the companies discussed in this podcast.
For more details, including a link to our investments, please see a16z.com/disclosures. Thank you.