
How micro1 grew from $4M to $200M revenue in a year | Ali Ansari
Ali Ansari, founder and CEO of Micro1, discusses how his company pivoted from AI recruiting to become a human data provider for AI model training, growing from $4M to $200M revenue in a year. The conversation covers scaling challenges, risk-taking philosophy, and his vision for the trillion-dollar human data market.
- Founder's primary role is to inject calculated risk into the company since no one else will take bold moves with significant upside potential
- When scaling 30x in revenue within a year, traditional planning and KPI frameworks break down - companies must stay flexible and adaptive
- Human data will become increasingly valuable as AI capabilities expand, creating new job categories and economic functions
- Short-term incentives aligned with long-term equity can drive exceptional performance during critical company moments
- Maintaining lean teams while scaling requires building a culture where hiring is the absolute last resort, not the default solution
"I came to the realization that the founder's job is to inject as much risk as they can into the company. Because really no one else will like upside risk."
"What I focus pretty much my entire time on is three things. One is hiring, two is product, and three is aligning incentives."
"When you're in a space that is growing so fast and three months can literally double your run rate, you actually do need short term incentives."
"We want to make sure that the future of AI is as human as it gets."
"If you're making a business decision in almost every case, the worst case scenario is you're going to go bankrupt. And so if you're assessing the worst case scenario every time, it's just like not a helpful data point."
I came to the realization that the founder's job is to inject as much risk as they can into the company. Because really no one else will take upside risk. Bold moves that have a pretty bad downside too, but if it works, it works well. What I focus pretty much my entire time on is three things. One is hiring, two is product, and three is aligning incentives. You have to align incentives very long term. When you're in a space that is growing so fast and three months can literally double your run rate, you actually do need short term incentives. Our recruitment team sometimes gets these absurd bonuses if they hire a thousand people. Sometimes we have a customer that is about to sign. I might tell them, hey, if this closes, you'll double your equity.
0:00
Today I have the pleasure of sitting down with Ali Ansari. He is the founder and CEO of Micro1. Let's start off with this: I think a lot of the models today are kind of specific versions, like GPT-4 or 5 and so on, but eventually we're going to have these constantly updated models. You specifically are trying to create this super specific, high quality data for constantly improving models all the time. Can you just talk about that?
0:42
Yeah, absolutely. Well, first of all, it's good to be here. Thanks for having me, Tai. So I think the way that AI labs are improving their models is by picking domains to improve on, creating reward models within those domains through this notion of RL environments, connecting them to their policy model, which is the model that actually serves the customers, and improving in that domain of choice. And some domains are emergent, which means as you improve in that domain, you get capabilities beyond it. Coding is one example: there are a lot of other functionalities that come about that are beyond coding capabilities. But the truth is most domains are actually not so emergent. If you improve in any given domain, you'll just improve in that domain. And so what this means is that labs are having to pick a very wide range of domains, finance, medical, legal, and a very long tail of hundreds of other domains, to build general intelligence. And the way they do so is they create these expert-level data sets, which result in a reward model, which then connects to their policy model to improve it. So that's the structure: researchers come up with these hypotheses, like, hey, I think this data structure is going to work, hopefully it'll be emergent, but in most cases it won't be. And then they gather that data by having a bunch of experts create the net new data that's required to improve the capabilities.
1:05
You started off with just a recruiting tool for great talent. Do you want to just talk about like the early days and what the initial company idea was and then how you transitioned into the new business?
2:42
Yeah, so when I was at Berkeley, I had a software development agency. Nothing special, very service oriented: we built websites and apps for other companies. And one of the main things we had to do when projects came in was vet engineers and assign them to the project. So a lot of my days were spent basically interviewing engineers. So I developed this tool, this AI screener, with one of the early GPTs, that essentially helped me screen these engineers. The system would talk to them and have a conversation about React, Node, a bunch of other tech stacks, and then it would give me a report on how they did in those frameworks. And then the ones that did well, I would talk to, and basically save a bunch of time on interviews. And so this internal tool turned out to be the product that came out of that agency. That was the first version of Micro1, which we started to sell. At first it was the AI screener, but then it became this kind of end to end recruitment engine that we sold to other companies as software they would subscribe to. They would also be able to pre-vet talent really easily. And then we also had this marketplace built that allowed us to have a bunch of pre-vetted engineers and product people that startups would hire from directly. So this was the first version of Micro1. And within a year or two of building this, which was a good business, I mean, it was growing pretty fast and it was a pretty fun product to build, there was a data provider that became a customer of ours, and they essentially started to hire hundreds of engineers from us. Within like three weeks it was 700 engineers hired, and it was myself and our CRO sitting there like, man, what is this company building?
Like, what the fuck is going on? They're really hiring a lot of engineers, really fast. They must be building some crazy software. And turns out this was the human data space. They were helping a lab train models on coding, so they were hiring engineers. And we were entering the era, this was a few years ago, of experts that had to be hired to help with this human-in-the-loop kind of model training. And long story short, we decided this right here is the best application of what we've built, which was the AI recruiter. And so we just went all in. It actually took a bit of time for us to go all in; I wish we had made this decision a bit sooner. But long story short, we ended up going all in on the human data space, and now it's our only focus.
2:52
And how did you actually make that decision? Because if you're working on something for a year and a half or two years, you're kind of invested in it. What was going through your mind at the time when deciding to move and basically pivot the company towards this?
5:42
Yeah. So, I mean, first of all, the number of hires that companies in the space were making was orders of magnitude higher than anything we'd experienced before. So we were like, okay, this is definitely a great customer base. That was the first thing. The second thing was that the reason these customers came to us was that the product we had built initially actually solved this bottleneck that they had, which is, again, recruiting experts for model training. So it turns out the product, even in its state a long time ago, was already very useful for these types of customers. And so I don't even like to use the word pivot that much. It's more of an iteration on market focus, and the product itself was actually very similar to what it was even before this. It was more of an evolution, exactly. And of course, what that evolution meant is we continued to focus on the AI recruiter piece of our overall infrastructure, which I would argue is still the most important part, because, again, we still need to deeply source and vet these experts. But now there are other parts of the product, like the data platform and performance management, and we're really owning the data pipelines end to end. So there's a lot more now that we've made this evolution. But really, the initial state of the product was in a great place to serve this demand.
5:53
You are very focused on this idea that humans are going to basically be valuable kind of indefinitely. How have you kind of thought about designing this philosophy of a human centric approach and making sure that the people that are on your platform are having a great time?
7:19
This is by far the number one focus we have: we want to make sure that the future of AI is as human as it gets. And the way to do so is, of course, aligning the models with humans, making sure the models are safe for humans, and all of that. But I think the way you do that is you create an exceptional experience for the humans that are actually producing these alignments, the humans that are giving feedback to the models, creating the structured judgments that the models use to actually learn. The way they convey their judgment as a result of their experience is what results in great models that are very much aligned with humans. So part of it is we want to make sure the models we build are great and safe for humanity. But the other part is we're in a position where we're able to create this massive new job sector of experts in lots of different domains training AI. I think there are now maybe around 100,000 or even more experts around the world, a very large portion of them in the US, doing this job, and a lot of times as their main thing, actually. And so as we create this new job sector, which feels like an honor for us to help create, we need to make sure it's.
7:34
A really fun one that lays the right foundation.
9:07
Yeah, exactly. A really fun one for the humans that will be working in it. And what that means for us is we make sure that the experts that go through our process have a great recruitment experience. We build into our product this idea of maximizing the NPS score of experts as one of the main things our engineers focus on. We make sure they have an exceptional onboarding experience. And once they actually start on the job, we very closely track what we call their happiness index.
9:10
Are you the first company to ever come up with a happiness index for trainers?
9:41
We're actually building a model around this as well. We're calling it the M1 happiness model. So, yeah, we're all in on this. I think it's an important thing to do.
9:43
One of the things I noticed is every part of your entire company, you're trying to be very analytical and figure out what are the key, like KPIs almost, that you can track and measure and then improve. How did you initially come up with the Human Happiness Index? And how do you actually measure whether or not people are happy?
9:55
Initially we came up with it because we realized that there are a lot of companies in the space that really undermine this. And of course we have a lot of great competitors; kudos to the many companies in the space that have done a pretty incredible job. But there are some that really don't care a lot about the experts. They really undermine their experience. And long term, this results in material impact to the company's performance, actually. So even if you take a completely shareholder-value approach, this matters, right? So that's the first thing. And then the second thing is, again, we want to serve our customers well. And the way we serve our customers well is by helping them train their models in the best way possible. And happy humans do that much better. So the way we track it is, there are a bunch of things, but the main thing is we have a form called, quite literally, the "Are you happy?" form. We ask them to rate their experience one to five on a bunch of different things, and we also have a bunch of qualitative questions for them to fill out. And we use this in two ways. One is our project leads that own these pipelines come up with a bunch of actions that help improve the happiness index. One of those actions could be increasing the pay of folks that have told us they are not happy with their pay. Another could be increasing the number of what we call HDMs, human data managers, that are helping the experts navigate this world, increasing the count of them because there isn't enough support, things like this. But the second thing, which is what I'm really excited about and actually a big focus of mine, is the M1 happiness model. We use the data from this form to predict the happiness of experts as they apply to a certain job. Essentially, when you apply to a job, of course, the main thing is we want to make sure the skills match.
You need to be able to do the job well. If you're a lawyer and we have a pipeline for M&A, you need to have a lot of experience in M&A, and we deeply vet that. The AI match score, which basically looks at whether a person fits the skills of the job, is right now entirely based on that. But what the M1 happiness model will do is take into account the probability that they will be happy on the job, and reduce the match score if we believe they'll be unhappy and increase it if we believe they'll be happy.
10:12
I can just imagine, like, Uber routing rides and deciding driver pay or something based on how happy they're going to be.
12:45
Yeah, if someone driving the Uber likes, you know, water a lot, maybe you weight the routes that include water a little higher. Maybe you get slightly happier drivers. So I think this will be important, because the goal of our research team, at its core, is of course to build these pipelines and help train models and so forth. But at its core, we want to help determine where humanity should spend its time. And of course, humanity spends most of its time on the job. And so this idea of having your skills match, but also being satisfied on the job, is the most important thing for us.
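The scoring mechanic described here, a skills-based match score nudged up or down by a predicted happiness probability, could be sketched roughly like this. Everything in the sketch (the function name, the cap, the linear weighting) is a hypothetical illustration, not Micro1's actual system:

```python
# Hypothetical sketch (not Micro1's actual implementation) of the idea
# described above: a skills-based match score is scaled up or down by a
# model-predicted probability that the expert will be happy on the job.

def adjusted_match_score(skill_score: float, p_happy: float,
                         max_adjustment: float = 0.2) -> float:
    """Scale a 0-1 skills match score by predicted happiness.

    p_happy is a predicted probability (0-1) that the expert will be
    happy on this job; max_adjustment caps how far the prediction can
    move the score in either direction.
    """
    # p_happy = 0.5 leaves the score unchanged; 1.0 boosts it by
    # max_adjustment, 0.0 reduces it by max_adjustment.
    factor = 1.0 + max_adjustment * (2.0 * p_happy - 1.0)
    return max(0.0, min(1.0, skill_score * factor))

# With this weighting, a strong skills match with low predicted happiness
# can rank below a slightly weaker match with high predicted happiness.
unhappy_strong = adjusted_match_score(0.90, p_happy=0.1)
happy_decent = adjusted_match_score(0.85, p_happy=0.9)
print(unhappy_strong < happy_decent)  # the happier candidate ranks higher
```

The cap matters: without it, a confident happiness prediction could swamp the skills signal, which is the opposite of the stated design, where skills fit remains the primary factor.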
12:50
You've talked about robotics as something that you're interested in but not hugely focused on right now. But also, I think there's just this great lack of awesome real-world data where people are actually doing tasks. I was thinking about it on the ride over here: imagine a chef making food. You'd probably want a bunch of data on chefs making a specific dish again and again and again in order to train a robot to do that well. How are you thinking about creating an entire new data set out of nowhere and doing it in the right way?
13:31
Yeah, so robotics is definitely a data vertical that we're thinking a lot about. I think the long-term picture is that the physical world is obviously a lot harder to navigate than the laptop, so the data need will naturally be a lot greater. So we're starting to think about this a lot, and we have a bunch of pipelines that we've built. I think the interesting part, the difficulty for robotics labs, is that there is no Internet for robotics models. Of course you can argue that there's YouTube and these things, but it's not the same as LLMs being able to train on the whole Internet. And so the first step is you have to create an Internet equivalent for robotics. What that means is basically very basic world navigation and manipulation abilities need to be distilled into robots. And so the pipeline we have right now is a pretty funny one. We have about 3,000 people around the world in 50 different countries who are essentially putting a camera on their head and recording themselves doing things in their house, recording their hands only, without any personal identification in the videos. And we're annotating those videos for VLA models. The idea is that this will be easily mapped to any robotic system, and robots will be able to learn from it. And the key thing is maximizing diversity and really letting these candidates all around the world do whatever their usual day to day tasks are.
14:03
Their natural.
15:51
Exactly. Their natural state of living. They're going to do those tasks anyway, so you might as well get paid for it by doing it for Micro1 and helping train a friendly robot that will maybe help you do them in the future.
15:52
Yeah, yeah, so you're just focusing mainly on people just living their normal lives right now. How do you think about like designing a specific data set for let's say culinary arts or something like that?
16:06
Yeah, so I think there are two broad categories of robotics data. The first is household tasks, and I would say most generalized humanoid companies at least have some focus there; it seems to be the majority of the focus. And the second is more industrial and manufacturing and so forth. We're doing a little less of that for now, but that's a pipeline that will probably kick off sometime in the future.
16:19
Getting 700 people hired at one company in a matter of a couple of weeks, I imagine that's a complete inflection point in your own mind. And then you went through the transition and growth yourself. What are the things that went wrong or went right during the first couple of months?
16:45
Yeah, so the world of robotics and the world of generalist hiring is very different, because the volumes of talent we need to hire are much higher. And if you look at LLM training, where the volumes are already quite high, I mean, they're in the hundreds. Sometimes you have to hire 300 doctors in a week. And it's not just any doctors; it's world-class surgeons in specific countries that know specific languages and so forth. And then sometimes we have to hire 500 lawyers in a few weeks. So the volumes are already high in the LLM training space. But for robotics, and more broadly for generalist hiring, where it's less the very niche experts like doctors and lawyers and more, you know, voice experts or maybe just candidates generally, the volumes for those pipelines go into the thousands pretty quickly. So the intensity of it is quite insane.
17:01
Do you just go from basically having no presence? Let's say for doctors, for example: you've never hired a single doctor before, and then you have to hire 300 in the next month, but actually do it well. How the hell do you do that?
18:00
Yeah, so the way you do it is you rely heavily on the AI recruitment engine. There's no other way to do it. I mean, the other way becomes a chicken-and-egg problem, because you'd have to hire hundreds of doctors to interview hundreds of other doctors. And how do you hire those hundred doctors in the first place that know those specialties? So it's quite literally impossible, or close to it, without an agent that knows exactly the very niche capabilities of that surgeon in whatever country you want to name. And so we rely heavily on the Micro1 Zara agent. And our recruiters actually help design these environments, these interview environments, where, based on the pipeline that the customer sends over, they define the exact skills that need to be vetted. And then Zara goes in and does it.
18:11
What was the first like specific vertical that you went into when you started basically hiring a bunch of people to create training data?
19:04
The first ever vertical was actually coding, with that one company I told you about that was hiring hundreds of engineers. And then the second one, which was a really big inflection point for us with an actual AI lab, was finance. I remember we were in their office and this lab was telling us that they were struggling in this domain, which was much more subjective than others. And it was specifically "business experts," quote unquote. It's a bit of a vague term. So, hire a bunch of MBAs? Yeah, it was actually sort of like that. And I remember I told them, hey, I think you should also hire some startup founders and not just MBAs. So it was basically a mix of MBAs and startup founders, like 30 or 40 folks. We ran a pilot, it did well, and then we expanded into a bunch of different domains right away. But finance continues to be a very core focus of ours.
19:11
So when you first like started to see things take off, what kind of went wrong?
20:15
Like, what broke? The truth of this business is that it is very operational. You need to provide a world-class white-glove service to your customers, and, no matter how much pretending other human data companies like to do, your customers will not log into your product. You have to build products that make your white-glove service world class. And so what that means is operations go wrong, and you have to hire these exceptional, what we call SPLs, project leads, that help manage these pipelines. And if the hire is not a good hire, it affects the relationship a good amount. And the scale-up is very fast with these customers. The products that we built allow us to scale up with the demand that exists, but really there's still this limiting factor of hiring exceptional core team members, and you can't shy away from that. You can't mess that up. And so that's really what continues to go wrong. But the way we reduce that going wrong over time is by continuing to productize our human data offering as much as possible: building the recruitment engine, of course, but also having a data platform that's very modular and handles any data structure, and building a performance management tool that quantifies expert performance and data quality, velocity, all these different metrics. That allows the dependency on these core team hires to reduce, which will allow for even further scalability.
20:20
All the value of the company is going to be created from maybe only a couple of relationships. I imagine it's incredibly important, and also incredibly difficult, to build conviction and trust in a single individual to get that right. How do you do that?
22:16
We look for two main things for making these hires. The first is the sheer agency that we can predict from interviews. It's very hard to predict, actually, but with the exercises that we send to candidates and the whole interview process, we try to assess their level of agency and really how much they will care. And part of that is you cannot have exceptional humans care if they don't have the incentives to care. So part of that is actually our job: to ensure that someone who naturally does have agency continues to have agency, because if they're smart, they won't have agency if the incentives aren't there.
22:29
It's like the problem where you're great, so you get hired at some big company, and then you're neutered and can't actually do anything.
23:12
Exactly. And so if we create an environment like that, we actually can't hire exceptional folks. So first, the agency starts from us internally: the incentive structure, the products we build, et cetera. But then, of course, the person has to innately have that. Once the incentive structures are there, they need to be able to perform. The second thing is we actually look for folks that will take risk. I think folks joining startups are naturally a little more likely to take risk, but if you're not starting your own company, you'll naturally be a little more risk averse. So we try to look for people who won't be scared of taking risks and, in fact, really messing up in the first couple of months. And even if you mess up pretty badly, that's fine. We actually try to celebrate mess-ups. I'll give you one story. We had one of our SPLs who is now actually in the senior leadership at our company. I won't name him, just because he's on a very confidential client. He had a pretty bad mess-up in terms of promising a completely unrealistic timeline to a customer. And the customer was like, wow, they can really do things fast, this timeline is pretty incredible. And we realized within a couple of days that this timeline was completely impossible. And we had to make the decision to try anyway, and this got to high-level people at that company, and it was a pretty big mess-up for us. And this person now runs that account completely; he's the most senior person on it. And that month, that person was team member of the month. And of course it wasn't because he messed up, I mean, all this would be a bit stupid if it were because he messed up, but because he took the risk and came back from it in a really incredible way.
And the customer actually built a lot of trust after we came back from this mess-up. And this was one of the greatest things that happened to us, in a kind of funny way.
23:18
I remember reading about the optimal hotel stay, and it turns out the optimal stay isn't the one where you go through the flow, get there, and everything goes right. It's the one where you get there, something small goes wrong, and the hotel goes out of its way to correct it extremely fast. Because of that, it instantly builds trust with the customer.
25:33
Exactly. Except this time it was a big mistake rather than a small one. But that's a good analogy. Yeah.
25:56
How do you think about taking risks? Because there are some risks you can come back from, and then there are others that are one-way doors: once you're through, you can't come back. So how do you decide which one makes sense?
26:02
Yeah, so this is of course Jeff Bezos's philosophy of having extremely high velocity on two-way decisions and much lower velocity on one-way decisions. And I think the way to do it is, this is sort of the third pillar in how we hire, which is the judgment and overall intelligence of the person. They should want to take risk, they should have agency, but they should also have good judgment. And if something is truly a one-way door, they should not take that risk. So that's the third thing we assess for, as kind of a baseline.
26:16
Do you try to have it where, if it is a one-way door, they figure that out quickly and then come to you, and you decide collectively whether or not to go through it?
26:56
Exactly. And we try to run a system where we limit bureaucracy as much as we can, where team members can DM me easily. I mean, we have many thousands of experts in a different kind of workspace in Slack, and they can DM me at any time, and they do, a lot.
27:04
One message from a thousand people becomes unmanageable pretty fast.
27:25
So we make it so that if someone needs a quick opinion on a one-way decision, they can get it very quickly.
27:28
Yeah. What have been the biggest one-way doors that you went through, where you looked at all the facts going into the decision, decided to take the risk, and it worked?
27:34
I think the biggest one is this idea of going all in on data and not focusing on anything else. We decided this, and I actually think we decided it a bit late. In retrospect, of course, it's easy to say, but I wish we'd done it a lot earlier. We started to be in the space much before this, but we decided to go all in on data about a year, year and a half ago. And when we did that, in 2025, we 30x'd in revenue. So the decision worked.
27:46
Not from like a small base either.
28:24
We started the year at roughly a 4 or 5 million run rate and ended at roughly 150. So that was a decision that worked. And we try to make a few of these bold decisions each year, and if one doesn't work, it's okay. There are a lot of great learnings in it, and there have been a lot of those as well.
28:26
So one of the things I've noticed throughout researching you is this idea of incentives, and we talked a little bit about this beforehand. One of the biggest things Warren Buffett has been very focused on is that he's basically this massive incentive aligner: he tries to figure out how to structure incentives at each individual company he controls so that everyone is creating long-term value for the company and funneling as much cash up to Berkshire to reallocate as possible. If you have the wrong incentives for something like insurance, you'll write a bunch of insurance that doesn't actually make sense. There's this big spike and you've grown a bunch, but the problem is three or four years later you've lost all this money because of it. And because some executives turn over so quickly, you can have a situation where a person grows the company a bunch, leaves, gets a massive payout, and the issues only show up after they've left. How do you think about designing incentives so that everyone wins on the right time horizon?
28:52
What I focus pretty much my entire time on, or at least I try to, is three things. One is hiring, two is product, and three is aligning incentives. Like, I literally sit down and think about the incentives that everyone has, their bonus structures, their equity, and everything else, and I try to keep optimizing them. Sometimes I'm sitting there and there aren't many actions to take; it's almost like you have to come up with random actions. But I try to think about this every single week, for hours a week. And I think, as you said, you have to align incentives very long term as the main priority, through equity. I'm very grateful to be a sole founder, so I'm able to give a bit more equity to the founding team and really everyone who joins, even now. So that's definitely part of it. But another part is that it's actually important to align short-term incentives as well, and I think perhaps this isn't thought about enough. When you're in a space that is growing so fast, three months matter a lot; three months can literally double your run rate. You actually do need short-term incentives. So I'll give you two random examples. Our recruitment team sometimes gets these absurd bonuses if they hire, like, a thousand people. And a recruiter has never hired a thousand people in two weeks before.
29:52
That's not something that you train for.
31:25
Yeah, so if this completely outlier event is happening, they should get a pretty massive bonus. That's totally fine. The second thing is there are also these other outlier events. Sometimes we have a customer that is about to sign a really massive contract, say, and we might align incentives with them in some other ways as well, and there's one key person leading this. I might tell them: hey, if this closes, and therefore the company changes forever, you'll double your equity. Our general counsel and our CFO are like, what the hell is that? Why do you do that? But I think more short-term incentives are actually also very important, especially ones designed to incentivize you in the short term but also in the long term. In this case, if you double your equity because the company has changed forever because of the deal you're leading, you now have more equity for the long term. So I think this balance is quite important.
31:26
That's kind of interesting. It's a little bit like you do get that short-term bonus, but it's in the form of a long-term instrument.
32:37
Exactly. And the short-term bonus is also vested the same way the rest gets vested. So you know that you're about to double your equity and change a lot about your life if we succeed, but you still have that vesting, and the vesting actually restarts for that top-up. I think that's the key part of it.
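The top-up mechanic he describes, a matching grant whose vesting clock restarts when it is awarded, can be sketched roughly as follows. The grant sizes, cliff, and schedule here are illustrative assumptions, not micro1's actual terms:

```python
# Hypothetical sketch of the equity top-up described above: the original
# grant keeps its vesting schedule, while the top-up's vesting clock
# restarts at the month it is awarded. All numbers are illustrative.

def vested(grant, start_month, month, cliff=12, period=48):
    """Linear monthly vesting after a 1-year cliff, capped at the full grant."""
    elapsed = month - start_month
    if elapsed < cliff:
        return 0.0
    return grant * min(elapsed, period) / period

# A 1.0% original grant from month 0, plus a matching 1.0% top-up
# awarded at month 18 when the big deal closes.
month = 30
original = vested(1.0, 0, month)    # 30 of 48 months vested
top_up = vested(1.0, 18, month)     # 12 months in: exactly at its cliff
print(original + top_up)            # 0.875 (% of company vested so far)
```

The restart means the top-up only pays off if the person stays well past the deal, which is the short-term-incentive-with-long-term-alignment balance described above.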
32:43
I'd be curious to know how you came to this conclusion, because that's something I haven't heard almost anyone else do. I've seen people give cash compensation to people for pulling something off or going above and beyond, but I've never heard of a company founder specifically structuring individual comp so that, assuming deals go through, it can help massively.
33:06
I'm not sure how I actually came up with it. I think it's a bit of an absurd idea, and I also have not heard of anybody else doing it. I heard Jensen from Nvidia does something like this. When I heard that, it was very reassuring that I'm not completely insane.
33:34
But you were doing this for months or even over a year before you'd heard about this from Jensen.
33:49
I actually was. I was thinking maybe this is too insane of a thing to do, so I was searching, hoping somebody else does this, somebody that's obviously a great founder, like Jensen. And I couldn't find anyone other than Jensen, and his version was something similar, not exactly the same. So that was a bit reassuring. But I don't know how I came up with it. I think I just thought: hey, if this happens, our probability of long-term success really materially increases, but the short-term enterprise value that gets added is also pretty immense, so let's have this person get some portion of that. And the second thing is that we're in a very special space that has produced a lot of really fast-growing companies. Of course it's not just us; we have a lot of great competitors, and they're also growing incredibly fast, because of the immense demand there is for data. And I think when you're in this sort of outlier state, you should do these outlier things.
33:54
Have there been any other actions that you take that other companies and other founders don't, that you think are massive unlocks?
35:00
I think this idea of actually caring for your team's happiness, similar to how we do with our experts. The term we like to use is that we want to get to this mode of being sustainably hardcore as soon as we can.
35:09
Sustainably hardcore?
35:28
Yes, because right now it's not so sustainable. Right now pretty much everyone on the team does 13, 14 hour days, and leadership does even more than that. Basically, if you're awake, you're thinking about it. Literally every waking hour for almost the entire core team is work, and it's a great place to be. I mean, I work pretty much every second of the day, but I sometimes feel like I'm not working as much as some other team members, and I feel very honored to be in that place. So we have to keep that going for a long time, but we also have to keep in the back of our mind that at some point, hopefully in the next few years, we'll get to a more sustainably hardcore structure. A big part of that, even now, is that we actually don't enforce working weekends. It sounds a bit soft, but I would bet our team works more weekends than most teams, because we don't enforce it; we try to inspire it. The leadership works basically every weekend, late nights, and so forth. And there are pipelines that really need you on the weekend, so you will be there, and the incentives are there for you to be inspired to work weekends, and to do so every weekend. So we don't put in these arbitrary constraints of forcing folks to put in a certain number of hours, like this idea of 996 and so forth. I actually don't buy that. I think that's ridiculous and sounds really lame. I think you instead should go until 12am, not until 9pm, because you've inspired it, not because you forced it. And I've heard from a lot of my founder friends about some other founders that force this sort of thing, and it results in a lot of really unhappy team members, and that doesn't last.
35:29
How do you design things so that you do create this culture? This reminds me of sleeping with the troops: you're in the same tent as everyone else, sleeping on the ground. I think the best form of motivation is just seeing the leader working, and if they're going through hell, you can kind of sign people up to do the Ernest Shackleton thing: let's travel into the great unknown and be cold and be wet, and maybe there's success at the end of it, and if there is, there's glory. But how do you design your own life so that you're creating that natural feeling inside of people?
37:36
The way I look at it is that the hours the founder puts in are sort of the max that the company will put in broadly. Of course, again, I'm very grateful to be in a place where there actually are some team members that put in more, but generally the max is what the founder puts in. So if you put in 12 hours a day on average, that's certainly the max that people will put in. So the first thing is you just have to grind really hard. The second thing is you have to be in the details. I'm now going through this phase, and so is the rest of the leadership, of balancing being in the details with doing actually useful things for the company. But if you're outside of the details too much, then for the folks making the day-to-day decisions that really matter, for any pipeline we're working on, or any recruitment funnel, or whatever it is, you actually won't have enough context to give the right opinion when things get stuck. So you cannot be abstracted too far away from the details, and I try to stay in the details as much as I can. And I think there's a sense of respect that everyone has for each other because everyone's hands-on. We had our CFO join us, and he's running payroll himself and answering payroll questions from experts himself. We had our general counsel join, and there are no other lawyers; she's doing everything. So every leader needs to be very hands-on. Of course, in the long run you have to converge to maybe 20% of your time being hands-on; you can't realistically be hands-on your whole time. But having that 20% really be there, and not going below it, is really important to the way that we function.
38:18
Yeah, there's this line that I love, which is: if you're going through hell, just keep going. What's been the biggest moment where you and the team felt like you were going through hell?
40:09
There's this one time, a couple years ago, when we had a massive customer that was almost 50% of our revenue. I'll keep the details a little vague, but we essentially lost this customer, just like that, and it had to do with some external factors. I remember the moment we lost this customer: I was in the elevator going up to pitch some investors. Perfect timing. This was during one of our early rounds; there's momentum, we're about to close the round in a huge manner, and I'm going in to pitch this great firm. And it's a partners pitch, so I'm a bit nervous. It's not really a conversation; you've got to just pitch the whole time. And I look at my phone and there's this email that we've lost this customer, literally as the elevator opens. And it's like, why does it happen like this? Like a movie scene. So I read that email, and then I go in and pitch anyway, and I'm about to faint during the pitch because we just lost the customer. But I put on a face anyway and do the pitch, and the pitch goes well. But obviously the revenue is not the same anymore, so we have to correct that. I remember leaving that pitch and just walking endlessly. I'm in the middle of Palo Alto, and I put my laptop on the ground somewhere, open, just on the sidewalk, and I just start walking around and doing random stuff. It was a very painful moment, as painful as it gets. We had to lay off a bunch of people, and we thought we might not survive it. 
We were very close to not being able to make payroll and all the rest. And I remember telling myself, if we get through this, this would be a great story to tell. Glad we're telling it now. And it would really give the team something to bond on in a way that's just really intense. Now, in retrospect, I'm glad that it happened this way, and I wouldn't really change a thing about it. And of course there have been a lot of other moments since. But we did pull through, thankfully, and now it's a moment we think back on a lot.
40:19
This kind of reminds me: I think during COVID, Airbnb's revenue dropped by 80% over the course of something like eight weeks, and there were headlines asking whether Airbnb was going to survive. I remember that before COVID, Brian Chesky and Airbnb were trying to go in all these different directions, and what ended up happening was that the existential crisis allowed Brian to basically say: no, we're just going to focus on this one thing, which is the stays. And that's what they did for three or four years after that, and it eliminated all this distraction. Did anything come from that for you? Did you guys make different decisions going forward?
43:23
Yes. And by the way, I know exactly what you're talking about. I watched the podcast where Brian Chesky was talking about this, and I think that podcast had the most impact on how I think about product and product roadmaps: this idea of having one roadmap that Brian Chesky himself approves every module for. We try to do that as much as we can. The way he explains it is that each of these features affects millions of people that are having these experiences, and approving one takes him, like, 30 seconds; it actually does not take a lot of time. And the design and engineering and the impact that each of those features will have is so large that he should spend time approving every single one of the modules that go into Airbnb. And I would argue Airbnb is one of the most delightful experiences, in terms of the user experience and interface of the app, but also in terms of the product itself. So that had a lot of impact on me. But to your question: when the situation happened, I got into this mode of actually not wanting to take risk. For a few weeks, I started to be a bit risk-averse.
44:03
And was it just like a state of fear?
45:19
It was a state of fear. Not to get into the details of what happened, but part of it was because of an action that we took, though it was mostly external, and in retrospect we found out that it was actually entirely external. But we'll tell that story later. So I got into this mode, for a few weeks, of being very scared to take actions, and I got very afraid that, shit, this might change the way I operate. Because I like to be very risky; I like to set up these initiatives and take a bunch of bold moves. Like the other day, we set up this really crazy incentive structure: if we hit this wild run rate by April, the team gets, like, 50% of their total comp as a bonus. These wild things. And I probably should have had that approved by, you know, general counsel, maybe the board, and so forth, but I just went with it, and I do a lot of things like this. In those few weeks right after this thing happened, I felt like I was about to get into this mode of never doing those things again, of being very risk-averse, like a managerial-style CEO. And I remember very explicitly that I spent two days thinking about this and convincing myself that if I go into this mode and don't change it now, it will be the worst possible thing for the company, for myself, for my career, for everything. And I decided that I would force myself to have the same level of risk as before, and in fact even more than before, right away, because this simply can't happen. That was the biggest pivotal moment where I had to make this decision, and I'm glad I did.
45:22
I felt the exact same thing. There have been many moments in my life where I love risk, but I love risk that I can control. I think our brains are naturally designed to fear loss about twice as much as we anticipate gain, so you have to counteract and correct for that. Which means that in all likelihood, for any decision you're making, you're actually over-indexing on the risk of the decision, when in fact there's probably a lot less risk than you think, and the real long-term risk may be in not making the call at all. How did you see that manifest over that couple-week period where you were going into a shell, almost like a turtle, and then figuring out: no, I can't do this?
47:13
I remember that when I would make even small decisions within the company in that two-week period, I felt this sense of: maybe I'm actually not the best one to make these decisions. And I started to think, again, very managerially: maybe I should start to hire these more professional folks to help me make these decisions.
47:56
People that know what they're doing.
48:17
Exactly. And have maybe more experienced folks join us, because the strategy we've taken is to hire really recent grads and inexperienced people and put them to work, and that has done really well for us. But I started to change this model a bit, briefly. And I think the thing that helped me get out of it pretty fast is that I came to the realization that the company depends on it: the founder's job is to inject as much risk as they can into the company, because really no one else will.
48:18
You mean upside risk?
49:00
Yeah, I mean bold moves that have a pretty bad downside too, but if it works, it works well. And I think the reason other folks in the company naturally don't do this is that they don't want to get laid off; they want to continue working at the company. So really, the only folks that can't get laid off are the ones that can inject the most risk into the company, and if they don't, no one else will, and you'll just have a company that is not risky and doing some basic stuff. And obviously, if you're not taking risks, you're not going to grow. So I came to that realization that it is my duty to inject risk into the company. I have to do it; no one else will. The second thing was this: if you think about risk taking and you think about the worst case scenario, anything you do has a worst case scenario that is really bad. Us driving here today, God forbid, has the worst case scenario of us not being here today. You can't think about that risk as you decide to drive here. If you're making a business decision, in almost every case the worst case scenario is that you're going to go bankrupt. And so if you're assessing the worst case scenario every time, it's just not a helpful data point. You have to think about the expected value, and you have to think about the distribution of probability that exists within the decision you're about to make. There's obviously, again, a long tail of really bad outcomes, but they have very low probabilities. And if you can come up with this intuition, there's no math involved here, just this intuition of what the distribution of the risk looks like, and come to a decision from that, I think that framework allows you to be bold.
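The framing above, weighing the whole distribution of outcomes rather than fixating on the single worst case, amounts to a probability-weighted sum. A minimal sketch, with entirely hypothetical outcomes and probabilities:

```python
# Illustrative sketch of the expected-value framing described above.
# The payoffs and probabilities below are hypothetical, not from micro1.

# A bold move: small chance of a near-fatal loss, real chance of a big win.
bold_move = [
    (0.05, -10.0),   # worst case: company-threatening loss
    (0.45,  -1.0),   # it fizzles: modest wasted effort
    (0.50,   8.0),   # it works: large upside
]

# The "safe" default: no tail risk, but capped upside.
safe_default = [
    (1.00, 0.5),
]

def expected_value(outcomes):
    """Sum of probability-weighted payoffs."""
    return sum(p * payoff for p, payoff in outcomes)

print(round(expected_value(bold_move), 2))     # 3.05
print(round(expected_value(safe_default), 2))  # 0.5
```

Judged only by worst case, the bold move looks far worse (-10 vs. 0.5); weighted by probability, it dominates, which is the point being made.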
49:02
Jeff Bezos has even talked about this a little bit, where he basically says the role of an executive, especially, is to make a few really good decisions a year. He even says that if there's a decision late at night that's going to have a massive impact on the company, he'll just delay it until the next morning and make it a 10am meeting. Do you have the same kind of philosophy or framework, where if there's something really big you have to decide on, you hold off until the next morning? Maybe it's 12am and you've been up for 18 hours. How do you decide what to take action on versus delay?
50:46
Not to disagree with Bezos here, but I actually don't do that. If an urge to act on something comes about, I try to do it as fast as possible. Partly to just move fast, but more importantly, frankly, to not think about the action too much. Because a lot of the risks that we've taken, whether it's as simple as DMing a customer PoC about some new pipeline they may be interested in, or this crazy bonus structure we set for a milestone we have in April, those things happen best when I get the urge to do them. Of course, I then have to come up with the actual plan. And if it's something as simple as DMing a certain customer, if I decide to do it the next morning, I might say: okay, maybe we do it in a week; maybe it's actually not the best time to reach them, and so forth. But what I've realized is that almost all of those actions end up doing something good for the company, and they often happen very late at night, right away when I get the urge to do them.
51:21
I think this comes back to training your intuition. You have to figure out the right people and models to run through your own head, so that when you do get this piece of information, where you can take this action right now and it has some massive impact, whether upside or downside, you can decide, and decide quickly. How have you gone about training your intuition over time?
52:25
It's funny you ask this, actually. I think a lot of folks, even newer folks at micro1, think that I'm very analytical, and actually I'm not at all. We have folks in the company that force us to set quarterly KPIs and so forth, and we do a little bit of that, because there are a lot of roles you do need it for, like sales. But the way I actually do it is I try to set structures that are a bit more qualitative, and I try to use my intuition as much as I can as I assess whether someone has had an exceptional outcome in the quarter or not. My philosophy here is that when you dumb down someone's 12-hour days, every single day for quarters straight, to a few KPIs, you may optimize for the wrong thing. And for a company that is growing fast, those KPIs will likely have to change pretty much right away, within a week; you have to keep changing them, and you end up spending a lot of time just changing KPIs. So one thing we've done, not for everyone on the team, but for most folks whose role is very directly related to revenue, is a revenue override: if we hit our crazy ambitious goals for the end of the quarter, your KPIs actually don't matter, and you should just do what's best to increase revenue and build a great product. So I try not to distill these things into this idea of being analytical and having these very key KPIs, because I think the qualitative approach actually results in people doing the things that are most important for the company at any given time, versus optimizing for their own arbitrarily set KPIs. 
I also look at it like this: the human brain, this idea of intuition, is basically a really large neural net that is considering many KPIs at once as it comes to a decision. But if you're looking at three KPIs, you're reducing the number of features by so much, and why would you do that? That's the way I think about it. And of course, you do need KPIs for a lot of roles, and you cannot just eliminate them completely, but I think jumping to structured KPIs too quickly would be a mistake.
52:43
Did this evolve over time, or did you initially try to go in the direction of being more analytical and then move back from it?
55:20
Initially I wasn't, in terms of determining who's performing and so forth. And then I wondered whether I should be, because that's how every company does it. So we did for some time, and then we went back to not being so much like that. I had this urge of: this is the norm, everyone does it like this, maybe we're doing it wrong. But then we went back, and I think for the state we're in right now, it works well. But I think long term you do need to go back to KPIs, because you'll have a lot more specialized roles where the KPIs actually don't change quarter over quarter, and you can just quantify the role with three or four numbers and leave it at that.
55:29
There have not been very many companies in history that have grown as fast as this, which means there's not exactly a playbook for how to react. How did that growth feel internally? How did you make decisions during that 12-month period when you were growing 30x?
56:12
Yeah, it's very, very intense. I feel very stressed and grateful is, I think, the way to summarize it. And I have to be very flexible in the way that I work and be okay with the fact that my day-to-day will change very quickly. Think about great companies: they 3x in a year, 4x, sometimes 10x, and these are really exceptional growth rates. A company 3xing year over year has three or four years, whatever it is, to get to 30x, so they have time to iterate on their work structure and their day-to-day. But when you 30x in a year, you don't have that time. Every month or two, what I should spend time on changes, and I'm always asking: am I spending the right time on the right things? I'm constantly questioning this, and it's the most stressful thing. Like when I'm in a certain thread and I read it for 10 minutes in Slack, and at the end I'm like: okay, a good decision was already made and I just replied "great"; nothing changed about the state of the world. And I do this with many different threads within Slack, and I realize: okay, maybe I should actually abstract myself slightly away from some of these details. Things like this happen every few months, and I have to be very flexible with my work structure and okay with changing it very rapidly. 
And so does everyone else in the company. There are folks that are very much ICs, hands-on, and obviously they have to be in the details, and then the team grows so fast under them that they have to become a leader within two weeks of joining. Things like this are really abnormal. So a lot of folks in the company, pretty much everyone, have to constantly change the way they work. And it's really difficult, but it's also fun.
56:32
How did you think about which areas were the highest-leverage points to focus on at any given moment over the course of that change from a recruiting business to an experts business?
58:44
Yeah. In the early days it was doing everything: being in the code base, doing sales, and all the rest. Then, within the first few quarters of the company, it was being a little less in the code base, hiring a good engineering team, but continuing to do sales. Even now, a main focus for me is still being an account executive and doing sales, because we don't have that many customers, so we need to. And it's obviously really fun to meet these exceptional researchers and have these conversations with them. So a lot of my time right now is well spent doing sales, and doing it in a very non-salesy way. We just hang out with these folks and get dinner.
58:55
The best sales isn't transactional; it's just a relationship, and an evolution of a relationship.
59:43
Exactly. Our CRO said this yesterday, actually, and it's a good phrase: we take a very human-first approach with our experts, and we do the same with our clients. We go to dinner with them and don't talk about anything related to selling them anything. And sometimes, it's funny, they message us after we've done this a couple of times: hey, why didn't you sell anything during that dinner? That was so nice. And that results in more expansion. So that's the approach we take on both ends of the market. But what I spend time on now is mainly the product roadmap, again trying to productize our operations as much as we can; fundraising, we'll do a little bit of that soon; customer calls; aligning incentives; and of course being in the weeds with the team as much as I can, because that is a very important part of what I do.
59:47
How have you thought about when the right time is to go into a new vertical and take on some new challenge, versus making sure that the thing you're currently doing is an 11-star experience, in the sense of Airbnb?
1:00:47
It's a really hard balance, especially when you've raised some money. There are a lot of experiments you could run. You can just assign some amount of funds to some random project, some random app, and it might go well, and you can silo a team to work on it. And it's fun to do, because it's a new idea, a startup within a startup. But that's something we try to avoid as much as we can. In fact, the team tells me to avoid this; I think I do a bit too much of it. I try to go on these side experiments, and we have our CMO, Daniel, who pushes back on us a lot, which is a good balance. The other part is that, for the kind of data infrastructure we're building, we naturally do have a few pillars to the product. I wish it were like Airbnb: one roadmap, one application that serves customers. It's more of an infrastructure play with a lot of components to it: the recruitment part, the data platform, the RL environments that we create. So naturally we do have to have a few different engineering teams with different roadmaps and so forth, but we try to limit the number of new engineering teams as much as we can, keep everything in one platform to the extent that it's possible, and reduce the number of new subdomains that get created.
1:00:58
When you have a dinner with a customer and just talk with them for a couple hours, have you been able to figure out beforehand where the puck is going, you know, skate to where the puck is going, and then plan a couple months in advance that you're going to have to get there to help them? Say four months in advance, or six months in advance.
1:02:30
The short answer is: sort of. We're in an area that is very frontier, of course. The research our customers do is exceptional research with a lot of very risky hypotheses that don't necessarily work out every time, and these labs are spending many billions of dollars on it. So it's harder for a non-lab to hypothesize the same things and predict what it will be. But we try to do as much as we can on these proactive pipelines that we create, and one of them is actually the robotics one. I think it was four or five months ago, and there was not much research into it, but I basically guessed that human-demonstration, egocentric data will probably work. My intuition was that you can't really scale up teleoperated robots; you need to figure out a way to map real humans doing things to robots learning from them. And if the robotics labs don't figure that out, I don't think robots will work. So we decided to start this pipeline with 100 people, to begin with, doing these household tasks and recording themselves doing them. And then a few months later, and this is obviously public, we saw Physical Intelligence come out with a paper showing that egocentric data is actually extremely useful. And we're like: shit, we also guessed that, we don't know why, but great. And we scaled the pipeline up from 100 to 3,000 right away when that happened. So we do make these guesses, and I think a lot of it is based on the intuition we have from the pipelines we run. But it's tough to get it right, so there's a lot of data we create that ends up stale because it's not actually useful.
1:02:50
Yeah. How did you handle that? When you realized you needed to scale up the team by like 30x to collect this human data, how did you actually do that? What did that look like internally?
1:05:03
There's a lot to think about in terms of scaling that up. The first is: can our recruitment engine, which works so well for experts, be applied to generalists? So we first had to build an environment that allows generalists to be vetted well, and part of that was having them record a video similar to the tasks they're going to do on the job and seeing if they do it well. Another part is that — obviously we really optimize for everyone who goes through the funnel to be happy — but in a massive pipeline like this there are going to be folks who are unhappy. So when the pipeline is so large, there's a branding question we have to think about as well: how do we make sure there aren't leaks when 3,000 new people are joining this pipeline within two weeks? We have to prepare on the marketing side, basically. There's a bunch of things like this that we think about, and the funny part is there's no time to think about them, because the customer says, hey, we want this much data — and it's a very nice contract, somebody will do it, and we want it to be us. So we have to do it and figure out all the rest that comes with it.
1:05:17
You talked about there being a couple of moments where you promised speed that, after the fact, you realized was just impossible. Have there been other moments where you wanted to move faster than was possible and had to reevaluate things — or the opposite, things you thought would take two or three weeks or a month that you were able to do in much less time?
1:06:40
Yeah, there's a lot of that. What we try to do now is build kind of a prediction model for timelines. It's funny — right after that mistake happened with the person I told you about, who ended up leading that account partly as a result of their mistake, we built this internal, pretty basic but important model that predicts timelines, and it allows us to be much more accurate. But it's still very messy, because the data structures are very different in this space. There are data points that take three minutes to create — some sort of preference labeling for image or video annotation, whatever it is — but there are also data points that take like 50 hours to create. Maybe it's a tax expert simulating the full journey of filing someone's taxes in California, which is a complex thing to do, especially with the new bills coming out. So the range is very wide. We first have to determine the actual time per task, per data point that's going to be created, then extrapolate from the timeline the customer is looking for how many experts we need, and then work backwards: can we actually hire that many experts in the two days that we have? If it's feasible, then we say yes.
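The work-backwards estimate described here can be sketched in a few lines. This is a minimal illustration, not micro1's actual internal model; the function name and all the numbers (including the productive-hours assumption) are made up for the example.

```python
import math

# Sketch of the work-backwards staffing estimate described above.
# All names and numbers are illustrative assumptions, not micro1's
# actual internal model.
def experts_needed(num_datapoints, hours_per_datapoint,
                   deadline_days, productive_hours_per_day=4.0):
    """How many experts must be hired to produce num_datapoints
    within deadline_days, given the time each data point takes."""
    total_hours = num_datapoints * hours_per_datapoint
    hours_per_expert = deadline_days * productive_hours_per_day
    # Round up: a fractional expert means one more hire.
    return math.ceil(total_hours / hours_per_expert)

# A 3-minute labeling task vs. a 50-hour tax-filing simulation,
# both due in two weeks:
print(experts_needed(100_000, 3 / 60, deadline_days=14))  # → 90
print(experts_needed(200, 50, deadline_days=14))          # → 179
```

The point of the sketch is the asymmetry it exposes: the same two-week deadline can mean hiring dozens or hundreds of people depending on time-per-data-point, which is why that number has to be estimated first.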
1:07:06
As companies scale, typically what happens, especially in extremely fast-growing companies, is you experience this massive need to hire. Airbnb had this problem where they overhired and had a whole bunch of people working on all these different functions, to the point where Brian Chesky didn't even know what everyone at his company was doing. And I think it's incredibly difficult to stop that natural short-term incentive and make sure the people you're hiring are all focused on the same objective. Over the last 12 months, I believe you only scaled your team from 45 to 80-plus — not that much; it doesn't track revenue growth at all. Pavel Durov is another example: he built a billion-plus-user company on the back of about 30 people at Telegram, and I think they generated a billion dollars in revenue last year. How do you think about deciding when to hire and when not to?
1:08:38
Yeah, so I think Durov is obviously an incredible example of this, and I think it's hard for any company to aspire to be even remotely close to that — it's just an exceptional outlier. But what we try to do at micro1 is keep a lean team; we have this lean-team philosophy. Like you said, in 2025 we went from 35 people to about 60 or 70, and now we're at about 80 or so. The approach we've taken since the beginning is: hire when you absolutely need to. And you really have to build this into the culture of the company, because oftentimes hires don't come from me or other execs saying we need to hire — it's more a recruitment manager or an operations team that thinks they need more bandwidth, and oftentimes we don't have sufficient context to argue one way or another; they can make a very good case for needing to hire. So if you don't build this into the DNA of the company early on, everyone will trend toward hiring every time there's a problem. But if you build it in from the beginning and make clear that we basically can't hire at micro1 unless we really need to —
1:09:33
It's like the last straw.
1:11:05
Exactly. Then the cases are made only when they're truthful and when they just cannot do anything else. So that's one thing we've followed and continue to follow to this day. The caveat, though, is that in the middle of 2025, maybe towards the end, I decided to loosen this just a bit and we started to hire more — the majority of the 40 or 50 people we added in 2025 actually came in the last quarter or so of the year. And this wasn't a bad outcome either. We were able to scale up a lot faster and sustain the 20-30% month-over-month growth we were having. So I think when the company is having this breakthrough growth, being a little more flexible and allowing a few more hires is also an important part of the lean-team philosophy. As long as the hires are nowhere near proportional to revenue growth, I think you'll be fine. So that's kind of the approach that we take.
1:11:07
On that same hiring point, what's the process in between someone thinking they have a function they need to fill and actually finding someone to do it? What do you go through before deciding to hire someone new, versus saying: you have more leverage now, figure out how to do it yourself?
1:12:22
Yeah, so the first thing is we see if we can actually delete the thing they're constrained by.
1:12:42
Like question the requirement.
1:12:49
Yeah, exactly. There's a recent case of this: a frankly bad process of sending out contracts to experts. It required a lot of human involvement, and so one of the teams doing it had to hire a bit faster than others. I looked into why this team was growing so quickly, and we realized we actually just should not have this requirement of signing contracts in this way. We spent about a week with the engineering team to automate that function, which of course freed up the team's time for much better things, and we didn't need to hire for it at all — we slowed that hiring right away. So there's a lot of questioning the requirements, as Elon puts it really well, that has to happen. And then if the requirement must be there and the function must exist, really dive in and see whether folks are actually not getting enough time to do it, or whether they're busy with some other coordination that exists and aren't able to do the core function they're supposed to do — and try to delete those other functions so they can focus on the one thing that must exist.
1:12:51
When things are growing as fast as they are right now, how do you plan a few months in advance, or even a year or two out, and try to predict where the business will be — how can you build everything today so that you're ready when that point hits?
1:14:09
Honestly, we don't plan. What do I mean by that? We try to do a little bit of planning, and then we end up having to replan, similar to how the KPIs have to be readjusted so many times. Eventually we'll get out of this state of complete-outlier, 30x year-over-year growth; it will normalize a bit, and that's the simple truth. At that point there will be more typical planning and budgets and so forth. But right now, when the board asks, what's the budget planning for the year — I haven't even looked at what that means. We actually don't even know how to structure a budget yet; we'll look into that later. We've been profitable, so the budget is fine. Once we become not profitable and can plan a bit more, then I'll look into what it means to set up a budget and we'll do it at that point. So there's not much planning that happens.
1:14:25
Yeah. I kind of love that — not many companies, especially this early on, get to a point where they're not only default alive, where you have enough cash in the bank to get to profitability, but already there. What does that enable you to do, just psychologically, day to day, when you're thinking about business decisions? You have a great business and it's growing really fast — how are you thinking about growing it even further after you've reached that point?
1:15:22
Yeah, so obviously we're very grateful to be in this position of profitability. We've been profitable for a while now — pretty much all of 2025 was profitable — and that actually made us net profitable historically, meaning we haven't touched the money we raised; we've added to it. So it feels really good to be in this position, to be able to determine our own fate and not have to raise money if we don't want to. But at the same time, there's a lot of good stuff to spend on, and part of it is this idea of proactively building pipelines. For material revenue to come from those pipelines, the cost basis needs to be high — if we want a pipeline to give us tens of millions of dollars in revenue, the cost basis needs to be on the order of tens of millions of dollars. That sort of spend can only happen if we have a really nice cash cushion, and that's why we're going to raise. But we'll still try to be very capital efficient: basically the only line item on the P&L that will grow really fast is R&D, which again is this proactive data spend, and everything else we'll try to keep as stable as possible. I think there's a chance that even after our next raise, and after spending a lot on these pipelines, we may stay profitable — and it won't really be the goal. I think it's wrong to fully optimize for profitability right now, but it will be the byproduct of having discipline within the company and this insane growth. So we'll look at it as a bit of a side aim, but not a full focus.
1:15:51
Is this a little bit like Google, where DeepMind could not do what they're doing if Google didn't have this massive cash cow in search and ads? Because of that, they're able to take all these other bets — DeepMind works on completely different things than other AI labs, like protein folding. And part of the reason they're able to do that is this massive cash-generation engine which they know is going to be there pretty much forever. How has that allowed you, or how will it allow you, to take different bets?
1:17:49
Yeah — obviously DeepMind is an incredible company, and there's no way we can compare ourselves in any way. But there's this idea that having a large cash cushion gives you the flexibility to take on big, bold bets. We believe this will be a multi-trillion-dollar-a-year spend market on human data over the long run. Of course, there's a chance that's wrong — a chance the data business actually doesn't work. If we have that cash cushion, we'll do everything we can to make it work and be the biggest winner in the space; but if it doesn't, we also have time to make sure the current state of our product gets applied to something else. So that cushion — comfort is not the right word — allows us to take the bold bets required to hopefully build a multi-hundred-billion-dollar company.
1:18:25
You posted a blog post probably two weeks ago now where you basically said: here's why training data and human data is going to continue to be valuable over time. Do you want to walk through what that actually looks like? Why can human data be a trillion-dollar industry long term?
1:19:20
Yes, I have a lot of thoughts on this, so cut me off if I'm talking too much. There are a bunch of things. The first is this notion of the last mile in AI — and the first point is that the last mile in AI just doesn't exist. The reason is that the perceived strategy of the labs right now is that they're trying to automate and optimize the last 10% of capabilities. In some ways that's true, but in other ways, the capabilities that exist currently won't be the same ones that exist in the future. The function space of the economy expands rapidly with every technological revolution, but especially with this one, because humanity will have its time freed up on, hopefully, all of the current functions it performs — meaning all of the current functions of the economy will be automated over time. Of course this won't be instantaneous; it'll be a very long iteration. As that happens continuously, human time gets freed up in every domain, and people get to spend time on things that are more creative and more fun. What that results in is net-new functions created within those domains. The example I'll give to make this concrete is an area we're very familiar with: recruitment. We believe micro1 has the most powerful recruiters on the planet right now because of the agent we built — they're able to hire hundreds of people every single day. Their function as recruiters doesn't look the same as any other recruiter's at all. They're still recruiters; that job is still there, and it's now way more impactful. But the tasks they do look fundamentally different.
And it's a lot more fun for these recruiters to work on the tasks they do now versus the tasks typical recruiters do.
1:19:37
It's almost like the farmer pre- and post-Industrial Revolution, where you're actually on a farm versus just controlling a bunch of tractors.
1:21:52
Yeah. And you can 100x that across every function in the economy if intelligence actually does become, quote-unquote, commoditized. So now take this recruitment example, and take what's happening literally today at micro1, which is that our recruiters are coming up with new, really impactful things to do that are now part of their function space, if you will. Now we're going after and automating those functions, and this loop will continue. One specific example in this recruitment function space: because our recruiters don't do any interviews anymore — our agent does all of it — they're able to spend time on creative sourcing strategies, where they create these fun, almost marketing-style campaigns. They're almost doing a marketing job in a recruitment context, which recruiters would not have done before. So this is a net-new function that has been created, and it's a lot more fun for humans. We'll automate that too, but then there'll be nuance. So that's the first thing: there is no last mile. As humanity comes up with new functions, we will have to get structured human judgment on those net-new functions in order to automate them. The second thing, which is maybe an even bigger reason, is that the labs — and everyone, broadly, in the US and pretty much all around the world — are spending a lot of money on compute buildouts, and of course on algorithmic efficiencies like hiring researchers, but mainly compute buildouts: hundreds of billions of dollars, maybe a trillion dollars at this point. They're betting on future inference. As Jensen says a lot, inference is going to million-x or billion-x or something. For that to happen, and for the economy not to collapse entirely because of all these buildouts, we need to unlock a lot of new capabilities for models.
With the current state of models, that inference will not be used — only a very small portion of the inference being bet on would be used. So we must unlock a lot of new capabilities, and the way to do so, again, is structured human judgment in each of the domains where we're trying to unlock AI capabilities. There's no other route. The third thing is that as synthetic data becomes more relevant and useful, every human data point becomes a lot more valuable. If you think about current pipelines, pretty much every one takes some amount of structured human judgment and extrapolates it by a lot with synthetic data — not every pipeline, but in most there's some notion of increasing the data points by orders of magnitude, sometimes a thousand-x. If synthetic data generation becomes even better and you're able to million-x that, and the model can train on way fewer human data points, that's actually the greatest thing that can happen to our business and this human data market. It's maybe the most basic economic principle: if something is more valuable, there will be a lot more demand for it, and so the spend will increase by orders of magnitude. So we hope synthetic data continues to accelerate in capability, and we actually want to contribute to that as well. These are the three main things that will result in this massive market. Actually, sorry — the last thing I'll say: take one example, lawyers. What lawyers do in their job is create basically unstructured data for their law firm all day, right? They redline some random contract, they get paid for it; they do some M&A, they get paid for it.
It's a lot of unstructured work, which obviously is very useful for the economy — clearly. But you have to wonder why lawyers, as just one example, are getting paid more to work at micro1 than at their law firm. They're getting roughly 20% more. And of course it's us paying them more, but it's the economy that allows for that.
1:21:59
The value that they're creating is just literally higher.
1:26:25
Exactly. And specifically, the structured redlinings they do, the structured M&A tasks they create — the economy has determined those are more valuable than the unstructured work they do for the law firm. One natural argument is: okay, so why don't they just spend their whole time on that, then? But you can't, because some percentage of people's time needs to actually run the economy until we automate it. So there needs to be some equilibrium between time spent on structured value creation and unstructured work. But you could infer from this argument that if, in basically every domain, some percentage of time is spent on structured human judgment and human data creation, then over time basically the entire economy will spend some small percentage of its time on human data. So you could take a percentage of the entire labor market and count it as human data spend. Even if you take 5% — and I did some math on this; I think 5% is reasonable — 5% of $50 trillion a year in spend is $2.5 trillion a year. Then discount it by a lot, because maybe not all of it will be recognized as spend and a lot of it will happen in less formal ways, and you can still make a pretty clear argument for $1 trillion a year in human data spend over the long run.
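The market-sizing arithmetic in this argument fits in a few lines. The 5% share and $50T labor market are the speaker's figures; the 40% "formalization" discount is an illustrative assumption chosen only to show how the stated $2.5T gets hedged down toward $1T.

```python
# Market-sizing arithmetic from the argument above.
# The 40% discount is an illustrative assumption, not a stated figure.
global_labor_spend = 50e12   # ~$50 trillion/year global labor market
human_data_share = 0.05      # 5% of work time on structured data
formal_discount = 0.40       # fraction actually recognized as spend

raw_tam = global_labor_spend * human_data_share   # ~$2.5T/year
discounted_tam = raw_tam * formal_discount        # ~$1.0T/year
print(f"${raw_tam / 1e12:.1f}T raw, ${discounted_tam / 1e12:.1f}T discounted")
```

The structure of the estimate matters more than the exact discount: any sizable haircut on $2.5T still leaves a market measured in hundreds of billions to a trillion dollars a year.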
1:26:28
This almost reminds me of the SAFE. Before the SAFE was invented, there were all these very complex instruments that startups were funded on, and then — I think it was Carolyn Levy who invented the SAFE — it suddenly enabled founders to raise a huge amount of money in a very short period of time, simply because there weren't as many steps in the process; it didn't cost as much and it wasn't as complex.
1:27:58
Yeah. In that case you would maybe argue that because it's so much easier to raise, and maybe a little easier to invest too, there would somehow be fewer VCs — but certainly not; there are a lot more. And I think that has a good impact on the economy. So yeah, it's a similar argument — the principle of Jevons paradox.
1:28:23
I want to spend a little time on something you mentioned: you're getting more and more data points that are long-horizon tasks, where you try to have a single person spend a week or weeks on a single data point that's fed into a model. How has that evolved, and how did you come to that conclusion?
1:28:49
If you think about the current state of models, they're very good at answering complex questions in pretty much every domain — no matter how complex or niche the domain is, they will answer your question. You could put some broad accuracy on those answers of, let's say, 90% generally, and they continue to get better; I think we'll approach even higher accuracy broadly. But then think about models doing tasks, and what it means to do a task. For a human, doing a task is essentially answering a bunch of questions in a row: the first step is planning it — what is the plan for this task? — and then you take an action, which is really answering the question of what the next action should be, and so on. You're essentially answering a series of questions and then making some movements to act on them. So what models have to do is answer a bunch of questions in a row. Answering one question with 90% accuracy is a good outcome, but if you have to answer 20 questions in a row to complete a task, you get 0.9 to the power of 20, which is about 0.12 — the task gets done correctly only about 12% of the time. And that's obviously horrible. This idea of compounding errors is why models are not yet good at doing much. Even in applications like coding, where they're obviously making a lot of impact — if you ask Cursor to go back in the conversation and check something you asked a few turns ago, it struggles. So multi-step, very long-horizon tasks are what models continue to struggle with.
So the way to get them to not struggle is by creating training tasks that aren't just questions and answers, but actual tasks with very long horizons. One example: look at the domain of taxes. Currently, if you ask anything about, say, W-2 California taxes, you'll get a great response, but there isn't really an agent that will file your taxes. So one of the RL environments we're building essentially simulates the full end-to-end process of filing someone's taxes, which is certainly complex. It's not just about the final tax form you fill out; it's about first getting the right information from the customer. The customer will probably say, hey, can you do my taxes, and not send you any information. Then you say, hey, can you send your income for the year, your bank statements, and a few other things. They'll send you half of what you asked for, so you'll have to ask again, and then you'll probably need more information because they have capital gains and so forth. So the first step is a bunch of tasks just to gather the right info. Then, still in the context of taxes, you have to have a conversation with the customer about optimizing their taxes — you certainly don't want to just take the information and file. You might say, hey, maybe increase your expenses, or sell some stocks with realized losses to offset the realized gains, whatever it is. So there are a bunch of questions to be answered in that conversation, and then a bunch of other steps.
And then ultimately you file the taxes: you take all the previous state that exists and fill out the form that's sent to the government. If you don't have a very long-horizon task that gives partial rewards for each of those states before the final action of filling out one's PDF, you will fail quite badly. So that's the approach we're taking now: very long-horizon tasks that simulate end-to-end workflows.
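The compounding-error arithmetic from earlier in this answer (90% per-step accuracy over 20 steps) can be checked in a couple of lines:

```python
# Compounding errors: a model that is 90% accurate per step
# completes a 20-step task correctly only ~12% of the time.
per_step_accuracy = 0.90
steps = 20
task_success = per_step_accuracy ** steps
print(f"{task_success:.1%}")  # → 12.2%

# Long horizons demand very high per-step accuracy: even 99%
# per step yields only ~37% success on a 100-step task.
print(f"{0.99 ** 100:.1%}")  # → 36.6%
```

The second line is why long-horizon reliability is a different problem from question-answering accuracy: per-step accuracy has to approach 1 as task length grows.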
1:29:09
A lot of people thought 2025 was going to be the year of agents, and that didn't really happen — like you said, because these are very long-horizon, complex tasks, and it's very difficult for models to understand all the steps needed to actually execute and make something happen. How do you think agents will get to the point where we have super useful entities that can go execute your will?
1:33:22
Yeah, I think you're exactly right. There are a couple of really good use cases — coding, customer support, and a few others — that have been working very well, but realistically there hasn't yet been huge adoption within enterprises, or just broadly. Part of what people argue is that there's just this lag in the economy in adopting new technology generally, and that's partly true, but I would say it's mainly because this notion of evals is not yet built into enterprises. The first part, of course, is improving the foundational models on these long-horizon tasks. But the second is that when you actually use a foundational model to build an agent in any given context for an enterprise, you need to further evaluate it within the very niche workflows of that one enterprise. This is the notion of contextual evaluations, which I think enterprises have not yet really thought about or implemented. The way adoption speeds up is if enterprises start to treat evaluation as core engineering in their product buildouts — a very large portion of product budgets has to be spent on evaluations, where each function the agent should have is qualitatively assessed: not just does it work or not, but how well does it work?
1:33:46
Final question, what's the hardest thing you've overcome?
1:35:27
Realizing, in retrospect, how much my parents gave up to come to the US when they had a pretty good life in Iran. They had to give up pretty much everything and restart their lives. Of course the US is definitely the greatest country to be in, but if someone has spent decades in one country and has to reset their life entirely, it's an incredibly difficult thing to do. I remember the early days of my parents really struggling when they came to the US, and us having to live in a single bedroom as a family of four for a long time — and all the rest that I'm now, in retrospect, appreciative of: how hard they had to work for me and my sister to be able to live here, have a good education, and have the opportunity to build companies. I think that's maybe the main part of my drive: really making sure my parents get a great outcome from this move, and hopefully I can contribute to that.
1:35:30