Mitchell Hashimoto’s new way of writing code
Mitchell Hashimoto, co-founder of HashiCorp, discusses the company's journey from a failed university research project to a public company, the evolution of cloud infrastructure tooling, and how AI is transforming software development. He shares insights on working with major cloud providers, the challenges facing open source maintainers due to AI-generated contributions, and his current work building the Ghostty terminal.
- Always keep an agent running in the background doing slow tasks while you focus on high-value thinking work
- The best engineers often have boring backgrounds - they're private, don't use social media, and focus intensely during work hours
- Open source is facing a crisis due to low-quality AI-generated contributions that waste maintainer time
- Multi-cloud strategy was contrarian in 2011-2012 when only AWS existed, but proved prescient as competition emerged
- Enterprise sales success came from focusing on single products with clear buyers rather than trying to sell entire platform
"If AI agents can write code, open pull requests and ship features, do we even need open source contributors anymore?"
"I endeavor to always have an agent doing something at all times. While I'm working, I basically say I want an agent. If I'm coding, I want an agent planning."
"The best engineers I can remember are notoriously private and not because they want to be private, because they just don't care to be public."
"This is the first time really where it feels like so much is on the table for change at one time. Everything is changing."
"AI makes it trivial to create plausible looking but incorrect and low quality contributions."
If AI agents can write code, open pull requests, and ship features, do we even need open source contributors anymore? Mitchell Hashimoto, the co-founder of HashiCorp, has been thinking deeply about this, the future of open source, and how to efficiently integrate AI into his day-to-day workflow. Mitchell built the tools that power modern cloud infrastructure: Terraform and the Hashi stack. He also created a popular terminal, Ghostty, and I consider him to be one of the most thoughtful voices in the industry on how AI is changing the craft of software engineering. In today's episode we cover the origin story of HashiCorp: a failed university research project, a notebook of unsolved problems, and an email from his future co-founder that he answered in two minutes. His honest, unfiltered take on working with AWS, Azure, and Google Cloud as partners, both the arrogance and also the brilliant engineers who never thought about the business. How he's adapted to AI coding tools, why he always keeps an agent running in the background, his practical advice for engineers who have not yet warmed up to AI agents, and much more. If you're interested in hearing from one of the most hands-on builders in the industry and want to know where AI tools are useful versus not, then this episode is for you. This episode is presented by Statsig, the unified platform for flags, analytics, experiments, and more. Check out the show notes to learn more about them and our other season sponsors, Sonar and WorkOS.
0:00
Mitchell, welcome to the podcast. It's awesome to be here in person.
1:15
Yeah, it's cool to meet you in person after so many years of following you.
1:20
You've had such a massive impact on the tech industry, on software engineers. But how did it start?
1:25
I think the high level is the same story as a lot of people: self-taught around 12, 13, early teens, motivated by video games. Same as a lot of people. Although I really quickly realized that I liked the web. The web was new, Google wasn't out yet. So I never became a video game programmer; I really quickly just became a web programmer: PHP, Perl, that sort of stuff. And because I was so young, the only way I could learn was through whatever code was published online. That's how I got acquainted with open source. I didn't know that's what it was called then, but I was a kid with no job, no money, and my parents didn't want to buy professional books. I don't know what they cost now, but they were like 50 bucks then, right? So they were like, no way. And they didn't believe I was going to read it, so there was no way they were going to buy that. So in the end, anything I found online was my way into coding. I'd walk to school every day with a group of friends, and there was a period of time where I printed out the first or second chapter of the PHP manual. I remember it was about 30 to 40 pages of paper, and I had never programmed, so all this stuff, at 12, was very confusing. I read the whole 40 pages every walk to school. I don't remember how long it took me, but I did that for a long time. Then I remember this one moment walking to school where suddenly I understood what these dollar sign things were. For whatever reason it just clicked.
1:30
Those are variables, right?
2:58
Variables, yeah. And I really understood. I had never heard that word before; you don't hear the word variable as a 12-year-old in any context. And finally at one point it hit me that they store things and things could change. I remember weeks of reading this thing and not understanding it, then getting to school so excited. It triggered, and after that I remember stuff happened really quickly.
2:59
What kind of stuff did you build? Websites?
3:25
Yeah, websites. Gaming-related websites, a lot of game cheat stuff, forum software. I had a lot of fun cloning websites, poorly, but, you know, PayPal was out, and I really wondered: how does money get transferred over the Internet? How does that work? So I tried to build copies of websites. I masqueraded as an 18-year-old on freelance websites, and so I got a hundred bucks here, fifty bucks there to do things like image upload features. I decided to study computer science in college and went to the University of Washington. I guess that's when you would call it serious, but I was coding every day, as much as I could, all through high school.
3:27
That's impressive. Were you alone with this in your friend group? Were there other people doing it, or was it kind of lonely?
4:12
It was lonely. It was very lonely in the real world. But I quickly found online friends through MSN Messenger and AOL Instant Messenger and forums, many of whom I have now met and still keep in touch with, which was cool. But back then, being a programmer, no one even knew that word, being into computers was like a social death kiss. Even my closest friends, my best friends, didn't know. I hid it from all of them and didn't talk about it at school. It was just a secret until I went to college, and college is when I decided to let it all out. The big break that I got was that I blogged. Late in my freshman year of college, heading into summer, someone just emailed me out of the blue and I kind of thought it was a scam. It was just: do you want to be a Ruby on Rails programmer? And I didn't know Ruby, I was a PHP programmer. I had never done Ruby, I'd never done Rails. But I got this email and I'd never been headhunted before; I didn't know what this was. I was also 18, so I didn't really know what to think about it. I probably would not have responded, except that the person who contacted me was in LA. So I did respond, and we set up a meeting, a real physical meeting. I met him, met the company, and realized this was real and they were serious and genuine. And I took that job. I learned a lot on the job there, so that was a huge change.
4:19
Was it a startup, a small company, something like that?
5:48
No, it was a consultancy, one of those standard shops. This was 2007; Ruby on Rails had blown up, it was already very popular, and there were all these consultancies that appeared out of nowhere that were basically "we'll build your minimum viable product." We were one of those shops. It was a great job for a college student, because we'd see a client for about two months, and I would build a YouTube-style website, then a philanthropy website, then an e-commerce website. I got to learn all these different technologies and different scale challenges. Well, there wasn't a lot of scale because we were building MVPs, but different ways of thinking about scale problems. It was great.
5:50
How did HashiCorp eventually start? What happened between getting this Ruby job and a few years later?
6:29
It kind of starts with this Ruby job. There was one guy who worked at the company, and he's pretty into his privacy so I won't share his name, but he was my boss. There was no Heroku, there was no Engine Yard, so you had to self-host, and Ruby on Rails hosting then was kind of difficult. He was the guy who got all these projects hosted on dedicated servers, and I didn't know anything about that. He ran Linux, he had long black hair, he didn't use a mouse, all these things that were so weird to me, and I was just intrigued. He sat in the corner, he didn't want to talk to anybody, and I just wanted to know more about what that world was. And luckily, despite appearances, he's very nice. As soon as I showed a genuine interest and started asking a lot of questions, he started giving me challenges. The first challenge I remember is he unplugged my mouse. It's funny, because in this era, if you did that, it would probably be considered some kind of harassment. But he literally unplugged my mouse and said: you're never going to work with a mouse again, so figure it out. Not going to tell you how. He restarted the computer, said "your problem now," and took the mouse away. Took me about a week, and I got really good with the keyboard.
6:37
Harsh lesson, harsh lesson.
7:54
And once I got good with the keyboard, he installed screen, you know, an early tmux, in my terminal and said: figure this out, you're going to use this now. No questions, you will use this. He just slowly instilled it in me. Then it became: here's SSH, here's a package manager. He slowly taught me more and more, and I loved it immediately. It was like, this is super cool, super fun. So that long-winded process got me into infrastructure. Then, simultaneously or very shortly afterwards, I joined a research project at the University of Washington called the Seattle Project, which is a terrible name because you can't Google it. I'm sure it doesn't exist anymore. Another popular thing during this time was Folding@home, and they were trying to generalize Folding@home: can a bunch of people donate compute, it could be your home machine, it could be an unused rack in your basement, it could be around the world, can you donate all this heterogeneous hardware and then generalize a scheduler on top of it, so that academic institutions across the world could just run workloads? The job I got was, very vaguely, to create not the scheduler component but the ability to spin up all these nodes, and a bunch of other stuff. It was very vague, but it was this infrastructure problem, and I completely failed at it. I tried for a quarter, but from a technical side I just failed. And I wrote down in this notebook what I thought the missing pieces were, why I couldn't solve this problem in a quarter, a 10-week period: we need this, we need this, we need this.
7:56
It's interesting to see how structured Mitchell was in his approach, defining components that would later become parts of the Hashi stack. And this leads us nicely to our season sponsor, WorkOS. One thing I've learned from studying great engineers, Mitchell included, is that they're very deliberate about what they choose to build. Great engineers don't just ship fast; they think in systems, they understand leverage, and they're careful about what becomes part of their long-term surface area. If you're building SaaS, especially an AI product, authentication and enterprise identity can quietly turn into a long-term investment: SAML edge cases, directory sync, audit logs, and all the things enterprise customers expect. WorkOS provides these building blocks as infrastructure so your team can stay focused on what actually differentiates your product. Great engineers know what not to build. If identity is one of those things for you, visit workos.com. And with this, let's get back to Mitchell's notebook with all the components he would end up building at HashiCorp.
9:47
And I still have this notebook at my house. The problems were really like: I have no way to declaratively manage the different resources that are out there; I have no way to network these together in a private network. I wrote these things down, and there was a lot of stuff there that I never ended up building, but a subset of that was ultimately what HashiCorp would end up building. And I shared this with my boss on the undergrad side, Armon, who was my co-founder.
10:40
So he later became your co-founder?
11:07
Yes, he was my boss on the undergrad side. I shared it with him as kind of an exit interview: this is what it is. And then some period of time passed, not much, weeks, and he emailed me out of the blue like, do you want to do a startup together? You know, you're a teenager and you have no idea what this commitment is.
11:10
You're like 21 or something at this point?
11:28
Probably not even. Probably 19 or 20. Yeah. And he emailed me out of the blue, do you want to do a startup, a person I'd barely met, never met personally, all this stuff. It's so funny. He emailed me at like 11:30 at night in college, and I emailed him back in two minutes and said, sure. And he remembers thinking, wow, you responded so fast, you're just in, you're ready to go. That was sort of the start of our friendship. And again, there are overlapping pieces here, but I was also at the time working on something called Vagrant. Vagrant came out of the consultancy, less so the research project. It was solving the problem in this consultancy where we had new clients every two months and we had different teams: how do we create reproducible dev environments, so I could go help somebody without a lot of billable hours?
11:30
So this is a development environment that you could spin up quickly, right?
12:14
Yeah. The metaphor I always used, I didn't use Windows then, but the metaphor I always used was: how could I double-click and open a dev environment? Because it's a good one. The problem we were having was that any hour in a consultancy that you can't bill is just a waste. So it was basically: if somebody else is behind schedule, how can I jump in, help implement a feature, and jump out? And in that era, just setting up the dev environment for a project might take you half a day.
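The reproducible-dev-environment idea Mitchell describes can be sketched in a few lines. This is an illustrative toy, not how Vagrant works internally; all the names here (`DevEnvSpec`, `up`) are made up for the example. The point is that the environment is a pure function of one declarative spec, so every developer who has the spec gets the same result:

```python
# Toy sketch of the "one declarative spec per project" idea behind
# reproducible dev environments (hypothetical names, not the real tool).

from dataclasses import dataclass


@dataclass(frozen=True)
class DevEnvSpec:
    """Everything a project's dev environment needs, in one place."""
    box: str                 # base VM image, e.g. "ubuntu-10.04"
    ruby: str = "1.8.7"      # language runtime pinned per project
    packages: tuple = ()     # system packages to install


def up(spec: DevEnvSpec) -> list[str]:
    """Return the provisioning steps implied by the spec.

    A real tool would execute these against a hypervisor; here we just
    plan them, which is enough to show why the result is reproducible:
    the steps are a pure function of the spec.
    """
    steps = [f"import base image {spec.box}",
             f"install ruby {spec.ruby}"]
    steps += [f"install package {p}" for p in spec.packages]
    return steps


client_a = DevEnvSpec(box="ubuntu-10.04", ruby="1.8.7", packages=("mysql",))
# Two developers with the same spec always get identical environments,
# and switching clients means switching specs, not breaking your machine.
assert up(client_a) == up(client_a)
```

Because each client project carries its own spec, jumping between projects no longer destroys the previous project's Ruby or Rails versions, which is exactly the consulting problem described above.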
12:18
And you couldn't bill that to the client, right? The client will only pay for the work.
12:47
Yeah, you can't bill that to the client. So it'd be four hours of work wasted, and it would probably mess up the dev environment for your actual client, because you'd be on a different Ruby version, a different Rails version. So you would kind of destroy both ends. Vagrant came out of that: I just need to go over there, and what ended up becoming vagrant up, sweet, you know, a few minutes, let me help you for the next two hours. And then, how did you
12:50
build it back then? Was it some kind of virtual machine?
13:15
Yeah, it was with VirtualBox. Oracle... well, it wasn't Oracle, it was Sun then, but VirtualBox. And that's another cool constraint: I was a college student, so I had no money, and virtualization was expensive back then. VirtualBox was free and open source. I didn't care about the open source side of it, I was never going to read the source, but it was free. That was why I did it. And not EC2, which had come out by then, but I didn't do EC2 because I didn't have money to pay for instances. Those were the constraints. And I like bringing that up because I think so much of software engineering is understanding constraints and working with them. Your prior podcast called them forces, you know, static and dynamic forces. I think constraints help create better software, and that was my constraint. So that was it: we had Vagrant, we had this failed infrastructure project, we had my boss at the consultancy getting me into infrastructure. And then, externally, we had the cloud being introduced, AWS. I went to school at the University of Washington, so I was right there, right in
13:17
the epicenter of it. Amazon was next door, right?
14:25
Amazon was very next door. They donated a bunch of credits right away; I knew about the launch. Most of the CS students at UW interned at Amazon, not necessarily AWS, but including AWS, all over. Armon interned at AWS. So I was in the bubble of cloud, cloud, cloud, AWS, AWS, back when people were pronouncing S3 like "S cubed," when people didn't know how to pronounce it. That's how new it was. So all this stuff came together and led me on the path to build tooling
14:27
to better manage it. At that moment in time, when you saw the cloud getting big, did you know, or have a conviction, that it would be big, or as big as cloud has become? I'm just trying to put you back there; this was very, very new back then, right?
15:01
Totally. Yeah.
15:17
And I imagine more people would have been skeptics, seeing it as just a fad or whatever. What was it like? Can you bring us back a little bit there?
15:17
Compared to today, it was very unpolished, I guess, is how I'd describe it. AWS in general was very unreliable; S3 was the only reliable piece. Everything else was totally unreliable, and there were only a few services. EBS didn't even exist when we started, so there was no durable storage besides S3. When I first started with it, it just felt very raw. And I never really viewed it as "this is going to be big." Eventually I thought it was going to be big, but what I viewed it as was: this is the better way to do it. This feels like the better way to do it, at a base level. Whether this wins or loses in the realm of markets and popularity, I don't know, but this felt good, and that's what pushed me towards it. I'm always really motivated by what's the most fun and what feels right, and it just felt right to me. Where I started making the bet, me and Armon both, was not just when we started HashiCorp, but that we started HashiCorp on the basis of multi-cloud. And I really like to contextualize that: at the time we were starting this, 2011, 2012, AWS was huge, Azure didn't really exist, and Google Cloud didn't really exist.
15:26
There was Google App Engine. Right. It wasn't even Cloud.
16:51
Correct.
16:54
I used to use that when it was App Engine.
16:54
Yeah. And so in that context, as we were pitching these cloud-agnostic tools, we got a lot of raised eyebrows: "this is a waste of time, because AWS is the only player in town." And our conviction was: cloud is going to be huge, and anything that's economically huge, other people want a piece of that pie. So you're not going to just have AWS. It'll be huge, but these others will pop up; Microsoft is not going to sleep on it, Google is not going to sleep on it, and who knows who else. That was our conviction, that was our bet, and it mostly played out that way.
16:57
So when you decided to start HashiCorp, you had Vagrant. Was the idea to invest in and commercialize Vagrant? And did you go out to raise money, or did you bootstrap? How did that go?
17:34
It wasn't a commercialized Vagrant. What we had done, Armon and I both worked at this mobile ads startup, fewer than 30 people, was build, with Python and C, these really rough prototypes of the ideas I had in this notebook: service discovery, an early version of Terraform we called Launchy, DNS-based service discovery built by connecting an off-the-shelf DNS server with Postgres. We did all these hacky things, but they felt good. And again, we get back to how things feel to me, to motivate me: it felt right, directionally right. I graduated, and the environment in Seattle was not very startup-heavy at the time. It was basically: are you going to work for Amazon, or are you going to work for Microsoft? To a certain extent Facebook was starting to show up there, but that was it. I knew I wanted to work for startups, so I moved to San Francisco, found a startup that would hire me, which was the mobile ads thing, and just wanted to learn. So I ended up in San Francisco, and I convinced Armon... he was actually going to do a PhD at Berkeley, and he was accepted and in, and he
17:45
was, this is a huge deal, huge
18:58
deal, I mean an incredible program. So he was going to go there, and he would have done amazing things there. But I convinced him to join this mobile ad startup. He actually took a year to defer the PhD. He's like, I'll give it a year, I'll join this mobile ad startup, and
18:59
I'll go back for Berkeley for sure.
19:13
If it doesn't work, I'm gonna go back. And what ended up happening in that year is now where we get to: we had this hodgepodge of prototype tools that felt right, and we were going to all these little startup mingling parties, things like GitHub drinkups. This is such a San Francisco thing, and it's why, even though I don't want to live there again, it was so magical at the time. Across the street was this company that was called Zimride at the time, which ultimately became Lyft. And they invited us over to get drinks and have pizza, to demo this new app with a mustache that didn't have a name yet. Stuff like that.
19:14
You were there when Lyft was born?
19:57
Yeah, yeah. And that happens all the time, all the time in San Francisco, and it's not unique to me at all. There are a bunch of stories there that aren't worth getting into; it's just fun. But I went to all these things and people would just talk. They're all a bunch of tech guys, right? And you'd ask: what are you working on? And there were two things I realized. One is all these companies were cloud first. They were all just adopting AWS first. There was no dedicated.
19:59
This was in 2011, 2012 or so. They just went and paid for cloud, which was brand new, right? The previous generation had on-prem: server rooms and server admins. They had roles for those.
20:26
All that jazz was just gone, gone.
20:39
Like, that must have been a massive shock.
20:42
I literally can't think of one social event I went to where there was somebody that had dedicated servers. The only one is maybe Twitter.
20:44
Yeah. But I think we probably have to emphasize that this was a massive shift in the industry, right? And it probably was only happening in Silicon Valley, or
20:50
like, probably, yeah, probably.
20:57
Well, well ahead of everyone else.
20:59
At a scale that was larger than anywhere else, it was probably Silicon Valley. The joke used to be, because AWS was so unreliable, that when AWS went down, all these startups finally became more cash-flow neutral; they would lose less money. So there would be a huge US-East outage, and everyone would be like, are you going to migrate regions? No, we're saving money right now. But getting back to it: everyone was cloud first, cloud born, cloud native, whatever you want to call it. And the other thing was they were hitting all the same challenges that we were hitting. They didn't use our tools, because they were just internal prototype tools, but I knew that our tools felt good. So I had these two things come together, and I had some ego, some hubris: I'm pretty sure we're building the right thing, and I think the industry is moving in that direction, and those could come together. And so that led to: let's start a company based around that. The fact that I had Vagrant was more about industry respect. I mean, Vagrant wasn't that big then, so that's not saying much, but it gave me some public foundation, some credibility, to head in this direction. That was about it. And we started HashiCorp.
21:01
And then when you incorporated, did you decide to raise money? Because back then it wasn't as common; Y Combinator was probably starting around that time. Were startups a big thing, or was it a given that if you start a startup you're going to raise money?
22:20
In my social bubble it was pretty much a given. And not just that. So we incorporated, and I self-funded: I transferred $20,000 from my savings account into this corporate account as initial funding, and I worked off of that. I paid myself $0 for the first six months, so the 20,000 went purely towards whatever the company needed. That was the first six months. Then Armon joined after six months, and we decided to raise. The motivation there, really, is that there weren't many other options. There were basically three options as I saw it then. Bootstrapping: build something, make money, and as it becomes affordable, continue to reinvest and grow. VC on the other side. And in the middle was what I called patronage, not Patreon-style stuff like today; that infrastructure didn't exist, there was no subscribe-or-donate infrastructure then. Patronage was more like you might be able to convince a company like VMware to pay your salary for you to work on some idea; the best example is Redis at VMware. And we laid out this plan of what we wanted to do, which at the inception of the company included Terraform, Consul... no, it included everything but Vault; Vault came a little bit later. And we looked at that and said: if we bootstrap this, even if we hit it out of the park, this is going to take us a decade just to build the software, and that's the best-case scenario. This is just going to be slow. And the problem with slow is that things have a window, and the cloud was growing so fast that if we were that slow, someone else was going to do it their own way. That was the primary issue: we really just wanted to go fast.
22:35
You knew you needed to.
24:26
Yeah, I needed to. We needed to hire many engineers right away and start building right away. And so VC was the route we chose.
24:27
Can you talk us through the first several products and what they do? We know Vagrant, but just for those who are less aware of what became the Hashi stack.
24:34
Right, yeah. Let me see if I can still get these in order; I'm pretty sure I can. So Vagrant predated it. The first product that came out of HashiCorp itself was a product called Packer. It's kind of understated publicly, but it underpins a lot of things in the industry to this day. It's an image-building tool: building Amazon images, VMware images, et cetera. I'm not even sure how much has come out publicly, but there are whole multi-billion-dollar cloud platforms whose official service images are all built with Packer. Everyone was trying to utilize the horizontal scaling, auto-scaling nature of AWS; that was the dream. And, it's kind of like the cold start problem with serverless today: if you were waiting tens of minutes for your server to be ready, you couldn't react. So my idea was: do that provisioning once, snapshot the image, and then next time just spin up that image. And so that was Packer.
24:43
That was Packer.
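The bake-once idea behind Packer can be sketched as a toy: run the slow provisioning a single time, snapshot the result as an image, and boot copies of that image instantly afterwards. Everything below (`bake`, `boot`, the in-memory `IMAGE_STORE`) is hypothetical and exists only to illustrate why pre-baked images make auto-scaling reactive:

```python
# Toy sketch of the bake-once/boot-many idea behind machine images
# (illustrative only; not Packer's actual implementation).

import hashlib

IMAGE_STORE: dict[str, str] = {}   # image id -> snapshot "contents"


def bake(provision_steps: list[str]) -> str:
    """Run the slow provisioning once and snapshot it as an image.

    The image id is derived from the provisioning steps, so baking the
    same steps twice reuses the existing snapshot instead of rebuilding.
    """
    image_id = hashlib.sha256("\n".join(provision_steps).encode()).hexdigest()[:12]
    if image_id not in IMAGE_STORE:            # the slow path happens once
        IMAGE_STORE[image_id] = "snapshot of: " + "; ".join(provision_steps)
    return image_id


def boot(image_id: str) -> str:
    """Booting a pre-baked image skips provisioning entirely."""
    return IMAGE_STORE[image_id]


img = bake(["install nginx", "copy app code"])
# Auto-scaling can now launch many identical servers in seconds,
# because none of them re-run the tens-of-minutes provisioning.
servers = [boot(img) for _ in range(3)]
assert len(set(servers)) == 1
```

The trade-off this sketch captures is the same one described in the interview: you pay the slow provisioning cost at build time, so scale-out events only pay the fast boot cost.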
25:41
So Vagrant, Packer. The next one that came out was Consul. Consul was solving the networking problem, well, not networking exactly, more the service discovery problem: you have all these machines coming and going. Again, to contextualize: before, you would have a static set of machines that had IPs, and you would probably use DNS or something, but the IPs didn't change that much, so you could say, oh, my database is here and it's not moving. But if you're in this world where web servers and load balancers and databases are just breathing, that's how I always describe it, breathing: creation, destruction, creation, destruction, constantly, then things are happening at a scale where service discovery needs to be much faster. And not just faster; you want better guarantees that when you get a response saying it's at this IP address, that IP address is ready. I think this is more mainstream now with Kubernetes readiness checks and health checks and things like that; it was bringing that to physical servers, cloud servers, virtual machines. So that was Consul. Then after that, I think we did Terraform. Terraform spins up infrastructure as code: describe your infrastructure, in AWS parlance things like the attachments to your EBS volumes, gateways, VPCs, subnets, and connect them all together. The idea was: I wanted to have an empty AWS account, or any cloud account, and I wanted to have this text and say, make this text reality. That's what Terraform is. You would wait whatever amount of time it took AWS, and you would blink and you would have thousands of resources, and then with one command you could tear it all back down to zero. That was Terraform; it came out around 2014. So that was the next thing, and then was Vault.
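The service-discovery problem Mitchell describes for Consul, machines constantly "breathing" in and out of existence, can be sketched as a toy registry that only hands back instances that are actually ready. This is an illustration of the idea, not Consul's real design (which adds distributed consensus, a DNS interface, and much more):

```python
# Toy sketch of health-aware service discovery (illustrative only).

from collections import defaultdict


class Registry:
    def __init__(self):
        # service name -> {address: healthy?}
        self._services = defaultdict(dict)

    def register(self, name: str, address: str, healthy: bool = True):
        """Instances announce themselves as they are created."""
        self._services[name][address] = healthy

    def deregister(self, name: str, address: str):
        """...and disappear when the machine is destroyed."""
        self._services[name].pop(address, None)

    def discover(self, name: str) -> list[str]:
        """Unlike a static DNS entry, only healthy, ready instances
        are returned to callers."""
        return sorted(a for a, ok in self._services[name].items() if ok)


reg = Registry()
reg.register("db", "10.0.0.5:5432")
reg.register("db", "10.0.0.9:5432", healthy=False)  # failed its health check
assert reg.discover("db") == ["10.0.0.5:5432"]

reg.deregister("db", "10.0.0.5:5432")               # machine torn down
assert reg.discover("db") == []
```

The contrast with the "database is here and it's not moving" world is the point: discovery becomes a live query against current health, not a lookup of a fixed address.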
Vault is easiest to describe as secrets management. At its core it's secrets management and encryption; it grew to do a lot more things than that.
25:42
So it's like, on your local developer machine you have your environment variables, and this is doing that at scale, at a team level, at a company level. Services need to access all this stuff securely.
27:35
Yeah, it was much more focused on production environment secrets. I had dreams and visions of really solving the developer secrets problem, but Vault never did that.
27:47
Well, Mitchell just talked about secrets management, which turned out to be a pretty important focus for him. In general, security is both very valuable but also pretty hard to do well. This leads us nicely to our season sponsor, Sonar. Looking at where we are today, we've now moved past tab completion into the era of agentic AI. Autonomous agents are opening pull requests. One big question: how do we get the speed of AI without inheriting a mountain of risk? Sonar, the makers of SonarQube, has a really clear way of framing this: vibe,
27:59
then verify.
28:28
The vibe part is about innovation: giving your teams and your AI agents the freedom to build and iterate at high velocity. The verify part is the essential automated guardrail. As agents start contributing more of our code base, independent verification that checks every line, human or machine generated, against your quality and security standards is more critical than ever before. Helping developers and organizational leaders get the most out of AI while ensuring quality, security and maintainability is one of the main themes of the upcoming Sonar Summit. This isn't just a user conference; it's where devs, platform engineers and engineering leaders are coming together to share practical strategies for this new era. I'm excited to share that I'll be speaking there as well. If you're trying to figure out how to adopt AI without sacrificing code quality, join us at the Sonar Summit. To see the agenda and register for the free virtual event on March 3rd, head to sonarsource.com/pragmatic to register for the Sonar Summit. And with this, let's get back to HashiCorp and why the company decided to raise six months after founding.
28:29
But yeah, it's basically like, where do you store your secrets? And the secrets were not just, I forget the words I used to describe this, but secrets were not just passwords. It was also PII: how do you protect emails and addresses for your customers, or credit card numbers? Vault was core to all of that, and continues to be.
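The core interface Mitchell is describing can be sketched as a toy: store secrets at paths, gate reads behind per-token policies, and hand out leases that expire. This is an invented illustration of the pattern, not Vault's real API (the class and method names here are made up, and a real system would encrypt at rest and do far more).

```python
import time

class ToySecretsStore:
    """Toy sketch of the secrets-management pattern (not real Vault)."""

    def __init__(self):
        self._data = {}      # path -> secret value
        self._policies = {}  # token -> set of allowed path prefixes

    def put(self, path, secret):
        self._data[path] = secret

    def grant(self, token, prefix):
        self._policies.setdefault(token, set()).add(prefix)

    def read(self, token, path, ttl=60):
        allowed = any(path.startswith(p) for p in self._policies.get(token, ()))
        if not allowed:
            raise PermissionError(f"token may not read {path}")
        # A lease: the caller is expected to re-request after `ttl` seconds.
        return {"value": self._data[path], "lease_expires": time.time() + ttl}

store = ToySecretsStore()
store.put("db/password", "s3cr3t")
store.grant("web-token", "db/")
print(store.read("web-token", "db/password")["value"])  # s3cr3t
```

The policy check is what makes this a team-scale and company-scale tool rather than a `.env` file: which service may read which secret becomes explicit and auditable.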
29:27
It takes guts to build something like that.
29:46
Yeah, we were really scared when we built that, actually, because we kind of hid the fact. We never lied about it, but nobody on the team that built Vault had more than one quarter of undergraduate security coursework. There were no professional security engineers from industry, no professional security academics. And yeah, we built it. We got a lot of audits because of that, because we were scared. For us it was very expensive as a startup: we paid a couple of firms tens of thousands of dollars to audit Vault 0.1. We also shared the early beta with a lot of people who were security experts so they could review it, not publicly, just privately, and we got a lot of good feedback. But yeah, we didn't want that exposed, in a sense.
29:50
Yeah, I understand. But I mean, it kind of validates that you can build good stuff with people who might not have the experience, I guess because people were learning.
30:34
Right. The security stuff ended up fine; we really quickly hired professionals that helped the product, and the security was always pretty solid. But I think what it really showed was that what the security industry needed was a shift in user experience more than a shift in what the products did. What we were doing was not fundamentally different from existing multi-hundred-million or billion-dollar companies, but the experience, the way you interfaced with it, was dramatically different. And that was, I think, a good example of that. Yeah.
30:43
And after Vault came Nomad.
31:16
Nomad, yeah. Nomad, which was our scheduler, which was a couple years late to the market.
31:18
Yeah, you say scheduler, was it not an orchestrator?
31:26
I always described it as scheduling.
31:29
What did it do?
31:31
Simple thing, the same problem I had back in undergrad, finally solved: you have a pool of compute, you have an app that has a certain set of requirements, and it needs to find a place to run.
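That placement problem can be sketched as first-fit bin packing. This is a deliberately minimal illustration of the scheduling problem as stated, not Nomad's actual algorithm (its real scheduler does ranked scoring, constraints, preemption, and more):

```python
# Toy first-fit scheduler: place an app on the first machine with enough
# free CPU and memory (names and resource units are invented for the example).
def schedule(app, machines):
    """Return the name of a machine that can run `app`, reserving resources."""
    for name, free in machines.items():
        if free["cpu"] >= app["cpu"] and free["mem"] >= app["mem"]:
            free["cpu"] -= app["cpu"]
            free["mem"] -= app["mem"]
            return name
    return None  # no capacity anywhere: queue the app or scale the pool up

machines = {"node-a": {"cpu": 2, "mem": 4}, "node-b": {"cpu": 8, "mem": 16}}
print(schedule({"cpu": 4, "mem": 8}, machines))  # node-b
```

Everything a real scheduler adds (health, affinity, spread, priorities) is refinement on this core loop of matching requirements against a pool.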
31:32
Yeah, yeah, the undergrad program we talked about. And as you're building out these products, you said some of these took years. How did HashiCorp as a business work? Did you start to generate some revenue?
31:42
There was no business.
31:54
There was no business? So, like, all right, tell me about this one.
31:55
Yeah, I think we waited too long to develop a business. For four years there was actually revenue from a couple of random sources, but there was no real, reproducible, growing business.
31:58
So you were just building this founder's vision of, all right, we need all these things that would have taken like a decade. Bootstrap it, build it in
32:10
five years and figure it out.
32:21
That was literally it.
32:22
Yeah, that was literally it. And you know, it was all open source. And I always had this mentality, which was: if the company fails, it doesn't matter, because if these are good ideas, the open source community will just continue them. I don't think I would ever have told that to my investors at the time. But I had this idea that the technology was the most important thing to get out into the world. The business, I sure hoped we could figure out, but it wasn't the most important thing.
32:23
And for those engineers who are thinking of becoming founders, or might be founders: how did this work with your investors? When they put in money, did they get board seats? Did you have to manage expectations? Because, putting a bit of my business hat on, for four years you're building these cool things and you don't exactly have a business plan. How did that work? Did they just believe that eventually you'd figure it out, or did they see some kind of traction with the open source?
32:50
It's traction, and I don't think what we did was atypical for Silicon Valley. The really broad, hand-wavy way I like to describe it is: your seed is about building the product. You don't even know if there's product market fit; you're making an educated guess, but you're building something. Getting the A, you've proven hints of product market fit, but you definitely don't have it yet. And then when you get the B, you've proven product market fit, but you haven't really proven repeatable revenue. You have hints of revenue, you know the product is useful, people like the product and want to use it and maybe want to pay for it, but you don't know exactly how to get everybody to pay for it. And then C, D and so on is just continuing to build the repeatable revenue machine. With that framework in mind, we were on the right track. It was basically: build the product. We had clear product market fit by the A, in terms of the open source. We had millions of downloads, a lot of stars on GitHub, all sorts of signals that showed this was resonating. We had zero revenue. And so it was: raise money and slowly, slowly get closer to solving the business problem. I think we were just a year or two later than the average startup, but the general keyframes were the same, just on a slightly wrong timeline, I guess.
33:16
And then when you decided to do a business, you already had the Hashi stack, and then you built managed offerings, I remember.
34:39
Yeah, our first foray into commercialization was a total failure. It was this idea.
34:47
Oh really?
34:52
Yeah. You would have to have been a die-hard HashiCorp product fan to know this, but we had this first product called Atlas, and the idea was commercially shipping the vision of running all the products together. There were a couple of death knells there. One was that you had to run all the products, so if you were just a Vault user, you had a really impossible time buying, or buying into, our commercial product. And the second was that it was just a huge problem to attach onto, regardless of the adoption required. You're trying to solve a problem that multiple different buying organizations in a company were fighting over. So even with the people who had adopted all our tools, we ran into the problem of who pays for it.
34:53
It wasn't as simple as engineering paying for it.
35:36
Correct. And I think one of the lessons I would have for engineers that become founders without a business background, one of the tough lessons I had to learn, is that companies want to pay for software, but they will fight over whose budget owns it. Budgets are important, right? The budget has to exist. And if it looks like a networking problem, they're going to say, oh, networking should pay for that, so I have more budget to buy my other toys that I want, or I can hire more people, all this stuff. It could get broken down into vendor budgets; it could already be earmarked for external purchases. So we had this product where it was: does security pay for it? Does networking pay for it? Does infrastructure pay for it? Does dev tooling pay for it? Where does this go? It's that Spider-Man meme where everyone's pointing at each other. Ultimately you don't sell anything. And so that was a failure for that reason.

I don't remember the total time we chased this down, but we had a board meeting, for sure, on a Friday. Board meetings were usually on Fridays. We were based in the city of San Francisco; board meetings were an hour south, in real Silicon Valley. And it didn't go well. There was no yelling, nobody saying, you guys are messing up, nothing like that. The way I describe it is when your parents aren't happy with you, and they don't have to say that they're not happy with you, but you know they're not happy with you. We had this board meeting and we drove home, Armand and I, and the complete drive home was silent. Now it's Friday night, so usually what we'd do is go straight to Armand's place, he lived in the city and I lived in LA already, and have a glass of wine, debrief, talk through things. But we didn't talk in this car ride home.
Armand drove straight to the office instead. I didn't question it. We went into the office, sat at a table not much larger than this one; the only difference was there was a whiteboard. I think one of us at that point said, well, that didn't go well. We both knew it. We didn't feel good. The sequence of events here is now very fuzzy, but at a certain point we decided, let's play this experiment: if there were no sunk costs, if we were starting from scratch, what would we do differently today? And we whiteboarded all this stuff out. What we whiteboarded was per-product enterprise products, doing Vault first, all this stuff. We wrote it out and spent some amount of time there. It's still Friday; it might be Saturday in terms of the time of day, but it's still Friday. I think it was Armand who looked at the board and goes, why don't we just do that? Why not? And I was like, yeah, why not? So we decided over the course of that weekend to just throw it all away, throw everything we were doing before away. We had two paying customers. We're like, just breach the contracts, I don't know, figure it out, get out of it, we're done. And we convened an all-hands meeting on Monday. There were probably only about 20 or 30 people in the company at that time, but we convened an all-hands over Zoom, or whatever video chat we used then. And we said, okay, we're switching directions. Enterprise is now our customer. Open core, per product: we would have the open source, and we would have a forked version internally that had closed source features. Yeah, it was a fork, but an open core business model. Armand and I thought people would quit. We thought we would lose, like, an exact number of people. We thought it would shatter some level of confidence, like, wow, these guys have no idea what they're doing. We didn't have any idea what we were doing.
And you know, open core even then had a bit of an icky taste in people's mouths. So we thought people would quit philosophically, being like, no, I came here to work on open source, I'm not going to do open core. And enterprise was kind of just this suit-and-tie, boring thing. There were multiple facets of why people might quit. Nobody quit. The vibes in Slack were amazing. Super positive.
35:38
Oh, what happened, do you think? Why weren't people rattled?
39:43
Yeah, we asked about it in one-on-ones and follow-ups. And it was really that everyone was kind of buzzing that we had a clear direction and a conviction. There's fear of the unknown, but before, there was this feeling of, we're just throwing darts at the wall and doing this thing, and we don't know exactly who our customer is. There was all this uncertainty in a different way. And now it was: we don't know if this will work, but at least we're going to sprint towards this. There were these clear things: definitely enterprise, definitely open core, definitely Vault. All these things were set in stone. That gave us a different kind of certainty, and suddenly the company was like, let's go. So yeah, nobody quit. It went super well. I don't remember the time of year, but it was in the fall, and we built Vault Enterprise by the new year. Within the first quarter of trying to do sales, we could just tell it was different. It wasn't obviously successful yet, but the caliber of conversation we were having, the distance we were getting in the buying process, and the speed just felt different.
39:47
And what was different of this approach?
40:53
Yeah, I mean part of it just comes down to the classic startup lesson: listen to your customer. And we should have listened from the beginning, because our potential customers were screaming at us to do what we ended up doing. We would give these pitches about adopting all the products and buying this pie-in-the-sky thing, and there were so many meetings where someone would be like, okay, I'll think about that, but how do you replicate your secrets in Vault? They would just ask these questions where, if I was just listening... I was so blinded, a lot of us were blinded, but if I was just listening, I'd be like, wait, a lot of people are asking about secrets replication, and that's an at-scale problem. Maybe we could close-source that. That's what we ended up doing: our first feature was secrets replication, and not even across data centers. The first feature was just across a cluster of Vault servers in a single region. You would sell this more focused product. And now, to the problem I talked about earlier: security was definitely the buyer. There was an obvious budget, an obvious person you were talking to, and a feature that resonated at their scale. So we were just having much higher quality meetings, in terms of getting deals done.
40:55
Mitchell just talked about how HashiCorp managed to build a product that enterprise customers cared about and wanted to buy because it resonated with their scale. This brings us nicely to our presenting partner for the season, Statsig. Statsig offers engineering teams the tooling for experimentation and feature flagging that used to require years of internal work to build, and is especially important at enterprise scale. Here's what it looks like in practice. You ship a change behind a feature gate and roll it out gradually, say to 1% or 10% of users first. You watch what happens: not just did it crash, but what did it do to the metrics you care about, like conversion, retention, error rate, latency. If something is off, you turn it off quickly. If it's trending the right way, you keep rolling it forward. The key is that the measurement is part of the workflow. You're not switching between three tools and trying to match up segments and dashboards after the fact; feature flags, experiments and analytics are in one place, using the same underlying user assignments and data. This is why teams at companies like Notion, Brex and Atlassian use Statsig. Statsig has a generous free tier to get started, and Pro pricing for teams starts at $150 per month. To learn more and get a 30-day enterprise trial, go to statsig.com/pragmatic. And with this, let's get back to the episode and what came after.
42:08
So we built Vault Enterprise. And I get asked about the open source side all the time, but these corporate buyers do not care at all about open source. They don't care at all. They need a commercial agreement, and with the closed source nature of it, some people needed legal protections, around things like code escrow in case of downtime, stuff like that. That was about the extent of it. Otherwise they were like: we need support, we need a proof of concept to prove it works, we need some white papers about other customers at scale. And yeah, that's what we had to build up after that to get going.
43:18
And then. So you started selling with Vault and then you did it for the other products as well, right?
43:56
Yeah, we did Terraform and we did Consul; we had it for all the products. But all this data is public, or at least for a period of time it was public: you could look at the public reports from when HashiCorp was a public company. It really broke down to Vault and Terraform.
44:00
One thing I remember is Terraform just became so, so popular across the industry. Like, there's a Hashi stack, but I only later learned that all the other parts existed, because Terraform just seemed to be everywhere. Why do you think it became so popular?
44:15
It's so funny to hear that, because I accept and know that now, and I feel the same way you do, that Terraform is this huge thing. But for the longest time we were the Vagrant company; no one knew the other tools. And not only that. One of the things that frustrated me for a period of time, I haven't heard it recently, was: oh, they only won because they were first to market. I heard that a lot. And we were like seventh to market.
44:32
In what category?
45:03
In terms of infrastructure as code.
45:04
So there were other players who.
45:07
So many, yeah. And no one was a clear winner; it was a warring market. But that first year, 2014, when Terraform came out, one of my marketing strategies was to be at every conference. I traveled an obscene amount. I was speaking wherever I could, and even if I couldn't speak, I was going just to talk to people. There's actually a little anecdote here. When the COVID lockdowns happened in March 2020, my wife and I had nothing to do at night. We didn't have kids yet. We opened up our calendars and realized we had been dating since 2012, and this was the first time in almost ten years of our relationship that I would be in the same place longer than eight days. For nine years straight I had been somewhere different at least every eight days.
45:10
That's how much you traveled?
46:02
That's how much I traveled, yeah. And I know there are consultants that travel a lot more, but I was traveling a lot, I was coding a lot, I was doing all these things.
46:04
You must have coded while you traveled as well.
46:11
All the time, yeah. I had a whole system. When I started traveling, in-flight Wi-Fi didn't exist.
46:14
Yeah, yeah, exactly. Even now it's kind of patchy.
46:17
Yeah. So I wrote these scripts that I ended up iterating on, but mostly they downloaded all the GitHub issues, I categorized them, and I would break them down into tasks where none took more than 10 to 15 minutes. I just created this list, and when I was on the plane I would bust them out one by one. There's no Internet, so I'd just commit them locally. And then I would land, push, and people used to notice this, because they would get these email notifications where like 30 issues were closed all at once. But I found the key was pre-planning which issues you were going to work on. I did that online, on the ground.
46:20
Yeah.
46:58
And then breaking them down into 15-minute chunks, because I found it was really hard to get into multi-hour flow on an airplane, even when I was traveling somewhere far like Japan. So I was like, I'm only going to work on stuff that isn't heavy design work, none of that. Just bug fixes, just cleaning stuff up. That was my process.
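The filtering step of that workflow can be sketched in a few lines. Mitchell's actual scripts aren't public, so everything here is an invented illustration of the idea: pre-fetch the issues on the ground, tag each with an estimate, and queue only small, design-free tasks for the flight.

```python
# Toy offline-work queue (field names and estimates are made up for the example).
def plane_queue(issues, max_minutes=15):
    """Keep small, self-contained tasks; leave design-heavy work behind."""
    return [
        i for i in issues
        if i["estimate_min"] <= max_minutes and not i["needs_design"]
    ]

issues = [
    {"id": 101, "estimate_min": 10, "needs_design": False},
    {"id": 102, "estimate_min": 90, "needs_design": True},   # stays on the ground
    {"id": 103, "estimate_min": 15, "needs_design": False},
]
print([i["id"] for i in plane_queue(issues)])  # [101, 103]
```

Each surviving task gets its own local commit on the plane, and the whole batch is pushed on landing, which is what produced those "30 issues closed at once" notifications.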
46:59
In 2021, HashiCorp went public. What is it like to go public, both in terms of preparing for it and how it felt? What changed after?
47:19
On the prep side, I don't have the full answer, because I stepped down from the executive team maybe six months before we went public. So I was part of some of the planning, and obviously I was very aware we were planning to go public, but, for example, I wasn't part of the roadshow or any of that. From my seat, the parts I was part of or had visibility into: it takes over a year to do it, so there's a lot of prep. And there are some funny things you do. You start running like a public company at least two quarters before you're public. I don't remember what the drop-dead date is, but there's a date where you could just cancel going public, and it's very close to the actual day. So you run like a public company, to the point where you do mock earnings calls. You actually sit at a conference room table, and your investors play the public investors who aren't in the room: they go somewhere else, talk over the speakerphone, and ask you the types of questions they'd ask. Your CFO or VP of finance gives the full report of the quarter. They try to frame the types of questions, you run it, and you try to figure out whether it's running well enough, I guess. That's what the prep feels like. And there's an obscene amount of secrecy, because from a regulation standpoint you can't talk about any of this. You could look back at even the dumb stuff, like Hacker News comments. It's the clearest signal a company's gonna go public: I went radio silent on every topic, because everything became questionable. There was a Hacker News comment I made maybe eight months before we went public, and our general counsel, in the middle of the night, was like, you have to delete that.
After he talked to me, I could see how it might affect things, but I hadn't realized it mattered. And I ended up deleting it.
47:28
And is this because you're not supposed to give public information away or something like that?
49:24
I don't remember the exact regulation, to be honest.
49:30
Yeah, but there's some regulation about, like, not leaking information.
49:32
It's not really about that. I mean, it is, it's all information, but it's more that you can't influence the market in any way. And you can't make promises, because if you say, oh, we're gonna go public, it might cause even private funding to froth up, and it's a form of fraud. So basically I just stopped talking about everything. I don't know how seriously other people take it, but I took it to the point where I planned this trip to New York for the IPO and invited my parents, and I didn't tell them why we were going. I just told them: I want you to come to New York, it's really, really important, it has to do with HashiCorp, and I can't tell you more. And they were like, sure. I told them maybe a month in advance. We had a dog, so we had to get our dog sat by my aunt, and I just told her we were going on a family vacation. Up to the point we left, nobody knew except my parents, basically none of my friends, nothing, except the friends that worked at the company. But yeah, that's what it's like leading up to it.
49:38
Yeah, I was at Uber when we went public. And previously I read that well before going public, VMware made an offer to HashiCorp. That was, like, super
50:44
early, two years into the company. We went public like ten years into the company.
50:55
So when they tried to buy you, what was it like? Did you almost sell? Was there any point where you were close to potentially selling?
50:58
It felt close. And I got a lot of accounts afterwards that it was very close; it came down to one vote on the VMware board, was what I heard. Two years into the company, we were only three people: two founders, me and Armand, and one employee. We got approached by VMware. I didn't know what this would be like. What it isn't is: they don't show up and say, we would like to buy you.
51:06
No, no, that would be too obvious.
51:32
The way it happens is you get an email from some low-level business development person who wants to just talk vaguely. And the vague talk isn't that they're interested in buying you; one of the jobs of BD people at large companies is just to have an understanding of the ecosystem. So it's really just, let's have an understanding. They might have had an executive tell him or her to go talk to this company; there might already be an executive kind of poking around. So it starts out that way. It turns into, would you like to come by our offices and meet in person? Oh, our VP of engineering swung by, let's talk to him, nice to meet you. And then, I think in our actual timeline, there was a dinner with three VMware executives. At that point we thought they might be interested, but it was still so.
51:34
Wow. So, so much dancing.
52:26
Oh, this is months before there was even an offer. It was still so social. We drank, we talked about our hobbies and interests, and very little about tech, only the basics. It was really more vibes: you go to dinner. Then it started to get more serious. We spent more time in Palo Alto at the VMware offices, where we started talking about partnerships: how can VMware help our products more? It starts with partnerships and then turns into hypotheticals: if you had the resources of VMware, what would you do? We're like six meetings in at this point, and there's no offer of anything. And at a certain point, honestly, we were getting tired of it, because nothing was happening anyway.
52:29
Sounds rough, then: you're a startup and you're going to all these meetings.
53:13
Oh, and I don't even live in the Bay Area, so I was flying up all the time. It was a waste of time. And to a lot of founders, that is the warning I give them: M&A becomes a waste of time. So I have another M and A
53:15
Mergers and acquisitions.
53:25
Yeah, mergers and acquisitions become a waste of time. I'll tell you another anecdote after this, but ultimately we kind of politely had the, okay, let's shit or get off the pot kind of conversation. And they put an LOI in front of us, a letter of intent. The LOI was one page. It's basically a semi-binding promise that we're pursuing buying you. No number on there; it's kind of vague.
53:26
Still no number.
53:59
Yeah. Well, verbally. They're not writing anything down, they're not putting anything in email, none of that. It's just verbal. And at that point, verbally, they had dropped a number: $20 million,
54:00
which doesn't sound like much.
54:11
Well, yeah, but we're.
54:13
What?
54:14
Well, I'm 23 years old.
54:14
Oh, yeah, the three of you. 23 years old.
54:16
23 years old. Me and Armand together owned 70% of the company. So yeah, it sounded interesting, to say the least. What I tell people is: you start thinking about the things you will buy. That's what happens; it's a dangerous path. And we had advice from people who said it's phenomenally too low, wildly too low, so go ask much higher. I don't remember anymore, but we asked for maybe 40 or 50 or something, and they just said yes. They said okay. And that was still way too low. That was verbal too, so there was nothing binding about it. It wasn't quite a yes; it was more like, okay, we'll work on that. But very positive.
54:18
Yeah, a bit indirect again. It's an indirect.
55:07
Indirect, in the business sense, yes. And it turned into: come meet the CEO of VMware. Clearly they're interested, because we kept climbing the ladder. And Armand and I kind of started getting cold feet, because, the way we described it, it's a dream-killing amount of money. You would take the money, but you're too small to be important to a company like VMware. So they're going to. Just.
55:10
Because.
55:32
Because even though it's like so much
55:33
money, but personally, it's so much money,
55:34
but you know that at VMware's level, you see their revenue and all that, and you realize that for them it's not a big deal.
55:36
It's meaningless to them. Yeah, it's meaningless. Crazy.
55:42
That messes with your mind, you know?
55:44
Yeah, yeah. So it becomes this thing where, personally, your life could change, but this thing that we both were truly passionate about, the thing I wanted to work on more than anything else, would end, in a sense. Because I would probably get thrown into working on ESX or something, and you would get a manager
55:46
at VMware, not even the
56:04
CEO. The executives make it sound like they're going to do all this stuff with your products, but that's just one executive, a cog in corporate machinery. So we started getting cold feet, being like, if they're interested, maybe we're onto something. And if we're onto something, do we want to sell out early, in a way where our dream dies? That's why I called it a dream killer. And Armon, very maturely, and he's two years younger than me, so he's 21 at this time.
56:06
No, he sounds like the older one.
56:29
Yeah, yeah. He's very mature. And Armon very maturely came up with the, I forget where it comes from, but the regret minimization framework. He said, go think on your own and I'll do the same, and let's each come up with a number where, if we walked in the next day and they said, we're killing everything, you're going to go work on ESX for the next four years, because we were going to have a lockup no matter what, we would still say, cool, this was worth it. What's the minimum-regret number? We came back, and I don't remember exactly what our numbers were, but they were pretty close, and we ended up at a hundred. It felt so wrong. How could we possibly ask for a hundred? But we said, this is what we're going to do, and we stuck to it. So we went back and asked for a hundred, and it wasn't a no, but it wasn't a yes either. This one had a lot more hesitance. It was a lot more, we'll get back to you. Basically they came back to us and said, this requires board approval, so we're convening a board meeting next week. Unplanned, not when their board normally meets. We're convening the VMware board and we're going to vote on this. And then we heard that the vote didn't pass. That was that.
56:31
It's just crazy how such small things can influence things. If that had been a yes, who knows what your story would be.
57:49
The one person you.
57:59
You might have, you know, it's hard to say, but at VMware you might have been plugging away on
58:00
like this project. I mean, we hadn't built Terraform yet, so Terraform probably never would have existed. High confidence. I know whose vote it was, I know why they voted that way, I know a lot more details, but it worked out in my favor, obviously.
58:05
Yeah, so you've left HashiCorp and you're independent. And one cool thing about being independent is you can be very honest about stuff. There was this really interesting thread on Twitter where you wrote, ask me anything about the big cloud providers, because at HashiCorp you worked with all of them. What was your experience back then with Azure, AWS, Google Cloud? Your honest view of how they worked back then, and how have your views on them changed?
58:19
The precursor to that is, while I was at HashiCorp, I had to be very careful about what I said about any of the cloud providers, because we were partners with all of them, and I didn't want to insult anyone. So I was just very professional about all of those relationships.
58:49
And then like, we like all of
59:03
them, like, yeah, or just say nothing.
59:04
Or just say nothing.
59:06
If you have nothing nice to say, don't say anything at all. And then I left, and I still kept that up, because it was too close; I was still flying too close to the sun, as they say. And then enough time passed that I was like, ah, my opinion doesn't really matter. So, to answer your question, my broad view of all of them was that AWS was really arrogant. Annoyingly arrogant is how I'd describe it.
59:07
And when you say arrogant, can you help us understand? Like, how you worked with them, or which part of them, or is it just general?
59:31
I'll start by disclaiming, though, that we worked with so many people there, and there were individuals at all of them who were awesome and nice and kind. So I'm not trying to make individual judgments here; it's more about how all of it came together and how it felt as a whole. By arrogant, I mean it always felt like they were doing us a favor at every turn: in terms of partnerships, in terms of just getting a meeting with them. It always felt like, you should be thankful that we're spending time talking to you. And not just that; there was always this subtle vibe of, we will just spin up a product and kill your company. No one ever said that, but it kind of got to a point where it was, if we don't come to terms, we're going to build this service. It did kind of come to that.
59:39
But, you know, we did see that later on with Elastic.
1:00:23
Oh, that had already happened.
1:00:27
Oh, it happened already.
1:00:29
Yeah, just not with us, but with other people, with OpenSearch. And they always publicly spun it as, oh, it's so great, it builds the ecosystem, makes it larger, and we're doing it by the letter of the license. It all has elements of truth to it, but it's still not a nice thing.
1:00:30
No, I don't think people paying attention to open source appreciated what Amazon did there. It really hurt Elastic's business, and it showed how open source can be weaponized against a company that spends, you know, their blood, sweat and tears on it. And I guess HashiCorp, you had the same thing.
1:00:46
Right.
1:01:02
Because you were publishing permissive. Well, I mean open source needs to be permissive.
1:01:03
It was MIT or MPL licensed. So, yeah.
1:01:06
So Amazon could have spun up anything they wanted.
1:01:09
Yeah, there was a two-year period where I think the entire leadership team was terrified that at any moment a Vault service or something would pop up. So yeah, that's sort of my characterization of AWS. It really took teeth pulling, for example, to get them to help with the AWS Terraform provider. I don't remember the exact number, but we had something like five full-time engineers employed working on only the AWS provider for Terraform, which maths out, full benefits and everything, to like a million dollars a year. And all of that was pure open source, pure integration with a commercial entity, and they were not helping us at all. They were the last of any of the cloud providers to provide any sort of help there. And it came down to some drama where we went to a meeting and basically said that we're going to publicly announce the AWS provider is deprecated and we're done. The community could pick it up or whatever, but we're not going to.
1:01:12
Yeah, because you didn't get any help from them.
1:02:19
Yeah. And it was taking up too much work, and there were too many bugs, and honestly, AWS is shipping features too fast, and it's just not worth it. And that freaked them out, and finally they started helping. They might recount their side of things differently, but that's pretty much it. It felt like no movement for years, we said that, and movement started happening really fast. So yeah, there was that. Microsoft, I have the most positive view on. They had a really hairy technical product is how I'd describe it. It was very difficult to use Azure. Azure has a lot of nouns, like principals, and I still to this day, and I've integrated with the service, don't fully understand the IAM hierarchy of Azure. I just kind of bolted it together and got it working with a team, and that was that. So, technically kind of hairy. But from the business side, super competent professionals and team players is how I'd describe them. We went into every meeting with them, and in a lot of our meetings the first question was, how do we both win? That was the first question. Very pleasant. Awesome. They were the first to jump on board supporting Terraform. Sure, that creates some kind of bias, but they were consistent throughout the years. So, positive on Microsoft. And Google Cloud, you know, Google Cloud in general was always the best technology, the most incredible technology and architectural thinking, and I swear it felt like none of them cared or thought about the business at all. Every partnership meeting, we'd spend hours talking about the coolest edge cases and scalability and how this is going to work. I think the best public example you can just see in history is that they were the only company that, when they partnered with us to write the provider, spent a lot of time building this very good, I think they called it magic something.
They fully automated the whole thing, so when they shipped a new Google Cloud thing, it had a Terraform provider resource right away. And it didn't feel automated; it felt very ergonomic. It was really good. So they had that. But whenever we would get into, how do we do co-sell, how do we attribute your sales engineers' quota to selling infrastructure that's spun up by Terraform, how do we do this?
1:02:20
So like the business side of things,
1:04:48
Crickets. Impossible to get anyone. Not just impossible; even if you got someone, they would say something for 20 minutes and be like, okay, cool, we have two more hours, let's figure this other thing out. That's what it felt like. And the other disclaimer I'd give is that all this knowledge is circa, I don't know, 2019, something like that. So maybe in the past seven years things have dramatically changed, but that's what it felt like.
1:04:49
Yeah. Moving to open source. You're actively involved in open source today, and it seems open source is changing a lot, especially with AI, and you're seeing it at Ghostty. Can you tell us how open source has changed at Ghostty with the AI contributions? And what are you seeing with open source maintainers? It seems like there's a bit of drama, or worrying stuff happening.
1:05:13
Well, I would say more broadly, the issue facing open source today, I mean, there are multiple, but the one I feel is most prevalent across the industry right now, is AI contributions. Specifically, the signal-to-noise ratio being incredibly low, or in other words just being super noisy with low-quality contributions. It's stressing the system quite considerably.
1:05:39
Yeah. And so after you left HashiCorp, you started Ghostty. How many years ago was that? Was that like two years or so?
1:06:08
Well, I left HashiCorp a little over two years ago. I had poked around with prototypes of Ghostty maybe three years ago. But after I left HashiCorp, I started just kind of working on it, like 20 hours, much more than before, just because it was the thing that I had.
1:06:17
What drew you to Ghostty? What was your vision? Why did you start working on it? It's a better terminal, right?
1:06:32
It's a terminal. Better. Subjective.
1:06:38
Well, I installed it because I like it better. But yes, a terminal. An opinionated terminal, right?
1:06:42
Opinionated. Very modern, in terms of supporting as many of the newer specs as possible that enable functionality like displaying images, or clicking on your prompt to move the cursor, and dozens more examples like that. The original thing that drew me to it is the exact opposite of the good advice people usually give, which is that you find a problem, build a solution, and pick the best technology to solve it. What I did was find a set of technologies and ask, what can I build with these technologies? I went in the opposite direction. I had spent 12 years at HashiCorp and three years prior to that doing infrastructure open source, so 15 years in total thinking almost all the time about infrastructure and cloud services and things like that. And I felt that I was rusty. My skills had weakened on desktop software and systems programming to a certain extent, because I was so constrained by networking challenges and distributed systems, so low-level systems programming had atrophied. I had never really worked with GPUs. I guess crypto was happening, but I kind of ignored that whole trend, and this is pre-AI, but GPUs were obviously in use and I just felt like I had no idea how they worked. So I wanted to go to desktop. I picked all these different technologies, and I said, okay, Zig, because it looked cool to me. I just wanted to try it.
1:06:47
For those of us who, I'm not into Zig, I've heard good things about it. Can you explain why Zig is so interesting and innovative, and why does it grab so many devs' attention?
1:08:15
I don't know why it grabs other people's attention, but for me, it just felt like the best "better C" that I saw out there. And I'm someone coming from the position of actually having enjoyed writing C, so a better C sounds great to me. To me it's not very annoying in terms of, if I want to blow my own foot off, please let me blow my own foot off. A bunch of qualities came together where, on the surface, it looked cool. But it's very hard to judge a programming language on the surface, so I wanted to build something with it. So yeah, I picked Zig, GPUs, desktop software. What could I build? For all my time at HashiCorp, I built CLIs, and I was like, well, I live in a terminal, and yet I understand very little about a terminal. So why don't I build a toy project that's a terminal? That's how it started. And as with a lot of stuff, I find that once you dig beneath the layer of taking something for granted, you realize that everything is way more nuanced and complicated than you imagined it to be. Terminals were the same way: once I dug beneath the surface, I realized how much they were doing, how brittle some things were, how much better certain things could be. And I got sucked into being like, I want to do this better.
1:08:26
Okay, as someone who's a dev, I use terminals as well, and I'm going to ask the stupid question: how hard could it be? What does a terminal actually do? And then can you maybe tell us how Ghostty is structured, what are the things it needs to do, just to give a little empathy for all the work that you're doing?
1:09:40
Yeah, yeah. I actually get that question a lot, so it's definitely not a dumb question. It gets asked less now, but a lot of people say, I thought they were done. The most common feedback I get is, what is there to do in a terminal? So at a basic level, they don't do a lot. The thing is that what terminal developers want to do has grown significantly. But let me just give what they do. It's kind of like an application development platform. It's not an operating system, you're not dealing with hardware-level problems, but it is like an application sandbox that other applications run within, and those applications need to render text, colors, images, widgets, mouse events, all this stuff. The best description is, it's like a browser, but for text content. So all of the complexities that a browser has, a terminal has similar ones, at a smaller scale, but similar. And if you try to extend what a terminal is capable of, you start bringing in more and more problems. As soon as you bring images into a terminal, you introduce a whole new ecosystem of problems. But the tongue-in-cheek answer I like to give for Ghostty's complexity is that it's 30% terminal and 70% font renderer. And that's what it feels like. That terminal screen you see, whether it's GPU- or CPU-rendered, it's like you're drawing on a canvas. You are building a renderer for text, and everything kind of bubbles out from there. So from a rough architecture standpoint of Ghostty, I like breaking it down in terms of threads, because Ghostty is multithreaded. Most terminals are not, and I'm not saying that as a positive point; it's just a good way to describe the architecture.
We have a central UI thread, which just draws the windows and stuff. That's pretty standard for desktop software. Then we have an IO thread, which runs the actual shell you're seeing: any bytes we send to it, or that it sends back to us, are processed by the IO thread. And then we have a renderer thread, which is actually drawing it. The best way to think of it is that it's on a vsync clock, 30, 60, 120 frames per second, just sampling what the terminal state is and then drawing it. The renderer itself uses a font subsystem on the same thread. We have to take the fact that this grid has these sets of characters and map them into fonts, and do all of that on our own. A lot of people think, doesn't the operating system solve that for you? But it doesn't, unless you're working at a much higher level. You can't just easily draw monospace text that way; you have to really put the pieces together. That's the big picture. It's quite simple at that level, and then you extend all the functionality that terminals have into that.
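The three-thread split described here, an IO thread mutating shared terminal state and a renderer thread sampling it on a vsync clock, can be sketched roughly. This is a minimal Python illustration of the sampling idea only, not Ghostty's actual Zig code; all names are hypothetical:

```python
import threading
import time

class TerminalState:
    """Grid state written by the IO thread, sampled by the renderer."""
    def __init__(self):
        self.lock = threading.Lock()
        self.lines = []

    def feed(self, text):
        # Called from the IO thread as bytes arrive from the shell.
        with self.lock:
            self.lines.append(text)

    def snapshot(self):
        # Called from the renderer thread: copy whatever is there now.
        with self.lock:
            return list(self.lines)

def renderer(state, num_frames, out):
    # A real renderer runs on a vsync clock at 30/60/120 Hz forever;
    # here we just sample a couple of frames.
    for _ in range(num_frames):
        out.append(state.snapshot())
        time.sleep(1 / 120)

state = TerminalState()
io_thread = threading.Thread(target=state.feed, args=("hello\n",))
io_thread.start()
io_thread.join()          # shell output has been processed

frames = []
render_thread = threading.Thread(target=renderer, args=(state, 2, frames))
render_thread.start()
render_thread.join()
print(frames[-1])         # the last sampled frame reflects current state
```

The key design point is that the renderer never blocks on the shell: it only takes cheap snapshots of whatever state the IO thread has produced so far.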
1:09:59
So you're kind of building a 2D graphics engine, a little bit, with a heavy focus on fonts.
1:12:41
Yeah, yeah. From the renderer side, it's very simple. The renderer is actually not that complicated, and I don't want to overcomplicate it. The hardest part is actually maintaining the terminal state. The way terminals work is they're a grid of monospace cells, so you'll have, like, 80 by 24: 80 columns, 24 rows. And there are commands the program can send to move the cursor, or, think of it like a paintbrush: I can say, make the paintbrush red and bold, and everything after that is red and bold. And then change it. You're just maintaining that state and drawing. And then there's all the scrollback; people are used to scrolling back in terminals. That's where the challenge is: doing that in a fast, performant way, and that's what I try to do with Ghostty. There are so many benchmarks we run, but one of the most obvious ones that shows the speed, which also gets a lot of criticism, is just catting, reading a large file. If you just dump a bunch of text, how fast can it get through it? You'll see a stark difference between modern terminals. And I'm not just going to say Ghostty here; if you take Ghostty, Kitty, Alacritty, any of these newer terminals, they're all going to do great compared to Terminal.app on macOS or traditional Linux terminals. The criticism is, why does that matter? And the easy answer is, when you accidentally cat a huge file, a lot of people will force close the terminal. The creator of Redis posted a great comment on Hacker News about why he loves Ghostty, which is that he previously used to tail production Redis logs, which just spew output, and he used to have to send them to an intermediary file and then read them out later, so he could render them and actually work with them.
And he doesn't have to do that anymore, because Ghostty is fast enough that he can just let it dump while he's going through it, mentally parsing it, things like that. And that just saves him time. Yeah, there's something to be said for that.
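The grid-and-paintbrush model described above can be sketched in a few lines. This is a hypothetical Python illustration, not Ghostty's implementation, and it handles only a toy subset of the SGR (Select Graphic Rendition) attributes, which real terminals receive as escape sequences like `ESC[1;31m`:

```python
class Cell:
    """One grid cell: a character plus the attributes it was painted with."""
    def __init__(self):
        self.ch, self.bold, self.fg = " ", False, None

class Grid:
    def __init__(self, cols=80, rows=24):
        self.cells = [[Cell() for _ in range(cols)] for _ in range(rows)]
        self.row = self.col = 0
        self.bold, self.fg = False, None   # the current "paintbrush"

    def sgr(self, *params):
        # Tiny subset of SGR: 0 = reset, 1 = bold, 30-37 = foreground color.
        for p in params:
            if p == 0:
                self.bold, self.fg = False, None
            elif p == 1:
                self.bold = True
            elif 30 <= p <= 37:
                self.fg = p - 30

    def write(self, text):
        # Each character is stamped with whatever the paintbrush is now.
        for ch in text:
            cell = self.cells[self.row][self.col]
            cell.ch, cell.bold, cell.fg = ch, self.bold, self.fg
            self.col += 1

g = Grid()
g.sgr(1, 31)    # bold red paintbrush
g.write("ERR")
g.sgr(0)        # reset the paintbrush
g.write(" ok")  # plain text; the earlier cells keep their bold red
```

Cells written before the reset keep their attributes, which is exactly the "everything after that is red and bold" behavior he describes.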
1:12:47
At some point we should probably talk more about the fact that a lot of software these days does not care about performance, and I think it's refreshing to actually have examples. I hope we'll get back to it; we'll talk about AI, but it might not help. There's a level of craftsmanship, right? Not wasting resources, being efficient. I see it in my day-to-day life: we have more powerful laptops and phones, and they're not getting any faster. It's just frustrating at times.
1:14:44
It's kind of like the love of the game. A lot of Ghostty is just the love of the game. Like our renderer: as I disclaimed before, it's not complicated. I'm never going to say that Ghostty is like a 2D game, because a 2D game, from a rendering standpoint, is much more complicated. But I do care a lot about the renderer, and we got it down to, for a full-screen grid on my Mac, each frame update takes roughly something like 9 microseconds. That doesn't include the draw time; that's just taking the state and submitting work to the GPU. It's about nine microseconds, and then the GPU takes some time. At 120 hertz, a 120-frames-per-second frame is 8,333 microseconds. So if you have nine, and again, we don't have the number for how long the GPU takes, but it doesn't take much time at all.
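The frame-budget arithmetic quoted here checks out. As a quick illustration (plain arithmetic, nothing Ghostty-specific):

```python
# Frame budget at a given display refresh rate, in microseconds:
# one second divided by the number of frames per second.
def frame_budget_us(hz):
    return 1_000_000 / hz

print(round(frame_budget_us(120)))   # 8333, the figure quoted above
print(round(frame_budget_us(60)))    # 16667
```

So a ~9 microsecond CPU-side update consumes about 0.1% of a 120 Hz frame budget, which is why he says even 2,000 microseconds would not have mattered.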
1:15:13
You're leaving a lot of headroom to work with.
1:16:09
What I'm saying is, we could have made it 2,000 microseconds and it wouldn't have mattered. You would still get that performance. But that's not fun. I want to make it sub-10.
1:16:12
I like it. The fun.
1:16:22
Yeah. So we spent a lot of time on it. It was a big thing; I blogged about it. We got it down from about 800 microseconds to like 9, and I thought that was awesome, even though for end users it doesn't make a difference.
1:16:23
But as you say, the craft and the love of the game. So when you started out building Ghostty, that was around the time when I think ChatGPT was out, and there were some tools. How did your toolset change in terms of how you're developing day to day?
1:16:38
There are two sides to that. One, AI gave a huge boost to terminals, which is a funny thing. How so? Because of Claude Code and all these things, the amount of time spent in a terminal has gone up. If you'd told me in 2023 that terminal usage would go up, I would have said, no, it's not going to go up. I had no illusions that I was going to save terminals, and I didn't. AI came out with all these CLI tools, and even when you see Codex apps and Claude apps leaving the terminal, they're still executing so many things in a pseudo-terminal. The number of terminals out there is massively larger than there was in 2023, which is hilarious. Oh, wow. Yeah.
1:16:50
So random.
1:17:37
Super random. And so that's part of why one of the things I'm doing with Ghostty is extracting, it's actually extracted already, what I've called libghostty. Everyone reinvents this very small surface area of a terminal, and because they do it, all sorts of things break. If you run a docker build, or push to a platform like Heroku, and you do enough weird things in the terminal that aren't actually that weird, like drawing a progress bar, it renders like chaos all over the place. And it's just because they've poorly implemented a tiny subset of a terminal, because terminals are more complicated than people think. So libghostty is this minimal, zero-dependency, MIT-licensed library so people can embed terminals anywhere. I'm really just tired of seeing broken terminals everywhere, so please use this. Okay, that's the one angle, which is really funny. The other angle is actual AI usage. It's hard to say. I'm a big fan, within the right categories of things. I think it's a revolutionary tool and I get a lot of joy using it. I use it every day. I use tools like Claude Code and Amp and Codex, and the chat tools, every day for some aspect of my life. And it's really allowed me to choose what I want to actually think about. I think that's the most important thing. I always felt limited in terms of, oh, I'm going to have to spend the next two hours doing this boilerplate, annoying stuff that I don't want to learn about. Now I don't have to learn about it. Yeah, I'm not getting skill formation in that category, but I can now spend those two hours doing something else, and that's the best part to me.
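The broken-progress-bar failure mode mentioned earlier comes down to handling control bytes that tools emit when they redraw a line in place. A minimal sketch (hypothetical, not libghostty's API) of why a naive "just append the bytes" renderer mangles progress bars:

```python
def naive_render(stream):
    # Treats control bytes as ordinary text, so every intermediate
    # redraw of the progress bar piles up in the output.
    return stream

def cr_aware_render(stream):
    # Carriage return (\r) moves the cursor to column 0, so the next
    # characters overwrite the line in place. This is how progress
    # bars actually redraw themselves.
    line, col = [], 0
    for ch in stream:
        if ch == "\r":
            col = 0
        elif col < len(line):
            line[col] = ch
            col += 1
        else:
            line.append(ch)
            col += 1
    return "".join(line)

# Three redraws of the same progress bar, separated by carriage returns.
progress = "[#   ] 25%\r[##  ] 50%\r[####] 100%"
print(cr_aware_render(progress))   # [####] 100%
```

The naive version keeps all three intermediate states jammed into one string; the carriage-return-aware one shows only the final bar. Real terminals handle a far larger set of sequences (cursor movement, erase-line, colors), which is the "tiny subset" that embedders get wrong.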
1:17:38
In your workflow, do you just use a single agent? Do you use multiple agents? Have you experimented with them?
1:19:15
I've tried a bit of everything. My standard workflow: I endeavor to always have an agent doing something at all times. Maybe not when I sleep; a lot of people do go that far, but I don't. While I'm working, though, I basically always want an agent going. If I'm coding, I want an agent planning. If they're coding, I want to be reviewing. There should always be an agent doing something.
1:19:20
So you have a separate tab.
1:19:48
Yeah, separate tab. And sometimes it's multiple. There's a lot of work I do around cleaning up what agents produce, and I don't run Gastown-esque things where I'm the mayor, so to speak, so I don't want to run too many. I don't find it that fun to clean their stuff up. But periodically I'll run two in competition with each other, because it's a harder task and I don't have high confidence that they're going to just crush it. So I'll run Claude versus Codex or something like that. Or I'll have one coding and one doing some sort of research task. I absolutely love them for research. And then I'll be doing something else. But no more than two, I would say.
1:19:50
The code that they generate, do you always review it, or have you gotten a bit more loose? Some people swear by closing the loop, having validation for it. Or are you still like, I want to see the exact code, and I'll review whether it's correct and what I expected?
1:20:32
It matters what I'm working on. If it's Ghostty, I'm reviewing everything that's going into it. If it's, like, the personal wedding website I set up for one of my family members, I don't care at all what the code looks like. Did it render right in the three browsers I tried? Yes. Did it render right on my phone? Yes. Don't care what the code looks like. Does it make any network calls? No. Has no secrets access. I don't care. Ship it. It's only going to be online for two months, so ship it.
1:20:47
Yeah. And then how did the AI policy at Ghostty change? I remember maybe a year ago or so, you asked for disclosures if someone was using it. And just very recently you kind of cracked down and said, all right, no more.
1:21:12
Yeah, and we're changing again too. Well, not changing, iterating. So, yeah, a year ago we started asking for disclosure, and the very fair question there is, what does it matter how the code is produced? The reason it always mattered to me is that it dictates how much effort I put into fixing it. Because if you produced the code with AI and did it really quickly, then I'm not going to spend hours fixing up your code. You spend your time.
1:21:27
Yeah, because you know that person didn't put much human time into it, and you're trying to mirror that. Right.
1:21:59
It's effort for effort. If you put in hours, I'm going to put in hours back and I'm going to help you. But if you put in a few minutes, never read anything, and threw it over the wall, then I should be able to read it in a few minutes, say no thank you, and close it. That's fair, and I need to understand which it is. And it's not about bad code, because open source has always gotten bad code contributions. The difference is that before, those bad code contributions usually came from people who were genuinely trying their best and put in a lot of effort just to get to that bad-code point. So people behave differently. I would always try to reciprocate by recognizing, this is someone very junior, or someone new to the project, and I'd try to educate them, like, okay, we should do this better, and give reviews. But if it's bad code with low effort behind it, I'm not going to give a careful review. So again, I wanted to know these things, and the disclosure worked decently well. The issue wasn't the disclosure. The issue was that the quantity of low-quality AI PRs we were getting reached a point where it was too high.
1:22:05
Do you know why that might have happened? Did more people instruct agents to contribute a PR to fix an issue they had? Do you have theories, or have you actually seen evidence of why this happened?
1:23:09
I have theories, and I've seen some evidence. Obviously there's the rise of AI usage in general, but the real trend, the step change I saw at a certain point, and I don't know exactly when it happened because I don't use agents this way, is that agents started opening PRs. Before, you'd generate code, and maybe the agent would commit and such, but you would still push it to a branch and open the pull request yourself. At a certain point, agents started opening the PRs, and there was a dead giveaway that it's AI, because at least to this day, at the point we're recording this, the way Claude opens a PR is that it opens a draft with no body, then edits the body in later, and then reopens it for review.
1:23:21
Which is not how a human would do it.
1:24:03
Oh, like one human a year would do that, and now it's happening three times a day. So even if they're not disclosing AI, or they're hiding it, it's obvious. And it happens at a speed that's unrealistic: the PR opened, the body came in less than a minute later, and it was reopened less than a minute after that. Pure AI. I just tweeted about this a couple days ago: I wish these agentic tools would put a pause on opening PRs for a second, because I think that's the point where it's really causing a lot of friction.
1:24:05
How did you change the policy? Are you considering closing down PRs? You mentioned recently that the thought crossed your mind.
1:24:33
I would say I was crashing out in that moment, but kind of, yeah. So we shipped this policy update where PRs written by AI are no longer allowed unless they're associated with an accepted feature request. So you can't just drive by and be like, I did this thing that I've never talked to you about, here you go. We get about two or three of those a day, and we just close them. I literally don't even read the content. I can see it's AI, I can see there's no fixes-issue number, I just close it. No idea if the code is good. Don't care. It's just policy; don't have time for that. That's pretty much where we landed currently. And we're recording this in the middle of another transition, for which I already have the PR open, where we're going to switch to an explicit vouching system for the community. You're no longer able to open a PR at all, AI or not, don't care anymore. Which, for the people who criticized: where it came from doesn't matter anymore. Now all that matters is that another community member has vouched for you. If they vouched for you, you're added to a list where, indefinitely, you can open a PR. If you behave badly, then you, the person who invited you, and the entire tree of people they ever invited are blocked forever from the repo.
1:24:41
This reminds me a little bit of the social site Lobsters.
1:26:00
Lobsters, yes. That's what it's based off of. So the idea is that you're putting your own reputation on the line by vouching for somebody else. I'm a reasonable person. If this happens and I, or one of our maintainers, or the community made a mistake, and you just hop into Discord or email and seem like a reasonable, apologetic person, I'm not going to spend a lot of time on it. There's not going to be some mock court type session. I'm just going to be like, okay, I'll give you another chance. So yeah, we're moving to that system. One thing I should say is that this is inspired by Lobsters, but specifically in the AI space it's inspired by this project called Pi. They do this.
1:26:03
They bill it as a self-improving...
1:26:43
So, like, a build-your-own-agent toolkit. So, kind of ironically, it's an AI tool, but they care a lot about code quality and anti-slop and things like that. So they have a similar mechanism. A little bit less of the tree, and some other differences, but similar: you can't open a PR unless you're vouched for. And the other difference with what we're going with is that in addition to vouching, where you positively mark someone, you can actually denounce users. So if there's a bad actor, you can ban them, so they can't even attempt to contribute again. And yeah, we had one yesterday where someone opened a PR, and we closed it because it violated the policy: there was no associated issue, and it was AI. And then they just reopened it. Not the same one; they resubmitted a new branch and opened a new PR less than 10 minutes later. I was like, oh my gosh. So stuff like that, the problem is, it's just wasting time.
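The vouching model he describes, an invite tree where banning a bad actor also blocks the sponsor and everyone the sponsor ever vouched for, can be sketched as a small data structure. This is an illustrative reconstruction of the rules as stated in the conversation, not the actual implementation in Ghostty or Pi; all names are invented.

```python
class VouchRegistry:
    """Toy model of vouch-to-contribute with cascading bans."""

    def __init__(self, roots: set[str]):
        # parent maps each contributor to whoever vouched for them
        # (None for the founding maintainers).
        self.parent = {r: None for r in roots}
        self.banned: set[str] = set()

    def vouch(self, sponsor: str, newcomer: str) -> None:
        if not self.can_contribute(sponsor):
            raise PermissionError(f"{sponsor} cannot vouch")
        self.parent[newcomer] = sponsor

    def denounce(self, bad_actor: str) -> None:
        # Per the rule described: ban the bad actor, the person who
        # invited them, and the entire tree that sponsor ever invited.
        sponsor = self.parent.get(bad_actor)
        root = sponsor if sponsor is not None else bad_actor
        self.banned.update(self._subtree(root))

    def can_contribute(self, user: str) -> bool:
        return user in self.parent and user not in self.banned

    def _subtree(self, root: str) -> set[str]:
        # Collect root plus everyone transitively vouched for under it.
        out, frontier = {root}, [root]
        while frontier:
            node = frontier.pop()
            for child, p in self.parent.items():
                if p == node and child not in out:
                    out.add(child)
                    frontier.append(child)
        return out
```

The design choice worth noticing is that punishment propagates through the sponsor, which is exactly what makes vouching a reputational stake rather than a rubber stamp.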
1:26:47
It feels like most of open source will have to change because of AI, right? You probably know many more maintainers, but your story is not the only one I hear. Projects have closed down PRs. GitHub is, I think, just shipping a feature so projects can automatically close or reject PRs.
1:27:41
Yeah, I think open source will have to change in a lot of ways. I forget who wrote this, but one of the logical extremes is: if agents are so good, you don't need open source anymore, because you could just build it yourself. Theoretically, yes. That's the extreme. I don't subscribe to that extreme, but it's one of them. The issue is that there used to be this natural back pressure in terms of the effort required to submit a change, and that was enough. Now that has been eliminated by AI. I like the wording that Pi uses, which is that AI makes it trivial to create plausible-looking but incorrect and low-quality contributions. That's the fundamental issue. Open source, to a certain extent, has always been a system of reputation: you earn some trust and you get more access. That's how it's supposed to work. But that reputation system has been taken advantage of, in a certain sense, with AI, or the default-allow on PRs has been taken advantage of. And so I think this vouching system that we're proposing for my project is very true to what open source is, which is that open source has always been a system of trust. Before, we had default trust; now it's default deny, and you must earn trust from somebody.
1:28:02
Do you think we might see a lot more forking happening though?
1:29:17
I hope so.
1:29:19
I hope so, because until now forking was, you know, frowned upon a little bit, because it was a lot of effort to keep up. It never seemed viable to fork a proper project, right?
1:29:20
Yeah. And, okay, separate from AI and everything, I have always been a huge proponent, or I guess in the past few years I've been a huge public proponent, of there being a lot more forks. Like, a lot more forks. Because I think one of the reasons open source maintainers have been taken advantage of to some extent is that contributors have some sort of entitlement, whether it's toxic entitlement or not. But there's some sort of entitlement, which is: I've made a valuable change, and it's clean and it works great, so you should accept it. But you really don't have to. You absolutely don't have to. And I've seen this time and time again, where you have a high-quality, perfect PR, but you say no, and there's anger in the community. But the thing is, and I've said this since 10 years ago in the HashiCorp days, hitting the merge button is the easiest step. Getting to and hitting the merge button is the easiest step. Undergraduates should be able to do that. It's after that, the years of maintaining whatever you just merged within the context of your roadmap, the bugs, customer needs, all that stuff, that's the hard part. You're signing up to keep this forever; it's very hard to remove features, or remove anything. So the core privilege you get with open source, like OSI open source, is forking, and you should take it. That's the right you got. You should fork it and maintain your own software.
1:29:33
Yeah. One interesting impact of AI: someone tweeted about a rumor that big tech is looking into rearchitecting their monorepos because of agentic tooling, AI tooling, just a lot more code being churned out. What's actually happening? What's the problem with Git?
1:30:57
The problem with Git... I mean, I think there are a lot of problems with Git, but the monorepo problem with Git is that Git is relatively bad at very large repositories, because you pretty much have to clone the entire repository. There are some extensions to fix that, but official mainline Git can't really do it, right? And so very large repositories are sort of annoying to maintain. And then if you have a lot of churn, it's very hard to get changes into whatever your trunk is, your main, your master branch. You constantly have to rebase. Merge queues solve that to a certain extent. I think merge queues work for humans at a certain scale, but the queues can get quite deep. And then if you 10x that, conservatively 10x that, and if you buy into the hype cycles and 100x or 1,000x that, I think it gets completely untenable in terms of how you ever get any semblance of cohesiveness onto the main branch quickly. So yeah, I think there's a confluence of problems there: the merge queue problem, the disk space problem, the branching and review type problem. Oh, I also tweeted another time about how with Git you branch and push up your branches, but the branches that survive are only the positive ones. When you close a PR and don't accept it, you pretty much lose the branch. With GitHub you can re-access closed PRs, but a lot of people don't even get to the PR stage. They experiment, realize, oh, this isn't the right way, and never push the branch. And that's relatively important information. Not as important as the positive results, but I think there should be a lot more branches in Git, a lot more information that we just never throw away.
To me, we're sort of at the Gmail moment for version control, like email was, where you used to really have to curate and delete all your email, and then Gmail came out and gave a gig away for free to everybody.
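The merge-queue arithmetic in that answer can be made concrete with a toy model. The numbers here are mine, purely illustrative: if each landing occupies CI for a fixed time, throughput is capped, and any sustained arrival rate above the cap makes the backlog grow without bound.

```python
def queue_depth(arrivals_per_hour: float, ci_minutes: float, hours: float) -> float:
    """Backlog after `hours` in a toy merge queue.

    Each landing occupies CI for `ci_minutes`, so throughput is capped
    at 60 / ci_minutes changes per hour. Arrivals above that cap pile
    up linearly forever.
    """
    capacity_per_hour = 60 / ci_minutes
    backlog_rate = max(0.0, arrivals_per_hour - capacity_per_hour)
    return backlog_rate * hours

# A human team landing 4 changes/hour against 10-minute CI keeps up;
# 10x the churn from agents does not.
human = queue_depth(arrivals_per_hour=4, ci_minutes=10, hours=8)   # 0.0
agents = queue_depth(arrivals_per_hour=40, ci_minutes=10, hours=8)  # 272.0
```

It's deliberately simplistic (real merge queues batch and speculate), but it shows why a 10x or 100x change in churn is a phase change, not just "more of the same".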
1:31:12
Never had to think about it.
1:33:10
Their tagline or something was, like, never delete email. I remember seeing that in some sort of marketing. Archive it, right, never delete it. And that's where I feel like we should be with code: just huge repos, a lot of context. We need better tooling to find the relevant context in that Git repo, or version-controlled repo. You asked for real examples. I do advise a company that's currently in stealth but working in this space, and the real examples are driven by the heavily agentic companies, the companies that are going really all-in and drinking the Kool-Aid. They're struggling because the amount of churn these agents cause is so much greater than humans. And it's not an AI review problem or anything. It's really just a release problem: managing the merge queues, humans getting access to the right set of data in the repository, things like that.
1:33:11
So are the problems mainly performance problems with Git, or also the workflow of...
1:34:03
Yeah, yeah, all of it. Performance for sure, but workflow too. I mean, every time you pull, there's another change, so you can't push. Every time you push, there's another change.
1:34:09
Oh, yeah, there's a lot of parallel work.
1:34:18
Yeah.
1:34:21
As well. Do you think Git will be around in a few years?
1:34:21
Who knows? But what's interesting is this is the first time in like 12 to 15 years that anyone is even asking that question without laughing.
1:34:25
We're not laughing.
1:34:35
Right. Like, if five years ago you asked, will Git be around in five years, you'd be like, are you serious? Of course it'll be around. It's crazy to even think about, right? But now people can ask that question, and of course some people will laugh, but there are people who genuinely think that Git might not be around in five years.
1:34:36
Well, I think you do want to save the prompt history, because often reading the prompt is actually... if it's a bunch of generated code, the pull request is meaningless.
1:34:51
Changes will happen. Like, Git and GitHub, forges in their current form, do not work with agentic infrastructure, and it's nascent today. So yeah, change will happen. I'm not exactly sure how, and it's not something I'm trying to change myself, but I'm on the receiving end, as an agent user and a maintainer, where I'm like, this isn't working.
1:34:57
What other engineering practices that have been relatively stable for 10, 20 or even more years do you think have to change, or look like they will change? Thinking of things like CI/CD, testing, code review, other ways of...
1:35:22
Yeah, you know, Amp has a saying which is kind of clickbaity, but it's so true: everything is changing. And this is the first time, really, in my relatively short compared to some people, but still 20-year, professional career where it feels like so much is on the table for change at one time. I'm an optimist, so that's really exciting to me. It's a lot of fun. We've never seen so much editor mobility. Editors used to be one of those things where once someone picks an editor, it's very hard to get them off it; they're stuck. The level of editor mobility in the past few years, between VS Code and Cursor and just jumping around, is unreal. So there's a bunch of mobility there. I mean, Cursor itself is a great example of a company that reached an insane valuation that you could never have gotten pre-AI on an editor product. So: editors, forges, CI/CD for sure. And I think testing in general, because to make an agent better, it needs to be able to validate its work. Even the best test suites... I mean, the best, I guess, have full coverage, but that's very extreme. The very good test suites test one of the edge cases, one of the happy cases, a bad case, and they just kind of go through, and if it passes, it's probably good, paired with a human who's thought about the problem. But AI is more goal-oriented: it wants this feature to work this way, and if it doesn't see a spec or a test somewhere saying that other things should work in a different way, it'll just break them on the path to its own goal. I've heard this called a lot of things; the one I like the most is harness engineering.
One of my goals for this calendar year has been to spend more time doing that, which is: anytime you see AI do a bad thing, try to build tooling that it could have called out to, to have prevented that bad thing or course-corrected it. So it's sort of moving from working on the product to working on the harness for the product, or for product development. So yeah, there's a lot of that, where I think testing has to change to be far more expansive. But CI/CD is not set up, just resource- and performance-wise, to be able to do stuff like that. So I'm not sure how it will change, but that's going to change too. Everything is on the table. It's really interesting.
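That harness idea, turning each observed agent mistake into a cheap check the agent can run before declaring itself done, might look like this minimal sketch. The specific checks are invented examples, and this is one possible shape for such a harness, not a description of any particular tool.

```python
import subprocess
from collections.abc import Callable

# A check is a name plus a zero-argument function returning pass/fail.
Check = tuple[str, Callable[[], bool]]

def no_todo_markers() -> bool:
    # Example check born from a past mistake: fail if the agent left
    # TODO placeholders behind in the (hypothetical) src/ tree.
    result = subprocess.run(["grep", "-rn", "TODO", "src/"], capture_output=True)
    return result.returncode != 0  # grep exits 1 when nothing matches

def run_harness(checks: list[Check]) -> list[str]:
    """Run every check; return the names of the ones that failed."""
    return [name for name, fn in checks if not fn()]
```

Each time the agent misbehaves in a new way, you append one more check, so the harness accumulates the project's hard-won "don't do that" knowledge in an executable form.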
1:35:36
Yeah. And a lot of tools to be built. One other thing: observability.
1:38:03
Yeah. And on that same topic of volume and scale and observability, there's also the sandboxing. Even being in infrastructure, heavily into infrastructure, you know, containers blew up; the amount of minimal compute units we have floating around everywhere. I didn't think that was going to go up the way it has. I mean, it would go up predictably, but I didn't think it was going to slope-change up. And it already has slope-changed up, just due to the sandbox environments that agents need. And that's super interesting to me, because it stresses a whole lot of new systems. The things that I worked on, all the products I worked on, but also things in the ecosystem like Docker and Kubernetes, are going to be stressed significantly, because they're engineered for some level of scale, but this is a different type of, particularly non-production, workload scale that you have to support. So yeah, fun problems.
1:38:09
Going back to hiring: you've hired a lot of engineers, and you previously talked about something really interesting, I think in the context of HashiCorp, about how some of the best engineers you've hired had really boring backgrounds. Can you talk about that? Who were the best engineers you hired, and how would you frame it?
1:39:06
Yeah, I stand by this. Most of the best engineers I can remember from my time at HashiCorp, but also in every job that I've had, are notoriously private, and not because they want to be private, but because they just don't care to be public, I guess would be the better way to put it. I don't want to carefully describe anyone and give them away, but, you know, they often don't have social media profiles. They honestly are nine-to-five engineers; they go home, they don't code at night, they just spend time with their family. But because they don't do anything else during their working time, they're locked in, and they're really good. And it's not just about putting in the hours; skill-wise they're also super strong. So when I was reviewing resumes and stuff, I always found that when you see the person with no GitHub, not even a GitHub account... some people say you have to have public contributions to stand out, and that is a way to stand out. But if you have zero public contributions and you've just worked at companies I've never heard of before, that's also interesting to me: okay, you might know something deep. The problem, and the ironic thing, is that I spend a lot of time on social media, and these engineers are better than me. The funny thing is, every moment you spend on social media is zero-sum; it's taking away from something else. And the issue is it's not one-for-one, because as every engineer knows, the time it takes to really get your mind into flow, to get going with something, varies, but it takes time. So when you context-switch to social media, if something's compiling and you tab over and spend time there, you've given something up in terms of thinking.
I spend a lot of time on social media, maybe an unhealthy amount, but I also spend maybe an unhealthy amount of time at night. I don't have insomnia, but it takes me a long time to fall asleep, and it's because I just sit there in the dark. Some people do this in the shower, but that's not long enough for me. I love to just sit in bed, lights off, my wife sleeping, and think things through: I'm writing code in my head, I'm thinking through products, I'm thinking through website copy, I'm running CLIs in my head, imagining how it's going to feel. And sometimes... last night I went to bed at 9:30, because I'm a dad, so I go to bed early, and
1:39:23
you have to wake up and you don't know when you have to wake up.
1:41:50
Yeah. Yeah. And I didn't even feel like I'd been up that long, and I was like, oh, I've got to go to the bathroom, and I should really actually go to sleep. And I looked, and it was 12:30. And all I was thinking about, it's so dumb, but all I was thinking about was this vouching system: how vouching might work, how it might not work. And I've always had this thing where I like competing; I think competition's fun. I always feel it's fair game to compete with anyone in the product-building space, because I think I'll spend more time thinking about it than they will. I think people turn it off, and I try not to turn it off. So yeah, the point of all that is: the best engineers are the ones that context-switch the least.
1:41:52
Having used AI agents, do you think this might change? Because, you know, these agents can go and think or do work for you. How would you hire in this new world, where using AI is kind of a given, where most devs will prompt and fewer and fewer will write code, even though the best devs clearly know how to write code as well?
1:42:33
I would definitely require competency with AI tools. You don't need to use them for everything; that's not important to me. But it's an important tool to understand the edges of. It's like any other tool: sometimes it's useful and sometimes it's not, but if you ignore it completely, you're going to do something suboptimal at some point. The best example to me is proof of concepts. Constantly, in real product organizations, you have an idea and you need to demo it out to figure out if it works. I would much rather someone just throw slop at the wall, something you're never going to ship, and spend a day, maybe less than a day, doing that, rather than spend a week doing it organically as a human. Because you're going to throw it away anyway. You might throw it away because it's a bad idea, but I'd rather prove it out, so just slop it up. And this is why it's so nuanced: I get so worked up about sloppy PRs to open source, but it's because there's a time and place for them, and that's not it. So I would hire in that way. And the other thing, and I don't know if it's the right thing to do, but I would strive for that goal that I have: for everyone to have an agent running at all times. It doesn't need to be coding, but it should be doing something extra for you. I do it while driving; that's my biggest one. On the drive here, I had some deep research going. And I always spend 30 minutes on the boundaries: when I wake up, before I stop working, before I leave the house, I spend 30 minutes asking, what's a slow thing my agent could be doing next? I knew I was going to drive here for an hour. It finished far faster than an hour.
But, you know, it was just like, oh, I need to do some library research. Okay: find all the libraries that have these properties and are licensed in this way. I was looking up some HTTP/3 stuff, QUIC stuff, so: build that ecosystem graph for me. Right before I left, I was working on something to do with this vouching system, and I didn't quite understand the edge cases of what I was doing. I will think about that manually, but why not also start an agent to look at the repo? I use Amp to consult the Oracle: think deeply about what the edge cases might be, what am I missing? If I had another two hours to work, I wouldn't need the agent to do that; I would have done it myself. But I don't, so why not have it do it? It's just part of my goal to always have one going. And unfortunately I don't have one going right now, because they finished it all.
1:42:59
Interesting. And so with this agent running in the background, do I understand correctly that it's now so natural that it doesn't get in the way of your own thinking? You do your own thinking and your own work, but every now and then you glance over and ping it or start it. It's not distracting, right? Because I think that's...
1:45:31
Yes. All the agentic tools do desktop notifications, and I actually turn those off. I think desktop notifications are, for the most part, a mistake. I choose when I interrupt the agent; it doesn't get to interrupt me. And then there's another aspect where I think my engineering has changed, where I try to identify the tasks that don't require thinking and the tasks that do, and delegate the non-thinking work to an agent. Sometimes it just feels productive to do the non-thinking tasks, and you're like, yeah, I did a lot today. But a lot of the time I just try to delegate that out. There are a lot of people who say you end up thinking less, and I think if you use the tools wrong, you do think less, because you just launch an agent and, I don't know, go watch YouTube or scroll social media. But if you instead view it as a way to choose what you think about, then you don't need to sacrifice that thinking. The problem is the majority of the population probably won't do that.
1:45:47
Yeah, it's good food for thought, and it's good to hear how you're using it and that it's working for you. When did you start to have this second agent running? What made the switch? Was it the models getting better, or...
1:46:53
Yeah, I don't remember which model it was, but there was a certain point. I tried Claude Code right when it came out, which was like March or May last year.
1:47:05
Yeah, it was March. The beta.
1:47:12
Yeah.
1:47:13
And the May public release.
1:47:14
Okay. I don't think I used the beta, so it was probably May. I wasn't super impressed, honestly. But then really quickly, by the summer, at some point during the summer... oh, I remember: I saw so many positive remarks about it that I started to get scared that I would be behind on how to use the tool. So I started forcing myself. I still didn't believe in it, so I would do everything manually, but I was forcing myself to figure out how to prompt the agent to produce the same quality result. I was working much slower, because I was doubling the work, and it was more than double because the agents are slow and we're going back and forth and I already had the work done. But I was forcing myself to do it. And you find stuff where I couldn't figure it out; it just wasn't there yet. But then I found other stuff where I naturally got to the same point that thousands of other people got to: oh, if I do a separate planning step, it does so much better. Everyone got there. And then I figured out, oh, if I have a better test harness for it to execute, it does a lot better. And I think everyone starts with no AGENTS.md or CLAUDE.md or anything, and I realized the same thing: oh, if it makes a mistake and I add that to AGENTS.md, it never makes that mistake again. These are just incremental things that I recognize when I see people that are new. I've lurked on a couple of live streams where kind of anti-AI people try AI, and it's one of those things where I'm like, they're just swinging the hammer way off, right? It's as if someone tried to adopt Git, used it for an hour, and decided they weren't more productive with it.
Like, it takes much longer than an hour to get proficient with Git, but you put in the effort and then you reap the rewards later. And it's the same thing to me with AI tools.
1:47:15
What would your first advice be for someone who's, like, not...
1:49:15
My first advice would be to reproduce your work with an agent. And if you really, really don't want an agent to code, reproduce the research part of your work with an agent. There are a lot of people who say, I don't want it to write code for me, for whatever reason. Fine, then delegate some of the research part instead. There are so many places it could be helpful. You don't need to buy into the "it must replace you as a person" kind of propaganda. You can just find the corners of your work and replace those parts.
1:49:18
One thing you do is give advice to potential founders, because you're a successful founder: you've had an exit, you built up this awesome company. You get a bunch of emails from people asking, hey, I want to be a founder, what's your advice? And you wrote about this, you shared the email. Can you tell us what advice you typically give people, and how is it received?
1:49:51
Well, I usually ask for something more specific, because if someone asks, what could I do to be successful? One, I always disclaim that you're consulting someone with survivorship bias, so you need to take that into account. I'm willing to share my experience as a survivor, but just understand that there's survivorship bias. But usually I ask for something more specific: what are you trying to do? And we usually get to things like, should I open source my project or not? Should I be remote or not? Should I do enterprise? The most general advice I usually give people is that startups take much longer than you think. I say imagine 10 years. A lot of people say five years, but I say imagine 10 years. Is this really something you want to work on for 10 years? And you need to have a certain amount of hubris in order to say, I'm going to work on this for 10 years and I truly believe I'm going to do it better than anyone else. There's no substance behind that other than hubris. So you need a certain amount of ego and hubris in your head to make that commitment, but not so much that you're blind to change coming in. That's usually the first advice I give, because a lot of people have cool ideas but are going to burn out relatively quickly. So that's where I start.
1:50:12
So currently you're advising some companies. What are you seeing with them? What are founders doing these days? What are they doing differently than earlier? How's that landscape?
1:51:28
Again, it's really contextual. If you're an AI startup, it's very, very different.
1:51:39
How are AI startups working differently?
1:51:44
There's a lot of pressure to go faster than I've ever seen for any startup. I think the industry is moving so fast that I don't advise any AI startups, but I've talked to some of them, and even as an advisor I feel like it's too much pressure, because they are just being pushed to prove themselves quickly, whether through traction or revenue or something. There's this mentality within that ecosystem that AI should allow you to go crazy fast, and in addition, there are a lot of companies moving crazy fast. So the change is happening. I think that's the one thing. Outside of that, like I said, there's just a ton of opportunity in every space. Otherwise it's a lot of the same stuff: remote versus not remote, open source versus not open source.
1:51:47
Do you see the role of software engineers changing now, especially at the AI-native companies, where engineers like yourself are actually being way more productive? They can produce a lot more code, a lot more output. Are they being pushed into wearing more hats, talking to the business, being a bit more like a mini founder, if you will?
1:52:32
I hesitate to say more productive. I view it as there being an expectation that they can do more. I don't think that's necessarily more productive, but it's more like: you should be able to, for example, build a full demo and design everything yourself. You don't need a team to do that anymore, right? At least from a demo perspective, there's no reason not to, because again, you can ship slop for that. That's fine. You should be able to research effectively and, in a sense, handle more vague tasks. I'm seeing that a lot more; the capacity to experiment is so much higher, I would say. But when it turns into productionizing something, it feels similar to what it's always been. I think there are a lot of companies trying to match the shipping pace of the AI companies, and I think that's a little scary.
1:52:51
Yeah, they look at Anthropic and they're like, oh, they built Claude Cowork in 10 days and it'll be a billion-dollar product, and they're freaking out about why they're not doing that.
1:53:45
I think a big change is from the pre-seed perspective, where you used to say, I need to raise a seed round in order to build a prototype. Now it's, show me the prototype, because you should be able to build that really quickly. For most things, anyway; there's still hard tech out there where you can't do that.
1:53:53
So you do a bunch of coding, and you do a bunch of thinking about coding as well, even as you're trying to fall asleep. What refills your bucket outside of coding, outside of tech?
1:54:08
Obviously the stereotypical things, like just taking breaks and being with my family. But I think the biggest thing is, you know, I am introverted, so quiet solo time refills the most energy for me. I live pretty close to the beach, and if I'm in a bad mentality, things aren't working, I'm feeling unproductive, something's going on, just closing my laptop and taking a walk outside, stuff like that helps a lot. I have a lot of hobbies and stuff, but as a general recharge, it's that more than anything. I know for a lot of people it's going out with friends or something like that, and I like that, but that's not the full recharge for me.
1:54:16
And what's a book you would recommend, and why?
1:55:01
So I pretty much only read fiction outside of news. The most recent book of fiction I read is an older book, and it's an easy read, so I hope people aren't like, oh, he's an idiot for reading this. It was The Invisible Life of Addie LaRue. It's kind of a romantic type of fiction novel. It's about a woman who sells her soul to live forever, but the cost is that no one remembers her once she walks out of the room. It goes through her whole life of losing all human connection while she gets to live forever, and what that is like.
1:55:05
I like reading fiction at night. I don't know if it's escapism, or just that you get to live in different roles and it's so different from coding or anything. Maybe it just helps turn off the brain. I personally read way more fiction than professional nonfiction, honestly.
1:55:53
Yeah, I'm the same way. It's my version of TV, too. TV, to me, is more of a social activity: if my wife wants to watch something together, we'll watch a show, but if I'm alone, I'm not going to watch a show, I'm going to read.
1:56:11
Awesome. Well, thanks so much for going through all of these details. It was just so great to hear how you're working, and the history of HashiCorp. This was all really interesting and motivating.
1:56:24
Yeah. Thank you. Thank you.
1:56:36
I hope you enjoyed this long and interesting conversation with Mitchell. One thing that really stuck with me is Mitchell's rule to always have an agent doing something. Not necessarily coding, just something. For example, while he was driving to this podcast recording, he had deep research running. Before he leaves the house, he asks himself: what's a slow task that my agent could do while I'm gone? An important part of all of this: he turns off all notifications. The agent does not get to interrupt him; he interrupts the agent when he's ready. Mitchell is in charge, and he has a buddy doing the work he has delegated while he focuses on the problem he is solving. This is a nice challenge for anyone listening: next time you step away from your desk, before you close the laptop, ask yourself what slow task an agent could be doing while you're gone. If you enjoyed this episode, share it with a colleague who's thinking about where software engineering could be heading, and if you've not subscribed yet, now's a good time; we have more conversations like this one coming. Thanks, and see you in the next one.
1:56:38