From IDEs to AI Agents with Steve Yegge
Steve Yegge discusses his eight levels of AI adoption for engineers and predicts that 70% of engineers are still stuck at basic levels while AI creates a 'vampiric burnout effect' where developers can be 100x more productive but only sustain 3 productive hours daily. He argues that big tech companies are quietly dying as small teams of 2-20 people will soon rival their output using AI orchestration tools.
- AI adoption follows eight distinct levels, from no AI use to running multiple parallel agents, with most engineers still at basic levels
- AI productivity gains create a new work-life balance challenge around value capture - who benefits when engineers become 100x more productive
- Big companies may need to eliminate ~50% of engineers to afford AI tooling costs for remaining staff, while those engineers often don't want to adapt anyway
- Small teams using advanced AI orchestration can now compete with large enterprise outputs, potentially disrupting traditional company structures
- The innovation cycle has shifted from spec-then-build to prototype-until-perfect, enabled by AI's ability to generate unlimited iterations quickly
"If you're still using an IDE now you're a bad engineer"
"The days of coding by hand are over"
"We're going to lose about half the engineers from big companies, which is scary"
"If you're anti AI at this point, it's like being anti the sun. You're going to have to go live underground"
"Most people can't read. I've ruined much of my work in my life by overestimating people's ability to read"
Steve Yegge has been a software engineer for 40 years. He spent decades at Amazon and Google, and is famous for his brutally honest rants about the industry, and for being right a lot. He recently built Gastown, an open-source AI agent orchestrator, and co-authored the book Vibe Coding with Gene Kim. In today's conversation we discuss Steve's eight levels of AI adoption for engineers, from no AI to running multiple agents in parallel, and why 70% of engineers are still stuck at the bottom levels; why AI is creating a vampiric burnout effect on developers, where you can be 100 times more productive but only get 3 good hours a day; his prediction that big tech companies are quietly dying and that small teams of two to 20 people will rival their output; and much more. If you want to understand what the day-to-day of software engineering will look like in the near future and
0:00
how not to get left behind, this
0:45
episode is for you. This episode is presented by Statsig, the unified platform for flags, analytics, experiments and more. Check out the show notes to learn more about them and our other season sponsors, Sonar and WorkOS.
0:47
So Steve, really good to have you on the podcast again. What have you been up to?
1:00
Gergely, great to be back. It's been 10 months now.
1:05
Closer to a year.
1:10
Yeah, close to a year. Yeah.
1:11
Boy, seems like forever.
1:12
Yeah, sure does. Yeah. There's been a lot going on. I'm unemployed right now, which has been incredibly fun. Unemployed, or funemployed. I am just doing whatever I want, which is real nice. And I had a couple of software launches, which was nice. I had a book launch last year, which was nice. I've been living life.
1:13
Yeah. So for a very long time you've been known as this kind of truth teller, bringing up sometimes comical, sometimes really uncomfortable facts or observations. You often wrote in really fun ways, with rants, and a lot of them resonated with people. Do you remember anything you wrote that really stood out at any point in time? Something you got really good feedback on, either then or later, that you felt validated by?
1:36
Oh, well, a lot of people tell me, those who know, that their favorite Stevey blog is actually Execution in the Kingdom of Nouns. I don't know if you remember that one. Way back in the day I was at Google, early-days Google, and I was struggling to get this idea across to people that Java's growth was superlinear with the amount of code. The amount of code would grow more than the amount of functionality, which is not a good place to be. And Java has gotten a lot better since then, right? But my post raised a lot of eyebrows at Sun, because they were like, what is this guy complaining about? Why doesn't he just shut up? You know? But I was like, I want to use a language that has first-class functions. And so I wrote a very, very, very unusual blog post called Execution in the Kingdom of Nouns, and people really loved it. It was a story, just a fairy tale about a land where there were no verbs. And it was fun.
2:06
So one of your lesser-known blog posts, at least for a lot of listeners, is called Rich Programmer Food. And this was about compilers. Do you remember what you argued, or what points you made?
2:59
Of course. That's one of my most important blog posts ever. I'm going to tell you: I met a guy, okay, who introduced himself at swyx's AI Engineer conference in New York. And he's like, I've wanted to meet you, Steve. I'm one of your players, okay? And I'm like, whoa. Because this dude's, you know, in his 30s, and he's played my game, you understand? The game that I wrote, Wyvern. Most people haven't seen it because I didn't open source it. I will someday. It's just a pain in the butt. It's a really beautiful thing, and it created so much love in the players for decades. They would come back, right? But this guy was so into it, and he's like, I read your Rich Programmer Food blog post and decided to become a compiler expert. He was in high school when he read it, got a PhD, started his own company. He's got a startup that's doing really, really well now. And he said it was all because of that post.
3:15
And in this post, I think you argued that unless you know how compilers work, you're not going to be a good programmer, an efficient programmer. I'm not sure what the exact phrasing was.
4:03
There's going to be a layer of magic between what you're doing and what the computer is doing that is forever going to be sort of friction for you.
4:14
And I think you even argued that some PhDs don't even understand how compilers work and this will make it really hard for them to be efficient.
4:23
At the time, that was definitely true, Right?
4:30
How do you think that post has aged? Because at that time, I think it was 2012 or so, even then I would assume it was a bit unconventional to say you need to understand assembly, because it was all high-level languages, right? Java was in its prime, C#, Ruby was starting to come out. I mean, heck, JavaScript was starting to become big. React would start in a few years. And most developers would have thought, why would I need to know compilers or assembly? I mean, that's what the compiler is for, right?
4:34
Yeah. You're asking a really, really, really foundational question. You're asking me what universities should teach, is what you're asking me, Gergely. Okay, in disguise. And you know that those goalposts have moved every few years since I got into this game in the 80s. All right, what you need to know in order to be a software engineer: it used to be assembly language. It used to be lots of bits and stuff like that. And over time, my buddies and I realized that our favorite bit-manipulation questions were starting to bounce off candidates who'd never seen a bit before. Right? And we did some soul searching in the 2010s, and we were like, yeah, do you really need to know how to manipulate bits in a byte with XORs and stuff like that anymore? Probably not, right? And that was a depressing realization, because we had prided ourselves on knowing how that stuff works, but we just don't need it anymore. And the sad reality is that I had a lot of my own ego and identity wrapped up in my compiler background. It's interesting, right? But it's not useful in any meaningful sense anymore.
5:00
And is it not useful because the compilers have gotten so good at optimizing, for example, is it that the problems have moved on to higher layers? Why do you think that is?
6:07
Just walking up the abstraction ladder, that's all.
6:19
And we're not even talking about AI just yet. Like this happened even.
6:21
Did you say AI? Did you say AI?
6:24
No, not yet. We will say it. But even in, I remember, like, the late 2010s, it didn't really come up. In my whole career, I can only remember one time where it would have been nice to know what the compiler did. But even then it might have been a red herring, honestly.
6:26
Look, what you have to know just keeps moving. They keep changing the courses, they keep changing what they teach. Many people don't see this because they're only looking a year or two or three back, and a little bit forward. But I've been doing this for 40 years, and I can tell you they teach very different things now than they used to, because you need to know very different things. And nowhere is it more evident than in the exponential curve of the graphics industry. Look at computer graphics today compared to 1992, when I was learning graphics in university and had to literally learn the algorithm to figure out where the next pixel goes on a line, so I could render it and eventually turn it into a triangle, which is a polygon. Meanwhile, two years later, the same course was doing animation, and you didn't even need to know what a polygon was. I mean, you did, but not at that level. Right? The whole ladder just kept moving up, and the jobs changed. Originally they needed people who could write device drivers, and now they need people who can do game worlds and physics and all this stuff. Right? Graphics showed us the way. This is what happens. And software engineering jobs have been very stable since, I don't know, since iOS, since mobile and cloud. Those are the last two big innovations, right?
6:41
Yep.
7:48
Steve just made the point that the industry goes through these massive maturity leaps, from raw pixels to game engines, from bare metal to cloud. And if you're building software today that needs to make that leap to enterprise grade, there's a tool that handles exactly that. This is our season sponsor, WorkOS. If you're building any SaaS, especially an AI product, authentication, permissions, security and enterprise identity can quietly turn into a long-term investment: SAML edge cases, directory sync, audit logs, and all the things enterprise customers expect. It's a lot of work to build these mission-critical parts, and then some more to maintain them. But you don't have to. WorkOS provides these building blocks as infrastructure, so your team can stay focused on what actually makes your product unique. That's why companies like Anthropic, OpenAI and Cursor already run on WorkOS. Great engineers know what not to build. If identity is one of those things for you, visit workos.com. With that, let's
7:49
get back to the question of what
8:42
the last real innovation in software engineering actually was.
8:43
And it's been kind of dead since then, actually. Yeah, I don't want to say AI because we're not talking about it yet. But I think we went through a period where people stagnated a little bit, where the courses didn't change very much, and we thought this is all we're ever going to need to know.
8:47
I feel the last big innovation, correct me if I'm wrong, was distributed systems. That was the last kind of hard problem, starting from the 2010s, when Uber and others brought microservices in. How you scale services, how you store large amounts of data. I feel that was, like, I
9:01
mean, it was big, but it was a big, slow one.
9:17
Yeah, but honestly, I feel there were a lot of migrations happening: new React versions coming out and developers struggling with that, Apple every year throwing, you know, a screwdriver in the wheels with a new breaking version, Android developers needing to retire an old Android version and deciding where to cut it off. So I feel there was that kind of migrations thing. And also, business was just good, right? Everyone was growing, everyone was busy hiring like there's no tomorrow. There was a time in 2021 when the market was so hot that a lot of boot campers with three months of experience were getting offers at pretty good companies, because everyone was so desperate to hire.
9:19
Yeah.
9:57
And then came AI in 2022. One thing that always struck me about you, even in those 2000s and even before: you were always pretty pragmatic. By trade, you were always into compilers, debuggers, tools; that's where you started. You worked on hard problems at Amazon, at Google. You never shied away from getting into hard technical things. And when AI came out, I don't remember you saying, oh, this is amazing, this is going to change the world. How did you feel? What were you observing? Skeptical? At the very beginning, right when you first came across LLMs, how was that?
9:58
I was pretty blown away that it could write fairly coherent Emacs Lisp functions. ChatGPT, the original one, in December 2023.
10:34
2022.
10:44
2022.
10:45
Okay.
10:46
Boy, time flies. It could already write code in a weird language, right? Not very much of it, and it was janky. But for me, that was the beginning of: oh, right. Because I've had friends in AI for 20 years saying any minute now, any day now, right? And they'd show us, and the models would complete better and better and better. And this was the first time it was like, oh, okay, I see now. Right? But I was still skeptical like everybody else. And I can tell you, because when the rumors came out about Claude Code at the beginning of last year, that Anthropic had a tool internally that was writing code for them, and it was a command-line tool, I, along with everyone else, went: no, it's not. Just flat-out rejection. Absolutely not happening, right? Until I used it. And then I was like, oh, I get it, we're all doomed, right? And then I wrote Death of the Junior Developer right after that. Actually, I think, gosh, it might have even been after 4o came out that I did Death of the Junior Developer. But things changed really fast once that came out. So was I a skeptic? Yes. But did I pay attention to the curves from the very beginning? I figured if GPT-3.5 can write a coherent Emacs Lisp function, then in a year, let's see how they do. And in a year, 4o was writing a thousand lines of code. A thousand lines, dude. Most of the world's code is in files of a thousand lines or less, which means it can make credible edits. It wasn't able to, up until 4o came out, right? And so, man, it was at that point that I was like, okay, we're on a curve. This is a ride, it's not stopping. Let's get on the ride and see where it goes. And I dove in, right? And I was behind. I didn't know the fundamentals, I didn't know the lingo. Everybody knows this stuff now, right? Yeah, I spent a year doing nothing but reading papers and catching up, right?
10:47
So in this book, Vibe Coding: I remember last time you were on the podcast, this book was about to come out and I was reading an early version of it. But the back cover, I just read the back cover, and I realized that you must have written this about a year ago. And it says: the days of coding by hand are over. When did you realize this? Because I've realized this recently with Opus 4.5, but this was well before that.
12:37
Yeah, it was a year ago. Let's see, what is it right now? January. So it was over a year ago, 12, 13 months ago, when I first realized. And that wasn't even my quote. That was Dr. Erik Meijer, right? The inventor of many, many, many things in the programming world, one of the most important compiler people in the world. Think about it: that dude spent his life building technology for developers to be able to write code, and he's saying developers aren't going to write code anymore. What would possess somebody to say, yeah, my life's work isn't really it? And that's what caused Gene Kim and me both to go, huh, right? You know, he made huge contributions to Visual Basic and C# and LINQ and Haskell, and the PHP thing, Hack, is that what it's called? All him. And he's just like, no, we're done, we're done writing code. I mean, those are pretty big words from a languages person, one of the most famous in the world, right? What does he see that we didn't? And he sees the curve, man. It's that simple. Exponential curves get real steep, real fast, and we're heading into the steep part this year.
13:04
So a major contributor to C# and Visual Basic is saying that we're done writing code. But even if the AI writes all the code, someone has to verify it. And that's where our season sponsor Sonar comes in. Sonar, the makers of SonarQube, has introduced the Agent-Centric Development Cycle framework, ACDC: a new software development methodology designed for the unique scale and speed of AI-generated code. It's a move towards a more intentional four-stage loop that gives agents the guardrails they actually need. The four phases: Guide. First, agents need to understand the canvas on which they're being asked to create, so that the output fits with what the developer and organization require. Generate. The LLM-based tool generates the code it believes will achieve the desired outcome, within the right context. Verify. Next, the agent is deliberately required to check its work, ensuring it actually achieves the desired outcomes and is reliable, maintainable and secure. Solve. Finally, any issues identified are provided to a code-repair agent to fix. To power this, Sonar has significantly strengthened its offering, introducing products and capabilities like Sonar Context Augmentation, SonarQube Agentic Analysis, SonarQube Architecture and SonarQube Remediation Agent. Head to sonarsource.com/pragmatic to learn more about the latest with Sonar and how it's empowering organizations to embrace the agentic era. With this, let's get back to Steve's exponential curves of AI improvement.
14:14
Playing devil's advocate: one thing about being an engineer is that you can draw up curves, but you never know when they end or if they flatten. We can see where it has come from. What made you believe that this curve would keep going? Especially since, with LLMs, the fact that it even kind of works was a bit of a surprise for a lot of people. And the fact that it kept scaling is a surprise. And there's this question of how long they will scale.
15:39
Yeah. So the world is filled with unbelievers, people who specifically believe the curve looks like this: an S. It goes up and then it flattens. Okay. And they actually think we're at the hump right now.
16:04
Yeah. And that'll.
16:16
And they have thought that ever since GPT-3.5 came out. They're like, yeah, it's not going to get any better. 4o comes out. People love 4o. They still do, they can't get rid of it, but they still think this is as good as it gets. You know, Opus 4.5 is out and most people haven't played with it. Most people don't realize what's there. And that thing is already two months old. The half-life between model drops, as far as I can tell, has gone from about four months at the beginning of last year to two months from Anthropic at the beginning of this year. So any day we're going to see another model from Anthropic. It'll probably be out by the time this podcast is out, right? And it will be so much further up the curve that people are going to start to be really freaked out by it. It's going to worry people when they see the next model. Okay? Because all the bugs, all the mistakes that they're complaining about right now get fed right back in as training, so that it doesn't make them the next time. And this is what people aren't understanding, right? And also, time continues. There will be a three years from now and a five years from now; the sun's not going to stop, right? And it's coming. The collision of these curves is inevitable, man. There will be societal upheaval is what's going to happen. And it's already started. And people are justifiably mad. And I'm mad with them, Gergely. Okay? I'm mad at Amazon for laying off 16,000 people and blaming AI, without an AI strategy for it. Those people are not going to be able to find jobs, by and large. And they're the first of many to come. And nobody has a plan for this.
16:16
Why do you think Amazon did that if they don't have an AI strategy?
17:37
Because, unfortunately, and people are going to hate me for saying this, but me saying it doesn't make it true; it was true already. Everybody has a dial that they get to turn from zero to a hundred, and you can keep your hand off the dial, but it has a default setting: what percentage of your engineers you need to get rid of in order to pay for the rest of them to have AI. Because they're all starting to spend their own salaries in tokens. So at least for a while, if you want your engineers to be as productive as possible, you're going to have to get rid of half of them to make the other half maximally productive. And as it happens, half your engineers don't want to prompt anyway, and they're ready to quit. So what's happening is everybody, on average, is setting that dial to about 50%, and we're going to lose about half the engineers from big companies, which is scary.
17:41
Yeah, that's wild. That's way bigger than what we saw back during Covid.
18:27
And it's going to be way bigger. It's going to be awful. But at the same time, something else is happening, which is that AI is enabling non-programmers to write code. It's also enabling engineers who have seen the light, who believe the curves are going to continue going up, to get together in groups of 2 and 5 and 10 and 20 and 30 people and start to do things that rival the output of these big companies that are tripping over themselves. So we've got this mad rush of innovation coming up from the bottom, and we've got knowledge workers falling out of the sky as the big companies lay them off, because clearly the big company is not the right size anymore. And it's not even me saying it; Andy Jassy is saying it: we're going to do the same thing with fewer people. Right? So does this mean we're going to have a million times more companies? Is there going to be a massive explosion of software, or are people going to get out of software altogether and we're all going to go do other stuff? I mean, I'm very curious where all this goes. Yeah.
18:32
Small teams that have the right skill set or see the right business opportunity or have advantages can do way more. So there is something there in that.
19:23
There is. So there's this land rush starting. I think a lot of the people coming out of knowledge work are just anti-AI, and those people are going to struggle. I'm sorry, but if you're anti-AI at this point, it's like being anti the sun. You're going to have to go live underground. Right? But for the people who are pro-AI, I think we're going to see a big redistribution of who's doing the work and where you get your software from. I could actually see a happy place where Amazon's not even a thing anymore. I really could. Because software becomes, we don't have the words for what's happening, right? So many things are happening this year that we don't have words for. Have you noticed that? But software becomes sort of distributed. I don't know.
19:32
I do see non-technical people getting into software. Could there be a job there for engineers to come in and take over maintenance?
20:13
Yeah, I mean, I think there's going to be plenty of opportunity. There are going to be a lot of engineers doing software engineering. I just think we're all going to be doing it with AI, right?
20:21
Yeah.
20:31
But I think it'll be quite some time before companies are comfortable trusting their code to be written and deployed by AI without any human being involved at all. The important point that the naysayers and the skeptics are missing is that AI is not coming to replace your job. It's not a replacement function, it's an augmentation function. It's here to make you better at your job. Right? And that's not a bad thing, actually. I don't know why people would fight that.
20:32
But speaking about the job of developers, you've said something that can be triggering for a lot of people. You said, I think this was at the AI Engineer Summit, that if you're still using an IDE now, you're a bad engineer.
21:01
Yeah, well, you've got to be a little provocative. Yeah. You know, let me put it this way, okay? I'm not going to say you're a bad engineer, because I know some very, very good engineers, better than I am, who are still at level one or two in my chart. Right? But I feel profoundly sorry for them. I feel pity for them like I've never felt in my life, for these grown people who are good engineers, or used to be, and they're like, yeah, you know, I use Cursor, and I ask it questions sometimes, and I'm really impressed with the answers. And then I review its code really carefully, and then I check it in. And I'm like, dude, you're going to get fired. And you're one of the best engineers I know.
21:12
Tell me about your chart. Tell me about your levels that you came up with.
21:46
Yeah. So I was drawing this on the board in Australia for a big group of people, trying to show them what happens, because I saw them at all different phases. Some of them had their IDEs open. Some of them had a big, wide coding agent. For some of them, the coding agent was really narrow, right? And so I was like, okay, we're going to put you all on a spectrum just to show what's going on, right? And level one: no AI, right? Level two: it's the yes or no, can I do this thing, in your IDE, right? And then level three: you're like, YOLO, just do your thing, right? Your trust is going up. Level four: you're starting to squeeze the code out, right? Because you want to look at what the agent is doing, and not so much at the diffs anymore, right?
21:50
So you're not reviewing as much now.
22:31
You're not reviewing as much. You're letting more of it through, and you're really focused on the conversation with the agent. And then at level five, you're like, okay, I just want the agent, and I'll look at the code in my IDE later. But I'm not coding with my IDE. At level six, you're bored, because you're like, okay, my agent's busy. I've got to do something, I'm twiddling my thumbs. And so you fire up another agent, and now you're addicted, because you'll very quickly get into an equilibrium where there's always an agent waiting for you, because somebody's finished, right? As soon as you spin up enough of them, mathematically, right? And so you find yourself just multiplexing between them, going like this, and you can't leave.
22:32
Practical question. Assuming I'm working on the same code base, how do you spin up multiple agents so that they don't get in conflict? What are you going to use, like...
23:06
Yeah. So that takes you to level seven, which is: oh my God, I've made a mess, right? I accidentally texted the wrong agent and didn't realize it, and they did a big project inside of this project because I asked them to, and now I've got to clean up this mess, et cetera, right? All that stuff. And that was when I started going, okay, what if we were to coordinate this? What if Claude Code could run Claude Code? That's the question everybody wants to know. And everyone was trying it all last year, going, Claude Code, run yourself. It would run for a while, and then it would stop, right?
23:14
Yep.
23:40
And so it was the whole stopping thing. Yeah, I pushed on that really, really, really hard and wound up building some stuff to help with it. But yeah, boy, it's changed a lot, man. It's changed so much.
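One common answer to the conflict question Gergely raises here, a sketch rather than anything Gastown- or Claude Code-specific, is to give each agent its own git worktree, so parallel agents edit separate checkouts on separate branches and merge back when done (the `claude -p` invocations below are illustrative, not prescribed by the conversation):

```shell
# Sketch: one git worktree per agent, so parallel edits never collide.
# Setup creates a throwaway repo so the sketch is self-contained.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q project && cd project
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"

# One worktree + branch per agent; each is a full, independent checkout.
git worktree add -q ../agent-a -b agent-a
git worktree add -q ../agent-b -b agent-b

# Each agent would then run in its own directory, e.g. (illustrative):
#   (cd ../agent-a && claude -p "implement feature A")
#   (cd ../agent-b && claude -p "fix bug B")

# When an agent finishes: review its branch, merge it, clean up.
git -C ../agent-a -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "agent A work"
git merge -q agent-a
git worktree remove ../agent-a
git branch -d -q agent-a
```

The design choice: worktrees share one object store, so branches stay cheap, while each agent gets a filesystem it can freely modify without racing the others.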
23:41
Going back to the IDE: you had a really good live debate with Nathan Sobo from Zed, and the title was The Death of the IDE. Both of you argued your view. What is your view on the IDE? And also, what did you learn from Nathan's take? He was a bit more pro-IDE, and you were a bit more like, maybe this is not going to be around forever.
23:52
Yeah, I mean, I am where I am in my journey, which is: I think that AI will do it all for us eventually. And so the way I see IDEs is, what do they really do, and what are they really for? Okay. It's not really for writing code. It's for bringing tools together, for making one big tool. Right?
24:14
Yep.
24:32
And now you have MCP for that, or whatever, right? And so I see IDEs returning, and I think Claude Cowork is a return to the IDE form. It's Claude Code going, oh, I need to be for real people. Right? But I think Claude Cowork's form factor probably works better for the average developer than Claude Code does. Right? So I see us coming back into a world of IDEs, except it's all conversations and, you know, monitoring.
24:33
And this is a really good point. My brother built a thing called Craft Agents, which is pretty similar to Claude Cowork, except they connected their company's own data sources. And he said that some developers start to prefer that because it's visual, so it's easier to see parallel agents, for example. If you're not a power user, it's easier to scroll; it's just a nicer UI. So to your point, maybe some developers should try things out. If you're not sold on Claude Code, try Claude Cowork or any other similar, more visual thing. It might be your thing. It's like git: some people love the command line. I actually just use the UI, because I just don't like memorizing the commands, as embarrassing as it is to admit. Or maybe these days it's not as embarrassing.
25:02
Yeah, the key word was try. As long as you're trying something. Yeah. Probably the single most important proxy metric that you can have in a company today is token burn. Because what token burn says is: your engineers are trying to do stuff, or your non-engineers are. And when they're trying, they're failing, and they're learning. And so if you want to discover those organizational bottlenecks early on, and you want to get your engineers leveled up on my eight-level spectrum early on, and you want to sort out your business processes ahead of time, you need to start now. Which means: try. It doesn't matter what you try, it doesn't matter which tool you use. As long as you're using AI and you're trying to get it to do the work, you're doing the right thing.
25:41
Yeah, and I think as professionals, we really ought to at least try. You get firsthand experience, and then you can make your decision.
26:21
Steve's point about token burn is really interesting: the companies that win are the ones that experiment the most. And if you want to bring that same experimental mindset to your product, not just your AI usage, that's exactly what our presenting sponsor Statsig is built for. Statsig gives you the complete toolkit without building it yourself. You get feature flags, experimentation and product analytics, all in one platform and tied to the same underlying user assignments and data. In practice it looks like this: you roll out a change to 1% of users at first. You see how it moves the top-line metrics you care about: conversion, retention, whatever is relevant for that release. If something goes wrong, instant rollback. If it's working, you can confidently scale it up. Companies like Notion went from single-digit experiments per quarter to over 300 experiments with Statsig. They shipped over 600 features behind feature flags, moving fast while protecting against metric regressions. Microsoft, Atlassian and Brex use Statsig for the same reason: it's the infrastructure that enables both speed and reliability at scale. Statsig has a generous free tier to get started, and pro pricing for teams starts at $150 per month. To learn more and get a 30-day enterprise trial, go to statsig.com/pragmatic. With that, let's get back to Steve's take on the state of Gastown.
26:29
Now there's a huge problem with people not knowing how to try. They say, oh, let me do something. And then it does the wrong thing, because they always do. And then they're like, whoa, this is garbage. So you have to teach them that it's a shovel. And you don't go, shovel, dig, like in Fantasia, right? Like making the brooms walk around. No, you pick up a shovel and you dig with it, but it's a shovel that you didn't have before; you were using your hands. It's a really, really simple analogy, but people just don't get it. They don't get it. And I'm going to say something that's contentious, but it's just the reality of the world: most people can't read. I've ruined much of my work in my life, just completely gone down the wrong path, by overestimating people's ability to read. And I think that reading is, if anything, getting harder to come by as a skill these days. And the situation we're in right now is that Claude Code makes you read a lot. So I think we're in a weird limbo for the rest of this year, until the UIs arrive that are good enough for everybody who can't read. Everybody who can't read is going to be at a severe disadvantage.
27:41
Tell me a little bit more about your observation that a lot of developers cannot read. Because you were at Amazon, and that place supposedly runs on six-pagers, with people actually reading them, doesn't it?
28:41
I mean, most. Dude, most people can't read. I don't know if you know this, man. They read really slowly. And the AI is. I mean, come on, to most people, five paragraphs is an essay. Remember, the five-paragraph essay in high school is a thing we have in America. I guess maybe yours were a hundred paragraphs in Amsterdam, but to us, five paragraphs is a lot. And that's the AI just clearing its throat, right?
28:53
Yeah.
29:18
You know, you've got to be able to read waterfalls of text. And so we're looking at a world where that won't work. So you're going to need recursive summarization, you're going to need a factory. And it's funny, because this is why UIs are so important. Gastown right now, the reason I say you can't use it, is that it's a factory filled with workers and you're talking to it through a telephone. You can also go and look through the window and pound on it and talk to the workers. But it's not like you're in it, right? With a UI, you're in it and you can see what's going on. It's all invisible in Gastown, by and large, hard to see. And so I really do think, and I'm just going to make a bold prediction: I think that by the end of this year, and we'll see demos of it right away, but by the end of this year, most people will be programming by talking to a face.
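The "recursive summarization" Steve mentions, condensing waterfalls of agent output layer by layer until they fit a human reading budget, can be sketched as a toy function. This is a hypothetical illustration, not Gastown code; `summarize` stands in for an LLM call and here just keeps a chunk's first sentence:

```python
# Hypothetical sketch of recursive summarization: fold chunks of text into
# shorter summaries, then recurse on the result until it fits a reading budget.

def summarize(chunk: str) -> str:
    # Placeholder for a model call; crudely keep only the first sentence.
    return chunk.split(". ")[0].strip() + "."

def recursive_summary(text: str, budget_chars: int = 200, chunk_size: int = 400) -> str:
    # Base case: already short enough to read directly.
    if len(text) <= budget_chars:
        return text
    # Split into chunks, summarize each, then recurse on the concatenation.
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    condensed = " ".join(summarize(c) for c in chunks)
    # Guard against a summarizer that fails to shrink the text.
    if len(condensed) >= len(text):
        return condensed[:budget_chars]
    return recursive_summary(condensed, budget_chars, chunk_size)
```

With a real model in place of `summarize`, each layer of the recursion is what a factory-style UI would surface instead of raw agent logs.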
29:19
A face, as in on the screen.
30:04
Your AI, like the Gastown mayor, will be a fox talking to you. And you'll say, why doesn't it work? And it'll say, I'll go look at it, and it'll go spin off its workers just like it's doing now. But you're talking to a face, and
30:08
it will talk only.
30:19
Yeah, I think that's the only thing that's going to work for most people.
30:20
Fascinating. Let's write this down. Prediction. Why don't you go build it?
30:24
I'm not going to.
30:27
Let's talk about Gastown. You mentioned Gastown, which a lot of people have heard about.
30:28
What is Gastown?
30:33
Gastown is an orchestrator. So 2023 was completions. Code completions.
30:35
Yeah. Autocomplete.
30:41
Yeah. That's when we had the completion acceptance rate as the metric. You remember that?
30:43
Oh, my. Yeah, People were measuring it. Yeah.
30:46
Stupid metric, by the way. But it was close; it was a proxy for: are they trying? Right. Then there was chat, that was 2024. And then agents, 2025. We knew. You could just look at that curve and go, okay, well, if chat is basically completions in a loop, and agents are basically chat in a loop, then we're going to put agents in a loop, and that'll be an orchestrator. Right. And a bunch of them started coming out, and I built one of my own, my own vision. But that's all it is. It's agents running agents.
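The progression Steve describes, each generation being the previous one run in a loop, can be caricatured in a few lines. Everything here is a toy stand-in for illustration; none of these functions call a real model:

```python
# Toy illustration of "each generation is the previous one in a loop".
# The strings only mark structure; a real system would call a model API.

def completion(prompt: str) -> str:
    return f"<completion for: {prompt}>"

def chat(prompts: list[str]) -> list[str]:
    # Chat is completions in a loop, one per conversational turn.
    return [completion(p) for p in prompts]

def agent(task: str, steps: int = 3) -> list[str]:
    # An agent is chat in a loop: it keeps prompting itself until done.
    transcript = []
    prompt = task
    for _ in range(steps):
        reply = chat([prompt])[0]
        transcript.append(reply)
        prompt = f"continue: {reply}"
    return transcript

def orchestrator(tasks: list[str]) -> dict[str, list[str]]:
    # An orchestrator is agents in a loop: it fires off an agent per task.
    return {t: agent(t) for t in tasks}
```

The point of the nesting is that nothing fundamentally new appears at each level; only the loop gets one layer deeper.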
30:48
And can you talk us through it, software engineer to software engineer? The architecture: how is it organized? How should I imagine this setup?
31:17
Yeah, sure. I mean, look, Gastown is really complicated, and it's been really broken all week because I'm migrating it to Dolt, and that's where I actually learned how complicated it was. It has a lot of features.
31:24
You're migrating it to.
31:35
To Dolt. It's a new database.
31:36
Oh, okay.
31:39
Yeah. Dolt is amazing. Dolt is a git-backed database, a git database. Beads is just git plus a database crammed together badly, and there's actually a database that does this properly, so I'm migrating to it. But yeah. Anyway, what Gastown should be is one mayor that you talk to, that's your person. And then whatever else needs to get done, they just fire off workers. Okay, it's a little bit more complicated than that, because I think there are really two kinds of work that people go back and forth on, and people are arguing about which. Some people at Anthropic told me it's the min-maxing context argument. There are people who believe that you should maximize your context window and fill it with rich, juicy context so that the AI is wise and all-knowing when it's talking to you. They want to be right at the edge of the context. And then there are others who are like, task, kill it, task, kill it. I want a really short context window, because of the quadratic increase in cost combined with the dramatic drop-off in cognition as the tokens go up. Right, losing track of things. So which one's right? We've got people who are full-on minimizers and full-on maxers. And I looked at my workflow and I was like, well, polecats are the min and crew are the max. I have two fundamental worker roles in Gastown.
31:40
So you have the really simple one, which is the small-context one.
33:01
If you have a really well-specified task, all broken down into subtasks, self-contained, it says what to do, then you can give it to a worker and have it go do it. Right. Meanwhile, if you have a really difficult design problem, you're going to have to have a series of conversations about it. There I maximize context. I'm like, read all these docs and then we'll talk. Right? So it's just two workflows.
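The two workflows suggest a simple routing rule. The sketch below uses the role names from the conversation (polecats for short-context task workers, crew for long-context design conversations), but the routing logic itself is invented for illustration, not taken from Gastown:

```python
# Hypothetical router between the two worker styles Yegge describes:
# min-context "polecats" for well-specified subtasks, max-context "crew"
# for open-ended design work.

from dataclasses import dataclass, field

@dataclass
class Work:
    description: str
    subtasks: list = field(default_factory=list)  # self-contained steps, if any
    open_ended: bool = False                      # needs a design conversation?

def route(work: Work) -> str:
    if work.open_ended or not work.subtasks:
        # Max-context: load all the docs, have a long conversation.
        return "crew"
    # Min-context: fresh worker per subtask, killed afterwards to keep the
    # context window short (cheaper, and cognition holds up better).
    return "polecat"
```

The trade-off being encoded is the one from the min-max argument: token cost grows quadratically with context, while a tight, self-contained spec needs almost none.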
33:05
And I like the imagery. It's so easy to imagine: it's a little town, like the Wild West. There's the mayor, the crew, the workers. Everyone's buzzing around and the houses are being built. In practice, how does this work? How has this worked for you? What are you hearing? People getting projects done, versus not getting them done, versus it turning into absolute chaos. What have you learned with Gastown?
33:28
It's been a great experiment. I mean, I've really.
33:54
It's been an experiment, right?
33:56
Well, yeah. I mean, I went out and built something that deliberately doesn't work. It's too hard. It's too hard for the models. Even Opus 4.5 is barely enough. And it's funny, because the folks at Anthropic told me they like it, though some of them are kind of embarrassed, because it feels like I've got all these workarounds for bugs in their model. Which it kind of is. But it's not a bug; their model was never trained to be a factory worker, and it will be soon. So a lot of Gastown is going to disappear. A lot of the complexity, a lot of the roles that are monitoring, all they're trying to do is tell Opus 4.5 to be smarter, and that's being on the wrong side of the bitter lesson. Right? So a lot of Gastown is going to simplify and flatten into just min-max roles: crew for your max and polecats for your min. And I think that's the natural shape, and they'll just scale up.
33:58
And could the polecats just be subagents at some point? For example, like.
34:40
Well, subagents. I mean, the polecats are subagents. It's just that they're more first-class. They have their own identity and inbox. You can talk to them. You can actually see how they've performed over time by computing skill vectors on their work, and things like that. So a little bit more than subagents. I think subagents have the problem of being opaque: I'm going to fire off a bunch of subagents to go do this work, and then you're like, okay, let me know when you're done. Whereas with Gastown, you can go look at them and be like, dude, your polecat's not working, I'm going to poke it. Right? So Gastown gives you a lot of hands-on, I don't know, steering, right? It doesn't try to get out of your way. It's in your way, Gastown. It's really fun, though. I miss it. It's been down for a few days for me. And I tell you, man, working with regular Claude just stinks by comparison, because Gastown is like an idea factory. Once it's actually running and all booted up, you can have so many things going on at once and actually track them reasonably well. Now, it can suck you into a mode where you don't sleep, you don't eat. It's not good for you, and I actually wanted to talk to you a little bit about what's happening in the industry at some point. But Gastown itself, it was all calculated: the characters, the naming. Why did I even do Gastown? Why? Because I wanted to move the Overton window. Because last year, when I would say orchestration's coming, people would say: no agents, no swarms, no orchestration, everything you're saying is just not true. And now what they're saying is, bro, you're being pretty aggressive, right? Which is a different conversation. Now they're like, well, your swarm, I don't know, maybe your swarm can't do blah, blah, blah.
It's just completely shifted the conversation from the realm of impossibility to the realm of possibility.
34:44
So is it fair to say that you took on more than you reasonably thought you could chew? You took on this more ambitious thing because you wanted to stress-test what these models can do, find out what's next, and, honestly, just have some fun.
36:26
Have some fun, find out what's next. And I'm continuing to do that. So my next thing is I'm going to string a hundred Gastowns together. We have a community, a Discord. And if Moltbook can get people to pitch in tokens for fun, and they are paying, you're paying for the inference of your agent on Moltbook, right? Then if I string a hundred Gastowns together and we decide to build something together, we will learn the mechanics of federation. We're probably retracing Ethereum's steps, but we will. And we're going to come up with something remarkable. It's like the people version of Moltbook, whatever it is.
36:42
And what are the misconceptions about Gastown, or what it's trying to do, that you feel have gone a little bit off the rails and would be good to clean up?
37:18
Well, I mean, for starters, I don't think people should be using it, and they are. And I really mean it.
37:27
You always say people should not be using it. Like, not that nobody should be using it, except if you're doing research or if you actually understand that this is just a proof of concept.
37:33
So some very, very clever people that I've been talking to have been searching their problem spaces for subsets, categories where Gastown could productively be used today at a big company, a big Fortune 50 company, say. And they've identified some problem spaces where you could put Gastown to work today. I was like, oh, that's pretty clever thinking. One of them was this company I talked to that sets up bespoke data centers for you in any region you want, which is something AWS has never been able to do and Google's always tried. And they say it's just three months of miserable button presses to install the software and check that it all works. And the acceptance criteria are very clear; it's almost a Ralph loop. But they think Gastown could swarm it and eventually converge on a data center that works and save all the people the trouble, you know what I mean? And I was like, oh, all right. And this could potentially meaningfully move the needle on their ability to open up more of these data centers for people, right? Go figure. And the same guy was telling me that he's been looking at production incidents, and he's realized their system is already in an indeterminate, unknown broken state when they're down. So how much worse can AI actually make it? Now, I cautioned him and said, actually, it can make it a lot worse. But he's thinking along the lines that there are certain categories of outages where you could have the swarm in investigation mode or whatever, where it could speed things up. So people are looking for the fuzzy problems. There was a third one that came along, I forget what it was, but there are classes of problems emerging that you can swarm, because you don't care that the results are messy; it's the cumulative work that counts. And that's actually how I code now. I mean, I code myself. And I bit off more than I could chew, there's no question about it, man.
Gastown is a huge mess right now, and everybody's going: he's going to vibe code himself into a corner and come crying out, you know. They're pretty close to true. Although I did manage, just before we got on the plane, to get it back on track, and it's working again. Right.
37:41
So one interesting thing about Gastown is you said you don't look at the code. You have the agents write the code, which is very, very unlike what your career has been. Right? You cared about craft, about code elegance. Why did you decide to do it, and what are the results? I mean, are the results as bad as I would think they would be?
39:24
Because this is.
39:43
Right. Like, if you imagine we're going to put a thousand interns on it, we've kind of seen that in the past, and the result has been, well, eventually a senior engineer comes in and cleans up the mess. And I'm just curious: how is it? Better or worse?
39:44
Well, so the ceiling of what it can actually build productively before it just dissolves into a mess is going up. But right now, I think it's sitting somewhere between half a million and 5 million lines of code, probably more on the half-million side. And with the next drop of an Anthropic model, we're probably going to see it jump up to a few million lines, which is a pretty good size. But it's nothing compared to what enterprises have, right? Nothing. Enterprises are very, very, very, very big. They have hundreds of millions to billions of lines.
39:57
Yeah, but not in one code base. Having a few million lines of code is already a big code base, and you'll typically have 50-plus people, sometimes 100-plus, 200-plus, working on it.
40:24
Right. What it really comes down to, just to summarize this conversation and get to the end, is that how well you're going to be able to take advantage of AI totally depends on whether you're a monolith or not. And almost every company is a monolith; they have one monolith and a bunch of microservices. Right? If you're a monolith, you're kind of hosed, because I told you the ceiling's going up for what they can do, but it ain't ever going to hit your monolith. That will never fit in the context window. And you're never, not in the next 18 months, going to be able to tell a model, go fix my monolith. You have to break it up if you want to take advantage of AI, or rewrite it from scratch. It's starting to get to the point where it's faster to think about rewriting your stack. Yeah.
40:32
One thing you mentioned, even before we started, is that AI can really drain you. It can drain your energy, it can pull you in and suck you in. Can you tell me about this?
41:07
Dude, there is something happening that we need to start talking about as a community, as an industry. Okay? There's a vampiric effect happening with AI, where it gets you excited and you work really, really hard and you're capturing a ton of value. For me, I'm doing it all for myself, and it's still kind of pushing me to my ragged edge. I find myself napping during the day. And I'm talking to friends at startups, and they're finding themselves napping during the day. It's funny, they literally try to load each other up with enough context to force the other one into a nap, almost like a, you know, compassion event. It's so weird. And we're starting to get tired and we're starting to get cranky. And I started talking to people in the industry, and they're starting to get tired and cranky. And what's happening is, see, companies are set up to extract value from you and then pay you for it. Right? But the way all companies have always been set up is that they will give you more work until you break. If you can do it, they'll just happily give you more, give you more, until your plate overflows and you die. And people have to learn the art of pushing back. Right? That's been a thing for a long time, but the equation has changed. The way you push back, the reasons to push back, all of that has changed very dramatically and is changing right now, because you've got all these people who can now be super productive. Let's say an engineer can be a hundred times as productive, just for the sake of argument. All right, who captures that value? If the engineer goes to work for eight hours a day and produces a hundred times as much, the company captured all of that value. And that is not a fair value exchange.
41:15
I think we can argue: unless they have early shares, a SAFE, and they have meaningful equity, that's a bit different.
42:50
It grows.
42:55
But that's not the majority of people, right? It's a minority. Yeah, we're probably getting there pretty quickly. You know, we did notice one thing, and you probably saw this as well, about six months ago: we talked a lot about the 996 problem at AI startups. And we were like, oh, it's interesting. At AI startups, people are working really frigging long hours, and they're posting that they're in the office at 3am, and you could tell.
42:56
I'll share what 996 is, for people who don't know. 996 is 9am to 9pm, six days a week, if I'm not mistaken. Yeah. 996 is the standard you're expected to work in most of Southeast Asia, as far as I know. I haven't been to China or India, but I assume it's pretty much similar there too. Right. Now, there's another group of people who are capturing all of the value for themselves. Okay? They go in and they work for 10 minutes a day, and they get a hundred times as much done, and they don't tell anyone, and they've captured all of it. And that's not really ideal either, right? At least if you're thinking in terms of how groups of people can be successful, it's best if they're all contributing. Right? So what do you do? And I think the answer is that each and every one of us has to learn how to say no real fast and get real good at it, and we need to learn how to start capturing. This is the new work-life balance, okay? How much of the value are you going to capture from being a hundred times as productive, and how much of it are you going to pass along to your employer? And this is a really difficult place to be, because all our cultural expectations are pointed the wrong way: for us to work harder, for everyone to extract, extract, extract. So I seriously think founders and company leaders and engineering leaders at all levels, all the way down to line managers, are going to have to be aware of this and realize that getting your engineers onto this treadmill is draining them. They're using much, much more of their system two, doing much, much more of that hard thinking, now that the easy stuff is getting automated away. So their batteries are draining at a higher rate. You might only get three productive hours out of a person at max vibe-coding speed.
And yet they're still a hundred times as productive as they would have been without AI. So do you let them work for three hours a day? And the answer is yeah, you better or your company's going to break.
43:19
It's very interesting, because with value extraction, I can see it speeding up, and we see it with a few prominent people. Peter Steinberger single-handedly pushes out so much value: output, commits, you name it. That would have taken a team of 10 pretty good engineers before. And in all fairness, he is capturing it, in the sense that it's his project, it's his
45:13
baby, he does not sleep much.
45:35
So that's definitely showing. But the value capture there is kind of okay. I agree with you, though, that this could be like past technology shifts where people became more efficient. In your lifetime, have you seen this, where engineers became more efficient and suddenly you could do a lot more with a lot less? And what happened at that time?
45:37
People got mad. Yeah, I'll give an example. Perl, the Perl programming language, was a massive accelerator. Amazon's website was built in Perl, probably still is, actually. I think Facebook's technically is too; PHP is a fake Perl, and you can quote me on that. And both of them were incredible productivity accelerators, and everybody could see it. You don't want to build websites in C. You just don't. Amazon tried it and they gave up. Right? So that caused a huge rift, a huge schism. There were second-class citizens. All kinds of cultural dynamics happened there. Right.
45:56
I'm curious how some AI companies deal with this. Can we talk about how Anthropic works?
46:28
Yeah, yeah.
46:34
From what you know from the outside, and I know you talk with people across the industry, Anthropic is a very interesting place. One interesting thing Dario recently said is about compensation, specifically for their staff, the people who are building all these things and actually using the models. He said something interesting: that maybe we should have compensation where people are compensated even after they leave the company for the value that they created, which is something completely unheard of. But it's clear that he's thinking about this thing that is changing, where individuals can create massive value in a relatively short amount of time.
46:35
Google, you can send me a check for all that stuff you never paid me for. Okay, just gotta get that out of the way. I like that idea. Anthropic is unlike any company on earth right now. They're operating in a space that is really fragile, and they're very protective of it, and they need to be, because they've created a hive mind. They're running the company, as far as I can tell, like a purely functional data structure. Remember Chris Okasaki's book? That was so mind-blowing: you can make data structures that never mutate. Then how do you mutate them? Right? And the answer is you just keep adding. It's improv. Yes, and. Yes, and. Right? And that's how they operate.
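The Okasaki idea of "mutating" by adding is easiest to see with a persistent list. This is a minimal sketch of the concept, not anything from the book's ML code: updates create a new version that shares structure with the old one, and every old version stays valid forever:

```python
# A minimal persistent (immutable) list: "updating" never destroys anything,
# it just adds a new version on top, sharing the old structure.

from typing import NamedTuple, Optional

class Node(NamedTuple):
    head: int
    tail: Optional["Node"]

def cons(value: int, lst: Optional[Node]) -> Node:
    # Prepend without mutation: the old list is untouched and still usable.
    return Node(value, lst)

def to_list(lst: Optional[Node]) -> list:
    out = []
    while lst is not None:
        out.append(lst.head)
        lst = lst.tail
    return out

v1 = cons(2, cons(1, None))   # version 1: [2, 1]
v2 = cons(3, v1)              # version 2: [3, 2, 1]; v1 still exists, unchanged
```

That is the organizational analogy being drawn: nobody overwrites the old state, new decisions are layered on top, improv-style.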
47:14
And when you say hive mind, what do you mean by that?
47:51
It's a lot like the markets today: vibes. Everything's vibes. It just shifts. It's vibing. It's kind of hard to explain. But see, here's the thing, right? We used to build products by making a spec, then implementing it, then complaining about it, then shipping it. Right?
47:54
Having a roadmap and planning for it, and waterfall, and timing it for the company's annual events.
48:10
Right.
48:15
But the way Apple does it, right? Once a year.
48:15
The way you work with systems like Gastown, and they've got their own internal orchestrators, is you create it. Your founders, even the co-founder that was non-technical, create the prototype, and that's your product, and you start building it, and you just make it the product until it's right. So everybody just gathers around the prototype like a campfire and builds it. And that is what Anthropic's doing at scale, with thousands of people.
48:17
So you're saying the playbook of a successful tech product might have changed. Because the traditional wisdom, since The Lean Startup in 2010 or so, was: you use your prototype to get signal, then you throw it away, and then you build much more polished stuff. Right? And I think every software engineer who's been around knows: you don't ship a prototype. You tell people it's a throwaway, you start again, you make it production-ready, scalable, that kind of stuff, because you don't want to give
49:03
a bad experience to people. What changed, though?
49:03
Just the ability to do an infinite number of prototypes. So instead, you make prototypes until you get a great one, and you're like, let's launch this. Apparently Claude Cowork happened in 10 days. Somebody went, hey, I did a prototype, and they were like, we're going to launch this. And 10 days later they launched it. So, I mean, it works.
49:06
But I guess there's one important piece of context there. When I talked with Boris Cherny about a feature they did, the task list in Claude Code and how it completes tasks, he told me that in two days he built 20 different working prototypes, thanks to AI.
49:22
I didn't know that, but he's doing what I'm talking about. They call it slot machine programming, right? You do 20 implementations. Is that what he's doing?
49:38
Something like that. I don't want to put words in his mouth, but I was just floored, because building 20 working prototypes, that would have been two weeks, and you would have stopped at three. Right?
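Slot machine programming, firing off many independent attempts at the same task and keeping the best one, can be sketched in a few lines. Everything below is a toy: `attempt` stands in for an agent run, and the scoring rule is invented so the example is self-contained and deterministic:

```python
# "Slot machine programming" sketch: run N independent attempts at the same
# task in parallel and keep the highest-scoring one.

from concurrent.futures import ThreadPoolExecutor

def attempt(task: str, seed: int) -> tuple:
    # Stand-in for an AI agent run: each seed yields a prototype variant
    # with a quality score (in reality: tests passed, evals, human review).
    prototype = f"{task} (variant {seed})"
    score = (seed * 7) % 10
    return prototype, score

def slot_machine(task: str, n: int = 20) -> str:
    # Attempts are independent, so they can run concurrently.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda s: attempt(task, s), range(n)))
    best, _ = max(results, key=lambda r: r[1])
    return best
```

The economics only work because attempts are cheap: you defer the choice of implementation until after you've seen all of them, which is the optionality point made just below.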
49:44
That's in our book, actually, if I can pitch the book for a moment. FAFO, F-A-F-O, is the dimensions of value that you get from vibe coding, and the O is optionality, which is the ability to create lots of prototypes. What it lets you do is defer your decision until you know what the right answer is, which is cheating. So of course everybody does it. Right? And it's going to fundamentally change the way that companies are run. It's going to change the way that people organize to create software, and it's going to happen this year.
49:55
It's just fascinating how these changes are coming. But what enables these changes? Is it the fact that we can iterate faster with these things?
50:24
Look, I saw a phenomenon happen at Google. This is kind of a big-company question; there's a big-company answer and a small-company answer to your question. Right. So something happened at Google. I went through the golden age at Google, where it was like Anthropic: it was a hive mind. Nobody was mean, everybody was innovating, and it was wonderful.
50:33
Yeah, this was a time where like the founders were really close.
50:53
You'd go to the cafeteria and Larry and Sergey would be sitting there, and you'd hang out with them and just chat. It was the golden age, right?
50:55
Yeah.
51:03
And then it changed rather abruptly. We made a few pivots and it became not that company anymore. And in fact, innovation died on the vine altogether. Since, I don't know, 2008, there has been no innovation from Google. It's all been acquisitions. They've created nothing new.
51:03
I mean, they did Gemini a few years later, right?
51:20
Yeah, okay, sure. They created LLMs and then did nothing with them. That's a perfect example of why innovation dies there.
51:24
Yeah. For five years, right?
51:29
Five years they did nothing. So I don't count Gemini; that's a different Google. We're talking about the Google that screwed us. I don't want Anthropic to screw up the way that Google did. Google put safeguards in place to try to keep themselves from turning into the company that they turned into, which was ossified, territorial. Nobody could innovate. I hired a brilliant dude from Microsoft, brought him into Google and said, figure out what you're going to do, take as long as you need. It took him six months to find something that nobody else had claimed already. People claim work and then never do it at Google. So I'm going to tell you something I've never said before. This is a brand-new take. I think what happened at Google was when Larry Page became CEO and he said, we're going to put more wood behind fewer arrows. That was a motto. And he put a halt to innovation. Before then, there was more work than people, and after that there were more people than work. And so people started to fight over the work. And that's where people started to do land grabs and backstabbing and territoriality and empire building, all the bad stuff you see. All the politics that you see is about fighting over work. And going back to Anthropic: they're at a frontier, and there's infinite work, and literally all of them have too much to do. A friend of mine at Amazon once told me that they don't have a lot of the problems that Google has, because everyone at Amazon is always slightly oversubscribed. They have too much work.
51:30
I've heard similar about Apple as well; that's kind of deliberate. Interesting. I mean, I am seeing productivity gains for myself, so I'm not disputing that agents actually make you more productive. I don't think we can agree on by how much, but for me it's a lot. But if this happens at a lot of companies, and people can actually do a lot more work, do you think a lot of larger companies will see politics show up, which typically happens when there are more people than work?
52:54
Right. If the catalyst for the bad stuff beginning is more people than work, and all of a sudden people can do all the work, then the company's biggest problem is going to be finding more work, or they're going to have to get rid of people, which is kind of bad. Right. But it's not unlike Gastown in the small. My biggest problem with Gastown is feeding it, because it works so fast. I have to work really hard to come up with good designs for it. That's what I spend my time on, which is why I'm taking naps all day long: I'm trying to come up with difficult work for it.
53:20
Right.
53:47
Other people have said this too. This is the problem with Gastown, and this is the problem for everybody who's going to use any orchestrator. It doesn't have to be Gastown; that thing will probably be dead in four months anyway. It's the shape that worked in December 2025, and that's not going to be the shape that works in four months. Right.
53:48
One thing I think, you know, it might sound like we're talking really abstractly, especially for people who have not done this type of work themselves. We're talking about orchestrators being so productive. Can you point to something that has been built with an orchestrator, or with this higher productivity, that is production software? Either you built it or you've observed someone build it, something that could show: actually, this is way more productive, and we can see the output. Or, turning it the other way around: we're still not seeing that much more output from companies and teams than you would expect. A lot of them claim more productivity, but from the outside it's easy to be skeptical when not much has changed in terms of our day-to-day life, the apps. We're seeing signals here and there, but nothing major. Why might that be?
54:00
Yeah, that's fair. My feeling is that people probably have a low tolerance for non-determinism, and these things are fundamentally non-deterministic. So they can't just go replace, say, customer call center software, because they could be wrong. And it doesn't seem to matter that humans are also wrong very often, and that AIs these days can very easily get to the same level as an average human in the job. I think there's still a lot of risk aversion. So I think the companies that are actually running with this are starting to see the results, and it's going to be reflected in their quarterly earnings, invisibly and in other ways at first.
54:45
Could it be that we're focusing on building the tools?
55:26
I'll turn it around and I'll say, what if what we're actually observing is that innovation at large companies is now dead and we are only going to see innovation from small places, which is kind of what happened when cloud came out and Facebook was a college kid at one point. Facebook feels like the biggest company in the world right now, but it was one dude, okay? And so when a new enabling platform technology substrate appears, you're going to see innovation at the fringes. Because of the innovator's dilemma, big companies can't innovate. They're all running into this problem. They may have hyper productive engineers who are producing at a very, very high rate, but the company itself can't absorb that work downstream. They're just hitting bottlenecks and these engineers are getting shut down and they're quitting. Right? So I think what's happening is we're all looking at the big companies going, when are you going to give us something? And the answer is we're looking at the big dead companies. We just don't know they're dead yet.
55:30
Do you think they're dead because, for example, it can now be cheaper to do something yourself? Let's just take the eternal punching bag: Zendesk, customer support. They have been the de facto place to do your customer support, because your agent can sign up, they get this UI, they get this workflow, et cetera. And for AI-native companies that are using MCPs and whatnot, it makes no sense, because they just want an API, which Zendesk does not want to give you, because they want to charge extraordinary amounts for you to come to their platform and buy their AI for, you know, ten times the cost.
56:19
That model is going to struggle a lot in the coming years, because people will build their own bespoke stuff with APIs. This is my platform rant playing out in real life. Right. If Zendesk doesn't make itself a platform, then they'll have product-ed themselves out of existence, I think.
56:49
And the platform, looking ahead...
57:03
Is it APIs? Is it MCPs?
57:06
I mean, as far as we can tell... no, maybe not MCP. Right? I mean, what did Anthropic find? That what works better than MCP is having the AI write its own code to call the MCP, because they're so good at writing code.
57:09
But then nothing really changes, because platforms have always been APIs from the beginning, right? Yeah.
57:20
So why did we need MCP? Well, we needed some way to declare what the tool does in an AI-readable way. But it's so loose and so flexible that integration is going to be really easy. I don't know; I'm not following that space well enough to know whether MCP will continue to be an important, dominant player, or whether the AIs will just use stuff directly, via command-line tools or APIs. But either way, we're moving into this world where the innovation is coming out of new shops who have adopted and adapted, and I see big companies struggling really badly right now with this.
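To make the "have the AI write code instead of chaining individual tool calls" idea concrete, here is a toy Python sketch. The two tool functions are hypothetical stand-ins, not a real MCP client; the point is that intermediate results stay inside the generated script instead of round-tripping through the model's context window.

```python
# Toy illustration of "let the model write glue code" vs. invoking
# each tool call through the model. The functions below are
# hypothetical stand-ins for tools; a real setup would call services.

def search_tickets(query: str) -> list[dict]:
    # Stand-in tool: returns canned data instead of hitting an API.
    return [{"id": 1, "status": "open"}, {"id": 2, "status": "closed"}]

def close_ticket(ticket_id: int) -> dict:
    # Stand-in tool: pretends to close a ticket.
    return {"id": ticket_id, "status": "closed"}

def agent_written_glue() -> list[int]:
    # If each tool call went through the model, every ticket record
    # would flow back through the context window. Here the whole
    # composition runs locally; only the final summary is surfaced.
    open_ids = [t["id"] for t in search_tickets("stale") if t["status"] == "open"]
    for tid in open_ids:
        close_ticket(tid)
    return open_ids

print(agent_written_glue())  # prints [1]
```

The design point is token economy: the script filters and acts in one pass, so the model only ever sees the short summary it asked for.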
57:25
I wonder if we will see a lot more of these building blocks that we didn't know we needed.
57:59
I think we're going to see a huge ecosystem of building blocks for people who are non-technical and want to build stuff; they need those APIs. You know what I mean? For storage, or for matching, or for whatever it is they need to do.
58:06
So I guess if you're in tech and you're looking for an idea, either because your job is looking a bit shaky or you actually just want to build something, now could be a great time to start building some of these building blocks that we're going to need. Reliable building blocks will probably be in demand: ones that have state, that have SLAs, that have some importance. That's not trivial to do.
58:19
That's right. Because AIs are lazy with good reason. They don't want to burn tokens if they don't have to. So if you provide a service that's going to make something convenient for them, they'll absolutely use it.
58:40
Yeah. Especially if it's a service that you need to maintain, for example where you need to keep up with regulation, or changes, or logging, or whatever. That's a lot of work to do, even with prompting: going back every day to prompt again, to update, and all that. And as humans, we're lazy too.
58:50
Yeah. I mean, well, Larry Wall called it, right? That's one of the virtues of a programmer.
59:07
Yeah. I want to go back to another one of your essays from 2012, called "The Borderlands Gun Collectors Club."
59:11
You're the one that read that one.
59:21
It got recommended on Bluesky and a lot of people liked it, and I realized I hadn't read it, so I read it. And it was a really interesting essay, because seemingly it has nothing to do with what we're talking about. You talked about gamification, and about this Borderlands game, which you played, apparently, right?
59:23
Oh, back in the day.
59:38
Yeah, back in the day. You mentioned how, after you completed the game, there was this weird thing the game developers probably accidentally put in there: people kept coming back to collect custom guns, and these were a meta-goal the designers probably never thought of, but it actually made the game kind of addictive. You called it an "elder game," or something like that. And you were kind of saying, hey, this was pretty smart; it was an accident by the game designers, but maybe more game designers should do this, because it just makes the game addictive. Since that was in 2012, we've seen so many games with deliberate gamification, and not just games but a lot of other things too.
59:38
Yeah, a lot of them found that mechanic eventually. Who was it, did Borderlands 2 pick it up? I forget. Anyway, they figured it out early, then they didn't capitalize on it. But yeah, gamification is kind of rearing its head again. People have pointed out that people are making game front ends to Gastown. I mean, why not make it a game? Come on, man. Look, we literally have games for running factories. Now imagine you're running an actual factory. How cool is that? Guess what: that's what Gastown is. That's why it's so fun.
1:00:19
Actually, do you think one of the reasons some agents are more successful than others, looking specifically at Claude Code, is that they also did some gamification? There's always something showing there, right? There's tinkering, there are different things that keep talking to you. There's always something, maybe accidental, maybe deliberate.
1:00:51
Oh, they have the best product managers in the world, and they've done absolute magic with command-line UIs; the stuff they've done is wild. But look, come on, right? That's not going to work for most devs. That's why Claude Cowork is so cool, right? Because it's the direction things are going to evolve, I think.
1:01:11
Yeah.
1:01:33
So I think developers will use Claude Cowork, or something more like it.
1:01:33
With traditional software, we have tech debt, and we know how to deal with it; we've talked so much about this. In fact, we were very busy in the 2010s with tech debt: collecting it, paying it off, migrations, yada yada yada. Now that we're doing a lot of vibe coding, or what you call vibe coding but really agentic engineering, just churning out a lot of code, how do you think we will recognize or deal with, or do we even need to deal with, this vibe-coding debt or agent debt?
1:01:37
You do, you do. One of my upcoming blog posts is about this, actually. I've discovered that there's a thing, and I've given it a name: a heresy. It happens in vibe-coded code bases that you're not looking at, where an incorrect idea can take root among the agents: a wrong architecture or wrong data flow or whatever, causing an impedance mismatch with the rest of your code. I call it a heresy because they have a tendency to grow and to come back, and they're really hard to weed out. I had a bunch of them in Gastown; there was a polecat heresy that kept coming back. What would happen is: it's invisible, and your product stops working properly along the edges, and you don't know why. You start having the agents dig into it, and you realize you've got a fracture, a fault line. You have, say, two complete databases that are both live and operational, and you're randomly choosing between the two of them, and you didn't realize this until just now. You find terrible things in your code. You try to get them all out, but there'll be one reference to it in some doc somewhere that an agent picks up on and goes, oh, that makes sense. It's the heresy, and it returns: the agent does the wrong thing, goes off, rebuilds the heresy, and it starts to spread again. It comes back. It's like the agents want the system to work this certain way, and you're telling them, no, I want it to work this other way, and you're fighting with them. What you have to do is actually document the heresy at the beginning of your prompting and say: this is one of the ways you can go wrong on my project, don't do that. And then you have to remind it periodically, or even put in tooling to keep it from doing that. Another heresy is that my agents all think they should be doing PRs. It's like: I'm the maintainer of this code, man.
Just push to main, or a branch or something. Don't make a PR; that's just polluting the PR space. That's for contributors. They can't get this today. Now, I could put in a bunch of hacks, but that's fighting the Bitter Lesson. Opus 5 will be fine. Opus 5 will go: oh, you don't want PRs? No PRs.
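For concreteness, "documenting the heresy at the beginning of your prompting" might look something like the following fragment of a project's agent instructions file. This is a hypothetical example, not Gastown's actual file; the two heresies shown are the ones described above.

```markdown
## Known heresies (do NOT rebuild these)

1. There is exactly ONE live database. If you find code or docs
   implying a second operational database, that is residue from a
   removed design. Delete the reference; do not "fix" code to use it.

2. Do NOT open pull requests. You are acting as the maintainer:
   push directly to main or to a feature branch. PRs are reserved
   for outside human contributors.
```

The point is that the warning lives in the standing instructions the agents read on every run, so a stray reference in an old doc can no longer resurrect the bad design unchallenged.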
1:02:04
What is the Bitter Lesson?
1:04:06
Oh, the Bitter Lesson. Yes. Richard Sutton wrote a very, very short essay, like 800 words, one of the best essays ever, called "The Bitter Lesson." He's saying: we're AI researchers, and we learned a bitter lesson, and you need to learn this lesson too. The bitter lesson is: don't try to be smarter than the AI. You think that humans bring special domain knowledge to this problem, and that we're going to teach it so the AI will be smarter. What we found was: bigger is smarter, always.
1:04:08
And that's like more data, right?
1:04:34
Yeah. And so, like what they're doing in Australia right now. You've seen the drawings; you know how big OpenAI's training center was, how big Anthropic's training center was. The training centers being built now are ten times larger. They're massive. They're in Australia because they have all the energy and the land and everything. And they are going to make models that are ten times or more smarter than the ones we have today. Right.
1:04:37
We talked about the vibe-coding debt. But does it not pain you? I mean, as someone who has built software, you know how to build good software.
1:04:57
You, you went in there to clean
1:05:03
up the mess of junior teams, and you could clean it up with your eyes closed, or maybe you had to keep them open. Does it not pain you, when you describe the AI going off trail, that if you scaled it back and said, hang on, let me step in, let me make these decisions, let me be
1:05:05
the architect, it would not happen.
1:05:20
Yeah, well, see, the thing is, I've also been a vice president at big companies of engineering.
1:05:22
True.
1:05:28
And so when I'm working with a team of 80 agents, it's not very different from working with a team of 80 engineers. Any one of the engineers can screw up, too.
1:05:29
And you've done that, right?
1:05:37
I have. And I'm telling you, they are isomorphic. So what is the Bitter Lesson? The bitter lesson is: don't try to be smart, just try to be large. Okay? Now, that's not the only way to make the AI smarter; they can also make them smarter on a couple of other important frontiers that are also being developed. So, to tie it full circle back to the beginning of our conversation: everyone who believes right now that the curve is S-shaped is 100% correct. They are 100% correct. It is S-shaped. Eventually we will run out of resources, the world will be out of resources, and it will flatten. Right. But I can tell you that there are at least two more cycles left in this, and that means they will be at least 16 times smarter than they are today. And that is going to cause all of knowledge work to be subsumed by this stuff.
1:05:38
Before we go all the way there, let's talk about how all this, the better models and more productivity, could impact personal software: things that people can build themselves.
1:06:26
This is what I thought you were asking about earlier when you said you wanted an API from Zendesk. Think about it. Everyone's going to want to build their own software.
1:06:38
Oh, I was talking about a business, not personal. But yes, also personal software. What would the future look like when everyone could have OpenClaw running in their closet, or Gastown, or they don't even have to run it on their own machine, they can just turn to this agent? How could that change personal software, but also the software industry as a whole? Because for a long time, personal software was the privilege of us engineers who could build it. We built our tools, we had open source, and we had some billion-dollar companies grow out of some of the cool things. What do you think could happen now that this is democratized to some extent?
1:06:44
How do you think open source could change?
1:07:21
Open source. How would open source change?
1:07:23
How it could change, because one interesting thing I'm seeing is a lot of remixing happening. A lot of open source projects don't really take pull requests now, because there are a lot of not-great ones, but a lot of people are just remixing: they take the open source project, they tell the AI to make this change, and they publish it as open source as well. Often no one looks at it, but now they don't need to ask for permission. A lot of people are weaving things together: take this project, take this thing. It's actually a lot more open source.
1:07:25
I see what you're saying. In the old days, the F-word, fork, used to be kind of a declaration of war. If you forked somebody's project, it meant you'd had enough of them. Like Roo Code forked Cline, and then somebody else forked Roo Code. I think it's now going to be an everyday occurrence. Right.
1:07:52
Because it used to be that forking meant a lot of time and effort: maintaining the fork, merging things back.
1:08:09
Cursor is a fork, isn't it?
1:08:17
It is, yeah.
1:08:18
That's a lot of work, that's a lot of work. Yeah, a lot less work now. Right. So yeah, everyone's going to be forking. I think that's a natural consequence of everybody writing code.
1:08:19
Yeah.
1:08:32
Just like everyone can take a picture now. That didn't used to be true.
1:08:33
Yeah. What are some of your beliefs from early in your career that held really, really well until recently, and that you've now abandoned because of AI?
1:08:36
Engineers are special. There's one.
1:08:46
Come on. We are special. No, I think we're so special.
1:08:48
Yeah, sure. We learned how to do something by hand that computers can do now. Kind of cool, I guess.
1:08:52
What about the engineering mindset we have? It's not just coding that we do, right?
1:08:58
Well, for one thing, I believe that our thirst for new software will never, ever, ever diminish. It will only grow. We're at the beginning of software. All the software we have right now is garbage, Obs especially. And we're going to see a new world over the next 10 years where software is commonplace and good, and you'll have your choice, and it won't be: I have to pick between three really bad OAuth solutions, or company HR systems, or whatever stupid-ass thing. Today the selection is terrible. SaaS is awful. The whole thing. Right.
1:09:02
Airline ops, airline apps.
1:09:38
Right. I mean, we ran a vibe-coding workshop in Sydney where a dude actually wrote an airline check-in app for himself and got it into the Android queue, before Southwest realized it and shut him down because he was a bot. But that's what people want: they want personal, bespoke software, and they're going to get it. That's why, when Jeffrey Emanuel forked Beads, I was like, you go. He felt so bad about it, and I'm like, dude, this is the new world, man. Fork, fork, fork. Let's have Beads in every language. I don't care. Right.
1:09:41
I mean, in all fairness, looking at it from the positive side: I wouldn't mind just having good software for the stuff I use day to day. My utility provider is getting somewhat better. The government websites I have to access, paying my parking fine. The other day I tried to send a package to Canada from the Netherlands, and the official postal service has been broken; they haven't been able to send anything for a week. I can see the exception; they cannot fix it. So I had to go to DHL and pay a bunch more money.
1:10:06
That's right.
1:10:34
And like there's a lot of bad software out there and your agent will
1:10:34
be dealing with it, not you. Yeah, but I think the people who write software that agents like and prefer and choose, and who then find a way to market it and get the agents aware of it, are going to win big, because everyone will use agents. We'll all be dependent on them.
1:10:39
Well, plus also, I guess, software, or ways of making agents write quality software. Because I have a feeling you will want to do better stuff; if you just do the same, you're not going to have a business.
1:10:54
Right?
1:11:03
Yeah. So, I mean, look, I think businesses will compete on more and more complex software. The ceiling will just keep going up. We're going to keep building until we build the Death Star or whatever, right? I mean, we're building bigger and bigger things. Oddly enough, Gergely, I am an optimist through all of this. That's my first belief, first and foremost: it's all going to work out.
1:11:04
So, asking the optimist now: I got this question, I think it was on Bluesky, where this person asked, how do you think the software industry will continue to exist if we get to the point that any software can be trivially cloned?
1:11:22
Yeah.
1:11:34
Where will that leave us? What cannot be cloned? What is the moat? Let's just jump ahead and assume these things actually can do it.
1:11:35
Human connections are probably the biggest one. Almost counterintuitively, as software does more and more automated stuff for you, people are going to be like, well, yeah, but that's just automated; I want a human to do it. And they will literally want a human to bring their thing instead of a drone. They'll want humans to curate things for them. I think humans will be a moat. Do you
1:11:44
think, if you look back at some of the history, have we seen some changes that felt a bit like this, where we then saw some professions thrive because of more automation? Or, you know, something like Stack Overflow?
1:12:06
I don't know. One that jumped to mind: Mechanical Turk. I mean, we've seen a bunch of big step functions; it's just that we're about to see a whole bunch of them at once, right? Look at the news lately. This is the funny thing: everyone's asking, where's all the innovation? And then in the news, all day long, they're seeing all this innovation in AI. It's just not coming from the Walmarts and Microsofts; it's coming from random individuals. But the innovation's there. And from the startups I've been talking to, anywhere from 2-to-5 to 20-person startups, I think we're going to see some really impressive stuff launching in the next couple of months.
1:12:20
Are you seeing these small startups change how they work?
1:12:58
Oh God, it's so different, dude. Tell me how it's so different? Okay, for starters: I'm convinced of this. In the new world, everything that you do will either have to be fully transparent, or you're hiding it for a reason.
1:13:00
Tell me more.
1:13:15
In other words, if you don't want people to see what you're doing, just don't show it to them and they will never see it. And if you do want them to see what you're doing, then you had better get it out in front of them instantly, as you do it, or else the train will pass you by. So, I told this story on my blog, people have heard it: they yelled at a teammate. They were mad because he implemented a feature that they'd asked for two hours before. And they said, that was two hours ago; it's changed too much since then. And he's like, well, what do I do? What's happening is they're getting into this mode where they realize that stuff moves so fast that everything is effectively invisible in the volume. And so you have to be extremely loud and transparent and intentional about saying everything that you're doing, so that if anybody else is doing the same thing, they can stop you right then, and if they need to integrate with you, they can start. Right.
1:13:16
And we're talking about startups that are looking for product, looking for customers. They just want to get what we call product-market fit, where the traditional wisdom was: build something amazing and then release it to the world.
1:14:04
Right, that's right. Try to find product-market fit in secret as much as you can, then launch it, and then tune. That was the formula for many people; it used to be. Now, like you're saying, with Gastown I realized I'm not going to find product-market fit by myself. So I launched it as soon as it kind of worked and said: help me. And that's how I found out about the Dolt database, which was a big change, and people fixed a bunch of bugs. I got 100-plus PRs in the first couple of days. So it found its way closer to product-market fit just by me getting it out there.
1:14:15
And would you say that has brought you... on one hand, people might look at it as just another open source project, but is it actually bringing opportunity? If you wanted to, could you turn this into a business? What I'm getting at is: these things that take off as open source projects, can they actually turn into actual businesses? Are we at that stage?
1:14:47
I promise you, if you had made Gastown, you would be shaking venture capitalists off you like ticks right now. I am; they're finding me everywhere. And I tell you, it's because there's a lot of money out there right now, sniffing, wanting to find its way into AI. It knows something big is going to happen, and it's looking. You can see it in all these different micro-economies that are springing up, but nowhere can you see it more clearly than when you launch something cool, like Geoffrey Huntley did with Ralph Wiggum. VCs, right? Everyone wanted to talk to him. You've just got to be real careful, because anything you build probably has a real short shelf life at this point. A real short one. I'm not attached to Gastown in any way, because I think it'll be supplanted by something better within six months, if not sooner. So don't get too attached.
1:15:07
So let's assume a staff engineer is listening to this podcast, or watching it on their commute, and they're at the type of company where they still have Copilot. There are people like this. They're using it, and they want to believe you, but they're not sure they can. What would you tell them? What can they do to get proof that you're actually right and this thing is working? We're not at 100% adoption; we're not even at 50%. A lot of people in this field have tried it out, but there's a long way to go.
1:15:54
I would say probably 70% still aren't doing it. Yeah. So what would I say? I had a really good message for them. Oh, yeah: get out. Get out. So here's the thing, right? If you were to line up all the tools from best to worst, Copilot is below the line. It doesn't even know about the line. Right.
1:16:23
But it used to be the best.
1:16:45
Four years ago in 2021.
1:16:46
Right?
1:16:48
Yeah. It was out of the competition even maybe two and a half years ago. I was quite stunned when somebody asked, "Does anybody use Copilot?" at an AI Tinkerers meetup, and somebody raised their hand, and he goes, "Do you have to?" And everyone laughed. And I was like, what happened? The brand just tanked. But I'm serious: if you're working at a company that gave you Copilot, they think that they're starting to move faster, and there's a barbarian horde of people using Opus 4.5 that is going to destroy your company sooner or later. So what you need to do is go into the crazy part of Crazy Town and figure this stuff out and start building. Because we are moving very quickly this year into a world where proof of work is so important. And I mean proof of work not in the Bitcoin sense, but proof of what you have done: your resume. And I don't mean your actual resume, because nobody's going to believe that; I mean the actual work that you did, which has to be visible. Back to our transparency point. I think everyone's going to be bringing their work with them. The notion of proprietary work is starting to be threatened, I think, because it's so easy to fork, so easy to clone, so easy to route around. If you have anything proprietary, you become this thing that everybody just wants to run around. So big, big changes are afoot. But man, if you're working with Copilot right now, you are going to get left behind. What you need to do is find half an hour a day to go play with Claude Code. Or, like I said, if you're a company, make your token burn as high as your investors will let you, because that token burn is your practice; it's how you sort things out.
1:16:48
So I want to ask you the other way around. Let's assume you're just wrong about the curve, and we're at the peak, and it will not be 10x; it will plateau at 3x.
1:18:25
Or let's just say the next model is inexplicably dumber than Opus. We've peaked.
1:18:37
What would happen to the person who takes your advice and goes all in and learns these things? What's the worst thing that could happen to them? If these things take off, it's a great investment, right? But what would happen to them if they followed your advice and the models didn't follow? Where would that leave them?
1:18:43
Exactly where they need to be. Because the damage is done. Opus 4.5 made this officially an engineering problem. We don't need you AI researchers anymore, thank you. You can make smarter models, I guess, but we don't need them, because we have something that can take a bite-sized chunk out of a mountain, and the bite size is about town-sized now. So we can eat mountains. Okay? It's purely an engineering problem at this point. It's like fire or steam: it's a force, it's a power, and we wrap layer after layer after layer around it. I worked on a nuclear reactor; I was in the Navy. I know how these things work. Okay, we are going to put layers around Opus 4.5, if that's the smartest model ever, and that will do all of the engineering from now on. So it's done. So it's okay to jump into the pool now.
1:18:58
Your first job was about debuggers. Or not debuggers, but you worked at this amazing company, and you told me they had the best debugging tools. What was the name?
1:19:42
It was GeoWorks, and the debugger was called Swat. It was amazing: time machine and all that.
1:19:49
And in the first Pragmatic Engineer interview, when we talked (this is in the newsletter), you actually said that to this day you've not seen as good a debugger, but that you're kind of determined to build one at some point, or help build it.
1:19:54
I did build a debugger enclosure for the JVM called Ganja. It was actually pretty cool. But then I got into an argument with Rich Hickey about how much he wanted to support the JVM, and he doesn't, so...
1:20:07
Yeah. But anyway, you're a guy who is
1:20:18
passionate about this, though.
1:20:19
Yeah.
1:20:21
You're passionate about debugging. What will happen with debugging?
1:20:23
What will happen with debugging tooling?
1:20:26
What do you think the future of debugging is with agents?
1:20:27
When I see agents say, "I'm going to debug this," they all use printfs. So, you know, I'm curious. It could very well be that they just haven't been trained on debuggers yet, and they'll all wake up in six months and go, oh, I should have been using this. But it could also be that we don't need them anymore. I don't know.
1:20:32
And another step further: what do you think the future of the developer workstation, like our rigs, our machines, will be? Do you think it'll be... I
1:20:50
want Gastown on my phone. I almost have it; I just haven't worked on it yet.
1:20:59
Peter Steinberger told me he had VibeTunnel, where you could do it from your phone. He said he stopped because it became too addictive.
1:21:03
Oh, yeah. No, Tailscale. And actually, the only thing that's keeping me from just being addicted to it all day long is that it's too hard to enter control characters. But that's going to get fixed at some point. Programming on your phone will be a thing.
1:21:09
So do you think developer workstations can be just lightweight Chromebooks and whatnot, or do we actually want beefy ones that can run our local agents? Where do you think it's headed in the short term, and then maybe the longer term? You see what I mean: local models.
1:21:19
Yeah, no. Look, I love my laptop; I've been programming 40 years, I get the local thing. But I've been saying for at least 15 years that we don't need this stuff locally. Right. Google had an amazing client in the cloud, a high-speed network connection, and what you could do with Cider: CitC was the base, and then Cider was built way up on a higher layer. When you get something like that, and you're not constrained by the laptop, especially in a world where you can run basically unlimited agents, bounded only by your pocketbook, people are not going to want to work on their laptops. Gastown has already completely stressed out my laptop, because Claude Code actually takes quite a bit of memory. So yeah, I think we're moving to a world where people will work on servers, and probably on mobile devices and on iPads; not on laptops as much.
1:21:34
In the past you've said that one of the most important predictors of developer productivity is language design: well-designed languages are easier to work with. Do you think this has been completely erased, or do you think it might come back at some point, maybe with purpose-built languages?
1:22:21
I think there will probably be purpose-built languages, by AIs, for AIs, maybe. But right now we're in a funny place where some languages still work better than others because they have better training data. In the fullness of time, all the languages will work equally well.
1:22:34
Let me push back on that. If a new language never has training data, how would it work?
1:22:48
No, I mean, sorry, all the existing ones. TypeScript. It struggles with TypeScript today, but it's not going to in one or two model generations. Really, it won't matter.
1:22:53
So could we see a stagnation, just fewer languages, or no new languages launching, because the existing ones get the job done? Launching a new language seems a bit suicidal, unless you bring a bunch of training data with it. Right?
1:23:02
Man, that's a loaded question. I didn't mean to make it loaded. No, it's a good question. Part of me says languages just don't matter anymore, any more than assembly languages matter, except for a few people trying to optimize really important things; for everybody else, it just doesn't matter. Right? But then part of me says, well, energy is the most constrained and important resource on this planet, and it's only going to get worse. So finding better algorithms, finding better ways to solve problems, is often a language problem. Finding a DSL, you know. So from an optimization perspective, an efficiency perspective, the search for new languages will probably continue. But for pragmatic, everyday use, I don't think it matters what you pick now. You can ask your agent what language it's using.
1:23:12
So as a software professional who loves the craft, who is into languages, debuggers, tooling, et cetera: a lot of what we talked about is pretty sad, because a lot of the beauty, the challenges we worked on, might be going away if this continues. How did you work through this yourself? And what is the thing that actually excites you looking ahead?
1:24:01
Right. So I had the benefit of going through 30 years of graphics evolution, and so I saw the sadness, and I saw the resulting much better games we got after all that happy stuff we were doing by hand moved into the hardware. We're sad because we're used to it. Change is part of life. Okay? At one point I had to say goodbye to assembly language. The compiler writers finally caught up, and we were mad, but then we were happier, because compilers are obviously way better than writing in assembly language. And anybody would be stupid to say today, "You're not a good engineer if you can't write in assembly language." But that was actually what we were saying in 1992 too.
1:24:25
Yeah. And then you had blog posts out in 2012 as well. Yeah, yeah.
1:25:04
No, I'm just saying stuff changes. What you need to know as an engineer will change, and you can't rest on your laurels. And we're going through a period of faster change now, but you have helpers called agents that can actually help you through this change. So stop complaining and just go do it.
1:25:07
Yeah. And I think, just recognize we're in an industry where change is a thing,
1:25:22
and with that said, there are a bunch of opportunities right now. Go through the five stages of grief, right? The five stages of grief. I mean, I went through. I don't know about anger. I was angry, really angry, for a lot of reasons years ago. But no, if you've ever truly grieved, if you've lost someone, you know that it hits you in a lot of weird ways: you feel disconnected from reality, you feel sick, you feel stunned, all day long the world goes monochrome, all color disappears, all kinds of weird stuff. Right? And I went through that for about six or seven days. It didn't take me that long to get through it, fortunately. Or maybe that was the peak, and it was surrounded by a few months of it on either side. But there was a period where I was checking off things I had really cared about that no longer mattered: my ability to memorize, my ability to write, my ability to compute, anything computing related. I was very sad, because those things made me special somehow. Right? But then, to your question, what makes me excited? As soon as I got through that, I thought: but wait, I'm writing 10 times more code than I ever was, and I'm having fun, so why should I be sad? And so I realized it's just me holding onto the old, just like I did in graphics. And there's no point, because the future is actually more fun than the present.
1:25:25
You're known for your predictions, and I'd like to put that to the test. Let's get some specific predictions for next year and 2027: things you think will happen, either with how we develop or how the industry works.
1:26:43
I think that my wife is going to be the top contributor to our video game.
1:26:55
Ooh, Bold claim.
1:26:59
Summer of next year.
1:27:00
And she is not a programmer, I'm guessing.
1:27:02
No. Oh, no, no, no. But she loves our game and she has lots of ideas, right?
1:27:04
Amazing.
1:27:09
Yeah. In fact, I think my whole family might pitch in on it. I'm serious, man. Programming is going to be for everybody, and it's going to be the most amazing thing, because you know how much fun we've been having all these years, and we've been telling people it's really fun, but now they're going to get to experience it.
1:27:10
I look at my kids and how they use AI. They're having so much fun with it, creating. They're just prompting Gemini or any of these tools with their imagination, and they don't think it's weird. I think it's weird; I never would have thought of it. But they just enhance our photos with, like, squirrels on my head, and it makes me laugh. And you realize there's just a lot of fun and new things with it when you let go of what came before.
1:27:24
It's given people the ability to do very sophisticated mashups of anything. And mashups are really where innovation happens. Innovation comes from taking things, putting them together, and seeing where it goes. We're going to see everybody innovating, man, and it's going to be the most amazing thing ever. And then we're going to need ecosystems of agents that can go find stuff that you like, because there'll be so much content. How are you going to find the stuff you really like? You're going to have an agent that knows you really well. I think any software engineer who wants to go build a big business right now should start working on agents that know how to search this new world of everything that's coming out. I don't even know what we'd call it. Right? The pile of software that you like, experiences that you like. If everybody's creating it, think about it: when the Internet came out and everybody could make a webpage and upload stuff, we needed aggregators, we needed search engines, we needed ways to organize and find and surface the good stuff. None of that exists right now. But everybody's about to start coding, so you can get ahead of this. This is why I keep saying: just believe the curves. Pick a point on the curve and aim for it, and you will land there, and you'll be first when the AIs are ready for your thing.
1:27:49
Yeah. And I think as engineers, we can already build. We don't need permission. We can use these tools super efficiently right now, and we are ahead of the rest of the world. Exciting times. Well, Steve, we'll have to check back on whether that prediction about your wife contributing comes true. This has been really eye-opening, and sometimes I think it's good to go through both what has been and what can be.
1:29:02
Yeah. Well, thanks.
1:29:27
I hope you enjoyed this conversation as much as I did. An interesting thought from Steve is the parallel between the graphics industry and what's happening in software engineering right now. In 1992, Steve was learning to calculate where individual pixels go on a line. Two years later, the same course was teaching animation. The work in graphics went from writing device drivers to building game worlds and physics engines; it all just moved up the abstraction layer. Steve's argument is that software engineering is going through exactly that same shift right now, except faster. Instead of asking whether engineers will have jobs at all, a better question might be: what will the new jobs we do as software engineers look like? Another thing was the grief of this change. Steve is someone who spent 40 years building his identity around compilers, debuggers, elegant code. And then one day he sat down and started checking off, one by one, the things that made him special that no longer mattered. His world went monochrome, as he said. Within a week or so, he came out the other side and realized he was writing 10 times more code and having more fun doing it. Still, I think a lot of engineers are quietly going through something similar right now, and it usually takes longer than a week to digest all of this. Finally, one thing I found really honest from Steve was his point about value capture. If you become 100 times more productive with AI, who benefits? If you work 8 hours and produce 100 times the output, the company captures all of that. But if you work just 10 minutes a day and produce the same value as before, you technically captured all of it and your company captured none. Neither extreme is sustainable. Steve is saying that this new work-life balance is a question that we'll
1:29:29
need to figure out.
1:31:02
We don't have the cultural norms for any of this and it's going to be messy as we figure it out. If you've enjoyed this podcast, please do subscribe on your favorite podcast platform and on YouTube. A special thank you if you also leave a rating for the show. Thanks and see you in the next one.
1:31:03