Oxide and Friends

Software Engineering Past, Present, and Future with Grady Booch

55 min
Feb 7, 2026
Summary

Grady Booch discusses the evolution of software engineering across three 'golden ages'—from algorithmic abstractions to object-oriented design to modern distributed systems and AI-assisted development. He argues that LLMs like Claude represent a natural progression in raising abstraction levels rather than a threat to the profession, while cautioning against their limitations in abductive reasoning and the need for human oversight.

Insights
  • Software engineering has progressed through distinct eras defined by rising levels of abstraction: algorithms, objects/classes, and now frameworks/platforms—each solving complexity problems of their time
  • LLMs are tools that increase velocity and reduce friction similar to how compilers did, enabling more software to be built rather than replacing engineers, but require experienced guidance to avoid errors
  • The real risks from AI come not from superintelligence but from powerful actors using these tools to concentrate power and control, making ethics and individual choice critical for developers
  • Economic forces have always driven software engineering practices—from waterfall (expensive machines) to agile (cheap machines)—and current LLM adoption reflects economic shifts toward disposable vs. durable software
  • Fundamental engineering principles (coupling, cohesion, abstraction, simplicity) remain constant across all technological revolutions and are essential for building elegant, maintainable systems
Trends
  • LLM-assisted software engineering becoming omnipresent across all stack layers, similar to compiler adoption
  • Shift from building systems from classes to orchestrating pre-built frameworks and packages as the primary engineering activity
  • Rising importance of infrastructure software automation and refactoring using AI tools to manage complexity and fragility
  • Growing concern about AI-driven power concentration and surveillance rather than existential AI risks
  • Renewed emphasis on software engineering fundamentals and ethics as AI tools become more accessible to less experienced developers
  • Economic viability of previously uneconomical software (one-off tools, organizational utilities) expanding due to LLM productivity gains
  • Documentary and public education efforts needed to counter AI doomerism and fear-mongering in mainstream media
  • Distinction between disposable and durable software becoming more critical as LLMs enable rapid prototyping
Topics
  • Software Engineering History and Evolution
  • Three Golden Ages of Software Engineering
  • Object-Oriented Design and UML
  • Large Language Models in Software Development
  • AI-Assisted Code Generation
  • Abstraction Levels in Programming
  • Software Architecture and Design Principles
  • Compiler Technology and Language Evolution
  • Agile Software Development
  • Distributed Systems and Real-Time Computing
  • Software Crisis of the 1970s-80s
  • Cold War Influence on Computing
  • Ethics in Software Engineering
  • Infrastructure Software Automation
  • Abductive Reasoning Limitations in LLMs
Companies
IBM
Acquired Rational Software in 2003; Grady continued on at IBM after the acquisition, later moving into research
Microsoft
Bid to acquire Rational Software; Bill Gates offered Grady the Chief Architect position in 2003
Rational Software
Founded by Grady Booch and two classmates in 1982; pioneered object-oriented design tools and UML
Anthropic
Creator of the Claude LLM; Grady criticizes its CEO Dario Amodei for overstating AI capabilities
Oxide
Host company of this podcast; Pierre Lamond is an investor mentioned in the opening discussion
People
Grady Booch
Software engineering pioneer; co-creator of UML; discusses 50+ year career and three golden ages of software
Grace Hopper
Computing pioneer of compilers and higher-order languages; Grady met her and still has a "nanosecond" she gave him
Margaret Hamilton
Coined term 'software engineering'; worked on SAGE and Apollo 11 lunar lander code
Bill Gates
Offered Grady the Chief Architect position at Microsoft in 2003, which Grady declined
Alan Turing
Computing pioneer; Grady met people who worked with Turing at University of Manchester and Bletchley
Bjarne Stroustrup
Creator of C++; collaborated with Grady on lectures; influenced each other's work in object-oriented design
John Backus
Creator of Fortran; Grady interviewed him for Computer History Museum oral histories and got his signature
Frank Gehry
Renowned architect; Grady uses his work as analogy for how tools (AutoCAD) enable creative expression
Herbert Simon
Author of 'The Sciences of the Artificial'; recommended by Grady as essential reading for systems engineers
Ray Kurzweil
Futurist; Grady disagrees with his singularity predictions and views him as off-base
Elon Musk
Grady criticizes his AI predictions and doomerism; mentions political disagreements
Sam Altman
OpenAI CEO; Grady criticizes for claiming software engineering is dead and overstating AI capabilities
Dario Amodei
Anthropic CEO; Grady criticizes for claiming software engineering is dead and self-serving narratives
Pierre Lamond
Venture capitalist and Oxide investor; informed the host that "legends are dead"
Carl Sagan
Referenced as inspiration for Grady's documentary on computing and human experience
Quotes
"The entire history of software engineering is one of rising levels of abstraction."
Grady Booch (early discussion of software evolution)
"Large language models are, at best, unreliable narrators. At worst, they're bullshit generators at scale."
Grady Booch (discussion of LLM limitations)
"Large language models are architecturally incapable of abductive reasoning. They can do induction, they can do deduction, but they're architecturally incapable of doing the other because there's no theory from which they can build."
Grady Booch (explanation of fundamental LLM limitations)
"I am not worried about the rise of superintelligence. I am worried about the rise of billionaires who wish to use these things to increase their power."
Grady Booch (discussion of real AI risks)
"All architecture is design but not all design is architecture. Architecture represents the significant designs that shape the form and function of a system where significance is measured by cost of change."
Grady Booch (definition of software architecture)
Full Transcript
Oh, there's another Grady. Did you get the other Grady with the hands up? I just let anyone named Grady join the stage. Noted for all the listeners. Exactly. Kind of an I'm Spartacus moment. You hear me now? We hear you now, yes. Hooray. Welcome. It is great to have you here. Grady, you know, Pierre Lamond is a famous venture capitalist and an investor in Oxide. And I made the mistake when I first met him of saying it was an honor to meet a legend. And he informed me that legends are dead. And it was like literally, I was like, okay, this is off to a very, very bad start. So I am not going to call you a legend because I don't want to offend you, but you should know that I do view you as legendary. So hopefully that's, I know I'm on a knife edge here, but hopefully that's okay. You're most kind. I've been called many things. Well, you are certainly a software engineering pioneer. And you have seen a lot. And I have to tell you, as I was preparing for this conversation, I had pulled what I thought was a recent conversation. And as I was listening to it, I'm like, well, it's kind of surprising that he's not tacking harder into deep learning, given that it was in 2024. And then I look at it, I'm like, oh my God, it was in 2014. And this conversation you had in 2014, I have to tell you, is aging very, very well. Because you were talking about working with the Watson team and understanding what Watson had done at IBM. But of course, your history is a lot deeper than that. I mean, you kind of came up as software engineering was coming up. I was going to ask if your degree was in computer science, because you were at the Air Force Academy, and it almost predates, certainly it predates the computer science department at our alma mater.
Grady, would you mind taking us all the way back to your undergrad years and then your first encounters with software? So let's go way back in time. I built my first computer when I was 12. So we're talking, you know, mid-60s. Why did I do that? Well, I was a voracious reader then, still am now. And I remember going to the library one day and finding this book called Computer Design, which is basically a hardware book. And it described, you know, here's how mainframe computers were being built. I was immediately taken by it. And I said to myself, I need to build me one of these. Well, obviously, in the mid-60s, there weren't a lot of things around, not even integrated circuits. Yeah. Right. So I did a self-study and then realized I could build these myself from individual transistors. And so my first computer, today we'd call it more of a calculator with extended memory, I built around that age. I thought hardware was pretty cool, but then I realized software is where it's at. So you built your own computer out of discrete logic as a teenager, only because the integrated circuit had really not yet been invented, and microprocessors certainly had not been invented. I'm just playing that back to you so I can polish my own sense of inadequacy. Is that correct? He's more of a software guy, Brian, remember. Exactly, that's right. That is correct. I could probably put on a blindfold and build a flip-flop for you out of four transistors. But that's another part of my life. I decided software was where I was at. So here I was by that time, 13, 14, the summer thereof. So living in Amarillo, Texas, a small town of 150,000 people, I went and knocked on the door of everybody that had a computer. Of course, this was the time when IBM was the dominant manufacturer. And obviously, nobody was going to hire a kid. But I knocked on the door of the IBM sales office. There's a sales guy that took me in.
He nodded politely and then handed me a manual, a Fortran IV manual, and said, go read this and come back to me. I expect he never imagined I would come back. But there I was the next week, and I said, I have written a program. I would like to run it. He was a little bit gobsmacked. The program, of course, was your typical hello, world program. I was also into physics at the time and had written a program in Fortran to model the movement of two neutrons moving toward one another at close to the speed of light and calculating what the release of energy was. So that was my first program. I still have the punch cards for it, by the way. The guy took an interest in me. That's a long way from Scratch or whatever the modern equivalent is. Adam, have you ever been to Amarillo? I don't know that I have. Okay, so Amarillo is desolate, right? I mean, Grady, it's fair to say. Amarillo is like, you are in the Texas panhandle, and I mean, you are a long way from other things. It's remote. It's the helium capital of the world. Our main claim to fame is that it's home to the Pantex plant, which is the place where nuclear triggers are assembled and disassembled. So it's kind of a fascinating place. So anyway, I wrote the program. He got me time on the weekends on the public service company's computer, which was an IBM 1130, if I'm not mistaken. And he said, go to it. I taught myself how to punch cards. I taught myself debugging. And there we have it. Around that time, I decided, you know, what do I want to do next? I was deeply into software, Fortran at the time. We're going to come back to the story of the manual in a minute. But I then said, what are my two loves? Computers and space. And the great place to do that at the time was, of course, the Air Force Academy, because they had an astrophysics program to bring me into space.
They had a burgeoning computer science program, really one of the few undergraduate programs at the time. So I went there, and I got my degree, a Bachelor of Science in Computer Science. There was no such thing as a computer science degree at most schools at the time, but it was a Bachelor of Science. My first assignment after I graduated was at Vandenberg Air Force Base, so my dream come true. There I was, literally in Space Command before the Orange Cockwomble called it that, so here I was in space, dealing with— Gosh, I unveiled my political bent, didn't I? You're all good. I feel we need a separate time for it. So there I became a project engineer and a project manager. And the fascinating thing about this time, and I'll give you a hint for some of the things I'm up to now: I'm working on a documentary about computing and its intersection with what it means to be human. Imagine Carl Sagan's Cosmos, but about computing. And I'm convinced that one of the major influences upon modern computing is how it evolved from World War II and the Cold War; the phrase I use is that much of modern computing has been woven on a loom of sorrow. And so in that time, the largest and most complex systems were not being built by industry. They were being built by the military. We were building distributed systems. They were fusing together 40 different radars in real time around the world, and this was hard stuff. Hard then, and still a bit hard now. So I learned a lot about how to deal with large, complex systems from day one. I then got involved in the Ada program, because here we were in the midst of the software crisis, as it was called at the NATO conference. Yeah. Could you describe the software crisis a little bit?
Because I think that this is, I mean, the year now is, we're in the very early 80s, maybe late 70s? What is the... We're in the mid 70s. Mid to late 70s. Mid 70s, yeah. So the question I would ask all of your listeners is: remember what year you got your first email address? Think through the answer. I got mine in 1979, when it was still the ARPANET, and at the time we had a little book, maybe 100 pages, that listed the email address of everybody in the world. Very cool to have. So I was on the ARPANET from the beginning. Wow. Yeah. Yeah. So anyway, I was called in to be involved with the Ada program because of the so-called software crisis, as it was named in the big NATO conference around the time. Remember, at the height of the Cold War, there was a system called SAGE, the Semi-Automatic Ground Environment, which is so much the predecessor of modern computing. A large distributed system. It's the first place where we had CRT displays as opposed to just teletypes. It was a successor to the Whirlwind computer out of MIT. And so we were dealing with human-computer interface kinds of things. That work was taken into the SAGE system. And this becomes important because there's a woman that worked on SAGE. Her name is Margaret Hamilton. And she is the one who coined the term software engineering, to distinguish herself from the hardware engineers of the time. She went on from SAGE to be the main engineer for the Apollo 11 guidance system and indeed was the person, you've probably seen pictures of it, primarily responsible for writing the code for the lunar lander. Anyway, what's the software crisis? The problem here is that you had tremendous demand for software and yet the inability to build it quickly enough, reliably enough, et cetera. The SAGE project consumed perhaps 30%, according to some reports, of all of the software engineers in the United States. Huge, huge project. And we learned a great deal from it.
I'll jump ahead because I don't want to spend all the time on my history. I want to talk about some present things. Sure, of course. Yeah, yeah, right, right. So then, after I left the Air Force Academy, I started up Rational Software with two of my classmates in 1982. We were bought by IBM in 2003. I'm going to put a bookmark there and tell you a Bill Gates story in a moment. And then the rest is history. The last six years, I've been working with a set of neuroscientists to better understand the architecture of the brain. And you referred back to something in 2014-ish. That was probably the TED talk of which I spoke. I was annoyed at the book that Bostrom wrote and said, this sucks. And Elon, you're a dolt. Bill, you're a dolt too. Let me tell you what I think. So there we go. Yeah, I was actually on the Singularity podcast. Oh, yeah. I did listen to the TED talk. I've got to say, whenever I listen to a TED talk, I remind myself why I find TED talks as a format to be annoying. Your TED talk: great. But I always feel like, oh God, I want to go listen to the speaker outside of the 45 seconds they've been allotted. Yeah. Yeah, Ray and I got into it. I thought his ideas of the Singularity... Well, the Singularity's main value is it allows you to make clickbait articles and sell books. I think there's no substance in it whatsoever by any means. Let me quickly tell you a Bill Gates story, since you're hearing a bit of my history. 2003, we were bought by IBM. Microsoft had been bidding on us. A couple of months after we finished the deal, I got a call. Hey, Grady, it's Bill. Come see me. So I flew up to Seattle. I'd done some things with him before. He sat me down in his office and said, you know, this is not public yet, because we're talking, what, 2003, but I'm going to retire from Microsoft. And you know, Grady, I have two jobs. I'm CEO and I'm chief architect of Microsoft. I would like to give you the job of chief architect.
I said, Bill, that's really interesting. Let me think about it. For a couple of months, I went back and forth, spending time with his lieutenants. And I came back to him and said, Bill, I'm profoundly flattered, but you know, you have a very dysfunctional company, and I'm not the person to fix it. So if I said yes, it would only end in tears. And so I said no and continued on with IBM, and I'm happy that I did. Yeah, yeah. And I guess, would it only have ended in tears? It could have been your tears that it ended in, or you're just like, this is just too... Yeah, right. It would have been a waste of my time. I'm a lover, not a fighter. They needed to have somebody come in and knock heads, and I'm not that kind of person. So I've been very happy in research the last several years, doing what I'm doing. And it should be said, just along the way, and I know I don't want to necessarily dwell on the history, but I do think it's germane for what I want to talk about in terms of the future of software engineering, that along the way, as you were trying to navigate the software crisis of the late 70s and early 80s, you really were trying to find ways for people to up-level their craft of software engineering. In terms of modeling systems, UML was your co-baby. It's certainly a claim to fame. And you developed, well, this is where I came to know you, because I kind of came up in the 90s. My first email address was very much as an undergraduate, in 1992.
I think they had just gone to straight internet instead of having both internet and BITNET, right? Yep, right. ARPANET, yep. Right, exactly. And then MILNET, yeah. But I remember you authored the Booch components, which were, before the C++ Standard Template Library, the components that were a demonstration of and an embodiment of how to use object-oriented design. And so you saw software navigate this crisis of, oh my God, we don't know how to do this. And when I was coming up, it still felt like software was in that crisis. Still is. Yeah. So then, to fast forward here, you've seen a lot of revolutions. You were early in on Agile, and you saw that revolution run its course, one way or the other, I guess, because I need to get your take on that. And now we're in this crazy world. Do you feel like, yes, this is the world that I saw a decade ago, this world makes sense? Because it feels to me like, it's delightful, but bizarre is kind of my take on where we are currently. But I would love to get your take on it, obviously. I would say that every age of software has been crazy and bizarre and wonderful simultaneously. The rise of the internet, the rise of the personal computer, the rise of the mobile device, the rise of the cloud: every one of these represents a point in time and a point of change in the industry. And every one of them, interestingly, has reacted in very similar ways to what's happening here. I have a lot of people coming up to me saying, oh my gosh, Grady, I have this existential crisis. My career is over. It's going to be replaced by computers. And I observe to them: no, be calm, take a deep breath. The entire history of software engineering is one of rising levels of abstraction.
And let me explain why that's the case, because you're right, I've been around this business for a while. I met Grace Hopper. I still have the nanosecond she gave me. If you don't know the story I'm referring to, go Google Grace Hopper and David Letterman, and there's this wonderful scene with her. I didn't meet Alan Turing, because he was dead by then, but when I was doing some work at the University of Manchester and at Bletchley, I met some of the people who worked with Turing. So I'm an old fart, if you want to get down to it. Oh, here's another fun story. J. Presper Eckert I met as well, because I taught his grandson at the Air Force Academy. J. Presper Eckert. Wow. That dates things. Anyway. Yeah, I feel like, did you have any connection to Charles Babbage? I mean, you're really... That's very impressive. Oh, my God. Wow. Charles was long gone by then. The story with Backus and Fortran, the quick one is, remember that manual I mentioned? Yeah. I was on the board of trustees for the Computer History Museum for about a decade, and I did a number of oral histories for folks. One of those I did was with John Backus. So I went out to visit him. His wife had just died. Very sad story. He actually died a few months later, I think, of a broken heart. So I took that book to him and I explained the story. And I've got a signature from John as well. Anyway, let's go back in time. Similarly, you know, why do we have software engineering? And I mentioned Margaret. What was she trying to deal with? Engineering. I think we can legitimately call ourselves engineers because we're the ones who take various static and dynamic forces and try to build reasonably optimal solutions to attend to those. And when I say reasonably optimal, it deals with economic issues, legal issues, all these kinds of things.
So it's not just the technology. It's the social and economic context in which they are born. This is true now, and it was true from the very beginning days of software. Yeah, and I agree. This is an extremely important point, because, and this may seem obvious to those younger: there was, I feel, a protracted period of time where software was really having to fight for its own legitimacy, constantly hearing, you're not actually engineering. And it was always the civil engineers, you know, it's always the bridge building. And one of the things I learned about bridge building is that when you build a bridge the way we build software, where you build a bridge with a totally new design that has never been built before, which is a better analog for software, it has all the same problems of big software engineering. Witness, Adam, the self-anchored suspension bridge that we have here in the Bay Area, which is only the second of its type in the world; that looked a lot like a software project. So anyway, Grady, I just don't want people to take what you're saying for granted, because what you're saying is extremely important. You are a very important part of up-leveling us. There's a great xkcd cartoon where you see this big complex structure, and one tiny thing in there that it's all supported by, and that's what open source is. Somebody once said, and I'm going to get it backwards: if buildings were built the way software is built, the first woodpecker that came along would destroy civilization. We build things that are astonishingly, exquisitely complex, and they're also fragile. And this is, I think, what makes it so exciting to work in this discipline, because we work with the most fluid of materials, the most malleable, the most fungible, and that's exciting.
And as I always tell people, it is both a privilege and a responsibility to be a software engineer. It's a privilege because we're changing the world. It's a responsibility because we're changing the world. So if we think of ourselves as an engineering discipline, we weigh different forces, and that's a fundamental that applies to all of us. In the earliest days of software, when we were dealing with mainframe computers, the fundamental problem was we want to do calculations, mostly mathematical. The problem is, how do we get our machines to do what we're doing? This is the era of the Mark I and the ENIAC and the like. As we began to decouple the things that made the machines work from the hardware itself, now we're in the realm of Grace Hopper, the one who was pioneering the ideas of compilers and higher-order programming languages that led eventually to COBOL and Fortran and the like. Why did we do that? We needed a change in levels of abstraction, because working at the assembly language level or the machine language level simply was not very effective for us. We could think of these things, but the friction and the distance between the two, the cognitive distance, was very, very high. Thus, we were now squarely in the first golden age of software engineering. That first golden age is best characterized by algorithmic abstractions, meaning we're taking mostly mathematical things and getting our machines to do them. In the 70s and 80s, which is when I came onto the scene, I was very lucky, because I came at the right place at the right time. The world was changing because now, as I mentioned, back at Vandenberg, we were dealing with systems that were larger, real-time, distributed. All of a sudden, complexity was just exploding upon us. And the early abstractions were insufficient to do what we were doing.
There were glimmers of excitement, though, with languages such as Simula, with ideas such as abstract data types and Parnas's ideas of information hiding. The role I was given with Ada was: Grady, we built this language for the Department of Defense; go figure out how to best use it. And so I was actually commissioned to go off and figure out some software engineering techniques. I met with the first-generation folks: Larry Constantine, Ed Yourdon, Tom DeMarco, all those kinds of folks. And so I kind of learned from them, but I realized there was something different. There is a wonderful dialogue by Plato that's the one that really inspired me. This is going to get kind of geeky. In it, there is a discussion about how one views the world. Should you look at it through the things, or should you look at it through the movement? And the answer is both. But it occurred to me that's the way we should look at software: through the things themselves. And, of course, the Simula folks had done that. And thus, I was on this mission to say, let's try designing it through objects and classes, not just through algorithms. I was one of many, but I just happened to have a platform and a voice and was reasonably articulate. And then we built a business around this kind of thing. So that was the beginning of the second golden age of software engineering, because the fundamental problem was increasing complexity, increasing size of systems. We needed new abstractions to help us reduce our cognitive load and build these things. So I'll pause there for a moment, because I don't want to do all the talking. Any questions before I go on? No, no, that makes total sense. And I'm now very curious about the next turns of that, the development of some of those abstractions. Because some of the abstractions that we developed along the way became very important as demarcation boundaries.
Like SQL was a, I mean, Adam, you're a SQL lover. Is that fair to say? Do you feel like I'm being pejorative? I can understand why you'd say that. Yeah. There we go. Exactly. But I mean, that's an extremely important abstraction to develop, because it allowed us to take these two big concerns and actually separate them. Later we would go revisit that abstraction, but the presence of that abstraction was really important. The presence of the operating system as an abstraction, allowing us to have application software that was totally divorced from the hardware. And I think, Grady, one of the things I'd be curious about: I definitely came up when object orientation reached, I think, its zenith. Adam, you were a couple years behind me. Is that right? I think so, because they were rewriting the operating systems course at Brown to be object-oriented at the time. Object-oriented. Everything was going to be object-oriented. And it was one of these things where, with a successful abstraction, in software we seem to never be able to turn the abstraction dial exactly correctly. We always would be like, okay, if a little abstraction is great, we must use abstraction absolutely everywhere. That's right. If object orientation is great in moderation, then used in excess, it must be especially great. If Java is good in moderation, then the microprocessor should run Java. It's like, okay, no. And remember, the Intel iAPX 432 was basically putting Ada on a chip. We overshot things, especially with the use of inheritance. That was a mistake. But the idea of abstracting things where you had the data and the operations together as a cognitive unit, that was essential. But what happened? It disappeared, as it should, because the best things like that should move into the atmosphere.
This is around the time the UML came to be, because I recognized that textual languages were good, but we thought about things at a different level of abstraction. This is also around the time that I connected with Bjarne Stroustrup. Bjarne happened to show up in one of my lectures. He asked some really good questions. We talked afterwards, hit it off. And this was before the time he had released C++. He had just written a paper called C with Classes. I wrote my object-oriented design paper. We were actually published in the same journal at the same time, not knowing one another. So the two of us went off and did a lecture series around the United States and very much influenced each other's work, if you look at his first edition. So I kind of grew up with C++, and vice versa. But now I would observe that we are in the third golden age of software engineering. It's not because of things like Claude. I think we've been in it for a while. Why? Because the abstraction has shifted again. With the rise of distributed systems, with the rise of the Internet, and primarily with the rise of packages and platforms, all of a sudden we're not dealing with building systems out of levels of classes. We're dealing with levels of whole frameworks. Oh, I need to do some sort of messaging? Then I'll use this package. I need to do this kind of UI thing? I'll use this. I need to do visualization? I'll use D3. So all of a sudden the level of abstraction has moved up for us. And now, as software engineers, there are some people who build these pieces and some people who maintain them, but systems engineering is largely figuring out what those right pieces are and orchestrating them so they work well together. That's the third golden age in which we find ourselves. And I would observe that things like Claude are a symptom of that golden age. We've had a need to increase the velocity and reduce the friction, so they come in at the right time.
Not unlike what happened with the generation of CASE tools in the second golden age, not unlike the rise of compilers and higher-level languages in the first. Now, what does it mean? And I've often said this: I believe things like Claude and the like will change the nature of software engineering just as much as did the rise of compilers. It moves up the level of abstraction and actually makes it possible, and desirable, for us to build more software. And therefore the human is not disappearing. We've just moved up another ratchet of the level of abstraction. So this is an exciting time, quite frankly. Absolutely agree. Absolutely agree. And I love your analogy with the compiler. I also think it's great, and I think it's especially great for younger software engineers, Grady, to hear that over your career you had people approaching you being like, it feels like the domain is done. Like, the invention of the compiler means that I have nothing to do anymore. Oh yes. The invention of COBOL. COBOL was invented because now we don't need programmers; business people could write their own code. Well, how did that work out for you? And SQL too, right? I mean, the whole idea of SQL was that you're making this so much more approachable, which it was, of course. SQL made it way easier to query a database, which just allowed us to do a lot more databases. Yes, exactly. And so, you know, we are definitely seeing that, and I love that analogy. And, you know, Adam, did you catch the demos? I know you were out on Friday. Did you catch the demos? Yeah, I watched the recording today. Wasn't it wild? Very cool. So our colleague wrote, in a language that they don't know, Lua, against a subsystem that they'd never used or heard of before, the embedded Lua in ZFS, something for a mission-critical use. It was amazing.
Adam, I just love that right now there is someone doing their dishes. Not right now, actually in the future, because this is like right now. Right now, someone who's going to listen to this podcast as a recording is going to dry their hands, hit pause and rewind, and be like, did I just hear that there's a Lua interpreter in ZFS? Yes, dear listener, there is a Lua interpreter in ZFS that you might not have been aware of. But yeah, using Claude then to actually go write these channel programs. Claude didn't make it possible, but it made it so much faster. It was just not something we would have done otherwise; we would have waited, because this is basically working around the lack of a piece of punctuation you need in ZFS. A hundred percent. Especially, like, I mean, how else would you have the confidence to say, well, I've never used this language before, and I've never heard of this subsystem before, but I kind of have to get this done in the next two days or whatever? You just wouldn't even start. You would know that it was impractical. Yeah. And, you know, I think also, Grady, because you're a toolmaker like us. I mean, our tooling is very deep in our marrow. We, like you, are real toolmakers. And one of the things that we are discovering is that Claude is being used by software engineers to make the tooling that they kind of wish they had, on the spot. And that is a very important bank shot. And people are still, of course, using it in all the ways that it's being used elsewhere. But I think that for infrastructure software in particular, what we're finding is that we're able to use this stuff to really increase our level of rigor. Which is not a surprise if you view it in the context of what you're describing, viewing it like a compiler revolution. That's not at all a surprise.
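For anyone rewinding: ZFS channel programs are Lua scripts that the `zfs program` command runs atomically inside the kernel, and the trick described here was to have a program emit the Lua rather than write it by hand. A rough sketch of that shape, in Python rather than the colleague's Rust, with a made-up dataset name and snapshot prefix (`zfs.list.children` and `zfs.sync.snapshot` are channel-program primitives from the OpenZFS `zfs-program` manual; everything else here is invented for illustration):

```python
def channel_program(dataset: str, prefix: str) -> str:
    """Emit a Lua channel program that snapshots every child of `dataset`.

    The generated script uses the OpenZFS channel-program primitives
    zfs.list.children() and zfs.sync.snapshot(); you would run it with
    something like `zfs program <pool> snap.lua`.
    """
    return f'''\
for child in zfs.list.children("{dataset}") do
    zfs.sync.snapshot(child .. "@{prefix}")
end
'''


script = channel_program("tank/vm", "nightly")
print(script)
```

Generating the script from a host-side program keeps the in-kernel Lua trivially simple while the real logic (which datasets, which names) lives where it can be tested; that split is presumably why the Rust-generates-Lua approach worked so well.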
But I think there's a domain of software engineering that finds that surprising. I think they do find it surprising. Infrastructure software is perhaps some of the most fragile stuff I've ever seen. It runs on spit and chicken wire in multiple organizations. And so it's no great surprise that using things like Claude to help you refactor it, reorganize it, automate it makes a whole lot of sense. So, yeah, I don't have particularly complicated pipelines, so that's not where my primary use case is. I've been using it for some Swift, some Python, some PHP, some JavaScript, and it's been great for me. But the notion of using it for tooling to automate those things: every software developer builds their little nest around themselves of all their creature comforts, and when you move that to the organization at large, Claude and its ilk shine in those circumstances. And I just think, to your point about us writing more software: when people think, okay, more software, what does that mean exactly? What it means is that there is a lot of software that's not written because it can't economically pay for itself. It's the tooling that is like, God, I would love to have this tool, but it's kind of a one-off tool, or it might not be useful beyond my immediate use case or my immediate organization. And now all that stuff becomes possible. It's very important you mention the notion of economics, because I think economics has driven a lot of software engineering. If you look back at the first golden age, the cost of our machines was much higher than the cost of our programmers. And therefore everything in the processes, like waterfall, was in order to optimize the dominant cost, which was the machine. So if you had a high cost for machines, you did your work up front. By the way, the same phenomenon happened in Russia, of all things, after the Soviet Union collapsed.
We saw this flood of amazing engineers coming into the marketplace. What happened? Machines were very, very scarce in the Soviet Union. And so the developers there honed their skills to be able to do things on paper and in their heads before they even touched a machine. We don't do that anymore, because the economics have utterly reversed. The cost of preparing is actually so high that you just sit in front of a machine and do something. Now, this leads us to the distinction of disposable software and durable software. Disposable software means you've got systems in which the economic cost of replacing them approaches zero. Durable systems, though, are ones in which the risk is higher, in which the cost of change is higher, and therefore you've got to find where that balance lies for every organization. Yeah. And the software we're making, just to be clear, is really that durable. I mean, that's where I've spent my career. We're still using software that I wrote, we wrote, you know, now, Adam, Jesus, 20-plus years ago. And so, really, I think that not every revolution has affected every use of software equally. I came up when Java was everywhere and was going to be used for absolutely everything, and that was an overshoot; it wasn't actually meaningful. But I actually think that it's not an overshoot to say that this affects quite literally everybody at every layer of the stack. The ability to have LLM-assisted software engineering is going to be omnipresent. I don't feel like that's right up there with World War II being stressful, Adam. I feel it's stressful. That's right. One of Bryan's famous observations, yeah. But deep thoughts from Oxide and Friends.
But, I mean, Grady, to your point, similar to the compiler having an impact on every software engineer and all software being written: I think that feels obvious now, but maybe it felt scary at the time. Maybe it felt like it was threatening people's livelihoods and the amount of software that would be written, but it turned out that just much more software was written. Yeah, change is hard for everyone. Absolutely. Absolutely. This is still a time of incredible experimentation. Speaking of overreach, I believe people like Sam and Dario of Anthropic have vastly overreached. I've bashed Dario many times on Twitter; he's making the claim that software engineering is dead. I think he's demonstrably wrong, in every dimension of the word wrong. But remember, he's a person who is trying to sell a business and increase their stock price. So he is looking at the world in a very different way than I am. It's true. I also feel it's not very helpful to just generally be stoking fear about these things, because there's already ambient fear just from change. It's kind of too easy to stoke that fear, and it makes it really hard to be level-headed in light of that fear. So I actually do think that we need to be more level-headed on a lot of this stuff. What would be, so, your advice to, because I get this a lot. You must get it a lot too: young people are like, God, I'm in software engineering, but it feels like the glory days have kind of passed us. Which I disagree with. I agree with you. I think our best days are ahead; it's an exciting time. In terms, though, of, just as the compiler did not chase assembly language out of the undergraduate curriculum, we still learn assembly, and it's still important, even though no one is expected to write things in assembly. It feels to me like those fundamentals are as important as ever in an undergraduate computer science experience. What's your take on that? Absolutely.
By the way, being level-headed makes sense, but the problem is it won't get you on Fox News. That's the problem. So there are voices like mine who are saying, and maybe I'm just the old sage saying, take a deep breath. Life is good. Life is more wonderful than you realize. Don't panic, and carry your towel with you, if you remember that analogy. From The Hitchhiker's Guide to the Galaxy, right? Yeah, exactly. Right, right, right. I was listening to a podcast where you casually dropped in a reference to an SNL spoof ad from the 70s of... The dessert topping and floor wax. Bingo. And I was like, boy, I so admire the courage to drop that one in casually. And obviously, I'm with you. But literally, even the millennials don't know who Dan Aykroyd is, let alone the Zoomers. They've got no idea. Anyway, there are some eight billion people in the world. The vast majority of them have no idea who I am. There's some small percentage who wish I would be dead. So I'm happy with that. That's fine. I'm just doing what I enjoy. I love doing software and I love talking about it. I've forgotten your question. It was somewhere between dessert toppings and floor waxes. But what is the advice that you would have for young people in terms of the fundamental domain? I'll give you a story from another domain. Frank Gehry, brilliant architect, civil architect. He's passed away. He is one of the leading architects of this generation. And he's done things like the Disney Concert Hall, which was just an extraordinary soaring building. It speaks to you. It says something. The work he did was only made possible because of things like AutoCAD. Why did AutoCAD happen? We saw a revolution in the way people could design buildings, because now you could model them in three dimensions. You could actually test them in software against wind shear, earthquakes, and all this. So it's tools like that that unleashed creativity that simply was not possible before.
The same thing happened at the beginnings of the Industrial Revolution, where all of a sudden we had mass production of iron and the like, and now people could start building railroads and boilers and bridges. Lots of them failed, because we didn't understand the science behind it, but it enabled us to build more things eventually, once we got it. The same thing is happening in software right now. We have some tools that are allowing us to unleash our creativity and get our imagination closer to what we wish to build, executable artifacts, by reducing the cognitive distance between those two, by raising levels of abstraction. So my advice to folks is: look at Frank Gehry. The fundamentals still applied for him. You deal with forces; you deal with static and dynamic kinds of things. Those are the fundamentals. The fundamentals for software engineering are pretty simple. I'll give you some sound bites. All architecture is design, but not all design is architecture. Architecture represents the significant design decisions that shape the form and function of a system, where significance is measured by cost of change. To make those kinds of design decisions, the ones that are meaningful within this engineering context, you want to focus upon raising your level of abstraction. Build good abstractions; build abstractions that are crisp. This is coupling and cohesion. You want to build them as simple as possible. And that's kind of it. Those are the fundamentals with which we build our stuff. The language changes underneath it, but if you rely upon those things, that's the essence of engineering for me. And do you worry at all? I mean, I don't know that I do, but part of what drives us to those simple but elegant, powerful abstractions, right, and the ideal abstraction for us is one, I always felt that elegant is the highest praise for a software engineer.
It's something where you know that you've got an abstraction that is at once minimal but powerful and sufficient. And I don't know that I get nervous about this or not, but part of the reason that we're driven towards elegance is the limits of human cognition. And if you have something that can absorb more complexity, it may move you away from simple abstractions. Do you worry that LLM-authored abstractions may have unnecessary complexity? I mean, again, I'm not even sure that I have this concern, but I'd love to get your take on it. There's a lot to unpack in your question, because you're now attending to some interesting philosophical issues about the rise of artificial intelligence in general. Let's look back to other sciences, like physics or mathematics. Euler's identity: e to the i pi, plus one, equals zero. I look at that and I just shiver in ecstasy. My gosh, from Saturday Night Live to Euler's equation, we're all over the map here. Welcome back to Oxide and Friends. This is right on brand. That's right. So what does that mean? That equation represents the compression from which we can now generate other things. So in science, we look at the data and we try to compress it by building abstractions. Once we have those abstractions, we can move forward. The same thing is happening in software itself. We are trying to build abstractions, and it's difficult, because there is no perfect abstraction. It depends upon the context and the other forces you've got around you, but we eventually have within our quiver the kinds of abstractions we know work. And we know when we're off; this is the idea of code smells. It doesn't quite fit what my abstraction is. It just doesn't feel right. An experienced engineer will know those kinds of things. But here's the problem with large language models.
If you're not somebody who has experienced those, if you don't have that within your skills, then you don't know if it's going to bullshit you or not. As I've often said, large language models are, at best, unreliable narrators. At worst, they're bullshit generators at scale. I tweeted about this, but my experience with these LLMs has been that they're good if you guide them: view them as an intern who is very energetic, won't shut up, won't go to sleep. But my gosh, they need some guidance, because they will often make mistakes. They'll inject errors that they don't even know about, because they don't know my reality, and that's okay. So you've got to be very careful about using them in a guided way. And especially for very critical things, I always keep an air gap between my large language model and my production code. I will look at the LLM's output and review it before I let it become resident. However, your mileage may vary, because every organization has a different risk tolerance, and that's okay. So I'm not shunning people. But here's where you get back to the fundamental things. Large language models are architecturally incapable of abductive reasoning. Let me say that again: they're incapable of abductive reasoning. Abductive reasoning is the production of theories from data. They can do induction, they can do deduction, but they're architecturally incapable of doing the other, because there's no theory from which they can build. A good example of this: if I trained an LLM on all the scientific literature up to the mid-1800s, that LLM would never have discovered cells and viruses, because they were simply outside its training data. And any extrapolation, which is what LLMs can do, would be within an existing dimension. That's why we have to be careful with these things. And that's why I'm also not worried about them. I'm not worried about the rise of superintelligence. I am worried about the rise of billionaires who wish to use these things to increase their power.
That's what I worry about. I know. I tell people, unfortunately, the thing you need to fear is much more mundane. It's people with too much money who are racist and sexist. It's the same thing that we've been battling societally for eons. And I know it's fun to think about the robots rising up against us. And actually, for me personally, I would kind of love that, just because I would love to go to battle on behalf of all humanity against the robots and find all of their speculative execution vulnerabilities and so on. But it's a fantasy. So, Grady, this point about abductive reasoning is really, really good. I haven't heard it phrased exactly that way, but God, you're exactly right. That is the bit that is missing, that you are always going to rely on. And in part because, you know, our colleague who worked on this thing that Adam described, the Rust program that generates the Lua to be interpreted as a channel program in ZFS, one of the things that he commented is: you know what I love about these things? They're kind of down for whatever. There's no sense of, wait, why are we doing this? Or, wait, I think there's a better way to do this. It's: nope, sounds great, let's go. Let's go. I've already written two-thirds of it. Let's go. Which is what makes them powerful, but also what makes them limited. And as you say, anyway, I'm kind of reconstructing my blown brain on your point about abductive reasoning, because I think it's a very important point. Insofar as we surrender ourselves to these kinds of abstractions, that's where the danger lies.
We must not abandon our humanity in the midst of it. Which is why I'm going to put in a plug: this is why I've been trying to do this documentary. Think Carl Sagan's Cosmos, but about the intersection of computing and what it means to be human. So this is a great time to try to bring this into the public consciousness, because there's a lot of fear, a lot of fear-mongering and FOMO going on here. But, you know, stepping back from it, having seen a few things in my lifetime, I'm not concerned, except insofar as humanity has its own issues. I am actually excited by what the future offers us. Yeah, that is, I think, just very important and uplifting. I think I dropped in the right link; Computing: The Human Experience is the work that you've got. I think this is great, by the way. The idea of, I mean, the story of computing: we have created so much plenty, so much prosperity with computing. As I have told people, I read a terrific book on the history of the Bi-Rotor combine, called Dream Reaper, by Craig Canine, that was also a history of agricultural technology. And I think every technologist should learn the history of agricultural technology, so they can have an appreciation for why they're not in the fields today. Because without agricultural technology, you and I and everyone else except for the king are in the fields. Ultimately, technology has given. But I feel like with every one of these revolutions there is this kind of concomitant fear that comes with it. And sometimes that fear is very justified, as it certainly was with nuclear weapons or nuclear power. And maybe it's humanity's overshoot that justifies that fear. But how do we counteract that fear? Because I share your optimism, but I also see the fear. As you say, the fear, sorry, the absence of fear, the cool-headed, be-calm approach, as you said, won't get you onto Fox News. Yes. How do we counteract that fear?
Yeah, doomerism, especially in the AI space, gets you a lot of press. And that's why I go head to head with folks such as Ray Kurzweil and the like. Nice guy, I've met him; I think he's off base with regards to his predictions. That's fine. I think Elon as well. Well, I've got a lot of issues with Elon, but that's one of them in that regard. And Sam, and the list goes on. It's not like I hate a lot of people. I hate people who are self-serving narcissists, who are trying to push their agenda and not listening to the world. There are more than a few of those, unfortunately, at the moment. So, you know, how do I balance this kind of thing? I think the answer is: all of life and all of humanity is an extraordinary journey. And we happen to be present, you and I and everyone that's listening, at an exquisite time in human history, where we have the capability to turn our imagination into reality, as long as we live within the laws of physics. That is tremendous. We've never had in the history of humanity the kinds of materials to do what we want to do here. That is wonderful, but it's also frightening, because of the kinds of things we can produce. It's frankly not unlike the creation of nuclear weapons, because we know that it's going to change things, and it's primarily going to change the balance of power. That's what software does. So how do we counter it? The answer is, you know, keep up with the fundamentals. Go off and try to build good things that are useful. Apply your own ethics to this. Do you want to be somebody who works for DOGE and helps write software to pull information from the IRS, gained illegally, to try to find immigrants? Hey, if you want to do that, great, but you've got to apply your own ethics to it. So this is where the human factor comes into play. You have a choice as a software developer to do great things, or to do not-so-great things.
And you have a choice to contribute to the advancement of humanity in that regard. You are part of a wonderful place in time, and you have wonderful skills to do things no one could have done before your generation. I mean, that's uplifting for me. Yeah, totally. Absolutely uplifting. And I think that just reinforces the importance of those fundamentals. I also, so, you have also seen many economic waves. I mean, we've all seen them, but you especially, because there was a big software boom and bust in the 80s that definitely predated me, but you very much lived through that at Rational, right? And then obviously the dot-com boom and bust, the Great Recession. And so you've seen this economic dial, where people are coming into the domain not because they were trying to build their own computer in the 60s in Amarillo, but because they think it's, well, solely a path to prosperity. Yeah, yeah. I mean, I hate to say this, but I think it's better for the domain when the dial is turned off of frenzy. Is that right? Yeah. Oh, I agree with you. Absolutely. But hey, you know, all life is a frenzy. And so, you know, don't get caught up in the tidal wave of that maelstrom. That's a mixed metaphor. You know what I mean? Try to be calm in the middle of it, because you can't control those waters around you. But my goodness, you can build some great things out of them along the way. Hey, quick aside here. I just noticed the time. I do have a hard stop coming up really quickly. You and I and this gang, we could probably talk for a few more hours, but unfortunately I do have a bit of a hard stop. So could we maybe start winding it down? Let's wrap it up. Absolutely. So can I actually just ask: do you have any book recommendations? We've got a bunch of readers here.
What are some books along the way that really changed your thinking? In my library I have about 6,000 books. I'm an old-school guy; I prefer physical books. I'd recommend the following. For the systems engineers, go read Herbert Simon's The Sciences of the Artificial. Also read Systemantics by John Gall. Go reread, again, um, gosh, it just popped out of my head, I can see the cover... The Mythical Man-Month, of course. I was going to say The Mythical Man-Month. Yeah, yeah, yeah. Absolutely. Go read Refactoring. You know, those are the ones that come to mind. Those are great recommendations. Grady, thank you so much. Thank you for all that you have done and are doing for software engineering as a craft and a discipline. I'm not dead yet. That's right. Yeah, Monty Python. We've got to explain all these things to the youngsters here. But thank you so much. They can Google it. There you go. And thank you, too, for, I think, taking on some of these loudmouths online. I don't think they always realize who you are when you're speaking to them, and you're not the type to wave your credentials around. But on behalf of software engineers, thank you very much. Thank you for having me. This is the most fun I've had with my clothes on all day. All right. Thank you very much, Grady. Appreciate it. Bye, guys. All right, Adam. Well, there we go. I'm just relieved that he had his clothes on for this episode. Well, just for today, though. I mean, yesterday there was... So we'll take it. But thank you very much, everyone. We've got a couple episodes out there cooking that we're excited about. And I know we'll be kind of in and out these next couple weeks, but I think we've got some exciting episodes, Adam. Yeah. Our colleague Rai nailed a gnarly P4 compiler bug on Friday night.
And definitely my first thought was, thank God we're going to live. But my second thought, immediately following that, was: this is great, because we got an episode. Content. Oh, thank God for the content. We got some good content coming up. So exciting stuff coming up. Awesome. Thank you very much. Hey, I defy anyone to play this episode at greater than 1.0x, by the way. If you got through this episode at 1.5x or faster, let us know; we'll send you a t-shirt. All right. Thanks, everyone. Talk to you next time.