AI + a16z

Patrick Collison on Stripe’s Early Choices, Smalltalk, and What Comes After Coding

53 min
Mar 24, 2026
Summary

Patrick Collison, CEO of Stripe, discusses his early programming experiences with Smalltalk and Lisp, Stripe's foundational technology decisions that still impact the company 15 years later, and the launch of Stripe's V2 APIs. The conversation explores AI's impact on programming, productivity measurements, and Collison's work on foundational models for biology through ARC.

Insights
  • Early technology decisions have lasting consequences - Stripe's initial choices of Ruby and MongoDB still define the company 15 years later, demonstrating the importance of careful API and data model design
  • AI hasn't yet shown measurable productivity improvements in economic data, despite widespread adoption and optimistic predictions from industry leaders
  • The programming paradigm hasn't evolved significantly in 20 years, creating opportunities for new development environments that integrate runtime, debugging, and code editing
  • Complex diseases remain unsolved because humanity lacks the experimental technology to understand their combinatorial complexity, but new tools for reading, thinking, and writing at the cellular level may change this
  • API migrations at scale resemble instruction set migrations more than product launches, requiring years of careful planning for backward compatibility
Trends
  • Development environments moving beyond text editors toward integrated runtime debugging and profiling
  • Programming languages becoming less formal and more declarative with AI assistance
  • API design and data models having strategic business impact beyond technical considerations
  • Foundation models being applied to biological systems for drug discovery and disease research
  • Economic productivity gains from AI taking longer to materialize than expected
  • Direct manipulation interfaces potentially replacing traditional coding paradigms
  • AI-powered code refactoring and beautification becoming standard development tools
  • Instruction set-style migrations becoming necessary for mature software platforms
  • Biological programming becoming feasible through CRISPR, sequencing, and neural networks
  • Real-time code inspection and modification during execution gaining importance
Companies
Stripe
Collison's payments company, discussing its 15-year technology evolution and V2 API launch
Cursor
AI-powered code editor that Collison uses and discusses for future development paradigms
MongoDB
Database technology Stripe chose early on and still uses with custom reliability infrastructure
ARC
Biomedical research organization Collison co-founded to train foundation models for biology
OpenAI
Referenced for ChatGPT and language model development in AI programming discussion
Anthropic
Mentioned for co-founder Jack Clark's prediction of AI adding 0.5% annual GDP growth
Apple
iOS ecosystem cited as example of superior API design driving business success over Android
Google
Android platform mentioned as having inferior developer frameworks compared to iOS
People
Patrick Collison
Main guest discussing Stripe's technology decisions and his programming background
Michael Truell
Host interviewing Collison about programming, AI, and development environments
John Collison
Patrick's brother and Stripe co-founder who made early technology decisions together
Jack Clark
Cited for predicting AI will increase GDP growth by half a percent annually
Peter Norvig
Wrote 'Paradigms of AI Programming' book that influenced Collison's early AI work
Bret Victor
Creator of 'Inventing on Principle' talk and Dynamic Land, influencing development paradigms
Andrej Karpathy
Tentatively credited with a quote about AI's potential for code beautification and refactoring
Gerald Jay Sussman
Taught Collison's only CS class focused on creating modifiable code architectures
Quotes
"It's interesting to me that we haven't experimented in some sense that much with the paradigm of programming over the past 20 years."
Patrick Collison
"I think that's a case where the right API design, the right abstraction design ended up having just quite significant business ramifications."
Patrick Collison
"We humanity have never cured a complex disease."
Patrick Collison
"You now have the ability to, again, at the kind of level of the individual cell, to read, think and to write. And this starts to really feel like a new kind of Turing loop."
Patrick Collison
"Making them work alongside everything already built on the old ones is, as Collison put it, more like an instruction set migration than a product launch."
Narrator
Full Transcript
3 Speakers
Speaker A

It's interesting to me that we haven't experimented in some sense that much with the paradigm of programming over the past 20 years. You put those together, you now have the ability to, again, at the kind of level of the individual cell, to read, think and to write. And this starts to really feel like a new kind of Turing loop and to have its own sort of completeness. I think that's a case where the right API design, the right abstraction design ended up having just quite significant business ramifications. I think the basic idea of the IDE as a development environment, and not just a text editor, is really the right idea. And that's the thing I want to

0:00

Speaker B

see a return to. Patrick Collison wrote his first startup in Smalltalk. Its development environment let him fix errors mid-request, inspect stack frames, and resume execution. And he wanted that more than he wanted a mainstream language. He and his brother chose Ruby and MongoDB for Stripe instead. Those decisions still define the company, 15 years and 44 seconds of annual downtime later. Now Stripe is shipping V2 APIs, rewriting core abstractions first designed in 2010. It's taken years. Defining the new APIs is the easy part. Making them work alongside everything already built on the old ones is, as Collison put it, more like an instruction set migration than a product launch. This conversation, previously aired on Cursor's podcast, also gets into why AI hasn't moved productivity numbers, what today's dev environment could steal from Lisp machines, and Collison's work at ARC on foundational models for biology. Michael Truell, CEO of Cursor, sits down with Patrick Collison, CEO of Stripe.

0:39

Speaker C

Well, it's great to have you.

1:40

Speaker A

Thank you for being here. Thanks for having me.

1:41

Speaker C

Great to be here. Yes, I've heard that your first startup was written in Smalltalk. Please explain.

1:43

Speaker A

I don't know what there is to explain. It's the best programming language. Well, I'd worked on Lisp and Lisp dialects before that, and actually I worked on Lisp web frameworks. When we went to build our first startup, we first implemented it in Rails, and then I found that development process, compared to Lisp, kind of frustrating. We don't need to get into full details, but I thought that continuation-based web frameworks were really the right way to implement web applications, and there was no continuation-based framework in Ruby. Searching around, I found that a good one had just been written in Smalltalk, and so I decided to play with it a little bit. And then I found that Smalltalk is actually this extremely interesting development environment that had a lot of the aspects of Lisp that I'd really appreciated, like a fully interactive environment with a proper debugger, so that you can edit the code while in the middle of some web request or deep in some stack trace. You could, for example, encounter an error with some web request, edit the code to fix the error, and then resume higher up in the stack such that the entire web request would just complete. So rather than this annoying feedback loop of having to add some log statements, do a binary search, find the problem, and eventually deploy a fixed version, a process that could take an hour, you could just literally inspect the stack frame, see which variable has the wrong value, fix it, jump back up, hit proceed, and have the whole thing work. Anyway, the point is, in the hunt for this continuation-based web framework, I realized that Smalltalk in general is a much more powerful development environment compared to Ruby, and compared to basically every other mainstream programming language.
And so we decided to use it for the company, which in hindsight, I don't know if it was a terrible decision or not. The reason one would think it would be terrible is that it would be hard to hire people and hard to scale and whatever. It wasn't hard to hire people; or rather, nobody knew it, but it was easy to teach them.
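The workflow Collison describes, inspecting a failing stack frame, fixing the bad value, and resuming the whole request, has no direct equivalent outside Smalltalk. As a rough illustration only, here is a toy Python sketch of the resume-the-request idea; all names (`handle_request`, `serve`, the `fixups` list) are invented for this example:

```python
# Crude approximation of the Smalltalk workflow: instead of crashing and
# redeploying, catch the error, apply a fix to the live state, and retry
# the whole request. Real Smalltalk lets you edit code in the failing
# stack frame itself and hit "proceed".

def handle_request(data):
    # stand-in for logic deep inside a web request
    return 100 / data["divisor"]

def serve(data, fixups):
    while True:
        try:
            return handle_request(data)
        except Exception as exc:
            if not fixups:
                raise  # nothing left to try; fail like a normal server
            # "inspect the frame, fix the wrong value, hit proceed"
            fixups.pop(0)(data, exc)

# e.g. serve({"divisor": 0}, [lambda d, e: d.update(divisor=4)]) -> 25.0
```

The retry loop only mimics the ergonomics; the point of the Smalltalk version is that no restart of the request is needed at all.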

1:49

Speaker C

The company, did they know before they joined?

4:05

Speaker A

No, they learned really quickly. And smart people learn languages really quickly, so I don't think that's really a reason not to use a non-mainstream language. The company didn't work, I think, for unrelated reasons; I think the idea just wasn't that strong. But we also chose Ruby for Stripe, so I don't know, maybe the gains were not quite as large as I hoped.

4:07

Speaker C

And was your Smalltalk enthusiasm shared by the acquirers of the startup? What was the dynamic? Was there this blissfully ignorant management that foisted this Smalltalk code base on a bunch of unsuspecting developers who were then toiling over it? What was the dynamic between the programmers and management? What happened to that Smalltalk code base? Does it still live on somewhere?

4:25

Speaker A

I wish. And I'm 99% sure the answer is no. The company that acquired us was mainly a talent acquisition, so the code base itself was less relevant.

4:47

Speaker C

Okay. And it was immediately sort of just gone. Yeah, okay, gotcha. I've also heard that one of your earliest programming projects was an AI bot written in Lisp, something like a client for MSN.

5:00

Speaker A

I know where you found that, but that is true.

5:15

Speaker C

And I heard that you got kind of nerd-sniped by the idea of trying to get it to pass the Turing test. I'm curious, what did you miss? Why didn't you make ChatGPT? And, maybe a little more seriously, how did it work? What was the state of neural networks at the time, and did you consider using any antecedents to the technology we use today?

5:17

Speaker A

Yeah, so that was the project. It was a little critter that used MSN Messenger, which was all the rage at the time. I guess that's a specific kind of sedimentary layer in the chronology of different instant messaging solutions, and probably dates me quite precisely. And it was a really simple Bayesian next-word predictor; there was nothing really that sophisticated there. To the extent there was anything sophisticated, it was maybe that the training data was the conversations the bot itself had on MSN Messenger, rather than general text corpora. And it worked reasonably well, and better versions looked a couple of words ahead and what have you. It never really passed the Turing test where people have actual suspicion and are trying to exercise discernment, but it certainly passed some weaker versions of the Turing test where they were unsuspecting, and people ended up having quite lengthy conversations with it. That was part of how I discovered Lisp. I remember Paradigms of AI Programming by Peter Norvig being a really formative book, with all sorts of interesting approaches. It didn't have anything on neural networks, I'm almost sure. I mean, I'd read some Marvin Minsky stuff, Society of Mind or whatever, on neural nets, but I never really seriously looked at them. I actually experimented a lot with genetic algorithms; they were, I guess, more practical on your own computer, since it takes a lot of compute to train a neural net. And actually I use Dvorak as my keyboard layout because it's more comfortable to type on than QWERTY, as does John, my brother, so no one can ever use our computers. I wrote a genetic optimizer to figure out what the optimal keyboard layout was, and it turns out it is in fact basically Dvorak, using a genetic approach.
So I went deep down that rabbit hole, but I never really played with neural networks, and I guess that's why, along with probably 70 other reasons, I did not create ChatGPT.
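The "really simple Bayesian next-word predictor" described above can be sketched as a bigram frequency model: count which word follows which, then predict the most frequent successor. The chat lines here are invented stand-ins for the MSN Messenger logs:

```python
from collections import defaultdict, Counter

def train(corpus):
    """Count bigrams: for each word, how often each next word follows it."""
    model = defaultdict(Counter)
    for line in corpus:
        words = line.split()
        for a, b in zip(words, words[1:]):
            model[a][b] += 1
    return model

def predict(model, word):
    """Return the most likely next word, or None if the word is unseen."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

chat_logs = [
    "how are you",
    "how are things",
    "how are you doing",
]
model = train(chat_logs)
# predict(model, "are") -> "you", since "you" follows "are" twice, "things" once
```

The "better versions looked a couple of words ahead" remark would correspond to conditioning on two or three preceding words (trigrams and up) instead of one.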

5:36

Speaker C

There is an old video of you being interviewed, I think after selling Auctomatic, where you're asked about Smalltalk. That's where I found that weird fact. At the time people asked you why, and one of the things you said was that you liked some features of Smalltalk and Lisp-style languages, and you predicted, I think circa 2008 or so, that the mainline C-style programming languages would increasingly borrow ideas from these older languages. And that kind of has been the case in the JavaScript and Python ecosystems. Do you think there are any underrated ideas buried away in older, more esoteric programming languages that should be borrowed by the mainline?

7:38

Speaker A

Yeah, it's been interesting how a lot of the ideas have been borrowed by the JavaScript ecosystem, and in a strange way, through the Web Inspector, which is in some sense one of the richest runtimes that people have general exposure to. I don't think JavaScript has first-class stack frames; maybe there's some weird extension where you can get that, but ECMAScript doesn't have it, I'm pretty sure. First-class stack frames actually let you do a lot of other things, for kind of obvious reasons. So maybe that's too specific. Maybe this is what Cursor becomes, but I think the basic idea of the IDE as a development environment, and not just a text editor, is really the right idea. And that's the thing I want to see a return to. That's the thing that the Lisp machines and Genera had. That's the thing that, to some extent, Mathematica has. That's the thing that Smalltalk has. And I think it's just such a mistake that we have ended up with development environments where there is such a separation between the runtime, the text editing, and the environment where the code runs; the runtime and the place where the code runs can be the same or different, but those are three maybe slightly conceptually different things, and they can all exist in the same place. Still to this day, I use Mathematica a lot, not because I'm doing particularly arcane symbolic mathematics, but because it's just a more efficient development environment. That's maybe a bit less true now with LLMs, because Mathematica does not support Cursor-style prompted development, but that I think is the core idea that I wish others would borrow. VS Code has been a step slightly in that direction, but I think we could take it way further.
What I'd love to see, for example, is: when I hover over a line of code, I would like to see profiling information about the runtime characteristics of that code or that function. I would like to see logging and error information overlaid. When I hover over a variable, I would like to see the most common values that it takes on in production. These kinds of rich, deep integrations.
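One ingredient of the hover-for-runtime-data idea, per-line execution counts, can be approximated with Python's tracing hook. This is a toy sketch of where such overlay data could come from, not how a real editor or profiler would collect it:

```python
import sys
from collections import Counter

line_hits = Counter()  # (function name, line number) -> times executed

def tracer(frame, event, arg):
    # called for every 'line' event in traced frames
    if event == "line":
        line_hits[(frame.f_code.co_name, frame.f_lineno)] += 1
    return tracer

def work():
    total = 0
    for i in range(5):
        total += i
    return total

sys.settrace(tracer)
result = work()
sys.settrace(None)
# line_hits now holds exactly the kind of per-line data an editor
# could surface when you hover over a line: how hot is this line?
```

A production system would sample rather than trace every line, and would aggregate across machines, but the overlay data model is the same mapping from code location to observed behavior.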

8:15

Speaker C

Are you a fan of 'Inventing on Principle' and those talks?

10:37

Speaker A

Yes, yes, yes. I think Bret leans too much. I mean, a huge fan of Bret. He's such an incredible.

10:39

Speaker C

Have you been to Dynamic Land?

10:46

Speaker A

Yes.

10:48

Speaker C

Okay.

10:48

Speaker A

Yep. And have supported it. So, huge fan of Bret. The place where I've maybe differed, or at least that resonates with me somewhat less, is that Bret is really into this idea of graphical and visual representations for phenomena. I think that works very well in certain domains, like the kinds of dynamical systems he has demonstrated some of the ideas with. I think it's often very hard to find such useful spatial, continuous representations for arbitrary systems; for various parts of Stripe, I'm not quite sure what that would be, and I'm not sure, even if we could find one, exactly how useful it would be. Maybe it's just me; I reason much more symbolically and lexically than I do visually and graphically. It might just be personal preference. But the kind of paradigm breaking that he's been engaged in, I think, is hugely admirable. Are you going to make a truly integrated development environment?

10:48

Speaker C

So we are playing with ideas around letting the AI increasingly take time in the background to run the code and react to the output. And we think this should all work well together. We've focused a ton on in-flow speed and control, and we think that's really, really important for AI: to give programmers control over everything, have them understand everything the AI is producing, and also give them really, really fast iteration loops. Programmers hate waiting for things. But in some cases we think it's now becoming possible to tell the AI to think for a bit and then come back to you, and have the interaction be a little bit more like the interaction with another human being. And we think you want all of that to work well together, so the AI can come back to you with 70% of something, and then you can bring it into the foreground really quickly, work with it, and then spin it back off to the background. And as part of having the AI spend a bunch of time thinking in the background, to make that thinking useful, you kind of need it to run the code and then react to it; or else it's just staring at the thing it wrote and thinking more.

11:53

Speaker A

Maybe I'm supposed to be the one answering the questions rather than asking them, but do you think in five years the main thing that I'm looking at in cursor will be code or something else?

12:51

Speaker C

I think it might be something else. This is a big, big, big simplification, but when you're defining what a piece of software is, there's the logic component, which is what engineers spend a lot of time on: designing exactly how the software works. And for end-user applications and things that have GUIs, there's also this visual component. And there's a future version of the world, maybe it's us, maybe it's someone else, where the way you interact with AI is a little bit less like a human helper that you're delegating work to or that's looking over your shoulder predicting the next set of things you're going to do, and a little bit more like an advance in compiler or interpreter technology. And it could lead you to a world where programming languages actually change: they can start to get a little bit less formal, a little bit higher level, a little bit more about what you want and a little bit less about how you do it. I don't think it will look like a Google Doc necessarily. There are things you want to keep around from programming, like naming logic somewhere and then using it in a bunch of other places. And there's also this other element of the visuals of what a piece of software looks like. Maybe us, maybe some other tool, but I think there's a world where direct manipulation of the UI starts to play a little bit more into it. But these are far-flung experimental ideas.

13:06

Speaker A

In general, I will say, and it's not terrible, but it's interesting to me that we haven't experimented in some sense that much with the paradigm of programming over the past 20 years. Many of the things we're discussing here are from the 80s or the 70s, and there are way more developers now than there ever have been in the past. But in some sense the aperture of experimentation feels like it's really not that wide. Again, the JavaScript ecosystem and a couple of others have done some cool things, and there's been a lot of experimentation at the language level with Rust and Go and everything else, but at the development environment level, I don't know why, maybe it's just too hard and complicated now, but there's been less than I would have expected.

14:24

Speaker C

Yeah, I agree. And I think maybe this helps something we're working on.

15:09

Speaker A

Maybe this explains Cursor's success to some extent, where you guys are the first people to really take it seriously in quite a while.

15:16

Speaker C

Well, I mean, yeah, I think we also benefit a lot from the "why now": there's now this great new color, or set of colors, to paint with. I think also there's just a ton of lock-in with programming languages, both in the neurons in your head, since programming languages are a kind of complex UI for programmers to define exactly how the computer should function, so people learn languages, and people don't like to learn that many things, and also the lock-in of having a lot of logic sitting around in one language that you need to maintain. And actually, one of our hopes is that as AI programming gets better and better, it addresses one of the downsides of working on professional applications with hundreds of people dealing with many millions of lines of logic: the weight of the code base really starts to weigh on you. The feeling of being in a net-new code base, where everything feels effortless, goes away; everything's a chore. You change one thing here, break something else there, and it becomes this big ball of mud. Making that effortless, reducing the weight of an existing set of logic, I think is one of the areas in which AI can make programming better.

15:23

Speaker A

Someone said on Twitter today, maybe it was Andrej Karpathy, but maybe I'm misattributing it; too many things about vibe coding get attributed to Andrej, like to Churchill or Einstein or something. But this person, whoever it was, was making the observation that it's one thing to be prompting the creation of code, but another place where AI could conceivably do a lot to help is in the beautification and refactoring of code bases. You can imagine you're producing all this slightly ungainly, not quite correctly factored detritus at the front, and then nocturnally this thing comes up behind you and makes it all beautifully factored. The only CS class I ever took was this class from Gerry Sussman, focused on what he called large-scale symbolic systems. But really what he was trying to focus on was the idea of creating code bases and environments and abstractions that were easy to modify. There were no assignments in the class where you'd write something from scratch; every assignment was about modifying an existing system and thinking about how you could design things in such a way that those modifications, and they might be quite deep modifications, become straightforward. I think that's a lovely idea. Obviously in practice it's often very difficult to do, given all the exigencies and pressures of the things you want to ship today and next week and so forth. But often when writing this stuff you realize, well, I really should be doing it the beautiful way, but I'm not. Maybe we could have an AI coming up behind us too.

16:28

Speaker C

Yes, yes, maybe soon. One thing that happens is a lot of people come to development because they care about building things; they want to make things happen on the computer screen, and that leads them to coding. Then a big group of developers eventually realize the software they want to create is too big, that they can't write all of the code themselves, and they have to go to other humans to help them write it. Maybe they become an engineering manager or a director, or maybe they start a company, and then most of the work becomes not typing code but coordinating amongst people. Do you think there are any ideas from programming that are helpful for that act of "programming" an organization, getting a group of people to build software together?

18:04

Speaker A

Interesting. I think taking APIs and data models really seriously. If I were to do everything at Stripe again, there's a million small things you would do differently, and even some big things, but the thing that we could maybe foreseeably and beneficially have done differently would be to have spent even more time than we did on APIs and data models. Part of the reason is, I guess, the Conway's law effect of how both of those things end up shaping the organization; if you don't deeply internalize that, then maybe you have less control over the organizational dynamics than you might otherwise like to have. But I think it ends up shaping more than that: the weak version of Conway's law is that it shapes your organization; I think the strong version is that it substantially shapes your strategy and your business outcomes. This isn't exactly a version of that, but I often reflect on how the iOS software ecosystem, for a very long time and plausibly still today, was so much more vibrant and vital and successful than the Android app ecosystem. There are a lot of things that are different across those two ecosystems, and there are now way more Android devices in use, I believe, than iOS devices. But I think much of the fact that app developers tended to prefer building on iOS and releasing first on iOS, with the iOS version maybe being better than the Android version, is because the frameworks and abstractions for iOS were just originally better than the Android ones. I think that's a case where the right API design, the right abstraction design, ended up having quite significant business ramifications. And there's kind of a sense that maybe it's not worth dwelling on these things because everything in technology changes so rapidly, and whatever assumptions you make will be obsolete in two years or something.
I think in practice that's not true, and that the right API design, the right abstractions, and the right data models can really endure. In the first versions of iOS, many of the classes one used were prefixed with NS, NS of course standing for NeXTSTEP. So that's a case where the API design survived for two decades or more. And in the case of Stripe, Stripe is now 15 years old, and there were lots of things we designed 15 years ago that are still in use today, which is kind of good and bad: they endured, but we are also still living with their faults. Anyway, that's the first thing that comes to mind.

18:46

Speaker C

In fact, on that final note, I was talking with an engineering leader at a preeminent, successful Silicon Valley private company, and they were talking about how their code base is largely in Scala. They said they like to think of the beginnings of the startup as this big bang moment, where these tired, overworked, maybe over-caffeinated founding team members are willy-nilly making initial technical decisions that then dictate the lives of hundreds of professional engineers in the future. That Scala choice was one of them, and they sort of live with the faults of that now. So what were the consequential, good or bad, initial conditions of the Stripe big bang that you guys still live with right now?

21:44

Speaker A

I mean, the first thing I'd say is that the metaphor sounds true to me. Maybe there's a little bit of survivorship bias, where the actual statement is that the early decisions we made that we never changed are decisions we lived with; there's a kind of tautology there. And there are certainly design decisions we made pretty early on that are not true today; early versions of the Stripe dashboard were built extraordinarily differently to the dashboard today. But the converse is also true. Initially we decided to use MongoDB at Stripe, and we decided to use Ruby at Stripe, and those are still quite foundational technologies. We had to build a lot of infrastructure to make MongoDB as fault tolerant and as distributed and as durable and as reliable as we needed it to be, and as it now is. Stripe's critical API availability last year was 99.99986%, which is 44 seconds of unavailability across the whole year. Others don't publish statistics that are as granular, but we believe that is the best in the industry. So everything that our storage team, and many other teams, built ended up really working. But that was a quite important initial decision, and Ruby similarly. Companies sometimes change languages along the way, but I feel like the initial language chosen tends to have...
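The availability figure quoted above checks out arithmetically: 99.99986% uptime over a calendar year corresponds to roughly 44 seconds of downtime.

```python
# Sanity-check the quoted availability number:
# downtime = (1 - availability) * seconds in a year
seconds_per_year = 365 * 24 * 3600      # 31,536,000
downtime = (1 - 0.9999986) * seconds_per_year
# downtime is about 44 seconds
```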

22:31

Speaker C

There were debates in Stripe about this. Actually, one of our co-founders interned at Stripe, not early on in Stripe's history, but early on in kind of our collective personal history, and he remembers there being documents upon documents about a potential Java migration.

24:16

Speaker A

Yeah, so that partly happened, in that we have rewritten a bunch of key services in Java, some services for which throughput in particular is really important. If you torture Ruby enough, and maybe rewrite parts of some hot paths in C or something, you can get it to be pretty fast, but you're often fighting against the allocator, and various parts, even just Ruby strings, are not that efficient. So we've rewritten certain services in Java, and now we use both.

24:34

Speaker C

Did you consider anything other than Mongo? And why did you pick Mongo early on? What was the RFC process, RFP process, decision-making process for that?

25:14

Speaker A

It was just me and John. We were sitting on the couch, like: should we use Mongo? Yeah, fine.

25:26

Speaker C

Did they get through to you with a blog, or was it just the reputation of Mongo at the time and open source communities, or something else?

25:30

Speaker A

So I wrote a data store for our prior company, an object-based data store, and I didn't really like SQL. I thought there was too much of a translational mismatch between the domain of the application and what SQL natively makes expressible. With SQL, you have to collapse down into a relatively restricted set of primitive forms, whereas in your application you might have a concept of, let's say in the case of Stripe, Money, that doesn't exactly comport with how the particular SQL database you're using happens to represent money, or whatever the case might be. I just had this principled objection to SQL; I'm not endorsing this or saying it was good. As this interview shows, I suppose I had all sorts of strange notions about technology. With Stripe, we wanted to be more mainstream and a little bit less heterodox in our technology choices than our prior company. So instead of using Smalltalk, okay, we weren't going to go to Java, but we went to Ruby, which at least on a relative basis seemed mainstream. And similarly, rather than write our own object database, we went relatively more mainstream and used Mongo, which still gave a lot of flexibility by virtue of being a kind of object data store. So that was fine. Everything I've said might disqualify me from ever making technology choices for another company.
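The "translational mismatch" Collison describes can be made concrete: a domain object like Money survives as-is in a document store, but must be flattened into primitive columns and reassembled for SQL. All names here (`Money`, `to_document`, `to_row`) are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Money:
    amount: int      # minor units, e.g. cents
    currency: str

# Document-store style: the object's shape survives directly
def to_document(charge_id, price):
    return {"_id": charge_id,
            "price": {"amount": price.amount, "currency": price.currency}}

# SQL style: collapse to flat primitive columns on write...
def to_row(charge_id, price):
    return (charge_id, price.amount, price.currency)

# ...and rebuild the domain object on read
def from_row(row):
    return Money(amount=row[1], currency=row[2])

p = Money(amount=1999, currency="usd")
# the round trip through flat columns recovers the object,
# but every new domain type needs its own mapping code
```

Modern SQL databases narrow this gap (JSON columns, ORMs), but the mapping layer is exactly the friction being described.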

25:40

Speaker C

But would you do anything differently about Stripe V2?

27:15

Speaker A

We haven't talked that much about it publicly yet, and the answer might be a bit — there's the Zhou Enlai quote (sometimes attributed to Deng Xiaoping) about the French Revolution: it's too soon to judge. So back in 2022 — relatedly, to this discussion about data models and abstractions — we realized that a couple of the core abstractions in Stripe were just not the right long-term abstractions, and we had to fix that. And so we designed a bunch of V2 APIs. Fortunately, we had contemplated this possibility early at Stripe, so most of the REST URIs that people are familiar with in Stripe are prefixed with V1 — they've been prefixed with V1 since 2010. And so in 2022 we decided, okay, we might increment the namespace. So we designed those new APIs, and they've started to ship this year.
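The namespace-prefix scheme is simple to sketch. A hypothetical illustration (not Stripe's implementation; handler names invented): keeping the version as the first path segment lets V1 and V2 handlers coexist behind one dispatcher.

```python
# Hypothetical namespace-versioned routing, in the spirit of
# Stripe's /v1/... and /v2/... URI prefixes. Handler names invented.
ROUTES = {
    ("v1", "customers"): "handle_v1_customers",
    ("v2", "customers"): "handle_v2_customers",
}

def dispatch(path: str) -> str:
    # "/v1/customers" -> ("v1", "customers"): the version is just
    # the first path segment, so both namespaces coexist.
    version, resource = path.strip("/").split("/", 1)
    try:
        return ROUTES[(version, resource)]
    except KeyError:
        raise KeyError(f"no handler for {path}") from None
```

Incrementing the namespace then means adding `("v2", …)` entries while every existing `("v1", …)` route keeps working — which is why the 2010 decision to put the version in the URI still pays off.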

27:20

Speaker C

Congratulations.

28:28

Speaker A

Thank you. And we're extremely excited about the functionality it's going to enable. Without getting into the arcana of it: historically we have drawn distinctions between, and represented separately, things like end customers, sub-accounts, and recipients of different kinds of payments. We're unifying all of those into the same kind of entity representation, which is on some level clearly the right answer, makes a lot of sense, and is already changing the businesses of some of our customers — because they can enable their users to do various things without having to re-enter details, or maybe bring the same account across different countries, or whatever the case might be. Anyway, it's been a long journey, and the reason it was a long journey is, I guess, that it's not that useful to just define these APIs in isolation. If we just wanted to define them in isolation, that's a pretty easy thing to do. The thing that's difficult is to make them interoperable with all the existing things at Stripe, to build translation layers and so forth, and then to figure out with our customers what a sensible upgrade path might look like — because we control our code base, but we don't control theirs. And so, I don't want to exaggerate it, but in certain respects it feels a bit more like an instruction set migration for a chip architecture or something, where the instruction set by itself is easy but all the coexistence questions become hard. It started to ship this year and we're excited about it. I mean, I guess your question was maybe what lessons we've learned from it.
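The unification he describes can be sketched abstractly. A hypothetical translation layer — all object and field names are invented for illustration, not Stripe's API — that maps legacy, separately represented records into one entity shape carrying roles:

```python
# Hypothetical translation layer: legacy records that represented
# customers and recipients separately are mapped into one unified
# "entity" shape. All names and fields are invented for illustration.
def from_legacy(record: dict) -> dict:
    kind = record["object"]  # e.g. "customer" or "recipient"
    if kind not in ("customer", "recipient"):
        raise ValueError(f"unknown legacy object: {kind}")
    return {"object": "entity",
            "roles": [kind],
            "name": record["name"],
            "legacy_ids": [record["id"]]}

def merge(a: dict, b: dict) -> dict:
    # Once both live in the same representation, one underlying
    # party can carry several roles at once -- which is the point
    # of the unification. (How entities are matched is hand-waved.)
    return {"object": "entity",
            "roles": sorted(set(a["roles"]) | set(b["roles"])),
            "name": a["name"],
            "legacy_ids": a["legacy_ids"] + b["legacy_ids"]}

cust = from_legacy({"object": "customer", "id": "cus_1", "name": "Ada"})
recp = from_legacy({"object": "recipient", "id": "rp_1", "name": "Ada"})
unified = merge(cust, recp)
```

The translation layer, not the new schema, is where the "instruction set migration" difficulty lives: every legacy shape has to round-trip through it while both generations of API serve traffic.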

28:29

Speaker C

And maybe do you think there's anything bigger to draw out of that on either projects that are rewrites or thinking about these kind of decades long abstractions and how to do that?

30:23

Speaker A

Well, my trite answer to that is to unify everything you can plausibly unify.

30:33

Speaker C

How did you test design ideas for V2?

30:43

Speaker A

Well, the people designing it — actually, I'll give you one other lesson first, and then I'll answer that question.

30:46

Speaker C

And also, is there some chief API designer who's the mastermind? Is it one person, or some sort of working group?

30:52

Speaker A

There is a working group — there are working groups. But there is also a singular person who understands, and is more than anyone else responsible for, the whole. And I think that's necessary. My other kind of trite exhortation would be: anything that plausibly could be an N-by-M relationship — support that. Because if you only support 1-to-N or N-to-1 or whatever, even if it's non-obvious how it could possibly be N-to-M, you'll inevitably end up needing it. You'll think, well, you could never have a company that's owned by two different companies or something — but it turns out that every permutation in the space is in fact eventually explored. As to how to do that well — I really feel this about these new APIs. You asked the question: how do we know they're the right APIs? Partly from showing early versions of them to customers; partly because the people who designed them had spent many, many years witnessing and living with the shortcomings of the prior versions. So we were coming in with strong opinions. But even with strong opinions, one can sometimes predict wrongly, or extrapolate wrongly, or over-engineer something. So I think the cycles of customer validation and customer feedback are extremely important. I think it's also very important — and we did a lot of this — to literally write the integrations that would exist in the new world. Java is maybe an example: yes, it fixes a bunch of the problems with memory management that existed with C and its antecedents, but at the cost of a lot of prolixity and overhead. In order to safeguard ourselves against inadvertently over-engineering things, we forced ourselves to write a lot of API code, specifically describing how we would implement various business models and flows and so forth, just to make sure that when you look at it, it feels right. But I don't want to endorse our approaches too strongly just yet.
I mean, I'm feeling very optimistic, but we're — I don't know what fraction — 60 or 70% done or something, not 100%. So I don't want to prematurely declare any victory.
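The N-by-M exhortation has a direct data-model reading. A toy sketch (hypothetical names): storing ownership as a link relation rather than a single parent reference means joint ownership needs no schema migration later.

```python
# "Make anything that could plausibly be N-by-M support that."
# Toy sketch: instead of storing a single parent_id on a company
# (which hard-codes N-to-1 ownership), keep a link relation so a
# company can later be owned by two parents without a migration.
owners: set[tuple[str, str]] = set()   # (parent_id, child_id) pairs

def add_owner(parent: str, child: str) -> None:
    owners.add((parent, child))

def owners_of(child: str) -> set[str]:
    return {p for (p, c) in owners if c == child}

add_owner("acme_holdings", "acme_gmbh")
add_owner("acme_partners", "acme_gmbh")   # joint ownership: fine
```

In a relational schema the same move is a join table instead of a foreign-key column; the point is that the "impossible" N-to-M case costs almost nothing to allow up front and a painful migration to retrofit.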

30:59

Speaker C

How do you, Patrick Collison, use AI?

33:30

Speaker A

Well, the main ways are the predictable ones where I use LLM chat tools a lot.

33:33

Speaker C

What do you use them for?

33:45

Speaker A

Mainly for answering factual or empirical questions that I'm curious about — deep research-style questions, though I don't always use deep research. Now that the LLMs are getting better at tool use and at navigating the web themselves, you don't need deep research as much. But yes, for answering empirical or factual questions. I wish they were useful for writing, but I usually end up dissatisfied with the writing they produce, so I don't really use them very much for that — even for editing or grading my own writing.

33:47

Speaker C

I mean, have you seen any improvements in the writing as the models have progressed? I agree — it's surprisingly generic.

34:24

Speaker A

Yes.

34:30

Speaker C

And I'm trying to prompt it to not be generic — inserting names of people — and it just doesn't work. And so I have been disappointed at the times when I've given it a chance.

34:31

Speaker A

People tell me that the base models are better at this, and that it's the sort of normification of RLHF that puts it in some kind of attractor basin. Yeah, I have not succeeded in using them effectively there. People say that Claude is better, and o3 is better than earlier OpenAI models, and on a relative basis that might be true. I don't want to sound self-laudatory here or suggest that I'm some particularly talented writer — I don't think I am. It's just that my personal style differs from the personal style, so to speak, of the models, and in some self-centered way, when I write, I want to use my personal style. So I use them for the factual stuff a lot, and I find them terrific for that. Even when I'm reading a book I'll sometimes — I've recently been using Grok's voice mode, and I'll just passively ask questions while I'm reading; Grok is listening in the background, and the answers are very helpful. And then I obviously use LLMs for writing code, typically mediated through Cursor.

34:40

Speaker C

So we are interviewing you, Patrick Collison, as — if you had to pick the archetype of a software industrialist, I feel like you would be straight out of central casting, for a number of reasons. One is that you are running a large, successful software company. Two is that you started as a programmer and then moved to running the company. And three is that the company also builds things for developers. So it's the intersection of many circles in the Venn diagram, and it's helpful to hear about your experiences with Stripe. But we are also interviewing Patrick Collison the moonlighting economist and student of the world. So: is progress studies doomed now that AI is here? Is there any need for it?

35:55

Speaker A

Well, I was going to say, I think the need for progress studies has increased — though I don't mean to suggest that proper-noun Progress Studies specifically sees increased need. But I think the kinds of questions that progress studies tries to answer are now more pressing and urgent, because the degrees of freedom are increasing. There's some Panglossian view that AI will just magically solve all the problems — and predictions of the future are hard — but one, I don't think that's true, and two, inasmuch as we have evidence to date, I don't think that's been the track record. So how we use these things, what kinds of decisions we make, what considerations and margins of human welfare we seek to further — all those judgments are going to really matter. Maybe a critique you could have leveled at progress studies, or progress studies-style thinking, five years ago is: these are all nice questions, but the world is on a kind of foreordained escalator path to some teleological outcome. I don't think the world feels that way today — or certainly it feels much less that way today than it did.

36:47

Speaker C

And so because of global affairs or something else?

38:04

Speaker A

No — I mean, maybe somewhat global affairs, but a trifecta: first, global affairs writ large. Second, I think aspirations and ideals are being contested more actively, and there's an ambiguity these days in the US as to what the left and the right even stand for — we currently have one party endorsing tariffs and another party opposing them, but with the valence kind of flipped from what one might have expected historically. And then third, obviously, technology — first and foremost AI, but also, in our industry, stablecoins, and the rise of China as the preeminent manufacturing power in many technologies of the future, like drones and robots and batteries and solar, et cetera. So in many different ways, I feel like the future is — Peter Schwartz has this concept of the Schwartz window: the window of contemplatable futures some number of years hence. That Schwartz window as of, say, 2005, as we contemplated the world of 2015, was fairly narrow — and correctly so; the world of 2015 did in fact unfold largely the way we would have expected in 2005. Today, in 2025, the window for 2035 feels extremely broad. So yes, I think the progress studies-style questions are more pressing.

38:07

Speaker C

So you were on the record saying that people should focus more on the question of why we don't see improvements in productivity numbers as information technology increases — and also as more people have started working on science and technology and more money has gone into it. What do the numbers look like now? Do we see AI in the numbers?

39:54

Speaker A

There was a new paper published on this very recently — the past couple of days — that I've not had a chance to read; I just queued it up today. So at this moment I've only read the abstract. Its claim is that one does not in fact observe productivity improvements stemming from use of language models. Now, I certainly can't—

40:15

Speaker C

Do you know what they're looking at?

40:40

Speaker A

They appear to be undertaking some kind of natural experiment at the individual level, based on intensity of LLM usage. But I certainly cannot endorse their methodological rigor — upon understanding it better, I might be really impressed and find it very credible, or horrified; I don't know. That was just the finding I happened to stumble upon today. Look, overall GDP growth in the US over the last two years has been somewhat better than we expected. Obviously, we're speaking right now at a kind of volatile time. We certainly don't see any evidence for exponential takeoff. And inasmuch as we thought the encouraging GDP figures we have seen in the US over the last two years were attributable to some of these new technologies, you would also expect to see them in other countries, because these technologies are a quasi-public good — anybody can use these LLMs. GDP growth outside of the US has not been that encouraging. We're not living in some massively accelerated period of economic growth for the world writ large. So obviously it's early days, but I think we're seeing that the diffusion of these technologies through the economy really takes time and involves substantial complexity. And maybe just one last point on that: I believe Jack Clark — one of the co-founders of Anthropic — said this in an interview with Tyler Cowen. Anthropic has always taken the concept of AGI, and even ASI, extremely seriously; Dario speaks about this publicly, he's written about it, et cetera. And Jack Clark said that he expects AI to increase GDP growth by half a percent a year. I interpret Jack as really an optimist, and half a point a year is in fact a lot of incremental GDP when compounded — so I'm not saying that's small — but I think it's interesting that that was his figure.
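The compounding point checks out arithmetically:

```python
# Back-of-the-envelope: what an extra 0.5 percentage points of
# annual GDP growth compounds to over 30 years.
years = 30
factor = 1.005 ** years          # growth factor from the increment alone
extra_pct = round((factor - 1) * 100, 1)
print(extra_pct)                 # the economy ends up roughly 16% larger
                                 # than it otherwise would have been
```

So "half a point a year" means an economy about a sixth larger after a generation — large in absolute terms, but far from the discontinuous takeoff some forecasts imply.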

40:41

Speaker C

Yes. Do you think that, with the form factor AI is taking in the economy right now, if we just stretch the line forward, we're going to need new measures of economic productivity? So assume real productivity goes up, assume AI keeps getting better and gets deployed in the ways you would expect. Do you think we'll need new measures, or should it show up in the numbers?

42:45

Speaker A

No, no, I don't think so. Not that GDP is perfect — I think GDP can be improved — but in any world where what we generally think of as the economy is massively enhanced, it'll show up in GDP, I believe.

43:06

Speaker C

When will we be able to program human biology?

43:25

Speaker A

I'm very excited about this. Arc, the biomedical research organization I was involved in founding, is working on training foundation models for biology using DNA and things like that — we're working on a virtual cell. A thing I didn't appreciate until really spending more time in biology is that we — humanity — have never cured a complex disease. One ontology or schema of diseases would be: you have infectious diseases — the flu, the cold, COVID, tuberculosis, various diseases with high mortality rates. Then you have monogenic diseases, where a single genetic mutation is responsible for the disease, like Huntington's. And then you have complex diseases — the residual that's left after we've cured most of the problematic infectious diseases, at least in the Western world: most cardiovascular disease, most cancers, most autoimmune disease, most neurodegenerative disease, et cetera. For certain of these conditions we have treatments that help, like statins with cardiovascular disease. But for none of them can we really say that we've cured it — that we understand the causal pathways in meaningful detail and can vaccinate against it or something. And our hypothesis — it could be wrong — is that this is in part because we haven't had experimental and, maybe epistemic is too grandiose a word, but epistemic technology that's up to the task. The pleiotropy of the genes — all the different parts of the body, the systems, and the mechanisms inside the cell that they affect — creates so much combinatoric complexity, and the environment is such a vast and difficult-to-quantify thing, that it's really hard to understand, for any of these conditions, the etiology and the dynamics and so forth.
Okay — then over the last ten-ish years (a bit longer, but a lot of the development has happened in the last ten), we've gotten three new classes of technology in biology. For reading, we've gotten much better sequencing technology — single-cell sequencing, single-cell sequencing of RNA, and those kinds of improvements. At the "think" level, we've gotten neural networks and deep learning and transformers and everything there — they've existed for a long time, but we've gotten the recent improvements in them, and the transformer in particular. And then on the "write" side, we've seen huge improvements in functional genomics and CRISPR and bridge editing — a technology that came out of Arc — the ability to make very specific, directed perturbations in cells. If you put those together, you now have the ability, at the level of the individual cell, to read, to think, and to write. And this starts to feel like a new kind of Turing loop, with its own sort of completeness. We'll see how much this can do against the complex diseases, and whether this systematic approach is up to the task of shedding new light on their dynamics. But we are hopeful and excited.
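The read/think/write loop can be caricatured in a few lines. Everything below is a placeholder standing in for lab technology and learned models — invented names, not real APIs:

```python
import random

# Schematic sketch of the "read / think / write" loop described:
# perturb cells (write), sequence the result (read), update a model
# (think), then choose the next perturbation.

def write_perturbation(gene: str) -> dict:
    # stands in for making a directed edit (e.g. CRISPR) in cells
    return {"perturbed": gene}

def read_out(state: dict) -> float:
    # stands in for sequencing the perturbed cells (e.g. scRNA-seq)
    return random.random()

class Model:
    # stands in for a learned model of the cell (the "think" step)
    def __init__(self):
        self.observations: dict[str, float] = {}
    def propose(self, candidates: list[str]) -> str:
        # pick an as-yet-untried perturbation
        return [g for g in candidates if g not in self.observations][0]
    def update(self, gene: str, readout: float) -> None:
        self.observations[gene] = readout

model = Model()
for _ in range(3):
    g = model.propose(["TP53", "MYC", "BRCA1"])  # think
    state = write_perturbation(g)                # write
    signal = read_out(state)                     # read
    model.update(g, signal)                      # think again
```

The closed loop — each perturbation chosen in light of all previous readouts — is what distinguishes this from the one-shot experiments that dominated when reading, thinking, and writing were separate, slow technologies.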

43:27

Speaker C

If we here at Cursor, and also others in the industry, are successful in automating lots of programming as we know it today — replacing it with a form of software building that's much higher-level, more productive, and much more focused on defining what you would like the software to be — who are you long? People talk about the designers, and how this will be a renaissance for them. But are you long the grad students? There are lots of really amazing grad students who are awesome but maybe less skilled at making things happen on computers. Who do you think is the most unexpected beneficiary of a world where many more people can make things on computers — and where, especially if it's an evolution away from programming, the people already making things on computers are much, much more productive?

47:02

Speaker A

I don't have a high-confidence answer to that. There are all sorts of trite stock answers: real assets, especially constrained real assets — maybe we should be long SF real estate, because it's one of the most beautiful cities in the world and will be enduringly so. Maybe we should be long the inputs and ingredients to these systems, because demand for them will go parabolic — so maybe we should be long copper. Maybe we should be long positional goods and celebrities and Taylor Swift's music catalog. There are a lot of compelling theories here. But part of what I think is interesting at this economic moment is the unpredictability and the contingency — the sensitivity to precise assumptions. The technology trajectory itself, and the shape it takes in five or ten years, is going to do a lot to determine the answer. And as I look back over the last couple of years, I'm struck by how many predictions have held up reasonably poorly, even for people who are, on the face of it, extremely well-informed. So I've asked a lot of people this question, and I have not heard any answers so compelling that I feel I have conviction.

47:52

Speaker C

So we are very happy to be serving Stripe and your mission. What would you like us to build? How can we make Cursor better for you — either you, Patrick Collison, or you, Stripe?

49:11

Speaker A

Well, you guys are already making Stripe better, so keep doing what you're doing — that would not be a bad outcome from our vantage point. Stripe has today hundreds, and soon thousands, of extremely enthusiastic employees who are daily users of Cursor, and they report that it's a very significant productivity enhancement.

49:21

Speaker C

We'll wait for the economic numbers.

49:48

Speaker A

Well, the economy is pretty big, and these diffusions take time. Stripe spends more on R&D and software creation than we spend on any other single undertaking, and so if you're making that process more efficient and more productive, it seems kind of greedy to want anything more. But if I'm being selfish — okay, three things.

49:51

Speaker C

Perfect.

50:17

Speaker A

The runtime characteristics and integration stuff that we just discussed, I think, would be really valuable. The refactoring and the beautification stuff that we also talked about would be extremely helpful and would really change our degrees of freedom — as in, if you could lower the cost of future changes to Stripe and improve the quality of the architecture. And then third, we really care at Stripe about what we call craft and beauty. We want our software to be well designed and pleasant to use — pleasant not only in the superficial pixel sense, but in the deep it-works-very-well sense: something you can set up, largely forget about, and just trust, inasmuch as you want to. There's obviously a concern with AI that it leads to the creation of more slop — more kind of crappy things, but not more of the best things. I don't know what it would be that Cursor would do to ensure that the world is creating more of the best software and not just more software, but I think that's an interesting and important dimension. So, besides all the obvious things, those would be my three suggestions.

50:18

Speaker C

Thank you, Patrick.

51:40

Speaker A

All right, thank you for having me.

51:41

Speaker C

Yes,

51:42

Speaker B

thanks for listening to this episode of the A16Z podcast. If you liked this episode, be sure to like, comment, subscribe, leave us a rating or review, and share it with your friends and family. For more episodes, go to YouTube, Apple Podcasts, and Spotify. Follow us on X @a16z and subscribe to our Substack. Thanks again for listening, and I'll see you in the next episode. This information is for educational purposes only and is not a recommendation to buy, hold, or sell any investment or financial product. This podcast has been produced by a third party and may include paid promotional advertisements, other company references, and individuals unaffiliated with A16Z. Such advertisements, companies, and individuals are not endorsed by AH Capital Management, LLC, A16Z, or any of its affiliates. Information is from sources deemed reliable on the date of publication, but A16Z does not guarantee its accuracy.

51:45