Possible

Reid Riffs with Parth Patil on AI-Native Startups (Part 3 of 3)

35 min
Jan 28, 2026
Summary

Reid Hoffman and Parth Patil discuss what it means to be AI-native in entrepreneurship, exploring how AI agents can dramatically accelerate startup workflows and enable small teams to accomplish tasks that previously required large teams and months of work. They demonstrate a real-world example of internationalizing podcasts using AI agents that can translate, transcreate, and generate localized voice content in multiple languages within days rather than months.

Insights
  • AI-native founders should regularly reassess workflows as new AI capabilities emerge, often leading to 50-70% productivity increases
  • Small teams equipped with AI agents can now accomplish what previously required 20-25 person teams and several months of work
  • The key to AI-native thinking is decomposing tasks into parallelizable components that can be distributed across multiple AI agents
  • Successful AI implementation requires combining agent automation with human expertise for quality control and cultural nuance
  • True AI-native companies should be able to describe their value proposition without mentioning AI; the technology should be invisible infrastructure
Trends
  • Shift from individual AI copilots to orchestrated fleets of specialized AI agents working in parallel
  • Hyperlocalization becoming feasible for small teams through AI-powered translation and voice cloning
  • AI-native startups requiring significantly less capital to reach product-market fit
  • Technical and business co-founders both needing AI amplification, not just technical teams
  • AI capabilities advancing weekly, requiring constant workflow reassessment and optimization
  • Voice cloning and multilingual content creation becoming standard startup capabilities
  • Founding teams becoming smaller but more generalist and AI-amplified
  • AI spend becoming equivalent to contractor costs in startup budgets
Companies
OpenAI
Mentioned for Codex coding agent capabilities and the GPT-4.1 model powering the translation agents
Anthropic
Claude Code mentioned as breakthrough coding agent that can work continuously for days on complex problems
ElevenLabs
Voice synthesis platform used for generating multilingual podcast versions with emotional context
Palette
Startup founded by Amila focusing on JavaScript optimization and web performance improvements
Clubhouse
Patil's former company where internationalization required 20-25 person team and months of work
Cursor
AI coding tool mentioned as early example of coding automation that Patil introduced to friends
Spotify
Platform mentioned for hosting video version of the podcast episodes
YouTube
Reid's channel mentioned as platform for watching full video experience of episodes
Notion
Used as example of web interface that could benefit from JavaScript optimization for speed
Uber
Analogy for how users don't care about underlying database technology, just the end result
People
Reid Hoffman
Host discussing AI-native entrepreneurship and demonstrating podcast internationalization project
Parth Patil
AI specialist guest explaining AI-native workflows and demonstrating agent orchestration systems
Aria Finger
Co-host of Possible podcast whose voice is being cloned for multilingual versions
Amila
Startup founder of Palette who adopted AI coding agents and saw dramatic productivity improvements
Quotes
"I have this like realization multiple times a week where I see something that a model can do... And then I go for a walk and I'm just thinking. I'm like. Like, I'm walking right now, but I'm also being productive, right?"
Parth Patil
"This used to be 25 people in six months, and now it's a day to the first version."
Parth Patil
"If you're not AI native, then you basically shouldn't be doing a company."
Reid Hoffman
"Can you describe what it is you're doing without using the letters AI? If you can't, then maybe the AI isn't the important thing here."
Parth Patil
"When AI blends into the background, that's the best version of this."
Parth Patil
Full Transcript
4 Speakers
Speaker A

Hey everyone, Aria here. We want to kick off the new year with inspiring conversations about AI, as well as practical and tactical guidance around the technology. So for the month of January, AI specialist and my dear colleague Parth Patil is joining Reid for Reid Riffs to talk about how everyone, from individuals to enterprises to startup founders, can harness AI to level up their work, retrofit legacy orgs for the AI era, and build AI-native businesses right from the jump. So tune in, you're in very good hands with Parth, and I will be back putting Reid in the hot seat come this February. Thanks so much.

0:00

Speaker B

Thanks for the kind words and warm welcome to Possible, Aria. As I talk with Reid this month, I'll be walking through some of my AI projects, demos and tools on screen. While I'll do my best to describe what I'm looking at for our audio-only listeners, consider switching to the video version of the episode on Spotify or watching on Reid's YouTube channel for the full experience. Thanks.

0:41

Speaker C

And let's get into it.

1:04

Speaker D

An area close to both of our hearts, which is entrepreneurship. Starting companies, starting products. So everyone is going into AI today. What does it mean to actually be AI native? I think most people misunderstand this, but as a matter of fact, of all the people I know, you are the most AI-native person. So say a little bit about what being AI native is, and how does it change the kinds of problems entrepreneurs might choose, or the way they might go after starting a company, going after a market, thinking about how they operate from the very earliest days.

1:05

Speaker C

This is something. And actually, I think I'm one of the early AI natives, but I have a feeling the next generation will be even more AI native. There are people I've met that have never even used a keyboard, and for them that's like an alien experience, a mouse and a keyboard; they're used to the trackpad, laptop kind of experience. But when I think about AI native, I have this realization multiple times a week where I see something that a model can do. I see that, oh, Codex can now work for two days straight if you allow it to plan very deeply. Same thing with Claude Code running Opus: you can give it a planning framework, and if you allow it to take notes on its own progress, it's able to work for two days straight. And then I read the paper, I fire it up on a project, and I see it working, like three hours deep doing productive work for me. And I'm just like, oh my God. Then I go for a walk and I'm just thinking. I'm like, I'm walking right now, but I'm also being productive, right? I have these processes running, and then my mind starts going, and I'm thinking, where do we apply this new capability? But this happens every other week. Some new capability comes online, and then I have to go for a walk and think, where does this apply? A lot of times I take, okay, let's just describe the workflow as it is, how we normally do it, the normal problem that we're trying to solve. And then I go to a language model. I say, given we now have X, Y, Z capabilities, how might we reimagine this workflow so that we can do it in a more parallelized way, in a more extensible, modular way? How can we reduce the drudgery of the human experience in this work, and then maybe write the first version of that? And I usually allow it to think and work on that for a whole day. It's something that I try to make a pattern of, where I describe the problem.
I take it to the smartest model I know. I tell it to think about a couple different plans, and then I tell agents to start working on those plans, executing those directions. A lot of times you don't get something that works on the first try, but you get a lot of promising directions, or you get like 80% of the way there. And all of a sudden you've eliminated like 100 hours of work a week, or the few people that are working on this can go way further. There's the parallelization of cognition now, right? Where it used to be that every human was the bottleneck in any job, but now you take a process that someone does, you have them equipped with agents, and then they're parallelized across many different parallel streams in that process. And that's something that's amazing. Rethinking of yourself as being able to atomize a task, decompose it, and then parallelize subcomponents of it, and then lean on the computer for those pieces, it just feels like a superpower.

1:45
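The "decompose, then parallelize" pattern Patil describes can be sketched in a few lines. This is a minimal illustration, not his actual system: `run_agent` here is a stand-in stub for a long-running LLM or coding-agent call, and the subtask names are hypothetical.

```python
import asyncio

async def run_agent(subtask: str) -> str:
    # Stand-in for a long-running agent call (LLM, coding agent, etc.).
    await asyncio.sleep(0)  # simulate I/O-bound agent work
    return f"done: {subtask}"

async def run_workflow(task: str, subtasks: list[str]) -> list[str]:
    # Fan out: each decomposed subtask runs concurrently instead of
    # serially, so no single human (or agent) is the bottleneck.
    return await asyncio.gather(*(run_agent(s) for s in subtasks))

results = asyncio.run(run_workflow(
    "internationalize podcast",
    ["parse transcript", "translate turns", "generate audio"],
))
print(results)
```

The point of the sketch is the shape, not the stubs: once a task is atomized into independent chunks, adding another agent is one more element in the fan-out.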

Speaker D

So give me a real life example.

4:35

Speaker C

So one thing that I'm known for in my own friend group is as this guy, like, the guy that's just deep in the coding agents, deep on the frontier of what came out last week, what's the new superpower that we have. And a lot of my friends, they still work at normal companies, they have normal jobs, they're building their own companies, and a lot of them are programmers. And so for me, it's like I get to experience the first version of the superpower, but my friends are better engineers than me. And so I'm kind of like, is it because I'm a noob, because I'm kind of just teaching myself everything, that it all feels magical? Or does someone who's much more experienced than me experience a different kind of amplification? So, for example, earlier this year Claude Code came out, and it started taking off like wildfire within my workflows. And then I was like, well, is it just me? So I called all my best programmer friends in SF, we got dinner, and we were sitting there at dinner and I was like, guys, Claude Code. This feels like the step forward in coding automation that I've been waiting for, looking at it from different angles. And here's what it does for me. And these are the same guys that saw me when I first interacted with Cursor. So they were like, if he's right, we should get on this early. And so I had my buddy Amila. Amila is a startup founder and he's working on a company called Palette. It's just the two of them, two engineers working on this company. And what they do is JavaScript optimization. They're trying to make web interfaces faster. So tools like Notion, how do you make them responsive, low-latency experiences? And his entire thing is, how do we make the web faster? There's a lot of JavaScript, how do we make it all faster?
And so we were sitting there at dinner and I was talking to him about the cutting edge of coding agents, and I was like, here's why I feel like it's amplifying me. Here are the problems I can solve, and I think you should be using it. I mean, you're a startup founder, right? You should be using Claude Code before you even think about hiring anyone. And at first he was a little dismissive. And this, I think, is because he's a really good JavaScript programmer, probably one of the best that I know, and that comes with this pride and also a very high expectation. So, yes, in some cases AI-generated code can be slop, but that doesn't make it useless. We work with people that are not perfect, and we ourselves are not perfect, right? But to get zero value would be shocking. And so I give him Claude Code, and then a couple months later I'm like, oh, Codex is also very good. Now OpenAI's Codex is competing on a similar plane. And so we get dinner four months later, and this was just like three months ago. We get dinner again and he pulls me aside and he says, Parth, I'm so glad you showed me these coding agents, because now we are launching an enterprise partnership. It's still just the two of us, and it's nailing extremely hard migrations. Codex is able to understand and solve an arcane problem in 20 minutes that would have otherwise taken me a week with all of my time. And he's like, if you think about yourself as a solo founder or a two-person team, you have many other responsibilities other than programming. And he can delegate to these intelligent copilot systems that can actually solve some of these multi-day problems. For him, it means that he can't imagine hiring people that don't interact with these tools as well.
And so it's a totally new playstyle, and my thing is, okay, cool, I need to figure out the next part of that game: the orchestration of multiple of these. And everyone's at a different level. I think a lot of people are interfacing with ChatGPT, or they install Claude Code and they have one agent, and I'm kind of at that point of, okay, what if we have many of these, a fleet that's kind of on deck, some of them active, some idle, and some of them working continuously? And then can I put these tools in front of the best builders that I know? And what impact does it have on them? And it is staggering how much more of an effect it has on them. They become much more ambitious. They're reaching milestones earlier than they would have imagined. And then they reimagine the team that they're building around these kinds of new playstyles. This is why I love working with startups. Because if you think about startups as opposed to enterprises, you have no baggage. You're actually just already dead; you're default dead. You don't exist, and you don't have this calcification of a bureaucracy and a thousand employees. You don't even have enough employees to do what you're trying to do to stay alive. And so they lean into the new technology. And so my job, I view it as: scout the new technology and put it in front of the right person. And then it reveals to me, I get more validation, like, oh, it truly is that important of a technology. It ends up in their daily workflow. It ends up being something their whole team is collectively contributing to. And I get a lot of feedback then, because I can use it myself and learn, but if I infuse my network with it, then I get a lot of feedback, and people are coming back to me six months later and they're like, I learned this new trick. Here's this crazy new ability that we have, and I'd love to show it to you.

4:37

Speaker D

Yeah, the collective learning, the network, the allies, the friends. The iteration and learning is super key. So walk us through what a modern example would be, something concrete, like a product or a feature that a frontier model in the loop from day zero could do. What does that process look like, in a few steps?

9:48

Speaker C

Well, actually, you know about this one. So we've been working on the Possible podcast, and you're obviously the host of Possible, and I've been kind of supporting the team behind the scenes. The big push for the last couple months has been, can we internationalize this podcast? And internationalize, not just release the podcast and then translate the transcripts, but what if we were to recreate the same conversation natively in many different languages? So your voice, your co-host Aria, you and Aria both are the hosts of Possible, but can we re-release the podcast using your voices in Chinese, in French, in Hindi? And how many languages can we do that in? And how quickly can we expand into many different languages? And this was an interesting project, because when we mentioned we were interested in translating, well, you've been working with translation of your content for a long time. But I was like, oh, this is perfect for agents, because it is a coding problem. It is a problem that can be sliced up into a bunch of different small chunks, and then it can be largely parallelized. And so you're kind of cheating time, and you're reaching into the general capabilities of language models and the increasingly general capabilities of voice models from ElevenLabs. And also, you know, at my last company, Clubhouse, when I was working there, we had an internationalization team. It was like 20, 25 people. And it took a long time to just launch in one new market. And that was in the pre-language-model world. But now the language model speaks every language. The voice agents can generate almost 68 to 70 of the most popular languages on the planet. So you can almost think of a new kind of creator that emerges that's natively localized all over the planet, where, yeah, you might be an English-first creator, but imagine if everyone could experience you in their first language.
And so that's always been on my mind. But the second piece was, can we do it with a very small team? And so I took this problem as I described it to you, and I went to Codex, I pulled up Codex and I just talked about this problem for 10 minutes. And then I said, let's build an agentic workflow that breaks down, atomizes this problem, and then reanimates the podcast in, say, five different languages. And a combination of Codex and Claude Code built the first version of that in one day. At the end of that day, we had this. I mean, let's see. I've run the app locally, so I can show it to Reid. Here we go. So we'll pull up the pipeline. But I went to this coding agent, described the problem, and I was like, we need to atomize this, strip away how it was done, look at every new technology I show you that is now on our table, that we have access to, and then resolve this problem using AI agents. And so we end up with this pipeline. Basically, we have a transcript with two speakers, and the first step is to parse the transcript into turns. So we take each person's turns. Then we need to translate into a different language and transcreate. So we need to preserve the meaning of the original conversation. You don't want to do a literal translation, because then you have cultural idioms that come into play. And a lot of this is what we learned when we partnered with the human experts on each language. And it's been a very interesting journey, because one technical person paired with a few language experts can actually localize an app or an experience or a podcast very quickly. And when I had the first version of the pipeline, it was like, great, we have the French translation pipeline working. And then Codex was like, would you like me to enable the other 68 languages? And that's when I was like, yes. I mean, we're not ready yet, but let's do it.
I mean, we don't yet have all the human reviewers that we want in every language. But I was like, this is the awesome thing about thinking about it as an AI-native approach. Your agents are, right now, in English mode, but you could just change one word, switch the language, and now they're transcreating the content into French, into Chinese, into Portuguese, into every single language. Even my mother tongue, Marathi, right? And I showed a sample of the podcast to my parents, and they were like, whoa, it feels like NPR, but from Maharashtra in India. The quality was so good, their jaws dropped. And so I think about this as: we did the first version in a day, and the agents were just ready to enable the next level of scale. We just need to get enough experts around it so we can raise the quality bar up to our expectations. But it's something that I could not imagine. I mean, we tried to do it before and it took like 25 people and several months, and now it's pretty much the next couple days. Yeah.

10:18
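The first stage of the pipeline Patil describes, parsing a two-speaker transcript into ordered turns so each turn can be translated and voiced independently, can be sketched like this. The `"Name: line"` transcript format here is an assumption for illustration, not the show's actual data format.

```python
def parse_turns(transcript: str) -> list[tuple[str, str]]:
    """Split a transcript into ordered (speaker, text) turns."""
    turns = []
    for line in transcript.strip().splitlines():
        if ":" not in line:
            continue  # skip blank or malformed lines
        speaker, text = line.split(":", 1)  # split only on the first colon
        turns.append((speaker.strip(), text.strip()))
    return turns

sample = """Reid: Welcome back to Possible.
Aria: Thanks, Reid. Let's talk about AI-native startups."""
print(parse_turns(sample))
```

Once the conversation is a list of turns, each one is an independent unit that downstream translation and voice-generation agents can process in parallel, which is what makes the whole job parallelizable.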

Speaker D

Well, and one of the things that was actually particularly funny, and this is part of your general point that's important here, is: look, there's a huge amplification that comes from the agents. But so we did French, and we did French early, because, you know, I've been spending some time in the French ecosystem trying to help various things. And so we released French as the very first Reid Riffs. And then we went to some of my French friends and they said, well, that sounds like Canadian French.

15:13

Speaker C

That's right.

15:37

Speaker D

Right. And I was like, oh. And we didn't know enough to know, but that was the reason why it's still worth cross checking. And so then we redid it again, using agents to be, you know, Parisian.

15:38

Speaker C

Parisian French, to delineate between them all. Yeah. The naive approach is that everyone who speaks French speaks the same, but no, actually, French is spoken differently in different parts of the world.

15:49

Speaker D

Yes.

15:58

Speaker C

And then I went back to the agents and I was like, guys, guys, we have to actually localize this. This isn't about every language being one version; actually, every locality gets its own unique version. And then it was like, well, actually, we need to retrain the voices, so we need to create a French Reid. We need to create a Parisian French Reid. And then realizing that we could do that, because ElevenLabs has some very cool voice remixing tools, I was like, whoa. It seems like this same approach might work for every locality in other languages as well. So the idea that you're solving this problem and the next problem and the next problem at the same time is very interesting. And also realizing that the models are getting way better. When we started, we were using an ElevenLabs model that didn't have intonation. And now we're using the V3 model, which you can actually prompt to inject emotional context, and we can create something more animated. It's not just a robotic recitation of the podcast; it's more like talking to someone that's very animated. And so I think that's the huge thing there: the models are getting better. And it was a leap in Codex's capabilities that showed me that I could do it in one day. But this is something where every week, every two weeks, there's some leap in capabilities. And I sit down on a fresh project, I aim at a very hard problem, and I just say, hey, let's see where we can go. And I'm shocked at where you can go in just one, two hours of iteration, eight hours of it thinking. And then that first version, and you're just like, this used to be 25 people in six months, and now it's a day to the first version. And now we're like, okay, let's become more ambitious. Let's see how quickly we can get this out there.

15:58

Speaker D

By the way, one of the things, again, is the rebroadening of your imagination for stuff. For example, this hadn't occurred to me before this conversation, but one of the fun things we might want to try with Reid Riffs, probably using your agents to do this, is to essentially say, well, let's try Scottish English, Northern English, Welsh English, classic English English, and then release four versions of it with that kind of locality tuned. Because that would be fascinating for people.

17:41

Speaker C

Yes, we should try that. I agree. It's like, what's the extent of this? I think of it as hyperlocal. I even cloned my own voice and then went very local into India. I recreated my own voice locally in six different languages in India, and I was like, this is incredible. Now we can reach everyone in a way that they feel natively heard.

18:14

Speaker D

Yeah, exactly. Do you want to show something with the Tool.

18:39

Speaker C

Yeah. Well, I guess I could show you. Yeah.

18:43

Speaker D

It's up to you. You got to launch. Now.

18:45

Speaker C

Let's go to Possible.fm, find a transcript. Podcast transcripts. And then let's go with "Rip the computer keyboard." Oh, that one has Tenay. We don't have his voice. Let's go with Reid and Aria. So here we're going to take two paragraphs of the Possible podcast and paste it into our translation tool.

18:47

Speaker D

Paris custom version.

19:14

Speaker C

Yeah. So we have the custom voices that are Parisian French. We also have Beijing, Shanghai, which we're working on, and we have a couple other markets we're looking at. Let's do Paris and French. I had to delineate between Canada and France because of a point of feedback that we got. So we have a podcast transcript. We have you and Aria, the hosts of Possible. And I'm going to click run, and I'll explain what's happening. So the first thing the system does is break it down into turns. So this is your turn, this is Aria's turn. And then it's going to tag each turn with the emotional context appropriate for that moment in the conversation. It's like these agents are basically role-playing you guys in the conversation. Now it's tagging the conversation. And so we'll see these same turns of conversation where it's going to infuse emotional context. And that's the cool thing about the new ElevenLabs V3 model, which is extremely realistic voice. So here we have frustrated, serious, and then emphasizing, like you're making a very strong point in this turn of conversation. And so the AI is starting to assign that emotional context. Smiling, so hopefully Aria's response is going to be very positive; curious when she asks the question; serious for her final point. And now it's actually translating the conversation into French. And so the next column will appear soon. And this is going to be the French translation column. We're almost done.

19:15

Speaker D

One other version. We should try this for fun at some point. Is Klingon.

20:46

Speaker C

Oh, right, Klingon. Yeah. Yeah. We could release the podcast in Klingon.

20:49

Speaker D

Just to kind of show the range, the fact that the future is here.

20:55

Speaker C

Yeah. So now here, this cell right here has just come up. And this is the first-draft French translation of this conversation so far. And so now it's in French. And what's happening is ElevenLabs is generating the audio. It's basically reassembling the conversation using your voice clones, but now in French, kind of as...

20:58

Speaker D

As we're doing this, pop up a level. This is like an example of our workflow, where something that was previously a massive stretch, maybe too expensive to do, becomes something easy to start prototyping. And actually, for Reid Riffs, we've already deployed it in French.

21:21

Speaker C

Yep.

21:40

Speaker D

Right. In the end, it's a huge amount of acceleration through agents, but then selective, intelligent use of humans in the loop.

21:41

Speaker C

Exactly.

21:51

Speaker D

For getting the product right, et cetera. And this is the parallel for, for example, a founder who might be thinking, okay, we'll just start doing it. But where are the things where you use the AI to accelerate you and what you're doing, and then what are the places you bring in experts, feedback, potential customers, et cetera?

21:52

Speaker C

So it looks like we have a French translation. We'll play a couple seconds.

22:15

Speaker D

Third American intelligence.

22:24

Speaker C

C'est toute la chaîne... And so that's Aria's voice all the.

22:27

Speaker D

Way back to when we did this with the Perugia speech. It just, it's so mind blowing to hear your own voice speaking a language. Like, it's like the, you know, like the other Indian dialects that you're like, I don't speak that language, and yet that is my voice speaking that language.

22:49

Speaker C

It's kind of like accessing the multiverse.

23:04

Speaker D

Yes.

23:06

Speaker C

It's like imagine if you were French.

23:06

Speaker D

Yes.

23:08

Speaker C

Here's a glimpse into that.

23:09

Speaker D

And this is all a very concrete dive into saying, look, this is how to operate, how to do quick internal tooling, how to explore various versions of product-market fit. All of the things that, frankly, I think any credible founder today has to be doing AI-natively. If you're not, then you basically shouldn't be doing a company.

23:10

Speaker C

Yeah. And we should be thinking, what parts of this workflow do we absolutely want to start aiming AI at, even just to get a baseline of its performance, even before we're like, oh, it's good enough. And I just asked our coding agent to tell us how many agents we are using in this system. Okay, here we go. We're using six agents. So the first agent tags each turn of conversation with emotions to guide the delivery of the voice. Then there's an agent for single-turn tagging. Then there's an agent that translates it into the target language. Then there's an agent that validates it, making sure that the transcript is still holding the conversational tags. Then there's an agent that listens to the generated audio. So we generate the audio, then it transcribes it, and then it listens. It's like, yep, yep, yep, that looks like what we're aiming for. And there's a second agent that's verifying language on the other end. Part of this is thinking, how much of this can we double-check and triple-check, generate and regenerate, before the person has to come back in and do the final approval? And I'm pretty excited about how much of this we could do before our experts come in, because our experts can then focus on the idioms, the tech references, especially how you preserve the meaning of something that is hard to explain in a different language. You have to have an expert in that culture, in the idioms of the culture. Then you get into transcreation. All of our agents are using GPT-4.1, and it runs on the Agents SDK. But the reason this system is possible is because I said, use the Agents SDK to solve this workflow. So I went to the smartest model and I said, we're going to use agents, and then create the next version of this pipeline.

23:35
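The multi-stage pipeline described above (emotion tagging, translation, tag validation, and verifying the generated audio) can be sketched as a chain of stage functions. Every stage here is an illustrative stub standing in for a real agent: in the actual system the stages call language and voice models, while this sketch only shows the data flow.

```python
def tag_emotion(turn: dict) -> dict:
    # Stages 1-2 (stubbed): assign an emotional-context tag per turn
    # to guide the voice delivery.
    turn["emotion"] = "curious" if turn["text"].endswith("?") else "serious"
    return turn

def translate(turn: dict, language: str) -> dict:
    # Stage 3 (stubbed): translate/transcreate into the target language.
    turn["translated"] = f"[{language}] {turn['text']}"
    return turn

def validate_tags(turn: dict) -> dict:
    # Stage 4: check the emotion tag and translation both survived.
    assert "emotion" in turn and "translated" in turn
    return turn

def verify_audio(turn: dict) -> dict:
    # Stages 5-6 (stubbed): the real pipeline synthesizes audio, then
    # re-transcribes it and compares against the translated text.
    turn["verified"] = turn["translated"] is not None
    return turn

def run_pipeline(turns: list[dict], language: str) -> list[dict]:
    return [verify_audio(validate_tags(translate(tag_emotion(t), language)))
            for t in turns]

out = run_pipeline([{"speaker": "Reid", "text": "What does AI-native mean?"}], "fr")
print(out[0]["emotion"], out[0]["verified"])
```

The generate-then-verify loop at the end is the part Patil emphasizes: automated checking runs as many times as needed before a human expert does the final approval.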

Speaker D

So, you know, part of this is that amazing amounts are now doable by individuals. But how does that reconceptualize possibilities in founding teams? Right? So what might now be possible in terms of founding teams, what they should look like, what might be different from a founding team five years ago?

25:16

Speaker C

I think the biggest difference is, of course, you want to embrace the technical velocity that we have accessible. So if you're, you know, your cto, the first technical person should be learning how to use cloud code or Codex, maybe both, and then very quickly moving to a level of where they can orchestrate a small fleet of, say 20 of those at the same time. That there is a slight learning curve there. But it is so much more worth it for the most senior, first technical person attacking anything to adopt that mindset. And you should be willing to spend the money on these tools because actually like, it cascades through the rest of the hires that you make, the rest of the people that you bring on. You're going to want each person to be individually amplified. And so that's different is that maybe each person has some kind of compute spend, which is maybe even equivalent to a contractor hire equivalent of quad code spend and coding, agent spend in aggregate and agent spend in aggregate. It is almost like you think about that as a first party kind of approach to the problem. And I think that the person who embraces these workflows is going to see at least 50 to 70% increase in productivity and then looking for generalists, people that quickly adapt into multiple roles. So I think the PM that can vibe code that can quickly convince you of a new design choice, right? It's like oh let's maybe it's not production grade, but it gets you the team thinking about a new way that the product could be designed potentially weeks faster. Yeah, weeks faster. Yeah, exactly. And I think that like starting with that expectation of speed and then because you know, as companies grow like or we tend to get slower as we have the coordination tax builds up, but starting very quickly, quickly unpacking the hypotheses before and you'll reach these realizations before you even have to raise money or then when you do raise money, you raise for different reasons. 
And I see that in the startups I'm advising: they can go much further with a very small team and a strong core of agentic tools.

25:40

Speaker D

The thing I would add is kind of classic: two or three decades ago, you'd have a business person and a technical person as the co-founders doing something, and if you had only a business person, they would hire a technical person; if you had only a technical person, they'd hire a business person. And I agree with you about that. But I also think that one of the first jobs for the technical person is to similarly make sure that the business person is amplified too. It's not just, okay, this is the way I'm doing my workflow for DevOps and for experimenting with product design for product-market fit and the rest. Yes, yes, yes. But also amplify the business side.

27:44

Speaker C

Yeah, that's right, that's right. And I think they should be using state-of-the-art models as well. Everyone in your small team should be using state-of-the-art models that help them with all aspects of their work. Your general copilot should be the best one available.

28:22

Speaker D

So one of the things is, you know, partially because we live in a media environment, and obviously Hollywood has tied itself into knots about AI and all the rest of this stuff, and you live down there, so you see a lot of it: the "we're using it, but we're not telling anybody because it's kind of unpopular," even though it's such a clear amplifier. It's kind of like, as opposed to a Masonic handshake, there needs to be an AI handshake now.

28:38

Speaker C

Or it's like, I finally get to tell a story, and I was never even in Hollywood.

29:05

Speaker D

Yes.

29:08

Speaker C

You know, I could just go, oh, I have an idea, and now we can make the first version for $300. And in the same way that we're vibe coding prototypes, we're vibe coding storytelling, or animating and creating these worlds as concepts. And they may eventually become bigger things in a traditional format. Yeah, but they don't have to either.

29:08

Speaker D

Yeah, but the speed of using it, the exploration, the iterative development, it's the same thing where you learn by doing, you learn by seeing what you did on your first iteration. Like you said: okay, let's use agents to build it. Oh, wait, we need an emotional tagger. That's the way of doing this. So our last question for the moment on AI in startups: what do you look at when you look at startups to see whether the AI is marketing or the AI is real? And how should people think about that themselves? Like, am I being real enough? Am I being AI-native enough? Am I just using AI as buzzword bingo to try to get money or attention or anything else? And what does real AI traction look like?

29:25

Speaker C

Yeah, there's a lot of that I see going around these days, this buzzword era of AI: AI-enabled, AI-powered. And maybe it's because I interact with all these models, but I'm kind of like, which model? What AI? Do you even need the AI, or are you just shoving it in there so that you can say that it's AI? And for me, if you don't go one level deeper, it makes me very skeptical. It makes me wonder if there's anything of value here at all, or if it's just pure signaling in order to get attention, or to send a kind of message to people who wouldn't be able to discern. The other way to put it is: can you describe what it is you're doing without using the letters AI? If you can't, then maybe the AI isn't the important thing here. And the other thing I see is that it's not even about the AI. Does anyone care what database technology Uber is sitting on? No. If you were to sit in a car, the average person just wants to get from point A to point B safely. A lot of that is not even relevant to the end user. I imagine a world that I want to live in where AI is under the hood and we take it for granted, because it's just so good at what it does, and it stays out of the way, and it's not making itself the whole thing. That is not the purpose. It's in service of the actual objective that we have, which is maybe to create a new artifact, learn something, or build a product. When AI blends into the background, that's the best version of this.

30:17

Speaker D

Possible is produced by Palette Media. It's hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Sean Young. Possible is produced by Tenasi Delos, Katie Sanders, Spencer Strassmore, Emo Zhu, Trent Barboza and Tafadzwa Nimarundwe.

31:54

Speaker A

Special thanks to Surya Yalamanchili, Sayda Sepieva, Ian Alice, Greg Beato, Parth Patil and Ben Rellis.

32:09