AI & I

If SaaS Is Dead, Linear Didn't Get the Memo

53 min
Apr 1, 2026
Summary

Kari Marttila, CEO of Linear, discusses how the company is navigating the AI era by building agent-native workflows rather than rushing to add AI features. Linear is positioning itself as the organizational backbone for AI-assisted product development, integrating coding agents and AI skills while maintaining focus on quality and thoughtful decision-making.

Insights
  • The most defensible AI product strategy is becoming the coordination layer where agents operate, not building the agents themselves—Linear captures organizational context and intent without bearing token costs
  • Rushing to add AI features without understanding workflows creates noise; the companies that will win are those that take time to understand how AI actually improves their specific use case
  • AI tools should accelerate execution on decisions, not decision-making itself; the bottleneck is shifting from implementation speed to clarity on what problems are worth solving
  • Metrics like token usage and percentage of AI-generated code are vanity metrics; true success is measured by product quality, user love, and whether the organization is solving the right problems faster
  • Product development will shift toward more autonomous agents operating within organizational guidance systems, but human judgment on strategy and taste will remain irreplaceable
Trends
  • SaaS companies with strong moats are repositioning as AI coordination platforms rather than feature-adding to existing products
  • Agent-native architecture is becoming table stakes; the question is no longer whether to support agents but how to make them first-class citizens in workflows
  • Token economics are reshaping SaaS margins; companies must decide whether to absorb AI costs or pass them to users via usage-based pricing
  • Organizational context and memory are becoming competitive advantages; products that can inject relevant context into agent operations will outperform generic AI tools
  • The shift from 'how fast can we build' to 'what should we build' is accelerating; thoughtful product strategy is becoming more valuable as execution speed commoditizes
  • Multi-agent ecosystems are replacing single-agent bets; companies are building their own agents and integrating multiple third-party agents into workflows
  • Bug-free products are becoming a baseline expectation with AI coding agents; zero-bug policies are moving from aspirational to achievable
  • Design and product thinking processes are slowing down intentionally while execution speeds up; conceptual work and exploration remain human domains
  • Public market SaaS companies face higher disruption risk than growth-stage companies due to organizational inertia and inability to rethink from first principles
  • Workflow optimization is replacing feature parity as the primary competitive lever in enterprise software
Companies
Linear
Issue tracking and product development platform being repositioned as an AI coordination layer with agent-native architecture
OpenAI
Released Symphony tool that integrates with Linear as primary entry point for agent workflows
Anthropic
Claude model used as underlying LLM for Linear's agent and skill functionality
Coinbase
Customer building homegrown coding agents that integrate with Linear
Ramp
Customer building homegrown coding agents that integrate with Linear
Scale AI
Sponsor offering Dialect system for enterprise AI decision-making with expert oversight
People
Kari Marttila
Discusses Linear's AI strategy, product philosophy, and approach to agent-native architecture
Dan Shipper
Hosts the episode and conducts interview with Kari about Linear's AI transition
Quotes
"Everyone will have many agents and companies will build their own agents. Linear becomes kind of like a system for guiding the agents and like building this context."
Kari Marttila, early in episode
"This is the perfect business for this era because it's still SaaS. You're the one who has this sort of sticky interface because it's where everyone is kicking things off from and where they're recording all the information. But you don't have to pay for any of the actual tokens."
Dan Shipper, mid-episode
"We shouldn't go fast in deciding things or speed running the decisions. I think there's a danger of you don't have some kind of decision making way."
Kari Marttila, mid-episode
"I don't think there's going to be one agent, but everyone will have many agents and like companies will build their own agents, which we're now seeing with like Coinbase and Ramp."
Kari Marttila, early-mid episode
"The problem really becomes like, how do you productively harness this, like in a good way. You can task a million agents doing something, but should you? What are those things they should be working on?"
Kari Marttila, mid-episode
Full Transcript
Everyone will have many agents and companies will build their own agents. Linear becomes kind of like a system for guiding the agents and building this context. This is the perfect business for this era because it's still SaaS. You're the one who has this sort of sticky interface because it's where everyone is kicking things off from and where they're recording all the information. But you don't have to pay for any of the actual tokens. How are you? Welcome to the show. Oh, thanks. Thanks for having me. Really, really great to finally meet you. You are the co-founder and CEO of Linear. Little-known fact: the first time I ran into Linear was because we were using it in 2020, at the very beginning of Every, to act as our content management system for the newsletter. At the time it was very hush-hush. You couldn't get access to it, but if you knew, you knew that Linear was amazing. We used it for a while and really loved it, but then we realized it was made for software, not publishing articles, so we moved off of it. But it was really cool while we did it. And I've always admired the level of taste and craft that you bring to what you build, and also the level of thoughtfulness and patience that you build it with. One really interesting thing is the way you built the company originally: keeping it closed for a while, not raising too much money, not putting crazy expectations on the company, and being patient and willing to build something quality over the long term. And I think that also has something to do with how you approached AI. You guys are really into AI right now. When I think about the companies that are successfully transitioning into this moment that were started in the pre-AI era, Linear is definitely on that list. OpenAI came out with Symphony the other day, and the main thing it hooks into is Linear. You've successfully transitioned the product to be really agent-native. But when GPT-3 first came out, I didn't see anything about that on Linear. So I'm curious about that transition for you. What was that like emotionally, to have built this product for a particular way of working and a particular way of building software, and then see the world change, but maybe not be totally sure if this was going to be the thing, and then eventually be like, this is the thing, we need to rebuild the product or change how the product works in a significant way? Talk to me about that. Yeah. Well, first of all, thanks for being an early user. I think the thinking has always been the same: we just want Linear to be the best product in this category, helping companies move work forward and often build software products. And in some ways, this new AI stuff doesn't really change that mission. It maybe even improves it. Our goal was always: can Linear take more of the burden of running this product team, so figuring out things to do or figuring out when to do them, and let the product teams or the individuals actually build the things. And now they also build it with AI, or the AI builds it. So in some ways the mission for us didn't change. Actually, I think the AI is making it better, because now we can automate more, take more of that burden, and let people use their craft or their taste or their thinking in it.
But yeah, I personally have always had a way of addressing problems: I come from a design background, so a lot of times the way I approach things is to first try to understand them. That sounds kind of obvious, but I think what happens in the tech world a lot of the time is that people don't try to understand things. They jump into, oh, I can do this, so I'll do it now. But did you think about whether you should do it, or whether it actually helps you? That was our thinking with the early AI and the chatbots. Every company was rushing into this moment, like, hey, we are now an AI company because we have this chatbot integrated. We tried that too internally, and then we just realized it's not really that useful. What is the workflow where you actually need this or use this? So we have spent a couple of years now trying to understand this workflow: how do people actually want to use these things? And we did a couple of things. We released this agent platform, kind of an open platform. It has very good docs, and the agents can build the integration themselves using the docs. Because of that, we now have most of the coding agents or agents out there integrated with Linear. OpenAI brought their Codex, kind of like a cloud agent, in there, because we just had this available. So we kind of saw this world where I don't think there's going to be one agent, but everyone will have many agents, and companies will build their own agents, which we're now seeing with Coinbase and Ramp, who are our customers. They built their own homegrown coding agents, which then integrate with Linear. So Linear becomes kind of like a system for guiding the agents and building this context. But we don't try to own everything in this world or in this market; we can play with other people and other companies too. So the approach was much more: how do we understand the workflows, what is actually valuable, and what could people use these tools for, versus just jumping in because everyone else is doing this thing, so we should do it too. And by the way, now we are adding kind of like a chat interface into Linear, but there are tools and there are skills, and there's more of an understanding we've gathered of how you should use it. You can use it to synthesize customer requests, because Linear can handle that; Linear is a place for customer problems or requests or other things. So now a Linear agent can natively work through those and see patterns, things like that. We're trying to bring clarity and context to the organization, which they can then use as part of their AI building workflows. Because once the AI builds more and executes more, the problem really becomes: how do you productively harness this in a good way? You can task a million agents with doing something, but should you? What are those things they should be working on?
Probably not all of those. If you don't think about it, a lot of that work is not necessarily that useful. You need to have some kind of decision-making process: is this actually important, should we do this? And Linear is a way to do that, to build that intent and build that context and then go build it with the agents. There's an interesting, I don't know if it's a meme or a mind virus or what, going around right now, but the stock market thinks that SaaS is dead. And I think you're pointing to something really interesting, which is this dynamic of a couple of years ago, a lot of companies, including a lot of SaaS companies, rushing to do chatbots. A big part of that is: well, we know this thing is happening, so we have to at least show that we're doing something. And the public markets are now starting to look at that and require that. I imagine when the AI stuff was coming out and you guys were maybe testing AI features but weren't releasing them, there was some pressure, maybe from investors or from yourself or internally, to do something. And it seems like you waited until you had the fat pitch. I'm curious if that is true, what that was like, and what you think it means for all the public market SaaS companies that are down right now, whose CEOs are like, well, I guess we really need to launch an agent platform or whatever. Yeah, I mean, we don't really have pressure from investors; that's one benefit of picking the right investors, and they trust us to make the right calls. We obviously did talk about this, but we also had that discussion: we just don't see the value right now in doing it this way. We need to find the actual real value here that actually helps these companies. So it wasn't that bad. There was definitely internal pressure, though. And now the speed of the market has picked up a lot; every month, or every couple of weeks, there's something changing, and we are tracking those same changes and trying to see where all of this is going. But it also creates a lot of noise in the market: oh, now this week someone is doing these loops, and then a couple of weeks later people are like, no, the loops are a bad idea. Those things are signals that you should read and understand, but you also need to know that a lot of this stuff is not tested, and a lot of the time the people testing these things are not testing them in some large organizational context where it actually matters whether they work or not. We haven't tested all these things, so we can't make these predictions of exactly how things are going to change. On the SaaS narrative, I do think it's probably directionally correct that with SaaS companies, as an investor, there's more uncertainty about the future cash flows, because if the landscape is changing, you can't expect that everything will stay the same.
But I think the narrative is kind of simplistic, like, oh, people will whip up their own CRM tools. I don't think that's exactly going to happen. What might happen is that new companies come out. And a lot of the public companies are not the most, I don't know, flexible or the most robust solutions out there; they are the big solutions that the big companies use, and there's a certain kind of inertia there. So I would say the public companies probably get hit the hardest here, because their moats are kind of disappearing in a way. Even for us, we consider that we need to live in this day-one world again, where we can't rely on our previous decisions anymore. We have to look at these problems in a fresh way: what happens when these things change? What happens when the agents come into this product development process? What are the new problems that come out of it, and how do we help with that? We shouldn't be tied to the past experience, the past product we have, but see what the future product should be. And I think this is harder for large companies and companies that have existed for decades, so I don't think it's an easy task. Growth companies or startups can do it a lot better. How big is the team now? About 120 total. I would say about half of them, around 60 people, are on the product team. And what has that transition been like? I assume that over the last couple of years there have been a lot of divided opinions on: is AI coding really a thing? Is it just glorified autocomplete? Is it going to eliminate programming as a job? How has that change cycle been, to actually go change your workflow and figure out what the new programming workflow is? How did you get the team in shape to do that, and what did you learn in that process? Yeah, there was definitely a time in the company where we had to encourage people to use these tools more. There can be habits where you've always done stuff this way, so you're less and less interested in trying new tools. But now, the engineers, and sometimes our designers and the PMs too, are using agentic coding or coding tools. We don't track any kind of specific metric. I joke about this sometimes on Twitter: now the biggest vanity metric is how much of your code is agent-written, or how many PRs you are merging. And I think that's not the right metric. It measures output, but what does that output do? Does it actually generate value? Is it improving the product? If you're measuring these kinds of metrics, you need some kind of counterbalance: what is actually the quality of this work, and is it actually meaningful? And I think that's also what's playing out in the market: we have large companies that are token sellers, and when you have a lot of incentives, your business model is that people spend more tokens and your revenue will be higher and your market share will be higher.
So I think there's a lot of incentive to tell people you should just spend more tokens, and not to say, well, think about things and spend them well. So again, I think people may be looking at it too simplistically, like, oh, if we just spend more tokens, things will be better. But I don't think that's ever been the case in building products. Yes, there's some value in speed and making changes, but you should also understand that any change or addition you make can also have a negative impact. Activity is not always positive; sometimes it can be negative too. What do you think is a more nuanced metric for judging, in this AI world, how well we are doing our job of figuring out these new workflows, adapting to them, and using them in our own work? If tokens, or number of PRs submitted, or percentage of agent-generated code are not necessarily the right metrics, maybe even in isolation, what do you look at? Or how do you think about it? I mean, I think it's still the classic metrics, like profit or revenue or user love; some of those are what you should be aiming for. Those seem like lagging indicators. Yeah, they are. But I think you should still measure some of these things, like token usage per person or by different teams or something. You just shouldn't take it to the extreme of: this is the only metric that matters now. It's a signal that we are doing something, and then you ask, well, is our product actually improving? Do we have any indication that the product is actually improving? Do we get comments on the new features? Are there fewer bugs? And I think bugs are actually a measurable metric, if you run an honest bug tracking process where you actually track bugs. And now I almost feel like with the agents and AI, why do you even have bugs in your product? There's no excuse for it anymore. Internally, we have the zero-bug policy, which is: we have a Linear team, Triage, and any bugs go there. Then there's a one-week SLA that every bug needs to be fixed. And now with the coding agents, the coding agent can actually do the first pass on it. Once it's done the fix, it will tag the engineer on it, and if the engineer doesn't like it, or there are some changes they want to make, they can now do that inside Linear too, and they can review the code in Linear. So there's this very good workflow now. But it still starts from the question: do we care if our product is buggy or not? And we have made the choice; we think bugs are bad things or mistakes, and we should fix them as quickly as we can, and that's a priority for everyone. So it's still a choice whether you care about the quality of the output, or you just want more of the output. What are the ways that these tools have changed your product-building workflow, both personally and as an organization? What are the most effective ones that might be surprising?
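The zero-bug loop described just above (bugs land in a dedicated Triage team, a coding agent takes the first pass, then an engineer is tagged to review) can be approximated against Linear's public webhooks and GraphQL API. Below is a minimal, hypothetical TypeScript sketch: the webhook payload shape, the team and label names, and the convention that mentioning an agent in a comment delegates the work are all assumptions for illustration, not how Linear's own agent delegation necessarily works.

```typescript
// Hypothetical sketch: auto-delegate new bugs in a "Triage" team to a coding agent.
// Assumptions (not confirmed by the episode or Linear's docs): the webhook payload
// shape used below, the team/label names, and that "@coding-agent" in a comment
// is how an agent gets pulled in. Adjust to your own setup.
import http from "node:http";

const LINEAR_API = "https://api.linear.app/graphql";
const LINEAR_API_KEY = process.env.LINEAR_API_KEY ?? "";

// Post a comment on an issue via Linear's GraphQL commentCreate mutation.
async function delegateToAgent(issueId: string): Promise<void> {
  const mutation = `
    mutation($input: CommentCreateInput!) {
      commentCreate(input: $input) { success }
    }`;
  await fetch(LINEAR_API, {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: LINEAR_API_KEY },
    body: JSON.stringify({
      query: mutation,
      variables: {
        input: {
          issueId,
          // Hypothetical convention: mentioning the agent asks it to take a first pass.
          body: "@coding-agent please take a first pass at this bug and tag a reviewer.",
        },
      },
    }),
  });
}

// Tiny webhook receiver: when an issue is created in the Triage team with a "Bug"
// label, hand it to the agent so an engineer only sees it once a fix exists.
http
  .createServer((req, res) => {
    let raw = "";
    req.on("data", (chunk) => (raw += chunk));
    req.on("end", async () => {
      try {
        const event = JSON.parse(raw); // assumed shape: { action, data: { id, team, labels } }
        const isNewBug =
          event.action === "create" &&
          event.data?.team?.name === "Triage" &&
          (event.data?.labels ?? []).some((l: { name: string }) => l.name === "Bug");
        if (isNewBug) await delegateToAgent(event.data.id);
      } catch {
        // Ignore malformed payloads in this sketch.
      }
      res.writeHead(200).end("ok");
    });
  })
  .listen(3000);
```

The point of the sketch is the routing, not the agent itself: the upstream tool decides which incoming work gets delegated automatically and which waits for a human, which is the leverage described in the conversation.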
Yeah, on the product side I think it's definitely a lot better. With Linear, I have this skill where I fed in some of our internal docs and blog posts about how we think about product development and made this "Linear way" skill. And then I tell it, okay, look at this, help me understand this feature request. We collect these feature requests inside Linear, and, for example, there's a request for multiple assignees per issue; it's requested by lots of people, hundreds of people. So I tell it to go synthesize: help me understand the different reasons people want it. It starts with explaining the problem, trying to understand the core problem, which is usually what I want to know. So when I see a new request, I might go into Linear and say, do we have this kind of request already, and then help me understand it. That then helps me decide: should we actually tackle this now, or is this something we could do later, or maybe never? So before we start building anything, it's helping me understand the problem, and in a very quick way; I don't have to go ask around or find people to do it for me. On the design front, I actually don't personally use it much. I like the manual design process; I still have Figma open, and when I have a problem or an idea, I just draw it in there. My work is often more that kind of exploring, so I don't think the speed really helps there. I actually like the slowness of the manual thing: you draw things manually, and every time you draw something, you have to check yourself. Why am I drawing it this way? Should I draw it a different way? But the broader design team, when they work on problems, I think they are now building a lot more prototypes. And we have this quite robust build system, so you can make a PR and it will run the build, you get a preview link to the build, and then you can use it live in the product. So it helps the testing or prototyping stage. But I still tell the designers to explore more freely in Figma first, or wherever, and think about how to approach the problem. Or sometimes you just jump into doing it; there are projects like that too, where it's very clear what needs to be done. But if it's a bigger project, I think they should still spend that time. And then the engineering side is probably similar to a lot of other companies, where we can fix problems a lot faster once we identify them and decide to do it. We use Slack a lot, and with our Slack agent, we have a discussion and then we eventually decide, yeah, we should do this. And then we just tag it in there and say, hey, can you create the issues out of this conversation? And then we'll do it.
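The "Linear way" skill mentioned at the start of this answer (internal docs and blog posts folded into a reusable prompt that synthesizes feature requests into underlying needs) could be approximated as follows. This is a minimal sketch, assuming the Anthropic Messages API since Claude is later named as the underlying model; the document snippets, request list, and model id are placeholders rather than Linear's actual skill format.

```typescript
// Hypothetical sketch of a "product philosophy" skill: a reusable prompt seeded with
// internal docs, used to synthesize a pile of customer requests into underlying needs.
// Not Linear's actual skill format; snippets, requests, and model id are placeholders.

// In practice these would be loaded from your internal docs / blog posts.
const productPhilosophyDocs = [
  "We optimize workflows, not feature checklists.",
  "Understand the underlying need before committing to a solution.",
];

const skillPrompt = `Act like a product teammate.
Ground your reasoning in these principles:
${productPhilosophyDocs.map((d) => `- ${d}`).join("\n")}
For the requests below: start with the underlying need, list the distinct reasons
people are asking, and end with a recommendation (build now, later, or never).`;

// Ask Claude to run the skill over a batch of raw feature requests.
async function synthesizeRequests(requests: string[]): Promise<string> {
  const res = await fetch("https://api.anthropic.com/v1/messages", {
    method: "POST",
    headers: {
      "content-type": "application/json",
      "x-api-key": process.env.ANTHROPIC_API_KEY ?? "",
      "anthropic-version": "2023-06-01",
    },
    body: JSON.stringify({
      model: "claude-sonnet-4-5", // example model id; substitute whatever you run
      max_tokens: 1024,
      system: skillPrompt,
      messages: [{ role: "user", content: requests.map((r) => `- ${r}`).join("\n") }],
    }),
  });
  const data = await res.json();
  return data.content?.[0]?.text ?? "";
}

// Example usage with made-up requests resembling the episode's example.
synthesizeRequests([
  "Please support multiple assignees per issue.",
  "We need two owners on an issue when pairing.",
]).then(console.log);
```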
And so it helps us like come back to it later and like actually make it actionable right away versus like, oh, we need to have a meeting and then we start a project and then we start like assigning people. So I think there's like, I think it's kind of like, I would say like kind of like the pattern in all of those things is like, it's shortening the some kind of loop there. And like making it faster, like you can do the thing right away versus like waiting, waiting like, I don't know, next week or some other time to do it. Like it's very little effort to do it right away. Which is interestingly, sometimes it seems like you're the exact opposite of your preferred outlook. You know, actually, we shouldn't do things faster. Actually, we should take things a little bit slower. How does having tools that make you go much faster interact with that outlook? Yeah, I think it's good point. I think it's, I think it's more like, I think like we shouldn't go fast in like deciding things or, or just like kind of like speed running the decisions or like not even doing a decision. Like I think there's this, some people do it now where they just like have an idea, then they build it. And now we're like, now we're all looking at this idea that no one really know why it exists. And like, should we even do it? And it's like, it's a, every new prototype or idea can kind of like seem useful. But then like, you now like don't have like a good way of like framing it's like, how useful this is versus other things like, should we spend the time actually like now committing on this idea? Because we already have like kind of decided on this, some of the other ideas. So I think there's this like danger of like, you don't have some kind of like decision making way, we don't have like a lot of processes in linear, but it's more like, we want to commit on this. Like once we commit on the thing or the fix or the project, then I want it to improve fast. Like I want the loop to be fast to actually work on the problem. But I don't want the problem finding to be fast. Like you should take the time to find the right problem and like the right approach for the problem. And then once you decide that, then you can go faster on it. Here's a simple test for whether your AI is actually ready for production. Would you stake a business decision on what I just told you? If the answer is not yet, you're not alone. The gap is in capability because AI can do a lot. It's really about trust. You can't verify the output of the AI, you can't trace its reasoning. And nobody with real domain expertise has touched it. Dialect is a new system from Scale AI that captures how enterprises make decisions and closes that gap. It puts your actual experts in the loop, aka the people with years of institutional knowledge, and encodes their judgment into your AI systems. Every correction, every override comes with full context. It's actually really interesting. So the next time your AI makes a call, there's an expert's reasoning behind it. That's how you go from a cool AI demo to an AI system you can trust. Visit scl.ai slash dialect. That's scl.ai slash dialect to learn more. All I'm doing that back to the episode. One thing that what you're saying makes me feel is I totally get that approach. And also for myself as a product builder, I often don't know what I'm doing until I do it. And I can't think it through until I've done like five different things that I can't explain. And then I'm like, okay, here's the thing and I understand it. 
Is what you're saying different from that? Or is it the same, just like said differently? Maybe it's different, but I can see that workflow. I feel like that workflow is kind of like kind of like understanding, like you're trying to understand what you're doing. Yeah, it's building, it's like making things as understanding. And I think that's fine. I think the problem there, so it's become sometimes it's like, you kind of like don't know, are you, I think like conceptual work, sometimes in design, I consider this like a conceptual work where it's like, the output of this is a concept, it's not like, we just shouldn't deliver this necessarily. But this is like, I made this, I went through this process of understanding this problem and I have a concept for it or I have. What's an example? Because I would assume that the output of a design process would be a figma that you could export. So what's an example of a concept that comes out of a design process? Well, I think in the past, in a large company, I've used the concept term to not to scare people. So usually, it's like rethinking some area completely. And that's like a concept. It's not like, it's like a concept car. So it's like, this car won't go into production, but here's some ideas that could influence the next car. So it's like, you're trying to like, sometimes people, I don't know, this is like, partly like a large company thing, but I think it can happen in small companies too, is that once you see something very different, your fears might start coming up. Like, well, if we change this, what else is going to happen? Like what's going to happen? What's going to break? But the point is not right now to decide that. Does this concept, this new idea have merit? And can we like, do we think it's important enough for something to take it further and then deal with the problems later? So it's kind of like, you're kind of like trying to divide like the decision, like which decisions you are making now. And like, I've used that, yeah, like in our company, in our company, I just like, completely rework a surface and say like, hey, I think the project should look like this, like, which is completely different from what it's currently is. And then people like, oh, that's actually interesting. Or they're like, well, if one worked for this and that, and I'm like, okay, that's fine. And it's like, it's a way to like, and it's maybe like a figment design or a prototype. So it's just like, I think there's, like, even with all this tooling, like the output shouldn't always be like, we ship something, like it's sometimes the output can be something internal that like, hey, we just now we have a like a better understanding of this problem, we can like, tackle it better. And like, we can actually make it into a shipable thing. But like, we first try to like, think about it before doing it. Right. And to you thinking about it can include building, stuff. It's just the reason you're building stuff is not to ship it the next day, it's to understand it better. But thinking can be designing, it can be writing, it can be, you know, talking about it, that kind of stuff. Yeah. And something like I did have to share with the company recently was that I like, we always care about the quality or a lot. But I think like, and like, kind of like this thinking process of like, are we doing the right thing is kind of like what we're trying to like, like decide. 
Sometimes now with AI, it's actually like hard to tell, like it's kind of like, if the tooling changes all the time, like people, like the the LLMs are not deterministic anyway, like, you know, as no like, like how useful this thing could be. And then there's a moment you just have to decide like, yeah, I think we should, obviously, we can try this internally, but we also need to try it with customers. And you kind of like put it into some kind of beta or something. So I think like there's definitely nuance to this right now, that there are situations where it's just like, and it's always with product building, there's a limit how much you can like, think about it inside your company, until you need to actually put it somewhere to someone else to use. And then you learn from that use case. But again, like it's more like every stage, you kind of have like some kind of goal in mind, like, now we've we put it to beta, like the goal should be like, understand the workflows and how people use it and how they want it to be better, not to like, something else, like not to try to ship it as fast as we can or something like we, we should be honest about like, what is the actual goal for for this stage. So we've talked about how AI has changed your internal workflow. I'm also curious how it has changed your product strategy and how you think about building products, not like the actual work of building products, but what kind of product to build and what, for example, should you let AI agents connect into your product, which I know you've done, versus build your own AI, like into the core feature, should you have both? What should they be able to do? Like, yeah, how does it affect your, your product strategy and your vision for what a good product is? Yeah, I mean, I would say like, we are now adding agent like linear agent that can like, has context of your work and the context of the organization and the products you build that you can use in different ways and they're like the PM workflows. You can also like, as a designer use it the way to like, understand the problems. And then we will also do like a coding agent where you can actually like, start like writing code with the agent. And it will interesting, it will, you can see the diffs online. So it's kind of like a cloud conductor environment where you can kind of like see the changes and you can kind of, you can guide it. And we think like, the strategy has definitely like, changed. And we are just trying to like, like understand like, what are the problem set of today? We think like, one of the things is like, what is changing is that I think historically people thought like issue tracking is this kind of like, like, it's like a ticketing system for the kitchen, like, but engineering. So it's like order comes in, like someone orders fish. So now that fish goes into the kitchen, there's a ticket like make fish. And that's like, kind of like people think about issue tracking. And like, we kind of never thought about it that way. Like for us, like, linear is more like the backbone we rely on, like collecting signals and collecting problems or collecting decisions, like we should do this thing. So I think, like, I think there's definitely like shift we have to like teach people, like, these products is really meant to like, improve your team's workflow, not to be just kind of like a weird ticketing system for you, like, different parts of your organization. 
And that's probably going away with the agents: you don't need that anymore; the agent can do those tickets and it can also complete them. But we think there's still value in collecting that context, shaping the work into something actionable, and providing agents good context from the environment. The one lesson we learned with the agents is that it's tough when we are not ourselves in control of it. We do want to support all companies and all agents as much as we can, but if we have ideas for it, we can't act on them; it's on them to do it. So now, one of the reasons we are doing this coding agent is that we see a much smoother end-to-end workflow where you start some of your tasks in Linear. You can ask the agent, hey, does this thing exist already? If not, make an issue, make a work stream out of it, then start working on it and start writing the code, and then you can see the diffs coming in, you can review it and merge it, or you can see the prototypes. One of the problems I see when I use these tools, like Claude or ChatGPT or Codex, is that I always have to tell the agent really explicitly what context to bring. The value with Linear is that the context lives there, and if we inject it smartly as part of the work stream, it's much more natural; we can design the flow so it makes sense, and we don't spam the context windows or something. And I think we see this future where Linear is kind of like the multiplayer or organizational context of what's happening in the product, what the potential features are, the state of it. You might still run local agents, but there are situations where you should just automate some of the bug fixes, or automate the small tasks, and just do it in Linear and let it run in the background in a sandbox while you run your own work on your own computer or somewhere. That's really interesting. From a product strategy perspective, I'm really curious about the decision to integrate your own agents. Because before we did this interview, I didn't know about the Linear agent. And I was sort of sitting here thinking, wow, this is the perfect business for this era, because it's still SaaS. There are no AI token costs, but it is the place where you control all of the AI. So all the other companies, all the other coding agents and whatever, have to deal with all the token costs, OpenAI and Anthropic and whatever. But you're the one who has this sort of sticky interface, because it's where everyone is kicking things off from and where they're recording all the information. But you don't have to pay for any of the actual tokens. And it sounds like you're adding a layer where you will have to pay for the tokens. And you may prefer that, and I think the reason you're giving is that a tighter integration between the two means you can do more interesting, more powerful things.
How did you think about that from a business perspective, changing your margin profile that much? I don't know. I don't know if I may add how much linear cost a month, but I assume there's a lot of interesting discussion there about how adding in token costs change the business model. Yeah, I mean, honestly, I think it's something we'll have to see in the future more. We definitely thought about it and have some calculations or thinking on it. I think on the coding agents, we do have to offer usage-based billing because it can get very expensive. On the basic linear agent functionality that answers questions for you, that should be more included into the system. We'll have to see how much the usage actually is. But in linear, it's going to be a fairly focused platform for certain kind of things. You shouldn't be running random things here. I think it should be still pretty clear what you should be doing inside linear and what kind of workflows are you running there or workloads. So we're not trying to build this very generic agent platform. It's more like the product context or the product memory platform where you can integrate those agents. You can use linear agents from other tools too, or you can bring other tools into linear. So it's just a way to work around your product. It's kind of like an API into the product thinking versus using more of the normal tools where it's like you always have to tell it to co-fetch this thing, co-find this thing because it doesn't have any understanding like what do you generally do or what kind of context that might be existing already. Can we see a demo? I'd love to see it. Yeah. Is the screen sharing okay? Yes. All right. Yeah. So what we have coming up, this is actually my real linear instance and what we have come up with, we do have now, if you do a new tab inside linear, there will be the classic box of what do you want to do. There's also this other interface where you can, if you are inside some context like a project or something, you can do the work there. But for example, the one, we will have skills and the skills, we will have guidance, organizational guidance and personal guidance and you can have skills like personal skills or organizational skills. For example, what I was mentioning earlier is that sometimes I want to understand problems. I want to understand this problem of multiple assignees. I made this skill which is essentially like, I fed some materials from our blog and it's like, act like a linear product teammate and then it has this format of like, it starts with the underlying need and it has this way it goes through the problem. And so I made this to kind of like, help my workflow like something just quickly and trying to understand sometimes this feature request. So I can like do it like, well, let's do the multiple workspaces. So we have this like, collection of stuff about like multiple workspaces and then it can kind of like, go through there. There's like, probably like many different requests and like, it will try to start thinking through it. They will look into the customer activities. It will look through the different things. What model is it under the hood? I think we'll eventually have a lot of multiple models, but now we use Claude for this. Sonnet or Opus? I don't actually know. So it starts going through and it's like, okay, like there's a real need, but it's like, more complicated that it sounds. So companies maybe want this like multiple workspaces for different reasons. 
And I think like my understanding generally is that they want like one place to have this like billing and governance, but then they might have multiple different divisions in the company. And so it's not, they would want to like divide the work space more, but they might still have some kind of like overarching control. So it goes through the kind of like trying to like explain like what is what is missing and what is good about it. It also like makes this like few recommendations of the product direction. So like do this or that on that. So it's like, it helps to like kind of like make this something that is like quite like not like, I don't know, complicated into some kind of like actionable thing. And like, we can talk about this as a team or something. But similarly, like more like a micro example is that like, if there is like, I want to make a new theme like new dark theme. What's a theme? Themes are just like in our app, like so you can have like, okay, got it. Yeah, like the way it looks for the way. So maybe I want to create a new version of like a dark theme, like make it just black. So like, no, TASCO, like a coding agent on it. And it should start like looking into, it can look into the code base and they can try to understand code base, obviously. And then at first it's like, it's turning it into an issue and then like delegating into into an issue. So, so I created this issue. It's in progress. It's delegated to linear. And then now like linear starts working on it. There's a spinning up the sandbox for it. And like, I think the broad like kind of the one of the benefit on issue is like, now people know I'm like doing this. So the team knows I'm doing this. And I can say like, hey, non like, FII, I'm like, I'm doing this. So like, he can also come here to look at this, like what is happening, like, and then the thing is like this agent session is visible to everyone. So it's visible to me and to him. So I can call like once it's, it will take a while. But once it's like, does it, like, we can both jump into this chat and tweak it together if we want to, or just like, kind of see what happened here. So it's like similar to like what you do on your computer. But then now it's like kind of like happening in a shared context and there's more like, understanding like, where did this come from? Okay, it's came from me. If this could come from a customer like discussion or the shared context is interesting. Like so, so two people can be in the same chat. Yeah. So I know that's really cool. I don't have none ready to demo this. But like, we did have this like instance kind of like accidentally, we noticed this like this is actually useful sometimes is that it a non or was our head of product and like corner was our head of design, they both like, we're working on some tweaks on the inbox. So they, they could kind of go back and forth like a PM and a designer could go back and forth. It's like, no, it's like, it's not quite right. Like, let's fix this thing. And then they could both like see the kind of like the, the preview link. Let's see if I have something here. Okay, so there's like one like my previous pull request. So we will have like pull request here, you can kind of see the activity, but you can also see the code. So you, you, you see the code divs. And then if you want to like comment on it, or you want to like, work on this code with the agents, like, no, this is not, not right. Like, and then like, I'm like, work on it. 
And a similar workflow works for code reviews, where an engineer might come in and say, this is not right, and they could just task the agent to fix it, versus telling the other engineer to fix it. So it kind of collapses the cooperation loop a lot more, and it allows multiple people to use the agents to work on one thing. And then here, this is only a backend change, so there's no client review I can do. But if this were a client-facing thing, I could open the preview link and actually see how it looks live. That's interesting. But yeah, those are a few things we're adding. I'm curious about this. One interesting thing is that it seems to increase the surface area of the product a lot. It's about a lot of different things that already exist to some degree somewhere else. And obviously there are things you can do differently: you can have multiple people in a chat, it's more plugged into Linear more generally. But you kind of have to recreate a lot of stuff that's already being built by a lot of other companies. So how do you think about that, and the tradeoffs of doing that well, especially entering something like AI coding, where all the big companies are going as hard and as fast as they can to build AI coding agents? Yeah, there's definitely a question we need to keep asking ourselves: what is our advantage, our unique advantage, here? And honestly, I don't think we will solve all the different coding needs, but we honestly also don't have to. What we see the value in is sitting upstream, where the work is coming from. There's really good leverage there that we can offer to companies: work comes in, or bugs come in, and they automatically get spawned into agents, delegated to the agents, and engineers never even see them. Or if they see them, they see them once there's a fix already being built. So it maybe doesn't work for all kinds of situations. It's not the agent you go to and say, hey, build me a new product; we don't think that's where we should be working. It's more: you have a large company, you have a lot of things requested from you, a lot of bugs filed, so how do we reduce that workload for you automatically? And then, yeah, you can use the other coding agents to do other kinds of work, but this is what we focus on. That's really interesting. But then, generally, we've been thinking about the problems like: we don't want to be a kitchen-sink product where we do everything for everyone. Sometimes companies end up in that state because you have the enterprise buyers and you have the checklist, and then you just need to get the checkmark into the right spot on the checklist. We don't think those things create a good product experience. So the way we always thought about it and built the product is that we try to feel what the natural next step in this workflow is. If we go from an issue, the natural next step is that someone needs to fix it. So how do we help people fix it faster? One option is we do these cloud agents, and then you can fix it.
But now the cloud agents does this stuff, like, how do you know it's like good. So then you need to see the code, like you need to see the diffs and like you need to run the builds and like whatever. So it's like, we are more always focused on the workflow and how do we like improve it? Like how do we make the make the kind of like help companies to output better and faster versus trying to like own every surface. So we don't have to own like every piece of the surfaces, but like we are kind of like trying to like find this optimized workflow for people to do certain kind of like product things. So we're almost out of time. My last question for you is if you had to project how product development will change over the next five years, let's say, what will be what will be different? And also what will be the same? I think the one difference, I think there's going to be more of this like self driving aspects of like you can set up some kind of rules or guidance and we even like building something like around like a project memory or like a like, so like you could have like a common workflow, what we do is like we have projects going on. It's like a project is often like a feature part of the interface, or part of the part of the product. And then like we have a lot of like feedback and requests and things coming in. I think there's opportunities like turn that into like more like that the product or that feature is it's kind of like an agent itself. And it kind of like tries to make decisions based on the input it creates. And then it can still have like maybe ask a certain amount of input, but it could run automatically. It's like, hey, like, these kind of I seeing these patterns and these patterns pointing to this solution and the solution seems to be like potentially something that works for people. And I built a made up build, I send it to some customers and they say it's the feedback is good. So it's kind of like it gives you like it does things on its own based on some kind of context and like a rule based system, or like some kind of guidance. And then I do think like the thing I'm still like, I think people should still think even in this world where agents do some of the thinking and like does run automatically to some degree, I do think like it makes people have to be like a lot more explicit like what do they want or like how do like what is worth doing and like what are the areas we should be doing. And I think it's so there I think like a lot of this like still like humans having meetings or discussions or writing issues or writing documents, I think it's like reading documents like that there's still going to be like a place where humans need to like understand the stuff to like you can't just outsource the thinking purely to the AI or agents. And but like you should like the more you can clarify your own thinking and the strategy or something, the better it's for your team, but also it's the better it is for the agents to because then like you can codify some of those like strategies or thinking into actual like these out and almost things. So I think like I personally don't see the future in a way that we are replacing humans and I don't quite believe in it. Maybe I don't want to believe in it, but I think it's I think things will change like that the roles will change. Maybe there's some like movement around exactly like what does engineering do how many engineers we will need and like what is the job in the future. 
But I still don't see how the agents, how the AI, actually does all the thinking and makes the choices or decisions. I think product building is still kind of a craft or an art. A lot of times we talk about intuition: we just decide things based on how we understand the problem. We hardly use any data as part of decision-making. Sometimes we use it to look at something, but it's more like a signal. So I never personally believed in this A/B testing and data-driven product development, which I think could work well for agents, but I don't think it works for all kinds of products. And I also think the best products are not necessarily built that way; you still need the human touch of what is interesting, or what would make this good. I love it. Kari, thanks so much for joining. Yeah, thanks for having me. And subscribe. Oh my gosh, folks, you absolutely positively have to smash that like button and subscribe to AI & I. Why? Because this show is the epitome of awesomeness. It's like finding a treasure chest in your backyard, but instead of gold it's filled with pure, unadulterated knowledge bombs about ChatGPT. Every episode is a roller coaster of emotions, insights, and laughter that will leave you on the edge of your seat craving more. It's not just a show, it's a journey into the future with Dan Shipper as the captain of the spaceship. So do yourself a favor, hit like, smash subscribe, and strap in for the ride of your life. And now, without any further ado, let me just say: Dan, I'm absolutely, hopelessly in love with you.