Notion’s Token Town: 5 Rebuilds, 100+ Tools, MCP vs CLIs and the Software Factory Future — Simon Last & Sarah Sachs of Notion
77 min
Apr 15, 2026
Summary
Notion's Simon Last and Sarah Sachs discuss their five-year journey rebuilding AI agents, from early fine-tuning experiments to today's custom agents platform. They detail how shifting from few-shot prompting to tool-based architectures, embracing CLIs and MCPs, and distributing tool ownership across teams unlocked velocity. The episode covers their philosophy on model selection, evals infrastructure, the emerging 'software factory' concept, and how Notion positions itself as the system of record for agentic work.
Insights
- Notion's competitive moat isn't the AI model itself but the data layer and system of record—they're building for agents as first-class users, not bolting AI onto existing products
- Shifting from centralized few-shot prompting to distributed tool ownership via goal-driven definitions was the biggest velocity lever, enabling any team to own their tools without breaking the whole system
- The real unlock for agents wasn't reasoning models alone but infrastructure: progressive disclosure of tools, proper permission models, evals frameworks, and the ability for agents to debug themselves
- Usage-based pricing and model selection (Haiku vs. Opus) should be driven by task requirements and customer value, not margin maximization—Notion actively nudges users toward cheaper models when appropriate
- Meeting notes as a data capture primitive creates a powerful flywheel: transcripts become searchable context, which improves agent performance, which drives adoption, which generates more data
Trends
- Coding agents as the kernel of AGI—agents bootstrapping their own software, debugging, and maintaining codebases with minimal human intervention
- Software factory pattern: multi-agent workflows with specification layers, self-verification loops, and automated PR/merge processes replacing traditional development
- Progressive disclosure and tool filtering becoming critical as agent tool counts exceed 100—models struggle with undifferentiated tool lists
- Agentic search and retrieval optimization diverging from human search—ranking for top-k recall matters more than positional ranking; query diversity and parallel exhaustive search outperforming vector embeddings alone
- Enterprise AI adoption shifting from wrapper products to system-of-record plays where agents operate natively within existing collaboration platforms
- Model quantization and quality variance across vendors becoming a competitive intelligence problem—same model, different quality depending on provider and serving tier
- Open-source models filling the mid-tier intelligence/price/latency triangle that frontier labs aren't optimizing for
- Permission models and trust boundaries becoming product design problems, not just security concerns—users need visibility into what agents can and cannot do
- Eval infrastructure as a platform problem: teams owning their own evals, CI integration, frontier evals at 30% pass rate to measure headroom
- Self-healing agents and agent-driven debugging creating new operational patterns—agents fixing their own instructions and permissions in real-time
Topics
- AI Agent Architecture and Tool Calling
- Few-Shot Prompting vs. Goal-Driven Tool Definitions
- Model Selection and Pricing Strategy
- Evals Infrastructure and Model Behavior Engineering
- CLI vs. MCP Trade-offs
- Permission Models and Trust Boundaries
- Agentic Search and Retrieval Optimization
- Software Factory and Multi-Agent Workflows
- Meeting Notes as Data Capture Primitive
- Custom Agents Product Design
- Distributed Tool Ownership and Velocity
- Fine-Tuning vs. Prompt Engineering
- Progressive Disclosure in Agent Interfaces
- Enterprise AI Adoption Patterns
- System of Record vs. Point Solutions
Companies
Anthropic
Partnership on frontier evals and model snapshots; Notion provides feedback on enterprise work vs. coding agent capabilities
OpenAI
Early partnerships on function calling; GPT-4 access in late 2022 sparked initial agent work; current model provider
Google
Gemini integration for image generation; partnership on model capabilities and token tracking
Slack
Integration point for custom agents; bug triaging agent example; native tool building vs. MCP
GitHub
Integration for agent-driven code review, PR merging, and repository management workflows
Linear
Search and task management integration; MCP-based tool access for agents
Jira
Search integration for agents; custom tool building vs. MCP approach discussed
Gmail
Email integration option; comparison with Notion Mail for agent-native capabilities
AWS
Analogy: Notion is to collaboration as Datadog is to AWS CloudWatch—expert layer on top
Datadog
Referenced as vertical SaaS comparison; deep customer understanding vs. Notion's horizontal approach
Robinhood
Sarah Sachs' prior employer; compliance and security learnings applied to Notion
Frontier Labs
Generic reference to cutting-edge AI labs; Notion partners closely on capability development
Fireworks
Early partnership for fine-tuning function-calling models on Notion functions
Bedrock
Model serving platform; quality differences observed vs. first-party and other vendors
Hugging Face
Implied reference to open-source model ecosystem filling mid-tier intelligence/price gap
People
Simon Last
Co-guest discussing agent architecture, tool design, and five rebuilds of the agent harness
Sarah Sachs
Co-guest discussing team culture, evals infrastructure, and model behavior engineering
Alessio
Podcast host and interviewer
swyx
Co-host and interviewer
Ivan
Mentioned as co-leader with Simon; credited with creating low-ego culture enabling rapid rebuilds
Zach Trotar
Recently joined from Ember; leading vertical team focused on meeting notes quality and data capture
Jimmy
Built image generation for cover photos as side project; example of distributed innovation culture
Brian Levin
Evangelized demos-over-memos concept; influenced product conviction and prototyping culture
Quotes
"Notion is dead-set to being the best system of record for where people do their enterprise work. So we will always support our MCP, and so far as other people are using MCPs. Regardless of our perspective, we've put a lot of effort into our MCP."
Simon Last•~01:15:00
"Everything is a coding agent. I think that's one direction. And then I think the exciting thing about that is your agent can bootstrap its own software and capabilities and actually debug and maintain them."
Simon Last•~00:25:00
"My job was not to be the ideas person or the technical expert. My job was to make it so that everybody understood the objective, had a resource to help prioritize what they should work on and had an avenue to prioritize what they thought was important."
Sarah Sachs•~00:45:00
"We don't focus too much on training. I think of that as that's an implementation detail. Like what's the outer loop, right? If the outer loop is you have a model and in some harness or system where it's interacting with the system, that needs to work."
Simon Last•~01:35:00
"Our job isn't to build the best harness for agentic work. Our job is to be the best place where people collaborate."
Simon Last•~02:05:00
Full Transcript
Broadly speaking, I'm really bullish on CLIs. I'm still bullish on MCPs in a certain environment. I think it's really great when you want a narrow, lightweight agent. I think there's definitely a lot of use cases where you don't want a full coding agent with a compute runtime. And also, you want it to be more tightly permissioned. MCP inherently has a really strong permission model. All you can do is call the tools. MCP is just the dumb, simple thing that works, and it is pretty good. Notion is dead set on being the best system of record for where people do their enterprise work. So we will always support our MCP insofar as other people are using MCPs. Regardless of our perspective, we've put a lot of effort into our MCP, and we have a fantastic team that we're building. Hey, everyone. Welcome to the Latent Space podcast. This is Alessio, founder, and I'm joined by swyx, editor of Latent Space. Hello, hello. We're back in the beautiful studio that Alessio has set up for us with Simon and Sarah from Notion. Welcome. Thanks for having us. Thanks for having us, yeah. Congrats on the recent launch of custom agents. Finally, it's here. How's it feel? We ship things slowly, so it had been in alpha for a little bit, and at the point at which it's in alpha, there's a group of people making sure it's ready for prod, and then there's a group of people working on the next thing. So some of these launches are a bit of delayed gratification, so it's quite nice to remind yourself of all the work you did, because we do have a habit of being two or three milestones ahead, just because you have to be, and you can't get complacent. But it's been great that people understood how this is helpful, and I think that's just easier in general, building AI tools today than it was two, three years ago. People get it, so that user education is easier. It was our most successful launch in terms of free trials and converting people and things like that. It was really successful, so yeah. But there's a lot to build. Making it free for three months helps. It was definitely super exciting for me because it's probably the fourth or fifth time that we rebuilt that. Yes. And you've been building this since 2022. Yeah, it was right when we got access to GPT-4 in late 2022. Okay, let's make an agent, although I used the word assistant at the time, the word agent wasn't around yet, and we'll give it access to all the tools that Notion can do, and then it'll run in the background and do work for us. And then we just tried that many times and it was just too early. I need to force you to double-click on that. What is too early? What didn't work? Before function calling came out, we were trying to fine-tune, with the frontier labs and with Fireworks, a function-calling model on Notion functions. This is right when I joined. I joined because we needed a manager; Simon needed to be able to go on vacation. That's around when I joined, so you can speak much more to it. Yeah, we did partnerships with both Anthropic and OpenAI at different times. When we first tried, there wasn't even a concept of tools yet. We designed our own tool-calling framework, and then we tried to fine-tune the models to use it over multiple turns. And it didn't work well out of the box. I think the models were just too dumb, and the context windows were also way too short. And yeah, we just banged our heads against it for a long time.
Unfortunately, there were always glimmers that it was working, but it never felt quite robust enough to be a useful, delightful thing until the big unlock, which was probably Sonnet 3.6 or 3.7 early last year. And that's when we started working on our agent, which we shipped last year, and custom agents is kind of a similar capability. That one just took longer because we wanted to get the reliability up a lot higher, because it's actually running in the background. And there's the product interface of permissions and understanding: this custom agent is shared in a Slack channel with X group of people, and it has access to documents that are surfaced to Y group of people, and the intersection of X and Y might not be whole. So how do you build the product around making sure administrators understand that permissioning? It took multiple swings. Everything is hard at the end of the day. Yeah, I'm curious, when the models are not working, how do you inform the product roadmap of, okay, we should probably build expecting the models to get better at some reasonable pace. But at the same time, you had a lot of customers in 2022. It's not like you were a new company with no user base. Yeah, I mean, I think there's always the balance of wanting to be AGI-pilled, thinking ahead and building for where things are going, but also wanting to be shipping useful things. And so we always try to keep a balance there. We try to take a portfolio approach. We're always working on multiple projects, and we're always trying to maintain things that we already shipped, ship new things that are imminently workable, and make them really good. And then we always want to have a few projects that are a little bit crazy. And what are the AGI-pilled projects that you have today? You don't have to share exactly what you're working on, but I'm curious what are things today that maybe in 18 months people will be like, oh, obviously this was going to work. 18 months? Yeah. 18 months is a lifetime. Yeah, there's a number of things happening. I think one thing that's becoming more clear is that coding agents are the kernel of AGI. Everything is a coding agent. I think that's one direction. And then I think the exciting thing about that is your agent can bootstrap its own software and capabilities and actually debug and maintain them. And so we're thinking a lot about that. And then another category of things I'm really excited about is what we call the software factory. A lot of people are using this sort of word. Basically, it just means: can you create, as automated as possible, a workflow for developing, debugging, merging, reviewing, and maintaining a code base and a service, where there's a bunch of agents working together inside, and how does that work? If you think back to your initial question, why did this take so long? I think something, I didn't say that, but yes. Okay, go ahead. Why? What changed over the three years of trying? Because most people always say: it didn't work yet, then reasoning models came, then it worked. I was like, okay, let's go a little bit deeper. I mean, that's part of it. But the other part of it, which I actually think is what will set Notion apart for every new capability, is we have two skills that are crucial when it comes to frontier capabilities. One is not letting yourself swim upstream.
So, quickly realizing whether you're just pressing against model capabilities versus not exposing the model to the right information or not having the right infrastructure set up. That in and of itself is a skill of intuition. And the second is to see, okay, you're not swimming upstream, so which direction is the river flowing? And how do we think ahead about the product and start building it even if it's not great yet, so that when it is there, we're ready for it. And those can sometimes feel like counterintuitive things. Like, we can be trying to fine-tune a tool-calling model when they don't exist yet, and the trick is to not do that for too long, but to realize that there was something there. And we've had a lot of things where we were just not swimming in the right direction with the stream. I think we had multiple versions of transcription before we got meeting notes, right? Oh my God, I talked about that. Yeah. Yeah. We really closely partner with the frontier labs on capabilities. And we also have to have strong conviction, as those capabilities move, that Notion is about being the best place for you to collaborate and do your work. And how does that narrative change if the way that we work changes? Yeah. You told me you were a fan of the agent labs piece. I show that thesis to so many candidates, I have it as my Chrome autofill at this point. It's one of my most-used. It's the: here's why you should work at Notion and not OpenAI, here's what's different about it. Yeah. And here's why it's not just a wrapper. I actually think more and more people understand it's not just a wrapper. And by the way, in the beginning, parts of what we built were wrappers on functionality that works well, but that's not really the product that drives revenue, and that's not necessarily always what users need. Notion is the AWS wrapper, but the wrapper is very beautiful and very well polished. So the analogy that I've been coming back to is Datadog and AWS. Yeah. Datadog could not exist without cloud infrastructure that fundamentally works. And AWS has a CloudWatch product, but Datadog is an expert on understanding how people want observability on the products they launch, and we're experts on understanding how people want to collaborate. And that's really where our expertise lies. Totally. Regardless of the tools that we use. I'm curious how you think about implicit versus explicit expertise. I feel like Datadog is half and half implicit and explicit. They understand, across markets and industries, what engineering teams usually look for. With Notion, it's almost like more of the expertise is at the edge, because as a platform, the person sourcing the tool and the end user are not really the same. With Datadog, the end user is always an engineering lead, a kind of SRE-related person. With Notion, it can be anything. So I'm curious how you put that expertise into a product, versus obviously a database, which doesn't quite work in this case. But it's a little bit differently shaped. I think a classic vertical SaaS, like Datadog, is kind of like that. They understand their individual customer very deeply. It's kind of a narrow slice. Notion has always been super horizontal, and our task has always been to balance these two somewhat opposing forces: we're listening to our customers and what they want us to build, and it's a broad slice.
And then also we're thinking about, okay, how do we decompose what they want into nice primitives that are really nice to use and will get us as much bang for the buck as possible, and then maintain the whole system, make it all super clean and nice to use. We still have user journeys. We still focus on the core. I actually think the failure mode of our team is when we focus too much on what are tools that are cool tools. That's when we have the least velocity, because you still need some sort of focus on a user journey. So for instance, we'll all sit down every Friday and look at the P99 most token-exhaustive custom agent transcripts and just look at why they didn't do well and cut a bunch of tasks. We still focus on: this should work. Email triaging should work, right? And similarly, we were chatting before we started filming about, okay, how can I do PDF export? That's functionality that then merits: maybe we should build a tool that has access to a computer sandbox and a file system and the ability to write code. But it's because we're thinking about the fact that our users, to do their daily work, need to export PDFs. Not because we're like, I think a computer tool could be cool, let's just see what happens. We have to focus on some user journeys. Otherwise we just don't have enough strategy to prioritize. I think there are a lot of really strong opinions that you have. Do you have like a Tao of Sarah Sachs? How do you run your team? I feel like you've accumulated all these strong opinions. So obviously part of this is your Token Town thing. I think the Tao of working with Sarah Sachs is, it depends who you ask. It depends if you're on my team or a partner or a vendor. Yeah. There are other people who want to run their teams the way that you're running these things. And then also similarly, Simon, when you did the custom agents demo, you had: we've been using custom agents and here's the super long list of everything that we do. No humans ever read it. That's what you said. I was like, yeah. So I think for me, something that I learned very quickly and became very comfortable with was that my job was not to be the ideas person or the technical expert. My job was to make it so that everybody understood the objective, had a resource to help prioritize what they should work on, and had an avenue to prioritize what they thought was important. And I think that's true with all leadership, but especially on the AI team, almost all of our best ideas come from prototypes from people who have a cool idea because they saw a user problem, and it's a huge disservice if all of those ideas have to pass the sniff test of what me and a product partner, or Simon and Ivan, decided was the direction, right? Because a lot of what we're doing is leaning into capabilities. So I think that's the first thing: I don't really view the role of engineering leadership as hierarchical, nor has it ever been, but especially now, we're very willing to change direction because the proof is in the pudding. And I think we have rebuilt our harness three or four times.
And when you do that, the second rule of engineering leadership is you need to build a team that's comfortable deleting their own code, is very low ego, is driven by what's best for the company, and doesn't write design docs because they think it's their promotion packet. And that's a culture that Notion had long before I joined. Our willingness to just swarm on different problems and redo things that we've built before because something has changed: there's a lot of friction that can happen at companies when you do that, and it doesn't happen at Notion. And because it doesn't happen, when new people join, they don't want to be the ones saying, we shouldn't do this, I wrote that code. So you create a culture where everyone talks, and that culture comes directly, I think, from Simon and Ivan, because they're very open-minded. Anything you'd add? I'm not a manager like Sarah is. A lot of my role is really to try to think a little bit ahead, make sure that we're building on the right capabilities, and then the prototyping stuff. And yeah, it's really critical to always just be starting again. Okay, this is a new thing. What does this mean? What if we just rethought everything, rewrote everything? And obviously just doing that in a loop every six months. Yeah. Do you believe in internal hackathons for this stuff? I think there are two different versions. So one is we just have a solid bunch of senior engineers who come and go on what we call the Simon Vortex, productionizing what we've built. Because when you're in the Simon Vortex, the velocity is super high, the direction changes daily, and it's meant to be the equivalent of a skunkworks. We don't need to do hackathons for that. We need to have senior engineers that we trust to come in and out of those projects. For instance, management boundaries are really loose: you report to him, but you work for her right now. That is something that's important when we hire managers, that they don't care about that, because we tend to form structures after we ship things, not before, just historically. Otherwise it would be too territorial. The second thing is we do have company-wide hackathons. Actually, we just had our demo day this morning for the hackathon we had last week. That's more for people who aren't directly working on the project, feeling like they have the time to pause and learn how to make themselves more productive, or how they would use Notion custom agents to build something. Part of the hackathon was actually encouraging everyone across the company to build their own agentic tool-calling loop from scratch, following a blog post on how to do it, I think. Because we want... Is that the context engineering one? Yeah, we want everyone in the company to use Claude Code, or whatever coding agent they please, and understand that fundamental. So we set aside a day and a half, and we, all leadership, encouraged everyone on their teams across the company to do it. So we have hackathons like that. I would say, facetiously, everything we build is a little bit like a hackathon until it graduates and puts on big boy pants and has a product ops rollout later and has assigned data scientists and stuff like that. Security review, enterprise stuff.
Actually, security review is one of the things that we bring in first, because otherwise it just slows us down way more and causes a lot of tension, and they build better product if they're involved early. That's probably the first person to get involved in something. That's the right PR-approved answer. No, but it's not just PR-approved. It's actually real. It's scar tissue. Yeah. Because my background is, I worked at Robinhood for a number of years. So compliance and things like that are a little bit more, you learn the hard way when it doesn't come naturally. Yeah. I think the hackathon is really important for uplifting the general population. But if that's the only way you can build new things, you're toast. It has to be the daily process, building these new things. And I think in the AI era, a lot more leverage accumulates through the most curious and excited people. And so we're all about just activating that energy. If someone's prototyping something on the weekend that they're excited about, and it's important, that should be the main thing that we're doing. Yeah. It's not a hackathon scheduled once a quarter. It's just a daily process. It's part of the culture. That's how we shipped image generation in Notion. It was always this thing that would be nice to have, but it wasn't really clear where it was aligned in product priorities, and it'd be a lot of work. And we had someone on the database collections team, Jimmy, who was like, I really want to do image generation for cover photos and inside Notion. And we're like, if you want to build it, do it, please. We encourage you. We gave him all the resources: working directly with Gemini, being able to track the token usage, working through our endpoints. We gave him email support, everything. And now we have a full project. Yeah. That's why you can't have ego as a leader. That's how we work. What's the size of the team today, both engineering and overall? I manage the team that we call core AI capabilities and infrastructure. That's about 50 people. But then we have partner teams that do packaging, so how it shows up in the corner chat versus custom agents versus meeting notes. That's another 30, 40 people. And then every team that has a product surface at Notion that a user can interface with owns the tool that the agent interfaces with. The editor team, the team that did CRDTs for offline mode, is the same team that handles how two agents edit competing blocks. It's the same problem. The team that built the underlying SQL engine is the same team that owns how the agent asks it to run a SQL query and does it performantly. So, in that regard, anyone working on product engineering is tasked with making things work for customers that are humans and agents, because over time, the majority of our traffic will be coming from agents using our interface, not humans. And our objective is to make it so that the whole product org is building for agents. How has that changed things internally? The activation bar has been lowered a lot. Anybody can create a prototype somewhat easily, especially in an existing code base. Have you raised the bar on what type of prototype people need to bring forward to be taken seriously? I think the bar has been lowered in many ways.
One thing our team built that is really cool: our design team made a whole separate GitHub repo called the design playground. It's basically a bunch of helper components for quickly throwing together UIs. And it's become actually quite sophisticated. It has an agent in there, and that's pretty fun. So they pretty much don't do mocks. They just make full prototypes. Here it is, it works, they give you a URL, and then, okay, we have to make the real production version of that. And for engineers, a prototype looks like making it a feature flag that actually works. That's the bar. So something to understand that's really unique about Notion, and one of the reasons I joined, is that we're super lucky: no one uses Notion in their job as much as people who work at Notion. I think there are very few companies like that; maybe if you worked on Chrome, I guess. Everything we ship, we ship internally first and get a lot of really quick feedback, and also sometimes our dev instance is totally borked and you have to change a bunch of flags to get things done. And that's everyone: people who do IT ticketing, people who do supply chain procurement, recruiting. Everyone is using the same instance of Notion with a lot of flags on for these prototypes people build. And Brian Levin, one of the designers on our team, I think, evangelized this concept of demos over memos, which has been very good for building demos. And I think it's put a big pressure point on us to have really strong product conviction, because if anything can be demoed, you really need a strong filter to make sure that if you're doing X amount of work, you're focusing on one tower; you're not just building a really flat hill. That's actually where I think there has to be more conviction from our PMs and our designers, the company really, to have conviction about what journey we're going on. But overall, I feel like it works pretty well. Almost all the engineers have good enough taste to realize whether a prototype actually makes sense in the product or not. So it's not that common that I would see a prototype and think, oh, this makes no sense. People are doing reasonable things. And then it's just a matter of which things we build first, and then often just figuring out how to turn it on and off. In our experimental chat UI, there are like a hundred checkboxes, and they're really easy to turn on and off. But I think, okay, that is true, Simon, but being the person who manages the evals team, there is a level of intensity that it adds to the platform team. If we're going to do image generation in Notion, all of a sudden the way that we do attachments, and the way that our LLM completion service, Cortex, talks and expects tokens back and is now getting images back, there's a lot of platform work that we do need to solidify. So sometimes it'll be in dev for a couple of weeks before it makes it to prod, just because we still have to make it robust, make it HIPAA compliant, ZDR compliant, figure out the right contracting with the vendor, whatever it is. And we need to eval it, because we want the team to still maintain what they build. That's the one thing: if we have a bunch of prototypes, it can't just be a small group of people who then maintain whatever's in the prototypes.
So we have invested a lot of people in evals and model behavior understanding teams, in what we call agent dev velocity. Your dev velocity building agents can be faster if we invest in that platform. And so we have a whole workstream dedicated to agent platform velocity, so that you can build your own eval and then maintain it once you ship it. So if a new model release comes out, teams maintain their own evals and we maintain the eval framework. Every team owns their own evals, and a lot of them we've integrated to opt into CI, or we run them nightly, and we have a custom agent that triggers a team to look at the major failures. That's really critical, because we have all these different services, but now a lot of it's on the same agent harness, so it's easier to maintain. It's just packaging of different agent harnesses, but new functionality of the agent. Let's say they deprecated Sonnet 4 or whatever it is and we need to auto-update. Did they already? Yeah, actually, they just... 3.5? 3.5. 3.7 just got deprecated. 3.7. Is it 5.2 or? No, it's not 5.2. It's 5.1. 5.2? Yeah. Because, no, 5.4 is 40% more expensive than 5.2. So if they deprecated 5.2, you would hear about it. Bye-bye 3.7. You would hear from me about that one, but that's another conversation and a half. I have a cheeky evals question for you. Have you noticed any secret degradation from any of the major model providers? Secret degradation? During the workday, when it's high traffic, it suddenly gets... Yeah. We definitely notice flakiness. We've definitely noticed, particularly for some providers, that things are slower during working hours. That's a latency argument, not a quality argument. No, I think the quality difference that's interesting, and it really comes down to quantization, is that companies that say they're selling the same model through different vendors, whether it be first party or Bedrock, Azure, etc., we do see different quality sometimes, and that's not necessarily what's advertised. Yeah. We got to the point where we shipped an eval across all the providers, and it was very obvious who was secretly quantizing. Yeah. But that's... Very embarrassing. We hire subprocessors to figure that out for us. We just want to understand where it's regressing or where it's optimized. And sometimes we're okay with regressions that optimize latency, if they're the appropriate regressions. Our job is to make sure we have the evals to understand the changes that are important to us. And even when we're partnering with labs on pre-releases of models, they'll send us multiple snapshots. And this is less about quantization and more just regressions: they have shipped models that were not the snapshots that we wanted, and they have changed the snapshots that they shipped based on the feedback that we give, because our feedback tends to be more enterprise-work focused and not coding-agent focused. And definitely those can be bummers. We know this wasn't the version you wanted, but we'll help you make it work. We always make it work, but that definitely happens. Yeah. Do you have failing evals that you're just hoping will succeed eventually when a good model comes out? Yeah. So I could talk about this for 60 minutes, so I will limit myself. I think it's a real issue when people say evals and it's just like, that's quality.
It's like saying testing. It's not just unit tests. So we have the equivalent of unit tests, regression tests. Those live in CI; those have to pass a certain percentage within some stochastic tolerance. Then we have, as you're building a product, evals of: these aren't passing right now, and this is launch quality. So we have a report card, and on these categories of user journeys we need to be at 80 or 90 percent to launch. And then we have what we call frontier or headroom evals, where we actively want to be at a 30 percent pass rate. And that's actually been an effort that we took on in partnership with Anthropic and OpenAI in the past maybe two or three months, because we hit a point where our evals were saturated, and we weren't able to give insightful feedback other than: it wasn't worse. And not only is that not helpful for our partners, it's not helpful for us to understand where the stream is going, going back to that analogy. And so we spent a lot of time thinking about what Notion's last exam looks like, right? Not just Humanity's Last Exam, but Notion's last exam. And there are a lot of dreams about what that would look like. I know we've talked a lot about benchmarking, swyx, but yeah, Notion's last exam was a big thing inside the company, and we have people staffed full time to it, exclusively. We have a data scientist, a model behavior engineer, and a full-time evals engineer just dedicated to the evals that we pass 30% of the time. Which you're hiring for? MBEs. I am hiring. What is an MBE? A model behavior engineer. Model behavior engineers started with the title data specialist before I joined, when they were working with Simon on Google Sheets. Simon just needed someone to look through Google Sheets and say, yes, no, this looks bad, this looks good. And so we hired people with diverse linguistics backgrounds. We had the linguistics PhD dropout and a Stanford comp lit new grad. And they're amazing. And they formed a new function, basically. And over time, we've built a whole team with a manager who's now reinventing what that role is with coding agents. So they used to do manual inspection. Now they're primarily building agents that can write evals for themselves, or LLM judges. There was a really funny day, I can send you the picture, where Simon, about a year and a half ago, was teaching them how to use GitHub on the whiteboard, like, okay, I think it would be so much faster if our data specialists learned how to use GitHub and learned how to commit these things into code. And that was then. Now coding has become a lot more accessible. But moving forward, it's this mix of data science, test, PM, and prompt engineer, because there's craft in understanding even what models can and can't do. How do we define that headroom? How do we define what a good journey is? Is this model better or not? Why is this failing? There's some qualitative work, but there's also a lot of instinct and taste to it. And that's not necessarily software engineering. And so we have very firm conviction, and have had for a number of years, that this is its own career path. And we have always welcomed the misfits, so to speak. So we really firmly believe that you don't need an engineering background to be the best at this job. And that's what's quite unique about this particular role.
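To make that tiering concrete, here is a minimal sketch of how three eval tiers with different target pass rates could be encoded. The `EvalSuite` type, the thresholds, and the runner are illustrative assumptions, not Notion's actual framework:

```typescript
// Hypothetical sketch of a three-tier eval setup: CI regression suites that
// must stay above a floor, launch report cards targeting 80-90%, and
// frontier/headroom suites that are *supposed* to sit near a 30% pass rate.
type EvalTier = "ci-regression" | "launch-report-card" | "frontier";

interface EvalSuite {
  name: string;
  tier: EvalTier;
  // Each case returns true on pass; runs are stochastic, so we track rates.
  cases: Array<() => Promise<boolean>>;
}

const TIER_POLICY: Record<EvalTier, { minPass: number; maxPass: number }> = {
  "ci-regression": { minPass: 0.95, maxPass: 1.0 }, // gates merges
  "launch-report-card": { minPass: 0.8, maxPass: 1.0 }, // gates launches
  frontier: { minPass: 0.1, maxPass: 0.5 }, // above ~50% it stops measuring headroom
};

async function runSuite(suite: EvalSuite): Promise<void> {
  const results = await Promise.all(suite.cases.map((c) => c()));
  const passRate = results.filter(Boolean).length / Math.max(results.length, 1);
  const { minPass, maxPass } = TIER_POLICY[suite.tier];

  if (passRate < minPass) {
    throw new Error(`${suite.name}: ${passRate.toFixed(2)} is below floor ${minPass}`);
  }
  if (passRate > maxPass) {
    // A "failure" mode unique to headroom evals: the suite has saturated.
    console.warn(`${suite.name} is saturated (${passRate.toFixed(2)}); write harder cases`);
  }
}
```

The interesting design choice is the frontier tier, where a pass rate that is too high is itself the alarm condition.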
This is something that I've been pretty excited about recently: we made an effort to treat the eval system as an agent harness. So if you think about it, you should be able to have an agent, end to end, download a data set, run an eval, iterate on a failure, debug, and then implement a fix. And ultimately you should be able to drive the full end-to-end process with a human observing the outer system. So yeah, we went pretty hard on that. That's worked extremely well so far. It's basically just turning it into a coding agent problem. Your coding agent, or just whatever harness, Codex? It should be totally general. Yeah, I think it would be a mistake to fixate on any particular coding agent. At the end of the day, it's just CLI tools. The same way that you would have a coding agent write the unit tests, you should have a coding agent write the evals. Yeah. But there's a lot of supervision in that still. We just don't believe that supervision has to come from software engineers, because a lot of it is UXR and whatever. And these are the people who also triage failures and tell us where we should be investing next. Yeah, I'm going to go ahead and ask a spiky question. Is there a date at which there are no software engineers at Notion? What does it mean to be a software engineer, exactly? I think the way things are going is we're on some continuum where, if you look back three years ago, humans were typing all the code. Then we had autocomplete, and you're typing a little less of the code. Then we have agents filling in lines. And now we're getting into agents doing longer-range tasks, where one can debug and implement a fix and then verify it works and even get your PR merged and deployed. I think we're just moving up the abstraction ladder, and the human role becomes more about observing and maintaining the outer system. There's a stream of agents flowing through PRs and merges: what's going off the rails, what do I need to approve, is there a learning or memory mechanism that works? So it's a hard engineering problem. There's a lot to do there. I think we're just moving up the stack. The same transition machine learning engineers have made. I haven't looked at a PR curve in a while. Yeah. You used to do this stuff manually, and now automated research can do it. Right. I think it depends on what you define as a software engineer. Yes. That's changing for sure. I think every software engineer at Notion this summer went through this shift. One of the engineering leads at the company called it: every software engineer is going through the identity crisis that every manager goes through, where all of a sudden they realize their ability to write code is less important than their ability to delegate and context-switch. And I think that is a transition out of being a software engineer. But there's a critical difference from being a manager, which is that it is actually very deeply technical. The problem with humans is that they're very fuzzy, and you can't treat a team of humans like a rigorous system where PRs flow through and can be in a blocked status, and then what happens when they're blocked. With a set of agents, you actually can do that. There's a lot of interesting technical rigor that goes into that. It's a technical design problem.
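Circling back to the eval-as-agent-harness idea: a minimal sketch of that outer loop, assuming a generic `agent` CLI and an `evals` runner (both made-up names, not Notion tooling), might look like this:

```typescript
// Hypothetical outer loop treating the eval system as an agent harness:
// a coding agent is asked to run an eval, inspect failures, and commit a
// fix, while a human only reviews the resulting diff at the end.
import { execSync } from "node:child_process";

function sh(cmd: string): string {
  return execSync(cmd, { encoding: "utf8" });
}

function evalFixLoop(suite: string, maxAttempts = 3): void {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    // Run the suite and collect failing case transcripts as JSON.
    const report = JSON.parse(sh(`evals run ${suite} --json`));
    const failures: string[] = report.failures ?? [];
    if (failures.length === 0) {
      console.log(`${suite}: green after ${attempt - 1} fix attempt(s)`);
      return;
    }
    // Hand a raw failure to the agent; it edits code/prompts in the repo.
    sh(`agent --prompt "Debug this eval failure and commit a fix: ${failures[0]}"`);
  }
  console.log(`${suite}: still failing after ${maxAttempts} attempts; escalate to a human`);
}

evalFixLoop("search-quality");
```

Because everything is plain CLI tools, the same loop works regardless of which coding agent sits inside it, which is the point made above about not fixating on any one agent.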
So ultimately, what is the design of the software factory that you're building? Yeah, we're trying a lot of different things. You want to design a system that requires as little human intervention as possible while still maintaining the invariants that you care about. There are a lot of different ideas there; I can talk about a few things I think are important. One thing I think is really important is having some kind of specification layer. You can just commit markdown files, and that works pretty well, but it's nice if it's Notion, man. I'm just saying, the natural home for specs is Notion. Yeah, it can be a database of pages. It needs to be something that is human readable and editable, and I think that's pretty key. Another really key component is the self-verification loop. Yes. You need a really good testing layer, basically, and that's a really deep problem in its own right. And then there's the workflow of what happens when there's a bug. How does it flow into the system? Is it a sub-agent working on it? How does it make a PR, and how does it get reviewed and merged? So there's the flow of the process. Yeah, cool. One thing we did work on before you guys came in was this demo of agents. Agent demo. So every time we do an episode, we try the product, right? I don't think there's ever been an episode where I haven't tried it. Try is a big word. Since day one, Latent Space has been on Notion, but this is the new thing. Yes. So this is for Kernel Labs, which is the space we're in. Next week we're opening applications for tenants, so there's a web form. We had this form before, and the workflow would be: I get an email, then I look at the person and think, should I spend time talking to this person? Then I respond, they respond back. So I built this. The name, it came up with on its own. How does it come up with its own name? Yeah, that's a pretty apt name. It's just a random name generator. Oh, okay. That's funny. The fact that it picked that is hilarious. Resilient Collector. I've never looked at the code for that, but I can guess it. I think it's like a Mad Libs. So yeah, I thought it was great. Although, if you use the AI to set itself up, it can update its own name. Okay. How did you create it? Did you just do plus, or did you? Okay. Yeah. I'd say: just check my inbox for applications. You can see people. So it created a database for me, which I have here. And I guess a database is like a Notion table, because everything is Notion. And then whenever an email comes in, like here, it just creates a new row for the person, and then it uses web search to enrich the profile. So it searches the web: this is who this person is, this is the day they want to move in, and it updates everything else. It's not AGI, but I don't want to do this work. It took me maybe 15 minutes to set up the whole thing, and I really liked that. Most of the information should live here. It's not like some other tool asking me to bring my stuff there; I would have probably already created a Notion thing. Most of our biggest use cases and gains are from removing that extra layer of human involvement in the process. One of our biggest use cases is bug triaging.
If someone posts something in Slack, you just have a custom agent that lives there, that has its own routing constitution of what team this belongs to, creates a task in your task database, and then posts in that Slack channel. That's one of the first things that we built internally, I think, and it's completely changed the way that Notion functions as a company. Most things don't fall through the cracks. We don't know what we don't know, but it's not replacing people, it's replacing processes. Yeah. And I'm curious how you think about composability of these things. The other one I was working on is like a lease filler. Whenever somebody signs up as a tenant, it fills out the lease for them. There should probably be some office manager agent that can handle the request, make the lease, and then give them Verkada access to the office and all of that. How do you think about that feature? Yeah. So there are two ways you can compose. One way is by using the data primitives. You have one agent that will write into the database, and there's another agent that's watching the database. So that's one way they can coordinate. That's a little bit more decoupled, and it works really well. Or you can couple them. The second way, and I think we actually only start releasing it next week, is in the settings for an agent, you can give it access to invoke any other agent. So you can have them talk directly. What's the limit on the number of recursions? You could probably just get an infinite loop. There's some kind of... yeah, there is actually a number somewhere, I believe. Someone's going to screw it up. You should try it and see. Everything's going to be paperclips. So yeah, yeah. But that's really useful. Yeah. I helped someone internally the other day. They had built over 30 custom agents for a go-to-market team, doing all kinds of different things. For example, researching and filling in information about a customer, or triaging customer feedback, literally over 30 of them. And he even made a database of all the agents. And he's like, okay, now I'm getting over 70 notifications per day of agents blocked on various things. And I was like, oh, okay, cool. The obvious thing to do there is to make a manager agent, right? That's going to be another abstraction layer in between you and your 30 agents. So we set up a manager agent that has access to invoke all the other agents, and it's watching and observing them. And it just creates a layer of abstraction. So instead of 70 notifications per day, it's like five. And then the manager agent can help debug and fix any problems with them. Is this a concept of an inbox or something? You're basically saying they can message each other. Yeah, they use the system of record, which is Notion. We didn't make any special concepts at all. They're just subscribed to the notifications that I would have gotten. They can write a task to a database that the other agent is listening to, or they can call a webhook to the agent, or they can just @-mention the agent. Okay. Yeah, this is something we're still working on. Generally, the way we do these things is you first make it possible, maybe in a sort of janky way.
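For concreteness, the decoupled pattern described here, one agent writing rows while another watches the database, can be approximated with the public Notion API. In this sketch, which assumes the v2-era `@notionhq/client` SDK, the queue database and its `Name`/`Status` properties are invented schema, not a prescribed setup:

```typescript
// Sketch of database-mediated agent composition using the public Notion SDK:
// a "producer" agent files rows, a "watcher" polls for unprocessed ones.
import { Client } from "@notionhq/client";

const notion = new Client({ auth: process.env.NOTION_TOKEN });
const QUEUE_DB = process.env.QUEUE_DB_ID!; // the shared database both agents see

// Producer side: file a new task row for the other agent to pick up.
async function fileTask(title: string) {
  await notion.pages.create({
    parent: { database_id: QUEUE_DB },
    properties: {
      Name: { title: [{ text: { content: title } }] },
      Status: { select: { name: "New" } },
    },
  });
}

// Watcher side: poll for "New" rows; a native agent trigger replaces this loop.
async function pollNewTasks() {
  const res = await notion.databases.query({
    database_id: QUEUE_DB,
    filter: { property: "Status", select: { equals: "New" } },
  });
  for (const page of res.results) {
    // ...hand the page to the consuming agent, then mark it as claimed.
    await notion.pages.update({
      page_id: page.id,
      properties: { Status: { select: { name: "In progress" } } },
    });
  }
}
```

The coupled alternative, one agent invoking another directly, trades this looseness for immediacy, which is why the recursion-limit question comes up at all.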
So the way I set them up is we created a new database of issues that the custom agents were experiencing, gave them all access to file an issue, and gave the manager access to read the issues. And that worked pretty well. Essentially, give them their own internal issue tracker, just for the agents. And then if that becomes a concept that seems useful, maybe we'll think about packaging it in, but generally we try to keep to composing the primitives if we can. Another example of this is we have no built-in memory concept. Memory is just pages and databases. So if you want to give an agent memory, just give it a page and give it edit access to that page, and a human can edit it too. And that works. That pattern works extremely well. And depending on the use case, it can be just a page, or it could be an entire database with sub-pages. So when I was setting this up, I connected my inbox, and it was like, do you want to use Gmail or Notion Mail? And I'm like, I don't want to use either. I just want you to do it. I'm curious how you think about Notion Mail, Notion Calendar, all of this kind of UI and UX in full-stack Notion, when at the same time the agent is abstracting them away from you. How do you spend the product calories, so to speak? Yeah. I think it's pretty important that you don't have to use Notion Mail to get the mail capability, so you can just connect Gmail or whatever you want to use. And we're thinking of the mail service as being really great to the extent that it's really agent-pilled. So maybe the mail app is just a prepackaged agent that helps you automate your inbox. Yeah. The auto-labeling is great. When we integrate with Gmail, for instance, we have a series of tools available via MCP or API to Gmail. When we integrate with Notion Mail, we have the Notion Mail engineering team build us the exact right tools that optimize latency, performance, and quality. They own that quality. There are product leads there; they're directly thinking about the user problems that happen in mail. So when we build integrations and connections, we tend to build natively first and then think about extending them, just because it's also easier to build natively first. That tends to be how we phase things. Talking about integrations, you prompted me, so I've got to ask: MCP, CLI, what's going on? What's the opinion? I'm definitely bullish and excited about CLIs. I think there are a few really cool things about CLIs. One really cool thing is that it's in a terminal environment, so it gets a bunch of extra power. For example, it can paginate and cursor through long outputs, and it has progressive disclosure inherently: you don't see all the tools at once. You just see the CLI wrapper, and you can use the help commands and read files. And then I think the most important thing that's super cool is that it's also inherently bootstrapped. So if there's an issue, the agent can debug and fix itself within the same environment where it uses the tool. Right. I saw a tweet this morning. Someone said, my agent didn't have a browser, so I asked it to make a little browser tool, and within a hundred lines of code it gave itself a little browser wrapping the Chromium API. That's pretty incredible. And then if there was a bug, it would just immediately try to fix it.
On the other hand, if you use the Chrome DevTools MCP, I've had this issue where sometimes the transport gets messed up. If it gets messed up, the agent has no way to fix itself. It no longer has a browser. It's just broken. I think that's pretty fundamental. But I would say a lot of the bad things about MCP can be fixed. The progressive disclosure can be fixed with the right harness. It obviously doesn't make sense to show it all the tools all the time. That's not really inherent to the MCP protocol; it's just how you wrap it and use it. There are many poorly implemented MCPs because we didn't know better. Yeah. I mean, it was just early. The obvious thing to start with is to just show it all the tools. Okay, now we have a hundred tools and the tool calling no longer works well, so let's get a way to filter and search the tools. I would say, broadly speaking, I'm really bullish on CLIs. I'm still bullish on MCPs in a certain environment. I think in particular, it's really great when you want a narrow, lightweight agent. I think there are definitely a lot of use cases where you don't want a full coding agent with a compute runtime, and also you want it to be more tightly permissioned. MCP inherently has a really strong permission model: all you can do is call the tools. A CLI is a little bit murkier. It can kind of access the API token. Are you properly re-encrypting the token so it can't exfiltrate it? It introduces a lot of new issues, which are real and hard to solve. And MCP is just the dumb, simple thing that works, and it's pretty good. I'll add two more perspectives, not about it working well for Notion, but about how Notion commits to both platforms. Notion is dead set on being the best system of record for where people do their enterprise work. So we will always support our MCP insofar as other people are using MCPs, right? Regardless of our perspective, we've put a lot of effort into our MCP, and we have a fantastic team that we're building to do more there. And the second thing I'll say, and lately I've been thinking a lot about this, is making sure there's an alignment of value and pricing with capability. Relying on a language model to execute deterministic tasks feels wasteful, and relying on a language model to interface with third-party providers seems wasteful for tasks that don't require it. And particularly because our custom agents use usage-based pricing, we think of pricing as the barrier to entry for use of our product, and we're quite committed to making sure it's not wasteful. Not just because it's a bad deal for our customers, but because it's also bad business. We want to have as many buyers as possible; there's an elasticity of demand. And so if we can have our agents properly execute code that calls a CLI deterministically, it's a one-time cost, right? Versus constantly having a language model integrate with an MCP over and over, paying those repeated token fees. And if it's happening outside the cache window, then you're paying for it over and over. And it's just unnecessary and less deterministic when it doesn't have to be. Yeah.
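A tiny sketch of why a CLI gives you progressive disclosure essentially for free: the model initially sees one entry point and pulls in per-command detail only on demand, instead of receiving a hundred tool schemas up front. The command names here are made up:

```typescript
// Minimal sketch of progressive disclosure in a CLI tool wrapper.
const COMMANDS: Record<string, { summary: string; detail: string }> = {
  search: {
    summary: "Search the workspace",
    detail: "usage: tool search <query> [--limit N]",
  },
  "page-create": {
    summary: "Create a page",
    detail: "usage: tool page-create <title> [--parent ID]",
  },
  // ...dozens more commands the model never sees unless it drills in
};

function main(argv: string[]) {
  const [cmd, ...rest] = argv;
  if (!cmd || cmd === "--help") {
    // Top level: one-line summaries only. This is the entire "tool list"
    // the model pays tokens for by default.
    for (const [name, c] of Object.entries(COMMANDS)) {
      console.log(`${name}\t${c.summary}`);
    }
    return;
  }
  if (rest[0] === "--help") {
    // Second level: full detail for exactly one command, on demand.
    console.log(COMMANDS[cmd]?.detail ?? `unknown command: ${cmd}`);
    return;
  }
  // ...dispatch to the actual implementation here
}

main(process.argv.slice(2));
```

Nothing stops an MCP harness from layering the same summary-then-detail scheme over its tool list, which is the "fixable with the right harness" point above.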
The open-endedness, I think, is the main thing. If I can write code to just call an API, I would never use an MCP. But then you need an MCP sometimes, when you know what to call but you don't want it to restart. Versus, I think the built-a-browser-from-scratch thing is great when you're doing it on your own, but if your customers were having your AI write a browser from scratch every time, and you had to pay the token cost of that, you'd be like, oh, the Chrome DevTools MCP is actually pretty great, just use that. I'm curious, how do you make that decision? Should it be a straight API call, very narrow? Should it be an MCP? Should it be super open-ended? Do you mean for when we ship Notion capabilities, or when we add capabilities? Yeah, or, I mean, you might have a capability where the only way to do it is an open-ended agent, like an agent with a coding sandbox. Yeah, in Notion AI those aren't exposed. Yeah, yeah, yeah. We also ship an MCP. Yeah. Internally, is that ever a discussion? We're not going to ship it because we're not able to lock it down, or are you happy to just ship? No, there are a lot of things where we choose not to use MCP because we want more high-touch quality. I think search, and agentic find, is the largest instance of that, where we have Slack and Linear and Jira search in Notion that is not necessarily using the search MCP functionality provided by those companies. And that's because it's quite critical, we think, to how our agent trajectories work for us to have a little bit more control over the functionality of the search journey. So it usually comes from quality. And there's a long tail of things, and that's why we built an MCP client, or an MCP server, excuse me, so that people can connect whatever they want. There is that long tail, right? But for search particularly, I would say that's the primary point. There are other connections as well where it's a little bit of secret sauce about when we're okay with MCP functionality and user-driven auth, and when we actually want to carry more of it ourselves. I think there's not really a conflict here. There are just different layers of the stack and different abstractions. If I were to map it out: MCP is a protocol for gaining access to tools, and it's an open protocol, so you can easily get a long tail of many things. So if you open up the tool settings, the trigger, that's not something that MCP can do. If you scroll down to the tools and access, you can add a connection. Yeah. MCP is a really great way to gain access to tools. It works really well. But you just looked at the triggers, and there's no trigger protocol, for example. So those are things that we're going to build ourselves. And then there are some integrations where we use MCP. So for example, I think Linear and GitHub are MCP, but Slack, Mail, and Calendar, those are ones we built in-house, and we spent a lot of time really fine-tuning all the tools, making them really good, and also building out the triggers. So there are just different layers of the stack, and MCP makes sense sometimes, and we just have to harness the right tool at the right time. I don't think there's an inherent strong conflict between these things. Do you have a canonical representation of these tools internally, where you've wrapped these things together, the MCP plus the custom-built?
Yeah. We have internal abstractions for what a tool is, what an agent is, what a completion call is. We even have internal abstractions for what a chat is, whether it comes from Teams or Slack.

That seems like the only way to build with AI. Everything's moving so quickly that you have to abstract it so you can swap things out.

Yeah, and there's always a dance. We've probably rebuilt our framework, like I said, five different times. It's always a dance of: okay, how does this new thing work? What should the abstraction be? What is OpenAI giving us, what is Anthropic giving us, and how do we wrap over it? I think we've been pretty successful with that. It's a matter of staying nimble and making sure you always have the simplest, dumbest abstraction that maps over the different things. So we have a tool abstraction and an integration abstraction, for example, and MCP is a type of integration.

This might be a big ask, but I'm going to try. You've said multiple times that you rebuilt a few times: five times, three times, I don't know the right number. Is there a brief history of what each rebuild was doing?

I can try. There's a doc you'd need to RAG over. Archaeology. Oh my gosh, there have been many versions, actually. Right, I'll hit the highlights. The first version, which we started building in late 2022, was actually a coding agent. We thought, instead of building tools, let's make everything JavaScript. We'll give it JavaScript APIs, it will write code, and that's how it invokes the tools. But at the time the models just sucked at writing code; they weren't good enough. So we moved to more of a tool-calling abstraction. Tool calling didn't exist yet, so we created a whole XML representation. A big learning from that version is that we were catering way too much to what made sense for Notion and Notion's data model, versus what the model wants. As an example, we created an XML format that could losslessly map to Notion blocks, with an easy transformation between them, and mutation operations to edit pages. But it sucked, because the model didn't know that format natively, it bloated the prompt, and it was just more inconvenient. So we said, okay, it has to be markdown; the models know markdown. We did a whole project around creating a Notion-flavored markdown, where the goal was that it has to be simple markdown at the core, we can add some enhancements, and it doesn't have to be a fully lossless conversion. That was a big one. Then we had a similar learning at the database layer. To query a database through the Notion API, there's a crazy JSON format. It's limiting, but it maps nicely to how we represent things internally. We scrapped all that and said, let's just make it SQLite. Everything's a SQLite database, you query it with a SQL query, and the models are super good at that. Give the models what they want. That was another one.
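A minimal sketch of what that SQLite direction could look like, assuming a TypeScript harness; the tool name, file path, and schema are hypothetical, not Notion's actual implementation:

```typescript
// Instead of a bespoke JSON filter DSL, expose one tool that accepts plain
// SQL and runs it against a read-only SQLite mirror of the workspace's
// databases. All names here (query_database, workspace.sqlite) are invented.
import Database from "better-sqlite3";

const db = new Database("workspace.sqlite", { readonly: true });

export const queryDatabaseTool = {
  name: "query_database",
  // The description is goal-driven prose, not few-shot examples:
  description:
    "Run a read-only SQLite query against the user's databases. " +
    "Tables mirror Notion databases; columns mirror their properties.",
  parameters: {
    type: "object",
    properties: { sql: { type: "string", description: "A SELECT statement." } },
    required: ["sql"],
  },
  execute({ sql }: { sql: string }): unknown[] {
    // Guard: models occasionally emit writes; reject anything but SELECT.
    if (!/^\s*select\b/i.test(sql)) throw new Error("read-only: SELECT only");
    return db.prepare(sql).all();
  },
};
```

The design point is that the model writes a language it already knows (SQL) rather than learning a format invented for the product.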
Give the models what they want: that was a big learning. Be savvy and really careful about what the model wants its environment to be, cater to that, and try very hard not to expose it to any unnecessary complexity from your system.

Notion's underlying database is Postgres, though, right? So is there any mismatch there?

That one was fortuitous, because we already had a big project going where, when you query a Notion database, it's actually querying a cluster of SQLite databases. That's something we'd been working on even before the agents came around.

You had a fantastic blog post about it. That's really good database engineering knowledge to get from you, because where else would we get it?

Yeah, it's a crazy engineering problem when you want millions or billions of databases, where some are tiny but some are very large, and they all have to be very fast. And the data isn't always that hierarchical; it's somewhat of a graph.

I do like that history, because it shows the evolution you went through and the work that went into it. Though that only gets us to about a year and a half ago.

I can continue if you're curious. We can keep going. There was tool calling, and then there was research mode, which wasn't fully agentic tool calling. Then we moved away from few-shot prompting entirely, to tool definitions. And now we're thinking about agents 2.0.

So no few-shot prompts ever?

Maybe not never, but yeah, that mostly went away. Interesting, right? These models just instruction-follow really well. I'd say there's been a general arc where you gradually strip everything away and it looks more like AGI. It started as one-shot: one prompt with few-shot examples. Then it became, okay, let's give it tools, but still with few-shot examples. Then it became, let's just give it a whole bunch of tools. One big shift I've been working on recently, which is about to ship, is what happens when you have a lot of tools. Progressive disclosure becomes really important (there's a sketch of the idea below). We hit a bottleneck where our agent worked really well, but it became pretty hard to add new tools, and we became worried about breaking the model.

I heard even just saying hello had gotten slow.

Yeah, it was really slow. You can see who the efficiency person is here. It was too many tokens, but it's also a quality issue: it meant any engineer could introduce a new tool for some niche feature, and it would nerf the overall model by causing it to call that tool too much, things like that. So we finally had enough reason to make our harness implement progressive disclosure in a nice way. That's a big shift.

You said earlier that everyone says reasoning models were the big shift. What else was there?

When we went away from few-shot examples to describing the goal of each tool, goal-driven definitions, basically moving from a DAG to a true system with feedback, that's when we could distribute tool ownership to the teams.
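As a rough illustration of the progressive-disclosure idea mentioned above, here is a hypothetical TypeScript sketch; the registry, tool names, and lexical matching all stand in for whatever the real harness does:

```typescript
// Instead of sending 100+ tool schemas on every request, the model starts
// with a small core set plus one meta-tool that pulls in more tool
// definitions only when the task calls for them. Names are invented.
interface ToolDef {
  name: string;
  description: string; // goal-driven prose the model can match against
}

const registry: ToolDef[] = [
  { name: "query_database", description: "Run read-only SQL over databases." },
  { name: "send_slack_message", description: "Post a message to a channel." },
  // ...100+ more
];

export const searchTools: ToolDef & { execute(q: string): ToolDef[] } = {
  name: "search_tools",
  description:
    "Find additional tools by describing the goal. Returns tool definitions " +
    "that can then be called directly.",
  execute(query: string): ToolDef[] {
    const terms = query.toLowerCase().split(/\s+/);
    // Naive lexical match stands in for whatever ranking the harness uses.
    return registry
      .filter((t) => terms.some((w) => t.description.toLowerCase().includes(w)))
      .slice(0, 5); // keep the context small: top handful only
  },
};
```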
Distributing ownership was much better, because when it was all few-shot examples, everyone was literally editing one string, and things would compete on ordering. There were all these papers about how not all context is created equal: the higher up something sits in your examples, the more the model listens. We were fighting hard over the order and selection of the few-shot examples, and that had to be a center of excellence. It didn't scale with the number of people the company needed. It was really just five or six people who were allowed to even touch that, or rather who had to approve changes, in our code base. Now, with the right eval setup, we can distribute it so everyone owns their tool and their tool definition.

Sometimes crazy things happen, like two teams writing two tools with the same name and the agent crashing. There are real issues. Believe it or not, Anthropic couldn't take it: Sonnet couldn't handle two tools with the same name, while OpenAI's GPT-5.2 could figure it out. That was an interesting one we learned by accident, through a SEV.

I mean, the underlying representation is a dict, right? So the name is effectively a key.

Exactly. So that was a big shift for the company's velocity. Not immediately, because the AI team, the center-of-excellence team that owned that one file of few-shot prompts, had to become a platform team overnight, and that wasn't natural. But in terms of the velocity of how we contribute to the agent (beyond coding tools, which are obviously a big velocity lever), being able to distribute tools, rather than all collaborating on one very precious system-prompt string, is truly the biggest lever in how we've scaled. We're just fighting to keep the prompt as short as possible now. And in the latest version of the agent (it's not in custom agents yet, but it will be in a week or two) there are now over 100 tools, just for all the Notion-specific stuff. So we're able to go really deep.

Would you list those tools publicly, or is that IP?

No, it's totally public. You can just ask the agent and it will tell you.

We're going to post a benchmark of it, then.

We don't think our system prompt is our secret sauce. We don't try to hide the tools at all.

I think that's actually important as an operator. As a power user, I want to know: oh, it can do this. Good.

One phrase we say internally is that we teach to the top of the class. The custom agent is like a power tool: we try to make it as easy as possible to set up, but we want it to be deep and sophisticated. A huge part of that is that the operator needs to be able to interrogate how the system works. What are the tools? How do they work? How should I prompt it to use the tools the right way? I'd actually say we don't try to make it as easy as possible to use, because the more we do that, the more we abstract away the interpretability Simon is talking about, and that nerfs the agent's capability.
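The duplicate-tool-name SEV above suggests a cheap invariant check in the harness; this is a hypothetical sketch, not Notion's actual code:

```typescript
// Because providers treat tool names as unique keys, reject collisions at
// registration time rather than discovering them when a model errors out
// in production. Types and names are invented for illustration.
interface RegisteredTool {
  name: string;
  owner: string; // which team owns this definition
}

export function buildToolRegistry(
  tools: RegisteredTool[],
): Map<string, RegisteredTool> {
  const registry = new Map<string, RegisteredTool>();
  for (const tool of tools) {
    const existing = registry.get(tool.name);
    if (existing) {
      // Fail fast in CI, with enough context to route the fix to a team.
      throw new Error(
        `duplicate tool name "${tool.name}" registered by both ` +
          `${existing.owner} and ${tool.owner}`,
      );
    }
    registry.set(tool.name, tool);
  }
  return registry;
}
```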
So a huge turning point (I can think of the week and a half when we all came together on this while building custom agents) was aligning on the fact that we're not trying to build for everyone here. We're not trying to build the user experience that anyone can figure out, because the more we do that, the more we diminish its capabilities. Everyone aligning on that, over a couple of Slack messages, actually made us all work faster again, because we were more centered on who we were building for.

What does the meta-prompt generator look like? I looked at the system prompt it generates, for example, and it uses emojis. That's not an obvious thing to do.

Wait, did you just ask it for its system prompt? Ah, this is how it generates prompts, the prompts that generate prompts. We call it the setup gym. So this is actually just the agent. One thing we did with custom agents that I really like is that it can set itself up. We not only give it access to the tools it will use, sending your emails or whatever, but it also has tools to set itself up and to debug itself (sketched below). When you ask it to write a system prompt, it's your agent itself doing that. So the style is just the model's preference; we're not injecting much into the model beyond what we think makes a good custom agent. It's really nice too, because if it fails, you can ask it why it failed and then say, okay, update your instructions so it doesn't fail again. Obviously we should build self-healing as a product; that's next on our roadmap. But it already creates a nice system. We essentially give it a development guide: here's how to make a custom agent, here's how to help the user test it end to end and gain confidence that it works.

Yeah, the fixing thing worked for me. It wasn't automatic, but I messed something up and it worked like a fix button.

It's actually an interesting permission problem. The thing about custom agents is that by default they have no permission to do anything, and you have to explicitly grant every permission. That's your trust so it can work in the background: you can know it can read your email but not send email. But if you let it fix itself, aren't you breaking that contract?

It's not allowed to edit its own permissions. In the current product you can click a button to fix, but then you're entering a sort of admin mode: you're in a synchronous chat, you can see what it's doing, and it confirms before it changes anything.

The thing I like that most products don't do is that the editing chat is the same as the using chat. You can message the agent both to edit it and to use it, versus a lot of other products. I think that's really key.

A lot of designers will be so happy you said that. We called this "flippy". If you close that and open settings, you can see why: we call it flippy because we started with the settings as the main page, and then you could test the agent from there. The AGI-pilled way to think about it is: it's just the agent. Everything is the agent. It can set itself up, it can test itself, and it can run the workflow you want to run.
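A sketch of the self-setup idea under the permission constraint described above: the agent gets meta-tools to edit its own instructions, but permission grants are deliberately not writable by the agent itself. The shapes and names here are assumptions, not Notion's API:

```typescript
// Hypothetical meta-tools for an agent that configures itself.
interface CustomAgentConfig {
  instructions: string;
  permissions: ReadonlySet<string>; // e.g. "email:read", user-granted only
}

export function makeSelfSetupTools(config: CustomAgentConfig) {
  return [
    {
      name: "update_instructions",
      description:
        "Rewrite this agent's own system instructions, e.g. after a failed " +
        "run, so the same failure does not recur.",
      execute({ instructions }: { instructions: string }) {
        config.instructions = instructions; // surfaced in the settings panel
        return { ok: true };
      },
    },
    {
      name: "update_permissions",
      description: "Request a permission change (requires user confirmation).",
      execute({ grant }: { grant: string }) {
        // The agent may *request* but never self-grant: the trust boundary
        // stays with the human, who confirms in the synchronous chat.
        return { ok: false, pendingUserConfirmation: grant };
      },
    },
  ];
}
```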
So we flipped it. The main view is the chat, and the settings are a side panel previewing the changes it's making, so you can introspect on them, or make changes manually if you'd like. But we wanted to design the experience so that from the get-go you never have to touch any of the settings manually; you can just talk to it. The inside baseball is that this was probably the launch-blocking part, especially because we had a lot of early adopters used to the old way. That's the cost of building in public: changing how people think about setting up custom agents, when they already had a flow, was difficult in and of itself.

It was painful, because we ended up delaying the launch by a few weeks, definitely a month or so. But the whole team was super enthusiastic, because it was just so much better. It was like, oh yeah, obviously you have to chat with it. Everyone was super bullish on that. So it was painful for a second, but then everyone came around.

And back to organization design, which I probably care about more than Simon: the people who built this were three engineers from three different teams, because we said, we need to launch this and we need to fix this. We've built a company where we can just put people on a problem, no one complains, the managers don't complain, and we were able to unblock it and ship.

Being in a failure chat and asking it to just fix itself is amazing, versus having to copy something out and paste it into the settings.

There's an interesting trade-off we're always exploring, which is that we want to be a business, enterprise-safe agent, where you can delegate something and trust that it's going to work. But we also want some of that bootstrapping power you feel when a coding agent is building a browser for itself. There's something there that's really important. So we're trying to navigate that trade-off and give you both.

Right now it's free, and it's amazing; I'm worried about when I have to start paying. You have Notion credits as payment for this, which is separate from the usual tokens the model generates. How do you design the pricing? Value-based pricing based on the task, things like that?

The credits and payment structures are associated with token usage. The reason we couldn't make it a pure pass-through of token throughput is that our costs aren't always priced that way. Our fine-tuned open-source models are served on GPUs. Web search is priced differently. If we were to host sandboxes, those would be priced differently. We had to think of an abstraction above tokens. It's also not just tokens; it's the token, model, and serving-tier trade-off, because we can have priority-tier processing, we can have asynchronous processing, and the cache rate can differ depending on who uses it and when. From the get-go we committed to making sure customers were getting a fair deal: not necessarily that we were making a ton of money off it, but that customers were paying for what was reasonable. That's the foundation we started from. We also sell to enterprises, so we sell credit packs, and you get discounts if you're an enterprise buying a certain volume of credit packs.
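A back-of-the-envelope sketch of a credits abstraction over heterogeneous costs, as described above: token costs vary by model and serving tier, and non-token costs (web search, GPU-served fine-tunes) need a home too. Every rate and name below is made up for illustration:

```typescript
type ServingTier = "priority" | "standard" | "batch";

const tokenCreditRates: Record<string, Record<ServingTier, number>> = {
  // credits per 1K tokens, per model, per tier (hypothetical numbers)
  "frontier-large": { priority: 12, standard: 8, batch: 4 },
  "frontier-small": { priority: 1.5, standard: 1, batch: 0.5 },
  "oss-finetune": { priority: 2, standard: 2, batch: 2 }, // GPU-hour derived
};

interface UsageEvent {
  kind: "tokens" | "web_search";
  model?: string;
  tier?: ServingTier;
  tokens?: number;
  searches?: number;
}

export function creditsFor(event: UsageEvent): number {
  switch (event.kind) {
    case "tokens":
      // Same token count, different credits depending on model and tier.
      return (
        (tokenCreditRates[event.model!][event.tier!] * event.tokens!) / 1000
      );
    case "web_search":
      return event.searches! * 0.3; // flat per-call rate, again hypothetical
  }
}
```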
The credit-pack structure also helped the sales motion work a little more easily. So that's the answer on the abstraction from credits to dollars. Now, was the question how we decide what to price?

Yeah. All tokens are not made equal, but you mostly get charged as if they were. You can ask Codex to create a dumb tool for a StarCraft II LAN party so people can find a game, or people can use it to build features at billion-dollar companies, but the token price is the same. Same for you: I can ask it to update my favorite recipes doc, or I could ask it to respond to an email from an investor, and the value is very different. You could charge more, but you're not necessarily doing it. I'm curious if there was any discussion.

I think that's not where the market is right now, number one. The second reason we're not doing it is that it ended up being complicated to figure out what counts as complicated. At first we thought, let's just charge per agent run, and we went through all the different versions, which ultimately brought us back to a lot of complexity that mapped directly to token throughput. So it's also just simpler; it's quite difficult to build those pricing systems.

I actually think one of the biggest reasons we have usage-based pricing for this capability is that we've had our core agent for a while, with a model picker, and there was certain functionality where we had margins to maintain. If we'd wanted to ship this new functionality under that model, we couldn't have afforded it; it would bankrupt the company. For instance, database auto-fill will soon be agentic, and that will be associated with usage-based pricing, because if every single auto-fill action were an agent running on Opus on every single database cell, it would be billions of dollars. So we had to find an outlet for the customers who wanted to do more, and who wanted to pay more, without applying that cost to the lower end of the curve. Also, not all knowledge work is equal. A lot of the agent workflows here saturate well below frontier model capability; you don't need a complicated model for them. And charging based on token usage means we don't just decide for you that your email client should be dumb. If you want Opus to auto-triage all of your emails, we'll let you, though we'll give you nudges in the product to rethink whether that's the right choice. You'd be surprised in user interviews; people would say, oh, I didn't know. So now we have a little hover that tells you whether a model is expensive.

Yeah, and it's also slower. The interesting thing is that people don't care about speed with custom agents. The incentive of Haiku being faster doesn't matter when the run is asynchronous. So we only want to provide the extra benefit people actually want, and the best way to do that is to let the incentive be their own money.

It must be confusing for people who aren't familiar. Like, why is there no 5.3? You open the picker and wonder whether something's missing from your menu. Not their fault; that's just the world we live in now. Version numbers randomly jump by point-two, and Claude leans on auto heavily too.
I think what's actually been hard for us is convincing people that auto is not just our cheapest, dumbest model, but actually the model that's best for the task you want to do. And a lot of our job is figuring out auto.

This is the agent lab. Every agent lab has an auto.

Exactly, and that's the job. If you think about it: I come from Robinhood. You could spend a lot of time keeping up with the markets, or you could have auto-investing, an index fund, robo-advisors. At a certain point we can be robo-advisors too, and we have a lot of people figuring out which model is best for which task. Right now we're not using auto as a margin-maker; we're just using it to reduce stress. It's not Opus, that's for sure, because the majority of the tasks you're doing aren't Opus-level intelligence.

The thing I would add is that, unlike a lab, we aren't incentivized for you to use as many tokens as possible. We're genuinely interested in giving you the right tool for the job. A lot of the time the right tool is actually just writing code and not using an agent at all. So something we're investing in a lot is: imagine your agent can automate itself out of a job. We would love for that to be true.

I feel strongly about this, because that's not the skew the frontier labs give you. They're getting more and more capable and more and more expensive, which is fantastic for the use cases where people want to do really complicated things in Notion. What's difficult is the market segment that's currently no-man's-land: where reasoning models were six months ago, which the nanos and Haikus of the world haven't caught up to. We're paying more for extra capability we didn't necessarily need, and so are our customers, and with this few players, the labs aren't incentivized to meet the market everywhere. They just need to be the cheapest. They don't need to hit the value the customer wants, because if no one's cheaper than them, they're the cheapest, and that's good enough. So we're doing a lot to make sure we have the optionality to switch between models, and to invest in open source, because the open-source models are getting to where reasoning models were three or four months ago, and that's what's filling the gap right now. You'll see we offer MiniMax, and we're collaborating a lot with different open-source labs, thinking about Notion's Last Exam and how they can do better on these kinds of tasks, so we can offer them along that intelligence-to-price-to-latency trade-off. In that triangle of intelligence, price, and latency, users get to choose where they sit, but right now the whole triangle isn't filled with models. Everyone clusters: really capable and really fast but really expensive, or cheap but not that much cheaper. No one's really in the middle. We want to make sure that triangle is filled, we want to offer the models that fill it, and we want to guide users to understand what they need.
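A toy "auto" router in the spirit described above: pick the cheapest model whose capability clears the task's requirement, ignoring latency when the run is asynchronous. The models, scores, and thresholds are all invented for the sketch:

```typescript
interface ModelOption {
  name: string;
  capability: number; // 0..1, e.g. from internal evals
  creditsPerRun: number;
  p50LatencyMs: number;
}

const options: ModelOption[] = [
  { name: "haiku-class", capability: 0.55, creditsPerRun: 1, p50LatencyMs: 800 },
  { name: "oss-mid", capability: 0.75, creditsPerRun: 3, p50LatencyMs: 2500 },
  { name: "opus-class", capability: 0.95, creditsPerRun: 12, p50LatencyMs: 6000 },
];

export function pickModel(
  requiredCapability: number,
  isAsync: boolean,
): ModelOption {
  const viable = options
    .filter((m) => m.capability >= requiredCapability)
    // Async runs don't pay for speed; sync runs break ties toward latency.
    .sort(
      (a, b) =>
        a.creditsPerRun - b.creditsPerRun ||
        (isAsync ? 0 : a.p50LatencyMs - b.p50LatencyMs),
    );
  // Fall back to the most capable model if nothing clears the bar.
  return viable[0] ?? options[options.length - 1];
}
```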
Which one? All I'm hearing is that someday you're going to train your own model. You have lots of tokens.

What do you mean by train your own model? Train a whole foundation model? We have no money to train a foundation model.

You can go raise it. That's your job, Simon.

No, I don't think that needs to be our core competency. That's usually the thought process that leads to trouble: everyone else is doing it, we'll take a crack. To the extent that we do anything like training, the area I'm most excited about is less one big model for all users and more, as it becomes feasible, fine-tunes that really know your company's context: the people who work there, what's going on. That's pretty interesting, because a model that really knows your company would be a huge quality uplift.

We actually have some enterprise customers that ask about this, along with bring-your-own-key. They tend to be quite large institutions thinking about how to let people bring their own models, models that for all these reasons really understand the enterprise. But those models have to function with an understanding of how to call our tools, and that's where a more public system prompt is, again, beneficial to Notion: we want all models to plug into Notion as well as they can. That said, there are certain aspects of Notion where we do fine-tuning and reinforcement fine-tuning on our own capabilities, but that's not trained on user data; you don't need that much data in the first place. That's where, when we have a data scientist and a model-behavior engineer who really understand where the capability gap is, we invest.

I personally burned a lot of time trying to train models, and it's tempting, right?

So tempting. I was doing a lot of different things.

And I was the budget person. I shut it down when I heard what was happening.

A funny thing, an arc that looped back on itself: back when I was doing tons of training work, any training run takes a long time, and you end up operating around the clock. Before you go to sleep, it becomes very important that all your compute-intensive experiments are started. When I stopped training, that went away. But coding agents have totally brought it back. Now, every night before I go to bed, I'm asking, did I start enough agents?

So you have to try polyphasic sleep, so you can wake up every two or three hours.

I haven't gone there yet. My goal these days is just that before I go to bed, the agents are running and I'm confident they won't be done by the time I wake up. A real eight hours.

I won't say which frontier coding lab, but there was a point where he had outgrown the thread length and context length their coding agent provided, and he DM'd them saying, hey, I need more. And our account rep DM'd me directly, asking, is Simon trying to prove string theory? What is he doing?

Yeah, I had a single coding agent thread going for, I think, 17 days, pretty much continuously.

Don't they just compact?

Yes. It was actually just a bug, a harness bug.
It had done compaction like a hundred times.

The other thing about fine-tuning, which I think you and I are aligned on, is that our tools change really frequently, and right now we spend a lot of time rethinking and building tools for capability. We don't have legal expertise or coding expertise to teach, so if we were to fine-tune a model, it would either be expertise about a specific enterprise (and we have zero-data-retention offerings for those enterprises, so we'd have to really rethink how an enterprise would opt into that), or it would be fine-tuning for better capability at navigating our tools. And that doesn't match the velocity with which we create new tools. It would actually slow us down to have a model fine-tuned on our tools, because we'd have to retrain it and cut a new model every time we changed them, and that's not how we're set up right now. I suppose we could fine-tune a model to search for tools, but given the time it takes to do that, ship it, and maintain the right system, you're making a bet on the capability gap versus the time it takes you to build the fine-tune, and that trade hasn't made sense for us yet.

Yeah, it's just the wrong trade-off. We literally change our tools every single day, and if we notice an issue, we fix the problem. A way to think about it that I find pretty fruitful: don't focus too much on training; that's an implementation detail. What's the outer loop? The outer loop is that you have a model in some harness or system where it's interacting with your system, and that needs to work. If there's a problem, the way to solve it isn't necessarily to train a model; maybe there's just a bug in one of the tools. Actually, 99% of the time it's a bug in one of the tools, so fix the bug. And the fruitful outer-loop question is how you improve your velocity and robustness at making really good tools, making a good harness, and verifying that it works.

The one place we do invest more in model training now is retrieval, because we're at a point in our business, on enterprise and AI-enabled plans, where the majority of search traffic is coming from agents, not humans. The queries hitting our Elasticsearch and vector indices aren't coming from humans; they're structured differently, and what's returned has different requirements. Positional ranking matters less, but top-k recall matters more.

Isn't top-k a form of position?

Of course it is. But when you're training on click-through rate, getting position one through six right is the whole game. For agents, the document just needs to be in the top 100; the slope is different. It's a different optimization objective for a retrieval model. Similarly, which snippet you include matters more or less. So we're rethinking a lot of that functionality to work with how agents like to write queries and how they want to receive information.
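A small sketch of the metric shift being described: for human search you often optimize a position-sensitive score, while for agentic retrieval a flat recall@k ("did the right documents land anywhere in the top k?") is closer to what the downstream agent needs. Purely illustrative:

```typescript
// Position-sensitive: DCG-style, rewards relevant docs near the top.
export function dcgAtK(
  ranked: string[],
  relevant: Set<string>,
  k: number,
): number {
  return ranked
    .slice(0, k)
    .reduce(
      (sum, id, i) => sum + (relevant.has(id) ? 1 / Math.log2(i + 2) : 0),
      0,
    );
}

// Position-insensitive: recall@k only asks whether the doc made the cut,
// since the agent will read (or re-rank) everything in its context anyway.
export function recallAtK(
  ranked: string[],
  relevant: Set<string>,
  k: number,
): number {
  if (relevant.size === 0) return 0;
  const hits = ranked.slice(0, k).filter((id) => relevant.has(id)).length;
  return hits / relevant.size;
}
```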
So we're doing another reinvestment in rethinking not only how agents search versus how humans search, but also what we index. How do you index the setup generator for the Notion agent? It breaks our block model entirely, where all blocks are nested in each other. Same with meeting notes. So we're hiring ranking engineers and model-training engineers, but it's primarily for ranking.

Does ranking map to RecSys for you? Recommendation systems?

Yes.

Same thing. I'm trying to promote RecSys more in general, because it's weirdly unpopular. I don't know why.

The other thing, which I was just discussing with a peer, is how much ranking matters versus being able to do parallel exhaustive queries.

They're both important?

They're both important, but they're two tools toward the same user outcome, or the same agent outcome, and that's something we're also rethinking a lot. We just ran an experiment on Notion retrieval, and at this point vector embeddings matter less and less.

Did you see that? Notion just switched to dark mode. We've been going so long it became dark mode. We're working the night shift for you. See any bugs?

I worked on this parallel search thing where you fan out to eight different queries, and you actually need to use the model to work on query diversity so you cover the best possible space (there's a sketch of this fan-out pattern below). So the people working on ranking and retrieval are the same people working on query generation; it's all one journey. We call it agentic find. And we're realizing, for instance, that it's less about selection: we don't spend a lot of time optimizing which vector embedding we use anymore. There was a period for that, but it's not the right level of optimization now.

Okay, we've gone long. I have to ask about Notion meeting notes, and then we can call it there. You have a lot of comments on it; I don't know where you want to start. Is it the audio side? The summarization?

No, just anything interesting, technically. I think you had some bookmarks.

I call these checkmarks along the way: when the guest says something I want to return to later, I checkmark it and we get back to it. Meeting notes was one of those things where at first we were nervous we'd have to teach people a different way to work, and that there would be a lot of user friction. But they're one of our biggest growth levers, quite strong in both breadth of adoption and retention, so we've invested more and more. What's really powerful about them is, again, that Notion is the system of record of where and how you work. The way I use meeting notes: every one-on-one I have is a meeting note. When I write my performance self-review, I primarily look at all my conversations with my manager and write up what I did this year, because if I didn't talk about it in my one-on-one with my manager, it probably wasn't relevant to my review. It also adds a ton of signal on prioritization, which is really helpful for a good system of record, and really helpful for our agent.
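Returning to the fan-out pattern mentioned above, here is a sketch: ask a model for several deliberately diverse reformulations of the query, run them all in parallel, then merge and dedupe the union before re-ranking. The two helper functions are assumed to exist elsewhere (hypothetical signatures):

```typescript
declare function generateDiverseQueries(
  query: string,
  n: number,
): Promise<string[]>;
declare function searchIndex(
  query: string,
  k: number,
): Promise<{ id: string; score: number }[]>;

export async function agenticFind(query: string, fanout = 8, k = 100) {
  const queries = await generateDiverseQueries(query, fanout);
  // Exhaustive parallel search: recall across the union matters more than
  // the position of any one hit (see the recall@k sketch earlier).
  const resultSets = await Promise.all(queries.map((q) => searchIndex(q, k)));

  // Merge by document id, keeping each document's best score across queries.
  const merged = new Map<string, number>();
  for (const results of resultSets) {
    for (const { id, score } of results) {
      merged.set(id, Math.max(merged.get(id) ?? -Infinity, score));
    }
  }
  return [...merged.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, k)
    .map(([id]) => id);
}
```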
It's also forced a lot of scaling for search and for the agent; transcripts are an explosion of content. A lot of how we do compaction, and how transcripts pass into context, was triggered by meeting notes. It's been a good impetus to think about longer-form content. And it's been one of the most powerful signals for our agent, which is surprising, right? You're capturing a whole new kind of data. Users are creating their own data flywheel: it serves me to put all my stuff in Notion, because it already has my other stuff.

Totally. The way our teams run right now, there's a custom agent that does a pre-read before standup. It looks through all of Slack and GitHub, creates a summary in a meeting note, and tells everyone to do the pre-read. Then we press play, have the meeting, and talk through the pre-read and what needs to happen next. Then another custom agent, integrated with our calendar and triggers, files tasks for today or tomorrow based on what we spoke about, and sends off the Slack messages we decided in the meeting need to be follow-ups. Our meetings are hands-off-keyboard, and we're focused on the root of the problem, not the bookkeeping around it.

One thing the meetings team shipped recently that's been blowing my mind: when it makes the summary, it will at-mention the people referenced in it. So I now get notifications whenever someone talks about me in a meeting. It's like, oh, Simon is working on this. Okay, cool, I'm going to go talk to them about that. It's actually amazing.

What if there are two Simons?

It's powered by the agent. If you look at its thinking while it's doing the summarization, it's figuring out the most probable Simon. We also have a people-to-people similarity cache and things like that, plus the attendee list, and we generate a profile for each person and use that. Of course it can get it wrong, but the goal is for it not to.

Meeting notes is just the agent primitive packaged on top of the transcription primitive, plus a vertical team. It's probably one of the only teams at Notion that's a completely vertical team around quality and product and UX design. It's still a tiger team, with a fantastic manager, Zach Trotar, who joined recently from Embra.

Yeah, I chatted with him back at Embra.

He's managing that team now and thinking about it as data capture: that's what meetings are, data capture. Once you have the framing that meeting notes are valuable as a data-capture problem, you work inward from there. The summarization used to not be agentic; now it is, because it does things like figuring out who the right Simon is. One day you could have a custom agent directly integrated into it that knows which task database the meeting refers to, and updates the tasks as you're having the meeting, things like that. There's a lot of that do-your-work-in-meetings experience that we want to invest in making more seamless.
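A rough sketch of the "which Simon?" disambiguation described above, combining the signals mentioned: meeting attendance, a person-to-person similarity cache, and per-person profiles. The weights, shapes, and matching logic are all invented:

```typescript
interface Person {
  id: string;
  name: string;
  profile: string; // generated summary of what this person works on
}

export function resolveMention(
  mentionedName: string,
  candidates: Person[],
  attendeeIds: Set<string>,
  similarityToSpeaker: Map<string, number>, // from the similarity cache
  transcriptTopic: string,
): Person | undefined {
  const scored = candidates
    .filter(
      (p) =>
        p.name.split(" ")[0].toLowerCase() === mentionedName.toLowerCase(),
    )
    .map((p) => {
      let score = 0;
      if (attendeeIds.has(p.id)) score += 2; // attendees are strong priors
      score += similarityToSpeaker.get(p.id) ?? 0; // works-with-speaker signal
      // Crude profile/topic overlap stands in for a model-based judgment.
      if (p.profile.toLowerCase().includes(transcriptTopic.toLowerCase()))
        score += 1;
      return { person: p, score };
    })
    .sort((a, b) => b.score - a.score);
  return scored[0]?.person; // undefined if no candidate matches the name
}
```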
OpenAI is doing hardware. Would you ever ship one of these, meeting notes in person?

Probably not, though I'm excited about the product category in general. I think it's a mechanism, and one of those devices needs to work really well with Notion; we would partner with whoever is building one. Maybe they'd get bought by Amazon, I don't know. There are some wild companies doing really cool things that come through our partnerships team, and I like to sit in on the wearables demos because I think they're pretty cool. All of them, and not just for Notion: you can imagine the ones that talk to you being able to do search and build context. If you're walking into a conference, they could look at your CRM and do things like that, and they could use the Notion agent to do it. So we're at the very beginning of those partnerships. What's unique about that kind of technology is that it cuts against what I said about custom agents: the simpler it is, the harder it is to have advanced controls over its capabilities. So it would be a great investment for data capture, but it's a different slice of the problem, and it's going to be deeply personal. Your company is not going to force you to wear a wristband.

It's good to hear that from you.

The CEO is going to force everyone to wear a wristband. The slice of the problem we care about is: can the company have all the context of what everyone said at every single meeting, and then use that to derive value?

That reminds me of something. You once very firmly reminded me that our job is not to make the best harness for agentic work; our job is to be the best place where people collaborate. Likewise, our job isn't to build the best wearable to capture meeting notes; our job is to build the best place where meeting notes live.

So basically you're saying everyone else can just pipe into you, and then it's fine.

Right. That's a reasonable thing.

All I will say is that there are people walking around with Notion tattoos; they'd want a Notion anything. So, I don't know, do a limited run.

We have such understated swag, with so few Notion logos on it, that the idea of people having Notion tattoos is pretty antithetical to our design principles. It's pretty funny.

Do you have one?

No, I don't have a Notion tattoo, but I've seen them.

Cool. Thank you so much; this was such a great deep dive. The chemistry between you two is amazing.

We work together a lot. Different jobs, but we work closely.

Thank you. Thank you.